[music] 00:20 Joshua Preston: You're listening to Tech Unbound with the GVU Center at Georgia Tech. On this show, we explore human-focused technology and computing research at the institute. The work that takes place here today could be the technology you're using tomorrow. I'm your host, Joshua Preston, and I'll be talking with experts in our research community who will share with us cutting-edge innovations, how they impact society, and how the researchers pulled it off. The GVU Center is keeping the human in the loop. [music] 01:06 JP: Researchers at Georgia Tech are seeking to improve artificial intelligence literacy and give people opportunities to engage directly with AI systems in order to understand the potential and capabilities of the technology. But really, how does artificial intelligence work? We don't usually ask that of our other technology; we just expect our laptops to turn on, our apps to be helpful, and the internet to be fast. Artificial intelligence systems are, in essence, thinking machines, and they perform different types of tasks differently based on the scenario. So even though AI-assisted tech is increasingly common, it's often hard to spot where the AI shows up in people's daily use of their devices and online services. A Georgia Tech group called the Expressive Machinery Lab is tackling a specific challenge in the AI landscape. They've developed public exhibitions where the AI agents are front and center, and people are able to create with them. These AI partners have included a dancer, a visual storyteller, a music maker, and even a comedic improv performer. 02:10 JP: I'm joined today by Duri Long, one of the researchers in the group, who's here to talk about this research and maybe even share some wild anecdotes from her time as a showrunner and curator on some of these AI-powered exhibits. Thanks for coming on the show, Duri. 02:25 Duri Long: Yeah, thanks so much for having me. 
02:27 JP: So tell us about yourself and more about what is admittedly a unique approach to AI work. 02:35 DL: Yeah, so I'm a PhD student here in the Human Centered Computing program, working in the Expressive Machinery Lab like you said with Brian Magerko, my advisor, and we do a lot of work focused on how AI can collaborate with people and create together with people. And recently, one of the problems that I've become really interested in is how we can teach people a little bit more about AI through these collaborative creative experiences. So like you said, a lot of people don't understand how AI works, and I think that's becoming increasingly important as AI is becoming more integrated into our day-to-day lives. And there have been a couple of projects or efforts that are looking at how to teach people about AI in the classroom, but what we're really looking at is how we can teach people about AI in public spaces, so how you can have experiences where you learn a little something about AI in places like museums or shopping malls. And we found that these collaborative creative interactions really provide a good way to engage people in these public spaces. 03:39 JP: So you've traveled up the East Coast I know, and probably even further, doing several of these AI exhibits. What types of spaces have these been shown in, and are there any favorites that you have? 03:50 DL: Yeah, we've worked in a lot of different spaces. I was at the Children's Museum of Pittsburgh for about a month two years ago, installing two of our projects that we discuss in the paper, Sound Happening and LuminAI. Sound Happening is a co-creative play space in which participants can move colorful beach balls around and generate music based on the location and the color of the beach balls, and LuminAI is a co-creative dance partner. 
So it's a humanoid figure projected onto a screen, and you can walk up to it and dance with it and improvise movements, and it'll respond to your movements with dance moves of its own. So I installed both of those projects at the Children's Museum of Pittsburgh, which caters mostly to kids under the age of 8, so a really young audience. It was really an adventure to see how the kids interacted with the system. And I feel like a lot of my learning experiences came particularly from the children's museum, because you never really know how a research exhibit stands up in public until you ask a bunch of five-year-olds to descend on it. 04:55 DL: We've also installed our projects at the Smithsonian, the National Museum of American History, at the ACC Creativity and Innovation Festival in Washington DC. We installed LuminAI for three days up there, and then we did a performance with the Sound Happening project at a different year of the festival. And we've also installed in a number of places around Atlanta. So we've done installations in art galleries in the Atlanta area, like the Eyedrum Art & Music Gallery, which is unfortunately no longer open, but a few years back we did an installation of another project, The Shape of Story, there, which was the sort of communal storytelling experience that you mentioned, where an AI agent creates a visualization of a story that's told by a group of maybe five to 10 people. 05:47 JP: It's like sitting around a campfire, and essentially you or a facilitator has people add to the story, and then the AI creates a piece of art based on that story. 05:58 DL: Yeah, we sort of riffed off of a game that's used in improvisation a lot, where people will be asked to tell a story together, line by line. So I would say the first line of a story, like "Once upon a time," and you would add on to that and say "a girl walked to the store," and then progressively around the circle, we would all tell a story together. 
So as each line was said, the agent drew a symbolic representation of that line. 06:26 JP: So it was real time, it was something... 06:27 DL: It was real time, yeah, it was sort of a drawing that was projected into the center of the circle. And so, as you spoke, you would see shapes and figures appear in the center of the circle, and you could kind of draw an interpretation as to what you thought the shapes were reflecting in your story, but it was sort of mapped to things like... It drew shapes that represented the particular agent that was acting in the story. So if you said a girl walked to the store, we had a particular shape that represented female agents and male agents, and then we had different shapes that represented different types of actions, so different symbols to represent actions like moving versus actions like speaking, for example. 07:11 JP: So, this is something people might not immediately pick up on, but AIs... People are like, "Okay, how does it differentiate from other technology?" Well, again, it does different tasks based on the scenario, but it's designated maybe to drive us one day when the self-driving cars come about, or to stock our pantry if we get smart refrigerators, but those are very specific tasks. Was it really eye-opening for you to see how people understood the new possibilities of... In a creative endeavor? You're asking AIs, these thinking machines, to do just open-ended types of interactions. So was there anything there that just was immediately, okay, people know this can be AI as well? 07:53 DL: Yeah, I think that one of the reasons that these creative interactions with AI are well-suited for places like museums is that people come to museums for entertainment and leisure, and they want an exciting, out-of-the-ordinary experience. You don't wanna interact with your everyday Roomba in the museum, you want something a little bit more awe-inspiring. 
And so I think that's part of why these projects are really well-suited for those spaces, is that they're really sort of engaging for people. But I think that we've seen people interact with them in a variety of ways. 08:29 DL: So one thing we've noticed with the LuminAI project, installing it in art galleries as well as the children's museum, is that the amount of explanation we provide around the exhibit really influences how well people understand what they're interacting with and what it's capable of. So for example, we installed a version of the project at the Goat Farm Arts Center here in Atlanta for a performance, and we installed it in sort of a geodesic dome, so we had multiple agents that were projected onto the side of the dome, and people could walk in and they could dance with them. And the visualization we were using for the agents at the time was sort of an ethereal firefly visualization. So there were all these little light motes that were making up the human figure, which was really beautiful, but it was a little bit difficult to tell what the figure was doing in response to you, and it was difficult for people to make that mapping between, "Oh, when I do this, the agent responds in this way." And so when we installed the project at the Children's Museum of Pittsburgh, we created a different visualization that was a lot more humanoid. It looked much more like a person or an avatar that you might see in a video game, for example, and people were able to grasp more easily, just based on the dialogue that we heard, what the agent was doing, the different types of responses it could give you. 09:56 DL: We recently installed a project at the CODAME ART+TECH Festival in San Francisco, and we added text prompts to the installation, so when you walk up to the agent, it says, "Will you teach me how to dance, human?" sort of over it in text. 
And then when it's responding to your movements, it'll give some explanation of how it's responding, so it'll say, "Oh, I'm copying your moves right now," or, "Oh, right now I'm gonna do that same move, but change it in some way." And that really helped people also to reason about what the agent was doing. 10:28 JP: So you've taken all these experiences, you've come up with a design roadmap, and there's a lot to unpack in that, so we can't hit on everything, but one part of the roadmap talks about AI having to make real-time decisions in this live creative interaction with people, right? You can program an AI, give it so much training data, it can almost hit its mark 100% of the time on any specific task. But your challenge is totally different, you don't know what the human's gonna do. Not an easy task, I can imagine. How do you make AI useful in this creative process with that unpredictability, right? 11:03 DL: Yeah, yeah, that's a great question. And I think that's a lot of the direction that AI is going in in general, is we need to make AI that's able to interact in these more social, open-ended interactions where it doesn't have a pre-programmed idea of what to do, it can draw on these improvisational strategies. So we draw a lot in our lab on improvisational strategies from theater and dance, for example, to equip the agent with a set of strategies it can use to respond to inputs it's never seen before. So for example, we have drawn on some theory of jazz improvisation, which suggests that when a bunch of musicians are together and they're improvising jazz together, they move in certain patterns or they respond to each other in certain ways. 
And one of those ways is they will start off by mimicking the person that they're playing with, so they might play the same set of notes that the person that they're playing with just played, and then they'll change it a little bit, so they do a little bit of transformation, and then maybe they riff off into something new. So we use that same three-step strategy in the LuminAI program, and we mimic what the user does. So if the user does a movement, the agent copies it, then it might change up that movement in some way, so transform it, maybe reflect it, maybe add a little something to it, and then it responds with a movement that it's previously learned that it thinks is similar. 12:40 DL: And another strategy we use out of improvisation is drawing on dance theory. So the LuminAI agent uses Viewpoints dance theory, which is a set of ways of reasoning about movement. So using Viewpoints, you can think about things like the rhythm of the movement, or the amount of space that a movement takes up, or the tempo of a movement. So it's a set of dimensions, like rhythm, tempo, space, and time, that you use to think about a movement. And so that's how the agent determines a movement that's similar in some way. So I'm not sure if that fully answered your question. 13:22 JP: Yeah, that's impressive that you can actually make AIs react outside that... A given task, to be able to be more social. So I think that's where the power of the technology comes in, right? People just need to understand, okay, this is totally different and a game changer in some respects. 13:39 DL: I think something else that we do is we make our agents capable of learning from humans over time, so they're able to easily grow the knowledge that they have to work with based on their interactions with people. And so for example, the LuminAI system, it learns a gesture from every person that comes up to it and interacts with it. 
And it stores those gestures in a knowledge base, and then it can use those gestures as responses to other people who come and interact with it later on in the day. And so it can be kind of cool that at the end of a day at the museum, you're really interacting with all of the other people that have come up to the exhibit earlier in the day. 14:20 JP: So, it integrates all the actions except the naughty gestures, we keep those out of the system. And we're not gonna go down the deep technical rabbit hole, but you basically have to filter out the bad data, make sure it's creative or it's considered good dancing. So that's another thing. You brought up a couple of really good points. So what's considered, like, good art? That's a whole different question in the art world, what's good art? So what's good and bad? And you've come up with, I guess, a technical way to filter out the bad data and make for technically good performances? 14:56 DL: Yeah, the LuminAI system actually does not have a way at the moment of determining between good and bad gestures. We sort of cheat a little bit and we eliminate some of the bad gestures by, for example, not allowing fingers in the dance moves, which eliminates some bad gestures, as you might imagine. Another project that we talk about in the paper, which is my colleague Mikhail Jacob's research, is the Robot Improv Circus, which is a sort of virtual reality experience where you put on your VR goggles and you're able to improvise movement together with a robotic scene partner, and this is less dance-oriented and more about improvising movement with objects. So it's sort of based off of the props game, which is another game drawn from improv, where you get these abstract objects, so something that maybe looks a little bit like a windmill but not quite, and you have to move it around in space and sort of manipulate it and experiment with it. And so the human manipulates this object and then the agent manipulates the object. 
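The three-step improvisational strategy Duri describes for LuminAI (mimic, transform, then riff with a similar previously learned gesture), combined with a growing gesture knowledge base and a Viewpoints-style notion of similarity, might be sketched roughly like this. This is an illustrative toy, not the lab's implementation: the real system reasons over skeletal motion data, and the `Gesture` dimensions, the transform step, and the similarity function here are all invented stand-ins.

```python
from dataclasses import dataclass

# Toy stand-in for a dance gesture. These three Viewpoints-inspired
# dimensions are hypothetical; the actual system uses richer motion data.
@dataclass
class Gesture:
    tempo: float   # how fast the movement is
    space: float   # how much room the movement takes up
    rhythm: float  # how regular the movement is

def similarity(a: Gesture, b: Gesture) -> float:
    # Closer along the Viewpoints-style dimensions means more similar
    # (we negate the squared distance so higher values = more similar).
    return -((a.tempo - b.tempo) ** 2
             + (a.space - b.space) ** 2
             + (a.rhythm - b.rhythm) ** 2)

class ImprovAgent:
    """Mimic -> transform -> riff, while learning from every visitor."""

    def __init__(self) -> None:
        self.knowledge_base: list[Gesture] = []

    def respond(self, user_gesture: Gesture, step: int) -> Gesture:
        # Every observed gesture is stored, so later visitors are, in a
        # sense, dancing with everyone who came before them that day.
        self.knowledge_base.append(user_gesture)
        if step == 0:
            return user_gesture                       # 1. mimic: copy the move
        if step == 1:
            return Gesture(user_gesture.tempo * 1.2,  # 2. transform: vary one
                           user_gesture.space,        #    dimension a little
                           user_gesture.rhythm)
        # 3. riff: answer with the most similar previously learned gesture.
        return max(self.knowledge_base,
                   key=lambda g: similarity(g, user_gesture))
```

Under this sketch, a visitor's first move is simply echoed back, a later move comes back slightly varied, and eventually the agent reaches into everything it has seen that day for its reply.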
16:04 DL: And that system has a number of different metrics built into it to evaluate the quality of a human's movement. So I think the system evaluates quality based on creativity, so it has some metrics for evaluating how creative the movement is, including how novel the movement is, so compared to other gestures in the data space, how novel is this particular movement? I think another one of the metrics it uses is surprise, so how surprising is this particular movement? Is it really similar to what we've been doing, or is it something new that we haven't seen before? 16:41 JP: I am starting to think that this is harder than coming up with the self-driving cars. [chuckle] This is like really... Across the spectrum, creativity is just a much, much different challenge. And what you said about dancing, basically at the end of the day, because the AI has danced with a whole number of people, you're dancing with all those people in essence. And that elicits the social experience that you wouldn't normally think of, 'cause you're with technology, you're not working or experiencing things with people, but that's counter to what you're saying; what you're accomplishing actually is more a human-to-human interaction experience. 17:19 DL: Yeah, I think that collaboration is something that we really shoot for, especially in these public spaces where people usually come in groups, they usually come with their family or their friends, and they don't wanna just have sort of siloed individual interactions. And that's a lot of what can happen when you introduce technology in these spaces, is you get these kiosks, and one person is interacting and everyone else is standing around watching. So we try to build in some opportunities in our system for multiple people to interact together. 
So for example, in the LuminAI system, like I mentioned, the agent is sort of projected onto the screen, but next to the agent is a virtual version of your shadow, so you can see what you're doing when you're dancing. And we allow for multiple virtual shadows to be up there at one time. So if you're dancing with the agent and the agent is looking at your move specifically, you can also still see a couple of other people's shadows up there dancing. So a family, for example, could walk up to the screen and all be dancing together and be getting some response from the system, even though the agent is only responding to one person's actions, if that makes sense. 18:22 JP: Very cool. And I heard there was a salsa couple dancing? 18:26 DL: Yeah, we've seen a lot of really fun different emergent social interactions with this system. When we were at the Smithsonian, we saw... We had one couple come up, and she was actually a salsa teacher, and so she and a partner were salsa dancing with LuminAI. We saw a bunch of teenagers do a dance circle where they all stood around and then one person got in the center and did their moves, and then they swapped out and another person got in the center and did their moves. And then we also see a lot of little kids who just love to see their shadow, and maybe aren't paying quite as much attention to the AI agent and just love to jump around and dance in front of the installation. So I think we see a whole range of different social interactions depending on who comes up and what they wanna do. 19:18 JP: So this may be a tough one, but if you had to pick... It's like picking your favorite child, but out of all the installations you went to, exhibits or the actual AI partners that you've worked with, can you pick a favorite? 19:30 DL: I think LuminAI is my favorite. I don't think that's quite as hard of a question as I thought it was gonna be. But I've worked the most with LuminAI. 
I feel like I have installed it in the most places and have seen the most people interact with it. And I really feel like, as an installation, there's a lot of versatility to use it for a variety of different purposes, which I think is one of the things that I really like about it. So we've used LuminAI, like I said, in a children's museum context, where kids just love to come up to it and see how the system reacts to their movements. We've used it in more of an arts context, where adults come up and can sort of experiment with it a little bit more in depth, and we've also experimented using it with professional dancers in more of a performance-based setting, where you can potentially explore movement in a little bit more depth. And one of the master's students in our lab, Swar, is currently working on incorporating a different type of movement theory and using a motion capture suit, which has a little bit more fine-grained ability to capture motion, to make it a better tool for dancers to use as well. 20:45 JP: So this might be a tough one. How do people get started in this area? Specifically your approach to AI, what advice would you give people if they wanna go down the same path you're going down? 20:58 DL: That's a good question. I got started in this area... I did my undergraduate and I got a double major in Computer Science and Dramatic Art, which didn't go together at all, and in graduate school, I really wanted to find a way to maybe bring together my interests in creativity and computing. And I think I see that path in a lot of the other people who are working in our lab. People have these interdisciplinary interests that span computing as well as a creative discipline like dance or theater, or visual arts, or music, and they sort of bring them together. So I would say, I guess for someone else wanting to get started, think about what your interests are and what type of creative knowledge you wanna bring to programming artificial intelligence or working in computing. 
21:54 JP: So you're saying this is the lab to be in if you're the creative type, you use your left and right sides of the brain, basically? 22:00 DL: Yeah, if you can't make up your mind about what you wanna do, just bring it all together and try to do it all at once. 22:08 JP: So I cherry-picked just one or two parts of the design roadmap, I'm calling it, from your published paper. Is there anything significant that you would just bring up, as far as helping people if they wanna create AI exhibits? 22:22 DL: Yeah, yeah, so we've touched on three different areas. So we've talked about the design of the technology, so that's some of the stuff that we've already touched on, is incorporating a way for the agent to learn from people over time, incorporating some of these improvisation strategies, and then other more practical things like making sure that people can maintain the technology. So if you're a researcher, you install your exhibit in a museum and you say, "Bye," a lot of times the research project is difficult to maintain if you're not an expert, if you don't know exactly how the AI works. So developing tools so that museum staff can turn the exhibit on, turn it off, maybe make some changes, deal with issues that arise. We talk about the interaction design, and that's sort of this collaborative aspect that we've been talking about, so designing to support things like social interaction, supporting intuitive interaction. So a lot of our projects involve embodied or physical interaction. So for instance, the Sound Happening project, you move around colorful beach balls in space in order to make music, and that's a really intuitive interface. Even two-year-olds walk up to it and they know how to push the ball around, and then they can hear the music. 
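A position-and-color-to-sound mapping in the spirit of Sound Happening could be sketched as a toy like the following. The interview doesn't specify the actual mapping, so the pentatonic scale, the volume rule, and the color-to-instrument table here are all invented for illustration:

```python
def ball_to_note(x: float, y: float, color: str) -> dict:
    """Map a tracked beach ball to simple musical parameters.

    x and y are the ball's position in the play space, normalized to
    the range 0.0-1.0. All mappings below are hypothetical.
    """
    scale = [60, 62, 64, 67, 69]              # C-major pentatonic (MIDI numbers)
    pitch = scale[int(x * (len(scale) - 1))]  # left-to-right picks the note
    volume = int(40 + y * 87)                 # depth in the space sets loudness
    instrument = {"red": "marimba",           # each ball color gets its own voice
                  "blue": "flute",
                  "yellow": "bells"}.get(color, "marimba")
    return {"pitch": pitch, "volume": volume, "instrument": instrument}
```

In a sketch like this, pushing the red ball from the left wall to the right wall sweeps up the pentatonic scale on the marimba voice, which matches the intuition Duri describes: a two-year-old pushes a ball and immediately hears the music change.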
23:36 DL: And so, sort of designing these intuitive interfaces that are easy for people to walk up to and understand is really important in museums, because people don't stay at exhibits for long, and they often leave them if they can't figure out how to operate them. So that's a really important aspect. And then the last thing we talk about is research design. So for researchers working in this space, thinking about how they can design installations that are tough enough to exist in these spaces. So I think I mentioned earlier, you never know how an exhibit works until 20 five-year-olds descend on it at once, and they'll just tear it apart. And so you have to think about how you design exhibits that are durable, that don't have choking hazards, a lot of issues that you don't think about when you're working in the lab. 24:24 JP: Alright, be honest with me, how did you handle that when that happened? [chuckle] 24:29 DL: We definitely had some interesting experiences at the Children's Museum of Pittsburgh. So I think the funniest was with Sound Happening. We originally installed the project with bouncy balls, and we've installed it in a number of different art spaces with more adult audiences. We've had no issues with the bouncy balls. I think the bouncy balls were installed for five minutes maximum before chaos erupted. There were probably 15 boys, maybe eight years old, who came in, and they just weaponized them, they were throwing them at each other, they were bouncing off the ceilings and the walls, the museum staff members were coming over, they were like, "What is going on here?" 25:07 JP: That sounds about right, with boys, right? 25:09 DL: Yeah, yeah, we had to remove the bouncy balls, and we actually... We tried balloons next, because we thought, "Oh, balloons, they don't fly as far, they're not as fast, they're a little bit more slow-moving," and that worked, but balloons are, in most people's minds, free for the taking. 
So the minute we put out a balloon, someone walked away with the balloon. They just vanished. I even had people... We were moving the balloons on the elevator, and they were in a little cart, and people were pulling the balloons out of the cart as I was moving down the elevator. So we ended up compromising, and our final solution was to use beach balls, which are slow-moving like the balloons, but people don't perceive them as free gifts. 25:58 JP: They're not as fun to play with in the car when you're just tryna have something to do. Did this all happen... Did you have to figure out this solution, like plan C finally worked, all at the same exhibit, the same visit? 26:10 DL: It was not the same day, but it was the same visit. So I was up there for about a month, and I think over the course of a week we sort of went through all of these different types of balls. My advisor and I went shopping and just bought 20 different types of balls at Walmart to figure out what balls worked in this space, what size was good, how bouncy they needed to be. 26:33 JP: Man, you'd think the AI part of this exhibit would be the hardest part, but no, it's the balls. 26:37 DL: It's the balls, yeah. 26:39 JP: Oh man, Duri, that's great. Well, so you've proven that PhD work isn't all work, it's fun too. Thank you for joining us, I really enjoyed it. We will include in the show notes the paper that you published recently with this design roadmap, and links to the lab. Is there anything else that you can think of or share with us? 27:02 DL: Well, thank you so much for having me. I don't know if I have any closing thoughts, other than that the PhD can be fun and you can have interesting sort of creative experiences with the technology that you're developing. 27:16 JP: So we'll prove it by including hopefully some videos; your website has a lot of cool stuff to look at, and people can figure out what this AI creativity looks like. But thanks again, Duri. 
This has been the Tech Unbound podcast with The GVU Center. Thanks for joining us, and check out our website for more research. We'll catch you next time. [music]