[music] 00:20 Joshua Preston: You're listening to Tech Unbound, with the GVU Center at Georgia Tech. On the show, we explore human-focused technology and computing research at the institute. The work that takes place here today could be the technology you're using tomorrow. I'm your host, Joshua Preston, and I'm joined by my co-host, David Mitchell, from the School of Interactive Computing. Welcome, David. 00:41 David Mitchell: Thanks for having me on, Josh. Now, am I a guest or a host or a guest host? 00:48 JP: You are a guest host, for now. 00:51 DM: Perfect. Perfect. 00:52 JP: David, you produced the Interaction Hour for the School of Interactive Computing, and you can find that on iTunes. On this show, we talk to experts in our research community, who are creating cutting-edge innovations, how they impact society, and how they pulled it off. In the GVU Center, we're keeping the human in the loop. [music] 01:30 JP: People are going to have a wide variety of reactions to artificial intelligence systems when we start working with them regularly. To a certain degree, we interact with AI systems already, we just don't know it or we're not really paying that close attention, but that's another conversation. Today, we're talking about the next step in using technology and automated systems to perform our daily work. Specifically, how AI systems will be integrated into our software and help creative professionals and make better interactive entertainment, AKA, video games. We're joined by Dr. Matthew Guzdial, who will share some of the AI systems he's created in this space and what this future might look like. Thanks for joining us, Matthew. 02:14 Dr. Matthew Guzdial: Happy to be here. 02:15 JP: Your family has a tradition in academia and there is already a Dr. Guzdial in the family, so is it okay if we call you Matthew? 02:22 DG: Please. 02:24 DM: Can we say "number two" or "Dr. Two," "Dr. Junior," any of those work? 02:29 DG: Yeah. I'll answer to anything, just Dr. Guzdial, that's my dad. 02:33 DM: You put in a lot of time to earn this degree though, so, Doctor... We should probably call you Doctor at some point though, right? [chuckle] 02:41 DG: Sure, whatever floats your fancy. 02:42 JP: So we're keeping the tradition going. Can you tell us a little bit about yourself, Matthew, and your research area? And feel free to dispel any myths that games are not serious research, which I'm sure, by the end of this conversation, there'll be no question about that. 02:58 DG: Yeah, sure. As earlier said, my name is, I guess, Dr. Matthew Guzdial now, and I've been doing this thing for about five years. This would have been the five years coming up in the fall, if I wasn't finishing up now in the next few days. And my research generally focuses on computational creativity, with the domain often being games or other interactive experiences. By computational creativity, I basically mean can we get behaviors that an unbiased observer would think of as being creative to work in a computer, to get the computer to demonstrate behaviors that we'd think of as being creative. And that's in all kinds of different ways and in different forms, but that's the high-level pitch. 03:49 JP: That's super high-level, it goes beyond games. But you've pursued video games as a passion and a serious topic for academic research.
I've witnessed a few of your milestones from the outside looking in, like you taught an AI to watch YouTube, and based on what it learned from play-throughs on YouTube, it created its own game levels. And then to top that, you made your own version of Mario Maker that included the AI agent to help human creators make game levels. That's just scratching the surface. And I know this has been a years-long journey, like you mentioned. Congratulations on beating the curve; under five years, I think, is really below average. But I'm curious what it's actually taken for you... What this journey has looked like, just give us a glimpse of the path you've taken and how you got here. Right? 04:38 DG: Yeah, for sure. In undergrad, and I also did my undergrad here at Georgia Tech, that's not necessarily the best idea, but I did it anyway, and it worked out for me. [chuckle] Not doing an undergrad at Georgia Tech, that's a great idea. But doing the same undergrad and graduate institution. 04:55 DM: Oh, thanks for clarifying. [laughter] 04:57 DG: So, started here in undergrad, got interested in games in undergrad. And I was sort of debating at the time whether I was going to go do industry or academia. If I was gonna go make games professionally, or if I was going to research making games or areas around games, or I wasn't really sure at the time, this was five years ago. And so I had some internship experiences in the industry, some very good and some not so good, so that ended up leaning me towards academia. 05:31 JP: Can you name names? 05:34 DG: Sure, it's been long enough. [chuckle] I had some really... I had a really good experience at Zynga, oh, geez, eight years ago now. 05:45 JP: I remember them. 05:46 DG: Yeah. Well, they're still around, to some degree. I was working then with Jonathan Knight, who was the producer for Sims 2 and Sims 3. And it was basically just him and me, and then it sort of grew to a larger team by the end of the summer internship on trying to create a prototype for a Zynga-esque Sims game, because that's what he really wanted, he wanted to make a social simulation game but with Zynga. And I don't think that ended up going anywhere 'cause I never saw it released. But by the end of the summer, we had an artist, another programmer, a designer on it, besides me. But it got all the way there, and so that was an incredible experience. 06:30 DG: Then a couple of summers after that, summer of senior year, when I was deciding between what I was gonna do, or I guess leading up to senior year, I had another internship with Zynga. But this time it was in an EA Sports studio that had recently been acquired. So, it was sort of an EA Sports internship and sort of a Zynga internship, but they were making a football mobile app, which did get released. But that was a very... The worst aspects of video game culture. There was a lot of crunch, there was working on weekends, I was getting salaried, so it wasn't helping me that... I didn't get more money for working on weekends, just mindless bug crunching, feature implementation, nothing creative happening. And I just said, "I have no interest in just being a code monkey. I wanna be doing the things that I wanna be doing." 07:25 JP: That sealed the deal. 07:28 DM: Yeah. I'm interested, when you talk about AI or machine learning, there are endless paths that you can take, right? 07:34 DG: Uh-huh. 07:34 DM: So, why video games for you? And this idea of creativity and video games, why does that resonate so much with you?
07:41 DG: Yeah, a great question. As part of my time in undergrad at Georgia Tech, I helped set up this club, VGDev, which is the Georgia Tech Video Game Development Club. You can find their stuff at vgdev.org, and they produce some incredible things. They tend to produce between five and seven professional or semi-professional-looking games every semester, which is just wild. But running that club, being involved with that club over the years, I saw a lot of times where people would pitch games that were too big, [chuckle] that you couldn't possibly do, and they didn't know better. And at the beginning of every semester, I would give the same talk where I'd basically say, "Here's what you're going to think you're gonna be able to do in a semester, and you're not gonna be able to do that." And ultimately, I would never be listened to. But it didn't matter because, by the end of the semester, people would have had to have learned the lessons anyway 'cause they would have tried and failed to do all these things. 08:46 DG: That prompted an interest early on in what are ways that I can help people not have to go through this very painful learning process. How can I make video game development as easy as opening up a Word document and writing a story, or downloading any of the very excellent music-making pieces of software and making a piece of music? What's missing here? Why can't this just happen the same way? Now, those are still very difficult creative tasks, but a lot of the heavy lifting is done elsewhere. In a music-making application, you don't have to play each instrument yourself. We have digitized versions of instruments; you can just tell them what to play and when to play it, and things like that. So, what's the equivalent? And that sort of drew my initial interest into games and making them easier for people to make. 09:44 JP: Where does AI enter the picture? Because you're talking, "I'm a PhD student now," and you entered relatively unexplored territory. AI is now helping people, right? We're gonna get to the specific penultimate project that you published, which I think is amazing, but AI entered the picture, and how... What did that look like? 10:05 DG: Yeah. There's been an area of research for many, many years, procedural content generation, which is using algorithms to make game content, mostly focused on things like levels for games or textures for games. There are even commercial tools that use little pieces of this. Like SpeedTree, for example; if you just Google that, you can see this tool that lets you draw trees, 3D models of trees, in the scenes, so you don't have to individually place every tree in a forest, which would just be terrible. 10:40 JP: Code monkey. 10:41 DG: Right, yeah, exactly. You wouldn't wanna be doing that. And even in games when you don't think they're being used, procedural content generation, or PCG, is being used. Like the initial map for Skyrim was procedurally generated and then tweaked by people. So, the pitch for PCG had been, we're going to use algorithms, and algorithms are going to do all the annoying work that I just mentioned, all the crunch, all the code monkey stuff, so that people can just be creative and expressive. There's a problem, which is that these algorithms take a lot of specific knowledge about whatever they are creating. A really simple example would be like a rules-based approach, where you say, "Okay, so if we're placing trees," for example, there's some rules about where a tree should and shouldn't go, right?
And an algorithm doesn't know that initially, we need to tell it. We need to tell it, "Okay, a tree should be on the ground, for one," that's a start, "but then a tree should also not be inside another tree, so let's avoid that. We might have paths that we want the player to be walking, so trees can't go there either," and so on, and so on, and so on, and so on. And that's just for placement, let alone what a tree should look like. 12:00 DG: So, there's all this knowledge we need to put into these algorithms. And that was really the blockage that was stopping people from being able to use these in the way that I imagined. So, the thought was, "What's another way we can get this knowledge besides having a person have to manually input it?" And that brought me to machine learning. How can we find another source for this knowledge that doesn't require direct human intervention and let the algorithm, the AI, learn what it needs itself to be able to apply it in these PCG algorithms? 12:36 DM: Just to play devil's advocate, we talk about creativity, right? And people are creative in different ways. Some people... Maybe somebody thinks a tree should be flying in the air, right? How does that fall in with, if you're training this algorithm to do that for you, do you then have to tweak it or can you get... Is it possible or feasible to train it to your own creative style and process? 13:01 DG: Right. There are two answers here, whether we're using PCG, just default standard PCG, which is using non-ML, non-machine-learning algorithms, to make content. Or if we're using what has been more recently dubbed PCGML, as there were a few people working on this while I was doing my PhD, and we ended up writing this nice journal article about PCGML, "Procedural Content Generation via Machine Learning." So, the PCG answer first. Now, this is totally a viable thing and one of the cool things about this encoding of knowledge that I mentioned earlier, I mentioned it as a downside of this, like, oh, having to input all this knowledge. But what it can allow you to do is create a space of possible content that matches your design vision. The most successful games that use PCG, things like Spelunky, for example, these are games which have algorithms that have been precisely tuned by the game's creator to create only the kinds of content that they want. Even if it doesn't make the most sense, or isn't what another person would have come up with, it was their design vision, and their way of inputting that design vision into the algorithm was by doing this manual encoding of this knowledge. That's one way. 14:18 DG: Way number two, PCGML. That gets to an issue with most PCGML which prompted the next step in my research. There's been a lot of work in this area, a lot of it in Super Mario Brothers, because it's a very common standard domain to be working in for this research. This is things like Adam Summerville at the University of California, Santa Cruz, who used deep neural nets, specifically a deep neural net called a long short-term memory recurrent neural network, or LSTM, you don't have to remember the whole thing, to be able to train it on existing Mario levels and produce new Mario levels. Or Sam Snodgrass, out of Drexel University, who used a Markov chain, or myself, who used a probabilistic graphical model learned from YouTube videos to be able to model level design knowledge. 15:08 DG: But that's just gonna get you something that replicates what you've seen already.
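To make the PCGML idea above concrete, here is a minimal sketch in the spirit of the Markov-chain approach: learn which vertical columns of tiles tend to follow which in example levels, then sample a new level by walking that chain. The tile alphabet, the toy example levels, and the function names are illustrative assumptions, not the actual systems built by Snodgrass, Summerville, or Guzdial.

```python
import random
from collections import Counter, defaultdict

# Toy "levels": rows of tiles. '-' = sky, 'X' = ground, 'B' = brick, '?' = question block.
EXAMPLE_LEVELS = [
    ["--------?---B---",
     "----------------",
     "XXXXXXXXXXXXXXXX"],
    ["----B?B---------",
     "----------------",
     "XXXXXXXX--XXXXXX"],
]

def learn_column_transitions(levels):
    """Count how often each vertical column of tiles is followed by each other column."""
    transitions = defaultdict(Counter)
    for level in levels:
        columns = ["".join(row[i] for row in level) for i in range(len(level[0]))]
        for current, nxt in zip(columns, columns[1:]):
            transitions[current][nxt] += 1
    return transitions

def sample_level(transitions, start, length=16):
    """Generate a new level by walking the learned Markov chain of columns."""
    columns = [start]
    for _ in range(length - 1):
        options = transitions.get(columns[-1])
        if not options:  # unseen column: fall back to a uniform choice over known columns
            options = Counter({col: 1 for col in transitions})
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        columns.append(nxt)
    # Transpose the sampled columns back into rows for display.
    return ["".join(col[r] for col in columns) for r in range(len(start))]

transitions = learn_column_transitions(EXAMPLE_LEVELS)
for row in sample_level(transitions, start="--X"):
    print(row)
```

As the interview notes, a model like this can only recombine patterns it has already seen; the output will always look like more of its training levels.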
So, how do you get that creativity, how do you get something novel? For that, we use an algorithmic attempt at replicating a kind of human creativity, called combinational creativity. As suggested by the name, that's the kind of creativity that you or I use when we are re-combining existing knowledge to create something new. The really simple version of this is imagine a new animal. Almost always what you're gonna say is something like, "Oh, it's like X and Y. It's like a dog, but it has wings." Or, "It's like a horse, but it has scales." 15:45 DM: A liger. 15:46 DG: Right, exactly. 15:47 DM: A lion and a tiger. 15:47 DG: Well, that's a real one. [laughter] But, yeah, it's the same sort of like, "We're gonna combine old knowledge to create something new." And people use it all the time for all kinds of tasks when they're asked to make new stuff. So, there's been a lot of different approaches to combinational creativity, a lot of different algorithms to try to represent this very human ability to recombine things. We used a very standard one of those initially, called conceptual blending, to be able to recombine these learned models of Super Mario level design, to be able to create brand new kinds of levels that had never existed before. For example, like an underwater castle. At the end of each Super Mario Brothers world, there are these big castle or boss levels. And we created a version of that or a level design model that could create levels like that, but they were underwater, sort of like a drowned castle level, or something like that. 16:41 JP: Did the flames still go under the water? 16:43 DG: Yes, they did, because it doesn't know better, but maybe they're hot enough and then it doesn't... It's okay. 16:50 JP: Right, sure. Physics apply. We can make it water. [chuckle] 16:53 DM: This is our own world. Right? 16:54 DG: Right. Right. So that's one way to do this, to sort of invent new knowledge from the things that we've learned to be able to create cool and interesting and creative stuff. 17:05 JP: So, you didn't just advance machine learning, you didn't sit here and build it to a point where you created new models that could do better PCG. You've actually created a software prototype to be an equal partner to humans. So, that's where it comes to. I think this is your crowning achievement. Tell me if I'm wrong, but the fact that you did a study with this software tool that says, "Hey, it's a piece of software that creates game level designs and it helps you, as a human creator, fulfill your vision or give you new ideas." And you did a study on human reactions to that. And that's where we ended up. And I thought like, "Wow, that's what we needed, an AI, we need to figure out the human element to that." How did you tackle this challenge? It seems like a really big challenge. 17:53 DG: Yeah. As I mentioned, to begin with, that was always my area of interest, always about helping people to make things, to be able to avoid those annoying bits. From the start, that's what I knew I wanted to do. So, once I had these learned level design models, the very natural step was, "Okay, are these actually helpful to people?" We've gotten past the step, you didn't need to encode all of this knowledge yourself. It has this knowledge that it's learned, how does it actually work? How can it work with another person? And the very initial approach was a terrible Python GUI that I whipped up myself in a couple of days. 18:41 JP: Python can do that, right? 18:42 DG: Yeah.
Well, [chuckle] it's not good for it really. Don't make user interfaces in Python. It can handle transparency, that was the big one. But it was like one screen where you'd hit a button, you'd add individual pieces of, in this case, Super Mario chunks, you'd put them into a level screen of just a little subset of the level. And then every time you added anything, the AI would pop up with two suggested additions, like, "Oh, you're putting ground here, do you want more ground? Oh, you're putting ground here, do you also want some bricks floating in the sky?" And to suggest that this was very similar to Clippy, say, the very annoying Microsoft Assistant, [chuckle] if you're old enough to remember Clippy... 19:31 DM: Oh, I remember Clippy. 19:32 DG: Yes. [chuckle] It would be a very fair comparison. It was a very annoying interaction. That's when I thought, "You know what, it might make sense to get some of the excellent people here, who do actual HCI research, to take a look at this." And I've been blessed with a lot of really good master's and undergrad students who've worked with me throughout the years. And several iterations later, we had something that I think is an actual workable interface, where instead of constantly making additional suggestions every time you do anything, instead the human chooses, "Okay, I've made some additions to the level, I'm gonna hit end turn, and now it's gonna be the AI's turn to make some additions. And then it's gonna go back to me, I make some more changes, then the AI." And this turn-taking, sort of exquisite corpse-style creation, where two people, or a person and an AI, are going back and forth and making something together, that ended up being a much more natural and much less frustrating interaction for people. 20:31 JP: And it's almost industry-level software. We're thinking if academia doesn't work out, maybe it's gonna be CEO Guzdial, of your own company with this AI-enabled software. The human reactions to the software were, I thought, fascinating, because you categorized them into four different areas. And it was really funny to see, literally, humans, in all their infinite diversity, are gonna react totally differently to AI. That's what we're trying to drive at, what's this gonna look like? So, I love that you had the friend, you want your AI to shoot the breeze with you, just bat around, brainstorm ideas. You had the student, you want the AI to do what you say, you're gonna be the taskmaster and say, "I'm the boss here." You had the manager, which is the human actually deferring to the AI, and literally saying, "What do I do? Tell me what to do next." And then the collaborator, which is an equal design partner, expecting the AI to be an expert. So there's a lot of expectations for your AI. And I know it's early stages, but what did you find really interesting? And re-characterize these human reactions to your software tool, if I messed it up. 21:38 DG: Yeah, great question, and I don't think you messed it up at all. 21:42 DM: Good job, Josh. He's been practicing. [chuckle] 21:47 DG: We ran two studies. That's the crux here, which may not be immediately clear if you're just listening in. In the first one, we actually had three different AI. Right? There were my probabilistic graphical models, the stuff learned from YouTube, and then there were the two other ones that I already mentioned, Adam Summerville's LSTM and Sam Snodgrass's Markov Chain.
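The turn-taking interaction described above, where the human makes some edits, ends their turn, and then the AI adds to the level, could be structured roughly as below. This is a toy sketch under assumed interfaces: the Level representation, the agent's placeholder heuristic, and all names are illustrative, not the actual study tool.

```python
from dataclasses import dataclass, field

@dataclass
class Level:
    # A level is just a set of (x, y, tile) placements in this toy model.
    placements: set = field(default_factory=set)

    def add(self, x, y, tile):
        self.placements.add((x, y, tile))

class CoCreativeAgent:
    """Placeholder agent; in the real tool this would be a learned level-design model."""
    def suggest(self, level, n):
        # Toy heuristic: extend the ground one tile to the right of existing ground.
        ground = [(x, y) for (x, y, tile) in level.placements if tile == "ground"]
        return [(x + 1, y, "ground") for (x, y) in sorted(ground)[:n]]

    def take_turn(self, level, max_additions=3):
        additions = self.suggest(level, max_additions)
        for (x, y, tile) in additions:
            level.add(x, y, tile)
        return additions

def co_create(agent, human_turns):
    """Alternate turns: the human makes edits, hits end turn, then the agent adds."""
    level = Level()
    for edits in human_turns:  # each element is the list of placements from one human turn
        for (x, y, tile) in edits:
            level.add(x, y, tile)
        agent_additions = agent.take_turn(level)
        # In the study, the human deleting agent_additions on a later turn was
        # treated as negative feedback for the agent.
        print("Agent added:", agent_additions)
    return level

final = co_create(CoCreativeAgent(),
                  human_turns=[[(0, 0, "ground"), (1, 0, "ground")],
                               [(5, 2, "brick")]])
print(sorted(final.placements))
```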
And we had just a whole bunch of Tech students, basically, I think it was like 90-something at the end, interact with two of these three possible AI agents and design two levels. One level designed with one randomly selected agent, and then another with another. And that told us some initial things about people's diversity of expectations. There wasn't any one of these agents that won out in terms of overall being considered better in any sort of metric that we could come up with, but people had very strong opinions about particular AIs they interacted with still. 22:50 DG: For the LSTM, for example, if you were building a very standard Mario level, if what you wanted was to just make the most Mario, Mario level, you'd love the LSTM 'cause that's what it was good at. Now, if you wanna do something wild and out there and creative, the Markov Chain was your friend, because it could handle making little local suggestions that didn't worry too much if the global level looked nothing like a Mario level. So, that gave us some initial notion of, like, "Oh, people are really different, huh?" What we did was we made a new AI, which was trained on the interactions from those first three AI. And this was a deep reinforcement learner. It's not important exactly how it works, but basically what it's gonna do is it's going to learn to try to interact with a human in a way that it doesn't get negative reward. In this case, people deleting its additions. That's what it's gonna try to figure out. And not only is it training on those 90-plus interactions from the first study, it's also gonna train on your interactions while you are interacting with it. 23:53 DG: So, the hope was, we have this agent that sort of generalized over these three. It has a nice, right in the middle of the road starting point. And then as it's interacting with somebody, hopefully it can cue in to figuring out what this person seems to like to be able to adapt to them during the interaction. And our hope was this would cover all the variance that people can possibly throw at us. Of course, not true. The four examples you gave, those were from the second study, and this was with industry and indie game design practitioners, so people who make games. We had people from Bethesda, from Bungie, from all over the place, Microsoft. 24:38 DG: And in interacting with this tool, we still found this variance. While they did find that it was adapting to them and they did like that, and that was a positive reaction, there was still this case that people wanted all kinds of different things about the AI. And our thought was, this primarily comes down to people not having enough experience interacting with an AI in their day-to-day life, or at least not an AI like this, that is actively trying to collaborate with them on a creative task. And because they didn't have any prior examples to draw on, they were just going with their gut of, "Oh, this is what this interaction should be like." So this big question going forward is, how do we set people's expectations in ways that are the most beneficial so they get what they want out of the tool without being frustrated or annoyed that it isn't managing them and telling them, "Oh, you created an unplayable level, you fool," or something like that? 25:35 DM: I'm really fascinated by this because I think you kinda summarized it really easily when you said people are different and people expect different things, right?
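As a rough illustration of the adaptive agent described above, the sketch below treats a kept addition as positive reward and a deleted addition as negative reward, pretrains on logged interactions, and keeps updating online. A simple tabular value estimate stands in for the deep reinforcement learner mentioned in the interview; the suggestion "styles" and the log format are assumptions made purely for illustration.

```python
import random
from collections import defaultdict

class AdaptiveSuggester:
    """Toy stand-in for the adaptive agent: learns which suggestion styles a designer keeps."""
    def __init__(self, styles, learning_rate=0.2, epsilon=0.1):
        self.styles = styles               # e.g. "extend_ground", "add_enemy", "add_decoration"
        self.values = defaultdict(float)   # running estimate of each style's value
        self.lr = learning_rate
        self.epsilon = epsilon

    def pick_style(self):
        # Mostly exploit what this designer seems to like; occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(self.styles)
        return max(self.styles, key=lambda s: self.values[s])

    def feedback(self, style, kept):
        # kept=False means the human deleted the addition: negative reward.
        reward = 1.0 if kept else -1.0
        self.values[style] += self.lr * (reward - self.values[style])

# Pretrain on logged interactions from an earlier study (hypothetical log format).
logged = [("add_enemy", False), ("extend_ground", True), ("extend_ground", True)]
agent = AdaptiveSuggester(["extend_ground", "add_enemy", "add_decoration"])
for style, kept in logged:
    agent.feedback(style, kept)

# Online: each turn, suggest in the currently preferred style, then learn from
# whether the designer kept or deleted that addition.
style = agent.pick_style()
agent.feedback(style, kept=True)
print(dict(agent.values))
```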
And that is the case in normal human-human interactions, as well as human-AI interactions. So, extrapolating this out beyond just the video games, which I know has been your focus, is that just the natural human instinct? Do you think that this applies to other human-AI interactions that we may come across in the future, where you are gonna assign these roles to yourself and your AI partner or helper or whatever that you're gonna have? 26:14 DG: Yeah. I think we're seeing this now. If we look at, say, an Alexa, for example. If you've interacted... If you have an Alexa in your home or if you know somebody who has an Alexa in their home, you might have seen this person interacting with Alexa in a very particular way. Now, let's put a kid in front of the Alexa. And immediately this interaction changes. My parents have an Alexa, they love it, they use it mostly for setting timers, or for playing music, or for turning off the lights. Then they had some of their grandnieces visiting, and immediately all that these little girls wanted was for Alexa to tell them jokes. [chuckle] For them, Alexa is a joke machine. That's it. 26:57 DM: I'm not gonna lie, I do that very often still with Siri, too. "Just tell me something funny, liven up the day a little bit." 27:05 DG: Right. Yeah, I think that's exactly it. I think that people are going to come at these AI agents with very different expectations in mind. And while we can do some stuff to try to adapt to user expectation or try to set user expectations with how the AI itself is designed or framed for the user, still there's gonna be this massive variance and we have to be able to handle that going forward. 27:29 JP: That brings up the serious issue. You actually work with practitioners in the video game industry. You said, "Here's the tool, this could be the future of your industry." And so you probably got a lot of different reactions, you probably had some fun anecdotes. What's the issue addressing, "Does this make my job more secure?" The fear of AI, how it could go bad, and the ethics behind how do we approach that real fear, basically? 27:55 DG: Yeah, great question. In the back half of the paper, we talked about this a little bit. But the big thing for this work in particular is that the AI we made could not work on its own. This is not something where it's gonna go and produce excellent levels, interesting levels, all by itself. It's just not gonna happen. Now, with a person using it, that's when it unlocks its potential, and you can actually do some really cool, interesting work with this tool, and it can help lessen the amount of time, hopefully, that it takes the designer to express their design intent. That's the goal, that if we focus on these AI agents, these creative agents that only really work or work best with people, that we can minimize the risks of some greedy person saying, "Oh, well, I'll just use this. Why would I hire a game designer? No problem." 28:54 DM: There aren't any of those in the game industry, are there? Greedy people? [chuckle] No. Just to play devil's advocate, if this technology, AI generator, could exist, where it could exist without the human and create, yeah, maybe that's in the future. I don't know if we're there yet, but maybe that's in the future. Why is it... Why do you think it's important to keep the human a part of this? Why is that in itself very vital?
29:20 DG: The thing that I've been saying to that question is that, as long as humans are the primary consumers of games, humans will be the primary producers of games. It's just that simple. A human is going to know better what humans want than an AI system. Even if there was a perfect AI system, we can imagine the optimal AI system that produces the perfect games, people change, right? People's expectations change for games. What was cool three years ago, five years ago, a few months ago, is not as cool anymore in terms of games. And to be able to chase the times, chase trends, to understand and even expect where things are going, you're gonna need somebody who's immersed in the society where that's happening. And until we have AGI, general intelligence, you're gonna need a human for that. 30:13 DM: Two morals: People are different and people change. [chuckle] 30:18 JP: But I guess the question is how do people who wanna follow in your footsteps do this? This is the big advice question. 30:26 DG: I can only say what I did, which is read a lot. [chuckle] The first couple years of a PhD are mostly reading. But if you don't like reading, and you don't wanna do a PhD, which I understand, there's a lot of resources in terms of helping you understand what's currently going on in the field and where we can go from here. I really recommend looking at Mike Cook, he's the original creator of Angelina, which was the first big-picture game generation system, generating lots of different kinds of games, still using PCG, not PCGML. But he has a lot of great stuff in this space. He runs PROCJAM, which is a procedural content generation game jam, where the motto is, "Make something that makes something," which is very good, I recommend checking that out. He has a lot of free resources and tutorials on how to get started in that area. He also has the Seeds magazine, which I think is currently taking submissions, which is trying to gather together academics, industry folks, and hobbyists interested in PCG. Just talk about what you're doing, put it in a big digital magazine, and just send it out to other people, a great way to do that. And there's also a lot of recorded talks that he has from a lot of different workshops and special guest lecture series that are great to look at, to get a sense of what's possible in this space. 31:58 DM: Hang on a second, let me just reach into my pocket here and pull out my crystal ball and dust that off and let you look into the future of this space and what it's gonna be. Is this something where, me as a... I'm not a game developer. I don't create games, but I play a lot of video games. Is there a future where we're gonna be playing our video games? And perhaps I wanna do something new on this same game that I've been playing. Is that something that's deployable for the general public, or is this only for game developers? 32:33 DG: That's a great question. I think, in fact, it is more suited in the near future for the general public. These tools, by their nature, are gonna be really difficult to make as high quality in their output as big Triple-A games. But if you just want to make something quick and easy, then I think that, in the near future, we can definitely see tools on the horizon that will allow you to make little games fast. 33:03 JP: Yeah. I think that already exists, it's called Minecraft. [laughter] 33:07 DM: Good point. 33:08 JP: I mean, without the AI. 33:09 DM: Good point. Of course. 33:11 JP: But good stuff.
Modders, rejoice, this is coming down the road. I could say, "Do you have any questions for us, Matthew?" But we're not the experts. Dr. Guzdial, thank you for joining us, I don't know where this will lead us, but we expect great things from you. You're gonna continue down this path. I'm guessing you're gonna stay in academia? 33:32 DG: Oh, for sure. 33:33 JP: Right, not industry. We had that Zynga experience. Well, thanks for joining us on the show. If you wanna learn more about Matthew, we'll include his website in our show notes. And also don't forget to check out the Interaction Hour. We have a podcast network going now here at Georgia Tech, in the College of Computing. 33:50 DM: Growing podcast network. 33:52 JP: A growing podcast network. And I mentioned this, I guess we'll make it official, when Dr. Guzdial does come back one day, we'll have a legitimate studio with a full setup, a couch, and we'll invite him back any time. I've really enjoyed this. And thank you for your time and sharing in more depth how you got here and where you're going. 34:11 DG: Thank you. It's been a lot of fun. 34:11 JP: And thanks for joining us. This has been the Tech Unbound podcast with the GVU Center, check us out online. If you wanna come to Tech, you've heard from a Tech student, now a graduate, and you know all the amazing possibilities and opportunities that exist. Thank you. Take care. [music]