GVU Graduate Student Awards Program 2019


Interdisciplinary research is part of the culture of the GVU Center. From creating cutting-edge computing innovations to understanding the impacts these innovations will have on our lives, the students in GVU represent the future of technology.


The GVU Center recognizes top Ph.D. and master's students at Georgia Tech through the annual GVU Graduate Student Awards Program, funded by the James D. Foley GVU Center Endowment.   

Each of the 2019 finalists for the program's two award levels, the Foley Scholarship and the GVU Distinguished Master's Student Award, is involved in research that advances computing technology to improve our daily lives. The GVU Graduate Student Awards Program is the center's highest recognition for student excellence in research contributions to computing.

Learn more below about the four award winners and eight finalists in this year's program. 

Picture above (l to r): Ari Schlesinger, Eshwar Chandrasekharan, Lara Martin, Eric Corbett, Matthew Hong, Emily Wall, Yuhui Zhao, Emma Logevall, Darsh Thakkar, and Alana Pendleton. Not pictured: Ceara Byrne and Brianna Tomlinson.



Matthew Hong, Lara Martin, and Emily Wall


Matthew Hong, Ph.D. student in human-centered computing and co-advised by Lauren Wilcox and Rosa Arriaga, has helped bridge HCI and medical informatics through his work. He has conducted extensive field studies to understand how families manage adolescents' chronic conditions, and has created and tested novel co-design approaches that engage patients, family members, and clinicians in participatory design studies. Drawing on design insights from these formative studies, Hong has created and deployed technology at Children's Healthcare of Atlanta (CHOA) to help families track and manage aspects of their care in the clinic, for example through tools that provide lay-friendly, interactive explanations of patients' radiology reports. He led the first study of adolescent engagement with electronic health record data, analyzing usage logs from an actual health record system at CHOA and combining those analyses with surveys and interviews to contextualize the observed behavior. Most recently, Hong has drawn on his command of design and research methods to inform the development of a novel mobile health (mHealth) system for adolescents with chronic conditions and their family members.



Lara Martin, Ph.D. student in human-centered computing and advised by Mark Riedl, researches interactive storytelling: the question of how computers can be made to understand, generate, and tell stories in a real-time interactive setting. Her thesis argues that computers will be better partners if they can understand and engage with people in narrative terms. Martin's 2018 AAAI paper was the first neural-network-based story generation paper to appear in a top-tier conference. It preceded a surge of interest in the research community in using deep neural networks and is considered the canonical baseline for newer work on neural story generation. Her second notable paper, published at IJCAI 2019, focuses on the controllability of neural generation systems. Neural networks generate stories word by word, with no clear sense of where they are going or how the story will unfold. Martin led a team of master's students that developed an algorithm to teach a neural story generator how to reach a given target goal, making it possible to tell a neural network to generate a story that ends in a particular way. This is the first time a neural story generation system has been able to produce a sequence that ends in a pre-specified goal, and the research lays the groundwork for further advances in neural text generation.
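Goal-directed generation of this kind can be illustrated with a toy sketch. Martin's actual system trains a neural generator with reinforcement learning; the example below swaps in a simple breadth-first search over a small, hand-written event graph purely to show the idea of steering a story toward a pre-specified ending (the event names and graph are hypothetical).

```python
# Toy illustration of goal-directed story generation: find an event
# sequence that ends at a pre-specified goal event. This is NOT the
# neural/RL approach from the IJCAI 2019 paper; it is a breadth-first
# search over an invented event graph, shown only to convey the idea.
from collections import deque

# Hypothetical story-event graph: each event lists plausible successors.
EVENTS = {
    "hero_meets_villain": ["hero_fights_villain", "hero_flees"],
    "hero_fights_villain": ["hero_wins", "hero_flees"],
    "hero_flees": ["hero_meets_villain"],
    "hero_wins": [],
}

def generate_to_goal(start, goal):
    """Return an event sequence from start that ends at goal, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in EVENTS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(generate_to_goal("hero_meets_villain", "hero_wins"))
# -> ['hero_meets_villain', 'hero_fights_villain', 'hero_wins']
```

In the real system, a learned neural model plays the role of the hand-written transition table, and the "search" toward the goal is shaped by a reward signal during training rather than performed explicitly.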



Emily Wall, Ph.D. student in computer science and advised by Alex Endert, conducts fundamental research on mitigating cognitive biases in visual analytics. The motivation for this research came from the realization that while many data-driven decisions rely on human expertise and reasoning to make sense of the data, analysts can introduce cognitive biases during interactive analysis. This is especially critical for mixed-initiative visual analytic systems that adapt analytic models based on user feedback: when such models learn from interactions stemming from biased user behavior, they can lead to ill-informed decisions in domains of societal importance, including national security and science. In response, Wall is designing computational models of user interaction sequences and system parameters that correspond to specific biases. Her work further explores what systems can do to help people make less biased decisions from data analysis, including fundamental advances in visual analytic techniques that balance human and computational responsibilities for data analysis.
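As a loose illustration of how interaction sequences might reveal bias, one simple signal is whether a user's interactions cover the dataset evenly across categories. The metric and names below are hypothetical stand-ins, not Wall's actual models, which are considerably richer.

```python
# Hypothetical sketch: flag potential bias by comparing the distribution
# of data points a user has interacted with against the full dataset.
# A category ratio far below 1.0 suggests the user is neglecting it.
from collections import Counter

def coverage_bias(dataset_labels, interacted_labels):
    """Per-category ratio of the user's interaction share to the
    dataset's share. 1.0 means attention proportional to the data."""
    total = Counter(dataset_labels)
    touched = Counter(interacted_labels)
    n_data, n_touch = len(dataset_labels), len(interacted_labels)
    return {
        cat: (touched[cat] / n_touch) / (total[cat] / n_data)
        for cat in total
    }

data = ["A"] * 50 + ["B"] * 50
clicks = ["A"] * 9 + ["B"]  # user almost exclusively examines category A
print(coverage_bias(data, clicks))  # category A oversampled, B neglected
```

A mixed-initiative system that adapts its model from these clicks would inherit the same skew, which is exactly the failure mode Wall's research targets.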




Ceara Byrne, Ph.D. student in computer science and co-advised by Melody Jackson and Thad Starner, has been a key contributor to the FIDO (Facilitating Interactions for Dogs with Occupations) project, a research effort focused on providing technology that enables working dogs to communicate. Byrne designed affordances for wearable dog vests that allow dogs to summon help in a medical emergency, support search-and-rescue work, and perform other service dog roles. She also developed a system that lets handlers communicate remotely with working dogs via vibrating motors on their vests. Byrne has been researching whether augmented dog toys, such as a ball with IMU and barometric sensors, can be used to predict whether young dogs will succeed in advanced service training. Her machine learning system predicted which dogs would fail advanced training with 88 percent accuracy. Releasing the likely-to-fail dogs early could save $5 million a year for the team's partner, Canine Companions for Independence, a charity that raises service dogs and provides them to individuals in need at no cost.


Eshwar Chandrasekharan, Ph.D. student in computer science and advised by Eric Gilbert, applies machine learning methods to combat abusive behavior in online communities. He has used this machinery both to build models for production systems and to study the effects of specific interventions on platforms. In recent work, he showed that models trained on other communities' data can predict what should be removed in a separate community. The intuition is that if a post on a particular website looks a lot like content seen on 4chan's /b/ (where almost anything is permissible), then moderators probably shouldn't allow such content on their sites. Chandrasekharan also led a study of the impact of Reddit's 2015 ban of various hate communities. Working with an expert in causal inference methods, he blended his expertise in NLP with those methods to determine that the ban mitigated the spread of objectionable behavior to other parts of the platform. Reddit has cited the study as one reason for its subsequent closures of other highly toxic subreddits.


Eric Corbett, Ph.D. student in digital media and advised by Christopher Le Dantec, conducts computing research in digital civics: the intersection of smart-city approaches to sensing and data analytics with digital democracy's efforts to broaden sustainable and accessible civic participation through computing and networked technologies. Using Atlanta as a testbed for deploying experimental systems, Corbett has begun to lay out a framework for thinking about how trust, in technology and in institutions, can be either an impediment to or a resource for design. His insights come from fieldwork with the City of Atlanta, where he studied the daily practices, expectations, and limits of how officials reach and work with city residents on everything from long-term planning and policy generation to the daily challenges of service delivery. From this fieldwork, Corbett has developed a theoretically and empirically grounded understanding of how trust is developed, maintained, and used to do the work of governance. His findings, in turn, have established a foundation for the experimental design work he is completing now, work that is anchoring scholars in other contexts who are wrestling with the same problem: how to build civic technology that is useful and usable for municipal officials as well as urban residents.


Ari Schlesinger, Ph.D. student in human-centered computing and co-advised by Beki Grinter and Keith Edwards, researches the barriers to diversity in computing, with a focus on open-source software development. Her work addresses a timely issue as companies face increasing scrutiny over the lack of diversity in their workforces, and over the products that result, which often exclude various groups. While much work has focused on increasing the workforce pipeline, Schlesinger focuses on retention, a problem given more visibility recently but one about which open questions remain, such as why women and minorities stop trying to contribute to or participate in computing. She has also examined some very sensitive topics in HCI research. In one example, she undertook a major review of CHI Conference publications to examine how the research community formulates and operationalizes constructs of race, gender, and ethnicity. Schlesinger argued that the HCI community at CHI, a major ACM conference, has some way to go in operationalizing people's identities in ways that are inclusive of all the constituencies technology can serve. In short, if HCI researchers are to advocate for building inclusive technologies, they must be thoughtful about including people in ways that reflect individual identities.


Brianna Tomlinson, Ph.D. student in human-centered computing and advised by Bruce Walker, uses sound as an alternative presentation method to convey information in engaging, interactive learning experiences. Recently, she has been studying how to design auditory interfaces and has created a rigorous means of evaluating them. With no existing standard approach or tools for assessing auditory interfaces, Tomlinson led an effort to develop and validate a quick and effective instrument ("BUZZ") that measures the performance, effectiveness, preferences, and aesthetic aspects of an auditory interface. The tool is now being used and further validated by researchers around the world. This drive for rigor has produced an important new tool with lasting impact on the field of auditory display design. The BUZZ scale complements Tomlinson's large body of research, which has included helping blind students in Kenya learn math and studying how to make accessible STEM tools for U.S. schools.



Emma Logevall with GVU Center Director Keith Edwards


Emma Logevall, MS-HCI student advised by Ellen Zegura, has been a core member of a team working on an Institutional Transformation award under NSF's Cultivating Cultures for Ethical STEM program. She partnered with Ph.D. student Daniel Schiff to tackle the qualitative component of the project, creating interview guides and designing a research study that would allow the team to understand the student experience in ways that go beyond quantitative survey instruments. Logevall invested time outside meetings to iterate on the research design with Schiff; they each conducted pilot interviews and used their collective experience to improve the design. From the 20-plus study interviews, they worked together to create a codebook and select coding software, and then each independently coded the interviews. The NSF work is highly interdisciplinary, drawing on ethics, philosophy, engineering and computing education, and professional development literature.




Alana Pendleton, MS-HCI student advised by Carrie Bruce, has a diverse portfolio of work to her name. Through Pursuant Health, she advocated for user-centered design in the development of the "Health Age" assessment and Diabetes Risk Test features in the company's interactive kiosks deployed at Walmart. She led her Research Methods course project team in exploring accessibility for online ordering. In her Master's Research Project, she is working on improving the experience of participants in the Georgia Women, Infants, and Children (WIC) Program, which provides nutritional assistance to low-income mothers. Pendleton has taken the lead in managing the relationship with the WIC Research Manager at the Georgia Department of Public Health (DPH), work that has included running approval and planning meetings and navigating the IRB processes for Georgia Tech and DPH. Her work shows a consistent focus on research methods and design characteristics that bring additional value to research and consumer systems.


Darsh Thakkar, MS-HCI student advised by John Stasko, has excelled in his data visualization work. One semester-long group visualization design project involved a dataset of more than 30 million checkouts from the Seattle Public Library. The team built a visualization system that allowed a person to browse library records and look for patterns over time, search the histories of specific books, and learn about topic- or theme-based trends. The project illustrated a creative approach to visualizing big data and making a topic of broad interest accessible to end users who wanted to find their own insights in the data. Starting this fall, Thakkar's MS-HCI team will gather data about past College of Computing graduate student internships, post-graduation jobs, and further graduate studies. The idea is to build a visual analytics system that exposes all of this data to help current students make better decisions about their future career paths. Georgia Tech students have an abundance of internship options, as Thakkar can attest from his recent UPS internship, and he hopes to help those who follow him more easily navigate the intern journey.


Yuhui Zhao, MS-HCI student co-advised by Gregory Abowd and Thad Starner, recently worked on BrainBraille, an effort to achieve world-record brain-computer interface (BCI) texting rates using non-invasive techniques that may be appropriate for locked-in patients (such as those with ALS or brain-stem stroke) to communicate. BrainBraille has been shown to work at speeds up to 8 wpm, similar to someone texting on a feature phone. With BrainBraille, users tense (or, in the case of ALS patients, attempt to tense) different regions of their body (hands, feet, butt, and tongue) to cause blood flow in six regions of the motor cortex, which represent the six dots of a Braille character. Using the Center for Advanced Brain Imaging's fMRI, Zhao identified up to nine regions that would be appropriate for BrainBraille. After testing these regions, he narrowed the analysis to the voxels that best distinguish the activations of the six strongest regions and collected several datasets. He created a novel recognition technique in which letters can be performed continuously every three seconds (4 wpm) and recognized with 95 percent accuracy without using a vocabulary or grammar (open vocabulary). By limiting the vocabulary and using a stochastic grammar, the system achieves 91 percent word accuracy at 8 wpm.
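The six-regions-to-six-dots encoding can be sketched as follows. This is an illustrative simplification: the real BrainBraille pipeline must classify noisy fMRI signals, whereas here each three-second window is assumed to be an already-binarized six-element activation vector, and the function names are hypothetical.

```python
# Illustrative sketch of the BrainBraille idea: each 3-second window
# yields a binary vector over six motor-cortex regions; the "on"
# regions correspond to the raised dots of a standard 6-dot Braille
# cell (dots numbered 1-6). Real BrainBraille decodes noisy fMRI data;
# here the activations are assumed already binarized.

# Standard Braille dot patterns for the letters a-j.
BRAILLE = {
    frozenset({1}): "a", frozenset({1, 2}): "b", frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d", frozenset({1, 5}): "e",
    frozenset({1, 2, 4}): "f", frozenset({1, 2, 4, 5}): "g",
    frozenset({1, 2, 5}): "h", frozenset({2, 4}): "i",
    frozenset({2, 4, 5}): "j",
}

def decode(windows):
    """Map each six-element 0/1 activation vector to a Braille letter."""
    letters = []
    for vec in windows:
        dots = frozenset(i + 1 for i, on in enumerate(vec) if on)
        letters.append(BRAILLE.get(dots, "?"))  # "?" = unrecognized cell
    return "".join(letters)

# Two windows: dots {1,2,5} -> "h", then dots {1,5} -> "e"
print(decode([[1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 0]]))  # -> "he"
```

The reported 95 percent open-vocabulary accuracy applies to recognizing these per-window patterns from fMRI; constraining output with a vocabulary and stochastic grammar is what lifts the system to 91 percent word accuracy at 8 wpm.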