GVU Research Showcase: Demos

Location: TSRB 222 People: Bruce Walker Brittany Noah, Thomas Gable
Automated safety systems, a first step toward autonomous vehicles, are already available in many commercial vehicles. These include adaptive cruise control, which slows the vehicle in response to traffic, and automatic lane keeping, which maintains position within a lane without driver intervention. To ensure that drivers use these systems properly, it is essential that they understand and appropriately trust the technology.

Lab: Sonification Lab

Location: TSRB 228 People: Mark Riedl Zhiyu Lin, Kyle Xiao
A framework to use machine learning techniques to generate rhythm action game stages from music

Lab: Entertainment Intelligence Lab

Location: TSRB 309 People: Brian Jones Kristin Hare, Jayanth Krihsna, Akhil Oswal, William Gao, Fengrui (ChenChen) Zou; Previous: Youssef Asaad, Alex Kim, Reema Upadhyaya
No matter our age, we have all likely forgotten to turn off the stove, oven, iron, heater, or even the water. Forgetfulness can lead to serious events that may result in costly damage to the home, or even injury or death. Older adults are more prone to such forgetfulness, and when an older adult forgets to turn off a hazardous appliance, it is often attributed to declining mental capacity and may lead to loss of self-confidence, embarrassment, and judgment from others.

Lab: Aware Home Research Initiative

Location: TSRB 228 People: Thomas Ploetz Shruthi Hiremath
Understanding Data Complexity in datasets collected and used in the wearable community

Lab: Ubiquitous Computing Group

Location: TSRB MS-HCI Lounge People: John Stasko Darsh Thakkar
Leveraging the publicly available Reddit API to extract data, applying relevant machine learning analysis, and feeding the results into a visual interface tool for exploration.
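As a rough sketch of such a pipeline (the keyword-counting analysis and all names below are illustrative assumptions, not the project's actual method), posts pulled from the Reddit API could be summarized into per-subreddit term frequencies for a visual interface to plot:

```python
from collections import Counter

# Hypothetical sketch: summarize Reddit posts (as returned by the public API)
# into per-subreddit keyword counts that a visualization could display.

STOPWORDS = {"the", "a", "an", "to", "of", "and", "in", "is", "for"}

def top_keywords(posts, k=3):
    """Return the k most common non-stopword title tokens per subreddit."""
    counts = {}
    for post in posts:
        bag = counts.setdefault(post["subreddit"], Counter())
        for token in post["title"].lower().split():
            if token.isalpha() and token not in STOPWORDS:
                bag[token] += 1
    return {sub: [w for w, _ in bag.most_common(k)] for sub, bag in counts.items()}

# Sample records standing in for API results:
posts = [
    {"subreddit": "python", "title": "How to learn python fast"},
    {"subreddit": "python", "title": "Best python web framework"},
    {"subreddit": "fitness", "title": "Running plan for beginners"},
]
print(top_keywords(posts))
```

In a real deployment the `posts` list would be filled from Reddit's listing endpoints, and the aggregated counts handed to the visual interface.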

Lab: Visual Analytics Lab

Location: TSRB GVU Cafe People: Miroslav Malesevic, Noah Posner Hank Duhaime, Hao Wu, Zhang Ziyin
CampusVR is a Virtual Reality sandbox. Its purpose is to help visualize spatial data and review the IMAGINE Lab's 3D assets, such as models of a campus and vegetation.

Lab: IMAGINE Lab

Location: TSRB 328 People: Neha Kumar Azra Ismail
There is a scarcity of trained healthcare professionals in India. Further, the current approach to healthcare delivery in India is symptom-driven and fails to address the underlying causes of disease which may be a result of the local socioeconomic, cultural, gender, environmental, or infrastructural situation. Chitra is a mobile platform that empowers community health workers to fill this gap between government healthcare delivery and patients' lived realities.

Lab: TanDEm

Location: TSRB 345 People: Lauren Wilcox, Rosa Arriaga Matthew Hong, Jung Wook Park
CO-OP is an interactive mHealth application that uses visual illustrations of everyday illness experiences to investigate how technology can support chronically ill patients' and family caregivers' collaborative efforts to track and co-create personally meaningful representations of everyday illness experiences in non-clinical settings. The system elicits and probes patients' and family caregivers' observations of illness experiences in relation to everyday activities, and their design input, through a suite of media technologies readily available on their mobile devices.

Lab: Health Experience and Applications Lab (Hx Lab)

Location: TSRB 113 People: Anne Sullivan Jordan Graves, Anna Malecki
Code Crafters is a project that investigates the connection between quilting and computational thinking, via design-based research to develop instructional workshops for an adult population of quilters.

Lab: StoryCraft Lab

Location: TSRB 309 People: Brian Jones, Beth Mynatt, Brad Fain, Sarah Farmer, Megan Denham, Jeremy Johnson William Gao, Cooper Link, Clayton Feustal
The Cognitive Empowerment Program seeks to empower fellows with MCI and their care partners and care providers. The home plays an important role in empowerment.

Lab: Aware Home Research Initiative

Location: TSRB 333 People:
The Convergence Innovation Competition (CIC) is a unique competition open to all Georgia Tech students and is run in both the Fall and Spring semesters. Each year the categories in the CIC are defined by our Industry partners who provide mentorship, judging, and category-specific resources which are often available exclusively to CIC competitors. While the competition is not tied to any specific course, competitors are often able to take advantage of class partnerships where lecture and lab content, guest lectures, and projects are aligned with competition categories.

Lab: Research Network Operations Center (RNOC)

Location: TSRB 243 People: Thad Starner Cheryl Wang, Kshitish Deo, Aditya Vishwanath
CopyCat and PopSign are two games that help deaf children and their parents acquire language skills in American Sign Language.  95% of deaf children are born to hearing parents, and most of those parents never learn enough sign language to teach their children.  As short-term memory skills are learned from acquiring a language, many deaf children enter school with the short-term memory of fewer than 3 items, much less than hearing children of hearing parents or Deaf children of Deaf parents.  Our systems address this problem directly.

Lab: Contextual Computing Group

Location: TSRB 225 People: Gregory Abowd Dingtian Zhang, Nivedita Arora, Jung Wook Park
COSMOS (COmputational Skins for Multi-functional Objects and Systems) is an interdisciplinary collaborative project to design, manufacture, fabricate, and apply "computational skins". COSMOS consist of dense, high-performance, seamlessly-networked, ambiently-powered computational nodes in the form of 2D flexible surfaces that can process, store, and communicate sensor data. Achieving this vision will redefine the basis of human-environment interactions by creating a world in which everyday objects and information technology become inextricably entangled.

Lab: Ubiquitous Computing Group

Location: TSRB 333 People: Bill Eason, Matt Sanders, Russ Clark

Lab: Research Network Operations Center (RNOC)

Location: TSRB 344 People: Elizabeth Mynatt, Craig Zimring, Jennifer Dubose, Brian Jones, Jeremy Johnson, Brad Fain and many terrific faculty colleagues Aparna Ramesh, Cooper Link, Judah Krug
A new transdisciplinary program led by Georgia Tech and Emory to create therapeutic programs, innovations in home and mobile technology, and transformative built environments to empower individuals with mild cognitive impairment and their informal care partners.

Lab: GVU Affiliate Projects (No Lab)

Location: TSRB HCI Lounge
Research Areas: Human-Computer Interaction
People: Dr. Wei Wang Manasee Narvilkar
What are the most relevant HCI principles to design a context aware HMI for passengers in a self-driving vehicle?

Lab: DesigNext Lab

Location: TSRB 243 People: Melody Jackson and Thad Starner Larry Freil, Ceara Byrne
Detecting EEG in the ear

Lab: BrainLab

Location: TSRB 338 People: Amy Bruckman Shagun Jhaver
Using a sample of 32 million Reddit posts, we characterize the removal explanations that are provided to Redditors, and link them to measures of subsequent user behaviors—including future post submissions and future post removals.

Lab: Electronic Learning Communities

Location: TSRB 235A People: Gregory Abowd, Thomas Ploetz Mehrab Bin Morshed
We propose real-time eating detection using a commodity smartwatch and use it to assess the well-being of Georgia Tech students.

Lab: Ubiquitous Computing Group

Location: TSRB 222 People: Bruce Walker Stanley J. Cantrell, Mike Winters
Communication is complicated. Face-to-face communication, which many would consider the simplest form of communication, becomes a challenge when you consider factors such as differences in language and culture, the use of body language, tone of voice, and so on. The absence of these cues makes text-based communication even more difficult. This project seeks to address these issues through the research and design of communication systems and tools that allow users to convey such information effectively.

Lab: Sonification Lab

Location: TSRB 228 People: Mark Riedl Upol Ehsan, Shukan Shah, Pradyumna Tambwekar,
Building on our prior work on AI agents that think out loud in plain English, we are taking the next step. You will not only see Frogger think out loud in plain English, you will also be able to visually connect which parts of his language correspond to which parts of the game. For example, the statement "I am trying to avoid the red truck to the left" will be accompanied by a visual indication of the red truck. While language is instrumental in making black-boxed AI systems explainable to lay users, adding a layer of visual correlation makes our approach even more powerful.

Lab: Entertainment Intelligence Lab

Location: TSRB 243 People: Melody Jackson, Thad Starner, Clint Zeagler, Scott Gilliland Giancarlo Valentin, Larry Freil, Ceara Byrne
The FIDO Sensors team is creating wearable technology to allow working dogs to communicate. Assistance dogs can tell their owners with hearing impairments what sounds they have heard; guide dogs can tell their owners if there is something in their path that must be avoided. We will be demonstrating a variety of wearable sensors designed for dogs to activate.

Lab: Animal-Computer Interaction Lab

Location: TSRB GVU Café People: Noah Posner, HyunJoo Oh Himani Deshpande, Akash Talyan
This study presents a set of fabrication techniques for upcycling HDPE (High Density PolyEthylene) plastic bags. It enables not only recycling abandoned plastic bags but also creating 3D objects by folding and joining the newly fused plastic sheet.

Lab: Interactive Product Design Lab

Location: TSRB 309 People: Brian Jones, David Byrd Akhil Oswal, Youssef Asaad
Multiple studies have shown a consistently strong association between gait speed of frail older adults and negative functional (e.g., survival) and activity outcomes. However, health care professionals have been slow to measure this physiologic parameter, largely due to the lack of a simple, standardized way of measuring it.

Lab: Aware Home Research Initiative

Location: TSRB 328 People: Neha Kumar Josiah Mangiameli, Maya Holikatti
Researching how women in Delhi deal with menstrual health outside of their home with the goal of designing an application to aid them in categorizing and finding bathrooms and other safe spaces to meet their needs.

Lab: TanDEm

Location: TSRB GVU Café People: Sang-won Leigh Sang-won Leigh
Guitar Machine is a guitar loaded with a variety of robotic components. It negotiates between conventional ways of playing music with electromechanical extensions of the fingertips, either driven by software or by a musician directly.

Lab: Interactive Product Design Lab

Location: TSRB Prototyping Lab People: Tim Trent Astra Zhang, Shelby Reilly, Minje Park
Tours will be held once every hour (starting at 5 minutes past the hour)! Come see the tools that we use to create one-of-a-kind research prototypes. We have everything from laser cutters and 3D printers to table saws and soldering irons, and we use them to create many of the custom electronics, cases, and wearable prototypes you see in our demos. Stop by the elevator to the basement or ask the folks at the registration desk if you need help finding us! For more information please visit the GVU Prototyping Lab Website

Lab: GVU Prototyping Lab

Location: TSRB 235A People: Thomas Ploetz, Gregory Abowd Hyeokhyen Kwon, Harish Haresamudram
Activity and Gesture Recognition for Mobile and Wearable Computing

Lab: Ubiquitous Computing Group

Location: TSRB MS-HCI Lounge People: Wei Wang Zhao Yu
In the context of highly autonomous vehicles (HAVs), machines make most of the driving decisions. Although this frees passengers to enjoy the trip, unexplained driving decisions and possible emergencies often lead to passenger anxiety and distrust. There is also no suitable office platform in the car: different road conditions, such as bumps, sharp turns, and sudden brakes, can cause motion sickness for passengers working in the car. In this project, we have explored the limited multimodal interactions to maintain drivers' attention. We propose the haptic interaction

Lab: DesigNext Lab

Location: TSRB 328 People: Neha Kumar Karthik Bhat
The goal of this research is to investigate the role that intelligent agents might play in facilitating patient-doctor interactions, ensuring that patients are empowered to learn about and manage their health, that doctors are not overburdened, and that the communication and coordination between the two remains effective and efficient in the short and long term.

Lab: TanDEm

Location: TSRB 209 People: Dr. Anne Pollock, Dr. Nassim Parvin, and Dr. Lewis Wheaton Christina Bui, Thanawit Prasongpongchai, Aditya Anupam, Charles Denton, Shubhangi Gupta, Olivia Cox
Heart Sense takes biometric data from participants and produces captivating visualizations as their bodies react to visual stimuli.

Lab: Design and Social Interaction Studio

Location: TSRB 235A People: Gregory Abowd, Thomas Ploetz Mehrab Bin Morshed, Pallavi Chetia, Preston Choe
Food journaling refers to logging food intake with specific details such as calorie count and food ingredients, among other things. Such practice often relies on self-reports, which are prone to recall bias. Even when logging at the time of food intake, a user might not, depending on the context, be well-informed enough to record every detail a food journaling application asks for. The goal of this project is to improve the user experience of food journaling by minimizing the response burden and utilizing the context of users.

Lab: Ubiquitous Computing Group

Location: TSRB 228
Research Areas: Social Computing
People: Munmun De Choudhury Sindhu Ernala
Improving the well-being of people with mental illness requires not only clinical treatment but also social support. This research examines how major life transitions around mental illnesses are exhibited on social media and how social and clinical care intersect around these transitionary periods.

Lab: Social Dynamics and Wellbeing Lab

Location: TSRB 209 People: Yanni Loukissas Muniba Kahn, Kaci Kluesner, Meghan Kulkarni, Jude Mwenda, Chris Polack, Annabel Rothschild
The goal of the Atlanta Map Room is to document and reflect upon the connections and disjunctions between civic data and lived experience in the city, through the collaborative creation of large-scale, interpretive maps.

Lab: Local Data Design Lab

Location: TSRB 325 People: Brian Magerko Duri Long, Swar Gujrania, Lucas Liu, Cassandra Naomi, Meha Kumar, Jonathan Moon
LuminAI is an interactive art installation that explores the improvisation of proto-narrative movement between humans and virtual AI agents using full body, expressive, movement-based interaction. Interactors can co-create movement with an autonomous virtual agent that learns movement, response, and improvisation directly from interacting with human teachers. It analyses their movement using Viewpoints movement theory.

Lab: Expressive Machinery Lab (formerly ADAM Lab)

Location: TSRB 344 People: Beth Mynatt Maia Jacobs, Rachel Feinberg
We design, deploy, and evaluate mobile health tools that support and meet patients' needs over time, from diagnosis of a chronic disease, through treatment, and into survivorship. Our research explores the ability of personalized, adaptable, mobile tools to support patients over the course of their individual breast cancer journeys.

Lab: Everyday Computing Lab

Location: TSRB 225 People: Gregory Abowd, Thad Starner, Bernard Kippelen Dingtian Zhang, Jung Wook Park, Nivedita Arora, Yuhui Zhao, Yunzhi Li, Diana Wang, Tanvi Bhagwat
OptoSense is a ubiquitous Imaging Surface which conforms to everyday objects, harvests energy from ambient light, and senses a variety of human activities without compromising privacy. By leveraging organic semiconductor (OSC) optoelectronics with thin & flexible form factor, large-area compatibility, and highly customizable characteristics, we aim to develop distributed imaging technologies for human activity sensing which truly “weaves” into fabrics of everyday life.

Lab: Ubiquitous Computing Group

Location: TSRB 234c People: Thomas Ploetz, Irfan Essa Dan Scarafoni
In collaborative human-robot assembly tasks, the robot will need to identify important elements of a scene (humans, objects) and understand their behavior and interaction. Estimated skeleton and object interaction information are often used for video-based human activity recognition, but most research focuses on depth sensors. Standard RGB cameras and videos are far more common in the world, but the unreliability of pose and object estimation hinders their adoption in this domain. We present novel techniques for dealing with such unreliability to aid the adoption of these methods for RGB sensors.

Lab: Ubiquitous Computing Group

Location: TSRB 222 People: Bruce Walker Jonathan Schuett, Brianna Tomlinson, Jared Batterman, Mike Winters, Zachary Kondak, Henry Wang, Prakriti Kaini, TJ Funso. In collaboration with: PhET Interactive Simulations project from University of
The graphs and figures that are so prevalent in math and science education make those topics largely inaccessible to blind students. We are working on auditory graphs that can represent equations and data to those who cannot see a visual graph. New areas we are starting to research include teaching astronomy concepts through sound (like the Solar System) and teaching and understanding weather information through a combination of sonification and auditory description.
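As a rough illustration of the core idea behind an auditory graph (the mapping below is hypothetical, not the lab's actual design), data values can be mapped linearly onto a pitch range so that rising data is heard as rising frequency:

```python
# Hypothetical auditory-graph sketch: map each y-value of a data series
# linearly onto a frequency range, producing a pitch contour that a
# synthesizer could then render as a tone sweep.

def to_frequencies(values, f_low=220.0, f_high=880.0):
    """Linearly map data values onto [f_low, f_high] Hz."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid divide-by-zero for constant data
    return [f_low + (v - lo) / span * (f_high - f_low) for v in values]

# y = x^2 sampled at x = 0..4: the parabola becomes an accelerating pitch rise
ys = [x * x for x in range(5)]
print(to_frequencies(ys))
```

A real auditory graph would also encode axis context (tick marks, reference tones) and handle negative values, but the pitch mapping above is the essential step.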

Lab: Sonification Lab

Location: TSRB 243
Research Areas: Educational Technologies, Gaming
People: Thad Starner Bianca Copello, Domino Weir, Ellie Goebel, Cheryl Wang
CopyCat and PopSign are two games that help deaf children and their parents acquire language skills in American Sign Language. 95% of deaf children are born to hearing parents, and most of those parents never learn enough sign language to teach their children. As short-term memory skills are learned from acquiring a language, many deaf children enter school with a short-term memory of fewer than 3 items, much less than hearing children of hearing parents or Deaf children of Deaf parents. Our systems address this problem directly.

Lab: Contextual Computing Group

Location: TSRB 235 People: Gregory Abowd, Munmun De Choudhury, Lauren Wilcox, Kaya De Barbaro
College students encounter many challenges in the pursuit of their educational goals. When these challenges are prolonged, they can have drastic consequences on health and on personal, social, and academic life. Our multi-institution project, called CampusLife, conceptualizes the student body as a quantified community to quantify, assess, infer, and understand factors that impact well-being.

Lab: Ubiquitous Computing Group

Location: TSRB 243 People: Melody Moore Jackson, Thad Starner Ceara Byrne
Instrumented Dog Toys

Lab: Animal-Computer Interaction Lab

Location: TSRB 335 People: Alex Endert Subhajit Das
Visual analytics (VA) systems with semantic interaction help users craft machine learning (ML) based solutions in various domains such as bioinformatics, finance, and sports. However, current semantic-interaction approaches are data- and task-specific, and might not generalize across different problem scenarios. In this project, we describe a novel technique for abstracting user intents and goals in the form of an interactive objective function that can guide any auto-ML model optimizer (such as Hyperopt or SigOpt) to construct classification models catering to those expectations.
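A minimal sketch of what an interactive objective function might look like (the criteria, names, and weights below are hypothetical, not the project's actual formulation): a scalar loss that blends model accuracy with user-adjustable preferences, which is the kind of function an optimizer such as Hyperopt minimizes.

```python
# Hypothetical "interactive objective": user intent is captured as weights
# over criteria, and the resulting scalar loss steers a model search.

def make_objective(weights):
    """Build a loss from user intent; weights maps criterion name -> importance."""
    def objective(candidate):
        # candidate: measured properties of a trained candidate model
        loss = weights.get("accuracy", 1.0) * (1.0 - candidate["accuracy"])
        loss += weights.get("complexity", 0.0) * candidate["n_features"] / 100.0
        loss += weights.get("train_time", 0.0) * candidate["seconds"] / 60.0
        return loss
    return objective

# A user who cares only about accuracy vs. one who also prefers simple models:
acc_only = make_objective({"accuracy": 1.0})
simple = make_objective({"accuracy": 1.0, "complexity": 1.0})

big = {"accuracy": 0.95, "n_features": 80, "seconds": 120}
small = {"accuracy": 0.93, "n_features": 10, "seconds": 15}
print(acc_only(big) < acc_only(small))  # accuracy-only search prefers the big model
print(simple(big) > simple(small))      # complexity-aware search prefers the small one
```

Adjusting the weights interactively changes which candidate minimizes the loss, which is how user intent can redirect an automated model search.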

Lab: Visual Analytics Lab

Location: TSRB 345 People: Lauren Wilcox Matthew Hong, Clayton Feustel, Chaitanya Bapat, Serena Tan
Diagnostic radiology reports are increasingly being made available to patients and their family members. However, these reports are not typically comprehensible to lay recipients, impeding effective communication about report findings. Rapport is a prototype system that aims to facilitate communication about radiology imaging findings among pediatric patients, their family members, and clinicians in the clinical setting.

Lab: Health Experience and Applications Lab (Hx Lab)

Location: TSRB 309 People: Maribeth Gandy, Peter Presti, Scott Robertson, Clint Zeagler, Brian Jones, Jeff Wilson, Jeremy Johnson, Laura Levy Aditya Kundu
We will be showcasing a variety of projects that highlight our applied research and development at the intersections of wearable computing, machine learning, smart textiles, internet of things, virtual/augmented reality, health and wellness, games with a purpose, educational technologies, assistive technology, the future of work, the smart city and smart home, and the arts.

Lab: Interactive Media Technology Center (IMTC)

Location: TSRB 333 People: Kim Cobb, Russ Clark, Tim Cone, Emanuele Di Lorenzo, David Frost, Jayma Koval, Kyungmin Park Lalith Polepeddi
Georgia Tech scientists and engineers are working together to install a network of internet-enabled sea level sensors across Chatham County.

Lab: Research Network Operations Center (RNOC)

Location: TSRB 316A People: Christopher Le Dantec Alexandra Nguyen
A data visualization tool to identify stressful factors for cyclists, helping city planners make better decisions about cycling infrastructure in Atlanta.

Lab: Prototyping eNarrative Lab

Location: TSRB 225 People: Gregory Abowd Jung Wook Park, Dingtian Zhang, Sienna Sun
We propose self-sustainable, intelligent sensor systems that can be easily retrofitted onto current vehicles.

Lab: Ubiquitous Computing Group

Location: TSRB 309 People: Jon A. Sanford; Brian Jones; Peter Presti; Brad Fain, Su Jin Lee, Harshal Mahajan Prasanna Natarajan, Shambhavi Mahajan
The needs and abilities of people who are aging with progressive chronic conditions, such as MS, Parkinson's, ALS and Arthritis fluctuate from day to day. Yet, even when they have supportive AT, such as grab bars, to compensate for functional limitations, those features are fixed, only able to support some abilities, some of the time. The purpose of this project is to develop a SmartBathroom environment capable of assessing an individual's abilities at any point in time and spontaneously adjusting supportive environmental features to accommodate those abilities.

Lab: Aware Home Research Initiative

Location: TSRB 228 People: Munmun De Choudhury, Gregory Abowd Dong Whi Yoo, Sindhu Ernala, Bahador Saket, Kelsie Belan
Our goal is to learn whether social media analysis can support mental health clinicians to assess their patients. We developed a medium fidelity prototype of a social media augmented assessment tool. We are conducting user studies with clinicians in which they explore the tool and provide their own opinions, such as whether they could see medical value in it, whether they could understand the system without a problem, and whether they would like to incorporate the system into their work practices.

Lab: Social Dynamics and Wellbeing Lab

Location: TSRB 222
Research Areas:
People: Bruce Walker Brianna Tomlinson, Mike Winters, Chris Latina, Smruthi Bhat, Milap Rane
Students in the Sonification Lab and Center for Music Technology designed Solar System Sonification, an auditory experience of the planets. Using non-speech audio to convey information, they built a musical model of the solar system. Planetariums typically rely on visuals with various levels of speech description, but have not explored using auditory cues to present information about space. Auditory displays, like the ones developed for Solar System Sonification, enable more immersive experiences and make information accessible to people with visual impairments.

Lab: Sonification Lab

Location: TSRB 243 People: Thad Starner Himanshu Sahni, Abdelkareem Bedri, Gabriel Reyes, Pavleen Thukral, Zehua Guo
In this study, we address the problem of performing continuous speech recognition where audio is not available (e.g. due to a medical condition) or is highly noisy (e.g. during fighting or combat). Our Tongue Magnet Interface (TMI) uses 3-axis magnetometers to measure the movement of a small magnet glued to the user's tongue. Tongue movement corresponding to speech is isolated from the continuous data by comparing the variance of a sliding window of data to the variance of signal corresponding to silence. Recognition relied on hidden Markov model (HMM) based techniques.
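The sliding-window variance comparison described above can be sketched as follows (the window size, threshold ratio, and sample values are illustrative assumptions, not the project's actual parameters):

```python
# Hypothetical sketch of variance-based segmentation: flag windows whose
# variance greatly exceeds the variance of a known-silence recording.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def active_windows(signal, silence, win=4, ratio=4.0):
    """Return start indices of windows whose variance exceeds ratio * silence variance."""
    baseline = variance(silence)
    active = []
    for start in range(0, len(signal) - win + 1, win):
        if variance(signal[start:start + win]) > ratio * baseline:
            active.append(start)
    return active

silence = [0.0, 0.1, -0.1, 0.05, -0.05, 0.0]  # sensor at rest
signal = [0.0, 0.1, 0.0, -0.1, 3.0, -2.0, 2.5, -3.0, 0.1, 0.0, -0.1, 0.05]
print(active_windows(signal, silence))  # only the high-movement window is flagged
```

Only the segments flagged this way would be passed on to the HMM-based recognizer.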

Lab: Contextual Computing Group

Location: TSRB 325
Research Areas: Music Technology
People: Brian Magerko Richard Savery, Duri Long, Nick Sinclair
Sound Happening is a collaborative music-making installation that allows several people to explore and create music in a space by playing with colorful bouncy balls. Using a webcam and Max/MSP, Sound Happening tracks each ball's location relative to the space to manipulate and trigger various samples, resulting in intriguing sound combinations that are constantly changing as the balls move.

Lab: Expressive Machinery Lab (formerly ADAM Lab)

Location: TSRB 243 People: Thad Starner Shawn Wu, Malcolm Haynes
Warehouses throughout the world distribute approximately $1 trillion in goods per year from nearly a million facilities. Order picking, the process of collecting items from inventory and sorting them into orders for distribution, is one of the main activities performed in warehouses and accounts for about 60% of their total operational costs. Most orders are still picked by hand, often using paper pick lists.

Lab: Contextual Computing Group

Location: TSRB 246
Research Areas: Human-Computer Interaction
People: Sauvik Das, Gregory Abowd Youngwook Do, Linh Hoang
We designed and implemented a novel smartwatch wristband, Spidey Sense, that can produce expressive and repeatable squeezing sensations, and we use it to explore the design space of squeezing patterns.

Lab: SPUD Lab

Location: TSRB GVU Café People: Zhong Lin Wang, Gregory Abowd, HyunJoo Oh Chris Chen, David Howard, Steven Zhang, Youngwook Do, Sienna Sun, Tingyu Cheng
Self-powered Paper Interfaces (SPIN) combine folded paper creases with triboelectric nanogenerators (TENGs). By embedding TENGs into paper creases, we developed a design editor and a set of fabrication techniques to create paper-based interfaces that power sensors and actuators.

Lab: Interactive Product Design Lab

Location: TSRB 225 People: Gregory Abowd, Thad Starner Nivedita Arora, Tingyu Cheng, Sarthak Srinivas, Chad Ramey
Self-sustainable Water Leak Detection System for Buildings

Lab: Ubiquitous Computing Group

Location: TSRB 228 People: Munmun De Choudhury Koustuv Saha, Vedant Das Swain
Multimodal Sensing to Model Individual Differences and Job Performance at Workplaces

Lab: Social Dynamics and Wellbeing Lab

Location: TSRB 309 People: Brian D. Jones Undergraduate: William Gao, Cooper Link, Ben Flamm
The Aware Home, an Institute for People and Technology (IPaT) Living Lab, provides an authentic home environment in which to conduct research in the areas of: health and well-being, connected home, home security and resource management, and the future of digital media and entertainment at home. Common uses of the home include: 1. innovating the next home technology, 2. performing human subject studies of our research in a controlled environment, 3. testing installation of solutions before deploying into people's actual homes, such as those individuals enrolled in HomeLab.

Lab: Aware Home Research Initiative

Location: TSRB 228 People: Jacob Eisenstein Ian Stewart
People adopt ideas and customs from other cultures as a result of widespread online communication. One result of intercultural exchange is the adoption of loanwords: for example, the Spanish word "tuitear" comes from the English word "tweet", used as a verb on Twitter. Loanwords are not merely adopted but often integrated into the target language, as when the English word "tweet" gained Spanish morphology when it became the verb "tuitear". This project examines the relationship between loanword integration and social attitudes to gain insight into the process of cultural exchange online.

Lab: GVU Affiliate Projects (No Lab)

Location: TSRB 344
Research Areas: Social Computing
People: Elizabeth Mynatt Jessica Pater
This project aims to define the concept of digital self-harm for the HCI community. We have explored the limited HCI scholarship related to self-harm within a social computing context. We offer the community an operationalized definition of digital self-harm and propose a theoretical base to orient related research questions into actionable activities. We also describe a research agenda for digital self-harm, highlighting how the HCI community can contribute to understanding and designing technologies for self-harm prevention, mitigation, and treatment.

Lab: Everyday Computing Lab

Location: TSRB 235A People: Gregory D. Abowd, Sauvik Das, Munmun De Choudhury, Thomas Ploetz Mehrab Bin Morshed, Koustuv Saha
In the last few years, there has been tremendous growth in the prevalence and widespread use of smart and ubiquitous technologies, including smartphones, smartwatches and wearables, and smart devices. While these devices have a variety of benefits, they come at the cost of using our data for a number of purposes that are not transparent to an average user. This project aims to understand how people perceive privacy concerns around their health data being collected and used by third-party entities.

Lab: Ubiquitous Computing Group

Location: TSRB 229 People: Ashok Goel, Robert Bates, Spencer Rugaber Akshay Agarwal, Christopher Cassion, Taylor Hartman, Animesh Mehta, Abbinayaa Subrahmanian
Protecting the environment is among the biggest challenges facing our society, and big data is an essential element of addressing it. The Encyclopedia of Life (EOL) is the world's largest database of biological species and other biodiversity information, and it works closely with scores of other biodiversity datasets such as BISON, GBIF, and OBIS. We seek to make EOL and related biodiversity data sources accessible, usable, and useful by integrating extant AI tools for information extraction, modeling and simulation, and question answering; we call the resulting system EOL+.

Lab: Design & Intelligence Laboratory

Location: TSRB 229 People: Ashok Goel, David Joyner, Spencer Rugaber Ida Camacho, Marissa Gonzales, Eric Gregori
It has been said that Jill Watson is the most famous teaching assistant in the world. Jill's origins are actually quite humble. She was conceived in summer 2015 to help Georgia Tech's Online MS in CS program (OMSCS), and specifically the online course on knowledge-based artificial intelligence (KBAI) that is part of the program (http://www.omscs.gatech.edu/cs-7637-knowledge-based-artificial-intellige...). Jill had a very difficult birth in fall 2015, but she was quite precocious almost from the beginning.

Lab: Design & Intelligence Laboratory

Location: TSRB VA Lab People: Alex Endert Meghan Galanif
Improving the usability of a vaccine data visualization, including ways to visualize uncertainty.

Lab: Visual Analytics Lab

Location: TSRB 233
Research Areas: Augmented Reality
People: Jay Bolter, Blair MacIntyre Colin Freeman, John Womack
The AEL has spent ten years preparing for and pioneering augmented reality on the web with our custom browser, Argon. Drawing on this background, we are now transitioning to open-source technologies such as A-Frame and Mozilla's WebXR Viewer browser.

Lab: Augmented Environments Lab

Location: TSRB 228 People: Mark Riedl Upol Ehsan, Pradyumna Tambwekar, Sruthi Sudhakar
Our AI agent (Frogger) explains its decisions in plain English. Why do we need that? As Artificial Intelligence (AI) becomes ubiquitous in our lives, there is a greater need for AI systems to be explainable, especially to end users who need not be AI experts. Come by to see how our system produces plausible rationales, and learn how human perceptions of these rationales affect user acceptance and trust in AI systems.

Lab: Entertainment Intelligence Lab

Location: TSRB 243 People: Thad Starner, Scott Gilliland Chad Ramey
Working with Dr. Denise Herzing of the Wild Dolphin Project, we are creating wearable computers for conducting two-way communication experiments with cetaceans. With CHAT, one researcher uses the waterproof system to broadcast a sound associated with an object that dolphins like to play with. A second researcher, upon detecting the sound, passes the object to the first. The researchers pass objects back and forth, further associating the sound with the object. The goal is to see if the dolphins mimic the sound in order to "ask" for the play object.

Lab: Contextual Computing Group

Location: TSRB 322
Research Areas: Virtual Reality
People: Janet Murray Sammi Hudock
A virtual reality experience set in a dystopian world. With Love, Thunderbird is the story of a woman nicknamed Thunderbird, of Peter, and of the struggles they face living in a society ruined by an unsuccessful coup and a string of useless ‘presidents’ whose only accomplishment was driving up inflation and unemployment. You play as Peter, navigating this new world with your small robotic bird guide and one simple goal: get medicine for Thunderbird.

Lab: Prototyping eNarrative Lab

Location: TSRB 225 People: Gregory Abowd, Thad Starner Nivedita Arora, Diego Osorio, Qiuyue Xue, Michelle Ma, Peter McAughan, Dhruva Bansal
ZEUSSS (Zero Energy Ubiquitous Sound Sensing Surface) allows physical objects and surfaces to be instrumented with a thin, self-sustaining material, enabling applications such as interactive walls, localization of sound sources and people, audio-based surveillance, contextualization, and safer authentication services.

Lab: Ubiquitous Computing Group