GVU Center Research Showcase | Fall 2019 Preview

Free and open to the public, Oct. 30, 2019, 2-5 p.m., Technology Square Research Building

 

▼ Preview of select demos by research area. See the full schedule and plan your visit with the GVU Showcase Web App.

 

Accessibility


Sonified Social Media Data
Sonification Lab | Room:  222
Bruce Walker, Stanley J. Cantrell, Mike Winters
Communication is complicated. Face-to-face communication, which many would consider the simplest form, becomes a challenge once you account for differences in language and culture, body language, and tone of voice. These same factors make text-based communication even more difficult. This project addresses these issues through the research and design of communication systems and tools that let users convey such information gracefully and effectively.

 

Artificial Intelligence


What do humans think of AI agents that speak in plain English?
Entertainment Intelligence Lab | Room:  228
Mark Riedl, Brent Harrison, Upol Ehsan, Pradyumna Tambwekar, Sruthi Sudhakar
Our AI agent (Frogger) explains its decisions in plain English. Why do we need that? As artificial intelligence (AI) becomes ubiquitous in our lives, there is a growing need for AI systems to be explainable, especially to end users who need not be AI experts. Come by to see how our system produces plausible rationales, and learn how human perceptions of those rationales affect user acceptance of and trust in AI systems.


AI learns to generate rhythm action game stages
Entertainment Intelligence Lab | Room:  228
Mark Riedl, Zhiyu Lin, Kyle Xiao
A framework that uses machine learning techniques to generate rhythm action game stages from music.


QUESTO: Interactive Objective Functions for Model Selection
Visual Analytics Lab | Room:  335
Alex Endert, Subhajit Das
Visual analytics (VA) systems with semantic interaction help users craft machine learning (ML) based solutions in domains such as bioinformatics, finance, and sports. However, current semantic-interaction approaches are data- and task-specific and might not generalize across problem scenarios. In this project, we describe a novel technique for abstracting user intents and goals into an interactive objective function that can guide any auto-ML model optimizer (such as Hyperopt or SigOpt) to construct classification models catering to the expectations of the user.
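
To make the idea more concrete, here is a minimal, hypothetical sketch (not QUESTO's actual implementation) of a user-weighted objective function driving a Hyperopt search; the metric weights, dataset, and model choice are all illustrative assumptions.

```python
# Hypothetical sketch: a user-weighted objective steering Hyperopt's search.
from hyperopt import fmin, tpe, hp, STATUS_OK
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Weights a user might set interactively (assumed; QUESTO derives these
# from interactions with the visual interface rather than hard-coding them).
user_weights = {"accuracy": 0.6, "recall": 0.4}

def interactive_objective(params):
    # Train a candidate model with the sampled hyperparameters.
    model = RandomForestClassifier(
        n_estimators=int(params["n_estimators"]),
        max_depth=int(params["max_depth"]),
        random_state=0,
    ).fit(X_tr, y_tr)
    preds = model.predict(X_te)
    # Blend per-objective scores with the user's weights; Hyperopt minimizes,
    # so return the negated combined score as the loss.
    score = (user_weights["accuracy"] * accuracy_score(y_te, preds)
             + user_weights["recall"] * recall_score(y_te, preds))
    return {"loss": -score, "status": STATUS_OK}

space = {
    "n_estimators": hp.quniform("n_estimators", 50, 300, 25),
    "max_depth": hp.quniform("max_depth", 2, 12, 1),
}
best = fmin(interactive_objective, space, algo=tpe.suggest, max_evals=25)
print(best)
```

In the project described above, the objective is assembled from the user's interactively expressed intents and goals rather than fixed weights as in this sketch.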

 

Civic Computing


Sea Level Sensors
Research Network Operations Center (RNOC) | Room:  333
Kim Cobb, Russ Clark, Tim Cone, Emanuele Di Lorenzo, David Frost, Jayma Koval, Kyungmin Park, Lalith Polepeddi
Georgia Tech scientists and engineers are working together to install a network of internet-enabled sea level sensors across Chatham County.

 

Cognitive Science


What do humans think of AI agents that speak in plain English?
Entertainment Intelligence Lab | Room:  228
Mark Riedl, Brent Harrison, Upol Ehsan, Pradyumna Tambwekar, Sruthi Sudhakar
Our AI agent (Frogger) explains its decisions in plain English. Why do we need that? As artificial intelligence (AI) becomes ubiquitous in our lives, there is a growing need for AI systems to be explainable, especially to end users who need not be AI experts. Come by to see how our system produces plausible rationales, and learn how human perceptions of those rationales affect user acceptance of and trust in AI systems.

 

Collaborative Work


Sea Level Sensors
Research Network Operations Center (RNOC) | Room:  333
Kim Cobb, Russ Clark, Tim Cone, Emanuele Di Lorenzo, David Frost, Jayma Koval, Kyungmin Park, Lalith Polepeddi
Georgia Tech scientists and engineers are working together to install a network of internet-enabled sea level sensors across Chatham County.

 

Educational Technologies


PopSign: Teaching American Sign Language Using Mobile Games
Contextual Computing Group | Room:  243
Thad Starner, Bianca Copello, Domino Weir, Ellie Goebel, Cheryl Wang
CopyCat and PopSign are two games that help deaf children and their parents acquire language skills in American Sign Language. Ninety-five percent of deaf children are born to hearing parents, and most of those parents never learn enough sign language to teach their children. Because short-term memory skills develop through language acquisition, many deaf children enter school with a short-term memory span of fewer than three items, far lower than that of hearing children of hearing parents or Deaf children of Deaf parents. Our systems address this problem directly.


Fused HDPE Upcycling
Interactive Product Design Lab | Room:  GVU Café
Noah Posner, HyunJoo Oh, Himani Deshpande, Akash Talyan
This study presents a set of fabrication techniques for upcycling HDPE (high-density polyethylene) plastic bags. It enables not only recycling discarded plastic bags but also creating 3D objects by folding and joining the newly fused plastic sheets.

 

Ethics


What do humans think of AI agents that speak in plain English?
Entertainment Intelligence Lab | Room:  228
Mark Riedl, Brent Harrison, Upol Ehsan, Pradyumna Tambwekar, Sruthi Sudhakar
Our AI agent (Frogger) explains its decisions in plain English. Why do we need that? As artificial intelligence (AI) becomes ubiquitous in our lives, there is a growing need for AI systems to be explainable, especially to end users who need not be AI experts. Come by to see how our system produces plausible rationales, and learn how human perceptions of those rationales affect user acceptance of and trust in AI systems.

 

Gaming


AI learns to generate rhythm action game stages
Entertainment Intelligence Lab | Room:  228
Mark Riedl, Zhiyu Lin, Kyle Xiao
A framework that uses machine learning techniques to generate rhythm action game stages from music.


PopSign: Teaching American Sign Language Using Mobile Games
Contextual Computing Group | Room:  243
Thad Starner, Bianca Copello, Domino Weir, Ellie Goebel, Cheryl Wang
CopyCat and PopSign are two games that help deaf children and their parents acquire language skills in American Sign Language. Ninety-five percent of deaf children are born to hearing parents, and most of those parents never learn enough sign language to teach their children. Because short-term memory skills develop through language acquisition, many deaf children enter school with a short-term memory span of fewer than three items, far lower than that of hearing children of hearing parents or Deaf children of Deaf parents. Our systems address this problem directly.

 

Human-Computer Interaction


What do humans think of AI agents that speak in plain English?
Entertainment Intelligence Lab | Room:  228
Mark Riedl, Brent Harrison, Upol Ehsan, Pradyumna Tambwekar, Sruthi Sudhakar
Our AI agent (Frogger) explains its decisions in plain English. Why do we need that? As artificial intelligence (AI) becomes ubiquitous in our lives, there is a growing need for AI systems to be explainable, especially to end users who need not be AI experts. Come by to see how our system produces plausible rationales, and learn how human perceptions of those rationales affect user acceptance of and trust in AI systems.


QUESTO: Interactive Objective Functions for Model Selection
Visual Analytics Lab | Room:  335
Alex Endert, Subhajit Das
Visual analytics (VA) systems with semantic interaction help users craft machine learning (ML) based solutions in domains such as bioinformatics, finance, and sports. However, current semantic-interaction approaches are data- and task-specific and might not generalize across problem scenarios. In this project, we describe a novel technique for abstracting user intents and goals into an interactive objective function that can guide any auto-ML model optimizer (such as Hyperopt or SigOpt) to construct classification models catering to the expectations of the user.


Fused HDPE Upcycling
Interactive Product Design Lab | Room:  GVU Café
Noah Posner, HyunJoo Oh, Himani Deshpande, Akash Talyan
This study presents a set of fabrication techniques for upcycling HDPE (high-density polyethylene) plastic bags. It enables not only recycling discarded plastic bags but also creating 3D objects by folding and joining the newly fused plastic sheets.


Analyzing political subreddit similarity with a Visual Analytics approach
Visual Analytics Lab | Room:  HCI Lounge
John Stasko, Darsh Thakkar
We leverage the publicly available Reddit API to extract data, perform machine learning analysis on it, and feed the results into a visual interface tool for analysis.
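
As a rough illustration of this kind of pipeline (and not the showcased tool itself), the sketch below pulls recent post titles from a few subreddits through the public Reddit API via the PRAW library and compares them with TF-IDF cosine similarity; the credentials, subreddit list, and similarity measure are placeholder assumptions.

```python
# Hypothetical sketch: compare subreddits by the text of their recent posts.
import praw
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder credentials; register an app at reddit.com/prefs/apps to get real ones.
reddit = praw.Reddit(client_id="YOUR_ID", client_secret="YOUR_SECRET",
                     user_agent="subreddit-similarity-demo")

subreddits = ["politics", "Conservative", "Libertarian"]  # illustrative choices
docs = []
for name in subreddits:
    # Concatenate recent post titles into one "document" per subreddit.
    titles = [post.title for post in reddit.subreddit(name).hot(limit=200)]
    docs.append(" ".join(titles))

# Vectorize the per-subreddit documents and compute pairwise cosine similarity.
tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
sim = cosine_similarity(tfidf)

for i, a in enumerate(subreddits):
    for j, b in enumerate(subreddits):
        if j > i:
            print(f"{a} vs {b}: {sim[i, j]:.2f}")
```

The project described above goes further by presenting the analysis in an interactive visual interface rather than printed scores.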


Understanding Health-oriented Privacy Perceptions Regarding Ubiquitous Technology Use
Ubiquitous Computing Group | Room:  235A
Gregory D. Abowd, Sauvik Das, Munmun De Choudhury, Thomas Ploetz, Mehrab Bin Morshed, Koustuv Saha
In the last few years, there has been tremendous growth in the prevalence and use of smart and ubiquitous technologies, including smartphones, smartwatches and wearables, and other smart devices. While these devices have a variety of benefits, they come at the cost of our data being used for a number of purposes that are not transparent to the average user. This project aims to understand people's privacy perceptions regarding their health data being collected and used by third-party entities.


Effect of Auditory Heartbeats on Empathy
Sonification Lab | Room:  222
Bruce Walker, Stanley J. Cantrell, Mike Winters
Communication is complicated. Face-to-face communication, which many would consider the simplest form, becomes a challenge once you account for differences in language and culture, body language, and tone of voice. These same factors make text-based communication even more difficult. This project addresses these issues through the research and design of communication systems and tools that let users convey such information gracefully and effectively.


Spidey Sense: Designing Wrist-Mounted Haptics to Improve Awareness of Cybersecurity Warnings
SPUD Lab | Room:  246
Sauvik Das, Gregory Abowd, Youngwook Do, Linh Hoang
We designed and implemented a novel smartwatch wristband, Spidey Sense, that produces expressive, repeatable squeezing sensations, and we used it to explore the design space of squeezing patterns.

 

Information Visualization


QUESTO: Interactive Objective Functions for Model Selection
Visual Analytics Lab | Room:  335
Alex Endert, Subhajit Das
Visual analytics (VA) systems with semantic interaction help users craft machine learning (ML) based solutions in domains such as bioinformatics, finance, and sports. However, current semantic-interaction approaches are data- and task-specific and might not generalize across problem scenarios. In this project, we describe a novel technique for abstracting user intents and goals into an interactive objective function that can guide any auto-ML model optimizer (such as Hyperopt or SigOpt) to construct classification models catering to the expectations of the user.


Analyzing political subreddit similarity with a Visual Analytics approach
Visual Analytics Lab | Room:  HCI Lounge
John Stasko, Darsh Thakkar
We leverage the publicly available Reddit API to extract data, perform machine learning analysis on it, and feed the results into a visual interface tool for analysis.

 

Machine Learning


QUESTO: Interactive Objective Functions for Model Selection
Visual Analytics Lab | Room:  335
Alex Endert, Subhajit Das
Visual analytics (VA) systems with semantic interaction help users craft machine learning (ML) based solutions in domains such as bioinformatics, finance, and sports. However, current semantic-interaction approaches are data- and task-specific and might not generalize across problem scenarios. In this project, we describe a novel technique for abstracting user intents and goals into an interactive objective function that can guide any auto-ML model optimizer (such as Hyperopt or SigOpt) to construct classification models catering to the expectations of the user.


Analyzing political subreddit similarity with a Visual Analytics approach
Visual Analytics Lab | Room:  HCI Lounge
John Stasko, Darsh Thakkar
We leverage the publicly available Reddit API to extract data, perform machine learning analysis on it, and feed the results into a visual interface tool for analysis.

 

Materials


Fused HDPE Upcycling
Interactive Product Design Lab | Room:  GVU Café
Noah Posner, HyunJoo Oh, Himani Deshpande, Akash Talyan
This study presents a set of fabrication techniques for upcycling HDPE (high-density polyethylene) plastic bags. It enables not only recycling discarded plastic bags but also creating 3D objects by folding and joining the newly fused plastic sheets.

 

Mobile and Ubiquitous Computing


Understanding Health-oriented Privacy Perceptions Regarding Ubiquitous Technology Use
Ubiquitous Computing Group | Room:  235A
Gregory D. Abowd, Sauvik Das, Munmun De Choudhury, Thomas Ploetz, Mehrab Bin Morshed, Koustuv Saha
In the last few years, there has been tremendous growth in the prevalence and use of smart and ubiquitous technologies, including smartphones, smartwatches and wearables, and other smart devices. While these devices have a variety of benefits, they come at the cost of our data being used for a number of purposes that are not transparent to the average user. This project aims to understand people's privacy perceptions regarding their health data being collected and used by third-party entities.

 

Music Technology


AI learns to generate rhythm action game stages
Entertainment Intelligence Lab | Room:  228
Mark Riedl, Zhiyu Lin, Kyle Xiao
A framework that uses machine learning techniques to generate rhythm action game stages from music.

 

Perception


The 'Urban Dictionary' of Emoji: Twitter's Top 10 Emoji
Sonification Lab | Room:  222
Bruce Walker, Stanley J. Cantrell, Mike Winters
Communication is complicated. Face-to-face communication, which many would consider the simplest form, becomes a challenge once you account for differences in language and culture, body language, and tone of voice. These same factors make text-based communication even more difficult. This project addresses these issues through the research and design of communication systems and tools that let users convey such information gracefully and effectively.

 

Privacy and Transparency


Understanding Health-oriented Privacy Perceptions Regarding Ubiquitous Technology Use
Ubiquitous Computing Group | Room:  235A
Gregory D. Abowd, Sauvik Das, Munmun De Choudhury, Thomas Ploetz, Mehrab Bin Morshed, Koustuv Saha
In the last few years, there has been tremendous growth in the prevalence and use of smart and ubiquitous technologies, including smartphones, smartwatches and wearables, and other smart devices. While these devices have a variety of benefits, they come at the cost of our data being used for a number of purposes that are not transparent to the average user. This project aims to understand people's privacy perceptions regarding their health data being collected and used by third-party entities.

 

Procedural Content Generation


AI learns to generate rhythm action game stages
Entertainment Intelligence Lab | Room:  228
Mark Riedl, Zhiyu Lin, Kyle Xiao
A framework that uses machine learning techniques to generate rhythm action game stages from music.

 

Smart Cities; IoT


Sea Level Sensors
Research Network Operations Center (RNOC) | Room:  333
Kim Cobb, Russ Clark, Tim Cone, Emanuele Di Lorenzo, David Frost, Jayma Koval, Kyungmin Park, Lalith Polepeddi
Georgia Tech scientists and engineers are working together to install a network of internet-enabled sea level sensors across Chatham County.

 

Social Computing


Understanding Health-oriented Privacy Perceptions Regarding Ubiquitous Technology Use
Ubiquitous Computing Group | Room:  235A
Gregory D. Abowd, Sauvik Das, Munmun De Choudhury, Thomas Ploetz, Mehrab Bin Morshed, Koustuv Saha
In the last few years, there has been tremendous growth in the prevalence and use of smart and ubiquitous technologies, including smartphones, smartwatches and wearables, and other smart devices. While these devices have a variety of benefits, they come at the cost of our data being used for a number of purposes that are not transparent to the average user. This project aims to understand people's privacy perceptions regarding their health data being collected and used by third-party entities.

 

Usable Security


Spidey Sense: Designing Wrist-Mounted Haptics to Improve Awareness of Cybersecurity Warnings
SPUD Lab | Room:  246
Sauvik Das, Gregory Abowd, Youngwook Do, Linh Hoang
We designed and implemented a novel smartwatch wristband, Spidey Sense, that produces expressive, repeatable squeezing sensations, and we used it to explore the design space of squeezing patterns.

 

User Experience


PopSign: Teaching American Sign Language Using Mobile Games
Contextual Computing Group | Room:  243
Thad Starner, Bianca Copello, Domino Weir, Ellie Goebel, Cheryl Wang
CopyCat and PopSign are two games that help deaf children and their parents acquire language skills in American Sign Language. Ninety-five percent of deaf children are born to hearing parents, and most of those parents never learn enough sign language to teach their children. Because short-term memory skills develop through language acquisition, many deaf children enter school with a short-term memory span of fewer than three items, far lower than that of hearing children of hearing parents or Deaf children of Deaf parents. Our systems address this problem directly.