Eyes darting, or maintaining a steady gaze straight ahead. Heartbeat racing, or maintaining a slow, even rhythm. If we encounter these phenomena in another, how do we respond – not just affectively, but physiologically? Eye movements and heartbeats are among the most intuitively meaningful physiological characteristics that humans observe in one another. Without necessarily consciously realizing it, we often respond empathetically. This project brings together humanities scholars and physiology scholars to create an art installation that uses representation, tracking, and visualization to investigate and reflect upon the physiology of empathy. The installation renders video of the eye movements and audio of the heart rate of a virtual person, and tracks the eye movements and heart rate of an observing user. We anticipate a mirroring, empathetic physiological response from the user, in which their heart rate also speeds and slows in conjunction with the virtual person's. Immediately after the experience, the user is provided with a visual and auditory representation of the data, in order to see and reflect on this empathetic engagement, and is also offered a copy of the video by email if they so choose. The playback could run either in real time, or at a pace set to either the virtual person's or the user's heart rate as a metronome, allowing a distinctively human-centered exploration of the data.
A Multimodal Human Computer Interface Combining Head Movement, Speech and Tongue Motion for People with Severe Disabilities
Assistive technologies (ATs) play a crucial role in the lives of individuals with severe disabilities by enabling them to have greater autonomy in performing daily tasks. The Tongue Drive System (TDS) developed at the Georgia Tech Bionics Lab is such an AT, empowering people with severe Spinal Cord Injury (SCI) to be more independent. Earlier versions of the TDS have offered tongue motion and speech as means of driving mouse activity and keyboard input. In this project, we introduce a new multi-modal Tongue Drive System (mTDS), which incorporates head tracking to deliver proportional control of a mouse cursor. The mTDS integrates this new capability while preserving tongue motion and speech from previous versions, and offers a richer means of driving computing interfaces than was previously available to individuals with severe disabilities.
Users go to social network sites or online forums to get advice from members of their networks. Individuals with autism adopt and use such computer-mediated communication technology differently from typical users. They require advice about everyday situations ranging from very simple operations to complex social activities. We propose to develop a Q&A system with a robust network of people whom the user is not likely to know but who nonetheless may be willing to provide advice on everyday situations.
Wearable systems play an important role in continuous health monitoring and can contribute to early detection of abnormal health-related events and facilitate the advancement of personalized healthcare. The neck is a unique sensing location because it provides access to a set of health-related data that other wearable devices simply cannot obtain. Activities including breathing, chewing, clearing the throat, coughing, swallowing, speech and even heartbeat can be recorded from around the neck. Two applications of particular interest for this project include medication adherence monitoring and food intake monitoring.
Medication non-compliance, especially for patients with chronic illnesses, is a global issue that has been associated with increased healthcare cost, rehospitalization, complications, and disease progression. To address this problem, it is essential to have a portable, wearable health platform that can remind patients of their medication regimen, track medication ingestion, and monitor a patient’s overall health status. The proposed system, in the form of a necklace, will automatically track medication ingestion using well-established radio frequency (RF) technology in the high-frequency (13.56 MHz) band. For power management purposes, the system will be ‘asleep’ by default except during a swallowing event, when there is a possibility of medication ingestion. For this reason, automatic swallowing detection is essential: the system must be able to differentiate swallowing sounds from other tracheal sounds produced by speaking, coughing, clearing the throat, and so on. In previous work, we developed a real-time swallowing detection algorithm based on acoustic signals and patterns that combines computationally inexpensive features to achieve performance comparable to previously proposed offline methods using acoustic and non-acoustic data. With data from four healthy subjects that includes common tracheal events such as speech, chewing, coughing, clearing the throat, and swallowing of different liquids, our results show an overall recall performance of 79.9% and precision of 67.6%, which are slightly better than or close to the offline results.
In follow-up work, we expanded our scope and explored tracheal activity recognition, combining promising acoustic features from related work with simple classifiers such as k-NN and Naive Bayes. For wearable systems in which low power consumption is a primary concern, we have shown that even at a reduced sampling rate of 16 kHz we achieve average classification results in the range of 86.6% to 87.4% using 1-NN, 3-NN, 5-NN, and Naive Bayes. All classifiers obtained their highest recognition rates, in the range of 97.2% to 99.4%, for speech classification. This is promising for mitigating privacy concerns about wearable systems capturing the user’s conversations.
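The k-NN classification described above can be sketched in a few lines: each window of tracheal audio is summarized as a feature vector, and a new window is labeled by majority vote among its nearest labeled neighbors. The features and training values below are hypothetical toy data for illustration, not the features or measurements from the actual study.

```python
import math

def knn_classify(train, query, k=1):
    """Label a feature vector by majority vote among its k nearest
    labeled neighbors under Euclidean distance, as in the 1-NN/3-NN/5-NN
    setups described above."""
    dists = sorted((math.dist(feats, query), label) for feats, label in train)
    votes = {}
    for _, label in dists[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Hypothetical 2-D features per audio window, e.g. (log-energy, zero-crossing rate).
TRAIN = [
    ((0.90, 0.80), "speech"),
    ((0.85, 0.75), "speech"),
    ((0.20, 0.10), "swallow"),
    ((0.25, 0.15), "swallow"),
    ((0.60, 0.30), "cough"),
]
```

With these toy points, `knn_classify(TRAIN, (0.88, 0.78), k=3)` votes "speech"; in practice the feature set would include the computationally inexpensive acoustic features mentioned above, extracted per frame from the 16 kHz signal.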
The Accessible Bluetooth Cane project allows visually impaired users to control their iPhone while using the white cane, without having to stop and take out the phone. This is achieved by embedding Bluetooth remote controls with tactile buttons inside the cane handle.
ActEarly: Redesign and Evaluation of an Android Mobile Application for Tracking Developmental Milestones
About 1 in 7 U.S. children will be affected by a developmental disability, including autism and Attention Deficit Hyperactivity Disorder (ADHD). Early diagnosis of developmental delays ensures proper intervention and an overall improved quality of life. In this research, we investigate how parents of young children use an Android mobile application, ActEarly, to log their children’s developmental progress through milestone-tracking techniques. The app leverages information from the Centers for Disease Control and Prevention’s “Learn the Signs. Act Early.” campaign to provide parents with the information needed to identify signs of developmental disabilities, while empowering them to share their questions, doubts, and concerns with pediatricians. The goal of this project is to evaluate the interactive ActEarly app and uncover users’ record-keeping needs, as we design a solution to help parents become more proactive about milestone tracking and create a useful health care tool for the public sector.
Active Pathways aims to support learning and discovery in systems biology by allowing users to construct and manipulate biochemical reaction network simulations using active tangibles on an interactive tabletop display surface. Researchers in systems biology currently run simulation programs that model different experimental parameters such as concentrations inside cells and reaction speeds. Parameters are adjusted algorithmically or by entering numbers into equations. The simulation results are then plotted as graphs in order to discover hidden patterns in the network. Using tangible and tabletop interaction techniques, we provide a direct hands-on way for researchers to construct and manipulate models in order to gain a better understanding of the systems they are studying.
The goal of the research is to identify the ways in which social media could play a role in helping Georgia Tech students find mental health support. Mental health disorders are extremely prevalent on college campuses, and anxiety and depression in particular have been shown to have a negative effect on academic success. Despite the fact that mental health professionals and programs are available to students with mental health conditions, many are not seeking help from their campus resources. Social support is a key component in preventing mental health issues from becoming serious problems, and it has been shown to be a top factor in preventing suicide attempts. By examining the mental health status of current Georgia Tech students as well as their social media usage and behavior, the proposed project aims to discover how a social platform could be used to provide social support to Tech students facing mental health issues.
Many electronic devices, from desktop computers to mobile phones to DVD players, can be thought of as a menu of functions. These functions can be accessible to a blind user if the menus are spoken aloud. However, this is extremely inefficient, so we have been enhancing auditory menus with sophisticated text-to-speech, spearcons, spindex, and other audio extensions. These can also be applied in many different languages, and research is ongoing into more language applications, including tonal languages.
This project is exploring ways of using air gesture technologies, audio, and haptics to facilitate exploration of STEM concepts by blind and low-vision learners. Efforts will establish the efficacy of this approach, as well as best practices for creating air gesture interfaces that support exploration of a virtual reality space such as a simulated atom, wind tunnel, or electrical system, all without the use of vision.
Modern sensor technology is beginning to allow for cost-effective deployment of air gesture interfaces in the vehicle. Unlike the current standard of direct touch, air gesture interfaces do not require that the driver take their eyes off the road, especially when coupled with properly applied auditory or tactile feedback.
While emerging systems like Apple Carplay and Android Auto support limited speech commands, the majority of tasks still require visually targeted touch interaction, which poses a safety hazard to drivers.
Research in the Sonification Lab centers on developing guidelines for automotive interface designers on how to create air gesture interfaces which provide minimal cognitive, motor and visual demand to drivers. We combine user-centered HCI design with comprehensive engineering psychology evaluation using eye tracking, physiological measures, performance measures and subjective measures to take a data-driven approach to air gesture systems in the vehicle.
The algorithmic detection of subcultural or niche taste trends is of growing importance in targeted advertising. This demonstration presents research using online music analysis tools from Spotify, Musicbrainz, and Rovi coupled with aggregated music listening behavior from Facebook users to detect individual tastes and emerging taste trends amongst social groups.
This research is presented alongside historical signifiers of music taste such as fashion, music collections, and subcultural knowledge.
The goal of the research is to demonstrate the growing importance of software-based taste detection algorithms in determining niche markets for online content providers, and some of the new methodologies available to such systems.
Like traditional media, social media in China is subject to censorship. However, in limited cases, activists have employed homophones of censored keywords to avoid detection by keyword matching algorithms. In this paper, we show that it is possible to scale this idea up in ways that make it difficult to defend against. Specifically, we present a non-deterministic algorithm for generating homophones that create large numbers of false positives for censors, making it difficult to locate banned conversations. In two experiments, we show that 1) homophone-transformed weibos posted to Sina Weibo remain on-site three times longer than their previously censored counterparts, and 2) native Chinese speakers can recover the original intent behind the homophone-transformed messages, with 99% of our posts understood by the majority of our participants. Finally, we find that coping with homophone transformations is likely to cost the Sina Weibo censorship apparatus an additional 15 hours of human labor per day, per censored keyword. To conclude, we reflect briefly on the opportunities presented by this algorithm to build interactive, client-side tools that promote free speech.
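The core transformation can be sketched as follows: each censored keyword in a post is replaced by a randomly chosen same-sounding substitute, so exact keyword matching no longer fires. The homophone table below is a small hypothetical example for illustration; the actual system generates candidates non-deterministically from pronunciation data rather than from a fixed list.

```python
import random

# Hypothetical homophone table: a censored keyword mapped to
# same-sounding substitute spellings (illustrative only).
HOMOPHONES = {
    "自由": ["滋油", "孜尤"],
}

def transform(post, table, rng=random):
    """Replace each occurrence of every censored keyword with a randomly
    chosen homophone, defeating exact keyword-matching censors."""
    for keyword, candidates in table.items():
        while keyword in post:
            # Replace one occurrence at a time so repeated keywords can
            # each receive a different (non-deterministic) substitute.
            post = post.replace(keyword, rng.choice(candidates), 1)
    return post
```

A reader who knows the pronunciation can recover the original intent, while the censor must now track every variant, which is the source of the added human-labor cost estimated above.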
Networking with and drawing inspiration from alumni of your program or school is important when making decisions about the next steps in your career. However, schools lose touch with alumni once they graduate and find it difficult to keep track of where they are. Networking platforms such as LinkedIn are helpful but do not provide a big-picture view of your alumni network. AlmaBase is a LinkedIn extension that shows a visualization of the career trajectories of alumni from your program, helping you find the "right" alumni to network with and be inspired by.
Early development of children is a critical concern for young parents. However, symptoms of abnormality may appear in subtle ways, and parents often fail to recognize them or to seek help at an early stage, frequently because they lack the relevant knowledge or professional guidance. The CDC distributes brochures to promote knowledge of early childhood development, but this form of publication contains a large volume of information and is hard to popularize. Furthermore, even though parents’ role in addressing this problem is significant, they cannot do it alone; enhancing collaboration among the different roles involved (parents, childcare givers, and professionals) will make the most impact. Therefore, this project uses a tablet-based interactive storybook to support milestone tracking and help improve chronic and health care management.
Why is it important?
◦Identify children at risk for developmental disabilities
◦Communicate concerns to primary care provider
◦Get necessary services to improve outcomes for the child in question
◦Understand developmental trajectories of children from different demographics
No matter what age we are, we have likely forgotten to turn off the stove or oven, the iron, a heater, or even the water. Forgetfulness can lead to serious events that may result in costly damage to the home or even injury or death. Older adults are more prone to such forgetfulness. When an older adult forgets to turn off a hazardous appliance, it is often attributed to losing mental capacity and may lead to loss of self-confidence, embarrassment, and judgment from others. Many families turn to monitoring when they discover such hazards, but this can result in their loved one feeling a loss of independence. We feel there is an opportunity, before monitoring, to use technology to provide gentle reminders or cues that empower the resident to determine for themselves when such appliances should be turned off.
To this end, we have performed in-home contextual interviews, designed prototypes of possible solutions, and performed Aware Home interviews and prototype evaluations with older adult participants to understand their needs for notifications and preferences for alerts (audible and visual). As we envision it, an ambient alert system should consist of several ambient and/or wearable reminder products that would integrate with existing connected home systems and provide those gentle reminders both at and away from the primary hazard.
Information visualization can augment human cognition in many ways, and has proved useful in professional application areas such as scientific visualization and business management. But what is the potential of information visualization in everyday life? Using ambient visualization techniques, the opportunity to co-exist with an embodiment of data in the same physical space, and to analyze such a metaphor in relation to the space around us, could lead to a richer learning environment. In such environments, how should information balance aesthetics and utility to serve its purpose? The project concerns an interactive weather installation that leverages interactive projection mapping to highlight the aesthetic qualities of weather data and to signify its relation to space, movement, and time. Working with digital projectors, coding environments such as OpenCV and Processing, and projection-mapping tools, the project aims to create an interactive projection-mapped experience that provides a platform for analyzing weather information in meaningful, aesthetic, and engaging ways.
In this project, we analyze blocking mechanisms on social media. We perform a comparative analysis of different technically and socially curated block-lists on Twitter. We also conduct interviews with users who are on such block-lists as well as those who subscribe to them. Our analysis reveals nuances of online harassment and the tactics used by harassers. We discuss the limitations of the state-of-the-art moderation used by social media platforms such as Facebook and Twitter. We examine how harassment victims appropriate the online tools and resources available to them to cope with online abuse. We also suggest design implications for improved blocking mechanisms.
Few computing systems exist to help keep individuals safe when meeting strangers offline, so our team has developed an application to address this need. To improve the current system, we will conduct a two-part research study: an interactive activity and interviews. The interview data, together with the data from the activity, will help address whether or not using a computer system helps people feel safer when traveling alone. The study will be conducted on the campus of the Georgia Institute of Technology.
Graduate students in the Prototyping Interactive Applications class taught by Gregory Abowd worked independently and with high school students from the Latin American Association's leadership program at Cross Keys High School to create interactive Day of the Dead puppets. This project is part of GoSTEM, a larger effort at Georgia Tech to increase interest in science, technology, engineering, and mathematics among Latino youth and to bring Latin culture to Georgia Tech. These animated puppets respond to one or more inputs and produce visual, audio, and/or kinetic output.
Dressing is one of the most common activities in human society. Perfecting the skill of dressing can take an average child three to four years of daily practice. The challenge is primarily due to the combined difficulty of coordinating different body parts and manipulating soft and deformable objects (clothes). We present a technique to synthesize human dressing by controlling a human character to put on an article of simulated clothing. We identify a set of primitive actions which account for the vast majority of motions observed in human dressing. These primitive actions can be assembled into a variety of motion sequences for dressing different garments with different styles.
We use Augmented Reality presentation and sensing technologies to integrate design studio learning models into screen-based classrooms. The goal for this approach is to create STEM learning experiences that encourage creativity, innovation and help build strong peer learning environments. To accomplish this goal we implement room-scale augmented reality technology with projection-based presentation and sensing technologies -- projecting on surfaces and using depth sensing for unencumbered interaction (see http://research.microsoft.com/en-us/projects/roomalive/). This approach allows everyone in the space to participate in the experience, and the cost is fixed regardless of the number of participants.
Two practices from the studio model for learning we build upon are:
– Pinups: In design studios, students will pin their work (completed parts, sketches, parts
in-development) on a wall, and the teacher and students will walk the walls in order to
comment on the pinned-up work. Pinups make both the artifacts and process of design
work visible, and make it possible to compare and contrast approaches when all students'
work is pinned up at once.
– Meetups: Students working together in a design studio can look over to see what others
are doing. Collaboration is fluid and at multiple levels. Sometimes, two students move
their work near one another to work together (literally, “closely”). Sometimes, two
students just look at each other’s work to share ideas.
This project integrates augmented reality to redesign the Georgia Aquarium tour experience. Building on existing digital content from the Georgia Aquarium, AquaRium Tour features user-centered interaction to enhance the tour experience, incorporating navigation, knowledge about aquatic life, and sharing and other social features.
Atlanta has the reputation of being a “city in a forest,” with a large and varied tree population that provides shade for its residents and habitat for wildlife, consumes carbon dioxide from the atmosphere, and produces life-giving oxygen, among many other benefits. In keeping with this context and its commitments to environmental awareness and conservancy, the Georgia Tech campus contains hundreds of species of trees that cover the landscape.
The Imagine Lab is building an Augmented Reality application for mobile devices and tablets through which these myriad trees can be viewed interactively. In the app, the user can touch a tree and receive information about it from a vast database: age, size, and species. Not only can the user interact with the visible world, they can also see projected tree growth for the next 10, 25, and 50 years, when newly planted trees will have grown into shade-giving behemoths, all rendered in 3D on the screen. In addition, the user can explore the subterranean world, with interactive animations of the nearly unprecedented 1.4-million-gallon cistern under Tech Green that provides water for the Clough Undergraduate Learning Commons and local irrigation. The app gives the user an augmented view of the world in front of them, enhanced with illustrative educational information that nurtures environmental consciousness, making them aware of the tremendous benefits of having a green campus, how its systems work, and what it means for the future.
In a race against the clock, players embark on a dangerous adventure. Within moments, the journey goes haywire. Lost and alone, players find themselves stranded. In this VR interactive narrative, players fight to survive the dangerous landscape. Utilizing Oculus Rift, Unity, and unique interaction paradigms, Ares explores a wide range of new techniques in VR storytelling. This distinctive, immersive experience will test users’ survival skills and offer an exciting challenge.
Argon is a mobile web browser designed to bridge the gap between Augmented Reality and The Web. Following in the tradition of web browsers like Chrome and Firefox, which differentiate themselves by providing custom functionality that is not yet standardized across all browsers, Argon exposes the core technologies needed to make AR possible. By making computer vision tracking (via the Qualcomm's Vuforia library) available to web pages, Argon provides a browser-based platform for rapid development of fully-interactive 2D/3D AR content & applications. The lab has developed tools to make rapid prototyping easier. The goal is to make it possible for designers and organizations with web app skills to create AR and MR (and even VR) applications. Come see projects & demos built using the Argon platform.
This educational toy concept helps teach children visual-spatial cognitive skills and logical-mathematical reasoning through interactive music creation. Built from LEGO Mindstorms, the project explores the tool's applications in early-stage interactive concept development for designers.
The Beltline Exploration App is a proposed location-based “walking tour” application aimed at increasing community engagement and participation on the Atlanta Beltline. The existing Atlanta Beltline app provides a wealth of information that can be improved with more participatory interaction from the user and an element of user content creation. The goal is for the app to bring awareness to art, culture, and events along portions of the Atlanta Beltline, introducing newcomers to the Beltline and promoting repeat visits.
We are developing a suite of media experiences to introduce visitors to the rich cultural and economic history of Auburn Avenue. From about 1900 to 1960, Auburn Avenue was the center of African-American cultural and economic life in the city. The street also played a key role in the civil rights movement. From the 1960s on, the street suffered decline, and the local community disintegrated because of a range of social, economic, and urban planning factors. In recent years, however, the community has been the focus of revival efforts, with attractive apartments and homes at its eastern end and increased economic activity along its more blighted corridor. In 2014 or 2015, a new streetcar line promises to bring even more tourists to its main attractions: the Martin Luther King Visitors Center, King’s birth home, the Ebenezer Baptist Church, and the King Memorial. Sweet Auburn was designated a National Historic Landmark in 1976. We are working in collaboration with Central Atlanta Progress and the Historic Preservation Division of the Department of Natural Resources of the State of Georgia to bring this history to thousands of visitors and residents through an integrated media strategy. Our media strategy centers on a prototype of a mobile app using the Argon browser. This will be supported by web applications that can run on other mobile devices, as well as a web site.
The content types and features that we will explore include:
a. audio, images, and text delivered on location at places of interest along the avenue;
b. panoramas and historical photographs to depict the visual history of Sweet Auburn;
c. informative texts to replace or complement existing physical signage;
d. forms of interaction that trigger the delivery of these images, audio, and text: for example, when users walk down the street, GPS tracking can tell the phone when to play certain audio or show certain images;
e. links to social media so that visitors can record their experience of the tour of the avenue for friends or for their own later use.
Our ultimate goal is to ensure that the broadest possible range of visitors and web users have a satisfying and informative experience of Auburn Avenue, and that the digital media application is a successful and sustainable informational companion supporting the preservation and revitalization efforts in this area.
This project helps teach STEM concepts with an audio-enabled version of the Lemonade Stand Game, in which visually impaired players (or any player that wants to experience a game that is sound dependent) need to manage their own stand while factoring in weather, local events, advertisement, and pricing in order to maximize profit for their business.
The graphs and figures that are so prevalent in math and science education make those topics largely inaccessible to blind students. We are working on auditory graphs that can represent equations and data for those who cannot see a visual graph. Among the new areas we are starting to research are teaching astronomy concepts (like the Solar System) through sonification, and teaching and understanding weather information through a combination of sonification and auditory description. Additionally, we are working on making statistical output accessible to blind users, to assist with higher-level mathematics applications. We have a whole ecosystem of software and hardware solutions, both desktop and mobile, to help in this space. This project is in collaboration with the Georgia Academy for the Blind and the Center for the Visually Impaired of Atlanta.
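At its core, an auditory graph maps data values onto an auditory dimension such as pitch. The sketch below shows the simplest common mapping, a linear scaling of values onto a frequency range; the specific range (220–880 Hz, i.e. A3 to A5) is an illustrative assumption, not the lab's actual parameter choice.

```python
def sonify(values, fmin=220.0, fmax=880.0):
    """Map each data value linearly onto a frequency range so the
    minimum value sounds at fmin and the maximum at fmax. The returned
    frequencies would then drive an oscillator or MIDI synthesizer."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # flat data: avoid division by zero
    return [fmin + (v - lo) / span * (fmax - fmin) for v in values]
```

For example, `sonify([0, 5, 10])` yields 220 Hz, 550 Hz, and 880 Hz, so a rising data series is heard as a rising melody; production systems typically add context cues (tick marks, axis clicks) and may use logarithmic pitch scaling to match human pitch perception.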
In collaboration with political science researchers from Georgia State University and the University at Albany, we are developing augmented reality-based experiments to examine the impact of grievance, opportunity, and risk as motivating factors when choosing to engage in political protest or terrorism. Study participants assume the role of a fictional ethnic minority in a fictional country and engage in dialogs with virtual characters that attempt to persuade the participant to join a peaceful student-led protest or a violent resistance movement. Participants wear a head-mounted video display instrumented with cameras that allow them to view computer graphics mixed with the physical space around them. Sitting at a table, the participant can then see and hear virtual characters that appear to sit across from them, allowing the participant to experience a first-person point of view in dialogs with these virtual characters.
Automated safety systems, a first step toward autonomous vehicles, are already available in many commercial vehicles. These are systems such as adaptive cruise control, which has the capability to slow down due to traffic, and automatic lane keeping, which maintains position within a lane without driver intervention. In order to ensure that these systems are properly used by drivers it is essential that they understand and appropriately trust the technology. We are currently investigating personal characteristics and driving environments that influence acceptance and use of automated safety systems and developing multimodal displays to increase situation awareness.
Intelligent tools can ease the burden of game development. One approach to easing this burden is the use of co-creative, artificial agents, capable of helping a human developer by making suggestions or extending an initial design. However, agents capable of design have historically required a large amount of hand-authored design information—domain-specific rules, heuristic functions, or formal logic rules. Due to the time it takes to author this knowledge, such approaches do not remove the development burden, but shift it to the author of the agent. To solve this problem we present a demonstration of a level-authoring tool with a co-creative agent informed by knowledge learned from gameplay videos. The technique is demonstrated in the popular game, Super Mario Bros. We offer the experience of co-designing a level with a co-creative agent and then playing through the level yourself or with a friend.
Since its earliest days, flaming, trolling, harassment and abuse have plagued the Internet. Our aim is to computationally model abusive online behavior to build tools that help counter it, with the goal of making the Internet a more welcoming place. In particular, we look at a novel approach to identify online verbal abuse using cross-community linguistic similarities between posts on different communities. This work will enable a transformative new class of automated and semi-automated applications that depend on computationally generated abuse predictions.
B.B.C.S. is a working system that can be easily deployed in homes, clinics, laboratories, and therapy centers, among other settings, to help its users collect relevant behaviors of interest over long periods of time and gain a deeper understanding of them.
This system captures behaviors of interest using multiple cameras alongside biological signals, such as heart rate, in a synchronized manner, allowing the user to analyze visible and invisible characteristics of behaviors. B.B.C.S. is intended to be an “everywhere / anywhere” system, so it allows the user to annotate, comment and control the system in situ.
Since B.B.C.S. can store weeks of data, it was designed to allow quick browsing and filtering of weeks' worth of data to reach specific moments of interest.
As an example of a potentially interesting deployment scenario, consider the homes of families with individual(s) on the Autism Spectrum. The system would enable parents and researchers to gather rich data in the family's natural environment. In effect, we would be bringing the lab home.
This project concerns the analysis and design of a wearable technology garment intended to aid with the instruction of ballet technique to adult beginners. A phenomenological framework is developed and used to assess physiological training tools. Following this, a garment is developed that incorporates visual feedback inspired by animation techniques that more directly convey the essential movements of ballet. The garment design is presented, along with a discussion of the challenges in constructing an e-textile garment using contemporary materials and techniques.
Barcode Fitness is a fitness application that helps you keep track of the details of your weightlifting workouts at the Campus Recreation Center at Georgia Tech. Ditch the clumsy notebook you were using to write down all of your sets and repetitions in favor of Barcode Fitness. Barcode Fitness supports over 40 different exercises and allows for nearly instant selection of exercises by letting you scan the QR codes located on all supported exercise machines.
Exposure to diverse opinions makes us more informed and engages society in a necessary deliberation process. Inwardly focused groups risk tunnel vision and an inability to challenge their own views. Technically, online we can connect to anyone in the world, but social network analyses of blogs and Twitter have shown that we stay connected in groups of like-minded others. There is untapped potential for online environments to go further in giving access to diverse views. Incivility has direct consequences for relationships with others of different opinions. It has been shown that in televised political debates, incivility increases negative feelings towards the other side. While disagreement is necessary in a healthy democracy, alienating arguments result in the current culture wars. I will present work I have done on how encouraging civility on a platform like Facebook could alleviate frustrations, such as the need to tune out when disagreement becomes overwhelming.
In the U.S. alone, approximately 18 million people use crutches each year. The human body was not designed to bear its weight on the forearms and wrists, but all designs of the crutch force patients to do just this. In just a few steps with the underarm crutch, forearm fatigue sets in, resulting in patients resting upon the underarm padding. The upward force from the padding leads to pain, chafing, blood vessel compression, nerve compression, and possible nerve damage. Forearm crutches, while avoiding the underarm area, are difficult to use and direct a large amount of torque to the shoulder, resulting in shoulder injuries, frequent imbalance, and falls. Not including the costs of treating the side effects of crutch use, patients spend over $800 million each year on these 5,000-year-old, inefficient devices.
The Better Walk crutch puts a patient’s mind at ease. The redesigned support system reduces the risk of underarm nerve damage, reduces forearm fatigue, and improves patient comfort resulting in increased compliance and a safer, more comfortable rehabilitation process.
Visualizations can help amplify human cognition. As networks grow increasingly complex, tools to compare and contrast sets, relationships, and reach become increasingly valuable. Motivated by a practical need articulated by corporate decision makers, this research presents our journey in designing and implementing bicentric diagrams, a novel graph-based set visualization technique. A bicentric diagram enables simultaneous identification of sets, set relationships, and set member reach in the integrated ego networks of two focal entities. Our technique builds on the well-established theory of tie strength to visually group and position nodes. We illustrate the broad applicability of bicentric diagrams with examples from four diverse sample domains: university collaboration, technology co-occurrence, health app purchases, and innovation ecosystem networks. We assess the value of our technique using an expert-based value-driven evaluation approach. The paper concludes with implications and a discussion of opportunities for implementation in real-world settings.
We present a method for smoothly blending between existing liquid animations.
We introduce a semi-automatic method for matching two existing liquid animations, which we use to create new fluid motion that plausibly interpolates the input.
Our contributions include a new space-time non-rigid iterative closest point algorithm that incorporates user guidance, a subsampling technique for efficient registration of meshes with millions of vertices, and a fast surface extraction algorithm that produces 3D triangle meshes from a 4D space-time surface.
Our technique can be used to instantly create hundreds of new simulations, or to interactively explore complex parameter spaces.
Our method is guaranteed to produce output that does not deviate from the input animations, and it generalizes to multiple dimensions. Because our method runs at interactive rates after the initial precomputation step, it has potential applications in games and training simulations.
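The registration step builds on the classic correspond-then-align loop of iterative closest point (ICP). As an illustration only, here is a minimal rigid ICP in NumPy; the actual method described above is a space-time *non-rigid* ICP with user guidance, which is substantially more involved:

```python
import numpy as np

def icp_rigid(src, dst, iters=20):
    """Classic rigid ICP: iteratively align point set `src` to `dst`.

    Illustrative sketch of the base algorithm only; not the paper's
    space-time non-rigid variant.
    """
    src = src.copy()
    R_total = np.eye(src.shape[1])
    t_total = np.zeros(src.shape[1])
    for _ in range(iters):
        # 1. Correspondence: nearest neighbor in dst for each src point.
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # 2. Alignment: best rigid transform via the Kabsch algorithm.
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.eye(H.shape[0])
        D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ D @ U.T           # rotation (reflection corrected)
        t = mu_m - R @ mu_s
        src = src @ R.T + t          # apply the incremental transform
        R_total = R @ R_total
        t_total = R @ t_total + t
    return src, R_total, t_total
```

Each iteration re-estimates correspondences and then solves for the optimal rigid motion in closed form; the non-rigid, space-time version replaces both steps with far richer machinery.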
BlockParty: A Platform for Building Hyper-local Social Computing Applications on Residential Mesh Networks
Modern social media do a remarkable job of keeping friends and families connected—often across the globe. Yet, these same systems also overlook the communities and neighborhoods where we live our daily lives. In this paper, we present BlockParty, a platform for building hyper-local social computing applications aimed at neighborhoods. A key feature of our platform is that it runs on top of residential wireless routers via an underlying mesh network. Using BlockParty, people can socialize with their neighbors and share resources, without their data ever leaving their local community. The goal of BlockParty is to enable new forms of neighborhood-oriented social computing applications that encourage the creation of local ties and local social capital.
In the era of globalization, the ordinary viewer is exposed to cinematography from different countries and cultures, but do they understand the cultural context portrayed by the artists?
In this project I intend to use interactive television as a medium that helps viewers gain a deeper understanding of a movie by exposing them to its cultural layers.
The earlier autism spectrum disorder (ASD) is detected, the earlier children can receive intervention services, resulting in improved social, cognitive, and adaptive skills. Birth to 5 years is an especially critical time for identifying potential signs of delayed or unusual development that may indicate ASD. Tracking children’s development can lead to an earlier diagnosis, and the Centers for Disease Control and Prevention (CDC) provides developmental monitoring tools for this through its “Learn the Signs. Act Early.” program.
The goal of this project is to develop and evaluate an Android app that makes CDC’s developmental monitoring tools more readily accessible to parents of young children, making it easier for parents to identify early signs of ASD or other developmental delays.
Campus Tour is an augmented reality experience of Georgia Tech's campus, delivered as a channel in Argon, a standards-based Augmented Reality (AR) web browser developed by the Augmented Environments Lab. The tour gives information to users through text, pictures, and videos. Stops on the tour are panoramic images; within the panoramas are points of interest that, once clicked, reveal more information about their topic. Campus Tour allows users to remotely enjoy the beauty of campus or to learn more about Tech while on campus. Campus Tour also lets you build your own experiences and tours: using our custom web-based editor, you can choose which curated elements to use and build on them, adding your own custom elements to create unique experiences.
College students encounter many challenges in the pursuit of their educational goals. When these challenges are prolonged, they can have drastic consequences on health and on personal, social, and academic life. Our multi-institution project, called CampusLife, conceptualizes the student body as a quantified community in order to quantify, assess, infer, and understand factors that impact well-being. Our goal is to develop privacy-honoring infrastructure and tools that can first sense lifestyle, moods, and activities through active and passive techniques, and then utilize that information in the design of self-reflective tools that could make students more self-aware and proactive toward improving their well-being.
Captioning on Glass is an on-going project creating an app for Google Glass with a companion Android phone app to assist the hard-of-hearing in everyday conversations. We are also working on another version of this app, "Translation on Glass", which will add the ability to translate between English and another language.
Healthcare big data is being widely touted as a potential resource for curbing costs and improving outcomes. However, numerous challenges remain for leveraging this data to its full potential. In this position paper we identify the difficulties that characterize clinical data, based on our experiences working with pediatric asthma data from Children's Healthcare of Atlanta. The specific dataset we explored includes administrative items, medications, lab results, clinical respiratory scores (outcome), timestamps, and demographic information from 5,785 emergency department (ED) visits for asthma exacerbations. We argue that new data and visual analytic techniques are needed that are specifically tailored for solving challenges in healthcare, and we propose characteristics that these techniques should have and give our design rationale. To demonstrate how a tool that embodies these desirable features may be designed, we introduce CareProcessVis, a prototype interactive visual analytics tool that helps clinicians explore and understand the processes involved in pediatric asthma emergency department care.
The Content Aggregation System for Election Observation (CASE) will aggregate real-time election observation data from formal observer missions and social media sources. Our new system, combining the power of crowdsourced data from social media with the precision of formal observers in the field, will create a first-ever fully integrated monitoring system. Simple technical interfaces will allow users to share particular information in real-time while still maintaining necessary data security and privacy. An integrated visual dashboard will allow all project participants to view, analyze and understand real-time data from social media fully integrated with real-time data from participating formal observer groups. The system will be test deployed in 2014 and fully deployed during the 2015 Nigerian national election.
CHAT (Cetacean Hearing Augmentation and Telemetry) and UHURA (Unsupervised Harvesting and Utilization of Recognizable Acoustics)
Working with Dr. Denise Herzing of the Wild Dolphin Project, we are creating wearable computers for conducting two-way communication experiments with cetaceans. With CHAT, one researcher uses the waterproof system to broadcast a sound associated with an object with which dolphins like to play. A second researcher, upon detecting the sound, passes the object to the first. The researchers pass objects back and forth, further associating the sound with the object. The goal is to see if the dolphins mimic the sound in order to "ask" for the play object. The wearable computer uses pattern recognition technology to detect these mimicked sounds. In a more long-term effort, UHURA uses pattern discovery techniques in an attempt to uncover fundamental units of dolphin vocalizations.
CHAT (Cetacean Hearing Augmentation & Telemetry) is a wearable underwater computer system engineered to assist researchers in establishing two-way communication with dolphins. The project seeks to facilitate the study of marine mammal cognition by providing a waterproof mobile computing platform. An underwater speaker and keyboard enable the researchers to generate whistles. The system is equipped with a two-channel hydrophone array used for localization and recognition of specific responses, which are translated into audio feedback. The current system is the result of multiple field tests, guided by the researchers' feedback and the environmental constraints.
CheckDroid is a service for Android development teams to test and support their applications on different devices. We are creating the next generation testing & debugging tools for mobile developers.
Testing mobile apps across different platforms is challenging because of the sheer number of device types -- 22 iOS devices & 18K Android devices. This is often referred to as the Fragmentation problem.
Our demo will present two tools:
1. App Mirror: This is a capture-replay tool that allows a mobile developer to record their interactions with the app on one device and see the results of the same interaction across multiple devices.
It allows for both LIVE replay for manual testing and for reporting any differences or issues in an offline report.
2. Cloud Test: This is a web based environment that allows the developer to interactively write tests for their app and then run these tests on a test-bed of devices.
Digital tools exist for creating practically every type of artistic, creative, or communicative digital artifact, including pictures, music, video, and computer animation. This project explores a combined AI-HCI approach to participatory intelligent agents that help amateurs create digital moving image media, such as machinima.
In collaboration with colleagues from Malmö University in Malmö, Sweden, the AEL is helping to develop a mixed-reality experience that is a narrative of cultural moments from the first half of the twentieth century. The Swedish project, under the direction of Profs. Maria Engerberg and Per Linde, is called Stadsfabula. The AEL is helping to create and test an Argon application that will recognize historic photographs on the walls of a museum space and play associated video and audio. This is an experiment in the use of the Argon-aframe platform to create a compelling multimedia experience that is also easy to program and to modify.
A research initiative that explores the potentials and challenges of civic and participatory media, investigating a set of research questions that probe the relationship between technology, place, storytelling, and community engagement. The Sweet Auburn Digital Media Initiative aims to create a platform to inform and engage local communities through the mediation of shared public spaces, digital media, mapping, and storytelling. These applications seek to both highlight and preserve the important history of the neighborhood as a vital center of innovation, commerce, and community among African Americans and the center of the Civil Rights Movement during the era of segregation, as well as contribute to the current revitalization efforts within the neighborhood.
ClipLine is a social sharing mobile platform that helps users turn their favorite TV scenes into customized GIFs and instantly share them with their friends and the outside world. Voting up the best GIFs, re-clipping, and following other accounts will also be main features of ClipLine.
Early detection of symptoms is of critical importance in diagnosing and treating cognitive dysfunction. One important instrument utilized for detecting early signs of cognitive dysfunction is the Clock-Drawing Test. In this test, patients are asked to draw a clock face at a certain time, and are evaluated on how well they perform this task. At present, analysts must individually administer and assess each test a person completes. Automating the process would grant many advantages: the patients could complete the clock test more often to measure improvement, stabilization or variation over time; the patients would receive immediate feedback on their results; the evaluation structure would become more standardized for broader assessment; and multiple evaluation tools could be utilized simultaneously. Toward these ends, the ClockReader project will seek to automate the administration and evaluation of Clock-Drawing Tests on tablet PCs. The ClockReader project will then be tested on both past Clock-Drawing Tests and new tests performed by new participants.
Collective sensing is a novel mobile technology which aims to build better human networks. It uses multiple informants to collect information regarding an individual in a variety of contexts with the goal of creating a more holistic story.
This project is developed through an ongoing collaboration with the Historic Westside Cultural Arts Council. Through a series of design workshops and public events we are co-designing mobile and social technologies to help cultivate a shared community identity to support local civic engagement. By working directly with community members, we are able to build technology platforms suited to their specific needs and which amplify their values and concerns as the community goes through significant changes.
Pretend play helps children develop a wide range of cognitive skills and is therefore a critically important skill for kids to learn. Some children, such as those on the Autism Spectrum, have difficulties engaging in pretend play. This project seeks to understand and model what constitutes successful pretend play in order to design and implement technologies to support and facilitate highly engaging pretend play. The exact nature of that intervention is an open question, and we are exploring several exciting options including a robotic play partner and an immersive virtual play world. The first step in this initiative is building a cognitive model of play and then developing a computational framework that enables an artificial intelligence system to generate improvisational play behaviors based on our computational model of play. In our demo, we will show some early results from a study observing adult dyads engaged in play behavior as well as the first prototype of the immersive virtual play world.
Connected living is the fast-growing intersection of mobile, wearable, home, community, car, and other technologies that assist individuals in accomplishing more seamless interactions and goals in daily life. Mobility and cloud computing are two pillars of growth that have brought about significant changes in industry. Cloud computing, big data, mobility, and low-cost sensors are driving the internet of things and connected industries, and the internet of things is forcing transformation and innovation across the connected home, connected workplace, and connected city. It is estimated that the Connected Living market will reach USD 730 billion by 2020.
We are in the process of defining the Connected Living Research Initiative (CLRI) to bring together industry stakeholders, academic/research faculty, and civic partners in defining the future of the connected life. CLRI is currently onboarding partners to delineate research goals that include (but are not limited to) the future impact of big data, improved user experience in daily activities, and data security and privacy in this ever more connected daily experience.
The Convergence Innovation Competition (CIC) is a unique competition open to all Georgia Tech students and is run in both the Fall and Spring semesters. Each year the categories in the CIC are defined by our Industry partners who provide mentorship, judging, and category specific resources which are often available exclusively to CIC competitors. While the competition is not tied to any specific course, competitors are often able to take advantage of class partnerships where lecture and lab content, guest lectures, and projects are aligned with competition categories. CIC Competitors are supported by GT-RNOC research assistants who provide technical support and shepherd teams through the competition process. The overarching goal of the CIC is to create innovative and viable products and experiences including a strong user experience and a business case. Winning entries will include a working end-to-end prototype which operates on converged services, media, networks, services, and platforms. CIC winners go on to commercialization, other competitions, as well as internship and job opportunities strengthened by their competition experience.
Over 29 million people in the U.S. live with type II Diabetes. There are many types of medications available to help manage Diabetes, and these medications impact patients' lives in unique ways. Following tenets of evidence-based medicine, participatory design and shared decision making, design researchers at the Mayo Clinic have created a set of cards for use in patient-physician conversations, to help both parties reach a decision on diabetes medication choice. I'm working on an updated digital version of this decision aid, which offers opportunities for tailored content and easier-to-update information while aiming to maintain the flexible, accessible spirit of the original tool.
This project involves the design and evaluation of an interactive computer game that allows deaf children to practice their American Sign Language skills. The game includes an automatic sign language recognition component utilizing computer vision and wireless accelerometers. The project is a collaboration with Dr. Harley Hamilton at the Atlanta Area School for the Deaf.
CopyCat and PopSign are two games that help deaf children and their parents acquire language skills in American Sign Language. 95% of deaf children are born to hearing parents, and most of those parents never learn enough sign language to teach their children. As short term memory skills are learned from acquiring a language, many deaf children enter school with short term memory of less than 3 items, much less than hearing children of hearing parents or Deaf children of Deaf parents. Our systems address this problem directly. Even though they are still under development our games have been shown to be effective in multiple user studies.
Every day, ordinary Internet users engage with complex copyright laws. Particularly in the context of creative work and appropriation, they are making decisions related to legal areas that are notoriously gray. Where legal knowledge is imperfect, social norms and ethical intuitions fill in the gaps. This research attempts to understand how these decisions are made, how norms and knowledge differ in different creative communities, and what lessons can be derived for online community management and design.
COSMOS (COmputational Skins for Multi-functional Objects and Systems) is an interdisciplinary collaborative project to design, manufacture, fabricate, and apply "computational skins". COSMOS consists of dense, high-performance, seamlessly-networked, ambiently-powered computational nodes in the form of 2D flexible surfaces that can process, store, and communicate sensor data. Achieving this vision will redefine the basis of human-environment interactions by creating a world in which everyday objects and information technology become inextricably entangled. This will also enable alternative and neuromorphic computing that can change the foundation of computing today.
This project introduces a new simulation technique to enable detailed dexterous manipulation of cloth. Without reimplementation or substantial modification, existing cloth simulators can only approximate limited interaction between cloth and rigid bodies due to the incorrect computation of contact forces. For example, a simple scenario of two fingers pinching a piece of cloth often results in the cloth slipping out of the hand. Our technique provides a simple solution to cloth-rigid coupling using existing cloth and rigid body simulators as-is. We develop a lightweight interface so that the rigid body and cloth simulators communicate in a demand-driven manner to achieve two main goals: allow the rigid bodies to impart friction forces to the cloth, and avoid unsolvable collision situations between the rigid bodies and the cloth. We demonstrate a set of basic manipulation skills, including gripping, pinching, and pressing, that are frequently seen in daily activities such as dressing and folding clothes.
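The friction goal can be illustrated at a single contact with a Coulomb clamp: the tangential force a rigid body imparts on a cloth vertex is limited to μ times the normal force. A minimal sketch (the function and variable names are hypothetical, not part of the system described above):

```python
import numpy as np

def coulomb_friction(f_contact, normal, mu=0.5):
    """Clamp the tangential part of a contact force to the Coulomb
    friction cone |f_t| <= mu * |f_n|.

    `normal` is the unit contact normal; `f_contact` is the total
    force at the contact. Illustrative only.
    """
    f_n = np.dot(f_contact, normal) * normal   # normal component
    f_t = f_contact - f_n                      # tangential component
    max_t = mu * np.linalg.norm(f_n)
    t_norm = np.linalg.norm(f_t)
    if t_norm > max_t and t_norm > 0.0:
        f_t *= max_t / t_norm                  # sliding: project onto cone edge
    return f_n + f_t
```

A coupling interface can query the cloth simulator for contacts each step and feed forces processed this way back to the cloth, leaving both simulators' internals untouched.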
Social media has quickly risen to prominence as a news source, yet lingering doubts remain about its ability to spread rumor and misinformation. Systematically studying this phenomenon, however, has been difficult due to the need to collect large-scale, unbiased data along with in-situ judgements of its accuracy. In this paper we present CREDBANK, a corpus designed to bridge this gap by systematically combining machine and human computation. Specifically, CREDBANK is a corpus of tweets, topics, events and associated human credibility judgements. It is based on the real-time tracking of more than 1 billion streaming tweets over a period of more than three months, computational summarizations of those tweets, and intelligent routings of the tweet streams to human annotators—within a few hours of those events unfolding on Twitter. In total CREDBANK comprises more than 60 million tweets grouped into 1049 real-world events, each annotated by 30 human annotators. As an example, with CREDBANK one can quickly calculate that roughly 24% of the events in the global tweet stream are not perceived as credible. We have made CREDBANK publicly available, and hope it will enable new research questions related to online information credibility in fields such as social science, data mining and health.
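Aggregates like the 24% figure above are straightforward to compute over the corpus. A hypothetical sketch, assuming per-event annotator ratings on a -2..+2 scale and an illustrative majority-style credibility rule (the corpus's exact aggregation may differ):

```python
def fraction_not_credible(ratings_by_event, threshold=0.7):
    """Fraction of events whose annotations fall below a credibility bar.

    ratings_by_event: dict mapping event id -> list of annotator
    ratings on a -2..+2 scale (CREDBANK has 30 annotators per event).
    An event counts as "perceived credible" here if at least
    `threshold` of its annotators gave a strictly positive rating --
    an illustrative rule, not necessarily the paper's.
    """
    not_credible = 0
    for ratings in ratings_by_event.values():
        positive = sum(1 for r in ratings if r > 0) / len(ratings)
        if positive < threshold:
            not_credible += 1
    return not_credible / len(ratings_by_event)
```

With the released corpus, the same loop runs over 1,049 events with 30 judgements each.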
In 2000 the United Nations announced the Millennium Development Goals, a set of development targets and objectives to reduce poverty and improve health, education, and the environment. These goals are set to be completed by 2015. The system of United Nations organizations is currently formulating a new set of development goals for beyond 2015. To create a more participatory process, the International Telecommunication Union uses an online platform to crowdsource the ideas and comments of youth around the world. The ITU requested the assistance of the TID lab to provide interpretation and textual analysis of the youth’s priorities based on the crowdsourced data. We are developing new visualizations and analysis of this unique dataset. This analysis will help inform the post 2015 UN development agenda.
A key idea in CSLearning4U is that we can design CS learning opportunities. Simply wrestling with an interpreter or compiler can't be the best way to learn about computer science. Throwing people into the deep end of the pool can teach them to swim, but there are better ways. We want to do better than a book for CS learning, and we want to design the phonics of computing education to integrate with the "whole language learning" of programming.
We are creating a new distance-learning medium for computing education especially for in-service high school teachers based on ideas from instructional design and educational psychology. In-service high school teachers are particularly time-constrained (and thus need efficiency) and they are more metacognitively aware than other students (and thus able to better inform the project design). The new medium will combine multiple modalities, worked examples, and structure based on cognitive models of designers' knowledge. The research hypotheses are that (1) the teachers will learn CS knowledge in the on-line setting, (2) the teachers will be more efficient at programming tasks, and (3) the teachers will find the materials useful and satisfying. Because of its focus on teachers, the project can potentially have broad impact, in particular on the strategies for training the 10,000 teachers envisioned in the CS 10K Project. The project will establish models and design guidelines that can be used for the creation of other learning materials, including materials for students in, for example, the proposed new CS Principles AP course.
CulturEat is an application that connects skilled home chefs and cooks with urban diners who are looking for authentic and affordable cultural meals. It gives the cooks the ability to upload their masterpieces and sell them to earn extra income while sharing the cultural heritage and stories of the dish with diners. It helps to restructure the current societal food sharing system and promote positive cultural impacts on social cohesion via food.
We present our work on computing an average curve given a set of planar input curves, with select applications. This work, to be presented soon at the Symposium on Geometric and Physical Modeling, provides a mathematical formulation and a fast algorithm for the problem of finding an average curve given a set of input curves. Applications in the fields of animation and statistical analysis are highlighted.
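For intuition, a naive baseline for curve averaging resamples each curve uniformly by arc length and takes the pointwise mean; the formulation above handles correspondence and parameterization far more carefully than this sketch does:

```python
import numpy as np

def resample_by_arclength(curve, n=100):
    """Resample a polyline (m x 2 array) to n points spaced uniformly
    in arc length, via linear interpolation along the curve."""
    seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
    targets = np.linspace(0.0, s[-1], n)
    x = np.interp(targets, s, curve[:, 0])
    y = np.interp(targets, s, curve[:, 1])
    return np.column_stack([x, y])

def naive_average_curve(curves, n=100):
    """Pointwise mean of arc-length-resampled curves. A naive baseline
    only: it assumes the curves are similarly oriented and parameterized,
    which a proper average-curve formulation does not."""
    return np.mean([resample_by_arclength(c, n) for c in curves], axis=0)
```

Even this baseline makes clear why correspondence matters: curves traversed in opposite directions, or with very different shapes, average poorly under a fixed parameterization.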
Fifty percent of all trips are 3 miles or less, yet only 1.8% of those trips are biked. Meanwhile, 35.7% of US adults are obese and the transportation sector accounts for 32% of US greenhouse gases. One of the main reasons citizens do not use the healthier mode of cycling is a lack of safe infrastructure—dedicated bicycle routes, roads with bicycle lanes, and other designated bicycle facilities. The City of Atlanta has a desire to put proper cycling infrastructure in place but needs better information from citizens about where they currently cycle and where they would like to cycle. Therefore, the initial goal of the Crowd-sourced Bicycle Route Desirability project is to modify the open-source CycleTracks application (previously adopted in San Francisco, CA, and Austin, TX) for use in Atlanta. CycleTracks tracks the existing routes of cyclists using their smart phones and allows comparison of these routes to the quickest path from origin to destination. This allows us to begin to make appropriate infrastructure improvements to the most traveled routes in a study area by seeing logical paths that cyclists avoid. A second phase of the project would develop applications allowing riders to express their desired bike routes even if they currently do not cycle because of a lack of adequate facilities.
The term “artifact” has at least two meanings. From a technical perspective, an artifact is an unintentional pattern in data, arising from processes of collection and management. From a cultural perspective, an artifact is a designed object, with a social and material history. At metaLAB, which is grounded in both technical and cultural methods, we are examining digital artifacts with both meanings in mind. In Data Artifacts, we are developing visual methods of revealing the often-unacknowledged patterns in digital data that speak to the social and material history of its accumulation. Never raw, all data carries traces of human labor, intentions and values. Data Artifacts is an inquiry into the deep history of digital collections. Digital cultures, which devote vast resources to the harvesting and handling of data sets, can be understood in part through the particular ways in which they pattern data. Artists and designers with knowledge of computing are poised to uncover such data artifacts through visualization. However, most formal approaches to visualization call for data to be filtered and standardized at the outset. In contrast, we focus on the heterogeneity inherent in human-made data. The messiness of data sets can tell us much about the history of their production. The ambition of Data Artifacts is to develop new tools to contemplate such large-scale collection processes and enable richer discussions about their technical and cultural significance.
Data Illustrator is a vector editing tool for creating data visualizations and infographics. Graphic designers can use Data Illustrator to craft their own visualizations by repeating and styling shapes with data-driven rules. The tool supports the creation of expressive, flexible, and parametrically defined visualizations without the need to program them.
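The repeat-and-style idea can be illustrated with a toy sketch (not Data Illustrator's actual model; the field names are invented): one template shape is repeated per data row, with visual properties bound to data fields.

```python
# Toy illustration of data-driven shape repetition: one circle per data row,
# with position and size bound to the chosen fields.
def bind_shapes(rows, x_field, size_field, x_scale=10, size_scale=2):
    """Return one circle spec per data row, styled by the bound fields."""
    return [{"shape": "circle",
             "x": row[x_field] * x_scale,
             "r": row[size_field] * size_scale}
            for row in rows]

data = [{"month": 1, "sales": 4}, {"month": 2, "sales": 7}]
print(bind_shapes(data, "month", "sales"))
```

In the tool itself, designers express such bindings visually rather than in code.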
Using the Z-wave protocol stack, we are building a controller for the Aware Home using a Raspberry Pi that will allow users to control and query device data on a dashboard. This collected data will then be used to predict usage patterns and offer power-saving tips. Finally, a user-friendly rules engine enables users to create custom rules using sensor data.
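The rules-engine idea can be sketched in a few lines (a minimal illustration, not the dashboard's actual API; the device names, sensor keys, and actions are hypothetical):

```python
# Minimal user-defined rules engine: each rule pairs a sensor key with a
# predicate and an action to run when the predicate holds.
class Rule:
    def __init__(self, sensor, predicate, action):
        self.sensor = sensor        # e.g. "living_room.temperature"
        self.predicate = predicate  # e.g. lambda v: v > 26
        self.action = action        # callable run when the predicate holds

def evaluate(rules, readings):
    """Run every rule whose predicate matches the latest sensor reading."""
    fired = []
    for rule in rules:
        if rule.sensor in readings and rule.predicate(readings[rule.sensor]):
            rule.action()
            fired.append(rule.sensor)
    return fired

actions_log = []
rules = [Rule("living_room.temperature", lambda v: v > 26,
              lambda: actions_log.append("turn_on_fan")),
         Rule("hallway.motion", lambda v: v is False,
              lambda: actions_log.append("lights_off"))]
print(evaluate(rules, {"living_room.temperature": 28, "hallway.motion": True}))
```

A dashboard front end would let users compose such sensor/predicate/action triples without writing code.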
DTM API is a musical data sonification toolset for rapid development and experimentation with web-based audio applications. The API offers a data-agnostic, adaptive, and highly interactive real-time system, with reusable and extendable musical structure models to represent data in various ways. The API is being used in several projects, including the Beltline Social Dashboard and the Decatur Civic Sonification with Sonic Generator performance presented at Atlanta Science Festival 2015, in collaboration with GTRI Configurable Lab.
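The core data-agnostic mapping idea can be illustrated with a small sketch (a hypothetical illustration, not the DTM API itself): arbitrary numeric data is linearly scaled onto a musical pitch range.

```python
# Scale arbitrary numeric data onto MIDI note numbers (C3..C6 by default),
# so any data series can be played back as a melodic contour.
def scale_to_pitch(values, low_note=48, high_note=84):
    """Linearly map data values onto a MIDI note range."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1  # avoid division by zero for constant data
    return [round(low_note + (v - lo) / span * (high_note - low_note))
            for v in values]

print(scale_to_pitch([0, 5, 10]))  # [48, 66, 84]
```

Richer sonifications map additional data dimensions onto rhythm, timbre, or dynamics in the same way.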
Dear Games is an educational program collaboration between Charis Circle, members of the GA Tech Game Studio, and the Different Games Collective. We offer inclusive events to support diverse participation in videogame development and culture at the South's oldest independent feminist bookstore, Charis Books and More, with consideration of the ways that longstanding feminist community organizations can inform contemporary efforts to increase diversity in STEM.
Debate Slates is a second screen application experience designed to facilitate discussion of theories and future plot developments of long-form narrative television (e.g. Game of Thrones, True Detective and Fringe); Debate Slates also hopes to facilitate discussions on current events, focusing on televised reportage of ISIS and the developing situation in Iraq and Syria.
This project aims to define the concept of digital self-harm for the HCI community. In this project we have explored the limited HCI scholarship related to self-harm within a social computing context. We offer the community an operationalized definition of digital self-harm and propose a theoretical base to orient related research questions into actionable activities. We also describe a research agenda for digital self-harm, highlighting how the HCI community can contribute to understanding and designing technologies for self-harm prevention, mitigation, and treatment.
We are living in a multitasking society and experiencing an unprecedented level of sensory and cognitive overload: with too many things going on at once, we are more likely to be absentminded. The question this project seeks to answer is how technology can promote mindfulness and become part of the process of achieving it.
We design, deploy, and evaluate mobile health tools that support and meet patients' needs over time, from diagnosis of a chronic disease through treatment and into survivorship. Our research explores the ability of personalized, adaptable, mobile tools to support patients over the course of their individual breast cancer journeys. Our technology needs to anticipate and recognize barriers to care that occur at various points in a cancer journey, adapt with the patient as they navigate these barriers, and successfully provide patients with the tools and resources they need to manage and mitigate such barriers. The goal of our work is to improve patient health outcomes by supporting patients outside of the clinic, helping them to learn about, engage with, and manage their disease alongside the demands of daily life.
Like traditional media, social media in China is subject to censorship. However, in limited cases, activists have employed homophones of censored keywords to avoid detection by keyword matching algorithms. This project focuses on designing an interactive, client-side tool that promotes free speech. An iterative design process, involving the input of end users, will deliver a final design. In evaluating the design we will target the following research questions:
RQ 1. Does the UI workflow fit into context of use of the users?
RQ 2. What are the user preferences in terms of providing input and receiving output?
RQ 3. Do users prefer a desktop version, a mobile version, or both?
RQ 4. Does the design feel familiar and similar to the UIs in China?
RQ 5. Does the UI feel trustworthy (does the UI breed and draw trust from the users)?
For details on the algorithm, see the project Algorithmically Bypassing Censorship on Sina Weibo with Nondeterministic Homophone Substitutions.
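The substitution idea the companion project describes can be sketched briefly (a toy illustration, not the actual algorithm; the two-entry homophone table is hypothetical): each censored keyword is replaced by a randomly chosen same-sounding variant, so exact keyword-matching filters no longer find it.

```python
import random

# Nondeterministic homophone substitution: censored keywords are swapped
# for randomly chosen homophones from a lookup table.
HOMOPHONES = {"和谐": ["河蟹", "合鞋"]}  # "harmony" -> same-sounding variants

def substitute(text, table, rng=random):
    """Replace each censored-keyword occurrence with a random homophone."""
    for keyword, variants in table.items():
        while keyword in text:
            text = text.replace(keyword, rng.choice(variants), 1)
    return text

print(substitute("保持和谐社会", HOMOPHONES))  # e.g. 保持河蟹社会
```

Because the choice is random, repeated posts of the same message produce different surface forms, further frustrating exact-match censorship.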
The rise of ubiquitous technology has resulted in opportunities for the design of new interactive museum exhibits that can be customized to families. Children’s museums can be engaging, informal settings in which children learn fundamental science, technology, engineering, and math (STEM) concepts through hands-on experiences. In order to optimize and personalize learning experiences in such informal environments, we propose the concept of a virtual buddy that uses personal, physical, and social context knowledge regarding the child to facilitate new opportunities for STEM learning. To understand how children choose, perceive, and interact with a virtual buddy and how that may impact STEM learning, we conducted participatory design activities with 18 children in a local museum. The goal of this project is to inform the design of a Virtual STEM Buddy (VSB) that could provide contextualized explanations to children, seed parents with contextualized explanations, and bridge the museum experience to other informal learning experiences.
Designs for Foraging is a design project that explores the use of IoT technologies in support of urban foraging. Through this project we are developing use cases; prototyping hardware, software, and user interfaces; and exploring the use of open technologies for image capture and analysis. The underlying motivation for this project is to use design as a means of investigating future practices and to provide the basis for near-term open innovation with IoT in support of alternative practices of agriculture.
Students solved these problems in design[ED] Lab (“Design Education Lab”), a user experience workshop that introduced teenagers and pre-professional adults to design-thinking, to encourage problem solving and critical thinking skill development. This workshop was in partnership with The Bridge Academy (College Park, GA), a full-time High School Diploma and GED Prep program offering a nontraditional path for students. Students used a design-thinking approach to respond to problems based on the College Park Comprehensive Plan (2011 – 2031), by defining the problem, brainstorming solutions, thinking empathetically, iterating on the prototype, and critiquing the work. design[ED] Lab aims to expose underrepresented minorities to design-thinking as a method to solve important problems within their community. By empowering students with the tools to make a difference, we hope to inspire the minds that will change the world.
design[ED] Lab is a research project created and facilitated by Monet Spells, a Master’s student at Georgia Institute of Technology studying Human-Computer Interaction.
Exergames, or exertion video games, are interactive, exercise-based video games that offer a promising in-home approach to physical activity, therapeutic and rehabilitation training, and social interaction for older adults. Research shows that older users have difficulty using exergames due to overly complex interfaces, difficult gestures, and an overall lack of training and familiarity with these systems. To alleviate the usability challenges older adults experience with Kinect-based exergames, we developed a Quick-Start Guide (QSG), a form of instructional guide that displays gesture interactions and trouble-shooting techniques to aid system use. Our current study evaluates three different formats of the QSG to assess which is most effective in helping older adults use these systems. Findings will provide insights into the best methods of constructing quick-start guides for interactive technologies for the older adult population.
Technology is changing the scope and quality of healthcare through applications such as telemedicine and home health technology, offering a cost-effective and accessible means to manage chronic disease. People are increasingly taking a proactive role in monitoring and maintaining their health, e.g., monitoring blood pressure to prevent stroke, or measuring blood sugar levels to regulate diabetes. One of the most pressing health issues we face today is stroke. Statistics from the Centers for Disease Control and Prevention indicate that stroke is the leading cause of serious, long-term disability in the United States. While more and more stroke rehabilitation therapy is conducted in patients’ homes, care providers still require patients to visit the clinic to perform clinical assessments.
We investigate a computational tool – the Digital Box and Block Test (DBBT) – that can help medical professionals record and assess the rehabilitation progress of stroke patients with easy setup. Embedding this technology in residential spaces could also help patients relearn and recall how to use their arms, hands, and fingers. With the system, care providers would be able to more precisely detect, track, and monitor patients’ post-stroke functional motor improvements remotely.
With their high cognition, engineer-like curiosity, and close relation to humans, orangutans are extraordinary users to study. The project aims to provide animal care staff and organizations with new methods for enriching the lives of animals in their care by creating Kinect applications for interactive projections.
Currently 6.8 million children in America have asthma, a disease of the respiratory system that causes inflammation of the airways. An asthma action plan is an individualized health management plan that doctors give to their patients to help control their condition. It functions by illustrating what actions to take at different levels of symptom severity, from day-to-day medication use to emergency situations. A problem arises for the caregivers of asthmatic children who may not have the educational background to understand the information in an action plan. These children may be in danger if their caregivers are unsure of the proper actions to take to treat their symptoms. The asthma action plan also serves as a partnership between the caregiver and physician. An action plan that is difficult to understand may degrade this partnership; research indicates that better communication between caregivers and physicians can lead to better medication adherence. Our solution is to develop a digital icon-based asthma action plan (I-BAAP) that can be integrated into patients' electronic medical records. The system is composed of a physician portal in which doctors input information relevant to a patient. The portal outputs a link to a responsive web application consisting of the I-BAAP and other features that augment communication between caregivers and physicians. Caregivers can access the web app on their phones, increasing possession of an action plan as compared to paper-based plans, which are often lost or misplaced.
Digital Naturalism investigates the role that Digital Media can play for Biological Field Work. It looks to uphold the naturalistic values of wilderness exploration, while investigating the new abilities offered by digital technology. Digital Naturalism unites biologists, designers, engineers, and artists to build and analyze new devices. It focuses on crafting DIY technology and interacting with animals in new ways. In particular, Digital Naturalism looks at how digital media can be used to explore animal behaviors situated in their natural context. Most recently, this research has been carried out directly in the field in the form of Hiking Hackathons. This research originally comes from Andrew Quitmeyer’s PhD research at Georgia Institute of Technology. It now forms a lifelong project and multiple cross-disciplinary collaborations all pursuing the many aspects of Digital Naturalism.
“Don’t Open That Door” is a gesture-based interactive narrative project set in the universe of the TV show Supernatural. This project creates dramatic agency for the interactor by leveraging expectations of the horror genre within a seamless scenario that elicits expressive actions and provides a dramatically satisfying response.
We match interaction and narrative elements to support the following design goals: Story-driven Physical Reactions, Persistent and Uninterrupted Narrative, Scripting of the Interactor by Narrative.
Business ecosystems are characterized by large, complex, and global networks of firms, often from many different market segments, all collaborating, partnering, and competing to create and deliver new products and services. Given the rapidly increasing scale, complexity, and rate of change of business ecosystems, as well as economic and competitive pressures, analysts are faced with the formidable task of quickly understanding the fundamental characteristics of these interfirm networks. Existing tools, however, are predominantly query- or list-centric with limited interactive, exploratory capabilities. We have designed and implemented dotlink360, a web-based interactive visualization system that provides capabilities to gain systemic insight into the compositional, temporal, and connective characteristics of business ecosystems. dotlink360 consists of novel, multiple connected views enabling the analyst to explore, discover, and understand interfirm networks for a focal firm, specific market segments or countries, and the entire business ecosystem.
Collaboration is known to push creative boundaries and help individuals sustain creative engagement, explore a more diverse conceptual space, and synthesize new ideas. While the benefits of human collaboration may seem obvious, the cognitive mechanisms and processes involved in open-ended improvisational collaboration are active areas of research. Our research group has developed a co-creative drawing partner called the Drawing Apprentice to investigate creative collaboration in the domain of abstract drawing. The Drawing Apprentice draws with users in real time by analyzing their input lines and responding with lines of its own. With this prototype, we study the interaction dynamics of artistic collaboration and explore how a co-creative agent might be designed to effectively collaborate with both novices and expert artists. The prototype serves as a technical probe to investigate new human-computer interaction concepts in this new domain of human-computer collaboration, such as methods of feedback to facilitate learning and coordination (for both the user and system), turn-taking patterns, and the roles that control and ambiguity play in effective collaboration.
We propose a general framework for character self-dressing interactions with simulated clothing. We show that by breaking the process of dressing into sub-goals, we can design specific action controllers which, when combined, allow a character to put on a garment in a user-defined style.
Applying driving simulators to in-vehicle research allows a wide range of studies to be performed, particularly when investigating cognitive demand and distraction caused by devices in the car. By using simulations, researchers can investigate driving behaviors in high-risk situations without putting participants or others in harm's way. In-vehicle research currently being conducted within the School of Psychology at Georgia Tech could provide more insight into behavior, and become more broadly applicable, if participants were able to drive in areas that they are familiar with. Specifically, research being done in coordination with the Atlanta Shepherd Center investigating the use of in-vehicle technologies to assist individuals who have had a Traumatic Brain Injury could benefit greatly from these real-location maps. The Georgia Tech School of Architecture has already developed a 3D model of the Georgia Tech campus and some of the surrounding areas, including the Peachtree corridor (26 miles along Peachtree Street). However, in order to make this model usable within the simulator, it must be optimized and converted into a compatible format. Researchers in the School of Architecture and School of Psychology will be working on creating methods and conversion processes that will allow any 3D model to be integrated into the simulator. Development of this conversion process will allow Georgia Tech to offer documentation and map-creation services to other researchers around the world, helping to increase the applicability of in-vehicle research.
Part of the fun of computer games is to master the skills necessary to complete the game. Challenge tailoring is the problem of matching the difficulty of skill-based events over the course of a game to a specific player’s abilities. We have devised a data-driven approach to predict changes in players’ skill mastery over time. By modeling players’ skill mastery, we are able to dynamically select game content that challenges individual players at the ideal level, avoiding frustration and boredom.
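One simple way to realize this idea can be sketched as follows (a minimal illustration, not the project's actual model): track per-skill mastery as an exponential moving average of success, then serve the content whose difficulty sits closest to the current estimate.

```python
# Per-skill mastery as an exponential moving average of success outcomes;
# challenge tailoring picks the difficulty nearest the current estimate.
def update_mastery(mastery, succeeded, alpha=0.3):
    """Blend the latest outcome (success=1.0, failure=0.0) into the estimate."""
    return (1 - alpha) * mastery + alpha * (1.0 if succeeded else 0.0)

def pick_challenge(mastery, difficulties):
    """Choose the available difficulty nearest the player's estimated mastery."""
    return min(difficulties, key=lambda d: abs(d - mastery))

m = 0.5  # neutral prior for a new player
for outcome in [True, True, False, True]:
    m = update_mastery(m, outcome)
print(round(m, 2), pick_challenge(m, [0.2, 0.5, 0.8]))
```

Serving content just above the mastery estimate keeps players in the band between frustration and boredom.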
Computational remixing of hip hop (i.e. using code to control loops and beats to compose music) can be used as a tool for the cultural engagement in computing of underrepresented populations. EarSketch is a digital audio workstation environment, with an accompanying curriculum, that will allow high school and summer workshop students to create their own computational remixes through learning computing principles.
Within the computing field, little has been done to systematically analyze online eating disorder (ED) communities. This research project focuses on understanding how individuals use social media platforms to promote and share their eating disorders with their networks and with the world. We use social computing techniques to identify and analyze content generated across several popular social media platforms. Through this characterization of eating disorder activities online, we draw attention to the increasingly important role that technologists play in understanding how the platforms and technologies that we create are used and misappropriated for negative health purposes. CAUTION: This project includes media that could potentially be a trigger to those dealing with an eating disorder or with other self-injury illnesses.
This study employs gaming technologies and techniques to create an intelligent embodied conversational agent (ECA) to act as a virtual coach. The coach will lower the cognitive effort required by prostate cancer patients to understand key aspects of decision-making, provide more appropriate reference points from which patients can more accurately interpret personal risk, and frame information to optimize the patient's chances of applying his own preferences and values to the decision at hand. A stylized, animated ECA will have a brief, focused conversation with a patient in order to explain, in layman's terms, the various treatment options and their risks and benefits, and will ask questions to assess the patient's medical literacy and values preferences; for example, the patient may value interventions with a lower risk of side effects over being cancer-free.
The eCoach ECA is being developed with the Unity3D game engine and uses gaming AI tools such as behavior trees to model a dialog and ECA behavior. The patient will respond to each ECA question by selecting from among several predetermined answers and the history of patient answers will determine how the conversation unfolds. For example, if the ECA determines that the patient is not sure about the risks and benefits of the various treatment options, it will spend more time explaining what these are as well as ask questions to assess knowledge of them afterward.
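The answer-driven branching described above can be sketched in plain Python (the actual system models this with behavior trees in Unity3D; the node names and question text here are invented for illustration):

```python
# Each dialog node holds a question and maps each predetermined answer to
# the next node, so the history of answers determines how the talk unfolds.
DIALOG = {
    "risks": {"question": "Do you understand the risks of each option?",
              "answers": {"yes": "values", "no": "explain"}},
    "explain": {"question": "Let's review each option's risks and benefits. Ready to continue?",
                "answers": {"yes": "values"}},
    "values": {"question": "Which matters more to you: fewer side effects, or being cancer-free?",
               "answers": {}},
}

def run_dialog(start, selections):
    """Walk the tree, returning the questions asked for the given answers."""
    node, asked = start, []
    for choice in selections:
        asked.append(DIALOG[node]["question"])
        node = DIALOG[node]["answers"].get(choice, node)
    asked.append(DIALOG[node]["question"])
    return asked

print(run_dialog("risks", ["no", "yes"]))  # detours through the explanation node
```

A patient who answers "no" to the risk question is routed through extra explanation before reaching the values question, mirroring the adaptive behavior described above.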
This study represents a multidisciplinary collaboration between Emory University’s School of Medicine, the College of Computing and the Interactive Media Technology Center (IMTC) at the Georgia Institute of Technology.
Prior research has produced mixed results regarding the usefulness of interactivity in multimedia learning. In this study, participants learned to solve part of a Rubik’s Cube using either a tutorial with interactive features or a passive (video-based) tutorial. Participants with low spatial ability benefited more from interactivity than those with high ability, though no performance main effects were found between the tutorials. Targeted use of interactivity could be effective in engaging students and helping them learn.
People with severe motor disabilities such as ALS may not be able to move their facial muscles to communicate. This study is examining the salient features of facial expressions in order to create "emotional prosthetics" - ways for people with disabilities to express emotion. The resulting prosthetics will be controlled by voluntary and involuntary brain signals.
Enhanced In-Vehicle Technologies: Novel Interfaces and Advanced Auditory Cues to Decrease Driver Distraction
In-vehicle technologies such as modern radios, GPS devices, eco-driving displays, and smartphones require users to interact with multiple types of visual menus and lists while driving. Modern technologies require users to navigate these screens using physical buttons and touch screens, although recent advances have included the use of steering wheel buttons, turn wheels, Head Up Displays (HUDs), and others. By designing and prototyping novel menu system interfaces with innovative visual display methods and interaction techniques, and by applying advanced auditory cues to both old designs and these novel interfaces, we attempt to decrease driver distraction, allowing for better driving performance while also improving search times and decreasing the cognitive load on the driver.
Patients suffering from traumatic brain or spinal cord injuries may benefit from neuroplasticity guided and reinforced by motor learning feedback, through reorganization of the neural pathways in intact parts of the brain and spinal cord. An enhanced version of a tongue-operated robotic rehabilitation system is presented for accelerating the rate of improvement in upper extremity motor functions for patients with severe hemiparesis following stroke. A new rehabilitation robot, called Hand Mentor Pro™ (HM), was utilized by reading its pressure and joint angle sensors and combining them with control commands from the Tongue Drive System (TDS) to enable both isometric and isotonic target-tracking tasks in a coordinated tongue-hand rehabilitation paradigm.
Impossible Spaces is a technique that uses self-overlapping architecture to incorporate natural walking in virtual environments without the use of other movement techniques like teleportation or portals. I am showcasing design interventions that, when applied to self-overlapping architecture, enhance the believability of the space and might even lower the threshold of detection of the architectural manipulation. These design techniques can then be used by VR narrative developers to further enhance the believability of their VR narratives.
In an increasingly global and competitive business landscape, firms must collaborate and partner with other firms to ensure survival, growth, and innovation. Understanding the evolutionary composition of a firm’s relationship portfolio and the underlying formation strategy is a difficult task given the multidimensional, temporal nature of the data. In collaboration with senior executives, we have designed and implemented an interactive visualization system that enables decision makers to gain both systemic (macro) and detailed (micro) insights into a firm’s relationship activities and discover patterns of multidimensional relationship formation. Our system provides sequential/temporal representation modes, a rich set of additive crosslinked filters, the ability to stack multiple enterprise genomes, and a dynamically updated Markov model visualization to inform decision makers of past and likely future strategy moves.
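The Markov-model component described above can be illustrated with a small sketch (the relationship-type labels are hypothetical examples, not client data): transition probabilities between relationship types are estimated from each firm's historical sequence of deals.

```python
from collections import Counter, defaultdict

# First-order Markov model: count how often one relationship type follows
# another across firms' deal histories, then normalize into probabilities.
def transition_probs(sequences):
    """Estimate transition probabilities from observed event sequences."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

histories = [["alliance", "licensing", "acquisition"],
             ["alliance", "acquisition"]]
print(transition_probs(histories))
```

Visualizing such a matrix lets decision makers see which moves typically follow which, and hence which strategy moves are likely next.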
The United States’ medical billing system is exceptionally complex. Medical bills and Explanation of Benefits (EOB) statements are difficult even for experts to decipher. In addition, a 2015 survey conducted by TransUnion Healthcare found that 55% of American patients were either sometimes or always confused about their medical bills and that 61% of patients were either sometimes or always surprised by their out-of-pocket costs. Furthermore, healthcare fraud, including medical identity theft, is one of the fastest growing crimes in the United States, costing the nation approximately $30 billion a year. The goal of this research is to identify ways to simplify complexities within the EOB. Even though major problems exist on the clinical and payer side of financial transactions, including human error in medical coding, the issue considered in this project is empowering the end user to deal with the confusion and frustration involved in understanding one’s medical services. The proposed project aims to leverage principles of participatory design to build a mobile application which takes advantage of the latest OCR technology and presents the relevant information via an easy-to-understand interface.
Many patients and caregivers struggle to complete everyday epilepsy self-management practices: remembering to take daily medications, reporting seizure events and self-regulating behaviors such as getting enough sleep.
Jon Bidwell and Beth Mynatt are working with adolescent patients (11-18 years old), caregivers, and clinicians from Children's Healthcare of Atlanta (CHOA) to investigate how mobile and wearable health tracking technologies can support these everyday self-management needs.
In the coming months we will be providing patients and caregivers with a number of mobile and wearable tools that include:
- a mobile phone app for reporting seizures and health information,
- a smartphone for detecting medication adherence,
- a wristband for measuring seizures and sleep at night, and
- a wristband for measuring daily activities and stress throughout the day
The research will include four experimental conditions to investigate:
- The use of mobile phones for collecting twice daily survey information and seizure reports,
- The impact of "smart", context-sensitive reminders for completing daily surveys,
- The impact of health tracking devices and health dashboards on survey response rates and
- The impact of goal setting and daily financial rewards on survey response rates
If successful, this work will contribute technology design implications for greatly improving upon the epilepsy self-management tools currently available to patients and families.
Health dashboards stand to help clinicians identify patient challenges and contact patients between appointments. Many patients and caregivers struggle to complete epilepsy self-management practices: remembering to take daily medications, reporting seizure events, and self-regulating behaviors such as getting enough sleep.
Jon Bidwell and Beth Mynatt are working with attendings at the Children's Healthcare of Atlanta (CHOA) to develop a health dashboard for clinicians. The proposed health dashboard aims to help nurse practitioners review patient and caregiver collected health data, evaluate how well patients and families are keeping up with daily self-management practices and prioritize phone call follow-ups.
In the coming months, patients and families will be given a range of mobile and wearable health tracking technologies. These technologies include:
- a mobile phone app for reporting seizures and health information,
- a smartphone for detecting medication adherence,
- a wristband for measuring seizures and sleep at night, and
- a wristband for measuring daily activities and stress throughout the day
Healthcare professionals are using technologies to stay increasingly connected with patients and caregivers between appointments. This research seeks to help a small number of clinicians to reach a much larger group of patients.
Healthcare professionals rely heavily on patients and caregivers to self-report important health information during treatment. However, in practice, these self-reports are often inaccurate, incomplete and can even be misleading. Mobile and wearable technologies stand to help patients and caregivers to collect more accurate, consistent and reliable data.
In this study, we investigated clinical self-reporting needs in three neurocognitive fields of medicine: neurology, psychiatry, and sleep medicine. In-person expert panel sessions were conducted with 14 clinicians (five epilepsy, four psychiatry, and five sleep medicine specialists) to establish the priority of different types of patient-reported data during diagnosis and treatment. We then conducted online surveys with clinicians from the same specialty areas to further assess the availability and quality of the patient and caregiver self-reported data currently being collected.
The results highlight several important yet underexplored data collection and design opportunities for supporting diagnosis, treatment, and self-management within these three fields, as well as expose gaps between clinical data needs and patient practices. The resulting findings stand to inform the development of technological tools that support patient data collection activities and shared decision making between patients and providers.
Epilepsy treatment requires accurate seizure accounts between appointments for adjusting medications; however, this information is often either unavailable or inaccurate.
- Most patients are unable to recognize seizures at night and therefore under-report them; by contrast,
- mobile and wearable seizure detection devices often over-report seizures due to high numbers of false alarms.
In this study we're investigating patient and caregiver video review as an approach for addressing the shortcomings of wearable technologies that may otherwise be applicable for long-term use in the home.
- The study involves 16 pediatric and 16 adult patients who are being monitored at a hospital Epilepsy Monitoring Unit (EMU).
- The patients are video recorded and wear a pair of seizure detection wristbands that detect possible seizure events.
- The patients and caregivers then review video of these events and dismiss false alarms (e.g., video of head scratching or text messaging while in bed).
The results suggest that a video review can indeed improve the performance of current the wearable seizure reporting as a "second pass". To date we've seen near perfect agreement between patients/caregiver and electroencephalogram technicians.
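As one illustration of how rater agreement of this kind can be quantified (the labels and values below are invented for illustration, not study data), Cohen's kappa corrects raw percent agreement for agreement expected by chance:

```python
# Sketch: Cohen's kappa between patient/caregiver video-review labels and
# EEG-technician labels over the same detected events. Hypothetical data.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two raters over the same events."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each rater's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# 1 = seizure, 0 = dismissed false alarm (e.g., head scratching in bed)
patient_review = [1, 0, 0, 1, 0, 0, 1, 0]
eeg_technician = [1, 0, 0, 1, 0, 0, 1, 1]
kappa = cohens_kappa(patient_review, eeg_technician)
```

Values near 1.0 indicate the near-perfect agreement described above; values near 0 would mean agreement no better than chance.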
Van de Vel, Anouk, Kris Cuppens, Bert Bonroy, Milica Milosevic, Katrien Jansen, Sabine Van Huffel, Bart Vanrumste, Lieven Lagae, and Berten Ceulemans. “Non-EEG Seizure-Detection Systems and Potential SUDEP Prevention: State of the Art.” Seizure 22, no. 5 (June 2013): 345–55. doi:10.1016/j.seizure.2013.02.012.
Health reporting plays an essential role in the diagnosis and treatment of epilepsy. Healthcare professionals currently rely on patients and caregivers to document a range of patient seizure symptoms and health behaviors; however, studies have shown that patients and caregivers struggle with these responsibilities. Inaccurate, incomplete or inconsistent information can impact clinical decision making and increase the time required to find an effective seizure control medication.
Mobile and wearable technologies stand to help address these challenges by providing patients and caregivers with tools for collecting the types of health indicators that clinicians need. In this study, we surveyed clinicians and reviewed and compared the performance of existing seizure detection technologies against current patient self-reporting.
The results from our survey showed that:
- Low-cost video could help clinicians during initial epilepsy diagnosis
- Existing seizure detection devices work best for generalized tonic-clonic seizures (GTCs), which account for only about 30% of seizures
- Existing seizure detection devices are best suited for nighttime use, when patients are less able to report seizures
These findings helped to shape our current research efforts. Bidwell and Mynatt are developing mobile and wearable tools aimed at supporting everyday data collection for patients with epilepsy.
Bidwell, Jonathan, et al. “Seizure Reporting Technologies for Epilepsy Treatment: A Review of Clinical Information Needs and Supporting Technologies.” Seizure 32 (2015): 109–17.
Mobile apps are available to support epilepsy self-management: reminding patients to take medications, reporting seizures and other health indicators, and helping patients learn to self-regulate behaviors.
The study is in its early stages and will review mobile apps on the Android and iOS app stores to investigate:
- What apps are available?
- What aspects of self-management do they address?
- How are health tracking devices utilized?
- How are family members involved, if at all?
The findings are expected to provide us with a starting point for developing a self-management app for pediatric patients with epilepsy this spring.
Moving from 2D and digital to 3D and virtual, Escape Room VR explores the opportunities for computers to communicate with humans more effectively in the medium of virtual reality. This short demo will ignite your curiosity about your surroundings and encourage the discovery of playful interactions. Real-time, 3D, and highly interactive: are you ready to escape the room?
This research project is a second-year MS-HCI Masters project that attempts to design a wearable device to reduce distraction in classrooms. It aims to make it easier for professors to deal with technology issues that may occur (e.g., the Wi-Fi cutting out) in a way that helps them maintain focus on the subject matter of the class.
Mental illnesses such as psychosis and schizophrenia are serious public health concerns. However, timely detection of a psychotic episode is often difficult for several reasons, including social stigma, lack of mental health awareness and literacy, and the retrospective nature of clinical therapy. We examine the potential of leveraging social media disclosures as a new kind of lens for characterizing and predicting experiences leading up to a psychotic episode. In contrast to self-report methodology, where responses typically comprise recollections of (subjective) health facts, social media captures behavior and language in a naturalistic setting. This gives us access to real-time activity and psychological states that can be analyzed to discover and predict behavioral markers associated with a psychotic episode. With an initial dataset of 11,000 tweets disclosing symptoms of psychosis, such as hearing voices, having delusions, and schizophrenia, we develop a computational method to identify behavioral and linguistic markers associated with an episode of psychosis. Further, in collaboration with clinical psychologists, we examine specific user timelines that include mentions of relapse or hospitalization. Based on this data analysis, we aim to build a prediction model that identifies prospective behavioral markers leading up to an episode. We believe information derived from our prediction model can be valuable to clinical psychiatrists in facilitating timely diagnosis.
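Linguistic markers of the kind the paragraph above describes can be as simple as word-usage rates. The following sketch is purely illustrative: the word lists are assumptions for demonstration, not the study's actual feature set.

```python
# Hypothetical linguistic-marker extraction from a tweet's text:
# first-person pronoun rate and counts of symptom-related keywords.
# The word lists below are illustrative assumptions, not study features.
import re

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
SYMPTOM_TERMS = {"voices", "delusions", "paranoid", "schizophrenia"}

def linguistic_markers(tweet):
    words = re.findall(r"[a-z']+", tweet.lower())
    n = max(len(words), 1)  # avoid division by zero on empty text
    return {
        "first_person_rate": sum(w in FIRST_PERSON for w in words) / n,
        "symptom_mentions": sum(w in SYMPTOM_TERMS for w in words),
    }

m = linguistic_markers("I keep hearing voices and I can't sleep")
```

In practice, markers like these would be aggregated over a user's timeline and fed to a classifier rather than inspected per tweet.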
Exergames -- video games played by engaging in physical activity -- could help older adults become more physically active. However, most exergames on the market are not developed with consideration of older adult users’ physical and cognitive abilities. Our current study is evaluating the usability of commercially available exergames for this population by testing two exergames for Microsoft Xbox 360 with Kinect with participants aged 60 to 79. These findings will be leveraged to develop guidelines for designing a tutorial to teach older adults to use exergames.
In the near future, autonomous and semi-autonomous systems will interact with us with greater frequency. When they fail or perform unexpected behaviors, non-experts must be able to determine what went wrong. We introduce “rationalization”, a technique for automatically generating natural language explanations as if another human were describing what the autonomous system was doing. We demonstrate rationalization in the test-bed domain of the Frogger game.
Fan Funhouse is a browser-based video remixing application that allows users to edit webcam videos using a palette of effects inspired by a pop culture franchise. The increasing prevalence of user-generated media production in apps and on the web has coincided with pop culture brands, ranging from The Powerpuff Girls to Peanuts, providing fans with opportunities to quickly create and share personalized "fan media" in their web browsers. While most of these fan media experiences have involved the production of images or GIFs, Fan Funhouse gives users creative agency to easily remix short video clips in the style of brands they love. The test scenario for Fan Funhouse features the Adult Swim comedy duo Tim & Eric, who are known for their retro, lo-fi aesthetic. In the Tim & Eric Fan Funhouse demo, users can apply whimsical effects to make their own videos look like Tim & Eric sketches.
The FIDO Sensors team is creating wearable technology to allow working dogs to communicate. Assistance dogs can tell their owners with hearing impairments what sounds they have heard; guide dogs can tell their owners if there is something in their path that must be avoided. We will be demonstrating a variety of scenarios with five wearable sensors designed for dogs to activate.
In 2014, Flextronics (now Flex) came to Georgia Tech with an interest in integrating and testing devices in our authentic home environment (the Aware Home). They were at a stage in the development of the Wink Hub where they needed a home environment to test range and reliability, as well as to show clients how their products would integrate into a home. The Wink Hub is now available as a do-it-yourself solution for the connected home, relaying messages between in-home devices and the Wink cloud. Devices from different manufacturers, each with its own dedicated app, can now be integrated with the Wink app to provide a more connected consumer experience; for example, locking the front door could trigger light switches to turn off and blinds to close. During this early phase, Georgia Tech students helped test the Wink Hub in various locations around the Aware Home to ensure the reliability of adding, removing, and controlling devices. Since then, Flex has continued to expand its expertise in designing and manufacturing connected home/living products.
Aware Home researchers have collaborated with Flex to consider the future of the connected home environment and have helped educate Flex's clients on how connected-living solutions with greater data intelligence could improve the lives of residents, including solutions targeted at energy resource management, independence, health, and home management.
Currently, 4.4 million Americans have been diagnosed with atrial fibrillation (AF), a condition in which the heart beats in an irregular rhythm. That number is estimated to reach 12-16 million by the year 2050. Patients with atrial fibrillation have over a fivefold increase in the risk of stroke. Because of complications from the current standard of care, anticoagulants (i.e., blood-thinning drugs) used to treat the thromboembolism (i.e., clotting) that results from AF, alternative treatments are actively being sought to decrease complications and stroke risk. With 90% of clots found in the brain originating from the left atrial appendage (LAA), a fingerlike projection off the heart that rarely contracts in AF patients, LAA closure devices have become an increasingly attractive option. However, cardiologists consider the options currently undergoing FDA approval to be first-generation devices with limited functionality that need to be improved upon. Flow Medtech is developing occlusion technology that fully blocks the LAA, conforms to the unique shapes and sizes of individual LAAs, and uses a secure anchoring system to prevent migration. These features will prevent thromboembolism from the LAA and thus significantly reduce the risk of stroke in AF patients.
Over the past 2 years, we have performed experiments to understand what activities within a video game context result in cognitive gains (and which do not). From these findings, we have developed a custom cognitive game called "Food for Thought."
The specific goals of this research program are: to understand how video games can contribute to improvements in cognition; to identify which properties of the gaming environment (novelty, active attention, and/or social interaction) are critical for cognitive improvement; to create an older-adult-specific game that leverages the critical properties identified empirically; and to test the efficacy of this theoretically designed game in producing the largest gains in the cognitive performance of older adults.
Much of the research on educational technology (e.g., MOOCs and adaptive learning systems) has been driven by the capabilities of technology instead of the pedagogy and cognition of learners. Our research takes the opposite approach. A review of the literature on educational technology and instructional methods for teaching STEM courses was used to identify the strengths of technology in education. These findings are being used to develop educational technology and provide heuristics and guidelines for developing effective STEM courses that optimally support learning.
Successful social interactions are essential to our quality of life. Being aware of our own internal emotional state as we interact with other individuals maximizes our chances of effectively co-regulating with them, thereby enhancing the quality of our interactions. The challenge lies in the fact that, in today's busy world, we often become so overwhelmed that we lose the ability to read our own internal emotional states, hindering our ability to self-regulate and potentially hurting the quality of our interactions.
Strong evidence indicates that in successful social interactions, synchrony occurs at the physiological level between individuals as they interact. Building on this, we introduce G.L.I.M. (Glass Live Interaction Monitor), a system that helps the user self-regulate in situ by combining three comfortable, wireless, non-obtrusive wearable devices.
G.L.I.M.'s components are: a) Google Glass, b) an Electrodermal Activity (EDA) / Galvanic Skin Response (GSR) sensor, c) a heart rate monitor, and d) a laptop application for offline analysis and self-reflection.
Case Study: Given that interacting with children with autism (CWA) can prove challenging, and that their primary bridges to the world are their parents and therapists, it is essential for these caregivers to be adequately self-regulated in order to maximize the quality of their social interactions. G.L.I.M. offers the possibility of improving parents' and therapists' internal self-awareness while they interact in their natural environment. In a second stage, we also plan to instrument the CWA so we can monitor both the caregiver's and the child's internal states in situ, providing an even deeper insight into how the interaction flows. Gaining knowledge of both interacting partners' invisible physiological signals can prove essential in providing strategies oriented toward maximizing the quality of their relationship.
A CAPTCHA is a challenge-response test used on the Internet to prevent bots from accessing web services that are designed for humans. We are investigating Automatic Game-based CAPTCHA Generation (AGCG), in which an AI system generates games that, when played, distinguish between humans and bots. A game-based CAPTCHA takes advantage not only of bots' difficulty performing pattern/object recognition, but also of their lack of commonsense knowledge. It is thus more secure than traditional visual CAPTCHAs while remaining easy and fun for humans. Furthermore, our AGCG system is capable of learning new commonsense knowledge based on users' responses to the game-based CAPTCHAs.
Games with a purpose (GWAPs) have proven to be effective solutions for solving difficult problems, labeling data, and collecting commonsense knowledge. Unlike traditional games, GWAPs must balance acquiring accurate solutions or data against maintaining player engagement. However, when it comes to designing GWAPs, the effects of different game mechanics on accuracy and engagement are not well understood. We have conducted two studies to understand how different choices of game mechanics affect player behavior. The first study (Cabbage Quest) compares cooperative and collaborative game mechanics. The second study (Gwappy Bird) compares different difficulty levels.
The complexity of television shows has been increasing. In order to follow a story, viewers might be expected to stay abreast of more plot threads, remember more characters, and retain information introduced in earlier seasons. Media technology has made the job easier by allowing viewers to review in various ways; they may replay a scene or entire episodes, they may visit an online forum for fans, or they may play a video game that is related to that story.
Game of Game of Thrones is a video game design intended to explore how a video game could enhance the viewing of a television series. Specifically, what could be accomplished by an episodic game whose episodes are interleaved with series episodes? Our design was guided by two goals: 1) help viewers cope with increasing information requirements and 2) offer an additional dramatic layer to the series, one that could be harmlessly eschewed by viewers who don't care to play a game.
The transition from novel to television creates the challenge of compressing a story into episodes limited by time and budget. HBO's adaptation of Game of Thrones is a rich tapestry of characters and narratives; however, viewers can lack back story, geographic awareness, and an understanding of character relationships. This second-screen companion app orients viewers to the world of Westeros by mapping families throughout episodes. Greater character understanding is achieved by mapping character relationships, both among the characters present in each scene and across the episode.
We have a multi-year project exploring how game performance and player behavior can be used to perform scientifically valid cognitive, personality, skill, and behavioral measures. This project involves hypothesizing about how game mechanics, levels, situations, etc., can assess aspects of a player that are currently measured via validated traditional tests, activities, and interviews; designing games around these hypotheses; and running user studies. Another aspect of this work is exploring how theming, feedback, and game type influence assessment validity and players' desire to play the game.
Georgia Tech and the Human Interface Branch of NASA partnered to find a way to detect astronauts' body positions in space. In the zero-gravity space environment, it becomes difficult to monitor tasks that lead to repetitive stress injuries or fatigue. Monitoring movement would help NASA pinpoint high-stress actions and make adjustments to corresponding mission tasks. We developed an unobtrusive, textile-based system to monitor astronauts' arm position in real time, in zero gravity, and without the constraints of camera-based motion-input devices.
Robotics has been considered as one of the five key technology areas for defense against attacks with weapons of mass destruction (WMD). However, due to the mass impact nature of WMD, failures of counter-WMD (C-WMD) missions can have catastrophic consequences. To ensure robots’ success in carrying out C-WMD missions, we have developed a novel verification framework in providing performance guarantees for behavior-based and probabilistic robot algorithms in complex real-world environments. We cannot assume the luxury of a do-over; we must get it right the first time.
Giants in the Sky is a Tangible User Interface (TUI) that explores the role of mass and gravity in the life and death of extrasolar systems. Using various tangibles with different physical attributes, this TUI aims to teach basic concepts of astronomy in science museums. These tangibles allow users to create and manipulate digital celestial objects in a sandbox simulation.
Gleaning is the practice of salvaging food left over from its intended use. Our research delved into the activities of gleaning with an emphasis on the tools used in gleaning. From this research we identified a series of design opportunities. Perhaps the most fertile opportunities are related to socio-technical networking: the processes and infrastructures for providing information about the availability of food for gleaning and access to the actors who can move and store gleaned food.
The gloHood is a wearable technology garment that amplifies and augments the expressive movement of a dancer. It provides the novice audience with an accessible affordance to better appreciate and understand modern dance, and gives dancers new tools to better communicate with the audience, with each other, and with themselves. The garment provides the dancer with a gesture-control interface through embedded RFID tags and a motion-control interface through accelerometers sewn into the garment. Each of these can trigger playback of animated light patterns on an array of LEDs arranged over the neck and shoulders. This gives the dancer control over the garment while in use and the ability to enhance his movement. The garment was designed and tested in collaboration with local dance troupe gloATL.
The gloSkirt is a wearable technology garment designed for Mary Jane Pennington of dance troupe gloATL. The team wanted to give her an experimental tool to challenge her own movement style and better engage audiences new to dance. A base layer of LEDs responds to resistive sensors embedded within layers of the skirt, causing the garment to ‘pulse’ and ‘breathe’ as the dancer crushes and separates the layers with her movements.
This project is a qualitative study of non-textual mobile communication practices in Southern China. Examining the rapid proliferation of emoji in WeChat use, we attend to the lessening dependence on text. We use interview and observation data from 30 participants to investigate how rural, small town, and urban Chinese adults creatively and innovatively balance the use of emoji and text in their communication, as we envision the evolution of emoji into a modality of its own. We look into various possibilities for future work to explore circumventing the prerequisite of print literacy for mobile communication, especially for low-literate populations.
Despite growing awareness of the term “food desert,” millions of people still have poor access to healthy food. The focus of this research is to help students living in food deserts get better access to grocery stores. Grocery Pool is a mobile application that students can use to collaborate and plan trips to grocery stores.
The annual Clough Commons Art Crawl serves as a unique opportunity for Georgia Tech students to close their books, catch their breath, and enjoy the therapeutic effects of art. The blank walls of the Clough Commons will once again be transformed into a make-shift gallery, all centered around the artistic work of Georgia Tech students.
The RNOC has built the companion app for the Art Crawl utilizing Augmented Reality technologies and the RNOC's Dev Hub platform
GTJourney is an opportunity for all members of the Georgia Tech community to collaborate on applications and solutions that benefit the campus. It is a virtual focal point for students, faculty, and staff to develop ideas and solutions, find technical support and resources, advertise and access campus data, and share applications and experiences.
GTMobile is a web portal, built and maintained by the GT-RNOC, for the deployment of web applications. It is meant to be a resource that benefits the Georgia Tech community by providing a place where any student, staff member, alumnus, or faculty member can host an application or service. GTMobile features capabilities such as integration with campus authentication and authorization, so that applications and services can be differentiated and offered either to the active GT community or to the public.
GTMobile is also the showcase for the winning entries of Georgia Tech’s Fall Convergence Innovation Competition (cic.gatech.edu). The portal is open to the entire GT community, and all are encouraged to host their applications on it so that GTMobile remains the singular web point of presence for GT-based services.
This glassware is designed for the Georgia Tech campus community and visitors. It uses your location information to help you learn what buildings are nearby and find the nearest bus stop. This demonstrates how easy it is to leverage our existing APIs and resources to support new platforms and development.
Gundam VR is a virtual reality adaptation of the Japanese animated television show Mobile Suit Gundam: Iron-Blooded Orphans. In this virtual reality experience, you take the role of protagonist Mikazuki Augus, a young soldier who pilots a giant robot, known as a Gundam, in battle as a mercenary. Left paralyzed on his right side due to the physical strain of piloting his Gundam, Mikazuki is only able to control his entire body when plugged into his robot, the Barbatos. The scenario has Mikazuki transport himself to his Gundam through a hangar while still paralyzed on his right side, only to regain his bodily autonomy once he is plugged into the Barbatos and ready for combat. This virtual reality experience asks how the giving and taking away of agency in VR can be used to simulate physical impairment.
Come see the tools that we use to create one-of-a-kind research prototypes. We have everything from laser cutters and 3D printers to table saws and soldering irons, and we use them to create many of the custom electronics, cases, and wearable prototypes you see in our demos.
The Prototyping Lab is located in the basement of the building, so just look for signs by the elevators to go down there, or meet by the elevators on the 2nd floor every quarter hour on the quarter hour to get a tour.
“Haptic Mirror Therapy Glove” is an interactive mirror therapy glove for the treatment of a paretic limb following a stroke. It allows the user to stimulate the fingertips of their affected hand by tapping the fingers of their unaffected hand, using force-sensing resistors to trigger linear resonance actuators on the corresponding fingers. The glove may be useful to stroke survivors and their therapists by encouraging the development of new multi-sensory rehabilitation exercises, which might better help recover lost sensation and strength in the fingers. This project was selected as the “Best Functional Design” at the 2013 International Symposium on Wearable Computing in Zurich, Switzerland.
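The tap-to-buzz mapping described above can be sketched as a simple control loop. This is a hypothetical illustration, not the glove's actual firmware; the threshold value and function names are assumptions, standing in for real hardware I/O.

```python
# Hypothetical control-loop logic for a mirror-therapy glove: when the
# force-sensing resistor (FSR) on a finger of the unaffected hand crosses
# a threshold, the linear resonance actuator (LRA) on the corresponding
# finger of the affected hand should fire. Readings are normalized 0..1.

FSR_THRESHOLD = 0.3  # illustrative value, not a measured calibration

def map_taps_to_buzzes(fsr_readings):
    """Return, per finger, whether the corresponding LRA should fire."""
    return [reading > FSR_THRESHOLD for reading in fsr_readings]

# e.g., index and ring fingers tapped on the unaffected hand
active = map_taps_to_buzzes([0.05, 0.8, 0.1, 0.6, 0.0])
```

In a real device this decision would run on a microcontroller, with debouncing and per-finger calibration around the threshold test.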
An augmented reality mobile application that brings the wizarding world of Harry Potter to the real world for the purpose of answering the following research question: How do the following factors - timers, audio, interacting with virtual objects in the real world, interacting with real objects in the virtual world - increase or decrease a user's motivation to follow an interactive location-based narrative? This project will inform a set of design guidelines for motivating users to follow an interactive location-based narrative.
It is true that social networking has been a powerful force for good; however, these sites have also enabled sharing and connectivity for more nefarious purposes. Specifically, the Internet connects people in ways that can enable and amplify the destructive power of eating disorders (EDs). Pro-ED communities have emerged that support users' choices of self-harm as a reasonable lifestyle alternative. These communities are dangerous not only for those with EDs but also because of their potential contagious effects on those who do not already have these behaviors. Instagram, the photo and video sharing site, has taken proactive steps to block hashtags associated with eating disorders, yet the pro-ED community works around these bans by creating new hashtags with lexical permutations in order to congregate.
My research examines the formation of the pro-ED community on Instagram around these hashtags and the life cycles of these hashtags. I hope to examine questions such as: What categories of lexical permutations are created for banned hashtags? What characteristics make a hashtag “stick” and spread around the network? Can we predict which lexical characteristics make a hashtag better or worse at avoiding detection and connecting the pro-ED community?
Heads-Up is a Google Glass prototype that functions as a translucent second screen over the television while re-watching a favorite show. Our demo will focus on HBO’s highly acclaimed series, Game of Thrones. The intention of the design is to allow the user to continue viewing the TV screen while receiving synchronized commentary through Google Glass rather than be distracted away from the screen by a computer, phone, or tablet.
This project visualizes health data within the Atlanta metro region. Although some research on health inequities in this region has been conducted, it is typically based on county-level data. To better understand health inequities and disparities in our home area, a city profile for Atlanta should be established. This project has created an interactive visualization of data such as rates of teen pregnancies, low-birthweight babies, etc. The system allows the viewer to explore correlations among the different variables.
Despite repeated efforts by governments, historically, marginalized communities around the world have had limited access to quality healthcare due to the interplay of complex socioeconomic, political, and cultural factors. Our group studies the nature and extent of this ‘limited access’ to healthcare, to construct a nuanced understanding of this phenomenon. Our goal is to extend lessons from our research work to inform the design of not just healthcare interventions, but interventions in the larger field of information and communication technologies for development (ICTD).
Poor quality of medical care is a major contributor to excess medical morbidity and premature mortality in persons with serious mental illnesses (SMI). To address this problem, community mental health providers are increasingly partnering with safety net medical providers to develop behavioral health homes, integrated clinics in which persons with SMI receive coordinated medical and mental health care. However, behavioral health homes have faced logistical and privacy challenges in integrating electronic medical records across organizations.
This application proposes to develop and test a mobile Personal Health Record (mPHR) to overcome this problem while more fully engaging patients in their health care. The mPHR will have the capability to access medical and mental health medication and lab data in real time; to help clients set and maintain health and lifestyle goals; to provide medication and appointment prompts and reminders; and to facilitate communication with providers via asynchronous communication with the EHRs.
This project is a collaboration with Emory University's Center for Behavioral Health Policy Studies.
The purpose of this study is to investigate how cultural background influences Western and Eastern MMOG players, in the case of World of Warcraft (WoW).
This thesis explores the influence of culture on MMOG players in three different cultural contexts: United States (US) servers, Chinese (CN) servers, and Taiwanese (TW) servers. This design allows a comparison of Western vs. Eastern players, as well as of two similar Eastern cultures that play slightly different versions of the game, and it helps distinguish culturally influenced behaviors from behaviors that arise out of specific game features. The thesis focuses on identifying the differences among these three cultures and the distinctive ways in which players of different cultural backgrounds alter the atmosphere of the game. In addition, the study examines how Chinese WoW players who have virtually “immigrated” from Chinese servers to Taiwanese servers have influenced the local game culture on those servers. The study covers the following research questions:
Does real world culture influence the form of the virtual world culture?
Sub-questions of this main question are as follows:
What aspects of their cultures do players bring from their own lives and how do they incorporate them into their game behavior?
Does the behavior of players in different cultures reveal different values and attitudes in the game?
When players immigrate to other servers, what habits and behaviors from their original servers do they bring to the new servers?
More and more people today are using activity trackers like Fitbit and connecting them to social media platforms like Twitter. What is more, some people make their daily quantified activity time series public. On one hand, we have their entire Twitter presence in the form of timelines, friends, and followers; on the other, we have their complete workout data. We are trying to answer some interesting questions from these two sets of data. What is the effect of a social network on one's health regimen? Is there any correlation between how often a person posts about health and how much that person works out? Previous research has shown that having friends who are also health conscious increases one's tendency to adhere to health regimens. We are examining how weak and strong ties to health-conscious and non-health-conscious friends and followers affect a person's adherence to health regimens. Previous research also shows that support in the form of retweets, comments, and likes on a user's health tweets motivates them to continue using quantified health devices; now, with a person's exact workout data, we are trying to quantify this motivation.
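The correlation question above comes down to a plain Pearson estimate between two per-user time series. The weekly numbers in this sketch are invented for illustration, not study data:

```python
# Sketch: Pearson correlation between weekly health-related tweet counts
# and weekly workout minutes for one (hypothetical) user.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

health_tweets_per_week = [2, 5, 1, 7, 4, 6]        # made-up counts
workout_minutes_per_week = [60, 150, 45, 210, 120, 180]  # made-up minutes
r = pearson(health_tweets_per_week, workout_minutes_per_week)
```

A value of `r` near +1 would suggest that weeks with more health posting coincide with more exercise, though correlation alone cannot say which drives which.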
As robots become more commonplace, they will need to address a wide variety of problems. Since a robot cannot be programmed to complete every task, it is necessary for robots to learn new tasks by interacting with a human teacher. Current methods require that the robot receive many demonstrations of a task, or they are limited to completing tasks which are nearly identical to previous demonstrations. We are developing a cognitive system based on case-based analogical learning that may enable a robot to collaborate with a human teacher to transfer task knowledge to a range of target problems.
Motion gestures can be expressive, fast to access and perform, and facilitated by ubiquitous inertial sensors. However, implementing a gesture recognizer requires substantial programming and pattern recognition expertise. Although several graphical desktop-based tools lower the threshold of development, they do not support ad-hoc development in naturalistic settings. We present a mobile tool for in-context motion gesture design. Our tool allows interaction designers to create and test motion gestures using inertial sensors in commodity and custom devices. Therefore, our tool encourages development of gestures with common as well as atypical body parts. Moreover, the data collection, design, and evaluation of envisioned gestural interactions can now occur within the context of their use.
Many populations need assistive technologies while driving, such as the millions of Americans who suffer traumatic brain injuries (TBIs) each year, the majority of whom return to driving at some point following their recovery. However, the residual effects of TBIs can affect perception, cognition, emotion, and motor abilities. In collaboration with the Shepherd Center, we are developing software that can help improve the attention and abilities of drivers post-TBI. The system could help all kinds of drivers who have attention lapses, cognitive processing issues, or other issues that impact driving. Similar applications could be built for many other groups as well (e.g., novice drivers, aging adults, and stressed-out drivers).
Through findings from over 60 interviews and a national online survey of 978 parents from diverse groups, we explored parents’ ability to find learning opportunities for their children and identified differences in parents’ use of online social networks to find those opportunities across socioeconomic groups.
Visualization has an important role in science and technology. People rely on visualizations to better understand problems they have to solve. Information visualization has recently expanded its domain, from representations of business data to more political and social uses via groups like visualizing.org and infosthetics.com. In parallel with this growth we have seen the widespread adoption of mobile technology by the masses. Mobile phones today are being used for everything from email to ticketing and web browsing to watching videos. As society becomes more mobile, it is important to consider the application of information visualization on mobile and other touch-based devices. The aim of this project is to understand if and how traditional information visualization techniques like line charts, bar graphs, and treemaps can be useful in a mobile environment, and what the best style of interaction with those charts should be.
This ongoing study investigates the effect that proximate criminal activity has on emotional expression in social media. Proximity to crime, as well as the constant fear of crime, can have strong negative psychological effects on individuals. With social media currently one of the most popular means of publicly expressing personal opinions and emotions, we expect to find an effect of temporal and spatial proximity to crime on social media mood expression and other patterns of online communication. Moreover, we expect that the use of certain crime-related terms will carry different emotional connotations that correlate with the baseline level of criminal activity in the area.
The existence of pro-eating disorder (pro-ED) communities has challenged many social media platforms, such as Instagram. These communities promote the adoption and progression of eating disorders, which are known to have negative impacts on health. Instagram has reacted by banning searches on several pro-ED tags as well as issuing content advisories on others. In response, the pro-ED community has adopted non-standard lexical variations of these moderated tags to circumvent restrictions. This research investigates the impacts of Instagram banning tags on the community. Our work analyzes how the pro-ED community changes what tags it uses to avoid detection, what topics are discussed before and after banning, and what intervention and design strategies can be taken to assist these populations.
We are exploring experiences for a real-time companion-device application to enhance live sports-watching. The application facilitates the active social engagement typical of sports and helps viewers understand the dynamics of the match more easily.
Augmented reality is a technology that can revolutionize children's education and entertainment. In studies of adolescents and adults, it has been shown to have measurable benefits for advancing STEM education through situated 3D simulations, providing entertainment through whole-body interaction, and enhancing physical & cognitive rehabilitation through motivational engagement.
We are interested in bringing such experiences into the hands of elementary-school children. In this project we are studying young children's ability to effectively use various types of handheld-AR interfaces. Handheld-AR interfaces differ from more traditional interfaces in being small, portable windows into physical spaces augmented with digital content, and their use may require more complex motor and cognitive skills than traditional interfaces do. Due to the novelty of handheld-AR technology, there are no standard interaction techniques for handheld AR, and little is known about children's ability to use these interfaces.
Through this research we are generating guidelines for technology designers, answering questions such as: What kinds of handheld-AR interaction techniques are suitable for young children? To what degree does age influence children's ability to interact with handheld-AR interfaces? What are best practices for designing handheld-AR interfaces for children?
Scientists disagree about the effect of adding emotionally interesting details to learning materials. While some argue that interesting information enhances learning, others contend that interesting information is distracting. However, the issue might not be the interestingness of the information, but rather the relevance of the details to the main idea.
Understanding users becomes increasingly complicated when we grapple with various overlapping attributes of an individual’s identity. As a term, the user now represents an expanding, diverse set of people. In this work we introduce intersectionality as a framework for engaging with the complexity of users’—and authors’—identities, and situating these identities in relation to their contextual surroundings. We conducted a meta-review of identity representation in the CHI proceedings, collecting a corpus of 140 manuscripts on gender, ethnicity, race, class, and sexuality published between 1982 and 2016. Drawing on this corpus, we analyze how identity is constructed and represented in CHI research to examine intersectionality in a human-computer interaction (HCI) context.
We chose intersectionality, a framework that focuses on how various dimensions of identity (e.g., gender, race, and class) coalesce inseparably and relate to the conditions of one’s surroundings, because it supports efforts to situate the relationship between technology and social systems. In situating these relationships, we believe this work can help HCI’s broader agenda to do the right thing, within and outside third wave research. Our goal is to provide HCI researchers with empirical insight into current identity representation practices in CHI as well as to develop principled insights and recommendations for advancing the representation of identity in HCI.
We find that previous identity-focused research tends to analyze one facet of identity at a time. Further, research on ethnicity and race lags behind research on gender and socio-economic class. From these findings, we developed recommendations for incorporating intersectionality in HCI research broadly, encouraging clear reporting of context and demographic information, inclusion of author disclosures, and deeper engagement with identity complexities.
Our research examines the role that low-cost virtual reality technology could play in supporting learning in low-resource contexts. Specifically, we propose to study the potential of creating affordable virtual reality-based learning experiences for children in these contexts. There has been a rising penetration of low-cost mobile technologies and internet connectivity in under-resourced communities, and this motivates us to explore the feasibility of virtual reality as a medium to enhance learning experiences for low-resource contexts. Keeping this in mind, we introduce inspirit - a free mobile platform for hosting VR-based learning content for the classroom. Please visit us at www.inspiritvr.org and download our mobile application from the Google Play Store.
This research explored the effect of intuitive versus rational thinking on creativity. Our objective was to investigate this relationship through design tasks with undergraduate industrial design students. Students performed nine separate design tasks across three conditions. Their work was scored for novelty and feasibility, and we analyzed this performance data in conjunction with self-reported mood and information processing assessments. Our results show numerous statistically significant differences. Based on our analysis, we identified a variety of simple, actionable suggestions for design educators to integrate into their teaching, as well as additional thought-provoking considerations.
Personal health-tracking technologies have become a part of mainstream culture. Their growing popularity and widespread adoption present an opportunity for the design of new interventions to improve wellness and health. However, there is an increasing concern that these technologies are failing to inspire long-term adoption. In order to understand why users abandon personal health-tracking technologies, we analyzed advertisements of secondary sales of such technologies on Craigslist. We conducted iterative inductive and deductive analyses of approximately 1600 advertisements of personal health-tracking technologies posted over the course of one month across the US. We identify health motivations and rationales for abandonment and present a set of design implications. We call for improved theories that help translate between existing theories designed to explain psychological effects of health behavior change and the technologies that help people make those changes.
The Internet of Things (IoT) will soon touch nearly all of the interactions we have with our world and with the things around us, and the interaction of those things with each other. GT-RNOC is developing a number of IoT-related projects that help students demonstrate and better understand some of the complexity and range of applications that the IoT encompasses.
Isola is a VR experience that takes place in a fantasy world consisting of many floating islands. Special pieces representing forgotten dreams are scattered throughout the space. The player has to find their lost pieces to become complete again.
During the journey, a special vehicle is available for navigation and interaction. With the vehicle, the player can sail among the floating islands. They will overcome bad weather, throw a ring to attract a giant fish, and, at the end, combine two broken pieces collected along the way to form a full star.
The entire story is accompanied by a little bird which guides the player’s attention and cues for possible interactions.
This project explores the possibility of using a non-playable companion character to diegetically inform the player how to interact within a virtual space.
Many types of investigators routinely perform analysis that involves large collections of documents. The Jigsaw system helps investigative analysts with reasoning and sense-making in such scenarios. Jigsaw acts like a visual index onto a document collection. It first analyzes the documents, identifies entities, clusters related documents, analyzes sentiment, and summarizes each document. Next, it provides multiple visualizations of the documents, the entities within them, and the analysis results. We have used Jigsaw to explore a wide variety of domains and document collections including academic papers, grants, product reviews, business press releases, news articles, intelligence and police reports, statutes, and even books such as the Bible.
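The analysis stages named above (entity identification, sentiment, linking related documents) can be caricatured with a deliberately tiny sketch. The rules here are toy assumptions standing in for Jigsaw's real text-analysis components, which this abstract does not detail.

```python
# Toy sketch of a Jigsaw-style document pipeline: extract entities,
# score sentiment, and link documents that share entities. The entity
# rule and sentiment word lists are illustrative assumptions only.
import re
from itertools import combinations

POSITIVE = {"great", "success", "win"}
NEGATIVE = {"fraud", "loss", "failure"}

def entities(text):
    # Toy rule: capitalized tokens after the first word count as entities.
    tokens = re.findall(r"\b[A-Za-z]+\b", text)
    return {t for t in tokens[1:] if t[0].isupper()}

def sentiment(text):
    # Count positive minus negative keywords.
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

docs = {
    "d1": "The merger with Acme was a great success",
    "d2": "Auditors found fraud at Acme last year",
    "d3": "Rainfall totals rose in March",
}
index = {name: entities(text) for name, text in docs.items()}
# Link any pair of documents that mention a common entity.
links = [(a, b) for a, b in combinations(index, 2) if index[a] & index[b]]
print(links)                  # documents connected by shared entities
print(sentiment(docs["d2"]))  # negative score for the fraud report
```

In the real system, the resulting entity-document graph is what the multiple coordinated visualizations render for the analyst.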
Given the importance of developers for the success of mobile platforms, it is critical for vendors to understand how platform innovations impact developer interaction activity and what issues and topics are discussed. An understanding of these issues can help providers improve their release strategies, manage developer expectations, and avoid negative reputation effects. To facilitate this understanding, we are analyzing knowledge ecosystem reactions to change in mobile software development platforms. As part of this work, we have developed a method for gathering information about change events from two sources: endogenous information derived from traces of user interactions within knowledge ecosystems, and exogenous information harvested from official documentation, press releases, and news reports. The method is being applied to data describing interactions on Stack Overflow, the world’s most popular social information seeking community for developers. By demonstrating how such data can be processed to highlight periods of rapid change, and how this evidence can be combined with external indicators of change events, we are contributing a new technique to supplement approaches based on direct consultation of system participants.
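One way to "highlight periods of rapid change" from endogenous traces, sketched here under our own simplifying assumptions (the abstract does not specify the detection method), is to flag time periods whose activity volume jumps well above a trailing baseline:

```python
# Hedged sketch: flag weeks whose question volume exceeds a trailing
# mean plus k standard deviations. Window size, k, and the data are
# illustrative assumptions, not the project's actual parameters.
from statistics import mean, stdev

def rapid_change_weeks(counts, window=4, k=2.0):
    """Return indices whose value exceeds trailing mean + k * stdev."""
    flagged = []
    for i in range(window, len(counts)):
        base = counts[i - window:i]
        threshold = mean(base) + k * stdev(base)
        if counts[i] > threshold:
            flagged.append(i)
    return flagged

# Weekly counts of (say) tag-specific Stack Overflow questions; the spike
# at index 6 might line up with an exogenous event such as an SDK release.
weekly = [40, 42, 38, 41, 39, 43, 95, 90, 60, 44]
print(rapid_change_weeks(weekly))
```

The flagged periods would then be cross-referenced against the exogenous record (documentation changes, press releases, news reports) to attribute each spike to a platform change event.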
While there is a growing focus on leveraging technology use for learning gains across the world, this focus is yet to extend to infrastructurally limited environments in India, among other countries. We draw on qualitative research conducted in the Indian states of Tamil Nadu, Maharashtra, and West Bengal to highlight the challenges of designing educational technologies for "low-resource'' contexts, particularly when they are "low-resource'' along different dimensions. We also present findings from a survey of online educational technology providers in India to highlight the gaps that must be addressed before these can target socioeconomically disadvantaged populations. Taken together, our research provides a deeper understanding of the nuances that accompany "low-resource'' and how a careful assessment of these might inform appropriate design of educational technology interventions in the field of HCI for Development (HCI4D).
We present a general approach to simulate and control a human character riding a bicycle. The rider not only learns to steer and to balance in normal riding situations, but also learns to perform a wide variety of stunts, including wheelies, endos, bunny hops, front-wheel pivots, and back hops.
The Quixote system is an artificial intelligence technique for teaching robots and artificial virtual agents how to do things by telling them stories. Stories present a natural means of communicating complicated, tacit procedural knowledge. Quixote thus reads in natural language stories and learns to emulate the behaviors of the characters in the stories. The long term goal of the project is to make AI programming accessible to non-programmers and non-AI experts.
We have also shown that stories can be an effective means of demonstrating ethical behavior to robots and AIs.
This project discusses the findings from a 4-year study of 10-15 year old students from a large Metro-Atlanta school district. Over the course of the project, 164 students took surveys and participated in focus groups and interviews regarding the amount of connectivity they experience, where they go online, and what behavioral issues are pervasive for this demographic in their online peer interactions. This work demonstrates how social computing influences communication patterns within this population, as well as the everyday behavioral and emotional health and wellness of digitally connected teens.
Lonely Mountain is a virtual reality adaptation of the movie The Hobbit: The Battle of the Five Armies. In this VR experience, the Lonely Mountain has fallen into the claws of Smaug the Terrible. You take the role of the hobbit, Bilbo Baggins. Your mission is to find and recover the Arkenstone and unite the dwarf realms once more under the same banner to save the Lonely Mountain. In the scenario, Bilbo reaches the treasure room and picks up a tool to grab the Arkenstone from the claws of Smaug without waking him. This virtual reality experience asks how diegetic elements can be used to show the current state of an NPC.
LuminAI is an interactive art installation that explores the improvisation of proto-narrative movement between humans and virtual AI agents using full-body, expressive, movement-based interaction. Interactors can co-create movement with an autonomous virtual agent that learns movement, response, and improvisation directly from interacting with human teachers. The system analyzes their movement using Viewpoints movement theory.
MAGIC Summoning: Towards Automatic Suggesting and Testing of Gestures With Low Probability of False Positives During Use
Gestures for interfaces should be short, pleasing, intuitive, and easily recognized by a computer. However, it is a challenge for interface designers to create gestures easily distinguishable from users' normal movements. Our tool MAGIC Summoning addresses this problem. Given a specific platform and task, we gather a large database of unlabeled sensor data captured in the environments in which the system will be used (an "Everyday Gesture Library" or EGL). MAGIC can output synthetic examples of the gesture to train a chosen classifier.
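The core EGL check can be illustrated with a small sketch. This is our interpretation of the mechanics, not MAGIC's actual code: slide a candidate gesture template across the everyday-movement recording and count near-matches, since each near-match is a would-be false positive during normal use.

```python
# Illustrative sketch (assumed mechanics, not MAGIC's implementation):
# count how often a candidate gesture template nearly matches windows
# of the Everyday Gesture Library (EGL).

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def egl_false_positives(template, egl_stream, threshold):
    """Slide the template over the EGL; each window within `threshold`
    distance is a predicted false positive during everyday use."""
    n = len(template)
    hits = 0
    for start in range(len(egl_stream) - n + 1):
        window = egl_stream[start:start + n]
        if euclidean(template, window) < threshold:
            hits += 1
    return hits

# 1-D accelerometer magnitudes (toy data): a sharp double-bump gesture
# versus mostly flat everyday movement containing one bump-like moment.
gesture = [0.1, 0.9, 0.1, 0.9, 0.1]
everyday = [0.1] * 20 + [0.2, 0.8, 0.1, 0.9, 0.2] + [0.1] * 20
print(egl_false_positives(gesture, everyday, threshold=0.3))
```

A candidate gesture scoring zero or near-zero hits against a large EGL would be recommended to the designer; a high-scoring one would be rejected as too similar to everyday movement.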
Magic Window supports immersive augmented video experiences allowing viewers to change perspective, as if they are looking through a real window.
A rich set of collaborative interactions with live and pre-recorded media content as well as connected devices are possible through gesture-based controls.
Interviews with 61 makers, along with observations in several maker communities, provide empirical insight into the nuances between different types of communities and how these differences are influenced by the space and place of the makerspaces. Our exploration led to the identification of five prototypical maker communities: closed and regulated, open and messy, hybrid, online large-scale, and online small-scale.
Driving is the second highest expense for the average American household -- more than food or healthcare, and behind only housing. Yet most people do not understand the total cost of owning and operating their vehicles, and they cannot accurately estimate the cost of a common driving trip (such as a commute from home to work). That’s because the costs of owning and operating a vehicle are spread over many expenses incurred at different times. For example, you may fill up the gas tank once a week, make a monthly car payment, and pay insurance twice a year. Depreciation is a significant invisible expense of driving.
We have developed a trip cost meter that makes the total cost of each driving trip visible to the user. We are exploring how this tool can help people make better informed personal transportation decisions, including choice of vehicle and choice of alternate modes of transportation (e.g., Uber, transit, ridesharing, or walking/biking).
Mapping iThemba draws on ethnographic research that Professor Anne Pollock began in 2010 at iThemba Pharmaceuticals (pronounced ee-TEM-ba), a small start-up pharmaceutical company in the outskirts of Johannesburg that was founded in 2009 with the mission of drug discovery for TB, HIV, and malaria. The synthetic chemistry research that scientists do at iThemba is no different than what might be done in a well-equipped lab anywhere in the world. Yet, place matters. The interactive map is an opportunity to explore how.
Mapping iThemba has been made possible by a grant from the National Science Foundation program for Science, Technology, and Society (Award #1331049). Professor Anne Pollock did the research and wrote the text for this site, new media artist Katherine Behar conceived the interactive map, and Digital Media master's student Russell Huffman designed, illustrated, and programmed it.
This site provides only one small window into the project. More is available in an article that Anne Pollock published in Social Studies of Science: "Places of pharmaceutical knowledge-making: Global health, postcolonial science, and hope in South African drug discovery." Email firstname.lastname@example.org if you would like to request a copy. Currently, she is writing a book manuscript on the project with the provisional title Synthesizing Hope: Global Health, Postcolonial Science, and South African Drug Discovery. For updates on publications from the project, see her website at Georgia Tech.
As part of the exhibit, Mapping Place: Africa Beyond Paper, which contrasts western concepts of mapping (i.e. Cartesian plots of locations) with other traditional practices, Synlab students created an interactive tabletop installation that lets participants tell their own stories by creating a digital Lukasa, a mnemonic device used by the Luba people of central Africa to record genealogy and history. The exhibition was at the Robert C. Williams Paper Museum from February 27 to June 6, 2014.
An ever-increasing number of smart technologies are being developed all over the world to coach people toward healthier and more responsible behaviors, providing them with timely, ubiquitous, personalized information and support.
Marlin is one such wearable swim coach, specifically for distance swimmers, which constantly monitors a swimmer’s performance and provides the necessary real-time feedback through sonification while swimming. It is a tool for coaches to plan a detailed training program, set new targets, and sync them to the swimmers’ devices. It allows both coaches and swimmers to analyze performance by tracking progress and giving immediate guidance. In this project, we evaluate the usability of the interface for coaches and swimmers, and study whether they modify their behavior according to the feedback.
HCC professionals, psychologists, and many other researchers are interested in understanding how to better influence participant engagement in interventions that involve technology. This is especially true in instances when researchers are not able to provide monetary incentives over an extended period of time. Georgia Tech, along with a team at the University of Michigan (working under an NIH-funded MD2K project), is exploring how to keep individuals motivated in such interventions through mobile technology and gamification.
We offer a first attempt to measure the global digital native population with a model for calculating the number of digital natives in each country of the world. We have calculated the size of the digital native population by country, by region and by income level and have related the presence of digital natives to education and literacy levels, and ultimately to policy-making. According to the model, in 2012 there were around 363 million digital natives out of a world population of around 7 billion – or 5.2 per cent. Defining “youth” as young people aged 15 to 24, this means that 30 per cent of the world’s youth have been active online for at least five years. While it follows that fewer than a third of the world’s young people today are digital natives, this group nonetheless plays an important role: first, because where the online population is concerned, youth are clearly overrepresented, and second, because digital natives are key drivers when it comes to ICT uptake, innovation and impact.
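The quoted figures can be checked with a few lines of arithmetic (using the abstract's rounded values, so the results are approximate):

```python
# Quick arithmetic check of the figures quoted above. The inputs are the
# abstract's own rounded numbers, so the outputs are approximate.
digital_natives = 363e6
world_population = 7e9
share_of_world = digital_natives / world_population
print(round(share_of_world * 100, 1))   # about 5.2 per cent of the world

# If digital natives make up 30% of youth (ages 15-24), the implied
# global youth population is roughly 1.2 billion:
implied_youth = digital_natives / 0.30
print(round(implied_youth / 1e9, 1))
```

Both quoted percentages are mutually consistent under these inputs.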
Medium Probe: A Method for Seeding Dialogue to Explore the Suitable Medium of Communication in Design
A method for engaging community members in productive discussions about technology with technology designers. Medium probes offer designers and participants a chance for open-ended exploration of design issues related to participants’ practices with technology, cultural values, skills, and access level.
Mermaids is a massively multiplayer online game set in an underwater world in which players take the roles of hatchlings coming to life in the ruins of a long-extinct mermaid culture. The over-arching goal and storyline is to rebuild the lost mermaid culture and reclaim its various skills and cultural practices, while at the same time trying to avoid the mistakes that caused the extinction of their ancestors. Mermaids is designed as an experiment in emergent game play, with specific affordances designed to promote social emergence. This presentation will include a live demo of the game, plus a poster on the modular mermaid construction system currently being developed by an undergraduate student research team.
Midtown Buzz is an experiment in mobile innovation focused on engaging urban communities. It includes mobile platform and app development, open-source data curation, contextually aware environments, social navigation, developer workshops, hackathons, trials, needs assessments and the creation of a Live-Work-Play “Laboratory” for exploring the potential of media technologies in creating a climate for innovation.
Be sure to check out the innovative Buzz projects, such as Storyoke and Auggy! Please visit www.midtownbuzz.org for more information.
Sharing emotions is a way to connect people to one another, and this project uses a wearable system to recognize, share, and connect people via emotions. MoodChat is a wearable system that automatically recognizes human emotion and allows people to share their feelings and emotions through simple interactions. The system includes a wristband and a mobile app.
This research aims to explore the use of glanceable reminders with a motivational component to support medication adherence. The healthcare industry has begun to focus on mobile health (mHealth) to improve medication adherence through the use of medication reminders. To date, mHealth apps have provided reminders that are text-based and purely informational in nature. The goal of using motivational glanceable reminders is to provide reminders that appeal to the emotional side of a person's decision making process and can be interpreted at a glance without the need to read, or even be literate. The research focuses on the pediatric asthma population. This research uncovers insights that can inform the design of future medication reminder mHealth apps that seek to integrate motivational glanceable reminders.
A collection of projects that explore the convergence of entertainment formats and computation, with focus on HCI design and research methods.
Sanat Rath: Giggles, an application to help viewers relive moments from their favorite sitcoms.
Sruthi Padala: A second screen application for the popular TV show 'The Voice'.
Vipul Thakur: Talkista, an application that serves as your information resource, companion in conferences, meetups and classrooms.
Amrutha Krishnan: Newspad, design of a second screen application for news that enables viewers to understand the news better by providing them the required context as well as supplementary information.
This demo shows Dust and Magnet (DnM), a general-purpose data visualization system. DnM represents data items as iron dust; each attribute of the data then is a magnet. The system is implemented on a large multi-touch display where the analyst can deploy magnets and drag them around the view. Data points are then attracted more strongly or weakly depending on each data item's value for the attribute represented by each magnet. This system provides a very hands-on, visceral data exploration experience.
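The attraction rule described above can be sketched in a few lines. This is our interpretation of the metaphor, not the system's actual code: each step, every dust particle moves toward each magnet by an amount proportional to that particle's value for the magnet's attribute.

```python
# Minimal sketch of a Dust-and-Magnet-style update rule (an assumed
# interpretation of the metaphor, not DnM's implementation).

def step(dust, magnets, rate=0.1):
    """dust: {name: {'pos': [x, y], 'attrs': {attr: value in 0..1}}}
       magnets: {attr: [x, y]} -- each magnet pulls proportionally to
       the particle's value for that attribute."""
    for d in dust.values():
        for attr, mpos in magnets.items():
            pull = rate * d["attrs"].get(attr, 0.0)
            d["pos"][0] += pull * (mpos[0] - d["pos"][0])
            d["pos"][1] += pull * (mpos[1] - d["pos"][1])
    return dust

dust = {
    "carA": {"pos": [0.0, 0.0], "attrs": {"mpg": 1.0}},   # high mpg
    "carB": {"pos": [0.0, 0.0], "attrs": {"mpg": 0.1}},   # low mpg
}
magnets = {"mpg": [10.0, 0.0]}   # analyst drops an 'mpg' magnet at x=10
for _ in range(20):
    step(dust, magnets)
# carA, with the higher mpg value, ends up much closer to the magnet.
print(round(dust["carA"]["pos"][0], 1), round(dust["carB"]["pos"][0], 1))
```

With several magnets deployed, each particle settles where the competing pulls balance, which is what gives the display its hands-on, visceral feel.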
Biological systems in general are multifunctional and environmentally sustainable. Thus, biologically inspired design is posited as leading to multifunctional and environmentally sustainable designs. Design in general is characterized as a problem-driven process. However, biologically inspired design also entails the twin process of solution-based design. Previous work has postulated that the solution-based design process is prone to design fixation but leads to more multifunctional designs. Design Study Library (DSL) is a digital library of eighty-three cases of biologically inspired design. We present a preliminary analysis of the DSL case studies to examine two hypotheses. (1) The process of solution-based design results in more multifunctional designs than the problem-driven design process. (2) The process of solution-based design is more prone to fixation than the problem-driven design process. We find strong evidence in favor of the first hypothesis.
We present three prototypes designed for a hypothetical museum exhibit that elicit historical and experiential qualities of early 16th century prayer-nuts. As personal religious experiences included a “dependence of spirituality on material objects” during the 16th century, we believe that digitally-enhanced multisensory interactions can help situate the artifact in its historical context. The 3D printed interactive prayer nuts augmented with audio-visual effects support the visual voyage, experience of spirituality, and scents of power. The tactile, aural, visual, olfactory sensory interactions are mapped meaningfully to incorporate some of the original sensory aspects of the artifact and related practices. Our research provides insight on how multisensory interactions can provide museum visitors with the opportunity to experientially engage in content related to an artifact’s history and original use.
Mumerize is an educational music game that helps music learners memorize musical intervals and learn the structure of music. The game presents a platformer-style scene in which the platforms are created by dividing a melody into musical notes. The player determines the position of the next platform by listening and selecting an interval as the answer, and survives or falls after each jump according to whether the answer is right or wrong.
The Mwangaza Project is a collaboration among the Sonification Lab, inAble, and Kenyatta University to develop and deploy accessible STEM educational resources to schools for the blind throughout Kenya. Projects that we are working on include accessible weather and climate education, math software for accessing graphing and number lines, and renewable energy as a component of STEM education and support for educational technologies.
Health information management for cancer care is a challenging and personal process that changes over time based on one’s needs, goals, and health status. While technologies supporting health information management appear promising, we do not fully understand how health information tools fit into patients’ daily lives. To better understand the opportunities and usage barriers of these tools, we designed and deployed a mobile, tablet-based health management aid: My Journey Compass. We found that developing a tool that was customizable, mobile, and integrated into the patients’ healthcare system resulted in a set of surprising uses by breast cancer patients for a wide variety of tasks. Our study demonstrates the potential for health management tools to improve the cancer care experience and for HCI research to influence existing healthcare systems.
Navis is a college orientation game designed by graduating Digital Media Master's student Laura Schluckebier. Navis is a campus-wide scavenger hunt with team-building challenges. Upon arriving on campus for their orientation session, first-years work with their teammates to discover clues around campus and compete in team-building challenges. Completing these challenges earns them points.
Navis' features include 1) a variety of challenge types that teach students through game actions rather than content, 2) fluid play groups, and 3) an overarching structure that encourages individual initiative.
The game also features a framework that allows university orientation staff from a variety of college campuses to customize and deploy Navis during their own orientation sessions.
The Nigerian film industry – colloquially known as Nollywood – is enormous, innovative, and digital. We are working to extend its capacities with new media technologies such as games, mobile, and social media. In addition, we are developing social messaging campaigns within Nollywood films, particularly around health issues. Come see emerging technologies and film content, including the feature-length Nollywood film we produced and are premiering.
Respiratory syncytial virus (RSV) is a virus that causes respiratory tract infections, especially in young children. The infection increases airway resistance and makes it harder to breathe, because more pressure must be generated in the lungs; the respiratory muscles may become so fatigued that the patient stops breathing. In the U.S., nearly all children will have been infected with RSV at least once by 2-3 years of age. Among them, 2-3% will develop bronchiolitis and need to be hospitalized. As with most medical complications, the best strategy for this disease is prevention. Since RSV is a virus, a vaccine would be the best answer; unfortunately, no RSV vaccine currently exists. The last attempt, a trial in the 1960s, failed. On the other hand, some symptoms (e.g., temporary difficulty in breathing, especially in infants) can easily be mistaken for RSV infection, causing many unnecessary visits to hospitals and emergency rooms (a very high false-positive rate).
In this research, we aim to quantify airway resistance through a simple, non-invasive measurement of chest volume changes over time, which can serve as a surrogate measure of chest pressure and volumetric airflow. Our approach uses signal and image processing techniques to infer airway resistance from a commercially available infrared depth sensor, the Microsoft Kinect. If commercialized at a price comparable to baby monitors, this technology would have the potential to greatly improve the management of infant obstructive pulmonary diseases and reduce unnecessary hospital visits.
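The core measurement described above, tracking chest volume changes over time from depth frames, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names, the fixed chest region of interest, and the per-pixel area calibration (which in practice depends on sensor geometry and subject distance) are all illustrative assumptions.

```python
import numpy as np

def chest_volume_signal(depth_frames, roi, pixel_area_m2=2.5e-7):
    """Estimate relative chest volume over time from depth frames.

    depth_frames: iterable of 2D arrays of depth in meters (e.g. from a Kinect).
    roi: (row_slice, col_slice) covering the chest region.
    pixel_area_m2: approximate real-world area per pixel (illustrative value;
        real calibration depends on the sensor and subject distance).
    """
    volumes = []
    baseline = None
    for frame in depth_frames:
        chest = frame[roi]
        if baseline is None:
            baseline = chest.copy()  # first frame defines the reference surface
        # Displacement toward the camera (chest expansion) = baseline - current depth.
        displacement = baseline - chest
        # Integrate displacement over the ROI to get volume change vs. baseline.
        volumes.append(displacement.sum() * pixel_area_m2)
    return np.array(volumes)

def airflow(volumes, fps=30.0):
    """Volumetric airflow (m^3/s) as the time derivative of the volume signal."""
    return np.gradient(volumes) * fps
```

The airflow estimate, together with a surrogate pressure signal, is what would feed a downstream airway-resistance calculation; in practice the volume signal would also need temporal smoothing and motion-artifact rejection.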
Due to the lack of standardized notification systems in virtual reality (VR), an immersed user can face various problems: bumping into walls, tripping over pets, losing track of time, missing incoming calls, being late for scheduled appointments, and so on. In this paper we present a study of common interruptions in a VR context and explore methods of representing them in an abstract way within the VR world. We further present NotifiVR, a Unity-based notification framework that allows developers and designers to create, integrate, and customize auditory, visual, and haptic notifications in a VR scene.