Inventing the Future of Wearable Technology
Wearable technology isn't new if one considers prescription glasses or the mechanical wristwatch, and consumers for decades have accessorized with devices such as portable music players and pedometers. Within the last decade, advanced computing has made smartphones and other mobile devices indispensable for work and leisure.
Now, a new entrant in wearable innovation has arrived in the form of Google Glass, an eyeglasses-styled device that seeks to redefine the capabilities of this class of technology. The new wearable hardware is defined in part by the instantaneous, hands-free access it offers to a networked world.
The GVU Center at Georgia Tech has been at the forefront of research in wearable computing for more than a decade; in fact, innovations at GVU have led directly to the new forms of wearable computing exemplified by Google Glass. Thad Starner, a professor in Georgia Tech's School of Interactive Computing, is a technical lead on Google Glass.
Starner envisions a wearable world, having worn head-mounted computing prototypes since his pioneering research in the 1990s. He is now advancing the adoption of his lifelong professional work in wearable computing by enabling researchers nationwide to develop applications for Google Glass.
An application developed in Starner's own Contextual Computing Group at Georgia Tech has been customized to leverage the power of the futuristic headset. The group re-envisioned its SmartSign app, which helps parents of hearing-impaired children learn sign language; on Glass, sign language lessons appear directly in the user's field of vision whenever the virtual tutor is needed.
Today Georgia Tech remains at the forefront of wearable computing research. Glass devices are being used for a variety of innovative research projects across the GVU Center and Georgia Tech, in partnership with Google and other industry leaders. Projects spanning health and wellness, assistive technologies, transportation, and gaming are being led by some of the research community's most innovative thinkers to advance the vision of always-available computing promised by this wearable technology.
Glass Research Explorers @ GVU Center
Automated Dietary Assessment/Nutrition Monitoring
Gregory Abowd, Professor, School of Interactive Computing
Although there is widespread agreement in the medical community that more effective mechanisms for dietary assessment are needed to fight obesity and other nutrition-related diseases, it is presently not possible to automatically capture and objectively assess an individual's eating behavior. Currently used dietary assessment approaches such as questionnaires and food diaries have several limitations: they pose a significant burden on individuals and are often not detailed or accurate enough. We are developing an approach for automated dietary assessment that combines first-person point-of-view images taken with wearable cameras, human computation, and sensor streams. In our initial feasibility testing, this technique proved to be a promising method for learning about individuals' eating behaviors in real-world settings, recognizing eating activities in images with 89.68% accuracy. This study used improvised wearable computing functionality built with smartphones and custom hardware, including Raspberry Pis, Arduinos, web cameras, and a variety of mounting devices.
We propose to develop and test this technique on the Google Glass device itself. In addition to the challenge of identifying eating behaviors and extracting nutritional information from foods captured in images, the continuous capture of everyday activities raises a range of issues (e.g., privacy concerns and data management) that would be more effectively studied with hardware such as Glass.
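As a rough illustration of the image-analysis step, the sketch below classifies first-person frames as eating or not eating using simple color-histogram features and an off-the-shelf classifier. The features, model, and labels are assumptions for illustration only, not the pipeline used in the study.

```python
# Minimal sketch of eating-activity recognition from first-person images.
# Illustrative only: the study's actual features, model, and data are not
# described in this article, and all names below are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def color_histogram(image, bins=8):
    """Flattened per-channel color histogram as a simple frame descriptor."""
    hist = [np.histogram(image[..., c], bins=bins, range=(0, 255))[0]
            for c in range(3)]
    return np.concatenate(hist).astype(float)

def evaluate(frames, labels):
    """frames: (N, H, W, 3) uint8 images; labels: 1 = eating, 0 = not eating."""
    X = np.stack([color_histogram(f) for f in frames])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, X, labels, cv=5).mean()
```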
Augmenting the Input Modalities for Google Glass via Electromyography and Bio-acoustic Sensing
Keith Edwards, Professor, School of Interactive Computing
New input sensing and gesture recognition may expand and enrich the ways people interact with Google Glass. Mobile computing is intended to support everyday activities, including communication, navigation, and entertainment. The development of novel input techniques for Google Glass may extend these applications and advance the vision of always-available computing. Speech interfaces are among the most common eyes-free input modalities that require no movement of the body besides the mouth, but they are of limited use when privacy is desired in a public environment. We propose instrumenting the forearm with a small sensing device that provides both discrete and continuous input controls using a combination of electromyography and bio-acoustic sensing. By exploring the unutilized bandwidth and “canvas” of the human body with novel sensing, we offer a new way of providing input and clutching for Google Glass that may overcome some of the limitations of current interactions while respecting social appropriateness and privacy.
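The sketch below shows one plausible way such signals might be fused: root-mean-square amplitude features from the EMG channels concatenated with coarse spectral features from the bio-acoustic signal, fed to a standard classifier. The feature choices and windowing are assumptions, not the project's actual design.

```python
# Illustrative fusion of forearm EMG and bio-acoustic windows for gesture
# recognition; hardware, channel counts, and classifier are hypothetical.
import numpy as np
from sklearn.svm import SVC

def emg_features(window):
    """Root-mean-square amplitude per EMG channel (muscle activation level).
    window: (samples, channels) float array."""
    return np.sqrt(np.mean(window ** 2, axis=0))

def acoustic_features(window, n_bands=16):
    """Coarse magnitude spectrum of a 1-D bio-acoustic window."""
    spectrum = np.abs(np.fft.rfft(window))
    bands = np.array_split(spectrum, n_bands)
    return np.array([b.mean() for b in bands])

def fused_features(emg_window, acoustic_window):
    return np.concatenate([emg_features(emg_window),
                           acoustic_features(acoustic_window)])

# Training then reduces to a standard pattern:
# X = np.stack([fused_features(e, a) for e, a in windows]); SVC().fit(X, y)
```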
Tongue-Based Input Techniques for Google Glass
Maysam Ghovanloo, Director, GT-Bionics Lab
At the GT-Bionics Lab, Ghovanloo and his team are exploring a new mode of hands-free access to Google Glass via tongue motion. They have previously developed a wireless, wearable computer input device called the Tongue Drive System (TDS), which translates volitional tongue gestures into user-defined commands without the tongue touching or pressing against anything. Combining Glass with TDS will allow users to access Glass with far more privacy than voice input offers, and the resulting system is also expected to be more robust against noise and interference. The team intends to initially establish an interface between Glass and TDS via an Android phone, and later to develop a direct interface that creates a unified system.
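As a rough sketch of the phone-mediated interface, the snippet below relays recognized tongue gestures to Glass navigation actions. The gesture names, the action set, and the transport are all hypothetical; the actual TDS and Glass protocols are not described here.

```python
# Hypothetical relay running on the intermediary Android phone, mapping
# Tongue Drive System gestures to Glass actions. All names are illustrative.
GESTURE_TO_ACTION = {
    "tongue_left":  "PREVIOUS_CARD",
    "tongue_right": "NEXT_CARD",
    "tongue_up":    "SELECT",
    "tongue_down":  "BACK",
}

def relay(tds_stream, glass_send):
    """Forward each recognized tongue gesture as a Glass navigation command."""
    for gesture in tds_stream:            # gestures decoded by the TDS
        action = GESTURE_TO_ACTION.get(gesture)
        if action is not None:
            glass_send(action)            # e.g., sent over Bluetooth to Glass
```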
Google Glass for Brain-Computer Interfaces
Melody Moore Jackson, Associate Professor, School of Interactive Computing
A Brain-Computer Interface (BCI) is a system that measures minute changes in brain signals to provide a channel of communication and control that does not depend on muscle movement. Recent advances in BCIs have made it possible to restore communication and environmental control to people whose severe motor disabilities prevent them from using traditional input technology such as switches or mice. The best performance reported to date for a BCI is 68 bits (more than eight characters) per minute. This BCI approach, based on an evoked response called Steady-State Visual Evoked Potentials (SSVEP), employs electroencephalogram (EEG) electrodes on the scalp over the visual cortex and requires little or no training. The accuracy of SSVEP-based BCIs is reported at over 95%. This learnability coupled with this accuracy makes SSVEP-based systems the most promising BCIs currently available.
In an SSVEP-based BCI, the user focuses visually on one of several images flashing at different frequencies on a computer screen. Through real-time processing of the EEG, we can ascertain the dominant frequency in the visual cortex, and therefore which flashing image the user has selected. The images could be mapped to anything: “yes”, “no”, phrases, or even commands. We implemented an SSVEP-based wheelchair navigation system using a wheelchair-mounted laptop as the display (featured on Discovery Channel, the Kamen Code series, “The Brain”). Google Glass provides a unique opportunity to create a head-mounted, mobile BCI system, using the Glass display to present SSVEP stimuli. This project would create a prototype wearable BCI system using Glass.
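The core signal-processing step can be sketched compactly: estimate the EEG power spectrum over a short window and pick the stimulus frequency carrying the most power. The sampling rate, window handling, and stimulus frequencies below are illustrative assumptions, not the parameters of the group's system.

```python
# Sketch of SSVEP target detection: find which stimulus frequency dominates
# the EEG power spectrum recorded over the visual cortex.
import numpy as np
from scipy.signal import welch

STIMULUS_HZ = [8.0, 10.0, 12.0, 15.0]   # one flashing image per frequency

def detect_target(eeg_window, fs=250.0):
    """Return the index of the flashing image the user is attending to.
    eeg_window: 1-D array of EEG samples from an occipital electrode."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=len(eeg_window))
    powers = [psd[np.argmin(np.abs(freqs - f))] for f in STIMULUS_HZ]
    return int(np.argmax(powers))
```

With a one- to two-second window at this sampling rate, the frequency resolution is well below the 2 Hz spacing between stimuli, which is what makes this simple dominant-frequency test workable.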
Wearable Cycling Information System
Christopher Le Dantec, Assistant Professor, School of Literature, Media, and Communication
Cycle Atlanta is a joint project between Georgia Tech and the City of Atlanta in which urban cyclists are enlisted to create new forms of cycling data that will better inform the city's cycling infrastructure investments. In late 2012 we released apps for iOS and Android that enable cyclists to record their rides and upload that data to researchers at Georgia Tech and to Atlanta officials. Among other features, the app lets us collect what we call “noted locations”: specific points of interest or nuisance that riders flag for the city to respond to. These noted locations include issues like potholes, unresponsive traffic signals, missing or inadequate bike lanes, and places where additional enforcement would help (such as when cars regularly block bicycle lanes).
A key challenge is that entering noted-location data while riding is difficult, if not impossible. Since the data is entered via the phone, the rider needs to stop, note the location, and add supporting detail (including photos) before they can continue safely on their way. Moving much of this interaction onto Glass would enable a more hands-free interaction, where a simple gesture could trigger the noted location and voice interaction would then complete data entry and submission (see the sketch below). This would greatly increase the likelihood that issues encountered en route would be noted and available to city planners. Additionally, we plan to begin exploring Glass as an alternate display device for cyclists. We intend to build turn-by-turn navigation based on routes that match rider ability profiles, but such navigation often requires the rider to direct attention to their device and away from traffic. Using Glass, we could deliver directions unobtrusively, in a way that keeps the rider's gaze up where traffic and potential safety hazards exist.
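The sketch below shows what a hands-free noted-location capture might produce: a gesture tags the GPS point, and voice input fills in the details. The field names and category list are assumptions, not the Cycle Atlanta schema.

```python
# Illustrative data record for a hands-free "noted location" on Glass.
from dataclasses import dataclass
import time

CATEGORIES = {"pothole", "traffic_signal", "bike_lane", "enforcement"}

@dataclass
class NotedLocation:
    lat: float
    lon: float
    timestamp: float
    category: str          # spoken by the rider, e.g. "pothole"
    note: str = ""         # optional voice-transcribed detail

def capture(lat, lon, spoken_category, spoken_note=""):
    """Build a record from the tagged GPS fix and the rider's voice input."""
    category = spoken_category if spoken_category in CATEGORIES else "other"
    return NotedLocation(lat, lon, time.time(), category, spoken_note)
```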
Objective Assessment of Behavioral Interventions
James M. Rehg, Director, Center for Behavior Imaging and Professor, School of Interactive Computing
Research over the past 10 years has demonstrated conclusively the beneficial impact of early behavioral intervention in improving the outcomes for children with developmental disorders such as autism. Although interventions have many facets, the intensity and frequency with which therapy is delivered are the most critical factors in its success. Unfortunately, the shortage of trained behavioral therapists relative to the number of affected individuals means that few children receive as much therapy as they could benefit from. Our long-term goal is to use Glass as a platform to develop a wearable decision support system, capable of scaffolding both novice therapists and interested caregivers, and allowing them to participate in extending and enriching a child's therapy routine.
As a step towards this goal, we propose to develop a Glass-based system which can be worn by a therapist to provide an automatic assessment of the quality of an intervention. In previous work, we have demonstrated the ability to automatically infer daily activities through the analysis of video captured from a head-mounted camera system. Our recent findings include the ability to detect and characterize social interactions throughout the day, and detect moments of eye contact between a clinician and a child during a behavioral evaluation. Building on these successes, we propose to develop methods for characterizing a therapy session in terms of the intensity and quality of the interaction, and the level of responsiveness and engagement of the child. These measures will provide an objective, quantitative portrait of the effectiveness of a therapy session, and they can be gathered without any additional effort on the part of the therapist. We will leverage an existing large-scale, funded research program in computational methods for measuring and modeling children's behavior, including access to expert autism researchers and clinicians, and populations of affected individuals (See www.cbs.gatech.edu).
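As a simple illustration of how per-frame detections could become session-level measures, the sketch below aggregates hypothetical eye-contact and interaction detections into summary rates. The detection inputs and metrics are stand-ins, not the group's published methods.

```python
# Sketch of turning per-frame detections from a head-mounted camera into
# session-level measures of interaction intensity and child engagement.
import numpy as np

def session_summary(eye_contact, interacting, fps=30.0):
    """eye_contact, interacting: boolean sequences, one entry per video frame."""
    eye_contact = np.asarray(eye_contact, dtype=bool)
    interacting = np.asarray(interacting, dtype=bool)
    return {
        "minutes": len(eye_contact) / fps / 60.0,
        "eye_contact_rate": eye_contact.mean(),      # fraction of frames
        "interaction_rate": interacting.mean(),
        # rising edges = number of distinct eye-contact episodes
        "eye_contact_events": int(np.sum(np.diff(eye_contact.astype(int)) == 1)),
    }
```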
Alternate Reality Games for Glass
Mark Riedl, Associate Professor, School of Interactive Computing
Mobile gaming is a growing segment of the computer game market. Yet most mobile games do not consider the physical context of the player; they run on devices that sit in pockets, brought out only when one is not otherwise engaged. Google Glass has the potential to fundamentally alter the mobile game landscape by enabling technologies that more closely tie gameplay to physical context. Specifically, Glass can act as a novel platform for a genre of game called Alternate Reality Games (ARGs). ARGs are interactive narratives that use the real world as a platform for delivering a story that may be altered by participants' actions in the real world. ARGs are a natural fit for Glass because the device makes the game ever-present: with game content instantaneously available through the heads-up display, a Glass user can seamlessly shift between fictional and real-world contexts, resulting in anytime, anywhere narrative gameplay experiences.
To support the anytime, anywhere nature of ARGs on Glass, we propose two lines of investigation. First, we will design and develop end-user authoring tools that let Glass users create and share their own ARG narratives. End-user authoring, in conjunction with professional and commercial game development, will result in a large library of games available for play. Second, because games may be geo-specific, making explicit references to landmarks and locations in the physical world that players must travel to, we will develop data-driven algorithms for adapting games to be playable in a user's locale. Leveraging Google map data and text mined from the Web, geo-location adaptation searches for a mapping from a game's original locations to nearby locations that support immediate gameplay (sketched below). Our preliminary work shows the feasibility of this approach on a limited scale. Glass, combined with end-user authoring and geo-location adaptation, can transform the real world into a large-scale, persistent, massively multiplayer game world in which players transition seamlessly between real-world and fictional contexts.
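A toy version of geo-location adaptation might greedily assign each scripted location to the nearest unused nearby place of the same category. The place representation and matching criterion below are assumptions for illustration; a real system would draw candidate places from map data and match on richer features than category alone.

```python
# Sketch of geo-location adaptation: map each location in a game script to
# a nearby real-world place of the same category.
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2 +
         math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def adapt(game_locations, nearby_places, player_pos, max_km=2.0):
    """game_locations: [(name, category)];
    nearby_places: [(name, category, lat, lon)], e.g. from a map service."""
    mapping, used = {}, set()
    for name, category in game_locations:
        candidates = [(haversine_km(player_pos, (lat, lon)), pname)
                      for pname, pcat, lat, lon in nearby_places
                      if pcat == category and pname not in used]
        candidates = [c for c in candidates if c[0] <= max_km]
        if candidates:
            _, pname = min(candidates)   # nearest matching place wins
            mapping[name] = pname
            used.add(pname)
    return mapping
```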