Conveyance of Affect and Emphasis via Electronic Media

Faculty: 
Bruce Walker
Students: 
Stanley J. Cantrell, Mike Winters

Communication is essential to the human condition. In a society where social media, e-mail, and text messaging have become the dominant modes of communication, people can now conduct business, collaborate creatively, and simply chat with others across the globe in a matter of seconds. Commercial video-chat clients, such as Hangouts and FaceTime, have contributed to this shift while preserving the 'face-to-face' aspect of communication. The emergence of commercial Augmentative and Alternative Communication (AAC) systems and Speech-Generating Devices (SGDs) has afforded individuals who are unable to produce speech or text the ability to communicate, further contributing to this shift. Each of these technologies has redefined what we traditionally consider 'communication'.

Currently, the ability to express the range and variety of human affect and emotion through these media is severely limited. Capital letters, symbols, emojis, and emoticons have been the only means of conveying mood, humor, sarcasm, and similar cues in a text-only medium. Similarly, in text-to-speech (TTS) systems, which typically offer only a single (often low-quality) voice at a time, there is currently no easy way to express emotion, sarcasm, anger, or humor. While these technologies have enabled new communication channels, issues of preference, quality, and accuracy of communication present many interesting research opportunities.
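To make this limitation concrete, the short Python sketch below drives an off-the-shelf TTS engine through the open-source pyttsx3 library. Its documented properties cover only speaking rate, volume, and voice selection, so 'excited' or 'subdued' delivery can only be approximated by speaking faster and louder, or slower and quieter; there is no parameter for emotion, sarcasm, or humor. (This is an illustrative sketch, not software from the project.)

    # Approximating affect with a generic TTS engine (pyttsx3).
    # Only rate, volume, and voice are adjustable; emotional tone is not.
    import pyttsx3

    engine = pyttsx3.init()

    # 'Excited' delivery: faster and louder than the engine defaults.
    engine.setProperty('rate', 220)    # words per minute (default is driver-dependent)
    engine.setProperty('volume', 1.0)  # scale of 0.0 to 1.0
    engine.say("That is fantastic news!")

    # 'Subdued' delivery: slower and quieter.
    engine.setProperty('rate', 130)
    engine.setProperty('volume', 0.6)
    engine.say("I suppose that could work.")

    engine.runAndWait()

Even this crude rate-and-volume manipulation must be scripted by the sender in advance; nothing in the pipeline lets a listener distinguish, say, sincerity from sarcasm, which is the kind of gap motivating this project.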

Lab: 
Georgia Tech Sonification Lab
Faculty: 
Bruce N. Walker

The Georgia Tech Sonification Lab is an interdisciplinary research group based in the School of Psychology and the School of Interactive Computing at Georgia Tech. Under the direction of Prof. Bruce Walker, the Sonification Lab focuses on the development and evaluation of auditory and multimodal interfaces, and on the cognitive, psychophysical, and practical aspects of auditory displays, paying particular attention to sonification. Special consideration is paid to Human Factors in the display of information in "complex task environments," such as the human-computer interfaces in cockpits, nuclear power plants, in-vehicle infotainment displays, and the space program.

Since we specialize in multimodal and auditory interfaces, we often work with people who cannot look at, or cannot see, traditional visual displays. This means we work on a lot of assistive technologies, especially for people with vision impairments. We study ways to enhance wayfinding and mobility, math and science education, entertainment, art, music, and participation in informal learning environments like zoos and aquariums.

The Lab includes students and researchers from many backgrounds, including psychology, computing, HCI, music, engineering, and architecture. Our research projects are collaborative efforts, often including empirical (lab) studies, software and hardware development, field studies, usability investigations, and focus group studies.