“Dressing is one of the activities of daily living that the healthcare community has identified as being important for independent living. The challenge of learning to dress at a young age, or for some older adults and those with disabilities, is mainly due to the combined difficulty of coordinating different body parts and manipulating soft, deformable objects. Our next focus is to develop assistive robotic technologies for the task of dressing humans.”

- Karen Liu, Associate Professor of Interactive Computing



SIGGRAPH is the world's largest conference on computer graphics, attended annually by tens of thousands of professionals who work in a wide variety of fields, including computer graphics research, software development, digital art, scientific visualization, interactive technology, game design, visual effects, computer science, education, engineering, graphic design, film and television production, scientific research and more. SIGGRAPH offers a five-day interdisciplinary educational experience in the latest computer graphics and interactive techniques, including a three-day commercial exhibition that attracts hundreds of exhibitors from around the world.




Moving Animation Forward, One Wardrobe at a Time

Georgia Tech @ SIGGRAPH 2015

Animated characters can mimic human behavior to such a lifelike degree that the real world and the computer-generated world are sometimes hard to tell apart. But there's one trick that digital denizens haven't quite mastered: getting dressed.

They can perform jaw-dropping feats, yes, but putting pants on one leg at a time is another matter. 

Research from the Georgia Institute of Technology has produced a systematic tool that allows animators to create realistic motion for virtual humans who are getting dressed. The new algorithm enables virtual characters to intelligently manipulate simulated cloth to achieve the task of dressing. 

The animation tool can create different dressing styles for different types of garments and fabrics. The research team, which includes Alexander Clegg, Jie Tan, Greg Turk and Karen Liu, shows that the tool can be extended to assistive dressing. It could one day allow robots to efficiently and safely dress those who need assistance.

The paper, “Animating Human Dressing,” was accepted into the Association for Computing Machinery’s Conference and Exhibition on Computer Graphics and Interactive Techniques (SIGGRAPH 2015), Aug. 9-13 in Los Angeles. 

A Georgia Tech seed grant program sponsored by the GVU Center, the Institute for People and Technology and the Institute for Robotics and Intelligent Machines funded the pilot study published at SIGGRAPH this year.

Georgia Tech’s research presented at SIGGRAPH includes two other papers, also co-authored by Karen Liu, associate professor in the School of Interactive Computing:

Online Control of Simulated Humanoids Using Particle Belief Propagation 

Iterative Training of Dynamic Skills Inspired by Human Coaching Techniques

The GVU Center at Georgia Tech will host a reception at SIGGRAPH 2015 for alumni and friends Aug. 12, 6-9 p.m. For more information, contact gvu@cc.gatech.edu.


Writer: Joshua Preston