Prototypes

Fast Avatar Capture

2014 - Present
Project Leaders: Ari Shapiro and Evan Suma

ICT researchers have developed the capability to create a digital model of a person using commodity hardware (a Microsoft Kinect camera), place this 3D avatar into a video game or simulation, and infuse it with numerous behaviors and capabilities, all in just a few minutes. With over 20 million Microsoft Kinect cameras currently deployed around the world, this technology enables millions of people to create a personalized avatar at near-zero cost in a short amount of time. By changing the economics of 3D avatar creation and simulation, new socially based virtual interactions can be developed that rely on recognizable representations of individual people. Avatars can be captured and recaptured on a daily basis, reflecting what their owners wear, their hairstyle, and any other changes in their appearance.

The system uses capture technology developed at USC: the subject holds still for four depth scans in front of a Kinect camera, rotating 90 degrees between scans so that the front, back, and both sides are covered (the alignment geometry is sketched below).
The merged model is then imported into SmartBody, which automatically adds a skeleton, computes skinning weights to create a deformable character, and infuses the character with capabilities such as walking, jumping, and gazing.
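
The geometric core of the four-scan capture is straightforward to illustrate. The sketch below is a minimal Python example, not the project's actual code: the intrinsics are rough Kinect v1 values, the helper names are invented for illustration, and the rotation-only alignment is a coarse approximation. It back-projects each depth image into a point cloud and rotates the clouds into a common frame using the known 90-degree turns; a real pipeline would refine the coarse alignment (e.g., with ICP) and then fuse the merged cloud into a watertight mesh.

```python
# Minimal sketch of the four-pose scan geometry, assuming rough Kinect v1
# intrinsics. Rotation-only alignment is a coarse approximation; a real
# pipeline translates points to the subject's rotation axis and refines
# the registration with ICP before meshing.
import numpy as np

FX = FY = 575.8          # approximate Kinect v1 depth focal length (px)
CX, CY = 320.0, 240.0    # principal point for a 640x480 depth image

def depth_to_points(depth_mm):
    """Back-project a 640x480 depth image (mm) into camera-space points (m)."""
    v, u = np.nonzero(depth_mm)           # pixels with valid depth
    z = depth_mm[v, u] / 1000.0
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.column_stack([x, y, z])     # N x 3 point cloud

def rot_y(degrees):
    """Rotation about the vertical axis, for a subject turning in place."""
    t = np.radians(degrees)
    return np.array([[ np.cos(t), 0.0, np.sin(t)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(t), 0.0, np.cos(t)]])

def merge_scans(scans):
    """Merge (angle_degrees, depth_image) pairs into one point cloud.

    Each cloud is rotated by the negative of the subject's turn angle,
    undoing the rotation so all four scans land in the first scan's frame.
    """
    clouds = [depth_to_points(d) @ rot_y(-a).T for a, d in scans]
    return np.vstack(clouds)
```

A surface reconstruction step (e.g., Poisson reconstruction) would then turn the merged cloud into the watertight model that SmartBody rigs.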
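
Once rigged, the character can be driven through SmartBody's embedded Python interpreter, which accepts behavior requests as BML (Behavior Markup Language) snippets. The lines below are a hedged sketch rather than the project's actual scripts: the character name capturedAvatar is hypothetical, and the exact BML attributes should be checked against the SmartBody manual.

```python
# Hedged sketch of requesting behaviors inside SmartBody's embedded
# Python interpreter; "capturedAvatar" and the BML attribute values
# are illustrative, not taken from the project's actual scripts.
scene = getScene()                  # provided by SmartBody's interpreter
bml = scene.getBmlProcessor()

# Behaviors such as gazing and walking are expressed as BML snippets.
bml.execBML('capturedAvatar', '<gaze target="camera"/>')
bml.execBML('capturedAvatar', '<locomotion target="100 100"/>')
```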

This project is a collaboration between USC ICT’s Character Animation and Simulation Group, led by Ari Shapiro and Andrew Feng; USC ICT’s Mixed Reality Lab, led by Evan Suma and Mark Bolas; and the USC Institute for Robotics and Intelligent Systems, represented by Wang Ruizhe and Gerard Medioni, with contributions from USC Viterbi’s Hao Li.