Rapid Avatar

2014 - Present
Project Leaders: Ari Shapiro and Evan Suma Rosenberg

Download a PDF overview.

We generate a 3D character from a human subject using commodity sensors and near-automatic processes. While a digital double of a person can be built using a standard 3D pipeline, we are able to generate a photorealistic avatar using commodity hardware, with no artistic or technical intervention, in approximately 25 minutes and for essentially no cost.

Our pipeline includes the scanning of facial expressions, body scanning, and hand stitching. Our technology includes an automated system for generating blend shapes from scans, and an automatic body rigging system.
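To give a sense of how scanned expressions are combined at runtime, here is a minimal sketch of the standard blend-shape model (a hypothetical illustration, not the project's actual code): each expression scan is stored as per-vertex offsets from the neutral mesh, and a facial pose is a weighted sum of those offsets added to the neutral vertices.

```python
import numpy as np

def apply_blend_shapes(neutral, deltas, weights):
    """Combine a neutral mesh with weighted blend-shape deltas.

    neutral: (V, 3) array of vertex positions from the neutral scan.
    deltas:  (K, V, 3) array of per-vertex offsets, one per expression scan.
    weights: (K,) array of blend weights, typically in [0, 1].
    """
    # Contract the K blend shapes against their weights, then offset the
    # neutral vertex positions.
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy example: two vertices, two expression shapes.
neutral = np.zeros((2, 3))
deltas = np.array([
    [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]],  # shape 0 moves vertex 0 along x
    [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]],  # shape 1 moves vertex 1 along y
])
weights = np.array([0.5, 1.0])
mesh = apply_blend_shapes(neutral, deltas, weights)
# mesh[0] is [0.5, 0, 0]; mesh[1] is [0, 1, 0]
```

In practice, the automated system derives the delta arrays by registering each expression scan to the neutral scan so that vertices correspond; the sketch above assumes that correspondence has already been established.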

There are many benefits to constructing a just-in-time photorealistic avatar rather than a stylized, representational one. Details of appearance, such as clothing or hair style, can be preserved. Photorealism also potentially allows for detailed facial expression not possible through traditional methods; for example, a person might have a particular expression that cannot be easily mimicked through a less realistic proxy. In addition, photorealism allows for easier and faster recognition of the represented user, requiring less cognitive load to identify them. A photorealistic just-in-time character would also open the door for research into the use of such avatars, such as the effect of using your own realistic-looking avatar in a simulation, or the effect of interacting with 3D avatars that are familiar to the user.

This project is a collaboration between USC ICT’s Character Animation and Simulation Group (Ari Shapiro and Andrew Feng) and USC ICT’s Mixed Reality Lab (Evan Suma Rosenberg).