Wan-Chun Ma, Graham Fyffe, Paul Debevec: “Optimized Local Blendshape Mapping for Facial Motion Retargeting”

August 11, 2011 | Vancouver, British Columbia

Speakers: Wan-Chun Ma, Graham Fyffe, Paul Debevec
Host: SIGGRAPH 2011

One popular method for facial motion retargeting is local blendshape mapping [Pighin and Lewis 2006], in which each local facial region is controlled by a tracked feature (for example, a vertex in motion capture data). To map a target motion input onto blendshapes, a pose set is chosen for each facial region so as to minimize the retargeting error. However, because the best pose set for each region is chosen independently, the solution often contains inconsistent pose sets across neighboring face regions, as shown in Figure 1(b). As a result, even though every pose set matches its local features, the retargeting result is not guaranteed to be spatially smooth. In addition, previous methods ignore temporal coherence, which is key to jitter-free results.
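To make the per-region independence concrete, here is a minimal sketch (not the authors' implementation) of local blendshape mapping: each region's weights are solved by an independent least-squares fit of its blendshape basis to the captured feature displacement. The function name, data layout, and the simple clamping step are all illustrative assumptions; because each fit ignores its neighbors, nothing constrains adjacent regions to pick consistent poses.

```python
import numpy as np

def retarget_frame(regions, features):
    """Illustrative per-region blendshape fit (hypothetical helper).

    regions  : list of (3, K) matrices; column k is the displacement the
               k-th blendshape pose induces at that region's tracked feature.
    features : list of length-3 captured feature displacements, one per region.
    Returns a list of length-K weight vectors, solved independently per region.
    """
    weights = []
    for B, d in zip(regions, features):
        # Independent least-squares fit per region: min_w ||B w - d||^2.
        # This is the step that makes neighboring regions unaware of
        # each other, so the chosen pose sets need not agree spatially.
        w, *_ = np.linalg.lstsq(B, d, rcond=None)
        # Blendshape weights are typically bounded; clamp to [0, 1].
        weights.append(np.clip(w, 0.0, 1.0))
    return weights

# Toy example with two regions and four blendshape poses each.
rng = np.random.default_rng(0)
regions = [rng.standard_normal((3, 4)) for _ in range(2)]
features = [rng.standard_normal(3) for _ in range(2)]
ws = retarget_frame(regions, features)
```

Solving each frame in isolation, as above, is also what loses temporal coherence: nothing links the weights chosen at frame t to those at frame t+1.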