Events

Yuyu Xu, Andrew W. Feng and Ari Shapiro: “A Simple Method for High Quality Artist-Driven Lip Syncing”

March 21, 2013 | Orlando, FL

Speakers: Yuyu Xu, Andrew W. Feng and Ari Shapiro
Host: ACM Symposium on Interactive 3D Graphics and Games (I3D)

Abstract: We demonstrate a real-time lip animation algorithm that can be used to generate synchronized facial movements with audio generated from a text-to-speech engine or from recorded audio. Our method requires an animator to construct animations using a canonical set of visemes for all pairwise combinations of a reduced phoneme set (diphones). The diphone animations are then stitched together to construct the final animation. This method can be easily retargeted to new faces that use the same set of visemes. Thus, our method can be applied to any character that utilizes the same small set of facial poses. In addition, our method is editable in that it allows an artist to directly and easily change specific parts of the lip animation as needed. Our method requires no learning, can work on multiple languages, and is easily replicated. We make publicly available animations for lip syncing English utterances. We present a study showing the subjective quality of our algorithm, and compare it to the results of a popular commercial software package.
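
To illustrate the core idea described in the abstract, here is a minimal sketch of how diphone-based stitching might look in practice. It is not the authors' implementation: the diphone table, viseme names, keyframe structure, and timings below are all hypothetical placeholders. The sketch only shows the general pattern of mapping each adjacent phoneme pair to an artist-authored viseme curve and concatenating those clips onto a single timeline.

```python
# Hypothetical sketch of diphone-based lip-sync stitching (not the paper's code).
# Assumes each diphone (adjacent phoneme pair) maps to a short, artist-authored
# animation clip expressed as keyframes over a small canonical viseme set.

from dataclasses import dataclass
from typing import Dict, List, Tuple

Viseme = str  # e.g. "open", "wide" (illustrative labels)


@dataclass
class Keyframe:
    time: float                   # seconds, local to the diphone clip
    weights: Dict[Viseme, float]  # blend weights for the canonical visemes


# Artist-authored curves per diphone (placeholder data for illustration).
DIPHONE_CURVES: Dict[Tuple[str, str], List[Keyframe]] = {
    ("HH", "AH"): [Keyframe(0.00, {"open": 0.2}), Keyframe(0.12, {"open": 0.8})],
    ("AH", "L"):  [Keyframe(0.00, {"open": 0.8}), Keyframe(0.10, {"wide": 0.4})],
}


def stitch_diphones(phonemes: List[Tuple[str, float]]) -> List[Keyframe]:
    """Stitch per-diphone clips into one animation timeline.

    `phonemes` is a list of (phoneme, start_time) pairs, as might come from
    a TTS engine or a forced aligner. Each adjacent pair selects a diphone
    clip, which is shifted to the first phoneme's start time and appended.
    """
    timeline: List[Keyframe] = []
    for (p0, t0), (p1, _t1) in zip(phonemes, phonemes[1:]):
        clip = DIPHONE_CURVES.get((p0, p1), [])
        for kf in clip:
            timeline.append(Keyframe(t0 + kf.time, dict(kf.weights)))
    timeline.sort(key=lambda kf: kf.time)
    return timeline


if __name__ == "__main__":
    # "hello" -> HH AH L ... (timings are illustrative only)
    phones = [("HH", 0.00), ("AH", 0.10), ("L", 0.22)]
    for kf in stitch_diphones(phones):
        print(f"{kf.time:.2f}s  {kf.weights}")
```

Because the output is simply a set of viseme weight curves, retargeting to a new character amounts to swapping in that character's facial poses for the same canonical viseme names, which matches the retargeting property the abstract describes.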