Production-level facial performance capture using deep convolutional neural networks
by Samuli Laine, Tero Karras, Timo Aila, Antti Herva, Shunsuke Saito, Ronald Yu, Hao Li, Jaakko Lehtinen
Abstract:
We present a real-time deep learning framework for video-based facial performance capture---the dense 3D tracking of an actor's face given a monocular video. Our pipeline begins with accurately capturing a subject using a high-end production facial capture pipeline based on multi-view stereo tracking and artist-enhanced animations. With 5--10 minutes of captured footage, we train a convolutional neural network to produce high-quality output, including self-occluded regions, from a monocular video sequence of that subject. Since this 3D facial performance capture is fully automated, our system can drastically reduce the amount of labor involved in the development of modern narrative-driven video games or films involving realistic digital doubles of actors and potentially hours of animated dialogue per character. We compare our results with several state-of-the-art monocular real-time facial capture techniques and demonstrate compelling animation inference in challenging areas such as eyes and lips.
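To make the abstract's setup concrete, the sketch below illustrates the kind of per-subject regressor it describes: a small convolutional network mapping a monocular grayscale face crop to dense 3D vertex positions. This is a minimal illustration assuming PyTorch; the class name FaceVertexRegressor, the layer counts and sizes, the input resolution, and the vertex count are placeholder assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class FaceVertexRegressor(nn.Module):
    """Toy regressor: grayscale face crop -> dense 3D vertex positions."""
    def __init__(self, num_vertices: int = 5000):
        super().__init__()
        self.num_vertices = num_vertices
        # Strided convolutions downsample the frame; all sizes here are assumptions.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 96, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(96, 144, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(144, 216, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Fully connected head regresses (x, y, z) for every mesh vertex.
        self.head = nn.Linear(216, num_vertices * 3)

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        # frame: (batch, 1, H, W) grayscale crop of the actor's face.
        features = self.encoder(frame).flatten(1)
        return self.head(features).view(-1, self.num_vertices, 3)

# Training would fit this per subject against vertex positions produced by the
# multi-view capture pipeline (e.g. an L2 loss on vertices); here we only run a
# forward pass on a dummy frame.
model = FaceVertexRegressor()
vertices = model(torch.zeros(1, 1, 240, 320))   # shape: (1, 5000, 3)

In the paper's setting, such a network is trained separately for each actor on the 5--10 minutes of captured footage mentioned above, then run in real time on new monocular video of that same actor.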
Reference:
Production-level facial performance capture using deep convolutional neural networks (Samuli Laine, Tero Karras, Timo Aila, Antti Herva, Shunsuke Saito, Ronald Yu, Hao Li, Jaakko Lehtinen), In Proceedings of the ACM SIGGRAPH / Eurographics Symposium on Computer Animation, ACM Press, 2017.
Bibtex Entry:
@inproceedings{laine_production-level_2017,
	address = {Los Angeles, CA},
	title = {Production-level facial performance capture using deep convolutional neural networks},
	isbn = {978-1-4503-5091-4},
	url = {http://dl.acm.org/citation.cfm?doid=3099564.3099581},
	doi = {10.1145/3099564.3099581},
	abstract = {We present a real-time deep learning framework for video-based facial performance capture---the dense 3D tracking of an actor's face given a monocular video. Our pipeline begins with accurately capturing a subject using a high-end production facial capture pipeline based on multi-view stereo tracking and artist-enhanced animations. With 5--10 minutes of captured footage, we train a convolutional neural network to produce high-quality output, including self-occluded regions, from a monocular video sequence of that subject. Since this 3D facial performance capture is fully automated, our system can drastically reduce the amount of labor involved in the development of modern narrative-driven video games or films involving realistic digital doubles of actors and potentially hours of animated dialogue per character. We compare our results with several state-of-the-art monocular real-time facial capture techniques and demonstrate compelling animation inference in challenging areas such as eyes and lips.},
	booktitle = {Proceedings of the {ACM} {SIGGRAPH} / {Eurographics} {Symposium} on {Computer} {Animation}},
	publisher = {ACM Press},
	author = {Laine, Samuli and Karras, Tero and Aila, Timo and Herva, Antti and Saito, Shunsuke and Yu, Ronald and Li, Hao and Lehtinen, Jaakko},
	month = jul,
	year = {2017},
	keywords = {Graphics},
	pages = {1--10}
}