Dynamic Facial Asset and Rig Generation from a Single Scan (bibtex)
by Jiaman Li, Zhengfei Kuang, Yajie Zhao, Mingming He, Karl Bladin and Hao Li
Abstract:
The creation of high-fidelity computer-generated (CG) characters for films and games is tied with intensive manual labor, which involves the creation of comprehensive facial assets that are often captured using complex hardware. To simplify and accelerate this digitization process, we propose a framework for the automatic generation of high-quality dynamic facial models, including rigs which can be readily deployed for artists to polish. Our framework takes a single scan as input to generate a set of personalized blendshapes, dynamic textures, as well as secondary facial components (e.g., teeth and eyeballs). Based on a facial database with over 4,000 scans with pore-level details, varying expressions and identities, we adopt a self-supervised neural network to learn personalized blendshapes from a set of template expressions. We also model the joint distribution between identities and expressions, enabling the inference of a full set of personalized blendshapes with dynamic appearances from a single neutral input scan. Our generated personalized face rig assets are seamlessly compatible with professional production pipelines for facial animation and rendering. We demonstrate a highly robust and effective framework on a wide range of subjects, and showcase high-fidelity facial animations with automatically generated personalized dynamic textures.
Reference:
Dynamic Facial Asset and Rig Generation from a Single Scan (Jiaman Li, Zhengfei Kuang, Yajie Zhao, Mingming He, Karl Bladin and Hao Li), In ACM Transactions on Graphics, volume 39, number 6, 2020.
Bibtex Entry:
@article{li_dynamic_2020,
	title = {Dynamic {Facial} {Asset} and {Rig} {Generation} from a {Single} {Scan}},
	volume = {39},
	url = {https://dl.acm.org/doi/10.1145/3414685.3417817},
	doi = {10.1145/3414685.3417817},
	abstract = {The creation of high-fidelity computer-generated (CG) characters for films and games is tied with intensive manual labor, which involves the creation of comprehensive facial assets that are often captured using complex hardware. To simplify and accelerate this digitization process, we propose a framework for the automatic generation of high-quality dynamic facial models, including rigs which can be readily deployed for artists to polish. Our framework takes a single scan as input to generate a set of personalized blendshapes, dynamic textures, as well as secondary facial components (e.g., teeth and eyeballs). Based on a facial database with over 4,000 scans with pore-level details, varying expressions and identities, we adopt a self-supervised neural network to learn personalized blendshapes from a set of template expressions. We also model the joint distribution between identities and expressions, enabling the inference of a full set of personalized blendshapes with dynamic appearances from a single neutral input scan. Our generated personalized face rig assets are seamlessly compatible with professional production pipelines for facial animation and rendering. We demonstrate a highly robust and effective framework on a wide range of subjects, and showcase high-fidelity facial animations with automatically generated personalized dynamic textures.},
	number = {6},
	journal = {ACM Transactions on Graphics},
	author = {Li, Jiaman and Kuang, Zhengfei and Zhao, Yajie and He, Mingming and Bladin, Karl and Li, Hao},
	month = nov,
	year = {2020},
	keywords = {ARO-Coop, Graphics}
}