Learning Formation of Physically-Based Face Attributes (bibtex)
by Li, Ruilong, Bladin, Karl, Zhao, Yajie, Chinara, Chinmay, Ingraham, Owen, Xiang, Pengda, Ren, Xinglei, Prasad, Pratusha, Kishore, Bipin, Xing, Jun and Li, Hao
Abstract:
Based on a combined data set of 4000 high-resolution facial scans, we introduce a non-linear morphable face model, capable of producing multifarious face geometry of pore-level resolution, coupled with material attributes for use in physically-based rendering. We aim to maximize the variety of face identities, while increasing the robustness of correspondence between unique components, including middle-frequency geometry, albedo maps, specular intensity maps, and high-frequency displacement details. Our deep learning-based generative model learns to correlate albedo and geometry, which ensures the anatomical correctness of the generated assets. We demonstrate potential uses of our generative model for novel identity generation, model fitting, interpolation, animation, high-fidelity data visualization, and low-to-high resolution data domain transfer. We hope the release of this generative model will encourage further cooperation between all graphics, vision, and data-focused professionals, while demonstrating the cumulative value of every individual's complete biometric profile.
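The abstract describes a non-linear morphable model in which a single learned identity code is decoded into correlated geometry and albedo, and new identities are produced by sampling or interpolating latent codes. The sketch below illustrates only that general idea; it is not the authors' released model or training code, and all names (FaceDecoder, latent_dim, the head sizes) are invented for illustration.

# Minimal, hypothetical sketch of a non-linear morphable face decoder.
# Not the authors' model: architecture and names are assumptions for illustration.
import torch
import torch.nn as nn

class FaceDecoder(nn.Module):
    """Toy decoder mapping one identity code to correlated face attributes."""
    def __init__(self, latent_dim: int = 256, n_vertices: int = 10000, tex_res: int = 64):
        super().__init__()
        # A shared backbone keeps geometry and albedo correlated,
        # echoing the abstract's albedo/geometry correlation claim.
        self.backbone = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU())
        self.geometry_head = nn.Linear(512, n_vertices * 3)       # mid-frequency vertex positions
        self.albedo_head = nn.Linear(512, tex_res * tex_res * 3)  # low-resolution albedo map
        self.n_vertices, self.tex_res = n_vertices, tex_res

    def forward(self, z: torch.Tensor):
        h = self.backbone(z)
        geometry = self.geometry_head(h).view(-1, self.n_vertices, 3)
        albedo = torch.sigmoid(self.albedo_head(h)).view(-1, 3, self.tex_res, self.tex_res)
        return geometry, albedo

decoder = FaceDecoder()
z_a, z_b = torch.randn(1, 256), torch.randn(1, 256)  # two identity codes
for t in (0.0, 0.5, 1.0):                            # latent interpolation between identities
    geometry, albedo = decoder(torch.lerp(z_a, z_b, t))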
Reference:
Learning Formation of Physically-Based Face Attributes (Li, Ruilong, Bladin, Karl, Zhao, Yajie, Chinara, Chinmay, Ingraham, Owen, Xiang, Pengda, Ren, Xinglei, Prasad, Pratusha, Kishore, Bipin, Xing, Jun and Li, Hao), In Proceedings of CVPR 2020, IEEE, 2020.
Bibtex Entry:
@inproceedings{li_learning_2020,
	address = {Seattle, Washington},
	title = {Learning {Formation} of {Physically}-{Based} {Face} {Attributes}},
	url = {https://www.computer.org/csdl/proceedings-article/cvpr/2020/716800d407/1m3oiaP9ouQ},
	doi = {10.1109/CVPR42600.2020.00347},
	abstract = {Based on a combined data set of 4000 high resolution facial scans, we introduce a non-linear morphable face model, capable of producing multifarious face geometry of pore-level resolution, coupled with material attributes for use in physically-based rendering. We aim to maximize the variety of face identities, while increasing the robustness of correspondence between unique components, including middle-frequency geometry, albedo maps, specular intensity maps and high-frequency displacement details. Our deep learning based generative model learns to correlate albedo and geometry, which ensures the anatomical correctness of the generated assets. We demonstrate potential use of our generative model for novel identity generation, model fitting, interpolation, animation, high fidelity data visualization, and low-to-high resolution data domain transferring. We hope the release of this generative model will encourage further cooperation between all graphics, vision, and data focused professionals, while demonstrating the cumulative value of every individual’s complete biometric profile.},
	booktitle = {Proceedings of the {CVPR} 2020},
	publisher = {IEEE},
	author = {Li, Ruilong and Bladin, Karl and Zhao, Yajie and Chinara, Chinmay and Ingraham, Owen and Xiang, Pengda and Ren, Xinglei and Prasad, Pratusha and Kishore, Bipin and Xing, Jun and Li, Hao},
	month = apr,
	year = {2020},
	note = {arXiv: 2004.03458},
	keywords = {Graphics, UARC}
}