Modeling vellus facial hair from asperity scattering silhouettes
by Chloe LeGendre, Loc Huynh, Shanhe Wang, Paul Debevec
Abstract:
We present a technique for modeling the vellus hair over the face based on observations of asperity scattering along a subject's silhouette. We photograph the backlit subject in profile and three-quarters views with a high-resolution DSLR camera to observe the vellus hair on the side and front of the face and separately acquire a 3D scan of the face geometry and texture. We render a library of backlit vellus hair patch samples with different geometric parameters such as density, orientation, and curvature, and we compute image statistics for each set of parameters. We trace the silhouette contour in each face image and straighten the backlit hair silhouettes using image resampling. We compute image statistics for each section of the facial silhouette and determine which set of hair modeling parameters best matches the statistics. We then generate a complete set of vellus hairs for the face by interpolating and extrapolating the matched parameters over the skin. We add the modeled vellus hairs to the 3D facial scan and generate renderings under novel lighting conditions, generally matching the appearance of real photographs.
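As a rough illustration of the silhouette-straightening and statistics-matching steps described in the abstract, here is a minimal Python sketch. The function names, the particular statistics (mean intensity profile plus variance), and the nearest-neighbor parameter lookup are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of silhouette straightening and statistics matching.
# All names and the choice of statistics are assumptions for illustration.
import numpy as np
from scipy.ndimage import map_coordinates

def straighten_silhouette(image, contour, band_width=64):
    """Resample a band of pixels around a traced silhouette contour into a
    rectangular strip, so the backlit hairs stand roughly upright.

    image      : 2D float array (grayscale backlit photograph)
    contour    : (N, 2) array of (row, col) points tracing the silhouette
    band_width : number of pixels sampled along the normal at each point
    """
    # Tangents via central differences, rotated 90 degrees to get normals.
    # The normal is assumed to point away from the face, into the hair band.
    tangents = np.gradient(contour.astype(float), axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)

    # For each contour point, sample band_width pixels along its normal.
    offsets = np.arange(band_width)                              # (W,)
    rows = contour[:, 0, None] + normals[:, 0, None] * offsets  # (N, W)
    cols = contour[:, 1, None] + normals[:, 1, None] * offsets
    strip = map_coordinates(image, [rows, cols], order=1, mode="nearest")
    return strip  # (N, W): one straightened scanline per contour point

def section_statistics(strip, section_len=128):
    """Per-section statistics of the straightened strip. The paper computes
    image statistics per silhouette section; the exact statistics used here
    (mean intensity falloff plus variance) are an assumption."""
    stats = []
    for start in range(0, strip.shape[0] - section_len + 1, section_len):
        section = strip[start:start + section_len]
        profile = section.mean(axis=0)  # mean falloff away from the skin
        stats.append(np.concatenate([profile, [section.var()]]))
    return np.array(stats)  # (S, W + 1)

def match_parameters(section_stats, library_stats, library_params):
    """Nearest-neighbor lookup of the hair parameters (density, orientation,
    curvature, ...) whose rendered-patch statistics best match each section.

    library_stats  : (K, D) statistics of the rendered backlit hair patches
    library_params : (K, P) corresponding geometric parameters
    """
    d = np.linalg.norm(section_stats[:, None, :] - library_stats[None], axis=2)
    return library_params[d.argmin(axis=1)]  # (S, P) matched parameters
```

In the paper, the matched parameters are then interpolated and extrapolated over the skin to synthesize the full vellus hair layer; scattered-data interpolation over the face's UV parameterization would be one natural way to realize that step.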
Reference:
Modeling vellus facial hair from asperity scattering silhouettes (Chloe LeGendre, Loc Huynh, Shanhe Wang, Paul Debevec), In ACM SIGGRAPH 2017 Talks, ACM Press, 2017.
BibTeX Entry:
@inproceedings{legendre_modeling_2017,
	address = {Los Angeles, CA},
	title = {Modeling vellus facial hair from asperity scattering silhouettes},
	isbn = {978-1-4503-5008-2},
	url = {http://dl.acm.org/citation.cfm?doid=3084363.3085057},
	doi = {10.1145/3084363.3085057},
	abstract = {We present a technique for modeling the vellus hair over the face based on observations of asperity scattering along a subject's silhouette. We photograph the backlit subject in profile and three-quarters views with a high-resolution DSLR camera to observe the vellus hair on the side and front of the face and separately acquire a 3D scan of the face geometry and texture. We render a library of backlit vellus hair patch samples with different geometric parameters such as density, orientation, and curvature, and we compute image statistics for each set of parameters. We trace the silhouette contour in each face image and straighten the backlit hair silhouettes using image resampling. We compute image statistics for each section of the facial silhouette and determine which set of hair modeling parameters best matches the statistics. We then generate a complete set of vellus hairs for the face by interpolating and extrapolating the matched parameters over the skin. We add the modeled vellus hairs to the 3D facial scan and generate renderings under novel lighting conditions, generally matching the appearance of real photographs.},
	booktitle = {{ACM} {SIGGRAPH} 2017 Talks},
	publisher = {ACM Press},
	author = {LeGendre, Chloe and Huynh, Loc and Wang, Shanhe and Debevec, Paul},
	month = aug,
	year = {2017},
	keywords = {Graphics, UARC},
	pages = {1--2}
}