Real-Time Hair Rendering using Sequential Adversarial Networks (bibtex)
by Lingyu Wei, Liwen Hu, Vladimir Kim, Ersin Yumer, Hao Li
Abstract:
We present an adversarial network for rendering photorealistic hair as an alternative to conventional computer graphics pipelines. Our deep learning approach requires neither low-level parameter tuning nor ad-hoc asset design. Our method simply takes a strand-based 3D hair model as input and provides intuitive user control over color and lighting through reference images. To handle the diversity of hairstyles and their appearance complexity, we disentangle hair structure, color, and illumination properties using a sequential GAN architecture and a semi-supervised training approach. We also introduce an intermediate conversion step from edge activation maps to orientation fields to ensure a successful CG-to-photoreal transition while preserving the hair structures of the original input data. As we only require a feed-forward pass through the network, our rendering runs in real time. We demonstrate the synthesis of photorealistic hair images on a wide range of intricate hairstyles and compare our technique with state-of-the-art hair rendering methods.
Reference:
Real-Time Hair Rendering using Sequential Adversarial Networks (Lingyu Wei, Liwen Hu, Vladimir Kim, Ersin Yumer, Hao Li), In Proceedings of the 15th European Conference on Computer Vision, Computer Vision Foundation, 2018.
Bibtex Entry:
@inproceedings{wei_real-time_2018,
	address = {Munich, Germany},
	title = {Real-{Time} {Hair} {Rendering} using {Sequential} {Adversarial} {Networks}},
	url = {http://openaccess.thecvf.com/content_ECCV_2018/papers/Lingyu_Wei_Real-Time_Hair_Rendering_ECCV_2018_paper.pdf},
	abstract = {We present an adversarial network for rendering photorealistic hair as an alternative to conventional computer graphics pipelines. Our deep learning approach requires neither low-level parameter tuning nor ad-hoc asset design. Our method simply takes a strand-based 3D hair model as input and provides intuitive user control over color and lighting through reference images. To handle the diversity of hairstyles and their appearance complexity, we disentangle hair structure, color, and illumination properties using a sequential GAN architecture and a semi-supervised training approach. We also introduce an intermediate conversion step from edge activation maps to orientation fields to ensure a successful CG-to-photoreal transition while preserving the hair structures of the original input data. As we only require a feed-forward pass through the network, our rendering runs in real time. We demonstrate the synthesis of photorealistic hair images on a wide range of intricate hairstyles and compare our technique with state-of-the-art hair rendering methods.},
	booktitle = {Proceedings of the 15th {European} {Conference} on {Computer} {Vision}},
	publisher = {Computer Vision Foundation},
	author = {Wei, Lingyu and Hu, Liwen and Kim, Vladimir and Yumer, Ersin and Li, Hao},
	month = sep,
	year = {2018},
	keywords = {Graphics, UARC}
}