Rapid Creation of Photorealistic Virtual Reality Content with Consumer Depth Cameras
by Chih-Fan Chen, Mark Bolas, Evan Suma Rosenberg
Abstract:
Virtual objects are essential for building environments in virtual reality (VR) applications. However, creating photorealistic 3D models is not easy, and handcrafting a detailed 3D model from a real object can be time- and labor-intensive. An alternative is to build a structured camera array, such as a light stage, to reconstruct the model from a real object. However, such setups are expensive and impractical for most users. In this work, we demonstrate a complete end-to-end pipeline for the capture, processing, and rendering of view-dependent 3D models in virtual reality from a single consumer-grade RGB-D camera. The geometry model and the camera trajectories are automatically reconstructed from an RGB-D image sequence captured offline. Based on the head-mounted display (HMD) position, selected images are used for real-time model rendering. The result of this pipeline is a 3D mesh with view-dependent textures suitable for real-time rendering in virtual reality. Specular reflections and light-burst effects are especially noticeable when users view the objects from different perspectives in a head-tracked environment.
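To illustrate the view-dependent rendering step the abstract describes (selecting captured images based on the HMD position), here is a minimal sketch, not the authors' implementation: it picks the k captured camera views whose viewing directions are best aligned with the current HMD viewing direction. All names and the nearest-direction heuristic are assumptions for illustration.

```python
import numpy as np

def select_view_dependent_images(hmd_dir, camera_dirs, k=3):
    """Return indices of the k captured views whose viewing directions
    are closest (by angle) to the current HMD viewing direction.

    hmd_dir: (3,) viewing direction of the head-mounted display.
    camera_dirs: (N, 3) viewing directions of the captured RGB-D frames.
    """
    # Normalize so the dot product equals the cosine of the angle.
    hmd = np.asarray(hmd_dir, dtype=float)
    hmd = hmd / np.linalg.norm(hmd)
    cams = np.asarray(camera_dirs, dtype=float)
    cams = cams / np.linalg.norm(cams, axis=1, keepdims=True)
    # Cosine similarity between the HMD direction and each captured view.
    sims = cams @ hmd
    # Indices of the k best-aligned captured views, best first.
    return np.argsort(-sims)[:k]
```

In a real pipeline, the textures from the selected views would then be projected onto the reconstructed mesh and blended each frame; this sketch only covers the selection heuristic.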
Reference:
Rapid Creation of Photorealistic Virtual Reality Content with Consumer Depth Cameras (Chih-Fan Chen, Mark Bolas, Evan Suma Rosenberg), In Proceedings of 2017 IEEE Virtual Reality (VR), IEEE, 2017, pp. 473-474.
BibTeX Entry:
@inproceedings{chen_rapid_2017,
	address = {Los Angeles, CA},
	title = {Rapid {Creation} of {Photorealistic} {Virtual} {Reality} {Content} with {Consumer} {Depth} {Cameras}},
	isbn = {978-1-5090-6647-6},
	url = {http://ieeexplore.ieee.org/abstract/document/7892385/},
	doi = {10.1109/VR.2017.7892385},
	abstract = {Virtual objects are essential for building environments in virtual reality (VR) applications. However, creating photorealistic 3D models is not easy, and handcrafting a detailed 3D model from a real object can be time- and labor-intensive. An alternative is to build a structured camera array, such as a light stage, to reconstruct the model from a real object. However, such setups are expensive and impractical for most users. In this work, we demonstrate a complete end-to-end pipeline for the capture, processing, and rendering of view-dependent 3D models in virtual reality from a single consumer-grade RGB-D camera. The geometry model and the camera trajectories are automatically reconstructed from an RGB-D image sequence captured offline. Based on the head-mounted display (HMD) position, selected images are used for real-time model rendering. The result of this pipeline is a 3D mesh with view-dependent textures suitable for real-time rendering in virtual reality. Specular reflections and light-burst effects are especially noticeable when users view the objects from different perspectives in a head-tracked environment.},
	booktitle = {Proceedings of 2017 {IEEE} {Virtual} {Reality} ({VR})},
	publisher = {IEEE},
	author = {Chen, Chih-Fan and Bolas, Mark and Rosenberg, Evan Suma},
	month = mar,
	year = {2017},
	keywords = {MxR, UARC},
	pages = {473--474}
}