VIEW-DEPENDENT VIRTUAL REALITY CONTENT FROM RGB-D IMAGES (bibtex)
by Chih-Fan Chen, Mark Bolas, Evan Suma Rosenberg
Abstract:
High-fidelity virtual content is essential for the creation of compelling and effective virtual reality (VR) experiences. However, creating photorealistic content is not easy, and handcrafting detailed 3D models can be time and labor intensive. Structured camera arrays, such as light-stages, can scan and reconstruct high-fidelity virtual models, but the expense makes this technology impractical for most users. In this paper, we present a complete end-to-end pipeline for the capture, processing, and rendering of view-dependent 3D models in virtual reality from a single consumer-grade depth camera. The geometry model and the camera trajectories are automatically reconstructed and optimized from an RGB-D image sequence captured offline. Based on the head-mounted display (HMD) position, the three closest images are selected for real-time rendering and fused together to smooth the transition between viewpoints. Specular reflections and light-burst effects can also be preserved and reproduced. We confirmed that our method does not require technical background knowledge by testing our system with data captured by non-expert operators.
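To make the view-selection step concrete, the following is a minimal sketch of choosing the three capture viewpoints closest to the HMD and computing blend weights for fusing them. The function name, the inverse-distance weighting, and the example positions are illustrative assumptions; the abstract states only that the three closest images are selected and fused, not how the fusion weights are computed.

    import numpy as np

    def select_and_weight_views(hmd_position, camera_positions, k=3):
        # Pick the k captured viewpoints closest to the current HMD position.
        # Inverse-distance weighting is an assumption; the paper only states
        # that the three closest images are fused to smooth the transition.
        cams = np.asarray(camera_positions, dtype=float)   # (N, 3) capture positions
        hmd = np.asarray(hmd_position, dtype=float)        # (3,) HMD position
        dists = np.linalg.norm(cams - hmd, axis=1)         # distance to each view
        nearest = np.argsort(dists)[:k]                    # indices of k closest views
        weights = 1.0 / (dists[nearest] + 1e-6)            # avoid division by zero
        return nearest, weights / weights.sum()            # normalized blend weights

    # Example: four capture positions around an object, HMD nearest the first.
    indices, weights = select_and_weight_views(
        hmd_position=[0.1, 0.0, 1.0],
        camera_positions=[[0.0, 0.0, 1.0], [1.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])

At render time, the textures from the selected views would be blended with these weights so that the contribution of each view changes smoothly as the HMD moves between capture positions.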
Reference:
VIEW-DEPENDENT VIRTUAL REALITY CONTENT FROM RGB-D IMAGES (Chih-Fan Chen, Mark Bolas, Evan Suma Rosenberg), In Proceedings of ICIP 2017, IEEE, 2017.
Bibtex Entry:
@inproceedings{chen_view-dependent_2017,
	address = {Beijing, China},
	title = {{VIEW}-{DEPENDENT} {VIRTUAL} {REALITY} {CONTENT} {FROM} {RGB}-{D} {IMAGES}},
	url = {http://people.ict.usc.edu/~suma/papers/chen-icip2017},
	abstract = {High-fidelity virtual content is essential for the creation of compelling and effective virtual reality (VR) experiences. However, creating photorealistic content is not easy, and handcrafting detailed 3D models can be time and labor intensive. Structured camera arrays, such as light-stages, can scan and reconstruct high-fidelity virtual models, but the expense makes this technology impractical for most users. In this paper, we present a complete end-to-end pipeline for the capture, processing, and rendering of view-dependent 3D models in virtual reality from a single consumer-grade depth camera. The geometry model and the camera trajectories are automatically reconstructed and optimized from an RGB-D image sequence captured offline. Based on the head-mounted display (HMD) position, the three closest images are selected for real-time rendering and fused together to smooth the transition between viewpoints. Specular reflections and light-burst effects can also be preserved and reproduced. We confirmed that our method does not require technical background knowledge by testing our system with data captured by non-expert operators.},
	booktitle = {Proceedings of {ICIP} 2017},
	publisher = {IEEE},
	author = {Chen, Chih-Fan and Bolas, Mark and Rosenberg, Evan Suma},
	month = sep,
	year = {2017},
	keywords = {MxR, UARC}
}