Real-time 3D rendering using depth-based geometry reconstruction and view-dependent texture mapping
by Chih-Fan Chen, Mark Bolas, Evan Suma
Abstract:
With the recent proliferation of high-fidelity head-mounted displays (HMDs), there is increasing demand for realistic 3D content that can be integrated into virtual reality environments. However, creating photorealistic models is not only difficult but also time consuming. A simpler alternative involves scanning objects in the real world and rendering their digitized counterparts in the virtual world. Objects can be captured by performing a 3D scan with widely available consumer-grade RGB-D cameras. This process involves reconstructing the geometric model from depth images generated by a structured-light or time-of-flight sensor. The colormap is determined by fusing data from multiple color images captured during the scan. Existing methods compute the color of each vertex by averaging the colors from all of these images. Blending colors in this manner creates low-fidelity models that appear blurry (Figure 1, right). Furthermore, this approach yields textures with fixed lighting baked onto the model. This limitation becomes more apparent when viewed in head-tracked virtual reality, as the illumination (e.g., specular reflections) does not change appropriately with the user's viewpoint.
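The per-vertex averaging that the abstract critiques can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and the per-vertex color samples are hypothetical, and the real pipeline would first project each vertex into every color image to gather its observations:

```python
# Naive colormap fusion: average each vertex's color over all frames
# in which it is observed. This is the blending scheme the abstract
# critiques; view-dependent texture mapping would instead weight each
# frame by how closely its camera direction matches the current viewpoint.

def average_vertex_color(samples):
    """samples: list of (r, g, b) tuples observed for one vertex
    across the color images of the scan."""
    n = len(samples)
    r = sum(s[0] for s in samples) / n
    g = sum(s[1] for s in samples) / n
    b = sum(s[2] for s in samples) / n
    return (r, g, b)

# Averaging one frame containing a specular highlight (255) with two
# diffuse observations (100) produces a single washed-out color that is
# then fixed ("baked") on the model regardless of the viewer's position.
print(average_vertex_color([(255, 255, 255), (100, 100, 100), (100, 100, 100)]))
```

Because the result is a single static color per vertex, view-dependent effects such as specular reflections cannot respond to head movement, which is the limitation the paper addresses.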
Reference:
Real-time 3D rendering using depth-based geometry reconstruction and view-dependent texture mapping (Chih-Fan Chen, Mark Bolas, Evan Suma), In Proceedings of SIGGRAPH '16: ACM SIGGRAPH 2016, ACM Press, 2016.
Bibtex Entry:
@inproceedings{chen_real-time_2016,
	address = {Anaheim, CA},
	title = {Real-time 3D rendering using depth-based geometry reconstruction and view-dependent texture mapping},
	isbn = {978-1-4503-4371-8},
	url = {http://dl.acm.org/citation.cfm?id=2945162},
	doi = {10.1145/2945078.2945162},
	abstract = {With the recent proliferation of high-fidelity head-mounted displays (HMDs), there is increasing demand for realistic 3D content that can be integrated into virtual reality environments. However, creating photorealistic models is not only difficult but also time consuming. A simpler alternative involves scanning objects in the real world and rendering their digitized counterparts in the virtual world. Objects can be captured by performing a 3D scan with widely available consumer-grade RGB-D cameras. This process involves reconstructing the geometric model from depth images generated by a structured-light or time-of-flight sensor. The colormap is determined by fusing data from multiple color images captured during the scan. Existing methods compute the color of each vertex by averaging the colors from all of these images. Blending colors in this manner creates low-fidelity models that appear blurry (Figure 1, right). Furthermore, this approach yields textures with fixed lighting baked onto the model. This limitation becomes more apparent when viewed in head-tracked virtual reality, as the illumination (e.g., specular reflections) does not change appropriately with the user's viewpoint.},
	booktitle = {Proceedings of {SIGGRAPH} '16: {ACM} {SIGGRAPH} 2016},
	publisher = {ACM Press},
	author = {Chen, Chih-Fan and Bolas, Mark and Suma, Evan},
	month = jul,
	year = {2016},
	keywords = {MxR, UARC},
	pages = {1--2}
}