DoubleFusion: Real-time Capture of Human Performances with Inner Body Shapes from a Single Depth Sensor
by Tao Yu, Zerong Zheng, Kaiwen Guo, Jianhui Zhao, Qionghai Dai, Hao Li, Gerard Pons-Moll, Yebin Liu
Abstract:
We propose DoubleFusion, a new real-time system that combines volumetric dynamic reconstruction with data-driven template fitting to simultaneously reconstruct detailed geometry, non-rigid motion and the inner human body shape from a single depth camera. One of the key contributions of this method is a double layer representation consisting of a complete parametric body shape inside, and a gradually fused outer surface layer. A pre-defined node graph on the body surface parameterizes the non-rigid deformations near the body, and a free-form dynamically changing graph parameterizes the outer surface layer far from the body, which allows more general reconstruction. We further propose a joint motion tracking method based on the double layer representation to enable robust and fast motion tracking performance. Moreover, the inner body shape is optimized online and forced to fit inside the outer surface layer. Overall, our method enables increasingly denoised, detailed and complete surface reconstructions, fast motion tracking performance and plausible inner body shape reconstruction in real-time. In particular, experiments show improved fast motion tracking and loop closure performance on more challenging scenarios.
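As a rough illustration of the double layer idea (a hypothetical toy sketch, not the authors' implementation): each surface vertex is warped by a weighted blend of rigid transforms attached to graph nodes, where the nodes come from two sources, a pre-defined graph on the parametric body and a free-form graph on the fused outer surface. All names and parameters below are invented for illustration.

```python
import numpy as np

def warp_vertex(v, nodes, rotations, translations, sigma=0.05):
    """Blend rigid node transforms (R_i, t_i), each anchored at node
    center g_i, acting on vertex v. Weights fall off as a Gaussian of
    the vertex-to-node distance and are normalized to sum to one."""
    d = np.linalg.norm(nodes - v, axis=1)
    w = np.exp(-d ** 2 / (2 * sigma ** 2))
    w /= w.sum()
    warped = np.zeros(3)
    for wi, g, R, t in zip(w, nodes, rotations, translations):
        warped += wi * (R @ (v - g) + g + t)
    return warped

# Toy version: body-graph nodes and free-form outer-surface nodes are
# simply concatenated before blending.
body_nodes = np.array([[0.0, 0.0, 0.0]])
outer_nodes = np.array([[0.0, 0.3, 0.0]])
nodes = np.vstack([body_nodes, outer_nodes])
rotations = [np.eye(3)] * len(nodes)
translations = [np.array([0.1, 0.0, 0.0])] * len(nodes)

v = np.array([0.0, 0.1, 0.0])
print(warp_vertex(v, nodes, rotations, translations))  # -> [0.1 0.1 0. ]
```

With identity rotations and identical translations, the blend reduces to a pure translation of the vertex, which is a quick sanity check that the weights are normalized correctly.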
Reference:
DoubleFusion: Real-time Capture of Human Performances with Inner Body Shapes from a Single Depth Sensor (Tao Yu, Zerong Zheng, Kaiwen Guo, Jianhui Zhao, Qionghai Dai, Hao Li, Gerard Pons-Moll, Yebin Liu), In Proceedings of the 31st IEEE International Conference on Computer Vision and Pattern Recognition, IEEE, 2018.
Bibtex Entry:
@inproceedings{yu_doublefusion:_2018,
	address = {Salt Lake City, UT},
	title = {{DoubleFusion}: {Real}-time {Capture} of {Human} {Performances} with {Inner} {Body} {Shapes} from a {Single} {Depth} {Sensor}},
	url = {http://openaccess.thecvf.com/content_cvpr_2018/CameraReady/1321.pdf},
	abstract = {We propose DoubleFusion, a new real-time system that combines volumetric dynamic reconstruction with data-driven template fitting to simultaneously reconstruct detailed geometry, non-rigid motion and the inner human body shape from a single depth camera. One of the key contributions of this method is a double layer representation consisting of a complete parametric body shape inside, and a gradually fused outer surface layer. A pre-defined node graph on the body surface parameterizes the non-rigid deformations near the body, and a free-form dynamically changing graph parameterizes the outer surface layer far from the body, which allows more general reconstruction. We further propose a joint motion tracking method based on the double layer representation to enable robust and fast motion tracking performance. Moreover, the inner body shape is optimized online and forced to fit inside the outer surface layer. Overall, our method enables increasingly denoised, detailed and complete surface reconstructions, fast motion tracking performance and plausible inner body shape reconstruction in real-time. In particular, experiments show improved fast motion tracking and loop closure performance on more challenging scenarios.},
	booktitle = {Proceedings of the 31st {IEEE} {International} {Conference} on {Computer} {Vision} and {Pattern} {Recognition}},
	publisher = {IEEE},
	author = {Yu, Tao and Zheng, Zerong and Guo, Kaiwen and Zhao, Jianhui and Dai, Qionghai and Li, Hao and Pons-Moll, Gerard and Liu, Yebin},
	month = jun,
	year = {2018},
	keywords = {Graphics}
}