HybridFusion: Real-Time Performance Capture Using a Single Depth Sensor and Sparse IMUs
by Zerong Zheng, Tao Yu, Hao Li, Kaiwen Guo, Qionghai Dai, Lu Fang, Yebin Liu
Abstract:
We propose a lightweight yet highly robust method for real-time human performance capture based on a single depth camera and sparse inertial measurement units (IMUs). Our method combines non-rigid surface tracking and volumetric fusion to simultaneously reconstruct challenging motions, detailed geometries, and the inner human body of a clothed subject. The proposed hybrid motion tracking algorithm and efficient per-frame sensor calibration technique enable non-rigid surface reconstruction for fast motions and challenging poses with severe occlusions. Significant fusion artifacts are reduced using a new confidence measurement for our adaptive TSDF-based fusion. These contributions are mutually beneficial in our reconstruction system, enabling practical human performance capture that is real-time, robust, low-cost, and easy to deploy. Experiments show that extremely challenging performances and loop-closure problems can be handled successfully.
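
To give a concrete sense of the confidence-weighted fusion the abstract refers to, below is a minimal sketch assuming a standard weighted running-average TSDF update in which a per-voxel confidence in [0, 1] scales each new observation's contribution. The function name, the use of the confidence value directly as the fusion weight, and the toy values are illustrative assumptions; they do not reproduce the paper's actual confidence measurement or fusion formulation.

# Minimal illustrative sketch (not the paper's implementation): a TSDF update
# where each depth sample carries a confidence weight, so low-confidence
# observations (e.g. fast motion, occlusion) contribute less to the fused surface.
import numpy as np

def fuse_tsdf(tsdf, weights, new_sdf, confidence, max_weight=64.0):
    """Weighted running-average TSDF fusion for a batch of voxels.

    tsdf, weights: current per-voxel signed distance and accumulated weight
    new_sdf:       truncated signed distance observed in the current frame
    confidence:    per-voxel confidence in [0, 1] for the new observation
    """
    w_new = confidence  # hypothetical: confidence acts directly as the fusion weight
    fused = (tsdf * weights + new_sdf * w_new) / np.maximum(weights + w_new, 1e-8)
    weights = np.minimum(weights + w_new, max_weight)
    return fused, weights

# Toy usage: three voxels, the last one observed with low confidence,
# so its (possibly spurious) new measurement barely moves the fused value.
tsdf = np.array([0.02, -0.01, 0.05])
weights = np.array([10.0, 10.0, 10.0])
new_sdf = np.array([0.00, 0.00, -0.30])
confidence = np.array([1.0, 1.0, 0.1])
tsdf, weights = fuse_tsdf(tsdf, weights, new_sdf, confidence)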
Reference:
HybridFusion: Real-Time Performance Capture Using a Single Depth Sensor and Sparse IMUs (Zerong Zheng, Tao Yu, Hao Li, Kaiwen Guo, Qionghai Dai, Lu Fang, Yebin Liu), In Proceedings of the 15th European Conference on Computer Vision, Computer Vision Foundation, 2018.
Bibtex Entry:
@inproceedings{zheng_hybridfusion:_2018,
	address = {Munich, Germany},
	title = {{HybridFusion}: {Real}-{Time} {Performance} {Capture} {Using} a {Single} {Depth} {Sensor} and {Sparse} {IMUs}},
	url = {http://openaccess.thecvf.com/content_ECCV_2018/papers/Zerong_Zheng_HybridFusion_Real-Time_Performance_ECCV_2018_paper.pdf},
	abstract = {We propose a lightweight yet highly robust method for real-time human performance capture based on a single depth camera and sparse inertial measurement units (IMUs). Our method combines non-rigid surface tracking and volumetric fusion to simultaneously reconstruct challenging motions, detailed geometries, and the inner human body of a clothed subject. The proposed hybrid motion tracking algorithm and efficient per-frame sensor calibration technique enable non-rigid surface reconstruction for fast motions and challenging poses with severe occlusions. Significant fusion artifacts are reduced using a new confidence measurement for our adaptive TSDF-based fusion. These contributions are mutually beneficial in our reconstruction system, enabling practical human performance capture that is real-time, robust, low-cost, and easy to deploy. Experiments show that extremely challenging performances and loop-closure problems can be handled successfully.},
	booktitle = {Proceedings of the 15th {European} {Conference} on {Computer} {Vision}},
	publisher = {Computer Vision Foundation},
	author = {Zheng, Zerong and Yu, Tao and Li, Hao and Guo, Kaiwen and Dai, Qionghai and Fang, Lu and Liu, Yebin},
	month = sep,
	year = {2018},
	keywords = {Graphics, UARC}
}