Motion recognition of self and others on realistic 3D avatars (bibtex)
by Sahil Narang, Andrew Best, Andrew Feng, Sin-hwa Kang, Dinesh Manocha, Ari Shapiro
Abstract:
Current 3D capture and modeling technology can rapidly generate highly photorealistic 3D avatars of human subjects. However, while the avatars look like their human counterparts, their movements often do not mimic their own due to existing challenges in accurate motion capture and retargeting. A better understanding of factors that influence the perception of biological motion would be valuable for creating virtual avatars that capture the essence of their human subjects. To investigate these issues, we captured 22 subjects walking in an open space. We then performed a study where participants were asked to identify their own motion in varying visual representations and scenarios. Similarly, participants were asked to identify the motion of familiar individuals. Unlike prior studies that used captured footage with simple “point-light” displays, we rendered the motion on photo-realistic 3D virtual avatars of the subject. We found that self-recognition was significantly higher for virtual avatars than with point-light representations. Users were more confident of their responses when identifying their motion presented on their virtual avatar. Recognition rates varied considerably between motion types for recognition of others, but not for self-recognition. Overall, our results are consistent with previous studies that used recorded footage and offer key insights into the perception of motion rendered on virtual avatars.
Reference:
Motion recognition of self and others on realistic 3D avatars (Sahil Narang, Andrew Best, Andrew Feng, Sin-hwa Kang, Dinesh Manocha, Ari Shapiro), In Computer Animation and Virtual Worlds, volume 28, 2017.
Bibtex Entry:
@article{narang_motion_2017,
	title = {Motion recognition of self and others on realistic 3D avatars},
	volume = {28},
	issn = {1546-4261},
	url = {http://onlinelibrary.wiley.com/doi/10.1002/cav.1762/epdf},
	doi = {10.1002/cav.1762},
	abstract = {Current 3D capture and modeling technology can rapidly generate highly photorealistic 3D avatars of human subjects. However, while the avatars look like their human counterparts, their movements often do not mimic their own due to existing challenges in accurate motion capture and retargeting. A better understanding of factors that influence the perception of biological motion would be valuable for creating virtual avatars that capture the essence of their human subjects. To investigate these issues, we captured 22 subjects walking in an open space. We then performed a study where participants were asked to identify their own motion in varying visual representations and scenarios. Similarly, participants were asked to identify the motion of familiar individuals. Unlike prior studies that used captured footage with simple “point-light” displays, we rendered the motion on photo-realistic 3D virtual avatars of the subject. We found that self-recognition was significantly higher for virtual avatars than with point-light representations. Users were more confident of their responses when identifying their motion presented on their virtual avatar. Recognition rates varied considerably between motion types for recognition of others, but not for self-recognition. Overall, our results are consistent with previous studies that used recorded footage and offer key insights into the perception of motion rendered on virtual avatars.},
	number = {3-4},
	journal = {Computer Animation and Virtual Worlds},
	author = {Narang, Sahil and Best, Andrew and Feng, Andrew and Kang, Sin-hwa and Manocha, Dinesh and Shapiro, Ari},
	month = may,
	year = {2017},
	keywords = {Virtual Humans}
}