Ubiquitous Virtual Humans: A Multi-platform Framework for Embodied AI Agents in XR
by Arno Hartholt, Ed Fast, Adam Reilly, Wendy Whitcup, Matt Liewer and Sharon Mozgai
Abstract:
We present an architecture and framework for the development of virtual humans for a range of computing platforms, including mobile, web, Virtual Reality (VR) and Augmented Reality (AR). The framework uses a mix of in-house and commodity technologies to support audio-visual sensing, speech recognition, natural language processing, nonverbal behavior generation and realization, text-to-speech generation, and rendering. This work builds on the Virtual Human Toolkit, which has been extended to support computing platforms beyond Windows. The resulting framework maintains the modularity of the underlying architecture, allows re-use of both logic and content through cloud services, and is extensible by porting lightweight clients. We present the current state of the framework, discuss how we model and animate our characters, and offer lessons learned through several use cases, including expressive character animation in seated VR, shared space and navigation in roomscale VR, autonomous AI in mobile AR, and real-time user performance feedback based on mobile sensors in headset AR.
Reference:
Ubiquitous Virtual Humans: A Multi-platform Framework for Embodied AI Agents in XR (Hartholt, Arno, Fast, Ed, Reilly, Adam, Whitcup, Wendy, Liewer, Matt and Mozgai, Sharon), In Proceedings of the 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), IEEE, 2019.
Bibtex Entry:
@inproceedings{hartholt_ubiquitous_2019,
	address = {San Diego, CA, USA},
	title = {Ubiquitous {Virtual} {Humans}: {A} {Multi}-platform {Framework} for {Embodied} {AI} {Agents} in {XR}},
	isbn = {978-1-72815-604-0},
	url = {https://ieeexplore.ieee.org/document/8942321/},
	doi = {10.1109/AIVR46125.2019.00072},
	abstract = {We present an architecture and framework for the development of virtual humans for a range of computing platforms, including mobile, web, Virtual Reality (VR) and Augmented Reality (AR). The framework uses a mix of in-house and commodity technologies to support audio-visual sensing, speech recognition, natural language processing, nonverbal behavior generation and realization, text-to-speech generation, and rendering. This work builds on the Virtual Human Toolkit, which has been extended to support computing platforms beyond Windows. The resulting framework maintains the modularity of the underlying architecture, allows re-use of both logic and content through cloud services, and is extensible by porting lightweight clients. We present the current state of the framework, discuss how we model and animate our characters, and offer lessons learned through several use cases, including expressive character animation in seated VR, shared space and navigation in roomscale VR, autonomous AI in mobile AR, and real-time user performance feedback based on mobile sensors in headset AR.},
	booktitle = {Proceedings of the 2019 {IEEE} {International} {Conference} on {Artificial} {Intelligence} and {Virtual} {Reality} ({AIVR})},
	publisher = {IEEE},
	author = {Hartholt, Arno and Fast, Ed and Reilly, Adam and Whitcup, Wendy and Liewer, Matt and Mozgai, Sharon},
	month = dec,
	year = {2019},
	keywords = {UARC, Virtual Humans},
	pages = {308--3084}
}