Multi-Platform Expansion of the Virtual Human Toolkit: Ubiquitous Conversational Agents
by Arno Hartholt, Ed Fast, Adam Reilly, Wendy Whitcup, Matt Liewer and Sharon Mozgai
Abstract:
We present an extension of the Virtual Human Toolkit to include a range of computing platforms, including mobile, web, Virtual Reality (VR) and Augmented Reality (AR). The Toolkit uses a mix of in-house and commodity technologies to support audio-visual sensing, speech recognition, natural language processing, nonverbal behavior generation and realization, text-to-speech generation and rendering. It has been extended to support computing platforms beyond Windows by leveraging microservices. The resulting framework maintains the modularity of the underlying architecture, allows re-use of both logic and content through cloud services, and is extensible by porting lightweight clients. We present the current state of the framework, discuss how we model and animate our characters, and offer lessons learned through several use cases, including expressive character animation in seated VR, shared space and navigation in room-scale VR, autonomous AI in mobile AR, and real-time user performance feedback leveraging mobile sensors in headset AR.
Reference:
Multi-Platform Expansion of the Virtual Human Toolkit: Ubiquitous Conversational Agents (Arno Hartholt, Ed Fast, Adam Reilly, Wendy Whitcup, Matt Liewer and Sharon Mozgai), In International Journal of Semantic Computing, volume 14, number 3, pages 315–332, September 2020.
Bibtex Entry:
@article{hartholt_multi-platform_2020,
	title = {Multi-{Platform} {Expansion} of the {Virtual} {Human} {Toolkit}: {Ubiquitous} {Conversational} {Agents}},
	volume = {14},
	issn = {1793-351X, 1793-7108},
	url = {https://www.worldscientific.com/doi/abs/10.1142/S1793351X20400127},
	doi = {10.1142/S1793351X20400127},
	abstract = {We present an extension of the Virtual Human Toolkit to include a range of computing platforms, including mobile, web, Virtual Reality (VR) and Augmented Reality (AR). The Toolkit uses a mix of in-house and commodity technologies to support audio-visual sensing, speech recognition, natural language processing, nonverbal behavior generation and realization, text-to-speech generation and rendering. It has been extended to support computing platforms beyond Windows by leveraging microservices. The resulting framework maintains the modularity of the underlying architecture, allows re-use of both logic and content through cloud services, and is extensible by porting lightweight clients. We present the current state of the framework, discuss how we model and animate our characters, and offer lessons learned through several use cases, including expressive character animation in seated VR, shared space and navigation in room-scale VR, autonomous AI in mobile AR, and real-time user performance feedback leveraging mobile sensors in headset AR.},
	number = {03},
	journal = {International Journal of Semantic Computing},
	author = {Hartholt, Arno and Fast, Ed and Reilly, Adam and Whitcup, Wendy and Liewer, Matt and Mozgai, Sharon},
	month = sep,
	year = {2020},
	keywords = {UARC, Virtual Humans},
	pages = {315--332}
}