Speech Breathing in Virtual Humans: An Interactive Model and Empirical Study (bibtex)
by Ulysses Bernardet, Sin-Hwa Kang, Andrew Feng, Steve DiPaola, Ari Shapiro
Abstract:
Human speech production requires the dynamic regulation of air through the vocal system. While virtual character systems are commonly capable of speech output, they rarely take breathing during speaking – speech breathing – into account. We believe that integrating dynamic speech breathing systems into virtual characters can significantly contribute to augmenting their realism. Here, we present a novel control architecture aimed at generating speech breathing in virtual characters. This architecture is informed by behavioral, linguistic, and anatomical knowledge of human speech breathing. Based on textual input and controlled by a set of low- and high-level parameters, the system produces dynamic signals in real time that control the virtual character’s anatomy (thorax, abdomen, head, nostrils, and mouth) and sound production (speech and breathing).
Reference:
Speech Breathing in Virtual Humans: An Interactive Model and Empirical Study (Ulysses Bernardet, Sin-Hwa Kang, Andrew Feng, Steve DiPaola, Ari Shapiro), In 2019 IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE), IEEE, 2019.
Bibtex Entry:
@inproceedings{bernardet_speech_2019,
	address = {Osaka, Japan},
	title = {Speech {Breathing} in {Virtual} {Humans}: {An} {Interactive} {Model} and {Empirical} {Study}},
	isbn = {978-1-72813-219-8},
	url = {https://ieeexplore.ieee.org/document/8714737/},
	doi = {10.1109/VHCIE.2019.8714737},
	abstract = {Human speech production requires the dynamic regulation of air through the vocal system. While virtual character systems are commonly capable of speech output, they rarely take breathing during speaking – speech breathing – into account. We believe that integrating dynamic speech breathing systems into virtual characters can significantly contribute to augmenting their realism. Here, we present a novel control architecture aimed at generating speech breathing in virtual characters. This architecture is informed by behavioral, linguistic, and anatomical knowledge of human speech breathing. Based on textual input and controlled by a set of low- and high-level parameters, the system produces dynamic signals in real time that control the virtual character’s anatomy (thorax, abdomen, head, nostrils, and mouth) and sound production (speech and breathing).},
	booktitle = {2019 {IEEE} {Virtual} {Humans} and {Crowds} for {Immersive} {Environments} ({VHCIE})},
	publisher = {IEEE},
	author = {Bernardet, Ulysses and Kang, Sin-Hwa and Feng, Andrew and DiPaola, Steve and Shapiro, Ari},
	month = mar,
	year = {2019},
	keywords = {MxR, UARC, Virtual Humans},
	pages = {1--9}
}