A Dynamic Speech Breathing System for Virtual Characters (bibtex)
by Ulysses Bernardet, Sin-hwa Kang, Andrew Feng, Steve DiPaola, Ari Shapiro
Abstract:
Human speech production requires the dynamic regulation of air through the vocal system. While virtual character systems commonly are capable of speech output, they rarely take breathing during speaking - speech breathing - into account. We believe that integrating dynamic speech breathing systems in virtual characters can significantly contribute to augmenting their realism. Here, we present a novel control architecture aimed at generating speech breathing in virtual characters. This architecture is informed by behavioral, linguistic and anatomical knowledge of human speech breathing. Based on textual input and controlled by a set of low- and high-level parameters, the system produces dynamic signals in real-time that control the virtual character's anatomy (thorax, abdomen, head, nostrils, and mouth) and sound production (speech and breathing). The system is implemented in Python, offers a graphical user interface for easy parameter control, and simultaneously controls the visual and auditory aspects of speech breathing through the integration of the character animation system SmartBody [1] and the audio synthesis platform SuperCollider [2]. Beyond contributing to realism, the presented system allows for a flexible generation of a wide range of speech breathing behaviors that can convey information about the speaker such as mood, age, and health.
Reference:
A Dynamic Speech Breathing System for Virtual Characters (Ulysses Bernardet, Sin-hwa Kang, Andrew Feng, Steve DiPaola, Ari Shapiro), In Proceedings of the 17th International Conference on Intelligent Virtual Agents, Springer, 2017.
Bibtex Entry:
@inproceedings{bernardet_dynamic_2017,
	address = {Stockholm, Sweden},
	title = {A {Dynamic} {Speech} {Breathing} {System} for {Virtual} {Characters}},
	isbn = {978-3-319-67401-8},
	url = {https://link.springer.com/chapter/10.1007/978-3-319-67401-8_5},
	doi = {10.1007/978-3-319-67401-8_5},
	abstract = {Human speech production requires the dynamic regulation of air through the vocal system. While virtual character systems commonly are capable of speech output, they rarely take breathing during speaking - speech breathing - into account. We believe that integrating dynamic speech breathing systems in virtual characters can significantly contribute to augmenting their realism. Here, we present a novel control architecture aimed at generating speech breathing in virtual characters. This architecture is informed by behavioral, linguistic and anatomical knowledge of human speech breathing. Based on textual input and controlled by a set of low- and high-level parameters, the system produces dynamic signals in real-time that control the virtual character's anatomy (thorax, abdomen, head, nostrils, and mouth) and sound production (speech and breathing). The system is implemented in Python, offers a graphical user interface for easy parameter control, and simultaneously controls the visual and auditory aspects of speech breathing through the integration of the character animation system SmartBody [1] and the audio synthesis platform SuperCollider [2]. Beyond contributing to realism, the presented system allows for a flexible generation of a wide range of speech breathing behaviors that can convey information about the speaker such as mood, age, and health.},
	booktitle = {Proceedings of the 17th {International} {Conference} on {Intelligent} {Virtual} {Agents}},
	publisher = {Springer},
	author = {Bernardet, Ulysses and Kang, Sin-hwa and Feng, Andrew and DiPaola, Steve and Shapiro, Ari},
	month = aug,
	year = {2017},
	keywords = {MedVR, Virtual Humans},
	pages = {43--52}
}