Computational Study of Human Communication Dynamics (bibtex)
by Morency, Louis-Philippe
Abstract:
Face-to-face communication is a highly dynamic process where participants mutually exchange and interpret linguistic and gestural signals. Even when only one person speaks at a time, other participants exchange information continuously amongst themselves and with the speaker through gesture, gaze, posture and facial expressions. To correctly interpret the high-level communicative signals, an observer needs to jointly integrate all spoken words, subtle prosodic changes and simultaneous gestures from all participants. In this paper, we present our ongoing research effort at the USC MultiComp Lab to create models of human communication dynamics that explicitly take into consideration the multimodal and interpersonal aspects of human face-to-face interactions. The computational framework presented in this paper has wide applicability, including the recognition of human social behaviors, the synthesis of natural animations for robots and virtual humans, improved multimedia content analysis, and the diagnosis of social and behavioral disorders (e.g., autism spectrum disorder).
Reference:
Computational Study of Human Communication Dynamics (Morency, Louis-Philippe), In The Third International Workshop on Social Signal Processing, 2011.
Bibtex Entry:
@inproceedings{morency_computational_2011,
	address = {Scottsdale, AZ},
	title = {Computational {Study} of {Human} {Communication} {Dynamics}},
	url = {http://ict.usc.edu/pubs/Computational%20Study%20of%20Human%20Communication%20Dynamics.pdf},
	abstract = {Face-to-face communication is a highly dynamic process where participants mutually exchange and interpret linguistic and gestural signals. Even when only one person speaks at a time, other participants exchange information continuously amongst themselves and with the speaker through gesture, gaze, posture and facial expressions. To correctly interpret the high-level communicative signals, an observer needs to jointly integrate all spoken words, subtle prosodic changes and simultaneous gestures from all participants. In this paper, we present our ongoing research effort at the USC MultiComp Lab to create models of human communication dynamics that explicitly take into consideration the multimodal and interpersonal aspects of human face-to-face interactions. The computational framework presented in this paper has wide applicability, including the recognition of human social behaviors, the synthesis of natural animations for robots and virtual humans, improved multimedia content analysis, and the diagnosis of social and behavioral disorders (e.g., autism spectrum disorder).},
	booktitle = {The {Third} {International} {Workshop} on {Social} {Signal} {Processing}},
	author = {Morency, Louis-Philippe},
	month = oct,
	year = {2011},
	keywords = {Virtual Humans}
}