Learning a Model of Speaker Head Nods using Gesture Corpora (bibtex)
by Lee, Jina, Marsella, Stacy C., Prendinger, Helmut and Neviarouskaya, Alena
Abstract:
During face-to-face conversation, the speaker's head is continually in motion. These movements serve a variety of important communicative functions, and may also be influenced by our emotions. The goal of this work is to build a domain-independent model of a speaker's head movements and to investigate the effect of using affective information during the learning process. Once the model is learned, it can later be used to generate head movements for virtual agents. In this paper, we describe our machine-learning approach to predicting a speaker's head nods using annotated corpora of face-to-face human interaction and emotion labels generated by an affect recognition model. We describe the feature selection process, the training process, and a comparison of the results of the learned models under varying conditions. The results show that using affective information helps predict head nods better than when no affective information is used.
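As a rough illustration of the comparison the abstract describes (training one nod predictor without affective features and one with them, then comparing), the following Python sketch shows the general shape of such an experiment. The synthetic data, the feature layout, and the choice of classifier are illustrative assumptions, not the paper's actual corpus, features, or learner.

# Hypothetical sketch: compare a nod classifier trained without an affect
# feature against one trained with it. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n = 2000
# Columns 0-3 stand in for linguistic/contextual features (e.g. encoded
# dialogue acts, part-of-speech counts); column 4 stands in for an emotion
# label from an affect recognition model. These names are assumptions.
X = rng.normal(size=(n, 5))
# Synthetic ground truth: nods depend partly on the affect column.
y = (X[:, 2] + 0.8 * X[:, 4] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, cols in [("without affect", slice(0, 4)), ("with affect", slice(0, 5))]:
    clf = RandomForestClassifier(random_state=0).fit(X_train[:, cols], y_train)
    score = f1_score(y_test, clf.predict(X_test[:, cols]))
    print(f"{name}: F1 = {score:.3f}")

On data with a genuine affective signal, the second model should score higher, which mirrors the paper's reported finding that affective information improves nod prediction.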
Reference:
Learning a Model of Speaker Head Nods using Gesture Corpora (Lee, Jina, Marsella, Stacy C., Prendinger, Helmut and Neviarouskaya, Alena), In International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2009.
Bibtex Entry:
@inproceedings{lee_learning_2009,
	address = {Budapest, Hungary},
	title = {Learning a {Model} of {Speaker} {Head} {Nods} using {Gesture} {Corpora}},
	url = {http://ict.usc.edu/pubs/learning%20a%20model%20of%20speaker%20head%20nods.pdf},
	abstract = {During face-to-face conversation, the speaker's head is continually in motion. These movements serve a variety of important communicative functions, and may also be influenced by our emotions. The goal of this work is to build a domain-independent model of a speaker's head movements and to investigate the effect of using affective information during the learning process. Once the model is learned, it can later be used to generate head movements for virtual agents. In this paper, we describe our machine-learning approach to predicting a speaker's head nods using annotated corpora of face-to-face human interaction and emotion labels generated by an affect recognition model. We describe the feature selection process, the training process, and a comparison of the results of the learned models under varying conditions. The results show that using affective information helps predict head nods better than when no affective information is used.},
	booktitle = {International {Conference} on {Autonomous} {Agents} and {Multiagent} {Systems} ({AAMAS})},
	author = {Lee, Jina and Marsella, Stacy C. and Prendinger, Helmut and Neviarouskaya, Alena},
	month = may,
	year = {2009},
	keywords = {Social Simulation}
}