Evaluating Evaluators: A Case Study in Understanding the Benefits and Pitfalls of Multi-Evaluator Modeling
by Mower, Emily, Mataric, Maja J. and Narayanan, Shrikanth
Abstract:
Emotion perception is a complex process, often measured using stimuli presentation experiments that query evaluators for their perceptual ratings of emotional cues. These evaluations contain large amounts of variability both related and unrelated to the evaluated utterances. One approach to handling this variability is to model emotion perception at the individual level. However, the perceptions of specific users may not adequately capture the emotional acoustic properties of an utterance. This problem can be mitigated by the common technique of averaging evaluations from multiple users. We demonstrate that this averaging procedure improves classification performance when compared to classification results from models created using individual-specific evaluations. We also demonstrate that the performance increases are related to the consistency with which evaluators label data. These results suggest that the acoustic properties of emotional speech are better captured using models formed from averaged evaluations rather than from individual-specific evaluations.
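The labeling strategy contrasted in the abstract, pooling evaluators by averaging their ratings per utterance versus keeping one label set per evaluator, can be illustrated with a minimal sketch. The array names, the example rating values, and the consistency measure (correlation of each evaluator with the leave-one-out mean of the others) are illustrative assumptions, not the authors' implementation.

import numpy as np

# Hypothetical ratings: rows = utterances, columns = evaluators
# (e.g., ratings on a 1-5 scale); the values are made up for illustration.
ratings = np.array([
    [4, 5, 4],
    [2, 1, 2],
    [3, 3, 4],
    [5, 4, 5],
], dtype=float)

# Averaged ground truth: one pooled label per utterance.
averaged_labels = ratings.mean(axis=1)

# Individual-specific ground truth: column e would train an evaluator-e model.
individual_labels = ratings

def evaluator_consistency(r):
    """Correlation of each evaluator's ratings with the mean of the others
    (a simple leave-one-out agreement score)."""
    scores = []
    for e in range(r.shape[1]):
        others_mean = np.delete(r, e, axis=1).mean(axis=1)
        scores.append(np.corrcoef(r[:, e], others_mean)[0, 1])
    return np.array(scores)

print("averaged labels:", averaged_labels)
print("per-evaluator consistency:", evaluator_consistency(ratings))

In this framing, a classifier trained on acoustic features with averaged_labels corresponds to the multi-evaluator model, while a classifier trained against a single column of individual_labels corresponds to an individual-specific model; the consistency scores give one possible way to relate performance differences to how reliably each evaluator labels the data.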
Reference:
Evaluating Evaluators: A Case Study in Understanding the Benefits and Pitfalls of Multi-Evaluator Modeling (Mower, Emily, Mataric, Maja J. and Narayanan, Shrikanth), In Proceedings of Interspeech 2009, 2009.
Bibtex Entry:
@inproceedings{mower_evaluating_2009,
	address = {Brighton, UK},
	title = {Evaluating {Evaluators}: {A} {Case} {Study} in {Understanding} the {Benefits} and {Pitfalls} of {Multi}-{Evaluator} {Modeling}},
	url = {http://ict.usc.edu/pubs/Evaluating%20Evaluators-%20A%20Case%20Study%20in%20Understanding%20the%20Benefits%20and%20Pitfalls%20of%20Multi-Evaluator%20Modeling.pdf},
	abstract = {Emotion perception is a complex process, often measured using stimuli presentation experiments that query evaluators for their perceptual ratings of emotional cues. These evaluations contain large amounts of variability both related and unrelated to the evaluated utterances. One approach to handling this variability is to model emotion perception at the individual level. However, the perceptions of specific users may not adequately capture the emotional acoustic properties of an utterance. This problem can be mitigated by the common technique of averaging evaluations from multiple users. We demonstrate that this averaging procedure improves classification performance when compared to classification results from models created using individual-specific evaluations. We also demonstrate that the performance increases are related to the consistency with which evaluators label data. These results suggest that the acoustic properties of emotional speech are better captured using models formed from averaged evaluations rather than from individual-specific evaluations.},
	booktitle = {Proceedings of {Interspeech} 2009},
	author = {Mower, Emily and Mataric, Maja J. and Narayanan, Shrikanth},
	month = sep,
	year = {2009}
}