Learning a Sparse Codebook of Facial and Body Microexpressions for Emotion Recognition
by Song, Yale, Morency, Louis-Philippe and Davis, Randall
Abstract:
Obtaining a compact and discriminative representation of facial and body expressions is a difficult problem in emotion recognition. Part of the difficulty is capturing microexpressions, i.e., short, involuntary expressions that last for only a fraction of a second: at a micro-temporal scale, there are many other subtle face and body movements that do not convey semantically meaningful information. We present a novel approach to this problem by exploiting the sparsity of the frequent micro-temporal motion patterns. Local space-time features are extracted over the face and body region for a very short time period, e.g., a few milliseconds. A codebook of microexpressions is learned from the data and used to encode the features in a sparse manner. This allows us to obtain a representation that captures the most salient motion patterns of the face and body at a micro-temporal scale. Experiments performed on the AVEC 2012 dataset show our approach achieving the best published performance on the expectation dimension based solely on visual features. We also report experimental results on audio-visual emotion recognition, comparing early and late data fusion techniques.
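The codebook-plus-sparse-encoding pipeline the abstract describes can be sketched in miniature. This is not the paper's method: the descriptors are random stand-ins for local space-time features, the codebook is learned with plain k-means rather than the paper's dictionary-learning solver, and the 1-nonzero-per-descriptor encoding and sum pooling are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for local space-time descriptors extracted over the
# face and body region (the paper's actual features are not reproduced).
X = rng.normal(size=(500, 32))   # 500 descriptors, 32-D each
K = 16                           # codebook size (assumed)

# Learn a codebook with a few k-means iterations -- a simple proxy
# for the dictionary learning used in the paper.
D = X[rng.choice(len(X), K, replace=False)].copy()
for _ in range(10):
    assign = np.argmin(((X[:, None] - D[None]) ** 2).sum(-1), axis=1)
    for k in range(K):
        members = X[assign == k]
        if len(members):
            D[k] = members.mean(0)

# Sparse encoding: each descriptor activates only its nearest atom,
# weighted by its projection coefficient, so each code row is
# mostly zeros.
codes = np.zeros((len(X), K))
atom_norms = (D ** 2).sum(1)
for i, x in enumerate(X):
    k = np.argmin(((x - D) ** 2).sum(-1))
    codes[i, k] = x @ D[k] / atom_norms[k]

# Pooling the sparse codes over a clip yields a fixed-length
# representation, regardless of how many descriptors were extracted.
clip_repr = np.abs(codes).sum(0)
```

Pooling over the sparse codes is what makes the representation compact: only atoms matching frequent motion patterns accumulate weight, while rare, uninformative movements contribute little.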
Reference:
Learning a Sparse Codebook of Facial and Body Microexpressions for Emotion Recognition (Song, Yale, Morency, Louis-Philippe and Davis, Randall), In Proceedings of the 15th ACM International Conference on Multimodal Interaction (ICMI), ACM Press, 2013.
Bibtex Entry:
@inproceedings{song_learning_2013,
	title = {Learning a {Sparse} {Codebook} of {Facial} and {Body} {Microexpressions} for {Emotion} {Recognition}},
	isbn = {978-1-4503-2129-7},
	url = {http://ict.usc.edu/pubs/Learning%20a%20sparse%20codebook%20of%20facial%20and%20body%20microexpressions%20for%20emotion%20recognition.pdf},
	doi = {10.1145/2522848.2522851},
	abstract = {Obtaining a compact and discriminative representation of facial and body expressions is a difficult problem in emotion recognition. Part of the difficulty is capturing microexpressions, i.e., short, involuntary expressions that last for only a fraction of a second: at a micro-temporal scale, there are so many other subtle face and body movements that do not convey semantically meaningful information. We present a novel approach to this problem by exploiting the sparsity of the frequent micro-temporal motion patterns. Local space-time features are extracted over the face and body region for a very short time period, e.g., few milliseconds. A codebook of microexpressions is learned from the data and used to encode the features in a sparse manner. This allows us to obtain a representation that captures the most salient motion patterns of the face and body at a micro-temporal scale. Experiments performed on the AVEC 2012 dataset show our approach achieving the best published performance on the expectation dimension based solely on visual features. We also report experimental results on audio-visual emotion recognition, comparing early and late data fusion techniques.},
	language = {en},
	booktitle = {Proceedings of the 15th {ACM} {International} {Conference} on {Multimodal} {Interaction}},
	publisher = {ACM Press},
	author = {Song, Yale and Morency, Louis-Philippe and Davis, Randall},
	month = dec,
	year = {2013},
	keywords = {Virtual Humans, UARC},
	pages = {237--244}
}