Expression-Guided EEG Representation Learning for Emotion Recognition
by Rayatdoost, Soheil, Rudrauf, David and Soleymani, Mohammad
Abstract:
Learning a joint and coordinated representation between different modalities can improve multimodal emotion recognition. In this paper, we propose a deep representation learning approach for emotion recognition from electroencephalogram (EEG) signals guided by facial electromyogram (EMG) and electrooculogram (EOG) signals. We recorded EEG, EMG and EOG signals from 60 participants who watched 40 short videos and self-reported their emotions. A cross-modal encoder that jointly learns the features extracted from facial and ocular expressions and EEG responses was designed and evaluated on our recorded data and MAHNOB-HCI, a publicly available database. We demonstrate that the proposed representation is able to improve emotion recognition performance. We also show that the learned representation can be transferred to a different database without EMG and EOG and achieve superior performance. Methods that fuse behavioral and neural responses can be deployed in wearable emotion recognition solutions, which are practical in situations where computer vision-based expression recognition is not feasible.
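For readers unfamiliar with this kind of setup, the following is a minimal, hypothetical sketch (not the authors' implementation) of a cross-modal encoder in PyTorch: an EEG branch and an expression (EMG + EOG) branch are projected into a shared embedding space, the expression branch guides training through an alignment term, and at inference or transfer time only the EEG branch is used. All layer sizes, feature dimensions, and the cosine-similarity alignment loss are illustrative assumptions; the paper's exact architecture and objective may differ.

```python
# Hypothetical sketch of expression-guided EEG representation learning.
# Assumptions (not from the paper): fixed-length feature vectors per trial,
# dimensions, layer sizes, and a cosine-similarity alignment loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalEncoder(nn.Module):
    def __init__(self, eeg_dim=160, exp_dim=64, shared_dim=128, n_classes=3):
        super().__init__()
        # EEG branch: maps EEG features into the shared space.
        self.eeg_encoder = nn.Sequential(
            nn.Linear(eeg_dim, 256), nn.ReLU(),
            nn.Linear(256, shared_dim),
        )
        # Expression branch (EMG + EOG features): used only to guide training.
        self.exp_encoder = nn.Sequential(
            nn.Linear(exp_dim, 128), nn.ReLU(),
            nn.Linear(128, shared_dim),
        )
        # Emotion classifier operating on the shared EEG representation.
        self.classifier = nn.Linear(shared_dim, n_classes)

    def forward(self, eeg, exp=None):
        z_eeg = self.eeg_encoder(eeg)
        logits = self.classifier(z_eeg)
        if exp is None:                      # inference / transfer: EEG only
            return logits, None
        z_exp = self.exp_encoder(exp)
        return logits, (z_eeg, z_exp)


def training_loss(logits, labels, z_pair, align_weight=0.5):
    """Classification loss plus an alignment term that pulls the EEG embedding
    toward the expression embedding (one simple choice of coordination loss)."""
    loss = F.cross_entropy(logits, labels)
    if z_pair is not None:
        z_eeg, z_exp = z_pair
        loss = loss + align_weight * (1.0 - F.cosine_similarity(z_eeg, z_exp).mean())
    return loss


# Usage example with random data standing in for real EEG/EMG/EOG features.
model = CrossModalEncoder()
eeg, exp, labels = torch.randn(8, 160), torch.randn(8, 64), torch.randint(0, 3, (8,))
logits, z_pair = model(eeg, exp)
loss = training_loss(logits, labels, z_pair)
loss.backward()
```

This illustrates the transfer scenario described in the abstract: once trained, `model(eeg)` can be called without EMG/EOG input on a database that lacks those modalities.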
Reference:
Expression-Guided EEG Representation Learning for Emotion Recognition (Rayatdoost, Soheil, Rudrauf, David and Soleymani, Mohammad), In Proceedings of ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, pp. 3222-3226, 2020.
Bibtex Entry:
@inproceedings{rayatdoost_expression-guided_2020,
	address = {Barcelona, Spain},
	title = {Expression-{Guided} {EEG} {Representation} {Learning} for {Emotion} {Recognition}},
	isbn = {978-1-5090-6631-5},
	url = {https://ieeexplore.ieee.org/document/9053004/},
	doi = {10.1109/ICASSP40776.2020.9053004},
	abstract = {Learning a joint and coordinated representation between different modalities can improve multimodal emotion recognition. In this paper, we propose a deep representation learning approach for emotion recognition from electroencephalogram (EEG) signals guided by facial electromyogram (EMG) and electrooculogram (EOG) signals. We recorded EEG, EMG and EOG signals from 60 participants who watched 40 short videos and self-reported their emotions. A cross-modal encoder that jointly learns the features extracted from facial and ocular expressions and EEG responses was designed and evaluated on our recorded data and MAHNOB-HCI, a publicly available database. We demonstrate that the proposed representation is able to improve emotion recognition performance. We also show that the learned representation can be transferred to a different database without EMG and EOG and achieve superior performance. Methods that fuse behavioral and neural responses can be deployed in wearable emotion recognition solutions, which are practical in situations where computer vision-based expression recognition is not feasible.},
	booktitle = {Proceedings of the {ICASSP} 2020 - 2020 {IEEE} {International} {Conference} on {Acoustics}, {Speech} and {Signal} {Processing} ({ICASSP})},
	publisher = {IEEE},
	author = {Rayatdoost, Soheil and Rudrauf, David and Soleymani, Mohammad},
	month = may,
	year = {2020},
	keywords = {UARC, Virtual Humans},
	pages = {3222--3226}
}