A Multimodal Corpus for the Assessment of Public Speaking Ability and Anxiety
by Mathieu Chollet, Torsten Wörtwein, Louis-Philippe Morency, Stefan Scherer
Abstract:
The ability to efficiently speak in public is an essential asset for many professions and is used in everyday life. As such, tools enabling the improvement of public speaking performance and the assessment and mitigation of anxiety related to public speaking would be very useful. Multimodal interaction technologies, such as computer vision and embodied conversational agents, have recently been investigated for the training and assessment of interpersonal skills. One central requirement for these technologies is the availability of multimodal corpora for training machine learning models. This paper addresses the need of these technologies by presenting and sharing a multimodal corpus of public speaking presentations. These presentations were collected in an experimental study investigating the potential of interactive virtual audiences for public speaking training. This corpus includes audio-visual data and automatically extracted features, measures of public speaking anxiety and personality, annotations of participants’ behaviors and expert ratings of behavioral aspects and overall performance of the presenters. We hope this corpus will help other research teams in developing tools for supporting public speaking training.
Reference:
A Multimodal Corpus for the Assessment of Public Speaking Ability and Anxiety (Mathieu Chollet, Torsten Wörtwein, Louis-Philippe Morency, Stefan Scherer), In Proceedings of the LREC 2016, Tenth International Conference on Language Resources and Evaluation, European Language Resources Association, 2016.
Bibtex Entry:
@inproceedings{chollet_multimodal_2016,
	address = {Portorož, Slovenia},
	title = {A {Multimodal} {Corpus} for the {Assessment} of {Public} {Speaking} {Ability} and {Anxiety}},
	isbn = {978-2-9517408-9-1},
	url = {http://www.lrec-conf.org/proceedings/lrec2016/pdf/599_Paper.pdf},
	abstract = {The ability to efficiently speak in public is an essential asset for many professions and is used in everyday life. As such, tools enabling the improvement of public speaking performance and the assessment and mitigation of anxiety related to public speaking would be very useful. Multimodal interaction technologies, such as computer vision and embodied conversational agents, have recently been investigated for the training and assessment of interpersonal skills. One central requirement for these technologies is the availability of multimodal corpora for training machine learning models. This paper addresses the need of these technologies by presenting and sharing a multimodal corpus of public speaking presentations. These presentations were collected in an experimental study investigating the potential of interactive virtual audiences for public speaking training. This corpus includes audio-visual data and automatically extracted features, measures of public speaking anxiety and personality, annotations of participants’ behaviors and expert ratings of behavioral aspects and overall performance of the presenters. We hope this corpus will help other research teams in developing tools for supporting public speaking training.},
	booktitle = {Proceedings of the {LREC} 2016, {Tenth} {International} {Conference} on {Language} {Resources} and {Evaluation}},
	publisher = {European Language Resources Association},
	author = {Chollet, Mathieu and Wörtwein, Torsten and Morency, Louis-Philippe and Scherer, Stefan},
	month = may,
	year = {2016},
	keywords = {Virtual Humans, UARC},
	pages = {488--495}
}
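
To cite this entry from a LaTeX document, a minimal sketch is shown below; it assumes the BibTeX entry above has been saved to a hypothetical file named references.bib and that the document is compiled with the standard latex/bibtex/latex cycle.

	% cite-example.tex -- a sketch, assuming the entry above lives in references.bib
	\documentclass{article}
	\begin{document}
	Chollet et al.~\cite{chollet_multimodal_2016} present and share a
	multimodal corpus of public speaking presentations.
	\bibliographystyle{plain}  % plain BibTeX style; substitute any preferred style
	\bibliography{references}  % references.bib is a hypothetical filename
	\end{document}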