Multimodal Prediction of Psychological Disorders: Learning Verbal and Nonverbal Commonalities in Adjacency Pairs (bibtex)
by Yu, Zhou, Scherer, Stefan, DeVault, David, Gratch, Jonathan, Stratou, Giota, Morency, Louis-Philippe and Cassell, Justine
Abstract:
Semi-structured interviews are widely used in medical settings to gather information from individuals about psychological disorders, such as depression or anxiety. These interviews typically consist of a series of question and response pairs, which we refer to as adjacency pairs. We propose a computational model, the Multi-modal HCRF, that considers the commonalities among adjacency pairs and information from multiple modalities to infer the psychological states of the interviewees. We collect data and perform experiments on a human to virtual human interaction data set. Our multimodal approach gives a significant advantage over conventional holistic approaches, which ignore the adjacency pair context, in predicting depression from semi-structured interviews.
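Illustrative sketch: the abstract names a Multi-modal HCRF over adjacency pairs. The Python below is a hypothetical sketch of that data structuring only, not the authors' implementation; since no standard library ships an HCRF, a pooled logistic regression stands in for the latent-state sequence model, and all feature names and dimensions are invented for illustration.

	# Hypothetical sketch, not the authors' code: structure an interview as a
	# sequence of question-response (adjacency) pairs with verbal + nonverbal
	# features, then classify. The paper's Multi-modal HCRF learns latent states
	# over the pair sequence; a pooled logistic regression stands in here.
	from dataclasses import dataclass
	from typing import List
	import numpy as np
	from sklearn.linear_model import LogisticRegression

	@dataclass
	class AdjacencyPair:
	    question_id: str      # which interviewer question this response answers
	    verbal: np.ndarray    # e.g., lexical/dialogue features of the response
	    nonverbal: np.ndarray # e.g., prosodic and visual features

	def pair_features(pair: AdjacencyPair) -> np.ndarray:
	    # Fuse modalities for one question-response pair by concatenation.
	    return np.concatenate([pair.verbal, pair.nonverbal])

	def interview_features(pairs: List[AdjacencyPair]) -> np.ndarray:
	    # Mean-pool per-pair features over the interview; an HCRF would instead
	    # keep the sequence and assign a latent state to each pair.
	    return np.mean([pair_features(p) for p in pairs], axis=0)

	# Toy usage with random features and placeholder binary labels.
	rng = np.random.default_rng(0)
	interviews = [
	    [AdjacencyPair(f"q{j}", rng.normal(size=5), rng.normal(size=3))
	     for j in range(4)]
	    for _ in range(20)
	]
	X = np.stack([interview_features(iv) for iv in interviews])
	y = np.array([i % 2 for i in range(20)])  # placeholder depression labels
	clf = LogisticRegression().fit(X, y)
	print(clf.predict(X[:3]))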
Reference:
Multimodal Prediction of Psychological Disorders: Learning Verbal and Nonverbal Commonalities in Adjacency Pairs (Yu, Zhou, Scherer, Stefan, DeVault, David, Gratch, Jonathan, Stratou, Giota, Morency, Louis-Philippe and Cassell, Justine), In Semdial 2013 DialDam: Proceedings of the 17th Workshop on the Semantics and Pragmatics of Dialogue, 2013.
Bibtex Entry:
@inproceedings{yu_multimodal_2013,
	address = {Amsterdam, The Netherlands},
	title = {Multimodal {Prediction} of {Psychological} {Disorders}: {Learning} {Verbal} and {Nonverbal} {Commonalities} in {Adjacency} {Pairs}},
	shorttitle = {Multimodal {Prediction} of {Psychological} {Disorders}},
	url = {http://www.cs.cmu.edu/afs/cs/user/zhouyu/www/semdial_2013_zhou.pdf},
	abstract = {Semi-structured interviews are widely used in medical settings to gather information from individuals about psychological disorders, such as depression or anxiety. These interviews typically consist of a series of question and response pairs, which we refer to as adjacency pairs. We propose a computational model, the Multi-modal HCRF, that considers the commonalities among adjacency pairs and information from multiple modalities to infer the psychological states of the interviewees. We collect data and perform experiments on a human to virtual human interaction data set. Our multimodal approach gives a significant advantage over conventional holistic approaches, which ignore the adjacency pair context, in predicting depression from semi-structured interviews.},
	booktitle = {Semdial 2013 {DialDam}: {Proceedings} of the 17th {Workshop} on the {Semantics} and {Pragmatics} of {Dialogue}},
	author = {Yu, Zhou and Scherer, Stefan and DeVault, David and Gratch, Jonathan and Stratou, Giota and Morency, Louis-Philippe and Cassell, Justine},
	month = dec,
	year = {2013},
	keywords = {Virtual Humans, UARC},
	pages = {160--169}
}