Trust Me: Multimodal Signals of Trustworthiness
by Gale Lucas, Giota Stratou, Shari Lieblich, Jonathan Gratch
Abstract:
This paper builds on prior psychological studies that identify signals of trustworthiness between two human negotiators. Unlike prior work, the current work tracks such signals automatically and fuses them into computational models that predict trustworthiness. To achieve this goal, we apply automatic trackers to recordings of human dyads negotiating in a multi-issue bargaining task. We identify behavioral indicators in different modalities (facial expressions, gestures, gaze, and conversational features) that are predictive of trustworthiness. We predict both objective trustworthiness (i.e., are they honest) and perceived trustworthiness (i.e., do they seem honest to their interaction partner). Our experiments show that people are poor judges of objective trustworthiness (i.e., objective and perceived trustworthiness are predicted by different indicators), and that multimodal approaches better predict objective trustworthiness, whereas people overly rely on facial expressions when judging the honesty of their partner. Moreover, domain knowledge (from the literature and prior analysis of behaviors) facilitates the model development process.
Reference:
Trust Me: Multimodal Signals of Trustworthiness (Gale Lucas, Giota Stratou, Shari Lieblich, Jonathan Gratch), In Proceedings of the 18th ACM International Conference on Multimodal Interaction, ACM Press, 2016.
Bibtex Entry:
@inproceedings{lucas_trust_2016,
	address = {Tokyo, Japan},
	title = {Trust {Me}: {Multimodal} {Signals} of {Trustworthiness}},
	isbn = {978-1-4503-4556-9},
	url = {http://dl.acm.org/citation.cfm?doid=2993148.2993178},
	doi = {10.1145/2993148.2993178},
	abstract = {This paper builds on prior psychological studies that identify signals of trustworthiness between two human negotiators. Unlike prior work, the current work tracks such signals automatically and fuses them into computational models that predict trustworthiness. To achieve this goal, we apply automatic trackers to recordings of human dyads negotiating in a multi-issue bargaining task. We identify behavioral indicators in different modalities (facial expressions, gestures, gaze, and conversational features) that are predictive of trustworthiness. We predict both objective trustworthiness (i.e., are they honest) and perceived trustworthiness (i.e., do they seem honest to their interaction partner). Our experiments show that people are poor judges of objective trustworthiness (i.e., objective and perceived trustworthiness are predicted by different indicators), and that multimodal approaches better predict objective trustworthiness, whereas people overly rely on facial expressions when judging the honesty of their partner. Moreover, domain knowledge (from the literature and prior analysis of behaviors) facilitates the model development process.},
	booktitle = {Proceedings of the 18th {ACM} {International} {Conference} on {Multimodal} {Interaction}},
	publisher = {ACM Press},
	author = {Lucas, Gale and Stratou, Giota and Lieblich, Shari and Gratch, Jonathan},
	month = nov,
	year = {2016},
	keywords = {UARC, Virtual Humans},
	pages = {5--12}
}