Shaping Cooperation between Humans and Agents with Emotion Expressions and Framing
by Celso M. de Melo, Peter Khooshabeh, Ori Amir, Jonathan Gratch
Abstract:
Emotion expressions can help solve social dilemmas in which individual interest is pitted against the collective interest. Building on research showing that emotions communicate intentions to others, we confirm that people can infer whether emotionally expressive computer agents intend to cooperate or compete. We further show important distinctions between computer agents that are perceived to be driven by humans (i.e., avatars) and those perceived to be driven by algorithms (i.e., agents). Our results reveal that, when the emotion expression reflects an intention to cooperate, participants cooperate more with avatars than with agents; however, when the emotion reflects an intention to compete, participants cooperate just as little with avatars as with agents. Finally, we present the first evidence that the way the dilemma is described, or framed, can influence people's decision-making. We discuss implications for the design of autonomous agents that foster cooperation with humans, beyond what game theory predicts in social dilemmas.
Reference:
Shaping Cooperation between Humans and Agents with Emotion Expressions and Framing (Celso M. de Melo, Peter Khooshabeh, Ori Amir, Jonathan Gratch), In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, International Foundation for Autonomous Agents and Multiagent Systems, 2018.
Bibtex Entry:
@inproceedings{de_melo_shaping_2018,
	address = {Stockholm, Sweden},
	title = {Shaping {Cooperation} between {Humans} and {Agents} with {Emotion} {Expressions} and {Framing}},
	url = {https://dl.acm.org/citation.cfm?id=3238129},
	abstract = {Emotion expressions can help solve social dilemmas where individual interest is pitted against the collective interest. Building on research that shows that emotions communicate intentions to others, we reinforce that people can infer whether emotionally expressive computer agents intend to cooperate or compete. We further show important distinctions between computer agents that are perceived to be driven by humans (i.e., avatars) vs. by algorithms (i.e., agents). Our results reveal that, when the emotion expression reflects an intention to cooperate, participants will cooperate more with avatars than with agents; however, when the emotion reflects an intention to compete, participants cooperate just as little with avatars as with agents. Finally, we present first evidence that the way the dilemma is described - or framed - can influence people's decision-making. We discuss implications for the design of autonomous agents that foster cooperation with humans, beyond what game theory predicts in social dilemmas.},
	booktitle = {Proceedings of the 17th {International} {Conference} on {Autonomous} {Agents} and {MultiAgent} {Systems}},
	publisher = {International Foundation for Autonomous Agents and Multiagent Systems},
	author = {de Melo, Celso M. and Khooshabeh, Peter and Amir, Ori and Gratch, Jonathan},
	month = jul,
	year = {2018},
	keywords = {ARL, DoD, UARC, Virtual Humans},
	pages = {2224--2226}
}