People Show Envy, Not Guilt, when Making Decisions with Machines (bibtex)
by de Melo, Celso M. and Gratch, Jonathan
Abstract:
Research shows that people consistently reach more efficient solutions than those predicted by standard economic models, which assume people are selfish. Artificial intelligence, in turn, seeks to create machines that can achieve these levels of efficiency in human-machine interaction. However, as reinforced in this paper, people’s decisions are systematically less efficient – i.e., less fair and favorable – with machines than with humans. To understand the cause of this bias, we resort to a well-known experimental economics model: Fehr and Schmidt’s inequity aversion model. This model accounts for people’s aversion to disadvantageous outcome inequality (envy) and aversion to advantageous outcome inequality (guilt). We present an experiment where participants engaged in the ultimatum and dictator games with human or machine counterparts. By fitting this data to Fehr and Schmidt’s model, we show that people acted as if they were just as envious of humans as of machines; but, in contrast, people showed less guilt when making unfavorable decisions to machines. This result, thus, provides critical insight into this bias people show, in economic settings, in favor of humans. We discuss implications for the design of machines that engage in social decision making with humans.
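The Fehr and Schmidt inequity aversion model the abstract refers to gives each player a utility that penalizes both disadvantageous inequality (envy, weight α) and advantageous inequality (guilt, weight β). A minimal sketch of the standard two-player form, U_i = x_i − α·max(x_j − x_i, 0) − β·max(x_i − x_j, 0); the function name and the example parameter values below are illustrative, not taken from the paper:

```python
def fehr_schmidt_utility(own, other, alpha, beta):
    """Two-player Fehr-Schmidt inequity-aversion utility.

    own   -- this player's material payoff (x_i)
    other -- the counterpart's material payoff (x_j)
    alpha -- aversion to disadvantageous inequality ("envy")
    beta  -- aversion to advantageous inequality ("guilt")
    """
    envy_term = alpha * max(other - own, 0.0)   # only active when behind
    guilt_term = beta * max(own - other, 0.0)   # only active when ahead
    return own - envy_term - guilt_term
```

For example, in an ultimatum game over 10 units where the proposer keeps 8 and offers 2, a responder with α = 0.5 gets utility 2 − 0.5·6 = −1, so rejecting (payoff 0 for both) yields higher utility than accepting; the paper's finding corresponds to fitted β being lower against machine counterparts while fitted α is unchanged.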
Reference:
People Show Envy, Not Guilt, when Making Decisions with Machines (de Melo, Celso M. and Gratch, Jonathan), In Proceedings of ACII 2015, IEEE, 2015.
Bibtex Entry:
@inproceedings{de_melo_people_2015,
	address = {Xi'an, China},
	title = {People {Show} {Envy}, {Not} {Guilt}, when {Making} {Decisions} with {Machines}},
	url = {http://ict.usc.edu/pubs/People%20Show%20Envy,%20Not%20Guilt,%20when%20Making%20Decisions%20with%20Machines.pdf},
	abstract = {Research shows that people consistently reach more efficient solutions than those predicted by standard economic models, which assume people are selfish. Artificial intelligence, in turn, seeks to create machines that can achieve these levels of efficiency in human-machine interaction. However, as reinforced in this paper, people’s decisions are systematically less efficient – i.e., less fair and favorable – with machines than with humans. To understand the cause of this bias, we resort to a well-known experimental economics model: Fehr and Schmidt’s inequity aversion model. This model accounts for people’s aversion to disadvantageous outcome inequality (envy) and aversion to advantageous outcome inequality (guilt). We present an experiment where participants engaged in the ultimatum and dictator games with human or machine counterparts. By fitting this data to Fehr and Schmidt’s model, we show that people acted as if they were just as envious of humans as of machines; but, in contrast, people showed less guilt when making unfavorable decisions to machines. This result, thus, provides critical insight into this bias people show, in economic settings, in favor of humans. We discuss implications for the design of machines that engage in social decision making with humans.},
	booktitle = {Proceedings of {ACII} 2015},
	publisher = {IEEE},
	author = {de Melo, Celso M. and Gratch, Jonathan},
	month = sep,
	year = {2015},
	keywords = {Virtual Humans}
}