Increasing Fairness by Delegating Decisions to Autonomous Agents
by Celso M. de Melo, Stacy Marsella, Jonathan Gratch
Abstract:
There has been growing interest in autonomous agents that act on our behalf, or represent us, across various domains such as negotiation, transportation, health, finance, and defense. As these agent representatives become immersed in society, it is critical that we understand whether and, if so, how they disrupt traditional patterns of interaction with others. In this paper, we study how programming agents to represent us shapes our decisions in social settings. Here we show that, when acting through agent representatives, people are considerably less likely to accept unfair offers from others than when interacting with others directly. This result demonstrates that agent representatives have the potential to promote fairer outcomes. Moreover, we show that this effect can also occur when people are asked to “program” human representatives, revealing that the act of programming itself can promote fairer behavior. We argue this happens because programming requires the programmer to deliberate on all possible situations that might arise and thus promotes consideration of social norms – such as fairness – when making decisions. These results have important theoretical, practical, and ethical implications for the design of agent representatives and for the nature of people's decision making when acting through agents on their behalf.
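The mechanism the abstract describes can be made concrete with a small sketch. The example below is not from the paper; it is a hypothetical illustration, assuming an ultimatum-game-style setting in which a responder programs a representative in advance with an acceptance rule covering every possible split, rather than reacting to each offer in the moment. The names (`programmed_policy`, `accept_offer`) and the $10 pie are illustrative assumptions.

```python
# Hypothetical illustration (not from the paper): programming a representative
# forces the responder to state, up front, a decision for every possible offer.

PIE = 10  # total amount to be split (assumed for illustration)

def programmed_policy(min_acceptable: int) -> dict[int, bool]:
    """Build an acceptance rule covering all possible offers in advance.

    Deliberating over the full range of offers is what, per the authors'
    argument, makes norms like fairness salient: the programmer must decide
    where 'unfair' begins before any offer arrives.
    """
    return {offer: offer >= min_acceptable for offer in range(PIE + 1)}

def accept_offer(offer: int, policy: dict[int, bool]) -> bool:
    """The representative applies the pre-programmed rule to a concrete offer."""
    return policy[offer]

if __name__ == "__main__":
    # A responder who programs a representative to reject splits below $3.
    policy = programmed_policy(min_acceptable=3)
    for offer in (1, 3, 5):
        verdict = "accept" if accept_offer(offer, policy) else "reject"
        print(f"Offer of ${offer} out of ${PIE}: {verdict}")
```

The contrast with direct interaction is that the in-the-moment responder never enumerates the offer space; the pre-programmed policy makes the fairness threshold an explicit design decision.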
Reference:
Increasing Fairness by Delegating Decisions to Autonomous Agents (Celso M. de Melo, Stacy Marsella, Jonathan Gratch), In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, International Foundation for Autonomous Agents and Multiagent Systems, 2017.
BibTeX Entry:
@inproceedings{de_melo_increasing_2017,
	address = {São Paulo, Brazil},
	title = {Increasing {Fairness} by {Delegating} {Decisions} to {Autonomous} {Agents}},
	url = {http://dl.acm.org/citation.cfm?id=3091188},
	abstract = {There has been growing interest in autonomous agents that act on our behalf, or represent us, across various domains such as negotiation, transportation, health, finance, and defense. As these agent representatives become immersed in society, it is critical that we understand whether and, if so, how they disrupt traditional patterns of interaction with others. In this paper, we study how programming agents to represent us shapes our decisions in social settings. Here we show that, when acting through agent representatives, people are considerably less likely to accept unfair offers from others than when interacting with others directly. This result demonstrates that agent representatives have the potential to promote fairer outcomes. Moreover, we show that this effect can also occur when people are asked to “program” human representatives, revealing that the act of programming itself can promote fairer behavior. We argue this happens because programming requires the programmer to deliberate on all possible situations that might arise and thus promotes consideration of social norms – such as fairness – when making decisions. These results have important theoretical, practical, and ethical implications for the design of agent representatives and for the nature of people's decision making when acting through agents on their behalf.},
	booktitle = {Proceedings of the 16th {Conference} on {Autonomous} {Agents} and {MultiAgent} {Systems}},
	publisher = {International Foundation for Autonomous Agents and Multiagent Systems},
	author = {de Melo, Celso M. and Marsella, Stacy and Gratch, Jonathan},
	month = may,
	year = {2017},
	keywords = {Virtual Humans, UARC},
	pages = {419--425}
}