Towards a Repeated Negotiating Agent that Treats People Individually: Cooperation, Social Value Orientation, & Machiavellianism
by Johnathan Mell, Gale Lucas, Sharon Mozgai, Jill Boberg, Ron Artstein, Jonathan Gratch
Abstract:
We present the results of a study in which humans negotiate with computerized agents employing varied tactics over a series of repeated economic ultimatum games. We report that certain agents are highly effective against particular classes of humans: several individual difference measures for the human participant are shown to be critical in determining which agents will be successful. Asking for favors works when playing with pro-social people but backfires with more selfish individuals. Further, making poor offers invites punishment from Machiavellian individuals. These factors may be learned once and applied over repeated negotiations, which means user modeling techniques that can detect these differences accurately will be more successful than those that don’t. Our work additionally shows that a significant benefit of cooperation is also present in repeated games—after sufficient interaction. These results have deep significance for agent designers who wish to design agents that are effective in negotiating with a broad swath of real human opponents. Furthermore, they demonstrate the effectiveness of techniques that can reason about negotiation over time.
Reference:
Towards a Repeated Negotiating Agent that Treats People Individually: Cooperation, Social Value Orientation, & Machiavellianism (Johnathan Mell, Gale Lucas, Sharon Mozgai, Jill Boberg, Ron Artstein, Jonathan Gratch), In Proceedings of the 18th International Conference on Intelligent Virtual Agents, ACM, Sydney, Australia, pp. 125–132, November 2018.
BibTeX Entry:
@inproceedings{mell_towards_2018,
	address = {Sydney, Australia},
	title = {Towards a {Repeated} {Negotiating} {Agent} that {Treats} {People} {Individually}: {Cooperation}, {Social} {Value} {Orientation}, \& {Machiavellianism}},
	isbn = {978-1-4503-6013-5},
	url = {https://dl.acm.org/citation.cfm?id=3267910},
	doi = {10.1145/3267851.3267910},
	abstract = {We present the results of a study in which humans negotiate with computerized agents employing varied tactics over a repeated number of economic ultimatum games. We report that certain agents are highly effective against particular classes of humans: several individual difference measures for the human participant are shown to be critical in determining which agents will be successful. Asking for favors works when playing with pro-social people but backfires with more selfish individuals. Further, making poor offers invites punishment from Machiavellian individuals. These factors may be learned once and applied over repeated negotiations, which means user modeling techniques that can detect these differences accurately will be more successful than those that don’t. Our work additionally shows that a significant benefit of cooperation is also present in repeated games—after sufficient interaction. These results have deep significance to agent designers who wish to design agents that are effective in negotiating with a broad swath of real human opponents. Furthermore, it demonstrates the effectiveness of techniques which can reason about negotiation over time.},
	booktitle = {Proceedings of the 18th {International} {Conference} on {Intelligent} {Virtual} {Agents}},
	publisher = {ACM},
	author = {Mell, Johnathan and Lucas, Gale and Mozgai, Sharon and Boberg, Jill and Artstein, Ron and Gratch, Jonathan},
	month = nov,
	year = {2018},
	keywords = {Virtual Humans},
	pages = {125--132}
}