Varied Magnitude Favor Exchange in Human-Agent Negotiation (bibtex)
by Mell, Johnathan, Lucas, Gale M. and Gratch, Jonathan
Abstract:
Agents that interact with humans in complex, social tasks need the ability to comprehend as well as employ common social strategies. In negotiation, there is ample evidence of such techniques being used efficaciously in human interchanges. In this work, we demonstrate a new design for socially-aware agents that employ one such technique—favor exchange—in order to gain value when playing against humans. In an online study of a robust, simulated social negotiation task, we show that these agents are effective against real human participants. In particular, we show that agents that ask for favors during the course of a repeated set of negotiations are more successful than those that do not. Additionally, previous work has demonstrated that humans can detect when agents betray them by failing to return favors that were previously promised. By contrast, this work indicates that these betrayal techniques may go largely undetected in complex scenarios.
Reference:
Varied Magnitude Favor Exchange in Human-Agent Negotiation (Mell, Johnathan, Lucas, Gale M. and Gratch, Jonathan), In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents, ACM, 2020.
Bibtex Entry:
@inproceedings{mell_varied_2020,
	address = {Virtual Event, Scotland, UK},
	title = {Varied {Magnitude} {Favor} {Exchange} in {Human}-{Agent} {Negotiation}},
	isbn = {978-1-4503-7586-3},
	url = {https://dl.acm.org/doi/10.1145/3383652.3423866},
	doi = {10.1145/3383652.3423866},
	abstract = {Agents that interact with humans in complex, social tasks need the ability to comprehend as well as employ common social strategies. In negotiation, there is ample evidence of such techniques being used efficaciously in human interchanges. In this work, we demonstrate a new design for socially-aware agents that employ one such technique—favor exchange—in order to gain value when playing against humans. In an online study of a robust, simulated social negotiation task, we show that these agents are effective against real human participants. In particular, we show that agents that ask for favors during the course of a repeated set of negotiations are more successful than those that do not. Additionally, previous work has demonstrated that humans can detect when agents betray them by failing to return favors that were previously promised. By contrast, this work indicates that these betrayal techniques may go largely undetected in complex scenarios.},
	booktitle = {Proceedings of the 20th {ACM} {International} {Conference} on {Intelligent} {Virtual} {Agents}},
	publisher = {ACM},
	author = {Mell, Johnathan and Lucas, Gale M. and Gratch, Jonathan},
	month = oct,
	year = {2020},
	keywords = {ARO-Coop, Virtual Humans},
	pages = {1--8}
}