The Effects of Experience on Deception in Human-Agent Negotiation (bibtex)
by Mell, Johnathan, Lucas, Gale M, Mozgai, Sharon and Gratch, Jonathan
Abstract:
Negotiation is the complex social process by which multiple parties come to mutual agreement over a series of issues. As such, it has proven to be a key challenge problem for designing adequately social AIs that can effectively navigate this space. Artificial AI agents that are capable of negotiating must be capable of realizing policies and strategies that govern offer acceptances, offer generation, preference elicitation, and more. But the next generation of agents must also adapt to reflect their users’ experiences.

The best human negotiators tend to have honed their craft through hours of practice and experience. But not all negotiators agree on which strategic tactics to use, and endorsement of deceptive tactics in particular is a controversial topic for many negotiators. We examine the ways in which deceptive tactics are used and endorsed in non-repeated human negotiation and show that prior experience plays a key role in governing what tactics are seen as acceptable or useful in negotiation. Previous work has indicated that people that negotiate through artificial agent representatives may be more inclined to fairness than those people that negotiate directly. We present a series of three user studies that challenge this initial assumption and expand on this picture by examining the role of past experience.

This work constructs a new scale for measuring endorsement of manipulative negotiation tactics and introduces its use to artificial intelligence research. It continues by presenting the results of a series of three studies that examine how negotiating experience can change what negotiation tactics and strategies humans endorse. Study #1 looks at human endorsement of deceptive techniques based on prior negotiating experience as well as representative effects. Study #2 further characterizes the negativity of prior experience in relation to endorsement of deceptive techniques. Finally, in Study #3, we show that the lessons learned from the empirical observations in Studies #1 and #2 can in fact be induced—by designing agents that provide a specific type of negative experience, human endorsement of deception can be predictably manipulated.
Reference:
The Effects of Experience on Deception in Human-Agent Negotiation (Mell, Johnathan, Lucas, Gale M, Mozgai, Sharon and Gratch, Jonathan), In Journal of Artificial Intelligence Research (JAIR), volume 68, 2020.
Bibtex Entry:
@article{mell_effects_2020,
	title = {The {Effects} of {Experience} on {Deception} in {Human}-{Agent} {Negotiation}},
	volume = {68},
	url = {https://www.jair.org/index.php/jair/article/view/11924},
	abstract = {Negotiation is the complex social process by which multiple parties come to mutual agreement over a series of issues. As such, it has proven to be a key challenge problem for designing adequately social AIs that can effectively navigate this space. Artificial AI agents that are capable of negotiating must be capable of realizing policies and strategies that govern offer acceptances, offer generation, preference elicitation, and more. But the next generation of agents must also adapt to reflect their users’ experiences.

     The best human negotiators tend to have honed their craft through hours of practice and experience. But, not all negotiators agree on which strategic tactics to use, and endorsement of deceptive tactics in particular is a controversial topic for many negotiators. We examine the ways in which deceptive tactics are used and endorsed in non-repeated human negotiation and show that prior experience plays a key role in governing what tactics are seen as acceptable or useful in negotiation. Previous work has indicated that people that negotiate through artificial agent representatives may be more inclined to fairness than those people that negotiate directly. We present a series of three user studies that challenge this initial assumption and expand on this picture by examining the role of past experience.

     This work constructs a new scale for measuring endorsement of manipulative negotiation tactics and introduces its use to artificial intelligence research. It continues by presenting the results of a series of three studies that examine how negotiating experience can change what negotiation tactics and strategies humans endorse. Study \#1 looks at human endorsement of deceptive techniques based on prior negotiating experience as well as representative effects. Study \#2 further characterizes the negativity of prior experience in relation to endorsement of deceptive techniques. Finally, in Study \#3, we show that the lessons learned from the empirical observations in Studies \#1 and \#2 can in fact be induced—by designing agents that provide a specific type of negative experience, human endorsement of deception can be predictably manipulated.},
	journal = {Journal of Artificial Intelligence Research (JAIR)},
	author = {Mell, Johnathan and Lucas, Gale M and Mozgai, Sharon and Gratch, Jonathan},
	month = aug,
	year = {2020},
	keywords = {MedVR, Virtual Humans, ARO-Coop},
	pages = {28}
}