The Misrepresentation Game: How to win at negotiation while seeming like a nice guy (bibtex)
by Gratch, Jonathan, Nazari, Zahra and Johnson, Emmanuel
Abstract:
Recently, interest has grown in agents that negotiate with people: to teach negotiation, to negotiate on behalf of people, and as a challenge problem to advance artificial social intelligence. Humans negotiate differently from algorithmic approaches to negotiation: people are not purely self-interested but place considerable weight on norms like fairness; people exchange information about their mental state and use this to judge the fairness of a social exchange; and people lie. Here, we focus on lying. We present an analysis of how people (or agents interacting with people) might optimally lie (maximally benefit themselves) while maintaining the illusion of fairness towards the other party. In doing so, we build on concepts from game theory and the preference-elicitation literature, but apply these to human, not rational, behavior. Our findings demonstrate clear benefits to lying and provide empirical support for a heuristic – the “fixed-pie lie” – that substantially enhances the efficiency of such deceptive algorithms. We conclude with implications and potential defenses against such manipulative techniques.
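For intuition only, the following sketch illustrates the general idea behind a fixed-pie lie; it is not the paper's algorithm. It assumes a hypothetical two-issue negotiation with made-up per-unit values and an assumed fairness norm of equal reported utilities: by reporting preferences that mirror the opponent's, the agent can take everything it truly values while the deal still looks evenly split.

# Toy illustration of the "fixed-pie lie" idea (a sketch, not the paper's algorithm).
# Hypothetical setup: two divisible issues X and Y with 6 units each, an assumed
# fairness norm of equal *reported* utilities, and made-up per-unit values.

UNITS = {"X": 6, "Y": 6}          # units available per issue
AGENT_TRUE = {"X": 2, "Y": 0}     # the deceptive agent only truly values issue X
OPPONENT = {"X": 1, "Y": 1}       # the opponent values both issues equally

def utility(values, allocation):
    # Utility of keeping allocation[i] units of each issue under per-unit values.
    return sum(values[i] * allocation[i] for i in allocation)

def best_fair_looking_deal(agent_reported):
    # Enumerate allocations and keep the one that maximizes the agent's TRUE
    # utility among deals that look balanced (equal utilities) under the
    # agent's REPORTED preferences and the opponent's preferences.
    best = None
    for x in range(UNITS["X"] + 1):
        for y in range(UNITS["Y"] + 1):
            agent_gets = {"X": x, "Y": y}
            opp_gets = {"X": UNITS["X"] - x, "Y": UNITS["Y"] - y}
            if utility(agent_reported, agent_gets) == utility(OPPONENT, opp_gets):
                true_value = utility(AGENT_TRUE, agent_gets)
                if best is None or true_value > best[0]:
                    best = (true_value, agent_gets)
    return best

print("honest report :", best_fair_looking_deal(AGENT_TRUE))  # (8, {'X': 4, 'Y': 0})
print("fixed-pie lie :", best_fair_looking_deal(OPPONENT))    # (12, {'X': 6, 'Y': 0})

In this toy setup, truthful reporting yields a true utility of 8 under the fair-looking split, whereas mirroring the opponent's preferences lets the agent take all of issue X (true utility 12) while the reported pie still appears to be divided exactly in half. The paper's analysis, and the defenses it discusses, are considerably richer than this illustration.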
Reference:
The Misrepresentation Game: How to win at negotiation while seeming like a nice guy (Gratch, Jonathan, Nazari, Zahra and Johnson, Emmanuel), In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, International Foundation for Autonomous Agents and Multiagent Systems, 2016.
Bibtex Entry:
@inproceedings{gratch_misrepresentation_2016,
	address = {Singapore},
	title = {The {Misrepresentation} {Game}: {How} to win at negotiation while seeming like a nice guy},
	isbn = {978-1-4503-4239-1},
	url = {http://dl.acm.org/citation.cfm?id=2937031},
	abstract = {Recently, interest has grown in agents that negotiate with people: to teach negotiation, to negotiate on behalf of people, and as a challenge problem to advance artificial social intelligence. Humans negotiate differently from algorithmic approaches to negotiation: people are not purely self-interested but place considerable weight on norms like fairness; people exchange information about their mental state and use this to judge the fairness of a social exchange; and people lie. Here, we focus on lying. We present an analysis of how people (or agents interacting with people) might optimally lie (maximally benefit themselves) while maintaining the illusion of fairness towards the other party. In doing so, we build on concepts from game theory and the preference-elicitation literature, but apply these to human, not rational, behavior. Our findings demonstrate clear benefits to lying and provide empirical support for a heuristic – the “fixed-pie lie” – that substantially enhances the efficiency of such deceptive algorithms. We conclude with implications and potential defenses against such manipulative techniques.},
	booktitle = {Proceedings of the 2016 {International} {Conference} on {Autonomous} {Agents} \& {Multiagent} {Systems}},
	publisher = {International Foundation for Autonomous Agents and Multiagent Systems},
	author = {Gratch, Jonathan and Nazari, Zahra and Johnson, Emmanuel},
	month = may,
	year = {2016},
	keywords = {Social Simulation, Virtual Humans, UARC},
	pages = {728--737}
}