ARE AI NEGOTIATORS EFFECTIVE?

Published: March 6, 2025
Category: Essays | News

By James Hale, PhD Computer Science Student, Graduate Research Assistant, USC ICT

James Anthony Hale is currently pursuing a PhD in Computer Science at the USC Viterbi School of Engineering, while working as a Graduate Research Assistant at ICT, under the supervision of Dr. Jonathan Gratch, who leads the Affective Computing Lab. Hale’s research focuses on AI dispute resolution, with a specific interest in mediation. His most recent paper concerns KODIS, a dyadic dispute resolution corpus containing thousands of dialogues from diverse cultures worldwide, which has been accepted to the Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics (NAACL 2025). In this essay he details how AIs, including LLMs, are being used as effective negotiators with humans. 


Artificial Intelligence (AI) is often described as a disruptive force, challenging entrenched paradigms of work, creativity, and decision-making. Among its most intellectually provocative applications is its potential role as a negotiator. One can envision a future in which business transactions, labor disputes, and even diplomatic accords are mediated not by humans but by algorithmic entities designed to optimize outcomes and minimize conflict.

As a PhD computer science student, my research focuses on human-AI dispute dynamics. Dispute resolution is not merely a strategic exercise but a deeply human process, informed by relationships, emotion, and cultural context. This raises a critical question: Can AI truly understand the nuances of human persuasion?

Contemporary AI negotiators already function in constrained environments. Automated bidding mechanisms, algorithmic trading systems, and customer service chatbots engage in rudimentary forms of negotiation. However, these systems operate within well-defined parameters where stakes are limited and human emotions are largely peripheral. The real challenge emerges in high-stakes contexts—corporate mergers, labor agreements, and geopolitical negotiations—where ambiguity, trust, and ethical imperatives complicate decision-making.

KODIS: A Multicultural Dispute Resolution Dialogue Corpus

Our most recent paper, accepted to NAACL 2025, presents KODIS, a dyadic dispute resolution corpus containing thousands of dialogues from diverse cultures worldwide.

A novel contribution, this corpus adopts a dispute resolution setting in which a buyer accuses a seller of delivering an incorrect item from an online retail store; tensions escalate further as the two exchange negative reviews. Participants are matched online to role-play one side and argue over the pertinent issues: a refund, removal of the reviews, and an apology. The emotional dynamics in our corpus replicate previous findings in the dispute literature. For example, we find emotional spirals, in which tension escalates as the parties respond to anger with anger until they walk away with the relationship tarnished. AI mediators can leverage this dataset to mediate disputes, defusing spirals before they take hold and promoting cooperative behavior. The multicultural nature of the corpus also allows us to examine how participants from different cultures approach the same dispute and to train models that account for those differences.
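
To make these ideas concrete, the sketch below shows one way a dyadic dispute dialogue might be represented and screened for escalating anger. It is a minimal illustration only: the field names, per-turn anger scores, and thresholds are assumptions for exposition, not the actual KODIS schema or annotation scheme.

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    """One utterance in a dyadic dispute dialogue (field names are illustrative)."""
    speaker: str   # "buyer" or "seller"
    text: str
    anger: float   # hypothetical per-turn anger score in [0, 1]

@dataclass
class Dialogue:
    dialogue_id: str
    culture_pair: tuple               # e.g., ("US", "JP"); purely illustrative
    turns: list = field(default_factory=list)

def detect_anger_spiral(dialogue: Dialogue, threshold: float = 0.6, run_length: int = 4) -> bool:
    """Flag a dialogue when anger stays above `threshold` for `run_length`
    consecutive turns, a crude proxy for the anger-begets-anger spirals
    described above."""
    consecutive = 0
    for turn in dialogue.turns:
        consecutive = consecutive + 1 if turn.anger >= threshold else 0
        if consecutive >= run_length:
            return True
    return False
```

A mediator policy could, for instance, watch for such a flag and intervene with a de-escalating prompt before the parties walk away.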

Social influence (SI) is central to dispute resolution, encompassing changes in cognition, emotion, or behavior arising from social interaction. A buyer, for example, employs SI techniques to establish rapport and negotiate trade-offs with a seller. SI pervades daily life, necessitating that AI-mediators reflect these subtleties. Consequently, systematically modeling SI in dialogue research is imperative. Doing so would refine AI systems’ ability to parse user intent, tailor communicative strategies, personalize responses, and guide interactions. These challenges draw from diverse disciplines, including Natural Language Processing (NLP), Game Theory, Affective Computing, Communication, and Social Psychology.
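
As a toy illustration of what modeling SI at the turn level can look like, the sketch below tags utterances with coarse strategy labels using keyword rules. The label set and patterns are assumptions for exposition; published SI taxonomies are far richer, and practical systems learn such labels from annotated data rather than regular expressions.

```python
import re

# Illustrative social-influence (SI) strategy labels; real annotation schemes
# are richer and typically applied by trained coders or learned classifiers.
SI_PATTERNS = {
    "rapport":    re.compile(r"\b(thanks|appreciate|understand how you feel)\b", re.I),
    "concession": re.compile(r"\b(partial refund|meet you halfway|i can offer)\b", re.I),
    "threat":     re.compile(r"\b(report you|leave another review|lawyer)\b", re.I),
    "apology":    re.compile(r"\b(sorry|apologize)\b", re.I),
}

def label_si_strategies(utterance: str) -> list[str]:
    """Return every illustrative SI strategy whose pattern matches the utterance."""
    return [label for label, pattern in SI_PATTERNS.items() if pattern.search(utterance)]

print(label_si_strategies("I'm sorry about the mix-up, but I can offer a partial refund."))
# -> ['concession', 'apology']
```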

The study of SI-driven dialogue—spanning negotiation, persuasion, therapy, and argumentation—has gained momentum. Existing conversational models prioritize strategy formulation via dialogue act modeling and strategic annotation. Complementary research explores outcome prediction, argument mining, and deception detection, yet these efforts remain fragmented. Few studies have produced functionally viable SI-driven systems, such as negotiation-oriented chatbots. The chief hurdles lie in ensuring AI models are both interpretable and deployable in real-world, safety-critical scenarios.

AI that understands SI and can mediate disputes can serve not only as an intermediary between clashing parties but also as a tool for training people to better mediate interpersonal conflict. This complements ICT's prior research on training US military personnel to better navigate complex social dynamics (e.g., those surrounding conflict) and to address issues common in such settings.

Integration of LLMs with Virtual Character Embodiment

This corpus can aid in the development of virtual agents capable of substituting for human confederates. Importantly, these systems retain the advantages of AI-driven interactivity and control. By leveraging the virtual human toolkit developed at ICT and large language models, we create virtual agents that engage in natural language interactions suitable for use in research on conflict. The intelligent virtual agent (IVA) research community has long employed both AI and scripted systems to examine negotiation, empathetic interaction, and collaborative discourse. Our system enables further exploration of these domains in increasingly naturalistic settings. Additionally, our work, featured in an IVA’24 study, examines the nuanced implications of embodiment in relation to gender dynamics.
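
To sketch what this integration can look like in code, the snippet below drives a role-playing disputant with a chat-style LLM. It is a simplified, assumption-laden outline: `call_llm` is a placeholder for whichever model endpoint a lab uses, and the persona and message format are illustrative rather than the actual APIs of the ICT virtual human toolkit or our deployed system.

```python
# Minimal sketch of an LLM-backed role-play agent loop for conflict research.
# `call_llm` is a stand-in for a real chat-completion endpoint; nothing here
# reflects the ICT virtual human toolkit's API.

def call_llm(messages: list[dict]) -> str:
    """Placeholder for an actual LLM call (e.g., a hosted chat endpoint)."""
    raise NotImplementedError("Wire this to your LLM provider of choice.")

SELLER_PERSONA = (
    "You are role-playing an online seller accused of shipping the wrong item. "
    "Negotiate over a refund, removal of negative reviews, and an apology. "
    "Stay in character and keep replies brief."
)

def run_roleplay_turn(history: list[dict], participant_utterance: str) -> str:
    """Append the participant's utterance, query the LLM in persona, and return the reply."""
    history.append({"role": "user", "content": participant_utterance})
    reply = call_llm([{"role": "system", "content": SELLER_PERSONA}] + history)
    history.append({"role": "assistant", "content": reply})
    return reply
```

In an embodied setting, the returned text would additionally drive the virtual character's speech and nonverbal behavior.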

The potential benefits of AI-mediated negotiation are considerable. Unlike human negotiators, AI is impervious to cognitive bias, fatigue, or ego-driven miscalculations. It can process vast datasets, model counterfactual scenarios, and identify optimal equilibria with unparalleled efficiency. In principle, AI could facilitate more equitable and effective agreements, counteracting irrational decision-making and strategic errors. Furthermore, in an era of pervasive misinformation and manipulation, AI systems engineered for transparency and fairness could elevate the integrity of negotiations.

Yet, significant challenges remain. Can AI genuinely infer human intent beyond the structured data it is trained on? Will it detect the subtleties of hesitation, historical grievances, or implicit signaling? More importantly, how do we ensure that AI negotiators operate within robust ethical constraints to prevent manipulation or asymmetrical advantage?

A more tenable model involves AI as an augmentative rather than autonomous force. A synergistic paradigm—where AI enhances human negotiation through analytical precision while human negotiators provide contextual intelligence—offers a pragmatic path forward. In such a framework, AI functions not as an arbiter but as an instrument for deeper insight, optimizing agreements while preserving the ethical and adaptive dimensions intrinsic to human discourse.

The evolution of AI negotiators will undoubtedly redefine commerce, dispute resolution, and diplomacy. The critical question is not whether AI will assume a role in negotiation but how we will shape its integration—intelligently, ethically, and with an acute awareness of both its potential and its limitations. The challenge is to ensure that as machines acquire negotiation capabilities, we retain and reinforce the fundamental human values that render negotiation meaningful.
