Do Emotional Expressions Impact Cooperation?

Published: March 26, 2025
Category: Essays | News

By Jonathan Gratch, Director, Affective Computing Lab, ICT

Understanding human cooperation has long been a subject of scholarly inquiry, bridging disciplines from evolutionary biology to behavioral economics. The ability of individuals to work together for mutual benefit, despite evolutionary pressures favoring self-interest, presents a fascinating paradox. One mechanism proposed to explain cooperation is Indirect Reciprocity (IR), where individuals build reputations based on observed behaviors, encouraging reciprocal cooperative acts. Yet, while much research has focused on the role of social norms in shaping reputations, little attention has been paid to the function of emotional expressions in these dynamics. My colleagues and I sought to explore this question: do emotional expressions influence the evolution of cooperation?

Our findings, published in Scientific Reports, suggest that emotional expressions play a crucial role in cooperative decision-making, serving as a way to establish a positive reputation and recover from occasional errors in judgment. We developed a model based on evolutionary game theory (EGT) to investigate how expressions impact IR and found that incorporating emotion-based judgments significantly enhances cooperation, particularly in noisy environments where errors in action execution, reputation assessment, and reputation assignment are common.
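In EGT models of this kind, strategies spread through a population by social learning: agents occasionally compare payoffs with a peer and imitate the more successful strategy. As a minimal sketch (not the paper's actual SNARE code; the strategy names, Fermi update rule, and parameter values here are illustrative assumptions), one update step might look like:

```python
import math
import random

def imitation_probability(payoff_self, payoff_role_model, beta=1.0):
    """Pairwise-comparison (Fermi) rule: the chance of copying a role model's
    strategy grows smoothly with the payoff difference; beta sets the
    strength of selection."""
    return 1.0 / (1.0 + math.exp(-beta * (payoff_role_model - payoff_self)))

def update_step(population, payoffs, beta=1.0, mutation_rate=0.01):
    """One social-learning step: a random agent either mutates to a random
    strategy or imitates a random role model with Fermi probability.
    Strategy labels (ALLC/ALLD/DISC) are illustrative."""
    i = random.randrange(len(population))
    if random.random() < mutation_rate:
        population[i] = random.choice(["ALLC", "ALLD", "DISC"])
    else:
        j = random.choice([k for k in range(len(population)) if k != i])
        if random.random() < imitation_probability(payoffs[i], payoffs[j], beta):
            population[i] = population[j]
    return population
```

Iterating this step many times lets the simulation track which strategies, and which emotional profiles, dominate in the long run.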

Emotions have long been recognized as an essential component of human social interaction. Darwin’s seminal work, The Expression of the Emotions in Man and Animals, posited that facial expressions evolved as adaptive mechanisms to facilitate communication. Subsequent empirical studies have demonstrated that emotional expressions not only convey internal states but also influence the behavior of observers, shaping their perceptions of trustworthiness and reliability. Building on this foundation, our research integrates these insights into a formalized computational model to examine their evolutionary significance.

Our model simulates interactions between agents engaging in the prisoner’s dilemma, a widely used framework for studying cooperation. Each agent adopts one of three strategies: always cooperate, always defect, or cooperate conditionally based on the reputations of others. Additionally, agents exhibit one of two emotional profiles. One profile signals a cooperative reputation (expressing joy when partners mutually cooperate but regret upon exploiting another). The other profile signals a competitive reputation (expressing joy when exploiting another and regret upon mutual cooperation, since mutual cooperation represents a missed opportunity to exploit). We then introduce a social norm that incorporates these emotional expressions into moral evaluations.
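The mechanics above can be sketched in a few functions. This is a simplified illustration of the setup, not the paper's exact model: the reputation labels, the specific emotion-aware judgment rule, and the default error rate are all assumptions made for the sake of the example.

```python
import random

C, D = "C", "D"  # cooperate / defect

def act(strategy, partner_reputation, error_rate=0.05):
    """Choose an action under execution noise: with probability error_rate
    the intended action flips, modeling the action-execution errors the
    model studies. Strategy labels are illustrative."""
    if strategy == "ALLC":
        intended = C
    elif strategy == "ALLD":
        intended = D
    else:  # "DISC": cooperate only with good-reputation partners
        intended = C if partner_reputation == "good" else D
    if random.random() < error_rate:
        return D if intended == C else C
    return intended

def expression(profile, own_action, partner_action):
    """Cooperative profile: joy on mutual cooperation, regret after
    exploiting a cooperator. Competitive profile: the reverse."""
    mutual = own_action == C and partner_action == C
    exploited = own_action == D and partner_action == C
    if profile == "cooperative":
        return "joy" if mutual else ("regret" if exploited else "neutral")
    return "joy" if exploited else ("regret" if mutual else "neutral")

def judge(action, expr):
    """A hypothetical emotion-aware social norm: defection accompanied by
    regret is forgiven, while expressions revealing a competitive
    disposition earn a bad reputation even after an acceptable action."""
    if action == D:
        return "good" if expr == "regret" else "bad"
    return "bad" if expr == "regret" else "good"
```

Under a norm like this, an accidental defection by a cooperative agent is followed by regret and judged "good", so reputations can recover from noise, whereas a competitive agent's joy at exploitation is judged "bad".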

The results were striking. Under traditional models of IR, errors can rapidly degrade cooperation, as a single mistake can unjustly tarnish an individual’s reputation, leading to a breakdown of reciprocal altruism. However, when emotional expressions were factored into reputation assessments, cooperation levels remained high. Cooperative agents were less likely to be penalized for unintended defections because their regret expressions signaled honest intent. Conversely, defectors who attempted to manipulate the system by feigning cooperative emotions were identified and punished over time. This supports the idea that emotional expressions serve as reliable signals of an individual’s cooperative disposition, reinforcing trust and stabilizing cooperative interactions.

These findings have profound implications, not only for understanding human social behavior but also for the development of artificial intelligence systems. As AI increasingly mediates human interactions—whether through social robots, virtual agents, or automated negotiation systems—it becomes imperative that these systems effectively interpret and respond to human emotions. By leveraging insights from our research, AI can be designed to foster trust, encourage prosocial behavior, and enhance collaboration between humans and machines.

Yet, our study also raises new questions. If emotional expressions can be used to signal cooperative intent, can they also be manipulated? The phenomenon of “cheap talk”—where individuals deceive others by expressing insincere emotions—remains a challenge. Our model suggests that widespread dishonesty in emotional signaling would ultimately undermine cooperation. This aligns with evolutionary theories positing that for emotional expressions to be maintained as credible signals, the costs of deception must outweigh the benefits.

Future research should explore the variability in individuals’ abilities to recognize and express emotions accurately. Could cognitive diversity in emotional intelligence be an evolutionary solution to the problem of deception? Additionally, extending our computational models to examine more nuanced social norms and their interactions with emotion-based judgments could provide deeper insights into the mechanisms underpinning cooperation.

In sum, our work offers empirical support for the long-standing hypothesis that emotional expressions evolved to facilitate cooperative behavior. By integrating emotion into models of IR, we uncover a powerful mechanism that stabilizes cooperation, particularly in uncertain and error-prone environments. As we continue to develop intelligent systems capable of social interaction, understanding the interplay between emotion and cooperation will be crucial in shaping a future where humans and machines work together seamlessly.

Acknowledgements

This research was partially supported by Trustworthy AI, Learning, Optimization and Reasoning (TAILOR), a project funded by the EU Horizon 2020 research and innovation programme under GA No. 952215, the JST-Mirai Program (Grant Number JPMJMI22J3), and the US Army. The content does not necessarily reflect the position or the policy of any Government, and no official endorsement should be inferred.

All generated data and the code supporting the model are available in the following public repository: https://github.com/cfonsecahenrique/SNARE 
