Affective Computing Group to Present 5 Papers in Glasgow

Published: July 31, 2024
Category: News
Affective Computing Lab, ICT [L-R]: Dr. Mohammad Soleymani, Dr. Gale Lucas, PhD student Bin Han, Dr. Jon Gratch, intern Sushrita Rakshit, intern Emma Clift, and PhD student Ala Tak.

By Dr. Randall W. Hill, Jr., Vice Dean, Viterbi School of Engineering, Omar B. Milligan Professor in Computer Science (Games and Interactive Media), Executive Director, ICT

Our Affective Computing Lab at ICT, led by Dr. Jonathan Gratch, will be presenting five papers in Glasgow this September. In this brief essay, Dr. Randall Hill, ICT’s Executive Director, looks at some of the researchers’ findings and their future impact.

ICT is proud to announce that Dr. Jonathan Gratch, Director of the Affective Computing Lab at ICT, and his students will attend both the ACM International Conference on Intelligent Virtual Agents and the 12th International Conference on Affective Computing and Intelligent Interaction, co-located in Glasgow in September 2024, to present papers. These include one with third-year PhD student James Hale and Lindsey Schweitzer, a former ICT Research Experiences for Undergraduates (REU) intern sponsored by the National Science Foundation; two with first-year PhD students Bin Han and Ala Tak and Cleo Yau, another REU intern; and two with Celso de Melo, a computer scientist at the Army Research Laboratory (ARL). The full list is at the end of this piece.

Worth noting are the following insights from our researchers: 

Does GPT-4 Have Situational Awareness of Human Emotions?

In the paper, “GPT-4 emulates average-human emotional cognition from a third-person perspective,” ICT’s Ala Tak and Jonathan Gratch extend recent investigations into the “emotional reasoning abilities of Large Language Models (LLMs),” noting that current research has “not directly evaluated the distinction between how LLMs predict the self-attribution of emotions and the perceptions of others’ emotions.”

The main takeaway from this paper, Professor Gratch told us, is that “GPT reasons about emotions as if it’s an ‘average observer’ – so it is better at predicting what people think a person should feel in a situation, but not necessarily what people actually feel.”

Why is this important? Because AI can only be effective in human-machine interaction (HMI) if, as ICT’s researchers point out, it can deploy computational emotion models that go beyond what is being said to “understand the situational context, including how specific aspects may evoke particular emotions and influence future decisions, behaviors and beliefs.”

Simply put: if AI is going to be useful, it’s going to have to do more than take what we say at face value, or rely on the “wisdom of small crowds” and the “aggregate summary of human knowledge” – it’s going to need to understand us humans. 
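To make that distinction concrete, here is a minimal sketch of how one might probe the gap between self-attributed and observer-attributed emotion with an LLM. It assumes the openai Python package and an OpenAI-style chat API; the scenario, prompt wording, and model name are illustrative placeholders, not the paper’s actual protocol.

```python
# A minimal sketch (not the paper's protocol) of probing the gap between
# self-attributed and observer-attributed emotion with an LLM.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

scenario = ("You worked for months on a proposal, and your manager "
            "presented it to leadership without crediting you.")

PROMPTS = {
    # First person: what would *you* feel in this situation?
    "self": f"Imagine this happened to you: {scenario} "
            "In one word, what emotion would you feel?",
    # Third person: what would an average observer say this person feels?
    "observer": f"Someone describes this situation: {scenario} "
                "In one word, what emotion would most people say this person feels?",
}

for perspective, prompt in PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    print(perspective, "->", response.choices[0].message.content)
```

If Gratch’s “average observer” finding holds, both answers will tend to track what people think someone should feel in the scenario, rather than what that person would report actually feeling.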

Can AI Read Us?

Bin Han (with supervisor Jonathan Gratch, alongside co-authors Cleo Yau and Su Lei) continues this line of research in “Knowledge-based Emotion Recognition using Large Language Models,” another of the ICT papers accepted this summer.

“Context-based emotion recognition is still in its infancy,” the authors point out, “[and] this paper contributes to this growing field by demonstrating how psychological theories of human emotion perception can inform the design of automated methods.”

But how can LLMs be “trained” to take situational context into account? 

There are many approaches, as the paper details, including training recognition methods for specific settings (such as recognizing drivers’ emotional states), but, as the authors point out, “this limits the generality of the resulting algorithm to these specific contexts.”

What’s the solution? 

The answer, the authors conclude, is to incorporate “situational knowledge while maintaining the advantages of decontextualized emotion recognition … by incorporating zero-shot inferences from LLMs about the situational context.”
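As a rough illustration of that idea (not the authors’ actual pipeline), one could pass a decontextualized expression label from a conventional face-based recognizer to an LLM, along with a description of the situation, and ask it zero-shot to revise the emotion estimate. The function name, prompt, and model below are hypothetical.

```python
# A rough sketch, not the authors' pipeline: combine a decontextualized
# expression label with zero-shot LLM reasoning about the situation.
from openai import OpenAI

client = OpenAI()

def contextual_emotion(expression_label: str, situation: str) -> str:
    """Ask the LLM, zero-shot, to reconcile a face reading with its context."""
    prompt = (
        f"A person's facial expression was automatically labeled as "
        f"'{expression_label}'. The situation: {situation}\n"
        "Considering how this situation would typically make someone feel, "
        "what emotion is the person most likely experiencing? Answer in one word."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

# Example: a smile after a loss may signal embarrassment rather than joy.
print(contextual_emotion("happiness", "They just lost a game they expected to win."))
```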

However, LLMs also need to take into account the wider context of human responses, namely cultural conditioning and expectations. More collectivist societies, or interdependent social realms, may rely more heavily on the influence of others’ facial expressions; this is not necessarily true in more individualistic nations.

“LLMs have been noted for their anglocentric tendencies, which may impact their efficacy in interpreting emotions from diverse cultural contexts,” the researchers caution, suggesting that cultural context be investigated alongside any results to mitigate bias.

Does Giving AI a “Body” Help (or Hinder)?

ICT’s James Hale (supervised by Jonathan Gratch, alongside co-author Lindsey Schweitzer) takes this research one step further in “Pitfalls of Embodiment in Human-Agent Experiment Design,” which has been accepted at the ACM International Conference on Intelligent Virtual Agents in Glasgow.

The researchers asked: when negotiating with AIs, does your AI need a face? Or does a face make it harder for some people to negotiate? That’s the starting point for this inquiry.

“The intelligent virtual agent community often works from the assumption that embodiment confers clear benefits to human-machine interaction,” the authors point out. “However, embodiment has potential drawbacks in highlighting the salience of social stereotypes around race and gender.”

If the AI is purely text on screen, we treat it as an AI. As soon as the AI has a body, we project gender, race, and a whole host of other prejudices onto it. Add a voice (although many AIs in the negotiation field are not yet vocalized) and our response to the AI is immediately shaped by our own cultural conditioning.

“Specifically, women perform worse, and men perform better against an apparently male opponent compared to a disembodied agent – as measured by the subjective value they assign to their outcome,” found Hale and his co-authors, who tracked how well women emerged from salary negotiations, compared to men. 

This research has been going on for over a decade within the Affective Computing Lab at ICT: Gratch et al. have been iterating on this line of inquiry with successive cohorts of graduate students and tracking the results. For example, USC Viterbi’s magazine covered this work back in 2017.

To that point, it will be interesting to continue observing and recording gender differences as generational shifts reshape the workplace. As Daniel Shinaver, Veteran Engagement Officer, noted on LinkedIn: “53% of Gen Z and 46% of millennial workers always negotiate salaries for new job offers, up sharply from 33% and 37% respectively just a year prior. On the flip side, only 23% of Gen X and 15% of boomers consistently engage in salary negotiations.”

If you’re not convinced by this, check out this recent Wall Street Journal (WSJ) piece on how to cope with the incoming Gen Z workforce.

It’s clear there’s a role for ICT’s research in this debate, widening the conversation to incorporate negotiating with embodied (or otherwise) AIs. But if 53% of Gen Z have been vocal in their pay negotiations with humans, one assumes an embodied AI won’t faze them.

Or will it?

We look to ICT’s Affective Computing Lab to illuminate this debate still further in the coming year. In the meantime, back at HQ, we wish our researchers a safe trip from California to Scotland and will be posting on our site and socials as they send material back!

Extra Homework

Here’s the full list of papers accepted to the conferences, if you want to dig deeper into these findings. They have not yet been published in the proceedings, but we will add links when they are.

  • James Hale, Lindsey Schweitzer, and Jonathan Gratch. “Pitfalls of Embodiment in Human-Agent Experiment Design.” ACM International Conference on Intelligent Virtual Agents, Glasgow, Scotland, 2024.
  • Bin Han, Cleo Yau, Su Lei, and Jonathan Gratch. “Knowledge-based Emotion Recognition using Large Language Models.” 12th International Conference on Affective Computing and Intelligent Interaction, Glasgow, Scotland, 2024.
  • Ala Tak and Jonathan Gratch. “GPT-4 emulates average-human emotional cognition from a third-person perspective.” 12th International Conference on Affective Computing and Intelligent Interaction, Glasgow, Scotland, 2024.
  • Ryoya Ito, Celso de Melo, Jonathan Gratch, and Kazunori Terada. “Emotional expression helps regulate the appropriate level of cooperation with agents.” 12th International Conference on Affective Computing and Intelligent Interaction, Glasgow, Scotland, 2024.
  • Takahisa Uchida, Yuichiro Yoshikawa, Celso de Melo, Jonathan Gratch, and Kazunori Terada. “People negotiate better with emotional human-like virtual agents than android robots.” 12th International Conference on Affective Computing and Intelligent Interaction, Glasgow, Scotland, 2024.

Dr. Randall W. Hill, Jr. is the Executive Director of the USC Institute for Creative Technologies, Vice Dean of the Viterbi School of Engineering, and was recently named the Omar B. Milligan Professor in Computer Science (Games and Interactive Media). After graduating from the United States Military Academy at West Point, Randy served as a commissioned officer in the U.S. Army, with assignments in field artillery and military intelligence, before earning his Master’s and PhD in Computer Science from USC. He worked at NASA JPL in the Deep Space Network Advanced Technology Program before joining the USC Information Sciences Institute to pursue models of human behavior and decision-making for real-time simulation environments. This research brought him to ICT as a Senior Scientist in 2000, and he was promoted to Executive Director in 2006. Dr. Hill is a member of the American Association for Artificial Intelligence and has authored over 80 technical publications.