HCI International Conference Accepts FIVE Papers from ICT’s Human-Centered AI Lab

Published: June 19, 2025
Category: News | Essays

BYLINE: Dr. Ning Wang, Director, Human-Centered AI Lab, USC ICT

I am proud to share that the Human-Centered AI Lab at USC’s Institute for Creative Technologies has five papers accepted at HCI International 2025. This reflects the rigorous scholarly work of my students and collaborators, who have helped advance our understanding of the complex dynamics between human and artificial intelligence. The five accepted papers span the breadth of our research mission. Of particular significance are the two papers where my students, Tina Behzad, a 2024 ICT Summer Intern and a Ph.D. student at Stony Brook University, and Raja Shaker Chinthakindi, who is pursuing his Master’s in Applied Data Science at USC, served as first authors. These contributions represent the scholarly achievements of our next generation of AI researchers.

  1. The 7th Patient: Designing an Educational Game for High School AI and Probability Education (David V. Pynadath, Ning Wang, Eric Greenwald, Karen Mayfield, Harold Asturias, Mac Cannady, Melissa A. Collins, Devin August Cavero, Timothy Hurt, Boxi Fu, Anoosh Kapadia, Omkar Masur, Rajay Kumar, Chirag Merchant)
  2. Becoming Fei: An Educational Game for AI and Data Science Education for Novice Learners (Ning Wang, Boxi Fu, Betul Dincer, Omkar Masur, David Faizi, Harshul Ravindran, Julia Wang, Devashish Lai, Chirag Merchant)
  3. Virtually Human: An Exhibit for Public AI Education (Ning Wang, Timothy Hurt, Ari Krakowski, Eric Greenwald, Jim Hammerman, Sabrina de Los Santos, Omkar Masur, Boxi Fu, Chirag Merchant)
  4. Beyond Predictions: A Study of AI Strength and Weakness Transparency Communication on Human-AI Collaboration (Tina Behzad, Nikolos Gurney, Ning Wang, and David V. Pynadath)
  5. Beyond Syntax: Evaluating the Depth, Bias, and Expressiveness of Human vs. AI-Generated Text (Raja Shaker Chinthakindi and Ning Wang)

When AI Learns to Say “I Don’t Know”: Tina Behzad’s Breakthrough

Tina Behzad’s paper, “Beyond Predictions: A Study of AI Strength and Weakness Transparency Communication on Human-AI Collaboration,” tackles one of the most pressing challenges in human-AI interaction: trust calibration. Too often, we see either blind faith in AI systems or complete rejection of their assistance. Tina’s work charts a middle path that informs how we design AI teammates.

Her insight was elegantly simple yet profound: what if AI systems could learn to recognize and communicate their own limitations? Rather than simply providing confidence scores, what if an AI could say, “I tend to struggle with cases like this one”? Working with Nikolos Gurney, David Pynadath, and me, Tina developed a self-assessing AI model that trains a decision tree on its own mistakes, enabling it to identify patterns in where and why it fails. A minimal sketch of that idea follows below.
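For readers curious about the mechanics, here is a minimal illustration of a self-assessing model: a primary classifier is trained, its mistakes on held-out data are collected, and a shallow decision tree is fit to those mistakes so its branches describe where the model tends to fail. This is a sketch in scikit-learn with synthetic data, not the code or dataset from Tina’s study; all model and parameter choices here are assumptions for illustration.

```python
# Sketch only: a "weakness" decision tree trained on a primary model's mistakes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the task the AI teammate performs.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
X_train, X_held, y_train, y_held = train_test_split(X, y, test_size=0.5, random_state=0)

# 1. Train the primary AI model.
primary = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# 2. Label each held-out example by whether the primary model got it wrong.
made_mistake = (primary.predict(X_held) != y_held).astype(int)

# 3. Fit a shallow decision tree on those mistakes; its branches describe the
#    regions of the input space where the primary model tends to fail.
weakness_tree = DecisionTreeClassifier(max_depth=3, random_state=0)
weakness_tree.fit(X_held, made_mistake)

# 4. The tree's rules can be translated into statements such as
#    "I tend to struggle with cases like this one."
print(export_text(weakness_tree, feature_names=[f"feature_{i}" for i in range(6)]))
```

In practice, the branches of such a tree can be verbalized as the kind of strength-and-weakness explanations the paper studies, rather than shown to users as raw rules.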

The results from her study of 272 participants were intriguing. When AI systems explained their weaknesses, human teammates made decisions that were just as good as when the AI expressed its confidence. However, when Tina zoomed in on cases in which the AI made a mistake yet remained supremely confident it was right, human teammates made much better decisions when the AI explained its weaknesses rather than expressing confidence. People became more discerning about when to trust the AI’s recommendations, leading to what we call “calibrated trust” – neither blind acceptance nor automatic rejection, but appropriate reliance based on context.

This work has profound implications beyond the laboratory. When an AI system makes a recommendation, for example, it is often highly confident, even when it is completely wrong. Tina’s work uncovered a strategy that can help human users spot these cases and intervene. In military contexts, where the Army Research Office’s support for this research makes clear sense, soldiers need to know exactly when to trust their AI tools and when to rely on human judgment. The difference between overreliance and underreliance could literally be a matter of life and death.

The Subtle Art of Human Expression: Raja Shaker Chinthakindi’s Deep Dive into Language

Raja Shaker Chinthakindi’s paper, “Beyond Syntax: Evaluating the Depth, Bias, and Expressiveness of Human vs. AI-Generated Text,” addresses a question that keeps many people awake at night: as AI-generated text becomes increasingly sophisticated, what makes human communication irreplaceably human?

Raja began by surveying the existing research and was not satisfied with what he found: prior work often focuses on measuring grammatical consistency and sentence coherence. In his paper, he proposed a framework examining six linguistic dimensions: grammatical consistency, vocabulary diversity, emotional expression, personalization, sensitivity to controversial topics, and response consistency. Using the HC3 dataset of 24,000 paired responses, he conducted a thorough comparative analysis of human versus AI-generated text; a toy illustration of two of these measures appears below.
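To make the idea concrete, here is a toy sketch of simple proxies for two of the six dimensions (vocabulary diversity and emotional expression) applied to a paired human/AI response. The specific metrics and word list here are my own illustrative assumptions, not the measures or pipeline used in Raja’s paper.

```python
# Illustrative proxies only, applied to one human/AI response pair.
import re

def vocabulary_diversity(text: str) -> float:
    """Type-token ratio: unique words divided by total words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

# Tiny emotion lexicon for demonstration purposes only.
EMOTION_WORDS = {"love", "hate", "excited", "worried", "happy", "sad", "afraid", "proud"}

def emotional_expression(text: str) -> float:
    """Fraction of words drawn from the toy emotion lexicon."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return sum(t in EMOTION_WORDS for t in tokens) / len(tokens) if tokens else 0.0

human = "Honestly, I was worried at first, but I'm so proud of how it turned out."
ai = "The project was completed successfully and met all of the stated requirements."

for label, text in [("human", human), ("ai", ai)]:
    print(label, round(vocabulary_diversity(text), 2), round(emotional_expression(text), 2))
```

A full analysis along the lines of the paper would compute measures like these over thousands of paired responses and compare their distributions, rather than judging single examples.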

The findings reveal the complexity of human communication. While AI excelled in grammatical accuracy and consistency – producing clean, error-free text – human responses demonstrated superior emotional diversity, vocabulary richness, and genuine personalization. Perhaps most intriguingly, while AI showed better context alignment, it lacked what Raja calls “genuine personalization and deeper emotional nuance.”

This research matters immensely in our current moment. As AI-generated content floods our information ecosystem, understanding these subtle differences becomes crucial for maintaining authenticity and trust in human communication. The implications extend from education and journalism to creative industries and beyond.

Why This Work Matters: The Human in Human-Centered AI

Both of these student-led papers exemplify what I believe is the core mission of human-centered AI research: understanding not just what AI can do, but how it interacts with human cognition, emotion, and social dynamics. They represent a maturation in our field’s thinking, moving beyond simple performance metrics to consider the full spectrum of human-AI interaction.

The military’s investment in this research reflects a broader strategic understanding. In an era of information warfare and AI-powered disinformation campaigns, the ability to detect AI-generated content becomes a national security imperative. When adversaries can flood social media with manufactured content or create convincing fake communications, understanding the linguistic fingerprints that distinguish human from artificial text becomes crucial for operational security.

Similarly, as our military increasingly relies on AI decision-support systems, ensuring that human operators maintain appropriate trust – neither overreliance nor underutilization – becomes essential for effective human-AI teaming. The insights from Tina’s work on transparency and trust calibration could directly inform how we design AI systems for high-stakes environments where lives depend on getting the human-AI partnership right.

But the implications extend far beyond military applications. In healthcare, finance, education, and countless other domains, we need AI systems that can effectively communicate their limitations and humans who can appropriately calibrate their trust. We need to preserve what makes human communication authentically human while leveraging AI’s capabilities where they excel. The future trajectory of artificial intelligence research must prioritize the augmentation of human capabilities while preserving the fundamental aspects of human agency and expression. These five papers represent our contributions toward that future.
