How ICT Trains AI to Understand Humans (and vice versa)

Published: September 8, 2025
Category: Essays | News
Teaching Humans and AI to Understand Each Other

By Dr. Randall W. Hill, Jr., Executive Director, USC Institute for Creative Technologies; Vice Dean, Viterbi School of Engineering; Omar B. Milligan Professor of Computer Science (Games and Interactive Media)

As a University Affiliated Research Center (UARC) for the Department of Defense, sponsored by the US Army, the USC Institute for Creative Technologies (ICT) is mission-driven and defense-focused. Our mandate is to deliver cutting-edge research and prototype solutions that serve the national interest—strengthening the cognitive and operational edge of those who serve.

At ICT, we believe that building better AI is only part of the mission. Just as important is helping people—warfighters, instructors, analysts, and commanders—understand and trust the AI systems they increasingly rely on. That is why we are advancing AI research that is not only intelligent, but intelligible—systems that can reason socially, signal uncertainty, and adapt to human values under pressure.

Developing AI, however, isn’t enough on its own; people must also understand it. That’s why we established the Artificial Intelligence Research Center of Excellence for Education (AIRCOEE), funded through a $4.5M Congressional grant. The AIRCOEE program is led by Dr. Benjamin Nye, Director of ICT’s Learning Sciences Lab; Dr. William Swartout, Chief Science Officer at ICT; and Dr. Ning Wang, Director of the Human-Centered AI Lab.

One of AIRCOEE’s flagship tools is PAL3—a mobile and web-based tutor that teaches AI fundamentals through interactive dialogs, code hints, and personalized learning plans. Another, ARC (AI-assisted Revisions for Curricula), helps Army instructors update their training materials at scale. AWE (Army Writing Enhancement) supports critical thinking by guiding soldiers through argument construction and revision, using AI not as a ghostwriter, but as a reasoning partner.

These tools share a common philosophy: AI should amplify human cognition, not replace it. Our systems teach with AI, about AI, and through AI.

Artificial Intelligence is often framed as a binary proposition: a future dominated by machines, or one guarded by human ingenuity. But at the Institute for Creative Technologies (ICT), we know the truth is more collaborative. Our researchers are not asking whether AI will replace humans. We are asking how to build AI that works with humans—socially, ethically, transparently, and intelligently.

AI That Thinks Socially: Cognitive Empathy in Operational Settings

In the crucible of military decision-making, understanding belief, intention, and uncertainty is critical. Dr. Nik Gurney, interim research lead for ICT’s Social Simulation Lab, develops AI agents with “theory of mind”—the ability to infer what others know, believe, or intend. His team uses probabilistic models and behavioral heuristics to equip agents for scenarios ranging from mission rehearsal to adversarial games.

Whether determining if a trainee realizes they’ve been spotted, or assessing whether an actor’s behavior suggests deception, these systems model social reasoning under uncertainty. This capability is not a luxury. In collaborative environments, it is foundational—helping agents avoid brittle errors and respond with nuance in fast-moving, human-in-the-loop contexts.
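
To make the idea concrete, the sketch below shows one simple way such an inference could be framed: a Bayesian update over the hidden belief “the trainee knows they’ve been spotted,” given a stream of observed behaviors. The prior, the behaviors, and the likelihood numbers are illustrative assumptions, not the Social Simulation Lab’s actual models.

```python
# Minimal sketch: inferring a hidden belief ("the trainee knows they've been
# spotted") from observed behavior via Bayes' rule. All numbers and event
# names are hypothetical placeholders for illustration.

PRIOR_AWARE = 0.2  # assumed prior probability the trainee believes they've been spotted

# Likelihood of each observed behavior given the trainee's hidden belief state.
LIKELIHOOD = {
    # behavior:          (P(behavior | aware), P(behavior | unaware))
    "changes_route":      (0.70, 0.20),
    "maintains_course":   (0.25, 0.70),
    "signals_teammate":   (0.60, 0.10),
}

def update_belief(prior: float, observations: list[str]) -> float:
    """Sequentially apply Bayes' rule over a stream of observed behaviors."""
    p_aware = prior
    for obs in observations:
        p_obs_aware, p_obs_unaware = LIKELIHOOD[obs]
        numerator = p_obs_aware * p_aware
        denominator = numerator + p_obs_unaware * (1.0 - p_aware)
        p_aware = numerator / denominator
    return p_aware

if __name__ == "__main__":
    posterior = update_belief(PRIOR_AWARE, ["changes_route", "signals_teammate"])
    print(f"P(trainee believes they've been spotted) = {posterior:.2f}")
```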

Trust and Transparency: Building Honest AI for High-Stakes Domains

In operational domains—whether in defense, healthcare, or intelligence—false confidence in AI systems can lead to unacceptable risk. That’s why PhD student Tina Behzad, working under the supervision of Dr. Ning Wang, Director of the Human-Centered AI Lab, has pioneered “humble” agents—AI systems designed to disclose their own limitations.

Rather than pretending to know everything, these agents flag situations where their predictions may be unreliable. In trials with 272 participants, humans who worked with these agents calibrated their trust more effectively and made better decisions. In high-consequence settings, transparency isn’t optional—it’s a prerequisite for effective human-machine teaming.
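
As a rough illustration of the idea, not the system evaluated in that study, a “humble” agent can be sketched as a thin wrapper that reports a model’s confidence alongside its recommendation and explicitly flags predictions below a chosen reliability threshold. The `HumbleAgent` class, the threshold value, and the `predict_proba` interface below are assumptions made for the sketch.

```python
# Illustrative sketch of an uncertainty-disclosing agent: a wrapper around any
# probabilistic classifier that reports confidence and flags unreliable
# predictions instead of presenting every answer with equal certainty.

from dataclasses import dataclass

@dataclass
class Disclosure:
    prediction: str
    confidence: float
    reliable: bool
    message: str

class HumbleAgent:
    def __init__(self, model, confidence_threshold: float = 0.75):
        # `model` is any object exposing predict_proba(features) -> dict[label, prob]
        self.model = model
        self.threshold = confidence_threshold

    def advise(self, features) -> Disclosure:
        probs = self.model.predict_proba(features)
        label, conf = max(probs.items(), key=lambda kv: kv[1])
        if conf >= self.threshold:
            msg = f"I recommend '{label}' (confidence {conf:.0%})."
        else:
            msg = (f"My best guess is '{label}', but my confidence is only "
                   f"{conf:.0%}; please verify before acting.")
        return Disclosure(label, conf, conf >= self.threshold, msg)
```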

Dr. Wang has long championed AI education as a public good. Her lab’s projects span from immersive exhibits (“Virtually Human”) to high school game-based curricula (“The 7th Patient”). Her team’s papers, five of which were accepted to HCI International 2025, explore how users calibrate trust in AI and how human expression differs fundamentally from AI-generated text.

Emotional Intelligence Under Pressure

Ala N. Tak, a PhD candidate in the Affective Computing Group supervised by Dr. Jonathan Gratch, conducts research at the intersection of language models and emotional reasoning. His work has appeared in leading venues, including the journal IEEE Transactions on Affective Computing and the annual meeting of the Association for Computational Linguistics.

Tak investigates how LLMs process human emotions—not just superficially, but through the lens of cognitive appraisal theory. His studies show that while models like GPT-4 can accurately label emotions in multilingual narratives, they also exhibit systematic biases. Most notably, these models tend to appraise events through a lens of low agency and pessimism.

That matters. In military contexts, emotional misreadings can reinforce helplessness or misjudge resilience. Tak’s research not only exposes these biases, but shows how emotional pathways inside LLMs can be traced and adjusted—laying groundwork for more responsible, interpretable affective AI systems in resilience training, coaching, and support.
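
One hypothetical way to probe a model along these lines is to ask it to rate an event on appraisal dimensions, such as agency and controllability, rather than simply to name an emotion. The prompt wording, the chosen dimensions, and the `call_llm` placeholder below are assumptions made for illustration, not Tak’s experimental setup.

```python
# Sketch of appraisal-theory probing: elicit ratings on appraisal dimensions
# for an event description. `call_llm` is a stub for whatever model API is
# available; dimensions and the 0-10 scale are illustrative choices.

import json

APPRAISAL_PROMPT = """Read the event description and rate it on each appraisal
dimension from 0 (very low) to 10 (very high). Respond with JSON only, e.g.
{{"self_agency": 3, "controllability": 5, "certainty": 7, "pleasantness": 2}}.

Event: {event}
"""

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to a language model and return its reply."""
    raise NotImplementedError("Wire this to your model of choice.")

def appraise(event: str) -> dict[str, int]:
    reply = call_llm(APPRAISAL_PROMPT.format(event=event))
    return json.loads(reply)

# A systematic tilt toward low self_agency scores across many events would be
# one symptom of the low-agency, pessimistic bias described above.
```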

Simulating Personality for Training and Immersion

Bin Han, a PhD student in computer science at ICT supervised by Dr. Jonathan Gratch, brings personality theory into the virtual world. Her work examines whether LLMs can generate consistent behavioral profiles for virtual agents—specifically, how introversion or extraversion manifests in both speech and motion. Her Unity-based simulations show that agents driven purely by language prompts, without hand-coded scripts, can convey these traits convincingly.

Participants accurately distinguished these traits in negotiation and small talk scenarios. This opens the door to scalable virtual teammates and role-players in training environments—tools that are psychologically coherent, operationally relevant, and adaptable to a range of mission simulations.
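
A minimal sketch of the underlying idea, assuming a simple prompt template rather than the study’s actual prompts or Unity assets, is to condition an agent’s dialogue style and nonverbal behavior on a single trait setting:

```python
# Illustrative prompt-driven personality: map a trait label to a system prompt
# that shapes both dialogue style and gesture selection. Trait descriptions,
# gesture tags, and scenario names are hypothetical.

TRAIT_STYLES = {
    "extravert": {
        "speech": "Speak enthusiastically, take longer turns, and initiate small talk.",
        "motion": "Prefer expansive gestures, frequent eye contact, and closer proxemics.",
    },
    "introvert": {
        "speech": "Speak briefly and thoughtfully; let the other party lead the exchange.",
        "motion": "Prefer contained gestures, occasional eye contact, and greater distance.",
    },
}

def build_agent_prompt(trait: str, scenario: str) -> str:
    style = TRAIT_STYLES[trait]
    return (
        f"You are a virtual role-player in a {scenario} scenario.\n"
        f"Dialogue style: {style['speech']}\n"
        f"Nonverbal behavior (emit as [gesture: ...] tags): {style['motion']}\n"
        "Stay in character and keep speech and gestures consistent with this profile."
    )

print(build_agent_prompt("introvert", "negotiation"))
```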

System Evaluation: Trust, Disclosure, and Emotional Fidelity

Dr. Gale Lucas, Director of ICT’s Technology Evaluation Lab, studies trust, persuasion, and empathy in human-agent interaction. Her research spans sensitive use cases, from screening military recruits for trauma to training for active shooter response.

One of her key insights: people often disclose more to virtual agents than to human interviewers—especially when stigma or authority dynamics are involved. This positions AI agents as powerful bridges to care, provided their design prioritizes privacy, safety, and emotional fidelity. Lucas’s work grounds ICT’s AI research in real-world validation and rigorous social science.

Collaborative AI Systems: Multi-Agent Teaming for Mission Complexity

Dr. Volkan Ustun, Director of the Human-Inspired Adaptive Teaming Systems (HATS) Lab, leads ICT’s work in multi-agent collaboration, with research contributions from Soham Hans. Their project trains LLM-powered agents to generate and solve complex physics puzzles in the CREATE environment, using a distributed architecture inspired by human teamwork.

This ReAct-style architecture models human teamwork under constraints. The result: AI systems that generate and validate complex scenarios, refine plans, and align with human intent. These systems support procedural content generation in defense training, education, and mission rehearsal—bringing structure, feedback, and creativity to synthetic environments.
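
The control flow can be sketched as a small generate-solve-validate loop in which role-specialized agents propose a scenario, attempt it, and critique the result before it is accepted. The stub functions and toy puzzle format below are illustrative assumptions, not the CREATE pipeline itself.

```python
# Toy sketch of a ReAct-style generate-solve-validate loop between
# role-specialized agents. Each function is a stub standing in for an LLM call.

def generator_agent(difficulty: str) -> dict:
    """Propose a puzzle specification (stub for a 'generator' role)."""
    return {"difficulty": difficulty, "spec": f"stack three blocks ({difficulty})"}

def solver_agent(puzzle: dict) -> dict:
    """Reason about the puzzle and return an action plan (stub for a 'solver' role)."""
    return {"plan": ["place block A", "place block B on A", "place block C on B"]}

def critic_agent(puzzle: dict, solution: dict) -> tuple[bool, str]:
    """Validate the plan against the spec (stub for a 'critic' role)."""
    ok = len(solution["plan"]) >= 3
    return ok, ("plan covers all three blocks" if ok else "plan incomplete; regenerate")

def create_scenario(difficulty: str, max_rounds: int = 3) -> dict | None:
    for _ in range(max_rounds):
        puzzle = generator_agent(difficulty)               # act: propose content
        solution = solver_agent(puzzle)                    # reason: attempt a solution
        valid, feedback = critic_agent(puzzle, solution)   # observe: validate, feed back
        if valid:
            return {"puzzle": puzzle, "solution": solution, "feedback": feedback}
    return None  # give up after max_rounds and escalate to a human designer

print(create_scenario("moderate"))
```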

Serving the Warfighter: Operational AI at the Edge

ICT’s role as a DoD UARC means that our innovations are mission-aligned from the start. We are building AI systems that support warfighters with explainable recommendations, cognitive resilience, and collaborative tools. Our researchers are developing stress and fatigue monitors, deception detection platforms, generative-AI forensics, and disinformation countermeasures—many of which have already demonstrated efficacy in operational settings.

We do not pursue innovation for its own sake. We pursue it to answer the real-world needs of instructors, commanders, analysts, and tactical decision-makers across the services.

This work draws on deep technical strength across ICT’s interdisciplinary labs. The Vision and Graphics Lab (VGL), led by Dr. Yajie Zhao, contributes state-of-the-art computer vision, 3D scene understanding, and physically grounded simulation. The Mixed Reality (MxR) Lab, led by David Nelson, brings those capabilities into immersive, embodied environments—developing virtual training platforms that fuse AI, storytelling, and presence to support operational preparedness at scale.

Looking Ahead: A Human-Centered AI Future for National Security

Across all these projects runs a common thread: AI must amplify human cognition—not replace it. At ICT, we teach with AI, about AI, and through AI. Our systems support decision-makers, adapt to human behavior, and reflect the values of the people they serve.

From social reasoning to uncertainty disclosure, emotional modeling to multi-agent collaboration, our researchers are building AI that earns its place as a trusted partner in defense and security contexts.

This work is not speculative. It is tested, validated, and evolving—thanks to the creativity and discipline of our scientists, students, and partners.

At ICT, we are not dazzled by what AI can do. We are focused on what it must do—for the mission, for the operator, and for the future of responsible, resilient technology in defense.
