By Dr. Randall W. Hill, Jr., Vice Dean, Viterbi School of Engineering, Omar B. Milligan Professor of Computer Science (Games and Interactive Media); Executive Director, USC Institute for Creative Technologies
When I joined the faculty at USC, artificial intelligence was still a specialized pursuit, shaped more by theory than by operational application. But even then, it was clear to me that intelligent systems would never reach their full potential by reasoning in a vacuum. To truly support human learning and decision-making, AI would need to model the complexity of the real world—its dynamism, its ambiguity, and, above all, its people.
That belief shaped the course of my academic career, starting with intelligent tutoring systems (ITS) designed for reactive task environments. Where traditional tutoring systems focused on static domains like algebra or programming, my research addressed the cognitive demands of dynamic situations: environments where tasks unfold in real time, goals shift, and the external world doesn’t wait for a learner to catch up.
NASA JPL
At the start of my research career, while still working at NASA JPL, I built one of our earliest AI systems to train operators of NASA’s Deep Space Network.
In this domain, human operators must navigate frequent and unpredictable state changes while maintaining critical communications with interplanetary spacecraft. To support this, we developed a detailed computational model of skilled behavior and learning—one that could improve with experience, recover from error, and provide just-in-time tutorial assistance.
Built in the Soar cognitive architecture, our model served both as an expert problem-solver and as an artificial student, enabling us to simulate the learning process itself and forecast the effects of curriculum design.
Crucially, our tutor was impasse-driven: it didn’t intervene on every misstep but responded when learners reached conceptual roadblocks—points of uncertainty, conflict, or failure to achieve task goals. This required situated plan attribution: the ability to infer what the learner was trying to do based on their actions, the system feedback, and the task context. This model of human learning—reactive, situated, and interruptible—became a foundation for the rest of my research career.
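To make the idea concrete, here is a minimal sketch, in Python rather than Soar, of what impasse-driven intervention looks like; the state fields, thresholds, and function names are illustrative assumptions, not the original system’s design.

```python
# A minimal sketch of impasse-driven tutoring. All names, thresholds, and
# state fields are illustrative assumptions, not the original Soar model.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class LearnerState:
    """What the tutor can observe about the learner at a given moment."""
    recent_actions: list = field(default_factory=list)    # operators the learner just issued
    expected_actions: list = field(default_factory=list)  # what the expert model would do next
    idle_seconds: float = 0.0                              # time since the last learner action
    goal_achieved: bool = False


def infer_intent(state: LearnerState) -> str:
    """Crude situated plan attribution: match the learner's actions
    against the expert model's expectations for the current context."""
    overlap = set(state.recent_actions) & set(state.expected_actions)
    return "on_expected_plan" if overlap else "unknown_plan"


def detect_impasse(state: LearnerState) -> Optional[str]:
    """Return the kind of impasse, if any; None means do not interrupt."""
    if state.goal_achieved:
        return None
    if state.idle_seconds > 30:        # prolonged inactivity suggests uncertainty
        return "uncertainty"
    if infer_intent(state) == "unknown_plan" and len(state.recent_actions) >= 3:
        return "goal_failure"          # actions diverge from every expected plan
    return None


def tutor_step(state: LearnerState) -> str:
    impasse = detect_impasse(state)
    if impasse is None:
        return "observe"               # let the learner keep working
    return f"offer just-in-time hint for {impasse}"
```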
MILITARY SIMULATIONS
As the 1990s progressed, I turned increasingly to synthetic environments, particularly military simulation, as an ideal proving ground for these ideas.
Under DARPA’s Synthetic Theater of War initiative, we helped pioneer the use of intelligent agents to simulate entire military units. Our group developed a company of synthetic rotary-wing aircraft: Apache attack helicopter agents that could navigate complex airspace, execute coordinated assaults, and adapt to mission contingencies.
Each helicopter was piloted by an autonomous agent, governed not by simple scripts but by a deliberative, goal-directed cognitive architecture. These agents didn’t just follow orders; they made decisions, responded to constraints, and coordinated with teammates to accomplish shared objectives.
At the company level, we developed a synthetic commander agent that engaged in continuous planning, mission monitoring, and replanning in response to unfolding events.
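The control pattern behind that commander is easy to express in outline. The sketch below, with hypothetical class, method, and task names, shows a continuous plan-monitor-replan loop; the actual system reasoned over far richer mission state.

```python
# A sketch of the plan-monitor-replan loop a synthetic commander runs.
# Class, method, and task names here are hypothetical, not the original system's API.


class CommanderAgent:
    def __init__(self, mission_goals):
        self.goals = mission_goals
        self.plan = self.make_plan(mission_goals)

    def make_plan(self, goals):
        # Placeholder planner: one task per goal, in priority order.
        return [f"execute:{goal}" for goal in goals]

    def observe_world(self):
        # A real simulation would report unit positions, threats, fuel, and losses.
        return {"threat_detected": False, "unit_losses": 0}

    def plan_still_valid(self, world_state):
        # Replan whenever an unexpected event invalidates the plan's assumptions.
        return not world_state["threat_detected"] and world_state["unit_losses"] == 0

    def step(self):
        world_state = self.observe_world()
        if not self.plan_still_valid(world_state):
            self.plan = self.make_plan(self.goals)   # mission monitoring triggers replanning
        if self.plan:
            return self.plan.pop(0)                  # dispatch the next task to subordinate agents
        return "mission_complete"


commander = CommanderAgent(["secure_landing_zone", "escort_convoy"])
while (action := commander.step()) != "mission_complete":
    print(action)
```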
The lessons from this work were clear: autonomy and coordination cannot be bolted on after the fact. They must be designed into the architecture from the beginning, with a full understanding of the cognitive, social, and operational dynamics that shape human decision-making.
ICT: MISSION REHEARSAL EXERCISE
This insight reached new heights with our development of virtual humans. At the turn of the century, ICT launched the Mission Rehearsal Exercise (MRE), a bold effort to integrate natural language dialogue, emotion modeling, perception, and simulation into one cohesive training system.
Our aim was to prepare Soldiers for emotionally charged decision-making—moments where there are no clear answers, only human consequences.
In MRE, users interacted with virtual characters that could gesture, express emotion, maintain gaze, and respond appropriately to stress. These characters were not just avatars; they were grounded in theories of emotion, social cognition, and belief modeling. A virtual sergeant might shout when threatened, hesitate when confused, or nod to indicate understanding, behaviors designed to evoke a realistic social and emotional response from the human trainee.
This was more than a technical milestone; it was a conceptual breakthrough. We were no longer simulating tasks. We were simulating people.
LEADERSHIP GAME-BASED TRAINING SYSTEMS
This work gave rise to pedagogically structured, game-based training systems like ELECT BiLAT, UrbanSim, and ELITE, which placed trainees in culturally complex or emotionally sensitive scenarios and taught them to navigate difficult interpersonal terrain.
From case-based leadership development (AXL.Net) to emotion-aware tutoring, these systems made it clear that learning is most effective when it is felt, not just understood.
Over the years, my work has also contributed to new theories of perception and attention in virtual agents. Our perceptually driven cognitive mapping models enabled virtual humans to explore environments, remember spatial configurations, and respond realistically to visual stimuli. This gave rise to agents that could plan coordinated tactical movements, establish security perimeters, and communicate spatial intent using natural language.
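As a toy illustration of perceptually driven cognitive mapping: the agent below remembers only what it has perceived and answers spatial queries from that memory. The dictionary-of-landmarks representation, and every name in it, is a deliberate simplification, not the original model.

```python
# Illustrative sketch: a cognitive map populated only through perception.
# Landmark names and the flat (x, y) representation are assumptions for clarity.

import math


class CognitiveMap:
    def __init__(self):
        self.landmarks = {}  # name -> (x, y), filled only by what the agent has seen

    def perceive(self, name, position):
        """Record a landmark observed in the agent's field of view."""
        self.landmarks[name] = position

    def recall(self, name):
        """Return a remembered position, or None if never observed."""
        return self.landmarks.get(name)

    def nearest(self, position):
        """Find the closest remembered landmark to a given point."""
        if not self.landmarks:
            return None
        return min(self.landmarks, key=lambda n: math.dist(self.landmarks[n], position))


# Usage: the agent builds its map while exploring, then plans relative to it.
cmap = CognitiveMap()
cmap.perceive("courtyard_gate", (12.0, 4.5))
cmap.perceive("rooftop_overwatch", (3.0, 18.0))
print(cmap.nearest((10.0, 6.0)))   # -> "courtyard_gate"
```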
These foundational ideas—cognitive modeling, social coordination, emotional realism, and contextual learning—continue to shape the field. They also serve as the scaffolding for ICT’s current work, which has grown in depth, ambition, and impact.
AI THAT UNDERSTANDS HUMANS
Today, ICT is home to some of the most innovative research in human-centered AI. Our work remains mission-driven, focused on supporting warfighters, analysts, instructors, and decision-makers with systems that are not only intelligent, but intelligible.
The questions we ask today are the natural evolution of those we began asking decades ago:
How can an AI understand what its human partner knows, believes, or intends? Dr. Nik Gurney’s team is answering that with theory-of-mind modeling for agents in mission rehearsal and adversarial games—equipping AI with the kind of social reasoning once reserved for humans.
How should an AI disclose its limits? Dr. Ning Wang’s Human-Centered AI Lab is building “humble” agents that flag when their predictions may be unreliable—restoring transparency and trust in high-stakes environments.
What does it mean for an AI to feel? Ala Tak’s research, supervised by Dr. Jonathan Gratch, maps emotional reasoning inside large language models—showing how biases in emotional appraisal can shape everything from coaching to decision support.
How do personality traits emerge in synthetic agents? Bin Han’s Unity-based simulations demonstrate that introversion and extraversion can manifest in language and movement—no hand-coded scripts required. This paves the way for psychologically coherent roleplayers in virtual training.
What makes a system trustworthy? Dr. Gale Lucas’s lab evaluates how people disclose information and calibrate trust when interacting with AI—particularly in scenarios shaped by stigma, trauma, or authority. Her work underscores that design is policy.
How can AI teams collaborate? Dr. Volkan Ustun and Soham Hans are building multi-agent systems that use LLMs to plan, reason, and coordinate like human teams—supporting scenario generation, procedural simulation, and mission rehearsal in complex domains.
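As a rough sketch of that style of coordination, the example below pairs a planner agent with a critic agent around a placeholder model call; `call_llm`, the roles, and the prompts are assumptions for illustration, not the systems under development.

```python
# A minimal sketch of LLM-backed agents coordinating on a shared plan:
# a planner proposes tasks and a critic reviews them before execution.
# `call_llm` is a stand-in for whatever model endpoint a real system would use.

def call_llm(prompt: str) -> str:
    # Placeholder: a real system would query an actual language model here.
    return f"[model response to: {prompt[:40]}...]"


class Agent:
    def __init__(self, role: str):
        self.role = role

    def respond(self, message: str) -> str:
        prompt = f"You are the {self.role} in a mission-rehearsal team.\n{message}"
        return call_llm(prompt)


def coordinate(objective: str, rounds: int = 2) -> str:
    planner, critic = Agent("planner"), Agent("critic")
    plan = planner.respond(f"Propose a step-by-step plan for: {objective}")
    for _ in range(rounds):
        critique = critic.respond(f"Review this plan and list risks:\n{plan}")
        plan = planner.respond(f"Revise the plan to address:\n{critique}")
    return plan


print(coordinate("rehearse a convoy escort through contested terrain"))
```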
Meanwhile, ICT’s Learning Sciences Lab, led by Dr. Benjamin Nye, has developed PAL3—a mobile AI tutor that teaches coding and reasoning—and ARC, which helps Army instructors revise their curriculum with AI assistance. These tools don’t just teach about AI; they teach through it.
ICT’s Vision and Graphics Lab (Dr. Yajie Zhao) and MxR Lab (David Nelson) continue to push the state of the art in computer vision and embodied simulation, blending photorealistic rendering, procedural environments, and generative storytelling into fully immersive training experiences.
From deception detection to fatigue monitoring, resilience training to disinformation defense, our research today is a living extension of the principles we laid down three decades ago: cognition, collaboration, context—and above all, trust.
These questions—about trust, cognition, coordination, and context—are not new. I have spent my career exploring them, often when the answers were still uncertain and the technology still emerging. What has changed, remarkably, is how far we’ve come.
Today’s systems may be more powerful, but the fundamental challenge remains: to design AI that earns its place as a reliable partner in human endeavor—grounded in theory, tested in practice, and worthy of trust.
//