By Dr. Randall W. Hill, Jr., Executive Director, USC Institute for Creative Technologies, Vice Dean, Viterbi School of Engineering
Omar B. Milligan Professor of Computer Science (Games and Interactive Media)
The White House has just released America’s AI Action Plan—a sweeping national strategy designed to ensure that artificial intelligence serves the public good, advances U.S. global leadership, and strengthens our national security posture. As Executive Director of the USC Institute for Creative Technologies (ICT), I believe this is a moment tailor-made for our mission.
Since our founding in 1999, ICT has anticipated the convergence of AI, simulation, and warfare. As one of just fifteen University Affiliated Research Centers (UARCs) in the country, our mandate is to deliver mission-critical innovation directly to the Department of Defense. Our current research portfolio answers the Action Plan’s call across every major pillar—translating theory into tools that are already operational in the field.
Making AI Work for Those Who Serve
The Action Plan begins with a simple proposition: AI must serve the American people. At ICT, we start with the men and women in uniform.
Our PAL3 system (Personal Assistant for Lifelong Learning) helps soldiers stay proficient between training cycles, with personalized, adaptive instruction tailored to individual learning needs. Already deployed in the field, PAL3 has shown a 90% improvement in learner outcomes.
Meanwhile, ARC (AI-Assisted Revisions for Curricula) is helping the Army keep its training materials relevant. ARC automatically flags outdated doctrinal content and is now active at seven major Army training centers. These are not conceptual systems. They are real, in use, and making a measurable difference.
Supporting an AI-Literate Military Workforce
The Action Plan's second pillar holds that AI must benefit American workers. For our military, this means enhancing cognitive readiness and communication under pressure. ICT's Army Writing Enhancement (AWE) tool is designed to improve mission clarity, not by generating text, but by helping soldiers think and write more precisely. It reinforces command intent and fosters clarity in complex operational environments.
Our Learning Sciences Lab has also adopted generative AI to accelerate scenario-based content creation. What once took weeks can now be built in days, with branching dialogue trees, realistic roleplay, and mission-specific instruction tailored at scale.
Sustaining America’s Edge in Military Innovation
ICT is advancing the Action Plan’s third pillar—fostering innovation and competitiveness—through advanced wargaming and decision-support technologies.
Our Modeling, Simulation, and Mixed Reality Lab builds real-time After-Action Review (AAR) tools that adapt based on user input and mission context. These systems are helping commanders triage complexity and respond faster in dynamic situations.
In partnership with Global ECCO, we’ve developed immersive strategy platforms like CounterNet, Balance of Terror, and Dark Networks, where military leaders train to confront AI-enabled adversaries in cyber, kinetic, and cognitive domains.
Building AI with American Values at the Core
Trust, transparency, and human oversight are essential to AI's success in defense. At ICT, we work to ensure warfighters not only use AI but also trust it.
Our research in explainable AI allows commanders to see the rationale behind AI-generated recommendations—making battlefield decision-support more transparent and accountable. In high-stakes settings, this trust can be the difference between confidence and hesitation.
We also lead in affective computing, helping AI systems interpret stress, fatigue, and morale. These human signals are vital to performance, and their integration into training systems reinforces the U.S. commitment to ethical, human-centered design.
Defending Against Malicious AI Use
Adversaries are already leveraging AI for phishing, cyberattacks, and disinformation. ICT's AI-enabled cybersecurity tools detect phishing and deepfakes with up to 80% accuracy, while deploying decoys and misinformation shields to blunt attacks in real time.
To guard against covert AI interference in military decisions, we’ve developed forensic tools that detect the fingerprints of generative AI in mission planning and communications. This work, described in our recent essay, helps commanders ensure their decisions remain human-led—and their authority uncorrupted.
A Model for Responsible Government Use of AI
As a UARC, ICT helps the government test AI systems in immersive simulations before they reach the field. This includes AI-infused wargames with doctrinally accurate virtual agents, decision-support tools, and ethics-based evaluations that keep accountability front and center.
We also work across sectors—academia, defense, and industry—to build interoperable standards that align with DoD governance, national policy, and international law.
Looking Ahead
The White House AI Action Plan asks what our nation must do. ICT is already doing it.
Our teams are working at the intersection of human judgment and machine speed, anticipating the future fight and delivering tools that help U.S. forces act with agility, clarity, and trust.
AI will define tomorrow’s battlefield—but only if it enhances the decision-making power of those who serve. At ICT, we remain steadfast in our commitment to equipping them with the training, tools, and ethical compass they need to win.
Dr. Randall W. Hill, Jr.
Executive Director, USC Institute for Creative Technologies
Vice Dean, Viterbi School of Engineering
Omar B. Milligan Professor of Computer Science (Games and Interactive Media)