By Dr. Randall W. Hill, Jr, Vice Dean, Viterbi School of Engineering, Omar B. Milligan Professor in Computer Science (Games and Interactive Media), Executive Director, ICT
When OpenAI (with GPT-3 and GPT-4) and Midjourney launched their commercial generative AIs, there was a surge of excitement, followed by confusion as the AIs responded to prompts with hallucinations and strange outputs. But here at ICT we're developing domain-specific Large Language Models (LLMs), which proved useful to our Cadets from the United States Military Academy this summer.
Each summer we welcome Cadets from the United States Military Academy at West Point to ICT. They engage in several rotations through a selection of our 16 Labs, working on projects ranging from Light Stage 3D scans (notably used in projects with President Obama – and for the movie Avatar), to learning about computational linguistics in our Dialogue Group, to supporting veterans with mHealth apps like Battle Buddy for Suicide Prevention, to flying drones to capture geospatial data for terrain-based simulations.
This year, in keeping with our mission as a Department of Defense University Affiliated Research Center, sponsored by the US Army, we brought in a new project: "Develop an Experimental AI Advisor for Military Applications in Future Large Scale Combat Operations (LSCO)." The goal, for our Cadets – and the AI researchers supervising them – was to establish trust and to ensure the AI maintained tactical and ethical integrity for our future warfighters.
This project is one of our many research tracks, designed to home in on a specific use-case, then iterate and experiment until we develop something our Army sponsors can use – hopefully as a future Program of Record (PoR), as we did with our involvement in One World Terrain.
The Mission Ahead
In this LLM exercise, our West Point Cadets engaged in a fictional scenario in which they were in the field of fire, on a time-sensitive mission, in a near-peer conflict. Their role was to confer with an AI – pre-trained on Army doctrine and ethics, and supervised by a human-in-the-loop – and then advise the Battalion Commander of their Field Artillery Unit.
If the above paragraph was confusing, let me put it this way:
Imagine it’s 2027: You’re an officer, in the Army, in a conflict zone. You’ve been working with a highly-trained AI for years, and you’re going to use it to decide on a course of action. Caveat: you’re NOT asking the AI what it suggests you do. This is about teamwork, cumulative smarts, if you like. You’re going to confer with the AI, in the heat of the moment, using all the joint knowledge and experience the two of you – human + AI – possess, to make the best decision possible.
At this point, I want to let you know I was a commissioned officer in the US Army – after studying at West Point, just like our Cadets – before I transitioned into academia, worked at NASA, and joined ICT (almost thirty years ago now). When I served in the armed forces, AI was still in its infancy, and certainly not available to us on the ground. I am amazed at how far we've come in terms of AI research. I am also very proud to work at ICT, where we are pioneering so much of the work that will support the future warfighter.
GO / NO GO?
Back to our Cadets this summer, and the task we set them: advise "Go – OR – No Go" after due care and process in engaging with an AI on background, tactics, situational awareness and more. The Cadets were informed that the AI is there to assist the Soldier, providing a calm analytical view, with access to Army doctrine, ethical constraints (e.g., "Equal Value of Human Life"), and recorded narratives of prior battlefield experience. After several hours of instruction, our Cadets learned how to prompt this ICT-created LLM – and it was illuminating to watch their presentations in our main theater afterwards.
They told us how incredibly important it was to know that the AI was trained on everything from international humanitarian law to the Law of Armed Conflict (LOAC), Psychological Operations (PSYOPS), Special Operations Forces (SOF) doctrine, and Alternative Courses of Action (ACOA) – all protocols they train on, and which I remember well from my own time at West Point. But this isn't drawing on open-web, search-engine-level material – prone to bias, misinterpretation, or the twisting of facts for political gain. Our LLM is corralled away from the internet at large: kept within set parameters, and specific to the knowledge base and doctrine set out by the US Army.
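For readers curious how an LLM can be "corralled" to a curated knowledge base, one common pattern is retrieval-grounded prompting: the model is only shown vetted passages and is instructed to answer from those alone. The sketch below is a minimal, hypothetical illustration of that general pattern – the function names and the stand-in "doctrine" passages are my own, and this is not a description of ICT's actual system.

```python
import re

# Hypothetical sketch of retrieval-grounded prompting: restrict the model
# to a vetted corpus by selecting relevant passages and building a prompt
# that confines answers to that material. Passages below are illustrative
# stand-ins, not real doctrine text.

def relevance(query: str, passage: str) -> int:
    """Crude relevance score: number of shared lowercase words."""
    q = set(re.findall(r"[a-z]+", query.lower()))
    p = set(re.findall(r"[a-z]+", passage.lower()))
    return len(q & p)

def build_grounded_prompt(query: str, corpus: list[str], k: int = 2) -> str:
    """Pick the k most relevant vetted passages and wrap them in a prompt
    that instructs the model to answer only from that material."""
    ranked = sorted(corpus, key=lambda p: relevance(query, p), reverse=True)
    context = "\n".join(f"- {p}" for p in ranked[:k])
    return (
        "Answer using ONLY the doctrine passages below. "
        "If they do not cover the question, say so.\n"
        f"Passages:\n{context}\n"
        f"Question: {query}"
    )

# Illustrative stand-in corpus:
corpus = [
    "Rules of engagement require positive identification of targets.",
    "Fire support coordination measures govern indirect fire clearance.",
    "Convoy operations emphasize dispersion and communication discipline.",
]
prompt = build_grounded_prompt("What governs clearance of indirect fire?", corpus)
```

In a real deployment, the crude word-overlap scorer would be replaced by a proper retrieval system over the Army's curated knowledge base, and the assembled prompt would be sent to the model; the principle – the model only sees what the curators allow – stays the same.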
Simply put, this isn't your regular AI accessed via a mobile phone – what we're building is specific, highly sophisticated, and useful to the Soldier while they're out there on the battlescape. Today's warfighters train constantly on simulations before active duty – a marked contrast to earlier generations, who relied on book smarts and staying composed in stressful situations (as I remember well from my time in the Army). In fact, many of the immersive simulations servicemembers train on today use ICT research developed to improve leadership skills, enhance and retain knowledge, and sharpen decision-making in counter-terrorism scenarios.
Our West Point Cadets were not fazed by AI at ICT this summer. With an average age of 20, they've grown up with cloud-based, always-on, mobile-first communications (to put that in context, they were just three years old when the iPhone launched). During their summer here at ICT, we were not surprised to see they took AI in their stride. But we were also glad to note how deeply they engaged in debate with it, cross-checking its technology-driven responses against their own studies to date.
As Cadet David Alemazkour told us, while wrapping up his presentation: “AI is only a tool for the people-led organization that is the US Army. While these tools are extremely useful, we must balance collaborative sharing of improvements with possible threats to our national security.”
The LLM work described above is just one of many examples of our AI research at ICT. In my next essay, I'll take you on a brief (virtual) tour of our HQ in Playa Vista, CA, investigate what's going on in the Labs, and meet some of our researchers.
In the meantime, our Cadets have now left to return to West Point. We hope their experience with us this summer was useful. We certainly enjoyed having their energy around the place, and welcomed the diverse perspectives they brought to bear on our own work. By the time they graduate, we know AI will continue to evolve into a tool which can support their growth over the years ahead, providing context for just-in-time decision-making, enabling at-a-glance contextual knowledge bases, and making them better leaders for our nation.
Dr. Randall W. Hill, Jr. is the Executive Director of the USC Institute for Creative Technologies, Vice Dean of the Viterbi School of Engineering, and was recently named the Omar B. Milligan Professor in Computer Science (Games and Interactive Media). After graduating from the United States Military Academy at West Point, Randy served as a commissioned officer in the U.S. Army, with assignments in field artillery and military intelligence, before earning his Master's and PhD in Computer Science from USC. He worked at NASA JPL in the Deep Space Network Advanced Technology Program, before joining the USC Information Sciences Institute to pursue models of human behavior and decision-making for real-time simulation environments. This research brought him to ICT as a Senior Scientist in 2000, and he was promoted to Executive Director in 2006. Dr. Hill is a member of the American Association for Artificial Intelligence and has authored over 80 technical publications.