By Dr. Yajie Zhao, Director, Vision and Graphics Lab, ICT; Research Assistant Professor, Computer Science, USC Viterbi School of Engineering
Dr. Yajie Zhao received her Bachelor’s degree in Computer Science from Xi’an Jiaotong University in 2011 and her Ph.D. from the University of Kentucky in 2017. Her research interests lie in high-quality 3D content creation, including human digitization, performance capture, and scene reconstruction and understanding. Dr. Zhao currently leads the Vision and Graphics Lab (VGL) in research toward high-resolution 3D representation of the human face and body, neural rendering, and human- and scene-related applications in VR/AR. VGL’s research is currently supported by the Department of Defense, the US Army, DARPA, ONR, and IARPA, as well as industry partners including Sony, Nvidia, Meta, and Apple. In this essay, Dr. Zhao discusses the origins of VGL, its famous Light Stage technology, and how the lab is now developing AI-driven tools that enhance realism, efficiency, and adaptability in military training and synthetic training environments (STE).
At the USC Institute for Creative Technologies (ICT), the Vision and Graphics Lab (VGL) has long been at the forefront of innovation in high-quality 3D content creation.
From our pioneering work in Light Stage technology, which revolutionized digital human rendering in the film industry, to our current focus on AI-powered advancements for military simulations, we have consistently pushed the boundaries of what is possible in computer graphics.
Today, our lab is leading the charge in developing next-generation AI-driven tools that enhance realism, efficiency, and adaptability in military training and synthetic training environments (STE).
The Light Stage Legacy: A Foundation for Realism
When VGL first emerged as a research leader, our primary mission was to refine the way digital humans were captured and rendered. Our work in image-based lighting (IBL) and high dynamic range (HDR) imaging laid the foundation for photorealistic computer-generated characters, earning us two Scientific and Technical Awards from the Academy of Motion Picture Arts and Sciences. Our Light Stage technology, capable of capturing sub-millimeter facial details and complex reflectance properties, became a gold standard in Hollywood. Films such as Avatar, Blade Runner 2049, and Ready Player One utilized our innovations to achieve unparalleled realism in digital human representation.
While our work in entertainment solidified VGL’s reputation, we knew our expertise could serve an even more impactful domain: military simulation. At VGL, the same technologies that created photorealistic characters for the silver screen are now being adapted to improve training realism for soldiers, allowing them to operate in virtual environments that closely mirror real-world conditions.
Advancing AI-Powered Military Simulations
As a University Affiliated Research Center (UARC), ICT’s primary mandate is to engage in defense-focused research, and VGL contributes advanced computer graphics, powered by artificial intelligence, to that mission. The integration of AI into our research has allowed us to make groundbreaking advances in synthetic environments, terrain reconstruction, and digital human interaction. Our AI-based terrain completion tools, neural rendering techniques, and real-time performance capture systems are now shaping the future of military training.
One of our most exciting recent developments is our work in Learning Bidirectional Reflectance Distribution Functions (BRDFs) for neural shader applications. This research enhances the realism of battlefield simulations by modeling how different materials reflect light under various conditions, a crucial factor in immersive military training. By leveraging AI to generate high-fidelity reflectance maps, we have improved real-time rendering speeds, ensuring that simulations remain both visually stunning and computationally efficient.
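To make that concrete, a BRDF is simply a function that takes the surface normal and the directions toward the light and the viewer and returns how strongly the material reflects light. The sketch below evaluates a classic analytic model, a Lambertian diffuse lobe plus a GGX microfacet specular lobe; it is an illustrative baseline for the quantity being learned, not our learned model itself.

```python
import numpy as np

def ggx_brdf(n, v, l, albedo, roughness, f0=0.04):
    """Lambertian diffuse + GGX microfacet specular BRDF.

    n, v, l   : unit normal, view, and light direction vectors.
    albedo    : per-channel diffuse reflectance in [0, 1].
    roughness : perceptual roughness in (0, 1].
    f0        : Fresnel reflectance at normal incidence.
    """
    nl, nv = float(n @ l), float(n @ v)
    if nl <= 0.0 or nv <= 0.0:
        return np.zeros(3)                     # light or viewer below the surface
    h = (v + l) / np.linalg.norm(v + l)        # half vector
    nh, vh = float(n @ h), float(v @ h)
    a2 = roughness ** 4                        # alpha = roughness^2, a2 = alpha^2
    d = a2 / (np.pi * (nh * nh * (a2 - 1.0) + 1.0) ** 2)   # GGX distribution
    k = (roughness + 1.0) ** 2 / 8.0
    g = (nl / (nl * (1.0 - k) + k)) * (nv / (nv * (1.0 - k) + k))  # Smith-Schlick
    f = f0 + (1.0 - f0) * (1.0 - vh) ** 5      # Schlick Fresnel approximation
    specular = d * g * f / (4.0 * nl * nv)
    return np.asarray(albedo) / np.pi + specular   # diffuse lobe + specular lobe

# Example: a rough olive-drab surface lit and viewed from 45 degrees.
n = np.array([0.0, 0.0, 1.0])
v = l = np.array([0.0, np.sqrt(0.5), np.sqrt(0.5)])
print(ggx_brdf(n, v, l, albedo=[0.2, 0.25, 0.1], roughness=0.8))
```

A learned BRDF plays exactly the same role, but swaps these hand-derived formulas for a network fit to measured reflectance data.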
Another key innovation is our Neural Radiance Field (NeRF) technology, which enables editable 3D environments with accurate depth and lighting. This allows trainees to interact dynamically with virtual scenarios, from urban warfare settings to natural terrains. The ability to generate, manipulate, and explore immersive landscapes with AI-driven precision is transforming how the Department of Defense (DoD) approaches synthetic training environments.
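At the heart of NeRF is a volume rendering step: a network predicts a density and a color at sample points along each camera ray, and those samples are composited into one pixel. The minimal sketch below shows that compositing quadrature, which is common to NeRF-family methods and independent of any particular network or of our editing extensions.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Composite NeRF-style samples along a single camera ray.

    sigmas : (N,)   volume density predicted at each of N samples.
    colors : (N, 3) RGB color predicted at each sample.
    deltas : (N,)   spacing between adjacent samples along the ray.
    Returns the rendered pixel color and the per-sample weights.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)       # opacity of each segment
    # Transmittance T_i: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]
    weights = trans * alphas                      # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0), weights

# Toy ray: a dense red "surface" midway along 64 samples.
sigmas = np.zeros(64); sigmas[30:34] = 50.0
colors = np.tile([1.0, 0.1, 0.1], (64, 1))
deltas = np.full(64, 0.05)
rgb, _ = composite_ray(sigmas, colors, deltas)
print(rgb)   # nearly pure red: the surface occludes everything behind it
```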
AI-Powered Digital Human Representation for Training
Beyond landscapes, human representation remains a central focus of our lab’s research. Our AI-powered Text2Avatar system allows for the rapid creation of realistic digital humans based on textual descriptions. This tool has vast applications for military training, enabling the generation of diverse avatars for interactive scenarios, mission rehearsals, and cultural training exercises. By training on a diverse dataset of over 44,000 high-quality facial models, our system ensures representation across different demographic groups, improving the realism of training programs.
Additionally, our deepfake research initiative, Cat-and-Mouse: Adversarial Teaming for Improving Generation and Detection Capabilities of Deepfakes, explores both the creation and detection of AI-generated synthetic media. By developing a dynamic framework for deepfake generation and testing it against state-of-the-art detection models, we are fortifying national security against misinformation threats while also refining tools for controlled deception training in military exercises.
Expanding the Frontiers of AI-Powered Military Graphics
Learning Bidirectional Reflectance Distribution Function (BRDF) of General Objects for Neural Shader and Inverse Rendering
Realism in military simulations hinges on accurately reproducing how different materials interact with light. Our AI-driven BRDF research provides a powerful framework for high-fidelity material editing and analysis. This innovation allows us to generate reflectance maps for any object, supporting training environments where accurate material properties impact mission success. For example, stealth technology training relies on precise modeling of how materials absorb or reflect light in various conditions. By integrating BRDF-based neural shaders, our simulations ensure soldiers receive training experiences that mimic real-world physics with unprecedented accuracy.
Our AI-based BRDF analysis has also improved efficiency. Traditional methods require extensive computational power and manual adjustments to achieve photorealistic rendering. By using machine learning to predict material reflectance properties, we have significantly reduced processing times, making real-time training in high-fidelity environments more accessible and adaptable.
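The overall shape of such a neural shader is easy to sketch. Assuming, purely for illustration, that reflectance is regressed from the view and light directions, the surface normal, and a learned per-material latent code, a compact MLP like the one below stands in for the analytic formulas; the actual research involves far more careful input encodings, architectures, and training data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralBRDF(nn.Module):
    """Toy neural shader: (view, light, normal, material code) -> RGB."""

    def __init__(self, latent_dim=16, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(9 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Softplus(),   # reflectance is non-negative
        )

    def forward(self, view, light, normal, material):
        return self.mlp(torch.cat([view, light, normal, material], dim=-1))

# Forward pass on random unit directions, standing in for captured data.
model = NeuralBRDF()
view = F.normalize(torch.randn(4, 3), dim=-1)
light = F.normalize(torch.randn(4, 3), dim=-1)
normal = F.normalize(torch.randn(4, 3), dim=-1)
material = torch.randn(4, 16)                      # learned per-material code
rgb = model(view, light, normal, material)         # (4, 3) predicted reflectance
```

Once trained on measured reflectance samples, a single forward pass replaces the costly per-material fitting that traditional pipelines require, which is where the efficiency gain comes from.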
Cat-and-Mouse: Adversarial Teaming for Deepfake Generation and Detection
Deepfake technology is a double-edged sword: while it presents security risks, it also offers innovative opportunities for controlled deception training in military exercises. Our adversarial research in deepfake generation and detection is ensuring that AI-driven synthetic media can be used safely and ethically in defense applications.
Our approach involves developing a high-quality deepfake generator that integrates computer graphics with AI models. Unlike purely AI-driven deepfake techniques, our hybrid model combines physically-based rendering with generative adversarial networks (GANs) to create more controllable, photorealistic outputs. We are focusing on three primary use cases:
- Motion-driven deepfakes – AI-generated avatars that mimic real-world facial expressions and movements for role-playing exercises.
- Text-driven videos – AI-powered avatars that respond to scripted dialogue, supporting training in negotiation, interrogation, and psychological operations.
- Multimodal synthesis – Combining AI-synthesized speech with realistic video outputs to enhance interactive military training scenarios.
By developing deepfake detectors alongside generators, we are staying ahead of the evolving threat landscape. Our research ensures that military personnel can distinguish between real and AI-generated content, strengthening national security efforts against misinformation campaigns.
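Stripped to its essentials, the cat-and-mouse dynamic is an adversarial training loop: the detector learns to separate real frames from generated ones while the generator simultaneously learns to fool the current detector. The toy GAN-style loop below, which uses flattened random tensors in place of real video frames, illustrates only this mechanism; our actual pipeline couples a graphics-based renderer with the learned components described above.

```python
import torch
import torch.nn as nn

D = 64 * 64 * 3   # toy "frame": a flattened 64x64 RGB image
gen = nn.Sequential(nn.Linear(100, 512), nn.ReLU(), nn.Linear(512, D), nn.Tanh())
det = nn.Sequential(nn.Linear(D, 512), nn.LeakyReLU(0.2), nn.Linear(512, 1))
g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(det.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.rand(32, D) * 2 - 1               # placeholder for real frames
    fake = gen(torch.randn(32, 100))               # generator's latest forgeries

    # Detector step: push real frames toward 1, generated frames toward 0.
    d_loss = (bce(det(real), torch.ones(32, 1)) +
              bce(det(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: update the generator to fool the *current* detector.
    g_loss = bce(det(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```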
Text2Avatar: Rapid Digital Human Creation for Training
Our Text2Avatar project is revolutionizing avatar creation for military applications. By translating user descriptions into high-fidelity 3D avatars, we are streamlining the process of creating personalized training simulations. This technology enables:
- Customizable role-playing scenarios – Soldiers can train with digital personas that represent various cultures, languages, and tactical roles.
- Diverse virtual training environments – Avatars can be modified based on mission-specific needs, ensuring adaptability across different operations.
- Real-time facial animation and behavioral modeling – AI-generated avatars can react to trainee input, enhancing interactivity and realism.
Through AI-driven automation, we have dramatically reduced the cost and time required to generate digital human assets. Our system integrates seamlessly with modern physically-based rendering pipelines, ensuring compatibility with military-grade simulation engines.
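One plausible way to picture such a pipeline, offered here as our simplification rather than a description of the production system, is a mapper from a text embedding to the coefficients of a linear 3D morphable face model (3DMM), whose output mesh then feeds a physically-based renderer. All names and dimensions below are hypothetical.

```python
import torch
import torch.nn as nn

class TextToMorphable(nn.Module):
    """Hypothetical mapper: text embedding -> 3DMM identity/expression coefficients.

    Assumes a pre-trained text encoder (e.g., CLIP) supplies a 512-d
    sentence embedding; the morphable model itself is a fixed linear basis.
    """

    def __init__(self, embed_dim=512, n_id=80, n_expr=64):
        super().__init__()
        self.n_id = n_id
        self.head = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(),
            nn.Linear(256, n_id + n_expr),
        )

    def forward(self, text_embedding):
        coeffs = self.head(text_embedding)
        return coeffs[..., :self.n_id], coeffs[..., self.n_id:]

# Linear 3DMM: vertices = mean + B_id @ c_id + B_expr @ c_expr
mean = torch.zeros(15000)                          # 5,000 vertices x 3 coords
B_id, B_expr = torch.randn(15000, 80), torch.randn(15000, 64)
c_id, c_expr = TextToMorphable()(torch.randn(512)) # stand-in text embedding
verts = (mean + B_id @ c_id + B_expr @ c_expr).view(-1, 3)   # (5000, 3) mesh
```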
Looking Ahead: The Future of AI-Driven Training
VGL’s focus on future AI-powered military simulations represents a natural evolution of our expertise. As we continue refining our research, our goal remains the same: to create digital humans and environments that are indistinguishable from reality. The increasing use of AI in our workflow has allowed us to generate high-quality assets more efficiently than ever before, reducing production costs and increasing accessibility.
We are particularly excited about the intersection of geospatial sciences, computer vision, and generative AI. By fusing these disciplines, we are developing tools that will allow military personnel to train in virtual settings that mirror real-world locations with remarkable accuracy. The implications of these advancements extend beyond defense; they will shape industries ranging from emergency response training to forensic science.
As director of VGL, I am immensely proud of how far our lab has come. From revolutionizing digital human representation in Hollywood to pioneering AI-driven military simulations, our commitment to pushing the boundaries of computer graphics remains steadfast. With continued collaboration across academia, industry, and government, we are confident that our research will define the next generation of immersive training and simulation technologies.