Summer Research program positions
426 - 3D Terrain (3D Vision/Machine Perception) Project
3D Terrain (3D Vision/Machine Perception) Project
Project Description
The 3D Terrain project focuses on researching and prototyping capabilities that support a fully geo-referenced 3D planetary model for use in the Army’s next-generation training and simulation environments. USC-ICT’s research exploits new techniques and advances in the collection, processing, storage, and distribution of geospatial data to various runtime applications.
The project seeks to:
• Construct a single 3D geospatial database for use in next-generation simulations and virtual environments
• Procedurally recreate 3D terrain using drones and other capture equipment
• Develop methods for rapid 3D reconstruction and camera localization
• Extract semantic features from raw 3D terrain and point clouds to build simulation-ready environments
• Reduce the cost and time of creating geo-specific datasets for modeling and simulation (M&S)
Job Description
The ML Researcher (3D Vision/Machine Perception) will work with the terrain research team to develop efficient solutions for 3D reconstruction and segmentation from 2D monocular video, imagery, or LiDAR sensors.
Preferred Skills
- Programming experience with machine learning, computer vision, or computer graphics (e.g., PyTorch, OpenGL, CUDA)
- Interest/experience in 3D computer vision applications such as SLAM, NeRF, structure-from-motion, and/or depth estimation
- Experience with Unity/Unreal game engine and related programming skills (C++/C#) is a plus.
- Interest/experience with Geographic Information System (GIS) applications and datasets
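The position’s core topics can be made concrete with a small example. The sketch below triangulates a 3D point from two calibrated camera views using the linear (DLT) method, a basic building block of structure-from-motion and camera localization; the intrinsics, baseline, and test point are arbitrary illustrative values, not project data.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: 2D image observations (pixel coordinates) in each view.
    Returns the estimated 3D point in world coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point; the solution is the null vector of A.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic setup: identical intrinsics, second camera offset along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # 1-unit baseline

X_true = np.array([0.5, -0.2, 4.0])  # a point 4 units in front of the cameras
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_est, X_true, atol=1e-6))  # noiseless observations recover the point
```

In real pipelines the observations are noisy and the camera poses are themselves estimated, so this linear solution is typically just the initialization for a nonlinear refinement (bundle adjustment).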
427 - Generating and Testing Scenarios for Multi-agent Reinforcement Learning
Generating and Testing Scenarios for Multi-agent Reinforcement Learning
Project Description
Multi-agent reinforcement learning (MARL) is increasingly used in military training to build dynamic and adaptive synthetic characters for interactive simulations. The effectiveness of MARL algorithms depends heavily on the quality of the scenarios used in machine learning experiments. Our research addresses this challenge by building a scenario generation capability that leverages generative transformer models to produce machine-learning-friendly simulation scenarios from effective prompts and a data bank of training content (text and images) associated with a given capability.
Job Description
Student research assistants will contribute to this research by running scenario generation experiments and assessing the value of generated scenarios in MARL.
Preferred Skills
- Multi-agent Reinforcement Learning
- Large Language Models
- Agentic Frameworks for LLMs
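To make the MARL setting concrete, here is a minimal illustrative sketch (not project code): two independent Q-learners playing a repeated two-action coordination game, the kind of toy scenario whose difficulty and learning value a scenario-generation pipeline would need to assess.

```python
import random

# Payoff: both agents get 1 if they choose the same action, else 0.
def reward(a0, a1):
    return 1.0 if a0 == a1 else 0.0

N_ACTIONS = 2
Q = [[0.0] * N_ACTIONS, [0.0] * N_ACTIONS]  # one Q-table per agent (stateless game)
alpha, eps = 0.1, 0.1
random.seed(0)

for step in range(5000):
    acts = []
    for i in range(2):
        if random.random() < eps:                      # epsilon-greedy exploration
            acts.append(random.randrange(N_ACTIONS))
        else:
            q = Q[i]
            acts.append(max(range(N_ACTIONS), key=lambda a: q[a]))
    r = reward(*acts)
    for i in range(2):                                 # independent Q-updates
        Q[i][acts[i]] += alpha * (r - Q[i][acts[i]])

# After training, the greedy joint policy should be coordinated.
greedy = [max(range(N_ACTIONS), key=lambda a: Q[i][a]) for i in range(2)]
print(greedy[0] == greedy[1])
```

A scenario-evaluation experiment of the kind described above would run many such training loops across generated scenarios and compare learning curves or final joint-policy quality.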
428 - Acting and Interacting in Generated Worlds
Acting and Interacting in Generated Worlds
Project Description
Emerging technologies for AI generation of images, animations, 3D models, and audio may one day enable small teams of talented filmmakers to make movies with very high production value. This research effort explores new workflows for writers, directors, actors, editors, and composers that closely integrate new AI technology into the creative process.
Job Description
We are seeking a Summer Intern for 2025 who will focus on workflows for compositing the performances of real actors into AI-generated virtual environments, with special attention to interactions with virtual objects and people that do not exist on the actor’s stage. The Summer Intern will generate test environments using generative AI models, capture actor performances using novel lighting methods, and engineer software pipelines for compositing performances into rendered scenes.
Preferred Skills
- Proficiency using Blender for 3D modeling, scripting, camera tracking, and video editing.
- Experience using generative AI models for images, audio, 3D models, and/or animation, e.g., using trained Huggingface models with PyTorch in Jupyter notebooks.
- Deep technical understanding of digital photography and 3D rendering.
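The compositing workflow this internship centers on can be illustrated at its simplest: the Porter-Duff "over" operator, which blends a matted actor plate onto a rendered background. The arrays below are synthetic stand-ins, not actual capture or render data.

```python
import numpy as np

def composite_over(fg_rgb, fg_alpha, bg_rgb):
    """Porter-Duff 'over': blend a keyed/matted foreground onto a background.

    fg_rgb, bg_rgb: float arrays in [0, 1], shape (H, W, 3).
    fg_alpha: matte in [0, 1], shape (H, W), e.g. from keying or segmentation.
    """
    a = fg_alpha[..., None]
    return a * fg_rgb + (1.0 - a) * bg_rgb

H, W = 4, 4
bg = np.full((H, W, 3), 0.2)        # stand-in for an AI-generated render
fg = np.full((H, W, 3), 0.9)        # stand-in for the captured actor plate
alpha = np.zeros((H, W))
alpha[1:3, 1:3] = 1.0               # actor matte covers the center pixels

out = composite_over(fg, alpha, bg)
print(out[0, 0], out[1, 1])  # background pixel vs. actor pixel
```

Production compositing adds many layers on top of this operator (color management, matching illumination between plate and render, motion blur, depth ordering), but every pipeline bottoms out in a blend of this form.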
429 - AI-Enhanced Interoperability for Modeling and Simulation
Project Name
AI-Enhanced Interoperability for Modeling and Simulation
Project Description
This internship is part of the RIDE project. The Rapid Integration & Development Environment (RIDE) is a foundational Research and Development (R&D) platform that unites many Department of Defense (DoD) and Army simulation efforts to deliver an accelerated development foundation and prototyping sandbox that provides direct benefit to the US Army’s Synthetic Training Environment (STE) as well as the larger DoD and Army simulation communities. RIDE integrates a range of capabilities, including One World Terrain (OWT), Non-Player Character (NPC) Artificial Intelligence (AI) behaviors, Experience Application Programming Interface (xAPI) logging, multiplayer networking, scenario creation, machine learning approaches, and multi-platform support. It leverages robust game engine technology while remaining agnostic to any specific game or simulation engine. RIDE is freely available through Government Purpose Rights (GPR) with the aim of lowering the barrier to entry for R&D efforts within the simulation community, in particular for training, analysis, exploration, and prototyping. See https://ride.ict.usc.edu for more details.
Job Description
RIDE combines best-in-breed solutions from both academia and industry in support of military training. The challenges associated with this include 1) integrating individual technologies into a common, principled framework, 2) developing demonstrations that showcase integrated capabilities, and 3) creating new content that leverages these capabilities. We are addressing these challenges through the creation of an AI Reference Model for enhancing interoperability in the Modeling and Simulation (M&S) community.
Considering these challenges, the tasks outlined for the summer internship are as follows:
• Become familiar with the RIDE platform;
• Support the development of a semantic framework for the AI Reference Model, leveraging large language models (LLMs) already integrated with RIDE;
• Support the design and development of interfaces for the AI Reference Model that allow easy integration with existing M&S tools and frameworks through the RIDE API;
• Design and develop new demonstrations leveraging existing RIDE capabilities;
• Identify and integrate additional 3rd party capabilities.
Working within this project requires a solid understanding of general software engineering principles and distributed architectures. The work touches on a variety of Computer Science areas, including Artificial Intelligence and Human-Computer Interaction. Given the scope of RIDE, the ability to quickly learn how to use existing components and develop new ones is essential.
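As a small concrete illustration of one capability RIDE integrates, the snippet below builds a minimal xAPI statement (actor/verb/object) of the kind used for experience logging. The actor and activity identifiers are invented for illustration; this is not RIDE’s actual API, only the statement shape defined by the xAPI specification.

```python
import json

# Minimal xAPI statement: who (actor) did what (verb) to what (object).
# The actor and activity ids are hypothetical; the verb id is a standard ADL verb.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Trainee One",
        "mbox": "mailto:trainee1@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/activities/ride-scenario-01",
        "definition": {"name": {"en-US": "RIDE demo scenario"}},
    },
}

# Statements are serialized as JSON and POSTed to a Learning Record Store (LRS).
payload = json.dumps(statement)
print(json.loads(payload)["verb"]["display"]["en-US"])
```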
Preferred Skills
- Fluent with the Unity game engine
- Excellent C# programming skills
- Fluent in one or more scripting languages (e.g., Python)
- Background in artificial intelligence and machine learning a plus
- Background in Unreal Engine and C++ a plus
430 - Real-Time Simulation of Human-Machine Interaction (HMI) Robotics
Project Name
Real-Time Simulation of Human-Machine Interaction (HMI) Robotics
Project Description
This internship is part of the RIDE project. The Rapid Integration & Development Environment (RIDE) is a foundational Research and Development (R&D) platform that unites many Department of Defense (DoD) and Army simulation efforts to deliver an accelerated development foundation and prototyping sandbox that provides direct benefit to the US Army’s Synthetic Training Environment (STE) as well as the larger DoD and Army simulation communities. RIDE integrates a range of capabilities, including One World Terrain (OWT), Non-Player Character (NPC) Artificial Intelligence (AI) behaviors, Experience Application Programming Interface (xAPI) logging, multiplayer networking, scenario creation, machine learning approaches, and multi-platform support. It leverages robust game engine technology while remaining agnostic to any specific game or simulation engine. RIDE is freely available through Government Purpose Rights (GPR) with the aim of lowering the barrier to entry for R&D efforts within the simulation community, in particular for training, analysis, exploration, and prototyping. See https://ride.ict.usc.edu for more details.
Job Description
Human-Machine Integration (HMI) strives to optimize how the U.S. Army will interface and interact with next-generation systems and platforms that will be fielded into Army maneuver and support formations. Our current efforts focus on expanding the RIDE simulation platform’s capabilities to realize an integrated virtual environment for testing, simulating, and experimenting with diverse autonomous capabilities in a single platform.
The tasks outlined for the summer internship are as follows:
- Become familiar with the RIDE platform;
- Support the development of simulated Light Infantry Battalion robotic units;
- Support the creation of Defense and Movement phase actions for these robotic units;
- Support the investigation of performance bottlenecks related to terrain size and number of units;
- Design and develop new demonstrations leveraging existing RIDE capabilities;
- Identify and integrate additional 3rd party capabilities.
Working within this project requires a solid understanding of general software engineering principles and distributed architectures. The work touches on a variety of Computer Science areas, including Artificial Intelligence and Human-Computer Interaction. Given the scope of RIDE, the ability to quickly learn how to use existing components and develop new ones is essential.
Preferred Skills
- Fluent with the Unity game engine
- Excellent C# programming skills
- Fluent in one or more scripting languages (e.g., Python)
- Background in artificial intelligence and machine learning a plus
- Background in Unreal Engine and C++ a plus
431 - PAL-ATA: Personalized Assistant for Learning
Project Name
PAL-ATA: Personalized Assistant for Learning
Project Description
In this research, we seek to prototype and study the effects of leveraging artificial intelligence to deliver personalized, interactive learning recommendations. The primary focus of this work is to alleviate the “content bottleneck” for adaptive systems, such that new content may be integrated into the system with minimal training.
The testbed for this work will be the Personal Assistant for Life-Long Learning (PAL3), which uses artificial intelligence to accelerate learning through just-in-time training and interventions that are tailored to an individual. The types of content that will be analyzed using machine learning and included in the system cover a wide range of domains. These include:
– Artificial Intelligence: Training on topics such as AI algorithms and emerging technologies that leverage AI and data science. Content on how to build, evaluate, and compare different AI systems for realistic use cases.
– Self-Regulated Learning: Training on transferable “learning to learn” skills for effective life-long learning. The target audience is learners studying for high-stakes exams covering topics such as vocabulary, reading comprehension, and basic mathematics. As students study core topics, self-regulated learning skills are introduced (e.g., setting and checking study pace, taking notes).
Job Description
The goal of the internship will be to expand the capabilities of the system to enhance learning and engagement. The specific tasks will be determined based on the status of the project at the time of the internship as well as your interests. Possible topics include work with: (1) Generative AI for Learning Content Creation or Analysis (e.g., multimedia: images, video, text, dynamic content), (2) Natural Language Processing or Dialog Systems for Coaching, (3) Learning Recommendation Systems, and (4) Content-specific technologies (e.g., artificial intelligence). Opportunities will be available to contribute to peer-reviewed publications.
Preferred Skills
- Python, JavaScript/Node.js, R
- Machine Learning Expertise, Basic AI Programming, or Statistics
- Strong interest in machine learning approaches, intelligent agents, human and virtual behavior, and social cognition
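As one illustrative sketch of the learner-modeling machinery behind adaptive recommendation: a Bayesian Knowledge Tracing (BKT) update followed by a lowest-mastery recommendation. PAL3’s actual models are not described here, so this is a generic standard technique rather than the project’s implementation, and the skill names and parameters are made up.

```python
def bkt_update(p_known, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """One Bayesian Knowledge Tracing step: update the mastery estimate
    after observing a correct/incorrect response, then apply the chance
    of learning on this practice opportunity."""
    if correct:
        obs = p_known * (1 - p_slip) / (
            p_known * (1 - p_slip) + (1 - p_known) * p_guess)
    else:
        obs = p_known * p_slip / (
            p_known * p_slip + (1 - p_known) * (1 - p_guess))
    return obs + (1 - obs) * p_learn

# Hypothetical mastery estimates; recommend the skill with the lowest one.
skills = {"vocabulary": 0.3, "reading": 0.7, "math": 0.5}
skills["vocabulary"] = bkt_update(skills["vocabulary"], correct=True)
next_skill = min(skills, key=skills.get)
print(next_skill)
```

A correct answer on vocabulary raises its mastery estimate above reading’s, so the simple lowest-mastery policy shifts the recommendation to math. Real recommendation engines layer scheduling, motivation, and content constraints on top of a mastery model like this.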
432 - AIRCOEE – AI Research Center of Excellence in Education
AIRCOEE – AI Research Center of Excellence in Education
Project Description
The AIRCOEE initiative uses AI to teach AI and to help create content and training materials. It also has a curriculum revision tool and writing guide component.
Job Description
Over the course of the summer, the intern will assist in the development of software infrastructure, participate in AI-integrated content development, and informally test AI technologies for delivering or updating learning content. Intelligent and interactive content may include dialog systems, question-asking and inquiry about passive content, as well as game-like learning scenarios (e.g., interactive simulations of AI algorithms). Platforms for this work include Python AI/ML libraries, including generative AI tools such as OpenAI and LangChain, as well as React Native, React JS, Node.js, and GraphQL. Interns involved in this project will be expected to document their designs, contributions, and analyses to contribute to related publications.
Preferred Skills
- At least one AI course
- Javascript/Typescript
- Python, Python Notebook
433 - Dynamic Occlusion for Immersive AR/VR
Dynamic Occlusion for Immersive AR/VR
Project Description
Immersive technologies such as augmented reality (AR) and virtual reality (VR) have established importance across various industries due to their ability to create highly engaging simulations for training, design, entertainment, and remote collaboration. A fundamental element in creating these realistic and immersive experiences is ensuring that objects in a virtual scene are accurately blocked or revealed depending on the viewer’s perspective, enhancing depth perception and believable immersion. This project aims to develop a system capable of rendering dynamic scenes over a long range of depth, with a focus on ensuring accuracy of occlusion, scale, and illumination.
For the successful development of this project, experience with handling 3D geometry and large-scale point clouds, as well as skill with libraries and off-the-shelf tools for 3D graphics programming and rendering, is strongly recommended. Familiarity with depth estimation, long-range depth sensing, and recent volumetric reconstruction and rendering algorithms would be a valuable addition. A background in machine learning, 3D computer vision, and deep learning techniques could provide further advantages for the research and development aspects of the project.
The ultimate goal is to push the boundaries of immersive technology by integrating advanced features such as real-time occlusion handling, object editing, and improved depth sensing, ensuring a highly immersive and realistic experience for users.
Job Description
Key Responsibilities:
1. Contribute to research on computer vision techniques focusing on enhancing scene realism, object recognition, and tracking.
2. Develop and refine algorithms for AI-driven rendering and real-time graphics in immersive environments.
3. Collaborate on depth estimation research, using AI and computer vision to improve the accuracy of rendering and occlusion handling in virtual scenes.
4. Explore and apply advanced machine learning models for AI-enhanced immersive experiences, including real-time occlusion, object interaction, and scene synthesis.
5. Investigate cutting-edge LiDAR and SLAM technologies to improve spatial awareness and tracking in AR/VR environments.
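The occlusion handling at the heart of this project can be illustrated with the simplest possible case: a per-pixel depth comparison that shows the virtual object only where it is nearer than the sensed real-scene depth. The depth values below are synthetic placeholders, not sensor data.

```python
import numpy as np

def occlusion_composite(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel depth test for AR dynamic occlusion: the virtual layer
    wins only where its depth is smaller (nearer) than the real scene's."""
    virt_in_front = virt_depth < real_depth
    mask = virt_in_front[..., None]           # broadcast over RGB channels
    return np.where(mask, virt_rgb, real_rgb)

H, W = 2, 2
real = np.zeros((H, W, 3))                    # stand-in for the camera frame
real_d = np.array([[1.0, 3.0], [1.0, 3.0]])   # sensed real-scene depth
virt = np.ones((H, W, 3))                     # stand-in for the rendered object
virt_d = np.full((H, W), 2.0)                 # virtual object sits at depth 2

out = occlusion_composite(real, real_d, virt, virt_d)
print(out[:, :, 0])  # virtual object visible only where real depth exceeds 2
```

The hard parts of the project live upstream of this test: producing dense, accurate, long-range real-scene depth in real time, and keeping scale and illumination consistent so the depth comparison yields believable results.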
Preferred Skills
- C++, engineering math/physics and programming, OpenGL / Direct3D, GLSL / HLSL, Unity3D
- Python, GPU programming, Maya, version control (svn/git)
- Knowledge in modern rendering pipelines, image processing, rigging, blendshape modeling
434 - 3D Scene Understanding and Processing
3D Scene Understanding and Processing
Project Description
The Vision and Graphics Lab at ICT pursues research and engineering work in the understanding and processing of 3D scenes, specifically reconstruction, recognition, and segmentation using learning-based techniques. This work has important value in practical applications such as autonomous driving, AR, and VR. However, generating realistic 3D scene data for training and testing is challenging due to the limited photorealism of synthetic data and the intensive manual work required to process real data. In addition, the large scale of complex scenes further increases the difficulty of utilizing such data. Thus, we need to develop advanced techniques for better 3D data generation. Our first goal is an automatic method for data cleanup, organization, annotation, and completion of both real and synthetic data, either in image space or 3D space, to generate well-structured data for multiple learning-based 3D tasks. We will then use these data to train neural networks for the joint reconstruction and segmentation of large-scale 3D scenes.
Job Description
The intern’s task will focus on 3D data processing by developing intelligent algorithms, and editing/visualization tools, to fix problems in the current dataset (inaccurate segmentation, incomplete surfaces, distorted textures, and so on) and generate clean and accurate 3D models for training. Meanwhile, the intern will also join the research in 3D scene reconstruction and segmentation using these data.
Preferred Skills
- Engineering math/physics and programming, C++, Python, GPU programming, Unity3D, OpenGL
- Basic skills in deep learning and experience in training networks
435 - Asynchronous Annotation Embedded Environments (A2E2)
Asynchronous Annotation Embedded Environments (A2E2)
Project Description
The Asynchronous Annotation Embedded Environments (A2E2) project aims to investigate the power of human observations and AI-driven analysis to improve sensemaking, enhance situational awareness, and foster better and faster decision making in varied operational contexts. Information derived from collective intelligence, encompassing existing intel, crowdsourced forecasting, and user-generated observations, will be analyzed by an intelligent system, and the results can be embedded into the environment as site-specific annotations. The goal of this project is to bridge the research gap between data analysis and AR-based visualizations and interactions.
Job Description
The MxR team, with the assistance of the Summer Intern, will investigate various visualization techniques that can effectively convey changes and anomalies detected by AI. These could include heatmaps, visual overlays, color-coded annotations, icons, or contour lines. The team will focus on spatially situated data visualization techniques to support end users in Army-relevant scenarios and contexts.
Preferred Skills
- Unity
- VR/AR Development
- Unreal
436 - A2HMDI
A2HMDI
Project Description
The MxR team seeks to integrate findings from this project into the development and testing of MR interactions focused on communication and collaboration amongst and between echelons (i.e., between distributed command post personnel, as well as from command posts to tactical ground units). This two-way communication, flowing up and down between varying units and commanders, across devices (AR/VR), and under varying bandwidth constraints, requires significant R&D to ensure visualizations provide common situational awareness and understanding.
The goal is to develop interaction and visualization techniques that can be integrated into an existing C2 system (XR COP, and/or the Integrated Battle Command System (IBCS) or others), to be included in test events in live-mixed reality immersive training environments at Home Station, Maneuver Combat Training Centers, NTC, or equivalent.
Job Description
The intern will work closely with lead developers on the project to help the team identify and thoroughly understand the various challenges, limitations, and potential advantages of using MR technologies in MDO settings, and will also provide insight and real-world operational considerations, enriching the research and development process.
Preferred Skills
- Unity
- VR/AR Development
- Unreal
437 - VHTL Intervention
Project Name
Virtual Human Therapeutics Research Intern
Project Description
The Virtual Human Therapeutics Lab (VHTL) at USC’s Institute for Creative Technologies focuses on the development and deployment of real-time virtual human technologies aimed at improving therapeutic outcomes. Our multidisciplinary research integrates psychology, medicine, and technology to create innovative virtual human applications for clinical interventions, such as mental health support and physical therapy. This project provides an opportunity to contribute to groundbreaking research that blends cutting-edge technology with evidence-based therapeutic approaches.
Job Description
We are seeking a motivated intern with a background in social science or clinical psychology research to join our team. You will assist in both quantitative and qualitative research efforts, including data collection, analysis, and manuscript preparation. Key responsibilities include supporting the submission of Institutional Review Board (IRB) proposals, collaborating on academic papers, and helping create and edit presentations for conferences and internal use. This internship is ideal for someone interested in combining psychological research with technological innovation.
Preferred Skills
- Background in social science or psychology research.
- Experience in quantitative and qualitative analysis (e.g., statistical software, coding thematic data).
- Familiarity with research protocols, including data collection and IRB submission. Strong writing skills for academic papers and reports.
- Competence in editing and preparing visual presentations (e.g., PowerPoint, academic posters).
- Highly organized and detail-oriented with the ability to work independently and in a team setting.
438 - Technology Evaluation Lab
Project Name
Trust Calibration for Human-Agent Teaming
Project Description
As AI proliferates, trust is a critical factor. Traditional training resources have failed to focus on, or build, trust into the human-machine relationship. This is a significant problem: when users trust automation less than its capabilities warrant, they underutilize the system and fail to reap the benefits of partnering with automation. The lab uses Virtual Learning Environments (VLEs), among other research paradigms, to study how we can facilitate trust in automation, especially in automated teammates. In this project we are most interested in developing a theory by which we can understand the extent to which users are calibrated, or accurate, in their trust judgements.
Job Description
The intern will serve as a research assistant, helping us conduct research and analyze the data. To develop a theory by which we can understand the extent to which users are calibrated, or accurate, in their trust judgements, we will conduct quantitative and qualitative research. Large language models (LLMs), such as GPT, may also be utilized in this research to facilitate development of a testbed for quantitative studies that test the developing theory.
Preferred Skills
- Experience with research
- Interest in user studies
- At least basic coding skills preferred
Compensation: The base salary range for all intern positions is $1,216/week (undergrad) to $1,340/week (grad), paid at a monthly or hourly rate depending on employment type. See the FAQs for a description of employment types.