Summer Research program positions

439 - PAL-ATA: Personalized Assistant for Learning (PAL) Any Training Anytime

Project Name
PAL-ATA: Personalized Assistant for Learning (PAL) Any Training Anytime

Project Description
In this research, we seek to prototype and study the effects of leveraging artificial intelligence to deliver personalized, interactive learning recommendations. The primary focus of this work is to address the “content bottleneck” for adaptive systems, so that new content may be integrated into the system with minimal training.

The testbed for this work will be the Personal Assistant for Life-Long Learning (PAL3), which uses artificial intelligence to accelerate learning through just-in-time training and interventions that are tailored to an individual. The types of content that will be analyzed using machine learning and included in the system cover a wide range of domains. These include:

  • Artificial Intelligence: Training on topics such as AI algorithms and emerging technologies that leverage AI and data science. Content on how to build, evaluate, and compare different AI systems for realistic use cases.
  • Self-Regulated Learning: Training on transferable “learning to learn” skills for effective life-long learning. The target audience is learners studying for high-stakes exams covering topics such as vocabulary, reading comprehension, and basic mathematics. As students study core topics, self-regulated learning skills are introduced (e.g., setting and checking study pace, taking notes).

Job Description
The goal of the internship will be to expand the capabilities of the system to enhance learning and engagement. The specific tasks will be determined based on the status of the project at the time of the internship as well as your interests. Possible topics include work with:

  • Generative AI for Learning Content Creation or Analysis (e.g., multimedia: images, video, text, dynamic content),
  • Natural Language Processing or Dialog Systems for Coaching,
  • Learning Recommendation Systems, and
  • Content-specific technologies (e.g., artificial intelligence).

Opportunities will be available to contribute to peer-reviewed publications.

Preferred Skills

  • Python, JavaScript/Node.js, R
  • Machine Learning Expertise, Basic AI Programming, or Statistics
  • Strong interest in machine learning approaches, intelligent agents, human and virtual behavior, and social cognition

Apply now

442 - Acquisition and Technical Management with DAX-GEN: AI-Generation of Training Scenarios for Experiential Learning & Assessment

Project Name
Acquisition and Technical Management with DAX-GEN: AI-Generation of Training Scenarios for Experiential Learning & Assessment

Project Description
This project aims to increase effectiveness and reduce training time for virtual training with defense experiential learning (DAX). Using structured generative AI pipelines, it delivers adaptive training scenarios that are aligned to realistic and relevant missions and tailored to specific competencies and career stages.

Job Description
Over the course of the summer, the intern will assist in the development of software infrastructure, participate in AI-integrated content development, and conduct informal testing of AI technologies for delivering or updating learning content. Intelligent and interactive content may include dialog systems, question-and-inquiry interaction with passive content, and game-like learning scenarios (e.g., interactive simulations of AI algorithms). Platforms for this work include Python AI/ML libraries (including generative AI tools such as OpenAI and AutoGen agents), React Native, React JS, Node.js, and GraphQL. Interns involved in this project will be expected to document their designs, contributions, and analyses to contribute to related publications.

Preferred Skills

  • At least one AI course
  • JavaScript/TypeScript
  • Python, Python notebooks

Apply now

445 - Generative AI for Course of Action Analysis and Adaptation

Project Name
Generative AI for Course of Action Analysis and Adaptation

Project Description
Human-Inspired Adaptive Teaming Systems (HATS) develops human-centered AI for virtual training. We create interactive adversarial scenarios that deliver challenging, realistic simulated opponents and build decision-support tools for trainees and instructors. This project focuses on leveraging Generative AI for Course-of-Action analysis and adaptation for logistics and/or esports.

Job Description
This summer internship offers hands-on experience with modern machine learning, reinforcement learning and Generative AI methodologies (including multimodal Large Language Models) applied to domains like training simulations, logistics planning, or esports. You will contribute to building, testing, and iterating on prototypes alongside a cross-disciplinary research team.

Preferred Skills

  • Python and Deep Learning Libraries
  • Multimodal LLMs
  • Reinforcement Learning
  • Diffusion Models

Apply now

446 - 3D Terrain (3D Vision/Machine Perception) Project

Project Name
3D Terrain (3D Vision/Machine Perception) Project

Project Description
The 3D Terrain project focuses on researching and prototyping capabilities that support a fully geo-referenced 3D planetary model for use in the Army’s next-generation training and simulation environments. USC-ICT’s research exploits new techniques and advances in the collection, processing, storage, and distribution of geospatial data to various runtime applications.
The project seeks to:

  • Construct a single 3D geospatial database for use in next-generation simulations and virtual environments
  • Procedurally recreate 3D terrain using drones and other capture equipment
  • Develop methods for rapid 3D reconstruction and camera localization
  • Extract semantic features from raw 3D terrain and point clouds to build a simulation-ready environment
  • Reduce the cost and time of creating geo-specific datasets for modeling and simulation (M&S)

Job Description
The ML Researcher (3D Vision/Machine Perception) will work with the terrain research team to develop efficient solutions for 3D reconstruction and segmentation from 2D monocular videos, imagery, or LiDAR sensors.

Preferred Skills

  • Programming experience with machine learning, computer vision, or computer graphics (e.g., PyTorch, OpenGL, CUDA).
  • Interest/experience in 3D computer vision applications such as SLAM, 3D Gaussian Splatting, structure-from-motion, and/or depth estimation.
  • Experience with Unity/Unreal game engine and related programming skills (C++/C#) is a plus.
  • Interest/experience with Geographic Information System applications and datasets.

Apply now

447 - Social Simulation

Project Name
Social Simulation

Project Description
The Social Simulation Lab works on modeling and simulation of social systems from small group to societal level interactions, as well as data-driven approaches to validating these models. Our approach to simulation relies on multi-agent techniques where autonomous, goal-driven agents are used to model the entities in the simulation, whether individuals, groups, organizations, etc.

Job Description
The research assistant will investigate automated methods for building agent-based models of human behavior. The core of the task will be developing and implementing algorithms that can analyze human behavior data and find a decision-theoretic model (or models) that best matches that data. The task will also involve using those models in simulation to further validate their potential predictive power.
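
To give a concrete flavor of this task, here is a minimal, purely illustrative sketch (not the project's actual codebase) of fitting one simple decision-theoretic model, a softmax choice rule, to observed behavior data by maximum likelihood; all names and the toy dataset are hypothetical:

```python
import math

# Hypothetical sketch: each observation pairs the utilities an agent saw
# with the index of the action it chose. We recover the inverse-temperature
# parameter beta of a softmax choice model that best explains the choices.

def softmax_probs(utilities, beta):
    # Probability of each action under a softmax decision rule.
    exps = [math.exp(beta * u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def log_likelihood(data, beta):
    # Total log-probability of the observed choices under the model.
    return sum(math.log(softmax_probs(utils, beta)[choice])
               for utils, choice in data)

def fit_beta(data, candidates):
    # Simple grid search over candidate beta values (real pipelines would
    # use a proper optimizer and richer models, e.g., POMDPs).
    return max(candidates, key=lambda b: log_likelihood(data, b))

# Toy behavior data: the agent picks the higher-utility action 9 times in 10.
data = [([1.0, 0.0], 0)] * 9 + [([1.0, 0.0], 1)]
beta_hat = fit_beta(data, [b / 10 for b in range(1, 51)])
```

A fitted model like this can then be run forward in simulation, which is exactly the validation step the job description mentions.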

Preferred Skills

  • Knowledge of multi-agent systems, especially decision-theoretic models like POMDPs.
  • Experience with Python programming.
  • Knowledge of psychosocial and cultural theories and models.

Apply now

448 - Real-Time Simulation of Human-Machine Interaction (HMI) Robotics

Project Name
Real-Time Simulation of Human-Machine Interaction (HMI) Robotics

Project Description
Human-Machine Integration (HMI) strives to optimize how the U.S. Army will interface and interact with next-generation systems and platforms that will be fielded into Army maneuver and support formations. Our current efforts focus on expanding our RIDE simulation platform’s capabilities to realize an integrated virtual environment for testing, simulating, and experimenting with diverse autonomous capabilities in a single platform.

The Rapid Integration & Development Environment (RIDE) is a foundational Research and Development (R&D) platform that unites many Department of Defense (DoD) and Army simulation efforts to deliver an accelerated development foundation and prototyping sandbox that provides direct benefit to the US Army’s Synthetic Training Environment (STE) as well as the larger DoD and Army simulation communities. RIDE integrates a range of capabilities, including One World Terrain (OWT), Non-Player Character (NPC) Artificial Intelligence (AI) behaviors, Experience Application Programming Interface (xAPI) logging, multiplayer networking, scenario creation, machine learning approaches, and multi-platform support.

Job Description
The tasks outlined for the summer internship are as follows:

  • Become familiar with the RIDE platform;
  • Support the development of simulated Heavy Infantry Battalion robotic units;
  • Support the creation of Defense and Movement phase actions for these robotic units;
  • Support the investigation of performance bottlenecks related to terrain size and number of units;
  • Design and develop new demonstrations leveraging existing RIDE capabilities;
  • Identify and integrate additional 3rd party capabilities.



Working within this project requires a solid understanding of general software engineering principles and distributed architectures. The work touches on a variety of Computer Science areas, including Artificial Intelligence and Human-Computer Interaction. Given the scope of RIDE, the ability to quickly learn how to use existing components and develop new ones is essential.

Preferred Skills

  • Fluent with the Unity game engine
  • Excellent C# programming skills
  • Fluent in one or more scripting languages (e.g., Python)
  • Background in artificial intelligence and machine learning a plus

Apply now

449 - Consistent Goal Directed Action Generation with LLM

Project Name
Consistent Goal Directed Action Generation with LLM

Project Description
Large Language Models (LLMs) are increasingly being used to generate tokens beyond text, such as images, videos, and, most recently, actions for virtual agents. A key challenge for LLM-generated actions is ensuring that they adhere to the specified objectives, and that they do so consistently across prompts and throughout the sequence of actions. In this project, we will take an agentic view of LLMs and study the goals that drive their behaviors: the consistency of those goals across prompts, their persistence across multiple steps within a plan, and the alignment of LLMs’ goals with those of humans. The goal is to implement a computational pipeline that infers quantitative models of LLMs from their responses, so that these models can be compared across decision-making situations.
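
As one illustration of the kind of quantitative comparison such a pipeline targets, a minimal sketch (all names and data hypothetical, not the project's actual method) might score how consistently an LLM-driven agent selects the same action across paraphrased prompts that share one underlying goal:

```python
from collections import Counter

# Hypothetical sketch: `responses` maps each underlying goal to the actions
# an LLM-driven agent produced across several paraphrases of the same prompt.

def consistency_score(actions):
    # Fraction of samples agreeing with the modal action (1.0 = fully consistent).
    counts = Counter(actions)
    return counts.most_common(1)[0][1] / len(actions)

responses = {
    "refuel": ["goto_depot", "goto_depot", "goto_depot", "wait"],
    "scout":  ["move_north", "move_east", "move_north", "move_north"],
}
scores = {goal: consistency_score(acts) for goal, acts in responses.items()}
```

Aggregating such scores across many goals and prompt variants gives one simple quantitative handle on cross-prompt goal consistency.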

Job Description
Review the state of the art in action-token generation with LLMs for agents. Evaluate goal consistency in actions generated by LLMs using existing datasets. Build on an agent-based framework to explore methods to constrain action-token generation.

Preferred Skills

  • Skilled with Python
  • Experience working with online and offline LLMs through prompt engineering
  • Experience with agent-based models

Apply now

450 - Real-Time Display Modeling for Visual Data Realism

Project Name
Real-Time Display Modeling for Visual Data Realism

Project Description
Display technology shapes how scenes are perceived by both humans and cameras and is vital for closing the realism gap in accurate visual reproduction. In modern production, the display surface itself acts as a light source, influencing reflections, shadows, and appearance in a scene. However, physical display technologies are expensive, and many synthetic-data workflows assume displays reproduce images with perfect fidelity, ignoring non-linear tone response, temporal behavior, and color variation, which reduces the accuracy of simulated results. This project builds a display simulation module that takes rendered scenes and emulates how they would appear on real displays. This module will be integrated with our rendering stack (materials, lighting, geometry, volumetrics) so the output better matches real image capture. As workflows increasingly require rapid feedback and realism in production, accurate display modeling reduces the gap between simulation and physical builds, lowering the labor and material costs of constructing and testing real display systems. The goal is to create a synthetic data generator that simulates accurate lighting, display response, camera capture, materials, geometry, temporal dynamics, and sensor noise.
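
As a purely illustrative sketch of the "non-linear tone response" mentioned above (not the project's actual model, and with hypothetical parameter values), a minimal display response might map a linear drive signal to emitted luminance through a gamma curve with a non-zero black level:

```python
# Hypothetical sketch: a drive signal in [0, 1] is mapped to emitted
# luminance via a gamma curve, with a non-zero black level emulating a
# display's imperfect contrast. A "perfect fidelity" workflow would
# instead treat the displayed value as equal to the signal itself.

def emitted_luminance(signal, gamma=2.2, black=0.02, peak=1.0):
    s = min(max(signal, 0.0), 1.0)          # clamp the drive signal to [0, 1]
    return black + (peak - black) * (s ** gamma)

mid_gray = emitted_luminance(0.5)           # darker than the linear value 0.5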

Job Description
The intern will work with lab researchers to develop a tool for display simulation inside a game engine to model how rendered scenes appear on real displays. The intern will work on integrating rendering, lighting, and physical display modeling for realistic visualization and data generation. This tool will support creating a high-quality database for training, testing, and analysis while reducing the cost and effort of building and evaluating real display systems.

Preferred Skills

  • C++, C#, engineering math, physics, and programming, OpenGL / Direct3D, GLSL / HLSL, Unity, Unreal
  • Python, GPU programming, Maya, version control (svn/git)
  • Knowledge of modern rendering pipelines, image processing, rigging, and blendshape modeling.
  • Web-based skills – WebGL, Django, JavaScript, HTML, PHP.

Apply now

451 - Smart Acquisition of Large Scale 3D Scenes

Project Name
Smart Acquisition of Large Scale 3D Scenes

Project Description
The Vision and Graphics Lab at ICT pursues research and engineering work in the understanding and processing of 3D scenes, specifically in reconstruction, recognition, and segmentation, using learning-based techniques. This work has important practical applications in autonomous driving, AR, and VR. However, generating realistic 3D scene data for training and testing is challenging due to limited photorealism in synthetic datasets and the intensive manual work required to obtain and process real data. Large-scale complex scenes require faster and smarter data collection techniques that can circumvent laborious post-processing, such as 3D reconstruction, fixing inaccurate segmentation, completing incomplete surfaces, and correcting distorted textures, to create clean datasets. We will then use the data to train neural networks for the joint reconstruction and segmentation of large-scale 3D scenes.

Job Description
The intern’s primary task will be assisting ICT VGL’s hardware engineers to build a novel data collection tool and thereby assist in the data collection process. The intern will also help bridge the data collected with a team developing intelligent algorithms to generate clean and accurate 3D data (both indoor and outdoor environments) for training. The intern will also join the research in 3D scene reconstruction and segmentation using this data.

Preferred Skills

  • C++, electrical and embedded systems basics, math, physics, and programming
  • Python, GPU programming, Maya, version control (svn/git)
  • Knowledge of 3D data pipelines, image processing, and photogrammetry workflows.

Apply now

Compensation: The base salary range for all intern positions is $1,254/week (undergrad) to $1,382/week (grad), paid at a monthly or hourly rate depending on employment type.
See FAQs for a description of employment types | ICT Internship Personal Statement Template