Summer Research program positions

407 - Autonomy-Mediated Trust: Using AI to mediate value-laden conflicts

Project Name
Autonomy-Mediated Trust: Using AI to mediate value-laden conflicts

Project Description
Team decision-making requires compromise between members representing groups that differ not only in their interests but also in their values, norms, and identities. In promoting their own group’s interests, teammates must simultaneously negotiate their relationships and identity within the team and within the group they represent. Teammates may feel unsafe sharing ideas or challenging their teammates and thus reach poor agreements, due to perceived threats to these relationships. Yet trusting relationships can also serve as a positive source of value: trading off immediate material value to strengthen relationships can yield long-term material rewards. Such observations have led dispute-resolution research to focus on the role of relationships in shaping the resolution of disputes, and how in turn, the resolution of conflicts shapes relationships between team members and between members and the groups they represent. In this project, we incorporate theories of relationships into the design of automated dispute resolution methods with the aim of helping to resolve value-laden conflicts.

Job Description
The ideal candidate is a current PhD student with interest and experience in using AI methods, particularly large language models, in negotiation, mediation, or conflict-resolution settings. Interest or experience in value conflicts would be a benefit (e.g., situations where disputants differ in cultural values, sacred values, or ethical frameworks). Strong programming skills and evidence of research potential are important.

Preferred Skills

  • BS or MS in Computer Science
  • Experience with Large Language Models
  • Interdisciplinary interests

Apply now

408 - Strategy Optimization in Multi-agent Reinforcement Learning

Project Name
Strategy Optimization in Multi-agent Reinforcement Learning

Project Description
Military training simulations take place in complex, continuous, stochastic, partially observable, non-stationary, doctrine-based environments with multiple players either collaborating or competing against each other. Multi-agent Reinforcement Learning (MARL) presents opportunities in military simulations for creating simulated enemy forces that can learn from, and adapt to, the techniques used by friendly forces to become challenging opponents, while developing policies at a level of abstraction usable by an operational planner in military domains.

Job Description
We leverage machine learning to generate autonomous, adaptive synthetic characters for use in interactive simulations. Our current focus is on creating such synthetic characters for military training simulations, with multiple players collaborating or competing against each other. Interns will have the opportunity to learn and use Multi-agent Reinforcement Learning, along with recent representation aggregation and behavior prediction techniques for stabilizing behavior adaptation.
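
For a concrete sense of the stabilization problem, the toy sketch below (illustrative only, not project code; the game, hyperparameters, and NumPy dependency are our assumptions) shows two independent Q-learners in a repeated matching-pennies game: because each agent's policy keeps shifting, neither agent's value estimates ever settle, which is exactly the non-stationarity that MARL stabilization techniques address.

```python
import numpy as np

rng = np.random.default_rng(0)
payoff = np.array([[1, -1], [-1, 1]])   # row player's payoff in matching pennies

q = [np.zeros(2), np.zeros(2)]          # one Q-table per agent (stateless game)
alpha, eps = 0.1, 0.2                   # learning rate, exploration rate

for step in range(10_000):
    greedy = (int(np.argmax(q[0])), int(np.argmax(q[1])))
    acts = [a if rng.random() > eps else int(rng.integers(2)) for a in greedy]
    r0 = payoff[acts[0], acts[1]]       # zero-sum: agent 1 receives -r0
    for i, r in ((0, r0), (1, -r0)):
        q[i][acts[i]] += alpha * (r - q[i][acts[i]])

print("Agent 0 Q-values:", q[0])        # keeps oscillating as the opponent adapts
print("Agent 1 Q-values:", q[1])
```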

Preferred Skills

  • Deep Multi-agent Reinforcement Learning
  • Graph Neural Networks
  • Unity ML-Agents
  • RLLib

Apply now

409 - Watercraft and Ship Simulator of the Future (WSSOF)

Project Name
Watercraft and Ship Simulator of the Future (WSSOF)

Project Description
ICT’s main role is to create the visualizations for the virtual experience by interfacing with scientists and programmers at ERDC, as well as with a small team from the Viterbi School of Engineering (who have wave-model algorithms that they are already starting to port to VR), integrating their work into the simulation, and working on the networking needed to support multiple users within the simulation.

Job Description
The intern will assist in designing and maintaining software solutions and frameworks and will support the determination of operational feasibility (e.g., evaluating analyses, defining problems, developing solutions). They will help implement software solutions, prioritizing information needs and collaborating with a broad range of customers, partners, and key stakeholders, and will work with researchers and developers to design, develop, test, and document new systems within the platform.

Preferred Skills

  • Unity
  • VR/AR Development
  • Unreal

Apply now

410 - Adaptive HMD

Project Name
Adaptive HMD

Project Description
Our goal for the Adaptive HMD project is to provide a conceptual framework for developing an adaptive, adaptable, context-aware interface for head-mounted displays that caters to the warfighter’s needs, and to demonstrate and evaluate the benefits of such a framework with prototypes. If successful, we believe this will initiate a critical shift away from traditional two-domain system interfaces (e.g., graphical user interfaces, or GUIs, for desktop and mobile devices) and toward the design and development of the next-generation system-driven interface.

Job Description
The intern will work closely with lead developers on the Adaptive HMD project to help build an immersive environment in which we can prototype and test different user interfaces to improve decision making in an adaptive (aware of the user, the mission, and the environment), simulated (for now) AR interaction. We will be exploring the strengths and the concerns of this human-technology interface.

Preferred Skills

  • Unity
  • VR/AR Development
  • Unreal

Apply now

411 - Asynchronous Annotation Embedded Environments (A2E2)

Project Name
Asynchronous Annotation Embedded Environments (A2E2)

Project Description
The Asynchronous Annotation Embedded Environments (A2E2) project aims to investigate the power of combining human observations with AI-driven analysis to improve sensemaking, enhance situational awareness, and foster better and faster decision making in varied operational contexts. Information derived from collective intelligence, encompassing existing intel, crowdsourced forecasting, and user-generated observations, will be analyzed by an intelligent system, and the results can be embedded into the environment as site-specific annotations. The goal of this project is to bridge the research gap between data analysis and AR-based visualization and interaction, and to better understand the dynamics between them.

Job Description
The MxR team with the assistance of the Summer Intern will investigate various visualization techniques that can effectively convey changes and anomalies detected by AI. This could include heatmaps, visual overlays, color-coded annotations, icons, or contour lines. The team will focus on spatially situated data visualization techniques to support an end user in Army relevant scenarios and contexts.

Preferred Skills

  • Unity
  • VR/AR Development
  • Unreal

Apply now

412 - Acting and Interacting in Generated Worlds

Project Name
Acting and Interacting in Generated Worlds

Project Description
Emerging technologies for AI generation of images, animations, 3D models, and audio may one day enable small teams of talented filmmakers to make movies with very high production value. This research effort explores new workflows for writers, directors, actors, editors, and composers that closely integrate new AI technology into the creative process.

Job Description
We are seeking a Summer Intern for 2024 who will focus on workflows for compositing the performance of real actors into AI-generated virtual environments, with special attention to interacting with virtual things and people that do not exist on the actor’s stage. The Summer Intern will generate test environments using generative AI models, capture actor performances using novel light stages, and engineer software pipelines for compositing performances into rendered scenes.
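
As a small illustration of the compositing step, the sketch below alpha-blends an actor frame over a generated background with the standard "over" operator. The file names are hypothetical placeholders (the project's actual pipeline is not specified here), and NumPy and Pillow are assumed.

```python
import numpy as np
from PIL import Image

# Placeholder inputs: all three images are assumed to share one resolution.
fg = np.asarray(Image.open("actor_frame.png").convert("RGB"), dtype=np.float32)
bg = np.asarray(Image.open("generated_set.png").convert("RGB"), dtype=np.float32)
alpha = np.asarray(Image.open("actor_matte.png").convert("L"), dtype=np.float32) / 255.0

# "Over" operator: foreground weighted by its matte, background by the remainder.
comp = fg * alpha[..., None] + bg * (1.0 - alpha[..., None])
Image.fromarray(comp.astype(np.uint8)).save("composite.png")
```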

Preferred Skills

  • Experience using generative AI models for images, audio, 3D models, and/or animation, e.g., using trained Huggingface models with PyTorch in Jupyter notebooks.
  • Experience with video editing and/or 3D modeling software tools, e.g., Final Cut Pro and Blender.
  • Deep technical understanding of digital photography and 3D rendering.

Apply now

413 - One World Terrain (3D Vision/Machine Perception) Project

Project Name
One World Terrain (3D Vision/Machine Perception) Project

Project Description
One World Terrain (OWT) focuses on researching and prototyping capabilities that support a fully geo-referenced 3D planetary model for use in the Army’s next-generation training and simulation environments. USC-ICT’s research exploits new techniques and advances in the focus areas of collection, processing, storage and distribution of geospatial data to various runtime applications.

The project seeks to:

  • Construct a single 3D geospatial database for use in next-generation simulations and virtual environments
  • Procedurally recreate 3D terrain using drones and other capturing equipment
  • Develop methods for rapid 3D reconstruction and camera localization
  • Extract semantic features from raw 3D terrain and point clouds to build a simulation-ready environment
  • Reduce the cost and time for creating geo-specific datasets for modeling and simulation (M&S)

Job Description
The ML Researcher (3D Vision/Machine Perception) will work with the OWT team to help develop efficient solutions for 3D reconstruction and segmentation from 2D monocular videos, imagery, or LiDAR sensors.

Preferred Skills

  • Programming experience with machine learning, computer vision, or computer graphics (e.g., PyTorch, OpenGL, CUDA)
  • Interest/experience in 3D computer vision applications such as SLAM, NeRF, structure-from-motion, and/or depth estimation
  • Experience with the Unity/Unreal game engines and related programming skills (C++/C#) is a plus
  • Interest/experience with Geographic Information System (GIS) applications and datasets

Apply now

414 - Social Simulation

Project Name
Social Simulation

Project Description
The Social Simulation Lab works on modeling and simulation of social systems, from small-group to societal-level interactions, as well as data-driven approaches to validating these models. Our approach to simulation relies on multi-agent techniques in which autonomous, goal-driven agents model the entities in the simulation, whether individuals, groups, organizations, etc.

Job Description
The research assistant will investigate automated methods for building agent-based models of human behavior. The core of the task will be developing and implementing algorithms that can analyze human behavior data and find a decision-theoretic model (or models) that best matches that data. The task will also involve using those models in simulation to further validate their potential predictive power.
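
As a rough sketch of what "finding a decision-theoretic model that best matches the data" can look like in practice (a simplified stand-in assuming a softmax-rational choice model and synthetic data, not the lab's actual formalisms), the example below recovers an agent's reward weights from observed choices by maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_trials, n_options, n_features = 500, 3, 4
feats = rng.normal(size=(n_trials, n_options, n_features))

# Simulate a softmax-rational agent with hidden reward weights.
true_w = np.array([1.0, -0.5, 0.0, 2.0])
utils = feats @ true_w
probs = np.exp(utils) / np.exp(utils).sum(axis=1, keepdims=True)
choices = np.array([rng.choice(n_options, p=p) for p in probs])

def neg_log_lik(w):
    u = feats @ w
    u -= u.max(axis=1, keepdims=True)              # numerical stability
    logp = u - np.log(np.exp(u).sum(axis=1, keepdims=True))
    return -logp[np.arange(n_trials), choices].sum()

fit = minimize(neg_log_lik, x0=np.zeros(n_features))
print("recovered weights:", np.round(fit.x, 2))    # close to true_w
```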

Preferred Skills

  • Knowledge of multi-agent systems, especially decision-theoretic models like POMDPs.
  • Experience with Python programming.
  • Knowledge of psychosocial and cultural theories and models.

Apply now

415 - Learning Artificial Intelligence through Educational Games

Project Name
Learning Artificial Intelligence through Educational Games

Project Description
This project aims to develop a suite of educational games that help youth and learners without an engineering background learn about artificial intelligence. We are building two educational games: the first is a role-playing game for learning probabilistic reasoning in AI; the second is a game for learning the basics of machine learning and its applications. Both games will be delivered online through a web browser.

Job Description
We are looking for student researchers who are familiar with foundational concepts of AI and are passionate about game-based learning to join our team of researchers, game developers, and students. Student researchers with a strong AI background will explore and implement the backend AI algorithms (e.g., Bayesian networks, off-the-shelf machine learning algorithms); game-development-focused student researchers will connect the backend AI algorithms and integrate them into front-end UI interactions and gameplay.
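
To make the backend concrete, here is a minimal, library-free illustration (our own toy example, not the game's actual code) of exact inference by enumeration in the classic "sprinkler" Bayesian network, the kind of probabilistic reasoning the first game teaches:

```python
import itertools

# CPTs for the classic sprinkler network: Cloudy -> {Sprinkler, Rain} -> WetGrass
p_c = {True: 0.5, False: 0.5}                       # P(Cloudy)
p_s = {True: 0.1, False: 0.5}                       # P(Sprinkler=1 | Cloudy)
p_r = {True: 0.8, False: 0.2}                       # P(Rain=1 | Cloudy)
p_w = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.0}    # P(Wet=1 | Sprinkler, Rain)

def joint(c, s, r, w):
    pc = p_c[c]
    ps = p_s[c] if s else 1 - p_s[c]
    pr = p_r[c] if r else 1 - p_r[c]
    pw = p_w[(s, r)] if w else 1 - p_w[(s, r)]
    return pc * ps * pr * pw

# Query P(Rain=1 | Wet=1) by summing the joint over the hidden variables.
num = sum(joint(c, s, True, True) for c, s in itertools.product([True, False], repeat=2))
den = sum(joint(c, s, r, True) for c, s, r in itertools.product([True, False], repeat=3))
print(f"P(rain | wet grass) = {num / den:.3f}")     # about 0.708
```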

Preferred Skills

  • Proficient with Unity, Python, C/C++
  • Familiar with basic machine learning algorithms, such as decision trees, neural networks, linear regression, and SVMs
  • Familiarity with probabilistic AI models is a plus.

Apply now

416 - Learning AI by Breaking AI

Project Name
Learning AI by Breaking AI

Project Description
This project aims to help students learn about AI by breaking AI. Students will interact with a suite of state-of-the-art AI technologies in machine vision and natural language processing and explore the conditions under which AI works well and those under which it does not. Through this contrast, students learn how AI works and about its ethical implications.

Job Description
We are looking for student researchers who are familiar with state-of-the-art AI applications in machine vision and NLP (e.g., LLMs) to join our team of researchers and students. The student researcher will incorporate AI applications, implement off-the-shelf algorithms in MV and NLP, and explore and characterize the cases in which they work well and those in which they fail.

Preferred Skills

  • Proficient with Unity, Python, C/C++
  • Familiar with MV and NLP applications and algorithms
  • Full-stack development experience is a plus

Apply now

417 - Explainable Self-Aware AI

Project Name
Explainable Self-Aware AI

Project Description
This project aims to develop explainable models for machine learning algorithms to support explanations and visualizations of when and why AI systems succeed and fail. The explainable, self-aware AI will be integrated into human-AI teams, and we will study the impact of such explanations and visualizations on human-AI team performance.

Job Description
The student intern will work with a team of researchers to develop explainable models, such as decision trees, Bayesian networks, and linear models, to explore the relationship between the input features and outputs of ML classification algorithms.
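
One common technique matching this description, shown below as a hedged sketch with scikit-learn and toy data (the project's actual models and datasets are not specified here), is to distill a black-box classifier into a shallow surrogate decision tree trained on the model's own predictions, so that the tree explains the model rather than the data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Fit the surrogate on the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how faithfully the readable tree mimics the black box.
print("fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(6)]))
```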

Preferred Skills

  • Proficient with Unity, Python, C/C++
  • Familiar with classification of image and text data

Apply now

418 - Team-Adaptive Coach for Training Artificial Intelligence Competencies (TACTAIC)

Project Name
Team-Adaptive Coach for Training Artificial Intelligence Competencies (TACTAIC)

Project Description
The TACTAIC project aims to use AI to teach AI, by developing a personal learning assistant and associated learning resources to support learners developing competencies in artificial intelligence. We are looking for an intern who has taken one or more courses in artificial intelligence to assist in expanding our prototype and help develop learning resources which leverage intelligent tutoring systems.

Job Description
Over the course of the summer, the intern will assist in developing software infrastructure, participate in AI-integrated content development, and help with informal testing of the prototype. Content development may include content creation (e.g., recording a video resource) as well as game-like learning scenarios (e.g., interactive simulations of AI algorithms). Platforms for this work include Python AI/ML libraries, including generative AI tools such as OpenAI and LangChain, as well as React Native, React JS, Node.js, and GraphQL. Interns involved in this project will be expected to document their designs, contributions, and analyses to contribute to related publications.

Preferred Skills

  • At least one AI course
  • Javascript/Typescript
  • Python, Python Notebook

Apply now

419 - Content Analytics and Tools for the Personal Assistant for Life Long Learning (PAL3)

Project Name
Content Analytics and Tools for the Personal Assistant for Life Long Learning (PAL3)

Project Description
In this research, we seek to prototype and study the effects of leveraging artificial intelligence to deliver personalized, interactive learning recommendations. The primary focus of this work is to ease the “content bottleneck” for adaptive systems, such that new content may be integrated into the system with minimal training.

The testbed for this work will be the Personal Assistant for Life-Long Learning (PAL3), which uses artificial intelligence to accelerate learning through just-in-time training and interventions that are tailored to an individual. The types of content that will be analyzed using machine learning and included in the system cover a wide range of domains. These include:

  • Artificial Intelligence: Training on topics such as AI algorithms and emerging technologies that leverage AI and data science. Content on how to build, evaluate, and compare different AI systems for realistic use cases.
  • Self-Regulated Learning: Training on transferable “learning to learn” skills for effective life-long learning. The target audience is learners studying for high-stakes exams covering topics such as vocabulary, reading comprehension, and basic mathematics. As students study core topics, self-regulated learning skills are introduced (e.g., setting and checking study pace, taking notes).

Job Description
The goal of the internship will be to expand the capabilities of the system to enhance learning and engagement. The specific tasks will be determined based on the status of the project at the time of the internship as well as your interests.

Possible topics include work with:

  1. Machine Learning for Content Analysis (e.g., multimedia: images, video, text, dynamic content),
  2. Natural Language Processing or Dialog Systems for Coaching,
  3. Learning Recommendation Systems (see the sketch after this list), and
  4. Content-specific technologies (e.g., artificial intelligence).

Opportunities will be available to contribute to peer-reviewed publications.
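
For topic 3, a minimal content-based sketch (illustrative only; the resource titles, learner profile, and scikit-learn dependency are invented for the example) ranks learning resources against a learner's recent activity using TF-IDF similarity:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical resource catalog and learner history.
resources = {
    "note-taking basics": "strategies for taking effective study notes",
    "pacing your study": "setting and checking a study pace before an exam",
    "intro to search": "uninformed and heuristic search algorithms in AI",
}
learner_history = "student struggled with setting a study pace last session"

vec = TfidfVectorizer()
matrix = vec.fit_transform(list(resources.values()) + [learner_history])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Recommend the resources most similar to the learner's recent activity.
for title, score in sorted(zip(resources, scores), key=lambda t: -t[1]):
    print(f"{score:.2f}  {title}")
```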

Preferred Skills

  • Python, JavaScript/Node.js, R
  • Machine Learning Expertise, Basic AI Programming, or Statistics
  • Strong interest in machine learning approaches, intelligent agents, human and virtual behavior, and social cognition

Apply now

420 - Real-Time Modelling and Rendering of Virtual Humans

Project Name
Real-Time Modelling and Rendering of Virtual Humans

Project Description
The Vision and Graphics Lab at ICT pursues research and production work in high-quality facial scanning for Army training and simulations, as well as for VFX studios and game development companies. A continuing line of research asks how machine learning can model 3D data with data-driven deep learning networks rather than traditional methods. This requires large amounts of data, more than can be obtained from raw light stage scans alone. We are currently working on software both to visualize our new facial scan database and to animate and render virtual humans. To realize and test its usability, we need a tool that can model and render the created avatar through a web-based GUI. The goal is a real-time, responsive web-based renderer on a client, controlled by software hosted on the server.

Job Description
The intern will work with lab researchers to develop a tool that visualizes assets generated by the machine learning model of the rendering pipeline in a web browser using a Unity plugin, and to integrate deep learning models so they can be called through web-based APIs. This will include developing the latest techniques in physically based real-time character rendering and animation. Ideally, the intern will be familiar with physically based rendering, subsurface scattering techniques, hair rendering, and 3D modeling and reconstruction.

Preferred Skills

  • C++, engineering math, physics, and programming, OpenGL / Direct3D, GLSL / HLSL, Unity3D
  • Python, GPU programming, Maya, version control (svn/git)
  • Knowledge in modern rendering pipelines, image processing, rigging, blendshape modeling
  • Web-based skills – WebGL, Django, JavaScript, HTML, PHP

Apply now

421 - Material Reflectance Property Database for Neural Rendering

Project Name
Material Reflectance Property Database for Neural Rendering

Project Description
The Vision and Graphics Lab at ICT pursues research and production work in physically based neural rendering for general objects. Although the modern rendering pipeline used in industry achieves compelling, realistic results, it still has general limitations: professionals must manually tweak material properties to match the natural appearance of each object, which is costly for a complex scene with multiple objects. Rendering a human face alone requires modeling the eyeballs, teeth, facial hair, and skin separately. Neural rendering, on the other hand, promises a revolution in easy, high-quality rendering. By building a radiance field from geometry models, lighting models, and material property models in a network, neural rendering can render a complicated scene in real time. Material properties no longer need to be specified by hand; the network assigns them automatically according to each object’s material label. This breakthrough will benefit immersive AR/VR experiences in the future.

Job Description
At the center of neural rendering is material property estimation, for which we need a material database. The intern will work with lab researchers to capture and process this database using our dedicated Light Stage. The database will include a wide range of objects (e.g., cloth, wood, hair) with different reflectance properties, measured under controllable lighting conditions, and will be used for developing physically based neural rendering algorithms.
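
As a flavor of the processing involved, the sketch below shows the textbook Lambertian photometric-stereo step for a single pixel: recovering albedo and a surface normal from intensities captured under known light directions. The light directions and intensities here are synthetic; the Light Stage's real calibration and non-Lambertian materials are far richer than this assumes.

```python
import numpy as np

# Known (unit) light directions, one row per light.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.7],
              [0.0, 0.7, 0.7],
              [-0.7, 0.0, 0.7]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)

# Synthesize intensities I = albedo * max(L . n, 0) for a made-up surface point.
true_n = np.array([0.2, -0.1, 0.97]); true_n /= np.linalg.norm(true_n)
true_albedo = 0.8
I = true_albedo * np.clip(L @ true_n, 0, None)

# Least squares solves for g = albedo * normal; its norm separates the two.
g, *_ = np.linalg.lstsq(L, I, rcond=None)
albedo = np.linalg.norm(g)
normal = g / albedo
print("albedo ~", round(float(albedo), 3), "normal ~", np.round(normal, 3))
```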

Preferred Skills

  • C++, OpenGL, GPU programming, Windows and Ubuntu operating systems, strong math skills
  • Experience with computer vision techniques: multi-camera stereo, optical flow, photometric stereo, spherical harmonics
  • Knowledge in modern rendering pipelines, image processing, computer vision, computer graphics

Apply now

422 - Optical Simulations for Synthetic Image Quality Testing

Project Name
Optical Simulations for Synthetic Image Quality Testing

Project Description
The Vision and Graphics Lab at ICT pursues research and engineering work in understanding, processing, and advancing physically based rendering of 3D models using learning-based techniques. Rapid generation of quality-benchmarked assets is a goal pursued by both academia and industry, with important practical applications in teleconferencing, holoportation, AR, and VR. However, generating realistic 3D assets in real time is challenging due to limited photorealism in synthetic data and in images generated by the optical layers of a deep learning network. Given an image from a first-iteration, optically informed deep network, the task is to run simulation tests on our predicted parameters in optical simulation software, checking for accuracy against the theoretical image formation model. We will then apply various image quality checks, covering SNR, tonal response, image distortion, and chromatic aberration, among others.

Job Description
The intern will focus on running simulation tests of the quality and accuracy of predicted parameters in optical simulation software, checking against the theoretical image formation model. The intern will also perform various image quality checks on provided images in terms of resolution, sharpness, dynamic range, etc.

Preferred Skills

  • Experience running optical simulations in software such as CODE V or Zemax
  • Familiarity with image- and signal-processing techniques such as Fourier analysis, DCT, and basic image operations
  • Engineering, math, physics, and programming skills: C++, Python, GPU programming
  • Optional: basic skills in deep learning and experience training networks

Apply now

423 - Virtual Human Therapeutics

Project Name
Virtual Human Therapeutics

Project Description
As a Psychology Research Intern, you will have the unique opportunity to work in our Virtual Humans Therapeutics Lab, focusing on research projects that explore the psychological aspects of human interaction with virtual humans. In addition, you will be involved in scoping reviews related to the field, contributing to our understanding of the current literature and identifying gaps for future research.

Job Description
Join our team as a Psychology Research Intern, where you’ll contribute significantly to our Virtual Humans and Behavior Change initiatives. Your role involves designing and executing experiments on human interaction with digital avatars, analyzing data to draw meaningful conclusions, and collaborating with a diverse team of researchers, developers, and psychologists. Additionally, you’ll conduct comprehensive literature reviews in psychology and virtual humans, synthesize existing research, and contribute to the development of research proposals. Meticulous documentation of research procedures, active participation in team meetings, and the preparation of reports and presentations are integral aspects of this role. If you are currently enrolled in a psychology or related program, possess a keen interest in human-computer interaction and emerging technologies, and are eager to shape the future of behavior change research, we invite you to join our dynamic and forward-thinking team.

Preferred Skills

  • Enrolled in a graduate or undergraduate program in psychology or a related field
  • Research experience, including IRB proposals, running human-subjects studies, and analyzing study data
  • Proficient in literature review methods and scoping reviews
  • Familiarity with statistical analysis tools (R, SPSS, SAS)

Apply now

424 - Virtual Human R&D Integration

Project Name
Virtual Human R&D Integration

Project Description
The Integrated Virtual Human R&D project seeks to create a wide range of virtual human systems by combining the various research efforts within USC and ICT into a general Virtual Human Architecture. These virtual humans range from relatively simple, statistics-based question/answer characters to advanced, cognitive agents that are able to reason about themselves and the world they inhabit. Our virtual humans can engage with real humans and each other both verbally and nonverbally, i.e., they are able to hear you, see you, use body language, talk to you, and think about whether or not they like you. The Virtual Humans research at ICT is widely considered one of the most advanced in its field and brings together a variety of research areas, including natural language processing, nonverbal behavior, vision perception and understanding, task modeling, emotion modeling, information retrieval, knowledge representation, and speech recognition.

Job Description
Some of the challenges when developing virtual humans are the complexity of the system and the amount of specialized knowledge one needs in order to create new agents. Tools that support the authoring and debugging of agents are therefore essential, but in no way trivial to develop. For instance, how would you visualize the agent’s state of mind, taking into consideration that this involves the many input data streams a user may provide, the context of the interaction, and possible next steps? Considering this challenge, the tasks outlined for the summer internship are as follows:

  • Become familiar with the general Virtual Humans Architecture and interact with several virtual humans;
  • Create a new virtual human within a small domain, using existing authoring and debug tools;
  • Give feedback on these tools;
  • Implement this feedback in existing tools and/or create new tools;
  • Identify 3rd party capabilities for possible integration.

Working within this project requires a solid understanding of general software engineering principles and distributed architectures. The work touches on a variety of Computer Science areas, including Artificial Intelligence and Human-Computer Interaction. Given the scope of the Virtual Humans Architecture, the ability to quickly learn how to use existing components and develop new ones is essential.

Preferred Skills

  • Fluent in C# and the Unity game engine
  • Fluent in one or more scripting languages (e.g., Python)
  • Background in artificial intelligence and machine learning a plus
  • Excellent general computer skills

Apply now

425 - RIDE Integration

Project Name
RIDE Integration

Project Description
The Rapid Integration & Development Environment (RIDE) is a foundational Research and Development (R&D) platform that unites many Department of Defense (DoD) and Army simulation efforts to deliver an accelerated development foundation and prototyping sandbox, providing direct benefit to the US Army’s Synthetic Training Environment (STE) as well as the larger DoD and Army simulation communities. RIDE integrates a range of capabilities, including One World Terrain (OWT), Non-Player Character (NPC) Artificial Intelligence (AI) behaviors, Experience Application Programming Interface (xAPI) logging, multiplayer networking, scenario creation, machine learning approaches, and multi-platform support. It leverages robust game engine technology while remaining agnostic to any specific game or simulation engine. RIDE is freely available through Government Purpose Rights (GPR) with the aim of lowering the barrier to entry for R&D efforts within the simulation community, in particular for training, analysis, exploration, and prototyping. See https://ride.ict.usc.edu for more details.

Job Description
RIDE combines best-in-breed solutions from both academia and industry in support of military training. Some of the challenges associated with this include 1) integrating individual technologies into a common, principled framework, 2) developing demonstrations that showcase integrated capabilities, and 3) creating new content that leverages these capabilities.

Preferred Skills

  • Become familiar with the RIDE platform
  • Design and develop new demonstrations leveraging existing RIDE capabilities
  • Provide feedback on authoring tools for creating new content and support implementing improvements
  • Identify and integrate additional 3rd-party capabilities

Apply now

ARL 79 - Multi-Modal Human-Robot Dialogue in Atypical and Anomalous Environments

Project Name
Human-Robot Dialogue in Atypical and Anomalous Environments

Project Description
This project studies behavior and develops technology for dialogue systems in which a robot and human establish and share common ground through situated dialogue involving atypical or anomalous environments. The robot’s ability to generate natural language descriptions of environments is critical when the human is overloaded with, or completely lacks, visual information; the agent must fill in the gaps to coordinate a shared understanding. This work involves aspects of computer vision to visually analyze the environment, commonsense reasoning to understand the most important elements, and natural language understanding and generation to respond to the human. We will work with dialogue corpora surrounding low-quality images taken in real-world and virtual environments.

Job Description

  • Conduct manual and/or computational analysis of dialogue about images and environments
  • Experiment with or combine existing natural language generation, computer vision, or large language models for description and explanation
  • Design evaluation criteria for assessing the quality of the generated text

Preferred Skills

  • Programming expertise for language generation, computer vision, and/or LLMs
  • Human-robot or human-agent dialogue analysis and dialogue systems
  • Experimental design and applied statistics for rating and evaluation

Apply now

ARL 80 - Creative Visual Storytelling

Project Name
Creative Visual Storytelling

Project Description
This project seeks to discover how humans tell stories about images, and to develop computational models for robots or agents to generate these stories. “Creative” visual storytelling takes into consideration several aspects that influence the narrative: the environment and presentation of imagery, the narrative goals of the telling, and the audience who is listening. This work involves aspects of computer vision to visually analyze the image, commonsense reasoning to understand what is happening, and natural language generation and narrative theories to describe it in a cohesive and engaging manner. Paper reference: https://dl.acm.org/doi/10.1145/3544548.3580744

Job Description

  • Conduct manual and/or computational analysis of narrative styles and properties of stories written about images
  • Experiment with or combine existing natural language generation, computer vision, and/or large language models for creative visual storytelling (see the sketch after this list)
  • Design evaluation criteria for assessing the quality of stories written about images
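
One hedged illustration of the second bullet (the model names are commonly used Hugging Face checkpoints we assume for the example, not models chosen by the project): caption an image with an off-the-shelf vision model, then ask a small language model to retell the caption for a chosen audience and narrative goal.

```python
from transformers import pipeline

# Literal description of the image (placeholder file name "scene.jpg").
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
caption = captioner("scene.jpg")[0]["generated_text"]

# Retell the caption with a narrative goal and audience in mind.
generator = pipeline("text-generation", model="gpt2")
prompt = f"Scene: {caption}\nTell this as a suspenseful story for children:\n"
story = generator(prompt, max_new_tokens=80)[0]["generated_text"]
print(story)
```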

Preferred Skills

  • Programming expertise for language generation, computer vision, and/or LLMs
  • Digital narratives and storytelling applied to images
  • Experimental design and applied statistics for rating and evaluating stories

Apply now

ARL 81 - Hybrid human-machine intelligent systems research

Project Name
Hybrid human-machine intelligent systems research

Project Description
Our goal is to develop hybrid systems and technologies that synergistically leverage the unique strengths of human and machine intelligence to create novel “super-human” capabilities that a single person or machine could not achieve on their own.

Our research program has various opportunities for student involvement in basic research in this area including data analysis, algorithm/model development, AI/ML implementation, literature reviews, game development, etc.

Job Description

We will meet with interested students to identify and scope a project that matches their skills and interests and aligns with our ongoing research efforts. The student will be required to attend a few weekly virtual meetings (e.g., lab meetings, journal club) and to perform research and responsibilities daily in person. The student should ideally have experience in coding and an interest in AI/ML and/or neuroscience/cognitive science.

Preferred Skills

  • Coding (Matlab, Python, etc.)
  • Machine Learning
  • Statistical analysis
  • Game design or development
  • Background or interest in cognitive science, neuroscience, psychology

Apply now

ARL 82 - Machine Learning and Computer Vision/Graphics

Project Name
Machine Learning and Computer Vision/Graphics

Project Description
Modern computer vision and graphics have undergone revolutionary changes thanks to advances in AI and machine learning (ML) technologies. This project focuses on the development of advanced AI/ML algorithms, such as generative AI, for scene perception and understanding, in particular using diverse datasets and multimodal sensors.

Job Description

The work includes pursuing technical solutions and developing core algorithms by applying advanced machine learning, image processing, and computer vision techniques. Anticipated research results include new theory and algorithm developments leading to publications in scientific forums, as well as real-world utility and software for demonstrations.

Preferred Skills

  • A dedicated and hardworking individual
  • Experience or coursework related to machine learning and computer vision
  • Strong programming, writing, and presentation skills

Apply now

ARL 83 - Unlocking eye tracking-based Adaptive Human Agent Teaming in real-world contexts

Project Name
Unlocking eye tracking-based Adaptive Human Agent Teaming in real-world contexts

Project Description
Our mission is to develop opportunistic sensing systems which leverage already existing data streams to inform agents and adapt to changing mission contexts. Specifically, we focus on leveraging eye tracking data streams such as eye movements and pupil size to classify cognitive states and strategies such as search, navigation, stress, effort, and depth of focus. Accurately classifying these states will allow an agent to adapt when needed. We have a number of data sets that can be analyzed to develop algorithms which predict and classify various cognitive states and behaviors. Example projects using eye tracking data include:

  • Model pupil size to extract relative cognitive and non-cognitive influences
  • Characterize complex states and behaviors that emerge from human and human-agent teams
  • Predict depth of focus on an individual-by-individual basis, to develop depth-of-focus-aware software that presents objects and information in AR/VR at the right depth
  • Use the pupillary light response to infer cognitive influence
  • Model visual saliency in complex dynamic environments

Job Description

If the student has a particular goal or related work at their home institution, they should briefly describe it in the application letter. The scope of the work will be determined based on the skills and interests of the selected applicant, as well as the demands of the project at the time of the internship, but may include data collection, literature review, and statistical analysis.
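
As a small illustration of the "classify cognitive states from eye-tracking streams" task (entirely synthetic features and labels, with scikit-learn assumed; real pupil data needs luminance correction and richer preprocessing), the sketch below trains a classifier on windowed pupil and fixation features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 400
effort = rng.integers(0, 2, size=n)                       # 0 = low, 1 = high
# Synthetic stand-ins: pupil dilates and fixations shorten under load.
pupil_mean = 3.0 + 0.4 * effort + rng.normal(0, 0.3, n)   # mm
fixation_dur = 250 - 40 * effort + rng.normal(0, 30, n)   # ms
X = np.column_stack([pupil_mean, fixation_dur])

clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X, effort, cv=5).mean())
```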

Preferred Skills

  • Computational modeling
  • Machine learning
  • Programming and statistical analysis in Matlab, Python, or R
  • Interest in Psychophysiology, cognitive neuroscience and/or psychology
  • Signal processing and time series analysis

Apply now

Compensation: The base salary range for all intern positions is $1,180/week (undergrad) to $1,300/week (grad), paid at a monthly or hourly rate depending on employment type. See the FAQs for a description of employment types.