ARL 62 – Research Assistant, Collaborative Robotics

Project Name
Collaborative Unmanned Aerial Vehicles

Project Description
ARL is developing multi-robot, collaborative systems to solve complex problems in dynamic environments. Our research covers a wide range of topics including networking, navigation, beamforming, resiliency, and swarm formation.

Job Description
This internship will focus on developing software to support our collaborative robotics research objectives. The student may use a combination of simulation, small unmanned aerial vehicles, and ground robots to demonstrate their research accomplishments.

Preferred Skills
– C++
– Python
– ROS

ARL 61 – Programmer, Multi-Agent Modeling, Simulation, and Robotics

Project Name
Agent-Based Modeling and Simulation of Human-Robot Teaming

Project Description
This project aims to create user-friendly simulations of multi-UAV (drone) systems and their human operators. The simulations must be lightweight enough to analyze large numbers (20+) of simulated humans and robots. They must also be accurate enough to be used for sizing human-robot teams in Army missions.

Job Description
The programmer will create functions and modules to be integrated into the codebase. The programmer may create models of humans and futuristic robots, and may implement some algorithms on physical robots. Primary tools will be ROS and Python.

Preferred Skills
– ROS, Python, C++, Java development
– Multi-robot coordination algorithms (e.g. swarming, traveling salesman)
– Collaborative development and version control (e.g. GitHub)
– Familiarity with physics, engineering, or robotics

ARL 60 – Programmer, Cross-Reality Common Operating Picture for Multi-Domain Operations

Project Name
AURORA

Project Description
Project AURORA (Accelerated User Reasoning for Operations, Research, and Analysis) seeks to understand how cross-reality (AR, VR, MR) can be used to enhance the common operating picture for future multi-domain battle. The AURORA project is a platform consisting of network (AURORA-NET) and interface (AURORA-XR) modules that allow researchers to conduct controlled experimentation from the ingestion of battlefield data on the network to its visualization, analysis, and actuation by humans and intelligent agents. There is currently limited literature on how best to use immersive technology for decision-making, particularly across different visualization mediums.

Job Description
The intern will assist the team in developing additional visualization and interaction methods for the AURORA-XR platform. These may involve VR, AR, or both simultaneously. Work may also include some exposure to the integration of machine learning, artificial intelligence, and networking.

Preferred Skills
– Fluent in C# and/or Python
– Experience with Unity
– Experience developing for virtual or augmented reality
– Excellent general computer skills
– Background in cognitive science or UX is a plus

ARL 59 – Research Assistant, Machine Learning-Driven Scene Perception and Understanding

Project Name
Machine Learning-Driven Scene Perception and Understanding

Project Description
Current manned and unmanned SWAP-constrained perception platforms, ground or airborne, carry multimodal sensors, with additional modalities expected in the future. This project supports one of ARL's Essential Research Programs (ERPs), AI for Maneuver and Mobility (AIMM), and focuses on the development of advanced machine learning algorithms for scene understanding using multimodal data from single as well as distributed sensing platforms, in particular a diverse dataset consisting of high-fidelity simulated images of various objects and scenes of interest.

Job Description
The work includes pursuing technical solutions and developing core algorithms by applying advanced machine learning, image processing and computer vision techniques. Anticipated research results include new theory and algorithm developments leading to publications in scientific forums and real-world utility and software for demonstrations.

Preferred Skills
– A dedicated and hardworking individual
– Experience or coursework related to machine learning, computer vision
– Strong programming skills

ARL 58 – Research Assistant, Integrated Circuit Design Engineering Intern

Project Name
Efficient, Domain-Specific, Reconfigurable Integrated Circuits (ICs)

Project Description
Our research team designs and tests integrated circuits (ICs) that efficiently perform computation tasks relevant to Army applications. ICs are designed to allow for re-configuration, similar to Field Programmable Gate Arrays (FPGAs), but to also provide efficiency in computation closer to that of Application-Specific ICs (ASICs).

Job Description
For one summer, students selected to be interns will help design, simulate, and verify digital blocks used in energy-efficient integrated circuits developed at the lab. Interns will get first-hand experience working with Army researchers and on Army problems. Interns will also have the opportunity to present their work to other interns and researchers at the lab. Example application spaces for developed circuits include (but are not limited to) Swarms, Artificial Intelligence, and Digital Signal Processing (DSP).

Preferred Skills
– Verilog coding and design experience (Cadence Incisive/Xcelium, Verilator, Icarus Verilog, etc.)
– Proficiency with MATLAB and Python
– Familiarity with digital signal processing (DSP), machine learning, and/or swarming algorithms
– Current undergraduate or graduate student in electrical engineering
– Preferred: experience with optimizing circuit performance/power consumption

ARL 57 – Research Assistant, Synthetic Data for Machine Learning

Project Name
Synthetic Data for Machine Learning

Project Description
Machine learning (ML) algorithms require vast amounts of training data to ensure good performance, and synthetic data is increasingly used to train cutting-edge ML algorithms. This research aims to develop an AI-driven model synthesis approach for generating synthetic ML training data. Advanced deep learning techniques, particularly deep generative networks and geometric deep learning, will be explored for model representation and synthesis.

Job Description
The work includes pursuing technical solutions and developing core algorithms by applying advanced machine learning and data synthesis techniques. Anticipated research results include new theory and algorithm developments leading to publications in scientific forums and real-world utility and software for demonstrations.

Preferred Skills
– A dedicated and hardworking individual
– Experience or coursework related to machine learning, signal processing, computer vision/graphics
– Strong programming skills

ARL 56 – Research Assistant, Pupil power: Unlocking the ability to use eye tracking in real-world contexts

Project Name
Investigation of Factors and Estimation of Their Influence on the Pupil Response

Project Description
Thanks to new, inexpensive, and non-invasive eye-tracking technology, the pupil response has great potential as a window into the mind for estimating cognitive states that influence performance. However, because pupil size is more strongly driven by non-cognitive factors, it is difficult to attribute pupil size changes to cognitive states. This project will contribute to ongoing efforts to overcome this obstacle by furthering our understanding of the magnitude of the various factors that influence the pupil response.

Job Description
If the student has a particular goal or related work at their home institution, they should briefly describe this in the application letter. The scope of the work will be determined based on the skills and interests of the selected applicant, as well as the demands of the project during the time of the internship, but may include data collection, literature review, and statistical analysis.

Preferred Skills
– Programming and statistical analysis in Matlab, Python, R
– Interest in Psychophysiology, cognitive neuroscience and/or psychology

ARL 55 – Research Assistant, Quantified Uncertainty Training Research Assistant

Project Name
Evaluating Quantified Uncertainty Training for Adaptation

Project Description
The overarching goal of this project is to advance fundamental understanding of cognitive skills that enable effective adaptation to dynamic and unpredictable situations as well as approaches to training those skills. This project is examining the skill of incorporating multiple sources of uncertain information to make better, faster decisions. We want to know if that skill improves adaptation to dynamic availability and quality of information, and we want to know how to train that skill.

Job Description
The selected research assistant will have the opportunity to contribute to this research. Depending on skills and interest, the intern might assist with literature review, experimental design, data collection, and/or analysis. One specific area of contribution would be to design, implement, and run an online/MTurk experiment to replicate in-lab findings.
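As context for the skill this project studies, combining multiple sources of uncertain information is classically formalized as precision-weighted fusion of independent estimates. The sketch below is an illustrative toy in Python, not project code; the function name and numbers are invented for the example.

```python
import numpy as np

def fuse_gaussian_cues(means, variances):
    """Precision-weighted fusion of independent Gaussian estimates.

    Each cue is weighted by its precision (inverse variance), so more
    reliable sources pull the fused estimate toward themselves.
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    mean = np.sum(w * np.asarray(means, dtype=float)) / w.sum()
    var = 1.0 / w.sum()  # fused estimate is more certain than any single cue
    return mean, var

# Two equally reliable cues land exactly halfway between them
m, v = fuse_gaussian_cues([10.0, 14.0], [4.0, 4.0])
print(m, v)  # 12.0 2.0
```

Note that the fused variance (2.0) is lower than either input variance (4.0): combining sources yields both a better and a more confident estimate, which is the intuition behind "better, faster decisions" from multiple uncertain inputs.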

Preferred Skills
– Experience running user studies/behavioral experiments
– Some programming experience (R, Python, MATLAB, JavaScript)
– Background/coursework in Psychology, Statistics, Data Visualization
– Experience conducting related research
– Experience with crowd-sourced/online experiments

ARL 54 – Research Assistant, Machine Learning and Computer Vision

Project Name
Machine Learning and Computer Vision

Project Description
Current manned and unmanned perception platforms, ground or airborne, carry multimodal sensors with future expectation of additional modalities. This project focuses on the development of advanced machine learning (ML) algorithms for scene understanding using multimodal data, in particular using a diverse dataset consisting of high-fidelity simulated images of various objects and scenes of interest.

Job Description
The work includes pursuing technical solutions and developing core algorithms by applying advanced machine learning, image processing and computer vision techniques. Anticipated research results include new theory and algorithm developments leading to publications in scientific forums and real-world utility and software for demonstrations.

Preferred Skills
– A dedicated and hardworking individual
– Experience or coursework related to machine learning, computer vision
– Strong programming skills

ARL 53 – Research Assistant for Materials and Device Simulations

Project Name
Materials and Device Simulations for Power Electronics

Project Description
The project is part of an ongoing emerging materials and device research effort in the US Army Research Laboratory (ARL). One focus area is exploration and investigation of materials and device designs, both theoretically and experimentally, for high-speed, high-frequency and light weight electronic devices.

Job Description
The research assistant will work with ARL scientists to investigate fundamental material and device properties of low-dimensional nanomaterials (functionalized diamond surfaces). For this study, various bottom-up materials and device modeling tools based on atomistic approaches, such as first-principles density functional theory (DFT) and molecular dynamics (MD), will be used. In addition, numerical and analytical modeling will be used to quantify and analyze data obtained from atomistic simulation to facilitate comparison to in-house experimental findings.

Preferred Skills

  • An undergraduate or graduate student in electrical engineering, materials science, physics or computational chemistry 
  • Sound knowledge of materials and device physics concepts
  • Proficiency in at least one scripting language
  • Experience with high-performance computing (HPC)
  • Proficiency with atomistic materials modeling concepts and tools, such as VASP, Quantum Espresso, and LAMMPS
  • Interest in fundamental materials design and discovery
  • Familiar with experimental characterization techniques
  • Ability to work in a collaborative environment as well as independently

ARL 52 – Research Assistant, Deep Learning Models for Human Activity Recognition Using Real and Synthetic Data

Project Name
Human Activity Recognition Using Real and Synthetic Data

Project Description
In the near future, humans and autonomous robotic agents – e.g., unmanned ground and air vehicles – will have to work together, effectively and efficiently, in vast, dynamic, and potentially dangerous environments. In these operating environments, it is critical that (a) the Warfighter is able to communicate in a natural and efficient way with these next generation combat vehicles, and (b) the autonomous vehicle is able to understand the activities that friendly or enemy units are engaged in. Recent years have, thus, seen increasing interest in teaching autonomous agents to recognize human activity, including gestures. Deep learning models have been gaining popularity in this domain due to their ability to implicitly learn the hierarchical structure in the activities and generalize beyond the training data. However, deep models require vast amounts of labeled data which is costly, time-consuming, error-prone, and requires measures to address any potential ethical concerns. Here we’ll look to synthetic data to overcome these limitations and address activity recognition in Army-relevant outdoor, unconstrained, and populated environments.

Job Description
The candidate will implement TensorFlow deep learning models for human activity recognition (e.g., 3D convolutional networks, I3D) that can be trained using real human gesture data and synthetic gesture data generated with an existing simulator. Knowledge of domain transfer techniques (e.g., GANs) may be useful. The candidate will research and demonstrate a solution to this problem.
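For applicants unfamiliar with the model family mentioned above: 3D convolutional networks extend ordinary 2D convolution with a time axis, so a single filter responds to short spatiotemporal patterns in a video clip. The sketch below implements one "valid" single-channel 3D convolution in plain NumPy to show the core operation; the function name and shapes are illustrative only and are not part of the project codebase.

```python
import numpy as np

def conv3d_valid(clip, kernel):
    """Single-channel 'valid' 3D convolution over (time, height, width).

    Real frameworks (e.g., TensorFlow's Conv3D layer) vectorize this and
    stack many such filters, but the arithmetic is the same.
    """
    T, H, W = clip.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Each output value is the filter's response to one
                # spatiotemporal patch of the clip.
                out[i, j, k] = np.sum(clip[i:i+t, j:j+h, k:k+w] * kernel)
    return out

# A toy 8-frame, 16x16 "video" and a 3x3x3 spatiotemporal filter
clip = np.random.rand(8, 16, 16)
kernel = np.random.rand(3, 3, 3)
features = conv3d_valid(clip, kernel)
print(features.shape)  # (6, 14, 14)
```

The output shrinks by (kernel size − 1) along every axis, exactly as in the 2D case; architectures like I3D stack many such layers with pooling to summarize whole gestures.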

Preferred Skills
– Experience with deep learning models for human activity recognition
– Experience with Python and TensorFlow
– Independent thinking and good communication skills

ARL 51 – Research Assistant, Visual Salience of Obscured Objects

Project Name
Visual Salience of Obscured Objects

Project Description
Visual salience is the perceptual aspect of an image that may grab a person’s attention and is often used to model visual search, as these attention-grabbing locations may help a person understand the surrounding environment faster. However, not everything that is informative will grab a person’s attention. This is especially true when an informative object is partially obscured from view (by another object, fog, dust, glare, etc.), leaving only a part of that object visible. But that part of an object can serve as a cue to the presence of the rest of the object and its potential location. This internship will start an investigation into using a part-based object model to enhance a model of visual salience.
Directorate: Computational and Information Science Directorate (CISD)
Essential Research Program (ERP): AI for Maneuver and Mobility (AIMM)

Job Description
The research assistant will read academic papers, attend project meetings, implement and test computer vision and saliency models in Python or MATLAB, possibly collect or create test data for the AIMM ERP, and write a report/paper or create a poster at the end of the internship.

Preferred Skills
Preferred: Experience with MATLAB and Python (TensorFlow or PyTorch)
Good to have: Experience with Unity or Unreal game engine development

ARL 50 – Research Assistant, Creative Visual Storytelling

Project Name
Creative Visual Storytelling

Project Description
This project seeks to discover how humans tell stories about images, and to develop computational models to generate these stories. “Creative” visual storytelling goes beyond listing observable objects and their visual properties, and takes into consideration several aspects that influence the narrative: the environment and presentation of imagery, the narrative goals of the telling, and the audience who is listening. This work involves aspects of computer vision to visually analyze the image, commonsense reasoning to understand what is happening, and natural language generation and theories of narratives to describe it in a cohesive and engaging manner. We will work with low-quality images and non-canonical scenes. Paper reference: http://www.aclweb.org/anthology/W18-1503

Job Description
Possible projects include:
– Conduct manual and/or computational analysis of the narrative styles and properties of stories written about images
– Experiment with or combine existing natural language generation and/or computer vision software for creative visual storytelling
– Work with project mentor to design evaluation criteria for assessing the quality of stories written about images

Preferred Skills
Interest in and knowledge of some combination of the following:
– Programming expertise for language generation and/or computer vision
– Digital narratives and storytelling applied to images
– Experimental design and applied statistics for rating and evaluating stories

361 – Programmer, Integrated AI for Simulation and Training

Project Name
Integrated AI for Simulation and Training

Project Description
Effective military training requires an advanced simulation that includes a range of AI capabilities, including realistic enemy tactics, friendly unit performance, and civilian behaviors, all delivered through an appropriate platform, e.g. AR, VR, Desktop, Mobile, etc. This project integrates all required capabilities into a single simulation.

Job Description
The position supports the integration of advanced research AI capabilities into a single Unity training simulation, and includes the design, development, and testing of demonstrations that highlight these integrated capabilities. The ideal candidate is fluent in Unity, has a strong affinity with modern AI and ML capabilities, is a fast learner, and can work both independently and as part of a team.

Given that this project is integrating a range of advanced technologies, it allows for exposure to a range of exciting aspects of modern simulation, game development, and AI.

Preferred Skills
 •  Unity
  •  Machine Learning
  •  AR/VR
  •  Mobile

360 – Programmer, Real-Time Rendering of Virtual Humans

Project Name
Real-Time Rendering of Virtual Humans

Project Description
The Vision and Graphics lab at ICT pursues research and works in production to perform high-quality facial scans for Army training and simulations, as well as for VFX studios and game development companies. Research in how machine learning can be used to aid the creation of such datasets using single images is one of the most recent focuses in the lab. This requires large amounts of data; more than can be achieved using only raw light stage scans. We are currently working on software to aid both in visualization of our new facial scan database and in animating and rendering virtual humans. The goal is a feature-rich, real-time renderer that produces highly realistic renderings of humans scanned in the light stage.

Job Description
The intern will work with lab researchers to develop features in the rendering pipeline. This will include research and development of the latest techniques in physically based real-time character rendering and animation. Ideally, the intern will be familiar with physically based rendering, subsurface scattering techniques, hair rendering, and 3D modeling and reconstruction.

Preferred Skills
 •  Engineering, math, physics
  •  Programming, OpenGL / Direct3D, GLSL / HLSL, Unity3D, C++, Python, GPU programming, Maya, version control (svn/git)
  •  Knowledge in modern rendering pipelines, image processing, rigging, blendshape modelling

359 – Programmer, Body Tracking for AR/VR

Project Name
Body Tracking for AR/VR

Project Description
The lab is developing a lightweight 3D human performance capture method that uses very few sensors to obtain a highly detailed, complete, watertight, and textured model of a subject (clothed human with props) which can be rendered properly from any angle in an immersive setting. Our recordings are performed in unconstrained environments and the system should be easily deployable. While we assume well-calibrated high-resolution cameras (e.g., GoPros), synchronized video streams (e.g., Raspberry Pi-based controls), and a well-lit environment, any existing passive multi-view stereo approach based on sparse cameras would significantly underperform dense ones due to challenging scene textures, lighting conditions, and backgrounds. Moreover, much less coverage of the body is possible when using small numbers of cameras.

Job Description
We propose a machine learning approach to address this challenge, posing 3D surface capture of human performances as an inference problem rather than a classic multi-view stereo task. The intern will work with researchers to demonstrate that massive amounts of 3D training data can be used to infer visually compelling and realistic geometries and textures in unseen regions. Our goal is to capture clothed subjects (uniformed soldiers, civilians, props and equipment, etc.), which entails an immense amount of appearance variation as well as highly intricate garment folds.

Preferred Skills
 •  C++, OpenGL, GPU programming, Operating Systems: Windows and Ubuntu, strong math skills
  •  Experience with computer vision techniques: multi-camera stereo, optical flow, facial feature detection, bilinear morphable models, texture synthesis, Markov random fields

358 – Programmer, Immersive Virtual Humans for AR/VR

Project Name
Immersive Virtual Humans for AR/VR

Project Description
The Vision and Graphics lab at ICT pursues research and works in production to perform high-quality facial scans for Army training and simulations, as well as for VFX studios and game development companies. Research in how machine learning can be used to aid the creation of such datasets using single images is one of the most recent focuses in the lab. This requires large amounts of data; more than can be achieved using only raw light stage scans. We are currently working on software to aid both in visualization during the production pipeline and in producing images as training data for learning algorithms. The goal is to use diffuse albedo maps to learn displacement maps; after training, we can synthesize a high-quality displacement map given a flat-lighting texture map.

Job Description
The intern will assist the lab in developing an end-to-end approach for 3D modeling and rendering using deep neural network-based synthesis and inference techniques. The intern should understand computer vision techniques and have some experience with deep learning algorithms, as well as knowledge of rendering, modeling, and image processing. Work may also include researching hybrid tracking of high-resolution dynamic facial details and high-quality body performance for virtual humans.

Preferred Skills
 •  C++; engineering, math, and physics; OpenGL / Direct3D, GLSL / HLSL
  •  Knowledge of state-of-the-art deep learning models; strong math skills; TensorFlow, PyTorch, or other deep learning frameworks
  •  Python, GPU programming, Maya, Octane render, svn/git
  •  Knowledge of modern rendering pipelines, image processing, rigging

357 – Research Assistant, Population Modeling for Analysis and Training (PopMAT)

Project Name
Population Modeling for Analysis and Training (PopMAT)

Project Description
The Social Simulation Lab works on modeling and simulation of social systems from small group to societal level interactions, as well as data-driven approaches to validating these models. Our approach to simulation relies on multi-agent techniques where autonomous, goal-driven agents are used to model the entities in the simulation, whether individuals, groups, organizations, etc.

Job Description
The research assistant will investigate automated methods for building agent-based models of human behavior. The core of the task will be developing and implementing algorithms that can analyze human behavior data and find a decision-theoretic model (or models) that best matches that data. The task will also involve using those models in simulation to further validate their potential predictive power.
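The core task above (fitting a decision-theoretic model to behavior data) can be illustrated with the simplest such model: a noisy-rational (softmax) chooser whose rationality parameter is fit by maximum likelihood. This is a minimal sketch under invented utilities and data, not the lab's actual modeling framework.

```python
import numpy as np

def softmax_choice_probs(utilities, beta):
    """P(choice) under a noisy-rational (softmax) decision model.

    Higher beta means the agent more reliably picks the highest-utility option.
    """
    z = beta * np.asarray(utilities, dtype=float)
    z -= z.max()  # numerical stability
    p = np.exp(z)
    return p / p.sum()

def fit_beta(observed_choices, utilities, betas=np.linspace(0.1, 10.0, 100)):
    """Grid-search maximum likelihood: pick the beta that best explains the data."""
    def log_lik(beta):
        p = softmax_choice_probs(utilities, beta)
        return sum(np.log(p[c]) for c in observed_choices)
    return max(betas, key=log_lik)

utilities = np.array([1.0, 0.5, 0.0])      # hypothesized option values
choices = [0, 0, 1, 0, 0, 2, 0, 1, 0, 0]   # indices of observed picks (mostly best option)
beta_hat = fit_beta(choices, utilities)
```

The fitted model can then be run forward in simulation, exactly as the job description suggests, to check whether it predicts held-out behavior; richer models (e.g., POMDPs, as in the skills list) follow the same fit-then-simulate loop with more structure.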

Preferred Skills
 •  Knowledge of multi-agent systems, especially decision-theoretic models like POMDPs.
  •  Experience with Python programming.
  •  Knowledge of psychosocial and cultural theories and models.

356 – Research Assistant, Develop Explainable Models for Reinforcement Learning

Project Name
Develop Explainable Models for Reinforcement Learning

Project Description
This project aims to develop explainable models for model-free reinforcement learning. The explainable models will be used to generate explanations of how the algorithm generates policies that are understandable to humans who interact with automation that uses reinforcement learning.

Job Description
The Research Assistant intern will work with Dr. Wang in support of the project research objectives.

Preferred Skills
 •  Solid knowledge in artificial intelligence
  •  Strong programming skills in Python

355 – Research Assistant, Teaching Artificial Intelligence through Game-Based Learning

Project Name
Teaching Artificial Intelligence through Game-Based Learning

Project Description
This project aims to develop a role-playing game to help high school students learn basic concepts in artificial intelligence.

Job Description
The Research Assistant intern will work with Dr. Wang in support of the project research objectives.

Preferred Skills
 •  Experience with building games (games for education a plus)
  •  AI specialization or focus would be ideal
  •  Master’s degree in CS

354 – Research Assistant, Charismatic Virtual Tutor

Project Name
Charismatic Virtual Tutor

Project Description
The primary goal of this project is to analyze data from charismatic individuals (audio/video of speeches) to learn the indicators of charisma in nonverbal behaviors. The outcome will then be used to procedurally generate synthesized voices and speech gestures to convey charisma in a virtual human.

Job Description
The Research Assistant intern will work with Dr. Wang in support of the project research objectives.

Preferred Skills
 •  Background in audio (voice and speech signals) processing and speech synthesis.
  •  Knowledge of AI, NLP and ML.
  •  Experience with applying ML to audio signals a plus.
  •  Experience with COVAREP, openSMILE a plus.
  •  Strong programming skills a must. C/C++, Python preferred.
  •  Minimum education requirement: Masters in CS or EE

353 – PAL3 – Research Assistant / Data Analytics / Summer Intern, SLATS – Semi-Supervised Learning for Assessment of Teams in Simulations

Project Name
SLATS – Semi-Supervised Learning for Assessment of Teams in Simulations

Project Description
Would you like to use and extend novel machine learning techniques to analyze team behavior and performance in a game-like simulation?

The Learning Sciences group at ICT has developed a general-purpose educational-data-mining pipeline to track engagement and learning for individuals. The goal of the SLATS project is to extend this pipeline to team tasks while continuing our approach of requiring only a limited amount of labeled data by using semi-supervised learning. This research will develop diagnostics of the causes for team performance that bridge theory with common use-cases for team metrics (e.g., real-time feedback, adaptive scenario events). The goal of this work is also to produce a library of metrics that is extensible, so that improved metrics and methodologies can replace older ones through empirical studies.

This research is investigating the following machine learning problems:
– Cold Start Problem: Developing models to assess team performance on semi-structured simulations based on a relatively small number of samples (e.g., < 100 teams)
– Team Diagnostics: Credit attribution between team and individual members
– Actionable Metrics: Reporting metrics in a form that can guide future training
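The semi-supervised strategy named above (stretching a small labeled set with confident pseudo-labels) can be sketched as a self-training loop. The version below uses a nearest-centroid base learner on synthetic 2-D data; every name and number is illustrative, not the SLATS pipeline.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Fit one centroid per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    """Return predicted labels and distance to the nearest centroid."""
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)], d.min(axis=0)

def self_train(X_lab, y_lab, X_unlab, threshold=1.0, rounds=5):
    """Self-training: iteratively pseudo-label confident unlabeled points.

    Only points within `threshold` of a centroid are trusted; the model is
    refit on the growing labeled set each round (the cold-start recipe).
    """
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        centroids = nearest_centroid_fit(X, y)
        labels, dist = predict(centroids, pool)
        confident = dist < threshold
        if not confident.any():
            break
        X = np.vstack([X, pool[confident]])
        y = np.concatenate([y, labels[confident]])
        pool = pool[~confident]
    return nearest_centroid_fit(X, y)

rng = np.random.default_rng(0)
X_lab = np.array([[0.0, 0.0], [4.0, 4.0]])   # only one labeled example per class
y_lab = np.array([0, 1])
X_unlab = np.vstack([rng.normal(0, 0.3, (20, 2)),   # unlabeled cluster near class 0
                     rng.normal(4, 0.3, (20, 2))])  # unlabeled cluster near class 1
model = self_train(X_lab, y_lab, X_unlab)
```

With two labeled points and forty unlabeled ones, the loop recovers both clusters, which is the spirit of assessing teams from fewer than 100 labeled samples.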

Job Description
Generally, the goal of the internship will be to expand the system, although the specific tasks will depend on the progress of the project and the interests of the candidate. Once the initial pipeline is built, a series of machine learning experiments will be required to classify and diagnose team performance with high accuracy and small training samples.

Students in the lab are encouraged to publish and interested students will be invited to participate in at least one publication related to their summer research.

Preferred Skills
 •  Python, R
  •  Machine Learning, Statistics, or Data Analytics
  •  Strong interest in data science, social/team performance, intelligent agents, and social cognition

352 – PAL3 Research Assistant / Summer Intern, Personal Assistant for Life Long Learning (PAL3)

Project Name
Personal Assistant for Life Long Learning (PAL3)

Project Description
Would you like to work on a system designed to rapidly build an AI virtual mentor from a set of video interview clips for a real-life mentor?
Mentor Panel lets students engage in virtual question-and-answer sessions with senior professionals from a variety of fields. Check out the prototype here: http://mentorpal.org/mentorpanel/

Job Description

Mentor Panel has multiple components in active development. Depending on your skills and interests, you could get involved in research on:
– Natural Language Systems & AI: Dialog systems to improve mentor conversations
– Fullstack UX: Mobile client or web app development for improved mentor interactions
– Rapid Mentor Pipelines: Content-management tools or video-processing pipeline to rapidly create or modify virtual mentors based on video interviews
– Data Mining: Applying statistics or ML to existing MentorPal data to develop new models or find important patterns relevant to publications.

We don’t require interns to come in with a lot of experience in a specific language or framework, but we are looking for bright, highly motivated interns to build on the best of breed in the open source world. It’s important to us that your time here grows your skills using tools that matter in the real world and in job markets. Here’s a short list of technology we’re actively working with or exploring in PAL3:
React/Gatsby, tensorflow/keras, NLTK, jiant, python, flask, javascript, typescript, docker, kubernetes, circleci, AWS

Students in the lab are encouraged to publish and interested students will be invited to participate in at least one publication related to their summer research.

Preferred Skills
  •  React/Gatsby, tensorflow/keras, NLTK, jiant, python, flask, javascript, typescript
  •  docker, kubernetes, circleci, AWS

351 – PAL3 – Research Programmer, Personal Assistant for Life Long Learning (PAL3)

Project Name
Personal Assistant for Life Long Learning (PAL3)

Project Description
Would you like to work on a next-generation mobile-app coach for personalized learning? 

The Personal Assistant for Life Long Learning (PAL3) is a system for delivering on-the-job training and supporting lifelong learning via mobile devices. Check out a brief video about the project here: https://youtu.be/aiUT1gfPm3k

Job Description
PAL3 has a variety of components in active development. Depending on your skill set and interests, you could get involved with research on:
– Fullstack UX: Designing and implementing engaging web-apps and mobile apps hosted by PAL3
– Content Tools: Building content development tools for intelligent tutoring systems and interactive systems
– Automation: Automating build/test/deploy processes for traditional systems (e.g., servers) and advanced systems (e.g., virtual mentors)
– AI/ML: Artificial intelligence and machine learning to give personalized recommendations to users
– Teams: Team-building mechanisms, such as game-like mobile app features or local multiplayer

We don’t require interns to come in with a lot of experience in a specific language or framework, but we are looking for bright, highly motivated interns to build on the best of breed in the open source world. It’s important to us that your time here grows your skills using tools that matter in the real world and in job markets. Here’s a short list of technology we’re actively working with or exploring in PAL3:
React (including Gatsby and React Native/XP), Unity 3D, python, javascript, typescript, node/express, graphql, mongodb, docker, kubernetes, circleci, AWS, JAM stack (e.g. Netlify), XAPI

Students in the lab are encouraged to publish and interested students will be invited to participate in at least one publication related to their summer research.

Preferred Skills
 •  React (including Gatsby and React Native/XP), Unity 3D, python, javascript
 •  typescript, node/express, graphql, mongodb, docker, kubernetes, circleci
 •  AWS, JAM stack (e.g. Netlify), XAPI

350 – Research Assistant, Identity Models for Dialogue

Project Name
Identity Models for Dialogue

Project Description
The project will involve investigation of techniques to go beyond the current state of the art in human-computer dialogue by creating explicit models of dialogue agent and human interlocutor identity, investigating human-like dialogue strategies, and synergies across multiple dialogue tasks.

Job Description
The student intern will work with the Natural Language Research Group (including professors, other professional researchers, and students) to advance one or more of the research areas described above. If the student has a particular goal or related work at their home institution they should briefly describe this in the application letter. Specific activities will depend on the project and skills and interests of the intern, but will include one or more of the following: programming new dialogue or evaluation policies, annotation of dialogue corpora, testing with human subjects.

Preferred Skills
 •  Some familiarity with dialogue systems or natural language dialogue
 •  Either programming ability or experience with statistical methods and data analysis
 •  Ability to work independently as well as in a collaborative environment

349 – Research Assistant, Conversations with Heroes and History

Project Name
Conversations with Heroes and History

Project Description
ICT’s time-offset interaction technology allows people to have natural conversations with videos of people who have had extraordinary experiences and learn about events and attitudes in a manner similar to direct interaction with the person. Subjects will be determined at the time of the internship. Previous subjects have included Holocaust survivors, sexual assault survivors, and Army heroes.

Job Description
The intern will assist with developing, improving, and analyzing the systems. Tasks may include running user tests, analyzing content and interaction results, and improving the systems. The precise tasks will be determined based on the skills and interests of the selected applicant, as well as the demands of the project during the time of the internship.

Preferred Skills
 •  Very good spoken and written English (native or near-native competence preferred)
 •  General computer operating skills (some programming experience desirable)
 •  Experience in one or more of the following: interactive story authoring & design; linguistics or language processing; a related field such as museum-based informal education

348 – Research Assistant, Narrative Summarization

Project Name
Narrative Summarization

Project Description
The Narrative Summarization project at ICT explores new technologies for generating textual descriptions of time-series data from multiplayer interactive games (military training simulations).

Job Description
Interns working on this project will develop new software for transforming structured, formal descriptions of gameplay events into English text using a combination of linguistic templates, grammatical transformation rules, and deep neural network language models. Interns will work directly with the faculty member / project leader, write software code, write documentation and reports, and conduct evaluations / experiments. Successful applicants will have skills in both software engineering (Python, C#, and/or Rust) and in computational linguistics (language modeling, syntactic parsing, and/or formal semantics).
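As a rough illustration of the linguistic-template side of this pipeline, the sketch below maps structured event records to English sentences. The event schema, field names, and templates here are invented for illustration and are not taken from the project:

```python
# Minimal sketch: template-based realization of structured gameplay
# events into English text (hypothetical event schema).

EVENT_TEMPLATES = {
    "move":   "{actor} moved to {location}.",
    "engage": "{actor} engaged {target} at {location}.",
}

def realize(event: dict) -> str:
    """Fill the linguistic template matching the event's type."""
    template = EVENT_TEMPLATES[event["type"]]
    return template.format(**event)

def summarize(events: list) -> str:
    """Concatenate realized sentences into a simple narrative summary."""
    return " ".join(realize(e) for e in events)

events = [
    {"type": "move", "actor": "Squad A", "location": "the ridge"},
    {"type": "engage", "actor": "Squad A", "target": "an enemy patrol",
     "location": "the ridge"},
]
print(summarize(events))
# → Squad A moved to the ridge. Squad A engaged an enemy patrol at the ridge.
```

In the actual project, such template output would be combined with grammatical transformation rules and neural language models rather than used alone.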

Preferred Skills
 •  Software engineering (Python, C#, and/or Rust)
 •  Computational linguistics (language modeling, syntactic parsing, and/or formal semantics)

347 – Programmer, One World Terrain (OWT)

Project Name
One World Terrain (OWT)

Project Description
One World Terrain (OWT) is an applied research effort focusing on researching and prototyping capabilities that support a fully geo-referenced 3D planetary model for use in the Army’s next-generation training and simulation environments. USC-ICT’s research exploits new techniques and advances in the focus areas of collection, processing, storage and distribution of geospatial data to various runtime applications.
The project seeks to:
• Construct a single 3D geospatial database for use in next-generation simulations and virtual environments
• Procedurally recreate 3D terrain using drones and other capturing equipment
• Extract semantic features from raw 3D terrain and point clouds to build a simulation-ready environment
• Develop efficient run-time applications for terrain visualization and simulation
• Reduce the cost and time for creating geo-specific datasets for M&S
• Leverage commercial solutions for the storage, distribution, and serving of geospatial data

Job Description
The programmer will work with the OWT team to create digital 3D global terrain capabilities that replicate the complexities of the next-generation operational environment for M&S.

Preferred Skills
 •  Experience with machine learning and computer vision (Python, TensorFlow, PyTorch)
 •  Experience with 3D point cloud and mesh processing
 •  Experience with Unity/Unreal game engines and related programming skills (C++/C#)
 •  Experience with photogrammetry reconstruction processes
 •  Web services
 •  3D rendering in browsers
 •  Interest/experience with Geographic Information System (GIS) applications and datasets

346 – Research Assistant, The Sigma Cognitive Architecture

Project Name
The Sigma Cognitive Architecture

Project Description
This project is developing the Sigma cognitive architecture – a computational hypothesis about the fixed structures underlying a mind – as the basis for constructing human-like, autonomous, social, cognitive (HASC) systems. Sigma is based on an extension of the elegant but powerful formalism of graphical models, enabling the combination of statistical/neural and symbolic aspects of intelligence. Although it is built in Lisp, Sigma’s core algorithms are in the process of being ported to C.

Job Description
We are looking for a student interested in developing, applying, analyzing, and/or evaluating new intelligent capabilities in an architectural framework.

Preferred Skills
– Programming (Lisp preferred, but can be learned after arrival)
– Graphical models and/or neural networks (experience preferred, but ability to learn quickly is essential)
– Cognitive architectures (experience preferred, but interest is essential)