ARL 32 – Research Assistant, Learning in Enhanced or Virtual Training Environments

Project Name
Learning in Enhanced or Virtual Training Environments

Project Description
This project examines how enhanced training components (such as game elements and virtual reality) affect learning outcomes from training programs. Individual personality and trait differences will be examined, as will physiological and survey-derived state measures.

Job Description
The research intern may be involved in data analysis or data collection for ongoing projects in gamified training and/or training in virtual environments.

Preferred Skills

  • Psychology
  • Human research
  • Data analysis
  • Familiarity with learning science, VR, gaming, EEG, and/or impedance cardiography

ARL 31 – Research Assistant, Device and Materials Simulations

Project Name
Materials and Device Simulations for Army Research Applications

Project Description
This project is part of ongoing emerging-materials and device research efforts at the U.S. Army Research Laboratory (ARL). One research focus of the team is to explore and investigate materials and device designs, both theoretically and experimentally, for low-power, high-speed, lighter-weight electronic devices.

Job Description
The research assistant will work with ARL scientists to investigate fundamental materials and device properties of low-dimensional nanomaterials (2D materials, 2D/3D materials, and functionalized diamond surfaces). For this study, various bottom-up materials and device modeling tools based on atomistic approaches, such as first-principles simulation (DFT) and molecular dynamics (MD), will be used. In addition, numerical and analytical modeling will be used to quantify and analyze data from the atomistic simulations and compare them to the group's experimental findings.
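
To give a flavor of the atomistic workflow involved, the sketch below uses the open-source ASE library with its toy EMT potential standing in for a production DFT or MD engine; the copper supercell and all parameters are placeholders, not the project's actual systems.

```python
# Minimal atomistic simulation sketch (ASE). EMT is a toy potential used only
# as a stand-in for a real DFT/MD engine; "Cu" is a placeholder material.
from ase.build import bulk
from ase.calculators.emt import EMT
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution
from ase.md.verlet import VelocityVerlet
from ase import units

atoms = bulk("Cu", "fcc", a=3.6).repeat((3, 3, 3))  # small test supercell
atoms.calc = EMT()
print("potential energy (eV):", atoms.get_potential_energy())

# Short molecular-dynamics run at 300 K.
MaxwellBoltzmannDistribution(atoms, temperature_K=300)
dyn = VelocityVerlet(atoms, timestep=1.0 * units.fs)
for step in range(5):
    dyn.run(20)  # 20 steps of 1 fs each
    print(step, "kinetic energy per atom (eV):", atoms.get_kinetic_energy() / len(atoms))
```

In practice, the same driving logic would be pointed at first-principles calculators running on an HPC cluster.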

Preferred Skills

  • An undergraduate or graduate student in electrical engineering, materials science, physics, or computational chemistry
  • Sound knowledge of materials and device physics concepts
  • Experience with high-performance computing (HPC) environment
  • Proficiency with atomistic materials modeling concepts
  • Interest in fundamental materials design and discovery
  • Working knowledge of quantum transport simulation tools will be advantageous
  • Familiarity with experimental characterization techniques
  • Ability to work in a collaborative environment as well as independently

ARL 30 – Research Assistant, Predicting the Effectiveness of Gamified Training

Project Name
Individual Traits and Training Effectiveness: Predicting the Effectiveness of Gamified Training

Project Description
A research focus of the Army Research Laboratory is the study of how individual traits modulate the effectiveness of game-like reward structures in enhancing training outcomes. Electroencephalography (EEG) and impedance cardiography were used to track behavior and cognitive states during training in a gamified environment, monitoring trainee motivation in a continuous manner.

Job Description
The research assistant will work with ARL scientists to identify physiological correlates of training performance and state changes based on behavioral, cardiovascular, and neural data within a gamified training environment. The summer project will aim to define components of these state changes relating to performance and trait-based measures.

Preferred Skills

  • A PhD student working in the field of cognitive neuroscience
  • Experience with EEG analysis and/or impedance cardiography preferred
  • Ability to work in a collaborative environment as well as independently
  • Interest in human behavior and training
  • Strong quantitative/programming skills

ARL 29 – Research Assistant, Predicting the Effectiveness of Gamified Training

Project Name
Individual Traits and Training Effectiveness: Predicting the Effectiveness of Gamified Training

Project Description
A research focus of the Army Research Laboratory is the study of how individual traits modulate the effectiveness of game-like reward structures in enhancing training outcomes. Electroencephalography (EEG) will be used to track behavior and cognitive states during training in a gamified environment, monitoring trainee motivation in a continuous manner.

Job Description
The research assistant will work with ARL scientists to identify state changes based on behavioral and neural data within a gamified training environment and during the subsequent transfer task. The summer project will aim to define components of these state changes relating to regulatory focus and other trait-based measures by linking behavioral data with conventional EEG analyses.

Preferred Skills

  • A PhD student working in the field of cognitive neuroscience
  • Experience with EEG/MEG analysis preferred
  • Ability to work in a collaborative environment as well as independently
  • Interest in human behavior and training
  • Strong quantitative skills

ARL 28 – Research Assistant, Neuroadaptive Training

Project Name
Building a Brain Computer Interface

Project Description

The Army Research Laboratory is interested in creating a new type of behavioral training paradigm based on physiological measures. One goal is to create a brain-computer interface (BCI) that is able to modify training in real time based on neurophysiological responses. Electroencephalography (EEG) will be used to track behavior and monitor neural activity during a task, in order to build a system for ongoing and future studies.

Job Description

The research assistant will work with ARL scientists to identify neural correlates related to performance during training and subsequently build a brain computer interface that will allow for online training changes based on brain activity. The summer project will aim to not only define the specific components relevant to training but also program a system to process data online in hopes of augmenting performance.
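
As a rough illustration of the online-processing piece, the sketch below maintains a sliding EEG window and emits an alpha-band-power feature that a BCI loop could act on. The sampling rate and the `on_new_samples` entry point are hypothetical stand-ins for a real acquisition stream (e.g., Lab Streaming Layer), not ARL's actual system.

```python
# Sliding-window EEG band-power sketch; acquisition is simulated with noise.
import numpy as np
from scipy.signal import welch

FS = 256            # sampling rate in Hz (assumed)
WINDOW = 2 * FS     # 2-second analysis window
buffer = np.zeros(WINDOW)

def band_power(signal, fs, lo, hi):
    """Mean power in [lo, hi] Hz from a Welch PSD estimate."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

def on_new_samples(samples):
    """Slide the buffer and return an alpha-band feature for the BCI loop."""
    global buffer
    buffer = np.concatenate([buffer[len(samples):], samples])
    alpha = band_power(buffer, FS, 8.0, 12.0)
    # A real system would feed this to a classifier and adapt the training task.
    return alpha

print(on_new_samples(np.random.randn(FS // 4)))  # simulate a 250 ms chunk
```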

Preferred Skills

  • A PhD student working in the field of cognitive neuroscience
  • Experience with EEG analysis, preferably in a BCI context
  • Ability to work in a collaborative environment as well as independently
  • Interest in human behavior and training
  • Strong quantitative/programming skills

ARL 27 – Research Assistant, Human Factors in Smart Battlefield Information Systems

Project Name
Human Factors in Smart Battlefield Information Systems

Project Description
As global threats evolve, it is critical that U.S. Army warfighters can access the most relevant information in military databases. Currently, warfighters, their commanding officers (COs), and command-and-control (C2) analysts are flooded with massive amounts of information, making it difficult to quickly form strategic decisions. AI-inspired algorithmic techniques that can quickly and accurately recommend information have matured; however, establishing understanding and trust in complex information systems remains a significant challenge, and under-utilization plagues systems with improperly engineered human-agent interaction (HAI).

ARL’s Battlefield Information Processing Branch is seeking to:
-Quantitatively model individual, situational, and system-design factors that affect usage of complex information systems and agents and their relationship to mission success.
-Explore novel approaches to AI and Machine Learning that are accurate and useful, but also simple and flexible.

Job Description
ARL is interested in integrating machine learning and recommender systems approaches into the personal information systems carried by each warfighter. Summer interns will assist in the design and development of both algorithmic approaches to information filtering and web-based test-beds for HAI. Interns may opt to experiment with the development of novel visualizations/interactions for AI/ML. Additionally, interns will have the opportunity to contribute their ideas to ARL’s ongoing HAI research.
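
As one illustrative example of the information-filtering component (not the branch's actual algorithm), a minimal item-based collaborative-filtering recommender over a users-by-items ratings matrix might look like this:

```python
# Item-based collaborative filtering sketch: score unseen items for a user
# via cosine similarity between item columns of a (users x items) matrix.
import numpy as np

def recommend(ratings, user, k=2):
    norms = np.linalg.norm(ratings, axis=0, keepdims=True) + 1e-9
    unit = ratings / norms
    sim = unit.T @ unit                   # item-item cosine similarity
    scores = sim @ ratings[user]          # similarity-weighted user history
    scores[ratings[user] > 0] = -np.inf   # never re-recommend seen items
    return np.argsort(scores)[::-1][:k]

ratings = np.array([[5, 0, 3, 0],         # toy data: rows=users, cols=items
                    [4, 2, 0, 1],
                    [0, 5, 4, 0]], dtype=float)
print(recommend(ratings, user=0))
```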

Preferred Skills

  • Programming, especially Java and scripting languages (Python, etc.)
  • Web development, e.g. HTML5, CSS, JavaScript
  • Databases, e.g. MySQL
  • Statistical Analysis Packages, e.g. R or SPSS
  • Algorithms, machine learning, or other AI experience

ARL 26 – Research Assistant, Multi-modal State Classification During Tactical Hand Gesturing

Project Name
Multi-modal State Classification During Tactical Hand Gesturing

Project Description
The goal of the project is to understand what information from multiple sensor systems allows for accurate, robust, and fast classification of changes in a user’s state while they perform tactical hand gestures under high cognitive load. Analysis will include extraction of human-interpretable and literature-driven features from time-series data and the application of supervised machine learning techniques to classify state changes, as sketched below.
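
A minimal sketch of that pipeline, with placeholder band-power features and synthetic data standing in for the project's real feature set and recordings:

```python
# Feature extraction from time-series windows plus a supervised classifier.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def features(window, fs=256):
    """Illustrative literature-driven features: band powers and variability."""
    freqs, psd = welch(window, fs=fs, nperseg=fs)
    theta = psd[(freqs >= 4) & (freqs < 8)].mean()
    alpha = psd[(freqs >= 8) & (freqs < 13)].mean()
    return [theta, alpha, theta / (alpha + 1e-9), window.std()]

rng = np.random.default_rng(0)
X = np.array([features(rng.standard_normal(512)) for _ in range(100)])
y = rng.integers(0, 2, size=100)  # placeholder high/low-load state labels

print(cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean())
```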

Job Description
The summer intern will be tasked with helping run participants through behavioral experiments that include the administration of physiological measurements such as EEG and impedance cardiography. The intern will also be responsible for analyzing and modeling the collected physiological and behavioral data.

Preferred Skills

  • MATLAB programming
  • Machine Learning toolkits
  • Experience in Unity/C#
  • Familiarity with physiological data is a plus

ARL 25 – Research Assistant, Simultaneous Image Restoration and Recognition for Low Quality Imagery

Project Name
Simultaneous Image Restoration and Recognition for Low Quality Imagery

Project Description
Most currently available visual recognition algorithms assume sufficient resolution of the region of interest. However, imagery obtained from surveillance videos and autonomous systems is often of lower resolution. Further, in most real-world cases, the images can be affected by noise, haze, and blur. In this project, we plan to explore simultaneous image restoration and recognition for these low-quality images through joint optimization of algorithms.

Job Description
The position will involve integrating currently available joint optimization algorithms (object recognition, de-blurring, and de-hazing) and implementing them on a unified platform for analyzing low-quality, degraded imagery. The project will also involve developing novel algorithms and architectures for processing low-quality visual data.
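
One common way to realize such joint optimization is to couple a restoration network and a recognizer under a single objective. The sketch below, written in PyTorch with deliberately tiny placeholder architectures, is illustrative only and not the project's actual models.

```python
# Joint restoration + recognition: one loss trains both networks together.
import torch
import torch.nn as nn
import torch.nn.functional as F

restorer = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 3, 3, padding=1))
recognizer = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                           nn.Linear(8, 10))
opt = torch.optim.Adam(list(restorer.parameters()) + list(recognizer.parameters()))

def joint_step(degraded, clean, labels, alpha=0.5):
    restored = restorer(degraded)
    loss = (alpha * F.mse_loss(restored, clean)        # restoration term
            + (1 - alpha) * F.cross_entropy(recognizer(restored), labels))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy batch: 4 degraded/clean 32x32 RGB pairs with 10-class labels.
print(joint_step(torch.randn(4, 3, 32, 32), torch.randn(4, 3, 32, 32),
                 torch.randint(0, 10, (4,))))
```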

Preferred Skills

  • Computer vision, deep learning, Caffe, TensorFlow

ARL 24 – Research Assistant, Socially Intelligent Assistant in AR

Project Name
Socially Intelligent Assistant in AR

Project Description
Augmented reality (AR) introduces new opportunities to enhance the successful completion of missions by supporting the integration of intelligent computational interfaces in the users’ field of view. This research project studies the role embodied conversational agents can play towards that goal. This type of interface has a virtual body and is able to communicate with the user verbally, using natural language, and nonverbally (e.g., through emotion expression). The core research question is: Can embodied conversational agents facilitate completion of missions above and beyond what is afforded by more traditional types of interfaces?

Job Description
The intern will build on an existing platform for embodied conversational agents in AR. The intern will propose a set of key functionalities for the agent, implement them, and demonstrate that they improve mission-completion performance. The proposed functionality must pertain to information that is perceived through the camera or 3D sensors available in the AR platform, and may be communicated to the user verbally and nonverbally.

Preferred Skills

  • Experience with AR platforms
  • Experience with Unity and C# programming
  • Some experience with HCI evaluation techniques
  • Some experience with scene understanding techniques and TensorFlow
  • Some experience with embodied conversational agents

ARL 23 – Programmer, Creation of Synthetic Annotated Image Training Datasets for Deep Learning Convolutional Neural Networks

Project Name
Creation of Synthetic Annotated Image Training Datasets Using Computer Graphics for Deep Learning Convolutional Neural Networks

Project Description
Work as part of a team on a project to develop and apply deep learning convolutional neural networks (DLCNNs) on field-deployable hardware:
Purpose: Accelerate deep learning algorithms to recognize people, behaviors, and objects relevant to military purposes, using computer-graphics-generated training images for complex environments.
Product: A training image generator that creates a corpus of automatically annotated images for a closed list of people, behaviors, and objects, plus optimized, fast, and accurate machine learning algorithms that can be fielded in low-power, low-cost, low-weight sensors.
Payoff: An inexpensive source of military-related training data and optimal deep learning algorithm tuning for fieldable hardware, which could be used to create semi-automatically annotated datasets for further training and be scalable for the next generation of machine learning algorithms.

Job Description
Develop scripts for ARMA3 to create “pristine” and sensor-degraded synthetic data suitable for training and testing DLCNNs (e.g., Caffe, TensorFlow, Darknet). Assets such as personnel, vehicles, aircraft, boats, and other objects will be rendered under a variety of observation and illumination angle conditions, e.g., the full daytime cycle and a range of weather conditions (clear to total overcast, low to high visibility, dry, rain, and snow).

Preferred Skills

  • Programming skills: Python, MATLAB, scripting
  • Good documentation of algorithms, code and workflow
  • Gaming Engines, e.g. ARMA3, Unreal, Unity, Blender
  • Familiarity with Windows and Linux, cloud computing
  • Familiarity with, or willingness to learn, the basics of DLCNNs

ARL 22 – Research Assistant, Real-Time Scene Understanding in Augmented Reality

Project Name
Real-Time Scene Understanding in Augmented Reality

Project Description
Augmented reality (AR) has the potential to facilitate the successful completion of missions by providing users with critical real-time information about the surrounding environment. To accomplish this, we need intelligent systems that are able to integrate camera and 3D data efficiently. Furthermore, these systems need to be able to present this information effectively in the users’ (augmented) field-of-view. This research project focuses on three core questions: 1) What information should we perceive in the environment? 2) How do we perceive this information? 3) How do we present this information to the user?

Job Description
The intern will integrate computer vision techniques in an AR platform to allow recognition of mission-critical entities, and integrate this data with the 3D representation of the environment afforded by the AR platform. The intern will also develop the ideal visual representation for this information and lead an evaluation that demonstrates improvement on relevant mission-completion metrics.

Preferred Skills

  • Experience with machine learning for scene understanding
  • Experience with AR platforms
  • Experience with Unity and C# programming
  • Experience with TensorFlow
  • Some experience with HCI evaluation techniques

ARL 21 – Research Assistant, Real-Time Facial Expression Perception to Enhance Human-Agent Collaboration

Project Name
Real-Time Facial Expression Perception to Enhance Human-Agent Collaboration

Project Description
Emotion expressions help regulate social life and communicate important information about our goals to others. Research shows that emotion expressions also have an important impact on the decisions we make with others, even when the other party is a machine. This research project studies whether endowing computational systems with the ability to perceive facial expressions in users can help promote collaboration and cooperation between humans and machines.

Job Description
The intern will develop a system capable of perceiving the user’s facial expressions in real time and integrate it with an existing web platform for embodied agents. The intern will then study whether embodied agents that can express and perceive emotion promote collaboration and cooperation with human users.

Preferred Skills

  • Experience developing or using off-the-shelf tools for real-time perception of facial expressions
  • Experience with Unity, C#, and JavaScript programming
  • Experience with HCI evaluation techniques
  • Some experience with image processing techniques and TensorFlow

ARL 20 – Research Assistant, VPU-Based Deep Learning Inference at the Edge

Project Name
VPU-Based Deep Learning Inference at the Edge

Project Description
A VPU (Vision Processing Unit) is an emerging class of system-on-chip (SoC) with dedicated artificial intelligence and neural computing capabilities for accelerating embedded visual intelligence and inference on resource-constrained devices. This project will explore VPU capabilities and develop innovative technical solutions to streamline deep neural network-based visual perception algorithms (e.g., object detection, semantic segmentation, and localization) so that they can be efficiently executed on miniaturized, ultra-low-power VPU devices at the edge.

Job Description
The work includes pursuing novel solutions and methods to run computationally intensive computer vision and deep neural network inference algorithms on an ultra-compact, self-contained VPU device (e.g., the Intel USB-based Neural Compute Stick) at high speed and low power without compromising accuracy or performance. A specified neural network-based perception algorithm (e.g., object detection) will be used for algorithm development and evaluation. Anticipated results include new theory and algorithm developments leading to publications in scientific forums, as well as real-world utility and software for demonstrations.

Preferred Skills

  • A dedicated and hardworking individual
  • Experience or coursework related to machine learning, computer vision
  • Strong programming skills

ARL 19 – Research Assistant, Self-Localization and Perception Over Rough Environments

Project Name
Self-Localization and Perception Over Rough Environments

Project Description
Robust and accurate self-tracking and localization is vital to any system or application that is spatially aware (e.g., autonomous driving/flying, location-based situational awareness, augmented reality). Current techniques, however, have significant limitations in accuracy, robustness, scalability, and reliability, and cannot fulfill operating requirements, especially in unconstrained, rough environments. This project will develop high-performance localization and perception capabilities for rough environments (e.g., urban rubble, vegetated off-road terrain, non-GPS state estimation).

Job Description
The work includes pursuing robust techniques that enable simultaneous localization and structure perception from multiple sensors. A key innovation is to combine advanced SLAM (simultaneous localization and mapping), object recognition, and monocular depth prediction with deep neural networks under a unified framework to resolve these challenging problems; merging these techniques in a single framework has not been done before. Anticipated research results include new theory and algorithm developments leading to publications in scientific forums, as well as real-world utility and software for demonstrations.

Preferred Skills

  • A dedicated and hardworking individual
  • Experience or coursework related to machine learning, computer vision
  • Strong programming skills

ARL 18 – Research Assistant, Digital Sandtable Using HoloLens AR

Project Name
Digital Sandtable Using HoloLens AR

Project Description
The goal of this project is to develop a “Digital Sandtable” prototype by applying emerging Augmented Reality (AR) and immersive techniques to visual exploration and presentation of various mission sensing data in support of mission planning, assessment, enhanced data comprehension, and decision-making.

Job Description
The work includes pursuing technical solutions, developing core algorithms, and producing a proof of concept prototype (i.e. “Digital Sandtable”) by making use of the algorithms from research efforts and off-the-shelf commercial products such as Microsoft HoloLens and augmented reality development toolkits.

Preferred Skills

  • A dedicated and hardworking individual
  • Experience or coursework related to VR/AR, game development, Unity
  • Strong programming skills

ARL 17 – Research Assistant, Medical Imaging as a Tool for Wound Ballistics

Project Name
Medical Imaging as a Tool for Wound Ballistics

Project Description
The primary purpose of this project is to research forensic aspects of ballistic injury. The motivation for this project results from a desire to better understand the ability of medical imaging tools to provide clinically- and evidentiary-relevant information on penetrating wounds caused by ballistic impacts both pre- and post-mortem.

Job Description
The research assistant will collect and analyze data, including DICOM medical images, as well as document and present findings of the work.
**Internship may be located at Keck School of Medicine on USC’s Health Science Campus.

Preferred Skills

  • Graduate student in biomedical engineering, mechanical engineering, or related field
  • Some experience working in a laboratory setting
  • Some experience in the medical field
  • Some experience with medical images or radiology
  • Experience in software for data collection, processing and analysis

 

ARL 16 – Programmer, Investigating AR/VR Human Interaction with Complex Analysis Geometry

Project Name
Investigating AR/VR Human Interaction with Complex Analysis Geometry

Project Description
This project will explore virtual and augmented reality methods for displaying Vulnerability / Lethality simulation data in a suitable context. This would include prototyping visuals and human-computer interactions in Unity and working with analysts and evaluators to determine optimal means for conveying complex groupings of data and results.

Job Description
The Army Research Laboratory (ARL) is looking for an enthusiastic, self-motivated student with a background in Unity Programming and 3D Model creation. The student will take the lead in understanding the requirements for a Vulnerability/Lethality analysis and use these requirements to define what interactions need to be explored.
**Internship may be located at Keck School of Medicine on USC’s Health Science Campus.

Preferred Skills

  • Computer Programming
  • Experience in Unity (AR/VR a plus)
  • 3D Modeling and Texturing a plus

ARL 15 – Research Assistant, Researching Biomedical Imaging Segmentation Techniques

Project Name
Researching Biomedical Imaging Segmentation Techniques

Project Description
A big part of modeling human survivability is adequately representing the variety of anatomy that exists. This project will be a meta-analysis of current automated and manual segmentation techniques, as well as anatomical atlas creation. The student will research how existing models are made and how complex they are, document techniques, and create a library of the research papers found. This research will then be written up as a paper whose conclusion suggests avenues for segmentation, image recognition, and model creation based on these findings.

Job Description
The research assistant will document all of the findings to date and present them at summer’s end. Based on what they find, the student will recommend a few avenues for improving segmentation and model-creation techniques.
**Internship may be located at Keck School of Medicine on USC’s Health Science Campus.

Preferred Skills

  • Biomedical background with focus in imaging
  • Knowledge of medical (DICOM) data and anatomy
  • Understanding of FEM models
  • Proficient writer

ARL 14 – Research Assistant, The Biomechanics of Ballistic-Blunt Impact Injuries

Project Name
The Biomechanics of Ballistic-Blunt Impact Injuries

Project Description
The primary purpose of this project is to research the mechanisms and injuries associated with ballistic-blunt impacts. The motivation for this project results from body armor design requirements. Body armor is primarily designed to prevent bullets from penetrating the body. However, in absorbing the energy of the incoming bullet, body armor can undergo a large degree of backface deformation (BFD). Higher-energy threats, new materials, and new armor designs may increase the risk of injury from these events. Even if a body armor system can stop higher-energy rounds from penetrating, the BFD may be severe enough to cause serious injury or death. Unfortunately, there is limited research on the relationship between BFD and injury, hindering new and novel armor developments. Consequently, there is a need to research these injuries and their mechanisms so that proper metrics for the evaluation of both existing and novel systems can be established.

Job Description
The research assistant will help design and execute hands-on lab research related to injury biomechanics, collect and analyze data, as well as document and present findings of the work.
**Internship may be located at Keck School of Medicine on USC’s Health Science Campus.

Preferred Skills

  • Graduate student in biomedical engineering, mechanical engineering, or related field
  • Some experience working in a laboratory setting
  • Some experience in the medical field
  • Experience in software for data collection, processing and analysis

ARL 13 – Programmer, Anatomical Shape Model in WebGL

Project Name
Anatomical Shape Model in WebGL

Project Description
This project will create a 3D shape model that morphs through different human body sizes using anthropometric data. 3D geometry and anthropometric measurements will be provided. This shape model will be placed in a 3D WebGL environment (three.js), which will allow the user to define specific features of the human in a web-based environment. The web-based environment should also export the geometry. The application will support updates with future measurements or similarly formatted anthropometric data.
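
For illustration, the morphing logic often reduces to a linear shape model whose weights are solved from target measurements. The sketch below uses Python with random placeholder data; in the actual tool this logic would live in JavaScript/three.js, and the basis and measurement mapping would come from the provided anthropometric data.

```python
# Linear anthropometric shape model: vertices = mean + basis @ w, with w
# solved from user-specified measurements. All matrices here are placeholders.
import numpy as np

n_verts, n_modes, n_meas = 5000, 10, 4
rng = np.random.default_rng(0)
mean = np.zeros(3 * n_verts)                                # mean mesh (flattened xyz)
basis = rng.standard_normal((3 * n_verts, n_modes)) * 0.01  # PCA-style shape modes
A = rng.standard_normal((n_meas, n_modes))                  # measurements = A @ w (assumed linear)

def morph(target_measurements):
    """Solve for shape weights that best match the requested measurements."""
    w, *_ = np.linalg.lstsq(A, target_measurements, rcond=None)
    return (mean + basis @ w).reshape(n_verts, 3)  # mesh ready for WebGL export

verts = morph(np.array([175.0, 80.0, 95.0, 40.0]))  # e.g., stature, chest, hip, arm
print(verts.shape)
```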

Job Description
The Army Research Laboratory (ARL) is looking for an enthusiastic, self-motivated student with a background in web-based programming. The student will take the lead in understanding the requirements for the Anatomical Shape Model Program and developing a prototype that will satisfy those requirements. The research assistant will explore approaches to solving the problem; develop working prototypes, as well as document and present their workflow.
**Internship may be located at Keck School of Medicine on USC’s Health Science Campus.

Preferred Skills

  • Computer programming (web preferred)
  • 3D web-programming experience (WebGL/three.js)
  • Some 3D animation/modeling experience (morph targets) would help but is not required

324 – Research Assistant, Human-Robot Dialogue

Project Name
Human-Robot Dialogue

Project Description
ICT has several projects involving applying natural language dialogue technology developed for use with virtual humans to physical robot platforms. Tasks of interest include remote exploration, joint decision-making, social interaction, games, and language learning. Robot platforms include humanoid (e.g. NAO) and non-humanoid flying or ground-based robots.

Job Description
This internship involves participating in the development and evaluation of dialogue systems that allow physical robots to interact with people using natural language conversation. The student intern will be involved in one or more of the following activities: 1. porting language technology to a robot platform; 2. designing tasks for human-robot collaborative activities; 3. programming robots for such activities; or 4. using a robot in experimental activities with human subjects.

Preferred Skills

Experience with one or more of:

  • Using and programming robots
  • Dialogue systems, computational linguistics
  • Multimodal signal processing, machine learning

323 – Research Assistant, Conversations with Heroes and History

Project Name
Conversations with Heroes and History

Project Description
ICT’s time-offset interaction technology allows people to have natural conversations with recorded video of people who have had extraordinary experiences, and to learn about events and attitudes in a manner similar to direct interaction with the person. Subjects will be determined at the time of the internship but might include Holocaust survivors (as part of the New Dimensions in Testimony Project), Army heroes, or others.

Job Description
The intern will assist with developing, improving, and analyzing the systems. Tasks may include running user tests, analyzing content and interaction results, and improving the systems. The precise tasks will be determined based on the skills and interests of the selected applicant, as well as the demands of the project during the time of the internship.

Preferred Skills

  • Very good spoken and written English (native or near-native competence preferred)
  • General computer operating skills (some programming experience desirable)
  • Experience in one or more of the following:
    1. Interactive story authoring & design
    2. Linguistics, language processing
    3. A related field, such as museum-based informal education

322 – Research Assistant, Extending Dialogue Interaction

Project Name
Extending Dialogue Interaction

Project Description
The project will involve investigation of techniques that go beyond the current state of the art in human-computer dialogue, which mainly focuses either on a system chatting with a single person or on assisting a person with accomplishing a single goal. The project will involve investigation of one or more of the following topics: consideration of multiple goals in dialogue, multi-party dialogue (with more than two participants), multi-lingual dialogue, multi-platform dialogue (e.g., VR and phone), automated evaluation of dialogue systems, or extended and repeated interaction with a dialogue system.

Job Description
The student intern will work with the Natural Language research group (including professors, other professional researchers, and students) to advance one or more of the research areas described above. If the student has a particular goal or related work at their home institution, they should briefly describe this in the application letter. Specific activities will depend on the project and the skills and interests of the intern, but will include one or more of the following: programming new dialogue or evaluation policies, annotation of dialogue corpora, and testing with human subjects.

Preferred Skills

  • Some familiarity with dialogue systems or natural language dialogue
  • Either programming ability or experience with statistical methods and data analysis
  • Ability to work independently as well as in a collaborative environment

321 – Programmer, Advancing Multisense Capabilities to Support Scalable Multiplatform Virtual Human Applications

Project Name
Advancing Multisense Capabilities to Support Scalable Multiplatform Virtual Human Applications

Project Description
This project will develop, test, and implement a scalable multimodal sensing platform that will enable novel research endeavors and enhance existing prototypes. The advantage of this software architecture will be its ability to easily connect automatic behavioral sensing and user-state inference capabilities with any virtual human or non-virtual-human application using a microphone, a basic webcam, and other computer vision sensors (e.g., Kinect, Intel RealSense, PrimeSense).

Job Description
The student intern will assist in developing health applications that use motion tracking and body sensing, providing programming and testing support. The student intern will work with Kinect, Intel RealSense, and other sensing devices and integrate their functionality into the software.

Preferred Skills

  • Unity 3D
  • C#
  • C++
  • Signal Processing Algorithms
  • Computer Vision
  • Windows Programming

319 – Research Assistant, Analytic Projection for Authoring and Profiling of Social Simulations

Project Name
Analytic Projection for Authoring and Profiling of Social Simulations

Project Description
The Social Simulation Lab works on modeling and simulation of social systems from small group to societal level interactions, as well as data-driven approaches to validating these models. Our approach to simulation relies on multiagent techniques where autonomous, goal-driven agents are used to model the entities in the simulation, whether individuals, groups, organizations, etc.

Job Description
The research assistant will investigate automated methods for building agent-based models of human behavior. The core of the task will be developing and implementing algorithms that can analyze human behavior data and find a decision-theoretic model (or models) that best matches that data. The task will also involve using those models in simulation to further validate their potential predictive power.
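
A toy sketch of the model-fitting core: assume a softmax (Boltzmann) decision policy over action values and fit its rationality parameter to observed choices by maximum likelihood. The values and behavior data below are placeholders.

```python
# Fit a one-parameter decision-theoretic model to observed (state, action) data.
import numpy as np
from scipy.optimize import minimize_scalar

action_values = np.array([[1.0, 0.2, 0.5],    # per-state action values (assumed)
                          [0.1, 0.9, 0.3]])
observed = [(0, 0), (0, 0), (1, 1), (1, 2)]   # (state, chosen action) pairs

def neg_log_likelihood(beta):
    logits = beta * action_values
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -sum(logp[s, a] for s, a in observed)

fit = minimize_scalar(neg_log_likelihood, bounds=(0.01, 20.0), method="bounded")
print("best-fit rationality parameter:", fit.x)
```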

Preferred Skills

  • Knowledge of multi-agent systems, especially decision-theoretic models like POMDPs.
  • Experience with Python programming.

318 – Programmer, Body-tracking for VR / AR

Project Name
Body-tracking for VR / AR

Project Description
The lab is developing a lightweight 3D human performance capture method that uses very few sensors to obtain a highly detailed, complete, watertight, and textured model of a subject (a clothed human with props) which can be rendered properly from any angle in an immersive setting. Our recordings are performed in unconstrained environments, and the system should be easily deployable. While we assume well-calibrated high-resolution cameras (e.g., GoPros), synchronized video streams (e.g., Raspberry Pi-based controls), and a well-lit environment, any existing passive multi-view stereo approach based on sparse cameras would significantly underperform dense ones due to challenging scene textures, lighting conditions, and backgrounds. Moreover, much less coverage of the body is possible when using small numbers of cameras.

Job Description
We propose a machine learning approach to this challenge, posing 3D surface capture of human performances as an inference problem rather than a classic multi-view stereo task. The intern will work with researchers to demonstrate that massive amounts of 3D training data can be used to infer visually compelling and realistic geometries and textures in unseen regions. Our goal is to capture clothed subjects (uniformed soldiers, civilians, props and equipment, etc.), which results in an immense amount of appearance variation, as well as highly intricate garment folds.

Preferred Skills

  • C++, OpenGL, GPU programming; operating systems: Windows and Ubuntu; strong math skills
  • Experience with computer vision techniques: multi-camera stereo, optical flow, facial feature detection, bilinear morphable models, texture synthesis, Markov random fields

317 – Programmer, Immersive Virtual Humans for AR / VR

Project Name
Immersive Virtual Humans for AR / VR

Project Description
The Vision and Graphics Lab at ICT pursues research, and works in production, to perform high-quality facial scans for Army training and simulations, as well as for VFX studios and game development companies. One of the lab’s most recent focuses is how machine learning can aid the creation of such datasets from single images. This requires large amounts of data, more than can be obtained from raw light stage scans alone. We are currently working on software that aids both visualization during the production pipeline and the production of images as training data for learning algorithms. The goal is to use diffuse albedo maps to learn displacement maps: after training, we can synthesize a high-quality displacement map given a flat-lighting texture map.

Job Description
The intern will assist the lab in developing an end-to-end approach for 3D modeling and rendering using deep neural network-based synthesis and inference techniques. The intern should understand computer vision techniques and have some experience with deep learning algorithms, as well as knowledge of rendering, modeling, and image processing. Work may also include researching hybrid tracking of high-resolution dynamic facial details and high-quality body performance for virtual humans.

Preferred Skills

  • C++, engineering math/physics and programming, OpenGL / Direct3D, GLSL / HLSL, Unity3D
  • Python, GPU programming, Maya, Octane render, svn/git, strong math skills
  • Knowledge of modern rendering pipelines, image processing, rigging

316 – Programmer, Real-time Rendering for Virtual Humans

Project Name
Real-time Rendering for Virtual Humans

Project Description
The Vision and Graphics Lab at ICT pursues research, and works in production, to perform high-quality facial scans for Army training and simulations, as well as for VFX studios and game development companies. One of the lab’s most recent focuses is how machine learning can aid the creation of such datasets from single images. This requires large amounts of data, more than can be obtained from raw light stage scans alone. We are currently working on software that aids both visualization during the production pipeline and the production of images as training data for learning algorithms. The goal is a feature-rich, real-time renderer that produces highly realistic renderings of humans scanned in the light stage.

Job Description
The intern will work with lab researchers to develop features in the rendering pipeline. This will include research and development of the latest techniques in physically based real-time character rendering. Ideally, the intern will have awareness of subsurface scattering techniques, hair rendering, parallax mapping, and 3D modeling and reconstruction.

Preferred Skills

  • C++, engineering math/physics and programming, OpenGL / Direct3D, GLSL / HLSL, Unity3D
  • Python, GPU programming, Maya, Octane render, svn/git, strong math skills
  • Knowledge of modern rendering pipelines, image processing, rigging

315 – Research Assistant, Mixed Reality Techniques and Technologies

Project Name
Mixed Reality Techniques and Technologies

Project Description
The ICT Mixed Reality Lab (MxR) researches and develops the techniques and technologies to advance the state-of-the-art for immersive virtual, mixed, and augmented reality experiences. With the guidance of the principal investigators (Dr. Evan Suma Rosenberg and Dr. David Krum), students working in the lab will help to design, create, and evaluate technological prototypes and/or conduct experiments to investigate specific research questions related to human-computer interaction. The specific project will be determined based on interviews with potential candidates, and selected interns will be matched to the projects that best align with their interests and skillset. For summer 2018, particular projects of interest include immersive decision making, EEG data from VR users, 3D interaction techniques, augmented reality user interface prototyping, and interaction between multiple users in shared mixed reality environments.

Job Description
Duties include brainstorming, rapid prototyping of novel techniques, developing virtual environments using the Unity game engine, running user studies, and analyzing experiment data. Some projects may include programming (C++, C#, Python, Unity) or fabrication (3D design and 3D printing).

Preferred Skills

  • Development experience using game engines such as Unity
  • Prior experience with virtual and/or augmented reality technology
  • Programming in C++, C#, or similar languages
  • Familiarity with experimental design and user study procedures
  • Prior experience with rapid prototyping equipment, such as 3D printers, laser cutters, etc. (optional)

312 – Programmer, One World Terrain

Project Name
One World Terrain

Project Description
One World Terrain (OWT) is an applied research project designed to assist the DoD in creating the most realistic, accurate, and informative representations of the physical and non-physical landscape. As part of the Army National Simulation Center’s Synthetic Training Environment (STE) concept, the project helps establish a next-generation government/industry terrain dataset for modeling and simulation (M&S) hardware and software for training and operational use.

The project seeks to:
-Construct a single 3D geospatial database for use in next-generation simulations and virtual environments
-Utilize commercial cloudfront solutions for storing and serving geospatial data
-Procedurally recreate 3D terrain using drones and other capturing equipment
-Reduce the cost and time for creating geo-specific datasets for M&S

Job Description
The programmer will work with the OWT technical lead in support of creating digital 3D global terrain capabilities that replicate the complexities of the operational environment for M&S.

Preferred Skills

  • Interest/experience in photogrammetry
  • Interest/experience with geographic information system applications and datasets
  • Unity/WebGL

311 – Research Assistant, Exploring Narrative for Immersive Experience Design

Project Name
Exploring Narrative for Immersive Experience Design

Project Description
The ICT MxR Lab researches and develops the techniques and technologies to advance the state-of-the-art for immersive virtual reality and mixed reality experiences. With the guidance of the principal investigators (Todd Richmond and David Nelson), students working in the lab will assist in creating interactions and experiences rooted in narrative principles, in order to help inform the design and implementation of more effective and compelling immersive training scenarios and applications.

Job Description
Duties will include analyzing immersive experience design to determine what is needed to create rich immersive scenarios, as well as conceptualizing, and possibly rapid-prototyping, novel interactions and techniques in immersive environments using the Unity game engine.

Preferred Skills

  • Development experience using game engines such as Unity
  • Prior experience with virtual reality technology or 3D/touch interfaces

310 – Research Assistant, Virtual Human Interviewer

Project Name
Virtual Human Interviewer

Project Description
Would you tell a computer something you wouldn’t tell another person? Our research finds that people are more comfortable talking with a virtual human interviewer than with a real human interviewer. This happens in a variety of contexts: interviews about mental health, interviews about financial status, and even when people are learning how to negotiate. Next, we need to understand the boundary conditions (when and for whom this happens). For example, it might occur only in contexts where people particularly fear being judged, or mostly among people who feel concerned about others’ judgments. This research will address these important questions, helping us use technology to let people talk about things they wouldn’t tell other people.

Job Description
The research assistant will work side by side with the lead researchers on this project, helping to design and implement the next steps of the research. With guidance from our research staff, they will also help run the study and work with the team to learn how to process and analyze the data. If appropriate, students will also be actively involved in manuscript writing for this research project.

Preferred Skills

  • Experimental design
  • Running user studies
  • Computer-human or computer-mediated interaction

308 – Research Assistant, Character Animation and Simulation Group

Project Name
Individualized Models of Motion and Appearance

Project Description
Digital characters are an important part of entertainment, simulations, and digital social experiences. Characters can be designed to emulate or imitate human-like (and non-human-like) behavior. However, humans are very complicated entities, and in order to create a convincing virtual human, it is necessary to model various elements, such as human-like appearance, human-like behaviors, and human-like interactions. 3D characters can fail to be convincing representations because of improper appearance, improper behavior, or improper reactions. The goal of this internship is to advance the state-of-the-art in character simulation by improving or adding aspects to a digital character that would make it a more convincing representation of a real person.

Job Description
Research, develop, and integrate methods for virtual characters that improve the fidelity, interactivity, or realism of those characters. Design or implement algorithms from research papers and integrate them into an animation/simulation system (SmartBody).

Preferred Skills

  • C++
  • Computer graphics and animation knowledge
  • Research in character/animation/simulation/human modeling

307 – Research Assistant, Motion Interpretation and Generation Using Deep Learning

Project Name
Real-time Behavior Interpretation

Project Description
The Real-time Behavior Interpretation project aims to model how people make sense of the behavior of others, inferring their plans, goals, intentions, emotions, and social situations from observations of their actions. This project uses as input short animated movies involving simple shapes, and outputs English narratives that correspond to human-like interpretations of the situation. To accomplish this task, the project employs contemporary deep-learning approaches to action recognition, interpretation using probability-ordered logical abduction, and natural language generation technologies.

Job Description
2018 summer interns on the RBI project will focus on applying deep learning techniques to the problem of action recognition and motion generation, using crowdsourced datasets of the motion trajectories of simple shapes.
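
A minimal sketch of the action-recognition side, assuming fixed-length (x, y) trajectories and synthetic labels in place of the crowdsourced data:

```python
# Trajectory classification with a small recurrent network (TensorFlow/Keras).
import numpy as np
import tensorflow as tf

T, N_CLASSES = 50, 4                    # timesteps and action categories (assumed)
x = np.random.randn(200, T, 2).astype("float32")  # placeholder shape trajectories
y = np.random.randint(0, N_CLASSES, size=200)     # placeholder action labels

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(T, 2)),  # encode the motion sequence
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(x, y, verbose=0))
```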

Preferred Skills

  • Experience using Tensorflow, including RNNs and CNNs
  • Web programming skills, particularly JavaScript and SVG manipulation
  • Interest in animation and motion interpretation

306 – Research Assistant, Voice-controlled Interactive Narratives for Training (VIANT)

Project Name
Voice-controlled Interactive Narratives for Training (VIANT)

Project Description
The VIANT project aims to build interactive training experiences that are audio-only, where fictional scenario content is presented as produced audio clips with professional voice actors, and player actions are accepted as spoken input using automated speech recognition.

Job Description
2018 summer interns on the VIANT project will create at least three complete interactive audio narratives for different training objectives. This will include researching the training domain, creating a post-test evaluation metric, designing a scenario structure, and writing a fictional script that can be used to produce audio content.

Preferred Skills

  • Excellent fiction writing abilities
  • Familiarity with interactive fiction approaches
  • Experience with film / radio / television production

305 – Research Assistant, Non-verbal Communication During Joint Tasks

Project Name
Non-verbal Communication During Joint Tasks

Project Description
How do you know what someone is going to do when they don’t say a word? This research explores how we communicate our intentions non-verbally. Nonverbal behaviors are an important channel of communication, and this is especially true in negotiations or joint tasks, where we might not know what the other person is going to do and our outcome depends on it. Studies have shown that humans are able to recognize non-verbal cues that signal others’ intentions. But what exactly are the cues that humans pick up? And how could we identify and track them automatically? This research tackles these important questions.

Job Description
The research assistant will work side by side with the lead researchers on this project, helping to design and implement the next steps of the research. With guidance from our research staff, they will also help run the study and work with the team to learn how to process and analyze the data. If appropriate, students will also be actively involved in manuscript writing for this research project.

Preferred Skills

  • Experimental design
  • Running user studies
  • Computer-human or computer-mediated interaction

304 – Programmer, Integrated Virtual Humans Programmer

Project Name
Integrated Virtual Humans

Project Description
The Integrated Virtual Humans (IVH) project seeks to create a wide range of virtual human systems by combining various research efforts within USC and ICT into a general Virtual Human Architecture. These virtual humans range from relatively simple, statistics-based question/answer characters to advanced cognitive agents that are able to reason about themselves and the world they inhabit. Our virtual humans can engage with real humans and each other, both verbally and nonverbally; i.e., they are able to hear you, see you, use body language, talk to you, and think about whether or not they like you. The Virtual Humans research at ICT is widely considered among the most advanced in its field and brings together a variety of research areas, including natural language processing, nonverbal behavior, vision perception and understanding, task modeling, emotion modeling, information retrieval, knowledge representation, and speech recognition.

Job Description
IVH seeks an enthusiastic, self-motivated programmer to help further advance and iterate on the Virtual Human Toolkit. Additionally, the selected intern will research and develop potential tools to be used in the creation of virtual humans. Working within IVH requires a solid understanding of general software engineering principles and distributed architectures. The work touches on a variety of computer science areas, including artificial intelligence and human-computer interaction. Given the scope of the Virtual Human Architecture, the ability to quickly learn how to use existing components and develop new ones is essential.

Preferred Skills

  • Fluent in C++, C#, or Java
  • Fluent in one or more scripting languages, such as Python, Tcl, Lua, or PHP
  • Experience with Unity
  • Excellent general computer skills
  • Background in Artificial Intelligence is a plus

303 – Research Assistant, Personal Assistant for Life Long Learning (PAL3)

Project Name
PAL3

Project Description
This research involves a mobile-based learning technology that connects many different technologies from ICT (e.g., virtual humans, NLP dialog systems, affect detection, modeling user knowledge). PAL3 is a system that monitors learning and provides intelligent recommendations and explanations. This project has many different pieces, which include gamification design, data analytics, user interface programming, interactive dialogs (e.g., with a virtual mentor), and a variety of other opportunities. The PAL3 project is unique and pushes the boundaries on some key research topics that will be influential for learning technology in years to come.

Job Description
The ideal candidate would be a quick learner and a self-starter, with a high level of technical competence in AI, UI design, or machine learning & statistics. This research involves a mid-sized team, so there are substantial opportunities for engaging with senior programmers, researchers, and students from other fields. This project also has opportunities for contributing to peer-reviewed publications.

Preferred Skills

  • Programming (C#, Python, JavaScript)
  • UI Design or Game Design (Unity, React.js, etc.)
  • Machine Learning / Statistics

302 – Programmer, Multi-Agent Architecture Programmer

Project Name
GIFT Multi-Agent Architecture

Project Description
This project involves contributing to an open source repository which leverages AI to improve learning. This research integrates a variety of different technologies, with an emphasis on agent-oriented programming and web services. The goal of this work is to make it easier to plug-and-play new modules for a learning technology, and to help design technologies that learn over time.

Job Description
This research is highly technically-oriented, with an emphasis on web services and systems integration (e.g., working on a large system with many moving pieces). The ideal candidate for this work has strong programming skills, and is looking to push those to the limit with complex problems that require solving novel problems for messaging patterns, distributed processing, and software-as-a-service infrastructure. Some opportunities for user experience design are also available for authoring tools that make configuring web services easier.

Preferred Skills

  • Strong programming fundamentals (ideally in JavaScript, Python, Java, etc.)
  • Web service programming
  • Agent-oriented programming

301 – Research Assistant, Learning Analytics Research

Project Name
SMART-E

Project Description
This project is building a service for generalized engagement metrics, with the goal of applying this service to a variety of different learning and training systems (both in real time and for analyzing already-collected data).

Job Description
This position offers opportunities to research and apply machine learning, statistics, and web service programming. As this research will involve a variety of analytics types, there will be different opportunities depending on skill level and experience (e.g., undergraduate, masters, PhD). Finally, interns will be expected to produce work and documentation that contribute to a publication in a peer-reviewed venue.

Preferred Skills

  • Machine Learning
  • Programming (Python / R / JS / C# preferred, but not required)
  • Statistics