ARL 49 – Research Assistant, Deep Learning-Based Holistic Scene Understanding Using Heterogeneous Platforms

Project Name
Deep Learning-Based Holistic Scene Understanding Using Heterogeneous Platforms

Project Description
The goal of this project is to provide real-time situational awareness to combat arms units in complex environments by capturing a comprehensive view of a battlefield. Specifically, we plan to perform holistic scene understanding using multiple heterogeneous platforms (ground and aerial) with multimodal sensors (e.g., visible/thermal cameras, acoustics) distributed over a region of interest.

Job Description
The student’s work will involve developing AI/ML approaches for distributed learning on heterogeneous platforms with multimodal data. This includes: a) developing lightweight ML methods (classification, detection, tracking, etc.) that can operate in resource-constrained environments; b) developing deep learning-based networks that learn from each other by sharing local analytics results (classification labels, object bounding boxes, simple tracking data, etc.) from neighboring nodes and combining them at individual nodes for further refinement, providing improved scene understanding.
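
As an illustration only (not the project’s chosen method), the sketch below shows one way a node might combine classification results shared by neighboring platforms, assuming each node reports a confidence vector over a common label set; the label names and trust weights are invented.

```python
import numpy as np

# Hypothetical shared label set; each node reports softmax-style
# confidences over these classes.
LABELS = ["person", "vehicle", "animal", "clutter"]

def fuse_neighbor_labels(local_probs, neighbor_probs, neighbor_weights):
    """Confidence-weighted averaging of local and neighbor class
    probabilities; returns the refined label and fused distribution."""
    fused = np.asarray(local_probs, dtype=float)
    total_weight = 1.0  # the local node gets unit weight
    for probs, weight in zip(neighbor_probs, neighbor_weights):
        fused += weight * np.asarray(probs, dtype=float)
        total_weight += weight
    fused /= total_weight
    return LABELS[int(np.argmax(fused))], fused

label, fused = fuse_neighbor_labels(
    local_probs=[0.40, 0.35, 0.15, 0.10],
    neighbor_probs=[[0.20, 0.60, 0.10, 0.10],   # aerial node
                    [0.30, 0.50, 0.10, 0.10]],  # ground node
    neighbor_weights=[0.8, 0.5],
)
print(label, np.round(fused, 3))  # neighbors tip the decision to "vehicle"
```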

Preferred Skills
Deep learning, machine learning, computer vision, Caffe, TensorFlow

ARL 48 – Research Assistant, Socially Intelligent Assistant in AR

Project Name
Socially Intelligent Assistant in AR

Project Description
Augmented reality (AR) introduces new opportunities to enhance the successful completion of missions by supporting the integration of intelligent computational interfaces in the users’ field of view. This research project studies the role embodied conversational agents can play towards that goal. This type of interface has a virtual body and can communicate with the user verbally, using natural language, and nonverbally (e.g., through emotion expression). The core research question is: Do embodied conversational interfaces improve decision-making quality and efficiency when compared to more traditional types of interfaces?

Job Description
The candidate will develop this research on an existing platform for embodied conversational agents in AR. The candidate will have to propose a set of key functionalities for the agent, implement them, and demonstrate that they improve decision-making performance. The proposed functionality must pertain to information that is perceived through the camera or 3D sensors available in the AR platform, and may be communicated to the user verbally and nonverbally.

Preferred Skills
– Experience with AR platforms
– Experience with Unity and C# programming
– Some experience with HCI evaluation techniques
– Some experience with scene understanding techniques and TensorFlow
– Some experience with embodied conversational agents

ARL 47 – Research Assistant, Optimization And Multi-Agent Controls

Project Name
Agent-Based Modeling and Simulation of Human-Robot Teaming

Project Description
This project aims to create user-friendly simulations of multi-UAV (drone) systems and their human operators. The simulations must be lightweight enough to analyze large numbers (20+) of humans and agents at once, and accurate enough to enable the end user to make system design decisions, such as the number of personnel and quality of robots required to complete a mission. UAV-centered Army missions are used as scenarios for the analysis, and we investigate the performance of current and futuristic technology.

Job Description
The RA will assist the lead by implementing state-of-the-art optimization algorithms and/or developing new algorithms to optimize multiple objectives. For example, an Army UAV scenario may require simultaneously maximizing speed, minimizing cost, and maximizing stealth. The RA will also implement and/or develop scalable algorithms for controlling multiple simulated robots. Examples include collision avoidance algorithms for UAVs, or task distribution algorithms for teams of humans and UAVs.
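
To make the multi-objective idea concrete, here is a minimal, hedged sketch of Pareto filtering over candidate mission plans; the plan names, scores, and the "higher is better" convention (cost negated) are all illustrative assumptions.

```python
import numpy as np

# Hypothetical candidate plans scored on (speed, -cost, stealth);
# every objective is oriented so that higher is better.
plans = {
    "fast_direct":   np.array([0.9, -0.7, 0.2]),
    "cheap_convoy":  np.array([0.4, -0.2, 0.3]),
    "stealth_route": np.array([0.5, -0.5, 0.9]),
    "worst_case":    np.array([0.3, -0.8, 0.1]),
}

def dominates(a, b):
    """a dominates b if it is at least as good on every objective and
    strictly better on at least one."""
    return bool(np.all(a >= b) and np.any(a > b))

pareto = [name for name, score in plans.items()
          if not any(dominates(other, score)
                     for other_name, other in plans.items()
                     if other_name != name)]
print("Pareto-optimal plans:", pareto)  # "worst_case" is filtered out
```

A scalarization (e.g., a weighted sum of the three objectives) would instead pick a single plan, at the cost of fixing the trade-off up front.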

Preferred Skills
– Combinatorial optimization (e.g. traveling salesman problem, vehicle routing problem)
– Multi-robot controls (e.g. collision avoidance, path planning)
– Multi-objective optimization
– Programming (Java, Python, C++, and R preferred)
– Tradeoff analysis
– Industrial or Systems Engineering
– Familiarity with agent-based modeling (e.g. NetLogo, MASON, AnyLogic, GAMA, AFSIM)

ARL 46 – Research Assistant, Machine Scene Understanding from Multimodal Data

Project Name
Machine Scene Understanding from Multimodal Data

Project Description
Current manned and unmanned perception platforms, ground or airborne, carry multimodal imagers and sensors such as electro-optical/infrared cameras, depth sensors, and LiDAR sensors, with future expectation of additional modalities. This project focuses on the development of machine learning (ML) networks for scene understanding using multimodal data, in particular using a diverse dataset consisting of high-fidelity simulated RGB color and IR images of various objects and scenes of interest.

Job Description
The work includes pursuing technical solutions and developing core algorithms by applying advanced machine learning, image processing and data fusion techniques. Anticipated research results include new theory and algorithm developments leading to publications in scientific forums and real-world utility and software for demonstrations.

Preferred Skills
– A dedicated and hardworking individual
– Experience or coursework related to machine learning, computer vision
– Strong programming skills

ARL 45 – Programmer, Multi-Agent Modeling And Simulation

Project Name
Agent-Based Modeling and Simulation of Human-Robot Teaming

Project Description
This project aims to create user-friendly simulations of multi-UAV (drone) systems and their human operators. The simulations must be lightweight enough to analyze large numbers (20+) of humans and agents at once, and accurate enough to enable the end user to make system design decisions, such as the number of personnel and quality of robots required to complete a mission. UAV-centered Army missions are used as scenarios for the analysis, and we investigate the performance of current and futuristic technology.

Job Description
The programmer will create functions and modules that can be integrated into the existing codebase. The programmer may create models of new asset types (e.g. futuristic flying vehicles) based on their physics and mechanics. Under guidance from the lead, the programmer may implement models of human operators.

Preferred Skills
– Java, Python, and C++ development
– Collaborative development (e.g. GitHub, Bitbucket)
– Integrating features from diverse programs to enable new analysis
– Familiarity with physics, engineering, or robotics
– Familiarity with UAVs (e.g. drone racing or design)
– Familiarity with agent-based modeling (e.g. NetLogo, MASON, AnyLogic, GAMA, AFSIM)

ARL 44 – Research Assistant, Monocular Visual Localization Assisted with Deep Learning

Project Name
Monocular Visual Localization Assisted with Deep Learning

Project Description
Robust and accurate localization is vital to any spatially-aware intelligent system and application, including autonomous driving, robot navigation, location-based situational awareness, and augmented reality. This project will develop high-performance self-tracking and localization techniques using a single monocular camera, suitable for intelligent perception on low Size, Weight and Power (SWaP) platforms.

Job Description
The work includes pursuing technical solutions and developing core algorithms by applying advanced machine learning, image processing and object recognition techniques. Anticipated research results include new theory and algorithm developments leading to publications in scientific forums and real-world utility and software for demonstrations.

Preferred Skills
– A dedicated and hardworking individual
– Experience or coursework related to machine learning, computer vision
– Strong programming skills

ARL 43 – Research Assistant, Human Modeling and Simulation

Project Name
Agent-Based Modeling and Simulation of Human-Robot Teaming

Project Description
This project aims to create user-friendly simulations of multi-UAV (drone) systems and their human operators. The simulations must be lightweight enough to analyze large numbers (20+) of humans and agents at once, and accurate enough to enable the end user to make system design decisions, such as the number of personnel and quality of robots required to complete a mission. UAV-centered Army missions are used as scenarios for the analysis, and we investigate the performance of current and futuristic technology.

Job Description
The RA will assist the lead by researching quantifiable aspects of human performance. The scenarios considered will have UAV operators in a stressful environment trying to complete Army-relevant missions such as search and rescue. The RA will synthesize research on human performance so that the team can mathematically model and simulate humans in this environment (e.g. ability to detect UAVs flying overhead, number of UAVs a person can simultaneously control).

Preferred Skills
– Human factors engineering
– Human-centered design
– Cognitive ergonomics
– Familiarity with UAVs (e.g. drone racing or design)
– Programming (Java, C++, and Python preferred)
– Familiarity with agent-based modeling (e.g. NetLogo, MASON, AnyLogic, GAMA, AFSIM)

ARL 42 – Research Assistant, Deep Learning Models for Human Activity Recognition Using Real and Synthetic Data

Project Name
Human Activity Recognition Using Real and Synthetic Data

Project Description
In the near future, humans and autonomous robotic agents – e.g., unmanned ground and air vehicles – will have to work together, effectively and efficiently, in vast, dynamic, and potentially dangerous environments. In these operating environments, it is critical that (a) the Warfighter is able to communicate in a natural and efficient way with these next-generation combat vehicles, and (b) the autonomous vehicle is able to understand the activities that friendly or enemy units are engaged in. Recent years have thus seen increasing interest in teaching autonomous agents to recognize human activity, including gestures. Deep learning models have been gaining popularity in this domain due to their ability to implicitly learn the hierarchical structure in the activities and generalize beyond the training data. However, deep models require vast amounts of labeled data, which is costly, time-consuming, and error-prone to collect, and requires measures to address any potential ethical concerns. Here we look to synthetic data to overcome these limitations and address activity recognition in Army-relevant outdoor, unconstrained, and populated environments.

Job Description
The candidate will implement TensorFlow deep learning models for human activity recognition – e.g., 3D conv nets, I3D – that can be trained on both real human gesture data and synthetic gesture data (generated using an existing simulator). Knowledge of domain transfer techniques (e.g., GANs) may be useful. The candidate will research and demonstrate a solution to this problem.
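
As a rough sketch of the starting point (not the project’s actual architecture), a minimal 3D-convolutional clip classifier in TensorFlow might look like the following; the clip shape and class count are placeholders.

```python
import tensorflow as tf

NUM_CLASSES = 10                  # hypothetical gesture vocabulary size
CLIP_SHAPE = (16, 112, 112, 3)    # frames x height x width x channels

# Small Conv3D baseline; I3D and similar models are much deeper.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=CLIP_SHAPE),
    tf.keras.layers.Conv3D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPool3D(pool_size=2),
    tf.keras.layers.Conv3D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPool3D(pool_size=2),
    tf.keras.layers.GlobalAveragePooling3D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The same model could be trained first on synthetic clips and then fine-tuned on real ones, which is where domain-transfer techniques would enter.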

Preferred Skills
– Experience with deep learning models for human activity recognition
– Experience with Python and TensorFlow
– Independent thinking and good communication skills

ARL 41 – Research Assistant, Wound Ballistics

Project Name
Medical Imaging as a Tool for Wound Ballistics

Project Description
The primary purpose of this project is to research forensic aspects of ballistic injury. The motivation for this project results from a desire to better understand the ability of medical imaging tools to provide clinically- and evidentiary-relevant information on penetrating wounds caused by ballistic impacts both pre- and post-mortem.

Job Description
The research assistant will collect and analyze data, including DICOM medical images, as well as document and present findings of the work.

Preferred Skills
– Graduate student in biomedical engineering, mechanical engineering, or related field
– Some experience working in a laboratory setting
– Some experience in the medical field
– Some experience with medical images or radiology
– Experience in software for data collection, processing and analysis

ARL 40 – Research Assistant, The Biomechanics of Ballistic-Blunt Impact Injuries

Project Name
The Biomechanics of Ballistic-Blunt Impact Injuries

Project Description
The primary purpose of this project is to research the mechanisms and injuries associated with ballistic-blunt impacts. The motivation for this project results from body armor design requirements. Body armor is primarily designed to prevent bullets from penetrating into the body. However, to absorb the energy of the incoming bullet, body armor can undergo a large degree of backface deformation (BFD). Higher energy threats, new materials and new armor designs may increase the risk of injury from these events. Even if the body armor systems can stop higher energy rounds from penetrating, the BFD may be severe enough to cause serious injury or death. Unfortunately, there is limited research on the relationship between BFD and injury, hindering new and novel armor developments. Consequently, there is a need to research these injuries and their mechanisms so that proper metrics for the evaluation of both existing and novel systems can be established.

Job Description
The research assistant will help design and execute hands-on lab research related to injury biomechanics, collect and analyze data, as well as document and present findings of the work.

Preferred Skills
– Graduate student in biomedical engineering, mechanical engineering, or related field
– Some experience working in a laboratory setting
– Some experience in the medical field
– Experience in software for data collection, processing and analysis

ARL 39 – Research Assistant, Computer Vision/Machine Learning Researcher

Project Name
Zero-Shot Learning for Semantic Scene Recognition

Project Description
This project analyzes images and videos using deep learning-based zero-shot and/or few-shot learning techniques for semantic scene recognition applications such as detection, action/activity recognition, segmentation, and captioning. We will develop novel and effective zero-shot/few-shot learning approaches to handle the many real-world scenarios where labeled data is sparse.
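
As a toy illustration of the zero-shot idea (all vectors below are made-up stand-ins for real visual and semantic embeddings), classification can proceed by matching an image embedding against class-description embeddings in a shared space, so classes with no training images remain predictable:

```python
import numpy as np

# Invented 3-d "semantic" embeddings for three scene classes; in
# practice these would come from attributes or text encoders.
class_embeddings = {
    "beach":  np.array([0.9, 0.1, 0.0]),
    "forest": np.array([0.1, 0.9, 0.2]),
    "street": np.array([0.0, 0.2, 0.9]),  # assume: unseen in training
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_zero_shot(image_embedding):
    # Pick the class whose semantic embedding best matches the image.
    return max(class_embeddings,
               key=lambda c: cosine(image_embedding, class_embeddings[c]))

print(classify_zero_shot(np.array([0.05, 0.25, 0.80])))  # -> "street"
```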

Job Description
You’ll work: 1) on a problem related to zero-shot/few-shot learning on images and videos for semantic scene recognition, including detection, action/activity recognition, segmentation, and captioning; 2) independently, carrying out a literature survey of state-of-the-art approaches and devising a novel method; 3) towards publishing a paper at the end of the internship.

Preferred Skills
– The ability to write code (Python) for computer vision/machine learning techniques
– Be familiar with deep learning frameworks (PyTorch, Caffe, etc)
– An advanced degree (MS or PhD) in computer science or a relevant field
– Previous experience implementing deep learning algorithms to solve problems in computer vision/machine learning

ARL 38 – Research Assistant, Materials and Device Simulations

Project Name
Materials and Device Simulations for Emerging Electronics

Project Description
The project is part of an ongoing emerging materials and device research effort at the US Army Research Laboratory (ARL). One focus area is the exploration and investigation of materials and device designs, both theoretically and experimentally, for low-power, high-speed, and lightweight electronic devices.

Job Description
The research assistant will work with ARL scientists to investigate fundamental material and device properties of low-dimensional nanomaterials (2D materials and functionalized diamond surfaces). For this study, various bottom-up materials and device modeling tools based on atomistic approaches, such as first-principles density functional theory (DFT) and molecular dynamics (MD), will be used. In addition, numerical and analytical modeling will be used to quantify and analyze data obtained from atomistic simulation to facilitate comparison with in-house experimental findings.

Preferred Skills
– An undergraduate or graduate student in electrical engineering, materials science, physics or computational chemistry
– Sound knowledge of materials and device physics concepts
– Proficiency in at least one scripting language
– Proficiency with atomistic materials modeling concepts
– Interest in fundamental materials design and discovery

ARL 37 – Programmer, Synthetic Image Generator for Deep Learning Using Unity 3D

Project Name
Creation of Synthetic Annotated Image Training Datasets Using Computer Graphics for Deep Learning Convolutional Neural Networks

Project Description
Work as part of a team on a project to develop and apply deep learning convolutional neural networks (DLCNNs) on field-deployable hardware.
Purpose: Accelerate deep learning algorithms to recognize people, behaviors and objects relevant to military purposes, using computer graphics-generated training images for complex environments.
Product: A training image generator that creates a corpus of automatically annotated images for a closed list of people, behaviors and objects, plus optimized, fast and accurate machine learning algorithms that can be fielded in low-power, low-cost and low-weight sensors.
Payoff: An inexpensive source of military-related training data and optimal deep learning algorithm tuning for fieldable hardware, which could be used to create semi-automatically annotated datasets for further training and scale to the next generation of machine learning algorithms.

Job Description
Develop a Unity 3D-based image generator to create “pristine” and sensor-degraded synthetic data suitable for training and testing DLCNNs (e.g. Caffe, TensorFlow, DarkNet). Assets such as personnel, vehicles, aircraft, boats and other objects will be rendered under a variety of observation and illumination conditions, e.g. full daytime cycle and weather conditions (clear to total overcast, low to high visibility, dry, rain, snow).

Preferred Skills
– Familiarity with Unity3D gaming engine
– Able to program in C#, shaders
– Self-motivated and able to work with existing code and GitHub

ARL 36 – Research Assistant, Machine Learning for Autonomous Visual Navigation

Project Name
Navigation Aiding Sources

Project Description
In this project we develop the theoretical concepts and algorithms for successful navigation of systems using terrain matching and geo-registration techniques for airborne platforms. The focus is on real-time implementation on embedded platforms and successful operation with limited training data. Object and landmark detection and tracking, self-localization, and networked and collaborative systems are relevant sub-topics for exploration.

Job Description
The candidate will be well versed in machine learning and computer vision, with additional knowledge of navigation and localization strategies. The applicant will work alongside a multidisciplinary team of engineers and researchers located at ARL West and Aberdeen Proving Ground, MD. Algorithm development and subsequent implementation are required for this position.

Preferred Skills
– Machine learning
– Computer vision

ARL 35 – Research Assistant, Creative Visual Storytelling

Project Name
Creative Visual Storytelling

Project Description
This project seeks to discover how humans tell stories about images, and to develop computational models to generate these stories. “Creative” visual storytelling goes beyond listing observable objects and their visual properties, and takes into consideration several aspects that influence the narrative: the environment and presentation of imagery, the narrative goals of the telling, and the audience who is listening. This work involves aspects of computer vision to visually analyze the image, commonsense reasoning to understand what is happening, and natural language generation and theories of narratives to describe it in a cohesive and engaging manner. We will work with low-quality images and non-canonical scenes. Paper reference: http://www.aclweb.org/anthology/W18-1503

Job Description
Possible projects include:
– Develop software framework for crowdsourcing the annotation of stories written about images
– Conduct manual and/or computational analysis of the narrative styles and properties of stories written about images
– Experiment with or combine existing natural language generation and/or computer vision software for creative visual storytelling
– Work with project mentor to design evaluation criteria for assessing the quality of stories written about images

Preferred Skills
Interest in and knowledge of some combination of the following:
– Programming expertise for language generation and/or computer vision
– Digital narratives and storytelling applied to images
– Experimental design and applied statistics for rating and evaluating stories

ARL 34 – Research Assistant, Individual Response to Immersive Technology

Project Name
Individual Response to Immersive Technology

Project Description
This project examines the role of individual characteristics (stable traits and transient states) that may influence response to immersive technologies such as virtual reality and virtual environments. The project examines these effects in the context of spatial learning and navigation tasks.

Job Description
Intern will score and/or analyze existing data to uncover relationships among individual traits, states, immersive technology, and performance on the tasks. Intern may also be involved in ongoing related research activities.

Preferred Skills
– Statistical analysis
– Knowledge of MATLAB or similar programs is a plus
– Interest in psychology, virtual reality, and/or learning technology

ARL 33 – Programmer, Individualized Gamification Demonstration

Project Name
Individualized Gamification: Predicting Performance from a Short Questionnaire

Project Description
This project is investigating individualized gamified learning. Past work has shown that the results of an extensive personality/psychological trait questionnaire are able to predict an individual’s performance on a naturalistic training task. The goal of the summer intern project is to demonstrate that a shorter questionnaire could achieve similar predictive power.

Job Description
Job scope will vary based on intern interests and capabilities. At minimum, the intern will program a simple interface to administer a short questionnaire and display a performance prediction based on existing models. Extensions could include statistical analyses to select a subset of questions and pilot data collection to validate predictions.
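
One plausible shape for the question-selection extension, sketched on random stand-in data (the item count, selection rule, and model are illustrative assumptions, not the project’s existing models):

```python
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_items = 200, 50
answers = rng.normal(size=(n_respondents, n_items))
# Synthetic "ground truth": performance depends on items 3 and 17.
performance = (0.8 * answers[:, 3] + 0.5 * answers[:, 17]
               + rng.normal(scale=0.5, size=n_respondents))

# Rank items by absolute correlation with performance; keep the top 5.
corrs = np.array([abs(np.corrcoef(answers[:, j], performance)[0, 1])
                  for j in range(n_items)])
top_k = np.argsort(corrs)[::-1][:5]
print("Most predictive items:", sorted(top_k.tolist()))

# Fit a least-squares predictor on just the retained items.
X = np.column_stack([answers[:, top_k], np.ones(n_respondents)])
weights, *_ = np.linalg.lstsq(X, performance, rcond=None)
print("Item weights (plus intercept):", np.round(weights, 2))
```

Factor analysis, as listed in the skills below, would be the more principled route when items are meant to measure latent traits rather than predict performance directly.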

Preferred Skills
– Programming — Python or R preferred
– Statistics — Factor analysis & related techniques
– Interest in psychology, cognitive neuroscience, and/or gamified learning

344 – Programmer, Immersive Virtual Humans for AR/VR

Project Name
Immersive Virtual Humans for AR/VR

Project Description
The Vision and Graphics Lab at ICT pursues research and works in production to perform high-quality facial scans for Army training and simulations, as well as for VFX studios and game development companies. Research into how machine learning can aid the creation of such datasets from single images is one of the lab’s most recent focuses. This requires large amounts of data; more than can be achieved using only raw light stage scans. We are currently working on software to aid both in visualization during the production pipeline and in producing images as training data for learning algorithms. The goal is to use diffuse albedo maps to learn the displacement maps. After training, we can synthesize a high-quality displacement map given a flat-lighting texture map.
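
In spirit (not the lab’s actual pipeline), the albedo-to-displacement mapping is an image-to-image regression; a minimal PyTorch sketch on random stand-in tensors:

```python
import torch
import torch.nn as nn

# Tiny fully-convolutional network mapping a 3-channel albedo texture
# to a 1-channel displacement map, trained with an L1 loss; a real
# model would be far deeper (e.g., a U-Net or a GAN generator).
net = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Random stand-ins for a batch of (albedo, displacement) texture pairs.
albedo = torch.rand(4, 3, 256, 256)
displacement = torch.rand(4, 1, 256, 256)

pred = net(albedo)                 # one training step
loss = loss_fn(pred, displacement)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"L1 loss after one step: {loss.item():.4f}")
```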

Job Description
The intern will assist the lab in developing an end-to-end approach for 3D modeling and rendering using deep neural network-based synthesis and inference techniques. The intern should understand computer vision techniques and have some experience with deep learning algorithms, as well as knowledge of rendering, modeling, and image processing. Work may also include researching hybrid tracking of high-resolution dynamic facial details and high-quality body performance for virtual humans.

Preferred Skills
– C++, Engineering math physics and programming, OpenGL / Direct3D, GLSL / HLSL, Unity3D
– Python, GPU programming, Maya, Octane render, svn/git, strong math skills
– Knowledge in modern rendering pipelines, image processing, rigging

343 – Body Tracking for AR/VR

Project Name
Body Tracking for AR/VR

Project Description
The lab is developing a lightweight 3D human performance capture method that uses very few sensors to obtain a highly detailed, complete, watertight, and textured model of a subject (clothed human with props) which can be rendered properly from any angle in an immersive setting. Our recordings are performed in unconstrained environments and the system should be easily deployable. While we assume well-calibrated high-resolution cameras (e.g., GoPros), synchronized video streams (e.g., Raspberry Pi-based controls), and a well-lit environment, any existing passive multi-view stereo approach based on sparse cameras would significantly underperform dense ones due to challenging scene textures, lighting conditions, and backgrounds. Moreover, much less coverage of the body is possible when using small numbers of cameras.

Job Description
We propose a machine learning approach to this challenge, posing 3D surface capture of human performances as an inference problem rather than a classic multi-view stereo task. The intern will work with researchers to demonstrate that massive amounts of 3D training data can support inferring visually compelling and realistic geometries and textures in unseen regions. Our goal is to capture clothed subjects (uniformed soldiers, civilians, props and equipment, etc.), which entails an immense amount of appearance variation as well as highly intricate garment folds.

Preferred Skills
– C++, OpenGL, GPU programming, Operating System: Windows and Ubuntu, strong math skills
– Experience with computer vision techniques: multi-camera stereo, optical flow, facial feature detection, bilinear morphable models, texture synthesis, Markov random fields

342 – Programmer, Real-Time Rendering of Virtual Humans

Project Name
Real-Time Rendering of Virtual Humans

Project Description
The Vision and Graphics Lab at ICT pursues research and works in production to perform high-quality facial scans for Army training and simulations, as well as for VFX studios and game development companies. Research into how machine learning can aid the creation of such datasets from single images is one of the lab’s most recent focuses. This requires large amounts of data; more than can be achieved using only raw light stage scans. We are currently working on software to aid both in visualization of our new facial scan database and in animating and rendering virtual humans. The goal is a feature-rich, real-time renderer which produces highly realistic renderings of humans scanned in the light stage.

Job Description
The intern will work with lab researchers to develop features in the rendering pipeline. This will include research and development of the latest techniques in physically based real-time character rendering and animation. Ideally, the intern will be familiar with physically based rendering, subsurface scattering techniques, hair rendering, and 3D modeling and reconstruction.

Preferred Skills
– C++, Engineering math physics and programming, OpenGL / Direct3D, GLSL / HLSL, Unity3D
– Python, GPU programming, Maya, version control (svn/git), strong math skills
– Knowledge in modern rendering pipelines, image processing, rigging, blendshape modelling

341 – Research Assistant, Trust, Bonding and Rapport Between Humans and Autonomy

Project Name
Trust, Bonding and Rapport Between Humans and Autonomy

Project Description
The project will explore verbal and nonverbal techniques for fostering trust, bonding and rapport between a human user and autonomous systems (robots and virtual humans). Intern will work with AI dialog and multi-modal sensing methods to examine ways to sense and respond to human verbal and nonverbal information to build rapport and trust in a laboratory experimental task.

Job Description
Duties include programming and working with existing AI methods. Some understanding of HCI/HRI, experimental methods and data analysis will be useful.

Preferred Skills
– Extensive programming experience
– Knowledge of signal processing and machine learning methods
– Evidence of research potential (e.g., publications)

340 – Research Assistant, The Organizational Impact of Autonomy

Project Name
The Organizational Impact of Autonomy

Project Description
Advances in AI make it possible for intelligent agents to act on our behalf in interactions with other people (negotiating the price of products or even managing subordinates in an organization). The goal of this research is to examine the psychological and organizational consequences of this technology. The intern will be involved in adapting existing AI technology and will engage in experimental design and execution of an online (MTurk) study on how people use this technology.

Job Description
Duties include some programming, experimental design, execution, data analysis and writing (with aim to publish the research).

Preferred Skills
– Programming experience (Java, JavaScript)
– Knowledge of experimental design and statistical analysis (SPSS)
– Evidence of research potential (e.g., publications)

339 – Research Assistant, Impact of AI on Users’ Psychology

Project Name
Impact of AI on Users’ Psychology

Project Description
Irresistible pressures are driving the adoption of AI. What impact will this have on us? Will using AI or operating through an autonomous robot, for example, undermine trust, increase risk-taking, reduce vigilance to threats and increase dehumanization of others? This project will examine the psychological impact of such advances.

Job Description
The Research Assistant will help finalize development of agents that vary in level of autonomy (e.g., full AI vs. assisted). The assistant will help run an experiment testing the impact of autonomy on users’ psychological factors, as well as analyze the results of this study.

Preferred Skills
– Experimental design
– Running user studies
– Computer-human or computer-mediated interaction

338 – Programmer, Virtual Reality Game-based Rehabilitation

Project Name
Virtual Reality Game-based Rehabilitation

Project Description
The Medical Virtual Reality (MedVR) group at ICT is devoted to the study and advancement of uses of virtual reality (VR) simulation technology for clinical purposes. MedVR Lab’s Game-based Rehab Group develops low-cost and home-based VR toolkits for physical therapy. We use gaming technology to help patients rehabilitate.

Job Description
The intern will work on our Mystic Isle project, which allows rehab patients to perform rehabilitation exercises by playing a motion game that tracks their movements using a Kinect sensor. The intern will interface with therapists and engineers to convert Mystic Isle into a virtual reality application and help support user-centered trials at the Keck School of Medicine.

Preferred Skills
– Experience working with Unity3D or other game frameworks
– Proficiency in C/C++/C# and Microsoft Visual Studio
– Ability to work independently and efficiently under deadlines
– Strong communication and teamwork skills
– Experience working on a VR game
– Familiarity with 3ds Max and/or Maya

337 – Research Assistant, Emotion Evoking Game

Project Name
Emotion Evoking Game

Project Description
Taking the player through an emotional journey is an effective way to engage players and create a lasting game-play experience. Techniques such as sound and visual effects have long been explored in movies and games. In this project, we will focus on the design of game events based on appraisal theories of emotion, and research how the characteristics and sequencing of game events can induce emotions through interactive game-play.

Job Description
The research assistant will build upon and extend an existing game, EVG, to design a role-playing game to induce emotions.

Preferred Skills
– Game development experience
– Passion for game design
– Unity, C/C++

336 – Research Assistant, Persuasive Games

Project Name
Persuasive Games

Project Description
Cognitive dissonance is a state of mental discomfort that arises from conflicting attitudes or beliefs within an individual. Such dissonance motivates individuals to restore internal consistency by changing their attitudes and behavior. Traditionally, dissonance-based interventions are carried out in person and are both time- and resource-intensive, limiting access to this effective attitudinal and behavioral intervention. In this project, we will design a role-playing game, called DELTA-X, to create an immersive virtual environment for inducing cognitive dissonance.

Job Description
The research assistant will develop a role-playing game, based on an existing game DELTA, to induce attitude and behavioral change.

Preferred Skills
– Unity, C/C++
– Experience in game development
– Passion for game design

335 – Research Assistant, Explainable AI for Agent-based Simulation

Project Name
Explainable AI for Agent-based Simulation

Project Description
In team-based Synthetic Training Environments (STEs) populated with AI-driven entities, the reasoning process behind these AI entities offers great opportunities for teams to understand what happened during training, why it happened, and how to improve. Unfortunately, while there are existing agents that can generate realistic actions in simulated exercises, they typically cannot describe events from their perspective or explain the reasoning behind their behaviors. This is often because the algorithms underlying AI-driven entities are not readily explainable, making the resulting behavior hard to understand even for AI experts. This project addresses the challenge by incorporating explainable artificial intelligence (XAI) to support explainable agent behaviors. Although specific methods vary depending on the targeted AI algorithms, the XAI interface creates an interpretable model of the underlying algorithms. Components of the interpretable models can then be used to create explanations of the decision-making of the AI entities.
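
A common XAI pattern the description hints at is surrogate modeling: fit an interpretable model to imitate the opaque one, then read explanations off the surrogate. A hedged sketch with invented state features and a stand-in black box:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Invented entity state: [distance_to_enemy, ammo, cover].
states = rng.random((500, 3))
actions = (states[:, 0] < 0.3).astype(int)  # 1 = retreat, 0 = advance

# Stand-in for the opaque AI driving a simulated entity.
black_box = RandomForestClassifier(n_estimators=50, random_state=0)
black_box.fit(states, actions)

# Interpretable surrogate trained on the black box's own decisions.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(states, black_box.predict(states))
print(export_text(surrogate,
                  feature_names=["distance_to_enemy", "ammo", "cover"]))
```

The printed tree ("if distance_to_enemy <= ... then retreat") is the kind of component an explanation generator could verbalize.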

Job Description
The research assistant will work with existing agent frameworks and machine learning algorithms to develop explainable models for the AI algorithms.

Preferred Skills
– Good knowledge of AI algorithms
– Python, C/C++
– Good knowledge of math

334 – Research Assistant, Extending Dialogue Interaction

Project Name
Extending Dialogue Interaction

Project Description
The project will involve investigation of techniques to go beyond the current state of the art in human-computer dialogue, which mainly focuses on either a system chatting with a single person or a system assisting a person with accomplishing a single goal. The project will involve investigation of one or more of the following topics: consideration of multiple goals in dialogue, multi-party dialogue (with more than two participants), multi-lingual dialogue, multi-platform dialogue (e.g. VR and phone), automated evaluation of dialogue systems, or extended and repeated interaction with a dialogue system.

Job Description
The student intern will work with the Natural Language Research Group (including professors, other professional researchers, and students) to advance one or more of the research areas described above. If the student has a particular goal or related work at their home institution, they should briefly describe this in the application letter. Specific activities will depend on the project and the skills and interests of the intern, but will include one or more of the following: programming new dialogue or evaluation policies, annotation of dialogue corpora, and testing with human subjects.

Preferred Skills
– Some familiarity with dialogue systems or natural language dialogue
– Either programming ability or experience with statistical methods and data analysis
– Ability to work independently as well as in a collaborative environment

333 – Research Assistant, Conversations with Heroes and History

Project Name
Conversations with Heroes and History

Project Description
ICT’s time-offset interaction technology allows people to have natural conversations with videos of people who have had extraordinary experiences and learn about events and attitudes in a manner similar to direct interaction with the person. Subjects will be determined at the time of the internship. Previous subjects have included Holocaust and Sexual Assault Survivors and Army Heroes.

Job Description
The intern will assist with developing, improving and analyzing the systems. Tasks may include running user tests, analysis of content and interaction results, and improvements to the systems. The precise tasks will be determined based on the skills and interests of the selected applicant, as well as the demands of the project during the time of the internship.

Preferred Skills
– Very good spoken and written English (native or near-native competence preferred)
– General computer operating skills (some programming experience desirable)
– Experience in one or more of the following:

1. Interactive story authoring & design
2. Linguistics, language processing
3. A related field, such as museum-based informal education

332 – Research Assistant, Virtual Human Dialogue: Game + Social Chat Activities Experiment

Project Name
Virtual Human Dialogue: Game + Social Chat Activities Experiment

Project Description
This project involves the design of an experiment seeking to demonstrate that social chat can help an automated agent personalize to its user when playing a word-guessing game. It will interest interns who want hands-on experience with interactive virtual human agents and with designing and conducting experiments to evaluate these types of agents, students interested in using artificial intelligence for gaming purposes, and students interested in using psychology theories to motivate agent design decisions. The student will work one-on-one with a PhD student on this project and will also be advised by a professor.

Job Description
The intern will be exposed to and involved in many stages of an experiment intended to evaluate an interactive agent that plays a word-guessing game and participates in social chat. These stages include data analysis, experiment design, agent implementation, and possibly running participants through the experiment. The intern might also have the opportunity to contribute to a publication (after the internship), depending on the results of the experiment. The intern will conduct data analysis of completed experiments that evaluated social chat design decisions; this analysis will include statistical investigations as well as annotation. The findings will help the intern contribute to finalizing design decisions for a new experiment investigating the social chat activity’s ability to help an agent personalize to its user. Depending on skills, the intern will also have the opportunity to implement changes to an agent using ICT’s virtual human toolkit. These changes should help the agent perform the social chat activity and personalize the agent’s word-guessing game to a user. The intern may also help run initial participants through this experiment.

Preferred Skills
– Python, Java, SQL
– Statistics, Machine Learning
– Annotation

331 – Research Assistant, Human-Robot Dialogue

Project Name
Human-Robot Dialogue

Project Description
ICT has several projects involving applying natural language dialogue technology to physical and simulated robot platforms. Tasks of interest include remote exploration, joint decision-making, social interaction, games, and language learning. Robot platforms include humanoid (e.g. NAO) and non-humanoid flying or ground-based robots.

Job Description
This internship involves participating in the development and evaluation of dialogue systems that allow physical robots to interact with people using natural language conversation. The student intern will be involved in one or more of the following activities: 1. Porting language technology to a robot platform, 2. Design of task for human-robot collaborative activities, 3. Programming of robot for such activities, 4. Use of a robot in experimental activities with human subjects, 5. Analysis of experimental human-robot dialogue data.

Preferred Skills
– Experience with one or more of:
– Using and programming robots
– Dialogue systems, computational linguistics
– Multimodal signal processing, machine learning

330 – Research Assistant, The Sigma Cognitive Architecture

Project Name
The Sigma Cognitive Architecture

Project Description
This project is developing a cognitive architecture – i.e., a computational hypothesis about the fixed structures underlying a mind – called Sigma that is based on an extension of the elegant but powerful formalism of graphical models, enabling the combination of statistical/neural and symbolic aspects. Sigma is built in Lisp, but its core algorithms are in the process of being ported to C. We are looking for someone interested in working with Sigma in one of a number of possible areas, including abduction, attention, episodic memory and neural reinforcement learning.
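
For a flavor of the graphical-models substrate (a toy example, not Sigma’s implementation), exact inference over two binary variables with made-up potentials:

```python
import numpy as np

# Unnormalized model p(A, B) proportional to f(A) * g(A, B).
f = np.array([0.7, 0.3])          # unary potential on A
g = np.array([[0.9, 0.3],         # pairwise potential g[A, B]
              [0.2, 0.8]])

joint = f[:, None] * g            # combine potentials
joint /= joint.sum()              # normalize
print("p(A):", np.round(joint.sum(axis=1), 3))  # marginalize out B
print("p(B):", np.round(joint.sum(axis=0), 3))  # marginalize out A
```

Sigma generalizes this kind of summary-product computation over factor graphs to support both symbolic and statistical processing.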

Job Description
Looking for a student interested in developing, applying, analyzing and/or evaluating new intelligent capabilities in an architectural framework.

Preferred Skills
– Programming (Lisp preferred, but can be learned after arrival)
– Graphical models (experience preferred, but ability to learn quickly is essential)
– Cognitive architectures (experience preferred, but interest is essential)

329 – Programmer, Advancing Middle School Teachers Understanding of Proportional Reasoning for Teaching

Project Name
Advancing Middle School Teachers Understanding of Proportional Reasoning for Teaching

Project Description
The Institute of Education Sciences (IES)-supported project, Advancing Middle School Teachers’ Understanding of Proportional Reasoning for Teaching, is building a virtual agent facilitator to help teachers with their professional development in mathematics. This intelligent tutoring system will help teachers learn new strategies and skills for teaching proportions to students. ICT will lead technical development, usability analysis, and iterative revision of the intervention software and interaction policies for module content. This role will include both the development of the web application used by teachers and data pipelines that output usage and performance patterns, which are integrated into analyses to evaluate efficacy and feasibility. ICT is working with USC’s Rossier School of Education on this project.

Job Description
The goal of the internship will be to contribute to a web application used by teachers, as well as data pipelines to output usage and performance patterns that are integrated into analyses to evaluate efficacy and feasibility. Specific tasks will involve improving a dialog-based tutor, building new user interfaces for a MERN-stack web application, and contributing to data analytics/machine learning which will help identify usage patterns by teachers using the system.

Preferred Skills
– JavaScript/Node.js, React, Python
– Basic AI Programming or Statistics
– Experience with data collection and recording video and audio files

328 – Programmer, Personalized Assistant for Life-Long Learning (PAL3) – AI

Project Name
Personalized Assistant for Life-Long Learning (PAL3) – AI

Project Description
PAL3 is a system for delivering engaging and accessible education via mobile devices. It is designed to provide on-the-job training and support lifelong learning and ongoing assessment. The system features a library of curated training resources containing custom content and pre-existing tutoring systems, tutorial videos and web pages. PAL3 helps learners navigate learning resources through: 1) An embodied pedagogical agent that acts as a guide; 2) A persistent learning record to track what students have done, their level of mastery, and what they need to achieve; 3) A library of educational resources that can include customized intelligent tutoring systems as well as traditional educational materials such as webpages and videos; 4) A recommendation system that suggests library resources for a student based on their learning record; and 5) Game-like mechanisms that create engagement (such as leader-boards and new capabilities that can be unlocked through persistent usage).
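
As a hedged illustration of component (4), a recommender might rank resources by the gap between a learner’s recorded mastery and target mastery; all names and numbers below are invented:

```python
# Hypothetical learning record: skill -> current mastery in [0, 1].
learning_record = {"navigation": 0.9, "first_aid": 0.4, "comms": 0.6}
mastery_goals   = {"navigation": 0.8, "first_aid": 0.9, "comms": 0.8}
library = {
    "navigation": ["map_reading_webpage"],
    "first_aid":  ["tourniquet_tutor", "triage_video"],
    "comms":      ["radio_protocol_quiz"],
}

def recommend(record, goals, resources):
    """Suggest resources for the skill with the largest mastery gap."""
    gaps = {skill: goals[skill] - mastery
            for skill, mastery in record.items()}
    weakest = max(gaps, key=gaps.get)
    return weakest, resources[weakest]

skill, suggested = recommend(learning_record, mastery_goals, library)
print(f"Focus on {skill}: {suggested}")  # first_aid has the largest gap
```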

Job Description
The goal of the internship will be to expand the repertoire of the system to further enhance learning and engagement. The specific tasks will be determined based on the status of the project at the time of the internship as well as your interests. Possible topics include work with: (1) models driving the dialog systems for PAL3 to support goal-setting, teamwork, or fun/rapport-building; (2) modifying the intelligent tutoring system and how it supports the learner; and (3) statistical analysis and/or data mining to identify patterns of interactions between human subjects and the intelligent tutoring system. Opportunities will be available to contribute to peer-reviewed publications.

Preferred Skills
– C#, JavaScript/Node.js, Python, R
– Dialog Systems, Basic AI Programming, or Statistics
– Strong interest in intelligent agents, human and virtual behavior, and social cognition

327 – Programmer, SMART-E: Service for Measurement and Adaptation to Real-Time Engagement

Project Name
SMART-E: Service for Measurement and Adaptation to Real-Time Engagement

Project Description
The vision behind this work is a toolkit that generalizes metrics and interventions to constantly monitor and optimize engagement and learning in virtual environments. The toolkit will continuously measure the experiences of learners as they interact with virtual learning environments, such as intelligent tutoring systems (ITS). This toolkit for assessing engagement will systematically analyze engagement data to provide insights that improve a target training system along multiple dimensions of engagement, ranging from short-term cognitive improvement to long-term identity formation as a professional on the training topic. Engagement is critical for Army training because lack of engagement results in lower learning, engagement predicts persistence and dropout, and engagement is actionable and can be induced through interventions.

Job Description
The goal of the internship will be to expand the repertoire of the system to further enhance learning and engagement. The specific tasks will be determined based on the status of the project at the time of the internship as well as your interests. Possible topics include work with: (1) machine learning for intelligent tutoring systems and how it supports the learner; (2) models driving the virtual human utterances and behaviors; and (3) emotion coding, statistical analysis, and/or data mining to identify patterns of interactions between human subjects and the intelligent tutoring system. Opportunities will be available to contribute to peer-reviewed publications.

Preferred Skills
– Python, JavaScript, C#
– Basic AI Programming or Statistics
– Strong interest in human and virtual behavior and cognition

326 – Programmer, Integrated Virtual Humans Programmer

Project Name
Integrated Virtual Humans

Project Description
The Integrated Virtual Humans project (IVH) seeks to create a wide range of virtual human systems by combining various research efforts within USC and ICT into a general Virtual Human Architecture. These virtual humans range from relatively simple, statistics-based question/answer characters to advanced cognitive agents that are able to reason about themselves and the world they inhabit. Our virtual humans can engage with real humans and each other, both verbally and non-verbally, i.e. they are able to hear you, see you, use body language, talk to you, and think about whether or not they like you. The Virtual Humans research at ICT is widely considered among the most advanced in its field and brings together a variety of research areas, including natural language processing, nonverbal behavior, vision perception and understanding, task modeling, emotion modeling, information retrieval, knowledge representation, and speech recognition.

Job Description
IVH seeks an enthusiastic, self-motivated programmer to help further advance and iterate on the Virtual Human Toolkit. Additionally, the intern selected will research and develop potential tools to be used in the creation of virtual humans. Working within IVH requires a solid understanding of general software engineering principles and distributed architectures. The work touches on a variety of Computer Science areas, including Artificial Intelligence and Human-Computer Interaction. Given the scope of the Virtual Human Architecture, the ability to quickly learn how to use existing components and develop new ones is essential.

Preferred Skills

– Fluent in C++, C#, or Java
– Fluent in one or more scripting languages, such as Python, TCL, LUA, or PHP
– Experience with Unity
– Excellent general computer skills
– Background in Artificial Intelligence is a plus