Can Smell-O-Vision Save VR?

Consumers have been slow to purchase VR headsets, largely due to price and limited content. But several start-ups are making devices that will give people a whiff of the virtual world.

Continue reading in PC Magazine.

How Virtual Reality is Being Used to Heal

For more than twenty years, Skip Rizzo, a clinical psychologist and the director for medical virtual reality at the University of Southern California’s Institute for Creative Technologies, has designed and researched innovative VR environments. They’re used clinically to improve a patient’s psychological, cognitive, and motor abilities. Rizzo and his team at USC developed Bravemind, a VR environment for veterans with PTSD that transports them back to the moments of combat that haunt them in order to help them face their trauma and heal. It’s called simulated exposure therapy. Working with a clinician, veterans are gradually guided into these VR environments while their physiological responses and stress levels are carefully monitored. Over time, their reaction to the war stimuli lessens and their PTSD symptoms in the real world become more manageable. Bravemind is currently used at over sixty sites, including VA hospitals and military bases. Rizzo thinks VR has broader potential for other medical uses and could play a crucial role in shaping the future of personalized medical care.

Continue reading in Goop.

ARL 64 – Research Assistant, Mapping and Semantic SLAM Researcher

Project Name
Sensor Emplacement and UAS Trajectory Planning

Project Description
This project involves generating and leveraging 3D world mapping capabilities to aid in understanding how best to emplace static ground sensors and how to plan efficient UAS trajectories for coverage. Mapping in this case covers 3D point clouds, referencing imagery to point clouds, and semantic segmentation for labeling imagery.

Job Description
This project will involve leveraging existing 3D mapping and semantic segmentation techniques (both batch and real-time) and developing new methods of networked sensing and fusion for finding and tracking targets. The position includes both theoretical development and working with others to demonstrate the capability on an actual prototype UAS system and ground sensors. Familiarity with robotics, Ubuntu Linux, Robot Operating System (ROS), 3D mapping, and C/C++ programming is desired.

Preferred Skills
– Software engineering (C/C++/python/ROS)
– 3D mapping from LIDAR and Photogrammetry
– Semantic Segmentation and Labeling

ARL 63 – Research Assistant, Marketing and Social Media Intern for Science Communication

Project Name
Science Communication in the Age of Podcasts

Project Description
As the Army’s corporate laboratory, the Army Research Laboratory (ARL) conducts cutting-edge research around the world. Run on an open campus model, ARL fosters unique collaborations between industry and academia, making it one of the most impactful research institutions shaping the technology of the future. However, too few know of this institution and the opportunities afforded by working with ARL. To increase the impact of ongoing work at ARL, this project aims to celebrate and showcase ongoing collaborative research using a podcast format accompanied by written content.

Job Description
The student will play an integral role in making this podcast successful. Duties will include social media strategy and execution, marketing content creation, and assistance with podcast recording. Ultimate duties will be determined based on the skills and interests of the selected applicant, as well as the demands of the project during the time of the internship.

Preferred Skills
– Social Media strategy experience
– Audio editing experience
– Marketing experience
– Interest in Science Communication

AIVR 2019

NeurIPS 2019

Olfactory Virtual Reality Technology Being Used to Help Veterans with PTSD

Virtual reality headsets are used to give gamers the experience of a virtual world using video and sound. Now three different companies are coming out with an add-on that adds scent to the VR experience.

This is light years beyond the failed Smell-O-Vision technology of the 20th century. WBGO’s Jon Kalish reports on a Vermont start-up whose device will soon be used by psychologists treating veterans with PTSD.

Listen here.

Army Pursuing Improved Realism in Live and Virtual Training

Members of the Synthetic Training Environment Cross Functional Team are working with the Program Executive Office for Simulation, Training and Instrumentation, or PEO STRI, and the Simulation and Training Technology Center, known as STTC, to build the Army’s most advanced training capability. These partners, including ICT, discussed dynamic occlusion and other known challenges during the 2019 Interservice/Industry Training, Simulation and Education Conference, Dec. 2-6 in Orlando, Florida. 

Learn more, via the U.S. Army website.

ARL 62 – Research Assistant, Collaborative Robotics

Project Name
Collaborative Unmanned Aerial Vehicles

Project Description
ARL is developing multi-robot, collaborative systems to solve complex problems in dynamic environments. Our research covers a wide range of topics including networking, navigation, beamforming, resiliency, and swarm formation.

Job Description
This internship will focus on developing software to support our collaborative robotics research objectives. The student may use a combination of simulation, small unmanned aerial vehicles and ground robots to demonstrate their research accomplishments.

Preferred Skills
– C++
– Python

ARL 61 – Programmer, Multi-Agent Modeling, Simulation, and Robotics

Project Name
Agent-Based Modeling and Simulation of Human-Robot Teaming

Project Description
This project aims to create user-friendly simulations of multi-UAV (drone) systems and their human operators. The simulations must be lightweight enough to analyze large numbers (20+) of simulated humans and robots. They must also be accurate enough to be used for sizing human-robot teams in Army missions.

Job Description
The programmer will create functions and modules for integration into the codebase, may build models of humans and futuristic robots, and may implement some algorithms on physical robots. Development will primarily be in Python within ROS.

Preferred Skills
– ROS, Python, C++, Java development
– Multi-robot coordination algorithms (e.g. swarming, traveling salesman)
– Collaborative development and version control (e.g., GitHub)
– Familiarity with physics, engineering, or robotics
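As a flavor of the coordination algorithms listed above, here is a minimal nearest-neighbor heuristic for traveling-salesman-style waypoint routing, in plain Python with no ROS dependency. It is purely illustrative (the function and waypoint names are hypothetical), not project code; real multi-UAV planners are far richer.

```python
import math

def nearest_neighbor_tour(start, waypoints):
    """Greedy nearest-neighbor visit order for a single vehicle.

    A toy stand-in for traveling-salesman-style routing: always fly
    to the closest unvisited waypoint next.
    """
    tour = [start]
    remaining = list(waypoints)
    current = start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        tour.append(nxt)
        current = nxt
    return tour

waypoints = [(5.0, 0.0), (1.0, 1.0), (4.0, 4.0)]
tour = nearest_neighbor_tour((0.0, 0.0), waypoints)
```

Greedy tours are not optimal in general, but the heuristic is a common baseline when comparing multi-robot coordination strategies.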

ARL 60 – Programmer, Cross-Reality Common Operating Picture for Multi-Domain Operations

Project Name

Project Description
Project AURORA (Accelerated User Reasoning for Operations, Research, and Analysis) seeks to understand how cross-reality (AR, VR, MR) can be used to enhance the common operating picture for future multi-domain battle. The AURORA platform consists of network (AURORA-NET) and interface (AURORA-XR) modules that allow researchers to conduct controlled experimentation spanning the ingestion of battlefield data on the network through its visualization, analysis, and actuation by humans and intelligent agents. There is currently limited literature on how best to use immersive technology for decision-making, particularly across different visualization mediums.

Job Description
The intern will assist the team in developing additional visualization and interaction methods for the AURORA-XR platform. These may involve the use of VR, AR, or both simultaneously. Work may also include some exposure to the integration of machine learning, artificial intelligence, and networking.

Preferred Skills
– Fluent in C# and/or Python
– Experience with Unity
– Experience developing for virtual or augmented reality
– Excellent general computer skills
– Background in cognitive science or UX is a plus

ARL 59 – Research Assistant, Machine Learning-Driven Scene Perception and Understanding

Project Name
Machine Learning-Driven Scene Perception and Understanding

Project Description
Current manned and unmanned SWAP-constrained perception platforms, ground or airborne, carry multimodal sensors, with additional modalities expected in the future. This project supports one of ARL’s Essential Research Programs (ERPs), AI for Maneuver and Mobility (AIMM), and focuses on the development of advanced machine learning algorithms for scene understanding using multimodal data from single as well as distributed sensing platforms, in particular using a diverse dataset consisting of high-fidelity simulated images of various objects and scenes of interest.

Job Description
The work includes pursuing technical solutions and developing core algorithms by applying advanced machine learning, image processing and computer vision techniques. Anticipated research results include new theory and algorithm developments leading to publications in scientific forums and real-world utility and software for demonstrations.

Preferred Skills
– A dedicated and hardworking individual
– Experience or coursework related to machine learning, computer vision
– Strong programming skills

ARL 58 – Research Assistant, Integrated Circuit Design Engineering Intern

Project Name
Efficient, Domain-Specific, Reconfigurable Integrated Circuits (ICs)

Project Description
Our research team designs and tests integrated circuits (ICs) that efficiently perform computation tasks relevant to Army applications. ICs are designed to allow for re-configuration, similar to Field Programmable Gate Arrays (FPGAs), but to also provide efficiency in computation closer to that of Application-Specific ICs (ASICs).

Job Description
For one summer, students selected to be interns will help design, simulate, and verify digital blocks used in energy-efficient integrated circuits developed at the lab. Interns will get first-hand experience working with Army researchers and on Army problems. Interns will also have the opportunity to present their work to other interns and researchers at the lab. Example application spaces for developed circuits include (but are not limited to) Swarms, Artificial Intelligence, and Digital Signal Processing (DSP).

Preferred Skills
– Verilog coding and design experience (Cadence Incisive/Xcelium, Verilator, Icarus Verilog, etc.)
– Proficiency with MATLAB and Python
– Familiarity with digital signal processing (DSP), machine learning, and/or swarming algorithms
– Current undergraduate or graduate student in electrical engineering
– Preferred: experience with optimizing circuit performance/power consumption

Jessica Brillhart Discusses the Power of Immersive VR

The final episode of an 8-part docu-series for AOL’s “In the Know” features ICT’s Director for Mixed Reality, Jessica Brillhart.

Live on AOL and Yahoo!

I/ITSEC 2019

Virtual Reality Helps Ease Trauma for Patients at Tampa Veterans Hospital

For years, Albert Rizzo at USC has studied the technology’s use in treating people with military backgrounds.

Rizzo is associate director of the medical virtual reality group at the school’s Institute for Creative Technologies. He said the stigma against seeking mental health treatments is strong in the military community, where seeking therapy can be seen as a sign of weakness.

Continue reading the full story in Tampa Bay Times.

ARL 57 – Research Assistant, Synthetic Data for Machine Learning

Project Name
Synthetic Data for Machine Learning

Project Description
Machine learning (ML) algorithms require vast amounts of training data to ensure good performance. Nowadays, synthetic data is increasingly used for training cutting-edge ML algorithms. This research aims to develop an AI-driven model synthesis approach for generating synthetic ML training data. Advanced deep learning techniques, particularly deep generative networks and geometric deep learning, will be explored for model representation and synthesis.

Job Description
The work includes pursuing technical solutions and developing core algorithms by applying advanced machine learning and data synthesis techniques. Anticipated research results include new theory and algorithm developments leading to publications in scientific forums and real-world utility and software for demonstrations.

Preferred Skills
– A dedicated and hardworking individual
– Experience or coursework related to machine learning, signal processing, computer vision/graphics
– Strong programming skills

ARL 56 – Research Assistant, Pupil power: Unlocking the ability to use eye tracking in real-world contexts

Project Name
Investigation of Factors and Estimation of Their Influence on the Pupil Response

Project Description
Thanks to new, inexpensive, and non-invasive eye-tracking technology, the pupil response has great potential as a window into the mind for estimating cognitive states that influence performance. However, because pupil size is more strongly driven by non-cognitive factors, it is difficult to attribute pupil size changes to cognitive states. This project will contribute to ongoing efforts to overcome this obstacle by furthering our understanding of the magnitude of the various factors that influence the pupil response.

Job Description
If the student has a particular goal or related work at their home institution they should briefly describe this in the application letter. The scope of the work will be determined based on the skills and interests of the selected applicant, as well as the demands of the project during the time of the internship but may include data collection, literature review and statistical analysis.

Preferred Skills
– Programming and statistical analysis in MATLAB, Python, or R
– Interest in Psychophysiology, cognitive neuroscience and/or psychology

ARL 55 – Research Assistant, Quantified Uncertainty Training

Project Name
Evaluating Quantified Uncertainty Training for Adaptation

Project Description
The overarching goal of this project is to advance fundamental understanding of cognitive skills that enable effective adaptation to dynamic and unpredictable situations as well as approaches to training those skills. This project is examining the skill of incorporating multiple sources of uncertain information to make better, faster decisions. We want to know if that skill improves adaptation to dynamic availability and quality of information, and we want to know how to train that skill.
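One textbook way to combine multiple uncertain sources, useful as intuition for the skill this project studies, is inverse-variance weighting. The sketch below is illustrative only and is not the project's training method; the function name and numbers are hypothetical.

```python
def fuse(estimates):
    """Combine (value, variance) pairs by inverse-variance weighting.

    The fused variance is never larger than the best single source's,
    one formal sense in which extra (even noisy) information can
    support better, faster decisions.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total

# Two noisy reports of the same quantity: the less certain one still
# nudges the fused estimate, but is weighted down accordingly.
val, var = fuse([(10.0, 4.0), (12.0, 1.0)])
```

Here the fused value lands much closer to the low-variance report (12.0) than to the high-variance one, and the fused variance (0.8) is below either input's.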

Job Description
The selected research assistant will have the opportunity to contribute to this research. Depending on skills and interest, the intern might assist with literature review, experimental design, data collection, and/or analysis. One specific area of contribution would be to design, implement, and run an online/mturk experiment to replicate in-lab findings.

Preferred Skills
– Experience running user studies/behavioral experiments
– Some programming experience (R, Python, MATLAB, JavaScript)
– Background/coursework in Psychology, Statistics, Data Visualization
– Experience conducting related research
– Experience with crowd-sourced/online experiments

ARL 54 – Research Assistant, Machine Learning and Computer Vision

Project Name
Machine Learning and Computer Vision

Project Description
Current manned and unmanned perception platforms, ground or airborne, carry multimodal sensors with future expectation of additional modalities. This project focuses on the development of advanced machine learning (ML) algorithms for scene understanding using multimodal data, in particular using a diverse dataset consisting of high-fidelity simulated images of various objects and scenes of interest.

Job Description
The work includes pursuing technical solutions and developing core algorithms by applying advanced machine learning, image processing and computer vision techniques. Anticipated research results include new theory and algorithm developments leading to publications in scientific forums and real-world utility and software for demonstrations.

Preferred Skills
– A dedicated and hardworking individual
– Experience or coursework related to machine learning, computer vision
– Strong programming skills

ARL 53 – Research Assistant for Materials and Device Simulations

Project Name
Materials and Device Simulations for Power Electronics

Project Description
The project is part of an ongoing emerging materials and device research effort in the US Army Research Laboratory (ARL). One focus area is exploration and investigation of materials and device designs, both theoretically and experimentally, for high-speed, high-frequency and light weight electronic devices.

Job Description
The research assistant will work with ARL scientists to investigate fundamental material and device properties of low-dimensional nanomaterials (functionalized diamond surfaces). For this study, various bottom-up materials and device modelling tools based on atomistic approaches such as first-principles density functional theory (DFT) and molecular dynamics (MD) will be used. In addition, numerical and analytical modeling will be used to quantify and analyze data obtained from atomistic simulation to facilitate comparison to in-house experimental findings.

Preferred Skills

  • An undergraduate or graduate student in electrical engineering, materials science, physics or computational chemistry 
  • Sound knowledge of materials and device physics concepts
  • Proficiency in at least one scripting language
  • Experience with high-performance computing (HPC)
  • Proficiency with atomistic materials modeling concepts and tools such as VASP, Quantum ESPRESSO, and LAMMPS
  • Interest in fundamental materials design and discovery
  • Familiarity with experimental characterization techniques
  • Ability to work in a collaborative environment as well as independently

ARL 52 – Research Assistant, Deep Learning Models for Human Activity Recognition Using Real and Synthetic Data

Project Name
Human Activity Recognition Using Real and Synthetic Data

Project Description
In the near future, humans and autonomous robotic agents – e.g., unmanned ground and air vehicles – will have to work together, effectively and efficiently, in vast, dynamic, and potentially dangerous environments. In these operating environments, it is critical that (a) the Warfighter is able to communicate in a natural and efficient way with these next generation combat vehicles, and (b) the autonomous vehicle is able to understand the activities that friendly or enemy units are engaged in. Recent years have, thus, seen increasing interest in teaching autonomous agents to recognize human activity, including gestures. Deep learning models have been gaining popularity in this domain due to their ability to implicitly learn the hierarchical structure in the activities and generalize beyond the training data. However, deep models require vast amounts of labeled data which is costly, time-consuming, error-prone, and requires measures to address any potential ethical concerns. Here we’ll look to synthetic data to overcome these limitations and address activity recognition in Army-relevant outdoor, unconstrained, and populated environments.

Job Description
The candidate will implement TensorFlow deep learning models for human activity recognition – e.g., 3D conv nets, I3D – that can be trained using real human gesture data and synthetic gesture data (generated using an existing simulator). Knowledge of domain transfer techniques (e.g., GANs) may be useful. The candidate will research and demonstrate a solution to this problem.
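For orientation, the core operation behind the 3D conv nets mentioned above is a convolution over time as well as space. The NumPy sketch below is a framework-free toy, not project code; real models such as I3D stack many learned kernels inside TensorFlow rather than using hand-written loops.

```python
import numpy as np

def conv3d_valid(clip, kernel):
    """Naive 3D convolution over (time, height, width), 'valid' padding.

    Illustrates the spatiotemporal filtering at the heart of 3D conv
    nets: each output value summarizes a small space-time volume.
    """
    T, H, W = clip.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(clip[i:i + t, j:j + h, k:k + w] * kernel)
    return out

clip = np.ones((4, 5, 5))            # a tiny 4-frame "video"
kernel = np.ones((2, 3, 3)) / 18.0   # spatiotemporal averaging filter
features = conv3d_valid(clip, kernel)
```

In a trained network the kernels are learned from data (real or synthetic), which is exactly where the labeled-data bottleneck this project targets arises.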

Preferred Skills
– Experience with deep learning models for human activity recognition
– Experience with Python and TensorFlow
– Independent thinking and good communication skills

ARL 51 – Research Assistant, Visual Salience of Obscured Objects

Project Name
Visual Salience of Obscured Objects

Project Description
Visual salience is the perceptual aspect of an image that may grab a person’s attention and is often used to model visual search, as these attention-grabbing locations may help a person understand the surrounding environment faster. However, not everything that is informative will grab a person’s attention. This is especially true when an informative object is partially obscured from view (by another object, fog, dust, glare, etc.), leaving only part of that object visible. But that visible part can serve as a cue to the presence of the rest of the object and its potential location. This internship will start an investigation into using a parts-of-object model to enhance a model of visual saliency.
Directorate: Computational and Information Science Directorate (CISD)
Essential Research Program (ERP): AI for Maneuver and Mobility (AIMM)

Job Description
The research assistant will read academic papers, attend project meetings, implement and test computer vision and saliency models using python or Matlab, possibly collect or create test data for the AIMM ERP, and write a report/paper or create a poster at the end of the internship.
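As a toy version of the saliency models the internship would build on, the sketch below computes a center-surround contrast map (fine-scale blur minus coarse-scale blur) in NumPy. It is illustrative only, under the assumption of a single grayscale image; the function names are hypothetical.

```python
import numpy as np

def box_blur(img, k):
    """Naive k x k mean filter (the window shrinks at image borders)."""
    H, W = img.shape
    r = k // 2
    out = np.empty_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = img[max(0, i - r):i + r + 1,
                            max(0, j - r):j + r + 1].mean()
    return out

def center_surround_saliency(img):
    """Center-surround contrast: fine-scale blur minus coarse-scale blur.

    Real saliency models add multiple scales and feature channels
    (color, orientation, motion); this keeps only the core idea that
    locally distinctive regions stand out.
    """
    return np.abs(box_blur(img, 3) - box_blur(img, 7))

img = np.zeros((9, 9))
img[4, 4] = 1.0                  # a single odd-one-out pixel
sal = center_surround_saliency(img)
```

The lone bright pixel produces the strongest center-surround response, matching the intuition that the odd-one-out grabs attention.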

Preferred Skills
Preferred: Experience with Matlab and Python (tensorflow or pytorch)
Good to have: Experience with Unity or Unreal game engine development

ARL 50 – Research Assistant, Creative Visual Storytelling

Project Name
Creative Visual Storytelling

Project Description
This project seeks to discover how humans tell stories about images, and to develop computational models to generate these stories. “Creative” visual storytelling goes beyond listing observable objects and their visual properties, and takes into consideration several aspects that influence the narrative: the environment and presentation of imagery, the narrative goals of the telling, and the audience who is listening. This work involves aspects of computer vision to visually analyze the image, commonsense reasoning to understand what is happening, and natural language generation and theories of narratives to describe it in a cohesive and engaging manner. We will work with low-quality images and non-canonical scenes. Paper reference:

Job Description
Possible projects include:
– Conduct manual and/or computational analysis of the narrative styles and properties of stories written about images
– Experiment with or combine existing natural language generation and/or computer vision software for creative visual storytelling
– Work with project mentor to design evaluation criteria for assessing the quality of stories written about images

Preferred Skills
Interest in and knowledge of some combination of the following:
– Programming expertise for language generation and/or computer vision
– Digital narratives and storytelling applied to images
– Experimental design and applied statistics for rating and evaluating stories

361 – Programmer, Integrated AI for Simulation and Training

Project Name
Integrated AI for Simulation and Training

Project Description
Effective military training requires an advanced simulation that includes a range of AI capabilities, including realistic enemy tactics, friendly unit performance, and civilian behaviors, all delivered through an appropriate platform, e.g. AR, VR, Desktop, Mobile, etc. This project integrates all required capabilities into a single simulation.

Job Description
The position supports the integration of advanced research AI capabilities into a single Unity training simulation, and includes the design, development, and testing of demonstrations that highlight these integrated capabilities. The ideal candidate is fluent in Unity, has a strong affinity for modern AI and ML capabilities, is a fast learner, and can work both independently and as part of a team.

Given that this project is integrating a range of advanced technologies, it allows for exposure to a range of exciting aspects of modern simulation, game development, and AI.

Preferred Skills
  •  Unity
  •  Machine Learning
  •  AR/VR
  •  Mobile

360 – Programmer, Real-Time Rendering of Virtual Humans

Project Name
Real-Time Rendering of Virtual Humans

Project Description
The Vision and Graphics lab at ICT pursues research and works in production to perform high-quality facial scans for Army training and simulations, as well as for VFX studios and game development companies. One of the lab’s most recent focuses is research into how machine learning can aid the creation of such datasets from single images. This requires large amounts of data; more than can be achieved using only raw light stage scans. We are currently working on software to aid both in visualization of our new facial scan database and in animating and rendering virtual humans. The goal is a feature-rich, real-time renderer that produces highly realistic renderings of humans scanned in the light stage.

Job Description
The intern will work with lab researchers to develop features in the rendering pipeline. This will include research and development of the latest techniques in physically based real-time character rendering, and animation. Ideally, the intern would have awareness about physically based rendering, sub surface scattering techniques, hair rendering, and 3D modeling and reconstruction.

Preferred Skills
  •  Engineering, math, physics
  •  Programming, OpenGL / Direct3D, GLSL / HLSL, Unity3D, C++, Python, GPU programming, Maya, version control (svn/git)
  •  Knowledge of modern rendering pipelines, image processing, rigging, blendshape modeling

359 – Programmer, Body Tracking for AR/VR

Project Name
Body Tracking for AR/VR

Project Description
The lab is developing a lightweight 3D human performance capture method that uses very few sensors to obtain a highly detailed, complete, watertight, and textured model of a subject (clothed human with props) which can be rendered properly from any angle in an immersive setting. Our recordings are performed in unconstrained environments and the system should be easily deployable. While we assume well-calibrated high-resolution cameras (e.g., GoPros), synchronized video streams (e.g., Raspberry Pi-based controls), and a well-lit environment, any existing passive multi-view stereo approach based on sparse cameras would significantly underperform dense ones due to challenging scene textures, lighting conditions, and backgrounds. Moreover, much less coverage of the body is possible when using small numbers of cameras.

Job Description
We propose a machine learning approach and address this challenge by posing 3D surface capture of human performances as an inference problem rather than a classic multi-view stereo task. The intern will work with researchers to demonstrate that massive amounts of 3D training data can infer visually compelling and realistic geometries and textures in unseen regions. Our goal is to capture clothed subjects (uniformed soldiers, civilians, props and equipment, etc.), which results in an immense amount of appearance variation, as well as highly intricate garment folds.

Preferred Skills
  •  C++, OpenGL, GPU programming; operating systems: Windows and Ubuntu; strong math skills
  •  Experience with computer vision techniques: multi-camera stereo, optical flow, facial feature detection, bilinear morphable models, texture synthesis, Markov random fields

358 – Programmer, Immersive Virtual Humans for AR/VR

Project Name
Immersive Virtual Humans for AR/VR

Project Description
The Vision and Graphics lab at ICT pursues research and works in production to perform high-quality facial scans for Army training and simulations, as well as for VFX studios and game development companies. One of the lab’s most recent focuses is research into how machine learning can aid the creation of such datasets from single images. This requires large amounts of data; more than can be achieved using only raw light stage scans. We are currently working on software to aid both in visualization during the production pipeline and in producing images as training data for learning algorithms. The goal is to use diffuse albedo maps to learn displacement maps. After training, we can synthesize a high-quality displacement map given a flat-lighting texture map.

Job Description
The intern will assist the lab in developing an end-to-end approach for 3D modeling and rendering using deep neural network-based synthesis and inference techniques. The intern should understand computer vision techniques and have some experience with deep learning algorithms, as well as knowledge of rendering, modeling, and image processing. Work may also include researching hybrid tracking of high-resolution dynamic facial details and high-quality body performance for virtual humans.

Preferred Skills
  •  C++; engineering, math, physics, and programming; OpenGL / Direct3D, GLSL / HLSL
  •  Knowledge of state-of-the-art deep learning models; strong math skills; TensorFlow, PyTorch, or other deep learning frameworks
  •  Python, GPU programming, Maya, Octane render, svn/git
  •  Knowledge of modern rendering pipelines, image processing, rigging

357 – Research Assistant, Population Modeling for Analysis and Training (PopMAT)

Project Name
Population Modeling for Analysis and Training (PopMAT)

Project Description
The Social Simulation Lab works on modeling and simulation of social systems from small group to societal level interactions, as well as data-driven approaches to validating these models. Our approach to simulation relies on multi-agent techniques where autonomous, goal-driven agents are used to model the entities in the simulation, whether individuals, groups, organizations, etc.

Job Description
The research assistant will investigate automated methods for building agent-based models of human behavior. The core of the task will be developing and implementing algorithms that can analyze human behavior data and find a decision-theoretic model (or models) that best matches that data. The task will also involve using those models in simulation to further validate their potential predictive power.
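One tiny instance of "finding a decision-theoretic model that best matches behavior data" is fitting the rationality parameter of a softmax choice model by maximum likelihood. The sketch below is a hypothetical illustration, not the lab's code; real fitting would search much richer (e.g., POMDP) parameter spaces.

```python
import math

def softmax_choice_prob(utilities, choice, beta):
    """P(choice) under a softmax (bounded-rationality) decision model."""
    exps = [math.exp(beta * u) for u in utilities]
    return exps[choice] / sum(exps)

def fit_beta(observations, betas):
    """Grid-search the rationality parameter that best explains the data.

    `observations` is a list of (utilities, chosen_index) pairs -- a toy
    stand-in for logged human behavior.  Higher beta means the agent
    more reliably picks the higher-utility option.
    """
    def log_likelihood(beta):
        return sum(math.log(softmax_choice_prob(u, c, beta))
                   for u, c in observations)
    return max(betas, key=log_likelihood)

# An agent that picks the higher-utility option 9 times out of 10:
data = [([1.0, 0.0], 0)] * 9 + [([1.0, 0.0], 1)]
best = fit_beta(data, [0.0, 1.0, 2.0, 5.0])
```

The fitted model can then be run forward in simulation to check its predictive power, which mirrors the validation loop described above.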

Preferred Skills
  •  Knowledge of multi-agent systems, especially decision-theoretic models like POMDPs.
  •  Experience with Python programming.
  •  Knowledge of psychosocial and cultural theories and models.

356 – Research Assistant, Develop Explainable Models for Reinforcement Learning

Project Name
Develop Explainable Models for Reinforcement Learning

Project Description
This project aims to develop explainable models for model-free reinforcement learning. These models will be used to generate explanations of how the algorithm arrives at its policies that are understandable to the humans who interact with automation driven by reinforcement learning.
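To make the goal concrete, here is a minimal sketch of post-hoc explanation for model-free RL: the learned artifact is just a table of Q-values, and the explanation compares the chosen action's estimate to the runner-up's. This is a hypothetical helper, not the project's actual approach.

```python
def explain_policy(q_table, state):
    """Produce a plain-language rationale for a greedy Q-table policy.

    The 'model' being explained is the learned Q-values; the
    explanation states which action wins and by what margin.
    """
    actions = q_table[state]
    best = max(actions, key=actions.get)
    others = {a: v for a, v in actions.items() if a != best}
    runner_up = max(others, key=others.get)
    margin = actions[best] - actions[runner_up]
    return (f"In state '{state}', choose '{best}': its estimated return "
            f"({actions[best]:.2f}) beats '{runner_up}' by {margin:.2f}.")

q = {"junction": {"left": 0.9, "right": 0.4, "wait": 0.1}}
msg = explain_policy(q, "junction")
```

Deep model-free learners replace the table with a network, which is precisely what makes human-understandable explanation a research problem.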

Job Description
The Research Assistant intern will work with Dr. Wang in support of the project research objectives.

Preferred Skills
  •  Solid knowledge in artificial intelligence
  •  Strong programming skills in Python

355 – Research Assistant, Teaching Artificial Intelligence through Game-Based Learning

Project Name
Teaching Artificial Intelligence through Game-Based Learning

Project Description
This project aims to develop a role-playing game to help high school students learn basic concepts in artificial intelligence.

Job Description
The Research Assistant intern will work with Dr. Wang in support of the project research objectives.

Preferred Skills
  •  Experience with building games (games for education a plus)
  •  AI specialization or focus would be ideal
  •  Master’s degree in CS

354 – Research Assistant, Charismatic Virtual Tutor

Project Name
Charismatic Virtual Tutor

Project Description
The primary goal of this project is to analyze data from charismatic individuals (audio/video of speeches) to learn the indicators of charisma in nonverbal behaviors. The outcome will then be used to procedurally generate synthesized voices and speech gestures that convey charisma in a virtual human.

Job Description
The Research Assistant intern will work with Dr. Wang in support of the project research objectives.

Preferred Skills
• Background in audio (voice and speech signals) processing and speech synthesis.
• Knowledge of AI, NLP and ML.
• Experience with applying ML to audio signals a plus.
• Experience with COVAREP, openSMILE a plus.
• Strong programming skills a must. C/C++, Python preferred.
• Minimum education requirement: Master’s in CS or EE

353 – PAL3 – Research Assistant / Data Analytics / Summer Intern, SLATS – Semi Supervised Learning for Assessment of Teams in Simulations

Project Name
SLATS – Semi Supervised Learning for Assessment of Teams in Simulations

Project Description
Would you like to use and extend novel machine learning techniques to analyze team behavior and performance in a game-like simulation?

The Learning Sciences group at ICT has developed a general-purpose educational-data-mining pipeline to track engagement and learning for individuals. The goal of the SLATS project is to extend this pipeline to team tasks while continuing our approach of requiring only a limited amount of labeled data by using semi-supervised learning. This research will develop diagnostics of the causes for team performance that bridge theory with common use-cases for team metrics (e.g., real-time feedback, adaptive scenario events). The goal of this work is also to produce a library of metrics that is extensible, so that improved metrics and methodologies can replace older ones through empirical studies.

This research is investigating the following machine learning problems:
– Cold Start Problem: Developing models to assess team performance on semi-structured simulations based on a relatively small number of samples (e.g., < 100 teams)
– Team Diagnostics: Credit attribution between team and individual members
– Actionable Metrics: Reporting metrics in a form that can guide future training
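
To make the semi-supervised, cold-start framing concrete, here is a hedged sketch (synthetic data; not the SLATS pipeline itself) of self-training with a nearest-centroid classifier: start from a handful of labeled teams, pseudo-label the most confident unlabeled teams each round, and refit:

```python
import numpy as np

# Self-training sketch: feature vectors and labels are synthetic
# stand-ins for team performance metrics and outcome classes.

def nearest_centroid_fit(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_with_confidence(centroids, X):
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    pred = np.array(classes)[dists.argmin(axis=0)]
    margin = np.sort(dists, axis=0)[1] - dists.min(axis=0)  # confidence proxy
    return pred, margin

def self_train(X_lab, y_lab, X_unlab, rounds=5, per_round=10):
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        centroids = nearest_centroid_fit(X, y)
        pred, margin = predict_with_confidence(centroids, pool)
        take = np.argsort(margin)[-per_round:]   # most confident teams
        X = np.vstack([X, pool[take]])
        y = np.concatenate([y, pred[take]])
        pool = np.delete(pool, take, axis=0)
    return nearest_centroid_fit(X, y)

rng = np.random.default_rng(1)
# Two team-performance clusters; only 6 teams labeled (well under 100 total).
X_unlab = np.vstack([rng.normal(0, 1, (40, 3)), rng.normal(4, 1, (40, 3))])
X_lab = np.vstack([rng.normal(0, 1, (3, 3)), rng.normal(4, 1, (3, 3))])
y_lab = np.array([0, 0, 0, 1, 1, 1])
model = self_train(X_lab, y_lab, X_unlab)
```

The project's actual pipeline targets richer, semi-structured simulation data, but the same loop structure (confidence-based pseudo-labeling to stretch a small labeled set) carries over.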

Job Description
Generally, the goal of the internship will be to expand the system, although the specific tasks will depend on the progress of the project and the interests of the candidate. Once the initial pipeline is built, a series of machine learning experiments will be required to classify and diagnose team performance with high accuracy and small training samples.

Students in the lab are encouraged to publish and interested students will be invited to participate in at least one publication related to their summer research.

Preferred Skills
• Python, R
• Machine Learning, Statistics, or Data Analytics
• Strong interest in data science, social/team performance, intelligent agents, and social cognition

352 – PAL3 Research Assistant / Summer Intern, Personal Assistant for Life Long Learning (PAL3)

Project Name
Personal Assistant for Life Long Learning (PAL3)

Project Description
Would you like to work on a system designed to rapidly build an AI virtual mentor from a set of video interview clips for a real-life mentor?
Mentor Panel lets students engage in virtual question-and-answer sessions with senior professionals from a variety of fields. Check out the prototype here:

Job Description

Mentor Panel has multiple components in active development. Depending on your skills and interests, you could get involved in research on:
– Natural Language Systems & AI: Dialog systems to improve mentor conversations
– Fullstack UX: Mobile client or web app development for improved mentor interactions
– Rapid Mentor Pipelines: Content-management tools or video-processing pipeline to rapidly create or modify virtual mentors based on video interviews
– Data Mining: Applying statistics or ML to existing MentorPal data to develop new models or find important patterns relevant to publications.

We don’t require interns to come in with a lot of experience in a specific language or framework, but we are looking for bright, highly motivated interns to build on best-of-breed open-source tools. It’s important to us that your time here grows your skills using tools that matter in the real world and in job markets. Here’s a short list of technology we’re actively working with or exploring in PAL3:
React/Gatsby, tensorflow/keras, NLTK, jiant, python, flask, javascript, typescript, docker, kubernetes, circleci, AWS

Students in the lab are encouraged to publish and interested students will be invited to participate in at least one publication related to their summer research.

Preferred Skills
• React/Gatsby, tensorflow/keras, NLTK, jiant, python, flask, javascript, typescript
• docker, kubernetes
• circleci, AWS

351 – PAL3 – Research Programmer, Personal Assistant for Life Long Learning (PAL3)

Project Name
Personal Assistant for Life Long Learning (PAL3)

Project Description
Would you like to work on a next-generation mobile-app coach for personalized learning? 

The Personal Assistant for Life Long Learning (PAL3) is a system for delivering on-the-job training and supporting lifelong learning via mobile devices. Check out a brief video about the project here:

Job Description
PAL3 has a variety of components in active development. Depending on your skill set and interests, you could get involved with research on:
– Fullstack UX: Designing and implementing engaging web-apps and mobile apps hosted by PAL3
– Content Tools: Building content development tools for intelligent tutoring systems and interactive systems
– Automation: Automating build/test/deploy processes for traditional systems (e.g., servers) and advanced systems (e.g., virtual mentors)
– AI/ML: Artificial intelligence and machine learning to give personalized recommendations to users
– Teams: Team-building mechanisms, such as game-like mobile app features or local multiplayer

We don’t require interns to come in with a lot of experience in a specific language or framework, but we are looking for bright, highly motivated interns to build on best-of-breed open-source tools. It’s important to us that your time here grows your skills using tools that matter in the real world and in job markets. Here’s a short list of technology we’re actively working with or exploring in PAL3:
React (including Gatsby and React Native/XP), Unity 3d, python, javascript, typescript, node/express, graphql, mongodb, docker, kubernetes, circleci, AWS, JAM stack (e.g. Netlify), XAPI

Students in the lab are encouraged to publish and interested students will be invited to participate in at least one publication related to their summer research.

Preferred Skills
• React (including Gatsby and React Native/XP), Unity 3d, python, javascript, typescript
• node/express, graphql, mongodb, docker, kubernetes, circleci
• AWS, JAM stack (e.g. Netlify), XAPI

350 – Research Assistant, Identity Models for Dialogue

Project Name
Identity Models for Dialogue

Project Description
The project will involve investigation of techniques to go beyond the current state of the art in human-computer dialogue by creating explicit models of dialogue-agent and human-interlocutor identity, investigating human-like dialogue strategies, and exploring synergies across multiple dialogue tasks.

Job Description
The student intern will work with the Natural Language Research Group (including professors, other professional researchers, and students) to advance one or more of the research areas described above. If the student has a particular goal or related work at their home institution they should briefly describe this in the application letter. Specific activities will depend on the project and skills and interests of the intern, but will include one or more of the following: programming new dialogue or evaluation policies, annotation of dialogue corpora, testing with human subjects.

Preferred Skills
• Some familiarity with dialogue systems or natural language dialogue
• Either programming ability or experience with statistical methods and data analysis
• Ability to work independently as well as in a collaborative environment

349 – Research Assistant, Conversations with Heroes and History

Project Name
Conversations with Heroes and History

Project Description
ICT’s time-offset interaction technology allows people to have natural conversations with videos of people who have had extraordinary experiences and learn about events and attitudes in a manner similar to direct interaction with the person. Subjects will be determined at the time of the internship. Previous subjects have included Holocaust and Sexual Assault Survivors and Army Heroes.

Job Description
The intern will assist with developing, improving, and analyzing the systems. Tasks may include running user tests, analyzing content and interaction results, and improving the systems. The precise tasks will be determined based on the skills and interests of the selected applicant, as well as the demands of the project during the time of the internship.

Preferred Skills
• Very good spoken and written English (native or near-native competence preferred)
• General computer operating skills (some programming experience desirable)
• Experience in one or more of the following: interactive story authoring and design; linguistics or language processing; a related field such as museum-based informal education

348 – Research Assistant, Narrative Summarization

Project Name
Narrative Summarization

Project Description
The Narrative Summarization project at ICT explores new technologies for generating textual descriptions of time-series data from multiplayer interactive games (military training simulations).

Job Description
Interns working on this project will develop new software for transforming structured, formal descriptions of gameplay events into English text using a combination of linguistic templates, grammatical transformation rules, and deep neural network language models. Interns will work directly with the faculty member / project leader, write software code, write documentation and reports, and conduct evaluations / experiments. Successful applicants will have skills in both software engineering (Python, C#, and/or Rust) and in computational linguistics (language modeling, syntactic parsing, and/or formal semantics).
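
As a minimal illustration of the template layer only (hypothetical event schema; the project also applies grammatical transformation rules and neural language models), structured gameplay events can be rendered as English like this:

```python
# Toy template-based realization: map structured gameplay event records
# to English sentences. Event types and field names here are invented
# for illustration, not the project's actual schema.

TEMPLATES = {
    "move":   "{agent} moved from {src} to {dst}.",
    "engage": "{agent} engaged {target} at {location}.",
    "report": "{agent} reported {observation} to {recipient}.",
}

def realize(event):
    """Render one structured event record as an English sentence."""
    return TEMPLATES[event["type"]].format(**event)

events = [
    {"type": "move", "agent": "Alpha squad", "src": "the rally point",
     "dst": "the ridge"},
    {"type": "engage", "agent": "Alpha squad", "target": "an enemy patrol",
     "location": "the ridge"},
]
summary = " ".join(realize(e) for e in events)
```

Transformation rules (aggregation, pronominalization) and a neural language model would then smooth such template output into more fluent narrative.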

Preferred Skills
• Software engineering (Python, C#, and/or Rust)
• Computational linguistics (language modeling, syntactic parsing, and/or formal semantics)

347 – Programmer, One World Terrain (OWT)

Project Name
One World Terrain (OWT)

Project Description
One World Terrain (OWT) is an applied research effort focusing on researching and prototyping capabilities that support a fully geo-referenced 3D planetary model for use in the Army’s next-generation training and simulation environments. USC-ICT’s research exploits new techniques and advances in the focus areas of collection, processing, storage and distribution of geospatial data to various runtime applications.
The project seeks to:
• Construct a single 3D geospatial database for use in next-generation simulations and virtual environments
• Procedurally recreate 3D terrain using drones and other capturing equipment
• Extract semantic features from raw 3D terrain and point clouds to build a simulation-ready environment
• Develop efficient run-time applications for terrain visualization and simulation
• Reduce the cost and time for creating geo-specific datasets for M&S
• Leverage commercial solutions for the storage, distribution, and serving of geospatial data
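
As a small, hypothetical example of the kind of semantic extraction listed above (real OWT processing is far more sophisticated), the sketch below separates ground from above-ground points in a point cloud using a per-cell height threshold:

```python
import numpy as np

# Illustrative ground extraction: label points within `tol` meters of
# their horizontal grid cell's minimum elevation as ground. The data is
# synthetic; production pipelines use learned semantic segmentation.

def label_ground(points, cell=2.0, tol=0.3):
    """Boolean mask of ground points for an (N, 3) xyz array."""
    cells = np.floor(points[:, :2] / cell).astype(int)
    labels = np.zeros(len(points), dtype=bool)
    for key in {tuple(c) for c in cells}:
        mask = (cells == key).all(axis=1)
        labels[mask] = points[mask, 2] <= points[mask, 2].min() + tol
    return labels

rng = np.random.default_rng(2)
# Flat synthetic terrain (z near 0) plus tall "tree" points (z in 3..8).
ground = np.column_stack([rng.uniform(0, 10, (400, 2)),
                          rng.uniform(0, 0.1, 400)])
trees = np.column_stack([rng.uniform(0, 10, (20, 2)),
                         rng.uniform(3, 8, 20)])
cloud = np.vstack([ground, trees])
is_ground = label_ground(cloud)
```

Once ground is isolated, the remaining points can be clustered and classified (vegetation, buildings, vehicles) to feed a simulation-ready semantic model.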

Job Description
The programmer will work with the OWT team in support of recreating digital 3D global terrain capabilities that replicate the complexities of the next-gen operational environment for M&S.

Preferred Skills
• Experience with machine learning and computer vision (Python, TensorFlow, PyTorch)
• Experience with 3D point cloud and mesh processing
• Experience with Unity/Unreal game engines and related programming skills (C++/C#)
• Experience with photogrammetry reconstruction processes
• Web services
• 3D rendering in browsers
• Interest/experience with geographic information system applications and datasets

346 – Research Assistant, The Sigma Cognitive Architecture

Project Name
The Sigma Cognitive Architecture

Project Description
This project is developing the Sigma cognitive architecture – a computational hypothesis about the fixed structures underlying a mind – as the basis for constructing human-like, autonomous, social, cognitive (HASC) systems. Sigma is based on an extension of the elegant but powerful formalism of graphical models, enabling it to combine both statistical/neural and symbolic aspects of intelligence. Although it is built in Lisp, Sigma’s core algorithms are in the process of being ported to C.
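
As a toy illustration of the graphical-models substrate (this is ordinary sum-product message passing on a tiny chain, not Sigma itself), computing a marginal on a two-variable factor graph looks like this:

```python
import numpy as np

# Two-variable factor graph A - f - B: a prior on A and a pairwise
# factor f(A, B). Sum-product sends one message through the factor to
# obtain the marginal on B. All numbers are invented for illustration.

prior_a = np.array([0.7, 0.3])          # P(A)
factor = np.array([[0.9, 0.1],          # f(A, B), here read as P(B | A)
                   [0.2, 0.8]])

# Message from A through the factor to B: sum over A of P(A) * f(A, B).
msg_to_b = prior_a @ factor
marginal_b = msg_to_b / msg_to_b.sum()  # normalize to get P(B)
```

Architectures like Sigma generalize exactly this kind of message passing to large, mixed symbolic/probabilistic graphs, which is why graphical-models experience is listed below.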

Job Description
Looking for a student interested in developing, applying, analyzing and/or evaluating new intelligent capabilities in an architectural framework.

Preferred Skills
– Programming (Lisp preferred, but can be learned after arrival)
– Graphical models and/or neural networks (experience preferred, but ability to learn quickly is essential)
– Cognitive architectures (experience preferred, but interest is essential)

Army Has High Hopes for One World Terrain Training Tool

The Army is creating a high-resolution virtual world realistic enough to help prepare troops for battles across the globe.

The Synthetic Training Environment, or STE, is a 3D training and mission rehearsal tool that brings together live, virtual, constructive and gaming environments to improve soldier and unit readiness. 

Two years ago, as part of the Army’s widespread modernization effort, then-Army Chief of Staff Gen. Mark Milley called for a rapid expansion of the service’s synthetic training capabilities.

An STE cross-functional team, led by Maj. Gen. Maria Gervais, was established to help develop next-generation training capabilities. It is most closely aligned with the soldier lethality portfolio, but is also geared toward the other modernization priorities, she told reporters at last year’s Association of the United States Army annual conference.

One World Terrain, also known as OWT, is one of several key components of the new training architecture which will provide an accessible 3D representation of the global operating environment, according to the service. 

Continue reading in National Defense Magazine.

Paging Dr. Robot: Artificial Intelligence Moves into Care

Artificial intelligence is spreading into health care, often as software or a computer program capable of learning from large amounts of data and making predictions to guide care or help patients.

It already detects an eye disease tied to diabetes and does other behind-the-scenes work like helping doctors interpret MRI scans and other imaging tests for some forms of cancer.

Now, parts of the health system are starting to use it directly with patients. During some clinic and telemedicine appointments, AI-powered software asks patients initial questions about their symptoms that physicians or nurses normally pose.

And an AI program featuring a talking image of the Greek philosopher Aristotle is starting to help University of Southern California students cope with stress.

Researchers say this push into medicine is at an early stage, but they expect the technology to grow by helping people stay healthy, assisting doctors with tasks and doing more behind-the-scenes work. They also think patients will get used to AI in their care just like they’ve gotten accustomed to using the technology when they travel or shop.

Continue reading in the Associated Press.

The Academic Pipeline

The use of simulation and virtual reality in education and training today is widespread and expanding. Where will the next generation of “simulationists” come from to continue this evolution? MS&T speaks with leading research universities, including ICT.

Read the full article in MS&T Magazine.

Technology’s New Role

A monopoly on film-making gives LA the lead in emerging industries. Watch the full segment, via Economist-Pictet.

Fighting on New Frontiers

The United States has already faced and defeated enemies both in and under cities. Across the Department of Defense (DoD), however, there’s an acknowledgment that what was once the exception might one day be the rule. And so, military leaders have begun laying a foundation on which to build a future fighting force that’s as ready to engage in urban and underground environments as it is in conventional domains. There’s just one thing they need to complete their mission: more and better geospatial intelligence (GEOINT), which is being developed thanks to forward-looking programs such as the Army’s One World Terrain (OWT) and the Defense Advanced Research Project Agency’s (DARPA) Subterranean (SubT) Challenge.

Continue reading in Trajectory Magazine.


U.S. Army Leaders Test Out Latest Militarized HoloLens AR Architecture

US Army soldiers are evaluating the latest iteration of Microsoft’s modified HoloLens 2 headset as part of the service’s effort to inject augmented reality (AR) into how the force trains and conducts combat missions.

Under the Integrated Visual Augmentation System (IVAS) programme, the service is working with Microsoft to militarise the company’s HoloLens 2 AR system to include One World Terrain and Nett Warrior, with the goal of beginning to field the capability to the force by the end of 2021. To get there, soldiers are testing out the second iteration of the technology at Fort Pickett, Virginia, and providing the service and company with advice on ways to improve it.

Continue reading in Jane’s 360.

AAAI Fall Symposium

AAAI 2019 Teaching AI in K-12 Symposium


The SIGNLL Conference on Computational Natural Language Learning

This Mapping Tech will Distinguish Different Tree Species

Updated geospatial data is increasingly needed to enhance the situational awareness of soldiers in the field. One World Terrain (OWT) is a US Army research effort that seeks to provide a set of 3D global terrain capabilities and services that can replicate the coverage and complexities of the operational environment.

The OWT software was intended to build 3D battle maps for training simulations. But an early version has already proven so useful that special operators are using it to plan actual missions.

Continue reading in iHLS.

Surviving the First ‘Deepfake’ Election: Three Questions

Even before deepfakes, voters have needed to sort through disinformation, which has even come from campaigns themselves. 

“You don’t need deepfakes to spread disinformation,” says Hao Li, an associate professor of computer science at the University of Southern California who worked on a federally funded project to spot deepfakes. 

With advances in technology, however, Professor Li predicts that undetectable deepfake videos are between six and 12 months away – a period that corresponds roughly with the election season. 

Continue reading in the Christian Science Monitor.

Can a Building Help Thwart the Next Active Shooter?

Researchers have designed a virtual environment to test how building features in schools and offices help or hurt occupants during active shooter incidents.

Continue reading in USC Viterbi News.

Amazon’s A.I. Emotion-Recognition Software Confuses Expressions for Feelings

We have very strong intuitions about what emotions like happiness look like when displayed on other people’s faces. From childhood, we are taught these associations between facial expressions and inner emotions. Even now, we use emojis to show our feelings when text alone might fail to express our sentiment. We watch TV shows and films that zoom in on actors’ faces to give us insight into how the character is feeling. And when we see an image of a person smiling, we intuit that they are happy.

“People are consistent … if someone is smiling, they’ll rate that image as happy,” Jonathan Gratch says, “which is also why a lot of people on Facebook think everyone else is happier than they are. They see all these smiling faces and they think, ‘Oh, they must be happy.’”

Continue reading in OneZero for Medium.

ICCV 2019 International Conference on Computer Vision

Can AI Also be Used to Mask Someone’s Identity?

At the moment, Facebook does not have any plans for this new technology. However, the research is particularly important due to the proliferation of deepfakes. Dr. Hao Li at the University of Southern California recently stated that we are less than a year away from dealing with “perfectly real” deepfakes. There are already a variety of deepfake videos and images of politicians, celebrities, and others in existence. 

Read more at Hot Hardware.

A Market That’s Virtually Unlimited

Via Rochester Beacon.

At this point only a few VR systems specifically designed for medical purposes are commercially available. A company called Floreo Inc., co-founded and headed by a former Amazon official whose son is on the autism spectrum, sells a VR-based educational tool for autistic children, for example.

Researchers at the University of Southern California’s Institute for Creative Technologies have developed and are working on a number of AR/VR projects, including a VR training app that lets medical providers practice on virtual patients and a VR therapy for PTSD sufferers.

Continue reading.

VR May Change the Future of Therapy

Researchers are discovering that virtual reality (VR) is more than just a fancy way to play video games. It’s proving useful in therapy.

“What does head-mounted display virtual reality provide people?” says Skip Rizzo, research professor and director for medical virtual reality at the University of Southern California’s Institute for Creative Technologies. “It can immerse a person in an environment that can help them to get over their fears or confront their past traumas, all in a controlled stimulus environment.”

Continue reading in Medium’s Elemental.

Reuters Ranks USC #8 in The World’s Most Innovative Universities

A private research university based in Los Angeles, USC has strong ties to Hollywood and significant expertise in digital entertainment. The university’s Institute for Creative Technologies studies how people engage with technology through virtual characters and simulations, and collaborates with studios including Warner Bros and Sony Pictures Entertainment to develop ever more realistic computer-generated characters in movies.

Continue reading in Thomson Reuters.

UIST 2019: 32nd ACM User Interface Software and Technology Symposium

Army Leaders See Virtual, Augmented as the Future of Training

The next steps for One World Terrain include expanding the system’s database. Some of the challenges include improving latency and bandwidth so that the fidelity, or realistic quality, of the terrain remains high.

Continue reading in ArmyTimes.

3D Mapping for Army Hinges On Bandwidth

Mapping in 3D will revolutionize both Army ops planning and training, if ways can be found to deliver sufficient bandwidth to move that data to the field, senior Army officials say. This will require not only increasing connectivity between sensors and soldiers, but also developing software solutions to condense data and protocols for prioritizing who gets what data when, and over what network.

Continue reading in Breaking Defense.

Do We Trust Artificial Intelligence Agents to Mediate Conflict? Not Entirely

A new study says we’ll listen to virtual agents except when the going gets tough, via USC Viterbi News.

AIS Honors 10 Women in Tech, Including Jessica Brillhart

Celebrating 10 years of achievement in entertainment technology, the Advanced Imaging Society today named 10 female industry innovators who will receive the organization’s 2019 Distinguished Leadership Awards at its 10th annual Entertainment Technology Awards ceremony on October 28 in Beverly Hills.

The individuals were selected by an awards committee for being significant “entertainment industry growth catalysts.” The Lumiere Statuette recipients are:

Susan Brandt, president, Dr. Seuss Enterprises, San Diego

Jessica Brillhart, director, MxR Lab, USC Institute for Creative Technologies, Los Angeles 

Dee Challis Davy, director, Pageant of the Masters, Festival of the Arts, Laguna Beach

Vicki Dobbs Beck, executive in charge, ILMxLAB, Industrial Light & Magic, San Francisco

Michelle Grady, EVP, Imageworks, Sony Pictures Entertainment, Vancouver

Patricia Keighley, chief quality guru, IMAX Corporation, Los Angeles 

Holly Liu, co-founder, Kabam, San Francisco 

Tamera Mowry-Housley, host of “The Real,” actress, advocate, Los Angeles

Cynthia Slavens, director, studio mastering and operations, Pixar Animation Studios, Emeryville, Calif.

Christina Lee Storm, VP business operations, strategy & emerging technology, Advanced Creative Technology, DreamWorks Animation, Glendale, Calif.

The honors will be presented at the Society’s 10th annual Entertainment Technology Awards luncheon on October 28th at the Four Seasons Hotel at Beverly Hills. The Distinguished Leadership Awards are sponsored by Cisco. 

Via Variety.

21st ACM International Conference on Multimodal Interaction
October 14-18, 2019
Suzhou, Jiangsu, China

Special Ops Using Army’s Prototype 3D Maps On Missions: Gervais

The Army’s One World Terrain software was intended to build 3D battle maps for training simulations. But an early version has already proven so useful that special operators are using it to plan actual missions.

Continue reading in Breaking Defense.

USC Taps into AI to help with Student Mental Health

USC wants to make mental health resources more accessible to students. So they built an app for it.

It’s called Ask Ari, and it’s designed to specifically address the issues of USC students and help them streamline the process of self care. The platform was pioneered by the Office of Education and Wellness in partnership with the Institute for Creative Technologies.

Continue reading in USC Annenberg Media.

One World Terrain to Allow U.S. Soldiers to Train Anywhere

Carrying only a backpack and a drone, Soldiers could capture and eventually re-create entire sections of forests and steep mountains. Joe Lacdan explains for Army Recognition.

7th International Conference on Human-Agent Interaction

Ethical Concerns of AI Call Growing Adoption into Question

Hao Li talks with TechTarget about deepfake technology.

American Psychological Association Conference on Technology, Mind and Society

Perfect Deepfake Tech Could Arrive Sooner Than Expected

Professor Hao Li used to think it could take two to three years for the perfection of deepfake videos to make copycats indistinguishable from reality.

But now, the associate professor of computer science at the University of Southern California, says this technology could be perfected in as soon as six to 12 months.

Continue reading and listening to the full interview on WBUR.

How VR Is Improving Treatments for Patients and Training for Doctors

Virtual reality gives doctors a safe place to learn surgical techniques and helps patients overcome PTSD. Read the full article featuring Dr. Skip Rizzo and Bravemind in ZDNet.

How Worried Should We Be About Deepfakes?

ICT’s Hao Li talks with BBC about the technology, listen here.

“Perfectly Real” Fake Videos Will Be Here Faster Than You Think

Continued deepfake coverage in USC Annenberg Media.

Deepfake Pioneer: “Perfectly Real” Fake Vids Are Six Months Away

Futurism‘s coverage of deepfake technology, featuring ICT’s Hao Li.

Half A Year to a Year Away From ‘Perfectly Real’ Deepfakes: Top Artist

CNBC’s “Power Lunch” team discusses the onslaught of ‘deepfakes’ and their impact with Professor Hao Li, a top ‘deepfake’ artist. Watch the segment here.

Syracuse VA to Use Virtual Technology to Help Combat Veterans Tackle Trauma

Veterans suffering from trauma from their time in combat might soon be using technology from video games to help them heal. Donations from a veterans support group and Syracuse University will help the Syracuse VA Medical Center.

Continue reading on the WAER 88.3 Syracuse University website.

Battling PTSD with Virtual Reality for Veterans in CNY

Virtual reality is bringing veterans back to battle, giving the Syracuse VA another way to work with patients living with Post Traumatic Stress Disorder, working to reduce the number of suicides in this community. 

A donation from SoldierStrong gives 10 VA hospitals across the nation, including Syracuse, a new tool for health professionals. The goal isn’t to erase a painful memory, but to teach veterans how to live with it.

Continue reading about the SoldierStrong donation of Bravemind to the Syracuse VA here.

The World’s Top Deepfake Artist: ‘Wow, this is Developing More Rapidly Than I Thought’

ICT’s Hao Li sees deepfake technology as moving quickly toward being indistinguishable from reality. Read the full piece in MIT Technology Review.

A Deepfake Putin and the Future of AI Take Center Stage at Emtech

Artificial intelligence took center stage at this year’s Emtech conference, presented by MIT Technology Review. The conference began with a demo of a deepfake from ICT’s Hao Li, and featured conversations about the impact of such technologies and misinformation in general, deploying AI at scale, how organizations should approach using AI, whether facial recognition should be more closely regulated, and more.

Continue reading the full article in PC Magazine.

International Conference of the International Committee of the Red Cross
September 9-13, 2019
Keynote Presentation

U.S. Army Training Devices Could Soon Be Embedded in Helicopters, Vehicles

Army Recognition features One World Terrain in their article about new Army training devices. Read the full piece here.

The Biggest iPhone News Is a Tiny Chip Inside It

ICT’s David Krum provides some insight and thoughts about the new iPhone for WIRED. Read the full article here.

U.S. Army’s Next-Gen Helicopters and Combat Vehicles to Receive Virtual Training Devices

To help Soldiers achieve the Army’s ambitious readiness goals, the service has made advances in its training. The Army Research Laboratory at Aberdeen Proving Ground, Maryland, and the Institute for Creative Technologies in Southern California have been developing the One World Terrain, a virtual training management tool that will eventually become a pillar of the synthetic training environment. OWT features augmented reality software that uses data collection and data mapping to create realistic training scenarios for Soldiers on the ground.

Continue reading in Defence Blog.

10 Innovative IT Projects to Inspire You This Fall

EdTech features organizations, startups, projects and research that are innovating in the educational tech sector. Read the full article which includes ICT’s work here.

SemDial 2019

SemDial 2019
September 4-6, 2019

Emotion-reading algorithms cannot predict intentions via facial expressions

Though algorithms are increasingly being deployed in all facets of life, a new USC study has found that they fail basic tests as truth detectors.

By Sara Preto

September 4, 2019 — Los Angeles — Most algorithms have probably never heard the Eagles’ song, “Lyin’ Eyes.” Otherwise, they’d do a better job of recognizing duplicity.

Computers aren’t very good at discerning misrepresentation, and that’s a problem as the technologies are increasingly deployed in society to render decisions that shape public policy, business and people’s lives.

Turns out that algorithms fail basic tests as truth detectors, according to researchers who study theoretical factors of expression and the complexities of reading emotions at the USC Institute for Creative Technologies. The research team completed a pair of studies using science that undermines popular psychology and AI expression understanding techniques, both of which assume facial expressions reveal what people are thinking.

“Both people and so-called ‘emotion reading’ algorithms rely on a folk wisdom that our emotions are written on our face,” said Jonathan Gratch, director for virtual human research at ICT and a professor of computer science at the USC Viterbi School of Engineering. “This is far from the truth. People smile when they are angry or upset, they mask their true feelings, and many expressions have nothing to do with inner feelings, but reflect conversational or cultural conventions.”

Gratch and colleagues presented the findings today at the 8th International Conference on Affective Computing and Intelligent Interaction (ACII).

Of course, people know that people can lie with a straight face. Poker players bluff. Job applicants fake interviews. Unfaithful spouses cheat. And politicians can cheerfully utter false statements.

Yet, algorithms aren’t so good at catching duplicity, even as machines are increasingly deployed to read human emotions and inform life-changing decisions. For example, the Department of Homeland Security invests in such algorithms to predict potential threats. Some nations use mass surveillance to monitor communications data. Algorithms are also used in focus groups and marketing campaigns, and to screen loan applicants or job candidates.

“We’re trying to undermine the folk psychology view that people have that if we could recognize people’s facial expressions, we could tell what they’re thinking,” said Gratch, who is also a professor of psychology. “Think about how people used polygraphs back in the day to see if people were lying. There were misuses of the technology then, just like misuses of facial expression technology today. We’re using naïve assumptions about these techniques because there’s no association between expressions and what people are really feeling based on these tests.” 

To prove it, Gratch and fellow researchers Su Lei and Rens Hoegen at ICT, along with Brian Parkinson and Danielle Shore at the University of Oxford, examined spontaneous facial expressions in social situations. In one study, they developed a game that 700 people played for money, and then captured how people’s expressions impacted their decisions and how much they earned. Next, they allowed subjects to review their behavior and provide insights into how they were using expressions to gain advantage and whether their expressions matched their feelings.

Using several novel approaches, the team examined the relationships between spontaneous facial expressions and key events during the game. They adopted a technique from psychophysiology called event-related potentials to address the extreme variability in facial expressions and used computer vision techniques to analyze those expressions. To represent facial movements, they used a recently proposed method called facial factors, which captures many nuances of facial expressions without the difficulties modern analysis techniques provide.

The scientists found that smiles were the only expressions consistently provoked, regardless of the reward or fairness of outcomes. Additionally, participants were fairly inaccurate in perceiving facial emotion and particularly poor at recognizing when expressions were regulated. The findings show that people smile for many reasons, not just happiness, and that context is important when evaluating facial expressions.

“These discoveries emphasize the limits of technology use to predict feelings and intentions,” Gratch said. “When companies and governments claim these capabilities, the buyer should beware because often these techniques have simplistic assumptions built into them that have not been tested scientifically.”

Prior research shows that people draw conclusions about others’ intentions and likely actions based simply on those others’ expressions. While past studies have used automatic expression analysis to make inferences about states such as boredom, depression and rapport, less is known about the extent to which perceptions of expression are accurate. These recent findings highlight the importance of contextual information when reading others’ emotions and support the view that facial expressions communicate more than we might believe.

USC’s ICT joins entertainment industry artists with computer and social scientists to explore immersive media for military training, health therapies, education and more. Researchers study how people engage with computers through virtual characters, video games and simulated scenarios. ICT is a leader in the development of virtual humans who look, think and behave like real people. Established in 1999, ICT is a U.S.  Department of Defense sponsored University Affiliated Research Center working with the U.S. Army Research Laboratory. 

The work was funded by the U.S. Army and the European Office of Aerospace Research and Development. 


The Case for a Course of Virtual Reality

VR may help ease conditions from pain to phobias to PTSD. This article is based on reporting that features expert sources including ICT’s Dr. Skip Rizzo. Read the full story in U.S. News & World Report.

8th International Conference on Affective Computing & Intelligent Interaction (ACII 2019)

ACII 2019
September 3-6, 2019
Cambridge, United Kingdom

A New Army Network Could Also Help Training

The U.S. Army is undergoing a major technology shift affecting how soldiers prepare for battle through the implementation of a Synthetic Training Environment. 

Continue reading in C4ISRNET.

Synthetic Training Environment Piques Partners’ Interest

The Army-funded Institute for Creative Technologies provides some of the virtual training capabilities. The institute has won awards for its hologram technology, and Gen. Gervais says holograms are a possibility for the STE in the future. “Is it in the realm of the possible? Yes,” she said.

Continue reading in Signal Magazine.

USC Institute for Creative Technologies Celebrates 20 Years

August 22, 2019 – LOS ANGELES – USC’s Institute for Creative Technologies (ICT) is celebrating its 20th anniversary this year. What began as a Department of Defense and Army initiative to advance simulations and computer-supported training by collaborating with the entertainment industry has since transformed into a leading research center developing artificial intelligence and machine learning technologies for creating interactive educational media that improves the lives of service members, veterans and society at large.

It is estimated that hundreds of thousands of service members and veterans have used ICT training systems and health therapies. Systems include prototypes that help treat post-traumatic stress symptoms using virtual reality, systems that teach leaders how to counsel subordinates, job training for returning service members using virtual human technology, rapid 3D terrain capture and reconstructions using drones, mixed reality smart glasses, and more. 

The impacts of ICT’s DoD and Army-funded research extend far beyond the military, including basic research knowledge transforming industry adoption, fundamental characteristics of human-machine teams, hologram-like projections of genocide and sexual assault survivors, award-winning graphics in blockbuster films, game-based systems for physical therapy, and the creation of low-cost virtual reality headsets like Oculus Rift and Google Cardboard.

Working in collaboration with government and DoD research laboratories, including the Army Research Lab and the Army Futures Command Soldier Center, ICT is also a recognized leader in several areas of academic research, ranging from computer science and human performance to serious games for training, terrain reconstruction modeling and more. The combination of scientists and storytellers is what has instilled a sense of uniqueness at the Institute, and it is what will continue making an impact today and paving the way for what is possible in the future.


Are VR Gamers Stumbling Onto a Mental Health Power-Up?

Skip Rizzo, a neuropsychologist at the Medical Virtual Reality Lab at the University of Southern California, is more interested in the therapeutic value of the tech. For years, he’s used VR to help treat veterans with post-traumatic stress disorder.

“When we study people with PTSD and we put them in an fMRI system, for example, and study their brain activation when they’re presented with cues that are reminiscent to the trauma, we see hyper-activation of the amygdala, the fight-or-flight area in the brain,” Rizzo said. “They react as if it’s a real thing.”

Listen to Skip discuss this more with Jad Sleiman on WHYY or read the article on the WHYY website.

Everywhere at Once

Ari Shapiro – USC Viterbi research assistant professor of computer science and head of the Character Animation and Simulation research group at the USC Institute for Creative Technologies (ICT) – has come up with a solution that allows harried, hurried people to be in many places at all hours – at least in the digital world.

Shapiro and his team have created smartphone apps, UBeBot for Android and AR Avatar Director for iPhones, that allow users to customize their own realistic looking avatars, which can emulate their voice, gestures, and physical traits.

Continue reading in USC Viterbi News.

Businesses Tap Virtual Reality to Train Workers

When the first modern virtual reality headsets came to market a few years back, it felt like we were on the cusp of the sci-fi future. But stand-alone VR headsets like the Oculus Quest are still pretty expensive, and there’s not that much content available for them. While home use of VR hasn’t gone mainstream, makers of the technology are increasingly aiming the product at businesses to use for training their employees.

Continue reading in Marketplace.

Virtual Reality’s Health Benefits May Soon Be a Doctor’s Appointment Away

Virtual reality (VR) is quickly working its way out of the gaming space and into the medical field, with experts trying this technology in areas from counseling to childbirth. Perhaps the idea of putting on a set of goggles while talking about a phobia may sound fanciful. But researchers are finding that changing what we see and hear during moments of stress may reduce anxiety and potentially pain.

These therapeutic methods aren’t all available at your local clinic, just yet. But developers and medical research centers are pushing forward, finding that getting out of one’s own head space may, eventually, play a key role in the healing process.

Continue reading in GearBrain.

How VR Storytelling Could Transform Mental Health

Given that most health concerns are inseparable from one’s environment, Rizzo calls VR “the ultimate Skinner box,” meaning it can create safe yet emotionally evocative experiences to serve virtually any assessment or treatment approach imaginable. These therapeutic programs could be uniquely reliable for evaluating patients in the subjective world of mental health, wherein up to 85 percent of conditions can go undetected, according to the World Health Organization.

Continue reading the full article in Dope Magazine.

Futuristic Building Senses Threats and Reacts

Engineers and computer scientists are exploring building design and technology seeking ways to protect people. Recent innovations offer many possibilities, from placement of exits to the number of hiding spots and even walls that move.  But before designs can be put in place, researchers must first observe the behavior of the building’s occupants.

How do the people inside a building respond when an active shooter is present? Will their behavior change if the building is designed in a different way? Virtual reality (VR) is the first step to answering these questions and helping engineers create safer buildings, according to USC assistant professor Gale Lucas, who conducts research in the USC Viterbi School of Engineering Computer Science Department and the Institute for Creative Technologies.

Continue reading on Voice of America.

Virtual Reality in Psychological Treatment

Virtual Reality (VR) technology is all the rage at the moment, with industries across the UK and the world using virtual and augmented reality to further their offerings and increase what they can do with the power of technology. The property industry, the health sector and even firefighters are all using these technologies, but could virtual reality help when treating mental health conditions?

TechRound explores more, including Bravemind in the story. Read more.

Hao Li Speaks with Technology Review about Deepfakes

As a pioneer of digital fakery, Li worries that deepfakes are only the beginning. Despite having helped usher in an era when our eyes cannot always be trusted, he wants to use his skills to do something about the looming problem of ubiquitous, near-perfect video deception.

The question is, might it already be too late?

Continue reading in Technology Review.

International Joint Conference on Artificial Intelligence

International Joint Conference on Artificial Intelligence
August 10-16, 2019
Macao, China

Can VR Mimic a Psychedelic Drug Experience?

A new psychedelic virtual reality experience has some questioning if VR can provide the same psychedelic experience as the real thing. Continue reading in The Fix.

Can Virtual Reality Replace Psychedelic Drugs?

Virtual reality is an “emotionally evocative technology,” says Skip Rizzo, director for medical virtual reality at the University of Southern California’s Institute for Creative Technologies, “because we can create these simulated worlds that fool some of the brain, but not the whole brain.”

Continue reading in Pacific Standard.

Can Virtual Simulations Teach a Human Skill Like Empathy?

Can you learn empathy through interacting with a computer—even though, by definition, the skill requires understanding and sympathizing with real people?

EdSurge speaks with experts, including ICT’s Jonathan Gratch. Visit EdSurge to read the full article.

7th Annual Conference on Advances in Cognitive Systems

7th Annual Conference on Advances in Cognitive Systems
August 2-5, 2019
Cambridge, MA

How Rude Virtual Humans Are Helping Veterans Re-enter the Job Market

The technology lets veterans practice answering standard interview questions—as well as some uncomfortable and offensive questions—in hyper-realistic virtual reality.

Interviewees put on a headset while a vocational counselor stands off to the side and clicks on a screen filled with different questions and responses. Each response is color coded to match the aggressiveness of the phrase, which Wendy Whitcup, an associate producer at ICT, says is to keep the character’s personality constant throughout a training session. 

Continue reading in LA Magazine.

What Makes a Robot Likable?

The August issue of Communications of the ACM features a piece on SimSensei. Read the full article here.

Survivor’s Digital Double Helps Train SARCS, Victim Advocates at SHARP Academy

The U.S. Army website features a story on the ICT prototype, Digital Survivor of Sexual Assault (DS2A). Read the full article here.

ACL 2019

ACL 2019
July 28-August 2
Florence, Italy

CogSci 2019

July 24-27, 2019

Montreal, Canada


10th International Conference on Applied Human Factors and Ergonomics (AHFE 2019)

10th International Conference on Applied Human Factors and Ergonomics (AHFE 2019)
July 24-28, 2019
Washington, D.C.

Hao Li Selected for the DARPA ISAT Study Group

The Defense Advanced Research Projects Agency (DARPA) has named Hao Li to the Information Science and Technology (ISAT) Study Group.

SIGIR 2019

SIGIR 2019
July 21-25, 2019
Paris, France

XR in Healthcare: Simple Evolution or Amazing Revolution?

CIO Applications Europe features a piece investigating XR in healthcare, mentioning some of ICT’s projects and efforts. Read the full article here.

ICCM 2019

July 19-22, 2019
Montreal, Canada

Digital Tools Will Bring the World to Army Soldiers

The STE CFT has developed two important digital tools for soldiers, One World Terrain (OWT) and the Reconfigurable Virtual Collective Trainer (RVCT), explained the STE CFT director, Maj. Gen. Maria Gervais, USA. The general, along with Lt. Col. Dylan Morelle, USA, demonstration officer, STE CFT; and Spc. Cody Palmer, USA, a Bradley driver with B Co., 1st Bn., 18th Infantry Regt., 2nd Armored Brigade Combat Team, 1st Infantry Division, Fort Riley, Kansas, spoke with SIGNAL Magazine at the AFC event.

Continue reading in Signal Magazine.

The World at their Fingertips: One World Terrain to Allow Soldiers to Train Anywhere, Anytime

Carrying only a backpack and a drone, Soldiers could capture and eventually re-create entire sections of forests and steep mountains.

They can map 3D data from the rough, dry wasteland of the Mojave Desert, the dense rainforests of Hawaii or the rocky landscape of woodlands. They can even replicate the detail of a bustling metropolis.

And with this data, they can capture intricate details down to the species of trees. That data will be optimized and aggregated with data from other geospatial sensors to build a digital environment Soldiers could use to train for war or duplicate an operational battlefield.

Continue reading in Guidon.

One Year In, Army Futures Command is Fully Up and Running. Here’s Some of the Tech It’s Been Working On

An update on the Army Futures Command initiative and the One World Terrain project.

Continue reading in Task & Purpose.

Hollywood Technology to Help Army Innovate Tank Training

The Army has enlisted the help of some of the brightest minds in the tech industry to test and evaluate crucial decision-making skills of tank commanders on the battlefield.  

To achieve that goal, the service extended its reach thousands of miles west from Fort Benning’s Maneuver Center of Excellence in Georgia — where tank crews normally train — to Los Angeles. 

Researchers here at the Institute for Creative Technologies at the University of Southern California through a partnership with the Army have developed a mixed reality program, the Synthetic Collective Operational Prototyping Environment, or SCOPE.

Continue reading the full story on the U.S. Army website.

3D Virtual Reality Research Benefiting U.S. Soldiers

Spectrum News 1 reports on One World Terrain. Watch the full segment here.

See How VR & AR Are Changing Medicine

Learn from experts like USC Institute for Creative Technologies R&D director Arno Hartholt and others about cutting-edge topics like how VR is being used in healing, drug discovery, and neuroscience.

Via Gamasutra.

In the Age of Deepfakes, Could Virtual Actors Put Humans Out of Business?

When you’re watching a modern blockbuster such as The Avengers, it’s hard to escape the feeling that what you’re seeing is almost entirely computer-generated imagery, from the effects to the sets to fantastical creatures. But if there’s one thing you can rely on to be 100% real, it’s the actors. We might have virtual pop stars like Hatsune Miku, but there has never been a world-famous virtual film star.

The Guardian talks with ICT’s Arno Hartholt about virtual actors and the possibilities they present in entertainment.

International Association for Conflict Management 2019

International Association for Conflict Management 2019
July 7-10, 2019
Dublin, Ireland

Psychology Technology: 7 Examples of Mental Health Tech

Built In follows one veteran’s journey treating PTS symptoms through exposure therapy, featuring Bravemind and its benefits.

A Dose of Virtual Reality for Medical Professionals

As VR literally transports your vision and hearing to another dimension, the tech was found to be useful in treating patients with psychological diseases. In 2017, NBC wrote a piece on how psychologists used VR to treat PTSD suffered by war veterans using an off-the-shelf VR headset.

Dubbed “USC Bravemind,” the therapy transports patients to a virtual battlefield, aiming to give sufferers a chance to relive their traumas over and over again until they are capable of overcoming them. Jimmy Castellanos, an Iraq War veteran, was one of the participants whose life was changed after several sessions: “Before the treatment, 80-90 percent of my dreams were Iraq related. Now I can’t remember the last time I had one. I live in a completely different way now.”

One of the minds behind the project, Professor Albert “Skip” Rizzo, explains: “You can place people in provocative environments and systematically control the stimulus presentation. In some sense, it’s the perfect application because we can take evidence-based treatments and use it as a tool to amplify the effect of the treatment.”

Continue reading in Tech HQ.

ACM IVA 2019

ACM IVA 2019
July 2-5, 2019
Paris, France

Army Awards Key Contracts to Build Virtual Trainers

The awards mark big progress in developing the STE — essentially a virtual world in which to train soldiers for war — and aim to move the service away from its stove-piped training systems from the ’80s and ’90s.

Continue reading about One World Terrain and the Synthetic Training Environment in Defense News.

Virtual Reality May Make PTSD Treatment More Effective

SoldierStrong, a 501(c)(3) non-profit organization that provides hyper-advanced medical devices and treatments to America’s wounded veterans, is working with four major universities and multiple federal agencies to deliver some of these new PTSD treatments to those who most need them. 

The treatment, known as the ‘StrongMind VR Protocol’, uses virtual reality technology in a clinical setting to address PTSD. The StrongMind Alliance is a group of leading technologists, clinicians, and institutions that have come together to make this treatment possible.

The StrongMind VR Protocol is powered by software developed by Dr. Skip Rizzo and the University of Southern California’s Institute for Creative Technologies. Their software, funded in part by the U.S. Department of Defense, offers 14 different ‘worlds’ that recreate a broad range of combat scenarios. These scenarios, and a range of environmental and sensory factors, can be rapidly customized by doctors and clinicians working with individual veterans dealing with PTSD.

Continue reading on the Bush Center website.

20th International Conference on Artificial Intelligence

June 25-29, 2019

Chicago, IL


Former Google Principal Filmmaker for VR and Vrai Pictures Founder to Lead University of Southern California Institute for Creative Technologies MxR Lab

A Forbes exclusive about USC ICT’s new director for the Mixed Reality Lab.

USC Institute for Creative Technologies Names New Mixed Reality Lab Director

LOS ANGELES (June 25, 2019) — The USC Institute for Creative Technologies has appointed Jessica Brillhart as the Mixed Reality Lab’s new director, effective September 16, 2019. Brillhart will leverage her knowledge in building inclusive virtual worlds to lead the laboratory in the ongoing development of immersive training scenarios that are human-centered, deeply personalized, emotionally impactful, data-rich and cost-effective.

Brillhart comes from Vrai Pictures, where she currently produces impactful and emotional experiences for emerging technologies. Previously, Brillhart was the principal filmmaker for virtual reality at Google where she helped develop Google Jump, a live-action virtual reality ecosystem. 

“It’s an honor to be joining the team at USC’s Institute for Creative Technologies to take on the role as the director of the Mixed Reality Lab,” said Brillhart. “I’ve spent a good part of my career exploring the human side of emergent tech, creating platforms and experiences that not only showcase a technology’s capabilities but also challenge us to think more critically about how those capabilities affect culture and our society as a whole.”

In announcing the appointment, Randy Hill, executive director of the Institute, noted Brillhart’s extensive experience as an immersive director, writer and theorist, and how the team is looking forward to her advancing this powerhouse of research.

“We are experiencing an inspired movement in the advancement of mixed reality developed for training, education, health and entertainment,” said Hill. “USC’s Institute for Creative Technologies has helped lead technical modernization in the virtual and augmented segments by making major contributions to the development of Oculus Rift and Google Cardboard, and we anticipate that the hiring of Jessica will continue that imaginative spirit for the lab’s narrative and experiential content.”

Brillhart’s work explores how emerging technologies are creating original mediums for storytelling. Reflecting on her new responsibilities, she added, “I look forward to continuing MxR’s tradition of pushing experiential design forward by developing impactful tools, systems and experiences that are as functional as they are inclusive, accessible, emotional, and meaningful.”

USC Institute for Creative Technologies is a DoD-sponsored University Affiliated Research Center working in collaboration with the U.S. Army. Established in 1999, ICT has led the way in developing award-winning advanced immersive experiences that leverage groundbreaking research technologies and the art of entertainment to simulate human capabilities. 


Machine Learning Helps Microsoft’s AI Realistically Colorize Video from a Single Image

Film colorization might be an art form, but it’s one that AI models are slowly getting the hang of. In a paper published on a preprint server (“Deep Exemplar-based Video Colorization”), scientists at Microsoft Research Asia, Microsoft’s AI Perception and Mixed Reality Division, Hamad Bin Khalifa University, and USC’s Institute for Creative Technologies detail what they claim is the first end-to-end system for autonomous exemplar-based (i.e., derived from a reference image) video colorization. They say that in both quantitative and qualitative experiments, it achieves results superior to the state of the art.

Continue reading in VentureBeat.

Virtual Reality Might Be the Next Big Thing for Mental Health

Experts used to worry that virtual reality (VR) would damage our brains. These days, however, VR seems more likely to help our gray matter. A new wave of psychological research is pioneering VR to diagnose and treat medical conditions from social anxiety to chronic pain to Alzheimer’s disease. Many of these solutions are still in laboratory testing, but some are already making their way into hospitals and therapists’ offices.

Continue reading in Scientific American.

The Coming Computational Approach to Psychiatry

Any disturbance in human behavior has immediate and future consequences and costs. So, when a growing number of humans from across nations suffer from some form of mental illness, the cost and consequences to countries’ economic and societal security understandably skyrocket. This is a crisis that is becoming increasingly catastrophic for all nations, and as a result, everything seems to be at risk.

As countries confront the complexity of mental illness, a lack of understanding of its root causes, genetic predispositions and biochemical workings, along with ineffective treatments and absent controls, is worsening the crisis. Psychiatry today still relies mostly on voluntary patient reporting and physician observation, based on clinical symptoms or discussions alone, for clinical decision making. When most psychiatric diagnoses still rely on just talking to the patient, there is clearly a need for better ways to diagnose mental illness. And when it comes to treatment, most psychiatrists still go through a trial-and-error approach to determine the right medication at the proper dosage to improve patient outcomes.

Continue reading in Forbes.

Virtual Training Effort Advances

More soldiers soon would be able to train in a broader array of scenarios without leaving their home bases or armories. Under the Army’s “One World Terrain” technology, soldiers would refine their skills in what the service calls its Synthetic Training Environment.

Continue reading FEDweek.

The Evolution of Cognitive Architecture Will Deliver Human-like AI

The current state of AGI research is “a very complex question without a clear answer,” Paul S. Rosenbloom, professor of computer science at USC and developer of the Sigma architecture, told Engadget. “There’s the field that calls itself AGI which is a fairly recent field that’s trying to define itself in contrast to traditional AI.” That is, “traditional AI” in this sense is the narrow, single process AI we see around us in our digital assistants and floor-scrubbing maid-bots.

Continue reading in Engadget.

AI-Powered Therapy to Set Minds at Rest

Online and virtual therapists that use artificial intelligence to inform digital patient engagement could help remediate some everyday mental health challenges.

Continue reading in Engineering & Technology.

How AI Understands Emotion

In science fiction, robots have had emotions for a long time. The film Her celebrates the romantic relationship between a heartbroken Joaquin Phoenix and Samantha, his virtual assistant who has the capacity to develop “love” using artificial intelligence (AI). The Hitchhiker’s Guide to the Galaxy makes numerous gags about the maudlin Marvin the Paranoid Android.

Thanks to recent breakthroughs in AI, emotional engagement with computers is going mainstream, and the reality is a lot different than the way it’s depicted in movies and books. Several industries are now employing virtual humans and AI that give real-time responses to the users’ emotional inputs.

Continue reading in Verizon Communications.

USC Institute for Creative Technologies Hosts First All-Girl High School Hackathon in SoCal

A North Hollywood High School student hit a trifecta this weekend. She brought together passions for empowering her female peers, computer science and the need to bridge the gender gap.

Emily Jin’s AIHacks Saturday and Sunday at the USC Institute for Creative Technologies brought students and scholars from all over the state for a weekend of problem-solving, coding, and collaboration — and, yes, some competition.

Continue reading in LA Daily News.

Army Tests New Training Platform

A technology called “One World Terrain” will help soldiers train virtually on 3D battlefields that replicate real locations. The Army recently conducted user assessments of the technology with soldiers from the 1st Infantry Division out of Fort Riley, Kansas, according to a service release. The testers assessed the database of 3D environments and tested out a management tool and software that can tie different simulation platforms together. Soldier feedback will assist in development of the training system, which is set to reach initial operational capability before the end of fiscal 2021, and full operational capability in fiscal 2023. Three Army divisions and 24 Marine Corps battalions are already using One World Terrain.



Here’s How a Popular Form of Entertainment is Helping Veterans Deal with Pain

Skip Rizzo, director of medical virtual reality at the University of Southern California Institute for Creative Technologies, said virtual reality has been used for various forms of pain management — both physically and mentally. In addition to how it’s being used at Kaplan’s hospital, the technology can distract people from painful medical procedures, provide a game-like environment during physical therapy sessions and help patients heal from military sexual trauma and PTSD.

For the latter, Rizzo’s lab uses a simulation of scenes from Iraq and Afghanistan deserts and villages, as well as stateside military bases, bars and bathrooms, where these traumas often occur, to help patients process what they went through with the help of a therapist.

Continue reading in MilitaryTimes.

Can the NHS’s New Virtual Reality Game Cure a Fear of Heights?

Dr Albert Rizzo, director of medical virtual reality at the University of Southern California, has made headlines in the US after designing a treatment for military veterans suffering from battlefield trauma.

Funded by the U.S. military, Dr Rizzo has created a series of virtual worlds, allowing veterans to ‘return’ to the moment that traumatised them. They might be placed in an army vehicle, driving along an Iraqi desert road, when a roadside bomb detonates in the distance, or experience walking through an Afghan village, when suddenly they come under fire from the enemy.

Continue reading in The Telegraph.

NAACL-2019 Workshop on Narrative Understanding

June 7, 2019

Minneapolis, MN


Soldier Strong Program Helps Veterans Cope with PTSD

Veteran Chris Merkle discusses his return home from war and how VR exposure therapy tools like Strong Mind, powered by ICT’s Bravemind, have helped during the process.

See the full segment on The Story with Martha MacCallum.

Army Testing Synthetic Training Environment Platform

Soon Soldiers worldwide could have a wealth of training options at their home station or armory — on a virtual platform. 

The Army recently conducted user assessments of its “One World Terrain” technology, a key component of the Synthetic Training Environment that will allow Soldiers to train virtually in 3-D on battlefields around the world from home station or deployed locations.

Continue reading on the U.S. Army website.

GEOINT Symposium

GEOINT Symposium
June 2-5, 2019
San Antonio, TX

US Army Developing ‘Google Earth on Steroids’ That Will Be Able to Simulate the INSIDE of Buildings

In a report from National Defense, one researcher working on the project revealed that the system will be granular enough to map the inside of buildings and eventually entire cities which can then be used in simulated training exercises.

The military hopes to inform the creation of these realistic simulations, part of what it calls the Synthetic Training Environment (STE), by building a comprehensive and highly detailed 3D map of locations around the globe — an initiative dubbed One World Terrain.

Continue reading in the Daily Mail.

Robotics and American Sign Language

USC Viterbi researchers are helping to increase exposure to language in deaf infants to improve their development of language, reading, grammar, and writing skills for the rest of their lives.

Continue reading in USC Viterbi News.

Why Many Combat Veterans Are Still Suffering, Years After the Fight Ended

On average, 20 U.S. military veterans die by suicide each day, and suicides among active-duty personnel are increasing. A number of treatments exist for veterans with depression and post-traumatic stress disorder, but they have drawbacks. Special correspondent Mike Cerre looks at treatment options and follows up with U.S. Marines with whom he was embedded during the war in Iraq.

Watch full segment on PBS News Hour.

Army Leveraging Virtual-Reality Market for Plug-and-Play Battle Simulation

The Synthetic Training Environment will be a realistic and complex replica of weapons systems and operations, complete with civilians, enemies, weather conditions and even animals, Gervais said. It will integrate the domains of land, air, sea, space and cyber using live, virtual and augmented environments. It will also incorporate scenarios found in the Army’s regular exercises.

Continue reading in Stars and Stripes.

Untethered Shooting Trainers, Reconfigurable Air and Ground Simulators, and the Next Level of Virtual Training

In less than two years, the Army expects to have advanced simulators at four to five installations that can offer connected, realistic combat training for everyone from the dismounted soldier to a tank commander to a helicopter pilot.

Continue reading in Army Times.

Robots Take a Turn Leading Autism Therapy in Schools

New tech teaching emotional and communication skills shows early signs of connecting with children with autism spectrum disorder.

Continue reading in the Wall Street Journal.

Using VR in Psychiatric Care: ‘This time, we have the technology’

VR has suffered cycles of hype and deflation as people puzzle over which sectors it has the potential to transform: could it replace the whiteboard as the principal tool in education, or is it merely the next platform for gamers? Will it be strictly limited to the world of manual labour or will it fuel a new wave of porn addiction? Despite advances in technology and falling costs, VR has not yet ‘disrupted’ a sector in the way that was promised. The headsets were always too expensive, the graphics too clunky, the need unconvincing.

However, Rizzo believes VR is ready for adoption in mental healthcare. This is “theoretically informed, scientifically supported and now pragmatically deliverable, with advances in lower cost and high-fidelity equipment and easier to use software”. Key to psychiatric care being a good starting point for mixed-reality clinical care is that VR-based psychiatric treatments are not a great step into the unknown: they are based on treatments known to work in the ‘real world’. It is unsurprising then, that study after study (including those conducted by Rizzo’s own group) has shown that treatment using VR simulations can be at least as effective as standard treatments.

Continue reading in Engineering & Technology.

“My heart broke”: Some People Develop an Emotional Response with Their Robots

When a robot “dies,” does it make you sad? For lots of people, the answer is “yes” — and that tells us something important, and potentially worrisome, about our emotional responses to the social machines that are starting to move into our lives.

Continue reading in the Denver Post.

News from ITEC: Army’s ‘Google Earth on Steroids’ to Include Inside of Buildings

The Synthetic Training Environment intends to train all warfighting functions as well as the human dimensions of warfare, which include interacting with locals. It will be flexible, support repetition and be available at the point of need, according to the Army. Current training and simulation systems are not interoperable, affordable or realistic enough, the Army has said. To get at the latter problem, the service wants to create One World Terrain software to duplicate complex environments including large cities.

Continue reading in National Defense Magazine.

How Virtual Reality Can Help the Global Mental Health Crisis

Dr. Albert Rizzo is a research professor at the University of Southern California’s Institute for Creative Technologies (ICT). The research institute, in partnership with Virtually Better and in collaboration with the U.S. Army, created Bravemind – a PTSD treatment system.

An early project undertaken by Bravemind researchers was intended to help returning veterans process and deal with wartime trauma. Patients were outfitted with a head-mounted display and virtually sent back to Fallujah or other Middle Eastern areas of conflict. The soldiers physically held a rifle or other weapon, and were taken through a simulation that included booming explosions, rumbling engines, and even smoke and dust vented into the treatment room.

Continue reading in Forbes.

DisTec 2019

DisTec 2019
May 14-16, 2019
Stockholm, Sweden
Keynote Presentation

AAMAS (International Conference on Autonomous Agents and Multiagent Systems)

AAMAS (International Conference on Autonomous Agents and Multiagent Systems)
May 13-17, 2019
Montreal, Canada

How Simulation Games Prepare the Military for More Than Just Combat

Before signing up for active duty, today’s generation of armed forces recruits have usually played hours of branded shooters such as Call of Duty. But while engaging in simulated battlescapes improves motor skills and familiarity with military jargon, it can’t prepare them for what’s ahead. Or can it?

Continue reading in PC Magazine.

US Military Developing Google Earth on Steroids That Can Map Inside Buildings

Google Earth allows us to navigate the world through our computer or phone screens, zooming into cities and communities all over the world with a birds-eye view of the land. For the average civilian, this technology can help provide better directions to a location, or just serve as a fun time-waster as you jump from one continent to the next. 

But it may not surprise you that a tool such as this could be of great service to the military. Recently, a report from National Defense details an initiative from the U.S. military that is attempting to develop what one researcher called “Google Earth on steroids.”

Continue reading on

Be Wary of Robot Emotions; ‘Simulated Love is Never Love’

“When we interact with another human, dog, or machine, how we treat it is influenced by what kind of mind we think it has,” said Jonathan Gratch, a professor at University of Southern California who studies virtual human interactions. “When you feel something has emotion, it now merits protection from harm.”

Continue reading in Associated Press.

How Intelligent Workstations Will Use AI to Improve Health and Happiness

Do you want to be warm or cold? Is it time to stand rather than sit? An interdisciplinary team – made up of designers and USC professors – is using AI to create tech-savvy desks with health and well-being in mind.

Continue reading in USC News.

DCS 2019: Saving Soldiers Using Clinical VR

The virtual approach not only was accepted by soldiers with PTSD, it seemed to have won the day over real humans, in dramatic results shown to the SPIE audience this week.

Albert “Skip” Rizzo III, a leading virtual reality innovator for 20 years and a research professor at the USC Davis School of Gerontology, made a convincing case that VR can outperform humans in the clinic for a broad range of treatments.

Continue reading in SPIE Optics.

Faculty Aim to Design Tomorrow’s Workstation for Today’s Workforce

To examine social aspects of human–machine interaction, Becerik-Gerber and Roll are collaborating with Gale Lucas, PhD, research assistant professor of computer science at USC Viterbi and the USC Institute for Creative Technologies. The team currently is collecting focus group input about how the workstation should offer prompts, including the degree of automation users are comfortable conceding. If the desk senses that a user is positioned in such a way that might trigger back pain, should it make automatic adjustments with the user’s health in mind? Or will people prefer having the final say over their workstations?

Continue reading in HSC News.

Drone Mapping Improves Warfighter Training, Saves Money

Scientists are using virtual and augmented reality to improve training environments for the warfighter. The technology empowers service members on the ground while saving the Defense Department money in the process.

The University of Southern California Institute for Creative Technologies’ One World Terrain project provides the military with a set of 3D global terrain capabilities that replicate an operational environment. It uses a wide collection of data to make this happen, including terrain flyovers with unmanned aircraft. Images captured from these flyovers are then used to produce 3D models for line-of-sight training and threat analysis.

Continue reading on

New Dimensions in Testimony Discussed at March of Living

The Jerusalem Post interviews attendees of this year’s March of Living and discusses the New Dimensions in Testimony project preserving Holocaust survivor stories.

Continue reading.


March 23-27, 2019
Osaka, Japan

New Artificial Intelligence Frontier of VFX

If there’s a buzz phrase right now in visual effects, it’s “machine learning.” In fact, there are three: machine learning, deep learning and artificial intelligence (A.I.). Each phrase tends to be used interchangeably to mean the new wave of smart software solutions in VFX, computer graphics and animation that lean on A.I. techniques.

Already, research in machine and deep learning has helped introduce both automation and more physically-based results in computer graphics, mostly in areas such as camera tracking, simulations, rendering, motion capture, character animation, image processing, rotoscoping and compositing.

VFX Voice asked several key players – including ICT’s Hao Li – about the areas of the industry that will likely be impacted by this new world of A.I.

ACM IUI 2019

ACM IUI 2019
March 16-20, 2019
Los Angeles, CA

Will People Trust a ‘Digital Human’ the Same Way They Trust People?

Research is showing these kinds of embodied interactions drive engagement, which opens up all kinds of possibilities for how systems can help people.

Continue reading in CIO Upfront.

International Convention of Psychological Science

International Convention of Psychological Science
March 7-9, 2019
Paris, France

Can Artificial Intelligence Be An Effective Therapist?

Over 450 million people are currently affected by mental or neurological disorders, and it is estimated that one in four people will be affected by such a condition in the coming years. With the rapid advancement of technology and its application in the medical field, researchers and medical practitioners are now looking at ways in which artificial intelligence and machine learning can be leveraged to detect early symptoms and find potential cures for various mental illnesses.

Over the years, considerable advancements have been made in this regard and AI-powered solutions such as NLP and even chatbots have been designed to understand the human mind.

Continue reading.

Virtual Reality Opens New Paths to Mental Health

A recent report released by the National Alliance on Mental Illness revealed the surprising statistic that 43.8 million Americans will experience a mental health issue or illness in any given year. That is nearly 1 in 5 people. Even more surprising is that 60% of those people will go untreated.

Micron Intelligence Accelerated reports on these issues and technologies that help in treatment.

The Promise of VR for PTSD

Medical professionals treating PTSD have traditionally relied on patients to tap into their memories in order to work through trauma. Skip Rizzo, research professor with the USC Davis School of Gerontology, joins host Krys Boyd to talk about how virtual reality is now being used to place veterans and other sufferers back into stressful situations in an effort to confront traumatic moments head on.

Listen to the full “think” podcast.

Pre-Programming Autonomous Machines Promotes Selfless Decision-Making

Researchers from the U.S. Combat Capabilities Development Command’s Army Research Laboratory, the Army’s Institute for Creative Technologies and Northeastern University collaborated on a paper published in the Proceedings of the National Academy of Sciences.

Continue reading.

GRAPP 2019

GRAPP 2019
February 25-27, 2019
Prague, Czech Republic

Will Autonomous Vehicles Make Us Better People?

Pacific Standard covers new research from ARL and ICT.

Saving Lives with Virtual Reality

Spectrum News 1 interviews Skip Rizzo about how virtual reality isn’t just for gamers, medical professionals use it to train for life-saving surgeries.

Watch the full segment here.

Could Talking to a Bot Help You Feel Better?

Research from the University of Southern California’s Institute for Creative Technologies, for example, found that U.S. veterans returning from Afghanistan were more willing to disclose symptoms of PTSD to a virtual interviewer than to an anonymous written survey.

Continue reading in Fast Company.

Programming Autonomous Machines Ahead of Time Promotes Selfless Decision Making

ABERDEEN PROVING GROUND, Md. (Feb. 11, 2019) — A new study suggests the use of autonomous machines increases cooperation among individuals.

Researchers from the U.S. Combat Capabilities Development Command’s Army Research Laboratory, the Army’s Institute for Creative Technologies and Northeastern University collaborated on a paper published in the Proceedings of the National Academy of Sciences.

The research team, led by Dr. Celso de Melo, ARL, in collaboration with Drs. Jonathan Gratch, ICT, and Stacy Marsella, NU, conducted a study of 1,225 volunteers who participated in computerized experiments involving a social dilemma with autonomous vehicles.

“Autonomous machines that act on people’s behalf — such as robots, drones and autonomous vehicles — are quickly becoming a reality and are expected to play an increasingly important role in the battlefield of the future,” de Melo said. “People are more likely to make unselfish decisions to favor collective interest when asked to program autonomous machines ahead of time versus making the decision in real-time on a moment-to-moment basis.”

De Melo said that despite promises of increased efficiency, it is not clear whether this paradigm shift will change how people decide when their self-interest is pitted against the collective interest.

“For instance, should a recognition drone prioritize intelligence gathering that is relevant to the squad’s immediate needs or the platoon’s overall mission?” de Melo asked. “Should a search-and-rescue robot prioritize local civilians or focus on mission-critical assets?”

“Our research in PNAS starts to examine how these transformations might alter human organizations and relationships,” Gratch said. “Our expectation, based on some prior work on human-intermediaries, was that AI representatives might make people more selfish and show less concern for others.”

In the paper, results indicate the volunteers programmed their autonomous vehicles to behave more cooperatively than if they were driving themselves. According to the evidence, this happens because programming machines causes selfish short-term rewards to become less salient, leading to considerations of broader societal goals.

“We were surprised by these findings,” Gratch said. “By thinking about one’s choices in advance, people actually show more regard for cooperation and fairness. It is as if by being forced to carefully consider their decisions, people placed more weight on prosocial goals. When making decisions moment-to-moment, in contrast, they become more driven by self-interest.”

The results further show this effect occurs in an abstract version of the social dilemma, which they say indicates it generalizes beyond the domain of autonomous vehicles.

“The decision of how to program autonomous machines, in practice, is likely to be distributed across multiple stakeholders with competing interests, including government, manufacturers and controllers,” de Melo said. “In moral dilemmas, for instance, research indicates that people would prefer other people’s autonomous vehicles to maximize preservation of life (even if that meant sacrificing the driver), whereas their own vehicle to maximize preservation of the driver’s life.”

As these issues are debated, researchers say it is important to understand that in the possibly more prevalent case of social dilemmas — where individual interest is pitted against collective interest — autonomous machines have the potential to shape how the dilemmas are solved and, thus, these stakeholders have an opportunity to promote a more cooperative society.


Facing the Harsh Realities of PTSD

A look at how stepping into a virtual war zone helps war veterans face PTSD.

Watch full segment on Spectrum News 1.

VR Breakthroughs in Autism

Watch Skip Rizzo use virtual reality to help a teenager on the spectrum read social cues and practice interview skills, via Spectrum News 1.

SciTech Awards: Scanning Hollywood with Paul Debevec

Creating photo-real digital actors is one of the biggest remaining challenges in feature film visual effects. Digitally rendered faces must achieve a high level of realism to cross the “Uncanny Valley” and appear close enough to real people to be accepted by the audience. Often, the digital faces must bear an accurate resemblance not just to the real actors, but also to the characters they are portraying.

Key to so many VFX digital doubles, character performances and high-end digital faces in recent films has been the ultra-high-quality scanning done in a Light Stage. This innovation is being recognised with an Academy Technical Achievement Award (or Sci-Tech Award) presented to Paul Debevec, Tim Hawkins, Wan-Chun Ma and Xueming Yu.

Continue reading in FX Guide.

The Next Frontier of Wearable Tech: Your Shoes

Would you be tempted to spend the money for some smart shoes? Is the extra technology worth the higher price tag? Or, is this taking wearable technologies a step too far?

KPCC ‘Air Talk’ discusses this with David Krum, listen here.

The 5 Biggest Tech Challenges to Building a Commercial Avatar

Here’s a look at some of the biggest tech barriers facing avatar systems – and how researchers think we can overcome them, from ANA Travel Unlimited.

Smart Buildings May Soon Deploy AI Avatars to Improve Energy Efficiency

New research shows that the human-machine dynamic will be important to achieving the potential for smart buildings in the near future.

Continue reading the full story in In-Building Tech.

Digital Influences and the Dollars that Follow Them

Today, there are generally three types of virtual avatar: realistic high-resolution CGI avatars, stylized CGI avatars and manipulated video avatars. Each has its strengths and pitfalls, and the fast-approaching world of scaled digital influencers will likely incorporate aspects of all three.

Continue reading about digital avatars and a brief mention of Ari Shapiro’s work in TechCrunch.

A Virtual Human Teaches Negotiating Skills

Whether it’s haggling for a better price or negotiating for a higher salary, there is a skill to getting the most of what you want. Voice of America’s Elizabeth Lee has the details.

Rapid Terrain Generation

Geospecific 3D terrain representation (also known as reality modeling) is revolutionizing geovisualization, simulation, and engineering practices around the world. In tandem with the rapid growth of unmanned aerial systems (UAS) and small satellites, reality modeling advancements now allow geospatial intelligence (GEOINT) practitioners to generate three-dimensional models from a decentralized collection of digital images to meet mission needs in both urban and rural environments. Scalable mesh models deliver enhanced, real-world visualization for engineers, geospatial teams, and combatant and combat support organizations. In this way, reality modeling provides a detailed understanding of the physical environment, and its models allow installation engineers and GEOINT practitioners to quickly generate updated, high-precision 3D reality meshes that provide real-world digital context for the decision-making process.

Continue reading the full article in Trajectory Magazine.

International LiDAR Mapping Forum

International LiDAR Mapping Forum
January 28-30, 2019
Denver, CO

AI Programs Can Help in Early Detection of Mental Health Issues

GovernmentCIO explores research focused on using artificial intelligence to help detect signs of mental health issues, referencing ICT studies in the article.

Read the full story here.

IMSH 2019

International Meeting on Simulation in Healthcare (IMSH) 2019
January 26-30, 2019
San Antonio, TX

Zippy: The Virtual Research Navigator

A new collaboration with the Children’s Hospital of Los Angeles using animated video and bots to improve research literacy.

Learn more.

AR Trial Wants to Give Juvenile Offenders a ‘Second Chance’

Dr. Skip Rizzo has used virtual and augmented reality to help veterans, those with autism, and adults emerging from the criminal justice system better cope with day-to-day life. His latest AR-based project, Second Chance, turns its attention toward juvenile offenders.

Continue reading in PC Magazine.

The Invisible Warning Signs That Predict Your Future Health

Artificial intelligence is proving it can spot the warning signs of disease before we even know we are ill ourselves. Leah Kaminsky, a GP and author, believes it could lead to a new era of healthcare.

Continue reading about AI’s impact in future healthcare, referencing ICT’s PTSD virtual human assisted therapy research.

Army Researchers Explore Benefits of Immersive Technology for Soldiers

The emergence of next generation virtual and augmented reality devices like the Oculus Rift and Microsoft HoloLens has increased interest in using mixed reality to simulate training, enhance command and control, and improve the effectiveness of warfighters on the battlefield.

It is thought that putting mission relevant battlefield data, such as satellite imagery or body-worn sensor information, into an immersive environment will allow warfighters to retrieve, collaborate and make decisions more effectively than traditional methods.

However, there is currently little evidence in the scientific literature that using immersive technology provides any measurable benefits, such as increased task engagement or improved decision accuracy.

There are also limited metrics that can be used to assess these benefits across display devices and tasks.

Researchers at RDECOM’s Army Research Laboratory, the Army’s corporate research laboratory (ARL), in collaboration with the University of Minnesota and the U.S. Army’s Institute for Creative Technologies at the University of Southern California, have set out to change this.

Continue reading.

How Virtual Reality Will Transform Medicine

Anxiety disorders, addiction, acute pain and stroke rehabilitation are just a few of the areas where VR therapy is already in use. Scientific American explores these uses and how VR might help to transform the medical field.

Continue reading in Scientific American.