CES Watch: Magic Leap is Finally Leaping into Reality

Augmented reality is a hot topic as we head into CES and 2018, and one startup is shaking up the emerging AR/VR landscape before its product is even available. Its technology is neither augmented nor virtual reality, but its own hybrid, mixed reality. That startup is none other than Magic Leap.

Google and Alibaba have invested hundreds of millions of dollars in Magic Leap, and David Nelson, creative director of the MxR Lab at the USC Institute for Creative Technologies, told Rolling Stone that the technology “is moving us toward a new medium of human-computing interaction. It’s the death of reality.”

VR Can Enhance Military Training and Treat Trauma

FedTech Magazine explores VR’s potential for the military. Read the full story here.

Science & Technology Futures Initiative (SciTech Futures)

Download a PDF preview.

The Science and Technology Futures Initiative (SciTech Futures) is an ASA(ALT)-funded research project that helps Army leaders ideate in the S&T space while identifying blind spots in Army planning. Not only is the Army acquisition process expensive, but traditional ideation and war-gaming techniques, specifically in-person events, are not scalable. SciTech Futures seeks to address these issues by tapping into the wisdom of the crowd with targeted exercises focusing on topics of interest to Army S&T leadership. This broadens the participant pool and expands the number of analyzable ideas at the Army’s disposal.

To accomplish this, ICT leverages in-house game design and narrative expertise, SMEs within the US Army and USMC, and contacts across academia. ICT has the unique ability to translate and synthesize concepts between these cross-disciplinary fields. ICT’s SciTech Futures platform is web-based, allowing participants to contribute ideas from any PC or mobile device.

The SciTech Futures project consists of three main thrusts:

• Online collaborative ideation platform and exercises
• In-person workshops
• Compelling narrative storytelling

Online collaborative ideation platform
The SciTech Futures online platform can quickly be tailored to foresight topics in support of Future Army Warfighters. Recent exercise topics include “Sustainment and Logistics in the Urban Environment” and “Operationalizing Artificial Intelligence for Multi-Domain Operations,” both targeting the 2035 time horizon. Players contribute ideas based on targeted prompts, themes, and questions, collaborating in a virtual workshop setting where they collectively improve ideas.

In-person workshops
The SciTech Futures in-person workshops further develop the ideas that emerge from the online platform. These workshops bring together US Army and USMC SMEs, Hollywood writers, game designers, and academic researchers to focus on world building. Ideas are developed using both operational and narrative approaches to imagine the environments and technologies that will be used by and against the Future Warfighter. These workshops enable a richer analysis of the technologies and concepts that emerged in the online arena.

Compelling narrative storytelling
The SciTech Futures project’s narrative and creative work highlights the most relevant ideas from the in-person workshops and online platform, helping to visualize and crystallize concepts, trends, and technologies by grounding them in near-future, real-world scenarios. This narrative work, which has taken the form of short stories and graphic novels, has been published in Small Wars Journal and the TRADOC Mad Scientist Blog.


US Army-Funded Technology Wins Oscar

ADELPHI, MD. (December 17, 2018) – Creative geniuses behind digital humans and human-like characters in Hollywood blockbusters Avatar, Blade Runner 2049, Maleficent, Furious 7, The Jungle Book, Ready Player One and others have U.S. Army-funded technology to thank for their visual effects.

That technology was developed at the U.S. Army Institute for Creative Technologies at the University of Southern California. ICT is funded by the U.S. Army Research Laboratory (ARL), the Army’s corporate research laboratory and part of RDECOM.

Developers of that technology were recently announced as winners of one of nine scientific and technical achievement awards from the Academy of Motion Picture Arts and Sciences.

A Technical Achievement Award will be presented to Paul Debevec, Tim Hawkins and Wan-Chun Ma for the invention of the Polarized Spherical Gradient Illumination facial appearance capture method, and to Xueming Yu for the design and engineering of the Light Stage X capture system, during the Academy’s annual Scientific and Technical Awards Presentation at the Beverly Wilshire in Beverly Hills on February 9, 2019. The Scientific and Technical Academy Awards honor achievements with a proven record of contributing significant value to the process of making motion pictures.

According to the award citation, “Polarized Spherical Gradient Illumination was a breakthrough in facial capture technology allowing shape and reflectance capture of an actor’s face with sub-millimeter detail, enabling the faithful recreation of hero character faces. The Light Stage X structure was the foundation for all subsequent innovation and has been the keystone of the method’s evolution into a production system.” The high-resolution facial scanning process uses a custom sphere of computer-controllable LED light sources to illuminate an actor’s face with special polarized gradient lighting patterns, which allow digital cameras to digitize every detail of every facial expression at a resolution down to a tenth of a millimeter.
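For intuition, here is a minimal numpy sketch of the diffuse-only core of the gradient illumination idea, following the published method: under a spherical lighting gradient along each axis, the ratio of the gradient-lit image to the fully lit image encodes one component of the per-pixel surface normal. The function name and the Lambertian simplification are ours; the production technique additionally uses polarization to separate specular and diffuse reflections.

```python
import numpy as np

def normals_from_gradient_illumination(I_x, I_y, I_z, I_full, eps=1e-6):
    """Estimate per-pixel surface normals from images captured under
    spherical gradient illumination (diffuse/Lambertian simplification).

    I_x, I_y, I_z: images lit by linear gradient patterns along each axis.
    I_full: image under constant full-on spherical illumination.
    """
    # For a diffuse surface, I_axis / I_full = (n_axis + 1) / 2.
    ratio = lambda I: 2.0 * I / np.maximum(I_full, eps) - 1.0
    n = np.stack([ratio(I_x), ratio(I_y), ratio(I_z)], axis=-1)
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.maximum(norm, eps)  # unit normals, shape (H, W, 3)
```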

The technology has been used by the visual effects industry to help create digital human and human-like characters in a number of movies, and more than one hundred actors, including Tom Cruise, Angelina Jolie, Zoe Saldana, Hugh Jackman, Brad Pitt, and Dwayne Johnson, have been scanned at USC ICT.

Additionally, the Light Stage technology assists the military in facilitating recordings for its Sexual Harassment/Assault Response and Prevention program through a system called the Digital Survivor of Sexual Assault (DS2A). DS2A leverages research technologies previously created for the Department of Defense under the direction of the U.S. Army Research Laboratory and allows Soldiers to interact with a digital guest speaker and hear their stories. As part of the ongoing SHARP training, this technology enables new SHARP personnel, as well as selected Army leaders, to participate in conversations on SHARP topics through the lens of a survivor’s firsthand account. It is the first system of its kind to be used in an Army classroom.

All four awardees were members of USC ICT’s Graphics Laboratory during the development of the technology from 2006 through 2016.

Paul Debevec continues as an Adjunct Research Professor at USC Viterbi and at the USC ICT Vision & Graphics Lab. Wan-Chun “Alex” Ma was Paul Debevec’s first Ph.D. student at USC ICT, and Xueming Yu joined the USC ICT Graphics Lab in 2008 as a USC Viterbi Master’s student. Tim Hawkins now runs a commercial light stage scanning service in Burbank for OTOY, which licensed the light stage technology through USC Stevens in 2008.

This is the second Academy Sci-Tech award given to the Light Stage technology developed at the USC Institute for Creative Technologies. The first, given nine years ago, recognized the earliest light stage capture devices and the “image-based facial rendering system developed for character relighting in motion pictures,” and was awarded to Paul Debevec, Tim Hawkins, John Monos, and Mark Sagar.

Established in 1999, the Army’s ICT is a DoD-sponsored University Affiliated Research Center (UARC) working in collaboration with the U.S. Army Research Laboratory. UARCs are aligned with prestigious institutions conducting research at the forefront of science and innovation.

The ARL is part of the U.S. Army Research, Development and Engineering Command, or RDECOM, which has the mission to ensure decisive overmatch for unified land operations to empower the Army, the joint warfighter and our nation. RDECOM is a major subordinate command of the U.S. Army Materiel Command.

###

VR Can Benefit Mental Health in Real Life

Helping patients with post-traumatic stress disorder confront their fears is often more complex than simulating a generic high-rise or spider. One system that provides a broad menu of fear cues to patients with PTSD, created by VR therapy developer Albert “Skip” Rizzo and colleagues at the University of Southern California in Los Angeles, helps people suffering from post-traumatic stress after military duty in Iraq and Afghanistan.

Continue reading.

VR/AR Are Not Future Tech – They’re Already Reshaping the World

CIO Applications writes about industries where VR/AR technology is being implemented and how it’s helping to push boundaries, featuring some of ICT’s research.

Read the full article here.

3 Ways Tech Helps Heal the Mind and Body

Mixed-reality experiences developed at USC use immersive technology to help humans deal with conditions ranging from injuries to PTSD.

Continue reading in USC Trojan Family Magazine.

How Will the U.S. Military Use the HoloLens on the Front Line?

In the near future, U.S. soldiers could be relying on Microsoft’s mixed-reality HoloLens technology to give them the edge. Last week, Bloomberg News reported that Microsoft had won a US $480 million contract from the U.S. Army for prototypes of a system that could result in the Army ordering 100,000 headsets, potentially for use in active combat.

IEEE Spectrum’s Stephen Cass asked David Krum, associate director of the MxR Lab at the University of Southern California’s Institute for Creative Technologies (ICT), about how the technology might be used more widely. Though ICT is a U.S. Department of Defense-sponsored research center that studies immersive technologies, the Institute is not involved with the latest HoloLens initiative.

Continue reading.

Virtual and Mixed Reality at the Nexus of SciFi, Engineering, and Training

IEEE sits down with ICT’s MxR Lab to discuss virtual and mixed reality, and how these technologies assist in the fields of STEM, engineering and military training.

Watch the full video here.

Faces of Basic Research: Professor Jonathan Gratch

The Air Force Office of Scientific Research interviews Jonathan Gratch for their spotlight on basic research.

Watch here.

ARL 49 – Research Assistant, Deep Learning-Based Holistic Scene Understanding Using Heterogeneous Platforms

Project Name
Deep Learning-Based Holistic Scene Understanding Using Heterogeneous Platforms

Project Description
The goal of this project is to provide real-time situational awareness to combat arms units in complex environments by capturing a comprehensive view of a battlefield. Specifically, we plan to perform holistic scene understanding using multiple heterogeneous platforms (ground and aerial) with multimodal sensors (e.g., visible/thermal cameras, acoustics, etc.) distributed over a region of interest.

Job Description
The student’s work will involve developing AI and ML approaches for distributed learning on heterogeneous platforms with multimodal data. This includes: (a) developing lightweight ML methods (classification, detection, tracking, etc.) that can operate in resource-constrained environments; and (b) developing deep learning-based networks that learn from each other by sharing local analytics results (classification labels, object bounding boxes, simple tracking data, etc.) from neighboring nodes and combining them at individual nodes for further refinement, providing improved scene understanding.
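As a hedged illustration of step (b), each node could share only its per-class confidences and refine its own result by averaging in its neighbors’. The function names and the fusion rule below are ours, not the project’s actual method.

```python
import numpy as np

def fuse_neighbor_predictions(local_probs, neighbor_probs, self_weight=0.5):
    """Combine a node's class probabilities with shared neighbor results.

    local_probs: (num_classes,) softmax output of this node's model.
    neighbor_probs: list of (num_classes,) vectors received from neighbors.
    """
    if not neighbor_probs:
        return local_probs
    neighborhood = np.mean(neighbor_probs, axis=0)
    fused = self_weight * local_probs + (1.0 - self_weight) * neighborhood
    return fused / fused.sum()  # renormalize to a valid distribution

# Example: three nodes observing the same region of interest
node_a = np.array([0.7, 0.2, 0.1])           # local result: likely class 0
neighbors = [np.array([0.5, 0.4, 0.1]),      # analytics shared by peers
             np.array([0.6, 0.3, 0.1])]
print(fuse_neighbor_predictions(node_a, neighbors))
```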

Preferred Skills
Deep learning, machine learning, computer vision, Caffe, TensorFlow

SIGGRAPH ASIA 2018

SIGGRAPH ASIA 2018
December 4-7, 2018
Tokyo, Japan
Presentations

ARL 48 – Research Assistant, Socially Intelligent Assistant in AR

Project Name
Socially Intelligent Assistant in AR

Project Description
Augmented reality (AR) introduces new opportunities to enhance the successful completion of missions by supporting the integration of intelligent computational interfaces in the users’ field of view. This research project studies the role embodied conversational agents can play towards that goal. This type of interface has a virtual body and is able to communicate with the user both through natural language and nonverbally (e.g., emotion expression). The core research question is: Do embodied conversational interfaces improve decision-making quality and efficiency when compared to more traditional types of interfaces?

Job Description
The candidate will develop this research on an existing platform for embodied conversational agents in AR. The candidate will propose a set of key functionalities for the agent, implement them, and demonstrate that they improve decision-making performance. The proposed functionality must pertain to information that is perceived through the camera or 3D sensors available on the AR platform, and may be communicated to the user verbally and nonverbally.

Preferred Skills
– Experience with AR platforms
– Experience with Unity and C# programming
– Some experience with HCI evaluation techniques
– Some experience with scene understanding techniques and TensorFlow
– Some experience with embodied conversational agents

ARL 47 – Research Assistant, Optimization and Multi-Agent Controls

Project Name
Agent-Based Modeling and Simulation of Human-Robot Teaming

Project Description
This project aims to create user-friendly simulations of multi-UAV (drone) systems and their human operators. The simulations must be lightweight enough to analyze large numbers (20+) of humans and agents at once, and accurate enough to enable the end user to make system design decisions, such as the number of personnel and quality of robots required to complete a mission. UAV-centered Army missions are used as scenarios for the analysis, and we investigate the performance of current and futuristic technology.

Job Description
The RA will assist the lead by implementing state-of-the-art optimization algorithms and/or developing new algorithms to optimize multiple objectives. For example, an Army scenario involving UAVs may require simultaneously maximizing speed, minimizing cost, and maximizing stealth. The RA will also implement and/or develop scalable algorithms for control of multiple simulated robots. Examples include collision avoidance algorithms for UAVs, or task distribution algorithms for teams of humans and UAVs.
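For a concrete flavor of the tradeoff analysis involved, here is a minimal sketch (data and names invented) of extracting the Pareto-optimal set of candidate plans: those that no other plan beats on every objective at once.

```python
import numpy as np

def pareto_front(scores):
    """scores: (n_plans, n_objectives) array, higher is better on all axes.

    Returns indices of non-dominated plans.
    """
    n = scores.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(scores[j] >= scores[i]) and np.any(scores[j] > scores[i]):
                keep[i] = False  # plan i is dominated by plan j
                break
    return np.where(keep)[0]

# Hypothetical plans scored on: speed, -cost (negated so higher is better), stealth
plans = np.array([[0.9, -0.8, 0.2],
                  [0.5, -0.3, 0.7],
                  [0.4, -0.9, 0.1]])
print(pareto_front(plans))  # the third plan is dominated by the second
```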

Preferred Skills
– Combinatorial optimization (e.g. traveling salesman problem, vehicle routing problem)
– Multi-robot controls (e.g. collision avoidance, path planning)
– Multi-objective optimization
– Programming (Java, Python, C++, and R preferred)
– Tradeoff analysis
– Industrial or Systems Engineering
– Familiarity with agent-based modeling (e.g. NetLogo, MASON, AnyLogic, GAMA, AFSIM)

ARL 46 – Research Assistant, Machine Scene Understanding from Multimodal Data

Project Name
Machine Scene Understanding from Multimodal Data

Project Description
Current manned and unmanned perception platforms, ground or airborne, carry multimodal imagers and sensors such as electro-optical/infrared cameras, depth sensors, and LiDAR sensors, with future expectation of additional modalities. This project focuses on the development of machine learning (ML) networks for scene understanding using multimodal data, in particular using a diverse dataset consisting of high-fidelity simulated RGB color and IR images of various objects and scenes of interest.

Job Description
The work includes pursuing technical solutions and developing core algorithms by applying advanced machine learning, image processing, and data fusion techniques. Anticipated research results include new theory and algorithm developments leading to publications in scientific forums, as well as software of real-world utility for demonstrations.

Preferred Skills
– A dedicated and hardworking individual
– Experience or coursework related to machine learning, computer vision
– Strong programming skills

ARL 45 – Programmer, Multi-Agent Modeling and Simulation

Project Name
Agent-Based Modeling and Simulation of Human-Robot Teaming

Project Description
This project aims to create user-friendly simulations of multi-UAV (drone) systems and their human operators. The simulations must be lightweight enough to analyze large numbers (20+) of humans and agents at once, and accurate enough to enable the end user to make system design decisions, such as the number of personnel and quality of robots required to complete a mission. UAV-centered Army missions are used as scenarios for the analysis, and we investigate the performance of current and futuristic technology.

Job Description
The programmer will create functions and modules that can be integrated into the existing codebase. The programmer may create models of new asset types (e.g. futuristic flying vehicles) based on their physics and mechanics. Under guidance from the lead, the programmer may implement models of human operators.

Preferred Skills
– Java, Python, and C++ development
– Collaborative development (e.g. Github, Bitbucket)
– Integrating features from diverse programs to enable new analysis
– Familiarity with physics, engineering, or robotics
– Familiarity with UAVs (e.g. drone racing or design)
– Familiarity with agent-based modeling (e.g. NetLogo, MASON, AnyLogic, GAMA, AFSIM)

ARL 44 – Research Assistant, Monocular Visual Localization Assisted with Deep Learning

Project Name
Monocular Visual Localization Assisted with Deep Learning

Project Description
Robust and accurate localization is vital to any spatially-aware intelligent system or application, including autonomous driving, robot navigation, location-based situational awareness, and augmented reality. This project develops high-performance self-tracking and localization techniques with a single monocular camera that are suitable for intelligent perception on low Size, Weight and Power (SWaP) platforms.

Job Description
The work includes pursuing technical solutions and developing core algorithms by applying advanced machine learning, image processing, and object recognition techniques. Anticipated research results include new theory and algorithm developments leading to publications in scientific forums, as well as software of real-world utility for demonstrations.
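As one illustration of the kind of core algorithm involved, here is a hedged sketch (not the project’s actual method) of a classic building block for monocular self-localization: recovering relative camera pose from feature matches between two frames with OpenCV. Variable names are ours; a deployed SWaP system would add learned features, keyframing, and absolute-scale handling on top of this.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate rotation R and unit-scale translation t between two views.

    K: 3x3 camera intrinsic matrix. Monocular pose is recoverable only
    up to scale.
    """
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Robustly fit the essential matrix, then decompose it into R and t.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```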

Preferred Skills
– A dedicated and hardworking individual
– Experience or coursework related to machine learning, computer vision
– Strong programming skills

ARL 43 – Research Assistant, Human Modeling and Simulation

Project Name
Agent-Based Modeling and Simulation of Human-Robot Teaming

Project Description
This project aims to create user-friendly simulations of multi-UAV (drone) systems and their human operators. The simulations must be lightweight enough to analyze large numbers (20+) of humans and agents at once, and accurate enough to enable the end user to make system design decisions, such as the number of personnel and quality of robots required to complete a mission. UAV-centered Army missions are used as scenarios for the analysis, and we investigate the performance of current and futuristic technology.

Job Description
The RA will assist the lead by researching quantifiable aspects of human performance. The scenarios considered will have UAV operators in a stressful environment trying to complete Army-relevant missions such as search and rescue. The RA will synthesize research on human performance so that the team can mathematically model and simulate humans in this environment (e.g. ability to detect UAVs flying overhead, number of UAVs a person can simultaneously control).

Preferred Skills
– Human factors engineering
– Human-centered design
– Cognitive ergonomics
– Familiarity with UAVs (e.g. drone racing or design)
– Programming (Java, C++, and Python preferred)
– Familiarity with agent-based modeling (e.g. NetLogo, MASON, AnyLogic, GAMA, AFSIM)

ARL 42 – Research Assistant, Deep Learning Models for Human Activity Recognition Using Real and Synthetic Data

Project Name
Human Activity Recognition Using Real and Synthetic Data

Project Description
In the near future, humans and autonomous robotic agents – e.g., unmanned ground and air vehicles – will have to work together, effectively and efficiently, in vast, dynamic, and potentially dangerous environments. In these operating environments, it is critical that (a) the Warfighter is able to communicate in a natural and efficient way with these next-generation combat vehicles, and (b) the autonomous vehicle is able to understand the activities that friendly or enemy units are engaged in. Recent years have thus seen increasing interest in teaching autonomous agents to recognize human activity, including gestures. Deep learning models have been gaining popularity in this domain due to their ability to implicitly learn the hierarchical structure in the activities and to generalize beyond the training data. However, deep models require vast amounts of labeled data, which is costly, time-consuming, and error-prone to collect, and requires measures to address any potential ethical concerns. Here we look to synthetic data to overcome these limitations and address activity recognition in Army-relevant outdoor, unconstrained, and populated environments.

Job Description
The candidate will implement TensorFlow deep learning models for human activity recognition – e.g., 3D conv nets, I3D – that can be trained using real human gesture data and synthetic gesture data (generated using an existing simulator). Knowledge of domain transfer techniques (e.g., GANs) may be useful. The candidate will research and demonstrate a solution to this problem.
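For orientation, here is a minimal TensorFlow/Keras sketch of a 3D-convolutional activity classifier. The layer sizes and names are ours, not the project’s actual model; an I3D-style model would replace this stack with inflated Inception blocks.

```python
import tensorflow as tf

def build_activity_model(num_classes, clip_shape=(16, 112, 112, 3)):
    """3D CNN over short clips: (frames, height, width, channels)."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=clip_shape),
        tf.keras.layers.Conv3D(32, kernel_size=3, padding="same", activation="relu"),
        tf.keras.layers.MaxPool3D(pool_size=(1, 2, 2)),   # pool space, keep time
        tf.keras.layers.Conv3D(64, kernel_size=3, padding="same", activation="relu"),
        tf.keras.layers.MaxPool3D(pool_size=2),
        tf.keras.layers.GlobalAveragePooling3D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_activity_model(num_classes=10)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Train on real clips, synthetic clips, or a mixture to study domain transfer.
```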

Preferred Skills
– Experience with deep learning models for human activity recognition
– Experience with Python and TensorFlow
– Independent thinking and good communication skills

ARL 41 – Research Assistant, Wound Ballistics

Project Name
Medical Imaging as a Tool for Wound Ballistics

Project Description
The primary purpose of this project is to research forensic aspects of ballistic injury. The motivation for this project results from a desire to better understand the ability of medical imaging tools to provide clinically- and evidentiary-relevant information on penetrating wounds caused by ballistic impacts both pre- and post-mortem.

Job Description
The research assistant will collect and analyze data, including DICOM medical images, as well as document and present findings of the work.

Preferred Skills
– Graduate student in biomedical engineering, mechanical engineering, or related field
– Some experience working in a laboratory setting
– Some experience in the medical field
– Some experience with medical images or radiology
– Experience in software for data collection, processing and analysis

ARL 40 – Research Assistant, The Biomechanics of Ballistic-Blunt Impact Injuries

Project Name
The Biomechanics of Ballistic-Blunt Impact Injuries

Project Description
The primary purpose of this project is to research the mechanisms and injuries associated with ballistic-blunt impacts. The motivation for this project results from body armor design requirements. Body armor is primarily designed to prevent bullets from penetrating the body. However, to absorb the energy of the incoming bullet, body armor can undergo a large degree of backface deformation (BFD). Higher-energy threats, new materials and new armor designs may increase the risk of injury from these events. Even if the body armor system can stop higher-energy rounds from penetrating, the BFD may be severe enough to cause serious injury or death. Unfortunately, there is limited research on the relationship between BFD and injury, hindering new and novel armor developments. Consequently, there is a need to research these injuries and their mechanisms so that proper metrics for the evaluation of both existing and novel systems can be established.

Job Description
The research assistant will help design and execute hands-on lab research related to injury biomechanics, collect and analyze data, as well as document and present findings of the work.

Preferred Skills
– Graduate student in biomedical engineering, mechanical engineering, or related field
– Some experience working in a laboratory setting
– Some experience in the medical field
– Experience in software for data collection, processing and analysis

ARL 39 – Research Assistant, Computer Vision/Machine Learning Researcher

Project Name
Zero-Shot Learning for Semantic Scene Recognition

Project Description
This project analyzes images and videos using deep learning-based zero-shot and/or few-shot learning techniques for semantic scene recognition applications such as detection, action/activity recognition, segmentation, and captioning. We will develop novel and effective zero-shot/few-shot learning approaches to handle the many real-world scenarios where labeled data is sparse.

Job Description
You’ll be working: 1) on a problem related to zero-shot/few-shot learning on images and videos for semantic scene recognition, including detection, action/activity recognition, segmentation, and captioning; 2) independently, carrying out a literature survey of state-of-the-art approaches and devising a novel method; and 3) towards publishing a paper at the end of the internship.
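To make the zero-shot idea concrete, here is a toy sketch (vectors, class names, and the similarity rule are invented): classify an input into classes never seen in training by comparing a visual embedding against semantic class embeddings such as attribute or word vectors.

```python
import numpy as np

def zero_shot_classify(image_embedding, class_embeddings):
    """Return the name of the semantically nearest class.

    image_embedding: (d,) vector from a visual encoder mapped into the
        semantic space by a learned projection.
    class_embeddings: dict of class name -> (d,) semantic vector; classes
        may have zero training images.
    """
    names = list(class_embeddings)
    mat = np.stack([class_embeddings[n] for n in names])
    sims = mat @ image_embedding / (
        np.linalg.norm(mat, axis=1) * np.linalg.norm(image_embedding) + 1e-9)
    return names[int(np.argmax(sims))]

classes = {"truck": np.array([1.0, 0.1, 0.0]),
           "tank": np.array([0.8, 0.6, 0.1]),       # unseen during training
           "pedestrian": np.array([0.0, 0.2, 1.0])}
print(zero_shot_classify(np.array([0.7, 0.5, 0.1]), classes))  # -> "tank"
```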

Preferred Skills
– The ability to write code (Python) for computer vision/machine learning techniques
– Familiarity with deep learning frameworks (PyTorch, Caffe, etc.)
– An advanced degree (MS or PhD) in computer science or a relevant field
– Previous experience implementing deep learning algorithms to solve problems in computer vision/machine learning

ARL 38 – Research Assistant, Materials and Device Simulations

Project Name
Materials and Device Simulations for Emerging Electronics

Project Description
The project is part of an ongoing emerging materials and device research effort in the US Army Research Laboratory (ARL). One focus area is the exploration and investigation of materials and device designs, both theoretical and experimental, for low-power, high-speed, and lightweight electronic devices.

Job Description
The research assistant will work with ARL scientists to investigate fundamental material and device properties of low-dimensional nanomaterials (2D materials and functionalized diamond surfaces). For this study, various bottom-up materials and device modelling tools based on atomistic approaches such as first-principles density functional theory (DFT) and molecular dynamics (MD) will be used. In addition, numerical and analytical modeling will be used to quantify and analyze data obtained from atomistic simulation to facilitate comparison to in-house experimental findings.

Preferred Skills
– An undergraduate or graduate student in electrical engineering, materials science, physics or computational chemistry
– Sound knowledge of materials and device physics concepts
– Proficiency in at least one scripting language
– Proficiency with atomistic materials modeling concepts
– Interest in fundamental materials design and discovery

ARL 37 – Programmer, Synthetic Image Generator for Deep Learning Using Unity 3D

Project Name
Creation of Synthetic Annotated Image Training Datasets Using Computer Graphics for Deep Learning Convolutional Neural Networks

Project Description
Work as part of a team on a project to develop and apply deep learning convolutional neural networks (DLCNNs) on field-deployable hardware.
Purpose: Accelerate deep learning algorithms to recognize people, behaviors and objects relevant to military purposes, using computer graphics-generated training images for complex environments.
Product: A training image generator that creates a corpus of automatically annotated images for a closed list of people, behaviors and objects, plus optimized, fast and accurate machine learning algorithms that can be fielded in low-power, low-cost and low-weight sensors.
Payoff: An inexpensive source of military-related training data and optimal deep learning algorithm tuning for fieldable hardware, which could be used to create semi-automatically annotated datasets for further training and scale to the next generation of machine learning algorithms.

Job Description
Develop a Unity 3D-based image generator to create “pristine” and sensor-degraded synthetic data suitable for training and testing DLCNNs (e.g. in Caffe, TensorFlow, or DarkNet). Assets such as personnel, vehicles, aircraft, boats and other objects will be rendered under a variety of observation and illumination angle conditions, e.g. the full daytime cycle and varied weather (clear to total overcast, low to high visibility, dry, rain, and snow).

Preferred Skills
– Familiarity with Unity3D gaming engine
– Able to program in C#, shaders
– Self-motivated and able to work with existing code and GitHub

ARL 36 – Research Assistant, Machine Learning for Autonomous Visual Navigation

Project Name
Navigation Aiding Sources

Project Description
In this project we develop the theoretical concepts and algorithms for successful navigation of systems using terrain matching and geo-registration techniques for airborne platforms. The focus is on real-time implementation on embedded platforms and successful operation with limited training data. Object and landmark detection and tracking, self-localization, and networked and collaborative systems are relevant subtopics for exploration.

Job Description
The candidate should be well versed in machine learning and computer vision, with additional knowledge of navigation and localization strategies. The applicant will work alongside a multidisciplinary team of engineers and researchers located at ARL West and Aberdeen Proving Ground, MD. Algorithm development and subsequent implementation are required for this position.

Preferred Skills
– Machine learning
– Computer vision

ARL 35 – Research Assistant, Creative Visual Storytelling

Project Name
Creative Visual Storytelling

Project Description
This project seeks to discover how humans tell stories about images, and to develop computational models to generate these stories. “Creative” visual storytelling goes beyond listing observable objects and their visual properties, and takes into consideration several aspects that influence the narrative: the environment and presentation of imagery, the narrative goals of the telling, and the audience who is listening. This work involves aspects of computer vision to visually analyze the image, commonsense reasoning to understand what is happening, and natural language generation and theories of narratives to describe it in a cohesive and engaging manner. We will work with low-quality images and non-canonical scenes. Paper reference: http://www.aclweb.org/anthology/W18-1503

Job Description
Possible projects include:
– Develop software framework for crowdsourcing the annotation of stories written about images
– Conduct manual and/or computational analysis of the narrative styles and properties of stories written about images
– Experiment with or combine existing natural language generation and/or computer vision software for creative visual storytelling
– Work with project mentor to design evaluation criteria for assessing the quality of stories written about images

Preferred Skills
Interest in and knowledge of some combination of the following:
– Programming expertise for language generation and/or computer vision
– Digital narratives and storytelling applied to images
– Experimental design and applied statistics for rating and evaluating stories

ARL 34 – Research Assistant, Individual Response to Immersive Technology

Project Name
Individual Response to Immersive Technology

Project Description
This project examines the role of individual characteristics (stable traits and transient states) that may influence response to immersive technologies such as virtual reality and virtual environments. The project examines these effects in the context of spatial learning and navigation tasks.

Job Description
The intern will score and/or analyze existing data to uncover relationships among individual traits, states, immersive technology, and performance on the tasks. The intern may also be involved in ongoing related research activities.

Preferred Skills
– Statistical analysis
– Knowledge of Matlab or similar programs a plus
– Interest in psychology, virtual reality, and/or learning technology

ARL 33 – Programmer, Individualized Gamification Demonstration

Project Name
Individualized Gamification: Predicting Performance from a Short Questionnaire

Project Description
This project is investigating individualized gamified learning. Past work has shown that the results of an extensive personality/psychological trait questionnaire are able to predict an individual’s performance on a naturalistic training task. The goal of the summer intern project is to demonstrate that a shorter questionnaire could achieve similar predictive power.

Job Description
Job scope will vary based on intern interests and capabilities. At minimum, the intern will program a simple interface to administer a short questionnaire and display a performance prediction based on existing models. Extensions could include statistical analyses to select a subset of questions and pilot data collection to validate predictions.
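As a hedged illustration of the statistical extension (the data, item counts, and selection method below are invented, not the project’s actual questionnaire or models), one way to derive a short form is greedy feature selection over the full item battery:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

# Toy stand-in data: 200 respondents answering a 60-item questionnaire,
# where task performance actually depends on a few items plus noise.
rng = np.random.default_rng(0)
X_full = rng.normal(size=(200, 60))
y = X_full[:, [3, 17, 42]] @ np.array([0.6, -0.4, 0.5]) + 0.3 * rng.normal(size=200)

# Greedily pick 8 items whose linear combination best predicts performance.
selector = SequentialFeatureSelector(LinearRegression(), n_features_to_select=8)
selector.fit(X_full, y)
short_form = selector.get_support(indices=True)

model = LinearRegression().fit(X_full[:, short_form], y)
print("selected items:", short_form)
print("short-form R^2:", model.score(X_full[:, short_form], y))
```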

Preferred Skills
– Programming — Python or R preferred
– Statistics — Factor analysis & related techniques
– Interest in psychology, cognitive neuroscience, and/or gamified learning

344 – Programmer, Immersive Virtual Humans for AR/VR

Project Name
Immersive Virtual Humans for AR/VR

Project Description
The Vision and Graphics Lab at ICT pursues research and works in production to perform high-quality facial scans for Army training and simulations, as well as for VFX studios and game development companies. One of the lab’s most recent focuses is research into how machine learning can aid the creation of such datasets from single images. This requires large amounts of data, more than can be obtained from raw light stage scans alone. We are currently working on software to aid both in visualization during the production pipeline and in producing images as training data for learning algorithms. The goal is to use diffuse albedo maps to learn displacement maps: after training, we can synthesize a high-quality displacement map given a flat-lighting texture map.
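A hedged sketch of the stated learning problem (the architecture and sizes are ours, not the lab’s actual model): train an image-to-image network that maps a flat-lit diffuse albedo texture to a displacement map.

```python
import tensorflow as tf

def build_albedo_to_displacement(size=256):
    """Tiny encoder-decoder: RGB albedo in, scalar displacement map out."""
    inp = tf.keras.layers.Input(shape=(size, size, 3))
    x = tf.keras.layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(inp)
    x = tf.keras.layers.Conv2D(128, 4, strides=2, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)
    out = tf.keras.layers.Conv2DTranspose(1, 4, strides=2, padding="same")(x)
    return tf.keras.Model(inp, out)

model = build_albedo_to_displacement()
model.compile(optimizer="adam", loss="mae")  # L1 loss against scanned ground truth
# model.fit(albedo_maps, displacement_maps, ...) using light stage training pairs
```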

Job Description
The intern will assist the lab in developing an end-to-end approach for 3D modeling and rendering using deep neural network-based synthesis and inference techniques. The intern should understand computer vision techniques and have some experience with deep learning algorithms, as well as knowledge of rendering, modeling, and image processing. Work may also include researching hybrid tracking of high-resolution dynamic facial details and high-quality body performance for virtual humans.

Preferred Skills
– C++, engineering math, physics, and programming; OpenGL / Direct3D, GLSL / HLSL, Unity3D
– Python, GPU programming, Maya, Octane render, svn/git, strong math skills
– Knowledge in modern rendering pipelines, image processing, rigging

343 – Body Tracking for AR/VR

Project Name
Body Tracking for AR/VR

Project Description
The lab is developing a lightweight 3D human performance capture method that uses very few sensors to obtain a highly detailed, complete, watertight, and textured model of a subject (a clothed human with props) that can be rendered properly from any angle in an immersive setting. Our recordings are performed in unconstrained environments and the system should be easily deployable. While we assume well-calibrated high-resolution cameras (e.g., GoPros), synchronized video streams (e.g., Raspberry Pi-based controls), and a well-lit environment, any existing passive multi-view stereo approach based on sparse cameras would significantly underperform dense ones due to challenging scene textures, lighting conditions, and backgrounds. Moreover, much less coverage of the body is possible when using small numbers of cameras.

Job Description
We propose a machine learning approach that addresses this challenge by posing 3D surface capture of human performances as an inference problem rather than a classic multi-view stereo task. The intern will work with researchers to demonstrate that models trained on massive amounts of 3D data can infer visually compelling and realistic geometry and texture in unseen regions. Our goal is to capture clothed subjects (uniformed soldiers, civilians, props and equipment, etc.), which entails an immense amount of appearance variation as well as highly intricate garment folds.

Preferred Skills
– C++, OpenGL, GPU programming, Operating System: Windows and Ubuntu, strong math skills
– Experience with computer vision techniques: multi-camera stereo, optical flow, facial feature detection, bilinear morphable models, texture synthesis, Markov random fields

342 – Programmer, Real-Time Rendering of Virtual Humans

Project Name
Real-Time Rendering of Virtual Humans

Project Description
The Vision and Graphics Lab at ICT pursues research and works in production to perform high-quality facial scans for Army training and simulations, as well as for VFX studios and game development companies. One of the lab’s most recent focuses is research into how machine learning can aid the creation of such datasets from single images. This requires large amounts of data, more than can be obtained from raw light stage scans alone. We are currently working on software both to visualize our new facial scan database and to animate and render virtual humans. The goal is a feature-rich, real-time renderer that produces highly realistic renderings of humans scanned in the light stage.

Job Description
The intern will work with lab researchers to develop features in the rendering pipeline. This will include research and development of the latest techniques in physically based real-time character rendering and animation. Ideally, the intern will be familiar with physically based rendering, subsurface scattering techniques, hair rendering, and 3D modeling and reconstruction.

Preferred Skills
– C++, engineering math, physics, and programming; OpenGL / Direct3D, GLSL / HLSL, Unity3D
– Python, GPU programming, Maya, version control (svn/git), strong math skills
– Knowledge in modern rendering pipelines, image processing, rigging, blendshape modelling

341 – Research Assistant, Trust, Bonding and Rapport Between Humans and Autonomy

Project Name
Trust, Bonding and Rapport Between Humans and Autonomy

Project Description
The project will explore verbal and nonverbal techniques for fostering trust, bonding and rapport between a human user and autonomous systems (robots and virtual humans). The intern will work with AI dialog and multi-modal sensing methods to examine ways to sense and respond to human verbal and nonverbal information to build rapport and trust in a laboratory experimental task.

Job Description
Duties include programming and working with existing AI methods. Some understanding of HCI/HRI, experimental methods and data analysis will be useful.

Preferred Skills
– Extensive programming experience
– Knowledge of signal processing and machine learning methods
– Evidence of research potential (e.g., publications)

340 – Research Assistant, The Organizational Impact of Autonomy

Project Name
The Organizational Impact of Autonomy

Project Description
Advances in AI make it possible for intelligent agents to act on our behalf in interactions with other people (negotiating the price of products or even managing subordinates in an organization). The goal of this research is to examine the psychological and organizational consequences of this technology. The intern will be involved in adapting existing AI technology and will engage in the experimental design and execution of an online (MTurk) study on how people use this technology.

Job Description
Duties include some programming, experimental design, execution, data analysis and writing (with aim to publish the research).

Preferred Skills
– Programming experience (Java, JavaScript)
– Knowledge of experimental design and statistical analysis (SPSS)
– Evidence of research potential (e.g., publications)

339 – Research Assistant, Impact of AI on Users’ Psychology

Project Name
Impact of AI on Users’ Psychology

Project Description
Irresistible pressures are driving the adoption of AI. What impact will this have on us? Will using AI or operating through an autonomous robot, for example, undermine trust, increase risk-taking, reduce vigilance to threats and increase dehumanization of others? This project will examine the psychological impact of such advances.

Job Description
The research assistant will help finalize the development of agents that vary in level of autonomy (e.g., full AI vs. assisted), help run an experiment testing the impact of autonomy on users’ psychological factors, and analyze the results of this study.

Preferred Skills
– Experimental design
– Running user studies
– Computer-human or computer-mediated interaction

338 – Programmer, Virtual Reality Game-based Rehabilitation

Project Name
Virtual Reality Game-based Rehabilitation

Project Description
The Medical Virtual Reality (MedVR) group at ICT is devoted to the study and advancement of uses of virtual reality (VR) simulation technology for clinical purposes. MedVR Lab’s Game-based Rehab Group develops low-cost and home-based VR toolkits for physical therapy. We use gaming technology to help patients rehabilitate.

Job Description
The intern will work on our Mystic Isle project, which allows patients to perform rehabilitation exercises by playing a motion game that tracks their movements using a Kinect sensor. The intern will interface with therapists and engineers to convert Mystic Isle into a virtual reality application and help support user-centered trials at the Keck School of Medicine.

Preferred Skills
– Experience working with Unity3D or other game frameworks
– Proficiency in C/C++/C# and Microsoft Visual Studio
– Ability to work independently and efficiently under deadlines
– Strong communication and teamwork skills
– Experience working on a VR game
– Familiarity with 3ds Max and/or Maya

337 – Research Assistant, Emotion Evoking Game

Project Name
Emotion Evoking Game

Project Description
Taking the player through an emotional journey is an effective way to engage players and create a lasting game-play experience. Techniques such as the use of sound and visual effects have been explored in movies and games. In this project, we will focus on the design of game events based on appraisal theories of emotion, and research how the characteristics and sequencing of game events can induce emotions through interactive game-play.

Job Description
The research assistant will build upon and extend an existing game, EVG, to design a role-playing game to induce emotions.

Preferred Skills
– Game development experience
– Passion for game design
– Unity, C/C++

336 – Research Assistant, Persuasive Games

Project Name
Persuasive Games

Project Description
Cognitive dissonance is a state of mental discomfort that arises from conflicting attitudes or beliefs within an individual. Such dissonance motivates individuals to restore internal consistency by changing their attitudes and behavior. Traditionally, dissonance-based interventions are carried out in person and are both time- and resource-intensive, limiting access to this effective attitudinal and behavioral intervention. In this project, we will design a role-playing game, called DELTA-X, to create an immersive virtual environment for inducing cognitive dissonance.

Job Description
The research assistant will develop a role-playing game, based on the existing game DELTA, to induce attitudinal and behavioral change.

Preferred Skills
– Unity, C/C++
– Experience in game development
– Passion for game design

335 – Research Assistant, Explainable AI for Agent-based Simulation

Project Name
Explainable AI for Agent-based Simulation

Project Description
In team-based Synthetic Training Environments (STEs) populated with AI-driven entities, the reasoning process behind these AI entities offers great opportunities for teams to understand what happened during training, why it happened, and how to improve. Unfortunately, while there are existing agents that can generate realistic actions in simulated exercises, they typically cannot describe events from their perspective or explain the reasoning behind their behaviors. This is often because the algorithms underlying AI-driven entities are not readily explainable, making the resulting behavior hard to understand even for AI experts. This project addresses this challenge by incorporating explainable artificial intelligence (XAI) to support explainable agent behaviors. Although specific methods vary depending on the targeted AI algorithms, the XAI interface creates an interpretable model of the underlying algorithms. Components of the interpretable models can then be used to create explanations of the decision-making of the AI entities.

Job Description
The research assistant will work with existing agent frameworks and machine learning algorithms to develop explainable models for the AI algorithms.
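One common XAI pattern that could serve here, sketched under invented stand-in data (the black-box model and features below are not the project’s actual agents): fit an interpretable surrogate to mimic an opaque policy’s decisions, then read explanations off the surrogate.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.datasets import make_classification

# Stand-in for an opaque AI entity and its observations.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's *decisions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print("fidelity to black box:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate))  # human-readable rules as the explanation
```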

Preferred Skills
– Good knowledge of AI algorithms
– Python, C/C++
– Good knowledge of math

334 – Research Assistant, Extending Dialogue Interaction

Project Name
Extending Dialogue Interaction

Project Description
The project will involve investigating techniques that go beyond the current state of the art in human-computer dialogue, which mainly focuses on either a system chatting with a single person or assisting a person with accomplishing a single goal. The project will involve investigation of one or more of the following topics: consideration of multiple goals in dialogue, multi-party dialogue (with more than two participants), multi-lingual dialogue, multi-platform dialogue (e.g. VR and phone), automated evaluation of dialogue systems, or extended and repeated interaction with a dialogue system.

Job Description
The student intern will work with the Natural Language Research Group (including professors, other professional researchers, and students) to advance one or more of the research areas described above. If the student has a particular goal or related work at their home institution, they should briefly describe this in the application letter. Specific activities will depend on the project and the skills and interests of the intern, but will include one or more of the following: programming new dialogue or evaluation policies, annotation of dialogue corpora, and testing with human subjects.

Preferred Skills
– Some familiarity with dialogue systems or natural language dialogue
– Either programming ability or experience with statistical methods and data analysis
– Ability to work independently as well as in a collaborative environment

333 – Research Assistant, Conversations with Heroes and History

Project Name
Conversations with Heroes and History

Project Description
ICT’s time-offset interaction technology allows people to have natural conversations with video recordings of people who have had extraordinary experiences, and to learn about events and attitudes in a manner similar to direct interaction with the person. Subjects will be determined at the time of the internship. Previous subjects have included Holocaust survivors, sexual assault survivors, and Army heroes.

Job Description
The intern will assist with developing, improving and analyzing the systems. Tasks may include running user tests, analyzing content and interaction results, and making improvements to the systems. The precise tasks will be determined based on the skills and interests of the selected applicant, as well as the demands of the project during the time of the internship.

Preferred Skills
– Very good spoken and written English (native or near-native competence preferred)
– General computer operating skills (some programming experience desirable)
– Experience in one or more of the following:

1. Interactive story authoring & design
2. Linguistics, language processing
3. A related field, e.g. museum-based informal education

332 – Research Assistant, Virtual Human Dialogue: Game + Social Chat Activities Experiment

Project Name
Virtual Human Dialogue: Game + Social Chat Activities Experiment

Project Description
This project will involve the design of an experiment seeking to demonstrate that social chat can be used to help an automated agent personalize to its user while playing a word-guessing game. It will interest interns who want hands-on experience with interactive virtual human agents and with designing and conducting experiments to evaluate such agents, students interested in using artificial intelligence for gaming purposes, and students interested in using psychology theories to motivate agent design decisions. The student will work one-on-one with a PhD student on this project and will also be advised by a professor.

Job Description
The intern will be exposed to and involved in many stages of an experiment intended to evaluate an interactive agent that plays a word-guessing game and participates in social chat. These stages include data analysis, experiment design, agent implementation, and possibly running participants through the experiment. The intern might also have the opportunity to contribute to a publication (after the internship), depending on the results of the experiment. The intern will conduct data analysis from completed experiments that evaluated social chat design decisions; this analysis will include statistical investigations as well as annotation. The findings will help the intern contribute to finalizing design decisions for a new experiment investigating the social chat activity’s ability to help an agent personalize to its user. Depending on skills, the intern will also have the opportunity to implement changes to an agent using ICT’s Virtual Human Toolkit. These changes should help the agent perform the social chat activity and personalize the agent’s word-guessing game to a user. The intern may also help run initial participants through this experiment.

Preferred Skills
– Python, Java, SQL
– Statistics, Machine Learning
– Annotation

331 – Research Assistant, Human-Robot Dialogue

Project Name
Human-Robot Dialogue

Project Description
ICT has several projects involving applying natural language dialogue technology to physical and simulated robot platforms. Tasks of interest include remote exploration, joint decision-making, social interaction, games, and language learning. Robot platforms include humanoid robots (e.g. NAO) and non-humanoid flying or ground-based robots.

Job Description
This internship involves participating in the development and evaluation of dialogue systems that allow physical robots to interact with people using natural language conversation. The student intern will be involved in one or more of the following activities: 1. Porting language technology to a robot platform, 2. Design of task for human-robot collaborative activities, 3. Programming of robot for such activities, 4. Use of a robot in experimental activities with human subjects, 5. Analysis of experimental human-robot dialogue data.

Preferred Skills
– Experience with one or more of:
– Using and programming robots
– Dialogue systems, computational linguistics
– Multimodal signal processing, machine learning

330 – Research Assistant, The Sigma Cognitive Architecture

Project Name
The Sigma Cognitive Architecture

Project Description
This project is developing a cognitive architecture – i.e., a computational hypothesis about the fixed structures underlying a mind – called Sigma that is based on an extension of the elegant but powerful formalism of graphical models, enabling the combination of statistical/neural and symbolic aspects. Sigma is built in Lisp, but its core algorithms are in the process of being ported to C. We are looking for someone interested in working with Sigma in one of a number of possible areas, including abduction, attention, episodic memory and neural reinforcement learning.

Job Description
We are looking for a student interested in developing, applying, analyzing and/or evaluating new intelligent capabilities in an architectural framework.

Preferred Skills
– Programming (Lisp preferred, but it can be learned after arrival)
– Graphical models (experience preferred, but ability to learn quickly is essential)
– Cognitive architectures (experience preferred, but interest is essential)

329 – Programmer, Advancing Middle School Teachers’ Understanding of Proportional Reasoning for Teaching

Project Name
Advancing Middle School Teachers’ Understanding of Proportional Reasoning for Teaching

Project Description
The Institute of Education Sciences (IES)-supported Advancing Middle School Teachers’ Understanding of Proportional Reasoning for Teaching project is building a virtual agent facilitator to help teachers with their professional development of mathematics skills. This intelligent tutoring system will help teachers learn new strategies and skills for teaching proportions to students. ICT will lead technical development, usability analysis, and iterative revision of the intervention software and interaction policies for module content. The role will include both the development of the web application used by teachers and data pipelines that output usage and performance patterns, which are integrated into analyses to evaluate efficacy and feasibility. ICT is working with USC’s Rossier School of Education on this project.

Job Description
The goal of the internship will be to contribute to a web application used by teachers, as well as data pipelines to output usage and performance patterns that are integrated into analyses to evaluate efficacy and feasibility. Specific tasks will involve improving a dialog-based tutor, building new user interfaces for a MERN-stack web application, and contributing to data analytics/machine learning which will help identify usage patterns by teachers using the system.

Preferred Skills
– JavaScript/Node.js, React, Python
– Basic AI Programming or Statistics
– Experience with data collection and recording video and audio files

328 – Programmer, Personalized Assistant for Life-Long Learning (PAL3) – AI

Project Name
Personalized Assistant for Life-Long Learning (PAL3) – AI

Project Description
PAL3 is a system for delivering engaging and accessible education via mobile devices. It is designed to provide on-the-job training and support lifelong learning and ongoing assessment. The system features a library of curated training resources containing custom content and pre-existing tutoring systems, tutorial videos and web pages. PAL3 helps learners navigate learning resources through: 1) An embodied pedagogical agent that acts as a guide; 2) A persistent learning record to track what students have done, their level of mastery, and what they need to achieve; 3) A library of educational resources that can include customized intelligent tutoring systems as well as traditional educational materials such as webpages and videos; 4) A recommendation system that suggests library resources for a student based on their learning record; and 5) Game-like mechanisms that create engagement (such as leader-boards and new capabilities that can be unlocked through persistent usage).
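As a purely illustrative sketch of component (4), the recommendation step could rank resources by the learner’s mastery gaps. The data structures and scoring rule below are invented for illustration, not PAL3’s actual design.

```python
# Hypothetical learning record: topic -> mastery estimate in [0, 1].
learning_record = {"navigation": 0.9, "first_aid": 0.4, "radio_ops": 0.6}

resources = [
    {"title": "First Aid Refresher Video", "topic": "first_aid"},
    {"title": "Radio Procedures Tutor", "topic": "radio_ops"},
    {"title": "Advanced Navigation Quiz", "topic": "navigation"},
]

def recommend(record, items, k=2):
    """Rank resources by the learner's mastery gap on each item's topic."""
    gap = lambda item: 1.0 - record.get(item["topic"], 0.0)
    return sorted(items, key=gap, reverse=True)[:k]

for item in recommend(learning_record, resources):
    print(item["title"])  # weakest topics are recommended first
```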

Job Description
The goal of the internship will be to expand the repertoire of the system to further enhance learning and engagement. The specific tasks will be determined based on the status of the project at the time of the internship as well as your interests. Possible topics include: (1) models driving the dialog systems for PAL3 to support goal-setting, teamwork, or fun/rapport-building; (2) modifying the intelligent tutoring system and how it supports the learner; and (3) statistical analysis and/or data mining to identify patterns of interactions between human subjects and the intelligent tutoring system. Opportunities will be available to contribute to peer-reviewed publications.

Preferred Skills
– C#, JavaScript/Node.js, Python, R
– Dialog Systems, Basic AI Programming, or Statistics
– Strong interest in intelligent agents, human and virtual behavior, and social cognition

327 – Programmer, SMART-E: Service for Measurement and Adaptation to Real-Time Engagement

Project Name
SMART-E: Service for Measurement and Adaptation to Real-Time Engagement

Project Description
The vision behind this work is a toolkit that generalizes metrics and interventions to constantly monitor and optimize engagement and learning in virtual environments. The toolkit will continuously measure the experiences of learners as they interact with virtual learning environments, such as intelligent tutoring systems (ITS). This toolkit for assessing engagement will systematically analyze engagement data to provide insights that improve a target training system along multiple dimensions of engagement, ranging from short-term cognitive improvement to long-term identity formation as a professional in the training topic. Engagement is critical for Army training because lack of engagement results in lower learning, engagement predicts persistence and dropout, and engagement is actionable and can be induced through interventions.

Job Description
The goal of the internship will be to expand the repertoire of the system to further enhance learning and engagement. The specific tasks will be determined based on the status of the project at the time of the internship as well as your interests. Possible topics include work on: (1) machine learning for intelligent tutoring systems and how they support the learner; (2) models driving virtual human utterances and behaviors; and (3) emotion coding, statistical analysis, and/or data mining to identify patterns of interactions between human subjects and the intelligent tutoring system. Opportunities will be available to contribute to peer-reviewed publications.

Preferred Skills
– Python, JavaScript, C#
– Basic AI Programming or Statistics
– Strong interest in human and virtual behavior and cognition

326 – Programmer, Integrated Virtual Humans

Project Name
Integrated Virtual Humans

Project Description
The Integrated Virtual Humans (IVH) project seeks to create a wide range of virtual human systems by combining various research efforts within USC and ICT into a general Virtual Human Architecture. These virtual humans range from relatively simple, statistics-based question/answer characters to advanced cognitive agents that are able to reason about themselves and the world they inhabit. Our virtual humans can engage with real humans and each other, both verbally and non-verbally; i.e., they are able to hear you, see you, use body language, talk to you, and think about whether or not they like you. The Virtual Humans research at ICT is widely considered among the most advanced in its field and brings together a variety of research areas, including natural language processing, nonverbal behavior, visual perception and understanding, task modeling, emotion modeling, information retrieval, knowledge representation, and speech recognition.
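
A generic sketch of the kind of message-passing that ties such modules together is shown below. The Virtual Human Toolkit uses its own distributed messaging layer, so this publish/subscribe stand-in, with its made-up topic names, is purely illustrative.

```python
# Illustrative publish/subscribe wiring between virtual human modules.
from collections import defaultdict

class Bus:
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subs[topic].append(handler)

    def publish(self, topic, msg):
        for handler in self.subs[topic]:
            handler(msg)

bus = Bus()

# NLP module: turns recognized speech into a dialog act.
bus.subscribe("speech.recognized",
              lambda text: bus.publish("dialog.act",
                                       {"intent": "greet", "text": text}))

# Behavior module: turns dialog acts into verbal/nonverbal behavior.
bus.subscribe("dialog.act",
              lambda act: print(f"agent: wave + respond to '{act['intent']}'"))

bus.publish("speech.recognized", "hello there")
```

Loose coupling of this kind is what lets components like speech recognition, language processing, and behavior generation be developed, tested, and swapped independently.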

Job Description
IVH seeks an enthusiastic, self-motivated programmer to help further advance and iterate on the Virtual Human Toolkit. Additionally, the intern selected will research and develop potential tools to be used in the creation of virtual humans. Working within IVH requires a solid understanding of general software engineering principles and distributed architectures. The work touches on a variety of Computer Science areas, including Artificial Intelligence and Human-Computer Interaction. Given the scope of the Virtual Humans Architecture, the ability to quickly learn how to use existing components and develop new ones is essential.

Preferred Skills

  • Fluent in C++, C#, or Java
  • Fluent in one or more scripting languages, such as Python, Tcl, Lua, or PHP
  • Experience with Unity
  • Excellent general computer skills
  • Background in Artificial Intelligence is a plus

I/ITSEC 2018

I/ITSEC 2018
November 26-30, 2018
Orlando, FL
Presentations

Pinscreen Releases Deep Learning Model

Shiropen features new research from Pinscreen, the University of Southern California, and the USC Institute for Creative Technologies showcasing “paGAN” (photoreal avatar GAN), a deep learning model that can create a 3D avatar on a mobile device from a single input face image and drive it with the user’s own expressions.

The full paper outlining new research can be viewed here.

In Focus: Using Virtual Reality to Face the Realities of PTSD

Spectrum Local News talks with ICT’s Skip Rizzo about Bravemind.

Check out the full segment here.

Researchers Determine That Reading Stories Increases Empathy

Research carried out by a team of scientists at the University of Southern California (USC) has revealed that reading stories is a universal experience that may promote empathy in people regardless of cultural differences and origins. The research, published in Human Brain Mapping, identified the brain activity patterns of people who fully understand stories in their native language.

As part of the study, the scientists examined more than 20 million blog posts featuring personal stories using software created by experts from the USC Institute for Creative Technologies. Forty blog entries were subsequently chosen and translated into Mandarin Chinese and Farsi. The blog entries, which featured stories about topics such as divorce and lying, were then read by 90 American, Chinese, and Iranian respondents in their native languages. The researchers scanned the respondents’ brains while they read, and asked the participants a few questions afterward.

Continue reading in Natural News.

Army Accelerating Synthetic Training Environment Programs

The synthetic training environment, or STE, is a next-generation paradigm for enhancing readiness. The Army plans to use a combination of gaming, cloud computing, artificial intelligence, virtual and augmented reality and other technologies to better enable soldiers to improve their skills, said Maj. Gen. Maria Gervais, deputy commanding general of the Combined Arms Center-Training.

Continue reading in National Defense Magazine.

What’s Creative?: “Hi, I’m Ellie.”

A look at Ellie and the use of virtual humans in helping to treat symptoms of Post-Traumatic Stress, via What’s Creative?

Alexa Could Become Your Therapist As Experts Make Smart Speakers Judge Emotions

The Australian Review covers Judith Shulevitz’s piece about AI capabilities in connected devices.

Continue reading for more insight from ICT’s Jonathan Gratch.

Modeling & Simulation

The Modeling and Simulation Group creates immersive and informative experiences that help Warfighters and supporting elements improve performance.

With advanced prototypes such as Captivating Virtual Instruction for Training (CVIT) or DisasterSim, learners can hone life-saving skills wherever they are. With One World Terrain capabilities, military decision-makers can experience a seamless, realistic geospatial foundation when executing their training. Researchers are advancing these capabilities to support an authoritative representation of the planet that will be usable in next-generation military training systems.

The M&S Group employs researchers, domain experts, creative writers, and professional game designers in order to develop content that is as engaging as it is instructive. It has successfully transitioned a number of its advanced prototypes to the DoD, most recently with its rapid 3D terrain capture and reconstruction pipeline, which is a pillar of the Marine Corps’ Tactical Decision Kit (TDK).

Links:

https://www.voanews.com/a/4724820.html 

https://www.armytimes.com/news/your-army/2018/10/11/new-virtual-marksmanship-and-squad-immersive-trainers-are-headed-to-dozens-of-army-locations-next-year/

https://www.army.mil/article/212967/army_to_release_new_squad_advanced_marksmanship_trainer

https://gpsworld.com/u-s-army-invests-in-virtual-reality-training


2018 Color and Imaging (CIC) Conference

2018 Color and Imaging (CIC) Conference
November 12-16, 2018
Vancouver, Canada
Presentations

Healing the Invisible Wounds of War with Virtual Reality

The RAND Corporation’s Invisible Wounds of War study estimates that as many as one in five service members who have seen battle experience PTSD, which, if left untreated, can rip apart lives with nightmares, flashbacks, insomnia, anger, guilt, and feelings of isolation.

Since 9/11, nearly three million service members have deployed to war zones in Iraq and Afghanistan—about half of them more than once.

Now, an innovative, evidence-based approach to treating PTSD is reaching more veterans than ever before. Called “virtual reality exposure therapy,” it heals by transporting the veteran back to the traumatic war event, into a computer-generated, parallel universe created in a Southern California lab.

Continue reading on the RAND Corporation website.

3.5 – VR: What’s Possible in Reality?

Dell Technologies talks with Skip Rizzo and other industry insiders about VR and capabilities for the future.

Listen to the full podcast here.

SEMDIAL (AixDial) 2018

SEMDIAL (AixDial) 2018
November 6-11, 2018
Marseille, France
Presentations

IVA 2018

IVA 2018
November 5-8, 2018
Sydney, Australia
Presentations

Seeing is Believing: The State of Virtual Reality

The Verge writes an in-depth piece about the history of virtual reality.

Continue reading.

Happy with a 20% Chance of Sadness

Researchers are developing wristbands and apps to predict moods—but the technology has pitfalls as well as promise. ICT’s Jonathan Gratch talks with Scientific American about these technologies and what it means for our future.

The Future of Mental Diagnosis and Treatment – Artificial Intelligence

The stuff we see in science fiction movies about artificial intelligence is slowly becoming a self-fulfilling prophecy. The fiction of yesterday is fast becoming the fact of today. Medium discusses how the technology is being adopted and what artificial intelligence can currently do in medicine.

AAMC 2018

AAMC 2018
November 2-6, 2018
Austin, TX
Presentations

Virtual Reality Therapy Has Real-Life Benefits for Some Mental Disorders

Skip Rizzo talks with Science News about Virtual Reality Exposure Therapy.

Continue reading.

Tech Papers: The Secret to SIGGRAPH Asia Success

Hao Li talks with VFX Blog about what it takes to write a winning paper at SIGGRAPH Asia.

Continue reading.

Magic Leap Conference Videos Dive Deeper into Medical, Audio, & AR Cloud Uses for Magic Leap One

Just a day after releasing an initial set of three panel videos from its free event for registered attendees, Magic Leap has posted five additional videos that give developers, creators, and end users a deeper understanding of what the company is moving toward as a platform.

Perhaps the most fascinating video is the session devoted to using the Magic Leap One for medical purposes. One presentation, delivered by Skip Rizzo, the director of the Medical Virtual Reality lab at University of Southern California Institute for Creative Technologies (USC ICT), and Arno Hartholt, the director of R&D integration at USC ICT and CTO at HIA Technologies, exposed the audience to a Magic Leap app called VITA (virtual interactive training agent).

Continue reading.

EMNLP 2018

EMNLP 2018
October 31-November 4, 2018
Brussels, Belgium
Presentations

AI Project Aims to Improve Negotiation Skills

USC’s Daily Trojan talks with Emmanuel Johnson about new research aimed at improving negotiation skills.

Continue reading.

Building the World of ‘Blade Runner 2049’

Digital Media World explores the production of Blade Runner 2049.

Continue reading.

What Are the Advantages and Issues of Artificial Intelligence in Terms of Study

AppRobust explores the pros and cons of artificial intelligence, featuring research from ICT.

Continue reading.

Army to Release New Squad Advanced Marksmanship Trainer

Over the next year, 26 installations are scheduled to receive the new Squad Advanced Marksmanship Trainer — with the first potential location slated for Fort Drum, New York, officials said.

The Army has been working on a squad-immersive environment since 2009, but limitations on virtual reality and other related technologies have hindered the development process, according to Maj. Gen. Maria Gervais, director of the Synthetic Training Environment Cross-Functional Team.

Continue reading on the U.S. Army website.

VR Days

VR Days
October 24-26, 2018
Amsterdam
Presentations

How Virtual Reality Is Transforming Autism Studies

Proponents of VR argue that no other medium comes as close to putting you in someone else’s shoes. “Having a perceptual experience — that’s something we haven’t been able to do without VR,” says Albert “Skip” Rizzo, research professor at the University of Southern California in Los Angeles and a pioneer of using VR in psychiatry. “You can watch a movie, but it’s different than walking around and having your perceptual experience,” Rizzo says.

Continue reading in Spectrum News.

AI’s Potential to Diagnose and Treat Mental Illness

Harvard Business Review explores AI solutions that help in diagnosing and treating mental illness, mentioning ICT’s Ellie as a potential aid in the process.

Continue reading.

Amazing AI Advances in Education: Benefits and Controversies

AI can help digitize textbooks and create customizable “smart” content for students of all age ranges, helping them with memorizing and learning. Virtual characters and augmented reality can be powered by AI to create believable social interactions such as those experimented with by the University of Southern California (USC) Institute for Creative Technologies. These virtual environments can be used to assist students in their endeavors and learning process, or as substitutes for tutors, lecturers and teaching assistants. No one can ever work all day and night and provide students with 24/7 responses… unless he or she is a robot, of course!

Continue reading in Techopedia.

After Decades of Silence, Anne Frank’s Step-Sister Speaks Up

The author of three books, Schloss travels often to tell her story. She was interviewed by the Steven Spielberg-founded Shoah Foundation project and participated in a hologram project sponsored by the Institute for Creative Technologies at the University of Southern California. The holograms answer questions posed to them.

Outfitting Avatars To Cross The Uncanny Valley

Science Friday interviews Hao Li about the big challenges for creating photorealistic avatars, and how face-swapping technology threatens our perception of what’s real in the news.

Watch the full segment.

AAAI 2018 Fall Symposium on a Common Model of the Mind

AAAI 2018 Fall Symposium on a Common Model of the Mind
October 18-20, 2018
Arlington, VA
Presentations

The New Reality of Mixed Reality

Tech Well Insights explores new mixed reality experiences, including VITA for Magic Leap One.

Read the full article here.

Hate Negotiating? There’s a Virtual Human for That.

By Sara Preto

Fall ’18

Emmanuel Johnson, a Fulbright Scholar and computer science Ph.D. student working with USC Viterbi’s Jonathan Gratch, is collaborating with researchers from the USC Institute for Creative Technologies (ICT) and others, sponsored by the National Science Foundation, to examine how “virtual humans” might aid real humans in the subtle art of negotiation. The goal is to provide a personalized, low-cost approach to training.

“This research is personal to me,” said Johnson, explaining that his native Liberia still suffers the effects of past negotiations gone wrong. For instance, a 1926 deal with the Firestone Tire & Rubber Company gave Firestone one million acres of Liberia’s rich tropical forest for 99 years, at an annual rate of six cents per acre.

“If Liberia had had better negotiators who could see how unfair this deal was for the country, things might have turned out differently,” Johnson said. “The work we’re doing provides a system to help countries like Liberia. People learn how to negotiate so that others can’t take unfair advantage of them.”

Negotiations happen in almost every social and organizational setting, from reducing monthly bills and haggling over the price of a car, to discussing the terms of a job offer. Still, few people actually like to bargain, and the self-study guides, courses and training programs designed to improve one’s skills are expensive.

“One of the common misconceptions people have is that a negotiation must be a zero-sum game in which one party loses at the expense of another winning,” Johnson said. “Often that’s not the case. If negotiators take the time to learn what their opponent wants, they might be able to see that their interests are different and that an agreement is possible.”

The team has been collaborating with USC Marshall’s Peter Kim to better understand how people approach negotiation, the best way to present material to students and how to provide feedback during the process.

When might virtual humans make their public debut?

“I don’t think we’re far,” Johnson said. “However, there is work to be done in helping machines better understand and reason with humans, something we are actively addressing in our work.”

IEEE ISMAR 2018

IEEE ISMAR 2018
October 16-20, 2018
Munich, Germany
Presentations

U.S. Army Invests in Virtual Reality Training

The U.S. Army considers virtual reality training an important path forward for preparing warfighters.

Central to STE is a cloud-enabled One World Terrain (OWT) that will let warfighters conduct virtual training and complex simulations anywhere on a virtual representation of the Earth. OWT will leverage cloud technologies to deliver to the point of need, ensuring a common and high-fidelity whole-Earth terrain representation for a multitude of different simulation systems.

Continue reading in GPS World.

ICMI 2018

ICMI 2018
October 16-20, 2018
Boulder, CO
Presentations

Can a Computer Headset Cure Your Fear of Heights?

Daily Mail explores the many uses of VR and how the technology can be helpful for a variety of reasons.

Read the full article here.

Inside Magic Leap’s Over-the-Top Developer Conference

PC Magazine interviews ICT’s Dr. Skip Rizzo at Magic Leap’s conference for developers.

Read the full article here.

New Virtual Marksmanship and Squad Immersive Trainers are Headed to Dozens of Army Locations Next Year

At the Association of the United States Army’s annual meeting, Maj. Gen. Maria Gervais, who serves both as the director of the Army’s Synthetic Training Environments Cross-Functional Team and deputy commanding general of the Combined Arms Center-Training, laid out some immediate hits on the virtual training front while also talking about long-term goals for the programs.

The squad immersive trainer has been a concept the Army’s pursued since at least 2009, but much of the hardware and software needed to make it a reality simply didn’t exist at the time, Gervais told the audience.

Continue reading the full article in ArmyTimes.

Virtual Reality Isn’t Just for Video Games Anymore

The Skidmore News discusses a recent visit from ICT’s Skip Rizzo; read the full article about his talk and research here.

USC ICT and The Dan Marino Foundation Partner with Magic Leap to Deliver Spatial Computing Experience for Individuals with Autism and other Developmental Disabilities

Next-generation virtual human job interview practice system soon available on Magic Leap One.

For more information, please contact Sara Preto, 310-301-5006, preto@ict.usc.edu.

Studies show that young adults with Autism Spectrum Disorder (ASD) may experience challenges in finding employment. Though young adults with ASD are often high educational achievers and have the ability to work, the United Nations estimates that more than 80% are unemployed.

In a new partnership between Magic Leap, the University of Southern California’s Institute for Creative Technologies and The Dan Marino Foundation, collaborators have developed a tool using mixed reality that helps young adults with ASD overcome an important obstacle they may face entering into the workforce — in-person job interviews. The Virtual Interactive Training Agent (VITA), now in mixed reality on Magic Leap One, is a virtual simulation job interview practice system that builds competence and reduces anxiety.

“The partnership with Magic Leap and The Dan Marino Foundation gave us the opportunity to push the limits of spatial computing technology for a really important pro-social purpose,” said Albert “Skip” Rizzo, director of Medical VR for USC’s Institute for Creative Technologies. “VITA was designed to help those who need it most to practice job interviews in a safe and controlled environment, allowing them to get more comfortable over time and reach their full potential. The application of technology in this way could serve to improve the employment opportunities for persons on the Autism Spectrum and we are excited about expanding our work in this direction for other groups who could benefit from this approach.”

VITA provides the opportunity for users with ASD to repeatedly practice job interviewing in a safe, simulated environment. While it is recognized that many persons with ASD have the necessary capabilities for success in vocational activities, many report that the process of engaging in a job interview is anxiety-provoking, as well as socially and verbally challenging; these factors may limit their success in job-seeking situations and add to the high unemployment rate.

“Working with Magic Leap and The Dan Marino Foundation has been a great experience,” said Arno Hartholt, director of R&D Integration for USC’s Institute for Creative Technologies. “We got our first prototype up and running very quickly, so that we could collaboratively explore how to get the most out of this new computing platform. It’s still early days, but the potential for how this kind of technology can help people is enormous.”

“The systems being developed through our partnership with USC’s Institute for Creative Technologies and now Magic Leap are game changers for the future of young adults with Autism and other disabilities,” said Mary Partin, CEO of The Dan Marino Foundation. “Over the past four years employment rates for Marino Campus students are averaging 72%. The Dan Marino Foundation is now expanding this technology in working with youth in the Juvenile Justice system where employment is a factor in reducing recidivism.”

“At Magic Leap, we strive to impact lives and make a difference – that’s why we’re so excited about this collaboration. We see this as the first step of many in taking action to help empower those individuals with Autism or other developmental disabilities in each step of their journey with valuable life skills. We hope that this technology can help level the playing field and be an enabling tool to improve quality of life and humanity,” said Brenda Freeman, Chief Marketing Officer at Magic Leap.

VITA offers a variety of possible job interview role-play interactions supporting the practice of job interviewing skills across a range of challenge levels and allows for customizable training geared to the needs of the user. VITA for Magic Leap One will be available soon.

###

ABOUT THE USC INSTITUTE FOR CREATIVE TECHNOLOGIES:
The USC Institute for Creative Technologies develops award-winning advanced immersive experiences that leverage groundbreaking research technologies and the art of entertainment to simulate human capabilities. Influencing the trajectory of technological exploration and advancement, USC ICT’s mission is to use basic and applied research that benefits learning, education, health, human performance, and knowledge.

ABOUT THE DAN MARINO FOUNDATION:
The Dan Marino Foundation, Inc., a 501(c)(3) organization, was established by Dan and Claire Marino, motivated by their experiences in raising their son, Michael, who is diagnosed with Autism. For over 26 years, the Foundation has been a leader in innovation and change, “empowering individuals with Autism and other developmental disabilities.” The Foundation has raised more than $72 million to create unique and impactful initiatives in the community. Among these first-of-their-kind initiatives are the Nicklaus Children’s Hospital Dan Marino Outpatient Center, the Marino Autism Research Institute, Marino Adapted Aquatics, Summer STEPS Employment Programs, the Virtual Interactive Training Agent Program (ViTA-DMF), and now post-secondary programs at both Marino Campus in Broward and at FIU in Miami-Dade. For more information, please visit danmarinofoundation.org, marinocampus.org or ViTADMF.org.

Is Alexa Dangerous?

Why would we turn to computers for solace? Machines give us a way to reveal shameful feelings without feeling shame. When talking to one, people “engage in less of what’s called impression management, so they reveal more intimate things about themselves,” says Jonathan Gratch, a computer scientist and psychologist at the University of Southern California’s Institute for Creative Technologies, who studies the spoken and unspoken psychodynamics of the human-computer interaction. “They’ll show more sadness, for example, if they’re depressed.”

Continue reading in The Atlantic.

How to Teach Against Hate? Lean on Genocide Survivors

Teaching tolerance through survival, a piece from Trojan Family magazine.

The Advanced Imaging Society Announces Distinguished Leadership Award Recipients

Receiving awards for Technology will be Apple Inc for ProRes Raw codec, IBM Watson computer incorporating AI, DreamWorks Animation openVDB Data Structure System, NVIDIA GeForce RTX graphics cards, Google VR “Welcome to Light Fields,” Cisco for its broadcast production virtualization system, Cinionic/Barco for its HDR Light Steering Cinema Projector, the University of Southern California Institute for Creative Technologies’ HDR-image based lighting, The Mill for its Mascot real-time animation system, Mach 1 for its spatial sound system workflow, Positron for its Voyager chair for immersive experiences, Survios Electronauts VR music creation tool, RYOT for its Yahoo Mail AR, Felix and Paul for their dynamic projection workflow for stereoscopic 360-degree live video, and Secret Location for its Vuser Spark technology for content rights management. The annual awards are sponsored by Dell and Cisco.

Continue reading on BusinessWire.com.

‘Body Computing’ Turns Healthcare into Lifecare

As founder of USC’s Center for Body Computing (CBC), Dr. Leslie Saxon’s mission is to make sure ‘patients have a dog in the fight’ as they use tech to shift healthcare to ‘lifecare.’ We talked to her ahead of this week’s CBC annual conference.

Continue reading in PC Magazine.

Synthetic Training Environment and the Digital Revolution in the Army

What makes the Synthetic Training Environment truly revolutionary and disruptive is the “how” as much as the “what” of it. As the name suggests, the STE comprises a common One World Terrain, or OWT; Training Simulation Software, or TSS; Training Management Tools, or TMT; and common user interfaces that will change the entire ecosystem of simulation training capabilities. To better understand how, we can look at another technical revolution.

Continue reading about STE and One World Terrain on the U.S. Army website.

USC Puts AR in La Brea Tar Pits

The La Brea Tar Pits, which is part of the Natural History Museum of Los Angeles County, has partnered with USC to incorporate augmented reality into its exhibits. The museum will use this new technology as a tool to help correct misconceptions that are common among visitors.

When first arriving at the museum, visitors are greeted by a large muddy lake that is seemingly filled with tar. In it are life-sized replicas of three mammoths. One has its long trunk in the air and seems to be struggling to break free of the gooey tar. The other two, an adult and a baby, look on, attempting to help.

Continue reading in USC’s Annenberg Media.

ICT’s Adjunct Professor Paul Debevec Named Visual Effects Fellow

Visual effects veterans Craig Barron, Joyce Cox, Dan Curry, Paul Debevec and Mike Fink are set to become Visual Effects Society Fellows during an Oct. 11 reception in Beverly Hills.

Continue reading the full article in The Hollywood Reporter.

Visual Effects Society Announces 2018 VES Fellows

Paul Debevec is one of this year’s VES Fellows; read about the award in Animation World Network and Studio Daily.

USC Partners with La Brea Tar Pits on Augmented Reality and Education

In a new partnership between USC and the Natural History Museums of Los Angeles County, which include the La Brea Tar Pits and Museum, researchers will try to understand how best to design AR experiences for effective learning.

The project is funded by a new grant from the National Science Foundation totaling $2 million.

The research will compare learning and engagement from visitors interacting with various versions of an AR experience that differ in visual immersion (touchscreen vs. low-cost 3D headset) and interactivity (selecting vs. manipulating virtual objects).

Emily Lindsey, assistant curator and excavation site director for the La Brea Tar Pits, and Benjamin Nye, the director of learning science at the USC Institute for Creative Technologies, are the principal investigators. Gale Sinatra, the Stephen H. Crocker Professor of Education Psychology at the USC Rossier School of Education, and William Swartout, chief technology officer for USC ICT, are co-principal investigators.

Continue reading in USC News.

The Ultimate Physician’s Assistant

Forbes explores emerging artificial intelligence (AI) innovations transforming the healthcare ecosystem today—from automated MRI imaging for early cancer detection to DNA sequencing for targeted drug development. Continue reading the full article, mentioning groundbreaking research from ICT, here.

Museum, USC Create AR Experiences at La Brea Tar Pits

Woolly mammoths may be extinct, but visitors at the La Brea Tar Pits will get to see one up close. By partnering with USC, the Natural History Museums of Los Angeles County will introduce a new augmented reality experience to transport patrons back in time.

The USC Institute for Creative Technologies partnered with the USC Rossier School of Education after taking an interest in how the La Brea Tar Pits presented natural history within Los Angeles.

Continue reading in the Daily Trojan.

Advanced Imaging Society Unveils 2018 Technology Lumiere Award Honorees

The AIS Technology Award program annually acknowledges and celebrates technologies and processes demonstrating both innovation and impact in advancing the future of the entertainment and media industries including but not limited to theatrical, television broadcast, video, virtual reality, augmented reality, mixed reality, stereoscopic 3D, themed attractions, and other forms of relevant content.

ICT has been honored for its HDR-image based lighting. Continue reading for the full list of winners.

Why Robots That Look Too Human Make Some People Uneasy

An increasing number of robots are being created and designed to work side by side with humans, in a human environment. That means robots have to be structured like a person, because some of them have to walk and sit like a person. Some robots are even being designed to look human.

But seeing an android, a robot that looks human, can make some people uneasy. That unsettling feeling, which grows as robots begin to look more like human beings, is called the “uncanny valley.”

Even researchers who work on robots are not immune to it.

“I know how they work. I know they’re just machines, but something about something that looks like a person but doesn’t quite move like a person is disturbing,” said Jonathan Gratch, director for virtual human research at the University of Southern California’s (USC) Institute for Creative Technologies.

Continue reading on VOANews.com.

ICT Wins AIS Tech Award

The Advanced Imaging Society will recognize 15 honorees during its 9th annual Technology Lumiere Awards, which will be held during December in Hollywood, including ICT for its HDR-image-based lighting.

For more information, visit The Hollywood Reporter.

Natural History Museum Partners with USC for Augmented Reality Experiences at La Brea Tar Pits

Discoveries from the iconic excavation site will be the linchpin in an effort to understand how to most effectively use a technology growing in popularity.

A recent boom in augmented reality (AR) technology is leading educational institutions to explore new ways of teaching, where virtual scenes are mixed with real-life locations and objects. However, more research is needed in order to understand when and how AR can be leveraged to increase knowledge rather than merely entertain visitors.

In a new partnership between the Natural History Museums of Los Angeles County (which includes the La Brea Tar Pits and Museum) and University of Southern California, researchers will seek to understand how best to design augmented reality experiences for effective learning. The project is funded by a new grant from the National Science Foundation totaling $2 million. This research will compare learning and engagement from visitors interacting with various versions of an AR experience that differ in visual immersion (touchscreen vs. low-cost 3D headset) and interactivity (selecting vs. manipulating virtual objects).

Emily Lindsey, assistant curator and excavation site director for the La Brea Tar Pits, and Benjamin Nye, the director of learning science at the USC Institute for Creative Technologies, are the principal investigators. Gale Sinatra, the Stephen H. Crocker Professor of Education Psychology at the USC Rossier School of Education, and William Swartout, chief technology officer for USC ICT, are co-PIs.

A key aspect of the project is to use AR to provide additional information about what visitors see to help dispel misconceptions. “Augmented reality offers a powerful medium to share how science happens at the La Brea Tar Pits,” Nye says. “AR can show hidden worlds connected to what you would normally see with your eyes, such as seeing the pits in different time periods. These can tell the story of not just what we know, but how we know what we know.”

Located in the heart of metropolitan Los Angeles, the La Brea Tar Pits are among the world’s most famous fossil localities. Opened to the public in 1977, the La Brea Tar Pits Museum served 418,000 visitors last year with displays of Ice Age fossils from asphaltic deposits, as well as with live demonstrations of the paleontology process. With a vast collection of millions of fossils, the La Brea Tar Pits constitute an unparalleled resource for understanding environmental change in Los Angeles during the last 50,000 years of Earth’s history.

The new partnership will draw on USC’s expertise in technology, design and student engagement and the Natural History Museum’s expertise in paleontology and content-rich exhibits to create an experience that will help museum visitors engage with the scientific process, in order to both improve understanding of science and reduce scientific misconceptions. Under the partnership, visitors to the museum will explore AR time portals where they gather evidence to distinguish between competing hypotheses and update their own hypotheses as they find new evidence.

“Certain scientific concepts, like the nature of geologic time, have historically been difficult for people to wrap their minds around,” Lindsey says. “This partnership allows us to explore the ways that new, immersive technologies can help people understand and connect with these concepts more fully.”

###

ABOUT THE USC INSTITUTE FOR CREATIVE TECHNOLOGIES:

The USC Institute for Creative Technologies develops award-winning advanced immersive experiences that leverage groundbreaking research technologies and the art of entertainment to simulate human capabilities. Influencing the trajectory of technological exploration and advancement, USC ICT’s mission is to use basic and applied research that benefits learning, education, health, human performance, and knowledge. This work is a collaboration between the USC ICT Learning Science and Mixed Reality (MxR) groups.

ABOUT THE USC ROSSIER SCHOOL OF EDUCATION:

The mission of the USC Rossier School of Education is to prepare leaders to achieve educational equity through research, policy and practice. Consistently ranked as one of the nation’s best education schools by U.S. News and World Report, USC Rossier draws on innovative thinking and collaborative research to improve learning opportunities and outcomes, address disparities and solve the most intractable problems in education.

ABOUT THE LA BREA TAR PITS AND MUSEUM:

The La Brea Tar Pits and Museum is one of the Natural History Museums of Los Angeles County, which also include the Natural History Museum and the William S. Hart Museum. The asphalt seeps at La Brea represent the only consistently active urban Ice Age excavation site in the world. This makes the campus a unique site museum, where fossils are discovered, excavated, prepared and displayed in one place. Outside, the remains of plants and animals trapped during the last 50,000 years are revealed in active excavation sites. Inside, visitors see the next step of the process, as scientists and volunteers clean, repair and identify these discoveries in the transparent Fossil Lab. The museum then displays the final result: extraordinary specimens of saber-toothed cats, dire wolves and mastodons, as well as fossilized remains of microscopic plants, insects and reptiles.

ECCV 2018

ECCV 2018
September 8-14, 2018
Munich, Germany
Presentations

Army Improving Integrated Training Environments for Aviators

To better prepare aviators for the future fight against a near-peer adversary, the Army is working to improve live, virtual and constructive training environments.

Continue reading about the Synthetic Training Environment and One World Terrain on the U.S. Army site.

Can Technology Help You Live Longer?

BBC Click investigates, interviewing ICT’s Ari Shapiro about preserving the likeness of human beings.

Watch the full segment here.

International Conference on Disability and VR

International Conference on Disability and VR
September 4-6, 2018
London, UK
Keynote Presentation

iFEST 2018

iFEST 2018
August 26-29, 2018
Alexandria, VA
Presentations

Director for Cognitive Architecture Research at USC’s Institute for Creative Technologies Honored for Contributions in the Artificial Intelligence Community

Paul S. Rosenbloom of USC’s Institute for Creative Technologies and Viterbi School of Engineering has won the 2018 Herbert A. Simon Prize for Advances in Cognitive Systems, for the development of the Soar cognitive architecture, including its application to knowledge-based systems and models of human cognition. Additionally, Rosenbloom has been recognized for his contributions to theories of representation, reasoning, problem solving, and learning.

The Herbert A. Simon Prize for Advances in Cognitive Systems recognizes scientists who have made important and sustained contributions to understanding human and machine intelligence through the design, creation, and study of computational artifacts that exhibit high-level cognition.

Rosenbloom works with John E. Laird and the research community to develop a Common Model of Cognition (also known as a Standard Model of the Mind), which, instead of providing one detailed hypothesis concerning the structure of a human-like mind, attempts to capture the best current consensus on what should be in such a model. His research addresses multiple facets of high-level cognition, and he has been a strong advocate of the cognitive systems movement.

Herbert A. Simon’s abiding concern with high-level cognition in humans and machines inspired the prize, which celebrates groundbreaking ideas about high-level processing and their potential for understanding the mind. The annual award is sponsored by the Cognitive Systems Foundation, which contributes a cash prize of $10,000, and is co-sponsored by the Herbert Simon Society.

###

COLING 2018

COLING 2018
August 20-26, 2018
Santa Fe, NM
Presentations

Virtual Reality in the Medical Field

Today, VR is being used in everything from marketing for business growth to education in primary schools. One of the often overlooked yet more incredible fields that VR shows potential in is the medical field. Virtual reality has greatly benefitted medical care, and as simulated technologies continue to develop, the possibilities for using VR in the medical field are endless. Renderosity Magazine explores three of the major ways in which virtual reality is transforming medical care: Rehabilitation and Therapy, Medical Education and Pain Relief.

Read the full article here.

Would You Seek Therapy From a Machine?

Would you talk to a computer therapist? ICT has built an AI interface for use in mental health settings. KCRW hears how it works and why it won’t likely replace real therapists, ever.

Listen here.

2018 ACS

2018 ACS
August 18-20, 2018
Stanford, CA
Presentations

SIGGRAPH 2018: Deep Learning and Deep Fakes

A roundup of SIGGRAPH 2018 in FXPHD.com.

Cubic Motion Makes More Than Faces

Using Light Stage 3D scanning technology developed by Paul Debevec’s team at the University of Southern California’s Institute for Creative Technologies (ICT), researchers created a stunningly realistic model of O’Brien’s face for the “Meet Emily” video presentation and integrated it into a staged interview. The video effectively predicted the coming end of the uncanny valley.

Continue reading in Graphic Speak.

OC Fair Hits 2018 Attendance Record of 1.4M Visitors

The 2018 fair, which ended Sunday, Aug. 12, surpassed previous years in total guests – up 10 percent from the 1.33 million fairgoers of 2017 – and its busiest day, July 28, drew 86,334 people, beating a single-day visitor record set in 2001.

Among the popular attractions this year was Heroes Hall, a tribute to military veterans now in its second year. Nearly 20,000 fairgoers passed through, and more than half of them tried out the “Bravemind” virtual reality exhibit, fair spokeswoman Terry Moore said. Visitors can expect another special military exhibit at the 2019 fair, she said.

Continue reading in the OC Register.

SIGGRAPH 2018

SIGGRAPH 2018
August 12-16, 2018
Vancouver, Canada
Presentations


Meet the Chatbots Providing Mental Health Care

The Wall Street Journal takes an in-depth look at AI and chatbots, speaking with Gale Lucas for insight.

Read the full article here.

Here Come the Virtual Humans

PC Magazine interviews ICT’s Hao Li about his work and the future of virtual humans.

Read the full article here.

We Care Wednesday from KTLA

KTLA’s Gayle Anderson visits the OC Fair and reminds viewers to stop by the Bravemind exhibit while summer’s still here.

A.I. at SIGGRAPH: Part 2. Pinscreen at Real Time Live

FX Guide interviews Hao Li ahead of SIGGRAPH 2018.

Read the full article here.

Higher Education Innovation: 25 Examples of Excellence

This article is brought to you in partnership with The Mission Daily and Vemo Education, highlighting 25 research institutes at the forefront of innovation, including ICT.

Read the full article here.

Artificial Intelligence Can Help Veterans Deal with the Trauma of War

VR is a powerful tool when it comes to recreating environments and being able to control parameters.

In the case of veterans, it allows a gradual transition from the stress of the warzone to civilian life.

This is what Crusades 22, a non-profit organisation providing integrative and complementary care for veterans and active soldiers with post-traumatic stress injuries (PTSI), is working on in collaboration with Dr Albert Rizzo, director of medical virtual reality at the University of Southern California’s Institute for Creative Technologies, and several tech companies.

Continue reading.

Why Do Some Companies Have Humans Pretending to be Bots?

AI is hot. And it’s everywhere, from customer service apps and virtual assistants like Siri and Alexa, to all manner of transportation apps and social media features. Government is no stranger to using AI either, whether it comes to surveillance, power and water systems, or predictive policing.

Government CIO Media investigates the many advantages AI has to offer, read more here.

Bravemind at Heroes Hall

KTLA’s Morning News visits Heroes Hall, as Gayle Anderson learns more about the Bravemind exhibit.

Watch here.

In the Foreground: SIGGRAPH 2017 Posters Graduate Winner

ACM’s SIGGRAPH Blog caught up with SIGGRAPH 2017 Posters first-place graduate winner, ICT’s Chloe LeGendre (whose recent research has been accepted to the SIGGRAPH 2018 Posters program), to learn about her experience.

Continue reading.

9th AHFE International Conference 2018

9th AHFE International Conference 2018
July 21-25, 2018
Orlando, FL
Keynote Session

From Sci Fi to Commercialization: The Rise of Digital Avatars

A lot of the fear and friction that come with asking for help, or the need to build trust with a stranger, are alleviated by virtualizing therapy. Inhibitions melt away faster with a digital entity. In fact, an ongoing study within USC’s Institute for Creative Technologies (ICT) is using virtual therapists to help veterans open up about their PTSD.

Continue reading in Medium.

Could VR Therapy Transform Mental Health Treatment?

Verdict explores how VR can be used in mental health treatments.

Read the full article here.

When Health Apps Meet VR

The use of smartphones and tablet devices in health care has generated much interest recently, with a rise in related apps changing the way that patients and health care professionals interact. Increasing numbers of healthcare professionals are using these technologies in the workplace, as well as large numbers of consumers, who are now more connected than ever to such apps that can help diagnose and improve symptoms, as well as boost drug compliance. PharmaTimes looks at various applications that use VR, featuring Bravemind as one possible tool to help treat symptoms of post-traumatic stress.

Continue reading.

2018 HCII (Human Computer Interaction) Conference

2018 HCII (Human Computer Interaction) Conference
July 15-20, 2018
Las Vegas, NV
Presentations

ACL 2018

ACL 2018
July 15-20, 2018
Melbourne, Australia
Presentations

Why It’s So Hard for You to Stop Reading Books You Don’t Even Like

MSN writer Katie Heaney tries to understand grit, the type of personality that will persevere through anything, even a bad book, by talking with ICT’s Gale Lucas.

Read more here.

Revealing the Secret of CG Double in Logan

AnimationKolkata looks at the technology and process behind creating digital doubles in ‘Logan’.

IJCAI-ECAI 2018

IJCAI-ECAI 2018
July 13-19, 2018
Stockholm, Sweden
Presentations

SIGDIAL 2018

SIGDIAL 2018
July 12-14, 2018
Melbourne, Australia
Presentations

The Quantified Heart

Artificial intelligence (AI) is no longer just about the ability to calculate the quickest driving route from London to Bucharest, or to outplay Garry Kasparov at chess. Think next-level; think artificial emotional intelligence.

Aeon explores the other ways in which AI is linked to us, specifically focusing on our emotional relationships.

California Virtual Reality Exhibit Provides Window into PTSD Treatment

Designed by USC’s Institute for Creative Technologies, Bravemind is an interactive program clinicians use to slowly immerse veterans and military service members into virtual environments that relate to their traumatic experiences — but in controlled settings — as a form of treatment.

Continue reading in Stars and Stripes.

OC Fair Virtual Reality Exhibit Provides Window into PTSD Treatment

A new virtual reality experience at the OC Fair & Event Center offers insights into how researchers are using advanced technology to treat veterans suffering from post-traumatic stress disorder.

Visitors to the Heroes Hall veterans museum can put on special goggles and headphones and enter the “Bravemind: Using Virtual Reality to Combat PTSD” exhibition when the Orange County Fair opens Friday.

Continue reading in the LA Times.

SeriousPlay Conference

SeriousPlay Conference
July 10-12, 2018
Manassas, VA
Presentations

AAMAS 2018

AAMAS 2018
July 10-15, 2018
Stockholm, Sweden
Presentations

Virtual Reality Isn’t Just for Gamers Anymore; It Will Change Your Health

As virtual reality (VR) software becomes more sophisticated, users are able to interact with the environment through multiple senses. Our brains and bodies begin to experience the virtual environment as real.

How does VR work for healthcare?

A useful framework for understanding the application of VR in medicine and health has been proposed by one of the field’s leaders, Albert “Skip” Rizzo, PhD, of the University of Southern California’s Institute for Creative Technologies. Dr. Rizzo organizes applications into broad categories based on the underlying effects.

Continue reading the full article in Forbes.

Psychological Testing and Assessments Are Going High Tech

APA Monitor explores how technology is being used in psychological testing and assessments.

4 Ways Virtual Reality Will Disrupt Healthcare

Cox Communications includes ways in which healthcare will incorporate virtual reality technologies. Continue reading to see how pairing together VR and therapy, specifically using Bravemind in helping combat symptoms of PTS, will change the industry.

It’s Almost Time to ‘Free Your Inner Farmer’ at the Orange County Fair

This year’s Orange County Fair features tons of new food, attractions and entertainment. The LA Times Daily Pilot gives readers an overview of what not to miss, including the Bravemind exhibit at Heroes Hall.

VR Soon a Clinical Reality in Psychosis Care

While VR therapy is still in the early stages of research, the current evidence suggests it can reduce paranoia and improve sociability without significant risk of patients’ becoming detached from reality.

In many respects, Molly Porter is like any other human resources specialist conducting a job interview. She is dedicated, thorough, and knows the right questions to ask. The only difference is that Molly isn’t real; she’s a virtual avatar on a video monitor, and her function today is to help patients recovering from psychosis develop important social skills.

Continue reading in VR Room.

Nvidia GPUs Could Use AI to Power Next-Gen HairWorks Models in Future Games

Researchers from the University of Southern California, Pinscreen, and Microsoft have developed a hair rendering technique powered by deep learning. The neural network can render 3D hair models from only a 2D reference image, and is the first of its kind to work in real time.

If you’ve ever turned on Nvidia’s HairWorks in games such as The Witcher 3 or Final Fantasy XV, you may well have noticed your in-game performance drops significantly – even if it’s just Geralt’s lovely mop on the screen. Rendering a couple hundred thousand individual strands of hair is no walk in the park.

Continue reading in PCGamesN.com.

Bravemind & OC Fair 2018

The countdown to the 2018 OC Fair has reached less than two weeks.

More than 1 million people are expected to check out this year’s fair with its carnival games and rides, food creations to make the most out of any cheat day, shopping, entertainment and so many animals for trying to catch a selfie with.

The OC Register covers the events of this year’s fair, featuring Bravemind at Heroes Hall.

Machine Learning for 3D Understanding

Machine Learning for 3D Understanding
July 2-4, 2018
Munich, Germany
Presentations

Bravemind Experience Opens at Heroes Hall

The LA Times Daily Pilot section features the new Bravemind exhibit opening at the Orange County Fair’s Heroes Hall.

AEID 2018

AEID 2018
June 27-30, 2018
London, UK
Presentations

Around Town: Heroes Hall Offers Preview of ‘Bravemind’ Exhibit

The public can get a peek this week of “Bravemind: Using Virtual Reality to Combat PTSD,” the newest exhibit planned for the Heroes Hall veterans museum in Costa Mesa.

The interactive exhibit will showcase the virtual reality technology that is being used to help treat and assess post-traumatic stress disorder for military service members and veterans.

The free preview will run Thursday through Sunday at Heroes Hall at the OC Fair & Event Center, 88 Fair Drive. “Bravemind” will officially open with the start of the Orange County Fair on July 13. Museum hours are 11 a.m. to 5 p.m. Wednesdays through Sundays. For more information, visit ocfair.com/heroes-hall.

Via LA Times – Daily Pilot.

SESAM Bilbao 2018

SESAM Bilbao 2018
June 27-29, 2018
Bilbao, Spain
Presentations

University of Southern California and Others Announce a Method That Uses Deep Learning to Generate a 3D Hair Model from a Single 2D Image

Researchers at the University of Southern California, the USC Institute for Creative Technologies, Pinscreen, and Microsoft Research Asia have published a method that uses deep learning to generate more natural 3D hair models from a single 2D image.

Continue reading from Seamless.

Learning@Scale: Fifth Annual ACM Conference on Learning at Scale

Learning@Scale: Fifth Annual ACM Conference on Learning at Scale
June 26-28, 2018
London, UK
Presentations

Virtual Reality: Educating, Delivering Therapy & Facilitating Support

This piece for Medium features research and work highlighting the many uses of virtual reality.

Read more here.

ISLS 2018

ISLS 2018
June 23-27, 2018
London, UK
Presentations

Mesoscopic Facial Geometry Inference Using Deep Neural Networks

80 Level looks at research from the Vision and Graphics Lab at ICT exploring a novel approach to synthesizing facial geometry.

Continue reading here.

CVPR 2018

CVPR 2018
June 18-22, 2018
Salt Lake City, Utah
Presentations

Digital Human Project Started at Toei Token Institute

Toei Co., Ltd.’s Token Institute announces that it will launch a special research team and introduce the latest version of Light Stage, the human face scanning system developed by ICT, in order to fully tackle the creation of photoreal CG humans (digital humans).

Continue reading here.

USARIEM Partners to Explore Using Virtual Humans to Measure Cognitive Performance

Army researchers from the U.S. Army Research Institute of Environmental Medicine, or USARIEM, have recently begun collaborating with ICT on developing and enhancing technologies that can accurately and objectively detect degraded cognitive performance in Soldiers, allowing unit leaders and medics to make informed mission decisions and assess who is operationally fit to fight.

Continue reading here.

The Future of VR/AR in Medical Fields, according to Experts

MoguraVR, based out of Japan, interviews ICT’s Arno Hartholt for his thoughts on how VR and AR will impact the medical field.

Continue reading here.

ITS 2018

ITS 2018
June 11-15, 2018
Montreal, Canada
Presentations

New Study on Avatars Questions the Influence of Virtual Clones

The gamer’s experience has evolved tremendously over the past decade, and player-customized avatars, or virtual clones, are becoming more and more realistic every day. Previous analysis, for example, has shown that women prefer avatars that do not look like them. But have you ever wondered whether the look of your avatar influences your gameplay and your decisions?

Continue reading about recent research from ICT and the University of Illinois focusing on the likeness of avatars and how it might affect a gamer’s experience.

Realistic 3D Avatars From a Single Image

Digital media needs realistic digital faces. The recent surge in augmented and virtual reality platforms has created an even stronger demand for high-quality content, and rendering realistic faces plays a crucial role in achieving engaging face-to-face communication between digital avatars in simulated environments.

Continue reading in mc.ai.

NAACL HLT 2018

NAACL HLT 2018
June 1-6, 2018
New Orleans, LA
Presentations

30th Association for Psychological Science Annual Convention

30th Association for Psychological Science Annual Convention
May 24-27, 2018
San Francisco, CA
Panel Participation

Holograms: Are They Still The Preserve of Science Fiction?

The Guardian explores 3D technologies and holograms, looking to ICT for source inspiration.

Read the full article here.

FLAIRS-30

FLAIRS-30
May 22-24, 2018
Marco Island, FL
Presentations

Computer Animation and Social Agents (CASA) 2018

Computer Animation and Social Agents (CASA) 2018
May 21-23, 2018
Beijing, China
Presentations

In-Depth: Therapeutic VR In 2018 Is No Longer Just a Distraction

VR could soon be a resource in treating a different kind of recurring fear as well. Digital health therapeutic company Pear Therapeutics’ pipeline includes reCALL for PTSD, an immersive, experimental VR treatment that would be prescribed in conjunction with pharmaceuticals to reduce patients’ psychological trauma. So far, pilot studies have shown a “marked improvement” in PTSD scores among patients using the VR therapy compared to standard care alone.

Placing veterans or others with the condition back into a simulated version of a traumatic event might sound counterintuitive, but according to USC Davis School of Gerontology professor Albert “Skip” Rizzo, experiencing a version of the events in which they have greater control can provide a sense of resolution.

“Exposure therapy is an ideal match with VR,” Rizzo said during an NBC interview on the subject. “You can place people in provocative environments and systematically control the stimulus presentation. In some sense it’s the perfect application because we can take evidence-based treatments and use it as a tool to amplify the effect of the treatment.”

Continue reading in MobiHealthNews.

Data Is Improving Government Services, But at What Cost to Citizens’ Privacy?

Data now informs almost everything the public sector does, and it also informs on us.

ICT’s Todd Richmond provides insight on the issue for Government Technology’s ‘Go Public’ podcast.

IST Meeting of the Minds

IST Meeting of the Minds
May 18, 2018
Caltech Annenberg Center
Keynote Speaker: Hao Li

Why Technology for Physician Education is a Necessity, Not a Luxury

Technology continues to change the way healthcare professionals deliver care to patients, enabling faster collaboration and improving workflow processes. Thanks to mobile devices and processes, such as secure text messaging, organizations today are better equipped to provide quality support to individuals with fewer inconveniences or interruptions.

At the same time, digital tools are also evolving the way clinicians hone their craft. The University of Nevada, Reno School of Medicine, for instance, uses telemedicine tools via Project ECHO (Extension for Community Healthcare Outcomes) to connect university-based specialists with primary-care clinicians to allow for faster, more efficient knowledge sharing.

Telemedicine, though, represents just one of several ways technology is reshaping physician education.

For New Doctors, AR and VR Are Not Foreign Concepts

At the University of Southern California Institute for Creative Technologies, for instance, virtual and augmented reality are helping to train clinicians to “more skillfully handle delicate interactions with patients,” according to an article published earlier this year in the Journal of the American Medical Association. Skip Rizzo, director for medical virtual reality at the organization, says such tools allow doctors, nurses and others to “mess up a bunch” prior to working in the trenches for real.

Continue reading in HealthTech.

‘Yanny’ Versus ‘Laurel’ Debate Is the New ‘What Color Is the Dress?’ Meme

It’s the biggest debate of our time — or, at least, the biggest since the great “the dress” debate of 2015.

Is it “Yanny” or is it “Laurel”?

ICT’s Todd Richmond weighs in on why some people hear “Yanny” while others hear “Laurel”. Continue reading in TheWrap.

Virtual Therapy, Real Results

Diversity in Action interviews Skip Rizzo about using virtual reality to help in the treatment and prevention of PTS symptoms.

Read the full article on page 10 of Diversity in Action’s May/June 2018 issue.

IWSDS 2018

IWSDS 2018
May 14-16, 2018
Singapore
Presentations

Virtual Reality is Used to Enhance the Lives of Aging Soldiers

A technology that bloomed in the age of joystick-thumbing millennials is now providing “ooooh’s” and ear-to-ear grins to old soldiers born generations ago. And those entrusted with looking after the veterans say it is broadening their lives and enhancing their care.

Continue reading in Newsday.

IEEE Innovation Challenges Summit

IEEE Innovation Challenges Summit
May 11, 2018
San Francisco, CA
Keynote Presentation

Are Video Games Really Better When You Play as Yourself?

Ever dreamed of sprinting through a pixelated world, head-butting mysterious blocks, jumping on sentient mushrooms, and battling an evil turtle to rescue the princess?

Of course you have. But not all dreams should come true.

A new study suggests virtual doppelgangers don’t necessarily improve video gameplay.

Researchers at the University of Southern California Institute for Creative Technologies and University of Illinois at Urbana-Champaign found that photorealistic avatars have little effect on players’ performance.

Continue reading on Geek.com.

Would Super Mario Bros. be better if you could play as yourself? Well, not exactly.

New research suggests that avatar appearance may not make a difference to players in certain game contexts.

Contact: Sara Preto, (310) 301-5006 or preto@ict.usc.edu

The gaming experience over the last decade has evolved tremendously, and player-customized avatars, or virtual doppelgangers, are becoming more realistic every day. Past studies have shown that women may prefer avatars that don’t look like them, but a new study by the USC Institute for Creative Technologies and the University of Illinois at Urbana-Champaign shows no gender difference and no negative effect on players’ performance or subjective involvement based on whether a photorealistic avatar looked like them or like their friend.

The study is the latest to examine benefits to using self-similar avatars in virtual experiences, and builds primarily on a study by Gale Lucas that analyzed players’ performance and subjective involvement with a photorealistic self-similar avatar in a maze running game. Results showed effects based on avatar appearance as well as gender differences in participants’ experiences.

“In the previous work, we found that players felt more connected and engaged – and that their avatar was more attractive – when they navigated the game with a photorealistic self-similar avatar, compared to a photorealistic avatar that looked like a stranger,” said Gale Lucas, senior research associate for USC Institute for Creative Technologies.

“While previous research shows that male players also found that the game was more enjoyable with self-similar avatars, women, if anything, enjoyed it more with a stranger’s avatar. However, we found no effects on performance. Although there were no performance benefits of self-similar avatars, we wanted to confirm that these subjective benefits were because the avatars looked like the player, not just because they were familiar.”

Thus, to help researchers and game designers assess the cost-benefit tradeoffs of self-similar avatars, Lucas and co-authors Helen Wauck, Ari Shapiro, Wei-Wen Feng, Jill Boberg and Jonathan Gratch conducted an experiment, inviting pairs of friends to visit the USC Institute for Creative Technologies lab for full-body scans that were used to generate photorealistic avatars. Shapiro, a USC Viterbi research assistant professor in computer science, is one of the pioneers of “fast avatar capture,” which creates a photorealistic 3D double of you in the span of 20 minutes. One friend in each pair was instructed to play a search-and-rescue game with their own avatar, and the other played the game with their friend’s avatar.

“By comparing people who played the game with their own avatar to those who played with their friends’ avatar, we could test the effect of self-similarity without confounding it with familiarity,” Lucas said.

Lucas also noted that because she and the team did not replicate the previous findings in this new study, those earlier effects could have been due to familiarity rather than self-similarity per se. This suggests that an avatar resembling someone you know personally may be just as good as one that looks like you. So, rather than creating a personalized avatar for each member of a military unit or classroom, it may be enough to create one avatar that looks like someone in the group; everyone in the group could then benefit from it, compared to an avatar that looks like a stranger.

“We also found that women’s experiences with self-similar avatars were no more negative than men’s,” said Lucas. “This second study used even higher-fidelity avatars than the previous study we ran, so it seems that with better rendering (e.g., of face, hair, hands), women no longer felt less enjoyment with their own avatar.”

The new findings show how important it is to carefully consider whether high-fidelity self-similar avatars align with the purpose and structure of an interactive experience before deciding whether they are worth the investment of time and money to implement.

The study was published on April 25th in The Proceedings of the ACM CHI Conference on Human Factors in Computing Systems, the premier international conference of Human-Computer Interaction.

###

Two Tech Startups Changing the World That Are Backed By Hollywood Actors

Business Insider features Bravemind in a post-Dell Technologies World 2018 story.

Continue reading to learn more about the project and its connection to Hollywood.

Virtual Reality Graded Exposure Therapy: The Paradigm Shift That Can Change Our Mental Reality

Corporate Wellness Magazine explores VRGET, featuring insight from ICT’s Dr. Skip Rizzo.

LREC 2018

LREC 2018
May 7-12, 2018
Miyazaki, Japan
Presentations

Army Seeks ‘Synthetic Training Environment’

Defense Systems features collaborative efforts to create a unified virtual training architecture that would allow soldiers to practice realistic operations anywhere and anytime and improve training management across the Army environment.

Continue reading about the Synthetic Training Environment.

Virtual Humans Could Improve Conversations at the Doctor’s Office

Game-like simulations are training health care professionals to be more empathetic and to tackle conversations on tough topics like mental health.

CNET spoke with ICT’s Jonathan Gratch about how virtual humans can be used in healthcare and other industries.

Continue reading.

Dell Technologies Invests in Virtual Reality Exposure Therapy

Dell Technologies Invests in Virtual Reality Exposure Therapy

Announces $100,000 Grant to Advance Bravemind

LAS VEGAS – Kicking off Dell Technologies World 2018, Chairman and CEO Michael Dell announced a $100,000 grant for the University of Southern California’s Institute for Creative Technologies to advance its virtual reality exposure therapy prototype, Bravemind, which is used to assist in the treatment of post-traumatic stress symptoms in war Veterans.

Dr. Albert ‘Skip’ Rizzo, director for medical virtual reality at the Institute for Creative Technologies, helped kick off the conference by joining actor Jeffrey Wright on stage for a live demonstration of the project. Wright described Bravemind and the work being done at the Institute as “human progress,” a common theme at Dell Technologies World 2018.

The grant will be used to help expand Bravemind and develop its most advanced form yet, using higher-fidelity virtual reality equipment at lower cost, an option made feasible only by recent advances in the technology.

For more information about Bravemind, please visit www.ict.edu/prototype/pts.

###

Michael Dell Says Robopocalypse is Fake News, Future is Software Defined

SDxCentral covers Dell Technologies World 2018, where Michael Dell and company announce a $100,000 grant for the advancement of Bravemind.

Continue reading here.

How Virtual Reality Is Saving Lives Both On and Off the Battlefield

Dell Technologies explores the use of VR in exposure therapy in this feature. Continue reading the full article here.

Can the Scary World of Facial Recognition Generate a Nearly $9B Benefit?

Forbes speaks with ICT’s Hao Li for insight.

MODSIM World 2018

MODSIM World 2018
April 24-26, 2018
Norfolk, VA
Presentations

The ACM CHI Conference on Human Factors in Computing Systems

The ACM CHI Conference on Human Factors in Computing Systems
April 21-26, 2018
Montreal, Canada
Presentations

The US Army Has Made a Virtual North Korea to Train Soldiers

The US Army is creating a virtual replica of the planet to drill troops in. The idea is that it will be so realistic that practicing missions in virtual reality will be almost as good as the real thing.

Continue reading in New Scientist.

War Games: Army Deletes Bureaucracy to Get Sims Fast

There is real uncertainty whether such things as robotic tanks and high-speed scout helicopters are possible on the Army’s timeline. But if there’s one area where a high-speed approach can work, it’s training simulations, where the Army can piggyback on the rapid development in commercial gaming.

Breaking Defense examines new technologies the Army is using, featuring ICT’s Synthetic Training Environment and One World Terrain efforts. Read the full story here.

Inside the VR Therapy Designed to Help Sexual Assault Survivors Heal by Facing Attackers

ABC News investigates new VR technology used to help treat sexual assault survivors, speaking with ICT’s Dr. Skip Rizzo about virtual reality exposure therapy.

Continue reading the full article here.

Using Virtual Reality to Treat Addiction

The Fix looks at ways in which VR can help support the treatment of addiction.

Spotting Fake News in a World with Manipulative Video

For CBS News, ICT’s Hao Li talks with Carter Evans about image manipulation.

Is the VR Universe in Ready Player One Possible?

For this Giz Asks, Gizmodo reached out to a number of VR experts, including Arno Hartholt and Skip Rizzo, about the plausibility of an OASIS-like platform coming onto the market, and how much computing power would be required to sustain it.

Continue reading the full article in Gizmodo.

Virtual Reality Now Being Used to Treat PTSD

Examining VR treatments for PTS, via News Blaze.