The Next Technology Shift: The Internet of Actions

GP Bullhound released its 2018 technology predictions in early December 2017. At the top of the list was the changing relationship between technology and politics, followed by cybersecurity, mobile over TV in China, translation technology, the end of email, international labor, the software suite, Industry 4.0, and the rise of regulators in blockchain and Initial Coin Offerings. Coming in last was Augmented Reality (AR).

But, according to Todd Richmond, IEEE Member and Director of USC’s Mixed Reality Lab, the next impactful technology shift is the Internet of Actions (IoA). IoA is a vision of digital technology becoming a more effective partner for humans as we navigate through our increasingly mixed-reality worlds.

Richmond believes that the key enabling technology is artificial intelligence (AI), which will drive the personalization that is part and parcel of IoA.

Continue reading the full article in Forbes.

We Visited Thomas Jefferson with VR!

Alyssa Smith of Cheddar was hanging out with Thomas Jefferson thanks to USC’s Institute for Creative Technologies. You’ll never guess what his favorite hobby is. Watch the full segment on Cheddar.

How Mixed Reality Helps PTSD Patients

Todd Richmond, Director of Advanced Prototype Development for the USC Institute for Creative Technologies, speaks with Cheddar about how emulating the environment of trauma helps treat victims.

Watch the full interview on Cheddar TV.

Becoming an Avatar

Alyssa Julya Smith of Cheddar visited the USC Institute for Creative Technologies in December, where she learned about research being conducted in virtual reality from audio lead Jamison Moore and assistant professor Ari Shapiro.

Alyssa became an avatar herself, placed in a virtual world with the nation’s third president, Thomas Jefferson. They discuss how virtual reality is being implemented in educational settings and how the technology can add to a curriculum.

Watch the full segment on Cheddar.

How Mixed Reality Could “Profoundly” Change the World

Mixed reality is set to make a huge impact on people’s lives. IEEE Member and USC ICT Director of Advanced Prototype Development Todd Richmond explains how he researches the ways this technology will change how things are done across industries.

“Mixed reality is going to profoundly change our world,” says Richmond. Cheddar Anchor Alyssa Julya Smith explores the autonomous drone lab inside USC. Richmond says this lab is looking to understand the relationship between humans and autonomous objects.

The drones are trained to follow the people controlling them. A big question for the future of autonomous objects is whether humans can trust the technology. Richmond says the project at USC looks at how to build interactions between machines and humans to advance command and control.

Watch the full segment on Cheddar.

ARL 24 – Research Assistant, Socially Intelligent Assistant in AR

Project Name
Socially Intelligent Assistant in AR

Project Description
Augmented reality (AR) introduces new opportunities to enhance the successful completion of missions by supporting the integration of intelligent computational interfaces in the users’ field of view. This research project studies the role embodied conversational agents can play toward that goal. This type of interface has a virtual body and is able to communicate with the user both verbally, using natural language, and nonverbally (e.g., through emotion expression). The core research question is: Can embodied conversational agents facilitate completion of missions above and beyond what is afforded by more traditional types of interfaces?

Job Description
The intern will develop this research on an existing platform for embodied conversational agents in AR. The intern will propose a set of key functionalities for the agent, implement them, and demonstrate that they improve mission completion performance. The proposed functionality must pertain to information perceived through the camera or 3D sensors available in the AR platform, and may be communicated to the user verbally and nonverbally.

Preferred Skills

  • Experience with AR platforms
  • Experience with Unity and C# programming
  • Some experience with HCI evaluation techniques
  • Some experience with scene understanding techniques and TensorFlow
  • Some experience with embodied conversational agents

ARL 23 – Programmer, Creation of Synthetic Annotated Image Training Datasets for Deep Learning Convolutional Neural Networks

Project Name
Creation of Synthetic Annotated Image Training Datasets Using Computer Graphics for Deep Learning Convolutional Neural Networks

Project Description
Work as part of a team on a project to develop and apply deep learning convolutional neural networks (DLCNNs) on field-deployable hardware:
Purpose: Accelerate deep learning algorithms to recognize people, behaviors, and objects relevant to military purposes using computer-graphics-generated training images for complex environments.
Product: A training image generator that creates a corpus of automatically annotated images for a closed list of people, behaviors, and objects, plus optimized, fast, and accurate machine learning algorithms that can be fielded in low-power, low-cost, and low-weight sensors.
Payoff: An inexpensive source of military-related training data and optimal deep learning algorithm tuning for fieldable hardware, which could be used to create semi-automatically annotated datasets for further training and be scalable to next-generation machine learning algorithms.

Job Description
Develop scripts for ARMA3 to create “pristine” and sensor-degraded synthetic data suitable for training and testing DLCNNs (e.g., Caffe, TensorFlow, DarkNet). Assets such as personnel, vehicles, aircraft, boats, and other objects will be rendered under a variety of observation and illumination angle conditions, e.g., the full daytime cycle and weather conditions (clear to total overcast, low to high visibility, dry, rain, and snow).
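
For illustration only, the following Python sketch shows one way a generator of this kind might emit machine-readable annotations: it converts per-frame object metadata (assumed to be exported by the rendering scripts; the file names, categories, and box values below are hypothetical) into a COCO-style JSON file that detection frameworks can consume.

```python
import json

# Hypothetical example: each rendered frame comes with the 2D bounding boxes of
# the objects placed in the scene, already projected by the rendering script.
# A real ARMA3/Unreal pipeline would export this metadata alongside each image.
frames = [
    {
        "image": "renders/frame_0001.png",
        "objects": [
            {"category": "person",  "bbox_xywh": [412, 230, 38, 96]},
            {"category": "vehicle", "bbox_xywh": [105, 310, 180, 75]},
        ],
    },
]

def to_coco_like(frames, categories=("person", "vehicle", "aircraft", "boat")):
    """Convert per-frame metadata into a COCO-style annotation dictionary."""
    cat_ids = {name: i + 1 for i, name in enumerate(categories)}
    images, annotations = [], []
    for img_id, frame in enumerate(frames, start=1):
        images.append({"id": img_id, "file_name": frame["image"]})
        for obj in frame["objects"]:
            annotations.append({
                "id": len(annotations) + 1,
                "image_id": img_id,
                "category_id": cat_ids[obj["category"]],
                "bbox": obj["bbox_xywh"],      # [x, y, width, height] in pixels
            })
    cats = [{"id": i, "name": n} for n, i in cat_ids.items()]
    return {"images": images, "annotations": annotations, "categories": cats}

with open("synthetic_train.json", "w") as f:
    json.dump(to_coco_like(frames), f, indent=2)
```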

Preferred Skills

  • Programming skills: Python, MATLAB, scripting
  • Good documentation of algorithms, code, and workflow
  • Gaming engines, e.g., ARMA3, Unreal, Unity, Blender
  • Familiarity with Windows and Linux, cloud computing
  • Familiarity with, or willingness to learn, the basics of DLCNNs

ARL 22 – Research Assistant, Real-Time Scene Understanding in Augmented Reality

Project Name
Real-Time Scene Understanding in Augmented Reality

Project Description
Augmented reality (AR) has the potential to facilitate the successful completion of missions by providing users with critical real-time information about the surrounding environment. To accomplish this, we need intelligent systems that are able to integrate camera and 3D data efficiently. Furthermore, these systems need to be able to present this information effectively in the users’ (augmented) field-of-view. This research project focuses on three core questions: 1) What information should we perceive in the environment? 2) How do we perceive this information? 3) How do we present this information to the user?

Job Description
The intern will integrate computer vision techniques in an AR platform to allow recognition of mission-critical entities, and integrate this data with the 3D representation of the environment afforded by the AR platform. The intern will also develop an effective visual representation for this information and lead an evaluation that demonstrates improvement on relevant mission completion metrics.
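
As a rough sketch of what fusing 2D detections with the platform’s 3D data can look like, the Python snippet below back-projects the center of a detected bounding box into a 3D camera-space point using a pinhole model; the intrinsics, detection center, and depth value are placeholder numbers, not values from any actual AR device.

```python
import numpy as np

def backproject(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with metric depth into camera coordinates
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Placeholder intrinsics and a hypothetical detection from an object detector.
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5     # example 640x480 camera
det_center_px = (400, 260)                      # center of the detected box
det_depth_m = 2.3                               # depth sampled from the 3D mesh

anchor_cam = backproject(*det_center_px, det_depth_m, fx, fy, cx, cy)
print("3D anchor in camera coordinates (m):", anchor_cam)
```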

Preferred Skills

  • Experience with machine learning for scene understanding
  • Experience with AR platforms
  • Experience with Unity and C# programming
  • Experience with TensorFlow
  • Some experience with HCI evaluation techniques

ARL 21 – Research Assistant, Real-Time Facial Expression Perception to Enhance Human-Agent Collaboration

Project Name
Real-Time Facial Expression Perception to Enhance Human-Agent Collaboration

Project Description
Emotion expressions help regulate social life and communicate important information about our goals to others. Research shows that emotion expressions also have an important impact on the decisions we make with others, even if others are machines. This research project studies whether endowing computational systems with the ability to perceive facial expressions in users can help promote collaboration and cooperation between humans and machines.

Job Description
The intern will develop a system that is capable of perceiving, in real time, the user’s facial expressions and integrate it with an existing web platform for embodied agents. The intern will then study whether embodied agents that can express and perceive emotion promote collaboration and cooperation with human users.

Preferred Skills

  • Experience developing or using off-the-shelf tools for real-time perception of facial expressions
  • Experience with Unity, C#, and Javascript programming
  • Experience with HCI evaluation techniques
  • Some experience with image processing techniques and TensorFlow

ARL 20 – Research Assistant, VPU-Based Deep Learning Inference at the Edge

Project Name
VPU-Based Deep Learning Inference at the Edge

Project Description
A VPU (Vision Processing Unit) is an emerging class of system-on-chip (SoC) with dedicated artificial intelligence and neural computing capabilities for accelerating embedded visual intelligence and inference on resource-constrained devices. This project explores VPU capabilities and develops innovative technical solutions to streamline deep neural network-based visual perception algorithms (e.g., object detection, semantic segmentation, and localization) so they can be efficiently executed on miniaturized, ultra-low-power VPU devices at the edge.

Job Description
The work includes pursuing novel solutions and methods to run computationally intensive computer vision and deep neural network inference algorithms on an ultra-compact, self-contained VPU device (e.g., the Intel USB-based Neural Compute Stick) at high speed and low power without compromising accuracy and performance. A specified neural network-based perception algorithm (e.g., object detection) will be used for algorithm development and evaluation. Anticipated results include new theory and algorithm developments leading to publications in scientific forums, as well as real-world utility and software for demonstrations.
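
The VPU itself is programmed through its own vendor toolchain, but as a generic, hedged illustration of the “streamline for the edge” workflow, the sketch below converts a tiny Keras model to TensorFlow Lite with default optimizations and times a single inference; the model architecture and input are arbitrary placeholders.

```python
import time
import numpy as np
import tensorflow as tf

# Generic illustration of shrinking a model for edge inference: convert a small
# Keras model to TensorFlow Lite and time one inference. Deploying to an actual
# VPU (e.g., the Intel Neural Compute Stick) would use that device's own toolchain.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # weight quantization
tflite_model = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.random.rand(1, 224, 224, 3).astype(np.float32)
interpreter.set_tensor(inp["index"], frame)
start = time.time()
interpreter.invoke()
print("inference time: %.1f ms" % ((time.time() - start) * 1000))
print("output shape:", interpreter.get_tensor(out["index"]).shape)
```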

Preferred Skills

  • A dedicated and hardworking individual
  • Experience or coursework related to machine learning, computer vision
  • Strong programming skills

ARL 19 – Research Assistant, Self-Localization and Perception Over Rough Environments

Project Name
Self-Localization and Perception Over Rough Environments

Project Description
Robust and accurate self-tracking and localization is vital to any system or application that is spatially aware (e.g., autonomous driving/flying, location-based situational awareness, augmented reality). Current techniques, however, have significant limitations in accuracy, robustness, scalability, and reliability, and cannot fulfill operating requirements, especially in unconstrained, rough environments. This project will develop high-performance localization and perception capabilities for rough environments (e.g., urban rubble, vegetated off-road terrain, non-GPS state estimation).

Job Description
The work includes pursuing robust techniques that enable simultaneous localization and structure perception from multiple sensors. The key innovation is to combine advanced SLAM (simultaneous localization and mapping), object recognition, and monocular depth prediction with deep neural networks under a unified framework to resolve these challenging problems. Merging these techniques in one framework is a unique opportunity that has not been explored before. Anticipated research results include new theory and algorithm developments leading to publications in scientific forums, as well as real-world utility and software for demonstrations.

Preferred Skills

  • A dedicated and hardworking individual
  • Experience or coursework related to machine learning, computer vision
  • Strong programming skills

ARL 18 – Research Assistant, Digital Sandtable Using HoloLens AR

Project Name
Digital Sandtable Using HoloLens AR

Project Description
The goal of this project is to develop a “Digital Sandtable” prototype by applying emerging Augmented Reality (AR) and immersive techniques to visual exploration and presentation of various mission sensing data in support of mission planning, assessment, enhanced data comprehension, and decision-making.

Job Description
The work includes pursuing technical solutions, developing core algorithms, and producing a proof of concept prototype (i.e. “Digital Sandtable”) by making use of the algorithms from research efforts and off-the-shelf commercial products such as Microsoft HoloLens and augmented reality development toolkits.

Preferred Skills

  • A dedicated and hardworking individual
  • Experience or coursework related to VR/AR, game development, Unity
  • Strong programming skills

ARL 17 – Research Assistant, Medical Imaging as a Tool for Wound Ballistics

Project Name
Medical Imaging as a Tool for Wound Ballistics

Project Description
The primary purpose of this project is to research forensic aspects of ballistic injury. The motivation for this project results from a desire to better understand the ability of medical imaging tools to provide clinically- and evidentiary-relevant information on penetrating wounds caused by ballistic impacts both pre- and post-mortem.

Job Description
The research assistant will collect and analyze data, including DICOM medical images, as well as document and present findings of the work.
**Internship may be located at Keck School of Medicine on USC’s Health Science Campus.

Preferred Skills

  • Graduate student in biomedical engineering, mechanical engineering, or related field
  • Some experience working in a laboratory setting
  • Some experience in the medical field
  • Some experience with medical images or radiology
  • Experience in software for data collection, processing and analysis

 

ARL 16 – Programmer, Investigating AR/VR Human Interaction with Complex Analysis Geometry

Project Name
Investigating AR/VR Human Interaction with Complex Analysis Geometry

Project Description
This project will explore virtual and augmented reality methods for displaying Vulnerability / Lethality simulation data in a suitable context. This would include prototyping visuals and human-computer interactions in Unity and working with analysts and evaluators to determine optimal means for conveying complex groupings of data and results.

Job Description
The Army Research Laboratory (ARL) is looking for an enthusiastic, self-motivated student with a background in Unity Programming and 3D Model creation. The student will take the lead in understanding the requirements for a Vulnerability/Lethality analysis and use these requirements to define what interactions need to be explored.
**Internship may be located at Keck School of Medicine on USC’s Health Science Campus

Preferred Skills

  • Computer Programming
  • Experience in Unity (AR/VR a plus)
  • 3D Modeling and Texturing a plus

ARL 15 – Research Assistant, Researching Biomedical Imaging Segmentation Techniques

Project Name
Researching Biomedical Imaging Segmentation Techniques

Project Description
A big part of human survivability is adequately representing the variety of anatomy that exists. This project will be a meta-analysis of all automated and manual segmentation techniques to date, as well as anatomical atlas creation. The student will research how other models are made and how complex they are, document techniques, and create a library of the research papers found. This research will then be put together as a paper whose conclusion will suggest avenues for segmentation, image recognition, and model creation based on these findings.

Job Description
The research assistant will document all of the findings to date and present them at summer’s end. Based on what they find, the student will recommend a few avenues for improving segmentation and model creation techniques.
**Internship may be located at Keck School of Medicine on USC’s Health Science Campus.

Preferred Skills

  • Biomedical background with focus in imaging
  • Knowledge of medical (DICOM) data and anatomy
  • Understanding of FEM models
  • Proficient writer

ARL 14 – Research Assistant, The Biomechanics of Ballistic-Blunt Impact Injuries

Project Name
The Biomechanics of Ballistic-Blunt Impact Injuries

Project Description
The primary purpose of this project is to research the mechanisms and injuries associated with ballistic-blunt impacts. The motivation for this project comes from body armor design requirements. Body armor is primarily designed to prevent bullets from penetrating the body. However, to absorb the energy of the incoming bullet, body armor can undergo a large degree of backface deformation (BFD). Higher energy threats, new materials, and new armor designs may increase the risk of injury from these events. Even if the body armor system can stop higher energy rounds from penetrating, the BFD may be severe enough to cause serious injury or death. Unfortunately, there is limited research on the relationship between BFD and injury, hindering new and novel armor developments. Consequently, there is a need to research these injuries and their mechanisms so that proper metrics for the evaluation of both existing and novel systems can be established.

Job Description
The research assistant will help design and execute hands-on lab research related to injury biomechanics, collect and analyze data, as well as document and present findings of the work.
**Internship may be located at Keck School of Medicine on USC’s Health Science Campus.

Preferred Skills

  • Graduate student in biomedical engineering, mechanical engineering, or related field
  • Some experience working in a laboratory setting
  • Some experience in the medical field
  • Experience in software for data collection, processing and analysis

ARL 13 – Programmer, Anatomical Shape Model in WebGL

Project Name
Anatomical Shape Model in WebGL

Project Description
This project will create a 3D shape model that morphs through different human sizes using anthropometric data. 3D geometry and anthropometric measurements will be provided. The shape model will be placed in a 3D WebGL environment (three.js), which will allow the user to define specific features of the human in a web-based environment. This web-based environment should also export the geometry. The application will support updates with future measurements or similarly formatted anthropometric data.

Job Description
The Army Research Laboratory (ARL) is looking for an enthusiastic, self-motivated student with a background in web-based programming. The student will take the lead in understanding the requirements for the Anatomical Shape Model Program and developing a prototype that will satisfy those requirements. The research assistant will explore approaches to solving the problem; develop working prototypes, as well as document and present their workflow.
**Internship may be located at Keck School of Medicine on USC’s Health Science Campus

Preferred Skills

  • Computer programming (web preferred)
  • 3D Web-programming experience (WebGL-three.js)
  • Some 3D animation/modeling experience (e.g., morph targets) would help but is not required

Can Magic Leap Deliver on Its Big Hardware Reveal?

Magic Leap announces its One system, a head-mounted display and wearable processing unit that connects to a handheld controller. With few details available, WIRED turned to ICT’s David Nelson for thoughts on what kind of technology might be behind the One system.

Continue reading the full article on WIRED.com.

Magic Leap: Founder of Secretive Start-Up Unveils Mixed-Reality Goggles

Magic Leap today revealed a mixed reality headset that it believes reinvents the way people will interact with computers and reality. Unlike the opaque diver’s masks of virtual reality, which replace the real world with a virtual one, Magic Leap’s device, called Lightwear, resembles goggles, which you can see through as if wearing a special pair of glasses. The goggles are tethered to a powerful pocket-sized computer, called the Lightpack, and can inject life-like moving and reactive people, robots, spaceships, anything, into a person’s view of the real world.

Creative Director of ICT’s Mixed Reality Lab David Nelson chatted with Brian Crecente of Rolling Stone about the technology and how it is moving us toward a new medium of human-computing interaction.

Read the full article in Rolling Stone.

Learning a Language in VR is Less Embarrassing than IRL

Will virtual reality help you learn a language more quickly? Or will it simply replace your memory?

Quartz investigates and chats with ICT’s Jonathan Gratch about the positive effects VR can have on people who might be wary of the technology.

Read the full article on Qz.com.

Creating a Life-sized Automultiscopic Morgan Spurlock for CNN’s “Inside Man”

Download a PDF overview.

We present a system for capturing and rendering life-size 3D human subjects on an automultiscopic display. Automultiscopic 3D displays allow a large number of viewers to experience 3D content simultaneously without the hassle of special glasses or head gear. Such displays are ideal for human subjects as they allow for natural personal interactions with 3D cues such as eye gaze and complex hand gestures. In this talk, we will focus on a case study in which our system was used to digitize television host Morgan Spurlock for his documentary show “Inside Man” on CNN. Automultiscopic displays work by generating many simultaneous views with high angular density over a wide field of view. The angular spacing between views must be small enough that each eye perceives a distinct and different view. As the user moves around the display, the eye smoothly transitions from one view to the next. We generate multiple views using a dense horizontal array of video projectors. As video projectors continue to shrink in size, power consumption, and cost, it is now possible to closely stack hundreds of projectors so that their lenses are almost continuous. However, this display presents a new challenge for content acquisition: it would require hundreds of cameras to directly measure every projector ray. We achieve similar quality with a new view interpolation algorithm suitable for dense automultiscopic displays.
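
To make the angular-density requirement concrete, here is a back-of-the-envelope Python calculation using illustrative numbers (not the actual system’s parameters): it estimates the maximum view spacing for which a viewer’s two eyes still receive different views, and the resulting minimum number of projectors over an assumed field of view.

```python
import math

ipd_m = 0.064          # typical interpupillary distance (~6.4 cm)
viewing_dist_m = 3.0   # assumed viewer distance from the display

# Angle subtended by the two eyes as seen from the display: views spaced more
# coarsely than this would send the same image to both eyes, destroying stereo.
max_spacing_deg = math.degrees(2 * math.atan(ipd_m / (2 * viewing_dist_m)))

field_of_view_deg = 135.0                      # assumed horizontal display FOV
min_views = math.ceil(field_of_view_deg / max_spacing_deg)

print(f"maximum view spacing: {max_spacing_deg:.2f} degrees")
print(f"minimum views/projectors over {field_of_view_deg} degrees: {min_views}")
```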

While the display has many applications, from video games to medical visualization, we are currently working on a much larger project to record the 3D testimonies of Holocaust survivors. This project, “New Dimensions in Testimony” or NDT, is a collaboration between the USC Shoah Foundation and the USC Institute for Creative Technologies, in partnership with exhibit design firm Conscience Display. NDT combines ICT’s Light Stage technology with natural language processing to allow users to engage with the digital testimonies conversationally. NDT’s goal is to develop interactive 3D exhibits in which learners can have simulated educational conversations with survivors through the fourth dimension of time. Years from now, long after the last survivor has passed on, the New Dimensions in Testimony project can provide a path to enable youth to listen to a survivor and ask their own questions directly, encouraging them, each in their own way, to reflect on the deep and meaningful consequences of the Holocaust. NDT follows the age-old tradition of passing down lessons through oral storytelling, but with the latest technologies available.

Multi-View Stereo on Consistent Face Topology

Download a PDF overview.

We present a multi-view stereo reconstruction technique that directly produces a complete high-fidelity head model with consistent facial mesh topology. While existing techniques decouple shape estimation and facial tracking, our framework jointly optimizes for stereo constraints and consistent mesh parameterization. Our method is therefore free from drift and fully parallelizable for dynamic facial performance capture. We produce highly detailed facial geometries with artist-quality UV parameterization, including secondary elements such as eyeballs, mouth pockets, nostrils, and the back of the head. Our approach consists of deforming a common template model to match multi-view input images of the subject, while satisfying cross-view, cross-subject, and cross-pose consistencies using a combination of 2D landmark detection, optical flow, and surface and volumetric Laplacian regularization. Since the flow is never computed between frames, our method is trivially parallelized by processing each frame independently. Accurate rigid head pose is extracted using a PCA-based dimension reduction and denoising scheme. We demonstrate high-fidelity performance capture results with challenging head motion and complex facial expressions around eye and mouth regions. While the quality of our results is on par with the current state-of-the-art, our approach can be fully parallelized, does not suffer from drift, and produces face models with production-quality mesh topologies.

Our objective is to warp a common template model to a different person in arbitrary poses and different expressions while ensuring consistent anatomical matches between subjects and accurate tracking across frames. The key challenge is to handle the large variations of facial appearances and geometries, as well as the complexity of facial expression and large deformations. We propose an appearance-driven mesh deformation approach that produces intermediate warped photographs for reliable and accurate optical flow computation. Our approach effectively avoids image discontinuities and artifacts often caused by methods based on synthetic renderings or texture reprojection.

Practical Multispectral Lighting Reproduction

Download a PDF overview.

We present a practical framework for reproducing omnidirectional incident illumination conditions with complex spectra using an LED sphere with multispectral LEDs. For lighting acquisition, we augment standard RGB panoramic photography with one or more observations of a color chart. We solve for how to drive the LEDs in each light source to match the observed RGB color of the environment and to best approximate the spectral lighting properties of the scene illuminant. Even when solving for non-negative intensities, we show that accurate illumination matches can be achieved with as few as four or six LED spectra for the entire ColorChecker chart for a wide gamut of incident illumination spectra.

A significant benefit of our approach is that it does not require the use of specialized equipment (other than the LED sphere) such as monochromators, spectroradiometers, or explicit knowledge of the LED power spectra, camera spectral response curves, or color chart reflectance spectra. We describe two useful and easy to construct devices for multispectral illumination capture, one for slow measurements of detailed angular spectral detail, and one for fast measurements with coarse spectral detail.

We validate the approach by realistically compositing real subjects into acquired lighting environments, showing accurate matches to how the subject would actually look within the environments, even for environments with mixed illumination sources, and demonstrate real-time lighting capture and playback using the technique.
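
To sketch the core solving step in code (with fabricated placeholder measurements rather than real data), the snippet below poses the LED-driving problem as non-negative least squares: given the chart’s RGB response under each LED spectrum and the chart’s observed RGB values in the target environment, solve for non-negative LED weights that best reproduce the illuminant.

```python
import numpy as np
from scipy.optimize import nnls

# Assumed measurements (placeholders): RGB values of the 24 ColorChecker patches
# observed under the target illumination, flattened to a 72-vector.
target = np.random.rand(24 * 3)

# Basis matrix: column j holds the flattened patch RGB values produced when the
# sphere is lit by LED spectrum j alone (e.g., red, green, blue, white, amber, cyan).
num_led_spectra = 6
basis = np.random.rand(24 * 3, num_led_spectra)

# Solve min || basis @ w - target ||^2 subject to w >= 0; w gives the relative
# drive level for each LED spectrum that best reproduces the scene illuminant.
weights, residual = nnls(basis, target)
print("non-negative LED weights:", weights)
print("fit residual:", residual)
```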

Augmented Reality and Virtual Reality to Enhance US Military Training

AR Post revisits important conversations about the use of mixed reality in the Army held during the 2017 Body Computing Conference.

Read the full article here.

Oscars: How Top VFX Pros Brought Baby Groot, Wonder Woman’s Golden Lasso to Life

Hollywood Reporter runs down the technology behind 2017 blockbusters. Read the full article to learn more about ICT’s involvement in films like Logan, Valerian and Blade Runner 2049.

Immerse Yourself in New Worlds With Augmented and Virtual Reality

IEEE features a piece on its IEEE Transmitter platform and how augmented and virtual reality can be used to explore outer space and to treat medical conditions. To get a closer look at an archeological dig or to take a trip to Mars, visit the immersive experiences on IEEE Transmitter. There you can learn about the many applications of augmented and virtual reality, including how they’re being used for health care.

To continue reading, visit The Institute, IEEE’s news source.

324 – Research Assistant, Human-Robot Dialogue

Project Name
Human-Robot Dialogue

Project Description
ICT has several projects involving applying natural language dialogue technology developed for use with virtual humans to physical robot platforms. Tasks of interest include remote exploration, joint decision-making, social interaction, games, and language learning. Robot platforms include humanoid (e.g. NAO) and non-humanoid flying or ground-based robots.

Job Description
This internship involves participating in the development and evaluation of dialogue systems that allow physical robots to interact with people using natural language conversation. The student intern will be involved in one or more of the following activities: 1. Porting language technology to a robot platform, 2. Designing tasks for human-robot collaborative activities, 3. Programming the robot for such activities, or 4. Using a robot in experimental activities with human subjects.

Preferred Skills

Experience with one or more of:

  • Using and programming robots
  • Dialogue systems, computational linguistics
  • Multimodal signal processing, machine learning

323 – Research Assistant, Conversations with Heroes and History

Project Name
Conversations with Heroes and History

Project Description
ICT’s time-offset interaction technology allows people to have natural conversations with videos of people who have had extraordinary experiences and learn about events and attitudes in a manner similar to direct interaction with the person. Subjects will be determined at the time of the internship but might include Holocaust Survivors (as part of the New Dimensions in Testimony Project), or Army heroes, or others.

Job Description
The intern will assist with developing, improving, and analyzing the systems. Tasks may include running user tests, analysis of content and interaction results, and improvements to the systems. The precise tasks will be determined based on the skills and interests of the selected applicant, as well as the demands of the project during the time of the internship.

Preferred Skills

  • Very good spoken and written English (native or near-native competence preferred)
  • General computer operating skills (some programming experience desirable)
  • Experience in one or more of the following:
    1. Interactive story authoring & design
    2. Linguistics, language processing
    3. A related field, such as museum-based informal education

322 – Research Assistant, Extending Dialogue Interaction

Project Name
Extending Dialogue Interaction

Project Description
The project will involve investigating techniques to go beyond the current state of the art in human-computer dialogue, which mainly focuses either on a system chatting with a single person or on assisting a person with accomplishing a single goal. The project will involve investigation of one or more of the following topics: consideration of multiple goals in dialogue, multi-party dialogue (with more than two participants), multi-lingual dialogue, multi-platform dialogue (e.g., VR and phone), automated evaluation of dialogue systems, or extended and repeated interaction with a dialogue system.

Job Description
The student intern will work with the Natural Language research group (including professors, other professional researchers, and students) to advance one or more of the research areas described above. If the student has a particular goal or related work at their home institution, they should briefly describe this in the application letter. Specific activities will depend on the project and the skills and interests of the intern, but will include one or more of the following: programming new dialogue or evaluation policies, annotating dialogue corpora, or testing with human subjects.

Preferred Skills

  • Some familiarity with dialogue systems or natural language dialogue
  • Either programming ability or experience with statistical methods and data analysis
  • Ability to work independently as well as in a collaborative environment

321 – Programmer, Advancing Multisense Capabilities to Support Scalable Multiplatform Virtual Human Applications

Project Name
Advancing Multisense Capabilities to Support Scalable Multiplatform Virtual Human Applications

Project Description
This project will develop, test, and implement a scalable multimodal sensing platform that will enable novel research endeavors and enhance existing prototypes. The advantage of this software architecture project will be seen in its ability to easily connect automatic behavioral sensing and user state inference capabilities with any virtual human and non-virtual human application using a microphone, basic webcam, and other computer vision sensors (e.g., Kinect, Intel RealSense, Primesense).

Job Description
The student intern will assist in developing health applications that use motion tracking and body sensing by providing programming and testing. The student intern will work with Kinect, Intel RealSense, and other sensing devices and integrate their functionality into the software.

Preferred Skills

  • Unity 3D
  • C#
  • C++
  • Signal Processing Algorithms
  • Computer Vision
  • Windows Programming

319 – Research Assistant, Analytic Projection for Authoring and Profiling of Social Simulations

Project Name
Analytic Projection for Authoring and Profiling of Social Simulations

Project Description
The Social Simulation Lab works on modeling and simulation of social systems from small group to societal level interactions, as well as data-driven approaches to validating these models. Our approach to simulation relies on multiagent techniques where autonomous, goal-driven agents are used to model the entities in the simulation, whether individuals, groups, organizations, etc.

Job Description
The research assistant will investigate automated methods for building agent-based models of human behavior. The core of the task will be developing and implementing algorithms that can analyze human behavior data and find a decision-theoretic model (or models) that best matches that data. The task will also involve using those models in simulation to further validate their potential predictive power.
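
As a toy illustration of the general idea of fitting a decision-theoretic model to observed behavior (not the lab’s actual framework), the Python sketch below fits the rationality parameter of a softmax choice model to invented choice data by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical data: utilities of 3 options on each trial, and the option a
# person actually chose. A real pipeline would derive utilities from an agent model.
utilities = np.array([[1.0, 0.2, 0.5],
                      [0.1, 0.9, 0.3],
                      [0.4, 0.4, 0.8],
                      [0.7, 0.1, 0.6]])
choices = np.array([0, 1, 2, 0])

def neg_log_likelihood(beta):
    """Negative log-likelihood of the choices under a softmax (Boltzmann) policy
    with rationality parameter beta: P(a) is proportional to exp(beta * U(a))."""
    logits = beta * utilities
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(choices)), choices].sum()

result = minimize_scalar(neg_log_likelihood, bounds=(0.01, 50.0), method="bounded")
print("fitted rationality parameter beta:", result.x)
```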

Preferred Skills

  • Knowledge of multi-agent systems, especially decision-theoretic models like POMDPs.
  • Experience with Python programming.

318 – Programmer, Body-tracking for VR / AR

Project Name
Body-tracking for VR / AR

Project Description
The lab is developing a lightweight 3D human performance capture method that uses very few sensors to obtain a highly detailed, complete, watertight, and textured model of a subject (a clothed human with props) that can be rendered properly from any angle in an immersive setting. Our recordings are performed in unconstrained environments and the system should be easily deployable. While we assume well-calibrated high-resolution cameras (e.g., GoPros), synchronized video streams (e.g., Raspberry Pi-based controls), and a well-lit environment, any existing passive multi-view stereo approach based on sparse cameras would significantly underperform dense ones due to challenging scene textures, lighting conditions, and backgrounds. Moreover, much less coverage of the body is possible when using small numbers of cameras.

Job Description
We propose a machine learning approach and address this challenge by posing 3D surface capture of human performances as an inference problem rather than a classic multi-view stereo task. The intern will work with researchers to demonstrate that massive amounts of 3D training data can be used to infer visually compelling and realistic geometries and textures in unseen regions. Our goal is to capture clothed subjects (uniformed soldiers, civilians, props and equipment, etc.), which results in an immense amount of appearance variation, as well as highly intricate garment folds.

Preferred Skills

  • C++, OpenGL, GPU programming; operating systems: Windows and Ubuntu; strong math skills
  • Experience with computer vision techniques: multi-camera stereo, optical flow, facial feature detection, bilinear morphable models, texture synthesis, Markov random fields

317 – Programmer, Immersive Virtual Humans for AR / VR

Project Name
Immersive Virtual Humans for AR / VR

Project Description
The Vision and Graphics Lab at ICT pursues research and works in production to perform high-quality facial scans for Army training and simulations, as well as for VFX studios and game development companies. Research into how machine learning can be used to aid the creation of such datasets using single images is one of the lab’s most recent focuses. This requires large amounts of data, more than can be achieved using only raw light stage scans. We are currently working on software to aid both in visualization during the production pipeline and in producing images as training data for learning algorithms. The goal is to use diffuse albedo maps to learn displacement maps: after training, we can synthesize a high-quality displacement map given a flat-lit texture map.

Job Description
The intern will assist the lab in developing an end-to-end approach for 3D modeling and rendering using deep neural network-based synthesis and inference techniques. The intern should understand computer vision techniques and have some experience with deep learning algorithms, as well as knowledge of rendering, modeling, and image processing. Work may also include researching hybrid tracking of high-resolution dynamic facial details and high-quality body performance for virtual humans.
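
Purely as an illustrative sketch of this kind of image-to-image inference (not the lab’s actual model or training data), a tiny fully convolutional network in TensorFlow/Keras that maps an RGB albedo texture to a one-channel displacement map might look like this:

```python
import tensorflow as tf

def albedo_to_displacement_model(size=256):
    """Tiny fully convolutional encoder-decoder mapping an RGB albedo texture to a
    one-channel displacement map. Purely illustrative; a production model would be
    far deeper and trained on scan-derived texture/displacement pairs."""
    inputs = tf.keras.Input(shape=(size, size, 3))
    x = tf.keras.layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
    outputs = tf.keras.layers.Conv2D(1, 3, padding="same")(x)   # displacement values
    return tf.keras.Model(inputs, outputs)

model = albedo_to_displacement_model()
model.compile(optimizer="adam", loss="mae")   # L1 loss on displacement maps
# model.fit(albedo_textures, displacement_maps, batch_size=4, epochs=100)
```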

Preferred Skills

  • C++, engineering math, physics and programming, OpenGL / Direct3D, GLSL / HLSL, Unity3D
  • Python, GPU programming, Maya, Octane render, svn/git, strong math skills
  • Knowledge of modern rendering pipelines, image processing, rigging

316 – Programmer, Real-time Rendering for Virtual Humans

Project Name
Real-time Rendering for Virtual Humans

Project Description
The Vision and Graphics Lab at ICT pursues research and works in production to perform high-quality facial scans for Army training and simulations, as well as for VFX studios and game development companies. Research into how machine learning can be used to aid the creation of such datasets using single images is one of the lab’s most recent focuses. This requires large amounts of data, more than can be achieved using only raw light stage scans. We are currently working on software to aid both in visualization during the production pipeline and in producing images as training data for learning algorithms. The goal is a feature-rich, real-time renderer that produces highly realistic renderings of humans scanned in the light stage.

Job Description
The intern will work with lab researchers to develop features in the rendering pipeline. This will include research and development of the latest techniques in physically based real-time character rendering. Ideally, the intern would be familiar with subsurface scattering techniques, hair rendering, parallax mapping, and 3D modeling and reconstruction.

Preferred Skills

  • C++, engineering math, physics and programming, OpenGL / Direct3D, GLSL / HLSL, Unity3D
  • Python, GPU programming, Maya, Octane render, svn/git, strong math skills
  • Knowledge of modern rendering pipelines, image processing, rigging

315 – Research Assistant, Mixed Reality Techniques and Technologies

Project Name
Mixed Reality Techniques and Technologies

Project Description
The ICT Mixed Reality Lab (MxR) researches and develops the techniques and technologies to advance the state-of-the-art for immersive virtual, mixed, and augmented reality experiences. With the guidance of the principal investigators (Dr. Evan Suma Rosenberg and Dr. David Krum), students working in the lab will help to design, create, and evaluate technological prototypes and/or conduct experiments to investigate specific research questions related to human-computer interaction. The specific project will be determined based on interviews with potential candidates, and selected interns will be matched to the projects that best align with their interests and skillset. For summer 2018, particular projects of interest include immersive decision making, EEG data from VR users, 3D interaction techniques, augmented reality user interface prototyping, and interaction between multiple users in shared mixed reality environments.

Job Description
Duties include brainstorming, rapid prototyping of novel techniques, developing virtual environments using the Unity game engine, running user studies, and analyzing experiment data. Some projects may include programming (C++, C#, Python, Unity) or fabrication (3D design and 3D printing).

Preferred Skills

  • Development experience using game engines such as Unity
  • Prior experience with virtual and/or augmented reality technology
  • Programming in C++, C#, or similar languages
  • Familiar with experimental design and user study procedures
  • Prior experience with rapid prototyping equipment, such as 3D printers, laser cutters, etc. (optional)

312 – Programmer, One World Terrain

Project Name
One World Terrain

Project Description
One World Terrain (OWT) is an applied research project designed to assist the DoD in creating the most realistic, accurate, and informative representations of the physical and non-physical landscape. Part of the goal of the Army National Simulation Center’s Synthetic Training Environment (STE) concept is to help establish a next-generation government/industry terrain dataset for modeling and simulation (M&S) hardware and software for training and operational use.

The project seeks to:
-Construct a single 3D geospatial database for use in next-generation simulations and virtual environments
-Utilize commercial cloudfront solutions for storing and serving geospatial data
-Procedurally recreate 3D terrain using drones and other capturing equipment
-Reduce the cost and time for creating geo-specific datasets for M&S

Job Description
The programmer will work with the OWT technical lead in support of recreating digital 3D global terrain capabilities that replicate the complexities of the operational environment for M&S.

Preferred Skills

  • Interest/experience in photogrammetry
  • Interest/experience with geographic information system applications and datasets
  • Unity/WebGL

311 – Research Assistant, Exploring Narrative for Immersive Experience Design

Project Name
Exploring Narrative for Immersive Experience Design

Project Description
The ICT MxR Lab researches and develops the techniques and technologies to advance the state-of-the-art for immersive virtual reality and mixed reality experiences. With the guidance of the principal investigators (Todd Richmond and David Nelson), students working in the lab will assist in creating interactions and experiences rooted in narrative principles, in order to help inform the design and implementation of more effective and compelling immersive training scenarios and applications.

Job Description
Duties will include analyzing immersive experience design to determine what is needed to create rich immersive scenarios, as well as conceptualizing, and possibly rapid prototyping, novel interactions and techniques in immersive environments using the Unity game engine.

Preferred Skills

  • Development experience using game engines such as Unity
  • Prior experience with virtual reality technology or 3D/touch interfaces

310 – Research Assistant, Virtual Human Interviewer

Project Name
Virtual Human Interviewer

Project Description
Would you tell a computer something you wouldn’t tell another person? Our research finds that people are more comfortable talking with a virtual human interviewer than with a real human interviewer. This happens in a variety of contexts: interviews about mental health, financial status, and even when people are learning how to negotiate. Next, we need to understand the boundary conditions (when and for whom this happens). For example, it might occur only in contexts where people particularly fear being judged, or mostly among people who feel concerned about what others think of them. This research will address these important questions to help us better use technology to let people talk about things they wouldn’t tell other people.

Job Description
The research assistant will work side by side with the lead researchers on this project. They will help to design and implement the next steps in this research project in direct collaboration with the lead researchers. With guidance from our research staff, they will also help to run the study and work with the team to learn how to process and analyze the data. If appropriate, students will also be actively involved in manuscript writing for this research project.

Preferred Skills

  • Experimental design
  • Running user studies
  • Computer-human or computer-mediated interaction

308 – Research Assistant, Character Animation and Simulation Group

Project Name
Individualized Models of Motion and Appearance

Project Description
Digital characters are an important part of entertainment, simulations, and digital social experiences. Characters can be designed to emulate or imitate human-like (and non-human-like) behavior. However, humans are very complicated entities, and in order to create a convincing virtual human, it is necessary to model various elements, such as human-like appearance, human-like behaviors, and human-like interactions. 3D characters can fail to be convincing representations because of improper appearance, improper behavior, or improper reactions. The goal of this internship is to advance the state-of-the-art in character simulation by improving or adding aspects of a digital character that would make it a more convincing representation of a real person.

Job Description
Research, develop, and integrate methods for virtual characters that improve the fidelity, interaction, or realism of the characters. Design or implement algorithms from research papers and integrate them into an animation/simulation system (SmartBody).

Preferred Skills

  • C++
  • Computer graphics and animation knowledge
  • Research in character/animation/simulation/human modeling

307 – Research Assistant, Motion Interpretation and Generation Using Deep Learning

Project Name
Real-time Behavior Interpretation

Project Description
The Real-time Behavior Interpretation project aims to model how people make sense of the behavior of others, inferring their plans, goals, intentions, emotions, and social situations from observations of their actions. This project uses as input short animated movies involving simple shapes, and outputs English narratives that correspond to human-like interpretations of the situation. To accomplish this task, the project employs contemporary deep-learning approaches to action recognition, interpretation using probability-ordered logical abduction, and natural language generation technologies.

Job Description
2018 summer interns on the RBI project will focus on applying deep learning techniques to the problem of action recognition and motion generation, using crowdsourced datasets of the motion trajectories of simple shapes.
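
As a schematic example of the kind of model this involves (the clip length, features, and action labels below are assumptions, not the project’s dataset), a small LSTM classifier in TensorFlow/Keras over shape-motion trajectories could look like:

```python
import tensorflow as tf

TIMESTEPS = 50        # frames per clip (assumed)
FEATURES = 4          # (x, y) positions of two shapes per frame (assumed)
NUM_ACTIONS = 5       # e.g., chase, flee, follow, block, wander (hypothetical labels)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, FEATURES)),
    tf.keras.layers.LSTM(64),                          # encodes the motion trajectory
    tf.keras.layers.Dense(NUM_ACTIONS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(trajectories, action_labels, epochs=20)    # crowdsourced clips + labels
```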

Preferred Skills

  • Experience using Tensorflow, including RNNs and CNNs
  • Web programming skills, particularly Javascript and SVG manipulation
  • Interest in animation and motion interpretation

306 – Research Assistant, Voice-controlled Interactive Narratives for Training (VIANT)

Project Name
Voice-controlled Interactive Narratives for Training (VIANT)

Project Description
The VIANT project aims to build interactive training experiences that are audio-only, where fictional scenario content is presented as produced audio clips with professional voice actors, and player actions are accepted as spoken input using automated speech recognition.

Job Description
2018 summer interns on the VIANT project will create at least three complete interactive audio narratives for different training objectives. This will include researching the training domain, creating a post-test evaluation metric, designing a scenario structure, and writing a fictional script that can be used to produce audio content.

Preferred Skills

  • Excellent fiction writing abilities
  • Familiarity with interactive fiction approaches
  • Experience with film / radio / television production

305 – Research Assistant, Non-verbal Communication During Joint Tasks

Project Name
Non-verbal Communication During Joint Tasks

Project Description
How do you know what someone is going to do when they don’t say a word? This research explores how we communicate our intentions non-verbally. Nonverbal behaviors are an important channel of communication, and this is especially true in the case of negotiations or joint tasks, where we might not know what the other person is going to do and our outcome depends on it. Studies have shown that humans are able to recognize non-verbal cues that signal others’ intentions. But what exactly are those cues that humans pick up? And how could we identify and track them automatically? This research tackles these important questions.

Job Description
The research assistant will work side by side with the lead researchers on this project. They will help to design and implement the next steps in this research project in direct collaboration with the lead researchers. With guidance from our research staff, they will also help to run the study and work with the team to learn how to process and analyze the data. If appropriate, students will also be actively involved in manuscript writing for this research project.

Preferred Skills

  • Experimental design
  • Running user studies
  • Computer-human or computer-mediated interaction

304 – Programmer, Integrated Virtual Humans Programmer

Project Name
Integrated Virtual Humans

Project Description
The Integrated Virtual Humans (IVH) project seeks to create a wide range of virtual human systems by combining various research efforts within USC and ICT into a general Virtual Human Architecture. These virtual humans range from relatively simple, statistics-based question/answer characters to advanced cognitive agents that are able to reason about themselves and the world they inhabit. Our virtual humans can engage with real humans and each other, both verbally and nonverbally, i.e., they are able to hear you, see you, use body language, talk to you, and think about whether or not they like you. The Virtual Humans research at ICT is widely considered one of the most advanced in its field and brings together a variety of research areas, including natural language processing, nonverbal behavior, vision perception and understanding, task modeling, emotion modeling, information retrieval, knowledge representation, and speech recognition.

Job Description
IVH seeks an enthusiastic, self-motivated programmer to help further advance and iterate on the Virtual Human Toolkit. Additionally, the selected intern will research and develop potential tools to be used in the creation of virtual humans. Working within IVH requires a solid understanding of general software engineering principles and distributed architectures. The work touches on a variety of Computer Science areas, including Artificial Intelligence and Human-Computer Interaction. Given the scope of the Virtual Human Architecture, the ability to quickly learn how to use existing components and develop new ones is essential.

Preferred Skills

  • Fluent in C++, C#, or Java
  • Fluent in one or more scripting languages, such as Python, TCL, LUA, or PHP
  • Experience with Unity
  • Excellent general computer skills
  • Background in Artificial Intelligence is a plus

303 – Research Assistant, Personal Assistant for Life Long Learning (PAL3)

Project Name
PAL3

Project Description
This research involves a mobile-based learning technology that connects many different technologies from ICT (e.g., virtual humans, NLP dialog systems, affect detection, modeling user knowledge). PAL3 is a system that monitors learning and provides intelligent recommendations and explanations. This project has many different pieces, which include gamification design, data analytics, user interface programming, interactive dialogs (e.g., with a virtual mentor), and a variety of other opportunities. The PAL3 project is unique and pushes the boundaries on some key research topics that will be influential for learning technology in years to come.

Job Description
The ideal candidate would be a quick learner and a self-starter, with a high level of technical competence in AI, UI design, or machine learning & statistics. This research involves a mid-sized team, so there are substantial opportunities for engaging with senior programmers, researchers, and students from other fields. This project also has opportunities for contributing to peer-reviewed publications.

Preferred Skills

  • Programming (C#, Python, JavaScript)
  • UI Design or Game Design (Unity, React.js, etc.)
  • Machine Learning / Statistics

302 – Programmer, Multi-Agent Architecture Programmer

Project Name
GIFT Multi-Agent Architecture

Project Description
This project involves contributing to an open source repository which leverages AI to improve learning. This research integrates a variety of different technologies, with an emphasis on agent-oriented programming and web services. The goal of this work is to make it easier to plug-and-play new modules for a learning technology, and to help design technologies that learn over time.

Job Description
This research is highly technically oriented, with an emphasis on web services and systems integration (e.g., working on a large system with many moving pieces). The ideal candidate for this work has strong programming skills and is looking to push them to the limit with complex problems involving messaging patterns, distributed processing, and software-as-a-service infrastructure. Some opportunities for user experience design are also available for authoring tools that make configuring web services easier.

Preferred Skills

  • Strong programming fundamentals (ideally in JavaScript, Python, Java, etc.)
  • Web service programming
  • Agent-oriented programming

301 – Research Assistant, Learning Analytics Research

Project Name
SMART-E

Project Description
This project is building a service for generalized engagement metrics, with the goal of applying this service to a variety of different learning and training systems (both in real time and to analyze already-collected data).

Job Description
This position offers opportunities to research and apply machine learning, statistics, and web service programming. As this research will involve a variety of analytics types, there will be different opportunities depending on skill level and experience (e.g., undergraduate, masters, PhD). Finally, interns will be expected to do work and documentation that contributes to a publication in a peer-reviewed venue.

Preferred Skills

  • Machine Learning
  • Programming (Python / R / JS / C# preferred, but not required)
  • Statistics

Interactive Exhibit Lets Visitors ‘Talk’ with Nanjing Massacre Survivor in Mandarin

The Nanjing Massacre Memorial Hall in China today debuted its permanent exhibition of New Dimensions in Testimony — interactive survivor testimony technology developed by USC Shoah Foundation — The Institute for Visual History and Education. The event marks the 80th anniversary of the massacre in Nanjing.

This is the first permanent museum exhibition of New Dimensions in Testimony, or NDT, outside the United States and the first exhibit anywhere featuring the Mandarin-language testimony of Madame Xia Shuqin, a survivor of the massacre as a child. She is the only non-Holocaust survivor who has been interviewed for NDT.

Continue reading the full article in USC News.

First Mandarin-Language New Dimensions in Testimony Exhibit to Premiere at Nanjing Massacre Memorial Hall

As part of the commemoration of the 80th anniversary of the Nanjing Massacre, USC Shoah Foundation will introduce its first Mandarin-language New Dimensions in Testimony interactive survivor testimony at the Nanjing Memorial Hall in Nanjing, China, on Dec. 13.

It is the first permanent exhibition of NDT outside the United States and features Madame Xia Shuqin, a child survivor of the Nanjing Massacre. She is the only non-Holocaust survivor who has been interviewed for the project so far.

The Tianfu Bank and Tianfu Group generously funded the creation of this NDT testimony.

The exhibit is a featured part of the newly reconstructed Nanjing Massacre Memorial Hall’s core exhibit space. NDT uses groundbreaking natural language software to allow audiences to interact with the recorded image of a genocide survivor, who responds to questions in real time, powered by complex algorithms providing realistic conversation.

Xia traveled to Los Angeles in 2016 to film the NDT interview. The interview took five days and was filmed on the 360-degree “light stage” at the USC Institute for Creative Technologies. High definition cameras and lights captured the interview from all angles as Xia answered hundreds of questions about her life before, during and after the Nanjing Massacre.

Continue reading the full press release in Markets Insider, part of Business Insider, here.

Q&A with Skip Rizzo

Beyond Standards, the blog of the IEEE Standards Association, sits down with ICT’s Dr. Skip Rizzo to discuss everything from how mixed reality differs from virtual and augmented realities to how these technologies are used in his line of work.

Read the full Q&A here.

CVMP

CVMP
December 11-12, 2017
London, England
Keynote Presentation

A New Virtual PTSD Screening is Helping Veterans Disclose Symptoms

Medical Training Magazine features Ellie and how AI can be used as a tool to detect symptoms of PTS.

Read the full article here.

The Near Future of AI Media

CableLabs is developing artificial intelligence (AI) technologies that are helping pave the way to an emerging paradigm of media experiences. We’re identifying architecture and compute demands that will need to be handled in order to deliver these kinds of experiences, with no buffering, over the network.

Click here to continue reading about the CableLabs and USC Institute for Creative Technologies partnership.

2017 Neural Information Processing Systems (NIPS)

2017 Neural Information Processing Systems (NIPS)
December 4-9, 2017
Long Beach, CA
Presentations

Virtual Reality to Treat PTSD: Interview with Todd Richmond, Director of USC’s Mixed Reality Lab

While PTSD is a significant issue for many of those serving in the military and others who work in traumatic situations, it also affects huge numbers of ordinary people who experience traumatic events such as assaults or natural disasters. Nearly 24 million Americans suffer from PTSD at any given time, and women are twice as likely as men to develop the condition. PTSD can sometimes be overlooked and is reportedly underdiagnosed, but anxiety disorders still cost society approximately $40 billion per year in treatment costs and loss of productivity.

A relatively new option for PTSD therapy involves virtual reality, with the goal of creating multisensory, immersive environments and experiences to treat the condition. The technique can be controlled by a clinician to suit a patient’s needs, and the results so far are promising.

Medgadget had the opportunity to ask Todd Richmond, Director of the Mixed Reality Lab at the University of Southern California and IEEE Member, some questions about the concept of using VR for PTSD and how it has worked so far.

Continue reading in medGadget.

Liquid Science: Virtual Reality

Red Bull TV’s series Liquid Science features GZA “The Genius” of Wu-Tang Clan as he travels to the USC Institute for Creative Technologies to try out VR prototypes in our Mixed Reality Lab and even create his own virtual avatar.

View the full episode here.

This New Robotic Avatar Arm Uses Real Time Haptics

According to Todd Richmond, IEEE Member and Director of USC’s Mixed Reality Lab, Internet of Actions (IoA) is a vision of digital technology becoming a more effective partner for humans as we navigate through our increasingly mixed-reality worlds.

“Technology development will need to move from the past and current focus on the “device” and the “capability” and more towards the human at the end of the technology,” adds Richmond.

Continue reading on Forbes.com.

Computing Clout Helps Hollywood Tap Potential of Immersive Content

In the worlds of AI, AR and VR, Dell delivers the power and reliability that can stitch complex data into a seamless experience.

Hollywood also stands to benefit from Dell’s non-entertainment work. Dell’s partnership with headset maker Meta and Ultrahaptics to develop AR shoe design systems for Nike has the potential to change the way production designers, set decorators and costume designers work. And its work with Albert “Skip” Rizzo at the USC Institute of Creative Technologies, to explore how VR can help service members deal with PTSD and autistic teenagers overcome the stress of job interviews, could one day be used by actors and writers for research and character development.

Continue reading on Variety.com.

SIGGRAPH ASIA

SIGGRAPH Asia
November 27-30, 2017
Bangkok, Thailand
Presentations

I/ITSEC

I/ITSEC 2017
November 27 – December 1, 2017
Orlando, FL
Presentations

Using VR-Based Psychotherapy for PTSD Helps Traditional Therapy Effects

The use of virtual reality technology is currently concentrated on gaming, but it has many other applications, including use in medical technology. VR may offer an alternative or even an accompaniment to traditional psychotherapy for post-traumatic stress disorder patients.

International Business Times sat down with ICT’s Todd Richmond for more insight into using VR as a tool in helping treat PTSD. Visit ibtimes.com to read the full article.

ICAT-EGVE 2017

ICAT-EGVE 2017
November 22-24, 2017
Adelaide, Australia
Presentations

Six USC Professors Named Fellows of Esteemed Scientific Society

Five USC scientists and one Keck School of Medicine of USC physician have been elected fellows of the American Association for the Advancement of Science, an honor awarded to AAAS members by their peers.

Founded in 1848, the nonprofit organization is the world’s largest general scientific society. The group began the AAAS Fellows tradition in 1874 and publishes the journal Science.

This year 396 members will be named fellows because of their scientifically or socially distinguished efforts to advance science or its applications. Of the six USC professors included, ICT’s Paul Rosenbloom, also a computer science professor at Viterbi, was recognized for his focus on the mechanisms that enable thought and how they combine to yield minds.

Read the full article in USC News.

Facing Down PTSD

“We tell patients it’s going to get harder before it gets easier. We’re not bullshitting anybody,” says Dr Albert ‘Skip’ Rizzo on the phone from his base at the University of Southern California. Rizzo is at the forefront of innovative research into how virtual reality ‘exposure therapy’ – exposing a patient to virtual reconstructions of a traumatic event – can be used to treat patients suffering from post-traumatic stress disorder (PTSD).

The foundations of this work are in the treatment of military veterans, with a ‘Virtual Vietnam’ developed in 1997 to help veterans still suffering with PTSD more than two decades after the end of the conflict. Virtual reality was first seriously looked at as a tool to treat terror-related PTSD victims four years later, in the month following the September 11 World Trade Center attacks.

Continue reading the full article in The Big Issue.

Confessions of a Technology Evangelist

ICT’s Todd Richmond contributes a piece to EdTech Digest about how VR and AR will (really) transform education to create a meaningful future.

Read the full article here.

VR Brings Dramatic Change to Mental Health Care

Skip Rizzo, associate director for medical virtual reality at the USC Institute for Creative Technologies, has been working with the U.S. Army on ways to use Virtual Reality (VR) to treat soldiers’ Post-Traumatic Stress Disorder for over a decade. His system, “Bravemind,” initially funded by the Department of Defense in 2005, can accurately recreate an inciting incident in a war zone, like Iraq, to activate “extinction learning,” which can deactivate a deep-seated “fight or flight” response, relieving fear and anxiety. “This is a hard treatment for a hard problem in a safe setting,” Rizzo told me. Together with talk therapy, the treatment can measurably relieve the PTSD symptoms. The Army has found “Bravemind” can also help treat other traumas like sexual assault.

Continue reading the full article in Huffington Post.

Building the World of ‘Blade Runner 2049’

Digital Media World covers the technology behind the blockbuster, including ICT’s Light Stage.

Read the full article on Digitalmediaworld.tv.

LA CoMotion

LA CoMotion
November 16-19, 2017
Los Angeles, CA
Panel Discussion

Siri is My Agony Aunt – but is Telling Big Tech My Innermost Feelings a Bad Idea?

In the same way we feel free to write what we really think in the pages of a book, when we discuss our innermost feelings we now tend to disclose more talking to AI than to humans, according to a study conducted by the Institute for Creative Technologies in Los Angeles.

Continue reading on TheGuardian.com.

Dipping a Toe in the US Synthetic Training Environment

In the last few years training grounds have become increasingly virtual but it’s only the beginning. A new Synthetic Training Environment, built to fuse the real world with the virtual world into one giant training platform, could change soldier training forever.

Read more about the Synthetic Training Environment in Army-Technology.

SIGGRAPH Asia Tech Papers Preview: 3D Avatars From a Single Photo

Ian Failes of VFX Blog interviews Hao Li in anticipation of SIGGRAPH Asia.

Read the full article featuring what to expect from Hao Li and team here.

3 Ways AR/VR Are Improving Autonomous Vehicles

By Todd Richmond, Director, USC Mixed Reality Lab
November 15, 2017

Virtual, Augmented and Mixed Reality (VAMR) aren’t just for games and entertainment, despite what many think. VAMR will touch every aspect of our lives, from how we shop for clothes to the way we consume media.

Autonomous vehicles, another one of the most exciting disruptive technologies, will also be impacted by VAMR in several ways. Here’s a look at three ways VAMR is improving autonomous vehicles.

Simulated Testing Ground
Mixed Reality (MR) Prototyping will provide a safe testing ground for autonomous vehicles, which are yet to be perfected. The Mixed Reality Lab at the University of Southern California (USC) has been using MR Prototyping to explore human-machine teaming; as of writing, the lab has successfully paired people with autonomous drones.

Traditional research, design, and development would put real drones and real people in very real danger while algorithms and other aspects are sorted out. Instead, the USC lab has successfully virtualized all aspects of the pairing within a game engine – virtual drones and virtual humans, together, in virtual environments.

Within this simulation, the level of “reality” can vary, and geo-specific terrain can be used if desired. Many of the design parameters and algorithmic problems associated with human-drone pairing can be solved in the virtual space. Afterwards, reality gets “mixed,” with some of the elements (e.g. autonomous drones) flying in both the virtual and physical space, and the humans remaining exclusively in the virtual space. More of the problems are resolved, and then the system can go fully physical when the algorithms are well-behaved.

Testing human-drone pairing in a VAMR space doesn’t differ that much from testing autonomous vehicles in a wholly virtual space – engineers can test, tweak, and ultimately educate autonomous vehicles in a virtual space, removing all danger to humans. Additionally, since the virtual and physical can be tied together in real time, that allows for remote collaboration as well as connection to other virtual systems, further removing the danger to humans and improving testing outcomes.
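
A toy version of this idea, entirely illustrative and not the USC lab's actual code, is sketched below: a virtual drone steers toward a tracked person's position each simulation tick, so the pairing behavior can be tuned long before any hardware flies.

```python
# Toy sketch of human-drone pairing in a purely virtual space: a drone follows a
# person using a simple proportional controller. Illustrative only; the real lab
# work runs inside a game engine with far richer physics and sensing.

def step(drone, person, gain=0.2):
    """Move the drone a fraction of the way toward the person each tick."""
    return tuple(d + gain * (p - d) for d, p in zip(drone, person))

drone, person = (0.0, 0.0, 1.0), (5.0, 3.0, 1.5)
for tick in range(20):
    drone = step(drone, person)

print([round(c, 2) for c in drone])  # the drone has closed most of the gap
```

Once behavior like this is stable in simulation, the same controller can be pointed at a physical drone while the human stays virtual, which is the "mixing" step described above.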

Visual Displays Will Improve Situational Awareness
New forms of visual displays, combined with aural and haptic feedback, will be designed to improve driver situational awareness and increase safety; combining these with other active systems based on computer vision, such as lane-departure warning and auto-braking, promises lower accident and fatality rates.

There is perhaps a romantic notion of the traditional sedan dashboard evolving into something akin to an F-35 cockpit, which is actually not how autonomous vehicle interfaces should be designed. Pilots, especially those who fly fighter jets, are highly trained and are a far cry from the average driver in terms of reaction time, decision-making ability and much more. For normal drivers, less is probably more.

Portable Environment for Riders
AI will end up being a major player in autonomous vehicles, likely as a combination of highly optimized computer vision algorithms, next-generation path planning, and traffic flow monitoring and metering. In the case of fully autonomous vehicles, VAMR may end up as the “portable environment” for riders.

For example, as telepresence capabilities improve, the ability to “take a meeting” while riding to work may become a viable reality. While we run the risk of personal isolation, VAMR combined with autonomous travel could provide the productivity increase technology has long promised, while also providing a bit of an escape from mundane commutes. Additionally, VAMR combined with autonomous travel could supplant taking the traditional phone call while driving, which would significantly reduce accident and fatality rates.

Autonomous vehicles are upon the world, and there’s much to be excited about. But before we start hailing our automated Ubers and Lyfts, whizzing around cities in the backseat of a car with no driver, a lot of work needs to be done. VAMR will play a larger role than many think in making the (safe) driverless vehicles of tomorrow a reality.

Content and images via Robotics Trends.

USC Collaboration to Test VR for Stroke Treatment

The Neural Plasticity and Neurorehabilitation Laboratory at USC is exploring the intersection of virtual reality and neuroscience with a new program that aims to help stroke victims.

The laboratory aims to implement VR technology in health care under the acronym REINVENT, according to Sook-Lei Liew, head of the NPNL.

Liew’s laboratory is partnering with USC Institute of Creative Technologies’ MxR lab for the REINVENT project, which focuses on virtual reality and stroke rehabilitation.

“In all areas of human interaction with any kind of sophisticated technology, you want to develop things that can amplify human functioning and learning,” said Albert Rizzo, who works at the MxR lab.

The MxR lab is part of USC’s Institute for Creative Technologies and hosts a variety of research at the forefront of virtual development, with focuses on optical physical therapy and game apps.

Rizzo explained that in using VR as a medium for rehabilitation or as a tool for data collection, there are three measurements that must be considered.

Continue reading on DailyTrojan.com.

Intelligent Cognitive Assistants Workshop

Intelligent Cognitive Assistants Workshop
November 14-15, 2017
San Jose, CA
Panel Discussions

VR, 360-degree video help journalism students tell stories at USC Annenberg

USC News explores how VR is making its way into journalism and USC’s role in the cutting-edge technology.

Read the full story in USC News.

Victor Luo is Making NASA Cool for Coders

In a piece for Bloomberg about coding with NASA, ICT’s Skip Rizzo gives his perspective on using AR and VR to design virtual space shuttles in 3D and then assist astronauts on the real shuttles orbiting outside the atmosphere.

Read the full article on Bloomberg.com.

This USC Lab is Pioneering the Next Big Thing in Experience Design

Todd Richmond and his team at USC Institute for Creative Technologies are exploring techniques and technologies to improve the fluency of human-computer interactions.

Watch the full video segment on Fastcodesign.com.

Are Humans Actually More ‘Human’ than Robots?

In a recent report, the Pew Research Center found that Americans are more worried than they are enthusiastic about automation technologies when it comes to tasks that rely on qualities thought to be unique to humans, such as empathy. They’re concerned that, in lacking certain sensibilities, robots are fundamentally limited in their ability to replace humans at those jobs; they don’t, according to the report, trust “technological decision-making.”

Human drivers don’t seem all that “human” when it comes to thoughtful decision-making. Federal fatal-crash data show that despite reductions in the number of deaths due to distracted or drowsy driving, those related to other reckless behaviors—including speeding, alcohol impairment, and not wearing seatbelts—have continued to increase. Roughly 37,000 of last year’s fatal crashes were attributed to poor decision-making.

Humans aren’t necessarily better than robots at caregiving, either. The American Psychological Association in 2012 estimated that 4 million older Americans—or about 10 percent of the country’s elderly population—are victims of physical, psychological, or other forms of abuse and neglect by their caregivers, and that figure excludes undetected cases.

Nor do they inherently excel at interpersonal skills. Humans incessantly use “strategic emotions”—emotions that don’t necessarily reflect how they actually feel—to achieve social goals, protect themselves from perceived threats, take advantage of people, and adhere to work-environment rules. Strategic emotions can help relationships but, if they’re detectable, they can harm them, too.

As an example, Jonathan Gratch, the director of emotion and virtual human research at the University of Southern California’s Institute for Creative Technologies, pointed to customer-service representatives, who tend to follow a script when speaking with people. Because they rarely express genuine emotions, they aren’t, according to Gratch, “really being human.” In fact, these rules surrounding professional conduct make it easier to program machines to do that sort of work, especially when Siri and Alexa are already collecting data on how people talk, such as their intonations and speech patterns. “There’s this digital trace you can treat as data,” he said, referring to the scripts on which customer-service reps rely, “and machines learn to mimic what people do in those tasks.”

Read more in The Atlantic.

ICMI 2017 – 19th ACM International Conference on Multimodal Interaction

ICMI 2017 – 19th ACM International Conference on Multimodal Interaction
November 13-17, 2017
Glasgow, Scotland
Presentations

New Post-Traumatic Stress Disorder Treatments for Veterans Focus on Technology

It turns out that military veterans are willing to talk about stress. They just haven’t been getting access to the right confidantes and a comfortable setting.

Ellie is an example. Her ability to press veteran interview subjects to reveal information about their feelings and mental state worked better than the primary method used by the Department of Veterans Affairs.

Her secret? Ellie isn’t a bland questionnaire. She isn’t a human, either.

Developed by the University of Southern California’s Institute for Creative Technologies, Ellie is a virtual PTSD screening and diagnostic tool that provides patients with an anonymous, unrecorded interview session. A recent study of Ellie’s interactions with veterans showed that they are more willing to report symptoms of post-traumatic stress disorder to the program than through a traditional assessment method.

Read more on CNBC.com.

What Are Holographic Calls? Technology May Replace Voice Calling In The Future

Even though the technology seems to be pretty far-fetched right now, it has a possibility of becoming a consumer technology, according to Todd Richmond, IEEE member and director of the mixed reality lab at the Institute for Creative Technologies at the University of Southern California.

He outlined the challenges involved in an email to IBT.

“In our labs we have shared virtual environments between east and west coast using a combination of VR and AR (Vive and Hololens). For that to move into the widespread consumer market, there are technical and user experience challenges that need to be solved. Current AR/VR hardware is still rather clunky and is not particularly comfortable for long-term use. The technology needs to approach something closer to reading glasses or perhaps large sunglasses. Or projection (“hologram”) technology needs to improve (probably a combination of both),” he stated.

The biggest challenge for the technology, according to Richmond, is the portrayal of a person in VR and how these virtual environments will be navigated. If you want to text in such an environment, how do you draw letters?

Richmond says the technology could be available for commercial use around 2020; however, it might take a decade for it to become a consumer technology.

Read more on IBT.com.

Army Veteran, a USC Administrator, Uses Storytelling to Support the Military

By Ron Mackovich, USC News

Randall Hill ’78 was born on an Army base, and his life story revolves around the military.

“My dad was in the Army, and he encouraged me to pursue my application to West Point,” Hill recalled. “He said, ‘Why don’t you just see if you can get in?’”

Hill got in, graduated and served six years as a commissioned officer with assignments in field artillery and intelligence.

After earning a PhD in computer science from USC, Hill went on to become executive director of the Institute for Creative Technologies, where the entertainment and gaming fields converge with research to build training and simulation platforms. ICT’s interactive and virtual reality programs go beyond military training, helping veterans find jobs and cope with trauma from combat and sexual assault.

For Hill, it’s all about the story.

“We use the art of storytelling to support the military,” Hill said. “We’re in Hollywood. USC has the best cinema school, so we’re in the right place.”

Dedicated to authenticity
One of ICT’s newest interactive platforms is being rolled out at Fort Leavenworth to help victims of sexual assault.

“We interviewed a male soldier who suffered a sexual assault, and integrated that into an interactive media program using artificial intelligence,” Hill said. “Other soldiers can ask him questions, and we created a database that will generate the best response. They’re not just fact-based questions. It’s more like ‘What was your experience,’ and it comes back with a story.”

The interactive program grew in part from a project involving the USC Shoah Foundation – The Institute for Visual History and Education that created an interactive experience with a Holocaust survivor.

“They were intent on authenticity, so we brought that same authenticity to the sexual assault program,” Hill said. “Part of the goal is prevention because some sexual assaults happen during hazing. We want to expose this to say ‘hazing is not OK, it has long-term consequences, it has a huge impact on people’s lives.’”

Going forward, Hill plans to stick to the story as ICT develops new programs.

“Storytelling is one of the oldest ways people have communicated, and your brain lights up when you hear a story,” Hill said. “You always remember a good movie, even decades after you see it.”

AAAI 2017 Fall Symposium

AAAI 2017 Fall Symposium on A Standard of Model of the Mind
November 9-11, 2017
Arlington, VA
Presentations

DS2A

Download a PDF overview.

The Digital Survivor of Sexual Assault (DS2A) system allows Soldiers to interact with a digital Sexual Harassment/Assault Response and Prevention (SHARP) guest speaker and hear their stories. First-person stories are one of the most powerful ways people share information, connect with, and learn from each other. As part of the ongoing SHARP training, survivors of sexual assault often speak to large groups of Soldiers. Unfortunately, not every Soldier who would benefit from interacting with a survivor will have this opportunity. DS2A allows more soldiers to interact with a speaker and preserves the emotional impact of hearing about the speaker’s experience. Soldiers can interact with the digital survivor and hear the speaker’s stories as direct responses to their own questions.

DS2A is a powerful new tool for instructors at the Army SHARP Academy. The system enables new SHARP personnel, as well as selected Army leaders, to participate in conversations on SHARP topics through the lens of a survivor’s firsthand account. DS2A can play an important role in the prevention of sexual harassment and sexual assault by enabling new Sexual Assault Response Coordinators (SARCs) and Victim Advocates (VAs) to interact with a sexual assault survivor and hear the survivor’s stories in a non-confrontational environment. The experience may help SHARP professionals understand how to better support victims, and perform their prevention and response duties. It can also help Army leaders understand the impact that incidents of sexual assault and retaliation can have on an individual Soldier and unit readiness. The Army SHARP Academy plans to use DS2A in its resident courses of instruction, directed by instructors trained in the proper use of the system.

The DS2A system is based on the New Dimensions in Testimony (NDT) project, a collaborative effort between the USC Shoah Foundation and USC ICT. Development of DS2A leveraged research technologies previously created for the Department of Defense under the direction of the Army Research Lab Simulation and Training Technology Center (ARL STTC). These technologies include the Light Stage, to facilitate recordings of survivors, and natural language dialogue technology to enable conversational engagement with survivors. DS2A is the first system of its kind to be used in an Army classroom.
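
At a very high level, a conversational testimony system of this kind matches a visitor's question to the most relevant pre-recorded answer. The sketch below is only a toy illustration of that retrieval idea; the clip names and the word-overlap matching are invented here, and the actual NDT/DS2A dialogue technology is far more sophisticated.

```python
# Illustrative sketch of the retrieval idea behind a conversational testimony
# system: match a spoken question to the closest pre-recorded answer. This is a
# toy; the real NDT/DS2A natural language technology is far more sophisticated.

def tokenize(text):
    return set(text.lower().split())

def best_response(question, recorded):
    """Pick the recorded clip whose indexed question overlaps most with the input."""
    q = tokenize(question)
    return max(recorded, key=lambda item: len(q & tokenize(item["question"])))

clips = [
    {"question": "what was your experience like", "clip": "experience.mp4"},
    {"question": "what helped you recover afterwards", "clip": "recovery.mp4"},
]
print(best_response("Can you tell me what your experience was like?", clips)["clip"])
```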

AECT 2017

AECT 2017
November 6-11, 2017
Jacksonville, FL
Presentations

How Technology is Keeping Holocaust Testimony Alive

The life-size image of Pinchas Gutter on a video screen, fidgeting, blinking and tapping his foot, seemed present and alive in the way portraits do in the magical world of Harry Potter. The Holocaust survivor, who lives in Toronto, was nowhere near the Museum of Jewish Heritage on the day I visited, but by stepping up to a podium, clicking on a mouse and speaking into a microphone, I was able to ask Gutter questions. His image responded with answers—speech quirks, pauses and gestures included. He spoke to me about religion and sports; he shared his favorite Yiddish joke; I hear he sometimes sings. Gutter also told me that he was a happy child until September 1, 1939, when Hitler’s armies invaded Poland and World War II began. Soon after, his father was taken away and beaten nearly to death. After that, he said, “I knew that life wouldn’t be the same.”

Read more on MSN.com.

The Ultimate Escapism

Is VR addiction really something we need to worry about now? Currently, eye strain, cybersickness, and a lack of sense of touch in VR make it far less immersive than portrayed in sci-fi. You can’t yet plug in for hours and hours. “It’s not at a holodeck level yet. I don’t think you’re seeing a public health challenge,” says Albert “Skip” Rizzo, director of medical virtual reality at USC’s Institute for Creative Technologies. In fact, experts have been speculating and researching how VR technologies could be used to treat addiction. For example, alcoholics immersed in a virtual bar or a virtual party can be taught to manage their cravings and develop coping and refusal skills so that they can prevent a relapse when they’re near the real thing.

Continue reading on Slate.com.

Hologram Technology in Holocaust Museum Exhibit Immortalizes Survivors’ Stories

Illinois Holocaust Museum CEO Susan Abrams said the museum helped advance the project — New Dimensions in Testimony — a collaboration between the USC Shoah Foundation and the USC Institute for Creative Technologies.

“The survivors were filmed in a studio in L.A. of which there are only three in the world,” Abrams said. “The survivors were surrounded by over a hundred cameras.”

She called the technology “future-proof” meaning that one day the recordings may be able to be shown in a 360-degree venue as technology advances. But for now, survivor testimonies are expressed through a three-dimensional hologram that is as close to the real thing as technology gets, she said.

“It prepares us for the day when our survivors will not be here,” Abrams said. “Right now, the 60,000 students and educators who come through plus tens of thousands of general visitors have the incredible privilege to hear directly from a survivor.”

In addition to the holograms, the Take a Stand Center highlights 40 historical and contemporary “upstanders” who have fought against injustice and cruelty in various ways.

Continue reading on ChicagoTribune.com.

UX and the Psychology of Storytelling

Stories are effective because they appeal to a hardwired way that the human mind works. It’s our natural impulse to impose order and attach meaning to our observations.

In a 1944 psychology experiment, participants watched a short animated film in which three geometric figures — a large triangle, a small triangle, and a small circle — move around and within a rectangle shape with a ‘door’. Participants in this study then described what they saw.

The researchers, Fritz Heider and Marianne Simmel, discovered that participants assigned all kinds of personality characteristics and motives to these simple shapes, generating compelling plots about an ‘aggressive’ large triangle, the ‘helpless’ circle, and the ‘hero’ small triangle. Sometimes the plot centered on love, or cheating, or sometimes it was a parenting saga.

More recently, seven comedians interpreted this short film for USC Institute for Creative Technologies, which is a very entertaining watch… The film simply depicts lines and shapes in motion, yet our brains fill in so much more.

Continue reading on WhatUsersDo.com.

Metafocus: Personalized Lifelong Learning

The University of Southern California (USC) Institute for Creative Technologies has developed “virtual humans” that look, move, and speak like real humans, albeit on large screens. These virtual humans employ MBE science and ITS technology to create learning experiences in schools, museums, and medical research facilities. Virtual humans “add a rich social dimension to computer interaction,” answering questions at any time of day so students never feel completely stuck.

Continue reading in Learning Solutions Magazine.

U.S. Military Seeking Technology to Better Prepare for War

Voice of America’s Elizabeth Lee investigates cutting-edge technologies that could better prepare the military for war. Reporting from the USC Center for Body Computing’s annual Body Computing Conference and from ICT’s Mixed Reality Lab, the piece explores technologically advanced solutions aimed at helping the U.S. military.

Virtual Technology Allows SHARP Academy Students Opportunity to Interview Survivor

United States Army Spc. Jarett Wright was hazed and sexually assaulted while deployed to Iraq in 2010. Students from the Sexual Harassment/Assault Response and Prevention Academy had the opportunity to interview Wright Oct. 12, even though he wasn’t speaking to them at the time.

The Digital Survivor of Sexual Assault (DS2A) project uses the latest technology to educate U.S. Army personnel and raise awareness of the horrific realities of sexual assault. The project replicates the experience of an in-person interaction with a survivor of sexual assault. Using Google voice recognition software, DS2A allows Soldiers and Department of the Army Civilians to have an immersive and interactive conversation with a survivor of sexual abuse using a virtual avatar in place of the victim.

“This initiative represents a great collaborative effort between the SHARP Academy, the Army Research Laboratory, and the University of Southern California, Institute for Creative Technologies (USC-ICT), and leverages innovative technology to enhance SHARP education and training,” said Col. Christopher Engen, director of the U.S. Army SHARP Academy.

Though the program has been tested many times, this session was the first time an entire SHARP Academy class was allowed to interact with the DS2A.

Continue reading on TRADOC’s news site.

Found in Translation: USC Scientists Map Brain Responses to Stories in Three Different Languages

New brain research by USC scientists shows that reading stories is a universal experience that may result in people feeling greater empathy for each other, regardless of cultural origins and differences.

And in what appears to be a first for neuroscience, USC researchers have found patterns of brain activation when people find meaning in stories, regardless of their language. Using functional MRI, the scientists mapped brain responses to narratives in three different languages — Americanized English, Farsi and Mandarin Chinese.

The USC study opens up the possibility that exposure to narrative storytelling can have a widespread effect on triggering better self-awareness and empathy for others, regardless of the language or origin of the person being exposed to it.

“Even given these fundamental differences in language, which can be read in a different direction or contain a completely different alphabet altogether, there is something universal about what occurs in the brain at the point when we are processing narratives,” said Morteza Dehghani, the study’s lead author and a researcher at the Brain and Creativity Institute at USC.

Dehghani is also an assistant professor of psychology at the USC Dornsife College of Letters, Arts and Sciences, and an assistant professor of computer science at the USC Viterbi School of Engineering.

The study was published in the journal Human Brain Mapping.

Making sense of 20 million personal anecdotes

The researchers sorted through more than 20 million blog posts of personal stories using software developed at the USC Institute for Creative Technologies. The posts were narrowed down to 40 stories about personal topics such as divorce or telling a lie.

They were then translated into Mandarin Chinese and Farsi, and read by 90 American, Chinese and Iranian participants in their native language while their brains were scanned by MRI. The participants also answered general questions about the stories while being scanned.

Using state-of-the-art machine learning and text-analysis techniques, and an analysis involving over 44 billion classifications, the researchers were able to “reverse engineer” the data from these brain scans to determine the story the reader was processing in each of the three languages. In effect, the neuroscientists were able to read the participants’ minds as they were reading.
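
Framed abstractly, the decoding step is a classification problem: given a new pattern of brain activity, pick the story whose template pattern it most resembles. The sketch below is only a schematic stand-in using made-up numbers and a simple correlation rule; the study itself used far richer models and more than 44 billion classifications.

```python
# Schematic stand-in for story decoding: given per-story "template" activation
# patterns, assign a new brain-response vector to the best-correlated template.
# The data are invented; the actual study used far more elaborate analyses.

import math

def correlation(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

def decode(response, templates):
    """Return the story whose template correlates best with the observed response."""
    return max(templates, key=lambda story: correlation(response, templates[story]))

templates = {"divorce": [0.9, 0.1, 0.4], "telling_a_lie": [0.2, 0.8, 0.5]}
print(decode([0.85, 0.2, 0.35], templates))  # -> "divorce"
```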

Continue reading the full article in USC News.

M+DEV Conference

M+DEV (Madison Game Development) Conference
October 27, 2017
Madison, WI
Presentations

U.S. Military Looks to Solve Old Problems with New Solutions

“When you combine performance capture that is autonomously driven with a lot of this biodata, it is going to change the way athletes train. It’s going to change the way that the military trains and operates, and it is going to change the way that we interact with the world,” said Todd Richmond, director of Advanced Prototype Development at the University of Southern California Institute for Creative Technologies.

Read the full article on i-HLS.com.

VR Brings Dramatic Change to Mental Health Care

Skip Rizzo, associate director for medical virtual reality at the USC Institute for Creative Technologies, has been working with the U.S. Army on ways to use Virtual Reality (VR) to treat soldiers’ Post-Traumatic Stress Disorder for over a decade. His system, “Bravemind,” initially funded by the Department of Defense in 2005, can accurately recreate an inciting incident in a war zone, like Iraq, to activate “extinction learning,” which can deactivate a deep-seated “fight or flight” response, relieving fear and anxiety. “This is a hard treatment for a hard problem in a safe setting,” Rizzo told me. Together with talk therapy, the treatment can measurably relieve the PTSD symptoms. The Army has found “Bravemind” can also help treat other traumas like sexual assault.

Read more on Forbes.com.

All the Face-Tracking Tech Behind Apple’s Animoji

WIRED examines Apple’s iPhone X, Animoji and the researchers behind the new technology. Elizabeth Stinson talks with Hao Li; read the full article on WIRED.com.

Affective Computing and Intelligent Interaction (ACII) 2017

ACII 2017
October 23-26, 2017
San Antonio, TX
Presentations

Interactive Holocaust Project Opens Thursday

The USC Shoah Foundation and USC Institute for Creative Technologies will open the first permanent installation of their interactive Holocaust project on Thursday after over five years of work.

The installation, New Dimensions in Testimony, features extensive interviews with Holocaust survivors through interactive technology that allows the public to have conversations with the individuals.

The Holocaust survivors were selected from a variety of backgrounds that included a large range in ages, experiences and locations during the war. One of the 15 participants in the project was Eva Schloss, Anne Frank’s stepsister, whose interactive work is currently being displayed in New York at a temporary installation.

The goal of the project was to recreate the intimacy of learning from Holocaust survivors, which the team working on the project attempted to do by allowing the public to ask the interactive displays any question they wished.

Read the full article on DailyTrojan.com.

NATO Modeling and Simulation Group MSG-149 Symposium

NATO Modeling and Simulation Group MSG-149 Symposium
October 19-20, 2017
Lisbon, Portugal
Presentations

SHARP Academy Tests Virtual Victim

Army Spc. Jarett Wright was hazed and sexually assaulted while deployed to Iraq in 2010. Students from the Sexual Harassment/Assault Response and Prevention Academy had the opportunity to interview Wright Oct. 12, even though he wasn’t speaking to them at the time.

The Digital Survivor of Sexual Assault project uses the latest technology to educate Army personnel and raise awareness of the horrific realities of sexual assault. The project replicates the experience of an in-person interaction with a survivor of sexual assault. Using Google voice recognition software, DS2A allows soldiers and Department of the Army civilians to have an immersive and interactive conversation with a survivor of sexual abuse using a virtual avatar in place of the victim.

Read more in Ft. Leavenworth Lamp.

‘Raw Data’: An Oral History

Rolling Stone covers Survios’ first game as a studio, interviewing its founding members and discussing their experience with ICT’s MxR Lab.

Read the full article on Rollingstone.com.

Experimental Virtual and Mixed Reality Technologies Can be Applied to Military of the Future

Virtual reality, augmented reality and mixed reality projects are being developed that can have military applications. One mixed reality project at the University of Southern California Institute for Creative Technologies (USC ICT) involves drones small enough to fit in the palm of a hand. The drones can follow and capture a person’s movements so they can be analyzed under a training simulation.

“When you combine performance capture that is autonomously driven with a lot of this biodata, it is going to change the way that athletes train. It is going to change the way that the military trains and operates, and it is going to change the way that we interact with the world,” said Todd Richmond, director of Advanced Prototype Development at the University of Southern California Institute for Creative Technologies.

Read more in Voice of America or VOANews.com.

How Technology is Keeping Holocaust Survivor Stories Alive Forever

Pinchas Gutter was the first Holocaust survivor to participate in the New Dimensions in Testimony project, a collaboration of the USC Shoah Foundation, the Institute for Creative Technologies (ICT), also at the University of Southern California, and Conscience Display.

Read the full article about New Dimensions in Testimony on Newsweek.com.

5th International Conference on Human Agent Interaction (HAI 2017)

5th International Conference on Human Agent Interaction (HAI 2017)
October 17-20, 2017
Bielefeld, Germany
Presentations

Virtual Therapists Help Veterans Open Up About PTSD

WHEN US TROOPS return home from a tour of duty, each person finds their own way to resume their daily lives. But they also, every one, complete a written survey called the Post-Deployment Health Assessment. It’s designed to evaluate service members’ psychiatric health and ferret out symptoms of conditions like depression and post-traumatic stress, so common among veterans.

But the survey, designed to give the military insight into the mental health of its personnel, can wind up distorting it. Thing is, the PDHA isn’t anonymous, and the results go on service members’ records—which can deter them from opening up. Anonymous, paper-based surveys could help, but you can’t establish a good rapport with a series of yes/no exam questions. Veterans need somebody who can help. Somebody who can carry their secrets confidentially, and without judgement. Somebody they can trust.
Or, perhaps, something.

“People are very open to feeling connected to things that aren’t people,” says Gale Lucas, a psychologist at USC’s Institute for Creative Technologies and first author of a new, Darpa-funded study that finds soldiers are more likely to divulge symptoms of PTSD to a virtual interviewer—an artificially intelligent avatar, rendered in 3-D on a television screen—than in existing post-deployment health surveys. The findings, which appear in the latest issue of the journal Frontiers in Robotics and AI, suggest that virtual interviewers could prove to be even better than human therapists at helping soldiers open up about their mental health.

Read the full article on WIRED.com.

VR Could Trick Stroke Victims’ Brains Toward Recovery

Researchers at the University of Southern California are examining how virtual reality could promote brain plasticity and recovery.

Read all about the research on CNET.com.

DocOn

The DocOn application currently being developed by USC’s Center for Body Computing (CBC) and USC’s Institute for Creative Technologies (ICT) brings the convenience and reassurance of a personal doctor right to your smartphone. With DocOn, no matter where you are, expert medical advice is only one click away.

By combining ICT’s Rapid Avatar scanning technology and artistic creative development with the medical expertise of cardiac electrophysiologist Dr. Leslie Saxon and her team, DocOn takes internet searches to a more personal, accessible, mobile level.

Not only will DocOn bring expert medical advice to smartphone users 24/7, but it will also broaden the reach of exceptional medical care to anyone, anywhere in the world, regardless of language or proximity to a staffed clinic.

CLOVR

The CLOVR responsive system, delivered conveniently to users at home, school, or anywhere it’s needed, enables users to have meaningful, personal interactions with virtual humans. Like an empathetic listener, CLOVR analyzes the user’s emotional state and responds adaptively while also providing conversational feedback loops and non-verbal behavior.

Phase I of the project focused on creating a User Perceived State model based on a lexical analysis of direct user input. This emotional model, called the User Perceived State, or UPS, in combination with a new dialogue management and natural language processing editor, drives the system’s response. Responses include not only what the virtual agent says, but also its gestures, posture, and tone. Reactive responses can also include changes to the environment or multimedia selections.

Phase II is slated to expand the UPS model by integrating indirect input such as the user’s facial expressions, vocal tone, pulse rate, and posture to create a more holistic analysis of their emotional state.

As in human-to-human communication, CLOVR will factor these implicit, nonverbal cues into its responses.
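
A toy version of the Phase I idea, a lexical estimate of user state driving response selection, might look like the sketch below. The word lists, state labels, and responses are invented for illustration and are not CLOVR's actual UPS model.

```python
# Toy sketch: estimate a user-perceived emotional state from a lexical scan of
# the input and pick a matching response style. The lexicons, states, and
# responses are invented; CLOVR's actual UPS model is far more nuanced.

NEGATIVE = {"sad", "alone", "anxious", "worried", "tired"}
POSITIVE = {"great", "happy", "excited", "proud", "relieved"}

def perceived_state(utterance):
    words = set(utterance.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "distressed" if score < 0 else "upbeat" if score > 0 else "neutral"

RESPONSES = {
    "distressed": ("That sounds really hard. Do you want to talk about it?", "lean in, soft tone"),
    "upbeat": ("That's wonderful to hear! What happened?", "smile, open posture"),
    "neutral": ("Tell me more about that.", "nod, steady tone"),
}

state = perceived_state("I've been feeling anxious and alone lately")
print(state, RESPONSES[state])
```

Phase II's indirect signals (facial expression, vocal tone, pulse, posture) would simply become additional inputs to the state estimate in a design like this.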

RAM Replay

RAM Replay aims to improve memory retention, skill acquisition and rule learning for soldiers. The application consists of three Virtual Reality testbeds, which can be authored by researchers to contain a range of events and rules related to specific tasks. Through EEG readings and stimulation during sleep, neuroscientists explore methods to improve subject performance on these tasks.

RAM Replay is funded by the Defense Advanced Research Projects Agency (DARPA) and is a collaboration with HRL Laboratories, Rutgers University, the University of New Mexico and Cardiff University.

VR on the Lot

VR on the Lot
October 13-14, 2017
Los Angeles, CA
Presentations

Virtual Interviewer Prods Veterans to Reveal Post-Traumatic Stress

Talking – to a computer-generated interviewer named Ellie – appears to free soldiers and veterans who served in war zones to disclose symptoms of post-traumatic stress, a new study from USC Institute for Creative Technologies finds.

Read the full article on Reuters.com.

PTSD Treatment: How AI Is Helping Veterans with Post-Traumatic Stress Disorder

The Institute for Creative Technologies at USC got lots of buzz for its original research and for introducing the world to Ellie, a digital diagnostic tool that strongly resembles, but cannot replace, a human therapist. Ellie, an avatar of a woman in a cardigan with olive-toned skin and a soothing voice, listens to the people who come to her and does what any human sounding board does: she listens to the content of their speech and scans their facial expressions, tone, and voice for cues that hint at meanings beyond speech. Ellie’s design was decided upon by the research group’s art team. As for how Ellie sounds, “she has a very comforting voice,” Lucas told Newsweek.

Continue reading on Newsweek.com.

Dr. Skip Rizzo and the Rise of Medical VR Therapy

Once thought to be a technology exclusively for entertainment, virtual reality applications pioneered by Albert “Skip” Rizzo, Ph.D. have provided life-changing therapeutic results for clients with serious anxiety disorders and members of the military in particular.

As the Director of Medical Virtual Reality at USC’s Institute for Creative Technologies (ICT) and a Research Professor in USC’s Department of Psychiatry and School of Gerontology, Dr. Rizzo has been at the forefront of dramatic innovations in clinical research and care for more than two decades, and his application of VR as a valuable tool in medical treatment underscores the broadening growth of VR beyond entertainment.

Read the full article in VFX Voice.

Our Search for Meaning Produces Universal Neural Signatures

In an era dominated by heartbreaking headlines and divisive political rhetoric, a pioneering state-of-the-art brain imaging study reminds us of our human commonality and the universality of our search for meaning in the stories we read.

Read the full article in Psychology Today.

Reading Stories Creates Universal Patterns in the Brain

New research shows that when we hear stories, brain patterns appear that transcend culture and language. There may be a universal code that underlies making sense of narratives.

Read the full article in Medical News Today.

Scientists Find There is Something Universal About What Occurs in the Brain When It Processes Stories

New brain research by USC scientists shows that reading stories is a universal experience that may result in people feeling greater empathy for each other, regardless of cultural origins and differences.

Read the full article in Gears of Biz.

Reading Makes You Feel More Empathy for Others, Researchers Discover

This University of Southern California study, which made use of ICT software, found that reading stories fosters greater empathy for others.

Full article available on DailyMail.com.

AAAI AI and Interactive Digital Entertainment Conference

AAAI AI and Interactive Digital Entertainment Conference (AIIDE 2017)
October 5-9, 2017
Salt Lake City, UT
Presentations

Academy’s Tech Council Adds 7 New Members

Established in 2003 by the Academy’s Board of Governors, the Science and Technology Council provides a forum for the exchange of information, promotes cooperation among diverse technological interests within the industry, sponsors publications, fosters educational activities, and preserves the history of the science and technology of motion pictures.

The returning Council co-chairs for 2017–2018 are two members of the Academy’s Visual Effects Branch: Academy governor Craig Barron, an Oscar-winning visual effects supervisor; and Paul Debevec, a senior staff engineer at Google VR, adjunct professor at the USC Institute for Creative Technologies and a lead developer of the Light Stage image capture and rendering technology, for which he received a Scientific and Engineering Award in 2009.

Read the full press release on Oscars.org.

Virtual Reality Teaches Veterans to Develop Interview Skills

IEEE JobSite features ICT’s VITA4VETS collaboration with ARL, Dan Marino Foundation, Google.org and U.S. Vets in a recent piece about the project aimed at helping returning service members land jobs.

Read the full article on IEEE JobSite.

How to Improve Customer Experience with VR

Knowledge Center reports on how VR can improve the customer experience, speaking with ICT’s Skip Rizzo about how the technology is already being used for mental health rehabilitation.

Read the full article here.

The Last Human Job

What abilities will set humans apart from machines?

It’s been a question at the center of decades of science fiction, and one that’s taken on increasing real-world urgency as we try to anticipate how the advancing artificial intelligence revolution will transform the way we work and live.

This piece has been published in Slate and is adapted from an essay that originally ran in the New America Weekly.

Read the full article here.

Reuters Ranks USC #20 in World’s Most Innovative Universities

The annual World’s Most Innovative Universities rankings from Reuters has been published, placing USC in the 20th spot. Reuters specifically called out ICT’s work in studying how people engage with technology through virtual characters and simulations, as well as our collaborations with studios including Warner Bros and Sony Pictures Entertainment to develop ever more realistic computer-generated characters in movies.

See the ranking in Reuters here.

Will Technology Transform Mental Health Care? A Future Tense Event Recap

On Sept. 28, Future Tense convened leading researchers in the field to discuss the ways technology is changing approaches to psychiatric study and care. The question at the heart of the discussion was: Are we on the verge of a new era in psychiatric care, or will these treatments go the way of other now-condemned methods?

Visit Slate.com for the full recap.

2017 IEEE Visualization Conference

2017 IEEE Visualization Conference
September 29-October 7, 2017
Phoenix, AZ
Presentations

Can Gaming & VR Help You With Combatting Traumatic Experiences?

Trauma affects a great many people in a variety of ways. Some suffer from deep-seated trauma such as post-traumatic stress disorder caused by war or abuse, while others suffer from anxiety and phobias caused by traumatic experiences such as an accident, a loss or an attack.

Each needs its own unique and tailored regimen to lessen the effects and to help the individual regain some normalcy in their life. Often these customized treatments are very expensive and difficult to obtain.

In a world of ubiquitous technology and rapid advances in visually based treatments, these personalized therapies are becoming more accessible to the average sufferer.

Gamasutra explores more, read the full article here.

Metafocus: Why I Don’t Want You to Know About Robo-Teachers

Learning Solutions Magazine explores the world of avatars and virtual humans, citing a few of ICT’s projects in the piece.

Read the full article here.

Vets Prep for Careers with Virtual Job Interviews

Government Computer News (GCN) covers VITA4VETS, the ICT prototype designed to help veterans prepare to rejoin the workforce.

Read the full article here.

From Cancer Screening to Better Beer, Bots are Building a Brighter Future

VentureBeat explores the latest era of artificial intelligence, a period marked by the proliferation of intelligent virtual assistants and robots with specific skill sets.

Read the full article featuring a mention of ICT’s SimSensei project.

5 Angelenos Who Have Fascinating L.A. Jobs

L.A. Weekly’s Jessica Ogilvie talks with ICT’s Arno Hartholt about his work with the institute and what makes it so fascinating.

Read the full article in L.A. Weekly.

Virtual Reality, Real Medicine: Treating Brain Injuries with VR

ICT’s Dr. Skip Rizzo visits the Kessler Foundation to track progress of a trial measuring executive function performance in traumatic brain injury patients.

Streaming Media covered the news; you can read the full article here.

Virtual Reality Helps Veterans Prepare for New Jobs

ABERDEEN PROVING GROUND, Md. — The U.S. Army Research Laboratory and its partners recently developed a new way for veterans to seek employment.

The Virtual Training Agent for Veterans, or VITA4VETS, is a virtual simulation practice system designed to build job interviewing competence and confidence, while reducing anxiety. Although Army researchers and developers at the University of Southern California’s Institute for Creative Technologies, Google.org and the Dan Marino Foundation originally developed the training system to help those with autism prepare for job interviews, they soon realized its potential to help veterans.

While several companies advertise they hire vets, transitioning from military service life to a civilian workplace can be challenging. One day they are a Soldier, Sailor, Airman or Marine — then the next day, they are back to being “just a citizen.” The prevalence of militarisms in speech and thought can override civilian ways of conceptualizing the world.

The researchers and developers said they understand returning home can be arduous in itself, but preparing to find employment can be even more taxing.

That’s where they believe VITA4VETS can help improve one’s interviewing skills and instill a sense of discipline.

Juan Gutierrez, a 33-year-old Navy veteran with experience in aviation electronics, was satisfied with the new style of interview.

“Answering questions with a virtual human rather than a real human helped me feel less nervous, and I could practice different responses and there were no repercussions with the avatar,” Gutierrez said.

Gutierrez said he had more confidence and the experience was as much an interview for a potential employer as it was for him.

“I learned I could ask questions too. Instead of feeling nervous — like I am being tested, it was a way for me to be honest and learn if it (the job) is something I’d like to do. Overall, VITA helped me feel confident with my interview,” said Gutierrez.

In 2016, the Bureau of Labor Statistics reported that 20.9 million men and women were veterans, accounting for about nine percent of the civilian non-institutional population age 18 and over. Of those 20.9 million, more than 450,000 were unemployed.

The military provides transition training, but when one considers the unemployment statistics and challenges servicemembers face, it underscores the urgency for creating methods to better prepare veterans for civilian employment.

“Although many veterans have the necessary talent and temperament for vocational achievement, they may find it challenging to express the ways in which their skills and experience are able to translate to the private sector,” said Matthew Trimmer, project director for VITA4VETS at USC ICT.

Currently available through U.S. VETS in Los Angeles, VITA4VETS leverages virtual humans that can support a wide range of interpersonal skill training activities. It uses six characters that span different genders, ages and ethnic backgrounds. Each character is capable of three behavioral dispositions or interview styles and can be placed in a variety of interchangeable background job contexts, all controllable from an interface menu.

According to Trimmer, offering a variety of possible job interview roleplay interactions supports practice across a range of challenge levels and allows for customizable training geared to the needs of the user. Trimmer also said the approach has been known to produce positive results, indicating increased confidence with practice and high job acquisition rates.

“If focusing on one portion of said issue can provide any support to those that have served us, then it is one step closer to better assisting the overall transition process,” Trimmer said.

According to U.S. VETS in Los Angeles, 93 percent of veterans who used the VITA4VETS application have obtained employment.

Read the full article on the U.S. Army website.

The 11th Annual USC Body Computing Conference

The 11th Annual USC Body Computing Conference
September 22, 2017
USC Town & Gown Ballroom

How Medical Care Benefits from VR/AR and Virtual Humans

Gamasutra sits down with Arno Hartholt to discuss the role of VR, AR and virtual humans in healthcare.

Read the full article in Gamasutra.

Museum of Jewish Heritage in New York City Allows Virtual ‘Interviews’ with Holocaust Survivors

An exhibit at the Museum of Jewish Heritage in Manhattan called “New Dimensions in Testimony” uses hours of recorded high-definition video and language-recognition technology to create just that kind of “interview” with Eva Schloss, Anne Frank’s stepsister, and fellow survivor Pinchas Gutter.

Read more about the New Dimensions in Testimony project in Newsday.

MYiHealth

MYiHealth
September 20-21, 2017
Stockholm, Sweden
Keynote Presentation

Army Research Center Maps LA Coliseum in 3-D for Homeland Security

ABERDEEN PROVING GROUND, Md. (Sept. 20, 2017) — The U.S. Army Research Laboratory’s university partner, the University of Southern California Institute for Creative Technologies, in collaboration with the Aerospace Corporation and the Department of Homeland Security, created a three-dimensional reconstruction of the Los Angeles Memorial Coliseum to help ensure the safety of its visitors.

They used commercial, off-the-shelf unmanned aerial systems and photogrammetric software to create the 3-D reconstruction of the LA Coliseum to be used by the Department of Homeland Security for infrastructure protection and security planning.

The Department of Homeland Security visited ICT for demonstrations of the One World Terrain project, specifically the collection and 3-D reconstruction of areas of interest useful for terrain visualization, walk-throughs, planning and mission rehearsal. Officials thought it held promise for their infrastructure protection group, specifically for sites where large crowds gather and may be soft targets.

The aircraft were flown autonomously using an Android app built by ICT known as RAPTRS. The drones collected thousands of high-resolution still photographs of the structure. Commercial photogrammetry software, in combination with classification algorithms developed at ICT, was used to reconstruct the structures in three dimensions and prepare the models for visualization and analysis. The visualization and analysis occur in ICT’s Aerial Terrain Line of sight Analysis System, or ATLAS, where users can visualize sight-lines and plan tactical movements on and around reconstructed structures.

This addresses an enduring Army challenge: terrain remains one of the most pressing challenges, if not the most pressing, for training, preparing and planning all aspects of combined arms operations.

The UAS-to-3-D-model pipeline used at the LA Coliseum gives Army units the ability to launch organic UAS assets on automated imagery acquisition flights and use the acquired imagery to reconstruct the terrain of their areas of interest. It provides leaders with a georeferenced, up-to-date, high-detail (5 cm or better) 3-D model that can be used for mission rehearsal, simulation and situational awareness.
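
To give a feel for what a “5 cm or better” model implies for flight planning, the short calculation below relates ground sampling distance (the ground footprint of a single image pixel) to flight altitude and camera geometry. The camera parameters are assumptions typical of a small commercial UAS, not the specific aircraft or settings used at the Coliseum.

    # Ground sampling distance (GSD): the ground footprint of one image pixel.
    # Camera numbers below are assumptions (1-inch sensor, ~8.8 mm lens,
    # 5472-pixel-wide images), not the aircraft actually flown at the Coliseum.

    def gsd_m(altitude_m, sensor_width_m=0.0132, focal_length_m=0.0088, image_width_px=5472):
        """Metres of ground covered by one pixel at a given flight altitude."""
        return (sensor_width_m * altitude_m) / (focal_length_m * image_width_px)

    def max_altitude_m(target_gsd_m, sensor_width_m=0.0132, focal_length_m=0.0088, image_width_px=5472):
        """Highest altitude that still meets a target GSD."""
        return target_gsd_m * focal_length_m * image_width_px / sensor_width_m

    print(round(gsd_m(60) * 100, 1), "cm per pixel at 60 m")              # ~1.6 cm
    print(round(max_altitude_m(0.05)), "m keeps GSD at 5 cm or better")   # ~182 m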

The RAPTRS pipeline is user-friendly and more cost-effective than many terrain capture methods. If Soldiers need to train in an area with insufficient or outdated geospatial data, they can rapidly collect and create or update terrain models.

ICT’s research and the next-generation process used at the Coliseum are among several ways the Army can regain terrain/geospatial overmatch and reduce the cost and time of creating geo-specific datasets for modeling and simulation.

World Conference on Information Technology

World Conference on Information Technology
September 20-22, 2017
Beijing, China

VRDC Fall 2017

VRDC Fall 2017
September 20 – 22, 2017
San Francisco, CA
“The Science of Engineering of Redirected Walking” with Mahdi Azmandian
Presentations by Arno Hartholt

The Remembering Machine

Davina Pardo writes an opinion piece for the New York Times about preserving Holocaust survivor stories and the New Dimensions in Testimony project.

Read the full article in the New York Times.

Gordon & Hobbs Book Celebration

Gordon & Hobbs Book Celebration
Tuesday, September 19, 2017
Andrew Gordon and Jerry Hobbs invite you to join them in celebrating the completion of their book, entitled “A Formal Theory of Commonsense Psychology: How People Think People Think” (Cambridge University Press).
We will be enjoying food, drinks, and live jazz with friends and colleagues at the USC Town & Gown Ballroom, near the center of campus. 
Pre-party lecture: Andrew and Jerry will deliver a public lecture on the topic of the book in Cammilleri Hall at the USC Brain & Creativity Institute, 3620A McClintock Avenue, at 4:30pm on September 19.
The favor of a reply is requested by Wednesday, September 13, online at usc.edu/esvp (code: bookparty).

Exhibit Allows Virtual Interviews with Holocaust Survivors

What was it like in a Nazi concentration camp? How did you survive? How has it affected your life since?

Technology is allowing people to ask these questions and many more in virtual interviews with actual Holocaust survivors, preparing for a day when the estimated 100,000 Jews remaining from camps, ghettos or hiding under Nazi occupation are no longer alive to give the accounts themselves.

Karen Matthews of the Associated Press investigates more. Read the full article on AP.

IEEE International Conference on Image Processing (ICIP 2017)

ICIP 2017
September 17-20, 2017
Beijing, China
Presentations

Holocaust Survivor Holograms Give History New Depth

KCET in California explores the ICT and USC Shoah Foundation’s collaborative project, New Dimensions in Testimony. Read all about it here.

What to Expect When You’re Expecting the New Apple iPhone

On September 12, 2017, Apple officially unveiled the iPhone 8, iPhone 8 Plus and iPhone X, the latest iterations of the little personal computing device that changed the world. USC experts discuss whether this is indeed just another minor iteration in Apple’s incremental but successful corporate philosophy under CEO Tim Cook, or a release that finds a way to transform our collective relationship with technology as the original did 10 years ago.

ICT’s Todd Richmond comments on Apple’s integration of Augmented Reality. To read his thoughts, visit the USC News Press Room.

So, About Those iPhone Animojis

Apple’s animated emojis – Animojis – for the iPhone X, just announced today, are getting lots of attention, partly because the tech behind them likely extends from the company’s acquisition of Faceshift in 2015.

While that’s certainly not been officially confirmed, back then Faceshift was doing some very cool things with driving animated avatars directly (i.e., in real time) from video of your own face, coupled with depth-sensing tech – effectively the same thing that happens with these Animojis via the iPhone’s cameras.

Several tools have of course also been developed elsewhere that use input video and facial performance to drive animated characters, but for fun, I thought it might be interesting to go back to specific pieces of computer graphics research from 2009, 2010 and 2011 that each partly served as the origins of Faceshift.

Other continued research efforts also played a part in the development of Faceshift, but the papers below (which also have accompanying videos) were key, and they show how the facial animation of CG avatars could be driven in real time from video captured of human performances.
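
As a rough sketch of the shared idea behind these papers (per-frame facial measurements driving a rigged character), the example below solves for blendshape weights that best explain a captured face as a linear combination of expression shapes. It uses synthetic data and a plain least-squares solve; it is not Faceshift’s or Apple’s actual algorithm, which adds constraints, temporal smoothing and depth-based tracking.

    import numpy as np

    # Synthetic stand-ins: a neutral face and K expression blendshapes over N
    # tracked 3-D points (real systems track landmarks or depth data per frame).
    rng = np.random.default_rng(0)
    N, K = 60, 8
    neutral = rng.normal(size=(N, 3))
    deltas = rng.normal(scale=0.1, size=(K, N, 3))   # blendshape offsets from neutral

    # A "captured" frame: the neutral face deformed by some hidden true weights.
    true_w = rng.uniform(0.0, 1.0, size=K)
    captured = neutral + np.tensordot(true_w, deltas, axes=1)

    # Per-frame solve: find weights w minimising || B w - (captured - neutral) ||^2.
    B = deltas.reshape(K, -1).T                      # (3N, K) basis matrix
    target = (captured - neutral).ravel()
    w, *_ = np.linalg.lstsq(B, target, rcond=None)
    w = np.clip(w, 0.0, 1.0)                         # real solvers constrain and smooth over time

    print(np.allclose(w, true_w, atol=1e-6))         # True: weights recovered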

FACE/OFF: LIVE FACIAL PUPPETRY
Thibaut Weise, Hao Li, Luc Van Gool, Mark Pauly
Proceedings of the Eighth ACM SIGGRAPH / Eurographics Symposium on Computer Animation 2009, 08/2009 – SCA ’09

Visit VFX Blog for more.

Speaker Q&A: Arno Hartholt Discusses the Use of Virtual Humans and VR/AR for Clinicians

Arno Hartholt is Director for R&D Integration at USC Institute for Creative Technologies and will be at VRDC 2017 to present his talk Immersive Medical Care with VR/AR and Virtual Humans, which will discuss how to apply VR/AR and other powerful capabilities to heal, inform, and teach in the medical domain. Here, Arno gives us some information about himself and his work.

Read more in Gamasutra.

Why Augmented Reality Is About to Take Over Your World

In preparation for Apple’s big announcement, BuzzFeed News talks with ICT’s Todd Richmond about Augmented Reality and what to expect in the not-too-distant future.

Read the full article in BuzzFeed News.

Digital Taipei 2017

Digital Taipei 2017
September 9-12, 2017
Taipei City, Taiwan
Panelist Participation

Keeping Holocaust Survivor Testimonies Alive – Through Holograms

As Holocaust survivors die out, many museums and study centers are scrambling to figure out just how to preserve testimonies in ways that will engage young people of the future.

One answer may be holography.

Illinois’ Holocaust Museum & Education Center will be unveiling its long-awaited, multi-million-dollar Take A Stand Center this October, which combines high-definition holographic interview recordings and voice recognition technology to enable Holocaust survivors to tell their personal stories and respond to questions from the audience, inviting a one-on-one ‘conversation’.

“The idea, there, is to carry on the dialogue,” David Traum, a lead researcher on the project, told the Forward. “It’s beyond what you can get from a static recording or documentary.”
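
Conceptually, an installation like this pairs speech recognition with a retrieval step that matches an audience question to the closest pre-recorded answer clip. The sketch below shows that matching step in a deliberately simplified form, using bag-of-words cosine similarity over a tiny invented index; the actual New Dimensions in Testimony system relies on far more sophisticated natural language processing.

    import math
    from collections import Counter

    # Tiny hypothetical index: each recorded clip is keyed by the question it answers.
    CLIPS = {
        "clip_017": "what was it like in the concentration camp",
        "clip_042": "how did you survive the war",
        "clip_096": "how has the experience affected your life since",
    }

    def bow(text):
        """Bag-of-words counts for a piece of text."""
        return Counter(text.lower().split())

    def cosine(a, b):
        """Cosine similarity between two bag-of-words vectors."""
        dot = sum(a[t] * b[t] for t in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def best_clip(spoken_question):
        """Return the recorded clip whose indexed question is most similar."""
        q = bow(spoken_question)
        return max(CLIPS, key=lambda clip: cosine(q, bow(CLIPS[clip])))

    print(best_clip("How did you manage to survive?"))   # clip_042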

Read more in The Forward.

Everything a Computer Wanted to Know About Humans (but was too afraid to ask)

Computers can outperform the greatest minds in many challenges, but while new approaches in machine learning are making huge strides, from classifying images to understanding language, machines still fall short in other areas.

Despite excelling in chess and solving complex computations, when it comes to understanding the pain of heartbreak or recognizing the emotional power of a Rothko painting, humans still come out on top.

But will that always be the case?

In a new book, dubbed “a computer’s guide to humans,” USC Information Sciences Institute chief scientist Jerry Hobbs and Institute for Creative Technologies director for interactive narrative research Andrew Gordon provide a linguistic framework to help computers understand our mysterious human ways, from emotions and beliefs to planning and memory.

Read the full article on USC’s Information Sciences Institute’s website.

New Dimensions in Testimony

As part of its 20th-anniversary commemoration, the Museum of Jewish Heritage is proud to pilot this interactive testimony installation — the first of its kind in the greater New York area — and to present the world premiere of the testimony of Eva Schloss and the New York premiere of the testimony of Pinchas Gutter.

For more information, visit the Museum of Jewish Heritage.

VR For Good: How The Virtual Medicine Conference Wants To Better VR Healthcare

Simply saying that VR is good for healthcare is too broad a statement. As we all know, there are thousands of different strands of subjects that fall under that umbrella, and identifying which ones VR is well-suited for is a little trickier. But an upcoming conference aims to unify these various strands, providing a VR medical conference that charts the future of technology’s impact on the health sector.

Virtual Medicine, as the event is called, is organized by Dr. Brennan Spiegel, the Director of Health Services Research for Cedars-Sinai Health System, and Co-Chair of the VR/AR Association Digital Health Committee. Taking place from March 28th – 29th 2018 at the Cedars-Sinai Medical Center in Los Angeles, the event gathers various leaders from the world of medical VR for two days of talks, sessions, and workshops.

19 speakers have been lined up so far from groups like the USC Institute for Creative Technologies, Osso Health, Children’s Hospital LA and even Samsung. In fact, Samsung is a partner for the event as are the VR/AR Association and AppliedVR, a VR platform designed for healthcare.

Check out some of the work Cedars-Sinai itself is doing with VR already.

“Often conversations happen in isolation about how to make VR successful,” AppliedVR’s Josh Sackman said of the conference, “but it truly takes a village to make something like VR work in such a complex space like healthcare. It requires trial and error, constant feedback and communication with a cross-functional team, establishing best practices, and most importantly open dialogue from a wide range of stakeholders ranging from clinicians to content creators to investors to researchers to hardware companies.”

By gathering figureheads together to share successes and failures as well as data and research, Virtual Medicine hopes to stimulate VR’s use within the medical community and defy claims of gimmicks.

“Ultimately, we do see this as something in every patient room, operating room, imaging center, emergency room, surgery center, infusion center and other places where patients experience something scary or painful,” Sackman adds. “And we are starting to see some really powerful stories of patients using this in their homes, which is leading to safer and possibly more effective management strategies for those with chronic pain and other chronic conditions.”

If you’re interested in attending Virtual Medicine then it’d be a good idea to act fast; super early bird tickets are available until the end of the month and offer general admission for $299. After that, GA tickets will be priced at $399 for the rest of the year and $499 leading up to the event in March.

Via Upload VR.

20 Best 3D Animation Software Tools

All3DP lists out the 20 Best 3D Animation Software Tools, including ICT’s SmartBody prototype. Read the full article in All3DP here.

SMPTE 2017 Dives Into the Tech Behind Next-Gen Media

For more than a century, the people of SMPTE have sorted out the details of many significant advances in media and entertainment technology, and the programme for the SMPTE 2017 Annual Technical Conference & Exhibition (SMPTE 2017) keeps that tradition.

SMPTE 2017 will take place 23-26 October, and it will fill two exhibit halls and multiple session rooms at the Hollywood & Highland Center in Los Angeles. In fact, this is the final year the event will take place in its long-standing Hollywood location; the conference and exhibition will move to a larger venue in downtown Los Angeles next year.

Paul Debevec will be honored this year; read the full article in 4RFV for more information.

Tech Talk: Long-Term VR Side Effects Are Still a Big Unknown

There may be a lot of hype around VR technologies, but many researchers remain undecided on whether that is a good thing. Although studies have been conducted for decades with a focus on possible side effects, the most recent research suggests that more data is needed about what happens to users in the long term.

Android Headlines explores possible side effects and challenges facing VR today. Read the full article here.

How Digital Healthcare is Changing Everything: Thought Leaders Meet to Discuss Innovation in Digital Health at USC’s 11th Annual Body Computing Conference

LOS ANGELES, Aug. 28, 2017 /PRNewswire-USNewswire/ — On Friday, September 22, the University of Southern California (USC) Center for Body Computing (CBC), part of the Keck School of Medicine of USC, will curate conversations to provide a comprehensive understanding of how digital health is touching every aspect of our lives – from performance, behavior and decision-making to medicine, cybersecurity, the military, sports and public policy – at its 11th annual Body Computing Conference (click link to register).

Thought leaders across a broad spectrum come together for the one-day summit to offer local, national and global perspectives on the evolving convergence of health and digital technology. Speakers will shed light on a wide range of topics including the California-led cybersecurity initiative in health IT and L.A.’s 2028 Olympics plan to transform the Olympic Village into a connected health space. The event also includes demonstrations and discussions on the impact of unique wearable sensors to track personal fitness, enhance elite athletic performance and train the next generation of warfighters. Panelists will also debate and advocate for the power of digital health tools to combat the critical global issue of diabetes or to build on-demand transportation safety nets that address our rapidly aging population.

“Digital tools are making healthcare omnipresent, they are no longer a spoke in the wheel of our lives – they are the hub,” said Leslie Saxon, MD, founder and executive director of the USC Center for Body Computing. “We’re proud to be one of the only digital health conferences that brings together such an eclectic mix of global thought leaders to demonstrate, debate, and introduce the latest products, research, and investments that are accelerating the integration of digital health into every aspect of our lives.”

The exclusive 250-guest capacity crowd includes digital health start-ups and venture capitalists, small and large company executives, non-profit and government organizations, students, academic leaders and media.

Speakers from the California Governor’s Office of Business and Economic Development (GO-Biz), AARP Foundation, Abbott, an Academy Award-winning producer, Boston Celtics, Brent Scowcroft Center on International Security, ESPN, GE Software, Goldman Sachs, Joslin Diabetes Center, Karten Design, Lyft, NBA, NFL Players Association, Tastemade, UnitedHealthcare, U.S. Army, U.S. Department of Homeland Security, U.S. Food and Drug Administration, U.S. Marines, VSP Global and others will take center stage. Hosted on USC’s main Los Angeles campus, these experts join the innovators from across USC schools including: Annenberg School of Communications, Brain & Creativity Institute, Institute of Creative Technologies and the medical experts at Keck Medicine of USC. Click here for full list of speakers.

“Whether you are an elite athlete using biometrics to achieve peak performance where milliseconds can make a million-dollar difference, a military commander making crucial choices based on collective team health dynamics, a healthcare system protecting patient privacy or an individual looking to be empowered to enhance personal health outcomes, this event showcases today’s realities and tomorrow’s promise of digital health,” added Saxon.

According to Rock Health, digital health companies raised $4.2 billion in 2016 – double the amount raised in 2013 – with wearables and biosensors representing $312 million. GMI Insights projects the digital health market will grow to $379 billion by 2024. Last year, virtual reality reached almost $1 billion in market growth in health care-specific applications, and the adoption of artificial intelligence, apps and mHealth tools is poised for promise when it comes to individual and population health management. The USC CBC serves as a hub at the convergence of a fast-paced and growing digital technology revolution when it comes to medicine, acting as the research project lead and product design partner for small and large companies who want to maximize doctor efficiency, increase access, decrease costs and increase patient empowerment, engagement and health outcomes.

About the USC Center for Body Computing
The USC Center for Body Computing is the digital health innovation center for the Keck Medicine of USC medical enterprise. Collaborating with inventors, strategists, designers, investors and visionaries from health care, entertainment and technology, the USC CBC serves as an international leader on digital health and wearable technology. Founded in 2006 by Leslie Saxon, a cardiologist, the CBC was one of the nation’s first academically-based centers to focus on digital health solutions.

Dr. Saxon, an internationally renowned digital health guru, has spoken at TEDMED, SXSW and WIRED international conferences, participates in the Food and Drug Administration (FDA) advisory group on global medical app regulations, and recently served on a panel at the Bipartisan Policy Center to discuss medical apps and health IT cybersecurity. She was recognized as the nation’s “Most Tech Savvy Doctor” by Rock Health. For more information about the USC CBC: uscbodycomputing.org.

Media interested in attending the conference, please contact: Sherri.Snelling@med.usc.edu

17th Annual International Conference on Intelligent Virtual Agents

Intelligent Virtual Agents Conference
August 27-30, 2017
Stockholm, Sweden
Presentations

PTSD Exposure Therapy in VR: Importance of Storytelling & Emotional Presence in Healing from Trauma

Voices of VR sits down with ICT’s Dr. Albert ‘Skip’ Rizzo to discuss VR Exposure Therapy.

Read all about it in Voices of VR.

USC Joins Alliance to Shape SoCal Into the Next Global Tech Hub

USC has joined the new Alliance for Southern California Innovation, a nonprofit coalition of universities, research institutions and corporations aiming to unify Southern California’s tech and biotech industries.

Read the full article in USC News.

Paul Debevec to Be Honored by Motion Picture and TV Engineers

The Society of Motion Picture and Television Engineers will present Paul E. Debevec with its most prestigious award, the Progress Medal, at its Oct. 26 awards ceremony during the SMPTE Technical Conference & Exhibition at Loews Hollywood Hotel.

Read the full article in Hollywood Reporter.

We Need to Look More Carefully Into the Long-term Effects of VR

When you think about it, virtual reality is such a step change in immersion that mediums like TV and video games seem abstract by comparison. What we’re not asking enough is what impact this might have in the long run.

This is virtual reality‘s second coming; we’ve been tinkering with the idea since Morton Heilig built the Sensorama in the 1960s, but it wasn’t until the 90s that VR got its first “boom”. Sadly, Hollywood’s promises of breathtaking alternate universes were beyond what the technology of the era could reach, dooming it to failure. But even back then, people had concerns about what long-term exposure to VR could do to the human mind. A study carried out at Michigan State University concluded that VR rewired the brain, but was unable to determine whether longer-term effects were possible.

Now we’re in 2017, VR is back (again), and still we’ve done little to interrogate whether our brains are even ready for this next level of human-machine interfacing. But it’s coming: various researchers have revealed to Wareable that work is underway to look further into the impact virtual reality could have on our brains and eyes.

Read the full article featuring commentary from Dr. Albert ‘Skip’ Rizzo on Wareable.

Augmented Reality is the Potential Future of US Military Training

The Synthetic Training Environment (STE) is an augmented reality training endeavor designed to improve soldier readiness in a variety of environments.

“Due to the rapidly expanding industrial base in virtual and augmented reality, the Army is moving out to seize an opportunity to augment readiness,” Col. Harold Buhl, Army Research Lab Orlando and Information and Communications Technology program manager, told taskandpurpose.com. “With STE, the intent is to leverage commercial advances with military technologies to provide commanders with unit-specific training options to achieve readiness more rapidly and sustain readiness longer.”

Read the full article in Military Training & Simulation.

IJCAI 2017

International Joint Conference on Artificial Intelligence
August 19-25, 2017
Melbourne, Australia
Presentations

How VR is Changing the Way We Think About Therapy

VR Scout explores a few methods in which virtual reality enhances therapy. Read the full article here.

Don’t Miss Out On All the Great AR Talks at VRDC Fall 2017

Gamasutra teases what to expect at this year’s VRDC Fall 2017 Conference.

Click here for more information about the show and especially Arno Hartholt’s talk on immersive medical care.

BEING THERE: Virtual Reality Lets Therapy Patients Return to the Scene of Their Fear

The Herald Tribune covers Cade Metz’s piece for the New York Times about virtual reality exposure therapy.

Visit The Herald Tribune to read the full article featuring Dr. Albert ‘Skip’ Rizzo.

SIGdial 17

SIGdial 17
August 15-17, 2017
Saarbrücken, Germany
Presentations

43rd National Organization for Victim Assistance

43rd National Organization for Victim Assistance (NOVA)
August 14-17, 2017
San Diego, CA
Presentations

SIGKDD 2017

SIGKDD 2017
August 13-17, 2017
Halifax, Nova Scotia, Canada
Presentations

How the Army is Using Augmented Reality to Bolster Troop Readiness

Task & Purpose covers the recent ARL and ICT STE news.

Read the full article in Task & Purpose.

Augmented Reality May Revolutionize Army Training

Orlando Echo covers the joint effort in which the U.S. Army Research Laboratory and several entities — the University of Southern California Institute for Creative Technologies, Combined Arms Center-Training and Program Executive Office for Simulation, Training and Instrumentation — are working to research, prototype and eventually deliver the Synthetic Training Environment, otherwise known as STE.

Read the full article in Orlando Echo.

How Virtual Reality is Transforming Public Health and Medicine

Did you know that virtual, mixed, and augmented reality content can have therapeutic and educational benefits? Pixvana explores noteworthy studies that show how X-Reality improves users’ lives.

Read the full article featuring Bravemind here.

Augmented Reality May Revolutionize Army Training

By Joyce M. Conant, ARL Public Affairs and Sara Preto, ICT

ABERDEEN PROVING GROUND, Md. — The development of advanced learning technologies for training is underway. Linking augmented reality with live training will enable units to achieve the highest levels of warfighting readiness and give valuable training time back to commanders and Soldiers.

The U.S. Army must train to win in a complex world that demands adaptive leaders and organizations that thrive in ambiguity and chaos. To meet this need, Force 2025 and Beyond, the Army’s comprehensive strategy to change and deliver land-power capabilities as a strategic instrument of the future joint force, requires a new training environment that is flexible, supports repetition, reduces overhead and is available at the point of need.

In a joint effort, the U.S. Army Research Laboratory, the University of Southern California Institute for Creative Technologies, Combined Arms Center-Training and the Program Executive Office for Simulation, Training and Instrumentation are working to research, prototype and eventually deliver the Synthetic Training Environment, otherwise known as STE.

STE is a collective training environment that leverages the latest technology for optimized human performance within a multi-echelon mixed-reality environment. It provides immersive and intuitive capabilities to keep pace with a changing operational environment and enable Army training on joint combined arms operations. The STE moves the Army away from facility-based training, and instead, allows the Army to train at the point of need — whether at home-station, combat training centers or at deployed locations.

“Due to the rapidly expanding industrial base in virtual and augmented reality, and government advances in training technologies, the Army is moving out to seize an opportunity to augment readiness,” said Col. Harold Buhl, ARL Orlando and ICT program manager. “With STE, the intent is to leverage commercial advances with military specific technologies to provide commanders adaptive unit-specific training options to achieve readiness more rapidly and sustain readiness longer.”

Buhl said that, in parallel, the intent is to immerse Soldiers in the complex operational environment and stress them physically, mentally and iteratively in order to, as Gen. Martin Dempsey (the retired U.S. Army general who served as the 18th Chairman of the Joint Chiefs of Staff from October 1, 2011, until September 25, 2015) has said, ‘make the scrimmage as hard as the game.’

This training environment delivers the next generation of synthetic collective trainers for armor, infantry, Stryker and combat aviation brigade combat teams. These trainers are being developed to lower overhead, be reconfigurable and use advanced learning technologies with artificially intelligent entities to simultaneously train at the brigade combat team (BCT) level and below. This multi-echelon collective training will be delivered to geographically distributed warfighters, at the point of need, for both current and future forces.

“As the Army evolves with manned and unmanned teams and other revolutionary battlefield capabilities, STE will be flexible enough to train, rehearse missions and experiment with new organization and doctrine,” Buhl said.

Leveraging current mixed reality technologies, STE blends virtual, augmented and physical realities, providing commanders and leaders at all levels with multiple options to guide effective training across active and dynamic mission complexities. STE will provide intuitive applications and services that enable embedded training with mission command workstations and select platforms.

“This capability coupled with the immersive and semi-immersive technologies that bring all combat capabilities into the same synthetic environment, add to this quantum leap in training capability, the geo-specific terrain that STE will use in collaboration with Army Geospatial Center and you have the opportunity to execute highly accurate mission rehearsal of a mission and multiple branches and sequels,” Buhl said.

STE adaptive technology supports rapid iterations and provides immediate feedback — allowing leaders to accurately assess and adjust training — all in real time. With a single open architecture that can provide a land, air, sea, space and cyberspace synthetic environment shared with joint, interagency, intergovernmental and multinational partners, Army multi-domain operations are inherent to STE.

An increasingly complex element of the land domain is the expansion of megacities. In the coming decades, an increasing majority of the world’s population is expected to reside in these dense urban areas. Technologies in development by ARL for STE will provide the realism of complexity and uncertainty in these dense and stochastic environments. STE is intended to evolve and enhance readiness in megacities by replicating the physical urban landscape, as well as the complex human dynamics of a large population.

“It enables our formations to train as they fight using their assigned mission command information systems, and all other BCT and echelons-above-BCT warfighting capabilities,” Buhl said. “Operational information systems and the training environment systems will share an identical common operating picture, enabling seamless mission-command across echelons.”

Ryan McAlinden, director for Modeling, Simulation and Training at ICT, said his team has been working with ARL, the TRADOC capabilities manager, Combined Arms Center for Training, and PEO STRI for the past year to help inform the requirements process for the STE.

“The team has been researching and prototyping techniques and technologies that show feasibility for the one world terrain part of the program,” McAlinden said. “The hope is that these research activities can better inform the materiel development process when the STE is formally approved as a program of record.”

By leveraging technology to provide the means to train in the complex operating environment of the future, integrate technologies to optimize team and individual performance, provide tough realistic training that is synchronized with live capstone events and give commanders options for accelerated and sustained readiness, STE is transforming Army training to achieve readiness and win in a complex world.

“As we develop, demonstrate and transition technologies across the U.S. Army Research Development and Engineering Command that provide solutions to tough Army problems, we never lose sight of focus on Soldiers and commanders,” Buhl said. “These men and women deserve the very best in technology and more importantly in our respect for their leadership, initiative and ingenuity in the use of that technology. STE has tremendous opportunity for the Army if we develop and deliver with that focus.”

—–

The U.S. Army Research Laboratory, currently celebrating 25 years of excellence in Army science and technology, is part of the U.S. Army Research, Development and Engineering Command, which has the mission to provide innovative research, development and engineering to produce capabilities that provide decisive overmatch to the Army against the complexities of the current and future operating environments in support of the joint warfighter and the nation. RDECOM is a major subordinate command of the U.S. Army Materiel Command.

Augmented Reality May Revolutionize Army Training

The development of advanced learning technologies for training is underway. Linking augmented reality with live training will enable units to achieve the highest levels of warfighting readiness and give valuable training time back to commanders and Soldiers.

The U.S. Army must train to win in a complex world that demands adaptive leaders and organizations that thrive in ambiguity and chaos. To meet this need, the Army has developed Force 2025 and Beyond, a comprehensive strategy to change and deliver land-power capabilities as a strategic instrument of the future joint force. The successful implementation of this strategy requires a new training environment that is flexible, supports repetition, reduces overhead and is available at the point of need.

A joint effort between the U.S. Army Research Laboratory and several entities — University of Southern California Institute for Creative Technologies, Combined Arms Center-Training and Program Executive Office for Simulation, Training and Instrumentation — are working to research, prototype and eventually deliver the Synthetic Training Environment, otherwise known as STE.

Read the full article on the U.S. Army website.

Disney’s ‘Magic Bench’ Fixes AR’s Biggest Blind Spot

ICT’s David Nelson and Todd Richmond talk with Brian Barrett of WIRED about Disney’s use of augmented reality in ‘Magic Bench’.

Read the full story in WIRED.

Real-time Digital Human Avatar Rendering Leaves SIGGRAPH 2017 Attendees Stunned

Attendees at SIGGRAPH 2017 were treated to a look at the future of real-time digital human rendering. SIGGRAPH’s VR Village hosted an experience that featured interviews by a digital avatar being “driven” in real time (yes, like in Ready Player One) by the human Mike, who wore a special rig that captured his motions and expressions.

Read the full article by Alex Wall in Medium.

Researchers Showcase Impressive New Bar for Real-time Digital Human Rendering in VR

A broad team of graphics researchers, universities, and technology companies are showcasing the latest research into digital human representation in VR at SIGGRAPH 2017. Advanced capture, rigging, and rendering techniques have resulted in an impressive new bar for the art of recreating the human likeness inside of a computer in real-time.

MEETMIKE is the name of the VR experience being shown this week at the SIGGRAPH 2017 conference, which features a wholly digital version of VFX reporter Mike Seymour being ‘driven’ and rendered in real time by the real-life Seymour. Inside the experience, Seymour plays host, interviewing industry veterans and researchers inside VR during the conference. Several additional participants wearing VR headsets can watch the interview from inside the virtual studio.

Read more in Road to VR.

How Artificial Intelligence Could Benefit Those in Empathy-Centric Professions

Pacific Standard asks the question, ‘If care jobs become the last human jobs, could that encourage employers and policymakers to recognize and value them as the economically critical work that they are?’

Read the full article here.

Annual Meeting of Association for Computational Linguistics (ACL)

Annual Meeting of Association for Computational Linguistics (ACL)
July 30, 2017 – August 4, 2017
Vancouver, Canada
Presentations

A New Way for Therapists to Get Inside Heads: Virtual Reality

The New York Times explores the use of VR in exposure therapy, speaking with Dr. Albert “Skip” Rizzo for insight into the process.

Read the full article here.

SIGGRAPH 2017

SIGGRAPH 2017 Los Angeles
July 30, 2017 – August 3, 2017
Los Angeles, CA
Presentations

The Last Human Job?

New America explores care professions and the potential threat of Artificial Intelligence.

Read the full article featuring commentary from ICT’s Albert “Skip” Rizzo here.

Virtual Reality: New Frontiers for Psychiatric Disorders

Italian blog ‘State of Mind’ explores the role virtual reality plays in the psychiatric field. Digging deep into research, including Dr. Albert “Skip” Rizzo’s work, the article offers in-depth information about bridging the psychiatric field and the technology.

Read the full article here.

International Society for Research in Emotion (ISRE)

ISRE
July 25-30, 2017
St. Louis, MO
Keynote Speaker: Jonathan Gratch

This is What the Future of Health Care Looks Like

Fast Company talks the future of health care with ICT’s Todd Richmond and the USC Center for Body Computing. See the full video here.

CVPR 2017

CVPR 2017
July 22-25, 2017
Honolulu, HI
Presentations

ICCM 2017

15th Annual Meeting of the International Conference on Cognitive Modeling
July 22-25, 2017
University of Warwick, UK
Presentations

The Therapeutic Value of Virtual Reality

Move over, psychedelics; VR is coming to a clinic near you.

AlterNet sat down with Dr. Albert “Skip” Rizzo for a download on what VR can bring to the table therapeutically.

Read the full article in AlterNet.

2017 Naval Future Force Science and Technology Expo

2017 Naval Future Force Science and Technology Expo
July 20-21, 2017
Washington, D.C.
Presentations

Star Wars Resembles Advancements in USC Research

Famous icons of the Star Wars saga, such as lightsabers, holograms, and futuristic robotics, can be seen not only in the films by USC Alum George Lucas, but also in the real-life work at USC.

Researchers at USC are creating real-life revolutionary advancements in technology that resemble the effects in the imaginary intergalactic world of Star Wars: The Force Awakens.

Read the full article in USC’s International Academy here.

Can a Machine be Alive?

Julien Crockett of LA Review of Books discusses a panel discussion and meeting with ICT’s Jonathan Gratch.

“On the evening I met Actroid-F, Jonathan Gratch, from USC’s Institute for Creative Technologies (ICT), sat with her creator, Yoshio Matsumoto, both, or should I say all three of them, part of a panel to discuss the uncanny valley, defined as a repulsive tendency toward too much human-ness in something that is not fully human. Gratch, whose background is in the nascent field of affective computing, focused his remarks on the artificial intelligence they are building to bring Actroid-F to “life.” He mused on “if and why and how a machine could ‘have’ an emotion and what good that could be.” Motioning up to the screen, he then introduced “Ellie,” a human-like software agent and the prototype for Actroid-F’s AI. Both of them endowed with a strong posture and calm poise, it is easy to see their similarities.”

To read the full article, visit LA Review of Books here.

Enlisting Virtual Reality to Ease Real Pain

Virtual reality technology engages a person in a 360-degree visual experience. It has been used in medical research for more than two decades, to treat trauma, anxiety and even burn pain. The fact that it can now be accessed with headsets and mobile phones is fueling hospitals’ interest.

The Wall Street Journal explores ways in which VR can help ease pain, and speaks with Dr. Skip Rizzo for more insight. Read the full article here.

A Hologram of a Holocaust Survivor Will Answer Any Question You Have About the Genocide

Soon you’ll be able to ask a hologram of a Holocaust survivor any question you want, and he will answer it.

That’s thanks to the New Dimensions in Testimony project from the USC Shoah Foundation and the USC Institute for Creative Technologies (ICT).

Circa explores more; see the video and read the full article here.

HCI International 2017

HCI International 2017 (Human-Computer Interaction International Conference)
July 11-13, 2017
Vancouver, Canada
Presentations

ESRI User Conference

ESRI User Conference
July 10-14, 2017
San Diego, CA
Presentations

13 Secrets Your Smile Can Reveal About You

Smiles are often used to cover up another emotion. “For example, someone might start to frown then cover this with a smile,” says Jonathan Gratch, director for virtual human research at USC’s Institute for Creative Technologies in Playa Vista, California. “The nature of a smile also communicates subtle information about its authenticity.” Another telltale sign: a smile that starts and ends too quickly is seen as not genuine, he says.

Reader’s Digest explores 13 secrets your smile reveals about you, talking with ICT’s Jonathan Gratch for more insight. Read the full article here.

Using Artificial Intelligence for Mental Health

Innovative technology is offering new opportunities to millions of Americans affected by different mental health conditions.

Advancements in artificial intelligence (AI) are bringing psychotherapy to more people who need it. Nonetheless, the benefits of these methods need to be carefully balanced against their limitations. The long-term efficacy of the AI approach regarding mental health is yet to be tested, but the initial results are promising.

Very Well takes a deeper look into this area, featuring ICT’s MultiSense and SimSensei technologies. Read the full article here.

Get Expert Advice on Mixed-Reality Game Design at VRDC Fall 2017

Organizers of the Virtual Reality Developers Conference would like to quickly let you know about two standout sessions at VRDC Fall 2017, which takes place September 21-22 at a bigger, better venue in San Francisco!

Notably, XEODesign president Nicole Lazzaro will be at the show to give a cutting-edge talk on mixed-reality game design. Her “‘Matrix’ vs. Pokemon Go: The Mixed Reality Battle for the Holodeck” session will feature 3 compelling future MR scenarios to illustrate 5 core MR design techniques.

Lazzaro will draw on her 20+ years of interactive experience design as she dives deep into the design requirements for compelling MR that takes advantage of virtual world overlays, depth maps of existing terrain, NPC and object interaction, and character customization. If you have any interest in creating effective, impactful mixed-reality games and experiences, don’t miss it!

Also at VRDC Fall 2017, Arno Hartholt (Director of Research and Development Integration at the University of Southern California Institute for Creative Technologies) will be presenting a great session, “Immersive Medical Care with VR/AR and Virtual Humans,” which aims to break down how VR/AR offers unique capabilities for health-related research and treatment.

Check it out, and you’ll learn how VR/AR and virtual humans can be applied to worthy causes beyond gaming and entertainment, particularly within the medical domain. You’ll walk away able to discuss examples of how these capabilities allow researchers to study human behavior and specific ailments, and how they can lead to various treatments, e.g., for PTSD or pain. You’ll also get a practical overview of how these systems can be designed, developed and assessed.

And of course, VRDC Fall 2017 organizers look forward to announcing many more talks for the event in the weeks to come. Don’t forget to register early at a discounted rate!

Since tickets sold out for the first three VRDC events, VRDC Fall 2017 will offer more sessions and move to a bigger location at the Hilton Union Square in San Francisco, CA September 21-22.

For more information on VRDC Fall 2017, visit the show’s official website and subscribe to regular updates via Twitter and Facebook.

Via Gamasutra.