How video games are easing our phobias, one spider at a time

From Abby’s acrophobia in The Last of Us Part II to the claustrophobic feeling of cramming yourself into a locker in Alien: Isolation, games are great at replicating the physical sensation of fear. But phobias can be a huge barrier to entry, especially for people afraid of common video game adversaries like spiders; this fear can keep phobic gamers from playing a game they might otherwise enjoy.

The good news is that this ability to terrify is also being used to heal. Neuropsychologists like Dr. Skip Rizzo have spent decades studying how games can be used to treat mental illness.

“I always make sure that I include the term exposure therapy because it’s not VR therapy,” he says. “VR is a tool. The therapy is exposure therapy.”

Watch the video from Polygon to learn more about how VR is being used to help treat people suffering from a variety of illnesses.

Study finds over 64% of people reported new health issues during ‘work from home’

What impact has working from home as a result of the COVID-19 pandemic had on our health? In a new study, researchers from USC found that working from home has negatively impacted our physical and mental health, increased work expectations and distractions, reduced our communication with co-workers, and ultimately lessened our productivity. The study finds that time spent at the workstation increased by approximately 1.5 hours, and that most workers are likely to have less job satisfaction and increased neck pain when working from home. It also illustrates the differential impact of working from home on women, parents, and those with higher incomes.

Nearly 1,000 respondents participated in the survey regarding the impact of working from home on physical and mental well-being. Authored by Ph.D. student Yijing Xiao; Burcin Becerik-Gerber, Dean’s Professor of Civil and Environmental Engineering; Gale Lucas, Research Assistant Professor at the USC Institute for Creative Technologies; and Shawn Roll, Associate Professor of Occupational Science and Occupational Therapy, the study was published in the Journal of Occupational and Environmental Medicine. Becerik-Gerber and Lucas are co-directors of the Center for Intelligent Environments at USC.

The survey was conducted during the early days of the pandemic. Responses regarding lifestyles, home office environments, and physical and mental well-being revealed the following about that first phase of the pandemic’s “work from home” period:

  • Over 64 percent of respondents reported one or more new physical health issues
  • Nearly 75 percent of those surveyed experienced one new mental health issue
  • Female workers with annual salaries under $100,000 were more likely than male workers or higher-income workers to report two or more new physical and mental health issues
  • Female workers had a higher incidence of depression
  • Parents with infants tended to have better mental well-being but also a higher chance of reporting a new mental health issue
  • Having toddlers was associated with physical well-being but also with more physical and mental health issues
  • Living with at least one teenager lowered the risk of new health issues
  • Nearly three-quarters of workers adjusted their work hours, and more than one-third reported scheduling their work around others
  • Workers who adjusted their hours or scheduled their work around others were more likely to report new physical or mental health issues
  • Pets did not appear to have an impact on physical or mental health
  • Workers decreased their overall physical activity and exercise while increasing their overall food intake
  • Decreased physical and mental well-being was correlated with increased food or junk-food intake
  • Only one-third had a dedicated room for work at home; at least 47.6 percent shared their workspace with others

The authors suggest that having a dedicated work from home space would mitigate a number of negative impacts. 

Becerik-Gerber, the study’s corresponding author, said, “The quality of your home workspace is important; having a dedicated workspace signals to others that you are busy and minimizes the chances of being distracted and interrupted. Increased satisfaction with the environmental quality factors in your workspace, such as lighting and temperature, is associated with a lower chance of having new health issues. In addition, knowing how to adjust your workspace helps with physical health.”

###

ARL 68 – Research Assistant, Faster, more accurate, more adaptable decision-making with visualized uncertainty

Project Name
Faster, more accurate, more adaptable decision-making with visualized uncertainty

Project Description
This Army Research Lab-led project is conducting basic research to understand how to help people make faster, more accurate decisions under uncertainty using appropriate visualization and training technologies. We are interested in conventional as well as XR information displays of quantified uncertainty.
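
As a toy illustration of the kind of display this research studies, the sketch below plots point estimates with 95% confidence intervals using matplotlib; the scenario, numbers, and library choice are our own assumptions, not project deliverables.

```python
# Toy illustration of a quantified-uncertainty display: point estimates
# with 95% confidence intervals. Scenario and numbers are invented.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
options = ["Route A", "Route B", "Route C"]
samples = [rng.normal(loc, 1.0, size=50) for loc in (4.0, 5.0, 4.5)]

means = [s.mean() for s in samples]
# 95% CI half-width under a normal approximation: 1.96 * standard error.
halfwidths = [1.96 * s.std(ddof=1) / np.sqrt(len(s)) for s in samples]

plt.errorbar(options, means, yerr=halfwidths, fmt="o", capsize=5)
plt.ylabel("Estimated travel time (hours)")
plt.title("Point estimates with 95% confidence intervals")
plt.show()
```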

Job Description
The selected research assistant will help analyze and write up data, assist in developing, planning, or prototyping future experiments, and/or help with data collection or piloting.

Preferred Skills
• Some programming experience (any of JavaScript, Python, MATLAB, R)
• Experience or interest in research with human participants (psychology, behavioral economics etc.)
• Familiarity with frequentist and/or Bayesian statistics
• Good writing skills
• Willingness to work in a collaborative, interdisciplinary team

Intern Application Sample Page

Welcome to the 2021 USC Institute for Creative Technologies Summer Research Program Application. We would like to collect some basic information from you that will help us process your application. All information will be kept confidential.

* indicates required field

Please tell us more about where you will be enrolled in Fall 2021.

Acceptable file types: doc, docx, pdf.
Maximum file size: 2MB.

Please review the projects and rank your top three choices.

Please choose up to three of your skills/experiences.

Please ask a faculty member who is familiar with your work to provide a letter of reference. The faculty member must send the reference through a university affiliated email address to reference@ict.usc.edu. Confirmation that the reference was received will only be sent to the email address of the faculty member that submitted the reference.

Upon submission of your application, a confirmation will be sent to your preferred email address.

If you experience any problems with your application or if you do not receive a confirmation of your submission, email internprogram@ict.usc.edu.

The University of Southern California values diversity and is committed to equal opportunity in employment.

ARL 67 – Research Assistant, Synthetic Data for Machine Learning

Project Name
Synthetic Data for Machine Learning

Project Description
Machine learning (ML) algorithms require vast amounts of training data to ensure good performance. Synthetic data is increasingly used to train cutting-edge ML algorithms. This research aims to develop an AI-driven model synthesis approach for generating synthetic ML training data. Advanced deep learning techniques, particularly deep generative networks and geometric deep learning, will be explored for model representation and synthesis.
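
The description above names deep generative networks as one direction; the sketch below is a minimal, assumption-laden illustration of that idea: a tiny GAN in PyTorch that learns to synthesize 2-D points matching a target distribution. It shows only the shape of the training loop, not the project's architecture.

```python
# Minimal sketch of the "deep generative network" idea: a tiny GAN that
# learns to synthesize 2-D points matching a real distribution.
# Purely illustrative scale and data; not the project's models.
import torch
import torch.nn as nn

real = lambda n: torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])
G = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    x, z = real(64), torch.randn(64, 4)
    fake = G(z)
    # Discriminator learns: real -> 1, synthetic -> 0.
    loss_d = (bce(D(x), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator learns to fool the discriminator.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print("synthetic samples:\n", G(torch.randn(3, 4)).detach())
```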

Job Description
The work includes pursuing technical solutions and developing core algorithms by applying advanced machine learning and data synthesis techniques. Anticipated research results include new theory and algorithm developments leading to publications in scientific forums, as well as real-world utility and software for demonstrations.

Preferred Skills
• A dedicated and hardworking individual
• Experience or coursework related to machine learning, signal processing, or computer vision/graphics
• Strong programming skills

ARL 66 – Research Assistant, Machine Learning and Computer Vision

Project Name
Machine Learning and Computer Vision

Project Description
Current perception platforms, ground or airborne, carry multimodal sensors with future expectation of additional modalities. This project focuses on the development of advanced machine learning (ML) algorithms for scene understanding using multimodal data, in particular using a diverse dataset consisting of high-fidelity simulated images of various objects and scenes of interest.

Job Description
The work includes pursuing technical solutions and developing core algorithms by applying advanced machine learning, image processing, and computer vision techniques. Anticipated research results include new theory and algorithm developments leading to publications in scientific forums, as well as real-world utility and software for demonstrations.

Preferred Skills
• A dedicated and hardworking individual
• Experience or coursework related to machine learning, computer vision
• Strong programming skills

ARL 65 – Research Assistant, Pupil power: Unlocking the ability to use eye tracking in real-world contexts

Project Name
Investigation of non-cognitive factors and estimation of their influence on the pupil response

Project Description
Thanks to new, inexpensive, non-invasive eye-tracking technology, the pupil response has great potential as a window into the mind for estimating cognitive states that influence performance. However, because pupil size is more strongly driven by non-cognitive factors, it is difficult to attribute pupil size changes to cognitive states. This project will contribute to ongoing efforts to overcome this obstacle by furthering our understanding of the magnitude of the various factors that influence the pupil response.
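
As a toy illustration of the problem framing, the sketch below simulates a pupil signal dominated by a non-cognitive factor (luminance), regresses that factor out, and inspects the residual for the weaker cognitive effect. The linear model and all numbers are our own assumptions, not the project's methods.

```python
# Toy illustration: estimate how much of the pupil signal a non-cognitive
# factor (here, luminance) explains, then inspect the residual as a rough
# proxy for cognitively driven changes. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 500
luminance = rng.uniform(0.0, 1.0, n)          # non-cognitive driver
cognitive_load = rng.binomial(1, 0.5, n)       # hidden cognitive state
# Simulated pupil diameter: strongly driven by luminance, weakly by load.
pupil = 6.0 - 3.0 * luminance + 0.3 * cognitive_load + rng.normal(0, 0.2, n)

# Fit pupil ~ a + b * luminance by ordinary least squares.
X = np.column_stack([np.ones(n), luminance])
coef, *_ = np.linalg.lstsq(X, pupil, rcond=None)
residual = pupil - X @ coef

print(f"luminance slope: {coef[1]:.2f} mm per unit luminance")
print(f"mean residual, high vs. low load: "
      f"{residual[cognitive_load == 1].mean():.3f} vs. "
      f"{residual[cognitive_load == 0].mean():.3f}")
```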

Job Description
If the student has a particular goal or related work at their home institution, they should briefly describe it in the application letter. The scope of the work will be determined based on the skills and interests of the selected applicant, as well as the demands of the project at the time of the internship, but may include data collection, literature review, and statistical analysis.

Preferred Skills
• Programming and statistical analysis in MATLAB, Python, or R
• Machine learning
• Interest in psychophysiology, cognitive neuroscience, and/or psychology

376 – Research Assistant, Charismatic Virtual Tutor

Project Name
Charismatic Virtual Tutor

Project Description
The primary goal of this project is to analyze data from charismatic individuals (audio/video of speeches) to learn the nonverbal behavioral indicators of charisma. The outcome will then be used to procedurally generate synthesized voices and speech gestures that convey charisma in a virtual human.

Job Description
The Research Assistant intern will work with the project leader in support of the project research objectives.

Preferred Skills
•  Background in audio (voice and speech signals) processing and speech synthesis.
•  Knowledge of AI, NLP and ML.
•  Experience with applying ML to audio signals a plus.
•  Experience with COVAREP, openSMILE a plus.
•  Strong programming skills a must. C/C++, Python preferred
•  Minimum education requirement: Master’s in CS or EE

375 – Research Assistant, Game-based learning

Project Name
Teaching Artificial Intelligence through Game-Based Learning

Project Description
This project aims to develop a role-playing game to help high school students learn basic concepts in artificial intelligence.

Job Description
The Research Assistant intern will work with the project leader in support of the project research objectives.

Preferred Skills
•  Experience with building games. (Games for education a plus)
•  AI specialization or focus would be ideal
•  Master’s degree in CS

374 – Research Assistant, Reinforcement learning

Project Name
Develop Explainable Models for Reinforcement Learning

Project Description
This project aims to develop explainable models for model-free reinforcement learning. The explainable models will be used to generate explanations of how the algorithm derives its policies that are understandable to the humans who interact with automation that uses reinforcement learning.
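
The project description does not specify a method, but one common route to explainability, sketched below under our own assumptions, is to distill a learned model-free policy (here, tabular Q-learning on a toy chain world) into an interpretable surrogate such as a shallow decision tree whose rules a human can read.

```python
# Sketch: distill a model-free policy (tabular Q-learning on a small chain
# world with rewards at both ends) into a decision tree, an interpretable
# surrogate readable as if/then rules. Illustrative, not the project's method.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n_states = 10                      # chain world: states 0 and 9 are terminal
moves = [-1, +1]                   # action 0 = left, action 1 = right
Q = np.zeros((n_states, 2))

for _ in range(10000):             # Q-learning with uniform random samples
    s = int(rng.integers(1, n_states - 1))
    a = int(rng.integers(0, 2))
    s2 = s + moves[a]
    r = 1.0 if s2 == n_states - 1 else (0.5 if s2 == 0 else 0.0)
    target = r if s2 in (0, n_states - 1) else r + 0.9 * Q[s2].max()
    Q[s, a] += 0.1 * (target - Q[s, a])

states = np.arange(1, n_states - 1).reshape(-1, 1)
greedy = Q[1:-1].argmax(axis=1)    # the learned policy we want to explain
tree = DecisionTreeClassifier(max_depth=2).fit(states, greedy)
print(export_text(tree, feature_names=["state"]))  # readable if/then rules
```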

Job Description
The Research Assistant intern will work with the project leader in support of the project research objectives.

Preferred Skills
•  Solid knowledge in artificial intelligence
•  Strong programming skills in Python

373 – Research Assistant, Multimodal perception of human behavior

Project Name
Multimodal sensing of human behavior

Project Description
In this project, we will study, develop, and analyze machine learning methods for recognizing human states. The goal of this research is to build novel machine-based approaches to recognizing human intent, motivation, and emotion through multimodal analysis of human behavioral and physiological responses. For more information on our research, you can consult our webpage: https://ihp-lab.org/

Job Description
The intern will assist and contribute to research on machine understanding of emotion and motivation. The work includes pre-processing and data cleaning, as well as applying deep learning to recognize nonverbal behaviors and physiological changes.
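
As a minimal sketch of the multimodal recognition idea (not the lab's actual models), the code below fuses two behavioral feature streams with separate encoders and a shared classification head; all dimensions and names are illustrative assumptions.

```python
# Sketch of late-fusion multimodal recognition: separate encoders for two
# behavioral signal streams, concatenated into an emotion/state classifier.
# Feature dimensions and class count are invented for illustration.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, d_audio=40, d_video=136, n_states=4):
        super().__init__()
        self.audio = nn.Sequential(nn.Linear(d_audio, 64), nn.ReLU())
        self.video = nn.Sequential(nn.Linear(d_video, 64), nn.ReLU())
        self.head = nn.Linear(128, n_states)

    def forward(self, a, v):
        # Encode each modality, then classify the fused representation.
        return self.head(torch.cat([self.audio(a), self.video(v)], dim=-1))

model = FusionClassifier()
logits = model(torch.randn(8, 40), torch.randn(8, 136))
print(logits.shape)  # torch.Size([8, 4])
```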

Preferred Skills
•  Machine learning
•  Python
•  Familiarity with deep learning frameworks, e.g., PyTorch or Keras
•  Familiarity with Linux shell
•  Scientific communication; writing/presenting research

372 – Research Assistant, ML Researcher

Project Name
One World Terrain

Project Description
One World Terrain (OWT) is an applied research effort focusing on researching and prototyping capabilities that support a fully geo-referenced 3D planetary model for use in the Army’s next-generation training and simulation environments. USC-ICT’s research exploits new techniques and advances in the focus areas of collection, processing, storage and distribution of geospatial data to various runtime applications.
The project seeks to:
• Construct a single 3D geospatial database for use in next-generation simulations and virtual environments
• Procedurally recreate 3D terrain using drones and other capturing equipment
• Extract semantic features from raw 3D terrain and point clouds to build a simulation-ready environment
• Develop efficient run-time applications for terrain visualization and simulation
• Reduce the cost and time for creating geo-specific datasets for modeling and simulation (M&S)
• Leverage commercial solutions for the storage, distribution, and serving of geospatial data

Job Description
The ML Researcher will work with the OWT team in support of recreating digital 3D global terrain capabilities that replicate the complexities of the next-gen operational environment for M&S.

Preferred Skills
•  Experience with machine learning and computer vision (Python, TensorFlow, Pytorch)
•  Experience with 3D point cloud and mesh processing
•  Experience with photogrammetry reconstruction process
•  Experience with Unity/Unreal game engine and related programming skills (C++/C#)
•  3D rendering on browsers
•  Web services
•  Interest/experience with Geographic information system applications and datasets

371 – Programmer, One World Terrain

Project Name
One World Terrain

Project Description
One World Terrain (OWT) is an applied research effort focusing on researching and prototyping capabilities that support a fully geo-referenced 3D planetary model for use in the Army’s next-generation training and simulation environments. USC-ICT’s research exploits new techniques and advances in the focus areas of collection, processing, storage and distribution of geospatial data to various runtime applications.
The project seeks to:
• Construct a single 3D geospatial database for use in next-generation simulations and virtual environments;
• Procedurally recreate 3D terrain using drones and other capturing equipment;
• Extract semantic features from raw 3D terrain and point clouds to build a simulation-ready environment;
• Develop efficient run-time applications for terrain visualization and simulation;
• Reduce the cost and time for creating geo-specific datasets for modeling and simulation (M&S);
• Leverage commercial solutions for storage and distribution of serving geospatial data

Job Description
The programmer will work with the OWT team in support of building a unified platform for simulation capabilities that replicate the complexities of the next-gen operational environment for M&S.

Preferred Skills
•  Experience with Unity/Unreal game engine and related programming skills (C++/C#)
•  Web services
•  Interest/experience with Geographic information system applications and datasets

370 – Programmer, Real-Time Modelling and Rendering of Virtual Humans

Project Name
Real-Time Modelling and Rendering of Virtual Humans

Project Description
The Vision and Graphics Lab at ICT pursues research and works in production to perform high-quality facial scans for Army training and simulations, as well as for VFX studios and game development companies. There has been continued research on how machine learning can be used to model 3D data effortlessly with data-driven deep learning networks rather than traditional methods. This requires large amounts of data, more than can be achieved using only raw light stage scans. We are currently working on software both to visualize our new facial scan database and to animate and render virtual humans. To realize and test the usability of this, we need a tool that can model and render the created avatar through a web-based GUI. The goal is a real-time, responsive web-based renderer on a client, controlled by software hosted on the server.

Job Description
The intern will work with lab researchers to develop a tool to visualize assets generated by the machine learning model of the rendering pipeline in a web browser using a Unity plugin and also integrate deep learning models to be called by web-based APIs. This will include the development of the latest techniques in physically-based real-time character rendering and animation. Ideally, the intern would have awareness about physically based rendering, subsurface scattering techniques, hair rendering, and 3D modeling and reconstruction.

Preferred Skills
•  C++; engineering math, physics, and programming; OpenGL/Direct3D; GLSL/HLSL; Unity3D
•  Python, GPU programming, Maya, version control (svn/git)
•  Knowledge of modern rendering pipelines, image processing, rigging, and blendshape modeling.
•  Web-based skills – WebGL, Django, JavaScript, HTML, PHP.

369 – Programmer, Material Reflectance Property Database for Neural Rendering

Project Name
Material Reflectance Property Database for Neural Rendering

Project Description
The Vision and Graphics Lab at ICT pursues research in physically based neural rendering for general objects. Although the modern rendering pipeline used in industry achieves compelling, realistic results, it still has general issues: professionals must manually tweak material properties to match the natural appearance of each object, which is costly for a complex scene with multiple objects. We have to model eyeballs, teeth, facial hair, and skin separately just to render a human face. Neural rendering, by contrast, promises a revolution in easy, high-quality rendering. By building a radiance field with geometry models, lighting models, and material property models using a network, neural rendering can render a complicated scene in real time. Material properties no longer need to be specified by hand; the network assigns them automatically according to the object’s material label. This breakthrough will benefit immersive AR/VR experiences in the future.
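
As a shape-only sketch of the radiance-field idea described above, the code below defines a small network mapping a 3D position, view direction, and material-label embedding to color and density. Layer sizes, names, and the material embedding are our own illustrative assumptions, not the lab's architecture.

```python
# Minimal sketch of a radiance-field network: position + view direction
# (plus a material-label embedding, per the description above) -> RGB color
# and volume density. All sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class TinyRadianceField(nn.Module):
    def __init__(self, n_materials=16, embed_dim=8, hidden=128):
        super().__init__()
        self.material = nn.Embedding(n_materials, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3 + embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),          # 3 color channels + 1 density
        )

    def forward(self, xyz, view_dir, material_id):
        h = torch.cat([xyz, view_dir, self.material(material_id)], dim=-1)
        out = self.mlp(h)
        rgb = torch.sigmoid(out[..., :3])      # colors in [0, 1]
        sigma = torch.relu(out[..., 3:])       # non-negative density
        return rgb, sigma

# One forward pass over a batch of 4 sample points along camera rays.
model = TinyRadianceField()
rgb, sigma = model(torch.randn(4, 3), torch.randn(4, 3),
                   torch.tensor([0, 1, 2, 3]))
print(rgb.shape, sigma.shape)  # torch.Size([4, 3]) torch.Size([4, 1])
```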

Job Description
At the center of neural rendering is material property estimation, for which we need a material database. The intern will work with lab researchers to capture and process this database using our dedicated Light Stage. The database will include a wide range of objects (e.g., cloth, wood, hair) with different reflectance properties, measured under controllable lighting conditions. It will be used to develop physically based neural rendering algorithms.

Preferred Skills
•  C++, OpenGL, GPU programming, Operating System: Windows and Ubuntu, strong math skills
•  Experience with computer vision techniques: multi-camera stereo, optical flow, photometric stereo, Spherical Harmonics.
•  Knowledge in modern rendering pipelines, image processing, computer vision, computer graphics

368 – Programmer, 3D Scene Understanding and Processing

Project Name
3D Scene Understanding and Processing

Project Description
The Vision and Graphics Lab at ICT pursues research and engineering work in understanding and processing 3D scenes, specifically reconstruction, recognition, and segmentation using learning-based techniques. This has important practical applications in autonomous driving, AR, and VR. However, generating realistic 3D scene data for training and testing is challenging due to limited photorealism in synthetic data and the intensive manual work required to process real data. The large scale of complex scenes further increases the difficulty of utilizing such data. We therefore need to develop advanced techniques for better 3D data generation. Our first goal is an automated method for data cleanup, organization, annotation, and completion of both real and synthetic data, in either image space or 3D space, to generate well-structured data for multiple learning-based 3D tasks. We will then use the data to train neural networks for joint reconstruction and segmentation of large-scale 3D scenes.

Job Description
The intern’s task will focus on 3D data processing by developing intelligent algorithms, and editing/visualization tools, to fix problems in the current dataset (inaccurate segmentation, incomplete surfaces, distorted textures, and so on) and generate clean and accurate 3D models for training. Meanwhile, the intern will also join the research in 3D scene reconstruction and segmentation using these data.

Preferred Skills
•  Good knowledge in Computer Vision and Graphics, familiar with 3D rendering pipelines and image processing.
•  Engineering math, physics, and programming; C++, Python, GPU programming, Unity3D, OpenGL.
•  Basic skills in deep learning and experience in training networks.

367 – Research Assistant, Multiagent Modeling of Human Social Reasoning with Theory of Mind

Project Name
Multiagent Modeling of Human Social Reasoning with Theory of Mind

Project Description
The Social Simulation Lab works on modeling and simulation of social systems from small group to societal level interactions, as well as data-driven approaches to validating these models. Our approach to simulation relies on multi-agent techniques where autonomous, goal-driven agents are used to model the entities in the simulation, whether individuals, groups, organizations, etc., as well as how those entities model each other (i.e., their Theory of Mind).

Job Description
The research assistant will investigate automated methods for building agent-based models of human behavior. The core of the task will be developing and implementing algorithms that can analyze human behavior data and find a decision-theoretic model (or models) that best matches that data. The task will also involve using those models in simulation to further validate their potential predictive power.
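
A minimal sketch of the fitting step described above, under our own assumptions about the model class: recover by maximum likelihood the reward weights of a softmax decision model from observed choices. The lab's actual models (e.g., POMDPs with Theory of Mind) are far richer; this only illustrates the "find the model that best matches the data" idea.

```python
# Toy sketch: fit a decision-theoretic (softmax utility) model to observed
# choices by maximum likelihood. Model form, names, and data are invented.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_trials, n_options, n_features = 300, 3, 2
features = rng.normal(size=(n_trials, n_options, n_features))
true_w = np.array([1.5, -0.8])                 # hidden reward weights

def choice_probs(w):
    u = features @ w                           # utility of each option
    e = np.exp(u - u.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Simulate noisy "human" choices from the true model.
p = choice_probs(true_w)
choices = np.array([rng.choice(n_options, p=pi) for pi in p])

def neg_log_lik(w):
    return -np.log(choice_probs(w)[np.arange(n_trials), choices]).sum()

fit = minimize(neg_log_lik, x0=np.zeros(n_features))
print("recovered weights:", np.round(fit.x, 2), "true:", true_w)
```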

Preferred Skills
•  Knowledge of multi-agent systems, especially decision-theoretic models like POMDPs.
•  Experience with Python programming.
•  Knowledge of psychosocial and cultural theories and models.

366 – Research Assistant, Medical VR

Project Name
Integrated Virtual Roleplayers for Simulation and Training

Project Description
The goal of the Integrated Virtual Roleplayers effort is to take conversational agents from the lab to real training environments by developing an integrated solution for the research and development of interactive characters. This includes the ability to create (semi) autonomous virtual roleplayers (e.g., allies, adversaries, civilians, tutors, mentors) who converse with end-users using verbal and nonverbal behaviors.

Job Description
The MedVR summer intern will collaborate on existing MedVR projects to investigate the efficacy of virtual humans in clinical applications. This position is part of a multidisciplinary research team including psychologists, computer scientists, and game developers in which the intern will assist project leaders with study design and execution. This position will be directly involved in all aspects of the research process, including idea generation, research design, research execution, data analysis, publication and dissemination.

Preferred Skills
•  Students currently pursuing a bachelor’s or master’s degree in psychology or related field.
•  Experience with human subjects research.
•  Study design and survey development in Qualtrics.
•  Data collection utilizing Mechanical Turk.
•  Data analysis in SPSS and/or R.
•  Previous experience in qualitative analysis and data coding in NVivo, or a desire to learn NVivo.

365 – Research Assistant, Personal Assistant for Lifelong Learning (PAL3): COPE

Project Name
Personal Assistant for Lifelong Learning (PAL3): COPE

Project Description

The military offers a wide array of resources to increase resilience and reduce destructive behaviors (e.g., suicide, alcohol abuse, sexual assault). Unfortunately, these resources often require searching across many web portals, and the resources are often static (e.g., no interaction, no personalization). Additionally, social stigmas may present barriers to seeking out information or help directly. This poses problems, as many individuals who need health services fail to seek them out [1]. The Personal Assistant for Life-Long Learning (PAL3) uses artificial intelligence to address these issues through just-in-time training and interventions that are tailored to the individual. PAL3 is a mobile coach that provides support throughout a sailor’s career by understanding where learners are (knowledge, past training, and experience) and where they want to go (career and learning goals), and by giving personalized, adaptive coaching and learning resource recommendations to build a more resilient and lethal workforce [2]. PAL3 has demonstrated increased skill retention, even during voluntary use [3].

In this research, we seek to prototype and study the effects of leveraging artificial intelligence to deliver personalized, interactive support to increase resilience and reduce destructive behaviors and support a Culture of Excellence. Specifically, this research will examine two mechanisms:
A) Intelligent Systems for Educating to Reduce Destructive Behavior
B) Virtual Agent Counseling Portal (Feasibility Study)

Job Description
Help create high-quality intelligent learning technology content. Author new training content.

Preferred Skills
•  Prior teaching or tutoring experience is a must
•  Experience with intelligent tutoring systems
•  Content authoring

364 – Research Assistant, NLP Virtual Mentoring & Tutoring (CareerFair.ai / OpenTutor)

Project Name
CareerFair.ai: Increasing Connections to Fast Growing STEM Careers

Project Description
This project will help students increase their opportunities to learn from mentors by developing and disseminating CareerFair.ai, a web-based portal where: a) students can interact for free with virtual STEM professionals in DoD priority areas; and b) STEM professionals can build their own intelligent mentors. In a prior ONR STEM grant, we developed MentorPal: a machine-learning natural language understanding (NLU) system that can be automatically trained to identify the most appropriate response to an input question by processing video-recorded answers. Our project will scale this up by developing a self-serve platform for recording and publishing virtual mentors. The result will be a sustainable, expanding virtual career fair where students can talk to a diverse array of professionals to learn about different pathways to STEM careers.
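
The materials do not detail MentorPal's classifier, so the sketch below shows a standard baseline for this kind of question-to-answer matching: TF-IDF vectors over each recorded answer's example questions, with cosine similarity to pick the best answer. The data, identifiers, and approach are invented for illustration, not MentorPal's code.

```python
# Baseline sketch of question-to-recorded-answer matching: embed the example
# questions for each recorded answer with TF-IDF and return the answer whose
# questions best match the input. Hypothetical data; not MentorPal's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical training pairs: (example question, id of recorded video answer)
pairs = [
    ("what do you do at work every day", "a_daily_work"),
    ("describe a typical day on the job", "a_daily_work"),
    ("how did you get into this career", "a_career_path"),
    ("what degree do I need for this job", "a_education"),
]
questions, answer_ids = zip(*pairs)

vectorizer = TfidfVectorizer().fit(questions)
index = vectorizer.transform(questions)

def best_answer(user_question):
    sims = cosine_similarity(vectorizer.transform([user_question]), index)
    return answer_ids[sims.argmax()]

print(best_answer("what should I study to do your job"))  # -> a_education
```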

Job Description
Focus on natural language processing and dialog system algorithms for MentorPal and/or OpenTutor. Research active learning algorithms (i.e., human-in-the-loop labeling) and/or semi-supervised systems (i.e., leveraging unlabeled data to improve dialog outcomes). Research techniques to improve dialog quality based on reward functions or other measures of dialog action priority/importance, rather than purely reactive question-answering. Contribute to competitive peer-reviewed publications.

Preferred Skills
•  Natural language processing (e.g., Python NLTK, SciKit-Learn)
•  Dialog systems
•  MERN stack
•  Familiarity with deep learning tools (e.g., AllenNLP / Fast.ai / TorchText; and/or PyTorch / TensorFlow)

363 – Research Assistant, Cloud Authoring Pipelines for Intelligent Systems (CareerFair.ai)

Project Name
CareerFair.ai: Increasing Connections to Fast Growing STEM Careers

Project Description
This project will help students increase their opportunities to learn from mentors by developing and disseminating CareerFair.ai, a web-based portal where: a) students can interact for free with virtual STEM professionals in DoD priority areas; and b) STEM professionals can build their own intelligent mentors. In a prior ONR STEM grant, we developed MentorPal: a machine-learning natural language understanding (NLU) system that can be automatically trained to identify the most appropriate response to an input question by processing video-recorded answers. Our project will scale this up by developing a self-serve platform for recording and publishing virtual mentors. The result will be a sustainable, expanding virtual career fair where students can talk to a diverse array of professionals to learn about different pathways to STEM careers.

Job Description
Become part of a cutting-edge team of mobile programmers working on the CareerFair.ai / MentorPal prototype. Work with the MERN stack, with a focus on the React UI and automation of a video processing pipeline. Develop recommender and visualization systems that will help a layperson train and improve an AI mentor based on themselves. Contribute to an open-source project that advances the state of the art for intelligent systems, and publish peer-reviewed articles on your work.

Preferred Skills
•  React UI
•  MERN stack
•  Video processing software

362 – Research Assistant, Machine Learning Intern

Project Name
Semi-Supervised Learning for Assessing Team Simulations (SLATS)

Project Description
The Semi-Supervised Learning for Assessing Team Simulations (SLATS) project has two focus areas. First, this work will develop a process for classifying team quality that can be performed even when only a limited amount of labeled data is available (e.g., trainers have assessed team performance), by leveraging patterns in unlabeled data. Second, this research will develop diagnostics of the causes for team performance that bridge theory with common use-cases for team metrics (e.g., real-time feedback, adaptive scenario events). This work will involve formalizing and implementing models and services that connect team and individual performance, infer credit/blame for training performance, and calculate and distribute metrics. The goal of this work is also to produce a library of metrics that is extensible, so that improved metrics and methodologies can replace older ones over time.
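
As a sketch of the semi-supervised pattern described above (one standard instance, not necessarily SLATS's algorithm), the code below self-trains a classifier from a few labeled team records plus many unlabeled ones, using scikit-learn's convention of marking unlabeled samples with -1. The synthetic dataset stands in for trainer-assessed team logs.

```python
# Sketch of the semi-supervised idea: train on a few "trainer-labeled" team
# records, then let self-training pseudo-label confident unlabeled records.
# Synthetic data; illustrative only, not the SLATS pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
y_partial = y.copy()
rng = np.random.default_rng(0)
unlabeled = rng.random(len(y)) < 0.9   # pretend 90% lack trainer labels
y_partial[unlabeled] = -1              # scikit-learn's "unlabeled" marker

model = SelfTrainingClassifier(LogisticRegression(), threshold=0.8)
model.fit(X, y_partial)
print("accuracy on all data:", model.score(X, y))
```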

Job Description
Research machine learning approaches to classify team behaviors and performance. Apply machine learning to databases of logs of training experiences. Optimize algorithms for medium- to large-sized data sources.

Preferred Skills
•  Machine Learning (Python frameworks, SciKit-Learn, PyTorch, TensorFlow, Pandas, etc.)
•  Data Analysis, Data Pipelines, Statistical methods
•  MongoDB

Virtual Reality Treatment Helps Veterans With PTSD

New research shows post-traumatic stress disorder has increased since the pandemic began. The I-Team found a virtual reality treatment that puts veterans back in some of the worst moments of their lives to help them move forward. Lolita Lopez reports for NBC4 News at 6 p.m. on Nov. 13, 2020.

Watch the segment with NBC4 Los Angeles.

How VR and 5G can help returning vets heal

There are approximately 20 million veterans in the United States. Over a third of them have reported at least one bout of post-traumatic stress disorder (PTSD). And this year, military suicides are up by 20%. Can virtual reality (VR) help our vets overcome the traumas of war?

While traditional in-person therapy has been largely off-limits since the start of the pandemic, new technology has stepped in to try to stem the tide. One clinical psychologist, Dr. Albert “Skip” Rizzo, is using VR to help 80% of veterans in his lab see a meaningful reduction in PTSD symptoms and live more fulfilling lives after their service.

Read the full article, via Verizon.

Veteran suicide crisis demands our action: Skip Rizzo contributes to Tribune Publishing

As we honor those Americans who have sacrificed so much in the service of our country, we must do more to address a threat that stalks all too many of them.

In 2018, 321 active-duty service members took their own lives, including 57 Marines, 68 sailors, 58 airmen and 138 soldiers. This ties 2012 for the highest number of active-duty suicides since close tracking began in 2001.

Continue reading Skip’s opinion piece here.

Big Tech Snags Hollywood Talent to Pursue Enhanced Reality

While most of us have yet to meet our digital avatar, the technology used to make a younger version of Robert De Niro for ‘The Irishman’ and Will Smith for ‘Gemini Man’ is becoming more widely available to people who want their own digital twin. 

Learn more about ICT’s involvement in the industry, via Wall Street Journal.

How virtual reality can help robots change the face of the construction industry

The construction industry is one of the largest industries in the world economy, accounting for 13 percent of the world’s GDP. In the U.S. alone, it employs over seven million people and creates nearly $1.3 trillion worth of structures each year. But it is also one of the slowest to embrace new technology and improve productivity and growth.

Challenged by safety concerns and labor shortages, the construction industry has shown signs of stagnation. Artificial intelligence and automation offer opportunities to advance the industry; however, these come with tremendous challenges, including a lack of education and understanding of how construction teams and robots can successfully work together.

Researchers at the USC Viterbi Sonny Astani Department of Civil and Environmental Engineering (CEE) are tackling these challenges with virtual reality (VR) trainings to teach construction professionals to team with robots to do their jobs more safely and efficiently. The team includes: from the CEE Department Chair Lucio Soibelman, Dean’s Professor of Civil and Environmental Engineering Burçin Becerik-Gerber and Ph.D. candidates Pooya Adami and Patrick Rodrigues; Research Assistant Professor at the USC Institute for Creative Technologies Gale Lucas; and from the USC Rossier School of Education Assistant Professor Yasemin Copur-Gencturk and Postdoctoral Research Associate Peter Wood.

Continue reading.

Expect More Virtual House Calls from Your Doctor Thanks to Telehealth Revolution

USC Trojan Family explores the world of telehealth as COVID-19 continues to halt business as usual. In this piece, the SimSensei project is featured in its telehealth tech roundup. Read the full article for more.

CNN’s ‘Tech for Good’ Highlights Bravemind

Watch the full segment, featuring Dr. Albert ‘Skip’ Rizzo and Iraq war veteran Chris Merkle.

Health care is already benefiting from VR

The Economist explores the many ways in which VR is helpful in the health care industry, including Bravemind in their roundup of treatments.

Read the full article here.

Body Computing Conference 2020

When bots do the negotiating, humans more likely to engage in deceptive techniques

Series of studies reveal conditions under which humans more likely to have virtual intermediaries act deceptively

Recently, computer scientists at the USC Institute for Creative Technologies (ICT) set out to assess under what conditions humans would employ deceptive negotiating tactics. Through a series of studies, they found that whether humans would embrace a range of deceptive and sneaky techniques depended both on their prior experience in negotiating and on whether virtual agents were employed to negotiate on their behalf. The findings stand in contrast to prior studies and show that when humans use intermediaries in the form of virtual agents, they feel more comfortable employing more deceptive techniques than they would normally use when negotiating for themselves.

Lead author of the paper on these studies, Johnathan Mell, says, “We want to understand the conditions under which people act deceptively, in some cases purely by giving them an artificial intelligence agent that can do their dirty work for them.”

Nowadays, virtual agents are employed nearly everywhere, from automated bidders on sites like eBay to virtual assistants on smartphones. One day, these agents could work on our behalf to negotiate the sale of a car, argue for a raise, or even resolve a legal dispute.

Mell, who conducted the research during his doctoral studies in computer science at USC, says, “Knowing how to design experiences and artificial agents which can act like some of the most devious among us is useful in learning how to combat those techniques in real life.”

The researchers are eager to understand how these virtual agents or bots might do our bidding and to understand how humans behave when deploying these agents on their behalf. 

Gale Lucas, a research assistant professor in the Department of Computer Science at the USC Viterbi School of Engineering and at USC ICT, as well as the corresponding author on the study published in the Journal of Artificial Intelligence Research, says, “We wanted to predict how people are going to respond differently as this technology becomes available and gets to us more widely.”

The research team, consisting of Mell, Sharon Mozgai, Jonathan Gratch and Lucas, conducted three separate experiments, focusing on the conditions under which humans would opt for a range of ethically dubious behaviors. These behaviors included tough bargaining (aggressive pressuring), overt lies, information withholding, manipulative use of negative emotions (feigning anger), as well as rapport building and appealing through use of sympathy. Part of these experiments involved negotiations with non-human, virtual agents and programming virtual agents as their proxies.

The researchers found that people were willing to engage in deceptive techniques under the following conditions:

  • If they had more prior experience in negotiation
  • If they had a negative experience in negotiation (as little as 10 minutes of a negative experience could affect their intention to use more deceptive practices in future negotiations)
  • If they had less prior experience in negotiation, but were employing a virtual agent to negotiate for them

Say the authors, “How humans say they will make decisions and how they actually make decisions are rarely aligned.” When people programmed virtual agents to make decisions, they acted much as if they had engaged a lawyer as a representative: through this virtual representative, they were more willing to resort to deceptive tactics.

“People with less experience may not be confident that they can use the techniques or feel uncomfortable, but they have no problem programming an agent to do that,” says Lucas.

Other outcomes: when humans interacted with a virtual agent who was fair, they were fairer, but when the virtual agent was nicer or nasty in terms of its emotional displays, participants did not change their willingness to engage in deceptive practices. 

The researchers also gleaned some insights about human behavior in general.

Compared to their willingness to endorse the more deceptive techniques, including overt lies, information withholding, and manipulative use of negative emotions, “people really don’t have any problem with being nice to get what they want or being tough to get what they want,” says Lucas, which suggests that participants consider these less deceptive techniques more morally acceptable.

The work has implications for ethics on technology use and for future designers. The researchers say, “If humans, as they get more experience, become more deceptive, designers of bots could account for this.” 

Lucas notes, “As people get to use the agents to do their bidding, we might see that their bidding might get a little less ethical.”

Mell adds, “While we certainly don’t want people to be less ethical, we do want to understand how people really do act, which is why experiments like these are so important to creating real, human-like artificial agents.”

###

14th Annual USC Body Computing Conference Rallies Industry Leaders to Discuss the Impact of COVID-19 On Digital Health

Conference to showcase new research and analyze healthcare accessibility for all

LOS ANGELES, Sep. 9, 2020 — On Friday, Oct. 2, the USC Center for Body Computing, part of the USC Institute for Creative Technologies, will host thought leaders and innovators dedicated to discovering how connected technology can revolutionize healthcare and human performance at the 14th Annual Body Computing Conference. The theme of this year’s event, The Digital Health Watershed: COVID-19 and the Tipping Point, will focus on the modernization of healthcare and human performance through technology to make it more affordable and accessible for all during the global pandemic.

At the conference, attendees will learn how to navigate the new normal and the future through interviews with key industry, policy and thought leaders, and engage in discussions about enabling cultures, technologies and policies that allow a transformation of healthcare into “Lifecare”: a myriad of optimized approaches designed and applied to provide a better care model for everyone.

The day-long conference has a rich tradition of gathering the most influential leaders in digital health, including key players from technology makers, academics, and innovative companies to partake in compelling discussions that help foster partnerships, investments and research projects.

“For 14 years we’ve advocated the value of digital tools to promote health access, equity and quality of care. We have also defined the need for a more personalized and continuous health care that brings information and services to the individual,” says Leslie Saxon, MD, founder and executive director of the USC Center for Body Computing. “COVID-19 has accelerated the transition of healthcare to this future model, and our conference this year will help describe the importance of the changes the pandemic has had on healthcare delivery, and help chart the future of healthcare that will encompass solutions for optimizing the individual for human performance, wellness and chronic disease.”

Speakers will be announced closer to the event on Friday, October 2, and the conference will be opened to more attendees through a virtual model. For more information and to register for the 14th Annual Body Computing Conference, visit https://www.uscbodycomputing.org/2020-body-computing-conference

###

Next Steps Forward With Chris Meek Featuring Dr. Skip Rizzo and Chris Merkle

Host Chris Meek speaks with Dr. Albert “Skip” Rizzo, director of medical virtual reality at the University of Southern California’s Institute for Creative Technologies, about Bravemind, the virtual reality (VR) exposure therapy program Rizzo created to successfully treat post-traumatic stress (PTS) in veterans.

Listen to the full podcast here.

The Army’s Next Robot Will Know When You’re Talking Trash to It – And Know When to Talk Back

The Army is developing a system to allow autonomous ground robots to communicate with soldiers through natural conversations — and, in time, learn to respond to soldier instructions no matter how informal or potentially crass they may be.

Researchers from the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory, working in collaboration with the University of Southern California’s Institute for Creative Technologies, have developed a new capability that allows conversational dialogue between soldiers and autonomous systems.

The capability, known as the Joint Understanding and Dialogue Interface (JUDI), is elegant in its simplicity: the system processes spoken language instructions from soldiers, derives the core intent, and carries out a set of functions, according to Dr. Matthew Marge, a computer scientist at ARL.

Continue reading in Task & Purpose.  

U.S. Army Research Enables Conversational AI Between Soldiers, Robot

Dialogue is one of the most basic ways humans use language, and is a desirable capability for autonomous systems. Army researchers developed a novel dialogue capability to transform Soldier-robot interaction and perform joint tasks at operational speeds.

The fluid communication achieved by dialogue will reduce training overhead in controlling autonomous systems and improve Soldier-agent teaming.

Researchers from the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory, in collaboration with the University of Southern California’s Institute for Creative Technologies, developed the Joint Understanding and Dialogue Interface, or JUDI, capability, which enables bi-directional conversational interactions between Soldiers and autonomous systems.

The Institute for Creative Technologies, or ICT, is a Department of Defense-sponsored University Affiliated Research Center, or UARC, working in collaboration with DOD services and organizations. UARCs are aligned with prestigious institutions conducting research at the forefront of science and innovation. ICT brings film and game industry artists together with computer and social scientists to study and develop immersive media for military training, health therapies, education and more.

This effort supports the Next Generation Combat Vehicle Army Modernization Priority and the Army Priority Research Area for Autonomy through reduction of Soldier burden when teaming with autonomous systems and by allowing verbal command and control of systems.

“Dialogue will be a critical capability for autonomous systems operating across multiple echelons of Multi-Domain Operations so that Soldiers across land, air, sea and information spaces can maintain situational awareness on the battlefield,” said Dr. Matthew Marge, a research scientist at the laboratory. “This technology enables a Soldier to interact with autonomous systems through bidirectional speech and dialogue in tactical operations where verbal task instructions can be used for command and control of a mobile robot. In turn, the technology gives the robot the ability to ask for clarification or provide status updates as tasks are completed. Instead of relying on pre-specified, and possibly outdated, information about a mission, dialogue enables these systems to supplement their understanding of the world by conversing with human teammates.”

In this innovative approach, he said, dialogue processing is based on a statistical classification method that interprets a Soldier’s intent from their spoken language. The classifier was trained on a small dataset of human-robot dialogue in which human experimenters stood in for the robot’s autonomy during the initial phases of the research.
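
The article says only that JUDI uses a statistical classifier over spoken-language transcripts; the sketch below shows the general shape of such a small-data intent classifier (TF-IDF features plus logistic regression). The utterances, intent labels, and model choice are our assumptions, not ARL's implementation.

```python
# General shape of a small-data intent classifier like the one described:
# map a spoken instruction (transcribed to text) to a task intent. The
# utterances, intents, and model choice here are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "move to the door on your left", "go forward ten meters",
    "drive toward the building ahead",
    "take a photo of the doorway", "send me a picture of that window",
    "stop where you are", "hold your position",
]
intents = ["move", "move", "move", "photo", "photo", "stop", "stop"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(utterances, intents)

print(clf.predict(["drive forward to the building"]))   # likely 'move'
print(clf.predict(["snap a picture of the window"]))    # likely 'photo'
```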

The software developed as part of the collaboration with USC ICT leverages technologies developed in the institute’s Virtual Human Toolkit.

“JUDI’s ability to leverage natural language will reduce the learning curve for Soldiers who will need to control or team with robots, some of which may contribute different capabilities to a mission, like scouting or delivery of supplies,” Marge said.

The goal, he said, is to shift the paradigm of Soldier-robot interaction from today’s heads-down, hands-full joystick operation of robots to a heads-up, hands-free mode of interaction where a Soldier can team with one or more robots while maintaining situational awareness of their surroundings.

According to the researchers, JUDI is distinct from current similar research conducted in the commercial realm.

“Commercial industry has largely focused on intelligent personal assistants like Siri and Alexa – systems that can retrieve factual knowledge and perform specialized tasks like setting reminders, but do not reason over the immediate physical surroundings,” Marge said. “These systems also rely on cloud connectivity and large, labeled datasets to learn how to perform tasks.”

In contrast, Marge said, JUDI is designed for tasks that require reasoning in the physical world, where data is sparse because it requires previous human-robot interaction and there is little to no reliable cloud-connectivity. Current intelligent personal assistants may rely on thousands of training examples, while JUDI can be tailored to a task with only hundreds, an order of magnitude smaller.

Moreover, he said, JUDI is a dialogue system adapted to autonomous systems like robots, allowing it to access multiple sources of context, like Soldier speech and the robot’s perception system, to help in collaborative decision-making.

This research represents a synergy of approaches created by ARL researchers from the lab’s Maryland locations and ARL West in Playa Vista, California, who are part of the lab’s Human Autonomy Teaming, or HAT, and Artificial Intelligence for Maneuver and Mobility, or AIMM, Essential Research Programs, together with experts in dialogue from USC ICT. The group’s speech recognizer also leveraged a speech model developed as part of the Intelligence Advanced Research Projects Activity’s Babel program, designed for reverberant and noisy acoustic environments.

JUDI will be integrated into the CCDC ARL Autonomy Stack, a suite of software algorithms, libraries and components that perform specific functions required by intelligent systems, such as navigation, planning, perception, control and reasoning, which was developed under the decade-long Robotics Collaborative Technology Alliance.

Successful innovations in the stack are also rolled into the CCDC Ground Vehicle System Center’s Robotics Technology Kernel.

“Once ARL develops a new capability that is built into the autonomy software stack, it is spiraled into GVSC’s Robotics Technology Kernel where it goes through extensive testing and hardening and is used in programs such as the Combat Vehicle Robotics, or CoVeR, program,” said Dr. John Fossaceca, AIMM ERP program manager. “Ultimately, this will end up as Army owned intellectual property that will be shared with industry partners as a common architecture to ensure that Next Generation Combat Vehicles are based on best of breed technologies with modular interfaces.”

Moving forward, the researchers will evaluate the robustness of JUDI with physical mobile robot platforms at an upcoming AIMM ERP-wide field test currently planned for September.

“Our ultimate goal is to enable Soldiers to more easily team with autonomous systems so they can more effectively and safely complete missions, especially in scenarios like reconnaissance and search-and-rescue,” Marge said. “It will be extremely gratifying to know that Soldiers can have more accessible interfaces to autonomous systems that can scale and easily adapt to mission contexts.”

###

VR Tools Becoming More Common in Real-World Practice

At the 2020 virtual Psych Congress Elevate conference, Dr. Skip Rizzo, director of the medical virtual reality lab at the University of Southern California’s Institute for Creative Technologies in Los Angeles, used an example of a VR simulation that helps people with a fear of flying. “In this case, these environments are deliverables using low-cost VR headsets that don’t even require a computer. All the processing is done on a headset that’s about $400,” he said. 

“The accessibility of this technology has dramatically changed in the last couple of years,” Dr. Rizzo explained. “You can pull a VR headset out of your desk drawer and hand it to a client, rather than having to use an exotic computer, and be able to deliver this type of treatment more effectively.”

Continue reading in Psychiatry & Behavioral Learning Network.

Dr. Albert “Skip” Rizzo on Virtual Reality

Albert “Skip” Rizzo, PhD, describes virtual reality in this clip from his “Virtual Reality: The New Frontier in Mental Health” session, which will be presented at the 2020 Psych Congress Elevate conference, July 25-27, 2020.

VIRTUAL – HCI International 2020

VIRTUAL – 11th International Conference on Applied Human Factors and Ergonomics (AHFE 2020)

To Mitigate and Track Covid-19, Entrepreneurs Push to Develop Tools

The Wall Street Journal speaks with industry experts about specific tools in development to help fight Covid-19. In this article, reporter David Ewalt discusses with ICT’s Skip Rizzo the concept of a virtual reality game to help children deal with the pandemic.

Read the article here.

RIDE

Download a PDF overview.

Rapid Integration & Development Environment

A VIRTUAL ENVIRONMENT TESTBED ACCELERATING DOD SIMULATION TECHNOLOGIES

OVERVIEW

RIDE combines features inherent to commercial game engines with many of the immersive technologies developed throughout the ICT research portfolio. In direct support of Department of Defense (DoD)-funded research, ICT has created this state-of-the-art simulation research and development framework, which has proved invaluable in advancing the research of ICT, collaborators, and stakeholders across multiple lines of effort.

RIDE combines and integrates – into a single simulation framework – the following unique capabilities created through ICT research and development: One World Terrain (OWT) data and tools; generative programming; networking; machine learning (ML) tools; speech recognition; natural language processing; character AI behaviors; and scenario event development.

RIDE is integrated with commercial game engines, allowing the re-use of visual art, 3D models, and other simulation technologies in a common platform, thereby reducing the effort required to create divergent simulation prototypes. Future ICT work will focus on advancing novel AI and ML approaches; adding narrative summarization to support after-action reviews; supporting research with mixed reality technologies; and expanding the implementation of RIDE across multiple commercial game engines.

ICT has been successful in making RIDE available to DoD organizations interested in using and sponsoring capabilities to support research and development objectives. ICT is working to create a DoD “RIDE Community of Users” in order to expand the awareness of RIDE, encourage its widespread use across the DoD simulation community, and leverage the expertise of DoD researchers and developers to diversify future RIDE capabilities. Sponsored research and development models for RIDE can also be made available to broaden and accelerate advanced simulation prototypes.

For more information and introductory video, please visit the RIDE website.

Army Looks to Better Attract Gaming Industry for Training Simulations

The Army’s STE information system, which is currently in development, will function much like the operating system on a smartphone.

When the iPhone was first released, it only had a handful of standard applications developed by Apple. The company then created its App Store, which now has over 2 million apps available to download on iPhones.

The STE information system will have three baseline apps: training simulation software that will drive simulations; training management tools to plan, execute and assess training; and One World Terrain, 3-D terrain data that is readily accessible, either on hand or pulled from a commercial asset into simulators in less than 72 hours.

Continue reading, via U.S. Army.

Suicide Crisis – PTSD, Suicide and Bravemind

The Bravemind application has been implemented at Ozark Center, part of an initiative with the Veteran Integration Program in Missouri.

Watch the full segment, via KSN Local News.

AI Therapists Promise to Help You Cope with Coronavirus Isolation

Gale Lucas and her research are featured in this piece from OneZero about human-computer interaction in the time of COVID-19.

Read the full article here.

Do Humans Dream of Androids Dreaming?

ICT’s Jonathan Gratch provides insight on emotionally sophisticated personal assistants for USC Dornsife News.

Read the full article here.

Will COVID-19 Pave the Way for Home-Based Precision Medicine?

COVID-19 could fundamentally alter the way we deliver healthcare, abandoning the outdated 20th century brick and mortar fee-for-service model in favor of digital medicine. At-home diagnostics may be the leading edge of this seismic shift and the pandemic could accelerate the product innovations that allow for home-based medical screening.

“That’s the silver lining to this devastation,” says Dr. Leslie Saxon, executive director of the USC Center for Body Computing at the Keck School of Medicine in Los Angeles. As an interventional cardiologist, Saxon has spent her career devising and refining the implantable and wearable wireless devices that are used to treat and diagnose heart conditions and prevent sudden death. “This will open up innovation—research has been stymied by a lack of imagination and marriage to an antiquated model,” she adds. “There are already signs this is happening—relaxing state laws about licensure, allowing physicians to deliver health care in non-traditional ways. That’s a real sea change and will completely democratize medical information and diagnostic testing.”

Continue reading in leapsmag.

Marine Training May Take More Mental Than Physical Stamina

Continued coverage of recent CBC research, via PsychCentral.

How the U.S. Marine Corps Remains the ‘Best of the Best’

The Marine Corps has a desire to maximize the number of Marines who pass selection, without compromising their fitness standards or making the course any easier. If they could predict who would be likely to fail or pass before Recon selection, training would be more efficient. One researcher decided to try and answer the question, what separates those who pass from those who fail?

Continue reading about recent research from the Center for Body Computing, via Yahoo! News.

Gale Lucas

Mona Sobhani

Sharon Mozgai

Leslie Saxon

The Rise of Biometrics in Sports

“We’re not just trying to preserve an individual in the short term. We want methods that will preserve that individual into their postelite athlete life. One has to think about what is going to ultimately affect the health of that individual—including their nutritional, emotional, and cognitive needs—over the long term,” Leslie Saxon for IEEE Pulse.

Continue reading the full article here.

Jessica Brillhart

Kyle McCullough

David Traum

Psychological Factors Matter More than Physical Performance for Marine Training

Additional coverage of CBC’s recent published research, via News Medical Life Sciences.

INVRSE

Download a PDF overview.

A LOW-COST PLATFORM FOR MOBILE-BASED HARDWARE THAT INTEGRATES CASUAL IMMERSIVE EXPERIENCES SEAMLESSLY INTO 2D TOUCH-SCREEN BASED MEDIA

The hardware component of INVRSE is a simple lens assembly that slides onto the top portion of your tablet or over your phone screen. This enables you to view content in virtual or augmented reality alongside traditional media formats like text, photos, or video. What results is a cutting-edge content ecosystem that is accessible, scalable, and cost effective. The goal of INVRSE is to leverage all the great things that immersive media can do while eliminating the logistical limitations and user hurdles that come with it.

A LOW-COST SOLUTION: Immersive hardware for technologies such as virtual reality and augmented reality tends to be expensive to purchase and costly to maintain. The costs of creating INVRSE devices are nominal, and the hardware can be easily reproduced on any 3D printer.

EASY TO USE: INVRSE is straightforward, allowing anyone – regardless of technical ability, familiarity, or skillset – to experience immersive media. If you know how to use a tablet, then you know how to use INVRSE.

ALL TYPES OF MEDIA IN ONE PLACE: INVRSE offers a hybrid and holistic media ecosystem, combining traditional formats like text, photos and videos with emerging mediums like VR and AR. INVRSE allows users to intuitively and casually jump in and out of various formats without having to switch apps, platforms, or devices.

CLOUD-BASED & NETWORK READY: Content for INVRSE runs on widely used mobile-based technology and hardware. Media can be streamed over WiFi or a cellular network, or pulled down locally from the cloud so it can be accessed in areas with limited to no connectivity. INVRSE also has the capacity to go from a one-on-one experience to a fully social one: creators can network devices together and designate a leader to guide a team of users through an immersive experience. This is particularly impactful in sectors that have training and education at their cores.
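
As a rough illustration of the leader-guided session model just described, the sketch below shows one way a designated leader device might cache media locally for offline playback and broadcast scene changes to a set of follower devices. INVRSE’s real networking protocol is not documented here, so the structure and field names are assumptions.

```python
# Hypothetical sketch of a leader-guided session; INVRSE's actual
# networking design is not described in this overview.
import json


class Session:
    """One leader broadcasts scene changes to every follower device."""

    def __init__(self, leader_id: str):
        self.leader_id = leader_id
        self.followers = {}      # device_id -> last scene received
        self.cached_media = {}   # media_id -> bytes, for offline playback

    def cache(self, media_id: str, payload: bytes):
        # Pull content down locally so it still plays with no connectivity.
        self.cached_media[media_id] = payload

    def broadcast(self, scene: dict) -> str:
        # In a real system this message would travel over WiFi or cellular.
        message = json.dumps({"from": self.leader_id, "scene": scene})
        for device_id in self.followers:
            self.followers[device_id] = scene
        return message


session = Session(leader_id="teacher-tablet")
session.followers = {"student-1": None, "student-2": None}
session.cache("molecule_ar", b"...binary asset...")
print(session.broadcast({"media_id": "molecule_ar", "mode": "AR"}))
```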

EMPOWERS THOSE WHO NEED IT, WHEN THEY NEED IT MOST: Whether you’re a teacher creating a remote chemistry lesson, a platoon commander defining a tactical scenario on the front lines, or a news publisher who wants to leverage all forms of media to tell a cohesive story, INVRSE empowers anyone to create engaging content for the platform. Future capabilities will include an easy-to-use content creation tool, complete with templates and guidelines to help add, edit, and share. INVRSE source code and schematics are open source, empowering users to craft and mod their very own viewers to fit their needs.

For more information, please contact David Nelson, senior producer, MXR Lab at: dnelson@ict.usc.edu.

Healing the Invisible Wounds of War with Virtual Reality

Since 9/11, nearly three million service members have deployed to war zones in Iraq and Afghanistan—about half of them more than once.

Now, an innovative, evidence-based approach to treating PTSD is reaching more veterans than ever before. Called “virtual reality exposure therapy,” it heals by transporting the veteran back to the traumatic war event, into a computer-generated, parallel universe created in a Southern California lab.

Continue reading and listen to the full podcast, via Veterans in America, a special limited-series podcast from RAND.

Marine Training May Take More Mental Than Physical Grit

Keck Medicine of USC study identifies psychological measures that may predict who is more likely to complete – or quit – a demanding marine training course

LOS ANGELES, June 25, 2020 — The United States military has a constant need for service members who can serve in elite and specialized military units, such as the Marine Corps. However, because the training courses for these forces are so rigorous, the dropout rate is high.

To help determine predictors of success or failure in elite military training, Leslie Saxon, MD, executive director of the USC Center for Body Computing, and fellow Center for Body Computing researchers monitored the physical and psychological activity of three consecutive classes of Marines and sailors enrolled in a 25-day specialized training course.

The results were published in the Journal of Medical Internet Research mHealth and uHealth.

A total of 121 trainees participated. Only slightly more than half (64) successfully completed the course.

Researchers found there was no correlation between finishing and performance on physical training standards, such as hikes or aquatic training. Physical markers such as heart rate or sleep status also did not play a role.

Rather, the biggest determinant was mental. Trainees who identified themselves as extroverted and having a positive affect – the ability to cultivate a joyful, confident attitude – were most likely to complete the course.

“These findings are novel because they identify traits not typically associated with military performance, showing that psychological factors mattered more than physical performance outcomes,” says Saxon, who is also a cardiologist with Keck Medicine of USC and a professor of medicine (clinical scholar) at the Keck School of Medicine of USC.

Researchers were also able to pinpoint psychological stressors that triggered dropping out of the course. Trainees typically quit before a stressful aquatic training exercise or after reporting an increase in emotional or physical pain and a decrease in confidence. This led researchers to be able to predict who would drop out of the course one to two days in advance. 
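
To make the kind of signal involved concrete, here is a toy heuristic that flags a trainee when daily survey scores show confidence falling while reported pain rises, the pattern the researchers observed before dropout. The study’s actual model, survey fields, and thresholds are not given in this summary, so everything below is invented for illustration.

```python
# A toy heuristic inspired by the finding above; the study's real
# predictive model and field names are not published in this article.

def dropout_risk(daily_surveys: list) -> bool:
    """Flag a trainee when confidence falls while reported pain rises."""
    if len(daily_surveys) < 2:
        return False
    yesterday, today = daily_surveys[-2], daily_surveys[-1]
    confidence_drop = today["confidence"] < yesterday["confidence"]
    pain_rise = (today["emotional_pain"] > yesterday["emotional_pain"]
                 or today["physical_pain"] > yesterday["physical_pain"])
    return confidence_drop and pain_rise


surveys = [
    {"confidence": 8, "emotional_pain": 2, "physical_pain": 3},
    {"confidence": 5, "emotional_pain": 4, "physical_pain": 3},
]
print(dropout_risk(surveys))  # True: falling confidence plus rising pain
```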

While Saxon has been studying human performance in elite athletes for 15 years, this was her first study involving the military. She partnered with the USC Institute for Creative Technologies, which has established military research programs, to run the study with a training company at Camp Pendleton, Calif., that trains Marines in amphibious reconnaissance. Typically, only around half of the participants finish the training.

The study authors collected baseline personality assessments of the trainees before the recruits began the course, assessing personality type, emotional processing, outlook on life and mindfulness. Researchers next provided subjects with an iPhone and Apple Watch, and a specially designed mobile application to collect continuous daily measures of trainees’ mental status, physical pain, heart rate, activity, sleep, hydration and nutrition during training.

The mobile application also prompted trainees to answer daily surveys on emotional and physical pain, well-being and confidence in course completion and instructor support.

“This study, the first to collect continuous data from individuals throughout a training, suggests that there may be interventions the military can take to reduce the number of dropouts,” says Saxon. “This data could be helpful in designing future training courses for Marines and other military units to increase the number of elite service members, as well as provide insights on how to help athletes and other high performers handle challenges.” 

Saxon is already testing whether or not various psychological interventions or coaching might encourage more trainees to stay the course.

Other USC Center for Body Computing study authors include Rebecca Ebert, BS, senior research coordinator, and Mona Sobhani, PhD, director of research and operations. Researchers from the USC Marshall School of Business and the Department of Computer Science at the USC Viterbi School of Engineering also contributed to the study.

# # #

OVR Technology Delivers First-Of-Its-Kind Scent Experiences for Virtual Reality

Anyone can benefit from the immersive experiences of OVR Technology’s scent platform. However, it’s most advantageous to organizations looking for measurable VR outcomes. Take Bravemind for example. Skip Rizzo of the University of Southern California developed the treatment program for veterans with PTSD. He, too, harnesses the power of virtual reality and olfaction.

Continue reading in AR Post.

CANCELED – Amazon re:MARS 2020

OVR Technology Delivers First-Of-Its-Kind Scent Experience for VR

Skip Rizzo, Director for ICT’s Medical VR group, will use OVR Technology’s platform in the next iteration of Bravemind. Such therapies are crucial, as 20 percent of military veterans suffer from PTSD and 21 veterans die by suicide daily in the United States.

Continue reading in Gear VR Powered.

Body Computing

The Center for Body Computing (CBC) is a digital health research and innovation center focused on digital technology-driven health and human performance solutions for a modern age. Our core expertise is in the use of biometric sensors within devices that are held in the hand, worn on the skin or implanted in the body, to optimize health conditions. The CBC has created a model for the future of health, performance, and chronic disease management that is not confined to a point visit between a subject and health provider.

We use clinical research to test, validate and develop technology to make healthcare more accessible and affordable to a broader population. The results are personalized and continuous health models, reimagined training techniques through evidence-based application of biometric data, and improved health information safety and efficacy. We also have active engagement with experts, programs, and policies that assure current and best-in-breed cybersecurity, data privacy, and ethics practices.

Work with us to see how together we can accelerate our mission to use technology to make healthcare more personal, affordable, and accessible for all.

Tracking Michigan Protesters Raises Privacy, COVID-19 Spread Questions

Collecting phone data for use in public health studies or operations has become a hot issue because of the coronavirus pandemic, said Dr. Mona Sobhani, director of research and operations at the University of Southern California’s Center for Body Computing. Regular citizens might not realize it, but apps commonly collect location information and barter the data to companies interested in targeted marketing, she said. 

Continue reading in The Detroit News.

What It Takes to Become One of Marine Corps’ Elite

Leslie Saxon, Executive Director of the Center for Body Computing conducted a study to determine why so many Marine candidates were dropping out. Her study “sought to continuously quantify the mental and physical status of trainees of an elite United States military unit [Recon Marines] to identify novel predictors of success or failure in successive training classes performed on land and in water.” The results of the study are fascinating.

Continue reading in The National Interest.

Can Virtual Reality Help Sports Fans Experience Game Day In A Post COVID-19 World?

ICT’s MxR Lab Director Jessica Brillhart talks with CBS News about the use of VR during and after a global pandemic.

Read the full story here.

How to Motivate Workers Who Are Managed By an Algorithm?

USC researchers investigate crowdwork — assigning mundane tasks via a website — and determine how to help these workers feel invested in their duties.

By Sara Preto

Many businesses turned to remote workers to continue their operations after states issued stay-at-home orders to reduce COVID-19 infections. It’s a trend that is likely to continue long after the coronavirus is controlled.

To help companies ease the transition online, USC researchers studied the challenges to increasing the use of crowdwork — a manifestation of the gig economy in which companies offer ad-hoc, mundane tasks to prospects via a website. The move minimizes disruptions that organizations would experience as a result of COVID-19 or other crises.

The study, conducted in September through a collection of task responses via Amazon’s Mechanical Turk crowdsourcing platform, shows that workers will need more autonomy over tasks and a clearer sense of purpose to perform often mundane work at a high level — advantages that AI assistance offers.

“Crowdwork functions similarly to Uber, but it is used to perform online tasks like clean data, train artificial intelligence and moderate content,” said Gale Lucas, research assistant professor at the Institute for Creative Technologies at the USC Viterbi School of Engineering.

“As unemployment rates continue to skyrocket, it will likely become even more popular in serving as a stopgap during the current shutdown and as the economy changes due to COVID-19. We need to improve crowdwork and make it more efficient, which could involve new types of supervision assistance using AI.”

The findings were presented May 11 at the International Conference on Autonomous Agents and Multi-Agent Systems in New Zealand. A video presentation is publicly available.

Algorithmic management contributes to crowdwork

With the continuous development of AI technologies, employees and gig workers increasingly encounter software algorithms that assist in assigning their work. Many tasks performed by managers — such as hiring, evaluation and setting compensation — will increasingly rely on AI as a tool.

These newly automated supervisory duties — called algorithmic management — already play a major role at companies like UPS, Uber and Amazon, which outsource tasks to a large pool of online workers.

New research from ICT and Fujitsu Laboratories shows that enhancing worker motivation in a crowdwork environment requires worker autonomy and transparency about how completed tasks have been solved.

Perceptions of autonomy can enhance productivity, especially when the work holds intrinsic meaning for workers, yet crowdwork often seems meaningless. According to the researchers, “More problematically, the meaning of the work is sometimes hidden due to security or experimental control, like when the workers serve as subjects in a scientific experiment. Enhancing user motivation and performance through human-agent interaction is an important challenge, not only for algorithmic management but in other AI disciplines, including educational technology, personal health maintenance, computer games, personal productivity monitoring and crowdsourcing.”

Researchers investigate how to maintain worker motivation

To test the management applications, ICT researchers conducted an online experiment investigating how perceptions of autonomy and the meaningfulness of work shape crowdworker motivation. Yuushi Toyoda, senior researcher for Fujitsu Laboratories, and USC researchers Jonathan Gratch and Lucas examined alternative techniques to maintain crowdworker motivation when their work is additionally managed by an algorithm.

“Given that system designers might be designing autonomous agents that perform some management tasks in the context of algorithmic management, understanding how workers might respond to these systems, especially in remote work conditions, could provide essential guidance for designers,” Toyoda said.

The team found that workers are more motivated when their work has meaning and algorithmic management is framed in a way that highlights worker autonomy. For example, when performing a tedious task like counting the number of infected blood cells on a laboratory slide, workers perform better when they are told about a societally meaningful goal — such as curing an infectious disease — and when feedback supports autonomy with helpful prompts and queries.

“We found that when people knew the goal was to help cure a disease, they actually overreported the number of infected cells. Their desire to see the work succeed actually undermined the usefulness of their work,” said Gratch, ICT director for virtual human research and a USC Viterbi professor of computer science.

In contrast, when the work holds no meaning, productivity is only enhanced when algorithmic management falls back on authoritative managerial control, framing the algorithm as a boss that commands conformity rather than promotes autonomy. That can be a challenge, as it is not always possible to provide the meaning behind a task because this information can sometimes bias results, the researchers said.
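
As described here, the comparison resembles a two-by-two design: the meaningfulness of the task crossed with whether the algorithmic manager frames feedback as autonomy-supportive prompts or authoritative commands. The sketch below lays out that structure; the message templates are invented, since the study’s actual stimuli are only paraphrased in this article.

```python
# Illustrative only: condition wording is invented; the study's real
# manipulations are paraphrased, not quoted, in this article.

def feedback(meaningful: bool, autonomy_framing: bool, task: str) -> str:
    goal = ("You are helping researchers cure an infectious disease."
            if meaningful else "Please process the items as instructed.")
    if autonomy_framing:
        style = f"How would you like to approach the next {task}?"
    else:
        style = f"Complete the next {task} now."
    return f"{goal} {style}"


# The four cells of the 2x2 design:
for meaningful in (True, False):
    for autonomy in (True, False):
        print(feedback(meaningful, autonomy, "slide of blood cells"))
```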

The new findings highlight the importance of autonomy and meaningfulness in a crowdwork environment and contribute to the growing body of literature in algorithmic management and human-AI interaction. Ride-hailing companies like Uber and Lyft currently use algorithmic management via an app that gives employees freedom in scheduling and routes, and findings by the USC research team suggest ways such systems can be improved.

###

USC’s ICT joins entertainment industry artists with computer and social scientists to explore immersive media for military training, health therapies, education and more. Researchers study how people engage with computers through virtual characters, video games and simulated scenarios. ICT is a leader in the development of virtual humans who look, think and behave like real people. Established in 1999, ICT is a U.S. Department of Defense-sponsored University Affiliated Research Center.

The research is supported in part by Fujitsu Laboratories of America and the U.S. Army.

USC Webinar Addresses Impact of COVID-19 on Telemedicine

The COVID-19 crisis has led to an unprecedented increase in the use of telemedicine. To assess what that means for healthcare now and in the long term, the USC Schaeffer Center for Health Policy & Economics gathered experts for a widely viewed webinar on May 19.

Learn more here.

TechNews World Speaks with David Krum

The next generation of Oculus Quest virtual reality headsets is in the works, but pandemic-related product development and supply chain problems may delay market arrival.

Oculus, which is a division of Facebook, has multiple potential Quest successors on the drawing board, Bloomberg reported Tuesday. Smaller, lighter versions with a faster image refresh rate for more realistic rendering are in the advanced testing stage.

Facebook planned to reveal the new models at its annual Oculus Connect conference at the end of the year, but it may have to wait until 2021 to start shipping them because of COVID-19, according to Bloomberg.

The models being tested reportedly are 10-15 percent smaller and weigh about a pound. The current Quest headset weighs 1.25 pounds and can be taxing when worn for extended periods of time.

Saving a few grams here and there can make the headset less tiring to wear, noted David Krum, associate director of the Institute for Creative Technologies’ MxR Lab at the University of Southern California in Los Angeles.

“The fatigue and discomfort adds up over time, so a small weight savings means you will find it more pleasant to wear. You will be able to wear it longer and get more done,” he told TechNewsWorld.

Continue reading in TechNews World.

Expanding the Utility and Interoperability of Rapidly Generated Terrain Datasets

Reality modeling advancements now allow for the generation of high-resolution 3D models from a variety of sources to rapidly meet modeling and simulation needs.

Continue reading in Trajectory Magazine.

POSTPONED – ICASSP 2020

Can a Building Help Thwart the Next Active Shooter?

USC researchers imagine a future in which building security provides a dynamic response to active shooters.

Continue reading in the Spring 2020 issue of USC Viterbi’s Magazine.

The Spoils of Playing War

To build even stronger partnerships with entertainment and academia, the Army founded the Institute for Creative Technologies in 1999 at the University of Southern California. Into the 2000s, the CIA ‘worked with’ the scriptwriter of Zero Dark Thirty and the US Navy was listed as ‘producer’ on four 2012 big-budget releases. Such synergy means reduced production budgets for studios, including low-cost access to military locations and high-end technology. In return, the military can inject pro-war and pro-nationalist framings into scripts.

Continue reading in Red Pepper.

Study: VR Helps You Feel Calm and Connected During Coronavirus Quarantine

CNBC speaks with Dr. Skip Rizzo about the benefits VR can provide during the COVID-19 pandemic.

Read the full article here.

Artificial intelligence is preserving our ability to converse with Holocaust survivors even after they die

60 Minutes features a segment on the New Dimensions in Testimony project in time for the 75th anniversary of the end of WWII and of the liberation of concentration camps across Europe.

Watch the full piece here.

USC Experts Explore New Technologies to Fight COVID-19

From virtual reality and machine learning to smartphone apps and supercomputing power, researchers are determining which technologies will be the most useful in the battle against the coronavirus.

Continue reading in Medical Today Chronicle.

USC Experts Explore How New Technologies Can Combat COVID-19

From virtual reality and machine learning to smartphone apps and supercomputing power, researchers are determining which technologies will be the most useful in the battle against the coronavirus.

Continue reading in USC News.

IEEE International Conference on Pervasive Computing and Communications

A Modest iPad Update Holds the Key to Apple’s AR Future

THIS WEEK, APPLE debuted a new iPad Pro. It has a little more power than the previous model, and a keyboard with a trackpad. Neat. But its most consequential upgrade is the one that will likely get the least use, at least on a tablet: a lidar scanner.

If you’ve heard of lidar it’s likely because of self-driving cars. It’s a useful technology in that context because it can build a 3D map of the sensor’s surroundings; it uses pulses of light to gauge distances and locations in a similar way to how radar uses radio signals. In an iPad Pro, that depth-sensing will be put in the service of augmented reality. But it’s not really about the iPad Pro. Apple put a lidar scanner in a tablet to prepare, almost certainly, for when it puts one in a pair of AR glasses.
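
The distance math behind this is simple: a pulse travels to a surface and back, so the range is half the round-trip time multiplied by the speed of light. A minimal worked example, with made-up numbers:

```python
# Time-of-flight distance estimate: the pulse travels out and back,
# so the distance is half the round trip. Timing value is made up.
SPEED_OF_LIGHT = 299_792_458  # meters per second

def distance_from_round_trip(seconds: float) -> float:
    return SPEED_OF_LIGHT * seconds / 2

# A pulse returning after ~13.3 nanoseconds implies a surface about
# 2 meters away, a typical range for room-scale AR on a tablet.
print(distance_from_round_trip(13.3e-9))  # ≈ 1.99 meters
```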

Continue reading to see ICT’s Jessica Brillhart give commentary to WIRED on the news.

Get Faster, Stronger and Fitter Through the Power of Data

“The data game is about minimizing risk,” Leslie Saxon, a cardiologist and executive director of the USC Center for Body Computing, told attendees at an MIT Sloan Sports Analytics Conference. When the USC women’s soccer coach couldn’t figure out why a team with a legacy of national championships wasn’t winning, data from sensors alerted him that players were running six miles the day before a game, Saxon said. The aha moment spurred more rest before competition, leading to more wins and fewer injuries.

Continue reading in the USC Trojan Family Magazine.

Deepfakes: Battle of the Experts

In the final installment of the deepfake series for Germany’s Shift program on Spektrum, computer scientist Hao Li explains why you should try creating deepfakes yourself.

Watch the video here.

Deepfakes: The Next Big Threat to American Democracy?

As anxieties about foreign interference in the 2020 presidential election grow, concerns about other vectors of misinformation are evident. Deepfakes, realistic video forgeries, have some of the most damaging potential.

Continue reading in Government Technology.

Virtual Reality Research Helps Veterans with PTSD

When Chris Merkle retired from the U.S. Marine Corps in 2010, he struggled to overcome the lingering trauma of having served in Iraq and Afghanistan. While he met with a therapist regularly, Merkle found it difficult to share his overseas experience.

“I was not prepared or ready to deal with the trauma, so I just talked about surface-level problems,” Merkle said. 

His therapist recommended the Bravemind project, which uses virtual reality technology to treat conditions like post-traumatic stress disorder. Patients are outfitted with a head-mounted virtual display and led by a therapist through a stress-inducing war environment. Participants physically hold a rifle as they experience a simulation that includes booming explosions and even smells of burning debris. 

Continue reading in the Daily Trojan.

AI Therapists May Eventually Become Your Mental Health Care Professional

Mental health is a brave new frontier for artificial intelligence and machine-learning algorithms driven by “big data.” Before long, if some forward-looking psychologists, doctors and venture-capital investors have their way, your therapist could be a digital human able to listen, counsel and even bill for that 50-minute hour.

Continue reading, via ABC 14 News in Colorado.

How Martin Luther King Jr. Was Recreated in Virtual Reality

See how TIME partnered with Digital Domain, using ICT’s Light Stage for the scanning process, in recreating the 1963 March on Washington.

How AI ‘Therapists’ Could Shrink the Cost of Mental Health

Could machines that look, act and sound human replace psychologists and psychiatrists? Probably not — that possibility is limited so far by a lack of technological understanding and infrastructure — but many clinicians fear this future nonetheless. Virtual therapists are available anytime, anywhere. “They’re never tired, never fatigued; they can build a giant database of what you say and how you say it,” says Skip Rizzo, director for medical virtual reality at the University of Southern California’s Institute for Creative Technologies.

Continue reading in MarketWatch.

Deepfake Showing Tom Holland, Robert Downey Jr. Shows How Seamless Tech Can Be

The clip, a fun bit of imagined recasting, shows how good deepfake technology has become, leaving some concerned that what voters see this political season may be far from reality.

And deepfakes are not new, with CNN investigating the technology more than a year ago.

University of Southern California professor Hao Li called the practice of deepfakes scary, in an interview with the BBC.

“We are already at the point where you can’t tell the difference between deepfakes and the real thing,” Li said, even though he develops deepfake software himself.

Continue reading and/or watch the full segment, via CBS Atlanta.

Virtual Reality Applications That Can Help Save Lives

With more advancements in technology, we can expect to see more VR applications in medicine, healthcare, crisis management, and other fields.

Continue reading in AR Post.

NPC Newsmaker: Veterans Affairs Secretary Robert Wilkie to announce new initiatives

Secretary of Veterans Affairs Robert Wilkie speaks at a National Press Club Newsmaker on Wednesday, February 5, announcing new initiatives designed to better serve America’s veterans and their families.

Secretary Wilkie discussed the Trump Administration’s plan to prevent veteran suicide, and outlined how the VA will implement the kinds of sweeping organizational changes needed to optimize new technologies and innovations and provide veterans with modern services. 

Watch here.

VR Exposure Therapy Provides Treatment Option for PTSD

“More than just sights and sounds,” explains Dr. Rizzo, “Bravemind uses a virtual reality head-mounted display, directional 3-D audio, vibrations, and smells to generate a truly immersive recreation of the events that can be regulated at a pace the patient can handle.”

Continue reading in VR Fitness Insider.

How Virtual Reality Can Help Treat Mental Health Conditions

Two-thirds of people with mental health disorders will never see a health-care professional. Here’s how VR can draw people to therapy.

Watch the full segment on MarketWatch.

Arkansas Vets Suicide Program Gets D.C. Audience

A House Veterans Committee panel Tuesday reviewed programs from across the country, including one from the Natural State, that are helping to lower suicide rates among veterans.

There was even a presentation from SoldierStrong, which is described as a “Virtual Reality therapy program.”

Wearing goggles, a veteran can see, hear and even smell high-stress simulations.

“Consistent exposure therapy can gradually make difficult memories less harrowing,” a description of the program stated.

Audience members were able to view some of the SoldierStrong images on a screen and listen to some of the sounds.

“Two things that aren’t here today are a rumble board; it’s a series of speakers that add a tactile element to the immersive experience. And also, we have a scent dispenser. … It increases the immersive experience: things like burnt rubber, sweat [and] explosions,” said Sharon Mozgai, a research analyst with the University of Southern California’s Institute for Creative Technologies.

Continue reading.

Virtual Reality System Helping North Texas Veterans with Post-Traumatic Stress

Clinicians are training this week on the StrongMind system, donated by the charitable organization SoldierStrong.

North Texas is one of 13 VA facilities across the country to receive one, including VA medical centers in Houston and San Antonio.

Rather than asking veterans to recall their stressful memories, or imagine the scenarios as part of their therapy, StrongMind puts them in the middle of something they can see, hear, feel and even smell.

Continue reading and watch the full segment on CBS DFW.

Home from War: Asheville VA Aims to Help Veterans with PTSD through Virtual Reality

“If done in the hands of a well-trained clinician, at a pace that the patient can handle, we can really help a patient to go back and confront and reprocess very difficult emotional memories, but in a safe environment,” said Skip Rizzo, research professor at USC Institute for Creative Technologies.

Continue reading the article and watch the segment on WLOS.

Army Modernization Translates into Accepting Risk and Learning Quickly

Two years ago, the Army recognized the need to rapidly and persistently modernize our force to stay ahead of technological change and national competitors.

Continue reading in The Hill.

Army Targeting Goggles, VR Training May Use JEDI Cloud

The Army is building a detailed VR map of the planet, and the service’s CIO sees JEDI as the logical place to host such a massive database.

Continue reading in Breaking Defense.

Deepfakes: A Threat to Democracy Or Just a Bit of Fun

Live from the World Economic Forum, BBC talks deepfakes with Hao Li.

Deepfakes: Do Not Believe What You See

From the World Economic Forum in Davos, ICT’s Hao Li talks about deepfake technology, its potential implications for society, and how we need to react.

Watch here.

Deepfakes

“We use AI to actually synthesize digital humans.” Developers of deepfakes are making huge advances, allowing people to digitally change faces in real time.

Via Bloomberg.  

Pinscreen’s Real-Time Deepfake Demo

The video presents a state-of-the-art demo of real-time deepfake face-swapping technology. What we see is both amazing and terrifying.

The demo was set up for the World Economic Forum in Davos to raise awareness of the danger of deepfakes. It showcases advanced video manipulation technologies and shows how they can be misused for the purpose of disinformation.

Just imagine the potential power of the app. It was developed by Hao Li, CEO/Co-Founder of Pinscreen, Associate Professor of Computer Science at the University of Southern California, and the director of the Vision and Graphics Lab at the USC Institute for Creative Technologies. His work focuses on digitizing humans and capturing their performances for immersive communication and telepresence in virtual worlds.
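
For context on how face swapping of this general kind works: the classic deepfake recipe trains one shared encoder with a separate decoder per identity, then swaps by routing one person’s encoded face through the other person’s decoder. The PyTorch sketch below illustrates that general idea only; it is not Pinscreen’s proprietary real-time method, and the layer sizes are arbitrary.

```python
# Conceptual sketch of the classic autoencoder face-swap idea: one shared
# encoder, one decoder per identity. Not Pinscreen's real-time system.
import torch
import torch.nn as nn

LATENT = 128
IMG = 64 * 64 * 3  # flattened 64x64 RGB face crop

shared_encoder = nn.Sequential(nn.Linear(IMG, 512), nn.ReLU(), nn.Linear(512, LATENT))
decoder_a = nn.Sequential(nn.Linear(LATENT, 512), nn.ReLU(), nn.Linear(512, IMG))
decoder_b = nn.Sequential(nn.Linear(LATENT, 512), nn.ReLU(), nn.Linear(512, IMG))

# Training would reconstruct each identity through its own decoder:
#   loss_a = mse(decoder_a(shared_encoder(face_a)), face_a)
#   loss_b = mse(decoder_b(shared_encoder(face_b)), face_b)

# The swap: encode a face of identity A, decode with B's decoder, so B's
# learned appearance is rendered with A's pose and expression.
face_a = torch.rand(1, IMG)
swapped = decoder_b(shared_encoder(face_a))
print(swapped.shape)  # torch.Size([1, 12288])
```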

Learn more here.

International Meeting on Simulation in Healthcare (IMSH) 2020

OVR Technology Is Creating Olfactory Virtual Reality for Health Care, Education and Training

A key collaborator who has helped guide development of the Architecture of Scent is Albert “Skip” Rizzo, a research professor at the University of Southern California and director for medical virtual reality at USC’s Institute for Creative Technologies. He researches the use of VR to assess, treat, rehabilitate and increase resilience in psychology patients. Rizzo received the American Psychological Association’s 2010 Award for Outstanding Contributions to the Treatment of Trauma for his work using virtual reality-based exposure therapy to treat PTSD.

Continue reading.

Good Cop, Good Cop: Can VR Help to Make Policing Kinder?

As police forces in the US and elsewhere wrangle with accusations of bias and brutality, a quiet effort is underway using VR to boost empathy and reduce trigger responses. Some of the largest police forces in the US are experimenting with VR to build empathy, self-reflection and resilience, as well as hopefully shed their reputation for aggression.

Professor Albert ‘Skip’ Rizzo is a VR veteran stationed at the University of Southern California. He and his colleagues have been working with the Los Angeles Police Department (LAPD) on a pilot project to help police officers build the resilience necessary in their line of work. He is optimistic about what VR could do for the police. “We’re not just willy-nilly throwing technology at problems: there’s a pretty solid rationale for this,” he said.

Continue reading in Engineering & Technology.

Is Seeing Still Believing? The Deepfake Challenge to Truth in Politics

This report from The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative is part of “AI Governance,” a series that identifies key governance and norm issues related to AI and proposes policy remedies to address the complex challenges associated with emerging technologies.

Continue reading.

Deepfakes: Informed Digital Citizens are the Best Defense Against Online Manipulation

Deepfakes, a specific form of disinformation that uses machine-learning algorithms to create audio and video of real people saying and doing things they never said or did, are moving quickly toward being indistinguishable from reality.

Detecting disinformation powered by unethical uses of digital media, big data and artificial intelligence, and their spread through social media, is of the utmost urgency.

Continue reading in The Conversation.