Former ICT staffer Palmer Luckey Named One of Forbes 30 Under 30

Palmer Luckey, who was part of ICT’s MxR Lab, was named one of the brightest young stars in video games by Forbes. Luckey recently launched Oculus Rift, a low-cost VR headset for video games.

At ICT he worked on a number of projects at the MxR Lab involving the research and development of low-cost, wide field-of-view head mounted displays.

ICT’s Paul Debevec earns a film credit on “The Hobbit: An Unexpected Journey”

Paul Debevec, ICT’s associate director for graphics research, earned a credit in the recently released Hobbit film for his work as a visiting researcher at Weta Digital in Wellington, New Zealand, in August and September 2012. The credit is for research and development in the Weta Digital section.

Defense News’ Training and Simulation Journal Features ICT Work on Avatars for Army Training

A Training and Simulation Journal article highlighted ICT research and development efforts to create avatars that mirror their real world counterparts. Stating that more realistic avatars could revolutionize training, the article, which ran in the magazine and is online at Defense News, noted that ICT’s Light Stage system can collect data to create photoreal digital doubles that can match the lighting conditions in game engines like VBS2 and can also transition between facial expressions. ICT executive director Randy Hill stated that research indicates that people identify more with virtual humans who are realistic.

“If you’re going to be embodied by an avatar in a virtual environment, if it looks like you, you’re going to have more of an identity with it,” Hill said. “It’s going to feel like it’s really you.”

The story also stated that Lt. Gen. Robert Brown, current head of I Corps and former commander of the Maneuver Center of Excellence at Fort Benning, has asked TCM Gaming to determine whether more realistic avatars might improve training. During a visit to ICT, Brown had his own face captured in order to personalize his avatar. His hope is that face capture will fit in with a push toward virtual replicas as an aid to motivation and training, the story noted.

“My common sense tells me the more realistic, the better,” Brown said of photorealistic faces. “But there needs to be more research on that.”

Hill also explained that ICT wants to make a portable version of Light Stage that is both inexpensive and quick. The device could then be set up at training depots to rapidly capture faces and expressions for avatars, stated the story, which also quoted Abhijeet Ghosh, of ICT’s Graphics Lab, who said that the Light Stage could be used to create “a digital asset that can mimic the real actor.”

Paul Rosenbloom Wins Kurzweil Prize for Best Paper at Artificial General Intelligence Conference 2012

Paul Rosenbloom has once again been honored with a Kurzweil Prize at the annual Artificial General Intelligence Conference (AGI). This year, Rosenbloom, a computer science professor at the USC Viterbi School and a project leader at ICT, won the best paper award for his paper “Deconstructing Reinforcement Learning in Sigma”. This work is an extension of Rosenbloom’s paper at the 2011 AGI conference, which received the Kurzweil Prize for Best Artificial General Intelligence Idea.

Both prizes, the first for an early stage idea and the more recent one for the maturation of that work, recognize Rosenbloom’s pioneering efforts toward building computer systems that can behave like people in how they make decisions and solve problems.

Artificial general intelligence, or AGI, refers to the design of systems that can emulate full-range human intelligence as opposed to artificial intelligence systems that focus on modeling narrow or specific functions like generating speech, acquiring language or planning actions.

“Significant progress has been made in many individual areas since the founding of AI, but such progress by itself doesn’t yield human level intelligence,” Rosenbloom said. “AGI represents a return to this original vision of AI.”

Rosenbloom leads such an effort at ICT, where he is building the next generation of virtual human architecture – sort of a brain for computer-driven characters – that should enable them to behave appropriately when combined with the proper knowledge and skills. ICT is a leader in research and development of virtual humans, and it is hoped that Rosenbloom’s new architecture – when combined with other developments at ICT and elsewhere – will lead to more human-like systems.

Potential applications could include more intelligent virtual humans, robots and agents – virtual negotiation partners, for example, that learn from their mistakes, react to current situations and alter their behaviors depending on past and present interactions.

“The goal is to integrate all the mechanisms for thought, language, speech and motor control into a single system that can learn from experience,” he said. “Usually when you add functionality in architectures, you add complexity. My work is trying to simplify the process with an architecture that combines elegance with generality.”

Abram Demski: “Logical Prior Probability”

Abstract: A Bayesian prior over first-order theories is defined. It is shown that the prior can be approximated, and the relationship to previously studied priors is examined.

Paul Rosenbloom: “Extending Mental Imagery in Sigma”

Abstract: This article presents new results on implementing mental imagery within the Sigma cognitive architecture. Rather than amounting to a distinct module, mental imagery is based on the same primitive hybrid mixed architectural mechanisms as Sigma’s other cognitive capabilities. The work here demonstrates the creation and modification of compound images, the transformation of individual objects within such images, and the extraction of derived information from these compositions.

Paul Rosenbloom: “Deconstructing Reinforcement Learning in Sigma”

Abstract: This article describes the development of reinforcement learning within the Sigma graphical cognitive architecture. Reinforcement learning has been deconstructed in terms of the interactions among more basic mechanisms and knowledge in Sigma, making it a derived capability rather than a de novo mechanism. Basic reinforcement learning – both model-based and model-free – is demonstrated, along with the intertwining of model learning.
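For readers unfamiliar with the model-free case, the sketch below shows the kind of tabular temporal-difference update that standard reinforcement learning reduces to; it is a generic Q-learning illustration over a made-up three-cell corridor task, not Sigma’s graphical formulation.

```python
# Generic tabular Q-learning: a minimal illustration of model-free
# reinforcement learning. States, actions, and the toy reward are hypothetical.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount factor, exploration rate
ACTIONS = ["left", "right"]              # hypothetical action set
Q = defaultdict(float)                   # Q[(state, action)] -> estimated return

def choose_action(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One model-free temporal-difference (Q-learning) backup."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Toy task: a three-cell corridor (states 0-2) where reaching cell 2 pays reward 1.
for _ in range(200):
    state = 0
    while state != 2:
        action = choose_action(state)
        next_state = max(0, state - 1) if action == "left" else min(2, state + 1)
        reward = 1.0 if next_state == 2 else 0.0
        update(state, action, reward, next_state)
        state = next_state

print(sorted(Q.items()))
```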

Randall Hill, Jr. at the USC Symposium on Digital Media Research, Education and Innovation

ICT Executive Director Randall Hill, Jr. is participating in the USC Symposium on Digital Media Research, Education and Innovation being held at Shutters on the Beach, Santa Monica, CA.

H. Chad Lane: “Using Virtual Humans to Educate and Inspire”

Overview: I will begin by presenting several surprising findings from studies on the psychology of learning. Sometimes our self-perceptions and beliefs can interfere with our desire to learn, and so my hope is that you walk away being more curious about the hidden side of the thinking we do every day and how you can overcome these barriers. The research we do at the USC Institute for Creative Technologies (www.ict.usc.edu) focuses on how to use advanced technologies to teach. One of the key thrusts of our research is building artificially intelligent Virtual Humans who can speak, emote, listen, and participate in conversations. I will provide an overview of what it takes to build them and where I think virtual humans are going in the future. My hope is that you leave the talk wondering more about how you (yourself) learn, how technology helps (and hinders) you, and what kinds of things you might someday do with advanced technologies.

Discover Magazine Features ICT Virtual Characters Addressing PTSD and Depression

A story in the December issue of Discover magazine covers ICT’s SimCoach and SimSensei projects, which are both designed to leverage anonymous virtual human technologies to encourage soldiers, veterans and family members to seek care for mental health issues. “The intention here is not to replace traditional therapists,” said Skip Rizzo, ICT’s associate director for medical virtual reality. “We’re trying to break down barriers. Hopefully, once soldiers feel comfortable asking questions, they’ll feel more comfortable accessing help.”

The article states that SimCoach is being tested by several research teams at four groups of Veterans Administration hospitals and military bases and that over the next year SimSensei should be rolled out in kiosks at crowded military bases and VA hospitals, where soldiers and veterans now often have to wait in long lines for a counseling appointment.

Matthew Jensen Hays, Julia Campbell and Matthew Trimmer: “Can role-play with virtual humans teach interpersonal skills?”

Abstract: Interpersonal and counseling skills are essential to Officers’ ability to lead (Headquarters, Department of the Army, 2006, 2008, 2011). We developed a cognitive framework and an immersive training experience—the Immersive Naval Officer Training System (INOTS)—to help Officers learn and practice these skills (Campbell et al., 2011). INOTS includes up-front instruction about the framework, vignette-based demonstrations of its application, a role-play session with a virtual human to practice the skills, and a guided after-action review (AAR). A critical component of any training effort is the assessment process; we conducted both formative and summative assessments of INOTS. Our formative assessments comprised surveys as well as physiological sensor equipment. Data from these instruments were used to evaluate how engaging the virtual-human based practice session was. We compared these data to a gold standard: a practice session with a live human role-player. We found that the trainees took the virtual-human practice session seriously—and that interacting with the virtual human was just as engaging as was interacting with the live human role-player. Our summative assessments comprised surveys as well as behavioral measures. We used these data to evaluate learning produced by the INOTS experience. In a pretest-posttest design, we found reliable gains in the participants’ understanding of and ability to apply interpersonal skills, although the limited practice with the virtual human did not provide additional immediate benefits. This paper details the development of our assessment approaches, the experimental procedures that yielded the data, and our results. We also discuss the implications of our efforts for the future design of assessments and training systems.

Matthew Jensen Hays, Julia Campbell and Todd Richmond: “Beyond Doctrine: Mobile Training for Dismounted Threat Assessment”

Abstract: During patrols, Soldiers are often dismounted (on foot, outside the protection of a vehicle). Dismounting provides several tactical and strategic benefits. However, the training that Soldiers receive often takes the form of PowerPoint slideshows about improvised explosive devices (IEDs). This approach is insufficient for lasting learning (Hays, Ogan, & Lane, 2010). Further, veterans of multiple patrols report that relying exclusively on pre-scripted doctrine causes Soldiers to set patterns. Insurgents adapt to these patterns in planning their attacks; predictability is vulnerability. We interviewed several subject-matter experts, who said that Soldiers need to understand what underlies insurgents’ decisions and behavior and how their intent to attack generates threat for Soldiers. With this understanding, Soldiers would be better able to recognize and respond to threats, and could employ doctrine in unpredictable ways. To that end, we developed a training system research prototype: the Dismounted Interactive Counter-IED Environment for Training (DICE-T). DICE-T focuses on indicators of threat: elements of the environment, terrain, population behavior, and previous friendly and enemy activity. Critically, DICE-T is designed to enable trainees to understand why something is threatening so that they can recognize the same underlying threat in a novel situation. DICE-T improves on and complements existing training by providing up-front instruction in narrative-based videos; guidance and real-time feedback as trainees practice making decisions required by dismounted patrols; and structured, personalized feedback during an automated post-exercise review (PXR). Finally, DICE-T is entirely contained in a software package that can be installed on an Android tablet. In this paper, we describe DICE-T and our efforts to make it accessible, informative, and educational in a way that might complement mission-rehearsal exercises. We also describe the data-driven process by which we intend to evaluate and improve the system’s ability to transition Soldiers from the classroom to live training.

Paper and Tutorial Presentation on Medical Virtual Reality Topics

Brett Talbot presents a paper on designing useful virtual standardized patient encounters and a tutorial on biological fidelity in medical simulations.

Poster Presentation: A Reranking Approach for Recognition and Classification of Speech Input in Conversational Dialogue Systems

Abstract: We address the challenge of interpreting spoken input in a conversational dialogue system with an approach that aims to exploit the close relationship between the tasks of speech recognition and language understanding through joint modeling of these two tasks. Instead of using a standard pipeline approach where the output of a speech recognizer is the input of a language understanding module, we merge multiple speech recognition and utterance classification hypotheses into one list to be processed by a joint reranking model. We obtain substantially improved performance in language understanding in experiments with thousands of user utterances collected from a deployed spoken dialogue system.
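As a rough illustration of the idea (not the authors’ actual model or feature set), the sketch below merges hypotheses from a speech recognizer and an utterance classifier into one list and reranks them with a simple weighted score; the hypothesis format and weights are hypothetical.

```python
# Toy joint reranking over merged ASR / utterance-classification hypotheses.
# The scoring weights and the hypothesis structure are hypothetical; the
# deployed system's reranking model is more sophisticated.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    text: str            # recognized word string
    asr_score: float     # acoustic/language-model score (higher is better)
    intent: str          # candidate utterance class / dialogue act
    class_conf: float    # classifier confidence for that intent

def rerank(hypotheses, w_asr=0.4, w_class=0.6):
    """Jointly rescore hypotheses instead of trusting the 1-best ASR output."""
    return sorted(hypotheses,
                  key=lambda h: w_asr * h.asr_score + w_class * h.class_conf,
                  reverse=True)

merged = [
    Hypothesis("where is the medic",  asr_score=0.80, intent="ask-location", class_conf=0.55),
    Hypothesis("where is the market", asr_score=0.75, intent="ask-location", class_conf=0.90),
    Hypothesis("wear is the market",  asr_score=0.60, intent="out-of-domain", class_conf=0.20),
]

best = rerank(merged)[0]
print(best.text, "->", best.intent)
```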

ICT’s Mark Bolas Quoted on Virtual Reality in the Orange County Register

Mark Bolas, ICT’s associate director for mixed reality research and development, was quoted in an Orange County Register story about virtual reality headsets that featured the work of former MxR Lab employee Palmer Luckey, who recently founded a company to develop low-cost virtual reality headsets for gaming. The story noted that Luckey worked under Bolas, a leading researcher in head-mounted displays, for about a year.

“I’ve been doing VR for 25 years,” said Bolas, who hired Luckey on the spot after he contacted him for career advice. “He knew as much about the history of my products as I did.”

Paul Debevec: “The Light Stages and Their Application to Photoreal Digital Actors”

Abstract: The Light Stage systems built at UC Berkeley and the USC Institute for Creative Technologies have enabled a variety of facial scanning, reflectance measurement, and performance capture techniques reported in several research papers and used in the motion picture, simulation, and video game industries. We present the evolutionary history of the Light Stage systems with a focus on the roles they have played in creating some of the world’s first photoreal digital actors.

Abhijeet Ghosh: “Polarised Light in Computer Graphics”

Abstract: In Computer Graphics, the polarisation properties of light currently play a role in several contexts: in certain forms of highly realistic ray-based image synthesis (sometimes colloquially referred to as Polarisation Ray Tracing), in some 3D display systems, and in some material acquisition technologies. The properties of light that are behind all of these applications are basically the same, although the technologies for which this property of light is being used differ considerably. Also, the notations and mathematical formalisms used in these application areas differ to some degree as well. This course aims to provide a unified resource for those areas of computer graphics which require a working knowledge of light polarisation: rendering and material acquisition. Consequently, the course is structured into three main parts: I – Background, II – Polarisation Ray Tracing, and III – Polarised Light in Acquisition Technology. Care is taken so that the information provided in Part I is applicable to both Parts II and III of the course, and is formulated in a way that emphasises the underlying similarities.

This presentation is a Juried Course. At SIGGRAPH Asia 2012, the Courses program will feature a variety of instructional sessions from introductory to advanced topics in computer graphics and interactive techniques by speakers from institutions around the globe. Practitioners, developers, researchers, artists, and students will attend Courses to broaden and deepen their knowledge of their field, and to learn the trends of new fields. Join them in Singapore this November!

Abhijeet Ghosh: “Measurement and Modeling of Detailed Facial Reflectance”

Abstract: We present a set of new techniques for practical acquisition and modeling of detailed facial reflectance including separation of individual reflectance components and fitting measured data to appropriate reflectance and scattering models. We also describe a novel computational illumination approach based on measuring the second order statistics of surface reflectance that provides direct estimates of spatially varying specular roughness.

Skip Rizzo Interviewed about Using Virtual Reality to Address PTSD on CNN International and Minnesota Public Radio

Skip Rizzo, ICT’s associate director for medical virtual reality, spoke about using virtual reality as a tool for treating PTSD and for preparing soldiers for stresses of war before they reach the battlefield. Rizzo was interviewed about this work in a live segment on CNN International and in a radio story on Minnesota Public Radio.

ICT’s Chad Lane Quoted on Video Games and Learning in Riverside Press-Enterprise

ICT research scientist H. Chad Lane was featured in a Riverside Press-Enterprise story looking at whether video games can be effective teaching tools.

“That’s like asking are labs effective at teaching science. Of course, some labs are and some labs aren’t. It depends on how you use the labs,” said Lane, who specializes in educational games and artificial intelligence at the USC Institute for Creative Technologies. “The scientific literature is getting at, what are the features of games that seem to promote interest and learning? What are the things that are keeping kids engaged?”

Studies about games show they aren’t necessarily better at conveying knowledge, but they have been shown to get students more engaged in their lessons, he also said.

The Light Stage on KTLA’s Tech Report

The Tech Report, a nationally broadcast television segment from KTLA’s Rich DeMuro, featured the visual effects innovations made possible with the Light Stage technology from ICT’s Graphics Lab.

Check out the full story on the KTLA website.

Conference Participation: National Academies Board on Army Science and Technology

ICT Executive Director Randall W. Hill, Jr. is a current member of the Board on Army Science and Technology (BAST). BAST serves as a convening authority for the discussion of science and technology issues of importance to the Army and oversees independent Army-related studies conducted by the National Academies. In its study oversight role, the BAST takes into account public policy, as well as scientific and engineering considerations. While the BAST does not create policy, it may suggest policy alternatives for the Army to consider. In coordination with the Army, the BAST works to focus study issues and statements of task, reviews committee membership nominations, and provides BAST liaisons to participate in study committee activities.

Paper Presentation: FLoReS: A Forward Looking, Reward Seeking, Dialogue Manager

This talk presents FLoReS, a new information-state based dialogue manager.

Paul Rosenbloom’s On Computing Receives an Outstanding Review in Nature

The first review of On Computing by ICT’s Paul Rosenbloom has just come out. In Nature 491, 331 (15 November 2012) the reviewer, John Gilbey, writes, “On Computing is an unusual, and welcome, mix of conventional academic text and personal odyssey. Any work citing Jane Austen and Richard Feynman in the same chapter easily passes my test for an interesting interdisciplinary read. Much more, this book offers an innovative set of tools that could kick-start debate and research on the future structure of the sciences.” Rosenbloom leads the cognitive architecture group at the USC Institute for Creative Technologies and is a professor of computer science at the USC Viterbi School of Engineering.

Read the full review.

Buy the book on Amazon.

Morteza Dehghani: “Interpersonal Effects of Emotions in Morally-charged Negotiations”

Abstract: Witnessing and displaying emotional expressions play a significant role in the facilitation and coordination of our social interactions. We investigate the impact of facial displays of discrete emotions, specifically anger and sadness, in a morally-charged multi-item negotiation task. Our results support the hypothesis that moral appraisals can be strongly affected by interpersonal emotional expressions. We show that displays of anger may backfire if one of the parties associates moral significance to negotiation objects, whereas displays of sadness promote higher concession-making. We argue that emotional expressions can shift moral concerns within a negotiation in ways that can promote cooperation.

Ari Shapiro and Colleagues Awarded Best Paper at the Fifth International Conference on Motion in Games

Ari Shapiro and his co-authors Yuyu Xu and Andrew Feng won the Best Paper Award at the Fifth International Conference on Motion in Games in Rennes, France. Their paper is titled “Automating the Transfer of a Generic Set of Behaviors onto a Virtual Character.” The paper shows how to leverage third-party models and automatically use them in an animation system without needing any intermediate processing by 3D artists. This greatly reduces the cost of producing 3D animated content.

The Economist Features Research from ICT’s Skip Rizzo and Galen Buckwalter

The Economist featured work by Skip Rizzo and Galen Buckwalter of USC’s Institute for Creative Technologies. The story noted that virtual reality therapy is being used to help returning veterans confront traumas sustained on the battlefield. Rizzo and his team are also building a training regime from this research to see whether it can help prepare troops before they are deployed; soldiers would go through traumatic scenarios, then speak with a virtual mentor to learn stress-reduction tactics.

Mark Bolas Quoted in AP Story about New Wii U GamePad in Washington Post, US News and World Report

Mark Bolas, ICT’s associate director for mixed reality research and development, was quoted in an Associated Press article about Nintendo’s new Wii U GamePad. “It’s a second screen like a tablet or a cellphone, but it’s different,” said Bolas, who is also an associate professor of interactive media at the USC School of Cinematic Arts. “In addition to providing more information, the GamePad is also a second viewpoint into a virtual world. Nintendo is letting you turn away from the TV screen to see what’s happening with the GamePad.” US News and World Report and other outlets ran the story.

Read the full story on the US News and World Report website and in the Washington Post.

And read it in the Republic.

Paul Debevec Joins Academy of Motion Picture Arts and Sciences Tech Council

Paul Debevec, the associate director for graphics research at the USC Institute for Creative Technologies and a research professor at the USC Viterbi School of Engineering’s Department of Computer Science, has accepted an invitation to join the Science and Technology Council of the Academy of Motion Picture Arts and Sciences.

Established in 2003 by the Academy’s Board of Governors, the Science and Technology Council provides a forum for the exchange of information, promotes cooperation among diverse technological interests within the industry, sponsors publications, fosters educational activities and preserves the history of science and technology of motion pictures.

In 2009 Debevec received a Scientific and Engineering Academy Award for the Light Stage capture devices and the image-based rendering system for character relighting. Debevec, whose work has been seen in such films as “Avatar” and “The Curious Case of Benjamin Button,” has been a member of the Academy’s Visual Effects Branch since 2010.

Others joining the council with Debevec this year are Doug Cooper, Ray Feeney, Josh Pines, David Stump, Steve Sullivan, Bill Taylor and Beverly Wood.
The other 15 members are Peter Anderson, Lisa Churgin, Denny Clairmont, Elizabeth Cohen, David Gray, John Hora, Jim Houston, Randal Kleiser, Daryn Okada, Rick Sayre, Milt Shefter, Garrett Smith, and Academy governors Craig Barron, Richard Edlund and Don Hall.

For more information go to: http://www.oscars.org/press/pressreleases/2012/20121115.html

Invited Talk: Raising Awareness of PTSD at NASCAR Event

Randall W. Hill, Jr. and Skip Rizzo give a brief talk at a Homestead Miami Speedway event.

Keynote Talk: “Creating Photoreal Digital Actors: Capturing Light and Reflectance”

Paul Debevec will be presenting a Keynote at the conference.

Abstract: Photoreal digital actors have become a practical reality in the last decade and are poised to revolutionize the entertainment industry. Paul Debevec from USC’s Institute for Creative Technologies will explain the technical progression and application of his lab’s LED-based “Light Stage” facial scanning systems, which have helped produce photoreal digital actors for movies such as Spider-Man 2, The Curious Case of Benjamin Button and Avatar.

Poster Presentation: Computerized Hints can Optimize Recall

The poster examines whether a computer can ensure that successful retrieval is always the most difficult retrieval, and thereby maximize the benefit of tests.

Invited Participation: NAKFI Conference – The Informed Brain in a Digital World

Jacki Ford Morie has been invited to attend the 10th Annual National Academies Keck Futures Initiative (NAKFI) Conference, The Informed Brain in a Digital World.

NAKFI is a 15-year effort of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine to catalyze interdisciplinary inquiry and to enhance communication among researchers, funding organizations, universities, and the general public.

The objective is to support the climate for conducting interdisciplinary research, and to break down related institutional and systemic barriers. NAKFI works towards these objectives by harnessing the intellectual horsepower of approximately 150 individuals from diverse backgrounds who apply to attend its annual “think-tank” style conference; and by awarding $1 million in seed grants – on a competitive basis – to conference participants to enable further pursuit of bold, new ideas and connections stimulated by the conference.

Andrew W. Feng, Yuyu Xu and Ari Shapiro: “Automating the Transfer of a Generic Set of Behaviors Onto a Virtual Character”

Abstract: Humanoid 3D models can be easily acquired through various sources, including online. The use of such models within a game or simulation environment requires human input and intervention in order to associate such a model with a relevant set of motions and control mechanisms. In this paper, we demonstrate a pipeline where humanoid 3D models can be incorporated within seconds into an animation system and infused with a wide range of capabilities, such as locomotion, object manipulation, gazing, speech synthesis and lip syncing. We offer a set of heuristics that can associate arbitrary joint names with canonical ones, and describe a fast retargeting algorithm that enables us to instill a set of behaviors onto an arbitrary humanoid skeleton. We believe that such a system will vastly increase the use of 3D interactive characters due to the ease with which new models can be animated.
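To make the joint-mapping step concrete, here is a minimal sketch of the kind of name-matching heuristic described: normalize arbitrary skeleton joint names and match them to a canonical set via alias lists and substring tests. The canonical names and alias table are illustrative assumptions, not the paper’s actual rules.

```python
# Minimal sketch of heuristic joint-name mapping for retargeting: associate
# arbitrary skeleton joint names with canonical ones via aliases/substrings.
# The canonical names and alias table are illustrative, not the paper's rules.

CANONICAL_ALIASES = {
    "hips":       ["hips", "pelvis", "root"],
    "spine":      ["spine", "chest", "torso"],
    "head":       ["head"],
    "l_shoulder": ["leftshoulder", "l_clavicle", "lcollar"],
    "l_elbow":    ["leftforearm", "l_elbow", "lelbow"],
    "l_wrist":    ["lefthand", "l_wrist", "lhand"],
}

def normalize(name: str) -> str:
    """Lowercase and strip common separators so 'Left_ForeArm' ~ 'leftforearm'."""
    return name.lower().replace("_", "").replace(" ", "").replace("-", "")

def map_joints(source_joints):
    """Return {source joint -> canonical joint} for every joint we can resolve."""
    mapping = {}
    for joint in source_joints:
        key = normalize(joint)
        for canonical, aliases in CANONICAL_ALIASES.items():
            if any(alias.replace("_", "") in key for alias in aliases):
                mapping[joint] = canonical
                break
    return mapping

# 'Prop_Sword' stays unmapped; unresolved joints would fall back to manual fixes.
print(map_joints(["Pelvis", "Spine1", "Left_ForeArm", "LeftHand", "Prop_Sword"]))
```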

ICT Innovations for Veterans Featured by USC Viterbi News

ICT research designed to offer health care support was featured in a USC Viterbi Veterans Day story. The article highlighted SimSensei, a virtual human collaboration between Louis-Philippe Morency and Skip Rizzo that aims to be a tool veterans can access from their home computers as an anonymous first step toward seeking help.

Andrew W. Feng and Ari Shapiro: “An Analysis of Motion Blending Techniques”

Abstract: Motion blending is a widely used technique for character animation. The main idea is to blend similar motion examples according to blending weights, in order to synthesize new motions parameterizing high level characteristics of interest. We present in this paper an in-depth analysis and comparison of four motion blending techniques: Barycentric interpolation, Radial Basis Function, K-Nearest Neighbors and Inverse Blending optimization. Comparison metrics were designed to measure the performance across different motion categories on criteria including smoothness, parametric error and computation time. We have implemented each method in our character animation platform SmartBody and we present several visualization renderings that provide a window for gleaning insights into the underlying pros and cons of each method in an intuitive way.
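As a rough illustration of the simplest of these schemes (not the SmartBody implementation), the sketch below computes k-nearest-neighbor blend weights with inverse-distance weighting for a single blend parameter; the example motions and parameter values are invented.

```python
# K-nearest-neighbor motion blending: given a target parameter (e.g. reach
# height), pick the k closest example motions and blend them with
# inverse-distance weights. Example motions and parameters are made up.

# Each example motion is (name, parameter value), e.g. the height it reaches to.
EXAMPLES = [("reach_low", 0.4), ("reach_mid", 0.9), ("reach_high", 1.5), ("reach_top", 1.9)]

def blend_weights(target: float, k: int = 2, eps: float = 1e-6):
    """Return normalized blend weights for the k examples nearest to target."""
    nearest = sorted(EXAMPLES, key=lambda ex: abs(ex[1] - target))[:k]
    raw = [(name, 1.0 / (abs(p - target) + eps)) for name, p in nearest]
    total = sum(w for _, w in raw)
    return [(name, w / total) for name, w in raw]

# Blending for a reach target of 1.2 m mixes reach_mid and reach_high.
print(blend_weights(1.2))
```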

Stefan Scherer Organizes Workshop and Presents at ICPR12

Stefan Scherer serves as a co-organizer of the first international Workshop on Multimodal Pattern Recognition of Social Signals in Human Computer Interaction, co-located with the International Conference on Pattern Recognition, ICPR 2012. He also presents his recent work on “The Effect of Fuzzy Training Targets on Voice Quality Classification”.

Invited Talk: Collective Narrative and the Commonsense Knowledge Perplex

Andrew Gordon gives an invited talk as part of UC Santa Cruz’s “Inventing the Future of Games” series, part of the Games and Playable Media program.

Game Theory and Human Behavior Fall Symposium

Game Theory and Human Behavior addresses problems of global interest such as energy, healthcare, financial markets and security, which necessarily involve understanding and influencing the behavior of multiple parties with differing agendas. Our effort to create a campus-wide collaborative environment for Game Theory and Human Behavior promises to fuse the mathematics and formal approaches of the former with the wealth of social science insights of the latter to create new and necessary approaches for 21st century issues. The National Academy of Engineering has identified several Grand Challenge areas, including preventing nuclear terror, advancing personalized learning, securing cyberspace and renewing urban infrastructure. All involve multiple decision-makers in game-theoretic and human behavior settings, thus requiring the fusion of mathematical, engineering and social sciences to make significant progress in addressing these challenges.

USC is in the enviable position of being on the cusp of addressing these challenges. Over 50 faculty members have joined this effort from 13 schools and centers including the Annenberg School for Communication, the Gould School of Law, the Marshall School of Business, the College, the CREATE center, Center for Sustainable Cities, Center for Megacities, Center for Energy Informatics, Schaeffer Center for Health Economics and Policy, the School of Policy, Planning and Development, the Institute for Creative Technologies, the School of Architecture, and the Viterbi School of Engineering. We have expertise from architecture, civil and environmental engineering, computer science, economics, electrical engineering, industrial engineering, law, operations management, policy planning and development, psychology and sociology. While there have been some interdepartmental collaborations, we have not been able to connect to a degree necessary to expand our endeavors to the scope of solutions that the problems require. This effort on Game Theory and Human Behavior (GTHB) will create the momentum to overcome barriers by organizing a series of workshops, seminars, tutorials and courses culminating in a week of GTHB Showcase events that will highlight our first-year outreach and development efforts. The GTHB effort promises to put USC in a unique position to tackle many of the key challenges of the 21st century.

Welcome (9:00 AM – 9:05 AM) : Morteza Dehghani

Introduction (9:05 AM – 9:20 AM) : Milind Tambe

First Keynote talk (9:20 AM – 10:20 AM): Tamer Basar
Multi-Agent Networked Systems with Adversarial Elements

Break (10:20 AM – 10:40 AM)

First session (10:40 AM – 12:00 PM):

  • 10:40 AM – 11:00 AM: Terry Benzel
    The Science of Cyber Security Experimentation
    The DETER Project
  • 11:00 AM – 11:20 AM: Jim Blythe
    Human Behavior and Computer Security
  • 11:20 AM – 11:40 AM: Rajiv Maheswaran
    Developing Spatiotemporal Game Theory Through Basketball
  • 11:40 AM – 12:00 PM: Burcin Becerik-Gerber
    Human Building Interaction Framework for User Driven Building Systems

Lunch (12:00 noon – 1:00 PM)

Second Keynote talk (1:00 PM – 2:00 PM): Baruch Fischhoff
From Our Lips …: (Mis)adventures in Applied Science

Second Session (2:00 PM – 5:00 PM):

  • 2:00 PM – 2:20 PM: Richard Dekmejian
    Typology of Sunni Islamist Groups: Gaming & Sacred Values
  • 2:20 PM – 2:40 PM: Morteza Dehghani
    Role of Sacred Values in Intergroup Conflicts

Break (2:40 PM – 3:15 PM)

  • 3:15 PM – 3:35 PM: Jesse Graham
    Political Ideology, Moral Concerns, and Moral Decision-Making
    Two Findings and Lots of Questions
  • 3:35 PM – 3:55 PM: David Tannenbaum
    Moral Signals and Person-centered Moral Judgment
  • 3:55 PM – 4:15 PM: Piercarlo Valdesolo
    Social Identity and Morality: Tipping the scales of judgment
  • 4:15 PM – 4:35 PM: Scott McCalla
    The effects of sacred value networks within an evolutionary, adversarial game

ICT’s MedVR Lab Awarded Infinite Hero Grant to Improve Lives for Military Members and Families, CNBC Covers

The USC Institute for Creative Technologies has been selected to receive a grant from the Infinite Hero Foundation, a non-profit group devoted to funding programs that drive innovation and the accessibility of effective treatments for military heroes and their families dealing with service-related mental and physical injuries. The grant will be used to customize ICT’s virtual reality exposure therapy scenarios to meet the needs of combat medics and corpsmen, expanding the relevant scenarios available to help treat service members and veterans suffering from combat-related post-traumatic stress. Read the full CNBC post here.

Skip Rizzo Presents at International Society for Traumatic Stress Studies 2012

Skip Rizzo presented “Virtual Reality Goes to War: Recent Advances in Military Behavioral Healthcare,” including information about ICT’s STRIVE, virtual patient, SimCoach and Bravemind projects, at the International Society for Traumatic Stress Studies Conference.

Technology & Innovation for the Prevention & Treatment of PTSD Conference

War is perhaps one of the most challenging situations that a human being can experience. The physical, emotional, cognitive and psychological demands of a combat deployment place enormous stress on even the best-prepared military service members. Since the start of Operation Enduring Freedom/Operation Iraqi Freedom (OEF/OIF) in Afghanistan and Iraq, approximately 2.5 million troops have been deployed. Between the unique nature of these conflicts, their duration and the common occurrence of multiple deployments, a significant number of returning service members have developed or are at risk for developing a range of behavioral health conditions. The urgency of this healthcare challenge has driven the DoD and VA systems to accelerate research, development and application of novel and innovative approaches for clinical research and dissemination of care. This conference is meant to provide an overview of some exciting new directions being pursued to address the many unmet needs surrounding trauma-related disorders. The program will begin with an overview of the scope of the challenges that face veterans exposed to trauma. This will be followed by a number of speakers detailing the evolution and current use of prolonged exposure therapy. Innovations of this evidence-based therapy, including the use of virtual reality technology and cognitive enhancers to further improve outcomes, will be discussed. This will be complemented by a basic science perspective on fear extinction learning, a process that underlies prolonged exposure, as well as a discussion of laboratory approaches to assessing this type of learning in humans. The remainder of the program will review some new pharmacological and psychotherapy approaches. The use of virtual environments, mobile apps and “virtual agents” that can all aid in promoting stress resilience, health coaching and treatment engagement will also be discussed. Participants will have the opportunity to experience demos of many of the technologies discussed at the conclusion of the program. One of the clinical “game changing” outcomes of the OIF/OEF conflicts could derive from the military’s support for new research and development in these areas that could potentially drive increased recognition and adoption within the civilian sector. As we have seen throughout history, innovations that emerge in military healthcare, driven by the urgency of war, typically have a lasting influence on civilian healthcare long after the last shot is fired.

Learn more at http://ptsdtech.ict.usc.edu/.

Lunchtime Book Talk by Science Journalist and Author Matt Kaplan

Tuesday, October 30 from 12:30-1:30 PM

Introduction by Randy Hill

Get into the Halloween spirit with a lunchtime book talk with science journalist and author Matt Kaplan. Kaplan, who has covered ICT research in stories for the Economist and New Scientist, will be speaking about his new book “Medusa’s Gaze and Vampire’s Bite: The Science of Monsters”. In his chapter discussing man-made and AI creations, Kaplan cites research from our own Louis-Philippe Morency and Andrew Gordon.

Please join us for what promises to be both an entertaining and educational discussion. Lunch will be provided.

Here’s a Kirkus review…

“A delightfully serious—well, mostly—dissection of monsterland.

Give a nod of welcome to our old friends: rukh and the Minotaur, Chimera and the Sphinx, Charybdis and the leviathan, griffin and the cockatrice, ghosts, demons, spirits, zombies, vampires, werewolves and HAL 9000. What a parade, and we clearly love them, for a goodly number have been around for centuries. However, asks science journalist Kaplan, why do we willingly scare ourselves? And from what dark materials did we fashion these golems and Medusa and dragons? Kaplan plumbs a wide array of possible natural explanations: the simple amplifications of lions, tigers, bears and boars; the mutations that cause extremes in animal appearance; the mixing of bones in tar pits and in the general fossil record (of which the griffin is a prime example). The author mostly stays on solid ground, taking the monsters apart to see whether they might have come from some sort of natural science or history. There are moments when he can be somewhat cute, overreaching for jokey asides, but more often than not, he is on the path of scientific fun, deconstructing zombie brews, the behavioral ecology of vampires or the geological challenge of being buried alive. As for the evolutionary advantage: “Like lion cubs play-fighting in the safety of their den, monsters may be allowing threats to be toyed with in the safe sandbox of the imagination.”
The appeal of monsters never stales, and in Kaplan’s hands, these characters shine.”

RSVP to Elizabeth Kalbers.

Street parking is available in front of the Institute for Creative Technologies. For directions, please click here.

Sunghyun Park: “Crowdsourcing Micro-Level Multimedia Annotations: The Challenges of Evaluation and Interface”

Abstract: This paper presents a new evaluation procedure and tool for crowdsourcing micro-level multimedia annotations and shows that such annotations can achieve a quality comparable to that of expert annotations. We propose a new evaluation procedure, called MM-Eval (Micro-level Multimedia Evaluation), which compares fine time-aligned annotations using Krippendorff’s alpha metric and introduce two new metrics to evaluate the types of disagreement between coders. We also introduce OCTAB (Online Crowdsourcing Tool for Annotations of Behaviors), a web-based annotation tool that allows precise and convenient multimedia behavior annotations, directly from the Amazon Mechanical Turk interface. With an experiment using the above tool and evaluation procedure, we show that a majority vote among annotations from 3 crowdsource workers leads to a quality comparable to that of local expert annotations.
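The majority-vote step is simple enough to sketch. The code below fuses three workers’ frame-level binary annotations (for example, “smile present in this frame”) by per-frame majority and compares the result to a hypothetical expert track with a naive agreement rate; MM-Eval itself scores agreement with Krippendorff’s alpha, which is not reproduced here.

```python
# Per-frame majority vote over three crowdsourced binary annotation tracks,
# plus a naive agreement rate against an expert track. Labels and frame data
# are hypothetical; MM-Eval itself scores agreement with Krippendorff's alpha.

def majority_vote(tracks):
    """tracks: list of equal-length 0/1 lists (one per coder) -> fused track."""
    n_coders = len(tracks)
    return [1 if sum(frame) * 2 > n_coders else 0 for frame in zip(*tracks)]

worker_tracks = [
    [0, 0, 1, 1, 1, 0, 0],   # worker A: smile annotated on frames 2-4
    [0, 1, 1, 1, 0, 0, 0],   # worker B
    [0, 0, 1, 1, 1, 1, 0],   # worker C
]
expert_track = [0, 0, 1, 1, 1, 0, 0]

fused = majority_vote(worker_tracks)
agreement = sum(f == e for f, e in zip(fused, expert_track)) / len(expert_track)
print(fused, f"agreement with expert = {agreement:.2f}")
```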

Andrew Gordon and Christopher Weinberg: “PhotoFall: Discovering Weblog Stories Through Photographs”

Abstract: An effective means of retrieving relevant photographs from the web is to search for terms that would likely appear in the surrounding text in multimedia documents. In this paper, we investigate the complementary search strategy, where relevant multimedia documents are retrieved using the photographs they contain. We concentrate our efforts on the retrieval of large numbers of personal stories posted to Internet weblogs that are relevant to a particular search topic. Photographs are often included in posts of this sort, typically taken by the author during the course of the narrated events of the story. We describe a new story search tool, PhotoFall, which allows users to quickly find stories related to their topic of interest by judging the relevance of the photographs extracted from top search results. We evaluate the accuracy of relevance judgments made using this interface, and discuss the implications of the results for improving topic-based searches of multimedia content.

Bill Swartout Gives Keynote Talk at Autumn Simulation Conference 2012

Bill Swartout gives a keynote talk entitled “Building and Using Virtual Humans” at this Society for Modeling and Simulation International conference.

Paul Debevec: “A Single-Shot Light Probe”

Abstract: We demonstrate a novel light probe which can estimate the full dynamic range of a scene with multiple bright light sources. It places diffuse strips between mirrored spherical quadrants, effectively co-locating diffuse and mirrored probes to record the full dynamic range of illumination in a single exposure. From this image, we estimate the intensity of multiple saturated light sources by solving a linear system.
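The final estimation step can be pictured as ordinary linear least squares: each unsaturated diffuse-strip pixel is modeled as a weighted sum of the unknown source intensities. The sketch below solves such a system with NumPy; the response matrix and pixel readings are invented for illustration and are not the paper’s calibration data.

```python
# Recovering saturated light-source intensities from diffuse-strip pixels by
# solving a linear system. The response matrix A (diffuse contribution of each
# source to each pixel) and observations b are invented for illustration.
import numpy as np

# 5 diffuse pixels observing 2 bright sources: b = A @ intensities
A = np.array([[0.9, 0.1],
              [0.7, 0.3],
              [0.5, 0.5],
              [0.3, 0.7],
              [0.1, 0.9]])
true_intensities = np.array([40.0, 15.0])                      # far above the sensor's range
b = A @ true_intensities + np.random.normal(0, 0.1, size=5)    # noisy, unsaturated readings

# Least-squares estimate of the source intensities
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated intensities:", est)
```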

Paul Debevec: “Measurement-Based Synthesis of Facial Microgeometry”

Abstract: Current face scanning techniques provide submillimeter precision, recording facial mesostructure at the level of pores, wrinkles, and creases. Nonetheless, the effect of surface roughness continues to shape specular reflection at the level of microstructure — surface texture at the scale of microns. In this work, we present a texture synthesis approach to increase the resolution of mesostructure-level facial scans using surface microstructure digitized from skin samples about the face. We digitize the skin patches using macro photography and polarized spherical gradient illumination at approximately 10 micron precision, and we make point-source reflectance measurements to characterize the specular reflectance lobe at this smaller scale. We then employ a constrained texture synthesis algorithm to synthesize appropriate surface microstructure per-region, blending the regions to cover the entire face. We show that renderings made with microstructure-level facial models preserve the original scanned mesostructure and exhibit surface reflection which is significantly more consistent with real photographs.

ICT’s Mark Bolas Featured in Technology Review

Mark Bolas, director of ICT’s Mixed Reality Lab and an associate professor in the USC School of Cinematic Arts, was quoted in an MIT Technology Review story about hands-free computer interfaces made possible by the Microsoft Kinect. “When using a computer today, we think of our bodies as a fingertip or at most two fingertips,” said Bolas. “But humans evolved to communicate with their whole bodies.”

The story noted that Bolas’s research group is experimenting with using Kinect to track very subtle behaviors, including monitoring the rise and fall of a person’s chest to measure breathing rate. Other Kinect-based gesture-controlled interfaces developed at ICT include Jewel Mine, a motor rehabilitation tool created by Belinda Lange of ICT’s game-based rehab group.

Randall W. Hill Jr. Speaks at AUSA Conference

ICT Executive Director Randall W. Hill Jr. gives a talk “On the Frontiers of Training with Virtual Technologies” in the Army Presentation Booth at the Association of the U.S. Army (AUSA) Conference.

ICT MultiComp Lab Organizes Workshop and Gives Six Presentations at ICMI 2012

MultiComp Lab researchers present six papers during the conference: 2 oral presentations, 2 poster presentations and 2 workshop papers. For the Audio-visual Emotion Challenge, the MultiComp team placed second in the word-based classification challenge. Louis-Philippe Morency co-organizes a workshop on Multimodal Learning Analytics.

Stefan Scherer, Louis-Philippe Morency and Albert “Skip” Rizzo: “Multisense and SimSensei — A Multimodal Research Platform for Real-time Assessment of Distress Indicators”

Overview: USC Institute for Creative Technologies (ICT) is a leader in basic research and advanced technology development of virtual humans who think and behave like real people. ICT brings together experts in clinical psychology, cognitive science, computer vision, speech processing and artificial intelligence.

As part of our recent DARPA-funded project Detection and Computational Analysis of Psychological Signals (DCAPS), we perfected our expertise in automatic human behavior analysis to identify indicators of psychological distress such as depression, anxiety and PTSD. We developed two technologies which are particularly relevant to the Predicting Suicide Intent program:

  • MultiSense automatically tracks and analyzes in real-time facial expressions, body posture, acoustic features, linguistic patterns and higher-level behavior descriptors (e.g. attention, fidgeting). MultiSense infers indicators of psychological distress from these signals and behaviors, and sends this information to healthcare providers or the virtual human system.
  • SimSensei is a virtual human platform specifically designed for healthcare support and is based on the 10+ years of expertise at ICT in virtual human research and development. The platform enables an engaging face-to-face interaction where the virtual human automatically reacts with its own speech and gestures to the perceived user state and intent.

In a recent project with Cincinnati Children’s Hospital, these sensing technologies were used to automatically identify the characteristics of prosody and voice quality of individuals who had attempted suicide in the past.

Louis-Philippe Morency Gives Invited Talk at WebVision 2012

Louis-Philippe Morency gives a talk on “Harvesting the Web for Multimodal Sentiment Analysis” at the Workshop on Computer Vision for the Web, part of ECCV 2012, the European Conference on Computer Vision, in Florence, Italy, on October 13.

Jacquelyn Ford Morie, Eric Chance and Dinesh Rajpurohit: “Virtual Worlds and Avatars as the New Frontier of Telehealth Care”

Abstract: We are entering a new age where people routinely visit, inhabit, play in and learn within virtual worlds (VWs). One in eight people worldwide are VW participants, according to the latest 2011 figures from KZERO [1]. VWs are also emerging as a new and advanced form of telehealth care delivery. In addition to existing telehealth care advantages, VWs feature three powerful affordances that can benefit a wide range of physical and psychological issues. First, the highly social nature of VWs encourages social networking and the formation of essential support groups. Secondly, the type of spaces that have been proven in the physical world to promote psychological health and well-being can be virtually recreated. Finally, research suggests that embodied avatar representation within VWs can affect users psychologically and physically. These three aspects of VWs can be leveraged for enhanced patient-client interactions, spaces that promote healing and positive responses, and avatar activities that transfer real benefits from the virtual to the physical world. This paper explains the mounting evidence behind these claims and provides examples of VWs as an innovative and compelling form of telehealth care destined to become commonplace in the future.

Jacquelyn Ford Morie and Edward Haynes: “Storytelling with Storyteller Agents in Second Life”

Abstract: The “Coming Home” project at the University of Southern California’s Institute for Creative Technologies was started in 2009 to research how virtual worlds could facilitate health care activities for returning US military personnel, who frequently suffer from psychological and physical challenges. These challenges include increased stress and loss of self-esteem related to the war events they have experienced. As part of the activities of Coming Home, we implemented an ambitious storytelling activity that reinforces the positive ideals for which a warrior stands, by presenting historical figures from the past that illustrate those qualities. The storytelling includes scenes from a warrior’s life, and a conversational avatar agent that can answer questions about the historical parts of the story. We have also created an interactive authoring system to allow users to make their own story to populate the SL storytelling space. Although visualized and experienced in world, the authoring tools and repository for most of the narrative content are web-based. Scripts within Second Life fetch this authored data and media and display it in the appropriate places. We also report on progress in building a tool with which users can author their own storytelling agent as part of the story they create.

Sin-hwa Kang and Jonathan Gratch: “Socially Anxious People Reveal More Personal Information with Virtual Counselors That Talk about Themselves using Intimate Human Back Stories”

Abstract: In this paper, we describe our findings from research designed to explore the effect of virtual human counselors’ self-disclosure using intimate human back stories on real human clients’ social responses in psychological counseling sessions. To investigate this subject, we designed an experiment involving two conditions of the counselors’ self-disclosure: human back stories and computer back stories. We then measured socially anxious users’ verbal self-disclosure. The results demonstrated that highly anxious users revealed personal information more than less anxious users when they interacted with virtual counselors who disclosed intimate information about themselves using human back stories. Furthermore, we found a greater inclination toward facilitated self-disclosure from highly anxious users following interaction with virtual counselors who employed human back stories rather than computer back stories. In addition, a further analysis of socially anxious users’ feelings of rapport demonstrated that virtual counselors elicited more rapport with highly anxious users than less anxious users when interacting with counselors who employed human back stories. This outcome was not found in the users’ interactions with counselors who employed computer back stories.

Belinda Lange: “Virtual Rehabilitation: From Video Games to Virtual Humans”

Prism Magazine Features ICT’s Paul Debevec

The cover story in the September issue of the American Society for Engineering Education’s Prism magazine features Paul Debevec and ICT’s Light Stage technology in an article about the university science and scientists behind big screen movie magic. The story states that filmmakers and game publishers have formed symbiotic relationships with many engineering and computer science academics at top research schools, including USC, Carnegie Mellon, Georgia Tech, and MIT. “They are tasked with showing people things that nobody’s ever seen before, every single summer. And that’s a tall order to fill,” Debevec explains in the article. “That’s why they’ve become very important early adopters of new technology.”

FX Guide Features ICT Graphics Lab’s Abhijeet Ghosh

In their weekly podcast, FX Guide interviewed Abhijeet Ghosh from the USC Institute for Creative Technologies Graphics Lab about a range of issues relating to polarized light, surface properties, normals, scanning, Brewster’s angle and the reproduction of human skin, which he spoke about recently at SIGGRAPH 2012 in LA.

Albert “Skip” Rizzo, Bruce John, Josh Williams, Bradley Newman, Thomas Parsons, Sebastian Koenig, Belinda Lange and John Galen Buckwalter: “Stress Resilience in Virtual Environments: Training Combat Relevant Emotion Coping Skills Using Virtual Reality”

Abstract: The incidence of posttraumatic stress disorder (PTSD) in returning OEF/OIF military personnel has created a significant behavioral healthcare challenge. This has served to motivate research on how to better develop and disseminate evidence-based treatments for PTSD. One emerging form of treatment for combat-related PTSD that has shown promise involves the delivery of exposure therapy using immersive Virtual Reality (VR). Initial outcomes from open clinical trials have been positive and fully randomized controlled trials are currently in progress to further investigate the efficacy of this approach. Inspired by the initial success of this research using VR to emotionally engage and successfully treat persons undergoing exposure therapy for PTSD, our group has begun developing a similar VR-based approach to deliver stress resilience training with military service members prior to their initial deployment. The STress Resilience In Virtual Environments (STRIVE) project aims to create a set of combat simulations (derived from our existing Virtual Iraq/Afghanistan PTSD exposure therapy system) that are part of a multi-episode interactive narrative experience. Users can be immersed within challenging combat contexts and interact with virtual characters within these episodes as part of an experiential learning approach for delivering psychoeducational material, stress management techniques and cognitive-behavioral emotional coping strategies believed to enhance stress resilience. The STRIVE project aims to present this approach to service members prior to deployment as part of a program designed to better prepare military personnel for the types of emotional challenges that are inherent in the combat environment. During these virtual training experiences users are monitored physiologically as part of a larger investigation into the biomarkers of the stress response. One such construct, Allostatic Load, is being directly investigated via physiological and neuro-hormonal analysis from specimen collections taken immediately before and after engagement in the STRIVE virtual experience. This paper describes the development and evaluation of the Virtual Iraq/Afghanistan Exposure Therapy system and then details its current transition into the STRIVE tool for pre-deployment stress resilience training. We hypothesize that VR stress resilience training with service members in this format will better prepare them for the emotional stress of a combat deployment and could subsequently reduce the later incidence of PTSD and other psychosocial health conditions.

Stefan Scherer, Stacy Marsella, Giota Stratou, Yuyu Xu, Fabrizio Morbini, Albert “Skip” Rizzo and Louis-Philippe Morency: “Perception Markup Language: Towards a Standardized Representation of Perceived Nonverbal Behaviors”

Abstract: Modern virtual agents require knowledge about their environment, the interaction itself, and their interlocutors’ behavior in order to be able to show appropriate nonverbal behavior as well as to adapt dialog policies accordingly. Recent achievements in the area of automatic behavior recognition and understanding can provide information about the interactants’ multimodal nonverbal behavior and subsequently their affective states. In this paper, we introduce a perception markup language (PML) which is a first step towards a standardized representation of perceived nonverbal behaviors. PML follows several design concepts, namely compatibility and synergy, modeling uncertainty, multiple interpretative layers, and extensibility, in order to maximize its usefulness for the research community. We show how we can successfully integrate PML in a fully automated virtual agent system for healthcare applications.
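To give a flavor of what such a standardized, uncertainty-aware representation might look like in code, the snippet below assembles a small XML perception frame with Python’s xml.etree; the element and attribute names are hypothetical stand-ins, not the actual PML schema.

```python
# Hypothetical perception-markup-style message assembled with xml.etree.
# Element and attribute names are invented stand-ins, not the real PML schema.
import xml.etree.ElementTree as ET

frame = ET.Element("perception", attrib={"timestamp": "12.40", "speaker": "user"})

gaze = ET.SubElement(frame, "behavior", attrib={"type": "gaze-aversion"})
gaze.set("probability", "0.72")            # recognizers report uncertainty, not facts

posture = ET.SubElement(frame, "behavior", attrib={"type": "lean-back"})
posture.set("probability", "0.55")

interpretation = ET.SubElement(frame, "interpretation", attrib={"layer": "affect"})
interpretation.set("label", "low-engagement")
interpretation.set("probability", "0.61")

print(ET.tostring(frame, encoding="unicode"))
```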

Chung-Cheng Chiu and Stacy Marsella: “Subjective Optimization”

Abstract: An effective way to build a gesture generator is to apply machine learning algorithms to derive a model. In building such a gesture generator, a common approach involves collecting a set of human conversation data and training the model to fit the data. However, after training the gesture generator, what we are looking for is whether the generated gestures are natural, not whether they actually fit the training data. Thus, there is a gap between the training objective and the actual goal of the gesture generator. In this work we propose an approach that uses human judgment of naturalness to optimize gesture generators. We take an important step towards our goal by performing a numerical experiment to assess the optimality of the proposed framework, and the experimental results show that the framework can effectively improve the generated gestures based on the simulated naturalness criterion.
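
As an illustration of optimizing against a naturalness judgment rather than data fit, the sketch below runs a simple hill-climbing loop in which each candidate parameter setting is scored by a rating function. The rating function here is a simulated stand-in for a human (or simulated) naturalness judgment, and the loop is not the authors' actual algorithm.

```python
# Minimal sketch: tune generator parameters against a (simulated) naturalness
# rating instead of a data-fit objective. Not the authors' method.
import random

def rate_naturalness(params):
    # Placeholder: pretend naturalness peaks at some unknown parameter setting.
    target = [0.3, 0.7, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def optimize(params, iterations=200, step=0.05):
    best, best_score = list(params), rate_naturalness(params)
    for _ in range(iterations):
        candidate = [p + random.uniform(-step, step) for p in best]
        score = rate_naturalness(candidate)   # would be a human judgment
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

print(optimize([0.0, 0.0, 0.0]))
```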

Sin-hwa Kang, Albert “Skip” Rizzo and Jonathan Gratch: “Understanding the Nonverbal Behavior of Socially Anxious People during Intimate Self-disclosure”

Abstract: This study explores the types of nonverbal behavior exhibited by socially anxious users over the course of an interview with virtual agent counselors that talked about themselves. The counselors provided self-disclosure using human back stories or computer back stories. The video data was collected from a previous study. We defined nine types of nonverbal behavior to investigate the associations between these behaviors and users’ anxiety levels. The results of a preliminary data analysis show that five of the nine features are positively correlated with users’ anxiety levels in the “computer back stories” condition. These five types of nonverbal behavior are gaze aversion, arm and hand movement, constant rocking, head shaking, and arm and hand fidgeting. There were no significant relationships between the types of nonverbal behavior and users’ anxiety levels in the “human back stories” condition.

Defense News Covers Video Games for Counter-IED Training, Including ICT Projects

A story in Defense News covered the use of video games for counter-IED training. ICT’s Todd Richmond was quoted in the story, which cited the statistic that IEDs have accounted for 111 U.S. “hostile fatalities” in Afghanistan in 2012.

“No matter where we go in the future, this is going to be the main threat that we have to deal with,” said Richmond, who leads the institute’s counter-IED training efforts, including the MCIT and DICE-T systems.

Richmond also noted that he expects counter-IED training to shift from larger training centers and even laptops down to the more mobile iPad and iPhone. The main challenge, he said, is converting and shaping the materials to fit on the smaller screens.

“You have to redesign the content for each of those,” Richmond said.

Photo: West Point Cadets going through MCIT, which includes a video game to assist troops in recognizing and reacting to IEDs. Credit: U.S. Army Public Affairs

Celso De Melo: “The Effect of Virtual Agents’ Emotion Displays and Appraisals on People’s Decision Making in Negotiation”

Abstract: There is growing evidence that emotion displays can impact people’s decision making in negotiation. However, despite increasing interest in AI and HCI on negotiation as a means to resolve differences between humans and agents, emotion has been largely ignored. We explore how emotion displays in virtual agents impact people’s decision making in human-agent negotiation. This paper presents an experiment (N=204) that studies the effects of virtual agents’ displays of joy, sadness, anger and guilt on people’s decision to counteroffer, accept or drop out from the negotiation, as well as on people’s expectations about the agents’ decisions. The paper also presents evidence for a mechanism underlying such effects based on appraisal theories of emotion whereby people retrieve, from emotion displays, information about how the agent is appraising the ongoing interaction and, from this information, infer about the agent’s intentions and reach decisions themselves. We discuss implications for the design of intelligent virtual agents that can negotiate effectively.

Scientific American Story about Avatars Features ICT’s Jacquelyn Ford Morie

ICT’s Jacquelyn Ford Morie and her avatars in Second Life were featured in a Scientific American story about online avatars and what they represent. The story noted that Morie studies the impact of immersive technologies and virtual worlds. Role playing in immersive environments like Second Life is natural, and even healthy, Morie said. “Our identity shifts all the time and every day, morphing and evolving based on what we are doing now.”

Louis-Philippe Morency and Stefan Scherer: “Investigating the influence of pause fillers for automatic backchannel prediction”

American Psychological Association’s Media Technology Division Features ICT’s H. Chad Lane

Division 46 of the American Psychological Association highlighted the work of ICT research scientist H. Chad Lane, who heads up ICT’s Learning Sciences group. In a spotlight interview posted on its website, Lane discussed ICT’s virtual human work and his efforts to apply artificial intelligence techniques to educational problems. Lane also spoke about Ada and Grace, ICT’s virtual human museum guides, and Virtual Sprouts, a new informal science education project combating pediatric obesity.

“If informal learning doesn’t at least try to keep up with the powerful entertainment technologies available in homes, the future will probably not be bright,” said Lane in the interview. “We hope that virtual humans can play a role in the future of informal education.”

Division 46 is the Media Technology Division of the APA, which is focused on the psychology behind media and technology use and impact.

Andrew Gordon and Christopher Weinberg: “Different Strokes of Different Folks: Searching for Health Narratives in Weblogs”

Abstract: The utility of storytelling in the interaction between healthcare providers and patients is now firmly established, but the potential use of large-scale story collections for health-related inquiry has not yet been explored. In particular, the enormous scale of storytelling in personal weblogs offers investigators in health-related fields new opportunities to study the behavior and beliefs of diverse patient populations outside of clinical settings. In this paper we address the technical challenges in identifying personal stories about specific health issues from corpora of millions of weblog posts. We describe a novel infrastructure for collecting and indexing the stories posted each day to English-language weblogs, coupled with user interfaces designed to support targeted searches of these collections. We evaluate the effectiveness of this search technology in an effort to identify hundreds of first person and third person accounts of strokes, for the purpose of studying gender differences in the way that these health emergencies are described. Results indicate that the use of relevance feedback significantly improves the effectiveness of the search. We conclude with a discussion of sample biases that are inherent in weblog storytelling and heightened by our approach, and propose ways to mitigate these biases.
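
The following is a minimal sketch of the general relevance-feedback idea (a Rocchio-style query update) over a toy set of posts, assuming scikit-learn is available. The posts, labels and weights are invented for illustration and are not the authors' corpus, interface or retrieval system.

```python
# Minimal Rocchio-style relevance feedback over a toy corpus of "weblog posts".
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "My father had a stroke last spring and I drove him to the hospital.",
    "Review of a new laptop I bought last week.",
    "I woke up with numbness in my arm; the doctors said it was a minor stroke.",
    "Our vacation in Spain was wonderful.",
]
vectorizer = TfidfVectorizer()
doc_vecs = vectorizer.fit_transform(posts)

query_vec = vectorizer.transform(["first person story about having a stroke"])
relevant = [2]        # indices a user marked as relevant
nonrelevant = [1, 3]  # indices marked as not relevant

# Rocchio update: move the query toward relevant posts, away from non-relevant ones.
alpha, beta, gamma = 1.0, 0.75, 0.15
new_query = (alpha * query_vec
             + beta * doc_vecs[relevant].mean(axis=0)
             - gamma * doc_vecs[nonrelevant].mean(axis=0))

scores = cosine_similarity(np.asarray(new_query), doc_vecs)
print(scores.ravel())  # re-ranked relevance of each post
```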

ICT’s Jacquelyn Ford Morie Named to Prestigious Advisory Panel

Jacquelyn Ford Morie, a senior research scientist at the University of Southern California Institute for Creative Technologies (ICT), has been named to the Information Science and Technology (ISAT) group, a Defense Advanced Research Projects Agency (DARPA) advisory panel whose charter includes identifying opportunities for developing new computer or communication technologies and recommending areas for research.

ISAT studies have a rich history of impact, both inside DARPA and in the larger technical community.

Morie, a leading researcher in the uses and effects of virtual worlds and avatars, joins a select group of about 30 scientists who each serve for three-year terms.

“Jacki brings a unique combination of areas of expertise to ISAT, and her contributions as an invited participant in previous studies were extremely valuable,” said Richard Murray, the current group chair and a professor of control and dynamical systems and bioengineering at the California Institute of Technology. “We are very pleased that she is willing to help contribute her time and energy to ISAT.”

Other members come from institutions including MIT, Stanford, Carnegie Mellon and Berkeley. Previous USC representatives include Paul Rosenbloom, project director at ICT and professor at the Viterbi School, and Herb Schorr, vice dean for engineering and director of the newly formed Program on Informatics in the Viterbi School.

“Joining the ISAT group is an exciting opportunity to hang out with incredible visionaries and to be at the forefront of imagining the future,” said Morie.

At ICT, Morie’s work explores the potential of virtual worlds to address real world needs and effect positive change in participants who use them. Current projects include Coming Home, which implements ICT’s intelligent virtual humans within a specialized online virtual world for the benefit of post-deployment soldiers who are reintegrating into civilian life.

Morie holds master’s degrees in fine arts and computer science from the University of Florida. She earned her doctorate in computer information from the University of East London. Her dissertation focused on theories of space, embodiment and meaning in immersive virtual environments.

Arno Hartholt: “PTSD Treatment Using Intelligent Human Agents”

Abstract: This talk will cover the development of virtual reality medical simulations for the treatment and assessment of Post Traumatic Stress Disorder (PTSD) as well as training of service members to better handle stressful wartime trauma before deployment. Topics to be covered include an overview of the research rationale, VR hardware and results to date.

NCO Journal Covers ICT Technologies

The August issue of NCO Journal, the professional development magazine for U.S. Army non-commissioned officers, featured several stories about ICT, with an emphasis on how the institute uses technology and storytelling to help the Army train future leaders. The coverage included stories about ELITE, ICT’s interpersonal and informal counseling skills training that is being used at Ft. Benning; Strive and Virtual Iraq/Afghanistan, virtual reality projects that help troops prepare for and recover from experiences in the field; and SimCoach, ICT’s web-based virtual human coach that is part of an effort to break down barriers to getting mental health support.

Paul Rosenbloom at NEXT: People | Science | Tomorrow

NEXT: People | Science | Tomorrow is the Crawford Family Forum’s new series on the convergence of science, technology and society: an exploration of the future of civilization, the human species, and our place in the universe. The series is hosted by Mat Kaplan of Planetary Radio.

Admission is FREE, but RSVPs are required.

6:30pm – Doors Open
7:00pm – Program

Learn more here.

BBC and fxguide Feature ICT Graphics Lab Advances in Siggraph 2012 Coverage

New research from the ICT Graphics Lab was featured on Click, a BBC World Service technology podcast, and on fxguide, a leading outlet covering the visual effects industry. Both stories highlighted new research presented by ICT’s Abhijeet Ghosh on estimating surface normals from studio or natural light without special paints, sensors or light probes. Mike Seymour, who was interviewed in the BBC segment and wrote the fxguide piece, predicted the research “could have great implications to how we work in a few years.”

Fast Company Features Louis-Philippe Morency and His YouTube Research

A story in Fast Company covered Louis-Philippe Morency’s work studying YouTube videos. The article states that this work has the potential to advance the growing industry of opinion mining beyond the hunt for insight amidst text-only Amazon product reviews and Facebook status updates.

“We are taking this field one step further by focusing on online videos, which provide verbal and non-verbal communication clues beyond just words,” says Morency.

Graham Fyffe: “High Fidelity Facial Hair Capture”

Abstract: Modeling human hair from photographs is a topic of ongoing interest to the graphics community. Yet, the literature is predominantly concerned with the hair volume on the scalp, and it remains difficult to capture digital characters with interesting facial hair. Recent stereo-vision-based facial capture systems (e.g., [Furukawa and Ponce 2010; Beeler et al. 2010]) are capable of capturing extremely fine facial detail from high resolution photographs, but any facial hair present on the subject is reconstructed as a blobby mass. Prior work in facial hair photo-modeling is based on learned priors and image cues [Herrera et al.], and does not reconstruct the individual hairs belonging uniquely to the subject. We propose a method for capturing the three dimensional shape of complex, multi-colored facial hair from a small number of photographs taken simultaneously under uniform illumination. The method produces a set of oriented hair particles, suitable for point-based rendering.

Jewel Mine

Jewel Mine was created using the Microsoft Kinect for Windows SDK and is part of a series of game-based research prototypes using off-the-shelf video game hardware to explore the potential of interactive games to improve therapy in home and clinical settings.

Video game systems are increasingly being used for rehabilitation. However, games designed for entertainment do not always meet clinical needs. They can be too challenging for people with impairments to complete, or they can encourage the wrong types of movements. Negative feedback often discourages patients from trying to complete tasks.

Jewel Mine is a rehabilitation therapy tool customized to overcome these issues. Developed at the Game Based Rehab Lab, part of the MedVR Group at the University of Southern California Institute for Creative Technologies (ICT), this video game-based application targets balance training and upper limb reaching exercises and is designed to motivate people with orthopedic and neurological injury or impairments, including stroke, traumatic brain injury, spinal cord injury and balance issues associated with aging.

A player takes on the role of a miner who must gather jewels from a mine shaft by reaching out from the center of the screen and touching each jewel individually. The environment and associated task can be changed easily during game play. For example, the scene can be switched instantly to a meadow where the user must reach out to gather flowers, or a library where books are the targets. The degree of challenge can be tailored to individuals with different levels of ability, and the game tasks can be controlled by the clinician. Also, performance results can be saved and analyzed.
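
As a rough illustration of the kind of clinician-controlled configuration described above, the sketch below defines a session configuration and scales the reaching challenge to a patient's ability level. The field names and scaling rule are hypothetical and not taken from the actual Jewel Mine software.

```python
# Hypothetical session configuration for a reach-training game; illustrative only.
from dataclasses import dataclass

@dataclass
class SessionConfig:
    scene: str = "mine"               # e.g. "mine", "meadow", "library"
    target_count: int = 12
    reach_distance_cm: float = 40.0   # how far targets appear from the player
    time_limit_s: float = 120.0
    log_results: bool = True          # save performance data for the clinician

def tune_for_ability(cfg: SessionConfig, ability: float) -> SessionConfig:
    """Scale the reaching challenge to an ability score in [0, 1]."""
    cfg.reach_distance_cm = 20.0 + 40.0 * ability
    cfg.target_count = int(6 + 10 * ability)
    return cfg

print(tune_for_ability(SessionConfig(scene="meadow"), ability=0.4))
```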

The project was funded by the National Institute on Disability and Rehabilitation Research (NIDRR) as part of a Rehabilitation Engineering Research Center focused on developing technologies for people aging with and into a disability.

Abhijeet Ghosh: “Estimating Diffusion Parameters from Polarized Spherical Gradient Illumination”

Abstract: Accurately modeling and reproducing the appearance of real-world materials is crucial for the production of photoreal imagery of digital scenes and subjects. The appearance of many common materials is the result of subsurface light transport that gives rise to the characteristic “soft” appearance and the unique coloring of such materials. Jensen et al. [2001] introduced the dipole-diffusion approximation to efficiently model isotropic subsurface light transport. The scattering parameters needed to drive the dipole-diffusion approximation are typically estimated by illuminating a homogeneous surface patch with a collimated beam of light, or in the case of spatially varying translucent materials with a dense set of structured light patterns. A disadvantage of most existing techniques is that acquisition time is traded off with spatial density of the scattering parameters.

Abhijeet Ghosh and Paul Debevec: “Estimating Specular Normals from Spherical Stokes Reflectance Fields”

Abstract: Despite being at the focal point of intense research in both computer graphics as well as in computer vision, accurately reproducing the shape and appearance of real-world scenes remains a challenging problem, especially under uncontrolled conditions. One cue that has been used to separate diffuse and specular reflectance is polarization. Recent work in computer graphics has explored polarization of incident illumination in conjunction with spherical gradient illumination to infer high quality diffuse-specular separation of both albedo as well as photometric normal information [Ma et al. 2007]. Ghosh et al. [2010] improved upon this by removing the view-dependence of the polarization scheme of Ma et al. by analyzing the Stokes reflectance field under incident circularly polarized spherical gradient illumination, recovering more detailed specular reflectance information including index of refraction as well as specular roughness.

Abhijeet Ghosh and Paul Graham: “Measurement-Based Synthesis of Facial Microgeometry”

Abstract: Current face scanning techniques provide submillimeter precision, recording facial mesostructure at the level of pores, wrinkles, and creases. Nonetheless, the effect of surface roughness continues to shape specular reflection at the level of microstructure — surface texture at the scale of microns. In this work, we present a texture synthesis approach to increase the resolution of mesostructure-level facial scans using surface microstructure digitized from skin samples about the face. We digitize the skin patches using macro photography and polarized spherical gradient illumination at approximately 10 micron precision, and we make point-source reflectance measurements to characterize the specular reflectance lobe at this smaller scale. We then employ a constrained texture synthesis algorithm to synthesize appropriate surface microstructure per-region, blending the regions to cover the entire face. We show that renderings made with microstructure-level facial models preserve the original scanned mesostructure and exhibit surface reflection which is significantly more consistent with real photographs.

Andrew Jones: “A Cell-Phone Platform for Facial Performance Capture”

Poster presentation.

USC Invites You to Mars: Traverse the Red Planet at Virtual Viewing Party Tonight at SIGGRAPH

Portable virtual reality device from the MxR Lab at the USC Institute for Creative Technologies brings 3-D Martian landscape to your iPhone or Android.

Contact: Orli Belman
USC Institute for Creative Technologies
310 709-4156, belman@ict.usc.edu

What: Just before the Curiosity rover sets down on Mars, Siggraph attendees can explore the red planet themselves using only their smartphones and the FOV2GO, an award-winning do-it-yourself virtual reality viewer developed at the Mixed Reality (MxR) Lab of the University of Southern California Institute for Creative Technologies.

This low-cost cardboard device, combined with software from the USC School of Cinematic Arts, enables people to navigate a detailed stereo model of the Mars Gale Crater, finding points of interest and unique facts provided by NASA. Images behind this immersive 3-D experience were provided courtesy of JPL-CalTech MultiMedia and NASA/JPL-CalTech.

When: 8:30 p.m., Sunday, August 5th, immediately following the Siggraph Technical Papers Fast Forward.

Where: The Geek Bar, Room 404 in the Los Angeles Convention Center. The event will kick off SIGGRAPH’s live feed of the Mars Rover landing, scheduled to take place at 10:30 p.m.

Who: The team behind this virtual voyage to Mars includes:
Mark Bolas, director of the MxR Lab at the USC Institute for Creative Technologies and associate professor in the Interactive Media Division at the USC School of Cinematic Arts.

Perry Hoberman, associate research professor in the Interactive Media Division at USC School of Cinematic Arts, where he also heads S3D@USC, the Center for Stereoscopic 3D.

Thai Phan, ICT lead designer for the Viewing Party app, and the production team of Nonny De La Pena, David Nelson, David Krum and Peggy Weil.

More: The FOV2GO was named Best Demo at the 2012 IEEE Virtual Reality Conference. CNET Senior Writer Daniel Terdiman called it, “one of the coolest things I’ve ever seen done on an iPhone.”

DIY: Download a free app for your iPhone or Android, as well as plans to build your own cardboard viewer, on the MxR Lab’s website.
http://projects.ict.usc.edu/mxr/diy/fov2go/

About the USC Institute for Creative Technologies
www.ict.usc.edu

An academic research center, the University of Southern California Institute for Creative Technologies brings film and game industry artists together with computer and social scientists to study and develop immersive media for military training, health therapies, science education and more.

The institute’s Mixed Reality Lab (MxR) focuses on immersive systems for education and training simulations that incorporate both real and virtual elements. Projects push the boundaries of immersive experience design, through virtual reality and alternative controllers. The Interactive Media Division of the USC School of Cinematic Arts and the MedVR and Graphics Labs at ICT are frequent collaborative partners.

KTLA’s Tech Report Covers ICT

Reporter Rich Demuro of KTLA’s Tech Report visited ICT and filed this story about how ICT combines Hollywood talent and tech know-how to improve training and therapy. The Tech Report airs in Los Angeles and on dozens of stations nationwide.

Albert “Skip” Rizzo: “Virtual Reality Goes to War: Innovations in Military Behavioral Healthcare”

Abstract: War is perhaps one of the most challenging situations that a human being can experience. The physical, emotional, cognitive and psychological demands of a combat environment place enormous stress on even the best-prepared military personnel. Numerous reports indicate that the incidence of posttraumatic stress disorder (PTSD) in returning OEF/OIF military personnel is creating a significant healthcare challenge. This situation has served to motivate research on how to better develop and disseminate evidence-based treatments for PTSD and other psychosocial conditions. In this regard, Virtual Reality delivered exposure therapy for PTSD is currently being used with initial reports of positive outcomes. This presentation will detail how virtual reality applications are being designed and implemented across various points in the military deployment cycle to prevent, identify and treat combat-related PTSD in OIF/OEF Service Members and Veterans. We will also present recent work being done with artificially intelligent virtual humans that serve in the role as “Virtual Patients” for clinical training of healthcare providers in both military and civilian settings and as online healthcare guides for breaking down barriers to care. The projects in these areas that will be presented have been developed at the University of Southern California Institute for Creative Technologies, a U.S. Army University Affiliated Research Center, and will provide a diverse overview of how virtual reality is being used to deliver exposure therapy, assess PTSD and cognitive function, provide stress resilience training prior to deployment and its use in breaking down barriers to care.

Skip Rizzo and ICT’s Virtual Patient Work Featured on Live Science, GizMag and SmartPlanet

LiveScience featured ICT’s Skip Rizzo and his virtual patient work, which involves training therapists to work with traumatized veterans. Psychiatry residents use interactive programs to learn to diagnose disorders like post-traumatic stress; they speak in real time to virtual characters with different personalities and problems, and the characters respond. Rizzo said he hopes to create a library of virtual patients with different diagnoses for use by psychiatrists and psychologists. SmartPlanet and Gizmag also covered the work. The articles were based on the plenary talk that Rizzo gave at the recent convention of the American Psychological Association in Orlando, Florida.

Albert “Skip” Rizzo: “Virtual reality exposure therapy for combat-related posttraumatic stress disorder”

Abstract: Posttraumatic stress disorder (PTSD) is a chronic, debilitating, psychological condition that occurs in a subset of individuals who experience or witness life-threatening traumatic events. PTSD is highly prevalent in those who served in the military. In this paper, we present the underlying theoretical foundations and existing research on virtual reality exposure therapy, a recently emerging treatment for PTSD. Three virtual reality scenarios used to treat PTSD in active duty military and combat veterans and survivors of terrorism are presented: Virtual Vietnam, Virtual Iraq, and Virtual World Trade Center. Preliminary results of ongoing trials are presented.

Former ICT Staffer Palmer Luckey Beats Kickstarter Goal in Hours

Only a couple of hours after launching, former ICT MxR Lab employee Palmer Luckey surpassed his $250,000 goal. A day after going live, Luckey’s Kickstarter fundraiser for his innovative virtual reality headset, Oculus Rift, had raised nearly four times the initial goal. Congratulations, Palmer! Check out the Kickstarter page.

Rich DiNinni: “Future Learning Technologies”

Established in 1999 as a University Affiliated Research Center with a multi-year contract from the U.S. Army, the University of Southern California’s Institute for Creative Technologies (ICT) was given the mission to conduct basic and applied research and advanced technology development in immersive technologies to advance and maintain the state-of-the-art for human synthetic experiences that are so compelling the participants will react as if they are real. Based in Los Angeles, the entertainment capital of the world, ICT pursues that mission by bringing together the deep talents of those who understand what makes compelling content with those who understand how to develop realistic simulation technologies. These disparate groups of computer and social scientists, film producers, script writers, therapists, game designers and artists are pioneering new ways to teach, train, and heal. Research projects explore and expand how people engage with computers, through virtual characters, video games, and simulated scenarios, building on ICT’s recognized leadership in the development of virtual humans who look, think, and behave like real people. ICT prototype applications provide engaging experiences that help improve skills in decision-making, cultural awareness, leadership, and coping. This talk will describe and provide demonstrations of how ICT’s basic research efforts blend with prototype application development to build future learning technologies that can be used for military training, education, health therapies, and more.

Celso De Melo and Jonathan Gratch: “Reverse appraisal: The importance of appraisals for the effect of emotion displays on people’s decision making in a social dilemma”

Abstract: Two studies are presented that explore the interpersonal effect of emotion displays in decision making in a social dilemma. Experiment 1 (N=405) showed that facial displays of emotion (joy, sadness, anger and guilt) had an effect on perception of how the person was appraising the social dilemma outcomes (perception of appraisals) and on perception of how likely the person was to cooperate in the future (perception of cooperation). Experiment 1 also showed that perception of appraisals (partially and, in some cases, fully) mediated the effect of emotion displays on perception of cooperation. Experiment 2 (N=202) showed that manipulating perception of appraisals, by expressing them textually, produced an effect on perception of cooperation, thus providing evidence for a causal model where emotion displays cause perception of appraisals which, in turn, causes perception of cooperation. In line with Hareli and Hess’ (2010) findings and a social-functions view of emotion, we advance the reverse appraisal proposal, which argues people can infer, from emotion displays, how others are appraising a situation, which in turn supports inferences that are relevant for decision making. We discuss implications of these results and this proposal for decision and emotion theory.

Elnaz Nouri, Kallirroi Georgila and David Traum: “A Cultural Decision-Making Model for Negotiation Based on Inverse Reinforcement Learning”

Abstract: We learn culture-specific weights for a multi-attribute model of decision-making in negotiation, using Inverse Reinforcement Learning (IRL). The model takes into account multiple individual and social factors for evaluating the available choices in a decision set, and attempts to account for observed behavior differences across cultures by the different weights that members of those cultures place on each factor. We apply this model to the Ultimatum Game (a well-known simple negotiation game) and show that weights learned from IRL surpass both a simple baseline with random weights, and a high baseline considering only one factor of maximizing gain in own wealth in accounting for the behavior of human players from four different cultures. We also show that the weights learned with our model for one culture outperform weights learned for other cultures when playing against opponents of the first culture. We conclude that decision-making in negotiation is a complex, culture-specific process that cannot be explained just by the notion of maximizing one’s own utility, but which can be learned using IRL techniques.
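
To make the multi-attribute decision model concrete, here is a minimal sketch in which each possible response to an ultimatum offer is scored as a weighted sum of factors, with different weight vectors standing in for different cultures. The factors and weights are invented; in the paper the weights are learned with inverse reinforcement learning from observed play.

```python
# Toy multi-attribute evaluation of Ultimatum Game responses; weights are
# invented placeholders, not the learned culture-specific weights from the paper.
def score_outcome(own_gain, other_gain, weights):
    factors = {
        "own_gain": own_gain,
        "other_gain": other_gain,
        "fairness": -abs(own_gain - other_gain),
    }
    return sum(weights[k] * v for k, v in factors.items())

def respond(offer_to_responder, total, weights):
    accept = score_outcome(offer_to_responder, total - offer_to_responder, weights)
    reject = score_outcome(0, 0, weights)  # both players get nothing
    return "accept" if accept >= reject else "reject"

culture_a = {"own_gain": 1.0, "other_gain": 0.1, "fairness": 0.2}
culture_b = {"own_gain": 0.6, "other_gain": 0.2, "fairness": 0.9}
# A low offer (2 of 10) is accepted under one weighting and rejected under the other.
print(respond(2, 10, culture_a), respond(2, 10, culture_b))
```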

ICT Computer Scientist Uses YouTube as a Research Tool

ICT Researcher Giving YouTube Views a Whole New Meaning

Computer scientist Louis-Philippe Morency is analyzing online videos to capture the nuances of how people communicate opinions through words and actions

Louis-Philippe Morency spends much of his workday watching YouTube. He logs hours at his desk viewing videos of people expounding on everything from presidential politics to peanut butter preferences.

But Morency is no slacker. He’s a scientist at the University of Southern California Institute for Creative Technologies whose focus is teaching computers to identify and understand the ways people convey emotion – including those times when we say one thing and mean the opposite. And in an interesting twist for those studying human communication, it turns out computers themselves are becoming one of the best places to explore how people express themselves these days.

“There is a growing field of opinion mining right now, where people study internet posts like Amazon book reviews or other text-based product and movie critiques to find out how people feel about a topic,” said Morency, who is also a research assistant professor of computer science at the USC Viterbi School of Engineering. “We are taking this field one step further by focusing on online videos which provide verbal and non-verbal communication clues beyond just words.”

Most people can cite countless cases of misreading either written or body language: the tone-deaf email where a joke comes across as an insult, or the conversation where the sentiment can only be understood by whether a statement is delivered with a smile or a stare.

“By looking at more than just text we can learn when someone is using sarcasm, for example saying they love something when their facial expressions and body language indicate that they hate it,” said Morency.

Social scientists have advanced understanding of everything from autism to cross-cultural differences by studying how people use verbal and non-verbal forms of communication. In the past, researchers needed live subjects to study. But with the increasing volume of videos posted online, the Internet has become an invaluable resource. For his latest effort – figuring out how to identify when someone is sharing a positive, negative or neutral opinion – YouTube provides a limitless library of likes and loathes.

“There are a lot of people sharing their sentiments on YouTube,” said Morency. “The goal of this work is to see if we can find a way to analyze these millions of videos and accurately assess what kinds of views they are expressing.”

To do this, Morency and his colleagues created a proof-of-concept data set of about 50 YouTube videos that feature people expressing their opinions. The videos were input into a computer program Morency developed that zeroes in on aspects of the speaker’s language, speech patterns and facial expressions to determine the type of opinion being shared.
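
As a rough illustration of combining verbal and nonverbal cues, the sketch below trains a simple classifier on feature vectors that mix lexical, facial and acoustic measurements, assuming scikit-learn is available. The features, values and labels are invented placeholders, not Morency's actual program or dataset.

```python
# Toy feature-level fusion for multimodal sentiment; data and features invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each clip: [positive_word_ratio, smile_fraction, gaze_at_camera, pitch_mean_hz, pause_rate]
X = np.array([
    [0.30, 0.60, 0.70, 220.0, 0.05],   # enthusiastic review
    [0.02, 0.05, 0.30, 180.0, 0.10],   # negative review
    [0.10, 0.10, 0.40, 160.0, 0.40],   # neutral, lots of pauses
    [0.25, 0.50, 0.65, 210.0, 0.08],
    [0.03, 0.08, 0.35, 175.0, 0.12],
    [0.09, 0.12, 0.45, 165.0, 0.35],
])
y = np.array([1, -1, 0, 1, -1, 0])     # positive, negative, neutral

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([[0.28, 0.55, 0.68, 215.0, 0.07]]))  # expect a positive label
```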

Morency’s small sample has already identified several advantages to analyzing gestures and speech patterns over looking at writing alone. First, people don’t always use obvious polarizing words like love and hate each time they express an opinion. So software programmed to search for these “obvious” occurrences can miss many other valuable posts.

Also, Morency found that people smile and look at the camera more when sharing a positive view. Their voices become higher pitched when they have a positive or negative opinion, and they start to use a lot more pauses when they are neutral.

“These early findings are promising but we still have a long way to go,” said Morency. “What they tell us is that what you say, how you say it, and the gestures you make while speaking all play a role in pinpointing the correct sentiment.”

Morency first demonstrated his YouTube model at the International Conference on Multimodal Interaction in Spain last fall. He has since expanded the dataset to include close to 500 videos and will submit results from this larger sample for publication later this year.

The YouTube opinion dataset is also available to other researchers by contacting Morency’s Multimodal Communication and Machine Learning lab at ICT. In the academic community, Morency foresees his research and database serving as resources for scientists working to understand human non-verbal and verbal communication, helping to identify conditions like autism or depression or to build more engaging educational systems. Potential commercial uses could include marketing or survey applications.

As for Morency, he plans to continue to view how people behave over computers in order to make computers behave more like people.

And that is an effort worth watching.

West Point Cadets Train on Mobile Counter-IED Trainer

As part of their summer training program, West Point Cadets are going through the Mobile Counter-IED Trainer, which includes a video game to assist troops in recognizing and reacting to improvised explosive devices (IEDs). A recent Army article says the system, which ICT took part in developing, provides a unique way to learn about IEDs, a leading cause of death and injury for troops in Afghanistan. Cadet Matthew Ghidotti said the MCIT training was unlike any he had experienced before at West Point.

“We’ve never really gotten that much detail about IEDs before,” said Ghidotti in the story. “And the way the MCIT was interactive really allowed us to learn a lot and understand the different facets of bombs that are out there.”

Photo Credit: U.S. Army Public Affairs

Ning Wang, David Pynadath and Stacy Marsella: “Toward Automatic Verification of Multiagent Systems for Training Simulations”

Abstract: Advances in multiagent systems have led to their successful application in experiential training simulations, where students learn by interacting with agents who represent people, groups, structures, etc. These multiagent simulations must model the training scenario so that the students’ success is correlated with the degree to which they follow the intended pedagogy. As these simulations increase in size and richness, it becomes harder to guarantee that the agents accurately encode the pedagogy. Testing with human subjects provides the most accurate feedback, but it can explore only a limited subspace of simulation paths. In this paper, we present a mechanism for using human data to verify the degree to which the simulation encodes the intended pedagogy. Starting with an analysis of data from a deployed multiagent training simulation, we then present an automated mechanism for using the human data to generate a distribution appropriate for sampling simulation paths. By generalizing from a small set of human data, the automated approach can systematically explore a much larger space of possible training paths and verify the degree to which a multiagent training simulation adheres to its intended pedagogy.
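
The sketch below illustrates the general flavor of such an approach: estimate per-step action frequencies from a handful of human traces, sample many synthetic simulation paths from that distribution, and check a simple pedagogical property over the samples. The scenario, the step-wise independence assumption, and the property being checked are all invented for illustration and are not the paper's actual mechanism.

```python
# Toy path sampling from human traces, followed by a simple property check.
import random
from collections import Counter

# A few invented human traces through a training scenario.
human_traces = [
    ["greet", "ask_leader", "negotiate", "agree"],
    ["greet", "negotiate", "agree"],
    ["greet", "ask_leader", "negotiate", "refuse"],
]

def action_distribution(traces, step):
    """Relative frequency of each action taken at a given step."""
    actions = [t[step] for t in traces if len(t) > step]
    counts = Counter(actions)
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}

def sample_path(traces, max_len=4):
    """Sample a path step by step (steps treated as independent, a simplification)."""
    path = []
    for step in range(max_len):
        dist = action_distribution(traces, step)
        if not dist:
            break
        actions, probs = zip(*dist.items())
        path.append(random.choices(actions, weights=probs)[0])
    return path

# Toy "pedagogy check": how often do sampled paths end in agreement?
samples = [sample_path(human_traces) for _ in range(1000)]
print(sum(p[-1] == "agree" for p in samples) / len(samples))
```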

Belinda Lange: “Virtual Rehabilitation: Opportunities and Challenges”

This is a plenary talk for the US-Turkey Advanced Study Institute on Global Healthcare Challenges. The presentation will discuss the opportunities and challenges of using virtual reality for rehabilitation and therapy, and of using virtual humans for coaching and for training. It will also highlight various ICT technologies.

Kallirroi Georgila, Anton Leuski and David Traum: “Reinforcement Learning of Question-Answering Dialogue Policies for Virtual Museum Guides”

Abstract: We use Reinforcement Learning (RL) to learn question-answering dialogue policies for a real-world application. We analyze a corpus of interactions of museum visitors with two virtual characters that serve as guides at the Museum of Science in Boston, in order to build a realistic model of user behavior when interacting with these characters. A simulated user is built based on this model and used for learning the dialogue policy of the virtual characters using RL. Our learned policy outperforms two baselines (including the original dialogue policy that was used for collecting the corpus) in a simulation setting.
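
A minimal sketch of the learning setup described above: a tabular Q-learning agent interacts with a crude simulated user and learns when to answer directly versus ask for clarification. The states, actions, rewards and user model are invented and far simpler than the museum-guide system.

```python
# Toy Q-learning against a simulated user; not the authors' system or data.
import random
from collections import defaultdict

ACTIONS = ["answer", "ask_clarification", "ignore"]

def simulated_user(state, action):
    """Return (next_state, reward): a crude model of visitor reactions."""
    if state == "clear_question":
        return ("done", 1.0) if action == "answer" else ("done", -0.5)
    if state == "unclear_question":
        if action == "ask_clarification":
            return ("clear_question", 0.2)
        return ("done", -0.5)
    return ("done", 0.0)

Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.2
for _ in range(5000):
    state = random.choice(["clear_question", "unclear_question"])
    while state != "done":
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = simulated_user(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# The learned policy should prefer clarifying an unclear question.
print(max(ACTIONS, key=lambda a: Q[("unclear_question", a)]))
```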

Fabrizio Morbini, Eric Forbell, David DeVault, Kenji Sagae, David Traum and Albert “Skip” Rizzo: “A Mixed-Initiative Conversational Dialogue System for Healthcare”

Abstract: We present a mixed initiative conversational dialogue system designed to address primarily mental health care concerns related to military deployment. It is supported by a new information-state based dialogue manager, FLoReS (Forward-Looking, Reward Seeking dialogue manager), that allows both advanced, flexible, mixed initiative interaction, and efficient policy creation by domain experts. To easily reach its target population this dialogue system is accessible as a web application.

David DeVault and David Traum: “A Demonstration of Incremental Speech Understanding and Confidence Estimation in a Virtual Human Dialogue System”

This demonstration highlights some emerging capabilities for incremental speech understanding and processing in virtual human dialogue systems.

Anton Leuski and David DeVault: “A Study in How NLU Performance Can Affect the Choice of Dialogue System Architecture”

Abstract: This paper presents an analysis of how the level of performance achievable by an NLU module can affect the optimal modular design of a dialogue system. We present an evaluation that shows how NLU accuracy levels impact the overall performance of a system that includes an NLU module and a rule-based dialogue policy. We contrast these performance levels with the performance of a direct classification design that omits a separate NLU module. We conclude with a discussion of the potential for a hybrid architecture incorporating the strengths of both approaches.
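
The contrast between the two designs can be sketched in a few lines: one system maps an utterance to an NLU label and then applies a hand-written policy, while the other classifies directly from utterance to response. The toy data, labels and rules below are invented for illustration and do not reproduce the paper's evaluation.

```python
# Toy contrast of an NLU-plus-policy pipeline versus direct classification.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

utterances = ["hello there", "what is your name", "goodbye", "hi", "who are you", "bye now"]
nlu_labels = ["greeting", "ask_name", "farewell", "greeting", "ask_name", "farewell"]
responses  = ["Hello!", "I am the guide.", "Goodbye!", "Hello!", "I am the guide.", "Goodbye!"]

vec = CountVectorizer()
X = vec.fit_transform(utterances)

# Design 1: NLU module followed by a rule-based dialogue policy.
nlu = MultinomialNB().fit(X, nlu_labels)
policy = {"greeting": "Hello!", "ask_name": "I am the guide.", "farewell": "Goodbye!"}
def pipeline_respond(text):
    return policy[nlu.predict(vec.transform([text]))[0]]

# Design 2: direct classification from utterance to system response.
direct = MultinomialNB().fit(X, responses)
def direct_respond(text):
    return direct.predict(vec.transform([text]))[0]

print(pipeline_respond("hello friend"), "|", direct_respond("hello friend"))
```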

ICT Virtual Humans and Virtual Reality on CNET

CNET reporter Daniel Terdiman stopped at ICT on his West Coast road trip to see how we are using virtual humans and virtual reality to address post-traumatic stress, decision-making skills and more. Of his experience looking through the FOV2GO, the ICT Mixed Reality Lab’s portable device that transforms smartphones into 3D viewers, Terdiman stated, “It was one of the coolest things I’ve ever seen done on an iPhone.”

Read his coverage and check out the slideshow here.

Joel Jurik and Paul Debevec: “Geometry-Corrected Light Field Rendering for Creating a Holographic Stereogram”

We present a technique to record and process a light field of an object in order to produce a printed holographic stereogram. We use a geometry correction process to maximize the depth of field and depth-dependent surface detail even when the array of viewpoints comprising the light field is coarsely sampled with respect to the angular resolution of the printed hologram. We capture the light field data of an object with a digital still camera attached to a 2D translation stage, and generate hogels (holographic elements) for printing by reprojecting the light field onto a photogrammetrically recovered model of the object and querying the relevant rays to be produced by the hologram with respect to this geometry. This results in a significantly clearer image of detail at different depths in the printed holographic stereogram.

David Traum: “Advanced Dialogue Models for Multi-party Negotiation”

Abstract: In this talk, I will present the dialogue models for virtual human negotiation developed through a series of projects at the Institute for Creative Technologies. The models support multi-party, multi-modal dialogues, in which humans and virtual agents negotiate on multiple issues, and the agents engage in multiple strategies throughout the course of the interaction. I will focus special attention on recent work in progress on incremental dialogue interpretation, conversational participant modeling, and models of secrecy-reasoning.

ICT’s Virtual Reality Exposure Therapy Featured on TNT’s Rizzoli and Isles

An episode of the TNT series “Rizzoli and Isles” features a dramatic depiction of ICT’s Virtual Iraq/Afghanistan PTSD exposure therapy system as part of the storyline of the episode “This Is How a Heart Breaks.” The episode will air on Tuesday, June 19, at 9 p.m. (ET/PT).

This portrayal is not representative of actual treatment, which typically takes place one on one with a therapist. In a real therapeutic setting, graduated exposure to traumatic memories in VR is carefully measured to a degree that is acceptable to the person undergoing treatment. However, it is hoped that this dramatic depiction of some of the elements of the process will both serve to inform the general public as to the existence of this form of PTSD treatment and perhaps encourage those who are suffering from PTSD to seek the care that could make a real difference in their lives.

It has been documented in the clinical literature that prolonged exposure therapy has the best evidence for its effectiveness in treating PTSD. The use of virtual reality for delivering this treatment aims to provide a safe, controlled and gradual way to help service members and veterans confront and process their difficult experiences using a technology that may appeal to a generation of service members who have grown up “digital”.

Thanks to “Rizzoli and Isles” executive producer and series creator Janet Tamaro for her efforts to work with our team at ICT to present this treatment option embedded within a dramatic storyline as part of a very popular television drama.

For more information on the availability of this form of treatment, click on these links:
http://www.patss.com/cli_res/irq_afg.html?name1=Current+Studies&type1=2Select&name2=Iraq+and+Afghanistan+Wars+-+PTSD+Research+Study&type2=3Active
https://www.facebook.com/PTSDResearch
http://www.psychiatry.emory.edu/PROGRAMS/Trauma/index.htm
http://anxietyclinic.cos.ucf.edu/tmt.html
http://t2health.org/programs-clinical.html
http://medvr.ict.usc.edu/

Louis-Philippe Morency: “Multi-View Latent Variable Discriminative Models For Action Recognition” and “3D Constrained Local Model for Rigid and Non-Rigid Facial Tracking”

“Multi-View Latent Variable Discriminative Models For Action Recognition”
Abstract: Many human action recognition tasks involve data that can be factorized into multiple views such as body postures and hand shapes. These views often interact with each other over time, providing important cues to understanding the action. We present multi-view latent variable discriminative models that jointly learn both view-shared and view-specific sub-structures to capture the interaction between views. Knowledge about the underlying structure of the data is formulated as a multi-chain structured latent conditional model, explicitly learning the interaction between multiple views using disjoint sets of hidden variables in a discriminative manner. The chains are tied using a predetermined topology that repeats over time. We present three topologies – linked, coupled, and linked-coupled – that differ in the type of interaction between views that they model. We evaluate our approach on both segmented and unsegmented human action recognition tasks, using the ArmGesture, the NATOPS, and the ArmGesture-Continuous data. Experimental results show that our approach outperforms previous state-of-the-art action recognition models.

3D Constrained Local Model for Rigid and Non-Rigid Facial Tracking
Abstract: We present 3D Constrained Local Model (CLM-Z) for robust facial feature tracking under varying pose. Our approach integrates both depth and intensity information in a common framework. We show the benefit of our CLM-Z method in both accuracy and convergence rates over regular CLM formulation through experiments on publicly available datasets. Additionally, we demonstrate a way to combine a rigid head pose tracker with CLM-Z that benefits rigid head tracking. We show better performance than the current state-of-the-art approaches in head pose tracking with our extension of the generalised adaptive view-based appearance model (GAVAM).

U.S. Army Chief of Staff Visits ICT

U.S. Army Chief of Staff General Ray Odierno visited the University of Southern California Institute for Creative Technologies (ICT) on Tuesday, June 12 to see demonstrations of the institute’s breakthroughs in graphics, virtual human and mixed reality technologies and to explore how these advances can continue to have a positive impact on Soldiers, from pre-deployment leader development and training to on-location practice tools to post-deployment recovery.

“The work that is being done here is something that I think is critical for us as we move to the future,” said Odierno. “It is important for the Army to work with institutions such as this – which have the creative capability, the expertise and the phenomenal credentials of people who work here – to try to utilize their knowledge.”

Odierno, who arrived at the institute’s Playa Vista campus in a Black Hawk helicopter, saw examples of current ICT work addressing improvised explosive devices and post-traumatic stress as well as projects in the research pipeline that aim to bring down the costs and increase the effectiveness of virtual reality-based simulations.

“It was a tremendous honor to host the Army Chief of Staff here today,” said Randall W. Hill, Jr., ICT’s executive director. “Our mission as an Army-sponsored university affiliated research center is to combine the technological and creative capital available at USC and in Los Angeles to develop new ways to teach and train. I think the general saw that we are achieving that and his visit here began a conversation on ways our partnership can do even more to benefit our troops.”

The four-star general toured the ICT Graphics Lab and learned about the group’s Academy-Award-winning Light Stage scanning process for creating realistic digital faces. He even volunteered himself as a scanning subject, taking a few minutes to pose inside the LED-filled sphere so that his face can possibly be recreated as an avatar or virtual character. Having a personalized digital avatar for every Soldier is something that the Army plans to incorporate into its virtual training programs. The techniques developed at ICT allow for an unprecedented level of detail, including fine wrinkles and skin pores.

Researchers showed the Army’s highest ranking officer how far the institute has come in developing believable virtual humans that can speak, understand and gesture like real people. Odierno entered the Gunslinger saloon, where computer-generated characters take on the iconic roles of the Western bartender, bad guy and damsel in distress, while a real person plays a Texas Ranger who must save the day. The Hollywood-inspired exchange demonstrates the degree to which story, character and advanced technology all play a part in getting people immersed, something the general noted is as important for training as it is for play.

“It is helping us to develop capabilities that will allow us to keep people interested, to keep simulations realistic,” he said. “I think that is important.”

He also added that the immediacy and flexibility of practicing with a virtual role player, which can potentially be loaded on a laptop and programmed for an endless variety of situations, has other advantages as the Army strives to build adaptive leaders who have to operate in complex environments.

“What is important about it is you are able to interact, make mistakes, to run several different types of scenarios, to understand how to react and get feedback on how you react,” he said. “And you can critique yourself, you can have other people critique you. So in my mind it is really a capability and capacity we have never had before.”

Current ICT virtual human prototypes are being used to train soldiers in counseling and interpersonal communication skills. Virtual patients for teaching interviewing skills are also in development.

Odierno also saw demonstrations of Virtual Iraq/Afghanistan, ICT’s virtual reality exposure therapy for treating post-traumatic stress, and of SimCoach, a web-based virtual human who provides support to Soldiers and their families seeking help or mental health support resources.

“To me these ideas are absolutely phenomenal in helping us to try to solve some of these real difficult issues that we have,” he said.

Tuesday’s visit was the first from General Odierno. He was accompanied by senior members of his staff as well as from the Army’s Research, Development and Engineering Command and Office of the Assistant Secretary of the Army for Acquisition, Logistics and Technology.

“It was a great opportunity for our researchers to hear firsthand how technologies developed here help solve real world problems,” said Hill. “It is gratifying to see that we are indeed making a difference.”

Belinda Lange: “Getting it into the hands of the consumer: Designing, Developing and Commercializing Games for Rehabilitation”

This one-hour discussion will cover important issues around the design and development of games for rehabilitation within the academic setting, and will present to the audience a series of questions for open discussion about the appropriate testing and commercialization of these products in preparation for clinical and mainstream use.

David Traum: “Recent Research in the ICT Natural Language Dialogue Group”

The Institute for Creative Technologies has developed natural language dialogue capabilities for a variety of virtual human characters that can be used for role-play training, and educational and entertainment purposes. In this talk, I describe some of the “genres” of dialogue for these agents and the dialogue architectures that have been developed to support them, as well as recent research on spoken language understanding, authoring tools and dialogue policy learning, and conversational speech generation and synthesis.

Defense News Covers ICT’s Mental Health Work

A story in Defense News covers efforts to use virtual reality and other technologies to address mental health issues and barriers to care faced by returning troops. The article covers several ICT efforts including Virtual Iraq/Afghanistan exposure therapy for PTSD, SimCoach and Strive.

Celso De Melo and Jonathan Gratch: “Bayesian Model of the Social Effects of Emotion in Decision-Making in Multiagent Systems”

Abstract: Research in the behavioral sciences suggests that emotion can serve important social functions and that, more than a simple manifestation of internal experience, emotion displays communicate one’s beliefs, desires and intentions. In a recent study we have shown that, when engaged in the iterated prisoner’s dilemma with agents that display emotion, people infer, from the emotion displays, how the agent is appraising the ongoing interaction (e.g., is the situation favorable to the agent? Does it blame me for the current state-of-affairs?). From these appraisals people, then, infer whether the agent is likely to cooperate in the future. In this paper we propose a Bayesian model that captures this social function of emotion. The model supports probabilistic predictions, from emotion displays, about how the counterpart is appraising the interaction which, in turn, lead to predictions about the counterpart’s intentions. The model’s parameters were learnt using data from the empirical study. Our evaluation indicated that considering emotion displays improved the model’s ability to predict the counterpart’s intentions, in particular, how likely it was to cooperate in a social dilemma. Using data from another empirical study where people made inferences about the counterpart’s likelihood of cooperation in the absence of emotion displays, we also showed that the model could, from information about appraisals alone, make appropriate inferences about the counterpart’s intentions. Overall, the paper suggests that appraisals are valuable for computational models of emotion interpretation. The relevance of these results for the design of multiagent systems where agents, human or not, can convey or recognize emotion is discussed.
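
The chain the abstract describes (emotion display, inferred appraisal, predicted intention) can be sketched as a two-stage probabilistic model; the probabilities below are invented placeholders, whereas the paper's model learns its parameters from the empirical study.

```python
# Toy two-stage model: display -> appraisal -> cooperation; probabilities invented.
displays = ["joy", "anger"]
appraisals = ["favorable", "unfavorable"]

# P(appraisal | display)
p_appraisal = {
    ("favorable", "joy"): 0.8, ("unfavorable", "joy"): 0.2,
    ("favorable", "anger"): 0.1, ("unfavorable", "anger"): 0.9,
}
# P(cooperate | appraisal)
p_cooperate = {"favorable": 0.7, "unfavorable": 0.25}

def p_cooperate_given_display(display):
    # Marginalize over the hidden appraisal.
    return sum(p_appraisal[(a, display)] * p_cooperate[a] for a in appraisals)

for d in displays:
    print(d, round(p_cooperate_given_display(d), 3))  # joy -> higher cooperation estimate
```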

Rich DiNinni and Julia Kim: “Cognitive Cartography: Small Unit Readiness through Pre-deployment Priming of Mental Maps”

Abstract: Visual representations and storytelling are the oldest traditions of remembering, recounting events, imparting lessons and projecting affect. These formats structure information in part-whole relations, affording the listener or reader schematic frameworks to interpret past, present or future analogous events. Since all permutations of experiences in the deployed environment cannot be known a priori, prescribed intentional memorization of facts or scripted sequences is likely to be of limited value. Rather than methods directed toward specific content knowledge, a global method designed to prime cognitive structures by leveraging immersive technology is proposed. This paper will describe an ongoing project to develop “just-in-time” squad and small unit leader mission rehearsal tools for the purpose of 1) enabling better individual adjustment on the ground, 2) accelerating coordinated effort and team integration, and 3) increasing individual and unit resilience through enhancing unit cohesion.

Thomas Talbot: “Designing Medical Education for Today’s Brain”

Disruptive advances in technology, returns on cognitive science research and societal changes are leading us to question basic assumptions about medical education. This June 4, 2012, keynote by Thomas B. Talbot, MD, MS, FAAP, at the 2012 Information Technology in Academic Medicine Conference, sponsored by the Group on Information Resources (GIR), explores emerging research and ideas that may be of keen interest to medical educators everywhere.

David DeVault and David Traum: “Incremental Speech Understanding in a Multi-Party Virtual Human Dialogue System”

David DeVault and David Traum will present a demonstration of ICT’s incremental speech processing capabilities at the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2012). David Traum will also chair a session and co-chair a workshop.

Matthew Jensen Hays, H. Chad Lane, Daniel Auerbach: “Clear and presence danger: Feedback in serious games.”

Serious games are generally designed with two goals in mind. First, they are designed to promote learning. Second, like most games, they seek to create compelling and engaging experiences, often referred to as a sense of presence. Presence itself is believed to promote learning, but serious games often incorporate additional features to increase pedagogical value. One such feature is the use of an intelligent tutoring system (ITS) to provide feedback during gameplay. Because feedback from an ITS is usually extrinsic (i.e., it operates outside of the primary game mechanic), attending to it may disrupt players’ sense of presence, thereby hindering learning. To avoid this potential disruption, ITS feedback has been removed from some serious games, and omitted from the design of others. However, the most beneficial conditions of instruction and practice are often counterintuitive; in this paper, we challenge the assumption that feedback during learning hinders sense of presence. Across three experiments, we examined how an ITS that provided extrinsic feedback during a serious game affected presence. Across different modalities and conditions, we found that feedback did not affect presence (although manipulations of visual richness did). Presence was also unaffected when we manipulated different features of the ITS, such as the participants’ ability to control feedback delivery, or even their displeasure with feedback frequency. Our results suggest that it is possible to provide extrinsic feedback in a serious game without detracting from the immersive power of the game itself.

FX Guide Features Paul Debevec and the Development of the Light Stage Systems

FX Guide posted an interview and video of Paul Debevec, ICT’s associate director for graphics research, discussing the Light Stage systems he developed. The coverage is part of the fxphd Background Fundamentals training course and was also showcased on the main FX Guide site.

NBC News Features ICT’s Virtual Therapy for PTSD

NBC News featured a story on ICT’s virtual therapy for treating and detecting PTSD. Skip Rizzo and Galen Buckwalter were interviewed in the segment.


ELITE Leadership and Counseling Trainer Featured on the Ft. Benning Report

The Benning Report, a video news program produced at Ft. Benning, showcased ICT’s ELITE trainer in a recent segment. The section about ICT begins at 16:22 in the video.

Edward Haynes: “A Sequencer Tool for Non-Verbal Behavior in the ELITE Leadership Training System”

Abstract: The Emergent Leader Immersive Training Environment (ELITE) is a training tool developed for the U.S. Army that uses an interactive virtual human to teach young officers interpersonal and counseling skills. The ELITE system is the culmination of several key areas of research, including natural language understanding, dialogue management and a specialized animation system called SmartBody. The Animation Sequencer Tool (AST) is a middleware tool designed to act as an integration point for controlling the virtual human’s responses with various non-verbal behaviors and animations.

SimCoach Goes Live on Braveheart, the Atlanta Braves Veterans Support Website

ICT’s SimCoach, a new way to help Veterans and their families find expert help and support for PTSD, is now live on the BraveHeart website.

Anton Leuski: “The BladeMistress Corpus: From Talk to Action in Virtual Worlds”

Abstract: Virtual Worlds (VW) are online environments where people come together to interact and perform various tasks. The chat transcripts of interactions in VWs pose unique opportunities and challenges for language analysis. Firstly, the language of the transcripts is very brief, informal, and task-oriented. Secondly, in addition to chat, a VW system records users’ in-world activities. Such a record could allow us to analyze how the language of interactions is linked to the users’ actions. For example, we can make the language analysis of the users’ dialogues more effective by taking into account the context of the corresponding action, or we can predict or detect users’ actions by analyzing the content of conversations. Thirdly, a joint analysis of both the language and the actions would empower us to build effective models of the users and their behavior. In this paper we present a corpus constructed from the logs of the online multiplayer game BladeMistress. We describe the original logs and the annotations that we created on the data, and summarize some of the experiments.

Kallirroi Georgila, Alan W. Black, Kenji Sagae, David Traum: “Practical Evaluation of Human and Synthesized Speech for Virtual Human Dialogue Systems”

The current practice in virtual human dialogue systems is to use professional human recordings or limited-domain speech synthesis. Both approaches lead to good performance but at a high cost. To determine the best trade-off between performance and cost, we perform a systematic evaluation of human and synthesized voices with regard to naturalness, conversational aspect, and likability. We also vary the type (in-domain vs. out-of-domain), length, and content of utterances, and take into account the age and native language of raters as well as their familiarity with speech synthesis. We present detailed results from two studies, a pilot one and one run on Amazon’s Mechanical Turk. Our results suggest that a professional human voice can surpass both an amateur human voice and synthesized voices. Also, a high-quality general-purpose voice or a good limited-domain voice can perform better than amateur human recordings. We do not find any significant differences between the performance of a high-quality general-purpose voice and a limited-domain voice, both trained with speech recorded by actors. As expected, in most cases, the high-quality general-purpose voice is rated higher than the limited-domain voice for out-of-domain sentences and lower for in-domain sentences. There is also a trend, though not statistically significant, for long or negative-content utterances to receive lower ratings.

Angela Nazarian: “The Interplay between Accent and Warmth on Person Perception”

This randomized study manipulated temperature prime (hot or cold) and accent of a female speaker (American English or Indian English). Results found that Indians rated the Indian speaker as warmer only if they were in the hot prime condition. The effects of accent and culture on person perception are discussed.

Priti Aggarwal, Ron Artstein, Jillian Gerten, Athanasios Katsamanis, Shrikanth Narayanan, Angela Nazarian, David Traum: “The Twins Corpus of Museum Visitor Questions”

The Twins corpus is a collection of utterances spoken in interactions with two virtual characters who serve as guides at the Museum of Science in Boston. The corpus contains about 200,000 spoken utterances from museum visitors (primarily children) as well as from trained handlers who work at the museum. In addition to speech recordings, the corpus contains the outputs of speech recognition performed at the time of utterance as well as the system interpretation of the utterances. Parts of the corpus have been manually transcribed and annotated for question interpretation. The corpus has been used for improving performance of the museum characters and for a variety of research projects, such as phonetic-based Natural Language Understanding, creation of conversational characters from text resources, dialogue policy learning, and research on patterns of user interaction. It has the potential to be used for research on children’s speech and on language used when talking to a virtual human.

New Scientist Covers Study Showing Effectiveness of ICT’s Virtual Reality Exposure Therapy for PTSD

A story in New Scientist featured preliminary results of a Navy study that found that virtual reality exposure therapy, including ICT’s Virtual Iraq/Afghanistan, helped post-traumatic stress sufferers find relief from their symptoms. Robert McLay of the U.S. Naval Medical Center led a study that compared individuals treated with virtual reality exposure therapy with others treated with standard exposure therapy. The article reports that after nine weeks of treatment both groups showed a reduction in symptoms; three months after treatment, however, the improvements for the non-VR group had largely disappeared while the VR group’s improvements continued. McLay used the virtual reality treatment developed by Skip Rizzo in the MedVR Lab at ICT as part of his study and reported his findings at the annual conference of the American Psychiatric Association.

SmartBody

smartbody.ict.usc.edu

SmartBody is a character animation platform originally developed at the University of Southern California.

SmartBody provides the following capabilities in real time:

  • Locomotion (walk, jog, run, turn, strafe, jump, etc.)
  • Steering – avoiding obstacles and moving objects
  • Object manipulation – reach, grasp, touch, pick up objects
  • Lip Syncing – characters can speak with simultaneous lip-sync using text-to-speech or prerecorded audio
  • Gazing – robust gazing behavior that incorporates various parts of the body
  • Nonverbal behavior – gesturing, head nodding and shaking, eye saccades
  • Character physics – ragdolls, pose-based tracking, motion perturbations

SmartBody is written in C++ and can be incorporated into most game and simulation engines. We currently have interfaces for the following engines:

  • Unity
  • Ogre
  • Unreal
  • Panda3D
  • GameBryo

SmartBody is a Behavioral Markup Language (BML) realization engine that transforms BML behavior descriptions into realtime animations.
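For readers unfamiliar with BML, the snippet below sketches what such a behavior description can look like and how an application might hand it to a realizer. The element names follow the spirit of the published BML specification, but the exact vocabulary and transport accepted by a particular SmartBody build may differ, so treat this as a schematic example rather than verified SmartBody input.

```python
# Schematic example of a BML behavior description handed to a realizer.
# The XML below is illustrative; a given SmartBody build may expect
# additional attributes or a different delivery mechanism.

bml_request = """
<bml xmlns="http://www.bml-initiative.org/bml/bml-1.0">
  <speech id="s1" type="text/plain">Hello, welcome to the museum.</speech>
  <gesture id="g1" lexeme="BEAT" stroke="s1:start"/>
  <head id="h1" lexeme="NOD" start="s1:end"/>
</bml>
"""

def send_to_realizer(bml: str) -> None:
    """Placeholder transport: a real application would publish this message
    to the character's BML realizer (for example, over a message bus)."""
    print(bml)

send_to_realizer(bml_request)
```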

SmartBody runs on Windows, Linux and OS X, as well as on iPhone and Android devices. All source code is available for download and is licensed under the LGPL.

For questions about SmartBody and usage, please contact: Ari Shapiro, Ph.D.

David Traum: “ISO 24617-2: A semantically-based standard for dialogue annotation”

Abstract: This paper summarizes the latest, final version of ISO standard 24617-2 “Semantic annotation framework, Part 2: Dialogue acts”. Compared to the preliminary version ISO DIS 24617-2:2010, described in Bunt et al. (2010), the final version additionally includes concepts for annotating rhetorical relations between dialogue units, defines a full-blown compositional semantics for the Dialogue Act Markup Language DiAML (resulting, as a side-effect, in a different treatment of functional dependence relations among dialogue acts and feedback dependence relations); and specifies an optimally transparent XML-based reference format for the representation of DiAML annotations, based on the systematic application of the notion of ‘ideal concrete syntax’. We describe these differences and briefly discuss the design and implementation of an incremental method for dialogue act recognition, which proves the usability of the ISO standard for automatic dialogue annotation.

CBC Covers Virtual Therapy to Be Used in Canada

The Canadian Broadcasting Corporation covered Canada’s decision to begin using the ICT’s virtual reality exposure therapy for treating PTSD. Skip Rizzo, ICT’s associate director for medical virtual reality, developed the innovative therapy, which is currently being used and studied across the U.S.

“The research shows, pretty consistently over the years, that by having the person gradually imagine or be exposed in VR to events in the traumatic memories, that they’re able to process emotional memories,” said Rizzo in the story.

Paul Debevec: “From Spider-Man to Avatar, Emily and Benjamin: Achieving Photoreal Digital Actors”

Somewhere between “Final Fantasy” in 2001 and “The Curious Case of Benjamin Button” in 2008, digital actors crossed the “Uncanny Valley” from looking strangely synthetic to believably real. This talk describes some of the technological advances that have enabled this achievement. For an in-depth example, the talk describes how high-resolution face scanning, advanced character rigging, and performance-driven facial animation were combined to create “Digital Emily”, a collaboration between the USC ICT Graphics Laboratory and Image Metrics. Actress Emily O’Brien was scanned in Light Stage 5 in 33 facial poses at the resolution of skin pores and fine wrinkles. These scans were assembled into a rigged face model driven by Image Metrics’ video-based animation software, and the resulting photoreal facial animation premiered at SIGGRAPH 2008. The talk also presents techniques which may allow digital characters to leap from the movie screen and into the space around us, including a 3D teleconferencing system that uses live facial scanning and an autostereoscopic display to transmit a person’s face in 3D and make eye contact with remote collaborators.

The Huffington Post Features the ICT Graphics Lab

The Huffington Post featured Paul Debevec and the ICT Graphics Lab in a Talk Nerdy to Me segment on how to create a digital double. Debevec and his team scanned science correspondent Cara Santa Maria during her visit to our Playa Vista facilities and the realistic results can be seen at the end of the video.

Paul Debevec: “Avatar and Beyond: Lighting Hollywood’s Real and Virtual Actors”

Photoreal digital actors have become a practical reality in the last decade and are poised to revolutionize the entertainment industry. Paul Debevec from USC’s Institute for Creative Technologies will explain the technical progression and application of his lab’s LED-based “Light Stage” facial scanning systems, which have helped produce photoreal digital actors for movies such as Spider-Man 2, The Curious Case of Benjamin Button and Avatar.

Defense VIPs Visit ELITE Classroom at Ft. Benning

Recently some high-profile visitors experienced ELITE, ICT’s virtual human training system being used at Ft. Benning. Secretary of Defense Leon Panetta observed a session, and Chief of Staff of the Army General Raymond Odierno helped describe some teaching points to Soldiers. Earlier, Sergeant Major of the Army Raymond Chandler took part in a class. You can see more photos from these visits on our Facebook page.

Thomas Talbot: “What is a Virtual Patient Anyway?”

Overview: Speaking of simulation research in medicine and the role of technology standards, what is a virtual patient in the first place? A variety of virtual patient archetypes are explored, including case presentations, virtual reality and game-based virtual patients, as well as virtual standardized patients.

The Huffington Post Showcases Gunslinger, ICT’s Virtual Human Shootout

Cara Santa Maria from the Huffington Post’s Talk Nerdy to Me science series, visited ICT and experienced our Wild West mixed-reality experience Gunslinger. Santa Maria proved to be a quick study as she learned about the technology behind the demonstration, which features interaction with three different virtual characters, and also proved to be a sharp shooter, as she took on our virtual villain Rio Laine.

Virtual Human Toolkit

Download a PDF overview.

Learn more on the Virtual Human Toolkit website.

Goal
ICT has created the Virtual Human Toolkit with the goal of reducing some of the complexity inherent in creating virtual humans. Our Toolkit is an ever-growing collection of innovative technologies, fueled by basic research performed at ICT and its partners.

The Toolkit provides a solid technical foundation and modularity that allows a relatively easy way of mixing and matching Toolkit technology with a research project’s proprietary or 3rd-party software. Through this Toolkit, ICT aims to provide the virtual humans research community with a widely accepted platform on which new technologies can be built.

What Is It
The ICT Virtual Human Toolkit is a collection of modules, tools and libraries that supports the creation of virtual human conversational characters. At the core of the Toolkit lie innovative, research-driven technologies, which are combined with other software components in order to provide a complete embodied conversational agent. Since all ICT virtual human software is built on top of a common framework, as part of a modular architecture, researchers using the Toolkit can do any of the following:

  • Utilize all components or a subset thereof
  • Utilize certain components while replacing others with non-Toolkit components
  • Utilize certain components in other existing systems

The technology emphasizes natural language interaction, nonverbal behavior and visual recognition. The main modules are:

  • Non Player Character Editor (NPCEditor), a package for creating dialogue responses to inputs for one or more characters. It contains a text classifier based on cross-language relevance models that selects a character’s response based on the user’s text input, as well as an authoring interface to input and relate questions and answers, and a simple dialogue manager to control aspects of output behavior.
  • Nonverbal Behavior Generator (NVBG), a rule-based behavior planner that generates behaviors by inferring communicative functions from a surface text and selects behaviors to augment and complement the expression of those functions.
  • SmartBody is a character animation platform that provides locomotion, steering, object manipulation, lip syncing, gazing and nonverbal behavior in real time using the Behavior Markup Language (BML).
  • MultiSense, a perception framework that enables multiple sensing and understanding modules to inter-operate simultaneously, broadcasting data through the Perception Markup Language (PML). Its main use within the Toolkit is head and facial tracking through a webcam.

The target platform for the overall toolkit is Microsoft Windows, although some components are multi-platform.
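To make the division of labor among the modules listed above concrete, the flow for a single user utterance can be pictured roughly as follows. The function names and return values are invented for this sketch and are not the Toolkit’s actual API; only the module roles come from the description above.

```python
# Rough sketch of the per-utterance pipeline implied by the module list above.
# None of these functions are real Toolkit APIs; they only label the hand-offs.

def npceditor_select_response(user_text: str) -> str:
    """Classify the input and pick an authored answer (NPCEditor's role)."""
    return "The robotics exhibit is just down the hall."

def nvbg_plan_behavior(response_text: str) -> str:
    """Infer communicative functions from the text and emit BML (NVBG's role)."""
    return f'<bml><speech id="s1">{response_text}</speech><head lexeme="NOD"/></bml>'

def smartbody_realize(bml: str) -> None:
    """Turn the BML into synchronized speech and animation (SmartBody's role)."""
    print("Realizing:", bml)

# MultiSense would run in parallel, streaming head and face tracking (PML)
# back into the system; it is omitted from this single-utterance sketch.
smartbody_realize(nvbg_plan_behavior(npceditor_select_response("Where can I learn about robots?")))
```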

Low-Cost Immersive Viewer

Download a PDF overview.

The Mixed Reality Lab (MxR) at the USC Institute for Creative Technologies explores techniques and technologies to improve the fluency of human-computer interactions and create visceral synthetic experiences.

Mark Bolas, the MxR Lab’s director, is also a professor in the Interactive Media Division at the USC School of Cinematic Arts. His research and prototypes focus on immersive systems for education, training and entertainment that incorporate both real and virtual elements. Projects push the boundaries of immersive experience design through virtual reality and alternative controllers.

MxR’s suite of low-cost immersive viewers includes the Socket HMD, the Socket Mobile (FOV2GO) and the iNVerse immersive reader. These viewers enable the creation of 3-D, immersive virtual and augmented reality experiences using smartphones and tablets. These low-cost, lightweight systems can be used to create portable virtual reality applications for training, education, health and fitness, entertainment and more. These software and hardware platforms are part of the open-source design philosophy that helped inform the design of the new Oculus Rift HMD.

Naval Service Training Command Article about INOTS

The Naval Service Training Command published an article about ICT’s Immersive Naval Officer Training System (INOTS). Read the full article here.

ICT Virtual Humans Promote Science Education at USA Science and Engineering Festival

ICT’s virtual human museum guides, along with several of our real researchers, made the trip to Washington D.C. to teach kids about science as part of the National Science Foundation booth at the USA Science and Engineering Festival. ICT’s virtual humans Coach Mike and Ada and Grace, who were all created in collaboration with the Boston Museum of Science as part of an NSF-funded STEM education effort, were selected by the NSF to be showcased in their booth at the weekend event promoting science to school children.

Priti Aggarwal: “Ada and Grace: Engaging virtual human museum guides”

Presenting the Twins and Coach Mike @ USA Science & Engineering Festival Expo with NSF

Toronto Star Features ICT’s SimCoach Project

An article in the Toronto Star features ICT’s SimCoach project and how it employs virtual humans to help address PTSD and other mental health issues.

Skip Rizzo, ICT’s associate director for medical virtual reality, emphasized in the story that SimCoach is not designed to replace doctors or psychologists who treat PTSD.

“The purpose of SimCoach is to give a safe place for people to find information, take some light questions if they want. They may help them get insight into where they stand in terms of PTSD and depression,” said Rizzo.

Dismounted Interactive Counter-IED Environment for Training (DICE-T)

Download a PDF overview.

The University of Southern California Institute for Creative Technologies supports the mission of the Joint Improvised Explosive Device Organization (JIEDDO) to develop innovative methods for training threat assessment and how to counter threats during a dismounted patrol.

The Problem 

Threat assessment during dismounted patrol in the contemporary operational environment (COE) is a complex task that is typically mastered only through hours of on-the-job experience. While real-world dismounted patrols are vital to attacking insurgent networks and supporting friendly ones, learning how to identify and counter threats in that environment is dangerous and potentially deadly. Current training for threat assessment may include slide shows that introduce conceptual knowledge, such as the characteristics of a vulnerable point, why certain areas are vulnerable, and what ground signs might look like. Mission rehearsal exercises (MRXs) may review procedural knowledge, such as how to use specific equipment or how to implement proper spacing techniques. What is missing, however, is the transition from the classroom to live training in a controlled practice environment that enables real-time assessment of mastery of critical concepts and procedures.

Addressing the Training Need

The Dismounted Interactive Counter-IED Environment for Training (DICE-T) is a prototype training application designed to introduce, reinforce and assess dismounted training concepts and principles in an engaging and immersive environment before live training and deployment. The DICE-T experience uses a combination of narrative video and immersive gameplay to deliver over-arching “first principles” related to threat assessment.

The current prototype has three different exercises or “missions.” Each mission is divided into three phases that emphasize critical components of dismounted patrol: planning a route, executing a patrol and countering threats, and mission debrief/AAR. The game scenarios represent real-world dismounted patrol situations, and trainees receive a video FRAGO describing the current threats in the area. As training progresses, difficulty and complexity of the missions escalate as more information is provided in the FRAGO. Each mission begins with a video that introduces threat-assessment concepts and highlights specific lessons for each phase.

During the initial planning phase, trainees learn to identify specific vulnerable points and vulnerable areas (VPs/VAs) on a two-dimensional map. They learn what experts are thinking when planning a route and then, using touchscreen tablets, are required to mark VPs/VAs on the game map. Trainees then use FRAGO information to inform their planning and draw a patrol route.

The second phase emphasizes the proper execution of a patrol. In the video, the trainees learn how to identify threats from a first-person perspective. During their patrols they must continue to assess and identify threats, and also explain why they are dangerous along with how to counter threats in real time. The associated video content helps the trainees learn to be aware of ground signs and how to scan their environment for anomalies.

In the third phase the trainees review their responses to the threats. The after action review (AAR) describes the top five VPs/VAs identified or missed, other VPs/VAs identified, and provides feedback on trainees’ explanations of the dangers posed. Individuals can compare their performance during a group AAR at the end of the training session. Thus, DICE-T offers the unique capability of assessing trainee knowledge of and ability to identify and counter threats in real time.

Instructional Design and Assessment

DICE-T’s instructional design and assessment are informed by cognitive-task-analysis interviews of subject-matter experts, as well as observations of Counter-IED training and training support materials. One of the goals for DICE-T is to help novices think like experts before they are deployed. Novices experience a high cognitive load during classroom instruction when presented with the fire-hose of information typical of military training slide shows. When attempting to apply that conceptual knowledge in the field, they may have trouble remembering exactly what to do, when they should do it, or especially why. DICE-T’s instructional design thus incorporates the adult learning model referenced in Army Learning Concept 2015 (ALC 2015).

People learn how to use information more efficiently and remember that information when given real-world problems to solve and when they are provided with scaffolded learning opportunities. DICE-T does exactly that: trainees use what they have learned in the classroom, think about what they would do during live training, and get a deeper understanding of the underlying principles of dismounted counter-IED behavior. Using evidence-based practices and assessment techniques for adult instruction, DICE-T provides an engaging element to traditional classroom instruction and better prepares trainees for live exercises. DICE-T can thus serve as an effective “crawl” phase of training ahead of live exercises.

Development and Deployment Cycle

The DICE-T v0.5 system is housed in a self-contained 45′ ISO container with an on-board generator. The trailer contains three training kiosks, each with four individual stations. Total training time is approximately 45-60 minutes. The system uses the Unity game engine and runs on Android tablets. The tablets communicate with each other and with other system components via standard PCs. The research and development is being performed in a spiral process, with the first v0.5 prototype delivered in December 2011. This first prototype functions in single-player mode and tracks all scores for AAR and assessment. The current v1.0 design includes multi-player modes with “blue vs. world” and “blue vs. red” game play, and could run on tablet or laptop PCs, either within a trailer or as a separate modular system. The ICT will investigate possible communication/integration with other game and narrative-based training systems.

For more information, please contact Dr. Todd Richmond.

The Huffington Post Features Mark Bolas and ICT’s Mixed Reality Lab

Huffington Post science reporter Cara Santa Maria visited ICT’s Mixed Reality Lab for her “Talk Nerdy to Me” video series and got a glimpse of the future of virtual reality. She tried out the lab’s Stretching Space demo, which uses perceptual tricks to make a small space appear larger in the virtual world. Next she set her sights on the lab’s FOV2GO, a portable, paper prototype that turns a smartphone into a 3-D viewer.

Defense News Features 3-D Virtual Reality Viewer for Smartphones from ICT’s MxR Lab

An article in Defense News covers the FOV2GO, the do-it-yourself 3-D viewer developed at ICT’s Mixed Reality Lab, along with collaborators from the Interactive Media Division at the USC School of Cinematic Arts.

The article describes the FOV2GO, which can be created from cardboard or foam core and assembled in minutes, as employing a technique similar to that applied by photo interpreters who analyzed aerial photographs during World War II. FOV2GO simply places left-eye and right-eye views side by side on a smartphone screen and then uses two simple magnifying lenses to view them, states the story.

“The real breakthrough was the understanding that, in fact, this should not cost anything,” said Mark Bolas, ICT’s associate director for mixed reality research and development and director of the MxR Lab. “It is hard to accept that what was once a $100,000 system is now effectively free and can fit in one’s pocket.”
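The side-by-side technique described in the article reduces, in software terms, to rendering the scene from two horizontally offset viewpoints into the two halves of the phone screen, with the lenses doing the rest. The sketch below illustrates only that idea; the interpupillary distance and the placeholder viewport helpers are assumptions for this example, not FOV2GO source code.

```python
# Minimal sketch of side-by-side stereo: two eye positions, two half-screen
# viewports. Engine-specific rendering calls are intentionally left out.

IPD_METERS = 0.064  # a typical interpupillary distance (assumed value)

def eye_positions(head):
    """Offset the head position left and right by half the IPD."""
    x, y, z = head
    half = IPD_METERS / 2.0
    return (x - half, y, z), (x + half, y, z)

def side_by_side_viewports(screen_w, screen_h):
    """Left half of the screen for the left eye, right half for the right."""
    return (0, 0, screen_w // 2, screen_h), (screen_w // 2, 0, screen_w // 2, screen_h)

left_eye, right_eye = eye_positions((0.0, 1.6, 0.0))
left_vp, right_vp = side_by_side_viewports(1280, 720)
print(left_eye, left_vp)
print(right_eye, right_vp)
```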

Dava Casoni, Andres Chan, Cheryl Birch: “A-21: Preparing for Change; How the proposed revisions to A-21 impact you”

The OMB is rewriting Circular A-21 in an attempt to decrease administrative burdens on universities. The presentation covers the areas OMB is addressing, comments by universities, and, if available, the released draft of the rewrite.

Jacki Morie: “Social Networks and Virtual Worlds for Building Team Resilience”

The phenomenal success of Internet-enabled social networks can be leveraged for many types of training and behavioral improvement. Social networks especially support maintaining group connectedness and can be used as tools for rapid social change and mobilization (as was witnessed recently in Egypt). Their day-to-day usage can be adapted to many types of training programs to enable both ongoing team coherence and team resiliency. A distinct implementation of Internet-enabled social network technology, that of Virtual Worlds (VWs), is poised to become one of the dominant models, as VWs provide strong agency and extensive social immersion through the use of avatars (3D embodied representations) and simulated environments that afford a greater sense of being socially connected. Children, especially, participate in virtual worlds to a large degree, and will be familiar with them as they grow. To some degree, then, it can be expected that Virtual Worlds may supplant the more traditional spaces, like classrooms, that we now use to participate in many activities. In this paper we report on current uses of these digital spaces, especially as these pertain to forms of resilience training for diverse groups, and explore their potential in bringing more effective resilience training to a wider audience in the future.

Paul Rosenbloom: “Towards a 50 msec Cognitive Cycle in a Graphical Architecture”

Achieving a 50 msec cognitive cycle in any sufficiently sophisticated cognitive architecture can be a significant challenge. Here an investigation is begun into how to do this within a recently developed graphical architecture that is based on factor graphs (with the summary product algorithm) and piecewise continuous functions. Results are presented from three optimizations that leverage the structure of factor graphs to reduce the number of message cycles required per cognitive cycle.
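For context on the summary product (sum-product) algorithm mentioned here, the fragment below runs a single message-passing step on a trivial two-variable factor graph. It is a generic textbook illustration, not code or data from the graphical architecture itself.

```python
# Generic sum-product illustration on the chain A -- f(A,B) -- B, binary variables.

p_a = [0.6, 0.4]          # prior factor over A
f_ab = [[0.9, 0.1],       # factor f(A, B)
        [0.2, 0.8]]

# Message from factor f to variable B: sum over A of prior(A) * f(A, B).
msg_f_to_b = [sum(p_a[a] * f_ab[a][b] for a in range(2)) for b in range(2)]

# With no other factors attached to B, its marginal is the normalized message.
z = sum(msg_f_to_b)
print([m / z for m in msg_f_to_b])   # [0.62, 0.38]
```

The optimizations described in the paper work at this level, reducing how many such message computations a cognitive cycle requires.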

Jonathan Gratch: “Why people still matter: Modeling human behavioral processes in agents”

At its heart, research in autonomous agents and multiagent systems is multi-disciplinary. Rooted in artificial intelligence, the AAMAS community draws heavily from research in economics and rational choices, with strong influences from cognitive and social psychology. Our research over the last ten years has struggled with how to coalesce these different research areas and methodological traditions into a single coherent research program. In this talk we argue that these influences can realize a synergy that is not only desirable but essential for advancing the field. We will discuss three interrelated rationales for joining these research traditions. First, there is increasing demand to use computational methods to simulate human emotional and social processes. Second, the rise of human-agent interaction requires methods to better understand and predict human behavior with the aim of making these interactions more effective and “human centric.” Finally, a deeper understanding of how humans solve the challenges of acting and coordinating in complex dynamic environments can lend insight into expanding narrow conceptions of rational decision-making. We will illustrate how computational models of human processes can advance each of these rationales through several research projects in our laboratory.

ICT Developing Next-Generation Artificial Intelligence Tools for Mental Health

Effort is part of a new DoD initiative to use telemedicine and virtual humans to address barriers to care and to provide better care for service members who seek treatment for psychological issues, including post-traumatic stress, depression and suicide risk

Press Contact: Orli Belman
belman@ict.usc.edu

Imagine computer systems that can detect depression by analyzing facial expressions, body gestures and speech.

Such sensitive software has the potential to assist healthcare workers providing care over remote telemedicine applications, which cannot convey the subtle communication cues usually detected in face-to-face interactions. The software can also deliver support in the form of an interactive virtual provider on a laptop or computer that responds to signals from the patient transmitted via cameras and sensors.

At the University of Southern California Institute for Creative Technologies (ICT) experts in computer science and psychology are making these perceptive programs a reality in order to better identify and treat service members and veterans suffering from psychological health issues, including PTSD, depression and suicidal ideation.

Funded by DARPA and in collaboration with scientists at MIT-spinoff Cogito Health Inc., a team of ICT researchers recognized for developing trailblazing technologies used to treat psychological trauma and track human behaviors is developing sensing systems that can capture and comprehend communication cues and then use that information to better understand people’s emotional states and how to help them.

“Study after study shows increasing incidence of PTSD and other psychological issues among today’s returning service members,” said Albert ‘Skip’ Rizzo, an ICT psychologist who directs the MedVR Lab and is co-leading this research effort at the multidisciplinary institute. “Many people suffer in silence because they fear the stigma that may come from seeking help through traditional channels or because they simply don’t know where to turn. Computer-mediated care offers anonymity and access that may help reach these service men and women who need it most.”

Rizzo’s other technology-for-health work includes developing Kinect-based computer games for motor and brain injury rehabilitation and a virtual reality exposure therapy system for treating PTSD that is being used in close to 60 clinical sites across the country.

ICT’s pioneering efforts on the DARPA Detection and Computational Analysis of Psychological Signals (DCAPS) project encompass advances in the artificial intelligence fields of machine learning, natural language processing and computer vision. The project will also bring these techniques to the next level by defining a framework capable of analyzing language, gestures and social signals to detect distress cues.

The technologies will be integrated with existing ICT virtual humans, including the current SimCoach prototype that provides resources and support based on what it learns about users through conversations over the internet. DCAPS is not aimed at providing an exact diagnosis, but at providing a general metric of psychological health.

Privacy and security are of paramount concern to the DCAPS program. Program data will be collected with the informed consent of the individuals involved and stored in a secure, private data-sharing framework. DCAPS will develop, in conjunction with leading privacy experts, a novel trust framework such as that envisioned in the National Strategy for Trusted Identities in Cyberspace. This trust framework will allow warfighters to control and safely share their “honest signals” data.

“This project paves the way for a new generation of interactive virtual human-based systems that can recognize audio-visual signals correlated with the psychological state of the user, such as levels of anxiety, understanding and engagement,” said Louis-Philippe Morency, who with Rizzo heads up the project research team, directs the MultiComp Lab at ICT and is a research assistant professor at the USC Viterbi School of Engineering. “We are getting one step closer to creating computers that can interact in more human-like ways. I see a bright future where these technologies are applied to new medical and educational applications, making knowledge and social services more accessible to everyone.”

About the USC Institute for Creative Technologies
http://ict.usc.edu/
At the University of Southern California Institute for Creative Technologies, high‐tech tools and classic storytelling come together to pioneer new ways to teach and train. Historically, simulations focus on drills and mechanics. What sets ICT apart is a focus on human interactions and emotions. ICT is a world leader in developing virtual humans who think and behave like real people and in creating immersive environments that experientially transport participants to other places. ICT research and technologies include virtual reality applications for mental health, video games for U.S. soldiers to hone negotiation and cultural awareness skills and virtual human museum guides who teach science concepts to young people.

About Cogito
Cogito Corporation, headquartered in Charlestown, MA, serves organizations responsible for improving the health and well‐being of populations by delivering real time call center and mobile based psychological sensing systems that improve customer and patient engagement and detect individual risk of behavioral health problems in populations. Cogito’s Social Signal Processing (SSP) Platform technology assesses “honest signals” or unconscious cues in natural speech and social behavior to support more timely intervention for psychological issues, as well as to improve health engagement and outcomes in people coping with psychological disorders, chronic illness and disability. The company also conducts fundamental research to investigate additional applications of social signaling and human behavior analysis. Cogito was founded based on conceptual frameworks developed at the Human Dynamics Lab at MIT. For more information, please visit www.cogitocorp.com.

___

The views expressed are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government.

19095 ‐ Approved for Public Release, Distribution Unlimited.

Read Morteza Dehghani’s Op-Ed: Sacred Values and the Development of Nuclear Weapons

Check out this insightful op-ed from our own Morteza Dehghani and his Northwestern colleague Sonya Sachdeva published on Aljazeera.com. The piece, based on the pair’s research on sacred values and the role they play in socio-cultural conflicts, can also be read on Think USC, USC’s website highlighting the opinion and analysis of its professors.

Paul Rosenbloom: “Towards a 50 msec Cognitive Cycle in a Graphical Architecture”

Paul Rosenbloom’s talk at ICCM 2012 in Berlin is called “Towards a 50 msec Cognitive Cycle in a Graphical Architecture.”

Paul Rosenbloom

Expertise

  • artificial intelligence
  • cognitive science
  • machine learning
  • intelligent agents
  • cognitive architecture
  • military simulation
  • computer science

Additional Information

  • Co-author, Universal Subgoaling and Chunking: The Automatic Generation and Learning of Goal Hierarchies (1986)
  • Co-editor, The Soar Papers: Research on Integrated Intelligence (1993)
  • Fellow, American Association for Artificial Intelligence


Todd Richmond

Expertise

  • communication
  • technology
  • new media
  • education


Arno Hartholt

Expertise

  • virtual human research and technology
  • virtual characters
  • artificial intelligence
  • game development
  • advanced user interfaces
  • natural language understanding and generation

Additional Information

  • Project Leader, Integrated Virtual Humans group
  • Project Leader, Institute for Creative Technologies Art Group
  • Earned bachelor’s and master’s degrees in computer science at University of Twente in the Netherlands
  • Received Fortis IT Student scholarship

Foreign Languages

  • Dutch


Andrew Gordon

Expertise

  • social media
  • Weblogs
  • personal stories and storytelling
  • artificial intelligence
  • common sense reasoning and psychology
  • natural language processing
  • computational linguistics

Additional Information

  • Author, Strategy Representation: An Analysis of Planning Knowledge (2004)


Jonathan Gratch

Expertise

  • emotion
  • virtual reality
  • virtual humans
  • Second Life
  • modeling and simulation
  • affective computing
  • embodied conversational agents
  • media psychology
  • computational models
  • artificial intelligence
  • virtual worlds

Additional Information

  • President-Elect, HUMAINE Association for Research on Emotion in Human-Computer Interaction
  • Member, International Society for Research on Emotion


Albert “Skip” Rizzo

Expertise

  • virtual reality and mental health
  • neuropsychological assessment, cognitive rehabilitation, PTSD/anxiety disorders, motor rehabilitation, pain management
  • spatial ability, attention, memory
  • memory enhancement for older adults
  • ADHD assessment
  • gender and hormonal factors that influence aging, cognition and dementia
  • psychology of older adults
  • memory processes and aging
  • perception and aging
  • cognition and aging
  • cognitive psychology
  • cognitive-behavioral psychology
  • clinical psychology

Additional Information

  • Director, USC Institute for Creative Technologies VRPSYCH Lab
  • Developer of the Memory Enhancement Seminars for Seniors (MESS), USC Davis School of Gerontology
  • Creator of the Virtual Iraq/Afghanistan PTSD Exposure Therapy System
  • CyberPsychology and Behavior – Associate Editor
  • Presence: Teleoperators and Virtual Environments – Senior Editor
  • Journal of Computer Animation and Virtual Worlds – Editorial Board Member
  • Media Psychology – Editorial Board Member
  • International Journal of Virtual Reality – Associate Editor


William Swartout

Expertise

  • artificial intelligence
  • virtual reality and virtual humans
  • human-computer interaction
  • immersive environments
  • modeling and simulation
  • serious games, and video games for training
  • computer-aided training and education
  • natural language processing
  • autonomous agents
  • knowledge-based systems
  • science, technology, engineering and mathematics

Additional Information

  • Fellow, American Association for Artificial Intelligence
  • Recipient, Robert Engelmore Award from Association for the Advancement of Artificial Intelligence, for seminal contributions to knowledge-based systems and explanation, groundbreaking research on virtual human technologies and their applications, and outstanding service to the artificial intelligence community
  • Past member: Air Force Scientific Advisory Board, Board on Army Science and Technology of the National Academies


Narrative

Download a PDF overview.

The Narrative Group at ICT investigates storytelling and the human mind, exploring how people experience, interpret, and narrate the events in their lives. We pursue this research goal using diverse interdisciplinary methods, including the large-scale analysis of narrative in social media, the logical formalization of commonsense knowledge, and the creation of story-based learning environments.

Narrative analysis
The rise of social media has created new opportunities for an empirical science of storytelling. Over the last few years, we have collected tens of millions of personal stories from Internet weblogs for use in a wide variety of analyses. We have studied the health information needs expressed by parents of children with cancer, the gender differences in the way that people describe strokes and heart attacks, and cross-cultural differences in the way that people frame the events in their lives in terms of sacred values.

Narrative intelligence
A central engineering challenge in the creation of human-like artificial intelligence is to enable commonsense reasoning about the everyday world. At ICT, we pursue two opposite approaches to the problem of acquiring commonsense knowledge. On one hand, we employ traditional knowledge engineering methods to author formal commonsense content theories in first-order predicate logic. On the other hand, we attempt to harvest commonsense knowledge directly from the millions of personal stories that people post to their Internet weblogs.
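As a purely illustrative example of the kind of axiom such content theories contain (this particular formula is invented for exposition and is not drawn from the group's theories), a commonsense rule about intention might be written in first-order form as:

\[
\forall a\, \forall g\, \forall p\; \big( \mathit{goal}(a, g) \wedge \mathit{believes}(a, \mathit{enables}(p, g)) \rightarrow \mathit{intends}(a, p) \big)
\]

read as: if an agent has a goal and believes that a plan enables that goal, then the agent comes to intend that plan.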

Story-based learning environments
Immersive training simulations provide environments where learners can acquire and practice cognitive skills through guided, interactive experiences. Crafting effective simulations is still more of an art than a science, requiring the collaborative efforts of writers, experts, instructors, technologists, and learning science researchers. We support these creative efforts through the development of authoring tools and methodologies, helping teams articulate instructional objectives and construct scenario content through the analysis of the real-world experiences narrated by practitioners.

Paul Rosenbloom: “Graphical Models for Integrated Intelligent Robot Architectures”

The theoretically elegant yet broadly functional capability of graphical models shows intriguing potential to span in a uniform manner perception, cognition and action; and thus to ultimately yield simpler yet more powerful integrated architectures for intelligent robots and other comparable systems. This position paper explores this potential, with initial support from an effort underway to develop a graphical architecture that is based on factor graphs (with piecewise continuous functions).

Disaster Response Training for Incident Command: TELL

The DHS TELL program was designed to be a federated set of systems to facilitate the training and exercise of incident commanders. Based upon the National Incident Management System (NIMS), TELL was envisioned as a key component in training managers of large-scale disasters at the incident command level that would use simulation to provide more efficient and effective training and allow incident commanders to experience situations and scenarios before they happen in the “real” world.

ICT was involved in a two-pronged approach toward this overarching system. The first prong was collaboration with Lawrence Livermore National Laboratory (LLNL) and Sandia National Laboratories on their existing simulation-based training system (based on JCATS, a military computational model/engine). In this project ICT used its expertise in storytelling and narrative, along with visualization and production, to increase the immersive and interactive value of the LLNL system. The second area focused on core research in artificial intelligence (AI) to create examples of possible future technology applications within TELL. In addition, ICT also helped provide a broader look at possible ways to develop federated systems and move toward creating proof-of-concept examples of these connections.

Goals

  • Create compelling and accurate training opportunities for incident command staff
  • Create new artificial intelligence models for simulations
  • Creatively use existing media capabilities in computer simulations

External Collaborators

  • Lawrence Livermore National Laboratories
  • Paul Carpenter
  • Sandia National Laboratories

American Public Media Program The Story Features Sounds of Virtual Iraq on NPR

The American Public Media radio program The Story interviewed Martin Daughtry, a New York University ethnomusicologist who traveled to Iraq in the waning days of Operation Iraqi Freedom to record sounds of weapons and helicopters. Daughtry’s sounds will be compiled in a forthcoming collection and used as part of ICT’s virtual reality exposure therapy program to help veterans cope with post-traumatic stress disorder.

ICT’s Morteza Dehghani Wins AFOSR Young Investigator Program Award

ICT Research Assistant Professor Morteza Dehghani, was awarded a grant from the Air Force’s Young Investigator Research Program (YIP). Dehghani, who is also on the faculty of the computer science department of the USC Viterbi School of Engineering, won his award to examine the role of religiosity in moral cognition, specifically in the formation of sacred values, and to research computational models for analyzing sacred rhetoric and its consequential emotions.

The objective of the YIP is to foster creative basic research in science and engineering, enhance early career development of outstanding young investigators, and increase opportunities for the young investigators to recognize the Air Force mission and the related challenges in science and engineering. This highly selective award is given to researchers who received Ph.D. or equivalent degrees in the last five years and show exceptional ability and promise for conducting basic research.

Read the USC News story here.

ICT’s Mixed Reality Lab Wins Awards at 2012 IEEE VR Conference

The ICT Mixed Reality Lab won the best demo award at the recent IEEE VR Conference, where the lab introduced the new FOV2GO, a fold-out 3-D viewer for the creation of immersive virtual reality experiences using smartphones, and also demonstrated the group’s gesture work using their FAAST toolkit.

Evan Suma earned an honorable mention for his paper: Impossible Spaces: Maximizing natural walking in virtual environments with self-overlapping architecture (Evan A. Suma and Zachary Lipps and Samantha Finkelstein and David M. Krum and Mark Bolas).

The team’s new post-doc, Adam Jones, also received an honorable mention for the poster: Depth Judgments by Reaching and Matching in Near-Field Augmented Reality (Gurjot Singh, J. Edward Swan II, J. Adam Jones and Stephen R. Ellis).

Paul Rosenbloom: “Towards Functionally Elegant, Grand Unified Architectures”

When developing cognitive architectures, the ultimate goal is typically a unified theory of intelligent behavior, with the working focus then being on integrating across the capabilities implicated within central cognition, and the result being a unified architecture for cognition. What can be called a grand unified architecture sets the bar higher, striving to also include the key non-cognitive aspects of intelligent behavior, such as perception, motor control, personality, motivation and affect. Such architectures can further be considered functionally elegant if they provide the requisite breadth of functionality in a simple and theoretically elegant manner, yielding a form of cognitive Newton’s laws that provides broad coverage from interactions among a small set of general principles/mechanisms. The pursuit of functionally elegant, grand unified architectures provides a challenging research path, yet one that points the way towards rapid progress beyond today’s state of the art, even within the more traditional cognitive focus; and which should also support both deep science and useful systems. I am currently approaching this goal by rethinking architectures from the ground up, leveraging the interactions between a pair of very general mechanisms – graphical models (factor graphs, in particular) and piecewise continuous multivariate functions – to yield a parameterized space of state-of-the-art capabilities over the processing of symbols, probabilities and signals. The availability of this broad parameterized space promises to accelerate the evolution of cognitive architectures by facilitating the exploration of a wider range of the requisite capabilities and their variations; and without the need to explicitly implement a whole new module for each. Work to date – much of which will be summarized here – demonstrates that within the resulting space can be found: standard flavors of long-term memory, such as a procedural rule-based memory and declarative semantic and episodic memories, plus other variations and blends; forms of knowledge-based, decision-theoretic and social problem solving; perception and mental imagery; and key bits of language processing. Much more is still required on many of these topics, and additional capabilities must also be added, but the already proven applicability of graphical models to many of these problems shines a bright light on the path towards their rapid incorporation into such a grand synthesis. It also may help understand other topics – such as personality, motivation and affect – that have not previously been investigated via these kinds of techniques. For some capabilities – such as learning – more principles/mechanisms will likely be required, but functional elegance still looks to be within reach, with the inclusion of only a small number of additional general principles/mechanisms.

Jacki Morie: “Using Virtual Worlds as an Advanced Form of Telehealth Care”

Jacki Morie will be reporting on how we are making advances in using Virtual Worlds as advanced telehealth care through the TOPSS-VW Project at ICT.

Thomas Talbot: “How Interactive Agents Can Benefit Medical Devices”

Abstract: The Department of Defense has made a considerable investment in virtual human technology. Recent advancements with interactive artificial intelligence (AI) agents have created virtual foreign negotiation foes, leadership training challenges, immersive VR therapy and virtual patients. This new generation of virtual humans understands what you are saying, reacts emotionally and communicates through verbal & non-verbal means. Current research is advancing this technology to read human user arousal, physical and emotional states. The next tranche of innovation will embed virtual human AI into training hardware and medical devices. This presentation explains virtual human technology, shares current applications and explores a future where we have conversations and even rapport relationships with our medical devices.

Andrew W. Feng, Yuyu Xu, Ari Shapiro: “An Example-Based Motion Synthesis Technique for Locomotion and Object Manipulation”

We synthesize natural-looking locomotion, reaching and grasping for a virtual character in order to accomplish a wide range of movement and manipulation tasks in real time. Our virtual characters can move while avoiding obstacles, as well as manipulate arbitrarily shaped objects, regardless of height, location or placement in a virtual environment. Our characters can touch, reach and grasp objects while maintaining a high quality appearance. We demonstrate a system that combines these skills in an interactive setting suitable for interactive games and simulations.

Virtual Reality Cognitive Performance Assessment Test (VRCPAT)

Download a PDF overview.

ICT has developed an adaptive virtual environment for assessment and rehabilitation of neurocognitive and affective functioning. This project brings together a team of researchers to incorporate cutting-edge neuropsychological and psychological assessment into state of the art interactive/adaptive virtual Iraqi/Afghani scenarios (City, Checkpoint, HMMWV).

The Army’s Needs: An Adaptive VRCPAT based upon individual Soldier differences can be used to greatly enhance assessment and training.

  1. Assessing Soldiers’ performance within VRCPAT allows one to establish a baseline that reflects individual differences.
  2. Neurocognitive and psychophysiological profile data may be used for real-time adaptation of the VRCPAT.
  3. Evolution of these profiles developed for use in VRCPAT could lead to direct training of military operations in the real world.

How ICT Met Those Needs: Findings from our research have provided the military with the following.

  1. A neurocognitive and psychophysiological interface, modeled on trainees interacting in a virtual environment that mimics Iraqi and Afghan settings, for modeling a trainee’s adaptive responses to environmental situations.
  2. A system for military trainers to develop more reliable and valid measures of training performance.
  3. Civilian dual-use capability in conditions involving psychophysiological correlates to neurocognitive function and emotion regulation in persons immersed within a virtual environment.

Future
ICT is extending the VRCPAT findings by examining performance not simply of a single user, but of teams of Soldiers.

Facts and Figures

  • VRCPAT is being used to run subjects at Tripler Army Medical Center, Ft. Lewis, Madigan Army Medical Center, West Point, USC and UCSD.
  • VRCPAT has been used in studies with over 400 subjects, including both Soldiers and civilians.

External Collaborators

  • Kaleb McDowell, PhD (U.S. Army Research Laboratory)
  • Kelvin Oie, PhD (U.S. Army Research Laboratory)
  • Scott Kerick, PhD (U.S. Army Research Laboratory)
  • Greg Reger (Madigan Army Medical Center)
  • Mike Dawson, PhD (USC Psychology)
  • Shri Narayanan, PhD (USC Viterbi)
  • Kirby Gilliland, PhD (C-SHOP; ANAM)
  • Robert Schlegel, PhD (C-SHOP; ANAM)

Stretching Space: Exploiting Change Blindness for Redirected Walking

Download a PDF overview.
Virtual reality training systems do not allow users to walk through large areas due to size limitations in the physical space. Change blindness redirection is a novel technique for enabling real walking through an immersive virtual environment that is considerably larger than the available physical workspace by subtly manipulating the environment structure behind the user’s back. This approach improves on previous redirection techniques, as it does not introduce any visual-vestibular conflicts from manipulating the mapping between physical and virtual motions, nor does it require breaking presence to stop and explicitly reorient the user. We conducted two user studies to evaluate the effectiveness of the change blindness illusion when exploring a virtual office building that was an order of magnitude larger than the physical walking space. Only one out of 77 participants across both studies definitively noticed that a scene change had occurred, suggesting that change blindness redirection provides a remarkably compelling illusion. Perhaps more significant is that despite the dynamically changing environment, participants were able to draw coherent sketch maps of the environment structure, and pointing task results indicated that they were able to maintain their spatial orientation within the virtual world.

In the above scenario, users explore an Afghan village encompassing over 3,000 square feet of virtual space. The blue rectangle indicates the dimensions of the physical walking space. As the user searches through each building in the village for a stash of hidden weapons, we apply manipulations to the environment structure behind his back, allowing him to walk through the entire virtual environment while staying within the boundaries of the physical walking space. This creates the effect of sliding the tracking space left and right, yielding a virtual corridor (the red dashed rectangle) that is nearly limitless.
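The core of the redirection technique can be pictured as a simple visibility test: a scene element is only relocated while it lies well outside the user’s field of view. The sketch below is a minimal illustration in Python; the object representation, angles and safety margin are assumptions for this example, not the study’s actual implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class SceneObject:
    position: tuple  # (x, y, z); hypothetical stand-in for an engine node

def angle_deg(view_dir, to_target):
    """Unsigned angle in degrees between two 3-D vectors."""
    dot = sum(a * b for a, b in zip(view_dir, to_target))
    norm = math.dist((0, 0, 0), view_dir) * math.dist((0, 0, 0), to_target)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def maybe_relocate(user_pos, view_dir, doorway, new_position,
                   fov_deg=60.0, margin=20.0):
    """Move a doorway only while it is well outside the user's view frustum."""
    to_door = tuple(d - u for d, u in zip(doorway.position, user_pos))
    if angle_deg(view_dir, to_door) > fov_deg / 2.0 + margin:
        doorway.position = new_position   # the change the user should never notice
        return True
    return False

# Example: the doorway sits behind the user, so the swap is allowed.
door = SceneObject(position=(0.0, -5.0, 0.0))
moved = maybe_relocate(user_pos=(0.0, 0.0, 0.0), view_dir=(0.0, 1.0, 0.0),
                       doorway=door, new_position=(3.0, -5.0, 0.0))
print(moved, door.position)
```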

References
E. Suma, S. Clark, S. Finkelstein, and Z. Wartell, “Exploiting Change Blindness to Expand Walkable Space in a Virtual Environment,” IEEE Virtual Reality 2010, pp. 305-306.

External Collaborators
Mary Whitton (UNC)

In the News
Huffington Post science reporter Cara Santa Maria visited ICT’s Mixed Reality Lab for her “Talk Nerdy to Me” video series. Learn more.

Sensory Environments Evaluation

The Sensory Environments Evaluation (SEE) project develops methodologies for creating and utilizing multimodal, emotionally affective virtual environments to provide more effective training. Effective training means training that results in a high degree of initial learning and subsequent retention of the lessons learned. Retention is particularly important for military training because lessons learned often fade over time requiring expensive retraining. Humans remember events longer if they have an emotional connection to the event. The SEE project capitalizes on this fact by eliciting degrees of emotional responses from participants during a scenario using integrated sensory cues (visual, aural, haptic and olfactory) combined with current findings from the neurobiological, cognitive and psychological fields. Rather than a focus on photo-realism, this methodology of emotional connection results in a simulation that elicits the gestalt sense that it “feels real.” SEE’s ongoing evaluation studies test the effectiveness of the sensory modalities and the emotional response of participants through physiological monitoring. Analysis of data to date has shown that high arousal states in the virtual environment equate to increased retention, and that participants with high first person shooter (FPS) game skills may need enhanced stimulation to achieve the same amount of arousal/retention as non-FPS game players.

Contact: Jacquelyn Ford Morie

Self-Directed Learning Internet Module — Every Soldier a Sensor Simulation

The Self-Directed Learning Internet Module–Every Soldier a Sensor Simulation (SLIM-ES3) is a web-delivered and web-enabled combat patrol training tool for US ground forces, providing practice in Active Surveillance, Threat Indicator Identification and Information Operations. Developed with the cooperation of the US Army’s Office of the Deputy Chief of Staff, Intelligence G2, SLIM-ES3 is currently in use as a component of basic training instruction with the First Combat Training Brigade, Fort Jackson and in pre-deployment training for soldiers of the 10th Mountain Division, Fort Drum, New York. The SLIM-ES3 environment is a southwest Asian urban setting. The user navigates the urban terrain populated with civilians, security personnel, NGOs, and insurgents, to detect threats while attempting appropriate interaction with everyone encountered. Users work from a menu of actions, recording their observations, checking maps, taking GPS readings, and even taking digital “photographs”. Each action has a time “cost”. Skillful users will balance immediate reporting with notes and memory to maximize their observations by minimizing easier but more time “costly” methods. Following the patrol, users prepare a patrol report, working from recorded objects and recalling items observed. Scoring takes into account objects reported and an Information Operations indicator reflecting success in civilian interactions. The After Action Review (AAR) allows study of objects observed and missed along with current doctrinal information about the category of threat that the object represented.
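The time-budget and scoring mechanics described above can be illustrated with a small sketch. The action costs, weights and Information Operations bonus below are invented placeholders; the fielded SLIM-ES3 scoring rules are not reproduced here.

```python
# Illustrative scoring sketch for a SLIM-ES3-style patrol, with invented
# action costs and weights; the fielded tool's actual scoring rules differ.
ACTION_COST_MIN = {"note": 1, "map_check": 2, "gps_reading": 3,
                   "photo": 5, "report_now": 8}

def run_patrol(actions, time_budget_min=60):
    """Spend the patrol's time budget; return which observations were recorded."""
    recorded, elapsed = [], 0
    for action, observation in actions:
        cost = ACTION_COST_MIN[action]
        if elapsed + cost > time_budget_min:
            break                          # ran out of patrol time
        elapsed += cost
        recorded.append(observation)
    return recorded, elapsed

def score_report(reported, actual_threats, io_indicator):
    """Combine threat-reporting accuracy with an Information Operations indicator."""
    hits = len(set(reported) & set(actual_threats))
    misses = len(set(actual_threats) - set(reported))
    return 10 * hits - 5 * misses + int(20 * io_indicator)

recorded, used = run_patrol([("note", "loitering man"), ("photo", "wire on road"),
                             ("gps_reading", "market"), ("report_now", "wire on road")])
print(score_report(recorded, actual_threats={"wire on road"}, io_indicator=0.75))
```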

Facts and Figures
SLIM-ES3 was the recipient of the US Army Modeling and Simulation Award for training excellence in 2006. Learn more here.

OneSAF Graphical User Interface

This project developed an intuitive game-style Graphical User Interface (GUI) to One-SAF, the US Army’s premier simulation driver, to dramatically reduce the time needed to train operators to use complex simulations in a training environment. Game-like iconography, “3D” map views and human factors-based organization of data demonstrated how the One-SAF operator’s experience could successfully be transformed. The interface was developed in cooperation with the US Army Program Executive Office for Simulation, Training & Instrumentation (PEO STRI), the USC Institute for Creative Technologies (ICT), and Emergent Game Technologies.

Military Terrain for Games Pipeline

Commercial gaming technology has advanced dramatically over the past decades, with the fidelity of virtual environments reaching significant new heights. With these growing capabilities has come a growing need for rapidly generated military terrain for use in virtual training environments, simulations and exercises. However, most leading AAA gaming titles take around five years and $50 million to produce, and typically 70% of those resources are devoted to creating the virtual environments. Such significant art resources are required because the work is predominantly manual, which makes the process extremely labor intensive and time consuming. The US military desires immersive and realistic virtual terrain, but lacks those lofty resources, requiring that the work be accomplished at a fraction of the cost and in a fraction of the time. Additional challenges arise because these datasets need to be current, accurate, relevant and correlated with other military simulations like OneSAF.

In conjunction with the US Army Simulation and Training Technology Center (STTC), the University of Southern California Institute for Creative Technologies (USC ICT) has developed the Military Terrain for Games Pipeline (MTGP), consisting of several automated processes and tools that procedurally analyze military terrain data obtained in COLLADA format from various sources (LIDAR, DTED, etc.), correct any issues, enhance the aesthetics of the scene, then export it for use with the systems the military uses most, such as Virtual Battlespace 2 (VBS2) and OneSAF, and with other widely used game engines like Gamebryo and Source. Through this pipeline, the MTGP is able to produce immersive, game-engine-ready environments with little to no human intervention and in a fraction of the time it would take to perform these tasks manually.

The MTGP optimizes and enhances incoming datasets through the following processes:

  • Pre-processing
  • Import COLLADA
  • Procedural texturing
  • Augmentation of geo-typical objects and clutter
  • Generate virtual environment for game/simulation engines
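A minimal sketch of how such a staged pipeline can be chained is shown below. Each stage function is a hypothetical placeholder standing in for the corresponding MTGP process listed above, not the actual MTGP tooling or data formats.

```python
# A minimal sketch of chaining the MTGP stages listed above. Each stage
# function is a hypothetical placeholder, not the actual MTGP tooling.
def preprocess(dataset):             return dataset              # clean source data
def import_collada(dataset):         return {"mesh": dataset}    # parse COLLADA geometry
def procedural_texture(scene):       scene["textured"] = True; return scene
def add_geotypical_clutter(scene):   scene["clutter"] = ["trees", "rubble"]; return scene
def export_for_engine(scene, engine): return f"{engine}-ready terrain: {scene}"

PIPELINE = [preprocess, import_collada, procedural_texture, add_geotypical_clutter]

def build_terrain(raw_terrain, engine="VBS2"):
    scene = raw_terrain
    for stage in PIPELINE:           # run the stages in the order listed above
        scene = stage(scene)
    return export_for_engine(scene, engine)

print(build_terrain("lidar_tile_017.dae"))
```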



Goals

  • Create geo-typical immersive virtual environments that tap into the capabilities of today’s gaming technologies, but in a fraction of the time, manpower and cost
  • Research and adapt existing photogrammetry, image processing and computer vision methods to incorporate real world information about the area of interest to make the generated environments increasingly “geo-specific”
  • Develop the user interfaces and tools that enable non-programming training developers to provide the required configuration information and create these virtual environments with the push of a button

External Collaborators

  • Simulation and Training Technology Center
  • TRADOC Capabilities Manager (TCM) Gaming
  • Applied Research Associates, Inc.
  • UCF Institute for Simulation and Training

Full Spectrum Video Games

ICT’s Full Spectrum training aids were a suite of game-based applications for the military, providing practice and training at the strategic, operational and tactical levels.

Full Spectrum Command Board Game

The focus of the Full Spectrum Command board game was on tactical understanding of fire and maneuver at a company level, in a military operations in urban terrain (MOUT) environment. Game play was scenario-based and two-player, with one player commanding a conventional light infantry company while the other player commanded a group of asymmetric cells.

Army Infantry School instructors and students in the Captains Career Course at Fort Benning were integral to the game development, in defining core concepts and play testing mechanics to ensure fidelity with United States Army doctrine. Prototypes of the game became part of the curriculum of the Captains Career Course, being used to illustrate the principles of company-level firepower & maneuver and adaptive thinking in tactical decision-making.

The Full Spectrum Command board game was delivered in January 2002, four months after funding.

Key Features:

  • Was based on existing U.S. Army infantry training doctrine.
  • Focused on fire and maneuver techniques, emphasizing adaptive thinking.
  • User-level scenario editing and creation.
  • Re-configurable playfield for scenario customization.
  • Models current operations & systems as well as notional future concepts.

Full Spectrum Command

Full Spectrum Command (FSC) was a PC-based training aid that modeled the command and control of a U.S. Army Light Infantry Company in a MOUT environment. As the Captain of a U.S. Army Light Infantry Company, the user interprets a five-part Operational Order (OPORD) for a given scenario, organizes his platoons, develops a multi-phase plan and coordinates the actions of approximately 120 soldiers during the engagement. The scenarios were designed to develop cognitive skills: tactical decision-making, resource management and adaptive thinking. These scenarios focused on asymmetric threats within peacekeeping and peace-enforcement operations. Each scenario was developed in conjunction with subject matter experts from the US Army Infantry Center in Fort Benning, Georgia to ensure both military and pedagogic fidelity.

FSC was delivered in January 2003.

An expanded version of FSC was developed via a technology partnering agreement between the US Army and Singapore Armed Forces and was completed in April of 2004.

Full Spectrum Leader

Full Spectrum Leader (FSL) extends the Full Spectrum suite of applications to the Light Infantry Platoon level with a cognitive leadership trainer featuring several innovations. Similar to the expanded version of Full Spectrum Command, FSL was developed via a technology partnering agreement between the US Army and Singapore Armed Forces.

FSL showcases a computer opponent with an adaptive intelligence.  Unlike most games and simulations that have a fixed (albeit concealed) OPFOR Plan, FSL’s opponent attempts to recognize players’ actions and respond dynamically.  Additionally, FSL includes close air support, artillery calls-for-fire and casualty evacuation to heighten the realism and training value of the game.  FSL is the first Full Spectrum application to include a “hasty defense” mission.

FSL was delivered in March 2005.

Key Features:

  • Control of United States Army Platoon
  • Control of Singapore Armed Forces Platoon
  • Third Person View
  • Intuitive user interface
  • Platoon Leader is able to directly engage enemy combatants via light infantry small arms
  • Hasty defense mission
  • Close air support (CAS) and artillery calls-for-fire

Full Spectrum Warrior

Full Spectrum Warrior (FSW) is a squad-based, tactical-action game that places the player into a 21st-century conflict set in a Middle Eastern MOUT environment.

In FSW, players assume the role of a squad leader, in command of a squad of two fire teams of US Army infantry soldiers.  It is their responsibility to achieve specific objectives through the skillful deployment and use of the men under their command.

It is the combination of tactical planning and guided execution on a modern, asymmetrical battlefield that is the foundation of FSW.

As with Full Spectrum Command and Full Spectrum Leader, FSW was developed with close support from subject matter experts at the U.S. Army Infantry School at Fort Benning, Georgia to ensure content fidelity. Motion capture production of actual Soldiers was used for computer character animations.

Full Spectrum Warrior was released for Xbox on June 1, 2004 and later released by other entities for Windows and PlayStation 2.

Facts and Figures:

  • Full Spectrum Warrior was the first military training application published for a commercial game console.
  • The game was a critical and commercial success, acquired for commercial release by game publisher THQ. At the 2003 Electronic Entertainment Expo (E3) Game Critics Awards, Full Spectrum Warrior won “Best Original Game” and “Best Simulation Game.”


Virtual Patient

Download a PDF overview.

The current virtual patient effort at ICT is USC Standard Patient Hospital and Studio.

The Virtual Patient project uses virtual human technology to create realistic lifelike character avatars and uses speech recognition, natural language, non-verbal behavior and realistic scenarios for both military and non-military issues to train clinicians in interpersonal areas such as rapport, interviewing and diagnosis. ICT-developed virtual patients are being incorporated into the curriculum at the USC School of Social Work, in collaboration with their Center for Innovation and Research for Veterans and Military Families, as a way to train future clinicians in therapeutic interview skills.

Recognizing a use for virtual standardized patients, the project began as an offshoot of the virtual human project in 2005. The virtual patient team submitted and won a University of Southern California Provost Teaching with Technology Grant in 2006 to fund a pilot project. The success of this effort led to ICT funding a yearlong research effort in 2007-2008 to transition the technology from the virtual humans to the virtual patients.

ICT is developing virtual patients for military-specific scenarios for the US Army STTC. In 2010, an additional project started with the USC School of Social Work and the Army TATRC to apply virtual patients to train social workers in military-specific issues and in how to converse with military personnel about the issues they deal with every day, from family life to returning from service to PTSD. The virtual patient system will have virtual client classrooms set up at USC’s School of Social Work starting in 2011 for every student to use. The future virtual patient system will be delivered over the web and mobile devices such as tablets for easy and continuous training.

The use of virtual patient technology is not meant to replace human standardized patients but to augment live actor programs with virtual characters that are available 24/7 and can portray a multitude of conditions that might be difficult for actors to represent or repeat with success. Additionally, having a variety of characters available, from elderly to young persons of different genders and cultures, will be a benefit.

Goals

  • Design intelligent Virtual Patients that have realistic and consistent human-to-human interaction and communication skills to open up possibilities for clinical psychosocial applications that address interviewing skills, diagnostic assessment and therapy training.
  • Create a comprehensive Diagnostic and Statistical Manual of Mental Disorders (DSM) diagnostic trainer that has a diverse library of VPs modeled after each diagnostic category. The VPs would be created to represent a wide range of age, gender and ethnic backgrounds and could be interchangeably loaded with the language and emotional model of any of the DSM disorders.

External Collaborators

  • Caroly S. Pataki, M.D.
  • Michele T. Pato, M.D.
  • Cheryl St-George, M.S.
  • Jeffery Sugar, M.D.
  • Celestine A. Ntuen, PhD

Journey 2: The Mysterious Island Debuts New Digital Double Pipeline Using ICT’s Light Stage

For the recent blockbuster Journey 2: The Mysterious Island, the ICT Graphics Lab collaborated with Icon Imaging Studio, Lightstage LLC and House of Moves (HOM) to develop a new process of creating digital doubles for the film’s main actors – Dwayne Johnson, Michael Caine, Josh Hutcherson, Vanessa Hudgens and Luis Guzman.

ICT’s Light Stage 6 captured full-body lighting scenarios simultaneously with the scanning of character topology by Icon Imaging. Lightstage LLC recorded the actors’ facial shapes and appearances down to the level of pore detail and fine creases. HOM contributed character rigging and conducted mocap sessions. All facial and full-body data, lighting and textures were captured in just a few hours, resulting in the creation of high-quality digital doubles that were ready for the VFX vendors with minimal time required of the talent.

“With this process, the VFX vendors on the show were able to start at the 50-yard line, so to speak, instead of working from scratch and laboring through the tedious R&D process of building digital doubles,” said Paul Debevec, who heads up the graphics lab at ICT and developed both the Light Stage 6 and the technology behind the facial scanning Light Stage used by Lightstage LLC. “With the physical proximity of ICT, Icon and House of Moves, talent was in and out having had their body scan, texture, lighting and mocap data all captured in under three hours.”

Read the full story.

Transitional Online Post-Deployment Soldier Support in Virtual Worlds

Download a PDF overview.

TOPSS-VW, also known as “Coming Home,” explores the domains of persistent, easily accessible virtual worlds for delivering 3D telehealth experiences. Using the Second Life platform, we have created a specialized online virtual world for the benefit of post-deployment soldiers who are reintegrating into civilian life. This is often a difficult time for many soldiers due to both physical and psychological challenges. We believe the virtual world has great potential to mitigate many of these issues by providing a dedicated space where soldiers can interact, find camaraderie, and connect to resources for healing and transition.

The center of the veterans’ social space is Chicoma Lodge. The design of this inviting lodge is patterned after National Park lodges of the American West. The central area, the Great Hall, contains comfortable seating areas around fireplaces, and a warm decor that includes hunting, skiing, cowboy and Native American motifs. Additional seating areas on two decks to the rear of this space afford serene views of a lake fed by a cascading mountain waterfall, as well as wildlife indigenous to the area. Two large wings extending from the Great Hall are devoted to a wide range of games that encourage social interaction. We have designed our space as a connected social network where soldiers can find camaraderie with others who have gone through recent wartime experiences. Thus, this environment serves a function similar to the twentieth century VFW Halls. In addition, soldiers access therapeutic activities.

Coming Home builds on two of ICT’s research strengths: intelligent virtual humans and immersive virtual world expertise. We have expanded ICT’s virtual humans into the persistent VW domain, where they must remain logged in and active for weeks on end. These embedded conversational agent “avatars” serve as “always on” helpers, guides and storytellers, enhancing the player experience and providing helpful information and wayfinding.

We have also expanded current virtual world capabilities by creating activities that provide physical – “real” – world benefits to participants. For example, inspired by breathing techniques of many relaxation therapies, as well as research in biofeedback and advice from military social work experts, we have created and tested a virtual jogging path where a veteran uses controlled breathing into an ordinary headset microphone to cause their avatar to run in the virtual world. Results from our study on this activity have shown a significant reduction in stress markers for participants. We expect users who engage in this enjoyable practice regularly will derive real world stress reduction benefits.
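The breathing-driven jogging activity can be illustrated with a short sketch that maps microphone loudness to avatar speed. The RMS thresholds, smoothing factor and synthetic audio buffer below are assumptions for illustration; the deployed activity’s signal processing may differ.

```python
import numpy as np

def breath_intensity(samples, noise_floor=0.02):
    """Map a short microphone buffer to a 0..1 breath intensity via its RMS level."""
    rms = float(np.sqrt(np.mean(np.square(samples))))
    return max(0.0, min(1.0, (rms - noise_floor) / 0.2))

def update_avatar_speed(samples, current_speed, max_speed=3.0, smoothing=0.8):
    """Smoothly drive the avatar's jogging speed from controlled breathing."""
    target = max_speed * breath_intensity(samples)
    return smoothing * current_speed + (1.0 - smoothing) * target

# Example with a synthetic "exhale" buffer; a real client would read blocks
# from the headset microphone (e.g., via a hypothetical read_mic_block()).
speed = 0.0
exhale = 0.1 * np.random.randn(1024)
for _ in range(20):
    speed = update_avatar_speed(exhale, speed)
print(f"avatar jogging speed: {speed:.2f} m/s")
```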

We have pioneered the implementation of Mindfulness-Based Stress Reduction (MBSR) delivery within the virtual world. This is a Complementary and Alternative Medicine (CAM)-based intervention developed by Jon Kabat-Zinn, with over two decades of clinical research validating its benefits for pain mitigation, stress reduction, and overall psychological resilience. With our MBSR experts from the University of San Diego Center for Mindfulness, we have held two experimental 8-week sessions thus far in the virtual world for therapists and stakeholders. We plan to enroll veterans in 2012 in the virtual world Mindfulness classes through our partnerships with AMEDD at Ft. Sam Houston, and the National Intrepid Center of Excellence in TBI and Psychological Health (NICoE) in Bethesda, MD.

We have also created a Warrior’s Journey experience that consists of interactive narratives about the life and ideals of classic warriors, such as the Cheyenne Dog Warrior and the Samurai, designed to be relevant to today’s soldiers. The capstone of the Warrior’s Journey is an interactive authoring system where veterans can create their own Warrior’s Journey story that others can experience in the virtual world, including an intelligent agent that can be questioned and which can relay the veteran’s own facts and backstory for their authored Warrior’s Journey.

External Collaborators
Jamie Antonisse

Want to learn more?
Click here for downloads and more info.

Slate Highlights Skip Rizzo’s Research Using Gaming Technologies for ADHD

A story on Slate, by Stanford’s Jeremy Bailenson, highlighted research by ICT’s Skip Rizzo using motion recognition programs, similar to the Microsoft Kinect, to detect ADHD. The article notes that systems analyzing nonverbal behavior could be used to automatically diagnose kids with a variety of disorders.

UrbanSim

Download a PDF overview.

UrbanSim is a PC-based virtual training application for practicing the art of mission command in complex counterinsurgency and stability operations. It consists of a game-based practice environment, a web-based multimedia primer on doctrinal concepts of counterinsurgency and a suite of scenario authoring tools.

The UrbanSim practice environment allows trainees to take on the role of an Army battalion commander and to plan and execute operations in the context of a difficult fictional training scenario. After developing their commander’s intent, identifying their lines of effort and information requirements, and selecting their measures of effectiveness, trainees direct the actions of a battalion as they attempt to maintain stability, fight insurgency, reconstruct civil infrastructure and prepare for transition.

UrbanSim targets trainees’ abilities to maintain situational awareness, anticipate second and third order effects of actions and adapt their strategies in the face of difficult situations. UrbanSim is driven by an underlying socio-cultural behavior model, coupled with a novel story engine that interjects events and situations based on the real-world experience of former commanders. UrbanSim includes an intelligent tutoring system, which provides guidance to trainees during execution, as well as after action review capabilities.
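As a rough illustration of how a story engine can interject events based on the simulated world state, the sketch below selects an eligible event from a small hand-written library. The event names, triggers and state variables are invented; UrbanSim’s actual socio-cultural model and commander-experience-based story engine are far richer.

```python
import random

# Hypothetical event library keyed by a trigger over the simulated world state;
# the real UrbanSim story engine draws on former commanders' experiences.
EVENTS = [
    {"name": "IED found on route",             "trigger": lambda s: s["security"] < 0.4},
    {"name": "tribal leader requests meeting", "trigger": lambda s: s["rapport"] > 0.6},
    {"name": "power substation fails",         "trigger": lambda s: s["infrastructure"] < 0.5},
]

def inject_event(world_state):
    """Return one eligible event for the current turn, or None."""
    eligible = [e for e in EVENTS if e["trigger"](world_state)]
    return random.choice(eligible)["name"] if eligible else None

print(inject_event({"security": 0.3, "rapport": 0.7, "infrastructure": 0.8}))
```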

UrbanSim is being used and evaluated in an increasing number of U.S. Army training contexts, including in courses at the School for Command Preparation and the Command and General Staff College. UrbanSim has also been utilized by operational U.S. Army units to provide an opportunity for staff members to work together and improve their coordination skills.

The UrbanSim project is being performed under the ICT contract being managed by the United States Army Simulation and Training Technology Center (STTC).

Goals

  • Provide trainees the opportunity to practice the art of mission command in complex counterinsurgency and stability operations.
  • Develop new tools, metrics and methods that enable training developers to rapidly create effective and interactive virtual training environments.

Facts and Figures

  • UrbanSim can be downloaded from the U.S. Army’s MilGaming website.
  • Key deployment sites:
    • School for Command Preparation (SCP), Ft. Leavenworth, KS
    • Intermediate Level Education (ILE), Ft. Leavenworth, KS
    • Maneuver Captain’s Career Course (MC3), Ft. Benning, GA
    • Maneuver Support Captain’s Career Course (MSCCC), Ft. Leonard Wood, MO
    • Warrior Skills Training Center, Ft. Hood, TX

External Collaborators

  • Simulation and Training Technology Center
  • Army Research Institute for the Behavioral Sciences
  • U.S. Army School for Command Preparation
  • Stranger Entertainment, Inc.
  • Quicksilver Software, Inc.
  • Psychic Bunny, LLC
  • MYMIC, LLC

Sergeant Star

The Sergeant Star Immersive Demonstration presents a new class of virtual human guide, using ICT’s interactive character technologies, such as natural language understanding and realistic conversational gestures.

The Sergeant Star persona began as a non-animated web chat personality on the GoArmy.com website. There he responds verbally and in writing to visitor-typed questions using artificial intelligence software that was developed by Next IT of Spokane, WA. ICT researchers were asked by the United States Army Accessions Command to bring Sergeant Star to life, and have since elevated him to a new level of realism.

Projected onto a semi-transparent screen, he now appears life size, and thanks to Hollywood storytelling techniques and cutting-edge computer graphics, his personality matches his good looks. His chest moves with each breath, his eyebrows arch at certain questions. He speaks, he shrugs, he laughs and he informs.

Sergeant Star answers questions on topics including Army careers, training, education and money for college. He can also handle queries about the technology behind his development and explain how his creation fits in with plans for future Army training environments.

Facts and Figures
The Sergeant Star Immersive Demonstration debuted at the Association of the United States Army Conference in Washington, DC, October 8-10, 2007. Later that month he appeared at the Future Farmers of America convention in Indianapolis, and he was scheduled to appear at the All-American Bowl (Fan Fiesta) in San Antonio in early January 2008. From October 2007 to January 2009, SGT Star is estimated to have been seen by 75,000 audience members, and he remained active in the Army’s Interactive Adventure Vans for some time thereafter.

Future
SGT Star continued to pave the way for other ICT full-scale interactive characters, such as the InterFaces twins at the Museum of Science, Boston and the FITE-JCTD CHAOS Marine training system at Camp Pendleton.

External Collaborators
Voice talent: Holt Boggs

Sergeant Blackwell

Sergeant John Blackwell, Interactive Character is a 3D virtual character capable of spoken interaction using ICT natural language processing technology. He appears at human scale on a transparent digital flat display system developed by ICT Mixed Reality Research and Development.

Showcasing the state-of-the-art virtual human and graphics technology developed at ICT, this work accelerated ongoing efforts to present an engaging, interactive avatar combining natural language and speech recognition with audio-driven, real-time facial animation research.

The emphasis of the live demonstration is on the character’s interaction capabilities. Using ICT’s unique spoken dialogue system, the character engages with a demonstrator in conversation in a query and response format.

With his extensive vocabulary and artificial intelligence language programming, Sgt. Blackwell can answer a wide range of questions, structuring his answers based on recognized words in the questions. Engaging and funny, Blackwell even answers open-ended questions. For example, when asked, “What time is it?” he may answer, “What am I, a clock?” Or, if asked about the weather, he may reply, “You are asking me the wrong question. Why don’t you try asking me about my technology?”
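A toy keyword-spotting responder in the spirit of the behavior described above might look like the sketch below. The keyword sets and canned replies are illustrative only; ICT’s spoken dialogue system uses far more sophisticated language understanding.

```python
# A toy keyword-spotting responder; ICT's actual spoken dialogue system
# is far more sophisticated than simple word overlap.
RESPONSES = [
    ({"time", "clock"},       "What am I, a clock?"),
    ({"weather", "rain"},     "You are asking me the wrong question. "
                              "Why don't you try asking me about my technology?"),
    ({"technology", "work"},  "I run on speech recognition, natural language "
                              "processing and real-time animation."),
]

def answer(question: str) -> str:
    words = set(question.lower().replace("?", "").split())
    keywords, response = max(RESPONSES, key=lambda kw_resp: len(words & kw_resp[0]))
    return response if words & keywords else "Try asking me about my technology."

print(answer("What time is it?"))
print(answer("How does your technology work?"))
```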

The USC ICT Mixed Reality Research and Development team produced this demonstration showcasing virtual human technology for the 24th Army Science Conference. He made his public debut November 28th – December 2nd, 2004 in Orlando and then traveled the country, appearing in the Presidential Classroom, an honors program that brings high school students to Washington, D.C. Blackwell also spoke with visitors at the Smithsonian’s prestigious Cooper-Hewitt Museum in New York as part of their National Design Triennial, Design Life Now exhibit. The exhibit was also seen at the Institute of Contemporary Art, Boston and later traveled to the Contemporary Arts Museum, Houston (January 2008).

Goals

  • Create a believable virtual human capable of real-time, spoken interaction on a limited content domain.
  • Present this character life size in a convincing, mixed reality display environment.

External Collaborators

  • David Hendrie
  • Dick Lindheim
  • Ernie Eastlund
  • Peter McNerney
  • Kia Zokaie

Ada and Grace: Responsive Virtual Human Museum Guides

Download a PDF overview.

Bringing Science and Technology to Life

Meet Ada and Grace, two bright and bubbly educators who arrived at the Museum of Science, Boston in 2009. Science and technology are literally part of their being. That is because they aren’t real people – but virtual ones. Designed to advance the public’s awareness of, and engagement in, computer science and emerging learning technologies, the virtual guides make a museum visit richer by answering visitor questions, suggesting exhibits and explaining the technology that makes them work. Visitors can even participate in the research that will make them better.

Named for Ada Lovelace and Grace Hopper, two inspirational female computer science pioneers, these digital docents are trailblazers in their own right. As part of an exhibit called “InterFaces” they are among the first and most advanced virtual humans ever created to speak face-to-face with museum visitors. As both examples and explainers of technical scientific concepts, they represent a new and potentially transformative medium for engaging the public in science.

A collaboration between University of Southern California’s Institute for Creative Technologies and the Museum of Science, Boston, this project highlights the educational and research potential of virtual characters by getting them out of the lab and interacting with people in meaningful and memorable ways.

Combining computer-generated character animation with artificial intelligence and autonomous agent research, the life-sized virtual human museum guides speak directly with visitors. Not only are they capable of discussing the science content of a museum exhibit, they also can be funny and model a convincing range of human emotions, providing an unprecedented opportunity to inspire youth and learners of all ages about computer science and related STEM fields.

Because virtual humans are based upon cutting-edge technology in artificial intelligence and graphics, they are themselves an intriguing display of advanced STEM technology. At the museum, they don’t just serve as guides but as a technology exhibit too. In a “Science Behind Virtual Humans” exhibit, dynamic displays placed next to the characters further educate visitors by showing the underlying processing the virtual humans perform in areas such as automatic speech recognition and natural language processing that allow the 19-year-old twins to move, listen, and talk just like real young adults.

At the Museum of Science, visitors not only observe science, they also participate in the process of science: In the “Living Laboratory” element of the project, data acquired from visitor interactions with the virtual humans is being used on an ongoing basis to improve the performance of the virtual humans. Results will be displayed to visitors so they can better understand the iterative research, design and development process of advanced software systems. By interacting with the many thousands of people that visit the museum annually, a database will be acquired that can help advance the state of the art in virtual human technology. In turn, such a rich database can have benefits for other virtual human applications in areas such as training, education, medical interventions, and entertainment. In addition, by moving a research project into a museum, the Virtual Museum Guides project transforms museums from a place where science is merely displayed to a place where science is actually done.

Click here to read about the National Science Foundation expo booth featuring Ada and Grace at the American Association for the Advancement of Science conference in San Diego.

Facts and Figures

  • Opened December 2009 at the Museum of Science, Boston.
  • An estimated 160,000 museum visitors have interacted with Ada and Grace.

Goals

  • Inspire youth and learners of all ages about computer science and related STEM fields.
  • Extend the capabilities of ICT virtual humans and build on work in intelligent coaching.
  • Advance the field of informal science by introducing an engaging, responsive and reinforcing social interface into informal science that will help turn passive visitors into active participants.

REFLCT: A Head Mounted Projector Display

Download a PDF overview.
Sharing Space with Virtual Humans

ICT’s Mixed Reality Lab has developed a near axis retroreflective projector system called REFLCT (Retroreflective Environments For Learner-Centered Training) that is conducive to mixed reality training. REFLCT is designed to unobtrusively deliver mixed reality training experiences. REFLCT:

  • Places no glass or optics in front of a user’s face.
  • Needs only a single projector per user.
  • Provides each user with a unique and perspective correct image.
  • Situates imagery within a physical themed and prop-based environment.
  • Can be low power, lightweight, and wireless.
  • Works in normal room brightness.

A Texas Instruments micro-projector and active LED tracking markers are mounted on a helmet. The markers are part of a PhaseSpace Impulse tracking system that provides position and orientation to a personal computer running Panda3D for graphics and VRPN for tracker data communication.

Each user only sees the imagery from his or her own projector, since retroreflective screens bounce light straight back towards the light source. This personalized information display allows each user to experience a perspective correct viewpoint, enabling each user to unambiguously perceive whether a virtual character is establishing eye contact, gesturing, or pointing a tool or weapon at them. Furthermore, since no bulky optics cover the users’ eyes, trainees can also establish eye contact with each other.
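Conceptually, each trainee’s rendered view is simply slaved to that trainee’s tracked helmet pose. The sketch below illustrates the idea with hypothetical Camera and get_head_pose() stand-ins; it does not use the actual PhaseSpace, VRPN or Panda3D APIs.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    """Minimal stand-in for a per-user engine camera (hypothetical API)."""
    position: tuple = (0.0, 0.0, 0.0)
    orientation: tuple = (0.0, 0.0, 0.0)

    def set_pose(self, position, orientation):
        self.position, self.orientation = position, orientation

def get_head_pose(user_id):
    """Hypothetical stand-in for the PhaseSpace/VRPN tracker feed: returns the
    helmet's (x, y, z) position and (heading, pitch, roll) in degrees."""
    return (0.0, 1.7, float(user_id)), (0.0, -10.0, 0.0)

def update_user_views(cameras):
    """Slave each user's camera to that user's helmet pose so the image their
    helmet projector throws onto the retroreflective screen is perspective
    correct for that user alone."""
    for user_id, camera in enumerate(cameras):
        position, orientation = get_head_pose(user_id)
        camera.set_pose(position, orientation)

cameras = [Camera(), Camera()]       # one camera/projector pair per trainee
update_user_views(cameras)
print(cameras[0].position, cameras[1].position)
```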

Retroreflective surfaces also open up new user interface capabilities beyond mixed reality training. The spatially targeted nature of the information presentation is well matched to other applications that require user-personalized data. For example, cell phones with embedded projectors could be used as “cheek-based displays”: at an airport, a user would hold a cell phone projector to his/her cheek and look at a blank retroreflective surface to see real-time directions, guidance arrows, and flight information, in the user’s preferred language.

References: Augmented Reality Applications and User Interfaces Using Head-Coupled Near-Axis Personal Projectors with Novel Retroreflective Props and Surfaces. Mark Bolas and David M. Krum. Pervasive 2010 Ubiprojection Workshop, May 17, 2010, Helsinki, Finland.

Bravemind: Virtual Reality Exposure Therapy

Download a PDF overview.

ICT’s virtual reality exposure therapy is aimed at providing relief from post-traumatic stress.

Currently found at over 60 sites, including VA hospitals, military bases and university centers, ICT’s Virtual Iraq/Afghanistan exposure therapy approach has been shown to produce a meaningful reduction in PTS symptoms. Additional randomized controlled studies are ongoing.

Exposure therapy, in which a patient – guided by a trained therapist – confronts their trauma memories through a retelling of the experience, is now endorsed as an “evidence-based” treatment for PTS. ICT researchers added to this therapy by leveraging virtual art assets that were originally built for the commercially successful Xbox game and combat tactical simulation, Full Spectrum Warrior. The current applications consist of a series of virtual scenarios specifically designed to represent relevant contexts for VR exposure therapy, including Middle-Eastern themed city and desert road environments. In addition to the visual stimuli presented in the VR head mounted display, directional 3D audio, vibrations and smells can be delivered into the simulation. Now, rather than relying exclusively on imagining a particular scenario, a patient can experience it again in a virtual world under very safe and controlled conditions. Young military personnel, having grown up with digital gaming technology, may actually be more attracted to and comfortable with a VR treatment approach as an alternative to traditional “talk therapy”.

The therapy requires well-trained clinical care providers who understand the unique challenges that they may face with service members and veterans suffering from the wounds of war. Stimulus presentation is controlled by the clinician via a separate “wizard of Oz” interface, with the clinician in full audio contact with the patient. ICT researchers are also adapting the system as a tool for stress resilience training and PTS assessment.
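The clinician-controlled stimulus delivery can be pictured as a simple command queue between the therapist’s interface and the patient’s simulation, as in the sketch below. The Stimulus fields, channel names and queue-based protocol are assumptions for illustration, not the Bravemind system’s actual interface.

```python
from dataclasses import dataclass
import queue

@dataclass
class Stimulus:
    channel: str      # e.g. "visual", "audio3d", "vibration", "scent"
    name: str
    intensity: float  # 0..1, ramped gradually by the clinician

# Hypothetical command queue between the clinician's control interface and
# the patient's simulation; the fielded system's protocol is not shown here.
commands: "queue.Queue[Stimulus]" = queue.Queue()

def clinician_trigger(channel, name, intensity):
    """Called from the clinician-facing 'wizard of Oz' interface."""
    commands.put(Stimulus(channel, name, max(0.0, min(1.0, intensity))))

def simulation_step():
    """Drain pending stimulus commands inside the patient-facing simulation."""
    while not commands.empty():
        s = commands.get()
        print(f"presenting {s.channel} stimulus '{s.name}' at intensity {s.intensity}")

clinician_trigger("audio3d", "distant small-arms fire", 0.3)
clinician_trigger("vibration", "vehicle rumble", 0.5)
simulation_step()
```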

This basic and applied research effort has been funded by ONR, TATRC, USAMRMC and the Infinite Hero Foundation. Collaborators include JoAnn Difede, Weill Cornell Medical Center; Barbara Rothbaum, Emory University; and Virtually Better, Inc.

The BRAVEMIND VR Exposure Therapy software was created at the University of Southern California Institute for Creative Technologies and is provided free of charge for its clinical use and research upon documenting clinician expertise in the delivery of Prolonged Exposure Therapy for the treatment of combat-related PTSD. It is the responsibility of the requesting agency to acquire the necessary equipment to run the system, but we can provide a fully detailed equipment directory and instructions for set up with supporting links. For further information, please contact project PI, Dr. Skip Rizzo at: rizzo@ict.usc.edu.

National Training Center: Comprehensive Enhanced Fidelity Program

As the nature of war has changed, it has become increasingly clear that soldiers need training in cultural awareness. To respond to this need, the National Training Center (NTC), based at Fort Irwin, Calif., developed realistic role-playing scenarios focused on deepening the understanding of Iraqi culture and customs. The Institute for Creative Technologies worked with the NTC to enhance the fidelity of this training.

ICT drew upon the talents of Hollywood artists and technicians who have successfully created immersive environments for well-known motion pictures and popular theme parks around the world. By bringing together professionals from the entertainment industry in the areas of writing, directing, special effects and production services, ICT helped create a more realistic training field.

Inside the NTC, 1,100 square kilometers of Mojave Desert currently serve as the setting for mock Iraqi villages. In these villages a combination of Iraqi Nationals and local National Guard forces role-play with the soldiers who come to train at the NTC. The scenarios, from the NTC scenario development team, are designed based on learning objectives gathered from the incoming troops in the months prior to the exercises. Each scenario is specific to that particular rotation.

ICT worked with the NTC staff to enhance scenario development, role-play and special effects as they apply to specific exercises. The goal of ICT’s work was to “train the trainers” – help the NTC with not only the technologies and tools, but most importantly, transferring skills and knowledge to be utilized by the NTC team long after ICT personnel departed.

This effort was performed under the Institute for Creative Technologies (ICT) contract being managed by the United States Army Simulation and Training Technology Center (STTC).

External Collaborators

  • John Levoff, Producer
  • Bob Reynolds, TV Producer, Academic Technology, San Jose State University
  • Richard Stutsman, Special Effects
  • Carl Weathers, Actor/Director
  • Bob Wolterstoff, Writer
  • Herman Zimmerman, Set Designer

Joint Fires and Effects Training System (JFETS)

Download a PDF overview of ICT’s immersive and cognitive training aids.

JFETS is a suite of state-of-the-art immersive virtual reality environments designed to help soldiers make critical decisions under stress and provide collective team training and cultural awareness lessons. Tasks not only focus on the technical application of skills, but also on the thought processes involved in employing those skills.

By leveraging the ICT’s mixed reality technology, JFETS recreates life-like environments that place soldiers in real world settings. Stressors include heat, wind, explosions, human distress noise, and snipers. JFETS also provides added artificial intelligence behaviors to insurgent forces and realistic, reactive behaviors to civilians. Using JFETS, soldiers interact with both the physical and virtual worlds seamlessly without the costs associated with live exercises.

Installed at Fort Sill, Oklahoma, JFETS has trained over 16,000 Soldiers since 2004 and is currently being used by members of the United States Army and Marine Corps for training prior to deployments to Afghanistan and Iraq. The success of JFETS serves as an example of the application of cutting-edge virtual simulation technologies and research in a real-world training setting.

JFETS currently consists of three Digitally Interactive Virtual Environments (DIVEs):

  • Open Terrain Module (OTM)
  • Urban Terrain Module (UTM)
  • Close Air Support Module (CASM)

The OTM and UTM are four-dimensional training environments that focus on the application of indirect fires in open and urban settings respectively. In the UTM, Soldiers are placed in an apartment and must observe the activity of a busy city. The UTM training space is configured to accommodate up to three virtual windows overlooking a virtual environment.

The CASM enables joint forward observers to hone proficiency in Close Air Support (CAS) in an immersive and cognitively challenging manner. This is achieved via an innovative display interface consisting of a 300-degree perimeter field of view (360 degrees overhead), 11 channels of video, rear projection, and depth of awareness achieved through 16-channel audio. The CASM offers a dynamic training experience, allowing soldiers to practice immediate and pre-planned CAS missions, issue 9-line, 6-line, and 5-line CAS requests, and interact with pilots from the beginning to the end of a mission.

The JFETS IOTA project is providing a suite of tools that will allow operators overseeing JFETS training to spend less time on routine tasks and more on handling out-of-the-ordinary situations and maintaining the quality of the overall training.

In Government Fiscal Year 2007, the Joint Close Air Support Executive Steering Committee recommended the JFETS CAS simulation be certified as a training system that could replace CAS Type 1 and Type 2 controls for Joint Terminal Attack Controller currency.

JFETS training scenarios utilize the US Army developed One Semi-Automated Forces Testbed (OTB) as a simulation driver. In conjunction with the Intelligent Forces (IFOR) project, JFETS enhances the capabilities of OTB to augment the flexibility of the overall training system.

JFETS realizes the collective vision of the United States Army Field Artillery School, the United States Army Fires Center of Excellence, Fires Battle Lab, the Project Manager Combined Arms Tactical Trainer, and the ICT.

Goals

  • To provide immersive and cognitively challenging training that replicates real world conditions.
  • To hone trainee proficiency more effectively and efficiently than live training.
  • To transfer the ICT’s mixed reality technologies and artificial intelligence capabilities into useable and useful training applications.

Facts and Figures

  • JFETS was installed at Fort Sill in 2004.
  • In 2007, the Joint Close Air Support Executive Steering Committee recommended the JFETS CAS simulation be certified as a training system that could replace CAS Type 1 and Type 2 controls for Joint Terminal Attack Controller currency.
  • Over 20,000 Soldiers trained since 2007.
  • Successfully transitioned to the Army PEO STRI in 2008.

Gunslinger

Download a PDF overview.

Imagine stepping into a portal to another place and time. You appear in a darkened room, hearing the tinkling of an upright piano, the clinking of glasses, and horses and coaches in the distance. As your eyes focus, you make out a long, wooden bar full of glasses and whiskey bottles. Feeling a weight around your hips, you realize you are wearing a holster with a six-shot revolver. A conspicuous metal star sits pinned to your chest. The star says “U.S. Ranger.” Suddenly you realize someone is staring at you from across the bar. He looks like a bartender out of an old American western movie. “Howdy Ranger,” he says. “You’re here to rid our town of that evil bandit, Rio Laine, right?” You feel several eyes turn to you, waiting expectantly for an answer…

Welcome to Gunslinger, an interactive-entertainment application of virtual humans that transforms this iconic movie scene into a vivid semblance of reality.

Gunslinger combines virtual human technology with Hollywood storytelling and set building into an engaging, mixed-reality, story-driven experience, where a single participant can play the hero in a wild west setting by interacting both verbally and non-verbally with multiple virtual characters.

The Gunslinger project also pushes the frontier of virtual human research by proposing a new architecture for story-driven interaction. The system combines traditional question-answering dialogue techniques with a capability for biasing question understanding and dialogue initiative through an explicit story representation. The system incorporates advanced speech recognition techniques and visual sensing to recognize multimodal user input. It further extends existing behavior generation methods such as BEAT and SmartBody to drive tightly coupled dialogue exchanges between characters. Together, these capabilities strive to balance open-ended dialogue interaction with a carefully crafted narrative.

Flexible Action and Articulated Skeleton Toolkit (FAAST)

FAAST is middleware to facilitate integration of full-body control with games and VR applications. The toolkit relies upon software from OpenNI and PrimeSense to track the user’s motion using the PrimeSensor or the Microsoft Kinect sensors. FAAST includes a custom VRPN server to stream the user’s skeleton over a network, allowing VR applications to read the skeletal joints as trackers using any VRPN client. Additionally, the toolkit can also emulate keyboard input triggered by body posture and specific gestures. This allows the user to add custom body-based control mechanisms to existing off-the-shelf games that do not provide official support for depth sensors.
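The posture-triggered keyboard emulation can be illustrated with a small sketch that checks predicates over skeletal joint positions and presses or releases a bound key. The joint names, thresholds and press_key/release_key placeholders below are assumptions; FAAST’s actual bindings and configuration options differ.

```python
# Toy posture-to-keyboard mapping in the spirit of FAAST's gesture bindings.
# Joint positions are metres in a sensor-centred frame; press_key/release_key
# are placeholders for an OS-level keyboard-emulation call.
def press_key(key):   print(f"press {key}")
def release_key(key): print(f"release {key}")

BINDINGS = [
    # (name, predicate over the skeleton, emulated key)
    ("lean_forward",  lambda j: j["torso"][2] - j["head"][2] > 0.15, "w"),
    ("right_hand_up", lambda j: j["right_hand"][1] > j["head"][1],   "space"),
]

def update_bindings(joints, held):
    """Press a key while its posture predicate holds; release it otherwise."""
    for name, predicate, key in BINDINGS:
        active = predicate(joints)
        if active and name not in held:
            press_key(key); held.add(name)
        elif not active and name in held:
            release_key(key); held.discard(name)

held = set()
skeleton = {"head": (0.0, 1.7, 2.0), "torso": (0.0, 1.2, 2.2),
            "right_hand": (0.3, 1.9, 2.0)}
update_bindings(skeleton, held)   # both predicates hold here, so "w" and "space" are pressed
```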

FAAST is free to use and distribute for research and noncommercial purposes (for commercial uses, please contact us). If you use FAAST to support your research project, we request that any publications resulting from the use of this software include a reference to the toolkit (a tech report will be posted here within the next week or so). Additionally, please send us an email about your project, so we can compile a list of projects that use FAAST. This will help us pursue funding to maintain the software and add new functionality.

The preliminary version of FAAST is currently available for Windows only. We are currently preparing to release code as an open-source project. Additionally, we plan to develop a Linux port in the near future.

To download FAAST and for more information visit the MxR Lab website.

FlatWorld

The FlatWorld project is an immersive virtual reality environment in which users can walk and run freely among simulated rooms, buildings, and streets. FlatWorld uses a system of “digital flats,” large rear-projection screens that employ digital graphics to depict a room’s interior, a view to the outside, or a building’s exterior. By adding physical props to the flats, FlatWorld becomes a “mixed reality” environment where users interact with both the physical and virtual worlds seamlessly.

Digital flats can be readily assembled in any open space to simulate multiple situations in a variety of geographic locations. The scenes on the digital flats can represent a real world setting, such as a city in the Middle East, depicting actual city buildings in their correct locations.

The FlatWorld display system also integrates immersive audio and 4D sensory features with the visual displays. The FlatWorld Simulation Control Architecture software enables environment designers to synchronize sounds and effects with real-time graphics, as well as respond to user interactions with physical props (such as opening a door or window). Sensory immersion in a scene can further be augmented by strobe lighting and tactile floor speakers, which simulate the flashes and vibrations of explosions or a lightning storm. FlatWorld provides a genesis point for a training system that participants could easily walk through unencumbered by head mount displays, and incorporates life-size projected displays with physical props and real-time, 3D graphics.

The FlatWorld immersive system acts as an integration point wherein the advancing work of other ICT research projects such as ICT Natural Language, Virtual Humans and the ICT Graphics Lab can be incorporated. This work will assist the transition of ICT technologies to actual use in operational settings.

The JFETS installations at Fort Sill, Oklahoma leverage FlatWorld mixed reality technology to replicate real world conditions without the costs associated with live exercises. Two different multi-room configurations of the FlatWorld system are under development for use by the Office of Naval Research. These are the Infantry Immersive Trainer- Flatworld systems installed at Quantico, Virginia and at Camp Pendleton, California. Each system is custom designed to meet the needs of their respective facility.

Goals

  • Create a modular and transportable mixed reality environment that can simulate a variety of real-world locations
  • Exploit low cost, common off-the-shelf technology (COTS) whenever possible

Coach Mike

Download a PDF overview.

Summary
Coach Mike is the newest virtual human in Cahner’s Computer Place at the Boston Museum of Science (MOS). Installed in 2010, he was built to support visitors, particularly young people, in the quest to learn about science and have fun while they do it. A National Science Foundation-funded collaboration between the University of Southern California Institute for Creative Technologies and the Boston Museum of Science, Coach Mike “works” at the museum’s Robot Park exhibit teaching visitors how to program a robot. Not only can he guide people to get the robot turning, buzzing, singing, and more, but he is capable of describing how the exhibit actually works and creating specific challenges for guests to solve. He’s there to explain, encourage, and give help when needed.

Museum-goers quickly notice that Coach Mike reacts with humor and emotion to whatever they make the robot do. But he doesn’t just entertain visitors, he also helps them learn. A recent evaluation conducted by the Institute for Learning Innovation found that Coach Mike’s presence in Robot Park leads to more productive interactions with the exhibit: visitors wrote more computer programs and took on more challenges when they interacted with the virtual coach.

Background
Coach Mike was inspired by Professor Michael Horn at Northwestern University. Professor Horn created Robot Park so that museum visitors could have a fun and intuitive way to learn programming. He created the Tern programming language that uses special programming blocks with codes that allow visitors to construct programs. Tern is known as a tangible programming language meaning that you interact with it by touching, moving, and assembling the pieces.

Visitors to Robot Park who had the help of museum volunteers tended to stay longer and do more programming than those who did not have a guide. So, working with museum staff, ICT researchers built Coach Mike to simulate some of these interactions. This includes helping visitors right at the beginning by explaining what the buttons mean, how to assemble programs, and how the codes are recognized. Also, volunteers tend to suggest problems and give hints to visitors to help them along. Thus, Coach Mike has some of this same knowledge and is willing to deliver it to visitors when they want it.

Technologies
To allow Coach Mike to interact with visitors and monitor interactions with Robot Park, the existing exhibit was augmented with several new software components:

  1. Physical tracking: weight-sensitive mat, robot camera, help button
  2. Virtual Human system: animation, speech, lip syncing, art
  3. Pedagogical Manager: session manager, intelligent tutoring system

The Pedagogical Manager acts as the hub by monitoring physical inputs from the exhibit (including tested blocks and programs), triggering virtual human actions (i.e., speaking and animating), assessing user actions, and providing learning support. Coach Mike’s animations run on ICT’s SmartBody system and in the Gamebryo game engine. He speaks via synthesized speech.

Coach Mike uses the techniques of artificial intelligence (AI) to support visitors: he estimates their knowledge, can judge when programs are correct (or not), and is willing to give feedback and suggestions to visitors when they want it. Pedagogical decisions are driven by a rule-based cognitive model of coaching that models a frequently changing world state. Built to simulate the strategies of museum staff, the model encodes a variety of tutoring and motivation tactics to orient people to the exhibit, encourage them to try new things, suggest specific problems, and give knowledge-based feedback on their programs. A general aim is to balance the importance of exploration and play with the goal of giving feedback and guidance for specific challenges. Since the museum is a “free-choice” learning environment, visitors can walk away at any moment. Thus, Coach Mike’s help is always delivered in entertaining and encouraging ways that seek to maximize visitor engagement.
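
A minimal sketch of how such a rule-based coaching model might look, assuming a simple world state and a handful of invented rules (the actual Pedagogical Manager's rules and state are much richer than this):

    # Illustrative only: tiny production rules over a changing world state, in the
    # spirit of Coach Mike's coaching model. Rule names, state fields and tactics
    # are invented for this sketch.
    from typing import Callable, Dict, List, Optional, Tuple

    State = Dict[str, object]
    Rule = Tuple[str, Callable[[State], bool], str]  # (name, condition, tactic)

    RULES: List[Rule] = [
        ("orient_newcomer",
         lambda s: s["programs_run"] == 0 and s["seconds_idle"] > 20,
         "Explain what the programming blocks do."),
        ("praise_success",
         lambda s: s["last_program_correct"] is True,
         "Celebrate the result and suggest a harder challenge."),
        ("offer_hint",
         lambda s: s["last_program_correct"] is False and s["help_pressed"],
         "Give knowledge-based feedback on the failed program."),
    ]

    def choose_tactic(state: State) -> Optional[str]:
        """Return the tactic of the first rule whose condition matches the state."""
        for _name, condition, tactic in RULES:
            if condition(state):
                return tactic
        return None

    # Example: the visitor's last program failed and they pressed the help button.
    print(choose_tactic({"programs_run": 3, "seconds_idle": 5,
                         "last_program_correct": False, "help_pressed": True}))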

Out of the Box: USC Researchers Debut Smartphone 3-D Virtual Reality Viewer Made Out of Cardboard

Easy-to-assemble fold-out viewer transforms smartphones into portable immersive virtual reality systems. First 100 do-it-yourself VR viewers being distributed at IEEE Virtual Reality Conference in Orange County, March 4 – 8

Press Contact: Orli Belman
belman@ict.usc.edu

Smartphones have endless applications that keep users connected to the real world. But researchers from the Mixed Reality (MxR) Lab at the University of Southern California Institute for Creative Technologies are developing ways to use these ubiquitous devices to transport people to virtual worlds as well.

On Sunday, March 4, at the IEEE Virtual Reality Conference Workshop on Off-The-Shelf Virtual Reality, they began handing out hundreds of manila envelopes, each containing a FOV2GO, a portable fold-out iPhone and Android viewer that turns the smartphone screen into a 3-D virtual reality system. Downloadable software allows users to create their own virtual worlds or environments to display.

Developed with researchers from the USC School of Cinematic Arts, the folded paper device, constructed for less than five dollars, is one of a growing number of do-it-yourself projects that are decreasing the overhead and hardware required for fully immersive virtual reality experiences.  These low-cost, lightweight systems can be used to create portable virtual reality applications for training, education, health and fitness, entertainment and more.

“I am happy to be able to introduce the FOV2GO at the IEEE VR Conference,” said Mark Bolas, who heads up the MxR Lab at ICT and is also an associate professor in the interactive media division of the USC School of Cinematic Arts. “This conference has been the premier venue in the field of VR since its inception in 1999, and now, more than a decade later, we can put rendering, sensing, and display technologies in the hands – literally – of all participants. This kit enables exploration of all facets of virtual reality, from algorithm design to perceptual psychology to visual design. We are already seeing projects that use the FOV2GO to deepen the feeling of immersion and are excited to see what else people can create with this portable paper prototype.”

The FOV2GO was distributed at the ICT-hosted workshop on Off-The-Shelf Virtual Reality and will also be available at the ICT booth during the IEEE Virtual Reality Conference. Recipients can fold the viewer together, download a simple demo app and then slip their smartphone into the viewer for a portable immersive 3-D experience. Links will be provided to software libraries and packages to help develop immersive VR applications.

“We’re at a unique moment where all the components for creating fully immersive virtual worlds have suddenly become ubiquitous and cheap, often built into devices that we already have in our pockets and on our desktops,” said Perry Hoberman, a research associate professor at the USC School of Cinematic Arts who developed the software that allows the viewer to display stereo images using the Unity game engine. “All that’s been lacking is a kit that puts all the parts together. That’s what we’ve tried to do with FOV2GO. Our hope is that by making these tools available to artists and designers everywhere, we’ll see VR develop to its true potential as an artistic medium.”
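
The principle behind a phone-based stereo viewer like FOV2GO can be summarized in a few lines: render the scene from two camera positions separated by the viewer's interpupillary distance and show the two images side by side behind lenses. The sketch below computes the per-eye camera positions; it is a generic illustration of the idea, not the FOV2GO software or its Unity integration.

    # Generic stereo-camera math for a side-by-side smartphone viewer. Values and
    # function names are illustrative; this is not the actual FOV2GO code.
    from typing import Tuple

    Vec3 = Tuple[float, float, float]

    def eye_positions(head: Vec3, right_dir: Vec3, ipd_m: float = 0.064) -> Tuple[Vec3, Vec3]:
        """Offset the head position by half the interpupillary distance per eye."""
        half = ipd_m / 2.0
        left = tuple(h - half * r for h, r in zip(head, right_dir))
        right = tuple(h + half * r for h, r in zip(head, right_dir))
        return left, right

    # Example: head 1.6 m up at the origin, with +X as the "right" direction.
    left_eye, right_eye = eye_positions((0.0, 1.6, 0.0), (1.0, 0.0, 0.0))
    print(left_eye, right_eye)   # (-0.032, 1.6, 0.0) (0.032, 1.6, 0.0)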

The project and weekend workshop were inspired by the late Randy Pausch’s 1991 paper, “Virtual Reality on Five Dollars a Day,” in which he described a virtual reality system costing $5,000 – substantially less expensive than much of the immersive technology being used at the time. More than twenty years later, inexpensive sensing and 3-D display technology is a reality. Other devices demonstrated at the workshop included depth cameras for hand and touch interaction and vibrotactile gloves for virtual exploration.

The workshop keynote address by Evan Suma, a postdoctoral researcher in the MxR Lab, covered tools for using natural gestures and movements as opposed to traditional keyboard and mouse input on a computer. Examples of the lab’s work include FAAST, the Flexible Action and Articulated Skeleton Toolkit, and its popular videos on using gestures to power World of Warcraft, Second Life and Gmail Motion’s April Fools prank.

David Krum, co-director of the Mixed Reality Lab, will also present a demonstration of Virtual Reality To Go at the IEEE conference.

Other ICT presentations include:

  • Impossible Spaces: Maximizing natural walking in virtual environments with self-overlapping architecture (Evan A. Suma, Zachary Lipps, Samantha Finkelstein, David M. Krum, Mark Bolas)
  • A taxonomy for deploying redirection techniques in immersive virtual environments (Evan A. Suma, Gerd Bruder, Frank Steinicke, David M. Krum, Mark Bolas)
  • Unobtrusive measurement of subtle nonverbal behaviors with the Microsoft Kinect (Nathan Burba, Mark Bolas, David M. Krum, Evan A. Suma)
  • Immersive training games for smartphone-based head mounted displays (Perry Hoberman, David M. Krum, Evan A. Suma, Mark Bolas)
  • Spatial misregistration of virtual human audio: Implications of the precedence effect (David M. Krum, Evan A. Suma, Mark Bolas)

Presentations at the Workshop on Off-The-Shelf Virtual Reality:
SHAYD: Juli Griffo and James Illiff
THE MINUS LAB: Tales from the Minus Lab, Alex Beachum, Sarah Scialli, Steve Wenzke, David Young, Robyn Tong Gray.

Skip Rizzo Wins Satava Award

Skip Rizzo received the 17th annual Satava Award at the recent Medicine Meets Virtual Reality Next Med Conference (MMVR19) in Orange County, Calif. Rizzo was given the award in recognition of his understanding and realization of the potential of virtual reality as a therapeutic tool to help patients overcome mental and physical disabilities. Established in 1995 to acknowledge the contribution of Dr. Richard M. Satava, the award is presented to an individual or research group demonstrating unique vision and commitment to the improvement of medicine through advanced technology. For more on the conference and award click here.

Distribution Management Cognitive Training Initiative (DMCTI)

Download a PDF overview of ICT’s immersive and cognitive training aids.

Designed to assist Soldiers and instructors, the DMCTI application is built on a solid educational design, incorporating lessons learned from the ICT’s significant experience in developing game-based cognitive training aids.

The DMCTI prototype application trains U.S. Army logistical planners and supports the understanding of the Army distribution management process. It promotes the development of strategies for best exploiting the capabilities of logistics management systems, including the Army’s recognized logistics command and control tool, Battle Command Sustainment Support System (BCS3).

Lessons are available in three levels of difficulty: beginner, intermediate, and advanced. Beginner lessons include missions with few tasks and a significant amount of in-game feedback, whereas advanced lessons include missions with many tasks and little to no guidance along the way. A post-exercise review provides students with an evaluation as well as a representation of how their performance compares to experts in the field.

The target audience for the DMCTI application is the Sustainment Brigade Support Operations Officer who must request information from BCS3 operators in order to carry out a given mission. However, the overall design of the DMCTI training aid may be leveraged to support distribution management training to multiple levels of US Army command and staff positions.

For this project, ICT worked with the Product Manager, Battle Command Sustainment Support System (PM BCS3) and the US Army Simulation and Training Technology Center (STTC).

Facts and Figures
DMCTI was the recipient of the 2008 Army Modeling and Simulation Award for Army-wide team training.

It was transitioned by the government for use in training.

External Collaborators

  • PdM BCS3
  • STTC
  • Quicksilver Software, Inc.
  • Stranger Entertainment

Digital Emily

In the Digital Emily project, Image Metrics and the University of Southern California’s Institute for Creative Technologies (USC ICT) animated a digital face using new results in 3D facial capture, character modeling, animation, and rendering. The project aimed to cross the “uncanny valley” that divides a synthetic-looking face from a real, animated, expressive person. The key technologies included a fast high-resolution digital face-scanning process using USC ICT’s Light Stage capture system and Image Metrics’ video-based facial-animation system. The project generated one of the first photorealistic digital faces to speak and emote convincingly in a medium close-up.

Learn more about Digital Emily on the ICT Vision and Graphics Lab webpage.

Cultural and Cognitive Combat Immersive Trainer (C3IT)

C3IT-D places soldiers in highly realistic, critical decision-making situations that require cultural awareness in order to make the best judgments. The proof-of-concept trainer employs ICT technologies to move beyond tactical training situations through culturally and cognitively immersive content.

The objective is to portray a one-on-one exchange between the trainee and an interactive character wherein cultural awareness is of primary importance. The trainee engages with interactive characters projected at full human scale in real-time graphics on digital flat displays using a natural language speech recognition interface.

ICT worked with subject matter experts to identify scenarios for the demonstration that related directly to specific training tasks taught at Ft. Benning and other Army training centers. One such scenario presents investigative questioning after an IED explosion in a public market; it was temporarily installed and demonstrated at Ft. Benning in December 2006.

Combat Hunter Action and Observation Simulation (CHAOS)

Download a PDF overview.

The Combat Hunter Action and Observation Simulation (CHAOS) is part of ICT’s work on the Future Immersive Training Environment (FITE) Joint Capabilities Technology Demonstration (JCTD) and its goal of developing next-generation training for infantry small units. Located at the Infantry Immersion Trainer (IIT) at Camp Pendleton, CHAOS demonstrates advanced capabilities in immersive training for a mixed reality environment.

CHAOS incorporates mixed reality and immersive techniques, including the ability to interact with virtual characters and use of a storyline to drive participants towards specific choreographed experiences.

ICT developed a multi-room installation for the CHAOS environment that represents a house compound in Helmand Province, Afghanistan. The interior and exterior settings include a mix of real and virtual elements. ICT also developed virtual characters that can interact with the infantry squad and each other in one area of the compound. The virtual characters are part of a scenario that requires the squad to apply the techniques of tactical questioning, information gathering and acute observation in order to be successful in the CHAOS mission.

One of the key issues dealt with is decision-making under stress and chaos on the battlefield, when squads must be prepared for both lethal and non-lethal engagements. Instead of dictating right and wrong times to use lethal vs. non-lethal tactics, techniques and procedures (TTPs), squads are asked to apply good judgment about rules of engagement, escalation of force, and shoot/no-shoot choices depending on the situation. Key to the learning experience is the after action review (AAR) conducted outside of the CHAOS environment. The AAR is designed to help the squads understand their performance and give them confidence for future missions.

ICT collaborated with military subject matter experts, government personnel, training developers, consultants and contractors to develop the FITE scenarios, including the CHAOS scenario, which runs 5-10 minutes in length and is replayable to allow for multiple paths through the scenario. The scenario is designed to be set up in the IIT beforehand and continue in the IIT afterwards, as appropriate.

Goals
On the training side, CHAOS must:

  • Address training objectives
  • Be relevant to operations
  • Be emotionally compelling
  • Allow for “Handler”/white cell control
  • Synch with the larger IIT experience

On the technical side, CHAOS must:

  • Be as stable as possible
  • Be as rugged as possible
  • Operate in a noisy (sound) environment
  • Handle more than one real person in the environment
  • Support background “dumb” characters

Facts and Figures
CHAOS is a milestone in virtual human development for ICT. The project features ICT’s first non-English speaking interactive virtual human.

Cognitive Air Defense Training System (CAD-TS) Engagement Control Station Simulation (ECS2)

Download a PDF overview of ICT’s immersive and cognitive training aids.

The CAD-TS ECS2 was a collaboration between the Institute for Creative Technologies (ICT) and the United States Army Air Defense Artillery School (USAADASCH). The CAD-TS ECS2 is able to accommodate up to 64 soldiers per training session through the combination of immersive simulation and digital media-enhanced classroom instruction.

The CAD-TS ECS2 leverages ICT’s expertise in immersive media techniques and learning sciences.

Through a unique means of bridging the gap between the 2D scope and a 3D visualization of the airspace, the CAD-TS ECS2 gives soldiers a tool to aid their preparation for operating the U.S. Army’s Patriot air defense system.
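
One simple way to picture the 2D-to-3D bridge described above is converting a scope track's range and azimuth, together with its reported altitude, into 3D coordinates for visualization. The sketch below is a generic, hypothetical illustration of that idea (treating scope range as ground range for simplicity), not the CAD-TS ECS2 implementation.

    # Hypothetical illustration: map a 2D scope track (range, azimuth, altitude)
    # to east/north/up coordinates suitable for a 3D airspace view.
    import math
    from typing import Tuple

    def scope_to_3d(range_km: float, azimuth_deg: float, altitude_km: float) -> Tuple[float, float, float]:
        """Azimuth is measured clockwise from north; range is treated as ground range."""
        az = math.radians(azimuth_deg)
        east = range_km * math.sin(az)
        north = range_km * math.cos(az)
        return east, north, altitude_km

    # Example: a track 40 km out on a bearing of 090 (due east) at 6 km altitude.
    print(scope_to_3d(40.0, 90.0, 6.0))   # roughly (40.0, 0.0, 6.0)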

This cognitive training aid is designed to test soldiers’ ability to develop and execute courses of action, and to exercise their ability to understand the consequences of those actions based on an adequate awareness of what is happening in the operational environment.

Facts and Figures

  • Successfully transitioned to the Army in 2010

Bilateral Negotiation Trainer (BiLAT)

Download a PDF overview.

BiLAT is a portable, PC-based training program designed with a specific objective in mind: to provide students with an immersive and compelling training environment in which to practice their skills in conducting meetings and negotiations in a specific cultural context. The application was a winner of the U.S. Army Modeling and Simulation Awards for FY08 and has been deployed as part of a training curriculum for officers assigned to foreign posts. BiLAT is available for download from the U.S. Army’s MilGaming website.

In BiLAT, students assume the role of a U.S. Army officer who needs to conduct a series of bilateral engagements, or meetings, with local leaders to achieve the mission objectives. In one campaign, the student is tasked with understanding why a U.S.-built marketplace is not being used. The student must gather information on the social relationships among the characters in the scenario.

Students must also establish their own relationships with these characters and be sensitive to the characters’ cultural conventions. Any misstep could set the negotiations back or end them completely. Students must also apply sound negotiation strategies, such as finding win-win solutions and preparing properly before the meeting.

The BiLAT social “simulation” was developed through a collaborative, multi-disciplinary approach. USC’s Game Innovation Lab was involved in the game design as well as creating a compelling set of scenarios with realistic characters that would be appropriate for the training objectives identified.

To represent and model the social and cultural elements, the BiLAT infrastructure uses research technologies including a dialogue manager, SmartBody animation technology and the PsychSim social simulation system from ICT’s virtual human research project, as well as an intelligent coach and tutor to provide the student with run-time coaching and in-depth feedback during after action reviews.
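
A very rough sketch of how such components could be chained for a single negotiation turn appears below: a dialogue-manager stand-in interprets the student's utterance, a social-simulation stand-in updates the character's disposition, and a coach stand-in decides whether to offer a tip. Everything here (function names, scoring, messages) is invented for illustration and does not represent BiLAT's actual components.

    # Illustrative pipeline for one negotiation turn; not BiLAT's real architecture.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TurnResult:
        character_reply: str
        trust_delta: float
        coach_tip: Optional[str]

    def negotiation_turn(utterance: str) -> TurnResult:
        """Crude stand-ins for dialogue management, social simulation and coaching."""
        polite = any(w in utterance.lower() for w in ("please", "thank", "respect"))
        delta = 0.1 if polite else -0.1                       # social-simulation stand-in
        reply = ("I appreciate your respect." if polite
                 else "You are being too direct with me.")    # dialogue-manager stand-in
        tip = None if polite else "Build rapport before making demands."  # coach stand-in
        return TurnResult(reply, delta, tip)

    print(negotiation_turn("We demand that you reopen the marketplace immediately."))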

Authoring tools were developed to support the content workflow. There has also been an effort to create a new scenario in the BiLAT framework within 12 weeks. BiLAT AIDE is a complementary web-based course created with the USC Rossier School of Education. The course further enhances learning by providing instruction on the theories behind the practice of negotiation and cultural understanding. BiLAT was a part of the Learning with Adaptive Simulation and Training (LAST) Army Technology Objective (ATO). The project was a collaboration between the University of Southern California’s Institute for Creative Technologies (ICT), U.S. Army Research Institute for the Behavioral and Social Sciences (ARI), U.S. Army Research Laboratory Human Research and Engineering Directorate (ARL-HRED) and U.S. Army Simulation and Training Technology Center (STTC).

Facts and Figures

  • Currently deployed at over 60 battle simulation and training centers throughout the Army.
  • The application was a winner of the Army Modeling and Simulation Awards for FY08.
  • The system has been formally transitioned to the PEO STRI and the Army’s Games for Training program and can be downloaded from the Army’s MilGaming website.

Goals

  • Develop new tools, metrics, and methods that enable training developers to rapidly create effective, interactive, virtual training simulations.
  • Reduce scenario development time/costs while using realistic stories from the field.
  • Provide a better understanding of the soldier/student learner.

Army Excellence in Leadership

Download a PDF overview.

The Army Excellence in Leadership (AXL) project focuses on accelerating leadership development. AXL provides an engaging and memorable way to transfer tacit knowledge and develop critical thinking through case-method teaching, filmed storytelling and interactive training. The primary products are filmed cases, created in collaboration with Hollywood talent to address specific leadership issues, and an easily modifiable website, AXLnet.

Filmed Cases
ICT has developed five filmed leadership cases addressing complex decision-making skills for the U.S. Army. The films, all based on real-life situations, were brought to life by experienced Hollywood screenwriters and professional Hollywood actors. The first case, Power Hungry, a 13-minute film set against the backdrop of a food distribution operation in Afghanistan, addresses lessons on how to think like a commander. Trip Wire uses the leadership challenges posed by the threat of IEDs in Iraq to consider the balance between force presence and mission accomplishment, and Red Tight addresses interpreting threat levels in a Patriot battery operation. Working with the U.S. Army Chaplaincy, ICT developed Fallen Eagle, a two-part film series for squad-level training told from the perspectives of both enlisted and officer ranks. It focuses on moral and ethical decision-making on the battlefield.

AXLnet
AXLnet provides a dynamic and interactive experience for students and easy-to-use tools for instructors to author customized lessons. Students can log on to view and analyze cases, including but not limited to the AXL films that ICT has developed, and even ask questions of the characters from the films. The system draws on ICT’s research in natural language processing to allow students to interview characters from the cases through free-text questions. The system can also provide feedback and tailor the learning experience based on student responses.
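
As a toy illustration of free-text question answering of the kind AXLnet supports (the real system's natural language processing is considerably more sophisticated), a student question could be matched against a bank of authored question-answer pairs by word overlap. Every name, question and answer below is invented.

    # Toy keyword-overlap matcher; not AXLnet's actual NLP.
    from typing import List, Tuple

    ANSWER_BANK: List[Tuple[str, str]] = [
        ("why did you delay the food distribution",
         "I was waiting on guidance from battalion."),
        ("what would you do differently next time",
         "I would have coordinated with the village elders earlier."),
    ]

    def answer(question: str) -> str:
        """Return the authored answer whose key question shares the most words."""
        words = set(question.lower().replace("?", "").split())
        scored = [(len(words & set(key.split())), text) for key, text in ANSWER_BANK]
        best_score, best_text = max(scored)
        return best_text if best_score > 0 else "I'm not sure I understand the question."

    print(answer("What would you change if you did it again?"))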

Facts and Figures

  • 10,000+ Soldiers trained.
  • AXL videos are shown to West Point Cadets annually as part of their leadership training.

Additional Uses
ICT has an ongoing collaboration with the USC Marshall School of Business, where AXL is being used to help address business-oriented leadership and ethical issues.

Goals

  • Accelerate the development of U.S. Army leaders who need strong team, interpersonal and critical thinking skills, and cultural awareness and adaptability in unfamiliar situations
  • Focus on acquisition of leadership skills
  • Create “blended learning” applications for use in self-directed leader development and classroom instruction

External Collaborators

SimCoach

Download a PDF overview.

In response to health care challenges that the conflicts in Iraq and Afghanistan have placed on the population of service members and their families, the U.S. Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury (DCoE) have funded development of an intelligent, interactive, online virtual human healthcare guide program currently referred to as SimCoach.

The SimCoach project is developing virtual human support agents to serve as online guides for assisting military personnel and family members in breaking down barriers to initiating the healthcare process. The SimCoach experience is designed to attract and engage military Service Members, Veterans, and their significant others who might not otherwise seek help. It aims to motivate users to take the first step and seek information and advice with regard to their healthcare (e.g., psychological health, traumatic brain injury, addiction, etc.) and their general personal welfare (i.e., other non-medical stressors such as economic or transition issues). If determined necessary, SimCoach will also encourage them to take the next step toward seeking the more traditional resources available. The SimCoach project is not conceived to deliver diagnosis or treatment, or as a replacement for human providers and experts.

SimCoach will allow users to engage in a dialog about their healthcare concerns with an interactive virtual human, selected from a variety of character options. He or she will answer direct questions and guide the user through a sequence of user-specific exercises and assessments. These characters are designed to solicit basic anonymous background information about the user’s history and clinical/psychosocial concerns. With this information they can provide advice and support, direct the user to relevant online content, and potentially facilitate the process of seeking appropriate care with a live clinical provider.
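
A sketch of the kind of guided flow this describes, assuming invented intake questions, thresholds and recommendations (this is not SimCoach's actual logic, and the numbers carry no clinical meaning):

    # Hypothetical intake-and-routing flow in the spirit of SimCoach; all content
    # and thresholds are invented for illustration.
    from typing import Dict

    def recommend(intake_scores: Dict[str, int]) -> str:
        """Map anonymous self-reported scores to a suggested next step."""
        concern = sum(intake_scores.values())
        if concern >= 10:
            return "Encourage connecting with a live provider."
        if concern >= 4:
            return "Offer guided exercises and relevant online content."
        return "Provide general wellness information and invite the user back."

    # Example: self-reported days per week with sleep trouble and feeling on edge.
    print(recommend({"sleep_trouble": 5, "on_edge": 6}))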

Facts and Figures

  • SimCoach is ICT’s first web-based interactive virtual human.
  • SimCoach has been incorporated into the Braveheart website (braveheartveterans.org), a veteran support initiative of the Atlanta Braves and Emory University.

Light Stages

Download a PDF overview.

Plastic-looking characters that do not fully blend with their surroundings can be a distraction rather than an enhancement in virtual environments. Convincing virtual characters are needed in training, education and entertainment. The Light Stage systems and their development at ICT are also examples of the collaboration between academia, the Army and the entertainment industry that was imagined when ICT was established in 1999.

Virtual characters — digitally generated humans that speak, move and think — are a core component of ICT’s training and education systems. Making these characters look realistic, as well as lighting them convincingly, is a central goal of ICT’s Vision and Graphics Laboratory, beginning with the Light Stage 2 and continuing through today with the latest Light Stage X (2011).

The Light Stages have been used to help create ever more realistic virtual characters for ICT training and education projects, and by studios such as Sony Pictures Imageworks, WETA Digital and Digital Domain to create photoreal digital actors as part of the Academy Award-winning visual effects in Spider-Man 2, King Kong, The Curious Case of Benjamin Button and Avatar, as well as other Hollywood blockbusters.

In 2008, Light Stage V was used for the Digital Emily Project, a collaboration with the digital animation company Image Metrics, which produced one of the first digital facial performances to cross the “uncanny valley,” meaning it created a completely convincing virtual character.

In 2010 Paul Debevec, ICT’s Associate Director for Graphics Research and the co-developer of the Light Stage systems, received an Academy of Motion Picture Arts and Sciences Scientific and Engineering Award (Academy Award) for the design and engineering of his Light Stage technologies. The award recognized more than ten years of research, development and application of technologies designed to help achieve the goal of photoreal digital actors who can appear in any lighting condition. It was presented to Debevec and his colleagues, Tim Hawkins of LightStage LLC, John Monos of Sony Pictures Imageworks, and Mark Sagar of WETA Digital, who co-developed the system with him.

Immersive Naval Officer Training System (INOTS)

Download a PDF overview.

The Immersive Naval Officer Training System (INOTS) targets leadership and basic counseling for junior leaders in the U.S. Navy. The INOTS experience incorporates a virtual human, classroom response technology and real-time data tracking tools to support the instruction, practice and assessment of interpersonal communication skills.

While the Navy recognizes that communication skills are important, junior leaders often receive little or no opportunity to practice important interpersonal skills prior to deployment. If they do receive practice, live role-play sessions may be used. In an effort to provide a structured framework for teaching and practicing communication skills, INOTS replaces one human role-player with a life-sized virtual human. The virtual human component addresses the issues inherent in live role-play practice sessions, which cannot be easily standardized, tracked and assessed following the interaction.

After receiving up-front instruction and example demonstrations on basic strategies for helping a subordinate with a performance problem or a personal issue, one student from a class of 50 is selected to speak to the virtual human, and the rest of the students participate by selecting the option they would choose using remote-controlled clickers. INOTS’ instructional framework, interactive response system and visual data tracking are engaging tools to facilitate practice and class discussion, including an instructor-led after action review, before junior leaders reach their first assignments. Ultimately, the INOTS experience provides several interactive case studies and a framework for learning how to employ interpersonal skills related to basic counseling.
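
A small sketch of the classroom-response side of such a setup, assuming invented option labels and data structures (not the actual INOTS software):

    # Illustrative tally of clicker responses for the instructor-led review.
    from collections import Counter
    from typing import Dict, List

    def response_shares(responses: List[str]) -> Dict[str, float]:
        """Return each option's share of the class response as a percentage."""
        counts = Counter(responses)
        total = sum(counts.values()) or 1
        return {option: 100.0 * n / total for option, n in counts.items()}

    # Example: 50 students choose among three counseling approaches A, B and C.
    clicks = ["A"] * 28 + ["B"] * 15 + ["C"] * 7
    print(response_shares(clicks))   # {'A': 56.0, 'B': 30.0, 'C': 14.0}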

INOTS is a partnership with the Office of Naval Research (ONR), Naval Service Training Command (NSTC) and Officer Training Command Newport (OTCN). INOTS was installed at OTCN in August 2011 with a focus to supplement the Division Officer Leadership Course (DOLC) in support of the Officer Candidate School (OCS) and Officer Development School (ODS). In July 2013, INOTS was augmented with additional scenarios for use in the Limited Duty Officer/Chief Warrant Officer (LDO/CWO) Academy’s leadership curriculum.

ICT developed a similar Army prototype, the Emergent Leader Immersive Training Environment (ELITE), which is in use at the Maneuver Center of Excellence (MCoE) at Fort Benning, Georgia. ELITE’s technology and instructional approach have been leveraged by the USC School of Social Work’s Center for Innovation and Research on Veterans & Military Families (CIR) to teach Motivational Interviewing through the Motivational Interviewing Learning Environment and Simulation (MILES) effort.

Facts and Figures

  • INOTS has trained over 15,000 Sailors at OTCN since 2012.