Watch YouTube Video Showcasing ICT Tool to Play World of Warcraft with Microsoft Kinect

A video featuring FAAST, a new toolkit that facilitates integration of full-body control with games and VR applications, has been widely viewed on YouTube. The video shows FAAST creator Evan Suma, a postdoctoral researcher in the ICT Mixed Reality Lab, playing World of Warcraft using Microsoft Kinect. With the addition of FAAST, he is able to control game play using his body movements rather than a joystick, keyboard or mouse. Skip Rizzo, ICT’s associate director of medical virtual reality, speaks in the video about the implications of this work.

“I think the real compelling aspect of all this is that you can now take off-the-shelf games, content that’s already built, and emulate the keyboard actions with body movement,” Rizzo said. “This opens up the doorway for building rehabilitation exercises for people after a stroke or traumatic brain injury. And in an area that’s getting a lot of attention, the area of childhood obesity and diabetes.”

And perhaps equally important was Suma’s observation after using his new tool.

“Gameplay was surprisingly fun,” he said.

LA Times, Slashdot, Intl Business Times Cover ICT’s World of Warcraft Video & Tool for Games/VR

FAAST, a new toolkit that facilitates integration of full-body control with games and VR applications, was covered in the Los Angeles Times technology blog, on the tech blog Slashdot and in the International Business Times. A widely viewed YouTube video shows FAAST creator Evan Suma, a postdoctoral researcher in the ICT Mixed Reality Lab, playing World of Warcraft using Microsoft Kinect. With the addition of FAAST, he is able to control game play using his body movements rather than a joystick, keyboard or mouse.

Skip Rizzo, ICT’s associate director of medical virtual reality, speaks in the video about the implications of this work.

“I think the real compelling aspect of all this is that you can now take off-the-shelf games, content that’s already built, and emulate the keyboard actions with body movement,” Rizzo said. “This opens up the doorway for building rehabilitation exercises for people after a stroke or traumatic brain injury. And in an area that’s getting a lot of attention, the area of childhood obesity and diabetes.”

And perhaps equally important was Suma’s observation after using his new tool.

“Gameplay was surprisingly fun,” he said.

Read the LA Times coverage here.

Read Slashdot’s coverage here.

Read the International Business Times story here.

ICT and USC School of Social Work To Demo Virtual Reality Educational Technologies at CES

FOR IMMEDIATE RELEASE

CONTACT:
Dotty Diemer, DDK Communications
562.212.6014, dotty@ddk-communications.com

Cindy Monticue, USC School of Social Work
213.740.2021, monticue@usc.edu

Orli Belman, USC Institute for Creative Technologies
310.709.4156, belman@ict.usc.edu

USC RAISES THE BAR ON USE OF REVOLUTIONARY VIRTUAL TECHNOLOGY TO ADDRESS THE MENTAL HEALTH NEEDS OF VETERANS

Video Game-Inspired Virtual Iraq and Patient Avatar to be Demonstrated Live at Consumer Electronics Show

LOS ANGELES– December 15, 2010 – Real-life application of revolutionary virtual technology to train and teach in the classroom will be the focus of the University of Southern California’s display and presentations at the Consumer Electronics Show (CES) Jan. 6-10 at the Las Vegas Convention Center.

The USC Institute for Creative Technologies (ICT), in partnership with the USC School of Social Work, will be demonstrating its Virtual Patient Avatar and its video game-inspired Virtual Iraq technologies, created by ICT and currently being used by the USC School of Social Work to train and prepare social workers to help address the mental health needs of soldiers returning from war.

“Using virtual technology to train students was not something even on the radar 10 years ago,” said Paul Maiden, vice dean of the USC School of Social Work. “With so many soldiers returning from Iraq and Afghanistan with post-traumatic stress disorder (PTSD) and other problems, we are implementing this technology in the classroom to realistically and more practically train thousands of students on how to treat veterans with these types of mental health needs.”

Entering beta testing in School of Social Work classrooms this year, the program is the first application of virtual reality in a social work setting.

“ICT’s virtual reality technology allows humans to interact with computers and extremely complex data in a more natural fashion,” said Dr. Skip Rizzo, ICT’s associate director for medical virtual reality. “Through our research and development, we’ve been able to take technologies once considered expensive toys and functionally apply them to address real-world problems and situations.”

ICT and the USC School of Social Work will be presenting the Virtual Iraq and Virtual Patient technologies at the High Tech U Panel, Thursday, Jan. 6, 12:45 p.m. – 2 p.m. in room N253 of the North Hall, as well as the Living in Digital Times Press Conference at 2 p.m. – 2:15 p.m. in room S227 of the South Hall, Upper Level that same day. From Jan. 6 until the show’s closing, attendees at the show will be able to personally experience and test the technologies at the Higher Ed Tech Pavilion located in the North Hall, Grand Lobby, Lower Meeting Room at booth number 3206.

The demonstrations will highlight:

– Virtual Iraq – This virtual, immersive environment recreates the war zone where trauma was experienced, allowing patients to work through their fears. Developed by ICT’s Rizzo, this simulated environment is currently being used by the USC School of Social Work to train students on treating patients with PTSD and other disorders. Virtual Iraq, which was adapted from the Full Spectrum Warrior video game, allows guests to experience the simulation first-hand using its virtual reality goggles, gun-shaped joystick, scent machines and vibrating platform. Click here to view an excerpt.

– Virtual Patient Avatar – The virtual patient is an avatar-based simulation program designed to replicate the experiences of veterans exposed to combat stress and to help prepare students to interact with real clients. Through conversations with digital avatars, created using ICT’s virtual human technology, students hone their clinical and interviewing skills to prepare them for future interactions with soldiers returning from war. A collaboration between the USC School of Social Work’s Center for Innovation and Research on Veterans and Military Families and ICT, the virtual patient avatar program is supported by a $3.2 million grant from the U.S. Department of Defense. CES attendees will be able to interview and interact with virtual patient Lieutenant Rocco, just as social work students will do as part of their training at USC. Click here to view an interview excerpt with a virtual patient.

The virtual patient avatar is also expected to be made available in the future to USC School of Social Work graduate students via its Virtual Academic Center, a web-based master’s degree program launched in October 2010 that uses a highly advanced web-based platform to conduct live, virtual classes between faculty and students.

About the USC School of Social Work
The University of Southern California’s School of Social Work (www.usc.edu/socialwork) ranks among the nation’s top 10 social work graduate programs (U.S. News & World Report), with the oldest social work master’s and Ph.D. programs in the West. A recognized leader in academic innovation, experiential learning, online education and translational research, the school prepares students for leadership roles in public and private organizations that serve individuals, families and communities in need. This is the only program in the nation offering a military social work curriculum track to prepare social workers to meet the needs of veterans and their families, supported by more than $14 million in appropriations from the U.S. Department of Defense. The school is also a campus exemplar for its research efforts, with funding exceeding $35 million. Our own research institute, the Hamovitch Center for Science in the Human Services, was the first endowed center for interdisciplinary social work research and remains a pioneer in translational science – the acceleration of research findings into practice settings.

About the USC Institute for Creative Technologies
At the University of Southern California Institute for Creative Technologies (ict.usc.edu), high-tech tools and classic storytelling come together to pioneer new ways to teach and to train. Historically, simulations focus on drills and mechanics. What sets ICT apart is a focus on human interactions and emotions—areas that are recognized as increasingly important in developing critical thinking and decision-making skills. ICT is a world leader in developing virtual humans who think and behave like real people and in creating immersive environments to experientially transport participants to other places. ICT technologies include virtual reality applications for mental health treatment and training, video games for U.S. soldiers to hone negotiation and cultural awareness skills and virtual human museum guides who teach science concepts to young people.

Abhijeet Ghosh, Tongbo Chen, Pieter Peers, Cyrus Wilson, Paul Debevec: “Circularly Polarized Spherical Illumination Reflectometry”

We present a novel method for surface reflectometry from a few observations of a scene under a single uniform spherical field of circularly polarized illumination. The method is based on a novel analysis of the Stokes reflectance field of circularly polarized spherical illumination and yields per-pixel estimates of diffuse albedo, specular albedo, index of refraction, and specular roughness of isotropic BRDFs. To infer these reflectance parameters, we measure the Stokes parameters of the reflected light at each pixel by taking four photographs of the scene, consisting of three photographs with differently oriented linear polarizers in front of the camera, and one additional photograph with a circular polarizer. The method only assumes knowledge of surface orientation, for which we make a few additional photometric measurements. We verify our method with three different lighting setups, ranging from specialized to off-the-shelf hardware, which project either discrete or continuous fields of spherical illumination. Our technique offers several benefits: it estimates a more detailed model of per-pixel surface reflectance parameters than previous work, it requires a relatively small number of measurements, it is applicable to a wide range of material types, and it is completely viewpoint independent.
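
The per-pixel Stokes measurement described above can be illustrated compactly. The sketch below is a minimal Python sketch: the function name, the assumption of linear-polarizer angles at 0, 45 and 90 degrees, and the image arrays are illustrative rather than taken from the paper. It applies the standard polarimetry relations for an ideal analyzer; the paper’s subsequent inference of albedo, index of refraction and roughness is not shown.

    import numpy as np

    def stokes_from_photos(i0, i45, i90, i_circ):
        """Recover per-pixel Stokes parameters from four photographs.

        i0, i45, i90 : images taken through a linear polarizer at 0, 45, 90 degrees
        i_circ       : image taken through a circular polarizer
        All inputs are float arrays of identical shape (H, W).
        """
        s0 = i0 + i90            # total intensity
        s1 = i0 - i90            # 0/90-degree linear polarization component
        s2 = 2.0 * i45 - s0      # 45/135-degree linear polarization component
        s3 = 2.0 * i_circ - s0   # circular polarization component
        return s0, s1, s2, s3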

ICT Virtual Museum Guides Featured in the Sacramento Bee

The Sacramento Bee featured Ada and Grace, two virtual humans ICT created in collaboration with the Boston Museum of Science. The characters were based on real-life model Bianca Rodriguez, whom focus groups selected to be the face of the virtual characters. “To be successful, the virtual character needs to appeal to wide audiences,” said William Swartout, ICT’s director of technology. The story noted that the process of turning Rodriguez into a virtual human began with a photo session in the ICT Graphics Lab’s Light Stage. The characters are now based at the Boston museum, where they answer visitor questions.

Read the full story here.

Army Times Covers ICT Games on Army Games Portal

An article featured the Army’s release of ICT-developed games UrbanSim and ELECT BiLAT on the Army’s gaming portal, MilGaming. The story states that Soldiers can expect to spend more time playing games as part of their training. ICT’s games address the arts of negotiation and battle command.

“Gaming is probably the most significant improvement to training in the past five years,” said Col. Anthony Krogh, director of the National Simulation Center, who also added that the Army is embracing the technology as leaders see benefits.

Read the full story here.

ICT’s Skip Rizzo Discusses Approaches to Dealing with Trauma Memories on National Public Radio

Skip Rizzo, ICT’s associate director for medical virtual reality, appeared on local NPR affiliate KPCC’s Patt Morrison show to discuss different approaches to helping people deal with trauma memories. Rizzo spoke of his work developing virtual reality exposure therapy to treat PTSD and also mentioned an ongoing clinical trial that combines virtual reality exposure therapy with a drug therapy. Also appearing on the program was Rick Huganir, professor and director of the department of neuroscience at Johns Hopkins University School of Medicine. Huganir is conducting research exploring whether removing certain proteins from the brain’s fear center could erase traumatic memories forever.

Listen to the program here.

AOL News Features Virtual Afghanistan Treatment for PTSD

Virtual Afghanistan, a virtual reality exposure therapy developed at ICT for the treatment of soldiers suffering from PTSD, was featured in a story on AOL News. The therapy, an adaptation of ICT’s Virtual Iraq system, was demonstrated at the recent Army Science Conference in Orlando, Florida. The story describes the system and states that Virtual Afghanistan is expected to be even more realistic—and detailed—than the Iraq computer program. “We’ve got literally hundreds of stories people have told in therapy about where and what occurred to them and what happened to them,” Skip Rizzo, ICT’s associate director for medical virtual reality, told AOL News. “That’s stuff we didn’t have when we started.”

Read the full story here.

ICT’s Skip Rizzo and Virtual Reality Therapy Featured on Defense Talk

An Army News article covered a recent panel discussion on the latest treatments for PTSD at the 27th Army Science Conference; the panel included Skip Rizzo, ICT’s associate director for medical virtual reality. “VR is a way for humans to interact with computers and extremely complex data in a more naturalistic fashion,” said Rizzo. “Over the last 15 years, we’ve seen VR go from the realm of expensive toy into that of functional technology. We can design virtual environments to assess and rehabilitate. Since 1994, we’ve seen pretty dramatic growth in VR technology.”

Read the full story here.

Jacki Morie, John Galen Buckwalter: “AXLNet Mobile: Using a Mobile Device for Leadership Training”

The AXLNet Mobile project builds upon the web-based leadership classroom training application, AXLNet, originally created by ICT and ARI in 2006. We ported both the functionality and pedagogical approach of AXLNet to an Apple iPhone in 2008-2009, and tested the resulting application for usability with a population of 46 Captains and 1st Lieutenants at Ft. Leonard Wood. Initial findings indicate supplemental training on the mobile device is an acceptable and possibly more engaging delivery method for today’s troops. Each AXLNet module is built around a particular theme and pedagogical goal. Although AXLNet has several of these modules available, as well as a powerful tool that allows educators to author new modules, recreating the entirety of this content and functionality was beyond the scope of our project. Instead, we focused on porting two of the modules, “Power Hungry” and “Tripwire,” to the mobile platform. In adapting these modules to mobile, we aimed to maintain the original content and pedagogical approach as closely as possible.

Timothy Wansbury, John Hart, Andrew Gordon, Jeff Wilkinson: “UrbanSim: Training Adaptable Leaders in the Art of Battle Command”

Antonio Roque, Kallirroi Georgila, Ron Artstein, Kenji Sagae, David Traum: “Natural language processing for joint fire observer training”

We describe recent research to enhance a training system which interprets Call for Fire (CFF) radio artillery requests. The research explores the feasibility of extending the system to also understand calls for Close Air Support (CAS). This work includes automated analysis of complex language behavior in CAS missions, evaluation of speech recognition performance, and simulation of speech recognition errors.

Matthew Jensen Hays, Teri Silva, Todd Richmond: “Rapid Development of a Mixed-Media, Deployable Counter-IED Trainer”

In February 2009, the Institute for Creative Technologies (ICT) at the University of Southern California was contracted to develop a training system for Soldiers and Marines. The goal of the training system was to reduce the number of casualties caused by improvised explosive devices (IEDs). Over the next five months, a team at the ICT responded by drawing on subject-matter experts’ input, findings from cognitive psychology, and cinema-style script-writing. The result is the Mobile Counter-IED Interactive Trainer (MCIT), a narrative-focused, mixed-media training simulator that can be deployed anywhere in the world. This paper details the course of MCIT’s development, from initial concept to working prototype to finalized training system. We also discuss the technical challenges of developing in Virtual Battlespace 2 (VBS2) and the changes driven by user feedback throughout the development process.

Patrick Kenny, Thomas Parsons, Pat Garrity: “Virtual Patients for Virtual Sick Call Medical Training”

Training military clinicians and physicians to treat Soldiers directly impacts their mental and physical health and may even affect their survival. Developing skills such as patient interviewing, interpersonal interaction and diagnosis can be difficult, and hands-on training is severely lacking due to the cost and availability of trained standardized patients. A solution to this problem is to use computer-generated virtual patient avatars that exhibit mentally and physiologically accurate symptoms of their particular illness: physical indicators such as sweating, blushing and breathing due to discomfort, along with conversational dialog that matches the disorder. These avatars are highly interactive, with speech recognition, natural language understanding, non-verbal behavior, facial expressions and conversational skills. This paper will discuss the research, technology and value of developing virtual patients. Previous work will be reviewed, along with issues behind creating virtual characters and scenarios for the joint forces. The paper will then focus on subject testing being conducted with a Navy scenario at the Navy Independent Duty Corpsman (IDC) School at the Navy Medical Center in San Diego. The protocol involves pre- and post-tests with a 15-minute interview of the virtual patient. Analysis of the data will yield results on user interactions with the patient and inform how the system can be used for training and future deployment for medical professionals. The Virtual Sick Call Project under the Joint Medical Simulation Technology Integrated Product Team (JMST IPT) seeks to push the state of the art in developing high-fidelity virtual patients that enable caregivers to improve interpersonal skills for scenarios that require not only medical experience but also the ability to relate at an interpersonal level, with interviewing and diagnosis skills, as patients can be hiding symptoms of post-traumatic stress disorder, suicide and domestic violence.

Paul S. Rosenbloom: “Implementing First-Order Variables in a Graphical Cognitive Architecture”

Graphical cognitive architectures implement their functionality through localized message passing among computationally limited nodes. First-order variables, particularly universally quantified ones, while critical for some potential architectural mechanisms, can be quite difficult to implement in such architectures. A new implementation strategy based on message decomposition in graphical models is presented that yields tractability while preserving a priori symmetries in the graphs concerning how variables are represented and how symbols, probabilities and signals are processed.

ICT on Engadget

An article about ICT research was featured on Engadget today. The article focused on ICT’s history, its move to Playa Vista and the innovative work ICT produces. An excerpt from the article reads, “Funded by the US Army to develop virtual reality technology, the ICT’s work is now found on 65 military sites across the country. Before your brain starts wandering towards thoughts of Call of Duty on military-grade steroids though, keep in mind that much of the institute’s innovations revolve around simulating surrogate interactions with so-called ‘virtual humans’. For example, thanks to advanced AI language programming, soldier patients projected on life size semi-transparent screens help teach doctors about treating combat trauma, while virtual Army personnel characters such as Sergeant Star can interact naturally with soldiers in leadership training exercises.”

Read the full article.

ICT Virtual Humans Research in New Scientist Magazine

An article about ICT research creating believable virtual human characters for the military to use in cultural and negotiation training applications appears in New Scientist magazine and on their website. The story states that the success or failure of many Army missions hinges on soldiers engaging with people in unfamiliar cultural settings. And one way of getting that knowledge across is to use realistic, immersive virtual-reality programs in which the soldiers are trained by being presented with avatars that behave in a similar way to the people they will meet in the field.

The story features Randy Hill and ICT virtual humans research by Jonathan Gratch, Louis-Philippe Morency and colleagues, and also discusses ICT’s role in creating digital faces for movies including Avatar. According to the story, it is only a matter of time before we are all facing up to a new digital world of emotion in games and films.

Read the story here.

Paul Debevec on the Cover of UC Berkeley’s Forefront Magazine

Paul Debevec, ICT’s associate director for graphics research, was featured as the cover story of the Fall 2010 UC Berkeley Forefront magazine. Debevec earned his Ph.D. at Berkeley and has gone on to make significant contributions in the fields of computer graphics and visual effects. The article provides a comprehensive overview of his work and achievements.

Read the story. (pages 10-13)

Iwan de Kok, Derya Ozkan, Dirk Heylen, Louis-Philippe Morency: “Learning and Evaluating Response Prediction Models using Parallel Listener Consensus”

Traditionally, listener response prediction models are learned from pre-recorded dyadic interactions. Because of individual differences in behavior, these recordings do not capture the complete ground truth. Where the recorded listener did not respond to an opportunity provided by the speaker, another listener would have responded, or vice versa. In this paper, we introduce the concept of parallel listener consensus, where the listener responses from multiple parallel interactions are combined to better capture differences and similarities between individuals. We show how parallel listener consensus can be used for both learning and evaluating probabilistic prediction models of listener responses. To improve the learning performance, the parallel consensus helps identify better negative samples and reduces outliers in the positive samples. We propose a new error measurement called fConsensus which exploits the parallel consensus to better define the concepts of exactness (mislabels) and completeness (missed labels) for prediction models. We present a series of experiments using the MultiLis Corpus where three listeners were tricked into believing that they had a one-on-one conversation with a speaker, while in fact they were recorded in parallel in interaction with the same speaker. In this paper we show that using parallel listener consensus can improve learning performance and represent better evaluation criteria for predictive models.
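
As a rough illustration of how such parallel annotations could be pooled into consensus labels (a toy Python sketch under assumed data structures, not the actual MultiLis annotation pipeline), each listener’s response onsets can be binned on a shared timeline, and a bin counts as a consensus response opportunity when enough of the parallel listeners responded in it:

    from collections import Counter

    def consensus_bins(listener_onsets, bin_size=0.5, min_votes=2):
        """Pool response onsets (seconds) from listeners recorded in parallel.

        listener_onsets : one list of onset times per listener.
        Returns the time bins where at least `min_votes` listeners responded,
        i.e. positive samples backed by a consensus rather than one individual.
        """
        votes = Counter()
        for onsets in listener_onsets:
            votes.update({int(t / bin_size) for t in onsets})  # one vote per bin per listener
        return {b for b, n in votes.items() if n >= min_votes}

    # Three parallel listeners reacting to the same speaker recording.
    listeners = [[1.2, 4.8, 9.1], [1.4, 9.3], [4.7, 9.0]]
    print(sorted(consensus_bins(listeners)))   # [2, 9, 18] -> consensus near 1.3s, 4.8s and 9s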

Ramy Sadek: “Automatic Parallelism for Dataflow Graphs”

This paper presents a novel algorithm to automate high-level parallelization from graph-based data structures representing data flow. This automatic optimization yields large performance improvements for multi-core machines running host-based applications. Results of these advances are shown through their incorporation into the audio processing engine Application Rendering Immersive Audio (ARIA) presented at AES 117. Although the ARIA system is the target framework, the contributions presented in this paper are generic and therefore applicable in a variety of software such as Pure Data and Max/MSP, game audio engines, non-linear editors and related systems. Additionally, the parallel execution paths extracted are shown to give effectively optimal cache performance, yielding significant speedup for such host-based applications.
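
A common starting point for this kind of optimization (a Python sketch of the general idea, not the ARIA scheduler itself) is to partition the dataflow DAG into topological levels: every node in a level depends only on earlier levels, so the nodes within one level can be dispatched to worker threads concurrently.

    def parallel_levels(graph):
        """Group nodes of a dataflow DAG into levels that can run in parallel.

        graph : dict mapping each node to the list of nodes it feeds into.
        Nodes in the same returned level share no dependencies.
        """
        indegree = {n: 0 for n in graph}
        for outputs in graph.values():
            for m in outputs:
                indegree[m] += 1
        level = [n for n, d in indegree.items() if d == 0]   # source nodes
        levels = []
        while level:
            levels.append(level)
            nxt = []
            for n in level:
                for m in graph[n]:
                    indegree[m] -= 1
                    if indegree[m] == 0:
                        nxt.append(m)
            level = nxt
        return levels

    # Audio-style graph: two input branches processed independently, then mixed.
    g = {"inL": ["eqL"], "inR": ["eqR"], "eqL": ["mix"], "eqR": ["mix"], "mix": []}
    print(parallel_levels(g))   # [['inL', 'inR'], ['eqL', 'eqR'], ['mix']]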

Ramy Sadek: “Simulating Hearing Loss In Virtual Training”

Audio systems for virtual reality and augmented reality training environments commonly focus on high-quality audio reproduction. Yet many trainees may face real-world situations wherein hearing is compromised. In these cases, the hindrance caused by impaired or lost hearing is a significant stressor that may affect performance. Because this phenomenon is hard to simulate without actually causing hearing damage, trainees are largely unpracticed at operating with diminished hearing. To improve the match between training scenarios and real-world situations, this effort aims to add simulated hearing loss or impairment as a training variable. The goal is to affect everything users hear, including non-simulated sounds such as their own and each other’s voices, without overt noticeability, risk to hearing, or requiring headphones.
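
The abstract does not specify the signal chain, but a crude first approximation of impaired hearing (a toy Python sketch; the cutoff and attenuation values are invented knobs, not parameters from the paper) is broadband attenuation combined with a low-pass filter, since high frequencies are typically lost first. A real-time version would apply the same per-sample recursion to each audio callback buffer.

    import numpy as np

    def simulate_hearing_loss(samples, sample_rate, cutoff_hz=2000.0, atten_db=-20.0):
        """Crudely approximate hearing impairment on a mono signal."""
        x = np.asarray(samples, dtype=np.float64)
        # One-pole low-pass filter: y[n] = y[n-1] + alpha * (x[n] - y[n-1])
        alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sample_rate)
        out = np.empty_like(x)
        y = 0.0
        for n in range(len(x)):
            y += alpha * (x[n] - y)
            out[n] = y
        return out * 10.0 ** (atten_db / 20.0)   # dB -> linear gain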

John Galen Buckwalter, Belinda Lange, Josh Williams, Eric Forbell, Julia Kim, Kenji Sagae, Skip Rizzo: “SimCoach: an online virtual human information and recommendation system”

Los Angeles Times Covers ICT Technologies and New Building

An article, blog post and photo gallery in the Los Angeles Times covered much of ICT’s newest video game and virtual human technologies. The article focused on ICT’s recent move to a larger facility in Playa Vista and the many ways in which ICT has expanded since its founding in 1999. The article states that ICT’s wide-ranging virtual technologies, now found on 65 military sites across the country, have popped in and out of the public spotlight, but last week they were on full display when the institute opened the doors to its new 72,000-square-foot facility in Playa Vista.

“The move is a mark of a new era for us,” said Randall W. Hill Jr., executive director of the institute, which outgrew its facility in Marina del Rey. “But really, it’s a new era for the Army as well.”

Read the article.

Read the blog post.

View the photos.

Reid Swanson, Andrew Gordon: “A Data-Driven Case-Based Reasoning Approach to Interactive Storytelling”

In this paper we describe a data-driven interactive storytelling system similar to previous work by Gordon & Swanson. We address some of the problems of their system by combining information retrieval, machine learning and natural language processing. To evaluate our system, we leverage emerging crowd-sourcing communities to collect orders of magnitude more data and show statistical improvement over their system. The end result is a computer agent capable of contributing to stories that are nearly indistinguishable from entirely human-written ones to outside observers.
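
The information-retrieval core of such a system can be illustrated in a few lines (a Python sketch over an invented toy corpus, not Swanson and Gordon’s actual pipeline): given the story so far, retrieve the corpus sentence whose context best matches the user’s last sentence and contribute the sentence that followed it.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Toy corpus of (context sentence, following sentence) pairs mined from stories.
    pairs = [
        ("the knight drew his sword", "the dragon reared back and roared"),
        ("she opened the creaking door", "dust and moonlight spilled into the hall"),
        ("the detective studied the letter", "one word had been underlined twice"),
    ]
    contexts = [c for c, _ in pairs]
    vectorizer = TfidfVectorizer()
    context_vecs = vectorizer.fit_transform(contexts)

    def continue_story(user_sentence):
        """Return the corpus continuation whose context best matches the input."""
        sims = cosine_similarity(vectorizer.transform([user_sentence]), context_vecs)
        return pairs[sims.argmax()][1]

    print(continue_story("the knight raised his rusty sword"))   # -> dragon continuation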

ICT Celebrates Move to New Building and 10 Years of Work

On Thursday, October 28, USC President C.L. Max Nikias, the USC marching band and special guests from the military, state, city and entertainment industry joined ICT Executive Director Randall W. Hill Jr. in a ribbon-cutting ceremony at ICT’s new Playa Vista campus. Pictured left to right: COL John Langhauser, commander of the U.S. Army Simulation and Training Technology Center; John Miller, director of the U.S. Army Research Lab; ICT’s Randall W. Hill, Jr.; USC President C.L. Max Nikias; Bill Allen, president and CEO of the Los Angeles County Economic Development Corporation; Scott Seigler, entertainment industry executive and CEO of MediaSeigler; and Karen Kukerin, deputy director and community liaison in the Governor’s Los Angeles office. The day-long celebration also included a speaker symposium and technology demonstrations.

Read the USC News story here.

Belinda Lange, Sheryl Flynn, Jamie Antonisse, Debra Leiberman: “Games for Rehabilitation: How do we find the balance between play and therapy?”

The recent release and worldwide acceptance and enjoyment of the Nintendo(R) Wii(TM) and WiiFit(TM) and Sony PlayStation(R)2 EyeToy(TM) have provided evidence for the notion that exercise can be fun, provided it is presented in a manner that is entertaining, motivating and distracting. Off-the-shelf games for commercial gaming consoles have been developed and tested for the purpose of entertainment; however, the games and consoles were not designed as medical devices nor with a primary focus on serving as adjunct rehabilitation tools. While games on these consoles were not designed with rehabilitation in mind, they have the advantage that they are affordable, accessible and can be used within the home. Many clinics are adopting the use of these off-the-shelf devices for exercise, social interaction and entertainment. The Nintendo(R) Wii(TM) and Sony Playstation(R)2 EyeToy(TM) have demonstrated promising results as low-cost tools for balance rehabilitation. Furthermore, individuals using these devices for exercise have anecdotally reported a high level of enjoyment from interacting and exercising with friends and family members. However, the concept of using off-the-shelf video games for rehabilitation alters the context in which these games were initially intended. Since these games were initially designed for entertainment, the game play mechanics are not entirely applicable to those with disabilities. Initial usability tests indicate that off-the-shelf video game devices could be well received as rehabilitation tools; however, many of the games present significant barriers for patient groups. These barriers include game play that is too fast or requires the player to perform movements that are prohibitive to therapy goals, feedback that is not in line with therapy outcomes (a game score that does not represent the functional outcome goals) or that is demeaning for the player (feedback that the user failed the task or is ‘unbalanced’ can reduce motivation). Games used for rehabilitation must focus on specific movement goals, provide appropriate feedback and be tailored to the individual user. Feedback should provide the player with useful information about their actions and improve and motivate skill acquisition without reducing player morale. The level of challenge of the task should be easily changed by the therapist, so the game is challenging enough to motivate patients to improve but not so difficult that the task cannot eventually be achieved. These criteria, if not met by existing games, must be integrated into games designed by researchers in the rehabilitation and game design fields. This panel will discuss the current use of off-the-shelf games for rehabilitation and seek to find the balance between play and therapy. How can we use the positive aspects of off-the-shelf games to improve therapy? Which elements of game play should be incorporated into rehabilitation games without restricting the key therapy goals?

Jacki Morie: “ICT’s Immersion and Virtual Human Work”

Jacki Morie will give a presentation titled “ICT’s Immersion and Virtual Human Work” at the Immersive Technology Summit. Learn more about the Immersive Technology Summit 2010.

Belinda Lange: “Game-based motor rehabilitation: Moving beyond the Wii”

Many clinics are adopting off-the-shelf video game devices for exercise, social interaction and entertainment. However, many of the games present significant barriers for people with different injuries and levels of ability. These barriers include game play that is too fast or requires the player to perform movements that are prohibitive to therapy goals, feedback that is not in line with therapy outcomes (a game score that does not represent the functional outcome goals) or that is demeaning for the player (feedback that the user failed the task or is ‘unbalanced’ can reduce motivation). This talk will present and discuss existing literature supporting the use of video games as rehabilitation tools. The talk will outline a potential direction for Game-Based Motor Rehabilitation research and development. The focus of this research and development concept is three-fold: 1) assess the usability of off-the-shelf games and consoles within a range of user populations; 2) using this feedback, re-purpose or develop low-cost interaction devices that are appropriate for use within the rehabilitation setting; 3) design, develop and test games specifically focused on rehabilitation tasks. A range of examples from each will be presented and discussed. The presentation will aim to demonstrate support for the development of specifically tailored rehabilitation games. The key advantage of designing rehabilitation games over using existing off-the-shelf games is to provide the therapist and/or patient with the ability to alter elements of game play in order to individualize treatment tasks for specific users and expand the use of these tasks to a wider range of ability levels.

ICT’s Virtual Iraq and Virtual Patients Featured on NBC and CBS News in Portland

ICT virtual technologies being used to train mental health clinicians and treat patients suffering from mental health issues, including post-traumatic stress, were highlighted in segments on the local CBS and NBC affiliates in Portland, Oregon. ICT has partnered with the USC School of Social Work to incorporate these innovative technologies as part of student training. Hands-on demonstrations were presented at the Council on Social Work Education (CSWE) conference last week in Portland.

Watch the NBC story here.

ICT at the Council on Social Work Education (CSWE) Conference

ICT’s researchers and artists provided hands-on, live demonstrations of the ICT technology being used by the USC School of Social Work at the Council on Social Work Education (CSWE) conference at the Oregon Convention Center in Portland, Oct. 16, 2010 from 7 p.m.–9:30 p.m. Kevin Chang and Brad Newman demoed Virtual Iraq. Patrick Kenny and Tomer Mor-Barak demoed the Virtual Patient project.

H. Chad Lane Presents at Academic Lessons from Video Game Learning

In this presentation I will give an overview of the issues involved with providing guidance in game-based learning environments. With roots in the classic debate between discovery and guided learning, the tension arises from the simultaneous desires not to detract from the features that generally make games appealing (e.g., freedom to explore, making meaningful choices, observing outcomes, solving problems, being emotionally engaged) and to prevent floundering and other non-productive game activities that hinder learning. I will discuss two methods our group has explored for addressing this tension. The first is to ensure that game success aligns directly with learning goals, meaning that feedback and guidance directly contribute to game success (and are thus accepted). The second method has been to provide feedback implicitly through dynamic adjustments to the game experience (a.k.a. pedagogical experience manipulation).

ICT Contributions to the Future Immersive Training Environment Featured on ABC News San Diego

ICT’s Julia Kim was interviewed in a story about the U.S. Joint Forces Command’s Future Immersive Training Environment (FITE), a virtual reality-based training system to improve team decision-making skills through realistic scenarios that is being tested by Camp Pendleton Marines. The FITE demonstration is an upgrade to Camp Pendleton’s Infantry Immersion Trainer and includes virtual human role players developed by ICT.

“The characters are interactive,” said Kim. “You can actually talk to them and they talk to each other as well.”

Watch the ABC San Diego story here.

Press Release for ICT’s Grand Opening and Ten Year Anniversary

Contact: Orli Belman
USC Institute for Creative Technologies
310 301-5006, 310 709-4156
belman@ict.usc.edu

Bringing the Holodeck to Life – USC Institute for Creative Technologies Marks a Decade Bringing Hollywood Magic to Military Training and More

Experience the Latest 3D, Virtual Human and Immersive Technologies at ICT’s New Playa Vista Home on October 28


HANDS-ON DEMOS INCLUDE:

· Gunslinger – The Western goes to the future in this engaging, mixed-reality, story-driven experience where a single participant can play the hero in a Wild West setting by interacting with multiple virtual characters.

· Live 3D Video Teleconferencing – Finally, you can attend a remote meeting like a 3-D “hologram” from Star Wars, able to look around and make eye contact with whomever you need to address.

· Virtual Iraq – An immersive virtual environment for use by trained therapists to help treat combat-related post-traumatic stress disorder.

· Digital Actors – See the Academy Award-winning facial scanning technology, used in Avatar, Spider-Man 2, Benjamin Button and more, that transforms a real person into a convincing digital double.

· IED Attack – A first-person, multi-player game that allows players to take part in a simulated IED attack.

· Digital Docents – Meet Ada and Grace, two bright and bubbly educators who arrived at the Museum of Science, Boston in 2009. They can answer visitor questions, suggest exhibits and explain the technology that makes them work.

· Stretching Space – Expand a virtual world with redirected walking utilizing a head-mounted display that allows you to cover vast virtual territory with just a few steps.

· Virtual Patients – Experience an avatar-based simulation program designed to replicate the experiences of patients exposed to combat stress and to help prepare clinicians to interact with real clients.

See the full demo list here.
To see videos of our work: http://www.youtube.com/uscict

SPEAKERS: USC President C.L. Max Nikias will join ICT Executive Director Randall W. Hill Jr., Bill Allen, CEO, LA Economic Development Corp, and others in welcoming representatives from the military, entertainment and scientific communities, including:

· John Seely Brown, former chief scientist of Xerox Corporation and the director of its Palo Alto Research Center (PARC).

· John Nelson, Academy-Award winning visual effects supervisor.

· John Miller, director, U.S. Army Research Laboratory, the Army’s premier laboratory for basic and applied research and analysis.

· Col. Craig G. Langhauser, head of the U.S. Army’s Research, Development and Engineering Command – Simulation & Training Technology Center.

See the full symposium speaker list here.
See the full ribbon-cutting speaker list here.

WHEN: Thursday, October 28
9 a.m. – 2 p.m. Technology Demonstrations and Speaker Symposium
2:30 p.m. Ribbon Cutting Ceremony
4:00 p.m. Social

WHERE: USC Institute for Creative Technologies
12015 Waterfront Drive
Playa Vista, CA 90094-2536

RSVP: Orli Belman, belman@ict.usc.edu

MORE: http://ict.usc.edu/grandopening/
http://ict.usc.edu/

ABOUT ICT: ICT was established in 1999 with a multi-year contract from the US Army to explore a powerful question: What would happen if leading technologists in artificial intelligence, graphics, and immersion joined forces with the creative talents of Hollywood and the game industry?

The answer is the creation of engaging, memorable and effective interactive media that are revolutionizing learning in the fields of training, education and beyond.

Kallirroi Georgila, Ning Wang, Jonathan Gratch: “Cross-Domain Speech Disfluency Detection”

We build a model for speech disfluency detection based on conditional random fields (CRFs) using the Switchboard corpus. This model is then applied to a new domain without any adaptation. We show that a technique for detecting speech disfluencies based on Integer Linear Programming (ILP) significantly outperforms CRFs. In particular, in terms of F-score and NIST Error Rate the absolute improvement of ILP over CRFs exceeds 20% and 25% respectively. We conclude that ILP is an approach with great potential for speech disfluency detection when there is a lack or shortage of in-domain data for training.
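
For readers unfamiliar with the CRF baseline, a minimal sequence-labeling setup for disfluency detection might look like the following (a sketch using the third-party sklearn-crfsuite package and invented toy data; the paper’s feature set, corpora and ILP formulation are not reproduced here). Each token receives a feature dictionary and a label marking whether it is part of a disfluency.

    import sklearn_crfsuite   # pip install sklearn-crfsuite

    def token_features(tokens, i):
        """Simple per-token features; real systems use far richer lexical cues."""
        return {
            "word": tokens[i].lower(),
            "prev": tokens[i - 1].lower() if i > 0 else "<s>",
            "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
            "repeated": i > 0 and tokens[i] == tokens[i - 1],  # e.g. "I I want"
        }

    # Toy training data: E = edited/disfluent token, O = fluent token.
    sents = [["I", "I", "want", "uh", "a", "flight"], ["book", "the", "the", "ticket"]]
    labels = [["E", "O", "O", "E", "O", "O"], ["O", "E", "O", "O"]]

    X = [[token_features(s, i) for i in range(len(s))] for s in sents]
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
    crf.fit(X, labels)

    test = ["I", "I", "need", "a", "ticket"]
    print(crf.predict([[token_features(test, i) for i in range(len(test))]]))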

Sudeep Gandhe and David Traum at the 11th Annual SIGdial Meeting on Discourse and Dialogue in Tokyo

We perform a study of existing dialogue corpora to establish the theoretical maximum performance of the selection approach to simulating human dialogue behaviour in unseen dialogues. This maximum is the proportion of test utterances for which an exact or approximate match exists in the corresponding training corpus. The results indicate that some domains seem quite suitable for a corpus-based selection approach, with over half of the test utterances having been seen before in the corpus, while other domains show much more novelty compared to previous dialogues.

Kallirroi Georgila, Maria Wolters, Johanna D. Moore: “Learning Dialogue Strategies from Older and Younger Simulated Users”

Older adults are a challenging user group because their behaviour can be highly variable. To the best of our knowledge, this is the first study where dialogue strategies are learned and evaluated with both simulated younger users and simulated older users. The simulated users were derived from a corpus of interactions with a strict system-initiative spoken dialogue system (SDS). Learning from simulated younger users leads to a policy which is close to one of the dialogue strategies of the underlying SDS, while the simulated older users allow us to learn more flexible dialogue strategies that accommodate mixed initiative. We conclude that simulated users are a useful technique for modelling the behaviour of new user groups.

ICT Virtual Patient and Researchers to Speak at USC’s Body Computing Conference

ICT’s Virtual Patient, along with Medical Virtual Reality Associate Director Skip Rizzo and Computer Scientist Patrick Kenny, will give a glimpse into how virtual reality can impact the future of healthcare at the USC Body Computing Conference on Friday, September 24. Rizzo will give an overview of ICT’s work in medical virtual reality. Kenny and the virtual patient will introduce Dr. Leslie Saxon of the USC Keck School of Medicine, with the patient serving as a life-sized, interactive example of how virtual humans can be used in medical settings. The virtual patient project at ICT began as a collaboration with the USC Keck School of Medicine to develop virtual role-players to aid medical students in training skills for diagnosing and interviewing patients.

Visit the Body Computing Conference site here.

ICT Develops Virtual Human Role Players for New Military Training Prototype

The U.S. Joint Forces Command posted an article about the Future Immersive Training Environment (FITE), a new virtual reality-based training system to improve team decision-making skills through realistic scenarios that has been installed at Camp Pendleton.  Julia Kim, an ICT project director, explained ICT’s mixed reality portion of the system, called the Combat Hunter Action and Observation Simulation (CHAOS). Inside an area of the simulated village, soldiers encounter two virtual personalities: an Afghan adult named Omar and his mother.

“We’re bringing this technology together with Hollywood, creating an experience that augments what the virtual role players do,” Kim said.

Read the full story here.

Presentation of the Virtual Human Toolkit at the Tools Tutorial at IVA10

Arno Hartholt’s presentation provides an overview of the capabilities of the Virtual Human Toolkit.

Lixing Huang Presents at the 10th IVA

Backchannel feedback is an important kind of nonverbal feedback within face-to-face interaction that signals a person’s interest, attention and willingness to keep listening. Learning to predict when to give such feedback is one of the keys to creating natural and realistic virtual humans. Prediction models are traditionally learned from large corpora of annotated face-to-face interactions, but this approach has several limitations. Previously, we proposed a novel data collection method, Parasocial Consensus Sampling, which addresses these limitations. In this paper, we show that data collected in this manner can produce effective learned models. A subjective evaluation shows that the virtual human driven by the resulting probabilistic model significantly outperforms a previously published rule-based agent in terms of rapport, perceived accuracy and naturalness, and it is even better than the virtual human driven by real listeners’ behavior in some cases.

Sin-hwa Kang Presents at the 10th International Conference on Intelligent Virtual Agents

In typical communication situations, it is desirable to avoid any type of simultaneous talking due to lack of coordination between communicators, as it is not easy to maintain sufficient mutual clarity over conversation at the same time. Researchers have long commented on the lack of coordination in the turn-taking of conversation partners. Leighton et al. specifically pointed out that more interrupting and simultaneous talking occurred in communication of people who needed psychotherapy. In our previous study we mainly investigated people’s self-disclosure in the interview interaction with real human videos and virtual agents. Bavelas et al. demonstrated that collaboration between a speaker’s acts and listeners’ responses was coordinated by speaker gaze. Therefore, we speculated that it would be crucial to study the timely exchange of speaking turns coordinated by interviewee gaze in the self-disclosure interview interaction, where participants were human interviewees and the interviewers were either virtual agents or humans represented by a modified or unmodified video avatar. We would like to elucidate our starting assumption based on one of the floor-yielding cues defined in the study by Duncan, Jr.: head direction. In his study, Duncan defines the signals and rules for the turn-taking mechanism. The head direction is described as “turning of the speaker’s head toward the auditor,” which is considered “the speaker’s part of the gaze-direction pattern.” Exline et al. describe the head direction, specifically looking at or away from another, as a natural gesture in which communicators normally perceive the other’s intentions based on the fixations or avoidances of gaze. They further report that communicators could have a high degree of certainty if they experience the other’s full gaze in the face, citing Gibson and Pick’s work in 1963.

We discovered that interviewees’ gaze at interviewers at the end of their speaking turn provided more appropriate gaze times when they interacted with agent interviewers than with human interviewers shown as modified video avatars. In this paper we discuss our findings and other take-home messages found in different turn-taking patterns in the interactions with agent interviewers and human interviewers.

Bill Swartout: “Ada and Grace: Toward Realistic and Engaging Virtual Museum Guides”

To increase the interest and engagement of middle school students in science and technology, the InterFaces project has created virtual museum guides that are in use at the Museum of Science, Boston. The characters use natural language interaction and have near photoreal appearance to increase engagement. The paper presents an evaluation of natural language performance and presents reports from museum staff on visitor reaction.

Celso De Melo Presents at the 10th IVA

Acknowledging the social functions that emotions serve, there has been growing interest in the interpersonal effect of emotion in human decision making. Following the paradigm of experimental games from social psychology and experimental economics, we explore the interpersonal effect of emotions expressed by embodied agents on human decision making. The paper describes an experiment where participants play the iterated prisoner’s dilemma against two different agents that play the same strategy (tit-for-tat), but communicate different goal orientations (cooperative vs. individualistic) through their patterns of facial displays. The results show that participants are sensitive to differences in the facial displays and cooperate significantly more with the cooperative agent. The data indicate that emotions in agents can influence human decision making and that the nature of the emotion, as opposed to mere presence, is crucial for these effects. We discuss the implications of the results for designing human-computer interfaces and understanding human-human interaction.

Barbara Boxer Visits USC in Support of Military Mental Health, Sees Virtual Patient Demo

U.S. Senator Barbara Boxer visited the USC School of Social Work today to see a demonstration of the virtual patient project, a collaboration between the school’s Center for Innovation and Research on Veterans and Military Families and ICT. The virtual patient is developed with ICT’s virtual human technology and designed to help teach students in the USC School of Social Work’s new military specialization program to interview patients and diagnose their conditions. A goal of the project is to create a library of virtual characters suffering from mental health issues relevant to current and former service members and their families.

Boxer also addressed a small audience of school administrators/faculty, students and community members, thanking the men and women in uniform for their service and the school for its innovative efforts to speed and improve the training of military social workers.

Watch Boxer speak after the event.

Read the Los Angeles Times story about her visit.

Training and Simulation Journal Features Q&A with Randy Hill

Randy Hill, executive director of ICT, was interviewed by Michael Peck, U.S. editor for Training and Simulation Journal. The Q&A with Hill was part of a regular series spotlighting leaders in the training and simulation arena. In their conversation, Hill spoke about ICT’s work in advancing the state of the art in virtual humans and facial animation. He also discussed ICT’s prototypes for counter-IED, counter-insurgency and negotiation training, and highlighted how storytelling can enhance training.

“I think what storytelling does is bring in other characters,” he said. “It brings other people into the picture. That is the soft side of a lot of the skills that the military needs. It’s understanding people, whether it’s your colleagues, your subordinates or people in another culture. It has an emotional impact. That’s how learning takes place. Not just appealing to the mind, but to your heart.”

Read the full story here.

H. Chad Lane: “Individualized Virtual Humans for Social Skills Training”

Virtual humans are now being used as role players for a variety of domains that involve social skills. In this paper, we discuss how such systems can provide individualized practice through dynamically adjusting the behaviors of virtual humans to meet specific learner needs.

Paul Debevec SIGGRAPH Asia 2010 Speaker Tour

SIGGRAPH Asia is honored to present Paul Debevec, who shared a Scientific and Engineering Academy Award for his work on the Light Stage device, which was used in “Avatar” and other recent films. He will deliver a series of provocative, revealing presentations titled “From Spider-Man to Avatar, Emily to Benjamin: Achieving Photoreal Digital Actors.”

September 7, 2010 – Beijing
September 8, 2010 – Seoul
September 10, 2010 – Hong Kong
September 13, 2010 – Taipei
September 16, 2010 – Tokyo

Somewhere between “Final Fantasy” in 2001 and “The Curious Case of Benjamin Button” in 2008, digital actors crossed the “Uncanny Valley” from looking strangely synthetic to believably real. This talk describes some of the key technological advances that have enabled this achievement.

Two technologies from our laboratory, High-Dynamic-Range Lighting and the Light Stage facial capture systems, have been used to create realistic digital characters in movies such as “Spider-Man 2”, “Superman Returns”, “The Curious Case of Benjamin Button”, and “Avatar”. For an in-depth example, the talk describes how high-resolution face scanning, advanced character rigging, and performance-driven facial animation were combined to create “Digital Emily”, a collaboration between our laboratory and Image Metrics. Actress Emily O’Brien was scanned in Light Stage 5 in 33 facial poses at the resolution of skin pores and fine wrinkles. These scans were assembled into a rigged face model driven by Image Metrics’ video-based animation software, and the resulting photoreal facial animation premiered at SIGGRAPH 2008.

The talk also presents a 3D teleconferencing system that uses live facial scanning and an autostereoscopic display to transmit a person’s face in 3D and make eye contact with remote collaborators, and a new head-mounted facial performance-capture system based on photometric stereo.

Paul Debevec is Associate Director of Graphics Research at the University of Southern California’s Institute for Creative Technologies and a Research Associate Professor in USC’s Viterbi School of Engineering’s Computer Science Department. His ACM SIGGRAPH involvement includes:

ACM SIGGRAPH Executive Committee Director-At-Large, 2009-
SIGGRAPH 2007 Computer Animation Festival Chair
ACM SIGGRAPH Distinguished Lecturer, 2008
Contributor to SIGGRAPH Technical Papers, Courses, Talks, Panels, Computer Animation Festival, and Art Gallery
Reviewer for SIGGRAPH Technical Papers, Courses, Talks, Emerging Technologies, and Computer Animation Festival
Member of ACM SIGGRAPH and the Los Angeles SIGGRAPH Chapter
Honorary member of Perth SIGGRAPH Chapter

ICT Workshop: Game Theory and Human Behavior Retreat

Click here to learn more about the Game Theory and Human Behavior Retreat.

Belinda Lange, Sheryl Flynn, Chien-Yen Chang, Skip Rizzo: “Development of an interactive rehabilitation game”

Visual biofeedback and force plate systems are often used for treatment of balance and mobility disorders following neurological injury. Conventional Physical Therapy techniques have been shown to improve balance, mobility and gait. The training program encourages patients to transfer weight onto the impaired limb in order to improve weight shift in standing and during gait. Researchers and therapists have been exploring the use of video game consoles such as the Nintendo WiiFit as rehabilitation tools. Initial case studies have demonstrated that the use of video games has some promise for balance rehabilitation. However, initial usability studies and anecdotal evidence have indicated that the commercial games that are currently available are not necessarily suitable for the controlled, specific exercise required for therapy. Based on focus group data and observations with patients, a game has been developed to specifically target weight shift training using an open source game engine and the WiiFit balance board. The prototype underwent initial usability testing with a sample of four Physical Therapists and four patients with neurological injury or disease. Overall, feedback was positive and areas for improvement were identified. This preliminary research provides support for the development of a game that caters specifically to the key requirements of balance rehabilitation.
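
The balance board’s four corner load cells make the core measurement easy to illustrate. The Python sketch below (illustrative parameter names and board dimensions, not values from the paper) computes the player’s center of pressure, the quantity a weight-shift training game would map to on-screen movement.

    def center_of_pressure(tl, tr, bl, br, width_cm=43.3, depth_cm=23.8):
        """Center of pressure from the four corner load cells of a balance board.

        tl, tr, bl, br : top-left/top-right/bottom-left/bottom-right loads (kg).
        width_cm/depth_cm approximate the Wii balance board sensor spacing.
        Returns (x, y) in cm relative to the board center; x > 0 means the
        player is shifting weight to the right, y > 0 toward the toes.
        """
        total = tl + tr + bl + br
        if total <= 0:
            return 0.0, 0.0                     # nobody on the board
        x = (width_cm / 2.0) * ((tr + br) - (tl + bl)) / total
        y = (depth_cm / 2.0) * ((tl + tr) - (bl + br)) / total
        return x, y

    # Player shifting weight onto the right (impaired) limb: COP moves right.
    print(center_of_pressure(tl=10.0, tr=25.0, bl=10.0, br=25.0))   # ~(9.3, 0.0)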

ICT Presents at Virtual Worlds conference, Seattle WA

The University of Southern California Institute for Creative Technologies is developing SimCoach, a semi-immersive, online, interactive virtual support program in which users are guided by a virtual human agent. SimCoach is a web portal to mental health information, resources and advice that aims to break down barriers to care. The virtual human is being developed with the primary goal of engaging those warfighters, and their families, who might not otherwise seek help for psychological and/or emotional conditions. The virtual human will attempt to engage the user by developing rapport and providing unconditional support. Although not set in a fully immersive virtual world, the application provides a realistic setting that aims to create a sense of emotional support and provide individually relevant information. The SimCoach web application does not intend to replace a trained clinician or their methods. Instead, SimCoach is designed to provide information and awareness.

Associated community forums and social network sites will be referenced and referrals to in-person support groups and healthcare professionals will be made as appropriate. This presentation will detail the first year of development, early prototype applications and results from initial user testing.

Derya Ozkan, Louis-Philippe Morency: “Self-Based Feature Selection for Nonverbal Behavior Analysis”

One of the key challenges in social behavior analysis is to automatically discover the subset of features relevant to a specific social signal (e.g., backchannel feedback). The traditional approach for feature selection focuses on finding the relevant behaviors from a dataset made of multiple human interactions. The problem with this group-based approach is that it overlooks the inherent behavioral differences among people (e.g., culture, age, gender) by focusing on the average model. In this paper, we present a feature selection approach which first looks at important behaviors for each individual, called self-based features, before building a consensus. To enable this approach, we propose a new feature ranking scheme which exploits the sparsity of probabilistic models when trained on human behavior problems. We validated our self-based approach on the task of listener backchannel prediction and showed improvement over the traditional group-based approach. Our technique gives researchers a new tool to analyze individual differences in social nonverbal communication.
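
As a rough illustration of how sparsity can drive per-person feature ranking, here is a minimal sketch assuming scikit-learn; the names and data layout are illustrative, not the authors’ implementation. An L1-penalized model is fit for each individual, the features it retains are treated as that person’s self-based features, and the consensus counts how many individuals retain each feature.

```python
# Illustrative sketch of self-based feature selection (not the authors' code).
# per_person_data: list of (X, y) pairs, one per individual, where X holds
# behavioral features and y the social signal (e.g., backchannel vs. not).
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_based_consensus(per_person_data, n_features):
    votes = np.zeros(n_features)
    for X, y in per_person_data:
        # The sparse L1 penalty zeroes out behaviors irrelevant to this person.
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
        clf.fit(X, y)
        votes += (np.abs(clf.coef_).ravel() > 0)  # this person's self-features
    # Consensus: feature indices ranked by how many individuals kept them.
    return np.argsort(-votes)
```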

Derya Ozkan, Louis-Philippe Morency: “Self-Based Feature Selection for Nonverbal Behavior Analysis”

One of the key challenges in social behavior analysis is to automatically discover the subset of features relevant to a specific social signal (e.g., backchannel feedback). The way these social signals are performed exhibits some variation among different people. In this paper, we present a feature selection approach which first looks at important behaviors for each individual, called self-features, before building a consensus. To enable this approach, we propose a new feature ranking scheme which exploits the sparsity of probabilistic models when trained on human behavior problems. We validated our self-feature consensus approach on the task of listener backchannel prediction and showed improvement over the traditional group feature approach. Our technique gives researchers a new tool to analyze individual differences in social nonverbal communication.

Derya Ozkan, Kenji Sagae, Louis-Philippe Morency: “Latent Mixture of Discriminative Experts for Multimodal Prediction Modeling”

During face-to-face conversation, people naturally integrate speech, gestures and higher level language interpretations to predict the right time to start talking or to give backchannel feedback. In this paper we introduce a new model called Latent Mixture of Discriminative Experts which addresses some of the key issues with multimodal language processing: (1) temporal synchrony/asynchrony between modalities, (2) micro dynamics and (3) integration of different levels of interpretation. We present an empirical evaluation on listener nonverbal feedback prediction (e.g., head nod), based on observable behaviors of the speaker. We confirm the importance of combining four types of multimodal features: lexical, syntactic structure, eye gaze, and prosody. We show that our Latent Mixture of Discriminative Experts model outperforms previous approaches based on Conditional Random Fields (CRFs) and Latent-Dynamic CRFs.

Paul Rosenbloom: “Towards a New Generation of Cognitive Architectures”

Paul Rosenbloom was invited to give the keynote talk at the 2nd International Conference on Advanced Intelligence in Beijing.

Read the Abstract.

Live Science Features ICT’s Virtual Reality Work in PTSD

Skip Rizzo, ICT’s associate director for medical virtual reality, was quoted in a Live Science article exploring whether virtual reality depictions based on real-life combat experiences could help prepare recruits before deployment, help train them for the real thing and even help prevent cases of post-traumatic stress disorder (PTSD) in soldiers.  The story described ICT work developing virtual recreations based on the stories told by returning veterans. “Our aim is to make the return home as smooth as possible,” said Rizzo.

Read the article.

Skip Rizzo Speaks about Clinical Virtual Reality at American Psychological Association Convention

Rizzo presents a review of basic research into the clinical efficacy of virtual environments in clinical care as part of the APA convention’s Therapeutic Use of Interactive Virtual Environments symposium.

American Psychological Association Honors Outstanding Work of ICT’s Skip Rizzo and Collaborators

Skip Rizzo, ICT’s associate director for medical virtual reality, received the 2010 Award for Outstanding Contributions in Trauma Psychology from the American Psychological Association’s Division 56, psychology’s focal point for research, practice and education on trauma. This award recognizes distinguished contributions to psychological practice and was also given to Rizzo’s collaborators Barbara Rothbaum, JoAnne Difede and Greg M. Reger for the group’s research, practice and testing in developing and evaluating virtual reality exposure therapy to treat post-traumatic stress.

The awards ceremony takes place Friday, August 13, at the APA Convention in San Diego.

Visit the convention site.

Visit the APA’s Division 56 awards and honors site.

Morteza Dehghani Joins the ICT Emotion Group – Read about his Work in Scientific American

We welcome Morteza Dehghani, who has joined ICT as a research staff scientist working with Jon Gratch and Stacy Marsella in the Emotion Group.

Dehghani has an interdisciplinary background. He received his Ph.D. from Northwestern University in computer science under Ken Forbus and has been working as a postdoc for two social scientists, Doug Medin and Scott Atran, exploring the role of culture and sacred values in negotiations. This recent Scientific American article describes this research.

Here at ICT, he will be involved in basic research on how culture and emotion interact to influence social interactions.

Read the Scientific American article.

Paul Rosenbloom: “Combining Procedural and Declarative Knowledge in a Graphical Architecture”

A prototypical cognitive architecture defines a memory architecture embodying forms of both procedural and declarative memory, plus their interaction. Reengineering such a dual architecture on a common foundation of graphical models enables a better understanding of both the substantial commonalities between procedural and declarative memory and the subtle differences that endow each with its own special character. It also opens the way towards blended capabilities that go beyond existing architectural memories.

Jacki Morie Invited to a Workshop in Houston for NASA Human Behavioral Research

With the explosive growth and pedagogical potential of online virtual worlds (VWs), the possibilities for using VWs for countermeasures and long-duration spaceflight are increasing. This poster discusses the various benefits of VWs for individual training, team-performance assessment, game-based at-the-moment learning, and benchmarking performance.

ICT Workshop: Predictive Models of Human Communication Dynamics Conference

In the study of human face-to-face communication, the patterning of interlocutor actions and interactions, moment-by-moment, is a matter of great scientific interest; and predictive models of such behavior are needed in order to build systems that can understand and interact with people in more natural ways.

Louis-Philippe Morency and Nigel G. Ward organized this workshop in order to bring together researchers with diverse backgrounds and perspectives to share knowledge, advance the field, and roadmap the future.

Paul Debevec in AP Story about New 3-D Technologies

ICT’s Paul Debevec was quoted in an Associated Press article about innovations in 3-D technologies. Many of the latest advances in 3-D were on display at the recent Siggraph conference in Los Angeles. Debevec spoke about how they will also be finding their way into movie theaters across the globe in upcoming movies like Disney’s TRON: Legacy and other productions from Weta and Industrial Light and Magic. “We’re going to see 3-D that’s completely naturalistically built into the artistic process, so it’s really truly a story-telling element, not just a visual effect to make things prettier,” said Debevec.

Read the Associated Press story here.

Cyrus Wilson, Abhijeet Ghosh, Pieter Peers, Matt Jen-Yuan Chiang, Jay Busch, Paul Debevec: “Temporal Upsampling of Performance Geometry using Photometric Alignment”

We present a novel technique for acquiring detailed facial geometry of a dynamic performance using extended spherical gradient illumination. Key to our method is a new algorithm for jointly aligning two photographs – under a gradient illumination condition and its complement – to a full-on tracking frame, providing dense temporal correspondences under changing lighting conditions. We employ a two step algorithm to reconstruct detailed geometry for every captured frame. In the first step, we coalesce information from the gradient illumination frames to the full-on tracking frame, and form a temporally aligned photometric normal map, which is subsequently combined with dense stereo correspondences yielding a detailed geometry. In a second step, we propagate the detailed geometry back to every captured instance guided by the previously computed dense correspondences. We demonstrate reconstructed dynamic facial geometry, captured using moderate to video rates of acquisition, for every captured frame.
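
The photometric ingredient of this pipeline can be sketched in a few lines. Under the spherical gradient illumination model, each normal component is recoverable from the ratio of a gradient image and its complement; the function below is a simplified illustration under that assumption, not the paper’s full alignment-and-reconstruction method.

```python
# Sketch: per-pixel surface normals from spherical gradient image pairs.
# grad[a] and comp[a] are float images lit by the gradient pattern along
# axis a ("x", "y", "z") and its complement. Illustrative only.
import numpy as np

def normals_from_gradients(grad, comp, eps=1e-6):
    n = np.stack(
        [(grad[a] - comp[a]) / np.maximum(grad[a] + comp[a], eps) for a in "xyz"],
        axis=-1,
    )
    # Normalize each pixel's vector to unit length.
    return n / np.maximum(np.linalg.norm(n, axis=-1, keepdims=True), eps)
```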

Edward Haynes, Jacki Morie, Eric Chance: “I Want My Virtual Friends to be Life Size! Adapting Second Life to Multi-Screen Projected Environments”

Second Life (SL) is a popular 3D online virtual world designed for human interaction (also known as a MUVE, or multi-user virtual environment). It typically supports 60-70 thousand concurrent users. The assets and physical environments within SL are easy to create and use, and the environments themselves are very much part of the human interaction experience. However, the typical means of accessing SL is through a single computer screen, which lessens the immersion that is inherent in such a rich 3D world. Because of this, the SL virtual world is a good candidate for adaptation to large-scale immersive displays such as a CAVE™ or other multi-projector systems. This paper describes a novel approach to using synchronized SL viewers to drive a large format multi-projector display system.

ICT at the 7th Symposium on Applied Perception in Graphics and Visualization

Previous research, which has used images of real human faces drawn mostly from the same facial expression database [Matsumoto and Ekman 1988], has shown that individuals perceive emotions universally across cultures. We conducted an experiment to determine whether culture affects the perception of emotions rendered on virtual faces. Specifically, we test the holistic perception hypothesis that individuals from collectivist cultures, such as East Asians, visually sample information from central regions of the face (near the top of the nose by the eyes), as opposed to sampling from specific features of the face. If the holistic perception hypothesis is true, then individuals will confuse emotional facial expressions that are different in terms of the shape of the mouth facial feature. Our stimuli were computer generated using a face graphical rendering tool, which affords a high level of experimental control for perception researchers.

Skip Rizzo’s TEDxUSC Talk Now Online

ICT’s Skip Rizzo spoke at this year’s TEDxUSC conference and was introduced by Dean Marilyn Flynn of the USC School of Social Work. Video of their talk is now available for viewing on the TEDx Talks channel on YouTube.

Of the more than 1.6 million men and women deployed to Iraq and Afghanistan, nearly one-third are expected to return with disabling combat stress disorders that may affect some for a lifetime if left untreated.

Through an unlikely marriage of social work and cutting-edge technology, the USC Institute for Creative Technologies and the USC School of Social Work are revolutionizing the training methods for a new generation of mental health professionals, shifting the way clinicians learn to interact with their patients.

Introduction: Marilyn Flynn, Dean, USC School of Social Work
Virtual Reality Demonstration: Albert “Skip” Rizzo, Research Scientist, USC Institute for Creative Technologies

Watch it here.

Dongrui Wu, Thomas Parsons, Emily Mower, Shrikanth Narayanan:

Speech processing is an important aspect of affective computing. Most research in this direction has focused on classifying emotions into a small number of categories. However, numerical representations of emotions in a multi-dimensional space can be more appropriate to reflect the gradient nature of emotion expressions, and can be more convenient in the sense of dealing with a small set of emotion primitives. This paper presents three approaches (robust regression, support vector regression, and locally linear reconstruction) for emotion primitives estimation in 3D space (valence/activation/dominance), and two approaches (average fusion and locally weighted fusion) to fuse the three elementary estimators for better overall recognition accuracy. The three elementary estimators are diverse and complementary because they cover both linear and nonlinear models, and both global and local models. These five approaches are compared with the state-of-the-art estimator on the same spontaneously elicited emotion dataset. Our results show that all of our three elementary estimators are suitable for speech emotion estimation. Moreover, it is possible to boost the estimation performance by fusing them properly since they appear to leverage complementary speech features.
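
A toy sketch of the fusion idea, assuming scikit-learn: three diverse regressors each predict a continuous emotion primitive such as valence, and average fusion simply means their outputs. The estimator choices here are stand-ins (e.g., a k-nearest-neighbors regressor in place of locally linear reconstruction), not the paper’s implementation.

```python
# Toy sketch of average fusion over diverse emotion-primitive estimators.
# HuberRegressor ~ robust regression; SVR ~ support vector regression;
# KNeighborsRegressor stands in for a local method such as locally linear
# reconstruction. Illustrative only.
import numpy as np
from sklearn.linear_model import HuberRegressor
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor

def average_fusion(X_train, y_train, X_test):
    estimators = [HuberRegressor(), SVR(kernel="rbf"), KNeighborsRegressor(5)]
    preds = [est.fit(X_train, y_train).predict(X_test) for est in estimators]
    return np.mean(preds, axis=0)  # fused estimate of, e.g., valence
```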

David Traum Presents at ACL 2010 Workshop on Companionable Dialogue System

Companioning is a tough job, and few if any current dialogue systems are up to the task. How can systems develop a relationship with their users that will transcend a simple task interaction or momentary amusing diversion? The goal should be systems that people look forward to interacting with as much as playing their favorite games or meeting their real friends. Many current dialogue system strategies conflict with this goal, even though they provide good value at their intended functions. In this talk I will propose a set of principles for companionable dialogue systems and illustrate with successful and unsuccessful techniques from existing dialogue systems and embodied conversational agents.

Belinda Lange: “Gaming and Exer-Gaming”

Belinda Lange will give a talk called, “Gaming and Exer-Gaming” at an NIH Workshop in Washington, D.C.

Kenji Sagae: “Self-Training without Reranking for Parser Domain Adaptation and Its Impact on Semantic Role Labelling”

Sagae compares self-training with and without reranking for parser domain adaptation and examines the impact of syntactic parser adaptation on a semantic role labeling system. Although self-training without reranking has been found not to improve in-domain accuracy for parsers trained on the WSJ Penn Treebank, Sagae shows that it is surprisingly effective for parser domain adaptation. Sagae also shows that simple self-training of a syntactic parser improves out-of-domain accuracy of a semantic role labeler.

Anton Leuski and David Traum: “Practical Language Processing for Virtual Humans”

Leuski and Traum’s talk is titled, “Practical Language Processing for Virtual Humans.” NPCEditor is a system for building a natural language processing component for virtual humans capable of engaging a user in spoken dialog on a limited domain. It uses a statistical language classification technology for mapping from user’s text input to system responses. NPCEditor provides a user-friendly editor for creating effective virtual humans quickly. It has been deployed as a part of various virtual human systems in several applications.

Liang Huang, Kenji Sagae: “Dynamic Programming for Linear-time Shift-Reduce Parsing”

Incremental parsing techniques such as shift-reduce have gained significant popularity thanks to their efficiency, but there remains a major problem: the search is greedy and only explores a tiny fraction of the whole space (even with beam search) as opposed to dynamic programming. We show that in most cases dynamic programming is indeed possible for shift-reduce parsing, by merging “equivalent” stacks based on feature values. Empirically, our algorithm leads to up to 5-fold speedup over a state-of-the-art shift-reduce dependency parser with no loss in accuracy. Better search also leads to better learning, and our final parser achieves the highest accuracy and the fastest speed among dependency parsers trained on the Treebank.
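
The central trick, treating beam items as interchangeable whenever they agree on every feature the scoring model consults, can be sketched as follows. This is a simplified illustration with hypothetical names, not the authors’ parser.

```python
# Sketch: merging "equivalent" shift-reduce items by feature signature.
# Two items with the same signature receive identical scores for all future
# actions, so dynamic programming keeps only the best-scoring one.
from collections import namedtuple

Item = namedtuple("Item", "buf stack score")  # buffer index, stack, score

def signature(item, words):
    # Only the features the model looks at enter the signature: here the
    # top two stack words and the next buffer word.
    s0 = words[item.stack[-1]] if item.stack else None
    s1 = words[item.stack[-2]] if len(item.stack) > 1 else None
    q0 = words[item.buf] if item.buf < len(words) else None
    return (item.buf, s0, s1, q0)

def merge_beam(beam, words):
    best = {}
    for item in beam:
        sig = signature(item, words)
        if sig not in best or item.score > best[sig].score:
            best[sig] = item  # keep the max-scoring representative
    return list(best.values())
```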

MCIT Marks One Year and Over 15,000 Trained

Portable interactive system helps troops address improvised explosive devices – the most lethal element of combat today. Successful collaboration credited for rapid development and deployment of high volume trainer.

It has been just over one year since the first Mobile Counter-IED Interactive Trainer, or MCIT, was set up at Fort Bragg, N.C. to assist soldiers in recognizing and reacting to improvised explosive devices. In that time, over 15,000 warfighters nationwide have gone through the modified shipping containers featuring fictional video narratives and a computer game that are now set up at three different bases across the country. Other installations are at Camp Pendleton, Calif. and Camp Shelby Joint Forces Training Center, Miss.

“From the time this contract was awarded in 2009 to this one-year-anniversary milestone, this project has shown the power of strong relationships and collaborations across many organizations united in wanting to make a difference for our soldiers,” said Randall W. Hill, Jr., executive director of the University of Southern California Institute for Creative Technologies, which was awarded the initial MCIT contract. “The positive impact this trainer has made in such a short time should be a great source of pride for all of us.”

MCIT was funded through the Joint IED Defeat Organization and the Joint Center of Excellence, which also provided subject matter experts integral to the project. ICT’s program managers at the U.S. Army RDECOM Simulation, Training and Technology Center (STTC) oversee it. Industry partner A-T Solutions was responsible for the original concept, administering the physical build and deployment of the system including field service representatives.

Isolated Ground, Psychic Bunny, Blind Spots Content, ExPlus, Quicksilver Software, and Stranger Entertainment provided additional creative, design and production services.

The MCIT containers each house interactive exercises that combine many of ICT’s core strengths in using storytelling, videogames and simulations as teaching tools. For example, story vignettes from an insurgent bomb maker and US soldiers help deliver the training materials and guide the trainees through the self-paced experience. Students also participate in a multiplayer videogame where they assume the role of either an insurgent ambush team or a coalition military patrol.

A recent story in Defense News online reports that, “MCIT gets a thumbs-up from Marines who have used it at Camp Pendleton.”

“Our Marines get wrapped around the axle looking for the IED,” said Sgt. Alexander Wilterdink in the June article. “This helps break us out of the box, looking for the different components of the terrorist cell, like the triggerman or the camera guy, that we usually forget about.”

Go to the MCIT project page.

Arno Hartholt, Kim LeMasters, Jonathan Gratch: “Creating Gunslinger, By Far the World’s Most Fun Mixed-Reality, Multi-Agent, Story-Driven, Interactive Experience”

A talk about how to create human engagement using advanced AI and traditional Hollywood storytelling and stagecraft.

Paul Rosenbloom at Visual Representations and Reasoning Workshop at AAAI-10

A new approach to implementing cognitive architectures based on graphical models holds the potential for simpler yet more functional architectures. It also raises the possibility of incorporating visual representation and reasoning into architectures in a manner that is uniformly implemented, and tightly coupled, with both perception and cognition. While much of this is still highly speculative, the core of how it might work is outlined here.

H. Chad Lane, Mark Core, Eric Forbell, Robert Wray, Brian Stensrud, Laura Hamel: “Pedagogical Experience Manipulation for Cultural Learning”

Acquiring intercultural competence is challenging. Although intelligent learning environments developed to enhance cultural learning can be effective, there is limited evidence regarding how best to dynamically manipulate these environments in support of learning. Further, the space of potential manipulations is extremely large and sometimes tangled with the implementation details of particular learning systems in particular domains. This paper offers a framework for organizing approaches to such dynamic tailoring of the learning experience. The framework is hypothesized to be useful as a general tool for the community to organize and present alternative approaches to tailoring. To highlight the use of the framework, we examine one potential tailoring option in detail in the context of an existing simulation for learning intercultural competence.

Counter-IED Training in the News

Two recent stories featured the Mobile Counter-IED Interactive Trainer, or MCIT.

An article in Defense News describes the underlying philosophy of thinking like an insurgent that MCIT was built upon and explains how ICT’s contributions in video games, storytelling and multimedia techniques helped their industry partner, A-T Solutions, realize their original design concept. “They carry out a complex attack and think like the bad guys,” Todd Richmond, ICT’s project manager for the training system, said. “What are the terrain features for placing our device? Where do we put our guys? My spotter needs a line of sight and he needs markers. The triggerman has to be able to see when to trigger. My security guy has to be in a position so when the bomb goes off, he can provide secondary fire. Or they can do an RPG attack first and the IED second. The beauty of this not being computer-based is that these 18-year-old kids are all gamers. They understand pretty well how to conduct an ambush.”

Another story in Defense Systems calls MCIT, “more Hollywood than Fort Leonard Wood.” The article goes on to say, “ICT is making strides in counter-IED training, partnering with the military and industry contractors to simulate combat environments.  Movie set-inspired stages are housed in trailers that teams of trainees navigate. The trailers can be quickly deployed and set up almost anywhere.”

“In these [training scenarios], the MCIT is teaching themes and concepts, not just facts,” said Todd Richmond in the story.

Read the Defense News story.

Read the Defense Systems story.

Sheryl Flynn and Belinda Lange: “Gaming and rehabilitation”

We’ve Moved!

We’ve said good-bye to our Marina del Rey offices and are getting ready to relocate to our new home in nearby Playa Vista. See pictures of our new place on our Facebook page and click here for directions if you are planning to visit.

Here’s what Randall W. Hill, Jr., our executive director, had to say about our new space, which, incidentally, is located at the former home of Hughes Aircraft and the birthplace of the Spruce Goose.

“Our new building has been designed to showcase all the brilliant technologies and prototypes that are being developed at ICT.  The ICT story will be told throughout the building.  It will maximize our ability to collaborate—among ourselves, with universities, industry and the international community.  The new building has a large amount of public space to enable this sort of collaboration: a beautiful reception area, a large theater, cafe, meeting rooms, breakout rooms, conference rooms, and an amphitheater.  Finally, it will be a fun place to work.  There will be a gym, a quiet room/library, soccer fields, basketball courts, a park and outdoor amphitheater across the street.  Let’s dream together about what kind of space we can make it.”

See our photo album here.

Jonathan Gratch and Sin-hwa Kang: “‘It doesn’t matter what you are!’: Explaining Social effects of agents and avatars”

Empirical studies have repeatedly shown that autonomous artificial entities, so-called embodied conversational agents, elicit social behavior on the part of the human interlocutor. Various theoretical approaches have tried to explain this phenomenon: According to the Threshold Model of Social Influence (Blascovich et al., 2002), the social influence of real persons who are represented by avatars will always be high, whereas the influence of an artificial entity depends on the realism of its behavior. Conversely, the Ethopoeia concept (Nass & Moon, 2000) predicts that automatic social reactions are triggered by situations as soon as they include social cues. The presented study evaluates whether participants’ belief in interacting with either an avatar (a virtual representation of a human) or an agent (autonomous virtual person) leads to different social effects. We used a 2×2 design with two levels of agency (agent or avatar) and two levels of behavioral realism (showing feedback behavior versus showing no behavior). We found that the belief of interacting with either an avatar or an agent barely resulted in differences with regard to the evaluation of the virtual character or behavioral reactions, whereas higher behavioral realism affected both. It is discussed to what extent the results thus support the Ethopoeia concept.

ICT’s Mark Bolas in The Christian Science Monitor

After last week’s E3 conference, The Christian Science Monitor wrote an article about gesture control and gesture research. They spoke to ICT’s Mark Bolas to learn about emerging technologies in the field.  “The adoption pattern is similar to that of many other new technologies. Most of the new gesture-driven games on display at this past week’s E3 simply replace the actions once taken by analog buttons with arm waves and leg jabs,” said Bolas who runs the MRRD lab at ICT and is an associate professor in the Interactive Media Division of USC’s School of Cinematic Arts.

Read the full story here.

Mark Bolas, Logan Olson: “Design Approach for Multi-Touch Interfaces in Creative Production Environments”

Multi-touch has gained a lot of interest in the last couple of years, and the increased availability of multi-touch enabled hardware has boosted its development. However, the current diversity of hardware, toolkits, and tools for creating multi-touch interfaces has its downsides: there is little reusable material and no generally accepted body of knowledge when it comes to the development of multi-touch interfaces. This workshop seeks a consensus on methods, approaches, toolkits, and tools that aid in the engineering of multi-touch interfaces and transcend the differences in available platforms. The patterns mentioned in the title indicate that we are aiming to create a reusable body of knowledge.

H. Chad Lane, Matthew Jensen Hays, Daniel Auerbach, Mark Core: “Investigating the relationship between presence and learning in a serious game”

We investigate the role of presence in a serious game for intercultural communication and negotiation skills by comparing two interfaces: a 3D version with animated virtual humans and sound against a 2D version using text-only interactions with static images and no sound. Both versions provide identical communicative action choices and are driven by the same underlying simulation engine. In a study, the 3D interface led to a significantly greater self-reported sense of presence, while both versions produced significant, equivalent learning on immediate posttests for declarative and conceptual knowledge related to intercultural communication. Log data reveals that 3D learners needed fewer interactions with the system than those in the 2D environment, suggesting they benefited equally with less practice and may have treated the experience as more authentic.

Ning Wang, Jonathan Gratch: “Don’t Just Stare at Me!”

Communication is more effective and persuasive when participants establish rapport. Tickle-Degnen and Rosenthal argue rapport arises when participants exhibit mutual attentiveness, positivity and coordination. In this paper, we investigate how these factors relate to perceptions of rapport when users interact via avatars in virtual worlds. In this study, participants told a story to what they believed was the avatar of another participant. In fact, the avatar was a computer program that systematically manipulated levels of attentiveness, positivity and coordination. In contrast to Tickle-Degnen and Rosenthal’s findings, high levels of mutual attentiveness alone can dramatically lower perceptions of rapport in avatar communication. Indeed, an agent that attempted to maximize mutual attention performed as poorly as an agent that was designed to convey boredom. Adding positivity and coordination to mutual attentiveness, on the other hand, greatly improved rapport. This work unveils the dependencies between components of rapport and informs the design of agents.

Matthew Jensen Hays, Amy Ogan, H. Chad Lane: “The Evolution of Assessment: Learning about Culture from a Serious Game”

In ill-defined domains, properly assessing learning is, itself, an ill-defined problem. Over the last several years, the domain of interest to us has been teaching Americans about Iraqi business culture via a serious-game-based practice environment. We describe this system and the various measures we used in a series of studies to assess its ability to teach. As subsequent studies identified the limits of each measure, we selected additional measures that would let us better understand what and how people were learning, using Bloom’s revised taxonomy as a guide. We relate these and other lessons we learned in the process of refining our solution to this ill-defined problem.

Stacy Marsella: “Modelling Emotion and its Expression”

Emotion and its expression play a powerful role in shaping human behavior. As research has revealed the details of emotion’s role, researchers and developers increasingly have sought to exploit these details in a range of applications. Work in human-computer interaction has sought to infer and influence a user’s emotional state as a way to improve the interaction. Tutoring systems, health interventions and training applications have sought to regulate or induce specific, often quite different, emotional states in learners in order to improve learning outcomes. A related trend in HCI work is the use of emotions and emotional displays in virtual characters that interact with users in order to motivate, engender empathy, induce trust or simply arouse.

Common to many of these applications is the need for computational models of the causes and consequences of emotions. To the extent that emotion’s impact on behavior can be modeled correctly in artificial systems, it can facilitate interactions between computer systems and human users. In this talk, I will give an overview of some of the applications that seek to infer and influence a user’s emotions. I will then go into detail on how emotions can be modeled computationally, including the theoretical basis of the models, how we validate models against human data and how human data are also used to inform the animation of virtual characters.

ICT Science and Technology in Miller-McCune

Miller-McCune magazine featured a comprehensive story about ICT research and applications. The story describes ICT’s personnel as, “teams of computer scientists, graphics visionaries, artificial-intelligence wizards, social-science experts, digital game makers and Hollywood storytellers who are taking the notion of virtual reality to a new level of fidelity, creating immersive environments that, among other things, help America’s soldiers experience the culture of Iraq and Afghanistan before they go and treat them for post-traumatic stress when they return.” The story also says, “the institute is at the forefront of creating ‘virtual humans’ — that is, extremely lifelike animations that, through the near-magic of artificial-intelligence algorithms and a host of intertwined technologies, can respond in realistic ways to the speech and actions of actual human beings.”

Read the full story here.

Admiral Michael G. Mullen, Chairman of the Joint Chiefs of Staff, Visits USC

Admiral Michael G. Mullen, chairman of the Joint Chiefs of Staff, the principal military advisor to the president, the secretary of defense, the National Security Council and the Homeland Security Council, visited the USC School of Social Work on Friday. While there, he was treated to a demonstration of ICT’s virtual patient, Lt. Rocko, which is going to be used to help train future social workers in clinical interviewing and diagnostic skills related to issues involving military personnel, veterans and their families.  The virtual patient and other ICT-developed technologies, including providing virtual reality therapy for treating post-traumatic stress, are to be part of the curriculum of the USC School of Social Work’s specialization in military social work and veteran services.  The new program is designed to prepare social workers and other trained mental health professionals to help the nation’s armed forces personnel, military veterans and their families manage the pressures of military life and post-war adjustments. After the demonstration, the School of Social Work and the USC Center for Innovation and Research on Veterans and Military Families hosted Admiral Mullen in a town hall meeting at Town and Gown.

Read more here.

Ning Wang to Present at the Tenth International Conference on Intelligent Tutoring Systems

Previous studies on the Politeness Effect show that using politeness strategies in tutorial feedback can have a positive impact on learning (McLaren et al. 2010; Wang and Johnson 2008; Wang et al. 2005). While prior research efforts tried to uncover the mechanism through which the politeness strategies impact the learner, the results were inconclusive. Further, it is unclear how the politeness strategies should adapt over time. In this paper, we analyze videotapes of participants’ facial expressions while interacting with a polite or direct tutor in a foreign language training system. The Facial Action Coding System was then used to analyze the facial expressions. Results show that as social distance decreases over time, polite feedback is received less favorably while the preference for direct feedback increases.

H. Chad Lane: “Virtual Humans with Secrets”

Virtual humans are animated, lifelike characters capable of free-speech and nonverbal interaction with human users. In this paper, we describe the development of two virtual human characters for teaching the skill of deception detection. An accompanying tutoring system provides solicited hints on what to ask during an interview and unsolicited feedback that identifies properties of truthful and deceptive statements uttered by the characters. We present the results of an experiment comparing use of virtual humans with tutoring against a no-interaction (baseline) condition and a didactic condition. The didactic group viewed a slide show consisting of recorded videos along with descriptions of properties of deception and truth-telling. Results revealed that both groups significantly outperformed the no-interaction control group in a binary decision task to identify truth or deception in video statements. No significant differences were found between the training conditions.

Paul Debevec Presents at E3 Expo 2010 at the Los Angeles Convention Center

Paul Debevec will participate as speaker/panelist at the E3 Expo 2010 being held at the Los Angeles Convention Center.

Paul Debevec and ICT’s Light Stage Technology in the Economist

An article in The Economist highlighted the facial rendering system developed by Paul Debevec and colleagues. The process, used to create digital faces in such films as “Spider-Man 2,” produces results so realistic that individual wrinkles can be clearly seen, the story reported. “There’s a new genre rising here,” Debevec said. “This technology will probably be used to bring actors back to the screen who have long since died.”

Read the full story »

Edward Haynes, Jacki Morie, Eric Chance: “Jogging in a Virtual World Using Breath as Avatar Control”

Recent research in the fields of Complementary and Alternative Medicine (CAM) and virtual technologies suggests some potentially new and beneficial therapies that may help returning servicemen who present symptoms of psychological stress. CAM therapies are now being validated with evidence-based research. Data from the emerging field of Self-Perception Theory shows that use of avatars in virtual worlds can affect a user’s psychology and behavior. These findings, discussions with psychologists who use Mindfulness-Based Stress Reduction (a validated CAM therapy), and advice from experts in military social work have led us to develop a virtual jogging system in the online virtual world of Second Life. Our system is novel in that we use breathing to control the movement of the avatar rather than keyboard controls. There is no spirometer; breath detection is instead done with a microphone, using the seldom-used volume-level detection technology available in Second Life, interpreted through a custom scripting solution.

Edward Haynes, Jacki Morie and Eric Chance: “Jogging in a Virtual World Using Breath as Avatar Control”

Edward Haynes, Jacki Morie and Eric Chance will present their research paper entitled, “Jogging in a Virtual World Using Breath as Avatar Control” at the CyberPsychology and CyberTherapy Conference.

David Traum: “Intentions, Belief, Common Ground and Communication”

Intentions (including Gricean intentions that these intentions be recognized) play an important part in both the planning and understanding of communicative behavior. Some behavior, such as indirect requests or cooperative responses can hardly be understood without recourse to such concepts. However, the relationship between such attitudes and communication is a complex one, in which intentions and intention recognition are neither necessary nor sufficient for communication and are in some cases precluded for successful communication. We will present a view on which intention adds more meaning to communication when it can be ascribed, but is not basic to the communication process. From this view we also taxonomize different types of dialogue acts depending on presence or absence of different (ascribed) mental and social attitudes. We then use this perspective to analyze human-machine dialogue, in which a machine takes on conversational roles typically held by humans. We claim that it is reasonable (and perhaps necessary) to ascribe mental attitudes to machines engaging in dialogue, and such attitudes share some but not all of the properties of human attitudes. Such dialogues have added complexity in that one must also consider the intentions and other mental attitudes of the system designer and perhaps the system (in a multiagent system) as well as those of the individual agent.

Jorge Silva, Shrikanth Narayanan: “On Data-Driven Histogram-Based Estimation for Mutual Information”

The problem of mutual information (MI) estimation based on data-dependent partition is addressed in this work. Sufficient conditions are stipulated on a histogram-based construction to guarantee a strongly consistent estimate for the MI. The practical implications of this result are in the specification of a range of design parameters for two data-driven histogram-based approaches – statistically equivalent blocks and tree-structured vector quantizations – to yield density-free strongly consistent estimates for the MI.
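
For concreteness, the plug-in estimator that histogram-based schemes refine can be written in a few lines. This fixed-bin sketch is only the baseline; the paper’s contribution concerns data-driven partitions and the conditions under which the resulting estimates are strongly consistent.

```python
# Minimal plug-in MI estimate from a joint histogram (fixed bins, in nats).
import numpy as np

def mutual_information(x, y, bins=16):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                      # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    mask = pxy > 0                        # avoid log(0) on empty cells
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())
```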

ICT’s Jacki Morie Named Director-at-Large of ACM SIGGRAPH Executive Committee

The results are in! ICT’s Jacki Morie has once again been elected to the ACM SIGGRAPH Executive Committee as Director-at-Large.  Morie hopes to continue steering the organization into a rich future.  Morie will serve on the ACM SIGGRAPH Executive Committee for a three-year term beginning July 1, 2010.

Arno Hartholt, Jon Gratch, Stacy Marsella, Kim LeMasters: “Creating Gunslinger, By Far the World’s Most Fun Mixed-Reality, Multi-Agent, Story-Driven, Interactive Experience”

Gunslinger is an interactive-entertainment application of virtual humans that transforms an iconic Wild West movie scene into a vivid semblance of reality. The project combines virtual human technology with Hollywood storytelling & set building into an engaging, mixed-reality, story-driven experience, where a single participant can interact verbally and non-verbally with multiple virtual characters that are embedded in a physical saloon. All of these techniques and technologies are used with the primary goal of engaging and immersing the participant. This talk focuses on the design and development processes behind Gunslinger, emphasizing the importance of rich interaction between elements, rather than a single approach, in achieving a unique and engaging experience.

David Traum, David DeVault and Kenji Sagae: “Interpretation of Partial Utterances in Virtual Human Dialogue Systems”

Dialogue systems typically follow a rigid pace of interaction where the system waits until the user has finished speaking before producing a response. Interpreting user utterances before they are completed allows a system to display more sophisticated conversational behavior, such as rapid turn-taking and appropriate use of backchannels and interruptions. We demonstrate a natural language understanding approach for partial utterances, and its use in a virtual human dialogue system that can often complete a user’s utterances in real time.

Paul Rosenbloom: “A Graphical Memory Architecture”

Paul Rosenbloom discusses joint research with John Laird, conducted under presidential funding, in a talk titled, “A Graphical Memory Architecture.” This talk discusses a new approach to building diverse long-term memories – including procedural, semantic and episodic structures – for cognitive architectures in a general yet uniform manner based on graphical models.

Anton Leuski: “NPCEditor: A Tool for Building Question-Answering Characters”

ICT’s Anton Leuski will give a talk called, “NPCEditor: A Tool for Building Question-Answering Characters” at The International Conference on Language Resources and Evaluation. NPCEditor is a system for building and deploying virtual characters capable of engaging a user in spoken dialog on a limited domain. The dialogue may take any form as long as the character responses can be specified a priori. For example, NPCEditor has been used for constructing question answering characters where a user asks questions and the character responds, but other scenarios are possible. At the core of the system is a state-of-the-art statistical language classification technology for mapping from user’s text input to system responses. NPCEditor combines the classifier with a database that stores the character information and relevant language data, a server that allows the character designer to deploy the completed characters, and a user-friendly editor that helps the designer to accomplish both character design and deployment tasks. In the paper we define the overall system architecture, describe individual NPCEditor components, and guide the reader through the steps of building a virtual character.
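
The core mapping, from free-form user text to a fixed set of character responses, resembles an ordinary text-classification pipeline. The sketch below shows that generic idea with toy data; it is not NPCEditor’s actual statistical classification model.

```python
# Generic sketch of mapping user questions to canned character responses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

questions = ["what is your name", "who are you", "where do you live"]
response_ids = [0, 0, 1]  # several question phrasings share one response
responses = {0: "I am a virtual guide.", 1: "I live right here on this base."}

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(questions, response_ids)
print(responses[clf.predict(["tell me your name"])[0]])
```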

ICT Researchers Talk at The International Conference on Language Resources and Evaluation

As conversational agents are now being developed to encounter more complex dialogue situations it is increasingly difficult to find satisfactory methods for evaluating these agents. Task-based measures are insufficient where there is no clearly defined task. While user-based evaluation methods may give a general sense of the quality of an agent’s performance, they shed little light on the relative quality or success of specific features of dialogue that are necessary for system improvement. This paper examines current dialogue agent evaluation practices and motivates the need for a more detailed approach for defining and measuring the quality of dialogues between agent and user. We present a framework for evaluating the dialogue competence of artificial agents involved in complex and underspecified tasks when conversing with people. A multi-part coding scheme is proposed that provides a qualitative analysis of human utterances, and rates the appropriateness of the agent’s responses to these utterances. The scheme is outlined, and then used to evaluate Staff Duty Officer Moleno, a virtual guide in Second Life.

ICT Researchers Earn Best Paper Award

Lixing Huang, Louis-Philippe Morency and Jon Gratch received the Best Virtual Agents Paper Award at the AAMAS Conference for their paper, Parasocial Consensus Sampling: Combining Multiple Perspectives to Learn Virtual Human Behavior.

Another paper, Evaluating Models of Speaker Head Nods for Virtual Agents by Jina Lee, Zhiyang Wang and Stacy Marsella was also one of the finalists. Congratulations to all for the well-deserved recognition.

Fast Company Features ICT Virtual Patients

An article on the Fast Company website showcased Lt. Rocco, an ICT-developed virtual patient, who is among the virtual human characters being developed to help train clinicians in treating and diagnosing mental illness in returning veterans and soldiers. These military-themed avatars will be deployed as part of specialized clinical training in the new military concentration of the masters degree program at the USC School of Social Work. The goal of the program is to train a new generation of clinical social workers to deal with veterans’ mental trauma.

The article states that ICT’s Skip Rizzo and his team are working to create a female soldier as a “training tool for clinicians to practice sensitive interviewing skills for addressing the growing problem of sexual assault within military ranks.” Rizzo says the system is also being designed for use by command staff to foster better skills for recognizing the signs of sexual assault in subordinates under their command and for improving the provision of support and care.

Read the story.

Andrew Gordon: “Mining Commonsense Knowledge From Personal Stories in Internet Weblogs”

David Krum Presents at Ubiprojection at Pervasive 2010

David Krum will present his paper accepted to the Ubiprojection workshop, which is part of the Pervasive 2010 conference (Eighth International Conference on Pervasive Computing). The title of Krum’s presentation is “Augmented Reality Applications and User Interfaces Using Head-Coupled Near-Axis Personal Projectors with Novel Retroreflective Props and Surfaces,” and the paper is a result of the ICT Seedling supporting the head-mounted projector project.

One motivation for the development of augmented reality technology has been the support of more realistic and flexible training simulations. Computer-generated characters and environments – combined with real world elements such as furniture and props to ‘set the stage’ – create the emotional, cognitive, and physical challenges necessary for well-rounded team-based training. This paper presents REFLCT, a mixed reality staging and display system that couples an unusual near-axis personal projector design with novel retroreflective props and surfaces. The system enables viewer-specific imagery to be composited directly into and onto a surrounding environment, without optics positioned in front of the user’s eyes or face. Characterized as a stealth projector, it unobtrusively offers bright images with low power consumption. In addition to training applications, the approach appears to be well-matched with emerging user interface and application domains, such as asymmetric collaborative workspaces and mobile personalized guides.

Sydney Morning Herald Covers ICT Virtual Reality Therapy and Games

An article in Australia’s Sydney Morning Herald featured ICT virtual therapies and games developed by Skip Rizzo that are used to treat PTSD in soldiers as well as distract children from painful medical treatments.

Read the full story.

Trojan Family Magazine Covers ICT

Three stories in the summer 2010 issue of Trojan Family Magazine cover aspects of ICT’s work. One article covers ICT’s Virtual Patient and Virtual Iraq technologies that will be part of the new USC military social work program. Another story spotlights Paul Debevec and the ICT Graphics Lab’s involvement in helping create the special effects in James Cameron’s Avatar. A large feature on gaming at USC discusses ICT training games BiLAT and UrbanSim as well as the work of the ICT Graphics Lab.

“As our technologies become widely available,” said ICT Executive Director Randall W. Hill, Jr., “not only will game designers finally be able to undertake emotionally complex projects they’ve been considering for years, but also, more importantly, they’ll be thinking up entirely new kinds of games not yet imagined.”

Download the summer 2010 issue.

Stacy Marsella and Jon Gratch Receive Research Award, Give Invited Talk at AAMAS

Jon Gratch and Stacy Marsella were recognized by the Association for Computing Machinery’s Special Interest Group on Artificial Intelligence (ACM/SIGART) with the 2010 Autonomous Agents Research Award, an annual award for excellence for researchers influencing the field of autonomous agents.

As part of the honor, the pair gave talks at AAMAS-2010, the 9th International Conference on Autonomous Agents and Multiagent Systems, which took place at the Sheraton Centre Toronto Hotel in downtown Toronto, Canada, May 10-14, 2010. AAMAS is the premier scientific conference for research on autonomous agents and multiagent systems.

In granting the award, the selection committee cited the pair’s significant and sustained contributions in the area of virtual agents. “Their work balances theoretical and engineering achievements, allowing the understanding of the factors and processes underlying how emotion affects behaviors,” they wrote. “They have also proposed a novel way to validate computational models of human emotions.”

Visit the Award Site.

Read ICT’s award announcement.

Jonathan Ito, David Pynadath, Liz Sonenberg, Stacy Marsella: “Wishful Thinking in Effective Decision Making”

Creating agents that act reasonably in uncertain environments is a primary goal of agent-based research. In this work we explore the theory that wishful thinking can be an effective strategy in uncertain and competitive decision scenarios. Specifically, we present the constraints necessary for wishful thinking to outperform Expected Utility Maximization, and take instances of popular games from the game-theoretic literature, showing how they relate to our constraints and whether they can benefit from wishful thinking.

Kallirroi Georgila to Present Accepted Paper at Speech Prosody 2010

At this conference, Georgila will present the following paper: “Prediction and Realisation of Conversational Characteristics by Utilising Spontaneous Speech for Unit Selection.”

Unit selection speech synthesis has reached high levels of naturalness and intelligibility for neutral read aloud speech. However, synthetic speech generated using neutral read aloud data lacks all the attitude, intention and spontaneity associated with everyday conversations. Unit selection is heavily data dependent and thus in order to simulate human conversational speech, or create synthetic voices for believable virtual characters, we need to utilise speech data with examples of how people talk rather than how people read. In this paper we included carefully selected utterances from spontaneous conversational speech in a unit selection voice. Using this voice and by automatically predicting type and placement of lexical fillers and filled pauses we can synthesise utterances with conversational characteristics. A perceptual listening test showed that it is possible to make synthetic speech sound more conversational without degrading naturalness.

Jacki Morie and Edward Haynes Host a Workshop at the Federal Consortium of Virtual Worlds

Morie and Haynes will host a workshop to show attendees how they created the world in their project, TOPSS: “Transitional Online Post-Deployment Soldier Support in Virtual Worlds.”

Jerry R. Hobbs, Andrew Gordon: “Goals in a Formal Theory of Commonsense Psychology”

Birgit Endrass, Lixing Huang, Jonathan Gratch, Elisabeth André: “Using Virtual Agents to Simulate Interpersonal Communication Coordination of Different Cultures”

Virtual agents are a great opportunity in teaching intercultural competencies. Advantages, such as the repeatability of training sessions, emotional distance to virtual characters, the opportunity to over-exaggerate or generalize behavior, or simply to save the costs for human training-partners, support that idea. Especially the way communication is coordinated varies across cultures. In this paper, we present our approach of simulating differences in the management of communication for the American and Arabic cultures. Therefore, we give an overview of behavioral tendencies described in the literature, pointing out differences between the two cultures. Grounding our expectations in empirical data, we analyzed a multi-modal corpus. These findings were integrated into a demonstrator using virtual agents and evaluated in a preliminary study.

Lixing Huang, Louis-Philippe Morency, Jonathan Gratch: “Parasocial Consensus Sampling: Combining Multiple Perspectives to Learn Virtual Human Behavior”

Virtual humans are embodied software agents that should not only be realistic looking but also have natural and realistic behaviors. Traditional virtual human systems learn these interaction behaviors by observing how individuals respond in face-to-face situations (i.e., direct interaction). In contrast, this paper introduces a novel methodological approach called parasocial consensus sampling (PCS), which allows multiple individuals to vicariously experience the same situation and thereby gain insight into the typical (i.e., consensus) view of human responses in social interaction. This approach can help tease apart what is idiosyncratic from what is essential and help reveal the strength of cues that elicit social responses. Our PCS approach has several advantages over traditional methods: (1) it integrates data from multiple independent listeners interacting with the same speaker, (2) it associates a probability with how likely feedback is to be given over time, (3) it can be used as a prior to analyze and understand face-to-face interaction data, and (4) it facilitates much quicker and cheaper data collection. In this paper, we apply our PCS approach to learn a predictive model of listener backchannel feedback. Our experiments demonstrate that a virtual human driven by our PCS approach creates significantly more rapport and is perceived as more believable than a virtual human driven by face-to-face interaction data.
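
In spirit (the details here are assumptions, not the paper's implementation), the consensus step reduces to averaging many listeners' time-stamped feedback traces into a probability of feedback over time:

```python
import numpy as np

# Sketch of the parasocial consensus idea: several coders watch the same
# speaker video and mark each moment where they would give backchannel
# feedback; averaging their binary traces yields a probability of feedback
# over time. The annotation rate and data below are illustrative assumptions.

FPS = 10                      # assumed annotation rate (frames per second)
duration = 5                  # seconds of speaker video
T = FPS * duration

rng = np.random.default_rng(0)
# Three listeners' binary backchannel traces (1 = "I would nod here").
listeners = rng.random((3, T)) < 0.15

consensus = listeners.mean(axis=0)              # P(feedback) at each frame
strong_cues = np.flatnonzero(consensus >= 2/3)  # moments most listeners agree on

print("consensus probability per frame:", np.round(consensus, 2))
print("high-agreement frames:", strong_cues)
```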

Jina Lee, Stacy Marsella: “Evaluating Models of Speaker Head Nods”

Virtual human research has often modeled nonverbal behaviors based on the findings of psychological research. In recent years, however, there have been growing efforts to use automated, data-driven approaches to find patterns of nonverbal behaviors in video corpora and even thereby discover new factors that have not been previously documented. However, there have been few studies that compare how the behaviors generated by different approaches are interpreted by people. In this paper, we present an evaluation study to compare the perception of nonverbal behaviors, more specifically head nods, generated by different approaches. Studies have shown that head nods serve a variety of communicative functions and that the head is in constant motion during speaking turns. To evaluate the different approaches of head nod generation, we asked human subjects to evaluate videos of a virtual agent displaying nods generated by a human, by a machine learning data-driven approach, or by a hand-crafted rule-based approach. Results show that there is a significant effect on the perception of head nods in terms of appropriate nod occurrence, especially between the data-driven approach and the rule-based approach.

Jonathan Gratch: “A data-driven approach to model culture-specific communication management styles for virtual agents”

Virtual agents are a great opportunity in teaching intercultural competencies. Advantages such as the repeatability of training sessions, emotional distance to virtual characters, the opportunity to over-exaggerate or generalize behavior, or simply saving the costs of human training partners support that idea. The way communication is coordinated, in particular, varies across cultures. In this paper, we present our approach to simulating differences in the management of communication for the American and Arabic cultures. To that end, we give an overview of behavioral tendencies described in the literature, pointing out differences between the two cultures. Grounding our expectations in empirical data, we analyzed a multi-modal corpus. These findings were integrated into a demonstrator using virtual agents and evaluated in a preliminary study.

Mei Si, Stacy Marsella, David Pynadath: “Evaluating Directorial Control in a Character-Centric Interactive Narrative Framework”

Interactive narrative allows the user to play a role in a story and interact with other characters controlled by the system. Directorial control is a procedure for dynamically tuning the interaction towards the author’s desired effects. Most existing approaches for directorial control are built within plot-centric frameworks for interactive narrative and do not have a systematic way to ensure that the characters are always well-motivated during the interaction. Thespian is a character-centric framework for interactive narrative. In our previous work on Thespian, we presented an approach for applying directorial control while not affecting the consistency of characters’ motivations. This work evaluates the effectiveness of our directorial control approach. Given the priority of generating only well-motivated characters’ behaviors, we empirically evaluate how often the author’s desired effects are achieved. We also discuss how the directorial control procedure can save the author effort in configuring the characters.

ICT FML Workshop

Examples of FML in use
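
FML was never finalized as a standard, so any concrete syntax is speculative. As a discussion aid, the sketch below assembles one hypothetical FML-style fragment; every element and attribute name is invented to illustrate the kinds of functions debated below (dialogue acts, emotion, interpersonal stance), not drawn from a real specification.

```python
# A hypothetical FML-style fragment, assembled with the standard library.
# All element and attribute names here are invented for illustration of the
# function types under discussion, not part of any real FML specification.
import xml.etree.ElementTree as ET

fml = ET.Element("fml")
turn = ET.SubElement(fml, "performative", {
    "id": "p1",
    "type": "inform",            # dialogue act
})
ET.SubElement(turn, "emotion", {"category": "joy", "intensity": "0.6"})
ET.SubElement(turn, "stance", {"politeness": "high", "dominance": "low"})
ET.SubElement(turn, "content", {"ref": "proposition-42"})

print(ET.tostring(fml, encoding="unicode"))
```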

Burning Issues

  • Relationship of FML and BML: division of labour – what goes where, e.g. role of linguistic syntax
  • Agent vs System point of view: should FML/BML be for a single autonomous agent or for groups of agents working as part of a unified presentation system?
  • How to express unintentional function – e.g. unsuccessful deception
  • Breaking/opening up the SAIBA pipeline
  • Stefan:
    • Clearly confine what FML is mainly for (<- important to take a practical stance here)
      • spec of everything that needs to be realized by behaviors in order to achieve some higher-level goal?
      • spec of everything that might possibly be causal of overt behavior (e.g. cognitive operations)?
      • context variables: part of FML?
      • goal/intention hierarchy – define where FML has to pick up from (e.g. include full-blown Gricean communicative intentions?)
      • required to make everything explicit (e.g. implicit speech acts)?
    • Structural issues (<- this is in my view one of the most important first steps we should tackle now)
      • what is the basic layout of fundamental structures: units, operations, dimensions, other parameters?
      • what dimensions are to be involved? how can they be combined (e.g. emotion and speech acts)? how can each of them be organized within sub-spaces (e.g. taxonomy of speech acts)?
      • granularity
      • compositionality and (in-)dependencies
      • preferences and conflict resolution rules
    • Possible starting points
      • pros and cons of taking things like DIT++, which provide a lot of insight but were not really meant for the same thing (e.g. annotation system vs. generation specification language)
      • working towards FML bottom-up from behavior planning issues (i.e. what needs a Behavior Planner?) or top-down from content and action planning issues?
      • adopting info-state update approach for FML? do we need an underlying concept of a dialogue model in relation to which FML is to work?
  • Paul
    • How is incrementality dealt with?
    • How to balance generality and representation of domain-specific information?
    • How is contextual information dealt with?
    • Do we need to make a clear distinction between reactive and planned behaviours?
    • Do we need to take into account multilinguality and also accessibility issues?
    • How does FML fit in with language generation architectures (such as Reiter & Dale and RAGS) and language production architectures (e.g., Levelt and De Ruiter)?
    • To what extent does FML allow for representation of non-communicative behaviours?

Possible FML Elements

  • Dialogue Acts
  • Information Structure
  • Emotion
  • Dishonesty/deception
  • explicitness/importance of information (implications for whether one should be subtle or overt, implicated or explicit, complementary or redundant in expressing the function)
  • interpersonal relationship management (status, dominance/submission, closeness/distance)
  • culture-specific factors & personality differences  (is this a function itself or a method of realizing function in behavior, or some of each)
  • behavior-level function (e.g. to nod head, regardless of what this means)
  • Environmental effects (i.e. monitoring status of objects/characters, capture of attention)
  • Conversational regulation (i.e. nonverbal backchannel feedback)

Other Issues

Paul: Rather than introduce specific elements, I have added some bullet points on issues related to specific elements/attributes. (References at the bottom of the page)

  • Should we distinguish between felt and expressed emotion?
  • Should we distinguish between explicitly communicated and implicated information?
  • Should we have a representation of private and shared information?
  • For referring expression generation, some kind of representation of context may be required. Do we also include salience (see Krahmer & Theune 2002 and Piwek 2009) and importance of objects in this?
  • Do we allow for multiple dialogue act tags for a single segment?
  • Dialogue participants: Do we need to distinguish participants, side participants, bystanders and eavesdroppers?
  • Should pragmatic parameters be included? For example, those of Hovy (1988) about the setting:
    • time (much, some, little)
    • tone (formal, informal, festive)
    • conditions (good, noisy)
  • Should parameters influencing politeness be included, e.g., those from Brown and Levinson (1987) which were operationalized in Walker et al. (1996):
    • Social distance
    • Power that the speaker has over the hearer
  • Do we need to represent bias, e.g., for agents that are not cooperative?

Students Look Back on their Day at ICT

On a Saturday in March, a group of Los Angeles area youth visited ICT to learn about the careers that can come from a solid understanding of math, science and engineering concepts. ICT researchers demonstrated applications including virtual reality therapy and the Light Stage systems. All the students came through the Pacoima-based non-profit MEND, an organization devoted to breaking the bonds of poverty by providing basic human needs and a pathway to self-reliance, which coordinated the outreach visit. Check out what several students had to say about their trip to ICT.

“Thanks, ICT, for inviting MEND to that amazing field trip.  The best part was when I met the twins.”– Viviana

“Thank you for the great time I had at USC ICT, and thank you for the cool things you showed us.  I hope I can go another time.” – Juan

“Thank you, ICT, for inviting us into your building of amazement and technology.  The part I loved most was the virtual world. Thank you for helping the soldiers who risk their lives to protect us from harm.” – Alexis

Recent UrbanSim Coverage from Around the Web

Several gaming sites have taken notice of ICT’s UrbanSim game for practicing stability and counterinsurgency operations. Here are some links to their stories.

Kotaku.com’s story

GamePolitics.com’s story 

Softpedia.com story

Army Hopeful About Virtual Reality for Treating Post Traumatic Stress

An article by the Army News Service covered an Army study of ICT’s virtual reality system for treating post traumatic stress.

The story stated that the Defense Department’s Centers of Excellence for Psychological Health and Traumatic Brain Injury have begun a pilot program that uses multi-sensory virtual reality to treat Soldiers with post-traumatic stress disorder. The program enables doctors to choose a scenario customized around a Soldier’s personal experience. In the article, Brig. Gen. Loree K. Sutton, director of the program, said she is very hopeful about the use of virtual reality but noted that no one approach will reach out and touch everyone.

“We owe these young Americans our very best,” Sutton said. “We know the issues of post-traumatic stress, these unseen wounds of war. If left in silence, they can be the deadliest wounds of all.”

Dr. Greg Reger, acting chief for the Defense Department’s National Center for Telehealth and Technology, Innovative Technology Applications Division, explained that the traditional approach to treatment is exposure therapy, which involves the individual (with the guidance of a doctor) confronting the anxiety issues instead of avoiding them. He said research has shown that individuals who have a high level of emotional engagement respond best to treatment. To increase emotional engagement, virtual reality enables servicemembers to confront these issues, which activates the memory and, potentially, treats PTSD.

“Treatment works and it’s getting better all the time,” Sutton said. “The earlier we can intervene, the better.”

Read the full story here.

Paul Debevec Gives Keynote Speech at FMX conference

Dr. Debevec is the keynote speaker at the FMX conference, held in Stuttgart, Germany. Dr. Debevec will also visit the Technical University in Berlin and the Babelsberg Studio.

David Herrera, David Novick, Dusan Jan, David Traum: “The UTEP-ICT Cross-Cultural Multiparty Multimodal Dialog Corpus”

To help answer questions about conversational control behaviors across cultures, a collaborative team from the University of Texas at El Paso and the Institute for Creative Technologies collected and partially coded approximately ten hours of audiovisual multiparty interactions in three different cultures and languages. Groups of four native speakers of Arabic, American English and Mexican Spanish completed five tasks and were recorded from six angles. Excerpts of four of the tasks were coded for proxemics, gaze, and turn-taking; interrater reliability had a Kappa score of about 0.8. Lessons learned from the multiparty corpus are being applied to the recording and annotation of a complementary dyadic corpus.

Paul Debevec Wins Viterbi Award

At its annual award ceremony, the USC Viterbi School of Engineering honored Paul Debevec, ICT associate director for graphics research and a research associate professor at the school, with an award for use-inspired research.

More info and photos from the event

H. Chad Lane Gives Keynote Speech at the Army’s National Junior Science and Humanities Symposium

Virtual humans are embodied, artificially intelligent characters that bring with them new social dimensions to computing. One of their most popular roles is that of pedagogical agent, or teacher. In the last decade, they have expanded to become role players for helping acquire and develop social and intercultural skills. Virtual Humans now exist for training in a variety of interpersonal contexts, including clinical interviewing, intercultural business, and even healthy play in autistic children. What does it take to build a virtual human? How can we best use them to provide virtual experiences for learning?

In this talk, I will present an overview of research at the Institute for Creative Technologies to design and build virtual humans that seek to promote learning. This includes work to build virtual humans for culture-specific training and collaborative work with the Boston Museum of Science to build virtual guides who answer questions about computer science and technology (http://www.mos.org/interfaces). I will discuss the underlying psychology of interacting with virtual humans and what it takes to build one, and summarize the results of a series of studies we have conducted that examine the efficacy of virtual humans for teaching culture and the role feedback plays in this process. My talk will conclude with some thoughts on the current limitations of virtual humans, how ongoing research is addressing those issues, and what the future may hold for virtual humans who want to help you learn.

ICT’s Thomas Parsons Presents at TRADOC Human Dimension Gap Conference

The use of neuropsychological and psychophysiological measures in studies of patients immersed in high-fidelity virtual environments offers the potential to develop current psychophysiological computing approaches into affective computing scenarios that can be used for assessment, diagnosis and treatment planning. Such scenarios offer the potential for simulated environments to proffer cogent and calculated responses to real-time changes in user emotion, neurocognition, and motivation. The presentation will describe 1) literature on virtual environments for neurocognitive and psychophysiological profiling of users’ individual strengths and weaknesses; and 2) real-time adaptation of virtual environments that could be used for virtual reality exposure therapy and cognitive rehabilitation. Specifically, I discuss an adaptive environment that uses the principles of flow, presence, neuropsychology, and psychophysiology to develop a novel approach to rehabilitative applications.
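
As a schematic illustration of the real-time adaptation idea (the signal names, target band and step size below are assumptions, not parameters from the presentation), such a system can be reduced to a simple biocybernetic control loop:

```python
# Minimal sketch of a biocybernetic adaptation loop: scenario intensity is
# nudged up or down to keep a physiological arousal estimate inside a target
# band. The band and step size are illustrative assumptions.

TARGET_LOW, TARGET_HIGH, STEP = 0.4, 0.7, 0.05

def adapt_intensity(intensity, arousal):
    if arousal > TARGET_HIGH:        # user over-aroused: ease off
        intensity = max(0.0, intensity - STEP)
    elif arousal < TARGET_LOW:       # user under-engaged: ramp up
        intensity = min(1.0, intensity + STEP)
    return intensity

intensity = 0.5
for arousal in [0.35, 0.38, 0.55, 0.75, 0.80, 0.60]:  # simulated arousal stream
    intensity = adapt_intensity(intensity, arousal)
    print(f"arousal={arousal:.2f} -> intensity={intensity:.2f}")
```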

Tibor Bosse, Jonathan Gratch, Johan Hoorn, Matthijs Pontier, Ghazanfar Siddiqui: “Comparing Three Computational Models of Affect”

In aiming for behavioral fidelity, artificial intelligence can no longer ignore the formalization of human affect. Affect modeling plays a vital role in faithfully simulating human emotion and in emotionally evocative technology that aims at being real. This paper offers a short exposé of three models concerning the generation and regulation of affect: CoMERG, EMA and I-PEFiCADM, each of which has been successfully applied in the agent and robot domain. We argue that the three models partly overlap and, where distinct, complement one another. We provide an analysis of the theoretical concepts and a blueprint of an integration, which should result in a more precise representation of affect simulation in virtual humans.

Paul Debevec to Participate in Academy of Motion Pictures Panel on Acting in the Digital Age

Revolutionary developments in digital technology are impacting every aspect of filmmaking, including the discipline of acting. Today’s actors are performing in ways their predecessors couldn’t have imagined.
Short Films and Feature Animation Branch governor Bill Kroyer will lead an evening focused on two Oscar-winning films – “The Curious Case of Benjamin Button” (2008) and “Avatar” (2009) – to demonstrate how innovations in performance-capture technology, digital doubles, digital makeup, photorealism and image manipulation are affecting actors.

“Acting in the Digital Age” will feature a panel of cast and crew members from both films who will discuss the challenges and opportunities that come with these changing technologies. The program includes film clips and behind-the-scenes footage of the actors at work.

Kroyer received an Academy Award nomination for his 1988 short film “Technological Threat,” which pioneered the technique of combining hand-drawn and computer animation. He directed the animated feature film “FernGully: The Last Rainforest” (1992), was senior animation director at Rhythm & Hues Studios in Los Angeles, and is Director of Digital Arts of the Dodge College of Film and Media Arts at Chapman University.

Scheduled guests:
Peter Donald Badalamenti – Actor, “The Curious Case of Benjamin Button”
Richard Baneham – Animation supervisor, “Avatar” (2009 Academy Award winner, Visual Effects)
Paul Debevec – Co-creator of Light Stage (2009 Scientific and Technical Award winner)
Jon Landau – Producer, “Avatar” (2009 Best Picture nominee)
Joel David Moore – Actor, “Avatar”
CCH Pounder – Actor, “Avatar”
Steve Preeg – Character supervisor, “The Curious Case of Benjamin Button” (2008 Academy Award winner, Visual Effects)
Andy Serkis – Actor, “King Kong,” “The Lord of the Rings” trilogy
Wes Studi – Actor, “Avatar”
Garrett Warren – Stunt coordinator, “Avatar”

TEDxUSC Showcases ICT Innovations

Skip Rizzo, ICT’s associate director for medical virtual reality, was one of the featured presenters at the 2010 TEDxUSC event. Rizzo demonstrated his virtual reality therapy designed to treat post traumatic stress in soldiers and veterans. He also spoke to the audience of over 1,200 people about ICT’s collaboration with the USC School of Social Work’s specialization in military social work and veterans services, which will incorporate training in virtual reality therapy and other ICT-developed technologies, including virtual patients for diagnostic and interview skill training. At the TEDxUSC reception, attendees experienced live demonstrations of ICT’s Virtual Museum Guides, live 3D Video Teleconferencing System and motor and rehab therapies. The TED conference began over 25 years ago to bring together some of the world’s greatest thinkers. In 2009 USC piloted the TEDx program – a new program of independently organized events. That successful event was followed by more than 500 TEDx events held in 60 countries and 25 languages.

Read about the event here.

See more photos from photographer Steve Cohn.

Read more about TEDxUSC.

San Diego Union Tribune Covers ICT Technologies Helping Make Decisions Under Stress

A story about the Marine Corps Infantry Immersive Trainer at Camp Pendleton noted that the mixed-reality training facility was built on years of research by the Navy and software development by the University of Southern California’s Institute for Creative Technologies, among other organizations. The story reports that the Marine Corps plans to expand its use of infantry immersion training to help small units make better decisions under stress and that Navy research shows that simulated battlefield scenarios also may help inoculate troops against debilitating psychological effects of war-zone chaos, including post-traumatic stress disorder. “The magic of combat simulators is the ability to stop, push rewind and try again. The Marines may not get a second chance in the war zone, but here in the Infantry Immersion Trainer, a do-over is as easy as walking around the building and rebooting the computer,” stated the report.

Read the full story here.

Army.mil Covers ICT “Technologies of the Future”

Two recent stories on the Army.mil website give an overview of ICT tools and technologies used by the Army, from virtual humans to serious games to virtual reality. Command Sgt. Maj. Hector Marin, RDECOM’s senior enlisted advisor, was quoted in one article and was enthusiastic about what he saw on his recent visit to ICT. “These are the great minds that develop the Army’s technology of the future,” Marin said.

Read the ICT overview story.

Read the gaming story.

Army Studies Virtual Iraq – ICT’s Virtual Reality System for Treating PTSD

An article on NextGov.com, a website that covers government uses of technology, featured the Army’s use and study of Virtual Iraq, ICT’s virtual reality based therapy for treating PTSD. The story states that, faced with a PTSD rate as high as 35 percent among veterans of the Iraq war, the Army has initiated a four-year study at the Madigan Army Medical Center in Tacoma to track the results of using virtual reality to treat the disorder. The PTSD project is managed by the Center for Telehealth and its parent unit, the Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury. According to Dr. Greg Reger, an Army psychologist with the Defense Department’s National Center for Telehealth and Technology, the study marks the first time virtual reality has been tested with active duty soldiers to find out how the treatment compares with traditional talk therapy. Another story covered the use of telemedicine and websites to address diagnosing and supporting soldiers and their families.

Read the story.

ICT’s Jonathan Gratch and Stacy Marsella Honored for Virtual Human Breakthroughs

Research recognizing and replicating emotion’s role in thought and behavior earns duo top international award in their field.

From computer-generated counselors who help mothers of sick children make decisions under stress to virtual humans who can sway the course of a negotiation depending on whether the human participant is angry or amenable, Stacy Marsella and Jonathan Gratch, computer scientists with the University of Southern California Institute for Creative Technologies and the USC Viterbi School of Engineering, are furthering the understanding of the psychology of emotion and how it can make for more believable computer-driven characters and environments.

Their pioneering work in emotion modeling and social simulation has been recognized by the Association for Computing Machinery’s Special Interest Group on Artificial Intelligence (ACM/SIGART) with the 2010 Autonomous Agents Research Award, an annual award for excellence for researchers influencing the field of autonomous agents.

Gratch is ICT’s associate director for virtual human research and Marsella is ICT’s associate director for social simulation.  The two are developing a computational model of appraisal theory – the idea that emotions come from evaluations of events and cause specific reactions in different people – and conversational virtual humans that can model the influences of emotion in the ways they think and behave.

“The fact that the selection committee chose to honor both Stacy and Jon this year is a testament to their individual contributions to the field of autonomous agents and multi-agent systems and to the collaborative nature of the research taking place at ICT,” said Randall W. Hill, Jr., executive director of ICT.

Gratch and Marsella are also both research associate professors in the Department of Computer Science at the USC Viterbi School. Together they teach a graduate level course on affective computing which covers techniques for recognizing and synthesizing emotional behavior, and illustrates how these can be applied to games, immersive environments and teaching tools.

“One of the main reasons that I decided to come to USC is that our computer science program has many top-tier research faculty who greatly enhance the environment of computing at USC,” said Shanghua Teng, chair of the computer science department at the USC Viterbi School.

Marsella and Gratch will receive their awards and present lectures at the International Conference on Autonomous Agents and Multiagent Systems (AAMAS) 2010 in Toronto, Canada in early May.

In announcing the award, the selection committee cited the pair’s significant and sustained contributions in the area of virtual agents. “Their work balances theoretical and engineering achievements, allowing the understanding of the factors and processes underlying how emotion affects behaviors,” they wrote. “They have also proposed a novel way to validate computational models of human emotions.”

Milind Tambe of the USC Viterbi School of Engineering’s Computer Science Department is a past recipient of this award. USC and Carnegie Mellon University are now the only institutions to have three of their researchers recognized with this honor.

Visit the conference site.

Mashable.com Features ICT’s 3D Teleconferencing System

A video demonstration and an explanation of the technology behind ICT’s 3D teleconferencing display at the TEDxUSC reception is up on mashable.com.

Read their full story.

Ning Wang Talks at the 28th ACM Conference on Human Factors in Computing Systems

Communication is more effective and persuasive when participants establish rapport. Tickle-Degnen and Rosenthal argue rapport arises when participants exhibit mutual attentiveness, positivity and coordination. In this paper, we investigate how these factors relate to perceptions of rapport when users interact via avatars in virtual worlds. In this study, participants told a story to what they believed was the avatar of another participant. In fact, the avatar was a computer program that systematically manipulated levels of attentiveness, positivity and coordination. In contrast to Tickle-Degnen and Rosenthal’s findings, high levels of mutual attentiveness alone can dramatically lower perceptions of rapport in avatar communication. Indeed, an agent that attempted to maximize mutual attention performed as poorly as an agent that was designed to convey boredom. Adding positivity and coordination to mutual attentiveness, on the other hand, greatly improved rapport. This work unveils the dependencies between components of rapport and informs the design of agents.

David Krum: “The Isolated Practitioner”

Over the past few decades, a community of researchers and professionals has been advancing the art and science of interaction design. Unfortunately, many practitioners are isolated from this community. We feel that the lack of a relationship between these isolated practitioners and the human-computer interaction community is one of the greater challenges in improving the overall quality of interaction design in the products and services used by our society. In this position paper, we describe how this isolation arises. We then propose ways to improve the connection between the HCI community and these isolated practitioners. These include early HCI instruction in the undergraduate curriculum, establishing HCI certificate programs, utilizing new media to summarize and disseminate important HCI results, highlighting accomplishments in interaction design, and performing other forms of outreach.

Sin-hwa Kang, Jonathan Gratch: “The Effect of Avatar Realism of Virtual Humans on Self-Disclosure in Anonymous Social Interactions”

In this paper, we illustrate progress in research designed to investigate interactants’ self-disclosure when they communicate with virtual humans or real humans in computer-mediated interactions. We explored the effect of the combination of avatar realism and interactants’ anticipated future interaction (AFI) on self-disclosure in emotionally engaged and synchronous interaction. We primarily aimed at exploring ways to promote interactants’ self-disclosure while securing their visual anonymity, even with minimal cues of virtual humans, when interactants anticipate future interaction. The research examined interactants’ self-disclosure through measuring their verbal behaviors. The preliminary findings indicated that interactants revealed greater intimate information about themselves in interactions with virtual humans than with real humans. However, interactants’ AFI did not affect their self-disclosure, which does not correspond to the results of previous studies using text based interfaces.

William Swartout Presents at Northwestern University

For the last ten years, we have been building virtual humans — computer-generated characters — at the USC Institute for Creative Technologies. Ultimately, our vision is to create virtual humans that look and behave just like real people. They will think on their own, model and exhibit emotions, and interact using natural language along with the full repertoire of verbal and non-verbal communication techniques that people use. Although the realization of that goal is still in the future, making steps toward it has required us to weave together different threads of AI research such as computer vision, natural language understanding and emotion modeling that are often treated as independent areas of investigation. Interestingly, this is not just an exercise in systems integration, but instead has revealed synergies across areas that have allowed us to address problems that are difficult to solve if addressed from one perspective alone. I will illustrate some of these synergies in this talk. I will also discuss the role of story in our work and show how embedding virtual humans in a compelling story or scenario can both make them more feasible to implement and suggest new areas of research. Finally, I will suggest future areas of research in virtual humans and suggest what might be possible in the not too distant future.

This talk is based on the Robert Engelmore Memorial Lecture given at IAAI-09.

Invited Talk: Computational Photography in Special Effects

This talk will cover Debevec’s work in developing computational photography systems for special effects. Debevec’s Ph.D. thesis (UC Berkeley, 1996) presented Façade, an image-based modeling and rendering system for creating photoreal architectural models from photographs. Using Façade he led the creation of virtual cinematography of the Berkeley campus for his 1997 film The Campanile Movie, whose techniques were used to create virtual backgrounds in The Matrix. Subsequently, Debevec pioneered high dynamic range image-based lighting techniques in his films Rendering with Natural Light (1998), Fiat Lux (1999), and The Parthenon (2004); he also led the design of HDR Shop, the first high dynamic range image editing program. At USC ICT, Debevec has led the development of a series of Light Stage devices for capturing and simulating how objects and people reflect light, used to create photoreal digital actors in films such as Spider-Man 2, Superman Returns, The Curious Case of Benjamin Button, and Avatar.

Jacki Morie, Belinda Lange: “Interactions in the Virtual World”

This presentation will overview the work being done in the Coming Home or TOPSS-VW project.

Paul Debevec Honored At Viterbi/ICT Event

Colleagues hailed Paul Debevec’s Academy Award Tuesday evening at a reception hosted by ICT and the Viterbi School.

Debevec, ICT’s associate director for graphics research and a research associate professor in the Computer Science Department of USC’s Viterbi School of Engineering, received a Scientific and Engineering Academy Award from the Academy of Motion Picture Arts and Sciences for the development of the Light Stage technologies used to create believable digital faces in major motion pictures, including the blockbuster Avatar.

The reception was held at the conclusion of the computer science department’s annual research review day. In a room filled with professors, Ph.D. candidates and undergraduates, Department of Computer Science Chair Shanghua Teng praised Debevec not only for his achievements but for providing inspiration to future inventors.

“I am so honored that we decided to collocate this celebration with our computer science event,” said Teng. “We tell our students to aim high and Paul’s achievements are a great example of what that means.”

ICT Executive Director Randall W. Hill, Jr. congratulated Debevec and noted how uncommon it is for a computer scientist to win an Academy Award. He highlighted the unique environment at ICT, Viterbi and USC that enabled Debevec to conduct his groundbreaking research.

“What made it possible for Paul to pursue his wonderful work was having the support of our sponsors, the encouragement of colleagues at ICT and in the department of computer science and an overall culture of creativity at USC and in Los Angeles that drives this kind of innovation,” he said. “I don’t believe this could have happened anywhere else.”

Viterbi School Senior Associate Dean John O’Brien remarked that art and science are not as far apart as it would seem, particularly considering advances in technologies for creating cinema, television and electronic music.

“The last century and a half have seen the emergence of totally new artistic forms, growing directly out of the work of scientists and engineers like Paul,” he said. “Interactive games are now emerging and even more are in the pipeline, thanks to departments like the one gathered here today and remarkable minds like Paul’s.”

Debevec brought his plaque to the stage and encouraged the assembled students to follow their passions.

“Find something fun and try to do something new,” he told them. “Grab on and keep pushing and enjoy all the possibilities before you.”

Canadian Newspapers Feature Andrew Gordon’s Blog Mining Research

A story published in several Canadian daily newspapers explains the blog mining research being done by ICT research associate professor Andrew Gordon. The story reports that Gordon aims to archive every English-language personal story blog entry posted online in hopes of using this vast database to teach artificial-intelligence computers about real life.  According to the report, Gordon believes this “narration of the mundane” is the best way to teach computers the common sense knowledge about everyday life that comes naturally to humans but is extremely difficult for artificial intelligence to grasp. “The opportunity there is one that we’ve never had before as a research community,” he says. “I think it will radically change a lot of our theories, a lot of our understanding of social life and daily life.”

Read the full story.

Skip Rizzo, Thomas Parsons, Patrick Kenny, John Galen Buckwalter: “A new generation of intelligent virtual patients for clinical training”

Over the last 15 years, a virtual revolution has taken place in the use of Virtual Reality simulation technology for clinical purposes. Recent shifts in the social and scientific landscape have now set the stage for the next major movement in Clinical Virtual Reality with the “birth” of intelligent virtual humans. Seminal research and development has appeared in the creation of highly interactive, artificially intelligent and natural language capable virtual human agents that can engage real human users in a credible fashion. No longer at the level of a prop to add context or minimal faux interaction in a virtual world, virtual human representations can be designed to perceive and act in a 3D virtual world, engage in face-to-face spoken dialogues with real users (and other virtual humans) and in some cases, they are capable of exhibiting human-like emotional reactions. This paper will present a brief rationale and overview of their use in clinical training and then detail our work developing and evaluating artificially intelligent virtual humans for use as virtual standardized patients in clinical training with novice clinicians. We also discuss a new project that uses a virtual human as an online guide for promoting access to psychological healthcare information and for assisting military personnel and family members in breaking down barriers to initiating care. While we believe that the use of virtual humans to serve the role of virtual therapists is still fraught with both technical and ethical concerns, we have had success in the initial creation of virtual humans that can credibly mimic the content and interaction of a patient with a clinical disorder for training purposes. As technical advances continue, this capability is expected to have a significant impact on how clinical training is conducted in psychology and medicine.

Ramy Sadek, David Krum, Mark Bolas: “Simulating Hearing Loss in Virtual Training”

Audio systems for virtual reality and augmented reality training environments commonly focus on high-quality audio reproduction. Yet many trainees may face real-world situations where hearing is compromised. In these cases, the hindrance caused by impaired or lost hearing is a significant stressor that may affect performance. Because this phenomenon is hard to simulate without actually causing hearing damage, trainees are largely unpracticed at operating with diminished hearing. To improve the match between training scenarios and the real-world situation, this effort aims to add simulated hearing loss or impairment as a training variable. Stated briefly, the goal is to affect everything users hear – including non-simulated sounds such as their own and each other’s voices – without damaging their hearing, being overtly noticeable, or requiring the donning of headphones.

ICT Researchers Nominated for Best Paper Awards

Papers by Jina Lee and Lixing Huang and ICT colleagues have been nominated for best virtual agent paper at the Autonomous Agents and Multiagent Systems conference (AAMAS), the premier scientific conference for research on autonomous agents and multiagent systems. A total of three papers were nominated by the award committee. The nominations reflect AAMAS’s continued support for virtual agents research at the conference, which takes place in May in Toronto, Canada.

The nominations were:

Lixing Huang, Louis-Philippe Morency & Jonathan Gratch, Parasocial Consensus Sampling: Combining Multiple Perspectives to Learn Virtual Human Behavior

Jina Lee, Zhiyang Wang & Stacy Marsella, Evaluating Models of Speaker Head Nods for Virtual Agents

Visit the conference site.

The Eyes Have It! Wired Interviews Paul Debevec About Realistic Virtual Actors

Wired’s Hugh Hart interviewed Paul Debevec about the future of virtual actors and the technology developments that are allowing them to be believable. “Benjamin Button and Avatar created digital characters that achieved the one thing that hadn’t happened before,” said ICT’s Debevec, in a phone interview with Wired.com. “When you look in their faces, look into their eyes, the performance that drove that character comes through in a believable way,” he said. “You can get a sense of what the character is thinking.”

Not long after winning his Scientific and Technical Academy Award for the development of the Light Stage technologies, which were used in both Avatar and Benjamin Button, Debevec took part in the South by Southwest panel titled “The Birth of Eye-Def Acting.” Debevec was joined by Avatar animator A.J. Briones, Ubisoft’s Mathieu Ferland, Wired magazine contributing editor Clive Thompson and Advanced Micro Devices exec Neal Robison.

Read the interview here.

Paul Debevec: “SXSW Panel Discussion: The Birth of Eye-Def Acting”

Hollywood is breaking boundaries when it comes to creating practically indistinguishable, lifelike human characters in films such as Avatar and games such as Assassin’s Creed II. We’ll discuss the future of film and video games and how recent innovations in technology are allowing movie makers and game developers to create a more realistic, interactive and engaging viewing experience for audiences.

Chi-Chun Lee, Shrikanth Narayanan: “Predicting Interruptions in Dyadic Spoken Interactions”

Interruptions occur frequently in spontaneous conversations, and they are often associated with changes in the flow of conversation. Predicting interruptions is essential in the design of natural human-machine spoken dialog interfaces, and the modeling can bring insights into the dynamics of human-human conversation. This work utilizes a Hidden Conditional Random Field (HCRF) to predict occurrences of interruption in dyadic spoken interactions by modeling both speakers’ behaviors before a turn change takes place. Our prediction model, using both the foreground speaker’s acoustic cues and the listener’s gestural cues, achieves an F-measure of 0.54, accuracy of 70.68%, and unweighted accuracy of 66.05% on a multimodal database of dyadic interactions. The experimental results also show that the listener’s behaviors provide an indication of his/her intention to interrupt.
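
Since off-the-shelf HCRF implementations are uncommon, the sketch below substitutes a plain logistic regression over synthetic speaker and listener features to convey the prediction setup only; it is not the paper's model, and the features and data are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Simplified stand-in for the paper's HCRF: predict whether an interruption
# occurs at the next turn boundary from speaker acoustic cues and listener
# gestural cues. All features and labels here are synthetic.

rng = np.random.default_rng(1)
n = 400
pitch_slope = rng.normal(size=n)          # speaker prosody (assumed feature)
energy = rng.normal(size=n)               # speaker loudness (assumed feature)
listener_motion = rng.normal(size=n)      # listener gesture activity (assumed)

# Synthetic ground truth: interruptions follow rising energy + listener motion.
logits = 0.8 * energy + 1.2 * listener_motion - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X = np.column_stack([pitch_slope, energy, listener_motion])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("F1:", round(f1_score(y_te, clf.predict(X_te)), 2))
```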

Angeliki Metallinou, Sungbok Lee, Shrikanth Narayanan: “Decision Level Combination of Multiple Modalities for Recognition and Analysis of Emotional Expression”

Emotion is expressed and perceived through multiple modalities. In this work, we model face, voice and head movement cues for emotion recognition and we fuse classifiers using a Bayesian framework. The facial classifier is the best performing followed by the voice and head classifiers and the multiple modalities seem to carry complementary information, especially for happiness. Decision fusion significantly increases the average total unweighted accuracy, from 55% to about 62%. Overall, we achieve average accuracy on the order of 65-75% for emotional states and 30-40% for neutral state using a large multi-speaker, multimodal database. Performance analysis for the case of anger and neutrality suggests a positive correlation between the number of classifiers that performed well and the perceptual salience of the expressed emotion. Index Terms: Multimodal Emotion Recognition, Hidden Markov Model, Bayesian Information Fusion, Perceptual Salience
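
One common form of decision-level Bayesian fusion, sketched below under a conditional-independence assumption and a uniform class prior (the paper's framework may weight modalities differently), simply multiplies per-modality posteriors and renormalizes:

```python
import numpy as np

# Decision-level fusion sketch: combine per-modality class posteriors under
# a conditional-independence assumption. With a uniform class prior this
# reduces to multiplying posteriors and renormalizing. The numbers below are
# invented for illustration.

classes = ["angry", "happy", "neutral", "sad"]
posteriors = {
    "face":  np.array([0.10, 0.60, 0.20, 0.10]),
    "voice": np.array([0.15, 0.45, 0.25, 0.15]),
    "head":  np.array([0.25, 0.35, 0.25, 0.15]),
}

fused = np.ones(len(classes))
for p in posteriors.values():
    fused *= p
fused /= fused.sum()

print(dict(zip(classes, np.round(fused, 3))))
print("fused decision:", classes[int(np.argmax(fused))])
```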

Jangwon Kim, Sungbok Lee, Shrikanth Narayanan: “An Exploratory Study of Manifolds of Emotional Speech”

This study explores manifold representations of emotionally modulated speech. The manifolds are derived in the articulatory space and two acoustic spaces (MFB and MFCC) using isometric feature mapping (Isomap) with data from an emotional speech corpus. Their effectiveness in representing emotional speech is tested based on the emotion classification accuracy. Results show that the effective manifold dimensions of the articulatory and MFB spaces are both about 5 while being greater in MFCC space. Also, the accuracies in the articulatory and MFB manifolds are close to those in the original spaces, but this is not the case for the MFCC. It is speculated that the manifold in the MFCC space is less structured, or more distorted, than others. Index Terms: Emotional speech, Isomap, manifold, acoustic feature, articulatory feature
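
The evaluation recipe, roughly, is to embed the feature vectors with Isomap and test how well emotion classes separate in the low-dimensional space. The sketch below uses synthetic swiss-roll data as a stand-in for the articulatory or acoustic features; the two-dimensional embedding suits the toy data, whereas the abstract reports an effective dimension of about 5 for the real spaces.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Embed high-dimensional features with Isomap, then measure classification
# accuracy in the manifold space. Swiss-roll data and binary labels stand in
# for the emotional speech features and emotion classes.

X, t = make_swiss_roll(n_samples=600, random_state=0)
y = (t > t.mean()).astype(int)            # fake two-class "emotion" labels

X_low = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X_low, y, cv=5).mean()
print(f"classification accuracy in the Isomap space: {acc:.2f}")
```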

Angeliki Metallinou, Carlos Busso, Sungbok Lee, Shrikanth Narayanan: “Visual Emotion Recognition Using Compact Facial Representations and Viseme Information”

Emotion expression is an essential part of human interaction. Rich emotional information is conveyed through the human face. In this study, we analyze detailed motion-captured facial information of ten speakers of both genders during emotional speech. We derive compact facial representations using methods motivated by Principal Component Analysis and speaker face normalization. Moreover, we model emotional facial movements by conditioning on knowledge of speech-related movements (articulation). We achieve average classification accuracies on the order of 75% for happiness, 50-60% for anger and sadness and 35% for neutrality in speaker independent experiments. We also found that dynamic modeling and the use of viseme information improves recognition accuracy for anger, happiness and sadness, as well as for the overall unweighted performance. Index Terms: Emotion recognition, Principal Component Analysis, Principal Feature Analysis, Fisher Criterion, visemes
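
A minimal sketch of the representation step, with synthetic data standing in for the motion-capture frames (the paper additionally applies speaker face normalization and viseme conditioning, which are omitted here):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Project high-dimensional facial motion-capture frames onto a few principal
# components, then classify emotion in the compact space. Marker counts,
# data, and labels are synthetic stand-ins.

rng = np.random.default_rng(2)
n, n_dims = 300, 90               # e.g. 30 facial markers x 3 coords, assumed
latent = rng.normal(size=(n, 4))  # a few underlying expression factors
X = latent @ rng.normal(size=(4, n_dims)) + 0.1 * rng.normal(size=(n, n_dims))
y = (latent[:, 0] > 0).astype(int)   # fake binary emotion label

X_pca = PCA(n_components=4).fit_transform(X)
acc = cross_val_score(LinearDiscriminantAnalysis(), X_pca, y, cv=5).mean()
print(f"accuracy in the compact PCA space: {acc:.2f}")
```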

Andreas Tsiartas, Panayiotis Georgiou, Shrikanth Narayanan: “Language model adaptation using WWW documents obtained by utterance-based queries”

In this paper, we consider the estimation of topic-specific Language Models (LMs) by exploiting documents from the World Wide Web (WWW). We focus on the quality of the generated queries and propose a novel query generation method. In contrast to the n-gram based queries used in past works, our approach relies on utterances as query candidates. The proposed approach does not rely on any language-specific information other than the initial in-domain training text. We have conducted experiments with Web texts of size 0-150 million words, and we have shown that despite not using any language-specific information, the proposed approach results in up to 1.1% absolute Word Error Rate (WER) improvement as compared to keyword-based approaches. The proposed approach reduces the WER by 6.3% absolute in our experiments, compared to an in-domain LM without considering any Web data. Index Terms: Adapt language models, utterance queries, WWW corpora, in-domain documents
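
The adaptation recipe, reduced to a sketch (retrieval is stubbed out, and the corpora and interpolation weight are illustrative assumptions): use whole in-domain utterances as web queries, then interpolate an LM estimated from the retrieved text with the in-domain LM.

```python
from collections import Counter
import math

# Sketch of web-based LM adaptation: interpolate a smoothed unigram LM built
# from (stubbed-out) retrieved web text with the in-domain LM. Real systems
# would issue the in-domain utterances as search queries and use n-gram LMs.

in_domain = "book a flight to boston tomorrow morning".split()
web_text = ("cheap flight deals book your flight today "
            "flight to boston departs in the morning").split()

def unigram_lm(tokens, vocab, alpha=0.1):
    """Add-alpha smoothed unigram probabilities over a shared vocabulary."""
    counts = Counter(tokens)
    total = len(tokens)
    return {w: (counts[w] + alpha) / (total + alpha * len(vocab)) for w in vocab}

vocab = set(in_domain) | set(web_text) | {"fare"}
p_in = unigram_lm(in_domain, vocab)
p_web = unigram_lm(web_text, vocab)

lam = 0.7   # interpolation weight; tuned on held-out data in practice
p_adapted = {w: lam * p_in[w] + (1 - lam) * p_web[w] for w in vocab}

test = "book a morning flight".split()
ppl = math.exp(-sum(math.log(p_adapted[w]) for w in test) / len(test))
print(f"adapted-LM perplexity on test utterance: {ppl:.1f}")
```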

The Economist Features Work of ICT’s Andrew Gordon

In a story appearing in print and online, the Economist magazine describes Andrew Gordon’s research identifying and analyzing personal stories told in blog entries. The idea, the article states, is that this will eventually lead to a system that can gather aggregated statistics on a day-by-day basis about the personal lives of large populations—information that would be impossible to garner from any other source.

Read the full story.

For more information on this work.

Viterbi’s Shri Narayanan Wins Best Paper Award for ICT Supported Emotion Work

A 2005 paper co-authored by Shri Narayanan and his former student Chul Min Lee has won a 2009 Best Paper prize from the IEEE Signal Processing Society (SPS). The prize honors Lee (PhD EE, 2004) and Narayanan, Andrew Viterbi Professor of Engineering in the Ming Hsieh Department of Electrical Engineering, for their publication “Toward Detecting Emotions in Spoken Dialogs,” published in the IEEE Transactions on Speech and Audio Processing, vol. 13, March 2005.

Read more on the Viterbi website.

Science Day at ICT

A group of Los Angeles area youth visited ICT on Saturday to see first hand what technologies and career paths can come from a solid understanding of math, science and engineering concepts. ICT researchers demonstrated applications including virtual reality therapy and the Light Stage systems. The Pacoima-based non-profit MEND, an organization devoted to breaking the bonds of poverty by providing basic human needs and a pathway to self-reliance, coordinated the outreach visit. “The kids asked great questions,” said ICT executive director Randall W. Hill, Jr.

UrbanSim featured on DoD’s Armed with Science

ICT’s Andrew Gordon was interviewed on the DoD podcast Armed with Science about UrbanSim, ICT’s computer-based game to support the training of military commanders and their staffs in complex counter-insurgency and stability operations. “One of the innovative things about UrbanSim is it also has this story-driven component where we’re taking the real-world experiences of commanders from places like Iraq and Afghanistan and trying to find innovative ways of moving those real life experiences directly into the simulation environment,” Gordon said. “So that the real-world experiences of soldiers are the things that are driving the underlying simulation in the UrbanSim environment.”

Listen to the podcast.

Read the Defense News article about UrbanSim.

ICT Virtual Humans Ada and Grace Talk Science at AAAS in San Diego

NSF showcases science education collaboration with Museum of Science, Boston

Ada and Grace, two bright and bubbly young women, literally stopped visitors in their tracks in the National Science Foundation (NSF) expo booth at the recent American Association for the Advancement of Science (AAAS) conference in San Diego.

“I heard them talking and really liked their personalities,” said 16-year-old Joshua Glover, a student from the United Kingdom attending the meeting as part of a school trip. “I was impressed that they could understand what people were asking them.”

What impressed Glover, and other attendees, is that the women in question – and answering questions – were not real people but life-like computer-generated virtual human characters created as part of an NSF-funded collaboration between the University of Southern California Institute for Creative Technologies and the Museum of Science, Boston.

“Ada and Grace represent an exciting and potentially transformative medium for engaging the public in science,” said Jeff Nesbit, director of NSF’s Office of Legislative and Public Affairs.

They were selected to be featured at the NSF booth at the AAAS conference in large part because of their ability to highlight the educational and research potential of virtual characters by getting them out of the lab and interacting with people in meaningful and memorable ways.

“It is a tremendous honor to have our technology showcased by the National Science Foundation,” said Bill Swartout, ICT’s director of technology. “ICT virtual characters are currently being developed for a wide variety of roles, from helping train clinicians in diagnosing mental health conditions to helping military officers practice interpersonal leadership skills.  I hope Ada and Grace are just the beginning in a line of virtual characters used to expose the public to science.”

Currently in service as guides at the Boston museum’s Cahners ComputerPlace, Ada and Grace help staff and volunteers make visits there richer by answering visitor questions, suggesting exhibits and explaining the technology that makes them work.

Named for Ada Lovelace and Grace Hopper, two inspirational female computer science pioneers, these digital docents are trailblazers in their own right. They are among the first and most advanced virtual humans ever created to speak face-to-face with museum visitors.

According to Dan Noren, the program manager at the museum who has been working with the duo since they arrived last December, having virtual museum guides on hand enhances their team.

“These virtual humans are getting people’s attention in a way that a real person can’t,” he said. “They are able to capture the imagination of everyone from esteemed and established scientists at this conference to school groups with little science exposure who come through our museum.”

The hope, all the collaborators on this project agree, is that one of those young people who meet Ada and Grace might be inspired to pursue a career in science and one day present their research at a conference like AAAS.  And judging by the enthusiastic reaction of students like Joshua Glover, that day might not be far off.

“I definitely wanted to learn more,” he said.

Learn more about Ada and Grace.

Forbes Asks if CG Can Bring Actors Back From the Dead – Features ICT’s Paul Debevec and Light Stage

The March 15 issue of Forbes Magazine looks at the impact of digital filmmaking on acting, asking whether technology will be able to bring actors back from the dead. Reporter Dorothy Pomerantz interviewed ICT’s Paul Debevec and the website features both video and a slide show of his work on the Digital Emily collaboration with Image Metrics.

Read the story online.

Academy Award® Shines Light on Innovations of ICT Researcher

Paul Debevec and colleagues recognized with Academy Award® for development of the Light Stage technologies. System has been used in films including Avatar, Benjamin Button and Spider-Man™ 2

A researcher whose work involves precisely lighting actors had the spotlight shone on him at the Academy of Motion Picture Arts and Sciences’ Science and Engineering Awards on Saturday.

Paul Debevec, associate director for graphics research at the USC Institute for Creative Technologies and a research associate professor in the computer science department of the USC Viterbi School of Engineering, received the Academy’s Scientific and Engineering Award for the design and engineering of his Light Stage technologies, which have been used to create believable digital faces for major films including Avatar, The Curious Case of Benjamin Button, and Spider-Man™ 2.

The award recognizes more than ten years of research, development and application of technologies designed to help achieve the goal of photoreal digital actors who can appear in any lighting condition.

This video shows amazing recent Light Stage work.

Debevec accepted the award along with colleagues Tim Hawkins of LightStage LLC, John Monos of Sony Pictures Imageworks, and Mark Sagar of WETA Digital, who co-developed the system with him.

“It’s an incredible thrill to have the results of our work recognized in this way,” said Debevec. “There just aren’t that many computer scientists who get to bring an Academy Award back to the office.”

Many USC stars were in attendance for the awards ceremony, highlighting the interdisciplinary reach of Debevec’s graphics work, which enhances storytelling through credible visual effects. Along with Debevec and his mother were Randall W. Hill, Jr., executive director of ICT; Bill Swartout, ICT’s director of technology; Elizabeth Daley, dean of the USC School of Cinematic Arts; Yannis C. Yortsos, dean of the USC Viterbi School of Engineering; and Shanghua Teng, chair of the Viterbi School Department of Computer Science.

“This award really validates the quality of research that Paul and the ICT Graphics Lab have been performing over the last ten years,” said Hill. “Paul has truly set the bar with a unique and pioneering technology that has become invaluable to filmmaking. We look forward to future developments of this technology which will allow for the creation of photoreal, interactive virtual characters who can be used in all sorts of applications.”

The Academy’s Scientific and Technical Awards honor the men, women and companies whose discoveries and innovations have contributed in significant, outstanding and lasting ways to motion pictures, according to the Academy. This year it presented 15 awards to 45 individuals for such behind-the-scenes technical work in a black-tie ceremony held at the Beverly Wilshire hotel.

“The audience that saw Paul Debevec receive his award was remarkable: an amazing gathering of technical and artistic creativity, paying tribute to the best of their own,” said Yortsos. “I felt privileged to be there and enormously proud that one of the honorees is a professor in the Viterbi School of Engineering.”

With the success of this year’s blockbuster Avatar, it is hard to ignore the leading role that digital characters are taking in films.

“I was delighted to see Paul Debevec and his associates honored Saturday night with an Academy Award,” said Daley. “I continually marvel at the pace of technological growth in the industry, which is opening new avenues for storytellers and artists to realize ever-greater visions. Paul’s years of hard work have yielded groundbreaking results, and I’m looking forward to the future innovations that are sure to come from the tools he has created.”

Actress Elizabeth Banks, the host of the Scientific and Technical Awards ceremony, will present a short review of the ceremony during the Oscar telecast on Sunday, March 7 at the Kodak Theater.

Learn more about ICT’s Light Stage technology.

Visit ICT’s Facebook fan page to see photos.

PBS Frontline’s Digital Nation Features Interviews and Video from ICT

Digital Nation, a new internet and documentary project from PBS FRONTLINE, is featuring many ICT researchers and technologies on its website. The project aims to capture life on the digital frontier and explore how the web and digital media are changing the way we think, work, learn and interact. The FRONTLINE crew spent a day at ICT to learn more about our work in virtual humans, virtual reality therapy, mixed reality and graphics. Now footage and interviews from that visit can be seen on their website. The one-and-a-half-hour television documentary, which featured a patient and clinician using ICT’s Virtual Iraq therapy, can also be viewed there.

Watch the segment.

Visit Frontline’s section on Virtual Iraq.

Go to the Digital Nation homepage.

The Economist Features Work of ICT’s Louis-Philippe Morency

The Economist featured research by ICT research assistant professor Louis-Philippe Morency that found a way for a computer to understand a human gesture: the nod. Morency and colleagues developed a computer system that analyzes video and audio recordings to recognize cues in both posture and voice. The system logs the sequence of these cues, then compares sequences from different speakers to see which combinations routinely lead to a listener nodding and which don’t. Morency found more cues than were previously known for nodding, including a gaze shift toward the listener and the use of the word “and.” The U.S. military is already using the technology to analyze interactions between people in other countries, with the goal of including this information in programs designed to teach cultural differences to soldiers stationed abroad, the article stated.
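
The article describes the system only at a high level, but its core idea (log the cues that precede each listener response, then see which cue combinations are reliably followed by a nod) can be sketched in a few lines. The cue names and data below are illustrative stand-ins, not Morency’s actual feature set or code.

```python
from collections import Counter

# Hypothetical cue logs: each entry is the cue sequence observed in a
# speaker's turn, plus whether the listener nodded afterward.
sessions = [
    (["pause", "gaze_at_listener", "word_and"], True),
    (["pitch_drop", "gaze_away"], False),
    (["gaze_at_listener", "word_and"], True),
    (["pause"], False),
]

nod_counts, total_counts = Counter(), Counter()
for cues, nodded in sessions:
    pattern = tuple(cues[-2:])  # the last two cues before the response
    total_counts[pattern] += 1
    if nodded:
        nod_counts[pattern] += 1

# Rank cue combinations by how often they are followed by a nod.
for pattern, total in total_counts.items():
    print(f"{pattern}: nod rate {nod_counts[pattern] / total:.0%} over {total} turns")
```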

Read the full article online.

From Academe to the Academy Awards – Read All About it in the Chronicle of Higher Education

Reporter Marc Parry interviews Paul Debevec about his Academy Award-winning Light Stage technology.

Read the interview here.

Paul Debevec Delivers Keynote at SPAR 2010

Paul Debevec delivered a keynote address presenting photorealistic 3D renderings from the ICT Graphics Lab that capture both the shape and reflectance of subjects ranging from the Parthenon in Athens to illuminated manuscripts and the faces of Hollywood actors, in applications spanning cultural heritage and movie visual effects.

ICT’s Light Stage and the Making of Avatar on CNN

CNN’s Larry King Live devoted a show to the film Avatar. The program featured footage of some of the film’s actors being scanned in the Light Stage device. ICT’s Paul Debevec developed the Light Stage and received a credit in the film along with other members of his team at the ICT Graphics Lab.

Tommy Parsons: “Military Relevant Virtual Environments for Neurops”

The traditional approach to assessing neurocognitive performance makes use of paper-and-pencil neuropsychological assessments. This approach has been criticized as limited in ecological validity. While virtual reality environments can provide increased ecological validity, they are often built without rigorous research design and control for potentially confounding variables. The newly developed Virtual Reality Cognitive Performance Assessment Test (VRCPAT) focuses on enhanced ecological validity, using virtual environment scenarios to assess neurocognitive processing.

ICT’s Kim LeMasters Explains UrbanSim on NPR

In a story called “Sim City Baghdad,” National Public Radio’s program On the Media spoke to Creative Director Kim LeMasters about UrbanSim, a computer game developed at ICT for practicing counterinsurgency operations. LeMasters gave host Bob Garfield a comprehensive description of the game and explained how and why it is being used by the U.S. military.

Read the interview.

The CBS Evening News Features ICT’s Light Stage Technology Used in Avatar

CBS News reporter Ben Tracy visited the ICT Graphics Lab and was scanned in Light Stage 5 as part of a story about the technological innovations in the blockbuster film Avatar. Many of the film’s principal actors were scanned in the Light Stage and that detailed data was used to help create the digital faces used in the film.

Learn about the Light Stage technologies.

New York Times Features MCIT and Virtual Iraq

An article about the Pentagon’s high-tech efforts to counter the threat of low-tech IEDs highlighted the Mobile Counter IED Trainer (MCIT), which uses well-crafted stories to impart a framework for understanding counter-IED lessons, along with a role-playing video game that engages young trainees and reinforces those lessons by asking them to think like insurgents. The article also mentioned the widespread use of ICT’s Virtual Iraq system for treating PTSD as part of a larger trend by the military to use virtual reality, 3-D technology and computer game software to train deploying troops and treat combat-scarred veterans.

Read the full article.

Celso de Melo, Jonathan Gratch: “Evolving Expression of Emotions through Color in Virtual Humans using Genetic Algorithms”

For centuries artists have explored the formal elements of art (lines, space, mass, light, color, sound, etc.) to express emotions. This paper builds on that insight to explore new forms of expression for virtual humans that go beyond the usual bodily, facial and vocal expression channels. In particular, the paper focuses on how to use color to influence the perception of emotions in virtual humans. First, a lighting model and filters are used to manipulate color. Next, an evolutionary model, based on genetic algorithms, is developed to learn novel associations between emotions and color. An experiment is then conducted in which non-experts evolve mappings for joy and sadness, without being aware that genetic algorithms are used. In a second experiment, the mappings are analyzed with respect to their features and how general they are. Results indicate that the average fitness increases with each new generation, suggesting that people are succeeding in creating novel and useful mappings for the emotions. Moreover, the results show consistent differences between the evolved images of joy and the evolved images of sadness.
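
As a rough illustration of the evolutionary loop the abstract describes, here is a minimal genetic-algorithm sketch: a population of candidate mappings, a fitness score, and selection, crossover and mutation producing each generation. The genome is simplified to a single RGB tint, and a fixed target color stands in for the non-expert ratings so the example runs on its own; neither simplification is part of the paper’s model.

```python
import random

TARGET = (1.0, 0.9, 0.2)  # hypothetical tint the simulated raters prefer for "joy"

def fitness(genome):
    # Higher is better: negative squared distance to the preferred tint.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return tuple(min(1.0, max(0.0, g + random.uniform(-rate, rate))) for g in genome)

def crossover(a, b):
    return tuple(random.choice(pair) for pair in zip(a, b))

population = [tuple(random.random() for _ in range(3)) for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children
    if generation % 10 == 0:
        avg = sum(fitness(g) for g in population) / len(population)
        print(f"generation {generation}: average fitness {avg:.3f}")

print("best evolved tint:", max(population, key=fitness))
```

As in the paper’s experiments, the average fitness rises across generations as better candidates are selected and recombined.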

Academy Award® Honors Developers of USC ICT’s Light Stage Technologies

The award recognizes the Institute for Creative Technologies’ visual effects systems, used in films including Avatar, The Curious Case of Benjamin Button and Spider-Man™ 2, which create believable digital actors and have ever-increasing entertainment applications.

The Academy of Motion Picture Arts and Sciences announced that Paul Debevec, associate director, graphics research at the USC Institute for Creative Technologies, Tim Hawkins of LightStage LLC, John Monos of Sony Pictures Imageworks, and Mark Sagar of WETA Digital, will be honored with a Scientific and Engineering Academy Award® for “the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures.”

According to the Academy’s citation, “the combination of these systems, with their ability to capture high fidelity reflectance data of human subjects, allows for the creation of photorealistic digital faces as they would appear in any lighting condition.” The award recognizes over ten years of research, development and application of technologies designed to help achieve the goal of photoreal digital actors.

Debevec, who is also a research associate professor in the computer science department of USC’s Viterbi School of Engineering, continues to lead ICT’s graphics research program, which has produced more than 20 peer-reviewed publications involving the Light Stage systems to date.

Based on original research led by Debevec at the University of California at Berkeley and published at the 2000 SIGGRAPH conference, the Light Stage systems efficiently capture how an actor’s face appears when lit from every possible lighting direction. From this captured imagery, specialized algorithms create realistic virtual renditions of the actor in the illumination of any location or set, faithfully reproducing the color, texture, shine, shading, and translucency of the actor’s skin.
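
Because light is additive, that relighting step can be pictured as a weighted sum: each one-light-at-a-time photograph is scaled by the target environment’s color from that light’s direction, and the scaled images are summed. The sketch below uses hypothetical array shapes and random data to show the principle; it is not the production pipeline, which models facial reflectance in far more detail.

```python
import numpy as np

num_lights, height, width = 156, 480, 640  # illustrative stage and camera sizes
basis = np.random.rand(num_lights, height, width, 3)  # one photo per light direction
env = np.random.rand(num_lights, 3)  # RGB of the target environment per direction

# Weighted sum over lighting directions: the face relit by the new environment.
relit = np.einsum('nhwc,nc->hwc', basis, env)
print(relit.shape)  # (480, 640, 3)
```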

While the first Light Stage had just one spotlight, which spiraled around on a wooden gantry, Light Stage 2, built at USC’s Institute for Creative Technologies, featured thirty bright strobe lights on a ten-foot semicircular arm that rotated to capture detailed facial reflectance in just eight seconds. In 2002, this process attracted the attention of visual effects supervisor Scott Stokdyk of Sony Pictures Imageworks, who chose it for creating photoreal computer-generated stunt doubles of actors Alfred Molina (“Doc Ock”) and Tobey Maguire (“Spider-Man™”) for the movie Spider-Man™ 2. Mark Sagar, a collaborator on the original research, led the effort to adapt the process for film production. He was soon joined by computer graphics supervisor John Monos on Imageworks’ look development team. The technology was used in nearly 40 shots for the 2004 film, which earned an Academy Award® for Best Achievement in Visual Effects.

After Spider-Man™ 2, Mark Sagar transitioned to Peter Jackson’s visual effects company WETA Digital in New Zealand, where he oversaw the use of USC’s Light Stage 2 system to record the facial reflectance of actress Naomi Watts for her digital stunt double in Peter Jackson’s King Kong in 2005. Continuing at Sony Imageworks, John Monos led an effort that used Light Stage 2 scans of actor Brandon Routh to create a digital Superman character for the 2006 movie Superman Returns. The film achieved a new high-water mark in the realism of virtual actors, with the digital Superman being successfully employed in both action sequences and extended close-up shots. The seamless digital character work helped earn Superman Returns an Academy Award® nomination for Best Visual Effects.

Sony Imageworks subsequently used Light Stage 2, as well as its full-sphere LED-based successors Light Stage 3 and Light Stage 5, to create digital versions of principal actors for Spider-Man™ 3 in 2007 and Hancock in 2008.

In 2008, visual effects company Digital Domain used detailed reflectance information captured with ICT’s Light Stage 5 system to help create a computer-generated version of Brad Pitt as an old man for David Fincher’s The Curious Case of Benjamin Button. The film, which featured the first extended performance of a digitally rendered actor in a feature film, won last year’s Academy Award for Best Visual Effects.

USC ICT’s Light Stage 5 system was most recently employed in the extensive visual effects in James Cameron’s worldwide hit Avatar. Working closely with the visual effects team at WETA Digital, ICT’s Graphics Laboratory digitized the faces of most of the film’s principal cast using a new high-resolution version of their geometry and appearance capture techniques. This innovative technology, housed at ICT’s Marina del Rey campus, precisely captures the shape, shine, color and texture of an actor’s face down to the level of each skin pore, crease, and wrinkle. These detailed scans were used by WETA Digital in their process of creating the film’s photorealistic digital humans and humanoid aliens, which have been lauded as a groundbreaking achievement in the evolution of digital filmmaking.

Through USC’s Stevens Institute for Innovation, the Light Stage technologies have been licensed to LightStage LLC, a Burbank-based company which offers commercial scanning services to the motion picture and interactive entertainment industries. LightStage LLC’s Chief Technology Officer Tim Hawkins was involved in the development of the Light Stage technology beginning with the original research at UC Berkeley and throughout its application in motion pictures as a researcher in the Graphics Laboratory at USC ICT.

The Academy’s Scientific and Technical Awards honor the men, women and companies whose discoveries and innovations have contributed in significant, outstanding and lasting ways to motion pictures. The Scientific and Engineering Award will be presented to Debevec, Hawkins, Monos and Sagar at the Academy’s Scientific and Technical Awards Ceremony in Beverly Hills on February 20th, 2010.

Academy Scientific and Technical Awards Press Release 
The Academy Scientific and Technical Awards 
The Scientific and Engineering Award 
The Light Stages at USC ICT 
ICT Graphics Laboratory 
LightStage LLC 
Sony Pictures Imageworks 
WETA Digital

The Atlantic Magazine Features UrbanSim, ICT’s Counterinsurgency Computer Training Game

UrbanSim, ICT’s turn-based strategy game created in collaboration with the U.S. Army’s Simulation and Training Technology Center (STTC), is the subject of a feature story in the Jan/Feb 2010 issue of The Atlantic. The article covers how the game was modeled on the real-world experiences of officers in Iraq and Afghanistan and how it is being used to help Army commanders practice counterinsurgency operations before entering the battlefield.

“Members of a tribe, for instance, want jobs, but they won’t work if they don’t feel safe. Instead, they might join the insurgents,” writes reporter Brian Mockenhaupt. “Patrolling neighborhoods, meeting with tribal elders, and creating more economic opportunities—tactics straight from counterinsurgency manuals—can reduce the likelihood of that outcome in the game.”

The article also states that the game is intended to teach commanders new ways of thinking about multiple problems in a fast-changing environment and notes that computer games provide an inexpensive and portable training option that allows for the comparison of different approaches by having multiple students run the same scenarios.

Read the full article.

Download the article.

ICT Virtual Human Toolkit Provides a Free and Common Platform to Create Characters

The University of Southern California Institute for Creative Technologies has created the ICT Virtual Human Toolkit with the goal of reducing some of the complexity inherent in creating virtual humans. Our toolkit is an ever-growing collection of innovative technologies, fueled by basic research performed at ICT and its partners. Through this toolkit, ICT hopes to provide the virtual humans research community with a widely accepted platform on which new technologies can be built.

Current users of some of the toolkit features include CSI/UCB Vision Group at UC Berkeley, Component Analysis Lab at Carnegie Mellon University, Affective Computing Research group at MIT Media Lab and Microsoft Research.

The ICT Virtual Human Toolkit is a collection of modules, tools and libraries that supports the creation of question-and-answer characters, with an emphasis on natural language interaction, nonverbal behavior and visual recognition. At the core of the toolkit lie innovative, research-driven technologies, which are combined with other software components to provide a complete, basic virtual human.
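
A minimal sketch of how such modules might compose into a simple question-and-answer character follows; the class names and keyword matching are hypothetical illustrations, not the toolkit’s actual API.

```python
class QuestionAnswering:
    """Stand-in for a statistical answer-selection module."""
    def __init__(self, qa_pairs):
        self.qa_pairs = qa_pairs

    def respond(self, utterance):
        # Trivial keyword match in place of real natural language understanding.
        for question, answer in self.qa_pairs.items():
            if any(word in utterance.lower() for word in question.split()):
                return answer
        return "I'm not sure I understood that."


class NonverbalBehavior:
    """Stand-in for a module that schedules gestures alongside speech."""
    def annotate(self, text):
        return {"speech": text,
                "gesture": "head_nod" if "yes" in text.lower() else "beat"}


class VirtualHuman:
    """Wires the stand-in modules into one complete, basic character."""
    def __init__(self):
        self.qa = QuestionAnswering({"name": "I am a virtual human built for research."})
        self.behavior = NonverbalBehavior()

    def handle(self, utterance):
        return self.behavior.annotate(self.qa.respond(utterance))


print(VirtualHuman().handle("What is your name?"))
```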