Randall W. Hill, Jr. Joins National Academies’ Board on Army Science and Technology

ICT Executive Director Randall W. Hill, Jr., has been selected as a member of the Board on Army Science and Technology (BAST) of the National Academies. The BAST was established at the National Academies in 1982 by Under Secretary of the Army James Ambrose. It serves as a convening authority for the discussion of science and technology issues of importance to the Army and formulates independent Army-related studies conducted by the National Academies. In coordination with the Army, the BAST holds four meetings per year, at which it works to focus study issues and statements of task, reviews committee membership nominations, and encourages BAST members to participate on ad hoc committees.

BAST website

ICT Researchers Earn Special Effects Credits in Groundbreaking Avatar Film

Paul Debevec, ICT’s associate director for graphics research, Abhijeet Ghosh, research assistant professor, and Wan-Chun (“Alex”) Ma, postdoctoral researcher, have been recognized with film credits for their work using ICT’s Light Stage facial scanning technology in James Cameron’s latest action adventure movie Avatar. Working closely with the visual effects wizards at Weta Digital, USC ICT’s Graphics Laboratory scanned the faces of many of the film’s principal cast members using Light Stage 5, ICT’s latest geometry and appearance capture system. This innovative technology, housed at ICT’s Marina del Rey campus, precisely captures the shape, shine, color and texture of an actor’s face down to the level of each fine pore and wrinkle. These detailed scans were used by Weta Digital in their process of creating the film’s photorealistic digital humans and creatures, which are being lauded as a groundbreaking achievement in the evolution of digital filmmaking.

Avatar joins a growing list of visual effects movies that have used USC ICT’s Light Stage technology; others include Spider-Man 2, Spider-Man 3, Superman Returns, King Kong, Hancock, G.I. Joe and The Curious Case of Benjamin Button.

ICT Graphics Lab

Avatar trailer

Avatar full production credits

Belinda Lange: “Games for rehabilitation online survey: preliminary results”

Pieter Peers, Paul Debevec, Daniel Vlasic, Ilya Baran, Jovan Popovic, Szymon Rusinkiewicz, Wojciech Matusik: “Dynamic Shape Capture using Multi-View Photometric Stereo”

We describe a system for high-resolution capture of moving 3D geometry, beginning with dynamic normal maps from multiple views. The normal maps are captured using active shape-from-shading (photometric stereo), with a large lighting dome providing a series of novel hemispherical lighting configurations. To compensate for low-frequency deformation, we perform multi-view matching and thin-plate spline deformation on the initial surfaces obtained by integrating the normal maps. Next, the corrected meshes are merged into a single mesh using a volumetric method. The final output is a set of meshes that would have been impossible to produce with previous methods: they exhibit details on the order of a few millimeters and represent the performance over human-size working volumes at a temporal resolution of 60 Hz.
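As a rough illustration of the photometric-stereo step, the sketch below recovers per-pixel normals and albedo from images taken under known lighting directions by solving the Lambertian model in a least-squares sense. It is a minimal, hypothetical example (the function name and the simple Lambertian assumption are ours, not the paper's), omitting the multi-view matching and mesh-merging stages:

```python
import numpy as np

def photometric_stereo_normals(images, light_dirs):
    """Estimate per-pixel surface normals from images under known lights.

    images:     (k, h, w) grayscale intensities under k lighting conditions
    light_dirs: (k, 3) unit lighting direction vectors
    Returns normals (3, h, w) and albedo (h, w).
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                # (k, h*w) stacked intensities
    # Lambertian model: I = L @ (albedo * normal); solve by least squares.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = np.where(albedo > 1e-8, G / np.maximum(albedo, 1e-8), 0.0)
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

With at least three non-coplanar lighting directions the system is fully determined; the paper's dome provides many more conditions, making the fit robust to noise and shadowing.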

Dhaval Shah, Kyu J. Han, Shrikanth Narayanan: “A Low-Complexity Dynamic Face-Voice Feature Fusion Approach to Multimodal Person Recognition”

In this paper, we show the importance of face-voice correlation for audio-visual person recognition. We evaluate the performance of a system which uses the correlation between audio-visual features during speech against audio-only, video-only and audio-visual systems which use audio and visual features independently, neglecting the interdependency of a person’s spoken utterance and the associated facial movements. Experiments performed on the VidTIMIT dataset show that the proposed multimodal scheme has a lower error rate than all other comparison conditions and is more robust against replay attacks. The simplicity of the fusion technique also allows the use of only one classifier, which greatly simplifies system design and allows for a simple real-time DSP implementation.
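Feature-level fusion of the kind the abstract describes can be sketched as concatenating synchronized audio and visual features so that a single classifier sees their co-variation. The snippet below is an illustrative simplification; the nearest-centroid classifier and all names are our assumptions, not the paper's actual features or method:

```python
import numpy as np

def fuse_features(audio_feats, video_feats):
    """Join synchronized per-frame audio and visual features into one
    vector per frame, preserving their interdependency for a single
    downstream classifier (feature-level fusion sketch)."""
    # audio_feats: (n_frames, d_a); video_feats: (n_frames, d_v)
    return np.concatenate([audio_feats, video_feats], axis=1)

def nearest_centroid_identify(train_sets, probe):
    """Identify the probe as the enrolled person whose mean fused feature
    vector is closest: one classifier for the whole multimodal system."""
    best, best_d = None, np.inf
    for person, feats in train_sets.items():
        d = np.linalg.norm(probe.mean(axis=0) - feats.mean(axis=0))
        if d < best_d:
            best, best_d = person, d
    return best
```

Because the fused vectors carry both modalities jointly, a replayed recording whose audio and video were captured separately will not reproduce the correlated frame-by-frame structure, which is the intuition behind the robustness claim.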

Mei Si, Stacy Marsella, David Pynadath: “Directorial Control in a Decision-Theoretic Framework for Interactive Narrative”

Computer-aided interactive narrative has received increasing attention in recent years. Automated directorial control that manages the development of the story in the face of user interaction is an important aspect of interactive narrative design. Most existing approaches lack an explicit model of the user. This limits the approaches’ ability to predict the user’s experience, and hence undermines their effectiveness. Thespian is a multi-agent framework for authoring and simulating interactive narratives with explicit models of the user. This work extends Thespian with the ability to provide proactive directorial control using the user model. In this paper, we present the algorithms in detail, followed by examples.

ICT Virtual Humans Debut at Museum of Science, Boston

On Tuesday, December 8, 2009, just in time for National Computer Science Education Week, the USC Institute for Creative Technologies and the Museum of Science, Boston introduced “Grace” and “Ada,” two of the most advanced virtual humans ever created to interact with museum visitors. Programmed to find activities in the Museum’s Cahners ComputerPlace that match visitors’ interests, these artificially intelligent digital twins love to talk about themselves and even have a sense of humor!

A group of fourth-graders from Cambridge’s Graham and Parks Alternative School were the first visitors to meet the museum’s new computer-based interpreters—named fittingly after computer pioneers Grace Hopper and Ada Lovelace. The kids’ mission is to help Museum educators and visiting scientists make the twins even “smarter” than they already are.

In the next year, museum visitors will play a key role in “teaching” Ada and Grace even more. As part of a three-year research project funded by the National Science Foundation, scientists from ICT, working with museum educators, integrated some of the latest research in natural language understanding, artificial intelligence (AI), and computer graphics and animation to program the 19-year-old virtual female twins to move, listen, think, and talk just like real people.

See local coverage of the debut.

See an interview with the twins.

Read coverage in the Boston Globe.

Proceedings of the U.S. Naval Institute Features ICT

ICT Executive Director Randall W. Hill, Jr., and Director of Technology Bill Swartout discuss the current state and future of simulation in this journal article, which also covers ICT’s Virtual Iraq application for treating PTSD.

Read the full article.

Download the article.

David Traum at HCSNET Summerfest in Sydney, Australia

On the morning of November 30, ICT’s Dr. David Traum will give a presentation titled, “An Introduction to Dialog Systems.” This course will present an overview of some of the most popular approaches to dialogue system organization. Dr. Traum will briefly survey some prominent dialogue domains and systems that engage in dialogue within those domains. He will go over the different functional components of a dialogue system and some different approaches to providing that functionality. Finally, Dr. Traum will focus on the dialogue management component and discuss different techniques for dialogue management, including keyword and IR-inspired techniques, finite-state, frame-based, plan- and agent-based, and information-state based methods.

On Friday morning Dr. David Traum will share the latest on Spoken Dialogue Models for Virtual Humans.

Graham Fyffe, Cyrus Wilson, Paul Debevec: “Cosine Lobe Based Relighting from Gradient Illumination Photographs”

We present an image-based method for relighting a scene by analytically fitting a cosine lobe to the reflectance function at each pixel, based on gradient illumination photographs. Realistic relighting results for many materials are obtained using a single per-pixel cosine lobe obtained from just two color photographs: one under uniform white illumination and the other under colored gradient illumination. For materials with wavelength-dependent scattering, a better fit can be obtained using independent cosine lobes for the red, green, and blue channels, obtained from three monochromatic gradient illumination conditions instead of the colored gradient condition. We explore two cosine lobe reflectance functions, both of which allow an analytic fit to the gradient conditions. One is non-zero over half the sphere of lighting directions, which works well for diffuse and specular materials, but fails for materials with broader scattering such as fur. The other is non-zero everywhere, which works well for broadly scattering materials and still produces visually plausible results for diffuse and specular materials. Additionally, we estimate scene geometry from the photometric normals to produce hard shadows cast by the geometry, while still reconstructing the input photographs exactly.
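The core fitting step can be illustrated with a small sketch: under a linear gradient illumination pattern, the per-pixel ratio of the gradient image to the uniform image yields the centroid of the reflectance lobe, which serves as the cosine-lobe axis for relighting. The code below is a hypothetical simplification (all names are ours, and the lobe's energy normalization is omitted), not the paper's exact formulation:

```python
import numpy as np

def fit_cosine_lobes(uniform_img, grad_imgs):
    """Fit a per-pixel cosine-lobe axis from gradient-illumination photos.

    uniform_img: (h, w) image under uniform illumination
    grad_imgs:   (3, h, w) images under linear gradients along x, y, z,
                 with light intensity (1 + axis_coordinate) / 2.
    Returns the lobe axis (3, h, w) and an amplitude (h, w); a full
    implementation would normalize amplitude by the lobe's integral.
    """
    u = np.maximum(uniform_img, 1e-8)
    # Gradient/uniform ratio gives the lobe centroid per axis,
    # remapped from [0, 1] back to [-1, 1].
    centroid = 2.0 * grad_imgs / u - 1.0          # (3, h, w)
    norm = np.linalg.norm(centroid, axis=0)
    axis = centroid / np.maximum(norm, 1e-8)
    return axis, uniform_img

def relight(axis, amplitude, light_dir):
    """Render the scene under a new directional light using a clamped
    (zero over the back hemisphere) per-pixel cosine lobe."""
    d = np.asarray(light_dir, float)
    d = d / np.linalg.norm(d)
    cos = np.tensordot(d, axis, axes=(0, 0))      # per-pixel dot product
    return amplitude * np.clip(cos, 0.0, None)
```

This corresponds to the half-sphere lobe variant discussed in the abstract; the everywhere-non-zero variant would drop the clamp and rescale instead.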

Matt Jen-Yuan Chiang, Paul Debevec, Oleg Alexander, Mike Rogers, William Lambeth: “Creating a Photoreal Digital Actor”

The Digital Emily Project is a collaboration between facial animation company Image Metrics and the Graphics Laboratory at the University of Southern California’s Institute for Creative Technologies to achieve one of the world’s first photorealistic digital facial performances. The project leverages latest-generation techniques in high-resolution face scanning, character rigging, video-based facial animation, and compositing. An actress was first filmed on a studio set speaking emotive lines of dialog in high definition. The lighting on the set was captured as a high dynamic range light probe image. The actress’ face was then three-dimensionally scanned in thirty-three facial expressions showing different emotions and mouth and eye movements using a high-resolution facial scanning process accurate to the level of skin pores and fine wrinkles. Lighting-independent diffuse and specular reflectance maps were also acquired as part of the scanning process. Correspondences between the 3D expression scans were formed using a semi-automatic process, allowing a blendshape facial animation rig to be constructed whose expressions closely mirrored the shapes observed in the rich set of facial scans; animated eyes and teeth were also added to the model. Skin texture detail showing dynamic wrinkling was converted into multiresolution displacement maps also driven by the blend shapes. A semi-automatic video-based facial animation system was then used to animate the 3D face rig to match the performance seen in the original video, and this performance was tracked onto the facial motion in the studio video. The final face was illuminated by the captured studio illumination and shaded using the acquired reflectance maps with a skin translucency shading algorithm. Using this process, the project was able to render a synthetic facial performance which was generally accepted as being a real face.

Astrid von der Putten, Nicole Kramer, Jonathan Gratch: “Who’s there? Can a Virtual Agent Really Elicit Social Presence?”

This study investigates whether humans perceive a higher degree of social presence when interacting with an animated character that displays natural as opposed to no listening behaviors, and whether this interacts with people’s belief that they are interacting with an agent or an avatar. In a 2×2 between-subjects experimental design, 83 participants were either led to believe that they were encountering an agent or that they were communicating with another participant mediated by an avatar. In fact, in both conditions the communication partner was an autonomous agent that exhibited either high or low behavioral realism. We found that participants experienced equal amounts of presence, regardless of interacting with an agent or an avatar. Behavioral realism, however, had an impact on the subjective feeling of presence: people confronted with a character displaying high behavioral realism reported a higher degree of mutual awareness.

Louis-Philippe Morency at the International Computer Science Institute

Louis-Philippe Morency has been invited by the International Computer Science Institute to present his non-verbal communication research at UC Berkeley.

ICT’s Jonathan Gratch Named Editor of New Journal on Computing and Emotion

Jon Gratch, ICT’s associate director for virtual humans research, has been named the inaugural editor of the new journal IEEE Transactions on Affective Computing. The journal is intended to be a cross-disciplinary and international archive journal aimed at disseminating results of research on the design of systems that can recognize, interpret, and simulate human emotions and related affective phenomena. It will publish original research on the principles and theories explaining why and how affective factors condition interaction between humans and technology, on how affective sensing and simulation techniques can inform our understanding of human affective processes, and on the design, implementation and evaluation of systems that carefully consider affect among the factors that influence their usability. Surveys of existing work will be considered for publication when they propose a new viewpoint on the history of and perspective on this domain.

Paul Debevec at AnimfxNZ

Somewhere on the way from 2001’s Final Fantasy to 2008’s The Curious Case of Benjamin Button, digital actors progressed from looking strangely synthetic to believably real. Debevec will overview some of this history and then focus on how high-resolution face scanning, advanced character rigging, and performance-driven facial animation were combined to create “Digital Emily”, a believably photorealistic digital actor.

The talk will also present ICT’s Graphics Lab’s latest 3D Teleconferencing system which uses real-time face scanning and a three-dimensional display to transmit a life-sized facial performance in real time and 3D with accurate eye contact and occlusion.

Louis-Philippe Morency: “Workshop on Use of Context in Vision Processing”

Louis-Philippe Morency will host a workshop titled, “Data-driven Context Representation for Head Gesture Recognition during Multi-Party Interaction.”

ICT’s Patrick Kenny on Panel for Virtual Healthcare Interaction at AAAI 2009 Fall Symposium

Interaction between healthcare providers and consumers has a central role in consumer satisfaction and successful health outcomes. The healthcare consumer, facing increasing responsibility for healthcare decisions, may turn to electronic resources to supplement the information given by his healthcare provider. Here intelligent systems can assist in retrieval and summarization of relevant and trustworthy information, in tailoring the information so that it is comprehensible, and in making it accessible to computer users with disabilities. Furthermore, intelligent systems are beginning to appear that provide virtual healthcare services to the patient: for example, monitoring the patient’s health, reminding him to take his medicine, and encouraging him to exercise or eat a healthy diet. On the health care provider’s side, artificial intelligence can provide virtual patients for training providers to diagnose, care for, or communicate with clients.

Multi-Representational Architectures for Human-Level Intelligence at AAAI Fall Symposium

A multiplicity of representational frameworks has been proposed for explaining and creating human-level intelligence. Each has been proven useful or effective for some class of problems, but not across the board. This fact has led researchers to propose that perhaps the underlying design of cognition is multi-representational, or hybrid, and made up of subsystems with different representations and processes interacting to produce the complexity of cognition. Recent work in cognitive architectures has explored the design and use of such systems in high-level cognition. The main aim of this symposium is to bring together researchers who work on systems utilizing different types of representations to explore a range of questions about the theoretical framework and applications of such systems.

The symposium will be a mixture of invited talks, refereed full and position papers, expert panels and discussion sessions. The first session on each day will feature invited talks from experts in the field. The second and fourth sessions on Thursday and Friday (and the second session on Saturday) will be devoted to paper presentations. The exact length of time reserved for each presentation will be determined according to the number of papers accepted and will include time for answering questions. Time will also be reserved at the end of each paper session for an expert panel formed from the presenters of that session. More general questions that focus on areas common to the presentations or those that compare and contrast the various approaches discussed in that session will be the focus of these discussions. The third session on Thursday and Friday will be devoted to discussion groups. There will be between four and six groups devoted to various theoretical and application-oriented topics. Symposium participants will be able to select their group of choice. The end of these discussion sessions will include a 20-30 minute meeting where the various groups will present summaries of their individual discussions.

Paul Debevec Covered by the New Zealand Press

On his recent trip to New Zealand’s AnimfxNZ conference, Paul was featured on TV New Zealand as well as in an article in the New Zealand Dominion Post. The stories focused on the work of the ICT Graphics Lab in bringing computer-generated animation closer to real life. “We’re trying to create a puppet, for lack of a better word,” said Debevec. “We haven’t done anything yet to threaten the craft of acting. It might be that we can create more realistic versions of actors for stunt scenes, or if we need them younger or older, or if we need to bring an actor back to life.”

Read the full article online.

Download the article.

Belinda Lange, Skip Rizzo: “Virtual Reality Rehabilitation”

International Society for Traumatic Stress Studies Annual Meeting

The goal of the International Society for Traumatic Stress Studies is to foster communication between basic scientists, clinicians and policy makers in order to advance the integration of current scholarship and practice. In doing so, the society hopes to advance understanding that will promote the dismantling of the conditions that produce trauma, as well as facilitate the mitigation of adverse responses to traumatic experiences. Skip Rizzo will present on the Virtual Iraq and SimCoach projects at the International Society for Traumatic Stress Studies Annual Meeting.

The New York Times Features ICT’s Coming Home Project

The New York Times featured Coming Home, an ICT project led by Jacquelyn Morie. Coming Home is a destination within the virtual world Second Life that provides camaraderie, support and resources for returning troops. “The real world actually can be affected by being in the virtual world,” Morie said. ICT specializes in building virtual humans, which it uses for marketing and negotiations, or to act as museum guides or help with virtual diagnoses, the story stated. Next year, the institute will do pilot studies on the healing center with an Army hospital, the article noted.

Jacquelyn Ford Morie Presents on New Uses for Emerging Technologies at Web 2.0 Summit

ICT’s Jacki Morie was selected to present her work on ICT’s Coming Home project, an effort that is building an online community for returning veterans to find support, resources and healing in Second Life. Other speakers at the summit included Yahoo! CEO Carol Bartz and Jeff Immelt, Chairman and CEO of GE.

Visit the conference site.

Paul Debevec’s TED Talk Posted at TED.com

Paul Debevec’s presentation from TEDxUSC has been posted on the main TED conference site. Debevec’s presentation covered the Digital Emily project, a collaboration between the ICT Graphics Lab and facial animation company Image Metrics that produced what has been heralded as the most believable digital face ever created.

TED is a small nonprofit devoted to “Ideas Worth Spreading”. On TED.com, the best talks and performances from TED conferences and partners are made available online.

Last year, USC and the prestigious TED conference partnered to deliver an independently organized TED event at USC. TEDxUSC stayed true to the spirit of the TED Conference – hosting the world’s most fascinating thinkers and doers, and challenging them to give the talk of their lives.

3-D Teleconferencing Featured in the Chronicle of Higher Education

The Chronicle of Higher Education highlighted work by Paul Debevec and colleagues at ICT in a story on the increased use of and advances in videoconferencing technology. Debevec has built a holographic conferencing system inspired by “Star Wars,” the article noted. The system projects video onto a spinning mirror to create a 3-D image of a talking person, and the technology doesn’t require users to wear special glasses.

Read the story.

Skip Rizzo Named Associate Director for Medical Virtual Reality at ICT

In a newly created position, Dr. Albert “Skip” Rizzo will be responsible for advancing and overseeing the use of virtual reality and game technology in a wide range of medical domains for training, therapy, diagnostics and other forms of intervention.

Beginning with his work using a virtual classroom presented through a head-mounted display that could help diagnose children with attention deficit disorder, Skip has been a leader in using virtual reality for clinical purposes.  More recently, Skip’s efforts to use VR for post-traumatic stress disorder therapy have led to very promising results where roughly 75% of patients receiving the therapy became sub-clinical at therapy completion.  This effort received an award at the Laval VR conference and was the subject of a story in the New Yorker magazine.  Skip is also one of the principal investigators on the SimCoach project, which will provide information and resources to returning veterans and their families who are experiencing post-deployment problems.

Variety Covers ICT

ICT was featured in a Variety article about ways the military transitions Hollywood technologies for training. The story highlights ICT’s Mobile Counter-IED Interactive Trainer (MCIT) as an example of a project that uses not just technological, but storytelling expertise as well. “We refer to them as cognitive training systems,” says Kim LeMasters, creative director for the ICT. “We’re trying to train the brain.” The story noted that, rather than deliver a non-interactive training video that would likely bore 18- to 20-year-olds, ICT created a story told from the p.o.v. of two characters: the bomb-maker and a young soldier who had just survived an IED attack. “To make this a compelling experience, you have to hook ‘em,” says LeMasters. “You have to have a story.”

Download a copy of the print version.

Kenji Sagae, Andrew Gordon: “Clustering Words by Syntactic Similarity Improves Dependency Parsing of Predicate-Argument Structures”

We present an approach for deriving syntactic word clusters from parsed text, grouping words according to their unlexicalized syntactic contexts. We then explore the use of these syntactic clusters in leveraging a large corpus of trees generated by a high-accuracy parser to improve the accuracy of another parser based on a different formalism for representing a different level of sentence structure. In our experiments, we use phrase-structure trees to produce syntactic word clusters that are used by a predicate-argument dependency parser, significantly improving its accuracy.
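A simplified version of deriving word clusters from syntactic contexts might look like the sketch below: build a context-count vector per word, then cluster the vectors with cosine-similarity k-means using farthest-first seeding. This is an illustrative stand-in, not the paper's actual clustering procedure; all names and context labels are hypothetical:

```python
import numpy as np

def cluster_words_by_context(word_contexts, k, iters=20):
    """Cluster words by their unlexicalized syntactic contexts.

    word_contexts: dict word -> dict(context_label -> count), where
    context labels are things like "subj-of-VERB" read off parse trees.
    Returns a dict word -> cluster id.
    """
    words = sorted(word_contexts)
    contexts = sorted({c for ctx in word_contexts.values() for c in ctx})
    col = {c: j for j, c in enumerate(contexts)}
    # Build and L2-normalize the word-by-context count matrix.
    X = np.zeros((len(words), len(contexts)))
    for i, w in enumerate(words):
        for c, n in word_contexts[w].items():
            X[i, col[c]] = n
    X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)

    # Farthest-first initialization: start from the first word, then
    # repeatedly add the word least similar to any existing center.
    centers = [X[0]]
    while len(centers) < k:
        sims = np.max(X @ np.array(centers).T, axis=1)
        centers.append(X[int(np.argmin(sims))])
    centers = np.array(centers)

    for _ in range(iters):
        labels = np.argmax(X @ centers.T, axis=1)   # cosine assignment
        for j in range(k):
            members = X[labels == j]
            if len(members):
                m = members.mean(axis=0)
                centers[j] = m / max(np.linalg.norm(m), 1e-12)
    return {w: int(l) for w, l in zip(words, labels)}
```

The resulting cluster ids could then be fed to a dependency parser as coarse word-class features, which is the role the syntactic clusters play in the experiments described above.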

LabTV Posts New Segments about ICT Work

Stories about the work of the ICT Graphics Lab are now up on the LabTV website. Click on the link below to be directed to new segments about Light Stage technology and 3D teleconferencing. Scroll down for earlier segments about ICT research and development on virtual humans, virtual therapy, Sgt. Star and FlatWorld. Lab TV highlighted the work going on at ICT as part of a National Defense Education Program to promote science, technology, math and engineering to young people.

Visit the Lab TV website.

Jina Lee, Stacy Marsella: “Learning Models of Speaker Head Nods with Affective Information”

During face-to-face conversation, the speaker’s head is continually in motion. These movements serve a variety of important communicative functions, and may also be influenced by our emotions. The goal for this work is to build a domain-independent model of speaker head movements and investigate the effect of using affective information during the learning process. Once the model is learned, it can later be used to generate head movements for virtual agents. In this paper, we describe our machine-learning approach to predicting speaker head nods using an annotated corpus of face-to-face human interaction and emotion labels generated by an affect recognition model. We describe the feature selection process, training process, and the comparison of results of the learned models under varying conditions. The results show that using affective information can help predict head nods better than when no affective information is used.

Sin-hwa Kang, Jonathan Gratch: “Interactants’ Most Intimate Self-Disclosure in Interactions with Virtual Humans”

This study explored the effect of the combination of visual fidelity of a virtual human and interactants’ anticipated future interaction on self-disclosure in emotionally engaged and synchronous communication. The preliminary results were compared between interactions with embodied virtual agents and with real humans. We particularly aimed at investigating ways to allow interactants’ intimate self-disclosure while securing their anonymity, even with minimal cues of an embodied virtual agent, when interactants anticipate their future interaction with interaction partners. The results of preliminary data analysis showed that interactants revealed intimate information about their most common sexual fantasy when they had anticipated future interaction with their interaction partners.

Dusan Jan, Antonio Roque, Anton Leuski, Jacki Morie, David Traum: “A virtual tour guide for virtual worlds”

In this paper we present an implementation of an embodied conversational agent that serves as a virtual tour guide in Second Life. We show how we combined the abilities of a conversational agent with navigation in the world and present some preliminary evaluation results.

ICT Researchers Win Best Paper Award

Research by Jonathan Gratch, Stacy Marsella, Ning Wang and Brooke Stankovic received the Best Paper Award at the IEEE 2009 International Conference on Affective Computing and Intelligent Interaction in Amsterdam. Their paper, “Assessing the Validity of Appraisal-based Models of Emotion,” describes an empirical study comparing the accuracy of competing computational models of emotion in predicting human emotional responses in naturalistic emotion-eliciting situations. The results find clear differences in the models’ ability to forecast human emotional responses and provide guidance on how to develop more accurate models of human emotion. The conference series on Affective Computing and Intelligent Interaction is the premier international forum for state-of-the-art research on affective and multimodal human-machine interaction and systems. Over seventy papers were presented.

Read the paper.

Visit the conference website.

Celso de Melo, Jonathan Gratch, Liang Zheng: “Expression of Moral Emotions in Cooperating Agents”

Moral emotions have been argued to play a central role in the emergence of cooperation in human-human interactions. This work describes an experiment which tests whether this insight carries to virtual human-human interactions. In particular, the paper describes a repeated-measures experiment where subjects play the iterated prisoner’s dilemma with two versions of the virtual human: (a) neutral, which is the control condition; (b) moral, which is identical to the control condition except that the virtual human expresses gratitude, distress, remorse, reproach and anger through the face according to the action history of the game. Our results indicate that subjects cooperate more with the virtual human in the moral condition and that they perceive it to be more human-like. We discuss the relevance these results have for building agents which are successful in cooperating with humans.

Celso de Melo, Jonathan Gratch: “Expression of Emotions using Wrinkles, Blushing, Sweating and Tears”

Wrinkles, blushing, sweating and tears are physiological manifestations of emotions in humans. Therefore, the simulation of these phenomena is important for the goal of building believable virtual humans which interact naturally and effectively with humans. This paper describes a real-time model for the simulation of wrinkles, blushing, sweating and tears. A study is also conducted to assess the influence of the model on the perception of surprise, sadness, anger, shame, pride and fear. The study follows a repeated-measures design where subjects compare how well each emotion is expressed by virtual humans with or without these phenomena. The results reveal a significant positive effect on the perception of surprise, sadness, anger, shame and fear. The relevance of these results is discussed for the fields of virtual humans and expression of emotions.

Celso de Melo, Jonathan Gratch: “The Effect of Color on Expression of Joy and Sadness in Virtual Humans”

For centuries artists have been exploring color to express emotions. Following this insight, the paper describes an approach to learn how to use color to influence the perception of emotions in virtual humans. First, a model of lighting and filters inspired by the visual arts is integrated with a virtual human platform to manipulate color. Next, an evolutionary model, based on genetic algorithms, is created to evolve mappings between emotions and lighting and filter parameters. A first study is then conducted where subjects evolve mappings for joy and sadness without being aware of the evolutionary model. In a second study, the features which characterize the mappings are analyzed. Results show that virtual human images of joy tend to be brighter, more saturated and have more colors than images of sadness. The paper discusses the relevance of the results for the fields of expression of emotions and virtual humans.

Stacy Marsella, Jonathan Gratch, Ning Wang: “Assessing the validity of a computational model of emotional coping”

In this paper we describe the results of a rigorous empirical study evaluating the coping responses of a computational model of emotion. We discuss three key kinds of coping, Wishful Thinking, Resignation and Distancing, that impact an agent’s beliefs, intentions and desires, and compare these coping responses to related work in the attitude change literature. We discuss the EMA computational model of emotion and identify several hypotheses it makes concerning these coping processes. We assess these hypotheses against the behavior of human subjects playing a competitive board game, using monetary gains and losses to induce emotion and coping. Subjects’ appraisals, emotional states and coping responses were indexed at key points throughout a game, revealing a pattern of subjects altering their beliefs, desires and intentions as the game unfolds. The results clearly support several of the hypotheses on coping responses but also identify (a) extensions to how EMA models Wishful Thinking as well as (b) individual differences in subjects’ coping responses.

Ning Wang, Jonathan Gratch: “Rapport and Facial Expression”

How can we build virtual agents that establish rapport with humans? According to Tickle-Degnen and Rosenthal [4], the three essential components of rapport are mutual attentiveness, positivity and coordination. In our previous work, we designed an embodied virtual agent to establish rapport with a human speaker by providing rapid and contingent nonverbal feedback [13] [22]. How do we know that a human speaker is feeling a sense of rapport? In this paper, we focus on the positivity component of rapport by investigating the relationship of human speakers’ facial expressions to the establishment of rapport. We used an automatic facial expression coding tool called CERT to analyze the human dyad interactions and human-virtual human interactions. Results show that recognizing positive facial displays alone may be insufficient and that recognized negative facial displays were more diagnostic in assessing the level of rapport between participants.

USC Coach Pete Carroll, Provost C. L. Max Nikias and Athletic Director Mike Garrett Visit ICT

Pete Carroll, coach of the USC football team, took a break from football practice to experience a different kind of training here at ICT. Coach Carroll was accompanied by USC’s executive vice president and provost, C. L. Max Nikias, who organized the visit, and USC athletic director Mike Garrett. The trio learned about several ICT technologies and applications, including virtual reality therapy for treating PTSD, the Light Stage scanning systems for creating realistic digital characters and our use of film and storytelling to address ethics and leadership. The “good sports” also interacted with ICT’s virtual humans and tried on our latest virtual reality headgear.

Read the USC News story here.

ICT Training Games in Training and Simulation Journal

The August issue of Training and Simulation Journal features two stories about ICT work. One covers the BiLAT negotiation trainer that recently won an Army Modeling and Simulation award and the other discusses UrbanSim, which teaches commanders to manage operations at a citywide level. Both stories mention the Army’s use of these ICT-developed games to provide training in interpersonal skills, such as building trust and cultural awareness. The stories note that training that models human behavior is being recognized as increasingly important in the Army today.

Read the BiLAT story here.

Read the UrbanSim story here.

3D Teleconferencing Makes Heads Spin at SIGGRAPH – Forbes and Technology Review Cover

ICT’s 3D Teleconferencing Display is being showcased at the SIGGRAPH-09 Emerging Technologies Exhibit. The system, also known as HeadSPIN, is profiled in the August issue of Forbes Magazine and in MIT’s Technology Review. Created in collaboration with the USC School of Cinematic Arts and Fakespace Labs, the display allows a remote participant to appear in 3D and maintain appropriate eye contact with speakers.

The hologram-like effect is reminiscent of the floating images of Princess Leia and Yoda from the Star Wars movies, now in the real world rather than a galaxy far, far away. “Except for projecting onto thin air, we can do everything Star Wars can,” said Paul Debevec, ICT’s associate director of graphics research and lead developer of the system.

Read the Forbes article here.

Watch Forbes video story here.

Read the MIT Technology Review post here.

Pieter Peers, Bruce Lamond, Abhijeet Ghosh, Paul Debevec, Dhruv Mahajan, Wojciech Matusik, Ravi Ramamoorthi: “Compressive Light Transport Sensing”

In this paper we propose a new framework for capturing light transport data of a real scene, based on the recently developed theory of compressive sensing. Compressive sensing offers a solid mathematical framework for inferring a sparse signal from a limited number of non-adaptive measurements. Besides introducing compressive sensing to computer graphics for fast acquisition of light transport, we develop several innovations that address specific challenges for image-based relighting and that may have broader implications. We develop a novel hierarchical decoding algorithm that improves reconstruction quality by exploiting inter-pixel coherency relations. Additionally, we design new non-adaptive illumination patterns that minimize measurement noise and further improve reconstruction quality. We illustrate our framework by capturing detailed high-resolution reflectance fields for image-based relighting.
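The core compressive-sensing idea can be shown numerically in a few lines. This is a minimal sketch, not the paper's method: it uses a generic random measurement matrix and a basic iterative soft-thresholding (ISTA) decoder in place of the paper's hierarchical decoder and optimized illumination patterns.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse "light transport" signal: 100 coefficients, only 5 nonzero.
n, k, m = 100, 5, 40
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1, 2, k)

# Non-adaptive illumination patterns modeled as a random matrix.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true                      # m measurements, m << n

def ista(A, y, lam=0.01, steps=500):
    """Iterative soft-thresholding: a basic sparse decoder."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    for _ in range(steps):
        g = A.T @ (A @ x - y)       # gradient of 0.5*||Ax - y||^2
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

x_hat = ista(A, y)                  # recovers x_true from 40 measurements
```

The point the abstract makes is that because the signal is sparse, far fewer measurements than unknowns suffice, which is what makes light transport acquisition fast.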

Recent Coverage for ICT’s “Coming Home” Project

ICT’s project to create a space within Second Life to provide camaraderie, support and resources for returning troops to help them re-assimilate into civilian life was featured in CNET and United Press International articles and on the website Nextgov.com.

“You can think of it as the VFW hall of the 21st century,” said ICT’s Jacquelyn Morie, leader of this project. “Most veterans, when they come back, are not (relocated) into neighborhoods the way people were in World War II. So this gives people a chance to be together, even if they’re widely dispersed.”

Read the CNET story here »

Read the UPI story here »

Read the Nextgov.com story here »

Pieter Peers, Ying Song, Xin Tong, Fabio Pellacini: “SubEdit: A Representation for Editing Measured Heterogeneous Subsurface Scattering”

In this paper we present SubEdit, a representation for editing the BSSRDF of heterogeneous subsurface scattering acquired from real-world samples. Directly editing measured raw data is difficult due to the non-local impact of heterogeneous subsurface scattering on the appearance. Our SubEdit representation decouples these non-local effects into the product of two local scattering profiles defined at respectively the incident and outgoing surface locations. This allows users to directly manipulate the appearance of single surface locations and to robustly make selections. To further facilitate editing, we reparameterize the scattering profiles into the local appearance concepts of albedo, scattering range, and profile shape. Our method preserves the visual quality of the measured material after editing by maintaining the consistency of subsurface transport for all edits. SubEdit fits measured data well while remaining efficient enough to support interactive rendering and manipulation. We illustrate the suitability of SubEdit as a representation for editing by applying various complex modifications on a wide variety of measured heterogeneous subsurface scattering materials.

Pieter Peers, Tim Weyrich, Wojciech Matusik, Szymon Rusinkiewicz: “Fabricating Microgeometry for Custom Surface Reflectance”

We propose a system for manufacturing physical surfaces that, in aggregate, exhibit a desired surface appearance. Our system begins with a user specification of a BRDF, or simply a highlight shape, and infers the required distribution of surface slopes. We sample this distribution, optimize for a maximally-continuous and valley-minimizing height field, and finally mill the surface using a computer-controlled machine tool. We demonstrate a variety of surfaces, ranging from reproductions of measured BRDFs to materials with unconventional highlights.

ICT Games Featured in PC World Magazine Story

A PC World story about the Army’s increasing use of video games discussed ICT’s Virtual IraqUrbanSim and BiLAT applications. Noting that the characters in ICT games feel more human than in commercial games, the article stated, “USC’s Institute for Creative Technologies (ICT) has built a reputation for designing games that make commercial edutainment software look like child’s play—games that help treat post-traumatic stress disorder, let soldiers practice securing and rebuilding an Iraqi city, and even encourage them to develop their skills at negotiation.”

“[The Army is] doing a great job with weapons research,” said Randall W.Hill, Jr., ICT executive director.  “What they ask us is: ‘How do we raise [our soldiers’] cultural awareness?’”

Instead of worrying about how to make a game fun, people like ICT’s Hill and the U.S. Army’s gaming experts ask, “How can we design this game to solve a problem?” As members of the design team focus on answering this question, they come up with games that feel more realistic, more mature, and (unexpectedly) more fun to play, noted the article.

Read the full story.

ICT’s Virtual Worlds and Characters Featured in Armed with Science Interview with Jacquelyn Morie

ICT’s Jacquelyn Morie discussed how research on immersive games, virtual worlds and human intelligence is being used to provide support to military veterans on Armed with Science, a weekly webcast that discusses the importance of science and technology to the Department of Defense. Morie spoke about ICT and about the new project she is leading, the Transitional Online Post-Deployment Soldier Support in Virtual Worlds, or Coming Home, a project that is creating a space within Second Life dedicated to providing camaraderie, support and resources for helping returning soldiers reintegrate into civilian life.

Listen to the segment.

Read about the segment.

Andrew Jones, Magnus Lang, Graham Fyffe, XueMing Yu, Jay Busch, Mark Bolas, Paul Debevec, Ian McDowall: “Achieving Eye Contact in a One-to-Many 3D Video Teleconferencing System”

We present a set of algorithms and an associated display system capable of producing correctly rendered eye contact between a three-dimensionally transmitted remote participant and a group of observers in a 3D teleconferencing system. The participant’s face is scanned in 3D at 30Hz and transmitted in real time to an autostereoscopic horizontal-parallax 3D display, displaying him or her over more than a 180° field of view observable to multiple observers. To render the geometry with correct perspective, we create a fast vertex shader based on a 6D lookup table for projecting 3D scene vertices to a range of subject angles, heights, and distances. We generalize the projection mathematics to arbitrarily shaped display surfaces, which allows us to employ a curved concave display surface to focus the high speed imagery to individual observers. To achieve two-way eye contact, we capture 2D video from a cross-polarized camera reflected to the position of the virtual participant’s eyes, and display this 2D video feed on a large screen in front of the real participant, replicating the viewpoint of their virtual self. To achieve correct vertical perspective, we further leverage this image to track the position of each audience member’s eyes, allowing the 3D display to render correct vertical perspective for each of the viewers around the device. The result is a one-to-many 3D teleconferencing system able to reproduce the effects of gaze, attention, and eye contact generally missing in traditional teleconferencing systems.

Paul S. Rosenbloom: “Towards a New Cognitive Hourglass: Uniform Implementation of Cognitive Architecture via Factor Graphs”

As cognitive architectures become ever more ambitious in the range of phenomena they are to assist in producing and modeling, there is increasing pressure for diversity in the mechanisms they embody. Yet uniformity remains critical for both elegance and extensibility. Here, the search for uniformity is continued, but shifted downwards in the cognitive hierarchy to the implementation level. Factor graphs are explored as a promising core, with initial steps towards a reimplementation of Soar. The ultimate aim is a uniform implementation level for cognitive architectures affording both heightened elegance and expanded coverage.
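The factor-graph substrate Rosenbloom proposes rests on message passing via the sum-product algorithm. A minimal sketch on a two-variable graph makes the mechanism concrete; the variables, factors and numbers below are invented purely for illustration:

```python
import numpy as np

# Tiny factor graph over two binary variables x1, x2:
#   f1(x1) is a unary factor (a prior), f2(x1, x2) a pairwise factor.
f1 = np.array([0.6, 0.4])                  # f1[x1]
f2 = np.array([[0.9, 0.1],                 # f2[x1, x2]
               [0.2, 0.8]])

# Sum-product: the message from f2 to x2 multiplies in the incoming
# message from x1 (here just f1) and sums out x1.
msg_f2_to_x2 = (f1[:, None] * f2).sum(axis=0)
p_x2 = msg_f2_to_x2 / msg_f2_to_x2.sum()   # normalized marginal of x2
```

On tree-structured graphs this procedure computes exact marginals; the appeal for a cognitive architecture is that one uniform message-passing mechanism can implement many different capabilities, depending on how the factors are defined.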

USC Researcher Honored for Virtual Reality Innovations

ICT research scientist Albert “Skip” Rizzo was selected as the first-ever winner of the Intellectual Leadership Award, established by the Los Angeles Chapter of Mensa to recognize, honor and celebrate the region’s leaders in research and innovation.

Rizzo, who is also a research professor at the USC Davis School of Gerontology and the department of psychiatry at the USC Keck School of Medicine, has developed a virtual reality-based therapy for treating PTSD in returning soldiers. The system is currently in use at over 30 hospitals and clinics throughout the country and has shown promise in ongoing clinical trials. He is also researching other uses for virtual reality-based therapies, including for autism, attention deficit disorder and motor rehabilitation following stroke and TBI.

“Skip Rizzo was chosen as our 2009 honoree because his research represents an exciting new frontier using the latest technology to help people not only in the therapeutic process once they have developed a psychological need, but in preventing conditions such as PTSD in our warriors by giving them a realistic idea of what they will face before they go into war,” said Jonathan Carr, director of the Greater Los Angeles Chapter of Mensa. “For these reasons, Skip Rizzo is clearly an intellectual leader in our community and deserves recognition and praise.”

Rizzo was presented with his award at a July 19 reception at the Fairmont Miramar hotel in Santa Monica, where he spoke and gave a live demonstration of virtual reality therapy.

“The use of VR simulation technology is one of a wave of new technologies that, if thoughtfully evolved from both a scientific and clinical perspective, will stand to revolutionize clinical care as we move into the 21st century,” Rizzo said. “I would also like to thank the Mensa members for this recognition and for their enthusiastic and informed curiosity about the research described at the ceremony.”

David Traum: “Cultural Models for Virtual Humans”

In this paper, we survey different types of models of culture for virtual humans. Virtual humans are artificial agents that include both a visual human-like body and intelligent cognition driving the actions of the body. Culture covers a wide range of common knowledge of behavior and communication that can be used in a number of ways, including interpreting the meaning of action, establishing identity, expressing meaning, and making inferences about the performer. We look at several examples of existing cultural models and point out the steps that remain toward a fuller model of culture.

Ravichander Vipperla, Maria Wolters, Kallirroi Georgila, Steve Renals: “Speech Input from Older Users in Smart Environments”

Although older people are an important user group for smart environments, there has been relatively little work on adapting natural language interfaces to the requirements of older users. In this paper, we focus on a particularly thorny problem: processing speech input from older users. Our experiments on the MATCH corpus show clearly that we need age-specific adaptation in order to recognize older users’ speech reliably. Language models need to cover typical interaction patterns of older people, and acoustic models need to accommodate older voices. Further research is needed into intelligent adaptation techniques that will allow existing large, robust systems to be adapted with relatively small amounts of in-domain, age appropriate data. In addition, older users need to be supported with adequate strategies for handling speech recognition errors.

Invited Talk: Bill Swartout To Give the Robert Engelmore Memorial Lecture at IAAI 09

Bill Swartout, director of technology at USC’s Institute for Creative Technologies, will give the Robert Engelmore Memorial Award Lecture for 2009. Swartout will discuss the synergies among the different threads of AI research that are leading to the goal of computer-generated characters that look and behave just like real people.

ICT’s Bill Swartout Wins Major AI Award

Association for the Advancement of Artificial Intelligence recognizes achievements and service of William Swartout, a USC expert who has redefined personal computing

William Swartout, director of technology at the USC Institute for Creative Technologies, received international recognition for his pioneering work, including efforts towards creating virtual humans that look and behave just like real people.

Swartout, also a research professor of computer science at the USC Viterbi School of Engineering, received the Association for the Advancement of Artificial Intelligence Robert S. Engelmore Memorial Lecture Award, one of the AI community’s top honors, for his contributions throughout a career devoted to expanding the ways humans and computers communicate with each other and exploring what can be achieved through improved interactions.

“This is a well-deserved recognition of Bill’s contributions to the field of artificial intelligence over the last three decades,” said Randall W. Hill, Jr., executive director of ICT.  “He has been at the head of two terrific research organizations and has a gift for attracting very talented researchers, building teams and promoting a vision for what is possible.  We are all very proud of Bill’s accomplishments and this acknowledgment from the AI community.”

As part of the award, Swartout delivered an address at this year’s Innovative Applications of Artificial Intelligence Conference on July 14 in Pasadena. He discussed the goal of creating ever more lifelike virtual humans. A related article will appear in an upcoming issue of AI Magazine.

“We want these computer generated characters to recognize our body language, facial expressions, voice tone and even express their own emotions as well,” said Swartout, who provides overall direction for research efforts at ICT. “The approach we take is to use all the things we know about how people behave and interact, encode that information in computer knowledge bases and use it to drive the behaviors of our virtual humans.”

Swartout is the principal investigator on a National Science Foundation-funded effort to create life-sized virtual human museum guides who have the ability to express knowledge, thoughts, feelings and even memories. These digital docents will be speaking with visitors at the Museum of Science, Boston late this year.

Swartout serves on the Board on Army Science and Technology of the National Academies, is a Fellow of the American Association for Artificial Intelligence and a member of the Joint Forces Command’s Transformation Advisory Group.  Before joining ICT in 1999, Swartout spent nearly two decades at USC’s Information Sciences Institute where he most recently led the Intelligent Systems division. He received his Ph.D. and M.S. in computer science from MIT and his bachelor’s degree from Stanford University.

In granting this year’s Engelmore Award, the AAAI noted Swartout’s “seminal contributions to knowledge-based systems and explanation, groundbreaking research on virtual human technologies and their applications, and outstanding service to the artificial intelligence community.”

Robert Engelmore, for whom the award was named, was a leading and beloved figure in the field of AI. The award was established in 2003 to recognize his service to the organization and his career achievements.

“Bob was a wonderful person with the rare combination of a generous spirit and a tack-sharp mind,” said Swartout. “It is an honor to receive this award that bears his name.”

Sudeep Gandhe, Nicolle Whitman, David Traum, Ron Artstein: “An Integrated Authoring Tool for Tactical Questioning Dialogue Systems”

We present an integrated authoring tool for rapid prototyping of dialogue systems for virtual humans taking part in tactical questioning simulations. The tool is aimed at helping domain experts who may have little or no knowledge of linguistics or computer science to build virtual characters that can play the role of the interviewee. Here we present the authoring tool and the underlying dialogue system which supports question answering dialogues with additional support for negotiations and other social behavior. Working in a top-down fashion our process begins with the domain specification. The authoring tool generates all relevant dialogue acts and allows authors to assign the language that will be used to refer to the domain elements. The authoring tool can be used to manipulate some aspects of the dialogue strategies employed by the virtual characters. It also supports re-using some of the authored content across different characters. We conclude with a preliminary evaluation of our tool.

Antonio Roque, David Traum: “Improving a Virtual Human Using a Model of Degrees of Grounding”

We describe the Degrees of Grounding model, which tracks the extent to which material has reached mutual belief in a dialogue, and conduct experiments in which the model is used to manage grounding behavior in spoken dialogues with a virtual human. We show that the model produces improvements in virtual human performance as measured by post-session questionnaires.

Matthew Jensen Hays, H. Chad Lane, Mark Core, Dave Gomboc, Milton Rosenberg: “Feedback Specificity and the Learning of Intercultural Communication Skills”

The role of explicit feedback in learning has been studied from a variety of perspectives and in many contexts. In this paper, we examine the impact of the specificity of feedback delivered by an intelligent tutoring system in a game-based environment for cultural learning. We compared two versions: one that provided only “bottom-out” hints and feedback and one that provided only conceptual messages. We measured during-training performance, in-game transfer, and long-term retention. Consistent with our hypotheses, specific feedback utterances produced inferior learning on the in-game transfer task when compared to conceptual utterances. No differences were found on a web-based post-test. We discuss possible explanations for these findings, particularly as they relate to the learning of loosely defined skills and serious games.

PTSD Virtual Exposure Therapy Featured in Military Training and Simulation Magazine

The latest issue of Military Training and Simulation magazine covers ICT’s Virtual Iraq application for treating PTSD in a story about therapeutic uses of simulation and virtual worlds.

Ryan McAlinden, Andrew Gordon, H. Chad Lane, David Pynadath: “UrbanSim: A Game-based Simulation for Counterinsurgency and Stability-focused Operations”

The UrbanSim Learning Package is a simulation-based training application designed for the U.S. Army to develop commanders’ skills for conducting counterinsurgency operations. UrbanSim incorporates multiple artificial intelligence (AI) technologies in order to provide an effective training experience, three of which are described in this paper. First, UrbanSim simulates the mental attitudes and actions of groups and individuals in an urban environment using the PsychSim reasoning engine. Second, UrbanSim interjects narrative elements into the training experience using a case-based story engine, driven by non-fiction stories told by experienced commanders. Third, UrbanSim provides intelligent tutoring using a simulation-based method for eliciting and evaluating learner decisions. UrbanSim represents a confluence of AI techniques that seek to bridge the gap between basic research and deployed AI systems.

Ning Wang, Jonathan Gratch: “Can a Virtual Human Build Rapport and Promote Learning?”

Research shows that a teacher’s nonverbal immediacy can have a positive impact on students’ cognitive learning and affect [3]. This paper investigates the effectiveness of nonverbal immediacy using a virtual human. The virtual human attempts to use immediacy feedback to create rapport with the learner. Results show that the virtual human established rapport with learners but did not help them achieve better learning results. The results also suggest that creating rapport is related to higher self-efficacy, and self-efficacy is related to better learning results.

ICT’s Paul Debevec Named Director-at-Large of ACM SIGGRAPH Executive Committee

Paul Debevec, ICT’s associate director for graphics research, has been elected a director-at-large of the Association for Computing Machinery’s Special Interest Group in Computer Graphics. He will serve a three-year term in this executive committee post, which helps set the direction for SIGGRAPH, the premier organization devoted to computer graphics and interactive techniques. Pixar’s Rob Cook, who won an Oscar for his work on RenderMan, was also elected to this position this year.

Abhijeet Ghosh, Tongbo Chen, Pieter Peers, Cyrus Wilson, Paul Debevec: “Estimating Specular Roughness and Anisotropy from Second Order Spherical Gradient Illumination”

This paper presents a novel method for estimating specular roughness and tangent vectors, per surface point, from polarized second order spherical gradient illumination patterns. We demonstrate that for isotropic BRDFs, only three second order spherical gradients are sufficient to robustly estimate spatially varying specular roughness. For anisotropic BRDFs, an additional two measurements yield specular roughness and tangent vectors per surface point. We verify our approach with different illumination configurations which project both discrete and continuous fields of gradient illumination. Our technique provides a direct estimate of the per-pixel specular roughness and thus does not require off-line numerical optimization that is typical for the measure-and-fit approach to classical BRDF modeling.

Skip Rizzo, Thomas Parsons, Bradley Newman, Greg Reger, JoAnn Difede, Barbara O. Rothbaum, Robert N. McLay, K. Holloway, Ken Graap, Josh Spitalnick, P. Bordnick, Scott Johnston, Greg Gahm: “Development and Clinical Results from the Virtual Iraq Exposure Therapy Application for PTSD”

Post Traumatic Stress Disorder (PTSD) is reported to be caused by exposure to an extreme traumatic stressor involving direct personal experience of (or witnessing/learning about) an event that involves actual or threatened death or serious injury, or other threat to one’s physical integrity including (but not limited to) military combat, violent personal assault, being kidnapped or taken hostage and terrorist attacks. Such incidents would be distressing to almost anyone, and are usually experienced with intense fear, horror, and helplessness. Initial data suggests that at least 1 out of 5 Iraq War veterans are exhibiting symptoms of depression, anxiety and PTSD. Virtual Reality (VR) delivered exposure therapy for PTSD has been previously used with reports of positive outcomes. The current paper will present the rationale and description of a VR PTSD therapy application (Virtual Iraq/Afghanistan) and present initial findings from a number of early studies of its use with active duty service members. Virtual Iraq/Afghanistan consists of a series of customizable virtual scenarios designed to represent relevant Middle Eastern VR contexts for exposure therapy, including a city and desert road convoy environment. User-centered design feedback needed to iteratively evolve the system was gathered from returning Iraq War veterans in the USA and from a system deployed in Iraq and tested by an Army Combat Stress Control Team. Results from an open clinical trial using Virtual Iraq with 20 treatment completers indicated that 16 no longer met PTSD diagnostic criteria at post-treatment, with only one not maintaining treatment gains at 3 month follow-up.

Ron Artstein, Sudeep Gandhe, Michael Rushforth, David Traum: “Viability of a Simple Dialogue Act Scheme for a Tactical Questioning Dialogue System”

User utterances in a spoken dialogue system for tactical questioning simulation were matched to a set of dialogue acts generated automatically from a representation of facts as triples and actions as pairs. The representation currently covers about 50% of user utterances, and we show that a few extensions can increase coverage to 80% or more. This demonstrates the viability of simple schemes for representing question-answering dialogues in implemented systems.
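The triples-and-pairs representation the abstract describes lends itself to a short sketch. This is an illustrative reconstruction, not the authors' code: the example facts, actions and act templates below are hypothetical, showing only how dialogue acts can be generated mechanically from such a representation.

```python
# Facts as (object, attribute, value) triples; actions as (actor, act)
# pairs. All content here is invented for illustration.
facts = [("market", "location", "north side of town"),
         ("shopkeeper", "name", "Hassan")]
actions = [("insurgents", "planted the bomb")]

def generate_dialogue_acts(facts, actions):
    """Expand each fact/action into question and assertion acts."""
    acts = []
    for obj, attr, value in facts:
        acts.append(("whq", f"What is the {attr} of the {obj}?"))
        acts.append(("ynq", f"Is the {attr} of the {obj} {value}?"))
        acts.append(("assert", f"The {attr} of the {obj} is {value}."))
    for actor, act in actions:
        acts.append(("ynq", f"Did the {actor} {act}?"))
        acts.append(("assert", f"The {actor} {act}."))
    return acts

acts = generate_dialogue_acts(facts, actions)
```

Because the act inventory is generated automatically from the domain representation, coverage of user utterances becomes a question of how much of the domain the triples and pairs capture, which is exactly what the paper measures.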

Paul Debevec Named One of CGI’s Most Important Pioneers by 3D World Magazine

3D World Magazine has named Paul Debevec, ICT’s associate director for graphics research, one of the seven most important pioneers in the history of computer-generated imagery. The article credits Debevec with developing many of the techniques behind high dynamic range imaging and image-based lighting and mentions his current research on face scanning and 3D teleconferencing. The other innovators on the list are George Lucas, Edwin Catmull, Jim Blinn, John Whitney Sr., Chris Landreth and Benoît Mandelbrot.

Read the 3D World Magazine article.

Read the USC Chronicle story.

Bill Swartout to Give Honorary Award Lecture at IAAI-09

Bill Swartout, director of technology at ICT, will give the Robert Engelmore Memorial Award Lecture at the 2009 Innovative Applications of Artificial Intelligence conference at 10:30 a.m., Tuesday, July 14, in Pasadena, Calif. Swartout is this year’s recipient of the Engelmore Award, presented annually to an individual who has shown extraordinary service to AAAI and the AI community. In conjunction with the award, the recipient is invited to present a keynote lecture at the IAAI conference and to prepare a companion article for AI Magazine. The award was established in 2003 to honor Dr. Robert S. Engelmore’s extraordinary service to AAAI, AI Magazine, and the AI applications community, and his contributions to applied AI. In his talk, Swartout will discuss the synergies among the different threads of AI research that are leading to the goal of computer-generated characters that look and behave just like real people.

H. Chad Lane, Mark Core, Dave Gomboc, Mike Birch, Milton Rosenberg, John Hart: “Using Written and Behavioral Data to Detect Evidence of Continuous Learning”

We describe a lifelong learner modeling project that focuses on the use of written and behavioral data to detect patterns of learning over time. Related work in essay analysis and machine learning is discussed. Although primarily focused on isolated learning experiences, we argue there is promise for scaling these techniques up to the lifelong learner modeling problem.

Jacki Morie, Jamie Antonisse, Sean Bouchard, Eric Chance: “Virtual Worlds as a Healing Modality for Returning Soldiers and Veterans”

Those who have served in recent conflicts face many challenges as they reintegrate into society. In addition to recovering from physical wounds, traumatic brain injury and post-traumatic stress disorders, many soldiers also face basic psychological issues about who they are and how to find their place in a society that has not shared their experiences. To address these challenges, we have created a space that provides ongoing opportunities for healing activities, personal exploration and social camaraderie in an online virtual world, Second Life. In such worlds, where each avatar is controlled by a live individual, experiences can be unintuitive, uninviting, boring or difficult to control. To counter this, we are implementing autonomous intelligent agent avatars that can be “on duty” 24/7, serving as guides and information repositories, making the space and activities easy to find and even personalized to the visitor’s needs. We report the results of usability testing with an in-world veterans’ group. Tests comparing soldiers who use this space as part of their reintegration regimen to those who do not are being scheduled as part of the Army’s Warriors in Transition program.

Game Innovation Conference to Feature ICT Training Aid

ICT’s Distribution Management Cognitive Trainer (DMCT) prototype will be presented in a special session at the First International IEEE Consumer Electronics Society Games Innovation Conference, Aug. 25–28, in London. The DMCT prototype is a personal computer-based training aid that aims to enhance analysis, planning, and decision-making for United States Army logistical planners.

The DMCT reinforces understanding of the US Army distribution management process and aids in the development of strategies for best exploiting the capabilities of logistics management systems, including the US Army’s Battle Command Sustainment Support System (BCS3) – the field-recognized logistics command and control tool. The Auto-Tutor provides detailed, context-specific feedback that varies according to difficulty level, and the Post-Exercise Review (PXR) offers task-by-task analysis of the Soldier’s performance. The tool can be used in a standalone capacity or in a classroom setting.

The DMCT was made possible through the collaborative efforts of the US Army Product Manager for BCS3, the US Army Research Development and Engineering Command Simulation and Training Technology Center (RDECOM STTC), ICT, and the professional video gaming companies of Quicksilver Software Inc. and Stranger Entertainment.

The session includes a range of presentations, tutorials and demonstrations on the following topics:
• Primer on Cognitive Task Analysis
• Live Software Demonstration
• Behind the Scenes

Visit the conference website.

Kallirroi Georgila: “Using Integer Linear Programming for Detecting Speech Disfluencies”

We present a novel two-stage technique for detecting speech disfluencies based on Integer Linear Programming (ILP). In the first stage we use state-of-the-art models for speech disfluency detection, in particular hidden-event language models, maximum entropy models and conditional random fields. During testing, each model proposes possible disfluency labels, which are then assessed in the presence of global constraints using ILP. Our experimental results show that by using ILP we can significantly improve the performance of our models with negligible cost in processing time. The less training data available, the larger the improvement due to ILP.
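The second stage of such a pipeline can be illustrated with a toy sketch: each token gets per-label scores (e.g., averaged from several sequence models), and a global constraint restricts which label sequences are admissible. This is not the authors’ system; the scores and the constraint (no long runs of disfluent tokens) are hypothetical, and a brute-force search stands in for a real ILP solver, which would optimize the same objective.

```python
from itertools import product

def combine_with_constraints(token_scores, max_run=2):
    """Choose a 0/1 disfluency label per token that maximizes the summed
    model scores, subject to a global constraint: no run of more than
    `max_run` consecutive disfluent tokens. A real system would pass the
    same objective and constraints to an ILP solver; brute force is used
    here for illustration on short utterances.

    token_scores: list of (score_fluent, score_disfluent) per token.
    """
    n = len(token_scores)
    best, best_labels = float("-inf"), None
    for labels in product((0, 1), repeat=n):
        # Global constraint: reject sequences with overly long disfluent runs.
        run = longest = 0
        for lab in labels:
            run = run + 1 if lab else 0
            longest = max(longest, run)
        if longest > max_run:
            continue
        score = sum(token_scores[i][lab] for i, lab in enumerate(labels))
        if score > best:
            best, best_labels = score, labels
    return list(best_labels)

# Hypothetical per-token scores from the first-stage models.
scores = [(0.9, 0.1), (0.2, 0.8), (0.3, 0.7), (0.1, 0.9), (0.4, 0.6)]
print(combine_with_constraints(scores, max_run=2))  # → [0, 1, 0, 1, 1]
```

Note that the unconstrained best labeling (four disfluent tokens in a row) is rejected by the global constraint, so the cheapest token to relabel is flipped instead.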

Kenji Sagae, David DeVault, David Traum, Gwen Christian: “Towards natural language understanding of partial speech recognition results in dialog systems”

We investigate natural language understanding of partial speech recognition results to equip a dialogue system with incremental language processing capabilities for more realistic human-computer conversations. We show that relatively high accuracy can be achieved in understanding of spontaneous utterances before utterances are completed.

ICT Research Featured in Films Promoting Science and Math to Middle Schoolers

Many ICT projects are highlighted on the web in segments on LabTV, a project of the National Defense Education Fund. LabTV’s aim is to bring science, math, engineering and technology to life for young people by showcasing DoD-sponsored work being done across the country. LabTV’s crew spent three days filming at ICT and have produced a series of episodes about several of our projects, including SGT Star, ICT’s virtual human work, FlatWorld, and virtual reality therapy for PTSD. You can see the segments at the link below. Check back for upcoming stories about the ICT Graphics Lab and our games for learning.

Randall Hill Discusses ICT Innovations

The USC Chronicle’s “Capital Connections” section featured a recent trip to Washington, D.C. by ICT Executive Director Randall W. Hill, Jr.  Hill met with congressional staff in the offices of Sens. Barbara Boxer (D-Calif.), Dianne Feinstein (D-Calif.), Jack Reed (D-R.I.), Christopher Bond (R-Mo.), James Inhofe (R-Okla.), Tom Cole (R-Okla.) and the House Armed Services Committee about defense research on May 4 and 5. He discussed expanding the use of several ICT-developed applications, including virtual reality therapy for treating post-traumatic stress disorder, virtual patients designed to help train clinicians to identify mental health disorders and interactive programs to improve leadership and critical thinking skills for military personnel. “Leaders in Congress are interested in seeing discoveries in the computer and social sciences being effectively transitioned to have a positive impact,” Hill said. “It is very satisfying to be able to share with them the many ways interdisciplinary research coming out of USC is being used to benefit not just our servicemen and women but society at large.”

PBS Frontline Hosts Online Discussion of Military and Technology with ICT’s Skip Rizzo

As part of its continuing coverage of the ways technology shapes how we live today, the team from PBS Frontline’s Digital Nation today hosted a live online discussion about the U.S. military’s applications of modern digital technology.  “Digital Warriors: Our 21st Century Military” featured a panel of three experts: Lt. Gen. Robert J. Elder, commander of the 8th Air Force, Air Combat Command, at Barksdale Air Force Base, Louisiana, where he served as the first commander of Air Network Operations and led the development of the cyberspace mission for the Air Force; Christian Lowe, award-winning military journalist and current editor of DefenseTech; and Dr. Albert “Skip” Rizzo, research scientist and professor at ICT and developer of the Virtual Iraq treatment for combat-related post-traumatic stress disorder (PTSD). The forum covered a wide range of issues, including cyber-security, drones, virtual reality training, virtual reality medical treatment, and the “Soldier 2.0”.

Sin-hwa Kang, Jonathan Gratch: “Associations between Interactants’ Personality Traits and Their Feelings of Rapport in Interactions”

This study explored associations between the personality traits of human subjects and their feelings of rapport when they interacted with either a virtual agent or a real human. The animated graphical agent, the Responsive Agent, responded to real human subjects’ storytelling behavior, using appropriately timed nonverbal (contingent) feedback. Interactants’ personality factors of Extroversion, Agreeableness, Conscientiousness, and Openness were related to three self-reported components of rapport: Positivity, Attentiveness, and Coordination; and to three behavioral indications of rapport: Meaningful Words, Disfluency, and Prolonged Words. The results revealed that subjects who scored higher on Conscientiousness reported higher rapport when interacting with another human, while subjects who scored higher on Agreeableness reported higher rapport while interacting with a virtual agent. The effects of these personality variables differed significantly across the two experimental groups. The conclusions provide a step toward further development of rapport theory that contributes to enhancing the interactional fidelity of virtual humans.

Andrew Gordon, Reid Swanson: “Identifying Personal Stories in Millions of Weblog Entries”

Stories of people’s everyday experiences have long been the focus of psychology and sociology research, and are increasingly being used in innovative knowledge-based technologies. However, continued research in this area is hindered by the lack of standard corpora of sufficient size and by the costs of creating one from scratch. In this paper, we describe our efforts to develop a standard corpus for researchers in this area by identifying personal stories in the tens of millions of blog posts in the ICWSM 2009 Spinn3r Dataset. Our approach was to employ statistical text classification technology on the content of blog entries, which required the creation of a sufficiently large set of annotated training examples. We describe the development and evaluation of this classification technology and how it was applied to the dataset in order to identify nearly a million personal stories.
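The statistical text classification step can be pictured with a minimal sketch. This is not the authors’ classifier; the tiny training set and the hand-rolled unigram Naive Bayes model (with add-one smoothing) are illustrative stand-ins for a model trained on a large annotated sample of blog entries.

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (text, label) pairs. Returns word counts, total word
    counts, and label priors for a unigram Naive Bayes classifier."""
    counts, totals, priors = {}, Counter(), Counter()
    for text, label in docs:
        priors[label] += 1
        words = text.lower().split()
        counts.setdefault(label, Counter()).update(words)
        totals[label] += len(words)
    return counts, totals, priors

def classify(text, counts, totals, priors, alpha=1.0):
    """Score each label by log prior plus smoothed log likelihoods."""
    vocab = {w for c in counts.values() for w in c}
    best_label, best_lp = None, float("-inf")
    for label in priors:
        lp = math.log(priors[label] / sum(priors.values()))
        for w in text.lower().split():
            # Add-one (Laplace) smoothing over the shared vocabulary.
            lp += math.log((counts[label][w] + alpha) /
                           (totals[label] + alpha * len(vocab)))
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

# Hypothetical training examples standing in for annotated blog entries.
train = [("yesterday i went to the store and met my friend", "story"),
         ("i remember the day we drove to the coast", "story"),
         ("ten tips for better search engine rankings", "nonstory"),
         ("review of the new phone camera features", "nonstory")]
model = train_nb(train)
print(classify("the day i went to the coast with my friend", *model))  # → story
```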

Andrew Gordon, Reid Swanson: “Open Domain Collaborative Storytelling With Say Anything”

In this demonstration we present Say Anything, an open domain interactive storytelling application where an author’s original story sentences are used to select subsequent sentences from a corpus of millions of stories extracted from Internet weblogs.

Jonathan Ito, David Pynadath, Stacy Marsella: “Self-Deceptive Decision Making: Normative and Descriptive Insights”

Computational modeling of human belief maintenance and decision-making processes has become increasingly important for a wide range of applications. We present a framework for modeling the psychological phenomenon of self-deception in a decision-theoretic framework. Specifically, we model the self-deceptive behavior of wishful thinking as a psychological bias towards the belief in a particularly desirable situation or state. By leveraging the structures and axioms of expected utility (EU) we are able to operationalize both the determination and the application of the desired belief state with respect to the decision-making process of expected utility maximization. While we categorize our framework as a descriptive model of human decision making, we show that when specific errors are present, the realized expected utility of an action biased by wishful thinking can exceed that of an action motivated purely by the maximization of expected utility. Finally, in order to provide a descriptive characterization of our framework, we present a discussion of wishful thinking with respect to the Certainty Effect and the Allais Paradox, two specific documented inconsistencies of human behavior. In this discussion we show that our framework has the descriptive flexibility needed to account for both the Certainty Effect and the Allais Paradox.
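The core idea can be sketched in a few lines: a standard expected-utility maximizer chooses against a biased agent whose beliefs have been shifted toward a desired state. The scenario, utilities, and the simple mass-shifting bias operator below are hypothetical illustrations, not the paper’s formal operationalization.

```python
def expected_utility(action, beliefs, utility):
    """Standard expected utility: sum over states of P(state) * U(action, state)."""
    return sum(p * utility[action][s] for s, p in beliefs.items())

def wishful_beliefs(beliefs, desired, bias=0.3):
    """Shift a fraction `bias` of probability mass toward the desired
    state -- an illustrative stand-in for a wishful-thinking bias."""
    shifted = {s: p * (1 - bias) for s, p in beliefs.items()}
    shifted[desired] += bias
    return shifted

# Hypothetical decision problem: picnic vs. stay in, under uncertain weather.
beliefs = {"rain": 0.6, "sun": 0.4}
utility = {"picnic":  {"rain": -5, "sun": 10},
           "stay_in": {"rain": 2,  "sun": 2}}

rational = max(utility, key=lambda a: expected_utility(a, beliefs, utility))
biased_b = wishful_beliefs(beliefs, desired="sun", bias=0.5)
wishful = max(utility, key=lambda a: expected_utility(a, biased_b, utility))
print(rational, wishful)  # → stay_in picnic
```

The wishful agent picnics because it overweights the sunny state it desires; the paper’s point is that under certain modeling errors such a biased choice can actually realize higher utility than the unbiased one.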

Richard Clark, Daniel Schwartz: “Adaptability Research Workshop”

The workshop agenda was designed to begin the process of overcoming the bias implicit in past learning and adaptability research and practice. Of the ten academic researchers invited, half represented a “more constructivist” point of view and half a “more direct instruction” point of view. Military researchers were asked to help their academic colleagues understand the many future challenges of military research, development, training and education. In order to focus the discussions, Dr. Rose Hanson of PDRI and GEN Waldo Freeman (R) of the Institute for Defense Analyses agreed to allow the participants to focus their ideas on a proof-of-concept study requested by the Assistant Secretary of Defense and currently underway (DiGiovianni, 2008). Tasks to be investigated in that study included the capability of military leaders to manage cross-cultural encounters and negotiations. It was reasoned that these specific tasks would help anchor the discussions and the discussants. All participants were asked to collaborate in order to summarize what we know from past research and on the design of future adaptability research and development projects that could be used to support adaptability in cross-cultural encounters and negotiations.
The workshop was organized around three questions:
1) How will we know adaptability when we encounter it?
2) What training and development strategies increase adaptable performance?
3) What research designs are best for investigating the most cost-effective adaptability training methods?
The 25 workshop participants were assigned to five breakout groups, each consisting of two researchers who supported different theories and three military researchers. After a period of discussion about one of the questions, each group brought back its solution and presented it to the rest of the participants.

Jina Lee, Stacy Marsella: “Learning a Model of Speaker Head Nods using Gesture Corpora”

During face-to-face conversation, the speaker’s head is continually in motion. These movements serve a variety of important communicative functions, and may also be influenced by our emotions. The goal of this work is to build a domain-independent model of a speaker’s head movements and investigate the effect of using affective information during the learning process. Once the model is learned, it can later be used to generate head movements for virtual agents. In this paper, we describe our machine-learning approach to predicting a speaker’s head nods using an annotated corpus of face-to-face human interaction and emotion labels generated by an affect recognition model. We describe the feature selection process, the training process, and a comparison of the results of the learned models under varying conditions. The results show that using affective information helps predict head nods better than when no affective information is used.

Congresswoman Roybal-Allard Christens New Military Social Work Program

The USC School of Social Work and the USC Institute for Creative Technologies held a reception to celebrate the school’s new military social work and veteran services program, the first of its kind at a research university, and to recognize Congresswoman Lucille Roybal-Allard for helping secure $3.2 million in federal funding for its development.

“We are very grateful for the opportunity to participate in this one-of-a-kind program. It’s going to be filling a void,” said Randall W. Hill, Jr., executive director of ICT.  “By securing this funding, you are not only bringing attention to the issues that are facing our service members, but you are also helping to train a generation of social workers and specialists who are going to be able to treat them as they return.”

The School of Social Work is collaborating with ICT to use some of its immersive technologies to train social workers and treat patients. ICT is working with the U.S. Army and has developed a virtual reality exposure therapy to treat soldiers and veterans suffering from post-traumatic stress disorder.  ICT research scientist Skip Rizzo was on hand to demonstrate the therapy.

Read about the kick-off event.

ICT Featured in Wall Street Journal Story about Creating Realistic Digital Characters

An article describing advances in and efforts to create convincing computer-animated characters featured the face scanning work of Paul Debevec and the ICT Graphics Lab. The article noted that the lab’s Light Stage technology was used to scan a mask of Brad Pitt in order to provide accurate lighting data for creating a believable character in the film, “The Curious Case of Benjamin Button”. The story also noted the collaboration between ICT and Image Metrics which led to the creation of the lifelike digital Emily.

Read the story.

Bruce Lamond, Pieter Peers, Abhijeet Ghosh, Paul Debevec: “Image-based Separation of Diffuse and Specular Reflections using Environmental Structured Illumination”

We present an image-based method for separating diffuse and specular reflections using environmental structured illumination. Two types of structured illumination are discussed: phase-shifted sine wave patterns, and phase-shifted binary stripe patterns. In both cases the low-pass filtering nature of diffuse reflections is utilized to separate the reflection components. We illustrate our method on a wide range of example scenes and applications.
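The intuition behind the sine-pattern case can be sketched per pixel: because the diffuse lobe low-pass filters the illumination, it sees only the pattern’s mean, while the sharp specular lobe tracks the pattern as it is phase-shifted. The idealized model and synthetic values below are illustrative assumptions, not the paper’s full method.

```python
import math

def separate(intensities, phases):
    """Separate diffuse and specular components from observations under
    phase-shifted sine illumination. Idealized per-pixel model:
        I_k = D + S * (0.5 + 0.5 * cos(theta + phi_k))
    where D (diffuse) is constant across phases and the specular term
    follows the pattern at some unknown pattern phase theta. With three
    or more evenly spaced phase shifts, a sinusoid fit recovers (D, S).
    """
    n = len(intensities)
    mean = sum(intensities) / n                      # equals D + S/2
    a = 2.0 / n * sum(i * math.cos(p) for i, p in zip(intensities, phases))
    b = 2.0 / n * sum(i * math.sin(p) for i, p in zip(intensities, phases))
    amp = math.hypot(a, b)                           # equals S/2
    specular = 2.0 * amp
    diffuse = mean - amp
    return diffuse, specular

# Synthetic pixel: diffuse 0.4, specular 0.3, unknown pattern phase 1.1.
phases = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]
obs = [0.4 + 0.3 * (0.5 + 0.5 * math.cos(1.1 + p)) for p in phases]
d, s = separate(obs, phases)
print(round(d, 3), round(s, 3))  # → 0.4 0.3
```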

ICT’s Abhijeet Ghosh Named Research Assistant Professor

The Department of Computer Science at the USC Viterbi School of Engineering has appointed Abhijeet Ghosh as a research assistant professor. Ghosh is a member of the ICT Graphics Lab team, where he specializes in realistic modeling of scenes and people so that they can be successfully composited into virtual spaces.

Executive Director Randall W. Hill, Jr. Interviewed on DoD’s BlogTalk Radio

Dr. Randall Hill participated in a bloggers’ roundtable where he spoke about the mission and accomplishments of ICT, with a particular focus on how ICT works with the entertainment and game industries to create cutting-edge technologies for soldiers.

Listen to the interview.

USC to Award Honorary Degree to Early ICT Champion

Anita K. Jones, a professor, computer scientist and former federal director of defense research and engineering, is being recognized for her insightful leadership that paved the way for the founding of USC’s Institute for Creative Technologies. Honorary degrees represent the highest award the university confers. California Governor Arnold Schwarzenegger is also among those receiving the honor this year.

USC’s Honorary Degree Page.

Anita Jones’ website at the University of Virginia.

ICT Research Highlighted at Inaugural TEDxUSC Conference

Paul Debevec, ICT’s associate director for graphics research, was one of the featured presenters at the first-ever TEDx event. The TED conference began 25 years ago to bring together some of the world’s greatest thinkers.  USC recently piloted the TEDx program – a new program of independently organized events TED is rolling out to select organizations around the world. Debevec spoke to the audience of more than 1,000 people about his work in creating photoreal digital characters. In addition, event attendees experienced live demonstrations of ICT’s Sgt. Star interactive virtual human and virtual therapies for aiding PTSD and motor rehabilitation at the conference reception.

Read more about the conference.

ICT’s Paul Debevec Honored with Elan Visionary Award

The ELAN Awards honor achievement in the visual effects, animation and video game industries. Paul Debevec, ICT’s associate director for graphics research, is this year’s recipient of the peer-selected Visionary Award for VFX. Debevec is best known for his pioneering work in high dynamic range imaging and image-based modeling and rendering. He will accept his award at the ceremony on April 25, 2009, in Vancouver.

Read the announcement.

Visit the awards’ website.

Sin-hwa Kang, Jonathan Gratch, James H. Watt: “The Effect of Affective Iconic Realism on Anonymous Interactants’ Self-Disclosure”

In this paper, we describe progress in research designed to explore the effect of the combination of avatars’ visual fidelity and users’ anticipated future interaction on self-disclosure in emotionally engaged and synchronous communication. We particularly aim at exploring ways to allow users’ self-disclosure while securing their anonymity, even with minimal cues of a virtual human, when users anticipate future interaction. The research investigates users’ self-disclosure through measuring their behaviors and feelings of social presence in several dimensions.

Belinda Lange: “Games for rehabilitation: virtual reality, robots and more”

ICT Scent Collar Wins Patent

Full sensory immersive worlds are the holy grail of games and simulations. The newly-patented Scent Collar is a personal scent release device for use by an individual in a virtual environment simulation and can provide several scents for use during the experience. Each unique scent is contained in an individual cartridge embedded in a wearable collar.

According to inventor Jacki Morie, an ICT project leader who both designed the collar and oversaw its development, one of the problems this invention was designed to solve was that earlier scent release systems filled an entire room with a smell that was then difficult to remove effectively.

“By keeping the amounts to a minimum, and directing the scents directly towards the nose, there is no problem with lingering scents,” she said.

The collar has been used in ICT’s virtual reality environments to increase the emotional realism, and in an experiment designed to test the effects of scent and game play on memory of a virtual environment.

The prototype was fabricated by Anthrotronix, Inc. The co-inventor is Donat-Pierre Luigi.

Virtual Reality Therapy Featured on the Huffington Post

A post by blogger Sam Spear features ICT’s Skip Rizzo and the virtual reality treatment for post-traumatic stress disorder. The story notes that in this project the clinician has control over every aspect of the environment, changing scenarios, sounds, weather and the intensity of the experience. “Our aim here is not to re-traumatize the person, but rather to re-expose them to relevant traumatic events in a graduated way that they can handle,” Rizzo said.

Read the post.

Steve Solomon, Matthew Jensen Hays, Grace Chen, Milton Rosenberg: “Evaluating a Framework for Representing Cultural Norms for Human Behavior Models”

Cultural awareness training is seen as a necessity in the military, in international business, and in diplomacy. The Culturally-Affected Behavior project has defined a framework for encoding cultural norms and values that facilitates the creation of human behavior models having cultural knowledge separate from domain knowledge. We evaluated a simulation based on the framework as a tool for learning cultural norms. Users were provided with a worked example of a meeting with a first virtual character as a training session, and were subsequently able to distinguish appropriate socio-cultural actions from inappropriate actions in a meeting with a second character having the same culture, and in a judgment survey.

ICT’s Stacy Marsella Predicts a Future with Virtual Humans Indistinguishable from Real People

LiveScience highlighted research by Stacy Marsella, a research associate professor at ICT.  “I think eventually we’ll be able to convince people that they’re interacting with a human,” Marsella told LiveScience, but he added that he couldn’t predict how long that might take.

Marsella has been helping the U.S. Army develop artificial intelligence that can power virtual training simulations, the story noted. Such virtual characters need to have the right facial expressions and body movements to allow human trainees to feel comfortable interacting with them. “It turns out that, as human beings, we’ve developed these incredible capacities to interact with each other using language and visual, nonverbal behavior,” Marsella said. “Without nonverbal behavior, it doesn’t look good—it looks sick or demented.”

According to the story, creating an AI that can carry on a sophisticated conversation with humans remains difficult. The U.S. Army wants such AI to help train soldiers to deal with complex social situations, such as mediating among tribal elders in Afghanistan.  “Developing a virtual human is the greatest challenge of this century,” said John Parmentola, U.S. Army director for research and laboratory management.

Marsella and other researchers working with Parmentola have even floated the idea of someday testing their AI in online video games, where thousands of human-controlled characters already run around. That would essentially turn games such as “World of Warcraft” into a huge so-called Turing Test that would determine whether human players could tell that they were chatting with AI.

Read the article.

ICT Director of Technology Honored by AI Community

Bill Swartout, ICT’s director of technology, has been selected as the seventh recipient of the Association for the Advancement of Artificial Intelligence Robert S. Engelmore Memorial Lecture Award, sponsored by IAAI and AI Magazine. The award is given to people who have shown extraordinary service to AAAI and the AI community and will be presented at IAAI-09 in Pasadena.

PBS Frontline Films at ICT

A crew from Ark Media recently visited ICT and filmed many of our researchers for an upcoming PBS Frontline documentary currently titled “Digital Nation”. The program, set to air in the fall, is a far-reaching exploration into what it means to be human in the digital age. The documentary’s website is set to premiere on March 24.

Frontline’s Digital Nation page

Reid Swanson, Andrew Gordon: “A Comparison of Retrieval Models for Open Domain Story Generation”

In this paper we describe the architecture of an interactive story generation system where a human and computer each take turns writing sentences of an emerging narrative. Each turn begins with the user adding a sentence to the story, where the computer responds with a sentence of its own that continues what has been written so far. Rather than generating the next sentence from scratch, the computer selects the next sentence from a corpus of tens of millions of narrative sentences extracted from Internet weblogs. We compare five different retrieval methods for selecting the most appropriate sentence, and present the results of a user study to determine which of these models produces stories with the highest coherence and overall value.
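The simplest of the retrieval models compared in such systems can be sketched as bag-of-words cosine similarity: find the corpus sentence most similar to the user's turn, then return the sentence that followed it in its source story. The tiny corpus and the specific similarity function here are hypothetical stand-ins for the millions of weblog sentences and the five models evaluated in the paper.

```python
import math
from collections import Counter

def retrieve_next(user_sentence, corpus):
    """Pick the corpus sentence most similar to the user's sentence by
    bag-of-words cosine similarity, and return the sentence that followed
    it in its source story as the computer's contribution.

    corpus: list of (sentence, next_sentence_in_source_story) pairs."""
    def vec(text):
        return Counter(text.lower().split())

    def cosine(u, v):
        dot = sum(u[w] * v[w] for w in u)
        nu = math.sqrt(sum(c * c for c in u.values()))
        nv = math.sqrt(sum(c * c for c in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    q = vec(user_sentence)
    best = max(corpus, key=lambda pair: cosine(q, vec(pair[0])))
    return best[1]

# Hypothetical mini-corpus of (sentence, follower) pairs from stories.
corpus = [("we packed the car for the long drive",
           "the road stretched out ahead of us"),
          ("dinner was finally ready",
           "everyone rushed to the table")]
print(retrieve_next("i packed a bag for the drive", corpus))
```

More sophisticated retrieval models would weight terms (e.g., tf-idf) or use discourse context beyond the single preceding sentence, which is part of what the paper's comparison examines.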

Bradley Newman, Matt Liewer, Skip Rizzo, Thomas Parsons, Anton Treskunov, Luke Yeh, Jarrell Pair, Greg Reger, Barbara O. Rothbaum, JoAnn Difede, Josh Spitalnick, Robert N. McLay: “A Virtual Iraq System for the Treatment of Combat-Related Posttraumatic Stress Disorder”

Posttraumatic Stress Disorder (PTSD) is reported to be caused by traumatic events that are outside the range of usual human experience including (but not limited to) military combat, violent personal assault, being kidnapped or taken hostage and terrorist attacks. Initial data suggests that at least 1 out of 5 Iraq War veterans are exhibiting symptoms of depression, anxiety and PTSD. Virtual Reality (VR) delivered exposure therapy for PTSD has been previously used with reports of positive outcomes. The current paper is a follow-up to a paper presented at IEEE VR2006 and will present the rationale and description of a VR PTSD therapy application (Virtual Iraq) and present the findings from its use with active duty service members since the VR2006 presentation. Virtual Iraq consists of a series of customizable virtual scenarios designed to represent relevant Middle Eastern VR contexts for exposure therapy, including a city and desert road convoy environment. User-centered design feedback needed to iteratively evolve the system was gathered from returning Iraq War veterans in the USA and from a system deployed in Iraq and tested by an Army Combat Stress Control Team. Results from an open clinical trial using Virtual Iraq at the Naval Medical Center-San Diego with 20 treatment completers indicate that 16 no longer met PTSD diagnostic criteria at post-treatment, with only one not maintaining treatment gains at 3 month follow-up.

ICT Scientist Earns Appointment in USC’s Computer Science Department

ICT’s Louis-Philippe Morency has been appointed as a research assistant professor in the computer science department of the USC Viterbi School of Engineering. The goal of Morency’s research is to recognize, model and predict human nonverbal behavior in the context of interaction with virtual humans, robots and other human participants.

Army News Service Covers ICT

A story about ICT in the Army News Service discussed how ICT programs are being used to develop better, more immersive training experiences. ICT Executive Director Randall W. Hill, Jr. was interviewed for the story. “We believe the key here is engagement,” he said. “That’s where the entertainment industry comes in and that’s where we are trying to bring that capability in—the technologies that we are developing to support interactive digital media for the purpose of training and for actually a lot of other uses too.”

Read the story.

Science Springs to Life for Students

Researchers at the USC Institute for Creative Technologies show how they use science to create special effects, virtual humans and much more.

Don’t ignore your artistic side. That was the message delivered to future scientists from Francisco Bravo Medical Magnet High School during their visit to the USC Institute for Creative Technologies.

A group of nearly 30 students from Bravo’s newly established Engineering for Health Academy spent a recent morning speaking with researchers at the institute as part of a program funded by the California Department of Education to expose high schoolers to career paths based in science, technology, engineering and math.

Read the full story.

PBS’s SoCal Connected Covers Virtual Therapy and Patients at ICT

A crew from PBS affiliate KCET came to ICT and interviewed Skip Rizzo about his virtual reality exposure therapy for treating PTSD. In the segment, he discusses the growing issue of sexual assault against female service members and introduces ICT virtual patient Sgt. Justina, who is being developed to help train clinicians who may see patients who have been victims of sexual assault.

Watch the segment.

Investor’s Business Daily Features ICT’s 3-D Videoconferencing Technology

A story about the promise and progress of 3-D videoconferencing featured the recently unveiled 3-D videoconferencing system developed by Paul Debevec and the ICT Graphics Lab.  “We’re trying to get at that wonderful, magical thing that happens when people are together in person, without them having to actually be there,” said Debevec. Debevec and colleagues demonstrated the system in December at the Army Science Conference in Orlando, Fla. Visitors viewing the USC exhibit were able to converse with a 3-D image while the real person sat out of sight and earshot. The story noted that Debevec has also worked with face-scanning technologies and photorealistic visual effects, drawing from both areas for his team’s 3-D system. “We’re doing the real thing as best we can,” he said, “essentially re-creating people out of thin air.”

NPR Features ICT’s Paul Debevec and Light Stage Technology

NPR’s Laura Sydell interviewed Paul Debevec of the ICT Graphics Lab in a story about the technologies behind creating a convincing digital Brad Pitt for the movie “The Curious Case of Benjamin Button.” The story reported Debevec helped develop some of the technology that made Benjamin Button possible and said he already knows of projects in the works that will move the technology forward. “The genie is out of the bottle,” he said. “It’s just a matter of a couple more film projects coming through that really refine the technologies.”

The web version of the story has pictures of Debevec and Light Stage 6.

Read the whole story.

Jacki Morie, Jeff Wirth: “StoryBox Workshop”

The StoryBox Workshop was held at ICT’s McConnell Facility, with Jeff Wirth, designer of the StoryBox format, several interactors from Orlando and Los Angeles, and a participant group of approximately 20 people. Jeff started the day with an introduction to interactive improvisation and explained the specific differences between standard improvisation and the interactions designed for the StoryBox. In the second half of the day, Jeff and the interactors led several experiences in the StoryBox with participants in the role of the spectactor. Each session was followed by a lively discussion. On day two the group was trained in specific interactor improv techniques in the morning session, after which everyone, in small groups, brainstormed story questions that could be applied to the StoryBox approach. These ideas were then implemented in the StoryBox, again followed by a very lively set of discussions. Following the workshop, a social networking site was established for posting videos and continued discussion among the group members.

ICT’s PTSD Virtual Therapy Featured on NBC News Chicago

Chicago NBC affiliate WMAQ-TV featured the virtual reality technology used to treat veterans with post-traumatic stress disorder, developed by ICT. The technology is now being used in a Chicago area hospital, the story reported.

Read the story.

Jacki Morie: “Re-Entry: Online virtual worlds as a healing space for veterans”

We describe a project designed to use the power of online virtual worlds as a place of camaraderie and healing for returning United States military veterans – a virtual space that can help them deal with problems related to their time of service and also assist in their reintegration into society. This veterans’ space is being built in Second Life, a popular immersive world, in consultation with medical experts and psychologists, with several types of both social and healing activities planned. In addition, we address several barrier issues with virtual worlds, including the lack of guides or helpers to ensure the participants have a quality experience. To solve some of these issues, we are porting the advanced intelligence of ICT’s virtual human characters to avatars in Second Life, so they will be able to greet the veterans, converse with them, guide them to relevant activities, and serve as informational agents for healing options. In this way such “avatar agents” will serve as autonomous intelligent characters that bring maximum engagement and functionality to the veterans’ space. This part of the effort expands online worlds beyond their existing capabilities, as currently a human being must operate each avatar in the virtual world; few autonomous characters exist. As this project progresses we will engage in an iterative design process with veteran participants who will be able to advise us, along with the medical community, on what efforts are well suited to, and most effective within, the virtual world.