Army Office of Video Games

The U.S. military has been using games for decades to train its troops. Now, for the first time, the Army has set up a project office just for building and deploying games.

Read the full article.

ICT Paper Wins Award

A paper authored by ICT researchers was honored with a best paper award for the Emerging Concepts and Innovative Technologies subcommittee and nominated for best paper overall at the Interservice/Industry Training, Simulation and Education Conference (I/ITSEC) in Orlando, Fla. The paper, “Building Interactive Virtual Humans for Training Environments,” draws on years of research and experience from ICT’s virtual humans group and addresses the research issues, technology and value of developing virtual humans for training environments. Additionally, it discusses the problems, issues and solutions involved in building such systems and the currently implemented applications using virtual characters for negotiation training, virtual patients and tactical questioning. Patrick Kenny, Arno Hartholt, Jonathan Gratch, William Swartout, David Traum, Stacy Marsella and Diane Piepol authored the paper. Patrick Kenny presented it at the conference.

Visit the I/ITSEC website.

Jacki Morie, Tracy Fullerton, Celia Pearce, Janine Fron: “A Game of One’s Own: Towards a New Gendered Poetics of Digital Space”

The techno-fetishism of computer game culture has led to a predominantly male sensibility towards the construction of space in digital entertainment. Real-time strategy games conceive of space as a domain to be conquered; first-person shooters create labyrinthine battlefields in which space becomes a context for combat. Massively multiplayer games offer the opportunity for non-linear exploration, but emphasize linear achievement within a combat-based narrative. In this paper, we argue for a new gendered, regendered and perhaps degendered poetics of game space, rethinking ways in which space is conceptualized and represented as a domain for play. We argue for a more egalitarian virtual playground that acknowledges and embraces a wider range of spatial and cognitive models, referencing literature, philosophy, fine art and non-digital games for inspiration. Reflecting on a variety of sources, beginning with Virginia Woolf’s A Room of One’s Own and Bachelard’s Poetics of Space, the feminist writings of Charlotte Perkins Gilman, Simone de Beauvoir, Hélène Cixous, Judith Butler, Janet Murray, and including contemporary game writers such as Lizbeth Klastrup, Mary Flanagan, Maia Engeli, and T.L. Taylor, we will argue for a new gendered poetics of game space, proposing an inclusionary approach that integrates feminine conceptions of space into the gaming landscape.

Cyrus Wilson: “From a New Perspective”

A microscope allows us to capture scenes that are quite different from those we experience in our daily lives. At the limit of the smallest objects that can be observed with visible light, different rules dominate image formation. I will discuss my work exploring what types of information can be obtained from such images. Specifically, I will present a study of the movement of animal cells, in which I was able to uncover new details about the underlying molecular processes—which are themselves too small to see even under the microscope—by capturing scenes in which the effects of the processes are manifested over larger, accessible scales. (No prior biological background will be required.) Then I will share some preliminary work, inspired by challenges in biological research but which considers images in more abstract and general terms, which foreshadows my aspiration to step outside the biological and microscopic contexts.

Reid Swanson: “First person narrative story extraction and retrieval”

Reid is a third-year computer science PhD student here at ICT, and will be giving a talk on the work he did as part of his master’s thesis in Computational Linguistics. This talk will include a good discussion of many of the technologies that went into the story management applications built as part of the ICT Story Representation & Management project.

Patrick Kenny, Arno Hartholt, Jonathan Gratch, Bill Swartout, David Traum, Diane Piepol, Stacy Marsella: “Building Interactive Virtual Humans for Training Environments”

There is a great need in the Joint Forces for human-to-human interpersonal training in skills such as negotiation, leadership, interviewing and cultural awareness. Virtual environments can be incredible training tools if used properly and applied to the right training problem. Virtual environments have already been very successful in training Warfighters how to operate vehicles and weapons systems. At the Institute for Creative Technologies (ICT) we have been exploring a new question: can virtual environments be used to train Warfighters in interpersonal skills such as negotiation, tactical questioning and leadership that are so critical for success in the contemporary operating environment? Using embodied conversational agents to create this type of training system has been one of the goals of the Virtual Humans project at the institute. ICT has a great deal of experience building complex, integrated and immersive training systems that address the human-factor needs of training experiences. This paper addresses the research, technology and value of developing virtual humans for training environments. This research includes speech recognition, natural language understanding and generation, dialogue management, cognitive agents, emotion modeling, question-response management, speech generation and non-verbal behavior. Also addressed is the diverse set of training environments we have developed for the system, from single laptops to multi-computer immersive displays to integrated real and virtual environments. The paper also discusses the problems, issues and solutions we encountered while building these systems, and recounts the subject testing we have performed in these environments and the results we have obtained from users. Finally, the future of this type of virtual human technology and its training applications is discussed.

ICT Researcher Named to Leadership Post

Jonathan Gratch has been named president-elect of the newly formed Human-Machine Interaction Network on Emotion (HUMAINE) Association. The group, founded in June 2007, is a worldwide professional association for researchers in emotion-oriented and affective computing. Gratch is associate director for virtual humans research at ICT.

Visit the Humaine Association website.

Jonathan Gratch: “Research Workshop: Transactional Emotions”

The Transactional Emotions Workshop brought together psychologists and technologists interested in “transactional emotions,” meaning those emotions that arise from interaction between individuals. Discussion topics included theoretical constructs, empirical findings and technology that can assist in recognizing, classifying and synthesizing emotional behavior. Some of the participants are already involved in collaborations with ICT (Dr. Duncan, Dr. Carnevale, Dr. Movellan, Dr. Narayanan, Dr. Pynadath), and these collaborations were strengthened as a consequence of the meeting. The knowledge gained during the meeting will directly benefit ICT researchers involved with virtual human and emotion research. The meeting also helped raise ICT’s profile among a diverse and distinguished collection of researchers.

There are plans to publish a collection of papers from the meeting, either as a special issue of a journal or as a book.

Army’s Newest Recruit: A Fully Interactive Virtual Human

Sgt. Star never sleeps. He doesn’t stop to eat or drink either. It is not just that the strapping 6-foot-tall soldier is devoted to his job – traveling the country with Army recruiters answering questions about being in the Army.

Sgt. Star is always “on” because he is completely computer generated, a product of some of the latest advances in virtual human research and development taking place at the University of Southern California’s Institute for Creative Technologies (ICT).

Here, every nuance of human behavior, from eye rolling to toe tapping and from word choice to voice tone, is studied and incorporated using ICT’s innovative interactive character technologies.

The goal is to infuse virtual humans like Sgt. Star with real-life emotions and actions as useful as they are unique. While Sgt. Star teaches about what it is like to be a soldier, other ICT virtual characters help develop skills in leadership, negotiation and cultural awareness.

“Sgt. Star is a shining example of the potential of virtual humans to aid in teaching and training,” said Randall Hill Jr., executive director of ICT. “From classrooms to boardrooms, we see endless opportunities for using characters from make-believe worlds to help people think more clearly and critically in the real world.”

Last year, ICT researchers were asked by the United States Army Accessions Command to bring Sgt. Star to life. Until now, he has existed as a non-animated web chat personality on the GoArmy.com website. There he responds verbally and in writing to visitor-typed questions using artificial intelligence software that was developed by Next IT of Spokane, WA.

ICT’s creation, which recently debuted at the Association of the United States Army Conference in Washington, D.C., and is heading to the Future Farmers of America Conference in Indianapolis October 24–27, elevates Sgt. Star to a new level of realism.

Projected onto a semi-transparent screen, he now appears full-sized and fully formed. His chest moves with each breath; his eyebrows arch at certain questions. And ICT’s expertise in bringing Hollywood storytelling techniques to cutting-edge computer graphics led to a personality to match his good looks.

He speaks, he shrugs, he laughs and he informs.

“Creating an interactive character such as Sgt. Star is not dissimilar to bringing a traditional animated character to life,” said Diane Piepol, the ICT project director who oversaw Sgt. Star’s creation and who has a background in feature films. “We need to develop their scripted replies, their motivation and conversational gestures.”

Kim LeMasters, ICT’s creative director and a former entertainment industry writer, producer and executive, wrote a database of answers that can be matched to questions from members of the public. Now Sgt. Star is ready and willing to talk about topics including Army careers, training, education and tuition for college. He can also handle queries about the technology behind his development and explain how his creation fits in with plans for future Army training environments.

Apparently, the team that spent countless hours getting Sgt. Star ready for his current tour of duty doesn’t need sleep either. They are already hard at work crafting future generations of virtual humans, whom they envision serving as everything from museum guides to office receptionists.

“SGT Star shows that virtual human technology is ready to be moved out of the lab and into useful applications,” said Bill Swartout, ICT’s director of technology. “As the technology we have currently under development in the lab matures, we can expect to see some amazing things from the successors to SGT Star in just a few years.”

About ICT

The USC Institute for Creative Technologies brings together the best minds in cinema, science and technology to study and create interactive virtual environments. ICT is the leader in producing believable, unforgettable and accessible immersive experiences.

A unique collaboration between the entertainment industry, academia and the Department of Defense, ICT is revolutionizing learning through the development of educational and engrossing interactive digital media.

Andrew Gordon and Stacy Marsella: “Research Workshop: Theory of Mind”

The Theory of Mind Workshop brought together researchers from several relevant disciplines, including psychology, philosophy, neural science, linguistics and computer science, to discuss a range of topics including alternative theories about how people model others, the roles those models play in human social interaction, and how these theories can be computationally modeled.

This workshop yielded many benefits for the Institute for Creative Technologies and the U.S. Army. Because of the central role theory of mind plays in human social interaction, there has been growing interest in computational modeling of theory of mind. These computational models are, in particular, playing a role in improving human-computer and human-robot interaction. The Institute for Creative Technologies is a leader in research in such applications, especially the design of virtual humans that can interact with people much like people interact with each other. In addition, computational models and virtual humans are increasingly being used as methodological tools in the psychological study of human behavior. By bringing together researchers from the many fields studying theory of mind, we created a cross-fertilization of ideas and opened up new research approaches.

The workshop organizers plan to pursue numerous collaborations that resulted from workshop activities, and plan to publish an article describing interdisciplinary approaches to Theory of Mind reasoning at an upcoming international conference on Artificial Intelligence.

Gudny Jonsdottir, Jonathan Gratch, Edward Fast, Kristinn Thorisson: “Fluid Semantic Back-Channel Feedback in Dialogue: Challenges and Progress”

Participation in natural, real-time dialogue calls for behaviors supported by perception-action cycles of around 100 msec and up. Generating certain kinds of such behaviors, namely envelope feedback, has been possible since the early 90s. Real-time backchannel feedback related to the content of a dialogue has been more difficult to achieve. In this paper we describe our progress in allowing virtual humans to give rapid within-utterance, content-specific feedback in real-time dialogue. We present results from human-subject studies of content feedback, which show that content feedback to a particular phrase or word in human-human dialogue comes 560-2500 msec after the phrase’s onset, 1 second on average. We also describe a system that produces such feedback with an autonomous agent in limited topic domains, present performance data for this agent in human-agent interaction experiments, and discuss technical challenges in light of the observed human-subject data.
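
To make the reported timing window concrete, here is a minimal Python sketch (our illustration, not the authors’ system) of a loop that emits content-specific backchannel feedback only inside the 560-2500 msec window measured in the human-human data; the topic model and behaviors are invented placeholders.

    # Illustrative timing window from the human-subject data above:
    # content feedback tends to arrive 560-2500 msec after a phrase's onset.
    FEEDBACK_MIN_MS = 560
    FEEDBACK_MAX_MS = 2500

    def maybe_give_feedback(phrase_onset_ms, now_ms, phrase_words, topic_model):
        """Return a feedback behavior if we are inside the human-like timing
        window and the phrase contains content the agent can react to."""
        elapsed_ms = now_ms - phrase_onset_ms
        if not (FEEDBACK_MIN_MS <= elapsed_ms <= FEEDBACK_MAX_MS):
            return None  # too early to be content-driven, or too late to feel fluid
        for word in phrase_words:
            if word in topic_model:
                return topic_model[word]  # e.g. a nod plus a short verbal cue
        return None

    # Hypothetical usage in a tiny topic domain:
    topic_model = {"bridge": "nod + 'mm-hmm, the bridge'", "ambush": "frown"}
    print(maybe_give_feedback(0, 800, ["the", "bridge"], topic_model))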

Patrick Kenny, Thomas Parsons, Jonathan Gratch, Skip Rizzo: “Virtual Patients for Clinical Therapist Skills Training”

Virtual humans offer an exciting and powerful potential for rich interactive experiences. Fully embodied virtual humans are growing in capability, ease, and utility. As a result, they present an opportunity for expanding research into burgeoning virtual patient medical applications. In this paper we consider the ways in which one may go about building and applying virtual human technology to the virtual patient domain. Specifically we aim to show that virtual human technology may be used to help develop the interviewing and diagnostics skills of developing clinicians. Herein we proffer a description of our iterative design process and preliminary results to show that virtual patients may be a useful adjunct to psychotherapy education.

Jonathan Gratch, Ning Wang, Jillian Gerten, Edward Fast, Robin Duffy: “Creating Rapport with Virtual Agents”

Recent research has established the potential for virtual characters to establish rapport with humans through simple contingent nonverbal behaviors. We hypothesized that the contingency, not just the frequency, of positive feedback is crucial when it comes to creating rapport. The primary goal in this study was evaluative: can an agent generate behavior that engenders feelings of rapport in human speakers, and how does this compare to human-generated feedback? A secondary goal was to answer the question: is contingency (as opposed to frequency) of agent feedback crucial when it comes to creating feelings of rapport? Results suggest that contingency matters when it comes to creating rapport and that agent-generated behavior was as good as human listeners in creating rapport. A “virtual human listener” condition performed worse than the other conditions.

Jina Lee, Stacy Marsella, David Traum, Jonathan Gratch, Brent Lance: “The Rickel Gaze Model: A Window on the Mind of a Virtual Human”

Gaze plays a large number of cognitive, communicative and affective roles in face-to-face human interaction. To build a believable virtual human, it is imperative to construct a gaze model that generates realistic gaze behaviors. However, it is not enough to merely imitate a person’s eye movements. The gaze behaviors should reflect the internal states of the virtual human, and users should be able to infer those states by observing the behaviors. In this paper, we present a gaze model driven by cognitive operations; the model processes the virtual human’s reasoning, dialog management and goals to generate behaviors that reflect the agent’s inner thoughts. It has been implemented in our virtual human system and operates in real time. The gaze model introduced in this paper was originally designed and developed by Jeff Rickel but has since been extended by the authors.
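
As a purely illustrative sketch of the general idea, internal cognitive operations selecting observable gaze behaviors, one might imagine a mapping like the following Python fragment; the operation names and behaviors here are invented and are not the Rickel model itself.

    # Hypothetical mapping from a virtual human's current cognitive operation
    # to an observable gaze behavior, so users can "read" its inner state.
    GAZE_RULES = {
        "planning":        "look up and away (thinking)",
        "listening":       "hold gaze on the speaker",
        "turn_taking":     "look directly at the addressee",
        "referring":       "glance at the referenced object",
        "goal_monitoring": "check the task-relevant area",
    }

    def select_gaze(cognitive_operation):
        # Fall back to idle gaze if the operation is unknown.
        return GAZE_RULES.get(cognitive_operation, "neutral wandering gaze")

    print(select_gaze("referring"))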

Seijin Oh, Jonathan Gratch, Woontack Woo: “Explanatory Style for Socially Interactive Agents”

Recent years have seen an explosion of interest in computational models of socio-emotional processes, both as a means to deepen understanding of human behavior and as a mechanism to drive a variety of training and entertainment applications. In contrast with work on emotion, where research groups have developed detailed models of emotional processes, models of personality have emphasized shallow surface behavior. Here, we build on computational appraisal models of emotion to better characterize dispositional differences in how people come to understand social situations. Known as explanatory style, this dispositional factor plays a key role in social interactions and certain socio-emotional disorders, such as depression. Building on appraisal and attribution theories, we model key conceptual variables underlying explanatory style, and enable agents to exhibit different explanatory tendencies according to their personalities. We describe an interactive virtual environment that uses the model to allow participants to explore individual differences in the explanation of social events, with the goal of encouraging the development of perspective-taking and emotion-regulatory skills.

Pedro A. Gonzalez Calero: “Engineering Case-Based Reasoning Systems in jCOLIBRI”

CBR has been proposed as a knowledge-light methodology for building knowledge-based systems in which problem solving is accomplished by reusing past experiences instead of reasoning over declarative domain models. The knowledge acquisition effort is greatly alleviated, assuming that acquiring cases is easier than acquiring domain models, which is mainly true in poorly understood and evolving domains. In this talk I will present cost-effective solutions to inject knowledge into CBR systems, empowering knowledge-light CBR approaches with off-the-shelf knowledge components, i.e. domain ontologies. I will show how these techniques have been incorporated into jCOLIBRI, an open source framework in Java for building CBR systems, and will run a short tutorial on building different types of CBR applications with the framework.
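
jCOLIBRI itself is a Java framework; the Python sketch below is only a minimal illustration of the CBR cycle the talk builds on (retrieve the most similar past case, reuse its solution), with an invented case base and similarity measure rather than jCOLIBRI’s actual API.

    # Minimal case-based reasoning sketch: retrieve the nearest stored case
    # and reuse its solution, instead of reasoning over a domain model.
    cases = [
        ({"cpu_ghz": 2.0, "ram_gb": 8},  "mid-range laptop"),
        ({"cpu_ghz": 3.5, "ram_gb": 32}, "workstation"),
        ({"cpu_ghz": 1.1, "ram_gb": 2},  "netbook"),
    ]

    def similarity(query, case_desc):
        # Simple inverse-distance similarity over shared numeric attributes.
        dist = sum(abs(query[k] - case_desc[k]) for k in query)
        return 1.0 / (1.0 + dist)

    def retrieve_and_reuse(query):
        best_desc, best_solution = max(cases, key=lambda c: similarity(query, c[0]))
        return best_solution  # reuse the retrieved case's solution as-is

    print(retrieve_and_reuse({"cpu_ghz": 3.0, "ram_gb": 16}))  # -> workstation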

Pedro A. González-Calero is Associate Professor of Computer Science at the Complutense University of Madrid, Spain, where he is the founder and director of the Group for Artificial Intelligence Applications (gaia.fdi.ucm.es) and has directed the Master of Videogame Development since its creation in 2004. He has spent his whole career at the university, where he obtained a BS in Physics in 1990 and his PhD in Computer Science in 1997 while teaching in the Faculty of Informatics since its creation in 1991. His research has focused on the confluence of software engineering and artificial intelligence, and he is the author of more than 70 peer-reviewed journal and conference articles on knowledge-based software engineering, software reuse and case-based reasoning. He is currently on a one-year sabbatical at ISI.

NY Times Looks at PTSD Treatment

Post-Traumatic Stress Disorder (PTSD) is a growing problem, especially with the increasing number of veterans returning from duty in the Middle East. Traditional treatments have varying success rates and often require extended personal therapy work. “Exposure therapy,” in which patients reengage with and recount the original stimuli, has been a first-line treatment for quite some time. Virtual reality technologies developed at the ICT have been implemented in simulators that are now taking these methodologies to new levels. ICT researcher Dr. Albert Rizzo, who helped develop the simulator, said, “It’s a hard treatment for a very hard problem.”

Read the full article.

Mehdi Manshadi: “Learning a probabilistic model of event sequences from Internet weblog stories”

Internet weblogs provide a huge source of stories that can be used as a large corpus to learn statistical regularities of narrative text. In this talk, I will describe an approach for building a statistical model of sequences of narrative events that can be used in many different applications, including event prediction, automatic story generation, and story coherence evaluation. In this model, we represent events described in narrative text sentences as a predicate (main verb) with a single argument. We use language modeling techniques to find the probabilities of sequences of events in a story. We will describe the results of some experiments to show how well this model captures the structure of event sequences in narrative text.
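
As a rough illustration of this kind of model (events as a predicate with a single argument, sequence probabilities estimated with language-modeling techniques), here is a toy Python bigram model over events; the two-story corpus and add-one smoothing are assumptions for the sketch, not the talk’s actual model.

    from collections import defaultdict

    # Toy corpus: each story is a sequence of (predicate, argument) events.
    stories = [
        [("wake", "john"), ("eat", "john"), ("drive", "john")],
        [("wake", "mary"), ("drive", "mary"), ("work", "mary")],
    ]

    bigram = defaultdict(lambda: defaultdict(int))
    unigram = defaultdict(int)
    for story in stories:
        for prev, nxt in zip(story, story[1:]):
            bigram[prev][nxt] += 1
            unigram[prev] += 1

    def p_next(prev, nxt, vocab_size=100):
        # Add-one smoothed probability of event `nxt` following `prev`.
        return (bigram[prev][nxt] + 1) / (unigram[prev] + vocab_size)

    # Event prediction: the most likely continuation after ("wake", "john").
    print(max(bigram[("wake", "john")], key=bigram[("wake", "john")].get))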

Darold Higa: “Multi-Agent Virtual Histories: Disaggregating International Relations”

Concerns over the environment, terrorism, ethnic violence and state disintegration have placed greater emphasis on exploring the possible connections between resource scarcity and inter-group violence. The wide range of divergent outcomes resulting from resource scarcity suggests that the ideational context of resource scarcity is critical in modeling this relationship. An adequate model of the relationship between scarcity and violence must therefore contain elements that can reflect the origins, development and proliferation of ideas and alternative economic strategies in order to explain real-world divergence in outcomes. Scarcity as a Complex Adaptive System (SCAS) is one such model. SCAS uses an agent-based model featuring cognitively complex agents on a differentiated, three-dimensional landscape to explore the relationship between resource scarcity and inter-group violence. Demonstrating the efficacy of SCAS requires translating the model into a computer simulation known as agentLand. AgentLand features adaptive agents that learn experientially via Holland’s Learning Classifier System, learn socially through communication and innovate through random strategy generation. The resulting virtual histories created by agentLand show that ideas, geography, density and communication are important, and that the proliferation of different strategies across a landscape of adaptive agents can create a wide range of outcomes, paralleling the diversity found in the real world. Preliminary results show that by using an ensemble of virtual histories, agentLand is able to generate plausible virtual scenarios. Most importantly, this research opens the door to a different way of conceptualizing and modeling complex macro-level events as networks of micro-interactions.
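
A faithful Learning Classifier System is beyond a short sketch, but the toy Python model below conveys the agent-based setup described above: agents on a shared landscape shift strategies as resources grow scarce, with occasional social copying of other agents’ strategies. All rules and numbers here are invented for illustration.

    import random

    random.seed(1)

    class Agent:
        def __init__(self):
            self.strategy = "trade"  # default alternative economic strategy

        def step(self, local_resources):
            # Scarcity pressures some agents toward raiding.
            if local_resources < 0.3 and random.random() < 0.5:
                self.strategy = "raid"

    agents = [Agent() for _ in range(100)]
    for t in range(10):
        resources = max(0.0, 1.0 - 0.1 * t)  # the landscape grows scarcer
        for agent in agents:
            agent.step(resources)
        # Social learning: a few agents copy a randomly met agent's strategy.
        for agent in random.sample(agents, 10):
            agent.strategy = random.choice(agents).strategy

    print(sum(a.strategy == "raid" for a in agents), "of 100 agents now raid")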

ICT Research Wins Award at SIGGRAPH ’07

Computer graphics have transformed many aspects of life. But while a great deal of imagery is modeled and rendered in 3D, almost all of it is shown on a 2D display such as a computer monitor, movie screen, or television. Researchers at the USC Institute for Creative Technologies (ICT) along with their collaborators have devised a reproducible, low-cost 3D display system that requires no special glasses, and is viewable from all angles by multiple users. This system allows computer generated 3D objects to be seen in new ways, and will impact the future of interactive systems.

The Interactive 360-Degree Light Field Display (3D Display) was demonstrated at SIGGRAPH 2007, where it won the award for “Best Emerging Technology.” SIGGRAPH is the pre-eminent conference for computer graphics and interactive technologies, and the Emerging Technologies exhibit is held in collaboration with the International Conference on Virtual Reality. The award brings with it an invitation to show the 3D Display technology at the 10th International Conference on Virtual Reality in Laval, France next April.

The research team includes Paul Debevec and Andrew Jones from the ICT, Ian McDowall from Fakespace Labs, Inc., Hideshi Yamada from Sony Corporation and Mark Bolas from the USC School of Cinematic Arts. Debevec is Associate Director for Graphics Research at the ICT and a pioneer in the field of realistic and interactive computer imagery.

Information on SIGGRAPH 2007

Ron Artstein: “Evaluation of semantic corpus annotation”

Semantically annotated corpora are intended to capture semantic relations among elements in a text, but it is not known how uniform human intuitions are about such semantic relations. I present a novel methodology for evaluating the process of semantic corpus annotation, which is applicable in domains where we expect naive participants to have valid judgments, such as deciding whether two expressions refer to the same object. The annotation process is tested in experiments where many naive annotators mark the same text, based on a compact set of guidelines and minimal training.

The limited guidelines allow annotators to express their own judgments rather than rely on mechanical rule application, and the large number of participants yields a variety of responses which capture the range of possible judgments better than traditional comparisons of two or three highly trained annotators. The result is a somewhat noisy dataset which is evaluated both qualitatively, by looking at instances of disagreement, and quantitatively, using formal reliability statistics. The evaluation identifies difficult areas in the annotation scheme and guidelines, and also specific linguistic constructions that are not well defined with respect to the annotation. The results of such experiments were used to improve the annotation scheme, guidelines, and task definition of the Arrau corpus of anaphoric relations created at the University of Essex.
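
To illustrate the quantitative side, one simple reliability measure for many annotators is per-item pairwise agreement, sketched below in Python; this is a generic example, not necessarily the statistic used in the Arrau evaluation.

    from itertools import combinations

    # Toy data: several naive annotators marked each item, e.g. whether
    # two expressions refer to the same object (y/n).
    annotations = {
        "item1": ["y", "y", "y", "n"],
        "item2": ["y", "n", "n", "n"],
        "item3": ["y", "y", "y", "y"],
    }

    def pairwise_agreement(labels):
        pairs = list(combinations(labels, 2))
        return sum(a == b for a, b in pairs) / len(pairs)

    for item, labels in annotations.items():
        print(item, round(pairwise_agreement(labels), 2))
    # Low-agreement items point at difficult spots in the annotation guidelines.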

Jonathan Gratch, Ning Wang, Anna Okhmatovskaia, Francois Lamothe, Matthieu Morales, Louis-Philippe Morency: “Can Virtual Humans Be More Engaging Than Real Ones?”

Emotional bonds don’t arise from a simple exchange of facial displays, but often emerge through the dynamic give and take of face-to-face interactions. This article explores the phenomenon of rapport, a feeling of connectedness that seems to arise from rapid and contingent positive feedback between partners and is often associated with socio-emotional processes. Rapport has been argued to lead to communicative efficiency, better learning outcomes, improved acceptance of medical advice and successful negotiations. We provide experimental evidence that a simple virtual character that provides positive listening feedback can induce stronger rapport-like effects than face-to-face communication between human partners. Specifically, this interaction can be more engaging to storytellers than speaking to a human audience, as measured by the length and content of their stories.

Patrick Kenny, Arno Hartholt, Jonathan Gratch, David Traum, Stacy Marsella, Bill Swartout: “The More the Merrier: Multi-Party Negotiation with Virtual Humans”

The goal of the Virtual Humans Project at the University of Southern California’s Institute for Creative Technologies is to enrich virtual training environments with virtual humans – autonomous agents that support face-to-face interaction with trainees in a variety of roles – by bringing together many different areas of research, including speech recognition, natural language understanding, dialogue management, cognitive modeling, emotion modeling, non-verbal behavior, speech, and knowledge management. The demo at AAAI will focus on our work using virtual humans to train negotiation skills. Conference attendees will negotiate with a virtual human doctor and elder to try to move a clinic out of harm’s way in single- and multi-party negotiation scenarios using the latest iteration of our Virtual Humans framework. The user will use natural speech to talk to the embodied agents, who will respond in accordance with their internal task model and state. The characters will carry out a multi-party dialogue with verbal and non-verbal behavior. A video of a single-party version of the scenario was shown at AAAI-06. This new interactive demo introduces several new features, including multi-party negotiation, dynamically generated non-verbal behavior and a central ontology.

Kallirroi Georgila: “User Simulation for Dialogue Systems: Learning and Evaluation”

This talk focuses on statistical methods for learning models of users interacting with a dialogue system. I will show how dialogue management strategies can be learned through the use of Markov Decision Processes and user simulations. I will discuss several metrics for evaluating user simulation models with respect to how strongly they resemble real user behavior. I will conclude by showing how dialogue strategies learned with the above techniques were incorporated in a spoken dialogue system using the Information State Update approach. Experimental results showed that such learned strategies were superior to hand-crafted ones.
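
To give a flavor of the method (a dialogue strategy learned by reinforcement against a user simulation), here is a heavily simplified Python Q-learning sketch; the states, actions, user model and rewards are toy assumptions, not the systems described in the talk.

    import random

    random.seed(0)
    states = ["no_info", "have_info", "confirmed"]
    actions = ["ask", "confirm", "close"]
    Q = {(s, a): 0.0 for s in states for a in actions}

    def simulated_user(state, action):
        """Toy user simulation: returns (next_state, reward)."""
        if state == "no_info" and action == "ask":
            return "have_info", 0
        if state == "have_info" and action == "confirm":
            return "confirmed", 0
        if state == "confirmed" and action == "close":
            return "confirmed", 10  # task success
        return state, -1            # wasted turn

    alpha, gamma = 0.1, 0.9
    for _ in range(5000):
        s = "no_info"
        for _ in range(6):
            a = random.choice(actions)  # pure exploration, for brevity
            s2, r = simulated_user(s, a)
            best_next = max(Q[(s2, b)] for b in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2

    policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}
    print(policy)  # learned strategy: ask, then confirm, then close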

Bob Hausmann: “Injecting self-explanation in the classroom: An in vivo experiment”

It is widely accepted that active, cognitive processing during learning results in a robust representation of the target material. For example, self-explanation is an active, sense-making learning strategy that has consistently been shown to be effective across a wide variety of knowledge domains. While the effects of self-explanation have been well documented, a few open questions remain. First, do the strong learning effects transfer to the classroom setting? Second, why is self-explanation effective? Generating explanations may be effective because of the additional content that is produced, or it may be due to the activity of generation itself. In an attempt to answer these open questions, we conducted a classroom experiment by instructing students to engage in one of two learning strategies: paraphrasing or self-explaining worked-out examples. In this talk, I will present the results from our study, as well as an introduction to a methodology used by the Pittsburgh Science of Learning Center (PSLC) called in vivo experimentation.

Celestine Ntuen: “Towards Analytical and Computational Sensemaking”

In asymmetric information environments, the deliberate military decision making processes (MDMP), with all their linearity assumptions, are generally regarded as inadequate. Generating courses of action must be progressive and opportunistic. Thus, the classical analytical models of judgment and choice that fit force-on-force tactics must be recalibrated to fight against unknown enemies. Sensemaking, the process of connecting the dots among disparate information and seeking explanations for potentially unexpected, evolving situations, has been suggested both as an embellishment to and as a precursor of existing MDMP. Unfortunately, these nascent decision systems lack analytical models that can capture the evolving states of battle dynamics and its information equivocality. This presentation reviews classes of analytical models suitable for sensemaking and discusses an on-going Bayesian Abduction Model (BAM) under development to support the sensemaking process. Key challenges are identified. For example: Can sensemaking, with all its tacit dimensions of knowledge, be represented mathematically (and computationally)? Can analytical models capture the characteristics of, and give explanations for, evolving systems? Some demonstrations with problem types derived from the Cynefin model developed by Snowden at IBM Europe are presented.
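
To make the abductive step concrete, the toy Python sketch below scores competing explanations of observed evidence by Bayesian posterior; the hypotheses and probabilities are invented and are not drawn from BAM.

    # Abduction as Bayes: choose the hypothesis that best explains the evidence.
    priors = {"ambush_planned": 0.1, "routine_traffic": 0.9}
    # Likelihood of the observed evidence under each hypothesis:
    likelihood = {"ambush_planned": 0.8, "routine_traffic": 0.05}

    evidence_prob = sum(priors[h] * likelihood[h] for h in priors)
    posterior = {h: priors[h] * likelihood[h] / evidence_prob for h in priors}

    best = max(posterior, key=posterior.get)
    print(best, round(posterior[best], 2))  # -> ambush_planned 0.64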

Wenji Mao, Jonathan Gratch: “Modeling social inference in virtual agents”

Social judgment is a social inference process whereby an agent singles out individuals to blame or credit for multi-agent activities. Such inferences are a key aspect of social intelligence that underlies social planning, social learning, natural language pragmatics and computational models of emotion. With the advance of multi-agent interactive systems and the need to design socially aware systems and interfaces that interact with people, it is increasingly important to model this human-centric form of social inference. Based on psychological attribution theory, this paper presents a general computational framework for automating social inference based on an agent’s causal knowledge and observations of interaction.
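
As a toy illustration of attribution-style judgment (the paper defines the actual framework), the Python sketch below combines variables such as causality, coercion, intention and foreseeability into a blame verdict; the rule set is a simplifying assumption, not the paper’s model.

    def assign_blame(agent):
        """Toy attribution-style judgment for a harmful outcome."""
        if not agent["caused_outcome"]:
            return "no blame: not causally responsible"
        if agent["coerced"]:
            return "blame shifts toward the coercing agent"
        if agent["intended"] and agent["foresaw"]:
            return "full blame"
        if agent["foresaw"]:
            return "partial blame: negligent"
        return "little blame: unforeseeable accident"

    print(assign_blame({"caused_outcome": True, "coerced": False,
                        "intended": False, "foresaw": True}))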

PTSD Treatment Features

Earlier this week, a reporter was escorted down an Iraqi street during the morning call to prayer. There was a marketplace to the right, nondescript buildings down the road and a few pedestrians milling about. Then a helicopter flew overhead, accompanied by the bone-rattling sound of gunfire. The ground shook as a parked car suddenly exploded, apparently blown up by an insurgent’s improvised explosive device. Sniper fire popped from the rooftops. Dazed civilians wandered into the reporter’s path—though it was unclear whether they were friendlies, or insurgents in disguise poised for an ambush.

Read the Newsweek article.

Nightline Reports on Virtual Reality Therapy

Nightline covered an Iraq War veteran using Virtual Iraq at Emory University on its June 7, 2007 show.

Read about the story.

Paul Thagard, Peter Ditto, Jonathan Gratch, Stacy Marsella, Drew Westen: “Emotional Cognition in the Real World”

There is increasing appreciation in cognitive science of the impact of emotions on many kinds of thinking, from decision making to scientific discovery. This appreciation has developed in all the fields of cognitive science, including psychology, philosophy, artificial intelligence, linguistics and anthropology. The purpose of the proposed symposium is to report and discuss new investigations of the impact of emotion on cognitive processes, in particular ones that are important in real-life situations. We will approach the practical importance of emotional cognition from a variety of disciplinary perspectives: social psychology (Ditto), clinical psychology (Westen), computer science (Gratch and Marsella), and philosophy and neuroscience (Thagard). In order to provide integration across these approaches, we will try to address a fundamental set of questions, including: 1. How do emotions interact with basic cognitive processes? 2. What are the positive contributions of emotions to various kinds of thinking in real-world situations? 3. How do emotions sometimes bias thinking in real-world situations? 4. How can understanding of the psychology and neuroscience of emotional cognition be used to improve the effectiveness of real-world thinking?

Sabarish V. Babu: “Inter-Personal Social Conversation in Multimodal Human-Virtual Human Interaction”

My primary area of research lies in understanding the impact of social conversational behaviors in human-virtual human interaction. Specifically, the goal of my work is to investigate what social or conversational tasks are best suited for a virtual human interface agent, and how these capabilities affect human factors such as engagement, satisfaction, acceptance of the interface agent’s role, and the success of task performance in an interactive public setting. To this end, I present the Virtual Human Interface Framework (VHIF), implemented for the purpose of studying human-virtual human interaction. I also present Marve, a virtual receptionist created as a proof-of-concept application built using VHIF. Using Marve, I studied the impact of the agent’s social conversational capabilities on users’ perceptions and treatment of a virtual human interface agent deployed in a public or social setting. I showed that the virtual human interface agent was able to engage users in context-independent social conversational dialogue for a significant amount of time. Using VHIF I also developed a multi-agent immersive virtual human social conversational protocol training system. In a controlled study, I demonstrated that users can be trained in verbal and non-verbal social conversational protocols using natural multi-modal interaction with life-size virtual human interface agents.

Abhijeet Ghosh: “Realistic Material and Illumination Environments”

Throughout its history, the field of computer graphics has been striving towards increased realism. This goal has traditionally been described by the notion of photorealism and, more recently and in many cases, by the more ambitious goal of perceptual realism. Photorealistic image synthesis involves many algorithms describing the phenomena of light transport in a scene as well as its interaction with various materials. Research in perceptual realism, on the other hand, typically involves tone mapping algorithms for display devices as well as algorithms that mimic the natural response of the human visual system in order to recreate the visual experience of a real scene.

An important aspect of realistic rendering is the accurate modeling of the scene elements such as light sources and material reflectance properties. This dissertation proposes a set of new techniques for efficient acquisition of material properties as well as new algorithms for high quality rendering with acquired data. Here, we are mostly concerned with the acquisition and rendering of local illumination effects. In particular, we propose a new optical setup for efficient BRDF acquisition with basis illumination and various Monte Carlo strategies for efficient sampling of direct illumination.
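
For flavor, here is a minimal Python Monte Carlo estimator of direct illumination at a surface point, the general problem the sampling strategies above address; the constant Lambertian BRDF, uniform sky and absence of an occlusion test are toy assumptions, far simpler than the dissertation’s techniques.

    import math, random

    random.seed(0)

    def estimate_direct_light(num_samples=10000):
        """Toy Monte Carlo estimate of reflected radiance at a Lambertian
        surface point under a uniform sky of radiance 1 (no occlusion)."""
        albedo = 0.5
        brdf = albedo / math.pi        # constant Lambertian BRDF
        pdf = 1.0 / (2.0 * math.pi)    # uniform hemisphere sampling
        total = 0.0
        for _ in range(num_samples):
            cos_theta = random.random()  # cosine of the sampled direction
            total += brdf * 1.0 * cos_theta / pdf
        return total / num_samples       # expected value: albedo = 0.5

    print(round(estimate_direct_light(), 2))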

The talk also looks into the display end of the image synthesis pipeline and proposes algorithms for displaying scenes on high dynamic range (HDR) displays for visual realism, and for tying the room illumination with the viewing environment for a sense of presence and immersion in a virtual environment. Here, we develop real-time rendering algorithms for driving the HDR displays as well as for active control of room illumination based on dynamic scene content. Thus, we propose contributions to the acquisition, rendering, and display end of the image synthesis pipeline while targeting real-time rendering applications, as well as high quality off-line rendering with realistic materials and illumination environments.

Keeping Virtual Real

For someone who lives only in a computer, Sgt. John Blackwell sure gets around.

Blackwell is the walking, talking showcase for virtual human research at USC’s Institute for Creative Technologies. He has not yet achieved his mission – to teach, train or mentor Army recruits – but his enviable social life shows how human a virtual character can be.

Read the USC News story.

Marine Corps Using ICT Flatworld Technologies

Chief of Naval Research Rear Admiral William E. Landay III has announced the funding of a $1.3-million “Tech Solutions” project to deliver advanced infantry immersive training to Marines. This project will field two systems by the fall of 2007. The first system will be installed in the I MEF Battle Simulation Center at Camp Pendleton, California, and the second will be installed in the new Marine Expeditionary Rifle Integration Facility opening this summer near Quantico, Virginia. The project uses Flatworld technologies from the ICT.

Ryan Baker: “Detecting and Adapting to When Students Game the System”

Students use interactive learning environments in a considerable variety of ways. In this talk, I will present research on developing learning environments that can automatically detect and adapt when a student is “gaming the system”, attempting to succeed in a learning environment by exploiting properties of the system rather than by learning the material and trying to use that knowledge to answer correctly. My colleagues and I have determined across several studies that gaming the system is replicably associated with low learning in intelligent tutors, and that gaming has different effects, depending on when and why students game.

In this talk, I will present a detector that reliably detects gaming, in order to drive adaptive support. In order to predict both which students game, and when a specific student is gaming, this detector was trained using a combination of a psychometric modeling framework, Latent Response Models (Maris, 1995), and a machine-learning space-searching technique, Fast Correlation-Based Filtering (Yu and Liu, 2003), using a mixture of labeled and unlabeled data at different grain-sizes. My colleagues and I have validated that this detector transfers effectively between several intelligent tutor lessons without re-training, despite the lessons varying considerably in their subject matter and user interfaces.
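
Fast Correlation-Based Filtering is specified in Yu and Liu (2003); purely as a loose illustration of correlation-based feature selection in general (not their exact algorithm), the Python sketch below keeps features that correlate with the label and drops candidates more redundant with an already-selected feature than they are relevant.

    import math

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy) if sx and sy else 0.0

    def select_features(features, label, relevance_min=0.3):
        """Keep features correlated with the label; drop a candidate that is
        at least as correlated with a kept feature as it is with the label."""
        ranked = sorted(features, key=lambda f: -abs(pearson(features[f], label)))
        selected = []
        for f in ranked:
            relevance = abs(pearson(features[f], label))
            if relevance < relevance_min:
                continue
            if all(abs(pearson(features[f], features[g])) < relevance
                   for g in selected):
                selected.append(f)
        return selected

    # Toy per-action log features versus a hand-labeled "gaming" flag:
    features = {"fast_attempts": [1, 1, 0, 1, 0],
                "help_abuse":    [1, 0, 0, 1, 0],
                "time_of_day":   [0, 1, 1, 0, 1]}
    label = [1, 1, 0, 1, 0]
    print(select_features(features, label))  # redundant features are dropped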

The gaming detector has been used to develop a tutor which responds to gaming. Within this system, a software agent (“Scooter the Tutor”) indicates to the student and their teacher whether the student has been gaming recently. Scooter also gives students supplemental exercises, in order to offer the student a second chance to learn the material he/she had gamed through. Scooter reduces the frequency of gaming by over half, and Scooter’s supplementary exercises are associated with substantially better learning; Scooter appears to have had virtually no effect on students who do not game.

Lisa Holt and Brian Stensrud: “Interactive Story Architecture for Training (ISAT)”

The Interactive Story Architecture for Training (ISAT) is designed to create an engaging and individualized training environment. ISAT combines interactive drama and intelligent tutoring techniques to improve the effectiveness of simulation-based training. Interactive story provides an engaging and realistic experience while intelligent tutoring adapts the experience to individual trainee needs and maintains a focus on training goals. The combined ISAT approach provides benefits without interrupting the immersive simulation-based experience. The central component of the ISAT architecture is an intelligent director agent. The director monitors the trainee’s demonstration of knowledge and skills during the training experience. Using that information, the director plays a role similar to that of a schoolhouse trainer, customizing training scenarios to meet individual trainee needs. The director can react to trainee actions within a scenario, dynamically adapting the environment to the learning needs of the trainee or to suit the dramatic needs of the scene. Another central goal of ISAT is to facilitate the encoding of training content by a non-programmer. Scribe, the story authoring component of ISAT, allows the author to manipulate a simple representation to encode logical event sequences and training content. In this presentation we will describe the current state of the ISAT director and Scribe, and demonstrate our current instantiation in the Tactical Combat Casualty Care (TC3) simulation-based training system. We will also discuss current efforts to create a visualization component to expose the functionality of ISAT. Finally, we will outline some current and planned efforts to enhance and extend ISAT.

ICT Featured in the Chronicle of Higher Education

This article in the Chronicle of Higher Education discusses research work at the ICT designed to “create the next generation of training technology.” The piece includes rather extensive overviews of Flatworld (a marriage of digital displays of synthetic characters with physical props and 4-D effects), ELECT-BiLat (culturally sensitive and psychologically rich negotiations with a synthetic human), and AXL (a case-based methodology utilizing rich cinematic pieces infused with significant tacit-knowledge pedagogy). The overarching conclusion is that these types of virtual training tools and environments are critical for helping to cut costs, limit environmental impact, increase efficiency, and most importantly, save lives.

New ICT Creative Director

Marina del Rey, CA (January 29, 2007) – Today Randall Hill, Executive Director of the University of Southern California’s Institute for Creative Technologies (ICT), announced that the institute was furthering its already strong relationship with the Hollywood community by appointing Kim LeMasters as its new Creative Director. As Creative Director, LeMasters will work closely with ICT’s Executive Director, as well as the ICT’s senior management team and researchers, in developing creative content for research projects – including concept development/visualization, games, videos and other immersive experiences – while managing ICT’s creative personnel as well as partnerships with the entertainment and computer industries.

LeMasters brings decades of entertainment experience, including serving as president of CBS Entertainment. Following his time at CBS, he joined Stephen J. Cannell Productions as president in 1992 and later produced individual film and television projects for more than a decade. In 1999 he became chairman and CEO of the digital home entertainment recording company Replay Networks, guiding the rollout of its signature product, ReplayTV.

In light of ICT’s mission to develop virtual immersive training for the military and education applications, LeMasters joins the institute at a time when new technologies and delivery systems are challenging the hegemony of traditional media. “ICT is a pioneer in producing technologically sophisticated films that communicate subtle yet vital leadership skills for military commanders. Our work for the U.S. Army has not only been gratifying in its acceptance but, more importantly, is proving beneficial to the soldiers on the ground,” stated Hill. “Having Kim come aboard as our Creative Director is the direct result of his work with us as a writer and a producer of three of our films. We believe his background in both storytelling and technology will further our goals in not only our pedagogical output but will be accretive as we continue to expand our efforts in our video games and immersive experiences.”

Virtual War, Real Healing

Los Angeles Times front page article covering ICT research on using virtual environments for treating Post Traumatic Stress Disorder.

Read the article.

Louis-Philippe Morency: “Visual Feedback for Multimodal Interfaces”

When people interact with each other, it is common to see indications of acknowledgment given with a simple head gesture or explicit turn-taking with eye gaze shifts. People use visual feedback, visual information transferred during interaction, to communicate relevant information and to synchronize rhythm between participants. The recognition of visual feedback is a key component of human communication, and novel multimodal interfaces need to recognize and analyze these visual cues to facilitate more natural human-computer interaction.

In this talk, I will focus on two core technical challenges necessary to achieve efficient and robust visual feedback recognition: the use of contextual information to anticipate visual feedback, and the use of latent state models for visual gesture recognition. To recognize visual feedback efficiently, people often use contextual knowledge from previous and current events to anticipate when feedback is most likely to occur. For example, at the end of a sentence the speaker will often look at the listener and anticipate a head nod gesture to ground understanding. I will present a context-based recognition framework for analyzing online contextual knowledge from the interactive system and anticipating visual feedback from the human participant.

Recognizing natural visual feedback from human users is a challenging problem; natural visual gestures are subtle, can differ considerably between individuals, and are context driven. I will present new discriminative sequence models for visual gesture recognition which can model the sub-structure of a gesture sequence, can learn the dynamics between gesture labels and can be directly applied to label un-segmented sequences. These models outperform previous approaches (i.e. SVMs, HMMs, and CRFs) for visual gesture recognition and can efficiently learn relevant contextual information necessary for context-based recognition.
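
Morency’s own models are discriminative; purely to illustrate the broader family of latent-state sequence models for gesture recognition, here is a toy generative HMM forward pass in Python. All states, observations and probabilities are invented for the example.

    # Toy HMM forward pass: how likely is a head nod given a sequence of
    # observed head-pitch motions ("up", "down", "still")?
    states = ["nodding", "idle"]
    init = {"nodding": 0.2, "idle": 0.8}
    trans = {"nodding": {"nodding": 0.7, "idle": 0.3},
             "idle":    {"nodding": 0.1, "idle": 0.9}}
    emit = {"nodding": {"up": 0.4, "down": 0.4, "still": 0.2},
            "idle":    {"up": 0.1, "down": 0.1, "still": 0.8}}

    def forward(observations):
        alpha = {s: init[s] * emit[s][observations[0]] for s in states}
        for obs in observations[1:]:
            alpha = {s: emit[s][obs] * sum(alpha[p] * trans[p][s] for p in states)
                     for s in states}
        total = sum(alpha.values())
        return {s: alpha[s] / total for s in states}

    print(forward(["down", "up", "down", "up"]))  # high P(nodding)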