Antonio Damasio: “Neurobiology of the Mind”

On Wednesday December 20th, Antonio and Hanna Damasio will be visiting the ICT. Antonio Damasio will be giving a talk in the 6th floor amphitheater at 10am. The title and abstract will be sent out when they become available.

The Damasios are preeminent neuroscientists who recently joined USC. As stated by a USC news story, “Antonio Damasio’s research on the neurobiology of the mind has had a major influence on our current understanding of the neural systems that underlie emotion, memory, language, decision-making and consciousness.”

Andrew Jones, Andrew Gardner, Mark Bolas, Ian McDowall, Paul Debevec: “Simulating Spatially Varying Lighting on a Live Performance”

We present an image-based technique for relighting dynamic human performances under spatially varying illumination. Our system combines a time-multiplexed LED lighting basis with a geometric model recovered from high-speed structured light patterns. The geometric model is used to scale the intensity of each pixel differently according to its 3D position within the spatially varying illumination volume. This yields a first-order approximation of the correct appearance under the spatially varying illumination. A global illumination process removes indirect illumination from the original lighting basis and simulates spatially varying indirect illumination. We demonstrate this technique for a human performance under several spatially varying lighting environments.
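
The per-pixel scaling step can be sketched in a few lines. This is only a first-order illustration of the idea, not the authors’ implementation; the data layout and the `light_intensity_at` sampling function are hypothetical:

```python
def relight(basis_images, pixel_positions, light_intensity_at):
    """First-order relighting sketch (hypothetical interface).

    basis_images: list of L images, each a HxW nested list of pixel
        values recorded under one basis light.
    pixel_positions: HxW nested list of (x, y, z) positions recovered
        from the structured-light geometry.
    light_intensity_at: function (light_index, xyz) -> intensity of the
        target spatially varying illumination for that light at that point.
    """
    H, W = len(pixel_positions), len(pixel_positions[0])
    out = [[0.0] * W for _ in range(H)]
    for i, image in enumerate(basis_images):
        for r in range(H):
            for c in range(W):
                # scale this pixel's response to basis light i by the
                # target illumination sampled at its 3D position
                out[r][c] += image[r][c] * light_intensity_at(
                    i, pixel_positions[r][c])
    return out
```

Each pixel’s relit value is a sum over the lighting basis, with every basis contribution weighted by the target illumination sampled at the pixel’s recovered 3D position.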

Andrew Jones, Paul Debevec, Mark Bolas, Ian McDowall: “Concave Surround Optics for Rapid Multiview Imaging”

Many image-based modeling and rendering techniques involve photographing a scene from an array of different viewpoints. Usually, this is achieved by moving the camera or the subject to successive positions, or by photographing the scene with an array of cameras. In this work, we present a system of mirrors to simulate the appearance of camera movement around a scene while the physical camera remains stationary. The system is thus amenable to capturing dynamic events, avoiding the need to construct and calibrate an array of cameras. We demonstrate the system with high-speed video of a dynamic scene. We show smooth camera motion rotating 360 degrees around the scene. We discuss the optical performance of our system and compare it with alternative setups.

Sarah Tariq, Andrew Gardner, Ignacio Llamas, Andrew Jones, Paul Debevec, Greg Turk: “Efficient Estimation of Spatially Varying Subsurface Scattering Parameters”

We present an image-based technique to efficiently acquire spatially varying subsurface reflectance properties of a human face. The estimated properties can be used directly to render faces with spatially varying scattering, or can be used to estimate a robust average across the face. We demonstrate our technique with renderings of people’s faces under novel, spatially-varying illumination and provide comparisons with current techniques. Our captured data consists of images of the face from a single viewpoint under two small sets of projected images. The first set, a sequence of phase-shifted periodic stripe patterns, provides a per-pixel profile of how light scatters from adjacent locations. The second set of structured light patterns is used to obtain face geometry. We subtract the minimum of each profile to remove the contribution of interreflected light from the rest of the face, and then match the observed reflectance profiles to scattering properties predicted by a scattering model using a lookup table. From these properties we can generate images of the subsurface reflectance of the face under any incident illumination, including local lighting. The rendered images exhibit realistic subsurface transport, including light bleeding across shadow edges. Our method works more than an order of magnitude faster than current techniques for capturing subsurface scattering information, and makes it possible for the first time to capture these properties over an entire face.
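
The minimum-subtraction and lookup-table matching steps lend themselves to a compact sketch. The data layout below is a hypothetical illustration, not the authors’ pipeline; the table would in practice be populated with profiles predicted by a scattering model:

```python
def match_scattering(observed_profile, lookup_table):
    """Sketch of the profile-matching step (hypothetical data layout).

    observed_profile: reflectance values at increasing distance from an
        illuminated stripe edge.
    lookup_table: list of (params, predicted_profile) pairs, with each
        predicted profile the same length as the observed one.
    """
    # subtract the profile minimum to remove interreflected light from
    # the rest of the face, keeping only local subsurface transport
    floor = min(observed_profile)
    direct = [v - floor for v in observed_profile]
    # return the parameters whose predicted profile is closest (L2)
    best_params, _ = min(
        lookup_table,
        key=lambda entry: sum((a - b) ** 2 for a, b in zip(entry[1], direct)))
    return best_params
```

A lookup table turns per-pixel parameter estimation into a nearest-profile search, which is what makes per-pixel estimation over a whole face tractable.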

Paper on Interactive Comics Wins Award

ICT Researcher Andrew Gordon’s paper “Fourth Frame Forums: Interactive Comics for Collaborative Learning” was awarded “Best Technical Short Paper” at the 14th International ACM Conference on Multimedia, October 23–27, 2006, in Santa Barbara, CA.

Read the paper.

New ICT Executive Director

Marina del Rey, California—October 9, 2006—The USC Provost announced the selection of a distinguished artificial intelligence researcher and former U.S. Army officer to lead the Institute for Creative Technologies.

Randall W. Hill, Jr. has been chosen to lead the Institute for Creative Technologies (ICT), USC Vice Provost for Research Advancement Randolph Hall announced this week on behalf of Provost C. L. Max Nikias.

“I am delighted and honored to be given the responsibility for leading the ICT in the next phase of its growth,” Hill said. “ICT is performing pioneering research on the creation of interactive digital media that will benefit Army training systems as well as classrooms throughout America.”

The appointment concludes an extensive search, during which more than 100 candidates were interviewed and evaluated.

“His combined experience in computer science, the military, and the development of films for leadership development makes him uniquely qualified to lead the Institute for Creative Technologies, which was established to bring together the Army, university research scientists and the entertainment industry in the development of immersive training environments,” Hall stated.

Hill has served USC over the last 11 years, first as a project leader at the Information Sciences Institute, and later rising to the position of Director of Applied Research and Transition at ICT. In this position, one of his overarching goals has been to forge ties between the entertainment industry and technology developers to create more engaging learning environments.

Hill worked with Hollywood filmmakers to create an online leader development system known as Army Excellence in Leadership (AXL). He is also directing the development of an interactive learning game for bilateral negotiation.

“In making this appointment, I would like to restate the university’s commitment to exhibiting world leadership in interactive digital media and cinematic arts through the ICT,” Hall added.

“Toward this end, USC is committed to attracting additional creative leadership to the ICT in the coming year. In addition, in the next few months, Randall will be developing new strategies for the ICT to strengthen collaborations with the university community and expand the impact of the ICT’s core talents in new areas.”

Prior to joining ICT, Randall spent 11 years at NASA’s Jet Propulsion Laboratory, conducting applied research in artificial intelligence to support military intelligence analysis.  He also managed the research program to automate the monitor-and-control system for the Deep Space Network.

He holds a Ph.D. in Computer Science from USC and a B.S. degree from the United States Military Academy at West Point.

Upon graduation from West Point, Hill was trained as a field artillery officer at Ft. Sill, Oklahoma and was then assigned to the 7th Infantry Division at Ft. Ord, California, where he was a fire direction officer for a heavy artillery battery. Hill’s last two years in the Army were spent in military intelligence at Ft. Huachuca, Arizona, where he developed automated intelligence analysis systems.

Randall is an avid mountaineer and lives in Pasadena with his wife, Marianne, and children Austin and Aria.

Hill’s appointment as executive director became effective October 9.

USC Public Relations
Carl Marziali

Nigel G. Ward: “Learning to Show You’re Listening: A Trainer for Back-Channeling in Arabic”

Good listeners generally produce back-channel feedback, and do so in a
language-appropriate way. Second language learners often lack this
skill. We present a training sequence which enables learners to
acquire a basic Arabic back-channel skill, namely, that of producing
feedback immediately after the speaker produces a sharp pitch
downslope. This training sequence includes an explanation, audio
examples, the use of visual signals to highlight occurrences of the
pitch downslope, auditory and visual feedback on learners’ attempts to
produce the cue themselves, and feedback on the learners’ performance
as they play the role of an attentive listener in response to one side
of a pre-recorded dialog. Preliminary experiments suggest that this
allows some learners to acquire this behavior.
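
The cue itself (feedback right after a sharp pitch downslope) can be sketched as a simple slope detector over a pitch track. This is an illustrative toy, not the system’s detector; the frame rate, look-back window, and threshold are invented values:

```python
def downslope_onsets(pitch_hz, frame_s=0.01, lookback=10, drop_hz_per_s=80.0):
    """Flag frames where pitch has just fallen sharply (illustrative cue).

    pitch_hz: per-frame pitch estimates; frame_s: frame period in seconds;
    lookback: how many frames back to measure the slope over.
    """
    onsets = []
    for t in range(lookback, len(pitch_hz)):
        # average slope over the look-back window, in Hz per second
        slope = (pitch_hz[t] - pitch_hz[t - lookback]) / (lookback * frame_s)
        if slope <= -drop_hz_per_s:
            onsets.append(t)  # a good moment to produce back-channel feedback
    return onsets
```

A trainer can then compare the learner’s feedback timestamps against these flagged frames to give auditory or visual feedback on their timing.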

The talk will also touch on the role of back-channels in various types
of dialog, methods for the discovery and quantification of
dialog-relevant prosodic cues, potential cross-cultural
misunderstandings of prosodic signals, the interplay between
meta-communication and the communication of content, and ways to
quantify the value of good turn-taking relative to other dialog skills.

B. Chandrasekaran: “An Architecture for Diagrammatic Representation and Reasoning”

Though AI and cognitive science almost exclusively focus on predicate-symbolic representation as the medium of the “Language of Thought,” perceptual representations are a significant component of cognitive states. Diagrammatic representations are especially common in problem solving. In this talk, I describe a bi-modal architecture for problem solving, in which diagrams and predicate-symbolic representations are co-equal representational modes. Problem solving proceeds opportunistically: whichever modality can solve a subgoal makes its contribution at each cycle. A set of perceptual routines recognizes emergent objects and evaluates a set of generic spatial relations between objects in a diagram, and a set of action routines creates or modifies the diagram in the service of problem-solving goals. I describe the application of the architecture to problems in Army situation understanding and planning.

While the work described is potentially useful as technology, it also has implications for cognitive architecture. Perceptual and kinesthetic representations play a deeper role in thinking than the traditional roles assigned to perception, as getting information from the world, and to action, as executing decisions. The work on diagrammatic representations suggests what a more fully developed multi-modal cognitive architecture would look like.

Biography: B. Chandrasekaran is Professor Emeritus of Computer Science and Engineering and Director of the Laboratory for Artificial Intelligence Research at The Ohio State University. He is a Fellow of the Institute of Electrical and Electronics Engineers, the Association for Computing Machinery, and the American Association for Artificial Intelligence.
His major research activities are in diagrammatic reasoning, causal understanding, knowledge systems, decision support architectures and cognitive architectures. He and David Brown authored “Design Problem Solving” (Morgan Kaufmann), and he is co-editor of “Diagrammatic Reasoning: Cognitive and Computational Perspectives” (MIT Press). Chandrasekaran was Editor-in-Chief of IEEE Expert/Intelligent Systems from 1990 to 1994. He is currently a technical leader in an ARL-supported Government-Industry-University Collaborative Technology Alliance on Advanced Decision Architectures.

Summer Intern Program

Fifteen students from Francisco Bravo High School in Los Angeles are part of the very first Army-funded, two-week internship for high school students at the Institute for Creative Technologies in Marina del Rey, designed to increase interest in computer engineering careers. They are learning to design computer programs to solve “simple” physics problems, like where a packet of food would land if dropped from an airplane. They will also be tapping into the Department of Defense’s supercomputing network to solve more complex problems.

Read the Daily Breeze article.

Peter Khooshabeh, Mary Hegarty: “Inferring cross-sections: when internal visualizations are more important than properties of external visualizations”

Three experiments examined how cognitive abilities and qualities of external visualizations affected performance of a mental visualization task: inferring the cross-section of a complex three-dimensional (3-D) object. Experiment 1 investigated the effect of animations designed to provide different task-relevant views of the external object. Experiment 2 examined the effects of both stereoscopic and motion-based depth cues. Experiment 3 examined the effects of interactive animations, with and without stereoscopic viewing conditions. In all experiments, spatial and general reasoning abilities were measured. Effects of animation, stereopsis, and interactivity were relatively small and did not reach statistical significance. In contrast, spatial ability was significantly associated with superior performance in all experiments, and this remained true after controlling for general intelligence. The results indicate that difficulties in this task stem more from limits on the cognitive ability to perform the relevant internal spatial transformations than from limited visual information about the three-dimensional structure of the object.

Treating Trauma with Games

For soldiers struggling with memories of their combat time in Iraq, USC Institute for Creative Technologies psychologist Skip Rizzo has concocted an unusual prescription: a video game. Since last year, Rizzo has been developing a program that reconfigures the Xbox game “Full Spectrum Warrior” into a tool to help soldiers who suffer from post-traumatic stress disorder.

Jonathan Gratch, Anna Okhmatovskaia, Francois Lamothe, Stacy Marsella, Mathieu Morales, Jan van der Werf, Louis-Philippe Morency: “Virtual Rapport”

Effective face-to-face conversations are highly interactive. Participants respond to each other, engaging in nonconscious behavioral mimicry and backchanneling feedback. Such behaviors produce a subjective sense of rapport and are correlated with effective communication, greater liking and trust, and greater influence between participants. Creating rapport requires a tight sense-act loop that has been traditionally lacking in embodied conversational agents. Here we describe a system, based on psycholinguistic theory, designed to create a sense of rapport between a human speaker and virtual human listener. We provide empirical evidence that it increases speaker fluency and engagement.

Stacy Marsella, Sharon Carnicke, Jonathan Gratch, Anna Okhmatovskaia, Skip Rizzo: “An exploration of Delsarte’s structural acting system”

The designers of virtual agents often draw on a large research literature in psychology, linguistics and human ethology to design embodied agents that can interact with people. In this paper, we consider a structural acting system developed by Francois Delsarte as a possible resource in designing the nonverbal behavior of embodied agents. Using human subjects, we evaluate one component of the system, Delsarte’s Cube, which addresses the meaning of differing attitudes of the hand in gestures.

Wired Magazine coverage of Virtual Iraq

The August 2006 issue of Wired has an article about ICT’s Virtual Iraq.

Read the article.

Kevin Gluck, Glenn Gunzelmann, Jonathan Gratch, Eva Hudlicka, Frank Ritter: “Modeling the Impact of Cognitive Moderators on Human Cognition and Performance”

Cognitive moderators, such as emotions, personality, stress, and fatigue, represent an emerging area of research within the cognitive science community and are increasingly acknowledged as important and ubiquitous influences on cognitive processes. This symposium brings together scientists engaged in research to develop models that help us better understand the mechanisms through which these factors impact human cognition and performance. There are two unifying themes across the presentations. One theme is a commitment to developing computational models useful for simulating the processes that produce the effects and phenomena of interest. The second theme is a commitment to assessing the validity of the models by comparing their performance against empirical human data.

Apollo tours Sill

Actor/producer Carl Weathers, best known for his portrayal of boxer Apollo Creed in the first four “Rocky” movies, toured the Joint Fires and Effects Trainer System at Fort Sill along with other USC ICT creative and senior administrative staff.

The Smell of War

I’m Army, Special Operations. My mission: to sneak up on a rebel training camp. If the intelligence is right—if the place is being operated by the enemy Tiger Brigade—then I’m supposed to plant a radio transmitter so that F-16 pilots can launch smart bombs directly to the target. I just need to make absolutely sure that the location is correct—that the rebels are indeed based here. And that won’t be easy.

I creep through a dark drainage culvert, my helmet skimming the ceiling. There’s graffiti on the walls, puddles and trash on the ground. The place smells like damp earth and moldy concrete, a bit like my parents’ cellar— although home didn’t have bats overhead or rats underfoot. I emerge in a forest, by a river, at night. The air is crisp and piney. After the dank culvert, the change is so refreshing that I initially don’t notice the cinderblock building on the hilltop ahead. I don’t notice the sentry on the roof, standing at a machine gun and looking right at me.

Good thing I’m not actually a soldier in a war zone. I’m in Los Angeles at the USC Institute for Creative Technologies, standing in cubicle-land on the third floor of a modern office building. On my head are virtual-reality (VR) goggles with a stereoscopic, 90-degree field of view of the forest. In my hands is a PlayStation-type controller for directing my movements. And around my neck is an oval of blue plastic fitted with four vented metal modules the size of Zippo lighters. Wirelessly controlled by a nearby stack of computers, the modules transform what would otherwise be standard-issue military VR—for along with sight and sound, this training exercise features the smell of war.

Read the article.

Jonathan Gratch, Stacy Marsella, Wenji Mao: “Towards a Validated Model of ‘Emotional Intelligence'”

This article summarizes recent progress in developing a validated computational account of the cognitive antecedents and consequences of emotion. We describe the potential of this work to impact a variety of AI problem domains.

Per Einarsson, Charles-Felix Chabert, Andrew Jones, Wan-Chun Ma, Bruce Lamond, Tim Hawkins, Mark Bolas, Sebastian Sylwan, Paul Debevec: “Relighting Human Locomotion with Flowed Reflectance Fields”

We present an image-based approach for capturing the appearance of a walking or running person so they can be rendered realistically under variable viewpoint and illumination. In our approach, a person walks on a treadmill at a regular rate as a turntable slowly rotates the person’s direction. As this happens, the person is filmed with a vertical array of high-speed cameras under a time-multiplexed lighting basis, acquiring a seven-dimensional dataset of the person under variable time, illumination, and viewing direction in approximately forty seconds. We process this data into a flowed reflectance field using an optical flow algorithm to correspond pixels in neighboring camera views and time samples to each other, and we use image compression to reduce the size of this data. We then use image-based relighting and a hardware-accelerated combination of view morphing and light field rendering to render the subject under user-specified viewpoint and lighting conditions. To composite the person into a scene, we use an alpha channel derived from back lighting and a retroreflective treadmill surface and a visual hull process to render the shadows the person would cast onto the ground. We demonstrate realistic composites of several subjects into real and virtual environments using our technique.

Serious Games

Saving the world—one game at a time.

Some may scoff at the notion, but while gaming news in recent weeks has been bombarded with talk of legislation or hearings on violence against the GTAs of the world, there’s a genre of games that has its sights set on things like global hunger, cancer awareness and social activism. USC ICT is at the forefront of this work.

ICT wins modeling and simulation award from DOD

A USC Institute for Creative Technologies (ICT) product has received the Modeling and Simulation Award for Training from the Department of Defense, the highest honor for simulation applications. The Self-Directed Learning Internet Module, SLIM-ES3: “Every Soldier a Sensor Simulation,” is a collaborative effort between the ICT, the U.S. Army Intelligence branch (G-2), the U.S. Army Research Development and Engineering Command, Simulation and Training Technology Center (RDECOM-STTC), and Warner Bros. Online. The annual Department of Defense Modeling and Simulation (M&S) Awards recognize achievement in support of Department of Defense Modeling & Simulation objectives. Seventy-nine nominations were received from across the Department of Defense. The awards were recently presented to winners at the Modeling and Simulation Conference in Baltimore, MD.

Vadim Bulitko: “Real-time Learning and Search in Game-like Environments”

The pursuit of moving targets in real-time environments such as computer games and robotics presents several challenges to situated agents.
A priori unknown state spaces and the need to interleave acting and planning limit the applicability of traditional learning, heuristic search, and adversarial game-tree search methods. In this talk we demonstrate how even simple opponent modeling techniques can be used to boost the efficiency of moving target search methods. We then introduce automated techniques for reducing state space size by building hierarchical abstractions of maps. The talk will conclude with an outlook on marrying opponent modeling and state abstraction techniques.
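
The map-abstraction idea can be illustrated with a toy sketch. This is not Bulitko’s automated abstraction technique, just a naive block merge showing how a coarser search space is built from a grid map:

```python
def abstract_grid(passable, block=2):
    """Merge block x block groups of map cells into abstract cells.

    passable: 2-D nested list of booleans for the ground-level map.
    An abstract cell is passable if any of its ground cells is, so
    search at the abstract level stays optimistic about routes.
    """
    rows, cols = len(passable), len(passable[0])
    ab_rows = (rows + block - 1) // block
    ab_cols = (cols + block - 1) // block
    out = [[False] * ab_cols for _ in range(ab_rows)]
    for r in range(rows):
        for c in range(cols):
            if passable[r][c]:
                out[r // block][c // block] = True
    return out
```

Repeating the merge yields a hierarchy of ever-smaller maps; a path found at a coarse level can then be refined at the level below.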

The Changing National Training Center

A look at the changing National Training Center, by Brigadier General Robert W. Cone, U.S. Army. ICT has been deeply involved in adding realism to the NTC’s existing training through an enhancement program that brings Hollywood talent to the center. The areas of focus have been scenario development, role-play training, higher-end editing for AARs, and special effects. Next, ICT technologies will be integrated into the training, notably the case-method-based tool Army Excellence in Leadership (AXL), whose scenarios, focused on IED defeat, are created by a team of producers, directors, writers, actors and editors.

Control a car with your thoughts—it’s therapeutic

USC ICT technologies at work. A soldier in a Humvee scoots across the desert, warily eyeing the vast, empty plain. A fire appears on the horizon, driving smoke high into the sky. The soldier is alert but calm. There is a rumbling noise, then the rat-a-tat-tat of gunfire from Iraqi insurgents.

Suddenly, the soldier flinches and the scene disappears — quieted by a keystroke. The soldier relaxes and returns to the reality of his therapist’s office.

Read the full article.

Wenji Mao, Jonathan Gratch: “Evaluating a Computational Model of Social Causality and Responsibility”

Intelligent agents are typically situated in a social environment and must reason about social cause and effect. Such reasoning is qualitatively different from physical causal reasoning that underlies most intelligent systems. Modeling social causal reasoning can enrich the capabilities of multi-agent systems and intelligent user interfaces. In this paper, we empirically evaluate a computational model of social causality and responsibility against human social judgments. Results from our experimental studies show that in general, the model’s predictions of internal variables and inference process are consistent with human responses, though they also suggest some possible refinements to the computational model.

Training in Mock Iraq

FORT IRWIN, Calif. – Three years into the conflict in Iraq, the front line in the American drive to prepare troops for insurgent warfare runs through a cluster of mock Iraqi villages deep in the Mojave Desert, nearly 10,000 miles from the realities awaiting the soldiers outside Baghdad and Mosul and Falluja.

Out here, 150 miles northeast of Los Angeles, units of the 10th Mountain Division from Fort Drum, N.Y., are among the latest war-bound troops who have gone through three weeks of training that introduce them to the harsh episodes that characterize the American experience in Iraq.

Read the New York Times article.

James Fan: “Interpreting Loosely Encoded Knowledge”

Knowledge based systems are capable of answering questions that have elaborate contexts and require deep reasoning. Unlike information retrieval based question answering, knowledge based systems take in complex encodings from users and return responses by applying reasoning mechanisms to symbolic knowledge bases. A key challenge in using knowledge based question answering systems is how to align a user’s question encoding with the idiosyncrasies of the existing knowledge base. We call such misalignments “loose speak.” I will present the results of our study of loose speak, and describe a loose-speak interpreter which automatically aligns an encoding with an existing knowledge base. More details of the loose-speak interpreter and its evaluations can be found in the publications section of

Fusun Yaman: “Reasoning about Temporal Plans and Plans for Moving Objects”

Temporal plans play an important role in many real-world applications. In this talk I will present my work on generating temporal plans and reasoning about them. I will particularly focus on moving object plans, a special case of temporal plans. There are numerous applications, such as air traffic management and military mission planning, where there is a critical need to reason about moving object plans. In such applications, the actual execution of the plans is susceptible to change due to contingencies. For example, planes may be delayed due to air traffic. Hence there is a critical need to represent temporally flexible plans and reason about them. I will present a Logic of Motion (LOM) that provides a declarative syntax and model theory and formalizes how to reason about planned movements of objects under uncertainty. LOM is the first quantitative logical treatment of moving objects that can account for the fact that we are not always sure when an object will leave a given location, exactly when it will arrive at its destination, and what its velocity will be, even though the routes are known. It is rooted in a mix of geometry and logic: it provides a realistic continuous model of motion. Furthermore, it is armed with very efficient algorithms to answer several kinds of queries about the possible locations of the objects and their proximity to other objects.
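
The kind of query involved can be illustrated on a one-dimensional route. This toy sketch is not LOM itself, just the underlying interval reasoning: with a known route, an uncertain departure time, and bounded speed, the set of possible positions at time t is an interval along the route.

```python
def position_interval(t, depart_window, speed_range, route_length):
    """Possible distance-along-route interval at time t (illustrative).

    depart_window: (earliest, latest) departure times.
    speed_range: (min_speed, max_speed), both positive.
    """
    d0, d1 = depart_window
    v0, v1 = speed_range
    if t <= d0:
        return (0.0, 0.0)  # the object cannot have left yet
    lo = max(0.0, (t - d1) * v0)           # left as late and moved as slowly as possible
    hi = min(route_length, (t - d0) * v1)  # left as early and moved as fast as possible
    return (lo, hi)
```

Proximity queries between two objects then reduce to comparing such intervals at a common time.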

Stacy Marsella, Jonathan Gratch: “EMA: a computational model of appraisal dynamics”

A computational model of emotion must explain both the rapid dynamics of some emotional reactions as well as the slower responses that follow deliberation. This is often addressed by positing multiple appraisal processes such as fast pattern directed vs. slower deliberative appraisals. In our view, this confuses appraisal with inference. Rather, we argue for a single and automatic appraisal process that operates over a person’s interpretation of their relationship to the environment. Dynamics arise from perceptual and inferential processes operating on this interpretation (including deliberative and reactive processes). We illustrate this perspective through the computational modeling of a naturalistic emotional situation.

Jeremy Bailenson: “Transformed Social Interaction in Virtual Reality”

Over time, our mode of remote communication has evolved from written letters to telephones, email, internet chat rooms, and
videoconferences. Similarly, collaborative virtual environments (CVEs) promise to further change the nature of remote interaction. CVEs are
systems which track verbal and nonverbal signals of multiple interactants and render those signals onto avatars: three-dimensional
digital representations of people in a shared digital space. In this talk, I describe a series of projects that explore the manners in which CVEs qualitatively change the nature of remote communication. Unlike telephone conversations and videoconferences, interactants in CVEs have the ability to systematically filter the physical appearance and behavioral actions of their avatars in the eyes of their conversational partners, amplifying or suppressing features and nonverbal signals in real-time for strategic purposes. These transformations have a drastic impact on interactants’ persuasive and instructional abilities.

Furthermore, using CVEs, behavioral researchers can use this mismatch between performed and perceived behavior as a tool to examine complex patterns of nonverbal behavior with nearly perfect experimental control and great precision. Implications for communications systems and social interaction will be discussed.

Louis-Philippe Morency: “Contextual Recognition of Head Gestures”

Head pose and gesture offer several key conversational grounding cues and are used extensively in face-to-face interaction among people. In this talk, we investigate how dialog context from an embodied conversational agent (ECA) can improve visual recognition of user gestures. We present a recognition framework which (1) extracts contextual features from an ECA’s dialog manager, (2) computes a prediction of head nods and head shakes, and (3) integrates the contextual predictions with the visual observations of a vision-based head gesture recognizer. We found a subset of lexical, punctuation and timing features that are easily available in most ECA architectures and can be used to learn how to predict user feedback. Using a discriminative approach to contextual prediction and multi-modal integration, we were able to improve the performance of head gesture detection even when the topic of the test set was significantly different from that of the training set.
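
The integration step can be sketched as a simple log-linear fusion. The weights, bias, and functional form below are illustrative assumptions, not the discriminative model used in the work:

```python
import math

def fuse(vision_score, context_score, w_vision=1.0, w_context=0.6, bias=-0.5):
    """Combine a visual detector score with a contextual prediction
    (illustrative weights) into a probability that the user is nodding."""
    z = w_vision * vision_score + w_context * context_score + bias
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing

def detect_nod(vision_score, context_score, threshold=0.5):
    return fuse(vision_score, context_score) >= threshold
```

The useful property is that a weak visual observation can still trigger a detection when the dialog context strongly predicts feedback, and vice versa.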

ICT Virtual Reality Therapy featured on nVidia site

ICT research into virtual reality therapy (Skip Rizzo’s group based on work by Jarrell Pair) is featured on the nVidia site.

Read Article.

Erik T. Mueller: “Commonsense Reasoning and the Event Calculus”

Commonsense reasoning is fundamental to high-level cognition. The event calculus, which is based on first-order logic, is an effective method for commonsense reasoning. It supports multiple reasoning types, including default reasoning, projection, and explanation, and reasoning about multiple domains, including action, change, space, and mental states. In this talk I will discuss key issues of commonsense reasoning, the use of the event calculus for commonsense reasoning, and how the event calculus is being applied to problems in high-level cognition, namely narrative comprehension and vision.
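
Projection, one of the reasoning types mentioned, can be given a toy sketch. This is a drastic simplification of the event calculus (a hypothetical domain, with default reasoning, explanation, and continuous change all omitted), showing only how fluents persist by inertia between initiating and terminating events:

```python
# effects of each event on fluents (illustrative domain)
INITIATES = {"wake_up": {"awake"}, "pick_up": {"holding"}}
TERMINATES = {"fall_asleep": {"awake"}, "put_down": {"holding"}}

def holds_at(fluent, t, timeline):
    """A fluent holds at time t if some event before t initiated it and
    no later event before t terminated it (persistence by inertia)."""
    state = False
    for time, event in sorted(timeline):
        if time >= t:
            break  # only events strictly before t matter
        if fluent in INITIATES.get(event, set()):
            state = True
        elif fluent in TERMINATES.get(event, set()):
            state = False
    return state
```

In the full first-order formulation the same idea is expressed with Initiates, Terminates, and HoldsAt predicates plus circumscription, which is what supports default reasoning and explanation as well as projection.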

Doug Lenat: “Creativity vs. Common Sense”

The pursuit of Artificial Intelligence—from robotics to natural language processing to automated learning—has been held back by the “brittleness bottleneck” caused by the need for common sense. This is no less true for the more specialized pursuit of getting software to be creative; indeed, that is exactly what led me from AM (the automated discovery program I wrote in 1976) to Cyc (the common sense knowledge base and reasoner we’ve been building since 1984). Along the way, we’ve had to revise our preconceptions and theories, expand our representation language and arsenal of inference methods, and find approximate yet adequate engineering solutions to problems that philosophers have grappled with for millennia, such as substances vs. individual objects, time, space, causality, belief, social interactions, and dealing with contradictions and context. This talk will cover my 30-year journey to get computers to be creative, and get specific about how ICT might harness and leverage our current ResearchCyc technology. This includes obvious connections, such as with Gordon and Hobbs’ work, and more subtle possible synergies with story direction and retrieval, training, producing appropriate explanations, and in general leading to less “brittle” virtual humans.

Bio: Dr. Douglas Lenat is the President and CEO of Cycorp. Since 1984, he and his team have been constructing, experimenting with, and applying a broad real-world knowledge base and reasoning engine, collectively known as “Cyc”. For ten years he did this as Principal Scientist of the MCC research consortium (the Microelectronics and Computer Technology Corporation), and since 1994 as CEO of Cycorp. He holds BAs in Mathematics and Physics and an MS in Applied Mathematics from the University of Pennsylvania. His 1976 Stanford PhD thesis, AM, demonstrated that certain kinds of creative discoveries in mathematics could be produced by a computer program (a theorem proposer, rather than a theorem prover). That work earned him the biennial IJCAI Computers and Thought Award in 1977. Dr. Lenat has been a professor of computer science at Carnegie Mellon University and at Stanford University. He is one of the founders of AAAI (the American Association for Artificial Intelligence) and a Fellow of AAAI. He has authored hundreds of journal articles (e.g., a four-article series in the AI Journal, “The Nature of Heuristics I–IV”), book chapters (e.g., in Machine Learning and HAL’s Legacy), and books (including Knowledge Based Systems in Artificial Intelligence and Building Large Knowledge-Based Systems). In 1980 he co-founded Teknowledge, Inc. His interest and experience in national security have led him to consult regularly for several U.S. agencies and the White House. He is the only person to have served on the technical advisory boards of both Microsoft and Apple.

ICT’s Full Spectrum Warrior: Virtual Reality Prepares Soldiers for Real War

One blistering afternoon in Iraq, while fighting insurgents in the northern town of Mosul, Sgt. Sinque Swales opened fire with his .50-cal. That was only the second time, he says, that he ever shot an enemy. A human enemy.

“It felt like I was in a big video game. It didn’t even faze me, shooting back. It was just natural instinct. Boom! Boom! Boom! Boom! ” remembers Swales, a fast-talking, deep-voiced, barrel-chested 29-year-old from Chesterfield, Va. He was a combat engineer in Iraq for nearly a year.

Like many soldiers in the 276th Engineer Battalion, whose PlayStations and Xboxes crowded the trailers that served as their barracks, he played games during his downtime. “Halo 2,” the sequel to the best-selling first-person shooter game, was a favorite. So was “Full Spectrum Warrior,” a military-themed title developed with help from the U.S. Army.

Read the Washington Post article.

ICT at Serious Games Summit: a review

Presented on the first day of the Serious Games Summit 2005, this fascinating session explained some of the work carried out by the Institute for Creative Technologies (ICT), a University of Southern California-affiliated research center largely funded by the U.S. Army. ICT is particularly known in the video game community for producing Full Spectrum Warrior, co-created with Pandemic for the Xbox; originally developed for Army training, it was later released as a successful THQ-published commercial game. The institute also conducts a great deal of other bleeding-edge research at the intersection of entertainment and technology for the Army, DARPA, and other government entities here and abroad.

Learn more at Gamasutra.

Virtual Reality Therapy for Combat Stress

A new, high-tech system designed to treat military veterans suffering from Post-Traumatic Stress Disorder—or PTSD—may be familiar to fans of a squad-based combat video game.

Using components from the popular game Full Spectrum Warrior, psychologist Skip Rizzo and his colleagues have fashioned a “virtual” world that simulates the sources of combat stress.

The project is a joint venture between the Institute for Creative Technologies—a cutting-edge research lab at the University of Southern California—and the Office of Naval Research.

View the full story.