The National Covers Paul Debevec and Digital Actors

Paul Debevec was featured in The National’s coverage of digital actors in connection with the Cinematic Innovation Conference in Dubai. The article mentions the ICT Graphics Lab’s work on the Digital Emily and Digital Ira projects, as well as on The Curious Case of Benjamin Button.

“While we might be replacing the mechanism by which the image appears on screen, we are not replacing the process of acting at all. Just as Andy Serkis through his talent was able to create this character of Gollum there was no elimination of the acting process,” said Debevec in the story.

Debevec was in Dubai to explore these issues in a panel with Andy Serkis and Stephen Lang titled “The Future of Acting in an Increasingly CGI-Oriented Industry: Should Actors Be Concerned or Excited?”

The Future of Acting in an Increasingly CGI-Oriented Industry: Should Actors Be Concerned or Excited?

Paul Debevec moderates an hour-long panel with actors Andy Serkis and Stephen Lang: “The Future of Acting in an Increasingly CGI-Oriented Industry: Should Actors Be Concerned or Excited?”

Marketplace Features Skip Rizzo and Virtual Reality Therapy for PTSD

As part of its week-long look at the intersection of video games and mental health, American Public Media’s Marketplace program interviewed Skip Rizzo about his work developing a virtual reality system for delivering prolonged exposure therapy to treat PTSD. Rizzo said that patients wearing a virtual reality headset in therapy are hardly playing a game.

“In a game you have unlimited lives, your mission is to kill things. In this environment, it’s about exposure to the things that you’ve been haunted by,” he said. “This is a tool to extend the skills of a well trained clinician that understands how to deliver prolonged exposure, and over time, people start to go through scenarios that they never thought they could get through, and they start to feel a sense of empowerment. And, you see the reduction in PTSD symptoms in the other treatment, but then once someone has gotten over the hurdle there, all the sudden, you check on them one month, six months later, you see a continued drop in the symptoms, because they’re basically continuing to heal.”

The story noted that Rizzo is using the Oculus Rift headset in his medical virtual reality research. The online version of the story includes a link to a video of Rizzo using the Oculus Rift. Oculus Rift founder Palmer Luckey worked on developing headsets for Rizzo’s system during his time as a lab assistant in ICT’s MxR Lab.

Huffington Post Features Mark Bolas in Video on Touch Screen Tech

Mark Bolas hosted reporter Katie Linendoll on a tour of the MxR Lab to show her the lab’s innovations in touch screen technologies. Watch the video segment to learn about the future of touch screens and ICT’s role in it.

FX Guide Showcases the ICT Graphics Lab’s Digital Faces Work

An fxguide feature story on the art of digital faces highlights the facial capture, lighting, and animation collaborations of the ICT Graphics Lab, including Digital Emily, Digital Ira, and facial lighting techniques developed for Gravity, Ender’s Game, and Oblivion.

Mel Slater: “Immersive Virtual Reality: Changing the Self Not Just the Place”

12:00 – 1:00PM | USC ICT Theater

Abstract: Virtual reality has typically been used to induce an illusory transformation of location. Instead of being in, for example, a lab, the participant has the illusion of being somewhere else, engaging in different activities. In recent years a great deal has been learned in cognitive neuroscience about how the brain represents the body. It has been found, counter to common sense, that although we tend to believe that our bodies are relatively stable, it is surprisingly easy to generate the illusion that the body has radically changed. For example, the rubber hand illusion shows that a rubber arm can be incorporated into what feels as if it is part of the body, the shrinking waist illusion can give the strong sensation of the waist radically reducing (or expanding) in size, and the Pinocchio illusion can give the sensation that the nose has grown very long. It has been shown that an illusory transformation of the whole body is possible, including the substitution of the real body by a virtual body. In this talk I will describe some of these illusions, but concentrate more on their behavioural and attitudinal correlates. It turns out that if you have a different body, then at least temporarily this affects your attitudes and behaviours, opening up new experiences and feelings. In other words, there is a new path for virtual reality – to change the self and not just the place.

Mel Slater is an ICREA Research Professor at the University of Barcelona. He became Professor of Virtual Environments at University College London in 1997. He was a UK EPSRC Senior Research Fellow from 1999 to 2004, and has received substantial funding for virtual reality installations in both London and Barcelona. Twenty-nine of his PhD students have obtained their PhDs since 1989. In 2005 he was awarded the Virtual Reality Career Award by IEEE Virtual Reality ‘In Recognition of Seminal Achievements in Engineering Virtual Reality.’ He leads the eventLab at UB. He is Coordinator of the EU 7th Framework Integrated Project VERE and scientific leader of the Integrated Project BEAMING. He holds a European Research Council grant, TRAVERSE, on the specific topic of virtual embodiment and, more broadly, on opening a new area of virtual reality application based on this theme.

Maria V. Sanchez-Vives, MD, PhD, has been an ICREA Research Professor at IDIBAPS (Institut d’Investigacions Biomèdiques August Pi i Sunyer) since 2008, where she heads the Systems Neuroscience group. She is also Adjunct Professor in the Department of Basic Psychology at the University of Barcelona. She previously held a position as Associate Professor of Physiology and head of lab at the Instituto de Neurociencias de Alicante in Spain (UMH-CSIC). After obtaining her PhD, she was a postdoctoral researcher at Rockefeller University and Yale University. Her independent research on neuroscience and virtual reality has been supported by national and international agencies (the Human Frontier Science Program and the EU), and she has supervised nine defended PhD theses. She has been a partner in five EU grants and is currently the coordinator of the FP7 project CORTICONIC. She is Specialty Chief Editor of Frontiers in Systems Neuroscience. Her interests include information processing in the cerebral cortex, body representation and the use of virtual reality from a neuroscience and medical perspective.

ELITE Lite

Download a PDF overview.

The Emergent Leader Immersive Training Environment (ELITE) Lite provides a laptop training capability to teach interpersonal skills to United States (US) Army junior leaders by presenting real-world instructional scenarios in an engaging, self-reinforcing manner. The purpose of the training experience is to provide junior leaders with an opportunity to learn, practice and assess interpersonal communication skills for use in basic counseling. The ELITE content incorporates Army-approved leadership doctrine (FM 6-22, Appendix B), evidence-based instructional design methodologies and ICT research technologies, such as virtual humans and intelligent tutoring, to create a challenging yet engaging training experience.

The ELITE Lite software has five scenarios. Users may choose the role of an Officer or NCO when playing through each scenario. All of the scenarios are based on real-world counseling issues such as financial troubles, post-deployment readjustment and alcohol-related performance issues. These scenarios offer students a chance to practice the interpersonal communication skills they learn during the ELITE Lite instruction. The package includes three phases: Up-front Instruction, Practice Environment and After Action Review (AAR).

The total training time for ELITE Lite is anywhere from one to two hours depending on a student’s proficiency. Time will vary depending on student experience level, performance and engagement. Some students may take time to review missed concepts based on how well they respond to quiz questions. Some students may choose to watch all suggested training vignettes and comparisons. Some students may thoroughly engage in the AAR after an interaction in the practice environment. Some students may choose to practice all three scenarios for their given rank.

In the contemporary US Army, leaders must be prepared not only for the tactical side of leadership, but also for its personal, soft-skills side. Effective communication between leaders and their subordinates is paramount to maintaining the combat effectiveness of the force, and ELITE Lite offers young US Army leaders a unique opportunity to learn and practice interpersonal skills so they are better prepared for the interactions they will encounter.

Paul Debevec Gives Keynote at the Smithsonian’s X 3D Conference

Paul Debevec will deliver the keynote talk at the Smithsonian’s X 3D Conference. He will present from 10:30 – 11:15.

Paul Debevec: “Advances in 3D Scanning, Surface Reflectance Measurement, Photoreal Digital Humans, and Glasses-free 3D displays”

Abstract: USC ICT’s Graphics Laboratory performs basic research in the digitization, animation, and display of people, objects, and environments. In this presentation, I will share the lab’s recent results in 3D scanning, appearance modeling, realistic virtual human characters, and hologram-like 3D displays. The main projects I will discuss are:

Digital Ira: Creating a Real-Time Photoreal Digital Actor
SIGGRAPH 2013 Realtime Live
http://gl.ict.usc.edu/Research/DigitalIra/

Acquiring Reflectance and Shape from Continuous Spherical Harmonic Illumination
SIGGRAPH 2013 Technical Papers
http://gl.ict.usc.edu/Research/SpecScanning/

An Autostereoscopic Projector Array Optimized for 3D Facial Display
SIGGRAPH 2013 Emerging Technologies
http://gl.ict.usc.edu/Research/PicoArray/

BBC Features Skip Rizzo and Virtual Reality Therapy for Post-Traumatic Stress

The BBC radio program, Digital Humans, featured Skip Rizzo and ICT’s virtual reality therapy for treating post-traumatic stress in a segment devoted to how technology improves lives. The episode also included a veteran who believes he was helped by the treatment. “This may encourage people to seek out this type of treatment, particularly when you talk about a generation of Soldiers that may have grown up digital,” said Rizzo in the story. “That may be an option that appeals to people and gets them to seek treatment they can benefit from.”

Rizzo’s interview appears approximately 19 minutes into the segment.

USC Standard Patient

Download a PDF overview.
The New Virtual Standardized Patient

What is USC Standard Patient?

USC Standard Patient, under development, is a free online community where medical students, residents and physicians can improve their interview and diagnostic skills with a Virtual Standardized Patient (VSP). For the learner, the virtual hospital features a variety of patients, useful performance assessments and guidance for improvement. For medical educators, the hospital features USC Standard Patient Studio, an online collaboration and authoring community.

The goals of USC Standard Patient:

  • Create the most functional and natural virtual standardized patients in history
  • Create a critical mass of conversational patients as a free national resource
  • Allow for effective natural language interaction with learners
  • Improve medical interviewing & diagnostic skills
  • Create VSPs that can be authored rapidly by non-computer scientists
  • Vastly improve assessment and performance feedback
  • Free up resources to more effectively utilize human standardized patients

Conversational Virtual Standardized Patients

Our Virtual Standardized Patients (VSPs) are based on SimCoach virtual human technology. VSPs converse in natural English, understand what you tell them and can express themselves non-verbally through facial expressions and gestures. The system can create patients with a range of personalities including average, sullen, loquacious, uncertain, reserved and neurotic.

Educational Approach

The assessment system is revolutionary and provides specific, useful feedback to the learner. The ability to give the student a detailed, complete and objective assessment, and to guide them with specifics on how they can perform better, is a major advantage of our approach over human standardized patients. This emerging capability is unprecedented and has the potential to rapidly develop new clinical interviewers.

Friends You Haven’t Met Yet: A Documentary and Research Project

“Friends You Haven’t Met Yet” is a documentary short film that chronicles encounters between extremely prolific bloggers and a computer scientist who uses their personal narratives for research. It explores issues related to public sharing of personal stories, the ethical obligations of researchers who use web data, and the changing nature of online privacy.

Background
The Narrative group at the University of Southern California’s Institute for Creative Technologies is engaged in an innovative project to gather and analyze millions of personal stories that people post to their public weblogs.

The group built software that has analyzed every English-language weblog post looking for personal stories – the nonfiction narrations of people’s everyday life experiences, no matter how mundane or extraordinary. Analyzing over a billion posts, they have pulled out 30 million personal stories thus far.
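The team’s actual classifier is not described here, but as a rough illustration of this kind of filtering pass, the sketch below streams posts through a story detector. The function names and the keyword-based placeholder test are hypothetical stand-ins, not ICT’s implementation.

from typing import Iterable, Iterator

def is_personal_story(text: str) -> bool:
    """Stand-in for a trained classifier that labels a post as a
    nonfiction, first-person narration of everyday experience."""
    # A real system would apply a statistical text classifier here;
    # this placeholder merely checks for simple first-person cues.
    lowered = text.lower()
    return any(cue in lowered for cue in ("i ", "my ", "we ", "me "))

def extract_stories(posts: Iterable[str]) -> Iterator[str]:
    """Stream English-language weblog posts, keeping only personal stories."""
    for post in posts:
        if is_personal_story(post):
            yield post

if __name__ == "__main__":
    sample = ["Yesterday I missed my bus and walked home in the rain.",
              "Top 10 productivity tips for bloggers."]
    print(list(extract_stories(sample)))  # keeps only the first post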

These stories have been used for a variety of research efforts, including:

  • A study of gender differences in the experience of having a stroke, evidenced by hundreds of stroke stories
  • A study of the ways that people tell value-laden stories about emotional experiences, which were subsequently used as stimuli in neuroimaging (fMRI) studies of narrative impact
  • The use of millions of stories of everyday life as a knowledge base for artificial intelligence, enabling machines to reason about causality
  • Interactive storytelling systems where humans and computers collaboratively author new fictional stories, where the computer borrows from millions of nonfiction stories
  • A methodology for training Army soldiers by telling them pertinent stories from civilians who engage in analogous tasks and skills

The team identified hundreds of people who post personal stories to their weblogs nearly every day, and who have been doing so for many years. They had a number of questions for these prolific bloggers:

  • What motivates these people to post so frequently and publicly about their personal life?
  • To what degree do these people embellish their stories to make them more interesting than reality?
  • What expectations do these authors have about their readers, and what are the ethical implications for researchers like us who analyze their posts?

To answer these questions, they contacted many of these bloggers directly and set up face-to-face interviews at their homes. This sort of ethnography is common in the social sciences, but very unusual for computer scientists. USC Ph.D. student Christopher Wienberg conducted these interviews at locations all around California, in both urban and rural settings.

They took the additional unusual step of partnering with a documentary film crew, who accompanied Wienberg on his journeys. Psychic Bunny, an LA-based media company, filmed each of the interviews, and produced this documentary short film. It has been submitted to numerous film festivals and will be shown at academic conferences.

This film was produced as part of the research project “Authoring Realistic Learning Environments with Stories (ARLES),” led by Andrew S. Gordon at the University of Southern California’s Institute for Creative Technologies, funded by the Army Research Office.

Researchers
Andrew S. Gordon, Ph.D.
Research Associate Professor of Computer Science
Institute for Creative Technologies and USC Viterbi School of Engineering
University of Southern California
gordon@ict.usc.edu

Christopher Wienberg
PhD Student, Computer Science
Institute for Creative Technologies and USC Viterbi School of Engineering
University of Southern California
cwienberg@ict.usc.edu

Documentary filmmakers
Jesse Vigil, writer and director
jesse@psychicbunny.com
Asa Shumskas Tait, executive producer
asa@psychicbunny.com
Psychic Bunny, Inc.

Ari Shapiro: “A Practical and Configurable Lip Animation Method for Games”

Cristina Conati: “Learner Modeling Beyond Problem Solving Activities”

Abstract: The field of Intelligent Tutoring Systems (ITS) has successfully delivered techniques and systems to provide student-adaptive support for problem solving in a variety of domains. There are, however, other educational activities that can help learners acquire the target skills and abilities at different stages of learning, such as exploring interactive simulations, playing educational games and practicing meta-cognitive skills relevant for learning.

As with problem solving, learners can benefit from receiving individualized pedagogical support during these activities. Providing this support, however, raises unique challenges, because it requires modeling and responding to student behaviors and skills that are often not as structured and well-defined as those involved in traditional problem solving. This talk illustrates how we tackle these challenges with learner models that leverage data-mining techniques as well as advanced input sources such as eye-tracking data.

Bio: Cristina Conati is an Associate Professor of Computer Science at the University of British Columbia in Vancouver, Canada. She and her students conduct research in affective user modeling, mixed-initiative interface customization, tailored support for exploration and self-explanation, and modeling of meta-cognitive skills. Cristina recently served as a co-editor for the book “Eye Gaze in Intelligent User Interfaces” and last summer, attended the Microsoft Summer Institute on “Crowdsourcing Personalized On-Line Education.” Cristina received her PhD in Intelligent Systems from the University of Pittsburgh in 1999 and is currently on sabbatical. You can find out more about her research at her website.

Film Screening and Q&A

WHAT: A screening and Q&A with the creators of Friends You Haven’t Met Yet, a documentary short film exploring issues of ethics and online privacy by introducing bloggers to the scientists who – unbeknownst to the bloggers – have been analyzing their posts as part of research projects looking at how people tell stories.

WHO: ICT researchers Andrew Gordon and Christopher Wienberg and the film’s creators, Jesse Vigil (director) and Asa Shumskas-Tait (producer)

WHEN: 2:00 pm, Wednesday, November 6

WHERE: Theater – USC Institute for Creative Technologies
12015 Waterfront Drive
Playa Vista, CA 90094-2536

MORE: Millions of people post their thoughts to the web every day, creating new opportunities for researchers to study language, opinion, and people’s lives on a massive scale by analyzing social media data. The use of this data in scientific research raises new questions about the ethical obligations that researchers have to their research subjects, challenging traditional concerns about informed consent, identifiable private information, and minimal risk.

In the Spring of 2013, USC Ph.D. student Christopher Wienberg explored these issues through face-to-face interviews with extremely prolific bloggers – people who post enormous amounts of information about their lives to their public weblogs – and whose blogs were being analyzed by Wienberg and his advisor Andrew Gordon. This documentary chronicles Wienberg’s encounters with a diverse group of authors, each with a unique perspective on the changing nature of privacy on the web.

About Andrew Gordon
Andrew S. Gordon is a Research Associate Professor of Computer Science at the University of Southern California’s Institute for Creative Technologies and the USC Viterbi School of Engineering. He leads interdisciplinary research on storytelling and the human mind, exploring how people experience, interpret, and narrate the events in their lives. A central focus of his research is on the abstract knowledge that enables interpretation of experiences, including the expectations that people have of everyday activity contexts and the commonsense theories that people have of their own psychology. In support of his research goals, he has pioneered methods for collecting and analyzing personal storytelling on a massive scale, identifying tens of millions of narratives posted to Internet weblogs. He has used this collection in a variety of innovative applications using novel information retrieval and natural language processing techniques, particularly in the areas of commonsense causal reasoning, story-based learning environments, and comparative analyses of health-related personal experiences. He is the author of the 2004 book, Strategy Representation: An Analysis of Planning Knowledge. He received his Ph.D. in 1999 from Northwestern University.
http://people.ict.usc.edu/~gordon/

About the USC Institute for Creative Technologies
At the University of Southern California Institute for Creative Technologies (ICT), leaders in artificial intelligence, graphics, virtual reality and narrative advance low-cost immersive techniques and technologies to solve problems facing service members, students and society. Established in 1999, ICT is a DoD-sponsored University Affiliated Research Center (UARC) working in collaboration with the U.S. Army Research Laboratory. ICT brings film and game industry artists together with computer and social scientists to study and develop immersive media for military training, health therapies, education and more.
http://ict.usc.edu/

About Psychic Bunny
In 2005, four former college buddies set out on an Odyssean quest to create the world’s finest cream soda.  They failed, and instead created a very fine company called Psychic Bunny.  There, they combined their unique talents in writing, film production, editing, interactive design, motion graphics, visual effects, and sorcery to create a hybrid production studio capable of taking on all kinds of work.

Eight years later, that studio has gone on to complete projects as varied as feature films, television commercials, video games, training systems, simulations, corporate promos, infographics, card games, and more. Clients run the gamut from entertainment to tech to government, and each brings to us unique challenges that we approach from a ground-up, big-picture perspective.
http://www.psychicbunny.com/

IEEE Spectrum Features ICT

A story about the current state of military simulation, tied to the opening of the Ender’s Game movie, featured ICT simulation research and researchers. The story described the institute as a research and development organization that “makes some of the most convincing military simulations around” and noted that ICT’s budget is “evidence that simulation technology can provide a lot of bang for the buck.”

The story mentioned ICT’s use of video games and virtual reality for training as well as the institute’s work in developing virtual humans.

Randall Hill Jr., ICT’s executive director, said that in the past the military’s view on technology would “often focus on how you buy equipment and treat the soldier like a Christmas tree: hang armor on him and gadgets and not look at them holistically.” Now the Army uses technology to empower soldiers more as humans, not mere weapons carriers. “We’re trying to look at soldiers that way,” said Hill.

Virtual Agent Chat Features ICT’s Virtual Human Toolkit

A post on this virtual agent-focused blog showcased ICT’s virtual human toolkit. The story includes an interview with ICT’s Arno Hartholt discussing the capabilities and purpose of the toolkit.

“The toolkit contains all the components that a builder of virtual humans might need to construct their own fully capable character. Since users of the toolkit are able to leverage these pre-built pieces, they can focus on the personality and functions of the virtual human, instead of sweating the technical details,” states the post. “Hartholt explained the importance of including people from a broad range of disciplines and interests in the development of virtual humans. The toolkit can enable psychologists, educators, medical professionals and others to develop and experiment with virtual humans,” the article said.

USS Benfold Sailors Visit ICT

A group of Navy sailors from the USS Benfold toured the USC Institute for Creative Technologies to see some of the latest examples of virtual reality technologies that could one day be installed on Navy ships. The visit was part of ICT’s participation in Project ATHENA, a Benfold-based initiative to bring innovative technology solutions to improve the Navy.

“These guys have great energy and ideas,” said Todd Richmond, ICT’s director of advanced prototype development and transition. “We hope to serve as a lab that can collaborate with them to make some of those ideas a reality.”

Work along those lines has already begun. When a Navy Ensign proposed developing a system to improve the way Navy ships identify where sounds are coming from in low-visibility conditions, ICT’s Mixed Reality Lab (MxR) developed a working prototype using a Microsoft Kinect and a smartphone.

The idea took the top prize in an ATHENA event earlier this year and has led to continued conversations between ICT researchers and the Benfold sailors, who during their visit got a firsthand look at some of the latest technologies being worked on at ICT. These included the ICT Graphics Lab’s advances in creating realistic digital doubles and the MxR Lab’s breakthroughs in data display techniques as well as virtual and augmented reality. The Office of Naval Research SwampWorks is currently funding the “BlueShark” project at ICT, which looks at near-term and longer-term technologies and how the Navy will deal with these virtual and augmented spaces in the future.

“Sailors have to interact with each other and all kinds of information and situations, often in small physical spaces,” said Richmond. “When you fold virtual spaces into the mix, you get limitless possibilities, but also numerous challenges. It is valuable to hear firsthand from them what their current problems are and then work directly with them to try to solve them now and in the future.”

Richmond and ICT’s Evan Suma, who is also a research assistant professor of computer science at USC, will continue the conversation when they attend an ATHENA idea showcase in San Diego later this week.

Ravi Ramamoorthi: “Sampling and Reconstruction of High-Dimensional Visual Appearance”

Abstract: Many problems in computer graphics and computer vision involve high-dimensional 3D-8D visual datasets. Real-time image synthesis with changing lighting and view is often accomplished by pre-computing the 6D light transport function (2 dimensions each for spatial position, incident lighting and viewing direction). Realistic image synthesis also often involves acquisition of appearance data from real-world objects; a BRDF (Bi-Directional Reflection Distribution Function) that measures the scattering of light at a single surface location is 4D and spatial variation and subsurface scattering involve 6D-8D functions. Volumetric appearance datasets are increasingly used for realism and can involve billions of voxels in three dimensions. In computer vision, problems like lighting insensitive facial recognition similarly involve understanding the space of appearance variation across lighting and view.

Since hundreds of samples may be required in each dimension, and the total size is exponential in the dimensionality, brute force acquisition or precomputation is often not even feasible. In this talk, we describe a signal-processing approach that exploits the coherence, sparsity and inherent low-dimensionality of the visual data, to derive novel efficient sampling and reconstruction algorithms. We describe a variety of new computational methods and applications, from affine wavelet transforms for real-time rendering with area lights, to space-time and space-angle frequency analysis for motion blur and global illumination, to compressive light transport acquisition. In computer vision, we introduce a new framework of differential photometric reconstruction to tame the complexity of real-world reflectance functions. The results point toward a unified sampling theory applicable to many areas of signal processing, computer graphics and computer vision.
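As background for the dimensionality counts above, the standard reflection equation (a textbook formulation, not specific to this talk) makes the bookkeeping concrete:

L_o(x, \omega_o) = \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(\omega_i) \, (\omega_i \cdot n) \, d\omega_i

At a fixed surface point x, the BRDF f_r varies over two angular dimensions each for the incident direction \omega_i and the outgoing direction \omega_o; this is the 4D function measured at a single surface location. Letting x range over the surface adds two spatial dimensions, giving the 6D spatially varying case, and generalizations such as subsurface scattering push this to 8D.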

Bio: Ravi Ramamoorthi has been an Associate Professor of Electrical Engineering and Computer Science at the University of California, Berkeley, since January 2009. Earlier, he received his BS and MS degrees from Caltech and his PhD from Stanford University in 2002, after which he joined the faculty of the computer science department at Columbia University. He is interested in many areas of computer graphics, computer vision and signal processing, having published more than 90 papers, including 45 in ACM SIGGRAPH/Transactions on Graphics. He has received a number of research awards, including the 2007 ACM SIGGRAPH Significant New Researcher Award for his work in computer graphics and the 2008 White House US Presidential Early Career Award for Scientists and Engineers for his work on physics-based computer vision. He was also awarded the NSF CAREER Award (2005), Sloan Fellowship (2005), ONR Young Investigator Award (2007) and Okawa Foundation Research Grant (2011).
Many of his contributions, such as spherical harmonic lighting and importance sampling, are widely used in industry. He has also been a leader in education, teaching the first open online course in computer graphics, with total enrollments to date of 50,000 students, and he has advised more than 20 postdoctoral, Ph.D. and MS students. Learn more at cs.berkeley.edu/~ravir.

ICT Hosts Visitors from NATO and ONR

Rene LaRose, director of the NATO Science and Coordination Office, visited ICT along with John Tangney from the Office of Naval Research and Paul Chatelier from the Naval Postgraduate School. The purpose of the visit was to learn about the institute’s modeling, simulation and virtual reality work. The tour included demonstrations of ICT’s virtual human prototypes INOTS, SimCoach, SimSensei and Gunslinger. The guests also visited the ICT Graphics Lab to see the group’s advances in creating digital doubles and holographic projections.

Jonathan Gratch at ARL-HRED Fellows Meeting

ICT’s Associate Director for Virtual Humans Research, Jonathan Gratch, will give a talk as part of the ARL-HRED Fellows meeting.

Paul Debevec: “Achieving Photoreal Digital Actors in Film and in Real-Time”

Abstract: Somewhere between “Final Fantasy” and “The Curious Case of Benjamin Button”, digital actors crossed the “Uncanny Valley” from looking strangely synthetic to believably real. This talk describes how the Light Stage scanning systems and HDRI lighting techniques developed at the USC Institute for Creative Technologies have helped create digital actors in a range of recent movies and research projects. In particular, the talk describes how high-resolution face scanning, advanced character rigging, and performance-driven facial animation were combined to create 2008’s “Digital Emily”, a collaboration with Image Metrics (now Faceware) yielding one of the first photoreal digital actors, and 2013’s “Digital Ira”, a collaboration with Activision Inc., yielding the most realistic real-time digital actor to date. The talk includes recent developments in HDRI lighting, polarization difference imaging, reflectance measurement, and 3D object scanning, and concludes with advances in autostereoscopic 3D displays to enable 3D teleconferencing and holographic characters.

Paul Debevec: “Advances in Achieving Photoreal Digital Actors”

Abstract: Somewhere between “Final Fantasy” in 2001 and “The Curious Case of Benjamin Button” in 2008, digital actors crossed the “Uncanny Valley” from looking strangely synthetic to believably real. This talk describes how the Light Stage scanning systems and HDRI lighting techniques developed at the USC Institute for Creative Technologies have helped create digital actors in films such as Spider-Man 2, Superman Returns, The Curious Case of Benjamin Button, Avatar, Tron: Legacy, and The Avengers. For in-depth examples, the talk describes how high-resolution face scanning, advanced character rigging, and performance-driven facial animation were combined to create “Digital Emily”, a collaboration with Image Metrics (now Faceware) yielding one of the first photoreal digital actors, and “Digital Ira”, a collaboration with Activision Inc., yielding the most realistic real-time digital actor to date. The talk includes recent developments in HDRI lighting, polarization difference imaging, and skin reflectance measurement, and concludes with advances in autostereoscopic 3D displays that are enabling 3D teleconferencing techniques able to transmit a person’s face in 3D so they can make eye contact with remote collaborators.

Bill Swartout at NATO Workshop

ICT’s Director of Technology, William Swartout, will be speaking at a NATO workshop. He was invited by John Tangney.

Ari Shapiro and ICT’s Smartbody Animation System Featured in Fxguide

An interview with Hao Li, a USC computer science professor who specializes in 3-D acquisition and animation, noted that Li is collaborating with ICT’s Ari Shapiro to use ICT’s Smartbody system to create animated versions of people from Li’s 3-D scans.

“The tool automatically rigs your 3D scan,” said Li. “The possibilities here are infinite, so you can really think of yourself in a game or a virtual world for any possible purposes.”

ICT Mentioned in MIT Technology Review

A story in MIT’s Technology Review notes that Oculus Rift developer Palmer Luckey worked as a head-mounted display designer at ICT. Luckey was a lab assistant in ICT’s MxR Lab when he launched the Kickstarter campaign to develop the Oculus HMD.

Paul Debevec Quoted in the Hollywood Reporter

A story about the visual effects in the film Gravity quoted ICT’s Paul Debevec talking about the future of virtual reality film making.

“This is a step toward shooting movies on the ‘holodeck,’ ” said Paul Debevec, associate director of graphics research at ICT. “Eventually we are going to have a great virtual environment that is three-dimensional and believable. Instead of putting actors on a green screen, you’ll film them on an enormous, enveloping 3D display, and the director and cinematographer will have complete control.”

The article also noted that Debevec served as a consultant on the film.

Paul Debevec on Think & Drink Panel

Join the poolhouse and The Mill on October 9 at The Mill Los Angeles for a great discussion on technology and creativity.

Cocktails at 6:30

Panel discussion with Q&A at 7 pm

Space is limited, so please RSVP to poolhouse@themill.com.


Discovery Channel Features ICT’s Light Stage Technology

A Discovery Channel Future Tech segment showcased ICT’s Paul Debevec and his Light Stage technology. The episode, which aired as part of the show Daily Planet, shows Debevec advancing the capabilities of Light Stage X by digitally relighting a correspondent to appear as if he is speaking under two very different lighting conditions.

LA Times Features Veteran’s Treatment with Virtual Reality Exposure Therapy

A front-page Los Angeles Times story chronicled a veteran being treated with the virtual reality exposure therapy created by ICT’s Skip Rizzo. The story notes that the therapy was developed at ICT and is being studied at dozens of sites around the country, including the Long Beach VA. The story also includes a video of Sgt. Johnathan Warren telling his story and another clip showing the latest version of the virtual reality therapy.

ICT GFY13 Achievements Report Now Online

Click here to download a PDF of ICT achievement highlights from Government Fiscal Year 2013.

Louis-Philippe Morency Presents “Ellie” at Health 2.0 Conference

ICT’s Louis-Philippe Morency demonstrated SimSensei, ICT’s virtual human platform for improving patient diagnosis, in the Frontiers of Health panel at Health 2.0 in Santa Clara. Health 2.0 promotes, showcases, and catalyzes new technologies in health care through a worldwide series of conferences, code-a-thons, prize challenges, and more.

Louis-Philippe Morency: “SimSensei: Virtual Human Platform for Healthcare”

SimSensei is a virtual human platform specifically designed for healthcare support, based on more than 10 years of virtual human research and development expertise at ICT. The platform enables an engaging face-to-face interaction in which the virtual human automatically reacts to the perceived user state and intent through its own speech and gestures.

Event Update: 2013 TAB Review Cancelled for Oct 9-10

ICT’s 2013 TAB review, scheduled for Oct 9 – 10, has been cancelled. Please visit the registration page for more information.

Todd Richmond Delivers Keynote at SciTechLA Research 2.0 Forum

Todd Richmond will give the keynote talk on Saturday morning. Purchase tickets here.

Daily Beast Features ICT’s Virtual Reality Exposure Therapy for Post-Traumatic Stress

An article in the Daily Beast featured ICT’s virtual reality exposure therapy, calling it “one of the most promising therapeutic tools for treating veterans’ PTSD” and noting that it is the most widely used virtual reality exposure therapy treatment program in America. The story also cited results of several studies noting the effectiveness of virtual reality exposure therapy for treating post-traumatic stress.

“If we ask what effects the military’s contribution to videogames will have on society at large, it is increasingly clear that one of the areas of greatest influence will be the field of mental health,” said Corey Mead, author of the recently published book War Play: Video Games and the Future of Armed Conflict, which includes the work of Skip Rizzo and ICT’s Medical Virtual Reality Lab.

Professor Prashant Doshi: “Compressing Mental Model Spaces and Modeling Human”

Abstract: A wide swath of multidisciplinary areas in autonomous multiagent systems benefits from modeling interacting agents. However, the space of candidate mental models is generally very large and grows disproportionately as the interaction progresses. Sometimes, application constraints and data may limit this space. In this talk, I will present principled and domain-independent ways of compressing the model spaces. The general approach is to partition a space by forming equivalence classes of models and retaining a representative from each class. These compression methods include exact ones, which are lossless for the modeling agent but computationally intensive, and approximate ones, which are computationally more efficient but lossy. The latter may violate an important epistemic condition on the model space, which is seldom considered in the plan recognition literature. While these methods are broadly useful, I will focus on the impact of the compression on the scalability and quality of planning in large and partially observable multiagent settings within the framework of interactive POMDPs.

Human strategic thinking is hobbled by low levels of recursive reasoning (reasoning of the type “I think that you think that I think…”) in general contexts. Recent studies demonstrate that in simple and competitive contexts, strategic reasoning can go deeper than previously thought, even up to three levels. I will present a computational model of the behavioral data obtained from the studies, using the interactive POMDP appropriately simplified and augmented with well-known models simulating human learning and decision making. These studies and the psychologically plausible process models provide insights into the specific ways by which humans attribute strategic intent. The models could be viable ways of computationally modeling strategic behavior in mixed human-agent settings.
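To make the equivalence-class idea concrete, here is a minimal, domain-independent sketch (an illustration only, not the interactive-POMDP algorithms from the talk): candidate models are grouped by a behavior signature, and one representative per class is retained, which is lossless as long as equivalent models truly induce identical behavior.

from collections import defaultdict
from typing import Callable, Hashable, List, Tuple

def compress_models(
    models: List[Hashable],
    behavior_signature: Callable[[Hashable], Tuple],
) -> List[Hashable]:
    """Keep one representative model per behavioral-equivalence class."""
    classes = defaultdict(list)
    for model in models:
        # Models mapping to the same (hashable) signature -- e.g., the same
        # policy over observation histories -- are treated as equivalent.
        classes[behavior_signature(model)].append(model)
    return [members[0] for members in classes.values()]

if __name__ == "__main__":
    # Toy example: integer "models" whose behavior depends only on parity.
    models = list(range(10))
    reps = compress_models(models, behavior_signature=lambda m: (m % 2,))
    print(reps)  # [0, 1]: one representative per class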

Bio: Prashant Doshi is an associate professor of computer science at the University of Georgia (UGA) and directs the THINC lab. He is also a faculty member of the Institute for AI at UGA. He received his doctorate from the University of Illinois at Chicago in 2005. His research interests lie in decision making and specifically in decision making under uncertainty in multiagent settings. He is also interested in studying and computationally modeling strategic decision making by humans. Prashant has co-conducted successful tutorials on decision making in multiagent settings for the last six years at the AAMAS conferences, co-organized the past three MSDM workshops and two workshops on interactive decision and game theory. He received UGA’s Creative Research Medal in 2011 and the 2009 NSF CAREER award. Research in this talk is supported by grants from NSF, AFOSR and ONR. More details about his research are available at http://thinc.cs.uga.edu.

Boston Globe Features ICT’s Virtual Reality Therapy

An article in the Boston Globe covered the use and study of ICT’s virtual reality therapy for treating post-traumatic stress. The story noted the system is being used and studied at a number of sites, including the Home Base Program, a Boston clinic for veterans and their families run by the Red Sox Foundation and Massachusetts General Hospital.

“With virtual reality exposure therapy — seeing the desert sky, hearing familiar sounds, and even feeling the rumble of a blast in his seat — it is as if the song comes on and, all of a sudden, you remember every single measure,” said an Army Sergeant treated on the system in Boston. “It all comes right back up.”

“The growing interest in virtual reality therapy reflects a quantum change happening in military thinking,” said Albert “Skip” Rizzo, associate director of the Institute for Creative Technologies, which is now developing a more advanced system depicting the two wars. “There is a shift away from the idea that a person who gets treatment is weak and toward the realization that treatment is a means for service members to keep their minds — not just their bodies — fit,” he said.

Todd Richmond: “Digital Technology for the Analog World”

Wall Street Journal Features ICT’s Holographic 3-D Projection Technology

A Wall Street Journal video features the glasses-free, perspective-correct 3-D projection work of the ICT Graphics Lab and calls it “one of the most advanced 3-D projectors in the world.”

Paul Debevec is interviewed in the story, explaining how the technology works and what it can be used for.

Belinda Lange Collaborates on Video Game for Young People with Cerebral Palsy

Belinda Lange, Ph.D., a physical therapist who leads ICT’s Game-Based Rehabilitation efforts, collaborated with UCLA researchers to provide children with cerebral palsy an opportunity to play video games. The collaboration allowed children with physical and visual impairments to control game play using ICT’s FAAST system, which drives game play with gestures rather than keyboards and joysticks. Watch this Cerebral Palsy Network video of children enjoying the games at their CP Family Forum, an event held at UCLA.


Talk at ICT: Douglas Van Praet – The Cognitive Science of Effective Brand Narratives

Please join us next Thursday, September 12, at 2pm in the large 2nd floor amphitheater for a special talk by a luminary of the Los Angeles marketing industry, Douglas Van Praet.

Title: The Cognitive Science of Effective Brand Narratives
When: Thursday September 12 at 2pm
Where: ICT large 2nd floor theater

Abstract:
Storytelling has been the way in which humans have shared information and culture since the dawn of our species. For hundreds of thousands of years we sat around campfires as hunter-gatherers telling and sharing stories. But applying storytelling to brands and markets is the latest frontier in effective communication in today’s emerging new media environments.
Here, author and brand strategist Douglas Van Praet will take you through a step-by-step process to find and craft a brand message that will move people at the deepest, most fundamental levels. Drawing upon the principles of his recent book, Unconscious Branding, and the latest research in behavioral science, you will learn how some of the most successful marketing campaigns in history have succeeded by leveraging human, not just consumer, insights.

Speaker bio:
Douglas Van Praet is the author of Unconscious Branding: How Neuroscience Can Empower (and Inspire) Marketing. He is also a brand strategy consultant whose approach to marketing draws from unconscious behaviorism and applies neurobiology, evolutionary psychology and behavioral economics to business problems. He has helped some of the world’s most iconic brands produce effective, award-winning campaigns and product launches.
Most recently, Van Praet was executive vice president at Deutsch LA, where he served as Group Planning Director for the highly acclaimed Volkswagen account and helped VW attain its highest US market share in three decades. These efforts included the wildly successful and beloved “The Force/Mini‐Darth Vader” commercial, the most shared ad in Super Bowl history.
His work is featured in international media such as Fast Company, Psychology Today, Business Insider, UK Telegraph, BBC News and Contagious Magazine.

RSVP: Non-ICT staff interested in attending should email belman@ict.usc.edu.

Virtual Child Witness

The primary objective of this research project is to establish the feasibility and efficacy of using an interactive learning environment to train and evaluate investigative child interviewing skills. In the courts, children are portrayed by defense attorneys and expert witnesses as being highly suggestible. As a result, child witnesses are often not believed. On the other hand, suggestive interviewing methods have been shown to undermine young children’s accuracy, and can lead to false convictions.

The NICHD investigative interview protocol provides interviewers with an effective, standardized interview method to obtain accurate information from children. Unfortunately, the protocol is difficult to teach. Training programs reliably increase interviewer knowledge but often fail to affect interviewer behavior (Powell, 2002). The most effective approach is practice with feedback (Lamb, Sternberg, Orbach, Esplin, & Mitchell, 2002). However, simulated interviews suffer from adult simulators’ difficulties in accurately portraying child witnesses’ limitations (Powell, Fisher, & Hughes-Scholes, 2008). For practical and ethical reasons, children cannot be utilized for mock interviews about child abuse.

The Virtual Child Witness project models the rapport building phase of an investigative interview, in which the interviewer asks the child questions about innocuous events. Users select from a menu of questions that vary in their open-endedness and therefore in their productivity in eliciting narrative reports.
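As a toy illustration of such a menu, the structure could be as simple as the following; the category labels loosely follow NICHD-style question typologies, but the specific entries are hypothetical, not the project’s actual content.

# Hypothetical question menu graded by open-endedness; entries are
# illustrative and not taken from the Virtual Child Witness project.
QUESTION_MENU = [
    {"type": "open-ended invitation", "text": "Tell me about your day.", "productivity": "high"},
    {"type": "cued invitation", "text": "You mentioned recess. Tell me more about that.", "productivity": "high"},
    {"type": "directive", "text": "What game did you play?", "productivity": "medium"},
    {"type": "option-posing", "text": "Did you play inside or outside?", "productivity": "low"},
]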

Initially, the project was intended to demonstrate the varying effectiveness of different question types at inducing detailed accounts of information regarding an innocuous event experienced by the virtual child; what is known as the narrative practice rapport phase of an investigative interview. Pilot data has supported the program’s efficacy in assessing the user’s interviewing skills, and in serving as an engaging device for training. Further refinements will lead to a program that can be used, by the multitude of different agencies tasked with investigative interviewing, to train all necessary components of what makes up an effective investigative interview, components adapted from the National Institute of Child Health & Human Development (NICHD) investigative interview protocol (Lamb et al., 2008).

The creation of a virtual child can provide a cost-effective and standardized process for teaching good interviewing skills. It can be disseminated through the web, providing a distance learning alternative to in-person training.

Committee Participation: National Academies – The Context of Military Environments: Social and Organizational Factors

As the U.S. military continues to transform itself to accomplish its full spectrum of missions in the 21st century, an efficient and effective research program to inform U.S. military personnel policies and practices is essential. Highly effective leaders and teams must be adaptable and flexible, because they operate in varied, dynamic, and changing environments. Therefore, any program to improve leadership and performance must consider the social and organizational factors that present external influences on the behavior of individuals operating within the context of military environments. This study, performed by an ad-hoc committee that includes ICT’s Jonathan Gratch, with oversight from the Board on Behavioral, Cognitive, and Sensory Sciences, will identify and assess these factors and recommend an agenda for future U.S. Army Research Institute research in these areas.

The 24-month consensus study will be carried out by an ad hoc committee that will meet four times during the first year. Its second meeting will include a small public data-gathering event to assist the committee in developing its findings, conclusions, and recommendations for the final report, expected to be published in summer 2014.

Cerebella

Download a PDF overview.

Cerebella automates the generation of physical behaviors for virtual humans, including nonverbal behaviors accompanying the virtual human’s dialog, responses to perceptual events, and listening behaviors. Modular processing pipelines transform the input into behavior schedules, written in the Behavior Markup Language (BML), and then pass them to a character animation system.

Designed as a highly flexible and extensible component, Cerebella realizes a robust process that supports a variety of use patterns. For example, to generate a character’s nonverbal behavior for an utterance, Cerebella can take as input detailed information about the character’s mental state (e.g., emotion, attitude, etc.) and communicative intent. In the absence of such information, Cerebella will instead analyze the utterance text and prosody to infer it. It can be used online to generate behavior in real time or offline to generate behavior schedules that are cached for later use. Offline use has allowed Cerebella to be incorporated into behavior editors that support mixed-initiative, iterative design of behavior schedules with a human author, whereby Cerebella and the human author iterate over a cycle of Cerebella behavior schedule generation and human author modification of the schedule.

The clip above shows Cerebella performing nonverbal behavior generation and listening behaviors using only analysis of the utterance text and audio.
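To ground the pipeline description, the sketch below assembles a minimal behavior schedule for one utterance. The element names follow the public BML 1.0 draft, but the construction logic and output are an illustrative guess, not Cerebella’s actual code or output format.

import xml.etree.ElementTree as ET

def build_bml(utterance: str, gesture_lexeme: str = "BEAT") -> str:
    """Return a minimal BML behavior schedule pairing speech with a gesture."""
    bml = ET.Element("bml", id="bml1")
    speech = ET.SubElement(bml, "speech", id="s1")
    ET.SubElement(speech, "text").text = utterance
    # Synchronize the gesture stroke with the start of the speech block.
    ET.SubElement(bml, "gesture", id="g1",
                  lexeme=gesture_lexeme, stroke="s1:start")
    return ET.tostring(bml, encoding="unicode")

if __name__ == "__main__":
    print(build_bml("Nice to meet you."))
    # <bml id="bml1"><speech id="s1"><text>Nice to meet you.</text></speech>
    # <gesture id="g1" lexeme="BEAT" stroke="s1:start" /></bml>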

Expert Talk: Machine Learning and Character Animation

Sergey Levine, a Ph.D. candidate from Stanford, speaks about character animation and machine learning. Applications include motion controllers, quasi-physics, path planning and gesturing.

Chronicle of Philanthropy Features ICT’s New Dimensions in Testimony Collaboration with USC Shoah Foundation

A Chronicle of Philanthropy story showcased the New Dimensions in Testimony collaboration between ICT and the USC Shoah Foundation. The article highlighted ICT’s natural language and holographic projection technologies, which are being used to create an immersive experience that will allow future students and others to ask questions of Holocaust survivors. The article quoted ICT’s Paul Debevec on the multi-camera, multi-angle filming process that will make it possible to project “a believable, almost physical-feeling presence,” making it feel as if the survivor is in the room.

“You ask that question, and the video witness is now looking straight out of the camera, straight into your own eyes, and giving you the answer to your question,” said Stephen Smith, executive director of the USC Shoah Foundation, of the natural language capabilities that enable the video character to answer questions. “We’re creating a new form of dialogue with historical characters,” he added.

Medal of Honor Recipient Visits ICT

Medal of Honor recipient Staff Sgt. Ty Carter visited the University of Southern California Institute for Creative Technologies Sept. 3, getting a first-hand look at how this Army-sponsored research center is turning its virtual reality advances into cutting-edge tools for service members.

“It was an honor to have combat hero Staff Sgt. Carter here,” said Randall Hill, executive director of ICT. “He demonstrated exceptional combat leadership and is now shining a light on the hidden wounds of war. We felt privileged to show him some of the work we are doing to help in this area.”

Read more at USC News and the U.S. Army Research Lab website.

Jon Gratch Quoted About Virtual Humans in Huffington Post

Jonathan Gratch, head of ICT’s virtual humans group, was quoted in a Huffington Post story about an MIT-developed 3D character designed to teach interpersonal skills. “While it may seem odd to use computers to teach us how to better talk to people, such software plays an important [role] in more comprehensive programs for teaching social skills [and] may eventually play an essential step in developing key interpersonal skills,” said Gratch, who is also a research associate professor of computer science and psychology at USC.

Peter Khooshabeh and Jonathan Gratch: “Looking Real and Making Mistakes”

Abstract: What happens when a virtual human makes mistakes? In this study we investigate the impact of VHs’ conversational mistakes in the context of persuasion. The experiment also manipulated the level of photorealism of the VH. Users interacted with a VH that presented persuasive information, and they were given the option to use the information to complete a problem-solving task. The VH occasionally made mistakes such as not responding, repeating the same answer, or giving irrelevant feedback. Results indicated that a VH is less persuasive when he or she makes conversational mistakes. Individual differences also shed light on the cognitive processes of users who interacted with a VH who made conversational errors: participants with a low Need For Cognition are more affected by the conversational errors. VH photorealism and gender did not have significant effects on the persuasion measure. We discuss the implications of these results with regard to human-virtual human interaction.

Stefan Scherer: “Towards a Multimodal Virtual Audience Platform for Public Speaking Training”

Abstract: Public speaking performances are characterized not only by the presentation of the content, but also by the presenters’ nonverbal behavior, such as gestures, tone of voice, vocal variety, and facial expressions. Within this work, we seek to identify automatic nonverbal behavior descriptors that correlate with expert assessments of behaviors characteristic of good and bad public speaking performances. We present a novel multimodal corpus recorded with a virtual audience public speaking training platform. Lastly, we utilize the behavior descriptors to automatically approximate the overall assessment of the performance using support vector regression in a speaker-independent experiment, and yield promising results approaching human performance.
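As a sketch of that last step (regressing an overall rating from behavior descriptors under a speaker-independent protocol), something like the following could be used; the descriptors, data, and model settings here are synthetic placeholders, not the paper’s actual features or corpus.

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_clips = 60
# Each row holds descriptors for one presentation, e.g.,
# [gesture rate, pause ratio, vocal variety, gaze contact].
X = rng.normal(size=(n_clips, 4))
y = rng.uniform(1.0, 7.0, size=n_clips)  # expert overall ratings (synthetic)
speakers = np.repeat(np.arange(12), 5)   # 12 speakers, 5 clips each

# Leave-one-speaker-out cross-validation keeps train and test speakers
# disjoint, mirroring the speaker-independent evaluation described above.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
scores = cross_val_score(model, X, y, groups=speakers,
                         cv=LeaveOneGroupOut(),
                         scoring="neg_mean_absolute_error")
print(f"Mean absolute error: {-scores.mean():.2f}")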

Wall Street Journal Features ICT’s Light Stage Face Scanning Technology

A Wall Street Journal video story featured Paul Debevec and ICT’s Light Stage technology. The story covered the ICT Graphics Lab’s work creating realistic digital faces for movies and video games, and explored the possibility of mobile phone-based systems that could allow people to become 3-D characters themselves.

Motivational Interviewing Learning Environment and Simulation (MILES)

Download a PDF overview.

The Motivational Interviewing Learning Environment and Simulation (MILES) is a virtual reality project developed by the USC Center for Innovation and Research on Veterans and Military Families (CIR), in partnership with the USC Institute for Creative Technologies.

Now embedded in the USC School of Social Work, MILES provides future therapists the opportunity to advance their skills in treating service members, veterans, or military-impacted family members through practice with a simulated patient. This exciting teaching tool allows instructors to guide social work students through a therapist-client interaction with a simulated veteran using a multiple choice-style progression through a therapy session.

The ELITE and INOTS platforms were successfully leveraged to create MILES, with USC CIR and ICT developing a new instructional framework to drive the learning experience.

Stefan Scherer: “Investigating Voice Quality as a Speaker-Independent Indicator of Depression and PTSD”

Abstract: “We seek to investigate voice quality characteristics, in particular on a breathy to tense dimension, as an indicator for psychological distress, i.e. depression and post-traumatic stress disorder (PTSD), within semi-structured virtual human interviews. Our evaluation identifies significant differences between the voice quality of psychologically distressed participants and not-distressed participants within this limited corpus. We investigate the capability of automatic algorithms to classify psychologically distressed speech in speaker-independent experiments. Additionally, we examine the impact of the posed questions’ affective polarity, as motivated by findings in the literature on positive stimulus attenuation and negative stimulus potentiation in emotional reactivity of psychologically distressed participants. The experiments yield promising results using standard machine learning algorithms and solely four distinct features capturing the tenseness of the speaker’s voice.”
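
As an illustration of the classification step described above, a small sketch follows: a kernel SVM over four synthetic voice-quality features, evaluated with speaker-grouped folds so no speaker appears in both training and test data. The features and data are stand-ins, not the study’s breathy-to-tense measures.

```python
# Sketch of speaker-independent classification of distressed vs. not-distressed
# speech from a small set of voice-quality features. The four features are
# hypothetical placeholders; the data are synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(1)
n = 80
X = rng.normal(size=(n, 4))              # four voice-tenseness features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)
speakers = np.repeat(np.arange(20), 4)   # group folds by speaker

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, groups=speakers, cv=GroupKFold(n_splits=5))
print("speaker-independent accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```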

West Point Graduates and Rhodes Scholars at ICT

The USC Institute for Creative Technologies is hosting two recent graduates of the U.S. Military Academy at West Point who were both awarded 2013 Rhodes Scholarships. Over the next four weeks, Second Lieutenant Kiley Hunkler will be working with the Medical Virtual Reality team, and Second Lieutenant Evan Szablowski will be working with the Virtual Human team. 2nd Lts. Hunkler and Szablowski will be starting their programs at Oxford University in October 2013.
Read about these impressive young officers in this 2012 West Point release:

Albert “Skip” Rizzo: “Virtual Reality Goes to War and Beyond”

Overview: The physical, emotional, cognitive and psychological demands of war place enormous stress on even the best-prepared military personnel. The urgency of war, combined with significant advances in low-cost consumer technology has driven the development of Virtual Reality and Interactive Technology applications for clinical research and intervention.

Part 1 of this workshop will present an overview of the research, development and implementation of Virtual Reality and Interactive Technology applications that have been applied in the prevention, assessment, treatment and scientific understanding of the psychological wounds of war. This talk will conclude with a discussion of how the urgency of war has driven clinical system development that will soon have significant impact on civilian behavioral healthcare needs.

Part 2 of this workshop will focus on a range of interactive technology applications that address the challenging cognitive and social needs of schizophrenics. This work uses web-based, iPad and mobile systems to improve the efficacy and dissemination of clinical and research applications. The workshop will conclude with a general discussion with the presenters and workshop participants as to how recent advances in VR and Interactive Systems (in some cases driven by military funding) can now be translated to address the needs of civilian behavioral healthcare. Demonstrations of some of the systems discussed (VR PTSD Exposure Therapy, SimCoach, BrainHQ) will also be available for the participants to experience.

Paul Rosenbloom: “The Sigma Cognitive Architecture/System”

Paul Rosenbloom will deliver the keynote presentation at the IJCAI Workshop on Intelligence Science.

ICT at CogSci 2013

ICT virtual human and social simulation research will be presented at CogSci 2013, the 35th annual meeting of the Cognitive Science Society, to be held in Berlin, Germany, Wednesday, July 31 – Saturday, August 3, 2013.

Talks

Computational Models of Human Behavior in Wartime Negotiations
Speakers: David Pynadath, Ning Wang, Stacy Marsella
Abstract: Political scientists are increasingly turning to game-theoretic models to understand and predict the behavior of national leaders in wartime scenarios, where two sides have the options of seeking resolution at either the bargaining table or on the battlefield. While the theoretical analyses of these models are suggestive of their ability to capture these scenarios, it is not clear to what degree human behavior conforms to such equilibrium-based expectations. We present the results of a study that placed people within two of these game models, playing against an intelligent agent. We consider several testable hypotheses drawn from the theoretical analyses and evaluate the degree to which the observed human decision-making conforms to those hypotheses.

Negotiation Strategies with Incongruent Facial Expressions of Emotion Cause Cardiovascular Threat
Speakers: Peter Khooshabeh, Celso DeMelo and Jonathan Gratch
Abstract: Affect is important in motivated performance situations such as negotiation. Longstanding theories of emotion suggest that facial expressions provide enough information to perceive another person’s internal affective state. Alternatively, the contextual emotion hypothesis posits that situational factors bias the perception of emotion in others’ facial displays. This hypothesis predicts that individuals will have different perceptions of the same facial expression depending upon the context in which the expression is displayed. In this study, cardiovascular indexes of motivational states (i.e., challenge vs. threat) were recorded while players engaged in a multi-issue negotiation where the opposing negotiator (confederate) displayed emotional facial expressions (angry vs. happy); the confederate’s negotiation strategy (cooperative vs. competitive) was factorially crossed with his facial expression. During the game, participants’ eye fixations and cardiovascular responses, indexing task engagement and challenge/threat motivation, were recorded. Results indicated that participants playing confederates with incongruent facial expressions (e.g., cooperative strategy, angry face) exhibited a greater threat response, which arises due to increased uncertainty. Eye fixations also suggest that participants look at the face more in order to acquire information to reconcile their uncertainty in the incongruent condition. Taken together, these results suggest that context matters in the perception of emotion.

Context Dependent Utility: Modeling Decision Behavior Across Contexts
Speakers: Jonathan Ito and Stacy Marsella
Abstract: One significant challenge in creating accurate models of human decision behavior is accounting for the effect of context. Research shows that seemingly minor changes in the presentation of a decision can lead to drastic shifts in behavior; phenomena collectively referred to as framing effects. Previous work has developed Context Dependent Utility (CDU), a framework integrating Appraisal Theory with decision-theoretic principles. This work extends existing research by presenting a study exploring the behavioral predictions offered by CDU regarding the multidimensional effect of context on decision behavior. The present study finds support for the predictions of CDU regarding the impact of context on decisions: 1) as perceptions of pleasantness increase, decision behavior tends towards risk-aversion; 2) as perceptions of goal-congruence increase, decision behavior tends towards risk-aversion; 3) as perceptions of controllability increase, i.e., perceptions that outcomes would have been primarily caused by the decision maker, behavior tends towards risk-seeking.

Tutorial

Virtual Humans: A New Toolkit for Cognitive Science Research
Time: Half-day (9:00 – 12:30)
Organizers: Jonathan Gratch, Arno Hartholt, Morteza Dehghani, and Stacy Marsella
Overview: Virtual humans (VHs) are digital anthropomorphic characters that exist within virtual worlds but are designed to perceive, understand and interact with real-world humans. Although typically conceived as practical tools to assist in a range of applications (e.g., HCI, training and entertainment), the technology is gaining interest as a methodological tool for studying human cognition. VHs not only simulate the cognitive abilities of people, but also many of the embodied and social aspects of human behavior more traditionally studied in fields outside of cognitive science. By integrating multiple cognitive capabilities (e.g., language, gesture, emotion) and requiring these processes to support real-time interactions with people, VHs create a unique and challenging environment within which to develop and validate cognitive theories. In this tutorial, we will review recent advances in VH technologies, demonstrate examples of use of VHs in cognitive science research and provide hands-on training using our Virtual Human Toolkit. Learn more at vhtoolkit.ict.usc.edu.

Peter Khooshabeh, Celso DeMelo and Jonathan Gratch: “Negotiation Strategies with Incongruent Facial Expressions of Emotion Cause Cardiovascular Threat”

Abstract: Affect is important in motivated performance situations such as negotiation. Longstanding theories of emotion suggest that facial expressions provide enough information to perceive another person’s internal affective state. Alternatively, the contextual emotion hypothesis posits that situational factors bias the perception of emotion in others’ facial displays. This hypothesis predicts that individuals will have different perceptions of the same facial expression depending upon the context in which the expression is displayed. In this study, cardiovascular indexes of motivational states (i.e., challenge vs. threat) were recorded while players engaged in a multi-issue negotiation where the opposing negotiator (confederate) displayed emotional facial expressions (angry vs. happy); the confederate’s negotiation strategy (cooperative vs. competitive) was factorially crossed with his facial expression. During the game, participants’ eye fixations and cardiovascular responses, indexing task engagement and challenge/threat motivation, were recorded. Results indicated that participants playing confederates with incongruent facial expressions (e.g., cooperative strategy, angry face) exhibited a greater threat response, which arises due to increased uncertainty. Eye fixations also suggest that participants look at the face more in order to acquire information to reconcile their uncertainty in the incongruent condition. Taken together, these results suggest that context matters in the perception of emotion.

David Pynadath, Paul Rosenbloom and Stacy Marsella: “Modeling Two-player Games in the Sigma Graphical Cognitive Architecture”

Abstract: Effective social interaction and, in particular, a Theory of Mind are critical components of human intelligence, allowing us to form beliefs about other people, generate expectations about their behavior, and use those expectations to inform our own decision-making. This article presents an investigation into methods for realizing Theory of Mind within Sigma, a graphical cognitive architecture. By extending the architecture to capture independent decisions and problem-solving for multiple agents, we implemented Sigma models of several canonical examples from game theory. We show that the resulting Sigma agents can capture the same behaviors prescribed by equilibrium solutions.

Enhanced Environment for Communication and Collaboration (E2C2)

E2C2 is funded by ONR Swampworks and executed by the USC Institute for Creative Technologies. The E2C2 project seeks to answer a simple question with complicated answers: what will a future office/work environment look like in 2020, and how will it enable people to function better? E2C2 is essentially looking at creating new models for what could be termed “Communication Operating Systems” (COS). The initial E2C2 COS (codename BlueShark) will be a mix of conceptual frameworks, hardware and software that will better enable decision makers and help optimize operator performance.

BlueShark is an experimental effort that looks at a variety of ways to collaborate more efficiently. Some experiments might be more of a dream, as the technology might not have been invented yet; some might use cutting-edge technology not quite ready for prime time; and some might look at new-to-market technologies that just have not been applied to the workforce experience yet. And the best might be experiments driven by “the audience” that combine different technologies in different ways to better meet some objective.

The overall objective of BlueShark is to combine the creative minds of the Institute for Creative Technologies (the U.S. Army University Affiliated Research Center for virtual systems), USC, Hollywood studios, and U.S. Navy and Marine Corps personnel. The project aims to take advantage of the recent educational experience of our country’s youth (both formal, in schools, and informal, self-taught) and the rapid advancement of information technologies; apply them to existing naval environments in an experimental space; and feed information back to technology creators/designers, ship designers and mission planners on better, more efficient ways to collaborate, train and work. Individual spin-off experiments may take place on a case-by-case basis on naval assets for detailed long-term evaluations.

Incorporating the best tools and techniques that ICT has to offer, this exciting project will utilize virtual and augmented reality, virtual humans, artificial intelligence, human-computer interfaces and other research and prototypes.

Learn more at e2c2.ict.usc.edu.

ICT at SIGGRAPH 2013

ICT graphics innovations in display and rendering technologies will be featured throughout the upcoming ACM SIGGRAPH conference, July 21-25 in Anaheim, Calif. SIGGRAPH is the premier international forum for disseminating new scholarly work and emerging trends and techniques in computer graphics and interactive visuals.

Technical Paper

Acquiring Reflectance and Shape from Continuous Spherical Harmonic Illumination
Authors: Borom Tunwattanapong, Graham Fyffe, Paul Graham, Jay Busch, Xueming Yu, Abhijeet Ghosh, Paul Debevec
Abstract: A new reflectance measurement and 3D scanning technique uses a rotating arc of light and long-exposure photography to illuminate an object with spherical harmonic illumination and estimate maps from multiple viewpoints. The responses yield estimates of diffuse and specular reflectance parameters and 3D geometry.
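
One property the technique exploits can be shown numerically: for a purely diffuse (Lambertian) surface point, the responses to the first-order spherical harmonic lighting conditions align with the surface normal. The sketch below, a Monte Carlo toy using standard real-SH constants rather than code from the paper, simulates those responses and recovers the normal.

```python
# Toy illustration: responses of a Lambertian point under the first nine
# real spherical harmonic lighting conditions; the order-1 responses point
# along the surface normal. Not the paper's pipeline, just the core idea.
import numpy as np

def sh_basis(dirs):
    """First nine real spherical harmonics evaluated at unit directions (N, 3)."""
    x, y, z = dirs.T
    return np.stack([
        np.full_like(x, 0.282095),
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3 * z * z - 1),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ], axis=1)

rng = np.random.default_rng(0)
dirs = rng.normal(size=(50_000, 3))                   # uniform directions on the sphere
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

n_true = np.array([0.3, 0.5, 0.81]); n_true /= np.linalg.norm(n_true)
albedo = 0.7
cosines = np.clip(dirs @ n_true, 0.0, None)           # Lambertian clamped cosine
Y = sh_basis(dirs)                                    # (N, 9) lighting basis values
# Response to each SH lighting condition (Monte Carlo estimate of the integral).
responses = (albedo * cosines[:, None] * Y).mean(axis=0) * 4 * np.pi

# Order-1 responses (basis order: Y1,-1 ~ y, Y1,0 ~ z, Y1,1 ~ x) align with n.
n_est = np.array([responses[3], responses[1], responses[2]])
n_est /= np.linalg.norm(n_est)
print("true normal:     ", np.round(n_true, 3))
print("estimated normal:", np.round(n_est, 3))
```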

ICT Graphics Lab’s Technical Paper page

Real-Time Live Demo

Digital Ira: High-Resolution Facial Performance Playback
Real-time facial animation from high-resolution scans driven by video performance capture rendered in a reproducible, game-ready pipeline. This collaborative work incorporates expression blending for the face, extensions to photoreal eye and skin rendering and real-time ambient shadows.
NVIDIA: Curtis Beeson, Steve Burke, Mark Daly
Activision, Inc: Javier von der Pahlen, Etienne Danvoye, Bernardo Antoniazzi, Michael Eheler, Zbynek Kysela, Jorge Jimenez
USC Institute for Creative Technologies: Oleg Alexander, Jay Busch, Paul Graham, Borom Tunwattanapong, Andrew Jones, Koki Nagano, Ryosuke Ichikari, Paul Debevec, Graham Fyffe

ICT Graphics Lab’s Digital Ira page

Related Work
Talk: Driving High-Resolution Facial Blendshapes with Video Performance Capture, Graham Fyffe
Poster: Digital Ira: Creating a Real-Time Photoreal Digital Actor (scroll to #36)
Poster: Vuvuzela: A Facial Scan Correspondence Tool (scroll to #113)

Emerging Technologies Demo

An Autostereoscopic Projector Array Optimized for 3D Facial Display
USC Institute for Creative Technologies: Koki Nagano, Andrew Jones, Jay Busch, Paul Debevec, Mark Bolas, Xueming Yu
University of California at Santa Cruz: Jing Liu
Abstract: Video projectors are rapidly shrinking in size, power consumption, and cost. Such projectors provide unprecedented flexibility to stack, arrange, and aim pixels without the need for moving parts. This dense projector display is optimized in size and resolution to display an autostereoscopic life-sized 3D human face. It utilizes 72 Texas Instruments PICO projectors to illuminate a 30 cm x 30 cm anisotropic screen with a wide 110-degree field of view. The demonstration includes both live scanning of subjects and virtual animated characters.

ACM Symposium on Spatial User Interaction (SUI)

Just prior to SIGGRAPH, ICT will host the first ACM Symposium on Spatial User Interaction (SUI), chaired by ICT’s Evan Suma.

ACM SIGGRAPH/Eurographics Symposium on Computer Animation

Virtual Character Performance from Speech
Authors: Stacy Marsella, Margaux Lhommet, Yuyu Xu, Andrew Feng, Stefan Scherer, Ari Shapiro
Abstract: We demonstrate a method for generating a 3D virtual character performance from the audio signal by inferring the acoustic and semantic properties of the utterance. Through a prosodic analysis of the acoustic signal, we perform an analysis for stress and pitch, relate it to the spoken words and identify the agitation state. Our rule-based system performs a shallow analysis of the utterance text to determine its semantic, pragmatic and rhetorical content. Based on these analyses, the system generates facial expressions and behaviors including head movements, eye saccades, gestures, blinks and gazes. Our technique is able to synthesize the performance and generate novel gesture animations based on coarticulation with other closely scheduled animations. Because our method utilizes semantics in addition to prosody, we are able to generate virtual character performances that are more appropriate than methods that use only prosody. We perform a study that shows that our technique outperforms methods that use prosody alone.
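
As a rough feel for the prosodic front end the abstract mentions, the sketch below estimates per-frame pitch and energy with a simple autocorrelation method and flags frames where both exceed their medians as “stressed.” It is a toy stand-in under those assumptions, not the system’s actual analysis.

```python
# Toy prosodic analysis: per-frame energy plus autocorrelation-based pitch,
# then a crude "stress" flag where both pitch and energy are above median.
import numpy as np

def frame_pitch_and_energy(signal, sr, frame=1024, hop=512, fmin=75, fmax=400):
    pitches, energies = [], []
    for start in range(0, len(signal) - frame, hop):
        w = signal[start:start + frame] * np.hanning(frame)
        energies.append(float(np.sum(w * w)))
        ac = np.correlate(w, w, mode="full")[frame - 1:]   # autocorrelation, lag >= 0
        lo, hi = int(sr / fmax), int(sr / fmin)            # plausible pitch lags
        lag = lo + int(np.argmax(ac[lo:hi]))
        pitches.append(sr / lag)
    return np.array(pitches), np.array(energies)

sr = 16000
t = np.arange(sr) / sr
# Fake "speech": a 150 Hz tone with slow amplitude modulation.
voice = np.sin(2 * np.pi * 150 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
f0, e = frame_pitch_and_energy(voice, sr)
stressed = (f0 > np.median(f0)) & (e > np.median(e))
print(f"{stressed.mean():.0%} of frames flagged as stressed")
```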

Digital Production Symposium

Towards Higher Quality Character Performance in Previz
Authors: Stacy Marsella, Ari Shapiro, Andrew Feng, Yuyu Xu, Margaux Lhommet, Stefan Scherer
Abstract: We seek to raise the standard of quality for automatic and minimal cost 3D content in previz by automated generation of a believable 3D character performance from only an audio track and its transcription.

The Graphics Lab: “Acquiring Reflectance and Shape from Continuous Spherical Harmonic Illumination”

Abstract: A new reflectance measurement and 3D scanning technique uses a rotating arc of light and long-exposure photography to illuminate an object with spherical harmonic illumination and estimate maps from multiple viewpoints. The responses yield estimates of diffuse and specular reflectance parameters and 3D geometry.

Koki Nagano and Andrew Jones: “An Autostereoscopic Projector Array Optimized for 3D Facial Display”

Abstract: Video projectors are rapidly shrinking in size, power consumption, and cost. Such projectors provide unprecedented flexibility to stack, arrange, and aim pixels without the need for moving parts. This dense projector display is optimized in size and resolution to display an autostereoscopic life-sized 3D human face. It utilizes 72 Texas Instruments PICO projectors to illuminate a 30 cm x 30 cm anisotropic screen with a wide 110-degree field of view. The demonstration includes both live scanning of subjects and virtual animated characters.

The Graphics Lab and Activision: “Digital Ira: High-Resolution Facial Performance Playback”

Abstract: Real-time facial animation from high-resolution scans driven by video performance capture rendered in a reproducible, game-ready pipeline. This collaborative work incorporates expression blending for the face, extensions to photoreal eye and skin rendering [Jimenez et al., Real-Time Live! SIGGRAPH 2012], and real-time ambient shadows. The actor was scanned in 30 high-resolution expressions using a Light Stage [Ghosh et al., SIGGRAPH Asia 2011], from which eight were chosen for real-time performance rendering. The actor’s performance clips were captured at 30 fps under flat light conditions using the multi-camera rig. Expression UVs were interactively corresponded to the neutral expression and retopologized to an artist mesh. The offline animation solver creates a performance graph representing dense GPU optical flow between video frames and the eight expressions. The graph is pruned by analyzing the correlation between video and expression scans over 12 facial regions. Then dense optical flow and 3D triangulation are computed, yielding per-frame spatially varying blendshape weights approximating the performance. Mesh animation is transferred to standard bone animation on a game-ready 4k mesh using a bone-weight and transform solver. This solver optimizes the smooth skinning weights and the bone-animated transforms to maximize the correspondence between the game mesh and the reference animated mesh. Surface stress values are used to blend albedo, specular, normal, and displacement maps from the high-resolution scans per-vertex at run time. DX11 rendering includes SSS, translucency, eye refraction and caustics, physically based two-lobe specular reflection with microstructure, DOF, antialiasing, and grain. Due to the novelty of this pipeline, there are many elements in progress. By delivery time, these elements will be present: eyelashes, eyelid bulge, displacement shading, ambient transmittance, and several other dynamic effects.
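
One step worth unpacking is the per-frame blendshape solve: given the expression scans, find non-negative weights whose blend best matches the tracked frame. The sketch below shows the shape of that computation as a non-negative least-squares problem over toy data; the actual pipeline derives its correspondences from dense optical flow and varies the weights spatially across facial regions.

```python
# Sketch of a per-frame blendshape weight solve as non-negative least squares.
# Dimensions and data are toy stand-ins for the scan-derived basis.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_vertices, n_expressions = 2000, 8
# Columns: per-expression vertex offsets from the neutral scan (flattened xyz).
B = rng.normal(size=(3 * n_vertices, n_expressions))
w_true = np.array([0.0, 0.6, 0.0, 0.3, 0.0, 0.0, 0.1, 0.0])
frame = B @ w_true + rng.normal(scale=0.01, size=3 * n_vertices)  # tracked frame

w, _ = nnls(B, frame)                 # non-negative blend weights for this frame
print("recovered weights:", np.round(w, 2))
```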

ICT Hosts USC Rossier School of Education’s Global Executive Doctor of Education Program Students

The USC Institute for Creative Technologies hosted students from the USC Rossier School of Education Global Executive Doctor of Education Program on Tuesday, July 23. The Global Ed.D. cohort of 2013 learned about the ICT organization from Executive Director Dr. Randy Hill and watched hands-on demonstrations of the Immersive Naval Officer Training System, game-based rehabilitation therapy tools, and the Graphics Lab’s light stage. ICT researchers also highlighted projects such as the Motivational Interviewing Learning Environment and Simulation, a collaboration with the USC Center for Innovation and Research on Veterans and Military Families, and the New Dimensions in Testimony project, a collaboration with the USC Shoah Foundation. Over lunch, students asked ICT researchers about their leadership philosophies, and discussed how creative people at ICT approached problem-solving.

Stacy Marsella, Ari Shapiro, Andrew Feng, Yuyu Xu, Margaux Lhommet and Stefan Scherer: “Towards Higher Quality Character Performance in Previz”

Abstract: We seek to raise the standard of quality for automatic and minimal cost 3D content in previz by automated generation of a believable 3D character performance from only an audio track and its transcription.

Stacy Marsella, Margaux Lhommet, Yuyu Xu, Andrew Feng, Stefan Scherer and Ari Shapiro: “Virtual Character Performance from Speech”

Abstract: We demonstrate a method for generating a 3D virtual character performance from the audio signal by inferring the acoustic and semantic properties of the utterance. Through a prosodic analysis of the acoustic signal, we perform an analysis for stress and pitch, relate it to the spoken words and identify the agitation state. Our rule-based system performs a shallow analysis of the utterance text to determine its semantic, pragmatic and rhetorical content. Based on these analyses, the system generates facial expressions and behaviors including head movements, eye saccades, gestures, blinks and gazes. Our technique is able to synthesize the performance and generate novel gesture animations based on coarticulation with other closely scheduled animations. Because our method utilizes semantics in addition to prosody, we are able to generate virtual character performances that are more appropriate than methods that use only prosody. We perform a study that shows that our technique outperforms methods that use prosody alone.

Media Advisory: USC Showcases Breakthrough Technologies for Holographic Projections and Realistic Digital Doubles at Premier Computer Graphics and Animation Conferences

Holographic heads, photo-real video game characters and virtual humans with realistic behaviors among the USC Institute for Creative Technologies-developed technologies to be demonstrated and discussed at SIGGRAPH 2013 and co-located Symposium on Computer Animation (SCA) and Digital Production Symposium (DigiPro)

WHEN/WHERE: July 19-25 in Anaheim, California

SCA — 9:10-10:40 a.m. Friday, July 19, Sheraton Park Hotel
DigiPro 2013 — 11:30 a.m.-12:40 p.m. Saturday, July 20, Disney’s Grand Californian Hotel
SIGGRAPH 2013 — July 21-25, Anaheim Convention Center

WHAT: 

SIGGRAPH presentations include: 

  • Digital Ira, a photo-real digital double that debuted to much acclaim and over a million YouTube views, was created in collaboration with video game maker Activision. The latest version of this interactive Ira will be one of the highlights at the SIGGRAPH Real-Time Live! session. 5:30-7:30 p.m., Tuesday, July 23
  • http://youtu.be/l6R6N4Vy0nE
  • A holographic version of Ira, along with live scans of real people, will be projected as part of ICT’s 3-D facial display demonstration in the Emerging Technology showcase. This work represents an extension of ICT’s winning 3-D display system, awarded Best Emerging Technology at SIGGRAPH in 2007. Sunday, July 21 — Thursday, July 25
  • ICT researchers will also be at the Technical Papers session to present their breakthroughs creating fully computer-generated glasses and other objects. These research results overcome existing challenges in reproducing objects with reflectance, translucence and transparent qualities. 2:00–3:30 p.m., Wednesday, July 24, 2013
  • ICT’s Mark Bolas will speak about the mutual value of academic and industry partnerships in a SIGGRAPH business symposium talk, 5-6 p.m., Sunday, July 21

SCA and DigiPro 2013 presentations:

  • These two talks introduce Cerebella, an automated performance-generation system that realistically animates virtual characters using only an analysis of the spoken and transcribed words. This tool improves upon existing animation systems in that it infers a character’s intent. It also advances the state-of-the-art in pre-visualization.

WHO:

  • Paul Debevec: Director of the ICT Graphics Lab and research professor at the USC Viterbi School of Engineering (SIGGRAPH)
  • Stacy Marsella: ICT associate director for social simulation and research assistant professor at the USC Viterbi School of Engineering (SCA and DigiPro)
  • Mark Bolas: Director of the ICT Mixed Reality Lab and associate professor at the USC School of Cinematic Arts

CONTACT:
SIGGRAPH: Brian Ban at (773) 454-7423 or media@siggraph.org
USC: Robert Perkins at (213) 740-9226 or perkinsr@usc.edu

MORE: For a complete list of ICT involvement at SIGGRAPH and related conferences see: http://ict.usc.edu/events/

ABOUT ICT:
At the University of Southern California Institute for Creative Technologies (ICT), leaders in artificial intelligence, graphics, virtual reality and narrative advance low-cost immersive techniques and technologies to solve problems facing service members, students and society. Established in 1999, ICT is a DoD-sponsored University Affiliated Research Center (UARC) working in collaboration with the U.S. Army Research Laboratory. UARCs are aligned with prestigious institutions conducting research at the forefront of science and innovation. ICT brings film and game industry artists together with computer and social scientists to study and develop immersive media for military training, health therapies, education and more.

Intelligence, Surveillance, and Reconnaissance Trainer (ISR-T)

The University of Southern California Institute for Creative Technologies (ICT) supports the mission of the U.S. Army Intelligence Center of Excellence in its effort to develop an interactive multimedia instruction (IMI) system focused on Intelligence, Surveillance, and Reconnaissance (ISR) Synchronization Planning. The resulting ISR-Training system (ISR-T) lends itself to blended learning and adaptability; it can be used in resident training, web-based training with an instructor, and web-based training without an instructor. This is done through a combination of scenario engagement (presented by some mix of text, visuals, audio and video), background materials, and directed exercises. Training results are tracked for each student so that instructors can assess performance and understanding.

The target audience for ISR-T is members of the intelligence community or personnel with the appropriate security clearance in any agency with the mission to perform ISR Planning and Synchronization. These will be primarily Army personnel, but the audience also includes Joint Military Intelligence (MI) service members, civilians, and contractors of the active and reserve components (both Army Reserve and National Guard), MI units, and other individuals and units preparing to deploy, or having the potential to be deployed, in support of a combat or non-combat operation. The audience is not limited to Army individuals, and the training is to be produced and presented with joint audiences in mind.

The goal of ISR-T is to complement existing training exercises and applications. The ISR-T does not teach the detailed mechanics of the ISR Synch Planning Process – this is what the ISR Synch Manager Course is designed to do. Instead, ISR-T supplements existing training and helps students understand the big picture and how to think about the ISR Synch Planning Process using an expert framework and creative problem solving. ISR-T exercises introduce students to how ISR experts work creatively and use critical thinking skills, and then present problems that require students to implement these skills. Students are provided with opportunities to visualize the problem space in a number of ways, and are tested on how to think through the ISR process and ask the right questions. The underlying approach is to create a system that helps students develop a holistic lens for thinking through the ISR synch problems associated with conventional and asymmetric threats.

Students should be able to view ISR Synch problems from various points of view, potentially including: the Commander’s, the operational Soldier’s, the collector’s, and friendly and enemy networks. Students must learn to solve ISR Synch problems creatively with limited resources. Ultimately, ISR-T should help students understand the “how” and “why” of what they do, rather than simply serving as a procedural trainer. The ISR-T also supports (but does not require) ISR problem solving incorporating SECRET-level asset capabilities, which further emphasizes the “how” and “why” of what they do.

Contact: Todd Richmond, USC ICT richmond@ict.usc.edu

Paul Rosenbloom: “Learning Via Gradient Descent in Sigma”

Abstract: “Integrating a gradient-descent learning mechanism at the core of the graphical models upon which the Sigma cognitive architecture/system is built yields learning behaviors that span important forms of both procedural learning (e.g., action and reinforcement learning) and declarative learning (e.g., supervised and unsupervised concept formation), plus several additional forms of learning (e.g., distribution tracking and map learning) relevant to cognitive systems/modeling. The core result presented here is this breadth of cognitive learning behaviors that is producible in this uniform manner.”
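
As a minimal illustration of one of the behaviors mentioned, distribution tracking, the sketch below nudges a softmax-parameterized categorical distribution toward a stream of observed symbols by gradient descent on the negative log-likelihood. This shows only the mechanism; Sigma realizes it inside its graphical models rather than as a standalone loop.

```python
# Distribution tracking via gradient descent: maintain a categorical
# distribution as softmax log-weights and update them per observation.
import numpy as np

rng = np.random.default_rng(0)
true_p = np.array([0.6, 0.3, 0.1])       # distribution generating the stream
theta = np.zeros(3)                      # unnormalized log-weights
lr = 0.1

for _ in range(5000):
    x = rng.choice(3, p=true_p)          # observe one symbol
    p = np.exp(theta - theta.max())
    p /= p.sum()
    grad = p.copy()
    grad[x] -= 1.0                       # d(-log p[x]) / d(theta) for softmax
    theta -= lr * grad

p = np.exp(theta - theta.max()); p /= p.sum()
print(np.round(p, 2))                    # approaches true_p
```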

New Scientist Features ICT’s Digital Ira Collaboration with Activision

New Scientist featured research by Paul Debevec and colleagues on how to simulate light reflecting off human skin. Their digital rendering software creates super-realistic 3D models, including simulating light penetrating skin to varying depths and scattering before being reflected. The story mentioned the Digital Ira project, a collaboration between computer gaming publisher Activision and ICT to develop software that can generate photorealistic digital actors on gaming consoles. The story was also covered by Gizmodo.

The New Yorker Highlights ICT’s Virtual Reality Exposure Therapy

An article in The New Yorker highlighted ICT’s virtual reality research project “Virtual Iraq/Afghanistan“, in which veterans suffering from PTSD are guided through simulations recreating their experiences. The story noted that technology innovator Palmer Luckey worked on this immersive virtual reality project during his time at ICT.

ICT Virtual Reality and Health Research Cited in Time

An article in Time looked at whether virtual reality technologies can promote behavior change. The article cited several ICT studies and also noted Evan Suma’s work using a Kinect to control movement in World of Warcraft. The story mentions this 2011 ICT review of how virtual reality could impact obesity and diabetes, among other health-related conditions, and also features ICT work studying VR for attention disorders.

Louis-Philippe Morency: “Action Recognition by Hierarchical Sequence Summarization”

Abstract: Recent progress has shown that learning from hierarchical feature representations leads to improvements in various computer vision tasks. Motivated by the observations that human activity data contains information at various temporal resolutions, we present a hierarchical sequence summarization approach for action recognition that learns multiple layers of discriminative feature representations. We build up a hierarchy dynamically and recursively by alternating sequence learning and sequence summarization. For sequence learning we use CRFs with latent variables to learn hidden spatio-temporal dynamics; for sequence summarization we group observations that have similar semantic meaning in the latent space. For each layer we learn an abstract feature representation through non-linear gate functions. This procedure is repeated to obtain a hierarchical sequence summary representation. We develop an efficient learning method to train our model and show that its complexity grows sublinearly with the size of the hierarchy. Experimental results show the effectiveness of our approach, achieving the best published results on the ArmGesture and Canal9 datasets.
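
To give a feel for the alternation the abstract describes, here is a schematic sketch in which runs of adjacent frames with similar latent representations are merged into summary segments, layer by layer. The fixed random projection is an illustrative stand-in for the latent-variable CRF learning used in the actual model.

```python
# Schematic hierarchy building: alternate a (stand-in) latent mapping with
# summarization that merges runs of similar adjacent frames.
import numpy as np

def summarize(seq, threshold):
    """Merge runs of adjacent frames whose cosine similarity to the running
    group mean exceeds threshold; return one mean vector per group."""
    groups, current = [], [seq[0]]
    for frame in seq[1:]:
        ref = np.mean(current, axis=0)
        cos = frame @ ref / (np.linalg.norm(frame) * np.linalg.norm(ref) + 1e-9)
        if cos > threshold:
            current.append(frame)
        else:
            groups.append(np.mean(current, axis=0))
            current = [frame]
    groups.append(np.mean(current, axis=0))
    return np.array(groups)

rng = np.random.default_rng(0)
frames = np.repeat(rng.normal(size=(6, 16)), 10, axis=0)   # 6 "gestures" x 10 frames
frames += 0.05 * rng.normal(size=frames.shape)

layer = frames @ rng.normal(size=(16, 8))    # stand-in for a learned latent map
for level in range(3):
    layer = summarize(layer, threshold=0.8 - 0.3 * level)  # looser merges higher up
    print(f"layer {level + 1}: {len(layer)} summary segments")
```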

Read the paper here.

Researcher Spotlight: Peter Khooshabeh

When ICT’s Peter Khooshabeh was an undergraduate at the University of California at Berkeley, he worked on developing a virtual practice tool for surgeons. The idea was that an individual interacting in this simulated scenario would show improved outcomes in the operating room. But when Khooshabeh spent time in a real hospital, he observed that technical skill was just one aspect of surgical success. Any useful virtual environment would also need to capture the interpersonal dynamics of such a high-stress, multi-person setting.

“At first we were focused on putting just one person in this virtual environment but there are many players involved in any given surgery,” said Khooshabeh, a research fellow in ICT’s virtual humans research group. “I came to understand that the key to improving performance may not be in the quality of the technology, but in how much you understand about people and how they perceive one another.”

Khooshabeh went on to earn a Ph.D. in cognitive psychology from UC Santa Barbara and continues to leverage technology as a tool to better understand people.

In August, Khooshabeh, along with Jonathan Gratch, ICT’s associate director for virtual humans research, and additional co-authors from USC’s Marshall School of Business and UCSB, will present recent research that uses virtual humans to advance knowledge of non-verbal thought and emotion in real humans. In this study, players took part in a negotiation game where the opposing negotiator was a virtual human programmed to play cooperatively or competitively while displaying either an angry or happy facial expression.

“The expectation was that facial expression would override behavior, meaning people would be threatened by anger no matter if the virtual human was helping them or working against them in the negotiation,” said Khooshabeh.

However, the results showed that it wasn’t a competitive negotiation strategy or the anger expression that caused players stress but whether the strategy and the virtual human’s emotions were matched or not. Specifically, physiological data showed that virtual humans who played cooperatively but looked mad caused participants to show signs of distress, measured by lower cardiac output and increased eye fixations on the virtual human’s face.

“People don’t always respond to angry faces the same way,” said Khooshabeh. “These results are significant because they suggest context matters in the perception of emotion.”

In another study, to be published in an upcoming issue of the Journal of Artificial Intelligence and Society, Khooshabeh and ICT colleagues Morteza Dehghani, Angela Nazarian and Jonathan Gratch gave an otherwise identical virtual character different accents (Iranian-, Chinese- or California-accented spoken English) and analyzed how study subjects who shared an ethnic background with the accented virtual character responded to the differently accented characters.

Across the two studies, Iranian-Americans interacting with an Iranian-accented, English-speaking virtual human were more likely to make decisions congruent with Iranian cultural customs. Similarly, Chinese-Americans listening to a Chinese-accented, English-speaking virtual human were more likely to make causal attributions congruent with collectivistic, East Asian cultural ideologies.

“Accents matter just as much or possibly more than visual information in forming impressions of others and how they affect our thinking,” said Khooshabeh. “Our work provides experimental evidence that accent can affect individuals differently depending on whether they share the same ethnic cultural background as the target culture.”

In addition to being informative for designing virtual humans for training or research tasks, Khooshabeh hopes his research helps make people aware of biases that they might not realize they possess, and also contributes to a greater understanding of how people interact and respond to stressful situations, whether they are performing surgery, negotiating or engaging in cross-cultural dialogue.

“In the real world everything is mixed up,” he said. “If we want to understand the role of a single informational cue – be it an emotion or an accent – we have to take it into the lab.”

Peter Khooshabeh is a U.S. Army Research Laboratory (ARL) research fellow in ICT’s virtual humans group. His work explores the social effects that virtual humans can have on people in areas including culture, thought and emotion. It also advances the use of virtual humans as a communications science research tool for better understanding behavior. In addition to his work at ICT, Khooshabeh spends 30 percent of his time at the University of California at Santa Barbara-led Institute for Collaborative Biotechnologies, another UARC under ARL. His fellowship is supported by the ARL Human Research and Engineering Directorate (HRED) Cognitive Sciences Branch.

Armed With Science Highlights ICT Virtual Humans in Healthcare

An Armed with Science article about artificial intelligence in mental health noted that ICT researchers are developing virtual mental health patients that converse with human trainees. The article was written by David D. Luxton, a research psychologist and program manager at the DoD’s National Center for Telehealth & Technology. See more at: http://science.dodlive.mil/2013/06/25/artificial-intelligence-in-mental-healthcare/

Sin-hwa Kang: “Exploring Users’ Social Responses to Computer Counseling Interviewers’ Behavior”

Abstract: We explore the effect of behavioral realism and reciprocal self-disclosure from computer interviewers on the social responses of human users in simulated psychotherapeutic counseling interactions. To investigate this subject, we designed a 3×3 factorial between-subjects experiment involving three conditions of behavioral realism: high realism, low realism, and audio-only (displaying no behavior at all) and three conditions of reciprocal self-disclosure: high reciprocity, low reciprocity, and no reciprocity. We measured users’ feelings of social presence (Copresence, Social Attraction, and Emotional Credibility), rapport, perception of the quality of users’ own responses (Embarrassment and Self-Performance), emotional state (PANAS), perception of an interaction partner (Person Perception), self-reported self-disclosure, speech fluency (Pause Fillers and Incomplete Words), and verbal self-disclosure. The results of objective data analysis demonstrated that users disclosed greater verbal self-disclosure when interacting with computer interviewers that displayed high behavioral realism and high reciprocity of self-disclosure. Users also delivered more fluent speech when interacting with computer interviewers that displayed high behavioral realism. The results are described in detail and implications of the findings are discussed in this paper.

Mark Bolas Quoted in the LA Times

Mark Bolas was quoted in a story about Oculus founder and former ICT staff member Palmer Luckey. The story notes that Bolas hired Luckey to work on a design team finding ways to make virtual reality cost-effective. “The thing that set Palmer apart was he had a great knowledge of the history of virtual reality,” said Bolas in the story.

Bolas was also quoted in a second story about Microsoft’s and Sony’s newest video game consoles and whether they can compete with the increasing popularity of mobile games. “I would love to have an excuse to put my smartphone down and sit down in my nest,” said Bolas in the article. “I’m hoping the computing power in the devices will enable some new rich experiences that are worth sitting down for.”

Jon Gratch Featured in the Wall Street Journal

A Wall Street Journal video story examined whether virtual reality experiences can impact real-world behaviors. The report highlighted some research projects out of Stanford University and quoted Jon Gratch, ICT’s associate director for virtual humans research, who also mentioned some of the research taking place at ICT. “It is pretty clear that things in the virtual environment will provoke you as if they are real,” said Gratch, who is also an associate professor in the department of computer science at the USC Viterbi School of Engineering.

BBC Covers SimSensei and ICT’s Virtual Human Work

A BBC story featured ICT’s SimSensei project and highlighted how the institute is leading the way in the creation of virtual humans that can produce real help for those in need. The story included quotes from Skip Rizzo and Louis-Philippe Morency and noted how ICT is involved in several research projects involving human-computer interaction. “Throughout the building, the work done is starting to blur the lines between the real world and the virtual world. And the result just may be real help for humans who need it,” said the story. The story was also covered in a BBC radio segment that included ICT’s Angela Nazarian and Rachel Wood discussing their research efforts to humanize virtual Ellie.

NPR Features SimSensei and ICT Research Using Computers to Detect Signs of Emotional Distress

A story by NPR’s mental health reporter Alix Spiegel covered ICT’s SimSensei project and research led by Skip Rizzo and Louis-Philippe Morency to develop a virtual human application that can be used to identify signals of depression and other mental health issues. The point, according to Rizzo, ICT’s associate director for medical virtual reality, is to analyze in almost microscopic detail the way people talk and move — to read their body language.

“We can look at the position of the head, the eye gaze,” Rizzo says in the story. “Does the head tilt? Does it lean forward? Is it static and fixed?”

The theory, says Spiegel in her report, is that a detailed analysis of those movements and vocal features can give us new insights into people who are struggling with emotional issues. The body, face and voice express things that words sometimes obscure.

The story notes that the idea here is not for Ellie to actually diagnose people and replace trained therapists. She’s just there to offer insight to therapists, Morency says, by providing some objective measurements.

“Think about it as a blood sample,” he says. “You send a blood sample to the lab and you get the result. The [people] doing the diagnosis [are] still the clinicians, but they use these objective measures to make the diagnosis.”

The story also states that this work was commissioned by the U.S. Department of Defense as part of its suicide prevention efforts. To learn more about the project visit the SimSensei page on our website.

Andrew Gordon’s Narrative Framing Work Featured in USC Dornsife Magazine

An article and video about Andrew Gordon’s neurobiology of narrative framing work, being done in collaboration with USC’s Brain and Creativity Institute, is showcased in the current USC Dornsife magazine. Gordon is the principal investigator of this study, which explores how people from different cultures react psychologically to stories. ICT’s Kenji Sagae is also a researcher on the project.

Bill Swartout Painted and Profiled in Malibu Times Series

Bill Swartout, ICT’s director of technology, is featured in Faces of Malibu, a series that showcases Malibu residents with a painted portrait and an interview. Portrait painted by Johanna Spinks.

SimSensei

Download a PDF overview.
Summary
The University of Southern California Institute for Creative Technologies’ (ICT) pioneering efforts within DARPA’s Detection and Computational Analysis of Psychological Signals (DCAPS) project encompass advances in the artificial intelligence fields of machine learning, natural language processing and computer vision. These technologies identify indicators of psychological distress such as depression, anxiety and PTSD, and are being integrated into ICT’s virtual human application to provide healthcare support.

Goals
This effort seeks to enable a new generation of clinical decision support tools and interactive virtual agent-based healthcare dissemination/delivery systems that are able to recognize and identify psychological distress from multimodal signals. These tools aim to provide military personnel and their families better awareness of and access to care while reducing the stigma of seeking help. For example, the system’s early identification of a patient’s high or low distress state could generate the appropriate information to help a clinician diagnose a potential stress disorder. User-state sensing can also be used to create long-term patient profiles that help assess change over time.

Capabilities
ICT is expanding its expertise in automatic human behavior analysis to identify indicators of psychological distress in people. Two technological systems are central to the effort. Multisense automatically tracks and analyzes, in real time, facial expressions, body posture, acoustic features, linguistic patterns and higher-level behavior descriptors (e.g., attention and fidgeting). From these signals and behaviors, Multisense infers indicators of psychological distress that directly inform SimSensei, the virtual human. SimSensei is a virtual human platform able to sense real-time audio-visual signals captured by Multisense. It is specifically designed for healthcare support and is based on more than 10 years of expertise in virtual human research and development at ICT. The platform enables an engaging face-to-face interaction in which the virtual human automatically reacts to the perceived user state and intent through its own speech and gestures. DCAPS is not aimed at providing an exact diagnosis, but at providing a general metric of psychological health.
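
Purely as a hypothetical sketch of this data flow, the loop below aggregates per-frame behavior descriptors into a rolling indicator that a virtual human could consume when choosing its next behavior. All names, thresholds and structure are illustrative; the actual Multisense/SimSensei platform is a distributed multimodal pipeline, not a single loop like this.

```python
# Hypothetical sketch: per-frame descriptors -> rolling distress indicator
# -> virtual human behavior choice. Names and thresholds are illustrative.
from collections import deque
from dataclasses import dataclass
import random

@dataclass
class FrameDescriptors:
    gaze_down: float         # proportion of the frame spent gazing downward
    smile_intensity: float   # 0..1 smile strength
    fidgeting: float         # 0..1 fidgeting activity

def distress_score(window):
    """Toy indicator: downward gaze and fidgeting raise it, smiling lowers it."""
    return sum(f.gaze_down + f.fidgeting - f.smile_intensity for f in window) / len(window)

window = deque(maxlen=30)    # rolling ~1 s of descriptors at 30 fps
for _ in range(300):         # stand-in for the perception stream
    window.append(FrameDescriptors(random.random(), random.random(), random.random()))

if distress_score(window) > 0.5:
    print("virtual human: slow gesture rate, ask a gentle follow-up")
else:
    print("virtual human: continue with neutral follow-up")
```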

Paul Graham: “Measurement-Based Synthesis of Facial Microgeometry”

Introduction: Current scanning techniques record facial mesostructure with submillimeter precision, showing pores, wrinkles, and creases. However, surface roughness continues to shape specular reflection at the level of microstructure: micron-scale structures. Here, we present an approach to increase the resolution of mesostructure-level facial scans using microstructure examples digitized about the face. We digitize the skin patches using polarized gradient illumination and 10-micron-resolution macro photography, and observe point-source reflectance measurements to characterize the specular reflectance lobe at this smaller scale. We then perform constrained texture synthesis to create appropriate surface microstructure per facial region, blending the regions to cover the entire face. We show that renderings of microstructure-augmented facial models preserve the original scanned mesostructure and exhibit surface reflections that are qualitatively more consistent with real photographs.

Belinda Lange Delivers Keynote at REHAB 2013

Belinda Lange is a research scientist at the Institute for Creative Technologies and a research assistant professor in the School of Gerontology at the University of Southern California. She received her Ph.D. and degree in physiotherapy (with honors) from the University of South Australia and her science degree from Flinders University. Lange’s research interests include the use of interactive video game and virtual reality technologies for motor rehabilitation, exergaming, cognitive assessment, postoperative exercise and virtual human character interactions. She is on the Board of Directors of the International Society for Virtual Rehabilitation and is an associate editor for the journal Computer Animation and Virtual Worlds. Belinda was on the conference program committee for the Meaningful Play conference in 2008, 2010 and 2012, co-chaired the Presence 2009 Conference, was on the organizing committee for the rehabilitation track of the Games for Health Conference in 2010, 2011 and 2012, was the workshop chair for the International Virtual Rehabilitation Conference in Zurich in 2011, and is on the program committee of the International Society for Presence Research. She is also a co-founder of games4rehab.org, a non-profit social network that brings together individuals with disabilities and those undergoing rehabilitation with researchers, clinicians and game industry professionals.

Randall Hill, Jr. at the 51st National Junior Science and Humanities Symposium

ICT Executive Director Randall Hill, Jr. will give the keynote address at the 51st National Junior Science and Humanities Symposium. The National JSHS will bring together 240 high school students who qualify for attendance by submitting and presenting original scientific research papers in regional symposia held at universities nationwide. Approximately 100 adult leaders, high school teachers, university faculty, ranking military guests, and others attend and join in encouraging the future generation of scientists and engineers and celebrating student achievement in the sciences.

ICT Researchers Win Best Paper at IEEE Conference on Automatic Face and Gesture Recognition 2013

Congratulations to co-lead authors Stefan Scherer and Giota Stratou, along with Jill Boberg, Jonathan Gratch, Skip Rizzo and Louis-Philippe Morency. This ICT team received the best paper award at the prestigious IEEE Conference on Automatic Face and Gesture Recognition 2013 in Shanghai, China. The paper, Automatic Behavior Descriptors for Psychological Disorder Analysis, highlights facial behavior indicators correlated with depression, anxiety and PTSD. This research is being used to inform advances in ICT’s SimSensei project. Watch the video below for more information.

CNN’s The Next List to Feature ICT and Virtual Reality Therapy for PTSD

Tune into The Next List on CNN, Saturday, May 4 at 11:30 a.m. PST / 2:30 p.m. EST.

This week’s episode features ICT’s Skip Rizzo and the development of a virtual reality exposure therapy for PTSD. An earlier episode focused on how the treatment is helping service members and veterans of the conflicts in Iraq and Afghanistan.

The show’s producers returned to ICT this week to explore how this DoD-funded research might be transitioned to help those suffering from PTSD after experiencing traumatic events like the Boston bombings.

Watch the preview below and the entire show on Saturday.

Forbes Highlights ICT’s Jacki Morie and her Virtual Worlds Work

Forbes features a Q and A with Jacki Morie, a senior scientist at ICT who studies virtual worlds and how people use them. The article notes Morie’s work exploring building virtual worlds as healing spaces for veterans as well as her belief in their potential to impact many areas of daily life. “You can use virtual worlds in education, in delivery services, or as an advanced form of telehealthcare that offers so much more than videoconferencing,” she said. “Virtual worlds can give us social connectivity, built-in support groups, and ways to avoid ever being alone again.”

Nature Features Andrew Gordon’s Research Studying Personal Stories on Blogs

An article in Nature covering new guidance for U.S. internet research highlighted Andrew Gordon’s work studying millions of personal stories found on blogs. Gordon’s research features data collected from individuals who post online. The new recommendations are from the Department of Health and Human Services, the agency that governs human-subjects research. “There’s a gap between the expectation and the reality of what can be done with technology, so it really complicates the issue of what is identifiable private information,” Gordon said in the article. Learn more about Andrew Gordon and his internet research here.

Paul Debevec: “Lighting Hollywood’s Photoreal Digital Actors”

Abstract: Creating photoreal digital humans has been a ‘holy grail’ for computer graphics, and may revolutionize the production and impact of movies and video games. This talk will present new facial scanning, animation, and rendering techniques used in recent projects, and also introduce new approaches for material and object scanning.

USC Awards Paul Rosenbloom Phi Kappa Phi Faculty Recognition Award

Paying tribute to the wide range of intellectual and leadership achievements of the past year, the 32nd annual Academic Honors Convocation was held on April 23 at Town & Gown to recognize the best and the brightest among USC students, faculty and administrators. USC President C. L. Max Nikias awarded ICT’s Paul Rosenbloom the Phi Kappa Phi Faculty Recognition Award. Paul Rosenbloom is a professor at the USC Viterbi School of Engineering and is currently working at the USC Institute for Creative Technologies on a new cognitive/virtual-human architecture – Sigma (Σ) – based on graphical models.

Read the full USC News story.

Stefan Scherer: “Automatic Behavior Descriptors for Psychological Disorder Analysis”

Abstract: We investigate the capabilities of automatic nonverbal behavior descriptors to identify indicators of psychological disorders such as depression, anxiety, and post-traumatic stress disorder (PTSD). We seek to confirm and enrich present state of the art, predominantly based on qualitative manual annotations, with automatic quantitative behavior descriptors. We propose four nonverbal behavior descriptors that can be automatically estimated from visual signals. We introduce a new dataset called the Distress Assessment Interview Corpus (DAIC) which includes 167 dyadic interactions between a confederate interviewer and a paid participant. Our evaluation on this dataset shows correlation of our automatic behavior descriptors with specific psychological disorders as well as a generic distress measure. Our analysis also includes a deeper study of self-adaptor and fidgeting behaviors based on detailed annotations of where these behaviors occur.

In particular, we identify three main findings: (1) There are significant differences in the automatically estimated gaze behavior of subjects with psychological disorders. In particular, an increased overall downwards angle of the gaze could be automatically identified using two separate automatic measurements, for both the face as well as the eye gaze; (2) using automatic measurements, we could identify on average significantly less intense smiles for subjects with psychological disorders as well as significantly shorter average durations of smiles; (3) based on the manual analysis, subjects with psychological conditions exhibit on average longer self-touches and fidget on average longer with both hands (e.g. rubbing, stroking) and legs (e.g. tapping, shaking).

Bruce John: “Training Effective Investigative Child Interviewing Skills through Use of a Virtual Child”

Introduction
In abuse testimony, children are portrayed by the defense as highly suggestible. As a result, child witnesses are often not believed. On the other hand, suggestive interviewing methods have been shown to undermine young children’s accuracy, invalidate testimony and lead to false convictions. The NICHD investigative interview protocol provides potential interviewers with an effective, standardized interview method to obtain accurate information from children. Unfortunately, the protocol is difficult to teach. Training programs reliably increase interviewer knowledge but often fail to affect interviewer behavior (1). The most effective approach is practice with feedback (2). However, simulated interviews suffer from adult simulators’ difficulties in accurately portraying child witnesses’ limitations (3). For ethical reasons, children are not utilized for mock child abuse interviews.


Objectives
The objective of this pilot study was to test a prototype of an interactive learning environment used to train and evaluate investigative child interviewing skills. The VCW project’s prototype tracks interviewer question types during an interview with a virtual child. The creation of a virtual child provides a cost-effective and standardized process for teaching interviewing skills that emphasize open-ended questioning.


Methods
Forty-five subjects were recruited to participate in a preliminary study using a prototype of the Virtual Child Witness (VCW) program. The expert group comprised twenty-two participants who had graduated from an investigative interviewer training seminar. The novice group comprised twenty-three college graduates from the USC Institute for Creative Technologies who were screened as interviewing novices.


Results
An independent-samples t-test indicated that the expert group (group 1: M = .860, SD = .151) asked a significantly higher percentage of open-ended questions than the novice group (group 2: M = .477, SD = .284), t(24.748) = 5.150, p < .001, d = 1.68, with a 95% CI. Independent-samples t-tests also showed significant differences between the groups on the other question types.
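For readers who want to check such figures, a Welch-style independent-samples t-test can be recomputed from the reported summary statistics alone. Note that because the published means and SDs are rounded, the result will only approximate the reported t(24.748) = 5.150. A sketch using SciPy:

```python
# Recomputing an independent-samples t-test from the reported summary
# statistics (means, SDs, group sizes). Rounded inputs mean the output will
# only approximate the statistic reported in the paper.
from scipy import stats

t, p = stats.ttest_ind_from_stats(
    mean1=0.860, std1=0.151, nobs1=22,   # expert group
    mean2=0.477, std2=0.284, nobs2=23,   # novice group
    equal_var=False)                     # Welch's test, consistent with the fractional df
print(f"t = {t:.3f}, p = {p:.2e}")
```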

Conclusions
The expert and novice interview groups had a number of similarities, including relative education level, computer usage, and finding the Virtual Child Witness (VCW) system easy to use and understandable. However, the two groups differed greatly in the types of questions they asked the virtual child during the interview. As expected, the expert group asked more open-ended questions during the interview, whereas the novice group asked more of the other question types, which were considered incorrect. While this initial prototype is capable of distinguishing between different interviewer skill levels, more development and testing is needed to assess its use as a pedagogical tool. Future work will also focus on simulated medical forensic interviews with physicians and on natural language recognition capability for the interview.


References

  1. Powell, M. B. (2002). Specialist training in investigative and evidential interviewing: Is it having any effect on the behaviour of professionals in the field? Psychiatry, Psychology and the Law, 9(1), 44-55. doi:10.1375/132187102760196763
  2. Lamb, M. E., Sternberg, K. J., Orbach, Y., Esplin, P. W., & Mitchell, S. (2002). Is Ongoing Feedback Necessary to Maintain The Quality of Investigative Interviews With Allegedly Abused Children? Applied Developmental Science, 6(1). doi:10.1207/S1532480XADS0601_04
  3. Powell, M.B., Fisher, R.P., & Hughes-Scholes, C.H. (2008). The effect of using trained versus untrained adult respondents in simulated practice interviews about child abuse. Child Abuse & Neglect, 32, 1007-1016. doi:10.1016/j.chiabu.2008.05.005
  4. Wright, R., & Powell, M. B. (2005). Investigative interviewers’ perceptions of their difficulty in adhering to open-ended questions with child witnesses. International Journal of Police Science & Management, 8(4), 316-325.

Army AL&T Magazine Features ICT’s Work Creating Low-Cost Head Mounted Displays

Army AL&T Magazine highlighted ICT achievements in increasing quality and bringing down costs of head mounted displays (HMDs) for virtual reality experiences. The article focused on the open-source designs from Mark Bolas and the ICT’s Mixed Reality Lab. Starting with early smartphone-based, foam-core folded prototypes and moving into 3D printed viewers, the lab’s research in low-cost virtual reality displays produced the Socket HMD, an open design used to inform elements of the commercial Oculus Rift headset. The article begins on page 77 of the April-June issue.

Stacy Marsella: “Felt emotion and social context determine the intensity of smiles in a competitive video game”

Abstract: The present study uses automatic facial expression recognition software to examine the relationship between social context and emotional feelings on the expression of emotion, to test claims that facial expressions reflect social motives rather than felt emotion. To vary emotional feelings, participants engaged in a competitive video game. Deception was used to systematically manipulate perceptions of winning or losing. To vary social context, participants played either with friends or strangers. The results support the hypothesis of Hess and colleagues that smiling is determined by both factors. The results further highlight the value of automatic expression recognition technology for psychological research and provide constraints on inferring emotion from facial displays.

ICT and Law School Develop Virtual Tool to Train Child Interviewers

Conducting interviews with children who have witnessed a crime or have been victims of abuse or neglect represents some of the most challenging and sensitive investigative work for attorneys, social workers and law enforcement officers.

Getting a child to overcome embarrassment and fear and disclose information is a process where the wrong question or reaction can cause a child to withdraw, stop talking or provide misinformation.

A joint research project between the USC Gould School of Law and the USC Institute for Creative Technologies (ICT) aims to train future investigative interviewers, helping them to ask the right questions and use the right techniques to get children to talk.

The Virtual Child Witness (VCW) project is a virtual reality computer interface that will eventually live on the Web, making it accessible to students and professionals all over the world. It combines the child witness research of USC Gould Professor Thomas Lyon and the virtual reality advances of the ICT’s Albert “Skip” Rizzo. The interdisciplinary USC collaboration has already received a $49,500 Zumberge Research Grant to jump-start research, expand the program’s capability and allow its authors to seek further funding.

“We need to teach people how to deal with very sensitive questions and take in really awful answers,” said Lyon, holder of the Judge Edward J. and Ruey L. Guirado Chair in Law and Psychology at USC Gould.

Over the past several years, Lyon has won $3.7 million in grants from the National Institute of Child Health and Human Development to develop, refine and test the protocol he created to interview maltreated children about their abuse.

As a provost grant reviewer, Lyon heard about Justina, an ICT virtual patient developed with the Keck School of Medicine of USC as a tool to train psychiatry residents, and thought a virtual role player could also serve as a great tool to prepare students for working with children.

“I never imagined that I would be working with computer scientists,” Lyon said. “Now that we have begun, it seems like the perfect combination.”

The virtual patient and virtual child witness collaborations are examples of USC researchers crossing disciplines to solve a problem — in this case, how to improve interviewing skills whether an individual is training to be a psychiatrist or an attorney.

“The child is not a patient per se but the interaction is similar,” said Rizzo, ICT’s associate director for medical virtual reality. “You’ve got to be able to use proper questioning strategy to elicit honest information from the child, and this is where the training process takes place. You can read about this stuff in a book, but you don’t get good at it until you practice. Using virtual characters allows students to practice anytime, any place without doing any harm.”

The VCW project began when current ICT Project Assistant Bruce John ’11 was an undergraduate working for Lyon as a research assistant in his developmental psychology lab.

John developed an initial prototype of a virtual child witness for a game design class during his final semester at USC. Using the questions and techniques that Lyon researched and developed, and the resources available to him through ICT, where he had been working since the summer, John created a program in which a virtual child would react to questions selected by the user. The program could track what types of questions were asked and assess the user’s performance.

The next steps are to move the virtual child onto a system that can be accessed online. ICT researchers built such a platform for SimCoach, the institute’s interactive online virtual human health care guide, originally developed to give U.S. veterans and service members who are hesitant to seek help for mental health issues a way to remotely access health care and support.

On April 25, John and Thomas Talbot, another member of the ICT medical virtual reality team and co-investigator on the VCW project, will present a paper on the first study conducted with the project at the International Pediatric Simulation Symposia and Workshops, a conference at The New York Academy of Medicine.

The purpose of the study was to test the early prototype of the virtual child witness and evaluate whether it could distinguish between the correct and incorrect types of questions that interviewers should be asking. The system, which was able to assess the skill levels of interviewers, recognized that novice interviewers asked more yes or no questions, while trained interviewers asked a greater percentage of more meaningful open-ended queries. Future research will focus on assessing the system’s value as a teaching tool.

“This project is a great marriage in that Professor Lyon will be able to reach more people on a Web-based platform, and ICT can further advance and tailor SimCoach for its future use as a virtual human research platform,” John said. “It advances the work of the law school, ICT, and the research and development of virtual role players for teaching and training across many applications.”

About the USC Institute for Creative Technologies

At the University of Southern California Institute for Creative Technologies (ICT), leaders in artificial intelligence, graphics, virtual reality and narrative advance low-cost immersive techniques and technologies to solve problems facing service members, students and society. Established in 1999, ICT is a DoD-sponsored University Affiliated Research Center (UARC) working in collaboration with the U.S. Army Research Laboratory. UARCs are aligned with prestigious institutions conducting research at the forefront of science and innovation. ICT brings film and game industry artists together with computer and social scientists to improve how people interact with computers and expand what they use them for. ICT prototypes, including interactive virtual humans, virtual reality systems and video game trainers, have transitioned for use in military training, health therapies, education and more.

About the USC Gould School of Law
The USC Gould School of Law offers a premier inter-professional education to highly motivated students preparing for a career that will span the coming decades. As the legal profession continues to evolve, no school is better positioned to provide the education that will be the platform for the next generation of lawyers who will practice on a worldwide stage. The legal profession is dynamic, and the Gould School has always taken pride in adapting its methods to provide a legal education tailored to the needs of the current environment, while maintaining its strong core commitments. http://weblaw.usc.edu/

Read the story on USC News

PBS Features ICT’s Virtual Health Coaches

An article on PBS Media Shift highlighted ICT’s work in creating effective virtual reality health coaches, including SimSensei and SimCoach. The story notes how the team at ICT is advancing research to create computer systems that can detect signs of distress. “We are not building digital therapists, nor are we trying to replace clinicians,” ICT’s Skip Rizzo said in the story, “but we are creating the building blocks for a system that can mimic empathy.” Stefan Scherer, a post-doc in Louis-Philippe Morency’s MultiComp Lab, and John Hart, ICT program manager at the Army Simulation and Training Technology Center, were also quoted.

Paul Debevec: “From Spider-Man to Avatar, Emily to Benjamin: Achieving Photoreal Digital Actors”

Abstract: Somewhere between Final Fantasy in 2001 and The Curious Case of Benjamin Button in 2008, digital actors crossed the “Uncanny Valley” from looking strangely synthetic to believably real. I will describe some of the key technological advances which have enabled this achievement. Two technologies from our laboratory, High Dynamic Range Lighting and the Light Stage facial capture systems, have been used in creating realistic digital characters in movies such as Spider-Man 2, Superman Returns, The Curious Case of Benjamin Button, and Avatar. For an in-depth example, I will describe how high-resolution face scanning, advanced character rigging, and performance-driven facial animation were combined to create “Digital Emily”, a collaboration between our laboratory and Image Metrics. Actress Emily O’Brien was scanned in Light Stage 5 in 33 facial poses at the resolution of skin pores and fine wrinkles. These scans were assembled into a rigged face model driven by Image Metrics’ video-based animation software, and the resulting photoreal facial animation premiered at SIGGRAPH 2008. I will also present a 3D Teleconferencing system which uses live facial scanning and an autostereoscopic display to transmit a person’s face in 3D and make eye contact with remote collaborators, and a new head-mounted facial performance capture system based on photometric stereo.

The Verge Features ICT’s Use of Oculus Rift for Virtual Reality PTSD Treatment

Writer Katie Drummond wrote an article on The Verge about the Oculus Rift’s impact on virtual reality therapy and focused on ICT’s Skip Rizzo and his work with the new device. The story noted that Oculus founder Palmer Luckey worked at ICT before founding his VR company. “With exposure therapy and PTSD, the entire idea is to get someone’s head into it as much as you can — we think that’s what’ll lead to better clinical outcomes,” said Rizzo in the story. Oculus Rift, given its low price and unprecedented realism, might reduce costs and increase immersion, Rizzo noted. The article stated that he plans to recommend that treatment sites already using Virtual Iraq consider using Oculus Rift, and has already demoed the system to military clinicians at Joint Base Lewis-McChord. “This has the capacity to turn virtual reality [therapy] into a mass market treatment,” he said. “I’m sure anyone doing this kind of clinical work will agree with me.”

Belinda Lange: “Gaming Technology for People with Cerebral Palsy & Severe Movement Disorders”

Belinda Lange showcased the work she has been doing in collaboration with Dr. Eileen Fowler and Blair Webb, including the Flexible Action and Articulated Skeleton Toolkit (FAAST) and projects using the Microsoft Kinect.

Albert “Skip” Rizzo: “Virtual Reality Goes to War: Clinical Applications for the Prevention and Treatment of the Psychological Wounds of War”

Abstract: War is perhaps one of the most challenging situations that a human being can experience. The physical, emotional, cognitive and psychological demands of a combat environment place enormous stress on even the best-prepared military personnel. Numerous reports indicate that the incidence of posttraumatic stress disorder (PTSD) in returning OEF/OIF military personnel is creating a significant healthcare challenge. This situation has served to motivate research on how to better develop and disseminate evidence-based treatments for PTSD and other psychosocial conditions. In this regard, Virtual Reality delivered exposure therapy for PTSD is currently being used with initial reports of positive outcomes. This presentation will detail how virtual reality applications are being designed and implemented across various points in the military deployment cycle to prevent, identify and treat combat-related PTSD in OIF/OEF Service Members and Veterans. Recent work will also be presented that employs virtual human agents that serve in the role of “Virtual Patients” for clinical training of novice healthcare providers in both military and civilian settings and as online healthcare guides for breaking down barriers to care. The projects in these areas that will be presented have been developed at the University of Southern California Institute for Creative Technologies, a U.S. Army University Affiliated Research Center, and will provide a diverse overview of how virtual reality is being used to deliver exposure therapy, assess PTSD and cognitive function, provide stress resilience training prior to deployment and break down barriers to care. Visit our website (medvr.ict.usc.edu) and YouTube channel (youtube.com/user/albertskiprizzo) for more information on this work.

Learning Objectives: At the conclusion of this session, participants will be able to:

  1. Describe and define Virtual Reality (VR) and explain how it can generally be used for clinical and research applications.
  2. Describe and evaluate the rationale for the use of VR as a tool to deliver exposure therapy for anxiety disorders and PTSD.
  3. Describe, analyze and evaluate the clinical outcomes from the initial research using VR to deliver a prolonged exposure approach in the treatment of PTSD with OIF/OEF Service Members and Veterans.
  4. Describe and differentiate the use of Virtual Reality as a tool for psychological resilience training with Service Members prior to a combat deployment.
  5. Explain how the use of digital Virtual Humans could be applied to clinical training and health care outreach and support and be able to propose how this technology could be used to address the needs of the clinical population that you are most familiar with.

MxR Lab’s Infinite Walking Research Featured in Phys.Org, ACM and New Scientist

Articles in Phys.Org, ACM Tech News and New Scientist covered the virtual reality walking research of Evan Suma and Mark Bolas. The stories noted recent work presented at the IEEE Virtual Reality conference, done in collaboration with colleagues from the Vienna University of Technology, that allows virtual reality headset wearers to roam through automatically generated rooms and hallways when in fact they are walking in circles. Learn more about ICT’s MxR Lab here.

David Traum: “Non-cooperative and Deceptive Dialogue”

Abstract: Cooperation is usually seen as a central concept in the pragmatics of dialogue. There are a number of accounts of dialogue performance and interpretation that require some notion of cooperation or collaboration as part of the explanatory mechanism of communication (e.g., Grice’s maxims, interpretation of indirect speech acts, etc.). Most advanced computational work on dialogue systems has also generally assumed cooperativity, with recognizing and conforming to the user’s intention seen as central to the success of the dialogue system. In this talk I will review some recent work on modeling non-cooperative dialogue and the creation of virtual humans who engage in non-cooperative and deceptive dialogue. These include “tactical questioning” role-playing agents, who have conditions under which they will reveal truthful or misleading information, and negotiating agents, whose goals may be at odds with those of a human dialogue participant, who calculate utilities for different dialogue strategies, and who can keep secrets, using plan-based inference to avoid giving clues that would reveal them.

NPR Features Paul Debevec and ICT’s New Dimensions in Testimony Collaboration with USC Shoah

The NPR program “Here and Now” covered ICT’s New Dimensions in Testimony project, a collaboration with the USC Shoah Foundation to record and display Holocaust survivors’ testimony in a way that will continue the dialogue between Holocaust survivors and learners far into the future. Paul Debevec, director of ICT’s Graphics Lab, was interviewed for the segment, which aired on NPR stations around the country, along with Stephen Smith, executive director of the USC Shoah Foundation.

Game Trailers.com Showcases ICT’s Light Stage Technologies and Photo-Real Video Game Faces

A NextTech segment on GameTrailers.com features Paul Debevec and his work creating realistic digital doubles. The segment includes images from ICT’s collaboration on Nvidia’s FaceWorks project, which rendered a convincing digital face for video games in real time.
Watch the full segment here.

CNET, Mashable and Discovery Canada Feature SimSensei and Research in Automated Depression Recognition

CNET, Mashable and Discovery Canada featured research from ICT’s MultiComp Lab that automatically tracks and analyzes in real-time facial expressions, body posture, acoustic features, linguistic patterns and other behaviors like fidgeting. These signals can aid in identifying indicators of psychological distress.

These stories showcase SimSensei, ICT’s virtual human platform specifically designed for healthcare support. The platform enables an engaging face-to-face interaction where the virtual human automatically reacts to the perceived user state and intent through its own speech and gestures. The SimSensei project is led by Louis-Philippe Morency and Skip Rizzo at ICT. The stories note that SimSensei is an aid, not a replacement, for a human health care provider.

The coverage is based on a paper to be presented at the Automatic Face and Gesture Recognition conference in Shanghai, China, this month. The paper was a joint ICT effort from post-doctoral researcher Stefan Scherer and research programmer Giota Stratou, with Jonathan Gratch, associate director for virtual humans research, leading the data acquisition portion.

Digital Ira

Download a PDF overview.

Creating a Real-Time Photoreal Digital Actor

Activision, Inc. and USC Institute for Creative Technologies

In 2008, the “Digital Emily” project showed how a series of high-resolution facial expressions scanned in a light stage could be rigged into a real-time photoreal digital character and driven with video-based facial animation techniques.  However, Emily was rendered offline, was just the front of the face, and was never seen in a tight closeup.

In this collaboration between Activision and USC ICT, we tried to create a real-time, photoreal digital human character which could be seen from any viewpoint, in any lighting, and could perform realistically from video performance capture even in a tight closeup. In addition, we needed this to run in a game-ready production pipeline. To achieve this, we scanned the actor in thirty high-resolution expressions using the USC ICT’s new Light Stage X system [Ghosh et al. SIGGRAPH Asia 2011] and chose eight expressions for the real-time performance rendering. To record the performance, we shot multi-view 30fps video of the actor performing improvised lines using the same multi-camera rig. We used a new tool called Vuvuzela to interactively and precisely correspond every expression’s (u,v) coordinates to the neutral expression, which was retopologized to an artist mesh. Our new offline animation solver works by creating a performance graph representing dense GPU optical flow between the video frames and the eight expressions. This graph gets pruned by analyzing the correlation between the video frames and the expression scans over twelve facial regions. The algorithm then computes dense optical flow and 3D triangulation, yielding per-frame spatially varying blendshape weights approximating the performance.
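As a rough illustration of the solver’s output, each animated frame amounts to the neutral scan plus a weighted combination of expression deltas. The sketch below shows the uniform-weight version of that idea with fabricated arrays; the actual solver described above computes spatially varying weights per facial region.

```python
# Minimal sketch of blendshape combination: a frame is the neutral mesh plus a
# weighted sum of expression deltas. Geometry and weights are fabricated.
import numpy as np

def blend_frame(neutral, expressions, weights):
    """neutral: (V, 3); expressions: (E, V, 3) scans; weights: (E,)."""
    deltas = expressions - neutral                 # (E, V, 3) offsets from neutral
    return neutral + np.tensordot(weights, deltas, axes=1)

V = 1000
neutral = np.zeros((V, 3))
expressions = np.random.rand(8, V, 3) * 0.01       # eight scans, as in the project
weights = np.array([0.6, 0.2, 0.1, 0.1, 0.0, 0.0, 0.0, 0.0])
frame_mesh = blend_frame(neutral, expressions, weights)
```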

To create the game-ready facial rig, we transferred the mesh animation to standard bone animation on a 4K polygon mesh using a bone weight and transform solver. The solver optimizes the smooth skinning weights and the bone-animated-transforms to maximize the correspondence between the game mesh and the reference animated mesh.  The rendering technique uses surface stress values to blend diffuse texture, specular, normal, and displacement maps from the high-resolution scans per-vertex at run time. The DirectX11 rendering includes screen-space subsurface scattering, translucency, eye refraction and caustics, real-time ambient shadows, a physically-based two-lobe specular reflection with microstructure, depth of field, antialiasing, and film grain.  This is a continuing project and some ongoing work includes simulating eyelid bulge, displacement shading, ambient transmittance and several other dynamic effects.
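The bone weight and transform solver described above targets the standard smooth-skinning (linear blend skinning) model used in game engines. A minimal sketch of that model, with hypothetical arrays, follows; it is illustrative only, not the project’s solver.

```python
# Minimal sketch of linear blend skinning: each vertex is a weighted blend of
# its position under each bone's transform. Arrays are hypothetical.
import numpy as np

def skin_vertices(rest, weights, bones):
    """rest: (V, 3); weights: (V, B), rows sum to 1; bones: (B, 4, 4)."""
    rest_h = np.concatenate([rest, np.ones((len(rest), 1))], axis=1)  # homogeneous
    per_bone = np.einsum('bij,vj->bvi', bones, rest_h)  # (B, V, 4) per-bone positions
    blended = np.einsum('vb,bvi->vi', weights, per_bone)  # weighted blend: (V, 4)
    return blended[:, :3]

rest = np.random.rand(4, 3)
weights = np.array([[1.0, 0.0], [0.7, 0.3], [0.3, 0.7], [0.0, 1.0]])
bones = np.stack([np.eye(4), np.eye(4)])
posed = skin_vertices(rest, weights, bones)  # identity transforms -> rest pose
```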

TIME Article Endorses ICT’s Virtual Reality Exposure Therapy for Treating PTSD

Writing in TIME, Army psychiatrist Artin Terhakopian lauds ICT’s virtual reality therapy for treating PTSD as having “the potential to reach the thousands of patients who have had the most difficult and, dare I say, an impossible time interfacing and benefiting from traditional methods of therapy and mental health care delivery.” Dr. Terhakopian cites a recent paper by ICT’s Skip Rizzo that appeared in Psychiatric Annals and concludes that “the VR therapy interface developed and scientifically elaborated at ICT appears to be just the mechanism needed to reach the patients who through no fault of their own are less optimally equipped to fend off behavioral problems and mental illnesses and recover from them once in the grip of the problem or illness.”

USC Researcher Uses a Robot as a Real Roving Reporter

Telepresence experiment studies how interview subjects in Spain will respond to questions posed by a robot being operated in LA

Who/What: Researcher Nonny de la Peña, a pioneering immersive journalist and USC Ph.D. student who is exploring how new technologies can be used in journalism, is going to be interviewing researchers and activists in Spain via a robot. De la Peña will operate the robot while wearing a motion capture suit (like the ones used in Avatar) at the University of Southern California Institute for Creative Technologies’ Mixed Reality Lab in Playa Vista. Her movements and interactions will be relayed into the robot, which will be in Spain with the interview subjects. Her gestures and gaze will be replicated by the robot, in order to create a communication experience that goes beyond talking on a phone or using Skype.

Why: Aside from being interesting topics (the AIDS researchers’ breakthroughs and the Catalan independence movement), this is an experiment in telepresence – exploring what it means to have the reporter in the room (as a robot) interviewing the subjects in Spain from the lab here in LA. This project is part of a larger EU-funded research effort investigating how a person can visit a remote location and feel fully immersed in the new environment. Will a reporter get to “go” to the moon next? Travel to a dangerous frontline story during a revolution? Walk the bottom of the ocean? To our knowledge, this will be the first time a reporter is driving a robot to do an interview. Talk about advancing the story!

When: 8:00 am Thursday, April 4, 2013

Where: USC Institute for Creative Technologies
Mixed Reality Lab
5318 McConnell Avenue
Los Angeles, CA 90066

More: This experiment is a collaboration with virtual reality leader Mel Slater at ICREA-University of Barcelona. Read about his work here. The event has received support from www.nvisinc.com and www.xsens.com.

RSVP: Orli Belman, 310 709-4156, belman@ict.usc.edu

Read more about Nonny de la Peña
About MxR

KTLA’s Tech Report Showcases ICT Digital Technology Work

A story by tech reporter Rich DeMuro featured ICT work that was on display at the USC GLIMPSE Digital Technology Showcase. ICT’s Paul Debevec, David Nelson, David Traum and Ron Artstein all appear in the story which provides examples of ICT’s work in immersive technologies, including the New Dimensions in Testimony collaboration with the USC Shoah Foundation and the MxR Lab’s immersive viewers.

Belinda Lange’s Video Game Rehabilitation Work Featured in the Daily Trojan

An article in the USC Daily Trojan highlighted Belinda Lange and her work developing video games that can be used for physical therapy. The article noted that Lange has been testing her Kinect-based Jewel Mine system in area clinics and that patients and clinicians are beginning to view the application as a helpful and innovative method to add to their rehabilitation programs.

“It’s helping them to realize their potential because they’re focusing on a game and not focusing on what they can’t do,” Lange said.

New Scientist and Gizmodo Feature ICT Research Teaching Computers to Recognize Signs of Depression

Two articles highlight research from ICT’s MultiComp Lab that automatically tracks and analyzes in real-time facial expressions, body posture, acoustic features, linguistic patterns and other behaviors like fidgeting. These signals can aid in identifying indicators of psychological distress. New Scientist (subscription may be required) featured a study to be presented at the Automatic Face and Gesture Recognition conference in Shanghai, China, this month.

The paper behind the coverage was a joint ICT effort from post-doctoral researcher Stefan Scherer and research programmer Giota Stratou, with Jonathan Gratch, associate director for virtual humans research, leading the data acquisition portion. The team used ICT’s MultiSense system to identify characteristic movements that indicate someone may be depressed. “Presently broad screening is done by using only a checklist of yes/no questions or point scales, but all the non-verbal behaviour is not taken into account,” Scherer said. “This is where we would like to put our technology to work.”

This story, as well as one on Gizmodo, showcases SimSensei, ICT’s virtual human platform specifically designed for healthcare support. The platform enables an engaging face-to-face interaction where the virtual human automatically reacts to the perceived user state and intent through its own speech and gestures. The SimSensei project is led by Louis-Philippe Morency and Skip Rizzo at ICT. The stories note that SimSensei is an aid, not a replacement, for a human health care provider.

CNN and tech blogs feature ICT’s Digital Ira Collaboration with Activision

ICT’s Graphics Lab’s high-profile Digital Ira work advancing the ability to create real-time, photo-real faces was featured in this CNN/Fortune Magazine story, as well as in other outlets. The work, which was shown at the Game Developers Conference, used the Light Stage system from the ICT Graphics Lab in combination with Activision’s rendering technologies. Watch a video of the work.

Read the Mashable story.
Read the PC Gamer story.
Read the Game Planet story.

Jewish Telegraphic Agency Features New Dimensions in Testimony Project

Columnist Edmon Rodman wrote about this ICT collaboration with the USC Shoah Foundation. Rodman’s wire-service story ran in the Arizona Jewish Post and elsewhere. Rodman visited ICT and got a demonstration of the technology and an explanation of how it works from Paul Debevec, David Traum and Ron Artstein. He also spoke to Stephen Smith of the USC Shoah Foundation. The plan, the story notes, is to make the interactive testimonies available through 3-D installations in Holocaust museums and schools, allowing students and others to have a question-and-answer session with a survivor.

BBC Covers ICT Graphics Lab

The BBC News covered the ICT Graphics Lab’s advances in creating realistic digital doubles. The story included the ICT collaborations Digital Emily and New Dimensions in Testimony, and noted the use of lab-developed Light Stage technology in creating believable visual effects for the Hulk character in the recent film The Avengers.

Voice of America Covers ICT Graphics Lab Work

Voice of America reporter Elizabeth Lee visited ICT and produced this story about current and future work in the ICT Graphics Lab. Paul Debevec was interviewed for the segment, which highlighted ICT’s New Dimensions in Testimony collaboration with the USC Shoah Foundation, uses of the Light Stage technology in films, and 3-D display technologies, including ICT’s 3-D videoconferencing system and the goal of creating life-sized 3-D projections.

Yuyu Xu, Andrew W. Feng and Ari Shapiro: “A Simple Method for High Quality Artist-Driven Lip Syncing”

Abstract: We demonstrate a real-time lip animation algorithm that can be used to generate synchronized facial movements with audio generated from a text-to-speech engine or from recorded audio. Our method requires an animator to construct animations using a canonical set of visemes for all pairwise combinations of a reduced phoneme set (diphones). The diphone animations are then stitched together to construct the final animation. This method can be easily retargeted to new faces that use the same set of visemes. Thus, our method can be applied to any character that utilizes the same, small set of facial poses. In addition, our method is editable in that it allows an artist to directly and easily change specific parts of the lip animation as needed. Our method requires no learning, can work on multiple languages, and is easily replicated. We make our animations for lip-syncing English utterances publicly available. We present a study showing the subjective quality of our algorithm, and compare it to the results of a popular commercial software package.
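As a rough illustration of the stitching idea described in the abstract, the sketch below looks up a pre-authored clip for each adjacent phoneme pair and concatenates the clips with short crossfades. The clip table, phoneme names and fade length are hypothetical, not the authors’ actual data or solver.

```python
# Hypothetical sketch of diphone-based lip-sync stitching: map each adjacent
# phoneme pair to a pre-authored viseme clip, then concatenate with crossfades.
from typing import Dict, List, Tuple

Curve = List[float]  # one mouth blendshape weight curve, sampled per frame

def stitch_diphones(phonemes: List[str],
                    diphone_clips: Dict[Tuple[str, str], Curve],
                    fade: int = 2) -> Curve:
    out: Curve = []
    for a, b in zip(phonemes, phonemes[1:]):
        clip = diphone_clips[(a, b)]
        f = min(fade, len(out), len(clip))      # crossfade length at the seam
        for i in range(f):
            t = (i + 1) / (f + 1)
            out[-f + i] = (1 - t) * out[-f + i] + t * clip[i]
        out.extend(clip[f:])
    return out

clips = {('sil', 'HH'): [0.0, 0.1, 0.3], ('HH', 'EH'): [0.3, 0.5, 0.6],
         ('EH', 'L'): [0.6, 0.4, 0.3], ('L', 'OW'): [0.3, 0.6, 0.8]}
curve = stitch_diphones(['sil', 'HH', 'EH', 'L', 'OW'], clips)
print(curve)
```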

Forbes and Tech Blogs Cover FaceWorks, ICT Graphics Lab’s Collaboration with Nvidia

Does this guy look familiar?

Check out ICT’s own Ari Shapiro rendered in real-time as Digital Ira by the team at ICT’s Graphics Lab using Nvidia’s Titan graphics card. Nvidia CEO Jen-Hsun Huang showed a demonstration of this FaceWorks collaboration at the company’s GPU Technology Conference. Forbes, Pocket-Lint and other tech blogs raved about the technology, which they say “may end up shielding our eyes from the uncanny valley forever.” The demo begins at 8:40 in the video below.

The Huffington Post Features Paul Rosenbloom’s Computer Science Op-Ed

In an opinion piece on the Huffington Post, Paul Rosenbloom, a project director at ICT, makes the case that computer science deserves more respect. Rosenbloom, who is also a professor of computer science at the USC Viterbi School of Engineering, says that computer science should be viewed by students, universities and society in general as more than the sum of its parts.

“To students, it’s simply programming. To scientists in other fields, it’s a tool that helps them in their research. To the public, it’s a source of productivity in the workplace and entertainment apps. Even many computing professionals see computer science as just a form of engineering.

But thinking of computing as only programming, tools/apps or engineering fails to capture its essence as a science. It is not only a true science, concerned with understanding how the world works; it is also the basis for what can be called a great scientific domain — the computing sciences — that is in every way the equal of the three traditional scientific domains,” he writes.

Rosenbloom is the author of the book “On Computing: The Fourth Great Scientific Domain”, published by MIT Press.

MxR Uses 3D Printers to Make DIY Head Mounted Displays at IEEE VR

OPEN VIRTUAL REALITY, Research Demo Booth March 18 – 20

Description: The ICT Mixed Reality Lab is leveraging an open source philosophy to influence and disrupt industry. Projects spun out of the lab’s efforts include the VR2GO smartphone based viewer, the inVerse tablet based viewer, the Socket HMD reference design, the Oculus Rift and the Project Holodeck gaming platforms, a repurposed FOV2GO design with Nokia Lumia phones for a 3D user interface course at Columbia University, and the EventLab’s Socket based HMD at the University of Barcelona. A subset of these will be demonstrated. This open approach is providing low cost yet surprisingly compelling immersive experiences.

Workshop: Saturday, March 16
OTSVR: The Workshop on Off-The-Shelf Virtual Reality is intended to bring together researchers, professionals, and hobbyists to share ideas that leverage off-the-shelf technology for the creation of virtual reality experiences. Building on a successful workshop from IEEE VR 2012, OTSVR 2013 will provide a venue for sharing novel hardware prototypes, software toolkits, interaction techniques, and novel immersive systems and applications that integrate low-cost consumer and hobbyist devices. With an eye towards penetrating the barrier for widespread use, the workshop will focus specifically on research and applications using technology that is replicable at the price point of a typical home user. Specific areas of interest include, but are not limited to:

  • Low-cost immersive hardware designs and prototypes
  • Software toolkits for leveraging off-the-shelf technology
  • Virtual/augmented reality using smartphones or tablet devices
  • Inexpensive immersive display or projection technology
  • Novel systems or interaction techniques using low-cost motion sensing devices such as the Microsoft Kinect, Nintendo Wiimote, or Playstation Move
  • Evaluations of immersive applications that can be deployed using consumer level devices

IEEE POSTER
Flexible Spaces: A Virtual Step Outside of Reality | Khrystyna Vasylevska, Hannes Kaufmann, Mark Bolas, Evan A. Suma
Posters will be available for viewing from 8:30am to 5:00pm on Monday and Tuesday, and 8:30am to 2:00pm on Wednesday.
Posters will be hosted by authors during all breaks (excluding Lunch), and from 3:00pm to 5:00pm on Tuesday before the banquet.

IEEE PAPER
Wed. 3/20
Session 8: Perception
Peripheral Stimulation and its Effect on Perceived Spatial Scale in Virtual Environments | Adam Jones, J. Edward Swan II, Mark Bolas

3DUI – Paper presentation
Saturday, 3/16
Flexible Spaces: Dynamic Layout Generation for Infinite Walking in Virtual Environments | Khrystyna Vasylevska, Hannes Kaufmann, Mark Bolas, Evan Suma

IEEE WORKSHOP
Sunday 3/17
Evan Suma on panel: 6th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS) 2013 

Workshop on Virtual and Augmented Assistive Technology 
Sunday 3/17
Evan Suma co-author on paper: Evaluation of the Exertion and Motivation Factors of a Virtual Reality Exercise Game for Children with Autism

PANELS

Tuesday 3/19, 10 – 12
The Future of Consumer Virtual Reality
Palmer Luckey on panel

Wed. 3/20, 10 – 12
VR Science Meets Fiction
Nonny de la Peña on panel

ICT’s MxR Lab creates immersive viewers using 3-D printers, low-cost lenses and display panels at the IEEE VR conference in Orlando, Florida

The do-it-yourself (DIY) demonstration was part of the MxR Lab’s push to disrupt the virtual reality marketplace by making low-cost, high quality virtual reality (VR) head mounted displays so accessible that people can even build their own.

As part of their demo at the IEEE VR Conference, the team used a 3-D printer to construct a headset casing. Then, they added lenses and panels to create fully functional immersive viewers. Here is a video of a display being printed.

The MxR Lab, a collaboration between USC’s Institute for Creative Technologies and the USC School of Cinematic Arts, also launched a website for its open source work. The site showcases do-it-yourself VR creations, modifications and designs so people can build their own units, and hosts a forum to share ideas, get advice and have conversations about virtual reality and head mounted displays. The site also houses a new VR Dictionary, a glossary of terms and best-practices ideas for the VR community.

Visit the site at projects.ict.usc.edu/mxr/diy/

Highlights of MxR’s conference participation include:

Workshop: Saturday, March 16
OTSVR: The Workshop on Off-The-Shelf Virtual Reality brings together researchers, professionals and hobbyists to share ideas that leverage off-the-shelf technology for the creation of VR experiences. Building on a successful workshop from IEEE VR 2012, OTSVR 2013 was designed to provide a venue for sharing novel hardware prototypes, software toolkits, interaction techniques, and novel immersive systems and applications that integrate low-cost consumer and hobbyist devices. With an eye towards penetrating the barrier for widespread use, the workshop focused specifically on research and applications using technology that is replicable at the price point of a typical home user.

IEEE PAPER
Wed. 3/20
Session 8: Perception
Peripheral Stimulation and its Effect on Perceived Spatial Scale in Virtual Environments | Adam Jones, J. Edward Swan II, Mark Bolas

3DUI – Paper presentation
Saturday, 3/16
Flexible Spaces: Dynamic Layout Generation for Infinite Walking in Virtual Environments | Khrystyna Vasylevska, Hannes Kaufmann, Mark Bolas, Evan Suma

IEEE WORKSHOP
Sunday 3/17
Evan Suma: Panelist on 6th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS) 2013

Workshop on Virtual and Augmented Assistive Technology
Sunday 3/17
Evan Suma co-author on paper: Evaluation of the Exertion and Motivation Factors of a Virtual Reality Exercise Game for Children with Autism

ICT’s David Krum in MIT Technology Review

David Krum, co-director of ICT’s Mixed Reality Lab, spoke to MIT’s Technology Review for a story about 3-D display technology. Krum said that many computer scientists are now working on content development for 3-D systems. Part of the challenge, he continued, is understanding human perception and which light rays can be left out while still creating the perception of a 3-D image for the viewer. Without addressing this, mobile 3-D will create big bandwidth and data-storage burdens, he said in the story.

Psychiatric Annals Features ICT’s Medical Virtual Reality Work

The current issue of Psychiatric Annals features a summary of ICT-developed virtual reality applications that are being designed and implemented across various points in the military deployment cycle to prevent, identify and treat combat-related PTSD in war veterans. In describing the work surveyed in the article, written by Skip Rizzo and colleagues from ICT’s Medical Virtual Reality team, Emory University and Weill Cornell Medical Center, Rizzo stressed their appeal to young people who are increasingly comfortable in a digital world.

“Whether by clinician design or implementation, these tools are now providing opportunities to deliver evidence-based care in formats that may have a wide appeal to members of a society who are increasingly viewing technology not simply as a luxury, but as a natural part of everyday existence,” Rizzo said. “Clinicians who scorn the use of such technological opportunities as somehow subverting an authentic clinical process are likely to find themselves in the same spot as those who thought ‘talking movies’ were just a fad.”

A news story accompanying the article offers perspective from Retired Army Col. Elspeth Cameron Ritchie, MD, MPH, Chief Clinical Officer, District of Columbia Department of Mental Health.

“Our veterans from the wars in Iraq and Afghanistan have matured in a culture of computer games and virtual worlds,” she said. “As our society becomes more digital, expanding our techniques to mobile applications expands our potential reach as therapists. Written by the world’s expert in the field, Dr. Albert “Skip” Rizzo, this article will be of great utility to clinicians interested in exploring new frontiers with their patients.”

Stress Resilience in Virtual Environments (STRIVE)

Download a PDF overview.

A New Path to Allostasis

It is universally accepted that post-traumatic stress disorder (PTSD) resulting from OEF/OIF poses an unprecedented healthcare challenge to the US. Understandably, initial efforts to address PTSD targeted treatment. The Institute for Creative Technologies (ICT) at the University of Southern California (a U.S. Army UARC) played a pivotal role in systematizing a novel method of administering exposure therapy, the most effective treatment for PTSD. Under the direction of Dr. Skip Rizzo, highly controlled yet emotionally evocative scenarios were developed in virtual reality (VR), allowing trained therapists to immerse the participant in scenes in which the therapist controls even minute details. This gives the therapist an unparalleled ability to expose the user only to environments the therapist deems the user capable of confronting and processing in a therapeutic fashion. Treatment success has been observed in open clinical trials, and large randomized controlled trials (RCTs) are underway.

The success of this treatment system led Dr. Rizzo, in conjunction with his long-term collaborator Dr. Galen Buckwalter, to explore the efficacy of VR-based immersion in preparing users for the psychological challenges of combat before combat deployment. This effort is based on two scientific principles: 1) pre-exposure to traumatic events within a safe environment provides some degree of protection for those exposed to subsequent trauma (Latent Inhibition); and 2) resilience, or the rate and effectiveness with which someone returns to normal after stress (a process termed allostasis), can be strengthened through systematic training. To provide training consistent with these principles, STRIVE has developed six VR scenarios with advanced game development software, cinematically designed lighting and sound, and narrative that maximizes character development and emotional engagement as well as clinical appropriateness. Each scenario consists of a combat segment with a pivotal trauma, an event frequently reported to be the emotional source of PTSD ruminations, such as witnessing the death of a child or the loss of a comrade. A virtual human mentor then delivers a resilience training segment within the traumatic context. Note our openness to addressing the emotions surrounding death, an overt decision to allow for the use of this technique in developing a suicide resilience program should the current technique prove effective. The resilience training techniques used in STRIVE are directly developed from the dimensions of resilience identified in the Headington Institute Resilience Inventory (HIRI), the first multi-dimensional assessment of resilience. Current training focuses on Adaptability, Emotional Regulation, Behavioral Regulation, CBT Appraisal methods, Social Support, Empathy, Hardiness and Meaning in Work. These factors are used to guide the curriculum given their effective summary of current theorization of resilience.

The effectiveness of these scenarios will be tested in a study to be conducted at Camp Pendleton in mid-2013. Given that resilience contains many trait components, it is not appropriate to test the clinical effectiveness of STRIVE during a short-term study. Rather, we explore the effectiveness of STRIVE with a battery of psychophysiological measures obtained during exposure to STRIVE. Using EKG, EEG, GSR and respiration, we will compare psychophysiological responses during high stress epochs with baseline, and with segments involving relaxation administered by the mentor. We hypothesize elevated stress markers during simulated stress and reduced stress markers (below baseline) during mentor-administered resilience training. In an additional scientific component of STRIVE, we are conducting one of the largest studies to date to validate the definition of resilience as a more rapid return to allostasis. Without an efficient return to allostasis, physiologically specific residuals remain, termed allostatic load, interfering with the long-term functioning of the specific system. Given this circular pattern of causality between the acute and long-term effects of stress, by assessing several physiological systems prior to stress induction we hypothesize an ability to accurately predict the effectiveness of acute stress response and resolution solely from the physiological residuals of stress, i.e., allostatic load. Such predictive ability would allow for fully individualized resilience training programs and combat placements.
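As a simple illustration of the planned epoch-versus-baseline comparison, the following sketch computes mean signal levels in labeled stress epochs and tests them against a baseline level. The signal, sampling rate and epoch times are fabricated; this is not the study’s actual analysis code.

```python
# Illustrative sketch: compare a psychophysiological signal (e.g., skin
# conductance) in labeled stress epochs against a pre-exposure baseline.
import numpy as np
from scipy import stats

fs = 32                                                      # samples/second (hypothetical)
gsr = np.random.default_rng(1).normal(5.0, 0.5, fs * 600)    # 10 minutes of fake "GSR"

def epoch_means(signal, epochs, fs):
    """Mean signal level within each (start_s, end_s) epoch."""
    return [signal[int(s * fs):int(e * fs)].mean() for s, e in epochs]

baseline = epoch_means(gsr, [(0, 60)], fs)[0]
stress = epoch_means(gsr, [(120, 180), (300, 360), (480, 540)], fs)

# One-sample test of the stress-epoch means against the baseline level.
t, p = stats.ttest_1samp(stress, popmean=baseline)
print(f"mean stress GSR = {np.mean(stress):.2f} vs baseline {baseline:.2f}")
```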

Engadget Features ICT’s DIY Open Source Virtual Reality Work

The web magazine Engadget posted a story about the launch of the MxR Lab’s new DIY website. The site showcases do-it-yourself virtual reality creations, modifications and designs so people can build their own units. It also has a forum to share ideas, get advice and have conversations about virtual reality and head mounted displays. The modifications that MxR has worked on include add-ons for the Oculus Rift, the revolutionary low-cost commercial HMD that incorporates some of ICT’s open-source designs. Slashdot, Road to VR, 3ders.org and others covered this work as well.

Read the full article.

Evan Suma Speaks in Opening Panel: Systems Issues in Successful Research Careers

Evan Suma will give a keynote panel talk at the opening of the SEARIS workshop. His section of the panel presentation is titled “The Changing Face of VR Systems.”

Video Games for Rehabilitation

ICT research scientist Belinda Lange specializes in developing game-based systems for people with neurological or physical injuries. Lange and her team created a state-of-the-art video game called Jewel Mine that leverages 3-D motion sensing technology to provide customized rehabilitation to service members, veterans and civilians. One of the features of the program is modeled on the ’80s pattern game Simon, but patients must use their bodies to reach for the objects, rather than just pushing a button.

“In order to improve motor function patients need to perform the same motions over and over,” said Lange, who is also an assistant research professor at the USC Davis School of Gerontology. “These exercises are repetitive by design, and that can get boring. Putting them in a game has the potential to keep users motivated.”

Jewel Mine can be used for balance training and upper limb reaching training, and includes tasks that challenge memory and attention. Now Lange and her team are tailoring the game to meet the needs of amputees. According to recently released government data, over 1,500 U.S. service men and women have lost limbs serving in Iraq or Afghanistan. These men and women face unique challenges, including learning how to perform rehabilitation exercises with a prosthetic arm or leg.

“The beauty of adapting off-the-shelf gaming systems, like the Microsoft Kinect, for rehabilitation is that we can adjust them for each user, and we can also record the user’s movements and track progress,” said Lange. “Existing games are often too difficult for physical therapy patients or just don’t target the areas they need.”

Jewel Mine was originally developed as part of USC’s Optimize Participation Through Technology/Rehabilitation Engineering Research Center (OPTT/RERC), which focuses on building technologies for aging populations and people with disabilities. Lange received funding from the U.S. Army Research Laboratory’s Army Research Office and the Telemedicine and Advanced Technology Research Center of the U.S. Army Medical Research and Materiel Command to modify and evaluate the game with military populations.

Her goal is to make Jewel Mine available in homes and clinics nationwide so patients can have easy access to engaging training and therapists can monitor and assess their progress, including remotely.

Just over a year ago, producers of the ABC television show Extreme Makeover: Home Edition invited Lange to install her game in a house they were building for Allen Hill, a veteran suffering from traumatic brain injury. Lange was able to meet Hill and show him how to use the game.

“That was a researcher’s dream come true,” said Lange. “To be able to see your work make it out of the lab and make a real impact on someone’s life is truly amazing.”

BiLAT Featured in PEO STRI Newsletter

The current issue of the PEO STRI newsletter, Inside STRI, features an article about BiLAT and BiLAT AIDE, the ICT-developed video game and primer designed to help Army officers prepare for culturally sensitive meetings and negotiations. The application was a winner of the U.S. Army Modeling and Simulation Awards for FY08 and is available for download from the U.S. Army’s MilGaming website. BiLAT is a collaboration between the USC Institute for Creative Technologies, U.S. Army Research Institute for the Behavioral and Social Sciences, U.S. Army Research Laboratory Human Research and Engineering Directorate and U.S. Army Research Laboratory’s Simulation and Training Technology Center.

Read the full article.

Albert “Skip” Rizzo: “Interfacing with Virtual Rehabilitation”

Abstract: Virtual reality (VR) has undergone a transition in the past 20 years that has taken it from the realm of expensive toy and into that of functional technology. When discussion of the potential for VR applications in the human clinical and research domains first emerged in the early 1990s, the technology needed to deliver on the anticipated “visions” was not in place. Consequently, during these early years, VR suffered from a somewhat imbalanced “expectation-to-delivery” ratio, as most users trying systems during that time will attest. Yet the idea of producing simulated virtual environments that allowed for the systematic delivery of ecologically relevant stimulus events and challenges was compelling and made intuitive sense for the future of rehabilitation. That view, as well as a long and rich history of encouraging findings from the aviation simulation literature, lent support to the concept that testing, training and treatment in highly proceduralized VR simulation environments would be a useful direction for psychology and rehabilitation to explore. Fortunately, since those early days we have seen revolutionary advances in the underlying VR enabling technologies (i.e., computation speed and power, graphics and image rendering technology, display systems, interface devices, immersive audio, haptics tools, tracking, intelligent agents, and authoring software) that have supported development resulting in more powerful, low-cost PC-driven Clinical VR systems. Broadband internet access and mobile technologies have also driven opportunities for patients to engage in sophisticated rehabilitation activities that are no longer limited to the physical confines of the clinic. Such advances in technological “prowess” and expanded accessibility have provided new options for the conduct of rehabilitation research and intervention within more usable, useful and lower cost systems. However, the key to clinical success always requires a thoughtful assessment of what is needed to foster proper interaction in VR that addresses a focused clinical objective. Many new high fidelity rehabilitation hardware options that leverage robotics and motion platforms are getting traction in the marketplace and have been successful in supporting interaction with clinic-based treatment systems. However, to produce the sheer amount of engaged repetition required for successful clinical outcomes, low cost VR systems that can escape the clinic and deliver rehabilitation activities in the home are needed. The creation of such systems requires the use of commodity off-the-shelf interface, tracking and display technologies to promote low-cost access. This presentation will provide a brief review of the scientific evidence generated in the Clinical VR field and present an overview of the many ways that VR has evolved with advances in new software, interface, display and Virtual Human systems that will shape the future of rehabilitation. I will make the case that the technology has caught up with the early vision of VR and that this will have a meaningful impact on rehabilitation.

Paul Debevec: “Half Century Trojans Going Back to College Day”

Paul Debevec will deliver a talk at the USC Alumni Association’s “Behind the Scenes of Modern Cinema” event.

Belinda Lange and Sebastian Koenig: “Using Technology to Enhance Clinical Practice: What Rehabilitation Psychologists Need to Know”

The 15th Annual Rehabilitation Psychology Conference is sponsored by the American Board of Rehabilitation Psychology and the Division of Rehabilitation Psychology of the American Psychological Association.

The conference is the annual continuing education program for rehabilitation psychology, and its presenters and attendees represent national experts in their field. During a full-day preconference, attendees are presented with the latest advances in technology-based products, including mobile applications, games, virtual reality, simulations, web-based interventions and telehealth interventions. The purpose of this event is to increase awareness and utilization of technology and virtual reality applications by rehabilitation psychologists.

Belinda Lange, Ph.D., discussed a range of virtual reality and gaming technologies that can be used in the clinic and home setting, including her recent work using virtual reality for cognitive assessment and utilizing the Microsoft Kinect for clinic and home-based rehabilitation.

Sebastian Koenig, Ph.D., spoke about the design, development and evaluation of virtual reality neuropsychology tools, with a focus on the process and interdisciplinary team communication required to create these tools. This presentation highlighted USC ICT’s projects VRCPAT, Bravemind and STRIVE as successful examples of such a collaborative approach.

Lange and Koenig also joined a panel of interdisciplinary experts to discuss potential barriers to adoption of these technologies in the clinical and home setting.

Belinda Lange’s virtual reality game-based rehabilitation work was featured on Extreme Makeover: Home Edition. Watch video highlights below.

Ari Shapiro and Andrew W. Feng: “The Case for Physics Visualization in an Animator’s Toolset”

Abstract: By associating physical properties with a digital character’s joints and bones, we are able to explicitly visualize a number of properties that can help animators develop high-quality animation. For example, ballistic arcs can be shown to demonstrate the proper timing and location of a character during flight. In addition, a center of mass that accurately reflects the posture of the character can be shown to help with a balanced appearance during walking or running. Furthermore, motion properties not previously considered, such as angular momentum, can be easily identified when blatantly violated by an animator. However, very few in-house or commercial systems employ such tools, despite their nearly transparent use in an animator’s workflow and their utility in generating better-quality motion. In this paper, we argue the case for incorporating such a toolset, describe an algorithm for implementing the tools, and detail the types of uses for such a tool.

Read the full paper.
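
The ideas are straightforward to prototype. Below is a minimal, illustrative sketch (not the paper’s implementation) of two quantities such a toolset might display: the character’s center of mass, computed as a mass-weighted average of per-bone positions, and the ballistic arc that center of mass should follow while the character is airborne. The bone data, masses and timings are hypothetical placeholders.

    import numpy as np

    def center_of_mass(positions, masses):
        # Mass-weighted average of per-bone world-space positions.
        # positions: (n, 3) array of bone segment centers; masses: (n,) in kg.
        masses = np.asarray(masses, dtype=float)
        weighted = np.asarray(positions, dtype=float) * masses[:, None]
        return weighted.sum(axis=0) / masses.sum()

    def ballistic_arc(com0, v0, duration, g=9.81, steps=30):
        # Sample the parabolic flight path p(t) = p0 + v0*t + 0.5*a*t^2,
        # with gravity acting along -y. Overlaying this curve shows the
        # path a jumping character's center of mass should trace.
        gravity = np.array([0.0, -g, 0.0])
        return [com0 + v0 * t + 0.5 * gravity * t * t
                for t in np.linspace(0.0, duration, steps)]

    # Hypothetical two-bone "character": pelvis and torso segment centers.
    com = center_of_mass([[0.0, 1.0, 0.0], [0.0, 1.5, 0.1]], [10.0, 20.0])
    arc = ballistic_arc(com, v0=np.array([1.0, 3.0, 0.0]), duration=0.6)
    print(com, arc[0], arc[-1])

An animator could compare a keyframed jump against this arc frame by frame; large deviations flag physically implausible timing or placement.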

Sebastian Koenig: “Design, Development and Evaluation of Virtual Reality Neuropsychology Tools – Bridging the Gap between Software Engineering and Clinical Application”

Sebastian Koenig presented an invited talk at the Rehabilitation Psychology Pre-Conference. VRCPAT was used as one of the examples of successful ICT projects that combine technology and clinical science.

The New York Times Features MxR’s Open Source Head Mounted Displays

The open source designs of the MxR Lab at ICT are featured today in a New York Times article on the Oculus Rift. The article quoted Mark Bolas, ICT associate director for mixed reality research and development, and noted that Oculus founder Palmer Luckey worked at ICT and that elements of the Oculus are based on Bolas’s research at USC.

President Obama Awards National Medal of Science to Solomon Golomb, ICT Advisory Board Member and USC Viterbi Legend

On Feb. 1, one of USC’s most decorated faculty members received the highest honor bestowed by the United States for scientific innovation.

President Barack Obama presented Solomon Golomb, university and distinguished professor of electrical engineering and mathematics, with the National Medal of Science for his advances in mathematics and communications at an awards ceremony in Washington, D.C.

Read the full story.

CNET Covers New Dimensions in Testimony Project

CNET featured ICT’s New Dimensions in Testimony project, a collaboration with the USC Shoah Foundation to record and display Holocaust survivors’ testimony in a way that will continue the dialogue between Holocaust survivors and learners far into the future. “It’s the kind of project that really inspires you to push everything that much further, because it’s such valuable content that we’ll be recording,” said Debevec, whose Graphics Lab has contributed to such films as The Curious Case of Benjamin Button and Avatar.

Read the full story.

Paul Debevec: “Creating Photoreal Digital Actors”

Paul Debevec will deliver an invited talk to CS 597, a Ph.D. seminar.

Randall Hill, Jr.: “Warfighter Training in the Human Dimension”

In the talk:

  • ELITE: Leader development with Virtual Humans
  • UrbanSIM: Mission Command using Social Simulation
  • SimCoach: Web-accessible Coaching for Post Traumatic Stress Injuries
  • Emerging technologies that will change how immersive training is delivered

Washington Post Features ICT’s Hologram Holocaust Survivor Collaboration with USC Shoah Foundation

An Associated Press article, carried in the Washington Post and other papers throughout the country, featured ICT’s New Dimensions in Testimony project, a collaboration with the USC Shoah Foundation to record and display Holocaust survivors’ testimony in a way that will continue the dialogue between Holocaust survivors and learners far into the future. The article recounts how a digital version of survivor Pinchas Gutter was able to answer real-time questions posed to him at the Glimpse digital media showcase held on the USC campus.

Paul Debevec, ICT’s associate director for graphics research, was quoted in the story discussing the project’s goal of being able to project a life-size 3-D version that people can interact with in a museum or classroom setting within one to five years.

“Having actually put it together, it’s clear this will happen,” said Debevec.

ICT’s David Traum and Lori Weiss were featured in photographs accompanying the story. Traum and ICT’s Ron Artstein are working on the natural language aspect of the program, allowing the hologram not only to tell a story but also to recognize questions and answer them succinctly. Weiss, ICT’s director of communication and research operations, was photographed in Light Stage X, an ICT Graphics Lab innovation used for creating believable digital doubles.

Paul Debevec: “Creating Photoreal Digital Actors”

Paul Debevec will deliver the keynote at the Activision Executive Retreat.

Defense News Features Mobile Version of ICT’s ELITE Leadership Trainer

A story in Defense News covers ICT’s development of a laptop version of the Emergent Leader Immersive Training Environment (ELITE), which targets leadership and basic counseling skills for junior leaders in the U.S. Army. Currently ELITE is a classroom trainer installed at the Maneuver Center of Excellence at Fort Benning, Georgia. A similar version, the Immersive Naval Officer Training System, is used by Officer Training Command in Newport, Rhode Island. The article notes that the hope is that a laptop version will allow greater access to the system and let users train when they have the time, rather than during a scheduled class. It would also make the technology both more mobile and less expensive, the story states.

“The institutional version is great, if you have a building that will house it, and you’ve got the instructors and a classroom full of students,” said Todd Richmond, head of advanced prototype and transition at ICT. “The laptop version addresses the low-overhead concept of ‘What if you don’t have a classroom?’”

Matt Trimmer, the project leader for ELITE, noted that the laptop version would have enhanced artificial intelligence capabilities to make up for the lack of instructor-facilitated classroom feedback and discussion.

“We have to address that gap by integrating intelligent tutoring and digital coaches,” said Trimmer. These aids, adapted from ICT’s existing virtual tutor lineup, could bring up suggested talking points and provide guidance in the after-action review, much like an instructor would.

The article notes the ICT team expects to have an instructor-free version ready by fall 2013.

New Dimensions in Testimony

Download a PDF overview.

New Dimensions in Testimony is an initiative to record and display testimony in a way that will continue the dialogue between Holocaust survivors and learners far into the future.

A collaboration between the USC Shoah Foundation and the USC Institute for Creative Technologies, in partnership with Conscience Display, New Dimensions in Testimony will yield insights into the experiences of survivors through a new set of interview questions, some of which survivors are asked on a regular basis and many of which have not been asked before.

The project uses ICT’s Light Stage technology, recording interviews with seven cameras for high-fidelity playback, as well as natural language technology that will allow people to engage with the testimonies conversationally by asking questions that trigger relevant, spoken responses. ICT is also pioneering display technology that will enable the testimonies to be projected in 3D.
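
Conceptually, the conversational playback can be pictured as retrieval: a visitor’s question is matched against the transcripts of pre-recorded answer clips, and the best-scoring clip is played. A minimal sketch of that idea (not ICT’s actual implementation; the clip IDs and transcripts below are hypothetical placeholders) might look like this:

    import re

    def tokens(text):
        # Lowercase word tokens, punctuation stripped.
        return set(re.findall(r"[a-z]+", text.lower()))

    def best_clip(question, clips):
        # clips: list of (clip_id, transcript) pairs. Returns the clip whose
        # transcript has the highest word overlap (Jaccard) with the question.
        q = tokens(question)
        return max(clips, key=lambda c: len(q & tokens(c[1])) / len(q | tokens(c[1])))

    clips = [
        ("clip_012", "I was born in Lodz, Poland, in 1932."),
        ("clip_047", "We were taken to the camp by train."),
    ]
    print(best_clip("Where were you born?", clips)[0])  # -> clip_012

In practice such systems rely on far richer language understanding, but this retrieve-and-play structure is what lets fixed recorded testimony respond to unscripted questions.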

The goal is to develop interactive 3-D exhibits in which learners can have simulated, educational conversations with survivors through the fourth dimension of time. Years from now, long after the last survivor has passed on, the New Dimensions in Testimony project can provide a path to enable young people to listen to a survivor and ask their own questions directly, encouraging them, each in their own way, to reflect on the deep and meaningful consequences of the Holocaust.

The project also advances the age-old tradition of passing down lessons through oral storytelling, but with the latest technologies available.

CNN to Feature Skip Rizzo and ICT Virtual Reality Technology for PTSD, Sunday, Jan 27 2pm EST/11am PST

Set your DVR! CNN’s Next List is devoting an entire segment to ICT’s Skip Rizzo and his work using virtual reality to treat and prevent PTSD. The segment airs Sunday, January 27, at 2 p.m. eastern, 11 a.m. pacific.

Watch show promos and read a blog post about Skip and his work here. Learn more about The Next List, CNN’s Sanjay Gupta-hosted program that profiles innovators, visionaries and agents of change.

Jane’s International Defense Review Features ICT’s Work in Cultural Awareness Training

An article in the December 2012 issue of Jane’s International Defense Review covered U.S. military efforts at cultural awareness training and highlighted ICT-developed prototypes in this area, including the BiLAT negotiation training video game and the Combat Hunter Action and Observation Simulation (CHAOS) that is installed at Camp Pendleton. ICT project director Julia Kim was interviewed for the story. Kim noted that while ELECT BiLAT was developed to represent an Iraq-based scenario, the skills developed are transferable and applicable across many societies and cultures.

Read the full story.

Andrew Gordon and His Storytelling Research Featured in Poets and Writers Magazine

An essay in Poets and Writers Magazine explores the importance and meaning of stories. The author highlights Andrew Gordon’s research collecting personal stories from blogs and using them to help teach computers to reason.

“Storytelling is a human universal,” Gordon says in the article. “There’s not a culture that doesn’t tell stories. It’s something embedded in our genes that makes us good storytellers. It’s a huge survival advantage, because you can encapsulate important information from one person to another and share it within a group. So there’s a good reason to be good storytellers.”

Researcher Spotlight: Ari Shapiro

Ari Shapiro leads the development of ICT’s SmartBody application, an animation system for 3-D games that synchronizes speech, gesturing and facial and body motion, particularly for interactive virtual humans.

Shapiro’s advances open opportunities for more complex and less costly games and simulations by providing automated processes for character creation, and were recognized with a best paper award at the recent International Conference on Motion in Games. Characters can be downloaded or acquired from a digital marketplace, then instantly infused with various skills and behaviors, avoiding a costly art pipeline. The number and variety of skills will continue to be built over time, enabling greater complexity and variation.

In combination with Cerebella, a companion system developed by ICT’s Stacy Marsella that intelligently generates a character’s non-verbal behavior, an entire 3-D character performance can be automatically generated from only the audio signal. Future applications could include the creation of videos, TV shows and cartoons using only an audio track.
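
SmartBody characters are scheduled through the Behavior Markup Language (BML), which ties gestures and other behaviors to synchronization points in the speech. As a rough, hypothetical illustration (not Cerebella’s actual output format; the utterance, gesture names and timings are invented), a pipeline deriving behavior from speech might emit a BML block like the one produced below, where gesture strokes are aligned to word onsets:

    def make_bml(text, gestures):
        # Build a BML block pairing speech with gestures whose strokes are
        # synchronized to word-onset markers t0, t1, ... in the utterance.
        # gestures: list of (gesture_name, word_index) pairs (hypothetical names).
        marked = " ".join('<sync id="t%d"/>%s' % (i, w)
                          for i, w in enumerate(text.split()))
        gesture_tags = "\n".join('  <gesture name="%s" stroke="sp1:t%d"/>' % g
                                 for g in gestures)
        return '<bml>\n  <speech id="sp1">%s</speech>\n%s\n</bml>' % (marked, gesture_tags)

    print(make_bml("welcome to the lab", [("beat_right", 0), ("point_forward", 3)]))

The realizer then blends the named animation clips so each stroke lands on its marked word, which is how a synchronized speech-and-gesture performance can emerge from a single input stream.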

Using SmartBody, end users can now endow their 3-D characters with realistic behaviors without the time and expense of involving specialists. The art budget is one of the most expensive aspects of making games and movies, and SmartBody can significantly reduce those costs. People in the research community are already using the application, and it has caught the attention of the game industry.

For several years, Shapiro worked on character animation tools and algorithms in the research and development departments of visual effects and video game companies such as Industrial Light and Magic, LucasArts and Rhythm and Hues Studios. He has worked on many feature-length films, and holds film credits in The Incredible Hulk and Alvin and the Chipmunks 2. In addition, he holds video game credits in the Star Wars: The Force Unleashed series.

He has published many academic articles in the field of computer graphics in animation for virtual characters, and is a five-time SIGGRAPH speaker. Shapiro completed his Ph.D. in computer science at UCLA in 2007 in the field of computer graphics with a dissertation on character animation using motion capture, physics and machine learning. He also holds an M.S. in computer science from UCLA, and a B.A. in computer science from the University of California, Santa Cruz.

Watch a video demonstration of SmartBody and Cerebella below. Note that nothing in the 3-D character’s performance was hand-generated – it’s all automated.

An Interview with Paul Rosenbloom

By Julia Kim

Reason #27 that one should mingle at office social events: you learn that your colleague is writing a book to totally change how people think about computing.

A few years ago, my colleague Paul Rosenbloom mentioned off-handedly over our glasses of wine that he was revising his new book. I can’t remember whether he or I proposed having me read a draft, but we both liked the idea. Since I had done some studies in the history of computing, his exploration into computing’s essence and its relationship to other fields sounded intriguing. Plus it’s interesting to see how someone within a field views the field as a whole. After reading the draft, I found it not just intriguing but exciting.

Paul pointed out that various traditional concepts of computing (it is hardware, it is software, it is math, etc.) are neither precise nor productive. As a computer scientist unafraid of meta-thinking, he proposed a computational model for computing (yup) that explained the field and its relationship to the world. He also controversially suggested that computing should be considered on par with the physical, life, and social sciences. Oh and mathematics falls under computing.

The book, On Computing: The Fourth Great Scientific Domain, is out now from MIT Press. As Paul himself notes in his introduction to the book, you don’t have to believe everything that he says. But he hopes that it will still inspire you to think new thoughts about how computing is understood, taught, and done.

It certainly did that for me, and Paul was nice enough to answer my questions about the book and related topics.

You say in the introduction that the ideas in this book came from noticing how computing touches so many things in the world, from everyday life to research. Can you talk a little about this?

This all grew out of a decade focused on new directions activities at USC’s Information Sciences Institute. The set of topics we were working on at first seemed like nothing more than a random jumble, although it was clear that they were all inherently interdisciplinary if you looked closely at them. The topics included technology and the arts; combining computing technology and entertainment expertise for education and training; teams of robots, agents and people; sensor networks and their extension to physically coupled webs (combining sensors, effectors and computing); the use of grid technology for building virtual organizations; computational science; biomedical informatics; modeling and simulation of entire environments, such as virtual cities and comprehensive earthquake models; and automated construction. Each of these involves computing and at least one other domain, and sometimes multiple, in varying relationships.

Going from observing a phenomenon to developing a new framework seems like a non-obvious thing (at least to me!). Where did you start? How did the architecture evolve? Did you know it was going to have all the dimensions it ended up having?

As mentioned above, I started with the topics we were working on at ISI, almost all of which were interdisciplinary in some way. In trying to move beyond just which domains were involved, I asked myself how these domains related to each other in yielding the various topics, and therefore whether there was some systematic space within which it could all be fit and organized. I recall playing around on paper for quite a while at the beginning, but it was more than a decade ago, so I don’t remember the details. Out of this originally came two types of structures: (1) the early relational architecture (which had an embedding relationship in addition to implementation and interaction); and (2) what is now called in the book the hierarchical systems architecture, but which back then was considered an orthogonal way of organizing topics within computing. Over the years, the relational hierarchy has been refined repeatedly, and the other hierarchy has finally been assimilated to it.

As a (lapsed) historian of science, I like that your architecture challenges the traditional concept of science vs. engineering. You propose that “shaping” and “understanding” both contribute to knowledge creation and that there is a healthy feedback loop between the two processes. Can you talk more about this from your own experiences and what you’ve seen in computing and other fields?

My core research is actually in an area called cognitive architecture, where I try to understand the fixed structure of a/the mind, whether natural or artificial. With humans, there is already a system in existence with which we can experiment. With AI, we need to invent/build the systems before we can experiment with them. With humans, the trickiest parts are coming up with clever experiments and interpretations that help us understand this existing black box. With AI, experiments and interpretations are usually rather straightforward, with the tricky part being inventing the things that are worth experimenting on (and much of what is learned comes from the process of understanding how to build something that will work, rather than just formal experimentation on the ultimate working system).

The experiments/interpretations on both the natural and artificial systems help to guide the process of deciding what it is worthwhile to build. What is being built may have practical consequences in the real world, as a usable system, but its construction itself is an experiment in what is possible, and a means for understanding both what is possible and how it is possible. I like to quote Feynman here: “What I cannot create, I do not understand.” Anyway, for me this all means that I follow an integrated pathway that involves both understanding and shaping mind, with the natural and artificial aspects typically intertwined.

In computer science in general there may not always be a natural aspect from which to learn, but the rest of it is always there. We learn from and by building, as well as from theoretical analysis and experimentation with what has been built. At the same time, what we learn helps us build better things that are more likely to have value in the real world.

The other sciences have not always been able to leverage this form of deep integration, as historically they haven’t always been able to build instances of what they want to study, but this should be changing more and more as we understand how to create/change the physical, life and social world at their most primitive levels.

So the lines between natural and artificial are being blurred – and not just in computing but other domains (e.g., genetically-modified foods or super-bugs responding to our vaccines). How do you think and feel about these issues?

Since people are in fact part of nature, there really is no fundamental difference between natural and artificial phenomena, and I see no good reasons for science being limited to one versus the other. We have a compelling need to understand all of the phenomena around and in us, no matter what their origins. This need not imply that origins are unimportant in determining how something is studied or what its impact might be, but even here just determining origins may be increasingly difficult as our ability continually increases for modifying nature at its most fundamental levels. There are always risks in the creation of the new, whether it is the evolution of new organisms or the development of human inventions, and the more powerful the creation the greater the risk. Improving our understanding is really the only viable approach to dealing with risk over the long term.

A potentially more controversial idea in your book is that computing is one of the major sciences (along with the physical, life, and social sciences). When did this idea become a part of your thinking and this book?

This came up fairly late in the overall process. I first developed the relational architecture, and as I was writing it up for the book, realized that it assumed a form of symmetry between computing and the other three domains to which it was being related (the physical, life and social sciences). This raised the question as to whether this symmetry was really appropriate if computing wasn’t coequal with the others, and forced me to ask whether or not it was. The relational architecture could still be useful even if computing weren’t a coequal, but it would be more compelling if it were. So at this point I had to understand what it was that made the physical, life and social sciences what they were, and whether a case could be made that computing was a fourth such thing. Before this excursion, I had been thinking of this work as primarily about the nature and structure of computing, but not necessarily about establishing where computing fit in the pantheon of science and engineering. What thus arose as a subquestion may end up of more lasting importance than the original question/idea.

How are faculty meetings at USC now that you’ve written this book? Are the physicists letting you sit at their table? I imagine the mathematicians may not be taking kindly to being subsumed into computing. For that matter, do the computer science people agree with what you’ve written?

I haven’t yet had much interaction with faculty in other departments at USC about the book. I have, though, given a talk based on the material at several universities, including USC, primarily to computer scientists, but there has usually been a smattering of folks from other disciplines in attendance as well. I get a range of responses, from highly enthusiastic, to appalled – particularly about the claim concerning mathematics – to unclear whether this is anything more than simply an academic exercise. None of the responses to date have shaken the basic claims, but of course the ones I’ve enjoyed the most are those where it is clear a flash bulb has gone off in someone’s head.

You talk in the book about a bit of a crisis you had prior to taking on your new job at ISI. Can you talk about this?

At that point I had been working for two decades towards the ideal, or grand challenge, of understanding and building a cognitive architecture, with much of the latter years focused on a large-scale application of the architecture. But I no longer could see a path forward towards significant enhancements to what we had in hand, and couldn’t generate enthusiasm for what did seem possible. So something dramatically different, where I could learn new things and hopefully have a significant impact of a different kind, seemed in order. My job heading up New Directions at ISI did turn out to be this. When that wrapped up, I was able to return to working on cognitive architecture with new ideas and enthusiasm, and with what appears to be a very promising path forward.

What do you take from your relational architecture work for your cognitive architecture work? What do you think other computing folks can take away from the relational architecture?

One big thing was a meta-lesson about my own research style. I realized that, in general, research was unsatisfying to me unless I was making progress on understanding and exploiting deep underlying commonalities. This is what we were able to do in the early days working on the Soar cognitive architecture, and what I was able to do with the relational architecture, although these two pieces were in very different areas. I now believe that the inability to do this any longer along the path I had been following was a major factor in the crisis mentioned above. Fortunately, I can see how to do it again with my new Sigma cognitive architecture.

Another lesson was the reaffirmation of the importance of the interdisciplinarity that had always been part of my work on cognitive architecture. Combining work on natural and artificial intelligence is frequently neither understood nor appreciated within artificial intelligence even though AI is inherently multi-domain according to the relational analysis. The relational analysis has bolstered my confidence in what I always believed, and inspired me to be more evangelical than apologetic about it.

I hope more generally that the relational architecture will help people in computing think about the ways that their work either is or should be cross-domain, and to understand how what they do relates to what else is going on across the domain of computing. Independent of their own area, my hope is that it also helps them understand better the diverse and important scientific domain in which they work.

What’s next for you?

A big next step based on the book is to (co-)design and teach a new kind of Introduction to Computing course for incoming USC freshmen who are majoring in computer science. The first half of the course will focus on introducing the key concepts in computing, and the second half on providing the broad integrated perspective on computing described in the book. The hope is to provide them with the rich background and context that enables them to make sense of all of the more specific technical content they will see in the remainder of the program. If this goes well, an introductory course for the broader population of undergraduates is also a possibility.

To wrap up, let’s go back to the beginning for you. What was your first hands-on experience with a computer? And do you remember an early computer program that you wrote or saw that excited you about computers?

My first hands-on experience with a computer was during the summer of 1971, when I got the chance to learn a bit of the BASIC language using a very slow teletype connected to a shared central computer. What has both fascinated and excited me about computers ever since is the ability to turn concepts into behavior, along with the procedural intricacy that this involves. A big part of this for me personally has been the ability to turn ideas about how minds work into systems that behave according to these ideas.

Paul S. Rosenbloom is a professor of computer science at the USC Viterbi School of Engineering and works at the USC Institute for Creative Technologies on a new cognitive/virtual-human architecture – Sigma (Σ) – based on graphical models. For more information, click here.

Julia Kim is a project director at the University of Southern California’s Institute for Creative Technologies. Julia studied the history of science at Harvard University (B.A., 1998; M.A., 2004), with a focus on the history of information science and technology. For more information, click here.

For more information about Paul Rosenbloom’s book, read the USC news article.

Buy On Computing: The Fourth Great Scientific Domain on Amazon.

Professor Offers Keen Observations on Computing

When Paul Rosenbloom became the director of New Directions at USC’s Information Sciences Institute 15 years ago, he helped apply timely computing to a diverse range of enterprises, including earthquake modeling, building construction, military training, art heritage and biomedical research.

However, he never imagined the job title would come to refer to a literal new direction for him. The multidisciplinary work propelled him to explore and explain the very nature, structure and stature of computing and its role in the world.

The results of that journey are presented in Rosenbloom’s new book, On Computing: The Fourth Great Scientific Domain (MIT Press).

In the book, the USC Viterbi School of Engineering computer science professor argues that computing deserves to be considered a “Great Scientific Domain” on par with the physical, life and social sciences.

“Thinking of computing as only hardware, software, mathematics, tools or engineering could not be further off the mark,” said Rosenbloom, who is also a project leader at the USC Institute for Creative Technologies. “There is an overarching coherence to the field of computing that makes it greater than the sum of its parts.”

To prove his point, Rosenbloom developed a model that he dubbed the relational approach — a new way of understanding computing and ultimately the structure of science as a whole.

“The three traditional scientific domains each study the interactions among a characteristic set of structures and processes,” Rosenbloom said.

“The story is the same for the computing sciences, where the focus is on information plus its transformation, and the span includes all the fields that contribute to the understanding and shaping of that information transformation,” he explained.

Rosenbloom conceded that he is not a philosopher and his forays into defining science may be considered beyond the boundaries of an artificial intelligence researcher. But he feels that narrow attitudes about computing need countering. Overstating his case seems less of a risk than allowing the future of computing to remain constrained by institutional or ideological walls.

“My goal is to appeal to that broad notion of what computing is all about,” said Rosenbloom, who calls his book a rallying cry to embrace the intertwining of all its parts.

The fields of computer science and engineering, information science and technology, informatics, and, perhaps controversially, even mathematics fit into his unified model of computing. He goes further, too, noting all the ways that computing is integral to other domains as well.

Rosenbloom illustrates this by reviewing recent lists of Time magazine’s “Best Inventions of the Year.”

He found that more than one in every three inventions — including 2007’s iPhone, 2008’s Hulu and 2009’s robotic unicycle — were the result of multiple forms of computing and integration with physical, life and social sciences.

Looking forward, Rosenbloom is excited by what is next.

His book expresses optimism that the grand adventure he calls computing will continue to stretch across multiple directions, including looking inward to redefine its place in schools, science and society at large.

Read ICT project director Julia Kim’s interview with Paul Rosenbloom.

Buy On Computing: The Fourth Great Scientific Domain on Amazon.

Virtual NICoE

Virtual NICoE is a partnership between the National Intrepid Center of Excellence (NICoE) and the University of Southern California Institute for Creative Technologies. NICoE is the Department of Defense’s premier research institute for traumatic brain injury and psychological health conditions and is located in Bethesda, MD. We have built a virtual version of NICoE, where each department will place educational materials, activities and ongoing connectivity for patients to utilize while they are in their four-week program at NICoE and after they leave. Year 1 (2011-2012) focused on building a version of NICoE for use in the virtual world, on working with NICoE leadership to determine appropriate content to be included in the project, and also on training the staff in the use of virtual worlds. Year 2 will allow stakeholders to test and enhance the Virtual NICoE platform to ensure maximum functionality. In Year 3 we will involve a patient cohort for focused studies on effectiveness.

External Collaborators
Dr. Melissa Hunfalvay, NICoE

ICT’s Clarke Lethin Honored With Navy Service Award

Clarke Lethin, ICT’s managing director, received the Department of the Navy Meritorious Civilian Service Award recognizing his exceptional performance of duties while assigned as the Human Performance, Training and Education Thrust Manager with the Office of Naval Research. Randall W. Hill, Jr., ICT’s executive director, presented the award to Lethin at the ICT staff meeting in December. Lethin began at ICT in October 2012. Prior to joining ICT, he worked in the Office of Naval Research, most recently as the manager for the Human Performance Training and Education program. He was also the technical manager for the Future Immersive Training Environment, Joint Capabilities Technology Demonstration.

Lethin graduated from Oregon State University in 1980 with a degree in business and then began nearly three decades of service in the Marine Corps, retiring in 2008 as a Colonel. While on active duty he repeatedly served in roles where he demonstrated his ability to lead large teams of diverse staff toward collaborative goals. From 1998 to 2004, Lethin was assigned to Camp Pendleton and served with One Marine Expeditionary Force Headquarters and 1st Marine Division Headquarters. During these six years, he planned military training exercises in Egypt, Jordan and Kenya, as well as planned and participated in combat operations in Afghanistan in 2001, and in Iraq in 2003 and 2004. Between 2004 and 2006, Lethin attended the Industrial College of the Armed Forces in Washington, D.C., and served as the director for the Marine Corps’ Fires and Maneuver Integration Division in Quantico, Va. His last active duty assignment was as the chief operating officer at Camp Pendleton, overseeing the 350-person organization that was responsible for 44,000 military and civilian personnel.

Defense News Features ICT’s DICE-T and Todd Richmond in Mobile Learning Stories

Defense News covered mobile learning applications in two recent articles. Todd Richmond, director of advanced prototype development and transition at ICT, was quoted in both stories discussing the benefits and challenges involved in developing such applications.

“Some of the first solutions were people taking PowerPoint briefs, burning them on to PDFs, putting that on a tablet, and calling it mobile learning. I don’t call that mobile learning,” said Richmond. “Some things are intuitive on a tablet, but others aren’t.”

ICT’s Dismounted Interactive Counter-IED Environment for Training, or DICE-T, a training game that runs on a PC-based server but that users can play on their Android tablets, was featured as well.

“I will never get out of a tablet the immersiveness I can get out of a head-mounted display or a multiperson network,” Richmond said. “But mobile lets you do it anywhere, anytime.”

The second story in Defense News discussed whether mobile applications provide effective learning, noting that with so many thousands of options available it can be difficult to sort the good from the bad. Richmond said his gut feeling is that what a user gets out of mobile learning depends on the user.

“If you have a user that is comfortable with a tablet or a smartphone, they will be more willing to put up with issues and engage with the content,” he said.