Researcher Spotlight: Peter Khooshabeh

Published: June 27, 2013
Category: News

When ICT’s Peter Khooshabeh was an undergraduate at the University of California at Berkeley, he worked on developing a virtual practice tool for surgeons. The idea was that an individual interacting in this simulated scenario would show improved outcomes in the operating room. But when Khooshabeh spent time in a real hospital, he observed that technical skill was just one aspect of surgical success. Any useful virtual environment would also need to capture the interpersonal dynamics of such a high-stress, multi-person setting.
“At first we were focused on putting just one person in this virtual environment but there are many players involved in any given surgery,” said Khooshabeh, a research fellow in ICT’s virtual humans research group. “I came to understand that the key to improving performance may not be in the quality of the technology, but in how much you understand about people and how they perceive one another.”
Khooshabeh went on to earn a Ph.D. in cognitive psychology from UC Santa Barbara and continues to leverage technology as a tool to better understand people.
In August, Khooshabeh, along with Jonathan Gratch, ICT’s associate director for virtual humans research, and additional co-authors from USC’s Marshall School of Business and UCSB, will present recent research that uses virtual humans to advance knowledge of non-verbal thought and emotion in real humans. In this study, players took part in a negotiation game against a virtual human programmed to be either cooperative or competitive and to display either an angry or a happy facial expression.
“The expectation was that facial expression would override behavior, meaning people would be threatened by anger no matter if the virtual human was helping them or working against them in the negotiation,” said Khooshabeh.
However, the results showed that it wasn’t a competitive negotiation strategy or the anger expression itself that caused players stress, but whether the strategy and the virtual human’s emotions matched. Specifically, physiological data showed that a virtual human who played cooperatively but looked angry caused participants to show signs of distress, measured by lower cardiac output and increased eye fixations on the virtual human’s face.
“People don’t always respond to angry faces the same way,” said Khooshabeh. “These results are significant because they suggest context matters in the perception of emotion.”
In another study to be published in an upcoming issue of the Journal of Artificial Intelligence and Society, Khooshabeh and ICT colleagues Morteza Dehghani, Angela Nazarian and Jonathan Gratch gave an otherwise identical virtual character different accents (Iranian-, Chinese- or California-accented spoken English) and analyzed how study subjects who shared an ethnic background with one of the accented characters responded to each version.
Across the two studies, Iranian-Americans interacting with an Iranian-accented, English-speaking virtual human were more likely to make decisions congruent with Iranian cultural customs. Similarly, Chinese-Americans listening to a Chinese-accented, English-speaking virtual human were more likely to make causal attributions congruent with collectivistic, East Asian cultural ideologies.
“Accents matter just as much or possibly more than visual information in forming impressions of others and how they affect our thinking,” said Khooshabeh. “Our work provides experimental evidence that accent can affect individuals differently depending on whether they share the same ethnic cultural background as the target culture.”
In addition to informing the design of virtual humans for training or research tasks, Khooshabeh hopes his research helps make people aware of biases they might not realize they possess, and that it contributes to a greater understanding of how people interact in and respond to stressful situations, whether in surgery, negotiations or cross-cultural dialogues.
“In the real world everything is mixed up,” he said. “If we want to understand the role of a single informational cue – be it an emotion or an accent – we have to take it into the lab.”
Peter Khooshabeh is a U.S. Army Research Laboratory (ARL) research fellow in ICT’s virtual humans group. His work explores the social effects that virtual humans can have on people in areas including culture, thought and emotion. It also advances the use of virtual humans as a communications science research tool for better understanding behavior. In addition to his work at ICT, Khooshabeh spends 30 percent of his time at the University of California at Santa Barbara-led Institute for Collaborative Biotechnologies, another UARC under ARL. His fellowship is supported by the ARL Human Research and Engineering Directorate (HRED) Cognitive Sciences Branch.