Using AI to Help Humans

Published: April 3, 2024
Category: Essays | News

By Dr. Gale Lucas, Director, Technology Evaluation Lab, USC Institute for Creative Technologies

Dr. Gale Lucas directs the Technology Evaluation Lab at the USC Institute for Creative Technologies (ICT) and co-directs CENTIENTS, the Center for Intelligent Environments, a cross-disciplinary partnership with the Sonny Astani Department of Civil and Environmental Engineering at USC. Here she talks about using AI to help people, in relation to some of her current research projects.

In my lab at ICT – the Technology Evaluation Lab – our goal is to use AI to improve people’s lives. To do this, we conduct lines of research in Human-Computer Interaction (HCI) aimed at optimizing interactions, and subsequent outcomes, between humans and socially intelligent systems, including virtual agents, social robots, and other social interfaces in the built environment.

How do we do this? We conduct user studies to examine that interaction – between humans and the non-biological entities mentioned above – empirically, gathering data with a particular focus on understanding how those relationships develop. We identify, record, analyze and draw conclusions about how the human subjects in our studies develop rapport and trust with AI, and how the AIs persuade, negotiate or otherwise engage in social influence with humans (including fostering behavior change).

Moreover, I am proud that, in the lab, we not only develop prototypes and applications but, importantly, conduct user studies with great attention to proper research design and analysis. Through published studies and leadership and teaching in this area, members of the Technology Evaluation Lab lead the way in best practices for user-study research design and analysis.

How Did I Get Here?

My job today is one of those that “didn’t exist on the Career Week chart” in high school. But I can draw a through line to where I am now from as far back as I can remember, because my career aspirations have always included science, and helping people.

When I was four, I wanted to be a veterinarian. I knew I would perform miraculous life-saving operations, and then stare down at the grateful button eyes looking back up at me. Cut to eighth grade. Career Week. A world opened up, with so many options and possibilities. That was an important turning point for me; it taught me the value of having an open mind, a quality which has made all the difference in my life. In high school, I contemplated several different careers, ranging from dermatologist to environmental engineer. Though seemingly disparate, they had something in common: always in a science field, and always giving back to the community.

In college, I took my first psychology classes. I homed in on becoming a researcher of psychological science. I got my BA, then worked towards my PhD in Social Psychology. After grad school, I thought I was taking “a bit of a detour” for my post-doc by working in the area of Human-Computer Interaction (HCI) at ICT with Jon Gratch as my mentor. I imagined I’d go back to a faculty position in psychology after that.

But no. I’m still at ICT because, as it turned out, THIS was the career, in science and giving back to the community, that fits me best. In fact, this is the career that is perfect for me.

Back to the Lab

So that was the through line in my career to date – always a science field, and always giving back to the community. 

How does that manifest in the Technology Evaluation Lab today? 

To answer that, here are two of our current research tracks that show how we use AI to try to improve lives and help people: 

Human Responses to Emergencies in Buildings 

According to an FBI report (May 2022), the number of active shooter incidents in the U.S. rose 96.8% between 2017 and 2021, and by 52.5% from 2020 to 2021. To address this escalation, we embarked on a research study called Human Responses to Emergencies in Buildings. This research seeks to improve preparedness and response to active shooter events, which could save lives and reduce panic, anger and confusion during these events. To this end, we ask: how do factors such as building design, the size and demographics of the crowd, and individual differences (e.g. familiarity with the building) impact human responses?

To answer these questions, we conducted human subject experiments using Immersive Virtual Environments (IVEs), modeling the built environment in virtual reality and simulating the behavior of both the adversaries and the crowd. By exposing participants to an active shooter incident in an IVE, we could measure responses in realistic ways that are not possible outside the laboratory environment. This research was funded by the NSF.
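
For readers curious what simulating the behavior of both the adversaries and the crowd can look like under the hood, here is a minimal sketch of agent-based movement logic, in Python. To be clear, this is a hypothetical illustration, not our actual IVE codebase; every name, coordinate and constant in it is an assumption chosen only to make the idea concrete.

```python
# A minimal, illustrative sketch of agent-based logic of the kind that can
# drive crowd behavior in a simulated emergency. This is NOT the lab's
# actual IVE codebase; all names and parameters here are hypothetical.
import math
import random

EXIT = (0.0, 0.0)  # hypothetical exit location in floor-plan coordinates

class CrowdAgent:
    def __init__(self, x, y, speed=1.4):
        self.x, self.y = x, y
        self.speed = speed        # rough movement speed, meters per second
        self.escaped = False

    def step(self, threat_pos, dt=0.1):
        """Move toward the exit while steering away from the threat."""
        if self.escaped:
            return
        # Attraction toward the exit.
        gx, gy = EXIT[0] - self.x, EXIT[1] - self.y
        # Repulsion from the threat, falling off with distance.
        tx, ty = self.x - threat_pos[0], self.y - threat_pos[1]
        t_dist = math.hypot(tx, ty) + 1e-6
        repel = 5.0 / t_dist      # hypothetical repulsion strength
        vx = gx + repel * tx / t_dist
        vy = gy + repel * ty / t_dist
        norm = math.hypot(vx, vy) + 1e-6
        self.x += self.speed * dt * vx / norm
        self.y += self.speed * dt * vy / norm
        # Count the agent as escaped once within a meter of the exit.
        if math.hypot(self.x - EXIT[0], self.y - EXIT[1]) < 1.0:
            self.escaped = True

# Run a toy episode: 20 agents fleeing one stationary adversary.
agents = [CrowdAgent(random.uniform(5, 30), random.uniform(5, 30))
          for _ in range(20)]
threat = (20.0, 20.0)
for tick in range(2000):
    for agent in agents:
        agent.step(threat)
print(sum(a.escaped for a in agents), "of", len(agents), "agents reached the exit")
```

In a real IVE, comparable behavioral rules run inside the virtual building model, so that changing the floor plan changes how the simulated crowd and adversary move through it.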

Retrofitting buildings to be hardened against active shooters may slow such bad actors down, but it also slows victims down as they try to survive. Participants experienced the same active shooter incident in a virtual version of either the original building or a retrofitted, hardened version. The original design cut participants’ response time by 25%; in other words, participants were significantly slower when the building was retrofitted to be hardened against active shooters. The original design also enabled the majority of people to run, whereas only about half chose to run (and the other half to hide) in the hardened version.

After establishing this finding, we turned our sights to an even bigger problem that we discovered along the way. During focus groups with first responders, law enforcement, and security experts, we heard one theme ring out above the others: training for active shooter incidents is critical to helping people survive them. Unlike fire drills, these are very challenging (and expensive) to conduct live. So we have developed active shooter training in virtual reality, and are now testing its effectiveness in training workers on what to do during such incidents.

The study “result” I am most proud of came from this project. A USC employee who had been trained using our active shooter training in virtual reality was involved in an incident with live fire. He relayed to our team that, because of the training, he was able to remain calm and knew what to do to stay safe. To me, this was a key outcome aligning with my overall goal of helping people through science.

Trust and Teaming 

A key component of IVEs is establishing a sense of trust within them, and within the human-machine interaction more broadly; otherwise, we know, people will not use them.

This is where our next research track comes in:

Now that humans are teaming with automated agents, trust is a critical factor. Trust in automation has never been a given, for a variety of reasons, including fear of losing jobs to automation. But trust issues around autonomy are complex. People’s trust in automation also depends on their perception of it: is it capable of doing the required task? Will it understand their goals? People can over-trust as well, and then abandon the tool when it makes a mistake. How can trust be repaired within the human-automation relationship?
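
To make these dynamics concrete, here is a toy sketch, in Python, of how trust might accrue slowly with successes, drop sharply after an error, and partially recover after a repair attempt such as an apology or explanation. The update rules and numbers are illustrative assumptions, not measurements from our studies.

```python
# A toy model of trust dynamics in human-automation interaction, meant only
# to make the over-trust / error / repair pattern concrete. The update rules
# and constants are illustrative assumptions, not findings from our studies.

def update_trust(trust, outcome, repair_attempted=False):
    """Return updated trust in [0, 1] after one interaction."""
    if outcome == "success":
        trust = min(1.0, trust + 0.05)    # trust accrues slowly with successes
    elif outcome == "error":
        trust = max(0.0, trust - 0.30)    # a single error costs a lot of trust
        if repair_attempted:              # e.g. an apology or an explanation
            trust = min(1.0, trust + 0.15)  # repair recovers part of the loss
    return trust

trust = 0.8  # a fairly trusting (perhaps over-trusting) starting point
history = ["success", "success", "error", "success"]
for outcome in history:
    trust = update_trust(trust, outcome, repair_attempted=(outcome == "error"))
print(f"trust after one error with attempted repair: {trust:.2f}")
```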

Even outside of work, people can be suspicious of new forms of automation. Traditional training resources have failed to focus on, or build, trust in the human-machine relationship. This is a huge problem, because when users trust the automation less than they could (based on its capabilities), they will underutilize the system and fail to reap the benefits of partnering with automated agents.

To this end, we use IVEs, among other research paradigms, to study how, and when, relational factors can facilitate trust in automation, especially in automated teammates. Much of the prior research in this area has taken a more “informational approach,” in which trust is built by increasing users’ understanding of the automation. In our work we have gone further, exploring how human-like factors, such as relational behaviors (e.g. rapport-building dialogue, affective listening) and natural interactions (e.g. natural language, contingent agent behaviors), play a part in engendering trust in human-machine relationships.
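
To illustrate the distinction, here is a schematic sketch, in Python, contrasting an “informational” dialogue turn with a “relational” one. The utterances and the simple keyword check are hypothetical stand-ins, not our agents’ actual dialogue content or architecture.

```python
# A schematic contrast between an "informational" and a "relational" dialogue
# policy for a virtual agent. Everything here is a hypothetical illustration
# of the distinction, not the lab's actual dialogue system.

def informational_turn(user_input: str) -> str:
    """Build trust by explaining what the automation can and cannot do."""
    # (Ignores the user's wording; the content is purely about capabilities.)
    return ("I detect obstacles with my sensors and stop within half a meter. "
            "I cannot see around corners, so please stay within my camera's view.")

def relational_turn(user_input: str) -> str:
    """Build trust with rapport: acknowledge feelings, respond contingently."""
    acknowledgment = "That sounds stressful. Thanks for telling me."
    # Contingent behavior: the follow-up depends on what the user just said.
    if "worried" in user_input.lower() or "unsafe" in user_input.lower():
        follow_up = "What part of working near me worries you most?"
    else:
        follow_up = "I'm glad that went smoothly. Is there anything you'd change?"
    return f"{acknowledgment} {follow_up}"

print(relational_turn("I felt unsafe when the robot moved quickly."))
```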

Here are a few results from this research, showing that AI helped the humans in these studies: 

  • In a sample of veterans and civilians, a virtual human interviewer doubled people’s willingness to share sensitive or personal details about their lives compared to a human interviewer. 
  • National Guard members trusted a virtual human interviewer, sharing 3x more symptoms of PTSD with the virtual human than with their commander. 
  • Construction workers’ trust in a demolition robot increased 3x more when they were trained using our VR-based training than with the traditional method of in-person training.

These are just two of the research tracks within the Technology Evaluation Lab that focus on helping people through the use of AI, enabling us to contribute to ICT’s overall mission to use creative technologies to teach, train, help and heal. 

//

Dr. Gale Lucas is the Director, Technology Evaluation Lab, USC Institute for Creative Technologies, and Co-Director of CENTIENTS, Center for Intelligent Environments, a cross-disciplinary partnership with the Sonny Astani Department of Civil and Environmental Engineering at the University of Southern California. Dr. Lucas joined ICT in 2013 as a Researcher in the Affective Computing Lab and also holds a position as Research Assistant Professor of Computer Science and Civil and Environmental Engineering, USC Viterbi School of Engineering. She holds a BA in Psychology from Willamette University, a PhD in Psychology from Northwestern University, and did her postdoctoral study in Human-Computer Interaction at the University of Southern California (USC).