Dr. Gale Lucas’s Human-AI Interaction Research Featured in Fast Company

Published: August 11, 2025
Category: News

ICT’s work in human–AI interaction was recently featured in Fast Company, highlighting research led by Dr. Gale Lucas, Director of ICT’s Technology Evaluation Lab. The article, written by FutureThink CEO Lisa Bodell, examined why AI coaching often works as well as, and in some cases better than, human coaching.

At the heart of the story is ICT’s groundbreaking study demonstrating that virtual humans can increase a person’s willingness to disclose sensitive or personal information.

Dr. Lucas’s Technology Evaluation Lab investigates how people interact with social technologies—virtual agents, social robots, and smart environments—with the goal of optimizing these interactions to build rapport, trust, and positive outcomes. The lab develops and tests systems capable of persuading, negotiating, and fostering behavior change, while applying rigorous user-study design and analysis to ensure results are both robust and applicable in the real world.

The featured research comes from Dr. Lucas’s paper “It’s Only a Computer: Virtual Humans Increase Willingness to Disclose” (August 2014), published in Computers in Human Behavior (37:94–100). The study explored how participants’ belief that they were interacting with an AI affected their comfort in a clinical interview setting. Participants interacted with a rapport-building virtual human interviewer but were told either that the agent was controlled by a human operator or that it was fully automated.

The findings were striking: participants who believed they were speaking with an automated system reported lower fear of self-disclosure, engaged in less impression management, and were observed, both by human coders and by automated facial-expression recognition software, to share more openly, including a greater willingness to express sadness.

“In many health contexts, people hold back because they fear judgment,” Dr. Lucas explained. “When they believe the conversation is with an impartial computer, they feel safer, and that can lead to more honest, detailed responses—responses that could improve diagnosis and care.”

The study, supported by DARPA and the U.S. Army, also revealed unsolicited participant feedback underscoring the point: some reported they “would have said a lot more” if they thought no one else was watching, while others described the automated interviewer as “way better than talking to a person” about personal matters.

The implications extend well beyond clinical screenings. Virtual humans designed with these principles could be used to collect more accurate patient histories, encourage disclosure in mental health settings, or support behavior-change interventions. They could also play a role in training healthcare providers, offering a safe environment to practice sensitive conversations without fear of being judged.

Economically, the technology also offers potential cost savings: once developed, virtual human systems can be replicated at scale and deployed in remote or underserved areas where access to in-person specialists is limited.

“This research shows that well-designed virtual humans aren’t just a novelty,” Dr. Lucas said. “They can be an essential tool in lowering barriers between people and the help they need.”

By treating human–machine interaction as a mechanism for fostering greater self-disclosure, the ICT Technology Evaluation Lab is advancing the empirical foundations for integrating artificial intelligence into applications where candid, high-quality information is essential. Such systems have the potential to surface insights that individuals may be less inclined to share in traditional human-mediated settings.
