A Harvard Business Review article on trust and robots featured ICT research on how to make computers more like humans.
“The more you add lifelike characteristics, and particularly the more you add things that seem like emotion, the more strongly it evokes these social effects,” says Jonathan Gratch, a professor at the University of Southern California who studies human-machine interactions. “It’s not always clear that you want your virtual robot teammate to be just like a person. You want it to be better than a person.”
The article states that, in his own research, Gratch has explored how thinking machines might get the best of both worlds, eliciting humans’ trust while avoiding some of the pitfalls of anthropomorphism. In one study, he had participants in two groups discuss their health with a digitally animated figure on a television screen (dubbed a “virtual human”). One group was told that people were controlling the avatar; the other was told that the avatar was fully automated. Those in the latter group were willing to disclose more about their health and even displayed more sadness. “When they’re being talked to by a person, they fear being negatively judged,” Gratch says.
Gratch hypothesizes that “in certain circumstances the lack of humanness of the machine is better.” For instance, “you might imagine that if you had a computer boss, you would be more likely to be truthful about what its shortcomings were.” And in some cases, Gratch thinks, less humanoid robots would even be perceived as less susceptible to bias or favoritism.