Programming Autonomous Machines Ahead of Time Promotes Selfless Decision Making

Published: February 11, 2019
Category: Press Releases | News

ABERDEEN PROVING GROUND, Md. (Feb. 11, 2019) — A new study suggests that asking people to program autonomous machines ahead of time, rather than make decisions in the moment, leads them to cooperate more with one another.

Researchers from the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory, the Army’s Institute for Creative Technologies and Northeastern University collaborated on a paper published in the Proceedings of the National Academy of Sciences.

The research team, led by Dr. Celso de Melo, ARL, in collaboration with Drs. Jonathan Gratch, ICT, and Stacy Marsella, NU, conducted a study of 1,225 volunteers who participated in computerized experiments involving a social dilemma with autonomous vehicles.

“Autonomous machines that act on people’s behalf — such as robots, drones and autonomous vehicles — are quickly becoming a reality and are expected to play an increasingly important role in the battlefield of the future,” de Melo said. “People are more likely to make unselfish decisions to favor collective interest when asked to program autonomous machines ahead of time versus making the decision in real time on a moment-to-moment basis.”

De Melo said that despite promises of increased efficiency, it is not clear whether this paradigm shift will change how people decide when their self-interest is pitted against the collective interest.

“For instance, should a reconnaissance drone prioritize intelligence gathering that is relevant to the squad’s immediate needs or to the platoon’s overall mission?” de Melo asked. “Should a search-and-rescue robot prioritize local civilians or focus on mission-critical assets?”

“Our research in PNAS starts to examine how these transformations might alter human organizations and relationships,” Gratch said. “Our expectation, based on some prior work on human intermediaries, was that AI representatives might make people more selfish and show less concern for others.”

Results reported in the paper indicate that volunteers programmed their autonomous vehicles to behave more cooperatively than they behaved when driving themselves. The evidence suggests this happens because programming a machine in advance makes selfish short-term rewards less salient, prompting people to weigh broader societal goals.

“We were surprised by these findings,” Gratch said. “By thinking about one’s choices in advance, people actually show more regard for cooperation and fairness. It is as if by being forced to carefully consider their decisions, people placed more weight on prosocial goals. When making decisions moment-to-moment, in contrast, they become more driven by self-interest.”

The results further show that the effect also occurs in an abstract version of the social dilemma, which the researchers say indicates it generalizes beyond the domain of autonomous vehicles.

“The decision of how to program autonomous machines, in practice, is likely to be distributed across multiple stakeholders with competing interests, including government, manufacturers and controllers,” de Melo said. “In moral dilemmas, for instance, research indicates that people would prefer other people’s autonomous vehicles to maximize the preservation of life (even if that meant sacrificing the driver), but would prefer their own vehicle to maximize the preservation of the driver’s life.”

As these issues are debated, the researchers say it is important to understand that in the possibly more prevalent case of social dilemmas, where individual interest is pitted against collective interest, autonomous machines have the potential to shape how those dilemmas are resolved. These stakeholders therefore have an opportunity to promote a more cooperative society.

###