The Impact of POMDP-Generated Explanations on Trust and Performance in Human-Robot Teams
by Ning Wang, David V. Pynadath, Susan G. Hill
Abstract:
Researchers have observed that people will more accurately trust an autonomous system, such as a robot, if they have a more accurate understanding of its decision-making process. Studies have shown that hand-crafted explanations can help maintain effective team performance even when the system is less than 100% reliable. However, current explanation algorithms are not sufficient for making a robot's quantitative reasoning (in terms of both uncertainty and conflicting goals) transparent to human teammates. In this work, we develop a novel mechanism for robots to automatically generate explanations of reasoning based on Partially Observable Markov Decision Problems (POMDPs). Within this mechanism, we implement alternate natural-language templates and then measure their differential impact on trust and team performance within an agent-based online test-bed that simulates a human-robot team task. The results demonstrate that the added explanation capability leads to improvement in transparency, trust, and team performance. Furthermore, by observing the different outcomes due to variations in the robot's explanation content, we gain valuable insight that can help lead to refinement of explanation algorithms to further improve human-robot interaction.
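To make the abstract's idea concrete, the following is a minimal illustrative sketch (in Python) of how a natural-language template might be filled from a POMDP's quantitative reasoning: its belief (uncertainty) about the world and the expected values of its candidate actions. The template wording, function names, and numbers below are hypothetical and are not taken from the paper's implementation.

# Illustrative sketch only: the paper generates explanations from a POMDP's
# beliefs and expected action values via natural-language templates. The
# wording and values here are hypothetical.

def explain_choice(belief_danger: float, action: str, expected_values: dict) -> str:
    """Fill a template with the robot's uncertainty (belief that the area is
    dangerous) and its quantitative preference (expected value of the chosen
    action versus the best alternative)."""
    best_alt = max((a for a in expected_values if a != action),
                   key=lambda a: expected_values[a])
    return (
        f"I believe there is a {belief_danger:.0%} chance the area is dangerous, "
        f"so I recommend '{action}' "
        f"(expected value {expected_values[action]:.2f} vs. "
        f"{expected_values[best_alt]:.2f} for '{best_alt}')."
    )

if __name__ == "__main__":
    # Hypothetical numbers for a single decision point.
    print(explain_choice(
        belief_danger=0.73,
        action="put on protective gear",
        expected_values={"put on protective gear": 4.1, "proceed without gear": 1.8},
    ))

Varying which quantities the template exposes (belief only, expected values only, or both) corresponds to the kind of explanation-content variation whose effects on trust and team performance the study measures.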
Reference:
The Impact of POMDP-Generated Explanations on Trust and Performance in Human-Robot Teams (Ning Wang, David V. Pynadath, Susan G. Hill), In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, International Foundation for Autonomous Agents and Multiagent Systems, 2016.
Bibtex Entry:
@inproceedings{wang_impact_2016,
	address = {Singapore},
	title = {The {Impact} of {POMDP}-{Generated} {Explanations} on {Trust} and {Performance} in {Human}-{Robot} {Teams}},
	isbn = {978-1-4503-4239-1},
	url = {http://dl.acm.org/citation.cfm?id=2937071},
	abstract = {Researchers have observed that people will more accurately trust an autonomous system, such as a robot, if they have a more accurate understanding of its decision-making process. Studies have shown that hand-crafted explanations can help maintain effective team performance even when the system is less than 100\% reliable. However, current explanation algorithms are not sufficient for making a robot's quantitative reasoning (in terms of both uncertainty and conflicting goals) transparent to human teammates. In this work, we develop a novel mechanism for robots to automatically generate explanations of reasoning based on Partially Observable Markov Decision Problems (POMDPs). Within this mechanism, we implement alternate natural-language templates and then measure their differential impact on trust and team performance within an agent-based online test-bed that simulates a human-robot team task. The results demonstrate that the added explanation capability leads to improvement in transparency, trust, and team performance. Furthermore, by observing the different outcomes due to variations in the robot's explanation content, we gain valuable insight that can help lead to refinement of explanation algorithms to further improve human-robot interaction.},
	booktitle = {Proceedings of the 2016 {International} {Conference} on {Autonomous} {Agents} \& {Multiagent} {Systems}},
	publisher = {International Foundation for Autonomous Agents and Multiagent Systems},
	author = {Wang, Ning and Pynadath, David V. and Hill, Susan G.},
	month = may,
	year = {2016},
	keywords = {ARL, DoD, Social Simulation, UARC},
	pages = {997--1005}
}