Trust Calibration within a Human-Robot Team: Comparing Automatically Generated Explanations
by Ning Wang, David V. Pynadath, and Susan G. Hill
Abstract:
Trust is a critical factor for achieving the full potential of human-robot teams. Researchers have theorized that people will more accurately trust an autonomous system, such as a robot, if they have a more accurate understanding of its decision-making process. Studies have shown that hand-crafted explanations can help maintain trust when the system is less than 100% reliable. In this work, we leverage existing agent algorithms to provide a domain-independent mechanism for robots to automatically generate such explanations. To measure the explanation mechanism's impact on trust, we collected self-reported survey data and behavioral data in an agent-based online testbed that simulates a human-robot team task. The results demonstrate that the added explanation capability led to improvement in transparency, trust, and team performance. Furthermore, by observing the different outcomes due to variations in the robot's explanation content, we gain valuable insight that can help lead to refinement of explanation algorithms to further improve human-robot trust calibration.
Reference:
Trust Calibration within a Human-Robot Team: Comparing Automatically Generated Explanations (Wang, Ning, Pynadath, David V. and Hill, Susan G.), In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), IEEE, 2016.
Bibtex Entry:
@inproceedings{wang_trust_2016,
	address = {New Zealand},
	title = {Trust {Calibration} within a {Human}-{Robot} {Team}: {Comparing} {Automatically} {Generated} {Explanations}},
	url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=7451741},
	doi = {10.1109/HRI.2016.7451741},
	abstract = {Trust is a critical factor for achieving the full potential of human-robot teams. Researchers have theorized that people will more accurately trust an autonomous system, such as a robot, if they have a more accurate understanding of its decision-making process. Studies have shown that hand-crafted explanations can help maintain trust when the system is less than 100\% reliable. In this work, we leverage existing agent algorithms to provide a domain-independent mechanism for robots to automatically generate such explanations. To measure the explanation mechanism's impact on trust, we collected self-reported survey data and behavioral data in an agent-based online testbed that simulates a human-robot team task. The results demonstrate that the added explanation capability led to improvement in transparency, trust, and team performance. Furthermore, by observing the different outcomes due to variations in the robot's explanation content, we gain valuable insight that can help lead to refinement of explanation algorithms to further improve human-robot trust calibration.},
	booktitle = {2016 11th {ACM}/{IEEE} {International} {Conference} on {Human}-{Robot} {Interaction} ({HRI})},
	publisher = {IEEE},
	author = {Wang, Ning and Pynadath, David V. and Hill, Susan G.},
	month = mar,
	year = {2016},
	keywords = {Social Simulation, UARC, ARL, DoD},
	pages = {109--116}
}