Is It My Looks? Or Something I Said? The Impact of Explanations, Embodiment, and Expectations on Trust and Performance in Human-Robot Teams
by Ning Wang, David V. Pynadath, Ericka Rovira, Michael J. Barnes, Susan G. Hill
Abstract:
Trust is critical to the success of human-robot interaction. Research has shown that people will more accurately trust a robot if they have an accurate understanding of its decision-making process. The Partially Observable Markov Decision Process (POMDP) is one such decision-making process, but its quantitative reasoning is typically opaque to people. This lack of transparency is exacerbated when a robot can learn, making its decision making better, but also less predictable. Recent research has shown promise in calibrating human-robot trust by automatically generating explanations of POMDP-based decisions. In this work, we explore factors that can potentially interact with such explanations in influencing human decision-making in human-robot teams. We focus on explanations with quantitative expressions of uncertainty and experiment with common design factors of a robot: its embodiment and its communication strategy in case of an error. Results help us identify valuable properties and dynamics of the human-robot trust relationship.
Reference:
Is It My Looks? Or Something I Said? The Impact of Explanations, Embodiment, and Expectations on Trust and Performance in Human-Robot Teams (Ning Wang, David V. Pynadath, Ericka Rovira, Michael J. Barnes, Susan G. Hill). In Persuasive Technology, volume 10809, Springer International Publishing, Cham, Switzerland, April 2018, pp. 56–69.
Bibtex Entry:
@incollection{wang_is_2018,
	address = {Cham, Switzerland},
	title = {Is {It} {My} {Looks}? {Or} {Something} {I} {Said}? {The} {Impact} of {Explanations}, {Embodiment}, and {Expectations} on {Trust} and {Performance} in {Human}-{Robot} {Teams}},
	volume = {10809},
	isbn = {978-3-319-78977-4 978-3-319-78978-1},
	url = {http://link.springer.com/10.1007/978-3-319-78978-1_5},
	abstract = {Trust is critical to the success of human-robot interaction. Research has shown that people will more accurately trust a robot if they have an accurate understanding of its decision-making process. The Partially Observable Markov Decision Process (POMDP) is one such decision-making process, but its quantitative reasoning is typically opaque to people. This lack of transparency is exacerbated when a robot can learn, making its decision making better, but also less predictable. Recent research has shown promise in calibrating human-robot trust by automatically generating explanations of POMDP-based decisions. In this work, we explore factors that can potentially interact with such explanations in influencing human decision-making in human-robot teams. We focus on explanations with quantitative expressions of uncertainty and experiment with common design factors of a robot: its embodiment and its communication strategy in case of an error. Results help us identify valuable properties and dynamics of the human-robot trust relationship.},
	booktitle = {Persuasive {Technology}},
	publisher = {Springer International Publishing},
	author = {Wang, Ning and Pynadath, David V. and Rovira, Ericka and Barnes, Michael J. and Hill, Susan G.},
	month = apr,
	year = {2018},
	doi = {10.1007/978-3-319-78978-1_5},
	keywords = {ARL, DoD, Social Simulation, UARC},
	pages = {56--69}
}