The Dynamics of Human-Agent Trust with POMDP-Generated Explanations
by Ning Wang, David V. Pynadath, Susan G. Hill, Chirag Merchant
Abstract:
Partially Observable Markov Decision Processes (POMDPs) enable optimized decision making by robots, agents, and other autonomous systems. This quantitative optimization can also be a limitation in human-agent interaction, as the resulting autonomous behavior, while possibly optimal, is often impenetrable to human teammates, leading to improper trust and, subsequently, disuse or misuse of such systems [1]. Automatically generated explanations of POMDP-based decisions have shown promise in calibrating human-agent trust [3]. However, these “one-size-fits-all” static explanation policies are insufficient to accommodate different communication preferences across people. In this work, we analyze human behavior in a human-robot interaction (HRI) scenario to find behavioral indicators of trust in the agent’s ability. We evaluate four hypothesized behavioral measures that an agent could potentially use to dynamically infer its teammate’s current trust level. The conclusions drawn can potentially inform the design of intelligent agents that can automatically adapt their explanation policies as they observe the behavioral responses of their human teammates.
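To make the idea of inferring a teammate's trust from observed behavior concrete, here is a minimal illustrative sketch (not taken from the paper): a Bayesian belief update over a hidden discrete trust level, driven by whether the human follows or overrides the agent's recommendation. The trust levels, prior, and likelihood values are hypothetical placeholders, and the paper's actual behavioral measures and model may differ.

```python
# Illustrative sketch only: Bayesian belief update over a human teammate's
# hidden trust level, using one behavioral signal (did the human follow the
# agent's recommendation?). All numbers below are hypothetical.

TRUST_LEVELS = ("low", "medium", "high")

# Hypothetical likelihood model: P(follow recommendation | trust level).
P_FOLLOW_GIVEN_TRUST = {"low": 0.2, "medium": 0.6, "high": 0.9}


def update_trust_belief(belief, followed):
    """Return the posterior over trust levels after one observed decision.

    belief   -- dict mapping trust level to current probability
    followed -- True if the human accepted the agent's recommendation
    """
    posterior = {}
    for level in TRUST_LEVELS:
        p_obs = P_FOLLOW_GIVEN_TRUST[level] if followed else 1.0 - P_FOLLOW_GIVEN_TRUST[level]
        posterior[level] = belief[level] * p_obs
    total = sum(posterior.values())
    return {level: p / total for level, p in posterior.items()}


if __name__ == "__main__":
    # Uniform prior; the human overrides the agent twice, then follows it once.
    belief = {level: 1.0 / len(TRUST_LEVELS) for level in TRUST_LEVELS}
    for followed in (False, False, True):
        belief = update_trust_belief(belief, followed)
    print(belief)  # belief mass shifts toward "low" trust
```

An agent maintaining such a belief could, in principle, switch explanation policies when the inferred trust drifts away from a calibrated level, which is the kind of adaptive behavior the abstract points toward.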
Reference:
The Dynamics of Human-Agent Trust with POMDP-Generated Explanations (Ning Wang, David V. Pynadath, Susan G. Hill, Chirag Merchant), In Proceedings of the 17th International Conference on Intelligent Virtual Agents (IVA 2017), Springer International Publishing, 2017.
Bibtex Entry:
@inproceedings{wang_dynamics_2017,
	address = {Stockholm, Sweden},
	title = {The {Dynamics} of {Human}-{Agent} {Trust} with {POMDP}-{Generated} {Explanations}},
	isbn = {978-3-319-67400-1 978-3-319-67401-8},
	url = {https://link.springer.com/chapter/10.1007/978-3-319-67401-8_58},
	abstract = {Partially Observable Markov Decision Processes (POMDPs) enable optimized decision making by robots, agents, and other autonomous systems. This quantitative optimization can also be a limitation in human-agent interaction, as the resulting autonomous behavior, while possibly optimal, is often impenetrable to human teammates, leading to improper trust and, subsequently, disuse or misuse of such systems [1]. Automatically generated explanations of POMDP-based decisions have shown promise in calibrating human-agent trust [3]. However, these “one-size-fits-all” static explanation policies are insufficient to accommodate different communication preferences across people. In this work, we analyze human behavior in a human-robot interaction (HRI) scenario, to find behavioral indicators of trust in the agent’s ability. We evaluate four hypothesized behavioral measures that an agent could potentially use to dynamically infer its teammate’s current trust level. The conclusions drawn can potentially inform the design of intelligent agents that can automatically adapt their explanation policies as they observe the behavioral responses of their human teammates.},
	booktitle = {Proceedings of the 17th {International} {Conference} on {Intelligent} {Virtual} {Agents} ({IVA} 2017)},
	publisher = {Springer International Publishing},
	author = {Wang, Ning and Pynadath, David V. and Hill, Susan G. and Merchant, Chirag},
	month = aug,
	year = {2017},
	keywords = {ARL, MedVR, Social Simulation, UARC}
}