Transparency Communication for Machine Learning in Human-Automation Interaction
by David V. Pynadath, Michael J. Barnes, Ning Wang, Jessie Y. C. Chen
Abstract:
Technological advances promise autonomous systems that can form human-machine teams more capable than their individual members. However, understanding the inner workings of autonomous systems has become increasingly challenging for the humans who work with them, especially as machine-learning (ML) methods are widely applied in their design. The “black-box” nature of quantitative ML approaches impedes people’s situation awareness (SA) of these ML-based systems, often resulting in either disuse of, or over-reliance on, the autonomous systems that employ such algorithms. Research in human-automation interaction has shown that transparency communication can improve teammates’ SA, foster the trust relationship, and boost the human-automation team’s performance. In this chapter, we examine the implications of an agent transparency model for human interactions with ML-based agents that use automated explanations. We discuss the application of a particular ML method, reinforcement learning (RL), in Partially Observable Markov Decision Process (POMDP)-based agents, and the design of explanation algorithms for RL in POMDPs.
Reference:
Transparency Communication for Machine Learning in Human-Automation Interaction (David V. Pynadath, Michael J. Barnes, Ning Wang, Jessie Y. C. Chen), Chapter in Human and Machine Learning, Springer International Publishing, Cham, Switzerland, 2018, pp. 75–90.
Bibtex Entry:
@incollection{pynadath_transparency_2018,
	address = {Cham, Switzerland},
	title = {Transparency {Communication} for {Machine} {Learning} in {Human}-{Automation} {Interaction}},
	isbn = {978-3-319-90402-3 978-3-319-90403-0},
	url = {http://link.springer.com/10.1007/978-3-319-90403-0_5},
	abstract = {Technological advances promise autonomous systems that can form human-machine teams more capable than their individual members. However, understanding the inner workings of autonomous systems has become increasingly challenging for the humans who work with them, especially as machine-learning (ML) methods are widely applied in their design. The “black-box” nature of quantitative ML approaches impedes people’s situation awareness (SA) of these ML-based systems, often resulting in either disuse of, or over-reliance on, the autonomous systems that employ such algorithms. Research in human-automation interaction has shown that transparency communication can improve teammates’ SA, foster the trust relationship, and boost the human-automation team’s performance. In this chapter, we examine the implications of an agent transparency model for human interactions with ML-based agents that use automated explanations. We discuss the application of a particular ML method, reinforcement learning (RL), in Partially Observable Markov Decision Process (POMDP)-based agents, and the design of explanation algorithms for RL in POMDPs.},
	booktitle = {Human and {Machine} {Learning}},
	publisher = {Springer International Publishing},
	author = {Pynadath, David V. and Barnes, Michael J. and Wang, Ning and Chen, Jessie Y. C.},
	month = jun,
	year = {2018},
	doi = {10.1007/978-3-319-90403-0_5},
	keywords = {ARL, DoD, Social Simulation, UARC},
	pages = {75--90}
}