Clustering Behavior to Recognize Subjective Beliefs in Human-Agent Teams (bibtex)
by David V. Pynadath, Ning Wang, Ericka Rovira, Michael J. Barnes
Abstract:
Trust is critical to the success of human-agent teams, and a critical antecedent to trust is transparency. To best interact with human teammates, an agent must explain itself so that they understand its decision-making process. However, individual differences among human teammates require that the agent dynamically adjust its explanation strategy based on their unobservable subjective beliefs. The agent must therefore recognize its teammates' subjective beliefs relevant to trust-building (e.g., their understanding of the agent's capabilities and process). We leverage a nonparametric method to enable an agent to use its history of prior interactions as a means for recognizing and predicting a new teammate's subjective beliefs. We first gather data combining observable behavior sequences with survey-based observations of typically unobservable perceptions. We then use a nearest-neighbor approach to identify the prior teammates most similar to the new one. We use these neighbors' responses to infer the likelihood of possible beliefs, as in collaborative filtering. The results provide insights into the types of beliefs that are easy (and hard) to infer from purely behavioral observations.
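The abstract's method (nearest-neighbor matching over behavior sequences, then collaborative-filtering-style inference from the neighbors' survey responses) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the distance function, the toy behavior strings (e.g., follow/ignore the agent's advice), and the belief labels are all assumptions for the example.

```python
# Sketch of the described approach: find the k prior teammates whose observed
# behavior sequence is closest to the new teammate's, then estimate the
# likelihood of each surveyed belief from those neighbors' responses.
# All names and the toy data below are illustrative, not from the paper.
from collections import Counter

def behavior_distance(seq_a, seq_b):
    """Hamming-style distance between two equal-length behavior sequences."""
    return sum(a != b for a, b in zip(seq_a, seq_b))

def infer_belief_likelihoods(new_seq, prior_teammates, k=3):
    """Return {belief: likelihood} estimated from the k nearest neighbors,
    as in collaborative filtering over prior teammates' survey answers."""
    neighbors = sorted(
        prior_teammates,
        key=lambda t: behavior_distance(new_seq, t["behavior"]),
    )[:k]
    counts = Counter(t["belief"] for t in neighbors)
    return {belief: n / k for belief, n in counts.items()}

# Toy history: 'F' = followed the agent's advice, 'I' = ignored it,
# paired with a survey-reported belief about the agent's ability.
history = [
    {"behavior": "FFFI", "belief": "high-ability"},
    {"behavior": "FFII", "belief": "high-ability"},
    {"behavior": "IIII", "belief": "low-ability"},
    {"behavior": "IIIF", "belief": "low-ability"},
    {"behavior": "FIFI", "belief": "high-ability"},
]

print(infer_belief_likelihoods("FFFF", history, k=3))
```

For a new teammate who followed the agent's advice every round, the three nearest prior teammates all reported the "high-ability" belief, so the sketch assigns that belief likelihood 1.0.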
Reference:
Clustering Behavior to Recognize Subjective Beliefs in Human-Agent Teams (David V. Pynadath, Ning Wang, Ericka Rovira, Michael J. Barnes), In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, International Foundation for Autonomous Agents and Multiagent Systems, 2018.
Bibtex Entry:
@inproceedings{pynadath_clustering_2018,
	address = {Stockholm, Sweden},
	title = {Clustering {Behavior} to {Recognize} {Subjective} {Beliefs} in {Human}-{Agent} {Teams}},
	url = {https://dl.acm.org/citation.cfm?id=3237923},
	abstract = {Trust is critical to the success of human-agent teams, and a critical antecedent to trust is transparency. To best interact with human teammates, an agent must explain itself so that they understand its decision-making process. However, individual differences among human teammates require that the agent dynamically adjust its explanation strategy based on their unobservable subjective beliefs. The agent must therefore recognize its teammates' subjective beliefs relevant to trust-building (e.g., their understanding of the agent's capabilities and process). We leverage a nonparametric method to enable an agent to use its history of prior interactions as a means for recognizing and predicting a new teammate's subjective beliefs. We first gather data combining observable behavior sequences with survey-based observations of typically unobservable perceptions. We then use a nearest-neighbor approach to identify the prior teammates most similar to the new one. We use these neighbors' responses to infer the likelihood of possible beliefs, as in collaborative filtering. The results provide insights into the types of beliefs that are easy (and hard) to infer from purely behavioral observations.},
	booktitle = {Proceedings of the 17th {International} {Conference} on {Autonomous} {Agents} and {MultiAgent} {Systems}},
	publisher = {International Foundation for Autonomous Agents and Multiagent Systems},
	author = {Pynadath, David V. and Wang, Ning and Rovira, Ericka and Barnes, Michael J.},
	month = jul,
	year = {2018},
	keywords = {ARL, DoD, Social Simulation, UARC},
	pages = {1495--1503}
}