Semi-Automated Construction of Decision-Theoretic Models of Human Behavior
by David V. Pynadath, Heather Rosoff, Richard S. John
Abstract:
Multiagent social simulation provides a powerful mechanism for policy makers to understand the potential outcomes of their decisions before implementing them. However, the value of such simulations depends on the accuracy of their underlying agent models. In this work, we present a method for automatically exploring a space of decision-theoretic models to arrive at a multiagent social simulation that is consistent with human behavior data. We start with a factored Partially Observable Markov Decision Process (POMDP) whose states, actions, and reward capture the questions asked in a survey from a disaster response scenario. Using input from domain experts, we construct a set of hypothesized dependencies that may or may not exist in the transition probability function. We present an algorithm to search through each of these hypotheses, evaluate their accuracy with respect to the data, and choose the models that best reflect the observed behavior, including individual differences. The result is a mechanism for constructing agent models that are grounded in human behavior data, while still being able to support hypothetical reasoning that is the main advantage of multiagent social simulation.
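The abstract describes a search over a space of hypothesized transition dependencies, scoring each candidate model against observed behavior and keeping the best fit. The following is a minimal sketch of that kind of search loop, not the authors' implementation; the dependency names, the `toy_predict` policy, and the accuracy-based `score_model` metric are illustrative assumptions.

```python
# Sketch of a hypothesis-space search: enumerate candidate dependency sets,
# score each hypothesized model against observed (state, action) data, and
# return the best-fitting one. All names here are illustrative, not from
# the paper.
from itertools import combinations
from typing import Callable, Dict, FrozenSet, List, Tuple

# Candidate dependencies that may or may not appear in the transition
# probability function (hypothetical examples).
CANDIDATE_DEPENDENCIES = ["evacuation<-risk", "shelter<-family", "risk<-alert"]

def enumerate_hypotheses(deps: List[str]):
    """Yield every subset of candidate dependencies (the hypothesis space)."""
    for k in range(len(deps) + 1):
        for subset in combinations(deps, k):
            yield frozenset(subset)

def score_model(hypothesis: FrozenSet[str],
                observed: List[Tuple[Dict[str, float], str]],
                predict: Callable[[FrozenSet[str], Dict[str, float]], str]) -> float:
    """Fraction of observed (state, action) pairs the hypothesized model predicts."""
    hits = sum(1 for state, action in observed
               if predict(hypothesis, state) == action)
    return hits / len(observed) if observed else 0.0

def best_hypothesis(observed, predict):
    """Search the hypothesis space and return the best-scoring model."""
    return max(enumerate_hypotheses(CANDIDATE_DEPENDENCIES),
               key=lambda h: score_model(h, observed, predict))

if __name__ == "__main__":
    # Toy stand-in for survey responses: (state features, chosen action).
    observed_choices = [({"risk": 0.9}, "evacuate"), ({"risk": 0.2}, "stay")]

    # Toy stand-in for evaluating the POMDP policy under a given hypothesis.
    def toy_predict(hypothesis, state):
        if "evacuation<-risk" in hypothesis and state.get("risk", 0.0) > 0.5:
            return "evacuate"
        return "stay"

    print(best_hypothesis(observed_choices, toy_predict))
```

In the paper's setting, the scoring step would evaluate each hypothesized POMDP against survey responses, and the same search could be run per respondent to capture individual differences.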
Reference:
Semi-Automated Construction of Decision-Theoretic Models of Human Behavior (David V. Pynadath, Heather Rosoff, Richard S. John), In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, International Foundation for Autonomous Agents and Multiagent Systems, 2016.
Bibtex Entry:
@inproceedings{pynadath_semi-automated_2016,
	address = {Singapore},
	title = {Semi-{Automated} {Construction} of {Decision}-{Theoretic} {Models} of {Human} {Behavior}},
	isbn = {978-1-4503-4239-1},
	url = {http://dl.acm.org/citation.cfm?id=2937055},
	abstract = {Multiagent social simulation provides a powerful mechanism for policy makers to understand the potential outcomes of their decisions before implementing them. However, the value of such simulations depends on the accuracy of their underlying agent models. In this work, we present a method for automatically exploring a space of decision-theoretic models to arrive at a multiagent social simulation that is consistent with human behavior data. We start with a factored Partially Observable Markov Decision Process (POMDP) whose states, actions, and reward capture the questions asked in a survey from a disaster response scenario. Using input from domain experts, we construct a set of hypothesized dependencies that may or may not exist in the transition probability function. We present an algorithm to search through each of these hypotheses, evaluate their accuracy with respect to the data, and choose the models that best reflect the observed behavior, including individual differences. The result is a mechanism for constructing agent models that are grounded in human behavior data, while still being able to support hypothetical reasoning that is the main advantage of multiagent social simulation.},
	booktitle = {Proceedings of the 2016 {International} {Conference} on {Autonomous} {Agents} \& {Multiagent} {Systems}},
	publisher = {International Foundation for Autonomous Agents and Multiagent Systems},
	author = {Pynadath, David V. and Rosoff, Heather and John, Richard S.},
	month = may,
	year = {2016},
	keywords = {Social Simulation, UARC},
	pages = {891--899}
}