"So, which one is it?" The effect of alternative incremental architectures in a high-performance game-playing agent (bibtex)
by Paetzel, Maike, Manuvinakurike, Ramesh and DeVault, David
Abstract:
This paper introduces Eve, a high-performance agent that plays a fast-paced image matching game in a spoken dialogue with a human partner. The agent can be optimized and operated in three different modes of incremental speech processing that optionally include incremental speech recognition, language understanding, and dialogue policies. We present our framework for training and evaluating the agent's dialogue policies. In a user study involving 125 human participants, we evaluate three incremental architectures against each other and also compare their performance to human-human gameplay. Our study reveals that the most fully incremental agent achieves game scores that are comparable to those achieved in human-human gameplay, are higher than those achieved by partially and non-incremental versions, and are accompanied by improved user perceptions of efficiency, understanding of speech, and naturalness of interaction.
Reference:
"So, which one is it?" The effect of alternative incremental architectures in a high-performance game-playing agent (Paetzel, Maike, Manuvinakurike, Ramesh and DeVault, David), In Proceedings of SIGDIAL 2015, 2015.
Bibtex Entry:
@inproceedings{paetzel_so_2015,
	address = {Prague, Czech Republic},
	title = {"{So}, which one is it?" {The} effect of alternative incremental architectures in a high-performance game-playing agent},
	shorttitle = {So, which one is it?},
	url = {http://ict.usc.edu/pubs/So,%20which%20one%20is%20it%20-%20The%20effect%20of%20alternative%20incremental%20architectures%20in%20a%20high-performance%20game-playing%20agent.pdf},
	abstract = {This paper introduces Eve, a high-performance agent that plays a fast-paced image matching game in a spoken dialogue with a human partner. The agent can be optimized and operated in three different modes of incremental speech processing that optionally include incremental speech recognition, language understanding, and dialogue policies. We present our framework for training and evaluating the agent's dialogue policies. In a user study involving 125 human participants, we evaluate three incremental architectures against each other and also compare their performance to human-human gameplay. Our study reveals that the most fully incremental agent achieves game scores that are comparable to those achieved in human-human gameplay, are higher than those achieved by partially and non-incremental versions, and are accompanied by improved user perceptions of efficiency, understanding of speech, and naturalness of interaction.},
	booktitle = {Proceedings of {SIGDIAL} 2015},
	author = {Paetzel, Maike and Manuvinakurike, Ramesh and DeVault, David},
	month = sep,
	year = {2015},
	keywords = {Virtual Humans, UARC},
	pages = {77--86}
}