Non-cooperative and Deceptive Virtual Agents (bibtex)
by David Traum
Abstract:
Virtual agents that engage in dialogue with people can be used for a variety of purposes, including as service and information providers, tutors, confederates in psychology experiments, and role players in social training exercises. It seems reasonable that agents acting as service and information providers, and arguably as tutors, would be truthful and cooperative. For other applications, however, such as roleplaying opponents, competitors, or more neutral characters in a training exercise, total honesty and cooperativeness would defeat the purpose of the exercise and fail to train people in coping with deception. The Institute for Creative Technologies at the University of Southern California has created several roleplaying characters, using different models of dialogue and uncooperative and deceptive behavior. This article briefly describes these models, as used in two different genres of dialogue agent: interviewing and negotiation. The models are presented in order from least to most sophisticated reasoning about deception. Most accounts of pragmatic reasoning in dialogue use versions of Grice’s cooperative principles and maxims to derive utterance meanings (which might be indirect in their expression). However, these maxims, such as “be truthful,” don’t cover situations in which conversationalists are deceptive or otherwise uncooperative, even though much human dialogue contains aspects of uncooperative behavior. Gricean accounts alone don’t adequately cover cases in which conversational participants aren’t cooperative—for example, why do they ever answer at all? The notion of discourse obligations differentiates the obligation to respond from the mechanism of response generation, which could be either cooperative, neutral, or deceptive.
Reference:
Non-cooperative and Deceptive Virtual Agents (David Traum), In IEEE Intelligent Systems: Trends and Controversies: Computational Deception and Noncooperation, volume 27, number 6, pp. 66–69, November 2012.
Bibtex Entry:
@article{traum_non-cooperative_2012,
	title = {Non-cooperative and {Deceptive} {Virtual} {Agents}},
	volume = {27},
	url = {http://ict.usc.edu/pubs/Non-cooperative%20and%20Deceptive%20Virtual%20Agents.pdf},
	abstract = {Virtual agents that engage in dialogue with people can be used for a variety of purposes, including as service and information providers, tutors, confederates in psychology experiments, and role players in social training exercises. It seems reasonable that agents acting as service and information providers, and arguably as tutors, would be truthful and cooperative. For other applications, however, such as roleplaying opponents, competitors, or more neutral characters in a training exercise, total honesty and cooperativeness would defeat the purpose of the exercise and fail to train people in coping with deception. The Institute for Creative Technologies at the University of Southern California has created several roleplaying characters, using different models of dialogue and uncooperative and deceptive behavior. This article briefly describes these models, as used in two different genres of dialogue agent: interviewing and negotiation. The models are presented in order from least to most sophisticated reasoning about deception. Most accounts of pragmatic reasoning in dialogue use versions of Grice’s cooperative principles and maxims to derive utterance meanings (which might be indirect in their expression). However, these maxims, such as “be truthful,” don’t cover situations in which conversationalists are deceptive or otherwise uncooperative, even though much human dialogue contains aspects of uncooperative behavior. Gricean accounts alone don’t adequately cover cases in which conversational participants aren’t cooperative—for example, why do they ever answer at all? The notion of discourse obligations differentiates the obligation to respond from the mechanism of response generation, which could be either cooperative, neutral, or deceptive.},
	number = {6},
	journal = {IEEE Intelligent Systems: Trends and Controversies: Computational Deception and Noncooperation},
	author = {Traum, David},
	month = nov,
	year = {2012},
	keywords = {Virtual Humans, UARC},
	pages = {66--69}
}