Getting to Know Each Other: The Role of Social Dialogue in Recovery from Errors in Social Robots
by Gale M. Lucas, Jill Boberg, David Traum, Ron Artstein, Jonathan Gratch, Alesia Gainer, Emmanuel Johnson, Anton Leuski, Mikio Nakano
Abstract:
This work explores the extent to which social dialogue can mitigate (or exacerbate) the loss of trust caused when robots make conversational errors. Our study uses a NAO robot programmed to persuade users to agree with its rankings on two tasks. We perform two manipulations: (1) the timing of conversational errors - the robot exhibited errors either in the first task, the second task, or neither; (2) the presence of social dialogue - between the two tasks, users either engaged in a social dialogue with the robot or completed a control task. We found that the timing of the errors matters: replicating previous research, conversational errors reduce the robot's influence in the second task, but not in the first. Social dialogue interacts with the timing of errors, acting as an intensifier: social dialogue helps the robot recover from prior errors, and actually boosts subsequent influence; but social dialogue backfires if it is followed by errors, because it extends the period of good performance, creating a stronger contrast effect with the subsequent errors. Designers of social robots should therefore be more careful to avoid errors after periods of good performance than early in a dialogue.
Reference:
Getting to Know Each Other: The Role of Social Dialogue in Recovery from Errors in Social Robots (Gale M. Lucas, Jill Boberg, David Traum, Ron Artstein, Jonathan Gratch, Alesia Gainer, Emmanuel Johnson, Anton Leuski, Mikio Nakano), In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, ACM Press, 2018.
Bibtex Entry:
@inproceedings{lucas_getting_2018,
	address = {Chicago, IL},
	title = {Getting to {Know} {Each} {Other}: {The} {Role} of {Social} {Dialogue} in {Recovery} from {Errors} in {Social} {Robots}},
	isbn = {978-1-4503-4953-6},
	url = {http://dl.acm.org/citation.cfm?doid=3171221.3171258},
	doi = {10.1145/3171221.3171258},
	abstract = {This work explores the extent to which social dialogue can mitigate (or exacerbate) the loss of trust caused when robots make conversational errors. Our study uses a NAO robot programmed to persuade users to agree with its rankings on two tasks. We perform two manipulations: (1) The timing of conversational errors - the robot exhibited errors either in the first task, the second task, or neither; (2) The presence of social dialogue - between the two tasks, users either engaged in a social dialogue with the robot or completed a control task. We found that the timing of the errors matters: replicating previous research, conversational errors reduce the robot's influence in the second task, but not on the first task. Social dialogue interacts with the timing of errors, acting as an intensifier: social dialogue helps the robot recover from prior errors, and actually boosts subsequent influence; but social dialogue backfires if it is followed by errors, because it extends the period of good performance, creating a stronger contrast effect with the subsequent errors. The design of social robots should therefore be more careful to avoid errors after periods of good performance than early on in a dialogue.},
	booktitle = {Proceedings of the 2018 {ACM}/{IEEE} {International} {Conference} on {Human}-{Robot} {Interaction}},
	publisher = {ACM Press},
	author = {Lucas, Gale M. and Boberg, Jill and Traum, David and Artstein, Ron and Gratch, Jonathan and Gainer, Alesia and Johnson, Emmanuel and Leuski, Anton and Nakano, Mikio},
	month = mar,
	year = {2018},
	keywords = {UARC, Virtual Humans},
	pages = {344--351}
}