Exploring Variation of Natural Human Commands to a Robot in a Collaborative Navigation Task
by Matthew Marge, Claire Bonial, Ashley Foots, Cory Hayes, Cassidy Henry, Kimberly Pollard, Ron Artstein, Clare Voss, David Traum
Abstract:
Robot-directed communication is variable, and may change based on human perception of robot capabilities. To collect training data for a dialogue system and to investigate possible communication changes over time, we developed a Wizard-of-Oz study that (a) simulates a robot’s limited understanding, and (b) collects dialogues where human participants build a progressively better mental model of the robot’s understanding. With ten participants, we collected ten hours of human-robot dialogue. We analyzed the structure of instructions that participants gave to a remote robot before it responded. Our findings show a general initial preference for including metric information (e.g., move forward 3 feet) over landmarks (e.g., move to the desk) in motion commands, but this decreased over time, suggesting changes in perception.
Reference:
Exploring Variation of Natural Human Commands to a Robot in a Collaborative Navigation Task (Matthew Marge, Claire Bonial, Ashley Foots, Cory Hayes, Cassidy Henry, Kimberly Pollard, Ron Artstein, Clare Voss, David Traum), In Proceedings of the First Workshop on Language Grounding for Robotics, Association for Computational Linguistics, 2017.
BibTeX Entry:
@inproceedings{marge_exploring_2017,
	address = {Vancouver, Canada},
	title = {Exploring {Variation} of {Natural} {Human} {Commands} to a {Robot} in a {Collaborative} {Navigation} {Task}},
	url = {http://www.aclweb.org/anthology/W17-2808},
	abstract = {Robot-directed communication is variable, and may change based on human perception of robot capabilities. To collect training data for a dialogue system and to investigate possible communication changes over time, we developed a Wizard-of-Oz study that (a) simulates a robot’s limited understanding, and (b) collects dialogues where human participants build a progressively better mental model of the robot’s understanding. With ten participants, we collected ten hours of human-robot dialogue. We analyzed the structure of instructions that participants gave to a remote robot before it responded. Our findings show a general initial preference for including metric information (e.g., move forward 3 feet) over landmarks (e.g., move to the desk) in motion commands, but this decreased over time, suggesting changes in perception.},
	booktitle = {Proceedings of the {First} {Workshop} on {Language} {Grounding} for {Robotics}},
	publisher = {Association for Computational Linguistics},
	author = {Marge, Matthew and Bonial, Claire and Foots, Ashley and Hayes, Cory and Henry, Cassidy and Pollard, Kimberly and Artstein, Ron and Voss, Clare and Traum, David},
	month = aug,
	year = {2017},
	keywords = {ARL, UARC, Virtual Humans},
	pages = {58--66}
}