Fluid Semantic Back-Channel Feedback in Dialogue: Challenges & Progress
by Jonsdottir, Gudny Ragna, Gratch, Jonathan, Fast, Edward and Thórisson, Kristinn R.
Abstract:
Participation in natural, real-time dialogue calls for behaviors supported by perception-action cycles from around 100 msec and up. Generating certain kinds of such behaviors, namely envelope feedback, has been possible since the early 90s. Real-time backchannel feedback related to the content of a dialogue has been more difficult to achieve. In this paper we describe our progress in allowing virtual humans to give rapid within-utterance content-specific feedback in real-time dialogue. We present results from human-subject studies of content feedback, which show that content feedback to a particular phrase or word in human-human dialogue comes 560-2500 msec after the phrase's onset, 1 second on average. We also describe a system that produces such feedback with an autonomous agent in limited topic domains, present performance data for this agent in human-agent interaction experiments, and discuss technical challenges in light of the observed human-subject data.
Reference:
Fluid Semantic Back-Channel Feedback in Dialogue: Challenges & Progress (Jonsdottir, Gudny Ragna, Gratch, Jonathan, Fast, Edward and Thórisson, Kristinn R.), In Lecture Notes in Artificial Intelligence; Proceedings of the 7th International Conference on Intelligent Virtual Agents (IVA), 2007.
Bibtex Entry:
@inproceedings{jonsdottir_fluid_2007,
	address = {Paris, France},
	title = {Fluid {Semantic} {Back}-{Channel} {Feedback} in {Dialogue}: {Challenges} \& {Progress}},
	url = {http://ict.usc.edu/pubs/Fluid%20Semantic%20Back-Channel%20Feedback%20in%20Dialogue-%20Challenges%20&%20Progress.pdf},
	abstract = {Participation in natural, real-time dialogue calls for behaviors supported by perception-action cycles from around 100 msec and up. Generating certain kinds of such behaviors, namely envelope feedback, has been possible since the early 90s. Real-time backchannel feedback related to the content of a dialogue has been more difficult to achieve. In this paper we describe our progress in allowing virtual humans to give rapid within-utterance content-specific feedback in real-time dialogue. We present results from human-subject studies of content feedback, which show that content feedback to a particular phrase or word in human-human dialogue comes 560-2500 msec after the phrase's onset, 1 second on average. We also describe a system that produces such feedback with an autonomous agent in limited topic domains, present performance data for this agent in human-agent interaction experiments, and discuss technical challenges in light of the observed human-subject data.},
	booktitle = {Lecture {Notes} in {Artificial} {Intelligence}; {Proceedings} of the 7th {International} {Conference} on {Intelligent} {Virtual} {Agents} ({IVA})},
	author = {Jonsdottir, Gudny Ragna and Gratch, Jonathan and Fast, Edward and Thórisson, Kristinn R.},
	month = sep,
	year = {2007},
	keywords = {Virtual Humans}
}