Multimodal Dialogue Management for Multiparty Interaction with Infants
by Setareh Nasihati Gilani, David Traum, Arcangelo Merla, Eugenia Hee, Zoey Walker, Barbara Manini, Grady Gallagher, Laura-Ann Petitto
Abstract:
We present dialogue management routines for a system to engage in multiparty agent-infant interaction. The ultimate purpose of this research is to help infants learn a visual sign language by engaging them in naturalistic and socially contingent conversations during an early-life critical period for language development (ages 6 to 12 months) as initiated by an artificial agent. As a first step, we focus on creating and maintaining agent-infant engagement that elicits appropriate and socially contingent responses from the baby. Our system includes two agents, a physical robot and an animated virtual human. The system's multimodal perception includes an eye-tracker (measures attention) and a thermal infrared imaging camera (measures patterns of emotional arousal). A dialogue policy is presented that selects individual actions and planned multiparty sequences based on perceptual inputs about the baby's internal changing states of emotional engagement. The present version of the system was evaluated in interaction with 8 babies. All babies demonstrated spontaneous and sustained engagement with the agents for several minutes, with patterns of conversationally relevant and socially contingent behaviors. We further performed a detailed case-study analysis with annotation of all agent and baby behaviors. Results show that the baby's behaviors were generally relevant to agent conversations and contained direct evidence for socially contingent responses by the baby to specific linguistic samples produced by the avatar. This work demonstrates the potential for language learning from agents in very young babies and has especially broad implications regarding the use of artificial agents with babies who have minimal language exposure in early life.
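To make the abstract's core idea concrete, the sketch below illustrates, in broad strokes, how an engagement-driven dialogue policy could map the two perceptual inputs mentioned above (gaze attention from an eye-tracker, emotional arousal from thermal infrared imaging) to agent actions. This is a minimal illustrative sketch: the state names, threshold, and action set are assumptions for exposition and do not reproduce the policy described in the paper.

```python
# Hypothetical sketch of an engagement-driven dialogue policy.
# Illustrative only: state names, threshold, and actions are assumptions,
# not the policy implemented in the paper.
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    ROBOT_ATTENTION_GETTER = auto()   # physical robot gestures to recapture the infant's gaze
    AVATAR_SIGN_SAMPLE = auto()       # virtual human produces a sign-language sample
    AVATAR_SOCIAL_RESPONSE = auto()   # contingent social feedback from the avatar
    WAIT = auto()                     # hold the floor and keep observing


@dataclass
class InfantState:
    gaze_on_avatar: bool   # from the eye-tracker (attention)
    arousal: float         # from thermal IR imaging, normalized 0..1 (emotional arousal)


def select_action(state: InfantState, arousal_floor: float = 0.3) -> Action:
    """Pick the next multiparty move from the infant's current engagement state."""
    if not state.gaze_on_avatar:
        # Attention drifted: use the robot to redirect gaze toward the avatar.
        return Action.ROBOT_ATTENTION_GETTER
    if state.arousal < arousal_floor:
        # Attending but under-aroused: offer a linguistically rich sample.
        return Action.AVATAR_SIGN_SAMPLE
    # Attending and engaged: respond contingently to sustain the exchange.
    return Action.AVATAR_SOCIAL_RESPONSE


if __name__ == "__main__":
    print(select_action(InfantState(gaze_on_avatar=False, arousal=0.6)))
    print(select_action(InfantState(gaze_on_avatar=True, arousal=0.2)))
```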
Reference:
Multimodal Dialogue Management for Multiparty Interaction with Infants (Setareh Nasihati Gilani, David Traum, Arcangelo Merla, Eugenia Hee, Zoey Walker, Barbara Manini, Grady Gallagher, Laura-Ann Petitto), In Proceedings of the 2018 International Conference on Multimodal Interaction (ICMI '18), ACM Press, 2018.
BibTeX Entry:
@inproceedings{nasihati_gilani_multimodal_2018,
	address = {Boulder, CO, USA},
	title = {Multimodal {Dialogue} {Management} for {Multiparty} {Interaction} with {Infants}},
	isbn = {978-1-4503-5692-3},
	url = {http://dl.acm.org/citation.cfm?doid=3242969.3243029},
	doi = {10.1145/3242969.3243029},
	abstract = {We present dialogue management routines for a system to engage in multiparty agent-infant interaction. The ultimate purpose of this research is to help infants learn a visual sign language by engaging them in naturalistic and socially contingent conversations during an early-life critical period for language development (ages 6 to 12 months) as initiated by an artificial agent. As a first step, we focus on creating and maintaining agent-infant engagement that elicits appropriate and socially contingent responses from the baby. Our system includes two agents, a physical robot and an animated virtual human. The system's multimodal perception includes an eye-tracker (measures attention) and a thermal infrared imaging camera (measures patterns of emotional arousal). A dialogue policy is presented that selects individual actions and planned multiparty sequences based on perceptual inputs about the baby's internal changing states of emotional engagement. The present version of the system was evaluated in interaction with 8 babies. All babies demonstrated spontaneous and sustained engagement with the agents for several minutes, with patterns of conversationally relevant and socially contingent behaviors. We further performed a detailed case-study analysis with annotation of all agent and baby behaviors. Results show that the baby's behaviors were generally relevant to agent conversations and contained direct evidence for socially contingent responses by the baby to specific linguistic samples produced by the avatar. This work demonstrates the potential for language learning from agents in very young babies and has especially broad implications regarding the use of artificial agents with babies who have minimal language exposure in early life.},
	booktitle = {Proceedings of the 2018 {International} {Conference} on {Multimodal} {Interaction} - {ICMI} '18},
	publisher = {ACM Press},
	author = {Nasihati Gilani, Setareh and Traum, David and Merla, Arcangelo and Hee, Eugenia and Walker, Zoey and Manini, Barbara and Gallagher, Grady and Petitto, Laura-Ann},
	month = oct,
	year = {2018},
	keywords = {Virtual Humans},
	pages = {5--13}
}