Teaching Language to Deaf Infants with a Robot and a Virtual Human (bibtex)
by Brian Scassellati, Ari Shapiro, David Traum, Laura-Ann Petitto, Jake Brawer, Katherine Tsui, Setareh Nasihati Gilani, Melissa Malzkuhn, Barbara Manini, Adam Stone, Geo Kartheiser, Arcangelo Merla
Abstract:
Children with insufficient exposure to language during critical developmental periods in infancy are at risk for cognitive, language, and social deficits [55]. This is especially difficult for deaf infants, as more than 90% are born to hearing parents with little sign language experience [48]. We created an integrated multi-agent system involving a robot and virtual human designed to augment language exposure for 6- to 12-month-old infants. Human-machine design for infants is challenging, as most screen-based media are unlikely to support learning [33]. While robots presently lack the dexterity and expressiveness required for signing, and even if such capability existed, developmental questions remain about whether language from artificial agents can engage infants. Here we engineered the robot and avatar to provide visual language that effects socially contingent, human-like conversational exchange. We demonstrate the successful engagement of our technology through case studies of deaf and hearing infants.
Reference:
Teaching Language to Deaf Infants with a Robot and a Virtual Human (Brian Scassellati, Ari Shapiro, David Traum, Laura-Ann Petitto, Jake Brawer, Katherine Tsui, Setareh Nasihati Gilani, Melissa Malzkuhn, Barbara Manini, Adam Stone, Geo Kartheiser, Arcangelo Merla), In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, ACM Press, 2018.
Bibtex Entry:
@inproceedings{scassellati_teaching_2018,
	address = {Montreal, Canada},
	title = {Teaching {Language} to {Deaf} {Infants} with a {Robot} and a {Virtual} {Human}},
	isbn = {978-1-4503-5620-6},
	url = {http://dl.acm.org/citation.cfm?doid=3173574.3174127},
	doi = {10.1145/3173574.3174127},
	abstract = {Children with insufficient exposure to language during critical developmental periods in infancy are at risk for cognitive, language, and social deficits [55]. This is especially difficult for deaf infants, as more than 90\% are born to hearing parents with little sign language experience [48]. We created an integrated multi-agent system involving a robot and virtual human designed to augment language exposure for 6- to 12-month-old infants. Human-machine design for infants is challenging, as most screen-based media are unlikely to support learning [33]. While robots presently lack the dexterity and expressiveness required for signing, and even if such capability existed, developmental questions remain about whether language from artificial agents can engage infants. Here we engineered the robot and avatar to provide visual language that effects socially contingent, human-like conversational exchange. We demonstrate the successful engagement of our technology through case studies of deaf and hearing infants.},
	booktitle = {Proceedings of the 2018 {CHI} {Conference} on {Human} {Factors} in {Computing} {Systems}},
	publisher = {ACM Press},
	author = {Scassellati, Brian and Shapiro, Ari and Traum, David and Petitto, Laura-Ann and Brawer, Jake and Tsui, Katherine and Nasihati Gilani, Setareh and Malzkuhn, Melissa and Manini, Barbara and Stone, Adam and Kartheiser, Geo and Merla, Arcangelo},
	month = apr,
	year = {2018},
	keywords = {Virtual Humans},
	pages = {1--13}
}