Clinical interviewing by a virtual human agent with automatic behavior analysis
by AA Rizzo, G Lucas, J Gratch, G Stratou, L-P Morency, R Shilling, A Hartholt, S Scherer
Abstract:
SimSensei is a Virtual Human (VH) interviewing platform that uses off-the-shelf sensors (i.e., a webcam, Microsoft Kinect, and a microphone) to capture and interpret real-time audiovisual behavioral signals from users interacting with the VH system. The system was specifically designed for clinical interviewing and health care support by providing a face-to-face interaction between a user and a VH that can automatically react to the inferred state of the user through analysis of behavioral signals gleaned from the user’s facial expressions, body gestures, and vocal parameters. Akin to how non-verbal behavioral signals affect human-to-human interaction and communication, SimSensei aims to capture and infer user state from non-verbal communication signals, both to improve engagement between the VH and the user and to quantify user state from the data captured across a 20-minute interview. Moreover, previous research with SimSensei indicates that users engaging with this automated system have less fear of evaluation and self-disclose more personal information compared to when they believe the VH agent is an avatar operated by a “Wizard of Oz” human-in-the-loop (Lucas et al., 2014). The current study presents results from a sample of military service members (SMs) who were interviewed within the SimSensei system before and after a deployment to Afghanistan. Results indicate that SMs reveal more PTSD symptoms to the SimSensei VH agent than they self-report on the Post-Deployment Health Assessment. Pre/post-deployment facial expression analysis indicated more sad expressions and fewer happy expressions at post-deployment.
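To give a concrete picture of what capturing a real-time behavioral signal from an off-the-shelf webcam can look like, the Python sketch below uses OpenCV's standard capture and face-detection APIs. It is not the SimSensei implementation: the engagement_signal function is an invented placeholder standing in for the much richer facial-expression, gesture, and vocal analysis the abstract describes, and the frame budget is arbitrary.

    import cv2  # off-the-shelf webcam capture, mirroring the paper's sensor setup

    # Hypothetical placeholder: the real system fuses facial, gestural, and
    # vocal features; here we only record whether a face is visible per frame.
    def engagement_signal(num_faces: int) -> float:
        return 1.0 if num_faces > 0 else 0.0

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)  # default webcam

    signals = []
    while len(signals) < 300:  # ~10 s at 30 fps; a real interview runs ~20 min
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        signals.append(engagement_signal(len(faces)))

    cap.release()
    print("mean engagement proxy:", sum(signals) / max(len(signals), 1))

In a full pipeline of the kind described, per-frame signals like this would be aggregated over the interview and used both to drive the VH agent's reactions and to quantify user state afterward.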
Reference:
Clinical interviewing by a virtual human agent with automatic behavior analysis (AA Rizzo, G Lucas, J Gratch, G Stratou, L-P Morency, R Shilling, A Hartholt, S Scherer), In Proceedings of the 2016 International Conference on Disability, Virtual Reality and Associated Technologies, ICDVRAT and the University of Reading, 2016.
Bibtex Entry:
@inproceedings{rizzo_clinical_2016,
	address = {Los Angeles, CA},
	title = {Clinical interviewing by a virtual human agent with automatic behavior analysis},
	isbn = {978-0-7049-1547-3},
	url = {http://centaur.reading.ac.uk/66645/8/ICDVRAT2016_Full_Proceedings_11th%20_Conf.pdf},
	abstract = {SimSensei is a Virtual Human (VH) interviewing platform that uses off-the-shelf sensors (i.e., a webcam, Microsoft Kinect, and a microphone) to capture and interpret real-time audiovisual behavioral signals from users interacting with the VH system. The system was specifically designed for clinical interviewing and health care support by providing a face-to-face interaction between a user and a VH that can automatically react to the inferred state of the user through analysis of behavioral signals gleaned from the user’s facial expressions, body gestures, and vocal parameters. Akin to how non-verbal behavioral signals affect human-to-human interaction and communication, SimSensei aims to capture and infer user state from non-verbal communication signals, both to improve engagement between the VH and the user and to quantify user state from the data captured across a 20-minute interview. Moreover, previous research with SimSensei indicates that users engaging with this automated system have less fear of evaluation and self-disclose more personal information compared to when they believe the VH agent is an avatar operated by a “Wizard of Oz” human-in-the-loop (Lucas et al., 2014). The current study presents results from a sample of military service members (SMs) who were interviewed within the SimSensei system before and after a deployment to Afghanistan. Results indicate that SMs reveal more PTSD symptoms to the SimSensei VH agent than they self-report on the Post-Deployment Health Assessment. Pre/post-deployment facial expression analysis indicated more sad expressions and fewer happy expressions at post-deployment.},
	booktitle = {Proceedings of the 2016 {International} {Conference} on {Disability}, {Virtual} {Reality} and {Associated} {Technologies}},
	publisher = {ICDVRAT and the University of Reading},
	author = {Rizzo, AA and Lucas, G and Gratch, J and Stratou, G and Morency, L-P and Shilling, R and Hartholt, A and Scherer, S},
	month = sep,
	year = {2016},
	keywords = {MedVR, UARC, Virtual Humans},
	pages = {57--64}
}