Detection and computational analysis of psychological signals using a virtual human interviewing agent

September 2, 2014 | Gothenburg, Sweden

Speaker: Skip Rizzo
Host: 10th International Conference on Disability, Virtual Reality and Associated Technologies

It has long been recognized that facial expressions, body gestures, and vocal features/prosody play an important role in human communication signaling. Recent advances in low-cost computer vision and sensing technologies can now be applied to sensing such behavioral signals and, from them, making meaningful inferences about user state when a person interacts with a computational device. Effective use of this additional information could enhance human interaction with virtual human (VH) agents and improve engagement in telehealth/teletherapy approaches between remote patients and care providers.

This paper will focus on our current research in these areas within the DARPA-funded “Detection and Computational Analysis of Psychological Signals” project, with specific attention to our SimSensei application use case. SimSensei is a virtual human platform able to sense real-time audio-visual signals from users interacting with the system. It is specifically designed for health care support and builds on years of virtual human research and development expertise at ICT. The platform enables an engaging face-to-face interaction in which the virtual human automatically reacts to the estimated state and intent of the user through vocal parameters and gestures. Just as non-verbal behavioral signals shape human-to-human interaction and communication, SimSensei aims to capture and infer from a user’s non-verbal communication to improve engagement between the VH agent and the user. The system can also quantify sensed signals over time, which could inform diagnostic assessment within a clinical context.
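The sense → infer → react loop described in the abstract can be sketched minimally as follows. All names (`FrameFeatures`, `estimate_state`, `choose_behavior`), the chosen signals, and the weighting scheme are illustrative assumptions for exposition, not the actual SimSensei implementation or API:

```python
# Hypothetical sketch of a sense -> infer -> react loop: per-frame behavioral
# signals are fused into a user-state estimate, which drives the agent's
# verbal and gestural response. Weights and thresholds are arbitrary.
from dataclasses import dataclass

@dataclass
class FrameFeatures:
    """Per-frame signals a vision/audio front end might emit (assumed)."""
    smile_intensity: float   # 0.0-1.0, from facial expression analysis
    gaze_on_agent: bool      # whether the user is looking at the VH
    voice_energy: float      # normalized vocal energy (prosody proxy)

def estimate_state(frames):
    """Fuse recent frames into a single engagement estimate in [0, 1]."""
    if not frames:
        return 0.0
    score = 0.0
    for f in frames:
        score += 0.4 * f.smile_intensity
        score += 0.4 * (1.0 if f.gaze_on_agent else 0.0)
        score += 0.2 * min(f.voice_energy, 1.0)
    return score / len(frames)

def choose_behavior(engagement):
    """Map the estimated user state to a (verbal, gestural) agent response."""
    if engagement < 0.3:
        return ("empathic follow-up question", "lean forward, nod")
    if engagement < 0.7:
        return ("continue current topic", "neutral posture")
    return ("deepen current topic", "smile, open gesture")

# Two frames of simulated sensor output: low engagement, then moderate.
frames = [
    FrameFeatures(smile_intensity=0.1, gaze_on_agent=False, voice_energy=0.2),
    FrameFeatures(smile_intensity=0.2, gaze_on_agent=True, voice_energy=0.3),
]
engagement = estimate_state(frames)
speech, gesture = choose_behavior(engagement)
```

A real system would replace the hand-set weights with a trained model and consume continuous audio-visual feature streams, but the structure (feature extraction, state estimation, behavior selection) matches the pipeline the abstract describes.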