Project Leaders: Albert "Skip" Rizzo and Louis-Philippe Morency
Pioneering efforts at the University of Southern California Institute for Creative Technologies (ICT) within DARPA's Detection and Computational Analysis of Psychological Signals (DCAPS) project encompass advances in the artificial intelligence fields of machine learning, natural language processing and computer vision. These technologies identify indicators of psychological distress, such as depression, anxiety and PTSD, and are being integrated into ICT's virtual human applications to provide healthcare support.
This effort seeks to enable a new generation of clinical decision support tools and interactive virtual agent-based healthcare dissemination/delivery systems that can recognize and identify psychological distress from multimodal signals. These tools aim to give military personnel and their families better awareness of and access to care while reducing the stigma of seeking help. For example, the system's early identification of a patient's high or low distress state could generate the appropriate information to help a clinician diagnose a potential stress disorder. User-state sensing can also be used to create long-term patient profiles that help assess change over time.
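To make the long-term profile idea concrete, here is a minimal sketch of what tracking per-session distress scores over time might look like. Everything here, the class name, the 0-to-1 score scale, and the windowed trend measure, is a hypothetical illustration, not part of the actual DCAPS design.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class PatientProfile:
    """Hypothetical longitudinal record: one distress score (0..1) per session."""
    scores: list[float] = field(default_factory=list)

    def record_session(self, score: float) -> None:
        self.scores.append(score)

    def trend(self, window: int = 3) -> float:
        """Change between the mean of the last `window` sessions and the earlier baseline."""
        if len(self.scores) <= window:
            return 0.0
        recent = mean(self.scores[-window:])
        baseline = mean(self.scores[:-window])
        return recent - baseline

profile = PatientProfile()
for s in [0.2, 0.25, 0.3, 0.6, 0.65, 0.7]:
    profile.record_session(s)
print(round(profile.trend(), 2))  # recent sessions average 0.4 higher than baseline
```

A rising trend like this is the kind of change-over-time signal a clinician could review alongside other evidence; the profile itself makes no diagnosis.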
ICT is expanding its expertise in automatic human behavior analysis to identify indicators of psychological distress. Two technological systems are central to the effort. Multisense automatically tracks and analyzes, in real time, facial expressions, body posture, acoustic features, linguistic patterns and higher-level behavior descriptors (e.g. attention and fidgeting). From these signals and behaviors, Multisense infers indicators of psychological distress that directly inform SimSensei, a virtual human platform that senses the real-time audio-visual signals captured by Multisense. SimSensei is designed specifically for healthcare support and builds on more than ten years of virtual human research and development at ICT. The platform enables an engaging face-to-face interaction in which the virtual human automatically reacts, through its own speech and gestures, to the perceived user state and intent. DCAPS is not aimed at providing an exact diagnosis, but at providing a general metric of psychological health.
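The Multisense-to-SimSensei pipeline described above can be sketched as a late-fusion step: per-modality indicator scores are combined into one distress metric, which the virtual human uses to choose a reaction. The modality names, weights, and threshold below are assumptions for illustration only, not the actual DCAPS fusion scheme.

```python
# Illustrative weights for fusing per-modality distress indicators (assumed, not from DCAPS).
WEIGHTS = {"face": 0.3, "voice": 0.3, "language": 0.25, "posture": 0.15}

def fuse(indicators: dict[str, float]) -> float:
    """Weighted average of modality scores in [0, 1]; missing modalities renormalize."""
    present = {m: w for m, w in WEIGHTS.items() if m in indicators}
    total = sum(present.values())
    return sum(indicators[m] * w for m, w in present.items()) / total

def agent_reaction(distress: float) -> str:
    # A virtual human might choose a supportive follow-up when inferred distress is high.
    return "empathic_followup" if distress > 0.5 else "neutral_continue"

# One moment of interaction; posture tracking is unavailable this frame.
frame = {"face": 0.8, "voice": 0.7, "language": 0.6}
score = fuse(frame)
print(round(score, 3), agent_reaction(score))
```

Consistent with the project's stated aim, the fused score is a general metric guiding the agent's behavior, not a diagnosis.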