CLOVR

2017 - 2019
Project Leader: David Traum

The CLOVR responsive system, delivered conveniently to users at home, at school, or anywhere it is needed, enables meaningful, personal interactions with virtual humans in a virtual reality environment using a VR headset. Like an empathetic listener, CLOVR analyzes the user's emotional state and responds adaptively, while also providing conversational feedback loops and non-verbal behavior.

Phase I of the project focused on creating an emotional model, the User Perceived State (UPS), based on lexical analysis of direct user input. The UPS, in combination with a new dialogue management and natural language processing editor, drives the system's responses. Responses include not only what the virtual agent says, but also its gestures, posture, and tone. Reactive responses can also include changes to the environment or multimedia selections.
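
To make that flow concrete, here is a minimal, purely illustrative sketch of how a lexicon-based UPS estimate could drive the selection of verbal, non-verbal, and environmental responses. The lexicon entries, emotion labels, and behavior mappings below are invented for illustration and are not CLOVR's actual data or code.

from collections import Counter

# Illustrative sketch only: a lexicon-based User Perceived State (UPS)
# estimate driving multimodal response selection. The lexicon, emotion
# labels, and behavior mappings are hypothetical placeholders.

# Hypothetical affect lexicon: word -> emotion label
AFFECT_LEXICON = {
    "worried": "anxious", "scared": "anxious", "nervous": "anxious",
    "sad": "sad", "alone": "sad", "lonely": "sad",
    "great": "positive", "happy": "positive", "excited": "positive",
}

# Hypothetical mapping from perceived state to the agent's delivery and environment
RESPONSE_BEHAVIORS = {
    "anxious": {"gesture": "open_palms", "posture": "lean_in", "tone": "calm",
                "environment": "dim_lighting"},
    "sad": {"gesture": "slow_nod", "posture": "mirror_user", "tone": "soft",
            "environment": "warm_lighting"},
    "positive": {"gesture": "smile", "posture": "upright", "tone": "bright",
                 "environment": "default"},
    "neutral": {"gesture": "idle", "posture": "relaxed", "tone": "neutral",
                "environment": "default"},
}

def perceive_state(utterance: str) -> str:
    """Estimate the user's emotional state from direct lexical input."""
    hits = Counter(AFFECT_LEXICON[w] for w in utterance.lower().split()
                   if w in AFFECT_LEXICON)
    return hits.most_common(1)[0][0] if hits else "neutral"

def select_response(utterance: str) -> dict:
    """Pair the UPS estimate with gesture, posture, tone, and environment choices."""
    state = perceive_state(utterance)
    return {"perceived_state": state, **RESPONSE_BEHAVIORS[state]}

print(select_response("I feel scared and nervous about school"))
# -> {'perceived_state': 'anxious', 'gesture': 'open_palms', ...}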

Phase II is slated to expand the UPS model by integrating indirect input, such as the user's facial expressions, vocal tone, pulse rate, and posture, to create a more holistic analysis of the user's emotional state.

As in human-to-human communication, CLOVR will take these implicit, non-verbal cues into account when determining its responses.
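
The sketch below shows one simple way such indirect signals could be fused with the lexical UPS estimate, using a weighted vote per channel. The signal names, weights, and fusion scheme are assumptions made for illustration, not CLOVR's actual Phase II design.

from dataclasses import dataclass

# Illustrative sketch only: folding indirect, non-verbal signals into the
# UPS estimate via a weighted per-channel vote. Signal names, weights, and
# the fusion scheme are assumptions.

@dataclass
class IndirectSignals:
    facial_expression: str   # e.g. "frown", "smile", "neutral"
    vocal_tone: str          # e.g. "flat", "shaky", "bright"
    pulse_rate_bpm: float    # from a hypothetical heart-rate sensor
    posture: str             # e.g. "slumped", "upright", "tense"

# Hypothetical per-channel interpretations and reliability weights
CHANNEL_VOTES = {
    "facial_expression": ({"frown": "sad", "smile": "positive"}, 0.4),
    "vocal_tone": ({"shaky": "anxious", "flat": "sad", "bright": "positive"}, 0.3),
    "posture": ({"slumped": "sad", "tense": "anxious"}, 0.2),
}

def fuse_state(lexical_state: str, signals: IndirectSignals) -> str:
    """Combine the lexical UPS estimate with weighted votes from indirect cues."""
    scores = {lexical_state: 0.5}  # direct lexical input keeps the largest single weight
    for channel, (mapping, weight) in CHANNEL_VOTES.items():
        vote = mapping.get(getattr(signals, channel))
        if vote:
            scores[vote] = scores.get(vote, 0.0) + weight
    if signals.pulse_rate_bpm > 100:  # elevated pulse nudges toward an anxious reading
        scores["anxious"] = scores.get("anxious", 0.0) + 0.2
    return max(scores, key=scores.get)

print(fuse_state("neutral", IndirectSignals("frown", "shaky", 110.0, "slumped")))
# -> "sad": the non-verbal cues shift the estimate away from a neutral lexical reading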