Public Speaking Training with a Multimodal Interactive Virtual Audience Framework

November 9, 2015 | Seattle, WA

Speaker: Stefan Scherer
Host: 17th ACM International Conference on Multimodal Interaction

We have developed an interactive virtual audience platform for public speaking training. Users’ public speaking behavior is automatically analyzed using multimodal sensors, and multimodal feedback is produced by virtual characters and generic visual widgets depending on the user’s behavior. The flexibility of our system allows us to compare different interaction media (e.g., virtual reality vs. normal interaction), social situations (e.g., one-on-one meetings vs. large audiences), and trained behaviors (e.g., general public speaking performance vs. specific behaviors).
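The abstract does not specify how sensed behavior is mapped to feedback, but the core loop can be sketched as follows. This is a minimal, hypothetical illustration: the feature names (`loudness`, `gaze_at_audience`, `pause_ratio`) and the feedback channels are assumptions for the sake of the example, not the system's actual API.

```python
from dataclasses import dataclass


@dataclass
class SpeakerState:
    """Multimodal features sensed from the speaker (hypothetical names)."""
    loudness: float          # normalized voice energy, 0..1
    gaze_at_audience: float  # fraction of time looking at the audience, 0..1
    pause_ratio: float       # fraction of time spent silent, 0..1


def audience_feedback(state: SpeakerState) -> dict:
    """Map sensed behavior to two feedback channels: virtual-character
    reactions and a generic visual widget (illustrative thresholds)."""
    feedback = {}
    # Virtual characters lean forward (engaged) when eye contact is good,
    # backward (disengaged) otherwise.
    feedback["character_posture"] = (
        "forward" if state.gaze_at_audience > 0.6 else "backward"
    )
    # A simple color widget warns when the speaker is too quiet.
    feedback["volume_widget"] = "green" if state.loudness > 0.4 else "red"
    # An aggregate score could drive how often characters nod.
    score = (state.loudness + state.gaze_at_audience
             + (1.0 - state.pause_ratio)) / 3.0
    feedback["nod_rate_hz"] = round(0.5 + score, 2)
    return feedback
```

In a real-time setting, a function like this would run on each sensing update so that the virtual audience reacts continuously to the speaker.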