Virtual Human Toolkit

2009-present
Project Leader: Arno Hartholt

Download a PDF overview.

Learn more on the Virtual Human Toolkit website.

Goal
ICT has created a Virtual Human Toolkit with the goal of reducing some of the complexity inherent in creating virtual humans. Our toolkit is an ever-growing collection of innovative technologies, fueled by basic research performed at ICT and its partners.

The toolkit provides a solid technical foundation whose modularity makes it relatively easy to mix and match toolkit technology with a research project’s proprietary or third-party software. Through this toolkit, ICT hopes to provide the virtual humans research community with a widely accepted platform on which new technologies can be built.

What Is It
The ICT Virtual Human Toolkit is a collection of modules, tools and libraries that supports the creation of virtual human conversational characters. At the core of the toolkit lie innovative, research-driven technologies, which are combined with other software components to provide a complete embodied conversational agent. Since all ICT virtual human software is built on top of a common framework, as part of a modular architecture, researchers using the toolkit can do any of the following (a minimal sketch of this modularity follows the list):

  • utilize all components or a subset thereof;
  • utilize certain components while replacing others with non-toolkit components;
  • utilize certain components in other existing systems.
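
What makes this mix-and-match possible is that the modules coordinate by exchanging messages over a shared bus, so replacing a component amounts to subscribing to and publishing the same messages as the module it replaces. The Python sketch below illustrates only that publish/subscribe pattern: MessageBus and its methods are our illustrative names, and the vrSpeech/vrSpeak topics are loosely modeled on the toolkit’s messaging conventions, not its actual API.

    # Minimal sketch of the publish/subscribe pattern behind the toolkit's
    # modular architecture. All names here are illustrative, not toolkit API.
    from collections import defaultdict
    from typing import Callable, Dict, List

    class MessageBus:
        """Toy in-process stand-in for the toolkit's messaging middleware."""

        def __init__(self) -> None:
            self._subscribers: Dict[str, List[Callable[[str], None]]] = defaultdict(list)

        def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
            self._subscribers[topic].append(handler)

        def publish(self, topic: str, payload: str) -> None:
            for handler in self._subscribers[topic]:
                handler(payload)

    bus = MessageBus()

    # A custom dialogue manager can replace the stock one simply by
    # listening to, and publishing, the same messages.
    def my_dialogue_manager(utterance: str) -> None:
        bus.publish("vrSpeak", "<bml><speech>You said: " + utterance + "</speech></bml>")

    bus.subscribe("vrSpeech", my_dialogue_manager)
    bus.subscribe("vrSpeak", lambda bml: print("animation module received:", bml))

    bus.publish("vrSpeech", "hello there")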

The technology emphasizes natural language interaction, nonverbal behavior and visual recognition. The main modules are:

  • Non-Player Character Editor (NPCEditor), a package for creating dialogue responses to inputs for one or more characters. It contains a text classifier based on cross-language relevance models that selects a character’s response to the user’s text input, an authoring interface for entering and relating questions and answers, and a simple dialogue manager that controls aspects of output behavior (a simplified answer-selection sketch follows the list).
  • Nonverbal Behavior Generator (NVBG), a rule-based behavior planner that infers communicative functions from the surface text and selects behaviors to augment and complement the expression of those functions (a toy rule table is sketched below).
  • SmartBody, a character animation platform that provides locomotion, steering, object manipulation, lip syncing, gazing and nonverbal behavior in real time, driven by the Behavior Markup Language (BML); an illustrative BML block follows the list.
  • Watson, a real-time visual feedback recognition library for interactive interfaces that recognizes head gaze, head gestures, eye gaze and eye gestures from the images of a monocular or stereo camera.
  • Speech Client (AcquireSpeech), a tool that sends audio to one or more speech recognizers and relays the resulting text to the rest of the system. It also allows text to be typed into the system, simulating speech input (the relay pattern is sketched below). The toolkit uses PocketSphinx as a third-party speech recognition solution.
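
To make NPCEditor’s answer selection concrete, here is a deliberately simplified Python sketch. It replaces the cross-language relevance models with plain token overlap, and the question/answer content and function names are ours, so treat it as the ranking idea only, not the actual classifier.

    # Toy stand-in for NPCEditor's answer selection: rank authored answers
    # by how well their question matches the user's input. The real
    # classifier uses cross-language relevance models; token overlap is
    # used here only to make the ranking idea concrete.
    def tokens(text: str) -> set:
        return set(text.lower().split())

    # Authored question/answer pairs (content is illustrative).
    qa_pairs = [
        ("what is your name", "My name is Brad."),
        ("where are you from", "I'm from Los Angeles."),
        ("what do you do", "I answer questions about the toolkit."),
    ]

    def select_answer(user_question: str) -> str:
        q = tokens(user_question)
        best_score, best_answer = 0, "I'm not sure I understood that."
        for question, answer in qa_pairs:
            score = len(q & tokens(question))
            if score > best_score:
                best_score, best_answer = score, answer
        return best_answer

    print(select_answer("tell me your name"))  # -> My name is Brad.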
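
The toy rule table below suggests how NVBG-style text-to-behavior rules can work: lexical cues stand in for communicative functions and map to behavior suggestions. The patterns and behavior names are illustrative, not NVBG’s actual rule set.

    # Toy version of NVBG's idea: rules map surface-text cues to
    # communicative functions and on to behaviors. Patterns and behavior
    # names are illustrative, not NVBG's actual rule set.
    import re

    RULES = [
        (re.compile(r"\b(i|me|my)\b", re.IGNORECASE), "self_deictic_gesture"),
        (re.compile(r"\b(you|your)\b", re.IGNORECASE), "addressee_deictic_gesture"),
        (re.compile(r"\b(no|not|never)\b", re.IGNORECASE), "head_shake"),
        (re.compile(r"\b(yes|sure|okay)\b", re.IGNORECASE), "head_nod"),
    ]

    def plan_behaviors(utterance: str) -> list:
        """Return behavior suggestions whose lexical cues appear in the text."""
        return [behavior for pattern, behavior in RULES if pattern.search(utterance)]

    print(plan_behaviors("No, I never said that to you."))
    # -> ['self_deictic_gesture', 'addressee_deictic_gesture', 'head_shake']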
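
BML itself is an XML language for scheduling speech, gesture and gaze together. The block below, held in a Python string as a message might be before being sent to the character, follows the general shape of BML with its sync-point references, but the elements and attributes are simplified; the real schema should be taken from the BML specification.

    # Illustrative BML-style block of the kind SmartBody consumes.
    # Element attributes are simplified; consult the BML specification
    # for the actual schema.
    bml_message = """
    <bml>
      <speech id="s1">Hello, nice to meet you.</speech>
      <gesture id="g1" type="beat" start="s1:start"/>
      <gaze id="z1" target="user" start="s1:start"/>
      <head id="h1" type="nod" start="s1:end"/>
    </bml>
    """
    print(bml_message)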
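
Finally, the Speech Client’s relay pattern is easy to see in a sketch. The Python below is hypothetical (Recognizer, SpeechRelay and forward are our names, not toolkit API): audio fans out to every configured recognizer, while typed text is forwarded unchanged along the same path, which is what makes simulated speech input possible.

    # Hypothetical sketch of the AcquireSpeech relay pattern: fan audio out
    # to one or more recognizers, forward whatever text comes back, and let
    # typed input take the same path so speech can be simulated.
    from typing import Callable, List

    Recognizer = Callable[[bytes], str]  # audio in, text hypothesis out

    class SpeechRelay:
        def __init__(self, recognizers: List[Recognizer],
                     forward: Callable[[str], None]) -> None:
            self.recognizers = recognizers
            self.forward = forward  # e.g. publish onto the message bus

        def on_audio(self, audio: bytes) -> None:
            # Fan out to every configured recognizer (e.g. PocketSphinx).
            for recognize in self.recognizers:
                self.forward(recognize(audio))

        def on_typed_text(self, text: str) -> None:
            # Typed input bypasses recognition but is forwarded identically.
            self.forward(text)

    relay = SpeechRelay(
        recognizers=[lambda audio: "hello there (stub hypothesis)"],
        forward=lambda text: print("recognized:", text),
    )
    relay.on_audio(b"\x00\x01")         # audio path
    relay.on_typed_text("hello there")  # simulated-speech path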

The target platform for the overall toolkit is Microsoft Windows, although some components are multi-platform.