Virtual Human Toolkit

2009-present
Project Leader: Arno Hartholt

Download a PDF overview.

Learn more on the Virtual Human Toolkit website.

Goal
ICT created the Virtual Human Toolkit to reduce some of the complexity inherent in creating virtual humans. Our Toolkit is an ever-growing collection of innovative technologies, fueled by basic research performed at ICT and by its partners.

The Toolkit provides a solid technical foundation whose modularity makes it relatively easy to mix and match Toolkit technology with a research project’s proprietary or third-party software. Through the Toolkit, ICT aims to provide the virtual humans research community with a widely accepted platform on which new technologies can be built.

What Is It
The ICT Virtual Human Toolkit is a collection of modules, tools and libraries that supports the creation of virtual human conversational characters. At the core of the Toolkit lie innovative, research-driven technologies, which are combined with other software components to provide a complete embodied conversational agent. Since all ICT virtual human software is built on top of a common framework, as part of a modular architecture, researchers using the Toolkit can do any of the following (a minimal messaging sketch follows the list):

  • Utilize all components or a subset thereof
  • Utilize certain components while replacing others with non-Toolkit components
  • Utilize certain components in other existing systems
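
As a rough sketch of this plug-and-play approach, consider a minimal publish/subscribe setup in which any component that speaks the shared message protocol can stand in for another. This is a hypothetical illustration written in Python for this overview; the MessageBus class and the topic names are invented here and are not the Toolkit’s actual messaging API.

    # Minimal publish/subscribe sketch of a modular, message-driven design.
    # Hypothetical stand-in only: none of these names come from the Toolkit.
    from collections import defaultdict
    from typing import Callable

    class MessageBus:
        """Routes string payloads from publishers to topic subscribers."""
        def __init__(self) -> None:
            self._subscribers = defaultdict(list)

        def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
            self._subscribers[topic].append(handler)

        def publish(self, topic: str, payload: str) -> None:
            for handler in self._subscribers[topic]:
                handler(payload)

    bus = MessageBus()

    # A stand-in dialogue module: turns recognized speech into an utterance.
    bus.subscribe("user.utterance",
                  lambda text: bus.publish("character.speak", f"You said: {text}"))

    # A stand-in renderer. Any replacement that subscribes to the same topic
    # can be swapped in without touching the dialogue module.
    bus.subscribe("character.speak", lambda line: print(f"[renderer] {line}"))

    bus.publish("user.utterance", "good morning")  # -> [renderer] You said: good morning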

The technology emphasizes natural language interaction, nonverbal behavior and visual recognition. The main modules are:

  • Non Player Character Editor (NPCEditor), a package for authoring dialogue responses to inputs for one or more characters. It contains a text classifier based on cross-language relevance models that selects a character’s response to the user’s text input, an authoring interface for entering and relating questions and answers, and a simple dialogue manager that controls aspects of output behavior (a simplified response-selection sketch appears after this list).
  • Nonverbal Behavior Generator (NVBG), a rule-based behavior planner that infers communicative functions from the surface text of an utterance and selects behaviors that augment and complement the expression of those functions.
  • SmartBody, a character animation platform that provides locomotion, steering, object manipulation, lip syncing, gazing and nonverbal behavior in real time, driven by the Behavior Markup Language (BML; an illustrative fragment follows the list).
  • MultiSense, a perception framework that enables multiple sensing and understanding modules to interoperate simultaneously, broadcasting data through the Perception Markup Language (PML). Its main use within the Toolkit is head and facial tracking through a webcam.
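
To make the NPCEditor entry concrete, the sketch below shows the general selection pattern in Python: stored question-answer pairs are scored against the user’s input and the best-scoring answer is returned, with a fallback when nothing matches well. The token-overlap score is a toy stand-in for NPCEditor’s cross-language relevance models, and all names are invented for this example.

    # Toy sketch of classifier-based response selection. The token-overlap
    # score is a stand-in for NPCEditor's cross-language relevance models.
    QA_PAIRS = [
        ("what is your name", "I'm a virtual human built with the Toolkit."),
        ("where are you from", "I was created at ICT."),
        ("what can you do", "I can listen, talk and gesture."),
    ]

    def tokens(text: str) -> set:
        return set(text.lower().split())

    def score(question: str, query: set) -> float:
        q = tokens(question)
        return len(q & query) / len(q | query) if q | query else 0.0

    def select_response(user_input: str, threshold: float = 0.2) -> str:
        """Return the answer linked to the best-matching question."""
        query = tokens(user_input)
        question, answer = max(QA_PAIRS, key=lambda pair: score(pair[0], query))
        # Simple dialogue-manager-style fallback for off-topic input.
        if score(question, query) < threshold:
            return "I'm sorry, could you rephrase that?"
        return answer

    print(select_response("tell me your name"))  # best match: "what is your name"
    print(select_response("favorite color"))     # falls back to the rephrase prompt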
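
BML itself is an XML format for scheduling synchronized verbal and nonverbal behavior. The fragment below, embedded as a Python string and parsed only to show it is well formed, gives the general shape of such a request: a gaze behavior followed by speech. Element and attribute details vary across BML versions and implementations, so treat this as illustrative rather than as an exact SmartBody message.

    # Illustrative BML request (XML carried in a Python string). The element
    # names follow the general BML pattern; exact attribute support varies
    # by implementation, so this is a sketch, not a verified SmartBody message.
    import xml.etree.ElementTree as ET

    BML_REQUEST = """\
    <bml id="bml1" xmlns="http://www.bml-initiative.org/bml/bml-1.0">
      <gaze id="gaze1" target="user"/>
      <speech id="speech1" start="gaze1:end">
        <text>Nice to meet you.</text>
      </speech>
    </bml>
    """

    root = ET.fromstring(BML_REQUEST)
    print(root.tag)  # {http://www.bml-initiative.org/bml/bml-1.0}bml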

The target platform for the overall Toolkit is Microsoft Windows, although some components are multi-platform.