Louis-Philippe Morency, Jacob Whitehill, Javier Movellan: “Generalized Adaptive View-based Appearance Model”

September 17, 2008 | Amsterdam, The Netherlands

Speakers: Louis-Philippe Morency, Jacob Whitehill, Javier Movellan
Host: 8th International Conference on Automatic Face and Gesture Recognition (FG 2008)

Accurately estimating a person’s head position and orientation is an important task for a wide range of applications such as driver awareness and human-robot interaction. Over the past two decades, many approaches have been suggested to solve this problem, each with its own advantages and disadvantages. In this paper, we present a probabilistic framework called the Generalized Adaptive View-based Appearance Model (GAVAM) which integrates the advantages of three of these approaches: (1) the automatic initialization and stability of static head pose estimation, (2) the relative precision and user-independence of differential registration, and (3) the robustness and bounded drift of keyframe tracking. In our experiments, we show how the GAVAM model can be used to estimate head position and orientation in real time using a simple monocular camera. Our experiments on two previously published datasets show that the GAVAM framework can accurately track for a long period of time (>2 minutes) with an average accuracy of 3.5° in orientation and 0.75 inches in position when compared against an inertial sensor and a 3D magnetic sensor.
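To illustrate the kind of probabilistic integration the abstract describes, the sketch below fuses pose estimates from the three sources (static estimation, differential registration, keyframe tracking) by inverse-variance weighting, a simplified Kalman-style combination applied to a single pose dimension. This is a minimal illustration, not the authors' implementation; all names and noise values are hypothetical assumptions.

```python
def fuse_estimates(estimates):
    """Fuse independent (value, variance) pose estimates by
    inverse-variance weighting -- a simplified stand-in for the
    probabilistic fusion a framework like GAVAM performs per
    pose dimension."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total  # fused value and fused variance

# Hypothetical yaw estimates (degrees) from the three trackers:
static_pose = (12.0, 9.0)   # static estimator: stable but coarse
differential = (10.5, 1.0)  # frame-to-frame: precise but drifts
keyframe = (11.0, 4.0)      # keyframe match: bounds the drift

yaw, var = fuse_estimates([static_pose, differential, keyframe])
print(round(yaw, 2), round(var, 2))
```

Note how the fused variance is smaller than any single source's variance: each tracker's weakness is offset by the others, which mirrors the complementarity argument made in the abstract.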