Publications
2010
Lance, Brent; Marsella, Stacy C.
Glances, Glares, and Glowering: How Should a Virtual Human Express Emotion Through Gaze? Journal Article
In: Journal of Autonomous Agents and Multi-Agent Systems, vol. 20, no. 1, 2010.
@article{lance_glances_2010,
title = {Glances, Glares, and Glowering: How Should a Virtual Human Express Emotion Through Gaze?},
author = {Brent Lance and Stacy C. Marsella},
url = {http://ict.usc.edu/pubs/Glances,%20Glares,%20and%20Glowering-%20How%20Should%20a%20Virtual%20Human%20Express%20Emotion%20Through%20Gaze.pdf},
year = {2010},
date = {2010-01-01},
journal = {Journal of Autonomous Agents and Multi-Agent Systems},
volume = {20},
number = {1},
abstract = {Gaze is an extremely powerful expressive signal that is used for many purposes, from expressing emotion to regulating human interaction. The use of gaze as a signal has been exploited to strong effect in hand-animated characters, greatly enhancing the believability of the character's simulated life. However, virtual humans animated in real-time have been less successful at using expressive gaze. One reason for this is that a gaze shift towards any specific target can be performed in many different ways, using many different expressive manners of gaze, each of which can potentially imply a different emotional or cognitive internal state. However, there is currently no mapping that describes how a user will attribute these internal states to a virtual character performing a gaze shift in a particular manner. In this paper, we begin to address this by providing the results of an empirical study that explores the mapping between an observer's attribution of emotional state and gaze. The purpose of this mapping is to allow an interactive virtual human to generate believable gaze shifts to which a user will attribute a desired emotional state. We have generated a set of animations by composing low-level gaze attributes culled from the nonverbal behavior literature. Then, subjects judged the animations displaying these attributes. While the results do not provide a complete mapping between gaze and emotion, they do provide a basis for a generative model of expressive gaze.},
keywords = {Social Simulation},
pubstate = {published},
tppubtype = {article}
}
Gaze is an extremely powerful expressive signal that is used for many purposes, from expressing emotion to regulating human interaction. The use of gaze as a signal has been exploited to strong effect in hand-animated characters, greatly enhancing the believability of the character's simulated life. However, virtual humans animated in real-time have been less successful at using expressive gaze. One reason for this is that a gaze shift towards any specific target can be performed in many different ways, using many different expressive manners of gaze, each of which can potentially imply a different emotional or cognitive internal state. However, there is currently no mapping that describes how a user will attribute these internal states to a virtual character performing a gaze shift in a particular manner. In this paper, we begin to address this by providing the results of an empirical study that explores the mapping between an observer's attribution of emotional state and gaze. The purpose of this mapping is to allow an interactive virtual human to generate believable gaze shifts to which a user will attribute a desired emotional state. We have generated a set of animations by composing low-level gaze attributes culled from the nonverbal behavior literature. Then, subjects judged the animations displaying these attributes. While the results do not provide a complete mapping between gaze and emotion, they do provide a basis for a generative model of expressive gaze.