By Bin Han, PhD student, Computer Science, Viterbi School of Engineering
As a PhD student in Computer Science at USC’s Institute for Creative Technologies, I spend most of my time immersed in the mechanics of behavior—how to model it, measure it, and, increasingly, how to generate it. At the heart of my research lies a simple yet demanding question: can we design virtual agents that not only speak and gesture but do so in ways that reflect consistent, recognizable personalities?
“Can LLMs Generate Behaviors for Embodied Virtual Agents Based on Personality Prompting?” – the paper I’m presenting at IVA 2025 in Berlin – explores precisely this. Working with my co-authors Deuksin Kwon, Spencer Lin, Kaleen Shrestha, and Jonathan Gratch, we developed a framework that uses large language models (LLMs) to generate both verbal and nonverbal behaviors for virtual agents, all modulated by a single personality trait—extraversion. It’s a narrow slice of the much broader space of human personality, but a particularly expressive and tractable one.
Our approach builds on recent advances in multimodal LLMs, whose reasoning is no longer limited to language alone: these models are increasingly capable of reasoning across text, images, audio, and action. Yet much of the work in this area still stops short of embodiment—of endowing agents with behaviors that feel psychologically consistent, not only in what they say, but in how they move, gesture, and speak. That gap is where our work lives.
We designed two contrasting interaction scenarios—negotiation and ice-breaking—to evaluate the expressiveness and perceptibility of LLM-driven behaviors. One is task-oriented and restrained; the other, more personal and open-ended. We prompted LLMs to generate both dialogue and behavior aligned with either introversion or extraversion, using a curated list of nonverbal actions spanning facial expressions, gestures, and vocal qualities. Then we built agents that embodied these personalities in a 3D environment using Unity.
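To make the pipeline a bit more concrete, here is a minimal sketch of the kind of personality-conditioned prompting this involves. The trait descriptions, the action vocabulary, the function names, and the output schema below are illustrative stand-ins, not the exact prompt or curated list from the paper.

```python
import json

# Illustrative subset of a curated nonverbal action vocabulary
# (facial expressions, gestures, vocal qualities); the paper's actual list differs.
NONVERBAL_ACTIONS = [
    "broad_smile", "slight_smile", "neutral_face",
    "expansive_arm_gesture", "small_hand_gesture", "arms_at_sides",
    "loud_fast_voice", "moderate_voice", "soft_slow_voice",
]

# Hypothetical trait descriptions used to condition the LLM.
TRAIT_DESCRIPTIONS = {
    "extraverted": "outgoing, talkative, energetic, expressive",
    "introverted": "reserved, quiet, measured, restrained",
}


def build_prompt(trait: str, scenario: str, user_utterance: str) -> str:
    """Compose a personality-conditioned prompt asking the LLM for
    both a dialogue line and nonverbal behaviors chosen from the list."""
    return (
        f"You are a virtual agent with a strongly {trait} personality "
        f"({TRAIT_DESCRIPTIONS[trait]}) in a {scenario} scenario.\n"
        f"The user just said: \"{user_utterance}\"\n"
        "Respond with JSON containing:\n"
        '  "utterance": your reply, phrased to reflect your personality,\n'
        '  "nonverbal": a list of actions chosen only from: '
        + ", ".join(NONVERBAL_ACTIONS)
    )


def parse_agent_turn(llm_output: str) -> dict:
    """Parse the LLM's JSON reply and keep only actions from the curated list,
    so downstream animation triggers in the 3D environment stay valid."""
    turn = json.loads(llm_output)
    turn["nonverbal"] = [a for a in turn.get("nonverbal", []) if a in NONVERBAL_ACTIONS]
    return turn


if __name__ == "__main__":
    prompt = build_prompt("introverted", "ice-breaking", "So, what do you do for fun?")
    print(prompt)  # send this to any chat-style LLM, then pass its reply to parse_agent_turn
```

In a sketch like this, the parsed actions would then be mapped onto animation and voice parameters of the Unity agent; constraining the LLM to a fixed action vocabulary is what keeps that mapping reliable.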
Our analyses spanned linguistic patterns, nonverbal behavior distributions, and human perception. We found that extraverted agents not only spoke in longer, more expressive sentences but also moved with greater frequency and amplitude; their speech was louder and faster. In contrast, introverted agents used more subdued language, smaller gestures, and softer, slower speech. More importantly, users noticed. In our user study, participants reliably distinguished between the two personality types, and most perceived the verbal and nonverbal behaviors as aligned with the intended traits.
What makes this work significant, to me, isn’t just that it works. It’s that it works without custom datasets, complex rule-based systems, or laborious animation pipelines. The behavioral profile of an agent—what it says, how it says it, and how it moves—is shaped on the fly with the help of an LLM, guided by prompts and supported by a modular understanding of motion and expression. This opens up new possibilities not just for scaling embodied agents but for controlling them. We can now begin to ask and test questions about social alignment, emotional resonance, and personality pairing in a way that’s both systematic and flexible.
The larger goal is to deepen our understanding of how personality shapes interaction, especially in contexts where rapport and cooperation matter: education, negotiation, therapy. I’m particularly interested in how people with different personalities respond to agents that either match or contrast with their own traits. Our early results hint at an interaction effect—introverted participants, for instance, were more sensitive to personality cues than their extraverted counterparts. But these are just starting points. The real test will come with live, adaptive agents that can engage with users in real time.
As I prepare to share this work at IVA, I’m struck by how quickly the field is evolving. What felt speculative two years ago—personality-simulating LLMs embedded in embodied agents—is now a viable research platform. The question is no longer whether machines can express personality, but what kinds of personality we want them to express, and to what ends.
This is what excites me about research right now. It’s not just about building better models. It’s about giving those models a face, a voice, a gesture—and then asking what it means to respond to them as if they were, in some meaningful way, people.