From Numbers to Narratives: Building Bridges Between AI and Human Understanding

Published: September 18, 2025
Category: Essays | News

By Boxi Fu, Game Developer, Human-Centered AI Lab, ICT

As a Game Project Specialist in the Human-Centered AI Lab (Director, Dr. Ning Wang) at USC’s Institute for Creative Technologies, I spend my days solving problems that didn’t exist in textbooks. How do you make an AI system that can accurately sync lip movements with speech while maintaining natural gestures? How do you create educational experiences that don’t feel like homework disguised as games? 

These questions have shaped my work over the past two years, leading me into territory I never anticipated when I first started studying computer science.

The path here wasn’t straightforward. I began my undergraduate studies in applied mathematics at UC Berkeley—mathematics had always been my strongest subject, and it seemed like a logical choice. Computer science came later, almost by chance, but the combination proved more powerful than I had imagined. When I came to USC for my Master of Science in Computer Science – Game Development, I chose this specialisation partly because I recognised the intense competition in traditional software engineering, but mostly because interactive experiences seemed to offer something that pure algorithms couldn’t: direct human connection.

Two years ago, I responded to a recruitment email from ICT about a research project. That conversation changed everything. Graduate studies provide more flexibility than undergraduate courses, and I was curious about what research at ICT actually involved. What I discovered was a place where the abstract concepts I’d studied could come alive in ways that genuinely mattered to people.

Making Complex Ideas Accessible

My work centres on a deceptively simple question: how do you make cutting-edge AI research tangible for people who aren’t computer scientists? The answer, I’ve learned, involves considerably more psychology and design thinking than any algorithms course ever covered.

Consider our Virtual Human Exhibit (VHX) project, currently running at UC Berkeley’s Lawrence Hall of Science. VHX is an interactive exhibition developed by USC in collaboration with UC Berkeley. On the surface, it’s a collection of interactive demonstrations—emotion detection, laugh recognition, facial expression classification. But the real challenge wasn’t implementing these AI systems; it was determining how to present them so that a ten-year-old could understand what artificial intelligence actually does and why it matters.

We built six standalone demonstrations, each designed to run for approximately five minutes—just long enough to spark curiosity without losing attention. The technical implementation involved integrating emotion detection algorithms with animated cartoon characters, creating responsive UI components that could handle thousands of daily interactions, and ensuring everything worked reliably in a museum setting where there’s no technical support team standing by.

The response has been remarkable. We’re seeing a 98.2% positive feedback rate from over 20,000 visitors—students and parents who leave understanding something meaningful about AI that they didn’t know before. But what strikes me most is observing children interact with these systems. They’re not intimidated by the technology; they’re genuinely curious about how it works and what it means. If you ever get a chance to visit, please check out the exhibition yourself!

Games as Learning Laboratories

As a core design member in the early phases of the BecomingFei project, I helped push these concepts further. How do you teach both military personnel and civilians about AI capabilities without overwhelming them with technical jargon or oversimplifying to the point of meaninglessness? We decided to embed the learning within a rescue mission scenario—a game where players naturally encounter AI applications in medical diagnosis, surveillance, image identification, and decision-making contexts.

The design challenge was substantial. We needed to research topics, assets, and narratives that would work within our constraints whilst achieving maximum teaching effectiveness and maintaining an engaging interactive experience. Players would learn about AI’s role in daily life through a sequence of first-person shooter elements, open world exploration, text-based narration, and various mini-games—different approaches for different learning preferences.

The technical aspects were equally demanding. We had to implement core game mechanics that accurately represented AI capabilities, and ensure performance across different devices using continuous integration testing. But the harder problem was always design: ensuring players understood they weren’t simply playing a game, but exploring real-world applications of technology that’s already reshaping how critical decisions get made.

This project taught me something important about the relationship between entertainment and education. The moment something feels “obviously educational,” you’ve lost your audience. The goal isn’t to trick people into learning—it’s to create experiences where learning happens naturally because the content is genuinely engaging and meaningful.

Building for Scale and Impact

What excites me most about our current projects is their potential reach. We’re developing educational games that could replace traditional class periods in STEM education, potentially reaching hundreds of middle and high schools across California. That’s not simply about creating better software—it’s about fundamentally changing how young people encounter and understand technology that will define their futures.

The challenge is raising the quality of serious games to match commercial entertainment standards. Currently, most educational games feel like vegetables disguised as dessert. Players recognise they’re being taught something, and that awareness creates resistance. My goal is to create experiences so compelling that people choose to engage with them regardless of the educational component.

This requires thinking beyond traditional game development. It means understanding cognitive load theory, designing for diverse learning styles, and creating feedback loops that feel rewarding rather than patronising. It also means grappling with questions that pure computer science doesn’t address: How do you measure learning in an interactive environment? How do you balance challenge and accessibility? How do you create experiences that scale across different age groups and cultural contexts?

Throughout my studies and work, I’ve created games of various genres—adventure RPGs, puzzle games, card-based games, text-based narrative experiences—for both entertainment and education. Each project has taught me something different about what makes interactive experiences meaningful. Working in both small groups and larger teams has shown me how different scales of collaboration can shape the final product in unexpected ways.

The Intersection of Technology and Human Experience

My work sits at the intersection of several disciplines—computer science, game design, educational psychology, and human-computer interaction. This interdisciplinary approach isn’t simply academically interesting; it’s practically necessary. The AI systems we’re building aren’t just technical achievements; they’re tools for human understanding and communication.

During my previous internship at OPPO, I worked on photo-to-facial feature algorithms and audio-to-facial landmark models—technical projects that shortened character generation processes by over 90% and improved audio-visual correlation by more than 15%. But at ICT, I’ve learned that technical optimisation is only meaningful when it serves human needs and understanding.

The shift from pure technical development to human-centred design has been profound. Instead of asking “Can we make this algorithm more efficient?” I’m asking “Will this help someone understand something important?” It’s a different kind of problem-solving, one that requires empathy alongside technical skill.

I love trying and learning new things, because I understand how little I know. This perspective has been essential in my work at ICT, where every project teaches me something unexpected about the relationship between technology and human understanding. Each challenge—whether it’s creating believable character animation or designing intuitive interfaces—reveals new layers of complexity I hadn’t anticipated.

GDC & Beyond?

The future I’m working towards is one where the gap between cutting-edge AI research and public understanding continues to shrink. Not because we’re simplifying the technology, but because we’re getting better at creating bridges between complex ideas and human intuition.

I’d love to present my work at GDC someday, not simply to share technical innovations, but to contribute to conversations about the responsibility that comes with creating interactive experiences. When you build something that teaches, you’re not just writing code—you’re shaping how people understand their world.

The most meaningful measure of success for me isn’t technical benchmarks or even positive user feedback, though both matter. It’s the moment when someone who was intimidated by AI realises they can understand and engage with it thoughtfully. It’s when a middle schooler leaves our exhibit curious about computer science, or when an adult understands something new about how technology affects their daily life.

Every project we complete at ICT feels like a small step towards a world where advanced technology doesn’t have to be mysterious or alienating—where understanding and innovation can grow together, where humans and the artificial intelligence we create can genuinely collaborate rather than simply coexist.

That’s the future I’m building towards, one interaction at a time.
