By Stanley Lin, Cinematographer and Imaging Researcher (volunteering at the Vision & Graphics Lab on our next Light Stage)
When I first visited the USC Institute for Creative Technologies, it was out of genuine interest — not as a wide-eyed tourist, but as someone who had already begun thinking critically about the tools that shape digital storytelling. It was during ICT’s 25th anniversary event that I encountered the Vision & Graphics Lab’s Light Stage for the first time. As someone working in both cinematography and immersive imaging, I saw in it not just a spectacle, but a highly engineered solution to a problem I cared about deeply: how to represent the complexity of the human face — light, movement, texture — in ways that feel both precise and believable.
Still, I have to admit: at that moment, I turned to my friend and said, “That’s the coolest thing since sliced bread.” I stand by it.
Fast forward to today: I'm working with the Vision & Graphics Lab (VGL) at ICT, led by Dr. Yajie Zhao, to help build out its newest Light Stage. The original system has become a benchmark in volumetric capture and digital human rendering. It has been recognized with two Scientific and Technical Academy Awards, used on almost 50 films and streaming productions, and cited in over 160 academic papers. Now I'm part of the team reimagining that technology for the next chapter: larger, more modular, and even more adaptable to the evolving needs of filmmakers, researchers, and immersive storytellers.
My role touches on multiple facets of the build, from hardware configuration and system layout to logistics and calibration support, plus the perspective of a working cinematographer. I've always been interested in entertainment technology, and I think ICT represents a rare convergence point between rigorous technical research and artistic application. Volunteering on this project gives me the chance to contribute to a system that not only advances computer graphics, but also supports the kinds of creative workflows I believe in.
I’ve always gravitated toward areas where creative and technical systems intersect. As an undergraduate at USC, I pursued a dual degree in Business Administration and Cinema & Media Studies. I originally enrolled as a business major — I’ve had a habit since high school of launching small ventures — and wanted to explore the operational side of creative industries. But while studying marketing, I found myself increasingly drawn to the narrative aspects: the psychology of storytelling, visual language, how you design for impact.
At a certain point, I realized it didn’t make sense to be at one of the best film schools in the world and not pursue cinema more directly. I applied to the USC School of Cinematic Arts and eventually found myself on a path that combined the two disciplines. I’ve been in art classes since I was five — so it felt natural to find ways to bring that visual thinking into my interest in production tools, camera systems, and image science.
BEYOND ICT: CINEMATOGRAPHY +
Outside of my work at ICT, I’m a cinematographer and imaging researcher. My primary research is through USC’s Mobile and Environmental Media Lab and the Ganek Immersive Studio, where I’ve been helping develop high-fidelity stereoscopic imaging pipelines for virtual reality. The goal has been to make immersive video workflows more accessible — both financially and technically — while maintaining a high level of image quality.
The most meaningful achievement from this work so far has been developing an image acquisition pipeline for immersive cinematography using prosumer-grade hardware. We used a Canon R5 C paired with a 5.2mm dual fisheye lens to shoot a short narrative VR piece. From there, we tackled a range of image quality issues: sensor noise, lens distortion, lighting across a 190° field of view — all of which are substantial challenges in stereoscopic VR. We developed a solution that included optimized exposure and aperture control, strategic lighting design, AI-based denoising, and 16K upscaling, ultimately building a replicable process that supports cinematic lighting while preserving immersion.
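To give a sense of one early step in that kind of workflow, here is a minimal Python sketch of remapping a single eye of a dual-fisheye frame into an equirectangular panorama. It assumes an idealized equidistant fisheye model with a centered image circle, which is a simplification of any real lens; the function name and parameters are illustrative, not the pipeline we actually built, which also handles stitching, denoising, and upscaling.

```python
# Minimal sketch: one fisheye eye -> equirectangular panorama.
# Assumes an ideal equidistant fisheye whose image circle is centered in the
# frame and touches its shorter edge; real footage needs measured distortion.
import numpy as np
import cv2

def fisheye_to_equirect(fisheye_img, out_w=4096, out_h=2048, fov_deg=190.0):
    h, w = fisheye_img.shape[:2]
    cx, cy = w / 2.0, h / 2.0               # assumed optical center
    radius = min(cx, cy)                     # assumed image-circle radius
    max_theta = np.radians(fov_deg) / 2.0    # half the lens field of view

    # Build a world ray for every output (equirectangular) pixel.
    u, v = np.meshgrid(np.arange(out_w), np.arange(out_h))
    lon = (u / out_w - 0.5) * 2.0 * np.pi    # -pi .. pi
    lat = (0.5 - v / out_h) * np.pi          # -pi/2 .. pi/2
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)            # lens looks down +z

    # Equidistant model: radial distance is proportional to the off-axis angle.
    theta = np.arccos(np.clip(z, -1.0, 1.0))
    phi = np.arctan2(y, x)
    r = radius * theta / max_theta
    map_x = (cx + r * np.cos(phi)).astype(np.float32)
    map_y = (cy - r * np.sin(phi)).astype(np.float32)

    out = cv2.remap(fisheye_img, map_x, map_y, cv2.INTER_LINEAR,
                    borderMode=cv2.BORDER_CONSTANT)
    out[theta > max_theta] = 0               # mask rays outside the lens FOV
    return out
```

The same mapping, inverted, is also where lens-distortion correction and per-eye alignment would hook in before any denoising or upscaling pass.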
That project taught me a lot about workflow design — how choices made early in production affect what’s possible in post — and how technical constraints can actually shape aesthetic outcomes in meaningful ways.
WHY THE LIGHT STAGE MATTERS
The Light Stage isn’t a volumetric capture system in the conventional sense. It’s more specialized — a photometric rig designed to precisely capture how light reflects off human skin under a wide range of lighting directions. It captures both high-resolution facial geometry and reflectance data that can be used to build digital humans that are physically accurate and relightable. This allows characters to be placed in any digital environment with correct lighting, shading, and surface detail.
In practical terms, it’s one of the few technologies capable of capturing the subtle shifts in light that occur across a moving face — the soft shadows under a cheekbone, the specular highlights on a brow, the way translucent skin scatters and absorbs light. These details matter enormously when building realistic digital characters. The Light Stage enables researchers and visual effects teams to model those phenomena with real-world accuracy.
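The practical payoff of that reflectance data is relighting after the fact. As a rough illustration of the idea, and not the lab's actual code, image-based relighting treats each one-light-at-a-time (OLAT) photograph as a basis image: because light adds linearly, a face can be re-lit under a new environment by a weighted sum of those photographs. The function and the toy weights below are hypothetical.

```python
# Minimal sketch of image-based relighting from OLAT captures.
import numpy as np

def relight(olat_images, light_weights):
    """Relight a subject from one-light-at-a-time (OLAT) captures.

    olat_images:   (N, H, W, 3) linear-light photos, one per light direction.
    light_weights: (N, 3) RGB intensity of the target environment sampled at
                   those same N directions.
    """
    # Light transport is additive, so the relit frame is a per-channel
    # weighted sum of the basis photographs.
    return np.einsum('nc,nhwc->hwc', light_weights, olat_images)

# Toy usage: four light directions and a tiny 2x2 "face".
olat = np.random.rand(4, 2, 2, 3)
weights = np.array([
    [1.0, 0.8, 0.6],   # warm key from direction 0
    [0.1, 0.1, 0.1],   # faint fill
    [0.0, 0.0, 0.0],   # direction 2 off
    [0.2, 0.2, 0.3],   # cool rim
])
print(relight(olat, weights).shape)   # (2, 2, 3)
```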
Working on the new system means getting deep into the engineering and design logic that enables that kind of precision. I’ve been helping with the physical layout, supporting calibration, managing hardware integration, and thinking through user workflows — all while staying aware that every decision affects the reliability and reproducibility of the capture data. It’s a systems problem, but it also requires thinking like a cinematographer: where is the light coming from, how consistent is it, what are the physical tolerances of the build?
RETHINKING PHOTOREAL CAPTURE
Looking ahead, I’m particularly interested in making advanced imaging tools — including photometric systems like the Light Stage — more accessible to smaller studios and independent creators. The ability to relight a performance after capture, or to generate digital doubles that don’t break the illusion of realism, could open up entirely new production strategies.
My long-term interest is in adapting capture technologies to better serve narrative workflows, particularly by enabling reshoots or lighting adjustments in post without massive green-screen setups, complex VFX pipelines, or halting a multimillion-dollar production to shoot simple inserts. If we can lower the barrier to entry, it could radically shift the kinds of stories smaller teams are able to tell.
That’s part of why I feel so fortunate to be working on the Light Stage. It’s not just about building a sophisticated piece of hardware. It’s about contributing to a lineage of tools that elevate creative expression — that make it easier to capture something honest and human, even when the final frame is fully synthetic.
BEHIND THE FRAME
One thing I’ve come to appreciate deeply during my time at ICT is the collaborative atmosphere. The people here — from researchers to engineers to production leads — bring both technical mastery and a genuine openness to new ideas. It’s a place where you can walk in with a question and leave with three different solutions, all of them grounded in years of research and experimentation.
Volunteering here has been one of the most valuable professional experiences I’ve had, and not just because of the technology. It’s also the mentorship, the problem-solving culture, and the shared belief that creativity deserves robust, well-designed tools. The work is demanding, but it’s never abstract — everything we’re doing has a tangible application, whether in a film pipeline, a research study, or an immersive training environment.
WIDE SHOT
I came to ICT because I wanted to be part of something that pushed the boundaries of how we capture and represent human expression. The Light Stage project has given me that — and more. It reminded me that precision and imagination are not opposites. In fact, in this work, they’re inseparable.
To help build a system that empowers artists and researchers alike — to be part of a lab that’s rethinking the very nature of digital human performance — is something I take seriously. And if I can bring a bit of that enthusiasm into my own projects moving forward, whether in VR imaging, cinematography, or future research, then I’ll feel I’m doing right by the opportunity I’ve been given.
And yes — it’s still the coolest thing since sliced bread.
//