A Colorful Path from ICT to Google

Published: April 15, 2024
Category: Essays | News

By Dr. Chloe LeGendre, Senior Software Engineer, Google Research

Dr. Chloe LeGendre is a senior software engineer at Google Research, working at the intersection of machine learning, computer graphics, and photography. From 2015 to 2019, she was a graduate research assistant here at ICT, in the Vision and Graphics Lab supervised by Prof. Paul Debevec, while completing her Ph.D. in Computer Science at the USC Viterbi School of Engineering as an Annenberg Graduate Fellow. Here is Dr. LeGendre’s story of how she came to work at ICT, and the research she’s most proud of from that era.

Dr. Chloe LeGendre, Senior Software Engineer, Google Research (Graduate Research Assistant, ICT, 2015 – 19)

Two significant events led me to ICT. 

First, in 2013, I flew from the east coast to L.A. to attend my first SIGGRAPH conference in Anaheim, on a work trip on behalf of my then-employer, L’Oréal USA Research and Innovation. I was looking for new technologies that could help usher the cosmetics giant into the digital era. I learned about all kinds of new tech – both while walking around the Expo floor and by attending technical paper sessions. While attending SIGGRAPH’s Real-Time Live event, I learned about “Digital Ira” – part of a long tradition of virtual human research from ICT’s Graphics Lab, led by Professor Paul Debevec and his team.

Then, a year later, in the spring or summer of 2014, I went to get a haircut in Hoboken, NJ, where I was living at the time. While I waited for my appointment, I picked up an issue of The New Yorker magazine on the table in front of me. Inside there was an article called “Pixel Perfect” featuring the virtual human research by Paul Debevec at the USC ICT Graphics Lab. I was already somewhat aware of this work from my time at SIGGRAPH, but I didn’t know nearly as much as I learned from that article.

Getting that haircut – that was when I decided, “Wow, if I’m going to go to graduate school, this is where I want to do my research. I want to do it at ICT, and I want to be working with Paul as my supervisor.”

GRAD SCHOOL

I applied to ten schools and, somewhat to my disbelief, was accepted at all ten, including the USC Viterbi School of Engineering. Then Kat Haase from ICT’s Graphics Lab sent me an email, saying that the lab and Paul were interested in my profile. I was blown away. I knew I wanted to do my graduate research at USC’s ICT after seeing the work at SIGGRAPH and reading that article in The New Yorker. And now, here it was – an opportunity to do just that.

From that point on, none of the other schools I’d been accepted to were really under consideration. My mind was essentially made up, despite having to move from the east coast to L.A., away from my friends and family. I started my graduate study at the USC Viterbi School of Engineering as an Annenberg Graduate Fellow and joined ICT as a graduate research assistant under the supervision of Professor Paul Debevec.

Within a few months of working at the lab, I had a “pinch me” moment, one of many during my time there. My desk at ICT was right across from the Light Stage laboratory, and there was an army of incredibly famous people coming in day after day to get their faces scanned for upcoming films. I had just moved to Los Angeles from New Jersey. So to be involved in an academic research lab that’s so integrated within Hollywood? That was out of my realm of experience until I got to ICT. 

ACM SIGGRAPH 2016

One of the pieces of research that I was most proud of from my time at ICT was Practical Multispectral Lighting Reproduction, which we published as a technical paper at SIGGRAPH in 2016. This work involved USC ICT’s Light Stage X, engineered and built before I had even arrived at the university. Prior to our research, there were techniques where you could use the red, green, and blue LEDs (light-emitting diodes) to reproduce omnidirectional real-world illumination inside a light stage system. This lit an actor to make it look like they were out in the real world rather than inside a laboratory or film studio. In our research, we reproduced illumination using the additional spectral channels of Light Stage X – the white, cyan, and amber LEDs – adding them to the mix of RGB lighting to produce even more color-accurate and true-to-life illumination within the Light Stage.
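To give a flavor of what “reproducing illumination” means computationally, here is a minimal sketch in Python, with entirely made-up numbers. It only illustrates the basic idea – if you know how each LED channel reads in some measurement space, you can solve a small non-negative least-squares problem for the drive levels that best match a target real-world illuminant – and is not the calibration or solving pipeline from the actual paper.

# Toy sketch of multispectral lighting reproduction (not the paper's pipeline).
# Idea: each LED channel (red, green, blue, cyan, amber, white) produces a known
# response in some measurement space (e.g., camera RGB of a reference target or
# sampled spectra). Solve for non-negative drive levels so the mix of LEDs best
# matches a target real-world illuminant measured in that same space.

import numpy as np
from scipy.optimize import nnls  # non-negative least squares

# Columns: per-channel responses (made-up numbers), order: R, G, B, cyan, amber, white.
# Rows: measurement dimensions (a 6-sample stand-in for spectral/color data).
led_responses = np.array([
    [0.9, 0.1, 0.0, 0.0, 0.6, 0.4],
    [0.4, 0.2, 0.0, 0.1, 0.9, 0.5],
    [0.1, 0.8, 0.1, 0.5, 0.2, 0.6],
    [0.0, 0.5, 0.3, 0.8, 0.0, 0.5],
    [0.0, 0.1, 0.9, 0.4, 0.0, 0.4],
    [0.0, 0.0, 0.6, 0.1, 0.0, 0.3],
])

# Target illuminant measured in the same space (also made up).
target = np.array([0.7, 0.6, 0.55, 0.4, 0.35, 0.2])

# Solve for LED drive levels >= 0 that best reproduce the target illumination.
drive_levels, residual = nnls(led_responses, target)

for name, level in zip(["R", "G", "B", "cyan", "amber", "white"], drive_levels):
    print(f"{name:>6}: {level:.3f}")
print(f"residual: {residual:.4f}")

With the extra cyan, amber, and white channels, the solver has more degrees of freedom to match the target, which is one way to think about why the reproduced lighting can be more color-accurate than with RGB alone.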

For the paper, we organized a photoshoot of a friend of the lab. We photographed her outside ICT, during the “golden hour” time of day, while also capturing a record of the scene’s illumination. Then we reproduced that lighting inside the Light Stage using our new technique and photographed her there as well. And when we did a side-by-side comparison of the two photographs, they matched nearly perfectly. 

Previously, you would have observed color errors, which you could correct by applying color grading or color correction methods in post-processing. But our research made this unnecessary; the images just came out of the camera looking correct. Because of our calibration process – and the extra spectral channels – everything just matched.

My involvement with SIGGRAPH has since come full circle from contributing my first technical paper in 2016. Next year, I’m going to be the Posters Chair for SIGGRAPH 2025. I’m excited about this, because I contributed many times to the Posters Program at SIGGRAPH during my time as a student. Now, to be the Chair of the Posters Program and to engage the next generation of researchers – that feels amazing.

ACADEMIA TO INDUSTRY

I was awarded my Ph.D. in Computer Science from USC in 2019 and joined Google as a Software Engineer, working first on AR (augmented reality) and then on machine perception projects. After a couple of years there, I went to Netflix as a Research Scientist, again working with Paul Debevec, who was leading a new research team there.

As the world was emerging from the COVID-19 pandemic, a lot of film and television production was moving to “virtual production” techniques, which included filming actors inside LED volumes.

A “volume” is built from an array of LED panels, which you can use to surround an actor with lighting from all directions. LED volumes are awesome because they allow you to practice lighting reproduction, literally surrounding the actors with the light that would be illuminating them in the real world, as if you weren’t in a studio. They also allow you to film your background imagery directly, rather than needing to use a green screen. This workflow came to be referred to as “in-camera VFX.”

But among the (many) challenges that come with filming inside an LED volume is that its LEDs are only red, green, and blue. So you can’t use the color-accurate multispectral lighting reproduction technique that we practiced inside Light Stage X, because you don’t have the extra spectral channels. Thus, if you want accurate colors, you have to “fix it in post” (post-production), with a color grading or color correction process.

Essentially, this became a through-line from the research problem we tackled at ICT. How can you get the colors looking correct when the actors are lit with narrow-spectrum studio lighting from RGB LED panels, rather than the broad-spectrum lighting of the real world? And, if you know you’re going to “fix your image in post” via color correction, how can you make sure that your in-camera background still looks correct? 

We solved this problem and published our research, Jointly Optimizing Color Rendition and In-Camera Backgrounds in an RGB Virtual Production Stage, at the DigiPro conference in 2022. This conference is co-located with SIGGRAPH but, as I see it, is more focused on industry applications.
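To make the background question concrete, here is a toy Python sketch of the tension described above; it is only an illustration with made-up numbers, not the joint optimization from the DigiPro paper. The idea: if a 3x3 color-correction matrix is fit from color-chart patches to fix the foreground colors in post, that same matrix also gets applied to the in-camera background, so one simple way to keep the background looking right is to pre-compensate the content sent to the LED wall with the inverse of that matrix.

# Toy sketch of "fix it in post without breaking the background"
# (illustration only, not the paper's actual method).
import numpy as np

# Made-up RGB values of color-chart patches captured under RGB LED lighting.
captured_patches = np.array([
    [0.80, 0.35, 0.30],
    [0.40, 0.70, 0.35],
    [0.25, 0.30, 0.75],
    [0.70, 0.65, 0.30],
    [0.60, 0.60, 0.60],
])

# Their reference appearance under broad-spectrum real-world lighting (also made up).
reference_patches = np.array([
    [0.75, 0.30, 0.25],
    [0.35, 0.72, 0.30],
    [0.20, 0.25, 0.80],
    [0.72, 0.68, 0.25],
    [0.62, 0.61, 0.60],
])

# Least-squares fit of a 3x3 matrix M such that captured @ M ~= reference.
M, *_ = np.linalg.lstsq(captured_patches, reference_patches, rcond=None)

def grade(rgb):
    # Color-correction step applied in post to everything the camera sees.
    return rgb @ M

# Background content we want to look correct *after* grading: pre-compensate it
# with the inverse matrix before displaying it on the LED wall (this assumes the
# wall content is captured roughly as displayed, which is itself a simplification).
background = np.array([[0.55, 0.45, 0.65]])      # desired final look
pre_compensated = background @ np.linalg.inv(M)  # what the wall actually shows

print("foreground after grading:\n", np.round(grade(captured_patches), 3))
print("background after grading:", np.round(grade(pre_compensated), 3))  # ~= desired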

GOOGLE RESEARCH

In February 2023, I returned to Google as a Senior Software Engineer within Google’s Research organization, leveraging my knowledge of imaging, computer graphics, and machine learning to develop new technologies in support of new features on the Google Pixel phone and in Google Photos.

I’ve been involved with several products that have already shipped, including Portrait Light, which allows you to position an artificial light source in your photo after capture, so you can “fix it in post.” I was also involved in a second version of Portrait Light, called “Balance Light,” which allows you to remove the appearance of harsh shadows or specular highlights in images after you’ve captured them.

Those are examples of things I’ve done in the past. Things I’m working on for the future, at Google Research, are in similar domains, but I can’t talk about them yet. Safe to say, they’re applying computer graphics and lighting-based ideas to improve photographs, largely after capture. 

How can we do this? Typically it’s using machine learning and other techniques – the kinds of techniques I started developing during my time at ICT, almost a decade ago. 

// 

BIO: 
Dr. Chloe LeGendre is a senior software engineer at Google Research, working on research at the intersection of machine learning, computer graphics, and photography. Previously, she was a staff research scientist at Netflix in the Production Innovation group, working on applied research into new techniques for visual effects and virtual production. In 2019, Dr. LeGendre completed her Ph.D. in Computer Science as an Annenberg Graduate Fellow at the USC Viterbi School of Engineering, and worked here at ICT (2015 to 2019), in the Vision and Graphics Lab, as a graduate research assistant supervised by Prof. Paul Debevec.