Metalenses – a New Direction in Computer Graphics

Published: February 6, 2025
Category: Essays | News
Image: a metalens modeled in Ansys Lumerical

By Pratusha B. Prasad, Research Programmer, Vision and Graphics Lab; PhD student, Computer Vision and Graphics

Pratusha B. Prasad is a Research Programmer and PhD student in Computer Vision and Graphics, working in the Vision and Graphics Lab at ICT. Her research combines traditional techniques with newer volumetric methods to understand how light interacts with real scenes, tracing it from image formation through rendering, in order to produce photo-real virtual characters (also known as digital humans) and scenes. She also holds visual effects credits on films including BLADE RUNNER 2049.

The world of optics is on the cusp of a paradigm shift. For centuries, lenses—whether in cameras, microscopes, or telescopes—have relied on traditional designs: curved glass that refracts light to focus an image. These lenses have served us well but come with inherent limitations in size, weight, and design flexibility. Enter metalenses: flat, ultrathin optical devices that leverage the power of nanoscale engineering to manipulate light through diffraction with ever-greater accuracy and precision.

As a researcher in computer vision and graphics at VGL, I’ve long been fascinated by the interplay of light and material, and its role in creating photorealistic digital renderings. My work involves capturing how light interacts with surfaces—its scattering, reflection, and absorption—and using this information to recreate digital representations of objects and scenes with striking realism. Metalenses, with their ability to revolutionize light manipulation, are an elegant solution to one of the most persistent challenges in optics: achieving high performance in increasingly compact systems.

Unlike conventional lenses, which rely on bulky, curved surfaces to bend light, metalenses employ an array of nanoscale structures to achieve the same effect. Their compact form factor makes them ideal for applications demanding miniaturization, such as smartphones, biomedical imaging, and the next generation of augmented and virtual reality (AR/VR) headsets.

In my recent research, I’ve focused on a particularly exciting application: using polarization-sensitive metalenses to separate the diffuse and specular components of light reflected from an object. This capability has profound implications for understanding an object’s material properties, enabling real-time capture for digital assets and enhancing the realism of virtual environments. Moreover, these advancements could significantly streamline optical systems, offering a pathway to more compact and wearable AR/VR devices.

Rethinking Optics with Metalenses

Metalenses are carefully engineered arrangements of nanoscale pillars, often called meta-atoms. By manipulating how light interacts with these nanostructures, metalenses achieve precise control over light’s phase, amplitude, and polarization—capabilities that go far beyond what traditional lenses can offer.

This ability to fine-tune light’s behavior opens up a new frontier in optical design. A single metalens can be designed to replace multiple bulky lenses in imaging systems, resulting in thinner, lighter, and more cost-effective setups. For fields like augmented reality, where miniaturization is key, or biomedical imaging, where precision and compactness are critical, the implications are profound.

In this work, I’ve explored how polarization-sensitive metalenses can transform the task of capturing the bidirectional reflectance distribution function (BRDF) of an object. The BRDF describes how light reflects off a surface based on its material properties and the angle of illumination, and it is a cornerstone of photorealistic rendering in computer vision and graphics. Rendering algorithms greatly benefit from separating light reflected diffusely—scattered primarily by rough surfaces—from light reflected specularly, as it would be from a smooth, reflective material. Traditional methods to achieve this separation rely on complex, bulky systems involving active lighting and polarization rigs—methods that are impractical for portable AR/VR applications.
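
For context, the BRDF is the ratio of reflected radiance to incident irradiance, and the separation described above amounts to splitting it into a diffuse term and a specular term (a standard decomposition, written here for illustration rather than drawn from this specific design):

```latex
f_r(\omega_i, \omega_o) \;=\; \frac{\mathrm{d}L_o(\omega_o)}{L_i(\omega_i)\,\cos\theta_i\,\mathrm{d}\omega_i},
\qquad
f_r \;=\; f_{\mathrm{diffuse}} + f_{\mathrm{specular}}
```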

Metalenses provide a revolutionary alternative. By leveraging nanoscale optics, we’ve designed a bifocal polarization-sensitive metalens that separates the diffuse and specular components of light in a single, compact device. This approach doesn’t just simplify the process; it redefines it, paving the way for lightweight, passive imaging systems that could be integrated into next-generation AR/VR devices.
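
To make the separation arithmetic concrete, here is a minimal sketch under a common simplifying assumption (not necessarily the exact estimator used in this work): specular reflection preserves the incident TE polarization, while diffusely scattered light is depolarized and splits evenly between the TE and TM focal spots that the metalens forms on the sensor.

```python
import numpy as np

def separate(i_te, i_tm):
    """Estimate diffuse and specular images from the co-polarized (TE) and
    cross-polarized (TM) channels captured behind the bifocal metalens."""
    diffuse = 2.0 * i_tm                         # depolarized light lands equally in both channels
    specular = np.clip(i_te - i_tm, 0.0, None)   # excess co-polarized light
    return diffuse, specular

# Tiny synthetic example: a bright specular highlight on a diffuse patch.
i_te = np.array([[0.5, 0.9], [0.5, 0.5]])
i_tm = np.array([[0.5, 0.1], [0.5, 0.5]])
diffuse, specular = separate(i_te, i_tm)
```

Clipping the specular estimate at zero simply guards against noise where the cross-polarized channel slightly exceeds the co-polarized one.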

Designing a Polarization-Sensitive Metalens

The concept of a polarization-sensitive metalens begins with the shape of the nanoscale structures at its heart. These structures, dielectric nanopillars, are engineered to shift light’s phase differently depending on its polarization. By arranging these nanopillars in a precise pattern, we can create a metalens that focuses light of different polarizations—transverse electric (TE) and transverse magnetic (TM)—onto separate focal points on the same focal plane.
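
As a rough illustration of what such a design target looks like, the sketch below computes hyperbolic lens phase profiles that steer TE and TM light toward two laterally offset foci on a shared focal plane. Every numerical parameter here (wavelength, focal length, lattice pitch, focus separation) is a placeholder rather than a value from the actual design.

```python
import numpy as np

# Placeholder design parameters -- illustrative only, not the real device.
wavelength = 940e-9     # operating wavelength (m)
focal_length = 200e-6   # focal length (m)
separation = 20e-6      # lateral offset between the TE and TM foci (m)
pitch = 400e-9          # nanopillar lattice period (m)
n_cells = 501           # unit cells per side

coords = (np.arange(n_cells) - n_cells // 2) * pitch
x, y = np.meshgrid(coords, coords)

def lens_phase(x, y, x_focus, f, lam):
    """Hyperbolic phase profile that focuses a normally incident plane wave
    to the point (x_focus, 0, f)."""
    return -(2 * np.pi / lam) * (np.sqrt((x - x_focus) ** 2 + y ** 2 + f ** 2) - f)

# TE light is steered to +separation/2, TM light to -separation/2.
phase_te = lens_phase(x, y, +separation / 2, focal_length, wavelength)
phase_tm = lens_phase(x, y, -separation / 2, focal_length, wavelength)

# Wrap to (-pi, pi]: each unit cell only needs the phase modulo 2*pi.
phase_te = np.angle(np.exp(1j * phase_te))
phase_tm = np.angle(np.exp(1j * phase_tm))
```

A physical device discretizes these continuous profiles onto the nanopillar lattice, which is where the RCWA library described next comes in.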

The design process begins by simulating how light interacts with these nanopillars using Rigorous Coupled-Wave Analysis (RCWA). This computational technique allows us to calculate the phase shifts and transmission efficiencies for different nanopillar geometries, creating a library of possible configurations. By selecting nanopillars that produce the required phase profile, we can then construct a metalens that achieves the desired polarization separation.
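
Conceptually, the selection step is a nearest-neighbor lookup in that library: for each lattice site, pick the nanopillar whose simulated TE and TM phases best match the two target phases at that site. The sketch below uses random placeholder numbers in place of real RCWA output.

```python
import numpy as np

# Placeholder library: random values standing in for RCWA-computed phases.
# A real library would tabulate phase and transmission over a sweep of
# rectangular-pillar widths (w_x, w_y).
rng = np.random.default_rng(0)
n_entries = 2000
library_phi_te = rng.uniform(-np.pi, np.pi, n_entries)       # TE phase per pillar
library_phi_tm = rng.uniform(-np.pi, np.pi, n_entries)       # TM phase per pillar
library_widths = rng.uniform(80e-9, 300e-9, (n_entries, 2))  # (w_x, w_y) per pillar

def wrapped_error(target, value):
    """Smallest angular distance between two phases, in radians."""
    return np.abs(np.angle(np.exp(1j * (target - value))))

def pick_pillar(target_te, target_tm):
    """Index of the library entry that best matches both target phases."""
    cost = (wrapped_error(target_te, library_phi_te) ** 2
            + wrapped_error(target_tm, library_phi_tm) ** 2)
    return int(np.argmin(cost))

# Example: pillar dimensions for one unit cell with given target phases.
idx = pick_pillar(target_te=1.2, target_tm=-0.7)
chosen_widths = library_widths[idx]
```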

Once the nanopillar library is complete, the full metalens is optimized using a least-squares minimization approach to ensure the phase profile matches the target design. The result is a metalens capable of focusing TE and TM polarizations onto distinct points separated by a precise distance. This bifocal polarization capability is validated through Finite-Difference Time-Domain (FDTD) simulations, which confirm that the metalens performs as intended under various polarization conditions.
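
As a minimal sketch of the least-squares idea for a single unit cell, the snippet below refines the pillar widths so the cell’s TE and TM phases match their targets, assuming a smooth placeholder model of how widths map to phases (in practice that mapping would come from interpolating the RCWA library).

```python
import numpy as np
from scipy.optimize import least_squares

def phase_model(widths):
    """Placeholder mapping from (w_x, w_y) in meters to (phi_TE, phi_TM) in
    radians; a real design would interpolate RCWA results instead."""
    w_x, w_y = widths
    phi_te = 2 * np.pi * (w_x - 80e-9) / 220e-9 - np.pi
    phi_tm = 2 * np.pi * (w_y - 80e-9) / 220e-9 - np.pi
    return np.array([phi_te, phi_tm])

def residuals(widths, target):
    """Wrapped phase mismatch between the model and the target phases."""
    return np.angle(np.exp(1j * (phase_model(widths) - target)))

target = np.array([1.2, -0.7])    # target (phi_TE, phi_TM) for this cell
w0 = np.array([190e-9, 190e-9])   # initial guess, e.g. the library pick
result = least_squares(residuals, w0, args=(target,),
                       bounds=([80e-9, 80e-9], [300e-9, 300e-9]))
optimized_widths = result.x
```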

Verifying with Realistic Surfaces

To test the effectiveness of our polarization-sensitive metalens, we used FDTD to simulate its performance with two types of surfaces: a smooth, reflective object and a rough, scattering object. By illuminating these surfaces with polarized light and analyzing the resulting far-field intensity profiles, we can support the claim that the metalens successfully separates the diffuse and specular components of the reflected light.
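
One simple way to carry out this kind of analysis, assuming access to the complex field just past the metalens, is a Fraunhofer (FFT) far-field projection followed by peak detection. The field below is a synthetic stand-in for exported FDTD data; a production workflow would more likely rely on the solver’s own near-to-far-field projection.

```python
import numpy as np

wavelength = 940e-9   # placeholder wavelength (m)
pitch = 400e-9        # placeholder sampling period (m)
n = 512

x = (np.arange(n) - n // 2) * pitch
# Synthetic near field: a Gaussian beam with a linear phase tilt,
# standing in for a field slice exported from an FDTD simulation.
near_field = np.exp(-(x / 40e-6) ** 2) * np.exp(1j * 2 * np.pi * x / 10e-6)

# Fraunhofer approximation: the far field is the Fourier transform of the
# near field; spatial frequency maps to angle via sin(theta) = wavelength * f_x.
far_field = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(near_field)))
intensity = np.abs(far_field) ** 2
sin_theta = np.clip(np.fft.fftshift(np.fft.fftfreq(n, d=pitch)) * wavelength, -1, 1)
angles = np.arcsin(sin_theta)

# A single dominant peak indicates specular-like behavior in one channel;
# two peaks of comparable strength indicate depolarized (diffuse) light
# split between the TE and TM foci.
peak_angle_deg = np.degrees(angles[np.argmax(intensity)])
```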

For the smooth surface, the metalens focused light into a single peak corresponding to the input polarization, demonstrating its specular nature. For the rough surface, the incident polarized light scattered into two peaks, allowing us to isolate the diffuse component. These results validate the potential of metalenses for efficient BRDF acquisition, a critical step in creating photorealistic digital assets.

The Future of Metalenses

This work explores the integration of metalenses with image-based sensing of material properties. By demonstrating their ability to separate light’s polarization components in a compact form factor, we’ve opened the door to a wide range of applications, from material characterization to portable AR/VR devices.

Moving forward, the next challenge lies in scaling and verifying these designs with physical fabrication and integrating them into imaging systems with real-world sensors. Extending this technology to handle multiple polarization states simultaneously could further enhance its capabilities, enabling richer data capture and more accurate material analysis.

The implications of metalenses extend far beyond virtual and augmented reality. Their potential to miniaturize and improve optical systems positions them as a transformative technology for fields ranging from healthcare to autonomous vehicles. As we continue to push the boundaries of what’s possible with nanoscale optics, the future of imaging—and our interaction with the digital and physical worlds—is set to become more immersive, efficient, and groundbreaking than ever before.

Acknowledgements

This work was conducted within the ICT Vision and Graphics Lab, under the leadership of Dr. Yajie Zhao. I would like to thank my co-authors, Omid Hemmatyar and Caoyi Zou, as well as our sponsors at the Army Research Office of the US Government. I am also grateful for the support of Professor Michelle Povinelli, Kat Haase, and Christina Trejo.
