A Peep Around the Corner with Wavefront Shaping

Researchers at Caltech have found a way to “unscatter” light reflected off randomizing surfaces using wavefront shaping, creating detailed images of what’s around a corner.
The ability to take a peek around a corner before you get to it would be a mighty useful trick for spies, rescuers, and self-driving cars. It’s not such a great feat if you know in advance which corner you hope to see past: just put some kind of lens or mirror there and bounce the light around the bend. But to see around any previously unprepared corner, you’ll have to make do with what’s there, be it a wall, tree, or post.
If that sounds impossible, it’s with good reason—it is. Or was. Now researchers at Caltech have found a way to focus scattered light to create detailed images of what’s around any corner.
“The wall introduced random phases on the light’s wavefront, meaning the phase of light is completely ruined by the wall. What we did is to compensate for that kind of random phase modulation,” said Ruizhi Cao, a graduate research assistant at the university’s Biophotonic Laboratory, and lead author of the paper “High-resolution Non-Line-of-Sight Imaging Employing Active Focusing,” which appeared in May in Nature Photonics.
Cao and his colleagues were able to “unscatter” light reflected off these randomizing surfaces using a technique called wavefront shaping. The technique, which has been around for more than a decade, works by pre-adjusting the wavefront sent toward a scattering surface so that the reflected light is no longer chaotic.
“The only difference between a wall and a lens is that the lens is polished,” said Cao.
Wavefront shaping repolishes the light. But previous efforts depended on something called a guidestar: a known reference point placed on the surface that tells the system exactly how the surface scrambles light, so that the scrambling can be undone.
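The core idea can be sketched numerically. In this toy model (not the researchers’ actual optics, just an illustration with made-up numbers), the wall adds an unknown random phase to each patch of an incoming wavefront; if a guidestar measurement revealed those phases, pre-applying their conjugate would make the scrambled patches add up coherently again, like light through a polished lens:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256  # independent "pixels" of rough wall acting as one aperture

# The wall adds an unknown random phase to each pixel of the wavefront.
wall_phase = rng.uniform(0, 2 * np.pi, n)

# Unshaped light: the scrambled contributions add almost incoherently,
# so little energy reaches any one point.
unshaped = np.exp(1j * wall_phase)
unshaped_focus = abs(unshaped.sum()) ** 2 / n**2

# Wavefront shaping: pre-apply the conjugate phase (what a guidestar
# measurement would supply), so every pixel arrives in step.
shaped = np.exp(-1j * wall_phase) * np.exp(1j * wall_phase)
shaped_focus = abs(shaped.sum()) ** 2 / n**2

print(shaped_focus)  # ~1.0: the wall now behaves like a lens
print(shaped_focus / unshaped_focus)  # enhancement factor, much greater than 1
```

The enhancement grows with the number of controlled pixels, which is why wavefront shaping can recover a sharp focus even from a severely scattering surface.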

But Cao wanted to be able to turn a rough surface into a lens without having to add a guidestar, so that he could see around any unprepared corner. His idea was to use the target object itself—the thing around the corner that you want to see—as a kind of guidestar.
His system, called UNCOVER (for Unseen Non-line-of-sight Casted Optical aperture Visibility Enhanced Return), bounces a laser off the wall at a corner to hit the target. The altered light that returns tells the system what the wall’s surface is like. The reflective aperture on the wall is first kept tiny, small enough that light returning from the hidden target itself reveals the random phase modulation across that small patch. The system then synthesizes many of these small apertures into one larger aperture, sending out more light, reshaped across the whole aperture, to produce a high-resolution image of the hidden object.
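The guidestar-free feedback idea can be illustrated with a toy optimization. This sketch is not the paper’s actual algorithm; it is a generic wavefront-shaping feedback loop in the spirit Cao describes, in which a simulated point target plays the role of its own guidestar and the only signal available is the brightness of the light coming back from it:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64  # controllable segments of the wavefront sent at the wall

wall_phase = rng.uniform(0, 2 * np.pi, n)  # unknown to the algorithm

def returned_signal(correction):
    """Brightness coming back from the hidden target.

    In this toy model the target is a single point, so the return is
    maximal when all wavefront segments arrive at it in phase.
    """
    return abs(np.exp(1j * (wall_phase + correction)).sum()) ** 2

# Guidestar-free feedback: step each segment through trial phases and
# keep the one that maximizes light returning from the target itself.
correction = np.zeros(n)
trials = np.linspace(0.0, 2 * np.pi, 16, endpoint=False)
for k in range(n):
    def score(p, k=k):
        trial = correction.copy()
        trial[k] = p
        return returned_signal(trial)
    correction[k] = max(trials, key=score)

baseline = returned_signal(np.zeros(n))
focused = returned_signal(correction)
print(focused > baseline)  # the feedback loop recovers a focus
```

Because each trial set includes the segment’s current phase, the returned signal can only improve step by step, so the loop converges toward a focus without ever measuring the wall directly.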
“For wavefront shaping, if the signals for extracting the light's wavefront are from an extended uniform object—say a large plate—then it was considered impossible to focus light onto the target with linear feedback,” said Cao. “When I first saw the image, I was truly amazed. It was like I had finally proved that our theory is correct.”

But the path from this proof to a usable device is likely to be a long one. For one thing, the hidden objects that UNCOVER has uncovered have so far been two-dimensional and stationary. Adding a third dimension to what the system can see and, ultimately, the ability to watch a moving object are achievable, the researchers believe, with the support of more funding.
Once UNCOVER gets there, it could let cars know what’s approaching an intersection before they arrive, let first responders see past rubble after an earthquake, and spare soldiers some surprises when rounding a corner. But Cao has his sights set even further into the future.
“We are interested in using this in a space mission,” he said. “You don’t know the environment when you first arrive on another planet. It might be hard for a rover to drive a long way to the position it wants. Using our technique, you just shine a light, and it will tell you what you will see after the wall.”
There’s plenty of work to be done before we get to regularly see around corners on this planet or any other. “I think from the theory to real implementation, there’s still a long way to go,” said Cao. “We still have a lot of technical challenges to figure out.”
Michael Abrams is a science and technology writer in Westfield, N.J.
