Autonomous Vehicles with Depth Perception, Part 2

In Part 1, researchers developed a 4D camera designed to give self-driving cars the panoramic vision they need for smooth, safe driving. Here, they take on an even more challenging problem: depth perception.

To add depth perception and refocusing capabilities, the researchers used a technology called light field photography, previously developed at Stanford. The technique captures the two-axis direction of the light hitting the lens and combines that angular information with the 2D image, giving the image depth.
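One way to see what those two extra angular dimensions buy is synthetic refocusing by shift-and-add: shift each directional view in proportion to its offset from the aperture center, then average, so scene points at the chosen depth line up. Below is a minimal sketch of that idea in Python/NumPy, not the researchers' code; the (U, V, S, T) array layout and the refocus and slope names are assumptions made for this illustration.

```python
import numpy as np

def refocus(lf: np.ndarray, slope: float) -> np.ndarray:
    """Synthetic refocusing of a 4D light field by shift-and-add.

    lf    -- array of shape (U, V, S, T): a U x V grid of sub-aperture
             views, each an S x T grayscale image (hypothetical layout)
    slope -- pixels of shift per unit of aperture offset; each value
             selects a different focal plane in the scene
    """
    U, V, S, T = lf.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each view according to its offset from the aperture
            # center; points at the chosen depth align and come into focus.
            du = int(round(slope * (u - U // 2)))
            dv = int(round(slope * (v - V // 2)))
            out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

Sweeping slope over a range of values replays the same capture focused at different depths, which is the refocus-on-replay capability described below.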

138-degree light field panoramas (top) and a depth estimate of the second panorama (bottom). Image: Stanford Computational Imaging Lab/UCSD Photonic Systems Integration Laboratory

“One of the things you realize when you work with an omnidirectional camera is that it’s impossible to focus in every direction at once. Something is always close to the camera, while other things are far away,” says Joseph Ford, an electrical engineering professor at the University of California, San Diego. “Light field imaging allows the captured video to be refocused during replay, as well as single-aperture depth mapping of the scene. These capabilities open up all kinds of applications in VR and robotics.”

Inside the camera is a set of lenslet arrays, each containing hundreds of thousands of micro-scale lenses. Together, they transform the main spherical lens into a light-field camera, says Donald Dansereau, a Stanford University postdoctoral fellow in electrical engineering.
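In an idealized plenoptic layout, each lenslet covers a small block of sensor pixels, and taking the same pixel offset under every lenslet yields one view of the scene as seen through one part of the main lens. The sketch below (Python/NumPy) illustrates only that rearrangement; real lenslet images require calibration, rotation correction, and resampling that this toy version ignores, and the function name and square, grid-aligned lenslet assumption are mine, not the team's.

```python
import numpy as np

def decode_lenslet_image(raw: np.ndarray, n: int) -> np.ndarray:
    """Rearrange a raw plenoptic sensor image into sub-aperture views.

    raw -- 2D sensor image in which each lenslet covers an n x n block
           of pixels (idealized: square lenslets aligned to the grid)
    n   -- pixels per lenslet along each axis

    Returns an (n, n, S, T) light field: the same pixel offset (u, v)
    under every lenslet forms one complete view of the scene.
    """
    H, W = raw.shape
    S, T = H // n, W // n
    lf = np.empty((n, n, S, T), dtype=raw.dtype)
    for u in range(n):
        for v in range(n):
            # Stride across the sensor, picking offset (u, v) from
            # each lenslet block to assemble one sub-aperture view.
            lf[u, v] = raw[u::n, v::n][:S, :T]
    return lf
```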

The 4D camera is also well suited to improving close-up images, he adds.

“Something like a compound eye, this kind of camera understands light in terms of direction—like a normal camera—but also position, where the rays are in space,” he says. “With many different perspectives on the 3D world, this camera inherently captures 3D shape and higher-order effects like transparent and reflective surfaces.”

These capabilities could help robots navigate landscapes semi-obscured by rain, snow, or fog, and move through crowded areas. Needless to say, that’s a boon to a self-driving vehicle maneuvering around pedestrians and other cars on a snowy day, says Stanford’s Gordon Wetzstein.

“This could enable various types of artificially intelligent technology to understand how far away objects are, whether they’re moving and what they’re made of,” he adds.
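One standard way a light field yields distance, consistent with the capability described above though not necessarily the researchers' own method, is variance minimization: shear the views for each candidate depth and find, per pixel, the depth at which the views agree best. A minimal sketch, assuming the same hypothetical (U, V, S, T) layout as the earlier examples:

```python
import numpy as np

def depth_from_variance(lf: np.ndarray, slopes) -> np.ndarray:
    """Coarse per-pixel depth from a 4D light field of shape (U, V, S, T).

    For each candidate slope (one per depth hypothesis), shift the views
    so that plane comes into alignment, then measure how much the views
    disagree at each pixel; the lowest variance marks the best depth.
    """
    U, V, S, T = lf.shape
    best_var = np.full((S, T), np.inf)
    best_idx = np.zeros((S, T), dtype=int)
    for k, slope in enumerate(slopes):
        stack = np.empty((U * V, S, T))
        for u in range(U):
            for v in range(V):
                du = int(round(slope * (u - U // 2)))
                dv = int(round(slope * (v - V // 2)))
                stack[u * V + v] = np.roll(lf[u, v], (du, dv), (0, 1))
        var = stack.var(axis=0)
        mask = var < best_var
        best_var[mask] = var[mask]
        best_idx[mask] = k
    return best_idx  # index into slopes; calibration maps it to metric depth
```

Mapping the winning slope index to metric distance would require camera calibration, which is omitted here.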

The camera is now at the proof-of-concept stage, and the researchers plan to build two compact prototypes to test on a robot, Dansereau says.

“One of these is for panoramic capture, much like the first prototype,” he adds. “The second camera has disconnected fields of view and is designed as a low-cost method to track a robot’s motion in a broad range of conditions.”

Future autonomous-vehicle passengers can thank the researchers in advance.

Jean Thilmany is an independent writer.

