System Helps Self-driving Cars See in Fog

A new MIT system builds on existing LiDAR technology to guide self-driving vehicles safely through the fog.

Autonomous vehicles are already driving themselves in test mode down American streets. But their onboard navigation systems still can’t help them maneuver safely through heavy or even light fog. Particles of light, it turns out, bounce around the water droplets before they can reach the cameras that guide the vehicles. That scattering of light poses major navigation challenges in heavy mist.

Researchers at the Massachusetts Institute of Technology are driving toward a solution for that problem. They’ve developed a system that can sense the depth and gauge the distance of hidden objects to safely navigate driverless vehicles through fog.

The researchers announced their milestone two days after the March 18 accident in which an autonomous car operated by Uber, with an emergency backup driver behind the wheel, hit a woman on a street in Tempe, Ariz. The crash happened at 10 p.m., but the weather was clear and dry. While fog is not the only challenge for autonomous vehicle navigation, it definitely presents a problem.


Guy Satat checks the images returned to his group’s system, which uses a time-of-flight camera. Image: Melanie Gonick/MIT

Part of that problem is that not all radar systems are the same. Those that guide airplanes down runways, for example, use radio waves, which have long wavelengths and low frequencies and don’t return high-enough resolution for autonomous vehicle navigation. Like wavelengths at the other extreme of the electromagnetic spectrum, such as X-rays, radio waves also do a poor job of distinguishing different types of materials. That ability is needed to differentiate something like a tree from a curb, says Guy Satat, a graduate student in the Camera Culture Group at the MIT Media Lab who led the research under group leader Ramesh Raskar.


Instead, today’s autonomous navigation systems mostly rely on light detection and ranging (LiDAR) technology, which sends out millions of infrared laser beams every second and measures how long they take to bounce back to determine the distances to objects. But LiDAR, in its present state, can’t “see through fog as if fog wasn’t there,” Satat says.
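
The distance arithmetic behind that ranging is simple: a pulse’s one-way range is half its round-trip time multiplied by the speed of light. The short sketch below illustrates only that principle, under assumed numbers; the function name and the example pulse timing are illustrative, not part of any real LiDAR driver.

```python
# Minimal sketch of time-of-flight ranging: distance = speed of light x round-trip time / 2.
# Illustrative only; the example timing is an assumption, not real sensor output.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def range_from_round_trip(round_trip_seconds: float) -> float:
    """Convert a laser pulse's measured round-trip time into a one-way distance in meters."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2.0

# A pulse that returns after roughly 200 nanoseconds came from an object about 30 m away.
print(range_from_round_trip(200e-9))  # ~29.98 m
```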

“We’re dealing with realistic fog, which is dense, dynamic, and heterogeneous,” Satat says. “It is constantly moving and changing, with patches of denser or less-dense fog.”

Satat and his team sought a method that would use the shorter, more precise near-visible light rays that humans and animals rely upon to see.

“Why can’t you see through fog?” Satat asks. “Because it refracts light rays and jumbles the information that arrives at the human eye, making it impossible to form a clear picture.”

The MIT researchers’ new system builds on existing LiDAR technology. It uses a time-of-flight camera, which fires short bursts of laser light into a scene obscured by a scattering medium, in this case the fog, which scatters the light’s photons. Onboard software then measures the time it takes those photons to return to a sensor on the camera.

The photons that traveled directly through the fog are the quickest to make it to the system because they aren’t scattered by the dense cloud-like material.

“The straight-line photons arrive first; some arrive later, but the majority will scatter hundreds and thousands of times before they reach the sensor,” Satat says. “Of course, they’ll arrive much later.”

The camera counts the photons that reach it every 56 trillionths of a second, and onboard algorithms calculate the distance light traveled to each of the sensor’s 1,024 pixels. That enables the system to handle the variations in fog density that foiled earlier systems. In other words, it can deal with circumstances in which each pixel sees a different type of fog, Satat says.
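
As a rough illustration of that per-pixel counting step, the sketch below bins photon arrival times into 56-picosecond histogram slots and treats the earliest strong peak as the direct-path return, since the unscattered photons arrive first. This is a simplified, assumed pipeline for illustration; the researchers’ actual handling of the scattered arrivals is more sophisticated and is not reproduced here.

```python
import numpy as np

# Toy per-pixel depth estimate from photon arrival times (in seconds).
# Assumptions: 56 ps histogram bins, and the earliest prominent peak is the
# direct (unscattered) return. A stand-in for illustration, not the MIT algorithm.

BIN_WIDTH_S = 56e-12            # the camera counts photons every 56 trillionths of a second
SPEED_OF_LIGHT = 299_792_458.0

def pixel_depth(arrival_times_s: np.ndarray, num_bins: int = 2000) -> float:
    """Estimate the depth (meters) seen by one pixel from its arrival-time histogram."""
    counts, edges = np.histogram(arrival_times_s, bins=num_bins,
                                 range=(0.0, num_bins * BIN_WIDTH_S))
    threshold = counts.max() * 0.5
    first_peak_bin = int(np.argmax(counts >= threshold))   # earliest bin that clears the threshold
    round_trip = edges[first_peak_bin] + BIN_WIDTH_S / 2.0
    return SPEED_OF_LIGHT * round_trip / 2.0

# Example: a direct return from ~10 m away plus a haze of late, scattered photons.
rng = np.random.default_rng(0)
direct = np.full(200, 2 * 10.0 / SPEED_OF_LIGHT)            # round trip for a 10 m target
scattered = rng.uniform(70e-9, 110e-9, size=5000)           # photons delayed by the fog
print(pixel_depth(np.concatenate([direct, scattered])))     # close to 10.0
```

Each of the sensor’s 1,024 pixels would run this kind of estimate on its own histogram, which is what lets the system tolerate a different patch of fog in front of each pixel.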

By doing so, the system creates a 3D image of the objects hidden among or behind the material that scatters the light.
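
To connect those per-pixel distances to a 3D picture, the sketch below back-projects a 32-by-32 depth map (1,024 pixels, matching the sensor size mentioned above) into 3D points through a simple pinhole-camera model. The focal length and the flat-wall depth values are placeholder assumptions for illustration.

```python
import numpy as np

# Back-project a 32x32 depth map into 3D points using a pinhole-camera model.
# The focal length (in pixels) and the depth values are illustrative assumptions.

def depth_map_to_points(depth_m: np.ndarray, focal_px: float = 40.0) -> np.ndarray:
    """Return an (N, 3) array of XYZ points from an (H, W) depth map in meters."""
    h, w = depth_m.shape
    v, u = np.mgrid[0:h, 0:w]                       # pixel row and column indices
    x = (u - w / 2.0) * depth_m / focal_px
    y = (v - h / 2.0) * depth_m / focal_px
    return np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)

points = depth_map_to_points(np.full((32, 32), 10.0))  # a flat surface 10 m away
print(points.shape)                                     # (1024, 3)
```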

“We don’t need any prior knowledge about the fog and its density, which helps it to work in a wide range of fog conditions,” Satat says.

The MIT lab has also used its visible-light-range camera to see objects through other scattering materials, such as human skin. That application could eventually be used as an X-ray alternative, he says.

Driving in bad weather conditions is one of the remaining hurdles for autonomous driving technology. This new technology can address that by making autonomous vehicles “super drivers” through the fog, Satat says.

“Self-driving vehicles require super vision,” he says. “We want them to be driven better and safer than us, but they should also be able to drive in conditions where we’re not able to drive, like fog, rain, or snow.”

Jean Thilmany is an independent writer.
