Artificial Intelligence Camera Improves Sight in Autonomous Vehicles
Oct 30, 2018
by Agam Shah, Associate Editor at Mechanical Engineering magazine
The cameras used in today’s autonomous vehicles work in tandem with computers to recognize objects and make smart driving decisions. But those computers are under growing strain: they lag behind advances in camera vision, rising image resolution, and increasingly sophisticated artificial intelligence algorithms.
Researchers at Stanford University believe their new AI camera could be a big leap forward in helping cars recognize and classify images faster. The key to the camera is an optical computer that processes and sorts images as light enters it, as reported in a paper recently published in Scientific Reports.
“For autonomous cars, that means faster image recognition and faster decision making,” said Julie Chang, a Ph.D. student at Stanford and one of the researchers. The opto-electronic system in the AI camera forms a two-layer neural network that cuts out the extra steps conventional computers need to pre-process, sort, and classify images.
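The two-layer idea can be sketched in code. In this hypothetical illustration (the layer sizes, kernels, and function names are invented for the example, not taken from the Stanford paper), the first layer is a fixed convolution that stands in for the optical element, which computes it essentially for free as light passes through, and only the small second layer needs to run on a conventional processor:

```python
import numpy as np

rng = np.random.default_rng(0)

def optical_layer(image, kernels):
    """Fixed convolutional layer standing in for the optical element.
    Applies each kernel as a valid 2-D convolution, then rectifies,
    since a sensor records non-negative light intensities."""
    h, w = image.shape
    kh, kw = kernels.shape[1:]
    maps = []
    for k in kernels:
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
        maps.append(np.maximum(out, 0.0))  # intensities are non-negative
    return np.stack(maps)

def electronic_layer(features, weights, bias):
    """Small trainable classifier: the only part needing a digital chip."""
    x = features.reshape(-1)
    return x @ weights + bias

image = rng.random((8, 8))                    # toy 8x8 input frame
kernels = rng.standard_normal((4, 3, 3))      # 4 fixed "optical" filters
features = optical_layer(image, kernels)      # shape (4, 6, 6)
weights = rng.standard_normal((features.size, 3))
scores = electronic_layer(features, weights, np.zeros(3))
print(scores.shape)                           # one score per class
```

Because the first layer's weights are fixed in the optics, the electronic side only has to evaluate the final, much smaller classifier.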
Optical technology has long been used to record data on CD-ROMs and today carries high-speed traffic across telecommunications networks. Stanford’s AI camera goes further, using photons for computation much as electrons drive conventional computing. Optical computing has shown promise as a faster form of computing but has yet to fully mature.
The AI camera has a customized optical transistor that analyzes optical data from the camera (light rays, in this case) in real time and filters the images based on patterns. The optical computation combines algorithms with pixel-level pattern association to classify and label pixels.
The AI camera identifies basic objects like signs, animals, cars, and planes much faster and with less power than conventional computers, Chang said. Conventional systems require cameras to transfer all image data to power-hungry processing units like GPUs, which must parse every pixel to identify objects.
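A back-of-envelope calculation shows why shipping every pixel to a GPU is costly. The numbers below are assumed for illustration (a single 1080p RGB camera at 30 frames per second), not figures from the article:

```python
# Rough data-rate estimate for one camera feeding a central GPU.
width, height, channels = 1920, 1080, 3   # assumed 1080p RGB frame
fps = 30                                  # assumed frame rate

bytes_per_frame = width * height * channels
bytes_per_second = bytes_per_frame * fps

print(bytes_per_frame)    # 6,220,800 bytes per frame
print(bytes_per_second)   # 186,624,000 bytes (~187 MB) per second
```

An autonomous vehicle carries several such cameras, so any preprocessing that happens in the optics before digitization directly reduces this stream.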
Fewer processing cycles and light-speed preprocessing power make the AI camera extremely effective, Chang said.
But the camera’s optical computer is still limited in scope and doesn't have the computing power to help the camera “learn.” Its pattern-based processing capabilities are also limited.
The AI camera requires a conventional computer for the heavy lifting, such as learning or providing context to a full scene that may contain multiple objects. Conventional computers are also needed to create rich visual data sets for drones, autonomous cars, and robots. The system works with popular deep learning platforms like Google’s TensorFlow and Facebook’s Caffe2.
The AI camera can be classified as an effective “edge” device, a category becoming important in the Internet of Things. Edge devices like the AI camera discard irrelevant data and help identify objects more quickly, reducing the overall computing load. Smart meters and other sensor devices are similarly gaining decision-making capabilities that reduce the strain on central computers doing high-level analytics.
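The edge-filtering pattern described above can be sketched simply. In this invented example (the function name, threshold, and frames are assumptions, not from the article), a device applies a cheap local test and forwards only frames that appear to contain something of interest, rather than streaming every frame to a central computer:

```python
import numpy as np

def looks_interesting(frame, threshold=0.01):
    """Cheap on-device test: flag frames whose pixel variance suggests
    structure (a possible object) rather than uniform background."""
    return frame.var() > threshold

rng = np.random.default_rng(1)
frames = [
    np.full((8, 8), 0.5),   # flat background: variance 0, dropped
    rng.random((8, 8)),     # textured frame: forwarded for analysis
]

# Only frames passing the local test are sent to the central computer.
forwarded = [f for f in frames if looks_interesting(f)]
print(len(forwarded))       # 1
```

The central GPU then runs its expensive models only on the forwarded frames, which is the load reduction the article attributes to edge devices.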
One major challenge the researchers faced was getting the AI camera to identify images in dark conditions. The team overcame that by refining the neural network design and adjusting the mathematical models, Chang said.
Still in its early development stages, the camera is far from commercialization. But the researchers believe they have created a better solution for a critical application.