Making Robots That Think, Part 1

A new 'deep learning' technique enables robots to learn motor tasks through trial and error.

Microsoft co-founder Paul Allen made headlines last month when he announced plans to invest $125 million from his nonprofit foundation in what he calls “Project Alexandria,” a multi-year effort to bring fundamental human knowledge to robotic and artificial intelligence (AI) systems.

In short, Allen wants to create machines with common sense.

“To make real progress in AI, we have to overcome the big challenges in the area of common sense,” he told The New York Times.

UC Berkeley Robot Learning Lab (from left to right): Chelsea Finn, Pieter Abbeel, Trevor Darrell, and Sergey Levine. Image: UC Berkeley

It was a splashy announcement for a capability that researchers have been working on quietly for some time. Robotics has come a long way since the turn of the century, with hardware and software now available that enable machines to complete a variety of complex tasks, such as assembling products on an assembly line, performing delicate medical work, and operating underwater, in outer space, and in other inhospitable environments. But limitations remain. Robots excel at repetitive, assignable tasks such as tightening a screw over and over, but they don’t yet work well in situations where they are forced to work alongside others or to think and plan actions for themselves.

Allen’s research aims to address this shortcoming by developing machines that can perform more of the same mental exercises that humans can, and by using that newfound knowledge to build smarter, more adaptable robots.

That is just part of the solution. Robotics engineers are also working on systems that help robots think beyond the tasks they pursue on a day-to-day basis – the work they have been programmed to do – and instead develop the forethought they need to learn and adapt to new challenges, effectively picking up new skills on the fly, independent of what humans teach them.

This functionality is the basis of work being done by Dr. Sergey Levine, an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. His research focuses on the intersection between control and machine learning, developing, as he says, “algorithms and techniques that can endow machines with the ability to autonomously acquire the skills for executing complex tasks.”

Levine’s most recent work in this area is focused on the concept of “visual foresight,” enabling machines to visualize the outcomes of their future actions so that they can figure out on their own what to do in situations they have never experienced before. This is accomplished by using the robot’s camera images to predict how candidate movements would play out, and then letting its software turn those predicted visual outcomes into the movements it actually performs.
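
To make the idea concrete, here is a minimal, illustrative sketch of how a visual-foresight planning loop can work in principle: sample candidate action sequences, predict the camera images each would produce with a learned video-prediction model, and act on the sequence whose imagined outcome looks most like the goal. This is not Levine’s code; the function names, the dummy prediction model, and the image sizes are all assumptions made for illustration.

```python
import numpy as np

HORIZON = 5           # how many steps ahead the robot "imagines"
NUM_CANDIDATES = 100  # random action sequences to evaluate
ACTION_DIM = 2        # e.g., planar pushing motions

def predict_frames(frame, actions):
    """Stand-in for a learned video-prediction model: given the current
    camera frame and a sequence of actions, return the predicted frames.
    Here it is a dummy that only smooths the image, for illustration."""
    predictions = []
    f = frame.copy()
    for _ in actions:
        f = 0.9 * f + 0.1 * f.mean()   # placeholder "dynamics"
        predictions.append(f)
    return predictions

def plan_one_step(current_frame, goal_image):
    """Random-shooting model-predictive control: score each candidate
    action sequence by how close its final predicted frame is to the
    goal image, then return the first action of the best sequence."""
    best_score, best_action = np.inf, None
    for _ in range(NUM_CANDIDATES):
        actions = np.random.uniform(-1, 1, size=(HORIZON, ACTION_DIM))
        predicted = predict_frames(current_frame, actions)
        score = np.mean((predicted[-1] - goal_image) ** 2)
        if score < best_score:
            best_score, best_action = score, actions[0]
    return best_action

# Example: plan one action toward a goal image from a starting frame.
frame = np.random.rand(64, 64)
goal = np.random.rand(64, 64)
print("chosen action:", plan_one_step(frame, goal))
```

In a real system, the dummy predict_frames above would be replaced by a deep network trained on the robot’s own camera data, which is where the “deep learning” in the headline comes in.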

“What we were thinking about were some of the differences between how robots manipulate objects in their environment and how people do it,” Dr. Levine says. “A lot of the standard approaches to robotic manipulation involve, essentially, modeling the world, planning through that model, and then executing that plan using whatever control we happened to have.”

Part 2 looks at the different applications these advances in robotic AI can target.

Tim Sprinkle is an independent writer.
