Making Robots That Think, Part 2

A new 'deep learning' technique enables robots to learn motor tasks through trial and error.

Part One of “Making Robots That Think” explored the concept of enabling robots to think beyond the immediate task they are working on. Here we’ll look at the different applications those advancements can target.

The modeling aspect of robotic manipulation actually tends to be pretty onerous, especially for robots that need to act intelligently in new or changing settings. It’s probably not worth the time and effort to model an entirely new environment around a factory robot that’s programmed to repeat the same process over and over again. It can just stay put and do its work. But if you have a robot in a home or office setting, where new things can pop up all the time, then modeling the world in enough detail for the machine to handle its own planning and control is more worthwhile, despite the challenges in actually doing it.

“People don't seem to have a problem making these adaptations, but also it seems like people aren't all that great at predicting physical events perfectly,” says Prof. Sergey Levine of the University of California, Berkeley. “But we have an intuitive notion of physics, so we can rightly figure out that if some object is top-heavy, and it's tipping over, then it's going to fall. And things like that. So we thought, well, maybe we could actually set up a system where the robot can learn a kind of intuitive notion of physics, which may not be exactly the right equations of motion, but good enough for it to act intelligently and to actually generalize to its settings.”

His method basically has the robot play with objects on a table, observing what happens when it moves each of them in different ways. Does it tip over? Does it lie flat? Can it be stacked? The machine then uses that experience to build a predictive model to guide its interactions with those objects in the future, effectively learning from the experience of moving objects around and using that information to make its own decisions later on. This is possible without requiring things like 3D models or extensive mapping preparation by humans; the robots learn based solely on their own observations.
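The loop described above, play with random actions, record the outcomes, fit a predictive model, then use that model to choose actions, can be sketched in miniature. Everything below is a toy illustration, not the team's actual system: the 2D "tabletop" dynamics, the linear least-squares model, and the random-shooting planner are all simplifying assumptions standing in for learned video prediction.

```python
import numpy as np

# Toy tabletop: pushing an object moves it by the push vector plus a
# small unknown drift. The "robot" never sees these equations directly.
rng = np.random.default_rng(0)
DRIFT = np.array([0.05, -0.02])

def true_step(pos, push):
    return pos + push + DRIFT

# 1. Self-supervised play: try random pushes and record what happened.
X, Y = [], []
for _ in range(500):
    pos = rng.uniform(-1, 1, size=2)
    push = rng.uniform(-0.2, 0.2, size=2)
    X.append(np.concatenate([pos, push]))
    Y.append(true_step(pos, push))
X, Y = np.array(X), np.array(Y)

# 2. Fit a predictive model next_pos ~ W @ [pos, push, 1] by least squares.
A = np.hstack([X, np.ones((len(X), 1))])
W, *_ = np.linalg.lstsq(A, Y, rcond=None)

def predict(pos, push):
    return np.concatenate([pos, push, [1.0]]) @ W

# 3. Plan: sample candidate pushes, keep the one whose *predicted*
#    outcome lands closest to the goal (random-shooting planning).
def plan_push(pos, goal, n_candidates=200):
    pushes = rng.uniform(-0.2, 0.2, size=(n_candidates, 2))
    preds = np.array([predict(pos, p) for p in pushes])
    return pushes[np.argmin(np.linalg.norm(preds - goal, axis=1))]

pos, goal = np.array([0.0, 0.0]), np.array([0.5, 0.5])
for _ in range(5):
    pos = true_step(pos, plan_push(pos, goal))
print(np.linalg.norm(pos - goal))  # distance to the goal after planning
```

The key idea survives the simplification: the model need not match the true physics exactly, it only has to predict outcomes well enough for the planner to pick useful actions.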

“In general, the ability to predict a little bit into the future I think can be tremendously useful in a wide range of areas.”
Prof. Sergey Levine, University of California, Berkeley

BRETT the Robot learns to put a cap on a bottle. Image: UC Berkeley

Broad Applications

Although the technology itself is still in its very early stages (the robots in the lab can only make predictions a few seconds into the future, based on just a handful of movements), the potential applications for this work could be significant.

“In general, the ability to predict a little bit into the future I think can be tremendously useful in a wide range of areas,” Dr. Levine says. “One of the things we've considered, for instance, is how this could be useful in an autonomous driving scenario. If you have your autonomous car and it's driving on the road and it can predict, just a few seconds into the future, how other cars on the road will behave, that can be hugely useful for avoiding accidents and things like that.”

The team has also looked at potential applications for drones, allowing unmanned aerial vehicles (UAVs) to better plan their own flight paths; other types of robotic manipulation, reducing the need for human oversight in many current robot applications; and more.

It may not be “common sense” yet, but visual foresight is just one more skill that humans overlook and that robots currently lack. By closing this gap, Levine and his team hope to develop smarter machines and expand the capabilities of today’s robotics.

“I'm very excited about this general area,” he says, “because I feel like it has the right ingredients to allow us to create machines that continuously improve through their own experience, which is a very powerful thing. Because, if you imagine a robot... maybe there's a company that builds thousands of them and they are deployed all over the world. Maybe they’re in people’s homes, or in offices, or hospitals. That's a lot of experience that those machines will be collecting, and if all of that experience can be used to improve the system [collectively], they can improve very rapidly and reach a very high degree of proficiency.”

Here’s Part 1, if you happened to miss it.

Tim Sprinkle is an independent writer.
