Robotics: The Software Stage Is Here

It’s 2017, and robots are still pretty dumb.

Robotic hardware has more or less arrived, and machines are hard at work across industries from manufacturing to health care. But the truth is, today’s robots are not yet the stuff of science fiction dreams. They are capable only of performing rote, monotonous tasks, aren’t good at adapting, and still struggle with jobs requiring human interaction.

In order for robots to reach their full potential, then, it’s time for the software that controls them to catch up with the capabilities of today’s hardware. Researchers worldwide are working on this challenge right now, leveraging everything from artificial intelligence to machine learning to Big Data in order to better train robots and more seamlessly integrate them into daily life.

“It really does feel like robotics is exciting again,” says Chris Roberts, head of industrial robotics at product development and design firm Cambridge Consultants. “Since the seventies, there has been this general steady progression of robots getting bigger and more precise and more powerful and more expensive. This hasn't really been a revolution in technology, but lots of individual things getting a bit better. Processors getting a bit faster and sensors getting a bit cheaper. With labor costs going up I expect what we'll see in the next few years is more of the very low-skilled jobs getting automated.”

According to Dr. Dezhen Song, a professor in the Department of Computer Science and Engineering at Texas A&M University, high-level intelligence for more advanced tasks is still probably five to ten years off, depending on the difficulty of the task and the robot behavior involved. Simpler, more repetitive tasks, such as picking and sorting produce, could be automated far sooner.

“If you want a fully autonomous system that functions like a human, that's probably very far off,” he said. “But if you have specifically set up a task you want them to do, then we are very close. We actually are already there for some tasks.”

FANUC, Kawasaki, KUKA, and other major robotics companies are now manufacturing systems designed to work alongside humans.

Partners, Not Tools

In order for robots to become an autonomous part of the workforce, they will need to become better at interacting and working side by side with humans, a practice that robotics experts refer to as cobotics, or collaborative robotics: human-robot collaboration.

“Imagine you've got a robot working at the same lab bench as you and the robot is helping you,” says Roberts. “Say you both reach for the same test tube. The robot will stop and it won't hurt you, whereas the last generation of robots would have. That’s cobotics. But it's still too hard for that robot to plan around you. So, when you both try to reach for the same test tube it will stop, it won't try to retry, it won't say you're reaching for that so I'll take a different route to get it.”

The challenge of cobotics is that humans and robots tend to have overlapping skill sets, so developers need to determine which tasks to assign to robots and which to leave to humans. It isn’t solely a question of creating machines that handle tasks for us, but rather of making them flexible enough to know when to step in and help and when to let us take over.

Deep Learning: Teaching the Robots

This is where artificial intelligence and machine learning come in.

Deep Learning is a neural network-based approach to machine learning that uses today’s massive data sets to train machines on behavior. By using these large data sets, programmers are now able to improve robots’ object recognition, natural language processing, image classification, and more, resulting in smarter machines.

A graph showing the number of organizations engaged with NVIDIA on Deep Learning in 2013-2015. Image: NVIDIA

According to Jesse Clayton, senior manager of product management for intelligent machines at NVIDIA, three factors have enabled this new approach to machine learning: Big Data, which makes far more data available to train neural networks; new training algorithms that are far more efficient than previous generations; and advanced new graphics processing technologies that enable robots to “see” and perceive more about the world around them.

“The key part is training,” he said. “This is where you're exposing a neural network to the sort of data that you want it to learn. So, if you want it to learn to detect people, or you want it to learn to detect cars, or if you want it to learn to detect widgets in a factory, you simply show many, many instances of that data and through that process it learns how to distinguish between cars or people or different types of widgets in a factory.”
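The process Clayton describes maps onto a standard supervised-learning loop. The sketch below is a minimal, illustrative example only, written here in PyTorch with randomly generated stand-in images and made-up class names (person, car, widget); it is not NVIDIA’s software or any production robotics pipeline. It shows the basic idea: a small neural network is shown many labeled examples and its weights are nudged toward the correct answers.

```python
# Minimal sketch of a supervised training loop: show a network many labeled
# examples and let it learn to distinguish classes. Illustrative only --
# random tensors stand in for a real labeled image dataset.
import torch
import torch.nn as nn
import torch.optim as optim

NUM_CLASSES = 3          # e.g. person, car, widget (hypothetical classes)
BATCH_SIZE = 16
IMAGE_SIZE = 32          # tiny 32x32 RGB images for the sketch

# A small convolutional network: convolutions extract visual features,
# a final linear layer maps them to class scores.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                       # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                       # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, NUM_CLASSES),
)

loss_fn = nn.CrossEntropyLoss()            # penalizes wrong class predictions
optimizer = optim.SGD(model.parameters(), lr=0.01)

for step in range(100):
    # Stand-in for a batch of labeled camera images.
    images = torch.randn(BATCH_SIZE, 3, IMAGE_SIZE, IMAGE_SIZE)
    labels = torch.randint(0, NUM_CLASSES, (BATCH_SIZE,))

    optimizer.zero_grad()
    scores = model(images)                 # the network's current guesses
    loss = loss_fn(scores, labels)         # how wrong those guesses are
    loss.backward()                        # compute corrections
    optimizer.step()                       # nudge weights toward better answers

    if step % 20 == 0:
        print(f"step {step}: loss {loss.item():.3f}")
```

With real labeled images in place of the random tensors, this same loop is what lets a vision system learn to tell people from cars from widgets: the capability comes from the data and the training process rather than from hand-written rules.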

This is the process by which artificial intelligence becomes “intelligent,” and thanks to Big Data and cloud computing, it is accelerating.

“Right now, robots know to pick up a widget from this spot, move it over to this spot and put it back down,” Clayton said. “They can't deal well with things like dynamic lighting, changing environments, or changes to a manufacturing line. So, there's a lot of opportunity to automate so many more things throughout the entire industrial supply chain, if robots could be smarter about dealing with more dynamic situations, and also smarter about being able to work with humans.”

Clayton says he expects Deep Learning to start making real changes to robotics in the next five years, affecting not only manufacturing but a whole host of other industries as well.

The Rise of the Robots

Of course, no discussion of Deep Learning and “robots teaching robots” is complete without addressing the risks of putting autonomous robots in close proximity to humans. Industrial machines are typically stronger and more resilient than the average person, and that creates a potential danger in the case of a malfunction or other breakdown in the cobotics working relationship.

This has not gone unnoticed by researchers.

"With robots, we’re going to have situations where they might work in some environments, situations where I can control the environment, but might not work when we are in an environment where we cannot anticipate of all the possibilities,” says Dr. Song. “So, we will have to be very careful. We have to have a fence, and within the fence we know the robot can work safely. The problem is it's not always possible to establish that fence, especially as robots start getting closer and closer to humans.”

Autonomous driving is a very good example of this, he explained, because in a self-driving car a person is essentially sitting inside a robot that is fully in control of the situation and is driving very close to other people out on the road. This is a car, and it can do real damage—to the occupant as well as others around it—in the event that something goes wrong. The possibility of any sort of accident, then, is unacceptable, and many layers of safeguards must be built in to protect the humans interacting with these machines.

This is a process that takes time and careful effort, meaning that the transition to fully interactive robots is going to be slow and methodical.

“It is good to be optimistic,” says Dr. Song, “but it's not good to be overly optimistic about this technology. We have many years of work to do.”

Tim Sprinkle is an independent writer.

