Robot Helps People Get Dressed
Every January at the Consumer Electronics Show in Las Vegas, where some 200,000 business leaders and forward thinkers gather to view the newest technologies, there is always interest in when robots as versatile household assistants may become reality.
Thus far, journalists typically report that despite all the flashy gadgets and promises made, a multi-purpose household robot that could serve as a reliable and affordable butler, chef, cleaner, and fetcher is much further off than many would imagine.
That doesn’t mean engineers aren’t making considerable progress in robotics in areas they believe will make life easier and better, especially for the more than 1 million Americans who need extra help in performing everyday physical tasks, such as older adults, wounded veterans, and people who have suffered a stroke or debilitating injury.
In May, researchers from Georgia Tech presented a paper, “Deep Haptic Model Predictive Control for Robot-Assisted Dressing,” at the International Conference on Robotics & Automation about a robot they created that may help people who require daily physical assistance getting dressed. The robot uses haptics, rather than vision, to help it guide a hospital gown onto a person’s hand, around the elbow, and onto the shoulder.
The work is being done in Associate Professor Charlie Kemp’s healthcare robotics lab, using a Willow Garage PR2 research robot. The lead Georgia Tech Ph.D. student on the team, Zackory Erickson, said the robot taught itself to slide a hospital gown on a person’s arm by analyzing nearly 11,000 simulated examples to learn what it feels like to be the human receiving assistance. From this simulation of both successes and failures, the robot learned to estimate the optimal force. Doing a similar number of trials on humans could have been dangerous as well as very time-consuming, Erickson says.
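The idea of learning from simulated trials can be illustrated with a toy sketch. This is not the team's deep haptic model: the data, the linear relationship, and all names here are invented, standing in for a network trained on the roughly 11,000 simulated dressing examples. The point is only the workflow: generate many simulated trials pairing robot-side haptic readings with the force the simulated person feels, then fit a model that estimates the latter from the former.

```python
import numpy as np

# Invented stand-in for learning from simulated dressing trials.
rng = np.random.default_rng(0)

# Each simulated "trial" records robot-side haptics at the end effector
# (measured force, gripper speed); the target is the force the simulated
# human feels on the arm. The linear relation below is assumed.
n_trials = 11000
X = rng.uniform(0.0, 1.0, size=(n_trials, 2))          # [force (N), speed (m/s)]
true_w = np.array([1.4, 0.6])                           # assumed ground truth
y = X @ true_w + rng.normal(0.0, 0.05, size=n_trials)   # force felt by person

# Fit a least-squares model mapping robot-side haptics to human-side force.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# With thousands of noisy trials, the fit recovers the assumed relation.
print(np.round(w, 1))  # ≈ [1.4 0.6]
```

A real system would replace the linear fit with a learned dynamics model, but the appeal is the same one Erickson describes: the simulator can label each trial with the force the person feels, which the physical robot can never measure directly.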
However, one of the biggest engineering challenges was setting up a simulation the robot could learn from and still transfer to the real world, he says. “Simulation is a very crude approximation of the real world. When you run a simulation, you have to model what’s going to happen in the real world,” Erickson says.
The team chose haptics as the sensorial method and simulation for learning for a couple of reasons. “One of the challenging aspects of robots helping with dressing is that they can’t directly see what’s happening since clothing typically hides a person’s body,” Kemp says. “Another challenge is that doing a good job depends on the forces felt by the person. Too much force can result in discomfort. In the real world, our robot doesn’t have direct access to what a person is feeling. Yet in the simulated world where our robot learns, it can directly measure the forces applied to the person.”
He notes that the team chose a hospital gown because it’s a widely used standard clothing item in healthcare, and that many steps of dressing are similar to pulling a tube of material over a body part. After the simulated learning, the robot moved on to performing the action on humans, which took about 10 seconds.
Erickson says what makes this work different from past efforts is that the robot takes a human perspective. Other work typically takes a robot-centric approach where the machine thinks about its actions and how it’s going to succeed. “It doesn’t take into consideration what impact it’s going to have on the person. As the robot is pulling the gown on someone’s body, it’s trying to estimate based on what it feels on its fingertips, what is the person feeling on his or her body,” he says.
As part of the learning, the robot can predict the consequences of moving the gown in different ways. Some motions made the gown taut, pulling hard against the person’s body. Other movements slid the gown smoothly. “The robot uses these predictions to select motions that comfortably dress the arm,” Erickson says.
The researchers also varied the timing of the robot’s thinking and actions to allow it to think as much as a fifth of a second into the future while strategizing about its next move. Less than that caused the robot to fail more often.
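That predict-then-select loop can be sketched in miniature. Everything here is hypothetical: the toy "taut gown" force model and the candidate speeds stand in for the paper's learned haptic model and motion candidates. The structure mirrors the description above: for each candidate motion, predict the force the person would feel about a fifth of a second ahead, then pick the motion that makes progress while keeping predicted force low.

```python
# Minimal model-predictive-control sketch with invented dynamics.
DT = 0.05       # control step, seconds
HORIZON = 4     # 4 steps * 0.05 s = 0.2 s lookahead

def predict_force(position, velocity):
    """Toy stand-in for a learned haptic model: force rises once the
    gown goes taut past position 0.5 along the arm."""
    slack = max(0.0, position - 0.5)
    return 20.0 * slack * velocity

def rollout_cost(position, velocity):
    """Predicted peak force over the horizon, minus a progress bonus."""
    peak = 0.0
    for _ in range(HORIZON):
        peak = max(peak, predict_force(position, velocity))
        position += velocity * DT
    return peak - position  # trade comfort against dressing progress

def choose_velocity(position, candidates=(0.05, 0.1, 0.2, 0.4)):
    # Pick the candidate motion with the lowest predicted cost.
    return min(candidates, key=lambda v: rollout_cost(position, v))

# While the gown is slack the controller moves fast; near tautness,
# the lookahead predicts rising force and it slows down.
print(choose_velocity(0.1), choose_velocity(0.55))  # prints: 0.4 0.05
```

Shortening `HORIZON` weakens the controller's foresight, which loosely mirrors the researchers' finding that giving the robot less than about a fifth of a second of lookahead made it fail more often.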
Kemp is happy with the progress and is optimistic that some of the approaches and principles will be broadly applicable wherever robots are intelligently interacting with people and the world. “We’ve taught a robot to predict the physical implications of its actions during a complex physical process,” he says.
The work has also shown “that you don’t have to have an incredibly high-fidelity, physics-based simulator to teach a robot how to do useful things,” he says. “One of the challenges is how to strike that balance between the fidelity of the simulation and the computational requirements.”
But there is still much to be done. Dressing is a much more complex task than some other activities of daily living Kemp’s lab has worked on, such as shaving or feeding. “In terms of dressing, we’ve worked with a hospital gown and putting one sleeve of one arm of the gown on a person’s arm. You can imagine there is still a ways to go with just that task. We’re working more on that,” Kemp says.
The team is also looking at some other simulation-based approaches to facilitate testing with people with disabilities.
“That’s something extremely important because just as testing in the real world is very important, if you actually want people with disabilities to benefit, you have to test it with people with disabilities. Otherwise you don’t know. More broadly, I’m optimistic that in the long run, robots can help [other] people manage in their daily lives,” Kemp says. “Maybe help people that are just a bit tired and want help around the house.”
Nancy S. Giges is an independent writer.