Measuring Human Trust in Robots

Human trust in robots doesn’t always rise and fall when it should, so a researcher at Texas A&M is developing a way to measure worker trust in robots.

Trust is vital when working with a few hundred pounds of steel arm that is swinging product parts throughout the workday. But the amount of trust is key: Too little faith in the machine and productivity goes down. Too much faith and things get dangerous.

Gauging human trust in robots is an important piece of manufacturing that needs to be done regularly and accurately. Ranjana Mehta, a professor of industrial and systems engineering at Texas A&M and director of A&M’s NeuroErgonomics Lab, is finding new ways of quantifying that trust.

Surveys of trust, for researchers or manufacturers hoping to quantify such things, don’t tell the whole story. “When you take a deep dive into trusting and distrusting behaviors, you find differences that you don’t find in survey responses,” said Mehta.

So, she decided to scan the brains of people working with robots. Mehta used functional near-infrared spectroscopy to watch the brain activity of workers as they assisted robots with different tasks and different levels of human/robot interaction. In one set of experiments, the robot did a bit of polishing, and in a second set, the robot performed assembly tasks.

Neural patterns, such as activation and connectivity, were consistent when humans and robots were both working as planned. “Under reliable conditions, when the robot is behaving as it’s supposed to, brain function did not change between low assistance and high assistance of the robot and had stronger connectivity,” said Mehta.

Researchers in Ranjana Mehta’s lab capture functional brain activity as operators work with robots on a manufacturing task to track the operator’s trust or distrust levels. Photo: Texas A&M Engineering
But when the robot or the human started to get sloppy, some of those connections were severed.

Unfortunately, it turns out that human workers are trusting their robotic counterparts when they should be most on their guard. “We found that humans tended to trust the robots more and more as they were getting fatigued, even when what the robot did was not trustworthy,” said Mehta. “We wanted to look at how can we characterize this trust. And is it the same across different individuals?”

She found that while men and women seemed to trust robots equally when the robots were doing their jobs properly, their trusting behaviors, though not their trust survey responses, diverged sharply when robots misbehaved. Men were more likely to keep trusting the machine, assuming it would still do what it was supposed to, whereas women grew more suspicious. These differences showed up in neural and gaze data. Ultimately, that meant men made more mistakes.

But that doesn’t mean men should be moved out of the manufacturing workspace. “The whole point is about calibrating trust,” said Mehta. “If you use only survey data to calibrate trust, then men would start making more mistakes. Instead, let’s create trust-calibrated robot actions based on measures of behavior that are informative.”

Eye tracking data, for instance, could help make a robot aware of how over- or under-trusting a human is at any moment during the workday.
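To make that idea concrete, here is a minimal illustrative sketch in Python of how gaze data might feed a trust signal. Everything here is hypothetical, not from Mehta's study: the function name, the idea of classifying trust by how often the operator visually monitors the robot, and the threshold values are all assumptions for illustration.

```python
# Hypothetical sketch: estimate over- or under-trust from gaze samples.
# Each sample labels where the operator was looking ("robot" or "task").
# The thresholds are illustrative, not values reported in the research.

def trust_signal(gaze_samples, low=0.15, high=0.60):
    """Classify operator trust from the fraction of gaze samples
    spent monitoring the robot."""
    if not gaze_samples:
        raise ValueError("no gaze data")
    monitoring = sum(1 for s in gaze_samples if s == "robot") / len(gaze_samples)
    if monitoring < low:    # rarely checks the robot: possible over-trust
        return "over-trust"
    if monitoring > high:   # constant monitoring: possible under-trust
        return "under-trust"
    return "calibrated"

# An operator who glances at the robot in 1 of 4 samples lands in the
# calibrated band under these example thresholds.
print(trust_signal(["robot", "task", "task", "task"]))  # prints "calibrated"
```

A robot controller could poll a signal like this over a sliding window and, say, slow down or surface a warning when the operator drifts into the over-trust band.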

Mehta’s experiments were conducted in her lab with students working with robots that were given somewhat controlled tasks. Her next step will be to take the research to a company near the university’s campus to see how real workers respond, in situ, in the real world. She will also investigate how quickly a robot regains a human’s trust after a breach.

The results of her work are already robust enough that robot-wielding factories could put them to use. This research could also change how robots are made and programmed in the first place. For example, a robot’s display could show the robot’s level of reliability.

“Then the human knows, and they can then choose to trust it or change their behavior accordingly,” said Mehta. “That transparency is vital to maintaining trust in the system.” 

Michael Abrams is a technology writer based in Westfield, N.J.