Making Robot-Human Interactions Safer

As robots move beyond factories and into everyday life, new algorithms are helping machines make safer, smarter decisions when interacting with people.
Robots are a familiar sight in factories, but they’re increasingly showing up in warehouses, hospitals, restaurants, and even homes. They’re cleaning floors, delivering packages, driving cars, transporting materials, mowing lawns, and taking on other tasks once done exclusively by people. 

As robots become more common in daily life, researchers at the University of Colorado say there’s a growing need to improve collaboration between humans and machines so that tasks are completed effectively—and human safety remains the top priority. 

In a study presented at the International Joint Conference on Artificial Intelligence in August 2025, CU Boulder researchers unveiled new algorithms designed to help robots make the best possible decisions in uncertain or risky situations. The work, led by Associate Professor Morteza Lahijanian and graduate students Karan Muvvala and Qi Heng Ho from the Ann and H.J. Smead Department of Aerospace Engineering Sciences, aims to improve how robots plan and act when outcomes aren’t guaranteed. 

“Our whole project was motivated by the fact that we are at a level of automation now that we can deploy robots in our daily lives,” Lahijanian said. “And whether we like it or not, they are going to come in contact with us.” 

The importance of safety is underscored by a 2023 incident in South Korea, when a worker at a vegetable packing facility was killed after being grabbed and pressed against a conveyor belt by a robot’s arm. 

“How do we go from very structured environments where there is no human, where the robots are doing everything by themselves, to unstructured environments where there are a lot of uncertainties and other agents?” Lahijanian asked. 

The researchers applied game theory—a mathematical framework originally developed in economics—to design new robot algorithms. Game theory examines how different players, such as individuals, companies or governments, make decisions that influence one another’s outcomes. In robotics, it treats a robot as one of many players in a game striving to “win” by successfully completing a task. But when humans are in the mix, success becomes more complex: The robot must not only achieve its goal but also ensure human safety at every step. 

One of the team’s innovations was to make regret part of the algorithm guiding a robot’s reactions to humans in its environment. 

“When a robot initially starts doing a task, it has no idea what type of human it is trying to interact with,” Muvvala said. “Initially it gives the human a chance—to see if the human is open to collaboration or not. If the human behaves in a way that helps the robot finish the task more quickly, the robot becomes optimistic that they can finish the task as a team because that’s the goal. If the human behaves antagonistically or adversarially—let’s say the human moves in such a way that stops the robot from finishing the task—then the robot will adapt its behavior. It will regret the actions it took in the past where it wanted to collaborate with the human and will adapt its behaviors so it can finish the task on its own.”
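The switching behavior Muvvala describes can be pictured with a toy model. The sketch below is purely illustrative, not the team's published algorithm: it assumes made-up step costs and a made-up regret threshold, and simply has the robot abandon collaboration once the extra cost it has paid by cooperating (its "regret" relative to acting alone) grows too large.

```python
# Hypothetical sketch of regret-triggered strategy switching.
# Costs, threshold, and strategy names are invented for illustration.

def step_cost(strategy: str, human_helpful: bool) -> float:
    """Cost of one planning step under a given strategy (toy model)."""
    if strategy == "collaborate":
        # A blocking human makes collaboration expensive.
        return 1.0 if human_helpful else 3.0
    return 2.0  # acting alone has a fixed, moderate cost

def choose_strategy(history: list[bool], threshold: float = 4.0) -> str:
    """Switch to solo execution once accumulated regret is too high.

    Regret here = extra cost paid by collaborating, compared with the
    cost the robot would have paid by acting alone over the same steps.
    """
    regret = sum(step_cost("collaborate", helpful) - step_cost("solo", helpful)
                 for helpful in history)
    return "solo" if regret > threshold else "collaborate"

# A cooperative human keeps the robot collaborating...
print(choose_strategy([True, True, True]))                    # -> collaborate
# ...but repeated blocking drives regret up, and the robot goes it alone.
print(choose_strategy([False, False, False, False, False]))   # -> solo
```

In this toy version the robot "gives the human a chance": helpful behavior accumulates negative regret, so a few uncooperative moves won't immediately trigger the switch.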

The team tested the algorithms in its lab, filming robots interacting with and being blocked by humans. The researchers observed the robots pausing tasks or moving away to complete them safely. 

“When you look at the interactions on the videos, you see that the robot at some point says, ‘Enough is enough. I can’t do this anymore. I will just do it on my own,’” Lahijanian said. 

While a robot’s version of winning is to complete a task, the researchers proposed a broader idea: having the robot find an “admissible strategy.” That means accomplishing as much of its task as possible while minimizing any potential harm, including to humans. 
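One way to picture an "admissible strategy" in this sense is a lexicographic choice: safety is ranked strictly ahead of task progress. The snippet below is a made-up illustration under that assumption (the candidate strategies and their scores are invented, and this is not the paper's formal definition).

```python
# Toy illustration of an "admissible" choice: first restrict attention to
# the strategies with the lowest achievable harm risk, then pick the one
# that completes the most of the task. All names and numbers are invented.

candidates = [
    {"name": "aggressive", "task_done": 1.00, "harm_risk": 0.30},
    {"name": "cautious",   "task_done": 0.90, "harm_risk": 0.00},
    {"name": "idle",       "task_done": 0.00, "harm_risk": 0.00},
]

# Safety first: keep only the strategies with minimal harm risk...
min_risk = min(c["harm_risk"] for c in candidates)
safe = [c for c in candidates if c["harm_risk"] == min_risk]

# ...then, among those, maximize how much of the task gets done.
best = max(safe, key=lambda c: c["task_done"])
print(best["name"])  # -> cautious
```

Here the robot forgoes a strategy that would finish the entire task because it carries avoidable risk, settling instead for finishing most of the task safely.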

Seeing the robots react as planned was gratifying for Ho, who says the project blends theory and practice. “It’s very nice that you can go from theory all the way to robots doing things as researchers planned,” Ho said. “It’s still in the lab, but I can foresee it happening in real-world settings in the future.” 

Lahijanian envisions the team’s research being applied to robots working across many industries, from automobile manufacturing to construction to people’s homes, where, for instance, a robot might help a person make dinner. All of the research is open source and available for others to build upon, which the team hopes to see happen. 

“We hope in the future it will translate into some start-up ideas and then be employed by companies building robots with these actual regret notions so they can work safely with humans and improve our society,” Lahijanian said. 

Annemarie Mannion is a technology writer in Chicago.  