Robots Form Surveillance Teams

A single robot that can perform a task accurately is valuable indeed. But what if a group of robots could work together to accomplish goals and tasks better than any one of them could individually? A team of researchers recently put their minds to just that concept.

Working Together

“We formed an interdisciplinary team in response to an ONR [Office of Naval Research] project on distributed perception,” says Prof. Silvia Ferrari, professor of mechanical and aerospace engineering at Cornell University and the principal investigator for this ONR-funded project. “We decided to collaborate across computer science and mechanical engineering to develop systems that would take advantage of both the latest developments in computer vision and robotics.”

The result was a computer system that can combine information and data from multiple robots to track people or objects. The system is designed around the assumption that the robots will not always be able to communicate with each other. “The communications are wireless and, as such, can be unreliable due to the environment or to jammed communication channels,” she says. “Similarly, the robots may at times be in a GPS-denied environment. So the question is: Can the robots learn how to cope with these circumstances and reconfigure accordingly to develop, together, a perception of the scene?”
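
That shared “perception of the scene” comes down to fusing whatever estimates actually make it across the network. As a rough, hypothetical sketch (not the Cornell team’s code), the Python snippet below merges position reports from several robots using covariance intersection, a standard fusion rule that remains consistent even when the robots’ errors are correlated in unknown ways, and simply skips any report lost to a dropped or jammed channel. The two-dimensional state, noise values, and number of robots are all assumptions made for the example.

```python
# Minimal sketch: fusing 2-D position estimates from several robots
# when some reports never arrive. Covariance intersection keeps the
# fused estimate consistent even if the robots' errors are correlated.
import numpy as np

def fuse_reports(reports):
    """Fuse (mean, covariance) reports; an entry is None if that
    robot's message was lost to jamming or dropout."""
    received = [r for r in reports if r is not None]
    if not received:
        return None                       # nothing arrived this cycle; keep the prior estimate
    w = 1.0 / len(received)               # equal covariance-intersection weights
    info = sum(w * np.linalg.inv(P) for _, P in received)
    vec = sum(w * np.linalg.inv(P) @ x for x, P in received)
    P_fused = np.linalg.inv(info)
    return P_fused @ vec, P_fused

# Example: three robots observe the same person; one message is dropped.
reports = [
    (np.array([4.9, 2.1]), np.diag([0.5, 0.5])),
    None,                                  # robot 2 is out of contact
    (np.array([5.2, 1.8]), np.diag([0.2, 0.8])),
]
estimate = fuse_reports(reports)
```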

Engineers teach computers to combine many possible views of the same area from fixed and mobile cameras. Image: J.P. Oleson / Cornell University

What are the parts involved in this system? “[There are] multiple Segway-type robots equipped with onboard sensors and communication devices,” she says. “Sensors include simple cameras, stereo vision cameras, and range finders. The robots will be collaborating with stationary pan-tilt-zoom cameras as well as interacting with the cloud and the World Wide Web.”
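
One hypothetical way to picture that heterogeneous team, purely as an illustration (the names and fields below are invented, not drawn from the project), is a small data structure recording whether each sensing node is a mobile Segway-type robot or a fixed pan-tilt-zoom camera, which sensors it carries, and whether its wireless link is currently up:

```python
# Illustrative only: a simple representation of the mixed team of
# mobile robots and fixed cameras described in the article.
from dataclasses import dataclass, field
from enum import Enum, auto

class SensorType(Enum):
    MONOCULAR_CAMERA = auto()
    STEREO_CAMERA = auto()
    RANGE_FINDER = auto()
    PTZ_CAMERA = auto()

@dataclass
class SensingNode:
    name: str
    mobile: bool                              # Segway-type robot vs. fixed camera
    sensors: list[SensorType] = field(default_factory=list)
    connected: bool = True                    # wireless link may drop out

team = [
    SensingNode("robot_1", mobile=True,
                sensors=[SensorType.STEREO_CAMERA, SensorType.RANGE_FINDER]),
    SensingNode("wall_cam_3", mobile=False,
                sensors=[SensorType.PTZ_CAMERA]),
]
```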

She says one thing they didn’t anticipate was “how difficult it is to interpret the scene for a robot despite all the advancements in sensor technologies and processing algorithms.” Ferrari also learned just how different the perspectives of various team members can be. “Generally, I am surprised at the different perspectives on common problems, such as tracking and classification, on which we think we know so much, but, when we get right down to it, they mean different things to different people,” she says.

She says applications of this technology include security and surveillance, along with enhancing perception for autonomous systems, in order to help them understand their environment. This could spill over into areas ranging from medical robotics to self-driving vehicles, she says.

But the work is not without its challenges. A major one, she says, is providing performance guarantees on the perception, tracking, and mapping algorithms when communication quality is changing. “Fusion allows better perception and mapping, but it is difficult to perform with intermittent communications,” she says. “Another big challenge is to develop a broad understanding of the scene that goes beyond ‘simple’ classification, detection, and mapping. Namely, what does it mean to understand what is happening in the scene? How can we extract context and detect unusual behaviors and actions?”
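
The intermittent-communication problem Ferrari highlights can be illustrated with a textbook tracking filter. The sketch below, a minimal Kalman filter under a constant-velocity motion model with made-up noise values (again, not the team’s algorithm), predicts the target’s position every cycle but can only correct itself when a measurement actually arrives, so the estimate’s uncertainty visibly grows during a communication blackout.

```python
# Minimal Kalman-filter sketch of tracking with intermittent reports:
# predict every step, update only when a measurement gets through.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])              # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]])               # robots report position only
Q = 0.05 * np.eye(4)                       # process noise (assumed)
R = 0.5 * np.eye(2)                        # measurement noise (assumed)

x = np.zeros(4)                            # state: [px, py, vx, vy]
P = np.eye(4)

def step(x, P, z):
    """One tracking cycle; z is None when the robots' report was lost."""
    x, P = F @ x, F @ P @ F.T + Q          # predict
    if z is not None:                       # update only if a message arrived
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x, P

# Simulated gap: reports at t=0,1, then a jammed channel, then recovery.
for z in [np.array([0.1, 0.0]), np.array([1.0, 0.1]), None, None,
          np.array([4.1, 0.3])]:
    x, P = step(x, P, z)
    print(np.trace(P))                      # uncertainty grows during the gap
```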

Eric Butterman is an independent writer.

