Spatial Navigation Through a New LENS

QUT robotics researchers have developed a new robot navigation system that mimics neural processes of the human brain and uses less than 10 percent of the energy required by traditional systems.
Researchers at Australia’s Queensland University of Technology (QUT) Centre for Robotics have leveraged brain-inspired computing to create an accurate and efficient robotic navigation system.

For more than two decades, researchers have been trying to program autonomous robots to navigate the world around them seamlessly, enabling them to perform activities such as search and rescue operations and space exploration. There’s only one problem: Many existing robotic systems consume an incredible amount of energy thanks to power-hungry computer vision setups. 

“The vision systems of most robots can consume up to one-third of the power of a lithium-ion battery,” said Adam Hines, a neuroscience-trained postdoctoral research fellow in Computational Modelling and Biorobotics at Macquarie University and a visiting fellow at the QUT Centre for Robotics. “If you’ve deployed a mobile robot to do an activity like underwater monitoring in a place where you can’t really access it, you are looking at significantly reduced mission times because they just use too much power to get around.” 

Now, however, Hines, along with QUT Centre for Robotics colleagues Michael Milford and Tobias Fischer, has taken an entirely new approach to robot navigation, one that reduces energy consumption by 90 to 99 percent. 


Historically, Hines said, researchers in computer vision have put a strong emphasis on achieving exceptionally high accuracy—but doing so comes at a cost. Such systems require high-powered graphics processing units (GPUs) and high-performance computing clusters to run intensive algorithms. He and his colleagues instead focused on neuromorphic computing, a paradigm that mimics the way the human brain processes information. 

“We developed an algorithm called visual place recognition (VPR) Tempo, a super-efficient, fast-learning neural network that allows robots to visually localize themselves very quickly, as well as train very quickly compared to other systems,” he said. “VPRTempo uses spiking neural networks, which make connections with each other based on the propagation of spiking activity, similar to how human neurons learn new information.” 
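The article doesn't detail VPRTempo's actual learning rule, but the idea of neurons that fire spikes and strengthen connections between co-active units can be sketched in a few lines. This is a toy illustration of spike-based, Hebbian-style learning, not the VPRTempo implementation:

```python
import numpy as np

def simulate_spiking_layer(inputs, weights, threshold=1.0, lr=0.01):
    """Toy spiking layer: output neurons fire when accumulated input
    crosses a threshold, and connections between co-active input/output
    pairs are strengthened (a simple Hebbian-style rule, for illustration
    only -- not VPRTempo's actual learning rule)."""
    potentials = inputs @ weights                      # integrate input spikes
    spikes = (potentials >= threshold).astype(float)   # fire or stay silent
    weights = weights + lr * np.outer(inputs, spikes)  # strengthen co-active pairs
    return spikes, weights

rng = np.random.default_rng(0)
inputs = (rng.random(8) > 0.5).astype(float)  # binary input spike pattern
weights = rng.random((8, 4)) * 0.5            # 8 inputs -> 4 output neurons
spikes, weights = simulate_spiking_layer(inputs, weights)
```

Because learning happens through these local spike-driven weight updates rather than backpropagation over huge datasets, training can be fast and cheap in energy.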

The use of VPRTempo enabled the team to develop a new navigation system, known as Locational Encoding with Neuromorphic Systems (LENS). Instead of trying to take in everything in the visual field, LENS relies on a small camera that only reacts to movement or changes in the environment. In doing so, its energy consumption dramatically decreases. 
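A camera that "only reacts to movement or changes" is the defining trait of neuromorphic event sensors. As a rough sketch (real event cameras report changes asynchronously per pixel; this frame-differencing version is a simplification), only the pixels whose brightness changes produce any output at all:

```python
import numpy as np

def events_from_frames(prev_frame, frame, threshold=0.1):
    """Toy event-camera model: emit an event only where pixel brightness
    changes by more than a threshold, instead of transmitting every pixel
    of every frame. (Illustrative; real neuromorphic sensors operate
    asynchronously per pixel rather than on full frames.)"""
    diff = frame - prev_frame
    events = np.zeros(frame.shape, dtype=int)
    events[diff > threshold] = 1      # brightness increased
    events[diff < -threshold] = -1    # brightness decreased
    return events

prev = np.zeros((7, 7))   # a 49-pixel patch, matching the input size quoted for LENS
curr = prev.copy()
curr[3, 3] = 1.0          # only one pixel changed between frames
print(events_from_frames(prev, curr).sum())  # -> 1: only that pixel fires
```

A static scene produces almost no data to process, which is where the dramatic energy savings come from.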


“LENS only uses very small amounts of information to work. So, we’re only encoding 49 pixels from a neuromorphic sensor, and it’s connected to a very small machine learning (ML) model,” he said. “In contrast, typical artificial intelligence (AI) systems like ChatGPT use parameters in the order of billions that need to be trained and weights that need to be learned. But ours only needs 44,000, which is really minuscule in terms of ML models.” 
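To get a feel for how small 44,000 parameters is, here is a back-of-the-envelope count. The assumptions are illustrative only (the article doesn't specify the architecture): a single fully connected layer from 49 event pixels to roughly 900 place outputs lands in the same range.

```python
# Back-of-the-envelope parameter count (illustrative assumptions only).
# A single fully connected layer from 49 event-camera pixels to n_places
# place outputs has 49 * n_places weights.
n_pixels = 49
n_places = 900            # hypothetical figure; not stated in the article
params = n_pixels * n_places
print(params)             # -> 44100, the same order as LENS's ~44,000

# Compare with the billions of parameters in large language models:
ratio = 1_000_000_000 / params
print(f"~{ratio:,.0f}x smaller than a 1B-parameter model")
```

A model this size fits comfortably in a few hundred kilobytes, consistent with the 180 KB storage figure reported below.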

To test the system, Hines and colleagues took the robot with LENS on an 8-kilometer journey. It was able to accurately recognize locations along the way while only using 180 kilobytes of storage. Hines said he was both pleased and surprised it worked as well as it did. 

“We rely quite heavily on sequential-based information. So, as we move through the environment, we assume, if you are going from A to B to C, if you are currently in A you are going to B next, and so on—that’s sort of the way the brain works to navigate,” he said. “The algorithm exploits that information to enhance the ability of the system to predict where it is based on where it was previously. And with those assumptions in place, the system was not only as performant as a conventional navigation system but actually worked better than the conventional system.” 
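The sequential assumption Hines describes can be sketched as a filtering step: raw place-match scores from the camera are reweighted by a prior that favors places just ahead of the previous match along the route. This is a hypothetical toy version for illustration, not the algorithm used in LENS:

```python
import numpy as np

def sequence_filtered_match(scores, prev_place, n_places, spread=1):
    """Toy sequence-based filtering: reweight raw place-match scores by a
    prior that expects the robot to be at, or just ahead of, its previous
    place along the route (A -> B -> C). Hypothetical illustration only."""
    prior = np.full(n_places, 0.1)             # small chance of being anywhere
    hi = min(prev_place + spread, n_places - 1)
    prior[prev_place:hi + 1] = 1.0             # strongly expect nearby forward places
    return int(np.argmax(scores * prior))

# Ambiguous raw scores: places 2 and 7 look equally similar to the camera.
scores = np.array([0.1, 0.2, 0.9, 0.3, 0.1, 0.2, 0.3, 0.9])
# Knowing the robot was just at place 1 resolves the ambiguity toward place 2.
print(sequence_filtered_match(scores, prev_place=1, n_places=8))  # -> 2
```

Exploiting route order this way lets a tiny network disambiguate visually similar places that would confuse a purely frame-by-frame matcher.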


While the QUT researchers are excited about the progress they’ve made so far, Hines said they need to create larger neural networks to help this kind of system scale to longer distances. To do that, he said, LENS will have to replace its current classifier network, which learns to associate each output class of the neural network with a specific location. 

“We’d really love to move away from localization as a classification problem and toward an experiential problem,” he explained. “In this case, as you move through the environment, you would have a collection of outputs that you decode to find your location. So, you can keep the network the same size, but you can get multiple neurons responding to the same place or similar-looking places. Rather than classifying where you are, you use a decoding network to predict where you are, incorporating more spatial information to support higher accuracy without relying too much on sequential data.” 

This, he said, would help these systems reorient themselves if they lost their place, or fell victim to the so-called “kidnapped robot problem,” in which someone picks up the robot and places it in a location it hasn’t been before. 

“There are so many options we could pursue to make this better. There are algorithm improvements, hardware sensor improvements; we’re really at the start of using this kind of neuromorphic engineering for navigation,” he said. “But I think, as other engineers look at robotic navigation, this shows why it is important for them to consider alternatives to what’s currently ‘state-of-the-art’ when deploying systems on robotic platforms. Don’t discount small, lightweight models on neuromorphic systems that can perform as well as conventional ones, but do so in a more practical, energy-efficient way.” 

Kayt Sukel is a technology writer and author in Houston, Texas.