It is no surprise that we see more and more robots in our daily lives: in a restaurant bringing orders to the table, in the field as a seasonal worker, even competing with couriers on deliveries… and that is without counting their applications in industrial-scale automation. Robots don’t need to rest, they don’t have labor rights, and they don’t complain. But they do get lost. And that is a real, very common problem for which a research team from the Miguel Hernández University of Elche has found a solution.
The context. Autonomous robots need to know where they are in order to function, and that does not always happen: when the location reference is lost, whether because someone moves the robot, it is switched off, or the environment changes without warning, the robot is unable to recover its position. Something as ordinary as running out of battery can become a technical drama.
This phenomenon is not an isolated one; in fact, it even has a name in robotics: the “kidnapped robot problem”. Although we see more and more robots everywhere, the issue has lacked a robust solution for decades. Resorting to GPS is no answer, since it can fail in settings such as indoors or near tall buildings. As Míriam Máximo, lead author of the article, explains: “It is a classic problem and very difficult to solve, especially in large environments.”
The solution. What the team from the University of Elche has implemented is MCL-DLF, short for Monte Carlo Localization – Deep Local Feature, a system that combines two technologies: on the one hand, a 3D LiDAR that emits laser pulses to build a three-dimensional map of the environment, similar to that of robot vacuum cleaners; on the other, an artificial intelligence that learns which elements of the environment are most useful for orientation.
Why is it important. Because a reliable localization system is essential for any robotic deployment in real life: autonomous vehicles, delivery and logistics, assistance… Robots may be increasingly common, but they are still tremendously dependent on supervision, and knowing where it is is essential for a robot to operate safely.
The implemented method also introduces an important change: it is self-contained, in that it does not require external infrastructure such as GPS to function, which makes it more robust and versatile across different real-world scenarios.
How it works. Its approach is hierarchical: it first recognizes large structures and then fine details, much as people do. When you arrive in an unfamiliar place, you first register the essentials (what neighborhood you are in, for example) and then look for more specific references to refine further. Furthermore, the system does not bet everything on a single guess: it maintains several position hypotheses simultaneously and discards or refines them as the sensor captures more information.
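The "several hypotheses at once" idea is the core of Monte Carlo Localization: a cloud of candidate positions (particles) is moved with the robot, weighted by how well each one explains the latest sensor reading, and resampled so that unlikely hypotheses die out. The following is a toy 1-D sketch of that loop; the corridor, landmarks, sensor model, and all parameters are invented for illustration and have nothing to do with the actual MCL-DLF implementation, which operates on 3D LiDAR scans and learned features.

```python
import math
import random

# Hypothetical map: landmark positions along a 10 m corridor (invented).
LANDMARKS = [2.0, 5.0, 8.0]

def sense(true_pos):
    """Simulated range to the nearest landmark, with sensor noise."""
    nearest = min(abs(true_pos - lm) for lm in LANDMARKS)
    return nearest + random.gauss(0, 0.1)

def likelihood(particle, measurement, sigma=0.2):
    """How well a candidate position explains the measurement."""
    expected = min(abs(particle - lm) for lm in LANDMARKS)
    return math.exp(-((measurement - expected) ** 2) / (2 * sigma ** 2))

def mcl_step(particles, measurement, motion=0.5):
    # 1. Motion update: shift every hypothesis, adding a little noise.
    particles = [p + motion + random.gauss(0, 0.05) for p in particles]
    # 2. Weight each hypothesis by agreement with the sensor reading.
    weights = [likelihood(p, measurement) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Resample: unlikely hypotheses disappear, likely ones multiply.
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
true_pos = 1.0
# Kidnapped-robot start: no idea where we are, so spread hypotheses everywhere.
particles = [random.uniform(0, 10) for _ in range(500)]

for _ in range(6):
    true_pos += 0.5
    particles = mcl_step(particles, sense(true_pos))

estimate = sum(particles) / len(particles)
print(f"true position: {true_pos:.2f} m, estimate: {estimate:.2f} m")
```

In the real system, the step labeled 2 is where the learned deep local features come in: instead of a hand-written distance model, the match between the LiDAR scan and the map decides each hypothesis's weight.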
Tests carried out over several months on the university campus, under varying lighting, vegetation, and weather conditions, showed more consistent results than conventional methods.
A good start, with unfinished business. Beyond its promising results, the most striking thing about this research is its commitment to sensory autonomy: it depends not on beacon networks or GPS but on the robot’s own sensors. This makes it a potentially more versatile system.
However, it faces the great historical challenge of robot localization: fragility in the face of changing environments. True, the team has tested it in different conditions, but always within the campus; making the leap to more complex, constantly changing environments is its litmus test, along with additional validation in extreme conditions. Finally, before any eventual commercial deployment, we will have to see how it integrates with other navigation systems and what its computational cost is.
Cover | Enchanted Tools
