Place Recognition System from Long-Term Observations
Author(s): Do Joon Jung, Hang Joon Kim

Sensors, 2020, Vol. 20(21), pp. 6002
Author(s): Daniele De Martini, Matthew Gadd, Paul Newman

This paper presents a novel two-stage system that integrates topological localisation candidates from a radar-only place recognition system with precise pose estimation using spectral landmark-based techniques. We show that the recently available, seminal radar place recognition (RPR) and scan-matching sub-systems are complementary, in a style reminiscent of the mapping and localisation systems underpinning visual teach-and-repeat (VTR) systems, which have been demonstrated robustly over the last decade. Offline experiments are conducted on the most extensive radar-focused urban autonomy dataset available to the community, with performance comparing favourably with, and even rivalling, alternative state-of-the-art radar localisation systems. Specifically, we show the long-term durability of the approach, and of the sensing technology itself, for autonomous navigation. We suggest a range of sensible methods for tuning the system, all of which are suitable for online operation. For both tuning regimes, we achieve, over the course of a month of localisation trials against a single static map, high recall at high precision and much-reduced variance in erroneous metric pose estimation. As such, this work is a necessary first step towards a radar teach-and-repeat (RTR) system and the enablement of autonomy across extreme changes in appearance and inclement conditions.
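The coarse-to-fine structure described in this abstract can be sketched in a few lines: a place-recognition front end proposes topological candidates from compact scan descriptors, and a finer scan-matching back end selects and refines among them. All names, data shapes, and scoring functions below are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of a two-stage (coarse-to-fine) localisation pipeline.
# Stage 1 retrieves topological candidates by nearest-neighbour search on
# coarse descriptors; stage 2 scores candidates with a finer matching cost.
# The embeddings, scans, and cost function here are toy stand-ins.
import numpy as np

def topological_candidates(query_emb, map_embs, k=3):
    """Stage 1: indices of the k map descriptors nearest the query."""
    dists = np.linalg.norm(map_embs - query_emb, axis=1)
    return np.argsort(dists)[:k]

def refine_match(query_scan, map_scans, candidates):
    """Stage 2: score each candidate with a placeholder matching cost
    (mean squared difference) and return the best candidate and cost."""
    costs = [np.mean((query_scan - map_scans[i]) ** 2) for i in candidates]
    best = int(candidates[int(np.argmin(costs))])
    return best, float(min(costs))

rng = np.random.default_rng(0)
map_embs = rng.normal(size=(100, 16))   # coarse descriptors, one per map scan
map_scans = rng.normal(size=(100, 64))  # "raw" scans used for fine matching
true_idx = 42
query_emb = map_embs[true_idx] + 0.01 * rng.normal(size=16)
query_scan = map_scans[true_idx] + 0.01 * rng.normal(size=64)

cands = topological_candidates(query_emb, map_embs)
best, cost = refine_match(query_scan, map_scans, cands)
print(best)  # the true place should survive both stages
```

In a real system, stage 2 would return a full metric pose (e.g. from landmark-based scan matching) rather than just the winning candidate index.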


2017, Vol. 30(7), pp. e4146
Author(s): Loukas Bampis, Savvas Chatzichristofis, Chryssanthi Iakovidou, Angelos Amanatiadis, Yiannis Boutalis, ...

2021
Author(s): Diwei Sheng, Yuxiang Chai, Xinru Li, Chen Feng, Jianzhe Lin, ...

2021, Vol. 11(19), pp. 8976
Author(s): Junghyun Oh, Gyuho Eoh

As mobile robots perform long-term operations in large-scale environments, coping with perceptual changes has recently become an important issue. This paper introduces a stochastic variational inference and learning architecture that can extract condition-invariant features for visual place recognition in a changing environment. Under the assumption that the latent representation of a variational autoencoder can be divided into condition-invariant and condition-sensitive features, a new structure for the variational autoencoder is proposed and a variational lower bound is derived to train the model. After training, condition-invariant features are extracted from test images to calculate a similarity matrix, and places can be recognized even under severe environmental changes. Experiments were conducted to verify the proposed method, and the results showed that our assumption was reasonable and effective for recognizing places in changing environments.
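The retrieval step described above can be sketched as follows: assuming an encoder whose latent vector splits into a condition-invariant part and a condition-sensitive part, places are compared using only the invariant half. The toy "encoder" below is a stand-in for the paper's trained variational autoencoder, and the split point and data are illustrative assumptions:

```python
# Sketch: place matching with the condition-invariant half of a split latent.
# In the paper this split is learned by a VAE; here we fake it with arrays
# whose first half is shared across "day" and "night" passes of each place.
import numpy as np

def encode(z, n_invariant=4):
    """Hypothetical encoder output split: first n_invariant dims are assumed
    condition-invariant, the rest condition-sensitive."""
    return z[:n_invariant], z[n_invariant:]

def similarity_matrix(feats_a, feats_b):
    """Cosine similarity between two sets of condition-invariant features."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    return a @ b.T

rng = np.random.default_rng(1)
place_core = rng.normal(size=(5, 4))                      # invariant content per place
day = np.hstack([place_core, rng.normal(size=(5, 4))])    # daytime pass
night = np.hstack([place_core, rng.normal(size=(5, 4))])  # nighttime pass

inv_day = np.stack([encode(z)[0] for z in day])
inv_night = np.stack([encode(z)[0] for z in night])
S = similarity_matrix(inv_day, inv_night)
matches = S.argmax(axis=1)  # each day image should match the same place at night
print(matches)
```

Because only the invariant half enters the similarity matrix, the condition-sensitive dimensions (which differ between passes) cannot perturb the match.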


2019, Vol. 07(03), pp. 183-194
Author(s): Yoan Espada, Nicolas Cuperlier, Guillaume Bresson, Olivier Romain

The navigation of autonomous vehicles is confronted with the problem of building an efficient place recognition system able to handle outdoor environments in the long run. Current Simultaneous Localization and Mapping (SLAM) and place recognition solutions have limitations that prevent them from achieving the performance needed for autonomous driving. This paper suggests handling the problem from another perspective by taking inspiration from biological models. We propose a neural architecture for the localization of an autonomous vehicle based on a neurorobotic model of the place cells (PC) found in the hippocampus of mammals. The model is based on an attentional mechanism and takes into account only visual information from a mono-camera, together with orientation information, to self-localize. It has the advantage of working with a low-resolution camera without the need for calibration. It also does not need a long learning phase, as it uses a one-shot learning system. This localization model has already been integrated into a robot control architecture that allows successful navigation in both indoor and small outdoor environments. The contribution of this paper is to study how the model handles the change of scale by evaluating its performance over much larger outdoor environments. Eight experiments using real data (images and orientation) recorded from a moving vehicle are studied (from the KITTI odometry datasets and datasets taken with VEDECOM vehicles). Results show the strong adaptability to different kinds of environments of this bio-inspired model, primarily developed for indoor navigation.
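The one-shot learning scheme mentioned above can be sketched simply: each place is learned from a single visit by storing its feature pattern, and recognition is a nearest-match lookup with a novelty threshold that recruits a new "cell" when nothing matches. The features and threshold below are illustrative assumptions; the actual model works on attention-selected visual landmarks and orientation:

```python
# Sketch of one-shot place learning: store one pattern per place on first
# visit, recognise by cosine similarity, recruit a new cell when novel.
import numpy as np

class PlaceCells:
    def __init__(self, threshold=0.9):
        self.memory = []          # one stored pattern per learned place
        self.threshold = threshold

    def observe(self, feature):
        """Return the id of the recognised place, learning a new one
        in a single shot if no stored pattern matches."""
        feature = feature / np.linalg.norm(feature)
        for i, pattern in enumerate(self.memory):
            if float(pattern @ feature) > self.threshold:
                return i              # recognised an already-learned place
        self.memory.append(feature)   # one-shot learning of a new place
        return len(self.memory) - 1

rng = np.random.default_rng(2)
places = rng.normal(size=(3, 32))     # features of three distinct places
pc = PlaceCells()
first_pass = [pc.observe(p) for p in places]                          # learn
second_pass = [pc.observe(p + 0.01 * rng.normal(size=32)) for p in places]  # revisit
print(first_pass, second_pass)
```

No gradient-based training phase is involved: a single pass over the route is enough to populate the memory, which matches the "no long learning phase" claim in the abstract.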

