A LoCATe-based visual place recognition system for mobile robotics and GPGPUs
2017, Vol. 30(7), pp. e4146
Authors: Loukas Bampis, Savvas Chatzichristofis, Chryssanthi Iakovidou, Angelos Amanatiadis, Yiannis Boutalis, ...
2021, Vol. 18(6), pp. 172988142110374
Authors: Li Tang, Yue Wang, Qimeng Tan, Rong Xiong

In the long-term deployment of mobile robots, appearance change poses challenges for localization. When a robot revisits a place or restarts from an existing map, global localization is needed, where place recognition provides coarse position information. For visual sensors, appearance changes such as the day-to-night transition and seasonal variation can degrade the performance of a visual place recognition system. To address this problem, we propose to learn domain-unrelated features across extreme appearance change, where a domain denotes a specific appearance condition, such as a season or a kind of weather. We use an adversarial network with two discriminators to disentangle domain-related features and domain-unrelated features from images, and the domain-unrelated features are used as descriptors in place recognition. Given images from different domains, our network is trained in a self-supervised manner that does not require correspondences between these domains. Moreover, our feature extractors are shared among all domains, making it possible to accommodate more appearance conditions without increasing model complexity. Qualitative and quantitative results on two toy cases are presented to show that our network can disentangle domain-related and domain-unrelated features from given data. Experiments on three public datasets and one proposed dataset for visual place recognition are conducted to illustrate the performance of our method compared with several typical algorithms. In addition, an ablation study is designed to validate the effectiveness of the introduced discriminators in our network. Finally, we use a four-domain dataset to verify that the network can extend to multiple domains with one model while achieving similar performance.
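Once the domain-unrelated descriptors are extracted, place recognition reduces to matching a query descriptor against the descriptors of mapped places. A minimal sketch of that matching step, assuming fixed-length descriptor vectors and cosine similarity (the function name `match_place` and the similarity choice are illustrative, not from the paper):

```python
import numpy as np

def match_place(query_desc, map_descs):
    """Return (index, score) of the best-matching mapped place.

    query_desc: (D,) domain-unrelated descriptor of the query image.
    map_descs:  (N, D) descriptors of the N mapped places.
    Similarity is cosine similarity between L2-normalized descriptors.
    """
    q = query_desc / np.linalg.norm(query_desc)
    m = map_descs / np.linalg.norm(map_descs, axis=1, keepdims=True)
    sims = m @ q                       # (N,) cosine similarities
    best = int(np.argmax(sims))
    return best, float(sims[best])
```

Because the descriptors are trained to discard domain-related information, the same matching routine can be used unchanged whether the query image comes from a day, night, or seasonal domain.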


2019, Vol. 9(15), pp. 3146
Authors: Bo Yang, Xiaosu Xu, Jun Li, Hong Zhang

Landmark generation is an essential component of landmark-based visual place recognition. In this paper, we present a simple yet effective method, called multi-scale sliding window (MSW), for landmark generation that improves the performance of place recognition. In our method, we generate landmarks that form a uniform distribution over multiple landmark scales (sizes) within an appropriate range by sampling an image with a sliding window. This is in contrast to conventional landmark generation methods, which typically depend on detecting objects whose size distributions are uneven and which, as a result, may not achieve shift invariance and viewpoint invariance, two important properties in visual place recognition. We conducted experiments on four challenging datasets to demonstrate that our method significantly improves recognition performance in a standard landmark-based visual place recognition system. Our method is simple, with a single input parameter (the required landmark scales), and efficient, as it does not involve object detection.
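The sampling process described above can be sketched directly: slide a square window of each requested scale across the image and emit every window position as a landmark box. A minimal sketch, assuming square windows and an overlap fraction that controls the stride (the function name, `stride_frac` parameter, and 50% default overlap are illustrative assumptions, not details from the paper):

```python
def multi_scale_windows(img_w, img_h, scales, stride_frac=0.5):
    """Generate landmark boxes (x, y, w, h) with a sliding window.

    scales:      iterable of window side lengths in pixels -- the
                 method's single input parameter.
    stride_frac: step between windows as a fraction of the window
                 side, so windows at one scale overlap uniformly.
    Boxes are uniformly distributed per scale, unlike object-detection
    proposals whose size distribution follows the scene content.
    """
    boxes = []
    for s in scales:
        stride = max(1, int(s * stride_frac))
        for y in range(0, img_h - s + 1, stride):
            for x in range(0, img_w - s + 1, stride):
                boxes.append((x, y, s, s))
    return boxes
```

For example, a 100x100 image with a single scale of 50 and 50% overlap yields a 3x3 grid of nine landmark boxes; adding more scales simply appends further uniform grids without any detection step.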


2015, Vol. 35(4), pp. 334-356
Authors: Elena S. Stumm, Christopher Mei, Simon Lacroix

2021, Vol. 6(3), pp. 5976-5983
Authors: Maria Waheed, Michael Milford, Klaus McDonald-Maier, Shoaib Ehsan

Authors: Timothy L. Molloy, Tobias Fischer, Michael J. Milford, Girish Nair
