RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation

2018 ◽  
Vol 36 (2) ◽  
pp. 416-446 ◽  
Author(s):  
Mathieu Labbé ◽  
François Michaud


2017 ◽  
Vol 2017 ◽  
pp. 1-7
Author(s):  
Gangchen Hua ◽  
Xu Tan

In this study, we describe a new appearance-based loop-closure detection method for online incremental simultaneous localization and mapping (SLAM) that uses affine-invariant geometric constraints. Unlike pure bag-of-words approaches, the proposed method uses geometric constraints as a supplement to improve accuracy. By establishing an affine-invariant hypothesis, the method excludes incorrectly matched visual words and computes the dispersion of the correctly matched words to improve the accuracy of the likelihood calculation. In addition, the camera's intrinsic parameters and distortion coefficients are sufficient for this method; no 3D measurement is necessary. Memory is managed using a Long-Term Memory and Working Memory (WM) mechanism, and only the limited-size WM is used for loop-closure detection; the proposed method is therefore suitable for large-scale real-time SLAM. We tested our method on the CityCenter and Lip6Indoor datasets. The proposed method effectively corrects the typical false-positive localizations of previous methods, yielding better recall and better precision.
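The Working Memory idea above can be illustrated with a minimal sketch (this is not the authors' implementation): keep a fixed-size buffer of recent keyframes, each represented as a set of visual-word IDs, and score loop-closure candidates by bag-of-words overlap. The capacity, threshold, and scoring function here are illustrative assumptions; the paper additionally filters matches with affine-invariant geometric constraints before computing the likelihood.

```python
from collections import deque

WM_SIZE = 100  # hypothetical working-memory capacity

# Working Memory: a bounded buffer of (frame_id, set_of_visual_word_ids).
working_memory = deque(maxlen=WM_SIZE)

def likelihood(words_a, words_b):
    """Normalized visual-word overlap (Jaccard similarity) of two frames."""
    if not words_a or not words_b:
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)

def detect_loop_closure(frame_id, words, threshold=0.3):
    """Score the new frame against Working Memory, then store it.

    Returns (best_frame_id, score) if the best score clears the
    threshold, else (None, 0.0).
    """
    best = max(
        ((fid, likelihood(words, w)) for fid, w in working_memory),
        key=lambda t: t[1],
        default=(None, 0.0),
    )
    working_memory.append((frame_id, words))
    return best if best[1] >= threshold else (None, 0.0)
```

Because the buffer is bounded, each detection step touches at most `WM_SIZE` frames, which is what keeps such a scheme real-time at large scale.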


2017 ◽  
Author(s):  
Sook-Lei Liew ◽  
Julia M. Anglin ◽  
Nick W. Banks ◽  
Matt Sondag ◽  
Kaori L. Ito ◽  
...  

Abstract. Stroke is the leading cause of adult disability worldwide, with up to two-thirds of individuals experiencing long-term disabilities. Large-scale neuroimaging studies have shown promise in identifying robust biomarkers (e.g., measures of brain structure) of long-term stroke recovery following rehabilitation. However, analyzing large rehabilitation-related datasets is problematic due to barriers in accurate stroke lesion segmentation. Manual lesion tracing is currently the gold standard for lesion segmentation on T1-weighted MRIs, but it is labor-intensive and requires anatomical expertise. While algorithms have been developed to automate this process, the results often lack accuracy. Newer algorithms that employ machine-learning techniques are promising, yet these require large training datasets to optimize performance. Here we present ATLAS (Anatomical Tracings of Lesions After Stroke), an open-source dataset of 304 T1-weighted MRIs with manually segmented lesions and metadata. This large, diverse dataset can be used to train and test lesion segmentation algorithms and provides a standardized dataset for comparing the performance of different segmentation methods. We hope ATLAS release 1.1 will be a useful resource to assess and improve the accuracy of current lesion segmentation methods.
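As a small illustration of working with manually segmented lesion masks of the kind ATLAS provides (the mask here is synthetic, and the voxel dimensions are an assumption; real ATLAS volumes would be loaded with a tool such as nibabel):

```python
import numpy as np

def lesion_volume_ml(mask, voxel_dims_mm=(1.0, 1.0, 1.0)):
    """Volume of a binary lesion mask in millilitres (1 ml = 1000 mm^3)."""
    voxel_mm3 = float(np.prod(voxel_dims_mm))
    return mask.sum() * voxel_mm3 / 1000.0

# Synthetic example: a 3x3x3-voxel "lesion" in a 10x10x10 volume.
mask = np.zeros((10, 10, 10), dtype=bool)
mask[2:5, 2:5, 2:5] = True
print(lesion_volume_ml(mask))  # 27 voxels x 1 mm^3 = 0.027 ml
```

Simple summary measures like this are one way lesion metadata can be derived consistently across a dataset when comparing segmentation methods.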


Author(s):  
Kai Liu ◽  
Hua Wang ◽  
Fei Han ◽  
Hao Zhang

Visual place recognition is essential for large-scale simultaneous localization and mapping (SLAM). Long-term robot operations across different times of day, months, and seasons introduce new challenges from significant variations in environment appearance. In this paper, we propose a novel method to learn a location representation that integrates the semantic landmarks of a place with its holistic representation. To promote the robustness of our new model against the drastic appearance variations caused by long-term visual changes, we formulate our objective using non-squared ℓ2-norm distances, which leads to a difficult optimization problem of minimizing a ratio of ℓ2,1-norms of matrices. To solve this objective, we derive a new efficient iterative algorithm whose convergence is rigorously guaranteed by theory. In addition, because our solution is strictly orthogonal, the learned location representations have better place recognition capabilities. We evaluate the proposed method on two large-scale benchmark data sets, the CMU-VL and Nordland data sets. Experimental results validate the effectiveness of our new method in long-term visual place recognition applications.
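The ℓ2,1-norm in the objective above is the sum of the ℓ2 norms of a matrix's rows; the objective minimizes a ratio of two such norms. A minimal sketch of the norm itself (the iterative solver is not reproduced here):

```python
import numpy as np

def l21_norm(M):
    """l2,1-norm of a matrix: sum over rows of each row's l2 norm."""
    return np.sqrt((M ** 2).sum(axis=1)).sum()

A = np.array([[3.0, 4.0],
              [0.0, 0.0]])
print(l21_norm(A))  # row norms are 5 and 0, so the result is 5.0
```

Unlike the squared Frobenius norm, the ℓ2,1-norm grows only linearly with each row's magnitude, which is what makes objectives built from it less sensitive to outlier rows (here, outlier feature vectors from drastic appearance changes).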


Author(s):  
J.-P. Muller ◽  
Y. Tao ◽  
A. R. D. Putri ◽  
S. J. Conway

Abstract. Automated large-scale retrieval of stereo photogrammetric DTMs of Mars falls into three categories: COTS software such as BAE-SOCET®; private software such as the DLR-VICAR software suite; and open-source solutions such as the NASA Ames Stereo Pipeline (ASP). We describe here a novel open-source system built on top of ASP, known as CASP-GO (Tao et al., 2018), which automates and extends ASP so that it can be applied to all modern single-pass or repeat-pass stereo photogrammetric datasets from 21st-century systems such as HRSC, CTX and HiRISE. CASP-GO also includes an automated co-registration system which employs HRSC (itself linked to MOLA) as the base map onto which all other DTMs are co-registered. We show an example of this automated co-registration system applied to multi-resolution stacks including CRISM images. Several thousand multi-resolution 3D products, Digital Terrain Models (DTMs) and their corresponding orthorectified images (ORIs), have been generated and used in a wide variety of scientific studies, a few examples of which are shown here. Finally, we show a new method for distributing these products that provides long-term archiving and ease of access via DOIs, employing the ESA-PSA Guest Storage Facility, together with their display within the iMars webGIS system.
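A toy illustration of one aspect of co-registering DTMs to a base map (this is not the CASP-GO algorithm, which performs full geometric co-registration): remove the vertical offset between a DTM and the base, using a median so that outliers and data gaps do not bias the shift.

```python
import numpy as np

def coregister_vertical(dtm, base):
    """Shift a DTM so its median elevation difference with the base is zero.

    NaNs in either grid (data gaps) are ignored when estimating the offset.
    """
    diff = base - dtm
    offset = np.nanmedian(diff)  # robust to outliers and NaN gaps
    return dtm + offset

# Synthetic 2x2 example with one gap in the base map.
base = np.array([[10.0, 11.0], [12.0, np.nan]])
dtm = np.array([[7.0, 8.0], [9.0, 9.5]])
aligned = coregister_vertical(dtm, base)
print(aligned[0, 0])  # offset = median(3, 3, 3) = 3, so 7.0 -> 10.0
```

Anchoring all products to a common, well-controlled base (HRSC tied to MOLA, as in the abstract) is what makes DTMs from different instruments and resolutions mutually comparable.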


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4103
Author(s):  
Junghyun Oh ◽  
Changwan Han ◽  
Seunghwan Lee

Localization is one of the essential processes in robotics, playing an important role in autonomous navigation and simultaneous localization and mapping (SLAM) for mobile robots. As robots perform large-scale, long-term operations, identifying the same locations in a changing environment has become an important problem. In this paper, we describe a visual localization system that is robust under severe appearance changes. First, a robust feature extraction method based on a deep variational autoencoder is described to calculate the similarity between images. Then, a global sequence alignment is proposed to find the actual trajectory of the robot. To align sequences, local fragments are detected in the similarity matrix and connected using a rectangle chaining algorithm that accounts for the robot's motion constraints. Since the chained fragments provide reliable clues to the global path, false matches on featureless structures and partial failures during alignment can be recovered, enabling accurate robot localization in changing environments. The presented experimental results demonstrate the benefits of the proposed method, which outperformed existing algorithms under long-term conditions.
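A minimal sketch in the spirit of sequence-based matching over a similarity matrix (this is far simpler than the paper's fragment detection and rectangle chaining): for each candidate offset between query and database sequences, score the corresponding diagonal of the similarity matrix and keep the best one.

```python
import numpy as np

def best_sequence_offset(sim):
    """Find the database offset whose diagonal has the highest mean similarity.

    sim[i, j] is the similarity between query frame i and database frame j;
    the query sequence is assumed shorter than the database sequence.
    """
    n, m = sim.shape
    scores = {
        off: np.mean([sim[i, i + off] for i in range(n)])
        for off in range(m - n + 1)
    }
    return max(scores, key=scores.get)

# Synthetic similarity matrix: 3 query frames, 6 database frames,
# with the true alignment at offset 2.
sim = np.zeros((3, 6))
for i in range(3):
    sim[i, i + 2] = 1.0
print(best_sequence_offset(sim))  # 2
```

Scoring whole sequences rather than single frames is what lets such methods tolerate individual false matches on featureless structures, since a single bad frame barely moves a diagonal's aggregate score.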


Author(s):  
Patrick Chwalek ◽  
David Ramsay ◽  
Joseph A. Paradiso

We present Captivates, an open-source smartglasses system designed for long-term, in-the-wild psychophysiological monitoring at scale. Captivates integrate many underutilized physiological sensors in a streamlined package, including temple and nose temperature measurement, blink detection, head motion tracking, activity classification, 3D localization, and head pose estimation. Captivates were designed with an emphasis on: (1) manufacturing and scalability, so we can easily support large-scale user studies ourselves and offer the platform as a generalized tool for ambulatory psychophysiology research; (2) robustness and battery life, so long-term studies yield trustworthy data across an individual's entire day in natural environments, without supervision or recharging; and (3) aesthetics and comfort, so people can wear them in their normal daily contexts without self-consciousness or changes in behavior. Captivates are intended to enable large-scale data collection without altering user behavior. We validate that our sensors capture useful data robustly for a small set of beta testers. We also show that our additional effort on aesthetics was imperative to meeting our goals; namely, earlier versions of our prototype made people too self-conscious to interact naturally in public, and our additional design and miniaturization effort has made a significant impact in preserving natural behavior. There is tremendous promise in translating psychophysiological laboratory techniques into real-world insight. Captivates serve as an open-source bridge to this end. Paired with an accurate underlying model, Captivates will be able to quantify the long-term psychological impact of our design decisions and provide real-time feedback for technologists interested in actuating a cognitively adaptive, user-aligned future.


2022 ◽  
Vol 5 (1) ◽  
pp. 11
Author(s):  
Jooeun Song ◽  
Joongjin Kook

The simultaneous localization and mapping (SLAM) market is growing rapidly with advances in machine learning, drone, and augmented reality (AR) technologies. However, due to the absence of an open-source SLAM library for developing AR content, most SLAM researchers must conduct their own research and development to customize SLAM. In this paper, we propose an open-source Mobile Markerless AR System built on our own Visual-SLAM-based pipeline. To implement the Mobile AR System, we use ORB-SLAM3 and the Unity Engine, run our system in a real environment, and confirm the results in the Unity Engine's mobile viewer. Through this experimentation, we verify that the Unity Engine and the SLAM system are tightly integrated and communicate smoothly. We expect this research to help accelerate the growth of SLAM technology.
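One way a SLAM process and a rendering engine such as Unity can communicate is by streaming camera poses as small serialized messages. The message format below is purely hypothetical (the paper does not specify its protocol); only the serialization side is sketched.

```python
import json

def pose_message(frame_id, position, quaternion):
    """Serialize a camera pose as a JSON message.

    position: (x, y, z) in metres; quaternion: (x, y, z, w) rotation.
    A SLAM process could stream such messages to a viewer over a socket.
    """
    return json.dumps({
        "frame": frame_id,
        "t": list(position),
        "q": list(quaternion),
    })

msg = pose_message(42, (0.1, 0.0, 1.5), (0.0, 0.0, 0.0, 1.0))
print(msg)
```

On the receiving side, the viewer would deserialize each message and apply the pose to its virtual camera, which is the basic mechanism by which tracked motion in the real environment drives the AR view.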

