Fast and Robust Monocular Visual-Inertial Odometry Using Points and Lines

Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4545
Author(s):  
Ning Zhang ◽  
Yongjia Zhao

When the camera moves quickly and the image is blurred, or when texture is missing from the scene, point-feature-based Simultaneous Localization and Mapping (SLAM) algorithms have difficulty tracking enough effective feature points; positioning accuracy and robustness degrade, and the system may even fail entirely. To address this problem, we propose a monocular visual odometry algorithm based on point and line features combined with IMU measurement data. On this basis, an environmental feature map with geometric information is constructed, and the IMU measurements provide prior and scale information for the visual localization algorithm. An initial pose estimate is then obtained from motion estimation by sparse image alignment, and feature alignment is further performed to obtain sub-pixel feature correspondences. Finally, more accurate poses and 3D landmarks are obtained by minimizing the re-projection errors of local map points and lines. Experimental results on the EuRoC public datasets show that the proposed algorithm outperforms the Open Keyframe-based Visual-Inertial SLAM (OKVIS-mono) and Oriented FAST and Rotated BRIEF SLAM (ORB-SLAM) algorithms in both accuracy and speed.
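The re-projection residuals minimized in the last step can be sketched as follows: a point landmark contributes the distance between its projection and its 2D observation, while a 3D line segment contributes the distance of its projected endpoints to the observed 2D line. This is a minimal illustration under a pinhole model; the function names and the normalized line parameterization (a, b, c) are assumptions, not the paper's notation.

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of a 3D point X under camera pose (R, t)."""
    x_cam = K @ (R @ X + t)
    return x_cam[:2] / x_cam[2]

def point_residual(K, R, t, X, u_obs):
    """Re-projection error of a point landmark against its 2D observation."""
    return np.linalg.norm(project(K, R, t, X) - u_obs)

def line_residual(K, R, t, P1, P2, line_obs):
    """Re-projection error of a 3D line segment: distance of the projected
    endpoints to the observed 2D line (a, b, c), with a^2 + b^2 = 1."""
    a, b, c = line_obs
    err = 0.0
    for P in (P1, P2):
        u = project(K, R, t, P)
        err += abs(a * u[0] + b * u[1] + c)
    return err
```

In a real system these residuals would be stacked over all local map points and lines and minimized jointly over the poses and landmarks.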

2019 ◽  
Vol 9 (7) ◽  
pp. 1428 ◽  
Author(s):  
Ran Wang ◽  
Xin Wang ◽  
MingMing Zhu ◽  
YinFu Lin

Autonomous underwater vehicles (AUVs) are widely used, but guaranteeing their underwater localization accuracy remains a tough challenge. In this paper, a novel method is proposed to improve the accuracy of vision-based localization systems in feature-poor underwater environments. The traditional stereo visual simultaneous localization and mapping (SLAM) algorithm, which relies on detecting and tracking features, is used to estimate the position of the camera and build a map of the environment. However, it is hard to find enough reliable point features underwater, which degrades the algorithm's performance. To resolve this problem, a stereo point and line SLAM (PL-SLAM) algorithm, which uses point and line information simultaneously, was investigated in this study. Experiments with an AR marker (Augmented Reality marker) were carried out to validate the accuracy and effectiveness of the investigated algorithm.
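One reason stereo SLAM is attractive here is that metric scale comes directly from triangulating features matched across the rectified pair. A minimal sketch of that relation (variable names are illustrative):

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth of a feature matched across a rectified stereo pair:
    Z = f * B / d, where d is the horizontal disparity in pixels."""
    return focal_px * baseline_m / disparity_px
```

For example, a 10 px disparity with a 500 px focal length and a 0.1 m baseline corresponds to a depth of 5 m.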


2021 ◽  
Vol 10 (10) ◽  
pp. 673
Author(s):  
Sheng Miao ◽  
Xiaoxiong Liu ◽  
Dazheng Wei ◽  
Changze Li

A visual localization approach for dynamic environments based on hybrid semantic-geometry information is presented. Interference from moving objects in real environments can corrupt a traditional simultaneous localization and mapping (SLAM) system. To address this problem, we propose a method for static/dynamic image segmentation that leverages semantic and geometric modules, including optical flow residual clustering, epipolar constraint checks, semantic segmentation, and outlier elimination. We integrated the proposed approach into the state-of-the-art ORB-SLAM2 and evaluated its performance on both public datasets and a quadcopter platform. Experimental results demonstrated that the root-mean-square error of the absolute trajectory error improved, on average, by 93.63% in highly dynamic benchmarks when compared with ORB-SLAM2. Thus, the proposed method can improve the performance of state-of-the-art SLAM systems in challenging scenarios.
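The epipolar constraint check mentioned above can be sketched as follows: for a static point, the match in the second image must lie on the epipolar line induced by the fundamental matrix F, so a match far from that line is flagged as dynamic. F, the point pairs, and the threshold below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def epipolar_distance(F, p1, p2):
    """Distance of p2 to the epipolar line F @ p1 (pixel coordinates)."""
    x1 = np.array([p1[0], p1[1], 1.0])
    x2 = np.array([p2[0], p2[1], 1.0])
    l = F @ x1
    return abs(x2 @ l) / np.hypot(l[0], l[1])

def flag_dynamic(F, matches, thresh=1.0):
    """Label each match as dynamic when it violates the epipolar constraint."""
    return [epipolar_distance(F, p1, p2) > thresh for p1, p2 in matches]
```

In the full pipeline this geometric test is combined with optical flow clustering and semantic masks, since epipolar geometry alone cannot detect objects moving along the epipolar line.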


2007 ◽  
Vol 04 (02) ◽  
pp. 141-160 ◽  
Author(s):  
FUNG-LING TONG ◽  
MAX Q.-H. MENG

The simultaneous localization and mapping technique is an important requirement in the development of autonomous robots. Many localization algorithms for wheeled robots using various sensors have been proposed. In this article, we present a visual localization algorithm for a small home-use robot pet (legged robot). The robot is equipped with a low-resolution camera as its only localization sensor. Visual localization for legged robots faces several challenges: (1) unmodeled motion errors due to leg slippage are common; (2) the oscillating walking motion leads to fluctuating sensor data, and the high degrees of freedom of a legged robot increase the complexity of the localization problem; and (3) the camera has a limited field of view, and image points lack depth information. In the proposed algorithm, localization of the high-degree-of-freedom robot is modeled as an optimization problem, and the objective function is solved by a genetic algorithm. Approaches to (1) increase the efficiency of the search and (2) weaken the influence of noisy feature points on the localization results are presented. Simulation results show that the proposed algorithm localizes the legged robot accurately and efficiently.
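A genetic-algorithm search over a pose vector, in the spirit of the optimization described above, can be sketched as follows. The population size, mutation scale, and elite scheme are illustrative choices, not the paper's; the objective would in practice score how well predicted image features match the observed ones.

```python
import numpy as np

rng = np.random.default_rng(1)

def ga_localize(objective, bounds, pop=60, gens=80, elite=10, sigma=0.05):
    """Minimize `objective` over a pose vector with a tiny genetic algorithm:
    keep the elite each generation and refill the population with mutated
    copies of elite members."""
    lo, hi = np.array(bounds, dtype=float).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        order = np.argsort([objective(p) for p in P])
        E = P[order[:elite]]                            # survivors
        kids = E[rng.integers(elite, size=pop - elite)]  # clone elites
        kids = kids + rng.normal(0.0, sigma, kids.shape) * (hi - lo)
        P = np.clip(np.vstack([E, kids]), lo, hi)
    return min(P, key=objective)
```

Because the elite is preserved, the best candidate never worsens between generations, which is one simple way to keep the search stable under noisy feature points.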


2020 ◽  
Vol 2020 ◽  
pp. 1-12 ◽  
Author(s):  
Inam Ullah ◽  
Xin Su ◽  
Xuewu Zhang ◽  
Dongmin Choi

For more than two decades, simultaneous localization and mapping (SLAM) has drawn the attention of researchers and remains an influential topic in robotics. Various mobile robot SLAM algorithms have been investigated, with probability-based algorithms commonly used in unknown environments. In this paper, the authors propose two localization algorithms. The first is linear Kalman Filter (KF) SLAM, which consists of five phases: (a) a motionless robot with absolute measurements, (b) a moving vehicle with absolute measurements, (c) a motionless robot with relative measurements, (d) a moving vehicle with relative measurements, and (e) a moving vehicle with relative measurements while the robot's location is not detected. The second is SLAM with the Extended Kalman Filter (EKF). Finally, the proposed SLAM algorithms are shown in simulation to be efficient and viable. The simulation results show that the presented approaches can accurately locate both the landmarks and the mobile robot.
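All five phases share the same predict/update cycle of the linear Kalman filter, which can be sketched generically as below (standard KF notation; the scalar test case mirrors phase (a), a motionless robot with an absolute measurement):

```python
import numpy as np

def kf_step(x, P, u, z, A, B, H, Q, R):
    """One predict/update cycle of the linear Kalman filter."""
    # Predict with the linear motion model.
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Update with the measurement z.
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

The phases differ only in how A, B, H, and the state vector are chosen (e.g., whether the measurement is absolute or relative to landmarks); the EKF variant replaces A and H with Jacobians of nonlinear models.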


Sensors ◽  
2019 ◽  
Vol 19 (10) ◽  
pp. 2230 ◽  
Author(s):  
Su Wang ◽  
Yukinori Kobayashi ◽  
Ankit A. Ravankar ◽  
Abhijeet Ravankar ◽  
Takanori Emaru

Scale ambiguity and drift are inherent drawbacks of a purely visual monocular simultaneous localization and mapping (SLAM) system. This poses a crucial challenge for other robots with range sensors that must localize within a map previously built by a monocular camera. In this paper, a metrically inconsistent prior map built by monocular SLAM is subsequently used for localization on another robot equipped only with a laser range finder (LRF). To tackle the metric inconsistency, this paper proposes a 2D-LRF-based localization algorithm which allows the robot to locate itself and resolve the scale of the local map simultaneously. To align the 2D LRF data with the map, 2D structures are extracted from the 3D point cloud map obtained by the visual SLAM process. Next, a modified Monte Carlo localization (MCL) approach is proposed to estimate the robot's state, which comprises both the robot's pose and the map's relative scale. Finally, the effectiveness of the proposed system is demonstrated in experiments on a public benchmark dataset as well as in a real-world scenario. The experimental results indicate that the proposed method can globally localize the robot in real time, and that successful localization can be achieved even in a badly drifted map.
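The augmented particle state can be sketched as follows: each particle carries the pose plus a scale hypothesis, and the measurement likelihood compares the LRF ranges against the expected map ranges rescaled by that hypothesis. The Gaussian noise model and the parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_particles(n, scale_range=(0.5, 2.0)):
    """Each particle is (x, y, theta, s): pose plus local map scale s."""
    p = np.zeros((n, 4))
    p[:, 2] = rng.uniform(-np.pi, np.pi, n)
    p[:, 3] = rng.uniform(*scale_range, n)
    return p

def weight(particle, ranges_lrf, ranges_map, var=0.1):
    """Likelihood of the LRF ranges against map ranges rescaled by the
    particle's scale hypothesis (simple Gaussian measurement model)."""
    s = particle[3]
    err = ranges_lrf - s * ranges_map
    return np.exp(-0.5 * np.sum(err ** 2) / var)
```

Resampling then concentrates particles on the pose-scale hypotheses that best explain the scans, which is how the scale of a drifted local map can be recovered on the fly.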


Author(s):  
Zhao Zhang ◽  
Yi Zhang ◽  
Rui Huang

Simultaneous Localization and Mapping (SLAM) is the most important tool for map creation and autonomous navigation, and an indispensable component of driverless vehicles. Current SLAM algorithms suffer from unreliable feature matching and large registration errors. To reduce these deficiencies, we propose a key-frame estimation and local map update scheme comprising three parts: (1) a local map matching strategy; (2) a local map updating scheme; and (3) a key-frame selection scheme. Experimental results show that our scheme improves the performance of current localization methods.
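Key-frame selection schemes of the kind listed in part (3) are commonly simple heuristics; a hypothetical sketch (the criteria and thresholds are illustrative, not the paper's):

```python
def is_keyframe(n_tracked, n_last_kf, frames_since_kf,
                min_ratio=0.6, max_gap=20):
    """Insert a new key-frame when tracked features drop below a fraction
    of those seen in the last key-frame, or after a fixed frame gap."""
    if frames_since_kf >= max_gap:
        return True
    return n_tracked < min_ratio * n_last_kf
```

Keeping only such key-frames in the local map bounds the cost of local map matching while still refreshing the map before tracking degrades.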


2013 ◽  
Vol 694-697 ◽  
pp. 1931-1936
Author(s):  
Feng Ping Cao ◽  
Rong Ben Wang ◽  
Liang Xiu Zhang

In order to overcome the accumulated error of traditional localization methods for intelligent vehicles, such as dead reckoning and visual odometry, a simultaneous localization and mapping (SLAM) algorithm based on stereo vision is presented in this paper. First, the elements involved in the localization method are defined and a probability model for intelligent vehicle localization is proposed; the motion and observation models are then established, and a detailed implementation of the localization algorithm is given. Finally, an experiment is designed to confirm the effectiveness of the proposed method. Experimental results indicate that the algorithm can realize three-dimensional motion estimation for an intelligent vehicle and can effectively improve positioning precision.
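Motion and observation models of the kind established above commonly take the following form. This is a simplified planar illustration with a unicycle motion model and a range-bearing observation model; the paper's actual models are three-dimensional and stereo-vision based.

```python
import numpy as np

def motion_model(pose, u, dt):
    """Unicycle motion model: pose = (x, y, theta), control u = (v, omega)."""
    x, y, th = pose
    v, w = u
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + w * dt])

def observation_model(pose, landmark):
    """Range-bearing observation of a 2D landmark from the current pose."""
    dx, dy = landmark[0] - pose[0], landmark[1] - pose[1]
    r = np.hypot(dx, dy)
    bearing = np.arctan2(dy, dx) - pose[2]
    return np.array([r, bearing])
```

In the probabilistic formulation, each model is wrapped in a noise distribution: the motion model defines p(x_t | x_{t-1}, u_t) and the observation model defines p(z_t | x_t, m).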


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Nick Le Large ◽  
Frank Bieder ◽  
Martin Lauer

For the application of an automated, driverless race car, we aim to assure high map and localization quality for successful driving on previously unknown, narrow race tracks. To achieve this goal, it is essential to choose an algorithm that fulfills the requirements in terms of accuracy, computational resources and run time. We propose both a filter-based and a smoothing-based Simultaneous Localization and Mapping (SLAM) algorithm and evaluate them using real-world data collected by a Formula Student Driverless race car. The accuracy is measured by comparing the SLAM-generated map to a ground-truth map acquired with high-precision Differential GPS (DGPS) measurements. The evaluation shows that both algorithms meet the required time constraints thanks to a parallelized architecture, although GraphSLAM drains computational resources much faster than Extended Kalman Filter (EKF) SLAM. However, the analysis of the maps generated by the algorithms shows that GraphSLAM outperforms EKF SLAM in terms of accuracy.
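Map accuracy of the kind evaluated here is often scored as an RMSE between estimated landmark (cone) positions and their nearest ground-truth counterparts. A minimal sketch, assuming both maps are already expressed in the same frame (the paper's exact metric may differ):

```python
import numpy as np

def map_rmse(est_landmarks, gt_landmarks):
    """RMSE between estimated landmark positions and the nearest
    ground-truth landmark for each estimate (arrays of shape (n, 2))."""
    errs = []
    for p in est_landmarks:
        d = np.linalg.norm(gt_landmarks - p, axis=1)
        errs.append(d.min())
    return np.sqrt(np.mean(np.square(errs)))
```

Nearest-neighbor matching avoids needing known correspondences between SLAM landmarks and DGPS-surveyed cones, at the cost of being forgiving when landmarks are missed entirely.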


2021 ◽  
Vol 13 (12) ◽  
pp. 2351
Author(s):  
Alessandro Torresani ◽  
Fabio Menna ◽  
Roberto Battisti ◽  
Fabio Remondino

Mobile and handheld mapping systems are becoming widely used nowadays as fast and cost-effective data acquisition systems for 3D reconstruction purposes. While most research and commercial systems are based on active sensors, solutions employing only cameras and photogrammetry are attracting more and more interest due to their significantly lower cost, size and power consumption. In this work we propose an ARM-based, low-cost and lightweight stereo vision mobile mapping system based on a Visual Simultaneous Localization And Mapping (V-SLAM) algorithm. The prototype system, named GuPho (Guided Photogrammetric System), also integrates an in-house guidance system which enables optimized image acquisition, robust management of the cameras, and feedback on positioning and acquisition speed. The presented results show the effectiveness of the developed prototype in mapping large scenarios, enabling motion blur prevention, robust camera exposure control and accurate 3D results.
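Motion-blur prevention of the kind described reduces, in its simplest form, to bounding the exposure time by the apparent pixel velocity. A hypothetical sketch under a pinhole model with motion parallel to the image plane (not GuPho's actual control law):

```python
def max_exposure(speed_mps, depth_m, focal_px, max_blur_px=1.0):
    """Longest exposure time such that camera motion smears a point at the
    given depth by at most max_blur_px pixels (pinhole model, motion
    parallel to the image plane)."""
    pixel_speed = focal_px * speed_mps / depth_m   # apparent speed, px/s
    return max_blur_px / pixel_speed
```

For example, at 1 m/s, 2 m scene depth and a 500 px focal length, a one-pixel blur budget allows at most a 4 ms exposure; a guidance system can either cap the exposure accordingly or warn the operator to slow down.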


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2673
Author(s):  
Weibo Huang ◽  
Weiwei Wan ◽  
Hong Liu

The online system state initialization and simultaneous spatial-temporal calibration are critical for monocular Visual-Inertial Odometry (VIO), since these parameters are either not well provided or even unknown. Although impressive performance has been achieved, most existing methods are designed for filter-based VIOs; for optimization-based VIOs, few online spatial-temporal calibration methods exist in the literature. In this paper, we propose an optimization-based online initialization and spatial-temporal calibration method for VIO. The method does not need any prior knowledge of the spatial and temporal configurations. It estimates the initial states of metric scale, velocity, gravity, and Inertial Measurement Unit (IMU) biases, and calibrates the coordinate transformation and time offset between the camera and IMU sensors. The method works as follows. First, it uses a time offset model and two short-term motion interpolation algorithms to align and interpolate the camera and IMU measurement data. Then, the aligned and interpolated results are sent to an incremental estimator to estimate the initial states and the spatial-temporal parameters. After that, a bundle adjustment is additionally included to improve the accuracy of the estimated results. Experiments using both synthetic and public datasets are performed to examine the performance of the proposed method. The results show that both the initial states and the spatial-temporal parameters can be well estimated, and the method outperforms other contemporary methods used for comparison.
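The alignment step can be sketched as follows: IMU measurements are interpolated at the camera timestamps shifted by the current time-offset estimate. The clock convention and plain linear interpolation are illustrative assumptions; the paper uses two dedicated short-term motion interpolation algorithms.

```python
import numpy as np

def align_imu(t_imu, imu_data, t_cam, t_d):
    """Interpolate each IMU channel at the camera timestamps shifted by the
    time-offset estimate t_d (here: camera time + t_d = IMU time).
    imu_data has one row per IMU sample and one column per channel."""
    t_query = np.asarray(t_cam) + t_d
    cols = [np.interp(t_query, t_imu, imu_data[:, k])
            for k in range(imu_data.shape[1])]
    return np.stack(cols, axis=1)
```

As the estimator refines t_d, the measurements are re-aligned, so the residuals fed to the incremental estimator stay consistent with the latest temporal calibration.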

