IMU-Assisted 2D SLAM Method for Low-Texture and Dynamic Environments

2018 ◽  
Vol 8 (12) ◽  
pp. 2534 ◽  
Author(s):  
Zhongli Wang ◽  
Yan Chen ◽  
Yue Mei ◽  
Kuo Yang ◽  
Baigen Cai

Generally, the key issues of 2D LiDAR-based simultaneous localization and mapping (SLAM) for indoor applications include data association (DA) and closed-loop detection. In particular, a low-texture environment, meaning one with no obvious changes between two consecutive scans, combined with moving objects in the environment poses great challenges for DA and closed-loop detection, and the accuracy and consistency of SLAM may be badly degraded. There is not much literature that addresses this issue. In this paper, a mapping strategy is first exploited to improve the performance of 2D SLAM in dynamic environments. Secondly, a fusion method that combines an IMU sensor with a 2D LiDAR, based on the framework of the extended Kalman filter (EKF), is proposed to enhance performance in low-texture environments. In the front-end of the proposed SLAM method, an initial motion estimate is obtained from the output of the EKF and taken as the initial pose for the scan-matching problem, which is then optimized by the Levenberg–Marquardt (LM) algorithm. For the back-end optimization, a sparse pose adjustment (SPA) method is employed. To improve accuracy, the grid map is updated with bicubic interpolation for derivative computation. With the improvements in both the DA process and the back-end optimization stage, the accuracy and consistency of SLAM results in low-texture environments are enhanced. Qualitative and quantitative experiments with open-loop and closed-loop cases have been conducted and the results analyzed, confirming that the proposed method is effective in low-texture and dynamic indoor environments.
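The front-end described above, an EKF-predicted pose seeding a Levenberg–Marquardt scan matcher, can be sketched on a toy problem. This is a hedged illustration, not the paper's implementation: it uses point-to-point residuals with known correspondences instead of a grid map, and the "EKF output" is simulated as a rough initial pose near the truth.

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def lm_scan_match(scan, ref, x0, iters=50, lam=1e-3):
    """Refine pose x = (tx, ty, theta) so that R(theta) @ scan + t matches ref.

    Levenberg-Marquardt on point-to-point residuals with known
    correspondences (a toy stand-in for grid-map scan matching)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        R = rot(x[2])
        pred = scan @ R.T + x[:2]
        r = (pred - ref).ravel()                  # stacked residual, shape (2N,)
        # Jacobian of each residual w.r.t. (tx, ty, theta)
        dR = np.array([[-np.sin(x[2]), -np.cos(x[2])],
                       [ np.cos(x[2]), -np.sin(x[2])]])
        J = np.zeros((2 * len(scan), 3))
        J[0::2, 0] = 1.0
        J[1::2, 1] = 1.0
        J[:, 2] = (scan @ dR.T).ravel()
        # damped normal equations (the LM step)
        delta = np.linalg.solve(J.T @ J + lam * np.eye(3), -J.T @ r)
        x = x + delta
        if np.linalg.norm(delta) < 1e-10:
            break
    return x

# Synthetic reference points and a rigidly transformed scan of them
rng = np.random.default_rng(0)
ref = rng.uniform(-5, 5, size=(30, 2))
true_pose = np.array([0.4, -0.2, 0.1])            # tx, ty, theta
scan = (ref - true_pose[:2]) @ rot(true_pose[2])  # inverse transform

# "EKF output": a rough initial pose near the truth
x0 = true_pose + np.array([0.05, -0.05, 0.02])
est = lm_scan_match(scan, ref, x0)
```

With a good EKF prior, the LM iteration converges in a handful of steps; without one, scan matching in a low-texture corridor has little gradient to follow, which is exactly the failure mode the fusion addresses.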

Author(s):  
C. Li ◽  
Z. Kang ◽  
J. Yang ◽  
F. Li ◽  
Y. Wang

Abstract. Visual Simultaneous Localization and Mapping (SLAM) systems have been widely investigated, since traditional positioning technologies, such as Global Navigation Satellite Systems (GNSS), cannot accomplish tasks in restricted environments. However, traditional SLAM methods, which are mostly based on point-feature tracking, usually fail in harsh environments. Previous works have shown that insufficient feature points caused by missing textures, feature mismatches caused by overly fast camera movements, and abrupt illumination changes will eventually cause state estimation to fail. Meanwhile, pedestrians are unavoidable, which introduces false feature associations and thus violates the strict assumption in SLAM that the unknown environment is static. To cope with the challenges these factors pose in complex indoor environments, this paper proposes a semantic-assisted visual-inertial odometry (VIO) system for low-texture scenes and highly dynamic environments. A trained U-net is used to detect moving objects, and all feature points in the detected dynamic object areas are eliminated, so that moving objects do not participate in the pose solution, improving robustness in dynamic environments. Finally, constraints from an inertial measurement unit (IMU) are added for low-texture environments. To evaluate the performance of the proposed method, experiments were conducted on the public EuRoC and TUM datasets, and the results demonstrate that our approach is robust in complex indoor environments.
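The dynamic-feature elimination step can be sketched as follows. The mask here is a hand-made stand-in for a U-net segmentation output, and the function name is hypothetical; the point is simply that keypoints landing inside a segmented dynamic region are dropped before pose estimation.

```python
import numpy as np

def cull_dynamic_features(keypoints, dynamic_mask):
    """Drop feature points that fall inside a segmented dynamic region.

    keypoints    : (N, 2) array of (u, v) pixel coordinates
    dynamic_mask : (H, W) boolean array, True where the segmentation
                   network (e.g. a U-net) labels a moving object
    """
    u = keypoints[:, 0].astype(int)
    v = keypoints[:, 1].astype(int)
    h, w = dynamic_mask.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    keep = inside & ~dynamic_mask[v.clip(0, h - 1), u.clip(0, w - 1)]
    return keypoints[keep]

# Toy 100x100 image: a "pedestrian" occupies the lower-right quadrant
mask = np.zeros((100, 100), dtype=bool)
mask[50:, 50:] = True

kps = np.array([[10.0, 10.0], [80.0, 80.0], [30.0, 70.0], [70.0, 30.0]])
static_kps = cull_dynamic_features(kps, mask)   # (80, 80) is culled
```

Only the surviving, presumably static keypoints would then be passed to the VIO pose solver.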


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 230
Author(s):  
Xiangwei Dang ◽  
Zheng Rong ◽  
Xingdong Liang

Accurate localization and reliable mapping are essential for the autonomous navigation of robots. As one of the core technologies for autonomous navigation, Simultaneous Localization and Mapping (SLAM) has attracted widespread attention in recent decades. Based on vision or LiDAR sensors, great efforts have been devoted to achieving real-time SLAM that can support a robot's state estimation. However, most mature SLAM methods work under the assumption that the environment is static, and in dynamic environments they yield degraded performance or even fail. In this paper, we first quantitatively evaluate the performance of state-of-the-art LiDAR-based SLAM methods, taking into account different patterns of moving objects in the environment. Through semi-physical simulation, we observed that the shape, size, and distribution of moving objects can all significantly impact SLAM performance, and we obtained instructive results through a quantitative comparison between LOAM and LeGO-LOAM. Secondly, based on this investigation, a novel approach named EMO is proposed that eliminates moving objects for SLAM by fusing LiDAR and mmW-radar, improving the accuracy and robustness of state estimation. The method exploits the complementary characteristics of the two sensors to fuse information at their two different resolutions. Moving objects are efficiently detected by the radar based on the Doppler effect, accurately segmented and localized by the LiDAR, and then filtered out of the point clouds through data association with accurate synchronization in time and space. Finally, the point clouds representing the static environment are used as the input to SLAM. The proposed approach is evaluated through experiments using both semi-physical simulation and real-world datasets. The results demonstrate the effectiveness of the method at improving SLAM performance in accuracy (at least a 30% decrease in absolute position error) and robustness in dynamic environments.
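The Doppler-based detection and point-cloud filtering idea can be sketched as below. This is a simplified, assumed geometry (1-D ego motion along x, a single association radius), not the paper's EMO pipeline: a static target's radial velocity is predicted from ego motion, and radar detections that disagree are treated as movers whose associated LiDAR points are removed.

```python
import numpy as np

def filter_moving_points(lidar_xy, radar_dets, ego_speed, v_thresh=0.5, radius=1.0):
    """Remove LiDAR points near radar detections whose Doppler velocity,
    after ego-motion compensation, indicates a moving object.

    lidar_xy  : (N, 2) LiDAR points in the ego frame
    radar_dets: list of (x, y, v_radial) radar detections
    ego_speed : forward speed of the ego vehicle along +x (m/s)
    """
    keep = np.ones(len(lidar_xy), dtype=bool)
    for x, y, v_r in radar_dets:
        rng = np.hypot(x, y)
        # radial velocity a *static* target would show due to ego motion
        v_static = -ego_speed * (x / rng)
        if abs(v_r - v_static) > v_thresh:          # Doppler disagrees: moving
            d = np.linalg.norm(lidar_xy - np.array([x, y]), axis=1)
            keep &= d > radius                      # cut associated LiDAR points
    return lidar_xy[keep]

# A static wall ahead plus a moving pedestrian, seen by both sensors
wall = np.stack([np.full(20, 10.0), np.linspace(-2, 2, 20)], axis=1)
person = np.array([[5.0, 5.0], [5.2, 5.1]])
cloud = np.vstack([wall, person])
radar = [(10.0, 0.0, -2.0),       # static: matches the ego-motion prediction
         (5.0, 5.0, 1.0)]         # moving: Doppler disagrees
static_cloud = filter_moving_points(cloud, radar, ego_speed=2.0)
```

Only the 20 wall points survive; the filtered cloud is what would be handed to LOAM-style SLAM.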


Author(s):  
H. A. Mohamed ◽  
A. Moussa ◽  
M. M. Elhabiby ◽  
N. El-Sheimy

<p><strong>Abstract.</strong> Autonomous vehicles, such as wheeled robots and drones, contribute efficiently to search and rescue operations. Especially in indoor environments, these autonomous vehicles rely on the simultaneous localization and mapping (SLAM) approach to construct a map of the unknown environment and simultaneously estimate the vehicle’s position within this map. The result of the scan-matching process, which is a key step in many SLAM approaches, plays a fundamental role in the accuracy of the map construction. Typically, local and global scan-matching approaches that utilize a laser rangefinder suffer from accumulated errors, as both approaches are sensitive to previous history. The reference key frame (RKF) algorithm reduces error accumulation as it decreases the dependency on the accuracy of the previous history. However, the RKF algorithm still suffers, as do most SLAM approaches, from a scale-shrinking problem when scanning corridors that exceed the maximum detection range of the laser rangefinder. The shrinking in long corridors arises because the RKF algorithm fails to estimate the longitudinal movement and this information is not available from an external source either. This paper proposes an improvement to the RKF algorithm, achieved by integrating the output of optical flow with the RKF algorithm using an extended Kalman filter (EKF) to overcome the shrinking problem. The performance of the proposed algorithm is compared with RKF, iterative closest point (ICP), and Hector SLAM in corridors that exceed the maximum detection range of the laser rangefinder.</p>
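Why fusing optical flow fixes the longitudinal degeneracy can be seen in a minimal information-form fusion of two Gaussian estimates. This is an illustrative sketch with invented numbers, not the paper's EKF: scan matching is modeled as near-blind along the corridor (huge longitudinal variance), while optical-flow dead reckoning is decent longitudinally but drifts laterally.

```python
import numpy as np

def fuse_info(x1, P1, x2, P2):
    """Information-form fusion of two independent Gaussian estimates:
    the fused mean weights each source by its inverse covariance."""
    L1, L2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(L1 + L2)
    return P @ (L1 @ x1 + L2 @ x2), P

# Scan matching (RKF-like): accurate laterally, near-blind longitudinally
x_scan = np.array([6.0, 0.02])       # (longitudinal, lateral): shrunk to 6 m
P_scan = np.diag([100.0, 0.01])      # huge longitudinal variance

# Optical flow dead reckoning: reasonable longitudinally, drifts laterally
x_flow = np.array([9.8, 0.3])
P_flow = np.diag([0.25, 1.0])

x_fused, P_fused = fuse_info(x_scan, P_scan, x_flow, P_flow)
```

The fused estimate takes its longitudinal component almost entirely from the flow (≈9.8 m rather than the shrunken 6 m) and its lateral component from the scan match, which is the qualitative behavior the EKF integration is after.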


Robotica ◽  
2019 ◽  
Vol 38 (2) ◽  
pp. 256-270 ◽  
Author(s):  
Jiyu Cheng ◽  
Yuxiang Sun ◽  
Max Q.-H. Meng

Summary. Visual simultaneous localization and mapping (visual SLAM) has been well developed in recent decades. To facilitate tasks such as path planning and exploration, traditional visual SLAM systems usually provide mobile robots with a geometric map, which overlooks semantic information. To address this problem, inspired by the recent success of deep neural networks, we combine one with a visual SLAM system to conduct semantic mapping. Both the geometric and semantic information are projected into 3D space to generate a 3D semantic map. We also use an optical-flow-based method to deal with moving objects, so that our method is capable of working robustly in dynamic environments. We have performed experiments on the public TUM dataset and our recorded office dataset. Experimental results demonstrate the feasibility and impressive performance of the proposed method.
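The projection of per-pixel semantic labels into 3D space rests on standard pinhole back-projection: a labeled pixel with known depth maps to a camera-frame point via p = depth · K⁻¹ [u, v, 1]ᵀ. A minimal sketch, with assumed intrinsics (the values below are hypothetical, TUM-like numbers):

```python
import numpy as np

def backproject(u, v, depth, K):
    """Back-project a pixel (u, v) with metric depth into a 3-D
    camera-frame point: p = depth * K^-1 @ [u, v, 1]^T."""
    return depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))

# Hypothetical pinhole intrinsics (fx, fy, cx, cy)
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

# A pixel the network labeled, observed at 2 m depth
p = backproject(424.5, 239.5, 2.0, K)   # -> [0.4, 0.0, 2.0] in camera frame
```

Attaching the pixel's class label to the resulting 3D point, over all labeled pixels and keyframes, yields the semantic point map described above.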


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7428
Author(s):  
Wennan Chai ◽  
Chao Li ◽  
Mingyue Zhang ◽  
Zhen Sun ◽  
Hao Yuan ◽  
...  

Visual-inertial simultaneous localization and mapping (SLAM) is a feasible indoor positioning approach that combines visual SLAM with inertial navigation. Inertial navigation accumulates drift errors due to state propagation and the bias of the inertial measurement unit (IMU). Visual-inertial SLAM can correct these drift errors via loop detection and local pose optimization; however, if the trajectory is not a closed loop, the drift error might not be significantly reduced. This paper presents a novel pedestrian dead reckoning (PDR)-aided visual-inertial SLAM that takes advantage of an enhanced vanishing point (VP) observation. The VP is integrated into the visual-inertial SLAM as an external, drift-free observation to correct the system drift error. Additionally, the estimated trajectory’s scale is affected by IMU measurement errors in visual-inertial SLAM, so the PDR velocity is employed to constrain the double integration of the acceleration measurements from the IMU. Furthermore, to enhance the proposed system’s robustness and positioning accuracy, local optimization based on a sliding window and global optimization based on a segmentation window are proposed. A series of experiments is conducted on the public ADVIO dataset and a self-collected dataset to compare the proposed system with visual-inertial SLAM. The results demonstrate that the proposed optimization method can effectively correct the accumulated drift error in the proposed visual-inertial SLAM system.
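Why a vanishing point gives a drift-free observation is worth a small worked example. The VP of a family of parallel 3D lines with direction d projects to K·d up to scale, so d ∝ K⁻¹[u, v, 1]ᵀ and the corridor's yaw relative to the camera falls out directly; no integration, hence no drift. A hedged sketch with assumed intrinsics:

```python
import numpy as np

def heading_from_vp(vp_uv, K):
    """Yaw of a dominant (e.g. corridor) direction from its vanishing point.

    The VP of parallel 3-D lines with direction d projects to K @ d up to
    scale, so d is proportional to K^-1 @ [u, v, 1]^T and
    yaw = atan2(d_x, d_z) in the camera frame."""
    d = np.linalg.inv(K) @ np.array([vp_uv[0], vp_uv[1], 1.0])
    d /= np.linalg.norm(d)
    return np.arctan2(d[0], d[2])

# Hypothetical pinhole intrinsics
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Corridor axis 0.1 rad to the right of the optical axis:
# its vanishing point sits at u = cx + fx * tan(0.1)
u = 320.0 + 500.0 * np.tan(0.1)
yaw = heading_from_vp((u, 240.0), K)
```

Each such yaw observation can be fed to the back-end as an absolute heading constraint, which is how the VP corrects accumulated rotational drift between loop closures.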


Author(s):  
Jessica Leu ◽  
Masayoshi Tomizuka

Abstract. Real-time, safe, and stable motion planning in co-robot systems involving dynamic human-robot interaction (HRI) remains challenging due to the time-varying nature of the problem. One of the biggest challenges is to guarantee the closed-loop stability of the planning algorithm in dynamic environments. Typically, this can be addressed if there exists a perfect predictor that precisely predicts the future motions of the obstacles; unfortunately, a perfect predictor is unattainable. In the HRI environments considered in this paper, human workers and other robots are the obstacles for the ego robot. We discuss necessary conditions for the closed-loop stability of a planning problem using the framework of model predictive control (MPC). We conclude that the predictor needs to be able to detect an obstacle's movement-mode change within a time-delay allowance, and that the MPC needs a sufficient prediction horizon and a proper cost function. These allow the MPC to tolerate a bounded amount of prediction uncertainty while remaining closed-loop stable, and still to avoid collision when an obstacle's movement exceeds that tolerance. The closed-loop performance is also investigated using the notion of M-convergence, which guarantees finite local convergence (at least M steps ahead) of the open-loop trajectories toward the closed-loop trajectory. With this notion, we verify the performance of the proposed MPC with stability-enhanced prediction through simulations and experiments. With the proposed method, the robot can better handle dynamic environments, and the closed-loop cost is reduced.
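The role of the prediction horizon can be illustrated with a toy receding-horizon planner: a 1-D double integrator with exhaustive search over a small discrete action set. This is a didactic stand-in for the paper's MPC, not its formulation; the obstacle "predictor" simply reports a fixed occupied interval that clears after a few steps. Note the horizon must cover the braking distance, or action sequences that end in unavoidable collision slip past the collision check.

```python
import itertools

def mpc_step(x, v, obst_pred, goal, horizon=4, dt=0.5, actions=(-1.0, 0.0, 1.0)):
    """One receding-horizon step for a 1-D double integrator.

    Exhaustively scores short action sequences; the cost penalizes distance
    to the goal, and any rollout entering the predicted obstacle interval
    is discarded as infeasible."""
    best_cost, best_a0 = float("inf"), 0.0
    for seq in itertools.product(actions, repeat=horizon):
        xi, vi, cost = x, v, 0.0
        feasible = True
        for k, a in enumerate(seq):
            vi += a * dt
            xi += vi * dt
            lo, hi = obst_pred[k]
            if lo <= xi <= hi:                 # predicted collision
                feasible = False
                break
            cost += (xi - goal) ** 2 + 0.1 * a ** 2
        if feasible and cost < best_cost:
            best_cost, best_a0 = cost, seq[0]  # keep only the first action
    return best_a0

# A "pedestrian" occupies [3, 4] m for the first 10 steps, then leaves
x, v, goal = 0.0, 0.0, 8.0
xs = []
for t in range(30):
    occupied = t < 10
    obst_pred = [(3.0, 4.0) if occupied else (1e9, 1e9)] * 4
    a = mpc_step(x, v, obst_pred, goal)
    v += a * 0.5
    x += v * 0.5
    xs.append(x)
```

In closed loop the robot approaches the occupied interval, brakes and waits just short of it (the 4-step horizon exceeds the braking time at the speeds it reaches), then proceeds to the goal once the predictor reports the interval clear.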


2019 ◽  
Vol 39 (2) ◽  
pp. 297-307 ◽  
Author(s):  
Haoyao Chen ◽  
Hailin Huang ◽  
Ye Qin ◽  
Yanjie Li ◽  
Yunhui Liu

Purpose. Multi-robot laser-based simultaneous localization and mapping (SLAM) in large-scale environments is an essential but challenging problem in mobile robotics, especially when no prior knowledge is shared between robots. Moreover, the cumulative errors of each individual robot exert a serious negative effect on loop detection and map fusion. To address these problems, this paper proposes an efficient approach that combines laser and vision measurements.
Design/methodology/approach. A multi-robot visual laser-SLAM is developed to realize robust and efficient SLAM in large-scale environments; both vision and laser loop detections are integrated to detect robust loops. A method based on Oriented FAST and Rotated BRIEF (ORB) feature detection and bag of words (BoW) is developed to ensure the robustness and computational efficiency of the multi-robot SLAM system, and a robust and efficient graph fusion algorithm is proposed to merge pose graphs from different robots.
Findings. The proposed method can detect loops more quickly and accurately than laser-only SLAM, and it can fuse the submaps of each robot to improve the efficiency, accuracy, and robustness of the system.
Originality/value. Compared with state-of-the-art multi-robot SLAM approaches, this paper proposes a novel and more sophisticated approach: vision-based and laser-based loops are integrated to realize robust loop detection, ORB features and BoW techniques are utilized to attain real-time performance, and random sample consensus (RANSAC) and least-squares methods are used to remove outlier loops among robots.
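The BoW loop-detection step amounts to comparing histograms of quantized ORB descriptors and thresholding their similarity. A hedged sketch with a tiny hypothetical vocabulary (real systems use TF-IDF-weighted trees with tens of thousands of words):

```python
import numpy as np

def bow_similarity(h1, h2):
    """Cosine similarity between two bag-of-words histograms."""
    return float(h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2)))

def detect_loops(histograms, query, thresh=0.8, exclude_recent=2):
    """Indices of past keyframes similar enough to be loop-closure
    candidates; the most recent frames are excluded to avoid trivial
    matches with the robot's current neighborhood."""
    n = max(len(histograms) - exclude_recent, 0)
    return [i for i in range(n) if bow_similarity(histograms[i], query) >= thresh]

# Hypothetical 64-word vocabulary; each keyframe sees a disjoint word block
vocab = 64
frames = []
for i in range(8):
    h = np.zeros(vocab)
    h[8 * i:8 * i + 8] = 1.0    # word counts for keyframe i
    frames.append(h)

query = frames[2].copy()        # the robot revisits the place of keyframe 2,
query[0] = 1.0                  # plus one stray word from elsewhere
loops = detect_loops(frames, query)
```

Candidates surviving the threshold would then be geometrically verified (e.g. with RANSAC, as the abstract notes) before the pose graphs are fused.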


2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Shuhuan Wen ◽  
Kamal Mohammed Othman ◽  
Ahmad B. Rad ◽  
Yixuan Zhang ◽  
Yongsheng Zhao

We present a SLAM method with a closed-loop controller for navigation of the NAO humanoid robot from Aldebaran. The method is based on the integration of a laser and a vision system: the camera is used to recognize landmarks, whereas the laser provides the information for simultaneous localization and mapping (SLAM). The k-means clustering method is implemented to extract data belonging to different objects. In addition, the robot avoids obstacles using an avoidance function. The closed-loop controller reduces the error between the real and estimated positions. Finally, simulations and experiments show that the proposed method is efficient and reliable for navigation in indoor environments.
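The k-means step for separating laser returns into objects can be sketched in a few lines. This is a plain textbook k-means on synthetic 2-D scan points, assuming (as a simplification) that the number of objects k is known:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: cluster 2-D laser returns into k objects."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated "objects" seen in a scan
rng = np.random.default_rng(42)
obj_a = rng.normal([1.0, 1.0], 0.1, size=(25, 2))
obj_b = rng.normal([4.0, -2.0], 0.1, size=(25, 2))
labels, centers = kmeans(np.vstack([obj_a, obj_b]), k=2)
```

Each recovered cluster can then be matched against camera-recognized landmarks or treated as an obstacle by the avoidance function.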


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7612
Author(s):  
Quande Yuan ◽  
Zhenming Zhang ◽  
Yuzhen Pi ◽  
Lei Kou ◽  
Fangfang Zhang

Because visual simultaneous localization and mapping (vSLAM) is easily disturbed by changes in camera viewpoint and scene appearance when building a globally consistent map, the robustness and real-time performance of key frame selection often cannot meet requirements. To solve this problem, a real-time closed-loop detection method based on a dynamic Siamese network is proposed in this paper. First, a dynamic Siamese network-based fast conversion learning model is constructed to handle the impact of external changes on key frame judgments, and an elementwise convergence strategy is adopted to ensure the accurate positioning of key frames in the closed-loop judgment process. Second, a joint training strategy is designed so that the model parameters can be learned offline in parallel from tagged video sequences, which effectively improves the speed of closed-loop detection. Finally, the proposed method is applied experimentally to three typical closed-loop detection scenario datasets, and the experimental results demonstrate the effectiveness and robustness of the proposed method under the interference of complex scenes.

