Visual Features Assisted Robot Localization in Symmetrical Environment Using Laser SLAM

Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1772
Author(s):  
Gengyu Ge ◽  
Yi Zhang ◽  
Qin Jiang ◽  
Wei Wang

Localization, estimating the position and orientation of a robot, has been solved in asymmetrical environments by various 2D laser rangefinder simultaneous localization and mapping (SLAM) approaches. Laser-based SLAM generates an occupancy grid map; the popular Monte Carlo Localization (MCL) method then spreads particles on the map and calculates the robot's position with a probabilistic algorithm. However, this can be difficult, especially in symmetrical environments, because landmarks or features may not be sufficient to determine the robot's orientation; sometimes the position is not even unique if the robot does not stay at the geometric center. This paper presents a novel approach to the robot localization problem in a symmetrical environment using a visual-features-assisted method. Laser range measurements are used to estimate the robot's position, while visual features determine its orientation. Firstly, we convert raw laser range scan data into coordinate data and calculate the geometric center. Secondly, we calculate the distances from the geometric center to all endpoints and find the longest ones. Then, we compare those distances, fit lines, extract corner points, and calculate the distance between adjacent corner points to determine whether the environment is symmetrical. Finally, if the environment is symmetrical, visual features based on the ORB keypoint detector and descriptor are added to the system to determine the robot's orientation. The experimental results show that our approach can successfully determine the position of the robot in a symmetrical environment where ordinary MCL and its extended localization methods always fail.
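The symmetry check described in this abstract can be sketched roughly in Python. The version below simplifies the paper's pipeline to a mirror test on the center-to-endpoint distance profile, omitting the line-fitting and corner-extraction steps; the function name and tolerance are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def is_environment_symmetrical(ranges, angles, tol=0.05):
    """Rough symmetry test on a 2D laser scan, following the abstract's
    outline: convert ranges to Cartesian endpoints, find the geometric
    center, then test whether the profile of center-to-endpoint
    distances is mirror-symmetric."""
    # Step 1: polar -> Cartesian endpoints
    pts = np.column_stack((ranges * np.cos(angles),
                           ranges * np.sin(angles)))
    # Step 2: geometric center of all endpoints
    center = pts.mean(axis=0)
    # Step 3: distances from the center to every endpoint
    d = np.linalg.norm(pts - center, axis=1)
    # Step 4: a symmetric room yields a distance profile that
    # matches its own reversal
    return bool(np.allclose(d, d[::-1], atol=tol))
```

A circular room (all ranges equal) passes the test, while a scan with a single protruding wall section shifts the geometric center and breaks the mirror property.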

Sensors ◽  
2019 ◽  
Vol 19 (10) ◽  
pp. 2230 ◽  
Author(s):  
Su Wang ◽  
Yukinori Kobayashi ◽  
Ankit A. Ravankar ◽  
Abhijeet Ravankar ◽  
Takanori Emaru

Scale ambiguity and drift are inherent drawbacks of a purely visual monocular simultaneous localization and mapping (SLAM) system. This can be a crucial challenge for other robots with range sensors that must localize in a map previously built by a monocular camera. In this paper, a metrically inconsistent prior map is built by monocular SLAM and subsequently used for localization on another robot equipped only with a laser range finder (LRF). To tackle the metric inconsistency, this paper proposes a 2D-LRF-based localization algorithm that allows the robot to locate itself and resolve the scale of the local map simultaneously. To align the data from the 2D LRF to the map, 2D structures are extracted from the 3D point cloud map obtained by the visual SLAM process. Next, a modified Monte Carlo localization (MCL) approach is proposed to estimate the robot's state, which is composed of both the robot's pose and the map's relative scale. Finally, the effectiveness of the proposed system is demonstrated in experiments on a public benchmark dataset as well as in a real-world scenario. The experimental results indicate that the proposed method is able to globally localize the robot in real time; successful localization can be achieved even in a badly drifted map.
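The augmented particle state described here (pose plus map scale) can be illustrated with a minimal sketch. The motion model below is an assumption for illustration, not the paper's exact formulation: odometry is metric, so the translation is divided by each particle's scale hypothesis before being applied in map coordinates, and the scale itself performs a slow random walk so the filter can converge on the true map scale.

```python
import math
import random
from dataclasses import dataclass

@dataclass
class Particle:
    """Augmented MCL state: robot pose plus the map's relative scale."""
    x: float
    y: float
    theta: float
    scale: float
    weight: float = 1.0

def motion_update(p, dx, dtheta, sigma_xy=0.02, sigma_s=0.01):
    """Propagate one particle with metric odometry (dx, dtheta).
    The translation is rescaled by the particle's scale hypothesis
    so it is expressed in the (inconsistent) map's units."""
    p.theta += dtheta + random.gauss(0.0, sigma_xy)
    step = dx / p.scale  # metric odometry -> map units
    p.x += step * math.cos(p.theta) + random.gauss(0.0, sigma_xy)
    p.y += step * math.sin(p.theta) + random.gauss(0.0, sigma_xy)
    # slow random walk on the scale hypothesis
    p.scale = max(1e-3, p.scale + random.gauss(0.0, sigma_s))
    return p
```

In a full filter, the measurement update would weight particles by how well the rescaled LRF scan matches the 2D structures extracted from the visual map, and resampling would concentrate particles around the correct pose and scale.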


Author(s):  
Wenshan Wang ◽  
Qixin Cao ◽  
Xiaoxiao Zhu ◽  
Masaru Adachi

Purpose – Robot localization technology has been widely studied for decades and many remarkable approaches have been developed. In practice, however, this technology has hardly been applied to common day-to-day deployment scenarios. The purpose of this paper is to present a novel approach that focuses on improving localization robustness in complicated environments. Design/methodology/approach – The localization robustness is improved by dynamically switching among localization components (such as the environmental camera, the laser range finder and the depth camera). As the components are highly heterogeneous, they are developed under the robotic technology component (RTC) framework, which simplifies development by increasing the potential for reusability and future expansion. To realize this switching, the localization reliability of each component is modeled, and a configuration method for dynamically selecting dependable components at run-time is presented. Findings – The experimental results show that this approach significantly reduces robot-lost situations in complicated environments. The robustness is further enhanced through the cooperation of heterogeneous localization components. Originality/value – A multi-component automatic switching approach for robot localization is developed and described in this paper. The reliability of this system is shown to be a substantial improvement over single-component localization techniques.
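A minimal sketch of reliability-based component switching, assuming each component is summarized by a scalar reliability score; the component names and the threshold are hypothetical, not the RTC-based implementation from the paper.

```python
def select_component(components, reliability, threshold=0.5):
    """Pick the localization component with the highest modeled
    reliability; return None if no component is dependable enough,
    signalling that the robot should fall back to a recovery behavior."""
    best = max(components, key=lambda c: reliability[c])
    return best if reliability[best] >= threshold else None
```

At run-time the reliability scores would be re-estimated continuously (for example from sensor health and match quality), so the selected component can change as the robot moves through the environment.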


2014 ◽  
Vol 11 (03) ◽  
pp. 1450023 ◽  
Author(s):  
Hafez Eslami Manoochehri ◽  
Kamal Jamshidi ◽  
Amirhassan Monadjemi ◽  
Hamed Shahbazi

In this paper, a method to find curvilinear path features is proposed. These features are defined as the centers and radii of the circles that best fit the curved parts of a curvilinear path. In our previous research, we proposed a hierarchical layered paradigm for a humanoid robot to learn how to walk along a curvilinear path. This model consists of four layers, each with a specific purpose and responsible for providing feedback to the layer below. In this study, we focus on the first layer, a high-level decision unit responsible for providing feedback and parameters to the lower layer using the robot's sensory inputs. The ultimate goal is for the robot to learn to walk along the curvilinear path; the first step toward this goal is to find the robot's position in the environment. In this work, the Monte Carlo localization method is used for robot localization. An artificial potential field is then used to generate a path between the robot and a goal. Finally, we propose an algorithm that searches for the circles that best fit the curved parts of the path. Finding these features helps the learning process in the lower layers of the learning model. We use the robot's camera as the only sensor to identify landmarks and obstacles for robot localization, path planning, and finding curvilinear path features.
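One standard way to extract the center-and-radius features this abstract describes is an algebraic (Kasa) least-squares circle fit. The paper does not specify its fitting method, so the sketch below is offered only as a plausible building block.

```python
import math
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F) in a linear
    least-squares sense, then recover center (a, b) and radius r
    from D = -2a, E = -2b, F = a^2 + b^2 - r^2."""
    A = np.column_stack((xs, ys, np.ones_like(xs)))
    rhs = -(xs**2 + ys**2)
    D, E, F = np.linalg.lstsq(A, rhs, rcond=None)[0]
    a, b = -D / 2.0, -E / 2.0
    r = math.sqrt(a * a + b * b - F)
    return a, b, r
```

Applied to the points of each curved segment of the planned path, the fit yields the (center, radius) pair that parameterizes the turn for the lower learning layers.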


Author(s):  
Zewen Xu ◽  
Zheng Rong ◽  
Yihong Wu

Abstract: In recent years, simultaneous localization and mapping in dynamic environments (dynamic SLAM) has attracted significant attention from both academia and industry. Some pioneering work on this technique has expanded the potential of robotic applications. Compared to standard SLAM under the static world assumption, dynamic SLAM divides features into static and dynamic categories and leverages each type of feature properly. Therefore, dynamic SLAM can provide more robust localization for intelligent robots that operate in complex dynamic environments. Additionally, to meet the demands of some high-level tasks, dynamic SLAM can be integrated with multiple object tracking. This article presents a survey on dynamic SLAM from the perspective of feature choices. A discussion of the advantages and disadvantages of different visual features is provided in this article.


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 230
Author(s):  
Xiangwei Dang ◽  
Zheng Rong ◽  
Xingdong Liang

Accurate localization and reliable mapping are essential for the autonomous navigation of robots. As one of the core technologies for autonomous navigation, Simultaneous Localization and Mapping (SLAM) has attracted widespread attention in recent decades. Based on vision or LiDAR sensors, great efforts have been devoted to achieving real-time SLAM that can support a robot's state estimation. However, most mature SLAM methods work under the assumption that the environment is static, while in dynamic environments they yield degraded performance or even fail. In this paper, we first quantitatively evaluate the performance of state-of-the-art LiDAR-based SLAM methods, taking into account different patterns of moving objects in the environment. Through semi-physical simulation, we observed that the shape, size, and distribution of moving objects can all significantly impact SLAM performance, and we obtained instructive results from a quantitative comparison between LOAM and LeGO-LOAM. Secondly, based on this investigation, a novel approach named EMO is proposed for eliminating moving objects in SLAM by fusing LiDAR and mmW-radar, towards improving the accuracy and robustness of state estimation. The method fully exploits the complementary characteristics of the two sensors to fuse sensor information at two different resolutions. Moving objects can be efficiently detected by radar based on the Doppler effect, accurately segmented and localized by LiDAR, then filtered out of the point clouds through data association, accurately synchronized in time and space. Finally, the point clouds representing the static environment are used as the input to SLAM. The proposed approach is evaluated through experiments using both semi-physical simulation and real-world datasets. The results demonstrate the effectiveness of the method at improving SLAM accuracy (at least a 30% decrease in absolute position error) and robustness in dynamic environments.
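The Doppler-based detection step can be illustrated with a much-simplified sketch that assumes the radar radial speeds have already been ego-motion compensated, so static points have residual speed near zero; the full EMO pipeline's LiDAR segmentation, data association, and time/space synchronization are omitted.

```python
import numpy as np

def remove_moving_points(points, radial_speed, thresh=0.3):
    """Filter out points whose (ego-motion compensated) radar Doppler
    radial speed exceeds the static-world threshold, keeping only the
    points that plausibly belong to the static environment."""
    static_mask = np.abs(radial_speed) < thresh
    return points[static_mask]
```

The surviving points would then be passed to the SLAM front end in place of the raw scan, which is the core idea of feeding SLAM a static-environment point cloud.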


2004 ◽  
Vol 16 (1) ◽  
pp. 23-47 ◽  
Author(s):  
M. Di Marco ◽  
A. Garulli ◽  
A. Giannitrapani ◽  
A. Vicino

Robotica ◽  
2009 ◽  
Vol 28 (5) ◽  
pp. 663-673 ◽  
Author(s):  
Dilan Amarasinghe ◽  
George K. I. Mann ◽  
Raymond G. Gosine

SUMMARY: This paper describes landmark detection and localization using an integrated laser-camera sensor. A laser range finder can be used to detect landmarks that are direction-invariant in the laser data, such as protruding edges in walls and the edges of tables and chairs. When such features are unavailable, the dependent processes fail to function. However, in many instances, a larger number of landmarks can be detected using computer vision. In the proposed method, the camera is used to detect landmarks, while the location of each landmark is measured by the laser range finder using laser-camera calibration information. Thus, the proposed method exploits the beneficial aspects of each sensor to overcome the disadvantages of the other. While highlighting the drawbacks and limitations of single-sensor methods, experimental results and important statistics are provided to verify the effectiveness of the sensor-fusion method, using Extended Kalman Filter (EKF) based simultaneous localization and mapping (SLAM) as an example application.
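Once the laser-camera calibration is applied, the landmark measurement described here reduces to the standard range-bearing projection used in EKF-SLAM: the camera supplies the bearing to the landmark and the laser supplies the range. A minimal sketch, with illustrative names:

```python
import math

def landmark_position(robot_x, robot_y, robot_theta, bearing, rng):
    """Project a landmark into world coordinates from the robot pose,
    the camera-measured bearing to the landmark (in the robot frame),
    and the laser-measured range. This is the standard range-bearing
    observation model inverted for landmark initialization."""
    ang = robot_theta + bearing
    return (robot_x + rng * math.cos(ang),
            robot_y + rng * math.sin(ang))
```

In the EKF, the forward version of this model (predicting range and bearing from the state) supplies the innovation and Jacobians for the measurement update.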

