A Novel Approach for Lidar-Based Robot Localization in a Scale-Drifted Map Constructed Using Monocular SLAM

Sensors ◽  
2019 ◽  
Vol 19 (10) ◽  
pp. 2230 ◽  
Author(s):  
Su Wang ◽  
Yukinori Kobayashi ◽  
Ankit A. Ravankar ◽  
Abhijeet Ravankar ◽  
Takanori Emaru

Scale ambiguity and drift are inherent drawbacks of a purely visual monocular simultaneous localization and mapping (SLAM) system. This poses a crucial challenge for other robots equipped with range sensors that must localize in a map previously built by a monocular camera. In this paper, a metrically inconsistent prior map built by monocular SLAM is subsequently used for localization on another robot using only a laser range finder (LRF). To tackle the metric inconsistency, this paper proposes a 2D-LRF-based localization algorithm that allows the robot to locate itself and resolve the scale of the local map simultaneously. To align the 2D LRF data with the map, 2D structures are extracted from the 3D point cloud map obtained by the visual SLAM process. Next, a modified Monte Carlo localization (MCL) approach is proposed to estimate the robot’s state, which comprises both the robot’s pose and the map’s relative scale. Finally, the effectiveness of the proposed system is demonstrated in experiments on a public benchmark dataset as well as in a real-world scenario. The experimental results indicate that the proposed method can globally localize the robot in real time, and that successful localization can be achieved even in a badly drifted map.
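The core idea of the modified MCL above — augmenting each particle with the map's relative scale — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the state layout, noise model, and `map_range_fn` interface are assumptions.

```python
import math
import random

# Sketch: each MCL particle carries a pose (x, y, theta) plus a scale
# hypothesis s for the monocular map, so resampling converges on both
# the pose and the map's local scale jointly.

class ScaleParticle:
    def __init__(self, x, y, theta, scale, weight=1.0):
        self.x, self.y, self.theta = x, y, theta
        self.scale = scale          # relative scale of the monocular map
        self.weight = weight

    def predict(self, d_trans, d_rot, noise=0.0):
        """Apply metric odometry; the map-frame translation is divided
        by this particle's scale hypothesis."""
        self.theta += d_rot + random.gauss(0.0, noise)
        step = d_trans / self.scale + random.gauss(0.0, noise)
        self.x += step * math.cos(self.theta)
        self.y += step * math.sin(self.theta)

def reweight(particles, measured_range, map_range_fn):
    """Weight particles by how well the metric LRF range agrees with the
    expected range read from the (map-unit) map, rescaled to meters."""
    sigma = 0.2                     # assumed range noise (m)
    for p in particles:
        expected = map_range_fn(p.x, p.y, p.theta) * p.scale
        err = measured_range - expected
        p.weight = math.exp(-0.5 * (err / sigma) ** 2)
    total = sum(p.weight for p in particles) or 1.0
    for p in particles:
        p.weight /= total
```

A particle whose scale hypothesis maps the expected range onto the measured one receives the dominant weight, which is how the filter resolves scale and pose together.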

Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1772
Author(s):  
Gengyu Ge ◽  
Yi Zhang ◽  
Qin Jiang ◽  
Wei Wang

Localization, i.e., estimating the position and orientation of a robot, has been solved in asymmetrical environments by various 2D laser rangefinder simultaneous localization and mapping (SLAM) approaches. Laser-based SLAM generates an occupancy grid map; the popular Monte Carlo Localization (MCL) method then spreads particles on the map and calculates the robot's position with a probabilistic algorithm. However, this can be difficult, especially in symmetrical environments, because landmarks or features may not be sufficient to determine the robot’s orientation, and sometimes the position is not unique if the robot does not stay at the geometric center. This paper presents a novel approach to solving the robot localization problem in a symmetrical environment using a visual-features-assisted method. Laser range measurements are used to estimate the robot's position, while visual features determine its orientation. Firstly, we convert the raw laser range scan data into coordinate data and calculate the geometric center. Secondly, we calculate the distances from the geometric center to all end points and find the longest ones. Then, we compare those distances, fit lines, extract corner points, and calculate the distance between adjacent corner points to determine whether the environment is symmetrical. Finally, if the environment is symmetrical, visual features based on the ORB keypoint detector and descriptor are added to the system to determine the orientation of the robot. The experimental results show that our approach can successfully determine the position of the robot in a symmetrical environment, while ordinary MCL and its extended localization methods always fail.
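The first two steps of the pipeline above — converting the scan to coordinates, finding the geometric center, and comparing center-to-endpoint distances — can be sketched as below. This is our own minimal illustration, not the authors' implementation; the tolerance and the crude symmetry criterion are assumptions.

```python
import math

def scan_to_points(ranges, angle_min, angle_inc):
    """Convert raw 2D laser ranges into Cartesian endpoint coordinates."""
    return [(r * math.cos(angle_min + i * angle_inc),
             r * math.sin(angle_min + i * angle_inc))
            for i, r in enumerate(ranges)]

def geometric_center(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def looks_symmetrical(points, tol=0.05):
    """Crude symmetry hint: in a symmetrical room (e.g. a rectangle seen
    from near its center) the largest center-to-endpoint distances, such
    as those to opposite corners, nearly coincide."""
    cx, cy = geometric_center(points)
    dists = sorted(math.hypot(x - cx, y - cy) for x, y in points)
    return abs(dists[-1] - dists[-2]) < tol
```

When this test (refined in the paper by line fitting and corner extraction) flags a symmetrical environment, the ORB-based visual step is invoked to disambiguate orientation.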


Author(s):  
Wenshan Wang ◽  
Qixin Cao ◽  
Xiaoxiao Zhu ◽  
Masaru Adachi

Purpose – Robot localization technology has been widely studied for decades and many remarkable approaches have been developed. In practice, however, this technology has hardly been applied to common day-to-day deployment scenarios. The purpose of this paper is to present a novel approach that focuses on improving localization robustness in complicated environments. Design/methodology/approach – The localization robustness is improved by dynamically switching the localization components (such as the environmental camera, the laser range finder and the depth camera). As the components are highly heterogeneous, they are developed under the robotic technology component (RTC) framework, which simplifies the development process by increasing the potential for reusability and future expansion. To realize this switching, the localization reliability of each component is modeled, and a configuration method for dynamically selecting dependable components at run-time is presented. Findings – The experimental results show that this approach significantly reduces robot-lost situations in complicated environments. The robustness is further enhanced through the cooperation of heterogeneous localization components. Originality/value – A multi-component automatic switching approach for a robot localization system is developed and described in this paper. The reliability of this system is shown to be a substantial improvement over single-component localization techniques.
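The run-time selection step described above can be sketched as a simple policy over per-component reliability scores. This is a hypothetical illustration of the idea; the component names, score range, and threshold are our assumptions, not the paper's interface.

```python
def select_component(components, threshold=0.5):
    """Pick the localization component to trust at this instant.

    components: dict mapping component name -> modeled reliability in [0, 1].
    Returns the most reliable component at or above the threshold, or None
    when every component is unreliable (the robot should report itself lost).
    """
    name, score = max(components.items(), key=lambda kv: kv[1])
    return name if score >= threshold else None
```

For example, if the depth camera's reliability collapses in a crowded corridor while the laser range finder still scores well, the configurator switches to the laser component instead of letting the estimate drift.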


Robotica ◽  
2009 ◽  
Vol 28 (5) ◽  
pp. 663-673 ◽  
Author(s):  
Dilan Amarasinghe ◽  
George K. I. Mann ◽  
Raymond G. Gosine

SUMMARY This paper describes landmark detection and localization using an integrated laser-camera sensor. A laser range finder can be used to detect landmarks that are direction-invariant in the laser data, such as protruding edges in walls and the edges of tables and chairs. When such features are unavailable, the dependent processes fail to function. In many instances, however, a larger number of landmarks can be detected using computer vision. In the proposed method, the camera is used to detect landmarks while the location of each landmark is measured by the laser range finder using laser-camera calibration information. Thus, the proposed method exploits the beneficial aspects of each sensor to overcome the disadvantages of the other. While highlighting the drawbacks and limitations of single-sensor-based methods, experimental results and important statistics are provided to verify the effectiveness of the sensor fusion method, using Extended Kalman Filter (EKF)-based simultaneous localization and mapping (SLAM) as an example application.
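The fusion step described above can be sketched as follows: the camera detection supplies the landmark's bearing, and the calibrated laser range finder supplies the range along that bearing, together fixing the landmark in the robot frame. This is a minimal sketch under assumed conditions (pinhole camera model, laser and camera co-located and aligned after calibration); it is not the authors' implementation.

```python
import math

def pixel_to_bearing(u, fx, cx):
    """Horizontal bearing (rad) of image column u under a pinhole model
    with focal length fx and principal point cx (both in pixels)."""
    return math.atan2(cx - u, fx)

def landmark_position(u, fx, cx, laser_ranges, angle_min, angle_inc):
    """Locate a camera-detected landmark in the robot frame by reading
    the laser range along the camera bearing."""
    bearing = pixel_to_bearing(u, fx, cx)
    idx = round((bearing - angle_min) / angle_inc)   # nearest laser beam
    r = laser_ranges[idx]
    return (r * math.cos(bearing), r * math.sin(bearing))
```

In the EKF-SLAM application, the resulting range-bearing pair would feed the filter's measurement update for that landmark.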


Robotica ◽  
1998 ◽  
Vol 16 (3) ◽  
pp. 297-307 ◽  
Author(s):  
Jun Miura ◽  
Katsushi Ikeuchi

The ability to manipulate flexible objects, such as rubber belts and paper sheets, is important in automated manufacturing systems. This paper describes a novel approach to vision-guided assembly of flexible objects. The operation dealt with in this paper is assembling a rubber belt onto fixed pulleys. By analyzing the possible states of the belt based on empirical knowledge of the belt, we derive a method for planning not only the actions but also the visual verification steps. We have implemented a belt assembly system using two manipulators and a laser range finder as the sensor, and succeeded in performing the belt-pulley assembly. The extension of our approach to other kinds of flexible-object assembly is also discussed.


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Ruwan Egodagamage ◽  
Mihran Tuceryan

Utilization and generation of indoor maps are critical elements of accurate indoor tracking. Simultaneous Localization and Mapping (SLAM) is one of the main techniques for such map generation. In SLAM, an agent generates a map of an unknown environment while estimating its location in it. Ubiquitous cameras lead to monocular visual SLAM, where a camera is the only sensing device in the SLAM process. In modern applications, multiple mobile agents may be involved in the generation of such maps, requiring a distributed computational framework. Each agent can generate its own local map, which can then be combined into a map covering a larger area. By doing so, the agents can cover a given environment faster than a single agent, and they can interact with each other in the same environment, making this framework more practical, especially for collaborative applications such as augmented reality. One of the main challenges of distributed SLAM is identifying overlapping maps, especially when the relative starting positions of the agents are unknown. In this paper, we propose a system of multiple monocular agents, with unknown relative starting positions, that generates a semidense global map of the environment.


2013 ◽  
Vol 404 ◽  
pp. 645-649
Author(s):  
Li Ping Jiang ◽  
Biao Zhang ◽  
Qi Xin Cao ◽  
Chun Tao Leng

In order to solve the transportation problem in the assembly process of large aircraft components, an AGV posture synchronization system is built that utilizes a two-dimensional laser range finder and an adaptive control method. The two-dimensional laser range finder is mounted at the front of the AGV to collect a real-time point cloud of the environment. After tracking the point cloud of the leading AGV's section, we extract straight lines and turning points using the RANSAC algorithm and estimate the relative posture accordingly. An adaptive controller then processes the position information to achieve master-slave tracking for multiple AGVs. In the experiment we used three identical AGVs; the average distance error was less than 5 mm, while the angle error was limited to within 0.3°. The results verify the reliability and practicability of our system, which can meet the requirements for transporting large parts.
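The line-extraction step above can be sketched with a minimal RANSAC line fit on the 2D point cloud. This is our own illustrative version, not the authors' code; the iteration count and inlier tolerance are assumptions.

```python
import math
import random

def ransac_line(points, iters=200, tol=0.02, seed=1):
    """Fit a line a*x + b*y + c = 0 (with a^2 + b^2 = 1) to the largest
    consensus set among the 2D points; returns (a, b, c, inliers)."""
    rng = random.Random(seed)
    best = (0.0, 0.0, 0.0, [])
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b = y1 - y2, x2 - x1           # normal of the line through the pair
        norm = math.hypot(a, b)
        if norm == 0:
            continue                      # degenerate pair, resample
        a, b = a / norm, b / norm
        c = -(a * x1 + b * y1)
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) < tol]
        if len(inliers) > len(best[3]):
            best = (a, b, c, inliers)
    return best
```

Run repeatedly on the residual points, such a fit recovers the straight segments of the leading AGV's profile; the intersections of consecutive lines then give the turning points used for posture estimation.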

