Integrate Point-Cloud Segmentation with 3D LiDAR Scan-Matching for Mobile Robot Localization and Mapping

Sensors ◽  
2019 ◽  
Vol 20 (1) ◽  
pp. 237 ◽  
Author(s):  
Xuyou Li ◽  
Shitong Du ◽  
Guangchun Li ◽  
Haoyu Li

Localization and mapping are key requirements for autonomous mobile systems to perform navigation and interaction tasks. Iterative Closest Point (ICP) is widely applied for LiDAR scan-matching in the robotics community. However, the standard ICP algorithm considers only geometric information when iteratively searching for the nearest points, and ICP alone cannot achieve accurate point-cloud registration in challenging environments such as dynamic scenes and highways. Moreover, searching for the closest points is an expensive step in the ICP algorithm, which limits its ability to meet real-time requirements, especially when dealing with large-scale point-cloud data. In this paper, we propose a segment-based scan-matching framework for six degree-of-freedom pose estimation and mapping. The LiDAR generates a large number of ground points when scanning, many of which are useless and increase the burden of subsequent processing. To address this problem, we first apply an image-based ground-point extraction method to filter out noise and ground points. The point cloud remaining after ground removal is then segmented into disjoint sets. After this step, a standard point-to-point ICP is applied to calculate the six degree-of-freedom transformation between consecutive scans. Furthermore, once closed loops are detected in the environment, a 6D graph-optimization algorithm for global relaxation (6D simultaneous localization and mapping (SLAM)) is employed. Experiments on the publicly available KITTI datasets show that our method requires less runtime while achieving higher pose estimation accuracy than the standard ICP method and its variants.
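As a rough illustration of the point-to-point ICP step this abstract builds on (not the authors' segment-based pipeline), the sketch below iterates nearest-neighbour matching and a closed-form SVD alignment; the nearest-neighbour query is the expensive step the abstract refers to. All names and parameters are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(source, target, iterations=30, tol=1e-6):
    """Align `source` to `target` (both N x 3 arrays); returns (R, t)."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)           # nearest-neighbour search structure
    prev_err = np.inf
    for _ in range(iterations):
        dist, idx = tree.query(src)  # the expensive step the abstract mentions
        matched = target[idx]
        # Closed-form rigid alignment of the matched pairs (SVD / Kabsch)
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:     # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t          # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```

For two scans related by a small rigid motion, the returned (R, t) maps the first onto the second; segment-based variants such as the paper's run this on filtered, segmented clouds instead of raw scans.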

Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5072
Author(s):  
Haiwei Yang ◽  
Peilin Jiang ◽  
Fei Wang

Pose estimation is a typical problem in the field of image processing, the purpose of which is to compare or fuse images acquired under different conditions. In recent years, many studies have focused on pose estimation algorithms, but there are still many challenges, such as efficiency, complexity, and accuracy for various targets and conditions, in both algorithm research and practical applications. In this paper, a multi-view-based pose estimation method is proposed. The method solves the pose estimation problem effectively for large-scale targets and achieves good accuracy and stability. In contrast to existing methods, it uses different views (positions and angles), each of which observes only some features of a large-size part, to estimate the six-degree-of-freedom pose of the entire part. Experimental results demonstrate that the proposed method obtains accurate six-degree-of-freedom poses for different targets and plays an important role in many actual production lines. Moreover, a new visual guidance system for intelligent manufacturing is presented based on this method; it has been widely used in automobile manufacturing with high accuracy and efficiency at low cost.
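A minimal sketch of the multi-view idea, under the simplifying assumption that each view yields 3D feature correspondences already expressed in a common frame (the paper's camera models and calibration are not reproduced here): stacking the partial correspondences from all views yields a single well-conditioned rigid-transform estimate even though no single view sees enough features on its own.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def fuse_views(model_subsets, measured_subsets):
    """Each view observes only a subset of the part's model features.
    Stack all (model, measured) pairs and solve one 6-DoF rigid fit."""
    model = np.vstack(model_subsets)
    meas = np.vstack(measured_subsets)
    mu_m, mu_s = model.mean(axis=0), meas.mean(axis=0)
    # align_vectors finds R minimising |(meas - mu_s) - R @ (model - mu_m)|
    R, _ = Rotation.align_vectors(meas - mu_s, model - mu_m)
    t = mu_s - R.apply(mu_m)
    return R, t
```

The design point this illustrates: accuracy comes from the combined spatial extent of the stacked features, which is why views that each see only part of a large component can still constrain the full pose.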


2021 ◽  
Author(s):  
Lun H. Mark

This thesis investigates how the geometry of complex objects relates to LIDAR scanning with Iterative Closest Point (ICP) pose estimation and provides statistical means to assess pose accuracy. LIDAR scanners have become essential parts of space vision systems for autonomous docking and rendezvous. Principal Component Analysis based geometric constraint indices have been found to be strongly related to the pose error norm and to the error of each individual degree of freedom. This leads to the development of several strategies for identifying the best view of an object and the optimal combination of localized scanned areas of the object's surface to achieve accurate pose estimation. Also investigated is the possible relation between ICP pose estimation accuracy and the distribution or allocation of the point cloud. The simulation results were validated using point clouds generated by scanning models of Quicksat and a cuboctahedron with Neptec's TriDAR scanner.
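The thesis's exact index definitions are not reproduced here, but a PCA-style constraint measure can be sketched as follows: the eigenvalue spread of a scanned patch's covariance indicates how well its geometry constrains the pose (a nearly planar patch constrains some degrees of freedom poorly, which correlates with larger ICP error).

```python
import numpy as np

def pca_constraint_index(points):
    """Illustrative PCA-based constraint measure for an N x 3 scan patch.
    Returns the smallest-to-largest eigenvalue ratio of the covariance:
    near 0 for a flat patch (weakly constrained), larger for geometry
    that constrains all directions."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending order
    return eigvals[-1] / eigvals[0]
```

Comparing the index of candidate views or surface regions is then a cheap proxy for which scan areas should yield the most accurate ICP pose, in the spirit of the view-selection strategies described above.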



2021 ◽  
Author(s):  
Kamran Shahid

Future autonomous satellite repair missions would benefit from higher accuracy pose estimates of target satellites. Constraint analysis provides a sensitivity index which can be used as a registration accuracy predictor. It was shown that point cloud configurations with higher values of this index returned more accurate pose estimates than unstable configurations with lower index values. Registration tests were conducted on four satellite geometries using synthetic range data. These results elucidate a means of determining the optimal scanning area of a given satellite for registration with the Iterative Closest Point (ICP) algorithm to return a highly accurate pose estimate.


Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 250
Author(s):  
Zhiyuan Niu ◽  
Yongjie Ren ◽  
Linghui Yang ◽  
Jiarui Lin ◽  
Jigui Zhu

Large-scale measurement plays an increasingly important role in intelligent manufacturing. However, existing instruments struggle to provide an immersive experience. In this paper, an immersive positioning and measuring method based on augmented reality is introduced. An inside-out vision measurement approach using a multi-camera rig with non-overlapping views is presented for dynamic six-degree-of-freedom measurement. By using active LED markers, a flexible and robust solution is delivered to deal with complex manufacturing sites. The space resection adjustment principle is addressed and measurement errors are simulated. An improved Nearest Neighbor method is employed for feature correspondence. The proposed tracking method is verified by experiments, and good performance is obtained.


2021 ◽  
Vol 10 (1) ◽  
pp. 19-24
Author(s):  
Jan Nitsche ◽  
Matthias Franke ◽  
Nils Haverkamp ◽  
Daniel Heißelmann

Abstract. The estimation of the six-degree-of-freedom position and orientation of an end effector is of high interest in industrial robotics. High precision and data rates are important requirements when choosing an adequate measurement system. In this work, a six-degree-of-freedom pose estimation setup based on laser multilateration is described together with the measurement principle and self-calibration strategies used in this setup. In an experimental setup, data rates of 200 Hz are achieved. During movement, deviations from a reference coordinate measuring machine of 20 µm are observed. During standstill, the deviations are reduced to 5 µm.
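The core multilateration principle, stripped of the paper's orientation tracking and self-calibration, can be sketched as a small non-linear least-squares problem: recover a point from its measured distances to known stations. The station layout and function names below are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def multilaterate(stations, distances, x0=None):
    """Recover a 3D point from measured distances to known stations.
    With several such points rigidly attached to an end effector, the
    full six-degree-of-freedom pose can then be fitted."""
    stations = np.asarray(stations, dtype=float)
    distances = np.asarray(distances, dtype=float)
    if x0 is None:
        x0 = stations.mean(axis=0)  # crude initial guess: station centroid
    # Residual: predicted minus measured range to each station
    residual = lambda p: np.linalg.norm(stations - p, axis=1) - distances
    return least_squares(residual, x0).x
```

At least four non-coplanar stations make the solution unique; the self-calibration the abstract mentions would additionally treat the station coordinates as unknowns over many measured poses.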


Sensors ◽  
2015 ◽  
Vol 15 (7) ◽  
pp. 16448-16465 ◽  
Author(s):  
Changyu He ◽  
Peter Kazanzides ◽  
Hasan Sen ◽  
Sungmin Kim ◽  
Yue Liu

2012 ◽  
Vol 178-181 ◽  
pp. 1438-1441
Author(s):  
Li Hua Wang ◽  
Guang Wei Liu ◽  
An Ning Huang ◽  
Ya Yu Huang

With the large-scale speed-up of China's railways, the dynamic track stabilizer will play an important role in track maintenance and in the construction of new lines. The bogie is one of the major critical components of the dynamic track stabilizer; its vibration characteristics directly affect those of the whole machine. A numerical simulation method was used: based on the power spectral density of the track irregularities, time-domain load histories of the track irregularities were generated. The vibration characteristics of the dynamic track stabilizer bogie under track-irregularity excitation were then analyzed using ANSYS/LS-DYNA, and the bogie's vibration characteristics in all six degrees of freedom (lateral, longitudinal, bounce, pitch, yaw, and roll) were obtained. The results provide a foundation for research on the ride stability and safety of the dynamic track stabilizer.
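One common way to obtain time-domain loads from a track-irregularity power spectral density, and plausibly what the abstract refers to, is the spectral-representation method: superpose cosines whose amplitudes follow the PSD and whose phases are random. A generic sketch (the paper's actual PSD model is not reproduced here):

```python
import numpy as np

def irregularity_time_series(psd, freqs, duration, dt, seed=0):
    """Generate a time-domain irregularity record from a one-sided PSD.
    `psd` and `freqs` are equal-length arrays over uniform frequency bins."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, dt)
    df = freqs[1] - freqs[0]
    amps = np.sqrt(2.0 * psd * df)          # amplitude per frequency bin
    phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
    signal = np.zeros_like(t)
    for A, f, ph in zip(amps, freqs, phases):
        signal += A * np.cos(2.0 * np.pi * f * t + ph)
    return t, signal
```

The resulting record has a variance matching the integral of the PSD, so it can drive a finite-element model (as with ANSYS/LS-DYNA above) as a statistically representative excitation.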


2008 ◽  
Vol 25 (3) ◽  
pp. 148-163 ◽  
Author(s):  
Oliver Wulf ◽  
Andreas Nüchter ◽  
Joachim Hertzberg ◽  
Bernardo Wagner

Author(s):  
Hanieh Deilamsalehy ◽  
Timothy C. Havens ◽  
Joshua Manela

Precise, robust, and consistent localization is an important subject in many areas of science such as vision-based control, path planning, and simultaneous localization and mapping (SLAM). To estimate the pose of a platform, sensors such as inertial measurement units (IMUs), the global positioning system (GPS), and cameras are commonly employed. Each of these sensors has its strengths and weaknesses. Sensor fusion is a known approach that combines the data measured by different sensors to achieve a more accurate or complete pose estimate and to cope with sensor outages. In this paper, a three-dimensional (3D) pose estimation algorithm is presented for an unmanned aerial vehicle (UAV) in an unknown GPS-denied environment. A UAV can be fully localized by three position coordinates and three orientation angles. The proposed algorithm fuses the data from an IMU, a camera, and a two-dimensional (2D) light detection and ranging (LiDAR) sensor using an extended Kalman filter (EKF) to achieve accurate localization. Among the employed sensors, LiDAR has received little attention in this context, largely because a 2D LiDAR can only provide pose estimates in its scanning plane and thus cannot obtain a full pose estimate in a 3D environment. A novel method is introduced that employs a 2D LiDAR to improve the full 3D pose estimation accuracy acquired from an IMU and a camera, and it is shown that this method can significantly improve the precision of the localization algorithm. The proposed approach is evaluated and justified by simulation and real-world experiments.
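A stripped-down EKF skeleton in the spirit of the fusion described (the sensor and motion models here are illustrative one-axis placeholders, not the paper's IMU/camera/LiDAR models): the IMU drives the prediction step, and each exteroceptive measurement triggers an update with its own noise level.

```python
import numpy as np

class SimpleEKF:
    """State: position and velocity along one axis; the IMU acceleration
    drives prediction, and two position sensors provide updates."""
    def __init__(self, x0, P0, q, r_cam, r_lidar):
        self.x, self.P = np.asarray(x0, float), np.asarray(P0, float)
        self.q, self.R = q, {"camera": r_cam, "lidar": r_lidar}

    def predict(self, accel, dt):
        # Constant-acceleration motion model driven by the IMU reading
        F = np.array([[1.0, dt], [0.0, 1.0]])
        self.x = F @ self.x + np.array([0.5 * dt**2, dt]) * accel
        self.P = F @ self.P @ F.T + self.q * np.eye(2)

    def update(self, z, sensor):
        # Both exteroceptive sensors observe position only
        H = np.array([[1.0, 0.0]])
        S = H @ self.P @ H.T + self.R[sensor]
        K = self.P @ H.T / S                       # Kalman gain
        self.x = self.x + (K * (z - H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P
```

The paper's filter carries the full six-degree-of-freedom state and a 2D-LiDAR measurement model, but the predict/update structure is the same.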

