LIDAR Point Cloud Registration for Sensing and Reconstruction of Unstructured Terrain

2018 ◽  
Vol 8 (11) ◽  
pp. 2318 ◽  
Author(s):  
Qingyuan Zhu ◽  
Jinjin Wu ◽  
Huosheng Hu ◽  
Chunsheng Xiao ◽  
Wei Chen

When 3D laser scanning (LIDAR) is used for navigation of autonomous vehicles operating on unstructured terrain, it is necessary to register the acquired point clouds and accurately reconstruct the terrain in a timely manner. This paper proposes a novel registration method to deal with the uneven density and high noise of unstructured terrain point clouds. It has two steps of operation, namely initial registration and accurate registration. Multisensor data is first used for initial registration. An improved Iterative Closest Point (ICP) algorithm is then deployed for accurate registration. This algorithm extracts key points and builds feature descriptors based on the neighborhood normal vector, point cloud density and curvature. An adaptive threshold is introduced to accelerate iterative convergence. Experimental results show that our two-step registration method can effectively solve the uneven-density and high-noise problems in the registration of unstructured terrain point clouds, thereby improving the accuracy of terrain point cloud reconstruction.
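The accurate-registration step follows the general ICP pattern: match points, reject pairs beyond a distance threshold, solve the rigid transform in closed form, and tighten the threshold as residuals shrink. A minimal NumPy sketch of that loop (not the authors' implementation; the brute-force matching and the specific threshold schedule here are illustrative only):

```python
import numpy as np

def icp_step(src, dst, threshold):
    """One ICP iteration: match each source point to its nearest
    destination point, reject pairs farther than `threshold`, then
    solve the rigid transform in closed form (Kabsch/SVD)."""
    # Brute-force nearest neighbours (fine for small clouds).
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)
    dist = np.sqrt(d2[np.arange(len(src)), idx])
    keep = dist < threshold                 # adaptive rejection gate
    p, q = src[keep], dst[idx[keep]]
    # Closed-form rigid alignment of the inlier pairs.
    pc, qc = p - p.mean(0), q - q.mean(0)
    U, _, Vt = np.linalg.svd(pc.T @ qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = q.mean(0) - R @ p.mean(0)
    return R, t, dist[keep].mean()

def register(src, dst, iters=20):
    """Iterate ICP steps, tightening the threshold as residuals shrink."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    threshold = np.inf                      # start permissive
    cur = src.copy()
    for _ in range(iters):
        R, t, mean_err = icp_step(cur, dst, threshold)
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
        threshold = 3.0 * mean_err + 1e-9   # adaptive: tighten each round
    return R_tot, t_tot
```

The adaptive threshold serves the role described in the abstract: early iterations accept loose matches, while later iterations discard outlier pairs, accelerating convergence on noisy terrain clouds.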

2021 ◽  
Vol 13 (11) ◽  
pp. 2195
Author(s):  
Shiming Li ◽  
Xuming Ge ◽  
Shengfu Li ◽  
Bo Xu ◽  
Zhendong Wang

Today, mobile laser scanning (MLS) and oblique photogrammetry are two standard urban remote sensing acquisition methods, and the cross-source point-cloud data obtained using these methods show significant differences and complementarity. Accurate co-registration can make up for the limitations of a single data source, but many existing registration methods face critical challenges. Therefore, in this paper, we propose a systematic incremental registration method that can successfully register MLS and photogrammetric point clouds in the presence of a large amount of missing data, large variations in point density, and scale differences. The robustness of this method is due to its elimination of noise in the extracted linear features and its 2D incremental registration strategy. There are three main contributions of our work: (1) the development of an end-to-end automatic cross-source point-cloud registration method; (2) a way to effectively extract linear features and restore the scale; and (3) an incremental registration strategy that simplifies the complex registration process. The experimental results show that this method can successfully achieve cross-source data registration, while other methods have difficulty obtaining satisfactory results efficiently. Moreover, this method can be extended to more point-cloud sources.


Author(s):  
A. Walicka ◽  
N. Pfeifer ◽  
G. Jóźków ◽  
A. Borkowski

Abstract. Remote sensing techniques are an important tool in fluvial transport monitoring, since they allow for effective evaluation of the volume of transported material. Nevertheless, there is no methodology for automatic calculation of the movement parameters of individual rocks. These parameters can be determined by point cloud registration. Hence, the goal of this study is to develop a robust algorithm for terrestrial laser scanning point cloud registration. The registration is based on the Iterative Closest Point (ICP) algorithm, which requires a well-established initial estimate of the transformation parameters. Thus, we propose to calculate the initial parameters from key points representing maxima of the Gaussian curvature. For each key point, a set of geometric features is calculated. The key points are then matched between two point clouds as nearest neighbors in the feature domain. Different combinations of neighborhood sizes, feature subsets, metrics and numbers of nearest neighbors were tested to obtain the highest ratio of properly to improperly matched key points. Finally, the RANSAC algorithm was used to calculate the initial transformation parameters between the point clouds, and the ICP algorithm was used to calculate the final transformation parameters. The investigations carried out on sample point clouds representing rocks enabled the adjustment of the algorithm's parameters and showed that the Gaussian curvature can be used as a 3-dimensional key point detector for such objects. The proposed algorithm registered point clouds with a mean distance between them of 3 mm.
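The key point matching step, nearest neighbors in the feature domain, can be sketched as follows (illustrative NumPy only; the paper's actual feature sets, metrics, and neighbor counts were tuned experimentally, and the mutual-consistency check here stands in for that tuning):

```python
import numpy as np

def match_keypoints(feat_a, feat_b):
    """Match key points between two clouds as nearest neighbours in
    feature space, keeping only mutually consistent pairs; the
    resulting (i, j) index pairs would then be fed to RANSAC."""
    d = np.linalg.norm(feat_a[:, None, :] - feat_b[None, :, :], axis=-1)
    ab = d.argmin(axis=1)          # best match in B for each A
    ba = d.argmin(axis=0)          # best match in A for each B
    return [(i, j) for i, j in enumerate(ab) if ba[j] == i]
```

Mutual consistency is a cheap filter: a pair survives only if each key point is the other's nearest neighbor, which raises the ratio of properly to improperly matched points before RANSAC.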


Author(s):  
M. R. Hess ◽  
V. Petrovic ◽  
F. Kuester

Digital documentation of cultural heritage structures is increasingly common through the application of different imaging techniques. Many works have focused on the application of laser scanning and photogrammetry techniques for the acquisition of three-dimensional (3D) geometry detailing cultural heritage sites and structures. With an abundance of these 3D data assets, there must be a digital environment where these data can be visualized and analyzed. Presented here is a feedback-driven visualization framework that seamlessly enables interactive exploration and manipulation of massive point cloud data. The focus of this work is on the classification of different building materials with the goal of building more accurate as-built information models of historical structures. User-defined functions have been tested within the interactive point cloud visualization framework to evaluate automated and semi-automated classification of 3D point data. These functions include decisions based on observed color, laser intensity, normal vector or local surface geometry. Multiple case studies are presented to demonstrate the flexibility and utility of the point cloud visualization framework in achieving classification objectives.
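A user-defined classification function of the kind described, deciding on observed color, intensity, or normal direction, might look like the following toy rule set (all class labels and thresholds are hypothetical, chosen only to illustrate the per-point decision interface, not taken from the paper):

```python
import numpy as np

def classify_points(colors, intensity, normals):
    """Toy user-defined function: label each point by simple thresholds
    on colour, laser intensity, and normal direction.  Labels:
    0 = unclassified, 1 = hypothetical 'brick', 2 = horizontal surface."""
    labels = np.zeros(len(colors), dtype=int)
    reddish = colors[:, 0] > colors[:, 1] + 0.1   # red channel dominates green
    bright = intensity > 0.5                      # strong laser return
    upward = normals[:, 2] > 0.9                  # normal points nearly up
    labels[reddish & bright] = 1
    labels[upward] = 2                            # later rules take precedence
    return labels
```

Because each rule is an independent boolean mask, such functions are cheap to re-evaluate interactively as the user adjusts thresholds inside the visualization framework.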


Author(s):  
A. A. Sidiropoulos ◽  
K. N. Lakakis ◽  
V. K. Mouza

The technology of 3D laser scanning is considered one of the most common methods for heritage documentation. The point clouds being produced provide information of high detail, both geometric and thematic. Various studies examine techniques for the best exploitation of this information. In this study, an algorithm for localizing pathologies, such as cracks and fissures, on complex building surfaces is tested. The algorithm makes use of the points' positions in the point cloud and tries to separate them into two groups (patterns): pathology and non-pathology. The extraction of the geometric information used for recognizing the pattern of the points is accomplished via Principal Component Analysis (PCA) in user-specified neighborhoods across the whole point cloud. The implementation of PCA leads to the definition of the normal vector at each point of the cloud. Two tests that operate separately examine both local and global geometric criteria among the points and conclude which of them should be categorized as pathology. The proposed algorithm was tested on parts of the Gazi Evrenos Baths masonry, located in the city of Giannitsa in Northern Greece.
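The PCA step, estimating the normal vector at each point from its user-specified neighborhood, reduces to an eigen-decomposition of the neighborhood covariance matrix; a minimal sketch:

```python
import numpy as np

def pca_normal(neighbors):
    """Estimate the surface normal at a point as the eigenvector of
    the neighbourhood covariance matrix with the smallest eigenvalue,
    i.e. the direction of least variance."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]                     # smallest-variance direction
```

On a smooth wall the normals of neighboring points agree, while at a crack the least-variance direction flips abruptly, which is what makes this quantity usable as a pathology cue.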


Sensors ◽  
2019 ◽  
Vol 19 (4) ◽  
pp. 938 ◽  
Author(s):  
Anna Fryskowska

Terrestrial laser scanning is performed from several stations in order to measure an entire object. In order to obtain a complete and uniform point cloud, it is necessary to register each and every scan in one local or global coordinate system. One registration method is based on reference points—in this case, checkerboard targets. The aim of this research was to analyse the accuracy of checkerboard target identification and propose an algorithm to improve the accuracy of target centre identification, particularly for low-resolution and low-quality point clouds. The proposed solution is based on the geometric determination of the target centre. This work presents an outline of a new approach, designed by the author, to discuss the influence of the point cloud parameters on the process of checkerboard centre identification and to propose an improvement in target centre identification. The validation of the proposed solutions reveals that the difference between the typical automatic target identification and the proposed method amounts to a maximum of 6 mm for scans of different qualities. The proposed method may serve as an alternative to, or supplement for, checkerboard identification, particularly when the quality of these scans is not sufficient for automatic algorithms.


2021 ◽  
Vol 37 (6) ◽  
pp. 1073-1087
Author(s):  
Xingbo Hu ◽  
Leidong Yang ◽  
Fangming Wu ◽  
Yinghong Tian

Highlights
- Fully automated, marker-free registration of multi-scan point clouds for TLS-based measurement of bulk grains in large storehouses.
- The geometric structure of the large grain storehouse is explored to derive geometrical features as structurally semantic information for scene understanding.
- The geometrical features are modeled as a small ordered set, and correspondences are established by performing trials for all possible matching pairs of two sets extracted from two different scans.
- Significant improvements are achieved in registration accuracy, computational efficiency, and robustness against scenes with symmetric structures, as well as immunity to noise and varying point density.

Abstract. Point clouds collected by terrestrial laser scanning (TLS) in the application of bulk grain measurement and quantification contain a vast amount of data, relatively low-textured surfaces and highly symmetric structures. All of these challenges make it a difficult task to automatically register the multiple scans from different viewpoints needed to fully cover a large-scale scene. To address these challenges, this article presents a robust, automatic, marker-free registration method dedicated to multi-scan TLS point cloud data captured in large grain storehouses. The framework follows the common procedure of splitting the entire registration into coarse alignment and fine registration, and uses the iterative closest point (ICP) algorithm for the latter. The main contribution of the proposed method is an efficient way to find a global coarse alignment that is robust across individual scans in a TLS-based bulk grain measurement project. To tackle the correspondence problem, which is at the core of a registration task, the geometric information inherent in grain storehouses is explored in the stage of global coarse alignment. The derived semantic feature points are modeled as a small ordered set, and reliable correspondences are established by performing trials for all possible matching pairs of two sets extracted from two different scans. Experimental results show that the dedicated method outperforms existing generic marker-free registration approaches in terms of accuracy, robustness and computational efficiency. With its robustness, efficiency and accuracy, the proposed marker-free point cloud registration method can close the gap between TLS technology and various granary field applications. In particular, its applicability to the dominant storage structure in China's huge grain reserve system implies remarkable efficiency improvements and will facilitate the application of TLS-based measurement in the national grain inventory of China.

Keywords: Bulk grain measurement, Feature extraction, Grain storehouse, Markerless registration, Point cloud, Terrestrial laser scanning.


2017 ◽  
Vol 11 (4) ◽  
pp. 657-665 ◽  
Author(s):  
Ryuji Miyazaki ◽  
Makoto Yamamoto ◽  
Koichi Harada ◽  
...

We propose a line-based region growing method for extracting planar regions with precise boundaries from a point cloud with an anisotropic distribution. Planar structure extraction from point clouds is an important process in many applications, such as maintenance of infrastructure components including roads and curbstones, because most artificial structures consist of planar surfaces. A mobile mapping system (MMS) is able to obtain a large number of points while traveling at a standard speed. However, when the system is equipped with a high-end laser scanner, the resulting point cloud has an anisotropic distribution. In traditional point-based methods, this causes problems when calculating geometric information from neighboring points. In the proposed method, the precise boundary of a planar structure is maintained by appropriately creating line segments from the input point cloud. Furthermore, the normal vector at each line segment is precisely estimated for the region growing process. An experiment using a point cloud from an MMS simulation indicates that the proposed method extracts planar regions accurately. Additionally, we apply the proposed method to several real point clouds and evaluate its effectiveness via visual inspection.


2021 ◽  
Vol 13 (10) ◽  
pp. 1905
Author(s):  
Biao Xiong ◽  
Weize Jiang ◽  
Dengke Li ◽  
Man Qi

Terrestrial laser scanning (TLS) is an important part of urban reconstruction and terrain surveying. In TLS applications, 4-point congruent set (4PCS) technology is widely used for the global registration of point clouds. However, TLS point clouds usually contain enormous amounts of data and have uneven density, so obtaining a congruent set of tuples in a large point cloud scene can be challenging. To address this concern, we propose a registration method based on a voxel grid of the point cloud. First, we establish a voxel grid structure and an index structure for the point cloud and eliminate uneven point cloud density. Then, based on the point cloud distribution in the voxel grid, keypoints are calculated to represent the entire point cloud. Fast querying of voxel grids is used to restrict the selection of calculation points and to filter out 4-point tuples lying on the same surface, reducing ambiguity in building registration. Finally, the voxel grid is used to perform random queries of the array. Comparing our approach with other 4-point congruent set methods on different indoor and outdoor data, the experimental results show that the proposed method improves registration efficiency by more than 50% over K4PCS and 78% over Super4PCS.
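Eliminating uneven density with a voxel grid, as in the first step above, amounts to replacing all points falling in each occupied voxel by their centroid; a compact NumPy sketch (the voxel size is an assumed tuning parameter):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Collapse all points sharing a voxel to their centroid, evening
    out the highly uneven density typical of TLS scans."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # One row per occupied voxel; `inv` maps each point to its voxel.
    uniq, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.reshape(-1)
    out = np.zeros((len(uniq), points.shape[1]))
    np.add.at(out, inv, points)              # sum points per voxel
    counts = np.bincount(inv)
    return out / counts[:, None]             # centroid per voxel
```

After this step, dense near-scanner regions no longer dominate keypoint selection, since each voxel contributes at most one representative point.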


Author(s):  
A. Kumar ◽  
K. Anders ◽  
L. Winiwarter ◽  
B. Höfle

Abstract. 3D point clouds acquired by laser scanning and other techniques are difficult to interpret because of their irregular structure. To make sense of these data and to allow for the derivation of useful information, a segmentation of the points into groups, units, or classes fit for the specific use case is required. In this paper, we present a non-end-to-end deep learning classifier for 3D point clouds using multiple sets of input features and compare it with an implementation of the state-of-the-art deep learning framework PointNet++. We first extract features derived from the local normal vector (normal vectors, eigenvalues, and eigenvectors) from the point cloud and study the classification results for different local search radii. We then extract additional features related to the spatial point distribution and use them together with the normal vector-based features. We find that classification accuracy improves by up to 33% as we include normal vector features with multiple search radii and features related to the spatial point distribution. Our method achieves a mean Intersection over Union (mIoU) of 94%, outperforming PointNet++'s Multi Scale Grouping by up to 12%. The study demonstrates the importance of multiple search radii for different point cloud features for classification in an urban 3D point cloud scene acquired by terrestrial laser scanning.
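Eigenvalue features derived, like the normals, from the local covariance within a search radius are commonly summarised as linearity, planarity, and sphericity. A sketch for a single query point and radius (an illustrative version of this feature family, not necessarily the authors' exact feature set):

```python
import numpy as np

def eigen_features(points, query, radius):
    """Covariance eigenvalue features of the neighbourhood of `query`
    within `radius`: linearity, planarity, sphericity."""
    nb = points[np.linalg.norm(points - query, axis=1) <= radius]
    centered = nb - nb.mean(axis=0)
    # eigh returns ascending eigenvalues, so unpack as l3 <= l2 <= l1.
    l3, l2, l1 = np.linalg.eigh(centered.T @ centered / len(nb))[0]
    linearity = (l1 - l2) / l1       # one dominant direction
    planarity = (l2 - l3) / l1       # two dominant directions
    sphericity = l3 / l1             # isotropic scatter
    return linearity, planarity, sphericity
```

Evaluating these at multiple radii, as the study does, captures both fine detail (small radius) and overall shape (large radius) of the same neighborhood.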


Author(s):  
L. Gigli ◽  
B. R. Kiran ◽  
T. Paul ◽  
A. Serna ◽  
N. Vemuri ◽  
...  

Abstract. Point cloud datasets for perception tasks in the context of autonomous driving often rely on high-resolution 64-layer Light Detection and Ranging (LIDAR) scanners. These scanners are expensive to deploy on real-world autonomous driving sensor architectures, which usually employ 16/32-layer LIDARs. We evaluate the effect of subsampling image-based representations of dense point clouds on the accuracy of the road segmentation task. In our experiments, low-resolution 16/32-layer LIDAR point clouds are simulated by subsampling the original 64-layer data, for subsequent transformation into feature maps in the Bird's-Eye-View (BEV) and Spherical-View (SV) representations of the point cloud. We introduce the usage of the local normal vector with the LIDAR's spherical coordinates as an input channel to existing LoDNN architectures. We demonstrate that this local normal feature, in conjunction with classical features, not only improves performance for binary road segmentation on full-resolution point clouds, but also reduces the negative impact on accuracy when subsampling dense point clouds, as compared to the usage of classical features alone. We assess our method with several experiments on two datasets: the KITTI Road-segmentation benchmark and the recently released Semantic KITTI dataset.
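The Spherical-View representation maps each LIDAR return to a pixel via its azimuth and elevation; a minimal range-image projection in NumPy (the 64×1024 grid and the +3°/−25° vertical field of view are typical Velodyne HDL-64 assumptions, not values taken from the paper):

```python
import numpy as np

def spherical_view(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project (N, 3) LIDAR points into an (h, w) spherical-view range
    image: columns index azimuth, rows index elevation."""
    x, y, z = points.T
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                          # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                        # elevation angle
    fu, fd = np.radians(fov_up), np.radians(fov_down)
    u = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    v = ((fu - pitch) / (fu - fd) * h).clip(0, h - 1).astype(int)
    img = np.zeros((h, w))
    img[v, u] = r                                   # keep last hit per cell
    return img
```

Additional channels (intensity, the local normal vector proposed in the paper) are stacked the same way, giving a multi-channel image that standard convolutional segmentation networks can consume.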

