A Point Cloud Registration Algorithm Based on Feature Extraction and Matching

2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
Yongshan Liu ◽  
Dehan Kong ◽  
Dandan Zhao ◽  
Xiang Gong ◽  
Guichun Han

Existing registration algorithms suffer from low precision and slow speed when registering large amounts of point cloud data. In this paper, we propose a point cloud registration algorithm based on feature extraction and matching that alleviates both problems. In the rough registration stage, the algorithm extracts feature points based on the judgment of retention points and bumps, which speeds up feature point extraction. During registration, FPFH features and the Hausdorff distance are used to search for corresponding point pairs, and the RANSAC algorithm is used to eliminate incorrect pairs, improving the accuracy of the correspondence. In the precise registration phase, the algorithm uses an improved normal distribution transformation (INDT) algorithm. Experimental results show that, given a large amount of point cloud data, this algorithm has advantages in both time and precision.
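The abstract does not spell out its RANSAC step for eliminating incorrect point pairs; as a minimal sketch of the idea, here is a 2D version that samples two candidate pairs, fits a rigid transform, and keeps the correspondences consistent with it (all function names are illustrative, not from the paper):

```python
import math
import random

def rigid_2d(src, dst):
    """Closed-form 2D rotation + translation from two point pairs."""
    (a1, a2), (b1, b2) = src, dst
    ang = (math.atan2(b2[1] - b1[1], b2[0] - b1[0])
           - math.atan2(a2[1] - a1[1], a2[0] - a1[0]))
    c, s = math.cos(ang), math.sin(ang)
    tx = b1[0] - (c * a1[0] - s * a1[1])
    ty = b1[1] - (s * a1[0] + c * a1[1])
    return c, s, tx, ty

def apply_t(T, p):
    c, s, tx, ty = T
    return (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

def ransac_pairs(pairs, iters=200, tol=0.05, seed=0):
    """Keep the correspondence pairs consistent with the best sampled transform."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        i, j = rng.sample(range(len(pairs)), 2)
        T = rigid_2d((pairs[i][0], pairs[j][0]), (pairs[i][1], pairs[j][1]))
        inliers = [p for p in pairs if math.dist(apply_t(T, p[0]), p[1]) < tol]
        if len(inliers) > len(best):
            best = inliers
    return best
```

In 3D the model sample needs three pairs and an SVD-based fit, but the sample-score-keep loop is the same.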

2014 ◽  
Vol 644-650 ◽  
pp. 4624-4629
Author(s):  
Song Liu ◽  
Xiao Yao Xie

To address the huge computation and high computing-resource demands of point cloud registration, a point cloud registration algorithm based on MapReduce is designed according to the theory of parallel computing. Four examples were tested on a Hadoop cluster built from ordinary PCs. The experimental results show that the MapReduce-based point cloud registration algorithm can register point cloud data with high accuracy.
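The abstract does not describe its MapReduce decomposition. One natural split, assumed here purely for illustration, is to compute per-chunk partial sums in the map phase and combine them in the reduce phase, e.g. for the cloud centroid that rigid alignment needs:

```python
from functools import reduce

def map_chunk(points):
    """Map step: per-chunk partial sums (count, sum_x, sum_y, sum_z)."""
    return (len(points),
            sum(p[0] for p in points),
            sum(p[1] for p in points),
            sum(p[2] for p in points))

def reduce_partials(a, b):
    """Reduce step: merge two partial-sum tuples."""
    return tuple(x + y for x, y in zip(a, b))

def centroid_mapreduce(chunks):
    """Centroid of a cloud split across chunks, without ever holding it whole."""
    n, sx, sy, sz = reduce(reduce_partials, map(map_chunk, chunks))
    return (sx / n, sy / n, sz / n)
```

On a real Hadoop cluster the map and reduce functions run on separate nodes; the local `map`/`reduce` calls here only mimic that dataflow.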


2021 ◽  
Vol 54 (3-4) ◽  
pp. 385-395
Author(s):  
Ming Guo ◽  
Bingnan Yan ◽  
Guoli Wang ◽  
Pingjun Nie ◽  
Deng Pan ◽  
...  

The tunnel structure is long and narrow, has few internal features, and yields a large amount of point cloud data, so the registration results of existing algorithms and commercial software are not ideal. An iterative global registration algorithm is therefore proposed for massive underground tunnel point cloud registration, composed of local initial pose acquisition and global adjustment. Firstly, the feature point coordinates in the point cloud are extracted, and station-by-station registration is performed according to the Rodrigues matrix. Finally, the registration result is taken as the initial value of the parameters, and a global adjustment of all observations is carried out. The observations are weighted by the selection-weight iteration method, and the weights are modified continually during the iteration process until the threshold conditions are met and the iteration stops. In this paper, the experimental data, made up of 85 stations of point cloud data, come from the Xiamen subway tunnel, which is about 1300 m long. When the accumulated error of station-to-station registration is too large, several stations are treated as partial wholes, and the optimal registration is achieved through multiple global adjustments; the registration accuracy is within 5 mm. Experimental results confirm the feasibility and effectiveness of the algorithm, which provides a new method for point cloud registration of underground space tunnels.
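The Rodrigues matrix mentioned above is the standard rotation parameterization built from an axis-angle pair. A minimal sketch of the construction (the function and parameter names are illustrative):

```python
import math

def rodrigues(axis, theta):
    """Rotation matrix via the Rodrigues formula:
    R = I + sin(t)*K + (1 - cos(t))*K^2, K the skew matrix of the unit axis."""
    n = math.sqrt(sum(a * a for a in axis))
    kx, ky, kz = (a / n for a in axis)  # normalize the rotation axis
    K = [[0.0, -kz, ky], [kz, 0.0, -kx], [-ky, kx, 0.0]]
    K2 = [[sum(K[i][k] * K[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    s, c = math.sin(theta), 1.0 - math.cos(theta)
    I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    return [[I[i][j] + s * K[i][j] + c * K2[i][j] for j in range(3)]
            for i in range(3)]
```

Station-by-station registration then solves for one such matrix (plus a translation) per neighbouring scan pair.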


2011 ◽  
Vol 299-300 ◽  
pp. 1091-1094 ◽  
Author(s):  
Jiang Zhu ◽  
Yuichi Takekuma ◽  
Tomohisa Tanaka ◽  
Yoshio Saito

Currently, the design and machining of complicated models are enabled by the progress of CAD/CAM systems. In shape measurement, high-precision measurement is performed using a CMM. To evaluate a machined part, the designed model produced by the CAD system and the point cloud data provided by the measurement system are analyzed and compared. Since the designed CAD model and the measured point cloud data are usually created in different coordinate systems, they must be registered in the same coordinate system for evaluation. In this research, a 3D model registration method based on feature extraction and the iterative closest point (ICP) algorithm is proposed. It can efficiently and accurately register two models in different coordinate systems and effectively avoids the problem of local solutions.
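The ICP loop at the core of such methods alternates nearest-neighbor matching with a closed-form rigid fit. A minimal 2D sketch, not the paper's implementation, which also uses feature extraction to seed the initial pose:

```python
import math

def fit_rigid_2d(src, dst):
    """Closed-form 2D rigid fit (rotation + translation) for paired points."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    sxx = sxy = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax, ay, bx, by = ax - csx, ay - csy, bx - cdx, by - cdy
        sxx += ax * bx + ay * by   # cosine accumulator
        sxy += ax * by - ay * bx   # sine accumulator
    ang = math.atan2(sxy, sxx)
    c, s = math.cos(ang), math.sin(ang)
    return c, s, cdx - (c * csx - s * csy), cdy - (s * csx + c * csy)

def icp_2d(src, dst, iters=20):
    """Minimal ICP: match each source point to its nearest target, fit, repeat."""
    pts = list(src)
    for _ in range(iters):
        matched = [min(dst, key=lambda q, p=p: math.dist(p, q)) for p in pts]
        c, s, tx, ty = fit_rigid_2d(pts, matched)
        pts = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in pts]
    return pts
```

With a poor initial pose the nearest-neighbor matches are wrong and plain ICP converges to a local solution, which is exactly why the proposed method adds a feature-based coarse alignment first.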


2020 ◽  
Vol 14 (12) ◽  
pp. 2675-2681
Author(s):  
Wenting Cui ◽  
Jianyi Liu ◽  
Shaoyi Du ◽  
Yuying Liu ◽  
Teng Wan ◽  
...  

2016 ◽  
Vol 31 (9) ◽  
pp. 889-896
Author(s):  
马鑫 MA Xin ◽  
魏仲慧 WEI Zhong-hui ◽  
何昕 HE Xin ◽  
于国栋 YU Guo-dong

2013 ◽  
Vol 2013 ◽  
pp. 1-19 ◽  
Author(s):  
Yi An ◽  
Zhuohan Li ◽  
Cheng Shao

Reliable feature extraction from 3D point cloud data is an important problem in many application domains, such as reverse engineering, object recognition, industrial inspection, and autonomous navigation. In this paper, a novel method is proposed for extracting geometric features from 3D point cloud data based on discrete curves. We extract discrete curves from the 3D point cloud data and study the behavior of chord lengths, angle variations, and principal curvatures at the geometric features in the discrete curves. Then, the corresponding similarity indicators are defined. Based on these similarity indicators, the geometric features can be extracted from the discrete curves, which are also the geometric features of the 3D point cloud data. The threshold values of the similarity indicators are taken from [0, 1], which characterizes the relative relationship and makes threshold setting easier and more reasonable. The experimental results demonstrate that the proposed method is efficient and reliable.
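The paper's exact similarity indicators are not reproduced here, but an indicator with the same property, normalized to [0, 1] so a relative threshold works on any curve, can be sketched from the angle variation along a discrete curve:

```python
import math

def turn_angles(curve):
    """Signed turning angle at each interior vertex of a discrete curve."""
    angles = []
    for (ax, ay), (bx, by), (cx, cy) in zip(curve, curve[1:], curve[2:]):
        v1 = (bx - ax, by - ay)
        v2 = (cx - bx, cy - by)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        cross = v1[0] * v2[1] - v1[1] * v2[0]
        angles.append(math.atan2(cross, dot))
    return angles

def corner_indicator(curve):
    """Indicator in [0, 1]: 0 on straight runs, 1 at a full U-turn."""
    return [abs(a) / math.pi for a in turn_angles(curve)]
```

Vertices whose indicator exceeds a chosen threshold in (0, 1) are kept as candidate geometric features; chord-length and curvature indicators would be normalized analogously.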


Author(s):  
D. L. Bool ◽  
L. C. Mabaquiao ◽  
M. E. Tupas ◽  
J. L. Fabila

Abstract. For the past 10 years, the Philippines has seen and experienced the growing force of different natural disasters, and because of this the Philippine government started an initiative to use LiDAR technology at the forefront of disaster management to mitigate the effects of these natural phenomena. The study aims to help the initiative by determining the shape, number, distribution, and location of buildings within a given vicinity. The study implements a Python script to automate the detection of buildings within a given area, using a RANSAC algorithm to process the classified LiDAR dataset. Pre-processing is done by clipping the LiDAR data to a sample area. The program starts by using a Python module to read .LAS files, then implements the RANSAC algorithm to detect roof planes from a given set of parameters. The detected planes are intersected and combined by the program to define the roof of a building. Points lying on a detected building are removed from the initial list, and the program runs again. A sample area in Pulilan, Bulacan was used. A total of 8 out of 9 buildings in the test area were detected by the program, and the difference in area between the generated shapefile and the digitized shapefile was compared.
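The script itself is not shown in the abstract, but the roof-plane detection step it describes, fit a plane with RANSAC, collect its inliers, remove them, and repeat, rests on a loop like the following minimal sketch (parameter values are illustrative, not the study's):

```python
import random

def plane_from_points(p, q, r):
    """Plane (a, b, c, d) with ax + by + cz + d = 0 through three points."""
    ux, uy, uz = (q[i] - p[i] for i in range(3))
    vx, vy, vz = (r[i] - p[i] for i in range(3))
    a, b, c = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    return a, b, c, -(a * p[0] + b * p[1] + c * p[2])

def ransac_plane(points, iters=200, tol=0.05, seed=1):
    """Return the inlier set of the best plane found by random 3-point samples."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        a, b, c, d = plane_from_points(*rng.sample(points, 3))
        norm = (a * a + b * b + c * c) ** 0.5
        if norm == 0.0:        # degenerate (collinear) sample, skip
            continue
        inl = [p for p in points
               if abs(a * p[0] + b * p[1] + c * p[2] + d) / norm < tol]
        if len(inl) > len(best):
            best = inl
    return best
```

Running this repeatedly on the remaining points, removing each detected inlier set, yields one candidate roof plane per pass.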


Author(s):  
S. D. Jawak ◽  
S. N. Panditrao ◽  
A. J. Luis

This work uses a canopy height model (CHM) based workflow for individual tree crown delineation and a 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment, and evaluates its accuracy using very high-resolution panchromatic (PAN) and 8-band (multispectral) WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, was used to detect tree and building features by classifying point elevation values. The workflow comprises resampling of the LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of a digital surface model (DSM), generation of a bare-earth digital elevation model (DEM), and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients (RPC) model was executed in ERDAS Leica Photogrammetry Suite (LPS) using a supplementary *.RPB file. In the second stage, ortho-rectification was carried out in ERDAS LPS by incorporating well-distributed GCPs. The root mean square error (RMSE) for the WV-2 imagery was estimated to be 0.25 m using more than 10 well-distributed GCPs. Next, we generated the bare-earth DEM from the LiDAR point cloud data. In most cases, a bare-earth DEM does not represent true ground elevation, so the model was edited to obtain the most accurate DEM/DTM possible, and the LiDAR point cloud data were normalized based on the DTM to reduce the effect of undulating terrain. We normalized the vegetation point cloud values by subtracting the ground points (DEM) from the LiDAR point cloud. A normalized digital surface model (nDSM), or CHM, was calculated from the LiDAR data by subtracting the DEM from the DSM.
The CHM, or normalized DSM, represents the absolute height of all above-ground urban features relative to the ground: after normalization, the elevation value of a point indicates its height above the ground. The above-ground points were used for tree feature and building footprint extraction. For individual tree extraction, first- and last-return point clouds were used along with the bare-earth and building footprint models discussed above. In this study, scene-dependent extraction criteria were employed to improve the 3D feature extraction process. The LiDAR-based refining/filtering techniques used for bare-earth layer extraction were crucial for improving the subsequent 3D feature (tree and building) extraction. The PAN-sharpened WV-2 image (with 0.5 m spatial resolution) was used to assess the accuracy of the LiDAR-based 3D feature extraction. Our analysis yielded an accuracy of 98 % for tree feature extraction and 96 % for building feature extraction from LiDAR data. The study extracted a total of 15143 tree features using the CHM method, of which 14841 were visually interpreted on the PAN-sharpened WV-2 image data. The extracted tree features included both shadowed (13830) and non-shadowed (1011) trees. We note that the CHM method overestimated a total of 302 tree features that were not observed on the WV-2 image. One potential source of tree feature overestimation was tree features adjacent to buildings. For building feature extraction, the algorithm extracted a total of 6117 building features that were interpreted on the WV-2 image, even capturing buildings under trees (605) and buildings under shadow (112). Overestimation of tree and building features was observed to be a limiting factor in the 3D feature extraction process, owing to incorrect filtering of the point cloud in these areas.
One potential source of overestimation was man-made structures, including skyscrapers and bridges, which were confounded and extracted as buildings. This can be attributed to low point density at building edges and on flat roofs, or to occlusions, because of which LiDAR cannot match the planimetric accuracy of photogrammetric techniques (in segmentation), as well as to the lack of optimum use of textural and contextual information (especially at walls away from the roof) in the automatic extraction algorithm. In addition, there were no separate classes for bridges or for features lying inside the water, and multiple water height levels were not considered. Based on these inferences, we conclude that LiDAR-based 3D feature extraction supplemented by high-resolution satellite data is a potential application for understanding and characterizing the urban setup.
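The normalization step described above (nDSM/CHM = DSM minus DEM) is plain cell-by-cell grid subtraction; a minimal sketch over list-of-lists rasters, with an assumed nodata sentinel for cells missing from either surface:

```python
def canopy_height_model(dsm, dem, nodata=-9999.0):
    """CHM/nDSM = DSM - DEM, computed cell by cell over equal-sized grids."""
    chm = []
    for dsm_row, dem_row in zip(dsm, dem):
        # Propagate the nodata sentinel wherever either input is missing.
        chm.append([s - g if s != nodata and g != nodata else nodata
                    for s, g in zip(dsm_row, dem_row)])
    return chm
```

In practice the same subtraction runs over georeferenced rasters (e.g. in GIS software) rather than Python lists, but the per-cell arithmetic is identical.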

