PCA-Based Denoising Algorithm for Outdoor Lidar Point Cloud Data

Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3703
Author(s):  
Dongyang Cheng ◽  
Dangjun Zhao ◽  
Junchao Zhang ◽  
Caisheng Wei ◽  
Di Tian

Due to the complexity of surrounding environments, lidar point cloud data (PCD) are often degraded by plane noise. In order to eliminate noise, this paper proposes a filtering scheme based on the grid principal component analysis (PCA) technique and the ground splicing method. The 3D PCD is first projected onto a desired 2D plane, within which the ground and wall data are well separated from the PCD via a prescribed index based on the statistics of points in all 2D mesh grids. Then, a KD-tree is constructed for the ground data, and rough unsupervised segmentation is conducted to obtain the true ground data by using the normal vector as a distinctive feature. To improve the performance of noise removal, we propose an elaborate K-nearest neighbor (KNN)-based segmentation method via an optimization strategy. Finally, the denoised wall and ground data are spliced for further 3D reconstruction. The experimental results show that the proposed method is efficient at noise removal and superior to several traditional methods in terms of both denoising performance and running speed.
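The grid-statistics separation step described above can be sketched as follows; the per-cell index used here (the vertical extent of the points falling in each 2D grid cell) is an illustrative assumption, not necessarily the paper's exact statistic:

```python
import numpy as np

def split_ground_wall(points, cell=0.5, z_range_thresh=0.3):
    """Bin points into square XY grid cells and label each cell by the
    spread of its z values: a large vertical extent suggests wall,
    a small one ground (illustrative per-cell index)."""
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    cells = {}
    for idx, key in enumerate(map(tuple, ij)):
        cells.setdefault(key, []).append(idx)
    ground, wall = [], []
    for idxs in cells.values():
        z = points[idxs, 2]
        (wall if z.max() - z.min() > z_range_thresh else ground).extend(idxs)
    return np.array(ground), np.array(wall)

# Toy scene: a flat ground patch (indices 0-199) plus a vertical wall
# segment (indices 200-299).
rng = np.random.default_rng(0)
ground_pts = np.c_[rng.uniform(0, 4, (200, 2)), rng.normal(0, 0.01, 200)]
wall_pts = np.c_[rng.uniform(0, 0.3, 100), rng.uniform(0, 4, 100),
                 rng.uniform(0, 2, 100)]
g, w = split_ground_wall(np.vstack([ground_pts, wall_pts]))
```

Grid cells containing both ground and wall points get labeled wall here, which mirrors why the paper follows this rough split with finer segmentation.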

Author(s):  
Q. Kang ◽  
G. Huang ◽  
S. Yang

Point cloud data have become one of the most widely used data sources in the field of remote sensing. Key steps in point cloud pre-processing focus on gross error elimination and quality control. Owing to the sheer volume of point cloud data, existing gross error elimination methods consume massive amounts of memory and time. This paper presents a new method that constructs a KD-tree, searches it with the k-nearest neighbor algorithm, and applies an appropriate threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm helps delete gross errors in point cloud data while decreasing memory consumption and improving efficiency.
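The KD-tree/KNN/threshold pipeline described above is, in essence, a statistical outlier filter. A minimal sketch using SciPy's KD-tree, with a mean-plus-k-sigma rule as an assumed choice of threshold:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_gross_errors(points, k=8, std_ratio=2.0):
    """Flag points whose mean distance to their k nearest neighbours
    exceeds the global mean by std_ratio standard deviations."""
    tree = cKDTree(points)
    # Each point's first neighbour is itself; skip column 0.
    dists, _ = tree.query(points, k=k + 1)
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d <= mean_d.mean() + std_ratio * mean_d.std()
    return points[keep], keep

# 500 inliers plus two gross errors far from the cloud.
rng = np.random.default_rng(1)
cloud = np.vstack([rng.normal(0, 1, (500, 3)),
                   [[20.0, 20.0, 20.0], [-30.0, 5.0, 0.0]]])
clean, keep = remove_gross_errors(cloud)
```

The KD-tree makes each neighbour query logarithmic rather than linear in the number of points, which is where the memory/time savings the abstract claims would come from.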


2019 ◽  
Vol 16 (4) ◽  
pp. 172988141985753 ◽  
Author(s):  
Janghun Hyeon ◽  
Weonsuk Lee ◽  
Joo Hyung Kim ◽  
Nakju Doh

In this article, a point-wise normal estimation network for three-dimensional point cloud data called NormNet is proposed. We propose the multiscale K-nearest neighbor convolution module for strengthened local feature extraction. With the multiscale K-nearest neighbor convolution module and a PointNet-like architecture, we achieved a hybrid of three features: a global feature, a semantic feature from the segmentation network, and a local feature from the multiscale K-nearest neighbor convolution module. Those features, by mutually supporting each other, not only increase the normal estimation performance but also make the estimation robust under severe noise perturbations or point deficiencies. The performance was validated on three different data sets: synthetic CAD data (ModelNet), RGB-D sensor-based real 3D PCD (S3DIS), and LiDAR sensor-based real 3D PCD that we built and shared.
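For reference, the conventional (non-learned) point-wise normal estimator that networks like NormNet are typically compared against takes the smallest-eigenvalue eigenvector of each point's local covariance; a minimal sketch:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16):
    """PCA normal estimation: at each point, the normal is the
    eigenvector of the local covariance with the smallest eigenvalue."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        q = points[nb] - points[nb].mean(axis=0)
        _, vecs = np.linalg.eigh(q.T @ q)   # eigenvalues ascending
        normals[i] = vecs[:, 0]
    return normals

# Noisy samples of the plane z = 0: estimated normals should be ~(0, 0, ±1).
rng = np.random.default_rng(2)
pts = np.c_[rng.uniform(-1, 1, (300, 2)), rng.normal(0, 0.005, 300)]
normals = estimate_normals(pts)
```

This baseline degrades sharply near edges and under heavy noise, which is the weakness the learned hybrid features in the article target.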


Author(s):  
P. Liang ◽  
G. Q. Zhou ◽  
Y. L. Lu ◽  
X. Zhou ◽  
B. Song

Abstract. Due to the occlusion of objects and the complexity of the measured terrain during airborne lidar scanning, holes inevitably appear in the point cloud data after filtering and other processing. These incomplete data degrade the quality of the reconstructed digital elevation model, so repairing incomplete point cloud data has become an urgent problem. To solve the problem of hole repair in point cloud data, this paper proposes a hole repair algorithm based on an improved moving least squares method, built on a study of existing hole repair algorithms. Firstly, the algorithm extracts the boundary of the point cloud based on the triangular mesh model. Then we use k-nearest neighbor search to obtain the k-nearest neighbor points of each boundary point. Finally, according to the boundary points and their k-nearest neighbors, the improved moving least squares method is used to fit the hole surface and realize the repair. The feasibility of the algorithm is tested on specific application examples implemented in C++ and MATLAB. The experimental results show that the algorithm can effectively repair holes in point cloud data with high precision, and the filled hole area connects smoothly with the boundary.
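A basic (non-improved) moving least squares fit of this kind evaluates a locally weighted polynomial surface at the query position; the quadratic basis and Gaussian weight below are common choices, not necessarily the paper's:

```python
import numpy as np

def mls_height(query_xy, points, h=0.5):
    """Weighted least-squares fit of a quadratic z = f(x, y) around
    the query position (basic moving least squares)."""
    x, y = points[:, 0], points[:, 1]
    basis = np.c_[np.ones_like(x), x, y, x**2, x * y, y**2]
    d2 = ((points[:, :2] - query_xy) ** 2).sum(axis=1)
    sw = np.sqrt(np.exp(-d2 / h**2))        # Gaussian weights
    coef, *_ = np.linalg.lstsq(basis * sw[:, None], points[:, 2] * sw,
                               rcond=None)
    qx, qy = query_xy
    return coef @ np.array([1.0, qx, qy, qx**2, qx * qy, qy**2])

# Sample the paraboloid z = x^2 + y^2 with a hole around the origin,
# then evaluate the fitted surface at the hole centre (true z = 0).
rng = np.random.default_rng(3)
xy = rng.uniform(-1, 1, (400, 2))
xy = xy[(xy**2).sum(axis=1) > 0.1]          # carve out the hole
pts = np.c_[xy, (xy**2).sum(axis=1)]
z0 = mls_height(np.array([0.0, 0.0]), pts)
```

Evaluating such a fit at sample positions inside the hole, driven by the boundary points and their k-nearest neighbors, is what "fitting the hole surface" amounts to.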


2021 ◽  
Vol 13 (5) ◽  
pp. 1003
Author(s):  
Nan Luo ◽  
Hongquan Yu ◽  
Zhenfeng Huo ◽  
Jinhui Liu ◽  
Quan Wang ◽  
...  

Semantic segmentation of sensed point cloud data plays a significant role in scene understanding and reconstruction, robot navigation, etc. This work presents a Graph Convolutional Network integrating K-Nearest Neighbor (KNN) searching and the Vector of Locally Aggregated Descriptors (VLAD). KNN searching is utilized to construct the topological graph of each point and its neighbors. Then, we perform convolution on the edges of the constructed graph to extract representative local features via multiple Multilayer Perceptrons (MLPs). Afterwards, a trainable VLAD layer, NetVLAD, is embedded in the feature encoder to aggregate the local and global contextual features. The designed feature encoder is repeated multiple times, and the extracted features are concatenated in a jump-connection style to strengthen their distinctiveness and thereby improve the segmentation. Experimental results on two datasets show that the proposed work addresses the shortcoming of insufficient local feature extraction and improves the accuracy of semantic segmentation (mIoU 60.9% and oAcc 87.4% on S3DIS) compared to existing models.
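The KNN graph construction feeding the edge convolutions can be sketched as below; the edge feature [x_i, x_j - x_i] is the common EdgeConv choice and is assumed here for illustration:

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_edge_features(points, k=4):
    """Build the KNN graph and, for each directed edge (i, j), the
    edge feature [x_i, x_j - x_i] that edge MLPs would consume."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)
    nbrs = idx[:, 1:]                        # drop the point itself
    centers = np.repeat(points, k, axis=0)
    diffs = points[nbrs.ravel()] - centers
    return np.hstack([centers, diffs]), nbrs  # (N*k, 2*dim), (N, k)

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]])
feats, nbrs = knn_edge_features(pts, k=2)
```

In the network, an MLP maps each edge feature to a learned descriptor and a symmetric pooling over each point's k edges yields its local feature.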


2020 ◽  
Vol 10 (22) ◽  
pp. 8073
Author(s):  
Min Woo Ryu ◽  
Sang Min Oh ◽  
Min Ju Kim ◽  
Hun Hee Cho ◽  
Chang Baek Son ◽  
...  

This study proposes a new method to generate a three-dimensional (3D) geometric representation of an indoor environment by refining and processing indoor point cloud data (PCD) captured by backpack laser scanners. The proposed algorithm comprises two parts: data refinement and data processing. In the refinement part, the input indoor PCD are roughly segmented by applying random sample consensus (RANSAC) to the raw data based on an estimated normal vector. Next, the 3D geometric representation is generated by calculating and separating tangent points on the segmented PCD. This study proposes a robust algorithm that utilizes the topological features of the indoor PCD created by a hierarchical data process. The algorithm minimizes the size and the uncertainty of raw PCD caused by the absence of a global navigation satellite system and by equipment errors. The results show that the indoor environment can be converted into a 3D geometric representation by applying the proposed algorithm to the indoor PCD.
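The RANSAC rough-segmentation step works by repeatedly fitting a plane to three random points and keeping the hypothesis with the most inliers; a minimal sketch (tolerance and iteration count are illustrative):

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.02, seed=0):
    """Rough plane segmentation by RANSAC: fit a plane to three random
    points, count inliers within tol, keep the best hypothesis."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        if np.linalg.norm(n) < 1e-12:
            continue                          # degenerate sample
        n = n / np.linalg.norm(n)
        inliers = np.abs((points - a) @ n) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

# A flat floor (indices 0-299) plus scattered clutter (indices 300-399).
rng = np.random.default_rng(4)
floor = np.c_[rng.uniform(0, 5, (300, 2)), rng.normal(0, 0.005, 300)]
clutter = rng.uniform(0, 5, (100, 3))
mask = ransac_plane(np.vstack([floor, clutter]))
```

Running this repeatedly on the remaining points peels off walls, floor, and ceiling one plane at a time, which matches the rough-segmentation role it plays above.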


2019 ◽  
Vol 16 (2) ◽  
pp. 172988141983813
Author(s):  
Haobin Shi ◽  
Meng Xu ◽  
Kao-Shing Hwang ◽  
Chia-Hung Hung

This article addresses the safety problems that arise when robots and operators are tightly coupled in a shared working space. A method to model an articulated robot manipulator with cylindrical geometries based on partial point clouds is proposed. Firstly, images with point cloud data containing the posture of a robot with five revolute links are captured by a pair of RGB-D cameras. Secondly, point cloud clustering and Gaussian noise filtering are applied to the images to separate the point cloud data of three links from the combined images. Thirdly, an ideal cylindrical model is fitted to the processed point cloud data, segmented by the random sample consensus (RANSAC) method, so that the three joint angles corresponding to the three major links can be computed. The original method calculates the normal vector of the point cloud data via cylindrical model segmentation, but its posture measurement accuracy is low when the point cloud data are incomplete. To solve this problem, a principal axis compensation method is proposed that is not affected by the number of points in a cluster. The original and proposed methods are used to estimate the three joint angles of the manipulator system in experiments. Experimental results show that, compared with the original method for posture measurement, the proposed method reduces the average error by 27.97% and the sample standard deviation of the error by 54.21%. The proposed method is 0.971 frame/s slower than the original method in terms of image processing speed, but it remains feasible, and the purpose of posture measurement is achieved.


2018 ◽  
Vol 141 (2) ◽  
Author(s):  
Joseph A. Beck ◽  
Jeffrey M. Brown ◽  
Alex A. Kaszynski ◽  
Emily B. Carper

The impact of geometry variations on integrally bladed disk eigenvalues is investigated. A large population of industrial bladed disks (blisks) is scanned with a structured-light optical scanner to provide as-measured geometries in the form of point cloud data. The point cloud data are transformed using principal component (PC) analysis, which results in a Pareto of PCs. The PCs are used as inputs to predict the variation in a blisk's eigenvalues due to geometry deviations from nominal when all blades have the same deviations. A large subset of the PCs is retained to represent the geometry variation, which proves challenging in probabilistic analyses because of the curse of dimensionality. To overcome this, the dimensionality of the problem is reduced by computing an active subspace that describes critical directions in the PC input space. Active variables in this subspace are then fit with a surrogate model of a blisk's eigenvalues. This surrogate can be sampled efficiently with the large subset of PCs retained in the active subspace formulation to yield a predicted distribution of eigenvalues. The ability to build an active subspace mapping PC coefficients to eigenvalues is demonstrated, and results indicate that exploiting the active subspace captures the eigenvalue variation.
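The active subspace computation itself is an eigendecomposition of the average outer product of sampled gradients of the quantity of interest; a minimal sketch on a toy function (the gradient model is illustrative, not the blisk surrogate):

```python
import numpy as np

def active_subspace(grads, n_active=1):
    """Eigendecompose C = E[grad grad^T] over gradient samples and
    return eigenvalues (descending) and the leading directions."""
    C = grads.T @ grads / len(grads)
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order[:n_active]]

# Toy QoI f(x) = sin(w.x): a 10-D input that matters only through the
# single direction w, which the active subspace should recover.
rng = np.random.default_rng(5)
w = np.zeros(10)
w[0], w[3] = 3.0, 4.0                        # ||w|| = 5
X = rng.normal(size=(200, 10))
grads = np.outer(np.cos(X @ w), w)           # gradient of sin(w.x)
vals, W = active_subspace(grads)
```

A sharp drop in the eigenvalue spectrum, as here, is what justifies fitting the surrogate over only the few active variables.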


2021 ◽  
Vol 13 (16) ◽  
pp. 3156
Author(s):  
Yong Li ◽  
Yinzheng Luo ◽  
Xia Gu ◽  
Dong Chen ◽  
Fang Gao ◽  
...  

Point cloud classification is a key technology for point cloud applications, and feature extraction is a key step toward achieving it. Although many point cloud feature extraction and classification methods exist, and the acquisition of colored point cloud data has become easier in recent years, most point cloud processing algorithms ignore the color information associated with the point cloud or do not make full use of it. Therefore, we propose a voxel-based local binary pattern (VLBP) feature descriptor and fuse point cloud RGB information with geometric structure features using a random forest classifier to build a color point cloud classification algorithm. The proposed algorithm voxelizes the point cloud; divides the neighborhood of the center point into cubes (i.e., multiple adjacent sub-voxels); compares the gray information of the voxel center and adjacent sub-voxels; performs voxel global thresholding to convert it into a binary code; and uses a local difference sign-magnitude transform (LDSMT) to decompose the local difference of an entire voxel into two complementary components of sign and magnitude. Then, the VLBP feature of each point is extracted. To obtain more structural information about the point cloud, the proposed method extracts the normal vector of each point and the corresponding fast point feature histogram (FPFH) based on the normal vector. Finally, the geometric features (normal vector and FPFH) and color features (RGB and VLBP) of the point cloud are fused, and a random forest classifier is used to classify the colored laser point cloud. The experimental results show that the proposed algorithm achieves effective classification for point cloud data from different indoor and outdoor scenes, and the proposed VLBP features improve the accuracy of point cloud classification.
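The center-versus-neighbor comparison at the heart of an LBP-style code can be sketched as below; this shows only the binary-code step, not the global thresholding or LDSMT decomposition:

```python
import numpy as np

def lbp_code(center_gray, neighbour_grays):
    """One voxel's binary pattern: each neighbouring sub-voxel sets a
    bit when its grey value is >= the centre's."""
    bits = (np.asarray(neighbour_grays) >= center_gray).astype(int)
    return int("".join(map(str, bits)), 2), bits

# Six sub-voxel neighbours around a centre of grey value 0.5.
code, bits = lbp_code(0.5, [0.7, 0.2, 0.5, 0.9, 0.1, 0.4])
```

Stacking such codes per voxel, concatenated with RGB, normal, and FPFH features, gives the kind of fused feature vector a random forest would classify.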


Author(s):  
A. Nurunnabi ◽  
Y. Sadahiro ◽  
R. Lindenbergh

This paper investigates the problem of cylinder fitting in laser scanning three-dimensional Point Cloud Data (PCD). Most existing methods require full cylinder data, do not consider the presence of outliers, and are not statistically robust. Yet mobile laser scanning in particular often yields incomplete data, as street poles, for example, are only scanned from the road, and outliers are common: they may occur as random or systematic errors and may be scattered and/or clustered. In this paper, we present a statistically robust cylinder fitting algorithm for PCD that combines Robust Principal Component Analysis (RPCA) with robust regression. Robust principal components obtained by RPCA allow cylinder directions to be estimated more accurately, and an existing efficient circle fitting algorithm following robust regression principles properly fits the cylinder. We demonstrate the performance of the proposed method on artificial and real PCD. Results show that the proposed method is more accurate and robust: (i) in the presence of noise and a high percentage of outliers, (ii) for incomplete as well as complete data, (iii) for small and large numbers of points, and (iv) for different radii. On 1000 simulated quarter cylinders of 1 m radius with 10% outliers, a PCA-based method fitted cylinders with an average radius of 3.63 m, whereas the proposed method fitted cylinders with an average radius of 1.02 m. The algorithm has potential in applications such as fitting cylindrical poles (e.g., light and traffic poles), estimating diameter at breast height for trees, and building and bridge information modelling.
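Once the axis direction is estimated, cylinder fitting reduces to circle fitting in the plane orthogonal to the axis. A minimal sketch of the non-robust algebraic (Kasa) circle fit that robust-regression machinery of this kind would wrap:

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) circle fit: x^2 + y^2 = 2ax + 2by + c with
    c = r^2 - a^2 - b^2, solved linearly for centre (a, b) and r."""
    A = np.c_[2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))]
    b = (xy ** 2).sum(axis=1)
    (a, bb, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([a, bb]), np.sqrt(c + a**2 + bb**2)

# A quarter arc (incomplete data) of radius 1 centred at (2, 3).
t = np.linspace(0, np.pi / 2, 50)
centre, r = fit_circle(np.c_[2 + np.cos(t), 3 + np.sin(t)])
```

On clean data this linear fit handles even a quarter arc exactly; it is outliers that break it, which is why the paper replaces plain least squares with robust regression.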


Author(s):  
M. Eslami ◽  
M. Saadatseresht

Abstract. Laser scanner point clouds and photogrammetric imagery are complementary data for many applications and services. Misalignment between imagery and point cloud data is a common problem that leads to inaccurate products and procedures. In this paper, a novel strategy is proposed for coarse-to-fine registration between close-range imagery and terrestrial laser scanner point cloud data. First, tie points are extracted and matched across the photogrammetric imagery, and preprocessing is applied to eliminate non-robust ones. Then, for every tie point, two neighboring pixels are selected and matched in all overlapping images. After that, coarse interior orientation parameters (IOPs) and exterior orientation parameters (EOPs) of the images are employed to reconstruct the object space points of the tie point and its two neighboring pixels. Next, the nearest corresponding points to these object space photogrammetric points are estimated in the point cloud data. The three estimated point cloud points are used to calculate a plane and its normal vector. Theoretically, every object space tie point should lie on this plane, which is used as a conditional equation alongside the collinearity equations to finely register the photogrammetric image network. The attained root mean square error (RMSE) on check points is less than 2.3 pixels, which demonstrates the accuracy, completeness, and robustness of the proposed method.
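The plane-based conditional equation can be sketched as follows: three nearest point cloud points define a plane, and the fine registration drives the reconstructed tie point's distance from that plane toward zero:

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    """Plane through three points: unit normal n and offset d with
    n.x = d for any point x on the plane."""
    n = np.cross(p2 - p1, p3 - p1)
    n = n / np.linalg.norm(n)
    return n, n @ p1

def point_plane_residual(x, n, d):
    """Signed distance of a reconstructed tie point from the plane;
    fine registration drives this residual toward zero."""
    return n @ x - d

n, d = plane_from_points(np.array([0.0, 0.0, 1.0]),
                         np.array([1.0, 0.0, 1.0]),
                         np.array([0.0, 1.0, 1.0]))
res = point_plane_residual(np.array([0.5, 0.5, 1.2]), n, d)
```

In the adjustment, this residual enters as a condition on each tie point alongside the collinearity equations relating image and object space.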

