Laser Point Cloud Segmentation in MATLAB

2021 ◽  
Author(s):  
Bahadır Ergün ◽  
Cumhur Şahin

Currently, as a result of massive and continuous advances in laser measurement technology, the possibilities of map production have broadened, losses of time and material resources are largely avoided, and the accuracy and precision of the obtained results have improved significantly from an engineering point of view. Big data derived from laser point clouds are now used in the most significant procedures of surveying studies, and the programming methods depend on the individual study. As applications demand, coding procedures must become more efficient, since the volume of data has grown and the time consumed with it. Coding methods therefore need to be optimized to work together, especially in big-data studies. In this section, an automated survey (building facade surveying) is produced from scanning data by means of coding in MatLAB.

Author(s):  
J. Boehm ◽  
K. Liu ◽  
C. Alis

In the geospatial domain we have now reached the point where the data volumes we handle have clearly grown beyond the capacity of most desktop computers. This is particularly true in the area of point cloud processing. It is therefore natural to explore established big data frameworks for big geospatial data. The very first hurdle is the import of geospatial data into big data frameworks, commonly referred to as data ingestion. Geospatial data is typically encoded in specialised binary file formats, which are not naturally supported by the existing big data frameworks. Instead, such file formats are supported by software libraries that are restricted to single CPU execution. We present an approach that allows the use of existing point cloud file format libraries on the Apache Spark big data framework. We demonstrate the ingestion of large volumes of point cloud data into a compute cluster. The approach uses a map function to distribute the data ingestion across the nodes of a cluster. We test the capabilities of the proposed method to load billions of points into a commodity hardware compute cluster and we discuss the implications on scalability and performance. The performance is benchmarked against an existing native Apache Spark data import implementation.
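The map-based ingestion pattern described above can be sketched in plain Python, with an ordinary comprehension standing in for Spark's per-file map and a hypothetical packed-double format standing in for a real point cloud format library (both are illustrative assumptions, not the authors' implementation):

```python
import struct

def parse_chunk(blob):
    # Decode one binary chunk of packed little-endian float64 XYZ triples.
    # Stands in for a per-file call into a point cloud format library
    # (e.g. a LAS reader) executing on one worker node.
    n = len(blob) // 24  # 3 doubles, 8 bytes each
    return [struct.unpack_from("<3d", blob, i * 24) for i in range(n)]

# Three simulated "files"; the cluster would hand one blob per task.
files = [
    struct.pack("<6d", 0.0, 0.0, 0.0, 1.0, 2.0, 3.0),
    struct.pack("<3d", 4.0, 5.0, 6.0),
    struct.pack("<3d", 7.0, 8.0, 9.0),
]

# On Spark this would be roughly
#   sc.binaryFiles(path).values().flatMap(parse_chunk)
# a plain comprehension shows the same per-file map pattern locally.
points = [p for blob in files for p in parse_chunk(blob)]
```

Because each file is decoded independently, the format library never needs to be parallel itself; the framework simply runs one single-threaded decode per partition.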


2019 ◽  
Vol 11 (23) ◽  
pp. 2727 ◽  
Author(s):  
Ming Huang ◽  
Pengcheng Wei ◽  
Xianglei Liu

Plane segmentation is a basic yet important process in light detection and ranging (LiDAR) point cloud processing. Traditional point cloud plane segmentation algorithms are typically affected by the number of points and by noisy data, which results in low segmentation efficiency and poor segmentation results. Hence, an efficient encoding voxel-based segmentation (EVBS) algorithm based on a fast adjacent voxel search is proposed in this study. First, a binary octree algorithm is proposed to construct the voxel as the segmentation object and code the voxel, which can compute voxel features quickly and accurately. Second, a voxel-based region growing algorithm is proposed to cluster the corresponding voxels to perform the initial point cloud segmentation, which can improve the rationality of seed selection. Finally, a refining point method is proposed to solve the problem of under-segmentation in unlabeled voxels by judging the relationship between the points and the segmented plane. Experimental results demonstrate that the proposed algorithm is better than the traditional algorithm in terms of computation time, extraction accuracy, and recall rate.
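The voxel construction and adjacency-based growing can be illustrated with a minimal sketch; it leaves out the paper's octree encoding and the plane-based growing criterion, using a plain coordinate dictionary and 6-connected BFS instead (an assumption for brevity, not the EVBS algorithm itself):

```python
from collections import deque

def voxelize(points, size):
    # Bucket points into integer voxel coordinates. The paper encodes these
    # with a binary octree code; a dict keyed by the coordinate triple is
    # the simplest stand-in.
    voxels = {}
    for p in points:
        key = tuple(int(c // size) for c in p)
        voxels.setdefault(key, []).append(p)
    return voxels

def grow_regions(voxels):
    # Cluster occupied voxels by 6-connected adjacency via BFS.
    # The real algorithm also tests a planarity criterion before merging.
    seen, regions = set(), []
    for seed in voxels:
        if seed in seen:
            continue
        region, queue = [], deque([seed])
        seen.add(seed)
        while queue:
            v = queue.popleft()
            region.append(v)
            for axis in range(3):
                for d in (-1, 1):
                    n = list(v); n[axis] += d; n = tuple(n)
                    if n in voxels and n not in seen:
                        seen.add(n)
                        queue.append(n)
        regions.append(region)
    return regions

points = [(0.1, 0.1, 0.0), (1.2, 0.2, 0.0), (5.0, 5.0, 5.0)]
regions = grow_regions(voxelize(points, 1.0))  # a 2-voxel slab and a lone voxel
```

Working on voxels rather than raw points is what makes the adjacent search fast: neighbour lookup becomes a constant-time dictionary probe instead of a k-d tree query.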


Author(s):  
K. Liu ◽  
J. Boehm

Point cloud segmentation is a fundamental problem in point processing. Segmenting a point cloud fully automatically is very challenging due to the nature of point clouds as well as the differing requirements of distinct users. In this paper, an interactive segmentation method for point clouds is proposed. Only two strokes need to be drawn intuitively, indicating the target object and the background respectively. The drawn strokes are sparse and need not cover the whole object. Given the strokes, a weighted graph is built and the segmentation is formulated as a minimization problem. The problem is solved efficiently using the Max Flow Min Cut algorithm. In the experiments, mobile mapping data of a city area are utilized. The resulting segmentations demonstrate the efficiency of the method, which can potentially be applied to general point clouds.
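The graph-cut formulation can be sketched on a toy graph: stroke-seeded points get high-capacity edges to a source or sink, neighbouring points get similarity weights, and a standard max-flow computation yields the min cut. This is a generic Edmonds-Karp solver, not the authors' implementation; the node names and capacities are invented for the example:

```python
from collections import deque

def max_flow(cap, s, t):
    # Edmonds-Karp on a capacity dict {u: {v: capacity}}.
    # Returns the source side of the minimum cut: the nodes still
    # reachable from s in the residual graph when no augmenting path remains.
    res = {u: dict(vs) for u, vs in cap.items()}
    def bfs():
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v, c in res.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        return parent
    while True:
        parent = bfs()
        if t not in parent:
            return set(parent)
        # Trace the augmenting path and push the bottleneck capacity.
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        b = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= b
            res.setdefault(v, {}).setdefault(u, 0)
            res[v][u] += b

# "S" collects the object stroke, "T" the background stroke;
# p0..p3 is a tiny chain of neighbouring points.
cap = {
    "S": {"p0": 100},   # object stroke on p0
    "p0": {"p1": 10},
    "p1": {"p2": 1},    # weak similarity: the cut falls here
    "p2": {"p3": 10},
    "p3": {"T": 100},   # background stroke on p3
}
object_side = max_flow(cap, "S", "T") - {"S"}
```

The min cut severs the weakest similarity edge, so the object grows outward from the stroke even though the stroke covered only one point.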


2020 ◽  
Vol 37 (6) ◽  
pp. 1019-1027
Author(s):  
Ali Saglam ◽  
Hasan B. Makineci ◽  
Ömer K. Baykan ◽  
Nurdan Akhan Baykan

Point cloud processing is a challenging field because the points in the clouds are three-dimensional, irregularly distributed signals. For this reason, the points in point clouds are usually sampled into regularly distributed voxels in the literature. Voxelization as a preprocessing step significantly accelerates surface segmentation. Geometric cues such as plane directions (normals) in the voxels are mostly used to segment the local surfaces. However, the sampling process may place a non-planar point group (patch), found mostly on edges and corners, into a single voxel. Such voxels can mislead the segmentation process. In this paper, we separate the non-planar patches into planar sub-patches using k-means clustering. The largest of the planar sub-patches replaces the normal and barycenter properties of the voxel with its own. We have tested this process in a successful point cloud segmentation method and measured the effects of the proposed method on two point cloud segmentation datasets (Mosque and Train Station). The method increases the accuracy on the Mosque dataset from 83.84% to 87.86% and that on the Train Station dataset from 85.36% to 87.07%.
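A minimal sketch of the sub-patch splitting: a plain k-means on raw coordinates separates the two faces of a corner patch, and the largest sub-patch then supplies the voxel barycenter. The paper's method would more plausibly cluster by local plane geometry; coordinates, a deterministic initialisation, and the sample patch are assumptions made to keep the sketch short:

```python
def kmeans(points, init, iters=10):
    # Plain k-means: assign each point to the nearest centroid, then
    # recompute centroids as cluster means, for a fixed number of rounds.
    centroids = list(init)
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        centroids = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

# A corner "patch": five points on the floor plane, four on the wall plane.
patch = [(1, 0, 0), (2, 0, 0), (1, 1, 0), (2, 1, 0), (1.5, 0.5, 0),
         (0, 0, 1.5), (0, 0, 2), (0, 1, 1.5), (0, 1, 2)]
sub = kmeans(patch, init=[patch[0], patch[-1]])
largest = max(sub, key=len)
# The voxel keeps the barycenter (and normal) of the largest planar sub-patch.
barycenter = tuple(sum(c) / len(largest) for c in zip(*largest))
```

Replacing the voxel's properties with those of its dominant planar sub-patch is what stops the mixed edge voxel from dragging the region-growing step across two different surfaces.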


Author(s):  
M. Bassier ◽  
M. Bonduel ◽  
B. Van Genechten ◽  
M. Vergauwen

Point cloud segmentation is a crucial step in scene understanding and interpretation. The goal is to decompose the initial data into sets of workable clusters with similar properties. Additionally, it is a key aspect in the automated procedure from point cloud data to BIM. Current approaches typically only segment a single type of primitive such as planes or cylinders. Also, current algorithms suffer from oversegmenting the data and are often sensor or scene dependent.

In this work, a method is presented to automatically segment large unstructured point clouds of buildings. More specifically, the segmentation is formulated as a graph optimisation problem. First, the data is oversegmented with a greedy octree-based region growing method. The growing is conditioned on the segmentation of planes as well as smooth surfaces. Next, the candidate clusters are represented by a Conditional Random Field, after which the most likely configuration of candidate clusters is computed given a set of local and contextual features. The experiments show that the proposed method is a fast and reliable framework for unstructured point cloud segmentation. Processing speeds up to 40,000 points per second are recorded for the region growing. Additionally, the recall and precision of the graph clustering are approximately 80%. Overall, oversegmentation is reduced by nearly 22% by clustering the data. These clusters will be classified and used as a basis for the reconstruction of BIM models.


Author(s):  
R. Honma ◽  
H. Date ◽  
S. Kanai

Point clouds acquired using Mobile Laser Scanning (MLS) are used to extract road information such as curb stones, road markings, and road side objects. In this paper, we present a scanline-based MLS point cloud segmentation method for various road and road side objects. First, end points of the scanline, jump edge points, and corner points are extracted as feature points. The feature points are then interpolated to accurately extract irregular parts consisting of irregularly distributed points such as vegetation. Next, using a point reduction method, additional feature points on a smooth surface are extracted for segmentation at the edges of the curb cut. Finally, points between the feature points are extracted as flat segments on the scanline, and continuing feature points are extracted as irregular segments on the scanline. Furthermore, these segments on the scanline are integrated as flat or irregular regions. In the extraction of the feature points, neighboring points based on the spatial distance are used to avoid being influenced by differences in point density. In experiments, the effectiveness of the proposed method was demonstrated through an application to an MLS point cloud.
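Jump-edge extraction, the simplest of the feature point types above, can be sketched on a single range profile; the threshold rule and the sample values are illustrative assumptions, not the paper's exact criterion:

```python
def jump_edges(depths, jump_thresh):
    # Return the indices flanking a jump edge on one scanline:
    # consecutive points whose range values differ by more than jump_thresh.
    edges = set()
    for i in range(len(depths) - 1):
        if abs(depths[i + 1] - depths[i]) > jump_thresh:
            edges.update((i, i + 1))
    return sorted(edges)

# A profile across a road edge: flat pavement, then a drop behind the curb.
profile = [2.00, 2.00, 2.01, 1.70, 1.69, 1.70]
edges = jump_edges(profile, jump_thresh=0.15)
```

The points between consecutive feature points would then form the flat segments of the scanline, with runs of feature points collected as irregular segments.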


Author(s):  
Omar A. Mures ◽  
Alberto Jaspe ◽  
Emilio J. Padrón ◽  
Juan R. Rabuñal

Recent advances in acquisition technologies, such as LIDAR and photogrammetry, have brought 3D point clouds back to popularity in many application fields of Computer Graphics: Civil Engineering, Architecture, Topography, etc. These acquisition systems are producing an unprecedented amount of geometric data with additional attached information, resulting in huge datasets whose processing and storage requirements exceed usual approaches, presenting new challenges that can be addressed from a Big Data perspective by applying High Performance Computing and Computer Graphics techniques. This chapter presents a series of applications built on top of Point Cloud Manager (PCM), a middleware that provides an abstraction for point clouds with arbitrary attached data and makes it easy to perform out-of-core operations on them on commodity CPUs and GPUs. Hence, different kinds of real world applications are tackled, showing both real-time and offline examples, as well as render-oriented and computation-related operations.
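The out-of-core pattern such middleware builds on can be sketched in a few lines: stream fixed-size chunks of points so that only one chunk is resident in memory at a time. The record layout and chunk size are invented for the demo, not PCM's actual format:

```python
import io
import struct

CHUNK_POINTS = 2  # tiny for the demo; real out-of-core chunks hold millions

def stream_points(f):
    # Yield points chunk by chunk so only one chunk resides in memory,
    # regardless of how large the file on disk is.
    rec = struct.Struct("<3f")  # one packed float32 XYZ record
    while True:
        blob = f.read(rec.size * CHUNK_POINTS)
        if not blob:
            break
        yield [rec.unpack_from(blob, i * rec.size)
               for i in range(len(blob) // rec.size)]

buf = io.BytesIO(struct.pack("<9f", *range(9)))  # three XYZ points
total = sum(len(chunk) for chunk in stream_points(buf))
```

Rendering and computation passes both iterate the same chunk stream, which is why one abstraction can serve real-time visualisation and offline processing alike.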


Author(s):  
Lee J. Wells ◽  
Mohammed S. Shafae ◽  
Jaime A. Camelio

Ever advancing sensor and measurement technologies continually provide new opportunities for knowledge discovery and quality control (QC) strategies for complex manufacturing systems. One such state-of-the-art measurement technology currently being implemented in industry is the 3D laser scanner, which can rapidly provide millions of data points to represent an entire manufactured part's surface. This gives 3D laser scanners a significant advantage over competing technologies that typically provide tens or hundreds of data points. Consequently, data collected from 3D laser scanners have great potential to be used for inspecting parts for surface and feature abnormalities. The current use of 3D point clouds for part inspection falls into two main categories: 1) extracting feature parameters, which does not complement the nature of 3D point clouds as it wastes valuable data, and 2) an ad-hoc manual process in which a visual representation of a point cloud (usually as deviations from nominal) is analyzed, which tends to suffer from slow, inefficient, and inconsistent inspection results. Therefore, our paper proposes an approach to automate the latter form of 3D point cloud inspection. The proposed approach uses a newly developed adaptive generalized likelihood ratio (AGLR) technique to identify the most likely size, shape, and magnitude of a potential fault within the point cloud, which transforms the ad-hoc visual inspection approach into a statistically viable automated inspection solution. In order to aid practitioners in designing and implementing an AGLR-based inspection process, our paper also reports the performance of the AGLR with respect to the probability of detecting faults of specific sizes and magnitudes, in addition to the probability of false alarms.
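The core of a generalized likelihood ratio scan can be sketched in one dimension: slide windows of varying width over the deviations-from-nominal and score each window's mean shift under a known-noise Gaussian model. This is a generic GLR, not the paper's AGLR; the deviation values and noise level are invented for the example:

```python
def glr_scan(devs, sigma, max_w):
    # Generalized likelihood ratio for a mean shift under a known-sigma
    # Gaussian model, maximised over all windows up to max_w points:
    # GLR(window) = w * mean(window)^2 / (2 * sigma^2).
    best = (0.0, 0, 0)  # (statistic, start index, width)
    for w in range(1, max_w + 1):
        for s in range(len(devs) - w + 1):
            m = sum(devs[s:s + w]) / w
            stat = w * m * m / (2 * sigma * sigma)
            if stat > best[0]:
                best = (stat, s, w)
    return best

# Deviations from nominal along a scanned profile; a dent sits at indices 4-6.
devs = [0.0, 0.01, -0.02, 0.0, -0.5, -0.52, -0.48, 0.01, 0.0, -0.01]
stat, start, width = glr_scan(devs, sigma=0.02, max_w=5)
```

The maximising window reports the most likely location and extent of the fault, and comparing the statistic against a threshold calibrated for a target false alarm rate turns the scan into an automated accept/reject decision.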


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Liang Gong ◽  
Xiaofeng Du ◽  
Kai Zhu ◽  
Ke Lin ◽  
Qiaojun Lou ◽  
...  

The automated measurement of crop phenotypic parameters is of great significance to the quantitative study of crop growth. The segmentation and classification of crop point clouds help to automate crop phenotypic parameter measurement. At present, crop spike-shaped point cloud segmentation faces problems such as scarce samples, uneven distribution of point clouds, occlusion of stem and spike, disorderly arrangement of point clouds, and a lack of targeted network models. Traditional clustering methods can segment plant organ point clouds with relatively independent spatial locations, but the accuracy is not acceptable. This paper first builds a desktop-level point cloud scanning apparatus based on a structured-light projection module to facilitate the point cloud acquisition process. Then, rice ear point clouds were collected and compiled into a rice ear point cloud dataset. In addition, data augmentation is used to improve sample utilization efficiency and training accuracy. Finally, a 3D point cloud convolutional neural network model called Panicle-3D was designed to achieve better segmentation accuracy. Specifically, the design of Panicle-3D is aimed at the multiscale characteristics of plant organs, combining the PointConv structure with long and short skip connections, which accelerates the convergence speed of the network and reduces the loss of features in the process of point cloud downsampling. After comparison experiments, the segmentation accuracy of Panicle-3D reaches 93.4%, which is higher than PointNet. Panicle-3D is suitable for other similar crop point cloud segmentation tasks.


2021 ◽  
Vol 8 (2) ◽  
pp. 303-315
Author(s):  
Jingyu Gong ◽  
Zhou Ye ◽  
Lizhuang Ma

A significant performance boost has been achieved in point cloud semantic segmentation by utilization of the encoder-decoder architecture and novel convolution operations for point clouds. However, co-occurrence relationships within a local region which can directly influence segmentation results are usually ignored by current works. In this paper, we propose a neighborhood co-occurrence matrix (NCM) to model local co-occurrence relationships in a point cloud. We generate target NCM and prediction NCM from semantic labels and a prediction map respectively. Then, Kullback-Leibler (KL) divergence is used to maximize the similarity between the target and prediction NCMs to learn the co-occurrence relationship. Moreover, for large scenes where the NCMs for a sampled point cloud and the whole scene differ greatly, we introduce a reverse form of KL divergence which can better handle the difference to supervise the prediction NCMs. We integrate our method into an existing backbone and conduct comprehensive experiments on three datasets: Semantic3D for outdoor space segmentation, and S3DIS and ScanNet v2 for indoor scene segmentation. Results indicate that our method can significantly improve upon the backbone and outperform many leading competitors.
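The NCM construction and KL comparison can be sketched on a toy scene. In the paper these operate on soft prediction maps inside a differentiable loss; here hard labels and an invented four-point neighbourhood graph keep the sketch self-contained:

```python
import math

def ncm(labels, neighbors, k):
    # Neighborhood co-occurrence matrix: row i is the distribution of
    # labels observed in the neighborhoods of points labelled i.
    m = [[0.0] * k for _ in range(k)]
    for p, ns in neighbors.items():
        for n in ns:
            m[labels[p]][labels[n]] += 1.0
    for row in m:
        s = sum(row)
        if s:
            for j in range(k):
                row[j] /= s
    return m

def kl(p, q, eps=1e-9):
    # Smoothed KL(p || q) between two NCM rows; eps avoids log(0).
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Four points in a 2-class toy scene; neighbors maps index -> neighbor indices.
neighbors = {0: [1, 2], 1: [0], 2: [3], 3: [2]}
target_ncm = ncm({0: 0, 1: 0, 2: 1, 3: 1}, neighbors, 2)
pred_ncm = ncm({0: 0, 1: 0, 2: 0, 3: 1}, neighbors, 2)  # point 2 mislabelled
loss = sum(kl(t, p) for t, p in zip(target_ncm, pred_ncm))
```

A single mislabelled point perturbs every row it co-occurs in, so the KL term penalises label configurations that are locally implausible even when per-point accuracy looks reasonable.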

