Machine Learning-Based Supervised Classification of Point Clouds Using Multiscale Geometric Features

2021 ◽  
Vol 10 (3) ◽  
pp. 187
Author(s):  
Muhammed Enes Atik ◽  
Zaide Duran ◽  
Dursun Zafer Seker

3D scene classification has become an important research field in photogrammetry, remote sensing, computer vision and robotics with the widespread usage of 3D point clouds. Point cloud classification, also called semantic labeling, semantic segmentation, or semantic classification of point clouds, is a challenging topic. Machine learning, on the other hand, is a powerful mathematical tool used to classify 3D point clouds whose content can be significantly complex. In this study, the classification performance of different machine learning algorithms at multiple scales was evaluated. The feature spaces of the points in the point cloud were created using geometric features generated from the eigenvalues of the covariance matrix. Eight supervised classification algorithms were tested in four different areas from three datasets (the Dublin City, Vaihingen and Oakland3D datasets). The algorithms were evaluated in terms of overall accuracy, precision, recall, F1 score and processing time. The best overall accuracy in each test area was achieved by a different algorithm: 93.12% with Random Forest on Dublin City Area 1, 92.78% with a Multilayer Perceptron on Dublin City Area 2, 79.71% with Support Vector Machines on Vaihingen, and 97.30% with Linear Discriminant Analysis on Oakland3D.
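The eigenvalue-based covariance features mentioned in this abstract can be sketched generically. The feature names below (linearity, planarity, sphericity, omnivariance, anisotropy) follow common usage in the point cloud literature and are not necessarily the paper's exact feature set:

```python
import numpy as np

def covariance_features(neighborhood):
    """Eigenvalue-based geometric features for a local point neighborhood.

    neighborhood: (N, 3) array of 3D points around a query point,
    assumed non-degenerate (not all points identical).
    """
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    # eigvalsh returns ascending eigenvalues; reorder so l1 >= l2 >= l3 >= 0
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return {
        "linearity":    (l1 - l2) / l1,   # ~1 for line-like neighborhoods
        "planarity":    (l2 - l3) / l1,   # ~1 for plane-like neighborhoods
        "sphericity":   l3 / l1,          # ~1 for isotropic neighborhoods
        "omnivariance": (l1 * l2 * l3) ** (1.0 / 3.0),
        "anisotropy":   (l1 - l3) / l1,
    }
```

A flat patch of points yields planarity near 1, while points along a line yield linearity near 1; stacking such features per point gives the feature space fed to the classifiers.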

Author(s):  
E. Grilli ◽  
E. M. Farella ◽  
A. Torresani ◽  
F. Remondino

Abstract. In recent years, the application of artificial intelligence (Machine Learning and Deep Learning methods) to the classification of 3D point clouds has become an important task in modern 3D documentation and modelling applications. The identification of proper geometric and radiometric features is fundamental for classifying 2D/3D data correctly. While many studies have been conducted in the geospatial field, the cultural heritage sector is still partly unexplored. In this paper we analyse the efficacy of geometric covariance features as a support for the classification of Cultural Heritage point clouds. To analyse the impact of the different features, calculated on spherical neighbourhoods at various radius sizes, we present results obtained on four different heritage case studies using different feature configurations.


Drones ◽  
2021 ◽  
Vol 5 (4) ◽  
pp. 104
Author(s):  
Zaide Duran ◽  
Kubra Ozcan ◽  
Muhammed Enes Atik

With the development of photogrammetry technologies, point clouds have found a wide range of use in academic and commercial areas. This situation has made it essential to extract information from point clouds. In particular, artificial intelligence applications have been used to extract information from point clouds of complex structures. Point cloud classification is also one of the leading areas where these applications are used. In this study, the classification of point clouds obtained by aerial photogrammetry and Light Detection and Ranging (LiDAR) technology belonging to the same region is performed using machine learning. For this purpose, nine popular machine learning methods have been used. Geometric features obtained from the point clouds were used for the feature spaces created for classification. Color information is also added to these in the photogrammetric point cloud. For the LiDAR point cloud, the highest overall accuracy was obtained as 0.96 with the Multilayer Perceptron (MLP) method, and the lowest as 0.50 with the AdaBoost method. For the photogrammetric point cloud, the highest overall accuracy was again achieved with the MLP method (0.90), and the lowest with the Gaussian Naive Bayes (GNB) method (0.25).


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7392
Author(s):  
Danish Nazir ◽  
Muhammad Zeshan Afzal ◽  
Alain Pagani ◽  
Marcus Liwicki ◽  
Didier Stricker

In this paper, we present the idea of self-supervised learning for the shape completion and classification of point clouds. Most 3D shape completion pipelines utilize AutoEncoders to extract features from point clouds for downstream tasks such as classification, segmentation, detection, and other related applications. Our idea is to add contrastive learning to the AutoEncoder to encourage global feature learning of the point cloud classes. This is performed by optimizing a triplet loss. Furthermore, local feature representations of the point cloud are learned by adding the Chamfer distance function. To evaluate the performance of our approach, we utilize the PointNet classifier. We also extend the number of classes for evaluation from 4 to 10 to show the generalization ability of the learned features. Based on our results, embeddings generated by the contrastive AutoEncoder improve the shape completion and classification performance of point clouds from 84.2% to 84.9%, achieving state-of-the-art results with 10 classes.
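The two losses named in this abstract, the Chamfer distance for local reconstruction and a triplet loss for contrastive learning, can be written down generically. The sketch below is illustrative only, not the authors' implementation (their versions operate on learned embeddings inside the AutoEncoder):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3):
    mean squared distance from each point to its nearest neighbour in the
    other set, summed over both directions."""
    d = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss on embedding vectors: push the anchor to be
    closer to the positive than to the negative by at least the margin."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)
```

The Chamfer term rewards faithful point-set reconstruction, while the triplet term pulls embeddings of the same class together and pushes different classes apart.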


Author(s):  
M. Weinmann ◽  
A. Schmidt ◽  
C. Mallet ◽  
S. Hinz ◽  
F. Rottensteiner ◽  
...  

The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing and computer vision. For reliably extracting objects such as buildings, road inventory or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components in the processing workflow have been investigated extensively, but separately, in recent years, the respective connection by sharing the results of crucial tasks across all components has not yet been addressed. This connection not only encapsulates the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (i) individually optimized 3D neighborhoods for (ii) the extraction of distinctive geometric features and (iii) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process, and that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification.
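One common criterion for individually optimizing the 3D neighborhood size, used in this strand of the literature, is to pick the k (number of nearest neighbours) that minimizes the Shannon eigenentropy of the local covariance eigenvalues. The sketch below illustrates the idea; it uses brute-force nearest-neighbour search for brevity, where a KD-tree would be used in practice, and the candidate k values are arbitrary:

```python
import numpy as np

def eigenentropy(points):
    """Shannon entropy of the normalized covariance eigenvalues of a
    point neighborhood; low for strongly structured (linear/planar)
    neighborhoods, high for isotropic ones."""
    centered = points - points.mean(axis=0)
    ev = np.linalg.eigvalsh(centered.T @ centered / len(points))
    ev = np.clip(ev, 1e-12, None)          # guard against zero/negative noise
    p = ev / ev.sum()
    return float(-np.sum(p * np.log(p)))

def optimal_k(cloud, query_idx, candidate_ks=(10, 25, 50, 75, 100)):
    """Return the candidate neighborhood size minimizing eigenentropy
    around the query point (brute-force kNN for illustration)."""
    d = np.sum((cloud - cloud[query_idx]) ** 2, axis=1)
    order = np.argsort(d)
    return min(candidate_ks, key=lambda k: eigenentropy(cloud[order[:k]]))
```

The neighborhood chosen this way adapts to the local structure: small on fine detail, larger on smooth surfaces, which is what makes the downstream features more distinctive.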


Drones ◽  
2021 ◽  
Vol 5 (4) ◽  
pp. 145
Author(s):  
Alessandra Capolupo

A proper classification of 3D point clouds allows the potential of the data to be fully exploited in assessing and preserving cultural heritage. The point cloud classification workflow is commonly based on the selection and extraction of suitable geometric features. Although several research activities have investigated the impact of geometric features on classification accuracy, only a few works have focused on their accuracy and reliability. This paper investigates the accuracy of 3D point cloud geometric features through a statistical analysis based on their corresponding eigenvalues and covariance, with the aim of exploiting their effectiveness for cultural heritage classification. The proposed approach was separately applied to two high-quality 3D point clouds of the All Saints' Monastery of Cuti (Bari, Southern Italy), generated using two competing survey techniques: Remotely Piloted Aircraft System (RPAS) Structure from Motion (SfM) and Multi-View Stereo (MVS) techniques, and Terrestrial Laser Scanner (TLS). Point cloud compatibility was guaranteed through re-alignment and co-registration of the data. The accuracy of the geometric features obtained from the RPAS digital photogrammetric and TLS models was then analyzed and presented. Lastly, a discussion of the convergences and divergences of these results is also provided.


Author(s):  
M. R. Hess ◽  
V. Petrovic ◽  
F. Kuester

Digital documentation of cultural heritage structures is increasingly common through the application of different imaging techniques. Many works have focused on the application of laser scanning and photogrammetry techniques for the acquisition of three-dimensional (3D) geometry detailing cultural heritage sites and structures. With an abundance of these 3D data assets, there must be a digital environment where these data can be visualized and analyzed. Presented here is a feedback-driven visualization framework that seamlessly enables interactive exploration and manipulation of massive point cloud data. The focus of this work is on the classification of different building materials, with the goal of building more accurate as-built information models of historical structures. User-defined functions have been tested within the interactive point cloud visualization framework to evaluate automated and semi-automated classification of 3D point data. These functions include decisions based on observed color, laser intensity, normal vector or local surface geometry. Multiple case studies are presented here to demonstrate the flexibility and utility of the presented point cloud visualization framework in achieving classification objectives.
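The user-defined decision functions described here, rules over colour, intensity and normal direction, can be illustrated with a toy per-point classifier. Every threshold and class name below is invented for illustration and does not come from the paper:

```python
def classify_point(r, g, b, intensity, nz):
    """Toy rule-based material classifier for one point.

    r, g, b: colour channels in [0, 1]; intensity: laser return in [0, 1];
    nz: z-component of the unit surface normal. All thresholds hypothetical.
    """
    if nz > 0.9:                 # near-vertical normal -> horizontal surface
        return "floor_or_roof"
    if intensity > 0.8:          # strong laser return -> reflective material
        return "reflective_metal"
    if g > r and g > b:          # green-dominant colour -> vegetation
        return "vegetation"
    return "masonry"             # fallback class
```

In an interactive framework like the one described, such rules would be edited live and re-applied to the visible points, with the rendered result serving as immediate feedback.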


Author(s):  
E. Özdemir ◽  
F. Remondino

Abstract. 3D city modeling has become important over the last decades as these models are used in different studies including energy evaluation, visibility analysis, 3D cadastre, urban planning, change detection, disaster management, etc. Segmentation and classification of photogrammetric or LiDAR data is important for 3D city models, as these are the main data sources, and these tasks are challenging due to their complexity. This study presents research in progress which focuses on the segmentation and classification of 3D point clouds and orthoimages to generate 3D urban models. The aim is to classify photogrammetry-based point clouds (>30 pts/sqm) in combination with aerial RGB orthoimages (~10 cm, RGB image) in order to label buildings, ground level objects (GLOs), trees, grass areas, and other regions. While the classification of aerial orthoimages is foreseen to be a fast approach to obtain classes and then transfer them from the image to the point cloud space, segmenting a point cloud is expected to be much more time consuming, but to provide significant segments from the analysed scene. For this reason, the proposed method combines segmentation methods on the two types of geoinformation in order to achieve better results.


Author(s):  
E. Özdemir ◽  
F. Remondino ◽  
A. Golkar

Abstract. With recent advances in technology, 3D point clouds are requested and used more and more frequently, not only for visualization needs but also, e.g., by public administrations for urban planning and management. 3D point clouds are also a very frequent source for generating 3D city models, which have recently become available for many applications, such as urban development plans, energy evaluation, navigation, visibility analysis and numerous other GIS studies. While the main data sources have remained the same (namely aerial photogrammetry and LiDAR), the way these city models are generated has been evolving towards automation, with different approaches. As most of these approaches are based on point clouds with proper semantic classes, our aim is to classify aerial point clouds into meaningful semantic classes, e.g. ground level objects (GLOs, including roads and pavements), vegetation, buildings' facades and buildings' roofs. In this study we tested and evaluated various algorithms for classification: three deep learning algorithms and one classical machine learning algorithm. In the experiments, several hand-crafted geometric features, chosen depending on the dataset, are used and, unconventionally, these geometric features are also used for deep learning.


Author(s):  
J. Wolf ◽  
R. Richter ◽  
S. Discher ◽  
J. Döllner

Abstract. In this work, we present an approach that uses an established image recognition convolutional neural network for the semantic classification of two-dimensional objects found in mobile mapping 3D point cloud scans of road environments, namely manhole covers and road markings. We show that the approach is capable of classifying these objects and that it can efficiently be applied on large datasets. Top-down view images from the point cloud are rendered and classified by a U-Net implementation. The results are integrated into the point cloud by setting an additional semantic attribute. Shape files can be computed from the classified points.
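Rendering top-down view images from a point cloud, the preprocessing step this abstract describes before the U-Net, amounts to rasterizing points onto a ground-plane grid. A minimal sketch follows; the cell size and the per-cell averaging scheme are assumptions, not the authors' rendering pipeline:

```python
import numpy as np

def topdown_raster(points, values, cell=0.5):
    """Rasterize a point cloud to a top-down 2D grid.

    points: (N, 3) array; values: (N,) per-point scalar (e.g. intensity
    or height). Each grid cell gets the mean value of the points falling
    into it; empty cells are 0.
    """
    xy = points[:, :2]
    idx = ((xy - xy.min(axis=0)) / cell).astype(int)   # (col, row) per point
    h, w = idx[:, 1].max() + 1, idx[:, 0].max() + 1
    img = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for (i, j), v in zip(idx, values):
        img[j, i] += v
        cnt[j, i] += 1
    return np.where(cnt > 0, img / np.maximum(cnt, 1), 0.0)
```

The resulting image can be fed to a 2D segmentation network, and the per-pixel labels mapped back to the contributing points as an extra semantic attribute.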


Author(s):  
J. Höhle

Façades of buildings contain various types of objects which have to be recorded for information systems. The article describes a solution for this task, focusing on automated classification by means of machine learning techniques. Stereo pairs of oblique images are used to derive 3D point clouds of buildings. The planes of the buildings are automatically detected. The derived planes are supplemented with a regular grid of points for which the colour values are found in the images. For each grid point of the façade, additional attributes are derived from image and object data. This "intelligent" point cloud is analysed by a decision tree, which is derived from a small training set. The derived decision tree is then used to classify the complete point cloud. Each point of the regular façade grid is assigned a class, and a façade plan is mapped by a colour palette representing the different objects. Some image processing methods are applied to improve the appearance of the interpreted façade plot and to extract additional information. The proposed method is tested on the façades of a church. Accuracy measures were derived from 140 independent checkpoints, which were randomly selected. When selecting four classes ("window", "stonework", "painted wall", and "vegetation"), the overall accuracy is assessed at 80% (95% confidence interval: 71%–88%). The user accuracy of the class "stonework" was assessed at 90% (95% CI: 80%–97%). The proposed methodology has a high potential for automation and fast processing.

