Spectral Pattern Classification in Lidar Data for Rock Identification in Outcrops

2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Leonardo Campos Inocencio ◽  
Mauricio Roberto Veronez ◽  
Francisco Manoel Wohnrath Tognoli ◽  
Marcelo Kehl de Souza ◽  
Reginaldo Macedônio da Silva ◽  
...  

The present study aimed to develop and implement a method for detecting and classifying spectral signatures in point clouds obtained from a terrestrial laser scanner, in order to identify different rocks in outcrops and to generate a digital outcrop model. To achieve this objective, software based on cluster analysis, named K-Clouds, was developed through a partnership between UNISINOS and the company V3D. The tool first analyzes and interprets a histogram of the outcrop point cloud and then, given a number of classes provided by the user, processes the intensity return values into those classes. This classified information can then be interpreted by geologists, enabling a better understanding and identification of the rocks present in the outcrop. Beyond distinguishing different rocks, the method was able to detect small changes in the physical-chemical characteristics of the rocks caused by weathering or compositional changes.
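The intensity-clustering step described above can be illustrated with a minimal one-dimensional k-means over return intensities. This is a sketch only: K-Clouds itself is not public, and the function name, quantile-based initialisation, and parameters here are assumptions, not the published implementation.

```python
import numpy as np

def classify_intensity(intensities, n_classes, n_iter=50):
    """One-dimensional k-means over laser return intensities: assigns each
    point to one of n_classes spectral classes, mimicking a user-driven
    split of the intensity histogram."""
    x = np.asarray(intensities, dtype=float)
    # initialise class centres at evenly spaced quantiles of the histogram
    centers = np.quantile(x, np.linspace(0.1, 0.9, n_classes))
    for _ in range(n_iter):
        # assign each intensity to the nearest centre, then update centres
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for k in range(n_classes):
            if np.any(labels == k):
                centers[k] = x[labels == k].mean()
    return labels, centers
```

Two well-separated intensity populations (e.g., two lithologies with distinct reflectance) then fall into two distinct classes.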

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7392
Author(s):  
Danish Nazir ◽  
Muhammad Zeshan Afzal ◽  
Alain Pagani ◽  
Marcus Liwicki ◽  
Didier Stricker

In this paper, we present the idea of self-supervised learning for the shape completion and classification of point clouds. Most 3D shape completion pipelines utilize AutoEncoders to extract features from point clouds that are then used in downstream tasks such as classification, segmentation, and detection. Our idea is to add contrastive learning to AutoEncoders to encourage global feature learning of the point cloud classes; this is performed by optimizing a triplet loss. Furthermore, local feature representation learning of the point cloud is performed by adding the Chamfer distance function. To evaluate the performance of our approach, we utilize the PointNet classifier. We also extend the number of evaluation classes from 4 to 10 to show the generalization ability of the learned features. Based on our results, embeddings generated by the contrastive AutoEncoder improve the shape completion and classification performance of point clouds from 84.2% to 84.9%, achieving state-of-the-art results with 10 classes.
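The two loss terms named in the abstract can be sketched in plain NumPy. This illustrates the standard definitions of the triplet loss and the symmetric Chamfer distance, not the authors' training code:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N,3) and q (M,3):
    mean nearest-neighbour squared distance, taken in both directions."""
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge triplet loss on embedding vectors: pulls same-class pairs
    together and pushes different-class pairs apart by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)
```

The Chamfer term supervises the reconstruction (local geometry), while the triplet term shapes the latent space so that embeddings of the same class cluster together.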


Author(s):  
D. Shokri ◽  
H. Rastiveis ◽  
A. Shams ◽  
W. A. Sarasua

Abstract. Utility poles located along roads play a key role in road safety and planning as well as in communications and electricity distribution. In this regard, new sensing technologies such as the Mobile Terrestrial Laser Scanner (MTLS) can be an efficient means of detecting utility poles and other planimetric objects along roads. However, due to the vast amount of data collected by MTLS in the form of a point cloud, automated techniques are required to extract objects from these data. This study proposes a novel method for the automatic extraction of utility poles from MTLS point clouds. The proposed algorithm is composed of three consecutive steps: pre-processing, cable area detection, and pole extraction. The point cloud is first pre-processed, and candidate areas for utility poles are then specified based on the Hough Transform (HT). Utility poles are extracted by applying horizontal and vertical density criteria to these areas. The performance of the method was evaluated on a sample point cloud, and 98% accuracy was achieved in extracting utility poles using the proposed method.
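The vertical-density test in the final step can be sketched as follows. The radius, minimum height, bin size, and occupancy threshold below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def is_pole(points, center_xy, radius=0.3, min_height=4.0,
            bin_h=0.5, min_occupancy=0.8):
    """A candidate location is kept as a pole if the points within `radius`
    of it span at least `min_height` vertically and fill most of the
    vertical bins in between (i.e., a continuous vertical structure)."""
    d = np.hypot(points[:, 0] - center_xy[0], points[:, 1] - center_xy[1])
    z = points[d < radius, 2]
    if z.size == 0 or z.max() - z.min() < min_height:
        return False
    bins = np.arange(z.min(), z.max() + bin_h, bin_h)
    occupied = np.histogram(z, bins=bins)[0] > 0
    return occupied.mean() >= min_occupancy
```

A tall, continuous column of points passes the test; a flat ground patch fails on the height criterion.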


2021 ◽  
Vol 13 (13) ◽  
pp. 2494
Author(s):  
Gaël Kermarrec ◽  
Niklas Schild ◽  
Jan Hartmann

T-splines have recently been introduced to represent objects of arbitrary shape using a smaller number of control points than the conventional non-uniform rational B-splines (NURBS) or B-spline representations in computer-aided design, computer graphics and reverse engineering. They are flexible in representing complex surface shapes and economical in terms of parameters, as they enable local refinement. This property is a great advantage when dense, scattered and noisy point clouds are approximated using least squares fitting, such as those from a terrestrial laser scanner (TLS). Unfortunately, when it comes to assessing the goodness of fit of the surface approximation with a real dataset, only a noisy point cloud can be approximated: (i) a low root mean squared error (RMSE) can be linked with overfitting, i.e., a fitting of the noise, and should be correspondingly avoided, and (ii) a high RMSE is synonymous with a lack of details. To address the challenge of judging the approximation, the reference surface should be entirely known: this can be solved by printing a mathematically defined T-spline reference surface in three dimensions (3D) and modeling the artefacts induced by the 3D printing. Once scanned under different configurations, it is possible to assess the goodness of fit of the approximation for a noisy and potentially gappy point cloud and compare it with the traditional but less flexible NURBS. The advantages of T-spline local refinement open the door for further applications within a geodetic context, such as rigorous statistical testing of deformation. Two different scans from a slightly deformed object were approximated; we found that more than 40% of the computational time could be saved without affecting the goodness of fit of the surface approximation by using the same mesh for the two epochs.
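The two failure modes named in (i) and (ii) can be illustrated with a simple 1D least-squares fit against a known reference curve, which stands in for the printed ground-truth surface. Polynomials replace splines here purely to keep the sketch dependency-free; all values are illustrative:

```python
import numpy as np

def fit_and_score(degree, x, noisy, truth):
    """Least-squares fit of the noisy samples; returns the RMSE against the
    noisy data (measurable in practice) and against the known reference
    (only available with a printed ground-truth object)."""
    coef = np.polynomial.polynomial.polyfit(x, noisy, degree)
    fit = np.polynomial.polynomial.polyval(x, coef)
    rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
    return rmse(fit, noisy), rmse(fit, truth)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
truth = np.sin(2 * np.pi * x)                  # known reference "surface"
noisy = truth + rng.normal(0.0, 0.1, x.size)   # scanned, noisy "point cloud"
```

A degree-1 fit has a low residual *capacity* but a large error against the reference (lack of detail); a flexible fit keeps shrinking the RMSE against the noisy data even once the reference error stops improving, which is exactly why the noisy-data RMSE alone cannot diagnose overfitting.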


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 201
Author(s):  
Michael Bekele Maru ◽  
Donghwan Lee ◽  
Kassahun Demissie Tola ◽  
Seunghee Park

Modeling a structure in the virtual world using three-dimensional (3D) information enhances our understanding of how the structure reacts to any disturbance, while also aiding in its visualization. Generally, 3D point clouds are used for determining structural behavioral changes. Light detection and ranging (LiDAR) is one of the crucial ways by which a 3D point cloud dataset can be generated. Additionally, 3D cameras are commonly used to develop a point cloud containing many points on the external surface of an object. The main objective of this study was to compare the performance of optical sensors, namely a depth camera (DC) and a terrestrial laser scanner (TLS), in estimating structural deflection. We also applied bilateral filtering techniques, which are commonly used in image processing, to the point cloud data to enhance their accuracy and increase the application prospects of these sensors in structural health monitoring. The results from these sensors were validated by comparing them with the outputs from a linear variable differential transformer sensor, which was mounted on the beam during an indoor experiment. The results showed that the datasets obtained from both sensors were acceptable for nominal deflections of 3 mm and above because the error range was less than ±10%. However, the results obtained from the TLS were better than those obtained from the DC.
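The bilateral idea, a spatial weight multiplied by a range weight so that random noise is smoothed while genuine deflection steps survive, can be sketched in 1D along the beam axis. The parameter values are illustrative assumptions, not the paper's:

```python
import numpy as np

def bilateral_filter_1d(x, z, sigma_s=0.05, sigma_r=0.002):
    """Bilateral filter along the beam axis: a spatial Gaussian on position x
    times a range Gaussian on deflection z. Points far away in position OR
    very different in value contribute little, so noise is averaged out
    without blurring a real deflection step."""
    out = np.empty_like(z)
    for i in range(z.size):
        w = np.exp(-((x - x[i]) ** 2) / (2 * sigma_s ** 2)) \
          * np.exp(-((z - z[i]) ** 2) / (2 * sigma_r ** 2))
        out[i] = np.sum(w * z) / np.sum(w)
    return out
```

On a noisy profile with a small step (a localized deflection), the filter reduces the noise on the flat sections while keeping the step magnitude essentially intact.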


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3347 ◽  
Author(s):  
Zhishuang Yang ◽  
Bo Tan ◽  
Huikun Pei ◽  
Wanshou Jiang

The classification of point clouds is a basic task in airborne laser scanning (ALS) point cloud processing. It is quite a challenge when facing complex observed scenes and irregular point distributions. In order to reduce the computational burden of the point-based classification method and improve the classification accuracy, we present a segmentation and multi-scale convolutional neural network-based classification method. Firstly, a three-step region-growing segmentation method was proposed to reduce both under-segmentation and over-segmentation. Then, a feature image generation method was used to transform the 3D neighborhood features of a point into a 2D image. Finally, the feature images were treated as the input of a multi-scale convolutional neural network for training and testing. In order to compare performance with existing approaches, we evaluated our framework using the International Society for Photogrammetry and Remote Sensing Working Groups II/4 (ISPRS WG II/4) 3D labeling benchmark tests. The experiment achieved 84.9% overall accuracy and an average F1 score of 69.2%, a satisfactory performance compared with all participating approaches analyzed.
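The feature-image step, turning a point's 3D neighbourhood into a small 2D raster that a CNN can consume, can be sketched as follows. The grid size, extent, and the max-height channel are illustrative assumptions, not the paper's exact features:

```python
import numpy as np

def feature_image(neighbors, center, size=8, extent=2.0):
    """Project a point's 3D neighbourhood onto a horizontal size x size grid
    centred on the point; each cell keeps the largest height above the
    centre point (cells floor at 0), yielding a 2D image a CNN can consume."""
    img = np.zeros((size, size))
    rel = neighbors - center
    # map relative x/y coordinates in [-extent, extent] to grid indices
    cols = ((rel[:, 0] + extent) / (2 * extent) * size).astype(int)
    rows = ((rel[:, 1] + extent) / (2 * extent) * size).astype(int)
    ok = (cols >= 0) & (cols < size) & (rows >= 0) & (rows < size)
    for r, c, h in zip(rows[ok], cols[ok], rel[ok, 2]):
        img[r, c] = max(img[r, c], h)
    return img
```

Generating such images at several neighbourhood extents gives the multi-scale inputs the network is trained on.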


Author(s):  
M. Franzini ◽  
V. Casella ◽  
P. Marchese ◽  
M. Marini ◽  
G. Della Porta ◽  
...  

Abstract. Recent years have shown a gradual transition from terrestrial to aerial surveying, thanks to the development of UAVs and the sensors designed for them. Many sectors have benefited from this change, among them geology; drones are flexible, cost-efficient, and can support outcrop surveying in many difficult situations, such as inaccessible, steep and high rock faces. The experience acquired in terrestrial surveying, with total stations, GNSS or terrestrial laser scanners (TLS), has not yet been completely transferred to UAV acquisition; hence, quality comparisons are still needed. The present paper is framed in this perspective, aiming to evaluate the quality of the point clouds generated by a UAV in a geological context; the analysis compares the UAV product with its homologue acquired with a TLS system. Exploiting modern semantic classification, based on eigenfeatures and a support vector machine (SVM), the two point clouds were compared in terms of density and mutual distance. The UAV survey proved its usefulness in this situation, with a uniform density distribution over the whole area and a point cloud of quality comparable with the more traditional TLS systems.
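The eigenfeatures driving such an SVM classification are conventionally derived from the eigenvalues of the local covariance matrix. A minimal sketch of the standard definitions (linearity, planarity, sphericity), not the paper's specific feature set:

```python
import numpy as np

def eigenfeatures(points):
    """Linearity, planarity and sphericity from the sorted eigenvalues of
    the neighbourhood covariance matrix: the typical per-point geometric
    features fed to a classifier such as an SVM."""
    cov = np.cov(points.T)
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]  # l1 >= l2 >= l3
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1
```

A flat patch (e.g., a bedding plane) scores high on planarity and near zero on sphericity, a cable or edge scores high on linearity, and volumetric clutter scores high on sphericity.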


Author(s):  
M. Lemmens

Abstract. A knowledge-based system exploits the knowledge that a human expert uses to complete a complex task, through a database containing decision rules and an inference engine. Knowledge-based systems were already proposed for automated image classification in the early nineties. Lack of success faded out the initial interest and enthusiasm, the same fate that struck neural networks at that time; today the latter enjoy a steady revival. This paper aims at demonstrating that a knowledge-based approach to the automated classification of mobile laser scanning point clouds has promising prospects. An initial experiment exploiting only two features, height and reflectance value, resulted in an overall accuracy of 79% on the Paris-rue-Madame point cloud benchmark data set.
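A two-feature rule base of this kind is tiny to express in code. The thresholds and class names below are invented for illustration and are not the paper's rules:

```python
def classify_point(height, reflectance):
    """Toy decision rules over height (m above ground) and reflectance
    (normalised return intensity), in the spirit of a knowledge-based
    system: explicit, human-readable rules instead of learned weights."""
    if height < 0.3:
        return "road" if reflectance > 0.5 else "ground"
    if height < 3.0:
        return "car" if reflectance > 0.4 else "vegetation"
    return "building"
```

The appeal of the approach is that each decision is traceable to a rule an expert can inspect and amend, which is much harder with a trained network.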


Author(s):  
Bernardo Lourenço ◽  
Tiago Madeira ◽  
Paulo Dias ◽  
Vitor M. Ferreira Santos ◽  
Miguel Oliveira

Purpose
2D laser rangefinders (LRFs) are commonly used sensors in the field of robotics, as they provide accurate range measurements with high angular resolution. These sensors can be coupled with mechanical units which, by granting an additional degree of freedom to the movement of the LRF, enable 3D perception of a scene. To be successful, this reconstruction procedure requires evaluating, with high accuracy, the extrinsic transformation between the LRF and the motorized system.

Design/methodology/approach
In this work, a calibration procedure is proposed to evaluate this transformation. The method does not require a predefined marker (commonly used despite its numerous disadvantages), as it uses planar features in the acquired point clouds.

Findings
Qualitative inspections show that the proposed method significantly reduces the artifacts that typically appear in point clouds because of inaccurate calibrations. Furthermore, quantitative results and comparisons with a high-resolution 3D scanner demonstrate that the calibrated point cloud represents the geometries present in the scene with much higher accuracy than the un-calibrated point cloud.

Practical implications
The last key point of this work is the comparison of two laser scanners: the lemonbot (the authors') and a commercial FARO scanner. Despite being almost ten times cheaper, the lemonbot was able to achieve similar results in terms of geometric accuracy.

Originality/value
This work describes a novel calibration technique that is easy to implement and able to achieve accurate results. One of its key features is the use of planes to calibrate the extrinsic transformation.
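The planar features at the heart of the method are typically obtained by a least-squares plane fit. A minimal SVD-based sketch of that building block (the calibration pipeline around it is the paper's contribution and is not reproduced here):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point set (N,3): the plane passes
    through the centroid, and its unit normal is the right singular vector
    associated with the smallest singular value of the centred points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, centroid
```

Planes fitted in scans taken at different actuator angles can then be matched against each other, replacing the dedicated calibration marker.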


2020 ◽  
Author(s):  
Moritz Bruggisser ◽  
Johannes Otepka ◽  
Norbert Pfeifer ◽  
Markus Hollaus

Unmanned aerial vehicle-borne laser scanning (ULS) allows time-efficient acquisition of high-resolution point clouds over regional extents at moderate cost. The quality of ULS point clouds facilitates the 3D modelling of individual tree stems, which opens new possibilities in the context of forest monitoring and management. In our study, we developed and tested an algorithm that allows for (i) the autonomous detection of potential stem locations within the point clouds, (ii) the estimation of the diameter at breast height (DBH) and (iii) the reconstruction of the tree stem. In our experiments on point clouds from a RIEGL miniVUX-1DL and a VUX-1UAV, we could automatically detect 91.0% and 77.6%, respectively, of the stems within our study area. The DBH could be modelled with biases of 3.1 cm and 1.1 cm, respectively, from the two point cloud sets, with respective detection rates of 80.6% and 61.2% of the trees present in the field inventory. The lowest 12 m of the tree stem could be reconstructed with absolute stem diameter differences below 5 cm and 2 cm, respectively, compared with stem diameters from a terrestrial laser scanning point cloud. The accuracy for larger tree stems was generally higher than that for smaller trees. Furthermore, we observed only a small influence of the completeness with which a stem is covered with points, as long as half of the stem circumference was captured. Likewise, the absolute point count did not impact the accuracy but, in contrast, was critical to the completeness with which a scene could be reconstructed. The precision of the laser scanner, on the other hand, was a key factor for the accuracy of the stem diameter estimation.
The findings of this study are highly relevant for flight planning and sensor selection in future ULS acquisition missions in the context of forest inventories.
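DBH estimation of this kind usually reduces to fitting a circle to a thin horizontal slice of stem points. A minimal algebraic (Kåsa) circle fit as a sketch of that step, not the authors' reconstruction algorithm:

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) circle fit to a horizontal stem slice (N,2).
    From (x-cx)^2 + (y-cy)^2 = r^2 one gets the linear system
    x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2),
    solved by least squares; DBH is then 2*r."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + cx ** 2 + cy ** 2)
    return (cx, cy), radius
```

Consistent with the abstract's observation, the fit works even when only half the stem circumference is captured, since a half-arc still constrains the circle.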


2018 ◽  
Vol 10 (8) ◽  
pp. 1192 ◽  
Author(s):  
Chen-Chieh Feng ◽  
Zhou Guo

Automating the classification of point clouds capturing urban scenes is critical for supporting applications that demand three-dimensional (3D) models. Achieving this goal, however, is met with challenges because of the varying densities of the point clouds and the complexity of the 3D data. In order to increase the level of automation in point cloud classification, this study proposes a segment-based parameter learning method that incorporates a two-dimensional (2D) land cover map, in which a strategy of fusing the 2D land cover map and the 3D points is first adopted to create labelled samples, and a formalized procedure is then implemented to automatically learn the following parameters of point cloud classification: the optimal scale of the neighborhood for segmentation, the optimal feature set, and the training classifier. It comprises four main steps, namely: (1) point cloud segmentation; (2) sample selection; (3) optimal feature set selection; and (4) point cloud classification. Three datasets containing point cloud data were used in this study to validate the efficiency of the proposed method. The first two datasets cover two areas of the National University of Singapore (NUS) campus, while the third dataset is a widely used benchmark point cloud dataset of Oakland, Pennsylvania. The classification parameters were learned from the first dataset, consisting of terrestrial laser-scanning data and a 2D land cover map, and were subsequently used to classify both of the NUS datasets. The evaluation of the classification results showed overall accuracies of 94.07% and 91.13%, respectively, indicating that the transfer of the knowledge learned from one dataset to another was satisfactory. The classification of the Oakland dataset achieved an overall accuracy of 97.08%, which further verified the transferability of the proposed approach.
An experiment of the point-based classification was also conducted on the first dataset and the result was compared to that of the segment-based classification. The evaluation revealed that the overall accuracy of the segment-based classification is indeed higher than that of the point-based classification, demonstrating the advantage of the segment-based approaches.
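The basic mechanism that gives segment-based classification its edge over per-point decisions is easy to sketch: points vote within their segment, so isolated per-point errors are outvoted by their neighbours. This is an illustration only; the paper's pipeline additionally learns the segmentation scale, feature set and classifier:

```python
import numpy as np

def segment_vote(point_labels, segment_ids):
    """Per-segment majority vote: every point inherits the most frequent
    point-wise label within its segment, smoothing away isolated
    misclassifications inside otherwise homogeneous segments."""
    out = np.empty_like(point_labels)
    for seg in np.unique(segment_ids):
        mask = segment_ids == seg
        vals, counts = np.unique(point_labels[mask], return_counts=True)
        out[mask] = vals[np.argmax(counts)]
    return out
```

A segment containing one mislabelled point among several correctly labelled ones comes out uniformly correct after the vote.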

