Quantitative Comparison of UAS-Borne LiDAR Systems for High-Resolution Forested Wetland Mapping

Sensors, 2020, Vol 20 (16), pp. 4453
Author(s):  
Narcisa Gabriela Pricope ◽  
Joanne Nancie Halls ◽  
Kerry Lynn Mapes ◽  
Joseph Britton Baxley ◽  
James JyunYueh Wu

Wetlands provide critical ecosystem services across a range of environmental gradients and are at heightened risk of degradation from anthropogenic pressures and continued development, especially in coastal regions. There is a growing need among a variety of stakeholder groups, including wetlands loss mitigation programs, for spatially and temporally high-resolution habitat identification and precise delineation of wetlands. Traditional wetland delineations are costly, time-intensive, and can physically degrade the systems being surveyed, while aerial surveys are comparatively fast and unobtrusive. To assess the efficacy and feasibility of using two variable-cost LiDAR sensors mounted on a commercial hexacopter unmanned aerial system (UAS) for deriving high-resolution topography, we conducted nearly concomitant flights over a site in the Atlantic Coastal Plain containing a mix of palustrine forested wetlands, upland coniferous forest, upland grass, and bare ground/dirt roads. We compared point clouds and derived topographic metrics acquired with the Quanergy M8 and Velodyne HDL-32E LiDAR sensors against airborne LiDAR. The less expensive, lighter-payload sensor outperformed the more expensive one in deriving high-resolution, high-accuracy ground elevation measurements under a range of canopy cover densities, both for point cloud density and for digital terrain metrics computed globally and locally using variable-size tessellations. Mean point cloud density did not differ significantly between wetland and non-wetland areas, but the two sensors differed significantly by wetland/non-wetland type. Ultra-high-resolution LiDAR-derived topography models can fill evolving wetlands mapping needs and increase the accuracy and efficiency of detecting and predicting sensitive wetland ecosystems, especially heavily forested coastal wetland systems.
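The per-cell density comparison described above can be sketched minimally: bin (x, y) returns into a square tessellation and report points per unit area. A fixed cell size is assumed here; the study also uses variable-size tessellations.

```python
# Minimal sketch: per-cell point density (points/m^2) over a square grid.
# The fixed cell size is an assumption for illustration.
from collections import defaultdict

def cell_density(points, cell_size):
    """Return points per unit area for each occupied grid cell."""
    counts = defaultdict(int)
    for x, y in points:
        counts[(int(x // cell_size), int(y // cell_size))] += 1
    area = cell_size * cell_size
    return {cell: n / area for cell, n in counts.items()}

pts = [(0.5, 0.5), (1.5, 0.5), (0.2, 0.8), (9.5, 9.5)]
dens = cell_density(pts, cell_size=2.0)
# cell (0, 0) holds three points over a 4 m^2 cell -> 0.75 pts/m^2
```

Comparing wetland against non-wetland areas then amounts to aggregating these per-cell densities by class.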

2021, Vol 13 (8), pp. 1442
Author(s):  
Kaisen Ma ◽  
Yujiu Xiong ◽  
Fugen Jiang ◽  
Song Chen ◽  
Hua Sun

Detecting and segmenting individual trees in forest ecosystems with dense, overlapping crowns often results in bias due to the limitations of the commonly used canopy height model (CHM). To address these limitations, this paper proposes a new method to segment individual trees and extract tree structural parameters. The method involves the following key steps: (1) unmanned aerial vehicle (UAV)-scanned, high-density laser point clouds were classified, and a vegetation point cloud density model (VPCDM) was established by analyzing the spatial density distribution of the classified vegetation point cloud in plane projection; and (2) a local maximum algorithm with an optimal window size was used to detect tree seed points and extract tree heights, and an improved watershed algorithm was used to extract tree crowns. The proposed method was tested at three sites with different canopy coverage rates in a pine-dominated forest in northern China. The results showed that (1) the kappa coefficient between the proposed VPCDM and the commonly used CHM was 0.79, indicating that the performance of the VPCDM is comparable to that of the CHM; (2) the local maximum algorithm with the optimal window size could be used to segment individual trees and obtain optimal single-tree segmentation accuracy and detection rates; and (3) compared with the original watershed algorithm, the improved watershed algorithm significantly increased the accuracy of canopy area extraction. In conclusion, the proposed VPCDM may provide an innovative segmentation model for light detection and ranging (LiDAR)-based high-density point clouds and enhance the accuracy of parameter extraction.
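Step (2) above, local-maximum seed detection, can be sketched on a toy raster. The window size and height threshold below are illustrative assumptions; the paper searches for an optimal window.

```python
# Hedged sketch: local-maximum tree-top detection on a height/density raster.
# A cell is a seed if it equals the maximum within its window and exceeds a
# minimum height (window=3 and min_height=2.0 are assumed values).
import numpy as np
from scipy.ndimage import maximum_filter

def detect_seeds(raster, window=3, min_height=2.0):
    """Return (row, col) cells that are local maxima above min_height."""
    local_max = maximum_filter(raster, size=window, mode="constant")
    peaks = (raster == local_max) & (raster >= min_height)
    return list(zip(*np.nonzero(peaks)))

chm = np.zeros((7, 7))
chm[2, 2] = 8.0   # one tree apex
chm[5, 5] = 6.0   # a second, shorter apex
seeds = detect_seeds(chm, window=3, min_height=2.0)
```

The detected seeds would then initialize the (improved) watershed segmentation of crowns.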


2019, Vol 11 (18), pp. 2154
Author(s):  
Ján Šašak ◽  
Michal Gallay ◽  
Ján Kaňuk ◽  
Jaroslav Hofierka ◽  
Jozef Minár

Airborne and terrestrial laser scanning and close-range photogrammetry are frequently used for very high-resolution mapping of the land surface. These techniques require a good mapping strategy to provide full visibility of all areas; otherwise the resulting data will contain areas with no data (data shadows). Deglaciated rugged alpine terrain in particular, with abundant large boulders, vertical rock faces, and polished roche moutonnée surfaces, complicated by poor accessibility for terrestrial mapping, remains a challenge. In this paper, we present a novel methodological approach based on the combined use of terrestrial laser scanning (TLS) and close-range photogrammetry from an unmanned aerial vehicle (UAV) for generating a high-resolution point cloud and digital elevation model (DEM) of complex alpine terrain. The approach is demonstrated on a small study area in the upper part of a deglaciated valley in the Tatry Mountains, Slovakia. The more accurate TLS point cloud was supplemented by the UAV point cloud in areas with insufficient TLS data coverage. The accuracy of the iterative closest point adjustment of the UAV and TLS point clouds was on the order of several centimeters, while the standard deviation of the mutual orientation of the TLS scans was on the order of millimeters. The generated high-resolution DEM was compared to the SRTM DEM, TanDEM-X, and national DMR3 DEM products, confirming excellent applicability across a wide range of geomorphological applications.
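The core of the iterative-closest-point adjustment used to co-register the UAV and TLS clouds is the rigid best-fit between matched point pairs. A minimal sketch of that inner step (the SVD/Kabsch solution) on synthetic data, not the authors' full pipeline:

```python
# Minimal sketch of the rigid alignment inside ICP: given matched pairs,
# recover rotation R and translation t by the SVD (Kabsch) step. A full
# ICP iterates this with nearest-neighbour matching.
import numpy as np

def kabsch(source, target):
    """Best-fit R, t so that R @ source_i + t ~ target_i (least squares)."""
    cs, ct = source.mean(axis=0), target.mean(axis=0)
    H = (source - cs).T @ (target - ct)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ct - R @ cs
    return R, t

rng = np.random.default_rng(0)
src = rng.random((20, 3))                        # synthetic "UAV" points
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.1, -0.2, 0.05])
R, t = kabsch(src, dst)                          # recovers R_true and the shift
```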


2019, Vol 8 (4), pp. 178
Author(s):  
Richard Boerner ◽  
Yusheng Xu ◽  
Ramona Baran ◽  
Frank Steinbacher ◽  
Ludwig Hoegner ◽  
...  

This article proposes a method for the registration of two point clouds with different point densities and noise, recorded by airborne sensors in rural areas. In particular, multi-sensor point clouds with different point densities are considered. The proposed method is marker-less and uses segmented ground areas for registration. It therefore offers the possibility of fusing point clouds from different sensors in rural areas at fine-registration accuracy; in general, such registration is solved with extensive use of control points. The source point cloud is used to calculate a DEM of the ground, which in turn is used to calculate point-to-raster distances for all points of the target point cloud. Furthermore, each cell of the raster DEM is assigned a height variance, referred to below as the reconstruction accuracy, calculated during gridding. Outlier removal based on a dynamic distance threshold provides robustness against noise and small geometry variations. The transformation parameters are calculated by an iterative least-squares optimization of the distances, weighted by the reconstruction accuracies of the grid. The evaluation considers two flight campaigns over the Mangfall area in Bavaria, Germany, taken with different airborne LiDAR sensors of different point densities. The accuracy of the proposed approach is evaluated on a whole flight strip of approximately eight square kilometers as well as on selected scenes in closer detail. For all scenes, it achieved an accuracy of the rotation parameters below one-tenth of a degree and an accuracy of the translation parameters below both the point spacing and the chosen cell size of the raster. Furthermore, the registration of airborne LiDAR and photogrammetric point clouds from UAV imagery is shown to work with similar results. The evaluation also shows the robustness of the approach in scenes where classical iterative closest point (ICP) registration fails.
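The point-to-raster residuals and the dynamic outlier threshold can be sketched as follows; the mean-plus-k-sigma rejection rule and the flat toy DEM are assumptions, not the paper's exact formulation.

```python
# Sketch: vertical point-to-DEM distances for a target cloud, with a
# statistics-based ("dynamic") outlier rejection. The mean + k*std rule
# is an assumed stand-in for the paper's dynamic threshold.
import numpy as np

def dem_residuals(points, dem, cell_size, k=1.5):
    """Return residuals and a keep-mask rejecting |d - mean| > k * std."""
    cols = (points[:, 0] // cell_size).astype(int)
    rows = (points[:, 1] // cell_size).astype(int)
    d = points[:, 2] - dem[rows, cols]
    keep = np.abs(d - d.mean()) <= k * d.std()
    return d, keep

dem = np.zeros((4, 4))              # toy flat ground at z = 0
pts = np.array([[0.5, 0.5, 0.02],
                [1.5, 1.5, -0.03],
                [2.5, 2.5, 0.01],
                [3.5, 3.5, 5.00]])  # gross outlier (e.g. vegetation return)
d, keep = dem_residuals(pts, dem, cell_size=1.0)
```

The surviving residuals, weighted by the per-cell reconstruction accuracy, would feed the iterative least-squares estimation of the transformation.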


2019, Vol 12 (1), pp. 4
Author(s):  
Tiangang Yin ◽  
Jianbo Qi ◽  
Bruce D. Cook ◽  
Douglas C. Morton ◽  
Shanshan Wei ◽  
...  

Airborne lidar point clouds of vegetation capture the 3-D distribution of its scattering elements, including leaves, branches, and ground features. Assessing the contribution of vegetation to lidar point clouds requires an understanding of the physical interactions between the emitted laser pulses and their targets. Most current methods to estimate the gap probability (P_gap) or leaf area index (LAI) from small-footprint airborne laser scanning (ALS) point clouds rely on either point-number-based (PNB) or intensity-based (IB) approaches, with additional empirical correlations to field measurements. However, site-specific parameterizations can limit the application of such methods to other landscapes. Evaluating the universality of these methods requires a physically based radiative transfer model that accounts for various lidar instrument specifications and environmental conditions. We conducted an extensive study comparing these approaches for various 3-D forest scenes using a point-cloud simulator developed for the latest version of the discrete anisotropic radiative transfer (DART) model. We investigated a range of variables for lidar point intensity, including radiometric quantities derived from Gaussian decomposition (GD), such as the peak amplitude, standard deviation, integral of the Gaussian profile, and reflectance. The results showed that the PNB methods fail to capture the exact P_gap as footprint size increases. By contrast, we verified that physical methods using lidar point intensity defined by either the distance-weighted integral of Gaussian profiles or reflectance can estimate P_gap and LAI with higher accuracy and reliability, and that certain additional empirical correlation coefficients can be removed. Routine use of small-footprint point-cloud radiometric measures to estimate P_gap and LAI thus marks a potential departure from previous empirical studies, although it depends on additional parameters from lidar instrument vendors.
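The two estimator families being compared reduce, in their simplest form, to ratios of ground returns (PNB) or ground intensity (IB), with the usual Beer-Lambert inversion to LAI. A minimal sketch, with the extinction coefficient k = 0.5 assumed for illustration:

```python
# Sketch of the two P_gap estimator families and the Beer-Lambert LAI
# inversion LAI = -ln(P_gap) / k. The value k = 0.5 is an assumption.
import math

def p_gap_pnb(n_ground, n_total):
    """Point-number-based estimate: fraction of returns reaching ground."""
    return n_ground / n_total

def p_gap_ib(i_ground, i_total):
    """Intensity-based estimate: fraction of return energy from ground."""
    return i_ground / i_total

def lai_from_pgap(p_gap, k=0.5):
    return -math.log(p_gap) / k

p = p_gap_pnb(25, 100)     # 25 of 100 returns reached the ground
lai = lai_from_pgap(p)     # -> about 2.77
```

The study's point is that the simple count ratio degrades as footprint size grows, while intensity-based (energy-weighted) ratios stay physically consistent.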


Author(s):  
W. Barragán ◽  
A. Campos ◽  
G. Sanchez

The objective of this research is the automatic generation of buildings in areas of interest. The research was carried out using high-resolution vertical aerial photographs and the LiDAR point cloud through radiometric and geometric digital processing. The methodology uses known building heights, various segmentation algorithms, and spectral band combinations. The overall effectiveness of the algorithm is 97.2% on the test data.
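One ingredient of such a pipeline, using known building heights against the LiDAR-derived surface, can be sketched as a normalized-surface-model mask; the 2.5 m threshold is an assumed illustrative value.

```python
# Hedged sketch: building-candidate mask from LiDAR rasters, where cells of
# the normalized surface model (DSM - DTM) above a known minimum building
# height become candidates. The threshold is an assumption.
import numpy as np

def building_mask(dsm, dtm, min_height=2.5):
    ndsm = dsm - dtm          # height above ground
    return ndsm >= min_height

dsm = np.array([[10.0, 10.0], [13.5, 10.2]])
dtm = np.array([[10.0, 10.0], [10.0, 10.0]])
mask = building_mask(dsm, dtm)   # only the 3.5 m cell qualifies
```

Candidates from the height mask would then be refined with the spectral-band segmentation described above.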


Author(s):  
Andreas Kuhn ◽  
Hai Huang ◽  
Martin Drauschke ◽  
Helmut Mayer

High-resolution consumer cameras on Unmanned Aerial Vehicles (UAVs) allow for cheap acquisition of highly detailed images, e.g., of urban regions. Via image registration by means of Structure from Motion (SfM) and Multi-View Stereo (MVS), huge numbers of 3D points can be generated automatically with a relative accuracy in the centimeter range. Applications such as semantic classification need accurate 3D point clouds but do not benefit from an extremely high resolution/density. In this paper, we therefore propose a fast fusion of high-resolution 3D point clouds based on occupancy grids, the result of which is used for semantic classification. In contrast to state-of-the-art classification methods, we accept a certain percentage of outliers, arguing that they can be accounted for in the classification process if a per-point belief is determined during fusion. To this end, we employ an octree-based fusion which allows for the derivation of outlier probabilities. The probabilities give a belief for every 3D point, which is essential for the semantic classification to consider measurement noise. For an example point cloud with half a billion 3D points (cf. Figure 1), we show that our method can reduce runtime as well as improve classification accuracy, and offers high scalability for large datasets.
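The fusion-with-belief idea can be sketched with a flat voxel grid standing in for the octree: points are binned, each occupied voxel yields one fused point, and the belief comes from the voxel's support count. The max-count normalization is an assumed stand-in for the paper's probabilistic derivation.

```python
# Simplified, grid-based stand-in for the octree fusion: bin points into
# voxels, fuse each voxel to its mean point, and derive a per-point belief
# from the voxel's support count (normalization scheme assumed).
import numpy as np
from collections import defaultdict

def fuse_with_belief(points, voxel):
    bins = defaultdict(list)
    for p in points:
        key = tuple(np.floor(np.asarray(p) / voxel).astype(int))
        bins[key].append(p)
    max_n = max(len(v) for v in bins.values())
    # Each entry: (fused point, belief in (0, 1]); isolated points get
    # low belief and can be treated as probable outliers downstream.
    return [(np.mean(v, axis=0), len(v) / max_n) for v in bins.values()]

pts = [(0.1, 0.1, 0.1), (0.2, 0.1, 0.1), (0.15, 0.2, 0.1), (5.0, 5.0, 5.0)]
fused = fuse_with_belief(pts, voxel=1.0)   # two voxels survive fusion
```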


Drones, 2021, Vol 5 (4), pp. 104
Author(s):  
Zaide Duran ◽  
Kubra Ozcan ◽  
Muhammed Enes Atik

With the development of photogrammetry technologies, point clouds have found a wide range of uses in academic and commercial areas, making it essential to extract information from them. In particular, artificial intelligence applications have been used to extract information from point clouds of complex structures, and point cloud classification is one of the leading areas in which these applications are used. In this study, point clouds of the same region obtained by aerial photogrammetry and by Light Detection and Ranging (LiDAR) are classified using machine learning. For this purpose, nine popular machine learning methods were used. Geometric features computed from the point clouds form the feature spaces for classification; for the photogrammetric point cloud, color information is added as well. For the LiDAR point cloud, the highest overall accuracy, 0.96, was obtained with the Multilayer Perceptron (MLP) method, and the lowest, 0.50, with the AdaBoost method. For the photogrammetric point cloud, the highest overall accuracy was again achieved with the MLP method (0.90), and the lowest with the Gaussian Naive Bayes (GNB) method, at 0.25.
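The geometric features such classifiers typically consume are eigenvalue ratios of a point's local neighbourhood covariance. A sketch on one hand-made planar neighbourhood (the specific feature set used in the study may differ):

```python
# Sketch of common covariance-based geometric features: linearity,
# planarity, and sphericity from the sorted eigenvalues of the local
# neighbourhood's 3x3 covariance matrix.
import numpy as np

def covariance_features(neighbourhood):
    cov = np.cov(neighbourhood.T)
    l1, l2, l3 = sorted(np.linalg.eigvalsh(cov), reverse=True)
    return {"linearity": (l1 - l2) / l1,
            "planarity": (l2 - l3) / l1,
            "sphericity": l3 / l1}

# Points spread in x and y but flat in z -> planarity should dominate.
rng = np.random.default_rng(1)
plane = np.c_[rng.random(50), rng.random(50), np.zeros(50)]
feats = covariance_features(plane)
```

Feature vectors like these (plus RGB for the photogrammetric cloud) would then be fed to MLP, AdaBoost, GNB, and the other classifiers compared in the study.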


Author(s):  
F. Dadras Javan ◽  
M. Savadkouhi

Abstract. In the last few years, Unmanned Aerial Vehicles (UAVs) have frequently been used to acquire high-resolution photogrammetric images and, through a photogrammetric procedure, to produce Digital Surface Models (DSMs) and orthophotos for topography and surface-processing applications. Thermal imaging sensors are mostly used for interpretation and monitoring purposes because of their lower geometric resolution. Nevertheless, thermal mapping is becoming more important in civil applications, as thermal sensors can operate in conditions where visible sensors cannot, such as foggy weather and at night. The low geometric quality and resolution of thermal images, however, remain the main drawback for 3D thermal modelling. This study offers a solution to this problem by generating a thermal 3D model with higher spatial resolution based on the integration of thermal and visible point clouds. This integration yields a more accurate thermal point cloud and a denser, higher-resolution DEM appropriate for 3D thermal modelling. The main steps of this study are: generating thermal and RGB point clouds separately; registering them at two levels, coarse and fine; and finally adding thermal information to the high-resolution RGB point cloud by interpolation. Experimental results are presented as a mesh with more faces (by a factor of 23), which leads to a higher-resolution textured mesh with thermal information.
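The final step, transferring thermal values to the denser RGB cloud, can be sketched with the simplest interpolation choice, nearest neighbour; the paper's exact interpolation scheme may differ.

```python
# Sketch: after coarse/fine registration, transfer per-point temperatures
# from the sparse thermal cloud to the dense RGB cloud. Nearest-neighbour
# lookup is an assumed, simplest-case interpolation.
import numpy as np
from scipy.spatial import cKDTree

def transfer_thermal(rgb_xyz, thermal_xyz, thermal_vals):
    tree = cKDTree(thermal_xyz)
    _, idx = tree.query(rgb_xyz)       # nearest thermal point per RGB point
    return thermal_vals[idx]

thermal_xyz = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
thermal_vals = np.array([21.5, 35.0])  # degrees Celsius, toy values
rgb_xyz = np.array([[0.2, 0.1, 0.0], [9.8, 0.1, 0.0], [0.1, -0.1, 0.0]])
temps = transfer_thermal(rgb_xyz, thermal_xyz, thermal_vals)
```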


Author(s):  
T. Shinohara ◽  
H. Xiu ◽  
M. Matsuoka

Abstract. This study introduces a novel image-to-point-cloud translation method based on a conditional generative adversarial network that creates a large-scale 3D point cloud: it can generate supervised point clouds, as observed by airborne LiDAR, from aerial images. The network is composed of an encoder that produces latent features of the input images, a generator that translates latent features into fake point clouds, and a discriminator that classifies point clouds as real or fake. The encoder is a pre-trained ResNet; to overcome the difficulty of generating 3D point clouds of an outdoor scene, we use a FoldingNet with features from the ResNet. After a fixed number of iterations, our generator can produce fake point clouds that correspond to the input image. Experimental results show that our network can learn and generate point clouds using data from the 2018 IEEE GRSS Data Fusion Contest.
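The FoldingNet-style generator can be sketched at the shape level: a fixed 2-D grid is concatenated with the image's latent code and "folded" into 3-D points by a small MLP. The weights here are random placeholders; in the actual method they are trained adversarially against the discriminator.

```python
# Shape-level sketch of a FoldingNet-style folding operation: each 2-D grid
# point, conditioned on the image latent code, is mapped to a 3-D point by
# an MLP. Random weights stand in for the trained generator.
import numpy as np

def fold(latent, grid, w1, w2):
    """Map (n, 2) grid points + latent code to an (n, 3) point cloud."""
    n = grid.shape[0]
    x = np.hstack([grid, np.repeat(latent[None, :], n, axis=0)])
    h = np.maximum(x @ w1, 0.0)          # ReLU hidden layer
    return h @ w2

rng = np.random.default_rng(0)
latent = rng.standard_normal(16)         # stand-in for encoder (ResNet) output
u, v = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
grid = np.c_[u.ravel(), v.ravel()]       # 64 fixed grid points
w1 = rng.standard_normal((18, 32)) * 0.1
w2 = rng.standard_normal((32, 3)) * 0.1
cloud = fold(latent, grid, w1, w2)       # one fake point cloud per image code
```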


Author(s):  
Shenman Zhang ◽  
Jie Shan ◽  
Zhichao Zhang ◽  
Jixing Yan ◽  
Yaolin Hou

A complete building model reconstruction needs data collected from both the air and the ground. The former often has sparse coverage on building façades, while the latter usually cannot observe building rooftops. To address the missing-data issues of building reconstruction from a single data source, we describe an approach for complete building reconstruction that integrates airborne LiDAR data and ground smartphone imagery. First, by taking advantage of the GPS and digital compass information embedded in the image metadata of smartphones, we find the airborne LiDAR point clouds corresponding to the buildings in the images. Next, Structure-from-Motion and dense multi-view stereo algorithms are applied to generate a building point cloud from multiple ground images. The third step extracts building outlines from the LiDAR point cloud and from the ground-image point cloud, respectively. Automated correspondence between these two sets of building outlines allows a precise registration and combination of the two point clouds, ultimately resulting in a complete, full-resolution building model. The developed approach overcomes the problem of sparse points on building façades in airborne LiDAR and the lack of rooftops in ground images, so that the merits of both datasets are exploited.
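Once outline correspondences are established, the registration reduces to fitting a similarity transform; scale matters because an SfM cloud from smartphone images has arbitrary scale. A sketch of that fit (Umeyama-style, in 2-D on outline corners) under the assumption of already-matched corners:

```python
# Sketch: 2-D similarity transform (scale s, rotation R, translation t)
# from matched outline corners, so that dst ~ s * R @ src + t.
# Assumes correspondences are already established.
import numpy as np

def fit_similarity(src, dst):
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - cs, dst - cd
    U, S, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # reflection guard
    D = np.diag([1.0, d])
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum()
    t = cd - s * R @ cs
    return s, R, t

src = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
dst = 2.0 * src + np.array([3.0, 4.0])           # scaled, shifted outline
s, R, t = fit_similarity(src, dst)
```

Applying (s, R, t) to the whole ground-image cloud brings it into the LiDAR frame, after which the two clouds can be merged into one model.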

