Examining Changes in Stem Taper and Volume Growth with Two-Date 3D Point Clouds

Forests ◽  
2019 ◽  
Vol 10 (5) ◽  
pp. 382 ◽  
Author(s):  
Ville Luoma ◽  
Ninni Saarinen ◽  
Ville Kankare ◽  
Topi Tanhuanpää ◽  
Harri Kaartinen ◽  
...  

Exact knowledge of tree growth is valuable for decision makers, for example when planning sustainable forest management or optimizing the use of timber. Terrestrial laser scanning (TLS) can be used for measuring tree and forest attributes in very high detail. This study aims to characterize changes in individual tree attributes (e.g., stem volume growth and taper) over a nine-year study period in boreal forest conditions. TLS-based three-dimensional (3D) point cloud data were used for identifying and quantifying these changes. The results showed that observing changes in stem volume was possible from TLS point cloud data collected at two different time points. The average volume growth of the sample trees was 0.226 m³ during the study period, and the mean relative change in stem volume was 65.0%. In addition, the results of a paired Student's t-test gave strong support (p-value 0.0001) that the method was able to detect tree growth within the nine-year period 2008–2017. The findings of this study allow the further development of enhanced methods for TLS-based single-tree and forest growth modeling and estimation, which can improve the accuracy of forest inventories and offer better tools for future decision-making.
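The paired-test logic used for growth detection can be sketched in a few lines; the stem volumes below are hypothetical stand-ins, not the study's measurements:

```python
import math

def paired_t_statistic(before, after):
    """t statistic and degrees of freedom for a paired Student's t-test."""
    diffs = [b - a for a, b in zip(before, after)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n), n - 1

# Hypothetical stem volumes (m^3) for the same five trees at the two dates.
v_first = [0.30, 0.42, 0.25, 0.51, 0.38]
v_second = [0.52, 0.66, 0.47, 0.78, 0.60]
t, df = paired_t_statistic(v_first, v_second)  # large positive t -> growth
```

Pairing matters here: the test is applied to per-tree volume differences between the two dates, which removes between-tree variability from the comparison.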

Author(s):  
J. Wolf ◽  
S. Discher ◽  
L. Masopust ◽  
S. Schulz ◽  
R. Richter ◽  
...  

<p><strong>Abstract.</strong> Ground-penetrating 2D radar scans are captured in road environments for examination of pavement condition and below-ground variations such as lowerings and developing pot-holes. 3D point clouds captured above ground provide a precise digital representation of the road’s surface and the surrounding environment. If both data sources are captured for the same area, a combined visualization is a valuable tool for infrastructure maintenance tasks. This paper presents visualization techniques developed for the combined visual exploration of the data captured in road environments. The main challenges are the positioning of the ground radar data within the 3D environment and the reduction of occlusion for individual data sets. By projecting the measured ground radar data onto the precise trajectory of the scan, it can be displayed within the context of the 3D point cloud representation of the road environment. We show that customizable overlay, filtering, and cropping techniques enable insightful data exploration. A 3D renderer combines both data sources. To enable an inspection of areas of interest, ground radar data can be elevated above ground level for better visibility. An interactive lens approach makes it possible to visualize data sources that are currently occluded by others. The visualization techniques prove to be a valuable tool for ground layer anomaly inspection and were evaluated on a real-world data set. The combination of 2D ground radar scans with 3D point cloud data improves data interpretation by giving context information (e.g., about manholes in the street) that can be directly accessed during evaluation.</p>
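The projection of a radar sample onto the scan trajectory can be illustrated with a minimal sketch; the polyline trajectory format, the function name, and the `lift` parameter for elevating data above ground level are our assumptions, not the paper's implementation:

```python
import math

def locate_sample(trajectory, dist, depth, lift=0.0):
    """Place a radar sample (distance along trajectory, depth below ground)
    into 3D world coordinates; `lift` raises it above ground for inspection."""
    travelled = 0.0
    for p0, p1 in zip(trajectory, trajectory[1:]):
        seg = math.dist(p0, p1)
        if travelled + seg >= dist:
            t = (dist - travelled) / seg
            x, y, z = (a + t * (b - a) for a, b in zip(p0, p1))
            return (x, y, z - depth + lift)
        travelled += seg
    raise ValueError("distance beyond trajectory length")

# A straight 10 m trajectory at 10 m elevation; a sample 4 m along, 1 m deep.
pos = locate_sample([(0.0, 0.0, 10.0), (10.0, 0.0, 10.0)], dist=4.0, depth=1.0)
# pos == (4.0, 0.0, 9.0)
```

Setting `lift` to a positive value reproduces the elevation idea described above: the radar slice is rendered above the road surface so it is not occluded by the surface point cloud.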


Author(s):  
S. D. Jawak ◽  
S. N. Panditrao ◽  
A. J. Luis

This work uses a canopy height model (CHM)-based workflow for individual tree crown delineation and a 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment, and evaluates its accuracy using very high-resolution panchromatic (PAN) and 8-band multispectral WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, were used to detect tree and building features by classifying point elevation values. The workflow includes resampling of the LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of a digital surface model (DSM), generation of a bare-earth digital elevation model (DEM), and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients (RPC) model was executed in the ERDAS Leica Photogrammetry Suite (LPS) using a supplementary *.RPB file, and ortho-rectification was carried out in ERDAS LPS by incorporating well-distributed GCPs. The root mean square error (RMSE) for the WV-2 imagery was estimated to be 0.25 m using more than 10 well-distributed GCPs. In the second stage, we generated the bare-earth DEM from the LiDAR point cloud data. In most cases, a bare-earth DEM does not represent true ground elevation, so the model was edited to obtain the most accurate DEM/DTM possible, and the LiDAR point cloud data were normalized based on the DTM to reduce the effect of undulating terrain. We normalized the vegetation point cloud values by subtracting the ground points (DEM) from the LiDAR point cloud. A normalized digital surface model (nDSM), or CHM, was calculated from the LiDAR data by subtracting the DEM from the DSM.
The CHM, or normalized DSM, represents the absolute height of all above-ground urban features relative to the ground: after normalization, the elevation value of a point indicates its height above the ground. The above-ground points were used for tree feature and building footprint extraction. For individual tree extraction, first- and last-return point clouds were used along with the bare-earth and building footprint models discussed above. In this study, scene-dependent extraction criteria were employed to improve the 3D feature extraction process. The LiDAR-based refining/filtering techniques used for bare-earth layer extraction were crucial for improving the subsequent extraction of 3D features (trees and buildings). The PAN-sharpened WV-2 image (0.5 m spatial resolution) was used to assess the accuracy of the LiDAR-based 3D feature extraction. Our analysis yielded an accuracy of 98% for tree feature extraction and 96% for building feature extraction from the LiDAR data. The CHM method extracted a total of 15,143 tree features, of which 14,841 were visually confirmed on the PAN-sharpened WV-2 image; these included both shadowed (13,830) and non-shadowed (1011) trees. The CHM method overestimated a total of 302 tree features that were not observed on the WV-2 image; one potential source of this overestimation was tree features adjacent to buildings. For building feature extraction, the algorithm extracted a total of 6117 building features that were confirmed on the WV-2 image, even capturing buildings under trees (605) and buildings under shadow (112). Overestimation of tree and building features was observed to be a limiting factor in the 3D feature extraction process, owing to incorrect filtering of the point cloud in these areas.
One potential source of overestimation was man-made structures, including skyscrapers and bridges, that were confounded with and extracted as buildings. This can be attributed to low point density at building edges and on flat roofs, and to occlusions, because of which LiDAR cannot match the planimetric accuracy of photogrammetric techniques (in segmentation), as well as to the lack of optimal use of textural and contextual information (especially at walls away from the roof) in the automatic extraction algorithm. In addition, there were no separate classes for bridges or features lying within water, and multiple water height levels were not considered. Based on these inferences, we conclude that LiDAR-based 3D feature extraction supplemented by high-resolution satellite data is a promising approach for understanding and characterizing urban environments.
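The normalization step described above (CHM/nDSM = DSM minus DEM) reduces to a cell-wise raster subtraction; a minimal sketch on hypothetical grid values:

```python
def canopy_height_model(dsm, dem):
    """Subtract ground elevation (DEM) from surface elevation (DSM) cellwise,
    yielding heights above ground for all above-ground features."""
    return [[s - g for s, g in zip(srow, grow)]
            for srow, grow in zip(dsm, dem)]

# Hypothetical 2x2 rasters: surface elevations and bare-earth elevations (m).
dsm = [[12.0, 15.5], [11.0, 30.0]]
dem = [[10.0, 10.5], [11.0, 10.0]]
chm = canopy_height_model(dsm, dem)  # e.g. 20 m could be a building roof
```

After this step, a cell value of 0 is bare ground, small values are low vegetation or street furniture, and tall values are tree crowns or buildings, which is what makes the CHM usable for both tree and building footprint extraction.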


2021 ◽  
Vol 10 (11) ◽  
pp. 762
Author(s):  
Kaisa Jaalama ◽  
Heikki Kauhanen ◽  
Aino Keitaanniemi ◽  
Toni Rantanen ◽  
Juho-Pekka Virtanen ◽  
...  

The importance of ensuring the adequacy of urban ecosystem services and green infrastructure has been widely highlighted in multidisciplinary research. Meanwhile, the consolidation of cities has been a dominant trend in urban development and has led to the development and implementation of the green factor tool in cities such as Berlin, Melbourne, and Helsinki. In this study, elements of the green factor tool were monitored with laser-scanned and photogrammetrically derived point cloud datasets encompassing a yard in Espoo, Finland. The results show that 3D point clouds can support the monitoring of local green infrastructure, including smaller elements in green areas and yards. However, point clouds generated by different means differ in their ability to convey information on green elements, and canopy covers, for example, can hinder this ability. Additionally, some green factor elements, such as those with a clear geometrical form, are more promising for 3D measurement-based monitoring than others. The results encourage the use of 3D measuring technologies for monitoring local urban green infrastructure (UGI), including at small scales.


2010 ◽  
Vol 22 (2) ◽  
pp. 158-166 ◽  
Author(s):  
Taro Suzuki ◽  
Yoshiharu Amano ◽  
Takumi Hashizume

This paper describes outdoor localization for a mobile robot using a laser scanner and three-dimensional (3D) point cloud data. A Mobile Mapping System (MMS) measures outdoor 3D point clouds easily and precisely. The full six-dimensional state of the mobile robot is estimated by combining dead reckoning with 3D point cloud data: the two-dimensional (2D) position and orientation are extended to 3D using the 3D point clouds, assuming that the robot remains in continuous contact with the road surface. Our approach applies a particle filter to correct the position error, with the laser measurement model evaluated in 3D point cloud space. Field experiments were conducted to evaluate the accuracy of the proposed method and confirmed that a localization precision of 0.2 m (RMS) is achievable.
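A particle-filter correction step of the kind described can be sketched as follows; this is a deliberately simplified 2D range-to-landmark version with made-up values, not the paper's 3D point cloud measurement model:

```python
import math, random

def correct(particles, landmark, measured_range, sigma=0.2):
    """Reweight candidate positions by measurement likelihood and return
    the weighted-mean position estimate."""
    weights = []
    for x, y in particles:
        expected = math.hypot(landmark[0] - x, landmark[1] - y)
        err = expected - measured_range
        weights.append(math.exp(-0.5 * (err / sigma) ** 2))  # Gaussian model
    total = sum(weights)
    weights = [w / total for w in weights]
    ex = sum(w * x for w, (x, y) in zip(weights, particles))
    ey = sum(w * y for w, (x, y) in zip(weights, particles))
    return (ex, ey)

random.seed(0)
# Particles scattered around the true position (1.0, 2.0); the landmark at
# (5.0, 2.0) is really 4.0 m away, so particles far from that range lose weight.
particles = [(1.0 + random.gauss(0, 0.3), 2.0 + random.gauss(0, 0.3))
             for _ in range(500)]
estimate = correct(particles, landmark=(5.0, 2.0), measured_range=4.0)
```

In the paper's setting, the likelihood would instead compare laser measurements against the prior MMS point cloud, but the reweighting structure is the same.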


Author(s):  
M. Weinmann ◽  
A. Schmidt ◽  
C. Mallet ◽  
S. Hinz ◽  
F. Rottensteiner ◽  
...  

The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing and computer vision. For reliably extracting objects such as buildings, road inventory or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components of the processing workflow have been investigated extensively, but separately, in recent years, their connection by sharing the results of crucial tasks across all components has not yet been addressed. This connection not only encapsulates the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (<i>i</i>) individually optimized 3D neighborhoods for (<i>ii</i>) the extraction of distinctive geometric features and (<i>iii</i>) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process and that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification.
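Geometric features of the kind such pipelines typically extract from a 3D neighborhood (linearity, planarity, sphericity, derived from the eigenvalues of the local covariance matrix) can be sketched as follows; the closed-form symmetric eigenvalue solver and the feature definitions are standard, but this is our illustration, not the paper's code:

```python
import math

def eigvals_sym3(a11, a22, a33, a12, a13, a23):
    """Eigenvalues of a symmetric 3x3 matrix, in descending order
    (trigonometric closed form for symmetric matrices)."""
    p1 = a12**2 + a13**2 + a23**2
    if p1 == 0:  # matrix already diagonal
        return sorted((a11, a22, a33), reverse=True)
    q = (a11 + a22 + a33) / 3
    p2 = (a11 - q)**2 + (a22 - q)**2 + (a33 - q)**2 + 2 * p1
    p = math.sqrt(p2 / 6)
    b11, b22, b33 = (a11 - q) / p, (a22 - q) / p, (a33 - q) / p
    b12, b13, b23 = a12 / p, a13 / p, a23 / p
    detb = (b11 * (b22 * b33 - b23 * b23)
            - b12 * (b12 * b33 - b23 * b13)
            + b13 * (b12 * b23 - b22 * b13))
    r = max(-1.0, min(1.0, detb / 2))
    phi = math.acos(r) / 3
    l1 = q + 2 * p * math.cos(phi)
    l3 = q + 2 * p * math.cos(phi + 2 * math.pi / 3)
    return [l1, 3 * q - l1 - l3, l3]

def shape_features(points):
    """Linearity, planarity, sphericity of a local 3D neighborhood."""
    n = len(points)
    m = [sum(p[i] for p in points) / n for i in range(3)]
    cov = [[sum((p[i] - m[i]) * (p[j] - m[j]) for p in points) / n
            for j in range(3)] for i in range(3)]
    l1, l2, l3 = eigvals_sym3(cov[0][0], cov[1][1], cov[2][2],
                              cov[0][1], cov[0][2], cov[1][2])
    return ((l1 - l2) / l1, (l2 - l3) / l1, l3 / l1)

# A flat 3x3 grid of points is perfectly planar ...
lin, pla, sph = shape_features([(x, y, 0.0) for x in range(3)
                                for y in range(3)])
# ... while collinear points are perfectly linear.
lin2, _, _ = shape_features([(t, 0.0, 0.0) for t in range(5)])
```

The paper's point that neighborhood size matters follows directly from this construction: the covariance, and hence all three features, is computed over whichever points are selected as the neighborhood.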


Forests ◽  
2019 ◽  
Vol 10 (8) ◽  
pp. 660 ◽  
Author(s):  
Yangbo Deng ◽  
Kunyong Yu ◽  
Xiong Yao ◽  
Qiaoya Xie ◽  
Yita Hsieh ◽  
...  

The accurate estimation of leaf area is of great importance for acquiring information on forest canopy structure. Currently, direct harvesting is used to obtain leaf area; however, it is difficult to quickly and effectively extract the leaf area of a forest in this way. Although remote sensing technology can provide leaf area estimates over a wide range of scales, it cannot accurately estimate leaf area at small spatial scales. The purpose of this study is to examine the use of terrestrial laser scanning data for fast, accurate, and non-destructive estimation of individual tree leaf area. We use terrestrial laser scanning data to obtain 3D point cloud data for individual tree canopies of Pinus massoniana. Using voxel conversion, we develop a model relating the number of voxels to canopy leaf area and then apply it to the 3D data. The results show significant positive correlations between reference leaf area and mass (R² = 0.8603; p < 0.01). Our findings demonstrate that using terrestrial laser point cloud data with a layer thickness of 0.1 m and a voxel size of 0.05 m can effectively improve leaf area estimation. We verify the suitability of the voxel-based method for estimating the leaf area of P. massoniana and confirm the effectiveness of this non-destructive method.
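The voxel-counting step can be sketched as follows; the point coordinates are hypothetical, and the regression from voxel count to leaf area is the part the study fits from reference data:

```python
def count_voxels(points, size=0.05):
    """Number of distinct occupied cubic voxels (edge length `size`, in m)
    for a list of (x, y, z) points."""
    return len({(int(x // size), int(y // size), int(z // size))
                for x, y, z in points})

# Four hypothetical canopy points; the first two fall in the same 5 cm voxel.
pts = [(0.00, 0.00, 0.00), (0.01, 0.02, 0.03),
       (0.10, 0.00, 0.00), (0.00, 0.20, 0.00)]
n = count_voxels(pts, size=0.05)  # 3 occupied voxels
```

Counting occupied voxels rather than raw points makes the measure insensitive to scan density, which is why it can serve as a stable predictor for canopy leaf area.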


2020 ◽  
Vol 12 (11) ◽  
pp. 1729 ◽  
Author(s):  
Saifullahi Aminu Bello ◽  
Shangshu Yu ◽  
Cheng Wang ◽  
Jibril Muhmmad Adam ◽  
Jonathan Li

A point cloud is a set of points defined in a 3D metric space. Point clouds have become one of the most significant data formats for 3D representation and are gaining popularity owing to the growing availability of acquisition devices and to widening applications in areas such as robotics, autonomous driving, and augmented and virtual reality. Deep learning is now the most powerful tool for data processing in computer vision and is becoming the preferred technique for tasks such as classification, segmentation, and detection. However, deep learning techniques are mainly applied to data with a structured grid, whereas the point cloud is unstructured; this lack of structure makes direct deep learning on point clouds very challenging. This paper reviews recent state-of-the-art deep learning techniques, mainly focusing on raw point cloud data. Initial work on deep learning directly with raw point cloud data did not model local regions; subsequent approaches model local regions through sampling and grouping. More recently, several approaches have been proposed that not only model the local regions but also explore the correlations between points within them. From the survey, we conclude that approaches that model local regions and take into account the correlations between points in those regions perform better. Contrary to existing reviews, this paper provides a general structure for learning with raw point clouds, and the various methods are compared within that structure. This work also introduces popular 3D point cloud benchmark datasets and discusses the application of deep learning to popular 3D vision tasks, including classification, segmentation, and detection.
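The core trick that makes learning on raw, unordered point sets possible is a shared per-point transform followed by an order-invariant pooling. A tiny hand-rolled sketch with random, untrained placeholder weights (our illustration, not any specific network from the survey):

```python
import random

def pointwise_feature(p, weights):
    """Shared per-point transform: the same linear map + ReLU for every point."""
    return [max(0.0, sum(w * c for w, c in zip(row, p))) for row in weights]

def global_feature(points, weights):
    """Max-pool the per-point features; the result ignores point ordering."""
    feats = [pointwise_feature(p, weights) for p in points]
    return [max(f[i] for f in feats) for i in range(len(weights))]

random.seed(1)
W = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]  # untrained
cloud = [(0.1, 0.2, 0.3), (0.5, 0.1, 0.9), (0.3, 0.8, 0.2)]
g1 = global_feature(cloud, W)
g2 = global_feature(list(reversed(cloud)), W)  # same set, different order
assert g1 == g2  # permutation invariance of the global descriptor
```

The local-region approaches the review discusses keep this same symmetric structure but apply it within sampled and grouped neighborhoods instead of over the whole cloud.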


2021 ◽  
Vol 13 (1) ◽  
pp. 705-716
Author(s):  
Qiuji Chen ◽  
Xin Wang ◽  
Mengru Hang ◽  
Jiye Li

Abstract Correct individual tree segmentation of a forest is necessary for extracting additional tree information, such as tree height, crown width, and other tree parameters. With the development of LiDAR technology, individual tree segmentation based on point cloud data has become a focus of the research community. In this work, the research area is located in an underground coal mining area in Shenmu City, Shaanxi Province, China. Leaf-on and leaf-off vegetation data over this coal mining area were obtained with airborne LiDAR. We propose a hybrid clustering technique that combines DBSCAN and K-means for segmenting individual trees from airborne LiDAR point cloud data. First, the point cloud data are denoised and filtered. Then, the pre-processed data are projected onto the XOY plane for DBSCAN clustering, and the number and coordinates of the resulting cluster centers are used as input to the K-means clustering algorithm. Finally, the individual tree segmentation results for the forest in the mining area are obtained. The results and analysis show that the method proposed in this paper outperforms other methods for forest segmentation in the mining area, providing effective technical support and data for the study of forests in mining areas.
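The second stage described above, K-means seeded with the cluster centers found by DBSCAN, can be sketched with a minimal pure-Python Lloyd's algorithm; the 2D points and seed centers below are hypothetical stand-ins for crown projections:

```python
import math

def kmeans(points, centers, iters=20):
    """Lloyd's algorithm, seeded with externally supplied centers
    (here they would come from DBSCAN on the XOY projection)."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:  # assign each point to its nearest center
            i = min(range(len(centers)),
                    key=lambda k: math.dist(p, centers[k]))
            clusters[i].append(p)
        # recompute each center as its cluster mean (keep old if empty)
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else ctr
                   for cl, ctr in zip(clusters, centers)]
    return centers, clusters

# Two hypothetical tree crowns projected to the XOY plane, with rough seeds.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(pts, [(0.5, 0.5), (9.0, 9.0)])
```

Seeding from DBSCAN sidesteps K-means' usual weakness that the number of clusters must be guessed in advance: the density-based stage supplies both the count and the initial positions.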

