Measuring change at Earth's surface: On-demand vertical and three-dimensional topographic differencing implemented in OpenTopography

Geosphere ◽  
2021 ◽  
Author(s):  
Chelsea Scott ◽  
Minh Phan ◽  
Viswanath Nandigam ◽  
Christopher Crosby ◽  
J Ramon Arrowsmith

Topographic differencing measures landscape change by comparing multitemporal high-resolution topography data sets. Here, we focused on two types of topographic differencing: (1) Vertical differencing is the subtraction of digital elevation models (DEMs) that span an event of interest. (2) Three-dimensional (3-D) differencing measures surface change by registering point clouds with a rigid deformation. We recently released topographic differencing in OpenTopography, where users perform on-demand vertical and 3-D differencing via an online interface. OpenTopography is a U.S. National Science Foundation–funded facility that provides access to topographic data and processing tools. While topographic differencing has been applied in numerous research studies, the lack of standardization, particularly of 3-D differencing, requires the customization of processing for individual data sets and hinders the community's ability to efficiently perform differencing on the growing archive of topography data. Our paper focuses on streamlined techniques with which to efficiently difference data sets with varying spatial resolution and sensor type (i.e., optical vs. light detection and ranging [lidar]) and over variable landscapes. To optimize on-demand differencing, we considered algorithm choice and displacement resolution. The optimal resolution is controlled by point density, landscape characteristics (e.g., leaf-on vs. leaf-off), and data set quality. We provide processing options derived from metadata that allow users to produce optimal high-quality results, while experienced users can fine-tune the parameters to suit their needs. We anticipate that the differencing tool will expand access to this state-of-the-art technology, will be a valuable educational tool, and will serve as a template for differencing the growing number of multitemporal topography data sets.
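The vertical differencing the abstract describes reduces to a per-cell subtraction of co-registered rasters. A minimal sketch with NumPy, using toy 3×3 arrays rather than the OpenTopography implementation:

```python
import numpy as np

# Hypothetical co-registered pre- and post-event DEMs on the same grid (m).
dem_pre = np.array([[10.0, 10.2, 10.1],
                    [10.3, 10.5, 10.4],
                    [10.1, 10.2, 10.3]])
dem_post = np.array([[10.0, 10.2, 10.1],
                     [10.3, 10.9, 10.4],
                     [10.1, 10.2, 10.7]])

# Vertical differencing: per-cell elevation change spanning the event.
dz = dem_post - dem_pre
```

In practice the two DEMs must first be aligned to a common grid and datum; 3-D differencing instead solves for a rigid transform between the point clouds before comparing them.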

2017 ◽  
Vol 14 (5) ◽  
pp. 172988141773540 ◽  
Author(s):  
Robert A Hewitt ◽  
Alex Ellery ◽  
Anton de Ruiter

A classifier training methodology is presented for Kapvik, a micro-rover prototype. A simulated light detection and ranging scan is divided into a grid, with each cell having a variety of characteristics (such as number of points, point variance, and mean height) which act as inputs to classification algorithms. The training step avoids the need for time-consuming and error-prone manual classification through the use of a simulation that provides training inputs and target outputs. This simulation generates, in a random fashion, various terrains that could be encountered by a planetary rover, including untraversable ones. A sensor model for a three-dimensional light detection and ranging sensor is used with ray tracing to generate realistic noisy three-dimensional point clouds in which all points that belong to untraversable terrain are labelled explicitly. A neural network classifier and its training algorithm are presented, and its output, along with that of other popular classifiers, shows high accuracy on test data sets after training. The network is then tested on outdoor data to confirm it can accurately classify real-world light detection and ranging data. The results show the network is able to identify terrain correctly, falsely classifying just 4.74% of untraversable terrain.
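The per-cell features the abstract names (point count, mean height, height variance) can be sketched as follows; the grid size, cell size, and point cloud here are hypothetical, not Kapvik's actual pipeline:

```python
import numpy as np

def cell_features(points, cell_size=0.5, grid_dim=4):
    """Bin 3-D points (x, y, z) into a grid and compute per-cell
    point count, mean height, and height variance as classifier inputs."""
    features = {}
    ix = (points[:, 0] // cell_size).astype(int)
    iy = (points[:, 1] // cell_size).astype(int)
    for cx in range(grid_dim):
        for cy in range(grid_dim):
            z = points[(ix == cx) & (iy == cy), 2]
            if len(z) > 0:
                features[(cx, cy)] = (len(z), z.mean(), z.var())
            else:
                features[(cx, cy)] = (0, 0.0, 0.0)
    return features

# Toy scan: three low points in cell (0, 0), one tall return in cell (1, 1).
pts = np.array([[0.1, 0.1, 0.0], [0.2, 0.3, 0.1],
                [0.3, 0.2, 0.2], [0.6, 0.7, 1.5]])
feats = cell_features(pts)
```

Each feature tuple would then be fed to the classifier, with the simulation supplying the traversable/untraversable target label per cell.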


2014 ◽  
Vol 2 (1) ◽  
pp. 97-104 ◽  
Author(s):  
S. Hergarten ◽  
J. Robl ◽  
K. Stüwe

Abstract. We present a new method to extend the widely used geomorphic technique of swath profiles towards curved geomorphic structures such as river valleys. In contrast to the established method that hinges on stacking parallel cross sections, our approach does not refer to any individual profile lines, but uses the signed distance from a given baseline (for example, a valley floor) as the profile coordinate. The method can be implemented easily for arbitrary polygonal baselines and for rastered digital elevation models as well as for irregular point clouds such as laser scanner data. Furthermore, it does not require any smoothness of the baseline and avoids over- and undersampling due to the curvature of the baseline. The versatility of the new method is illustrated by its application to topographic profiles across valleys, a large subduction zone, and the rim of an impact crater. Similarly to the ordinary swath profile method, the new method is not restricted to analyzing surface elevations themselves, but can aid the quantitative description of topography by analyzing other geomorphic features such as slope or local relief. It is not even constrained to geomorphic data, but can be applied to any two-dimensional data set such as temperature, precipitation, or ages of rocks.
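The signed-distance profile coordinate can be sketched for the simplest case of a single straight baseline segment (hypothetical coordinates; the paper's method handles arbitrary polygonal baselines):

```python
import numpy as np

def signed_distance(points, a, b):
    """Signed distance of 2-D points from the baseline segment a->b:
    positive on the left of the baseline, negative on the right.
    (Illustrative single-segment case of a polygonal baseline.)"""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = b - a
    n = np.array([-d[1], d[0]]) / np.hypot(*d)  # unit left normal
    return (np.asarray(points, float) - a) @ n

# Baseline along the x-axis; points above it get positive distances.
pts = [(1.0, 2.0), (3.0, -1.5), (2.0, 0.0)]
dist = signed_distance(pts, (0.0, 0.0), (10.0, 0.0))
```

Elevations (or slope, relief, temperature, etc.) binned by this coordinate then yield the swath statistics without reference to individual profile lines.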


2021 ◽  
Author(s):  
Alexander K. Bartella ◽  
Josefine Laser ◽  
Mohammad Kamal ◽  
Dirk Halama ◽  
Michael Neuhaus ◽  
...  

Abstract Introduction: Three-dimensional facial scan images have been playing an increasingly important role in the peri-therapeutic management of oral and maxillofacial and head and neck surgery cases. Face scan images can be obtained using optical facial scanners utilizing line-laser, stereophotography, or structured-light modalities, or from volumetric data obtained from cone beam computed tomography (CBCT). The aim of this study is to evaluate whether two low-cost procedures for creating three-dimensional face scan images are able to produce a sufficient data set for clinical analysis. Materials and methods: 50 healthy volunteers were included in the study. Two test objects with defined dimensions were attached to the forehead and the left cheek. Anthropometric values were first measured manually; consecutively, face scans were performed with a smart device and with manual photogrammetry and compared to the manually measured data sets. Results: Anthropometric distances deviated on average 2.17 mm from the manual measurement (smart device scanning 3.01 mm vs. photogrammetry 1.34 mm), with 7 out of 8 deviations being statistically significant. Of a total of 32 angles, 19 values showed a significant difference from the original 90° angles. The average deviation was 6.5° (smart device scanning 10.1° vs. photogrammetry 2.8°). Conclusion: Manual photogrammetry with a regular photo camera shows higher accuracy than scanning with a smart device. However, the smart device was more intuitive in handling, and further technical improvement of the cameras used should be watched carefully.


2021 ◽  
Vol 13 (19) ◽  
pp. 3975
Author(s):  
Fei Zhang ◽  
Amirhossein Hassanzadeh ◽  
Julie Kikkert ◽  
Sarah Jane Pethybridge ◽  
Jan van Aardt

The use of small unmanned aerial system (UAS)-based structure-from-motion (SfM; photogrammetry) and LiDAR point clouds has been widely discussed in the remote sensing community. Here, we compared multiple aspects of the SfM and LiDAR point clouds, collected concurrently in five UAS flights over experimental fields of a short crop (snap bean), in order to explore how well the SfM approach performs compared with LiDAR for crop phenotyping. The main methods include calculating cloud-to-mesh (C2M) distance maps between the preprocessed point clouds, as well as computing multiscale model-to-model cloud comparison (M3C2) distance maps between the derived digital elevation models (DEMs) and crop height models (CHMs). We also evaluated the crop height and the row width from the CHMs and compared them with field measurements for one of the data sets. Both SfM and LiDAR point clouds achieved an average RMSE of ~0.02 m for crop height and an average RMSE of ~0.05 m for row width. The qualitative and quantitative analyses provided proof that the SfM approach is comparable to LiDAR under the same UAS flight settings. However, its altimetric accuracy largely relied on the number and distribution of the ground control points.
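The RMSE figures quoted above are the usual root-mean-square error between CHM-derived and field-measured values; a toy sketch with made-up crop heights (not the study's data):

```python
import numpy as np

def rmse(estimated, measured):
    """Root-mean-square error between estimated and reference values."""
    e = np.asarray(estimated, float) - np.asarray(measured, float)
    return float(np.sqrt(np.mean(e ** 2)))

# Hypothetical crop heights (m): CHM-derived vs. field-measured.
chm_heights = [0.31, 0.28, 0.35, 0.30]
field_heights = [0.30, 0.30, 0.33, 0.31]
err = rmse(chm_heights, field_heights)
```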


2017 ◽  
Author(s):  
Julia Boike ◽  
Inge Juszak ◽  
Stephan Lange ◽  
Sarah Chadburn ◽  
Eleanor Burke ◽  
...  

Abstract. Most permafrost is located in the Arctic, where frozen organic carbon makes it an important component of the global climate system. Despite the fact that the Arctic climate changes more rapidly than the rest of the globe, observational data density in the region is low. Permafrost thaw and carbon release to the atmosphere are a positive feedback mechanism that can exacerbate climate warming. This positive feedback functions via changing land-atmosphere energy and mass exchanges. There is thus a great need to understand links between the energy balance, which can vary rapidly over hourly to annual time scales, and permafrost, which changes slowly over long time periods. This understanding thus mandates long-term observational data sets. Such a data set is available from the Bayelva Site at Ny-Ålesund, Svalbard, where meteorology, energy balance components and subsurface observations have been made for the last 20 years. Additional data include a high resolution digital elevation model and a panchromatic image. This paper presents the data set produced so far, explains instrumentation, calibration, processing and data quality control, as well as the sources for various resulting data sets. The resulting data set is unique in the Arctic and serves as a baseline for future studies. Since the data provide observations of temporally variable parameters that mediate energy fluxes between permafrost and atmosphere, such as snow depth and soil moisture content, they are suitable for use in integrating, calibrating and testing permafrost as a component in Earth System Models. The data set also includes a high resolution digital elevation model that can be used together with the snow physical information for snow pack modeling. The presented data are available in the supplementary material for this paper and through the PANGAEA website ( https://doi.pangaea.de/10.1594/PANGAEA.880120).


Author(s):  
T. Wakita ◽  
J. Susaki

In this study, we propose a method to accurately extract vegetation from terrestrial three-dimensional (3D) point clouds for estimating a landscape index in urban areas. Extraction of vegetation in urban areas is challenging because the light returned by vegetation does not show as clear patterns as man-made objects and because urban areas may contain various objects from which vegetation must be discriminated. The proposed method takes a multi-scale voxel approach to effectively extract different types of vegetation in complex urban areas. With two different voxel sizes, a process is repeated that calculates the eigenvalues of the planar surface using a set of points, classifies voxels using the approximate curvature of the voxel of interest derived from the eigenvalues, and examines the connectivity of the valid voxels. We applied the proposed method to two data sets measured in a residential area in Kyoto, Japan. The validation results were acceptable, with F-measures of approximately 95% and 92%. It was also demonstrated that several types of vegetation were successfully extracted by the proposed method, whereas occluded vegetation was omitted. We conclude that the proposed method is suitable for extracting vegetation in urban areas from terrestrial light detection and ranging (LiDAR) data. In future work, the proposed method will be applied to mobile LiDAR data, and its performance against lower-density point clouds will be examined.
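The eigenvalue-based curvature cue used for voxel classification can be sketched via the covariance eigenvalues of a voxel's points, using the common surface-variation measure λ_min / (λ1 + λ2 + λ3); the data below are synthetic, and this is a stand-in for, not the authors' exact, formulation:

```python
import numpy as np

def surface_variation(points):
    """Approximate curvature of a voxel's points: smallest covariance
    eigenvalue over the eigenvalue sum (near 0 for planar patches,
    larger for volumetrically scattered returns such as foliage)."""
    c = np.cov(np.asarray(points, float).T)
    w = np.linalg.eigvalsh(c)  # ascending order
    return float(w[0] / w.sum())

# Planar patch (wall/ground-like) vs. scattered points (vegetation-like).
rng = np.random.default_rng(0)
plane = np.column_stack([rng.uniform(0, 1, 50),
                         rng.uniform(0, 1, 50),
                         np.zeros(50)])
scatter = rng.uniform(0, 1, (50, 3))
```

Thresholding this value per voxel, then checking connectivity of the retained voxels, mirrors the classify-and-connect loop the abstract describes.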


Author(s):  
L. Markelin ◽  
E. Honkavaara ◽  
R. Näsi ◽  
N. Viljanen ◽  
T. Rosnell ◽  
...  

Novel miniaturized multi- and hyperspectral imaging sensors on board unmanned aerial vehicles have recently shown great potential in various environmental monitoring and measuring tasks such as precision agriculture and forest management. These systems can be used to collect dense 3D point clouds and spectral information over small areas such as single forest stands or sample plots. Accurate radiometric processing and atmospheric correction are required when data sets from different dates and sensors, collected in varying illumination conditions, are combined. The performance of a novel radiometric block adjustment method, developed at the Finnish Geospatial Research Institute, is evaluated with a multitemporal hyperspectral data set of seedling stands collected during spring and summer 2016. Illumination conditions during the campaigns varied from bright to overcast. We use two different methods to produce homogeneous image mosaics and hyperspectral point clouds: image-wise relative correction, and image-wise relative correction with BRDF. Radiometric data sets are converted to reflectance using reference panels, and changes in reflectance spectra are analysed. The tested methods improved image mosaic homogeneity by 5% to 25%. Results show that the evaluated method can produce consistent reflectance mosaics and reflectance spectra shapes between different areas and dates.
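Conversion to reflectance with reference panels typically follows an empirical-line approach; a single-panel, zero-offset sketch with hypothetical digital numbers (illustrating the panel step only, not the FGI block-adjustment method itself):

```python
import numpy as np

def dn_to_reflectance(dn, panel_dn, panel_reflectance):
    """Scale raw digital numbers to reflectance using one reference
    panel of known reflectance (zero-offset empirical line)."""
    return np.asarray(dn, float) * (panel_reflectance / panel_dn)

# Hypothetical band: a 50% reflectance panel imaged at DN 2000.
dns = [400.0, 1000.0, 2000.0]
refl = dn_to_reflectance(dns, panel_dn=2000.0, panel_reflectance=0.50)
```

With two or more panels, a slope-and-offset fit per band would replace the single scale factor; the block adjustment additionally models between-image illumination and BRDF effects.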


2020 ◽  
Vol 12 (18) ◽  
pp. 2884
Author(s):  
Qingwang Liu ◽  
Liyong Fu ◽  
Qiao Chen ◽  
Guangxing Wang ◽  
Peng Luo ◽  
...  

Forest canopy height is one of the most important spatial characteristics for forest resource inventories and forest ecosystem modeling. Light detection and ranging (LiDAR) can be used to accurately detect canopy surface and terrain information from the backscattering signals of laser pulses, while photogrammetry tends to accurately depict the canopy surface envelope. The spatial differences between the canopy surfaces estimated by LiDAR and photogrammetry have not been investigated in depth. Thus, this study aims to assess LiDAR and photogrammetry point clouds and analyze the spatial differences in canopy heights. The study site is located in the Jigongshan National Nature Reserve of Henan Province, Central China. Six data sets, including one LiDAR data set and five photogrammetry data sets captured from an unmanned aerial vehicle (UAV), were used to estimate the forest canopy heights. Three spatial distribution descriptors, namely, the effective cell ratio (ECR), point cloud homogeneity (PCH) and point cloud redundancy (PCR), were developed to assess the LiDAR and photogrammetry point clouds in the grid. The ordinary neighbor (ON) and constrained neighbor (CN) interpolation algorithms were used to fill void cells in digital surface models (DSMs) and canopy height models (CHMs). The CN algorithm could be used to distinguish small and large holes in the CHMs. The optimal spatial resolution was analyzed according to the ECR changes of DSMs or CHMs resulting from the CN algorithms. Large negative and positive variations were observed between the LiDAR and photogrammetry canopy heights. The stratified mean difference in canopy heights increased gradually from negative to positive when the canopy heights were greater than 3 m, which means that photogrammetry tends to overestimate low canopy heights and underestimate high canopy heights. The CN interpolation algorithm achieved smaller relative root mean square errors than the ON interpolation algorithm. This article provides an operational method for the spatial assessment of point clouds and suggests that the variations between LiDAR and photogrammetry CHMs should be considered when modeling forest parameters.
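Void filling in a gridded CHM can be sketched as replacing a NaN cell with the mean of its valid 8-neighbors; this simplified single-pass fill is a stand-in for, not a reproduction of, the paper's ON/CN algorithms:

```python
import numpy as np

def fill_voids_once(chm):
    """One pass of neighbor filling: each NaN cell takes the mean of
    its valid 8-connected neighbors, if any exist."""
    out = chm.copy()
    rows, cols = chm.shape
    for i in range(rows):
        for j in range(cols):
            if np.isnan(chm[i, j]):
                block = chm[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
                vals = block[~np.isnan(block)]
                if vals.size:
                    out[i, j] = vals.mean()
    return out

# Toy CHM (m) with one void cell in the center.
chm = np.array([[2.0, 2.0, 2.0],
                [2.0, np.nan, 4.0],
                [4.0, 4.0, 4.0]])
filled = fill_voids_once(chm)
```

A constrained variant would additionally limit which holes are filled (e.g., by hole size), which is how the CN algorithm distinguishes small holes from large ones.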


Author(s):  
E. Alby ◽  
E. Vigouroux ◽  
R. Elter

<p><strong>Abstract.</strong> This paper presents the use of photogrammetry integrated into the process of representation of an archaeological site. The Khirbat al-Dūsaq site, Jordan, is an architectural complex composed of three remaining buildings with different shapes and functions. The first one is a reception building named īwān. The second one is vaulted, and its function has not yet been determined. The third is a bath with all the complexity required for such a function (multiple rooms and a sequence of spaces). The site is being excavated, and there remains unknown information that archaeologists want to discover and represent. This project takes place after several years of collaboration on other archaeological sites. During these projects, methods of acquisition, processing, and drawing at different places and stages have been developed, and work methods that include the use of photogrammetry are now integrated into archaeological practices. Archaeologists now need ortho-photos to draw precise plans. The integration of photogrammetry into the practice of archaeology on site also helps to reduce the time needed to survey and to represent excavation activity. The data sets obtained year after year can also be used as a support for 3D reconstruction. The 3D modelling stage begins by integrating the context, represented here by the 3D textured mesh produced during the ortho-photo process. The integration of photogrammetry started in 2015 by acquiring pictures of the bath building. This work had to be extended to the entire complex, so it was decided to manage it in a proper way. In 2016, a survey network was implemented, and a complete photogrammetric data set was produced. From this time on, there was a photogrammetric survey reference for all the data sets of the site. Several years of survey mean that the project has to adapt to its specific context. The site keeps evolving during the 11 months each year without archaeological activity, so that by 2017 ground points had disappeared. Geo-referencing future data sets therefore required integrating targets visible in pictures from the 2016 data set. The remaining building walls on site keep their shape well enough to serve as constant structures over the years. It was decided from the outset to integrate the photogrammetric technique into the representation process of the Khirbat al-Dūsaq site. The precision and flexibility of the processes have proved that good-quality representations can be produced and that the 3D documentation can also be used as a support for the 3D reconstruction stage. Photogrammetric documentation, as long as it is properly managed over the years, can thus be integrated into archaeological practices, help reduce time-consuming stages, and support other activities such as 3D reconstruction.</p>


2021 ◽  
Vol 87 (12) ◽  
pp. 879-890
Author(s):  
Sagar S. Deshpande ◽  
Mike Falk ◽  
Nathan Plooster

Rollers are an integral part of a hot-rolling steel mill. They transport hot metal from one end of the mill to the other. The quality of the steel depends highly on the surface quality of the rollers. This paper presents semi-automated methodologies to extract roller parameters from terrestrial lidar points. The procedure was divided into two steps. First, the three-dimensional points were converted to a two-dimensional image to detect the extents of the rollers using fast Fourier transform image matching. Lidar points for every roller were iteratively fitted to a circle. The radius and center of the fitted circle were taken as the average radius and average rotation axis of the roller, respectively. These parameters were also extracted manually and compared to the measured parameters for accuracy analysis. The proposed methodology was able to extract roller parameters at the millimeter level. Erroneously identified rollers were detected by moving-average filters. In the second step, roller parameters were determined using the filtered roller points. Two data sets were used to validate the proposed methodologies. In the first data set, 366 out of 372 rollers (97.3%) were identified and modeled. The second, smaller data set consisted of 18 rollers, which were identified and modeled accurately.
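Fitting roller cross-section points to a circle can be sketched with a linear least-squares (Kåsa-style) fit; the geometry below is synthetic, and this illustrates the idea rather than the authors' exact iterative procedure:

```python
import numpy as np

def fit_circle(x, y):
    """Kasa least-squares circle fit: rewrite (x-cx)^2 + (y-cy)^2 = r^2
    as the linear system 2*cx*x + 2*cy*y + c = x^2 + y^2 and solve for
    center (cx, cy) and c = r^2 - cx^2 - cy^2."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

# Noise-free points on a half arc (as a scanner sees one side of a
# roller) of radius 0.3 m centered at (1, 2).
t = np.linspace(0, np.pi, 20)
x = 1.0 + 0.3 * np.cos(t)
y = 2.0 + 0.3 * np.sin(t)
cx, cy, r = fit_circle(x, y)
```

Repeating the fit per cross-section along the roller, then averaging the centers and radii, yields the average rotation axis and radius the abstract describes.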

