A Fast Global Interpolation Method for Digital Terrain Model Generation from Large LiDAR-Derived Data

2019 ◽  
Vol 11 (11) ◽  
pp. 1324 ◽  
Author(s):  
Chuanfa Chen ◽  
Yanyan Li

Airborne light detection and ranging (LiDAR) datasets with a large volume pose a great challenge to traditional interpolation methods for the production of digital terrain models (DTMs). Thus, a fast global interpolation method based on the thin plate spline (TPS) is proposed in this paper. In the methodology, a weighted version of the finite difference TPS is first developed to deal with the problem of missing data in grid-based surface construction. Then, the interpolation matrix of the weighted TPS is deduced and found to be largely sparse. Furthermore, the value and position of each nonzero element in the matrix are analytically determined. Finally, to make full use of the sparseness of the interpolation matrix, the linear system is solved in an iterative manner. As a result, the new method is not only fast but also requires less random-access memory. Tests on six simulated datasets indicate that, compared to the recently developed discrete cosine transformation (DCT)-based TPS, the proposed method has higher speed and accuracy, a lower memory requirement, and less sensitivity to the smoothing parameter. Real-world examples on 10 public datasets and 1 private dataset demonstrate that, compared to the DCT-based TPS and locally weighted interpolation methods such as linear, natural neighbor (NN), inverse distance weighting (IDW), and ordinary kriging (OK), the proposed method produces visually good surfaces that overcome the peak-cutting, coarseness, and discontinuity problems of the aforementioned interpolators. More importantly, the proposed method performs similarly to the simple interpolation methods (e.g., IDW and NN) with respect to computing time and memory cost, and significantly outperforms OK. Overall, the proposed method, with its low memory requirement and computing cost, offers great potential for the derivation of DTMs from large-scale LiDAR datasets.
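The core idea — a data-fidelity term weighted by an observation mask plus a sparse finite-difference smoothness penalty, solved iteratively — can be illustrated with a minimal sketch. This is not the authors' implementation; the grid size, penalty form (separable second differences rather than the full TPS stencil), and conjugate-gradient solver are illustrative assumptions:

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import cg

def weighted_tps_grid(z_obs, observed, lam=1.0):
    """Fill a gridded surface with a weighted finite-difference spline.

    Solves (W + lam * (Lx'Lx + Ly'Ly)) z = W z_obs, where W is a diagonal
    indicator of observed cells and Lx, Ly are sparse second-difference
    operators. The sparse system is solved iteratively (conjugate
    gradients), so memory stays proportional to the number of grid cells.
    """
    ny, nx = z_obs.shape

    def d2(n):  # second-difference operator, shape (n - 2, n)
        return diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))

    Lx = kron(identity(ny), d2(nx))   # differences along x
    Ly = kron(d2(ny), identity(nx))   # differences along y
    W = diags(observed.ravel().astype(float))
    A = (W + lam * (Lx.T @ Lx + Ly.T @ Ly)).tocsr()
    b = W @ np.nan_to_num(z_obs).ravel()
    z, _ = cg(A, b, atol=1e-12)
    return z.reshape(ny, nx)
```

Because the system matrix is assembled and solved in sparse form, both memory and computing cost grow with the number of grid cells rather than its square, which is the property the abstract emphasizes.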

Energies ◽  
2019 ◽  
Vol 12 (18) ◽  
pp. 3586 ◽  
Author(s):  
Sizhou Sun ◽  
Jingqi Fu ◽  
Ang Li

Given the large-scale exploitation and utilization of wind power, the highly stochastic and random characteristics of wind speed have driven researchers to develop more reliable and precise wind power forecasting (WPF) models. To obtain better prediction accuracy, this study proposes a novel compound WPF strategy that optimally integrates four base forecasting engines. In the forecasting process, density-based spatial clustering of applications with noise (DBSCAN) is first employed to identify meaningful information and discard abnormal wind power data. To eliminate the adverse influence of missing data on forecasting accuracy, the Lagrange interpolation method is applied to obtain corrected values for the missing points. Then, a two-stage decomposition (TSD) method comprising ensemble empirical mode decomposition (EEMD) and the wavelet transform (WT) is utilized to preprocess the wind power data. In the decomposition process, the empirical wind power data are disassembled into different intrinsic mode functions (IMFs) and one residual (Res) by EEMD, and the highest-frequency time series, IMF1, is further broken into different components by WT. After the input matrix is determined by a partial autocorrelation function (PACF) and normalized into [0, 1], these decomposed components are used as the input variables of all the base forecasting engines, namely least squares support vector machine (LSSVM), wavelet neural network (WNN), extreme learning machine (ELM), and autoregressive integrated moving average (ARIMA), to make the multistep WPF. To avoid local optima and improve forecasting performance, the parameters of LSSVM, ELM, and WNN are tuned by the backtracking search algorithm (BSA). On this basis, the BSA is also employed to optimize the weighted coefficients of the individual forecasts produced by the four base engines to generate an ensemble forecast. In the end, case studies of a wind farm in China are carried out to assess the proposed forecasting strategy.
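The Lagrange gap-filling step can be sketched as a local polynomial fitted to the nearest valid neighbours of each missing point. This is a minimal illustration; the polynomial order and neighbour selection are assumptions, not the paper's exact settings:

```python
import numpy as np
from scipy.interpolate import lagrange

def fill_missing_lagrange(series, order=3):
    """Replace NaNs in a 1-D wind power series with values from a local
    Lagrange polynomial fitted to the (order + 1) nearest valid samples."""
    y = np.asarray(series, dtype=float).copy()
    valid = np.where(~np.isnan(y))[0]
    for i in np.where(np.isnan(y))[0]:
        # pick the valid indices closest to the gap and fit locally
        nearest = valid[np.argsort(np.abs(valid - i))[: order + 1]]
        poly = lagrange(nearest.astype(float), y[nearest])
        y[i] = poly(float(i))
    return y
```

Fitting locally rather than globally keeps the polynomial order low, which avoids the oscillation that high-order Lagrange interpolation is known for.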


2013 ◽  
Vol 318 ◽  
pp. 100-107
Author(s):  
Zhen Shen ◽  
Biao Wang ◽  
Hui Yang ◽  
Yun Zheng

Six interpolation methods, including the projection-shape function method, the three-dimensional linear interpolation method, the optimal interpolation method, and the constant volume transformation method, were adopted in a study of interpolation accuracy. The influence on interpolation accuracy was examined from the viewpoint of how well two different grids match and of the interpolation function itself. The results revealed that different interpolation methods yield different accuracy: the projection-shape function method performed best, while more complex interpolation functions gave lower accuracy. In many cases, the matching condition of the two grids had a much greater impact on interpolation accuracy than the method itself. Interpolation error is inevitable, but the error caused by poor grid quality can be reduced.
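The interplay between grid matching and interpolation accuracy can be seen in a small experiment. This is a generic sketch using linear grid-to-grid transfer, not the specific methods compared in the study:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def transfer_error(f, n_src, n_dst):
    """Transfer an analytic field from an n_src x n_src grid to an
    n_dst x n_dst grid by linear interpolation and return the max abs
    error -- a proxy for how well the two grids 'match' the field."""
    xs = np.linspace(0.0, 1.0, n_src)
    values = f(*np.meshgrid(xs, xs, indexing="ij"))
    interp = RegularGridInterpolator((xs, xs), values)
    xd = np.linspace(0.0, 1.0, n_dst)
    pts = np.stack(np.meshgrid(xd, xd, indexing="ij"), -1).reshape(-1, 2)
    return float(np.max(np.abs(interp(pts) - f(pts[:, 0], pts[:, 1]))))
```

For a smooth field, refining the source grid shrinks the transfer error roughly quadratically, illustrating why grid quality can dominate the choice of interpolation method.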


2021 ◽  
Author(s):  
Kazuki Murata ◽  
Shinji Sassa ◽  
Tomohiro Takagawa ◽  
Toshikazu Ebisuzaki ◽  
Shigenori Maruyama

Abstract We first propose and examine a method for digitizing analog records of submarine topography, focusing on the seafloor survey records available in the literature, to facilitate detailed analysis of submarine landslides and landslide-induced tsunamis. Second, we apply this digitization method to the seafloor topographic changes recorded before and after the 1923 Great Kanto earthquake tsunami event and evaluate its effectiveness. Third, we discuss the coseismic large-scale seafloor deformation in Sagami Bay and at the mouth of Tokyo Bay, Japan. The results confirmed that the latitude/longitude and water depth values recorded by the lead-sounding measurement method can be approximately extracted from the triangulated sea-depth coordinates by overlaying currently available GIS map data, without geometric corrections such as an affine transformation. Further, the proposed method allows us to obtain mesh data of depth changes in the sea area using interpolation based on the inverse distance weighted (IDW) average method, as shown by its application to the 1923 Great Kanto earthquake. Finally, we analyzed and compared the submarine topography before and after the 1923 tsunami event with the current seabed topography. We found that the large-scale depth changes correspond to the valley lines running down the topography of Sagami Bay and the Tokyo Bay mouth.
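The IDW gridding step described above can be sketched as follows. This is a generic inverse-distance-weighted average; the power parameter and grid spacing are illustrative, not the paper's values:

```python
import numpy as np

def idw_grid(x, y, v, gx, gy, power=2.0):
    """Interpolate scattered depth-change values (x, y, v) onto a regular
    mesh defined by coordinate vectors gx, gy using an inverse distance
    weighted average."""
    XX, YY = np.meshgrid(gx, gy)
    d = np.hypot(XX[..., None] - np.asarray(x), YY[..., None] - np.asarray(y))
    w = np.maximum(d, 1e-12) ** (-power)   # cap distance to avoid dividing by zero
    return (w * np.asarray(v)).sum(axis=-1) / w.sum(axis=-1)
```

At a sounding location the capped distance makes that point's weight dominate, so the mesh reproduces the measured value there; between soundings the result is a smooth distance-weighted average.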


Author(s):  
Luca Accorsi ◽  
Daniele Vigo

In this paper, we propose a fast, scalable, yet effective metaheuristic called FILO to solve large-scale instances of the Capacitated Vehicle Routing Problem. Our approach consists of a main iterative part, based on the Iterated Local Search paradigm, which employs a carefully designed combination of existing acceleration techniques as well as novel strategies to keep the optimization localized, controlled, and tailored to the current instance and solution. A Simulated Annealing-based neighbor acceptance criterion provides continuous diversification, ensuring the exploration of different regions of the search space. Results on extensively studied benchmark instances from the literature, supported by a thorough analysis of the algorithm's main components, show the effectiveness of the proposed design choices, making FILO highly competitive with existing state-of-the-art algorithms in terms of both computing time and solution quality. Finally, guidelines for efficient implementation, the algorithm's source code, and a library of reusable components are open-sourced to allow reproduction of our results and promote further investigation.
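The Simulated Annealing-based acceptance criterion mentioned above follows the classic Metropolis rule; a minimal sketch (parameter names are illustrative, not FILO's actual interface):

```python
import math
import random

def sa_accept(current_cost, neighbor_cost, temperature, rng=random.random):
    """Metropolis-style acceptance: always accept improving moves, and
    accept worsening moves with probability exp(-delta / T) so the search
    keeps exploring different regions of the solution space."""
    delta = neighbor_cost - current_cost
    if delta <= 0:
        return True
    return rng() < math.exp(-delta / temperature)
```

Gradually lowering `temperature` over the iterations shifts the search from diversification toward intensification, which is how such a criterion yields continuous diversification inside an Iterated Local Search loop.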


2002 ◽  
Vol 124 (5) ◽  
pp. 812-819 ◽  
Author(s):  
S. L. Lee ◽  
Y. F. Chen

The NAPPLE algorithm for incompressible viscous flow on a Cartesian grid system is extended to nonorthogonal curvilinear grid systems in this paper. A pressure-linked equation is obtained by substituting the discretized momentum equations into the discretized continuity equation. Instead of employing a velocity interpolation such as the pressure-weighted interpolation method (PWIM), a particular approximation is adopted to circumvent the checkerboard error, so that the solution does not depend on the under-relaxation factor. This is a distinctive feature of the present method. Furthermore, the pressure is solved directly from the pressure-linked equation without recourse to a pressure-correction equation. In the NAPPLE algorithm, solving the pressure-linked equation is as simple as solving a heat conduction equation. Through two well-documented examples, the performance of the NAPPLE algorithm is validated for both buoyancy-driven and pressure-driven flows.
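The comparison with heat conduction can be made concrete: the pressure-linked equation shares the five-point stencil of a discrete Laplace problem, which a simple point-iterative sweep already solves. The following is a generic Jacobi sketch of such a solve, not the NAPPLE discretization itself:

```python
import numpy as np

def solve_laplace(boundary, tol=1e-6, max_iter=20000):
    """Jacobi iteration for a 2-D steady heat conduction (Laplace) problem
    with Dirichlet values fixed on the array's outer edge."""
    T = boundary.astype(float).copy()
    for _ in range(max_iter):
        T_new = T.copy()
        # five-point stencil: each interior node is the mean of its neighbours
        T_new[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1]
                                    + T[1:-1, :-2] + T[1:-1, 2:])
        if np.max(np.abs(T_new - T)) < tol:
            return T_new
        T = T_new
    return T
```

Any linear solver that handles this symmetric five-point system (Jacobi, Gauss-Seidel, multigrid, conjugate gradients) applies equally to a pressure equation of the same structure, which is the point of the analogy.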


2012 ◽  
Vol 16 (6) ◽  
pp. 1709-1723 ◽  
Author(s):  
D. González-Zeas ◽  
L. Garrote ◽  
A. Iglesias ◽  
A. Sordo-Ward

Abstract. An important step to assess water availability is to have monthly time series representative of the current situation. In this context, a simple methodology is presented for application in large-scale studies in regions where a properly calibrated hydrologic model is not available, using the output variables simulated by regional climate models (RCMs) of the European project PRUDENCE under current climate conditions (period 1961–1990). The methodology compares different interpolation methods and alternatives to generate annual times series that minimise the bias with respect to observed values. The objective is to identify the best alternative to obtain bias-corrected, monthly runoff time series from the output of RCM simulations. This study uses information from 338 basins in Spain that cover the entire mainland territory and whose observed values of natural runoff have been estimated by the distributed hydrological model SIMPA. Four interpolation methods for downscaling runoff to the basin scale from 10 RCMs are compared with emphasis on the ability of each method to reproduce the observed behaviour of this variable. The alternatives consider the use of the direct runoff of the RCMs and the mean annual runoff calculated using five functional forms of the aridity index, defined as the ratio between potential evapotranspiration and precipitation. In addition, the comparison with respect to the global runoff reference of the UNH/GRDC dataset is evaluated, as a contrast of the "best estimator" of current runoff on a large scale. Results show that the bias is minimised using the direct original interpolation method and the best alternative for bias correction of the monthly direct runoff time series of RCMs is the UNH/GRDC dataset, although the formula proposed by Schreiber (1904) also gives good results.
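Of the functional forms of the aridity index mentioned, the Schreiber (1904) relation is the simplest to state. Assuming its standard form, mean annual runoff follows directly from precipitation P and potential evapotranspiration PET:

```python
import math

def schreiber_runoff(precip, pet):
    """Mean annual runoff Q = P * exp(-PET / P), the Schreiber (1904)
    relation, where PET / P is the aridity index. Units of Q follow
    those of P (e.g., mm/yr)."""
    return precip * math.exp(-pet / precip)
```

Runoff estimated this way is always less than precipitation and decreases as the climate becomes more arid (larger PET/P), which is the qualitative behaviour the bias-correction alternatives exploit.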


2021 ◽  
Author(s):  
Mayra Mendoza Cariño ◽  
Ana Laura Bautista Olivas ◽  
Daniel Mendoza Cariño ◽  
Carlos Alberto Ortíz Solorio ◽  
Héctor Duarte Tagles ◽  
...  

Agricultural productivity in the state of Nayarit has decreased since 1998. The aim of this study was to undertake agroclimatic zoning across the state in order to determine which crops would render the highest yields, based on the Papadakis climate classification system. Hydric and thermal characteristics pertaining to the geographic distribution of crops were used, as well as indexes derived from meteorological data provided by 25 climate stations. Three climatic groups were identified: tropical, subtropical, and cold land, having four, three, and two subgroups, respectively. The first two climatic groups support winter cereals such as oat (Avena sativa L.), barley (Hordeum vulgare L.), rye (Secale cereale L.) and wheat (Triticum aestivum L.); summer cereals such as corn (Zea mays L.), millet (Panicum italicum L.), rice (Oryza sativa L.) and sorghum (Sorghum bicolor (L.) Moench); and banana (Musa paradisiaca L.), citrus, potato (Solanum tuberosum L.) and sugar cane (Saccharum officinarum L.). Corn and potato were also found in the cold-land climatic group. Based on the Papadakis methodology, a set of management recommendations was given for each climatic subgroup identified to improve yields and avoid crop damage: crop type, sowing season, irrigation, fertilizing and other agrochemical applications. The agroclimatic zoning map was generated using the inverse distance weighted interpolation method. This study may contribute to the successful planning of crops across the region and thus improve the state's economy.


2021 ◽  
Vol 10 (10) ◽  
pp. 666
Author(s):  
Lei Zhang ◽  
Ping Wang ◽  
Chengyi Huang ◽  
Bo Ai ◽  
Wenjun Feng

Terrain rendering is an important issue in Geographic Information Systems and other fields. During large-scale, real-time terrain rendering, complex terrain structure and an increasing amount of data decrease the smoothness of rendering. Existing rendering methods rarely use terrain features to optimize rendering. This paper presents a method to increase rendering performance by precomputing roughness and self-occlusion information using GIS-based Digital Terrain Analysis. Our method is based on GPU tessellation. We use quadtrees to manage patches and take surface roughness from Digital Terrain Analysis as a factor in Levels of Detail (LOD) selection. Before rendering, we first regularly partition the terrain scene into view cells. Then, for each cell, we calculate its potential visible patch set (PVPS) using a visibility analysis algorithm. After that, a PVPS image pyramid is built in which each LOD level has its corresponding PVPS. The pyramid is stored on disk and read into RAM before rendering. Based on the pyramid and the viewpoint's position, invisible terrain areas that are not culled through view frustum culling can be dynamically culled. We use Digital Elevation Model (DEM) elevation data of a square area in Henan Province to verify the effectiveness of this method. The experiments show that this method can increase the frame rate compared with other methods, especially at lower camera flight heights.
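Roughness-weighted LOD selection can be sketched as a simple scoring rule. This sketch is entirely illustrative — the paper's actual tessellation factors and thresholds are not given here, and the names and weights below are assumptions:

```python
import math

def select_lod(distance, roughness, max_lod=6, rho=4.0):
    """Choose a tessellation LOD for a terrain patch: farther patches get
    coarser levels, while patches with high precomputed surface roughness
    are kept finer. rho weights roughness against viewing distance."""
    level = max_lod - int(math.log2(max(distance, 1.0))) + int(rho * roughness)
    return max(0, min(max_lod, level))
```

Because roughness is precomputed per patch during Digital Terrain Analysis, such a rule adds no per-frame analysis cost beyond the lookup.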


2020 ◽  
Author(s):  
Shimpei Uesawa ◽  
Kiyoshi Toshida ◽  
Shingo Takeuchi ◽  
Daisuke Miura

Abstract Tephra falls can disrupt critical infrastructure, including transportation and electricity networks. Probabilistic assessments of tephra fall hazards have been performed using computational techniques, but it is also important to integrate long-term, regional geological records. To assess tephra fall load hazards in Japan, we re-digitized an existing database of 551 tephra distribution maps. We used the re-digitized datasets to produce hazard curves for a range of tephra loads for various localities. We calculated annual exceedance probabilities (AEPs) and constructed hazard curves from the most complete part of the geological record. We used records of tephra fall events with a Volcanic Explosivity Index (VEI) of 4–7 (based on survivor functions) that occurred over the last 150 ka, as the database contains a very high percentage (around 90%) of VEI 4–7 events for this period. We fitted the data for this period using a Poisson distribution function. Hazard curves were constructed for the tephra fall load at 47 prefectural offices throughout Japan, and four broad regions were defined (NE–W, NE–E, W, and SW Japan). AEPs were relatively high, exceeding 1 × 10⁻⁴ for loads greater than 0 kg/m² on the eastern (down-wind) side of the volcanic front in the NE–E region. In much of the W and SW regions, maximum loads were heavier, but AEPs were lower (<10⁻⁴). Tephras from large (VEI ≥ 6) events are the predominant hazard in every region. A parametric analysis was applied to investigate regional variability using AEP diagrams and slope shape parameters via curve fitting with exponential and double-exponential decay functions. Two major differences were recognized between the hazard curves from borehole data and those from the digitized tephra database. The first is a significant underestimation of AEP for frequent events using the tephra database, by one to two orders of magnitude. This is explained by the lack of records for smaller tephra fall events in the database. The second is an overestimation of the heaviest tephra load events, which differ by a factor of two to three. This difference might be due to the contour interpolation methodology used to generate the tephra fall distributions in the original database. The hazard curve for Tokyo developed in this study differs from those generated previously using computational techniques. For the Tokyo region, the probabilities and tephra loads produced by computational methods are at least one order of magnitude greater than those generated in the present study. These discrepancies are inferred to have been caused by initial parameter settings in the computational simulations, including their incorporation of large-scale eruptions of up to VEI = 7 for all large stratovolcanoes, regardless of their eruptive histories. To improve the precision of the digital database, we plan to incorporate recent (since 2003) tephra distributions, revise questionable isopach maps, and develop an improved interpolation method for digitizing tephra fall distributions.
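Under the Poisson fit described above, the annual exceedance probability for a given load threshold follows from the event count n over the record length T in years. The following is a minimal sketch of the standard relation, not the authors' exact calculation:

```python
import math

def annual_exceedance_probability(n_events, record_years):
    """AEP for a stationary Poisson process: with annual rate
    lam = n_events / record_years, the probability of at least one
    exceedance in any given year is 1 - exp(-lam)."""
    lam = n_events / record_years
    return 1.0 - math.exp(-lam)
```

For rare events (lam << 1) the AEP is numerically close to the rate itself, which is why order-of-magnitude comparisons between hazard curves are meaningful.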


Agronomy ◽  
2019 ◽  
Vol 10 (1) ◽  
pp. 3 ◽  
Author(s):  
Mladen Jurišić ◽  
Ivan Plaščak ◽  
Oleg Antonić ◽  
Dorijan Radočaj

Red spicy pepper is traditionally considered a fundamental ingredient of multiple authentic products of Eastern Croatia. The objectives of this study were to: (1) evaluate the optimal interpolation method for modeling the criteria layers; (2) calculate the sustainability and vulnerability of red spicy pepper cultivation using a hybrid Geographic Information System (GIS)-based multicriteria analysis with the analytical hierarchy process (AHP) method; and (3) determine suitability classes for red spicy pepper cultivation using K-means unsupervised classification. The inverse distance weighted interpolation method was selected as optimal, as it produced higher accuracy than ordinary kriging and natural neighbour. Sustainability and vulnerability represented the positive and negative influences on red spicy pepper production, respectively. These values served as the input to a K-means unsupervised classification with four classes. Classes were ranked by the average of the mean class sustainability and vulnerability values. The top two ranked classes, highest suitability and moderate-high suitability, produced suitability values of 3.618 and 3.477 out of a possible 4.000, respectively. These classes were considered the most suitable for red spicy pepper cultivation, covering an area of 2167.5 ha (6.9% of the total study area). A suitability map for red spicy pepper cultivation was created as a basis for the establishment of red spicy pepper plantations.
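Selecting an optimal interpolator typically rests on a cross-validation accuracy check. A minimal leave-one-out sketch for IDW follows; the study's accuracy metric and parameter settings are not reproduced here:

```python
import numpy as np

def loo_rmse_idw(x, y, v, power=2.0):
    """Leave-one-out RMSE of inverse distance weighted predictions: each
    point is predicted from all the others and the errors are aggregated,
    the kind of score used to rank IDW against ordinary kriging and
    natural neighbour."""
    x, y, v = (np.asarray(a, dtype=float) for a in (x, y, v))
    errs = []
    for i in range(v.size):
        m = np.arange(v.size) != i          # hold out point i
        w = np.maximum(np.hypot(x[m] - x[i], y[m] - y[i]), 1e-12) ** (-power)
        errs.append(w @ v[m] / w.sum() - v[i])
    return float(np.sqrt(np.mean(np.square(errs))))
```

Running the same leave-one-out score with each candidate interpolator, and picking the lowest RMSE, is the generic version of the model selection the study performs.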

