Mr4Soil: A MapReduce-Based Framework Integrated with GIS for Soil Erosion Modelling

2019 ◽  
Vol 8 (3) ◽  
pp. 103
Author(s):  
Zhigang Han ◽  
Fen Qin ◽  
Caihui Cui ◽  
Yannan Liu ◽  
Lingling Wang ◽  
...  

A soil erosion model is used to evaluate the conditions of soil erosion and guide agricultural production. Recently, high-spatial-resolution data have been collected in new ways, such as three-dimensional laser scanning, providing the foundation for refined soil erosion modelling. However, serial computing cannot fully meet the computational requirements of massive data sets. Therefore, it is necessary to perform soil erosion modelling under a parallel computing framework. This paper focuses on a parallel computing framework for soil erosion modelling based on the Hadoop platform. The framework includes three layers: the methodology, algorithm, and application layers. In the methodology layer, two parallel data-splitting strategies are defined: row-oriented and sub-basin-oriented. The algorithms for six parallel calculation operators for local, focal and zonal computing tasks are designed in detail. These operators can be called to calculate the model factors and perform the model calculations. We defined a GeoCSV key-value data structure for vector data and for row-based and cell-based rasters as the input to these algorithms. A geoprocessing toolbox is developed and integrated with the geographic information system (GIS) platform in the application layer. The performance of the framework is examined by taking the Gushanchuan basin as an example. The results show that the framework can perform calculations involving large data sets with high computational efficiency while remaining integrated with GIS. This approach is easy to extend and use and provides essential support for applying high-precision data to refine soil erosion modelling.
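The local and zonal operators over key-value raster records can be illustrated with a tiny pure-Python map/reduce sketch. The record layout and the per-cell factor below are hypothetical stand-ins, not the authors' actual GeoCSV schema or model factors.

```python
from functools import reduce

# Hypothetical sketch of "local" and "zonal" map-reduce operators over a
# row-based raster: each record is (row_index, list_of_cell_values).
# The toy per-cell factor (squaring) is illustrative only.

def map_local(record, func):
    """Local operator: apply a per-cell function independently to each cell."""
    row_id, cells = record
    return row_id, [func(v) for v in cells]

def reduce_sum(a, b):
    """Zonal-style reduction: accumulate a basin-wide total."""
    return a + b

# Toy input: two raster rows of cell values keyed by row index.
rows = [(0, [1.0, 2.0, 3.0]), (1, [4.0, 5.0, 6.0])]

# Local step: a made-up per-cell factor (square each value).
mapped = [map_local(r, lambda v: v * v) for r in rows]

# Zonal step: total over all cells of all rows.
total = reduce(reduce_sum, (v for _, cells in mapped for v in cells))
print(total)  # 1 + 4 + 9 + 16 + 25 + 36 = 91.0
```

In a real Hadoop job the local step would run in mappers over split raster rows (or sub-basins) and the zonal step in reducers, but the data flow is the same.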

Author(s):  
Hakan Ancin

This paper presents methods for performing detailed quantitative automated three-dimensional (3-D) analysis of cell populations in thick tissue sections while preserving the relative 3-D locations of cells. Specifically, the method disambiguates overlapping clusters of cells and accurately measures the volume, 3-D location, and shape parameters for each cell. Finally, the entire population of cells is analyzed to detect patterns and groupings with respect to various combinations of cell properties. All of the above is accomplished with zero subjective bias. In this method, a laser-scanning confocal light microscope (LSCM) is used to collect optical sections through the entire thickness (100–500 μm) of fluorescently labelled tissue slices. The acquired stack of optical slices is first subjected to axial deblurring using the expectation-maximization (EM) algorithm. The resulting isotropic 3-D image is segmented using a spatially adaptive Poisson-based image segmentation algorithm with region-dependent smoothing parameters. Extracting the voxels that were labelled as "foreground" into an active voxel data structure results in a large data reduction.
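The final data-reduction step can be sketched in a few lines of NumPy; the volume, threshold, and coordinate-list representation here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch (not the authors' code): once segmentation has
# labelled each voxel as foreground/background, keeping only the
# foreground voxels as an explicit (coordinate, value) list gives a
# large data reduction when labelled cells are sparse.

volume = np.zeros((20, 20, 20), dtype=np.float32)
# Plant a small 3x3x3 "cell" of bright voxels in an otherwise empty volume.
volume[5:8, 5:8, 5:8] = 1.0

mask = volume > 0.5                       # segmentation result (foreground)
coords = np.argwhere(mask)                # active-voxel coordinates
active = [(tuple(c), float(volume[tuple(c)])) for c in coords]

dense_voxels = volume.size                # voxels stored densely
print(dense_voxels, len(active))          # 8000 27
```

Here the active-voxel list holds 27 entries instead of 8000 dense voxels, which is the kind of reduction that makes subsequent cluster analysis tractable.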


1997 ◽  
Vol 3 (S2) ◽  
pp. 1131-1132
Author(s):  
Jansma P.L ◽  
M.A. Landis ◽  
L.C. Hansen ◽  
N.C. Merchant ◽  
N.J. Vickers ◽  
...  

We are using Data Explorer (DX), a general-purpose, interactive visualization program developed by IBM, to perform three-dimensional reconstructions of neural structures from microscopic or optical sections. We use the program on a Silicon Graphics workstation; it also can run on Sun, IBM RS/6000, and Hewlett-Packard workstations. DX comprises modular building blocks that the user assembles into data-flow networks for specific uses. Many modules come with the program, but others, written by users (including ourselves), are continually being added and are available at the DX ftp site, http://www.tc.cornell.edu/DX. Initially, our efforts were aimed at developing methods for isosurface and volume rendering of structures visible in three-dimensional stacks of optical sections of insect brains gathered on our Bio-Rad MRC-600 laser scanning confocal microscope. We also wanted to be able to merge two 3-D data sets (collected on two different photomultiplier channels) and to display them at various angles of view.


Some steps are taken towards a parametric statistical model for the velocity and velocity derivative fields in stationary turbulence, building on the background of existing theoretical and empirical knowledge of such fields. While the ultimate goal is a model for the three-dimensional velocity components, and hence for the corresponding velocity derivatives, we concentrate here on the streamwise velocity component. Discrete and continuous time stochastic processes of the first-order autoregressive type, with one-dimensional marginals having log-linear tails, are constructed and compared with two large data sets. It turns out that a first-order autoregression that fits the local correlation structure well is not capable of describing the correlations over longer ranges. A good fit locally as well as at longer ranges is achieved by using a process that is the sum of two independent autoregressions. We study this type of model in some detail. We also consider a model derived from the above-mentioned autoregressions and with dependence structure on the borderline to long-range dependence. This model is obtained by means of a general method for construction of processes with long-range dependence. Some suggestions for future empirical and theoretical work are given.
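The core idea, that a sum of two independent AR(1) processes keeps a good local fit while adding long-range correlation, can be checked with a quick simulation; the coefficients below are illustrative choices, not the values fitted to the paper's data sets.

```python
import numpy as np

# A fast AR(1) alone decorrelates quickly; adding an independent slow
# AR(1) preserves correlation at long lags, as the abstract describes.

def ar1(n, phi, rng):
    """Simulate a first-order autoregression x_t = phi * x_{t-1} + eps_t."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def acf(x, lag):
    """Sample autocorrelation at a given lag."""
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

rng = np.random.default_rng(42)
n = 200_000
fast = ar1(n, 0.7, rng)    # fits the local correlation structure
slow = ar1(n, 0.99, rng)   # carries the longer-range correlation
combined = fast + slow

# At lag 50 the fast AR(1) has essentially decorrelated (0.7**50 ~ 1e-8),
# while the combined process is still clearly correlated.
print(round(acf(fast, 50), 2), round(acf(combined, 50), 2))
```

The theoretical lag-50 correlation of the combined process is roughly 0.99**50 weighted by the slow component's share of the total variance, i.e. well above zero.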


2001 ◽  
Vol 34 (1) ◽  
pp. 76-79 ◽  
Author(s):  
Lynn Ribaud ◽  
Guang Wu ◽  
Yuegang Zhang ◽  
Philip Coppens

As the combination of high-intensity synchrotron sources and area detectors allows collection of large data sets in a much shorter time span than previously possible, the use of open helium gas-flow systems is much facilitated. A flow system installed at the SUNY X3 synchrotron beamline at the National Synchrotron Light Source has been used for collection of a number of large data sets at a temperature of ∼16 K. Instability problems encountered when using a helium cryostat for three-dimensional data collection are eliminated. Details of the equipment, its temperature calibration and a typical result are described.


2020 ◽  
pp. paper46-1-paper46-10
Author(s):  
Ilya Rylskiy

Over the past 25 years, laser scanning has evolved from an experimental method into a fully fledged family of Earth remote sensing methods. This group of methods now provides the most accurate and detailed spatial data sets, while the cost of data is constantly falling and the number of measuring instruments (laser scanners) is constantly growing. The volumes of data that will be obtained during surveys in the coming decades will allow the creation of the first sub-global coverage of the planet. However, the flip side of high accuracy and detail is the need to store extremely large volumes of three-dimensional data without loss of accuracy. At the same time, the ability to work with these data in both 2D and 3D mode should be improved. Standard storage methods (files, geodatabases, archiving, etc.) solve the problem only partially. There are, however, alternative methods that can remove current restrictions and lead to more flexible and functional spatial data infrastructures. Among the most flexible and promising approaches to laser data storage and processing are those based on quadtrees and octrees. These approaches are more complicated than the typical file-based data structures commonly used for LIDAR data storage, but they allow users to overcome several typical drawbacks of point data sets (processing speed, non-topological spatial structure, limited precision, etc.).
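A minimal octree for 3-D point storage can be sketched as follows. The node capacity, bounds, and toy point set are arbitrary assumptions; production LiDAR stores are far more elaborate, but the recursive subdivision is the same.

```python
# Minimal octree sketch for 3-D point storage (illustrative only).

class OctreeNode:
    CAPACITY = 4  # max points per leaf before splitting

    def __init__(self, center, half):
        self.center, self.half = center, half
        self.points = []
        self.children = None  # becomes a list of 8 children on split

    def _octant(self, p):
        """Index 0-7 from the point's position relative to the center."""
        cx, cy, cz = self.center
        return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

    def _split(self):
        h = self.half / 2
        cx, cy, cz = self.center
        self.children = [
            OctreeNode((cx + (h if i & 1 else -h),
                        cy + (h if i & 2 else -h),
                        cz + (h if i & 4 else -h)), h)
            for i in range(8)
        ]
        for p in self.points:           # redistribute stored points
            self.children[self._octant(p)].insert(p)
        self.points = []

    def insert(self, p):
        if self.children is not None:
            self.children[self._octant(p)].insert(p)
        elif len(self.points) < self.CAPACITY:
            self.points.append(p)
        else:
            self._split()
            self.insert(p)

    def count(self):
        if self.children is None:
            return len(self.points)
        return sum(c.count() for c in self.children)

root = OctreeNode(center=(0.0, 0.0, 0.0), half=100.0)
pts = [(x * 7.3 % 90 - 45, x * 3.1 % 80 - 40, x * 1.7 % 60 - 30)
       for x in range(100)]
for p in pts:
    root.insert(p)
print(root.count())  # 100
```

Spatial queries then visit only the nodes whose bounding cubes intersect the query region, which is where the processing-speed advantage over flat point files comes from.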


2021 ◽  
Author(s):  
Jakob J. Assmann ◽  
Jesper E. Moeslund ◽  
Urs A. Treier ◽  
Signe Normand

Abstract. Biodiversity studies could strongly benefit from three-dimensional data on ecosystem structure derived from contemporary remote sensing technologies, such as Light Detection and Ranging (LiDAR). Despite the increasing availability of such data at regional and national scales, the average ecologist has been limited in accessing them due to high requirements on computing power and remote-sensing knowledge. We processed Denmark's publicly available national Airborne Laser Scanning (ALS) data set acquired in 2014/15 together with the accompanying elevation model to compute 70 rasterized descriptors of interest for ecological studies. With a grain size of 10 m, these data products provide a snapshot of high-resolution measures including vegetation height, structure and density, as well as topographic descriptors including elevation, aspect, slope and wetness across more than forty thousand square kilometres covering almost all of Denmark's terrestrial surface. The resulting data set is comparatively small (~ 87 GB, compressed 16.4 GB) and the raster data can be readily integrated into analytical workflows in software familiar to many ecologists (GIS software, R, Python). Source code and documentation for the processing workflow are openly available via a code repository, allowing for transfer to other ALS data sets, as well as modification or re-calculation of future instances of Denmark’s national ALS data set. We hope that our high-resolution ecological vegetation and terrain descriptors (EcoDes-DK15) will serve as an inspiration for the publication of further such data sets covering other countries and regions and that our rasterized data set will provide a baseline of the ecosystem structure for current and future studies of biodiversity, within Denmark and beyond.
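As an example of how such rasterized descriptors are consumed, a simple terrain descriptor (slope) can be derived from an elevation grid with NumPy. The grid, the 10 m cell size, and the finite-difference formulation are illustrative and not taken from the EcoDes-DK15 processing chain.

```python
import numpy as np

# Sketch: slope in degrees from an elevation raster via central
# differences. Real descriptor products use more careful edge handling.

def slope_degrees(dem, cell_size=10.0):
    """Slope angle per cell from finite-difference gradients."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# A tilted plane rising 1 m per 10 m cell in x: slope = atan(0.1) ~ 5.71 deg.
x = np.arange(5) * 1.0
dem = np.tile(x, (5, 1))
s = slope_degrees(dem)
print(round(float(s[2, 2]), 2))  # 5.71
```

Because the published descriptors are plain rasters, the same kind of array-level analysis applies directly after loading them with standard GIS or Python tooling.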


PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e10545
Author(s):  
Matt A. White ◽  
Nicolás E. Campione

Classifying isolated vertebrate bones to a high level of taxonomic precision can be difficult. Many of Australia’s Cretaceous terrestrial vertebrate fossil-bearing deposits, for example, produce large numbers of isolated bones and very few associated or articulated skeletons. Identifying these often fragmentary remains beyond high-level taxonomic ranks, such as Ornithopoda or Theropoda, is difficult, and those classified to lower taxonomic levels are often debated. The ever-increasing accessibility of 3D-based comparative techniques has allowed palaeontologists to undertake a variety of shape analyses, such as geometric morphometrics, that, although powerful and often ideal, require the recognition of diagnostic landmarks and the generation of sufficiently large data sets to detect clusters and accurately describe major components of morphological variation. As a result, such approaches are often outside the scope of basic palaeontological research that aims to simply identify fragmentary specimens. Herein we present a workflow in which pairwise comparisons between fragmentary fossils and better known exemplars are digitally achieved through three-dimensional mapping of their surface profiles and the iterative closest point (ICP) algorithm. To showcase this methodology, we compared a fragmentary theropod ungual (NMV P186153) from Victoria, Australia, identified as a neovenatorid, with the manual unguals of the megaraptoran Australovenator wintonensis (AODF604). We discovered that NMV P186153 was a near identical match to AODF604 manual ungual II-3, differing only in size, which, given their 10–15 Ma age difference, suggests stasis in megaraptoran ungual morphology throughout this interval. Although useful, our approach is not free of subjectivity; care must be taken to eliminate the effects of broken and incomplete surfaces and to identify the human errors incurred during scaling, such as through replication.
Nevertheless, this approach will help to evaluate and identify fragmentary remains, adding a quantitative perspective to an otherwise qualitative endeavour.
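The nearest-neighbour-plus-rigid-fit loop at the heart of ICP can be sketched with NumPy and SciPy. The point clouds, perturbation, and iteration count below are synthetic assumptions, not the authors' scan data or software; the rigid transform is solved with the standard Kabsch/SVD method.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, ref_tree, ref):
    """One ICP iteration: nearest-neighbour matching, then best rigid fit."""
    _, idx = ref_tree.query(src)            # correspondences
    matched = ref[idx]
    mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_m)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return src @ R.T + t

rng = np.random.default_rng(1)
ref = rng.random((200, 3))                  # "exemplar" surface points
theta = 0.05                                # small rotation about z
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = ref @ Rz.T + np.array([0.02, -0.01, 0.01])  # misaligned "fragment"

tree = cKDTree(ref)
for _ in range(20):
    src = icp_step(src, tree, ref)
rmse = float(np.sqrt(((src - ref) ** 2).sum(axis=1).mean()))
print(rmse)
```

After convergence the residual between the aligned clouds quantifies shape similarity, which is the comparison the workflow relies on (scale must be handled separately, as the abstract cautions).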


2013 ◽  
Vol 6 (4) ◽  
pp. 1261-1273 ◽  
Author(s):  
T. Heus ◽  
A. Seifert

Abstract. This paper presents a method for feature tracking of fields of shallow cumulus convection in large eddy simulations (LES) by connecting the projected cloud cover in space and time, and by accounting for splitting and merging of cloud objects. Existing methods tend to be either imprecise or, when using the full three-dimensional (3-D) spatial field, prohibitively expensive for large data sets. Compared to those 3-D methods, the current method reduces the memory footprint by up to a factor 100, while retaining most of the precision by correcting for splitting and merging events between different clouds. The precision of the algorithm is further enhanced by taking the vertical extent of the cloud into account. Furthermore, rain and subcloud thermals are also tracked, and links between clouds, their rain, and their subcloud thermals are made. The method compares well with results from the literature. Resolution and domain dependencies are also discussed. For the current simulations, the cloud size distribution converges for clouds larger than an effective resolution of 6 times the horizontal grid spacing, and smaller than about 20% of the horizontal domain size.
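The overlap-linking idea, connecting projected cloud cover between consecutive time steps, can be sketched with SciPy's connected-component labelling. The toy masks below are assumptions, and the paper's merge/split bookkeeping and vertical-extent correction are not reproduced.

```python
import numpy as np
from scipy import ndimage

# Toy sketch: label clouds in each 2-D projected cloud mask, then link
# IDs across time steps wherever labelled pixels overlap.

def label_clouds(mask):
    labels, n = ndimage.label(mask)
    return labels, n

def link_overlaps(labels_t0, labels_t1):
    """Pairs (id_t0, id_t1) of clouds sharing at least one pixel."""
    links = set()
    overlap = (labels_t0 > 0) & (labels_t1 > 0)
    for a, b in zip(labels_t0[overlap], labels_t1[overlap]):
        links.add((int(a), int(b)))
    return links

t0 = np.zeros((8, 8), dtype=bool)
t1 = np.zeros((8, 8), dtype=bool)
t0[1:3, 1:3] = True          # cloud A at time 0
t0[5:7, 5:7] = True          # cloud B at time 0
t1[2:4, 1:3] = True          # cloud A drifted one row
t1[5:7, 5:7] = True          # cloud B stationary

l0, n0 = label_clouds(t0)
l1, n1 = label_clouds(t1)
print(n0, n1, sorted(link_overlaps(l0, l1)))
```

Splitting and merging show up naturally in this representation as one time-step ID linking to several IDs in the other step, which is the case the paper's correction handles.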


Geophysics ◽  
1990 ◽  
Vol 55 (10) ◽  
pp. 1321-1326 ◽  
Author(s):  
X. Wang ◽  
R. O. Hansen

Two-dimensional (profile) inversion techniques for magnetic anomalies are widely used in exploration geophysics; but, until now, the three-dimensional (3-D) methods available have been restricted in their geologic applicability, dependent upon good initial values, or limited by the capabilities of existing computers. We have developed a fully 3-D inversion algorithm intended for routine application to large data sets. The algorithm, based on a Fourier transform expression for the magnetic field of homogeneous polyhedral bodies (Hansen and Wang, 1988), is a 3-D generalization of CompuDepth (O’Brien, 1972). Like CompuDepth, the new inversion algorithm employs the spatial equivalent of frequency-domain autoregression to determine a series of coefficients from which the depths and locations of polyhedral vertices are calculated by solving complex polynomials. These vertices are used to build a 3-D geologic model. Application to the Medicine Lake Volcano aeromagnetic anomaly resulted in a geologically reasonable model of the source.
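The final step, recovering source parameters by solving a complex polynomial built from autoregression coefficients, can be illustrated schematically with NumPy's root finder. The root encoding below (depth and position packed into the complex exponent) is an invented toy, not the actual CompuDepth formulation.

```python
import numpy as np

# Schematic only: suppose each source contributes a complex polynomial
# root r = exp(-(depth + 1j * position)). Given the polynomial
# coefficients (which the autoregression would supply in the real
# method), the sources are recovered from the roots.

true = [0.5 + 1.0j, 1.2 - 0.3j]          # hypothetical (depth + i*position)
roots = [np.exp(-z) for z in true]       # encoded as polynomial roots
coeffs = np.poly(roots)                  # polynomial with those roots

recovered = sorted((-np.log(r) for r in np.roots(coeffs)),
                   key=lambda z: z.real)
print([complex(round(z.real, 3), round(z.imag, 3)) for z in recovered])
```

The real algorithm determines the coefficients from the observed field rather than from known roots, but the root-solving stage has this structure.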

