Three-Dimensional Induced Polarization Parallel Inversion Using Nonlinear Conjugate Gradients Method

2015 ◽  
Vol 2015 ◽  
pp. 1-12 ◽  
Author(s):  
Huan Ma ◽  
Handong Tan ◽  
Yue Guo

Four kinds of induced polarization (IP) arrays (surface, borehole-surface, surface-borehole, and borehole-borehole) are widely used in resource exploration. However, because of the large number of sources involved, the inversion takes a long time to complete. In this paper, a new parallel algorithm is described which uses the message passing interface (MPI) and graphics processing units (GPUs) to accelerate the 3D inversion of these four methods. The forward finite-difference equations are solved with an ILU0 preconditioner and a conjugate gradient (CG) solver. The inverse problem is solved by nonlinear conjugate gradient (NLCG) iteration, each step of which performs one forward and two “pseudo-forward” modelings and then updates the search direction, step length, and model in turn. Because each source is independent in the forward and “pseudo-forward” modelings, multiple processes are launched through the MPI library, and the iterative matrix solver from CULA is called within each process. Tables and synthetic data examples illustrate that this parallel inversion algorithm is effective. Furthermore, we demonstrate that the joint inversion of surface and borehole data produces resistivity and chargeability results superior to those obtained from inversion of surface data alone.
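The NLCG iteration the abstract describes can be sketched on a toy quadratic objective. This is a minimal illustration, not the paper's implementation: the objective, the Polak-Ribière update, and the backtracking line search are all stand-ins for the actual misfit functional and gradient computations (which require the forward and "pseudo-forward" modelings).

```python
import numpy as np

def nlcg(f, grad, m0, n_iter=500, tol=1e-8):
    """Nonlinear conjugate gradients with a Polak-Ribiere update and a
    simple backtracking line search. Each iteration costs one gradient
    evaluation (in the paper's setting, forward plus "pseudo-forward"
    modelings) and a handful of objective evaluations."""
    m = m0.astype(float).copy()
    g = grad(m)
    d = -g                              # first direction: steepest descent
    for _ in range(n_iter):
        step, fm = 1.0, f(m)
        while f(m + step * d) > fm and step > 1e-12:
            step *= 0.5                 # backtrack until the objective decreases
        m = m + step * d
        g_new = grad(m)
        if np.linalg.norm(g_new) < tol:
            break
        # PR+ conjugacy factor; a negative value restarts with steepest descent
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        g = g_new
    return m

# Toy quadratic stand-in for the misfit: minimum at A^{-1} b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
m_est = nlcg(lambda m: 0.5 * m @ A @ m - b @ m,
             lambda m: A @ m - b, np.zeros(2))
```

In the paper's setting the gradient evaluations dominate the cost, which is why distributing the per-source modelings over MPI processes pays off.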

Geophysics ◽  
2011 ◽  
Vol 76 (4) ◽  
pp. F239-F250 ◽  
Author(s):  
Fernando A. Monteiro Santos ◽  
Hesham M. El-Kaliouby

Joint or sequential inversion of direct current resistivity (DCR) and time-domain electromagnetic (TDEM) data is commonly performed for individual soundings assuming layered earth models. DCR and TDEM have different and complementary sensitivities to resistive and conductive structures, making them well suited to joint inversion, and several authors have used joint inversion of the two methods to reduce the ambiguities of the models calculated from each method separately. A new approach for the joint inversion of these data sets, based on a laterally constrained algorithm, is presented. The method was developed for the interpretation of soundings collected along a line over a 1D or 2D geology. The inversion algorithm was tested on two synthetic data sets, as well as on field data from Saudi Arabia. The results show that the algorithm is efficient and stable in producing quasi-2D models from DCR and TDEM data acquired in relatively complex environments.
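The laterally constrained idea can be sketched for a linear toy problem: one layered model per sounding, with a penalty tying neighbouring models together so the line of 1D inversions behaves quasi-2D. The operators and the coupling form here are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def lci_solve(G, d, lam):
    """Laterally constrained least squares.

    Minimizes  sum_i ||G m_i - d_i||^2 + lam * sum_i ||m_i - m_{i+1}||^2,
    where G is a (n_data, n_layers) forward operator shared by all
    soundings and d is (n_soundings, n_data), one row per sounding.
    """
    ns, nl = d.shape[0], G.shape[1]
    A = np.kron(np.eye(ns), G)                        # block-diagonal forward op
    diff = np.eye(ns - 1, ns) - np.eye(ns - 1, ns, k=1)
    D = np.kron(diff, np.eye(nl))                     # couples adjacent soundings
    lhs = A.T @ A + lam * D.T @ D
    m = np.linalg.solve(lhs, A.T @ d.ravel())
    return m.reshape(ns, nl)

# Two soundings with an identity "forward operator": very strong lateral
# coupling drives both recovered models toward the average of the data.
G = np.eye(2)
d = np.array([[0.0, 0.0], [2.0, 2.0]])
m_strong = lci_solve(G, d, lam=1e6)
```

A moderate `lam` lets the models vary smoothly along the line; the strong-coupling limit shown in the demo collapses them to a common model.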


Author(s):  
Alan Gray ◽  
Kevin Stratford

Leading high performance computing systems achieve their status through the use of highly parallel devices such as NVIDIA graphics processing units or Intel Xeon Phi many-core processors. The concept of performance portability across such architectures, as well as traditional CPUs, is vital for the application programmer. In this paper we describe targetDP, a lightweight abstraction layer which allows grid-based applications to target data-parallel hardware in a platform-agnostic manner. We demonstrate the effectiveness of our pragmatic approach by presenting performance results for a complex fluid application (with which the model was co-designed), plus a separate lattice quantum chromodynamics particle physics code. For each application, a single source code base is seen to achieve portable performance, as assessed within the context of the Roofline model. TargetDP can be combined with the Message Passing Interface (MPI) to allow use on systems containing multiple nodes: we demonstrate this through scaling results on traditional and GPU-accelerated large-scale supercomputers.
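The Roofline model used for the assessment is simple enough to state in a few lines: attainable performance is the minimum of peak compute throughput and the product of memory bandwidth and arithmetic intensity. The machine numbers below are made up for illustration.

```python
def roofline(peak_gflops, peak_bw_gb_s, intensity_flops_per_byte):
    """Attainable performance under the Roofline model: kernels with
    arithmetic intensity above the ridge point are compute-bound;
    below it they are memory-bandwidth-bound."""
    return min(peak_gflops, peak_bw_gb_s * intensity_flops_per_byte)

# Illustrative machine: 5000 GFLOP/s peak, 500 GB/s memory bandwidth.
# A grid-based stencil update at ~0.5 flops/byte is bandwidth-bound:
stencil_perf = roofline(5000.0, 500.0, 0.5)
# A dense, cache-friendly kernel at ~100 flops/byte hits the compute roof:
dense_perf = roofline(5000.0, 500.0, 100.0)
```

Plotting measured kernel performance against this roof is how one judges whether a "portable" code is actually near the hardware limit on each platform.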


Author(s):  
Roberto Porcù ◽  
Edie Miglio ◽  
Nicola Parolini ◽  
Mattia Penati ◽  
Noemi Vergopolan

Helicopters can experience brownout when flying close to a dusty surface. The dust lifted into the air can severely restrict the pilot’s visibility; consequently, a brownout can disorient the pilot and lead to the helicopter colliding with the ground. Given these risks, brownout has become a high-priority problem for civil and military operations. Proper helicopter design is thus critical, as it strongly influences the shape and density of the dust cloud that forms when brownout occurs. A way forward in improving aircraft design against brownout is the use of particle simulations. For simulations to be accurate and comparable to the real phenomenon, billions of particles are required. However, with such large numbers of particles, serial simulations are slow and too computationally expensive to perform. In this work, we investigate a message passing interface (MPI) + multi-GPU (graphics processing unit) approach to simulate brownout. Specifically, we use a semi-implicit Euler method to treat the particle dynamics in a Lagrangian way, and we adopt a precomputed aerodynamic field. We do not include particle–particle collisions in the model; this allows for independent trajectories and effective parallelization of the model. To support our methodology, we provide a speedup analysis of the parallelization relative to serial and pure-MPI simulations. The results show (i) very high speedups of the MPI + multi-GPU implementation with respect to the serial and pure-MPI ones, (ii) excellent weak and strong scalability of the implemented time-integration algorithm, and (iii) the possibility of running realistic brownout simulations with billions of particles at relatively small computational cost. This work paves the way toward more realistic brownout simulations and highlights the potential of high-performance computing for aiding and advancing aircraft design for brownout mitigation.
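A semi-implicit Euler step for independent particles in a precomputed flow field can be sketched as follows. The linear drag law, the `wind` field, and the constants are illustrative assumptions standing in for the paper's aerodynamic model; the key structural points are that velocity is updated before position and that particles never interact, so the loop parallelizes trivially.

```python
import numpy as np

def advance(x, v, dt, wind, drag=1.0, grav=9.81):
    """One semi-implicit Euler step for n independent particles.

    x, v: (n, 3) positions and velocities. wind(x) samples a
    precomputed aerodynamic velocity field at the particle positions
    (a hypothetical stand-in for the paper's aerodynamic data)."""
    a = drag * (wind(x) - v)        # linear drag toward the local air velocity
    a[:, 2] -= grav                 # gravity acts on the vertical component
    v = v + dt * a                  # semi-implicit: update velocity first...
    x = x + dt * v                  # ...then position with the NEW velocity
    return x, v

# One particle released at rest in still air: it falls and the drag
# brings it toward terminal velocity v_z = -grav/drag.
x = np.zeros((1, 3))
v = np.zeros((1, 3))
for _ in range(10000):              # 10 s of simulated time at dt = 1 ms
    x, v = advance(x, v, 1e-3, wind=lambda p: np.zeros_like(p))
```

Because each trajectory depends only on its own state and the shared field, the particle array can be split across MPI ranks and GPU threads with no communication during the time loop.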


Geophysics ◽  
2000 ◽  
Vol 65 (2) ◽  
pp. 540-552 ◽  
Author(s):  
Yaoguo Li ◽  
Douglas W. Oldenburg

The inversion of magnetic data is inherently nonunique with respect to the distance between the source and observation locations. This manifests itself as an ambiguity in the source depth when surface data are inverted and as an ambiguity in the distance between the source and boreholes if borehole data are inverted. Joint inversion of surface and borehole data can help to reduce this nonuniqueness. To achieve this, we develop an algorithm for inverting data sets that have arbitrary observation locations in boreholes and above the surface. The algorithm depends upon weighting functions that counteract the geometric decay of magnetic kernels with distance from the observer. We apply these weighting functions to the inversion of three‐component magnetic data collected in boreholes and then to the joint inversion of surface and borehole data. Both synthetic and field data sets are used to illustrate the new inversion algorithm. When borehole data are inverted directly, three‐component data are far more useful in constructing good susceptibility models than are single‐component data. However, either can be used effectively in a joint inversion with surface data to produce models that are superior to those obtained by inversion of surface data alone.


Geophysics ◽  
2000 ◽  
Vol 65 (2) ◽  
pp. 492-501 ◽  
Author(s):  
Zhiyi Zhang ◽  
Partha S. Routh ◽  
Douglas W. Oldenburg ◽  
David L. Alumbaugh ◽  
Gregory A. Newman

Inversions of electromagnetic data from different coil configurations provide independent information about geological structures. We develop a 1-D inversion algorithm that can invert data from the horizontal coplanar (HC), vertical coplanar, coaxial (CA), and perpendicular coil configurations separately or jointly. The inverse problem is solved by minimizing a model objective function subject to data constraints. Tests using synthetic data from 1-D models indicate that if data are collected at a sufficient number of frequencies, the models recovered from individual inversions of different coil systems can be quite similar. However, if only a limited number of frequencies are available, joint inversion of data from different coils produces a better model than the individual inversions. Tests on 3-D synthetic data sets indicate that 1-D inversions can be used as a fast, approximate tool to locate anomalies in the subsurface. Also, for the test example presented here, the joint inversion of HC and CA data over a 3-D conductivity structure provided a better model than that produced by individual inversions of the data sets.
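"Minimizing a model objective function subject to data constraints" has a standard linear-algebra skeleton worth making concrete. The sketch below uses the simplest (smallest-deviation-from-reference) measure on a linear toy problem; real EM codes are nonlinear and add smoothness terms and data weighting, so treat this as a schematic only.

```python
import numpy as np

def tikhonov(G, d, mref, beta):
    """Minimize ||G m - d||^2 + beta * ||m - mref||^2 for a linear
    forward operator G. beta trades data fit against model structure;
    in practice it is tuned so the data misfit hits its target."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + beta * np.eye(n),
                           G.T @ d + beta * mref)

# Toy problem standing in for a multi-frequency sounding.
G = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
m_true = np.array([0.3, 0.7])
d = G @ m_true
m_small_beta = tikhonov(G, d, np.zeros(2), 1e-8)   # data-dominated solution
m_large_beta = tikhonov(G, d, np.zeros(2), 1e8)    # pulled to the reference
```

Joint inversion of several coil configurations amounts to stacking their forward operators and data into one `G` and `d`, which is why extra configurations help most when each alone is underdetermined (few frequencies).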


Geophysics ◽  
2000 ◽  
Vol 65 (6) ◽  
pp. 1931-1945 ◽  
Author(s):  
Yaoguo Li ◽  
Douglas W. Oldenburg

We present an algorithm for inverting induced polarization (IP) data acquired in a 3-D environment. The algorithm is based upon the linearized equation for the IP response, and the inverse problem is solved by minimizing an objective function of the chargeability model subject to data and bound constraints. The minimization is carried out using an interior‐point method in which the bounds are incorporated by using a logarithmic barrier and the solution of the linear equations is accelerated using wavelet transforms. Inversion of IP data requires knowledge of the background conductivity. We study the effect of different approximations to the background conductivity by comparing IP inversions performed using different conductivity models, including a uniform half‐space and conductivities recovered from one‐pass 3-D inversions, composite 2-D inversions, limited AIM updates, and full 3-D nonlinear inversions of the dc resistivity data. We demonstrate that, when the background conductivity is simple, reasonable IP results are obtainable without using the best conductivity estimate derived from full 3-D inversion of the dc resistivity data. As a final area of investigation, we study the joint use of surface and borehole data to improve the resolution of the recovered chargeability models. We demonstrate that the joint inversion of surface and crosshole data produces chargeability models superior to those obtained from inversions of individual data sets.
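The logarithmic-barrier treatment of bound constraints can be sketched in a few lines. This toy uses damped gradient descent where the paper uses a Newton-type interior-point solver with wavelet acceleration, and the objective is a made-up scalar function, so it only illustrates the barrier mechanism: the iterates stay strictly inside the chargeability bounds while the barrier parameter is driven to zero.

```python
import numpy as np

def barrier_minimize(phi, grad_phi, m0, n_outer=25, mu0=1.0):
    """Interior-point sketch for bounds 0 < m < 1: add a logarithmic
    barrier -mu * sum(log m + log(1 - m)) to the objective phi and
    shrink mu geometrically."""
    def total(m, mu):
        return phi(m) - mu * np.sum(np.log(m) + np.log(1.0 - m))
    m, mu = m0.astype(float).copy(), mu0
    for _ in range(n_outer):
        for _ in range(200):
            g = grad_phi(m) - mu / m + mu / (1.0 - m)
            step = 1.0
            m_new = m - step * g
            # backtrack until strictly feasible and decreasing
            while (np.any(m_new <= 0.0) or np.any(m_new >= 1.0)
                   or total(m_new, mu) > total(m, mu)):
                step *= 0.5
                m_new = m - step * g
                if step < 1e-16:
                    m_new = m
                    break
            m = m_new
        mu *= 0.5                      # tighten the barrier
    return m

# A chargeability whose unconstrained minimum (1.5) violates the upper
# bound: the barrier solution lands just inside m = 1.
m = barrier_minimize(lambda m: np.sum((m - 1.5) ** 2),
                     lambda m: 2.0 * (m - 1.5),
                     np.array([0.5]))
```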


2021 ◽  
Vol 8 (2) ◽  
pp. 169-180
Author(s):  
Mark Lin ◽  
Periklis Papadopoulos

Computational methods such as Computational Fluid Dynamics (CFD) traditionally yield a single output: a single number, much like the result of a theoretical hand calculation. However, this paper shows that computational methods have inherent uncertainty which can also be reported statistically. In numerical computation, because many factors affect the data collected, the data can be quoted as a mean value with standard deviations (error bars) to make comparisons meaningful. In cases where the difference between two data sets is obscured by uncertainty, the two data sets are said to be indistinguishable. A sample CFD problem pertaining to external aerodynamics was copied and run on 29 identical computers in a university computer lab. The expectation is that all 29 runs should return exactly the same result; in a few cases, however, the results differ. This is attributed to the parallelization scheme, which partitions the mesh to run in parallel on multiple cores of each computer. The distribution of the computational load is hardware-driven, depending on the resources available on each computer at the time. Details such as load balancing among multiple Central Processing Unit (CPU) cores using the Message Passing Interface (MPI) are transparent to the user. Software algorithms such as METIS or JOSTLE automatically divide the load between processors. As a result, the user has no control over the outcome of the CFD calculation even when the same problem is computed. Because of this, numerical uncertainty arises from parallel (multicore) computing. One way to resolve this issue is to compute problems on a single core, without mesh repartitioning. However, as this paper demonstrates, even this is not straightforward. Keywords: numerical uncertainty, parallelization, load-balancing, automotive aerodynamics
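Reporting repeated runs as mean ± standard deviation, and calling two result sets indistinguishable when their error bars overlap, can be written out directly. The overlap criterion and the sample drag-coefficient values below are illustrative assumptions, simplified from the statistical treatment the abstract describes.

```python
import statistics

def summarize(runs):
    """Mean and sample standard deviation of repeated runs of the
    same CFD problem: the error bar to quote alongside the mean."""
    return statistics.fmean(runs), statistics.stdev(runs)

def indistinguishable(a, b):
    """Treat two result sets as indistinguishable when the difference
    of their means is within the sum of their one-sigma error bars
    (a simplified overlap criterion)."""
    mean_a, sd_a = summarize(a)
    mean_b, sd_b = summarize(b)
    return abs(mean_a - mean_b) <= sd_a + sd_b

# Hypothetical drag-coefficient values from repeated identical runs:
cd_single_core = [0.3101, 0.3101, 0.3101]    # serial: bit-reproducible
cd_parallel = [0.3099, 0.3104, 0.3102]       # parallel: partition-dependent
same = indistinguishable(cd_single_core, cd_parallel)
```

With this framing, run-to-run scatter from nondeterministic mesh partitioning becomes an error bar rather than a contradiction between "identical" runs.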


2021 ◽  
Vol 26 (1) ◽  
pp. 35-48
Author(s):  
Arseny Shlykov ◽  
Alexander Saraev ◽  
Sudha Agrahari ◽  
Bülent Tezkan ◽  
Akarsh Singh

In this paper, we discuss several approaches for the joint inversion of controlled-source radiomagnetotelluric (CSRMT) and electrical resistivity tomography (ERT) data observed over anisotropic media. We compare results of a 2D isotropic joint inversion with results of a newly developed 1D anisotropic joint inversion algorithm. The developed algorithm uses full controlled-source high-frequency forward and inverse formulations without the plane-wave assumption. We demonstrate that, for measurements over an anisotropic subsurface, the isotropic joint inversion cannot fit both datasets properly because of the high anisotropy of the shallow horizons of Quaternary sands and loams. The joint anisotropic inversion solves this problem and highlights the advantages of jointly inverting CSRMT and ERT data. We also demonstrate the application of a laterally constrained algorithm for the anisotropic inversion. Results of the joint 1D anisotropic inversion of CSRMT and ERT data compared well with existing borehole data.
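The mechanical core of any joint inversion is a combined misfit in which each dataset is normalized by its own errors, so the abstract's statement that the isotropic model "cannot fit both datasets" has a concrete meaning: no model drives both normalized misfits down together. The form below is a generic sketch, not the authors' exact functional.

```python
import numpy as np

def joint_misfit(resp_a, d_a, err_a, resp_b, d_b, err_b):
    """Error-normalized chi-squared misfit for a joint inversion of two
    data sets (e.g. CSRMT and ERT responses of one common model). The
    per-dataset mean stops the larger set from dominating the total."""
    chi_a = np.mean(((resp_a - d_a) / err_a) ** 2)
    chi_b = np.mean(((resp_b - d_b) / err_b) ** 2)
    return 0.5 * (chi_a + chi_b)

# A model fitting both data sets to within one standard error scores ~1;
# an anisotropy-blind model stuck fitting only one set scores higher.
fit = joint_misfit(np.ones(4), np.zeros(4), np.ones(4),
                   np.ones(6), np.zeros(6), np.ones(6))
```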


Geophysics ◽  
2010 ◽  
Vol 75 (3) ◽  
pp. C25-C35 ◽  
Author(s):  
Ulrich Theune ◽  
Ingrid Østgård Jensås ◽  
Jo Eidsvik

Resolving thinner layers and better focusing layer boundaries in inverted seismic sections are important challenges in exploration and production seismology for identifying potential drilling targets. Many seismic inversion methods are based on a least-squares optimization approach that can intrinsically lead to unfocused transitions between adjacent layers. A Bayesian seismic amplitude variation with angle (AVA) inversion algorithm forms sharper boundaries between layers when enforcing sparseness in the vertical gradients of the inversion results. The underlying principle is similar to high-resolution processing algorithms and has been adapted from digital image-sharpening algorithms. We have investigated the Cauchy and Laplace statistical distributions for their potential to improve contrasts between layers. An inversion algorithm is derived statistically from Bayes’ theorem and results in a nonlinear problem that requires an iterative solution approach. Bayesian inversions require knowledge of certain statistical properties of the model we want to invert for. The blocky inversion method requires one parameter in addition to the usual properties of a multivariate covariance matrix, which we can estimate from borehole data. Tests on synthetic and field data show that the blocky inversion algorithm can detect and enhance layer boundaries in seismic inversions by effectively suppressing side lobes. The analysis of the synthetic data suggests that the Laplace constraint performs more reliably, whereas the Cauchy constraint may fail to find the optimum solution by converging to a local minimum of the cost function, thereby introducing numerical artifacts.
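Why sparseness-promoting priors on vertical gradients sharpen boundaries can be shown with the penalty functions themselves. The comparison below is a schematic (the scale parameter is arbitrary, and the actual inversion applies these as negative log-priors inside Bayes' theorem): a quadratic (Gaussian) penalty prefers smearing one jump into many small ones, the Laplace penalty is indifferent, and the Cauchy penalty actively prefers the single sharp jump, at the cost of non-convexity.

```python
import numpy as np

def quadratic_penalty(dm, scale):
    """Gaussian prior on gradients: quadratic growth punishes large
    jumps hard, so boundaries get smeared."""
    return np.sum((dm / scale) ** 2)

def laplace_penalty(dm, scale):
    """Laplace (L1) prior: linear growth keeps large jumps affordable,
    favouring blocky solutions, and stays convex."""
    return np.sum(np.abs(dm) / scale)

def cauchy_penalty(dm, scale):
    """Cauchy prior: logarithmic growth penalizes large jumps even
    less, but the objective becomes non-convex (hence the local-minimum
    behaviour the abstract reports)."""
    return np.sum(np.log(1.0 + (dm / scale) ** 2))

dm_block = np.array([1.0, 0.0, 0.0, 0.0])   # one sharp layer boundary
dm_smooth = np.array([0.25] * 4)            # same total change, smeared out
s = 0.1                                     # arbitrary gradient scale
q_block, q_smooth = quadratic_penalty(dm_block, s), quadratic_penalty(dm_smooth, s)
l_block, l_smooth = laplace_penalty(dm_block, s), laplace_penalty(dm_smooth, s)
c_block, c_smooth = cauchy_penalty(dm_block, s), cauchy_penalty(dm_smooth, s)
```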


Geophysics ◽  
2013 ◽  
Vol 78 (2) ◽  
pp. G25-G39 ◽  
Author(s):  
Craig R. W. Mosher ◽  
Colin G. Farquharson

A borehole gravimeter sized for the hole diameters typically used in mineral exploration has recently been developed. It is therefore appropriate to investigate how data from such instruments can contribute to the gravity interpretation procedures used in mineral exploration. Here, results are presented from a study in which synthetic data for 3D exploration-relevant earth models were inverted and the impact of borehole data assessed. The inversions were carried out using a minimum-structure procedure typical of those commonly used to invert surface gravity data. Examples involving data from a single borehole, from multiple boreholes, and from combinations of borehole and surface data are considered. A range of options for the particulars of the inversion algorithm is also investigated, including the use of a reference model and cell weights to incorporate along-borehole density information, and an [Formula: see text]-type measure of model structure. The selection of examples presented demonstrates what one can and cannot expect to determine about the density variation around and between boreholes when borehole gravity data are inverted using a minimum-structure approach. Specifically, the density variation along a borehole can be accurately determined, even without constraints in the inversion, but this capability decreases dramatically a few tens of meters from a borehole.
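A minimum-structure model measure with a reference model and cell weights can be written down explicitly. This sketch uses a quadratic structure term for simplicity (the abstract's measure of model structure may differ, e.g. an L1-type norm); the weights `w` are where along-borehole density information would enter.

```python
import numpy as np

def model_norm(m, mref, w, lam):
    """Minimum-structure measure for a 1D string of cells:
    phi_m = ||w * (m - mref)||^2 + lam * ||D m||^2,
    where D is a first-difference operator and w are cell weights
    (e.g. tight near a borehole where density is logged, loose away
    from it). lam trades closeness-to-reference against smoothness."""
    D = np.diff(np.eye(m.size), axis=0)   # rows compute m[i+1] - m[i]
    return np.sum((w * (m - mref)) ** 2) + lam * np.sum((D @ m) ** 2)
```

Minimizing this measure subject to fitting the gravity data is what "minimum-structure inversion" means operationally: among all models that fit, pick the one with the smallest `phi_m`.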

