Automatic inversion of magnetic anomalies from two height levels using finite-difference similarity transforms

Geophysics ◽  
2006 ◽  
Vol 71 (6) ◽  
pp. L75-L86 ◽  
Author(s):  
Petar Stavrev ◽  
Daniela Gerovska ◽  
Marcos J. Araúzo-Bravo

We solve the inverse magnetic problem for the depth and shape of simple sources in the presence of a regional field and truly random noise. We do not use noise-generating derivatives, nor are we forced to solve complex systems of equations. Our inverse operator applies a new geometric type of field transform, the finite-difference similarity transform (FDST), that is based on a postulated degree of homogeneity in the potential field. Magnetic data from two height levels are required for the calculation of the FDSTs. The FDSTs are generated for an assumed central point of similarity (CPS) and a trial value (index) for the coefficient of similarity, and they are sensitive to the distance between the source and the CPS and to the agreement between the index and the degree of homogeneity in the data. When the CPS converges to a singular point in the potential field, say, the center or the top edge of the source, and when the trial index converges on the degree of homogeneity present in the data, the FDST drops in amplitude and its plot approaches a straight line, thereby signaling an interpretation for the source position and type. All inverse operations are fully automated and applicable to the interpretation of large data sets. The necessary data for the second level can be obtained by actual measurement or, alternatively, by deriving them from the data at the first level by upward analytical continuation. Upward continuation suppresses high-wavenumber random noise and thus contributes to a stable inversion. Model tests show that a suitable height for the second level is less than the expected depth of the source below the first level, while a suitable window length is about twice that depth. Examples show that the proposed inversion is effective on both model and field data. Note that this approach can be extended to the inversion of any component or derivative of the 2D or 3D magnetic or gravity fields from simple sources.
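The upward continuation used to synthesize the second-level data has a standard wavenumber-domain form: each Fourier coefficient of the field is multiplied by exp(−|k|h). A minimal 1D sketch of that step (the function name and grid are illustrative, not from the paper):

```python
import numpy as np

def upward_continue(field, dx, h):
    """Upward-continue a 1D potential-field profile by height h.

    In the wavenumber domain, continuation to a higher level multiplies
    each Fourier coefficient by exp(-|k| h), which attenuates
    high-wavenumber random noise, as the abstract notes.
    """
    k = 2.0 * np.pi * np.fft.fftfreq(field.size, d=dx)  # angular wavenumbers
    spectrum = np.fft.fft(field)
    return np.real(np.fft.ifft(spectrum * np.exp(-np.abs(k) * h)))
```

A component of wavelength 10 units continued upward by 5 units is damped by roughly exp(−π) ≈ 0.04, which is why deriving the second level this way contributes to a stable inversion.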

Geophysics ◽  
2010 ◽  
Vol 75 (4) ◽  
pp. L79-L90 ◽  
Author(s):  
Daniela Gerovska ◽  
Marcos J. Araúzo-Bravo ◽  
Kathryn Whaler ◽  
Petar Stavrev ◽  
Alan Reid

We present an automatic procedure for the interpretation of magnetic or gravity gridded anomalies based on the finite-difference similarity transform (FDST). It is called MaGSoundFDST (magnetic and gravity sounding based on the finite-difference similarity transform) and uses a “focusing” principle, in contrast to deriving multiple clusters of many solutions as in the widely used Euler deconvolution method. The source parameters are characterized by isolated solutions, and the interpreter obtains parallel images showing the horizontal position, depth, and structural index (N) value. The underlying principle is that the FDST of a potential-field anomaly becomes zero or linear at all observation points when the central point of similarity (CPS) of the transform coincides with a singular point of the source field and a correct N value is used. The procedure involves calculating a 3D function that evaluates the linearity of the FDST for a series of N values, using a moving window and sounding the subsurface along a vertical line under each window center. We then combine the 3D results for different N values into a single map whose minima determine the horizontal positions of the sources. The N value and the CPS depth associated with each minimum determine the structural index and depth of the corresponding source. Only one estimate characterizes a simple source, which is a major advantage over other window-based procedures. MaGSoundFDST uses only the measured anomalous field and its upward continuation, thus avoiding the direct use of field derivatives. It is independent of the magnetization-vector direction in the magnetic data case. The procedure accounts for a linear background of local gravity or magnetic anomalies and has been applied effectively to several cases of synthetic and real data.
MaGSoundFDST shares common features with the magnetic and gravity sounding based on the differential similarity transform (MaGSoundDST) but is more stable in estimating depth and structural index in the presence of random noise.
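The linearity evaluation at the heart of the procedure can be illustrated with a simple criterion: fit a straight line to the windowed transform values and measure the RMS residual, which drops toward zero when the CPS and structural index are correct. This is a generic sketch of such a criterion, not the authors' actual implementation; the function name is invented:

```python
import numpy as np

def linearity_misfit(values, x):
    """RMS deviation of `values` from the best-fitting straight line.

    A function of this kind, evaluated over trial CPS depths and
    structural-index values, is minimized in a focusing scheme: the
    windowed transform approaches a straight line when the CPS hits a
    singular point of the source field and the correct index is used.
    """
    A = np.vstack([x, np.ones_like(x)]).T     # design matrix: slope, intercept
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    resid = values - A @ coeffs
    return np.sqrt(np.mean(resid ** 2))
```

Scanning this misfit over a grid of trial depths and index values, window by window, and mapping its minima is the "sounding" idea the abstract describes.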


Geophysics ◽  
2020 ◽  
pp. 1-41 ◽  
Author(s):  
Jens Tronicke ◽  
Niklas Allroggen ◽  
Felix Biermann ◽  
Florian Fanselow ◽  
Julien Guillemoteau ◽  
...  

In near-surface geophysics, ground-based mapping surveys are routinely employed in a variety of applications, including archaeology, civil engineering, hydrology, and soil science. The resulting geophysical anomaly maps of, for example, magnetic or electrical parameters are usually interpreted to laterally delineate subsurface structures such as those related to the remains of past human activities, subsurface utilities and other installations, hydrological properties, or different soil types. To ease the interpretation of such data sets, we propose a multi-scale processing, analysis, and visualization strategy. Our approach relies on a discrete redundant wavelet transform (RWT) implemented using cubic-spline filters and the à trous algorithm, which allows a multi-scale decomposition of 2D data to be computed efficiently using a series of 1D convolutions. The basic idea of the approach is presented using a synthetic test image, while our archaeo-geophysical case study from northeast Germany demonstrates its potential for analyzing and processing rather typical geophysical anomaly maps, including magnetic and topographic data. Our vertical-gradient magnetic data show amplitude variations over several orders of magnitude, complex anomaly patterns at various spatial scales, and typical noise patterns, while our topographic data show a distinct hill structure superimposed by a microtopographic stripe pattern and random noise. Our results demonstrate that the RWT approach is capable of successfully separating these components and that selected wavelet planes can be scaled and combined so that the reconstructed images allow for a detailed, multi-scale structural interpretation, including integrated visualizations of magnetic and topographic data.
Because our analysis approach is straightforward to implement without laborious parameter testing and tuning, computationally efficient, and easily adaptable to other geophysical data sets, we believe that it can help to rapidly analyze and interpret different geophysical mapping data collected to address a variety of near-surface applications from engineering practice and research.
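The à trous scheme with cubic-spline filters is well documented: at scale j the 5-tap cubic B-spline kernel [1, 4, 6, 4, 1]/16 is dilated by inserting 2^j − 1 zeros between taps, and each wavelet plane is the difference of successive smoothings. A 1D sketch under these standard conventions (boundary handling here is periodic; the authors' choice may differ):

```python
import numpy as np

def atrous_decompose(row, n_scales):
    """1D à trous (redundant) wavelet decomposition, cubic B-spline kernel.

    Returns n_scales wavelet planes plus the final smooth plane; because
    the transform is redundant, summing all planes reconstructs the input.
    """
    base = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    planes = []
    smooth = row.astype(float)
    for j in range(n_scales):
        step = 2 ** j
        kernel = np.zeros(4 * step + 1)
        kernel[::step] = base                  # dilate with "holes" (a trous)
        pad = len(kernel) // 2                 # periodic padding keeps length
        padded = np.concatenate([smooth[-pad:], smooth, smooth[:pad]])
        next_smooth = np.convolve(padded, kernel, mode="valid")
        planes.append(smooth - next_smooth)    # wavelet plane at scale j
        smooth = next_smooth
    planes.append(smooth)                      # coarsest smooth plane
    return planes
```

For 2D maps, one such 1D pass applied along rows and then along columns at each scale yields the separable decomposition the paper computes with a series of 1D convolutions.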


Geophysics ◽  
1994 ◽  
Vol 59 (6) ◽  
pp. 902-908 ◽  
Author(s):  
Lindrith Cordell

Potential‐field (gravity) data are transformed into a physical‐property (density) distribution in a lower half‐space, constrained solely by assumed upper bounds on physical‐property contrast and data error. A two‐step process is involved. The data are first transformed to an equivalent set of line (2-D case) or point (3-D case) sources, using Euler’s homogeneity equation evaluated iteratively on the largest residual data value. Then, mass is converted to a volume‐density product, constrained to an upper density bound, by “bubbling,” which exploits circular or radial expansion to redistribute density without changing the associated gravity field. The method can be developed for gravity or magnetic data in two or three dimensions. The results can provide a beginning for interpretation of potential‐field data where few independent constraints exist, or more likely, can be used to develop models and confirm or extend interpretation of other geophysical data sets.
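The first step, iterative reduction of the data to equivalent line sources evaluated at the largest residual, can be caricatured in a few lines. This is a deliberate simplification of Cordell's scheme: the depth here comes from the fall-off of a 2D line-mass anomaly, g ∝ z/((x − x0)² + z²), to the neighbouring sample, rather than from the full Euler-homogeneity solve used in the paper, and the function name is invented:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def greedy_line_sources(x, g, n_sources, dx):
    """Greedy decomposition of a 2D gravity profile into equivalent
    horizontal line masses.  Each pass anchors a source under the
    largest residual value, estimates its depth from the anomaly
    fall-off to the adjacent sample, and subtracts its field.
    """
    residual = g.astype(float).copy()
    sources = []
    for _ in range(n_sources):
        i = int(np.argmax(residual))
        j = i + 1 if i + 1 < len(x) else i - 1
        ratio = residual[i] / residual[j]          # = (dx^2 + z^2) / z^2
        z = dx / np.sqrt(max(ratio - 1.0, 1e-12))
        lam = residual[i] * z / (2.0 * G)          # line mass per unit length
        model = 2.0 * G * lam * z / ((x - x[i]) ** 2 + z ** 2)
        residual -= model
        sources.append((x[i], z, lam))
    return sources, residual
```

The second ("bubbling") step, redistributing each concentrated mass into a bounded-density volume without changing its field, is not sketched here.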


2020 ◽  
pp. 1-16
Author(s):  
Amir Maleki ◽  
Richard Smith ◽  
Esmaeil Eshaghi ◽  
Lucie Mathieu ◽  
David Snyder ◽  
...  

This paper focuses on obtaining a better understanding of the subsurface geology of the Chibougamau area, in the northeast of the Abitibi greenstone belt (Superior craton), using geophysical data collected along a 128 km long traverse with a roughly southwest–northeast orientation. We have constructed two-dimensional (2D) models of the study area that are consistent with newly collected gravity data and high-resolution magnetic data sets. The initial models were constrained at depth by an interpretation of a new seismic section and at surface by the bedrock geology and known geometry of lithological units. The attributes of the model were constrained using petrophysical measurements so that the final model is compatible with all available geological and geophysical data. The potential-field data modelling resolved the geometry of plutons and magnetic bodies that are transparent on seismic sections. The new model is consistent with the known structural geology, such as open folding, and provides an improvement in estimating the size, shape, and depth of the Barlow and Chibougamau plutons. The Chibougamau pluton is known to be associated with Cu–Au magmatic-hydrothermal mineralisation and, as the volume and geometry of intrusive bodies is paramount to the exploration of such mineralisation, the modelling presented here provides a scientific foundation for exploration models focused on such mineralisation.


Geophysics ◽  
1992 ◽  
Vol 57 (1) ◽  
pp. 126-130 ◽  
Author(s):  
Jianghai Xia ◽  
Donald R. Sprowl

Direct inversion of potential‐field data is hindered by the nonuniqueness of the general solution. Convergence to a single solution can only be obtained when external constraints are placed on the subsurface geometry. Two such constrained geometries are dealt with here: a single, nonplanar interface between two layers, each of uniform density or magnetization, and the distribution of the density or magnetization contrast within a single layer. Both of these simple geometries have geologic application. Inversion is accomplished by iterative improvement of an initial subsurface model in the wavenumber domain. The inversion process is stable and is efficient for use on large data sets. Forward calculation of anomalies is by Parker’s (1973) algorithm (Blakely, 1981).
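Parker's (1973) forward algorithm referenced above computes the anomaly of an interface as an FFT series in powers of the relief. A 1D sketch under one common sign convention (positive density contrast below an interface with relief h about mean depth z0 beneath the observation level; names and the term count are illustrative):

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def parker_gravity(h, dx, z0, drho, n_terms=5):
    """Gravity anomaly of a density interface via Parker's FFT series.

    F[dg](k) = 2*pi*G*drho * exp(-|k| z0) * sum_n |k|^(n-1)/n! * F[h^n]

    h    : interface relief about its mean depth (m), 1D array
    z0   : mean depth of the interface below the observation level (m)
    drho : density contrast across the interface (kg/m^3)
    """
    k = np.abs(2.0 * np.pi * np.fft.fftfreq(h.size, d=dx))
    total = np.zeros(h.size, dtype=complex)
    fact = 1.0
    for m in range(1, n_terms + 1):
        fact *= m                                   # running m!
        total += (k ** (m - 1) / fact) * np.fft.fft(h ** m)
    return np.real(np.fft.ifft(2.0 * np.pi * G * drho * np.exp(-k * z0) * total))
```

For a flat relief the series collapses to the infinite-slab value 2πGΔρh, a convenient sanity check; iterative schemes of the kind described in the abstract wrap a forward routine like this in an update loop on the model.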


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets that are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10at%Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel in the 80×80 spectrum-image (25 Mbytes). An energy range of 39–89 eV (20 channels/eV) is represented. During processing, the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using methods and software described in [1].
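The two per-pixel processing routes described, direct subtraction versus offset removal and addition, can be sketched with arrays. The 20 channels/eV sampling and 1 eV offset are from the abstract; the direction of the shift and the function names are assumptions for illustration:

```python
import numpy as np

CHANNELS_PER_EV = 20                 # from the abstract: 20 channels/eV
OFFSET_CH = 1 * CHANNELS_PER_EV      # the two spectra are recorded 1 eV apart

def difference_spectrum(s_a, s_b):
    """First-difference spectrum: channel-by-channel subtraction of the
    two acquisitions, so fixed-pattern detector artifacts common to both
    are corrected while the 1 eV shift turns the spectrum itself into a
    difference signal."""
    return s_a - s_b

def summed_spectrum(s_a, s_b):
    """Normal spectrum: numerically remove the 1 eV offset (assumed here
    to shift the second acquisition toward higher channels) and add."""
    return s_a[:-OFFSET_CH] + s_b[OFFSET_CH:]
```

Applied at each of the 80×80 pixels, either route converts the raw spectrum-image into the 2D floating-point images the paper compares.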


Author(s):  
Thomas W. Shattuck ◽  
James R. Anderson ◽  
Neil W. Tindale ◽  
Peter R. Buseck

Individual particle analysis involves the study of tens of thousands of particles using automated scanning electron microscopy and elemental analysis by energy-dispersive x-ray emission spectroscopy (EDS). EDS produces large data sets that must be analyzed using multivariate statistical techniques. A complete study uses cluster analysis, discriminant analysis, and factor or principal components analysis (PCA). The three techniques are used in the study of particles sampled during the FeLine cruise to the mid-Pacific Ocean in the summer of 1990. The mid-Pacific aerosol provides information on long-range particle transport, iron deposition, sea salt ageing, and halogen chemistry.
Aerosol particle data sets suffer from a number of difficulties for pattern recognition using cluster analysis. There is a great disparity in the number of observations per cluster and the range of the variables in each cluster. The variables are not normally distributed, they are subject to considerable experimental error, and many values are zero because of finite detection limits. Many of the clusters show considerable overlap because of natural variability, agglomeration, and chemical reactivity.


2020 ◽  
Vol 1 (3) ◽  
Author(s):  
Maysam Abedi

The presented work examines the application of an Augmented Iteratively Re-weighted and Refined Least Squares method (AIRRLS) to construct a 3D magnetic susceptibility model from potential-field magnetic anomalies. The algorithm replaces an lp minimization problem by a sequence of weighted linear systems in which the retrieved magnetic susceptibility model successively converges to an optimum solution, with the stopping iteration number acting as the regularization parameter. To avoid the natural tendency of causative magnetic sources to concentrate at shallow depth, a prior depth-weighting function is incorporated into the original formulation of the objective function. The speed of the lp minimization is increased by inserting a preconditioned conjugate gradient method (PCCG) to solve the central system of equations for large-scale magnetic field data. It is assumed that there is no remanent magnetization, since this study focuses on inversion of a geological structure with low magnetic susceptibility. The method is applied to multi-source, noise-corrupted synthetic magnetic field data to demonstrate its suitability for 3D inversion, and then to real data pertaining to a geologically plausible porphyry copper unit. The real case study, located in Semnan province of Iran, consists of an arc-shaped porphyry andesite covered by sedimentary units that may host mineral occurrences, especially porphyry copper. It is demonstrated that this structure extends to depth, and consequently exploratory drilling is highly recommended to acquire more information about its potential for ore-bearing mineralization.
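The core IRLS idea, replacing the lp penalty with a sequence of weighted linear systems, can be sketched as follows. This is a generic IRLS sketch, not the authors' code: the depth-weighting function and the PCCG solver are omitted (a dense direct solve stands in), and the weight formula is one common choice:

```python
import numpy as np

def irls_lp(A, d, p=1.0, lam=1e-2, n_iter=10, eps=1e-6):
    """Iteratively reweighted least squares for min ||A m - d||^2 + lam*||m||_p^p.

    Each iteration re-linearizes the lp term with diagonal weights
    w_i = (m_i^2 + eps)^(p/2 - 1) and solves the resulting weighted
    normal equations; the iteration count plays the role of the
    regularization parameter, as in the abstract.
    """
    m = np.zeros(A.shape[1])
    for _ in range(n_iter):
        w = (m ** 2 + eps) ** (p / 2.0 - 1.0)   # lp re-weighting (eps avoids 0^-1)
        m = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ d)
    return m
```

For p = 2 the weights are constant and the scheme reduces to ordinary damped least squares; for p closer to 1 it progressively sharpens the recovered susceptibility model.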


Author(s):  
Mykhajlo Klymash ◽  
Olena Hordiichuk-Bublivska ◽  
Ihor Tchaikovskyi ◽  
Oksana Urikova

This article investigates the processing of large arrays of information in distributed systems. A singular value decomposition method is used to reduce the amount of data processed by eliminating redundancy. Dependencies of computational efficiency in distributed systems were obtained using the MPI message-passing protocol and the MapReduce model of node interaction. The efficiency of each technology was analyzed for different data sizes: non-distributed systems are inefficient for large volumes of information because of their low computing performance. It is proposed to use distributed systems that apply singular value decomposition, which reduces the amount of information processed. The study of systems using the MPI protocol and the MapReduce model yielded the dependence of calculation time on the number of processes, which testifies to the expediency of using distributed computing when processing large data sets. It was also found that distributed systems using the MapReduce model work much more efficiently than MPI, especially with large amounts of data, while MPI performs calculations more efficiently for small amounts of information. As data sets grow, it is advisable to use the MapReduce model.
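The redundancy-eliminating decomposition the article distributes over MPI/MapReduce workers can be sketched with a truncated SVD; function names are illustrative:

```python
import numpy as np

def truncated_svd(X, rank):
    """Rank-`rank` SVD factors of X.

    Storing (U_k, s_k, Vt_k) instead of X cuts the data volume from
    m*n values to rank*(m + n + 1), which is the redundancy-elimination
    step applied before distributing the computation.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :rank], s[:rank], Vt[:rank, :]

def reconstruct(U, s, Vt):
    """Rebuild the (approximate) data matrix from the truncated factors."""
    return (U * s) @ Vt
```

When the data matrix is (approximately) low-rank, the reconstruction is (near-)exact, so each node can work on the compact factors instead of the full array.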

