On: “Variable‐depth magnetization mapping: Application to the Athabasca basin, northern Alberta and Saskatchewan, Canada” by Mark Pilkington (GEOPHYSICS, 54, 1164–1179, September 1989)

Geophysics ◽  
1990 ◽  
Vol 55 (12) ◽  
pp. 1652-1652
Author(s):  
R. Jerry Brod

The thrust of Pilkington’s paper is that a frequency‐domain approach to variable‐depth magnetization mapping is superior to a space‐domain approach and has been “shown to improve the geologic mapping capability over total‐field data.” He states that “apparent susceptibility or magnetization‐mapping methods have proven useful in improving the resolving power of total‐field magnetic data, leading to a more precise delineation of geologic boundaries and providing a map of susceptibility‐magnetization levels that can be related directly to rock properties.” He accepts the premise that northern Saskatchewan can be divided into domains on the basis of structure and lithology and that these “lithostructural domains can be distinguished on the basis of aeromagnetic character.”

Geophysics ◽  
1986 ◽  
Vol 51 (9) ◽  
pp. 1725-1735 ◽  
Author(s):  
J. W. Paine

The vertical gradient of a one‐dimensional magnetic field is known to be a useful aid in interpretation of magnetic data. When the vertical gradient is required but has not been measured, it is necessary to approximate the gradient using the available total‐field data. An approximation is possible because a relationship between the total field and the vertical gradient can be established using Fourier analysis. After reviewing the theoretical basis of this relationship, a number of methods for approximating the vertical gradient are derived. These methods fall into two broad categories: methods based on the discrete Fourier transform, and methods based on discrete convolution filters. There are a number of choices necessary in designing such methods, each of which will affect the accuracy of the computed values in differing, and sometimes conflicting, ways. A comparison of the spatial and spectral accuracy of the methods derived here shows that it is possible to construct a filter which maintains a reasonable balance between the various components of the total error. Further, the structure of this filter is such that it is also computationally more efficient than methods based on fast Fourier transform techniques. The spacing and width of the convolution filter are identified as the principal factors which influence the accuracy and efficiency of the method presented here, and recommendations are made on suitable choices for these parameters.
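The Fourier relationship underlying these methods is that the spectrum of the vertical gradient equals the spectrum of the total field multiplied by |k|. A minimal NumPy sketch of the DFT-based category (an illustration of the relationship, not a reconstruction of Paine's filters):

```python
import numpy as np

def vertical_gradient_fft(field, dx):
    """Approximate the vertical gradient of a 1-D profile via the |k|
    multiplier: FT(dT/dz) = |k| * FT(T) for a field on a level line."""
    n = len(field)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)   # angular wavenumbers
    spectrum = np.fft.fft(field)
    return np.real(np.fft.ifft(np.abs(k) * spectrum))

# For a single harmonic T(x) = cos(k0 x), the vertical gradient is
# exactly |k0| * T, which the filter should reproduce on a periodic grid.
dx = 0.5
x = np.arange(256) * dx
k0 = 2 * np.pi * 4 / (256 * dx)               # 4 full cycles on the profile
field = np.cos(k0 * x)
grad = vertical_gradient_fft(field, dx)
```

Because the test harmonic falls exactly on a DFT bin, the computed gradient matches the analytic result to machine precision; the spatial-versus-spectral accuracy trade-offs the abstract discusses only appear for band-limited, non-periodic data.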


Geophysics ◽  
2018 ◽  
Vol 83 (5) ◽  
pp. J75-J84 ◽  
Author(s):  
Camriel Coleman ◽  
Yaoguo Li

Three-dimensional inversion plays an important role in the quantitative interpretation of magnetic data in exploration problems, and magnetic amplitude data can be an effective tool in cases in which remanently magnetized materials are present. Because amplitude data are typically calculated from total-field anomaly data, the error levels must be characterized for inversions. Lack of knowledge of the error in amplitude data hinders the ability to properly estimate the data misfit associated with an inverse model and, therefore, the selection of the appropriate regularization parameter for a final model. To overcome these challenges, we have investigated the propagation of errors from total-field anomaly to amplitude data. Using parametric bootstrapping, we find that the standard deviation of the noise in amplitude data is approximately equal to that of the noise in total-field anomaly data when the amplitude data are derived from the conversion of total-field data to three orthogonal components. We then illustrate how the equivalent source method can be used to estimate the error in total-field anomaly data when needed. The obtained noise estimate can be applied to amplitude inversion to recover an optimal inverse model by applying the discrepancy principle. We test this method on synthetic and field data and determine its effectiveness.
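The central empirical finding — that amplitude noise is approximately equal to total-field noise when amplitude data come from three orthogonal components — can be illustrated with a toy parametric bootstrap. The component values and noise level below are assumptions for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0                      # assumed noise std in each component (nT)
bx, by, bz = 30.0, -12.0, 45.0   # hypothetical anomaly components at a station
n_boot = 20000                   # bootstrap realizations

# Parametric bootstrap: perturb each component with Gaussian noise and
# recompute the amplitude |B| = sqrt(Bx^2 + By^2 + Bz^2) each time.
amp = np.sqrt((bx + sigma * rng.standard_normal(n_boot)) ** 2
              + (by + sigma * rng.standard_normal(n_boot)) ** 2
              + (bz + sigma * rng.standard_normal(n_boot)) ** 2)
amp_noise_std = amp.std()        # should be close to sigma
```

To first order the amplitude perturbation is the projection of the component noise onto the field direction, so its standard deviation matches the per-component sigma — consistent with the paper's conclusion.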


Geophysics ◽  
1989 ◽  
Vol 54 (9) ◽  
pp. 1164-1173 ◽  
Author(s):  
Mark Pilkington

Total magnetic‐field data, from the Athabasca basin of northern Saskatchewan and Alberta, Canada, have been inverted to produce a magnetization map of the sub‐Athabasca crystalline basement. Since the basement topography is variable, the problem is nonlinear and an extra degree of freedom in the solution is caused by the existence of a distribution of magnetization (the annihilator) that produces no external magnetic field. I outline an iterative frequency‐domain inversion scheme, which is based on an approximation to the true partial derivative matrix for the linearized problem. This approximation causes each iteration to be equivalent to a simple frequency‐domain deconvolution. Modeling of selected anomalies allows determination of the magnetization at a number of points in the study area. These values are then used to determine the amount of annihilator to be added to the general solution found from the inversion. The procedure automatically corrects for the effects of variable attenuation of anomalies due to changes in basement depth. Thus, magnetization units and geology that are correlated in areas of outcrop can be extended beneath the sedimentary cover to provide improved geologic mapping control.
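The variable attenuation that the procedure corrects for arises from the exp(−|k|h) continuation factor: anomalies from deeper basement lose short-wavelength content. A minimal upward-continuation sketch illustrating the attenuation (not Pilkington's inversion scheme):

```python
import numpy as np

def upward_continue(profile, dx, height):
    """Continue a 1-D profile upward by `height`: each wavenumber
    component is attenuated by exp(-|k| * height)."""
    k = 2 * np.pi * np.fft.fftfreq(len(profile), d=dx)
    return np.real(np.fft.ifft(np.exp(-np.abs(k) * height)
                               * np.fft.fft(profile)))

# A harmonic anomaly observed over basement 3 units deeper is attenuated
# by exp(-k0 * 3); the inversion must undo exactly this depth dependence.
dx = 0.5
x = np.arange(256) * dx
k0 = 2 * np.pi * 8 / (256 * dx)
anomaly = np.cos(k0 * x)
deep = upward_continue(anomaly, dx, height=3.0)
```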


Geophysics ◽  
1991 ◽  
Vol 56 (2) ◽  
pp. 308-308
Author(s):  
Nelson C. Steenland

Apparent magnetizations calculated from magnetic intensity depend on depth Z, and Pilkington attempts to remove this variability by annihilating the topographic effect through convolution with a topographic data set derived from a seismic source (one would have to consult his references for a description of this data set), a set with a crude contour interval of 300 m and maximum values of 1500 m. Further, no data are given to show that this surface is the surface of a magnetic basement. The largest element, an outcropping dome in the west with no corresponding anomaly on the intensity map, is not magnetic basement.


2020 ◽  
Vol 1 (3) ◽  
Author(s):  
Maysam Abedi

The presented work examines the application of an Augmented Iteratively Re-weighted and Refined Least Squares (AIRRLS) method to construct a 3D magnetic susceptibility model from potential-field magnetic anomalies. This algorithm replaces an lp minimization problem with a sequence of weighted linear systems in which the retrieved magnetic susceptibility model converges successively to an optimum solution, with the stopping iteration number acting as the regularization parameter. To counter the natural tendency of causative magnetic sources to concentrate at shallow depth, a prior depth-weighting function is incorporated into the objective function. The lp minimization is accelerated by a preconditioned conjugate gradient method (PCCG) for solving the central system of equations in cases of large-scale magnetic field data. Remanent magnetization is assumed absent, since this study focuses on inversion of a geological structure with low magnetic susceptibility. The method is first applied to multi-source, noise-corrupted synthetic magnetic field data to demonstrate its suitability for 3D inversion, and then to real data pertaining to a geologically plausible porphyry copper unit. The real case study, located in Semnan province of Iran, consists of an arc-shaped porphyry andesite covered by sedimentary units which may have potential for mineral occurrences, especially porphyry copper. It is demonstrated that this structure extends to depth, and consequently exploratory drilling is highly recommended to acquire more information about its potential for ore-bearing mineralization.
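The core of an iteratively reweighted least-squares scheme of this kind replaces the lp penalty with a quadratic penalty whose weights are refreshed each pass. A hedged sketch on a toy sparse problem (not the AIRRLS implementation; depth weighting and preconditioning are omitted):

```python
import numpy as np

def irls_lp(G, d, lam=1e-3, p=1.0, eps=1e-8, n_iter=30):
    """Minimize ||G m - d||^2 + lam * sum |m_i|^p by solving a sequence
    of weighted linear systems, refreshing the weights from the current
    model each pass (the IRLS idea underlying AIRRLS)."""
    m = np.linalg.lstsq(G, d, rcond=None)[0]      # initial least-squares fit
    for _ in range(n_iter):
        w = (m ** 2 + eps) ** ((p - 2) / 2)       # lp reweighting factors
        m = np.linalg.solve(G.T @ G + lam * np.diag(w), G.T @ d)
    return m

# Toy problem: a sparse model observed through an underdetermined system.
rng = np.random.default_rng(1)
G = rng.standard_normal((20, 50))
m_true = np.zeros(50)
m_true[[5, 17, 40]] = [1.0, -2.0, 1.5]
d = G @ m_true
m_rec = irls_lp(G, d, lam=1e-3, p=1.0)
```

With p = 1 the refreshed weights penalize small coefficients heavily, steering the sequence of linear solves toward a sparse model while keeping the data fit.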


Geosciences ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 150
Author(s):  
Nilgün Güdük ◽  
Miguel de la Varga ◽  
Janne Kaukolinna ◽  
Florian Wellmann

Structural geological models are widely used to represent relevant geological interfaces and property distributions in the subsurface. Considering the inherent uncertainty of these models, the non-uniqueness of geophysical inverse problems, and the growing availability of data, there is a need for methods that integrate different types of data consistently and consider the uncertainties quantitatively. Probabilistic inference provides a suitable tool for this purpose. Using a Bayesian framework, geological modeling can be considered as an integral part of the inversion and thereby naturally constrain geophysical inversion procedures. This integration prevents geologically unrealistic results and provides the opportunity to include geological and geophysical information in the inversion. This information can be from different sources and is added to the framework through likelihood functions. We applied this methodology to the structurally complex Kevitsa deposit in Finland. We started with an interpretation-based 3D geological model and defined the uncertainties in our geological model through probability density functions. Airborne magnetic data and geological interpretations of borehole data were used to define geophysical and geological likelihoods, respectively. The geophysical data were linked to the uncertain structural parameters through the rock properties. The result of the inverse problem was an ensemble of realized models. These structural models and their uncertainties are visualized using information entropy, which allows for quantitative analysis. Our results show that with our methodology, we can use well-defined likelihood functions to add meaningful information to our initial model without requiring a computationally-heavy full grid inversion, discrepancies between model and data are spotted more easily, and the complementary strength of different types of data can be integrated into one framework.
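The information entropy used to visualize the ensemble is the per-cell Shannon entropy of the unit probabilities estimated from the realizations: zero where all realizations agree, maximal where the units are equally likely. A minimal sketch with a toy categorical ensemble:

```python
import numpy as np

def cell_entropy(ensemble, n_units):
    """Per-cell Shannon information entropy (bits) of an ensemble of
    categorical models; `ensemble` has shape (n_realizations, n_cells)."""
    n_cells = ensemble.shape[1]
    H = np.zeros(n_cells)
    for u in range(n_units):
        p = (ensemble == u).mean(axis=0)   # empirical unit probability
        nz = p > 0                         # 0 * log(0) contributes nothing
        H[nz] -= p[nz] * np.log2(p[nz])
    return H

# Three realizations over four cells with two geological units (0 and 1):
ens = np.array([[0, 0, 1, 1],
                [0, 1, 1, 1],
                [0, 1, 0, 1]])
H = cell_entropy(ens, n_units=2)
```

Cells where all realizations agree (first and last) have zero entropy; cells where the realizations disagree carry high entropy, flagging the structurally uncertain regions for quantitative analysis.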


Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. IM1-IM9 ◽  
Author(s):  
Nathan Leon Foks ◽  
Richard Krahenbuhl ◽  
Yaoguo Li

Compressive inversion uses computational algorithms that decrease the time and storage needs of a traditional inverse problem. Most compression approaches focus on the model domain, and very few, other than traditional downsampling, focus on the data domain for potential-field applications. To further the compression in the data domain, a direct and practical approach to the adaptive downsampling of potential-field data for large inversion problems has been developed. The approach is formulated to significantly reduce the quantity of data in relatively smooth or quiet regions of the data set, while preserving the signal anomalies that contain the relevant target information. Two major benefits arise from this form of compressive inversion. First, because the approach compresses the problem in the data domain, it can be applied immediately without the addition of, or modification to, existing inversion software. Second, as most industry software uses some form of model or sensitivity compression, the addition of this adaptive data sampling creates a complete compressive inversion methodology whereby the reduction of computational cost is achieved simultaneously in the model and data domains. We applied the method to a synthetic magnetic data set and two large field magnetic data sets; however, the method is also applicable to other data types. Our results showed that the relevant model information is maintained after inversion despite using 1%–5% of the data.
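One simple way to realize such adaptive downsampling is to retain dense sampling where the data are rough and only a coarse background subset elsewhere. The second-difference roughness measure below is an assumption for illustration, not the authors' criterion:

```python
import numpy as np

def adaptive_downsample(data, threshold, stride):
    """Keep every sample whose local roughness (absolute second
    difference) exceeds `threshold`; elsewhere keep only every
    `stride`-th sample. Returns the indices of retained samples."""
    rough = np.abs(np.diff(data, 2))
    rough = np.pad(rough, 1)            # re-align roughness with the data
    keep = rough > threshold            # dense sampling over anomalies
    keep[::stride] = True               # coarse coverage of quiet regions
    return np.flatnonzero(keep)

# A smooth profile with one sharp anomaly: the flanks compress heavily
# while the peak region keeps its full sampling.
x = np.linspace(-50.0, 50.0, 1001)
anomaly = 100.0 / (1.0 + (x / 3.0) ** 2)
idx = adaptive_downsample(anomaly, threshold=0.05, stride=25)
```

The retained index set is much smaller than the original grid yet still samples the anomaly densely, which is the property that lets the inversion keep the relevant model information.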


2014 ◽  
Vol 644-650 ◽  
pp. 2670-2673
Author(s):  
Jun Wang ◽  
Xiao Hong Meng ◽  
Fang Li ◽  
Jun Jie Zhou

With the continuing growth in influence of near-surface geophysics, research into subsurface structure is of great significance. Geophysical imaging is one of the efficient computational tools that can be applied. This paper utilizes the inversion of potential-field data for subsurface imaging. Here, gravity and magnetic data are inverted together with a structurally coupled inversion algorithm. The subsurface (model space) is divided into a set of rectangular cells by an orthogonal 2D mesh, and a constant property (density and magnetic susceptibility) value is assumed within each cell. The inversion matrix equation is solved as an unconstrained optimization problem with the conjugate gradient (CG) method. This imaging method is applied to synthetic data for typical models of gravity and magnetic anomalies and is tested on field data.
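The conjugate gradient solver such inversions rely on can be sketched in a few lines for the regularized normal equations (toy matrices below, not the coupled gravity-magnetic system):

```python
import numpy as np

def conjugate_gradient(A, b, n_iter=100, tol=1e-10):
    """Minimal conjugate-gradient solver for a symmetric positive-definite
    system A x = b, such as the regularized normal equations G^T G + lam*I
    of a cell-based inversion."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Normal equations from a hypothetical sensitivity matrix G:
rng = np.random.default_rng(2)
G = rng.standard_normal((30, 20))
A = G.T @ G + 0.1 * np.eye(20)   # damping keeps A positive definite
b = G.T @ rng.standard_normal(30)
x = conjugate_gradient(A, b)
```

CG needs only matrix-vector products with A, which is what makes it attractive when the cell mesh, and hence the sensitivity matrix, is large.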


Geophysics ◽  
2019 ◽  
Vol 84 (2) ◽  
pp. B121-B133 ◽  
Author(s):  
Shida Sun ◽  
Chao Chen ◽  
Yiming Liu

We have developed a case study on the use of constrained inversion of magnetic data for recovering ore bodies quantitatively in the Macheng iron deposit, China. The inversion is constrained by the structural orientation and the borehole lithology in the presence of high magnetic susceptibility and strong remanent magnetization. Either the self-demagnetization effect caused by high susceptibility or strong remanent magnetization would lead to an unknown total magnetization direction. Here, we chose inversion of amplitude data that indicate low sensitivity to the direction of magnetization of the sources when constructing the underground model of effective susceptibility. To reduce the errors that arise when treating the total-field anomaly as the projection of an anomalous field vector in the direction of the geomagnetic reference field, we develop an equivalent source technique to calculate the amplitude data from the total-field anomaly. This equivalent source technique is based on the acquisition of the total-field anomaly, which uses the total-field intensity minus the magnitude of the reference field. We first design a synthetic model from a simplified real case to test the new approach, involving the amplitude data calculation and the constrained amplitude inversion. Then, we apply this approach to the real data. The results indicate that the structural orientation and borehole susceptibility bounds are compatible with each other and are able to improve the quality of the recovered model to obtain the distribution of ore bodies quantitatively and effectively.
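The distinction the authors draw — the acquired total-field anomaly is |B0 + Ba| − |B0|, whereas inversion commonly treats it as the projection of the anomalous vector onto the reference-field direction — can be checked numerically. The field values below are assumed for illustration:

```python
import numpy as np

b0 = np.array([0.0, 0.0, 50000.0])   # assumed reference field (nT)
ba = np.array([300.0, 0.0, 400.0])   # assumed anomalous field at a station

b0_hat = b0 / np.linalg.norm(b0)
projection = ba @ b0_hat                                  # usual approximation
measured = np.linalg.norm(b0 + ba) - np.linalg.norm(b0)   # what is acquired
error = measured - projection                             # the effect at issue
```

Here the projection is 400 nT while the acquired anomaly is slightly larger; for strong anomalies relative to the reference field this discrepancy grows, which is why the equivalent source technique starts from the acquisition definition rather than the projection.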


2020 ◽  
Author(s):  
Chaitanya Narendra ◽  
Puyan Mojabi

A phaseless Gauss-Newton inversion (GNI) algorithm is developed for microwave imaging applications. In contrast to full-data microwave imaging inversion that uses complex (magnitude and phase) scattered field data, the proposed phaseless GNI algorithm inverts phaseless (magnitude-only) total field data. This phaseless Gauss-Newton inversion (PGNI) algorithm is augmented with three different forms of regularization, originally developed for complex GNI. First, we use the standard weighted L2 norm total variation multiplicative regularizer which is appropriate when there is no prior information about the object being imaged. We then use two other forms of regularization operators to incorporate prior information about the object being imaged into the PGNI algorithm. The first one, herein referred to as SL-PGNI, incorporates prior information about the expected relative complex permittivity values of the object of interest. The other, referred to as SP-PGNI, incorporates spatial priors (structural information) about the objects being imaged. The use of prior information aims to compensate for the lack of total field phase data. The PGNI, SL-PGNI, and SP-PGNI inversion algorithms are then tested against synthetic and experimental phaseless total field data.

