A stable truncated series approximation of the reduction‐to‐the‐pole operator

Geophysics ◽  
1993 ◽  
Vol 58 (8) ◽  
pp. 1084-1090 ◽  
Author(s):  
Carlos Alberto Mendonça ◽  
João B. C. Silva

We combine a stabilized reduction-to-the-pole filter and an upward-continuation filter to produce meaningful reduced-to-the-pole fields at low magnetic latitudes. The stabilizing procedure is based on expanding the theoretical expression for the reduction-to-the-pole filter in the wavenumber domain as a Taylor series. The filter instability is caused by the huge filter amplitudes along the magnetization azimuth, which are expressed by an infinite sum of terms close to unity. The stabilizing procedure then reduces to simply truncating the infinite series. The upward-continuation filter attenuates the high-wavenumber component of the noise and allows us to design a stabilized filter closer to the theoretical one. In addition, quantitative interpretations of source depth based on the filtered field are more reliable when using upward continuation than when using arbitrary low-pass filters. The proposed filter was applied to synthetic data from a single prism uniformly magnetized along a supposedly known direction, and it produced a reduced-to-the-pole field very close to the theoretical field at the pole. We also applied the filter to magnetic data from Dixon Seamount assuming induced magnetization only. Within the central part of the anomaly, we obtained roughly circular contours of the reduced-to-the-pole anomaly, consistent with the nearly circular shape of the seamount (evidenced by topographic data).
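The truncation idea admits a compact numerical sketch. The snippet below is an illustration, not the authors' code: it assumes induced magnetization, writes the wavenumber-domain RTP operator as 1/(n + iu)^2 with n and u built from the field's direction cosines, and keeps only the first terms of the corresponding power series (function and parameter names are invented for illustration).

```python
import numpy as np

def rtp_filter_truncated(kx, ky, inc_deg, dec_deg, n_terms=10):
    """Truncated-series approximation of the RTP filter for induced
    magnetization. The full operator 1/Theta(k)**2 is unbounded along
    the magnetization azimuth at low latitudes; expanding it as a power
    series and keeping only n_terms bounds the amplitude."""
    inc, dec = np.radians(inc_deg), np.radians(dec_deg)
    # Direction cosines of the (assumed common) field/magnetization vector.
    l, m, n = np.cos(inc) * np.cos(dec), np.cos(inc) * np.sin(dec), np.sin(inc)
    k = np.hypot(kx, ky)
    # u is the along-azimuth wavenumber component, normalized by |k|.
    u = np.divide(l * kx + m * ky, k, out=np.zeros_like(k), where=k > 0)
    # 1/(n + i*u)**2 = (1/n**2) * sum_{j>=0} (j + 1) * (-i*u/n)**j
    z = -1j * u / n
    series = sum((j + 1) * z**j for j in range(n_terms))
    return series / n**2
```

At the pole (inclination 90 degrees) the series collapses to unity, as expected; at low inclinations the truncation keeps the filter finite where the exact operator blows up.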

Geophysics ◽  
2014 ◽  
Vol 79 (6) ◽  
pp. J67-J80 ◽  
Author(s):  
Giovanni Florio ◽  
Maurizio Fedi ◽  
Roman Pašteka

The estimation of the structural index and of the depth to the source is the main task of many popular methods used to analyze potential-field data, such as Euler deconvolution. However, these estimates are unstable even in the presence of weak noise, and Euler deconvolution of noisy data leads to an underestimation of the structural index and depth. We have studied how the structural index and depth estimates are affected by applying low-pass filtering to the data. Physically based low-pass filters, such as upward continuation and integration, have been shown to be the best choice over a range of altitudes (upward continuation) or orders (integration filters), mainly because their outputs have a well-defined physical meaning. In contrast, mathematical low-pass filters require that the filter parameters be tuned carefully by means of several trial tests to produce optimally smoothed fields. The C-norm criterion is a reliable strategy to produce a stabilized vertical derivative, and we discourage Butterworth filters because, for a high cutoff wavenumber, they tend toward a vertical-integral filter, thus complicating the interpretation of the estimated structural index. We found that the estimated structural index and depth to source increase proportionally with the amount of smoothing, except in the case of overfiltering. In that case, the severe distortion of the original field may cause a decrease of the estimated structural index and depth to source.
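The physically based low-pass filter discussed above has a simple wavenumber-domain form: upward continuation to height h multiplies the field's spectrum by exp(-|k| h). A minimal sketch (grid spacings and height in the same units; names are illustrative):

```python
import numpy as np

def upward_continue(field, dx, dy, height):
    """Upward-continue a gridded field by `height`. The wavenumber
    response exp(-|k| * height) is a physically based low-pass filter:
    the output is the field that would be measured on a plane `height`
    above the original observation level."""
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dy)
    KX, KY = np.meshgrid(kx, ky)
    k = np.hypot(KX, KY)
    spectrum = np.fft.fft2(field) * np.exp(-k * height)
    return np.real(np.fft.ifft2(spectrum))
```

Because the response is exactly 1 at |k| = 0, the mean of the field is preserved while high-wavenumber noise is attenuated, which is what makes the smoothing physically interpretable.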


2020 ◽  
Author(s):  
Leonardo Uieda ◽  
Santiago Soler

<p>We investigate the use of cross-validation (CV) techniques to estimate the accuracy of equivalent-source (also known as equivalent-layer) models for interpolation and processing of potential-field data. Our preliminary results indicate that some common CV algorithms (e.g., random permutations and k-folds) tend to overestimate the accuracy. We have found that blocked CV methods, where the data are split along spatial blocks instead of randomly, provide more conservative and realistic accuracy estimates. Beyond evaluating an equivalent-source model's performance, cross-validation can be used to automatically determine configuration parameters, like source depth and amount of regularization, that maximize prediction accuracy and avoid over-fitting.</p><p>Widely used in gravity and magnetic data processing, the equivalent-source technique consists of a linear model (usually point sources) used to predict the observed field at arbitrary locations. Upward-continuation, interpolation, gradient calculations, leveling, and reduction-to-the-pole can be performed simultaneously by using the model to make predictions (i.e., forward modelling). Likewise, the use of linear models to make predictions is the backbone of many machine learning (ML) applications. The predictive performance of ML models is usually evaluated through cross-validation, in which the data are split (usually randomly) into a training set and a validation set. Models are fit on the training set and their predictions are evaluated on the validation set with a goodness-of-fit metric, like the mean squared error or the R² coefficient of determination. Many cross-validation methods exist in the literature, varying in how the data are split and how this process is repeated. Prior research from the statistical modelling of ecological data suggests that prediction accuracy is usually overestimated by traditional CV methods when the data are spatially auto-correlated. This issue can be mitigated by splitting the data along spatial blocks rather than randomly. We conducted experiments on synthetic gravity data to investigate the use of traditional and blocked CV methods in equivalent-source interpolation. We found that the overestimation problem also occurs in this setting and that more conservative accuracy estimates are obtained when applying blocked versions of random permutations and k-fold. Further studies need to be conducted to generalize these findings to upward-continuation, reduction-to-the-pole, and derivative calculation.</p><p>Open-source software implementations of the equivalent-source and blocked cross-validation (in progress) methods are available in the Python libraries Harmonica and Verde, which are part of the Fatiando a Terra project (www.fatiando.org).</p>
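The blocked-splitting idea can be illustrated with a minimal splitter. This is a sketch of the concept only, not the Verde implementation, and all names are invented: points are first assigned to spatial blocks, and whole blocks (rather than individual points) are then assigned to folds, so that training and validation points are spatially separated.

```python
import numpy as np

def block_kfold(coords, n_blocks=4, n_folds=2, seed=0):
    """Yield (train_indices, test_indices) pairs where the test points
    of each fold come from whole spatial blocks, never from blocks that
    also contribute training points."""
    easting, northing = coords
    # Label each point by the block of an n_blocks x n_blocks grid it falls in.
    x_edges = np.linspace(easting.min(), easting.max(), n_blocks + 1)
    y_edges = np.linspace(northing.min(), northing.max(), n_blocks + 1)
    ix = np.clip(np.searchsorted(x_edges, easting) - 1, 0, n_blocks - 1)
    iy = np.clip(np.searchsorted(y_edges, northing) - 1, 0, n_blocks - 1)
    labels = iy * n_blocks + ix
    # Shuffle the occupied blocks and deal them out to folds.
    blocks = np.unique(labels)
    rng = np.random.default_rng(seed)
    rng.shuffle(blocks)
    for fold_blocks in np.array_split(blocks, n_folds):
        test = np.isin(labels, fold_blocks)
        yield np.where(~test)[0], np.where(test)[0]
```

Because each block lands in exactly one fold, every data point appears in exactly one validation set, and the validation points are never immediate spatial neighbours of the training points from the same block.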


Author(s):  
Boxin Zuo ◽  
Xiangyun Hu ◽  
Marcelo Leão-Santos ◽  
Yi Cai ◽  
Mason Andy Kass ◽  
...  

Summary Magnetic surveys conducted in complex conditions, such as at low magnetic latitudes, on uneven observation surfaces, or above high-susceptibility sources, pose significant challenges for obtaining stable reduction-to-the-pole (RTP) and upward-continuation solutions on arbitrary surfaces. To tackle these challenges, we propose constructing an equivalent-susceptibility model based on a partial differential equation (PDE) framework in the space domain. A multilayer equivalent-susceptibility method was employed for the RTP and upward-continuation operations, allowing application on undulating observation surfaces and handling strong self-demagnetisation effects in a non-uniform mesh. A novel positivity constraint is introduced to improve the accuracy and efficiency of the inversion. We analysed the effect of the depth-weighting function in the inversion of equivalent susceptibility for RTP and upward-continuation reproduction. Iterative and direct solvers were utilised and compared for solving the large, sparse, nonsymmetric, and ill-conditioned system of linear equations produced by the PDE-based equivalent-source construction. Two synthetic models were used to illustrate the efficiency and accuracy of the proposed method in processing both ground and airborne magnetic data. Aeromagnetic and ground data, together with prior magnetic-orebody information, collected in a low-magnetic-latitude region of Brazil were used to validate the proposed method for RTP and upward-continuation processing of magnetic data sets with strong self-demagnetisation.
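The direct-versus-iterative solver comparison mentioned above can be illustrated on a toy system. The snippet below is an illustration only (the real PDE-discretized system is far larger and much worse conditioned): it contrasts a sparse LU factorization with restarted GMRES on a small nonsymmetric sparse matrix.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy sparse, nonsymmetric tridiagonal system standing in for the
# PDE-based equivalent-source equations (illustration only).
n = 200
A = sp.diags([4.0 * np.ones(n), -1.5 * np.ones(n - 1), -1.0 * np.ones(n - 1)],
             [0, -1, 1], format="csc")
b = np.ones(n)

x_direct = spla.splu(A).solve(b)   # direct: sparse LU factorization
x_iter, info = spla.gmres(A, b)    # iterative: restarted GMRES (info == 0 on convergence)
```

For a system this small the direct solve is trivially fast; the trade-off the abstract alludes to appears only at scale, where factorization memory grows quickly and iterative methods (with good preconditioning) become attractive.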


Geophysics ◽  
1989 ◽  
Vol 54 (12) ◽  
pp. 1607-1613 ◽  
Author(s):  
R. O. Hansen ◽  
R. S. Pawlowski

Using simple estimates of the signal and noise power from gridded magnetic data, we design regularized frequency-domain operators for reduction to the pole at low magnetic latitudes. These operators suppress the artifacts along the direction of the magnetic declination associated with the conventional reduction-to-the-pole procedure, with negligible increase in computational load. The new procedure is applied to produce high-quality reductions to the pole for noisy low-latitude synthetic data and for magnetic data from the Dixon Seamount.
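The flavour of this noise-aware stabilization can be sketched as Wiener-style damping of the exact RTP operator. This is a hedged sketch, not the paper's filter design: here the noise-to-signal power ratio is a single user-supplied number, a crude stand-in for the per-wavenumber spectral estimates derived from the data, and all names are illustrative.

```python
import numpy as np

def wiener_damped_rtp(kx, ky, inc_deg, dec_deg, noise_to_signal):
    """Exact wavenumber-domain RTP operator 1/Theta(k)**2, damped by a
    Wiener-deconvolution factor so that its amplitude stays bounded
    where the undamped operator would amplify noise without limit."""
    inc, dec = np.radians(inc_deg), np.radians(dec_deg)
    l, m, n = np.cos(inc) * np.cos(dec), np.cos(inc) * np.sin(dec), np.sin(inc)
    k = np.hypot(kx, ky)
    u = np.divide(l * kx + m * ky, k, out=np.zeros_like(k), where=k > 0)
    rtp = 1.0 / (n + 1j * u) ** 2
    # Wiener damping: |output| is bounded by 1 / (2 * sqrt(noise_to_signal)).
    return rtp / (1.0 + noise_to_signal * np.abs(rtp) ** 2)
```

Where the data are clean (small noise-to-signal ratio) the damped filter approaches the exact operator; where the operator's gain is huge, along the magnetization azimuth at low latitude, the damping takes over and caps the amplification.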


2012 ◽  
Vol 30 (3) ◽  
Author(s):  
Alessandra De Barros e Silva Bongiolo ◽  
Francisco José Fonseca Ferreira

The purpose of this article is to describe work carried out to evaluate enhancement techniques for magnetic anomalies using the reduction-to-the-pole method, and its implications for the structural interpretation of a region located at low magnetic latitude. With this objective, the response of several data-enhancement methods, with and without reduction to the pole, was analyzed. These methods were applied to synthetic prisms located at low magnetic latitudes similar to the area of analysis, and the resulting anomalies were compared with those calculated at the magnetic pole. The synthetic data were generated from a program that calculates the anomalies of prisms with arbitrary dimensions, susceptibilities, and depths. The enhancement methods were also applied to magnetic data of rocks from the Amazon Basin and the Amazonian Craton, in the Itaituba region, Pará state, northern Brazil. The reduction-to-the-pole algorithm applied to the synthetic data improved the performance of the enhancement methods, since, after its application, the maximum amplitudes of the transformed anomalies were positioned over the edges of the sources, facilitating magnetic-structural interpretation. Good correlation between magnetic lineaments (particularly those inferred by the recently proposed tilt derivative of the total horizontal gradient method) and the previously interpreted geologic structures supports the reduction to the pole, indicating that it may be applied even when data are collected at low magnetic latitudes.
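The edge-enhancement method mentioned above, the tilt derivative applied to the total horizontal gradient, can be sketched with FFT-based derivatives. This is an illustrative implementation under the usual wavenumber-domain conventions (vertical derivative via multiplication by |k|); function names are invented.

```python
import numpy as np

def fft_derivatives(field, dx, dy):
    """Horizontal and first vertical derivatives of a gridded field,
    computed in the wavenumber domain."""
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dy)
    KX, KY = np.meshgrid(kx, ky)
    F = np.fft.fft2(field)
    d_x = np.real(np.fft.ifft2(1j * KX * F))
    d_y = np.real(np.fft.ifft2(1j * KY * F))
    d_z = np.real(np.fft.ifft2(np.hypot(KX, KY) * F))  # vertical derivative
    return d_x, d_y, d_z

def tilt_of_horizontal_gradient(field, dx, dy):
    """Tilt angle of the total horizontal gradient: the arctangent of the
    THG's vertical derivative over its own horizontal gradient magnitude,
    which peaks over source edges and is bounded to [-pi/2, pi/2]."""
    fx, fy, _ = fft_derivatives(field, dx, dy)
    thg = np.hypot(fx, fy)
    tx, ty, tz = fft_derivatives(thg, dx, dy)
    return np.arctan2(tz, np.hypot(tx, ty))
```

The arctangent normalization is what equalizes the response of shallow and deep sources, which is why such tilt-based maps are popular for lineament tracing.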


2015 ◽  
Vol E98.C (2) ◽  
pp. 156-161
Author(s):  
Hidenori YUKAWA ◽  
Koji YOSHIDA ◽  
Tomohiro MIZUNO ◽  
Tetsu OWADA ◽  
Moriyasu MIYAZAKI
Keyword(s):  
Ka Band ◽  
Low Pass ◽  

2011 ◽  
Vol 5 (2) ◽  
pp. 155-162
Author(s):  
Jose de Jesus Rubio ◽  
Diana M. Vazquez ◽  
Jaime Pacheco ◽  
Vicente Garcia

Mathematics ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 328
Author(s):  
Mikulas Huba ◽  
Damir Vrancic

The paper investigates and explains a new, simple analytical tuning of proportional-integral-derivative (PID) controllers. In combination with nth-order series binomial low-pass filters, they are applied to double-integrator-plus-dead-time (DIPDT) plant models. With respect to the use of derivatives, it should be understood that the design of appropriate filters is not only an implementation problem; rather, it is also critical for the resulting performance, robustness, and noise attenuation. To simplify controller commissioning, integrated tuning procedures (ITPs) based on three different concepts of filter delay equivalences are presented. For simultaneous determination of the controller and filter parameters, the design uses the multiple-real-dominant-poles method. The excellent control-loop performance in a noisy environment and the specific advantages and disadvantages of the resulting equivalences are discussed. The results show that none of them is globally optimal: each is advantageous only for certain noise levels and desired degrees of filtering.
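The nth-order series binomial low-pass filter referred to above has the transfer function Q_n(s) = 1/(T s + 1)^n, so its magnitude response follows directly; a quick numerical check (illustrative values, not the paper's tuning):

```python
import numpy as np

def binomial_lpf_mag(omega, T, n):
    """Magnitude response of the n-th order series binomial low-pass
    filter Q_n(s) = 1 / (T*s + 1)**n, evaluated at s = j*omega.
    At the corner frequency omega = 1/T the gain is (1/sqrt(2))**n,
    i.e. about -3n dB."""
    return 1.0 / np.abs(1j * omega * T + 1.0) ** n
```

Raising the order n steepens the roll-off (and so improves noise attenuation) at the cost of extra phase lag, which is exactly the trade-off the delay-equivalence concepts in the paper are meant to manage.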


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 563
Author(s):  
Jorge Pérez-Bailón ◽  
Belén Calvo ◽  
Nicolás Medrano

This paper presents a new approach based on the use of a Current Steering (CS) technique for the design of fully integrated Gm–C Low Pass Filters (LPF) with sub-Hz to kHz tunable cut-off frequencies and an enhanced power-area-dynamic range trade-off. The proposed approach has been experimentally validated by two different first-order single-ended LPFs designed in a 0.18 µm CMOS technology powered by a 1.0 V single supply: a folded-OTA based LPF and a mirrored-OTA based LPF. The first one exhibits a constant power consumption of 180 nW at 100 nA bias current with an active area of 0.00135 mm2 and a tunable cutoff frequency that spans over 4 orders of magnitude (~100 mHz–152 Hz @ CL = 50 pF) preserving dynamic figures greater than 78 dB. The second one exhibits a power consumption of 1.75 µW at 500 nA with an active area of 0.0137 mm2 and a tunable cutoff frequency that spans over 5 orders of magnitude (~80 mHz–~1.2 kHz @ CL = 50 pF) preserving a dynamic range greater than 73 dB. Compared with previously reported filters, this proposal is a competitive solution that satisfies low-voltage, low-power on-chip constraints, making it a preferable choice for general-purpose reconfigurable front-end sensor interfaces.

