GRAVITY AND MAGNETIC FIELDS OF POLYGONAL PRISMS AND APPLICATION TO MAGNETIC TERRAIN CORRECTIONS

Geophysics ◽  
1976 ◽  
Vol 41 (4) ◽  
pp. 727-741 ◽  
Author(s):  
Donald Plouff

Computer programs based on exact calculations of the gravity and magnetic anomalies of polygonal prisms are faster in operation and more accurate than previous programs based on numerical integration of polygonal laminas. The prism programs also are of more general application than existing computer programs based on the exact gravity and magnetic effects of rectangular prisms. There are no restrictions on the use of the exact formula for the gravitational attraction of a polygonal prism, but the formulas for the magnetic effect are restricted in that demagnetization is not considered, and a finite answer is not obtained in the unrealistic circumstance where an observation point coincides with an edge of the prism. Least‐squares methods permit calculation of the gravity or magnetic effect of models without prior knowledge of the density or magnetization contrasts, respectively: the observed anomalies are compared with theoretical dimensionless values to determine the contrasts as regression coefficients. The coefficient of correlation provides a goodness-of-fit estimate that aids model evaluation. After calculation of a magnetic terrain correction for an outcrop of Quaternary dacite and andesite near Clear Lake, Calif., an improvement of the coefficient of correlation from the 88 to the 92 percent level indicates that this volcanic unit probably extends at least 150 m beneath the surface. Application of a magnetic terrain correction to disconnected outcrops of Tertiary andesite eliminates most of a prominent V-shaped magnetic anomaly south of the San Juan Mountains, Colo.
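The least-squares step described in the abstract can be sketched as a regression through the origin: observed anomalies are regressed on theoretical dimensionless prism effects, so the contrast appears as the regression coefficient and the coefficient of correlation scores the fit. A minimal illustration with synthetic data (function and variable names are ours, not Plouff's):

```python
import numpy as np

# Hedged sketch: estimate a density/magnetization contrast as a regression
# coefficient by comparing observed anomalies against theoretical
# dimensionless values. Data below are synthetic.
def fit_contrast(theoretical, observed):
    t = np.asarray(theoretical, float)
    o = np.asarray(observed, float)
    contrast = np.dot(t, o) / np.dot(t, t)   # regression through the origin
    r = np.corrcoef(t, o)[0, 1]              # goodness-of-fit estimate
    return contrast, r

rng = np.random.default_rng(0)
theory = rng.uniform(1.0, 5.0, 50)           # dimensionless model values
obs = 0.4 * theory + rng.normal(0.0, 0.01, 50)
contrast, r = fit_contrast(theory, obs)
```

A high `r` supports the model geometry; the fitted `contrast` supplies the physical-property value without it being assumed beforehand.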

Geophysics ◽  
1995 ◽  
Vol 60 (4) ◽  
pp. 1007-1017 ◽  
Author(s):  
Robert L. Parker

A description of a new Fourier technique is given for calculating the gravitational attraction of a layer with an irregular top surface for application in the terrain correction of marine gravity surveys in shallow water. An earlier Fourier‐based algorithm fails or becomes inaccurate when the peaks of the topography approach the sea surface too closely. The new approach divides the attraction into two parts: a local contribution from the material within a cylinder around each observation point and the attraction from the matter outside the cylinder. A special quadrature rule, optimized for the actual data distribution, evaluates the local contribution. The calculation of the exterior component represents the bulk of the numerical effort. Fortunately, the exterior integral possesses an expansion as a series of convolutions, and by evaluating these in the Fourier domain, the procedure can take advantage of the efficiency of the fast Fourier transform. Chebyshev economization of the convolution series provides further significant improvements in computational speed. Two examples, one artificial and the other based on a survey around Guadalupe Island, illustrate the application of the new technique. Estimates of the errors from computational sources and from the inadequacies of the topographic model confirm the general accuracy of the approach, except in regions of very steep terrain.
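The convolution-series evaluation in the Fourier domain follows the standard Parker (1973) expansion, F[g](k) = 2πGρ·e^(−|k|z₀)·Σₙ |k|^(n−1)/n! · F[hⁿ]. The sketch below shows only this series on a 1D profile; the paper's cylinder split, quadrature rule, and Chebyshev economization are not reproduced, and all names are ours:

```python
import numpy as np
from math import factorial

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# Hedged sketch of the Parker-style Fourier series for the attraction of a
# layer with top surface h(x), observed z0 above the reference level.
def fourier_layer_attraction(h, dx, rho, z0, nterms=8):
    """Attraction (m/s^2) along the profile, via FFTs of powers of h."""
    k = 2.0 * np.pi * np.fft.fftfreq(len(h), d=dx)   # angular wavenumber
    total = np.zeros(len(h), dtype=complex)
    for m in range(1, nterms + 1):
        total += np.abs(k) ** (m - 1) / factorial(m) * np.fft.fft(h ** m)
    spec = 2.0 * np.pi * G * rho * np.exp(-np.abs(k) * z0) * total
    return np.fft.ifft(spec).real
```

For a flat surface the series collapses to the Bouguer slab value 2πGρh, which makes a convenient sanity check; the convergence trouble the paper addresses appears when topographic peaks approach the observation level.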


2020 ◽  
Author(s):  
Santiago Rubén Soler ◽  
Leonardo Uieda

We present a new strategy for gravity and magnetic data interpolation and processing. Our method is based on the equivalent layer technique (EQL) and produces more accurate interpolations when compared with similar EQL methods. It also reduces the computation time and memory requirements, both of which have been severe limiting factors.

The equivalent layer technique (also known as equivalent source, radial basis functions, or Green’s functions interpolation) is used to predict the value of gravity and magnetic fields (or transformations thereof) at any point based on the data gathered on some observation points. It consists in estimating a source distribution that produces the same field as the one measured and using this estimate to predict new values. It generally outperforms other general-purpose 2D interpolators, like the minimum curvature or bi-harmonic splines, because it takes into account the height of measurements and the fact that these fields are harmonic functions. Nevertheless, defining a layout for the source distribution used by the EQL is not trivial and plays an important role in the quality of the predictions.

The most widely used source distributions are: (a) a regular grid of point sources and (b) one point source beneath each observation point. We propose a new source distribution: (c) divide the area into blocks, calculate the average location of observation points inside each block, and place one point source beneath each average location. This produces a smaller number of point sources in comparison with the other source distributions, effectively reducing the computational load. Traditionally, the source points are located: (i) all at the same depth or (ii) each source point at a constant relative depth beneath its corresponding observation point. Besides these two, we also considered (iii) a variable relative depth for each source point proportional to the median distance to its nearest neighbours. The combination of source distributions and depth configurations leads to seven different source layouts (the regular grid is only compatible with the constant depth configuration).

We have scored the performance of each configuration by interpolating synthetic ground and airborne gravity data, and comparing the interpolation against the true values of the model. The block-averaged source layout (c) with variable relative depth (iii) produces more accurate interpolation results (R² of 0.97 versus R² of 0.63 for the traditional grid layout) in less time than the alternatives (from 2 to 10 times faster on our test cases). These results are consistent between ground and airborne survey layouts. Our conclusions can be extrapolated to other applications of equivalent layers, such as upward continuation, reduction-to-the-pole, and derivative calculation. What is more, we expect that these optimizations can benefit similar spatial prediction problems beyond gravity and magnetic data.

The source code developed for this study is based on the EQL implementation available in Harmonica (fatiando.org/harmonica), an open-source Python library for modelling and processing gravity and magnetic data.
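The block-averaged layout (c) with variable relative depth (iii) can be sketched as follows. This is an illustrative re-implementation, not the Harmonica API; the function name and the choice of the three nearest neighbours are our assumptions:

```python
import numpy as np

# Hedged sketch: one equivalent source beneath the mean location of the
# observation points in each block, with a per-source relative depth
# proportional to the median distance to its nearest neighbours.
def block_averaged_sources(easting, northing, upward, block_size, depth_factor=1.0):
    """Return source coordinates (easting, northing, upward) per block."""
    ix = np.floor(easting / block_size).astype(int)
    iy = np.floor(northing / block_size).astype(int)
    blocks = {}
    for i, key in enumerate(zip(ix.tolist(), iy.tolist())):
        blocks.setdefault(key, []).append(i)
    se = np.array([easting[idx].mean() for idx in blocks.values()])
    sn = np.array([northing[idx].mean() for idx in blocks.values()])
    sh = np.array([upward[idx].mean() for idx in blocks.values()])
    # Variable relative depth (assumes at least two sources exist).
    d = np.hypot(se[:, None] - se[None, :], sn[:, None] - sn[None, :])
    np.fill_diagonal(d, np.inf)
    d.sort(axis=1)
    k = min(3, len(se) - 1)
    depth = depth_factor * np.median(d[:, :k], axis=1)
    return se, sn, sh - depth
```

Because the number of sources scales with the number of blocks rather than the number of observations, the Jacobian of the equivalent-layer fit shrinks accordingly, which is where the reported speed and memory gains come from.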


Geophysics ◽  
1975 ◽  
Vol 40 (6) ◽  
pp. 981-992 ◽  
Author(s):  
B. K. Bhattacharyya ◽  
M. E. Navolio

The magnetic and gravitational potentials and fields due to arbitrarily shaped bodies with homogeneous magnetization and uniform density distribution are expressed as a convolution of the source geometry and the Green’s function. The Green’s function depends on the location of the observation point and on either the magnetization vector (in the case of the magnetic field) or the density (in the case of the gravitational attraction). A fast digital convolution algorithm is used for efficiently and accurately calculating anomalies caused by irregular bodies. The shapes of the calculated anomalies faithfully reproduce the exact shapes when the sampling interval selected for digitizing the source geometry and the Green’s function is less than one‐tenth of the depth of the source. In the digital convolution method for computing anomalies, it is unnecessary, for any given structure, to perform analytical integration of the dipolar magnetic field or the gravitational field of a point mass. One of the examples given in the paper deals with the computation of the magnetic anomaly due to the irregularly shaped Round Butte Laccolith, Montana. The results are found to be in satisfactory agreement with the observed aeromagnetic data. A new method is also described for calculating the magnetization vector associated with the laccolith and the datum level of the magnetic observations.
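The convolution idea can be shown in its simplest setting: a profile of point masses (the digitized source geometry) convolved with the point-mass Green's function for vertical attraction. The paper works with full 3D bodies and a fast digital convolution algorithm; this 1D sketch, with names of our choosing, is only illustrative:

```python
import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

# Hedged sketch: anomaly = (digitized source) * (Green's function), here a
# 1D discrete convolution for masses a fixed depth below the stations.
def gravity_by_convolution(mass, dx, depth):
    """Vertical attraction at each station from masses dx apart at depth."""
    n = len(mass)
    offsets = np.arange(-(n - 1), n) * dx                 # station offsets
    green = G * depth / (offsets**2 + depth**2) ** 1.5    # point-mass kernel
    return np.convolve(mass, green, mode="valid")         # one value/station
```

A single point mass reproduces G·m/r² directly beneath it, and no analytic integration over the body is needed, which is the point the abstract makes.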


Geophysics ◽  
2011 ◽  
Vol 76 (4) ◽  
pp. L29-L34 ◽  
Author(s):  
Zhen Jia ◽  
Shiguo Wu

We summarized and revised existing forward modeling methods for calculating the gravity- and magnetic-field components and their partial derivatives of a 2D homogeneous source with a polygonal cross section. The responses of interest include the gravity-field components and their first- and second-order partial derivatives and the magnetic-field components and their first-order partial derivatives. The revised formulas consist of several basic quantities that are common to all the formulations. A singularity appears when the observation point coincides with a polygon vertex. This singularity is removable for the gravity formulas but not for the others. The compact forms of the revised formulas make them easy to implement. We compare the gravity- and magnetic-field components and their partial derivatives produced by a 2D prism whose polygonal cross section approximates a cylinder with the corresponding analytical fields and partial derivatives of the cylinder. The close agreement between the two data sets confirms the reliability of the revised formulas.
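For context, the classic line-integral evaluation (Hubbert/Talwani style) of the vertical gravity of such a 2D polygonal body is g_z = 2Gρ ∮ z dθ, summed edge by edge. The sketch below implements only this field component, not the paper's revised formulas for the derivatives and magnetic components, and all names are ours:

```python
import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

# Hedged sketch: g_z of a 2D body with polygonal cross-section via the
# edge-wise line integral of z dtheta. z is depth, positive down; vertices
# must be ordered clockwise as drawn with x right and z down. Edges crossing
# the station's negative x-axis need atan2 branch-cut care (omitted here).
def gravity_2d_polygon(xv, zv, rho, x0=0.0, z0=0.0):
    """g_z (m/s^2, positive down) at station (x0, z0)."""
    x = np.asarray(xv, float) - x0
    z = np.asarray(zv, float) - z0
    total = 0.0
    nv = len(x)
    for i in range(nv):
        x1, z1 = x[i], z[i]
        x2, z2 = x[(i + 1) % nv], z[(i + 1) % nv]
        r1sq, r2sq = x1 * x1 + z1 * z1, x2 * x2 + z2 * z2
        if r1sq == 0.0 or r2sq == 0.0:
            raise ValueError("station on a vertex (singular point)")
        dtheta = np.arctan2(z2, x2) - np.arctan2(z1, x1)
        if z2 == z1:                      # horizontal edge: take the limit
            total += z1 * dtheta
            continue
        alpha = (x2 - x1) / (z2 - z1)
        beta = (x1 * z2 - x2 * z1) / (z2 - z1)
        total += beta / (1.0 + alpha * alpha) * (
            0.5 * np.log(r2sq / r1sq) - alpha * dtheta
        )
    return 2.0 * G * rho * total
```

Approximating a buried cylinder by a many-sided polygon and comparing against the analytic value 2πGρR²/d mirrors the validation strategy described in the abstract.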


1986 ◽  
Vol 228 (1252) ◽  
pp. 317-353 ◽  

This paper offers a quantitative theory of the length of food chains. The theory derives from a mathematical model of community food webs called the cascade model. The paper tests the predictions against data from real webs. An exact formula for the expected number of chains of each length in a model web with any given finite number, S , of species is, to our knowledge, the first exactly derived theory of the length of food chains. Since the numbers of chains of different lengths are dependent in the cascade model, we evaluate the goodness of fit between the observed and predicted numbers of chains by a Monte Carlo method. Without fitting any free parameters, and using no direct information about chain lengths other than that implied by the total number of species and the total number of links in a web, we find that the cascade model describes acceptably the observed numbers of chains of each length in all but 16 or 17 of 113 webs. Of 62 webs previously used to test the cascade model, the cascade model describes acceptably the chain lengths in all but 11 or 12. With a fresh batch of 51 webs, we establish first that (apart from two outlying webs) the numbers of links are very nearly proportional to the numbers of species and that the constant of proportionality is consistent with that in the original 62 webs. This finding verifies the so-called species–link scaling law with new data. The cascade model describes acceptably the chain lengths of all but 5 of the 51 new webs. Most of the 16 or 17 webs with chain lengths described poorly by the cascade model have unusually large average chain lengths (greater than 4 links) or unusually small average chain lengths (fewer than 2 links).
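The Monte Carlo evaluation rests on drawing random webs from the cascade model and counting chains of each length. A minimal sketch (species are ordered and each pair i < j is linked with a fixed probability; the paper ties that probability to the observed number of links, and all names here are illustrative):

```python
import random
from collections import Counter

# Hedged sketch of the cascade model and maximal-chain counting.
def cascade_web(S, p, rng):
    """preds[i] lists the predators of species i (each j > i with prob p)."""
    preds = {i: [] for i in range(S)}
    for i in range(S):
        for j in range(i + 1, S):
            if rng.random() < p:
                preds[i].append(j)
    return preds

def chain_length_counts(preds):
    """Count maximal chains (basal-to-top paths) by number of links."""
    eaters = set()
    for plist in preds.values():
        eaters.update(plist)            # species that eat something
    basal = [s for s in preds if s not in eaters]
    counts = Counter()

    def walk(node, nlinks):
        if not preds[node]:             # top species: the chain ends here
            if nlinks > 0:
                counts[nlinks] += 1
            return
        for pred in preds[node]:
            walk(pred, nlinks + 1)

    for b in basal:
        walk(b, 0)
    return counts
```

Repeating `cascade_web` many times and pooling `chain_length_counts` gives the simulated distribution against which observed chain-length tallies can be scored, in the spirit of the paper's goodness-of-fit test.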


2021 ◽  
Vol 58 (03) ◽  
pp. 262-273
Author(s):  
Arun Kumar Attkan ◽  
M. S. Alam ◽  
Angam Raleng ◽  
Y. K. Yadav

Onion slices were dried in a low-humidity, air-assisted hybrid solar dryer. Drying occurred in the falling-rate period, and the drying rate varied with the initial moisture content of the samples. The effects of different drying air temperatures (50, 60, 70°C) and KMS pre-treatments (0.1, 0.3, 0.5%) on the drying characteristics of onion slices were also studied. Eight thin-layer drying models (Newton, Page, Modified Page, Exponential, Asymptotic, Logistic, Wang and Singh, and two-term exponential) were investigated, and the results were compared for goodness of fit in terms of the coefficient of correlation (r), standard error (es), and mean square of the deviation (χ²). The drying characteristics of onion slices were best described by the Page model for hybrid solar drying, with values of the coefficient of correlation (0.9962–0.9999), standard error (0.0048–0.0431), and χ² (5.98 × 10⁻⁵ to 2.16 × 10⁻³). Effective moisture diffusivity values of onion slices ranged from 1.33 × 10⁻⁸ to 2.49 × 10⁻⁸ m²·s⁻¹ for the drying conditions under investigation.
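Fitting a thin-layer model of this family can be sketched with the Page model, MR = exp(−k·tⁿ), linearised as ln(−ln MR) = ln k + n·ln t and scored with the coefficient of correlation, as in the study. The data below are synthetic and the names are ours:

```python
import numpy as np

# Hedged sketch: fit the Page thin-layer drying model by linearisation and
# report the correlation coefficient between data and prediction.
def fit_page(t, mr):
    """Return (k, n, r) for MR = exp(-k * t**n); mr must lie in (0, 1)."""
    y = np.log(-np.log(mr))
    x = np.log(t)
    n, lnk = np.polyfit(x, y, 1)
    k = np.exp(lnk)
    pred = np.exp(-k * t**n)
    r = np.corrcoef(mr, pred)[0, 1]
    return k, n, r

t = np.linspace(0.25, 4.0, 16)        # drying time, h (synthetic)
mr = np.exp(-0.5 * t**1.2)            # synthetic moisture ratio
k, n, r = fit_page(t, mr)
```

The same template, with a different model function and a nonlinear solver where linearisation is unavailable, covers the other seven models compared in the paper.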


2020 ◽  
Vol 50 (5) ◽  
Author(s):  
Osman Özbek

ABSTRACT: Vertical screw conveyors have low energy efficiency, but this is generally acceptable given their normally low power range. Previously, a fuzzy logic approach was used to model volumetric efficiency and specific energy consumption in screw conveyors. The performance of conveyors under different working conditions and the geometry of the screw were studied. It was reported that increasing the screw speed, pitch, and loading angle also increases specific energy consumption. In this study, an intelligent fuzzy model based on the Mamdani approach was developed to predict volumetric efficiency and specific energy consumption. The model inputs included the slope, speed, and pitch of the screw conveyors. The fuzzy model consists of 27 rules; three parameters, namely the goodness of fit (η), relative error (ε), and coefficient of correlation (R), are used to evaluate the model. The goodness of fit, relative error, and coefficient of correlation values were 0.986, 5.28%, and 0.99, respectively, for volumetric efficiency and 0.987, 4.93%, and 0.99, respectively, for specific energy consumption. Results revealed that the developed model is capable of predicting volumetric efficiency and specific energy consumption in barley transport under different working conditions with high accuracy.
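A Mamdani-style inference step can be sketched minimally: triangular membership functions, min for rule firing, max aggregation, and centroid defuzzification. Two normalized inputs and three rules stand in for the study's three inputs and 27 rules; all sets, rules, and names below are illustrative, not the published model:

```python
import numpy as np

# Hedged, minimal Mamdani inference sketch for a crisp efficiency estimate.
def tri(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def predict_efficiency(speed, pitch):
    """Crisp volumetric-efficiency estimate for inputs scaled to [0, 1]."""
    y = np.linspace(0.0, 1.0, 501)              # output universe
    low = lambda v: tri(v, -0.5, 0.0, 0.5)
    high = lambda v: tri(v, 0.5, 1.0, 1.5)
    agg = np.zeros_like(y)
    # IF speed low AND pitch low THEN efficiency high
    agg = np.maximum(agg, np.minimum(min(low(speed), low(pitch)), high(y)))
    # IF speed high THEN efficiency low
    agg = np.maximum(agg, np.minimum(high(speed), low(y)))
    # IF pitch high THEN efficiency low
    agg = np.maximum(agg, np.minimum(high(pitch), low(y)))
    if agg.sum() == 0.0:
        return 0.5                               # no rule fired
    return float((y * agg).sum() / agg.sum())    # centroid defuzzification
```

Scaling this pattern to three inputs with three sets each yields the 27-rule base of the study; libraries such as scikit-fuzzy automate the bookkeeping.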


Geophysics ◽  
1987 ◽  
Vol 52 (1) ◽  
pp. 94-107 ◽  
Author(s):  
V. J. S. Grauch

Terrain effects in aeromagnetic data are produced by rugged, magnetic topography. These effects mimic the shape of topography and can often be so large that they obscure anomalies of interest. Thus it is desirable to remove terrain effects from aeromagnetic data in order to isolate the anomalies to be investigated. However, removal of aeromagnetic terrain effects has been a longstanding problem. Previously developed methods have succeeded only in certain specific geologic situations. I present a new aeromagnetic terrain‐correction method that is superior to the previously developed methods for the general case. This method takes into account the highly variable magnetic properties of rocks and can remove terrain effects whether the sources of interest are shallow or deep. The new method is based on the assumption that magnetic sources of interest are often geometrically unrelated to terrain. It finds the magnetization that gives a magnetic‐field residual with minimum correlation to terrain effects for a window of data within a grid of magnetic‐field values. By repeating the calculation for windows covering the entire grid, a grid of variable‐magnetization values is produced which is combined with topography to calculate a magnetic‐terrain correction. The variable‐magnetization method was extensively tested using theoretical models (where the answer is known) and using real data from the Lake City caldera area in the San Juan Mountains of southern Colorado. The tests demonstrated the method’s effectiveness in removing terrain effects from aeromagnetic data. Valid terrain corrections were not obtained where anomalies of interest correlated with terrain effects. However, these places are readily recognizable and easily corrected by editing some of the magnetization values.
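The core of the window calculation can be sketched as follows: for a terrain effect computed with unit magnetization, the magnetization that leaves the residual uncorrelated with the terrain effect has the closed form m = cov(field, terrain)/var(terrain). This is our illustrative reading of the minimum-correlation criterion, with synthetic data and names of our choosing:

```python
import numpy as np

# Hedged sketch: pick the magnetization that decorrelates the residual
# field from the terrain effect within one data window.
def decorrelating_magnetization(field, unit_terrain_effect):
    f = np.asarray(field, float)
    t = np.asarray(unit_terrain_effect, float)
    return np.cov(f, t)[0, 1] / np.var(t, ddof=1)

# Synthetic window: field = terrain effect scaled by m = 2.5 plus an
# unrelated anomaly, mimicking sources geometrically unrelated to terrain.
rng = np.random.default_rng(1)
terrain = rng.normal(size=200)
field = 2.5 * terrain + rng.normal(size=200)
m = decorrelating_magnetization(field, terrain)
residual = field - m * terrain
```

Repeating this per window yields the grid of variable-magnetization values that, combined with topography, gives the terrain correction; where real anomalies correlate with terrain, the estimate is biased, which matches the caveat in the abstract.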

