Regularized two‐dimensional Fourier gravity inversion method with application to the Silent Canyon caldera, Nevada

Geophysics ◽  
1989 ◽  
Vol 54 (4) ◽  
pp. 486-496 ◽  
Author(s):  
Sharon K. Reamer ◽  
John F. Ferguson

A modification of the 2‐D Fourier gravity inversion method includes regularization and a linear density variation with depth. Explicit downward continuation in the Fourier inversion of gravity observations from mass distributions at depth produces instability in the presence of noise and shallow mass distributions. A data‐adaptive regularization filter tapers the growth of the exponential continuation function. An empirical relationship between the regularization filter parameter and a parametric model of potential-field spectra results in automatic selection of the filter parameter for a given continuation depth. Inversion of synthetic data from a random-noise-contaminated basin-type model produces a depth model that agrees with the synthetic structure, with an rms error commensurate with the data noise. A model of the Silent Canyon caldera, buried beneath Pahute Mesa at the Nevada Test Site, results in a gravity field that agrees with the observations to within a 4 percent rms error. The caldera gravity model supports the hypothesis of a high‐density half‐space (precaldera lithology) beneath a lower-density caldera infill (postcaldera volcanic activity).
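The instability described above can be illustrated with a minimal sketch: downward continuation multiplies each Fourier coefficient by exp(|k|h), which blows up noise at high wavenumbers, and a taper damps that growth. This is only an illustration of the general idea, with a hand-chosen Wiener-style filter form and free parameter `alpha`; it is not the paper's data-adaptive regularization filter.

```python
import numpy as np

def regularized_downward_continuation(gz, dx, h, alpha):
    """Continue a 1D gravity profile gz downward by depth h (same units
    as the sample spacing dx), damping the exponential amplification of
    high wavenumbers with a simple Wiener-style taper controlled by
    alpha. Illustrative sketch only, not the paper's exact filter."""
    n = gz.size
    k = np.abs(np.fft.fftfreq(n, d=dx)) * 2.0 * np.pi  # radial wavenumber
    G = np.fft.fft(gz)
    cont = np.exp(k * h)                    # unstable continuation operator
    taper = 1.0 / (1.0 + alpha * cont**2)   # regularization filter
    return np.real(np.fft.ifft(G * cont * taper))
```

With `alpha = 0` and `h = 0` the operator reduces to the identity; increasing `alpha` trades depth resolution for noise suppression, which is the trade-off the data-adaptive parameter selection automates.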

Geophysics ◽  
2007 ◽  
Vol 72 (3) ◽  
pp. B59-B68 ◽  
Author(s):  
Valeria C. Barbosa ◽  
Paulo T. Menezes ◽  
João B. Silva

We demonstrate the potential of gravity data to detect and locate, at depth, subtle normal faults in the basement relief of a sedimentary basin. This is accomplished by inverting the gravity data under the constraint that the estimated basement relief exhibits local abrupt faults and is smooth elsewhere. We inverted the gravity data from the onshore Almada Basin in northeastern Brazil and mapped several normal faults whose locations and plane geometries were already known from seismic imaging. The inversion method delineated well both discontinuities with small or large slips and a sequence of step faults. Using synthetic data, we performed a systematic search over normal-fault slips versus fault displacement depths to map the fault-detectable region in this space. This mapping helps to assess the ability of gravity inversion to detect normal faults. It shows that normal faults with small [Formula: see text], medium (about [Formula: see text]), and large (about [Formula: see text]) vertical slips can be detected if the maximum midpoint depths of the fault planes are smaller than 1.8, 3.8, and [Formula: see text], respectively.


Geophysics ◽  
2010 ◽  
Vol 75 (3) ◽  
pp. I21-I28 ◽  
Author(s):  
Cristiano M. Martins ◽  
Valeria C. Barbosa ◽  
João B. Silva

We have developed a gravity-inversion method for simultaneously estimating the 3D basement relief of a sedimentary basin and the parameters defining a presumed parabolic decay of the density contrast with depth in a sedimentary pack, assuming prior knowledge about the basement depth at a few points. The sedimentary pack is approximated by a grid of 3D vertical prisms juxtaposed in both horizontal directions of a right-handed coordinate system. The prisms’ thicknesses represent the depths to the basement and are the parameters to be estimated from the gravity data. To estimate the parameters defining the parabolic decay of the density contrast with depth and to produce stable depth-to-basement estimates, we imposed smoothness on the basement depths and proximity between estimated and known depths at boreholes. We applied our method to synthetic data from a simulated complex 3D basement relief with two sedimentary sections having distinct parabolic laws describing the density-contrast variation with depth. The results provide good estimates of the true parameters of the parabolic law of density-contrast decay with depth and of the basement relief. Inverting the gravity data from the onshore and part of the shallow offshore Almada Basin on Brazil’s northeastern coast shows good correlation with known structural features.
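The forward step for a depth-dependent density contrast can be sketched by slicing each column into thin horizontal slabs and summing their Bouguer effects. The infinite-slab approximation and the particular decay law `rho0 / (1 + beta*z)**2` used here are illustrative assumptions; the paper's actual 3D prism formula and parabolic parameterization differ.

```python
import numpy as np

G_SI = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def slab_effect_parabolic(thickness, rho0, beta, nz=200):
    """Approximate gravity effect (m/s^2) of a sedimentary column of
    given thickness (m) whose density contrast decays with depth z as
    rho0 / (1 + beta*z)**2 (a hypothetical parabolic-type law).
    The column is sliced into nz thin slabs, each contributing the
    Bouguer term 2*pi*G*drho*dz; contributions are summed over depth."""
    z = np.linspace(0.0, thickness, nz + 1)
    zmid = 0.5 * (z[:-1] + z[1:])          # slab midpoints
    dz = np.diff(z)                        # slab thicknesses
    drho = rho0 / (1.0 + beta * zmid) ** 2 # density contrast per slab
    return 2.0 * np.pi * G_SI * np.sum(drho * dz)
```

Setting `beta = 0` recovers the constant-contrast Bouguer slab, a useful sanity check before inverting for the decay parameters.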


2021 ◽  
Vol 13 (15) ◽  
pp. 2967
Author(s):  
Nicola Acito ◽  
Marco Diani ◽  
Gregorio Procissi ◽  
Giovanni Corsini

Atmospheric compensation (AC) allows retrieval of the reflectance from the measured at-sensor radiance and is a fundamental and critical task for the quantitative exploitation of hyperspectral data. Recently, a learning-based (LB) approach, named LBAC, was proposed for the AC of airborne hyperspectral data in the visible and near-infrared (VNIR) spectral range. LBAC uses a parametric regression function whose parameters are learned with a strategy based on synthetic data that accounts for (1) a physics-based model of radiative transfer, (2) the variability of surface reflectance spectra, and (3) the effects of random noise and spectral miscalibration errors. In this work we extend LBAC in two respects: (1) the platform for data acquisition and (2) the spectral range covered by the sensor. In particular, we propose the extension of LBAC to spaceborne hyperspectral sensors operating in the VNIR and short-wave infrared (SWIR) portions of the electromagnetic spectrum. We specifically refer to the sensor of PRISMA (PRecursore IperSpettrale della Missione Applicativa), a recent Earth Observation mission of the Italian Space Agency that offers a great opportunity to improve knowledge of the scientific and commercial applications of spaceborne hyperspectral data. In addition, we introduce a curve-fitting procedure for estimating the columnar water vapor content of the atmosphere that directly exploits the reflectance data provided by LBAC. Results obtained on four different PRISMA hyperspectral images are presented and discussed.



Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. V79-V86 ◽  
Author(s):  
Hakan Karsli ◽  
Derman Dondurur ◽  
Günay Çifçi

Time-dependent amplitude and phase information of stacked seismic data are processed independently using complex trace analysis in order to facilitate interpretation by improving resolution and decreasing random noise. We represent seismic traces using their envelopes and instantaneous phases obtained by the Hilbert transform. The proposed method reduces the amplitudes of the low-frequency components of the envelope, while preserving the phase information. Several tests are performed in order to investigate the behavior of the present method for resolution improvement and noise suppression. Applications on both 1D and 2D synthetic data show that the method is capable of reducing the amplitudes and temporal widths of the side lobes of the input wavelets, and hence, the spectral bandwidth of the input seismic data is enhanced, resulting in an improvement in the signal-to-noise ratio. The bright-spot anomalies observed on the stacked sections become clearer because the output seismic traces have a simplified appearance allowing an easier data interpretation. We recommend applying this simple signal processing for signal enhancement prior to interpretation, especially for single channel and low-fold seismic data.
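The envelope/phase decomposition described above can be sketched with the Hilbert transform: the trace is split into instantaneous amplitude and phase, the low-frequency content of the envelope is attenuated, and the trace is rebuilt from the modified envelope and the original phase. The generic Butterworth high-pass used here is an assumption for illustration; the published method's envelope filtering differs.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def sharpen_trace(trace, fs, cutoff_hz):
    """Reduce the low-frequency components of a seismic trace's
    envelope while preserving instantaneous phase. fs is the sampling
    rate in Hz; cutoff_hz is the high-pass corner applied to the
    envelope. Minimal sketch of envelope/phase processing."""
    analytic = hilbert(trace)
    env = np.abs(analytic)                 # instantaneous amplitude
    phase = np.angle(analytic)             # instantaneous phase
    b, a = butter(4, cutoff_hz / (fs / 2.0), btype="high")
    env_hp = filtfilt(b, a, env)           # zero-phase envelope filtering
    env_new = np.clip(env_hp, 0.0, None)   # envelopes are non-negative
    return env_new * np.cos(phase)         # rebuild the real trace
```

Because only the envelope is filtered and the phase is kept, arrival times are preserved while side-lobe energy in the envelope is suppressed.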


2021 ◽  
Author(s):  
Francesca Maddaloni ◽  
Damien Delvaux ◽  
Magdala Tesauro ◽  
Taras Gerya ◽  
Carla Braitenberg

The Congo Basin (CB), considered a typical intracratonic basin because of its slow, long-lived subsidence history and largely unknown formation mechanisms, occupies a large part of the Congo craton, which derived from the amalgamation of different cratonic pieces. It records up to one billion years of sediment deposition above a metamorphic basement, one of the longest geological records on Earth. The CB most probably initiated as a failed rift in the late Mesoproterozoic and evolved during the Neoproterozoic and Phanerozoic under the influence of far-field compressional tectonic events, global climate fluctuations between icehouse and greenhouse conditions, and the drifting of Central Africa through the South Pole toward its present-day equatorial position. Since the Cretaceous, the CB has been subjected to an intraplate compressional setting caused by ridge-push forces related to the spreading of the South Atlantic Ocean, under which most sediments are being eroded and accumulate only in the center of the basin.

In this study, we first reconstructed the stratigraphy, the depths of the main seismic horizons, and the tectonic history of the CB using geological and exploration geophysical data. In particular, we interpreted about 2600 km of seismic reflection profiles and well-log data from the central area of the CB (Cuvette Centrale). We used these results to constrain our analysis of the gravity field data, in order to reconstruct the depth of the basement and investigate the shallow crustal structure of the basin. To this purpose, we used a gravity inversion method with two different density contrasts between the surface sediments and crystalline rocks.

The results evidence NW-SE-trending structures, also revealed by magnetic and seismic data, corresponding to an alternation of highs and sediment-filled depressions related to rift structures that characterize the first stage of evolution of the CB. They also show generally good consistency between the seismic and gravity basement along the seismic profiles, and they evidence possible high-density bodies in the shallow to deep crust. The identified structures are predominantly the product of extensional tectonics, which likely acted in more than one direction.

Therefore, we performed 3D numerical simulations to test the hypothesis that the CB formed as a multidirectional rift in a cratonic area, using the thermomechanical I3ELVIS code, which combines a finite-difference method on a uniformly spaced Eulerian staggered grid with the marker-in-cell technique. The tests consider a sub-circular weak zone in the central part of the cratonic lithosphere and apply a velocity of 2.5 cm/yr in two orthogonal directions (N-S and E-W). We repeated these tests, increasing the size of the weak zone and varying its lithospheric thickness. The results show the formation of a circular basin in the central part of the cratonic lithosphere, characterized by a series of highs and depressions consistent with those obtained from the geophysical/geological reconstructions.


2021 ◽  
Author(s):  
Kyubo Noh ◽  
Carlos Torres-Verdín ◽  
David Pardo ◽  
...  

We develop a Deep Learning (DL) inversion method for the interpretation of 2.5-dimensional (2.5D) borehole resistivity measurements that requires negligible online computational cost. The method is successfully verified with the inversion of triaxial LWD resistivity measurements acquired across faulted and anisotropic formations. Our DL inversion workflow employs four independent DL architectures. The first identifies the type of geological structure among several predefined types. Subsequently, the second, third, and fourth architectures estimate the corresponding spatial resistivity distributions that are parameterized (1) without crossings of bed boundaries or the fault plane, (2) with the crossing of a bed boundary but not of a fault plane, and (3) with the crossing of the fault plane, respectively. Each DL architecture employs convolutional layers and is trained with synthetic data obtained from an accurate high-order, mesh-adaptive finite-element forward numerical simulator. Numerical results confirm the importance of using multicomponent resistivity measurements, specifically cross-coupling resistivity components, for the successful reconstruction of 2.5D resistivity distributions adjacent to the well trajectory. The feasibility and effectiveness of the developed inversion workflow are assessed with two synthetic examples inspired by actual field measurements. Results confirm that the proposed DL method successfully reconstructs 2.5D resistivity distributions, the locations and dip angles of bed boundaries, and the location of the fault plane, and it is therefore reliable for real-time well geosteering applications.


Geophysics ◽  
2021 ◽  
pp. 1-54
Author(s):  
Jie Liu ◽  
Jianzhong Zhang

Gravity inversion, as a static potential-field inversion, is inherently ambiguous and has low vertical resolution. To reduce the nonuniqueness of the inversion, it is necessary to impose a priori constraints derived from other geophysical inversions, drilling, or geological modeling. Based on a priori normalized gradients derived from seismic imaging or reference models, we developed a structure-guided gravity inversion method with a few known point constraints for mapping density in multiple layers. Cubic B-spline interpolation is used to parameterize the forward calculation of the gravity response to smooth density fields. A recently proposed summative gradient is used to maximize the structural similarity between the a priori and inverted models. We first present the methodology and then confirm its validity with a synthetic fault-model example. Monte Carlo tests and uncertainty tests further illustrate the stability and practicality of the method. The method is easy to implement and produces an interpretable density model with geological consistency. Finally, we apply it to density modeling of the Chezhen Depression in the Bohai Bay Basin. Our work determines the distribution of the deep Lower Paleozoic carbonate rocks and Archean buried hills with high-density characteristics. Our results are consistent with the existing formation mechanism of the “upper source-lower reservoir” type of oil-gas targets.
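The idea of rewarding structural similarity through gradients can be sketched with a cross-gradient-style functional on normalized model gradients: the term vanishes wherever the inverted and reference models have parallel (or zero) gradients, i.e., shared structural boundaries. This is a generic stand-in for the concept, not the paper's exact summative-gradient definition.

```python
import numpy as np

def structural_misfit(model, reference, eps=1e-12):
    """Scalar measure of structural dissimilarity between two 2D models
    on the same grid. Gradients are normalized to unit length (eps
    avoids division by zero in flat regions); the cross product of the
    unit gradients is zero where the two models share structure."""
    def unit_grad(m):
        gx, gz = np.gradient(m)
        norm = np.sqrt(gx**2 + gz**2) + eps
        return gx / norm, gz / norm
    mx, mz = unit_grad(model)
    rx, rz = unit_grad(reference)
    cross = mx * rz - mz * rx      # zero when gradients are parallel
    return float(np.mean(cross**2))
```

Adding such a term to the data misfit steers the inverted density toward the structural boundaries of the seismic reference without forcing the density values themselves.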


2019 ◽  
Vol 217 (3) ◽  
pp. 1727-1741 ◽  
Author(s):  
D W Vasco ◽  
Seiji Nakagawa ◽  
Petr Petrov ◽  
Greg Newman

SUMMARY We introduce a new approach for locating earthquakes using arrival times derived from waveforms. The most costly computational step of the algorithm scales as the number of stations in the active seismographic network. In this approach, a variation on existing grid-search methods, a series of full-waveform simulations is conducted for all receiver locations, with sources positioned successively at each station. The traveltime field over the region of interest is calculated by applying a phase-picking algorithm to the numerical wavefields produced by each simulation. An event is located by subtracting the stored traveltime field from the arrival time at each station. This provides a shifted and time-reversed traveltime field for each station. The shifted and time-reversed fields all approach the origin time of the event at the source location. The mean or median value at the source location thus approximates the event origin time. Measures of dispersion about this mean or median time at each grid point, such as the sample standard error and the average deviation, are minimized at the correct source position. Uncertainty in the event position is provided by the contours of standard error defined over the grid. An application of this technique to a synthetic data set indicates that the approach provides stable locations even when the traveltimes are contaminated by velocity-model errors and by additive random noise containing a significant number of outliers. The waveform-based method outperforms one based upon the eikonal equation for a velocity model with rapid spatial variations in properties due to layering. A comparison with conventional location algorithms in both laboratory and field settings demonstrates that the technique performs at least as well as existing techniques.
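Once the traveltime fields are stored, the location step reduces to simple array arithmetic. A sketch following the description above, assuming a 2D grid and precomputed per-station fields (`traveltimes`): subtract each field from its arrival time to get a time-reversed field, then minimize the spread of those fields across stations.

```python
import numpy as np

def locate_event(traveltimes, arrivals):
    """Grid-search event location from stored traveltime fields.
    traveltimes: (nsta, nx, ny) traveltime from each station to every
    grid node (by reciprocity, simulated with the source at the
    station). arrivals: (nsta,) picked arrival times for one event.
    arrival - traveltime gives a candidate origin time at every node;
    the fields agree (small spread) only at the true source.
    Returns (grid index of best node, estimated origin time)."""
    shifted = arrivals[:, None, None] - traveltimes   # time-reversed fields
    spread = np.std(shifted, axis=0)                  # dispersion per node
    idx = np.unravel_index(np.argmin(spread), spread.shape)
    t0 = np.median(shifted[:, idx[0], idx[1]])        # robust origin time
    return idx, t0
```

Replacing `np.std` with a robust dispersion measure (e.g., median absolute deviation) gives the outlier resistance discussed in the abstract; the spread map itself supplies the uncertainty contours.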


Geophysics ◽  
2018 ◽  
Vol 83 (5) ◽  
pp. R449-R461 ◽  
Author(s):  
Guanghui Huang ◽  
Rami Nammour ◽  
William W. Symes

Source-signature estimation from seismic data is a crucial ingredient for the successful application of seismic migration and full-waveform inversion (FWI). If the starting velocity deviates from the target velocity, FWI with on-the-fly source estimation may fail because of the cycle-skipping problem. We have developed a source-based extended waveform inversion method, introducing additional parameters in the source function, to solve the FWI problem without a priori knowledge of the source signature. Specifically, we allow the point-source function to depend on both spatial and time variables. In this way, we can easily construct an extended source function that fits the recorded data by solving a source-matching subproblem; hence, the method is less prone to cycle skipping. A novel source focusing annihilator, defined as the distance function from the true source position, penalizes the defocused energy in the extended source function. Because it achieves a close data fit while avoiding cycle skipping, the new method is less likely to suffer from local minima and does not require extremely low-frequency signals in the data. Numerical experiments confirm that our method can mitigate cycle skipping in FWI and is robust against random noise.
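The focusing penalty can be sketched as a distance-weighted norm of the extended source: source energy located away from the nominal position contributes in proportion to its distance, and a perfectly focused source incurs no penalty. The grid layout and the squared-distance weighting here are illustrative assumptions, not the paper's exact annihilator.

```python
import numpy as np

def focusing_penalty(ext_source, x, z, xs, zs):
    """Penalty on an extended source w(x, z, t) stored as an array of
    shape (nx, nz, nt). The annihilator is the distance from the
    nominal source location (xs, zs); energy defocused away from that
    point is penalized, energy at the point is not."""
    X, Z = np.meshgrid(x, z, indexing="ij")
    annihilator = np.sqrt((X - xs) ** 2 + (Z - zs) ** 2)  # distance function
    energy = np.sum(ext_source ** 2, axis=-1)             # sum over time
    return float(np.sum(annihilator ** 2 * energy))
```

Minimizing the data misfit plus this penalty drives the extended source to collapse back toward a point source as the velocity model improves.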

