Simultaneous multifrequency inversion of full-waveform seismic data

Geophysics ◽  
2009 ◽  
Vol 74 (2) ◽  
pp. R1-R14 ◽  
Author(s):  
Wenyi Hu ◽  
Aria Abubakar ◽  
Tarek M. Habashy

We present a simultaneous multifrequency inversion approach for seismic data interpretation. This algorithm inverts all frequency data components simultaneously. A data-weighting scheme balances the contributions from different frequency data components so that the inversion process is not dominated by the high-frequency components, which would otherwise produce a velocity image with many artifacts. A Gauss-Newton minimization approach achieves a high convergence rate and an accurate reconstructed velocity image. By introducing a modified adjoint formulation, we can calculate the Jacobian matrix efficiently, allowing the material properties in the perfectly matched layers (PMLs) to be updated automatically during the inversion process. This feature ensures the correct behavior of the inversion and implies that the algorithm is appropriate for realistic applications where a priori information about the background medium is unavailable. Two different regularization schemes, an [Formula: see text]-norm and a weighted [Formula: see text]-norm function, are used in this algorithm for smooth profiles and for profiles with sharp boundaries, respectively. The regularization parameter is determined automatically and adaptively by the so-called multiplicative regularization technique. To test the algorithm, we reconstruct the Marmousi velocity model from synthetic data generated by a finite-difference time-domain code. These numerical results indicate that the inversion algorithm is robust with respect to the starting model and to noise, and that under some circumstances it is more robust than a traditional sequential inversion approach.
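The data-weighting idea described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation; it simply shows one common way to balance a multifrequency misfit, weighting each frequency component's residual by the inverse energy of its observed data so that equal relative errors contribute equally:

```python
import numpy as np

def balanced_misfit(residuals, observed):
    """Weighted least-squares misfit in which each frequency component
    is normalized by the energy of its observed data, so no single
    (typically high-amplitude, high-frequency) component dominates."""
    total = 0.0
    for r, d in zip(residuals, observed):
        w = 1.0 / np.linalg.norm(d) ** 2      # per-frequency weight
        total += w * np.linalg.norm(r) ** 2   # weighted residual energy
    return total

# toy example: two frequency components with very different amplitudes
d_low = np.array([1.0, 2.0, 1.5])             # low-frequency data
d_high = 100.0 * np.array([1.0, -1.0, 0.5])   # high-frequency data
r_low = 0.1 * d_low                           # 10% relative residual each
r_high = 0.1 * d_high
misfit = balanced_misfit([r_low, r_high], [d_low, d_high])
# each component contributes 0.01 despite the 100x amplitude difference
```

Without the weights, the high-frequency term would dominate the objective by four orders of magnitude; with them, both terms are 0.01.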

Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. V79-V86 ◽  
Author(s):  
Hakan Karsli ◽  
Derman Dondurur ◽  
Günay Çifçi

Time-dependent amplitude and phase information of stacked seismic data are processed independently using complex trace analysis in order to facilitate interpretation by improving resolution and decreasing random noise. We represent seismic traces using their envelopes and instantaneous phases obtained by the Hilbert transform. The proposed method reduces the amplitudes of the low-frequency components of the envelope while preserving the phase information. Several tests are performed in order to investigate the behavior of the method for resolution improvement and noise suppression. Applications on both 1D and 2D synthetic data show that the method is capable of reducing the amplitudes and temporal widths of the side lobes of the input wavelets; hence, the spectral bandwidth of the input seismic data is enhanced, resulting in an improvement in the signal-to-noise ratio. The bright-spot anomalies observed on the stacked sections become clearer because the output seismic traces have a simplified appearance allowing easier data interpretation. We recommend applying this simple signal-processing step for signal enhancement prior to interpretation, especially for single-channel and low-fold seismic data.
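The envelope/instantaneous-phase decomposition used above is standard complex trace analysis and can be sketched with `scipy.signal.hilbert` (a minimal illustration of the decomposition itself, not of the authors' envelope filtering):

```python
import numpy as np
from scipy.signal import hilbert

def envelope_and_phase(trace):
    """Complex trace analysis: the analytic signal from the Hilbert
    transform yields the envelope (instantaneous amplitude) and the
    instantaneous phase of a seismic trace."""
    analytic = hilbert(trace)
    return np.abs(analytic), np.angle(analytic)

def reconstruct(envelope, phase):
    """A trace can be re-synthesized from a (possibly filtered)
    envelope and the preserved instantaneous phase."""
    return envelope * np.cos(phase)

# a 30 Hz cosine sampled at 1 kHz: its envelope is 1 everywhere,
# and envelope * cos(phase) reproduces the trace exactly
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
trace = np.cos(2 * np.pi * 30 * t)
env, phase = envelope_and_phase(trace)
out = reconstruct(env, phase)
```

In the method described above, the envelope would be filtered before the reconstruction step, while the phase is passed through unchanged.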


Geophysics ◽  
2002 ◽  
Vol 67 (6) ◽  
pp. 1753-1768 ◽  
Author(s):  
Yuji Mitsuhata ◽  
Toshihiro Uchida ◽  
Hiroshi Amano

Interpretation of controlled‐source electromagnetic (CSEM) data is usually based on 1‐D inversions, whereas data of direct current (dc) resistivity and magnetotelluric (MT) measurements are commonly interpreted by 2‐D inversions. We have developed an algorithm to invert frequency‐domain vertical magnetic data generated by a grounded‐wire source for a 2‐D model of the earth—a so‐called 2.5‐D inversion. To stabilize the inversion, we adopt a smoothness constraint for the model parameters and adjust the regularization parameter objectively using a statistical criterion. A test using synthetic data from a realistic model reveals that a single source is insufficient to recover an acceptable result. In contrast, the joint use of data generated by a left‐side source and a right‐side source dramatically improves the inversion result. We applied our inversion algorithm to a field data set, which was transformed from long‐offset transient electromagnetic (LOTEM) data acquired in a Japanese oil and gas field. As demonstrated by the synthetic data set, the inversion of the joint data set automatically converged and provided a better resultant model than that of the data generated by each source. In addition, our 2.5‐D inversion accounted for the reversals in the LOTEM measurements, which is impossible using 1‐D inversions. The shallow parts (above about 1 km depth) of the final model obtained by our 2.5‐D inversion agree well with those of a 2‐D inversion of MT data.


Geophysics ◽  
1999 ◽  
Vol 64 (2) ◽  
pp. 494-503 ◽  
Author(s):  
Wenjie Dong

The [Formula: see text] of hydrocarbon‐bearing sediments normally deviates from the [Formula: see text] trend of the background rocks. This causes anomalous reflection amplitude variation with offset (AVO) in the seismic data. The estimation of these AVOs is inevitably affected by wave propagation effects and inversion algorithm limitations, such as thin‐bed tuning and migration stretch. A key question, then, is the minimum [Formula: see text] change required for an anomalous AVO to be detectable beyond the background tuning and stretching effects. Assuming a Ricker wavelet for the seismic data, this study addresses this question by quantifying the errors in the intercept/slope estimate. Using these results, two detectability conditions are derived. Denoting the background [Formula: see text] by γ and its variation by δγ, the thin‐bed parameter (thickness/wavelength) by ξ, the maximum background intercept closest to the AVO by |A|max, and the thin‐bed intercept value by |A|thin, the two conditions are [Formula: see text] [Formula: see text] for detectability against stretching and tuning plus stretching, respectively. Tests on synthetic data confirm their validity and accuracy. These conditions provide a quantitative guideline for evaluating AVO applicability and effectiveness in seismic exploration. They can eliminate some of the subjectivity when interpreting AVO results in different attribute spaces. To improve AVO detectability, a procedure is suggested for removing the tuning and stretching effects.
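The intercept/slope estimates whose errors are analyzed above are conventionally obtained by fitting a two-term AVO approximation to amplitude-versus-angle data. A minimal least-squares sketch, assuming the common form R(θ) ≈ A + B sin²θ (the paper's own estimator and error analysis are more involved):

```python
import numpy as np

def avo_intercept_slope(theta, reflectivity):
    """Least-squares fit of the two-term AVO approximation
    R(theta) = A + B * sin^2(theta); returns intercept A and slope B."""
    G = np.column_stack([np.ones_like(theta), np.sin(theta) ** 2])
    coeffs, *_ = np.linalg.lstsq(G, reflectivity, rcond=None)
    return coeffs[0], coeffs[1]

# hypothetical noise-free gather: angles 0-30 degrees, known A and B
theta = np.radians(np.arange(0.0, 31.0, 5.0))
A_true, B_true = 0.10, -0.25
refl = A_true + B_true * np.sin(theta) ** 2
A, B = avo_intercept_slope(theta, refl)
```

Tuning and stretch perturb the `reflectivity` samples before this fit, which is what biases the recovered A and B in practice.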


Geophysics ◽  
2006 ◽  
Vol 71 (1) ◽  
pp. G1-G9 ◽  
Author(s):  
Aria Abubakar ◽  
Tarek M. Habashy ◽  
Vladimir Druskin ◽  
Leonid Knizhnerman ◽  
Sofia Davydycheva

We develop a parametric inversion algorithm to determine simultaneously the horizontal and vertical resistivities of both the formation and invasion zones, invasion radius, bed boundary upper location and thickness, and relative dip angle from electromagnetic triaxial induction logging data. This is a full 3D inverse scattering problem in transversally isotropic media. To acquire sufficient sensitivity to invert for all of these parameters, we collect the data using a multicomponent, multispacing induction array. For each transmitter-receiver spacing this multicomponent tool has sets of three orthogonal transmitter and receiver coils. At each logging point single-frequency data are collected at multiple spacings to obtain information at different depths of investigation. This inversion problem is solved iteratively with a constrained regularized Gauss-Newton minimization scheme. As documented in the literature, the main computational bottleneck when solving this full 3D inverse problem is the CPU time associated with constructing the Jacobian matrix. In this study, to achieve the inversion results within a reasonable computational time, we implement a dual grid approach wherein the Jacobian matrix is computed using a very coarse optimal grid. Furthermore, to regularize the inversion process we use the so-called multiplicative regularization technique. This technique automatically determines the regularization parameter. Synthetic data tests indicate the developed inversion algorithm is robust in extracting formation and invasion anisotropic resistivities, invasion radii, bed boundary locations, relative dip, and azimuth angle from multispacing, multicomponent induction logging data.


Geophysics ◽  
2016 ◽  
Vol 81 (3) ◽  
pp. V227-V233 ◽  
Author(s):  
Jitao Ma ◽  
Xiaohong Chen ◽  
Mrinal K. Sen ◽  
Yaru Xue

Blended data sets are now being acquired because of improved efficiency and reduction in cost compared with conventional seismic data acquisition. We have developed two methods for blended-data free-surface multiple attenuation. The first method is based on an extension of surface-related multiple elimination (SRME) theory, in which free-surface multiples of the blended data can be predicted by a multidimensional convolution of the seismic data with the inverse of the blending operator. A least-squares inversion method is used, and crosstalk noise appears in the prediction result because the inversion is only approximate. An adaptive subtraction procedure similar to that used in conventional SRME is then applied to obtain the blended primaries; this subtraction can damage the energy of the primaries. The second method is based on inverse data processing (IDP) theory adapted to blended data. We derived a formula similar to that used in conventional IDP, and we attenuated free-surface multiples by simple muting of the focused points in the inverse data space (IDS). The location of the focused points in the IDS for blended data, which can be calculated, is also related to the blending operator. We chose a singular value decomposition-based inversion algorithm to stabilize the inversion in the IDP method. The advantage of IDP compared with SRME is that it does not suffer from crosstalk noise and better preserves the primary energy. The outputs of both methods are blended primaries, which can be further processed using blended data-based algorithms. Synthetic data examples show that the SRME and IDP algorithms for blended data are successful in attenuating free-surface multiples.


Geophysics ◽  
2010 ◽  
Vol 75 (4) ◽  
pp. SA61-SA69 ◽  
Author(s):  
G. J. van Groenestijn ◽  
D. J. Verschuur

For passive seismic data, surface multiples are used to obtain an estimate of the subsurface responses, usually by a crosscorrelation process. This crosscorrelation process relies on the assumption that the surface has been uniformly illuminated by subsurface sources in terms of incident angles and strengths. If this is not the case, the crosscorrelation process cannot give a true amplitude estimation of the subsurface response. Furthermore, cross terms in the crosscorrelation result are not related to actual subsurface inhomogeneities. We have developed a method that can obtain true amplitude subsurface responses without a uniform surface-illumination assumption. Our methodology goes beyond the crosscorrelation process and estimates primaries only from the surface-related multiples in the available signal. We use the recently introduced estimation of primaries by sparse inversion (EPSI) methodology, in which the primary impulse responses are considered to be the unknowns in a large-scale inversion process. With some modifications, the EPSI method can be used for passive seismic data. The output of this process is primary impulse responses with point sources and receivers at the surface, which can be used directly in traditional imaging schemes. The methodology was tested on 2D synthetic data.
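The crosscorrelation process that the EPSI-based method goes beyond can be sketched in a few lines: correlating two passive recordings turns one receiver into a virtual source for the other, with the correlation lag carrying the traveltime. This is a minimal interferometry illustration under idealized assumptions (one shared noise signal, a pure time delay), not the sparse-inversion method of the paper:

```python
import numpy as np

def virtual_source_response(rec_a, rec_b):
    """Crosscorrelate two passive recordings to approximate the response
    at receiver B to a virtual source at receiver A; returns the lag
    axis (in samples) and the normalized correlation."""
    n = len(rec_a)
    corr = np.correlate(rec_b, rec_a, mode="full")
    lags = np.arange(-n + 1, n)
    return lags, corr / np.max(np.abs(corr))

# toy example: B records the same subsurface noise as A, 50 samples later
rng = np.random.default_rng(0)
noise = rng.standard_normal(2000)
rec_a = noise
rec_b = np.roll(noise, 50)
lags, corr = virtual_source_response(rec_a, rec_b)
lag_peak = lags[np.argmax(corr)]   # apparent traveltime A -> B, in samples
```

The true-amplitude limitation discussed above arises because the peak amplitude of this correlation depends on the (generally nonuniform) strength and angular distribution of the noise sources, which the sparse-inversion approach avoids.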


Geophysics ◽  
1983 ◽  
Vol 48 (7) ◽  
pp. 854-886 ◽  
Author(s):  
Ken Larner ◽  
Ron Chambers ◽  
Mai Yang ◽  
Walt Lynn ◽  
Willon Wai

Despite significant advances in marine streamer design, seismic data are often plagued by coherent noise having approximately linear moveout across stacked sections. With an understanding of the characteristics that distinguish such noise from signal, we can decide which noise‐suppression techniques to use and at what stages to apply them in acquisition and processing. Three general mechanisms that might produce such noise patterns on stacked sections are examined: direct and trapped waves that propagate outward from the seismic source, cable motion caused by the tugging action of the boat and tail buoy, and scattered energy from irregularities in the water bottom and sub‐bottom. Depending upon the mechanism, entirely different noise patterns can be observed on shot profiles and common‐midpoint (CMP) gathers; these patterns can be diagnostic of the dominant mechanism in a given set of data. Field data from Canada and Alaska suggest that the dominant noise is from waves scattered within the shallow sub‐bottom. This type of noise, while not obvious on the shot records, is actually enhanced by CMP stacking. Moreover, this noise is not confined to marine data; it can be as strong as surface wave noise on stacked land seismic data as well. Of the many processing tools available, moveout filtering is best for suppressing the noise while preserving signal. Since the scattered noise does not exhibit a linear moveout pattern on CMP‐sorted gathers, moveout filtering must be applied either to traces within shot records and common‐receiver gathers or to stacked traces. Our data example demonstrates that although it is more costly, moveout filtering of the unstacked data is particularly effective because it conditions the data for the critical data‐dependent processing steps of predictive deconvolution and velocity analysis.
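Moveout filtering of the kind recommended above is commonly implemented as a fan (dip) filter in the f-k domain: energy with low apparent velocity, i.e. steep linear moveout, is rejected while flat reflections pass. A minimal numpy sketch under idealized assumptions (regular sampling, a hard pass/reject boundary with no taper):

```python
import numpy as np

def fk_dip_filter(data, dt, dx, v_reject):
    """f-k fan filter: zero every (f, k) sample whose apparent velocity
    |f / k| falls below v_reject, rejecting slow linear-moveout noise
    while passing flat (k = 0) events untouched."""
    nt, nx = data.shape
    spec = np.fft.fft2(data)
    f = np.fft.fftfreq(nt, dt)[:, None]   # temporal frequency, Hz
    k = np.fft.fftfreq(nx, dx)[None, :]   # spatial wavenumber, 1/m
    mask = np.abs(f) >= v_reject * np.abs(k)
    return np.real(np.fft.ifft2(spec * mask))

dt, dx = 0.004, 10.0                      # 4 ms sampling, 10 m trace spacing
flat = np.zeros((128, 32))
flat[50, :] = 1.0                         # flat reflection: k = 0, passes
slow = np.zeros((128, 32))
for ix in range(32):
    slow[2 * ix, ix] = 1.0                # linear event at 1250 m/s
out_flat = fk_dip_filter(flat, dt, dx, v_reject=2000.0)
out_slow = fk_dip_filter(slow, dt, dx, v_reject=2000.0)
```

In practice the reject boundary is tapered to avoid ringing, and, as the abstract notes, the filter is applied in domains (shot or common-receiver gathers, or stacked traces) where the noise moveout is actually linear.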


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because the Hessian must be computed, so an efficient approximation is introduced in which only a limited number of diagonals of the operators involved are computed. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with fewer operator artifacts than a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at roughly two orders of magnitude lower cost, but it is dip limited, in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to those from conventional datuming.
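The weighted, damped least-squares step referred to above has a standard normal-equations form. A minimal dense-matrix sketch (the paper's extrapolation operators are large and structured, which is precisely why the diagonal approximation of the Hessian matters; this toy version forms everything explicitly):

```python
import numpy as np

def weighted_damped_lstsq(G, d, w, eps):
    """Weighted, damped least squares: minimize
    ||W^(1/2) (G m - d)||^2 + eps^2 ||m||^2
    via the damped normal equations (G^T W G + eps^2 I) m = G^T W d.
    G^T W G is the (here explicitly formed) Hessian."""
    W = np.diag(w)
    A = G.T @ W @ G + eps ** 2 * np.eye(G.shape[1])
    b = G.T @ W @ d
    return np.linalg.solve(A, b)

# toy overdetermined system with uniform weights and mild damping
G = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
d = np.array([1.0, 3.0, 2.0])
m = weighted_damped_lstsq(G, d, w=np.ones(3), eps=1e-6)
# with negligible damping this approaches the ordinary LS solution [1, 2]
```

The approximation described in the abstract amounts to keeping only a few diagonals of `A`, so the solve collapses to cheap banded operations at the price of a dip limitation.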


Geophysics ◽  
2016 ◽  
Vol 81 (6) ◽  
pp. A17-A21 ◽  
Author(s):  
Juan I. Sabbione ◽  
Mauricio D. Sacchi

The coefficients that synthesize seismic data via the hyperbolic Radon transform (HRT) are estimated by solving a linear-inverse problem. In the classical HRT, the computational cost of the inverse problem is proportional to the size of the data and the number of Radon coefficients. We have developed a strategy that significantly speeds up the implementation of time-domain HRTs. For this purpose, we have defined a restricted model space of coefficients applying hard thresholding to an initial low-resolution Radon gather. Then, an iterative solver that operated on the restricted model space was used to estimate the group of coefficients that synthesized the data. The method is illustrated with synthetic data and tested with a marine data example.
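The restricted-model-space strategy above can be sketched generically for any linear transform with an operator matrix L: form a cheap low-resolution estimate with the adjoint, hard-threshold it to pick the significant coefficients, and solve only over those columns. This is a minimal dense-matrix illustration of the idea, not the authors' time-domain HRT implementation:

```python
import numpy as np

def restricted_solve(L, d, keep_fraction=0.05):
    """Speed up a linear (Radon-style) inversion by restricting the
    model space: hard-threshold the adjoint (low-resolution) estimate,
    then least-squares fit the data using only the retained columns."""
    m_low = L.T @ d                                  # low-resolution gather
    thresh = np.quantile(np.abs(m_low), 1.0 - keep_fraction)
    support = np.abs(m_low) >= thresh                # restricted model space
    sol, *_ = np.linalg.lstsq(L[:, support], d, rcond=None)
    m = np.zeros(L.shape[1])
    m[support] = sol
    return m

# toy example: a sparse coefficient vector is recovered from the
# restricted problem at a fraction of the full inversion size
rng = np.random.default_rng(1)
L = rng.standard_normal((200, 500))
m_true = np.zeros(500)
m_true[[40, 180, 300]] = [2.0, -2.0, 1.5]            # sparse coefficients
d = L @ m_true
m = restricted_solve(L, d, keep_fraction=0.05)
```

Here the least-squares solve involves roughly 25 columns instead of 500; in the time-domain HRT the same restriction shrinks the iterative solver's cost accordingly.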


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. C81-C92 ◽  
Author(s):  
Helene Hafslund Veire ◽  
Hilde Grude Borgos ◽  
Martin Landrø

Effects of pressure and fluid saturation can have the same degree of impact on seismic amplitudes and differential traveltimes in the reservoir interval; thus, they are often inseparable by analysis of a single stacked seismic data set. In such cases, time-lapse AVO analysis offers an opportunity to discriminate between the two effects. We quantify the uncertainty in these estimates so that information about pressure- and saturation-related changes can be used in reservoir modeling and simulation. One way of analyzing uncertainties is to formulate the problem in a Bayesian framework, where the solution is represented by a probability density function (PDF), providing estimates of the uncertainties as well as of the properties themselves. A stochastic model for estimation of pressure and saturation changes from time-lapse seismic AVO data is investigated within a Bayesian framework. Well-known rock physical relationships are used to set up a prior stochastic model. PP reflection coefficient differences are used to establish a likelihood model linking reservoir variables and time-lapse seismic data. The methodology incorporates correlation between different variables of the model as well as spatial dependencies for each of the variables. In addition, possible bottlenecks causing large uncertainties in the estimates can be identified through sensitivity analysis of the system. The method has been tested on 1D synthetic data and on field time-lapse seismic AVO data from the Gullfaks Field in the North Sea.
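The Bayesian structure described above, posterior PDF proportional to prior times likelihood, can be shown in miniature for a single scalar parameter. Everything here is hypothetical (the linear forward link, the prior and noise levels); the paper's model couples multiple correlated, spatially dependent variables through rock-physics relationships:

```python
import numpy as np

def gridded_posterior(m_grid, d_obs, forward, prior_mean, prior_std, noise_std):
    """Bayesian estimation on a 1D grid: unnormalized posterior is a
    Gaussian prior on the model times a Gaussian likelihood of the
    observed datum; returned PDF is normalized over the grid."""
    prior = np.exp(-0.5 * ((m_grid - prior_mean) / prior_std) ** 2)
    like = np.exp(-0.5 * ((d_obs - forward(m_grid)) / noise_std) ** 2)
    post = prior * like
    return post / post.sum()                 # discrete normalization

# hypothetical linear link: reflectivity change = 0.2 * saturation change
forward = lambda ds: 0.2 * ds
m_grid = np.linspace(-1.0, 1.0, 2001)
post = gridded_posterior(m_grid, d_obs=0.06, forward=forward,
                         prior_mean=0.0, prior_std=0.5, noise_std=0.02)
m_map = m_grid[np.argmax(post)]              # MAP saturation-change estimate
spread = np.sqrt(np.sum(post * (m_grid - np.sum(post * m_grid)) ** 2))
```

The width of `post` is the uncertainty estimate the abstract refers to: a broad posterior flags a bottleneck where the data constrain the parameter poorly.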

