Interpolation and multiple attenuation with migration operators

Geophysics ◽  
2003 ◽  
Vol 68 (6) ◽  
pp. 2043-2054 ◽  
Author(s):  
Daniel Trad

A hyperbolic Radon transform (RT) can be applied successfully to attenuate or interpolate hyperbolic events in seismic data. However, the method fails when the hyperbolic events have apexes located at nonzero offset positions. These cases require a different RT operator, one that scans for hyperbolas with apexes centered at any offset. This procedure defines an extension of the standard hyperbolic RT with hyperbolic basis functions located at every point of the data gather. The mathematical description of such an operator closely resembles a kinematic poststack time-migration equation, with offset rather than midpoint as the horizontal coordinate. In this paper, this transformation is implemented by using a least-squares conjugate-gradient algorithm with a sparseness constraint. Two different operators are considered, one in the time domain and the other in the frequency-wavenumber domain (Stolt operator). The sparseness constraint in the time-offset domain is essential for resampling and for interpolation. The frequency-wavenumber domain operator is very efficient: not much more expensive in computation time than a sparse parabolic RT, and much faster than a standard hyperbolic RT. Examples of resampling, interpolation, and coherent noise attenuation using the frequency-wavenumber domain operator are presented. Near- and far-offset gaps are interpolated in synthetic and real shot gathers, with simultaneous resampling beyond aliasing. Waveforms are well preserved in general, except when there is little coherence in the data outside the gaps or when events with very different velocities are located at the same time. Multiples of diffractions are predicted and attenuated by subtraction from the data.
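As an illustration of the inversion machinery described above, the following minimal sketch casts a sparseness-constrained least-squares problem as iteratively reweighted conjugate-gradient solves. The generic dense operator L stands in for the time-domain or Stolt RT operator defined in the paper; the damping, tolerance, and iteration counts are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def sparse_lsq(L, d, n_irls=5, n_cg=50, lam=1e-2, eps=1e-6):
    """Approximate m = argmin ||L m - d||^2 + lam*||m||_1 by iteratively
    reweighted least squares (IRLS) with inner conjugate-gradient solves.
    L is a dense (ndata, nmodel) stand-in for the RT/migration operator."""
    ndata, nmodel = L.shape
    rhs = L.T @ d
    m = np.zeros(nmodel)
    w = np.zeros(nmodel)                       # first pass: plain least squares
    for _ in range(n_irls):
        A = LinearOperator((nmodel, nmodel),
                           matvec=lambda x, w=w: L.T @ (L @ x) + w * x)
        m, _ = cg(A, rhs, x0=m, maxiter=n_cg)  # normal equations, damped
        w = lam / (np.abs(m) + eps)            # sparseness reweighting
    return m
```

In practice the dense matrix would be replaced by on-the-fly application of the forward and adjoint operators, which is what makes the frequency-wavenumber formulation fast.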

Geophysics ◽  
2002 ◽  
Vol 67 (4) ◽  
pp. 1293-1303 ◽  
Author(s):  
Luc T. Ikelle ◽  
Lasse Amundsen ◽  
Seung Yoo

The inverse scattering multiple attenuation (ISMA) algorithm for ocean-bottom seismic (OBS) data can be formulated as a series expansion for each of the four components of OBS data. Besides the actual data, which constitute the first term of the series, each of the other terms is computed as a multidimensional convolution of OBS data with streamer data and aims at removing one specific order of multiples. If the streamer data do not contain free-surface multiples, we found that only the second term of the series needs to be computed to predict and remove all orders of multiples, whatever the water depth. As the computation of the various terms of the series is the most expensive part of ISMA, this result can produce significant savings in computation time, and even in data storage, because we no longer need to store the various terms of the series. For example, if the streamer data contain free-surface multiples, OBS data of 6-s duration, corresponding to a geological model of the subsurface with 250-m water depth, require the computation of five terms of the series for each of the four components of OBS data. With the new implementation, in which the streamer data do not contain free-surface multiples, only one term of the series is needed for each component of the OBS data; the saving in CPU time for this particular case is at least fourfold. The estimation of the inverse source signature, an essential part of ISMA, also benefits from the reduction of the number of terms needed for the demultiple to two, because the estimation becomes a linear inverse problem instead of a nonlinear one. Assuming that the removal of multiple events produces a significant reduction in the energy of the data, the optimization of this problem leads to a stable, noniterative analytic solution. We have also adapted these results to the implementation of ISMA for vertical-cable (VC) data. This implementation is similar to that for OBS data; the key difference is that the basic model in VC imaging assumes the data consist of receiver ghosts of primaries instead of the primaries themselves. We use the following property to achieve this goal: combining VC data with surface seismic data that do not contain free-surface multiples predicts free-surface multiples and receiver ghosts, as well as the receiver ghosts of primary reflections, whereas the same combination with the direct-wave arrivals removed from the VC data does not predict the receiver ghosts of primary reflections. The difference between these two predictions produces data containing only receiver ghosts of primaries.
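To make the series-term computation concrete, here is a generic sketch (not the authors' implementation) of one multidimensional convolution of an OBS component with streamer data, evaluated frequency by frequency as a sum over the shared source coordinate. The array shapes and the dt*dx integration scaling are illustrative assumptions.

```python
import numpy as np

def isma_term(obs, streamer, dt, dx):
    """One multidimensional-convolution term of the ISMA-style series.
    obs:      OBS component at one receiver, shape (nt, n_src)
    streamer: streamer data, shape (nt, n_src, n_rcv)
    Returns the predicted multiple contribution, shape (nt, n_rcv).
    The temporal convolution becomes a product per frequency; the spatial
    convolution is a sum over the shared source positions."""
    nt = obs.shape[0]
    OBS = np.fft.rfft(obs, axis=0)            # (nf, n_src)
    STR = np.fft.rfft(streamer, axis=0)       # (nf, n_src, n_rcv)
    PRED = np.einsum('fs,fsr->fr', OBS, STR) * dt * dx
    return np.fft.irfft(PRED, n=nt, axis=0)
```

The cost of ISMA scales with the number of such terms, which is why reducing the series to a single convolution term yields the fourfold CPU saving quoted above.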


Geophysics ◽  
2021 ◽  
pp. 1-70
Author(s):  
Rodrigo S. Santos ◽  
Daniel E. Revelo ◽  
Reynam C. Pestana ◽  
Victor Koehne ◽  
Diego F. Barrera ◽  
...  

Seismic images produced by migration of seismic data related to complex geologies, such as pre-salt environments, are often contaminated by artifacts due to the presence of multiple internal reflections. These reflections are created when the seismic wave is reflected more than once in a source-receiver path and can be interpreted as the main coherent noise in seismic data. Several schemes have been developed to predict and subtract internal multiple reflections from measured data, such as the Marchenko multiple elimination (MME) scheme, which eliminates the referred events without requiring a subsurface model or an adaptive subtraction approach. The MME scheme is data-driven, can remove or attenuate most of these internal multiples, and was originally based on the Neumann series solution of Marchenko's projected equations. However, the Neumann series approximate solution is conditioned to a convergence criterion. In this work, we propose to formulate the MME as a least-squares problem (LSMME) in such a way that it can provide an alternative that avoids the convergence condition required by the Neumann series approach. To demonstrate the LSMME scheme performance, we apply it to 2D numerical examples and compare the results with those obtained by the conventional MME scheme. Additionally, we evaluate the successful application of our method through the generation of in-depth seismic images, by applying the reverse-time migration (RTM) algorithm to the original data set and to those obtained through the MME and LSMME schemes. From the RTM results, we show that the application of both schemes to seismic data allows the construction of seismic images without artifacts related to internal multiple events.
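The contrast between the two solution strategies can be sketched generically for an equation of the form (I - A)x = b, where A is a placeholder for one application of the MME kernel, not the paper's actual operator: the Neumann series converges only when the spectral radius of A is below one, while a least-squares solver imposes no such condition.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

def neumann_solve(apply_A, b, n_terms=20):
    """Neumann-series solution of (I - A)x = b: x = b + Ab + A^2 b + ...;
    valid only when the spectral radius of A is below one."""
    x, term = b.copy(), b.copy()
    for _ in range(n_terms):
        term = apply_A(term)
        x += term
    return x

def least_squares_solve(apply_A, apply_At, b, n_iter=100):
    """Least-squares solution of (I - A)x = b via LSQR; no convergence
    condition on A itself, which is the motivation for an LSMME-style
    formulation."""
    n = b.size
    op = LinearOperator((n, n),
                        matvec=lambda v: v - apply_A(v),
                        rmatvec=lambda v: v - apply_At(v))
    return lsqr(op, b, iter_lim=n_iter)[0]
```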


Geophysics ◽  
2011 ◽  
Vol 76 (5) ◽  
pp. WB67-WB78 ◽  
Author(s):  
Alastair M. Swanston ◽  
Michael D. Mathias ◽  
Craig A. Barker

The Tahiti field is a recent major development in the deepwater Gulf of Mexico. The field's prolific Miocene reservoir section lies below a thick salt canopy, with structural dips as high as 80 degrees adjacent to a near-vertical salt root. Successful appraisal and initial development were enabled by interpretation of proprietary depth-imaging products generated from narrow-azimuth seismic data. However, reservoir-scale mapping and fault definition remained problematic because of seismic imaging and illumination challenges. In 2009–2010, the Tahiti partnership initiated a reimaging project using multiclient wide-azimuth seismic data. The project employed current technologies for multiple attenuation, tilted transverse isotropy velocity modeling, and migration. The increased azimuthal coverage and inherent multiple suppression provided by wide-azimuth acquisition delivered significant imaging enhancements. Advanced noise and multiple attenuation techniques provided cleaner data with an improved signal-to-noise ratio. Earth models representing multiazimuth subsurface velocities and anisotropy parameters, calibrated to well control with detailed salt interpretation, resulted in higher-confidence structural imaging. Comparison of Gaussian beam, one-way wave-equation, and reverse time migration algorithms shows that reverse time migration generally provides superior subsalt and salt-body data quality, with improved event positioning, higher resolution, and enhanced steep-dip imaging. The resulting seismic volumes enable accurate mapping of reservoir horizons and faulting. This will improve resource determination and future well placement in the next phase of field development.


Author(s):  
Ehsan Jamali Hondori ◽  
Chen Guo ◽  
Hitoshi Mikada ◽  
Jin-Oh Park

Full-waveform inversion (FWI) of limited-offset marine seismic data is a challenging task due to the lack of refracted energy and diving waves from the shallow sediments, which are fundamentally required to update the long-wavelength background velocity model in a tomographic fashion. When these events are absent, a reliable initial velocity model is necessary to ensure that the observed and simulated waveforms kinematically fit within an error of less than half a wavelength to protect the FWI iterative local optimization scheme from cycle skipping. We use a migration-based velocity analysis (MVA) method, including a combination of the layer-stripping approach and iterations of Kirchhoff prestack depth migration (KPSDM), to build an accurate initial velocity model for the FWI application on 2D seismic data with a maximum offset of 5.8 km. The data are acquired in the Japan Trench subduction zone, and we focus on the area where the shallow sediments overlying a highly reflective basement on top of the Cretaceous erosional unconformity are severely faulted and deformed. Despite the limited offsets available in the seismic data, our carefully designed workflow for data preconditioning, initial model building, and waveform inversion provides a velocity model that could improve the depth images down to almost 3.5 km. We present several quality control measures to assess the reliability of the resulting FWI model, including ray path illuminations, sensitivity kernels, reverse time migration (RTM) images, and KPSDM common image gathers. A direct comparison between the FWI and MVA velocity profiles reveals a sharp boundary at the Cretaceous basement interface, a feature that could not be observed in the MVA velocity model. The normal faults caused by the basal erosion of the upper plate in the study area reach the seafloor with evident subsidence of the shallow strata, implying that the faults are active.
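The half-wavelength condition mentioned above amounts to requiring the kinematic traveltime misfit to stay below half the dominant period, T/2 = 1/(2f). A one-line check, with purely illustrative numbers:

```python
def cycle_skip_safe(traveltime_error_s, dominant_freq_hz):
    """True when the observed-vs-simulated kinematic misfit is within half
    a period, so local FWI updates do not skip a cycle."""
    return abs(traveltime_error_s) < 0.5 / dominant_freq_hz

# e.g. a 30-ms misfit against a 12-Hz dominant frequency (assumed values):
print(cycle_skip_safe(0.030, 12.0))   # True: 0.030 s < ~0.042 s
```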


2015 ◽  
Vol 86 (3) ◽  
pp. 901-907 ◽  
Author(s):  
R. Takagi ◽  
K. Nishida ◽  
Y. Aoki ◽  
T. Maeda ◽  
K. Masuda ◽  
...  

Geophysics ◽  
1983 ◽  
Vol 48 (7) ◽  
pp. 854-886 ◽  
Author(s):  
Ken Larner ◽  
Ron Chambers ◽  
Mai Yang ◽  
Walt Lynn ◽  
Willon Wai

Despite significant advances in marine streamer design, seismic data are often plagued by coherent noise having approximately linear moveout across stacked sections. With an understanding of the characteristics that distinguish such noise from signal, we can decide which noise-suppression techniques to use and at what stages to apply them in acquisition and processing. Three general mechanisms that might produce such noise patterns on stacked sections are examined: direct and trapped waves that propagate outward from the seismic source, cable motion caused by the tugging action of the boat and tail buoy, and scattered energy from irregularities in the water bottom and sub-bottom. Depending upon the mechanism, entirely different noise patterns can be observed on shot profiles and common-midpoint (CMP) gathers; these patterns can be diagnostic of the dominant mechanism in a given set of data. Field data from Canada and Alaska suggest that the dominant noise is from waves scattered within the shallow sub-bottom. This type of noise, while not obvious on the shot records, is actually enhanced by CMP stacking. Moreover, this noise is not confined to marine data; it can be as strong as surface wave noise on stacked land seismic data as well. Of the many processing tools available, moveout filtering is best for suppressing the noise while preserving signal. Since the scattered noise does not exhibit a linear moveout pattern on CMP-sorted gathers, moveout filtering must be applied either to traces within shot records and common-receiver gathers or to stacked traces. Our data example demonstrates that although it is more costly, moveout filtering of the unstacked data is particularly effective because it conditions the data for the critical data-dependent processing steps of predictive deconvolution and velocity analysis.
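As a rough illustration of moveout filtering of the kind recommended above (a generic f-k fan filter, not the paper's processing flow), the sketch below mutes energy whose apparent velocity falls below a chosen cutoff; the hard mute without a taper is a simplification.

```python
import numpy as np

def fk_fan_filter(gather, dt, dx, v_reject):
    """Reject f-k energy with apparent velocity |f/k| below v_reject,
    i.e. steeply dipping, approximately linear-moveout noise.
    gather: (nt, nx) array of traces; returns the filtered gather."""
    nt, nx = gather.shape
    spec = np.fft.rfft2(gather)                  # (nt, nx//2 + 1)
    f = np.abs(np.fft.fftfreq(nt, dt))[:, None]  # temporal frequency, Hz
    k = np.fft.rfftfreq(nx, dx)[None, :]         # spatial wavenumber, 1/m
    keep = f >= v_reject * k                     # pass the fast (signal) fan
    return np.fft.irfft2(spec * keep, s=(nt, nx))
```

Applied to shot records or common-receiver gathers, where the scattered noise is approximately linear, such a fan filter passes reflection hyperbolas while rejecting the slow, linear events.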


Geophysics ◽  
1973 ◽  
Vol 38 (2) ◽  
pp. 310-326 ◽  
Author(s):  
R. J. Wang ◽  
S. Treitel

The normal equations for the discrete Wiener filter are conventionally solved with Levinson’s algorithm. The resultant solutions are exact except for numerical roundoff. In many instances, approximate rather than exact solutions satisfy seismologists’ requirements. The so‐called “gradient” or “steepest descent” iteration techniques can be used to produce approximate filters at computing speeds significantly higher than those achievable with Levinson’s method. Moreover, gradient schemes are well suited for implementation on a digital computer provided with a floating‐point array processor (i.e., a high‐speed peripheral device designed to carry out a specific set of multiply‐and‐add operations). Levinson’s method (1947) cannot be programmed efficiently for such special‐purpose hardware, and this consideration renders the use of gradient schemes even more attractive. It is, of course, advisable to utilize a gradient algorithm which generally provides rapid convergence to the true solution. The “conjugate‐gradient” method of Hestenes (1956) is one of a family of algorithms having this property. Experimental calculations performed with real seismic data indicate that adequate filter approximations are obtainable at a fraction of the computer cost required for use of Levinson’s algorithm.
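In modern terms the comparison can be reproduced in a few lines of scipy (with illustrative synthetic data, not the authors' experiments): solve_toeplitz applies a Levinson-type recursion to the Toeplitz normal equations exactly, while a handful of conjugate-gradient iterations returns an approximate filter.

```python
import numpy as np
from scipy.linalg import toeplitz, solve_toeplitz
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
trace = rng.standard_normal(2000)          # stand-in for a seismic trace
n = 50                                     # Wiener filter length
lag0 = trace.size - 1
r = np.correlate(trace, trace, 'full')[lag0:lag0 + n]   # autocorrelation
g = np.zeros(n); g[0] = r[0]               # spiking-deconvolution target

f_levinson = solve_toeplitz(r, g)          # exact, Levinson-type recursion
f_cg, _ = cg(toeplitz(r), g, maxiter=20)   # approximate, a few CG iterations
print(np.max(np.abs(f_levinson - f_cg)))   # discrepancy of the approximation
```

The CG route needs only repeated Toeplitz matrix-vector products (multiply-and-add operations), which is exactly the workload suited to the array processors discussed above.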


Author(s):  
Niels Hørbye Christiansen ◽  
Per Erlend Torbergsen Voie ◽  
Jan Høgsberg ◽  
Nils Sødahl

Dynamic analyses of slender marine structures are computationally expensive. Recently it has been shown how a hybrid method that combines FEM models and artificial neural networks (ANN) can reduce the computation time spent on the time-domain simulations associated with fatigue analysis of mooring lines by two orders of magnitude. The present study shows how an ANN trained to perform nonlinear dynamic response simulation can be optimized using a method known as optimal brain damage (OBD) and thereby be used to rank the importance of all analysis inputs. Both the training and the optimization of the ANN are based on one short time-domain simulation sequence generated by a FEM model of the structure. This means that the importance of input parameters can be evaluated from this single simulation alone. The method is tested on a numerical model of mooring lines on a floating offshore installation. It is shown that it is possible to estimate the cost of ignoring one or more input variables in an analysis.
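A minimal sketch of the OBD ranking step, under the standard saliency approximation s_i ≈ ½ H_ii w_i² applied to the first-layer weights so that per-input saliencies can be summed; the network size, the Hessian-diagonal values, and the function name are placeholders, not the mooring-line model.

```python
import numpy as np

def obd_input_ranking(W1, H_diag):
    """Rank analysis inputs by summed optimal-brain-damage saliency
    s = 0.5 * H_ii * w_i**2 over the first-layer weights.
    W1:     (n_hidden, n_inputs) trained first-layer weights
    H_diag: matching diagonal-Hessian estimates of the loss wrt W1
    Returns input indices, most important first."""
    saliency = 0.5 * H_diag * W1**2
    return np.argsort(saliency.sum(axis=0))[::-1]

# Illustrative numbers: 8 hidden units, 5 candidate inputs.
rng = np.random.default_rng(1)
W1, H = rng.standard_normal((8, 5)), rng.random((8, 5))
print(obd_input_ranking(W1, H))
```

Inputs with low total saliency can be dropped at a quantifiable cost, which is the basis for the ranking described in the abstract.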

