Converted-wave azimuth moveout

Geophysics, 2006, Vol 71 (3), pp. S99-S110
Author(s): Daniel A. Rosales, Biondo Biondi

A new partial-prestack migration operator to manipulate multicomponent data, called converted-wave azimuth moveout (PS-AMO), transforms converted-wave prestack data with an arbitrary offset and azimuth to equivalent data with a new offset and azimuth position. This operator is a sequential application of converted-wave dip moveout and its inverse. As expected, PS-AMO reduces to the known expression of AMO in the limiting case when the P velocity equals the S velocity. Moreover, PS-AMO preserves the resolution of dipping events and internally applies a correction for the lateral shift between the common midpoint and the common reflection/conversion point. An implementation of PS-AMO in the log-stretch frequency-wavenumber domain is computationally efficient. The main applications of the PS-AMO operator are geometry regularization, data reduction through partial stacking, and interpolation of unevenly sampled data. We test our PS-AMO operator by solving 3D acquisition geometry-regularization problems for multicomponent ocean-bottom seismic data. The geometry-regularization problem is defined as a regularized least-squares objective function. To preserve the resolution of dipping events, the regularization term uses the PS-AMO operator. Application of this methodology to a portion of the Alba 3D multicomponent ocean-bottom seismic data set shows that we can satisfactorily obtain an interpolated data set that honors the physics of converted waves.
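
As a sketch of the geometry-regularization setup described above (a generic regularized least-squares form; the authors' exact operator composition is not reproduced here), the objective function can be written as

$$ J(\mathbf{m}) = \left\| \mathbf{L}\,\mathbf{m} - \mathbf{d} \right\|_2^2 + \epsilon^2 \left\| \mathbf{R}\,\mathbf{m} \right\|_2^2 , $$

where $\mathbf{d}$ is the recorded, irregularly sampled data, $\mathbf{m}$ the regularly sampled data being solved for, $\mathbf{L}$ a sampling operator mapping the regular grid onto the acquired geometry, $\mathbf{R}$ a regularization operator built from PS-AMO to preserve dipping events, and $\epsilon$ a trade-off parameter.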

Geophysics, 2005, Vol 70 (5), pp. U51-U65
Author(s): Stig-Kyrre Foss, Bjørn Ursin, Maarten V. de Hoop

We present a method of reflection tomography for anisotropic elastic parameters from PP and PS reflection seismic data. The method is based upon the differential semblance misfit functional in scattering angle and azimuth (DSA) acting on common-image-point gathers (CIGs) to find fitting velocity models. The CIGs are amplitude corrected using a generalized Radon transform applied to the data. Depth consistency between the PP and PS images is enforced by penalizing any mis-tie between imaged key reflectors. The mis-tie is evaluated by means of map migration-demigration applied to the geometric information (times and slopes) contained in the data. In our implementation, we simplify the codepthing approach to zero-scattering-angle data only. The resulting measure is incorporated as a regularization in the DSA misfit functional. We then resort to an optimization procedure, restricting ourselves to transversely isotropic (TI) velocity models. In principle, depending on the available surface-offset range and orientation of reflectors in the subsurface, by combining the DSA with codepthing, the anisotropic parameters for TI models can be determined, provided the orientation of the symmetry axis is known. A proposed strategy is applied to an ocean-bottom-seismic field data set from the North Sea.
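
Schematically (the standard differential-semblance form from the literature; the authors' exact notation and weighting may differ), the DSA misfit penalizes any dependence of the common-image-point gathers on scattering angle and azimuth:

$$ J_{\mathrm{DSA}}[v] = \tfrac{1}{2} \int \left| \nabla_{(\theta,\psi)}\, I(\mathbf{x}, \theta, \psi; v) \right|^2 \mathrm{d}\mathbf{x}\, \mathrm{d}\theta\, \mathrm{d}\psi , $$

where $I$ is the amplitude-corrected CIG at image point $\mathbf{x}$, scattering angle $\theta$, and azimuth $\psi$. A kinematically correct velocity model $v$ flattens the gathers and minimizes the functional; the codepthing term enters as an additional penalty on the PP-PS depth mis-tie at key reflectors.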


Geophysics, 2006, Vol 71 (5), pp. D171-D182
Author(s): Jason E. Gumble, James E. Gaiser

Anisotropy and fracture characterization in individual layers is realized through iterative layer-stripping corrections of four converted-wave (PS-wave) synthetic reflection seismic data sets, generated from azimuthally anisotropic (HTI and TTI) models, and a four-component (4-C) data set from the Teal South project, Gulf of Mexico. The corrections were applied on a layer-by-layer basis to evaluate the efficacy of constant polarization-rotation and time-shift operators. Equivalent isotropic models were compared to anisotropic models after layer-stripping corrections using rms-amplitude and shear-wave-splitting time-difference maps to quantify and identify inherent errors in estimating seismic polarization parameters. For HTI media, radial and transverse components of PS data that have had layer-stripping corrections applied exhibit incorrect symmetry and orientations. This may adversely affect inversion and/or amplitude-variation-with-offset (AVO) and amplitude-versus-azimuth (AVA) analysis. Layer-stripping corrections applied to the fast and slow (PS1 and PS2, respectively) components exhibit the correct symmetry and orientation. Time differences between PS1 and PS2 are computed using crosscorrelation. Previous studies have addressed some of the problems associated with layer-stripping corrections for the case of vertical fractures (HTI media) and poststack layer-stripping analyses. This study includes an equivalent model with dipping fractures (TTI media) and extends the scope to encompass the effects of anisotropy on prestack data. The same technique is also applied to a limited set of 4-C data from the Teal South project in the Gulf of Mexico. Results are consistent with those of previous studies involving solely poststack 4-C rotation analysis in terms of average, or zero-offset, time differences and symmetry orientation. Offset and azimuth amplitude/traveltime variations, however, indicate that prestack seismic data contain more information than 4-C rotation analysis alone can capture.
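
A minimal numerical sketch of the constant rotation and time-shift operators mentioned above (NumPy; the function names, sign conventions, and the simple crosscorrelation-peak picker are illustrative assumptions, not the authors' code):

```python
import numpy as np

def rotate_to_fast_slow(radial, transverse, angle):
    """Rotate radial/transverse PS components into fast/slow (PS1/PS2)
    coordinates; `angle` is the fast-polarization azimuth measured from
    the radial direction (sign convention is an assumption here)."""
    c, s = np.cos(angle), np.sin(angle)
    ps1 = c * radial + s * transverse
    ps2 = -s * radial + c * transverse
    return ps1, ps2

def splitting_lag(ps1, ps2, dt):
    """Estimate the PS1-PS2 splitting time difference as the peak of the
    crosscorrelation (lag sign follows np.correlate's convention)."""
    xc = np.correlate(ps2, ps1, mode="full")
    return (np.argmax(xc) - (len(ps1) - 1)) * dt

def strip_layer(ps1, ps2, lag_samples):
    """Constant time-shift operator: advance the slow trace to remove
    one layer's splitting before analyzing the layer below."""
    return ps1, np.roll(ps2, -lag_samples)
```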


Geophysics, 1996, Vol 61 (6), pp. 1822-1832
Author(s): Biondo Biondi, Gopal Palacharla

In principle, downward continuation of 3-D prestack data should be carried out in the 5-D space of full 3-D prestack geometry (recording time, source surface location, and receiver surface location), even when the data sets to be migrated have fewer dimensions, as in the case of common-azimuth data sets, which are only four dimensional. This increase in the dimensionality of the computational space causes a severe increase in the amount of computation required to migrate the data. Unless this efficiency issue is solved, 3-D prestack migration methods based on downward continuation cannot compete with Kirchhoff methods. We address this problem by presenting a method for downward continuing common-azimuth data in the original 4-D space of the common-azimuth data geometry. The method is based on a new common-azimuth downward-continuation operator derived by a stationary-phase approximation of the full 3-D prestack downward-continuation operator expressed in the frequency-wavenumber domain. Although the new common-azimuth operator is exact only for constant velocity, a ray-theoretical interpretation of the stationary-phase approximation enables us to derive an accurate generalization of the method to media with both vertical and lateral velocity variations. The proposed migration method successfully imaged a synthetic data set generated assuming strong lateral and vertical velocity gradients. The common-azimuth downward-continuation theory can also be applied to derive a computationally efficient constant-velocity Stolt migration of common-azimuth data. The Stolt migration formulation leads to the important theoretical result that constant-velocity common-azimuth migration can be split into two exact sequential migration processes: 2-D prestack migration along the inline direction, followed by 2-D zero-offset migration along the crossline direction.
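
For orientation (standard constant-velocity notation, not excerpted from the paper), the full 3-D prestack downward-continuation operator in the frequency-wavenumber domain is the double-square-root (DSR) phase shift

$$ P(z + \Delta z) = P(z)\, \exp\!\left[\, i\, \Delta z \left( \sqrt{\omega^2/v^2 - |\mathbf{k}_s|^2} + \sqrt{\omega^2/v^2 - |\mathbf{k}_g|^2} \right) \right] , $$

where $\omega$ is the temporal frequency, $v$ the velocity, and $\mathbf{k}_s$, $\mathbf{k}_g$ the horizontal wavenumbers attached to source and receiver coordinates; the common-azimuth operator follows from a stationary-phase approximation of this operator over the cross-line offset wavenumber.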


Geophysics, 2003, Vol 68 (5), pp. 1633-1638
Author(s): Yanghua Wang

The spectrum of a discrete Fourier transform (DFT) is estimated by linear inversion and used to produce desired seismic traces with regular spatial sampling from an irregularly sampled data set. The essence of this wavefield-reconstruction method is to solve the DFT inverse problem with a particular constraint that imposes a sparseness criterion on the least-squares solution. A working definition of the sparseness constraint is presented to improve stability and efficiency. A sparseness measurement is then used to compare the relative sparseness of the two DFT spectra obtained from inversion with and without the sparseness constraint; it is a pragmatic indicator of the magnitude of sparseness needed for wavefield reconstruction. For seismic trace regularization, an antialiasing condition must be fulfilled by the regularized trace interval, whereas optimal trace coordinates in the output can be obtained by minimizing the distances between the newly generated traces and the original traces in the input. Application to real seismic data demonstrates the effectiveness of the technique and the significance of the sparseness constraint in the least-squares solution.
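
One common way to realize such a sparseness-constrained DFT inversion is iteratively reweighted damping, sketched below for a single temporal-frequency slice (NumPy; the weighting scheme and parameter names are assumptions for illustration and may differ from the paper's definition):

```python
import numpy as np

def sparse_dft_reconstruction(x, d, k, n_iter=10, eps=1e-3, lam=0.1):
    """Estimate DFT coefficients m from traces d observed at irregular
    coordinates x, for wavenumbers k, with a sparseness constraint
    realized as iteratively reweighted damping: strong spectral
    components get less damping, so the solution stays sparse."""
    F = np.exp(-2j * np.pi * np.outer(x, k))      # irregular forward DFT
    m = np.zeros(len(k), dtype=complex)
    for _ in range(n_iter):
        w = lam / (np.abs(m) + eps)               # reweighted damping
        A = F.conj().T @ F + np.diag(w)
        m = np.linalg.solve(A, F.conj().T @ d)    # damped least squares
    return m

def resample(m, k, x_new):
    """Synthesize regularly sampled traces from the estimated spectrum."""
    return np.exp(2j * np.pi * np.outer(x_new, k)) @ m
```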


Geophysics, 2020, Vol 85 (4), pp. WA13-WA26
Author(s): Jing Sun, Sigmund Slang, Thomas Elboth, Thomas Larsen Greiner, Steven McDonald, ...

For economic and efficiency reasons, blended acquisition of seismic data is becoming increasingly commonplace. Seismic deblending methods are computationally demanding and normally consist of multiple processing steps. Furthermore, the process of selecting parameters is not always trivial. Machine-learning-based processing has the potential to significantly reduce processing time and to change the way seismic deblending is carried out. We have developed a data-driven deep-learning-based method for fast and efficient seismic deblending. The blended data are sorted from the common-source to the common-channel domain to transform the character of the blending noise from coherent events to incoherent contributions. A convolutional neural network is designed according to the special characteristics of seismic data and performs deblending with results comparable to those obtained with conventional industry deblending algorithms. To ensure authenticity, the blending was performed numerically and only field seismic data were used, including more than 20,000 training examples. After training and validating the network, seismic deblending can be performed in near real time. Experiments also indicate that the initial signal-to-noise ratio is the major factor controlling the quality of the final deblended result. The network is also demonstrated to be robust and adaptive by using the trained model to first deblend a new data set from a different geologic area with a slightly different delay time setting and second to deblend shots with blending noise in the top part of the record.
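
A minimal sketch of the kind of network involved (PyTorch; the layer count, widths, and residual formulation are illustrative assumptions, not the authors' architecture). The input is a blended common-channel gather as a (batch, 1, time, shot) tensor, in which blending noise appears incoherent:

```python
import torch
import torch.nn as nn

class DeblendCNN(nn.Module):
    """Toy convolutional deblender: trained on (blended, clean) pairs,
    it predicts the blending noise and subtracts it from the input."""
    def __init__(self, ch=32, depth=6):
        super().__init__()
        layers = [nn.Conv2d(1, ch, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(ch, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # residual learning: estimate the noise, remove it
        return x - self.net(x)

# usage sketch: y = DeblendCNN()(torch.randn(4, 1, 512, 128))
```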


2020, Vol 223 (3), pp. 1888-1898
Author(s): Kirill Gadylshin, Ilya Silvestrov, Andrey Bakulin

SUMMARY
We propose an advanced version of non-linear beamforming assisted by artificial intelligence (NLBF-AI) that includes the additional steps of encoding and interpolating wavefront attributes using inpainting with a deep neural network (DNN). Inpainting can efficiently and accurately fill the holes in wavefront attributes caused by acquisition-geometry gaps and data-quality issues. Inpainting with a DNN delivers excellent interpolation quality with negligible computational effort and performs particularly well in the challenging case of irregular holes, where other interpolation methods struggle. Since conventional brute-force attribute estimation is very costly, we can further intentionally create additional holes, or masks, to restrict expensive conventional estimation to a smaller subvolume and obtain the missing attributes with cost-effective inpainting. Using a marine seismic data set with ocean-bottom nodes, we show that inpainting can reliably recover wavefront attributes even with masked areas reaching 50-75 per cent. We validate the quality of the results by comparing attributes and enhanced data from NLBF-AI and conventional NLBF using full-density data without decimation.
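
A toy sketch of the masking step, with a crude nearest-neighbor fill standing in for the DNN inpainting (NumPy/SciPy; the keep fraction and function names are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage

def random_mask(shape, keep=0.3, seed=0):
    """Mark the subvolume where expensive conventional attribute
    estimation is actually run; the rest is left to be inpainted."""
    return np.random.default_rng(seed).random(shape) < keep

def nearest_fill(attr, mask):
    """Stand-in for the DNN inpainting: fill each masked-out attribute
    sample from its nearest estimated sample."""
    idx = ndimage.distance_transform_edt(
        ~mask, return_distances=False, return_indices=True)
    return attr[tuple(idx)]
```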


Geophysics, 2007, Vol 72 (4), pp. V79-V86
Author(s): Kurang Mehta, Andrey Bakulin, Jonathan Sheiman, Rodney Calvert, Roel Snieder

The virtual source method has recently been proposed to image and monitor below complex and time-varying overburden. The method requires surface shooting recorded at downhole receivers placed below the distorting or changing part of the overburden. Redatuming with the measured Green's function allows the reconstruction of a complete downhole survey as if the sources were also buried at the receiver locations. Some challenges still need to be addressed in the virtual source method, such as limited acquisition aperture and energy coming from the overburden. We demonstrate that up-down wavefield separation can substantially improve the quality of virtual source data. First, it allows us to eliminate artifacts associated with the limited acquisition aperture typically used in practice. Second, it allows us to reconstruct a new optimized response in the absence of downgoing reflections and multiples from the overburden. These improvements are illustrated on a synthetic data set of a complex layered model based on the Fahud field in Oman, and on ocean-bottom seismic data acquired in the Mars field in the deepwater Gulf of Mexico.
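
A bare-bones sketch of virtual-source redatuming by crosscorrelation (NumPy; in line with the paper's point, d_a would ideally hold only the downgoing wavefield at receiver A after up-down separation, and d_b the upgoing wavefield at receiver B):

```python
import numpy as np

def virtual_source_trace(d_a, d_b, nt):
    """Sum over surface shots of the crosscorrelation between downhole
    recordings at receivers A and B, turning A into a virtual source.
    d_a, d_b : arrays of shape (n_shots, nt)
    Returns the causal part of the virtual-source trace A -> B."""
    vs = np.zeros(2 * nt - 1)
    for a, b in zip(d_a, d_b):
        vs += np.correlate(b, a, mode="full")
    return vs[nt - 1:]   # keep non-negative lags
```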


Geophysics, 2010, Vol 75 (1), pp. Q11-Q20
Author(s): R. James Brown

In four-component (4-C) towed ocean-bottom-cable (OBC) data sets, acquisition footprints are often observed; sometimes these exhibit a spatial period equal to the length of the receiver cable. I have analyzed a 2D 4-C OBC data set, looking at common-offset gathers (COGs), spectral analyses, and hodogram analyses of the direct P-wave first breaks. The acquisition footprint is directly related to the following effects observed on a few of the multicomponent receivers, namely those nearest the towing vessel: significant delays on the inline component, though not on the downgoing direct-P first breaks; depletion of higher frequencies (narrower bandwidth) on the inline component; and oscillatory motion closer to the vertical on the direct-P first breaks, equivalent to decreased amplitude on the inline component. This is interpreted as a result of the towing procedure, wherein the leading end of the cable, with the first few receiver modules, is raised from the seafloor and laid down again, relatively lightly, on top of seafloor material that might be poorly consolidated, while the trailing receivers are pulled through and down into this material. For the leading receiver modules, this results in poor inline horizontal coupling (i.e., slipping) and delayed P-S onsets due to their vertically higher positions (relative to the trailing receivers) and the quite high near-seafloor VP/VS ratios. To rectify this problem in future acquisition, a longer lead-in cable should prevent lifting of the leading receivers and allow all of them to couple with the seafloor in the same way. For data already acquired with an acquisition footprint on the inline component, a two-step process involving surface-consistent deconvolution or trace equalization and static correction is proposed.
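
A minimal sketch of the proposed two-step correction on a single inline-component trace (NumPy; the parameter estimation, which would be surface-consistent in practice, is not shown, and the names are illustrative):

```python
import numpy as np

def equalize_and_shift(trace, ref_rms, delay_samples):
    """Step 1: scale the trace to a reference rms level (trace
    equalization). Step 2: remove the towing-induced delay
    (static correction) by shifting the trace back in time."""
    rms = np.sqrt(np.mean(trace ** 2))
    scale = ref_rms / rms if rms > 0 else 1.0
    return np.roll(trace * scale, -delay_samples)
```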


Geophysics, 2006, Vol 71 (6), pp. S273-S283
Author(s): Jan Thorbecke, A. J. Berkhout

The common-focus-point (CFP) technology describes prestack migration by focusing in two steps: emission and detection. The output of the first focusing step is a CFP gather; this gather defines a shot record that represents the subsurface response to a focused source wavefield. We propose applying the recursive shot-record depth-migration algorithm to the CFP gathers of a seismic data volume and refer to this process as CFP-gather migration. In situations of complex geology and/or low signal-to-noise ratio, CFP-based image gathers are easier to interpret for nonalignment than conventional image gathers, which makes them better suited for velocity analysis. This important property is illustrated with examples on the Marmousi model.
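
In Berkhout's double-focusing notation (a schematic reminder of CFP theory as I understand it, not the paper's full derivation), the first focusing step applies a source-side focusing operator to the surface data matrix,

$$ \Delta P(z_0; z_m) = P(z_0, z_0)\, \mathbf{F}_s(z_0, z_m) , $$

yielding the CFP gather $\Delta P$ for focus level $z_m$; the second focusing step (detection) is what the proposed method carries out by recursive shot-record downward continuation of this gather.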


Geophysics, 2018, Vol 83 (3), pp. C99-C113
Author(s): Rodrigo Bloot, Tiago A. Coimbra, Jorge H. Faccipieri, Martin Tygel

The extraction of kinematic parameters from wave propagation through traveltimes is one of the great challenges in seismic data processing. In this context, we modify the common-reflection-surface (CRS) traveltime to improve its accuracy and interpret its parameters via paraxial ray theory in an anisotropic medium, obtaining information about the wavefront curvatures measured at the surface. The proposed method searches for the best stacking parameters that fit the data set and then extracts kinematic information from the measured waves. Numerical tests show the effectiveness of our assumptions and that the fitting and parameter extraction in anisotropic media achieve better accuracy than the conventional CRS approach.
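
As background (the generic second-order hyperbolic form on which CRS-type operators are built; the authors' modified traveltime differs in detail), the 3-D CRS traveltime around a central ray can be written as

$$ t^2(\Delta\mathbf{x}_m, \mathbf{h}) = \left( t_0 + \mathbf{w} \cdot \Delta\mathbf{x}_m \right)^2 + \Delta\mathbf{x}_m^{\mathsf T} \mathbf{M}\, \Delta\mathbf{x}_m + \mathbf{h}^{\mathsf T} \mathbf{N}\, \mathbf{h} , $$

where $\Delta\mathbf{x}_m$ is the midpoint displacement, $\mathbf{h}$ the half-offset, $t_0$ the central traveltime, and $\mathbf{w}$, $\mathbf{M}$, $\mathbf{N}$ the stacking parameters tied to the slowness and wavefront-curvature attributes that the method extracts.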

