2-D continuation operators and their applications

Geophysics ◽  
1996 ◽  
Vol 61 (6) ◽  
pp. 1846-1858 ◽  
Author(s):  
Claudio Bagaini ◽  
Umberto Spagnolini

Continuation to zero offset [better known as dip moveout (DMO)] is a standard tool for seismic data processing. In this paper, the concept of DMO is extended by introducing a set of operators: the continuation operators. These operators, which are implemented in integral form with a defined amplitude distribution, perform the mapping between common shot or common offset gathers for a given velocity model. The application of the shot continuation operator for dip‐independent velocity analysis allows a direct implementation in the acquisition domain by exploiting the comparison between real data and data continued in the shot domain. Shot and offset continuation allow the restoration of missing shot or missing offset by using a velocity model provided by common shot velocity analysis or another dip‐independent velocity analysis method.

Geophysics ◽  
2016 ◽  
Vol 81 (3) ◽  
pp. U1-U12
Author(s):  
Michelângelo G. Silva ◽  
Milton J. Porsani ◽  
Bjorn Ursin

Velocity-independent seismic data processing requires information about the local slope in the data. From estimates of local time and space derivatives of the data, a total least-squares algorithm gives an estimate of the local slope at each data point. Total least squares minimizes the orthogonal distance from the data points (the local time and space derivatives) to the fitted straight line defining the local slope. This gives a more consistent estimate of the local slope than standard least squares because it takes into account uncertainty in the temporal and spatial derivatives. The total least-squares slope estimate is the same as the one obtained from using the structure tensor with a rectangular window function. The estimate of the local slope field is used to extrapolate all traces in a seismic gather to the smallest recorded offset without using velocity information. Extrapolation to zero offset is done using a hyperbolic traveltime function in which slope information replaces the knowledge of the normal moveout (NMO) velocity. The new data processing method requires no velocity analysis and there is little stretch effect. All major reflections and diffractions that are present at zero offset will be reproduced in the output zero-offset section. Therefore, if multiple reflections are undesired in the output, they should be removed before data extrapolation to zero offset. The automatic method is sensitive to noise, so for poor signal-to-noise ratios, standard NMO velocities for primary reflections can be used to compute the slope field. Synthetic and field data examples indicate that compared with standard seismic data processing (velocity analysis, mute, NMO correction, and stack), our method provides an improved zero-offset section in complex data areas.
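As a compact sketch of the slope-estimation step described above (the window size, grids, and synthetic event below are our own illustrative choices, not the paper's), the total least-squares slope at each sample can be read off the principal eigenvector of a rectangular-window structure tensor built from the time and space derivatives:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_slope_tls(d, dt, dx, win=9):
    """Local slope p(t, x) in s/m by total least squares on the local
    time and space derivatives; equivalent to the structure-tensor
    estimate with a rectangular window of `win` samples."""
    ut = np.gradient(d, dt, axis=0)              # time derivative
    ux = np.gradient(d, dx, axis=1)              # space derivative
    # windowed second-moment (structure tensor) entries
    a = uniform_filter(ut * ut, size=win)
    b = uniform_filter(ut * ux, size=win)
    c = uniform_filter(ux * ux, size=win)
    # a local plane wave u(t - p x) satisfies ux = -p ut, so the points
    # (ut, ux) line up along the principal eigenvector of [[a, b], [b, c]]
    theta = 0.5 * np.arctan2(2.0 * b, a - c)
    return -np.tan(theta)

# synthetic linear event with slope 3e-4 s/m
dt, dx = 0.004, 12.5
t = np.arange(0.0, 1.0, dt)
x = np.arange(0.0, 1000.0, dx)
T, X = np.meshgrid(t, x, indexing="ij")
event = np.exp(-((T - 3e-4 * X - 0.3) ** 2) / (2 * 0.02 ** 2))
p_est = local_slope_tls(event, dt, dx)
```

On the event (for example at t = 0.448 s, x = 500 m) the estimate recovers the slope to within a few percent; away from the event the tensor is dominated by numerical noise and the angle is meaningless, which reflects the noise sensitivity the abstract mentions.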


Geophysics ◽  
1989 ◽  
Vol 54 (11) ◽  
pp. 1455-1465 ◽  
Author(s):  
William S. Harlan

Hyperbolic reflections and convolutional wavelets are fundamental models for seismic data processing. Each sample of a “stacked” zero‐offset section can parameterize an impulsive hyperbolic reflection in a midpoint gather. Convolutional wavelets can model source waveforms and near‐surface filtering at the shot and geophone positions. An optimized inversion of the combined modeling equations for hyperbolic traveltimes and convolutional wavelets makes explicit any interdependence and nonuniqueness in these two sets of parameters. I first estimate stacked traces that best model the recorded data and then find nonimpulsive wavelets to improve the fit with the data. These wavelets are used for a new estimate of the stacked traces, and so on. Estimated stacked traces model short average wavelets with a superposition of approximately parallel hyperbolas; estimated wavelets adjust the phases and amplitudes of inconsistent traces, including static shifts. Deconvolution of land data with estimated wavelets makes wavelets consistent over offset; remaining static shifts are midpoint‐consistent. This phase balancing improves the resolution of stacked data and of velocity analyses. If precise velocity functions are not known, then many stacked traces can be inverted simultaneously, each with a different velocity function. However, the increased number of overlain hyperbolas can more easily model the effects of inconsistent wavelets. As a compromise, I limit velocity functions to reasonable regions selected from a stacking velocity analysis—a few functions cover velocities of primary and multiple reflections. Multiple reflections are modeled separately and then subtracted from marine data. The model can be extended to include more complicated amplitude changes in reflectivity. Migrated reflectivity functions would add an extra constraint on the continuity of reflections over midpoint. 
Including the effect of dip moveout in the model would make stacking and migration velocities equivalent.
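The alternating estimation can be caricatured in one dimension as blind deconvolution by alternating least squares (trace length, wavelet, and reflectivity below are invented for illustration; the hyperbolic summation over midpoint gathers is omitted). Started from an impulsive wavelet, the first reflectivity step alone absorbs the whole trace, which is precisely the interdependence and nonuniqueness the abstract makes explicit and the reason the inversion needs constraints on wavelets and stacked traces:

```python
import numpy as np
from scipy.linalg import toeplitz

def conv_matrix(h, n):
    """Full-convolution matrix C with C @ x == np.convolve(h, x)."""
    m = len(h) + n - 1
    col = np.zeros(m)
    col[:len(h)] = h
    row = np.zeros(n)
    row[0] = h[0]
    return toeplitz(col, row)

n_r, n_w = 40, 6
r_true = np.zeros(n_r)
r_true[[5, 17, 28]] = [1.0, -0.8, 0.6]                 # sparse reflectivity
w_true = np.array([1.0, -0.4, 0.2, 0.1, -0.05, 0.02])  # source wavelet
d = np.convolve(w_true, r_true)                        # recorded trace

# alternating least squares: reflectivity step, then wavelet step
w = np.zeros(n_w)
w[0] = 1.0                                             # impulsive initial wavelet
residuals = []
for _ in range(5):
    r, *_ = np.linalg.lstsq(conv_matrix(w, n_r), d, rcond=None)
    w, *_ = np.linalg.lstsq(conv_matrix(r, n_w), d, rcond=None)
    residuals.append(np.linalg.norm(d - np.convolve(w, r)))
```

The loop fits the data essentially perfectly, yet the recovered pair (spike wavelet, smeared reflectivity) is not the true one, so each least-squares step never increases the misfit but the decomposition itself is nonunique.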


Geophysics ◽  
2021 ◽  
pp. 1-50
Author(s):  
German Garabito ◽  
José Silas dos Santos Silva ◽  
Williams Lima

In land seismic data processing, the prestack time migration (PSTM) image remains the standard imaging output, but a reliable migrated image of the subsurface depends on the accuracy of the migration velocity model. We have adopted two new algorithms for time-domain migration velocity analysis based on wavefield attributes of the common-reflection-surface (CRS) stack method. These attributes, extracted from multicoverage data, were successfully applied to build the velocity model in the depth domain through tomographic inversion of the normal-incidence-point (NIP) wave. However, there is no practical and reliable method for determining an accurate and geologically consistent time-migration velocity model from these CRS attributes. We introduce an interactive method to determine the migration velocity model in the time domain based on the application of NIP wave attributes and the CRS stacking operator for diffractions, to generate synthetic diffractions on the reflection events of the zero-offset (ZO) CRS stacked section. In the ZO data with diffractions, the poststack time migration (post-STM) is applied with a set of constant velocities, and the migration velocities are then selected through a focusing analysis of the simulated diffractions. We also introduce an algorithm to automatically calculate the migration velocity model from the CRS attributes picked for the main reflection events in the ZO data. We determine the precision of our diffraction focusing velocity analysis and the automatic velocity calculation algorithms using two synthetic models. We also applied them to real 2D land data with low quality and low fold to estimate the time-domain migration velocity model. 
The velocity models obtained through our methods were validated by applying them in the Kirchhoff PSTM of real data, in which the velocity model from the diffraction focusing analysis provided significant improvements in the quality of the migrated image compared to the legacy image and to the migrated image obtained using the automatically calculated velocity model.
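The focusing analysis can be sketched as a constant-velocity scan over a single zero-offset diffraction: sum the data along trial hyperbolas through the apex and keep the velocity that gives the strongest (best-focused) response. Geometry, wavelet, and grids below are illustrative assumptions, not values from the paper:

```python
import numpy as np

dt, dx = 0.004, 25.0
t = np.arange(0.0, 1.6, dt)                    # two-way time axis (s)
x = np.arange(0.0, 2000.0 + dx, dx)            # midpoint axis (m)
x0, t0, v_true = 1000.0, 0.8, 2500.0           # diffractor apex and velocity

# zero-offset diffraction: t(x) = sqrt(t0^2 + 4 (x - x0)^2 / v^2)
T, X = np.meshgrid(t, x, indexing="ij")
t_diff = np.sqrt(t0 ** 2 + 4.0 * (X - x0) ** 2 / v_true ** 2)
data = np.exp(-((T - t_diff) ** 2) / (2 * 0.01 ** 2))  # Gaussian wavelet

def focus(data, v):
    """Sum amplitudes along the trial hyperbola through the apex (x0, t0)."""
    t_try = np.sqrt(t0 ** 2 + 4.0 * (x - x0) ** 2 / v ** 2)
    it = np.round(t_try / dt).astype(int)
    ok = it < len(t)
    return data[it[ok], np.flatnonzero(ok)].sum()

vels = np.arange(2000.0, 3001.0, 100.0)        # constant-velocity scan
scores = [focus(data, v) for v in vels]
v_best = vels[int(np.argmax(scores))]
```

In practice the scan runs over migrated image locations rather than a known apex, but the principle is the same: the diffraction collapses (focuses) only at the correct migration velocity.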


2021 ◽  
Vol 11 (1) ◽  
pp. 78
Author(s):  
Jianbo He ◽  
Zhenyu Wang ◽  
Mingdong Zhang

When the signal-to-noise ratio (SNR) of seismic data is very low, velocity-spectrum focusing is poor, and the velocity model obtained by conventional velocity analysis methods is not accurate enough, which results in inaccurate migration. For low-SNR data, this paper proposes using a partial common-reflection-surface (CRS) stack to build CRS gathers, making full use of all the reflection information within the first Fresnel zone and improving the signal-to-noise ratio of prestack gathers by increasing the fold. Because the CRS parameters (the emergence angle of the zero-offset ray and the radius of curvature of the normal wavefront) are searched for on the zero-offset profile, we use ellipse-evolving stacking to improve the quality of the zero-offset section and thereby the reliability of the CRS parameters. After the CRS gathers are obtained, we apply a principal component analysis (PCA) approach to velocity analysis, which improves its noise immunity. Results from models and field data demonstrate the effectiveness of this method.
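One way to read the PCA step (a sketch under our own assumptions, not the paper's implementation): after NMO correction with a trial velocity, a flattened event makes the gather window approximately rank-1, so the energy fraction of the first principal component serves as a noise-tolerant coherence measure for the velocity scan:

```python
import numpy as np

dt = 0.004
t = np.arange(0.0, 1.5, dt)
offsets = np.arange(0.0, 1600.0, 100.0)
t0, v_true = 0.5, 2000.0

# synthetic CMP gather: one hyperbolic reflection plus weak noise
rng = np.random.default_rng(1)
gather = np.zeros((len(t), len(offsets)))
for j, h in enumerate(offsets):
    th = np.sqrt(t0 ** 2 + (h / v_true) ** 2)
    gather[:, j] = np.exp(-((t - th) ** 2) / (2 * 0.015 ** 2))
gather += 0.05 * rng.standard_normal(gather.shape)

def nmo(gather, v):
    """Flatten hyperbolic events with trial velocity v by trace interpolation."""
    out = np.zeros_like(gather)
    for j, h in enumerate(offsets):
        src = np.sqrt(t ** 2 + (h / v) ** 2)   # hyperbolic moveout times
        out[:, j] = np.interp(src, t, gather[:, j], left=0.0, right=0.0)
    return out

def pca_coherence(window):
    """Energy fraction of the first principal component (1.0 = perfectly flat)."""
    s = np.linalg.svd(window, compute_uv=False)
    return s[0] ** 2 / (s ** 2).sum()

win = slice(int(0.45 / dt), int(0.55 / dt))    # window around the event
vels = np.arange(1600.0, 2401.0, 50.0)
scores = [pca_coherence(nmo(gather, v)[win]) for v in vels]
v_best = vels[int(np.argmax(scores))]
```

Unlike plain semblance, the rank-1 energy fraction weights coherent amplitude patterns across traces, which is the source of the noise immunity claimed above.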


Geophysics ◽  
2019 ◽  
Vol 84 (5) ◽  
pp. C229-C237 ◽  
Author(s):  
Shibo Xu ◽  
Alexey Stovas

Moveout approximations are commonly used in seismic data processing, such as velocity analysis, modeling, and time migration. The anisotropic effect is very pronounced for converted waves when estimating physical and processing parameters from real data. To approximate the traveltime in an elastic orthorhombic (ORT) medium, we define an explicit rational-form approximation for the traveltime of the converted PS1-, PS2-, and S1S2-waves. To obtain expressions for the coefficients, a Taylor-series approximation is applied to the corresponding vertical slowness of the three pure-wave modes. By using the effective model parameters for the P-, S1-, and S2-waves, the coefficients in the converted-wave traveltime approximation can be represented by the anisotropy parameters defined in the elastic ORT model. The accuracy of the converted-wave traveltime for three ORT models is illustrated in numerical examples. The results show that, for the converted PS1- and PS2-waves, our rational-form approximation is very accurate regardless of the tested ORT model. For the converted S1S2-wave, the error is relatively larger than for the PS-waves because of the existence of cusps, triplications, and shear singularities.
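The paper's coefficients are mode-specific and come from Taylor expansions of the ORT vertical slowness; purely to illustrate the rational functional form, here is a Tsvankin-Thomsen-style nonhyperbolic moveout with invented coefficients, not the paper's expressions:

```python
import numpy as np

def rational_moveout(x, t0, v_nmo, a4, a):
    """Rational-form nonhyperbolic moveout:
    t(x)^2 = t0^2 + x^2 / v_nmo^2 + a4 * x^4 / (1 + a * x^2).
    With a4 = 0 it reduces to the hyperbolic approximation; the rational
    quartic term keeps the correction bounded at large offset."""
    t2 = t0 ** 2 + x ** 2 / v_nmo ** 2 + a4 * x ** 4 / (1.0 + a * x ** 2)
    return np.sqrt(t2)

x = np.linspace(0.0, 3000.0, 7)
t_hyp = rational_moveout(x, 1.0, 2000.0, 0.0, 0.0)     # hyperbolic limit
# a4 < 0 mimics typical anellipticity; magnitude chosen for illustration only
t_nonhyp = rational_moveout(x, 1.0, 2000.0, -2.0e-14, 1.0e-7)
```

The appeal of the rational form over a plain Taylor series is exactly this boundedness: a truncated polynomial in x^2 diverges at far offsets, while the rational term saturates.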


Geophysics ◽  
1993 ◽  
Vol 58 (1) ◽  
pp. 91-100 ◽  
Author(s):  
Claude F. Lafond ◽  
Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.


2013 ◽  
Vol 373-375 ◽  
pp. 694-697 ◽  
Author(s):  
Guang Xun Chen ◽  
Yan Hui Du ◽  
Lei Zhang ◽  
Pan Ke Qin

The commonly used method for high-resolution velocity analysis in seismic data processing and interpretation is based on a signal-estimation algorithm. However, its numerical realization is complicated and time-consuming because the signal-noise separation requires enormous loop calculations before the energy function can be constructed. In this paper, we improve the method on the basis of multi-trace signal estimation. The improved method makes full use of amplitude information, which enhances noise immunity and greatly improves resolution. Meanwhile, it is computationally cheaper than other methods because it does not require multiple loop calculations.


Geophysics ◽  
2019 ◽  
Vol 84 (3) ◽  
pp. V197-V206 ◽  
Author(s):  
Ali Gholami ◽  
Milad Farshad

The traditional hyperbolic Radon transform (RT) decomposes seismic data into a sum of constant amplitude basis functions. This limits the performance of the transform when dealing with real data in which the reflection amplitudes include the amplitude variation with offset (AVO) variations. We adopted the Shuey-Radon transform as a combination of the RT and Shuey’s approximation of reflectivity to accurately model reflections including AVO effects. The new transform splits the seismic gather into three Radon panels: The first models the reflections at zero offset, and the other two panels add capability to model the AVO gradient and curvature. There are two main advantages of the Shuey-Radon transform over similar algorithms, which are based on a polynomial expansion of the AVO response. (1) It is able to model reflections more accurately. This leads to more focused coefficients in the transform domain and hence provides more accurate processing results. (2) Unlike polynomial-based approaches, the coefficients of the Shuey-Radon transform are directly connected to the classic AVO parameters (intercept, gradient, and curvature). Therefore, the resulting coefficients can further be used for interpretation purposes. The solution of the new transform is defined via an underdetermined linear system of equations. It is formulated as a sparsity-promoting optimization, and it is solved efficiently using an orthogonal matching pursuit algorithm. Applications to different numerical experiments indicate that the Shuey-Radon transform outperforms the polynomial and conventional RTs.
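The Shuey basis underlying the transform can be isolated for a single flattened reflection: the three panels weight the event by 1, sin^2(theta), and sin^2(theta)*tan^2(theta). In this sketch the angles and reflectivity values are invented, and plain least squares stands in for the paper's sparsity-promoting orthogonal matching pursuit:

```python
import numpy as np

# incidence angles along the gather (assumed known from offset and velocity)
theta = np.deg2rad(np.linspace(0.0, 35.0, 30))

# Shuey's approximation: R(theta) = A + B sin^2(theta) + C sin^2(theta) tan^2(theta)
basis = np.column_stack([
    np.ones_like(theta),                         # intercept panel (zero offset)
    np.sin(theta) ** 2,                          # gradient panel
    np.sin(theta) ** 2 * np.tan(theta) ** 2,     # curvature panel
])

abc_true = np.array([0.10, -0.20, 0.05])         # intercept, gradient, curvature
avo = basis @ abc_true                           # noise-free AVO response

# recover the AVO parameters; the Shuey-Radon transform does this jointly for
# all events along hyperbolic trajectories, with a sparse solver because the
# combined system is underdetermined
abc_est, *_ = np.linalg.lstsq(basis, avo, rcond=None)
```

Because the recovered coefficients are the Shuey intercept, gradient, and curvature themselves, they can feed interpretation directly, which is the second advantage claimed in the abstract.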


2021 ◽  
Author(s):  
Alexander Bauer ◽  
Benjamin Schwarz ◽  
Dirk Gajewski

Most established methods for the estimation of subsurface velocity models rely on measurements of reflected or diving waves and therefore require data with sufficiently large source-receiver offsets. For seismic data that lack these offsets, such as vintage data, low-fold academic data, or near-zero-offset P-Cable data, these methods fail. Building on recent studies, we apply a workflow that exploits the diffracted wavefield for depth-velocity-model building. This workflow consists of three principal steps: (1) revealing the diffracted wavefield by modeling and adaptively subtracting reflections from the raw data, (2) characterizing the diffractions with physically meaningful wavefront attributes, and (3) estimating depth-velocity models with wavefront tomography. We propose a hybrid 2D/3D approach, in which we apply the well-established and automated 2D workflow to numerous inlines of a high-resolution 3D P-Cable dataset acquired near Ritter Island, a small volcanic island located north-east of New Guinea known for a catastrophic flank collapse in 1888. We use the obtained set of parallel 2D velocity models to interpolate a 3D velocity model for the whole data cube, thus overcoming possible issues such as varying data quality in the inline and crossline directions and the high computational cost of 3D data analysis. Even though the 2D workflow may suffer from out-of-plane effects, we obtain a smooth 3D velocity model that is consistent with the data.

