Residual migration: Applications and limitations

Geophysics ◽  
1985 ◽  
Vol 50 (1) ◽  
pp. 110-126 ◽  
Author(s):  
Daniel H. Rothman ◽  
Stewart A. Levin ◽  
Fabio Rocca

The correct migration of seismic data depends on the accuracy of the chosen velocity model. Rocca and Salvador (1982) showed that small errors in the velocity model may be efficiently corrected by applying a residual migration to previously migrated data, rather than remigrating the original data with a corrected velocity field. The effective velocity used in this residual processing is usually small compared to the original migration velocity. This decreases computational cost relative to a full migration and allows the initial migration to be done with a less accurate but faster algorithm than would otherwise be required. The possible advantages are many. The overall cost of migration may be reduced, a consideration especially important when migrating 3-D data sets. Migration quality may be improved, because the location of mispositioned reflectors can be corrected and because of the freedom to choose an initial migration with a high-dip, low-dispersion method such as Stolt migration. Interactive residual sharpening of the migrated image also becomes feasible. We discuss the theoretical and practical limitations of residual migration and quantify the related reductions of effective dip, velocity, and frequency after initial migration. We determine how accurate the initial migration velocity must be to justify use of this approach and analyze aliasing and numerical artifacts. Field data examples using Kirchhoff summation and finite-difference migration illustrate the features and drawbacks of the method.
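For the constant-velocity case, the effective residual velocity follows a Pythagorean relation: migrating with an initial velocity v1 and then residual-migrating with v_r is equivalent to migrating once with the true velocity v2, where v2² = v1² + v_r². A minimal sketch of this relation (the function name and the numerical example are illustrative, not from the paper):

```python
import numpy as np

def residual_velocity(v_true, v_initial):
    """Effective velocity for residual migration after an initial
    constant-velocity migration with v_initial, based on the
    Pythagorean relation v_true**2 = v_initial**2 + v_residual**2."""
    v_true = np.asarray(v_true, dtype=float)
    v_initial = np.asarray(v_initial, dtype=float)
    if np.any(v_initial > v_true):
        raise ValueError("initial migration velocity exceeds true velocity")
    return np.sqrt(v_true**2 - v_initial**2)

# A 5% velocity underestimate in a 2000 m/s medium leaves only a
# small residual velocity, hence a cheap residual migration:
v_r = residual_velocity(2000.0, 1900.0)   # about 624.5 m/s
```

The small residual velocity is what makes the residual pass cheap relative to a remigration with the corrected field.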

Geophysics ◽  
2020 ◽  
Vol 85 (2) ◽  
pp. V223-V232 ◽  
Author(s):  
Zhicheng Geng ◽  
Xinming Wu ◽  
Sergey Fomel ◽  
Yangkang Chen

The seislet transform uses the wavelet-lifting scheme and local slopes to analyze seismic data. In its definition, the design of prediction operators tailored to seismic images and data is an important issue. We have developed a new formulation of the seislet transform based on the relative time (RT) attribute. This method uses the RT volume to construct multiscale prediction operators. With the new prediction operators, the seislet transform is accelerated because distant traces are predicted directly. We apply our method to synthetic and real data to demonstrate that the new approach reduces computational cost and obtains excellent sparse representation on test data sets.
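The RT-based prediction can be sketched as follows: samples with equal relative time are assumed to lie on the same event, so a distant trace can be predicted directly from a reference trace by a lookup through the RT volume. A hypothetical one-trace illustration (the function and variable names are ours, not the paper's):

```python
import numpy as np

def predict_trace(ref_trace, rt_ref, rt_target, t_axis):
    """Predict a target trace from a (possibly distant) reference trace:
    a target sample with RT value tau is predicted by the reference
    sample carrying the same tau, found by interpolation."""
    # Reference time that has the same RT value as each target sample.
    t_equiv = np.interp(rt_target, rt_ref, t_axis)
    return np.interp(t_equiv, t_axis, ref_trace)

# Flat RT on the reference trace, a 0.1 s event delay on the target:
t = np.linspace(0.0, 1.0, 101)
ref = np.zeros_like(t)
ref[50] = 1.0                                   # spike at 0.5 s
pred = predict_trace(ref, rt_ref=t, rt_target=t - 0.1, t_axis=t)
```

The predicted spike lands near 0.6 s, i.e., the event is followed directly across the 0.1 s shift without cascading trace-to-trace predictions.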


Geophysics ◽  
2018 ◽  
Vol 83 (2) ◽  
pp. U9-U22 ◽  
Author(s):  
Jide Nosakare Ogunbo ◽  
Guy Marquis ◽  
Jie Zhang ◽  
Weizhong Wang

Geophysical joint inversion requires setting a few parameters for optimum performance. However, there are as yet no detailed procedures for selecting these parameters. Previous work on the joint inversion of electromagnetic (EM) and seismic data has reported parameter choices for data sets acquired with the same dimensional geometry (either two or three dimensions), and little for mixed geometries. None has discussed parameter selection for the joint inversion of methods with different geometries (for example, 2D seismic traveltime and pseudo-2D frequency-domain EM data). Taking advantage of the affordable computational cost and the sufficient approximation of a 1D EM model in a horizontally layered sedimentary environment, we are able to set optimum parameters for a structurally constrained joint inversion of 2D seismic traveltime and pseudo-2D EM data for hydrocarbon exploration. From the synthetic experiments, even in the presence of noise, we are able to prescribe rules for optimum parameter settings for the joint inversion, including the choice of initial model and the cross-gradient weighting. We apply these rules to field data to reconstruct a more reliable subsurface velocity model than the one obtained by traveltime inversion alone. We expect this approach to be useful for joint inversion of seismic traveltime and frequency-domain EM data for hydrocarbon production.
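The cross-gradient constraint that structurally couples the two models is t = ∇m1 × ∇m2, which vanishes wherever the gradients of the two models are parallel, i.e., wherever they share structure. A minimal 2D sketch (the layered example values are ours, not the paper's):

```python
import numpy as np

def cross_gradient(m1, m2, dx=1.0, dz=1.0):
    """2D cross-gradient t = dm1/dx * dm2/dz - dm1/dz * dm2/dx.
    Zero wherever the gradients of the two models are parallel."""
    dm1_dz, dm1_dx = np.gradient(m1, dz, dx)
    dm2_dz, dm2_dx = np.gradient(m2, dz, dx)
    return dm1_dx * dm2_dz - dm1_dz * dm2_dx

# Two models sharing the same purely layered (depth-only) structure:
z = np.linspace(0.0, 1.0, 50)[:, None]
vel = 1500.0 + 2000.0 * z * np.ones((1, 40))    # seismic velocity (m/s)
res = 1.0 + 50.0 * z * np.ones((1, 40))         # EM resistivity (ohm-m)
t = cross_gradient(vel, res)                    # vanishes everywhere
```

In the joint inversion, the weight on this term trades data fit against structural similarity, which is one of the parameters the paper's rules address.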


Geophysics ◽  
2003 ◽  
Vol 68 (4) ◽  
pp. 1357-1370 ◽  
Author(s):  
Stéphane Operto ◽  
Gilles Lambaré ◽  
Pascal Podvin ◽  
Philippe Thierry

The SEG/EAGE overthrust model is a synthetic onshore velocity model that was used to generate several large synthetic seismic data sets using acoustic finite-difference modeling. From this database, several realistic subdata sets were extracted and made available for testing 3D processing methods. For example, classic onshore-type data-acquisition geometries are available, such as a swath acquisition, which is characterized by a nonuniform distribution of long offsets with azimuth and midpoints. In this paper, we present an application of 2.5D and 3D ray+Born migration/inversion to several classical data sets from the SEG/EAGE overthrust experiment. The method is formulated as a linearized inversion of the scattered wavefield and allows quantitative estimates of the short-wavelength components of the velocity model. First, we apply a 3D migration/inversion formula previously developed for marine acquisitions to the swath data set. The migrated sections exhibit significant amplitude artifacts and acquisition footprints, also revealed by the shape of the local spatial resolution filters. From the analysis of these spatial resolution filters, we propose a new formula that significantly improves the migrated dip section. We also present 3D migrated results for the strike section and for a small 3D target containing a channel. Finally, the applications demonstrate that the ray+Born migration formula must be adapted to the acquisition geometry to obtain reliable estimates of the true amplitude of the model perturbations. This adaptation is relatively straightforward in the frame of the ray+Born formalism and can be guided by the analysis of the resolution operator.


2021 ◽  
Author(s):  
Alexander Bauer ◽  
Benjamin Schwarz ◽  
Dirk Gajewski

Most established methods for the estimation of subsurface velocity models rely on the measurements of reflected or diving waves and therefore require data with sufficiently large source-receiver offsets. For seismic data that lacks these offsets, such as vintage data, low-fold academic data or near zero-offset P-Cable data, these methods fail. Building on recent studies, we apply a workflow that exploits the diffracted wavefield for depth-velocity-model building. This workflow consists of three principal steps: (1) revealing the diffracted wavefield by modeling and adaptively subtracting reflections from the raw data, (2) characterizing the diffractions with physically meaningful wavefront attributes, (3) estimating depth-velocity models with wavefront tomography. We propose a hybrid 2D/3D approach, in which we apply the well-established and automated 2D workflow to numerous inlines of a high-resolution 3D P-Cable dataset acquired near Ritter Island, a small volcanic island located north-east of New Guinea known for a catastrophic flank collapse in 1888. We use the obtained set of parallel 2D velocity models to interpolate a 3D velocity model for the whole data cube, thus overcoming possible issues such as varying data quality in inline and crossline direction and the high computational cost of 3D data analysis. Even though the 2D workflow may suffer from out-of-plane effects, we obtain a smooth 3D velocity model that is consistent with the data.
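The interpolation step of the hybrid 2D/3D approach can be sketched as follows: the 2D models obtained on sparse inlines are interpolated sample-by-sample onto every inline of the cube (linear interpolation, toy geometry, and all names here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def interpolate_inline_models(models, inline_positions, all_inlines):
    """Fill a 3D velocity cube by linear interpolation of parallel 2D
    (depth x crossline) velocity models picked at sparse inline
    positions; a sketch of the hybrid 2D/3D idea."""
    models = np.asarray(models, dtype=float)        # (n_models, nz, nx)
    n_models = models.shape[0]
    flat = models.reshape(n_models, -1)
    cube_flat = np.empty((len(all_inlines), flat.shape[1]))
    for j in range(flat.shape[1]):                  # every (z, x) sample
        cube_flat[:, j] = np.interp(all_inlines, inline_positions, flat[:, j])
    return cube_flat.reshape(len(all_inlines), *models.shape[1:])

# Two constant toy models at inlines 0 and 10; inline 5 is their average:
m0 = np.full((3, 4), 1500.0)
m1 = np.full((3, 4), 2500.0)
cube = interpolate_inline_models([m0, m1], [0.0, 10.0], [0.0, 5.0, 10.0])
```

In practice a smoothing step would follow, since independent 2D results need not be perfectly consistent from inline to inline.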


2014 ◽  
Vol 26 (5) ◽  
pp. 907-919 ◽  
Author(s):  
Abd-Krim Seghouane ◽  
Yousef Saad

This letter proposes an algorithm for linear whitening that minimizes the mean squared error between the original and whitened data without using the truncated eigendecomposition (ED) of the covariance matrix of the original data. This algorithm uses Lanczos vectors to accurately approximate the major eigenvectors and eigenvalues of the covariance matrix of the original data. The major advantage of the proposed whitening approach is its low computational cost when compared with that of the truncated ED. This gain comes without sacrificing accuracy, as illustrated with an experiment of whitening a high-dimensional fMRI data set.
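The idea can be illustrated with SciPy's Lanczos-based eigensolver standing in for the letter's direct use of Lanczos vectors (a sketch under that substitution, not the authors' implementation):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def lanczos_whiten(X, k):
    """Whiten n x p data X in the subspace of the k leading eigenvectors
    of its sample covariance, with eigenpairs approximated by the
    Lanczos method (scipy's eigsh) instead of a full truncated ED."""
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / (X.shape[0] - 1)          # sample covariance
    vals, vecs = eigsh(C, k=k, which='LA')    # k largest eigenpairs
    W = vecs / np.sqrt(vals)                  # whitening directions
    return Xc @ W                             # scores with identity covariance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20)) @ rng.normal(size=(20, 20))  # correlated data
Y = lanczos_whiten(X, k=5)                    # cov(Y) is the 5x5 identity
```

For high-dimensional data such as fMRI volumes, one would apply the Lanczos iteration to the covariance operator implicitly (via matrix-vector products with Xc), avoiding the explicit p x p covariance formed here for brevity.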


1992 ◽  
Vol 32 (1) ◽  
pp. 276 ◽  
Author(s):  
T.J. Allen ◽  
P. Whiting

Several recent advances made in 3-D seismic data processing are discussed in this paper. Development of a time-variant FK dip-moveout algorithm allows application of the correct three-dimensional operator. Coupled with a high-dip one-pass 3-D migration algorithm, this provides improved resolution and response at all azimuths. The use of dilation operators extends the capability of the process to include an economical and accurate (within well-defined limits) 3-D depth migration. Accuracy of the migration velocity model may be improved by the use of migration velocity analysis: of the two approaches considered, the data-subsetting technique gives more reliable and interpretable results. Conflicts in recording azimuth and bin dimensions of overlapping 3-D surveys may be resolved by the use of a 3-D interpolation algorithm applied post 3-D stack, which allows the combined surveys to be 3-D migrated as one data set.


Geophysics ◽  
2008 ◽  
Vol 73 (6) ◽  
pp. S241-S249 ◽  
Author(s):  
Xiao-Bi Xie ◽  
Hui Yang

We have derived a broadband sensitivity kernel that relates the residual moveout (RMO) in prestack depth migration (PSDM) to velocity perturbations in the migration-velocity model. We have compared the kernel with the RMO directly measured from the migration image. The consistency between the sensitivity kernel and the measured sensitivity map validates the theory and the numerical implementation. Based on this broadband sensitivity kernel, we propose a new tomography method for migration-velocity analysis and updating — specifically, for the shot-record PSDM and shot-index common-image gather. As a result, time-consuming angle-domain analysis is not required. We use a fast one-way propagator and multiple forward scattering and single backscattering approximations to calculate the sensitivity kernel. Using synthetic data sets, we can successfully invert velocity perturbations from the migration RMO. This wave-equation-based method naturally incorporates the wave phenomena and is best teamed with the wave-equation migration method for velocity analysis. In addition, the new method maintains the simplicity of the ray-based velocity analysis method, with the more accurate sensitivity kernels replacing the rays.


Geophysics ◽  
2009 ◽  
Vol 74 (6) ◽  
pp. WCC79-WCC89 ◽  
Author(s):  
Hansruedi Maurer ◽  
Stewart Greenhalgh ◽  
Sabine Latzel

Analyses of synthetic frequency-domain acoustic waveform data provide new insights into the design and imaging capability of crosshole surveys. The full complex Fourier spectral data offer significantly more information than other data representations such as the amplitude, phase, or Hartley spectrum. Extensive eigenvalue analyses are used for further inspection of the information content offered by the seismic data. The goodness of different experimental configurations is investigated by varying the choice of (1) the frequencies, (2) the source and receiver spacings along the boreholes, and (3) the borehole separation. With only a few carefully chosen frequencies, a similar amount of information can be extracted from the seismic data as can be extracted with a much larger suite of equally spaced frequencies. Optimized data sets should include at least one very low frequency component. The remaining frequencies should be chosen from the upper end of the spectrum available. This strategy proved to be applicable to a simple homogeneous and a very complex velocity model. Further tests are required, but it appears on the available evidence to be model independent. Source and receiver spacings also have an effect on the goodness of an experimental setup, but there are only minor benefits to denser sampling when the increment is much smaller than the shortest wavelength included in a data set. If the borehole separation becomes unfavorably large, the information content of the data is degraded, even when many frequencies and small source and receiver spacings are considered. The findings are based on eigenvalue analyses using the true velocity models. Because under realistic conditions the true model is not known, it is shown that the optimized data sets are sufficiently robust to allow the iterative inversion schemes to converge to the global minimum. This is demonstrated by means of tomographic inversions of several optimized data sets.


Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. U21-U28 ◽  
Author(s):  
Weihong Fei ◽  
George A. McMechan

A new migration velocity analysis is developed by combining the speed of parsimonious prestack depth migration with velocity adjustments estimated within and across common-reflection-point (CRP) gathers. The proposed approach is much more efficient than conventional tomographic velocity analysis because only the traces that contribute to a series of CRP gathers are depth migrated at each iteration. The local interval-velocity adjustments for each CRP are obtained by maximizing the stack amplitude over the predicted (nonhyperbolic) moveout in each CRP gather; this does not involve retracing rays. At every iteration, the velocity in each pixel is updated by averaging over all the predicted velocity updates. Finally, CRP positions and orientations are updated by parsimonious migration, and rays are retraced to define new CRP gathers for the next iteration; this ensures internal consistency between the updated velocity model and the CRP gathers. Because the algorithm has a gridded-model parameterization, no explicit representation or fitting of reflectors is involved. Strong lateral-velocity variations, such as those found at salt flanks, can be handled. Applications to synthetic and field data sets show that the proposed algorithm works effectively and efficiently.


Geophysics ◽  
2019 ◽  
Vol 84 (3) ◽  
pp. U1-U11 ◽  
Author(s):  
Chunhui Dong ◽  
Shangxu Wang ◽  
Jianfeng Zhang ◽  
Jingsheng Ma ◽  
Hao Zhang

Migration velocity analysis is a labor-intensive part of the iterative prestack time migration (PSTM) process. We have developed a velocity estimation scheme that improves the efficiency of the velocity analysis process using an automatic approach. Our scheme is a numerical implementation of the conventional velocity analysis process based on residual moveout analysis. The key aspect of this scheme is the automatic event picking in the common-reflection-point (CRP) gathers, which is implemented by semblance scanning trace by trace. With the picked traveltime curves, we estimate the velocities at discrete grids in the velocity model using the least-squares method and build the final root-mean-square (rms) velocity model by spatial interpolation. The main advantage of our method is that it can generate an appropriate rms velocity model for PSTM in just a few iterations without manual manipulation. In addition, estimating the velocity model from curves fitted to the picked events over a range of offsets, akin to a normal-moveout correction, prevents our scheme from falling into local minima. The Sigsbee2B model and a field data set are used to verify the feasibility of our scheme. A high-quality velocity model and imaging results are obtained. Compared with the computational cost of generating the CRP gathers, the cost of our scheme is negligible, and the quality of the initial velocity is not critical.
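The semblance scan underlying the automatic picking can be sketched as follows. For each trial velocity a moveout correction is applied and the semblance (stacked energy over total energy) is measured in a window; the picked velocity maximizes it. The hyperbolic scan and all names here are a generic illustration, not the authors' code:

```python
import numpy as np

def semblance(window):
    """Semblance of an (nt x ntraces) gather window after moveout
    correction: stacked energy over total energy, between 0 and 1."""
    num = np.sum(window.sum(axis=1) ** 2)
    den = window.shape[1] * np.sum(window ** 2)
    return num / den if den > 0 else 0.0

def scan_velocities(gather, t0, offsets, dt, velocities, win=5):
    """For each trial velocity, apply hyperbolic moveout
    t(x) = sqrt(t0**2 + (x/v)**2) and compute semblance in a small
    window around t0; the best velocity maximizes the semblance."""
    nt = gather.shape[0]
    i0 = int(round(t0 / dt))
    scores = []
    for v in velocities:
        shifted = np.zeros_like(gather)
        for i, x in enumerate(offsets):
            tx = np.sqrt(t0**2 + (x / v) ** 2)
            shift = int(round((tx - t0) / dt))
            shifted[:nt - shift, i] = gather[shift:, i]
        scores.append(semblance(shifted[max(i0 - win, 0):i0 + win + 1]))
    return np.array(scores)

# Synthetic gather: one hyperbolic event with t0 = 0.4 s, v = 2000 m/s.
dt, t0, v_true = 0.004, 0.4, 2000.0
offsets = np.array([0.0, 200.0, 400.0, 600.0, 800.0])
gather = np.zeros((200, len(offsets)))
for i, x in enumerate(offsets):
    gather[int(round(np.sqrt(t0**2 + (x / v_true) ** 2) / dt)), i] = 1.0
scores = scan_velocities(gather, t0, offsets, dt, [1600.0, 2000.0, 2400.0])
```

The scan peaks at the middle trial velocity (the true 2000 m/s); running it trace by trace along the picked events is what makes the picking automatic.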

