Imaging discontinuities on seismic sections

Geophysics ◽  
1988 ◽  
Vol 53 (3) ◽  
pp. 334-345 ◽  
Author(s):  
Ernest R. Kanasewich ◽  
Suhas M. Phadke

In routine seismic processing, normal moveout (NMO) corrections are performed to enhance the reflected signals on common-depth-point or common-midpoint stacked sections. However, when faults are present, reflection interference from the two blocks and the diffractions from their edges make fault locations difficult to determine. Destruction of diffraction patterns by poststack migration further inhibits proper imaging of diffracting centers. This paper presents a new technique that helps in the interpretation of diffracting edges by concentrating the signal amplitudes from discontinuous diffracting points on seismic sections. It involves applying to the data moveout and amplitude corrections appropriate to an assumed diffractor location. The maximum diffraction amplitude occurs at the location of the receiver for which the diffracting discontinuity is beneath the source-receiver midpoint. Since the amplitudes of these diffracted signals drop very rapidly on either side of the midpoint, an appropriate amplitude correction must be applied. Also, because the diffracted signals are present on all traces, one can use all of them to obtain a stacked trace for one possible diffractor location. Repetition of this procedure for diffractors assumed to be located beneath each surface point results in the common-fault-point (CFP) stacked section, which shows diffractor locations by high amplitudes. The method was tested on synthetic data with and without noise. It proves to be quite effective, but is sensitive to the velocity model used for moveout corrections. Therefore, the velocity model obtained from NMO stacking is generally used for enhancing diffractor locations by stacking. Finally, the technique was applied to a field reflection data set from an area south of Princess well in Alberta.
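As a rough illustration of the stacking step described above, the following Python/NumPy sketch applies diffraction moveout and a Gaussian amplitude taper for a single trial diffractor depth in a constant-velocity medium, then stacks all traces for each assumed diffractor position. The function name, the taper shape, and all parameters are assumptions for illustration only, not the authors' exact corrections; in practice the NMO-derived velocity model and a scan over trial depths would be used.

import numpy as np

def cfp_stack(data, src_x, rec_x, dt, v, z_diff, taper_width=200.0):
    """data: (n_traces, n_samples) prestack traces; src_x, rec_x: per-trace
    source/receiver coordinates (m); v: assumed constant velocity (m/s);
    z_diff: trial diffractor depth (m). Returns one CFP-stacked trace per
    assumed diffractor surface position."""
    n_traces, n_samples = data.shape
    mid_x = 0.5 * (src_x + rec_x)
    out_x = np.unique(mid_x)                 # trial diffractor surface positions
    t = np.arange(n_samples) * dt
    t0 = 2.0 * z_diff / v                    # vertical two-way time of the trial diffractor
    section = np.zeros((len(out_x), n_samples))
    for i, x0 in enumerate(out_x):
        # diffraction traveltime: source -> diffractor -> receiver, per trace
        t_d = (np.hypot(src_x - x0, z_diff) + np.hypot(rec_x - x0, z_diff)) / v
        # amplitude correction: emphasize traces whose midpoint sits over the diffractor
        w = np.exp(-0.5 * ((mid_x - x0) / taper_width) ** 2)
        for j in range(n_traces):
            # moveout correction: shift the diffraction event to time t0, then stack
            section[i] += w[j] * np.interp(t + (t_d[j] - t0), t, data[j], left=0.0, right=0.0)
        section[i] /= max(w.sum(), 1e-12)
    return section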

Geophysics ◽  
2005 ◽  
Vol 70 (6) ◽  
pp. S111-S120
Author(s):  
Fabio Rocca ◽  
Massimiliano Vassallo ◽  
Giancarlo Bernasconi

Seismic depth migration back-propagates seismic data to the correct depth positions using information about the velocity of the medium. Usually, Kirchhoff summation is the preferred migration procedure for seismic-while-drilling (SWD) data because it can handle virtually any configuration of sources and receivers and can compensate for irregular spatial sampling of the array elements (receivers and sources). Under the assumption of a depth-varying velocity model, with receivers arranged along a horizontal circumference and sources placed along the central vertical axis, we reformulate the Kirchhoff summation in the angular frequency domain. In this way, the migration procedure becomes very efficient because the migrated volume is obtained by an inverse Fourier transform of the weighted data. The algorithm is suitable for 3D SWD acquisitions when the aforementioned hypothesis holds. We show migration tests on SWD synthetic data, and we derive solutions to reduce the migration artifacts and to control aliasing. The procedure is also applied to a real 3D SWD data set. The result compares satisfactorily with the seismic stack section obtained from surface reflection data and with the results of traditional Kirchhoff migration.
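One plausible reading of the efficiency gain is that, with receivers on a circle, sources on the central axis, and a 1D velocity model, the Kirchhoff sum over receiver azimuth is the same for every output azimuth up to a rotation, i.e., a circular convolution, which can be evaluated as a product of Fourier transforms in azimuth. The sketch below only demonstrates that convolution identity with a made-up weight kernel; it is not the paper's actual migration operator.

import numpy as np

def azimuthal_kirchhoff_sum(traces, kernel):
    """traces: (n_az,) data extracted along the diffraction traveltime for one
    trial image radius/depth; kernel: (n_az,) Kirchhoff weights as a function
    of azimuth lag (placeholder values here). Returns the sum for every output
    azimuth, evaluated in the angular-frequency (FFT) domain."""
    return np.fft.ifft(np.fft.fft(traces) * np.fft.fft(kernel)).real

# the FFT evaluation reproduces the direct summation over all receivers
n_az = 360
rng = np.random.default_rng(0)
traces = rng.standard_normal(n_az)
kernel = np.exp(-np.linspace(0.0, 2.0 * np.pi, n_az, endpoint=False) ** 2)
fast = azimuthal_kirchhoff_sum(traces, kernel)
slow = np.array([sum(traces[j] * kernel[(i - j) % n_az] for j in range(n_az))
                 for i in range(n_az)])
print(np.allclose(fast, slow))   # True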


2019 ◽  
Vol 217 (3) ◽  
pp. 1727-1741 ◽  
Author(s):  
D W Vasco ◽  
Seiji Nakagawa ◽  
Petr Petrov ◽  
Greg Newman

We introduce a new approach for locating earthquakes using arrival times derived from waveforms. The most costly computational step of the algorithm scales with the number of stations in the active seismographic network. In this approach, a variation on existing grid-search methods, a series of full-waveform simulations is conducted for all receiver locations, with sources positioned successively at each station. The traveltime field over the region of interest is calculated by applying a phase-picking algorithm to the numerical wavefields produced by each simulation. An event is located by subtracting the stored traveltime field from the arrival time at each station. This provides a shifted and time-reversed traveltime field for each station. The shifted and time-reversed fields all approach the origin time of the event at the source location. The mean or median value at the source location thus approximates the event origin time. Measures of dispersion about this mean or median time at each grid point, such as the sample standard error and the average deviation, are minimized at the correct source position. Uncertainty in the event position is provided by the contours of standard error defined over the grid. An application of this technique to a synthetic data set indicates that the approach provides stable locations even when the traveltimes are contaminated by additive random noise containing a significant number of outliers, and by velocity-model errors. It is found that the waveform-based method outperforms one based upon the eikonal equation for a velocity model with rapid spatial variations in properties due to layering. A comparison with conventional location algorithms in both a laboratory and a field setting demonstrates that the technique performs at least as well as existing techniques.
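The grid-search step lends itself to a compact sketch. Assuming the per-station traveltime fields have already been computed (in the paper, by picking full-waveform simulations with a source placed at each station), the event sits at the grid node where the spread of candidate origin times is smallest. Array shapes and names below are hypothetical.

import numpy as np

def locate_event(arrival_times, traveltime_fields):
    """arrival_times: (n_sta,) picked arrival times for one event.
    traveltime_fields: (n_sta, nx, nz) traveltime from each station to every
    grid node. Returns the grid indices where the spread of candidate origin
    times is minimal, the origin-time estimate there, and the spread map,
    whose contours express location uncertainty."""
    # shifted, time-reversed traveltime field per station = candidate origin times
    origin_candidates = arrival_times[:, None, None] - traveltime_fields
    spread = origin_candidates.std(axis=0)          # dispersion at every node
    i, j = np.unravel_index(np.argmin(spread), spread.shape)
    t0 = np.median(origin_candidates[:, i, j])      # robust origin-time estimate
    return (i, j), t0, spread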


Geophysics ◽  
2014 ◽  
Vol 79 (4) ◽  
pp. EN77-EN90 ◽  
Author(s):  
Paolo Bergamo ◽  
Laura Valentina Socco

Surface-wave (SW) techniques are mainly used to retrieve 1D velocity models and are therefore characterized by a 1D approach, which might prove unsatisfactory when relevant 2D effects are present in the investigated subsurface. In the case of sharp and sudden lateral heterogeneities in the subsurface, a strategy to tackle this limitation is to estimate the location of the discontinuities and to separately process seismic traces belonging to quasi-1D subsurface portions. We have focused our attention on methods aimed at locating discontinuities by identifying anomalies in SW propagation and attenuation. The considered methods are the autospectrum computation and the attenuation analysis of Rayleigh waves (AARW). These methods were developed for purposes and/or scales of analysis different from those of this work, which aims at detecting and characterizing sharp subvertical discontinuities in the shallow subsurface. We applied both methods to two data sets: synthetic data from a finite-element simulation and a field data set acquired over a fault system, both presenting an abrupt lateral variation perpendicularly crossing the acquisition line. We also extended the AARW method to the detection of sharp discontinuities from large and multifold data sets, and we tested these novel procedures on the field case. Both methods prove effective in detecting the discontinuity, portraying propagation phenomena linked to the presence of the heterogeneity, such as the interference between incident and reflected wavetrains and the energy concentration, with subsequent decay, at the fault location. The procedures we developed for processing multifold seismic data sets prove to be reliable tools for locating and characterizing sharp subvertical heterogeneities.
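A minimal sketch of the autospectrum idea, under the assumption that it amounts to mapping each receiver's power spectrum along the line so that lateral anomalies (energy concentration and subsequent decay) flag the discontinuity; the exact estimator used by the authors may differ.

import numpy as np

def autospectrum_profile(traces, dt):
    """traces: (n_rec, n_t) surface-wave records along the acquisition line.
    Returns the frequency axis and a per-receiver power spectrum; plotting the
    spectrum (or its total energy) against receiver position highlights the
    energy anomalies expected near a sharp lateral discontinuity."""
    spectrum = np.abs(np.fft.rfft(traces, axis=1)) ** 2
    freqs = np.fft.rfftfreq(traces.shape[1], dt)
    return freqs, spectrum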


Geophysics ◽  
2019 ◽  
Vol 84 (3) ◽  
pp. R411-R427 ◽  
Author(s):  
Gang Yao ◽  
Nuno V. da Silva ◽  
Michael Warner ◽  
Di Wu ◽  
Chenhao Yang

Full-waveform inversion (FWI) is a promising technique for recovering earth models in exploration geophysics and global seismology. FWI is generally formulated as the minimization of an objective function, defined as the L2-norm of the data residuals. The nonconvex nature of this objective function is one of the main obstacles to the successful application of FWI. A key manifestation of this nonconvexity is cycle skipping, which happens if the predicted data are more than half a cycle away from the recorded data. We have developed the concept of intermediate data for tackling cycle skipping. This intermediate data set is created to sit between the predicted and recorded data, and it is less than half a cycle away from the predicted data. Inverting the intermediate data rather than the cycle-skipped recorded data can then circumvent cycle skipping. We applied this concept to invert cycle-skipped first arrivals. First, we picked the first breaks of the predicted data and the recorded data. Second, for each trace in a shot, we linearly scaled down the time difference between the two first breaks into a time shift, keeping the maximum shift in the shot below half a cycle. Third, we shifted the predicted data by the corresponding time shifts to create the intermediate data. Finally, we inverted the intermediate data rather than the recorded data. Because the intermediate data are not cycle-skipped and contain the traveltime information of the recorded data, FWI with intermediate data updates the background velocity model in the correct direction. Thus, it produces a background velocity model accurate enough for carrying out conventional FWI to rebuild the intermediate- and short-wavelength components of the velocity model. Our numerical examples using synthetic data validate the intermediate-data concept for tackling cycle skipping and demonstrate its effectiveness for the application to first arrivals.
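The three data-preparation steps can be sketched compactly. The first-break picker below (a simple threshold detector), the per-shot linear scaling, and all parameter names are illustrative assumptions rather than the authors' implementation.

import numpy as np

def make_intermediate_data(pred, rec, dt, f_dom, pick_threshold=0.05):
    """pred, rec: (n_traces, n_t) predicted and recorded gathers for one shot.
    f_dom: dominant frequency (Hz). Returns the intermediate data, i.e., the
    predicted traces shifted part of the way toward the recorded first breaks."""
    def first_breaks(d):
        mask = np.abs(d) > pick_threshold * np.abs(d).max(axis=1, keepdims=True)
        return np.argmax(mask, axis=1) * dt          # first sample above threshold

    lag = first_breaks(rec) - first_breaks(pred)     # per-trace first-break difference
    half_cycle = 0.5 / f_dom
    scale = min(1.0, half_cycle / (np.abs(lag).max() + 1e-12))
    shifts = scale * lag                             # largest shift capped at half a cycle
    t = np.arange(pred.shape[1]) * dt
    # shift the predicted traces toward the recorded data to build the intermediate data
    return np.array([np.interp(t - s, t, tr, left=0.0, right=0.0)
                     for s, tr in zip(shifts, pred)])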


2017 ◽  
Vol 5 (3) ◽  
pp. SJ81-SJ90 ◽  
Author(s):  
Kainan Wang ◽  
Jesse Lomask ◽  
Felix Segovia

Well-log-to-seismic tying is a key step in many interpretation workflows for oil and gas exploration. Synthetic seismic traces from the wells are often manually tied to seismic data; this process can be very time consuming and, in some cases, inaccurate. Automatic methods, such as dynamic time warping (DTW), can match synthetic traces to seismic data. Although these methods are extremely fast, they tend to create interval velocities that are not geologically realistic. We describe a modification of DTW that we call the blocked dynamic warping (BDW) method. BDW generates an automatic, optimal well tie that honors geologically consistent velocity constraints. Consequently, it results in updated velocities that are more realistic than those of other methods. BDW constrains the updated velocity to be constant or linearly variable inside each geologic layer. With an optimal correlation between synthetic seismograms and surface seismic data, this algorithm returns an automatically updated time-depth curve and an updated interval velocity model that still retains the original geologic velocity boundaries. In other words, the algorithm finds the optimal solution for tying the synthetic to the seismic data while restricting the interval velocity changes to coincide with the initial input blocking. We demonstrate the application of the BDW technique on a synthetic data example and a field data set.
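For reference, a bare dynamic-time-warping alignment between a synthetic well-tie trace and the seismic trace at the well looks like the sketch below. BDW additionally constrains the warp so that the implied interval velocity stays constant or linearly variable within each geologic block, a constraint this plain-DTW sketch omits.

import numpy as np

def dtw_tie(synthetic, seismic):
    """Plain DTW between a synthetic trace and the seismic trace at the well;
    returns the alignment path (pairs of sample indices) and its total cost."""
    n, m = len(synthetic), len(seismic)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (synthetic[i - 1] - seismic[j - 1]) ** 2
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    path, i, j = [], n, m                            # backtrack the optimal warp
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1], cost[n, m]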


Geophysics ◽  
1993 ◽  
Vol 58 (1) ◽  
pp. 91-100 ◽  
Author(s):  
Claude F. Lafond ◽  
Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.
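The tomographic back-projection step can be illustrated with a damped least-squares update of cell slownesses from residual traveltimes accumulated along raypaths. The matrix form, damping, and names below are generic assumptions, not the authors' exact solver.

import numpy as np

def backproject_residuals(ray_lengths, dt_residual, damping=0.1):
    """ray_lengths: (n_rays, n_cells) path length of each ray in each velocity
    cell; dt_residual: (n_rays,) traveltime errors implied by the residual
    moveout. Returns a damped least-squares slowness update to add to the
    model before re-migrating and iterating."""
    n_cells = ray_lengths.shape[1]
    a = np.vstack([ray_lengths, damping * np.eye(n_cells)])
    b = np.concatenate([dt_residual, np.zeros(n_cells)])
    slowness_update, *_ = np.linalg.lstsq(a, b, rcond=None)
    return slowness_update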


Geophysics ◽  
2016 ◽  
Vol 81 (2) ◽  
pp. Q15-Q26 ◽  
Author(s):  
Giovanni Angelo Meles ◽  
Kees Wapenaar ◽  
Andrew Curtis

State-of-the-art methods to image the earth’s subsurface using active-source seismic reflection data involve reverse time migration. This and other standard seismic processing methods, such as velocity analysis, provide the best results only when all waves in the data set are primaries (waves reflected only once). A variety of methods are therefore deployed in processing to predict and remove multiples (waves reflected several times); however, accurate removal of those predicted multiples from the recorded data using adaptive subtraction techniques proves challenging, even in cases in which they can be predicted with reasonable accuracy. We present a new, alternative strategy to construct a parallel data set consisting only of primaries, calculated directly from recorded data. This obviates the need for multiple prediction and removal methods. Primaries are constructed by using convolutional interferometry to combine the first-arriving events of upgoing and direct-wave downgoing Green’s functions to virtual receivers in the subsurface. The required upgoing wavefields to virtual receivers are constructed by Marchenko redatuming. Crucially, this is possible without detailed models of the earth’s subsurface reflectivity structure: similar to most migration techniques, the method only requires surface reflection data and estimates of direct (nonreflected) arrivals between the virtual subsurface sources and the acquisition surface. We evaluate the method on a stratified synclinal model. It is shown to be particularly robust against errors in the reference velocity model used and to improve the migrated images substantially.


Geophysics ◽  
2021 ◽  
pp. 1-97
Author(s):  
Haorui Peng ◽  
Ivan Vasconcelos ◽  
Yanadet Sripanich ◽  
Lele Zhang

Marchenko methods can retrieve Green’s functions and focusing functions from single-sided reflection data and a smooth velocity model, as essential components of a redatuming process. Recent studies also indicate that a modified Marchenko scheme can reconstruct primary-only reflection responses directly from reflection data without requiring a priori model information. To provide insight into the artifacts that arise when input data are not ideally sampled, we study the effects of subsampling in both types of Marchenko methods for 2D earth models and data, by analyzing the behavior of Marchenko-based results on synthetic data subsampled in sources or receivers. With a layered model, we find that for Marchenko redatuming, subsampling effects depend jointly on the choice of integration variable and the subsampling dimension, originating from the integrand gather in the multidimensional convolution process. When reflection data are subsampled in a single dimension, integrating over the other dimension yields spatial gaps together with artifacts, whereas integrating over the subsampled dimension produces aliasing artifacts but no spatial gaps. Our complex subsalt model indicates that subsampling may lead to very strong artifacts, which can be further complicated by limited apertures. For Marchenko-based primary estimation (MPE), subsampling below a certain fraction of the fully sampled data can cause the MPE iterations to diverge, which can be mitigated to some extent by using more robust iterative solvers such as least-squares QR. Our results, covering redatuming and primary estimation in a range of subsampling scenarios, provide insights that can inform acquisition sampling choices as well as processing parameterization and quality control, e.g., setting up appropriate data filters and scaling to accommodate the effects of dipole fields, or helping to ensure that data interpolation achieves the levels of reconstruction quality needed to minimize subsampling artifacts in Marchenko-derived fields and images.


Geophysics ◽  
2005 ◽  
Vol 70 (1) ◽  
pp. S1-S17 ◽  
Author(s):  
Alison E. Malcolm ◽  
Maarten V. de Hoop ◽  
Jérôme H. Le Rousseau

Reflection seismic data continuation is the computation of data at source and receiver locations that differ from those in the original data, using whatever data are available. We develop a general theory of data continuation in the presence of caustics and illustrate it with three examples: dip moveout (DMO), azimuth moveout (AMO), and offset continuation. This theory does not require knowledge of the reflector positions. We construct the output data set from the input through the composition of three operators: an imaging operator, a modeling operator, and a restriction operator. This results in a single operator that maps directly from the input data to the desired output data. We use the calculus of Fourier integral operators to develop this theory in the presence of caustics. For both DMO and AMO, we compute impulse responses in a constant-velocity model and in a more complicated model in which caustics arise. This analysis reveals errors that can be introduced by assuming, for example, a model with a constant vertical velocity gradient when the true model is laterally heterogeneous. Data continuation uses as input a subset (common offset, common angle) of the available data, which may introduce artifacts in the continued data. One could suppress these artifacts by stacking over a neighborhood of input data (using a small range of offsets or angles, for example). We test data continuation on synthetic data from a model known to generate imaging artifacts. We show that stacking over input scattering angles suppresses artifacts in the continued data.


Geophysics ◽  
1990 ◽  
Vol 55 (3) ◽  
pp. 284-292 ◽  
Author(s):  
A. Pica ◽  
J. P. Diet ◽  
A. Tarantola

Interpretation of seismic waveforms can be expressed as an optimization problem based on a non-linear least-squares criterion to find the model that best explains the data. An initial model is corrected iteratively using a gradient method (conjugate gradient). At each iteration, computation of the direction of the model perturbation requires the forward propagation of the actual sources and the reverse-time propagation of the residuals (the misfit between the data and the synthetics); the two wave fields thus obtained are then correlated. An extra forward propagation is required to compute the amplitude of the perturbation along the conjugate-gradient direction. The number of propagations to be simulated numerically in each iteration therefore equals three times the number of shots. Since a 2-D finite-difference code is employed to solve the forward- and backward-propagation problems, the method is general and can handle arbitrary 2-D source-receiver configurations and lateral heterogeneities. Using conventional velocity analysis to derive an initial velocity model, the method is implemented on a real marine data set. The selected data set corresponds approximately to a horizontally stratified medium; consequently, a single-shot gather has been used for inversion. In spite of some simplifying assumptions used for wave-field propagation (acoustic approximation, point source), and using synthetics generated from a nearby sonic log to calibrate amplitudes, our final synthetics match the input data very well and the inversion result has clear similarities to the log.
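A schematic of the two ingredients named above, the zero-lag correlation of the forward and reverse-time wavefields that gives the perturbation direction, and the step length obtained from the extra forward propagation, might look as follows. The wavefield shapes and the linearized line search are assumptions for illustration, not the authors' exact discretization.

import numpy as np

def perturbation_direction(forward_wf, adjoint_wf, dt):
    """forward_wf: (n_t, nx, nz) snapshots of the forward-propagated source
    wavefield; adjoint_wf: same shape, the reverse-time-propagated residuals
    (time-aligned). Their zero-lag correlation over time gives the
    model-perturbation direction for the current iteration."""
    return np.sum(forward_wf * adjoint_wf, axis=0) * dt

def step_length(lin_pred_change, residuals):
    """lin_pred_change: data change produced by the extra forward propagation
    of the perturbed model (a finite-difference proxy for the Jacobian applied
    to the search direction); residuals: observed-minus-predicted data.
    The least-squares ratio gives the amplitude along the search direction."""
    return np.vdot(lin_pred_change, residuals).real / np.vdot(lin_pred_change, lin_pred_change).real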

