Straight-rays redatuming: A fast and robust alternative to wave-equation-based datuming

Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. U37-U46 ◽  
Author(s):  
Tariq Alkhalifah ◽  
Claudio Bagaini

Wave-equation-based redatuming is expensive and requires detailed knowledge of the shallow velocity field. We derive the analytical expression of a new prestack wavefield extrapolation operator, the Topographic Datuming Operator (TDO), which applies redatuming based on a straight-ray approximation above and below a chosen datum. This redatuming operator is applied directly to common-source gathers to downward continue the sources and receivers simultaneously to the datum level, without resorting to common-receiver gathers. As a result, the method is far more efficient and robust than conventional wave-equation-based redatuming and does not require an accurate depth-domain interval velocity model. In addition, TDO, unlike wave-equation-based redatuming, requires only effective velocities above the datum and thus can be applied using attributes valid for static-correction methods. Effective velocities beneath the datum permit us to replace the surface integral required by wave-equation redatuming with a line integral. In the particular case of infinite (in practice, very high with respect to the shallow layers) velocity beneath the datum, the TDO impulse response collapses to a point, and TDO redatuming is equivalent to conventional static correction, which may therefore be regarded as a special case of the newly derived operator. The computational cost of applying TDO is slightly larger than that of static corrections, yet TDO provides higher quality results, partially attributable to its ability to suppress diffractions emanating from anomalies above the datum. Because TDO is based on a geometrical-optics approximation, velocities estimated after TDO are not biased by the vertical-shift correction associated with conventional static correction. Application to a synthetic data set demonstrates the features of the method.

2017 ◽  
Vol 5 (3) ◽  
pp. SJ81-SJ90 ◽  
Author(s):  
Kainan Wang ◽  
Jesse Lomask ◽  
Felix Segovia

Well-log-to-seismic tying is a key step in many interpretation workflows for oil and gas exploration. Synthetic seismic traces from wells are often tied manually to seismic data; this process can be very time consuming and, in some cases, inaccurate. Automatic methods, such as dynamic time warping (DTW), can match synthetic traces to seismic data. Although these methods are extremely fast, they tend to create interval velocities that are not geologically realistic. We describe a modification of DTW called the blocked dynamic warping (BDW) method. BDW generates an automatic, optimal well tie that honors geologically consistent velocity constraints; consequently, it results in updated velocities that are more realistic than those of other methods. BDW constrains the updated velocity to be constant or linearly variable inside each geologic layer. With an optimal correlation between synthetic seismograms and surface seismic data, the algorithm returns an automatically updated time-depth curve and an updated interval velocity model that retains the original geologic velocity boundaries. In other words, the algorithm finds the optimal solution for tying the synthetic to the seismic data while restricting the interval-velocity changes to coincide with the initial input blocking. We demonstrate the BDW technique on a synthetic example and a field data set.
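As a rough illustration of the dynamic-programming core that BDW builds on, here is a minimal plain-DTW alignment between a synthetic trace and a seismic trace. This is a sketch of classic DTW only; the blocked velocity constraints that distinguish BDW are not implemented, and all names are illustrative:

```python
import numpy as np

def dtw_align(synthetic, seismic):
    """Classic dynamic time warping between two traces; returns the
    total alignment cost and the optimal warp path as (i, j) pairs."""
    n, m = len(synthetic), len(seismic)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(synthetic[i - 1] - seismic[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],       # stretch
                                 cost[i, j - 1],       # squeeze
                                 cost[i - 1, j - 1])   # match
    # Backtrack from (n, m) to recover the warp path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[n, m], path[::-1]
```

Identical traces align along the diagonal with zero cost; BDW would additionally force the warp (and hence the implied interval velocity) to vary only linearly within each geologic block.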


Geophysics ◽  
1988 ◽  
Vol 53 (10) ◽  
pp. 1311-1322 ◽  
Author(s):  
V. Shtivelman ◽  
A. Canning

Seismic sections are usually datum corrected by static shifting. For small differences in elevation and slowly varying velocities between the input datum and the output datum, static shifting is a sufficiently accurate datum-correction procedure. However, for significant differences in elevation and a more complicated velocity model, the accuracy of the static solution may prove insufficient, and a more exact method should be used. In this paper, we study the limitations of the static method of datum correction and develop simple and effective extrapolation schemes, based on the wave equation, that lead to more accurate datum corrections. The distortions of seismic events caused by static correction are illustrated by a number of simple examples. To reduce the distortions, we propose extrapolation schemes based on the asymptotics of the Kirchhoff integral solution of the 2D scalar wave equation. Application of the extrapolation algorithms to synthetic data shows that they provide accurate datum corrections even for a nonplanar input datum and vertical and lateral velocity variations. The algorithms have been successfully applied to real data.
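The static shifting that the paper takes as its starting point can be sketched in a few lines. This is a minimal sketch under simplifying assumptions (a flat output datum, a single replacement velocity, a nearest-sample shift); it is not one of the paper's wave-equation schemes:

```python
import numpy as np

def elevation_statics(src_elev, rec_elev, datum, v_repl):
    """Static time shift (s) per trace: the vertical traveltime from
    the source and receiver elevations (m) down to a flat datum,
    using one replacement velocity (m/s)."""
    return (np.asarray(src_elev) - datum) / v_repl + \
           (np.asarray(rec_elev) - datum) / v_repl

def apply_static(trace, shift_s, dt):
    """Shift a sampled trace by the nearest whole number of samples.
    Positive shifts move events to earlier times."""
    n = int(round(shift_s / dt))
    out = np.zeros_like(trace)
    if n >= 0:
        out[:len(trace) - n] = trace[n:]
    else:
        out[-n:] = trace[:len(trace) + n]
    return out
```

This purely vertical time shift is exactly the approximation whose distortions, for large elevation differences or complex velocities, motivate the Kirchhoff-based extrapolation schemes of the paper.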


Geophysics ◽  
2021 ◽  
Vol 86 (6) ◽  
pp. R913-R926
Author(s):  
Jianhua Wang ◽  
Jizhong Yang ◽  
Liangguo Dong ◽  
Yuzhu Liu

Wave-equation traveltime inversion (WTI) is a useful tool for background velocity model building. It is generally formulated and implemented in the time domain, in which the gradient is calculated by temporally crosscorrelating the source- and receiver-side wavefields. The time-domain source-side snapshots are either stored in memory or reconstructed through back propagation. The memory requirements and computational cost of WTI are thus prohibitively expensive, especially for 3D applications. To partially alleviate this problem, we provide an implementation of WTI in the frequency domain with a single frequency component. Because only one frequency is used, it is affordable to store the source- and receiver-side wavefields directly in memory, and there is no need for wavefield reconstruction during gradient calculation. In this way, we dramatically reduce the memory requirements and computational cost compared with the traditional time-domain WTI realization. For practical implementation, the frequency-domain wavefield is calculated by time-domain finite-difference forward modeling and is transformed to the frequency domain by an on-the-fly discrete Fourier transform. Numerical examples on a simple laterally periodic velocity model and the Marmousi model demonstrate that our method can obtain accurate background velocity models comparable with those from time-domain WTI and frequency-domain WTI with multiple frequencies. A field data set test indicates that our method obtains a background velocity model that predicts the seismic-wave traveltimes well.
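The on-the-fly discrete Fourier transform mentioned above amounts to accumulating one phase-weighted sum during the time loop, so no snapshots need to be stored. A minimal sketch, assuming the snapshots come from some external time-stepping loop (names are illustrative):

```python
import numpy as np

def on_the_fly_dft(wavefield_steps, dt, freq_hz):
    """Accumulate the monofrequency wavefield
    U(x, f) = sum_n u(x, t_n) * exp(-2*pi*i*f*t_n) * dt
    over time-domain snapshots u(x, t_n), without storing them."""
    omega = 2.0 * np.pi * freq_hz
    U = None
    for n, u in enumerate(wavefield_steps):
        phase = np.exp(-1j * omega * n * dt) * dt
        U = u * phase if U is None else U + u * phase
    return U
```

In a real finite-difference code the accumulation line would simply sit inside the existing time loop, adding one complex array of the model size per wavefield instead of thousands of snapshots.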


2012 ◽  
Vol 30 (4) ◽  
pp. 473 ◽  
Author(s):  
Felipe A. Terra ◽  
Jessé C. Costa ◽  
Amin Bassrei

Seismic imaging in depth is a challenge in geologically complex areas, where the seismic velocity varies laterally. A reliable velocity model estimate is necessary to succeed in seismic depth imaging, and stereotomography is an effective tool for this purpose. Also called slope tomography, it uses the slownesses and traveltimes of reflection events picked in common-source and common-receiver gathers. We evaluate an alternative implementation of stereotomography for velocity model building. The algorithm was validated on the Marmousoft synthetic data set and also used for velocity model estimation in a continental-slope region, using real data from the Jequitinhonha Basin, Brazil. This structurally complex data set demanded high-quality control of event picking, a judicious choice of regularization parameters, and free-surface multiple attenuation. The results for both the synthetic and real data show the computational feasibility and accuracy of the method.
Keywords: stereotomography, regularization, Jequitinhonha Basin


2019 ◽  
Vol 217 (3) ◽  
pp. 1727-1741 ◽  
Author(s):  
D W Vasco ◽  
Seiji Nakagawa ◽  
Petr Petrov ◽  
Greg Newman

We introduce a new approach for locating earthquakes using arrival times derived from waveforms. The most costly computational step of the algorithm scales as the number of stations in the active seismographic network. In this approach, a variation on existing grid-search methods, a series of full-waveform simulations is conducted for all receiver locations, with sources positioned successively at each station. The traveltime field over the region of interest is calculated by applying a phase-picking algorithm to the numerical wavefields produced by each simulation. An event is located by subtracting the stored traveltime field from the arrival time at each station, which provides a shifted and time-reversed traveltime field for each station. The shifted and time-reversed fields all approach the origin time of the event at the source location, so the mean or median value at the source location approximates the event origin time. Measures of dispersion about this mean or median time at each grid point, such as the sample standard error and the average deviation, are minimized at the correct source position. Uncertainty in the event position is provided by the contours of standard error defined over the grid. An application of this technique to a synthetic data set indicates that the approach provides stable locations even when the traveltimes are contaminated by additive random noise containing a significant number of outliers, and even in the presence of velocity model errors. The waveform-based method outperforms one based on the eikonal equation for a velocity model with rapid spatial variations in properties due to layering. A comparison with conventional location algorithms in both laboratory and field settings demonstrates that the technique performs at least as well as existing techniques.
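The location step itself (subtract each station's stored traveltime field from its pick, then find the grid node where the shifted fields agree best) can be sketched as follows. The traveltime grids, picks, and names are illustrative assumptions; the paper derives the grids from full-waveform simulations, and it uses the standard error rather than the plain standard deviation shown here:

```python
import numpy as np

def locate_event(traveltime_grids, arrival_times):
    """traveltime_grids: (nstations, nx, ny) traveltimes from each
    station to every grid node. arrival_times: (nstations,) picks.
    Returns the grid index minimizing the dispersion of the shifted,
    time-reversed fields, plus the median origin-time estimate there."""
    arrival_times = np.asarray(arrival_times)
    shifted = arrival_times[:, None, None] - traveltime_grids
    spread = np.std(shifted, axis=0)            # dispersion per node
    idx = np.unravel_index(np.argmin(spread), spread.shape)
    return idx, np.median(shifted[:, idx[0], idx[1]])
```

At the true source node every shifted field equals the origin time exactly (for perfect picks), so the dispersion vanishes there and the median recovers the origin time.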


Geophysics ◽  
2016 ◽  
Vol 81 (3) ◽  
pp. Q27-Q40 ◽  
Author(s):  
Katrin Löer ◽  
Andrew Curtis ◽  
Giovanni Angelo Meles

We have evaluated an explicit relationship between the representations of internal multiples by source-receiver interferometry and an inverse-scattering series. This provides a new insight into the interaction of different terms in each of these internal multiple prediction equations and explains why amplitudes of estimated multiples are typically incorrect. A downside of the existing representations is that their computational cost is extremely high, which can be a precluding factor especially in 3D applications. Using our insight from source-receiver interferometry, we have developed an alternative, computationally more efficient way to predict internal multiples. The new formula is based on crosscorrelation and convolution: two operations that are computationally cheap and routinely used in interferometric methods. We have compared the results of the standard and the alternative formulas qualitatively in terms of the constructed wavefields and quantitatively in terms of the computational cost using examples from a synthetic data set.
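The two cheap operations the alternative formula relies on, crosscorrelation and convolution, are routinely computed in the frequency domain. A minimal sketch of those building blocks only (not the paper's prediction formula, which sums such products over source-receiver pairs):

```python
import numpy as np

def xcorr(a, b, nfft):
    """Crosscorrelation of traces a and b: multiply by the complex
    conjugate in the frequency domain (traveltimes subtract)."""
    return np.real(np.fft.ifft(np.fft.fft(a, nfft) *
                               np.conj(np.fft.fft(b, nfft))))

def conv(a, b, nfft):
    """Convolution of traces a and b: plain product in the
    frequency domain (traveltimes add)."""
    return np.real(np.fft.ifft(np.fft.fft(a, nfft) *
                               np.fft.fft(b, nfft)))
```

Convolving two spikes at samples 10 and 15 yields a spike at sample 25 (times add), while correlating them yields a spike at circular lag -5 (times subtract), which is how such combinations can reconstruct an internal multiple's traveltime from primary events.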


Geophysics ◽  
2019 ◽  
Vol 84 (3) ◽  
pp. R411-R427 ◽  
Author(s):  
Gang Yao ◽  
Nuno V. da Silva ◽  
Michael Warner ◽  
Di Wu ◽  
Chenhao Yang

Full-waveform inversion (FWI) is a promising technique for recovering earth models in exploration geophysics and global seismology. FWI is generally formulated as the minimization of an objective function, defined as the L2-norm of the data residuals. The nonconvex nature of this objective function is one of the main obstacles to the successful application of FWI. A key manifestation of this nonconvexity is cycle skipping, which happens if the predicted data are more than half a cycle away from the recorded data. We have developed the concept of intermediate data for tackling cycle skipping. This intermediate data set is created to sit between the predicted and recorded data, and it is less than half a cycle away from the predicted data. Inverting the intermediate data rather than the cycle-skipped recorded data can then circumvent cycle skipping. We applied this concept to invert cycle-skipped first arrivals. First, we picked the first breaks of the predicted data and the recorded data. Second, we linearly scaled down the time difference between the two first breaks of each shot into a series of time shifts, the maximum of which was less than half a cycle, for each trace in the shot. Third, we shifted the predicted data by the corresponding time shifts to create the intermediate data. Finally, we inverted the intermediate data rather than the recorded data. Because the intermediate data are not cycle-skipped and contain the traveltime information of the recorded data, FWI with intermediate data updates the background velocity model in the correct direction. Thus, it produces a background velocity model accurate enough for carrying out conventional FWI to rebuild the intermediate- and short-wavelength components of the velocity model. Our numerical examples using synthetic data validate the intermediate-data concept for tackling cycle skipping and demonstrate its effectiveness for the application to first arrivals.
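The construction steps for the intermediate data can be sketched for one shot gather as follows. The first-break picking is assumed already done; the nearest-sample shift, the exact scaling rule, and all names are simplifying assumptions, not the paper's implementation:

```python
import numpy as np

def intermediate_data(predicted, fb_pred, fb_rec, dt, period):
    """predicted: (ntraces, nt) shot gather. fb_pred/fb_rec: picked
    first-break times (s) per trace. The first-break differences are
    scaled down so that no trace is shifted by more than half a
    period, then each predicted trace is moved by its scaled shift."""
    diff = np.asarray(fb_rec) - np.asarray(fb_pred)
    max_abs = np.max(np.abs(diff))
    scale = min(1.0, 0.5 * period / max_abs) if max_abs > 0 else 0.0
    shifts = diff * scale                      # seconds, per trace
    out = np.zeros_like(predicted)
    for k, s in enumerate(shifts):
        n = int(round(s / dt))                 # nearest-sample shift
        if n >= 0:
            out[k, n:] = predicted[k, :predicted.shape[1] - n]
        else:
            out[k, :n] = predicted[k, -n:]
    return out
```

Because each trace is shifted toward the recorded first break but never by more than half a cycle, the result is not cycle-skipped with respect to the predicted data while still carrying the recorded traveltime information.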


Geophysics ◽  
2020 ◽  
Vol 85 (6) ◽  
pp. G129-G141
Author(s):  
Diego Takahashi ◽  
Vanderlei C. Oliveira Jr. ◽  
Valéria C. F. Barbosa

We have developed an efficient and very fast equivalent-layer technique for gravity data processing by modifying an iterative method grounded on an excess-mass constraint that does not require the solution of linear systems. Taking advantage of the symmetric block-Toeplitz Toeplitz-block (BTTB) structure of the sensitivity matrix that arises when regular grids of observation points and equivalent sources (point masses) are used to set up a fictitious equivalent layer, we develop an algorithm that greatly reduces the computational complexity and RAM necessary to estimate a 2D mass distribution over the equivalent layer. The symmetric BTTB matrix is defined entirely by the elements of the first column of the sensitivity matrix, which, in turn, can be embedded into a symmetric block-circulant with circulant-block (BCCB) matrix. Likewise, only the first column of the BCCB matrix is needed to reconstruct the full sensitivity matrix completely. From the first column of the BCCB matrix, its eigenvalues can be calculated using the 2D fast Fourier transform (2D FFT), which can be used to readily compute the matrix-vector product of the forward modeling in the fast equivalent-layer technique. As a result, our method is efficient for processing very large data sets. Tests with synthetic data demonstrate the ability of our method to satisfactorily upward- and downward-continue gravity data. Our results show very small border effects and noise amplification compared to those produced by the classic approach in the Fourier domain. In addition, they show that, whereas the running time of our method is [Formula: see text] s for processing [Formula: see text] observations, the fast equivalent-layer technique used [Formula: see text] s with [Formula: see text]. A test with field data from the Carajás Province, Brazil, illustrates the low computational cost of our method to process a large data set composed of [Formula: see text] observations.
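The key property exploited above, that a BCCB matrix is diagonalized by the 2D FFT of its first column, turns the O(N^2) forward-modeling matrix-vector product into an O(N log N) FFT operation. A minimal sketch of that product alone (the paper's BTTB-to-BCCB embedding and the iterative mass estimation are not shown):

```python
import numpy as np

def bccb_matvec(first_col_grid, v_grid):
    """Matrix-vector product with a BCCB matrix, using only its first
    column. first_col_grid and v_grid are both (nx, ny) arrays: the
    first column of the matrix and the vector, reshaped to the grid.
    The eigenvalues are the 2D FFT of the first column, so the
    product is a 2D circular convolution computed with FFTs."""
    eigvals = np.fft.fft2(first_col_grid)
    return np.real(np.fft.ifft2(eigvals * np.fft.fft2(v_grid)))
```

Since the product equals a 2D circular convolution with the first column, it can be verified directly against the explicit quadruple-loop sum, while needing only one grid-sized array instead of the dense matrix.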


Geophysics ◽  
1993 ◽  
Vol 58 (1) ◽  
pp. 91-100 ◽  
Author(s):  
Claude F. Lafond ◽  
Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.


Geophysics ◽  
1992 ◽  
Vol 57 (8) ◽  
pp. 1034-1047 ◽  
Author(s):  
Biondo Biondi

Imaging seismic data requires detailed knowledge of the propagation velocity of compressional waves in the subsurface. In conventional seismic processing, the interval velocity model is usually derived from stacking velocities. Stacking velocities are determined by measuring the coherency of the reflections along hyperbolic moveout trajectories in offset. This conventional method becomes inaccurate in geologically complex areas because the conversion of stacking velocities to interval velocities assumes a horizontally stratified medium and mild lateral variations in velocity. The tomographic velocity estimation proposed in this paper can be applied when there are dipping reflectors and strong lateral variations. The method is based on the measurements of moveouts by beam stacks. A beam stack measures local coherency of reflections along hyperbolic trajectories. Because it is a local operator, the beam stack can provide information on nonhyperbolic moveouts in the data. This information is more reliable than traveltimes of reflections picked directly from the data because many seismic traces are used for computing beam stacks. To estimate interval velocity, I iteratively search for the velocity model that best predicts the events in beam‐stacked data. My estimation method does not require a preliminary picking of the data because it directly maximizes the beam‐stack’s energy at the traveltimes and surface locations predicted by ray tracing. The advantage of this formulation is that detection of the events in the beam‐stacked data can be guided by the imposition of smoothness constraints on the velocity model. The optimization problem of maximizing beam‐stack energy is solved by a gradient algorithm. To compute the derivatives of the objective function with respect to the velocity model, I derive a linear operator that relates perturbations in velocity to the observed changes in the beam‐stack kinematics. 
The method has been successfully applied to a marine survey for estimating a low‐velocity anomaly. The estimated velocity function correctly predicts the nonhyperbolic moveouts in the data caused by the velocity anomaly.
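A beam stack, local coherency along a hyperbolic trajectory, can be sketched as a hyperbolic stack over a window of offsets. The nearest-sample interpolation and all names are simplifications for illustration, not Biondi's implementation:

```python
import numpy as np

def beam_stack(gather, offsets, t0, v, dt):
    """Stack the amplitudes of a gather (ntraces, nt) along the
    hyperbolic moveout t(x) = sqrt(t0^2 + (x/v)^2), picking the
    nearest time sample on each trace at offset x."""
    total = 0.0
    for trace, x in zip(gather, offsets):
        t = np.sqrt(t0 ** 2 + (x / v) ** 2)
        n = int(round(t / dt))
        if n < len(trace):
            total += trace[n]
    return total
```

Scanning t0 and v and keeping the parameters that maximize this stacked energy is the local coherency measurement; restricting the offsets to a local window is what lets the beam stack detect the nonhyperbolic moveouts that a full-offset hyperbolic stack would average away.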

