Kirchhoff migration using eikonal equation traveltimes

Geophysics, 1994, Vol. 59 (5), pp. 810-817
Author(s): Samuel H. Gray, William P. May

The use of ray shooting followed by interpolation of traveltimes onto a regular grid is a popular and robust method for computing diffraction curves for Kirchhoff migration. An alternative to this method is to compute the traveltimes by directly solving the eikonal equation on a regular grid, without computing raypaths. Solving the eikonal equation on such a grid simplifies the problem of interpolating times onto the migration grid, but this method is not well defined at points where two different branches of the traveltime field meet. Also, computational and data storage issues that are relatively unimportant for performance in two dimensions limit the applicability of both schemes in three dimensions. A new implementation of a gridded eikonal equation solver has been designed to address these problems. A 2-D version of this algorithm is tested by using it to generate traveltimes to migrate the Marmousi synthetic data set using the exact velocity model. The results are compared with three other images: an F-X migration (a standard for comparison), a Kirchhoff migration using ray tracing, and a Kirchhoff migration using traveltimes generated by a commonly used eikonal equation solver. The F-X‐migrated image shows the imaging objective more clearly than any of the Kirchhoff migrations, and we advance a heuristic reason to explain this fact. Of the Kirchhoff migrations, the one using ray tracing produces the best image, and the other two are of comparable quality.
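Gridded eikonal solvers of the kind compared above share a common core: a first-order upwind (Godunov) update applied node by node over the slowness grid. As an illustrative sketch only (not the authors' implementation), a minimal 2-D fast marching solver might look like this; the function name and grid conventions are assumptions for the example:

```python
import heapq
import numpy as np

def fmm_traveltimes(slowness, src, h=1.0):
    """First-arrival traveltimes T solving |grad T| = s on a 2-D grid
    with spacing h, via the fast marching method (first-order upwind)."""
    ny, nx = slowness.shape
    T = np.full((ny, nx), np.inf)
    accepted = np.zeros((ny, nx), dtype=bool)
    T[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if accepted[i, j]:
            continue
        accepted[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if not (0 <= ii < ny and 0 <= jj < nx) or accepted[ii, jj]:
                continue
            # smallest neighbor time along each axis (upwind direction)
            tx = min(T[ii, jj - 1] if jj > 0 else np.inf,
                     T[ii, jj + 1] if jj < nx - 1 else np.inf)
            ty = min(T[ii - 1, jj] if ii > 0 else np.inf,
                     T[ii + 1, jj] if ii < ny - 1 else np.inf)
            f = slowness[ii, jj] * h
            a, b = sorted((tx, ty))
            # Godunov update: solve (T - tx)^2 + (T - ty)^2 = f^2,
            # falling back to a one-sided update when causality demands it
            if b - a >= f:
                t_new = a + f
            else:
                t_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
            if t_new < T[ii, jj]:
                T[ii, jj] = t_new
                heapq.heappush(heap, (t_new, (ii, jj)))
    return T
```

Because the update is single-valued at each node, a solver like this returns only the first arrival; that is exactly why multivalued traveltime branches, where two parts of the wavefront meet, are problematic for this class of methods.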

Geophysics, 2002, Vol. 67 (4), pp. 1270-1274
Author(s): Le-Wei Mo, Jerry M. Harris

Traveltimes of direct arrivals are obtained by solving the eikonal equation using finite differences. A uniform square grid represents both the velocity model and the traveltime table. Wavefront discontinuities across a velocity interface at postcritical incidence, together with insights from direct-arrival ray tracing, are incorporated into the traveltime computation so that the procedure is stable at precritical, critical, and postcritical incidence angles. The traveltimes can be used in Kirchhoff migration, tomography, and NMO corrections that require direct-arrival traveltimes on a uniform grid.


2019, Vol. 217 (3), pp. 1727-1741
Author(s): D. W. Vasco, Seiji Nakagawa, Petr Petrov, Greg Newman

We introduce a new approach for locating earthquakes using arrival times derived from waveforms. The most costly computational step of the algorithm scales as the number of stations in the active seismographic network. In this approach, a variation on existing grid-search methods, a series of full-waveform simulations is conducted for all receiver locations, with sources positioned successively at each station. The traveltime field over the region of interest is calculated by applying a phase-picking algorithm to the numerical wavefields produced by each simulation. An event is located by subtracting the stored traveltime field from the arrival time at each station, which provides a shifted and time-reversed traveltime field for each station. The shifted and time-reversed fields all approach the origin time of the event at the source location, so the mean or median value at the source location approximates the event origin time. Measures of dispersion about this mean or median time at each grid point, such as the sample standard error and the average deviation, are minimized at the correct source position. Uncertainty in the event position is provided by the contours of standard error defined over the grid. An application of this technique to a synthetic data set indicates that the approach provides stable locations even when the traveltimes are contaminated by velocity-model errors and by additive random noise containing a significant number of outliers. The waveform-based method outperforms one based upon the eikonal equation for a velocity model with rapid spatial variations in properties due to layering. A comparison with conventional location algorithms in both laboratory and field settings demonstrates that the technique performs at least as well as existing techniques.
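The grid-search core of this location scheme is compact: subtract each station's stored traveltime field from its picked arrival time, then minimize the spread of the resulting origin-time estimates over the grid. The sketch below assumes precomputed traveltime tables and is illustrative only; array shapes and names are not from the paper:

```python
import numpy as np

def locate_event(traveltime_tables, arrival_times):
    """Grid-search event location.
    traveltime_tables: (nsta, ny, nx) traveltimes from each station to
    every grid node (by reciprocity, computed with sources at stations).
    arrival_times: (nsta,) picked arrival times for one event.
    For each station k, t_k - T_k(x) estimates the origin time; the
    estimates agree (dispersion is minimized) at the true source."""
    shifted = arrival_times[:, None, None] - traveltime_tables
    origin = np.median(shifted, axis=0)   # robust origin-time estimate per node
    spread = np.std(shifted, axis=0)      # dispersion across stations per node
    idx = np.unravel_index(np.argmin(spread), spread.shape)
    return idx, origin[idx]
```

Using the median of the shifted fields (rather than the mean) is what gives the method its tolerance to outlier picks, since a few bad arrival times barely move the median origin-time estimate.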


Geophysics, 2018, Vol. 83 (2), pp. U9-U22
Author(s): Jide Nosakare Ogunbo, Guy Marquis, Jie Zhang, Weizhong Wang

Geophysical joint inversion requires setting a few parameters for optimum performance of the process. However, there are as yet no known detailed procedures for selecting these parameters. Previous work on the joint inversion of electromagnetic (EM) and seismic data has reported parameter choices for data sets acquired from the same dimensional geometry (either two or three dimensions), with few studies on mixed geometries; none has discussed parameter selection for the joint inversion of methods with different geometries (for example, 2D seismic traveltime and pseudo-2D frequency-domain EM data). With the advantage of affordable computational cost, and because a 1D EM model is a sufficient approximation in a horizontally layered sedimentary environment, we are able to set optimum parameters for a structurally constrained joint inversion of 2D seismic traveltime and pseudo-2D EM data for hydrocarbon exploration. From synthetic experiments, even in the presence of noise, we prescribe rules for optimum parameter setting for the joint inversion, including the choice of initial model and the cross-gradient weighting. We apply these rules to field data to reconstruct a more reliable subsurface velocity model than the one obtained by traveltime inversion alone. We expect this approach to be useful for joint inversion of seismic traveltime and frequency-domain EM data in hydrocarbon exploration.
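The structural constraint weighted in this kind of joint inversion is commonly the cross-gradient function, which vanishes wherever the two models' gradients are parallel, i.e. where they share structural boundaries. A minimal 2-D sketch (illustrative only, not the authors' code):

```python
import numpy as np

def cross_gradient(m1, m2, dx=1.0, dz=1.0):
    """Out-of-plane component of grad(m1) x grad(m2) on a 2-D grid.
    Zero where the gradients of the two models are parallel, so
    penalizing it steers the models toward shared structure."""
    g1z, g1x = np.gradient(m1, dz, dx)
    g2z, g2x = np.gradient(m2, dz, dx)
    return g1x * g2z - g1z * g2x
```

In a joint objective function, the squared norm of this quantity is added with a user-chosen weight; selecting that weight is one of the parameter choices the abstract refers to.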


Geophysics, 2019, Vol. 84 (3), pp. R411-R427
Author(s): Gang Yao, Nuno V. da Silva, Michael Warner, Di Wu, Chenhao Yang

Full-waveform inversion (FWI) is a promising technique for recovering the earth models for exploration geophysics and global seismology. FWI is generally formulated as the minimization of an objective function, defined as the L2-norm of the data residuals. The nonconvex nature of this objective function is one of the main obstacles for the successful application of FWI. A key manifestation of this nonconvexity is cycle skipping, which happens if the predicted data are more than half a cycle away from the recorded data. We have developed the concept of intermediate data for tackling cycle skipping. This intermediate data set is created to sit between predicted and recorded data, and it is less than half a cycle away from the predicted data. Inverting the intermediate data rather than the cycle-skipped recorded data can then circumvent cycle skipping. We applied this concept to invert cycle-skipped first arrivals. First, we picked the first breaks of the predicted data and the recorded data. Second, we linearly scaled down the time difference between the two first breaks of each shot into a series of time shifts, the maximum of which was less than half a cycle, for each trace in this shot. Third, we moved the predicted data by the corresponding time shifts to create the intermediate data. Finally, we inverted the intermediate data rather than the recorded data. Because the intermediate data are not cycle-skipped and contain the traveltime information of the recorded data, FWI with intermediate data updates the background velocity model in the correct direction. Thus, it produces a background velocity model accurate enough for carrying out conventional FWI to rebuild the intermediate- and short-wavelength components of the velocity model. Our numerical examples using synthetic data validate the intermediate-data concept for tackling cycle skipping and demonstrate its effectiveness for the application to first arrivals.
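The construction of the intermediate data reduces to a per-trace time shift: scale the first-break differences of a shot so the largest shift stays under half a cycle, then shift each predicted trace by its scaled amount. A hedged sketch of that step (names, the 0.45-period cap, and the interpolation choice are assumptions for illustration, not the authors' code):

```python
import numpy as np

def intermediate_shot(pred, fb_diff, period, dt):
    """Build intermediate data for one shot gather.
    pred: (ntrace, nt) predicted traces sampled at dt seconds.
    fb_diff: (ntrace,) first-break differences t_recorded - t_predicted (s).
    period: dominant period (s); shifts are capped below period/2 so the
    intermediate data are never cycle-skipped relative to the prediction."""
    max_shift = 0.45 * period  # just under half a cycle (assumed margin)
    scale = min(1.0, max_shift / (np.abs(fb_diff).max() + 1e-12))
    nt = pred.shape[1]
    t = np.arange(nt) * dt
    out = np.empty_like(pred)
    for k, trace in enumerate(pred):
        # delay the predicted trace by its scaled first-break difference
        out[k] = np.interp(t - scale * fb_diff[k], t, trace,
                           left=0.0, right=0.0)
    return out
```

Because every trace in the gather is scaled by the same factor, the relative moveout of the predicted first breaks is preserved while the absolute shifts stay within the no-cycle-skip bound.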


2017, Vol. 5 (3), pp. SJ81-SJ90
Author(s): Kainan Wang, Jesse Lomask, Felix Segovia

Well-log-to-seismic tying is a key step in many interpretation workflows for oil and gas exploration. Synthetic seismic traces from the wells are often manually tied to seismic data; this process can be very time consuming and, in some cases, inaccurate. Automatic methods, such as dynamic time warping (DTW), can match synthetic traces to seismic data. Although these methods are extremely fast, they tend to create interval velocities that are not geologically realistic. We have described the modification of DTW to create a blocked dynamic warping (BDW) method. BDW generates an automatic, optimal well tie that honors geologically consistent velocity constraints. Consequently, it results in updated velocities that are more realistic than those of other methods. BDW constrains the updated velocity to be constant or linearly variable inside each geologic layer. With an optimal correlation between synthetic seismograms and surface seismic data, this algorithm returns an automatically updated time-depth curve and an updated interval velocity model that still retains the original geologic velocity boundaries. In other words, the algorithm finds the optimal solution for tying the synthetic to the seismic data while restricting the interval velocity changes to coincide with the initial input blocking. We demonstrate the application of the BDW technique on a synthetic data example and a field data set.
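The unblocked starting point of this method is classic dynamic time warping: accumulate a cost matrix and backtrack the cheapest alignment path. A minimal sketch (illustrative; BDW additionally restricts the implied velocity updates to be constant or linear within geologic blocks, which this sketch does not do):

```python
import numpy as np

def dtw_path(synthetic, seismic):
    """Classic DTW between a synthetic trace and a seismic trace.
    Returns the alignment path as (synthetic index, seismic index) pairs."""
    n, m = len(synthetic), len(seismic)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (synthetic[i - 1] - seismic[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack the minimal-cost path from the end to the start
    path, i, j = [(n - 1, m - 1)], n, m
    while (i, j) != (1, 1):
        moves = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min((mv for mv in moves if mv[0] >= 1 and mv[1] >= 1),
                   key=lambda mv: D[mv])
        path.append((i - 1, j - 1))
    return path[::-1]
```

The unconstrained path above is exactly what produces geologically unrealistic interval velocities; the blocking constraint in BDW is what regularizes the warp within each layer.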


Geophysics, 1993, Vol. 58 (1), pp. 91-100
Author(s): Claude F. Lafond, Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.


Geophysics, 2007, Vol. 72 (3), pp. S133-S138
Author(s): Tianfei Zhu, Samuel H. Gray, Daoliu Wang

Gaussian-beam depth migration is a useful alternative to Kirchhoff and wave-equation migrations. It overcomes the limitations of Kirchhoff migration in imaging multipathing arrivals, while retaining its efficiency and its capability of imaging steep dips with turning waves. Extension of this migration method to anisotropic media has, however, been hampered by the difficulties in traditional kinematic and dynamic ray-tracing systems in inhomogeneous, anisotropic media. Formulated in terms of elastic parameters, the traditional anisotropic ray-tracing systems are difficult to implement and inefficient for computation, especially for the dynamic ray-tracing system. They may also result in ambiguity in specifying elastic parameters for a given medium. To overcome these difficulties, we have reformulated the ray-tracing systems in terms of phase velocity. These reformulated systems are simple and especially useful for general transversely isotropic and weak orthorhombic media, because the phase velocities for these two types of media can be computed with simple analytic expressions. These two types of media also represent the majority of anisotropy observed in sedimentary rocks. Based on these newly developed ray-tracing systems, we have extended prestack Gaussian-beam depth migration to general transversely isotropic media. Test results with synthetic data show that our anisotropic, prestack Gaussian-beam migration is accurate and efficient. It produces images superior to those generated by anisotropic, prestack Kirchhoff migration.
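To give a flavor of the analytic phase-velocity expressions that make such reformulated ray tracing cheap, here is one widely used closed form, the Thomsen weak-anisotropy approximation for P-waves in a VTI medium. It is offered as an example of this class of expressions, not necessarily the exact form used in the paper:

```python
import numpy as np

def vti_p_phase_velocity(theta, vp0, epsilon, delta):
    """Thomsen weak-anisotropy P-wave phase velocity in a VTI medium:
    v(theta) = vp0 * (1 + delta*sin^2(theta)*cos^2(theta)
                        + epsilon*sin^4(theta)),
    with theta the phase angle from the symmetry axis."""
    s2 = np.sin(theta) ** 2
    return vp0 * (1.0 + delta * s2 * (1.0 - s2) + epsilon * s2 ** 2)
```

Because v and its angle derivatives are simple analytic functions of the Thomsen parameters, the kinematic and dynamic ray-tracing systems can be evaluated without ever assembling the full elastic stiffness tensor.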


Geophysics, 2009, Vol. 74 (1), pp. S1-S10
Author(s): Mathias Alerini, Bjørn Ursin

Kirchhoff migration is based on a continuous integral ranging from minus infinity to plus infinity. The necessary discretization and truncation of this integral introduces noise in the migrated image. The attenuation of this noise has been studied by many authors who propose different strategies. The main idea is to limit the migration operator around the specular point. This means that the specular point must be known before migration and that a criterion exists to determine the size of the migration operator. We propose an original approach to estimate the size of the focusing window, knowing the geologic dip. The approach benefits from the use of prestack depth migration in angle domain, which is recognized as the most artifact-free Kirchhoff-type migration. The main advantages of the method are ease of implementation in an existing angle-migration code (two or three dimensions), user friendliness, ability to take into account multiorientation of the local geology as in faulted regions, and flexibility with respect to the quality of the estimated geologic dip field. Common-image gathers resulting from the method are free from migration noise and can be postprocessed in an easier way. We validate the approach and its possibilities on synthetic data examples with different levels of complexity.


Geophysics, 1988, Vol. 53 (12), pp. 1540-1546
Author(s): T. H. Keho, W. B. Beydoun

A rapid nonrecursive prestack Kirchhoff migration is implemented (for 2-D or 2.5-D media) by computing the Green’s functions (both traveltimes and amplitudes) in variable velocity media with the paraxial ray method. Since the paraxial ray method allows the Green’s functions to be determined at points which do not lie on the ray, two‐point ray tracing is not required. The Green’s functions between a source or receiver location and a dense grid of thousands of image points can be estimated to a desired accuracy by shooting a sufficiently dense fan of rays. For a given grid of image points, the paraxial ray method reduces computation time by one order of magnitude compared with interpolation schemes. The method is illustrated using synthetic data generated by acoustic ray tracing. Application to VSP data collected in a borehole adjacent to a reef in Michigan produces an image that clearly shows the location of the reef.


Geophysics, 2005, Vol. 70 (1), pp. S1-S17
Author(s): Alison E. Malcolm, Maarten V. de Hoop, Jérôme H. Le Rousseau

Reflection seismic data continuation is the computation of data at source and receiver locations that differ from those in the original data, using whatever data are available. We develop a general theory of data continuation in the presence of caustics and illustrate it with three examples: dip moveout (DMO), azimuth moveout (AMO), and offset continuation. This theory does not require knowledge of the reflector positions. We construct the output data set from the input through the composition of three operators: an imaging operator, a modeling operator, and a restriction operator. This results in a single operator that maps directly from the input data to the desired output data. We use the calculus of Fourier integral operators to develop this theory in the presence of caustics. For both DMO and AMO, we compute impulse responses in a constant-velocity model and in a more complicated model in which caustics arise. This analysis reveals errors that can be introduced by assuming, for example, a model with a constant vertical velocity gradient when the true model is laterally heterogeneous. Data continuation uses as input a subset (common offset, common angle) of the available data, which may introduce artifacts in the continued data. One could suppress these artifacts by stacking over a neighborhood of input data (using a small range of offsets or angles, for example). We test data continuation on synthetic data from a model known to generate imaging artifacts. We show that stacking over input scattering angles suppresses artifacts in the continued data.

