Wave‐equation tomography

Geophysics ◽  
1992 ◽  
Vol 57 (1) ◽  
pp. 15-26 ◽  
Author(s):  
Marta Jo Woodward

The relation between ray‐trace and diffraction tomography is usually obscured by formulation of the two methods in different domains: the former in space, the latter in wavenumber. Here diffraction tomography is reformulated in the space domain, under the title of wave‐equation tomography. With this transformation, wave‐equation tomography projects monochromatic, scattered wavefields back over source‐receiver wavepaths, just as ray‐trace tomography projects traveltime delays back over source‐receiver raypaths. Derived under the Born approximation, these wavepaths are wave‐theoretic back‐projection patterns for reflected energy; derived under the Rytov approximation, they are wave‐theoretic back‐projection patterns for transmitted energy. Differences between ray‐trace and wave‐equation tomography are examined through comparison of wavepaths and raypaths, followed by their application to a transmission‐geometry synthetic data set. Rytov wave‐equation tomography proves superior to ray‐trace tomography in dealing with geometrical frequency dispersion and finite‐aperture data, but inferior in robustness. Where ray‐trace tomography assumes linear phase delay and inverts the arrival time of one well‐understood event, wave‐equation tomography accommodates scattering and inverts all of the signal and noise on an infinite trace simultaneously. Interpreted through the uncertainty relation, these differences lead to a redefinition of Rytov wavepaths as monochromatic raypaths, and of raypaths as infinite‐bandwidth wavepaths (Rytov wavepaths averaged over an infinite bandwidth). The infinite‐bandwidth and infinite‐time assumptions of ray‐trace and Rytov wave‐equation tomography are reconciled through the introduction of bandlimited raypaths (Rytov wavepaths averaged over a finite bandwidth). A compromise between rays and waves, bandlimited raypaths are broad back‐projection patterns that account for the uncertainty inherent in picking traveltimes from bandlimited data.
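The closing idea, that a bandlimited raypath is a Rytov wavepath averaged over a finite frequency band, can be sketched numerically. The snippet below is a minimal illustration rather than the paper's formulation: it assumes a homogeneous medium and takes cos(ωτ) as a stand-in for the oscillatory part of a monochromatic wavepath, where τ is the scattering detour time; averaging over a band concentrates the sensitivity near the straight ray (roughly the first Fresnel zone).

```python
import math

def detour_time(src, rec, x, c):
    """Extra traveltime for a scatterer at x relative to the direct src-rec path."""
    d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (d(src, x) + d(x, rec) - d(src, rec)) / c

def bandlimited_weight(tau, omegas):
    """Average the oscillatory wavepath factor cos(omega*tau) over a frequency band."""
    return sum(math.cos(w * tau) for w in omegas) / len(omegas)

src, rec, c = (0.0, 0.0), (100.0, 0.0), 2.0
band = [2.0 * math.pi * (5.0 + 0.1 * k) for k in range(101)]  # 5-15 Hz band

on_ray = bandlimited_weight(detour_time(src, rec, (50.0, 0.0), c), band)
off_ray = bandlimited_weight(detour_time(src, rec, (50.0, 10.0), c), band)
print(on_ray, off_ray)  # sensitivity is 1 on the ray and decays away from it
```

On the straight ray the detour time is zero, so every frequency contributes constructively; off the ray the monochromatic contributions oscillate and the band average is small, which is the sense in which the bandlimited raypath is "broad but finite".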

2017 ◽  
Vol 5 (3) ◽  
pp. SO21-SO30 ◽  
Author(s):  
Shihang Feng ◽  
Gerard T. Schuster

We have developed a tutorial for skeletonized inversion of pseudoacoustic data in vertically transversely isotropic (VTI) media. We first invert for the anisotropic models using wave-equation traveltime inversion. Here, the skeletonized data are the traveltimes of transmitted and/or reflected arrivals, which lead to simpler misfit functions and more robust convergence than full-waveform inversion. This provides a good starting model for waveform inversion. The effectiveness of this procedure is illustrated with synthetic data examples and a marine data set recorded in the Gulf of Mexico.
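The skeletonized data here are traveltimes, and in wave-equation traveltime inversion the misfit is typically the lag that best aligns observed and predicted arrivals. A minimal, dependency-free sketch of that picking step (the trace samples and sample interval below are invented for illustration):

```python
def best_lag(obs, syn, max_lag):
    """Integer lag (in samples) maximizing the cross-correlation of obs with syn."""
    def xcorr(lag):
        return sum(obs[i] * syn[i - lag]
                   for i in range(max(lag, 0), min(len(obs), len(syn) + lag)))
    return max(range(-max_lag, max_lag + 1), key=xcorr)

dt = 0.004  # sample interval in seconds (assumed)
syn = [0.0] * 64
syn[20], syn[21], syn[22] = 0.5, 1.0, 0.5   # predicted arrival
obs = [0.0] * 64
obs[23], obs[24], obs[25] = 0.5, 1.0, 0.5   # observed arrival, 3 samples late

lag = best_lag(obs, syn, max_lag=10)
print(lag * dt)  # traveltime residual that drives the model update
```

One such residual per source-receiver pair is far less data than a full waveform, which is why the misfit function is simpler and convergence more robust.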


Geophysics ◽  
2018 ◽  
Vol 83 (2) ◽  
pp. U1-U8 ◽  
Author(s):  
Bingbing Sun ◽  
Tariq Alkhalifah

Macro-velocity model building is important for subsequent prestack depth migration and full-waveform inversion. Wave-equation migration velocity analysis uses the band-limited waveform to invert for velocity, normally by focusing the subsurface-offset common-image gathers. We reexamine this concept from a different perspective: in the subsurface-offset domain, using extended Born modeling, the recorded data can be considered invariant with respect to simultaneous perturbations of the virtual-source positions and the velocity. A linear system connecting these two perturbations is derived and solved by the conjugate gradient method. In theory, the perturbation of the virtual-source positions is given by the Rytov approximation; thus, compared with the Born approximation, it relaxes the dependency on amplitude and makes the proposed method more applicable to real data. We determined the effectiveness of the approach by applying it to isotropic and vertically transversely isotropic (VTI) synthetic data. A real data set example verifies the robustness of the proposed method.
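The linear system above is solved with the conjugate gradient method. A bare-bones CG for a small symmetric positive-definite system, with the operator supplied as a matrix-vector product as it would be in practice (the 2×2 matrix is a toy stand-in, not the paper's operator):

```python
def conjugate_gradient(matvec, b, iters=50, tol=1e-10):
    """Solve A x = b for symmetric positive-definite A given only as a matvec."""
    x = [0.0] * len(b)
    r = list(b)            # residual b - A x for the zero initial guess
    p = list(r)
    rs = sum(v * v for v in r)
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
matvec = lambda v: [sum(a * vi for a, vi in zip(row, v)) for row in A]
x = conjugate_gradient(matvec, [1.0, 2.0])
print(x)  # close to [1/11, 7/11]
```

Only matvecs are needed, never the matrix itself, which is what makes CG practical when the "matrix" is an expensive wave-equation operator.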


Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. U37-U46 ◽  
Author(s):  
Tariq Alkhalifah ◽  
Claudio Bagaini

Wave-equation-based redatuming is expensive and requires detailed knowledge of the shallow velocity field. We derive the analytical expression of a new prestack wavefield extrapolation operator, the Topographic Datuming Operator (TDO), which applies redatuming based on a straight-ray approximation above and below a chosen datum. This redatuming operator is applied directly to common-source gathers to downward continue the sources and receivers simultaneously to the datum level, without resorting to common-receiver gathers. As a result, the method is far more efficient and robust than conventional wave-equation-based redatuming and does not require an accurate depth-domain interval velocity model. In addition, TDO, unlike wave-equation-based redatuming, requires only effective velocities above the datum, and thus can be applied using attributes valid for static correction methods. Effective velocities beneath the datum permit us to replace the surface integral needed for wave-equation redatuming with a line integral. In the particular case of infinite (in practice, very high with respect to the shallow layers) velocity beneath the datum, the TDO impulse response collapses to a point, and TDO redatuming is equivalent to a conventional static correction, which may therefore be regarded as a special case of the newly derived operator. The computational cost of applying TDO is slightly higher than that of static corrections, but TDO provides higher-quality results, partially attributable to its ability to suppress diffractions emanating from anomalies above the datum. Because TDO is based on a geometric-optics approximation, velocity after TDO is not biased by the vertical-shift correction associated with conventional static correction. Application to a synthetic data set demonstrates the features of the method.
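The limiting case mentioned above, where TDO collapses to a conventional static correction, amounts to a pure vertical time shift per trace. A schematic sketch of that special case only (the elevation, replacement velocity, and trace are invented numbers, and the shift is one-way for simplicity):

```python
def static_shift(elevation, datum, v_replacement):
    """One-way vertical time shift moving a receiver (or source) from its
    elevation down to the datum through a replacement-velocity layer."""
    return (elevation - datum) / v_replacement

def apply_static(trace, shift, dt):
    """Advance a sampled trace by `shift` seconds (nearest sample, zero-padded)."""
    n = round(shift / dt)
    return trace[n:] + [0.0] * n if n >= 0 else [0.0] * (-n) + trace[:n]

dt = 0.005
shift = static_shift(elevation=100.0, datum=0.0, v_replacement=2000.0)  # 0.05 s
trace = [0.0] * 20
trace[15] = 1.0
shifted = apply_static(trace, shift, dt)
print(shift, shifted.index(1.0))  # the event arrives 10 samples earlier
```

TDO generalizes this point-impulse response to a finite, dip-dependent operator when the velocity beneath the datum is finite, which is why it can suppress diffractions that a vertical shift cannot.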


Geophysics ◽  
2014 ◽  
Vol 79 (3) ◽  
pp. WA59-WA68 ◽  
Author(s):  
Yunyue Li ◽  
Biondo Biondi ◽  
Robert Clapp ◽  
Dave Nichols

Anisotropic models are needed for wave simulation and inversion where a complex geologic environment exists. We extended the theory of wave-equation migration velocity analysis to build vertical transverse isotropic models. Because of the ambiguity between depth and [Formula: see text] in the acoustic regime, we assumed [Formula: see text] can be accurately obtained from other sources of information, and inverted for the NMO slowness and the anellipticity parameter [Formula: see text]. We combined the differential semblance optimization objective function with stacking-power maximization to evaluate the focusing of the prestack image in the subsurface-offset domain. To regularize the multiparameter inversion, we built a framework that uses geologic and rock-physics information to guide the updates in NMO slowness and [Formula: see text]. This regularization step was crucial to stabilize the inversion and to produce geologically meaningful results. We tested the proposed approach on a synthetic data set and a 2D Gulf of Mexico data set, starting with a fairly good initial anisotropic model. The inversion results revealed shallow anomalies collocated in NMO velocity and [Formula: see text] and improved the continuity and resolution of the final stacked images.
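The combined objective, differential semblance on subsurface-offset gathers minus a weighted stacking-power term, can be evaluated directly on an image gather. A toy comparison of a focused versus a defocused gather (the gather values and the weight α are illustrative, not the paper's parameterization):

```python
def objective(image, offsets, alpha):
    """Differential-semblance term penalizes energy at nonzero subsurface offset;
    the stack-power term (subtracted) rewards coherent stacking over offset."""
    dso = sum(sum((h * v) ** 2 for h, v in zip(offsets, row)) for row in image)
    power = sum(sum(row) ** 2 for row in image)
    return dso - alpha * power

offsets = [-2, -1, 0, 1, 2]
focused = [[0.0, 0.0, 1.0, 0.0, 0.0] for _ in range(3)]    # energy at h = 0
defocused = [[0.2, 0.2, 0.2, 0.2, 0.2] for _ in range(3)]  # smeared over h

j_foc = objective(focused, offsets, alpha=0.5)
j_def = objective(defocused, offsets, alpha=0.5)
print(j_foc, j_def)  # the focused gather scores lower (better)
```

A correct velocity/anisotropy model focuses the prestack image at zero subsurface offset, so minimizing this objective drives the parameter updates toward focusing.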


Author(s):  
Raul E. Avelar ◽  
Karen Dixon ◽  
Boniphace Kutela ◽  
Sam Klump ◽  
Beth Wemple ◽  
...  

The calibration of safety performance functions (SPFs) is a mechanism included in the Highway Safety Manual (HSM) to adjust SPFs in the HSM for use in intended jurisdictions. Critically, the quality of the calibration procedure must be assessed before using the calibrated SPFs. Multiple resources to aid practitioners in calibrating SPFs have been developed in the years following the publication of the HSM 1st edition. Similarly, the literature suggests multiple ways to assess the goodness-of-fit (GOF) of a calibrated SPF to a data set from a given jurisdiction. This paper uses the calibration results of multiple intersection SPFs to a large Mississippi safety database to examine the relations between multiple GOF metrics. The goal is to develop a sensible single index that leverages the joint information from multiple GOF metrics to assess overall quality of calibration. A factor analysis applied to the calibration results revealed three underlying factors explaining 76% of the variability in the data. From these results, the authors developed an index and performed a sensitivity analysis. The key metrics were found to be, in descending order: the deviation of the cumulative residual (CURE) plot from the 95% confidence area, the mean absolute deviation, the modified R-squared, and the value of the calibration factor. This paper also presents comparisons between the index and alternative scoring strategies, as well as an effort to verify the results using synthetic data. The developed index is recommended to comprehensively assess the quality of the calibrated intersection SPFs.
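Two of the key metrics named above are easy to state concretely: the calibration factor is the ratio of total observed to total predicted crashes, and a CURE plot tracks cumulative residuals of the calibrated SPF, ordered by a fitted quantity. A small sketch with made-up crash counts (this is the generic HSM-style definition, not this paper's index):

```python
def calibration_factor(observed, predicted):
    """HSM-style calibration factor: total observed over total predicted."""
    return sum(observed) / sum(predicted)

def cure(observed, predicted, order_by):
    """Cumulative residuals of the calibrated SPF, ordered by a fitted quantity."""
    c = calibration_factor(observed, predicted)
    total, out = 0.0, []
    for _, obs, pred in sorted(zip(order_by, observed, predicted)):
        total += obs - c * pred
        out.append(total)
    return out

observed = [2, 3, 5, 1, 4]
predicted = [1.5, 2.0, 4.5, 1.0, 3.0]
c = calibration_factor(observed, predicted)
curve = cure(observed, predicted, order_by=predicted)
print(c, curve[-1])  # the CURE curve always returns to ~0 at its end
```

Excursions of the curve outside its confidence band, rather than its endpoint, are what flag a poorly calibrated region, which is why the CURE deviation carried the most weight in the index.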


Water ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 107
Author(s):  
Elahe Jamalinia ◽  
Faraz S. Tehrani ◽  
Susan C. Steele-Dunne ◽  
Philip J. Vardon

Climatic conditions and vegetation cover influence water flux in a dike, and potentially the dike stability. A comprehensive numerical simulation is computationally too expensive to be used for the near real-time analysis of a dike network. Therefore, this study investigates a random forest (RF) regressor to build a data-driven surrogate for a numerical model to forecast the temporal macro-stability of dikes. To that end, daily inputs and outputs of a ten-year coupled numerical simulation of an idealised dike (2009–2019) are used to create a synthetic data set, comprising features that can be observed from a dike surface, with the calculated factor of safety (FoS) as the target variable. The data set before 2018 is split into training and testing sets to build and train the RF. The predicted FoS is strongly correlated with the numerical FoS for data that belong to the test set (before 2018). However, the trained model shows lower performance for data in the evaluation set (after 2018) if further surface cracking occurs. This proof-of-concept shows that a data-driven surrogate can be used to determine dike stability for conditions similar to the training data, which could be used to identify vulnerable locations in a dike network for further examination.
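The surrogate workflow, training a regressor on surface-observable features with the simulated factor of safety as target and splitting by time, can be sketched without the paper's random forest. Below, a deliberately simple 1-nearest-neighbour regressor stands in for the RF, on invented feature/FoS pairs; only the train-then-predict structure is the point:

```python
def nn_predict(train_X, train_y, x):
    """Predict with the single nearest training sample (stand-in for a random forest)."""
    d2 = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    best = min(range(len(train_X)), key=lambda i: d2(train_X[i], x))
    return train_y[best]

# features: (precipitation, surface moisture); target: factor of safety (invented)
data = [((10.0, 0.30), 1.45), ((40.0, 0.45), 1.20),
        ((80.0, 0.60), 1.05), ((20.0, 0.35), 1.38)]
train, evaluate = data[:3], data[3:]   # temporal split, like pre-/post-2018

X = [f for f, _ in train]
y = [t for _, t in train]
for features, true_fos in evaluate:
    print(features, nn_predict(X, y, features), true_fos)
```

Like any data-driven surrogate, this only interpolates within the training distribution, which mirrors the paper's finding that performance degrades once post-2018 conditions (further surface cracking) leave that distribution.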


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced, achieved by computing a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude less cost, but it is dip limited, though in a controllable way, compared with the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to conventionally datumed data.
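The inversion step above is damped least squares, x = (AᵀA + εI)⁻¹Aᵀb, with the expensive Hessian AᵀA approximated by a limited number of its diagonals. The sketch keeps only the main diagonal, and uses a matrix with orthogonal columns so the approximation happens to be exact here (a real extrapolation operator would not be, hence the dip limitation):

```python
def damped_ls_diagonal(A, b, eps):
    """Approximate (A^T A + eps I)^-1 A^T b, keeping only the Hessian's main diagonal."""
    ncols = len(A[0])
    Atb = [sum(A[i][j] * b[i] for i in range(len(A))) for j in range(ncols)]
    diag = [sum(A[i][j] ** 2 for i in range(len(A))) for j in range(ncols)]
    return [Atb[j] / (diag[j] + eps) for j in range(ncols)]

A = [[1.0, 0.0],
     [0.0, 2.0],
     [0.0, 0.0]]   # orthogonal columns: the diagonal Hessian is exact here
b = [3.0, 8.0, 5.0]
x = damped_ls_diagonal(A, b, eps=0.01)
print(x)  # ~[3/1.01, 16/4.01]
```

Dropping the off-diagonals replaces a full solve with one division per unknown, which is where the two orders of magnitude in cost come from.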


2014 ◽  
Vol 7 (3) ◽  
pp. 781-797 ◽  
Author(s):  
P. Paatero ◽  
S. Eberly ◽  
S. G. Brown ◽  
G. A. Norris

The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement of factor elements (BS-DISP). The goal of these methods is to capture the uncertainty of PMF analyses due to random errors and rotational ambiguity. It is shown that the three methods complement each other: depending on characteristics of the data set, one method may provide better results than the other two. Results are presented using synthetic data sets, including interpretation of diagnostics, and recommendations are given for parameters to report when documenting uncertainty estimates from EPA PMF or ME-2 applications.
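Of the three methods, classical bootstrap is the simplest to sketch: resample the rows of the data matrix with replacement, refit, and take the spread of the refitted quantity as its uncertainty. In the toy below the "model" is just a column mean standing in for a PMF refit, and the row values are invented:

```python
import random

def bootstrap(rows, statistic, n_boot, seed=0):
    """Classical bootstrap: resample rows with replacement, refit each replicate."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        sample = [rows[rng.randrange(len(rows))] for _ in rows]
        reps.append(statistic(sample))
    return reps

rows = [[1.0], [2.0], [4.0], [8.0], [16.0]]
col_mean = lambda sample: sum(r[0] for r in sample) / len(sample)

reps = bootstrap(rows, col_mean, n_boot=200)
lo, hi = sorted(reps)[4], sorted(reps)[-5]   # rough 95% interval from replicates
print(lo, hi)
```

BS captures random-error uncertainty this way but not rotational ambiguity, which is why DISP (and the BS-DISP combination) is needed to complement it.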


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. C81-C92 ◽  
Author(s):  
Helene Hafslund Veire ◽  
Hilde Grude Borgos ◽  
Martin Landrø

Effects of pressure and fluid saturation can have the same degree of impact on seismic amplitudes and differential traveltimes in the reservoir interval; thus, they are often inseparable by analysis of a single stacked seismic data set. In such cases, time-lapse AVO analysis offers an opportunity to discriminate between the two effects. Quantifying the uncertainty in these estimates is essential if information about pressure- and saturation-related changes is to be used in reservoir modeling and simulation. One way of analyzing uncertainties is to formulate the problem in a Bayesian framework, in which the solution is represented by a probability density function (PDF), providing uncertainty estimates as well as direct estimates of the properties. A stochastic model for estimation of pressure and saturation changes from time-lapse seismic AVO data is investigated within a Bayesian framework. Well-known rock-physics relationships are used to set up a prior stochastic model, and PP reflection coefficient differences are used to establish a likelihood model linking reservoir variables and time-lapse seismic data. The methodology incorporates correlation between different variables of the model as well as spatial dependencies for each of the variables. In addition, bottlenecks causing large uncertainties in the estimates can be identified through sensitivity analysis of the system. The method has been tested on 1D synthetic data and on field time-lapse seismic AVO data from the Gullfaks Field in the North Sea.
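With a Gaussian prior and a Gaussian likelihood linked by a linearized forward operator, the posterior PDF is itself Gaussian and available in closed form. A scalar sketch of that update (one reservoir parameter, one datum; the numbers are illustrative, not the paper's rock-physics model):

```python
def gaussian_posterior(m0, var0, g, d, var_d):
    """Posterior mean/variance for prior N(m0, var0) and datum d = g*m + N(0, var_d)."""
    var_post = 1.0 / (1.0 / var0 + g * g / var_d)
    m_post = var_post * (m0 / var0 + g * d / var_d)
    return m_post, var_post

# prior: saturation change 0.1 with variance 0.04; datum: a reflectivity difference
m, v = gaussian_posterior(m0=0.1, var0=0.04, g=2.0, d=0.5, var_d=0.01)
print(m, v)  # posterior pulls toward the data-implied value d/g = 0.25
```

The posterior variance is always smaller than the prior variance, and inspecting how little it shrinks for a given parameter is exactly the kind of sensitivity analysis that exposes the bottlenecks mentioned above.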


2019 ◽  
Vol 217 (3) ◽  
pp. 1727-1741 ◽  
Author(s):  
D W Vasco ◽  
Seiji Nakagawa ◽  
Petr Petrov ◽  
Greg Newman

We introduce a new approach for locating earthquakes using arrival times derived from waveforms. The most costly computational step of the algorithm scales as the number of stations in the active seismographic network. In this approach, a variation on existing grid search methods, a series of full-waveform simulations is conducted for all receiver locations, with sources positioned successively at each station. The traveltime field over the region of interest is calculated by applying a phase-picking algorithm to the numerical wavefields produced by each simulation. An event is located by subtracting the stored traveltime field from the arrival time at each station. This provides a shifted and time-reversed traveltime field for each station. The shifted and time-reversed fields all approach the origin time of the event at the source location, so the mean or median value at the source location approximates the event origin time. Measures of dispersion about this mean or median time at each grid point, such as the sample standard error and the average deviation, are minimized at the correct source position. Uncertainty in the event position is provided by the contours of standard error defined over the grid. An application of this technique to a synthetic data set indicates that the approach provides stable locations even when the traveltimes are contaminated by additive random noise containing a significant number of outliers and by velocity model errors. The waveform-based method is found to outperform one based upon the eikonal equation for a velocity model with rapid spatial variations in properties due to layering. A comparison with conventional location algorithms in both laboratory and field settings demonstrates that the technique performs at least as well as existing techniques.
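The location step can be reproduced in a few lines for a homogeneous medium: precompute a traveltime field per station, subtract it from that station's arrival time, and take the grid point where the shifted, time-reversed fields agree best (minimum spread), which simultaneously yields the origin time. The geometry and velocity below are invented, and straight-ray traveltimes replace the paper's full-waveform simulations:

```python
import math, statistics

v = 2.0
stations = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 0)]
true_src, t0 = (4, 6), 1.0

tt = lambda s, x: math.dist(s, x) / v             # traveltime field (homogeneous)
arrivals = [t0 + tt(s, true_src) for s in stations]

best = None
for gx in range(11):
    for gy in range(11):
        # shifted, time-reversed fields: all equal t0 at the true source
        shifted = [t - tt(s, (gx, gy)) for t, s in zip(arrivals, stations)]
        spread = statistics.pstdev(shifted)
        if best is None or spread < best[0]:
            best = (spread, (gx, gy), statistics.mean(shifted))

spread, loc, origin = best
print(loc, origin)  # recovers the source at (4, 6) and origin time 1.0
```

Replacing the mean with a median, as the abstract notes, makes the origin-time estimate robust to outlier picks, and the spread map over the grid directly furnishes the uncertainty contours.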

