Seismic Velocity Estimation Complete Session

Geophysics ◽  
2019 ◽  
Vol 85 (1) ◽  
pp. U21-U29
Author(s):  
Gabriel Fabien-Ouellet ◽  
Rahul Sarkar

Applying deep learning to 3D velocity model building remains a challenge due to the sheer volume of data required to train large-scale artificial neural networks. Moreover, little is known about what types of network architectures are appropriate for such a complex task. To ease the development of a deep-learning approach for seismic velocity estimation, we have evaluated a simplified surrogate problem — the estimation of the root-mean-square (rms) and interval velocity in time from common-midpoint gathers — for 1D layered velocity models. We have developed a deep neural network, whose design was inspired by the information flow found in semblance analysis. The network replaces semblance estimation by a representation built with a deep convolutional neural network, and then it performs velocity estimation automatically with recurrent neural networks. The network is trained with synthetic data to identify primary reflection events, rms velocity, and interval velocity. For a synthetic test set containing 1D layered models, we find that rms and interval velocity are accurately estimated, with an error of less than [Formula: see text] for the rms velocity. We apply the neural network to a real 2D marine survey and obtain accurate rms velocity predictions leading to a coherent stacked section, in addition to an estimation of the interval velocity that reproduces the main structures in the stacked section. Our results provide strong evidence that neural networks can estimate velocity from seismic data and that good performance can be achieved on real data even if the training is based on synthetics. The findings for the 1D problem suggest that deep convolutional encoders and recurrent neural networks are promising components of more complex networks that can perform 2D and 3D velocity model building.
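The rms and interval velocities that the network estimates are linked by the classical Dix relation. As a minimal sketch of that link (the standard conversion formula, not the paper's neural-network approach; the times and velocities below are made-up values), the conversion from rms to interval velocity can be written as:

```python
import numpy as np

def dix_interval_velocity(t, v_rms):
    """Convert rms velocities picked at zero-offset two-way times t (s)
    into interval velocities via the standard Dix relation.
    Returns one interval velocity per pair of consecutive picks."""
    t = np.asarray(t, dtype=float)
    v = np.asarray(v_rms, dtype=float)
    num = v[1:] ** 2 * t[1:] - v[:-1] ** 2 * t[:-1]
    den = t[1:] - t[:-1]
    return np.sqrt(num / den)

# Toy two-layer model: 2000 m/s layer over a faster layer.
t = [0.5, 1.0]            # zero-offset two-way times (s)
v_rms = [2000.0, 2236.07] # rms velocities (m/s)
v_int = dix_interval_velocity(t, v_rms)  # interval velocity of the deeper layer
```

Inverting this relation is numerically unstable when picks are noisy, which is part of why direct interval-velocity estimation, as in the paper, is attractive.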


2020 ◽  
Author(s):  
Zack Spica ◽  
Takeshi Akuhara ◽  
Gregory Beroza ◽  
Biondo Biondi ◽  
William Ellsworth ◽  
...  

<p>Our understanding of subsurface processes suffers from a profound observation bias: ground-motion sensors are rare, sparse, clustered on continents, and unavailable where they are most needed. A new seismic recording technology called distributed acoustic sensing (DAS) can transform existing telecommunication fiber-optic cables into arrays of thousands of sensors, enabling meter-scale recording over tens of kilometers of linear fiber length. DAS works in high-pressure and high-temperature environments, enabling long-term recording of seismic signals inside reservoirs, in fault zones, near active volcanoes, in deep seas, and in highly urbanized areas.</p><p>In this talk, we will introduce this laser-based technology and present three recent case studies. The first experiment is in Stanford, California, where DAS measurements provide geotechnical information at a scale (i.e., building by building) normally unattainable with traditional geophone instrumentation. In the second study, we will show how downhole DAS passive recordings from the San Andreas Fault Observatory at Depth can be used for seismic velocity estimation. In the third study, we use DAS (in collaboration with Fujitec) to understand ocean physics and infer seismic properties of the seafloor beneath a 100 km telecommunication cable.</p>


Geophysics ◽  
2007 ◽  
Vol 72 (2) ◽  
pp. R29-R36 ◽  
Author(s):  
Sergey Fomel

Regularization is a required component of geophysical-estimation problems that operate with insufficient data. The goal of regularization is to impose additional constraints on the estimated model. I introduce shaping regularization, a general method for imposing constraints by explicit mapping of the estimated model to the space of admissible models. Shaping regularization is integrated into a conjugate-gradient algorithm for iterative least-squares estimation. It provides better control over the estimated model than traditional regularization methods and, in some cases, leads to faster iterative convergence. Simple data-interpolation and seismic-velocity estimation examples illustrate the concept.
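As a rough illustration of the central idea, mapping each iterate explicitly into the space of admissible (here, smooth) models, the sketch below applies a smoothing operator inside a simple fixed-point iteration for data interpolation. This is a simplified stand-in, not Fomel's shaping conjugate-gradient algorithm; the boxcar smoother, mask operator, and iteration count are all assumptions:

```python
import numpy as np

def smooth(m, n=5, passes=2):
    """Shaping operator S: repeated boxcar smoothing, mapping any model
    into the space of smooth models (a crude stand-in for a triangle
    smoother)."""
    k = np.ones(n) / n
    for _ in range(passes):
        m = np.convolve(m, k, mode="same")
    return m

def shaped_interpolation(d, mask, niter=200):
    """Fill gaps in d (known where mask is True) by iterating
    m <- S[m + mask*(d - m)], a shaped Landweber-style iteration;
    shaping, not a penalty term, enforces smoothness."""
    m = np.zeros_like(d)
    for _ in range(niter):
        m = smooth(m + mask * (d - m))
    return m

# Irregularly sampled sine wave, interpolated onto the full grid.
x = np.linspace(0.0, 2.0 * np.pi, 100)
truth = np.sin(x)
mask = np.zeros(100, dtype=bool)
mask[::7] = True
d = np.where(mask, truth, 0.0)
m = shaped_interpolation(d, mask)
```

Unlike a Tikhonov penalty, the constraint here is enforced by construction: every iterate passes through the shaping operator, so the result is smooth at any stopping point.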


1984 ◽  
Vol 72 (10) ◽  
pp. 1330-1339 ◽  
Author(s):  
P.S. Schultz

Geophysics ◽  
1990 ◽  
Vol 55 (3) ◽  
pp. 266-276 ◽  
Author(s):  
Samuel H. Bickel

The conversion of time horizons to depth is fundamental to exploration geophysics. The interval velocity used in the conversion is often estimated from the stacking velocity, assuming that each layer's interval velocity is homogeneous. However, even for one laterally inhomogeneous layer above a flat reflector, the stacking velocity can swing violently about its average, and conventional methods of velocity estimation fail. I show that violent swings in the stacking velocity are a symptom of a long-wavelength ambiguity between the burial depth of an interface and the interval velocity. Lateral variations in seismic velocity with a spatial wavelength of about 2.7 D, where D is the depth to the reflecting horizon, cannot be unambiguously resolved from traveltime measurements. The spatial wavelength of this ambiguous component varies from 2.57 D, for very small source-receiver separations, to 2.86 D for source-receiver separations equal to D. Spectral components of the stacking velocity at wavelengths shorter than this ambiguous value are amplified in size and reversed in polarity relative to the interval velocity. A practical inverse filter that corrects for these distortions produces an interval velocity that is almost totally lacking in low-frequency components, giving a very distorted picture of the interval velocity. Because the wavelength of total ambiguity changes with offset, a complete description of the velocity and depth fields can, in theory, be extracted from a combination of multiple-offset traveltime measurements. In practice, however, the wavelength of total ambiguity is such a weak function of source-receiver separation that multiple-offset processing does little to resolve the ambiguity. In fact, the Rayleigh resolution limit implies that three or more offset measurements are more effective than two only if the seismic-line length is at least 20 D.
In a series of numerical experiments, with the line length set to 100 D and a spatial noise level of 0.01% in each channel, I used a two-channel Wiener filter to successfully extract the full-band response for a simultaneous step change in velocity and in depth. The method fails for lines shorter than 20 D because of the transients that arise when the data are shorter than the filter. Stability was achieved by increasing the noise level to 1% in the design of the Wiener filter, but low spatial frequencies were lost and the estimated velocity-depth model was distorted. If the results of this single flat-layer analysis apply to practical situations, the velocity-depth ambiguity may continue to plague exploration seismologists for some time to come.


Geophysics ◽  
1985 ◽  
Vol 50 (6) ◽  
pp. 969-988 ◽  
Author(s):  
Sven Ivansson

This paper deals with the problem of seismic velocity estimation from first-arrival traveltimes in a two-dimensional (2-D) cross-hole geometry where explosions are detonated in one borehole while recordings are made in another borehole and on the surface. Standard tomographic procedures are based on decomposition of the cross-hole area into a number of cells and a simplifying assumption of straight raypaths. In the presence of significant low-velocity zones, the resulting images may be contaminated. Different ways of performing tomographic inversion are tested on a number of synthetic examples. Images obtained by direct, unrestricted least-squares inversion are often seriously distorted. However, methods using more cells and some kind of damping often give more satisfactory results. Because the risk of distorted images is always present in inversion procedures, comparison with synthetic data (forward modeling) is a valuable tool in the interpretation process. With a reasonably good initial solution, improvements can often be achieved by using iterative procedures to take account of ray-bending effects as proposed in Bois et al. (1971). An alternative way of performing these calculations is described.
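The straight-ray, cell-based formulation reduces to a linear system: each traveltime is the sum of ray-path lengths through the cells times the cell slownesses. A minimal sketch of the damped least-squares variant the abstract favors is below; the 2x2 grid, ray-path lengths, and damping weight are toy assumptions, not Ivansson's survey geometry:

```python
import numpy as np

# Straight-ray cross-hole tomography on a toy 2x2 grid of cells.
# Each row of G holds the length (m) of one ray inside each cell
# (made-up geometry); traveltimes obey t = G @ s for slownesses s.
G = np.array([
    [10.0, 10.0,  0.0,  0.0],   # ray through the top row of cells
    [ 0.0,  0.0, 10.0, 10.0],   # ray through the bottom row
    [10.0,  0.0, 10.0,  0.0],   # ray through the left column
    [ 7.0,  7.0,  7.0,  7.0],   # diagonal ray crossing all cells
    [ 0.0, 14.0, 14.0,  0.0],   # anti-diagonal ray
])
s_true = np.array([1/2000, 1/2500, 1/1800, 1/2200])  # slowness, s/m
t = G @ s_true                                       # observed traveltimes

# Damped least squares: minimize |G s - t|^2 + eps^2 |s|^2, which
# stabilizes the inversion when G is poorly conditioned.
eps = 1e-3
s_est = np.linalg.solve(G.T @ G + eps**2 * np.eye(4), G.T @ t)
v_est = 1.0 / s_est  # recovered cell velocities, m/s
```

With noise-free data and a well-posed toy system the damping barely biases the answer; its value shows up when rays sample some cells poorly, which is exactly the situation the abstract's low-velocity zones create.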


2011 ◽  
Author(s):  
Riaz Alai ◽  
Mohd Hafizal Mad Zahir ◽  
Amar Ghaziah M. Adnan ◽  
Eric D.J. Verschuur
