Applications of median filtering to deconvolution, pulse estimation, and statistical editing of seismic data

Geophysics ◽  
1983 ◽  
Vol 48 (12) ◽  
pp. 1598-1610 ◽  
Author(s):  
J. Bee Bednar

Seismic exploration problems frequently require analysis of noisy data. Traditional processing removes or reduces noise effects by linear statistical filtering. This filtering process can be viewed as a weighted averaging with coefficients chosen to enhance the data information content. When the signal and noise components occupy separate spectral windows, or when the statistical properties of the noise are sufficiently understood, linear statistical filtering is an effective tool for data enhancement. When the noise properties are not well understood, or when the noise and signal occupy the same spectral window, linear or weighted averaging performs poorly as a signal enhancement process. One must look for alternative procedures to extract the desired information. As a nonlinear operation which is statistically similar to averaging, median filtering represents one potential alternative. This paper investigates the application of median filtering to several seismic data enhancement problems. A methodology for using median filtering as one step in cepstral deconvolution or seismic signature estimation is presented. The median filtering process is applied to statistical editing of acoustic impedance data and the removal of noise bursts from reflection data. The most surprising conclusion obtained from the empirical studies on synthetic data is that, in high‐noise situations, cepstral‐based median filtering appears to perform exceptionally well as a deconvolver but poorly as a signature estimator. For real data, the process is stable and, to the extent that the data follow the convolutional model, does a reasonable job at both pulse estimation and deconvolution.
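The statistical-editing use of the median described above can be illustrated with a toy despiking routine: a sample is replaced by the local median when it deviates strongly from it. This is a minimal sketch, assuming a MAD-based threshold and a ramp test signal that are illustrative choices, not taken from the paper:

```python
import numpy as np

def despike_median(trace, win=5, k=3.0):
    """Replace samples that deviate strongly from a running median.

    A sample is treated as a noise burst when it differs from the local
    median by more than k times the local median absolute deviation (MAD).
    Window length `win` must be odd.
    """
    half = win // 2
    padded = np.pad(trace, half, mode="edge")
    # sliding windows of length `win` centered on every sample
    windows = np.lib.stride_tricks.sliding_window_view(padded, win)
    med = np.median(windows, axis=1)
    mad = np.median(np.abs(windows - med[:, None]), axis=1)
    spikes = np.abs(trace - med) > k * (mad + 1e-12)
    out = trace.copy()
    out[spikes] = med[spikes]
    return out

# a clean ramp with one injected noise burst
t = np.linspace(0.0, 1.0, 101)
noisy = t.copy()
noisy[50] = 25.0                      # the burst
clean = despike_median(noisy, win=5)  # burst replaced by the local median
```

Unlike a plain running median, the MAD test leaves uncontaminated samples untouched, which is what makes the operation an "editing" step rather than a smoother.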

Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced. The approximation is achieved by computing a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at roughly two orders of magnitude lower cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to those from conventional datuming.
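In its simplest unweighted form, the damped least-squares machinery described above amounts to solving (AᴴA + εI)m = Aᴴd. A minimal numpy sketch, with a random matrix standing in for the extrapolation operator (the paper's actual operator, weights, and diagonal Hessian approximation are not reproduced here):

```python
import numpy as np

def damped_lsq(A, d, eps=0.1):
    """Damped least-squares inverse: m = (A^H A + eps*I)^-1 A^H d."""
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + eps * np.eye(n), A.conj().T @ d)

# toy forward problem: d = A m, recovered by the damped inverse
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
m_true = np.arange(1.0, 6.0)
d = A @ m_true
m_est = damped_lsq(A, d, eps=1e-8)  # tiny damping, well-posed toy case
```

The cost the abstract refers to lives in forming AᴴA (the Hessian); restricting that product to a few diagonals, as the paper does, is what buys the two orders of magnitude.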


Solid Earth ◽  
2019 ◽  
Vol 10 (4) ◽  
pp. 1301-1319 ◽  
Author(s):  
Joeri Brackenhoff ◽  
Jan Thorbecke ◽  
Kees Wapenaar

Abstract. We aim to monitor and characterize signals in the subsurface by combining these passive signals with recorded reflection data at the surface of the Earth. To achieve this, we propose a method to create virtual receivers from reflection data using the Marchenko method. By applying homogeneous Green’s function retrieval, these virtual receivers are then used to monitor the responses from subsurface sources. We consider monopole point sources with a symmetric source signal, for which the full wave field without artifacts in the subsurface can be obtained. Responses from more complex source mechanisms, such as double-couple sources, can also be used and provide results with comparable quality to the monopole responses. If the source signal is not symmetric in time, our technique based on homogeneous Green’s function retrieval provides an incomplete signal, with additional artifacts. The duration of these artifacts is limited and they are only present when the source of the signal is located above the virtual receiver. For sources along a fault rupture, this limitation is also present and more severe due to the source activating over a longer period of time. Part of the correct signal is still retrieved, as is the source location of the signal. These artifacts do not occur in another method that creates virtual sources as well as receivers from reflection data at the surface. This second method can be used to forecast responses to possible future induced seismicity sources (monopoles, double-couple sources and fault ruptures). This method is applied to field data, and similar results to the ones on synthetic data are achieved, which shows the potential for application on real data signals.
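The role of source-signal symmetry can be seen from the defining property of the homogeneous Green's function, G_h(t) = G(t) + G(−t), which is even in time. A toy 1-D sketch, where the decaying causal pulse is an arbitrary stand-in rather than a modeled Green's function:

```python
import numpy as np

# time axis symmetric about zero (201 samples, index 100 is t = 0)
t = np.linspace(-1.0, 1.0, 201)
# toy causal Green's function: a decaying pulse, zero for t < 0
g = np.where(t >= 0.0, np.exp(-3.0 * t) * np.sin(20.0 * t), 0.0)
# homogeneous Green's function: even in time by construction
g_h = g + g[::-1]
```

A source wavelet that is symmetric in time preserves this evenness after convolution; an asymmetric wavelet breaks it, which is one way to see why the retrieval becomes incomplete in that case.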


Geophysics ◽  
2010 ◽  
Vol 75 (6) ◽  
pp. WB203-WB210 ◽  
Author(s):  
Gilles Hennenfent ◽  
Lloyd Fenelon ◽  
Felix J. Herrmann

We extend our earlier work on the nonequispaced fast discrete curvelet transform (NFDCT) and introduce a second generation of the transform. This new generation differs from the previous one by the approach taken to compute accurate curvelet coefficients from irregularly sampled data. The first generation relies on accurate Fourier coefficients obtained by an [Formula: see text]-regularized inversion of the nonequispaced fast Fourier transform (FFT) whereas the second is based on a direct [Formula: see text]-regularized inversion of the operator that links curvelet coefficients to irregular data. Also, by construction the second generation NFDCT is lossless unlike the first generation NFDCT. This property is particularly attractive for processing irregularly sampled seismic data in the curvelet domain and bringing them back to their irregular recording locations with high fidelity. Secondly, we combine the second generation NFDCT with the standard fast discrete curvelet transform (FDCT) to form a new curvelet-based method, coined nonequispaced curvelet reconstruction with sparsity-promoting inversion (NCRSI) for the regularization and interpolation of irregularly sampled data. We demonstrate that for a pure regularization problem the reconstruction is very accurate. The signal-to-reconstruction error ratio in our example is above [Formula: see text]. We also conduct combined interpolation and regularization experiments. The reconstructions for synthetic data are accurate, particularly when the recording locations are optimally jittered. The reconstruction in our real data example shows amplitudes along the main wavefronts smoothly varying with limited acquisition imprint.
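The sparsity-promoting inversion behind methods like NCRSI can be illustrated with generic iterative soft thresholding (ISTA) for min ½‖y − Ax‖² + λ‖x‖₁. In this sketch a small random matrix stands in for the curvelet-synthesis-plus-sampling operator; all sizes and parameters are illustrative, not the paper's:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Iterative soft thresholding for min_x 0.5*||y - Ax||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L       # gradient step on the data misfit
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

# sparse synthesis coefficients observed through an underdetermined operator
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 120)) / np.sqrt(60)
x_true = np.zeros(120)
x_true[[10, 40, 90]] = [2.0, -3.0, 1.5]
y = A @ x_true
x_hat = ista(A, y, lam=0.01, n_iter=500)    # recovers the sparse support
```

The ℓ1 penalty is what selects the few large curvelet coefficients that explain the irregular data, which is the mechanism behind the accurate regularization results reported above.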


Geophysics ◽  
1999 ◽  
Vol 64 (5) ◽  
pp. 1630-1636 ◽  
Author(s):  
Ayon K. Dey ◽  
Larry R. Lines

In seismic exploration, statistical wavelet estimation and deconvolution are standard tools. Both of these processes assume randomness in the seismic reflectivity sequence. The validity of this assumption is examined by using well‐log synthetic seismograms and by using a procedure for evaluating the resulting deconvolutions. With real data, we compare our wavelet estimations with the in‐situ recording of the wavelet from a vertical seismic profile (VSP). As a result of our examination of the randomness assumption, we present a fairly simple test that can be used to evaluate the validity of a randomness assumption. From our test of seismic data in Alberta, we conclude that the assumption of reflectivity randomness is less of a problem in deconvolution than other assumptions such as phase and stationarity.
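A randomness check of the kind discussed above can be mimicked by inspecting the normalized autocorrelation of a candidate reflectivity series: a white sequence has near-zero correlation at nonzero lags, while a correlated one does not. A sketch with synthetic series (the white/smoothed construction is illustrative, not the authors' actual test):

```python
import numpy as np

def normalized_autocorr(r, max_lag=20):
    """Biased autocorrelation estimate, normalized so lag 0 equals 1."""
    r = r - r.mean()
    full = np.correlate(r, r, mode="full")
    mid = len(r) - 1                     # index of the zero-lag term
    return full[mid:mid + max_lag + 1] / full[mid]

rng = np.random.default_rng(2)
white = rng.standard_normal(5000)        # white "reflectivity"
smooth = np.convolve(white, np.ones(10) / 10, mode="same")  # correlated series
ac_white = normalized_autocorr(white)    # near zero away from lag 0
ac_smooth = normalized_autocorr(smooth)  # large at small nonzero lags
```

Sequences whose autocorrelation departs strongly from a spike at zero lag violate the whiteness assumption that statistical deconvolution relies on.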


Geophysics ◽  
1999 ◽  
Vol 64 (1) ◽  
pp. 251-260 ◽  
Author(s):  
Gary F. Margrave

The signal band of reflection seismic data is that portion of the temporal Fourier spectrum which is dominated by reflected source energy. The signal bandwidth directly determines the spatial and temporal resolving power and is a useful measure of the value of such data. The realized signal band, which is the signal band of seismic data as optimized in processing, may be estimated by the interpretation of appropriately constructed f-x spectra. A temporal window, whose length has a specified random fluctuation from trace to trace, is applied to an ensemble of seismic traces, and the temporal Fourier transform is computed. The resultant f-x spectra are then separated into amplitude and phase sections, viewed as conventional seismic displays, and interpreted. The signal is manifested through the lateral continuity of spectral events; noise causes lateral incoherence. The fundamental assumption is that signal is correlated from trace to trace while noise is not. A variety of synthetic data examples illustrate that reasonable results are obtained even when the signal decays with time (i.e., is nonstationary) or geologic structure is extreme. Analysis of real data from a 3-C survey shows an easily discernible signal band for both P-P and P-S reflections, with the former being roughly twice the latter. The potential signal band, which may be regarded as the maximum possible signal band, is independent of processing techniques. An estimator for this limiting case is the corner frequency (the frequency at which a decaying signal drops below background noise levels) as measured on ensemble‐averaged amplitude spectra from raw seismic data. A comparison of potential signal band with realized signal band for the 3-C data shows good agreement for P-P data, which suggests the processing is nearly optimal. For P-S data, the realized signal band is about half of the estimated potential. 
This may indicate a relative immaturity of P-S processing algorithms or it may be due to P-P energy on the raw radial component records.
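The potential-signal-band estimator described above, the corner frequency at which the ensemble-averaged amplitude spectrum falls to the background noise level, can be sketched as follows. The synthetic trace with an exponentially decaying spectrum and the fixed noise floor are toy stand-ins for raw field data:

```python
import numpy as np

def corner_frequency(traces, dt, noise_floor):
    """First frequency at which the ensemble-averaged amplitude
    spectrum drops below the given noise floor."""
    spec = np.abs(np.fft.rfft(traces, axis=-1)).mean(axis=0)
    freqs = np.fft.rfftfreq(traces.shape[-1], dt)
    below = np.nonzero(spec < noise_floor)[0]
    return freqs[below[0]] if below.size else freqs[-1]

# synthetic ensemble: one zero-phase trace whose spectrum decays as exp(-f/30)
n, dt = 512, 0.002
freqs = np.fft.rfftfreq(n, dt)
traces = np.fft.irfft(np.exp(-freqs / 30.0), n)[None, :]
# with a noise floor of exp(-2), the spectrum crosses it just above 60 Hz
fc = corner_frequency(traces, dt, noise_floor=np.exp(-2.0))
```

In practice the ensemble average is taken over many raw traces so that incoherent noise defines a stable floor; the single synthetic trace here only fixes the geometry of the crossing.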


Geophysics ◽  
1967 ◽  
Vol 32 (2) ◽  
pp. 207-224 ◽  
Author(s):  
John D. Marr ◽  
Edward F. Zagst

The more recent developments in common‐depth‐point techniques to attenuate multiple reflections have resulted in an exploration capability comparable to the development of the seismic reflection method. The combination of new concepts in digital seismic data processing with CDP techniques is creating unforeseen exploration horizons with vastly improved seismic data. Major improvements in multiple reflection and reverberation attenuation are now attainable with appropriate CDP geometry and special CDP stacking procedures. Further major improvements are clearly evident in the very near future with the use of multichannel digital filtering‐stacking techniques and the application of deconvolution as the first step in seismic data processing. CDP techniques are briefly reviewed and evaluated with real and experimental data. Synthetic data are used to illustrate that all seismic reflection data should be deconvolved as the first processing step.
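The core of CDP stacking is averaging, after moveout correction, all traces that share a common midpoint, so that coherent primaries add while multiples and incoherent noise are attenuated. A minimal sketch, assuming moveout correction has already been applied:

```python
import numpy as np

def cdp_stack(traces, midpoints):
    """Average all traces sharing a common midpoint (post-NMO)."""
    gathers = {}
    for trace, mp in zip(traces, midpoints):
        gathers.setdefault(mp, []).append(trace)
    return {mp: np.mean(group, axis=0) for mp, group in gathers.items()}

# four noisy copies of one reflection recorded at the same midpoint
signal = np.array([0.0, 1.0, 0.0, -0.5])
rng = np.random.default_rng(4)
traces = [signal + 0.3 * rng.standard_normal(4) for _ in range(4)]
stack = cdp_stack(traces, midpoints=[100, 100, 100, 100])[100]
```

With N traces per midpoint, incoherent noise amplitude drops by roughly √N while the aligned primary is preserved, which is the attenuation mechanism the abstract describes.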


Geosciences ◽  
2018 ◽  
Vol 8 (12) ◽  
pp. 497
Author(s):  
Fedor Krasnov ◽  
Alexander Butorin

Sparse spike deconvolution is one of the oldest inverse problems and a stylized version of recovery in seismic imaging. The goal of sparse spike deconvolution is to recover an approximation of the reflectivity from a given noisy measurement T = W ∗ r + W₀. Since the convolution destroys many low and high frequencies, prior information is required to regularize the inverse problem. In this paper, the authors continue to study the problem of searching for positions and amplitudes of the reflection coefficients of the medium (SP&ARCM). In previous research, the authors proposed a practical algorithm, named A0, for solving the inverse problem of obtaining geological information from the seismic trace. In the current paper, the authors improve the A0 algorithm and apply it to real (non-synthetic) data. First, the authors consider the matrix approach and the Differential Evolution approach to the SP&ARCM problem and show that their efficiency is limited in this case. Second, the authors show that the path to improving A0 lies in optimization with sequential regularization. The authors present accuracy calculations for A0 in that case, along with experimental convergence results, and also consider different initialization parameters of the optimization process with a view to accelerating convergence. Finally, the authors carry out a successful approbation of the A0 algorithm on synthetic and real data. Further practical development of A0 will aim at increasing the robustness of its operation and at application to more complex models of real seismic data. The practical value of the research is an increase in the resolving power of the wavefield through a reduced contribution of interference, which provides new information for seismic-geological modeling.
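A greedy alternative for toy instances of the sparse spike problem is matching pursuit over shifted copies of the wavelet. This is a generic illustration of sparse spike recovery, not the authors' A0 algorithm, and the two-sample wavelet and spike positions are arbitrary:

```python
import numpy as np

def matching_pursuit(trace, wavelet, n_spikes):
    """Greedily pick spike positions/amplitudes so that trace ≈ wavelet * r."""
    n = len(trace)
    D = np.zeros((n, n))                 # column i = wavelet shifted to lag i
    for i in range(n):
        end = min(n, i + len(wavelet))
        D[i:end, i] = wavelet[:end - i]
    norms2 = (D ** 2).sum(axis=0)
    residual = trace.astype(float).copy()
    r = np.zeros(n)
    for _ in range(n_spikes):
        corr = D.T @ residual / norms2   # least-squares amplitude at each lag
        i = int(np.argmax(np.abs(corr)))
        r[i] += corr[i]                  # place the best-fitting spike
        residual -= corr[i] * D[:, i]    # and remove its contribution
    return r

wavelet = np.array([1.0, -0.5])          # toy source pulse
r_true = np.zeros(16)
r_true[3], r_true[9] = 2.0, -1.0
trace = np.convolve(r_true, wavelet)[:16]
r_est = matching_pursuit(trace, wavelet, n_spikes=2)
```

On noise-free, well-separated spikes the greedy picks are exact; the harder regimes the paper addresses (noise, overlapping pulses) are precisely where such a naive method degrades and regularized optimization is needed.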


Geophysics ◽  
2003 ◽  
Vol 68 (5) ◽  
pp. 1714-1730 ◽  
Author(s):  
Bertrand Iooss ◽  
David Geraets ◽  
Tapan Mukerji ◽  
Yann Samuelides ◽  
Mustafa Touati ◽  
...  

Understanding the internal heterogeneities of reservoirs is one of the key issues in better recovery and efficient reservoir management. Seismic data are widely used to map subsurface heterogeneities. These heterogeneities can include variations in wave velocity and rock density, which can be used to interpret variations in reservoir properties such as porosity, lithofacies, and fluids. This paper describes a statistical tomography method to infer the spatial statistics of subsurface velocity heterogeneities from seismic data. We consider an acoustic wave propagating in a medium represented as a single macromodel superimposed on statistically stationary random velocity perturbations. While the macromodel is retrieved by classical seismic methods, the picked traveltimes and their fluctuations are used to estimate the covariance function of the spatially varying velocity perturbations. We present a formulation based on ray‐theoretical results and describe two algorithms: one using the prestack traveltimes and the other using the stacking velocities. The methods are tested with synthetic seismic reflection data in an idealized medium (with a Gaussian spatial covariance) and with synthetic transmission data in a more geologically realistic medium. Then, the two algorithms are applied on real data. The estimates of the spatial statistics obtained from inverting the traveltime statistics match reasonably well with the true parameters of the heterogeneous media.
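The target quantity of such statistical tomography, the spatial covariance of a stationary random field, can be estimated empirically. A 1-D sketch with a smoothed-noise field standing in for the velocity perturbations (window length and sizes are illustrative):

```python
import numpy as np

def empirical_covariance(field, max_lag):
    """Empirical spatial covariance C(h) of a 1-D stationary field."""
    f = field - field.mean()
    n = len(f)
    return np.array([np.dot(f[:n - h], f[h:]) / (n - h)
                     for h in range(max_lag + 1)])

rng = np.random.default_rng(5)
# stationary correlated field: white noise smoothed with a 5-point window,
# whose true covariance is C(h) = (5 - h)/25 for h <= 5 and 0 beyond
field = np.convolve(rng.standard_normal(20000), np.ones(5) / 5, mode="valid")
cov = empirical_covariance(field, max_lag=8)
```

In the paper the field is not observed directly; the covariance is inferred from traveltime fluctuations via ray theory, but the estimated object is of this same form.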



Author(s):  
A. N. Oshkin ◽  
A. I. Kon’kov ◽  
A. V. Tarasov ◽  
A. A. Shuvalov ◽  
V. I. Ignat’ev

Using several simultaneously operating sources in seismic operations makes it possible to acquire larger amounts of data per unit of time than classical single-source surveys and also to improve the seismic data recording system. Depending on the type of seismic source used (vibratory or impulsive), different methods of signal separation are applied. With the vibroseis method, separation of the signals becomes possible at the stage of correlation processing of the vibrograms. In this paper, we demonstrate methods for constructing noncorrelating signals for use in vibroseis surveys (with an example of applying such signals to synthetic data) and hyperbolic median filtering to minimize correlated and incoherent noise.
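The correlation stage at which vibroseis signals become separable can be sketched as follows: correlating the recorded vibrogram with the sweep compresses each reflected copy of the sweep to a pulse at its delay. Sweep parameters, delay, and amplitude are illustrative; the paper's noncorrelating sweep design and hyperbolic median filter are not reproduced:

```python
import numpy as np

def sweep(f0, f1, T, dt):
    """Linear vibroseis sweep from f0 to f1 Hz over T seconds."""
    t = np.arange(0.0, T, dt)
    return np.cos(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t ** 2))

dt = 0.002
s = sweep(10.0, 60.0, 4.0, dt)
# vibrogram: the sweep reflected once, with some delay and amplitude
delay = 250                                   # samples
vib = np.zeros(len(s) + delay)
vib[delay:delay + len(s)] += 0.8 * s
# correlation with the sweep compresses the reflection to a pulse at `delay`
corr = np.correlate(vib, s, mode="full")[len(s) - 1:]
peak = int(np.argmax(np.abs(corr)))
```

When two sources emit sweeps whose cross-correlation is small ("noncorrelating" signals), this same step separates their records, since each vibrogram correlates strongly only with its own sweep.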

