Deconvolution of noisy seismic data—A method for prestack wavelet extraction

Geophysics ◽  
1986 ◽  
Vol 51 (1) ◽  
pp. 34-44 ◽  
Author(s):  
Barry J. Newman

The presence of random additive noise is the most important degrading factor in the deconvolution of seismic data. Noise‐induced distortion of signal phase and amplitude produces severe stack attenuation, makes poststack recovery difficult with spectral enhancement techniques, and leaves the stratigraphic imprint unclear. The random noise component in the data is estimated from trace segments before the first arrivals and at the bottom of the record beyond seismic basement. An autocorrelation of this noise is used to adjust the signal autocorrelation prior to Wiener‐Levinson deconvolution filter design. To improve the robustness of the technique, an iterative surface‐consistent wavelet solution (common source, receiver, and offset) is used in preference to a single‐channel operation. Use of this deconvolution technique is shown by synthetic and case examples to result in correct phase alignment, enhanced stacking fidelity, and extended signal bandwidth even on very noisy data. The improvement, coupled with sensible handling of coherent noise energy, is crucial for the interpretation of subtle stratigraphic plays in many areas.
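
The noise-adjusted operator design described above can be sketched as follows. This is a minimal single-channel illustration, not the authors' surface-consistent implementation: the function names, the simple length-ratio scaling of the noise autocorrelation, and the prewhitening level are all assumptions.

```python
import numpy as np

def levinson_spike(r):
    """Solve the symmetric Toeplitz system R f = (1, 0, ..., 0)^T by
    Levinson-Durbin recursion; r holds autocorrelation lags r[0..n-1]."""
    n = len(r)
    a = np.zeros(n)
    a[0] = 1.0
    e = r[0]
    for k in range(1, n):
        acc = np.dot(a[:k], r[k:0:-1])        # sum_j a[j] * r[k - j]
        mu = -acc / e
        a[:k + 1] = a[:k + 1] + mu * a[k::-1]  # order-update of the filter
        e *= (1.0 - mu * mu)                   # updated prediction error
    return a / e                               # R @ (a / e) = unit spike

def noise_adjusted_operator(trace, noise_segment, nlags=20, prewhiten=0.01):
    """Spiking deconvolution operator designed after subtracting the noise
    autocorrelation (estimated from a signal-free segment) from the trace
    autocorrelation, scaled here by the ratio of segment lengths."""
    def autocorr(x, n):
        full = np.correlate(x, x, mode="full")
        return full[len(x) - 1:len(x) - 1 + n]
    r = autocorr(trace, nlags) - autocorr(noise_segment, nlags) * (
        len(trace) / len(noise_segment))
    r[0] *= 1.0 + prewhiten                    # stabilise the recursion
    return levinson_spike(r)
```

Applying the returned operator with `np.convolve(trace, op)` then compresses the wavelet while avoiding the over-whitening that the unadjusted noisy autocorrelation would cause.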

Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. V79-V86 ◽  
Author(s):  
Hakan Karsli ◽  
Derman Dondurur ◽  
Günay Çifçi

Time-dependent amplitude and phase information of stacked seismic data are processed independently using complex trace analysis in order to facilitate interpretation by improving resolution and decreasing random noise. We represent seismic traces using their envelopes and instantaneous phases obtained by the Hilbert transform. The proposed method reduces the amplitudes of the low-frequency components of the envelope, while preserving the phase information. Several tests are performed in order to investigate the behavior of the present method for resolution improvement and noise suppression. Applications on both 1D and 2D synthetic data show that the method is capable of reducing the amplitudes and temporal widths of the side lobes of the input wavelets, and hence, the spectral bandwidth of the input seismic data is enhanced, resulting in an improvement in the signal-to-noise ratio. The bright-spot anomalies observed on the stacked sections become clearer because the output seismic traces have a simplified appearance allowing an easier data interpretation. We recommend applying this simple signal processing for signal enhancement prior to interpretation, especially for single channel and low-fold seismic data.
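
The core attribute decomposition, envelope and instantaneous phase from the analytic signal, can be sketched with an FFT-based Hilbert transform. This is a minimal single-trace illustration; the paper's envelope filtering step is not reproduced, and the function names are placeholders.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert transform:
    zero the negative frequencies and double the positive ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def envelope_and_phase(x):
    """Split a trace into its envelope and instantaneous phase."""
    z = analytic_signal(x)
    return np.abs(z), np.angle(z)

def recombine(envelope, phase):
    """Rebuild a trace from a (possibly filtered) envelope and the
    untouched instantaneous phase, mirroring the idea of processing
    the two attributes independently."""
    return envelope * np.cos(phase)
```

Filtering only the envelope before `recombine` leaves the phase, and hence event timing, intact, which is what makes the scheme attractive for single-channel and low-fold data.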


Geophysics ◽  
1997 ◽  
Vol 62 (4) ◽  
pp. 1310-1314 ◽  
Author(s):  
Qing Li ◽  
Kris Vasudevan ◽  
Frederick A. Cook

Coherency filtering is a tool used commonly in 2-D seismic processing to isolate desired events from noisy data. It assumes that phase‐coherent signal can be separated from background incoherent noise on the basis of coherency estimates, and coherent noise from coherent signal on the basis of different dips. It is achieved by searching for the maximum coherence direction for each data point of a seismic event and enhancing the event along this direction through stacking; it suppresses the incoherent events along other directions. Foundations for a 2-D coherency filtering algorithm were laid out by several researchers (Neidell and Taner, 1971; McMechan, 1983; Leven and Roy‐Chowdhury, 1984; Kong et al., 1985; Milkereit and Spencer, 1989). Milkereit and Spencer (1989) applied 2-D coherency filtering successfully to 2-D deep crustal seismic data for the improvement of visualization and interpretation. Work on random noise attenuation using frequency‐space or time‐space prediction filters, in either two or three dimensions, to increase the signal‐to‐noise ratio of the data can be found in the geophysical literature (Canales, 1984; Hornbostel, 1991; Abma and Claerbout, 1995).
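
The dip-scan idea can be illustrated in 2-D: for each sample, semblance is evaluated along trial dips over a small trace aperture, and the sample is replaced by the stack along the most coherent dip. This brute-force sketch assumes integer sample shifts and is not any of the cited algorithms verbatim.

```python
import numpy as np

def coherency_filter(data, dips, half_aperture=2):
    """data: (ntraces, nsamples). For every sample, find the trial dip
    (in samples per trace) with maximum semblance over a small trace
    aperture and replace the sample by the stack along that dip."""
    ntr, ns = data.shape
    out = np.zeros_like(data, dtype=float)
    for i in range(ntr):
        lo, hi = max(0, i - half_aperture), min(ntr, i + half_aperture + 1)
        for t in range(ns):
            best_s, best_stack = -1.0, data[i, t]
            for p in dips:
                amps = []
                for j in range(lo, hi):
                    tt = int(round(t + p * (j - i)))
                    if 0 <= tt < ns:
                        amps.append(data[j, tt])
                amps = np.asarray(amps)
                denom = len(amps) * np.sum(amps ** 2)
                # Semblance: (stack energy) / (total energy), in [0, 1].
                s = np.sum(amps) ** 2 / denom if denom > 0 else 0.0
                if s > best_s:
                    best_s, best_stack = s, amps.mean()
            out[i, t] = best_stack
    return out
```

Coherent events aligned with one of the trial dips survive the stack; incoherent energy averages toward zero.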


Geophysics ◽  
2019 ◽  
Vol 84 (5) ◽  
pp. V271-V280
Author(s):  
Julián L. Gómez ◽  
Danilo R. Velis

We have developed an algorithm to perform structure-oriented filtering (SOF) in 3D seismic data by learning the data structure in the frequency domain. The method, called spectral SOF (SSOF), allows us to enhance the signal structures in the [Formula: see text]-[Formula: see text]-[Formula: see text] domain by running a 1D edge-preserving filter along curvilinear self-adaptive trajectories that connect points of similar characteristics. These self-adaptive paths are given by the eigenvectors of the smoothed structure tensor, which are easily computed using closed-form expressions. SSOF relies on a few parameters that are easily tuned and on simple 1D convolutions for tensor calculation and smoothing. It is able to process a 3D data volume with a 2D strategy using basic 1D edge-preserving filters. In contrast to other SOF techniques, such as anisotropic diffusion, anisotropic smoothing, and plane-wave prediction, SSOF does not require any iterative process to reach the denoised result. We determine the performance of SSOF using three public domain field data sets, which are subsets of the well-known Waipuku, Penobscot, and Teapot surveys. We use the Waipuku subset to indicate the signal preservation of the method in good-quality data when mostly background random noise is present. Then, we use the Penobscot subset to illustrate random noise and footprint signature attenuation, as well as to show how faults and fractures are improved. Finally, we analyze the Teapot stacked and depth-migrated subsets to show random and coherent noise removal, leading to an improvement of the volume structural details and overall lateral continuity. The results indicate that random noise, footprints, and other artifacts can be successfully suppressed, enhancing the delineation of geologic structures and seismic horizons and preserving the original signal bandwidth.
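
A 2-D analogue of the tensor step can be sketched as follows: gradients by finite differences, tensor components smoothed with 1-D convolutions, and the local orientation from the closed-form eigen-decomposition of the 2 x 2 smoothed tensor. The box smoother and function name are stand-ins; SSOF itself works in 3-D and adds the edge-preserving filtering along the resulting trajectories.

```python
import numpy as np

def structure_tensor_orientation(img, width=4):
    """Return the angle (radians, from the first axis's perpendicular) of
    the dominant eigenvector of the smoothed 2-D structure tensor, i.e.
    the local gradient/normal direction; structure runs perpendicular."""
    gy, gx = np.gradient(img.astype(float))     # derivatives along axes 0, 1
    Jxx, Jxy, Jyy = gx * gx, gx * gy, gy * gy
    k = np.ones(width) / width                  # 1-D box filter (stand-in)
    def smooth(a):
        a = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, a)
        return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, a)
    Jxx, Jxy, Jyy = smooth(Jxx), smooth(Jxy), smooth(Jyy)
    # Closed-form eigenvector angle of a symmetric 2x2 matrix.
    return 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)
```

Only 1-D convolutions are needed for the tensor smoothing, which is the property the abstract highlights for efficiency.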


Geophysics ◽  
1968 ◽  
Vol 33 (5) ◽  
pp. 711-722 ◽  
Author(s):  
E. B. Davies ◽  
E. J. Mercado

Several writers have proposed the use of multichannel filters for the elimination of coherent noise on seismic records. One filter of this type which can be constructed is a multichannel Wiener filter which has a multichannel input and a single channel output. In this form, it is applicable to data collected for vertical or horizontal common‐depth‐point stack processing. The choice of desired output characteristics for this Wiener filter is flexible and, for example, can be tuned to correspond to multichannel deconvolution. The results of the application of filters of this type to field and synthetic data, in general, show little if any advantage over single‐channel deconvolution. This failure appears to be connected with the low cross coherence of both noise and reflection signal on field‐recorded, common‐depth‐point traces.


Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. V355-V365
Author(s):  
Julián L. Gómez ◽  
Danilo R. Velis

Dictionary learning (DL) is a machine learning technique that can be used to find a sparse representation of a given data set by means of a relatively small set of atoms, which are learned from the input data. DL allows for the removal of random noise from seismic data very effectively. However, when seismic data are contaminated with footprint noise, the atoms of the learned dictionary are often a mixture of data and coherent noise patterns. In this scenario, DL requires carrying out a morphological attribute classification of the atoms to separate the noisy atoms from the dictionary. Instead, we have developed a novel DL strategy for the removal of footprint patterns in 3D seismic data that is based on an augmented dictionary built upon appropriately filtering the learned atoms. The resulting augmented dictionary, which contains the filtered atoms and their residuals, has a high discriminative power in separating signal and footprint atoms, thus precluding the use of any statistical classification strategy to segregate the atoms of the learned dictionary. We filter the atoms using a domain transform filtering approach, a very efficient edge-preserving smoothing algorithm. As in the so-called coherence-constrained DL method, the proposed DL strategy does not require the user to know or adjust the noise level or the sparsity of the solution for each data set. Furthermore, it only requires one pass of DL and is shown to produce successful transfer learning. This increases the speed of the denoising processing because the augmented dictionary does not need to be calculated for each time slice of the input data volume. Results on synthetic and 3D public-domain poststack field data demonstrate effective footprint removal with accurate edge preservation.
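
The dictionary-augmentation idea, pairing each filtered atom with its residual, can be sketched directly. A hedged illustration: a simple moving average stands in for the domain-transform filter, and the function name is invented.

```python
import numpy as np

def augment_dictionary(D, width=5):
    """Build the augmented dictionary: each learned atom (a column of D)
    is smoothed with an edge-preserving filter (a moving average here as
    a stand-in) and paired with its residual, so that signal and footprint
    content land in different atoms."""
    k = np.ones(width) / width
    smoothed = np.apply_along_axis(
        lambda a: np.convolve(a, k, mode="same"), 0, D)
    residual = D - smoothed
    D_aug = np.hstack([smoothed, residual])
    # Re-normalise atoms so sparse coding treats them uniformly.
    norms = np.linalg.norm(D_aug, axis=0)
    norms[norms == 0] = 1.0
    return D_aug / norms
```

Sparse coding of the data over `D_aug`, followed by reconstruction using only the smoothed-atom coefficients, then acts as the footprint removal step; because the split is built into the dictionary, no statistical atom classification is needed.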


2015 ◽  
Vol 86 (3) ◽  
pp. 901-907 ◽  
Author(s):  
R. Takagi ◽  
K. Nishida ◽  
Y. Aoki ◽  
T. Maeda ◽  
K. Masuda ◽  
...  

Geophysics ◽  
1983 ◽  
Vol 48 (7) ◽  
pp. 854-886 ◽  
Author(s):  
Ken Larner ◽  
Ron Chambers ◽  
Mai Yang ◽  
Walt Lynn ◽  
Willon Wai

Despite significant advances in marine streamer design, seismic data are often plagued by coherent noise having approximately linear moveout across stacked sections. With an understanding of the characteristics that distinguish such noise from signal, we can decide which noise‐suppression techniques to use and at what stages to apply them in acquisition and processing. Three general mechanisms that might produce such noise patterns on stacked sections are examined: direct and trapped waves that propagate outward from the seismic source, cable motion caused by the tugging action of the boat and tail buoy, and scattered energy from irregularities in the water bottom and sub‐bottom. Depending upon the mechanism, entirely different noise patterns can be observed on shot profiles and common‐midpoint (CMP) gathers; these patterns can be diagnostic of the dominant mechanism in a given set of data. Field data from Canada and Alaska suggest that the dominant noise is from waves scattered within the shallow sub‐bottom. This type of noise, while not obvious on the shot records, is actually enhanced by CMP stacking. Moreover, this noise is not confined to marine data; it can be as strong as surface wave noise on stacked land seismic data as well. Of the many processing tools available, moveout filtering is best for suppressing the noise while preserving signal. Since the scattered noise does not exhibit a linear moveout pattern on CMP‐sorted gathers, moveout filtering must be applied either to traces within shot records and common‐receiver gathers or to stacked traces. Our data example demonstrates that although it is more costly, moveout filtering of the unstacked data is particularly effective because it conditions the data for the critical data‐dependent processing steps of predictive deconvolution and velocity analysis.
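
Moveout (dip) filtering of linear noise is often realized as a fan reject in the f-k domain. A minimal sketch, assuming a regularly sampled section and a hard velocity cutoff; the paper does not prescribe this particular implementation:

```python
import numpy as np

def fk_moveout_filter(section, dt, dx, v_reject):
    """Reject f-k components whose apparent velocity |f/k| falls below
    v_reject, i.e. steeply dipping linear noise; flat or gently dipping
    events are passed. section has shape (ntraces, nsamples)."""
    F = np.fft.fft2(section)
    k = np.fft.fftfreq(section.shape[0], d=dx)[:, None]  # cycles / distance
    f = np.fft.fftfreq(section.shape[1], d=dt)[None, :]  # cycles / time
    keep = np.abs(f) >= v_reject * np.abs(k)             # fan-shaped pass zone
    return np.real(np.fft.ifft2(F * keep))
```

A hard cutoff rings in practice; production implementations taper the fan edges, but the pass/reject geometry is the same.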


2013 ◽  
Vol 56 (7) ◽  
pp. 1200-1208 ◽  
Author(s):  
Yue Li ◽  
BaoJun Yang ◽  
HongBo Lin ◽  
HaiTao Ma ◽  
PengFei Nie

2014 ◽  
Vol 672-674 ◽  
pp. 1964-1967
Author(s):  
Jun Qiu Wang ◽  
Jun Lin ◽  
Xiang Bo Gong

Vibroseis obtains the seismic record through cross-correlation detection. Compared with a dynamite source, cross-correlation detection can suppress random noise, but it produces additional correlation noise. This paper studies the Radon transform as a means of removing the correlation noise produced by electromagnetic-drive vibroseis and impact-rammer sources. The results of processing field seismic records show that the Radon transform can remove vibroseis correlation noise, effectively improving the SNR of vibroseis seismic data.
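
The forward half of such a scheme is a linear (tau-p) Radon transform by direct slant stacking: a linear noise event focuses near a single (tau, p) point, where it can be muted before the inverse transform. The function name and the integer-shift approximation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def slant_stack(gather, dt, offsets, slownesses):
    """Forward linear (tau-p) Radon transform by direct slant stacking.
    gather: (ntraces, nsamples). A linear event t = tau + p * x maps to
    a focused point at (tau, p) in the output panel."""
    ntr, ns = gather.shape
    out = np.zeros((len(slownesses), ns))
    for ip, p in enumerate(slownesses):
        for ix, x in enumerate(offsets):
            shift = int(round(p * x / dt))   # moveout in whole samples
            if abs(shift) >= ns:
                continue
            if shift >= 0:
                out[ip, :ns - shift] += gather[ix, shift:]
            else:
                out[ip, -shift:] += gather[ix, :ns + shift]
    return out
```

Muting the focused correlation-noise region of the panel and applying the adjoint (inverse) slant stack then returns a gather with the linear noise suppressed.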

