Nonequispaced curvelet transform for seismic data reconstruction: A sparsity-promoting approach

Geophysics ◽  
2010 ◽  
Vol 75 (6) ◽  
pp. WB203-WB210 ◽  
Author(s):  
Gilles Hennenfent ◽  
Lloyd Fenelon ◽  
Felix J. Herrmann

We extend our earlier work on the nonequispaced fast discrete curvelet transform (NFDCT) and introduce a second generation of the transform. This new generation differs from the previous one by the approach taken to compute accurate curvelet coefficients from irregularly sampled data. The first generation relies on accurate Fourier coefficients obtained by an [Formula: see text]-regularized inversion of the nonequispaced fast Fourier transform (FFT) whereas the second is based on a direct [Formula: see text]-regularized inversion of the operator that links curvelet coefficients to irregular data. Also, by construction, the second-generation NFDCT is lossless, unlike the first-generation NFDCT. This property is particularly attractive for processing irregularly sampled seismic data in the curvelet domain and bringing them back to their irregular recording locations with high fidelity. Second, we combine the second-generation NFDCT with the standard fast discrete curvelet transform (FDCT) to form a new curvelet-based method, coined nonequispaced curvelet reconstruction with sparsity-promoting inversion (NCRSI) for the regularization and interpolation of irregularly sampled data. We demonstrate that for a pure regularization problem the reconstruction is very accurate. The signal-to-reconstruction error ratio in our example is above [Formula: see text]. We also conduct combined interpolation and regularization experiments. The reconstructions for synthetic data are accurate, particularly when the recording locations are optimally jittered. The reconstruction in our real data example shows amplitudes along the main wavefronts smoothly varying with limited acquisition imprint.
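The sparsity-promoting inversion at the heart of NCRSI can be sketched with iterative soft thresholding. The sketch below is illustrative only: it substitutes a generic unitary sparsifying transform (here an FFT pair) for the NFDCT, and a sampling mask for the irregular-acquisition operator.

```python
import numpy as np

def soft_threshold(c, t):
    """Complex-safe soft thresholding: the proximal operator of the l1 norm."""
    mag = np.abs(c)
    return c * np.maximum(mag - t, 0.0) / np.maximum(mag, 1e-12)

def sparse_reconstruct(y, mask, fwd, inv, lam=0.02, n_iter=200):
    """Recover a signal from masked samples y = mask * x by promoting
    sparsity of its transform coefficients (iterative soft thresholding)."""
    x = np.zeros_like(y)
    for _ in range(n_iter):
        x = x + mask * (y - mask * x)          # gradient step on the data misfit
        x = inv(soft_threshold(fwd(x), lam))   # shrink in the transform domain
    return x
```

With a signal that is sparse in the transform domain, roughly half of the samples suffice for an accurate reconstruction.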

Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because of the Hessian computation, so an efficient approximation is introduced. Approximation is achieved by computing a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude lower cost, but it is dip limited, though in a controllable way, compared with the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common receiver data that are superior in appearance compared to conventional datuming.
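The cost-saving idea, damped least squares with only a few central diagonals of the Hessian retained, can be sketched on a small dense operator. This is a toy stand-in, not the paper's extrapolation operator; the band-limiting of the Hessian is the point being illustrated.

```python
import numpy as np

def damped_lsq(G, d, eps=1e-2):
    """Full damped least squares: solve (G^T G + eps*I) m = G^T d."""
    H = G.T @ G + eps * np.eye(G.shape[1])
    return np.linalg.solve(H, G.T @ d)

def damped_lsq_banded(G, d, eps=1e-2, n_diags=1):
    """Cheaper variant keeping only a few central diagonals of the
    Hessian, in the spirit of the paper's approximation."""
    H = G.T @ G + eps * np.eye(G.shape[1])
    offset = np.abs(np.subtract.outer(np.arange(H.shape[0]),
                                      np.arange(H.shape[1])))
    H_band = np.where(offset <= n_diags, H, 0.0)   # drop far-off diagonals
    return np.linalg.solve(H_band, G.T @ d)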


Geophysics ◽  
2010 ◽  
Vol 75 (4) ◽  
pp. V51-V60 ◽  
Author(s):  
Ramesh (Neelsh) Neelamani ◽  
Anatoly Baumstein ◽  
Warren S. Ross

We propose a complex-valued curvelet transform-based (CCT-based) algorithm that adaptively subtracts from seismic data those noises for which an approximate template is available. The CCT decomposes a geophysical data set in terms of small reflection pieces, with each piece having a different characteristic frequency, location, and dip. One can precisely change the amplitude and shift the location of each seismic reflection piece in a template by controlling the amplitude and phase of the template's CCT coefficients. Based on these insights, our approach uses the phase and amplitude of the data's and template's CCT coefficients to correct misalignment and amplitude errors in the noise template, thereby matching the adapted template with the actual noise in the seismic data, reflection event-by-event. We also extend our approach to subtract noises that require several templates to be approximated. By itself, the method can only correct small misalignment errors ([Formula: see text] in [Formula: see text] data) in the template; it relies on conventional least-squares (LS) adaptation to correct large-scale misalignment errors, such as wavelet mismatches and bulk shifts. Synthetic and real-data results illustrate that the CCT-based approach improves upon the LS approach and a curvelet-based approach described by Herrmann and Verschuur.
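The key insight, that the amplitude and phase of complex coefficients let one rescale and locally shift a template, can be sketched in one dimension with the analytic signal standing in for the CCT coefficients. This is a simplified illustration, not the authors' method: it matches amplitude and phase per sample rather than per curvelet coefficient, and the gain clip is a hypothetical safeguard.

```python
import numpy as np

def analytic(x):
    """Analytic signal via a one-sided FFT (a 1D stand-in for the
    complex curvelet coefficients of the paper)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def adapt_template(data, template, max_gain=2.0):
    """Scale the template's amplitude and rotate its phase so it matches
    the data, then subtract the adapted template."""
    ad, at = analytic(data), analytic(template)
    gain = np.clip(np.abs(ad) / np.maximum(np.abs(at), 1e-12), 0.0, max_gain)
    phase = np.exp(1j * (np.angle(ad) - np.angle(at)))
    adapted = np.real(at * gain * phase)
    return data - adapted
```

A small misalignment (one sample) and an amplitude error are absorbed almost entirely, mirroring the paper's observation that the approach corrects small errors only.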


Geophysics ◽  
2012 ◽  
Vol 77 (2) ◽  
pp. V71-V80 ◽  
Author(s):  
Mostafa Naghizadeh

I introduce a unified approach for denoising and interpolation of seismic data in the frequency-wavenumber ([Formula: see text]) domain. First, an angular search in the [Formula: see text] domain is carried out to identify a sparse number of dominant dips, not only using low frequencies but over the whole frequency range. Then, an angular mask function is designed based on the identified dominant dips. The mask function is utilized with the least-squares fitting principle for optimal denoising or interpolation of data. The least-squares fit is directly applied in the time-space domain. The proposed method can be used to interpolate regularly sampled data as well as randomly sampled data on a regular grid. Synthetic and real data examples are provided to examine the performance of the proposed method.
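A stripped-down version of f-k mask denoising can be sketched by keeping only the strongest f-k amplitudes. This replaces the paper's angular dip search and least-squares fit with a simple amplitude-quantile mask, so it is a stand-in for the idea, not the method itself.

```python
import numpy as np

def fk_mask_denoise(d, keep=0.02):
    """Denoise a t-x gather by masking all but the strongest f-k
    amplitudes (a simplified stand-in for a dip-based mask function)."""
    D = np.fft.fft2(d)
    thresh = np.quantile(np.abs(D), 1.0 - keep)
    mask = np.abs(D) >= thresh
    return np.real(np.fft.ifft2(D * mask))
```

A single linear event concentrates onto a few f-k bins, so the mask rejects most of the incoherent noise energy.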


Geophysics ◽  
1999 ◽  
Vol 64 (1) ◽  
pp. 251-260 ◽  
Author(s):  
Gary F. Margrave

The signal band of reflection seismic data is that portion of the temporal Fourier spectrum which is dominated by reflected source energy. The signal bandwidth directly determines the spatial and temporal resolving power and is a useful measure of the value of such data. The realized signal band, which is the signal band of seismic data as optimized in processing, may be estimated by the interpretation of appropriately constructed f-x spectra. A temporal window, whose length has a specified random fluctuation from trace to trace, is applied to an ensemble of seismic traces, and the temporal Fourier transform is computed. The resultant f-x spectra are then separated into amplitude and phase sections, viewed as conventional seismic displays, and interpreted. The signal is manifested through the lateral continuity of spectral events; noise causes lateral incoherence. The fundamental assumption is that signal is correlated from trace to trace while noise is not. A variety of synthetic data examples illustrate that reasonable results are obtained even when the signal decays with time (i.e., is nonstationary) or geologic structure is extreme. Analysis of real data from a 3-C survey shows an easily discernible signal band for both P-P and P-S reflections, with the former being roughly twice the latter. The potential signal band, which may be regarded as the maximum possible signal band, is independent of processing techniques. An estimator for this limiting case is the corner frequency (the frequency at which a decaying signal drops below background noise levels) as measured on ensemble-averaged amplitude spectra from raw seismic data. A comparison of potential signal band with realized signal band for the 3-C data shows good agreement for P-P data, which suggests the processing is nearly optimal. For P-S data, the realized signal band is about half of the estimated potential. This may indicate a relative immaturity of P-S processing algorithms or it may be due to P-P energy on the raw radial component records.
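The underlying assumption, signal correlated trace to trace while noise is not, suggests a simple per-frequency coherence measure on f-x spectra. The sketch below is a hypothetical illustration of that principle (a stacked-versus-summed amplitude ratio), not the paper's interpretive display workflow.

```python
import numpy as np

def fx_coherence(gather):
    """Per-frequency lateral coherence of an f-x spectrum: near 1 where
    trace-to-trace correlated signal dominates, small where noise does."""
    F = np.fft.rfft(gather, axis=0)          # temporal FFT; traces in columns
    num = np.abs(F.sum(axis=1))              # coherent (stacked) amplitude
    den = np.abs(F).sum(axis=1) + 1e-12      # incoherent amplitude budget
    return num / den
```

Inside the signal band the ratio approaches 1; in noise-dominated bands it falls toward 1/sqrt(ntraces).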


Geophysics ◽  
2003 ◽  
Vol 68 (5) ◽  
pp. 1633-1638 ◽  
Author(s):  
Yanghua Wang

The spectrum of a discrete Fourier transform (DFT) is estimated by linear inversion, and used to produce desirable seismic traces with regular spatial sampling from an irregularly sampled data set. The essence of such a wavefield reconstruction method is to solve the DFT inverse problem with a particular constraint which imposes a sparseness criterion on the least-squares solution. A working definition for the sparseness constraint is presented to improve the stability and efficiency. Then a sparseness measurement is used to measure the relative sparseness of the two DFT spectra obtained from inversion with or without sparseness constraint. It is a pragmatic indicator about the magnitude of sparseness needed for wavefield reconstruction. For seismic trace regularization, an antialiasing condition must be fulfilled for the regularizing trace interval, whereas optimal trace coordinates in the output can be obtained by minimizing the distances between the newly generated traces and the original traces in the input. Application to real seismic data reveals the effectiveness of the technique and the significance of the sparseness constraint in the least-squares solution.
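A sparseness-constrained DFT inversion of this kind can be sketched with iteratively reweighted least squares: a diagonal model weight built from the current coefficient magnitudes plays the role of the sparseness constraint. The weighting scheme and parameter values here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sparse_dft_reconstruct(x_irr, d_irr, k, x_reg, n_iter=10, eps=1e-3):
    """Estimate DFT coefficients from irregular coordinates by
    sparsity-weighted (IRLS) least squares, then evaluate the model
    on a regular output grid."""
    A = np.exp(2j * np.pi * np.outer(x_irr, k))    # irregular-grid DFT basis
    w = np.ones(len(k))
    m = np.zeros(len(k), dtype=complex)
    for _ in range(n_iter):
        Aw = A * w                                  # weight the columns
        q = np.linalg.solve(Aw.conj().T @ Aw + eps * np.eye(len(k)),
                            Aw.conj().T @ d_irr)    # damped normal equations
        m = w * q
        w = np.abs(m) + eps                         # sparseness reweighting
    B = np.exp(2j * np.pi * np.outer(x_reg, k))
    return np.real(B @ m)
```

For a single harmonic sampled at random positions, the regularized traces agree closely with the underlying wavefield.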


Geosciences ◽  
2018 ◽  
Vol 8 (12) ◽  
pp. 497
Author(s):  
Fedor Krasnov ◽  
Alexander Butorin

Sparse spike deconvolution is one of the oldest inverse problems, a stylized version of recovery in seismic imaging. The goal of sparse spike deconvolution is to recover an approximation of the reflectivity from a given noisy measurement T = W ∗ r + W₀. Since the convolution destroys many low and high frequencies, prior information is required to regularize the inverse problem. In this paper, the authors continue to study the problem of searching for positions and amplitudes of the reflection coefficients of the medium (SP&ARCM). In previous research, the authors proposed a practical algorithm, named A₀, for solving the inverse problem of obtaining geological information from the seismic trace. In the current paper, the authors improve the A₀ algorithm and apply it to real (non-synthetic) data. First, the authors consider the matrix approach and the differential evolution approach to the SP&ARCM problem and show that their efficiency is limited in this case. Second, the authors show that the way to improve A₀ lies in optimization with sequential regularization. The authors present accuracy calculations for A₀ in that case, together with experimental convergence results. The authors also consider different initialization parameters of the optimization process with a view to accelerating convergence. Finally, the authors successfully validate the A₀ algorithm on synthetic and real data. Further practical development of A₀ will aim at increasing the robustness of its operation and at application to more complex models of real seismic data. The practical value of the research is to increase the resolving power of the wavefield by reducing the contribution of interference, which provides new information for seismic-geological modeling.
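The forward model T = W ∗ r + W₀ invites an l1-regularized solver. The sketch below uses iterative soft thresholding with FFT-based circular convolution and a hypothetical zero-phase wavelet; it is an illustrative baseline, not the authors' A₀ algorithm.

```python
import numpy as np

def ricker(n, f=0.15):
    """Simple zero-phase wavelet (a hypothetical stand-in for W)."""
    t = np.arange(n) - n // 2
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

def sparse_deconv(trace, wavelet, lam=0.05, n_iter=500):
    """Recover sparse reflection coefficients r from T = W * r + noise
    by iterative soft thresholding (ISTA) with circular convolution."""
    n = len(trace)
    W = np.fft.fft(wavelet, n)
    L = np.max(np.abs(W)) ** 2                 # Lipschitz bound for the step
    r = np.zeros(n)
    for _ in range(n_iter):
        res = trace - np.real(np.fft.ifft(np.fft.fft(r) * W))
        grad = np.real(np.fft.ifft(np.fft.fft(res) * np.conj(W)))
        r = r + grad / L
        r = np.sign(r) * np.maximum(np.abs(r) - lam / L, 0.0)
    return r
```

On a clean two-spike trace, the recovered spikes land at (or immediately adjacent to) the true positions.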


Geophysics ◽  
2010 ◽  
Vol 75 (6) ◽  
pp. WB189-WB202 ◽  
Author(s):  
Mostafa Naghizadeh ◽  
Mauricio D. Sacchi

We propose a robust interpolation scheme for aliased regularly sampled seismic data that uses the curvelet transform. In a first pass, the curvelet transform is used to compute the curvelet coefficients of the aliased seismic data. The aforementioned coefficients are divided into two groups of scales: alias-free and alias-contaminated scales. The alias-free curvelet coefficients are upscaled to estimate a mask function that is used to constrain the inversion of the alias-contaminated scale coefficients. The mask function is incorporated into the inversion via a minimum norm least-squares algorithm that determines the curvelet coefficients of the desired alias-free data. Once the alias-free coefficients are determined, the curvelet synthesis operator is used to reconstruct seismograms at new spatial positions. The proposed method can be used to reconstruct regularly and irregularly sampled seismic data. We believe that our exposition leads to a clear unifying thread between [Formula: see text] and [Formula: see text] beyond-alias interpolation methods and curvelet reconstruction. As in [Formula: see text] and [Formula: see text] interpolation, we stress the necessity of examining seismic data at different scales (frequency bands) to come up with viable and robust interpolation schemes. Synthetic and real data examples are used to illustrate the performance of the proposed curvelet interpolation method.
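The upscaling step, stretching an alias-free mask so that dips are preserved at higher scales, has a direct f-k analog: since a dip fixes the ratio k/f, a mask estimated at a low frequency band widens proportionally in wavenumber at higher bands. The sketch below illustrates that single step in 1D wavenumber, as an assumption-laden stand-in for the curvelet-scale upscaling of the paper.

```python
import numpy as np

def upscale_mask(low_mask_k, factor):
    """Stretch an alias-free wavenumber mask to a higher frequency band:
    dips k/f are preserved, so the passband widens by `factor` in k."""
    nk = len(low_mask_k)
    k = np.fft.fftfreq(nk)                    # signed normalized wavenumbers
    src = k / factor                          # constant dip => sample at k/factor
    idx = np.round(src * nk).astype(int) % nk # wrap back to array positions
    return low_mask_k[idx]
```

A mask passing |k| <= 0.1 cycles/sample at the low band becomes one passing roughly |k| <= 0.2 after upscaling by 2, which is what constrains the inversion of the alias-contaminated coefficients.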


Geophysics ◽  
2005 ◽  
Vol 70 (4) ◽  
pp. V87-V95 ◽  
Author(s):  
Sheng Xu ◽  
Yu Zhang ◽  
Don Pham ◽  
Gilles Lambaré

Seismic data regularization, which spatially transforms irregularly sampled acquired data to regularly sampled data, is a long-standing problem in seismic data processing. Data regularization can be implemented using Fourier theory by using a method that estimates the spatial frequency content on an irregularly sampled grid. The data can then be reconstructed on any desired grid. Difficulties arise from the nonorthogonality of the global Fourier basis functions on an irregular grid, which results in the problem of “spectral leakage”: energy from one Fourier coefficient leaks onto others. We investigate the nonorthogonality of the Fourier basis on an irregularly sampled grid and propose a technique called “antileakage Fourier transform” to overcome the spectral leakage. In the antileakage Fourier transform, we first solve for the most energetic Fourier coefficient, assuming that it causes the most severe leakage. To attenuate all aliases and the leakage of this component onto other Fourier coefficients, the data component corresponding to this most energetic Fourier coefficient is subtracted from the original input on the irregular grid. We then use this new input to solve for the next Fourier coefficient, repeating the procedure until all Fourier coefficients are estimated. This procedure is equivalent to “reorthogonalizing” the global Fourier basis on an irregularly sampled grid. We demonstrate the robustness and effectiveness of this technique with successful applications to both synthetic and real data examples.
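The greedy loop described above, estimate the most energetic Fourier coefficient, subtract its contribution on the irregular grid, repeat, translates almost directly into code. This is a minimal 1D sketch of the antileakage idea; grid sizes and component counts are illustrative.

```python
import numpy as np

def alft(x_irr, d, k, n_coef):
    """Antileakage Fourier transform: repeatedly estimate the most
    energetic Fourier coefficient on the irregular grid and subtract
    its contribution before estimating the next one."""
    res = d.astype(complex).copy()
    coef = np.zeros(len(k), dtype=complex)
    for _ in range(n_coef):
        # least-squares estimate of every coefficient from the residual
        est = np.array([np.vdot(np.exp(2j * np.pi * kk * x_irr), res)
                        for kk in k]) / len(x_irr)
        j = int(np.argmax(np.abs(est)))       # most energetic component
        coef[j] += est[j]
        res -= est[j] * np.exp(2j * np.pi * k[j] * x_irr)
    return coef
```

With two harmonics on a random grid, the weaker component's coefficient is recovered accurately because the stronger one's leakage has already been subtracted.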


Geophysics ◽  
2011 ◽  
Vol 76 (1) ◽  
pp. V1-V10 ◽  
Author(s):  
Mostafa Naghizadeh ◽  
Kristopher A. Innanen

We have found a fast and efficient method for the interpolation of nonstationary seismic data. The method uses the fast generalized Fourier transform (FGFT) to identify the space-wavenumber evolution of nonstationary spatial signals at each temporal frequency. The nonredundant nature of the FGFT gives this interpolation method a significant computational advantage. A least-squares fitting scheme is then used to retrieve the optimal FGFT coefficients representative of the ideal interpolated data. For randomly sampled data on a regular grid, we seek a sparse representation of the FGFT coefficients to retrieve the missing samples. In addition, to interpolate regularly sampled seismic data at a given frequency, we use a mask function derived from the FGFT coefficients of the low frequencies. Synthetic and real data examples are used to examine the performance of the method.


Geophysics ◽  
2010 ◽  
Vol 75 (2) ◽  
pp. S73-S79
Author(s):  
Ørjan Pedersen ◽  
Sverre Brandsberg-Dahl ◽  
Bjørn Ursin

One-way wavefield extrapolation methods are used routinely in 3D depth migration algorithms for seismic data. Due to their efficient computer implementations, such one-way methods have become increasingly popular and a wide variety of methods have been introduced. In salt provinces, the migration algorithms must be able to handle large velocity contrasts because the velocities in salt are generally much higher than in the surrounding sediments. This can be a challenge for one-way wavefield extrapolation methods. We present a depth migration method using one-way propagators within lateral windows for handling the large velocity contrasts associated with salt-sediment interfaces. Using adaptive windowing, we can handle large perturbations locally in a similar manner as the beamlet propagator, thus limiting the impact of the errors on the global wavefield. We demonstrate the performance of our method by applying it to synthetic data from the 2D SEG/EAGE [Formula: see text] salt model and an offshore real data example.
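The building block that windowed and PSPI-style schemes generalize is constant-velocity phase-shift extrapolation of a monochromatic wavefield. A minimal sketch for one layer (downward continuation with evanescent energy dropped; parameter values are illustrative):

```python
import numpy as np

def phase_shift_extrap(p, dz, dx, v, f):
    """One-way extrapolation of a monochromatic wavefield p(x) through a
    constant-velocity layer of thickness dz via an f-k phase shift."""
    nx = len(p)
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)   # angular wavenumbers
    w = 2 * np.pi * f                           # angular frequency
    kz2 = (w / v) ** 2 - kx ** 2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    phase = np.exp(1j * kz * dz)
    phase[kz2 < 0] = 0.0                        # drop evanescent energy
    return np.fft.ifft(np.fft.fft(p) * phase)
```

Laterally windowed variants apply this operator per window with a locally representative velocity, which is how large salt-sediment contrasts are handled without corrupting the global wavefield.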

