Adaptive subtraction using complex-valued curvelet transforms

Geophysics ◽  
2010 ◽  
Vol 75 (4) ◽  
pp. V51-V60 ◽  
Author(s):  
Ramesh (Neelsh) Neelamani ◽  
Anatoly Baumstein ◽  
Warren S. Ross

We propose a complex-valued curvelet transform-based (CCT-based) algorithm that adaptively subtracts from seismic data those noises for which an approximate template is available. The CCT decomposes a geophysical data set in terms of small reflection pieces, with each piece having a different characteristic frequency, location, and dip. One can precisely change the amplitude and shift the location of each seismic reflection piece in a template by controlling the amplitude and phase of the template's CCT coefficients. Based on these insights, our approach uses the phase and amplitude of the data's and template's CCT coefficients to correct misalignment and amplitude errors in the noise template, thereby matching the adapted template with the actual noise in the seismic data, reflection event by reflection event. We also extend our approach to subtract noises whose approximation requires several templates. By itself, the method can correct only small misalignment errors ([Formula: see text] in [Formula: see text] data) in the template; it relies on conventional least-squares (LS) adaptation to correct large-scale misalignment errors, such as wavelet mismatches and bulk shifts. Synthetic and real-data results illustrate that the CCT-based approach improves upon the LS approach and a curvelet-based approach described by Herrmann and Verschuur.
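The coefficient-domain adaptation the abstract describes can be illustrated with a small sketch. The FFT below is only a stand-in for the complex curvelet transform, and `adapt_template`, `max_gain`, and the clipping rule are illustrative assumptions, not the authors' algorithm: each complex coefficient of the template is corrected in phase (shifting the component) and in amplitude within a bounded gain (scaling it) before subtraction.

```python
import numpy as np

def adapt_template(data, template, max_gain=2.0):
    """Adapt a noise template to data by correcting the amplitude and
    phase of its complex (here: Fourier) coefficients, then subtract.
    Scaling a coefficient changes a component's amplitude; rotating
    its phase shifts the component's location."""
    D = np.fft.rfft(data)
    T = np.fft.rfft(template)
    eps = 1e-12
    ratio = D / (T + eps)
    # Clip the amplitude correction so the adaptation stays conservative.
    gain = np.clip(np.abs(ratio), 1.0 / max_gain, max_gain)
    adapted = T * gain * np.exp(1j * np.angle(ratio))
    return data - np.fft.irfft(adapted, n=len(data))
```

With a template that is a shifted, scaled copy of the noise, the phase correction absorbs the shift and the gain correction absorbs the scale, so the subtraction residual is essentially zero.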

Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced, achieved by computing only a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at roughly two orders of magnitude lower cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to those from conventional datuming.
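A toy version of the weighted, damped least-squares solve with a limited-diagonal (banded) Hessian approximation; `damped_wls` and the Gaussian smoothing operator in the test are illustrative assumptions, not the paper's extrapolation operators, but they show why keeping a few diagonals is cheap and accurate when the Hessian decays away from its main diagonal.

```python
import numpy as np

def damped_wls(A, b, w, lam, bandwidth=None):
    """Weighted, damped least squares: solve (A^T W A + lam I) x = A^T W b.
    If `bandwidth` is given, the Hessian A^T W A is approximated by
    keeping only `bandwidth` diagonals either side of the main one,
    mimicking the limited-diagonal approximation of the paper."""
    H = A.T @ (w[:, None] * A)
    if bandwidth is not None:
        i, j = np.indices(H.shape)
        H = np.where(np.abs(i - j) <= bandwidth, H, 0.0)
    H = H + lam * np.eye(A.shape[1])
    return np.linalg.solve(H, A.T @ (w * b))
```

For an operator with short spatial support the banded solve recovers nearly the same model as the full solve at a fraction of the cost.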


2020 ◽  
Vol 2020 (14) ◽  
pp. 307-1-307-6
Author(s):  
Laura Galvis ◽  
Juan M. Ramírez ◽  
Edwin Vargas ◽  
Ofelia Villarreal ◽  
William Agudelo ◽  
...  

In a 3D seismic survey, source sampling on a regular grid is commonly limited by economic costs, geological constraints, and environmental challenges. This nonuniform sampling cannot be ignored, since the lack of regularity leads to incomplete seismic data with missing 2D wavefields, while postprocessing tasks have been developed under the assumption that 3D seismic data are acquired on a regular grid. Therefore, signal recovery from incomplete data becomes a crucial step in the seismic imaging processing flow. In this work, we propose a preprocessing step that embeds the nonuniformly acquired wavefields in a finer regular grid, such that shot gathers are stacked considering the actual spatial locations of the sources. Then, based on the 3D curvelet transform, a sparse signal recovery algorithm that incorporates an interpolation operator is employed to reconstruct the missing wavefields on a regular grid. The performance of the proposed seismic reconstruction approach is evaluated on a real data set.
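A minimal 1D sketch of the sparsity-promoting recovery step. The orthonormal DCT stands in for the 3D curvelet transform and plain iterative soft thresholding (ISTA) stands in for the paper's recovery algorithm; `recover`, `lam`, and `n_iter` are illustrative assumptions. The mask plays the role of the sampling operator restricting the fine regular grid to the acquired locations.

```python
import numpy as np
from scipy.fft import dct, idct

def recover(y, mask, lam=0.01, n_iter=800):
    """Sparse recovery from incomplete samples by ISTA: gradient step
    on the data misfit through the masked transform, then soft
    thresholding of the transform-domain coefficients."""
    x = np.zeros_like(y)                  # transform-domain coefficients
    for _ in range(n_iter):
        resid = mask * (y - idct(x, norm='ortho'))
        x = x + dct(resid, norm='ortho')  # step size 1: operator norm <= 1
        x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
    return idct(x, norm='ortho')
```

A signal that is sparse in the transform domain is recovered accurately from roughly 60% of its grid samples.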


Author(s):  
Parisa Torkaman

The generalized inverted exponential distribution is introduced as a lifetime model with good statistical properties. In this paper, estimation of the probability density function and the cumulative distribution function of this distribution is considered using five estimation methods: uniformly minimum variance unbiased (UMVU), maximum likelihood (ML), least squares (LS), weighted least squares (WLS), and percentile (PC) estimators. The performance of these estimation procedures is compared through numerical simulations based on the mean squared error (MSE). The simulation studies show that the UMVU estimator performs better than the others, and that when the sample size is large enough the ML and UMVU estimators are almost equivalent and more efficient than the LS, WLS, and PC estimators. Finally, the results are illustrated by analyzing a real data set.
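The experimental design of the paper (simulate, estimate, average squared errors) can be sketched in a few lines. The plain exponential distribution here is a simplified stand-in for the generalized inverted exponential distribution, and the two estimators compared (ML versus a percentile estimator) are illustrative, but the Monte Carlo MSE comparison is the same idea.

```python
import numpy as np

def mse_compare(lam=2.0, n=50, reps=2000, seed=0):
    """Monte Carlo MSE comparison of two estimators of an exponential
    rate: maximum likelihood (1/mean) versus a percentile estimator
    (ln 2 / sample median)."""
    rng = np.random.default_rng(seed)
    x = rng.exponential(scale=1.0 / lam, size=(reps, n))
    ml = 1.0 / x.mean(axis=1)              # maximum likelihood
    pc = np.log(2.0) / np.median(x, axis=1)  # percentile (median) based
    return ((ml - lam) ** 2).mean(), ((pc - lam) ** 2).mean()
```

As the asymptotic theory predicts, the ML estimator has roughly half the MSE of the percentile estimator here, mirroring the paper's finding that likelihood-based estimators dominate the ad hoc ones.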


Geophysics ◽  
2009 ◽  
Vol 74 (4) ◽  
pp. V59-V67 ◽  
Author(s):  
Shoudong Huo ◽  
Yanghua Wang

In seismic multiple attenuation, once the multiple models have been built, the effectiveness of the processing depends on the subtraction step. Usually the primary energy is partially attenuated during the adaptive subtraction if an [Formula: see text]-norm matching filter is used to solve a least-squares problem. The expanded multichannel matching (EMCM) filter generally is effective, but the conservative parameters adopted to preserve the primaries can leave some residual multiples. We improve the multiple attenuation result through an iterative application of the EMCM filter, which accumulates the effect of subtraction. A Butterworth-type masking filter based on the multiple model can be used to preserve most of the primary energy prior to subtraction, so that subtraction is performed on the remaining part to better suppress the multiples without affecting the primaries. Meanwhile, subtraction can be performed according to the orders of the multiples, because a single subtraction window usually covers different-order multiples with different amplitudes. Theoretical analysis and demonstrations on synthetic and real seismic data sets show that a combination of these three strategies is effective in improving the adaptive subtraction during seismic multiple attenuation.
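The core subtraction step can be sketched for a single channel. The EMCM filter itself is multichannel and windowed; `matching_filter` below is only a single-trace least-squares matching filter, and `iterative_subtract` illustrates the idea of reapplying the filter to the residual to accumulate the subtraction effect. All names and lengths are illustrative assumptions.

```python
import numpy as np

def matching_filter(data, model, nf=11):
    """Least-squares matching filter: the prediction A @ f that best
    matches `data` with shifted copies of `model` (convolution matrix A)."""
    n = len(data)
    cols = [np.roll(np.pad(model, (0, nf)), k)[:n] for k in range(nf)]
    A = np.stack(cols, axis=1)
    f, *_ = np.linalg.lstsq(A, data, rcond=None)
    return A @ f

def iterative_subtract(data, model, n_iter=3, nf=11):
    """Iterative adaptive subtraction: re-estimate the matching filter
    on the current residual and subtract again."""
    resid = data.copy()
    for _ in range(n_iter):
        resid = resid - matching_filter(resid, model, nf)
    return resid
```

When the true multiple is the model convolved with a short unknown wavelet and the primaries do not overlap the multiples, the filter absorbs the wavelet and the residual is the primaries.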


Geophysics ◽  
2018 ◽  
Vol 83 (4) ◽  
pp. V243-V252
Author(s):  
Wail A. Mousa

A stable explicit depth wavefield extrapolation is obtained using [Formula: see text] iterative reweighted least-squares (IRLS) frequency-space ([Formula: see text]-[Formula: see text]) finite-impulse-response digital filters. The problem of designing such filters to obtain stable images of challenging seismic data is formulated as an [Formula: see text] IRLS minimization. Prestack depth imaging of the challenging Marmousi model data set was then performed using explicit depth wavefield extrapolation with the proposed [Formula: see text] IRLS-based algorithm. Owing to its greater extrapolation-filter design accuracy, the [Formula: see text] IRLS minimization method produced an image of higher quality than the weighted least-squares method. The method can, therefore, be used to design high-accuracy extrapolation filters.
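The IRLS scheme underlying this kind of filter design can be sketched generically: minimize an l_p misfit by repeatedly solving weighted least-squares problems with weights derived from the current residual. `irls_lp` and the outlier test below are an illustrative assumption, not the paper's filter-design code, but they show the mechanism and why small p yields robustness.

```python
import numpy as np

def irls_lp(A, b, p=1.1, n_iter=50, eps=1e-6):
    """Iteratively reweighted least squares for min_x ||A x - b||_p:
    each pass solves the weighted normal equations (A^T W A) x = A^T W b
    with weights w_i = |r_i|^(p-2) from the current residual r."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # start from the L2 solution
    for _ in range(n_iter):
        r = A @ x - b
        w = (np.abs(r) + eps) ** (p - 2)       # eps avoids division by zero
        x = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * b))
    return x
```

On an overdetermined system with a few gross outliers, the l_p solution (p near 1) stays close to the true coefficients while the plain least-squares solution is pulled away.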


Geophysics ◽  
2010 ◽  
Vol 75 (6) ◽  
pp. WB203-WB210 ◽  
Author(s):  
Gilles Hennenfent ◽  
Lloyd Fenelon ◽  
Felix J. Herrmann

We extend our earlier work on the nonequispaced fast discrete curvelet transform (NFDCT) and introduce a second generation of the transform. This new generation differs from the previous one in the approach taken to compute accurate curvelet coefficients from irregularly sampled data. The first generation relies on accurate Fourier coefficients obtained by an [Formula: see text]-regularized inversion of the nonequispaced fast Fourier transform (FFT), whereas the second is based on a direct [Formula: see text]-regularized inversion of the operator that links curvelet coefficients to irregular data. Also, by construction, the second-generation NFDCT is lossless, unlike the first generation. This property is particularly attractive for processing irregularly sampled seismic data in the curvelet domain and bringing them back to their irregular recording locations with high fidelity. Second, we combine the second-generation NFDCT with the standard fast discrete curvelet transform (FDCT) to form a new curvelet-based method, coined nonequispaced curvelet reconstruction with sparsity-promoting inversion (NCRSI), for the regularization and interpolation of irregularly sampled data. We demonstrate that for a pure regularization problem the reconstruction is very accurate. The signal-to-reconstruction-error ratio in our example is above [Formula: see text]. We also conduct combined interpolation and regularization experiments. The reconstructions for synthetic data are accurate, particularly when the recording locations are optimally jittered. The reconstruction in our real data example shows amplitudes along the main wavefronts smoothly varying with limited acquisition imprint.
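The "optimally jittered" sampling mentioned at the end is easy to sketch: keep one trace per bin of the decimation factor at a random position inside the bin. Compared with purely random sampling this bounds the largest gap, which is what makes such acquisitions favorable for sparsity-promoting reconstruction. `jittered_mask` is an illustrative helper, not code from the paper.

```python
import numpy as np

def jittered_mask(n, factor=4, seed=0):
    """Jittered undersampling mask: keep one sample per bin of `factor`
    grid points, chosen uniformly at random inside each bin, so the
    largest gap is at most 2 * factor - 1."""
    rng = np.random.default_rng(seed)
    keep = np.arange(0, n, factor) + rng.integers(0, factor, size=n // factor)
    mask = np.zeros(n, dtype=bool)
    mask[keep] = True
    return mask
```

The gap bound follows because the worst case is a sample at the start of one bin and at the end of the next.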


Geophysics ◽  
2012 ◽  
Vol 77 (2) ◽  
pp. V71-V80 ◽  
Author(s):  
Mostafa Naghizadeh

I introduce a unified approach for denoising and interpolation of seismic data in the frequency-wavenumber ([Formula: see text]) domain. First, an angular search in the [Formula: see text] domain is carried out to identify a sparse number of dominant dips, not only using low frequencies but over the whole frequency range. Then, an angular mask function is designed based on the identified dominant dips. The mask function is utilized with the least-squares fitting principle for optimal denoising or interpolation of data. The least-squares fit is directly applied in the time-space domain. The proposed method can be used to interpolate regularly sampled data as well as randomly sampled data on a regular grid. Synthetic and real data examples are provided to examine the performance of the proposed method.
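A compact illustration of f-k-domain masking for denoising. The paper designs an angular mask from identified dominant dips and applies least-squares fitting in the time-space domain; the amplitude-threshold mask below is only a crude stand-in for that angular mask (`fk_mask_denoise` and `keep` are illustrative assumptions), but it shows why a dipping event, which concentrates in a few f-k coefficients, survives masking while incoherent noise does not.

```python
import numpy as np

def fk_mask_denoise(d, keep=0.02):
    """Denoise a t-x panel by masking in the f-k domain: transform,
    keep only the strongest fraction `keep` of coefficients, and
    transform back."""
    D = np.fft.fft2(d)
    thresh = np.quantile(np.abs(D), 1.0 - keep)
    return np.real(np.fft.ifft2(D * (np.abs(D) >= thresh)))
```

For a single dipping event in white noise, the masked reconstruction halves the error of the noisy input.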


2020 ◽  
Vol 2 (2) ◽  
pp. 1-28
Author(s):  
Tao Li ◽  
Cheng Meng

Subsampling methods aim to select a subsample as a surrogate for the observed sample. As a powerful technique for large-scale data analysis, various subsampling methods have been developed for more effective coefficient estimation and model prediction. This review presents some cutting-edge subsampling methods based on large-scale least-squares estimation. Two major families of subsampling methods are introduced: the randomized subsampling approach and the optimal subsampling approach. The former aims to develop a more effective data-dependent sampling probability, while the latter aims to select a deterministic subsample in accordance with certain optimality criteria. Real data examples are provided to compare these methods empirically with respect to both estimation accuracy and computing time.
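The randomized family can be sketched with the classic algorithmic-leveraging estimator: sample rows with probability proportional to their leverage scores, then solve a reweighted least-squares problem on the subsample. `leverage_subsample_ols` is an illustrative sketch of this standard scheme, not code from the review.

```python
import numpy as np

def leverage_subsample_ols(X, y, r, seed=0):
    """Randomized subsampling for least squares: draw r rows with
    probability proportional to their leverage scores, then solve a
    reweighted LS problem on the subsample."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(X)
    lev = np.sum(Q ** 2, axis=1)          # leverage scores
    p = lev / lev.sum()
    idx = rng.choice(len(y), size=r, replace=True, p=p)
    w = 1.0 / np.sqrt(r * p[idx])         # importance-sampling reweighting
    beta, *_ = np.linalg.lstsq(w[:, None] * X[idx], w * y[idx], rcond=None)
    return beta
```

On 5000 observations, a 500-row leverage subsample already recovers the regression coefficients closely, at a tenth of the solve size.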


Geophysics ◽  
2009 ◽  
Vol 74 (3) ◽  
pp. R15-R24 ◽  
Author(s):  
Taeyoung Ha ◽  
Wookeen Chung ◽  
Changsoo Shin

Waveform inversion faces difficulties when applied to real seismic data, among them the presence of many kinds of noise. The [Formula: see text]-norm is more robust to noise with outliers than the least-squares method. Nevertheless, the least-squares method is preferred as an objective function in many algorithms because the gradient of the [Formula: see text]-norm has a singularity when the residual becomes zero. We propose a complex-valued Huber function for frequency-domain waveform inversion that combines the [Formula: see text]-norm (for small residuals) with the [Formula: see text]-norm (for large residuals), and we derive a discretized formula for its gradient. Through numerical tests on simple synthetic models and Marmousi data, we find that the Huber function is more robust to outliers and coherent noise. We apply our waveform-inversion algorithm to field data from the continental shelf under the East Sea in Korea and obtain a velocity model whose synthetic shot profiles are similar to the real seismic data.
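The Huber misfit for complex residuals can be written down directly: quadratic inside a threshold, linear beyond it, with a gradient whose magnitude is capped and which, unlike the plain l1 gradient, stays continuous through zero residual. The function below is a generic sketch of this standard definition, not the paper's discretized formula.

```python
import numpy as np

def huber_value_grad(r, delta=1.0):
    """Huber misfit and gradient for complex residuals r:
    0.5|r|^2 for |r| <= delta, delta|r| - 0.5 delta^2 otherwise.
    The gradient equals r in the quadratic zone and delta * r/|r|
    (magnitude capped at delta) in the linear zone."""
    a = np.abs(r)
    small = a <= delta
    val = np.where(small, 0.5 * a ** 2, delta * a - 0.5 * delta ** 2).sum()
    grad = np.where(small, r, delta * r / np.maximum(a, 1e-300))
    return val, grad
```

The cap on the gradient magnitude is what limits the pull of outlier residuals during the inversion.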


Geophysics ◽  
1992 ◽  
Vol 57 (12) ◽  
pp. 1623-1632 ◽  
Author(s):  
Richard E. Duren ◽  
Stan V. Morris

Null steering refers to the removal (or zeroing) of interferences at specified dips by creating receiving patterns with nulls aligned on the interferences. This type of beamforming is more effective than forming a simple crossline array and can be applied to both multistreamer and swath data to reduce out-of-plane interferences (sideswipe, boat interference, etc.) that corrupt two-dimensional (2-D) data (the desired signal). Many beamforming techniques lead to signal cancellation when the interferences are correlated with the desired signal; however, a beamforming technique has been developed that remains effective in the presence of signal-correlated interferences. The technique can be effectively extended to prestack and poststack seismic data. The number of interferences and their dips are identified by a visual examination of the plotted data, and this information is used to design filters that are applied to the total data set. The resulting 2-D data set is free from the crossline interferences, with the inline 2-D data remaining unaltered. Model and real data comparisons between null steering and simple crossline array summation show that: (1) null steering significantly attenuates crossline interference, and (2) 2-D inline data, masked by sideswipe, can be revealed once the sideswipe is attenuated by null steering. The real data examples show the identification and effective attenuation of interferences that could easily be interpreted as inline 2-D data: (1) an apparent steeply dipping event, and (2) an apparent flat “bright spot.”
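The basic null-steering design can be sketched as a constrained weight problem: unit response at the look direction, exact zeros at the identified interference dips. `null_steer_weights`, the angle parameterization, and the minimum-norm solution are illustrative assumptions, not the authors' filter design.

```python
import numpy as np

def null_steer_weights(n_sens, look, nulls, spacing=0.5):
    """Array weights with unit response at the look direction and exact
    nulls at the interfering dips.  `look` and `nulls` are sines of
    arrival angles; `spacing` is element spacing in wavelengths.
    Returns the minimum-norm weights satisfying the constraints."""
    def steer(s):
        return np.exp(2j * np.pi * spacing * np.arange(n_sens) * s)
    C = np.stack([steer(look)] + [steer(s) for s in nulls]).conj()
    g = np.zeros(len(nulls) + 1, dtype=complex)
    g[0] = 1.0
    return np.linalg.pinv(C) @ g   # min-norm solution of C w = g
```

Because the constraints are enforced exactly, interferences arriving at the specified dips are zeroed regardless of their correlation with the desired signal.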

