Stochastic sparse-spike deconvolution

Geophysics ◽  
2008 ◽  
Vol 73 (1) ◽  
pp. R1-R9 ◽  
Author(s):  
Danilo R. Velis

Sparse-spike deconvolution can be viewed as an inverse problem where the locations and amplitudes of a number of spikes (reflectivity) are estimated from noisy data (seismic traces). The main objective is to find the least number of spikes that, when convolved with the available band-limited seismic wavelet estimate, fit the data within a given tolerance error (misfit). The detection of the spikes’ time lags is a highly nonlinear optimization problem that can be solved using very fast simulated annealing (SA). Amplitudes are easily estimated using linear least squares at each SA iteration. At this stage, quadratic regularization is used to stabilize the solution, to reduce its nonuniqueness, and to provide meaningful reflectivity sequences, thus avoiding the need to constrain the spikes’ time lags and/or amplitudes to force valid solutions. Impedance constraints also can be included at this stage, providing the low frequencies required to recover the acoustic impedance. One advantage of the proposed method over other sparse-spike deconvolution techniques is that the uncertainty of the obtained solutions can be estimated stochastically. Further, errors in the phase of the wavelet estimate are tolerated, for an optimum constant-phase shift is obtained to calibrate the effective wavelet that is present in the data. Results using synthetic data (including simulated data for the Marmousi2 model) and field 3D data show that physically meaningful high-resolution sparse-spike sections can be derived from band-limited noisy data, even when the available wavelet estimate is inaccurate.
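The two-stage structure described above, a nonlinear search over spike time lags wrapped around a linear least-squares amplitude step, can be illustrated in isolation. The NumPy sketch below (a hypothetical Ricker wavelet stands in for the paper's band-limited wavelet estimate; all lags and amplitudes are illustrative) implements only the regularized least-squares amplitude step for a fixed set of lags; in the full method this step would run inside each very fast simulated annealing iteration.

```python
import numpy as np

def ricker(f, dt, n):
    """Zero-phase Ricker wavelet of peak frequency f (Hz) -- a hypothetical
    stand-in for the band-limited wavelet estimate assumed by the method."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def amplitudes_ls(trace, wavelet, lags, mu=1e-3):
    """Regularized least-squares spike amplitudes for a fixed set of lags --
    the linear step performed at each simulated-annealing iteration."""
    n = len(trace)
    G = np.zeros((n, len(lags)))
    for j, k in enumerate(lags):
        seg = wavelet[: n - k]
        G[k : k + len(seg), j] = seg
    # Quadratic (ridge) regularization stabilizes the solve, as in the paper.
    amps = np.linalg.solve(G.T @ G + mu * np.eye(len(lags)), G.T @ trace)
    return amps, G @ amps

# Synthetic trace from three spikes; with the lags known, the linear step
# recovers the amplitudes (the SA search over lags is not reproduced here).
n = 128
w = ricker(30.0, 0.004, 31)
lags, true_amps = [20, 55, 90], [1.0, -0.7, 0.5]
trace = np.zeros(n)
for k, a in zip(lags, true_amps):
    trace[k : k + len(w)] += a * w
amps, pred = amplitudes_ls(trace, w, lags)
```

For well-separated spikes the regularized solve returns amplitudes within a fraction of a percent of the true values, and the predicted trace fits the data to the same tolerance.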

Geophysics ◽  
1985 ◽  
Vol 50 (9) ◽  
pp. 1410-1425 ◽  
Author(s):  
C. J. Tsai

A common problem in interpreting marine seismic data is the interference of water‐bottom multiples with primary reflections containing the structural or stratigraphic information. In deep‐water areas, where considerable primary energy arrives before the first simple water‐bottom multiple, weak and deep crustal reflections are often obscured by the first‐order water‐bottom multiples. To obtain a more interpretable section, a two‐step technique was developed to suppress the first‐order water‐bottom multiples. First, the relation between the zero‐order water‐bottom primary and its first‐order simple water‐bottom multiple is used to derive statistically an inverse of the seismic wavelet and thereby remove its effect, i.e., to wavelet‐shape the data. This wavelet processing provides a band‐limited estimate of the subsurface impulse response. The second step uses the autoconvolution of the wavelet‐shaped primary energy to estimate deterministically, and then subtract, the actual first‐order water‐bottom multiples. The method was applied to field data from the deep Gulf of Mexico. Different incidence angles for the input primaries and multiples, as well as dipping reflecting interfaces, introduce uncompensated traveltime errors. These errors reduce the ability to suppress multiples, restricting the validity of the method to low frequencies, where common‐depth‐point stacking is less effective. Curved interfaces may also cause amplitude‐prediction problems. In spite of this, the first‐order water‐bottom multiple energy is significantly reduced (by up to 18 dB) on dip‐filtered, single‐channel data.
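The second, deterministic step can be illustrated with an idealized spike-domain toy: the water-bottom primary spawns a first-order multiple delayed by the water-bottom traveltime and flipped in polarity by the free surface, and the autoconvolution of the primary predicts an event at exactly that summed traveltime. The NumPy sketch below (all times and amplitudes are hypothetical, and only the single water-bottom event is modeled; the traveltime and amplitude errors discussed above are ignored) shows the predict-and-subtract idea.

```python
import numpy as np

n, t_w, r_wb = 64, 10, 0.4      # trace length, water-bottom time (samples), reflectivity
primary = np.zeros(n)
primary[t_w] = r_wb             # zero-order water-bottom primary

# First-order simple water-bottom multiple: delayed by t_w, polarity flipped
# by the free-surface reflection.
trace = primary.copy()
trace[2 * t_w] += -r_wb * r_wb

# Autoconvolution of the primary predicts an event at the summed traveltime
# 2*t_w with amplitude r_wb**2; negating it models the free-surface polarity
# flip, and subtraction removes the multiple while leaving the primary intact.
predicted = -np.convolve(primary, primary)[:n]
demultipled = trace - predicted
```

In this noise-free toy the multiple cancels exactly at 2·t_w and the primary at t_w is untouched; on field data the uncompensated traveltime and amplitude errors described in the abstract limit the cancellation.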


Geophysics ◽  
1999 ◽  
Vol 64 (4) ◽  
pp. 1108-1115 ◽  
Author(s):  
Warren T. Wood

Estimates of the source wavelet and band‐limited earth reflectivity are obtained simultaneously from an optimization of deconvolution outputs, similar to minimum‐entropy deconvolution (MED). The only inputs required beyond the observed seismogram are wavelet length and an inversion parameter (cooling rate). The objective function to be minimized is a measure of the spikiness of the deconvolved seismogram. I assume that the wavelet whose deconvolution from the data results in the most spike‐like trace is the best wavelet estimate. Because this is a highly nonlinear problem, simulated annealing is used to solve it. The procedure yields excellent results on synthetic data and disparate field data sets, is robust in the presence of noise, and is fast enough to operate in a desktop computer environment.
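The "spikiness" objective can be made concrete. A classic choice in minimum-entropy deconvolution is Wiggins' varimax norm (the paper's exact measure may differ); the sketch below shows it rewarding a single spike over a flat trace, which is the property a simulated-annealing search over candidate wavelets can exploit.

```python
import numpy as np

def varimax(x):
    """Wiggins varimax norm: N * sum(x^4) / (sum(x^2))^2.
    Scale-invariant; equals N for a single spike and 1 for a flat sequence,
    so maximizing it drives the deconvolved trace toward spikiness."""
    x = np.asarray(x, dtype=float)
    return len(x) * np.sum(x**4) / np.sum(x**2) ** 2

spiky = np.zeros(100); spiky[40] = 2.0   # one isolated spike
flat = np.ones(100)                      # perfectly non-spiky trace
```

In an MED-style scheme, the deconvolution output whose varimax is largest identifies the best wavelet estimate.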


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 674 ◽  
Author(s):  
Kushani De Silva ◽  
Carlo Cafaro ◽  
Adom Giffin

Attaining reliable gradient profiles is of utmost relevance for many physical systems. In many situations, the estimation of the gradient is inaccurate due to noise. It is common practice to first estimate the underlying system and then compute the gradient profile by taking the subsequent analytic derivative of the estimated system. The underlying system is often estimated by fitting or smoothing the data using other techniques. Taking the subsequent analytic derivative of an estimated function can be ill-posed. This becomes worse as the noise in the system increases. As a result, the uncertainty generated in the gradient estimate increases. In this paper, a theoretical framework for a method to estimate the gradient profile of discrete noisy data is presented. The method was developed within a Bayesian framework. Comprehensive numerical experiments were conducted on synthetic data at different levels of noise. The accuracy of the proposed method was quantified. Our findings suggest that the proposed gradient profile estimation method outperforms the state-of-the-art methods.
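The ill-posedness described above is easy to reproduce: direct differencing of noisy samples amplifies the noise by roughly 1/Δt. A common (non-Bayesian) remedy, sketched below in plain NumPy with arbitrary test signals, is to fit a local polynomial and differentiate the fit; the paper's Bayesian framework addresses the same problem with a principled treatment of the uncertainty.

```python
import numpy as np

def smoothed_gradient(t, y, window=11, deg=2):
    """Estimate dy/dt by fitting a local polynomial in a sliding window and
    differentiating the fit -- a simple regularized alternative to direct
    differencing (the paper's Bayesian method is not reproduced here)."""
    half = window // 2
    g = np.empty_like(y)
    for i in range(len(y)):
        lo, hi = max(0, i - half), min(len(y), i + half + 1)
        c = np.polyfit(t[lo:hi], y[lo:hi], deg)   # local polynomial fit
        g[i] = np.polyval(np.polyder(c), t[i])    # analytic derivative of the fit
    return g

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0 * np.pi, 400)
y = np.sin(t) + 0.05 * rng.standard_normal(t.size)   # noisy samples of sin(t)
g = smoothed_gradient(t, y)       # smoothed estimate of cos(t)
naive = np.gradient(y, t)         # direct differencing amplifies the noise
```

On this example the direct difference has an RMS error an order of magnitude larger than the windowed-fit estimate, which is exactly the degradation with noise level that the abstract describes.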


Geophysics ◽  
2018 ◽  
Vol 83 (2) ◽  
pp. V61-V71 ◽  
Author(s):  
Stephan Ker ◽  
Yves Le Gonidec

Multiscale seismic attributes based on wavelet transform properties have recently been introduced and successfully applied to identify the geometry of a complex seismic reflector in an elastic medium. We extend this quantitative approach to anelastic media, where intrinsic attenuation modifies the seismic attributes and thus requires specific processing to retrieve them properly. The method assumes attenuation linearly dependent on the seismic wave frequency and a seismic source wavelet approximated by a Gaussian derivative function (GDF). We highlight a quasi-conservation of the Gaussian character of the wavelet during its propagation. We found that this shape can be accurately modeled by a GDF characterized by a fractional integration and a frequency shift of the seismic source, and we establish the relationship between these wavelet parameters and [Formula: see text]. Based on this seismic wavelet modeling, we design a time-varying shaping filter that keeps the shape of the wavelet constant, allowing the wavelet transform properties to be retrieved. Introduced with a homogeneous step-like reflector, the method is first applied to a thin-bed reflector and then to a more realistic synthetic data set based on an in situ acoustic impedance sequence and a high-resolution seismic source. The results clearly highlight the efficiency of the method in accurately restoring the multiscale seismic attributes of complex seismic reflectors in anelastic media by the use of broadband seismic sources.
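A Gaussian derivative function can be generated explicitly via probabilists' Hermite polynomials, since dⁿ/dtⁿ exp(−t²/2σ²) = (−1)ⁿ σ⁻ⁿ Heₙ(t/σ) exp(−t²/2σ²). The NumPy sketch below (order and σ are arbitrary) builds such source wavelets; the paper's further steps, fractional integration, frequency shift, and the time-varying shaping filter, are not reproduced here.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def gdf(t, n, sigma=1.0):
    """n-th order Gaussian derivative function:
    d^n/dt^n exp(-t^2/(2 sigma^2))
      = (-1)^n * sigma^(-n) * He_n(t/sigma) * exp(-t^2/(2 sigma^2)),
    with He_n the probabilists' Hermite polynomial."""
    u = np.asarray(t, dtype=float) / sigma
    coef = np.zeros(n + 1)
    coef[n] = 1.0                       # select He_n in the Hermite series
    return (-1.0) ** n * sigma ** (-n) * hermeval(u, coef) * np.exp(-0.5 * u**2)

# On a fine grid, each order's analytic form matches the numerical
# derivative of the previous order.
t = np.arange(-6.0, 6.0, 0.001)
g2, g3 = gdf(t, 2), gdf(t, 3)
```

Because each order is the exact derivative of the one below, a family of GDFs of increasing order provides the dilated/differentiated wavelets that the multiscale attribute analysis relies on.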


Geophysics ◽  
2008 ◽  
Vol 73 (5) ◽  
pp. V37-V46 ◽  
Author(s):  
Mirko van der Baan ◽  
Dinh-Tuan Pham

Robust blind deconvolution is a challenging problem, particularly if the bandwidth of the seismic wavelet is narrow to very narrow; that is, if the wavelet bandwidth is similar to its principal frequency. The main problem is to estimate the phase of the wavelet with sufficient accuracy. The mutual information rate is a general-purpose criterion to measure whiteness using statistics of all orders. We modified this criterion to measure robustly the amplitude and phase spectrum of the wavelet in the presence of noise. No minimum phase assumptions were made. After wavelet estimation, we obtained an optimal deconvolution output using Wiener filtering. The new procedure performs well, even for very band-limited data; and it produces frequency-dependent phase estimates.
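The final Wiener-filtering step can be sketched on its own. Given a wavelet estimate (here a hypothetical zero-phase Ricker, not the mutual-information-rate estimate of the paper), frequency-domain Wiener deconvolution divides out the wavelet spectrum with a white-noise damping term:

```python
import numpy as np

def ricker(f, dt, n):
    """Zero-phase Ricker wavelet -- a stand-in for an estimated wavelet."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def wiener_deconv(trace, wavelet, eps=1e-2):
    """Frequency-domain Wiener deconvolution: multiply by the conjugate
    wavelet spectrum and damp the division with a prewhitening term eps."""
    n = len(trace)
    W = np.fft.rfft(wavelet, n)
    D = np.fft.rfft(trace)
    return np.fft.irfft(D * np.conj(W) / (np.abs(W) ** 2 + eps), n)

# Single-spike reflectivity convolved with the wavelet; deconvolution should
# restore a band-limited spike at the original position.
n = 128
w = ricker(30.0, 0.004, 31)
refl = np.zeros(n); refl[30] = 1.0
trace = np.convolve(refl, w)[:n]
out = wiener_deconv(trace, w)
```

The output is the reflectivity filtered by |W|²/(|W|²+ε), i.e., a band-limited spike centered on the true lag; with narrow-band data the quality of this step hinges entirely on the accuracy of the wavelet's amplitude and phase spectra, which is the estimation problem the paper addresses.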


Geophysics ◽  
2019 ◽  
Vol 84 (2) ◽  
pp. N15-N27 ◽  
Author(s):  
Carlos A. M. Assis ◽  
Henrique B. Santos ◽  
Jörg Schleicher

Acoustic impedance (AI) is a widely used seismic attribute in stratigraphic interpretation. Because of the frequency-band-limited nature of seismic data, seismic amplitude inversion cannot determine AI itself, but it can only provide an estimate of its variations, the relative AI (RAI). We have revisited and compared two alternative methods to transform stacked seismic data into RAI. One is colored inversion (CI), which requires well-log information, and the other is linear inversion (LI), which requires knowledge of the seismic source wavelet. We start by formulating the two approaches in a theoretically comparable manner. This allows us to conclude that both procedures are theoretically equivalent. We proceed to check whether the use of the CI results as the initial solution for LI can improve the RAI estimation. In our experiments, combining CI and LI cannot provide superior RAI results to those produced by each approach applied individually. Then, we analyze the LI performance with two distinct solvers for the associated linear system. Moreover, we investigate the sensitivity of both methods regarding the frequency content present in synthetic data. The numerical tests using the Marmousi2 model demonstrate that the CI and LI techniques can provide an RAI estimate of similar accuracy. A field-data example confirms the analysis using synthetic-data experiments. Our investigations confirm the theoretical and practical similarities of CI and LI regardless of the numerical strategy used in LI. An important result of our tests is that an increase in the low-frequency gap in the data leads to slightly deteriorated CI quality. In this case, LI required more iterations for the conjugate-gradient least-squares solver, but the final results were not much affected. Both methodologies provided interesting RAI profiles compared with well-log data, at low computational cost and with a simple parameterization.
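Both CI and LI ultimately exploit the same small-contrast relationship between reflectivity and impedance: ln Z is approximately a running sum of the reflectivity, which is also why band-limited data can only yield *relative* AI. A minimal NumPy illustration of that identity on an idealized full-band, two-layer model (all numbers hypothetical):

```python
import numpy as np

def rai_from_reflectivity(r, z0=1.0):
    """Small-contrast recursion: ln Z[k+1] ~ ln Z[k] + 2*r[k], so impedance is
    recovered, up to the unknown constant z0, from a running sum of r.
    With band-limited r the low frequencies (absolute trend) are lost, hence
    'relative' acoustic impedance."""
    return z0 * np.exp(2.0 * np.cumsum(r))

# Two-layer model: impedance steps from 1.0 to 1.2 at sample 50.
z = np.where(np.arange(100) < 50, 1.0, 1.2)
r = np.zeros(100)
r[49] = (z[50] - z[49]) / (z[50] + z[49])   # normal-incidence reflection coefficient
zr = rai_from_reflectivity(r)
```

For the 20% contrast used here the exponentiated running sum recovers the impedance ratio to better than 0.1%; larger contrasts degrade the small-reflectivity approximation.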


Energies ◽  
2020 ◽  
Vol 13 (3) ◽  
pp. 609 ◽  
Author(s):  
Sondes Gharsellaoui ◽  
Majdi Mansouri ◽  
Shady S. Refaat ◽  
Haitham Abu-Rub ◽  
Hassani Messaoud

Fault Detection and Isolation (FDI) in Heating, Ventilation, and Air Conditioning (HVAC) systems is an important approach to guaranteeing human safety in these systems. The implementation of an FDI framework is therefore required to reduce the energy needs of buildings and improve indoor environmental quality. The main goal of this paper is to merge the benefits of multiscale representation, Principal Component Analysis (PCA), and Machine Learning (ML) classifiers to improve the efficiency of fault detection and isolation in Air Conditioning (AC) systems. First, multivariate statistical feature extraction and selection are performed using the PCA method. Then, the multiscale representation is applied to separate features from noise and approximately decorrelate the available measurements. Finally, the extracted and selected features are fed to several machine learning classifiers for fault classification. The effectiveness and high classification accuracy of the developed Multiscale PCA (MSPCA)-based ML technique are demonstrated using two examples: synthetic data and simulated data extracted from Air Conditioning systems.
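The PCA-plus-classifier stages can be sketched with NumPy alone. The wavelet-based multiscale step and the paper's specific ML classifiers are omitted; the fault pattern, channel counts, and the nearest-centroid classifier below are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_fit(X, k):
    """PCA via SVD of the mean-centered data; returns mean and top-k components."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def pca_scores(X, mu, comps):
    """Project data onto the retained principal components."""
    return (X - mu) @ comps.T

# Toy "sensor" data: normal vs. faulty condition (mean shift on 2 of 10 channels).
normal = rng.standard_normal((200, 10))
faulty = rng.standard_normal((200, 10)); faulty[:, :2] += 3.0
X = np.vstack([normal, faulty])
y = np.array([0] * 200 + [1] * 200)

mu, comps = pca_fit(X, 2)
Z = pca_scores(X, mu, comps)

# Nearest-centroid classifier in the PCA score space -- a minimal stand-in
# for the paper's ML classification stage.
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
acc = (pred == y).mean()
```

Because the fault direction dominates the variance, the top principal components capture it and even this trivial classifier separates the two conditions with high accuracy; the paper's multiscale step would additionally denoise the channels before the PCA.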


2009 ◽  
Vol 27 (7) ◽  
pp. 2799-2811 ◽  
Author(s):  
I. I. Virtanen ◽  
J. Vierinen ◽  
M. S. Lehtinen

Abstract. Both ionospheric and weather radar communities have already adopted the method of transmitting radar pulses in an aperiodic manner when measuring moderately overspread targets. Among the users of the ionospheric radars, this method is called Aperiodic Transmitter Coding (ATC), whereas the weather radar users have adopted the term Simultaneous Multiple Pulse-Repetition Frequency (SMPRF). When probing the ionosphere at the carrier frequencies of the EISCAT Incoherent Scatter Radar facilities, the range extent of the detectable target is typically of the order of one thousand kilometers – about seven milliseconds – whereas the characteristic correlation time of the scattered signal varies from a few milliseconds in the D-region to only tens of microseconds in the F-region. If one is interested in estimating the scattering autocorrelation function (ACF) at time lags shorter than the F-region correlation time, the D-region must be considered as a moderately overspread target, whereas the F-region is a severely overspread one. Given the technical restrictions of the radar hardware, a combination of ATC and phase-coded long pulses is advantageous for this kind of target. We evaluate such an experiment under infinitely low signal-to-noise ratio (SNR) conditions using lag profile inversion. In addition, a qualitative evaluation under high-SNR conditions is performed by analysing simulated data. The results show that an acceptable estimation accuracy and a very good lag resolution in the D-region can be achieved with a pulse length long enough for simultaneous E- and F-region measurements with a reasonable lag extent. The new experiment design is tested with the EISCAT Tromsø VHF (224 MHz) radar. An example of a full D/E/F-region ACF from the test run is shown at the end of the paper.


2020 ◽  
Author(s):  
Nicola Zoppetti ◽  
Simone Ceccherini ◽  
Flavio Barbara ◽  
Samuele Del Bianco ◽  
Marco Gai ◽  
...  

Remote sounding of atmospheric composition makes use of satellite measurements with very heterogeneous characteristics. In particular, the determination of vertical profiles of gases in the atmosphere can be performed using measurements acquired in different spectral bands and with different observation geometries. The most rigorous way to combine heterogeneous measurements of the same quantity in a single Level 2 (L2) product is simultaneous retrieval. The main drawback of simultaneous retrieval is its complexity, due to the necessity of embedding the forward models of different instruments into the same retrieval application. To overcome this shortcoming, we developed a data fusion method, referred to as Complete Data Fusion (CDF), to provide an efficient and adaptable alternative to simultaneous retrieval. In general, the CDF input is any number of profiles retrieved with the optimal estimation technique, characterized by their a priori information, covariance matrix (CM), and averaging kernel (AK) matrix. The output of the CDF is a single product, also characterized by an a priori, a CM, and an AK matrix, which collects all the available information content. To account for the geo-temporal differences and different vertical grids of the profiles being fused, a coincidence error and an interpolation error have to be included in the error budget.

In the first part of the work, the CDF method is applied to ozone profiles simulated in the thermal infrared and ultraviolet bands, according to the specifications of the Sentinel 4 (geostationary) and Sentinel 5 (low Earth orbit) missions of the Copernicus program. The simulated data were produced in the context of the Advanced Ultraviolet Radiation and Ozone Retrieval for Applications (AURORA) project, funded by the European Commission in the framework of the Horizon 2020 program. The use of synthetic data and the assumption of negligible systematic error in the simulated measurements allow studying the behavior of the CDF under ideal conditions. Synthetic data also allow evaluating the performance of the algorithm in terms of differences between the products of interest and the reference truth, represented by the atmospheric scenario used to simulate the L2 products. This analysis aims at demonstrating the potential benefits of the CDF for the synergy of products measured by different platforms in a realistic near-future scenario, when the Sentinel 4 and 5/5P ozone profiles become available.

In the second part of this work, the CDF is applied to a set of real ozone measurements acquired by GOME-2 onboard the MetOp-B platform. The quality of the CDF products, obtained for the first time from operational products, is compared with that of the original GOME-2 products. This aims to demonstrate the concrete applicability of the CDF to real data and its possible use to generate Level-3 (or higher) gridded products.

The results discussed in this presentation offer a first consolidated picture of the actual and potential value of an innovative technique for post-retrieval processing and generation of Level-3 (or higher) products from the atmospheric Sentinel data.
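At its core, combining independently retrieved profiles of the same quantity is an inverse-covariance weighting; the full CDF additionally transports a priori information, averaging kernels, and coincidence/interpolation errors, which this deliberately simplified NumPy sketch omits (all numbers are illustrative):

```python
import numpy as np

def fuse(x1, S1, x2, S2):
    """Maximum-likelihood fusion of two estimates of the same profile via
    inverse-covariance weighting. Simplified: the Complete Data Fusion also
    propagates averaging kernels, a priori terms, and coincidence and
    interpolation errors, all omitted here."""
    P1, P2 = np.linalg.inv(S1), np.linalg.inv(S2)
    Sf = np.linalg.inv(P1 + P2)              # fused covariance
    return Sf @ (P1 @ x1 + P2 @ x2), Sf      # fused profile, fused CM

# Two 3-level profile estimates with different error covariances.
x1, S1 = np.array([1.0, 2.0, 3.0]), np.diag([1.0, 1.0, 1.0])
x2, S2 = np.array([1.2, 1.8, 3.1]), np.diag([0.5, 0.5, 0.5])
xf, Sf = fuse(x1, S1, x2, S2)
```

The fused covariance is smaller than either input covariance, which is the information-gain property that motivates fusing the heterogeneous Sentinel products.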


Geophysics ◽  
1997 ◽  
Vol 62 (6) ◽  
pp. 1939-1946 ◽  
Author(s):  
Eike Rietsch

In this second part of a two‐part work, a more robust algorithm is derived and used for the estimation of the seismic wavelet as the common signal of two or more seismic traces. It is based on the properties of the eigenvectors with zero eigenvalue of a matrix derived in the first part, whose elements are the samples of the autocorrelation functions and crosscorrelation functions of these seismic traces for a number of lags. The noise resistance of this algorithm is illustrated by means of a synthetic‐data example and then demonstrated on field data. In one field‐data example, the so‐called Euclid wavelet is compared with one derived deterministically by means of an impedance log. The other example relates three quite different Euclid wavelets determined in three different time zones on a seismic line to one another by showing that their differences can be explained by absorption.
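The zero-eigenvalue structure the algorithm exploits stems from a cross-relation: two traces sharing one wavelet satisfy x1 ⊛ r2 = x2 ⊛ r1 (both equal w ⊛ r1 ⊛ r2), so a matrix built from their correlations acquires a null space whose eigenvectors encode the common signal. A NumPy check of the underlying identity itself (random wavelet and reflectivities, purely illustrative; the paper's correlation-matrix construction is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.standard_normal(15)      # shared (unknown) wavelet
r1 = rng.standard_normal(40)     # reflectivity under trace 1
r2 = rng.standard_normal(40)     # reflectivity under trace 2
x1, x2 = np.convolve(w, r1), np.convolve(w, r2)

# Cross-relation: x1*r2 and x2*r1 both equal w*r1*r2 (convolution is
# associative and commutative), so their difference vanishes -- the
# zero-eigenvalue property the eigenvector method builds on.
d = np.convolve(x1, r2) - np.convolve(x2, r1)
```

Up to floating-point rounding the difference is identically zero, and it is this exact linear dependence among the traces' correlation samples that places the wavelet information in the null eigenvectors.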

