Iterative deconvolution using generalized “positivity”

Geophysics ◽  
1989 ◽  
Vol 54 (10) ◽  
pp. 1297-1305
Author(s):  
Scott C. Hornbostel

In some cases a real signal may be known a priori to be always positive. If this positive signal is later band-limited, knowledge of its original positivity can be used to help recover the lost frequencies. Specifically, the frequency-domain values for members of this special class of signals have the interesting property that they are related to each other via the self-convolution of Hermitian functions. This relationship is the basis for some current deconvolution approaches and can be generalized to the case of a signal of arbitrary sign. A steepest-descent formulation in the frequency domain can determine these Hermitian functions while maximizing the fit to the known in-band data and to estimated dc values. This formulation allows for the explicit calculation of the step size and is easily modified to include finite-support or penalty/reward constraints. Tests on simulated data indicate good bandwidth extension for this method, which sometimes even improves the signal-to-noise ratio of the in-band values.
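The value of a positivity constraint for recovering out-of-band frequencies can be illustrated in a much simpler setting than the paper's Hermitian steepest-descent formulation. The sketch below is an alternating-projection analogue of the same idea (not the paper's algorithm): repeatedly enforce the measured in-band spectrum in the frequency domain, then enforce positivity in the time domain. The signal and bin choices are illustrative.

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [(sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                 for k in range(N)) / N).real for n in range(N)]

def extend_bandwidth(known, N, iters=100):
    """known: {bin k: complex DFT value} for the measured in-band bins
    (include conjugate-symmetric bins so the signal stays real)."""
    x = [0.0] * N
    for _ in range(iters):
        X = dft(x)
        for k, v in known.items():
            X[k] = v                        # project onto the measured in-band data
        x = [max(v, 0.0) for v in idft(X)]  # project onto the set of positive signals
    return x
```

The out-of-band bins are free parameters; the positivity projection pulls them toward values consistent with an everywhere-positive signal, which is exactly the information the abstract says the lost frequencies can be recovered from.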

2021 ◽  
Author(s):  
Rajive Kumar ◽  
T Al-Mutairi ◽  
P Bansal ◽  
Khushboo Havelia ◽  
Faical Ben Amor ◽  
...  

Abstract As Kuwait focuses on developing the deep Jurassic reservoirs, the Gotnia Formation presents significant drilling challenges. It is the regional seal, consisting of alternating Salt and Anhydrite cycles with over-pressured carbonate streaks, which are also targets for future exploration. The objective of this study was to unravel the Gotnia architecture through detailed mapping of the intermediate cycles, mitigating drilling risks and characterizing the carbonate reservoirs. A combination of noise attenuation, bandwidth extension and seismic adaptive wavelet processing (SAWP) was applied to the seismic data to improve the signal-to-noise ratio between 50 Hz and 70 Hz and thereby reveal the Anhydrite cycles, which house the carbonate streaks. The Salt-Anhydrite cycles were correlated, using Triple Combo and Elastic logs, in seventy-six wells, and spatially interpreted on the band-limited P-impedance volume generated through pre-stack inversion. Pinched-out cycles were identified by integrating mud logs with seismic data and depositional trends. Pre-stack stochastic inversion was performed to map the thin carbonate streaks and characterize the carbonate reservoirs. The improved seismic resolution gave superior results compared with the legacy cube and enhanced the reflector continuity of the Salt-Anhydrite cycles. Consistent with the well data, three cycles of alternating salt and anhydrite, with varying thickness, were mapped. These cycles showed a distinctive impedance contrast and were noticeably more visible on the P-impedance volume than on the seismic amplitude volume. The second Anhydrite cycle was missing in some wells, and the lateral extent of the pinch-outs was interpreted and validated based on the P-impedance volume. As the carbonate streaks were beyond the seismic resolution, they were not visible on the deterministic P-impedance.
The abundance of thin carbonate streaks within the Anhydrite cycles could be qualitatively assessed from the impedance values of the entire zone. Areas within the zone containing more numerous and more porous carbonate streaks showed lower overall impedance values in the Anhydrite zones and could pose drilling risks. This information was used to guide the pre-stack stochastic inversion to populate the thin carbonate streaks and generate a high-resolution facies volume through Bayesian classification. Through this study, the expected cycles and over-pressured carbonate layers in the Gotnia Formation were predicted, which can be used to plan and manage drilling risks and reduce operational costs. This study presents an integrated and iterative approach to interpretation, in which well log analysis, seismic inversion and horizon interpretation were carried out in parallel to develop a better understanding of the subsurface. This workflow will be especially useful for the interpretation of over-pressured overburden zones or cap rocks, where the available log data can be limited.


2006 ◽  
Vol 5-6 ◽  
pp. 285-294 ◽  
Author(s):  
D. Hickey ◽  
M. Haroon ◽  
Douglas E. Adams ◽  
Keith Worden

In real mechanical systems it is certain that non-linear behaviour will be present at a range of frequencies and amplitudes. It is not, however, always possible to have a priori knowledge of the input to a system. A method has been developed by Adams to allow the experimental engineer to overcome such problems. The technique is addressed in this paper and applied to both simulated and experimental data. The method makes use of time-domain characterisation via work and characteristic diagrams, as well as the frequency-domain approach of non-linear identification through feedback of the outputs (NIFO). This paper uses these time- and frequency-domain techniques to locate, characterise and quantify non-linear behaviour in both simulated and experimental data. Simulated data are obtained from a quarter-car model and real data from an experimental rig; the data are taken at a variety of frequencies and amplitudes, and the above time- and frequency-domain techniques are then applied.


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Camilo Broc ◽  
Therese Truong ◽  
Benoit Liquet

Abstract Background The increasing number of genome-wide association studies (GWAS) has revealed several loci that are associated with multiple distinct phenotypes, suggesting the existence of pleiotropic effects. Highlighting these cross-phenotype genetic associations could help to identify and understand common biological mechanisms underlying some diseases. Common approaches test the association between genetic variants and multiple traits at the SNP level. In this paper, we propose a novel gene- and pathway-level approach for the case where several independent GWAS on independent traits are available. The method is based on a generalization of sparse group Partial Least Squares (sgPLS) that takes into account groups of variables, with a Lasso penalization linking all independent data sets. This method, called joint-sgPLS, is able to convincingly detect signal at both the variable level and the group level. Results Our method has the advantage of providing a global, readable model while respecting the architecture of the data. It can outperform traditional methods and provides wider insight in terms of a priori information. We compared the performance of the proposed method with benchmark methods on simulated data and gave an example of application to real data, with the aim of highlighting common susceptibility variants to breast and thyroid cancers. Conclusion The joint-sgPLS shows interesting properties for detecting a signal. As an extension of PLS, the method is suited to data with a large number of variables. The Lasso penalization copes with architectures of groups of variables and sets of observations. Furthermore, although the method has been applied to a genetic study, its formulation is suited to any data with a large number of variables and a known a priori group structure in other application fields.
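The group-level selection in sparse-group methods such as sgPLS rests on a block soft-thresholding (proximal) operator that either shrinks an entire group of loadings together or zeroes the whole group out. A minimal sketch of that operator, assuming a Euclidean group penalty (the threshold values below are illustrative, not from the paper):

```python
import math

def group_soft_threshold(v, lam):
    """Block soft-thresholding: the proximal operator behind group-sparse
    penalties. Returns v scaled toward zero; if ||v|| <= lam the whole
    group is set exactly to zero (group-level selection)."""
    norm = math.sqrt(sum(x * x for x in v))
    if norm <= lam:
        return [0.0] * len(v)
    scale = 1.0 - lam / norm
    return [scale * x for x in v]
```

For example, a group of loadings `[3, 4]` (norm 5) is zeroed entirely at `lam = 5` but only shrunk at `lam = 2.5`, which is how a group-sparse penalty discards whole genes or pathways rather than individual SNPs.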


Mathematics ◽  
2021 ◽  
Vol 9 (3) ◽  
pp. 222
Author(s):  
Juan C. Laria ◽  
M. Carmen Aguilera-Morillo ◽  
Enrique Álvarez ◽  
Rosa E. Lillo ◽  
Sara López-Taruella ◽  
...  

Over the last decade, regularized regression methods have offered alternatives for performing multi-marker analysis and feature selection in a whole-genome context. The process of defining the list of genes that will characterize an expression profile remains unclear: it currently relies on advanced statistics and can take an agnostic point of view or include some a priori knowledge, but overfitting remains a problem. This paper introduces a methodology for the variable selection and model estimation problems in the high-dimensional setting, which can be particularly useful in the whole-genome context. Results are validated using simulated data and a real dataset from a triple-negative breast cancer study.


2021 ◽  
Vol 4 (1) ◽  
pp. 251524592095492
Author(s):  
Marco Del Giudice ◽  
Steven W. Gangestad

Decisions made by researchers while analyzing data (e.g., how to measure variables, how to handle outliers) are sometimes arbitrary, without an objective justification for choosing one alternative over another. Multiverse-style methods (e.g., specification curve, vibration of effects) estimate an effect across an entire set of possible specifications to expose the impact of hidden degrees of freedom and/or obtain robust, less biased estimates of the effect of interest. However, if specifications are not truly arbitrary, multiverse-style analyses can produce misleading results, potentially hiding meaningful effects within a mass of poorly justified alternatives. So far, a key question has received scant attention: How does one decide whether alternatives are arbitrary? We offer a framework and conceptual tools for doing so. We discuss three kinds of a priori nonequivalence among alternatives—measurement nonequivalence, effect nonequivalence, and power/precision nonequivalence. The criteria we review lead to three decision scenarios: Type E decisions (principled equivalence), Type N decisions (principled nonequivalence), and Type U decisions (uncertainty). In uncertain scenarios, multiverse-style analysis should be conducted in a deliberately exploratory fashion. The framework is discussed with reference to published examples and illustrated with the help of a simulated data set. Our framework will help researchers reap the benefits of multiverse-style methods while avoiding their pitfalls.
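A deliberately exploratory multiverse pass of the kind described above can be sketched in a few lines: simulate data with a known effect, then estimate the effect under several outlier-handling specifications and compare. The specifications, cutoffs, and simulated effect size below are illustrative, not drawn from the article.

```python
import random
import statistics

random.seed(0)
# Simulated data: y = 0.5*x + noise, with a few gross outliers added.
x = [random.gauss(0, 1) for _ in range(200)]
y = [0.5 * xi + random.gauss(0, 1) for xi in x]
for i in range(5):
    y[i] += 10.0  # contaminate five observations

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

def drop_outliers(xs, ys, z):
    """Keep only observations whose y lies within z standard deviations."""
    m, s = statistics.mean(ys), statistics.stdev(ys)
    kept = [(a, b) for a, b in zip(xs, ys) if abs(b - m) <= z * s]
    return [a for a, _ in kept], [b for _, b in kept]

# One "specification" per outlier-handling decision.
estimates = {}
for label, z in [("keep all", None), ("drop |z|>3", 3.0), ("drop |z|>2", 2.0)]:
    xs, ys = (x, y) if z is None else drop_outliers(x, y, z)
    estimates[label] = slope(xs, ys)

for label, est in estimates.items():
    print(f"{label}: {est:.3f}")
```

Whether these three specifications are a Type E (principled-equivalence) set or a Type N set depends on substantive arguments about the outliers, which is precisely the judgment the framework asks researchers to make before pooling estimates across the multiverse.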


2021 ◽  
Vol 13 (1) ◽  
pp. 168781402098732
Author(s):  
Ayisha Nayyar ◽  
Ummul Baneen ◽  
Syed Abbas Zilqurnain Naqvi ◽  
Muhammad Ahsan

Localizing small damages often requires that sensors be mounted in the proximity of the damage to obtain a high signal-to-noise ratio in the system's frequency response to input excitation. This proximity requirement limits the applicability of existing schemes for low-severity damage detection, as an estimate of the damage location may not be known a priori. In this work it is shown that spatial locality is not a fundamental impediment; multiple small damages can still be detected with high accuracy provided that the frequency range beyond the first five natural frequencies is utilized in the frequency response function (FRF) curvature method. The proposed method applies sensitivity analysis to systematically unearth frequency ranges capable of elevating the damage index peak at the correct damage locations. It is a baseline-free method that employs a smoothing polynomial to emulate reference curvatures for the undamaged structure. Numerical simulation of a steel beam shows that multiple small damages of severity as low as 5% can be reliably detected by including the frequency range covering the 5th–10th natural frequencies. The efficacy of the scheme is also experimentally validated for the same beam. It is also found that a simple noise filtration scheme, such as a Gaussian moving-average filter, can adequately remove false peaks from the damage index profile.
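The essence of the baseline-free curvature index can be shown on a synthetic mode shape: compute curvature by central differences, emulate the undamaged reference with a smoothing filter, and take the deviation as the damage index. This is a simplified analogue, not the paper's FRF-based scheme: a plain moving average stands in for the smoothing polynomial, and the beam discretization and 2% local shape perturbation are illustrative.

```python
import math

def curvature(y, h=1.0):
    """Central-difference curvature; drops the two end nodes."""
    return [(y[i - 1] - 2 * y[i] + y[i + 1]) / h ** 2
            for i in range(1, len(y) - 1)]

def moving_average(v, w):
    """Simple moving average used as the smooth 'undamaged' reference."""
    half = w // 2
    out = []
    for i in range(len(v)):
        lo, hi = max(0, i - half), min(len(v), i + half + 1)
        out.append(sum(v[lo:hi]) / (hi - lo))
    return out

n = 101
shape = [math.sin(math.pi * i / (n - 1)) for i in range(n)]  # first bending mode
damage_at = 40
for i in range(damage_at - 1, damage_at + 2):
    shape[i] *= 0.98  # small local perturbation mimicking low-severity damage

curv = curvature(shape)
baseline = moving_average(curv, 9)          # smooth reference curvature
index = [abs(c - b) for c, b in zip(curv, baseline)]
peak = index.index(max(index)) + 1          # +1: curvature drops the end nodes
print("damage index peaks near node", peak)
```

Even a 2% local change, invisible in the mode shape itself, produces a sharp spike in the curvature deviation near the damaged nodes, which is why curvature-based indices are sensitive to low-severity damage.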


Geophysics ◽  
2013 ◽  
Vol 78 (6) ◽  
pp. R249-R257 ◽  
Author(s):  
Maokun Li ◽  
James Rickett ◽  
Aria Abubakar

We developed a data calibration scheme for frequency-domain full-waveform inversion (FWI), based on the variable projection technique. With this scheme, the FWI algorithm can incorporate the data calibration procedure into the inversion process without introducing additional unknown parameters. The calibration variable for each frequency is computed as a minimum-norm solution between the measured and simulated data, and this computation is embedded directly in the data-misfit cost function; the inversion algorithm therefore becomes source independent. Moreover, because all data points are considered in the calibration process, the scheme increases the robustness of the algorithm. Numerical tests demonstrated that the FWI algorithm can reconstruct velocity distributions accurately without source-waveform information.
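The per-frequency calibration variable has a closed form: the least-squares complex scale c minimizing ||d − c·s||² between measured data d and simulated data s. Substituting this c back into the misfit is the variable-projection step that eliminates the source term from the inversion. A minimal sketch of that solve (function name and data are illustrative):

```python
def calibrate(measured, simulated):
    """Minimum-norm complex scale c minimizing sum_i |d_i - c*s_i|^2,
    i.e. c = <s, d> / <s, s> with the conjugate inner product."""
    num = sum(s.conjugate() * d for s, d in zip(simulated, measured))
    den = sum(abs(s) ** 2 for s in simulated)
    return num / den
```

In an FWI context, `measured` and `simulated` would be the observed and modeled receiver data for one frequency; applying the returned c before evaluating the misfit makes the cost function insensitive to an unknown source amplitude and phase, since c absorbs them exactly.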


2018 ◽  
Vol 10 (5-6) ◽  
pp. 578-586 ◽  
Author(s):  
Simon Senega ◽  
Ali Nassar ◽  
Stefan Lindenmeier

Abstract For a fast scan-phase satellite radio antenna diversity system, a noise correction method is presented that significantly improves audio availability under low signal-to-noise ratio (SNR) conditions. An error analysis of the level and phase detection within the diversity system in the presence of noise leads to a correction method based on a priori knowledge of the system's noise floor. The method is described and applied in a hardware example of a satellite digital audio radio services antenna diversity circuit for fast-fading conditions. Test drives performed in real fading scenarios are described and their results analyzed statistically. Simulations of the scan-phase antenna diversity system show higher signal amplitudes and availabilities. Measurement results are presented for dislocated antennas as well as for a diversity antenna set at a single mounting position, and a diversity system with noise correction, the same system without noise correction, and a single-antenna system are compared. Using this new method in fast multipath-fading driving scenarios underneath dense foliage, with a low SNR of the antenna signals, the diversity system achieves a reduction in audio mute time of one order of magnitude compared with single-antenna systems.


2021 ◽  
pp. 1-12
Author(s):  
Junqing Ji ◽  
Xiaojia Kong ◽  
Yajing Zhang ◽  
Tongle Xu ◽  
Jing Zhang

The traditional blind source separation (BSS) algorithm is mainly used for signal separation under a noiseless model and does not apply to data with a low signal-to-noise ratio (SNR). To solve this problem, an adaptive variable-step-size natural gradient BSS algorithm based on an improved wavelet threshold is proposed in this paper. First, an improved wavelet threshold method is used to reduce the noise of the signal. Second, the wavelet coefficient layers with obvious periodicity are denoised using a morphological component analysis (MCA) algorithm, and the processed wavelet coefficients are recombined to obtain the ideal model. Third, the recombined signal is pre-whitened, and a new separation-matrix update formula for the natural gradient algorithm is constructed by defining a new separation-degree estimation function. Finally, the adaptive variable-step-size natural gradient algorithm is used to separate the noise-reduced signal. The results show that the algorithm can not only adaptively adjust the step size for different signals but also improve the convergence speed, stability and separation accuracy.
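Two building blocks of such a pipeline can be sketched compactly: soft thresholding of wavelet coefficients for denoising, and one natural-gradient update of the separation matrix, W ← W + μ(I − g(y)yᵀ)W. This sketch uses a fixed step μ and the common tanh score function; the paper's contribution is precisely to adapt the step size and the separation-degree estimate, which is not reproduced here.

```python
import math

def soft_threshold(coeffs, t):
    """Soft wavelet thresholding: shrink each coefficient toward zero by t."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def natural_gradient_step(W, y, mu=0.01):
    """One natural-gradient BSS update W <- W + mu*(I - g(y) y^T) W
    for a single output sample y, with score function g = tanh."""
    n = len(W)
    g = [math.tanh(v) for v in y]
    # M = I - g(y) y^T
    M = [[(1.0 if i == j else 0.0) - g[i] * y[j] for j in range(n)]
         for i in range(n)]
    # W + mu * M @ W
    return [[W[i][j] + mu * sum(M[i][k] * W[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]
```

In a full pipeline the thresholded (denoised) signal would be pre-whitened and then streamed sample by sample through `natural_gradient_step` until W converges, with the step size μ adapted from the current separation-degree estimate rather than held fixed.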

