Minimum entropy deconvolution and simplicity: A noniterative algorithm

Geophysics ◽  
1985 ◽  
Vol 50 (3) ◽  
pp. 394-413 ◽  
Author(s):  
Carlos A. Cabrelli

Minimum entropy deconvolution (MED) is a technique developed by Wiggins (1978) to separate the components of a signal modeled as the convolution of a smooth wavelet with a series of impulses. Its advantage over traditional methods is that it avoids strong hypotheses about the components, requiring only simplicity of the output. The degree of simplicity is measured with the Varimax norm of factor analysis. An iterative algorithm for computing the filter is derived from this norm; its outstanding characteristic is stability in the presence of noise. A geometrical analysis of the Varimax norm suggests the definition of a new simplicity criterion: the D norm. In the case of multiple inputs, the D norm is obtained through a modification of the kurtosis norm. One of the most notable characteristics of the new criterion, compared with the Varimax norm, is that a noniterative algorithm for computing the deconvolution filter can be derived from it. This is significant because the standard MED algorithm frequently requires, in each iteration, the inversion of an autocorrelation matrix whose order is the length of the filter, whereas the new algorithm derived from the D norm requires the inversion of a single matrix. Moreover, numerical tests performed jointly with Graciela A. Canziani show that the new algorithm produces outputs of greater simplicity than those of the traditional MED algorithm. These considerations imply that the D criterion yields a new computational method for minimum entropy deconvolution. A section of numerical examples is included, analyzing the results of an extensive simulation study with synthetic data. In all cases the numerical computations show a remarkable improvement resulting from the use of the D norm, and the examples show that stability in the presence of noise is preserved.
In the case of a single input, the relation between the D norm and the spiking filter is analyzed (Appendix B).
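The Varimax-style simplicity measure the abstract refers to can be sketched numerically. This is only an illustration of how such a norm rewards spiky outputs (the D norm and the filter derivation are in the paper itself), and the two test signals below are hypothetical:

```python
def varimax(y):
    """Varimax-style simplicity norm: sum of fourth powers normalized
    by the squared energy.  Spikier sequences score higher (1.0 for a
    single impulse), smoother sequences score lower."""
    energy = sum(v * v for v in y)
    return sum(v ** 4 for v in y) / (energy * energy)

impulse = [0.0] * 9 + [1.0]   # one spike: maximally "simple" output
smeared = [0.1] * 100         # same total energy spread over 100 samples

print(varimax(impulse))       # 1.0
print(varimax(smeared))       # 0.01
```

A deconvolution filter that drives this norm up therefore pushes its output toward a sparse spike series, which is exactly the "simplicity of the output" requirement described above.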

2021 ◽  
Vol 11 (2) ◽  
pp. 790
Author(s):  
Pablo Venegas ◽  
Rubén Usamentiaga ◽  
Juan Perán ◽  
Idurre Sáez de Ocáriz

Infrared thermography is a widely used technology that has been successfully applied in many and varied applications, including use as a non-destructive testing tool to assess the integrity of materials. The current level of development of this application is high and its effectiveness is widely verified. Application protocols and methodologies exist that have demonstrated a high capacity to extract relevant information from the captured thermal signals and to guarantee the detection of anomalies in the inspected materials. However, there is still room for improvement in certain aspects, such as increasing the detection capacity and defining a detailed procedure for characterizing indications, which must be investigated further to reduce uncertainties and optimize this technology. In this work, an innovative thermographic data analysis methodology is proposed that extracts a greater amount of information from the recorded sequences by applying advanced processing techniques to the results. The extracted information is synthesized into three channels that may be represented through real color images and processed by quaternion algebra techniques to improve the detection level and facilitate the classification of defects. To validate the proposed methodology, synthetic data and actual experimental sequences have been analyzed. Seven different definitions of the signal-to-noise ratio (SNR) have been used to assess the increase in detection capacity, and a generalized application procedure has been proposed to extend their use to color images. The results verify the capacity of this methodology, showing significant increases in SNR compared to conventional processing techniques in thermographic NDT.
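One common way to quantify detection capacity is a contrast-to-noise style SNR. The sketch below uses a single such definition (absolute contrast of defect and sound-area means over the background standard deviation, in dB); the paper compares seven definitions, and the pixel values here are hypothetical:

```python
import math

def snr_db(defect_px, background_px):
    """SNR in dB: absolute contrast between the defect-area and
    sound-area means, normalized by the background standard deviation."""
    mu_d = sum(defect_px) / len(defect_px)
    mu_b = sum(background_px) / len(background_px)
    var_b = sum((v - mu_b) ** 2 for v in background_px) / len(background_px)
    return 20.0 * math.log10(abs(mu_d - mu_b) / math.sqrt(var_b))

background = [0.9, 1.1, 0.9, 1.1]   # sound-area samples (std = 0.1)
defect = [2.0, 2.0, 2.0]            # defect-area samples (mean = 2.0)
print(snr_db(defect, background))   # 20.0 dB
```

Comparing this figure before and after a processing step gives a direct measure of the "increase in detection capacity" the abstract discusses.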


2021 ◽  
Vol 6 (4) ◽  
pp. e005413
Author(s):  
Valeria Raparelli ◽  
Colleen M. Norris ◽  
Uri Bender ◽  
Maria Trinidad Herrero ◽  
Alexandra Kautzky-Willer ◽  
...  

Gender refers to the socially constructed roles, behaviours, expressions and identities of girls, women, boys, men and gender diverse people. Gender-related factors are seldom assessed as determinants of health outcomes, despite their powerful contribution. The Gender Outcomes INternational Group: to Further Well-being Development (GOING-FWD) project developed a standard five-step methodology applicable to retrospectively identify gender-related factors and assess their relationship to outcomes across selected cohorts of non-communicable chronic diseases from Austria, Canada, Spain and Sweden. Step 1 (identification of gender-related variables): Based on the gender framework of the Women Health Research Network (ie, identity, role, relations and institutionalised gender), and the available literature for a certain disease, an optimal ‘wish-list’ of gender-related variables was created and discussed by experts. Step 2 (definition of outcomes): Data dictionaries were screened for clinical and patient-relevant outcomes, using the International Consortium for Health Outcome Measurement framework. Step 3 (building of feasible final list): A cross-validation between variables per database and the ‘wish-list’ was performed. Step 4 (retrospective data harmonisation): The harmonisation potential of variables was evaluated. Step 5 (definition of data structure and analysis): The following analytic strategies were identified: (1) local analysis of data not transferable followed by a meta-analysis combining study-level estimates; (2) centrally performed federated analysis of data, with the individual-level participant data remaining on local servers; (3) synthesising the data locally and performing a pooled analysis on the synthetic data and (4) central analysis of pooled transferable data. The application of the GOING-FWD multistep approach can help guide investigators to analyse gender and its impact on outcomes in previously collected data.


2020 ◽  
Author(s):  
Valeria Raparelli ◽  
Colleen M. Norris ◽  
Uri Bender ◽  
Maria Trinidad Herrero ◽  
Alexandra Kautzky-Willer ◽  
...  

Abstract Background: Gender refers to the socially constructed roles, behaviors, expressions, and identities of girls, women, boys, men, and gender diverse people. It influences self-perception, individual actions and interactions, as well as the distribution of power and resources in society. Gender-related factors are seldom assessed as determinants of health outcomes, despite their powerful contribution. Methods: Investigators of the GOING-FWD project developed a standard methodology, applicable to observational studies, to retrospectively identify gender-related factors and assess their relationship to outcomes, and applied this method to selected cohorts of non-communicable chronic diseases from Austria, Canada, Spain, and Sweden. Results: The following multistep process was applied. Step 1 (Identification of Gender-related Variables): Based on the gender framework of the Women Health Research Network (i.e. gender identity, role, relations, and institutionalized gender), and the available literature for a certain disease, an optimal “wish-list” of gender-related variables/factors was created and discussed by experts. Step 2 (Definition of Outcomes): Each of the cohort data dictionaries was screened for clinical and patient-relevant outcomes, using the ICHOM framework. Step 3 (Building of Feasible Final List): A cross-validation between the gender-related and outcome variables available per database and the “wish-list” was performed. Step 4 (Retrospective Data Harmonization): The harmonization potential of variables was evaluated. 
Step 5 (Definition of Data Structure and Analysis): Depending on the database data structure, the following analytic strategies were identified: (1) local analysis of non-transferable data followed by a meta-analysis combining study-level estimates; (2) centrally performed federated analysis of anonymized data, with the individual-level participant data remaining on local servers; (3) synthesizing the data locally and performing a pooled analysis on the synthetic data; and (4) central analysis of pooled transferable data. Conclusion: The application of the GOING-FWD systematic multistep approach can help guide investigators to analyze gender and its impact on outcomes in previously collected data.


Geophysics ◽  
2021 ◽  
pp. 1-69
Author(s):  
Jie Shao ◽  
Yibo Wang

Quality factor (Q) and reflectivity are two important subsurface properties in seismic data processing and interpretation. They can be calculated simultaneously from a seismic trace corresponding to an anelastic layered model by a simultaneous inversion method based on the nonstationary convolution model. However, the conventional simultaneous inversion method finds the optimum Q and reflectivity by sweeping each Q value within a predefined range and selecting the result whose reflectivity is sparsest. As a result, the accuracy and computational efficiency of the conventional method depend heavily on the predefined Q value set. To improve its performance, we have developed a dictionary learning-based simultaneous inversion of Q and reflectivity. A parametric dictionary learning method is used to update the initial predefined Q value set automatically. The optimum Q and reflectivity are then calculated from the updated Q value set by minimizing not only the sparsity of the reflectivity but also the data residual. Synthetic data and two field data sets were used to test the effectiveness of our method. The results demonstrate that our method effectively improves the accuracy of both parameters compared to the conventional simultaneous inversion method. In addition, the dictionary learning method can improve computational efficiency by up to approximately a factor of seven compared to the conventional method with a large predefined dictionary.
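The sweep-and-select logic of the conventional simultaneous inversion can be illustrated on a deliberately simplified toy model, not the paper's nonstationary convolution model: here the Q-dependent "wavelet" is a pure exponential decay whose exact inverse is the two-tap filter [1, -exp(-1/Q)], sparsity is measured by the l1 norm, and the Q values are hypothetical:

```python
import math

def convolve(a, b):
    """Full linear convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def sweep_q(trace, q_candidates):
    """For each candidate Q, apply the exact inverse [1, -rho] of the
    exponential wavelet rho**n and keep the Q whose output has the
    smallest l1 norm (i.e. the sparsest estimated reflectivity)."""
    best_q, best_l1 = None, float("inf")
    for q in q_candidates:
        rho = math.exp(-1.0 / q)
        y = convolve(trace, [1.0, -rho])
        l1 = sum(abs(v) for v in y)
        if l1 < best_l1:
            best_q, best_l1 = q, l1
    return best_q

# Synthetic trace: sparse reflectivity seen through a Q = 40 decay.
q_true = 40
rho_true = math.exp(-1.0 / q_true)
wavelet = [rho_true ** n for n in range(300)]
reflectivity = [0.0] * 120
reflectivity[10], reflectivity[60] = 1.0, -0.8
trace = convolve(reflectivity, wavelet)

print(sweep_q(trace, [20, 40, 80]))   # 40
```

A wrong candidate Q leaves residual exponential tails behind every spike, inflating the l1 norm, which is why the sweep recovers the true value; the accuracy of this scheme clearly hinges on the true Q being in (or near) the candidate set, which is the limitation the dictionary-learning update addresses.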


Author(s):  
M. Sulaiman Khan ◽  
Maybin Muyeba ◽  
Frans Coenen ◽  
David Reid ◽  
Hissam Tawfik

In this paper, a composite fuzzy association rule mining mechanism (CFARM), directed at identifying patterns in datasets composed of composite attributes, is described. Composite attributes are defined as attributes that can simultaneously take two or more values that subscribe to a common schema. The objective is to generate fuzzy association rules using “properties” associated with these composite attributes. The exemplar application is the analysis of the nutrients contained in items found in grocery data sets. The paper commences with a review of the background and related work, and a formal definition of the CFARM concepts. The CFARM algorithm is then fully described and evaluated using both real and synthetic data sets.
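The flavor of fuzzy association rules over property attributes can be sketched as follows. This shows only fuzzy support and confidence for one candidate rule, not the full CFARM algorithm, and the membership bounds and nutrient totals are hypothetical:

```python
def ramp_up(x, lo, hi):
    """Membership of x in a 'high' fuzzy set rising linearly from lo to hi."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def ramp_down(x, lo, hi):
    """Membership of x in a 'low' fuzzy set falling linearly from lo to hi."""
    return max(0.0, min(1.0, (hi - x) / (hi - lo)))

# Per-basket nutrient totals (hypothetical grocery data).
baskets = [
    {"protein": 20.0, "fat": 2.0},
    {"protein": 40.0, "fat": 5.0},
    {"protein": 60.0, "fat": 8.0},
]

mu_a = [ramp_up(b["protein"], 0.0, 80.0) for b in baskets]   # Protein = High
mu_b = [ramp_down(b["fat"], 0.0, 10.0) for b in baskets]     # Fat = Low

# Fuzzy support: average membership; itemset membership via min (t-norm).
support_a = sum(mu_a) / len(baskets)
support_ab = sum(min(a, c) for a, c in zip(mu_a, mu_b)) / len(baskets)
# Fuzzy confidence of the rule {Protein = High} -> {Fat = Low}.
confidence = support_ab * len(baskets) / sum(mu_a)

print(round(support_a, 4), round(support_ab, 4), round(confidence, 4))
```

Rules whose fuzzy support and confidence clear user-set thresholds are the ones a CFARM-style miner would report.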


Ocean Science ◽  
2006 ◽  
Vol 2 (1) ◽  
pp. 11-18 ◽  
Author(s):  
A. Henry-Edwards ◽  
M. Tomczak

Abstract. A water mass analysis method based on a constrained minimization technique is developed to derive water property changes in water mass formation regions from oceanographic station data taken at a significant distance from the formation regions. The method is tested with two synthetic data sets designed to mirror conditions in the North Atlantic at the Bermuda BATS time series station. The method requires careful definition of constraints before it produces reliable results. It is shown that an analysis of the error fields under different constraint assumptions can identify which properties vary most over the period of the observations. The method reproduces the synthetic data sets extremely well if all properties other than those identified as undergoing significant variations are held constant during the minimization.
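The constrained-minimization idea can be sketched for the simplest two-water-mass case: find the mixing fraction, constrained to [0, 1] with fractions summing to 1, that minimizes the misfit to observed temperature and salinity. The source-water values are hypothetical, and a coarse grid search stands in for the authors' actual solver:

```python
# Source water type properties (temperature degC, salinity) - hypothetical.
w1 = (18.0, 36.5)
w2 = (4.0, 35.0)
# Observed station values: here a known 30/70 mixture of w1 and w2.
observed = (0.3 * w1[0] + 0.7 * w2[0],
            0.3 * w1[1] + 0.7 * w2[1])

def misfit(frac):
    """Squared misfit of the candidate mixture frac*w1 + (1-frac)*w2."""
    t = frac * w1[0] + (1.0 - frac) * w2[0]
    s = frac * w1[1] + (1.0 - frac) * w2[1]
    return (t - observed[0]) ** 2 + (s - observed[1]) ** 2

# Grid search over the constrained fraction (0 <= frac <= 1, sum = 1).
fracs = [i / 1000.0 for i in range(1001)]
best = min(fracs, key=misfit)
print(best)
```

Examining how the residual misfit changes as individual source properties are held fixed or allowed to vary is, in spirit, the error-field analysis the abstract describes.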


2013 ◽  
Vol 2013 ◽  
pp. 1-10 ◽  
Author(s):  
Sara Garbarino ◽  
Giacomo Caviglia ◽  
Massimo Brignone ◽  
Michela Massollo ◽  
Gianmario Sambuceti ◽  
...  

[18F]fluoro-2-deoxy-D-glucose (FDG) is one of the most utilized tracers for positron emission tomography (PET) applications in oncology. FDG-PET relies on the higher glycolytic activity of tumors compared to normal structures as the basis of image contrast. As a glucose analog, FDG is transported into malignant cells, which typically exhibit increased radioactivity. However, unlike glucose, FDG is not reabsorbed by the renal system and is excreted to the bladder. The present paper describes a novel computational method for the quantitative assessment of this excretion process. The method is based on a compartmental analysis of FDG-PET data, in which the excretion process is explicitly accounted for by a bladder compartment, and on the application of an ant colony optimization (ACO) algorithm for the determination of the tracer coefficients describing the FDG transport effectiveness. This approach is validated by means of both synthetic data and real measurements acquired by a PET device for small animals (micro-PET). Possible oncological applications of the results are discussed in the final section.
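The compartmental backbone of such a model can be sketched with a simple forward (Euler) integration. The rate constants, the plasma input curve, and the explicit bladder pool below are illustrative assumptions, and the ACO fitting of the coefficients is omitted:

```python
import math

def simulate(k1, k2, k3, kb, t_end=60.0, dt=0.01):
    """Forward-integrate a toy FDG compartmental model:
    plasma -> free tissue pool (k1 in, k2 out),
    free tissue -> trapped (phosphorylated) pool (k3),
    plasma -> bladder (kb, renal excretion, not reabsorbed)."""
    tissue = trapped = bladder = 0.0
    steps = int(t_end / dt)
    for i in range(steps):
        t = i * dt
        plasma = math.exp(-0.1 * t)   # decaying plasma input (hypothetical)
        d_tissue = k1 * plasma - (k2 + k3) * tissue
        d_trapped = k3 * tissue
        d_bladder = kb * plasma
        tissue += dt * d_tissue
        trapped += dt * d_trapped
        bladder += dt * d_bladder
    return tissue, trapped, bladder

tissue, trapped, bladder = simulate(k1=0.1, k2=0.05, k3=0.05, kb=0.02)
print(trapped > 0.0 and bladder > 0.0)   # True
```

Fitting the rate constants so that the simulated curves match measured time-activity data is the inverse problem the paper solves with ant colony optimization.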


Author(s):  
Alessandro Bianchini ◽  
Giulia Andreini ◽  
Giovanni Ferrara ◽  
Lorenzo Ferrari ◽  
Dante Tommaso Rubino

Recent studies showed that prompt detection of stall inception, combined with a specific model to predict its associated aerodynamic force, could allow an extension of the left margin of the operating curve of high-pressure centrifugal compressors. In industrial machines operating in the field, however, robust procedures to detect and identify the phenomenon are still missing, i.e. the operating curve is almost always cut preemptively by the manufacturer with a proper safety margin; moreover, no agreement is found in the literature on a well-defined threshold for the onset of stall. In particular, in some cases the intensity of the arising subsynchronous frequency is compared to the revolution frequency, while in many others it is compared to the blade passage frequency. The authors' extensive experience with experimental stall analyses revealed that in some cases unexpected spikes can make this direct comparison unreliable for robust automatic detection. To this end, a new criterion was developed based on an integral analysis of the area subtended by the entire subsynchronous spectrum of the dynamic pressure signal from probes positioned just outside the impeller exit. A dimensionless parameter was then defined to account for the increase of the spectrum area in proximity to stall inception. This new parameter enabled the definition of a reference threshold to highlight the onset of stall conditions, whose validity and increased robustness were verified here on a set of experimental analyses of different full-stage test cases of industrial centrifugal compressors at the test rig.
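The integral criterion can be sketched as follows: integrate the pressure spectrum over the subsynchronous band (below the revolution frequency), normalize by a reference baseline area, and flag stall onset when the dimensionless ratio grows. The frequencies, amplitudes, and band shapes here are hypothetical illustration values:

```python
def band_area(freqs, amps, f_max):
    """Trapezoidal area under the amplitude spectrum for f <= f_max."""
    area = 0.0
    for i in range(1, len(freqs)):
        if freqs[i] > f_max:
            break
        area += 0.5 * (amps[i] + amps[i - 1]) * (freqs[i] - freqs[i - 1])
    return area

def stall_indicator(freqs, amps, baseline_area, f_rev):
    """Dimensionless subsynchronous-area ratio relative to a baseline."""
    return band_area(freqs, amps, f_rev) / baseline_area

f_rev = 100.0                           # revolution frequency, Hz
freqs = [10.0 * i for i in range(30)]   # 0..290 Hz spectrum bins
quiet = [1.0] * 30                      # flat broadband floor, far from stall
stall = [1.0 + (4.0 if 20.0 <= f <= 60.0 else 0.0) for f in freqs]

base = band_area(freqs, quiet, f_rev)
print(stall_indicator(freqs, quiet, base, f_rev))   # 1.0
print(stall_indicator(freqs, stall, base, f_rev))   # 3.0
```

Because the whole subsynchronous band is integrated, an isolated spurious spike moves the ratio far less than a genuine broadband rise near stall, which is the robustness gain over comparing a single frequency line.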


Author(s):  
Mohammad Durali ◽  
Mohammad Mahdi Jalili

A new criterion for the prediction of train derailment is presented in this paper. A 3-DOF wheelset model is used to identify the main dynamic parameters that affect wheelset derailment. Using these parameters and the conventional definition of the derailment coefficient, a new criterion for the prediction of wheelset derailment is introduced. The proposed criterion, in addition to offering the required precision in predicting wheelset derailment, requires measurements that are considerably easier to obtain. To evaluate the capability of the new criterion, it was used to determine derailment of a full wagon model with 48 DOF moving on rails with different random irregularities. The results were then compared with the predictions of conventional derailment coefficients.
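For context, the conventional derailment-coefficient check the abstract builds on is commonly the Nadal single-wheel limit on the lateral-to-vertical force ratio Y/Q; the sketch below assumes that definition, with illustrative flange-angle and friction values:

```python
import math

def nadal_limit(flange_angle_deg, mu):
    """Nadal limit on the derailment coefficient Y/Q: flange climb
    becomes possible when Y/Q exceeds (tan d - mu) / (1 + mu * tan d),
    with d the flange contact angle and mu the friction coefficient."""
    tan_d = math.tan(math.radians(flange_angle_deg))
    return (tan_d - mu) / (1.0 + mu * tan_d)

def exceeds_limit(y_force, q_force, flange_angle_deg=60.0, mu=0.36):
    """True when the measured Y/Q ratio is above the Nadal limit."""
    return y_force / q_force > nadal_limit(flange_angle_deg, mu)

print(round(nadal_limit(60.0, 0.36), 3))   # 0.845
print(exceeds_limit(40.0, 60.0))           # False: Y/Q = 0.667 is below the limit
```

Criteria of this family require simultaneous lateral and vertical wheel-rail force measurements, which is exactly the measurement burden the proposed alternative criterion aims to ease.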


Geophysics ◽  
2016 ◽  
Vol 81 (1) ◽  
pp. V7-V16 ◽  
Author(s):  
Kenji Nose-Filho ◽  
André K. Takahata ◽  
Renato Lopes ◽  
João M. T. Romano

We have addressed blind deconvolution in a multichannel framework. Recently, a robust solution to this problem, based on a Bayesian approach called sparse multichannel blind deconvolution (SMBD), was proposed in the literature with interesting results. However, its computational complexity can be high. We have proposed a fast algorithm based on minimum entropy deconvolution, which is considerably less expensive. We designed the deconvolution filter to minimize a normalized version of the hybrid [Formula: see text]-norm loss function. This is in contrast to SMBD, in which the hybrid [Formula: see text]-norm function is used as a regularization term to directly determine the deconvolved signal. Results with synthetic data showed that the performance of the obtained deconvolution filter was similar to one obtained in a supervised framework. Similar results were also obtained on a real marine data set for both techniques.

