Enhanced local correlation stacking method

Geophysics ◽  
2011 ◽  
Vol 76 (3) ◽  
pp. V33-V45 ◽  
Author(s):  
Charlotte Sanchis ◽  
Alfred Hanssen

Stacking is a common technique for improving the signal-to-noise ratio (S/N) and imaging quality of seismic data. Conventional stacking, which equally averages a collection of normal-moveout-corrected or migrated shot gathers with a common reflection point, is not always satisfactory. Instead, we propose a novel time-dependent weighted-average stacking method that uses the local correlation between each individual trace and a chosen reference trace as a weight, together with a new weight-normalization scheme that ensures meaningful output amplitudes. Three different reference traces are proposed, based on conventional stacking, S/N estimation, and Kalman filtering. The outputs of the enhanced stacking methods, as well as their reference traces, were compared on both synthetic data and real migrated marine subsalt data. We conclude that enhanced stacking with either the S/N-estimation or the Kalman reference trace yields consistently better results than conventional stacking, exhibiting cleaner, better-defined reflection events and a larger number of reflections. The Kalman reference method produces the best overall seismic image contrast and reveals many more reflected events, but at the cost of a higher noise level and a longer processing time. Enhanced stacking with the S/N-estimation reference is therefore a practical alternative: it runs faster while still emphasizing some reflected events beneath the subsalt structure.
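
A minimal sketch of the weighting idea, assuming a simple sliding-window local correlation, negative correlations clipped to zero, and per-sample weight normalization; the paper's precise weight and normalization definitions may differ.

```python
import numpy as np

def local_correlation(trace, reference, half_win=10):
    """Sliding-window correlation between one trace and a reference trace."""
    n = len(trace)
    corr = np.zeros(n)
    for t in range(n):
        lo, hi = max(0, t - half_win), min(n, t + half_win + 1)
        a = trace[lo:hi] - trace[lo:hi].mean()
        b = reference[lo:hi] - reference[lo:hi].mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        corr[t] = (a * b).sum() / denom if denom > 0 else 0.0
    return corr

def enhanced_stack(gather, reference, half_win=10, eps=1e-6):
    """Time-dependent weighted stack: weights from local correlation,
    normalized at every time sample so output amplitudes stay comparable
    to a conventional stack."""
    weights = np.array([np.clip(local_correlation(tr, reference, half_win), 0.0, None)
                        for tr in gather])
    norm = weights.sum(axis=0) + eps            # per-sample weight normalization
    return (weights * gather).sum(axis=0) / norm

# usage: the conventional (equal-weight) stack serves as the reference trace
rng = np.random.default_rng(0)
gather = rng.standard_normal((24, 500))         # 24 traces, 500 time samples
reference = gather.mean(axis=0)
stacked = enhanced_stack(gather, reference)
```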

Author(s):  
Daniel Jiménez-Sánchez ◽  
Mikel Ariz ◽  
José Mário Morgado ◽  
Iván Cortés-Domínguez ◽  
Carlos Ortiz-de-Solórzano

Abstract Motivation Recent advances in multiplex immunostaining and multispectral cytometry have opened the door to simultaneously visualizing an unprecedented number of biomarkers both in liquid and solid samples. Properly unmixing fluorescent emissions is a challenging task, which normally requires the characterization of the individual fluorochromes from control samples. As the number of fluorochromes increases, the cost in time and use of reagents becomes prohibitively high. Here, we present a fully unsupervised blind spectral unmixing method for the separation of fluorescent emissions in highly mixed spectral data, without the need for control samples. To this end, we extend an existing method based on non-negative matrix factorization, and introduce several critical improvements: initialization based on the theoretical spectra, automated selection of ‘sparse’ data and use of a re-initialized multilayer optimizer. Results Our algorithm is exhaustively tested using synthetic data to study its robustness against different levels of colocalization, signal-to-noise ratio, spectral resolution and the effect of errors in the initialization of the algorithm. Then, we compare the performance of our method to that of traditional spectral unmixing algorithms using novel multispectral flow and image cytometry systems. In all cases, we show that our blind unmixing algorithm performs robust unmixing of highly spatially and spectrally mixed data with an unprecedentedly low computational cost. In summary, we present the first use of a blind unmixing method in multispectral flow and image cytometry, opening the door to the widespread use of our method to efficiently pre-process multiplex immunostaining samples without the need for experimental controls. Availability and implementation https://github.com/djimenezsanchez/Blind_Unmixing_NMF_RI/ contains the source code and all datasets used in this manuscript. Supplementary information Supplementary data are available at Bioinformatics online.
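
A minimal sketch of spectra-initialized NMF unmixing, using scikit-learn's generic multiplicative-update solver rather than the authors' re-initialized multilayer optimizer, and omitting the automated selection of 'sparse' data.

```python
import numpy as np
from sklearn.decomposition import NMF

def blind_unmix(mixed, theoretical_spectra, max_iter=500):
    """Unmix observed emissions with NMF, initialized from theoretical spectra.

    mixed:               (n_events, n_channels) non-negative spectral measurements
    theoretical_spectra: (n_fluorochromes, n_channels) catalogue emission spectra
    returns:             abundances (n_events, n_fluorochromes) and refined spectra
    """
    n_comp = theoretical_spectra.shape[0]
    # rough initial abundances by projecting onto the theoretical spectra
    W0 = np.maximum(mixed @ np.linalg.pinv(theoretical_spectra), 1e-6)
    H0 = np.maximum(theoretical_spectra.astype(float), 1e-6)
    model = NMF(n_components=n_comp, init="custom", solver="mu", max_iter=max_iter)
    W = model.fit_transform(mixed, W=W0, H=H0)
    return W, model.components_

# usage with synthetic, non-negative data and slightly perturbed 'theoretical' spectra
rng = np.random.default_rng(1)
spectra = np.abs(rng.standard_normal((4, 16)))        # 4 fluorochromes, 16 channels
abund = np.abs(rng.standard_normal((1000, 4)))        # 1000 cells
mixed = abund @ spectra + 0.01 * np.abs(rng.standard_normal((1000, 16)))
W, S = blind_unmix(mixed, spectra + 0.05)
```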


Geophysics ◽  
2018 ◽  
Vol 83 (1) ◽  
pp. O1-O13 ◽  
Author(s):  
Anders U. Waldeland ◽  
Hao Zhao ◽  
Jorge H. Faccipieri ◽  
Anne H. Schistad Solberg ◽  
Leiv-J. Gelius

The common-reflection-surface (CRS) method offers a stack with a higher signal-to-noise ratio at the cost of a time-consuming semblance search to obtain the stacking parameters. We have developed a fast method for extracting the CRS parameters using local slope and curvature, estimated with the gradient structure tensor and the quadratic structure tensor on stacked data, under the assumption that a stacking velocity is already available. Our method was compared with an existing slope-based method in which the slope is extracted from prestack data. An experiment on synthetic data shows that our method is more robust against noise than the existing method. When applied to two real data sets, our method achieves accuracy comparable with the pragmatic and full semblance searches, while being approximately two and four orders of magnitude faster, respectively.
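
A minimal sketch of slope estimation with the gradient structure tensor on a stacked section; the quadratic-structure-tensor curvature estimate and the mapping of slope and curvature to CRS parameters are not shown, and the smoothing lengths are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gst_slope(section, sigma_grad=1.0, sigma_tensor=3.0, eps=1e-12):
    """Local reflector slope (dt/dx) of a stacked section, shape (n_t, n_x),
    from the gradient structure tensor."""
    # derivative-of-Gaussian gradients along time and space
    g_t = gaussian_filter(section, sigma_grad, order=(1, 0))
    g_x = gaussian_filter(section, sigma_grad, order=(0, 1))
    # smoothed outer products -> structure tensor components
    Jtt = gaussian_filter(g_t * g_t, sigma_tensor)
    Jxx = gaussian_filter(g_x * g_x, sigma_tensor)
    Jtx = gaussian_filter(g_t * g_x, sigma_tensor)
    # orientation of the dominant eigenvector (gradient-like direction)
    theta = 0.5 * np.arctan2(2.0 * Jtx, Jtt - Jxx)
    e_t, e_x = np.cos(theta), np.sin(theta)
    # the reflector tangent is orthogonal to that direction
    return -e_x / (e_t + eps)
```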


2017 ◽  
Vol 210 (1) ◽  
pp. 42-55 ◽  
Author(s):  
J.A. López-Comino ◽  
S. Cesca ◽  
M. Kriegerowski ◽  
S. Heimann ◽  
T. Dahm ◽  
...  

Abstract Ideally, the performance of a dedicated seismic monitoring installation should be assessed prior to the observation of target seismicity. This work focuses on a hydrofracking experiment monitored at Wysin, NE Poland. A synthetic microseismic catalogue is generated to assess the monitoring performance during the pre-operational phase, when the available seismic information concerns only the noise conditions and the potential background seismicity. Full waveforms, accounting for the expected spatial, magnitude and focal-mechanism distributions and a realistic local crustal model, are combined with real noise recordings to produce either event-based or continuous synthetic waveforms. The network detection performance is assessed in terms of the magnitude of completeness (Mc) using two different techniques. First, we use an amplitude threshold, taking into account the ratio between the maximal amplitude of the synthetic waveforms and station-dependent noise levels, for different values of the signal-to-noise ratio. The detection probability at each station is estimated for the whole data set and extrapolated to a broader range of magnitudes and distances. We estimate an Mc of about 0.55 when considering the distributed network, and can further decrease Mc to 0.45 using array techniques. The second approach, which takes advantage of an automatic, coherence-based detection algorithm, can lower Mc to ∼0.1, at the cost of an increase in false detections. Mc changes significantly during daytime hours as a consequence of strongly varying noise conditions. Moreover, due to the radiation patterns and network geometry, double-couple-like sources are better detected than tensile cracks, which may be induced during fracking.
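
The amplitude-threshold approach can be illustrated with a rough Monte-Carlo sketch. The amplitude-magnitude-distance scaling below (log10 A = M - k·log10 r - c) and all parameter values are placeholders, not the relations used in the study; the point is only the mechanics of comparing predicted amplitudes against station-dependent noise levels and reading off a magnitude of completeness.

```python
import numpy as np

def detection_probability(mags, dists_km, noise, snr_min=3.0,
                          k=1.66, c=2.0, n_required=4, n_trials=500, seed=None):
    """Monte-Carlo detection probability of a small local network.

    Placeholder amplitude model: log10(A) = M - k*log10(r) - c.
    An event counts as detected when at least n_required stations see
    amplitude > snr_min * station noise level."""
    rng = np.random.default_rng(seed)
    prob = np.zeros(len(mags))
    for i, m in enumerate(mags):
        hits = 0
        for _ in range(n_trials):
            # perturb epicentral distances to mimic a distributed source region
            r = np.maximum(dists_km * (1.0 + 0.2 * rng.standard_normal(len(dists_km))), 0.1)
            amp = 10.0 ** (m - k * np.log10(r) - c)
            hits += np.sum(amp > snr_min * noise) >= n_required
        prob[i] = hits / n_trials
    return prob

# Mc ~ smallest magnitude detected with >= 95% probability (assumes one exists)
mags = np.arange(-0.5, 1.51, 0.1)
noise = np.array([6e-4, 8e-4, 5e-4, 9e-4, 7e-4, 1e-3])   # station noise (arbitrary units)
dists = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0])          # km
p = detection_probability(mags, dists, noise, seed=42)
mc = mags[np.argmax(p >= 0.95)]
```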


2021 ◽  
Vol 11 (2) ◽  
pp. 790
Author(s):  
Pablo Venegas ◽  
Rubén Usamentiaga ◽  
Juan Perán ◽  
Idurre Sáez de Ocáriz

Infrared thermography is a widely used technology that has been successfully applied in many and varied fields, including its use as a non-destructive testing tool to assess the integrity of materials. This application is highly developed and its effectiveness widely verified: established protocols and methodologies extract relevant information from the captured thermal signals and guarantee the detection of anomalies in the inspected materials. However, there is still room for improvement in certain aspects, such as increasing the detection capacity and defining a detailed procedure for characterizing indications, which must be investigated further to reduce uncertainties and optimize this technology. In this work, an innovative thermographic data-analysis methodology is proposed that extracts more information from the recorded sequences by applying advanced processing techniques to the results. The extracted information is synthesized into three channels that can be represented as real color images and processed with quaternion-algebra techniques to improve the detection level and facilitate the classification of defects. To validate the proposed methodology, synthetic data and actual experimental sequences have been analyzed. Seven different definitions of the signal-to-noise ratio (SNR) have been used to assess the increase in detection capacity, and a generalized application procedure has been proposed to extend their use to color images. The results verify the capacity of this methodology, showing significant increases in SNR compared with conventional processing techniques in thermographic NDT.
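
A minimal sketch of one commonly used SNR definition (contrast between a defect region and a sound region relative to the sound-region standard deviation, in dB); the seven definitions compared in the paper are not reproduced here, and the masks and values below are illustrative.

```python
import numpy as np

def snr_db(image, defect_mask, sound_mask):
    """Thermographic SNR: defect/sound contrast over sound-area noise, in dB."""
    defect = image[defect_mask]
    sound = image[sound_mask]
    return 20.0 * np.log10(np.abs(defect.mean() - sound.mean()) / sound.std())

# usage on a synthetic frame with a warm square indication
rng = np.random.default_rng(3)
frame = 30.0 + 0.2 * rng.standard_normal((64, 64))
defect = np.zeros_like(frame, dtype=bool)
defect[28:36, 28:36] = True
frame[defect] += 1.5
print(snr_db(frame, defect, ~defect))
```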


2011 ◽  
Vol 128-129 ◽  
pp. 181-184
Author(s):  
You Lian Zhu ◽  
Cheng Huang

The design of a morphological filter depends greatly on the morphological operations used and on the selection of structuring elements. A filter design method using a median-closing morphological operation is proposed to enhance image denoising, and the particle swarm optimization (PSO) algorithm is introduced to select the structuring elements. The method takes the peak signal-to-noise ratio (PSNR) as the cost function and can adaptively build unit structuring elements from a zero square matrix. Experimental results show that the proposed method effectively removes impulse noise from a noisy image, especially from a low signal-to-noise ratio (SNR) image, and that its noise-reduction performance has clear advantages over the other methods.
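
A minimal sketch of the cost function and filter, with a brute-force search over square structuring-element sizes standing in for the PSO optimizer; the adaptive construction of unit structuring elements from a zero matrix is not reproduced.

```python
import numpy as np
from scipy.ndimage import median_filter, grey_closing

def psnr(clean, denoised, peak=255.0):
    """Peak signal-to-noise ratio used as the cost function."""
    mse = np.mean((clean.astype(float) - denoised.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def median_closing(noisy, size):
    """Median filtering followed by grey-scale closing with a square element."""
    return grey_closing(median_filter(noisy, size=size), size=size)

def best_element(clean, noisy, sizes=(3, 5, 7, 9)):
    """Exhaustive search over element sizes (stand-in for PSO), PSNR as cost."""
    scored = [(psnr(clean, median_closing(noisy, s)), s) for s in sizes]
    return max(scored)   # (best PSNR, best element size)
```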


2021 ◽  
Vol 2 (143) ◽  
pp. 174-183
Author(s):  
Andrey Yu. Nesmiyan ◽  
Anastasiya S. Kaymakova ◽  
Yuliya S. Tsench ◽  

Most modern agricultural machines and implements consist of components whose main parameters and design features were established in the first half of the twentieth century; since then, these technical means have evolved slowly. (Research purpose) The research purpose is to identify general trends in the technical and technological level of fallow cultivators in the first quarter of the 21st century. (Materials and Methods) Data from the brief test reports of the selected machines were analyzed. The production of fallow cultivators in the Russian Federation is gradually increasing. (Results and Discussion) During the first ten years of the 21st century, only 27 machines were submitted for testing, whereas from 2014 to 2017 more than 40 were submitted; for the "old" cultivators the weighted average tractor traction class was 2.8, while for the new ones it is about four. Over the study period (about 10 years), the quality of soil cultivation, in terms of deviation from the specified tillage depth, soil crumbling and ridging of the field surface, changed little. The productivity of cultivator units increased by 7-21 percent, which is explained not only by the increased power of tractors but also by an increase in the working-time utilization rate from 0.72 to 0.77 on average. The specific weight of the "new" cultivators was on average 22 kilograms per meter of working width less than that of the "old" analogues, which can be explained by the evolution of their designs. (Conclusions) Raising the tractor class by one level increases the specific material consumption of the cultivators aggregated with it by about 58 kilograms per meter, for both "old" and "new" cultivators. Increasing the working width of the implements from 4 to 16 meters increases their weight roughly eightfold, which affects the cost and the operational and environmental characteristics of wide-span cultivators.


2018 ◽  
Vol 13 (3) ◽  
pp. 244
Author(s):  
Laura Broccardo ◽  
Luisa Tibiletti ◽  
Pertti Vilpas

This study investigates how balancing internal and external financing sources can create economic value. We set up a financial scorecard consisting of the Cost of Debt (COD), Return on Investment (ROI), and the Cost of Equity (COE). We show that the COE should act as a cap for the COD and a floor for the ROI in order to increase the Net Present Value at the Weighted Average Cost of Capital and the Adjusted Present Value of the levered investment. However, leverage should be carefully monitored if the COD and ROI fall outside these bounds. Situations where leverage has the opposite effect on value creation and on the Equity Internal Rate of Return are also discussed, and illustrative examples are given. The proposed model aims to help corporate management in financial decisions.
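
A toy numerical illustration of the scorecard rule (COD < COE < ROI), with made-up perpetuity cash flows rather than figures from the paper; when ROI sits above the COE floor and COD below the COE cap, both the NPV at WACC and the APV come out positive.

```python
def wacc(cod, coe, debt, equity, tax=0.0):
    """Weighted Average Cost of Capital for a simple debt/equity mix."""
    v = debt + equity
    return cod * (1 - tax) * debt / v + coe * equity / v

def npv_at_wacc(investment, annual_cash_flow, rate):
    """NPV of a perpetual cash flow discounted at a single blended rate."""
    return -investment + annual_cash_flow / rate

def adjusted_present_value(investment, annual_cash_flow, coe, debt, tax=0.0):
    """APV = unlevered NPV (discounted at COE) + value of the debt tax shield
    (perpetual shield simplifies to tax * debt)."""
    return -investment + annual_cash_flow / coe + tax * debt

# illustrative numbers (not from the paper): COD = 4% < COE = 8% < ROI = 10%
investment, debt, equity = 1000.0, 400.0, 600.0
cod, coe, roi = 0.04, 0.08, 0.10
cash_flow = roi * investment                   # perpetual operating cash flow
print(npv_at_wacc(investment, cash_flow, wacc(cod, coe, debt, equity)))   # > 0
print(adjusted_present_value(investment, cash_flow, coe, debt))           # > 0
```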


Geophysics ◽  
2019 ◽  
Vol 84 (2) ◽  
pp. N29-N40
Author(s):  
Modeste Irakarama ◽  
Paul Cupillard ◽  
Guillaume Caumon ◽  
Paul Sava ◽  
Jonathan Edwards

Structural interpretation of seismic images can be highly subjective, especially in complex geologic settings. A single seismic image will often support multiple geologically valid interpretations. However, it is usually difficult to determine which of those interpretations are more likely than others. We have referred to this problem as structural model appraisal. We have developed the use of misfit functions to rank and appraise multiple interpretations of a given seismic image. Given a set of possible interpretations, we compute synthetic data for each structural interpretation, and then we compare these synthetic data against observed seismic data; this allows us to assign a data-misfit value to each structural interpretation. Our aim is to find data-misfit functions that enable a ranking of interpretations. To do so, we formalize the problem of appraising structural interpretations using seismic data and we derive a set of conditions to be satisfied by the data-misfit function for a successful appraisal. We investigate vertical seismic profiling (VSP) and surface seismic configurations. An application of the proposed method to a realistic synthetic model shows promising results for appraising structural interpretations using VSP data, provided that the target region is well-illuminated. However, we find appraising structural interpretations using surface seismic data to be more challenging, mainly due to the difficulty of computing phase-shift data misfits.
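
A minimal sketch of the appraisal workflow, assuming simple least-squares trace misfits; the paper's actual misfit functions (including the phase-shift misfits), and the forward modeling used to produce the synthetics, are not reproduced here, and the interpretation names are hypothetical.

```python
import numpy as np

def l2_misfit(observed, synthetic):
    """Least-squares data misfit between observed and synthetic traces."""
    return float(np.sum((observed - synthetic) ** 2))

def rank_interpretations(observed, synthetics):
    """Assign a data-misfit value to each interpretation's synthetic data
    and rank them, lowest misfit first."""
    misfits = {name: l2_misfit(observed, syn) for name, syn in synthetics.items()}
    return sorted(misfits.items(), key=lambda kv: kv[1])

# usage with hypothetical interpretations A and B
rng = np.random.default_rng(2)
observed = rng.standard_normal(1000)
synthetics = {"interp_A": observed + 0.1 * rng.standard_normal(1000),
              "interp_B": observed + 0.5 * rng.standard_normal(1000)}
ranking = rank_interpretations(observed, synthetics)
```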


2015 ◽  
Vol 28 (3) ◽  
pp. 1016-1030 ◽  
Author(s):  
Erik Swenson

Abstract Various multivariate statistical methods exist for analyzing covariance and isolating linear relationships between datasets. The most popular linear methods are based on singular value decomposition (SVD) and include canonical correlation analysis (CCA), maximum covariance analysis (MCA), and redundancy analysis (RDA). In this study, continuum power CCA (CPCCA) is introduced as one extension of continuum power regression for isolating pairs of coupled patterns whose temporal variation maximizes the squared covariance between partially whitened variables. Similar to the whitening transformation, the partial whitening transformation acts to decorrelate individual variables but only to a partial degree with the added benefit of preconditioning sample covariance matrices prior to inversion, providing a more accurate estimate of the population covariance. CPCCA is a unified approach in the sense that the full range of solutions bridges CCA, MCA, RDA, and principal component regression (PCR). Recommended CPCCA solutions include a regularization for CCA, a variance bias correction for MCA, and a regularization for RDA. Applied to synthetic data samples, such solutions yield relatively higher skill in isolating known coupled modes embedded in noise. Provided with some crude prior expectation of the signal-to-noise ratio, the use of asymmetric CPCCA solutions may be justifiable and beneficial. An objective parameter choice is offered for regularization with CPCCA based on the covariance estimate of O. Ledoit and M. Wolf, and the results are quite robust. CPCCA is encouraged for a range of applications.
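
A minimal sketch of the partial-whitening idea behind CPCCA, assuming one whitening exponent per field, sample covariances without Ledoit-Wolf regularization, and no MCA bias correction; the paper's exact continuum power parameterization may differ.

```python
import numpy as np

def matrix_power_sym(C, p, eps=1e-10):
    """Symmetric matrix power via eigendecomposition (used for partial whitening)."""
    w, V = np.linalg.eigh(C)
    w = np.maximum(w, eps)
    return (V * w ** p) @ V.T

def cpcca(X, Y, alpha_x=0.5, alpha_y=0.5, n_modes=3):
    """Partially whiten X and Y, then SVD of their cross-covariance.
    alpha = 1 gives CCA-like, alpha = 0 gives MCA-like behaviour.
    X, Y: (time, variables) anomaly matrices with means removed."""
    n = X.shape[0]
    Cxx, Cyy = X.T @ X / n, Y.T @ Y / n
    Wx = matrix_power_sym(Cxx, -alpha_x / 2.0)
    Wy = matrix_power_sym(Cyy, -alpha_y / 2.0)
    Cxy = (X @ Wx).T @ (Y @ Wy) / n        # cross-covariance of partially whitened data
    U, s, Vt = np.linalg.svd(Cxy, full_matrices=False)
    # projection (weight) vectors in the original variable space;
    # expansion coefficients are X @ px and Y @ py
    px = Wx @ U[:, :n_modes]
    py = Wy @ Vt.T[:, :n_modes]
    return px, py, s[:n_modes]
```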


2021 ◽  
Vol 29 (2) ◽  
pp. 359-383
Author(s):  
Anatoly P. Dzyuba

Reducing the cost of electricity consumption is one of the most important ways for industrial enterprises to increase their operational efficiency. The article addresses reducing the cost of the transport (grid-service) component of purchased electrical energy for industrial enterprises that are technologically connected to the electrical networks of electricity producers. An empirical study examines how the transport component of purchased electricity is priced for such enterprises, identifies the factors that inflate the cost of the electricity paid for, and calculates these overcharges using the example of a typical electricity-consumption schedule of a machine-building enterprise in various regions of Russia. Using the author's indicators (the tariff coefficient for electricity transportation by GNP level, the index of this tariff coefficient, the weighted average price for electricity transportation, the index of the weighted average price for electricity transportation, and the integral index of the efficiency of GNP tariffs), the effectiveness of electricity-transport tariffs is assessed for industrial enterprises connected to producers' electric networks. Based on the calculated indicators, the regions are grouped into three main groups, and recommendations are developed for managing the transport component of the cost of purchased electricity in each group. As the most effective option for reducing the cost of electricity transportation, the author proposes demand-side management of electricity consumption, which will reduce the costs of industrial enterprises that pay for the transport component of purchased electricity under unfavorable tariff configurations.
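
The weighted average price for electricity transportation admits a straightforward reading as total transport cost divided by total energy transported; the sketch below is a hypothetical interpretation of that indicator and its index, not the author's exact definitions.

```python
def weighted_average_transport_price(volumes_kwh, tariffs_per_kwh):
    """Weighted average electricity-transport price over a billing period:
    total transport cost divided by total energy transported."""
    total_cost = sum(v * t for v, t in zip(volumes_kwh, tariffs_per_kwh))
    return total_cost / sum(volumes_kwh)

def price_index(current_price, base_price):
    """Index of the weighted average transport price relative to a base period."""
    return current_price / base_price
```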

