Effects of averaging and sampling on the statistics of reflection coefficients

Geophysics ◽  
1991 ◽  
Vol 56 (1) ◽  
pp. 50-58 ◽  
Author(s):  
K. Hsu ◽  
R. Burridge

The reflection coefficients derived from sonic and density logs are frequently used in seismic exploration. Although they measure in-situ formation slowness and density, sonic and density tools do not record the exact, continuous formation properties but rather locally averaged properties sampled at discrete depth points. Furthermore, the logs are frequently reinterpolated to form a Goupillaud medium for many applications, such as synthetic seismogram computation. Both the logging tools and the Goupillaud interpolation introduce averaging and sampling effects into the reflection coefficients and significantly alter the autocorrelation of the reflection coefficient sequence. Analytical formulas are derived to show how the autocorrelation is altered and how it depends on the averaging and sampling intervals. Essentially, these effects impose sinc-squared envelopes on the power spectrum of the reflection coefficient sequence and alias high-frequency components to low-frequency components in the spectral domain. These findings are verified using synthetic and real examples.
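To make the two effects concrete, the sketch below (an illustration with assumed window and sampling lengths, not the paper's code) applies a boxcar average to a white reflection-coefficient sequence, which approximately imposes a sinc-squared envelope on its power spectrum, and then subsamples it, which folds high-frequency energy back onto the low frequencies:

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.05, 4096)        # synthetic white reflection-coefficient sequence (assumed)

L = 8                                  # averaging window, e.g. tool resolution (assumed)
r_avg = np.convolve(r, np.ones(L) / L, mode="same")   # local averaging by the tool

d = 4                                  # resampling interval, e.g. Goupillaud layer spacing (assumed)
r_samp = r_avg[::d]                    # discrete sampling after averaging

def power_spectrum(x):
    X = np.fft.rfft(x - x.mean())
    return np.abs(X) ** 2 / len(x)

# Averaging multiplies the flat spectrum by (approximately) a sinc-squared envelope ...
f = np.fft.rfftfreq(len(r))
sinc2 = np.sinc(f * L) ** 2
ratio = power_spectrum(r_avg)[1:] / (sinc2[1:] * power_spectrum(r).mean() + 1e-12)
print("median ratio to sinc^2 prediction (should be near 1):", np.median(ratio))

# ... and subsampling folds energy from above the new Nyquist back into the low
# frequencies, so the spectrum of r_samp is a sum of shifted copies of that envelope.
P_samp = power_spectrum(r_samp)
half = len(P_samp) // 2
print("low/high band energy ratio after resampling:", P_samp[:half].sum() / P_samp[half:].sum())
```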

Author(s):  
G. Y. Fan ◽  
J. M. Cowley

It is well known that structural information about the specimen is not always faithfully transferred through the electron microscope. Firstly, the spatial frequency spectrum is modulated by the transfer function (TF) at the focal plane. Secondly, the spectrum suffers a high-frequency cut-off by the aperture (or, effectively, by damping terms such as chromatic aberration). While these effects do not essentially alter the imaging of crystal periodicity as long as the low-order Bragg spots lie inside the aperture (although the contrast may be reversed), they may change the appearance of images of amorphous materials completely. Because the spectrum of amorphous materials is continuous, its modulation emphasizes some components while weakening others. In particular, the cut-off of high-frequency components, which contribute to the amorphous image just as strongly as low-frequency components, can have a fundamental effect. This can be illustrated through computer simulation. Imaging a white-noise object with an electron microscope free of TF limitations gives Fig. 1a, which is obtained by Fourier transformation of a constant amplitude combined with computer-generated random phases.
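The white-noise construction mentioned above is easy to reproduce; the sketch below (an assumed illustration, not the authors' simulation) builds such an object from a constant-amplitude spectrum with random phases and then applies a circular aperture to show how the high-frequency cut-off reshapes the image:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256

# Constant amplitude, random phase -> white-noise object (cf. Fig. 1a in the text).
phase = rng.uniform(0.0, 2.0 * np.pi, (N, N))
spectrum = np.exp(1j * phase)             # unit amplitude everywhere
obj = np.fft.ifft2(spectrum).real

# Circular aperture in the frequency domain: keep only frequencies below f_cut.
fy = np.fft.fftfreq(N)[:, None]
fx = np.fft.fftfreq(N)[None, :]
f_cut = 0.15                              # illustrative aperture radius (assumed)
aperture = (fx ** 2 + fy ** 2) <= f_cut ** 2

filtered = np.fft.ifft2(np.fft.fft2(obj) * aperture).real

# The continuous spectrum of the "amorphous" object loses its high-frequency part
# entirely, which changes the image appearance rather than just reversing contrast.
print("std before aperture:", obj.std(), " after:", filtered.std())
```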


2019 ◽  
Vol 14 (7) ◽  
pp. 658-666
Author(s):  
Kai-jian Xia ◽  
Jian-qiang Wang ◽  
Jian Cai

Background: Lung cancer is one of the common malignant tumors. The successful diagnosis of lung cancer depends on the accuracy of the images obtained from medical imaging modalities. Objective: The fusion of CT and PET combines the complementary and redundant information of both images and can increase the ease of perception. Since existing fusion methods are not perfect and the fusion effect remains to be improved, the paper proposes a novel method called adaptive PET/CT fusion for lung cancer in the Piella framework. Methods: The algorithm first adopts the DTCWT to decompose the PET and CT images into different components. In accordance with the characteristics of the low-frequency and high-frequency components and the features of the PET and CT images, 5 membership functions are used as a combination method to determine the fusion weight for the low-frequency components. In order to fuse the different high-frequency components, we select the energy difference of the decomposition coefficients as the match measure and the local energy as the activity measure; in addition, a decision factor is also determined for the high-frequency components. Results: The proposed method is compared with several pixel-level spatial-domain image fusion algorithms. The experimental results show that our proposed algorithm is feasible and effective. Conclusion: Our proposed algorithm can better retain and highlight the edge and texture information of lesions in the fused image.
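A heavily simplified sketch of the fusion pipeline is given below, using the third-party dtcwt package; a plain average of the low-frequency components stands in for the paper's five membership functions, and a magnitude-based choose-max rule stands in for its match and activity measures, so it is an assumption-laden illustration rather than the authors' algorithm:

```python
import numpy as np
import dtcwt

def fuse_pet_ct(ct: np.ndarray, pet: np.ndarray, nlevels: int = 3) -> np.ndarray:
    t = dtcwt.Transform2d()
    p_ct = t.forward(ct.astype(float), nlevels=nlevels)
    p_pet = t.forward(pet.astype(float), nlevels=nlevels)

    # Low-frequency components: simple average (the paper instead weights these
    # adaptively with membership functions).
    lowpass = 0.5 * (p_ct.lowpass + p_pet.lowpass)

    # High-frequency components: per-coefficient selection by larger magnitude
    # (a crude stand-in for the paper's local-energy activity measure).
    highpasses = []
    for h_ct, h_pet in zip(p_ct.highpasses, p_pet.highpasses):
        mask = np.abs(h_ct) >= np.abs(h_pet)
        highpasses.append(np.where(mask, h_ct, h_pet))

    return t.inverse(dtcwt.Pyramid(lowpass, tuple(highpasses)))

# Example with random stand-in images of matching size (hypothetical inputs):
# fused = fuse_pet_ct(np.random.rand(256, 256), np.random.rand(256, 256))
```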


Author(s):  
Priya R. Kamath ◽  
Kedarnath Senapati ◽  
P. Jidesh

Speckles are inherent to SAR. They hide and degrade much of the relevant information contained in SAR images. In this paper, a despeckling algorithm using the shrinkage of two-dimensional discrete orthonormal S-transform (2D-DOST) coefficients in the transform domain, along with a shock filter, is proposed. An attempt has also been made, as a post-processing step, to preserve the edges and other details while removing the speckle. The proposed strategy involves decomposing the SAR image into low- and high-frequency components and processing them separately. A shock filter is used to smooth out the small variations in the low-frequency components, and the high-frequency components are treated with a shrinkage of 2D-DOST coefficients. For enhancement, the edges are detected using a ratio-based edge detection algorithm. The proposed method is tested, verified, and compared with some well-known models on C-band and X-band SAR images. A detailed experimental analysis is presented.
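A rough stand-in sketch of the strategy is shown below: a Gaussian filter performs the low/high-frequency split, soft thresholding of FFT coefficients substitutes for the 2D-DOST shrinkage, and the shock filter and ratio-based edge detector are omitted, so this is only an assumed illustration of the decompose-and-treat-separately idea:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def despeckle_sketch(img: np.ndarray, sigma: float = 2.0, thr_scale: float = 1.0) -> np.ndarray:
    """Toy despeckler: Gaussian low/high split plus soft shrinkage of the high-frequency FFT."""
    img = img.astype(float)
    low = gaussian_filter(img, sigma)            # low-frequency part (shock filtering would act here)
    high = img - low                             # high-frequency part: detail plus speckle

    H = np.fft.fft2(high)
    thr = thr_scale * np.median(np.abs(H))       # heuristic threshold (assumed, not from the paper)
    H_shrunk = H * np.maximum(1.0 - thr / (np.abs(H) + 1e-12), 0.0)   # soft shrinkage of coefficients
    return low + np.fft.ifft2(H_shrunk).real

# Usage on a synthetic multiplicative-speckle image (assumed test data):
rng = np.random.default_rng(5)
clean = np.outer(np.hanning(128), np.hanning(128))
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
denoised = despeckle_sketch(speckled)
```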


2021 ◽  
Vol 11 (11) ◽  
pp. 5028
Author(s):  
Miaomiao Sun ◽  
Zhenchun Li ◽  
Yanli Liu ◽  
Jiao Wang ◽  
Yufei Su

Low-frequency information can reflect the basic trend of a formation, enhance the accuracy of velocity analysis, and improve the imaging accuracy of deep structures in seismic exploration. However, the low-frequency information obtained by conventional seismic acquisition is seriously contaminated by noise and is further lost during processing. Compressed sensing (CS) theory can exploit the sparsity of the reflection coefficient in the frequency domain to expand the low-frequency components reasonably, thus improving the data quality. However, the conventional CS method is strongly affected by noise, and effective expansion of the low-frequency information is achieved only at a high signal-to-noise ratio (SNR). In this paper, well information is introduced into the objective function to constrain the inversion of the estimated reflection coefficient, and the low-frequency component of the original data is then expanded by extracting the low-frequency information of the reflection coefficient. Model tests and field-data processing results show that the CS-based objective function for estimating the reflection coefficient, constrained by well-logging data, improves the noise resistance of the inversion and expands the low-frequency information well even at a low SNR.
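A heavily simplified 1-D sketch of the idea is given below: a sparse reflection-coefficient series is estimated from band-limited data by ISTA, with an extra quadratic term pulling the estimate toward the well-log reflectivity. The wavelet, the regularization weights, and the form of the well constraint are assumptions, not the paper's objective function:

```python
import numpy as np

def ricker(n=64, f=25.0, dt=0.002):
    """Ricker wavelet of dominant frequency f (assumed source wavelet)."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def conv_matrix(w, n):
    """Circular convolution matrix whose columns are shifted copies of w."""
    col = np.pad(w, (0, n - len(w)))
    return np.array([np.roll(col, k - len(w) // 2) for k in range(n)]).T

def cs_inversion(d, W, r_well, lam=0.02, mu=0.05, n_iter=300):
    """ISTA for  min (1/2)||W r - d||^2 + (mu/2)||r - r_well||^2 + lam ||r||_1."""
    step = 1.0 / (np.linalg.norm(W, 2) ** 2 + mu)
    r = np.zeros(len(d))
    for _ in range(n_iter):
        grad = W.T @ (W @ r - d) + mu * (r - r_well)   # data misfit + well constraint
        z = r - step * grad
        r = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold (sparsity)
    return r

# Synthetic test: sparse reflectivity, noisy band-limited data, log reflectivity as the well term.
rng = np.random.default_rng(2)
n = 256
r_true = np.zeros(n)
r_true[rng.choice(n, 12, replace=False)] = rng.normal(0.0, 0.2, 12)
W = conv_matrix(ricker(), n)
d = W @ r_true + rng.normal(0.0, 0.02, n)
r_hat = cs_inversion(d, W, r_well=r_true)
```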


2021 ◽  
Author(s):  
Michał Mierczak ◽  
Jerzy Karczewski

The article describes the establishment of the location of agate geodes using the GPR method in the area of the Simota gully (Lesser Poland Voivodeship). Agates (a multicolored gemstone variety of the chalcedony group) have multifaceted value, which motivated this study. Traditional methods of geode location are less reliable, hence the attempt to use the GPR method. Measurements were taken at two test sites with subsurface geology of weathered melaphyre and pyroclastic deposits using a GPR system (ProEx). A high-frequency antenna (1.6 GHz) was used along pre-established profiles 6 m in length with 10-cm intervals. Furthermore, simple soil tests using a soil sampler were made prior to the GPR measurements. The GPR results show significantly high attenuation of the electromagnetic energy, interpreted to be due to clay components of the regolith. Advanced signal-processing procedures (such as signal attributes) were applied to the data to enhance it and aid interpretation. Other anomalies depicted on the radargrams were attributed to roots, pieces of melaphyre, and the targeted agates. Furthermore, to ascertain the reflection coefficients recorded in the GPR data, in-situ samples (root pieces, melaphyres, agates) were tested in the laboratory for electric permittivity. Based on the interpretation results, several agate geodes were dug out from the ground.
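For context, the normal-incidence reflection coefficient at a boundary between two low-loss media follows directly from their relative permittivities, which is why the in-situ samples were tested for permittivity in the laboratory. The snippet below is a purely illustrative calculation with assumed permittivity values, not the paper's measurements:

```python
import math

def reflection_coefficient(eps_1: float, eps_2: float) -> float:
    """R = (sqrt(eps_1) - sqrt(eps_2)) / (sqrt(eps_1) + sqrt(eps_2)) for low-loss media."""
    return (math.sqrt(eps_1) - math.sqrt(eps_2)) / (math.sqrt(eps_1) + math.sqrt(eps_2))

# Hypothetical permittivities for a clay-rich regolith host and an agate/chalcedony geode:
print(reflection_coefficient(eps_1=15.0, eps_2=6.0))   # host -> geode, illustrative values only
```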


Author(s):  
Vladimir Barannik ◽  
Andrii Krasnorutsky ◽  
Sergii Shulgin ◽  
Valerii Yeroshenko ◽  
Yevhenii Sidchenko ◽  
...  

The subject of research in the article is the processing of video images using an orthogonal transformation for data transmission in information and telecommunication networks. The aim is to build a method of video image compression that maintains delivery efficiency at a given informative probability, providing a gain in the delivery time of compressed video images and the required level of availability and authenticity when transmitting video data, with strict statistical control and a controlled loss of quality. Task: to study the known algorithms for selective processing of static video at the stage of approximation and statistical coding of the data based on the JPEG platform. The methods used are a JPEG-based algorithm, approximation methods using the orthogonal transformation of information blocks, and arithmetic coding. The scientific task solved is the development of methods for reducing the computational complexity of the transformations (compression and decompression) of static video images in equipment for processing visual-information signals, which increases the efficiency of information delivery. The following results were obtained. A method of video image compression preserving delivery efficiency at the set informative probability is developed. It fulfills the stated requirements while preserving structural-statistical economy, providing a gain in the delivery time of compressed images, relative to known methods, of up to 2 times on average. This gain arises because, with only a slight difference in compression ratio for highly saturated images compared to the JPEG-2000 method, the processing time of the developed method is lower by at least 34%. Moreover, as the volume of transmitted images and the data transmission speed in the communication channel increase, the gain in delivery time for the developed method grows. The loss of quality of the compressed/restored image does not exceed 2% by RMS, or is not worse than 45 dB by PSNR, which is unnoticeable to the human eye. Conclusions. The scientific novelty of the obtained results is as follows: for the first time, a method of classification (separate) coding (compression) of the high-frequency and low-frequency components of the Walsh transformants of video images is proposed and investigated, which makes it possible to take into account their different dynamic ranges and to reduce statistical redundancy using arithmetic coding. This method ensures the necessary level of availability and authenticity when transmitting video data while maintaining strict statistical control. Note that the proposed method fulfills the set tasks of increasing the efficiency of information delivery. In addition, the method for reducing the time complexity of converting highly saturated video images using their representation by transformants of the discrete Walsh transformation was further developed. It is substantiated that a promising direction for improving image compression methods is the application of orthogonal transformations based on integer piecewise-constant functions, together with integer arithmetic coding of the transformant values. It is also substantiated that the joint use of the Walsh transformation and arithmetic coding reduces the time of compression and recovery of images and removes additional statistical redundancy.
To further increase the degree of compression, a classification coding of the low-frequency and high-frequency components of the Walsh transformants is developed. It is shown that an additional reduction in statistical redundancy in the arrays of low-frequency components of the Walsh transformants is achieved through their difference representation. Recommendations are substantiated for the parameters of the compression method that provide the lowest total time of information delivery.
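As a rough illustration of the classification idea (not the authors' codec), the sketch below transforms 8x8 blocks with a sequency-ordered Walsh-Hadamard matrix and separates low-sequency and high-sequency coefficients into two streams with very different dynamic ranges, which is the property that separate arithmetic-coding models can exploit. The block size and the low-frequency mask are assumptions:

```python
import numpy as np
from scipy.linalg import hadamard

# Sequency-ordered (Walsh) 8x8 transform matrix built from the Hadamard matrix.
H = hadamard(8)
order = np.argsort([(row[1:] != row[:-1]).sum() for row in H])
W8 = H[order] / np.sqrt(8)

def block_wht(block: np.ndarray) -> np.ndarray:
    """2-D Walsh transform of an 8x8 block."""
    return W8 @ block @ W8.T

def classify_coefficients(img: np.ndarray, lf_size: int = 2):
    """Split Walsh coefficients of all 8x8 blocks into low- and high-frequency streams."""
    lf_mask = np.zeros((8, 8), dtype=bool)
    lf_mask[:lf_size, :lf_size] = True                      # assumed low-frequency corner
    lf_stream, hf_stream = [], []
    h, w = img.shape
    for i in range(0, h - 7, 8):
        for j in range(0, w - 7, 8):
            c = block_wht(img[i:i + 8, j:j + 8].astype(float))
            lf_stream.append(c[lf_mask])
            hf_stream.append(c[~lf_mask])
    return np.concatenate(lf_stream), np.concatenate(hf_stream)

# The two coefficient streams have very different dynamic ranges, which is what
# the classification coding exploits before arithmetic coding.
img = np.random.default_rng(3).integers(0, 256, (64, 64))
lf, hf = classify_coefficients(img)
print("LF std:", lf.std(), " HF std:", hf.std())
```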


Author(s):  
Michio Ueno ◽  
Yoshiaki Tsukada

The authors propose a method to estimate the full-scale propeller torque, consisting of low-frequency and high-frequency components, in waves using measured data from a free-running model ship. The duct fan auxiliary thruster (DFAT) [1] and the rudder-effectiveness and speed correction (RSC) [2,3] ensure that the model-ship motion under external forces is similar to full scale, where RSC controls the model-ship propeller rate of revolution and the auxiliary thrust depending on the measured model-ship speed. By analyzing the fluctuating component of the effective inflow velocity to the propeller due to waves, the method estimates the full-scale fluctuating propeller torque in waves. The method also makes it possible to incorporate into free-running model-ship tests any engine model simulating the interaction between propeller torque and engine torque. A trial application of the method illustrates the character of the full-scale fluctuating propeller torque in comparison with that of the model ship.
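One ingredient the abstract implies is separating a torque record into its slowly varying part and its wave-induced fluctuation. The sketch below is only an assumed illustration of such a band split; the cut-off frequency and the synthetic signal are not values from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def split_torque(q: np.ndarray, fs: float, f_cut: float = 0.05):
    """Return (low-frequency, high-frequency) parts of a torque signal q sampled at fs [Hz]."""
    b, a = butter(4, f_cut / (fs / 2.0), btype="low")
    q_low = filtfilt(b, a, q)
    return q_low, q - q_low

# Synthetic example: slow trend (speed change) plus a wave-frequency oscillation.
fs = 10.0
t = np.arange(0, 600, 1.0 / fs)
q = 100.0 + 5.0 * np.tanh((t - 300) / 60) + 2.0 * np.sin(2 * np.pi * 0.6 * t)
q_low, q_high = split_torque(q, fs, f_cut=0.1)
```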


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-20
Author(s):  
Gang Zhang ◽  
Hongchi Liu ◽  
Pingli Li ◽  
Meng Li ◽  
Qiang He ◽  
...  

Power system load forecasting is an important part of power system scheduling. Since the power system load is easily affected by environmental factors such as weather and time, it exhibits high volatility and multiple frequency components. In order to improve prediction accuracy, this paper proposes a load forecasting method based on variational mode decomposition (VMD) and feature correlation analysis. Firstly, the original load sequence is decomposed using VMD to obtain a series of intrinsic mode functions (IMFs), referred to below as modal components, which are divided into high-frequency, intermediate-frequency, and low-frequency signals according to their fluctuation characteristics. Then, the feature information related to power system load changes is collected, and the correlation between each IMF and each piece of feature information is analyzed using maximum relevance minimum redundancy (mRMR) based on mutual information to obtain the best feature set for each IMF. Finally, each component is input into the prediction model together with its feature set: a back propagation neural network (BPNN) is used to predict the high-frequency components, a least-squares support vector machine (LS-SVM) is used to predict the intermediate- and low-frequency components, and a BPNN is also used to integrate the prediction results into the final load prediction value. The prediction results of the method in this paper are compared with those of prediction models such as the autoregressive moving average model (ARMA), LS-SVM, BPNN, empirical mode decomposition (EMD), ensemble empirical mode decomposition (EEMD), and VMD. An example analysis based on the data of Xi’an Power Grid Corporation shows that the prediction accuracy of the method in this paper is higher.
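As an illustration of the mRMR step (a sketch under assumed data, not the authors' implementation), the function below greedily selects features that share high mutual information with a given IMF while remaining weakly redundant with the features already chosen, using scikit-learn's mutual-information estimator:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mrmr_select(X: np.ndarray, y: np.ndarray, k: int = 5) -> list:
    """Greedy mRMR: maximize MI(feature, y) minus mean MI(feature, already-selected features)."""
    relevance = mutual_info_regression(X, y)
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < k:
        scores = []
        for j in remaining:
            if selected:
                redundancy = mutual_info_regression(X[:, selected], X[:, j]).mean()
            else:
                redundancy = 0.0
            scores.append(relevance[j] - redundancy)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical usage: for each IMF obtained from VMD, pick its own best feature subset, e.g.
# cols = mrmr_select(weather_and_calendar_features, imf_series, k=5)
```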


1997 ◽  
Vol 25 ◽  
pp. 177-182 ◽  
Author(s):  
J. A. Richter-Menge

In situ measurements of ice stress were made on a multi-year floe in the Alaskan Beaufort Sea over a 6 month period, beginning in October 1993. The data suggest that, in this region of the Arctic during this experiment, there were two main sources of stress: a thermally induced stress caused by changes in air temperature, and a stress generated by ice motion. Due to the natural damping of the snow and ice above the sensor, the thermally induced stresses are low-frequency (order of days). Stresses associated with periods of ice motion have both high-frequency (order of hours) and low-frequency content. The relative significance of these sources of stress is seasonal, reflecting changes in the strength and continuity of the pack.


2014 ◽  
Vol 962-965 ◽  
pp. 2856-2862
Author(s):  
De Yi Sang ◽  
Jian Jun Zhao ◽  
Li Bin Yang

Noise introduced during the calibration process of the landing guidance radar can cause serious accidents. This paper analyzes the principles of the EMD and wavelet denoising methods, points out the deficiencies of pure EMD and pure wavelet denoising, and proposes a denoising method based on EMD and wavelets, with an improved discrimination method for high- and low-frequency components and an improved wavelet thresholding rule. The signal is first decomposed by EMD; the high-frequency components are then denoised with wavelets; finally, the low-frequency components and the denoised high-frequency components are combined to obtain the denoised data.
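A compact sketch of such a combined scheme is given below, using the third-party PyEMD and PyWavelets packages; the number of IMFs treated as high-frequency and the universal-threshold rule are assumptions standing in for the paper's improved discrimination and thresholding methods:

```python
import numpy as np
import pywt
from PyEMD import EMD

def emd_wavelet_denoise(x: np.ndarray, n_high: int = 3, wavelet: str = "db4") -> np.ndarray:
    imfs = EMD().emd(x)                              # intrinsic mode functions, highest frequency first
    n_high = min(n_high, len(imfs))

    denoised_high = np.zeros_like(x)
    for imf in imfs[:n_high]:                        # wavelet-threshold the high-frequency IMFs
        coeffs = pywt.wavedec(imf, wavelet)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(len(imf)))  # universal threshold (assumed rule)
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        denoised_high += pywt.waverec(coeffs, wavelet)[: len(imf)]

    low = imfs[n_high:].sum(axis=0) if len(imfs) > n_high else 0.0
    return denoised_high + low                       # recombine with the low-frequency IMFs

# Synthetic check: smooth two-tone signal plus noise (assumed test data).
t = np.linspace(0, 1, 2048)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
noisy = clean + 0.3 * np.random.default_rng(4).normal(size=t.size)
denoised = emd_wavelet_denoise(noisy)
```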

