Adaptive Algorithm on Block-Compressive Sensing and Noisy Data Estimation

Electronics ◽  
2019 ◽  
Vol 8 (7) ◽  
pp. 753
Author(s):  
Yongjun Zhu ◽  
Wenbo Liu ◽  
Qian Shen

In this paper, an altered adaptive algorithm for block-compressive sensing (BCS) is developed using saliency and error analysis. It has been observed that the performance of BCS can be improved by rational block partitioning and uneven sampling ratios, as well as by adopting error analysis in the reconstruction process. The weighted mean information entropy is adopted as the basis for the partitioning of BCS, which results in a flexible block group. Furthermore, a synthetic feature (SF) based on local saliency and variance is introduced for step-less adaptive sampling, which works well in distinguishing and sampling smooth blocks and detail blocks. The error analysis method is used to estimate the optimal number of iterations in sparse reconstruction. Based on the above points, an altered adaptive block-compressive sensing algorithm with flexible partitioning and error analysis is proposed in the article. On the one hand, it provides a feasible solution for the partitioning and sampling of an image; on the other hand, it changes the iteration stop condition of reconstruction and thereby improves the quality of the reconstructed image. The experimental results verify the effectiveness of the proposed algorithm and show a clear improvement in the Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), Gradient Magnitude Similarity Deviation (GMSD), and Block Effect Index (BEI) indexes.
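The adaptive-sampling idea can be sketched in a few lines: each block receives a measurement budget that grows with a saliency-like statistic. In this minimal sketch, plain block variance stands in for the paper's synthetic feature, the entropy-based flexible partitioning and error-analysis stopping rule are omitted, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def block_cs_sample(image, block=8, base_ratio=0.2, extra_ratio=0.3):
    """Sample each block with a Gaussian matrix whose row count grows
    with the block's normalised variance (a stand-in for the paper's
    saliency/variance synthetic feature)."""
    h, w = image.shape
    out = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            blk = image[i:i + block, j:j + block].ravel()
            n = blk.size
            v = blk.var() / (image.var() + 1e-12)   # saliency-like statistic
            ratio = min(1.0, base_ratio + extra_ratio * v)
            m = max(1, int(round(ratio * n)))
            phi = rng.standard_normal((m, n)) / np.sqrt(m)  # sensing matrix
            out.append((phi, phi @ blk))
    return out

img = rng.random((32, 32))
meas = block_cs_sample(img)
print(len(meas))  # 16 blocks for a 32x32 image with 8x8 blocks
```

Detail-rich blocks end up with more measurement rows than smooth ones, which is the essence of the uneven sampling ratio described above.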

2021 ◽  
Author(s):  
Parnasree Chakraborty ◽  
Tharini C

Abstract With the rapid development of real-time and dynamic applications, Compressive Sensing or Compressed Sensing (CS) has been used for medical image and biomedical signal compression over the last decades. The performance of CS-based compression depends mostly on the decoding methods rather than on the CS encoding methods used in practice. Many CS encoding and decoding algorithms have been reported in the literature. However, a comparative study of the performance metrics of CS encoding with and without block processing has not been investigated so far. This paper proposes a block-CS-based compression technique for medical images and signals, and compares it with standard CS compression. The proposed algorithm divides the input medical images and signals into blocks, and each block is processed in parallel to enable faster computation. Three performance indices, i.e., the peak signal-to-noise ratio (PSNR), reconstruction time (RT), and structural similarity index (SSIM), were tested to observe their changes with respect to the compression ratio. The results showed that the block CS algorithm performed better than standard CS-based compression. More specifically, parallel block CS gave the best results, outperforming standard CS with shorter reconstruction time and satisfactory PSNR and SSIM.
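The block-parallel measurement step can be sketched as follows, assuming a fixed Gaussian sensing matrix shared by all blocks and a thread pool for the parallelism; the block size and sampling ratio are illustrative, and reconstruction is not shown:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(1)

BLOCK, RATIO = 64, 0.5
M = int(RATIO * BLOCK)
PHI = rng.standard_normal((M, BLOCK)) / np.sqrt(M)  # shared sensing matrix

def measure_block(blk):
    return PHI @ blk  # one block's CS measurement

def parallel_block_cs(signal):
    """Split a 1-D signal into fixed-size blocks and measure each one
    concurrently; block independence is what enables the speed-up."""
    blocks = [signal[i:i + BLOCK] for i in range(0, len(signal), BLOCK)]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(measure_block, blocks))

sig = rng.standard_normal(256)
ys = parallel_block_cs(sig)
print(len(ys), ys[0].shape)  # 4 blocks, each compressed to 32 samples
```

Because the blocks are measured (and, in the paper, reconstructed) independently, wall-clock time scales down with the number of workers, which is the source of the reduced reconstruction time reported above.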


2018 ◽  
pp. 79-86
Author(s):  
T. V. Zhertunova ◽  
E. S. Yanakova

This article describes the existing problem of the absence of resource-light denoising algorithms capable of producing good-quality output images under noise of varying intensity without blurring boundaries, contours, and basic structure. The adaptive algorithm proposed in the article solves this problem through developed algorithms that split the search region into two sets of points, similar to and different from the reference pixel, and that adapt the kernel type to the image region depending on whether structural or smooth pixels are detected. The results of the proposed algorithm and the standard non-local means method are compared using the peak signal-to-noise ratio and structural similarity metrics. The developed adaptive algorithm is found to surpass the standard method by far, both in numerical results and in the quality of image processing.
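The two-set split can be illustrated for a single pixel: neighbours whose intensity distance exceeds a threshold are excluded before the non-local-means-style weighting. The thresholds `tau` and `h` are hypothetical, and the full algorithm's kernel adaptation is not reproduced here:

```python
import numpy as np

def adaptive_nlm_pixel(neighbours, center, h=0.1, tau=0.2):
    """Denoise one pixel: keep only the 'similar' set (distance < tau),
    discard the dissimilar set, then apply Gaussian weights as in NLM."""
    d = np.abs(neighbours - center)
    similar = d < tau                      # set of points similar to the pixel
    w = np.exp(-(d[similar] / h) ** 2)     # weights over the similar set only
    if w.sum() == 0:
        return float(center)
    return float((w * neighbours[similar]).sum() / w.sum())

vals = np.array([0.50, 0.52, 0.49, 0.95, 0.51])  # 0.95 acts as an edge/outlier
print(round(adaptive_nlm_pixel(vals, 0.50), 3))  # outlier excluded from the average
```

Discarding the dissimilar set is what preserves edges: a bright pixel across a boundary never contributes to the average, unlike in plain non-local means with soft weights only.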


2020 ◽  
Vol 25 (2) ◽  
pp. 86-97
Author(s):  
Sandy Suryo Prayogo ◽  
Tubagus Maulana Kusuma

DVB is currently the most widely used digital television transmission standard. The most important element of a transmission process is the image quality of the video received after passing through that transmission. Many factors can affect image quality; one of them is the frame structure of the video. In this paper, the sensitivity of MPEG-4 video to frame structure is tested on DVB-T transmission. Testing was carried out using MATLAB and Simulink simulations; ffmpeg was also used to prepare the format and settings of the video to be simulated. The video variables varied were the bitrate and the group-of-pictures (GOP) structure, while the transmission variable varied was the signal-to-noise ratio (SNR) on the AWGN channel between the transmitter (Tx) and the receiver (Rx). The experiments yield the average image quality of the video, measured with the structural similarity index (SSIM); the bit error rate (BER) of the DVB-T bitstream was also measured. The experiments show how sensitive the bitrate and GOP of the video are in DVB-T transmission, with the conclusion that the larger the bitrate, the worse the image quality, and the smaller the GOP value, the better the quality. Future research is expected to apply deep learning to obtain the right frame structure for particular conditions in the digital television transmission process.
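SSIM, the quality metric used in this study, can be computed in simplified form over a whole image; the standard metric averages the same expression over local Gaussian-weighted windows:

```python
import numpy as np

def global_ssim(x, y, L=1.0):
    """Single-window SSIM over two whole images; the standard metric
    averages this expression over local windows. L is the dynamic range."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)
    return num / den

a = np.linspace(0, 1, 64).reshape(8, 8)
print(round(global_ssim(a, a), 4))  # identical images score 1.0
```

Any distortion of luminance, contrast, or structure pulls the score below 1, which is why SSIM tracks perceived quality better than raw pixel error.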


2014 ◽  
Vol 35 (3) ◽  
pp. 568-574 ◽  
Author(s):  
Zhi-zhen Zhu ◽  
Zhi-da Zhang ◽  
Fa-lin Liu ◽  
Bin-bing Li ◽  
Chong-bin Zhou

Photonics ◽  
2021 ◽  
Vol 8 (7) ◽  
pp. 280
Author(s):  
Huadong Zheng ◽  
Jianbin Hu ◽  
Chaojun Zhou ◽  
Xiaoxi Wang

Computer holography is a technology that uses a mathematical model of optical holography to generate digital holograms. It has wide and promising applications in various areas, especially holographic display. However, traditional computational algorithms for the generation of phase-type holograms based on iterative optimization have a built-in tradeoff between calculation speed and accuracy, which severely limits the performance of computational holograms in advanced applications. Recently, several deep-learning-based computational methods for generating holograms have gained more and more attention. In this paper, a convolutional neural network for the generation of multi-plane holograms and its training strategy are proposed, using a multi-plane iterative angular spectrum algorithm (ASM). The well-trained network shows an excellent ability to generate phase-only holograms for multi-plane input images and to reconstruct correct images in the corresponding depth planes. Numerical simulations and optical reconstructions show that the accuracy of this method is almost the same as that of traditional iterative methods, while the computation time decreases dramatically. The resulting images show high quality under analysis of image performance indicators, e.g., peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and contrast ratio. Finally, the effectiveness of the proposed method is verified through experimental investigations.
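The angular spectrum propagation at the core of the iterative algorithm is compact to write down: transform the field, multiply by the free-space transfer function, transform back. A numpy sketch with illustrative wavelength, pixel pitch, and distance (the iterative multi-plane algorithm applies this forward and backward between hologram and image planes with phase constraints, which is not shown):

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field a distance z via the angular spectrum
    method: FFT, multiply by the free-space transfer function, inverse
    FFT. Evanescent components are zeroed."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0)) / wavelength
    H = np.where(arg > 0, np.exp(1j * kz * z), 0)   # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

u0 = np.ones((64, 64), dtype=complex)              # unit-amplitude plane wave
u1 = angular_spectrum(u0, 633e-9, 10e-6, 1e-3)
print(round(float(np.abs(u1).mean()), 6))  # a plane wave keeps unit amplitude
```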


2021 ◽  
Vol 21 (1) ◽  
pp. 1-20
Author(s):  
A. K. Singh ◽  
S. Thakur ◽  
Alireza Jolfaei ◽  
Gautam Srivastava ◽  
MD. Elhoseny ◽  
...  

Recently, due to the increase in the popularity of the Internet, the problem of digital data security over the Internet has been growing at a phenomenal rate. Watermarking is used in various notable applications to secure digital data from unauthorized individuals. To achieve this, in this article we propose a joint encryption-then-compression based watermarking technique for digital document security. This technique offers confidentiality, copyright protection, and strong compression performance. The proposed method involves three major steps: (1) embedding of multiple watermarks through the non-subsampled contourlet transform, the redundant discrete wavelet transform, and singular value decomposition; (2) encryption and compression via SHA-256 and Lempel-Ziv-Welch (LZW) coding, respectively; and (3) extraction/recovery of the multiple watermarks from the possibly distorted cover image. The performance estimations are carried out on various images under different attacks, and the efficiency of the system is determined in terms of peak signal-to-noise ratio (PSNR), normalized correlation (NC), structural similarity index measure (SSIM), number of pixels change rate (NPCR), unified average changed intensity (UACI), and compression ratio (CR). Furthermore, a comparative analysis of the proposed system with similar schemes indicates its superiority over them.
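The LZW stage in step (2) is a dictionary coder; a minimal encoder (codes emitted as plain integers, without the bit-packing a real codec would add) looks like:

```python
def lzw_encode(data: bytes):
    """Minimal LZW encoder: grow a dictionary of previously seen
    substrings and emit their codes (plain ints, no bit-packing)."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)       # register the new substring
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

codes = lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT")
print(len(codes))  # 24 input bytes compressed to 16 codes
```

Repeated substrings are replaced by ever-longer dictionary references, which is what gives LZW its compression on redundant data such as watermark-embedded document images.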


2021 ◽  
pp. 1-10
Author(s):  
Hongguang Pan ◽  
Fan Wen ◽  
Xiangdong Huang ◽  
Xinyu Lei ◽  
Xiaoling Yang

In the field of super-resolution image reconstruction, the deep plug-and-play super-resolution (DPSR) algorithm, as a learning-based method, can find the blur kernel using existing blind deblurring methods. However, DPSR is not flexible enough when processing images with high- and low-frequency information. Considering that a channel attention mechanism can distinguish low-frequency information and features in low-resolution images, in this paper we first introduce this mechanism and design a new residual channel attention network (RCAN); the RCAN is then adopted to replace the deep feature extraction part of DPSR to achieve adaptive adjustment of channel characteristics. Through four test experiments on the Set5, Set14, Urban100, and BSD100 datasets, we find that, under different blur kernels and different scale factors, the average peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) values of our proposed method increase by 0.31 dB and 0.55%, respectively; under different noise levels, the average PSNR and SSIM values increase by 0.26 dB and 0.51%, respectively.
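The channel attention mechanism referred to above rescales each feature channel by a learned gate computed from its global statistics. Below is a numpy sketch of one squeeze-and-excitation-style gate with random (untrained) weights; RCAN embeds such gates inside residual blocks, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

def channel_attention(x, w1, w2):
    """Channel attention on a (C, H, W) feature map: global average
    pool -> two-layer bottleneck with ReLU -> sigmoid gate -> rescale
    each channel by its gate value."""
    s = x.mean(axis=(1, 2))                 # squeeze: per-channel statistic
    z = np.maximum(w1 @ s, 0)               # excitation: ReLU bottleneck
    g = 1.0 / (1.0 + np.exp(-(w2 @ z)))     # sigmoid gate in (0, 1)
    return x * g[:, None, None]             # per-channel rescale

C, r = 8, 2                                 # channels, bottleneck reduction
x = rng.standard_normal((C, 16, 16))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
y = channel_attention(x, w1, w2)
print(y.shape)  # (8, 16, 16): same map, channel-wise reweighted
```

After training, channels carrying high-frequency detail receive gates near 1 while redundant low-frequency channels are suppressed, which is the adaptive adjustment described above.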


Author(s):  
Maryam Abedini ◽  
Horriyeh Haddad ◽  
Marzieh Faridi Masouleh ◽  
Asadollah Shahbahrami

This study proposes an image denoising algorithm based on sparse representation and Principal Component Analysis (PCA). The proposed algorithm includes the following steps. First, the noisy image is divided into overlapped [Formula: see text] blocks. Second, the discrete cosine transform is applied as a dictionary for the sparse representation of the vectors created from the overlapped blocks. To calculate the sparse vector, the orthogonal matching pursuit algorithm is used. Then, the dictionary is updated by means of the PCA algorithm to achieve the sparsest representation of the vectors. Since the signal energy, unlike the noise energy, is concentrated in a small dataset after transforming into the PCA domain, the signal and noise can be well distinguished. The proposed algorithm was implemented in a MATLAB environment, and its performance was evaluated on standard grayscale images under different standard deviations of white Gaussian noise, by means of the peak signal-to-noise ratio, structural similarity indexes, and visual effects. The experimental results demonstrate that the proposed denoising algorithm achieves significant improvement over the dual-tree complex discrete wavelet transform and K-singular value decomposition image denoising methods. It also obtains results competitive with the block-matching and 3D filtering method, which is the current state of the art for image denoising.
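The orthogonal matching pursuit step can be sketched as a greedy loop: pick the atom most correlated with the residual, refit by least squares on the selected support, repeat. A numpy sketch with a random dictionary standing in for the DCT/PCA dictionaries used in the study:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily add the atom most
    correlated with the residual, then least-squares refit on the
    selected support."""
    residual, idx = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x[idx] = coef
    return x

rng = np.random.default_rng(3)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
x_true = np.zeros(32)
x_true[[3, 10]] = [1.5, -2.0]             # 2-sparse ground truth
y = D @ x_true
x_hat = omp(D, y, 2)
print(np.count_nonzero(x_hat))            # support of at most 2 atoms
```

Because the refit makes the residual orthogonal to every selected atom, each iteration picks a new atom, so the sparsity of the estimate is bounded by the iteration count `k`.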


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3510 ◽  
Author(s):  
Zhijian Wang ◽  
Junyuan Wang ◽  
Wenhua Du

Variational Mode Decomposition (VMD) can decompose signals into multiple intrinsic mode functions (IMFs). In recent years, VMD has been widely used in fault diagnosis. However, it requires a preset number of decomposition layers K and is sensitive to background noise. Therefore, in order to determine K adaptively, Permutation Entropy Optimization (PEO) is proposed in this paper. This algorithm can adaptively determine the optimal number of decomposition layers K according to the characteristics of the signal to be decomposed. At the same time, to address the sensitivity of VMD to noise, this paper proposes a Modified VMD (MVMD) based on the idea of Noise-Aided Data Analysis (NADA). The algorithm first adds paired positive and negative white noise to the original signal and then decomposes it with VMD. After repeated cycles, the added noise components cancel each other out. The IMFs at each layer are then ensemble-averaged, and the signal is reconstructed from the averaged results. MVMD is used for the final decomposition of the reconstructed signal. The algorithm is applied to simulated signals and measured gearbox signals with multiple fault characteristics. Compared with the decomposition results of EEMD and VMD, it is shown that the algorithm can not only effectively improve the signal-to-noise ratio (SNR) of the signal, but also extract the multiple fault features of the gearbox in a strong-noise environment. The effectiveness of this method is verified.
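Permutation entropy, the statistic behind PEO, measures how evenly a signal's ordinal patterns are distributed: regular signals score low, broadband noise scores near 1. A minimal sketch (the embedding dimension and delay are illustrative, and the K-selection loop of PEO is not reproduced):

```python
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, delay=1):
    """Normalised permutation entropy: frequency histogram of ordinal
    patterns of length m, then Shannon entropy, scaled into [0, 1]."""
    counts = {}
    for i in range(len(x) - (m - 1) * delay):
        pattern = tuple(np.argsort(x[i:i + m * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / np.log(factorial(m)))

t = np.arange(512)
pe_sine = permutation_entropy(np.sin(0.1 * t))                          # regular
pe_noise = permutation_entropy(np.random.default_rng(4).standard_normal(512))
print(pe_sine < pe_noise)  # noise has richer ordinal structure
```

A PEO-style selector would evaluate such an entropy on the IMFs produced for each candidate K and choose the K whose modes best separate regular components from noise.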

