Enhancement of Curve-Fitting Image Compression Using Hyperbolic Function

Symmetry, 2019, Vol. 11(2), pp. 291
Author(s): Walaa Khalaf, Dhafer Zaghar, Noor Hashim

Image compression is one of the most interesting fields of image processing and is used to reduce image size. 2D curve fitting is a method that converts the image data (pixel values) into a set of mathematical equations that represent the image. These equations have a fixed form with a few coefficients estimated from the image, which has been divided into several blocks. Since the number of coefficients is lower than the number of pixels in the original block, curve fitting can be used as a tool for image compression. In this paper, a new curve-fitting model is proposed, derived from the symmetric hyperbolic tangent function and using only three coefficients. The main disadvantages of previous approaches were the additional errors and degradation of edges in the reconstructed image, as well as the blocking effect. To overcome this deficiency, it is proposed that the symmetric hyperbolic tangent (tanh) function be used, instead of the classical 1st- and 2nd-order curve-fitting functions which are asymmetric, for reformulating the blocks of the image. Owing to the symmetric property of the hyperbolic tangent function, this reduces the reconstruction error and improves the fine details and texture of the reconstructed image. The results of this work have been tested and compared with 1st-order curve fitting and the standard image compression (JPEG) method. The main advantages of the proposed approach are: strengthening the edges of the image, removing the blocking effect, improving the Structural SIMilarity (SSIM) index, and increasing the Peak Signal-to-Noise Ratio (PSNR) by up to 20 dB. Simulation results show that the proposed method yields a significant improvement in the objective and subjective quality of the reconstructed image.
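As a rough illustration of block-wise curve fitting with a three-coefficient tanh model, the sketch below fits a surface f(x, y) = a·tanh(b·x + c·y) to each 8×8 block. The model form, the mid-gray shift of 128, the block size, and the coordinate normalization are illustrative assumptions, not the authors' published formulation.

```python
# Illustrative sketch only: fit a 3-coefficient tanh surface to an 8x8 block.
# The model f(x, y) = a*tanh(b*x + c*y), the mid-gray shift of 128, and the
# normalized block coordinates are assumptions, not the paper's exact scheme.
import numpy as np
from scipy.optimize import curve_fit

def tanh_surface(xy, a, b, c):
    x, y = xy
    return a * np.tanh(b * x + c * y)

def block_coords(h=8, w=8):
    ys, xs = np.mgrid[0:h, 0:w]
    xs = (xs - w / 2) / (w / 2)          # normalize to roughly [-1, 1]
    ys = (ys - h / 2) / (h / 2)
    return np.vstack([xs.ravel(), ys.ravel()])

def fit_block(block):
    """Return the 3 coefficients that stand in for the block's 64 pixels."""
    z = block.astype(np.float64).ravel() - 128.0     # remove mid-gray offset
    coeffs, _ = curve_fit(tanh_surface, block_coords(*block.shape), z,
                          p0=[100.0, 1.0, 0.0])
    return coeffs

def reconstruct_block(coeffs, h=8, w=8):
    return 128.0 + tanh_surface(block_coords(h, w), *coeffs).reshape(h, w)

# Example: an edge-like 8x8 block (64 pixels) is summarized by 3 floats
block = 128 + 100 * np.tile(np.tanh(np.linspace(-2, 2, 8)), (8, 1))
rec = reconstruct_block(fit_block(block))
print("RMSE:", np.sqrt(np.mean((rec - block) ** 2)))
```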

Algorithms, 2019, Vol. 12(12), pp. 255
Author(s): Walaa Khalaf, Abeer Al Gburi, Dhafer Zaghar

Image compression is one of the most important fields of image processing. The rapid development of image acquisition has increased image sizes, which in turn requires more storage space. JPEG is considered the most famous and widely applied image compression algorithm; however, it has shortfalls for some image types. Hence, new techniques are required to improve the quality of reconstructed images as well as to increase the compression ratio. The work in this paper introduces a scheme to enhance the JPEG algorithm. The proposed scheme is a new method which shrinks and stretches images using a smooth filter. In order to remove the blurring artifact which arises from shrinking and stretching the image, a hyperbolic function (tanh) is used to enhance the quality of the reconstructed image. Furthermore, the new approach achieves a higher compression ratio for the same image quality, and/or better image quality for the same compression ratio, than ordinary JPEG for images of large size and more complex content. In essence, it is an optimization applied to enhance the quality (PSNR and SSIM) of the reconstructed image and to reduce the size of the compressed image, especially for large images.
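A minimal Python sketch of the shrink-then-stretch idea around a standard JPEG codec follows. The 0.5 scale factor, the Lanczos/bicubic resampling filters, and the specific tanh contrast mapping used to counteract interpolation blur are illustrative assumptions rather than the paper's settings.

```python
# Sketch of the shrink -> JPEG -> stretch pipeline with a tanh-based
# post-enhancement. Scale factor, filters, and the tanh mapping are assumed.
import io
import numpy as np
from PIL import Image

def compress(img: Image.Image, scale: float = 0.5, quality: int = 75) -> bytes:
    small = img.resize((int(img.width * scale), int(img.height * scale)),
                       resample=Image.LANCZOS)        # smooth shrink filter
    buf = io.BytesIO()
    small.save(buf, format="JPEG", quality=quality)
    return buf.getvalue()

def decompress(data: bytes, size: tuple, gain: float = 1.2) -> Image.Image:
    small = Image.open(io.BytesIO(data))
    big = small.resize(size, resample=Image.BICUBIC)  # stretch back up
    # tanh contrast mapping around mid-gray to counteract interpolation blur
    x = np.asarray(big, dtype=np.float64) / 255.0
    y = 0.5 + 0.5 * np.tanh(gain * (x - 0.5)) / np.tanh(gain * 0.5)
    return Image.fromarray(np.uint8(np.clip(y * 255.0, 0, 255)))

# Usage (replace "lena.png" with any grayscale test image)
img = Image.open("lena.png").convert("L")
data = compress(img)
rec = decompress(data, img.size)
print(f"compressed size: {len(data)} bytes")
```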


Author(s): N. A. N. Azman, Samura Ali, Rozeha A. Rashid, Faiz Asraf Saparudin, Mohd Adib Sarijari

Compression of images is of great interest in applications where efficiency with respect to data storage or transmission bandwidth is sought. The rapid growth of social media and digital networks has given rise to huge amounts of image data being accessed and exchanged daily. However, the larger the image size, the longer it takes to transmit and archive; in other words, high-quality images require a huge amount of transmission bandwidth and storage space. Suitable image compression can help to reduce the image size and improve transmission speed. Lossless image compression is especially crucial in fields such as remote sensing, healthcare networks, security, and military applications, since the quality of images needs to be maintained to avoid errors during analysis or diagnosis. In this paper, a hybrid predictive lossless image compression algorithm is proposed to address these issues. The algorithm combines predictive Differential Pulse Code Modulation (DPCM) and the Integer Wavelet Transform (IWT). Entropy and compression ratio calculations are used to analyze the performance of the designed coding. The analysis shows that the best hybrid predictive algorithm is the sequence DPCM-IWT-Huffman, which reduces the bit sizes by 36%, 48%, 34%, and 13% for the test images Lena, Cameraman, Pepper, and Baboon, respectively.
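The following sketch illustrates the DPCM-IWT-entropy pipeline in Python. The left-neighbour predictor, the one-level integer Haar (S-transform), and the entropy estimate standing in for Huffman coding are assumed stand-ins for the paper's exact predictor, IWT, and coder.

```python
# Sketch of the hybrid pipeline DPCM -> integer wavelet transform -> entropy.
# Predictor, wavelet, and the entropy bound (in place of Huffman) are assumed.
import numpy as np

def dpcm_residual(img):
    """Predict each pixel from its left neighbour; first column is kept as-is."""
    res = img.astype(np.int32)
    res[:, 1:] -= img[:, :-1].astype(np.int32)
    return res

def int_haar_1level(x):
    """Reversible integer Haar (S-transform) applied along rows then columns."""
    def step(a):
        s = a[..., 0::2].astype(np.int32)
        d = a[..., 1::2].astype(np.int32) - s
        s = s + (d >> 1)                       # integer average (floor)
        return np.concatenate([s, d], axis=-1)
    y = step(x)                                # rows
    return step(y.swapaxes(0, 1)).swapaxes(0, 1)  # columns

def entropy_bits(x):
    """First-order entropy in bits per symbol (Huffman lower bound)."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

ys, xs = np.mgrid[0:256, 0:256]
img = ((xs + ys) // 2).astype(np.uint8)        # smooth synthetic test image
coeffs = int_haar_1level(dpcm_residual(img))
print("bits/pixel after DPCM+IWT:", entropy_bits(coeffs))
```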


Sensors, 2021, Vol. 21(16), pp. 5540
Author(s): Nayeem Hasan, Md Saiful Islam, Wenyu Chen, Muhammad Ashad Kabir, Saad Al-Ahmadi

This paper proposes an encryption-based image watermarking scheme using a combination of the second-level discrete wavelet transform (2DWT) and the discrete cosine transform (DCT) with an auto-extraction feature. The 2DWT has been selected based on an analysis of the trade-off between imperceptibility of the watermark and embedding capacity at various levels of decomposition. The DCT is applied to the selected area to gather the image coefficients into a single vector using a zig-zag operation. We utilized the same random bit sequence as both the watermark and the seed for the embedding-zone coefficients. The quality of the reconstructed image was measured in terms of bit correction rate, peak signal-to-noise ratio (PSNR), and similarity index. Experimental results demonstrate that the proposed scheme is highly robust under different types of image-processing attacks. Several image attacks, e.g., JPEG compression, filtering, noise addition, cropping, sharpening, and bit-plane removal, were applied to the watermarked images, and the results of our proposed method outstripped existing methods, especially in terms of the bit correction ratio (100%), which is a measure of bit restoration. The results were also highly satisfactory in terms of the quality of the reconstructed image, which demonstrated high imperceptibility in terms of peak signal-to-noise ratio (PSNR ≥ 40 dB) and structural similarity (SSIM ≥ 0.9) under different image attacks.
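The sketch below shows the general shape of such a DWT+DCT embedding step in Python. The choice of subband (the level-2 horizontal detail band), the additive ±alpha embedding rule, the embedding strength, and the plain flattening (in place of the paper's zig-zag ordering) are all assumptions for illustration, not the published scheme.

```python
# Sketch: embed watermark bits in the DCT of one level-2 DWT subband.
# Subband choice, additive embedding, alpha, and coefficient order are assumed.
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def embed(img, bits, alpha=8.0):
    cA2, (cH2, cV2, cD2), lvl1 = pywt.wavedec2(img.astype(np.float64),
                                               "haar", level=2)
    d = dctn(cH2, norm="ortho")                 # DCT of the level-2 HL band
    flat = d.ravel()
    for k, b in enumerate(bits):                # additive +/- alpha embedding
        flat[k + 1] += alpha if b else -alpha   # skip the DC coefficient
    cH2_marked = idctn(flat.reshape(d.shape), norm="ortho")
    marked = pywt.waverec2([cA2, (cH2_marked, cV2, cD2), lvl1], "haar")
    return np.clip(marked, 0, 255)

ys, xs = np.mgrid[0:256, 0:256]
host = ((xs + ys) / 2).astype(np.float64)       # synthetic host image
wm_bits = np.random.default_rng(0).integers(0, 2, 64)
watermarked = embed(host, wm_bits)
print("PSNR:", 10 * np.log10(255**2 / np.mean((watermarked - host) ** 2)))
```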


2021, Vol. 15, pp. 43-47
Author(s): Ahmad Shahin, Walid Moudani, Fadi Chakik

In this paper we present a hybrid model for image compression based on segmentation and total variation regularization. The main motivation behind our approach is to deliver a decoded image with immediate access to objects/features of interest. We target a high-quality decoded image that is useful on smart devices, for analysis purposes, and for multimedia content-based description standards. The image is approximated as a set of uniform regions: the technique assigns well-defined members to homogeneous regions in order to achieve image segmentation. Adaptive fuzzy c-means (AFcM) is used to guide the clustering of the image data. A second coding stage applies entropy coding to remove the remaining statistical redundancy in the image. In the decompression phase, the reverse process is applied, and the decoded image suffers from missing details due to the coarse segmentation. For this reason, we suggest applying total variation (TV) regularization, such as the Rudin-Osher-Fatemi (ROF) model, to enhance the quality of the coded image. Our experimental results show that ROF may increase the PSNR and hence offer better quality for a set of benchmark grayscale images.
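Since the final step is ROF-style total variation smoothing of the coarsely decoded image, here is a small decoder-side sketch using scikit-image's Chambolle TV denoiser as a stand-in ROF solver; the regularization weight and the piecewise-constant test input are assumptions.

```python
# Sketch of the decoder-side enhancement: ROF-style TV regularization applied
# to a coarsely segmented, decoded image. Weight and test input are assumed.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def enhance_decoded(decoded: np.ndarray, weight: float = 0.08) -> np.ndarray:
    """decoded: piecewise-constant image in [0, 255] from the segmentation decoder."""
    x = decoded.astype(np.float64) / 255.0
    y = denoise_tv_chambolle(x, weight=weight)   # Chambolle solver for ROF-type TV
    return np.clip(y * 255.0, 0, 255)

# Usage: a blocky piecewise-constant stand-in for a segmentation-coded image
decoded = np.kron(np.random.default_rng(1).integers(0, 256, (16, 16)),
                  np.ones((16, 16))).astype(np.float64)
print(enhance_decoded(decoded).shape)
```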


2018, Vol. 29(2), pp. 141
Author(s): Abbas Arab, Jamila Harbi, Amel Abbas

Principal component analysis (PCA) produces a reduction in dimensionality; therefore, the proposed method uses PCA for lossy image compression and evaluates the quality of the reconstructed image. The PSNR values increase as the number of PCA components increases, while the compression ratio (CR), MSE, and other error measures decrease as the number of components increases.
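The abstract leaves the block size and the number of retained components unspecified; the following sketch illustrates the general idea of PCA-based lossy compression on 8×8 blocks, with k = 8 components per block as an assumed setting.

```python
# Sketch of PCA-based lossy image compression: treat 8x8 blocks as vectors,
# keep the top-k principal components, and reconstruct. Block size and k are
# assumptions; the abstract does not fix them.
import numpy as np

def pca_compress(img, block=8, k=8):
    h, w = img.shape
    # Collect all non-overlapping blocks as row vectors
    X = (img[:h - h % block, :w - w % block]
         .reshape(h // block, block, w // block, block)
         .transpose(0, 2, 1, 3)
         .reshape(-1, block * block)
         .astype(np.float64))
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:k]                       # top-k principal directions
    scores = (X - mean) @ basis.T        # k coefficients per block
    Xrec = scores @ basis + mean
    rec = (Xrec.reshape(h // block, w // block, block, block)
               .transpose(0, 2, 1, 3)
               .reshape(h - h % block, w - w % block))
    return scores, basis, mean, rec

ys, xs = np.mgrid[0:256, 0:256]
img = 128 + 100 * np.sin(xs / 20.0) * np.cos(ys / 30.0)   # smooth test image
_, _, _, rec = pca_compress(img)
print("PSNR:", 10 * np.log10(255**2 / np.mean((rec - img) ** 2)))
```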


In the domain of image signal processing, image compression is a significant technique, mainly devised to reduce the redundancy of image data so that image pixels can be transmitted at high quality and resolution. Standard image compression techniques, both lossless and lossy, generate highly compressed images that meet efficient storage and transmission requirements. Many image compression techniques are available, for example JPEG, DWT-, and DCT-based compression algorithms, which provide effective results in terms of high compression ratios with good reconstructed image quality. However, they entail considerable computational complexity in terms of processing, encoding, energy consumption, and hardware design. Addressing these challenges, this paper reviews the most prominent research papers and discusses FPGA architecture designs and the future scope of state-of-the-art image compression techniques. The primary aim is to investigate the research challenges in VLSI design for image compression. The core of the study comprises three parts: standard architecture designs, related work, and open research challenges in the domain of image compression.


2021, pp. 1-16
Author(s): Ying Huang, Qian Wan, Zixiang Chen, Zhanli Hu, Guanxun Cheng, ...

Reducing X-ray radiation is beneficial for reducing the risk of cancer in patients. There are two main approaches for achieving this goal: one is to reduce the X-ray tube current, and the other is to apply sparse-view protocols for image scanning and projection. However, these techniques usually degrade the quality of the reconstructed image, resulting in excessive noise and severe edge artifacts, which seriously affect the diagnostic result. In order to overcome this limitation, this study proposes and tests an algorithm based on guided kernel filtering. The algorithm combines the characteristics of anisotropic edges between adjacent image voxels, expresses the relevant weights with an exponential function, and adjusts the weights adaptively through local gray gradients to better preserve the image structure while suppressing noise. Experiments show that the proposed method can effectively suppress noise and preserve the image structure. Compared with similar algorithms, the proposed algorithm greatly improves the peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and root mean square error (RMSE) of the reconstructed image. The proposed algorithm performs best in the quantitative analysis, which verifies the effectiveness of the proposed method and its good image reconstruction performance. Overall, this study demonstrates that the proposed method can reduce the number of projections required for repeated CT scans and has potential for medical applications in reducing radiation doses.
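A simplified stand-in for the described weighting scheme is sketched below: each neighbour's weight decays exponentially with both spatial distance and local gray-level difference, so edges are preserved while noise is averaged out. This bilateral-style filter only illustrates the exponential, gradient-adaptive weighting; it is not the paper's guided kernel filter, and the window radius and sigmas are assumed values.

```python
# Sketch of exponential, gray-gradient-adaptive neighbour weighting
# (bilateral-style stand-in for the guided kernel filter in the abstract).
import numpy as np

def edge_preserving_filter(img, radius=2, sigma_s=2.0, sigma_r=20.0):
    img = img.astype(np.float64)
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + img.shape[0],
                          radius + dx:radius + dx + img.shape[1]]
            w = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2)      # spatial falloff
                       - (shifted - img) ** 2 / (2 * sigma_r ** 2))   # gray-gradient falloff
            out += w * shifted
            norm += w
    return out / norm

noisy = 128 + 20 * np.random.default_rng(0).standard_normal((128, 128))
print(edge_preserving_filter(noisy).shape)
```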


Author(s): Wajeeha Aslam, Muazzam A. Khan, M. Usman Akram, Nazar Abbas Saqib, Seungmin Rho

Wireless sensor networks are widely used in numerous applications but still lag behind human intelligence and vision, mainly because of constraints on processing, energy consumption, and communication of image data over the sensor nodes. A wireless sensor network is a cooperative network of nodes called motes. Image compression and transmission over a wide-range sensor network is an emerging challenge with respect to battery and lifetime constraints; compression reduces communication latency and makes the sensor network more energy efficient. In this paper we analyze and compare different image compression techniques with the aims of reducing computational load and memory requirements and enhancing coding speed and image quality. Along with compression, different transmission methods are discussed and analyzed with respect to energy consumption for better performance in wireless sensor networks.

