Perceptual Image Compression with Block-Level Just Noticeable Difference Prediction

Author(s):  
Tao Tian ◽  
Hanli Wang ◽  
Sam Kwong ◽  
C.-C. Jay Kuo

A block-level perceptual image compression framework is proposed in this work, comprising a block-level just noticeable difference (JND) prediction model and a preprocessing scheme. Specifically, block-level JND values are first derived with the Otsu method, based on the variation of block-level structural similarity values between adjacent picture-level JND quality levels in the MCL-JCI dataset. Once a JND value has been generated for each image block, a convolutional neural network–based prediction model is designed to predict block-level JND values for a given target image. A preprocessing scheme is then devised to modify the discrete cosine transform coefficients during JPEG compression according to the distribution of block-level JND values of the target image. Finally, the test image is compressed using the maximum JND value across all of its image blocks together with the initial quality factor setting. Experimental results demonstrate that the proposed block-level perceptual image compression method achieves a 16.75% bit saving compared with the state-of-the-art method at similar subjective quality. The project page can be found at https://mic.tongji.edu.cn/43/3f/c9778a148287/page.htm.
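The abstract's first step, thresholding block-level SSIM variations with the Otsu method, can be sketched as follows. This is a minimal illustration, not the authors' code: the variance-maximizing threshold separates blocks whose SSIM changes noticeably between two quality levels from those whose SSIM barely moves. The variable names and sample data are illustrative.

```python
def otsu_threshold(values):
    """Return the cut maximizing between-class variance (Otsu's method)
    over a 1-D sequence of values."""
    vals = sorted(values)
    best_t, best_var = vals[0], -1.0
    for i in range(1, len(vals)):
        lo, hi = vals[:i], vals[i:]
        w0, w1 = len(lo) / len(vals), len(hi) / len(vals)
        m0 = sum(lo) / len(lo)
        m1 = sum(hi) / len(hi)
        between = w0 * w1 * (m0 - m1) ** 2  # between-class variance
        if between > best_var:
            best_var = between
            best_t = (vals[i - 1] + vals[i]) / 2
    return best_t

# Hypothetical per-block SSIM variations between two adjacent quality levels:
# blocks above the Otsu threshold are treated as perceptually changed.
variations = [0.01, 0.02, 0.015, 0.3, 0.35, 0.28]
t = otsu_threshold(variations)
changed = [v > t for v in variations]
```

Here the threshold lands between the two clusters, flagging the last three blocks as perceptually changed; in the paper this separation drives the assignment of block-level JND values.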

2020 ◽  
Vol 66 (3) ◽  
pp. 690-700
Author(s):  
Tao Tian ◽  
Hanli Wang ◽  
Lingxuan Zuo ◽  
C.-C. Jay Kuo ◽  
Sam Kwong

2015 ◽  
Vol 2015 ◽  
pp. 1-11 ◽  
Author(s):  
Nur Azman Abu ◽  
Ferda Ernawan

A psychovisual experiment prescribes the quantization values used in image compression. The quantization process acts as a threshold on the tolerance of the human visual system, reducing the number of encoded transform coefficients. Generating an optimal quantization value from the contribution of the transform coefficient at each frequency order is very challenging. The psychovisual threshold represents the sensitivity of human visual perception to the image reconstruction at each frequency order, and an ideal contribution of the transform at each frequency order serves as the primitive of the psychovisual threshold in image compression. This study proposes a psychovisual threshold on large discrete cosine transform (DCT) image blocks, which is used to automatically generate the required quantization tables; the proposed threshold prescribes the quantization value at each frequency order. The psychovisual threshold on large image blocks significantly improves the quality of the output images: the resulting large quantization tables produce output images that are largely free of artifacts. Moreover, the experimental results show that the psychovisual-threshold approach produces better image quality at higher compression rates than JPEG image compression.
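The quantization step the abstract describes follows the standard JPEG pattern: each DCT coefficient is divided by a frequency-dependent quantization value and rounded. A hedged sketch, with an illustrative 4×4 table standing in for the psychovisually derived tables (the paper's actual tables are not reproduced here):

```python
def quantize(block, qtable):
    """Divide each DCT coefficient by its quantization value and round."""
    return [[round(c / q) for c, q in zip(row, qrow)]
            for row, qrow in zip(block, qtable)]

def dequantize(qblock, qtable):
    """Invert quantization by multiplying back; rounding loss remains."""
    return [[c * q for c, q in zip(row, qrow)]
            for row, qrow in zip(qblock, qtable)]

# Illustrative 4x4 DCT block (DC term top-left) and quantization table.
dct_block = [[512.0, 64.0, -30.0, 9.0],
             [48.0, -24.0, 12.0, 4.0],
             [-20.0, 10.0, 6.0, 2.0],
             [6.0, 4.0, 2.0, 1.0]]
qtable = [[16, 11, 10, 16],
          [12, 12, 14, 19],
          [14, 13, 16, 24],
          [14, 17, 22, 29]]
q = quantize(dct_block, qtable)
rec = dequantize(q, qtable)
```

Small high-frequency coefficients quantize to zero, which is where the bit saving comes from; a psychovisual threshold sets each table entry so that the resulting loss stays below the sensitivity of the eye at that frequency order.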


2007 ◽  
Vol 4 (2) ◽  
pp. 330-337
Author(s):  
Baghdad Science Journal

We explore the transform coefficients of fractal image compression and develop a new method to improve the compression capability of these schemes. In most standard encoder/decoder systems, quantization and de-quantization are managed as separate steps; here we introduce a method in which they are managed simultaneously. This method achieves additional compression while maintaining high image quality.


Symmetry ◽  
2019 ◽  
Vol 11 (2) ◽  
pp. 291 ◽  
Author(s):  
Walaa Khalaf ◽  
Dhafer Zaghar ◽  
Noor Hashim

Image compression is one of the most interesting fields of image processing and is used to reduce image size. 2D curve fitting is a method that converts image data (pixel values) into a set of mathematical equations that represent the image. These equations have a fixed form with a few coefficients estimated from the image, which has been divided into several blocks. Since the number of coefficients is lower than the number of pixels in the original block, this can serve as a tool for image compression. In this paper, a new curve-fitting model is proposed, derived from the symmetric hyperbolic tangent function with only three coefficients. The main disadvantages of previous approaches were the additional errors and degraded edges in the reconstructed image, as well as the blocking effect. To overcome these deficiencies, the symmetric hyperbolic tangent (tanh) function is used instead of the classical 1st- and 2nd-order curve-fitting functions, which are asymmetric, to reformulate the blocks of the image. Owing to the symmetry of the hyperbolic tangent function, this reduces the reconstruction error and improves the fine details and texture of the reconstructed image. The results of this work have been tested and compared with 1st-order curve fitting and standard JPEG image compression. The main advantages of the proposed approach are: strengthening the edges of the image, removing the blocking effect, improving the Structural SIMilarity (SSIM) index, and increasing the Peak Signal-to-Noise Ratio (PSNR) by up to 20 dB. Simulation results show that the proposed method significantly improves the objective and subjective quality of the reconstructed image.
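The three-coefficient tanh model can be illustrated with a 1-D sketch. This is not the paper's algorithm: it fits one row of pixels with y ≈ a·tanh(b·x + c), grid-searching b and c and solving a in closed form by least squares, with the mean level removed separately for simplicity (real block fitting would be 2-D, and the paper's coefficient parameterization may differ).

```python
import math

def fit_tanh(xs, ys):
    """Fit y ~ a*tanh(b*x + c): grid-search (b, c), closed-form a.
    Returns (a, b, c, squared_error)."""
    best = (0.0, 0.0, 0.0, float("inf"))
    for b in [0.1 * k for k in range(1, 31)]:          # b in 0.1 .. 3.0
        for c in [-3 + 0.1 * k for k in range(61)]:    # c in -3.0 .. 3.0
            t = [math.tanh(b * x + c) for x in xs]
            denom = sum(v * v for v in t)
            if denom == 0:
                continue
            a = sum(y * v for y, v in zip(ys, t)) / denom  # least-squares a
            err = sum((y - a * v) ** 2 for y, v in zip(ys, t))
            if err < best[3]:
                best = (a, b, c, err)
    return best

# A smooth dark-to-bright edge profile across 8 pixels (illustrative data).
xs = list(range(8))
ys = [100 + 80 * math.tanh(0.8 * (x - 3.5)) for x in xs]
a, b, c, err = fit_tanh(xs, [y - 100 for y in ys])  # level shift removed first
```

Because tanh saturates symmetrically on both sides of an edge, the fitted curve tracks a pixel transition without the overshoot a low-order polynomial produces, which is the intuition behind the sharper edges and reduced blocking reported above.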


Fractals ◽  
2007 ◽  
Vol 15 (02) ◽  
pp. 183-195 ◽  
Author(s):  
RUI YANG ◽  
XIAOYUAN YANG ◽  
B. LI

Two fractal image compression algorithms based on possibility theory are presented in this paper. Fuzzy sets are used to represent the edge character of each image block, and two kinds of membership function are designed. A fuzzy integrated judgement model is also proposed; the model generates an accurate value for each edge block, which serves as a label during the search process. The edge possibility distribution function and the edge necessity level are designed to control the number of blocks to be searched. In addition, a pre-restriction is proposed: the average intensity values at different locations serve as a necessary condition to be checked before the mean squared error (MSE) computations. Our experiments show that the encoding times of the two algorithms, compared to that of Jacquin's approach, are reduced to 60%–70% and 10%–20%, respectively.
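The pre-restriction idea can be sketched as follows. This is a hedged illustration, not the paper's implementation: a candidate domain block is rejected by a cheap mean-intensity comparison before the more expensive MSE is computed, so only plausible candidates reach the full comparison. The tolerance and the sample blocks are illustrative.

```python
def mean(block):
    """Average intensity of a 2-D block."""
    return sum(sum(row) for row in block) / (len(block) * len(block[0]))

def mse(a, b):
    """Mean squared error between two equally sized blocks."""
    n = len(a) * len(a[0])
    return sum((x - y) ** 2
               for ra, rb in zip(a, b)
               for x, y in zip(ra, rb)) / n

def best_match(range_block, domain_blocks, mean_tol=10.0):
    """Find the closest domain block, skipping MSE when means differ too much."""
    rmean = mean(range_block)
    best_idx, best_err = -1, float("inf")
    for i, d in enumerate(domain_blocks):
        if abs(mean(d) - rmean) > mean_tol:  # pre-restriction: cheap reject
            continue
        err = mse(range_block, d)
        if err < best_err:
            best_idx, best_err = i, err
    return best_idx, best_err

r = [[100, 102], [98, 101]]
domains = [[[200, 210], [190, 205]],   # rejected by the mean test
           [[99, 101], [100, 103]],    # passes, low MSE
           [[95, 96], [97, 94]]]       # passes, higher MSE
idx, err = best_match(r, domains)
```

In a real fractal encoder the candidate pool is large, so discarding most of it with the O(1) mean test (means can be precomputed per block) is what yields the reported reduction in encoding time.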

