Perceptual quality assessment of JPEG, JPEG 2000, and JPEG XR

2010 ◽  
Author(s):  
T. Bruylants ◽  
J. Barbarien ◽  
A. Munteanu ◽  
P. Schelkens

Author(s):  
W. De Neve ◽  
S. Yang ◽  
D. Van Deursen ◽  
C. Kim ◽  
Y.M. Ro ◽  
...  

2021 ◽  
pp. 1-1
Author(s):  
Evelyn Muschter ◽  
Andreas Noll ◽  
Jinting Zhao ◽  
Rania Hassen ◽  
Matti Strese ◽  
...  

2021 ◽  
Vol 7 (7) ◽  
pp. 112
Author(s):  
Domonkos Varga

The goal of no-reference image quality assessment (NR-IQA) is to evaluate the perceptual quality of digital images without using their distortion-free, pristine counterparts. NR-IQA is an important part of multimedia signal processing, since digital images can undergo a wide variety of distortions during storage, compression, and transmission. In this paper, we propose a novel architecture that extracts deep features from the input image at multiple scales to improve the effectiveness of feature extraction for NR-IQA using convolutional neural networks. Specifically, the proposed method extracts deep activations for local patches at multiple scales and maps them onto perceptual quality scores with the help of trained Gaussian process regressors. Extensive experiments demonstrate that the introduced algorithm performs favorably against state-of-the-art methods on three large benchmark datasets with authentic distortions (LIVE In the Wild, KonIQ-10k, and SPAQ).
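The abstract gives no implementation details, so the following is only a rough sketch of the overall pipeline it describes: features gathered at multiple scales are mapped to quality scores by a Gaussian process regressor. The pooled image statistics below stand in for real CNN activations, and all data and function names are hypothetical illustrations, not the authors' method.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.1):
    # Squared Euclidean distance between every pair of feature rows.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def gp_predict(X_train, y_train, X_test, noise=1e-2, length_scale=0.1):
    # Posterior mean of a Gaussian process regressor with an RBF kernel.
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf_kernel(X_test, X_train, length_scale) @ alpha

def multiscale_features(img, scales=(1, 2, 4)):
    # Stand-in for deep CNN activations: pooled statistics per scale
    # (coarser scales are simulated here by simple subsampling).
    feats = []
    for s in scales:
        sub = img[::s, ::s]
        feats.extend([sub.mean(), sub.std()])
    return np.array(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: the amount of added noise serves as a proxy quality score.
    scores = np.linspace(0.0, 1.0, 8)
    imgs = [rng.normal(0.5, 0.05 + 0.3 * q, (32, 32)) for q in scores]
    X = np.stack([multiscale_features(im) for im in imgs])
    preds = gp_predict(X, scores, X)
    print(np.round(preds, 2))
```

In a faithful reproduction, `multiscale_features` would be replaced by activations taken from several layers (or input resolutions) of a pretrained CNN, and the GP would be trained on subjective mean opinion scores.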


Author(s):  
Guangtao Zhai ◽  
Wei Sun ◽  
Xiongkuo Min ◽  
Jiantao Zhou

Low-light image enhancement algorithms (LIEA) can light up images captured in dark or back-lighting conditions. However, LIEA may introduce various distortions, such as structure damage, color shift, and noise, into the enhanced images. Although various LIEAs have been proposed in the literature, few efforts have been made to study the quality evaluation of low-light enhancement. In this article, we make one of the first attempts to investigate the quality assessment problem of low-light image enhancement. To facilitate the study of objective image quality assessment (IQA), we first build a large-scale low-light image enhancement quality (LIEQ) database. The LIEQ database includes 1,000 light-enhanced images, generated from 100 low-light images using 10 LIEAs. Rather than evaluating the quality of light-enhanced images directly, which is more difficult, we propose to use the multi-exposure fused (MEF) image and the stack-based high dynamic range (HDR) image as references and to evaluate the quality of low-light enhancement following a full-reference (FR) quality assessment routine. We observe that the distortions introduced by low-light enhancement differ significantly from those considered in well-studied traditional IQA databases, and that current state-of-the-art FR IQA models are not suitable for evaluating them. Therefore, we propose a new FR low-light image enhancement quality assessment (LIEQA) index that evaluates image quality from four aspects: luminance enhancement, color rendition, noise evaluation, and structure preservation, which capture the key aspects of low-light enhancement. Experimental results on the LIEQ database show that the proposed LIEQA index outperforms state-of-the-art FR IQA models. LIEQA can act as an evaluator for various low-light enhancement algorithms and systems. To the best of our knowledge, this article is the first comprehensive low-light image enhancement quality assessment study of its kind.
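The abstract names the four aspects of the index but not their formulas. As a purely hypothetical illustration of how such a four-aspect full-reference score might be assembled, the sketch below combines crude proxies for each aspect with equal weights; none of these proxies or weights come from the paper.

```python
import numpy as np

def lieqa_sketch(ref, enh, weights=(0.25, 0.25, 0.25, 0.25)):
    """Hypothetical four-aspect FR score in [0, 1]; higher is better.

    ref, enh: HxWx3 arrays with values in [0, 255]. The four terms are
    crude stand-ins for the paper's luminance-enhancement, color-rendition,
    noise-evaluation, and structure-preservation components.
    """
    ref = ref.astype(float)
    enh = enh.astype(float)
    luma_r, luma_e = ref.mean(-1), enh.mean(-1)  # crude luminance channel

    # 1) luminance enhancement: penalize mean-brightness mismatch
    lum = 1.0 - abs(luma_r.mean() - luma_e.mean()) / 255.0
    # 2) color rendition: penalize per-channel mean shift
    col = 1.0 - np.abs(ref.mean((0, 1)) - enh.mean((0, 1))).mean() / 255.0
    # 3) noise evaluation: penalize extra high-frequency energy
    hf = lambda x: np.abs(np.diff(x, axis=1)).mean()
    noi = 1.0 - min(1.0, max(0.0, hf(luma_e) - hf(luma_r)) / 255.0)
    # 4) structure preservation: correlation of horizontal gradients
    gr = np.diff(luma_r, axis=1).ravel()
    ge = np.diff(luma_e, axis=1).ravel()
    stru = (np.corrcoef(gr, ge)[0, 1] + 1.0) / 2.0

    return float(np.dot(weights, [lum, col, noi, stru]))
```

A real implementation would replace each term with a perceptually validated model (e.g. a colorfulness statistic for rendition, an SSIM-like term for structure) and learn the weights from subjective scores.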


2021 ◽  
pp. 1-1
Author(s):  
Hangwei Chen ◽  
Xiongli Chai ◽  
Feng Shao ◽  
Xuejin Wang ◽  
Qiuping Jiang ◽  
...  

Author(s):  
Anass Nouri ◽  
Christophe Charrier ◽  
Olivier Lezoray

This chapter concerns visual saliency and the perceptual quality assessment of 3D meshes. First, the chapter proposes a definition of visual saliency and describes the state-of-the-art methods for detecting it on 3D mesh surfaces. Particular attention is given to a recent model of visual saliency detection for 3D colored and non-colored meshes, whose results are compared with a ground-truth saliency as well as with methods from the literature. Since this model can estimate the visual saliency of 3D colored meshes, termed colorimetric saliency, the construction of a 3D colored mesh database used to assess its relevance is described. The authors also describe three applications of the detailed model that address the problems of viewpoint selection, adaptive simplification, and adaptive smoothing. Second, two perceptual quality assessment metrics for 3D non-colored meshes are described, analyzed, and compared with state-of-the-art approaches.

