RAVA: Region-Based Average Video Quality Assessment

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5489
Author(s):  
Xuanyi Wu ◽  
Irene Cheng ◽  
Zhenkun Zhou ◽  
Anup Basu

Video has become the most popular medium of communication over the past decade, with nearly 90 percent of the bandwidth on the Internet being used for video transmission. Thus, evaluating the quality of an acquired or compressed video has become increasingly important. The goal of video quality assessment (VQA) is to measure the quality of a video clip as perceived by a human observer. Since manually rating every video clip to evaluate quality is infeasible, researchers have attempted to develop various quantitative metrics that estimate the perceptual quality of video. In this paper, we propose a new region-based average video quality assessment (RAVA) technique extending image quality assessment (IQA) metrics. In our experiments, we extend two full-reference (FR) image quality metrics to measure the feasibility of the proposed RAVA technique. Results on three different datasets show that our RAVA method is practical in predicting objective video scores.
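The region-based averaging idea can be sketched as follows, with PSNR standing in for the extended FR-IQA metrics; the function names, grid size, and pooling are illustrative, not the authors' implementation:

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """Full-reference PSNR between two same-sized grayscale blocks."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def rava_score(ref_frames, dist_frames, grid=(4, 4)):
    """Score a video by averaging a per-region FR-IQA metric over a
    grid of regions in every frame (illustrative RAVA-style pooling)."""
    rows, cols = grid
    scores = []
    for ref, dist in zip(ref_frames, dist_frames):
        h, w = ref.shape
        for i in range(rows):
            for j in range(cols):
                rs, re = i * h // rows, (i + 1) * h // rows
                cs, ce = j * w // cols, (j + 1) * w // cols
                scores.append(psnr(ref[rs:re, cs:ce], dist[rs:re, cs:ce]))
    return float(np.mean(scores))

# Synthetic demo: three reference frames and noisy distorted versions.
rng = np.random.default_rng(0)
ref_frames = [rng.integers(0, 256, (64, 64)).astype(np.float64) for _ in range(3)]
dist_frames = [f + rng.normal(0, 5, f.shape) for f in ref_frames]
```

Any FR image metric can be dropped in for `psnr`; the region grid controls how locally quality variations are weighted before the average.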

2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Rafal Obuchowicz ◽  
Mariusz Oszust ◽  
Adam Piorkowski

Abstract Background The perceptual quality of magnetic resonance (MR) images influences diagnosis and may compromise treatment. The purpose of this study was to evaluate how changes in image quality influence the interobserver variability of its assessment. Methods For the variability evaluation, a dataset of distorted MR images was prepared and then assessed by 31 experienced medical professionals (radiologists). Differences between observers were analyzed using Fleiss' kappa. However, since the kappa evaluates agreement among radiologists based on aggregated decisions, a typically employed criterion of image quality assessment (IQA) performance was used to provide a more thorough analysis. The IQA performance of radiologists was evaluated by comparing the Spearman correlation coefficients, ρ, between individual scores and the mean opinion scores (MOS) composed of the subjective opinions of the remaining professionals. Results The experiments show that there is a significant agreement among radiologists (κ=0.12; 95% confidence interval [CI]: 0.118, 0.121; P<0.001) on the quality of the assessed images. The resulting κ is strongly affected by the subjectivity of the assigned scores, since close scores are treated as separate categories. Therefore, ρ was used to identify poor-performance cases and to confirm the consistency of the majority of collected scores (ρmean = 0.5706). The results for interns (ρmean = 0.6868) support the finding that the quality assessment of MR images can be successfully taught. Conclusions The agreement observed among radiologists from different imaging centers confirms the subjectivity of the perception of MR images. It was shown that image content and the severity of distortions affect the IQA. Furthermore, the study highlights the importance of the psychosomatic condition of the observers and their attitude.
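The leave-one-out consistency criterion used in the study can be sketched as follows; this is a simplified Spearman correlation (Pearson on ranks, no tie handling), and all names are illustrative:

```python
import numpy as np

def _ranks(x):
    # Rank transform (1..n); ties are not handled, which is fine for
    # continuous synthetic scores but not for raw discrete ratings.
    order = np.argsort(x)
    ranks = np.empty(len(x), dtype=float)
    ranks[order] = np.arange(1, len(x) + 1)
    return ranks

def spearman(a, b):
    """Spearman correlation as Pearson correlation of the ranks."""
    return float(np.corrcoef(_ranks(a), _ranks(b))[0, 1])

def leave_one_out_rhos(scores):
    """scores: (n_raters, n_images) array. For each rater, correlate
    that rater's scores with the MOS of the remaining raters, mirroring
    the IQA-performance criterion described in the abstract."""
    rhos = []
    for r in range(scores.shape[0]):
        others = np.delete(scores, r, axis=0).mean(axis=0)
        rhos.append(spearman(scores[r], others))
    return np.array(rhos)
```

A high per-rater ρ against the leave-one-out MOS flags consistent observers even when κ, which penalizes any disagreement between close scores, stays low.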


Algorithms ◽  
2020 ◽  
Vol 13 (12) ◽  
pp. 313
Author(s):  
Domonkos Varga

The goal of full-reference image quality assessment (FR-IQA) is to predict the perceptual quality of an image as perceived by human observers using its pristine (distortion-free) reference counterpart. In this study, we explore a novel, combined approach which predicts the perceptual quality of a distorted image by compiling a feature vector from convolutional activation maps. More specifically, a reference-distorted image pair is run through a pretrained convolutional neural network and the activation maps are compared with a traditional image similarity metric. Subsequently, the resulting feature vector is mapped onto perceptual quality scores with the help of a trained support vector regressor. A detailed parameter study is also presented, in which the design choices of the proposed method are explained. Furthermore, we study the relationship between the number of training images and prediction performance, demonstrating that the proposed method can be trained with a small amount of data and still reach high prediction performance. Our best proposal, called ActMapFeat, is compared to the state of the art on six publicly available benchmark IQA databases: KADID-10k, TID2013, TID2008, MDID, CSIQ, and VCL-FER, where it significantly outperforms existing methods.
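A minimal sketch of the feature-compilation step, with cosine similarity standing in for the traditional similarity metric and plain arrays standing in for real CNN activation maps; the subsequent support vector regression stage is omitted:

```python
import numpy as np

def map_similarity(a, b, eps=1e-8):
    """Cosine similarity between two flattened activation maps; a simple
    stand-in for the traditional similarity metric in the paper."""
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def activation_features(ref_maps, dist_maps):
    """One similarity value per convolutional layer, compiled into the
    feature vector that would feed the trained regressor."""
    return np.array([map_similarity(r, d) for r, d in zip(ref_maps, dist_maps)])
```

Each entry of the feature vector summarizes how much the distortion perturbed one layer's activations, so shallow and deep representations both contribute to the final quality prediction.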


Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2768
Author(s):  
Domonkos Varga

No-reference video quality assessment (NR-VQA) has piqued the scientific community's interest over the last few decades, owing to its importance in human-centered interfaces and to the spread of multimedia content and video databases. The goal of NR-VQA is to predict the perceptual quality of digital videos without any information about their distortion-free counterparts. For successful video quality evaluation, creating an effective video representation from the original video is a crucial step. In this paper, we propose a powerful feature vector for NR-VQA inspired by Benford's law. Specifically, it is demonstrated that first-digit distributions extracted from different transform domains of the video volume data are quality-aware features and can be effectively mapped onto perceptual quality scores. Extensive experiments were carried out on two large, authentically distorted VQA benchmark databases.
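The first-digit feature can be sketched directly; `first_digit_dist` is an illustrative name, and the coefficients could come from any transform domain of the video volume (e.g. DCT or wavelet):

```python
import numpy as np

def first_digit_dist(coeffs, eps=1e-12):
    """Empirical distribution of leading digits 1..9 over the nonzero
    magnitudes of a set of transform coefficients."""
    mag = np.abs(np.asarray(coeffs, dtype=float).ravel())
    mag = mag[mag > eps]
    # Leading digit of x is floor(x / 10**floor(log10 x)).
    exponent = np.floor(np.log10(mag))
    digits = np.floor(mag / 10.0 ** exponent).astype(int)
    hist = np.bincount(digits, minlength=10)[1:10].astype(float)
    return hist / hist.sum()

# Benford's law predicts P(d) = log10(1 + 1/d) for d = 1..9;
# deviations from this curve are what make the feature quality-aware.
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))
```

Concatenating such nine-bin histograms from several transform domains yields the quality-aware feature vector that is then regressed onto perceptual scores.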


2021 ◽  
Vol 7 (7) ◽  
pp. 112
Author(s):  
Domonkos Varga

The goal of no-reference image quality assessment (NR-IQA) is to evaluate the perceptual quality of digital images without using their distortion-free, pristine counterparts. NR-IQA is an important part of multimedia signal processing since digital images can undergo a wide variety of distortions during storage, compression, and transmission. In this paper, we propose a novel architecture that extracts deep features from the input image at multiple scales to improve the effectiveness of feature extraction for NR-IQA using convolutional neural networks. Specifically, the proposed method extracts deep activations for local patches at multiple scales and maps them onto perceptual quality scores with the help of trained Gaussian process regressors. Extensive experiments demonstrate that the introduced algorithm performs favorably against the state-of-the-art methods on three large benchmark datasets with authentic distortions (LIVE In the Wild, KonIQ-10k, and SPAQ).
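The multi-scale patch extraction can be sketched as below, with simple block-averaging standing in for the scale pyramid; the deep-activation and Gaussian process regression stages are omitted, and all names are illustrative:

```python
import numpy as np

def multiscale_patches(img, patch=8, scales=(1, 2, 4)):
    """Cut a grayscale image into non-overlapping patch x patch blocks at
    several scales; each scale block-averages the image down by factor s
    before extraction. Each row of the result is one flattened patch."""
    feats = []
    for s in scales:
        h, w = img.shape[0] // s * s, img.shape[1] // s * s
        small = img[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        for i in range(small.shape[0] // patch):
            for j in range(small.shape[1] // patch):
                feats.append(small[i * patch:(i + 1) * patch,
                                   j * patch:(j + 1) * patch].ravel())
    return np.array(feats)
```

In the full method each such patch would pass through a CNN, and the per-patch activations would be pooled and regressed onto a quality score.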


Connectivity ◽  
2020 ◽  
Vol 148 (6) ◽  
Author(s):  
V. V. Grebenyuk ◽  
O. A. Dibrivnyy ◽  
O. V. Nehodenko

A comparative analysis of functions for assessing image quality in the absence of a reference sample: no-reference (NR) measures, or NR-type methods. The availability of NR methods is very important for assessing the quality of streaming video such as television, game streaming, online conferences, and web chat (because the recipient of the video has no reference for quality comparison), as well as for evaluating the results of transformations aimed at improving video and for choosing the parameters of those transformations (brightness, halftone, and other adjustments). The human visual system (HVS) can assess video quality directly, but visually assessing the quality of dozens or hundreds of videos, or ranking them by quality level, would require a huge amount of time. Six types of experiments were performed to analyze the correlation of the calculated quantitative estimates with visual assessments of the quality of the tested video files. Three of them are fundamentally new: comparing video after gamma correction and after contrast changes with different parameters, as well as after blurring, which may result from camcorder defocus. A hybrid method (a reduced-reference (RR) measure) and a full-reference (FR) measure, or FR-type method, were also added for comparison. It has been shown experimentally that none of the studied no-reference image quality assessment methods is universal, and that the calculated estimate cannot be converted into a quality scale without taking into account the factors that distort image quality. Moreover, all NR-type methods failed the contrast-change experiment, ranking the most contrast-enhanced image above the original. The reference-based methods, by contrast, showed excellent results (except one, which was partially ineffective). A performance comparison between the methods is also presented.
It is shown that most of the studied methods calculate local estimates for each frame and take their arithmetic mean as the quality estimate for the entire video file. If a video is dominated by large regions with uniform estimates, methods of this type may give incorrect quality scores that do not match visual assessments.
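The frame-wise mean pooling most of the compared methods use, and the failure mode it invites, can be sketched as follows; `contrast_proxy` is a hypothetical per-frame metric, not one of the methods in the comparison:

```python
import numpy as np

def mean_pooled_score(frames, frame_metric):
    """Pool a per-frame NR metric into one video score by arithmetic
    mean, the scheme most of the compared methods use."""
    return float(np.mean([frame_metric(f) for f in frames]))

def contrast_proxy(frame):
    # Hypothetical per-frame NR estimate: global standard deviation.
    # Large uniform regions drag this toward zero regardless of how the
    # visually salient parts of the frame look, illustrating why
    # mean pooling can disagree with visual assessments.
    return float(np.std(frame))
```

Weighting regions by saliency or content before pooling is one common way to mitigate this bias.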


Author(s):  
Guangtao Zhai ◽  
Wei Sun ◽  
Xiongkuo Min ◽  
Jiantao Zhou

Low-light image enhancement algorithms (LIEA) can light up images captured in dark or back-lighting conditions. However, LIEA may introduce various distortions such as structure damage, color shift, and noise into the enhanced images. Despite various LIEAs proposed in the literature, few efforts have been made to study the quality evaluation of low-light enhancement. In this article, we make one of the first attempts to investigate the quality assessment problem of low-light image enhancement. To facilitate the study of objective image quality assessment (IQA), we first build a large-scale low-light image enhancement quality (LIEQ) database. The LIEQ database includes 1,000 light-enhanced images, which are generated from 100 low-light images using 10 LIEAs. Rather than evaluating the quality of light-enhanced images directly, which is more difficult, we propose to use the multi-exposure fused (MEF) image and stack-based high dynamic range (HDR) image as a reference and evaluate the quality of low-light enhancement following a full-reference (FR) quality assessment routine. We observe that distortions introduced in low-light enhancement are significantly different from the distortions considered in well-studied traditional IQA databases, and that current state-of-the-art FR IQA models are not suitable for evaluating their quality. Therefore, we propose a new FR low-light image enhancement quality assessment (LIEQA) index by evaluating the image quality from four aspects: luminance enhancement, color rendition, noise evaluation, and structure preservation, which capture the key aspects of low-light enhancement. Experimental results on the LIEQ database show that the proposed LIEQA index outperforms the state-of-the-art FR IQA models. LIEQA can act as an evaluator for various low-light enhancement algorithms and systems.
To the best of our knowledge, this article is the first of its kind comprehensive low-light image enhancement quality assessment study.
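Assuming each of the four aspects yields a normalized score in [0, 1], the final pooling might look like the following hypothetical sketch; the paper's actual component metrics and weights are not reproduced here:

```python
def four_aspect_index(luminance, color, noise, structure,
                      weights=(0.25, 0.25, 0.25, 0.25)):
    """Hypothetical weighted combination of the four quality aspects
    (luminance enhancement, color rendition, noise evaluation,
    structure preservation) into a single LIEQA-style score."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    comps = (luminance, color, noise, structure)
    return sum(w * c for w, c in zip(weights, comps))
```

Separating the aspects this way lets a single low component score (e.g. heavy noise) pull down an image that is otherwise well enhanced.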


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6457
Author(s):  
Hayat Ullah ◽  
Muhammad Irfan ◽  
Kyungjin Han ◽  
Jong Weon Lee

Due to recent advancements in virtual reality (VR) and augmented reality (AR), the demand for high-quality immersive content is a primary concern for production companies and consumers. Similarly, the recent record-breaking performance of deep learning in various domains of artificial intelligence has extended the attention of researchers to contribute to different fields of computer vision. To ensure the quality of immersive media content using these advanced deep learning technologies, several learning-based stitched image quality assessment methods have been proposed with reasonable performance. However, these methods are unable to localize, segment, and extract the stitching errors in panoramic images, and they use computationally complex procedures for quality assessment. With these motivations, in this paper we propose a novel three-fold Deep Learning based No-Reference Stitched Image Quality Assessment (DLNR-SIQA) approach to evaluate the quality of immersive content. In the first fold, we fine-tune the state-of-the-art Mask R-CNN (Regional Convolutional Neural Network) on cropped images with manually annotated stitching errors from two publicly available datasets. In the second fold, we segment and localize the various stitching errors present in the immersive content. Finally, based on the distorted regions present in the immersive content, we measure the overall quality of the stitched images. Unlike existing methods that only measure the quality of the images using deep features, our proposed method can efficiently segment and localize stitching errors and estimate the image quality by investigating the segmented regions.
We also carried out extensive qualitative and quantitative comparison with full reference image quality assessment (FR-IQA) and no reference image quality assessment (NR-IQA) on two publicly available datasets, where the proposed system outperformed the existing state-of-the-art techniques.
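The final stage, estimating quality from the segmented error regions, might be sketched as follows under the hypothetical assumption that quality falls with the weighted area of detected stitching errors; the actual scoring rule in the paper is not reproduced here:

```python
import numpy as np

def stitched_quality(error_masks, image_shape, severity=None):
    """Score a panorama from binary masks of segmented stitching errors
    (the kind of output a fine-tuned Mask R-CNN would produce). Quality
    is the fraction of the image left undistorted; the per-error
    severity weights are hypothetical."""
    total = float(image_shape[0] * image_shape[1])
    if severity is None:
        severity = [1.0] * len(error_masks)
    weighted_err = sum(w * float(m.sum()) for w, m in zip(severity, error_masks))
    return max(0.0, 1.0 - weighted_err / total)
```

Because the score is driven by localized error regions rather than global deep features, it stays interpretable: each detected error's contribution to the penalty can be inspected.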

