No-Reference Objective Video Quality Measure for Frame Freezing Degradation

Sensors, 2019, Vol. 19 (21), pp. 4655
Author(s): Emil Dumic, Anamaria Bjelopera

In this paper we present a novel no-reference video quality measure, NR-FFM (no-reference frame-freezing measure), designed to estimate quality degradations caused by frame freezing of streamed video. The performance of the measure was evaluated using 40 degraded video sequences from the Laboratory for Image and Video Engineering (LIVE) mobile database. The proposed quality measure can be used in different scenarios, such as mobile video transmission, either by itself or in combination with other quality measures. These two types of application are presented and studied together with considerations on relevant normalization issues. The results show promising correlation between the user-assigned quality scores and the estimated quality scores.
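A frame-freeze degradation can be detected from the decoded frames alone, since a frozen frame is a near-exact repeat of its predecessor. The sketch below is a generic illustration of this idea, not the NR-FFM algorithm itself; the function names and the threshold `eps` are hypothetical.

```python
import numpy as np

def freeze_mask(frames, eps=1e-3):
    """Flag frames that are (near-)identical to their predecessor.

    frames: sequence of 2-D numpy arrays (grayscale frames in [0, 1]).
    eps:    mean-absolute-difference threshold below which a frame is
            considered frozen (illustrative value, not from the paper).
    Returns a boolean list; True marks a frozen (repeated) frame.
    """
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    mask = [False]  # the first frame has no predecessor
    for prev, cur in zip(frames, frames[1:]):
        mask.append(float(np.abs(cur - prev).mean()) < eps)
    return mask

def freeze_ratio(frames, eps=1e-3):
    """Fraction of frozen frames: a crude proxy for freeze degradation."""
    m = freeze_mask(frames, eps)
    return sum(m) / len(m)
```

A full measure would also weight freeze duration and position, but the repeated-frame test above is the core observation a no-reference freeze detector builds on.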

2020, Vol. 2020 (11), pp. 93-1-93-7
Author(s): Lohic Fotio Tiotsop, Antonio Servetti, Enrico Masala

Large subjectively annotated datasets are crucial to the development and testing of objective video quality measures (VQMs). In this work we focus on the recently released ITS4S dataset. Relying on statistical tools, we show that the content of the dataset is rather heterogeneous from the point of view of quality assessment. Such diversity naturally makes the dataset a worthy asset for validating the accuracy of VQMs. In particular, we study the ability of VQMs to model the reduction or increase of distortion visibility due to the spatial activity of the content. The study reveals that VQMs are likely to overestimate the perceived quality of processed video sequences whose source is characterized by few spatial details. We then propose an approach that models the impact of spatial activity on distortion visibility when objectively assessing the visual quality of a content. The effectiveness of the proposal is validated on the ITS4S dataset as well as on the Netflix public dataset.
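One simple way to act on this finding is to shrink a VQM's prediction when the source has little spatial detail. The sketch below only illustrates the general idea; the correction form and the constants `si_ref` and `alpha` are hypothetical placeholders, not the model proposed in the paper.

```python
def adjust_vqm(score, source_si, si_ref=60.0, alpha=0.01):
    """Shrink an objective quality score for low-detail sources, where
    VQMs tend to overestimate perceived quality.

    score:     raw VQM prediction on a 0-100 scale.
    source_si: spatial information (SI) of the *source* content.
    si_ref, alpha: hypothetical reference SI and penalty strength,
                   illustrative constants only.
    """
    # Penalize only sources whose SI falls below the reference level.
    penalty = alpha * max(0.0, si_ref - source_si)
    return max(0.0, score * (1.0 - penalty))
```

In practice such a correction would be fitted on a subjectively annotated dataset (e.g. ITS4S) rather than hand-set as here.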


Sensors, 2021, Vol. 21 (8), pp. 2872
Author(s): Miroslav Uhrina, Anna Holesova, Juraj Bienik, Lukas Sevcik

This paper deals with the impact of content on perceived video quality evaluated using the subjective Absolute Category Rating (ACR) method. The assessment was conducted on eight types of video sequences with diverse content obtained from the SJTU dataset. The sequences were encoded at five different constant bitrates with two widely used video compression standards, H.264/AVC and H.265/HEVC, at Full HD and Ultra HD resolutions, yielding 160 annotated video sequences. The length of the Group of Pictures (GOP) was set to half the framerate value, as is typical for video intended for transmission over a noisy communication channel. The evaluation was performed in two laboratories: one situated at the University of Zilina, and the second at the VSB – Technical University of Ostrava. The results acquired in both laboratories showed a high correlation. Although the sequences with low Spatial Information (SI) and Temporal Information (TI) values achieved better Mean Opinion Score (MOS) values than the sequences with higher SI and TI values, these two parameters are not sufficient for scene description, and this domain should be the subject of further research. The evaluation results led us to the conclusion that it is unnecessary to use the H.265/HEVC codec for compression of Full HD sequences, and that at Ultra HD resolution the compression efficiency of the H.265 codec reaches that of both codecs at Full HD resolution. This paper also includes recommendations for minimum bitrate thresholds at which the video sequences at both resolutions retain good and fair subjectively perceived quality.
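The SI and TI descriptors used above are standard content features: following ITU-T Rec. P.910, SI is the maximum over frames of the standard deviation of the Sobel-filtered luma, and TI is the maximum standard deviation of successive frame differences. A minimal NumPy sketch (function names are ours):

```python
import numpy as np

def _sobel_mag(f):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    gx = (f[:-2, 2:] + 2 * f[1:-1, 2:] + f[2:, 2:]
          - f[:-2, :-2] - 2 * f[1:-1, :-2] - f[2:, :-2])
    gy = (f[2:, :-2] + 2 * f[2:, 1:-1] + f[2:, 2:]
          - f[:-2, :-2] - 2 * f[:-2, 1:-1] - f[:-2, 2:])
    return np.sqrt(gx ** 2 + gy ** 2)

def si_ti(frames):
    """SI/TI in the style of ITU-T Rec. P.910.

    frames: sequence of 2-D numpy arrays (luma planes).
    Returns (SI, TI): SI is max over frames of std(Sobel(frame));
    TI is max over consecutive pairs of std(frame difference).
    """
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    si = max(float(_sobel_mag(f).std()) for f in frames)
    ti = 0.0
    if len(frames) > 1:
        ti = max(float((b - a).std()) for a, b in zip(frames, frames[1:]))
    return si, ti
```

Low SI/TI marks static, low-detail scenes; high values mark busy scenes that need more bits at a given quality level.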


2019, Vol. 9 (10), pp. 2003
Author(s): Tung-Ming Pan, Kuo-Chin Fan, Yuan-Kai Wang

Intelligent analysis of surveillance videos over networks requires high recognition accuracy, which in turn requires good-quality video that imposes a significant bandwidth requirement. Video quality degraded by high object dynamics under wireless transmission poses an even more critical problem for smart video surveillance. In this paper, an object-based source coding method is proposed to preserve constant quality of video streaming over wireless networks. The inverse relationship between video quality and object dynamics (i.e., decreasing video quality due to the occurrence of large and fast-moving objects) is characterized statistically as a linear model. A regression algorithm based on robust M-estimator statistics is proposed to construct the linear model with respect to different bitrates. The linear model is applied to predict the bitrate increment required to enhance video quality. A simulated wireless environment is set up to verify the proposed method under different wireless conditions. Experiments with real surveillance videos covering a variety of object dynamics are conducted to evaluate the performance of the method. Experimental results demonstrate significant improvement of the streamed videos in both visual and quantitative terms.
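The robust linear fit described above can be illustrated with a generic Huber M-estimator solved by iteratively reweighted least squares (IRLS); this is a textbook sketch, and the paper's exact estimator and tuning constants are not reproduced here.

```python
import numpy as np

def huber_line_fit(x, y, delta=1.0, iters=50):
    """Fit y ~ a*x + b with a Huber M-estimator via IRLS.

    Outliers (e.g. quality samples disturbed by bursty channel errors)
    get weight delta/|residual| instead of 1, so they cannot dominate
    the fitted line the way they would in ordinary least squares.
    Returns (slope, intercept).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    X = np.column_stack([x, np.ones_like(x)])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS starting point
    for _ in range(iters):
        r = y - X @ beta
        absr = np.maximum(np.abs(r), 1e-12)
        w = np.where(absr <= delta, 1.0, delta / absr)  # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta[0], beta[1]
```

Once the slope and intercept are known per bitrate, predicting the bitrate increment for a target quality reduces to evaluating the fitted line.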


Sensors, 2021, Vol. 21 (6), pp. 1949
Author(s): Lukas Sevcik, Miroslav Voznak

Video quality evaluation needs a combined approach that includes subjective and objective metrics, testing, and monitoring of the network. This paper presents a novel approach to mapping quality of service (QoS) to quality of experience (QoE): QoE metrics are used to determine user satisfaction limits, and QoS tools are applied to provide the minimum QoE expected by users. Our aim was to connect objective estimates of video quality with subjective ones. A comprehensive tool for estimating the subjective evaluation is proposed. This new idea is based on evaluating and marking video sequences using a sentinel flag derived from spatial information (SI) and temporal information (TI) in individual video frames. The authors created a video database for quality evaluation and derived SI and TI from each video sequence to classify the scenes. Video scenes from the database were evaluated by objective and subjective assessment. Based on the results, a new model for the prediction of subjective quality is defined and presented. This quality is predicted by an artificial neural network from the objective evaluation and the type of video sequence, defined by qualitative parameters such as resolution, compression standard, and bitstream. Furthermore, the authors created an optimum mapping function that defines the threshold for the variable bitrate setting based on the flag in the video, which determines the type of scene in the proposed model. This function allows a bitrate to be allocated dynamically for a particular segment of the scene while maintaining the desired quality. Our proposed model can help video service providers increase the comfort of end users: the variable bitstream ensures consistent video quality and customer satisfaction while network resources are used effectively.
The proposed model can also predict the appropriate bitrate based on the required quality of video sequences, defined using either objective or subjective assessment.
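The flag-to-bitrate idea, picking a target bitrate for a scene segment from its SI/TI classification, can be sketched as follows; the thresholds and bitrates are illustrative placeholders, not the values produced by the paper's neural-network model or mapping function.

```python
def pick_bitrate(si, ti, min_kbps=1500, max_kbps=8000):
    """Map a scene's SI/TI 'flag' to a target bitrate in kbps.

    Busy scenes (high SI and TI) get more bits; static, low-detail
    scenes fewer. The SI/TI thresholds (60, 20) and the bitrate range
    are hypothetical constants chosen only for illustration.
    """
    busy = int(si > 60) + int(ti > 20)  # 0, 1 or 2 criteria met
    tiers = (min_kbps, (min_kbps + max_kbps) // 2, max_kbps)
    return tiers[busy]
```

A per-segment loop applying such a function yields the variable-bitrate behaviour described above: bits are spent where the scene type demands them.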


2012, Vol. 532-533, pp. 1219-1224
Author(s): Hong Tao Deng

During video transmission over an error-prone network, the compressed video bitstream is sensitive to channel errors that may severely degrade the decoded pictures. Error concealment is a useful post-processing technique for recovering the lost information. In such methods, correctly estimating the lost motion vector is crucial to the quality of the decoded picture. To recover the lost motion vector, a Decoder Motion Vector Estimation (DMVE) criterion was previously proposed and performs well in recovering lost blocks. In this paper, we propose an improved error concealment method based on DMVE, which obtains a more accurate motion vector by using redundant motion vector information. Experimental results with an H.264 codec show that our method improves both the subjective and objective quality of the decoder-reconstructed video, especially for sequences with drastic motion.
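DMVE-style concealment tests candidate motion vectors (typically those of neighbouring blocks) and keeps the one whose motion-compensated prediction best matches the pixels just outside the lost block. A simplified external-boundary-matching sketch, using a one-pixel strip above and to the left of the block; real decoders also clip vectors, handle frame borders, and use wider strips:

```python
import numpy as np

def recover_mv(ref, cur, top, left, bs, candidates):
    """Recover a lost block's motion vector by boundary matching.

    ref:        previous (reference) frame, 2-D array.
    cur:        current frame with the block at (top, left) lost.
    bs:         block size.
    candidates: iterable of (dy, dx) candidate motion vectors.
    Returns the candidate whose motion-compensated block border best
    matches the received pixels bordering the lost block.
    """
    # Received pixels just above and to the left of the lost block.
    strip_t = cur[top - 1, left:left + bs]
    strip_l = cur[top:top + bs, left - 1]
    best, best_err = (0, 0), float("inf")
    for dy, dx in candidates:
        rt, rl = top + dy, left + dx
        err = (np.abs(ref[rt - 1, rl:rl + bs] - strip_t).sum()
               + np.abs(ref[rt:rt + bs, rl - 1] - strip_l).sum())
        if err < best_err:
            best, best_err = (dy, dx), err
    return best
```

The winning vector is then used to copy the motion-compensated block from the reference frame into the lost position.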


2014, Vol. 496-500, pp. 2200-2203
Author(s): Yang Su, Mi Lu

We introduce a new across-peer rate allocation algorithm with successive refinement to improve video transmission performance in P2P networks, based on the combination of multiple description coding and network coding. Successive refinement is implemented through layered multiple description codes. The algorithm maximizes the expected video quality at the receivers by partitioning the video bitstream into different descriptions according to the bandwidth conditions of each peer. Adaptive rate partition adjustment is applied to ensure that the packet drop rate in the network is accurately reflected. In addition, the granularity is changed to the scale of atomic blocks instead of the stream rates used in prior works. Simulation results show that the algorithm outperforms prior algorithms in terms of video playback quality at the peer ends and makes the system more adaptable to peer dynamics.
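The successive-refinement principle can be illustrated with a greedy allocation that gives each peer as many cumulative layers (base plus refinements) as its download bandwidth allows; this is only a sketch of the layering idea, not the paper's optimization over descriptions and network coding.

```python
def assign_layers(layer_rates, peer_bw):
    """Assign cumulative layers of a layered code to peers.

    layer_rates: rates (kbps) of the base layer and refinement layers,
                 in decoding order; a layer is useful only if all
                 layers below it are also received.
    peer_bw:     dict mapping peer id -> download bandwidth (kbps).
    Returns a dict mapping peer id -> number of layers assigned.
    """
    out = {}
    for peer, bw in peer_bw.items():
        total, n = 0, 0
        for rate in layer_rates:
            if total + rate > bw:
                break  # next refinement no longer fits
            total += rate
            n += 1
        out[peer] = n
    return out
```

Peers with more bandwidth thus receive finer refinements, which is exactly the graceful quality scaling that layered multiple description coding targets.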


2013, Vol. 30, pp. 201-213
Author(s): Wenting Yu, Vincent J. van Heuven

The present study investigates whether immediate repetition improves consecutive interpreting performance during training. In addition, the study tries to shed light on whether the effects of immediate repetition differ between BA and MA interpreting trainees. In the experiment, ten raters judged six major quality measures of the accuracy and fluency of the interpreting output recorded from seven BA trainees and five MA trainees. The seventh quality measure expressed linguistic complexity as the number of clauses per AS-unit. The results show that the main effects of repetition and proficiency are both significant on accuracy and fluency, but the main effects are absent on linguistic complexity. Moreover, in terms of fluency BA trainees benefit significantly more from repetition than MA trainees. Accuracy improvement through repetition does not differ significantly between the two groups. The results have implications for consecutive interpreting training at different stages.

