Content-Aware Convolutional Neural Network for In-Loop Filtering in High Efficiency Video Coding

2019 ◽  
Vol 28 (7) ◽  
pp. 3343-3356 ◽  
Author(s):  
Chuanmin Jia ◽  
Shiqi Wang ◽  
Xinfeng Zhang ◽  
Shanshe Wang ◽  
Jiaying Liu ◽  
...  
2019 ◽  
Vol 32 (6) ◽  
pp. 1027-1043 ◽  
Author(s):  
Ali Hassan ◽  
Mubeen Ghafoor ◽  
Syed Ali Tariq ◽  
Tehseen Zia ◽  
Waqas Ahmad

2019 ◽  
Vol 29 (11) ◽  
pp. 3291-3301 ◽  
Author(s):  
Zhenghui Zhao ◽  
Shiqi Wang ◽  
Shanshe Wang ◽  
Xinfeng Zhang ◽  
Siwei Ma ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Soulef Bouaafia ◽  
Seifeddine Messaoud ◽  
Randa Khemiri ◽  
Fatma Elzahra Sayadi

With the rapid advancement of multimedia applications such as video gaming, computer vision, video streaming, and surveillance, video quality remains an open challenge. Despite standardized quality levels such as high definition (HD) and ultra-high definition (UHD), enhancing the quality delivered by a video compression standard improves streaming resolution and satisfies the end user's quality of service (QoS). Versatile Video Coding (VVC) is the latest video coding standard and achieves significant coding efficiency. Compared to its predecessor, High Efficiency Video Coding (HEVC), VVC will help spread high-quality video services and emerging applications such as high dynamic range (HDR), high frame rate (HFR), and omnidirectional 360-degree multimedia. Given its valuable results, the emerging field of deep learning is attracting the attention of researchers and prompting many contributions. In this study, we investigate the efficiency of deep learning applied to the new VVC standard in order to improve video quality. Specifically, we propose a wide-activated squeeze-and-excitation deep convolutional neural network (WSE-DCNN) for video quality enhancement in VVC. The conventional VVC in-loop filtering is replaced by the proposed WSE-DCNN, which is expected to eliminate compression artifacts and thereby improve visual quality. Numerical results demonstrate the efficacy of the proposed model, which achieves approximately −2.85%, −8.89%, and −10.05% BD-rate reduction for the luma (Y) and the two chroma (U, V) components, respectively, under the random access profile.
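The squeeze-and-excitation mechanism underlying the WSE-DCNN can be illustrated with a minimal numpy sketch of generic channel recalibration. This is not the authors' network: the weights `w1` and `w2` are random placeholders standing in for trained parameters, and the wide-activation convolutions around the block are omitted.

```python
import numpy as np

def squeeze_excitation(feature_maps, w1, w2):
    """Generic squeeze-and-excitation channel recalibration.

    feature_maps: (C, H, W) array of convolutional features.
    w1: (C//r, C) reduction weights; w2: (C, C//r) expansion weights.
    Returns the feature maps rescaled per channel by a learned gate.
    """
    # Squeeze: global average pooling collapses each channel to a scalar.
    z = feature_maps.mean(axis=(1, 2))            # shape (C,)
    # Excitation: bottleneck MLP followed by a sigmoid gate in (0, 1).
    s = np.maximum(w1 @ z, 0.0)                   # ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))        # sigmoid, shape (C,)
    # Scale: reweight each channel of the input by its gate value.
    return feature_maps * gate[:, None, None]

rng = np.random.default_rng(0)
C, r = 8, 2
x = rng.standard_normal((C, 16, 16))
w1 = rng.standard_normal((C // r, C)) * 0.1       # placeholder weights
w2 = rng.standard_normal((C, C // r)) * 0.1
y = squeeze_excitation(x, w1, w2)
print(y.shape)  # (8, 16, 16)
```

Because the gate lies strictly in (0, 1), the block can only attenuate channels, letting the network emphasize the feature maps most useful for artifact removal.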


Electronics ◽  
2021 ◽  
Vol 10 (24) ◽  
pp. 3112
Author(s):  
Jinchao Zhao ◽  
Pu Dai ◽  
Qiuwen Zhang

Compared with High Efficiency Video Coding (HEVC), the latest video coding standard, Versatile Video Coding (VVC), greatly improves coding quality thanks to many novel technologies, including the Quad-tree with nested Multi-type Tree (QTMT) partitioning scheme in its block division method. Because of the QTMT scheme, the encoder must perform rate–distortion optimization for each division mode during Coding Unit (CU) partitioning in order to select the best mode, which increases coding time and complexity. We therefore propose a VVC intra-prediction complexity reduction algorithm based on statistical theory and a Size-adaptive Convolutional Neural Network (SAE-CNN). The algorithm combines a pre-decision dictionary built from statistical analysis with a Convolutional Neural Network (CNN) model that adaptively adjusts the size of its pooling layer, forming an adaptive CU size division decision process. It can decide whether to divide CUs of different sizes, thereby avoiding unnecessary Rate–distortion Optimization (RDO) and reducing coding time. Experimental results show that, compared with the original algorithm, the proposed algorithm saves 35.60% of the coding time while increasing the Bjøntegaard Delta Bit Rate (BD-BR) by only 0.91%.
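The two ingredients of the decision process can be sketched as follows, under stated assumptions: the `pre_decision` dictionary entries, the 8×8 target grid, and the variance threshold are all hypothetical illustrations, and the CNN classifier itself is replaced by a trivial statistic. What the sketch does show faithfully is how adaptive average pooling maps any QTMT block size onto one fixed input grid.

```python
import numpy as np

def adaptive_avg_pool(cu, out_h, out_w):
    """Average-pool a CU of any size down to a fixed (out_h, out_w) grid,
    so a single network input layer can serve every QTMT block size."""
    h, w = cu.shape
    pooled = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Each output cell averages a proportional window of the CU.
            r0, r1 = (i * h) // out_h, ((i + 1) * h + out_h - 1) // out_h
            c0, c1 = (j * w) // out_w, ((j + 1) * w + out_w - 1) // out_w
            pooled[i, j] = cu[r0:r1, c0:c1].mean()
    return pooled

# Hypothetical pre-decision dictionary: CU sizes whose offline statistics
# made the split choice unambiguous bypass the CNN entirely.
pre_decision = {(64, 64): "split", (4, 4): "no_split"}

def decide_split(cu):
    if cu.shape in pre_decision:
        return pre_decision[cu.shape]        # statistical fast path
    features = adaptive_avg_pool(cu, 8, 8)   # CNN path (classifier omitted)
    # Stand-in for the CNN: flat blocks are assumed not worth splitting.
    return "split" if features.var() > 0.01 else "no_split"

cu = np.random.default_rng(1).random((32, 16))
print(decide_split(cu))
```

Skipping the rate–distortion search whenever either path returns a confident decision is what yields the reported encoding-time savings.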


Author(s):  
Wei Jia ◽  
Li Li ◽  
Zhu Li ◽  
Xiang Zhang ◽  
Shan Liu

The block-based coding structure in the hybrid video coding framework inevitably introduces compression artifacts such as blocking and ringing. To compensate for these artifacts, extensive filtering techniques have been proposed in the loop of video codecs, which are capable of boosting both the subjective and objective quality of reconstructed videos. Recently, neural network-based filters have been presented, drawing on the power of deep learning over large amounts of data. Although these filters improve coding efficiency over the traditional methods in High-Efficiency Video Coding (HEVC), the rich features and information generated by the compression pipeline have not been fully utilized in the design of the networks. Therefore, in this article, we propose the Residual-Reconstruction-based Convolutional Neural Network (RRNet) to push coding efficiency further: the compression features carried in the bitstream, in the form of the prediction residual, are fed into the network as an additional input alongside the reconstructed frame. In essence, the residual signal provides valuable information about block partitions and aids the reconstruction of edge and texture regions in a picture, so more adaptive parameters can be trained to handle different texture characteristics. The experimental results show that the proposed RRNet approach achieves significant BD-rate savings compared to both HEVC and state-of-the-art CNN-based schemes, indicating that the residual signal plays a significant role in enhancing video frame reconstruction.
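The extra-input idea can be made concrete with a small sketch: the decoded frame and the prediction residual parsed from the bitstream are stacked into a multi-channel tensor before entering the filter network. The helper name `make_rrnet_input` and the toy values are illustrative only; the actual RRNet architecture that consumes this tensor is not reproduced here.

```python
import numpy as np

def make_rrnet_input(reconstruction, residual):
    """Stack the reconstructed frame and its prediction residual into a
    two-channel array, mirroring the additional-input idea behind RRNet."""
    assert reconstruction.shape == residual.shape
    # Channel 0: decoded pixels; channel 1: residual from the bitstream.
    return np.stack([reconstruction, residual], axis=0)

rec = np.full((4, 4), 128.0)               # flat reconstructed block
res = np.zeros((4, 4)); res[0, :] = 10.0   # residual hinting at an edge
x = make_rrnet_input(rec, res)
print(x.shape)  # (2, 4, 4)
```

The toy values illustrate why the residual helps: the reconstruction alone looks flat, while the residual channel still marks where the predictor failed, i.e. where edges and block boundaries lie.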


Author(s):  
Myunghoon Jeon ◽  
Byoung-Dai Lee

Recently, cloud computing has emerged as a potential platform for distributed video encoding owing to its advantages in both cost and performance. For distributed video encoding, the input video must be partitioned into several segments, each of which is processed on distributed resources. This paper examines the effect of different video partitioning schemes on overall encoding performance in the distributed encoding of High-Efficiency Video Coding (HEVC). In addition, we explore the performance of the partitioning schemes with respect to the type of content being encoded.
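One common partitioning scheme of the kind studied here can be sketched as follows, assuming segments are cut on GOP boundaries so each worker can encode its range independently. The function name and the even-GOP distribution policy are illustrative assumptions, not the paper's specific schemes.

```python
def partition_video(num_frames, gop_size, num_workers):
    """Split a video into contiguous segments aligned to GOP boundaries,
    so each worker encodes its segment with no cross-segment references.
    Returns a list of (start_frame, end_frame_exclusive) tuples."""
    num_gops = (num_frames + gop_size - 1) // gop_size       # ceil division
    gops_per_worker = (num_gops + num_workers - 1) // num_workers
    segments, start = [], 0
    while start < num_frames:
        end = min(start + gops_per_worker * gop_size, num_frames)
        segments.append((start, end))
        start = end
    return segments

print(partition_video(300, 32, 4))
# [(0, 96), (96, 192), (192, 288), (288, 300)]
```

The trade-off the paper investigates follows directly from this structure: fewer, longer segments keep more inter-frame references intact (better compression), while more, shorter segments increase parallelism at a rate-distortion cost that varies with content type.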


2016 ◽  
Vol 11 (9) ◽  
pp. 764
Author(s):  
Lella Aicha Ayadi ◽  
Nihel Neji ◽  
Hassen Loukil ◽  
Mouhamed Ali Ben Ayed ◽  
Nouri Masmoudi
