A Super-Resolution Method Using Spatio-Temporal Registration of Multi-Scale Components in Consideration of Color-Sampling Patterns of UHDTV Cameras

Author(s):  
Yasutaka Matsuo ◽  
Shinichi Sakaida
Sensors ◽  
2018 ◽  
Vol 18 (2) ◽  
pp. 498 ◽  
Author(s):  
Hong Zhu ◽  
Xinming Tang ◽  
Junfeng Xie ◽  
Weidong Song ◽  
Fan Mo ◽  
...  

2015 ◽  
Vol 29 (12) ◽  
pp. 2095-2120 ◽  
Author(s):  
Linwei Yue ◽  
Huanfeng Shen ◽  
Qiangqiang Yuan ◽  
Liangpei Zhang

2020 ◽  
Vol 34 (07) ◽  
pp. 11278-11286 ◽  
Author(s):  
Soo Ye Kim ◽  
Jihyong Oh ◽  
Munchurl Kim

Super-resolution (SR) has been widely used to convert low-resolution legacy videos into high-resolution (HR) ones to suit the increasing resolution of displays (e.g., UHD TVs). However, motion artifacts (e.g., motion judder) become easier for viewers to notice when HR videos are rendered on larger displays. Broadcasting standards therefore support higher frame rates for UHD (Ultra High Definition) videos (4K@60 fps, 8K@120 fps), so applying SR alone is insufficient to produce genuinely high-quality videos. Hence, to up-convert legacy videos for realistic applications, video frame interpolation (VFI) is required in addition to SR. In this paper, we first propose a joint VFI-SR framework for up-scaling the spatio-temporal resolution of videos from 2K 30 fps to 4K 60 fps. For this, we propose a novel training scheme with a multi-scale temporal loss that imposes temporal regularization on the input video sequence and can be applied to any general video-related task. The proposed structure is analyzed in depth with extensive experiments.
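The abstract does not specify how the multi-scale temporal loss is constructed, so the following is only a minimal sketch of one plausible reading: build a temporal pyramid by averaging adjacent frames, then accumulate an L2 penalty between predicted and reference sequences at every scale. Function names and the choice of averaging/L2 are assumptions, not the paper's definition.

```python
import numpy as np

def temporal_scales(frames, num_scales=3):
    """Build a temporal pyramid by averaging adjacent frames.

    frames: array of shape (T, H, W) -- a grayscale frame sequence.
    Returns a list of sequences, each with half the temporal length
    of the previous one.
    """
    scales = [frames]
    for _ in range(num_scales - 1):
        f = scales[-1]
        t = f.shape[0] // 2 * 2          # drop a trailing odd frame
        f = 0.5 * (f[0:t:2] + f[1:t:2])  # average adjacent frame pairs
        scales.append(f)
    return scales

def multiscale_temporal_loss(pred, target, num_scales=3):
    """Mean squared error accumulated over all temporal scales."""
    loss = 0.0
    for p, t in zip(temporal_scales(pred, num_scales),
                    temporal_scales(target, num_scales)):
        loss += np.mean((p - t) ** 2)
    return loss
```

The intent of such a loss is that coarse temporal scales penalize low-frequency motion errors (judder) while the finest scale penalizes per-frame differences; in a real training loop the arrays would be framework tensors so gradients can flow through the pyramid.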


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3351
Author(s):  
Yooho Lee ◽  
Dongsan Jun ◽  
Byung-Gyu Kim ◽  
Hunjoo Lee

Super-resolution (SR) generates a high-resolution (HR) image from one or more low-resolution (LR) images. Since a variety of CNN models have recently been studied in computer vision, these approaches have been combined with SR to achieve higher-quality image restoration. In this paper, we propose a lightweight CNN-based SR method named the multi-scale channel dense network (MCDN). To design the proposed network, we extracted training images from the DIVerse 2K (DIV2K) dataset and investigated the trade-off between SR accuracy and network complexity. The experimental results show that the proposed method significantly reduces network complexity, such as the number of network parameters and total memory capacity, while maintaining slightly better or similar perceptual quality compared to previous methods.
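The abstract names a "channel dense" design but gives no layer details. As a minimal numpy sketch of the dense connectivity pattern such networks use (each stage sees the channel-wise concatenation of the input and all earlier stage outputs), the following uses 1x1 convolutions and illustrative channel counts; the actual MCDN kernel sizes, growth rates, and multi-scale branches are not specified here.

```python
import numpy as np

def conv1x1(x, w):
    """A 1x1 convolution: a per-pixel linear map over channels.
    x: (C_in, H, W), w: (C_out, C_in)."""
    return np.einsum('oc,chw->ohw', w, x)

def dense_block(x, weights):
    """Densely connected stack: each stage takes the channel-wise
    concatenation of the block input and every earlier stage's
    output, applies a 1x1 conv and a ReLU, and appends its result.
    Returns the concatenation of the input and all stage outputs.
    """
    feats = [x]
    for w in weights:
        inp = np.concatenate(feats, axis=0)              # concat channels
        feats.append(np.maximum(conv1x1(inp, w), 0.0))   # conv + ReLU
    return np.concatenate(feats, axis=0)
```

With a 4-channel input and three stages each emitting 4 channels, the stage weight shapes grow as (4, 4), (4, 8), (4, 12) and the block output has 16 channels; this reuse of earlier features without duplicating them is what keeps dense designs parameter-light, consistent with the complexity reduction the abstract claims.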


2006 ◽  
Vol 48 (3) ◽  
pp. 419-431 ◽  
Author(s):  
I Teliban ◽  
D Block ◽  
A Piel ◽  
V Naulin
