Incorporating Motion Blur Compensation to Blind Super Resolution Restoration

Author(s):
Ryoichi Sakuragi, Nozomu Hamada

Author(s):
Rohita H. Jagdale, Sanjeevani K. Shah

In video Super Resolution (SR), the cost of attaining enhanced spatial resolution, the computational complexity, and the difficulty of handling motion blur make video SR a complex task. Moreover, maintaining temporal consistency is crucial to an efficient and robust video SR model. This paper develops an intelligent SR model for video frames. Initially, the video frames in RGB format are transformed into HSV. In general, the improvement of video frames is performed on the V-channel to achieve High-Resolution (HR) videos. To enhance the RGB pixels, the current window is enlarged to a high-dimensional window. As a novelty, this paper formulates a high-dimensional matrix with enriched pixel intensity in the V-channel to produce enhanced HR video frames. Estimating the enriched pixels of the high-dimensional matrix is complex; here it is handled through two stages: (i) motion estimation and (ii) cubic spline interpolation followed by deblurring or sharpening. As the main contribution, the cubic spline interpolation process is enhanced via optimization, selecting the optimal resolution factor and the cubic spline parameters. For this optimal tuning, the paper introduces a new modified algorithm, a modification of the Rider Optimization Algorithm (ROA) named Mean Fitness-ROA (MF-ROA). Once the HR V-channel is attained, it is recombined with the other HSV channels and converted back to RGB, yielding the enhanced output RGB video frame. Finally, the performance of the proposed work is compared against other state-of-the-art models with respect to the BRISQUE, SDME, and ESSIM measures, demonstrating its superiority.
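The pipeline described above (RGB to HSV, spline upscaling of the V-channel, HSV back to RGB) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the MF-ROA tuning of the resolution factor and spline parameters is not modeled (`factor` and `order` are fixed illustrative choices), and `scipy.ndimage.zoom` stands in for the cubic spline interpolation stage.

```python
import colorsys

import numpy as np
from scipy.ndimage import zoom

def upscale_v_channel(rgb, factor=2.0, order=3):
    """Upscale only the V channel of an HSV-converted frame.

    rgb: H x W x 3 float array in [0, 1]. Returns the upscaled RGB frame.
    order=3 selects cubic-spline interpolation in scipy.ndimage.zoom.
    """
    hsv = np.array([[colorsys.rgb_to_hsv(*px) for px in row] for row in rgb])
    hch, sch, vch = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    # Cubic-spline interpolation on V (clipped to kill overshoot);
    # H and S follow with cheap linear interpolation to keep chromaticity stable.
    v_hr = np.clip(zoom(vch, factor, order=order), 0.0, 1.0)
    h_hr = zoom(hch, factor, order=1)
    s_hr = np.clip(zoom(sch, factor, order=1), 0.0, 1.0)
    out = np.array([[colorsys.hsv_to_rgb(hh, ss, vv)
                     for hh, ss, vv in zip(hr, sr, vr)]
                    for hr, sr, vr in zip(h_hr, s_hr, v_hr)])
    return out
```

A 2x factor on an 8x8 frame yields a 16x16 frame; in the paper the factor itself is one of the quantities tuned by MF-ROA.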


Author(s):
Vikas Kumar, Tanupriya Choudhury, Suresh Chandra Satapathy, Ravi Tomar, Archit Aggarwal

Recently, huge progress has been achieved in the field of single-image super resolution, which augments the resolution of images. The idea behind super resolution is to convert low-resolution images into high-resolution images. SRCNN (Super-Resolution Convolutional Neural Network) was a huge improvement over the existing methods of single-image super resolution. However, video super resolution, despite being an active field of research, is yet to benefit from deep learning. Using still images and videos downloaded from various sources, we explore the possibility of using SRCNN along with image fusion techniques (minima, maxima, average, PCA, DWT) to improve over existing video super resolution methods. Video super resolution has inherent difficulties such as unexpected motion, blur, and noise. We propose the Video Super Resolution – Image Fusion (VSR-IF) architecture, which utilizes information from multiple frames to produce a single high-resolution frame for a video. We use SRCNN as a reference model to obtain high-resolution adjacent frames and use a concatenation layer to group those frames into a single frame. Since our method is data-driven and requires only minimal initial training, it is faster than other video super resolution methods. After testing our program, we find that our technique shows a significant improvement over SRCNN and other single-image and frame super resolution techniques.
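The fusion stage of an architecture like VSR-IF can be illustrated with the simpler fusion rules named in the abstract (minima, maxima, average). This is a hedged sketch, not the authors' code: SRCNN upscaling of each adjacent frame is assumed to have happened upstream, and the PCA and DWT variants are omitted for brevity.

```python
import numpy as np

def fuse_frames(frames, method="average"):
    """Fuse a list of already-upscaled, aligned adjacent frames into one.

    frames: list of equally shaped arrays (H, W) or (H, W, C).
    method: "average", "maxima", or "minima", applied pixel-wise
    across the temporal axis.
    """
    stack = np.stack(frames, axis=0)  # (T, H, W[, C])
    if method == "average":
        return stack.mean(axis=0)
    if method == "maxima":
        return stack.max(axis=0)
    if method == "minima":
        return stack.min(axis=0)
    raise ValueError(f"unknown fusion method: {method}")
```

Average fusion suppresses uncorrelated noise across frames, while maxima/minima fusion favors the brightest or darkest observation per pixel; misaligned motion between frames is the main failure mode of all three.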


Author(s):
Ziyang Ma, Renjie Liao, Xin Tao, Li Xu, Jiaya Jia, ...

2011, Vol 225-226, pp. 895-899
Author(s):
Feng Qing Qin, Zhong Li, Li Hong Zhu, Ying De You, Li Lan Cao

Blind image super-resolution reconstruction is one of the hot and difficult problems in image processing. A framework for a blind single-image super-resolution reconstruction algorithm is proposed. In the low-resolution imaging model, the processes of motion blur, down-sampling, and noise are considered. The parameter of motion blur is estimated through an error-parameter analysis method: using the Wiener filtering image restoration algorithm, an error-parameter curve at different motion distances is generated, from which the motion distance of the motion point spread function (PSF) can be estimated approximately. The super-resolution image is then reconstructed through the iterative back-projection (IBP) algorithm. The experimental results show that motion PSF estimation plays an important role in the quality of the SR reconstructed image, and also demonstrate the effectiveness of the proposed algorithm.
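The error-parameter idea can be sketched as follows, under simplifying assumptions that are mine rather than the paper's: a purely horizontal, axis-aligned motion PSF, circular (FFT) convolution, and a fixed noise-to-signal ratio in the Wiener filter. Each candidate motion distance restores the image, the restoration is re-blurred, and the candidate whose re-blur best matches the observed input is kept; the IBP reconstruction step is not shown.

```python
import numpy as np

def motion_psf(length, size):
    """Horizontal motion-blur PSF of the given length, embedded in a
    size x size kernel (a simplified stand-in for the paper's motion PSF)."""
    psf = np.zeros((size, size))
    psf[size // 2, :length] = 1.0 / length
    return psf

def wiener_deblur(blurred, psf, k=0.01):
    """Frequency-domain Wiener restoration with noise-to-signal ratio k."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(W * G))

def estimate_motion_length(blurred, candidates, size=15):
    """Error-parameter search: restore with each candidate PSF, re-blur,
    and keep the length whose re-blurred estimate best matches the input."""
    errors = []
    for length in candidates:
        psf = motion_psf(length, size)
        est = wiener_deblur(blurred, psf)
        reblur = np.real(np.fft.ifft2(np.fft.fft2(est) *
                                      np.fft.fft2(psf, s=est.shape)))
        errors.append(np.mean((reblur - blurred) ** 2))
    return candidates[int(np.argmin(errors))]
```

The search works because the Wiener factor suppresses exactly those frequencies where the candidate PSF is near zero; only for the true motion distance do those suppressed frequencies coincide with the frequencies the blur has already removed from the observation, giving a minimum on the error-parameter curve.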


2021
Author(s):
Martin Laurenzis, Trevor A. Seets, Emmanuel Bacher, Atul N. Ingle, Andreas U. Velten

Author(s):
Xin Jin, Jianfeng Xu, Kazuyuki Tasaka, Zhibo Chen

In this article, we address the degraded image super-resolution problem in a multi-task learning (MTL) manner. To better share representations between multiple tasks, we propose an all-in-one collaboration framework (ACF) with a learnable “junction” unit to handle two major problems that exist in MTL—“How to share” and “How much to share.” Specifically, ACF consists of a sharing phase and a reconstruction phase. Considering the intrinsic characteristics of multiple image degradations, we first deal with the compression artifact, motion blur, and spatial structure information of the input image in parallel under a three-branch architecture in the sharing phase. Subsequently, in the reconstruction phase, we up-sample the previous features for high-resolution image reconstruction with a channel-wise and spatial attention mechanism. To coordinate the two phases, we introduce a learnable “junction” unit with a dual-voting mechanism that selectively filters or preserves shared feature representations from the sharing phase, learning an optimal combination for the following reconstruction phase. Finally, a curriculum learning-based training scheme is further proposed to improve the convergence of the whole framework. Extensive experimental results on synthetic and real-world low-resolution images show that the proposed all-in-one collaboration framework not only produces favorable high-resolution results while removing serious degradation, but also has high computational efficiency, outperforming state-of-the-art methods. We have also applied ACF to image-quality-sensitive practical tasks, such as pose estimation, to improve estimation accuracy on low-resolution images.
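As an illustration of the gating components described above, the following sketch shows a squeeze-and-excitation-style channel-wise attention rescaling and a soft per-channel vote standing in for the learnable “junction” unit. This is a hedged stand-in, not the authors' implementation: the weights `w1`, `w2`, and the vote `alpha` would be learned in the actual framework, and the spatial attention branch is omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Channel-wise attention over a (C, H, W) feature map:
    global-average-pool -> two-layer gate -> per-channel rescale."""
    pooled = feat.mean(axis=(1, 2))                     # squeeze: (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ pooled, 0.0))   # excite: (C,) in (0, 1)
    return feat * gate[:, None, None]

def junction_dual_vote(shared, task_specific, alpha):
    """Soft per-channel vote (alpha in [0, 1]) that filters or preserves
    shared features against task-specific ones before reconstruction."""
    a = alpha[:, None, None]
    return a * shared + (1.0 - a) * task_specific
```

Because the gate lies in (0, 1), channel attention can only attenuate channels, never amplify them; the dual vote similarly interpolates between the shared and task-specific representations channel by channel.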

