Perceptual Metric Guided Deep Attention Network for Single Image Super-Resolution

Electronics ◽  
2020 ◽  
Vol 9 (7) ◽  
pp. 1145
Author(s):  
Yubao Sun ◽  
Yuyang Shi ◽  
Ying Yang ◽  
Wangping Zhou

Deep learning has been widely applied to image super-resolution (SR) tasks and has achieved superior performance over traditional methods due to its excellent feature learning capabilities. However, most deep learning-based methods require training image sets to pre-train the SR network parameters. In this paper, we propose a new single image SR network that requires no pre-training. The proposed network is optimized to achieve SR reconstruction from only a low-resolution observation rather than from training image sets, and it focuses on improving the visual quality of the reconstructed images. Specifically, we design an attention-based decoder-encoder network for predicting the SR reconstruction, in which a residual spatial attention (RSA) unit is deployed in each layer of the decoder to capture key information. Moreover, we adopt a perceptual metric, consisting of an L1 term and a multi-scale structural similarity (MS-SSIM) term, to learn the network parameters. Unlike the conventional mean squared error (MSE) metric, the perceptual metric coincides well with the perceptual characteristics of the human visual system. Under its guidance, the RSA units are capable of predicting the visually sensitive areas at different scales, so the network can pay more attention to these areas and preserve visually informative structures at multiple scales. Experimental results on the Set5 and Set14 image sets demonstrate that the combination of the perceptual metric and the RSA units significantly improves reconstruction quality. In terms of PSNR and structural similarity (SSIM) values, the proposed method achieves better reconstruction results than related works, and it is even comparable to some pre-trained networks.
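The loss described above couples an L1 term with MS-SSIM. A minimal NumPy sketch of such a combined objective follows; the weighting alpha = 0.84 follows a common convention in the loss-function literature rather than this paper's exact value, and the global-statistics SSIM is an illustrative simplification of the windowed original:

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    # Simplified SSIM using whole-image statistics instead of local windows.
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (x.var() + y.var() + c2))

def ms_ssim(x, y, scales=3):
    # Multi-scale: average SSIM over successive 2x downsamplings.
    vals = []
    for _ in range(scales):
        vals.append(ssim_global(x, y))
        x, y = x[::2, ::2], y[::2, ::2]
    return float(np.mean(vals))

def perceptual_loss(pred, target, alpha=0.84):
    # alpha balances the MS-SSIM term against the L1 term.
    l1 = np.abs(pred - target).mean()
    return alpha * (1.0 - ms_ssim(pred, target)) + (1.0 - alpha) * l1
```

For identical images the loss is zero; any structural or intensity discrepancy increases it, with the MS-SSIM term penalizing structural differences across scales.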

Author(s):  
Qiang Yu ◽  
Feiqiang Liu ◽  
Long Xiao ◽  
Zitao Liu ◽  
Xiaomin Yang

Deep-learning (DL)-based methods are of growing importance in the field of single image super-resolution (SISR). The practical application of these DL-based models remains a problem, however, due to their heavy computation and storage requirements. The powerful feature maps of the hidden layers in convolutional neural networks (CNNs) help the model learn useful information, but there is redundancy among feature maps that can be further exploited. To address these issues, this paper proposes a lightweight efficient feature generating network (EFGN) for SISR, built from efficient feature generating blocks (EFGBs). Specifically, the EFGB applies simple operations to the original features to produce more feature maps with only a slight increase in parameters. With the help of these extra feature maps, the network can extract more useful information from low-resolution (LR) images to reconstruct the desired high-resolution (HR) images. Experiments on the benchmark datasets demonstrate that the proposed EFGN outperforms other deep-learning-based methods in most cases while possessing relatively lower model complexity. Additionally, running time measurements indicate its feasibility for real-time monitoring.
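The idea of generating extra feature maps from existing ones with cheap operations can be sketched as below. This is a hypothetical illustration only: the specific cheap operation used here (blending each map with a shifted copy of itself) is a stand-in, and the actual EFGB operations may differ:

```python
import numpy as np

def generate_extra_features(features, ratio=2):
    """Derive extra feature maps from existing ones with cheap per-map
    operations instead of full convolutions over all input channels.
    features: array of shape (C, H, W)."""
    maps = [features]
    for _ in range(ratio - 1):
        # Illustrative cheap operation: blend each map with a shifted copy.
        shifted = np.roll(features, shift=1, axis=2)
        maps.append(0.5 * (features + shifted))
    return np.concatenate(maps, axis=0)  # shape: (ratio * C, H, W)
```

The extra maps cost only a few element-wise operations per channel, versus the C_in x C_out x k x k multiplications a standard convolution would need to widen the feature tensor.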


2019 ◽  
Vol 16 (4) ◽  
pp. 413-426 ◽  
Author(s):  
Viet Khanh Ha ◽  
Jin-Chang Ren ◽  
Xin-Ying Xu ◽  
Sophia Zhao ◽  
Gang Xie ◽  
...  

2020 ◽  
Vol 10 (1) ◽  
pp. 375 ◽  
Author(s):  
Zetao Jiang ◽  
Yongsong Huang ◽  
Lirui Hu

The super-resolution generative adversarial network (SRGAN) is a seminal work capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied by unpleasant artifacts. To further enhance visual quality, we propose a deep learning method for single image super-resolution (SR) that directly learns an end-to-end mapping between low- and high-resolution images. The method is based on a depthwise separable convolution super-resolution generative adversarial network (DSCSRGAN). A new depthwise separable convolution dense block (DSC Dense Block) is designed for the generator network, which improves its ability to represent and extract image features while greatly reducing the total number of parameters. In the discriminator network, the batch normalization (BN) layers are discarded, which reduces artifacts. A frequency energy similarity loss function is designed to constrain the generator network to produce better super-resolution images. Experiments on several datasets show that the peak signal-to-noise ratio (PSNR) is improved by more than 3 dB, the structural similarity index (SSIM) is increased by 16%, and the total number of parameters is reduced to 42.8% compared with the original model. Combining objective indicators and subjective visual evaluation, the algorithm generates richer image details and clearer textures with lower complexity.
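The parameter savings from depthwise separable convolution can be checked with a simple count, sketched generically below (biases omitted; this is not the paper's exact layer configuration):

```python
def conv_params(c_in, c_out, k=3):
    """Parameter counts for a standard vs. a depthwise separable convolution."""
    # Standard convolution: one k x k kernel per (input, output) channel pair.
    standard = c_in * c_out * k * k
    # Depthwise separable: one k x k kernel per input channel (depthwise),
    # then a 1 x 1 convolution mixing channels (pointwise).
    separable = c_in * k * k + c_in * c_out
    return standard, separable
```

For a 64-to-64-channel layer with 3 x 3 kernels this gives 36,864 versus 4,672 parameters, roughly an 8x reduction, which is the mechanism behind the parameter savings the abstract reports.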


2019 ◽  
Vol 6 (1) ◽  
pp. 181074 ◽  
Author(s):  
Dongsheng Zhou ◽  
Ruyi Wang ◽  
Xin Yang ◽  
Qiang Zhang ◽  
Xiaopeng Wei

Depth image super-resolution (SR) is a technique that uses signal processing to enhance the resolution of a low-resolution (LR) depth image. Generally, an external database or high-resolution (HR) images are needed to acquire prior information for SR reconstruction. To overcome these limitations, a depth image SR method that does not rely on any external images is proposed. In this paper, a high-quality edge map is first constructed using a sparse coding method with a dictionary learned from the original images at different scales. The high-quality edge map is then used to guide the interpolation of the depth image via a modified joint trilateral filter. During the interpolation, gradient and structural similarity (SSIM) information is incorporated to preserve detail and suppress noise. The proposed method not only preserves the sharpness of image edges but also avoids dependence on an external database. Experimental results show that the proposed method is superior to several state-of-the-art depth image SR methods.
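A joint trilateral filter weights each neighbor by three Gaussians: spatial distance, depth-range similarity, and similarity in a guidance signal (here, the edge map). A minimal single-pixel NumPy sketch of that weighting follows; the sigma parameters and the plain Gaussian form are illustrative assumptions, not the paper's modified filter:

```python
import numpy as np

def joint_trilateral_pixel(depth_patch, guide_patch, sigma_s, sigma_r, sigma_g):
    """Filter the center pixel of a square depth patch, guided by a
    same-sized patch of a guidance (edge) image."""
    k = depth_patch.shape[0]
    c = k // 2  # center index
    ys, xs = np.mgrid[0:k, 0:k]
    # Spatial Gaussian: nearer neighbors count more.
    spatial = np.exp(-((ys - c) ** 2 + (xs - c) ** 2) / (2 * sigma_s ** 2))
    # Depth-range Gaussian: similar depths count more.
    d_range = np.exp(-(depth_patch - depth_patch[c, c]) ** 2 / (2 * sigma_r ** 2))
    # Guidance-range Gaussian: neighbors on the same side of an edge count more.
    g_range = np.exp(-(guide_patch - guide_patch[c, c]) ** 2 / (2 * sigma_g ** 2))
    w = spatial * d_range * g_range
    return float((w * depth_patch).sum() / w.sum())
```

Because the guidance term collapses the weights of neighbors across an edge in the guide image, depth discontinuities stay sharp while flat regions are smoothed.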


Author(s):  
Lujun Lin ◽  
Yiming Fang ◽  
Xiaochen Du ◽  
Zhu Zhou

As in practical applications in other fields, high-resolution images are expected to provide a more accurate assessment in the air-coupled ultrasonic (ACU) characterization of wooden materials. This paper investigates the feasibility of applying single image super-resolution (SISR) methods to recover high-quality ACU images from the raw observations produced directly by off-the-shelf ACU scanners. Four state-of-the-art SISR methods were applied to low-resolution ACU images of wood products. The reconstructed images were evaluated by visual assessment and by objective image quality metrics, including peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Both the qualitative and quantitative evaluations indicate that substantial improvements in image quality can be achieved, demonstrating the effectiveness and reproducibility of SISR for generating high-quality ACU images. Sparse coding based super-resolution and the super-resolution convolutional neural network (SRCNN) significantly outperformed the other algorithms; SRCNN in particular has the potential to act as an effective tool for generating higher-resolution ACU images due to its flexibility.
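PSNR, one of the metrics used in the evaluations above, follows directly from the mean squared error. A standard definition, sketched here for images scaled to [0, 1]:

```python
import numpy as np

def psnr(reference, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    ref = np.asarray(reference, dtype=float)
    tst = np.asarray(test, dtype=float)
    mse = np.mean((ref - tst) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a uniform error of 0.1 over the whole image gives an MSE of 0.01 and hence a PSNR of exactly 20 dB.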


Optik ◽  
2014 ◽  
Vol 125 (15) ◽  
pp. 4005-4008 ◽  
Author(s):  
Chengzhi Deng ◽  
Wei Tian ◽  
Shengqian Wang ◽  
Huasheng Zhu ◽  
Wei Rao ◽  
...  
