Memory‐ and time‐efficient dense network for single‐image super‐resolution

2021 ◽  
Author(s):  
Nasrin Imanpour ◽  
Ahmad R. Naghsh‐Nilchi ◽  
Amirhassan Monadjemi ◽  
Hossein Karshenas ◽  
Kamal Nasrollahi ◽  
...  
Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3351
Author(s):  
Yooho Lee ◽  
Dongsan Jun ◽  
Byung-Gyu Kim ◽  
Hunjoo Lee

Super-resolution (SR) generates a high-resolution (HR) image from one or more low-resolution (LR) images. Since a variety of CNN models have recently been studied in computer vision, these approaches have been combined with SR to achieve higher-quality image restoration. In this paper, we propose a lightweight CNN-based SR method, named multi-scale channel dense network (MCDN). To design the proposed network, we extracted the training images from the DIVerse 2K (DIV2K) dataset and investigated the trade-off between SR accuracy and network complexity. The experimental results show that the proposed method can significantly reduce the network complexity, in terms of the number of network parameters and total memory capacity, while maintaining slightly better or similar perceptual quality compared to previous methods.
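The abstract names the multi-scale channel dense design but gives no layer details. A minimal PyTorch sketch of what such a block could look like, under assumed choices (two parallel branches with 3×3 and 5×5 kernels, channel-wise dense concatenation, and a 1×1 fusion layer; all sizes are hypothetical, not from the paper):

```python
import torch
import torch.nn as nn

class MultiScaleDenseBlock(nn.Module):
    """Illustrative sketch: parallel multi-scale convolutions whose
    outputs are concatenated channel-wise with the block input."""

    def __init__(self, channels=32, growth=16):
        super().__init__()
        # Two parallel branches with different receptive fields.
        self.conv3 = nn.Conv2d(channels, growth, 3, padding=1)
        self.conv5 = nn.Conv2d(channels, growth, 5, padding=2)
        self.act = nn.ReLU(inplace=True)
        # 1x1 fusion back to the input channel count keeps the
        # block stackable without parameter blow-up.
        self.fuse = nn.Conv2d(channels + 2 * growth, channels, 1)

    def forward(self, x):
        f3 = self.act(self.conv3(x))
        f5 = self.act(self.conv5(x))
        dense = torch.cat([x, f3, f5], dim=1)  # dense-style concat
        return self.fuse(dense)

block = MultiScaleDenseBlock()
y = block(torch.randn(1, 32, 24, 24))  # output keeps the input shape
```

Keeping the input and output channel counts equal is what lets blocks like this be stacked into a deep network while the parameter count grows only linearly.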


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 555
Author(s):  
Yogendra Rao Musunuri ◽  
Oh-Seol Kwon

In this paper, we propose a deep residual dense network (DRDN) for single-image super-resolution. Based on human perceptual characteristics, the residual-in-residual dense block (RRDB) strategy is exploited to implement various depths in network architectures. The proposed model has a simple sequential structure comprising residual and dense blocks with skip connections. This improves the stability, computational complexity, and perceptual quality of the network. We adopt a perceptual metric to learn and assess the quality of the reconstructed images. The proposed model is trained with the Diverse 2K (DIV2K) dataset, and its performance is evaluated on standard benchmark datasets. The experimental results confirm that the proposed model achieves superior performance, with better reconstruction results and perceptual quality than conventional methods.
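The residual-in-residual dense block pattern mentioned above is commonly built as dense connections inside each block, a scaled local skip around the block, and an outer skip over a stack of blocks. A hedged PyTorch sketch of that pattern (the specific layer counts, growth rate, and 0.2 residual scaling are conventional choices, not taken from this paper):

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Dense connections inside, scaled local residual outside."""

    def __init__(self, channels=64, growth=32):
        super().__init__()
        self.c1 = nn.Conv2d(channels, growth, 3, padding=1)
        self.c2 = nn.Conv2d(channels + growth, growth, 3, padding=1)
        self.c3 = nn.Conv2d(channels + 2 * growth, channels, 3, padding=1)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        f1 = self.act(self.c1(x))
        f2 = self.act(self.c2(torch.cat([x, f1], dim=1)))
        out = self.c3(torch.cat([x, f1, f2], dim=1))
        return x + 0.2 * out  # local residual with scaling

class RRDB(nn.Module):
    """Residual-in-residual: an outer skip over stacked dense blocks."""

    def __init__(self, channels=64, n_blocks=3):
        super().__init__()
        self.blocks = nn.Sequential(
            *[ResidualDenseBlock(channels) for _ in range(n_blocks)]
        )

    def forward(self, x):
        return x + 0.2 * self.blocks(x)  # outer residual

rrdb = RRDB()
out = rrdb(torch.randn(1, 64, 16, 16))  # shape is preserved end to end
```

The nested skips let gradients bypass the dense convolutions entirely, which is what makes very deep stacks of these blocks stable to train.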


2021 ◽  
Author(s):  
Zeyu An ◽  
Junyuan Zhang ◽  
Ziyu Sheng ◽  
Xuanhe Er ◽  
Junjie Lv

Recent studies have shown that the Super-Resolution Generative Adversarial Network (SRGAN) can significantly improve the quality of single-image super-resolution. However, existing SRGAN approaches also have drawbacks, such as inadequate feature utilization, a huge number of parameters, and poor scalability. To further enhance visual quality, we thoroughly study three key components of SRGAN: network architecture, adversarial loss, and perceptual loss, and propose a DenseNet with Residual-in-Residual Bottleneck Blocks (RRBB), named Residual Bottleneck Dense Network (RBDN), for single-image super-resolution. In particular, RBDN combines ResNet and DenseNet in different roles: ResNet refines feature values by addition, while DenseNet memorizes feature values by concatenation. The DenseNet adopts the bottleneck structure to reduce the network parameters and improve the convergence rate. In addition, the proposed RRBB, as the basic network building unit, removes the batch normalization (BN) layer and employs the ELU activation function to mitigate the adverse effects of removing BN. In this way, RBDN enjoys both the features refined by residual groups and the features memorized by dense connections, achieving better performance than most current residual blocks.
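The RRBB unit described above combines three ideas: a 1×1–3×3–1×1 bottleneck, no batch normalization, and ELU activations, wrapped in nested residuals. A minimal PyTorch sketch under assumed channel sizes and residual scaling (the paper's exact hyperparameters are not given here):

```python
import torch
import torch.nn as nn

class BottleneckBlock(nn.Module):
    """1x1 reduce -> 3x3 -> 1x1 expand, with ELU and no batch norm."""

    def __init__(self, channels=64, reduced=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, reduced, 1),               # reduce
            nn.ELU(),
            nn.Conv2d(reduced, reduced, 3, padding=1),     # spatial conv
            nn.ELU(),
            nn.Conv2d(reduced, channels, 1),               # expand
        )

    def forward(self, x):
        return x + self.body(x)  # inner residual

class RRBB(nn.Module):
    """Residual-in-residual bottleneck: outer skip over stacked blocks."""

    def __init__(self, channels=64, n_blocks=2, scale=0.2):
        super().__init__()
        self.blocks = nn.Sequential(
            *[BottleneckBlock(channels) for _ in range(n_blocks)]
        )
        self.scale = scale

    def forward(self, x):
        return x + self.scale * self.blocks(x)  # outer residual

unit = RRBB()
out = unit(torch.randn(1, 64, 12, 12))
```

The bottleneck's 1×1 reduction is where the parameter savings come from: the expensive 3×3 convolution runs on 16 channels instead of 64, and ELU's smooth negative branch stands in for the normalization that was removed.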


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 40499-40511
Author(s):  
Jiayv Qin ◽  
Xianfang Sun ◽  
Yitong Yan ◽  
Longcun Jin ◽  
Xinyi Peng

Author(s):  
Qiang Yu ◽  
Feiqiang Liu ◽  
Long Xiao ◽  
Zitao Liu ◽  
Xiaomin Yang

Deep-learning (DL)-based methods are of growing importance in the field of single-image super-resolution (SISR). However, the practical deployment of these DL-based models remains a problem due to their heavy computation and storage requirements. The powerful feature maps of hidden layers in convolutional neural networks (CNNs) help the model learn useful information, but there is redundancy among feature maps that can be further exploited. To address these issues, this paper proposes a lightweight efficient feature generating network (EFGN) for SISR, built from efficient feature generating blocks (EFGBs). Specifically, the EFGB applies cheap operations to the original features to produce additional feature maps with only a slight increase in parameters. With the help of these extra feature maps, the network can extract more useful information from low-resolution (LR) images to reconstruct the desired high-resolution (HR) images. Experiments on benchmark datasets demonstrate that the proposed EFGN outperforms other deep-learning-based methods in most cases while possessing relatively lower model complexity. Additionally, running-time measurements indicate its feasibility for real-time use.
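The idea of generating extra feature maps from cheap operations is similar in spirit to GhostNet-style modules: a regular convolution produces a few primary maps, and an inexpensive depthwise convolution derives additional maps from them. A hedged sketch of that pattern in PyTorch (the paper's actual EFGB design and sizes are assumptions here, not specified in the abstract):

```python
import torch
import torch.nn as nn

class CheapFeatureBlock(nn.Module):
    """Sketch of a cheap-feature-generation block: half the output
    channels come from a standard conv, the other half from a
    depthwise conv applied to those primary features."""

    def __init__(self, in_ch=32, out_ch=64):
        super().__init__()
        primary = out_ch // 2
        self.primary = nn.Conv2d(in_ch, primary, 3, padding=1)
        # Depthwise conv: one filter per channel, so its parameter
        # cost is tiny compared to a full convolution.
        self.cheap = nn.Conv2d(primary, primary, 3, padding=1,
                               groups=primary)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        p = self.act(self.primary(x))
        extra = self.act(self.cheap(p))          # cheap derived maps
        return torch.cat([p, extra], dim=1)      # full feature set

block = CheapFeatureBlock()
out = block(torch.randn(1, 32, 20, 20))  # 64 output channels
```

Since the depthwise branch has `primary * 9` weights versus `primary * primary * 9` for a full convolution, roughly half of the output channels are obtained at a small fraction of the usual cost, which matches the abstract's claim of "more feature maps with only a slight parameter increase."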


Author(s):  
Vishal Chudasama ◽  
Kishor Upla ◽  
Kiran Raja ◽  
Raghavendra Ramachandra ◽  
Christoph Busch
