Deep Unsupervised Fusion Learning for Hyperspectral Image Super Resolution

Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2348
Author(s):  
Zhe Liu ◽  
Yinqiang Zheng ◽  
Xian-Hua Han

Hyperspectral image (HSI) super-resolution (SR) is a challenging task due to its ill-posed nature and has attracted extensive attention from the research community. Previous methods concentrated on leveraging various hand-crafted image priors of a latent high-resolution hyperspectral (HR-HS) image to regularize the degradation model of the observed low-resolution hyperspectral (LR-HS) and HR-RGB images. They also exploited different optimization strategies for searching for a plausible solution, which usually led to limited reconstruction performance. Recently, deep-learning-based methods have evolved to automatically learn the abundant image priors in a latent HR-HS image, and they have made great progress in HS image super-resolution. However, current deep-learning methods face two difficulties: they rely on designing ever more complicated and deeper neural network architectures to boost performance, and they require large-scale training triplets, such as the LR-HS, HR-RGB, and corresponding HR-HS images, which significantly limits their applicability to real scenarios. In this work, a deep unsupervised fusion-learning framework is proposed that generates the latent HR-HS image using only the observed LR-HS and HR-RGB images, without any previously prepared training triplets. Based on the fact that a convolutional neural network architecture is capable of capturing a large number of low-level image statistics (priors), the underlying priors of the spatial structures and spectral attributes in the latent HR-HS image are learned automatically from its degraded observations alone. Specifically, the parameter space of a generative neural network is searched to learn the required HR-HS image by minimizing the reconstruction errors of the observations through the mathematical relations between the data.
Moreover, special convolutional layers that approximate the degradation operations between the observations and the latent HR-HS image are designed to construct an end-to-end unsupervised learning framework for HS image super-resolution. Experiments on two benchmark HS datasets, CAVE and Harvard, demonstrate that the proposed method is capable of producing very promising results, even under a large upscaling factor, and that it outperforms other unsupervised state-of-the-art methods by a large margin, confirming its superiority and efficiency.
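The degradation relations underlying the framework above can be made concrete. The following is a minimal NumPy sketch (not the authors' code; the box blur, band-averaging spectral response, and array sizes are illustrative assumptions) of how an LR-HS observation and an HR-RGB observation are produced from a latent HR-HS cube. This is the forward model whose reconstruction error the unsupervised network minimizes.

```python
import numpy as np

def degrade(hr_hs, scale=4, srf=None):
    """Simulate the two observations from a latent HR-HS cube of shape (H, W, B).

    LR-HS : spatial blur + downsampling by `scale`.
    HR-RGB: per-pixel spectral projection through a camera
            spectral response function (SRF) of shape (B, 3).
    """
    h, w, b = hr_hs.shape
    # Box-blur and decimate (stands in for the learned degradation conv layer).
    lr_hs = hr_hs.reshape(h // scale, scale, w // scale, scale, b).mean(axis=(1, 3))
    # Default SRF: three non-overlapping band-averaging filters (an assumption).
    if srf is None:
        srf = np.zeros((b, 3))
        for c in range(3):
            srf[c * b // 3:(c + 1) * b // 3, c] = 1.0
        srf /= srf.sum(axis=0, keepdims=True)
    hr_rgb = hr_hs @ srf               # (H, W, 3)
    return lr_hs, hr_rgb

hr_hs = np.random.rand(32, 32, 31)     # CAVE-like toy cube: 31 spectral bands
lr_hs, hr_rgb = degrade(hr_hs)
print(lr_hs.shape, hr_rgb.shape)       # (8, 8, 31) (32, 32, 3)
```

Given these two observations, the unsupervised objective is simply the sum of reconstruction errors of `lr_hs` and `hr_rgb` from the generator's current HR-HS estimate.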

2020 ◽  
Vol 32 ◽  
pp. 03044
Author(s):  
Vanita Mane ◽  
Suchit Jadhav ◽  
Praneya Lal

Single-image super-resolution using deep learning techniques has shown very high reconstruction performance over the last few years. We propose a novel three-dimensional convolutional neural network called 3D FSRCNN, based on FSRCNN, which restores the high-resolution quality of structural MRI. The 3D network generates a high-resolution (HR) brain image from a low-resolution (LR) input image. Its simple design keeps time complexity low while preserving high reconstruction quality. The network is trained on T1-weighted structural MRI images from the Human Connectome Project dataset, a large publicly available brain MRI database.
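For readers unfamiliar with the volumetric operation this network is built from, the sketch below is a hypothetical minimal NumPy implementation (not the 3D FSRCNN itself) of a single 3D convolution: the kernel slides over all three spatial axes of an MRI volume, which is what distinguishes the 3D network from its 2D counterpart.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv3d(volume, kernel):
    """Valid-mode 3D convolution (cross-correlation, as in deep-learning
    frameworks) of a (D, H, W) volume with a (k, k, k) kernel."""
    # windows has shape (D-k+1, H-k+1, W-k+1, k, k, k)
    windows = sliding_window_view(volume, kernel.shape)
    return np.einsum('dhwijk,ijk->dhw', windows, kernel)

vol = np.random.rand(16, 16, 16)       # toy T1-weighted patch
kernel = np.full((3, 3, 3), 1.0 / 27)  # 3x3x3 averaging kernel
out = conv3d(vol, kernel)
print(out.shape)                       # (14, 14, 14)
```

A real layer learns many such kernels per channel; this sketch only shows the sliding-window mechanics.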


2021 ◽  
Author(s):  
Debjoy Chowdhury

Recovering a High-Resolution (HR) image from a Low-Resolution (LR) image is the core task of image Super-Resolution (SR). Convolutional Neural Networks (CNNs) are widely adopted in many applications, including the generation of HR images from LR images. Although CNNs deliver great performance improvements, much room for improvement remains, and there has always been a trade-off between the number of parameters and performance. This thesis presents a novel convolutional neural network architecture for high-scale image SR inspired by the DenseNet and ResNet architectures. In particular, the convolutional layers in the network are modified by stacking the features and reusing the weight layers to increase the receptive field. It is shown how this method expands the receptive field and improves the performance of super-resolution networks without increasing the number of trainable parameters or sacrificing computation time. These modifications can easily be integrated into any convolutional neural network to improve accuracy through efficient high-level feature extraction while reducing training time and parameter count. The proposed methods are especially effective for challenging high-scale SR because the expanded receptive field aids edge and texture recovery. Experimental results show that the proposed model outperforms state-of-the-art methods.
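The receptive-field claim can be illustrated without any framework: re-applying the same convolution kernel grows the receptive field linearly while the parameter count stays fixed. The toy below (an illustrative sketch, not the thesis architecture) applies one shared 3x3 kernel several times to an impulse and measures the nonzero footprint.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d_same(img, kernel):
    """'Same'-padded 2D cross-correlation with a square (k, k) kernel."""
    p = kernel.shape[0] // 2
    windows = sliding_window_view(np.pad(img, p), kernel.shape)
    return np.einsum('hwij,ij->hw', windows, kernel)

def receptive_field(times, size=21, k=3):
    """Apply ONE shared k x k kernel `times` times to a centered impulse
    and return the width of the nonzero footprint."""
    img = np.zeros((size, size))
    img[size // 2, size // 2] = 1.0
    kernel = np.ones((k, k))           # the single, reused weight layer
    for _ in range(times):
        img = conv2d_same(img, kernel)
    cols = np.flatnonzero(img.sum(axis=0) > 0)
    return cols[-1] - cols[0] + 1

print([receptive_field(t) for t in (1, 2, 3)])   # [3, 5, 7] -> grows as 2t + 1
```

Each reuse of the kernel widens the footprint by k - 1 pixels while adding zero new weights, which is the trade-off the thesis exploits.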


2019 ◽  
Vol 11 (23) ◽  
pp. 2859 ◽  
Author(s):  
Jiaojiao Li ◽  
Ruxing Cui ◽  
Bo Li ◽  
Rui Song ◽  
Yunsong Li ◽  
...  

Hyperspectral image (HSI) super-resolution (SR) is of great application value and has attracted broad attention. The hyperspectral single-image super-resolution (HSISR) task is particularly difficult because no auxiliary high-resolution image is available. To tackle this challenging task, and unlike existing learning-based HSISR algorithms, in this paper we propose a novel framework, a 1D–2D attentional convolutional neural network, which employs a separation strategy to extract spatial and spectral information and then fuses them gradually. More specifically, our network consists of two streams: a spatial one and a spectral one. The spectral stream is mainly composed of 1D convolutions that encode small changes in the spectrum, while 2D convolutions, cooperating with an attention mechanism, are used in the spatial pathway to encode spatial information. Furthermore, a novel hierarchical side-connection strategy is proposed for effectively fusing spectral and spatial information. Compared with a typical 3D convolutional neural network (CNN), the 1D–2D CNN is easier to train and has fewer parameters. More importantly, the proposed framework not only provides an effective solution to the HSISR problem but also shows potential for hyperspectral pansharpening. Experiments on widely used HSISR and hyperspectral pansharpening benchmarks demonstrate that the proposed method outperforms other state-of-the-art methods in both visual quality and quantitative measurements.
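The "fewer parameters" claim follows from simple arithmetic: decomposing a k x k x k 3D convolution into a 1D spectral convolution plus a 2D spatial convolution reduces the per-layer weight count from k^3 to k + k^2 per input/output channel pair. A quick check with illustrative channel counts (not the paper's exact layer configuration):

```python
def conv_params(c_in, c_out, *kernel_dims):
    """Weight count of a convolution layer (bias terms ignored)."""
    n = c_in * c_out
    for k in kernel_dims:
        n *= k
    return n

c_in, c_out, k = 64, 64, 3
p3d = conv_params(c_in, c_out, k, k, k)                              # full 3D conv
p1d2d = conv_params(c_in, c_out, k) + conv_params(c_in, c_out, k, k) # 1D + 2D streams
print(p3d, p1d2d)        # 110592 49152
print(p3d / p1d2d)       # 2.25
```

For k = 3 the decomposition needs 12 weights per channel pair instead of 27, a 2.25x reduction that grows with kernel size.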

