Domain Transfer Learning for Hyperspectral Image Super-Resolution

2019 ◽  
Vol 11 (6) ◽  
pp. 694 ◽  
Author(s):  
Xiaoyan Li ◽  
Lefei Zhang ◽  
Jane You

A Hyperspectral Image (HSI) contains a great number of spectral bands for each pixel, but its spatial resolution is low. Hyperspectral image super-resolution enhances the spatial resolution while preserving the high spectral resolution through software techniques. Recently, methods have been proposed that fuse the HSI with a Multispectral Image (MSI), but they assume that an MSI of the same scene is available alongside the observed HSI, which limits the super-resolution reconstruction quality. In this paper, a new framework based on domain transfer learning for HSI super-resolution is proposed to enhance the spatial resolution of HSI by learning from general-purpose optical images (natural scene images) and exploiting the cross-correlation between the observed low-resolution HSI and the high-resolution MSI. First, the relationship between low- and high-resolution images is learned by a single convolutional super-resolution network and then transferred to HSI via transfer learning. Second, the obtained pre-high-resolution HSI (pre-HSI), the observed low-resolution HSI, and the high-resolution MSI are jointly used to estimate the endmember matrix and the abundance code, capturing the spectral characteristics. Experimental results on ground-based and remote sensing datasets demonstrate that the proposed method achieves competitive performance and outperforms existing HSI super-resolution methods.
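The unmixing step mentioned above (estimating an endmember matrix and an abundance code) rests on the linear mixing model, in which each pixel spectrum is a weighted combination of a few pure spectra. A minimal numpy sketch with made-up data, where least-squares estimation stands in for the paper's actual solver:

```python
import numpy as np

# Linear mixing model: each pixel spectrum y ~ E @ a, where E holds
# endmember spectra as columns and a is the abundance vector.
rng = np.random.default_rng(0)
n_bands, n_endmembers, n_pixels = 30, 4, 100

E = rng.random((n_bands, n_endmembers))                    # hypothetical endmember matrix
A_true = rng.dirichlet(np.ones(n_endmembers), n_pixels).T  # abundances sum to 1 per pixel
Y = E @ A_true                                             # observed pixel spectra

# Least-squares abundance estimation (a stand-in for the paper's solver)
A_est, *_ = np.linalg.lstsq(E, Y, rcond=None)
```

Because the synthetic spectra are noise-free, `A_est` recovers `A_true` exactly; on real data this solve would be constrained (non-negativity, sum-to-one) and regularized.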

2018 ◽  
Vol 10 (10) ◽  
pp. 1574 ◽  
Author(s):  
Dongsheng Gao ◽  
Zhentao Hu ◽  
Renzhen Ye

Due to sensor limitations, hyperspectral images (HSIs) are acquired by hyperspectral sensors with high spectral resolution but low spatial resolution. It is difficult for sensors to acquire images with high spatial resolution and high spectral resolution simultaneously. Hyperspectral image super-resolution seeks to enhance the spatial resolution of HSI through software techniques. In recent years, various methods have been proposed to fuse HSI and multispectral image (MSI) from an unmixing or a spectral dictionary perspective. However, these methods extract the spectral information from each image individually and therefore ignore the cross-correlation between the observed HSI and MSI. It is difficult to achieve high spatial resolution while preserving the spatial-spectral consistency between low-resolution HSI and high-resolution HSI. In this paper, a self-dictionary regression-based method is proposed to utilize the cross-correlation between the observed HSI and MSI. Both the observed low-resolution HSI and MSI are simultaneously considered to estimate the endmember dictionary and the abundance code. To preserve the spectral consistency, the endmember dictionary is extracted by performing a common sparse basis selection on the concatenation of the observed HSI and MSI. Then, a consistency constraint is exploited to ensure the spatial consistency between the abundance code of the low-resolution HSI and the abundance code of the high-resolution HSI. Extensive experiments on three datasets demonstrate that the proposed method outperforms the state-of-the-art methods.
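The self-dictionary idea, selecting endmembers from the data's own columns, can be illustrated with a greedy pure-pixel selection in the spirit of common sparse basis selection. This is a simplified stand-in for the paper's joint selection, run here on a synthetic pixel matrix with known pure pixels planted in it:

```python
import numpy as np

def select_self_dictionary(X, k):
    """Greedy pure-pixel selection: repeatedly pick the column of X with
    the largest residual norm after projecting out columns already chosen."""
    R = X.astype(float).copy()
    chosen = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        chosen.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)   # remove the chosen atom's component from all residuals
    return chosen

# Synthetic pixel matrix: 3 hidden endmembers, 200 mixed pixels,
# with the pure pixels planted in the first three columns.
rng = np.random.default_rng(1)
pure = rng.random((20, 3))
abund = rng.dirichlet(np.ones(3), 200).T
X = pure @ abund
X[:, :3] = pure
idx = select_self_dictionary(X, 3)
```

Because every mixed pixel is a convex combination of the pure ones, its norm (and residual norm at every step) cannot exceed that of a pure pixel, so the greedy pass recovers the planted columns.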


2019 ◽  
Vol 11 (10) ◽  
pp. 1229 ◽  
Author(s):  
Jing Hu ◽  
Minghua Zhao ◽  
Yunsong Li

Limited by existing imagery sensors, hyperspectral images are characterized by high spectral resolution but low spatial resolution. The super-resolution (SR) technique, which aims at enhancing the spatial resolution of the input image, is a hot topic in computer vision. In this paper, we present a hyperspectral image (HSI) SR method based on a deep information distillation network (IDN) and an intra-fusion operation. Specifically, bands are first selected at a fixed spectral interval and super-resolved by an IDN. The IDN employs distillation blocks to gradually extract abundant and efficient features for reconstructing the selected bands. Second, the unselected bands are obtained via spectral correlation, yielding a coarse high-resolution (HR) HSI. Finally, the spectrally interpolated coarse HR HSI is intra-fused with the input HSI to achieve a finer HR HSI, making further use of the spatial-spectral information the unselected bands convey. Different from most existing fusion-based HSI SR methods, the proposed intra-fusion operation does not require any auxiliary co-registered image as input, which makes the method more practical. Moreover, contrary to most single-image HSI SR methods, whose performance decreases significantly as the image quality gets worse, the proposed method deeply utilizes the spatial-spectral information and the mapping knowledge provided by the IDN, achieving more robust performance. Experimental data and comparative analysis have demonstrated the effectiveness of this method.
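The band-selection and spectral-interpolation steps can be sketched as follows. The IDN itself is replaced by a trivial nearest-neighbour 2x upscaler (an assumption for brevity), and the unselected bands are predicted from neighbouring selected bands weighted by spectral correlation:

```python
import numpy as np

def upsample2x(band):
    # nearest-neighbour stand-in for the IDN's learned 2x upscaling
    return np.repeat(np.repeat(band, 2, axis=0), 2, axis=1)

rng = np.random.default_rng(2)
hsi = rng.random((8, 16, 16))          # (bands, H, W) low-resolution cube
step = 2
selected = range(0, 8, step)           # every `step`-th band goes to the SR net
sr = {b: upsample2x(hsi[b]) for b in selected}

# Unselected bands: weighted sum of super-resolved neighbouring bands,
# with weights taken from their spectral correlation at low resolution.
coarse = np.empty((8, 32, 32))
for b in range(8):
    if b in sr:
        coarse[b] = sr[b]
        continue
    nbrs = [n for n in (b - 1, b + 1) if n in sr]
    w = np.array([abs(np.corrcoef(hsi[b].ravel(), hsi[n].ravel())[0, 1])
                  for n in nbrs])
    coarse[b] = sum(wi * sr[n] for wi, n in zip(w, nbrs)) / w.sum()
```

The result is the "coarse HR HSI" of the abstract; the intra-fusion refinement with the input HSI would follow as a separate step.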


Author(s):  
R. S. Hansen ◽  
D. W. Waldram ◽  
T. Q. Thai ◽  
R. B. Berke

Background: High-resolution Digital Image Correlation (DIC) measurements have previously been produced by stitching neighboring images, which often requires short working distances. Separately, the image processing community has developed super-resolution (SR) imaging techniques, which improve resolution by combining multiple overlapping images.
Objective: This work investigates the novel pairing of super resolution with digital image correlation as an alternative method to produce high-resolution full-field strain measurements.
Methods: First, an image reconstruction test is performed, comparing the ability of three previously published SR algorithms to replicate a high-resolution image. Second, an applied translation is compared against DIC measurement using both low- and super-resolution images. Third, a ring sample is mechanically deformed and DIC strain measurements from low- and super-resolution images are compared.
Results: SR measurements show improvements compared to low-resolution images, although they do not perfectly replicate the high-resolution image. SR-DIC demonstrates reduced error and improved confidence in measuring rigid body translation when compared to low-resolution alternatives, and it also shows improvement in spatial resolution for strain measurements of ring deformation.
Conclusions: Super resolution imaging can be effectively paired with Digital Image Correlation, offering improved spatial resolution, reduced error, and increased measurement confidence.
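The rigid-body translation measurement at the heart of the second test can be illustrated with global phase correlation: a known shift is recovered from the cross-power spectrum of two images. DIC proper correlates local subsets rather than whole images, so this is a simplified stand-in:

```python
import numpy as np

# Recover a known integer translation between two images via phase correlation.
rng = np.random.default_rng(3)
img = rng.random((64, 64))
dy, dx = 5, 3
shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)

F1, F2 = np.fft.fft2(img), np.fft.fft2(shifted)
cross = F1 * np.conj(F2)
corr = np.fft.ifft2(cross / np.abs(cross)).real  # normalized cross-power spectrum
peak = np.unravel_index(np.argmax(corr), corr.shape)
dy_est = (-peak[0]) % 64                         # peak sits at minus the shift, mod N
dx_est = (-peak[1]) % 64
```

Higher-resolution input shrinks the physical size of one pixel, which is why SR imagery tightens the translation estimate in the paper's test.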


Author(s):  
Dong Seon Cheng ◽  
Marco Cristani ◽  
Vittorio Murino

Image super-resolution is one of the most appealing applications of image processing, capable of retrieving a high-resolution image by fusing several registered low-resolution images depicting an object of interest. However, employing super-resolution on video data is challenging: a video sequence generally contains a lot of scattered information regarding several objects of interest in cluttered scenes. Especially with hand-held cameras, the overall quality may be poor due to low resolution or unsteadiness. The objective of this chapter is to demonstrate why standard image super-resolution fails on video data, what problems arise, and how we can overcome them. In our first contribution, we propose a novel Bayesian framework for super-resolution of persistent objects of interest in video sequences. We call this process Distillation. In the traditional formulation of the image super-resolution problem, the observed target is (1) always the same, (2) acquired using a camera making small movements, and (3) found in a number of low-resolution images sufficient to recover high-frequency information. These assumptions are usually unsatisfied in real-world video acquisitions and often beyond the control of the video operator. With Distillation, we aim to extend and generalize the image super-resolution task, embedding it in a structured framework that accurately distills all the informative bits of an object of interest. In practice, the Distillation process: i) identifies, in a semi-supervised way, a set of objects of interest, clustering the related video frames and registering them with respect to global rigid transformations; ii) for each one, produces a high-resolution image by weighting each pixel according to the information retrieved about the object of interest. As a second contribution, we extend the Distillation process to deal with objects of interest whose appearance transformations are not (only) rigid.
This process, built on top of Distillation, is hierarchical, in the sense that clustering is applied recursively, beginning with the analysis of whole frames and selectively focusing on smaller sub-regions whose isolated motion can reasonably be assumed rigid. The ultimate product of the overall process is a strip of images that describes the dynamics of the video at high resolution, switching between alternative local descriptions in response to visual changes. Our approach is first tested on synthetic data, obtaining encouraging comparative results with respect to known super-resolution techniques and good robustness against noise. Second, real data coming from different videos are considered, attempting to recover the major details of the objects in motion.
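The classical multi-image SR that Distillation generalises can be sketched with shift-and-add: registered low-resolution frames taken at known sub-pixel offsets are interleaved back onto the high-resolution grid. The uniform weighting here is the simplification; Distillation instead weights pixels by per-object information content:

```python
import numpy as np

# Shift-and-add reconstruction from four 2x-decimated frames.
rng = np.random.default_rng(4)
hr = rng.random((16, 16))                     # ground-truth high-resolution scene
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]    # sub-pixel (here: half-pixel) shifts
frames = [hr[oy::2, ox::2] for oy, ox in offsets]  # 2x decimation at each phase

recon = np.zeros_like(hr)
for (oy, ox), f in zip(offsets, frames):
    recon[oy::2, ox::2] = f                   # place each frame on the HR grid
```

With the four phase offsets covering every HR pixel exactly once, the reconstruction is exact; real acquisitions have blur, noise, and irregular shifts, which is where the Bayesian weighting earns its keep.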


Author(s):  
Dr.Vani. K ◽  
Anto. A. Micheal

This paper is an attempt to combine a high-resolution panchromatic lunar image with a low-resolution multispectral lunar image to produce a composite image using a wavelet approach. Many sensors provide image data about the lunar surface, each with its own spatial and spectral resolution, which limits the information that can be extracted about the lunar surface. The high-resolution panchromatic image has high spatial resolution but low spectral resolution; the multispectral image has high spectral resolution but low spatial resolution. Extracting features such as craters, crater morphology, rilles, and regolith surfaces from the low-spatial-resolution multispectral image alone may not yield satisfactory results. Fusing the high spatial resolution of one sensor with the high spectral resolution of the other provides better information, and the fused images support enhanced crater mapping and mineral mapping of the lunar surface. Since wavelet-based fusion preserves the spectral content needed for mineral mapping, the image fusion was performed using the wavelet approach.
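A common wavelet fusion rule is to keep the multispectral band's approximation sub-band (its spectral content) and inject the panchromatic image's detail sub-bands (its spatial content). A single-level Haar sketch, with small random arrays standing in for the lunar imagery, and no claim that this matches the paper's exact wavelet or rule:

```python
import numpy as np

def haar2d(x):
    """Single-level 2-D Haar transform: returns (LL, LH, HL, HH)."""
    a = (x[0::2] + x[1::2]) / 2          # row-wise average
    d = (x[0::2] - x[1::2]) / 2          # row-wise difference
    return ((a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2,
            (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2)

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

rng = np.random.default_rng(5)
pan = rng.random((8, 8))                 # high-spatial-resolution panchromatic
ms_band = rng.random((8, 8))             # upsampled multispectral band
LLm, *_ = haar2d(ms_band)                # spectral content from the MS band
_, LHp, HLp, HHp = haar2d(pan)           # spatial detail from the pan image
fused = ihaar2d(LLm, LHp, HLp, HHp)
```

Because the Haar transform is invertible, the fused image carries the MS band's approximation exactly, which is the property that preserves spectral content for mineral mapping.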


Author(s):  
Zheng Wang ◽  
Mang Ye ◽  
Fan Yang ◽  
Xiang Bai ◽  
Shin'ichi Satoh

Person re-identification (REID) is an important task in video surveillance and forensics applications. Most previous approaches are based on the key assumption that all person images have uniform and sufficiently high resolutions. In practice, various low resolutions and scale mismatches always exist in open-world REID. We term this problem Scale-Adaptive Low Resolution Person Re-identification (SALR-REID). The most intuitive way to address it is to increase the various low resolutions (not only low, but also at different scales) to a uniform high resolution. SRGAN is one of the most competitive image super-resolution deep networks, designed with a fixed upscaling factor. However, it is still not suitable for the SALR-REID task, which requires a network that not only synthesizes high-resolution images with different upscaling factors, but also extracts discriminative image features for judging a person's identity. (1) To promote the ability of scale-adaptive upscaling, we cascade multiple SRGANs in series. (2) To supplement the ability of image feature representation, we plug in a re-identification network. With a unified formulation, a Cascaded Super-Resolution GAN (CSR-GAN) framework is proposed. Extensive evaluations on two simulated datasets and one public dataset demonstrate the advantages of our method over related state-of-the-art methods.
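The scale-adaptive cascading idea can be reduced to function composition: apply a 2x upscaling stage as many times as the input needs to reach a uniform size. Each stage below is a nearest-neighbour stand-in for one SRGAN generator in the cascade, with a made-up target size:

```python
import numpy as np

def upscale2x(img):
    # stand-in for one SRGAN stage (a learned generator in the paper)
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def cascade_to_uniform(img, target_size=128):
    """Run as many cascaded 2x stages as this input's scale requires."""
    stages = 0
    while img.shape[0] < target_size:
        img = upscale2x(img)
        stages += 1
    return img, stages

rng = np.random.default_rng(8)
small, big = rng.random((16, 16)), rng.random((64, 64))
out_s, n_s = cascade_to_uniform(small)   # needs three 2x stages
out_b, n_b = cascade_to_uniform(big)     # needs one 2x stage
```

Inputs at different scales exit through different numbers of stages but at the same resolution, which is the property the downstream re-identification network relies on.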


2011 ◽  
Vol 08 (04) ◽  
pp. 273-280
Author(s):  
YUXIANG YANG ◽  
ZENGFU WANG

This paper describes a successful application of the Matting Laplacian Matrix to the problem of generating high-resolution range images. The Matting Laplacian Matrix in this paper exploits the fact that discontinuities in range and coloring tend to co-align, which enables us to generate a high-resolution range image by integrating a regular camera image into the range data. Using one registered and potentially high-resolution camera image as a reference, we iteratively refine the input low-resolution range image in terms of both spatial resolution and depth precision. We show that by using such a Matting Laplacian Matrix, we can obtain high-quality, high-resolution range images.
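The co-alignment principle (depth edges should sit where color edges sit) can be illustrated with a simple joint bilateral filter, which is a stand-in for the Matting Laplacian solve, not the paper's method: each refined depth value averages neighbouring depths, weighted by color similarity in the guide image, so a misplaced depth edge snaps to the color edge:

```python
import numpy as np

def joint_bilateral(depth, guide, radius=2, sigma=0.1):
    """Color-guided depth refinement: average neighbouring depths,
    weighted by similarity to the centre pixel in the guide image."""
    h, w = depth.shape
    out = np.empty_like(depth, dtype=float)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            wts = np.exp(-((guide[i0:i1, j0:j1] - guide[i, j]) ** 2)
                         / (2 * sigma ** 2))
            out[i, j] = (wts * depth[i0:i1, j0:j1]).sum() / wts.sum()
    return out

guide = np.zeros((8, 8)); guide[:, 4:] = 1.0   # color edge at column 4
depth = np.zeros((8, 8)); depth[:, 3:] = 1.0   # depth edge misaligned at column 3
refined = joint_bilateral(depth, guide)
```

After one pass, the right side of the color edge keeps depth 1 while the stray depth values left of it are averaged down, pulling the depth discontinuity toward the color discontinuity.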


2019 ◽  
Vol 11 (23) ◽  
pp. 2809 ◽  
Author(s):  
Tang ◽  
Xu ◽  
Huang ◽  
Huang ◽  
Sun

Hyperspectral image (HSI) super-resolution (SR) is an important technique for improving the spatial resolution of HSI. Recently, a method based on sparse representation improved the performance of HSI SR significantly. However, the spectral dictionary was learned at a fixed, empirically chosen size, without considering the training data. Moreover, most of the existing methods fail to explore the relationship among the sparse coefficients. To address these crucial issues, an effective method for HSI SR is proposed in this paper. First, a spectral dictionary is learned that can adaptively estimate a suitable size according to the input HSI without any prior information. Then, the proposed method exploits the nonlocal correlation of the sparse coefficients. Double-regularized sparse representation is then introduced to achieve better reconstructions for HSI SR. Finally, a high-spatial-resolution HSI is generated from the obtained coefficient matrix and the learned adaptive-size spectral dictionary. To evaluate the performance of the proposed method, we conduct experiments on two well-known datasets. The experimental results demonstrate that it outperforms several state-of-the-art methods in terms of popular universal quality evaluation indexes.
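The sparse-representation backbone of such methods is the coding step: each spectrum is expressed with a few atoms of the spectral dictionary. A minimal Orthogonal Matching Pursuit sketch on synthetic data; the dictionary here is orthonormal for a clean demonstration, whereas the paper's dictionary is learned and adaptively sized:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of D for y."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(7)
D = np.linalg.qr(rng.standard_normal((30, 30)))[0]  # orthonormal spectral dictionary
x_true = np.zeros(30)
x_true[[4, 17]] = [1.5, -2.0]                       # a 2-sparse spectrum code
y = D @ x_true
x_est = omp(D, y, 2)
```

With an orthonormal dictionary the correlations `D.T @ y` equal the true code, so OMP recovers the exact support and coefficients; the double regularization in the paper additionally ties codes of nonlocally similar patches together.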


2014 ◽  
Vol 568-570 ◽  
pp. 652-655 ◽  
Author(s):  
Zhao Li ◽  
Le Wang ◽  
Tao Yu ◽  
Bing Liang Hu

This paper presents a novel method for solving single-image super-resolution problems, based upon low-rank representation (LRR). Given a set of low-resolution image patches, LRR seeks the lowest-rank representation among all the candidates that represent all patches as linear combinations of the patches in a low-resolution dictionary. By jointly training two dictionaries for the low-resolution and high-resolution images, we can enforce the similarity of LRRs between a low-resolution and high-resolution image pair with respect to their own dictionaries. Therefore, the LRR of a low-resolution image can be applied with the high-resolution dictionary to generate a high-resolution image. Unlike the well-known sparse representation, which computes the sparsest representation of each image patch individually, LRR aims at finding the lowest-rank representation of a collection of patches jointly. LRR thus better captures the global structure of the image. Experiments show that our method gives good results both visually and quantitatively.
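The low-rank intuition, treating a collection of patches jointly rather than one at a time, can be shown in one step: stack patches as columns and truncate the SVD. The actual LRR solver minimizes the nuclear norm with respect to a dictionary; this is only the underlying idea on synthetic rank-2 patches:

```python
import numpy as np

rng = np.random.default_rng(6)
basis = rng.random((25, 2))                 # two 5x5 "structures", flattened
codes = rng.random((2, 40))
patches = basis @ codes                     # 40 patches, rank 2 by construction
noisy = patches + 1e-3 * rng.standard_normal(patches.shape)

# Joint low-rank recovery: truncated SVD of the whole patch matrix.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
rank2 = (U[:, :2] * s[:2]) @ Vt[:2]

err_noisy = np.linalg.norm(noisy - patches)
err_lowrank = np.linalg.norm(rank2 - patches)
```

The jointly recovered `rank2` matrix is closer to the clean patches than the noisy input, the kind of shared global structure that per-patch sparse coding cannot exploit.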


2021 ◽  
Vol 13 (12) ◽  
pp. 2308
Author(s):  
Masoomeh Aslahishahri ◽  
Kevin G. Stanley ◽  
Hema Duddu ◽  
Steve Shirtliffe ◽  
Sally Vail ◽  
...  

Unmanned aerial vehicle (UAV) imaging is a promising data acquisition technique for image-based plant phenotyping. However, UAV images have a lower spatial resolution than similarly equipped in-field ground-based vehicle systems, such as carts, because of their distance from the crop canopy, which can be particularly problematic for measuring small plant features. In this study, the performance of three deep learning-based super-resolution models, employed as a pre-processing tool to enhance the spatial resolution of low-resolution images of three different kinds of crops, was evaluated. To train the super-resolution models, aerial images were collected with two sensors co-mounted on a UAV flown over lentil, wheat, and canola breeding trials. A software workflow was created to pre-process and align real-world low-resolution and high-resolution images and use them as inputs and targets for training the super-resolution models. To demonstrate the effectiveness of real-world images, three experiments were conducted, employing synthetic images, manually downsampled high-resolution images, or real-world low-resolution images as input to the models. The results demonstrate that models trained with synthetic images cannot generalize to real-world images and fail to produce outputs comparable to the targets. However, the same models trained with real-world datasets can reconstruct higher-fidelity outputs, which are better suited for measuring plant phenotypes.

