Sparse Representation Graph for Hyperspectral Image Classification Assisted by Class Adjusted Spatial Distance

2020 ◽  
Vol 10 (21) ◽  
pp. 7740
Author(s):  
Wanghao Xu ◽  
Siqi Luo ◽  
Yunfei Wang ◽  
Youqiang Zhang ◽  
Guo Cao

In the past few years, sparse representation (SR) graph-based semi-supervised learning (SSL) has drawn a lot of attention for its impressive performance in hyperspectral image classification with small numbers of training samples. Among these methods, the probabilistic class structure regularized sparse representation (PCSSR) approach, which introduces the probabilistic relationship between samples into the SR process, has shown its superiority over state-of-the-art approaches. However, this category of classification methods simply applies another SR process to generate the probabilistic relationship, which exploits only the spectral information and fails to utilize the spatial information. In this paper, we propose using the class adjusted spatial distance (CASD) to measure the distance between every two samples. We incorporate the proposed CASD-based distance information into the PCSSR model to further increase the discriminability of the original PCSSR approach. The proposed method considers not only the spectral information but also the spatial information of the hyperspectral data, consequently leading to significant performance improvement. Experimental results on different datasets demonstrate that, compared with state-of-the-art classification models, the proposed method achieves the highest overall accuracies of 99.71%, 97.13%, and 97.07% on the Botswana (BOT), Kennedy Space Center (KSC) and truncated Indian Pines (PINE) datasets, respectively, with a small number of training samples selected from each class.
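The abstract does not spell out the CASD formula, but the idea of a spatial distance adjusted by class information can be sketched as follows. Everything here (the scaling factor `alpha` and the use of class-probability similarity) is an illustrative assumption, not the paper's exact definition:

```python
import numpy as np

def class_adjusted_spatial_distance(coords, probs, alpha=0.5):
    """Illustrative sketch (not the paper's exact formula): pairwise
    spatial Euclidean distance between pixel coordinates, inflated when
    two samples' class-probability vectors disagree."""
    # pairwise spatial Euclidean distances between (row, col) coordinates
    diff = coords[:, None, :] - coords[None, :, :]
    spatial = np.sqrt((diff ** 2).sum(-1))
    # class agreement: inner product of probability vectors, in [0, 1]
    sim = probs @ probs.T
    # adjustment factor grows with class disagreement
    adjust = 1.0 + alpha * (1.0 - sim)
    return spatial * adjust

coords = np.array([[0, 0], [0, 1], [5, 5]], dtype=float)
probs = np.array([[0.9, 0.1], [0.85, 0.15], [0.1, 0.9]])
D = class_adjusted_spatial_distance(coords, probs)
```

Spatially close pixels with agreeing class probabilities keep a small distance, while class disagreement inflates it, which is the qualitative behaviour the abstract describes.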

2020 ◽  
Vol 12 (4) ◽  
pp. 664 ◽  
Author(s):  
Binge Cui ◽  
Jiandi Cui ◽  
Yan Lu ◽  
Nannan Guo ◽  
Maoguo Gong

Hyperspectral image classification methods may not achieve good performance when only a limited number of training samples are provided. However, labeling sufficient samples of hyperspectral images to achieve adequate training is quite expensive and difficult. In this paper, we propose a novel sample pseudo-labeling method based on sparse representation (SRSPL) for hyperspectral image classification, in which sparse representation is used to select the purest samples to extend the training set. The proposed method consists of the following three steps. First, intrinsic image decomposition is used to obtain the reflectance components of hyperspectral images. Second, hyperspectral pixels are sparsely represented using an overcomplete dictionary composed of all training samples. Finally, information entropy is defined for the vectorized sparse representation, and the pixels with low information entropy are selected as pseudo-labeled samples to augment the training set. The quality of the generated pseudo-labeled samples is evaluated based on classification accuracy, i.e., overall accuracy, average accuracy, and Kappa coefficient. Experimental results on four real hyperspectral data sets demonstrate excellent classification performance using the newly added pseudo-labeled samples, which indicates that the generated samples are of high confidence.
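The entropy-based selection in the third step can be sketched in a few lines. The entropy over normalized absolute coefficients, the `threshold` parameter, and the class-energy labeling rule are illustrative assumptions; the paper defines its own criterion over the vectorized sparse representation:

```python
import numpy as np

def entropy_of_code(alpha):
    """Shannon entropy of the normalized absolute sparse coefficients.
    A code concentrated on few dictionary atoms has low entropy."""
    p = np.abs(alpha)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def select_pseudo_labels(codes, dict_labels, n_classes, threshold):
    """Hypothetical selection step: keep pixels whose code entropy is
    below `threshold`, labeling each by the class whose dictionary
    (training-sample) atoms carry the most coefficient energy."""
    selected, labels = [], []
    for i, alpha in enumerate(codes):
        if entropy_of_code(alpha) < threshold:
            energy = np.array([np.abs(alpha[dict_labels == c]).sum()
                               for c in range(n_classes)])
            selected.append(i)
            labels.append(int(energy.argmax()))
    return selected, labels

codes = np.array([[0.90, 0.05, 0.02, 0.03],   # concentrated -> low entropy
                  [0.25, 0.25, 0.25, 0.25]])  # spread out   -> high entropy
dict_labels = np.array([0, 0, 1, 1])          # class of each dictionary atom
selected, labels = select_pseudo_labels(codes, dict_labels,
                                        n_classes=2, threshold=1.0)
```

Only the first pixel is kept: its code is dominated by class-0 atoms, so it receives pseudo-label 0, while the uniform code is rejected as ambiguous.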


2020 ◽  
Vol 165 ◽  
pp. 03001
Author(s):  
Yanguo Fan ◽  
Shizhe Hou ◽  
Dingfeng Yu

Hyperspectral imagery contains both spectral information and spatial relationships among pixels. How to combine spatial information with spectral information effectively has always been a research hotspot of hyperspectral image classification. In this paper, a Spatial-Spectral Kernel Principal Component Analysis Network (SS-KPCANet) is proposed. The network is developed from the original structure of the Principal Component Analysis Network, in which PCA is replaced by KPCA to extract more nonlinear features. In addition, the combination of spatial and spectral features also improves the performance of the network. At the end of the network, neighbourhood correction is added to further improve the classification accuracy. Experiments on three datasets show the effectiveness of the proposed method. Comparison with state-of-the-art deep learning-based methods indicates that the proposed method needs fewer training samples and achieves better performance.
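Replacing PCA with KPCA over joint spatial-spectral features can be illustrated with a minimal numpy-only kernel PCA. The RBF kernel, the 3x3 mean-spectrum neighbourhood, and the toy sizes below are assumptions for illustration of one stage, not the SS-KPCANet configuration:

```python
import numpy as np

def rbf_kpca(X, n_components, gamma):
    """Minimal kernel PCA with an RBF kernel: build the kernel matrix,
    centre it in feature space, and project onto the top eigenvectors."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one  # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]  # largest eigenvalues first
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Toy cube: 8x8 pixels, 16 bands (illustrative sizes only)
rng = np.random.default_rng(0)
cube = rng.normal(size=(8, 8, 16))
spectral = cube.reshape(-1, 16)  # per-pixel spectra

# Spatial feature: mean spectrum over a 3x3 neighbourhood (edge-padded)
padded = np.pad(cube, ((1, 1), (1, 1), (0, 0)), mode="edge")
spatial = np.stack([padded[i:i + 3, j:j + 3].reshape(-1, 16).mean(0)
                    for i in range(8) for j in range(8)])

features = np.concatenate([spectral, spatial], axis=1)  # joint representation
reduced = rbf_kpca(features, n_components=8, gamma=0.01)
```

The nonlinear RBF kernel is what lets this stage capture structure that plain PCA, being linear, would miss.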


Author(s):  
P. Zhong ◽  
Z. Q. Gong ◽  
C. Schönlieb

In recent years, research in remote sensing has demonstrated that deep architectures with multiple layers can potentially extract abstract and invariant features for better hyperspectral image classification. Since the usual real-world hyperspectral image classification task cannot provide enough training samples for a supervised deep model, such as a convolutional neural network (CNN), this work turns to the deep belief network (DBN), which allows unsupervised training. A DBN trained over limited training samples usually has many “dead” (never responding) or “potentially over-tolerant” (always responding) latent factors (neurons), which decrease the DBN’s description ability and thus finally decrease the hyperspectral image classification performance. This work proposes a new diversified DBN by introducing a diversity-promoting prior over the latent factors during the DBN pre-training and fine-tuning procedures. The diversity-promoting prior in the training procedures encourages the latent factors to be uncorrelated, such that each latent factor focuses on modelling unique information; together the factors capture a large proportion of the information, which increases the description ability and classification performance of the diversified DBN. The proposed method was evaluated on a well-known real-world hyperspectral image dataset. The experiments demonstrate that the diversified DBNs can obtain much better results than original DBNs and comparable or even better performance than other recent hyperspectral image classification methods.
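A common way to realize such a diversity-promoting prior is a decorrelation penalty on the weight vectors of the latent factors, added to the training objective. The cosine-similarity penalty below is a generic stand-in for the paper's prior, not its exact form:

```python
import numpy as np

def diversity_penalty(W):
    """Sum of squared off-diagonal cosine similarities between the
    columns of W (one column per latent factor).  Minimizing this term
    pushes the factors toward mutually uncorrelated directions, so no
    two neurons model the same information."""
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)
    C = Wn.T @ Wn                   # pairwise cosine similarities
    off = C - np.diag(np.diag(C))   # drop the self-similarity diagonal
    return float((off ** 2).sum())

orthogonal = np.eye(4)       # fully diverse factors: zero penalty
redundant = np.ones((4, 2))  # two identical factors: large penalty
```

During pre-training, the gradient of this penalty would be added to the usual contrastive-divergence update, discouraging both "dead" and "over-tolerant" neurons from collapsing onto the same direction.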

