Hyperspectral Classification via Superpixel Kernel Learning-Based Low Rank Representation

2018 ◽  
Vol 10 (10) ◽  
pp. 1639 ◽  
Author(s):  
Tianming Zhan ◽  
Le Sun ◽  
Yang Xu ◽  
Guowei Yang ◽  
Yan Zhang ◽  
...  

High-dimensional image classification is a fundamental technique for information retrieval from hyperspectral remote sensing data. However, data quality is readily degraded by the atmosphere and by noise in the imaging process, which makes it difficult to achieve good classification performance. In this paper, multiple kernel learning-based low-rank representation at the superpixel level (Sp_MKL_LRR) is proposed to improve the classification accuracy for hyperspectral images. Superpixels are first generated from the hyperspectral image to reduce the effect of noise and to form homogeneous regions. An optimal superpixel kernel parameter is then selected for the kernel matrix using a multiple kernel learning framework. Finally, a kernel low-rank representation is applied to classify the hyperspectral image. The proposed method offers two advantages: (1) the global correlation constraint is exploited by the low-rank representation, while local neighborhood information is extracted as the superpixel kernel adaptively learns the high-dimensional manifold features of the samples in each class; (2) it meets the challenges of multiscale feature learning and adaptive parameter determination found in conventional kernel methods. Experimental results on several hyperspectral image datasets demonstrate that the proposed method outperforms several state-of-the-art classifiers in terms of overall accuracy, average accuracy, and the kappa statistic.
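The abstract does not give Sp_MKL_LRR's kernel construction, but the core multiple-kernel idea it relies on can be sketched: combine base kernels at several scales under a convex weight constraint. The RBF base kernels, scale parameters, and uniform weights below are all illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def combined_kernel(X, Y, gammas, weights):
    """Convex combination of RBF base kernels at several scales."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # enforce the simplex constraint on kernel weights
    return sum(wi * rbf_kernel(X, Y, g) for wi, g in zip(w, gammas))

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))  # e.g. six superpixel-mean spectra (hypothetical)
K = combined_kernel(X, X, gammas=[0.1, 1.0, 10.0], weights=[1.0, 1.0, 1.0])
```

In an MKL framework the weights would be learned jointly with the classifier rather than fixed as here; the convex combination guarantees the result is still a valid kernel.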

2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Wenjia Niu ◽  
Kewen Xia ◽  
Baokai Zu ◽  
Jianchuan Bai

Unlike the Support Vector Machine (SVM), Multiple Kernel Learning (MKL) allows a dataset to choose useful kernels according to its distribution characteristics rather than relying on a single, precisely tuned one. It has been shown in the literature that MKL achieves superior recognition accuracy compared with SVM, although at the expense of time-consuming computation. This creates analytical and computational difficulties in solving MKL algorithms. To overcome this issue, we first develop a novel kernel approximation approach for MKL and then propose an efficient Low-Rank MKL (LR-MKL) algorithm using the Low-Rank Representation (LRR). It is well acknowledged that LRR can reduce dimensionality while retaining the data features under a global low-rank constraint. Furthermore, we extend the binary-class MKL to multiclass MKL based on a pairwise strategy. Finally, the recognition accuracy and efficiency of LR-MKL are verified on the Yale, ORL, LSVT, and Digit datasets. Experimental results show that the proposed LR-MKL algorithm is an efficient kernel-weight allocation method for MKL and substantially boosts its performance.
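The abstract does not specify LR-MKL's approximation scheme, but the generic low-rank idea it invokes can be sketched: replace a symmetric PSD kernel matrix by its best rank-r approximation via truncated eigendecomposition. This is a minimal illustration of the principle, not the paper's algorithm.

```python
import numpy as np

def low_rank_kernel(K, r):
    """Best rank-r approximation of a symmetric PSD kernel matrix."""
    vals, vecs = np.linalg.eigh(K)      # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:r]    # indices of the r largest eigenvalues
    return (vecs[:, top] * vals[top]) @ vecs[:, top].T

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 3))
K = A @ A.T                  # a PSD Gram matrix of rank 3
K_exact = low_rank_kernel(K, 3)   # rank matches: reconstruction is exact
K_lossy = low_rank_kernel(K, 2)   # rank 2: lossy but optimal in Frobenius norm
```

Working with the rank-r factors directly (rather than the full n x n matrix) is what yields the memory and runtime savings in practice.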


2018 ◽  
Vol 26 (4) ◽  
pp. 980-88 ◽  
Author(s):  
王庆超 WANG Qing-chao ◽  
付光远 FU Guang-yuan ◽  
汪洪桥 WANG Hong-qiao ◽  
王超 WANG Chao

2014 ◽  
Vol 143 ◽  
pp. 68-79 ◽  
Author(s):  
Alain Rakotomamonjy ◽  
Sukalpa Chanda

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Ling Wang ◽  
Hongqiao Wang ◽  
Guangyuan Fu

Extensions of kernel methods for class imbalance problems have been extensively studied. Although they work well on nonlinear problems, high computation and memory costs severely limit their application to real-world imbalanced tasks. The Nyström method is an effective technique for scaling kernel methods. However, the standard Nyström method needs to sample a sufficiently large number of landmark points to ensure an accurate approximation, which seriously affects its efficiency. In this study, we propose a multi-Nyström method based on mixtures of Nyström approximations to avoid the explosion of the subkernel matrix, while the optimization of the mixture weights is embedded into the model training process by multiple kernel learning (MKL) algorithms to yield a more accurate low-rank approximation. Moreover, we select subsets of landmark points according to the imbalance distribution to reduce the model's sensitivity to skewness. We also provide a kernel stability analysis of our method and show that the model solution error is bounded by weighted approximation errors, which can help us improve the learning process. Extensive experiments on several large-scale datasets show that our method achieves higher classification accuracy and a dramatic speedup of MKL algorithms.
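The multi-Nyström mixture itself is not detailed in the abstract, but the standard single-subset Nyström step it builds on is well known: approximate K ≈ C W⁺ Cᵀ, where C holds the kernel columns for m landmark points and W is the landmark-landmark block. A minimal sketch, with an assumed RBF kernel:

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def nystrom(X, landmark_idx, kernel=rbf):
    """Nystroem approximation K ~= C W+ C^T from a landmark subset."""
    C = kernel(X, X[landmark_idx])   # n x m cross-kernel block
    W = C[landmark_idx]              # m x m landmark kernel block
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 2))
K = rbf(X, X)
K_approx = nystrom(X, np.arange(10))  # all points as landmarks: exact recovery
```

With fewer landmarks the approximation becomes lossy; the paper's contribution is to mix several such approximations, built from differently chosen landmark subsets, with weights learned by MKL.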


2018 ◽  
Author(s):  
Sriniwas Govinda Surampudi ◽  
Joyneel Misra ◽  
Gustavo Deco ◽  
Raju Bapi Surampudi ◽  
Avinash Sharma ◽  
...  

Over the last decade there has been growing interest in understanding brain activity in the absence of any task or stimulus, as captured by resting-state functional magnetic resonance imaging (rs-fMRI). These resting-state patterns are not static but exhibit complex spatio-temporal dynamics. In recent years, substantial effort has been devoted to characterizing different functional connectivity (FC) configurations as brain states make transitions over time. The dynamics governing these transitions and their relationship with stationary functional connectivity remain elusive. A multitude of methods has been proposed to discover and characterize FC dynamics, and one of the most widely accepted is the sliding-window approach. Moreover, as these FC configurations are observed to repeat cyclically in time, there was further motivation to use a generic clustering scheme to identify latent states of the dynamics. We discover the underlying lower-dimensional manifold of the temporal structure, which is further parameterized as a set of local density distributions, or latent transient states. We propose an innovative method that learns parameters specific to these latent states using a graph-theoretic model (temporal Multiple Kernel Learning, tMKL) and finally predicts the grand average FC of unseen subjects by leveraging a state-transition Markov model. tMKL thus learns a mapping between the underlying anatomical network and the temporal structure. Training and testing were done using the rs-fMRI data of 46 healthy participants, and the results establish the viability of the proposed solution. Parameters of the model are learned via state-specific optimization formulations, and yet the model performs on par with or better than state-of-the-art models for predicting the grand average FC. Moreover, the model shows sensitivity towards subject-specific anatomy.
The proposed model performs significantly better than established models for predicting resting-state functional connectivity: the whole-brain dynamic mean-field model, the single diffusion kernel model, and another variant of the multiple kernel learning model. In summary, we provide a novel solution that makes no strong assumptions about the underlying data, is generally applicable to resting or task data for learning subject-specific state transitions, and successfully characterizes the SC-dFC-FC relationship through a unifying framework.
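The tMKL model's transition component is not specified beyond being a state-transition Markov model; its simplest ingredient can be sketched as estimating a row-stochastic transition matrix from a sequence of clustered sliding-window states. The state sequence below is a made-up toy example, not data from the study.

```python
import numpy as np

def transition_matrix(state_seq, n_states):
    """Row-stochastic Markov transition matrix from a latent-state sequence."""
    T = np.zeros((n_states, n_states))
    for a, b in zip(state_seq[:-1], state_seq[1:]):
        T[a, b] += 1.0                       # count observed transitions
    row = T.sum(axis=1, keepdims=True)
    # normalize each row; leave all-zero rows (unvisited states) as zeros
    return np.divide(T, row, out=np.zeros_like(T), where=row > 0)

seq = [0, 0, 1, 2, 1, 0, 1, 2, 2, 0]  # toy sequence of clustered window states
T = transition_matrix(seq, 3)
```

In the paper's setting such a matrix would weight the state-specific FC predictions when reconstructing the grand average FC of an unseen subject.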

