Hyperspectral Image Classification Based on Two-Stage Subspace Projection

2018 ◽  
Vol 10 (10) ◽  
pp. 1565 ◽  
Author(s):  
Xiaoyan Li ◽  
Lefei Zhang ◽  
Jane You

Hyperspectral image (HSI) classification is widely used to provide important information about land cover. Each pixel of an HSI has hundreds of spectral bands, which are often treated as features. However, some features are highly correlated and nonlinearly related. To address these problems, we propose a new discriminant analysis framework for HSI classification based on Two-stage Subspace Projection (TwoSP). First, the proposed framework projects the original feature data into a higher-dimensional feature subspace using kernel principal component analysis (KPCA). Then, a novel discrimination-information-based locality preserving projection (DLPP) method is applied to the KPCA feature data. Finally, an optimal low-dimensional feature space is constructed for the subsequent HSI classification. The main contributions of the proposed TwoSP method are twofold: (1) discrimination information is utilized to minimize the within-class distance in a small neighborhood, and (2) the subspace found by TwoSP separates the samples better than the one obtained by applying DLPP directly to the original HSI data. Experimental results on two real-world HSI datasets demonstrate the effectiveness of the proposed TwoSP method in terms of classification accuracy.
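The two-stage idea described above can be sketched with scikit-learn, substituting ordinary linear discriminant analysis (LDA) for the paper's DLPP, which has no public implementation; the dataset, kernel width, and component counts are illustrative assumptions:

```python
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = make_moons(n_samples=200, noise=0.05, random_state=0)

# Stage 1: lift into a higher-dimensional kernel feature subspace
kpca = KernelPCA(n_components=10, kernel="rbf", gamma=5.0)
X_kpca = kpca.fit_transform(X)

# Stage 2: supervised projection to a low-dimensional discriminative
# space (LDA stands in for the paper's DLPP)
lda = LinearDiscriminantAnalysis(n_components=1)
X_low = lda.fit_transform(X_kpca, y)

print(X_kpca.shape, X_low.shape)
```

Applying the supervised projection in the KPCA space, rather than directly on the raw features, is what lets a linear discriminant separate the nonlinearly entangled classes.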

2020 ◽  
pp. 147387162097820
Author(s):  
Haili Zhang ◽  
Pu Wang ◽  
Xuejin Gao ◽  
Yongsheng Qi ◽  
Huihui Gao

T-distributed stochastic neighbor embedding (t-SNE) is an effective visualization method. However, it is non-parametric and cannot be applied to streaming data or online scenarios. Although kernel t-SNE provides an explicit projection from a high-dimensional data space to a low-dimensional feature space, some outliers are not well projected. In this paper, bi-kernel t-SNE is proposed for out-of-sample data visualization. Gaussian kernel matrices of the input and feature spaces are used to approximate the explicit projection. Principal component analysis is then applied to reduce the dimensionality of the feature kernel matrix. Thus, the difference between inliers and outliers is revealed, and any new sample can be well mapped. The performance of the proposed method for out-of-sample projection is tested on several benchmark datasets by comparing it with other state-of-the-art algorithms.
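The out-of-sample mapping that kernel t-SNE relies on, and that the bi-kernel variant refines, can be sketched as kernel regression from the input space onto a fixed t-SNE embedding; the Gaussian width `sigma`, the dataset, and the train/new split are assumptions for illustration, and the paper's second (feature-space) kernel is omitted:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.manifold import TSNE

X, _ = load_iris(return_X_y=True)
X_train, X_new = X[:140], X[140:]

# Embed the training set with ordinary (non-parametric) t-SNE
Y_train = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X_train)

def out_of_sample(X_new, X_train, Y_train, sigma=1.0):
    # Kernel-t-SNE-style map: Gaussian kernel weights over the input
    # space, normalized per row, regress onto the fixed embedding
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    K /= K.sum(axis=1, keepdims=True)
    return K @ Y_train

Y_new = out_of_sample(X_new, X_train, Y_train)
print(Y_new.shape)
```

New samples land at kernel-weighted averages of the training embedding, which is exactly where the outlier problem the abstract mentions arises: a point far from all training data gets nearly uniform weights and collapses toward the embedding's center.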


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Jimmy C. Azar ◽  
Martin Simonsson ◽  
Ewert Bengtsson ◽  
Anders Hast

Comparing the staining patterns of paired antibodies designed against the same protein but targeting different epitopes provides quality control over the binding and over the antibodies' ability to identify the target protein correctly and exclusively. We present a method for automated quantification of immunostaining patterns for antibodies in breast tissue using the Human Protein Atlas database. In such tissue, the dark brown dye 3,3′-diaminobenzidine is used as an antibody-specific stain, whereas the blue dye hematoxylin is used as a counterstain. The proposed method is based on clustering and relative scaling of features following principal component analysis. Our method is able (1) to accurately segment and identify staining patterns and quantify the amount of staining and (2) to detect paired antibodies by correlating the segmentation results among different cases. Moreover, the method is simple, operates in a low-dimensional feature space, and is computationally efficient, which makes it suitable for high-throughput processing of tissue microarrays.
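A minimal sketch of the "clustering with relative scaling of features after PCA" step, on synthetic two-dye pixel colors; the RGB values, noise level, and cluster count are invented stand-ins, not the paper's data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-in for stained pixels: a brown DAB-like blob and a
# blue hematoxylin-like blob in RGB space (values are illustrative)
dab = rng.normal([0.55, 0.35, 0.20], 0.03, size=(500, 3))
hem = rng.normal([0.30, 0.35, 0.60], 0.03, size=(500, 3))
pixels = np.vstack([dab, hem])

# PCA, then relative scaling of the scores, then clustering into
# stain classes, roughly following the pipeline the abstract describes
scores = PCA(n_components=2).fit_transform(pixels)
scores /= scores.std(axis=0)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(labels))
```

The cluster sizes then quantify the amount of each stain; correlating such per-cluster quantities across cases is the paired-antibody detection step.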


2019 ◽  
Vol 11 (10) ◽  
pp. 1219 ◽  
Author(s):  
Lan Zhang ◽  
Hongjun Su ◽  
Jingwei Shen

Dimensionality reduction (DR) is an important preprocessing step in hyperspectral image applications. In this paper, a superpixelwise kernel principal component analysis (SuperKPCA) method for DR is proposed that performs kernel principal component analysis (KPCA) on each homogeneous region, to fully utilize KPCA's ability to extract nonlinear features. Moreover, the differences in the DR results obtained from different fundamental images (the first principal components obtained by principal component analysis (PCA), KPCA, and minimum noise fraction (MNF)) are compared. Extensive experiments show that when 5, 10, 20, and 30 samples from each class are selected, for the Indian Pines, Pavia University, and Salinas datasets: (1) when the most suitable fundamental image is selected, the classification accuracy obtained by SuperKPCA increases by 0.06%–0.74%, 3.88%–4.37%, and 0.39%–4.85%, respectively, compared with SuperPCA, which performs PCA on each homogeneous region; (2) the DR results obtained from different first principal components are different and complementary. By fusing the multiscale classification results obtained from different first principal components, the classification accuracy can be increased by 0.54%–2.68%, 0.12%–1.10%, and 0.01%–0.08%, respectively, compared with the method based only on the most suitable fundamental image.
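The region-wise application of KPCA can be sketched as follows; the random "pixels" and the three equal-sized "homogeneous regions" are toy assumptions standing in for an actual superpixel segmentation of a hyperspectral cube:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
# Toy stand-in for a flattened hyperspectral cube: 300 pixels x 50
# bands, pre-grouped into three "homogeneous regions" (superpixels)
pixels = rng.normal(size=(300, 50))
segments = np.repeat([0, 1, 2], 100)

def superkpca(pixels, segments, n_components=3):
    # Run KPCA separately inside each region, as SuperKPCA does,
    # so each region gets its own nonlinear low-dimensional basis
    out = np.zeros((pixels.shape[0], n_components))
    for seg in np.unique(segments):
        mask = segments == seg
        kpca = KernelPCA(n_components=n_components, kernel="rbf")
        out[mask] = kpca.fit_transform(pixels[mask])
    return out

reduced = superkpca(pixels, segments)
print(reduced.shape)
```

Fitting the kernel per region is what distinguishes SuperKPCA from a single global KPCA: each homogeneous region is spectrally simpler, so a small number of components captures it well.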


2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
Zhou Yuqing ◽  
Sun Bingtao ◽  
Li Fengping ◽  
Song Wenlei

This paper focuses on fault diagnosis for NC machine tools and puts forward a fault diagnosis method based on kernel principal component analysis (KPCA) and k-nearest neighbor (kNN). A data-dependent KPCA based on the covariance matrix of the sample data is designed to overcome the subjectivity in selecting the kernel function parameter, and is used to transform the original high-dimensional data into a low-dimensional manifold feature space with the intrinsic dimensionality. The kNN method is modified to suit tool fault diagnosis by determining thresholds for multiple fault classes, and is applied to detect potential faults. An experimental analysis on NC milling machine tools shows that the proposed method outperforms the other two methods in tool fault diagnosis.
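A hedged sketch of the KPCA-plus-kNN-threshold idea on synthetic data; the latent-direction data model, the linear kernel, the neighbor count, and the 99th-percentile threshold are all illustrative assumptions, not the paper's data-dependent kernel:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
# Toy vibration-style features: healthy data varies along one latent
# direction; "tool faults" are shifted far along that same direction
direction = np.ones(8) / np.sqrt(8)
t = rng.normal(0, 2, size=(200, 1))
normal = t * direction + rng.normal(0, 0.3, size=(200, 8))
faulty = normal[:20] + 15 * direction

# Low-dimensional feature space via KPCA (a linear kernel keeps the
# far-away faults from collapsing toward the origin, unlike RBF)
kpca = KernelPCA(n_components=3, kernel="linear").fit(normal)
Z_normal = kpca.transform(normal)

# kNN-distance threshold: 99th percentile of the mean 5-NN distance
# on healthy data (each training point counts itself as a neighbor)
nn = NearestNeighbors(n_neighbors=5).fit(Z_normal)
threshold = np.percentile(nn.kneighbors(Z_normal)[0].mean(axis=1), 99)
alarms = nn.kneighbors(kpca.transform(faulty))[0].mean(axis=1) > threshold
print(alarms.mean())
```

A new sample whose mean neighbor distance in the KPCA space exceeds the healthy-data threshold is flagged as a potential fault; the paper extends this to per-class thresholds for multiple fault types.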


Author(s):  
S. Schmitz ◽  
U. Weidner ◽  
H. Hammer ◽  
A. Thiele

Abstract. In this paper, the nonlinear dimension reduction algorithm Uniform Manifold Approximation and Projection (UMAP) is investigated to visualize information contained in high-dimensional feature representations of Polarimetric Interferometric Synthetic Aperture Radar (PolInSAR) data. Based on polarimetric parameters, target decomposition methods and interferometric coherences, a wide range of features is extracted that spans the high-dimensional feature space. UMAP is applied to determine a representation of the data in 2D and 3D Euclidean space that preserves local and global structures of the data and is still suited for classification. The performance of UMAP in terms of generating expressive visualizations is evaluated on PolInSAR data acquired by the F-SAR sensor and compared to that of Principal Component Analysis (PCA), Laplacian Eigenmaps (LE) and t-distributed Stochastic Neighbor Embedding (t-SNE). For this purpose, a visual analysis of 2D embeddings is performed. In addition, a quantitative analysis evaluates the preservation of information in the low-dimensional representations with respect to the separability of different land cover classes. The results show that UMAP exceeds the capabilities of PCA and LE in these respects and is competitive with t-SNE.
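The quantitative separability comparison can be sketched with scikit-learn; UMAP itself lives in the third-party umap-learn package (`umap.UMAP`, same fit/transform API), so PCA and t-SNE stand in here, with the silhouette score as an assumed class-separability measure and digits data replacing the PolInSAR features:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.metrics import silhouette_score

X, y = load_digits(return_X_y=True)
X, y = X[:500], y[:500]

# 2D embeddings from a linear and a nonlinear method
emb_pca = PCA(n_components=2).fit_transform(X)
emb_tsne = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)

# Quantify how well each 2D embedding separates the classes
s_pca = silhouette_score(emb_pca, y)
s_tsne = silhouette_score(emb_tsne, y)
print(round(float(s_pca), 3), round(float(s_tsne), 3))
```

Running the same score over a UMAP embedding would slot directly into this comparison, mirroring the paper's separability analysis across the four methods.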


2018 ◽  
Vol 2018 ◽  
pp. 1-16
Author(s):  
Bo She ◽  
Fuqing Tian ◽  
Weige Liang ◽  
Gang Zhang

Dimension reduction methods have proved powerful and practical for extracting latent features from signals for process monitoring. A linear dimension reduction method called nonlocal orthogonal preserving embedding (NLOPE) and its nonlinear form, nonlocal kernel orthogonal preserving embedding (NLKOPE), are proposed and applied for condition monitoring and fault detection. Different from kernel orthogonal neighborhood preserving embedding (KONPE) and kernel principal component analysis (KPCA), the NLOPE and NLKOPE models aim to preserve global and local data structures simultaneously by constructing a dual-objective optimization function. To adjust the trade-off between global and local data structures, a weighting parameter is introduced to balance the objective function. NLKOPE combines the advantages of KONPE and KPCA and is also more powerful than NLOPE in extracting potentially useful features from nonlinear data sets. For condition monitoring and fault detection, monitoring statistics are constructed in the feature space. Finally, three case studies on gearbox and bearing test rigs are carried out to demonstrate the effectiveness of the proposed nonlinear fault detection method.
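The monitoring-statistic step can be sketched with a linear stand-in (PCA scores plus Hotelling's T²), since NLKOPE itself is not publicly packaged; the latent-structure data, the fault magnitude, and the empirical 99% control limit are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Healthy process data generated from a 2-D latent structure
W = rng.normal(size=(6, 2))
X_train = rng.normal(size=(300, 2)) @ W.T + 0.1 * rng.normal(size=(300, 6))
X_test = rng.normal(size=(100, 2)) @ W.T + 0.1 * rng.normal(size=(100, 6))
X_test[50:] += 8 * W[:, 0]        # inject a fault in the last 50 samples

# Monitoring statistic in the latent feature space: Hotelling's T^2
pca = PCA(n_components=2).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)
inv_cov = np.linalg.inv(np.cov(Z_train.T))
t2 = lambda Z: np.einsum("ij,jk,ik->i", Z, inv_cov, Z)

limit = np.percentile(t2(Z_train), 99)   # empirical control limit
alarms = t2(Z_test) > limit
print(alarms[:50].mean(), alarms[50:].mean())
```

In the paper the scores come from the NLKOPE feature space rather than PCA, but the monitoring logic is the same: a statistic over the latent scores compared against a control limit estimated from healthy data.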


Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4391 ◽  
Author(s):  
Aimin Miao ◽  
Jiajun Zhuang ◽  
Yu Tang ◽  
Yong He ◽  
Xuan Chu ◽  
...  

Variety classification is an important step in seed quality testing. This study introduces t-distributed stochastic neighbourhood embedding (t-SNE), a manifold learning algorithm, into the field of hyperspectral imaging (HSI) and proposes a method for classifying seed varieties. Images of 800 maize kernels of eight varieties (100 kernels per variety, 50 kernels for each side of the seed) were acquired in the visible–near-infrared (386.7–1016.7 nm) wavelength range. The images were pre-processed by Procrustes analysis (PA) to improve the classification accuracy, and the data were then reduced to a low-dimensional space using t-SNE. Finally, Fisher's discriminant analysis (FDA) was used for classification of the low-dimensional data. To assess the effect of t-SNE, principal component analysis (PCA), kernel principal component analysis (KPCA) and locally linear embedding (LLE) were used as comparative methods, and the results demonstrated that the t-SNE model with PA pre-processing obtained better classification results. The highest classification accuracy of the t-SNE model reached 97.5%, substantially better than the results of the other models (up to 75% for PCA, 85% for KPCA, 76.25% for LLE). The overall results indicate that the t-SNE model with PA pre-processing can be used for variety classification of waxy maize seeds and can be considered a new method for hyperspectral image analysis.
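The reduce-then-classify pipeline (t-SNE followed by Fisher's discriminant analysis) can be sketched on a stand-in dataset; the digits data replaces the maize spectra, the Procrustes pre-processing step is omitted, and note that the embedding is fit on all samples, matching an in-sample evaluation style:

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)
X, y = X[:400], y[:400]

# Reduce to a low-dimensional space with t-SNE (the embedding sees
# all samples; t-SNE has no transform for held-out data)
Z = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)

# Fisher's discriminant analysis on the embedded coordinates
acc = cross_val_score(LinearDiscriminantAnalysis(), Z, y, cv=5).mean()
print(round(float(acc), 3))
```

Swapping the `TSNE` line for PCA, KPCA, or LLE reproduces the comparative structure of the study, all against the same downstream discriminant classifier.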


2018 ◽  
Author(s):  
Toni Bakhtiar

Kernel Principal Component Analysis (Kernel PCA) is a generalization of ordinary PCA which maps the original data into a high-dimensional feature space. The mapping is expected to address the issues of nonlinearity among variables and separation among classes in the original data space. The key problem in the use of kernel PCA is estimating the parameter of the kernel function, for which there is so far no clear guidance; in practice, parameter selection largely depends on the judgment of the researcher. This study exploited the Gaussian kernel function and focused on the ability of kernel PCA to visualize the separation of classified data. Assessments were undertaken based on the misclassification rate obtained by Fisher Linear Discriminant Analysis on the first two principal components. The results suggest that kernel PCA, with the parameter selected in the interval between the closest and the farthest distances among the objects of the original data, visualizes the separation better than ordinary PCA.
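The suggested parameter interval can be sketched directly: compute the closest and farthest pairwise distances in the data and pick the Gaussian width between them (the geometric-mean choice and the iris data are assumptions for illustration):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import KernelPCA
from sklearn.metrics import pairwise_distances

X, _ = load_iris(return_X_y=True)
D = pairwise_distances(X)
d_min = D[D > 0].min()    # closest pair of distinct objects
d_max = D.max()           # farthest pair

# Pick the Gaussian width inside [d_min, d_max]; the geometric mean
# is one arbitrary choice within the suggested interval
sigma = np.sqrt(d_min * d_max)
Z = KernelPCA(n_components=2, kernel="rbf",
              gamma=1 / (2 * sigma ** 2)).fit_transform(X)
print(d_min < sigma < d_max, Z.shape)
```

Here `gamma = 1/(2σ²)` converts the distance-scale parameter σ into scikit-learn's RBF parameterization; the first two components `Z` are then what the study's visual and misclassification assessments operate on.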


Author(s):  
Seikh Mazharul Islam ◽  
Minakshi Banerjee ◽  
Siddhartha Bhattacharyya

This chapter proposes a content-based image retrieval method dealing with high-dimensional image features. Kernel principal component analysis (KPCA) is performed on the MPEG-7 Color Structure Descriptor (CSD) (64 bins) to compute a low-dimensional nonlinear subspace. The Partitioning Around Medoids (PAM) algorithm is then used to shrink the search space further, where the number of clusters is chosen by maximizing the average silhouette width. To refine these clusters, outliers in the query image's cluster are excluded by Support Vector Clustering (SVC). A One-Class Support Vector Machine (OCSVM) is then used to predict relevant images from the query image's cluster, with the initial retrieval results based on the similarity measurement fed to the OCSVM for training. Images are ranked among the positively labelled images. This method achieves more than 95% precision before recall reaches 0.5 for conceptually meaningful query categories. Comparative results are also obtained from: 1) MPEG-7 CSD features used directly and 2) other dimensionality reduction techniques.
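A rough sketch of the KPCA-plus-OCSVM ranking stage (omitting the CSD features, PAM, and SVC steps; the digits data, kernel width, `nu`, and training-set choice are all illustrative assumptions, not the chapter's pipeline):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import KernelPCA
from sklearn.svm import OneClassSVM

X, y = load_digits(return_X_y=True)

# KPCA to a low-dimensional nonlinear subspace (digit images stand
# in for the 64-bin CSD histograms used in the chapter)
Z = KernelPCA(n_components=8, kernel="rbf", gamma=0.001).fit_transform(X)

# Train the OCSVM on samples standing in for the "initial retrieval"
# results of a query's class, then score the whole collection
query_class = 3
train = Z[y == query_class][:50]
ocsvm = OneClassSVM(nu=0.1, gamma="scale").fit(train)
scores = ocsvm.decision_function(Z)

# Rank all images by OCSVM score; top hits should share the class
top = np.argsort(scores)[::-1][:20]
prec = (y[top] == query_class).mean()
print(round(float(prec), 2))
```

Ranking by the one-class decision score, rather than raw feature similarity, is what lets the method down-weight outliers that happen to sit near the query in feature space.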

