Shearlet-Based Structure-Aware Filtering for Hyperspectral and LiDAR Data Classification

2021 ◽  
Vol 2021 ◽  
pp. 1-25
Author(s):  
Sen Jia ◽  
Zhangwei Zhan ◽  
Meng Xu

The joint interpretation of hyperspectral images (HSIs) and light detection and ranging (LiDAR) data has developed rapidly in recent years due to continuously evolving image processing technology. Nowadays, most feature extraction methods convolve the raw data with fixed-size filters, so the structural and texture information of objects at multiple scales cannot be sufficiently exploited. In this article, a shearlet-based structure-aware filtering approach, abbreviated as ShearSAF, is proposed for HSI and LiDAR feature extraction and classification. Specifically, superpixel-guided kernel principal component analysis (KPCA) is first applied to the raw HSIs to reduce their dimensionality. Then, the KPCA-reduced HSI and LiDAR data are converted to the shearlet domain for texture and area feature extraction. Meanwhile, a superpixel segmentation algorithm is applied to the raw HSI data to obtain an initial oversegmentation map. Subsequently, using a well-designed minimum merging cost that fully considers spectral (HSI and LiDAR), texture, and area features, a region merging procedure is gradually conducted to produce a final merging map. Further, a scale map that locally indicates the filter size is obtained by calculating the edge distance. Finally, the KPCA-reduced HSI and LiDAR data are convolved with the locally adaptive filters for feature extraction, and a random forest (RF) classifier is adopted for classification. The effectiveness of our ShearSAF approach is verified on three real-world datasets, and the results show that ShearSAF achieves higher accuracy than the comparison methods, particularly on small training-sample problems. The code for this work will be available at http://jiasen.tech/papers/ for the sake of reproducibility.
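A minimal sketch of the first and last stages of this pipeline, KPCA dimensionality reduction followed by random forest classification, using scikit-learn. Toy random data stands in for real HSI/LiDAR inputs, and the shearlet-domain filtering and region-merging steps are omitted; all sizes and parameters are illustrative, not the paper's.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Toy stand-in for HSI pixels: 200 samples x 30 spectral bands, 2 classes
X = rng.normal(size=(200, 30))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Stage 1: KPCA reduces the spectral dimension (here to 5 components)
kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.1)
X_reduced = kpca.fit_transform(X)

# Final stage (locally adaptive filtering omitted): a random forest
# classifies the reduced features
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_reduced, y)
print(X_reduced.shape)  # (200, 5)
```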

2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Yun Mo ◽  
Zhongzhao Zhang ◽  
Yang Lu ◽  
Weixiao Meng ◽  
Gul Agha

With the rapid development of mobile terminals, positioning techniques based on the fingerprinting method have drawn attention from many researchers and even world-famous companies. To overcome some shortcomings of existing fingerprinting systems and further improve system performance, this paper proposes, on the one hand, a coarse positioning method based on random forest, which can partition the area into several subregions and classify a test point into its region with outstanding accuracy compared with some typical clustering algorithms. On the other hand, supported by mathematical analysis, kernel principal component analysis is applied for radio map processing, which may provide better robustness and adaptability than linear feature extraction methods and manifold learning techniques. We build both a theoretical model and a real environment to verify feasibility and reliability. The experimental results show that the proposed indoor positioning system achieves 99% coarse locating accuracy and improves fine positioning accuracy by 15% on average in a strongly noisy environment compared with some typical fingerprinting-based methods.
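The coarse positioning step can be sketched with scikit-learn on a toy radio map; the subregion layout, access point count, and RSSI values below are invented for illustration and do not come from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Hypothetical radio map: RSSI (dBm) from 4 access points, 3 subregions
n = 300
region = rng.integers(0, 3, size=n)
centers = np.array([[-60.0, -70.0, -80.0, -90.0],
                    [-80.0, -60.0, -90.0, -70.0],
                    [-90.0, -80.0, -60.0, -70.0]])
rssi = centers[region] + rng.normal(scale=3.0, size=(n, 4))

# Coarse positioning: classify a test fingerprint into a subregion
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(rssi, region)
test_fp = centers[1] + rng.normal(scale=3.0, size=4)
print(clf.predict([test_fp]))  # predicted subregion index
```

A fine positioning step (e.g., weighted k-nearest neighbors within the predicted subregion) would then run on the KPCA-processed radio map of that subregion only.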


2015 ◽  
Vol 2015 ◽  
pp. 1-13
Author(s):  
Shibin Xuan

Based on kernel principal component analysis, fuzzy set theory, and the maximum margin criterion, a novel image feature extraction and recognition method, called fuzzy kernel maximum margin criterion (FKMMC), is proposed. In the proposed method, two new fuzzy scatter matrices are defined. The new fuzzy scatter matrices fully reflect the relation between the fuzzy membership degree and the offset of each training sample from its subclass center. In addition, a concise and reliable method for computing the fuzzy between-class scatter matrix is provided. Experimental results on four face databases (AR, extended Yale B, GTFD, and FERET) demonstrate that the proposed method outperforms other methods.
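One plausible form of a membership-weighted within-class scatter matrix, sketched in NumPy: fuzzy membership degrees weight each sample's offset from its subclass center. This is an illustrative reconstruction of the general idea, not the paper's exact definition of its two redefined matrices.

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy data: 30 samples in 4 dimensions, 3 subclasses, fuzzy memberships
X = rng.normal(size=(30, 4))
U = rng.dirichlet(np.ones(3), size=30)  # U[j, c]: membership of sample j in class c

# Fuzzy subclass centers: membership-weighted means
centers = (U.T @ X) / U.sum(axis=0)[:, None]

# Membership-weighted within-class scatter: each sample's offset from
# each subclass center, weighted by its membership in that subclass
Sw = np.zeros((4, 4))
for c in range(3):
    D = X - centers[c]
    Sw += (U[:, c][:, None] * D).T @ D
print(Sw.shape)  # (4, 4)
```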


2013 ◽  
Vol 347-350 ◽  
pp. 2390-2394
Author(s):  
Xiao Fang Liu ◽  
Chun Yang

Nonlinear feature extraction using the standard Kernel Principal Component Analysis (KPCA) method requires large amounts of memory and has high computational complexity on large datasets. A Greedy Kernel Principal Component Analysis (GKPCA) method is applied to reduce the training data and address nonlinear feature extraction on large datasets for classification. First, a subset that approximates the full training data is selected using the greedy technique of the GKPCA method. Then, the feature extraction model is trained on the subset instead of the full training data. Finally, the fuzzy c-means (FCM) algorithm classifies the features extracted by the GKPCA, KPCA, and PCA methods, respectively. The simulation results indicate that both the GKPCA and KPCA methods outperform the PCA method in feature extraction performance. While retaining the performance of the KPCA method, the GKPCA method reduces computational complexity thanks to the reduced training set.
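The core idea, fitting KPCA on a small subset and projecting the full dataset with the subset-trained model, can be sketched as follows. For brevity, random sampling stands in for the paper's greedy selection criterion; the point is that the kernel matrix shrinks from 1000x1000 to 100x100.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 20))  # "large" training set (toy scale)

# Subset selection: GKPCA picks a representative subset greedily;
# random sampling is used here as a placeholder for that criterion
subset = X[rng.choice(len(X), size=100, replace=False)]

# KPCA is trained on the subset only, so the kernel matrix is 100x100
kpca = KernelPCA(n_components=10, kernel="rbf", gamma=0.05).fit(subset)

# The full dataset is projected with the subset-trained model
X_feat = kpca.transform(X)
print(X_feat.shape)  # (1000, 10)
```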


Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2403
Author(s):  
Jakub Browarczyk ◽  
Adam Kurowski ◽  
Bozena Kostek

The aim of the study is to compare electroencephalographic (EEG) signal feature extraction methods in terms of the effectiveness of the resulting classification of brain activities. Electroencephalographic signals were obtained using an EEG device from 17 subjects in three mental states (relaxation, excitation, and solving a logical task). Blind source separation employing independent component analysis (ICA) was performed on the obtained signals. Welch's method, autoregressive modeling, and the discrete wavelet transform were used for feature extraction. Principal component analysis (PCA) was performed to reduce the dimensionality of the feature vectors. k-Nearest Neighbors (kNN), Support Vector Machines (SVM), and Neural Networks (NN) were employed for classification. Precision, recall, and F1 scores are reported, along with a discussion based on statistical analysis. The paper also contains the code used in preprocessing and in the main part of the experiments.
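One of the compared pipelines, Welch-method spectral features reduced by PCA and classified by kNN, can be sketched on synthetic two-class "EEG" epochs. The sampling rate, epoch count, and class-defining frequencies below are invented for illustration; the ICA step on real multichannel recordings is omitted.

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
fs = 128  # hypothetical sampling rate (Hz)
t = np.arange(fs) / fs

# Toy "EEG": 60 one-second epochs, two classes with different dominant bands
epochs, labels = [], []
for i in range(60):
    f0 = 10 if i % 2 == 0 else 25  # alpha-like vs. beta-like rhythm
    epochs.append(np.sin(2 * np.pi * f0 * t) + 0.5 * rng.normal(size=fs))
    labels.append(i % 2)
X_raw, y = np.array(epochs), np.array(labels)

# Feature extraction: Welch power spectral density per epoch
_, psd = welch(X_raw, fs=fs, nperseg=64, axis=1)

# PCA reduces the PSD feature vectors; kNN classifies them
feats = PCA(n_components=5).fit_transform(psd)
knn = KNeighborsClassifier(n_neighbors=3).fit(feats, y)
print(feats.shape)  # (60, 5)
```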


2019 ◽  
Vol 892 ◽  
pp. 200-209
Author(s):  
Rayner Pailus ◽  
Rayner Alfred

The AdaBoost-based Viola-Jones method is a profound discovery in face image detection, mainly because it is fast, lightweight, and one of the easiest face detection techniques among others. Viola-Jones uses Haar wavelet filters to detect face images and achieves almost 80% face detection accuracy. This paper discusses a proposed methodology and algorithms that involve a larger library of filters to create more discriminative features among the images, processing the proposed 15 Haar rectangular features (an extension of the 4 Haar wavelet filters of Viola-Jones) and using them in a multiple adaptive ensemble process for detecting face images. After face detection, the process continues with normalization, applying feature extraction such as PCA combined with LDA or LPP to extract the weak learners' wavelets for more classification features. After feature extraction, a proposed feature selection step indexes the extracted data. The extracted vectors are used to train MADBoost (Multiple Adaptive Diversified Boost), an improvement of AdaBoost that combines multiple feature extraction methods with multiple classifiers and can capture, recognize, and distinguish face images faster. MADBoost applies the ensemble approach with better classification weights to produce better face recognition results. Three experiments have been conducted to investigate the performance of the proposed MADBoost against three other classifiers, Neural Network (NN), Support Vector Machines (SVM), and AdaBoost, using Principal Component Analysis (PCA) as the feature extraction method. These experiments were tested against the obstacles of POIES (Pose, Obstruction, Illumination, Expression, Sizes).
Based on the results obtained, MADBoost is found to improve recognition performance in terms of matching failures, incorrect matches, matching success percentages, and the time taken to perform the classification task.
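The PCA-plus-AdaBoost baseline used in the comparison experiments can be sketched with scikit-learn on synthetic data; the dataset sizes, component counts, and estimator counts below are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier

# Synthetic stand-in for face feature vectors: 200 samples, 40 dimensions
X, y = make_classification(n_samples=200, n_features=40,
                           n_informative=10, random_state=0)

# PCA as the feature extraction method, as in the comparison experiments
feats = PCA(n_components=10).fit_transform(X)

# AdaBoost baseline classifier on the PCA features
clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(feats, y)
print(feats.shape)  # (200, 10)
```

MADBoost would replace this single AdaBoost with an ensemble built over multiple feature extraction methods (PCA+LDA, PCA+LPP) and multiple base classifiers.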


Author(s):  
Saban Ozturk ◽  
Umut Ozkaya ◽  
Mucahid Barstugan

Necessary screenings must be performed to control the spread of the coronavirus (COVID-19) in daily life and to make preliminary diagnoses of suspicious cases. The long duration of pathological laboratory tests and occasional erroneous test results have led researchers to focus on different approaches. Fast and accurate diagnoses are essential for effective interventions against COVID-19. The information obtained from X-ray and Computed Tomography (CT) images is vital for making clinical diagnoses. Therefore, this study aims to develop a machine learning method for detecting viral epidemics by analyzing X-ray images. In this study, images belonging to six conditions, including coronavirus cases, are classified. Since the dataset is small and unbalanced, it is more convenient to analyze these images with hand-crafted feature extraction methods. For this purpose, features are first extracted from all images in the dataset using four feature extraction algorithms. These extracted features are combined in raw form. The unbalanced data problem is eliminated by producing feature vectors with the SMOTE algorithm. Finally, the feature vector is reduced in size using a stacked auto-encoder and principal component analysis to remove correlated features. According to the results, the proposed method performs strongly, especially for diagnosing COVID-19 quickly and effectively.
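The class-balancing and dimensionality-reduction steps can be sketched as follows. A minimal hand-rolled SMOTE-style interpolation (new minority samples generated between random pairs of existing ones) stands in for the actual SMOTE implementation, and toy Gaussian vectors replace the real hand-crafted image features; the stacked auto-encoder stage is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
# Toy imbalanced feature set: 100 majority vs. 20 minority samples, 50 dims
X_maj = rng.normal(0.0, 1.0, size=(100, 50))
X_min = rng.normal(2.0, 1.0, size=(20, 50))

# SMOTE-style oversampling: interpolate between random pairs of minority
# samples until the classes are balanced
n_new = len(X_maj) - len(X_min)
i = rng.integers(0, len(X_min), size=n_new)
j = rng.integers(0, len(X_min), size=n_new)
lam = rng.uniform(size=(n_new, 1))
X_synth = X_min[i] + lam * (X_min[j] - X_min[i])

X = np.vstack([X_maj, X_min, X_synth])
y = np.array([0] * 100 + [1] * (20 + n_new))

# PCA removes correlated features from the combined vector
X_red = PCA(n_components=10).fit_transform(X)
print(X_red.shape)  # (200, 10)
```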

