Multiple Classifiers Based Semi-Supervised Polarimetric SAR Image Classification Method

Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3006
Author(s):  
Lekun Zhu ◽  
Xiaoshuang Ma ◽  
Penghai Wu ◽  
Jiangong Xu

Polarimetric synthetic aperture radar (PolSAR) image classification has played an important role in PolSAR data applications. Deep learning has achieved great success in PolSAR image classification over the past years. However, when the labeled training dataset is insufficient, the classification results are usually unsatisfactory. Furthermore, the deep learning approach is based on hierarchical features, an approach that cannot take full advantage of the scattering characteristics in PolSAR data. Hence, it is worthwhile to make full use of scattering characteristics to obtain high classification accuracy based on limited labeled samples. In this paper, we propose a novel semi-supervised classification method for PolSAR images, which combines the deep learning technique with traditional scattering-trait-based classifiers. Firstly, based on only a small number of training samples, the classification results of the Wishart classifier, a support vector machine (SVM) classifier, and a complex-valued convolutional neural network (CV-CNN) are used to conduct majority voting, thus generating a strong dataset and a weak dataset. The strong dataset is then used to provide pseudo-labels for reclassifying the weak dataset with the CV-CNN. The final classification results are obtained by combining the strong dataset and the reclassification results. Experiments on two real PolSAR images of agricultural and forest areas indicate that, in most cases, significant improvements of approximately 3–5% can be achieved with the proposed method compared to the base classifiers. When the number of labeled samples is small, the superiority of the proposed method is even more apparent. The improvement for built-up areas and infrastructure objects is not as significant as for forests.
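The voting step described above can be sketched as follows. Only the strong/weak split logic is shown; the three base classifiers (Wishart, SVM, CV-CNN) are assumed to exist elsewhere, and the class names are illustrative:

```python
from collections import Counter

def split_by_majority_vote(preds_a, preds_b, preds_c):
    """Split samples into a 'strong' set (at least two of the three base
    classifiers agree, yielding a pseudo-label) and a 'weak' set (full
    disagreement, left to be reclassified by the CV-CNN)."""
    strong, weak = {}, []
    for i, votes in enumerate(zip(preds_a, preds_b, preds_c)):
        label, count = Counter(votes).most_common(1)[0]
        if count >= 2:
            strong[i] = label      # pseudo-labelled training sample
        else:
            weak.append(i)         # left for reclassification
    return strong, weak

# toy per-pixel predictions from three hypothetical base classifiers
wishart = ["crop", "forest", "water"]
svm     = ["crop", "urban",  "urban"]
cvcnn   = ["crop", "forest", "grass"]
strong, weak = split_by_majority_vote(wishart, svm, cvcnn)
# strong: {0: 'crop', 1: 'forest'}; weak: [2]
```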

2021 ◽  
Author(s):  
Yulong Wang ◽  
Xiaofeng Liao ◽  
Dewen Qiao ◽  
Jiahui Wu

Abstract With the rapid development of modern medical science and technology, medical image classification has become an increasingly challenging problem. In most traditional classification methods, however, image feature extraction is difficult and classifier accuracy needs to be improved. Therefore, this paper proposes a high-accuracy medical image classification method based on deep learning, called hybrid CQ-SVM. Specifically, we combine the advantages of the convolutional neural network (CNN) and the support vector machine (SVM) and integrate them into a novel hybrid model. In our scheme, the CNN works as a trainable feature extractor, the SVM performs as a trainable classifier, and the quantum-behaved particle swarm optimization (QPSO) algorithm is adopted to set the SVM parameters automatically, solving the SVM parameter-setting problem. This method can automatically extract features from original medical images and generate predictions. The experimental results show that this method can extract better medical image features and achieve higher classification accuracy.
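A minimal sketch of the SVM component alone: the CNN feature extractor and the QPSO parameter search are not reproduced, and the subgradient training loop, hyperparameters, and toy 2-D "features" below are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Linear SVM trained by subgradient descent on the hinge loss.
    Labels y must be in {-1, +1}; rows of X stand in for CNN-extracted
    feature vectors."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) < 1:        # margin violated: hinge gradient
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                            # only regularization shrinks w
                w -= lr * lam * w
    return w, b

# toy linearly separable "features"
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
preds = np.sign(X @ w + b)
```

In the hybrid scheme, QPSO would search over `lam` (and, for kernel SVMs, the kernel parameters) instead of fixing them by hand.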


2019 ◽  
Author(s):  
Dali Wang ◽  
Zheng Lu ◽  
Yichi Xu ◽  
Zi Wang ◽  
Chengcheng Li ◽  
...  

Abstract
Motivation: Cell shapes provide crucial biological information on complex tissues. Different cell types often have distinct cell shapes, and collective shape changes usually indicate morphogenetic events and mechanisms. Identifying and detecting collective cell shape changes in an extensive collection of 3D time-lapse images of complex tissues is an important step in assaying such mechanisms, but it is a tedious and time-consuming task. Machine learning provides new opportunities to automatically detect cell shape changes. However, it is challenging to generate sufficient training samples for pattern identification through deep learning because of a limited amount of images and annotations.
Results: We present a deep learning approach with minimal well-annotated training samples and apply it to identify multicellular rosettes from 3D live images of the Caenorhabditis elegans embryo with fluorescently labelled cell membranes. Our strategy is to combine two approaches, namely feature transfer and generative adversarial networks (GANs), to boost image classification with small training samples. Specifically, we use a GAN framework and conduct unsupervised training to capture the general characteristics of cell membrane images with 11,250 unlabelled images. We then transfer the structure of the GAN discriminator into a new Alex-style neural network for further learning with several dozen labelled samples. Our experiments showed that with 10-15 well-labelled rosette images and 30-40 randomly selected non-rosette images, our approach can identify rosettes with over 80% accuracy and capture over 90% of the model accuracy achieved with a training dataset that is five times larger. We also established a public benchmark dataset for rosette detection. This GAN-based transfer approach can be applied to study other cellular structures with minimal training samples.


Author(s):  
S. Mirzaee ◽  
M. Motagh ◽  
H. Arefi ◽  
M. Nooryazdan

Due to its special imaging characteristics, Synthetic Aperture Radar (SAR) has become an important source of information for a variety of remote sensing applications dealing with environmental change. SAR images contain both phase and intensity information in different polarization modes, making them sensitive to the geometrical structure and physical properties of targets, such as dielectric properties and plant water content. In this study we investigate multi-temporal changes occurring in different crop types due to phenological change, using high-resolution TerraSAR-X imagery. The dataset includes 17 dual-polarimetric TSX acquisitions from June 2012 to August 2013 in Lorestan province, Iran. Several features are extracted from the polarimetric data and classified using a support vector machine (SVM) classifier. The training samples and the different features employed in classification are also assessed in the study. Results show satisfactory classification accuracy, with a kappa coefficient of about 0.91.


2021 ◽  
Vol 9 ◽  
Author(s):  
Ashwini K ◽  
P. M. Durai Raj Vincent ◽  
Kathiravan Srinivasan ◽  
Chuan-Yu Chang

Neonatal infants communicate with us through cries. Infant cry signals have distinct patterns depending on the purpose of the cry. Traditionally, preprocessing, feature extraction, and feature selection for audio signals require expert attention and considerable effort. Deep learning techniques extract and select the most important features automatically, but require an enormous amount of data for effective classification. This work discriminates neonatal cries into pain, hunger, and sleepiness. The neonatal cry signals are transformed into spectrogram images using the short-time Fourier transform (STFT). A deep convolutional neural network (DCNN) takes the spectrogram images as input; the features obtained from the convolutional layers are passed to a support vector machine (SVM) classifier, which classifies the cries. This work thus combines the advantages of machine learning and deep learning techniques to obtain good results even with a moderate number of data samples. The experimental results show that CNN-based feature extraction with an SVM classifier provides promising results. Comparing the SVM kernels, namely radial basis function (RBF), linear, and polynomial, the SVM-RBF kernel provides the highest accuracy of the kernel-based infant cry classification systems, at 88.89%.
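The STFT step can be sketched as follows. The window and hop sizes, the Hann window, and the synthetic tone are assumptions for illustration, not the paper's parameters:

```python
import numpy as np

def stft_spectrogram(signal, win=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time Fourier
    transform: slice the signal into overlapping frames, window each
    frame, and take the magnitude of its real FFT."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T  # (freq, time)

# a 440 Hz tone sampled at 8 kHz: energy concentrates in one frequency bin
fs = 8000
t = np.arange(fs) / fs
spec = stft_spectrogram(np.sin(2 * np.pi * 440 * t))
peak_bin = spec.mean(axis=1).argmax()   # 440 / (8000/256) ~ bin 14
```

In the paper's pipeline, such spectrograms would then be rendered as images and fed to the DCNN.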


Author(s):  
P. Burai ◽  
T. Tomor ◽  
L. Bekő ◽  
B. Deák

In our study we classified grassland vegetation types of an alkali landscape (Eastern Hungary) using different image classification methods on hyperspectral data. Our aim was to test the applicability of hyperspectral data in this complex system using various image classification methods. To reach the highest classification accuracy, we compared the performance of traditional image classifiers and machine learning algorithms, feature extraction (MNF transformation), and various training dataset sizes. Hyperspectral images were acquired by an AISA EAGLE II hyperspectral sensor with 128 contiguous bands (400–1000 nm), a spectral sampling of 5 nm bandwidth, and a ground pixel size of 1 m. We used twenty vegetation classes compiled based on the characteristic dominant species, canopy height, and total vegetation cover. Image classification was applied to the original and MNF (minimum noise fraction) transformed datasets using various training sample sizes between 10 and 30 pixels. In the case of the original bands, both SVM and RF classifiers provided high accuracy for almost all classes, irrespective of the number of training pixels. We found that SVM and RF produced the best accuracy with the first nine MNF-transformed bands. Our results suggest that in complex open landscapes the application of SVM can be a feasible solution, as this method provides higher accuracies than RF and MLC. SVM was not sensitive to the size of the training sample, which makes it an adequate tool for cases when the available number of training pixels is limited for some classes.


2020 ◽  
pp. 3397-3407
Author(s):  
Nur Syafiqah Mohd Nafis ◽  
Suryanti Awang

Text documents are unstructured and high dimensional. Effective feature selection is required to select the most important and significant features from the sparse feature space. This paper therefore proposes an embedded feature selection technique based on Term Frequency-Inverse Document Frequency (TF-IDF) and Support Vector Machine-Recursive Feature Elimination (SVM-RFE) for unstructured, high-dimensional text classification. This technique can measure feature importance in a high-dimensional text document, and it aims to increase the efficiency of feature selection and thereby obtain promising text classification accuracy. In the first stage, TF-IDF acts as a filter approach that measures the importance of features in the text documents. In the second stage, SVM-RFE uses a backward feature elimination scheme to recursively remove insignificant features from the filtered feature subsets. This research executes sets of experiments using text documents retrieved from a benchmark repository comprising a collection of Twitter posts. Pre-processing is applied to extract relevant features, and the pre-processed features are divided into training and testing datasets. Feature selection is then implemented on the training dataset by calculating the TF-IDF score for each feature, after which SVM-RFE is applied for feature ranking. Only the top-ranked features are selected for text classification using the SVM classifier. The experiments show that the proposed technique achieves 98% accuracy, outperforming other existing techniques, and is able to select the significant features in unstructured, high-dimensional text documents.
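The first-stage TF-IDF filter can be sketched in plain Python (the SVM-RFE second stage is not reproduced, and the unsmoothed IDF formula and toy documents are assumptions for illustration):

```python
import math
from collections import Counter

def tfidf(docs):
    """Score each term in each tokenized document by term frequency times
    inverse document frequency; high scores mark candidate features for the
    next selection stage."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document freq.
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({t: (c / len(doc)) * math.log(n / df[t])
                       for t, c in tf.items()})
    return scores

docs = [["spam", "offer", "offer"], ["meeting", "offer"], ["meeting", "notes"]]
s = tfidf(docs)
# "spam" appears in one of three documents, so its IDF is log(3) ~ 1.10
```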


GEOMATICA ◽  
2021 ◽  
pp. 1-23
Author(s):  
Roholah Yazdan ◽  
Masood Varshosaz ◽  
Saied Pirasteh ◽  
Fabio Remondino

Automatic detection and recognition of traffic signs from images is an important topic in many applications. At first, we segmented the images using a classification algorithm to delineate the areas where the signs are more likely to be found. Here, shadows, objects with similar colours, and extreme illumination changes can significantly affect the segmentation results. We propose a new shape-based algorithm to improve the accuracy of the segmentation, which works by incorporating the sign geometry to filter out wrong pixels from the classification results. We performed several tests to compare the performance of our algorithm against popular techniques such as Support Vector Machine (SVM), K-Means, and K-Nearest Neighbours. In these tests, to overcome unwanted illumination effects, the images were transformed into the Hue-Saturation-Intensity, YUV, normalized red-green-blue, and Gaussian colour spaces. Among the traditional techniques used in this study, the best results were obtained with SVM applied to the images transformed into the Gaussian colour space. The comparison results also suggest that adding the geometric constraints proposed in this study improves the quality of sign image segmentation by 10%–25%. We also compared the SVM classifier enhanced by the sign geometry with a U-shaped deep learning algorithm. The results suggest the performance of the two techniques is very close; the deep learning results could perhaps be improved if a more comprehensive dataset were provided.
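The colour-space transformation step can be illustrated with the standard library's RGB-to-HSV conversion. This is a stand-in for the paper's transforms (HSI, YUV, normalized RGB, Gaussian), whose exact formulas are not reproduced here:

```python
import colorsys

def to_hsv_image(pixels):
    """Convert a list of normalized (r, g, b) pixels to (h, s, v) tuples.
    Hue-based channels are less sensitive to illumination changes than raw
    RGB, which is why such transforms precede the sign segmentation step."""
    return [colorsys.rgb_to_hsv(r, g, b) for r, g, b in pixels]

# a pure red and a darker red share the same hue despite different intensity
hsv = to_hsv_image([(1.0, 0.0, 0.0), (0.5, 0.0, 0.0)])
# both hues are 0.0; only the value channel differs
```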


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Shahenda Sarhan ◽  
Aida A. Nasr ◽  
Mahmoud Y. Shams

Multipose face recognition is one of the recent challenges faced by researchers interested in security applications. Different studies have discussed improving the accuracy of multipose face recognition by enhancing the face detector, as in Viola-Jones, Real AdaBoost, and cascade object detectors, while others have concentrated on the recognition system itself, such as support vector machines and deep convolutional neural networks. In this paper, a combined adaptive deep learning vector quantization (CADLVQ) classifier is proposed. The proposed classifier addresses the weakness of adaptive deep learning vector quantization classifiers by using a majority voting algorithm with the speeded-up robust features (SURF) extractor. Experimental results indicate that the proposed classifier provides promising results in terms of sensitivity, specificity, precision, and accuracy compared to recent approaches based on deep learning, statistical methods, and classical neural networks. Finally, the comparison is performed empirically using confusion matrices to ensure the reliability and robustness of the proposed system compared to the state of the art.


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4629 ◽  
Author(s):  
Ciaran Cooney ◽  
Attila Korik ◽  
Raffaella Folli ◽  
Damien Coyle

Classification of electroencephalography (EEG) signals corresponding to imagined speech production is important for the development of a direct-speech brain–computer interface (DS-BCI). Deep learning (DL) has been utilized with great success across several domains. However, it remains an open question whether DL methods provide significant advances over traditional machine learning (ML) approaches for classification of imagined speech. Furthermore, hyperparameter (HP) optimization has been neglected in DL-EEG studies, resulting in the significance of its effects remaining uncertain. In this study, we aim to improve classification of imagined speech EEG by employing DL methods while also statistically evaluating the impact of HP optimization on classifier performance. We trained three distinct convolutional neural networks (CNN) on imagined speech EEG using a nested cross-validation approach to HP optimization. Each of the CNNs evaluated was designed specifically for EEG decoding. An imagined speech EEG dataset consisting of both words and vowels facilitated training on both sets independently. CNN results were compared with three benchmark ML methods: Support Vector Machine, Random Forest and regularized Linear Discriminant Analysis. Intra- and inter-subject methods of HP optimization were tested and the effects of HPs statistically analyzed. Accuracies obtained by the CNNs were significantly greater than the benchmark methods when trained on both datasets (words: 24.97%, p < 1 × 10⁻⁷, chance: 16.67%; vowels: 30.00%, p < 1 × 10⁻⁷, chance: 20%). The effects of varying HP values, and interactions between HPs and the CNNs were both statistically significant. The results of HP optimization demonstrate how critical it is for training CNNs to decode imagined speech.
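The nested cross-validation scheme can be sketched as follows: an inner loop picks a hyperparameter using training folds only, and the outer loop scores that choice on held-out data. The `evaluate` stand-in (a one-parameter threshold classifier) replaces the paper's CNN training, and the fold counts and toy data are assumptions:

```python
import numpy as np

def kfold_indices(n, k):
    """Contiguous k-fold (train_idx, test_idx) splits; no shuffling, for clarity."""
    folds = np.array_split(np.arange(n), k)
    return [(np.concatenate(folds[:i] + folds[i + 1:]), folds[i])
            for i in range(k)]

def nested_cv(X, y, hp_grid, evaluate, k_outer=3, k_inner=2):
    """For each outer split, select the hyperparameter with the best mean
    inner-fold score, then report its accuracy on the outer test fold.
    The outer estimate is never used for hyperparameter selection."""
    scores = []
    for tr, te in kfold_indices(len(X), k_outer):
        Xtr, ytr = X[tr], y[tr]
        best_hp = max(hp_grid, key=lambda hp: np.mean(
            [evaluate(hp, Xtr[i], ytr[i], Xtr[j], ytr[j])
             for i, j in kfold_indices(len(tr), k_inner)]))
        scores.append(evaluate(best_hp, Xtr, ytr, X[te], y[te]))
    return float(np.mean(scores))

# toy stand-in "model": threshold classifier whose hyperparameter is the cut-off
def evaluate(hp, Xtr, ytr, Xte, yte):
    return float(np.mean((Xte > hp).astype(int) == yte))

X = np.array([0.1, 0.7, 0.2, 0.8, 0.3, 0.9, 0.15, 0.75, 0.25, 0.85, 0.05, 0.95])
y = (X > 0.5).astype(int)
acc = nested_cv(X, y, [0.2, 0.5, 0.8], evaluate)
```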

