Multi-Resolution Weed Classification via Convolutional Neural Network and Superpixel Based Local Binary Pattern Using Remote Sensing Images

2019, Vol. 11 (14), pp. 1692
Author(s): Adnan Farooq, Xiuping Jia, Jiankun Hu, Jun Zhou

Automatic weed detection and classification face the challenges of large intra-class variation and high spectral similarity to other vegetation. With the availability of new high-resolution remote sensing data from various platforms and sensors, it is possible to capture both the spectral and spatial characteristics of weed species at multiple scales. Effective multi-resolution feature learning is therefore desirable for extracting the distinctive intensity, texture and shape features of each weed category to enhance weed separability. We propose a feature extraction method using a Convolutional Neural Network (CNN) and a superpixel-based Local Binary Pattern (LBP). Both middle- and high-level spatial features are learned using the CNN. Local texture features are extracted from the superpixel-based LBP, and these features are also used as input to a Support Vector Machine (SVM) for weed classification. Experimental results on hyperspectral and remote sensing datasets verify the effectiveness of the proposed method and show that it outperforms several feature extraction approaches.
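The LBP texture step described above can be illustrated with a plain-NumPy sketch of the basic 8-neighbour LBP operator; the paper's superpixel segmentation and multi-resolution details are omitted, and this is only the core texture code under those simplifying assumptions:

```python
import numpy as np

def lbp_8(img):
    """Basic 8-neighbour Local Binary Pattern for interior pixels.

    Each interior pixel is compared with its 8 neighbours; a neighbour
    that is >= the centre contributes one bit to an 8-bit LBP code.
    """
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]  # centre pixels (interior only)
    # neighbour offsets, clockwise from top-left, one bit each
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy: img.shape[0] - 1 + dy,
                    1 + dx: img.shape[1] - 1 + dx]
        codes |= ((neigh >= c).astype(np.uint8) << bit)
    return codes

def lbp_histogram(img, bins=256):
    """Normalised histogram of LBP codes, used as a texture feature vector."""
    h, _ = np.histogram(lbp_8(img), bins=bins, range=(0, bins))
    return h / h.sum()
```

In the full pipeline, one such histogram would be computed per superpixel and concatenated with the CNN features before the SVM.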

2021, Vol. 2021, pp. 1-15
Author(s): Morteza Amini, MirMohsen Pedram, AliReza Moradi, Mahshad Ouchani

The automatic diagnosis of Alzheimer’s disease plays an important role in human health, especially in its early stages. Because Alzheimer’s disease is a neurodegenerative condition, it has a long incubation period, so it is essential to analyze Alzheimer’s symptoms at different stages. In this paper, classification is performed with several machine learning methods: K-nearest neighbor (KNN), support vector machine (SVM), decision tree (DT), linear discriminant analysis (LDA), and random forest (RF). Moreover, a novel convolutional neural network (CNN) architecture is presented to diagnose Alzheimer’s severity. To this end, the relationship between Alzheimer’s patients’ functional magnetic resonance imaging (fMRI) images and their scores on the Mini-Mental State Examination (MMSE) is investigated. Feature extraction is performed with a robust multitask feature learning algorithm, and severity is graded from the MMSE score into low, mild, moderate, and severe categories. Results show that the accuracy of the KNN, SVM, DT, LDA, RF, and presented CNN methods is 77.5%, 85.8%, 91.7%, 79.5%, 85.1%, and 96.7%, respectively. Moreover, for the presented CNN architecture, the sensitivity for the low, mild, moderate, and severe statuses of Alzheimer’s patients is 98.1%, 95.2%, 89.0%, and 87.5%, respectively. Based on these findings, the presented CNN classifier outperforms the other methods and can diagnose the severity and stages of Alzheimer’s disease with the highest accuracy.
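Of the classical baselines compared above, KNN is the simplest to sketch; a minimal NumPy implementation with Euclidean distance and majority voting (the value of k here is illustrative, not the paper's setting):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Minimal k-nearest-neighbour classifier: for each test point,
    find the k closest training samples and take a majority vote."""
    X_train = np.asarray(X_train, dtype=float)
    X_test = np.asarray(X_test, dtype=float)
    y_train = np.asarray(y_train)
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)  # distance to every sample
        nearest = y_train[np.argsort(d)[:k]]     # labels of the k closest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])    # majority vote
    return np.array(preds)
```

The other baselines (SVM, DT, LDA, RF) follow the same fit/predict pattern but with different decision rules.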


2018, Vol. 10 (7), pp. 1123
Author(s): Yuhang Zhang, Hao Sun, Jiawei Zuo, Hongqi Wang, Guangluan Xu, ...

Aircraft type recognition plays an important role in remote sensing image interpretation. Traditional methods suffer from poor generalization performance, while deep learning methods require large amounts of type-labeled data, which are expensive and time-consuming to obtain. To overcome these problems, in this paper we propose an aircraft type recognition framework based on conditional generative adversarial networks (GANs). First, we design a new method to precisely detect aircraft keypoints, which are used to generate aircraft masks and locate the positions of the aircraft. Second, a conditional GAN with a region-of-interest (ROI)-weighted loss function is trained on unlabeled aircraft images and their corresponding masks. Third, an ROI feature extraction method is carefully designed to extract multi-scale features from the GAN within the aircraft regions. A linear support vector machine (SVM) classifier is then adopted to classify each sample from its features. Benefiting from the GAN, we can learn features strong enough to represent aircraft from a large unlabeled dataset. Additionally, the ROI-weighted loss function and the ROI feature extraction method make the features relate to the aircraft rather than the background, which improves the quality of the features and increases the recognition accuracy significantly. Thorough experiments were conducted on a challenging dataset, and the results prove the effectiveness of the proposed aircraft type recognition framework.
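The abstract does not give the exact form of the ROI-weighted loss; one plausible sketch, under the assumption that it is a reconstruction loss with up-weighted mask pixels, is an L1 term in which pixels inside the aircraft mask count more than background pixels (the weight value here is hypothetical):

```python
import numpy as np

def roi_weighted_l1(pred, target, roi_mask, roi_weight=5.0):
    """Hypothetical ROI-weighted L1 reconstruction loss: pixels inside
    the aircraft mask are weighted more heavily than background pixels,
    pushing the generator to reconstruct the aircraft region faithfully."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    # weight map: roi_weight inside the mask, 1.0 elsewhere
    w = np.where(np.asarray(roi_mask, dtype=bool), roi_weight, 1.0)
    return float(np.sum(w * np.abs(pred - target)) / np.sum(w))
```

In a GAN this term would be added to the adversarial loss; the paper's actual formulation and weighting scheme may differ.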


2018, Vol. 7 (11), pp. 418
Author(s): Tian Jiang, Xiangnan Liu, Ling Wu

Accurate and timely information about rice planting areas is essential for crop yield estimation, global climate change studies and agricultural resource management. In this study, we present a novel pixel-level classification approach that uses a convolutional neural network (CNN) model to extract features of the enhanced vegetation index (EVI) time-series curve for classification. The goal is to explore the practicability of deep learning techniques for rice recognition from mid-resolution remote sensing images in complex landscape regions, where rice is easily confused with its surroundings. A transfer learning strategy is utilized to fine-tune a pre-trained CNN model and obtain the temporal features of the EVI curve. A support vector machine (SVM), a traditional machine learning approach, is also implemented in the experiment. Finally, we evaluate the accuracy of the two models. Results show that our model performs better than the SVM, with overall accuracies of 93.60% and 91.05%, respectively. This technique is therefore appropriate for estimating rice planting areas in southern China from time-series data with a pre-trained CNN model, and combining remote sensing with deep learning holds further opportunity and potential for crop classification in future studies.
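The EVI values underlying the time-series curve follow the standard MODIS formulation, EVI = 2.5 (NIR − Red) / (NIR + 6 Red − 7.5 Blue + 1), computed per pixel on surface-reflectance bands; a minimal NumPy version:

```python
import numpy as np

def evi(nir, red, blue):
    """Enhanced Vegetation Index (standard MODIS formulation):
    EVI = 2.5 * (NIR - Red) / (NIR + 6*Red - 7.5*Blue + 1).
    Inputs are surface-reflectance bands (scalars or arrays)."""
    nir, red, blue = (np.asarray(b, dtype=float) for b in (nir, red, blue))
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
```

Computed for each acquisition date, these values form the per-pixel EVI curve that the CNN takes as its 1D input.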


2021, Vol. 9
Author(s): Ashwini K, P. M. Durai Raj Vincent, Kathiravan Srinivasan, Chuan-Yu Chang

Neonatal infants communicate with us through cries, and infant cry signals have distinct patterns depending on the purpose of the cry. For audio signals, preprocessing, feature extraction, and feature selection traditionally require expert attention and considerable effort. Deep learning techniques, in contrast, extract and select the most important features automatically, but require an enormous amount of data for effective classification. This work discriminates neonatal cries into pain, hunger, and sleepiness. The neonatal cry signals are transformed into spectrogram images using the short-time Fourier transform (STFT), and a deep convolutional neural network (DCNN) takes the spectrogram images as input. The features obtained from the convolutional neural network are passed to a support vector machine (SVM) classifier, which classifies the cries. This work thus combines the advantages of machine learning and deep learning techniques to get the best results even with a moderate number of data samples. The experimental results show that CNN-based feature extraction with an SVM classifier provides promising results. Comparing the SVM kernels, namely radial basis function (RBF), linear, and polynomial, it is found that SVM-RBF provides the highest accuracy among the kernel-based infant cry classification systems, at 88.89%.
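The STFT spectrogram step can be sketched in NumPy; the frame length, hop size, and Hann window below are illustrative choices, not the paper's settings:

```python
import numpy as np

def stft_spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram via the short-time Fourier transform:
    the signal is cut into overlapping Hann-windowed frames, and each
    frame is mapped to the frequency domain with a real FFT."""
    signal = np.asarray(signal, dtype=float)
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop: i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T  # (freq_bins, time_frames)
```

Rendered as an image (often on a log scale), this 2D array is what the DCNN consumes.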


2020, Vol. 17 (4), pp. 572-578
Author(s): Mohammad Parseh, Mohammad Rahmanimanesh, Parviz Keshavarzi

Persian handwritten digit recognition is an important topic in image processing that has received significant attention from researchers due to its many applications. One of its most important challenges is the variety of patterns in Persian digit writing, which makes the feature extraction step more complicated. Since handcrafted feature extraction methods are complicated processes and their performance is not stable, most recent studies have concentrated on proposing a suitable method for automatic feature extraction. In this paper, an automatic machine-learning-based method is proposed for high-level feature extraction from Persian digit images using a Convolutional Neural Network (CNN). A non-linear multi-class Support Vector Machine (SVM) classifier is then used for classification in place of the fully connected final layer of the CNN. The proposed method was applied to the HODA dataset and obtained a recognition rate of 99.56%. Experimental results are comparable with previous state-of-the-art methods.
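The final stage replaces the CNN's fully connected layer with an SVM trained on the extracted feature vectors. As a simplified stand-in for the paper's non-linear multi-class SVM, here is a minimal binary linear SVM trained by sub-gradient descent on the hinge loss (labels in {-1, +1}; all hyperparameters are illustrative):

```python
import numpy as np

def train_linear_svm(X, y, lr=0.1, lam=0.01, epochs=200):
    """Minimal binary linear SVM: sub-gradient descent on the
    L2-regularised hinge loss. A sketch, not the paper's classifier."""
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1  # samples violating the margin
        if mask.any():
            gw = lam * w - (y[mask, None] * X[mask]).mean(axis=0)
            gb = -y[mask].mean()
        else:
            gw, gb = lam * w, 0.0
        w -= lr * gw
        b -= lr * gb
    return w, b

def svm_predict(X, w, b):
    return np.sign(np.asarray(X, dtype=float) @ w + b)
```

A multi-class version would train one such classifier per digit (one-vs-rest), with a kernel replacing the dot product for the non-linear case.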


2021, pp. 004051752110592
Author(s): Zhiyu Zhou, Wenxiong Deng, Yaming Wang, Zefei Zhu

To improve accuracy in clothing image recognition, this paper proposes a clothing classification method based on a parallel convolutional neural network (PCNN) combined with an optimized random vector functional link (RVFL) network. The method uses the PCNN model to extract features from clothing images; this structure-intensive, dual-channel network addresses problems of traditional convolutional neural networks, such as proneness to overfitting on limited data. Each convolutional layer is followed by a batch normalization layer, and leaky rectified linear unit activations and max-pooling layers improve the feature extraction, while dropout layers and fully connected layers reduce the amount of computation. The last layer uses the RVFL, optimized by the grasshopper optimization algorithm, in place of the SoftMax layer to classify the features, further improving the stability and accuracy of classification. This study thus improves both stages of classification, feature extraction and feature classification, effectively improving the accuracy. The experimental results show that on the Fashion-MNIST dataset the algorithm in this study reaches an accuracy of 92.93%. This value is 1.36%, 2.05%, 0.65%, and 3.76% higher than that of the local binary pattern (LBP)-support vector machine (SVM), histogram of oriented gradients (HOG)-SVM, LBP-HOG-SVM, and AlexNet-sparse representation-based classifier algorithms, respectively, effectively demonstrating the classification performance of the algorithm.


2020, Vol. 12 (14), pp. 2292
Author(s): Xin Luo, Xiaohua Tong, Zhongwen Hu, Guofeng Wu

Moderate spatial resolution (MSR) satellite images, which offer a trade-off among radiometric, spectral, spatial and temporal characteristics, are extremely popular data for acquiring land cover information. However, the low accuracy of existing classification methods for MSR images remains a fundamental issue restricting their capability in urban land cover mapping. In this study, we proposed a hybrid convolutional neural network (H-ConvNet) for improving urban land cover mapping with MSR Sentinel-2 images. The H-ConvNet is structured with two streams: a lightweight 1D ConvNet for deep spectral feature extraction and a lightweight 2D ConvNet for deep context feature extraction. To obtain a well-trained 2D ConvNet, a training sample expansion strategy was introduced to assist context feature learning. The H-ConvNet was tested in six highly heterogeneous urban regions around the world and compared with a support vector machine (SVM), object-based image analysis (OBIA), a Markov random field model (MRF) and a newly proposed patch-based ConvNet system. The results showed that the H-ConvNet performed best. We hope the proposed H-ConvNet will benefit land cover mapping with MSR images in highly heterogeneous urban regions.
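The 1D-ConvNet stream operates on each pixel's spectral vector. A toy NumPy sketch of one such convolutional layer with ReLU follows; in the real network the kernels are learned and stacked over several layers, whereas here they are simply passed in:

```python
import numpy as np

def spectral_conv1d(spectra, kernels):
    """Toy 1D convolutional layer over per-pixel spectra: each pixel's
    band vector is convolved (valid mode) with a bank of 1D kernels,
    followed by a ReLU, yielding spectral feature vectors."""
    spectra = np.asarray(spectra, dtype=float)   # (n_pixels, n_bands)
    feats = []
    for k in np.asarray(kernels, dtype=float):   # each kernel: (k_len,)
        out = np.stack([np.convolve(s, k, mode="valid") for s in spectra])
        feats.append(np.maximum(out, 0.0))       # ReLU activation
    # (n_pixels, n_kernels * (n_bands - k_len + 1))
    return np.concatenate(feats, axis=1)
```

The 2D stream applies the same idea spatially over image patches; the two feature sets are then fused for the final classification.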

