Coffee Flower Identification Using Binarization Algorithm Based on Convolutional Neural Network for Digital Images

2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Pengliang Wei ◽  
Ting Jiang ◽  
Huaiyue Peng ◽  
Hongwei Jin ◽  
Han Sun ◽  
...  

Crop-type identification is one of the most significant applications of agricultural remote sensing, and it is important for yield prediction and field management. At present, crop identification using datasets from unmanned aerial vehicle (UAV) and satellite platforms has achieved state-of-the-art performance. However, accurate monitoring of small plants, such as the coffee flower, cannot be achieved using datasets from these platforms. With the development of time-lapse image acquisition technology based on ground-based remote sensing, a large number of small-scale plantation datasets with high spatio-temporal resolution are being generated, which provide great opportunities for small-target monitoring of a specific region. The main contribution of this paper is to combine an OTSU-based binarization algorithm with a convolutional neural network (CNN) model to improve coffee flower identification accuracy in time-lapse digital images. A number of positive and negative samples are selected from the original digital images for network training. The network is initialized with pretrained VGGNet weights and trained on the constructed datasets. The well-trained CNN model extracts an initial coffee flower map, whose boundary information is further refined using the result of the binarization algorithm. Using digital images with different depression angles and illumination conditions, the proposed method is compared against a support vector machine (SVM) and a standalone CNN model. The experimental results show that the proposed method improves coffee flower classification accuracy.
The image with a 52.5° depression angle under soft lighting yields the best results, with the Dice (F1) score and intersection over union (IoU) reaching 0.80 and 0.67, respectively.
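The binarization step the paper builds on is Otsu's method, which picks the gray-level threshold that maximizes between-class variance. A minimal NumPy sketch (not the authors' exact pipeline, which fuses this mask with CNN output) might look like:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold of an 8-bit grayscale image.

    Chooses the threshold that maximizes the between-class variance
    of the foreground/background split.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue  # one class empty, skip
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic bimodal image: dark background vs. bright flower pixels
img = np.concatenate([np.full(500, 40), np.full(500, 200)]).astype(np.uint8)
t = otsu_threshold(img)
mask = img >= t  # binary flower mask
```

In the paper's setting, this mask supplies the sharper boundary information that refines the CNN's coarser flower regions.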

2021 ◽  
Vol 16 ◽  
Author(s):  
Farida Alaaeldin Mostafa ◽  
Yasmine Mohamed Afify ◽  
Rasha Mohamed Ismail ◽  
Nagwa Lotfy Badr

Background: Protein sequence analysis helps in the prediction of protein functions. As the number of proteins grows, analyzing and measuring the similarity between them becomes a challenge for bioinformaticians. Most existing protein analysis methods use the Support Vector Machine. Deep learning has received little attention in protein analysis, with few works focusing on protein disease classification. Objective: The contribution of this paper is a deep learning approach that classifies protein diseases based on protein descriptors. Methods: Different protein descriptors are used and decomposed into modified feature descriptors. Uniquely, we introduce a Convolutional Neural Network model to learn and classify protein diseases. The modified feature descriptors are fed to the Convolutional Neural Network model on a dataset of 1563 protein sequences classified into 3 disease classes: AIDS, tumor suppressor, and proto-oncogene. Results: The modified feature descriptors yield a significant performance increase of the Convolutional Neural Network model over the Support Vector Machine with different kernel functions. One modified feature descriptor improved by 19.8%, 27.9%, 17.6%, 21.5%, 17.3%, and 22% on the evaluation metrics Area Under the Curve, Matthews Correlation Coefficient, Accuracy, F1-score, Recall, and Precision, respectively. Conclusion: The results show that prediction with the proposed modified feature descriptors significantly surpasses that of the Support Vector Machine model.
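The abstract does not specify which protein descriptors are used, so as an illustrative stand-in (not the authors' modified descriptors), here is one of the simplest classical descriptors, amino-acid composition (AAC), which turns a variable-length sequence into a fixed 20-dimensional vector a CNN could consume:

```python
# Standard 20 amino acids, in conventional alphabetical one-letter order
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac_descriptor(sequence):
    """20-dimensional frequency vector over the standard amino acids."""
    sequence = sequence.upper()
    n = len(sequence)
    return [sequence.count(aa) / n for aa in AMINO_ACIDS]

# Hypothetical short sequence, for illustration only
features = aac_descriptor("MKVLAAGG")
```

Fixed-length descriptor vectors like this are what allow sequence sets of varying lengths to be batched into a single input tensor.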


2018 ◽  
Vol 7 (11) ◽  
pp. 418 ◽  
Author(s):  
Tian Jiang ◽  
Xiangnan Liu ◽  
Ling Wu

Accurate and timely information about rice planting areas is essential for crop yield estimation, global climate change studies, and agricultural resource management. In this study, we present a novel pixel-level classification approach that uses a convolutional neural network (CNN) model to extract features of the enhanced vegetation index (EVI) time-series curve for classification. The goal is to explore the practicability of deep learning techniques for rice recognition in complex landscape regions, where rice is easily confused with its surroundings, using mid-resolution remote sensing images. A transfer learning strategy is utilized to fine-tune a pre-trained CNN model and obtain the temporal features of the EVI curve. A support vector machine (SVM), a traditional machine learning approach, is also implemented in the experiment. Finally, we evaluate the accuracy of the two models. Results show that our model performs better than the SVM, with overall accuracies of 93.60% and 91.05%, respectively. Therefore, this technique is appropriate for estimating rice planting areas in southern China on the basis of a pre-trained CNN model using time-series data. More opportunities and potential for crop classification with remote sensing and deep learning techniques can be explored in future studies.
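The EVI values that make up the classified time-series curve are computed per pixel from surface reflectance bands. A sketch using the standard MODIS-style formula and coefficients (G = 2.5, C1 = 6, C2 = 7.5, L = 1 — an assumption, since the abstract does not restate the formula):

```python
import numpy as np

def evi(nir, red, blue):
    """Enhanced vegetation index from surface reflectance bands."""
    nir, red, blue = (np.asarray(b, dtype=float) for b in (nir, red, blue))
    # G * (NIR - Red) / (NIR + C1*Red - C2*Blue + L)
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# One pixel's reflectances at a single acquisition date (illustrative values)
value = float(evi(0.5, 0.1, 0.05))
```

Evaluating this at each acquisition date gives the per-pixel EVI curve whose temporal shape the fine-tuned CNN learns to discriminate.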


Author(s):  
Sumit S. Lad ◽  
◽  
Amol C. Adamuthe

Malware is a threat to people in the cyber world. It steals personal information and harms computer systems. Developers and information security specialists around the globe continuously work on strategies for detecting malware. Over the last few years, machine learning has been investigated by many researchers for malware classification. Existing solutions require more computing resources and are not efficient for datasets with large numbers of samples; using existing feature extractors to extract image features also consumes substantial resources. This paper presents a Convolutional Neural Network model with pre-processing and augmentation techniques for the classification of malware gray-scale images. An investigation is conducted on the Malimg dataset, which contains 9339 gray-scale images created from malware binaries belonging to 25 different families. Considering the success of deep learning techniques and the rising volume of newly created malware, we propose a CNN and a hybrid CNN+SVM model. The CNN is used as an automatic feature extractor that uses fewer resources and less time than existing methods. The proposed CNN model achieves 98.03% accuracy, better than existing CNN models, namely VGG16 (96.96%), ResNet50 (97.11%), InceptionV3 (97.22%), and Xception (97.56%). Its execution time is also significantly lower than that of those models. The proposed CNN model is then hybridized with a support vector machine: instead of a Softmax output layer, the SVM classifies the malware based on features extracted by the CNN. The fine-tuned CNN produces a well-selected 256-dimensional feature vector from its fully connected layer, which is the input to the SVM.
A linear-kernel SVC extends the binary SVM classifier to the multi-class setting, classifying the malware samples with the one-against-one method and delivering an accuracy of 99.59%.
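The one-against-one scheme trains one binary SVM per class pair and predicts by majority vote. A minimal voting sketch (the signed scores below are hypothetical stand-ins for SVM decision values computed on the CNN's 256-dimensional feature vectors):

```python
import numpy as np

def one_vs_one_predict(pairwise_decisions, n_classes):
    """pairwise_decisions maps (i, j) with i < j to a signed score:
    positive -> vote for class i, negative -> vote for class j.
    Returns the class with the most pairwise wins."""
    votes = np.zeros(n_classes, dtype=int)
    for (i, j), score in pairwise_decisions.items():
        votes[i if score > 0 else j] += 1
    return int(np.argmax(votes))

# Hypothetical decision values for a 3-class problem (3 pairwise SVMs)
decisions = {(0, 1): -0.4, (0, 2): 0.9, (1, 2): 1.2}
pred = one_vs_one_predict(decisions, 3)
```

With 25 malware families, this scheme requires 25 × 24 / 2 = 300 pairwise classifiers, each a cheap linear SVM over the 256-dimensional CNN features.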


2021 ◽  
Vol 13 (23) ◽  
pp. 4743
Author(s):  
Wei Yuan ◽  
Wenbo Xu

The segmentation of remote sensing images by deep learning is the main method for remote sensing image interpretation. However, a segmentation model based on a convolutional neural network cannot capture global features well. A transformer, whose self-attention mechanism supplies each pixel with a global feature, makes up for this deficiency of the convolutional neural network. Therefore, a multi-scale adaptive segmentation network model (MSST-Net) based on a Swin Transformer is proposed in this paper. First, a Swin Transformer is used as the backbone to encode the input image. Then, the feature maps of different levels are decoded separately. Third, a convolution is used for fusion, so that the network can automatically learn the weight of the decoding results of each level. Finally, we adjust the channels with a 1 × 1 convolution to obtain the final prediction map. Compared with other segmentation network models on the WHU building dataset, the proposed model improves all evaluation metrics: mIoU, F1-score, and accuracy. The network model proposed in this paper is a multi-scale adaptive network model that pays more attention to global features for remote sensing segmentation.
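The fusion step described above boils down to a 1 × 1 convolution: for a (C, H, W) tensor it is just a learned (C_out, C_in) matrix applied independently at every pixel, which is what lets the network weight each level's decoded map. A NumPy sketch with random stand-ins for the learned weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three decoded feature maps, one per backbone level, each (2, 8, 8)
levels = [rng.standard_normal((2, 8, 8)) for _ in range(3)]
stacked = rng.standard_normal((0, 8, 8))
stacked = np.concatenate(levels, axis=0)        # (6, 8, 8): channel concat

# 1x1 convolution == per-pixel matrix multiply over channels
w = rng.standard_normal((2, 6))                 # (C_out, C_in) kernel
fused = np.einsum('oc,chw->ohw', w, stacked)    # (2, 8, 8) fused prediction map
```

Because the mixing matrix is learned, gradient descent can assign larger weights to the levels whose decoded maps are most useful, which is the "adaptive" part of the design.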


2020 ◽  
Vol 12 (5) ◽  
pp. 832 ◽  
Author(s):  
Chunhua Liao ◽  
Jinfei Wang ◽  
Qinghua Xie ◽  
Ayman Al Baz ◽  
Xiaodong Huang ◽  
...  

Annual crop inventory information is important for many agriculture applications and government statistics. The synergistic use of multi-temporal polarimetric synthetic aperture radar (SAR) and available multispectral remote sensing data can reduce the temporal gaps and provide the spectral and polarimetric information of the crops, which is effective for crop classification in areas with frequent cloud interference. The main objectives of this study are to develop a deep learning model to map agricultural areas using multi-temporal full polarimetric SAR and multispectral remote sensing data, and to evaluate the influence of different input features on the performance of deep learning methods in crop classification. In this study, a one-dimensional convolutional neural network (Conv1D) was proposed and tested on multi-temporal RADARSAT-2 and VENµS data for crop classification. Compared with the Multi-Layer Perceptron (MLP), Recurrent Neural Network (RNN) and non-deep learning methods including XGBoost, Random Forest (RF), and Support Vector Machine (SVM), the Conv1D performed best when the multi-temporal RADARSAT-2 data (Pauli decomposition or coherency matrix) and VENµS multispectral data were fused by the Minimum Noise Fraction (MNF) transformation. The Pauli decomposition and coherency matrix gave similar overall accuracy (OA) for Conv1D when fused with the VENµS data by the MNF transformation (OA = 96.65 ± 1.03% and 96.72 ± 0.77%). The MNF transformation improved the OA and F-score for most classes when Conv1D was used. The results reveal that the coherency matrix has great potential in crop classification and that the MNF transformation of multi-temporal RADARSAT-2 and VENµS data can enhance the performance of Conv1D.
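The core operation of a Conv1D model is sliding a small kernel along the temporal axis of each pixel's feature series (e.g., MNF components stacked over acquisition dates). A minimal sketch of one "valid" 1D convolution, with a hand-picked kernel standing in for learned weights:

```python
import numpy as np

def conv1d_valid(x, kernel):
    """'Valid' 1D convolution (cross-correlation) of a time series."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel)
                     for i in range(len(x) - k + 1)])

# One feature of one pixel over 5 acquisition dates (illustrative values)
series = np.array([0.2, 0.5, 0.9, 0.7, 0.3])
out = conv1d_valid(series, np.array([1.0, 0.0, -1.0]))  # temporal-gradient kernel
```

Stacking many such learned kernels, interleaved with nonlinearities, lets the Conv1D pick up crop-specific temporal patterns such as green-up and harvest transitions.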


Author(s):  
Nguyen Thi Thanh Nhan ◽  
Do Thanh Binh ◽  
Nguyen Huy Hoang ◽  
Vu Hai ◽  
Tran Thi Thanh Hai ◽  
...  

This paper describes fusion techniques for achieving highly accurate species identification from images of different plant organs. Given a series of organ images such as branch, entire plant, flower, or leaf, we first extract confidence scores for each single organ using a deep convolutional neural network. Then, various late-fusion approaches, including conventional transformation-based approaches (sum rule, max rule, product rule), a classification-based approach (support vector machine), and our proposed hybrid fusion model, are deployed to determine the identity of the plant of interest. For single-organ identification, two schemes are proposed: the first uses one convolutional neural network (CNN) per organ, while the second trains one CNN for all organs. Two well-known CNNs (AlexNet and ResNet) are chosen in this paper. We evaluate the performance of the proposed method on a large set of images of 50 species collected from two primary resources: the PlantCLEF 2015 dataset and Internet resources. The experiments show that the fusion techniques clearly outperform identification from individual organs. At rank-1, the highest species identification accuracy of a single organ is 75.6% for flower images, whereas fusing leaf and flower raises the accuracy to 92.6%. We also compare the fusion strategies with multi-column deep convolutional neural networks (MCDCNN) [1]. The proposed hybrid fusion scheme outperforms MCDCNN in all combinations, obtaining a rank-1 improvement of +3.0% to +13.8% over the MCDCNN method. The evaluation datasets as well as the source code are publicly available. Keywords: Plant identification, Convolutional neural network, Deep learning, Fusion.
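The transformation-based late-fusion rules named above are simple element-wise combinations of the per-organ confidence-score vectors. A sketch with illustrative scores (one row per organ, one column per species):

```python
import numpy as np

# Hypothetical per-organ confidence scores for a 3-species toy problem
leaf   = np.array([0.1, 0.6, 0.3])
flower = np.array([0.2, 0.5, 0.3])
scores = np.vstack([leaf, flower])

sum_rule     = scores.sum(axis=0)    # add scores per class
max_rule     = scores.max(axis=0)    # keep the most confident organ
product_rule = scores.prod(axis=0)   # multiply scores per class

pred = int(np.argmax(sum_rule))      # predicted class under the sum rule
```

The classification-based alternative instead feeds the concatenated score vectors to an SVM, letting it learn how much each organ should count rather than fixing the rule in advance.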


2019 ◽  
Vol 8 (2) ◽  
pp. 3960-3963

In this paper, we conduct exploratory experiments using a deep learning convolutional neural network framework to classify crops into cotton, sugarcane, and mulberry. Earth Observing-1 Hyperion hyperspectral remote sensing data are used as the input, and structured data are extracted from the hyperspectral data using a remote sensing tool. An analytical assessment shows that the convolutional neural network (CNN) is more accurate than the classical support vector machine (SVM) and random forest methods. The accuracy of the SVM is 75%, the accuracy of random forest classification is 78%, and the accuracy of the CNN with the Adam optimizer is 99.3% with a loss of 2.74%. The CNN with RMSProp gives the same accuracy of 99.3% with a loss of 4.43%. The identified crop information will be used for estimating crop production and for deciding market strategies.


2021 ◽  
Vol 13 (13) ◽  
pp. 2445
Author(s):  
Xiaohui Ding ◽  
Yong Li ◽  
Ji Yang ◽  
Huapeng Li ◽  
Lingjia Liu ◽  
...  

The capsule network (Caps) is a novel type of neural network that has great potential for the classification of hyperspectral remote sensing images. However, the Caps suffers from vanishing gradients. To solve this problem, a powered activation regularization based adaptive capsule network (PAR-ACaps) was proposed for hyperspectral remote sensing classification, in which an adaptive routing algorithm without iteration was applied to amplify the gradient, and the powered activation regularization method was used to learn a sparser and more discriminative representation. The classification performance of PAR-ACaps was evaluated using two public hyperspectral remote sensing datasets, i.e., the Pavia University (PU) and Salinas (SA) datasets. The average overall classification accuracy (OA) of PAR-ACaps with a shallower architecture was measured and compared with those of the benchmarks, including random forest (RF), support vector machine (SVM), one-dimensional convolutional neural network (1DCNN), two-dimensional convolutional neural network (CNN), three-dimensional convolutional neural network (3DCNN), Caps, and the original adaptive capsule network (ACaps) with comparable network architectures. The OA of PAR-ACaps for the PU and SA datasets was 99.51% and 94.52%, respectively, higher than those of the benchmarks. Moreover, the classification performance of PAR-ACaps with a relatively deeper neural architecture (four and six convolutional layers in the feature extraction stage) was also evaluated to demonstrate the effectiveness of gradient amplification. As shown in the experimental results, the classification performance of PAR-ACaps with the relatively deeper architecture on the PU and SA datasets was also superior to 1DCNN, CNN, 3DCNN, Caps, and ACaps with comparable neural architectures. Additionally, the training time consumed by PAR-ACaps was significantly lower than that of Caps.
The proposed PAR-ACaps is, therefore, recommended as an effective alternative for hyperspectral remote sensing classification.
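Capsule activations are produced by a "squash" nonlinearity that maps a capsule's vector to a length in (0, 1). The abstract does not give the exact form of the powered activation regularization, so the sketch below only illustrates the general idea of raising the squashing factor to a power p (an assumed parameter), which reshapes the distribution of capsule norms:

```python
import numpy as np

def powered_squash(s, p=0.5, eps=1e-9):
    """Squash a capsule vector s, with the standard squashing factor
    raised to an assumed power p (p=1 recovers the usual squash)."""
    norm = np.linalg.norm(s, axis=-1, keepdims=True)
    scale = (norm ** 2 / (1.0 + norm ** 2)) ** p / (norm + eps)
    return scale * s

# A capsule vector of norm 5 is squashed to a norm just below 1
v = powered_squash(np.array([3.0, 4.0]))
```

Powers below 1 push moderate activations closer to saturation, one plausible way a powered activation could yield the sparser, more discriminative capsule outputs the paper reports.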

