Plant Recognition Using Morphological Feature Extraction and Transfer Learning over SVM and AdaBoost

Symmetry ◽  
2021 ◽  
Vol 13 (2) ◽  
pp. 356
Author(s):  
Shubham Mahajan ◽  
Akshay Raina ◽  
Xiao-Zhi Gao ◽  
Amit Kant Pandit

Plant species recognition from visual data has always been a challenging task for Artificial Intelligence (AI) researchers, owing to complications such as the enormous volume of data arising from the vast number of floral species. Many parts of a plant can serve as feature sources for an AI-based model, but leaves are considered more significant for the task than flowers, stems, and other parts, primarily due to their easy accessibility. With this notion, we propose a plant species recognition model based on morphological features extracted from leaf images, using a support vector machine (SVM) with the adaptive boosting (AdaBoost) technique. The proposed framework comprises pre-processing, feature extraction, and classification into one of the species. Morphological features such as centroid, major axis length, minor axis length, solidity, perimeter, and orientation are extracted from digital images of various categories of leaves. In addition, transfer learning, as suggested by some previous studies, is used in the feature extraction process. Various classifiers, such as kNN, decision trees, and a multilayer perceptron (with and without AdaBoost), are evaluated on the open-source dataset FLAVIA to certify the robustness of our study in contrast to other classifier frameworks. Our study also demonstrates the advantage of 10-fold cross-validation over other dataset partitioning strategies, achieving a precision rate of 95.85%.
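A few of the morphological descriptors listed above can be computed directly from a binary leaf mask. The sketch below is a hypothetical minimal version in NumPy, not the paper's actual pipeline: axis lengths come from the image-moment ellipse, and extent stands in for solidity (which would additionally require a convex hull, e.g. via scikit-image).

```python
import numpy as np

def morphological_features(mask):
    """Shape descriptors from a binary leaf mask: area, centroid,
    major/minor axis lengths of the image-moment ellipse, and extent
    (area / bounding-box area) as a simple stand-in for solidity."""
    ys, xs = np.nonzero(mask)
    area = len(xs)
    centroid = (ys.mean(), xs.mean())
    # Axis lengths from the covariance of foreground pixel coordinates.
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(np.stack([ys, xs]))))[::-1]
    major, minor = 4 * np.sqrt(eigvals)  # full axes of the moment ellipse
    extent = area / ((np.ptp(ys) + 1) * (np.ptp(xs) + 1))
    return {"area": area, "centroid": centroid,
            "major_axis": major, "minor_axis": minor, "extent": extent}

# A 10 x 20 rectangular "leaf" inside a 50 x 50 image.
mask = np.zeros((50, 50), dtype=bool)
mask[10:20, 5:25] = True
feats = morphological_features(mask)
```

Such per-leaf feature vectors would then be fed to the SVM/AdaBoost classifier.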

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 163912-163918
Author(s):  
Munish Kumar ◽  
Surbhi Gupta ◽  
Xiao-Zhi Gao ◽  
Amitoj Singh

2021 ◽  
pp. 1063293X2198894
Author(s):  
Prabira Kumar Sethy ◽  
Santi Kumari Behera ◽  
Nithiyakanthan Kannan ◽  
Sridevi Narayanan ◽  
Chanki Pandey

Paddy is an essential food crop worldwide. Rice provides 21% of worldwide human per capita energy and 15% of per capita protein. Asia represents 60% of the world's population, about 92% of the world's rice production, and 90% of global rice consumption. With the increase in population, the demand for rice has grown, so farm productivity needs to be enhanced by introducing new technology. Deep learning and IoT are hot research topics in various fields. This paper suggests a setup combining deep learning and IoT for remote monitoring of paddy fields. The VGG16 pre-trained network is considered for the identification of paddy leaf diseases and nitrogen status estimation. Two strategies are carried out to identify images: transfer learning and deep feature extraction. The deep feature extraction approach is combined with a support vector machine (SVM) to classify images. The transfer learning approach with VGG16 for identifying four types of leaf diseases and predicting nitrogen status results in 79.86% and 84.88% accuracy, respectively. The deep features of VGG16 with SVM achieve accuracies of 97.31% and 99.02%, respectively, on the same tasks. In addition, a framework is suggested for remote monitoring of paddy fields based on IoT and deep learning. The suggested prototype's advantage is that it controls temperature and humidity like the state of the art while also monitoring two additional aspects: nitrogen status and disease detection.
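The deep-feature-plus-SVM strategy can be sketched framework-agnostically. Since loading actual VGG16 weights requires a deep-learning framework, the `extract_features` stub below is a stand-in (a fixed random projection with a ReLU) on synthetic "leaf images"; only the pipeline shape, extract features then fit an SVM, mirrors the abstract.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def extract_features(images, W):
    """Stand-in for a CNN's penultimate layer: flatten and apply a
    fixed projection plus a ReLU-like nonlinearity. In the paper this
    role is played by the pre-trained VGG16 network."""
    flat = images.reshape(len(images), -1)
    return np.maximum(flat @ W, 0.0)

# Synthetic "leaf images": two classes with different mean intensity.
healthy = rng.normal(0.2, 0.05, size=(50, 8, 8))
diseased = rng.normal(0.8, 0.05, size=(50, 8, 8))
images = np.concatenate([healthy, diseased])
labels = np.array([0] * 50 + [1] * 50)

W = rng.normal(size=(64, 32))          # fixed random projection weights
feats = extract_features(images, W)

clf = SVC(kernel="linear").fit(feats[::2], labels[::2])  # train on half
accuracy = clf.score(feats[1::2], labels[1::2])          # test on the rest
```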


2020 ◽  
Vol 10 (17) ◽  
pp. 5792 ◽  
Author(s):  
Biserka Petrovska ◽  
Tatjana Atanasova-Pacemska ◽  
Roberto Corizzo ◽  
Paolo Mignone ◽  
Petre Lameski ◽  
...  

Remote Sensing (RS) image classification has recently attracted great attention for its application in different tasks, including environmental monitoring, battlefield surveillance, and geospatial object detection. The best practices for these tasks often involve transfer learning from pre-trained Convolutional Neural Networks (CNNs). A common approach in the literature is to employ CNNs for feature extraction and subsequently train classifiers on those features. In this paper, we propose the adoption of transfer learning by fine-tuning pre-trained CNNs for end-to-end aerial image classification. Our approach performs feature extraction from the fine-tuned neural networks and remote sensing image classification with a Support Vector Machine (SVM) model with linear and Radial Basis Function (RBF) kernels. To tune the learning rate hyperparameter, we employ a linear decay learning rate scheduler as well as cyclical learning rates. Moreover, to mitigate the overfitting problem of pre-trained models, we apply label smoothing regularization. For the fine-tuning and feature extraction process, we adopt the inception-based CNNs Inception-v3 and Xception, as well as the residual-based networks ResNet50 and DenseNet121. We present extensive experiments on two real-world remote sensing image datasets, AID and NWPU-RESISC45. The results show that the proposed method achieves classification accuracy of up to 98%, outperforming other state-of-the-art methods.
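Two of the training ingredients named above, cyclical learning rates and label smoothing, can be stated as standalone functions. These are the textbook formulations (triangular CLR in the style of Smith, 2017, and uniform label smoothing), not necessarily the exact schedules used in the paper.

```python
import numpy as np

def triangular_clr(iteration, base_lr, max_lr, step_size):
    """Triangular cyclical learning rate: the rate ramps linearly from
    base_lr up to max_lr over step_size iterations, then back down."""
    cycle = np.floor(1 + iteration / (2 * step_size))
    x = abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

def smooth_labels(one_hot, eps=0.1):
    """Label smoothing regularization: replace hard 0/1 targets with
    eps-smoothed values, discouraging over-confident predictions."""
    k = one_hot.shape[-1]
    return one_hot * (1 - eps) + eps / k
```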


2021 ◽  
Vol 13 (19) ◽  
pp. 3847
Author(s):  
Yaa Takyiwaa Acquaah ◽  
Balakrishna Gokaraju ◽  
Raymond C. Tesiero ◽  
Gregory H. Monty

The control of thermostats of a heating, ventilation, and air-conditioning (HVAC) system installed in commercial and residential buildings remains a pertinent problem in building energy efficiency and thermal comfort research. The ability to determine the number of people in an area at a particular time is imperative for energy efficiency, in order to condition only occupied and thermally deficient regions. In this study, which compares the best features for detecting the number of people in an area, feature extraction techniques including wavelet scattering, wavelet decomposition, the grey-level co-occurrence matrix (GLCM), and feature maps from convolutional neural network (CNN) layers were explored using thermal camera imagery. Specifically, the pre-trained CNN networks explored are the deep residual network (ResNet-50) and the visual geometry group network (VGG-16). The discriminating potential of Haar, Daubechies, and Symlets wavelet statistics on different distributions of data was investigated, as was the performance of VGG-16 and ResNet-50 used end-to-end with a transfer learning approach. Experimental results showed that the classification and regression trees (CART) model trained on GLCM features alone and on Haar wavelet statistics alone achieved accuracies of approximately 80% and 84%, respectively, in the detection problem. Moreover, k-nearest neighbors (KNN) trained on the combined GLCM and Haar wavelet statistic features achieved an accuracy of approximately 86%. In addition, the accuracy of the multi-class support vector machine (SVM) trained on deep features obtained from layers of pre-trained ResNet-50 and VGG-16 was between 96% and 97%. Furthermore, ResNet-50 transfer learning outperformed the VGG-16 transfer learning model for occupancy detection using thermal imagery. Overall, the SVM model trained on features extracted from wavelet scattering emerged as the best-performing classifier, with an accuracy of 100%. A principal component analysis (PCA) on the wavelet scattering features showed that the first twenty (20) principal components achieved an accuracy similar to training on the whole feature set while reducing execution time. The occupancy detection models can be integrated into HVAC control and security systems for energy efficiency, and can aid in distributing resources to people in an area.
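As an illustration of one of the compared feature families, a grey-level co-occurrence matrix and two classic statistics over it can be computed in a few lines. This is a minimal, single-offset version of what `skimage.feature.graycomatrix` provides; the study's actual GLCM configuration may differ.

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix for a single pixel offset,
    normalized into a joint probability table."""
    g = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            g[image[y, x], image[y + dy, x + dx]] += 1
    return g / g.sum()

def glcm_stats(p):
    """Contrast and homogeneity, two classic GLCM texture features."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, homogeneity
```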


GigaScience ◽  
2019 ◽  
Vol 8 (11) ◽  
Author(s):  
Robail Yasrab ◽  
Jonathan A Atkinson ◽  
Darren M Wells ◽  
Andrew P French ◽  
Tony P Pridmore ◽  
...  

Abstract
Background: In recent years, quantitative analysis of root growth has become increasingly important as a way to explore the influence of abiotic stresses such as high temperature and drought on a plant's ability to take up water and nutrients. Segmentation and feature extraction of plant roots from images present a significant computer vision challenge: root images contain complicated structures, variations in size, background, occlusion, clutter, and variation in lighting conditions. We present a new image analysis approach that provides fully automatic extraction of complex root system architectures from a range of plant species in varied imaging set-ups. Driven by modern deep-learning approaches, RootNav 2.0 replaces previously manual and semi-automatic feature extraction with an extremely deep multi-task convolutional neural network architecture. The network also locates seeds and first- and second-order root tips to drive a search algorithm seeking optimal paths throughout the image, extracting accurate architectures without user interaction.
Results: We develop and train a novel deep network architecture to explicitly combine local pixel information with global scene information in order to accurately segment small root features across high-resolution images. The proposed method was evaluated on images of wheat (Triticum aestivum L.) from a seedling assay. Compared with semi-automatic analysis via the original RootNav tool, the proposed method demonstrated comparable accuracy with a 10-fold increase in speed. The network was able to adapt to different plant species via transfer learning, offering similar accuracy when transferred to an Arabidopsis thaliana plate assay. A final instance of transfer learning, to images of Brassica napus from a hydroponic assay, still demonstrated good accuracy despite many fewer training images.
Conclusions: We present RootNav 2.0, a new approach to root image analysis driven by a deep neural network. The tool can be adapted to new image domains with a reduced number of images, and offers substantial speed improvements over semi-automatic and manual approaches. The tool outputs root architectures in the widely accepted RSML standard, for which numerous analysis packages exist (http://rootsystemml.github.io/), as well as segmentation masks compatible with other automated measurement tools. The tool will give researchers the ability to analyse root systems at larger scales than ever before, at a time when large-scale genomic studies have made this more important than ever.
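The "search algorithm seeking optimal paths" over the network's output can be illustrated with a plain shortest-path search: treat each pixel's cost as low where the segmentation is confident it is root, then run Dijkstra from a root tip toward the seed. This is a schematic stand-in, not RootNav 2.0's actual implementation.

```python
import heapq

def best_path(cost, start, goal):
    """Dijkstra over a 2-D cost grid (4-connected): returns the
    minimum-cost pixel path from start to goal as (row, col) tuples."""
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Walk the predecessor chain back from the goal.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

In practice the cost would be derived from the network's per-pixel root probability, e.g. `cost = 1 - p_root`.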


2021 ◽  
Vol 11 (3) ◽  
pp. 997
Author(s):  
Jiaping Li ◽  
Wai Lun Lo ◽  
Hong Fu ◽  
Henry Shu Hung Chung

Meteorological visibility is an important meteorological observation indicator of atmospheric transparency, which matters for transport safety, and estimating visibility accurately from image characteristics is a challenging problem. This paper proposes a transfer learning method for meteorological visibility estimation based on image feature fusion. Unlike existing methods, the proposed method estimates visibility from data processing and feature extraction in selected subregions of the whole image, and therefore has a lower computational load and higher efficiency. All database images were first grey-averaged for the selection of effective subregions and feature extraction. Effective subregions are extracted for static landmark objects, which provide useful information for visibility estimation. Four feature extraction networks (DenseNet, ResNet50, VGG16, and VGG19) were used for feature extraction from the subregions. The features extracted by the neural networks were then fed into the proposed support vector regression (SVR) model, which derives the estimated visibilities of the subregions. Finally, an overall comprehensive visibility for the whole image was estimated by a weighted fusion of the visibility estimates from the subregion models. Experimental results show a visibility estimation accuracy of more than 90%. The method estimates image visibility with high robustness and effectiveness.
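The final weighted-fusion step can be sketched as a single function over the per-subregion SVR outputs. The estimates and weights here are hypothetical inputs; the paper derives its own subregion weighting.

```python
import numpy as np

def fuse_visibility(estimates, weights):
    """Weighted fusion of per-subregion visibility estimates into a
    single value for the whole image; weights are normalized to sum to 1."""
    estimates = np.asarray(estimates, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return float(np.dot(weights, estimates))
```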


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Zhiyong Tao ◽  
Xinru Zhou ◽  
Zhixue Xu ◽  
Sen Lin ◽  
Yalei Hu ◽  
...  

Accuracy and efficiency are essential topics in current biometric feature recognition and security research. This paper proposes a deep neural network using bidirectional feature extraction and transfer learning to improve finger-vein recognition performance. First, we build a new finger-vein database with the opposite position information of the original one and adopt transfer learning to make the network suitable for our overall recognition framework. Next, the feature extractor is constructed by adjusting the unidirectional database's parameters, capturing vein features from top to bottom and vice versa. We then concatenate the two directional features to form the bidirectional finger-vein features, which are trained and classified by a Support Vector Machine (SVM) to realize recognition. Experiments are conducted on the Malaysian Polytechnic University's published database (FV-USM) and on finger veins from the Signal and Information Processing Laboratory (FV-SIPL). The accuracy of our proposed algorithm reaches 99.67% and 99.31%, respectively, significantly higher than unidirectional recognition on each database. Compared with the algorithms cited in this paper, our bidirectional-feature model achieves higher accuracy and faster recognition than state-of-the-art frameworks, and has excellent practical value.
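The bidirectional-feature idea, concatenating descriptors extracted in the two scanning directions before an SVM, can be sketched on synthetic embeddings; the CNN extractors themselves are omitted here and the data are illustrative only.

```python
import numpy as np
from sklearn.svm import SVC

def bidirectional_features(top_down, bottom_up):
    """Concatenate features extracted in the two scanning directions
    into a single bidirectional descriptor per sample."""
    return np.concatenate([top_down, bottom_up], axis=1)

rng = np.random.default_rng(1)
# Synthetic per-direction embeddings for two finger classes,
# offset by a class-dependent mean shift.
shift = np.repeat([[0.0], [2.0]], 20, axis=0)
f_td = rng.normal(0, 1, (40, 16)) + shift
f_bu = rng.normal(0, 1, (40, 16)) + shift
y = np.array([0] * 20 + [1] * 20)

X = bidirectional_features(f_td, f_bu)        # shape (40, 32)
clf = SVC(kernel="rbf").fit(X[::2], y[::2])   # train on half
acc = clf.score(X[1::2], y[1::2])             # test on the rest
```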


2018 ◽  
Vol 8 (7) ◽  
pp. 1210 ◽  
Author(s):  
Mahdieh Izadpanahkakhk ◽  
Seyyed Razavi ◽  
Mehran Taghipour-Gorjikolaie ◽  
Seyyed Zahiri ◽  
Aurelio Uncini

Palmprint verification is one of the most significant and popular approaches to personal authentication due to its high accuracy and efficiency. Using deep region of interest (ROI) and feature extraction models for palmprint verification, a novel approach is proposed in which convolutional neural networks (CNNs) along with transfer learning are exploited. The extracted palmprint ROIs are fed to the final verification system, which is composed of two modules: (i) a pre-trained CNN architecture as a feature extractor and (ii) a machine learning classifier. To evaluate our proposed model, we computed the intersection over union (IoU) metric for ROI extraction, along with accuracy, receiver operating characteristic (ROC) curves, and the equal error rate (EER) for the verification task. The experiments demonstrated that the ROI extraction module reliably finds the appropriate palmprint ROIs and that the verification results are highly precise. This was verified with the different databases and classification methods employed in our proposed model. In comparison with other existing approaches, our model is competitive with the state-of-the-art approaches that rely on the representation of hand-crafted descriptors. We achieved an IoU score of 93% and an EER of 0.0125 using a support vector machine (SVM) classifier on the contact-based Hong Kong Polytechnic University Palmprint (HKPU) database. Notably, all code is open-source and can be accessed online.
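The IoU metric used above to score ROI extraction has a standard closed form for axis-aligned boxes, sketched here for boxes given as (x1, y1, x2, y2) corner coordinates.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle (may be empty).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```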


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5984
Author(s):  
Juan Luis Ferrando Chacón ◽  
Telmo Fernández de Barrena ◽  
Ander García ◽  
Mikel Sáez de Buruaga ◽  
Xabier Badiola ◽  
...  

There is an increasing trend in industry towards knowing the condition of assets in real time. In particular, tool wear is a critical aspect that requires real-time monitoring to reduce costs and scrap in machining processes. Traditionally, to predict tool wear conditions in machining, mathematical models have been developed to extract information from the signals of sensors attached to the machines. To reduce the complexity of developing physical models, which require in-depth knowledge of the system being modelled, the current trend is to use machine-learning (ML) models based on tool wear data. The acoustic emission (AE) technique has been widely used to capture data from, and understand the real-time condition of, industrial assets such as cutting tools. However, AE signal interpretation and processing is rather complex. One of the most common features extracted from AE signals to predict tool wear is the counts parameter, defined as the number of times the signal amplitude exceeds a predefined threshold. A recurrent problem with this feature is defining an adequate threshold that yields consistent wear prediction. Additionally, the AE signal bandwidth is rather wide, and selecting the optimum frequency band for feature extraction has been identified as critical and complex by many authors. To overcome these problems, this paper proposes a methodology that applies multi-threshold count feature extraction at multiresolution level using the wavelet packet transform, extracting a redundant, non-optimal feature map from the AE signal. Next, recursive feature elimination reduces and optimizes the vast number of predictive features generated in the previous step, and random forest regression provides the estimated tool wear. The methodology was tested on data captured while turning 19NiMoCr6 steel under pre-established cutting conditions. The results were compared with several ML algorithms, such as k-nearest neighbors, support vector machines, artificial neural networks, and decision trees. Experimental results show that the proposed method reduces the root mean squared error of the prediction by 36.53%.
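The counts parameter described above, and its multi-threshold variant, can be written directly in NumPy. The sketch below counts positive-going threshold crossings (one common definition of AE counts); the wavelet packet decomposition that precedes it in the paper is omitted.

```python
import numpy as np

def ae_counts(signal, thresholds):
    """AE 'counts' feature at several thresholds: for each threshold,
    the number of positive-going crossings where the amplitude rises
    from below the threshold to at or above it."""
    x = np.asarray(signal, dtype=float)
    return np.array([np.sum((x[1:] >= t) & (x[:-1] < t)) for t in thresholds])
```

Evaluating this over each wavelet packet node and several thresholds yields the redundant multi-threshold, multiresolution feature map that recursive feature elimination then prunes.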

