Rapid Identification of Potassium Nutrition Stress in Rice Based on Machine Vision and Object-Oriented Segmentation

2019 ◽  
Vol 2019 ◽  
pp. 1-8 ◽  
Author(s):  
Lisu Chen ◽  
Shihan Huang ◽  
Yuanyuan Sun ◽  
Enyan Zhu ◽  
Ke Wang

Distinctive symptoms can be observed on rice leaves exposed to potassium deficiency, and these symptoms usually differ across potassium levels, which offers a foundation for rapid nutrition diagnosis. In this study, two years of hydroponic experiments on rice (providing five levels of potassium nutrition, from extremely deficient to normal) were carried out, and leaf images were acquired by optical scanning at four growth periods. To diagnose potassium nutrition status, the specific symptoms, including the yellowish-brown leaf margin and necrotic spots, were segmented and quantified from leaf images by the object-oriented method, and six further spectral characteristics of the leaf were extracted with the image color analysis functions of MATLAB. Based on the relationship between potassium content and leaf characteristics, the G value (average of the G channel in the RGB color model) calculated from the entire leaf and the leaf tip, the area of the yellowish leaf margin, and the number of necrotic spots were used to establish an identification model of potassium stress with a support vector machine (SVM). The results indicated that the overall identification accuracies of rice potassium nutrition status were 90%, 94%, 94%, and 96% at the four growth periods (productive tillering, invalid tillering, jointing, and booting stages), respectively. Data from another year were used to validate the model, and the identification accuracies were 94%, 78%, 80%, and 84%, respectively. Overall, the extraction of specific symptoms by object-oriented segmentation extends machine vision technology to diagnosing potassium deficiency, and its application in plant nutrition diagnosis is valuable for quantifying effective characteristics and improving identification accuracy.
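The identification step described above can be sketched as follows: an SVM trained on the four reported leaf features (whole-leaf and tip G values, yellowish-margin area, necrotic-spot count) to predict one of five potassium levels. The feature values below are synthetic stand-ins for illustration only, not the paper's measurements.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 250  # 50 synthetic leaves per potassium level
levels = np.repeat(np.arange(5), 50)

# Four features per leaf, loosely mimicking the trends the abstract reports:
# deficiency lowers leaf G values and increases margin yellowing and spots.
X = np.column_stack([
    120 + 10 * levels + rng.normal(0, 5, n),                  # whole-leaf G value
    110 + 12 * levels + rng.normal(0, 6, n),                  # leaf-tip G value
    np.maximum(0, (4 - levels) * 30 + rng.normal(0, 10, n)),  # yellowish-margin area
    rng.poisson(np.maximum(0.5, 4 - levels)),                 # necrotic-spot count
])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, levels, test_size=0.2, random_state=1, stratify=levels)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10))
model.fit(X_tr, y_tr)
print(f"identification accuracy: {model.score(X_te, y_te):.2f}")
```

The RBF kernel and C value are assumptions; the abstract does not state the SVM's kernel or hyperparameters.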

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1994
Author(s):  
Qian Ma ◽  
Wenting Han ◽  
Shenjin Huang ◽  
Shide Dong ◽  
Guang Li ◽  
...  

This study explores the classification potential of a multispectral classification model for farmland with planting structures of different complexity. Unmanned aerial vehicle (UAV) remote sensing is used to obtain multispectral images of three study areas with low-, medium-, and high-complexity planting structures, containing three, five, and eight crop types, respectively. The feature subsets for the three study areas are selected by recursive feature elimination (RFE). Object-oriented random forest (OB-RF) and object-oriented support vector machine (OB-SVM) classification models are established for the three study areas. After training the models with the feature subsets, the classification results are evaluated using a confusion matrix. The classification accuracies of the OB-RF and OB-SVM models are 97.09% and 99.13%, respectively, for the low-complexity planting structure; the equivalent values are 92.61% and 99.08% for the medium-complexity structure and 88.99% and 97.21% for the high-complexity structure. For farmland with fragmentary plots, as planting structure complexity changed from low to high, both models' overall accuracy decreased: the OB-RF model's by 8.1% and the OB-SVM model's by only 1.92%. OB-SVM achieves an overall classification accuracy of 97.21% and a single-crop extraction accuracy of at least 85.65%. Therefore, UAV multispectral remote sensing can be used for classification in highly complex planting structures.
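A minimal sketch of the RFE-then-classify workflow described above, using synthetic stand-ins for the per-object spectral and texture features (the real feature set and subset sizes are not reproduced here):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for per-object features of segmented image objects
X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)

# Recursive feature elimination driven by a random forest's feature importances;
# the 8-feature target is an illustrative choice, not the paper's subset size
selector = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
               n_features_to_select=8).fit(X, y)
X_sub = selector.transform(X)

# Compare the two object-oriented classifiers on the selected subset
for name, clf in [("OB-RF", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("OB-SVM", SVC(kernel="rbf", gamma="scale"))]:
    acc = cross_val_score(clf, X_sub, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```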


Animals ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 1485
Author(s):  
Kaidong Lei ◽  
Chao Zong ◽  
Xiaodong Du ◽  
Guanghui Teng ◽  
Feiqi Feng

This study proposes a method and device for the intelligent mobile monitoring of oestrus on a sow farm, applied in the field of sow production. A bionic boar model that imitates the sounds, smells, and touch of real boars was built to detect the oestrus of sows after weaning. Machine vision technology was used to identify the interactive behaviour between empty sows and bionic boars and to establish deep belief network (DBN), sparse autoencoder (SAE), and support vector machine (SVM) models, and the resulting recognition accuracy rates were 96.12%, 98.25%, and 90.00%, respectively. The interaction times and frequencies between the sow and the bionic boar and the static behaviours of both ears during heat were further analysed. The results show that there is a strong correlation between the duration of contact between the oestrus sow and the bionic boar and the static behaviours of both ears. The average contact duration between the sows in oestrus and the bionic boars was 29.7 s/3 min, and the average duration in which the ears of the oestrus sows remained static was 41.3 s/3 min. The interactions between the sow and the bionic boar were used as the basis for judging the sow's oestrus state. In contrast with the methods of other studies, the proposed innovative design for recyclable bionic boars can be used to check oestrus, and machine vision technology can be used to quickly identify oestrus behaviours. This approach can more accurately obtain the oestrus duration of a sow and provide a scientific reference for a sow's conception time.
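To illustrate how the two behavioural durations reported above could feed a classifier, here is a hedged sketch with an SVM on synthetic per-window features, loosely centred on the reported means (29.7 s contact and 41.3 s static ears per 3-minute window); the non-oestrus values are invented for contrast and the classifier choice is one of the three the study evaluates:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Two features per 3-minute observation window:
# [contact duration with bionic boar (s), ear-static duration (s)]
oestrus = np.column_stack([rng.normal(29.7, 6, 100),   # reported mean contact
                           rng.normal(41.3, 8, 100)])  # reported mean static ears
non_oestrus = np.column_stack([rng.normal(8, 4, 100),  # assumed baseline values
                               rng.normal(15, 6, 100)])
X = np.vstack([oestrus, non_oestrus]).clip(min=0)
y = np.array([1] * 100 + [0] * 100)  # 1 = oestrus window

acc = cross_val_score(SVC(), X, y, cv=5).mean()
print(f"oestrus detection accuracy on synthetic windows: {acc:.2f}")
```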


2021 ◽  
Vol 5 (2) ◽  
Author(s):  
Alexander Knyshov ◽  
Samantha Hoang ◽  
Christiane Weirauch

Abstract Automated insect identification systems have been explored for more than two decades but have only recently started to take advantage of powerful and versatile convolutional neural networks (CNNs). While typical CNN applications still require large training image datasets with hundreds of images per taxon, pretrained CNNs recently have been shown to be highly accurate, while being trained on much smaller datasets. We here evaluate the performance of CNN-based machine learning approaches in identifying three curated species-level dorsal habitus datasets for Miridae, the plant bugs. Miridae are of economic importance, but species-level identifications are challenging and typically rely on information other than dorsal habitus (e.g., host plants, locality, genitalic structures). Each dataset contained 2–6 species and 126–246 images in total, with a mean of only 32 images per species for the most difficult dataset. We find that closely related species of plant bugs can be identified with 80–90% accuracy based on their dorsal habitus alone. The pretrained CNN performed 10–20% better than a taxon expert who had access to the same dorsal habitus images. We find that feature extraction protocols (selection and combination of blocks of CNN layers) impact identification accuracy much more than the classifying mechanism (support vector machine and deep neural network classifiers). While our network has much lower accuracy on photographs of live insects (62%), overall results confirm that a pretrained CNN can be straightforwardly adapted to collection-based images for a new taxonomic group and successfully extract relevant features to classify insect species.
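The finding above, that the feature extraction protocol (which CNN blocks are pooled) matters more than the classifier (SVM vs. deep neural network), can be sketched with stand-in feature matrices; in practice the features would be global-pooled activations from blocks of a pretrained CNN, which this synthetic example only imitates:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, n_species = 150, 3
y = np.repeat(np.arange(n_species), n // n_species)

# Stand-ins for 64-dimensional pooled activations from two CNN blocks;
# block A is made more species-discriminative than block B by construction
block_a = rng.normal(y[:, None] * 0.8, 1.0, (n, 64))
block_b = rng.normal(y[:, None] * 0.3, 1.0, (n, 64))

for feats, feat_name in [(block_a, "block A"), (block_b, "block B"),
                         (np.hstack([block_a, block_b]), "A + B")]:
    for clf_name, clf in [("SVM", SVC()),
                          ("DNN", MLPClassifier(max_iter=500, random_state=0))]:
        acc = cross_val_score(clf, feats, y, cv=3).mean()
        print(f"{feat_name:7s} / {clf_name}: {acc:.2f}")
```

On such data the gap between feature choices typically dwarfs the gap between the two classifiers, mirroring the abstract's conclusion.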


2020 ◽  
Vol 26 (4) ◽  
pp. 405-425
Author(s):  
Javed Miandad ◽  
Margaret M. Darrow ◽  
Michael D. Hendricks ◽  
Ronald P. Daanen

ABSTRACT This study presents a new methodology to identify landslide and landslide-susceptible locations in Interior Alaska using only geomorphic properties from light detection and ranging (LiDAR) derivatives (i.e., slope, profile curvature, and roughness) and the normalized difference vegetation index (NDVI), focusing on the effect of different resolutions of LiDAR images. We developed a semi-automated object-oriented image classification approach in ArcGIS 10.5 and prepared a landslide inventory from visual observation of hillshade images. The multistage workflow included combining derivatives from 1-, 2.5-, and 5-m-resolution LiDAR, image segmentation, image classification using a support vector machine classifier, and image generalization to clean false positives. We assessed classification accuracy by generating confusion matrix tables. Analysis of the results indicated that LiDAR image scale played an important role in the classification, and the use of NDVI generated better results. Overall, the LiDAR 5-m-resolution image with NDVI generated the best results, with a kappa value of 0.55 and an overall accuracy of 83 percent. The LiDAR 1-m-resolution image with NDVI generated the highest producer accuracy of 73 percent in identifying landslide locations. We produced a combined overlay map by summing the individual classified maps, which delineated landslide objects better than the individual maps. The combined classified map from 1-, 2.5-, and 5-m-resolution LiDAR with NDVI generated producer accuracies of 60, 80, and 86 percent and user accuracies of 39, 51, and 98 percent for landslide, landslide-susceptible, and stable locations, respectively, with an overall accuracy of 84 percent and a kappa value of 0.58. This semi-automated object-oriented image classification approach demonstrated potential as a viable tool with further refinement and/or in combination with additional data sources.
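The accuracy metrics quoted above (overall accuracy, producer and user accuracies, kappa) all derive from a confusion matrix. A short sketch of those computations, on a hypothetical three-class matrix (not the study's actual counts):

```python
import numpy as np

def kappa(cm):
    """Cohen's kappa from a confusion matrix (rows = reference, cols = predicted)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    observed = np.trace(cm) / total                                # agreement rate
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2  # chance agreement
    return (observed - expected) / (1 - expected)

# Hypothetical counts for: landslide, landslide-susceptible, stable
cm = np.array([[60, 25, 15],
               [10, 80, 10],
               [ 5,  9, 86]])

overall = np.trace(cm) / cm.sum()
producer = np.diag(cm) / cm.sum(axis=1)  # per-class producer accuracy (omission)
user = np.diag(cm) / cm.sum(axis=0)      # per-class user accuracy (commission)
print(f"overall={overall:.2f}  kappa={kappa(cm):.2f}")
print("producer:", np.round(producer, 2), " user:", np.round(user, 2))
```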


Author(s):  
Sandy C. Lauguico ◽  
◽  
Ronnie S. Concepcion II ◽  
Jonnel D. Alejandrino ◽  
Rogelio Ruzcko Tobias ◽  
...  

The arising problem of food scarcity drives innovation in urban farming. One urban farming method is smart aquaponics. However, for a smart aquaponics system to yield crops successfully, it needs intensive monitoring, control, and automation. An efficient way of implementing this is to use vision systems and machine learning algorithms to optimize the capabilities of the farming technique. To realize this, a comparative analysis of three machine learning estimators was conducted: Logistic Regression (LR), K-Nearest Neighbor (KNN), and Linear Support Vector Machine (L-SVM). This was done by modeling each algorithm on machine vision features extracted from images of lettuce raised in a smart aquaponics setup. Each model was optimized to increase cross-validation and hold-out validation accuracies. The results showed that KNN, with tuned hyperparameters n_neighbors=24, weights='distance', algorithm='auto', and leaf_size=10, was the most effective model for the given dataset, yielding a cross-validation mean accuracy of 87.06% and a classification accuracy of 91.67%.
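The tuned KNN configuration reported above maps directly onto scikit-learn's estimator. A sketch with those exact hyperparameters, on a synthetic stand-in for the extracted lettuce image features:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for machine-vision features extracted from lettuce images
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0, stratify=y)

# Hyperparameters exactly as reported in the abstract
knn = KNeighborsClassifier(n_neighbors=24, weights="distance",
                           algorithm="auto", leaf_size=10)
cv_mean = cross_val_score(knn, X_tr, y_tr, cv=5).mean()
holdout = knn.fit(X_tr, y_tr).score(X_te, y_te)
print(f"cross-validation mean: {cv_mean:.2%}, hold-out: {holdout:.2%}")
```

With weights='distance', nearer neighbours dominate the vote, which is what allows a relatively large n_neighbors=24 to stay competitive.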


Chemosensors ◽  
2021 ◽  
Vol 9 (12) ◽  
pp. 360
Author(s):  
Tianqi Lu ◽  
Ammar Al-Hamry ◽  
José Mauricio Rosolen ◽  
Zheng Hu ◽  
Junfeng Hao ◽  
...  

We investigated functionalized graphene materials to create highly sensitive sensors for volatile organic compounds (VOCs) such as formaldehyde, methanol, ethanol, acetone, and isopropanol. First, we prepared VOC-sensitive films consisting of mechanically exfoliated graphene (eG) and chemical graphene oxide (GO), which have different concentrations of structural defects. We deposited the films on silver interdigitated electrodes on a Kapton substrate and submitted them to thermal treatment. Next, we measured the sensitive properties of the resulting sensors towards specific VOCs by impedance spectroscopy. We obtained an eG- and GO-based electronic nose composed of two eG-film and four GO-film sensors with variable sensitivity to individual VOCs. The smallest relative change in impedance was 5% for the sensor based on eG film annealed at 180 °C towards 10 ppm formaldehyde, whereas the highest relative change was 257% for the sensor based on two-layer deposited GO film annealed at 200 °C towards 80 ppm ethanol. At 10 ppm VOC, the GO film-based sensors were sensitive enough to distinguish between individual VOCs, which implied excellent selectivity, as confirmed by Principal Component Analysis (PCA). According to a PCA-Support Vector Machine-based signal processing method, the electronic nose identified individual VOCs with 100% accuracy. The proposed electronic nose can be used to detect multiple VOCs selectively because each sensor is sensitive to the VOCs and has significant cross-selectivity to the others.
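The PCA-SVM signal processing described above reduces the six-sensor response vector before classification. A hedged sketch with synthetic impedance-change patterns (one response fingerprint per VOC; the real responses, noise levels, and PCA dimensionality are not reproduced):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
vocs = ["formaldehyde", "methanol", "ethanol", "acetone", "isopropanol"]

# Each sample: relative impedance changes of the six sensors (2 eG + 4 GO);
# a distinct synthetic response pattern per VOC plays the role of cross-selectivity
patterns = rng.uniform(5, 250, (len(vocs), 6))
X = np.vstack([p + rng.normal(0, 3, (30, 6)) for p in patterns])
y = np.repeat(np.arange(len(vocs)), 30)

# PCA for dimensionality reduction, then an SVM for VOC identification
model = make_pipeline(StandardScaler(), PCA(n_components=3), SVC())
acc = cross_val_score(model, X, y, cv=5).mean()
print(f"PCA-SVM identification accuracy: {acc:.0%}")
```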


2020 ◽  
Vol 8 (5) ◽  
pp. 2522-2527

In this paper, we design a method for recognizing fingerprints and irises using feature-level fusion and decision-level fusion in a children's multimodal biometric system. Initially, Histogram of Oriented Gradients (HOG), Gabor, and maximum filter response features are extracted from both the fingerprint and iris domains and considered for identification accuracy. The fusion of biometric traits recommends combining the feature vectors of all possible features. Principal Component Analysis (PCA) is used to select features from the fusion vector. The reduced features are fed into the fusion classifiers: K-Nearest Neighbor (KNN), Support Vector Machine (SVM), and Naive Bayes (NB). The suitable combination of features and fusion classifiers for a children's multimodal biometric system is identified. Experiments conducted on a children's fingerprint and iris database reveal that the fusion combination outperforms the individual modalities. In addition, the proposed model advances the unimodal biometric system.
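The feature-level fusion pipeline above (concatenate modality features, reduce with PCA, then classify) can be sketched as follows; the descriptor vectors, their dimensions, and the PCA size are illustrative stand-ins, not the paper's values:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_children, samples_each = 10, 12
y = np.repeat(np.arange(n_children), samples_each)

# Stand-ins for HOG/Gabor/maximum-filter descriptors from each modality
fingerprint = rng.normal(y[:, None], 1.0, (y.size, 40))
iris = rng.normal(y[:, None] * 0.7, 1.0, (y.size, 40))

# Feature-level fusion: concatenate both modalities, then reduce with PCA
fused = PCA(n_components=15).fit_transform(np.hstack([fingerprint, iris]))

# Compare the three fusion classifiers named in the abstract
for name, clf in [("KNN", KNeighborsClassifier(5)),
                  ("SVM", SVC()),
                  ("NB", GaussianNB())]:
    print(f"{name}: {cross_val_score(clf, fused, y, cv=4).mean():.2f}")
```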


Author(s):  
O. O. Kryvoshein ◽  
O. A. Kryvobok ◽  
T. I. Adamenko

The article studies one of the most important issues of agricultural production maintenance: the development of a system for estimating crop areas in Ukraine. The objective of this paper is to describe such a system, which uses high-resolution satellite data and operational agrometeorological data from the network of the Hydrometeorological Centre of Ukraine as input. The system is based on step-by-step solving of the following tasks: obtaining geoinformation data for individual agricultural crops; developing methods for classifying multispectral satellite images; and developing software applications to automate the image classification process and the subsequent classification of crop areas. The research uses the following algorithms (classifiers) to classify agricultural land: SVM (support vector machine), RF (random forest), and NN (neural networks). The most accurate of them formed the basis of the general classification method. The values of the spectral characteristics of the red and infrared channels of a complete set of cloudless satellite images over the growing period were used as input features. As a result, in 2018 test calculations were conducted to estimate the area of agricultural crops in Kyiv Region. Evaluation of the accuracy of the satellite-based crop area estimates against statistical data showed that the lowest accuracy is typical for winter wheat and corn, and the accuracy of soybean and spring barley classification is quite low for most of the tested fields, while sunflower and rapeseed crops showed the highest accuracy. To improve classification accuracy, it is necessary to introduce more classification features (in the temporal dimension) by processing more satellite images during the growing period, and to increase the number of test samples through systematic sampling of ground data across the regions of Ukraine.
We suggest that the Hydrometeorological Centre of Ukraine adopt this satellite-based scheme for estimating the areas of the main agricultural crops.
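A minimal sketch of the classifier comparison described above, using synthetic season-long red/near-infrared reflectance profiles per crop (the crop list matches the abstract; the number of image dates, the profile values, and the model settings are assumptions):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
crops = ["winter wheat", "corn", "soybean", "spring barley", "sunflower", "rapeseed"]

# Per-field features: red and near-infrared reflectance from 8 cloudless
# image dates over the growing period (8 dates x 2 channels = 16 features)
profiles = rng.uniform(0.05, 0.6, (len(crops), 16))
X = np.vstack([p + rng.normal(0, 0.04, (40, 16)) for p in profiles])
y = np.repeat(np.arange(len(crops)), 40)

# Compare the three classifiers the system evaluates; the most accurate one
# would form the basis of the general classification method
for name, clf in [("SVM", SVC()),
                  ("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("NN", MLPClassifier(max_iter=1000, random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```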


2020 ◽  
Vol 12 (24) ◽  
pp. 4115
Author(s):  
Xiaoli Li ◽  
Jinsong Chen ◽  
Longlong Zhao ◽  
Shanxin Guo ◽  
Luyi Sun ◽  
...  

The spatial fragmentation of high-resolution remote sensing images places strong demands on the noise immunity of segmentation algorithms. However, the stronger the noise immunity, the more serious the loss of detailed information, which easily leads to effective characteristics being neglected. In view of the difficulty of balancing noise immunity and the retention of effective characteristics, an adaptive distance-weighted Voronoi tessellation technique is proposed for remote sensing image segmentation. The distance between pixels and seed points in the Voronoi tessellation is established by adaptively weighting spatial distance and spectral distance. The weight coefficient controlling the influence of spatial distance is defined by a monotone decreasing function. Following the fuzzy clustering framework, a fuzzy segmentation model with Kullback–Leibler (KL) entropy regularization is established, using a multivariate Gaussian distribution to describe the spectral characteristics and a Markov Random Field (MRF) to capture the neighborhood effect of sub-regions. Finally, a series of parameter optimization schemes is designed according to the parameter characteristics to obtain the optimal segmentation results. The proposed algorithm is validated on many multispectral remote sensing images against five comparison algorithms by qualitative and quantitative analysis. Extensive experiments show that the proposed algorithm can suppress complex noise while better preserving effective characteristics.
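The core idea above, assigning each pixel to the seed that minimizes a weighted sum of spatial and spectral distance, can be sketched on a toy single-band image. Note the paper defines the spatial weight via a monotone decreasing function; the fixed coefficient lam below is a simplification of that adaptive weighting:

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 40, 40
# Toy single-band image: two flat regions (left ~0, right ~1) plus noise
img = rng.normal(0, 0.05, (h, w)) + (np.arange(w) > w // 2) * 1.0
seeds = np.array([[20, 10], [20, 30]])  # (row, col) seed points, one per region

rows, cols = np.mgrid[0:h, 0:w]
# Spatial distance from every pixel to each seed, normalized by the image diagonal
d_spatial = np.stack([np.hypot(rows - r, cols - c) for r, c in seeds]) / np.hypot(h, w)
# Spectral distance: absolute difference to each seed pixel's value
d_spectral = np.stack([np.abs(img - img[r, c]) for r, c in seeds])

lam = 0.3  # influence of spatial distance (a stand-in for the adaptive weight)
labels = np.argmin(lam * d_spatial + (1 - lam) * d_spectral, axis=0)
print("pixels per Voronoi cell:", np.bincount(labels.ravel()))
```

Lowering lam makes the tessellation follow the spectral boundary more closely; raising it makes the cells more compact and noise-resistant, which is exactly the trade-off the paper's adaptive weighting targets.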

