A Screen Location Method for Treating American Hyphantria cunea Larvae Using Convolutional Neural Network

2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Yan Gao ◽  
Ying Zhao ◽  
Yujie Ji ◽  
Dongjie Zhao ◽  
Chong Wang ◽  
...  

Chemical control is the main approach to managing the American Hyphantria cunea problem; however, it often causes chemical pollution and resource waste. Applying pesticide precisely enough to reduce pollution and waste has long been difficult, and accurate spraying presupposes accurately locating the spray target. In this paper, an algorithm based on a convolutional neural network (CNN) is proposed to locate American Hyphantria cunea screens. First, by comparing grouped convolution across multiple color spaces with grouped convolution within a single color space, the advantage of multi-color-space grouped convolution is demonstrated; RGB and YIQ are then employed to identify the American Hyphantria cunea screen. Moreover, a non-overlapping sliding-window method is proposed to divide the image into multiple candidate boxes and reduce the number of convolutions: the probability of American Hyphantria cunea in each candidate box is estimated by grouped convolution, and two thresholds (E and Q) are set. When the probability is higher than E, the candidate box is judged excellent; when it is lower than Q, unqualified; otherwise, qualified. Unqualified candidate boxes are eliminated, and qualified candidate boxes are reprocessed through the above steps until the number of extraction rounds reaches a set value or no qualified boxes remain. Finally, all excellent candidate boxes are fused to obtain the final recognition result. Experiments show that the recognition rate of this method is higher than 96% and that a single picture is processed in under 150 ms.
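The two-threshold candidate-box loop described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: `predict_prob` stands in for the grouped-convolution CNN, and the optional `refine` hook for whatever reprocessing is applied to qualified boxes between rounds.

```python
def filter_boxes(boxes, predict_prob, E=0.9, Q=0.3, max_rounds=3, refine=None):
    """Keep 'excellent' boxes; re-examine 'qualified' ones up to max_rounds."""
    excellent = []
    pending = list(boxes)
    for _ in range(max_rounds):
        if not pending:
            break
        qualified = []
        for box in pending:
            p = predict_prob(box)
            if p >= E:
                excellent.append(box)   # confident detection
            elif p >= Q:
                qualified.append(box)   # ambiguous: look again next round
            # p < Q: unqualified, discarded
        # optionally adjust ambiguous boxes before retrying them
        pending = [refine(b) for b in qualified] if refine else qualified
    return excellent
```

The excellent boxes returned here would then be fused into the final detection, as the abstract describes.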

2018 ◽  
Vol 13 ◽  
pp. 174830181881332 ◽  
Author(s):  
Liqun Lin ◽  
Weixing Wang ◽  
Bolin Chen

Accurate segmentation of leukocytes is a fundamental and difficult problem because of the non-uniform color and uneven illumination of blood smear images. An improved algorithm based on feature-weight-adaptive K-means clustering for extracting complex leukocytes is proposed. In this paper, the initial clustering centers are chosen according to the histogram distribution of the cell image; this not only improves the clustering effect but also reduces the time complexity of initialization from O(n) to O(1). Prior to white blood cell extraction, the color space is decomposed; color space decomposition and K-means clustering are then combined for image segmentation, and touching white blood cells are further separated with the watershed algorithm. Finally, classification experiments based on a convolutional neural network were performed and compared with other methods, using 368 representative images to evaluate performance. The proposed segmentation method achieves 95.81% segmentation accuracy. The classification accuracy reaches a maximum of 98.96%, and the average classification time is 0.39 s. Compared with existing white blood cell (WBC) algorithms, the convolutional neural network classifier not only shows clear advantages but can also be easily improved.
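One plausible reading of the histogram-based initialization is to take the k most frequent, well-separated gray levels as starting centers. This sketch is an assumption about the mechanism, not the paper's exact procedure; the `min_sep` spacing parameter is hypothetical. Scanning the fixed 256-bin histogram makes center selection O(1) in the pixel count, which matches the complexity claim above.

```python
import numpy as np

def histogram_init_centers(gray_image, k=3, min_sep=20):
    """Pick k initial K-means centers from the gray-level histogram."""
    hist = np.bincount(gray_image.ravel(), minlength=256)
    order = np.argsort(hist)[::-1]          # gray levels by frequency, descending
    centers = []
    for level in order:
        # keep only peaks at least min_sep gray levels apart
        if all(abs(int(level) - c) >= min_sep for c in centers):
            centers.append(int(level))
        if len(centers) == k:
            break
    return sorted(centers)
```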


Author(s):  
Benhui Xia ◽  
Dezhi Han ◽  
Ximing Yin ◽  
Gao Na

To secure cloud computing and outsourced data while meeting the requirements of automation, many intrusion detection schemes based on deep learning have been proposed. Although the detection rate of many network intrusion detection solutions is quite high nowadays, their identification accuracy on imbalanced abnormal network traffic remains low. Therefore, this paper proposes a ResNet- and Inception-based convolutional neural network (RICNN) model for abnormal traffic classification. RICNN learns more traffic features through the Inception unit, and the network degradation problem is eliminated through the direct-mapping unit of ResNet, improving the model's generalization ability. In addition, to simplify the network, an improved version of RICNN is proposed that reduces the number of parameters to be learned without degrading identification accuracy. Experimental results on the CICIDS2017 dataset show that RICNN not only achieves an overall accuracy of 99.386% but also has a high detection rate across categories, especially for small-sample classes. Comparison experiments show that RICNN outperforms a variety of CNN and RNN models in recognition rate and achieves the best detection accuracy.
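The combination of an Inception unit (parallel convolutions of different kernel sizes) with a ResNet direct mapping can be illustrated with a toy 1-D numpy block. This is a minimal sketch of the general idea, not the RICNN architecture itself; the kernels and fusion weights are arbitrary placeholders.

```python
import numpy as np

def inception_residual_block(x, k1=np.array([0.5]),
                             k3=np.array([0.25, 0.5, 0.25]), w=(0.5, 0.5)):
    """Toy block: parallel convolutions of different kernel sizes
    (Inception-style), a 1x1-style weighted fusion of the branches,
    and a ResNet identity shortcut."""
    b1 = np.convolve(x, k1, mode="same")   # small receptive field
    b3 = np.convolve(x, k3, mode="same")   # larger receptive field
    fused = w[0] * b1 + w[1] * b3          # fuse the two branches
    return np.maximum(fused, 0.0) + x      # ReLU, then identity mapping
```

The `+ x` shortcut is what lets gradients bypass the convolutional branches, which is the mechanism behind the degradation-elimination claim in the abstract.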


2020 ◽  
Vol 17 (4) ◽  
pp. 572-578
Author(s):  
Mohammad Parseh ◽  
Mohammad Rahmanimanesh ◽  
Parviz Keshavarzi

Persian handwritten digit recognition is an important topic in image processing that has received significant attention from researchers owing to its many applications. Its most important challenge is the variety of patterns in Persian digit writing, which complicates the feature extraction step. Since handcrafted feature extraction is a complicated process whose performance is not stable, most recent studies have concentrated on automatic feature extraction. In this paper, an automatic machine learning method is proposed for high-level feature extraction from Persian digit images using a Convolutional Neural Network (CNN). A non-linear multi-class Support Vector Machine (SVM) classifier is then used for classification in place of the fully connected final layer of the CNN. The proposed method, applied to the HODA dataset, obtains a recognition rate of 99.56%. The experimental results are comparable with previous state-of-the-art methods.


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Fu-Yan Guo ◽  
Yan-Chao Zhang ◽  
Yue Wang ◽  
Pei-Jun Ren ◽  
Ping Wang

Reciprocating compressors play a vital role in oil, natural gas, and general industrial processes, and their safe, stable operation directly affects the healthy development of the enterprise economy. Since valve failures account for 60% of all reciprocating compressor failures, quickly finding and diagnosing the valve failure type is of great significance for compressor fault diagnosis. At present, reciprocating compressor valve fault diagnosis based on deep neural networks requires sufficient labeled data for training, but valves in real-case reciprocating compressors (VRRC) lack enough labeled data to train a reliable model. Fortunately, data from valves in laboratory reciprocating compressors (VLRC) contain relevant fault diagnosis knowledge. Therefore, inspired by transfer learning, a fault diagnosis method for reciprocating compressor valves based on a transfer learning convolutional neural network (TCNN) is proposed. The method uses a convolutional neural network (CNN) to extract transferable features from VLRC and VRRC gas temperature and pressure data and establishes pseudolabels for the unlabeled VRRC data. Three regularization terms are proposed: the maximum mean discrepancy (MMD) between the transferable features of the VLRC and VRRC data, the error between VLRC sample label predictions and the actual labels, and the error between VRRC sample label predictions and the pseudolabels. Their weighted sum is used as the objective function to train the model, reducing the distribution difference in domain feature transfer and increasing the distance between learned feature classes. Experimental results show that the method identifies the health status of VRRC using VLRC data with a fault recognition rate of 98.32%. Compared with existing methods, it achieves higher diagnostic accuracy, proving its effectiveness.
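The MMD term in the objective measures how far apart the source (VLRC) and target (VRRC) feature distributions are. A minimal Gaussian-kernel version, with an illustrative bandwidth `sigma` and placeholder weights `lam1`, `lam2` for the weighted sum, can be sketched as:

```python
import numpy as np

def gaussian_mmd(Xs, Xt, sigma=1.0):
    """Squared MMD between source and target feature batches (n x d)."""
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return kernel(Xs, Xs).mean() + kernel(Xt, Xt).mean() - 2 * kernel(Xs, Xt).mean()

# The training objective in the abstract is then a weighted sum of three
# terms (lam1, lam2 are hypothetical weights):
#   loss = ce_source_labels + lam1 * ce_target_pseudolabels \
#        + lam2 * gaussian_mmd(features_vlrc, features_vrrc)
```

Minimizing the MMD term pulls the two feature distributions together, which is what makes the laboratory knowledge transfer to the real-case valves.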


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6685
Author(s):  
Pu Yanan ◽  
Yan Jilong ◽  
Zhang Heng

Compared with optical sensors, wearable inertial sensors have many advantages: low cost, small size, a wider application range, no space restrictions or occlusion, better protection of user privacy, and greater suitability for sports applications. This article aims to detect the irregular actions that table tennis enthusiasts perform unknowingly in real situations. We use wearable inertial sensors to collect table tennis action data from professional and non-professional players and extract features from it. We then propose a new method, based on a multi-dimensional feature fusion convolutional neural network, for fine-grained recognition and evaluation of human table tennis actions, thereby enabling auxiliary training. Experimental results show that the average recognition rate of the proposed multi-dimensional feature fusion convolutional neural network is 0.17 and 0.16 higher than that of CNN and Inception-CNN, respectively, on the nine-axis non-professional test set, indicating that it better distinguishes different table tennis actions and generalizes more robustly. On this basis, the goal of auxiliary training for table tennis enthusiasts is better realized.
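To make the "nine-axis" data concrete: each sample window covers accelerometer, gyroscope, and magnetometer readings on three axes each. A simple per-axis feature fusion over such a window might look like the following; the specific statistics are illustrative, since the paper fuses features inside the CNN rather than by hand.

```python
import numpy as np

def fuse_axis_features(window):
    """Fuse simple per-axis statistics over a 9-axis IMU window
    (shape: samples x 9) into one feature vector."""
    feats = [window.mean(axis=0),                     # per-axis mean
             window.std(axis=0),                      # per-axis variability
             window.max(axis=0) - window.min(axis=0)] # per-axis range
    return np.concatenate(feats)                      # 27-dim fused vector
```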


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Yunhui Zhao ◽  
Junkai Xu ◽  
Qisong Chen

An esophageal cancer intelligent diagnosis system is developed to improve the recognition rate of esophageal cancer image diagnosis and the efficiency of physicians, as well as to raise the level of esophageal cancer image diagnosis in primary care institutions. In this paper, by collecting medical images related to esophageal cancer over the years, we establish an intelligent diagnosis system for esophageal cancer images based on a convolutional neural network, through the steps of data annotation, image preprocessing, data augmentation, and deep learning, to assist doctors in diagnosis. The system has been successfully deployed in hospitals and widely praised by frontline doctors. It helps primary care physicians improve the overall accuracy of esophageal cancer diagnosis and reduce the risk of death for esophageal cancer patients. We also observe that the efficacy of radiation therapy for esophageal cancer can be influenced by many factors, and clinical attention should be paid to these factors in order to improve patients' final treatment effect and prognosis.


2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Xueyan Chen ◽  
Xiaofei Zhong

In order to help pathologists quickly locate the lesion area, improve diagnostic efficiency, and reduce missed diagnoses, a convolutional neural network algorithm for optimizing the efficiency of emergency nursing rescue of critical patients is proposed. Specifically, three convolution layers with kernels of different sizes extract features of patients' posture and behavior, and the classifier of the posture-behavior recognition system learns this feature information by capturing the non-linear relationships between features to achieve accurate classification. Tests of feature extraction accuracy, the recognition rate of individual actions, and the average recognition rate of all actions in the body-behavior recognition system show that the convolutional neural network algorithm can greatly improve the efficiency of emergency nursing. The algorithm is applied to a patient posture-behavior detection system to identify and monitor patients and raise the level of intelligent medical care. Finally, the detection system is tested on an open-source framework platform. The experimental results show that the larger the test dataset, the higher the accuracy of posture-behavior feature extraction; the average recognition rate across posture-behavior categories is 97.6%, verifying the effectiveness and correctness of the system and demonstrating that the algorithm substantially improves emergency nursing rescue efficiency.


2021 ◽  
Vol 8 (3) ◽  
pp. 619
Author(s):  
Candra Dewi ◽  
Andri Santoso ◽  
Indriati Indriati ◽  
Nadia Artha Dewi ◽  
Yoke Kusuma Arbawa

The increasing number of people with diabetes is one of the factors behind the high incidence of diabetic retinopathy. One of the images used by ophthalmologists to identify diabetic retinopathy is the retinal photograph. In this research, diabetic retinopathy is identified automatically using retinal fundus images and the Convolutional Neural Network (CNN) algorithm, a variant of deep learning. An obstacle in the recognition process is that the color of the retina tends toward yellowish red, so the RGB color space does not produce optimal accuracy. Therefore, various color spaces were tested to obtain better results. In trials using 1000 images in the RGB, HSI, YUV, and L*a*b* color spaces, results on balanced data were suboptimal, with the best accuracy still below 50%. On unbalanced data, however, accuracy was fairly high: 83.53% on training data in the YUV color space and 74.40% on test data in all color spaces.
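The color-space comparison above hinges on conversions like RGB to YUV, which is a fixed linear transform. A minimal sketch using the standard BT.601 coefficients:

```python
import numpy as np

# BT.601 RGB -> YUV conversion matrix
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])

def rgb_to_yuv(image):
    """image: H x W x 3 float array with RGB channels in [0, 1]."""
    return image @ RGB2YUV.T
```

Y carries luminance while U and V carry chrominance, which is one reason a reddish-yellow retina can separate better in YUV than in raw RGB.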


Fruit grading is a process that affects quality control and enables fruit-processing industries to meet the efficiency demands of production and society. These industries have, however, suffered from a lack of quality-control standards, long grading times, and low product output because of manual methods. To meet the increasing demand for quality fruit products, fruit-processing industries must consider automating their grading process. Several algorithms have been proposed over the years for this purpose, but they relied on color and shape and could not handle large datasets, resulting in low recognition accuracy. To mitigate these flaws, we develop an automated system for grading and classifying apples using a Convolutional Neural Network (CNN), as used in image recognition and classification. Two models were developed from a CNN with ResNet50 as the convolutional base, a process called transfer learning. The first model, the apple checker model (ACM), recognizes the image with two output classes (apple and non-apple), while the apple grader model (AGM) classifies an apple image into four output classes (spoiled, grade A, grade B, and grade C). A comparative evaluation of both models was conducted, and experimental results show that the ACM achieved a test accuracy of 100% while the AGM obtained a recognition rate of 99.89%. The developed system may be employed in food-processing industries and related real-life applications.
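The two-stage pipeline, where the checker model gates the grader model, can be sketched as a simple cascade. The predictor stubs below are hypothetical; in the paper each stage is a ResNet50-based CNN.

```python
GRADES = ("spoiled", "grade A", "grade B", "grade C")

def grade_image(image, acm_predict, agm_predict):
    """Run the grader (AGM) only if the checker (ACM) detects an apple."""
    if acm_predict(image) != "apple":
        return "non-apple"          # AGM is skipped entirely
    label = agm_predict(image)
    assert label in GRADES          # AGM has exactly four output classes
    return label
```

Gating with the cheaper binary checker keeps the four-class grader from ever being asked to grade a non-apple, which is the design rationale implied by the two-model split.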


2020 ◽  
Vol 2020 (9) ◽  
pp. 168-1-168-7
Author(s):  
Roger Gomez Nieto ◽  
Hernan Dario Benitez Restrepo ◽  
Roger Figueroa Quintero ◽  
Alan Bovik

Video Quality Assessment (VQA) is an essential topic in several industries, from video streaming to camera manufacturing. In this paper, we present a novel method for no-reference VQA. The framework is fast and does not require hand-crafted features: we extract convolutional features from a 3-D C3D convolutional neural network and feed them to a trained Support Vector Regressor to obtain a VQA score. We apply transformations to different color spaces to generate more discriminant deep features, and we extract features from several layers, with and without overlap, to find the configuration that best improves the VQA score. We tested the proposed approach on the LIVE-Qualcomm dataset. Extensive evaluation of the perceptual quality prediction model yields a final Pearson correlation of 0.7749 ± 0.0884 with Mean Opinion Scores, showing that the method achieves good video quality prediction and outperforms other state-of-the-art VQA models.
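The figure of merit reported above, Pearson linear correlation between predicted scores and Mean Opinion Scores, is straightforward to compute; a minimal version:

```python
import numpy as np

def pearson(pred, mos):
    """Pearson correlation between predicted quality scores and MOS."""
    p, m = np.asarray(pred, float), np.asarray(mos, float)
    p, m = p - p.mean(), m - m.mean()          # center both series
    return (p @ m) / np.sqrt((p @ p) * (m @ m))
```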

