Identification of Navel Orange Diseases and Pests Based on the Fusion of DenseNet and Self-Attention Mechanism

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Yin’e Zhang ◽  
Yong Ping Liu

The prevention and control of navel orange pests and diseases is an important measure for ensuring navel orange yield. Existing identification methods for navel orange pests and diseases are slow, subjective, demanding of professional knowledge, and costly. To address these problems, this paper proposes an identification method, DCPSNET, that fuses DenseNet with a self-attention mechanism, improving the traditional deep dense network DenseNet model to realize accurate and efficient identification of navel orange diseases and pests. Because data on navel orange pests and diseases are difficult to collect, this article uses image augmentation techniques to expand the data set. The experimental results show that, with small samples, the DCPSNET model can accurately identify different types of navel orange disease and pest images, outperforming traditional models, and its accuracy on the test set for six types of navel orange diseases and pests reaches 96.90%. The proposed method achieves high recognition accuracy, realizes intelligent recognition of navel orange diseases and pests, and also offers a route to high-precision recognition on small-sample data sets.
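The abstract does not list the exact augmentation operations used to expand the small data set; as a hedged illustration, a minimal sketch of expanding a small labeled image set with simple geometric transforms (flips and a 90-degree rotation, assumed here) might look like:

```python
# Sketch of dataset expansion by simple geometric augmentations, as one
# might apply to a small navel-orange disease image set. The specific
# transforms (flips, 90-degree rotation) are illustrative assumptions.

def hflip(img):
    """Mirror a 2D grayscale image left-right."""
    return [row[::-1] for row in img]

def vflip(img):
    """Mirror a 2D grayscale image top-bottom."""
    return img[::-1]

def rot90(img):
    """Rotate a 2D grayscale image 90 degrees counter-clockwise."""
    return [list(col) for col in zip(*img)][::-1]

def augment(dataset):
    """Expand each (image, label) pair into four variants."""
    out = []
    for img, label in dataset:
        for variant in (img, hflip(img), vflip(img), rot90(img)):
            out.append((variant, label))
    return out

tiny = [([[1, 2], [3, 4]], "citrus_canker")]  # hypothetical class name
expanded = augment(tiny)
print(len(expanded))  # 4 variants per original image
```

Each original image yields four training examples, a common way to soften the small-sample problem before any model-side changes.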

2021 ◽  
Vol 2068 (1) ◽  
pp. 012025
Author(s):  
Jian Zheng ◽  
Zhaoni Li ◽  
Jiang Li ◽  
Hongling Liu

Abstract It is difficult to detect anomalies in big data using traditional methods, because big data is massive and disordered. Common methods divide big data into several small samples and then analyze the divided samples. However, this increases the complexity of the segmentation algorithms, and the risk introduced by data segmentation is difficult to control. To address this, we propose a neural network approach based on the Vapnik risk model. First, the sample data are randomly divided into small data blocks. Then, a neural network learns from these divided small sample data blocks. To reduce the risk introduced by data segmentation, the Vapnik risk model is used to supervise the segmentation. Finally, the proposed method is verified on historical electricity price data from Mountain View, California. The results show that our method is effective.
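The abstract does not give the exact form of the risk model; a minimal sketch, assuming the standard Vapnik VC bound (empirical risk plus a capacity term) is used to score candidate block sizes, could be:

```python
import math

def vapnik_bound(emp_risk, n, h, eta=0.05):
    """Vapnik-style guaranteed risk: empirical risk plus a capacity
    term that grows with VC dimension h and shrinks with block size n
    (standard VC bound form; the paper's exact model is an assumption)."""
    cap = math.sqrt((h * (math.log(2 * n / h) + 1) - math.log(eta / 4)) / n)
    return emp_risk + cap

def choose_block_size(emp_risk, sizes, h=10):
    """Pick the block size whose risk bound is smallest, a hypothetical
    way to supervise the data segmentation step."""
    return min(sizes, key=lambda n: vapnik_bound(emp_risk, n, h))

# For a fixed empirical risk, larger blocks give a tighter capacity term.
best = choose_block_size(0.1, [50, 200, 1000])
print(best)  # 1000
```

The point of the sketch is only the trade-off: finer segmentation simplifies learning per block but inflates the capacity term, so the bound can arbitrate block size.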


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Lei Wang ◽  
Qian Li ◽  
Jin Qin

Fault diagnosis and detection have become important in modern production because of the importance of rotating equipment. Artificial neural network pattern recognition methods are widely used in rotating equipment fault detection, but they typically need a large quantity of sample data to train the model, and fault samples in particular are scarce in engineering practice. Preliminary work focuses on dimensionality reduction for big data sets using semisupervised methods. The rotary machine's polar coordinate signal is used to build a GAN network structure, and an ANN with small samples is used to identify faults in the DCGAN model. A time-conditional generative adversarial network (TCGAN) is proposed for one-dimensional vibration signal defect identification under data imbalance. Auxiliary samples are gathered under similar conditions, and CNNs learn the target sample characteristics; convolutional neural networks thus address defect identification with small samples in several ways. In high-dimensional, nonlinear data sets, low fault-type recognition rates and scarce labeled fault samples can be addressed with kernel semisupervised local Fisher discriminant analysis: the SELF method builds the optimal projection transformation matrix from the data set, after which a KNN classifier learns the low-dimensional features and identifies the fault type. Because DCGAN training is unstable and its results can be unreliable, an improved deep convolutional generative adversarial network (IDCGAN) is proposed. The tests indicate that the IDCGAN generates more realistic samples and mitigates the problem of defect identification with small samples. Data augmentation with the time-conditional GAN lowers the fault diagnosis effort and the deep learning model's complexity. Combining the TCGAN and a CNN provides superior fault detection under data imbalance; modeling and experiments demonstrate TCGAN's applicability and superiority.
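As one small concrete piece of the pipeline described above, the KNN classification stage that follows dimensionality reduction can be sketched; the 2-D features below are invented stand-ins for SELF-projected data, not anything from the paper:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote of its k nearest training
    points (Euclidean distance). `train` is a list of
    (feature_vector, label) pairs, here standing in for the
    low-dimensional features a SELF-style projection would yield."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Hypothetical projected features for two bearing conditions.
train = [((0.0, 0.1), "normal"), ((0.1, 0.0), "normal"),
         ((0.9, 1.0), "inner-race fault"), ((1.0, 0.9), "inner-race fault")]
print(knn_predict(train, (0.95, 0.95)))  # inner-race fault
```

KNN needs no training phase of its own, which is part of why it pairs well with a learned projection on small labeled sets.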


2021 ◽  
Vol 2030 (1) ◽  
pp. 012048
Author(s):  
Meng Zhou ◽  
Zhigang Lv ◽  
Ye Li ◽  
RuoHai Di ◽  
Hongjie Zhu ◽  
...  

Author(s):  
Chih-Cheng Chen ◽  
Zhen Liu ◽  
Guangsong Yang ◽  
Chia-Chun Wu ◽  
Qiubo Ye

The diagnosis of rolling bearings for status monitoring is critical to maintaining industrial equipment that uses them. Traditional methods of diagnosing rolling bearing faults have low identification accuracy and need artificial feature extraction to improve it. A 1D-CNN can diagnose bearing faults accurately while overcoming the shortcomings of the traditional methods. Unlike other machine learning and deep learning models, the 1D-CNN method does not need to pre-process the one-dimensional vibration data of the rolling bearing, which enhances processing speed, and its network structure can be designed appropriately for small sample data sets. This study proposes and tests a 1D-CNN method for diagnosing rolling bearings. By introducing the dropout operation, the method obtains high accuracy and improves generalization. The experimental results show an average accuracy of 99.52% under a single load and 98.26% under varying loads.
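A hedged sketch of the two operations the abstract highlights, 1D convolution over raw vibration samples and (inverted) dropout; all sizes and values are illustrative, not the paper's architecture:

```python
import random

def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation, as in CNN layers)
    applied directly to a raw vibration signal, no pre-processing."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def dropout(xs, p=0.5, training=True, rng=random):
    """Inverted dropout: zero each activation with probability p during
    training and rescale survivors by 1/(1-p); identity at inference."""
    if not training:
        return list(xs)
    return [0.0 if rng.random() < p else x / (1 - p) for x in xs]

raw = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0]        # toy vibration samples
feat = conv1d(raw, [0.5, 0.5])               # illustrative smoothing kernel
feat = dropout(feat, p=0.5, training=False)  # inference mode: no dropping
print(feat)
```

Dropout only randomizes during training, which is how it regularizes a network trained on a small sample set without changing inference behavior.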


2001 ◽  
Vol 2 (1) ◽  
pp. 28-34 ◽  
Author(s):  
Edward R. Dougherty

In order to study the molecular biological differences between normal and diseased tissues, it is desirable to perform classification among diseases and stages of disease using microarray-based gene-expression values. Owing to the limited number of microarrays typically used in these studies, serious issues arise with respect to the design, performance and analysis of classifiers based on microarray data. This paper reviews some fundamental issues facing small-sample classification: classification rules, constrained classifiers, error estimation and feature selection. It discusses both unconstrained and constrained classifier design from sample data, and the contributions to classifier error from constrained optimization and lack of optimality owing to design from sample data. The difficulty with estimating classifier error when confined to small samples is addressed, particularly estimating the error from training data. The impact of small samples on the ability to include more than a few variables as classifier features is explained.
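As an illustration of the training-data error estimation issue discussed above, the following sketch contrasts resubstitution (optimistically biased) with leave-one-out estimation for a toy nearest-mean classifier; the data and classifier are invented for illustration, not taken from the review:

```python
def train_means(data):
    """Fit a nearest-mean classifier: one mean per class (1-D features)."""
    groups = {}
    for x, y in data:
        groups.setdefault(y, []).append(x)
    return {c: sum(v) / len(v) for c, v in groups.items()}

def classify(means, x):
    """Assign x to the class with the nearest mean."""
    return min(means, key=lambda c: abs(means[c] - x))

def resubstitution_error(data):
    """Error on the same data used for training: optimistically biased."""
    means = train_means(data)
    return sum(classify(means, x) != y for x, y in data) / len(data)

def loo_error(data):
    """Leave-one-out: retrain with each point held out; nearly unbiased
    but high-variance on small samples."""
    errs = 0
    for i, (x, y) in enumerate(data):
        means = train_means(data[:i] + data[i + 1:])
        errs += classify(means, x) != y
    return errs / len(data)

data = [(0.0, "normal"), (0.2, "normal"),
        (0.9, "tumor"), (1.1, "tumor"), (0.45, "tumor")]
print(resubstitution_error(data), loo_error(data))
```

With only five samples, both estimates are themselves noisy, which is precisely the small-sample predicament the review analyzes.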


Author(s):  
Biu O. Emmanuel ◽  
Nwakuya T. Maureen ◽  
Nduka Wonu

The paper examines five tests of data normality at different sample sizes: the Shapiro-Wilk (SW) test, Anderson-Darling (AD) test, Kolmogorov-Smirnov (KS) test, Ryan-Joiner (RJ) test, and Jarque-Bera (JB) test. These tests were applied to two secondary data sets, one large (155) and one small (40), and then to simulated standard normal N(0,1) data sets, with large samples of sizes (150, 140, 130, 130, 110 and 100) and small samples of sizes (40, 35, 30, 25, 20, 15 and 10), considered at two levels of significance (5% and 10%). The aim of the paper is to detect and compare the performance of the different normality tests considered. The results show that for large samples the Kolmogorov-Smirnov (KS) test is the most powerful of the tests, since it detects that the simulated large sample data sets do not follow a normal distribution at the 5% level, while for small sample sizes the Jarque-Bera (JB) test is the most powerful, since it detects that the simulated small sample data do not follow a normal distribution at the 5% level. The paper recommends the JB test for normality testing when the sample size is small and the KS test when the sample size is large, at the 5% level of significance.
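The Jarque-Bera statistic recommended for small samples is computed directly from sample skewness and kurtosis; a minimal sketch, using the usual asymptotic chi-square(2) critical value of about 5.99 at the 5% level:

```python
def jarque_bera(xs):
    """Jarque-Bera statistic: n/6 * (S^2 + (K - 3)^2 / 4), where S is
    sample skewness and K sample kurtosis. Under normality it is
    asymptotically chi-square with 2 degrees of freedom."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)

# Reject normality at the 5% level when JB exceeds roughly 5.99.
symmetric = [-2, -1, 0, 1, 2]
print(jarque_bera(symmetric) < 5.99)  # True: no evidence against normality
```

Note that the chi-square approximation is itself asymptotic, which is one reason the paper's power comparison at small n is informative.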


CONVERTER ◽  
2021 ◽  
pp. 359-372
Author(s):  
Jun Zhang ◽  
Xiaohong Peng ◽  
Zixiang Liang ◽  
Rongfa Chen ◽  
Zhao Li

Objectives: In underwater target recognition, whether seabed image data are acquired by a simulation robot or manually, the sampling cost is high, the sample data obtained are limited, the image quality is poor, and the data available for training are small. Methods: To address this problem, this paper improves an algorithm based on YOLOv4 by modifying its feature extraction backbone network, and proposes three YOLOv4 variants based on different MobileNet backbone networks to test underwater target recognition with small samples. Real seabed images are used as the original training data, and data different from the training set are used for prediction. Results: Compared with the original YOLOv4 algorithm under the same conditions, the MobilenetV1_YOLOV4 algorithm achieves the best mAP (86.04%) and FPS (52); in addition, histogram equalization is used to enhance the images, which serves as a further supplementary recognition of missed targets and reduces the miss rate. Conclusions: The algorithm balances light weight and accuracy, and provides support for underwater target recognition in marine operation development and aquaculture.
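The histogram equalization step used for supplementary enhancement can be sketched for a grayscale image as follows; this is a generic textbook implementation, not the paper's code:

```python
def equalize(img, levels=256):
    """Histogram-equalize a 2D grayscale image: map each intensity
    through the normalized cumulative histogram so the output spreads
    over the full dynamic range, raising contrast in murky images."""
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [[lut[p] for p in row] for row in img]

# A low-contrast patch (values clustered in 100..103) is stretched
# toward the full 0..255 range.
patch = [[100, 100, 101, 101], [102, 102, 103, 103]]
out = equalize(patch)
print(min(min(r) for r in out), max(max(r) for r in out))  # 0 255
```

Stretching contrast this way can make low-visibility seabed targets easier for a detector to pick up on a second pass.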


1994 ◽  
Vol 33 (02) ◽  
pp. 180-186 ◽  
Author(s):  
H. Brenner ◽  
O. Gefeller

Abstract:The traditional concept of describing the validity of a diagnostic test neglects the presence of chance agreement between test result and true (disease) status. Sensitivity and specificity, as the fundamental measures of validity, can thus only be considered in conjunction with each other to provide an appropriate basis for the evaluation of the capacity of the test to discriminate truly diseased from truly undiseased subjects. In this paper, chance-corrected analogues of sensitivity and specificity are presented as supplemental measures of validity, which pay attention to the problem of chance agreement and offer the opportunity to be interpreted separately. While recent proposals of chance-correction techniques, suggested by several authors in this context, lead to measures which are dependent on disease prevalence, our method does not share this major disadvantage. We discuss the extension of the conventional ROC-curve approach to chance-corrected measures of sensitivity and specificity. Furthermore, point and asymptotic interval estimates of the parameters of interest are derived under different sampling frameworks for validation studies. The small sample behavior of the estimates is investigated in a simulation study, leading to a logarithmic modification of the interval estimate in order to hold the nominal confidence level for small samples.
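The paper's exact chance-corrected estimators are not reproduced here; as a hedged illustration, one generic kappa-style correction of sensitivity and specificity, (observed - chance)/(1 - chance) with chance taken as the overall rate of the corresponding test result, can be sketched as:

```python
def validity_measures(tp, fp, fn, tn):
    """Sensitivity/specificity from a 2x2 validation table, plus
    kappa-style chance-corrected analogues. This is one generic
    correction, not necessarily the paper's estimator (the paper's
    measures are notably designed to be prevalence-independent)."""
    n = tp + fp + fn + tn
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    p_pos = (tp + fp) / n   # chance of a positive test result
    p_neg = (fn + tn) / n   # chance of a negative test result
    se_corr = (se - p_pos) / (1 - p_pos)
    sp_corr = (sp - p_neg) / (1 - p_neg)
    return se, sp, se_corr, sp_corr

# Hypothetical validation counts.
se, sp, se_c, sp_c = validity_measures(tp=45, fp=10, fn=5, tn=40)
print(round(se, 3), round(se_c, 3))  # correction lowers the raw value
```

The correction discounts the agreement a random test with the same positivity rate would achieve, which is the general problem the paper formalizes.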


Author(s):  
Y. Arockia Suganthi ◽  
Chitra K. ◽  
J. Magelin Mary

Dengue fever is a painful mosquito-borne infection caused by different types of virus in various localities of the world. There is no specific medicine or vaccine to treat a person suffering from dengue fever. Dengue viruses are transmitted by the bite of female Aedes (Ae) mosquitoes, which are active in tropical and subtropical climates. Controlling Aedes aegypti is the key step in preventing transmission of the infection and saving millions of people all over the world. This paper provides a standard guideline for planning dengue prevention and control measures, and at the same time sets out priorities, including the clinical management of hospitalized dengue patients, that must essentially be addressed.


2017 ◽  
Vol 4 (1) ◽  
pp. 41-52
Author(s):  
Dedy Loebis

This paper presents the results of work undertaken to develop and test contrasting data analysis approaches for the detection of bursts/leaks and other anomalies within water supply systems at district meter area (DMA) level. This was conducted on Yorkshire Water (YW) sample data sets from the Harrogate and Dales (H&D), Yorkshire, United Kingdom water supply network as part of Project NEPTUNE (EP/E003192/1). A data analysis system based on Kalman filtering and a statistical approach has been developed and applied to the analysis of flow and pressure data. The system was proved on one dataset case and has shown the ability to detect anomalies in flow and pressure patterns by correlating with other information. It will be shown that the Kalman/statistical approach is promising at detecting subtle changes and higher-frequency features; it has the potential to identify precursor features and smaller leaks, and hence could be useful for monitoring the development of leaks prior to a large-volume burst event.
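A minimal sketch of the Kalman/statistical idea: a scalar constant-level Kalman filter tracks a flow signal and flags observations whose innovation exceeds a few standard deviations. All noise parameters here are assumed for illustration, not the project's tuned values:

```python
def kalman_anomalies(obs, q=0.01, r=1.0, threshold=3.0):
    """Track a scalar flow signal with a constant-level Kalman filter
    and flag time steps whose innovation (observation minus prediction)
    exceeds `threshold` standard deviations; q and r are the assumed
    process and measurement noise variances."""
    x, p = obs[0], 1.0
    flagged = []
    for t, z in enumerate(obs[1:], start=1):
        p += q                        # predict: state unchanged, variance grows
        s = p + r                     # innovation variance
        innov = z - x
        if abs(innov) > threshold * s ** 0.5:
            flagged.append(t)         # candidate burst/leak or sensor fault
        k = p / s                     # Kalman gain
        x += k * innov                # update state estimate
        p *= (1 - k)                  # update variance
    return flagged

flow = [10.0] * 20 + [25.0] + [10.0] * 5   # sudden spike at t = 20
print(kalman_anomalies(flow))  # [20]
```

Because the filter adapts its estimate after each observation, a one-off spike is flagged once rather than repeatedly, which suits precursor and small-leak monitoring.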

