Adaptive-Cognitive Kalman Filter and Neural Network for an Upgraded Nondispersive Thermopile Device to Detect and Analyze Fusarium Spores

Sensors ◽  
2019 ◽  
Vol 19 (22) ◽  
pp. 4900
Author(s):  
Son Pham ◽  
Anh Dinh

Noise such as thermal noise, background noise, or burst noise can reduce the reliability and confidence of measurement devices. In this work, a recursive and adaptive Kalman filter is proposed to detect and process burst noise (outliers) and thermal noise, which are common in electrical and electronic devices. The Kalman filter and a neural network are used to preprocess data from the three detectors of a nondispersive thermopile device used to detect and quantify Fusarium spores. The detectors are broadband (1 µm to 20 µm), λ1 (6.09 ± 0.06 µm) and λ2 (9.49 ± 0.44 µm) thermopiles. Additionally, an artificial neural network (NN) is applied to process background noise effects. The adaptive and cognitive Kalman filter helps to improve the training time of the neural network and the absolute error of the thermopile data. Without the Kalman filter, it took 12 min 09 s to train the NN for the λ1 thermopile and reach an absolute error of 2.7453 × 10⁴ (n.u.); with the Kalman filter, it took 46 s to train the NN to reach an absolute error of 1.4374 × 10⁴ (n.u.). Similarly, for the λ2 (9.49 ± 0.44 µm) thermopile, the training time improved from 9 min 13 s to 1 min, and the absolute error from 2.3999 × 10⁵ (n.u.) to 1.76485 × 10⁵ (n.u.). The three-thermopile system has shown that adding the broadband thermopile improves the reliability of Fusarium spore detection. The method developed in this work can be employed for devices that encounter similar noise problems.
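The outlier-rejecting Kalman filter described above can be sketched as a scalar filter with an innovation gate: measurements whose innovation exceeds a few standard deviations of the predicted innovation variance are treated as burst noise and skipped. This is a minimal illustration of the general technique, not the authors' adaptive-cognitive implementation; the random-walk state model and the gate threshold are assumptions.

```python
import math

def kalman_outlier_filter(zs, q=1e-4, r=0.01, gate=3.0):
    """Scalar Kalman filter with an innovation gate for burst noise.

    q: process-noise variance (random-walk state model, an assumption)
    r: measurement-noise variance (thermal noise)
    gate: reject a measurement whose innovation exceeds
          gate * sqrt(innovation variance), treating it as an outlier.
    """
    x, p = zs[0], 1.0              # initial state estimate and variance
    out = [x]
    for z in zs[1:]:
        p = p + q                  # predict step
        s = p + r                  # innovation variance
        innov = z - x
        if abs(innov) > gate * math.sqrt(s):
            out.append(x)          # outlier: keep the prediction, skip update
            continue
        k = p / s                  # Kalman gain
        x = x + k * innov          # update state
        p = (1.0 - k) * p          # update variance
        out.append(x)
    return out

# A near-constant signal with thermal noise and one burst-noise spike:
zs = [1.0, 1.02, 0.98, 1.01, 5.0, 0.99, 1.0, 1.03]
est = kalman_outlier_filter(zs)
```

The spike at index 4 is rejected by the gate, so the filtered output stays near the true level instead of following the burst.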

2021 ◽  
Vol 12 (4) ◽  
pp. 178
Author(s):  
Gilles Van Kriekinge ◽  
Cedric De Cauwer ◽  
Nikolaos Sapountzoglou ◽  
Thierry Coosemans ◽  
Maarten Messagie

The increasing penetration rate of electric vehicles, associated with a growing charging demand, could have a negative impact on the electric grid, such as higher peak power demand. To support the electric grid and anticipate those peaks, there is growing interest in forecasting the day-ahead charging demand of electric vehicles. This paper proposes the enhancement of a state-of-the-art deep neural network to forecast the day-ahead charging demand of electric vehicles with a time resolution of 15 min. In particular, new features have been added to the neural network in order to improve the forecast. The forecaster is applied to an important use case: the local charging site of a hospital. The results show that the mean absolute error (MAE) and root mean square error (RMSE) are reduced by 28.8% and 19.22%, respectively, thanks to the use of calendar and weather features. The main achievement of this research is the ability to forecast a highly stochastic aggregated EV charging demand on a day-ahead horizon with an MAE lower than 1 kW.
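Calendar features of the kind added to the forecaster can be sketched as simple transforms of a session timestamp. The exact feature set below (day-of-week one-hot, weekend flag, normalized 15-min slot index) is illustrative, not the paper's published feature list, and weather inputs are omitted.

```python
from datetime import datetime

def calendar_features(ts):
    """Illustrative calendar features for a 15-min-resolution forecaster:
    a 7-element one-hot day-of-week vector, a weekend flag, and the
    15-min slot index within the day scaled to [0, 1]."""
    slot = ts.hour * 4 + ts.minute // 15                 # 0..95
    dow = [1.0 if ts.weekday() == d else 0.0 for d in range(7)]
    weekend = 1.0 if ts.weekday() >= 5 else 0.0
    return dow + [weekend, slot / 95.0]

feats = calendar_features(datetime(2021, 1, 4, 8, 30))   # a Monday morning
```

Such features let the network distinguish weekday commuting peaks from weekend demand at each 15-min slot.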


Author(s):  
T.K. Biryukova

Classic neural networks assume that the trainable parameters include only the weights of the neurons. This paper proposes parabolic integrodifferential splines (ID-splines), developed by the author, as a new kind of activation function (AF) for neural networks, in which the ID-spline coefficients are also trainable parameters. The parameters of the ID-spline AF vary together with the neuron weights during training in order to minimize the loss function, thus reducing the training time and increasing the operation speed of the neural network. The newly developed algorithm enables software implementation of the ID-spline AF as a tool for neural network construction, training and operation. It is proposed to use the same ID-spline AF for all neurons in a layer, but different AFs for different layers. In this case, the parameters of the ID-spline AF for a particular layer change during training independently of the activation functions (AFs) of the other network layers. In order to satisfy the continuity condition for the derivative of the parabolic ID-spline on the interval (x_0, x_n), its parameters f_i (i = 0, ..., n) should be calculated from a tridiagonal system of linear algebraic equations. To solve the system, two more equations arising from the boundary conditions of the specific problem are needed; for example, the values of the grid function (if they are known) at the points x_0 and x_n may be used: f_0 = f(x_0), f_n = f(x_n). The parameters I_{i,i+1} (i = 0, ..., n−1) are used as trainable parameters of the neural network. The grid boundaries and the spacing of the nodes of the ID-spline AF are best chosen experimentally; an optimal selection of grid nodes improves the quality of the results produced by the neural network. The formula for a parabolic ID-spline is such that the computational complexity does not depend on whether the grid of nodes is uniform or non-uniform.
An experimental comparison was carried out on image classification of the popular FashionMNIST dataset by convolutional neural networks with ID-spline AFs versus the well-known ReLU AF, ReLU(x) = 0 for x < 0 and ReLU(x) = x for x ≥ 0. The results reveal that the ID-spline AFs provide better accuracy of neural network operation than the ReLU AF. The training time of a network with two convolutional layers and two ID-spline AFs is only about 2 times longer than with two instances of the ReLU AF. Doubling the training time, due to the complexity of the ID-spline formula, is an acceptable price for the significantly better accuracy of the network, while the difference in operation speed between networks with ID-spline and ReLU AFs is negligible. The use of trainable ID-spline AFs makes it possible to simplify the architecture of neural networks without losing efficiency. Modifying well-known neural networks (ResNet, etc.) by replacing traditional AFs with ID-spline AFs is a promising approach to increasing neural network accuracy. In most cases, such a substitution does not require training the network from scratch, because the neuron weights pre-trained on large datasets and supplied by standard neural network libraries can be reused, substantially shortening the training time.
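The trainable-spline-activation idea can be sketched with a simpler stand-in: a piecewise-linear spline on a fixed knot grid, whose node values play the role of the trainable spline coefficients. This is a deliberate simplification of the paper's parabolic ID-spline (quadratic pieces with continuous derivatives); only the grid/trainable-coefficient structure is the same.

```python
def spline_activation(x, knots, values):
    """Piecewise-linear spline activation on a fixed knot grid.

    `values` are the per-node outputs; in a trainable-AF setting they
    would be updated by gradient descent alongside the neuron weights.
    Inputs outside the grid are clamped to the boundary values.
    """
    if x <= knots[0]:
        return values[0]
    if x >= knots[-1]:
        return values[-1]
    for i in range(len(knots) - 1):
        if knots[i] <= x <= knots[i + 1]:
            t = (x - knots[i]) / (knots[i + 1] - knots[i])
            return (1 - t) * values[i] + t * values[i + 1]

# With node values max(0, knot), the spline reproduces ReLU on the grid:
knots = [-2.0, -1.0, 0.0, 1.0, 2.0]
relu_vals = [max(0.0, k) for k in knots]
```

Initializing the trainable values to an existing AF (as above) is one way to reuse pre-trained weights and then let training deform the activation shape.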


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Fereshteh Mataeimoghadam ◽  
M. A. Hakim Newton ◽  
Abdollah Dehzangi ◽  
Abdul Karim ◽  
B. Jayaram ◽  
...  

Abstract Protein structure prediction is a grand challenge. Prediction of protein structures via representations based on backbone dihedral angles has recently achieved significant progress, alongside the ongoing surge of deep neural network (DNN) research in general. However, we observe an overall trend in protein backbone angle prediction research to employ ever more complex neural networks and to feed them ever more features. While more features might add predictive power, we argue that redundant features can clutter the scenario, and the more complex networks then merely counterbalance the noise. From artificial intelligence and machine learning perspectives, problem representations and solution approaches interact and thus affect performance. We also argue that comparatively simpler predictors can be reconstructed more easily than complex ones. With these arguments in mind, we present a deep learning method named Simpler Angle Predictor (SAP) to train simpler DNN models that enhance protein backbone angle prediction. We then empirically show that SAP can significantly outperform existing state-of-the-art methods on well-known benchmark datasets: for some types of angles, the differences are 6–8 in terms of mean absolute error (MAE). The SAP program along with its data is available from the website https://gitlab.com/mahnewton/sap.


2011 ◽  
Vol 402 ◽  
pp. 476-479
Author(s):  
Wei Wang ◽  
Zhi Hui Xu ◽  
Long Long Yang ◽  
Zheng Liang Xue ◽  
Dong Nan Zhao ◽  
...  

Micum strength is an important indicator of sinter quality; a BP artificial neural network model is built to predict the Micum drum strength of sinter. The neural network uses the main factors that influence the drum strength of sinter as input data, and the output is the Micum strength. Experimental results show that the maximum absolute error between the Micum strength predicted by the neural network and the real value from the sinter plant is 0.3346, and the average absolute error is 0.1154, which proves that the prediction is accurate. In addition, because of the "black box" characteristic of the neural network model, it cannot by itself reveal how the various factors affect the Micum strength of the sinter ore; this paper therefore also uses the model to analyze how TFe and SiO2 content affect the Micum strength. The results are not only consistent with sintering theory but also verify the validity of the model.


2011 ◽  
Vol 287-290 ◽  
pp. 1112-1115
Author(s):  
Jun Hong Zhang

In order to reduce the coke consumption of a blast furnace (BF), a relevance analysis is carried out for the operation parameters and fuel rate of the BF, and a prediction method for the coke rate combining clustering analysis and an artificial neural network is proposed. The data are divided into several classes by clustering analysis, so that the data within each class are highly similar, and a neural network model is then used to predict the coke rate. By combining the neural network with clustering analysis, data from one BF are simulated, and the results are compared with a traditional neural network model. The results show that the improved neural network has higher accuracy: the average absolute error is decreased by 3.13 kg/t and the average relative error by 5.19%, indicating good application prospects.
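The cluster-then-model idea above can be sketched as follows. A per-cluster mean stands in for the per-cluster neural network, the 1-D feature and the fixed centroids are illustrative assumptions, and the data are invented for demonstration only.

```python
def cluster_then_predict(train_x, train_y, test_x, centroids):
    """Sketch of cluster-then-model prediction: assign each sample to its
    nearest centroid, then predict with a per-cluster model (here the
    per-cluster mean of the training targets, standing in for a
    per-cluster neural network)."""
    def nearest(x):
        return min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))

    # Fit one trivial "model" (the target mean) per cluster.
    sums = [0.0] * len(centroids)
    counts = [0] * len(centroids)
    for x, y in zip(train_x, train_y):
        c = nearest(x)
        sums[c] += y
        counts[c] += 1
    means = [s / c if c else 0.0 for s, c in zip(sums, counts)]

    # Predict with the model of the cluster each test point falls in.
    return [means[nearest(x)] for x in test_x]

# Hypothetical operating data: two operating regimes around x = 0 and x = 10.
preds = cluster_then_predict([0, 1, 9, 10], [500, 510, 540, 550],
                             [0.5, 9.5], centroids=[0, 10])
```

Partitioning first means each sub-model only has to fit one operating regime, which is the source of the accuracy gain reported above.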


2020 ◽  
Vol 2 (1) ◽  
pp. 29-36
Author(s):  
M. I. Zghoba ◽  
Yu. I. Hrytsiuk
The peculiarities of neural network training for forecasting taxi passenger demand using graphics processing units are considered, which made it possible to speed up the training procedure for different sets of input data and hardware configurations of varying power. Taxi services are becoming accessible to an ever wider range of people. The most important tasks for any transportation company and taxi driver are to minimize the waiting time for new orders and to minimize the distance from drivers to passengers when an order is received. Understanding and assessing the geographical passenger demand, which depends on many factors, is crucial to achieving this goal. This paper describes an example of neural network training for predicting taxi passenger demand and shows the importance of a large input dataset for the accuracy of the neural network. Since training a neural network is a lengthy process, parallel training was used to speed it up. The neural network for forecasting taxi passenger demand was trained using different hardware configurations: one CPU, one GPU, and two GPUs. The training times for one epoch were compared across these configurations, and the impact of the different hardware configurations on training time was analyzed. The network was trained on a dataset containing 4.5 million trips within one city. The results of this study show that training with GPU accelerators does not necessarily improve the training time; the training time depends on many factors, such as input dataset size, the splitting of the dataset into smaller subsets, and the hardware and power characteristics.


MATEMATIKA ◽  
2019 ◽  
Vol 35 (4) ◽  
pp. 53-64
Author(s):  
Siti Nabilah Syuhada Abdullah ◽  
Ani Shabri ◽  
Ruhaidah Samsudin

Since rice is a staple food in Malaysia, its price fluctuations pose risks to producers, suppliers and consumers. Hence, an accurate prediction of paddy price is essential to aid planning and decision-making in the related organizations. The artificial neural network (ANN) has been widely used as a promising method for time series forecasting. In this paper, the effectiveness of integrating empirical mode decomposition (EMD) into an ANN model to forecast paddy price is investigated. The hybrid method is applied to a series of monthly paddy prices from February 1999 to May 2018, recorded in Malaysian Ringgit (MYR) per metric ton. The performance of the simple ANN model and the EMD-ANN model was measured and compared based on the root mean squared error (RMSE), mean absolute error (MAE) and mean percentage error (MPE). This study finds that integrating EMD into the neural network model improves its forecasting capability: the forecast errors were reduced significantly, with the RMSE reduced by 0.012, the MAE by 0.0002 and the MPE by 0.0448.
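The three error measures used to compare the ANN and EMD-ANN models can be computed directly from the actual and predicted series. Note that conventions for MPE vary slightly between papers (sign and scaling); the definition below is one common form and is an assumption, not necessarily the exact formula used in this study.

```python
import math

def forecast_errors(actual, predicted):
    """Return (RMSE, MAE, MPE) for paired actual/predicted series.

    RMSE: root mean squared error.
    MAE:  mean absolute error.
    MPE:  mean percentage error, here 100 * mean((a - p) / a);
          sign conventions differ across the literature.
    """
    n = len(actual)
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    mpe = sum((a - p) / a for a, p in zip(actual, predicted)) / n * 100
    return rmse, mae, mpe

rmse, mae, mpe = forecast_errors([100.0, 200.0], [90.0, 210.0])
```

Unlike RMSE and MAE, MPE lets positive and negative errors cancel, so it measures forecast bias rather than magnitude.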


2020 ◽  
Vol 2020 ◽  
pp. 1-6 ◽  
Author(s):  
Ghassane Benrhmach ◽  
Khalil Namir ◽  
Abdelwahed Namir ◽  
Jamal Bouyaghroumni

Time series analysis and prediction are major scientific challenges that find applications in fields as diverse as finance, biology, economics, and meteorology. Finding the method with the least prediction error is one of the difficult problems facing financial market and investment analysts. State space modelling is an efficient and flexible method for statistical inference on a broad class of time series and other data. The neural network is an important tool for analyzing time series, especially when they are nonlinear and nonstationary. Essential tools from the Box-Jenkins methodology, neural networks, and the extended Kalman filter were put together. We examine the use of the nonlinear autoregressive neural network as a prediction technique for financial time series and the application of the extended Kalman filter algorithm to improve the accuracy of the model. As an application to a real example, we analyze the time series of the daily price of steel over a 790-day period to establish the superiority of this method over other existing methods. The simulation results, obtained using MATLAB and R, show that the model is capable of producing reasonable accuracy.
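The extended Kalman filter linearizes nonlinear state and measurement maps around the current estimate via their derivatives. A single predict/update cycle in the scalar case can be sketched as below; the maps f and h are placeholders, not the model used in the paper.

```python
def ekf_step(x, p, z, f, f_prime, h, h_prime, q, r):
    """One predict/update cycle of a scalar extended Kalman filter.

    x, p: current state estimate and its variance
    z:    new measurement
    f, h: nonlinear state-transition and measurement functions
    f_prime, h_prime: their derivatives (the 1-D Jacobians)
    q, r: process- and measurement-noise variances
    """
    # Predict: propagate the state, linearize f for the covariance.
    x_pred = f(x)
    F = f_prime(x)
    p_pred = F * p * F + q
    # Update: linearize h around the prediction, fold in the measurement.
    H = h_prime(x_pred)
    s = H * p_pred * H + r            # innovation variance
    k = p_pred * H / s                # Kalman gain
    x_new = x_pred + k * (z - h(x_pred))
    p_new = (1.0 - k * H) * p_pred
    return x_new, p_new

# With identity f and h this reduces to the ordinary linear Kalman filter:
x, p = ekf_step(0.0, 1.0, 1.0,
                f=lambda x: x, f_prime=lambda x: 1.0,
                h=lambda x: x, h_prime=lambda x: 1.0,
                q=0.0, r=1.0)
```

In the hybrid scheme described above, the neural network supplies the nonlinear prediction while the EKF recursively corrects it against incoming observations.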

