A New Feedforward Neural Network Structural Learning Algorithm—Augmentation by Training With Residuals

1995 ◽  
Vol 117 (3) ◽  
pp. 411-415 ◽  
Author(s):  
C. James Li ◽  
Taehee Kim

A fully automatic feedforward neural network structural and weight learning algorithm is described. The algorithm, Augmentation by Training with Residuals (ATR), requires the user to supply neither initial weight values nor the number of neurons in the hidden layer. It takes an incremental approach in which a hidden neuron is trained to model the mapping between the input and output of the current exemplars and is then augmented to the existing network. The exemplars are made orthogonal to the newly identified hidden neuron and used for training the next hidden neuron. The improvement continues until a desired accuracy is reached. This new structural and weight learning algorithm is applied to the identification of a two-degree-of-freedom planar robot, a Van der Pol oscillator, and a Mackey-Glass equation. The algorithm is shown to be effective in modeling all three systems and is far superior to a linear modeling scheme in the case of the robot.
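The incremental idea can be sketched as follows. This is a minimal toy in the spirit of ATR, not the authors' exact algorithm: each new hidden neuron is fitted (by plain gradient descent, an assumption here) to the residual left by the current network, then frozen and appended, and the residual is reduced by the neuron's contribution rather than by the paper's orthogonalization step.

```python
# Sketch of residual-driven incremental network growth (assumed details:
# tanh units, gradient descent per unit, toy 1-D target).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0])                       # toy target mapping

neurons = []                                  # frozen (w, b, v) hidden units
residual = y.copy()

def unit_out(w, b, X):
    return np.tanh(X @ w + b)

for k in range(10):                           # grow up to 10 hidden neurons
    w = rng.normal(size=X.shape[1]); b = rng.normal(); v = 0.0
    for _ in range(500):                      # train this unit on the residual
        h = unit_out(w, b, X)
        err = residual - v * h
        gv = -np.mean(err * h)                # grad of 0.5*mean(err^2) wrt v
        common = -err * v * (1 - h**2)
        gw = X.T @ common / len(X)            # grad wrt input weights
        gb = np.mean(common)                  # grad wrt bias
        v -= 0.5 * gv; w -= 0.5 * gw; b -= 0.5 * gb
    neurons.append((w, b, v))                 # augment the network
    residual = residual - v * unit_out(w, b, X)
    if np.mean(residual**2) < 1e-4:           # stop at desired accuracy
        break

print(len(neurons), np.mean(residual**2))
```

Each pass leaves a smaller residual for the next neuron to model, which is what makes the growth automatic: the loop, not the user, decides how many hidden neurons the network needs.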

2018 ◽  
Vol 54 (3A) ◽  
pp. 91
Author(s):  
Huynh Trung Hieu

This study presents an approach for glucose correction in handheld devices by reducing the effects of hematocrit. The hematocrit values are estimated from the transduced current curves produced during the chemical reactions of the glucose measurement process in the handheld devices. The hematocrit estimation is performed with a single-hidden-layer feedforward neural network trained by a non-iterative learning algorithm. The experimental results show that the proposed approach can improve the accuracy of glucose measurement with handheld devices.
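A non-iterative single-hidden-layer scheme of the kind described can be sketched in the extreme-learning-machine style (the paper's exact network and current-curve features are not given here, so the data below is a stand-in): the hidden weights are random and fixed, and only the output weights are solved in closed form, with no iterative training at all.

```python
# Hedged sketch: ELM-style non-iterative training of a single-hidden-layer
# feedforward network (toy stand-in features and target).
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (300, 4))               # stand-in for curve features
t = X @ np.array([0.5, -1.2, 0.3, 2.0]) + 0.1 * rng.normal(size=300)

L = 50                                        # hidden neurons
W = rng.normal(size=(X.shape[1], L))          # random, fixed input weights
b = rng.normal(size=L)
H = np.tanh(X @ W + b)                        # hidden-layer output matrix

beta = np.linalg.pinv(H) @ t                  # one-shot least-squares solve

pred = H @ beta
print(np.mean((pred - t)**2))
```

The single pseudoinverse solve replaces the whole gradient-descent loop, which is why such networks train quickly enough for embedded, handheld use.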


2019 ◽  
Vol 13 ◽  
pp. 302-309
Author(s):  
Jakub Basiakowski

The following paper presents the results of research on the impact of machine learning on the construction of a voice-controlled interface. Two different models were used for the analysis: a feedforward neural network containing one hidden layer and a more complex convolutional neural network. In addition, the applied models were compared in terms of quality and the course of training.


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Jingyi Liu ◽  
Xinxin Liu ◽  
Chongmin Liu ◽  
Ba Tuan Le ◽  
Dong Xiao

Extreme learning machine (ELM) was originally proposed for training the single-hidden-layer feedforward neural network, to overcome the challenges faced by the backpropagation (BP) learning algorithm and its variants. Recent studies show that ELM can be extended to the multilayered feedforward neural network, in which a hidden node may itself be a subnetwork of nodes or a combination of other hidden nodes. Although the multilayer ELM (MELM) algorithm shows stronger nonlinear expression ability and stability than single-hidden-layer ELM in both theoretical and experimental results, deepening the network structure also aggravates the parameter-optimization problem, which usually requires more time for model selection and increases the computational complexity. This paper uses a Cholesky factorization strategy and Givens rotation transformations to choose the hidden nodes of MELM and to obtain a number of nodes better suited to the network. The initial network starts with a large number of hidden nodes, ridge regression is then used to prune them, and a complete neural network is finally obtained. The algorithm thus eliminates the need to set the number of nodes manually and achieves complete automation. By reusing information from the previous generation's connection weight matrix, recalculation of the weight matrix during network simplification can be avoided. As in other matrix factorization methods, the Cholesky factor is updated by Givens rotations to achieve a fast downdating of the current connection weight matrix, ensuring the numerical stability and high efficiency of the pruning process.
Empirical studies on several commonly used classification benchmarks and on real datasets collected from the coal industry show that, compared with the traditional ELM algorithm, the pruning multilayered ELM algorithm proposed in this paper can find the optimal number of hidden nodes automatically and has better generalization performance.
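The prune-then-resolve idea can be illustrated on a single layer (a simplification: the paper's Givens-based downdating of the Cholesky factor and its multilayer structure are not reproduced here): start with many hidden nodes, solve a ridge regression through a Cholesky factorization, drop the nodes with the smallest output weights, and re-solve on the surviving columns.

```python
# Hedged single-layer sketch of ridge-regression pruning of ELM hidden
# nodes via Cholesky solves (toy data; node count 60 is an assumption).
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (400, 3))
t = np.sin(X[:, 0]) + X[:, 1]**2              # toy regression target

def ridge_solve(H, t, lam=1e-3):
    """Solve (H^T H + lam I) beta = H^T t via Cholesky: A = L L^T."""
    A = H.T @ H + lam * np.eye(H.shape[1])
    L = np.linalg.cholesky(A)
    z = np.linalg.solve(L, H.T @ t)           # forward substitution
    return np.linalg.solve(L.T, z)            # back substitution

L0 = 200                                      # oversized initial hidden layer
W = rng.normal(size=(X.shape[1], L0)); b = rng.normal(size=L0)
H = np.tanh(X @ W + b)

beta = ridge_solve(H, t)
keep = np.argsort(np.abs(beta))[-60:]         # keep the 60 strongest nodes
beta2 = ridge_solve(H[:, keep], t)            # re-solve on survivors

print(L0, len(keep), np.mean((H[:, keep] @ beta2 - t)**2))
```

In the paper's version, the second solve would not be computed from scratch: removing a column corresponds to a downdate of the existing Cholesky factor, carried out with Givens rotations, which is where the claimed efficiency comes from.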


2016 ◽  
Vol 5 (4) ◽  
pp. 126 ◽  
Author(s):  
I MADE DWI UDAYANA PUTRA ◽  
G. K. GANDHIADI ◽  
LUH PUTU IDA HARINI

Weather information plays an important role in many fields of human life, such as agriculture, marine operations, and aviation. Accurate weather forecasts are needed to improve performance in these fields. This study uses an artificial neural network with the backpropagation learning algorithm to create a weather forecasting model for the South Bali area. The aim is to determine the effect of the number of neurons in the hidden layer and the level of accuracy the method achieves in weather forecast models. The forecast models take as input the factors that influence the weather: air temperature, dew point, wind speed, visibility, and barometric pressure. Testing the network with different numbers of neurons in the hidden layer shows that increasing the number of hidden neurons is not directly proportional to forecast accuracy; adding neurons does not necessarily increase or decrease it. The best accuracy obtained is 51.6129%, for a network model with three neurons in the hidden layer.
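A network of the shape described, one hidden layer with three neurons trained by backpropagation on five weather-style inputs, can be sketched as below. The data, label rule, learning rate, and iteration count are all toy assumptions; only the architecture follows the study.

```python
# Hedged sketch: 5-input, 3-hidden-neuron network trained by plain
# backpropagation on a toy binary "forecast" task.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (150, 5))               # temp, dew pt, wind, visibility, pressure (toy)
y = (X @ np.array([0.3, 0.2, -0.4, 0.1, 0.5]) > 0.35).astype(float)

def sigmoid(z): return 1 / (1 + np.exp(-z))

H_N = 3                                       # neurons in the hidden layer
W1 = rng.normal(scale=0.5, size=(5, H_N)); b1 = np.zeros(H_N)
W2 = rng.normal(scale=0.5, size=H_N);      b2 = 0.0

lr = 1.0
for _ in range(3000):
    h = sigmoid(X @ W1 + b1)                  # forward pass
    p = sigmoid(h @ W2 + b2)
    d2 = (p - y) / len(X)                     # backprop of mean cross-entropy
    gW2 = h.T @ d2; gb2 = d2.sum()
    d1 = np.outer(d2, W2) * h * (1 - h)       # error pushed through hidden layer
    gW1 = X.T @ d1; gb1 = d1.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

acc = np.mean((p > 0.5) == (y > 0.5))
print(acc)
```

The study's point survives even in the toy: accuracy depends on the interplay of data and training, not monotonically on `H_N`, so the hidden-layer size has to be chosen empirically.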


2019 ◽  
Vol 10 (37) ◽  
pp. 31-44
Author(s):  
Engin Kandıran ◽  
Avadis Hacınlıyan

Artificial neural networks are commonly accepted as a very successful tool for global function approximation. For this reason, they are considered a good approach to forecasting chaotic time series in many studies. For a given time series, the Lyapunov exponent is a good parameter for characterizing the series as chaotic or not. In this study, we use three different neural network architectures to test the capabilities of neural networks in forecasting time series generated from different dynamical systems. In addition to forecasting the time series, the Lyapunov exponents of the studied systems are forecasted using the feedforward neural network with a single hidden layer.
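How a Lyapunov exponent characterizes a series as chaotic can be shown on a toy system (the paper works with its own dynamical systems; the logistic map below is just the simplest example with a known answer): for a 1-D map, the largest exponent is the long-run average of ln|f'(x_n)|, and a positive value signals sensitive dependence on initial conditions.

```python
# Largest Lyapunov exponent of the logistic map x -> r*x*(1-x) at r = 4,
# where the known analytic value is ln 2 ≈ 0.693 (positive, hence chaotic).
import numpy as np

r, x = 4.0, 0.2
for _ in range(1000):                         # discard the transient
    x = r * x * (1 - x)

acc, n = 0.0, 100_000
for _ in range(n):
    x = r * x * (1 - x)
    acc += np.log(abs(r * (1 - 2 * x)))       # ln|f'(x)| along the orbit

lam = acc / n
print(lam)                                    # close to ln 2, so chaotic
```

For a measured series where the map is unknown, the derivative has to be estimated from neighboring trajectories instead, which is the harder setting the paper addresses.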


Author(s):  
Untari Novia Wisesty

Eye state detection is one of various tasks toward a Brain-Computer Interface system; the eye state can be read in brain signals. This paper uses the EEG Eye State dataset (Rosler, 2013) from the UCI Machine Learning Repository. The dataset consists of 14 continuous EEG measurements over 117 seconds. The eye states were marked as "1" or "0": "1" indicates the eye-closed state and "0" the eye-open state. The proposed scheme uses a multilayer neural network with the Levenberg-Marquardt optimization learning algorithm as the classification method. The Levenberg-Marquardt method is used to optimize the learning algorithm of the neural network because the standard algorithm has a weak convergence rate and needs many iterations to reach minimum error. Based on analysis of the experiments on the EEG dataset, it can be concluded that the proposed scheme can detect the eye state. The best accuracy is obtained from the combination of a sigmoid function, data normalization, and 31 hidden neurons (95.71%) for one hidden layer, and 98.912% for two hidden layers with 39 and 47 neurons and a linear function.
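The Levenberg-Marquardt step that gives the faster convergence can be sketched on a toy curve fit (not the paper's EEG network; the model and data here are assumptions): the update solves (JᵀJ + μI)d = Jᵀe, interpolating between Gauss-Newton (small μ) and gradient descent (large μ), and μ is adapted by whether a step actually reduces the error.

```python
# Hedged sketch of Levenberg-Marquardt on a toy model y = a*exp(b*x).
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 50)
y = 2.0 * np.exp(-1.5 * x) + 0.01 * rng.normal(size=50)

p = np.array([1.0, 0.0])                      # initial guess for (a, b)
mu = 1e-2                                     # damping parameter
for _ in range(100):
    a, b = p
    e = y - a * np.exp(b * x)                 # residuals
    J = np.column_stack([np.exp(b * x),       # d f / d a
                         a * x * np.exp(b * x)])  # d f / d b
    d = np.linalg.solve(J.T @ J + mu * np.eye(2), J.T @ e)
    a2, b2 = p + d
    e2 = y - a2 * np.exp(b2 * x)
    if e2 @ e2 < e @ e:                       # step helped: trust model more
        p, mu = p + d, mu * 0.5
    else:                                     # step hurt: damp harder
        mu *= 2.0

print(p)                                      # near the true (2.0, -1.5)
```

In network training, e is the vector of output errors and J the Jacobian of outputs with respect to all weights, so each step is costlier than a backpropagation step but typically far fewer steps are needed, which matches the paper's motivation.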


Symmetry ◽  
2019 ◽  
Vol 11 (2) ◽  
pp. 147 ◽  
Author(s):  
Jun Ye ◽  
Wenhua Cui

Neural networks are powerful universal approximation tools. They have been utilized for function/data approximation, classification, and pattern recognition, as well as in various applications. Uncertain or interval values result from the incompleteness of measurements, human observation, and estimation in the real world. A neutrosophic number (NsN) can represent both certain and uncertain information in an indeterminate setting and implies a changeable interval depending on its indeterminate ranges. However, existing interval neural networks cannot deal with uncertain problems involving NsNs. Therefore, this original study proposes a neutrosophic compound orthogonal neural network (NCONN) for the first time, containing NsN weight values, NsN input and output, and hidden-layer neutrosophic neuron functions, to approximate neutrosophic functions/NsN data. In the proposed NCONN model, single input and single output neurons are the transmission nodes of NsN data, and the hidden-layer neutrosophic neurons are constructed from compound functions of both the Chebyshev neutrosophic orthogonal polynomial and the neutrosophic sigmoid function. In addition, illustrative and actual examples are provided to verify the effectiveness and learning performance of the proposed NCONN model for approximating neutrosophic nonlinear functions and NsN data. The contribution of this study is that the proposed NCONN can handle the approximation problems of neutrosophic nonlinear functions and NsN data. Its main advantage is that it implies a simple learning algorithm, faster learning convergence, and higher learning accuracy in indeterminate/NsN environments.
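The compound hidden unit can be illustrated in a crisp-number setting (a simplification: the NsN/interval arithmetic that is the actual contribution of NCONN is omitted here, and the least-squares fit of the output weights is an assumed stand-in for its learning rule): each hidden neuron composes a Chebyshev orthogonal polynomial T_j with a sigmoid, and a linear output layer combines them.

```python
# Crisp sketch of compound Chebyshev + sigmoid hidden units with a
# least-squares-fitted linear output layer (toy target function).
import numpy as np

def cheb(x, n):
    """Chebyshev polynomials T_0..T_{n-1} via T_{k+1} = 2x*T_k - T_{k-1}."""
    T = [np.ones_like(x), x]
    for _ in range(2, n):
        T.append(2 * x * T[-1] - T[-2])
    return np.column_stack(T[:n])

def sigmoid(z): return 1 / (1 + np.exp(-z))

x = np.linspace(-1, 1, 200)
y = np.exp(x) * np.sin(2 * x)                 # toy function to approximate

H = sigmoid(cheb(x, 8))                       # compound hidden-neuron outputs
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # fit output weights

print(np.mean((H @ beta - y)**2))
```

The orthogonality of the Chebyshev basis keeps the hidden outputs well conditioned, which is what makes the simple one-shot fit of the output weights viable; in the NCONN itself the same structure is carried through with neutrosophic numbers in place of crisp ones.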

