A Novel Learning Scheme for Chebyshev Functional Link Neural Networks

2011 · Vol 2011 · pp. 1-10
Author(s):  
Satchidananda Dehuri

A hybrid learning scheme (ePSO-BP) for training the Chebyshev Functional Link Neural Network (CFLNN) for classification is presented. The proposed method is referred to as the hybrid CFLNN (HCFLNN). The HCFLNN is a type of feed-forward neural network with the ability to transform the nonlinear input space into a higher-dimensional space where linear separability is possible. Moreover, the proposed HCFLNN combines the best attributes of particle swarm optimization (PSO), back-propagation learning (BP learning), and functional link neural networks (FLNNs). The proposed method eliminates the need for a hidden layer by expanding the input patterns using Chebyshev orthogonal polynomials. We demonstrate its effectiveness in classifying unknown patterns on publicly available datasets from the UCI repository. The computational results are then compared with those of a functional link neural network (FLNN) with generic basis functions, a PSO-based FLNN, and EFLN. The comparative study shows that the HCFLNN outperforms the FLNN, the PSO-based FLNN, and EFLN in terms of classification accuracy.
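The core idea of replacing the hidden layer with a Chebyshev expansion can be sketched as follows. This is a minimal illustration (the function name and the expansion order are illustrative, not from the paper): each input feature, assumed scaled to [-1, 1], is mapped through the Chebyshev polynomials T_0..T_n via the standard recurrence, and the enlarged feature vector is then fed to a single linear layer.

```python
import numpy as np

def chebyshev_expand(x, order=4):
    """Expand each feature with Chebyshev polynomials T_0..T_order.

    Recurrence: T_0(x) = 1, T_1(x) = x, T_n(x) = 2x*T_{n-1}(x) - T_{n-2}(x).
    Inputs are assumed scaled to [-1, 1].
    """
    x = np.asarray(x, dtype=float)
    terms = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        terms.append(2.0 * x * terms[-1] - terms[-2])
    # Flatten the expansions of all features into one feature vector.
    return np.concatenate([t.ravel() for t in terms[:order + 1]])
```

A single weight vector on this expanded input then plays the role the hidden layer plays in an MLP.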

Author(s):  
Satchidananda Dehuri ◽  
Sung-Bae Cho

In this chapter, the primary focus is a theoretical and empirical study of functional link neural networks (FLNNs) for classification. We present a hybrid Chebyshev functional link neural network (cFLNN) without a hidden layer, trained with evolvable particle swarm optimization (ePSO), for classification. The resulting classifier is then used to assign the proper class label to an unknown sample. The hybrid cFLNN is a type of feed-forward neural network with the ability to transform the non-linear input space into a higher-dimensional space where linear separability is possible. In particular, the proposed hybrid cFLNN combines the best attributes of evolvable particle swarm optimization (ePSO), back-propagation learning (BP learning), and Chebyshev functional link neural networks (CFLNN). We demonstrate its effectiveness in classifying unknown patterns on datasets obtained from the UCI repository. The computational results are then compared with those of other higher-order neural networks (HONNs): a functional link neural network with generic basis functions, the Pi-Sigma neural network (PSNN), the radial basis function neural network (RBFNN), and the ridge polynomial neural network (RPNN).
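The PSO component tunes the network's weight vectors as particles in weight space. Below is a minimal sketch of one canonical PSO update step (the inertia and acceleration constants are common textbook defaults, not the chapter's ePSO variant or its parameter values):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One particle-swarm update over a swarm of candidate weight vectors.

    positions, velocities, pbest: (n_particles, n_weights) arrays.
    gbest: (n_weights,) best weight vector found by any particle so far.
    """
    r1 = rng.random(positions.shape)
    r2 = rng.random(positions.shape)
    velocities = (w * velocities
                  + c1 * r1 * (pbest - positions)   # pull toward personal best
                  + c2 * r2 * (gbest - positions))  # pull toward global best
    return positions + velocities, velocities
```

Each particle's fitness would be the classification error of the cFLNN using that particle's weights; the "evolvable" part of ePSO adapts the search beyond this fixed-parameter sketch.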


Author(s):  
Asma Elyounsi ◽  
Hatem Tlijani ◽  
Mohamed Salim Bouhlel

Traditional neural networks are very diverse and have been used over recent decades for data classification. Networks such as the MLP, back-propagation neural networks (BPNN), and feed-forward networks have shown an inability to scale with problem size and suffer from slow convergence. To overcome these drawbacks, higher-order neural networks (HONNs) offer a solution: they add input units, strengthen the functioning of the other neural units in the network, and easily transform these input units into hidden layers. In this paper, a metaheuristic method, the Firefly Algorithm (FFA), is applied to compute the optimal weights of the Functional Link Artificial Neural Network (FLANN) by exploiting the flashing behaviour of fireflies, in order to classify ISA-Radar targets. The average classification accuracy of FLANN-FFA, which reached 96%, shows the efficiency of the approach compared to the other tested methods.
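The firefly move that drives this weight search can be sketched in its standard textbook form (parameter values here are illustrative, not the paper's): a dimmer firefly moves toward a brighter one with an attractiveness that decays with distance, plus a small random perturbation.

```python
import numpy as np

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    """Move firefly xi toward brighter firefly xj (standard FFA update).

    Here a 'firefly' is a candidate FLANN weight vector; brightness would
    be its classification fitness.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    r2 = np.sum((xi - xj) ** 2)
    beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
    return xi + beta * (xj - xi) + alpha * (rng.random(xi.shape) - 0.5)
```

Iterating this move over a population, always attracting toward brighter (fitter) weight vectors, yields the optimized FLANN weights.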


Author(s):  
Tutut Herawan ◽  
Yana Mazwin Mohmad Hassim ◽  
Rozaida Ghazali

The Functional Link Neural Network (FLNN) has emerged as an important tool for solving non-linear classification problems and has been successfully applied to many engineering and scientific problems. The FLNN structure is much more modest than that of an ordinary feed-forward network such as the Multilayer Perceptron (MLP), owing to its flat architecture, which employs fewer tunable weights for training. However, the standard Backpropagation (BP) learning used for FLNN training is prone to getting trapped in local minima, which degrades FLNN classification performance. To remedy this drawback of BP learning, this paper proposes an Artificial Bee Colony (ABC) optimization with a modification of the bee foraging behaviour (mABC) as an alternative learning scheme for the FLNN. This is motivated by the good exploration and exploitation capabilities in searching for optimal weight parameters exhibited by the ABC algorithm. The classification accuracy of the FLNN with mABC (FLNN-mABC) is compared with the original FLNN architecture trained with standard Backpropagation (FLNN-BP) and with the standard ABC algorithm (FLNN-ABC). The FLNN-mABC algorithm provides a better learning scheme for the FLNN, with an average overall improvement of 4.29% compared to FLNN-BP and FLNN-ABC.
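For reference, the employed-bee step of standard ABC, which mABC modifies, can be sketched as follows. This shows only the baseline foraging move (the paper's mABC modification is not reproduced here): a food source (a candidate FLNN weight vector) is perturbed in one randomly chosen dimension relative to a random partner source.

```python
import numpy as np

def abc_candidate(x, partner, rng):
    """Employed-bee neighbourhood search of standard ABC.

    x, partner: candidate solutions (weight vectors); the new candidate
    differs from x in at most one randomly chosen dimension.
    """
    k = rng.integers(len(x))            # dimension to perturb
    phi = rng.uniform(-1.0, 1.0)        # random step factor in [-1, 1]
    cand = x.copy()
    cand[k] = x[k] + phi * (x[k] - partner[k])
    return cand
```

The candidate replaces the original source only if its fitness (here, FLNN classification accuracy) is better; sources that fail to improve for too long are abandoned by scout bees.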


Author(s):  
M. HARLY ◽  
I. N. SUTANTRA ◽  
H. P. MAURIDHI

Fixed-order neural networks (FONN), such as the high-order neural network (HONN), whose architecture is built from zero-order activation functions and joint weights, regulate only the number of weights and their values. As a result, such a network produces only a fixed-order model or control level. This obstacle has left preceding architectures with limited ability to adapt to the uncertain character of real-world plants, such as driving dynamics and their desired control performance. This paper introduces a new concept of neural-network neuron, in which a discrete z-function is exploited to build a new neuron activation. Instead of zero-order joint weight matrices, a discrete z-function weight matrix is provided to model uncertain or undetermined real-world plants and desired adaptive control systems whose order may be changing. Instead of a bias, an initial-condition value is used. A neural network built from the new neurons is called a Varied-Order Neural Network (VONN). For the optimization process, a GA updates the order, coefficients, and initial value of the node activation function, while the joint weights are updated with both back-propagation (combined LSE-Gauss-Newton) and NPSO. To estimate the number of hidden-layer nodes, constructive back-propagation (CBP) was also applied. Thorough simulation was conducted to compare the control performance of the FONN and the VONN. For control, vehicle stability was equipped with an electronic stability program (ESP), electronic four-wheel steering (4-EWS), and active suspension (AS). Datasets of 2000, 4000, 6000, and 8000 samples from TODS, with one hidden layer, 3 input nodes, and 3 output nodes, were used to train and test the network for both the uncertainty model and its adaptive control system.
The simulation results show that stability parameters such as yaw-rate error, vehicle side-slip error, and rolling-angle error achieve better control performance, in the form of a smaller performance index, using the VONN than using the FONN.


2018 · Vol 7 (2.13) · pp. 402
Author(s):  
Y Yusmartato ◽  
Zulkarnain Lubis ◽  
Solly Arza ◽  
Zulfadli Pelawi ◽  
A Armansah ◽  
...  

Lockers are one of the facilities people use to store their belongings. Artificial neural networks are computational systems whose architecture and operation are inspired by knowledge of biological neurons in the brain; they are an artificial representation of the human brain that attempts to simulate its learning process. One application of artificial neural networks is pattern recognition. Every person's face is distinct yet sometimes resembles the faces of others, which makes facial patterns a good test case for recognition with artificial neural networks. Pattern recognition with an artificial neural network can be performed with the back-propagation method, whose network consists of an input layer, a hidden layer, and an output layer.
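The three-layer back-propagation structure described above can be sketched as a single training step. This is a generic textbook formulation with sigmoid activations (layer sizes and the learning rate are illustrative, not from the article):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, y, W1, W2, lr=0.5):
    """One back-propagation step for input -> hidden -> output.

    W1: (n_hidden, n_in) weights, W2: (n_out, n_hidden) weights.
    Weights are updated in place; returns the output before the update.
    """
    h = sigmoid(W1 @ x)                  # hidden-layer activations
    o = sigmoid(W2 @ h)                  # output-layer activations
    delta_o = (o - y) * o * (1 - o)      # output error signal
    delta_h = (W2.T @ delta_o) * h * (1 - h)  # error propagated back
    W2 -= lr * np.outer(delta_o, h)
    W1 -= lr * np.outer(delta_h, x)
    return o
```

Repeating this step over many face-pattern examples drives the output toward the desired class labels.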


Author(s):  
Qingsong Xu

Extreme learning machine (ELM) is a learning algorithm for single-hidden layer feedforward neural networks. In theory, this algorithm is able to provide good generalization capability at extremely fast learning speed. Comparative studies of benchmark function approximation problems revealed that ELM can learn thousands of times faster than conventional neural network (NN) and can produce good generalization performance in most cases. Unfortunately, the research on damage localization using ELM is limited in the literature. In this chapter, the ELM is extended to the domain of damage localization of plate structures. Its effectiveness in comparison with typical neural networks such as back-propagation neural network (BPNN) and least squares support vector machine (LSSVM) is illustrated through experimental studies. Comparative investigations in terms of learning time and localization accuracy are carried out in detail. It is shown that ELM paves a new way in the domain of plate structure health monitoring. Both advantages and disadvantages of using ELM are discussed.
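The reason ELM trains so fast is visible in a minimal sketch: the hidden layer is assigned random, untrained weights, and only the output weights are solved in one shot by least squares via the Moore-Penrose pseudoinverse. Function names and sizes below are illustrative, not from the chapter:

```python
import numpy as np

def elm_train(X, y, n_hidden=20, rng=None):
    """Train a single-hidden-layer ELM.

    Input weights W and biases b are random and never updated; the only
    trained parameters are the output weights beta, found by least squares.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)               # random hidden-layer feature matrix
    beta = np.linalg.pinv(H) @ y         # one-shot least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

There is no iterative error back-propagation at all, which is why ELM can be orders of magnitude faster to train than a conventional NN on the same data.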


Author(s):  
William C. Carpenter ◽  
Margery E. Hoffman

This paper examines the architecture of back-propagation neural networks used as approximators by addressing the interrelationship between the number of training pairs and the number of input, output, and hidden layer nodes required for a good approximation. It concentrates on nets with an input layer, one hidden layer, and one output layer. It shows that many of the currently proposed schemes for selecting network architecture for such nets are deficient. It demonstrates in numerous examples that overdetermined neural networks tend to give good approximations over a region of interest, while underdetermined networks give approximations which can satisfy the training pairs but may give poor approximations over that region of interest. A scheme is presented that adjusts the number of hidden layer nodes in a neural network so as to give an overdetermined approximation. The advantages and disadvantages of using multiple output nodes are discussed. Guidelines for selecting the number of output nodes are presented.
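The overdetermined/underdetermined distinction comes down to counting: the training pairs supply equations, and the network's weights and biases are the unknowns. A small helper makes the check concrete (the counting convention, one bias per hidden and output node, is a common one and an assumption here, not necessarily the paper's exact bookkeeping):

```python
def is_overdetermined(n_in, n_hidden, n_out, n_pairs):
    """True when training data give at least as many equations as unknowns.

    Unknowns: weights plus biases of a one-hidden-layer net.
    Equations: one per output node per training pair.
    """
    n_unknowns = (n_in + 1) * n_hidden + (n_hidden + 1) * n_out
    n_equations = n_pairs * n_out
    return n_equations >= n_unknowns
```

In this framing, the paper's scheme amounts to shrinking the hidden layer until the inequality holds, so the fit is constrained over the whole region of interest rather than only at the training pairs.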

