Weight Optimization in Recurrent Neural Networks with Hybrid Metaheuristic Cuckoo Search Techniques for Data Classification

2015 ◽  
Vol 2015 ◽  
pp. 1-12 ◽  
Author(s):  
Nazri Mohd Nawi ◽  
Abdullah Khan ◽  
M. Z. Rehman ◽  
Haruna Chiroma ◽  
Tutut Herawan

Recurrent neural networks (RNNs) have been widely used as a tool in data classification. These networks can be trained with gradient descent back propagation. However, traditional training algorithms have some drawbacks, such as slow convergence and no guarantee of finding the global minimum of the error function, since gradient descent may get stuck in local minima. As a solution, nature-inspired metaheuristic algorithms provide derivative-free means of optimizing complex problems. This paper proposes training the Elman recurrent network (ERN) and the back propagation Elman recurrent network (BPERN) with the Cuckoo Search (CS) metaheuristic, which is based on the cuckoo bird's brood behavior, in order to achieve a fast convergence rate and avoid the local minima problem. The proposed CSERN and CSBPERN algorithms are compared with the artificial bee colony algorithm using BP and with other hybrid variants on selected benchmark classification problems. The simulation results show that the computational efficiency of the ERN and BPERN training process is greatly enhanced when coupled with the proposed hybrid method.
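
The abstract gives no implementation details, so the following is a minimal Python/NumPy sketch of the general idea: Cuckoo Search exploring flat weight vectors for a small Elman-style network via Levy flights, greedy replacement, and abandonment of the worst nests. All names (unpack, fitness, cuckoo_train), the network size, and the two-step recurrence are illustrative assumptions, not the paper's code.

import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(0)

def unpack(theta, n_in, n_hid, n_out):
    # Split a flat parameter vector into Elman-style weight matrices.
    a = n_in * n_hid
    b = a + n_hid * n_hid
    return (theta[:a].reshape(n_in, n_hid),
            theta[a:b].reshape(n_hid, n_hid),
            theta[b:].reshape(n_hid, n_out))

def fitness(theta, X, y, n_hid=5):
    # Classification error (MSE) of the network encoded by theta.
    Wxh, Whh, Why = unpack(theta, X.shape[1], n_hid, 1)
    h = np.tanh(X @ Wxh)              # step 1: context units start at zero
    h = np.tanh(X @ Wxh + h @ Whh)    # step 2: context feeds back through Whh
    p = 1.0 / (1.0 + np.exp(-(h @ Why)))
    return np.mean((p.ravel() - y) ** 2)

def levy(size, beta=1.5):
    # Mantegna's algorithm for Levy-distributed flight lengths.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, size) / np.abs(rng.normal(0, 1, size)) ** (1 / beta)

def cuckoo_train(X, y, dim, n_nests=15, pa=0.25, alpha=0.01, iters=200):
    nests = rng.normal(0, 0.5, (n_nests, dim))
    fit = np.array([fitness(n, X, y) for n in nests])
    for _ in range(iters):
        best = nests[fit.argmin()]
        for i in range(n_nests):
            cand = nests[i] + alpha * levy(dim) * (nests[i] - best)  # Levy flight
            f = fitness(cand, X, y)
            if f < fit[i]:                       # greedy replacement
                nests[i], fit[i] = cand, f
        worst = fit.argsort()[-max(1, int(pa * n_nests)):]  # abandon worst nests
        nests[worst] = rng.normal(0, 0.5, (len(worst), dim))
        fit[worst] = [fitness(n, X, y) for n in nests[worst]]
    return nests[fit.argmin()], fit.min()

# Example call, with n_hid = 5 and one output unit:
# theta, err = cuckoo_train(X, y, dim=X.shape[1] * 5 + 5 * 5 + 5 * 1)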

2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Jiao-hong Yi ◽  
Wei-hong Xu ◽  
Yuan-tao Chen

The traditional Back Propagation (BP) algorithm has some significant disadvantages, such as slow training, a tendency to fall into local minima, and sensitivity to the initial weights and biases. In order to overcome these shortcomings, an improved BP network optimized by Cuckoo Search (CS), called CSBP, is proposed in this paper. In CSBP, CS is used to simultaneously optimize the initial weights and biases of the BP network. The wine dataset is used to study the prediction performance of CSBP, and the proposed method is compared with basic BP and the General Regression Neural Network (GRNN). Moreover, a parameter study of CSBP is conducted in order to determine the settings under which it performs best.
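
As a companion to the sketch above, here is a hedged outline of the second stage of a CSBP-style pipeline: ordinary gradient-descent BP on a one-hidden-layer network that starts from a weight/bias vector theta0 found by Cuckoo Search. The layout of theta0, the function name bp_train, and all hyperparameter values are assumptions; the CS stage could reuse a routine like the cuckoo_train sketched earlier.

import numpy as np

def bp_train(theta0, X, y, n_hid=6, lr=0.5, epochs=1000):
    # Stage 2 of CSBP: plain gradient-descent BP on a one-hidden-layer
    # network, starting from the CS-optimized initial parameters theta0.
    n_in = X.shape[1]
    a = n_in * n_hid
    W1 = theta0[:a].reshape(n_in, n_hid).copy()
    b1 = theta0[a:a + n_hid].copy()
    W2 = theta0[a + n_hid:a + 2 * n_hid].reshape(n_hid, 1).copy()
    b2 = float(theta0[-1])
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                    # hidden layer
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output
        # Gradients of the mean squared error, via the chain rule.
        d_out = (p - y[:, None]) * p * (1 - p) * (2 / len(y))
        d_hid = (d_out @ W2.T) * (1 - h ** 2)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum()
        W1 -= lr * X.T @ d_hid; b1 -= lr * d_hid.sum(axis=0)
    return W1, b1, W2, b2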


2021 ◽  
pp. 1-13
Author(s):  
Nuzhat Fatema ◽  
Saeid Gholami Farkoush ◽  
Mashhood Hasan ◽  
H Malik

In this paper, a novel hybrid approach for deterministic and probabilistic occupancy detection is proposed, combining a heuristic optimization algorithm with Back-Propagation (BP). Generally, the BP-based neural network (BPNN) suffers from suboptimal weight and bias values, trapping in local minima, and a sluggish convergence rate. In this paper, the Gravitational Search Algorithm (GSA) is implemented as a new training technique for the BPNN in order to enhance its performance: it reduces the problem of trapping in local minima, improves the convergence rate, and optimizes the weight and bias values to reduce the overall error. The experimental results of the BPNN with and without GSA are demonstrated and presented for a fair comparison. The results show that BPNNGSA outperforms the standard BPNN in both the training and testing phases in terms of processing speed, convergence rate, and avoidance of the trapping problem. The whole study is analyzed and demonstrated using the open-source R language platform. The proposed approach is validated with different numbers of hidden-layer neurons for both experimental studies based on BPNN and BPNNGSA.
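
The paper's experiments were run in R; purely for consistency with the other sketches here, the following Python/NumPy code outlines a textbook GSA minimizer of the kind that could drive BPNN weight training. The fitness passed in at the bottom is a stand-in quadratic; in the paper's setting it would be the BPNN's training error as a function of its flat weight/bias vector, and all parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def gsa_minimize(fitness, dim, n_agents=20, iters=200, G0=100.0, alpha=20.0):
    # Standard GSA for minimization: fitter agents get larger masses and
    # pull the swarm toward themselves with a gravity that decays over time.
    X = rng.uniform(-1, 1, (n_agents, dim))
    V = np.zeros_like(X)
    best_x, best_f = X[0].copy(), np.inf
    for t in range(iters):
        f = np.array([fitness(x) for x in X])
        if f.min() < best_f:
            best_f, best_x = f.min(), X[f.argmin()].copy()
        G = G0 * np.exp(-alpha * t / iters)                # decaying gravity
        m = (f - f.max()) / (f.min() - f.max() - 1e-12)    # fitter -> heavier
        M = m / (m.sum() + 1e-12)
        k = max(1, int(n_agents * (1 - t / iters)))        # shrinking elite set
        elite = f.argsort()[:k]
        A = np.zeros_like(X)
        for i in range(n_agents):
            for j in elite:
                if j != i:
                    R = np.linalg.norm(X[i] - X[j])
                    A[i] += rng.random() * G * M[j] * (X[j] - X[i]) / (R + 1e-12)
        V = rng.random((n_agents, dim)) * V + A            # velocity update
        X = X + V
    return best_x, best_f

# Stand-in fitness; replace with the BPNN training error over theta.
theta, err = gsa_minimize(lambda w: np.sum(w ** 2), dim=10)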


2016 ◽  
Vol 25 (06) ◽  
pp. 1650033 ◽  
Author(s):  
Hossam Faris ◽  
Ibrahim Aljarah ◽  
Nailah Al-Madi ◽  
Seyedali Mirjalili

Evolutionary neural networks have proven beneficial on challenging datasets, mainly because of their high capacity for avoiding local optima. The stochastic operators in such techniques reduce the probability of stagnating in local solutions and help them outperform conventional training algorithms such as Back Propagation (BP) and Levenberg-Marquardt (LM). According to the No-Free-Lunch (NFL) theorem, however, no single optimization technique can solve all optimization problems. This means that a neural network trained by a new algorithm has the potential to solve a new set of problems or to outperform current techniques on existing ones. This motivates our investigation of the efficiency of the recently proposed evolutionary Lightning Search Algorithm (LSA) in training neural networks, for the first time in the literature. The LSA-based trainer is benchmarked on 16 popular medical diagnosis problems and compared with BP, LM, and six other evolutionary trainers. The quantitative and qualitative results show that the LSA algorithm achieves not only better avoidance of local solutions but also faster convergence than the other algorithms employed. In addition, the statistical tests conducted show that the LSA-based trainer is significantly superior to the current algorithms on the majority of datasets.
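
The abstract does not name the statistical test used, so the following sketch assumes a Wilcoxon signed-rank test, a common choice for paired comparisons of two trainers across multiple datasets; the error values are random placeholders, not results from the paper.

import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Placeholder per-dataset mean test errors for two trainers on 16 datasets
# (illustrative values only, not the paper's results).
lsa_err = rng.uniform(0.05, 0.20, 16)
bp_err = lsa_err + rng.uniform(0.00, 0.10, 16)

# One-sided paired test: is the LSA trainer's error systematically lower?
stat, p = wilcoxon(lsa_err, bp_err, alternative="less")
print(f"Wilcoxon signed-rank: statistic={stat:.1f}, p={p:.4g}")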


1991 ◽  
Vol 3 (4) ◽  
pp. 589-603 ◽  
Author(s):  
Pierre Baldi ◽  
Yves Chauvin

We study generalization in a simple framework of feedforward linear networks with n inputs and n outputs, trained from examples by gradient descent on the usual quadratic error function. We derive analytical results on the behavior of the validation function corresponding to the LMS error function calculated on a set of validation patterns. We show that the behavior of the validation function depends critically on the initial conditions and on the characteristics of the noise. Under certain simple assumptions, if the initial weights are sufficiently small, the validation function has a unique minimum corresponding to an optimal stopping time for training, for which simple bounds can be calculated. There also exist situations where the validation function can exhibit more complicated and somewhat unexpected behavior, such as multiple local minima (at most n) of variable depth and long but finite plateau effects. Additional results and possible extensions are briefly discussed.
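
The setup described here is concrete enough to reproduce numerically. Below is a small Python/NumPy sketch of the experiment: a linear n-input/n-output network trained by gradient descent on the quadratic error of a noisy linear teacher, with the validation error tracked to locate the optimal stopping time. The teacher construction, noise level, and learning rate are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, p_train, p_val, noise = 8, 50, 50, 0.3

# Noisy linear teacher: y = A x + noise, with n inputs and n outputs.
A = rng.normal(0, 1, (n, n))
Xtr = rng.normal(0, 1, (p_train, n))
Ytr = Xtr @ A.T + noise * rng.normal(0, 1, (p_train, n))
Xva = rng.normal(0, 1, (p_val, n))
Yva = Xva @ A.T + noise * rng.normal(0, 1, (p_val, n))

W = 0.01 * rng.normal(0, 1, (n, n))  # small initial weights, as the analysis assumes
lr, epochs, val_curve = 0.01, 2000, []
for t in range(epochs):
    grad = (Xtr @ W.T - Ytr).T @ Xtr / p_train  # gradient of the quadratic error
    W -= lr * grad
    val_curve.append(np.mean((Xva @ W.T - Yva) ** 2))  # validation function

t_star = int(np.argmin(val_curve))  # empirical optimal stopping time
print(f"validation minimum at epoch {t_star}: MSE {val_curve[t_star]:.4f}")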


2009 ◽  
Vol 19 (06) ◽  
pp. 437-448 ◽  
Author(s):  
MD. ASADUZZAMAN ◽  
MD. SHAHJAHAN ◽  
KAZUYUKI MURASE

Multilayer feed-forward neural networks are widely used and are typically trained by minimizing an error function. Back propagation (BP) is a well-known training method for multilayer networks, but it often suffers from slow convergence. To make learning faster, we propose 'Fusion of Activation Functions' (FAF), in which different conventional activation functions (AFs) are combined to compute the final activation. This has not yet been studied extensively. One of the subgoals of the paper is to examine the role of linear AFs in the combination. We investigate whether FAF can make learning faster. The validity of the proposed method is examined through simulations on nine challenging real benchmark classification and time-series prediction problems. FAF has been applied to the 2-bit, 3-bit, and 4-bit parity, breast cancer, diabetes, heart disease, iris, wine, glass, and soybean classification problems. The algorithm is also tested on the Mackey-Glass chaotic time-series prediction problem. It is shown to work better than the AFs used independently in BP, such as sigmoid (SIG), arctangent (ATAN), and logarithmic (LOG).
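
The abstract names the constituent AFs but not the exact fusion rule, so the sketch below assumes a simple weighted combination with fixed mixing weights; faf, faf_prime, and the particular logarithmic form are illustrative assumptions. The derivative is included because BP needs it when the fused activation replaces a conventional one.

import numpy as np

def sig(x):
    return 1.0 / (1.0 + np.exp(-x))

def log_af(x):
    # One common logarithmic AF: odd, unbounded, with logarithmic growth.
    return np.sign(x) * np.log1p(np.abs(x))

def faf(x, w=(1/3, 1/3, 1/3)):
    # Fused activation: a weighted combination of conventional AFs.
    a, b, c = w
    return a * sig(x) + b * np.arctan(x) + c * log_af(x)

def faf_prime(x, w=(1/3, 1/3, 1/3)):
    # Derivative of the fused activation, as needed inside BP updates.
    a, b, c = w
    s = sig(x)
    return a * s * (1 - s) + b / (1 + x ** 2) + c / (1 + np.abs(x))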

