Fast backpropagation learning using optimization of learning rate for pulsed neural networks

2011 ◽ Vol 94 (7) ◽ pp. 27-34
Author(s): Kenji Yamamoto ◽ Seiichi Koakutsu ◽ Takashi Okamoto ◽ Hironori Hirata
1996 ◽ Vol 8 (2) ◽ pp. 451-460
Author(s): Georg Thimm ◽ Perry Moerland ◽ Emile Fiesler

The backpropagation algorithm is widely used for training multilayer neural networks. In this publication, the gain of its activation function(s) is investigated. Specifically, it is proven that changing the gain of the activation function is equivalent to changing the learning rate and the weights. This simplifies the backpropagation learning rule by eliminating one of its parameters. The theorem can be extended to hold for some well-known variations of the backpropagation algorithm, such as using a momentum term, flat-spot elimination, or adaptive gain. Furthermore, it is successfully applied to compensate for the nonstandard gain of optical sigmoids in optical neural networks.
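The stated equivalence is easy to check numerically: training a sigmoid unit with gain γ and learning rate η produces, step for step, weights that are exactly 1/γ times those obtained with unit gain, initial weights γw, and learning rate γ²η. Below is a minimal sketch of that check; the single-unit network, squared-error loss, and all hyperparameter values are illustrative assumptions, not taken from the paper itself.

```python
# Numerical check of the gain / learning-rate equivalence for a single
# sigmoid unit trained with plain gradient descent on squared error.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, x, t, eta, gain, steps=100):
    """Gradient descent on E = 0.5*(y - t)^2 for y = sigmoid(gain * w.x)."""
    for _ in range(steps):
        y = sigmoid(gain * np.dot(w, x))
        # dE/dw, using sigmoid'(u) = y * (1 - y)
        grad = (y - t) * y * (1.0 - y) * gain * x
        w = w - eta * grad
    return w

rng = np.random.default_rng(0)
x = rng.normal(size=3)
w0 = rng.normal(size=3)
t, eta, gamma = 0.8, 0.5, 2.5

w_gain = train(w0.copy(), x, t, eta=eta, gain=gamma)            # gain gamma, rate eta
w_unit = train(gamma * w0, x, t, eta=gamma**2 * eta, gain=1.0)  # gain 1, rate gamma^2 * eta

# The unit-gain trajectory tracks gamma times the gained trajectory.
print(np.allclose(w_unit, gamma * w_gain))  # True
```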


2008 ◽ Vol 128 (7) ◽ pp. 1137-1142
Author(s): Kenji Yamamoto ◽ Seiichi Koakutsu ◽ Takashi Okamoto ◽ Hironori Hirata

2020
Author(s): B Wang ◽ Y Sun ◽ Bing Xue ◽ Mengjie Zhang

© 2019, Springer Nature Switzerland AG. Image classification is a difficult machine learning task, to which Convolutional Neural Networks (CNNs) have been applied for over 20 years. In recent years, instead of the traditional approach of connecting each layer only to the next one, shortcut connections have been proposed that link a layer to layers further forward in the network, which has been shown to facilitate the training of deep CNNs. However, there are many ways to build shortcut connections, and it is hard to design the best ones manually for a particular problem, especially given that designing the network architecture is already very challenging. In this paper, a hybrid evolutionary computation (EC) method is proposed to automatically evolve both the architecture of deep CNNs and their shortcut connections. The three major contributions of this work are: first, a new encoding strategy for CNNs, in which the architecture and the shortcut connections are encoded separately; second, a hybrid two-level EC method, combining particle swarm optimisation and genetic algorithms, to search for the optimal CNNs; and last, an adjustable learning rate for the fitness evaluations, which provides a better learning rate for the training process given a fixed number of epochs. The proposed algorithm is evaluated on three widely used image classification benchmark datasets and compared with 12 non-EC peer competitors and one EC-based competitor. The experimental results demonstrate that the proposed method outperforms all of the peer competitors in terms of classification accuracy.
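To make the first contribution concrete, the sketch below shows one way a CNN could be represented with the layer architecture and the shortcut connections held as two separate parts, so that different operators (e.g. PSO on one part, GA on the other) can act on each. The field names, ranges, and probabilities are hypothetical illustrations, not the paper's actual encoding.

```python
# Hypothetical two-part genome: layer architecture and shortcut connections
# encoded separately, as the abstract describes at a high level.
import random
from dataclasses import dataclass, field

@dataclass
class CNNGenome:
    # Part 1: architecture, one (filters, kernel_size) pair per conv layer.
    layers: list = field(default_factory=list)
    # Part 2: shortcut connections as (from_layer, to_layer) pairs with
    # to_layer > from_layer + 1; ordinary next-layer links are implicit.
    shortcuts: set = field(default_factory=set)

def random_genome(max_layers=8):
    depth = random.randint(3, max_layers)
    layers = [(random.choice([16, 32, 64, 128]), random.choice([3, 5]))
              for _ in range(depth)]
    shortcuts = {(i, j)
                 for i in range(depth) for j in range(i + 2, depth)
                 if random.random() < 0.2}
    return CNNGenome(layers, shortcuts)

g = random_genome()
print(g.layers)     # e.g. [(64, 3), (32, 5), (128, 3)]
print(g.shortcuts)  # e.g. {(0, 2)}
```

Keeping the two parts separate means mutating the shortcut set never invalidates the layer list, and vice versa, which is one plausible motivation for a two-level search.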


Author(s): Ade Chandra Saputra

One of the weaknesses of backpropagation in artificial neural networks (ANNs) is getting stuck in local minima. The learning rate is an important parameter, as it determines how fast the ANN learns. This research develops a method for finding the value of the learning rate using a genetic algorithm when learning stalls, i.e., when the error value has not reached the stopping criterion or has not converged. The genetic algorithm determines the learning rate through a fitness function computed from the ANN's weights, error gradient, and bias. Evaluating the fitness function produces an error value for each candidate learning rate, each of which represents an individual in the genetic algorithm. Each individual is scored by its sum of squared errors (SSE); the one with the smallest SSE is the best individual. The chosen learning rate is then used to continue learning, lowering the error or speeding up convergence. The final result of this study is a new solution to the common problem of determining the learning parameters in backpropagation. The results indicate that the genetic algorithm can decrease the SSE when ANN learning has stagnated at a large error, i.e., is stuck in a local minimum.
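A minimal sketch of this idea follows: when training stalls, a genetic algorithm searches for a new learning rate, scoring each candidate by the SSE reached after a few trial updates. The toy network, the `train_step` interface, and the GA parameters are illustrative assumptions, not the paper's exact formulation.

```python
# GA over candidate learning rates, fitness = SSE after a few trial steps.
import numpy as np

rng = np.random.default_rng(1)

def sse_after_trial(train_step, params, lr, trial_steps=5):
    """Fitness of a candidate learning rate: SSE after a few trial updates."""
    p = [w.copy() for w in params]
    for _ in range(trial_steps):
        p, sse = train_step(p, lr)
    return sse

def ga_select_lr(train_step, params, pop_size=20, generations=10):
    pop = rng.uniform(1e-4, 1.0, size=pop_size)  # candidate learning rates
    for _ in range(generations):
        fitness = np.array([sse_after_trial(train_step, params, lr) for lr in pop])
        order = np.argsort(fitness)              # smaller SSE is better
        parents = pop[order[: pop_size // 2]]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(parents, size=2)
            child = 0.5 * (a + b)                 # arithmetic crossover
            child *= np.exp(rng.normal(0.0, 0.1)) # multiplicative mutation
            children.append(np.clip(child, 1e-5, 1.0))
        pop = np.concatenate([parents, children])
    fitness = np.array([sse_after_trial(train_step, params, lr) for lr in pop])
    return pop[int(np.argmin(fitness))]           # best learning rate found

# Tiny demo: fit y = 2x with one weight; train_step does one gradient update.
X = rng.normal(size=(32, 1)); Y = 2.0 * X
def train_step(params, lr):
    (w,) = params
    grad = 2.0 * X.T @ (X @ w - Y) / len(X)
    w = w - lr * grad
    return [w], float(np.sum((X @ w - Y) ** 2))

print(ga_select_lr(train_step, [np.zeros((1, 1))]))
```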

