Speed and Resource Optimization of BFGS Quasi-Newton Implementation on FPGA Using Inexact Line Search Method for Neural Network Training

Author(s): Jia Liu, Qiang Liu

2018 · Vol 26 (8) · pp. 1575-1579
Author(s): Qiang Liu, Jia Liu, Ruoyu Sang, Jiajun Li, Tao Zhang, ...

Author(s): Hesam Karim, Sharareh R. Niakan, Reza Safdari

Heart disease is the leading cause of death in many countries. Artificial neural network (ANN) techniques can be used to predict or classify whether patients have heart disease. There are various training algorithms for ANNs. We compared eight neural network training algorithms for classifying heart disease data from the UCI repository containing 303 samples. Performance measures for each algorithm, including training speed, number of epochs, accuracy, and mean square error (MSE), were obtained and analyzed. Our results showed that training times for gradient descent algorithms were longer than for the other training algorithms (8-10 seconds). In contrast, quasi-Newton algorithms were the fastest (effectively 0 seconds). MSE for all algorithms was between 0.117 and 0.228. While there was a significant association between the training algorithm and training time (p < 0.05), the number of neurons in the hidden layer had no significant effect on the MSE or accuracy of the models (p > 0.05). Based on our findings, quasi-Newton training algorithms are the best choice for developing an ANN classification model for heart disease, owing to their superior speed and accuracy.
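The speed advantage of quasi-Newton methods reported above can be illustrated with a minimal sketch (not the study's implementation): a BFGS iteration with an Armijo backtracking (inexact) line search, as opposed to plain gradient descent. The quadratic loss, the matrix `A`, the vector `b`, and all step-size and iteration constants below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative quadratic loss f(w) = 0.5 w^T A w - b^T w (assumed, not
# from the study); its unique minimizer solves A w = b.
A = np.array([[3.0, 0.5], [0.5, 1.0]])   # symmetric positive definite
b = np.array([1.0, -2.0])

def f(w):
    return 0.5 * w @ A @ w - b @ w

def grad(w):
    return A @ w - b

def armijo_step(w, p, g, c=1e-4):
    """Backtracking (inexact) line search satisfying the Armijo condition."""
    t = 1.0
    while f(w + t * p) > f(w) + c * t * (g @ p):
        t *= 0.5
    return t

def bfgs(w, steps=100):
    """BFGS quasi-Newton iteration with an inverse-Hessian approximation H."""
    H = np.eye(len(w))
    g = grad(w)
    for _ in range(steps):
        if np.linalg.norm(g) < 1e-10:    # converged
            break
        p = -H @ g                       # quasi-Newton search direction
        t = armijo_step(w, p, g)
        s = t * p
        w_new = w + s
        g_new = grad(w_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                   # curvature condition keeps H positive definite
            rho = 1.0 / sy
            I = np.eye(len(w))
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        w, g = w_new, g_new
    return w
```

On losses like this, BFGS converges in far fewer iterations than a fixed-step gradient descent because `H` progressively approximates the inverse Hessian, which is consistent with the near-zero quasi-Newton training times reported in the abstract.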


2021 · Vol 12 (3) · pp. 554-574
Author(s): Shahrzad Mahboubi, Indrapriyadarsini S, Hiroshi Ninomiya, Hideki Asai

2012 · Vol 21 (01) · pp. 1250009
Author(s): Ioannis E. Livieris, Panagiotis Pintelas

Conjugate gradient methods are excellent neural network training methods, characterized by their simplicity and very low memory requirements. In this paper, we propose a new spectral conjugate gradient method that guarantees sufficient descent using any line search, thereby avoiding the usually inefficient restarts. Moreover, we establish the global convergence of our proposed method under some assumptions. Experimental results provide evidence that our proposed method is preferable, and in general superior, to the classical conjugate gradient methods in terms of efficiency and robustness.
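As a point of reference, the classical (Fletcher-Reeves) conjugate gradient iteration that spectral variants like the one above build on can be sketched as follows. This is a baseline sketch, not the authors' proposed method; the quadratic objective and all constants are illustrative assumptions, and the exact line search formula applies only to quadratics.

```python
import numpy as np

# Illustrative quadratic objective f(w) = 0.5 w^T A w - b^T w (assumed);
# its minimizer solves A w = b.
A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])

def grad(w):
    return A @ w - b

def fletcher_reeves_cg(w, tol=1e-10, max_iter=50):
    """Classical conjugate gradient with the Fletcher-Reeves coefficient."""
    g = grad(w)
    d = -g                                # initial direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ A @ d)    # exact line search (quadratic case)
        w = w + alpha * d
        g_new = grad(w)
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves update coefficient
        d = -g_new + beta * d             # new conjugate direction
        g = g_new
    return w
```

The appeal noted in the abstract is visible here: the method stores only a few vectors (`g`, `d`, `w`) rather than a Hessian approximation, and on an n-dimensional quadratic it terminates in at most n iterations. A spectral variant additionally scales the gradient term in the direction update to enforce sufficient descent.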

