SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method

Author(s):  
Javier Bernal ◽  
Jose Torres-Jimenez

Energies ◽  
2020 ◽  
Vol 13 (19) ◽  
pp. 5164
Author(s):  
Chin-Hsiang Cheng ◽  
Yu-Ting Lin

The present study develops a novel optimization method for designing a Stirling engine by combining a variable-step simplified conjugate gradient method (VSCGM) with a neural network training algorithm. Compared with existing gradient-based methods such as the conjugate gradient method (CGM) and the simplified conjugate gradient method (SCGM), the VSCGM presented in this study greatly accelerates convergence while still allowing the objective function to be defined flexibly. Through automatic adjustment of the variable step size, the optimal design is reached more efficiently and accurately; the VSCGM therefore appears to be a promising alternative tool for a variety of engineering applications. In this study, optimization of a low-temperature-differential gamma-type Stirling engine was attempted as a test case. The optimizer was trained by the neural network algorithm on training data provided by three-dimensional computational fluid dynamics (CFD) computation, and the optimal values of the engine's influential design parameters were obtained efficiently. Results show that the present approach increases the indicated work and thermal efficiency by 102.93% and 5.24%, respectively. The robustness of the VSCGM was tested with different sets of initial guesses.
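The abstract does not spell out the VSCGM update rule, but the central idea, a conjugate gradient search whose step size is enlarged after successful steps and shrunk after failed ones, can be sketched as follows. All function and parameter names here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def vscgm_sketch(f, x0, step0=0.01, n_iter=300, tol=1e-8):
    """Illustrative variable-step conjugate gradient minimizer.

    The step size is grown after a successful (loss-reducing) step and
    halved after a failed one -- a stand-in for the paper's automatic
    step-size adjustment. Gradients are taken by central differences.
    """
    def grad(x, h=1e-6):
        g = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
        return g

    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                       # start with the steepest-descent direction
    step = step0
    for _ in range(n_iter):
        x_new = x + step * d
        if f(x_new) < f(x):      # successful step: accept it, grow the step
            x = x_new
            step *= 1.2
        else:                    # failed step: shrink the step and retry
            step *= 0.5
            continue
        g_new = grad(x)
        if np.linalg.norm(g_new) < tol:
            break
        # Fletcher-Reeves conjugation of the new direction with the old one
        gamma = (g_new @ g_new) / (g @ g)
        d = -g_new + gamma * d
        if g_new @ d >= 0:       # safeguard: fall back to steepest descent
            d = -g_new
        g = g_new
    return x
```

On the quadratic bowl f(x, y) = (x - 3)² + (y + 1)², for example, the routine converges to (3, -1).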


2012 ◽  
Vol 2012 ◽  
pp. 1-9 ◽  
Author(s):  
Ioannis E. Livieris ◽  
Panagiotis Pintelas

Conjugate gradient methods are excellent neural network training methods characterized by their simplicity, numerical efficiency, and very low memory requirements. In this paper, we propose a conjugate gradient neural network training algorithm that guarantees sufficient descent with any line search, thereby avoiding the usually inefficient restarts. Moreover, it achieves high-order accuracy in approximating the second-order curvature information of the error surface by utilizing the modified secant condition proposed by Li et al. (2007). Under mild conditions, we establish that the proposed method is globally convergent for general functions under the strong Wolfe conditions. Experimental results provide evidence that the proposed method is preferable, and in general superior, to the classical conjugate gradient methods, and that it has the potential to significantly enhance the computational efficiency and robustness of the training process.
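To make the ingredients concrete, the sketch below trains a single linear layer by conjugate gradient with a Polak–Ribière+ update, an Armijo (sufficient-decrease) backtracking line search, and a descent-direction safeguard in place of explicit restarts. This is a generic sketch, not the authors' algorithm, which additionally uses the modified secant condition of Li et al. and strong Wolfe line searches; all data and variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))              # training inputs
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true                             # targets of a linear "network"

def loss(w):
    return 0.5 * np.mean((X @ w - y) ** 2)

def grad(w):
    return X.T @ (X @ w - y) / len(y)

w = np.zeros(3)
g = grad(w)
d = -g
for _ in range(100):
    # backtracking line search enforcing the Armijo sufficient-decrease rule
    alpha, c1 = 1.0, 1e-4
    while loss(w + alpha * d) > loss(w) + c1 * alpha * (g @ d) and alpha > 1e-12:
        alpha *= 0.5
    w = w + alpha * d
    g_new = grad(w)
    if np.linalg.norm(g_new) < 1e-10:
        break
    # Polak-Ribiere+ coefficient; the max(..., 0) clip is the standard
    # safeguard that plays the role of an automatic restart
    beta = max(g_new @ (g_new - g) / (g @ g), 0.0)
    d = -g_new + beta * d
    if g_new @ d >= 0:                     # keep d a descent direction
        d = -g_new
    g = g_new
```

On this small least-squares problem the iteration recovers `w_true` to high accuracy within the 100-step budget.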


Buildings ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 13
Author(s):  
Jee-Heon Kim ◽  
Nam-Chul Seong ◽  
Won-Chang Choi

The performance of various multilayer neural network algorithms in predicting the energy consumption of an absorption chiller in an air-conditioning system under the same conditions was compared and evaluated in this study. Each prediction model was created using one of 12 representative multilayer shallow neural network training algorithms. About a month of actual operation data from the heating period was used as training data, and the predictive performance of the 12 algorithms was evaluated as a function of training size. The prediction results indicate error rates, relative to the measured values, of 0.09% minimum, 5.76% maximum, and a standard deviation (SD) of 1.94 for the Levenberg–Marquardt backpropagation model, and 0.41% minimum, 5.05% maximum, and an SD of 1.68 for the Bayesian regularization backpropagation model. The conjugate gradient with Polak–Ribière updates backpropagation model yielded lower values than the other two models, with 0.31% minimum, 5.73% maximum, and an SD of 1.76. Based on the predictive performance evaluation index CvRMSE, all other models (conjugate gradient with Fletcher–Reeves updates backpropagation, one-step secant backpropagation, gradient descent with momentum and adaptive learning rate backpropagation, and gradient descent with momentum backpropagation) except the gradient descent backpropagation model yielded results that satisfy ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) Guideline 14. The results of this study confirm that prediction performance may differ for each multilayer neural network training algorithm; selecting a model appropriate to the characteristics of a specific project is therefore essential.
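The CvRMSE index referred to above is the root-mean-square prediction error normalized by the mean of the measured values, expressed as a percentage; ASHRAE Guideline 14 compares it against fixed calibration limits (commonly cited as 30% for hourly and 15% for monthly data). A minimal implementation:

```python
import numpy as np

def cvrmse(measured, predicted):
    """Coefficient of variation of the RMSE, in percent: the index used
    with ASHRAE Guideline 14 to judge energy-prediction models."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((measured - predicted) ** 2))
    return 100.0 * rmse / measured.mean()
```

For instance, `cvrmse([10, 10, 10, 10], [9, 11, 9, 11])` returns 10.0, since the RMSE is 1 against a measured mean of 10.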


1996 ◽  
Vol 118 (2) ◽  
pp. 272-277 ◽  
Author(s):  
X. P. Xu ◽  
R. T. Burton ◽  
C. M. Sargent

An experimental approach that uses a neural network model to identify a nonlinear, non-pressure-compensated flow valve is described in this paper. The conjugate gradient method with the Polak–Ribière formula is applied to train the neural network to approximate the nonlinear relationships represented by noisy data. The ability of the trained neural network to reproduce and to generalize is demonstrated by its excellent approximation of the experimental data. The training algorithm derived from the conjugate gradient method is shown to lead to a stable solution.
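The paper's valve data are not reproduced here, but the general setup can be sketched: a small one-hidden-layer network fitted to noisy samples of a smooth nonlinear curve by Polak–Ribière conjugate gradient with a backtracking line search. The data, network size, and all names below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the measured valve data: a smooth nonlinear
# flow curve plus measurement noise (the paper's data are not reproduced)
x = np.linspace(0.0, 1.0, 60)
flow = np.sqrt(x) + 0.02 * rng.normal(size=x.size)

H = 6                                      # hidden units; parameters packed flat
def unpack(p):
    return p[:H], p[H:2 * H], p[2 * H:3 * H], p[3 * H]

def predict(p, xs):
    w1, b1, w2, b2 = unpack(p)
    return np.tanh(np.outer(xs, w1) + b1) @ w2 + b2

def loss(p):
    return 0.5 * np.mean((predict(p, x) - flow) ** 2)

def grad(p, h=1e-6):
    # central-difference gradient keeps the sketch short; real training
    # code would use backpropagation instead
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (loss(p + e) - loss(p - e)) / (2.0 * h)
    return g

p = 0.5 * rng.normal(size=3 * H + 1)
initial_loss = loss(p)
g = grad(p)
d = -g
for _ in range(300):
    # backtracking line search with an Armijo sufficient-decrease test
    alpha = 1.0
    while loss(p + alpha * d) > loss(p) + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
        alpha *= 0.5
    p = p + alpha * d
    g_new = grad(p)
    if np.linalg.norm(g_new) < 1e-10:
        break
    beta = g_new @ (g_new - g) / (g @ g)   # Polak-Ribiere coefficient
    d = -g_new + beta * d
    if g_new @ d >= 0:                     # keep d a descent direction
        d = -g_new
    g = g_new
```

Because every accepted step must pass the sufficient-decrease test, the training loss falls monotonically toward the noise floor of the synthetic data.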

