Incremental Gradient Descent
Recently Published Documents


TOTAL DOCUMENTS: 6 (five years: 0)
H-INDEX: 3 (five years: 0)

Author(s): Ivo Bukovsky, Peter M. Benes, Martin Vesely

This chapter recalls nonlinear polynomial neurons and their incremental and batch learning algorithms for both plant identification and neuro-controller adaptation. The authors explain and demonstrate the use of feed-forward as well as recurrent polynomial neurons for system approximation and control via fundamental yet practically efficient machine learning algorithms such as Ridge Regression, Levenberg-Marquardt, and Conjugate Gradients; they also discuss the use of novel optimizers such as ADAM and BFGS. Incremental gradient descent and RLS algorithms for plant identification and control are explained and demonstrated. Novel BIBS stability for recurrent HONUs and for closed control loops with a linear plant and a nonlinear (HONU) controller is also discussed and demonstrated.
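As an illustration of the sample-by-sample learning the chapter describes, the sketch below applies incremental gradient descent (an LMS-style update on the squared prediction error) to a quadratic neural unit, the simplest polynomial HONU, for plant identification. This is a minimal sketch, not the chapter's implementation: the feature layout, function names, and learning rate are illustrative assumptions.

import numpy as np
from itertools import combinations_with_replacement

def qnu_features(x):
    # Quadratic polynomial expansion with bias, as in a quadratic neural
    # unit: [1, x_1, ..., x_d, x_1*x_1, x_1*x_2, ..., x_d*x_d] (upper triangle).
    z = np.concatenate(([1.0], x))
    return np.array([z[i] * z[j] for i, j in
                     combinations_with_replacement(range(len(z)), 2)])

def identify_plant(inputs, outputs, lr=0.01):
    # Incremental (sample-by-sample) gradient descent: one weight update
    # per observed input/output pair of the plant.
    w = np.zeros(len(qnu_features(inputs[0])))
    for x, y in zip(inputs, outputs):
        colx = qnu_features(x)
        e = y - w @ colx          # prediction error of the neuron
        w += lr * e * colx        # gradient step on (1/2)*e^2
    return w

Batch learning with Ridge Regression or Levenberg-Marquardt, as the chapter also covers, would instead fit w from all samples at once; the incremental rule above is what allows the neuron to adapt online.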


Author(s): Yuanyuan Liu, Fanhua Shang, Licheng Jiao

Recently, research on variance reduced incremental gradient descent methods (e.g., SAGA) has made exciting progress (e.g., linear convergence for strongly convex (SC) problems). However, existing accelerated methods (e.g., point-SAGA) suffer from drawbacks such as inflexibility. In this paper, we design a novel and simple momentum to accelerate the classical SAGA algorithm, and propose a direct accelerated incremental gradient descent algorithm. In particular, our theoretical result shows that our algorithm attains a best known oracle complexity for strongly convex problems and an improved convergence rate for the case of n>=L/\mu. We also give experimental results justifying our theoretical results and showing the effectiveness of our algorithm.
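For concreteness, here is a minimal sketch of the classical SAGA update that the paper accelerates, applied to a least-squares objective. The momentum term that constitutes the paper's contribution is omitted, and the objective, step size, and loop structure are illustrative assumptions rather than the authors' code.

import numpy as np

def saga(A, b, lr=0.01, epochs=50, seed=0):
    # Plain SAGA on (1/n) * sum_i (a_i^T x - b_i)^2: keep a table of the
    # last-seen gradient of each f_i and correct each stochastic gradient
    # by the table entry and the table average, which reduces variance.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    grads = np.zeros((n, d))   # per-sample stored gradients
    avg = np.zeros(d)          # running average of the table
    for _ in range(epochs * n):
        i = rng.integers(n)
        g_new = 2.0 * (A[i] @ x - b[i]) * A[i]   # gradient of f_i at x
        x -= lr * (g_new - grads[i] + avg)       # variance-reduced step
        avg += (g_new - grads[i]) / n            # keep the average current
        grads[i] = g_new
    return x

The variance-reduced correction is what yields linear convergence on strongly convex problems; the paper's accelerated variant adds a momentum term on top of this update.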

