Convergence rate of moments in stochastic approximation with simultaneous perturbation gradient approximation and resetting

1999 ◽  
Vol 44 (5) ◽  
pp. 894-905 ◽  
Author(s):  
L. Gerencser
2021 ◽  
Vol 7 (1) ◽  
pp. 1445-1459
Author(s):  
Yiyuan Cheng ◽  
Yongquan Zhang ◽  
Xingxing Zha ◽  
Dongyin Wang ◽  
...  

<abstract><p>In this paper, we consider stochastic approximation algorithms for least-squares and logistic regression without a strong-convexity assumption on the convex loss functions. We develop two algorithms with varying step-sizes, motivated by the accelerated gradient algorithm originally proposed for convex stochastic programming. We show that the developed algorithms achieve a rate of $ O(1/n^{2}) $, where $ n $ is the number of samples; this is tighter than the best convergence rate, $ O(1/n) $, achieved so far for non-strongly-convex stochastic approximation with constant step-size on classic supervised learning problems. Our analysis is a non-asymptotic analysis of the empirical risk (in expectation) under fewer assumptions than existing results: it requires neither finite-dimensionality nor a Lipschitz condition. We carry out controlled experiments on synthetic and standard machine learning data sets. The empirical results support our theoretical analysis and show a faster convergence rate than other existing methods.</p></abstract>


1998 ◽  
Vol 8 (1) ◽  
pp. 217-247 ◽  
Author(s):  
Pierre L'Ecuyer ◽  
George Yin
