Convergence rate of linear two-time-scale stochastic approximation

2004, Vol. 14(2), pp. 796-819
Author(s): Vijay R. Konda, John N. Tsitsiklis
Sadhana, 1997, Vol. 22(4), pp. 525-543
Author(s): Vivek S Borkar, Vijaymohan R Konda

2021, Vol. 7(1), pp. 1445-1459
Author(s): Yiyuan Cheng, Yongquan Zhang, Xingxing Zha, Dongyin Wang, et al.

Abstract: In this paper, we consider stochastic approximation algorithms for least-squares and logistic regression without a strong-convexity assumption on the convex loss functions. We develop two algorithms with varied step sizes, motivated by the accelerated gradient algorithm originally proposed for convex stochastic programming. We show that the developed algorithms achieve a rate of $ O(1/n^{2}) $, where $ n $ is the number of samples; this is tighter than the best convergence rate $ O(1/n) $ achieved so far for non-strongly-convex stochastic approximation with constant step size on classic supervised learning problems. Our analysis is a non-asymptotic analysis of the empirical risk (in expectation) under fewer assumptions than existing results: it requires neither a finite-dimensionality assumption nor a Lipschitz condition. We carry out controlled experiments on synthetic and standard machine learning data sets. The empirical results support our theoretical analysis and show a faster convergence rate than existing methods.
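The abstract does not spell out the update rules, so the following is only a minimal sketch of the general idea it describes: an accelerated stochastic gradient method with a varied (decreasing) step size applied to least-squares regression. The function name, step-size schedule, and Nesterov-style momentum weight below are illustrative assumptions, not the authors' algorithm.

```python
# Hypothetical sketch: momentum-accelerated SGD with a decreasing step size
# for the least-squares loss 0.5 * (x^T w - y)^2. Illustration only; the
# paper's actual two algorithms and step-size schedules may differ.
import numpy as np

def accelerated_sgd_least_squares(X, y, n_epochs=1, step0=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)   # current iterate
    z = np.zeros(d)   # previous iterate, used for the momentum term
    t = 0
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            t += 1
            step = step0 / (t + 1)            # assumed decreasing step-size schedule
            beta = (t - 1) / (t + 2)          # Nesterov-style momentum weight
            v = w + beta * (w - z)            # extrapolation point
            grad = (X[i] @ v - y[i]) * X[i]   # stochastic gradient of the squared loss at v
            z = w                             # remember the previous iterate
            w = v - step * grad               # gradient step from the extrapolation point
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 5))
    w_true = rng.normal(size=5)
    y = X @ w_true + 0.01 * rng.normal(size=1000)
    w_hat = accelerated_sgd_least_squares(X, y, n_epochs=5)
    print("estimation error:", np.linalg.norm(w_hat - w_true))
```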

