Stabilization of stochastic approximation by step size adaptation

2012 ◽  
Vol 61 (4) ◽  
pp. 543-548 ◽  
Author(s):  
Sameer Kamal

2014 ◽  
Vol 31 (04) ◽  
pp. 1450026 ◽  
Author(s):  
ZI XU ◽  
YINGYING LI ◽  
XINGFANG ZHAO

This paper proposes a new stochastic approximation algorithm for solving simulation-based optimization problems. It uses a weighted combination of two independent noisy gradient measurements at the current iterate as the iterative direction, and can be regarded as a stochastic approximation algorithm with a special matrix step size. The almost sure convergence and the asymptotic rate of convergence of the new algorithm are established. Numerical experiments show that it outperforms the classical Robbins–Monro (RM) algorithm and several other existing algorithms on a noisy nonlinear function minimization problem, several unconstrained optimization problems, and a typical simulation-based optimization problem, namely the (s, S) inventory problem.
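A minimal sketch of the idea in Python, on a toy quadratic objective: each iteration draws two independent noisy gradient measurements and descends along their weighted combination with a classical 1/k step size. The fixed scalar weight `w` and the toy gradient oracle are illustrative assumptions; the paper's actual weighting rule and matrix step size are not reproduced here.

```python
import numpy as np

def noisy_grad(x, rng, noise_std=0.1):
    """Noisy gradient oracle for the toy objective f(x) = 0.5 * ||x||^2."""
    return x + noise_std * rng.standard_normal(x.shape)

def weighted_two_sample_sa(x0, n_iters=1000, a=1.0, w=0.5, seed=0):
    """Stochastic approximation using a weighted combination of two
    independent noisy gradient measurements per iteration (sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iters + 1):
        g1 = noisy_grad(x, rng)                 # first independent measurement
        g2 = noisy_grad(x, rng)                 # second independent measurement
        direction = w * g1 + (1.0 - w) * g2     # weighted combination (fixed scalar weight)
        step = a / k                            # classical Robbins-Monro step size
        x = x - step * direction
    return x

if __name__ == "__main__":
    print(weighted_two_sample_sa(np.array([5.0, -3.0])))
```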


2021 ◽  
Vol 7 (1) ◽  
pp. 1445-1459
Author(s):  
Yiyuan Cheng ◽  
Yongquan Zhang ◽  
Xingxing Zha ◽  
Dongyin Wang ◽  
...  

In this paper, we consider stochastic approximation algorithms for least-squares and logistic regression without a strong-convexity assumption on the convex loss functions. We develop two algorithms with varied step sizes, motivated by the accelerated gradient algorithm originally proposed for convex stochastic programming. We show that the developed algorithms achieve a rate of $O(1/n^{2})$, where $n$ is the number of samples, which is tighter than the best rate $O(1/n)$ achieved so far for non-strongly-convex stochastic approximation with a constant step size on classic supervised learning problems. Our analysis is a non-asymptotic analysis of the empirical risk (in expectation) under fewer assumptions than existing results: it requires neither a finite-dimensionality assumption nor a Lipschitz condition. We carry out controlled experiments on synthetic and standard machine learning data sets. Empirical results support our theoretical analysis and show a faster convergence rate than existing methods.
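A minimal sketch of a single accelerated stochastic gradient pass for least-squares regression, in the spirit of the algorithms described above. The Nesterov-style extrapolation weight and the 1/k step-size decay are illustrative assumptions, not the exact two-sequence updates or step-size schedules analysed in the paper.

```python
import numpy as np

def accelerated_sgd_least_squares(X, y, step0=0.5, seed=0):
    """Single-pass accelerated stochastic gradient for least-squares regression.
    The momentum weight and 1/k step decay are illustrative choices."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)            # current iterate
    theta_prev = np.zeros(d)       # previous iterate (for extrapolation)
    for k, i in enumerate(rng.permutation(n), start=1):
        momentum = (k - 1) / (k + 2)                 # Nesterov-style weight
        z = theta + momentum * (theta - theta_prev)  # extrapolated point
        grad = (X[i] @ z - y[i]) * X[i]              # single-sample gradient
        theta_prev = theta
        theta = z - (step0 / k) * grad               # decaying step size
    return theta
```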


2020 ◽  
Vol 87 (5) ◽  
Author(s):  
Xiaojia Shelly Zhang ◽  
Eric de Sturler ◽  
Alexander Shapiro

Practical engineering designs typically involve many load cases. For topology optimization with many deterministic load cases, a large number of linear systems of equations must be solved at each optimization step, leading to an enormous computational cost. To address this challenge, we propose a mirror descent stochastic approximation (MD-SA) framework with various step size strategies to solve topology optimization problems with many load cases. We reformulate the deterministic objective function and gradient into stochastic ones through randomization, derive the MD-SA update, and develop algorithmic strategies. The proposed MD-SA algorithm requires only low accuracy in the stochastic gradient and thus uses only a single sample per optimization step (i.e., the sample size is always one). As a result, we reduce the number of linear systems to solve per step from hundreds to one, which drastically reduces the total computational cost while maintaining a similar design quality. For example, for one of the design problems, the total number of linear systems to solve and the wall clock time are reduced by factors of 223 and 22, respectively.
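A heavily simplified MD-SA sketch in Python: per iteration a single load case is sampled, its gradient stands in for the full deterministic gradient, and an exponentiated-gradient (mirror descent) update with a decaying step size is applied to the element densities. The toy "compliance" model, the square-root step-size decay, and the volume rescaling are illustrative assumptions that replace the finite element solves and the exact mirror map used in the paper.

```python
import numpy as np

def md_sa_topopt_sketch(n_elems=200, n_loads=100, n_iters=300,
                        vol_frac=0.4, step0=0.1, seed=0):
    """MD-SA sketch: one randomly sampled load case per step, mirror-descent
    (exponentiated-gradient) update on element densities, decaying step size.
    Toy compliance c_j(x) = sum_i F[j, i]**2 / x[i] replaces the FE solves."""
    rng = np.random.default_rng(seed)
    F = rng.uniform(0.5, 1.5, size=(n_loads, n_elems))  # toy per-case element loads
    x = np.full(n_elems, vol_frac)                       # uniform initial density
    x_min = 1e-3
    for k in range(1, n_iters + 1):
        j = rng.integers(n_loads)                        # sample ONE load case per step
        grad = -(F[j] ** 2) / x ** 2                     # stochastic gradient of c_j
        step = step0 / np.sqrt(k)                        # decaying step size strategy
        x = x * np.exp(-step * grad)                     # exponentiated-gradient update
        x = np.clip(x, x_min, 1.0)
        x *= vol_frac * n_elems / x.sum()                # re-impose the volume budget
        x = np.clip(x, x_min, 1.0)
    return x
```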

