Approximate Dual Averaging Method for Multiagent Saddle-Point Problems with Stochastic Subgradients

2014
Vol 2014
pp. 1-7
Author(s):
Deming Yuan
Yang Yang

This paper considers solving a saddle-point problem over a network of multiple interacting agents. The global objective function is a combination of local convex-concave functions, each of which is available only to one agent. Our main focus is on the case where the projection steps are computed approximately and the subgradients are corrupted by stochastic noise. We propose an approximate version of the standard dual averaging method and show that the standard convergence rate is preserved, provided that the projection errors decrease at an appropriate rate and the noise is zero-mean with bounded variance.
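To make the setting concrete, here is a minimal sketch of dual averaging with noisy subgradients and an inexact projection on a toy bilinear saddle-point problem over a box. The problem data, noise level, and error schedule are illustrative assumptions, and the multiagent averaging over network neighbours analyzed in the paper is not reproduced here.

```python
import numpy as np

# Minimal sketch (not the authors' code): stochastic dual averaging for the
# toy saddle-point problem  min_x max_y  x^T A y + b^T x - c^T y  over boxes.
# Subgradients are corrupted by synthetic zero-mean noise, and the
# "approximate projection" is mimicked by adding a small, decaying error
# after the exact projection.  All problem data below are hypothetical.

rng = np.random.default_rng(0)
n, m = 5, 4
A = rng.standard_normal((n, m))
b = rng.standard_normal(n)
c = rng.standard_normal(m)

def project_box(v, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^d."""
    return np.clip(v, lo, hi)

T = 2000
x, y = np.zeros(n), np.zeros(m)
zx, zy = np.zeros(n), np.zeros(m)           # running sums of (noisy) subgradients
x_avg, y_avg = np.zeros(n), np.zeros(m)

for t in range(1, T + 1):
    noise_x = 0.1 * rng.standard_normal(n)  # zero-mean, bounded-variance noise
    noise_y = 0.1 * rng.standard_normal(m)
    gx = A @ y + b + noise_x                # noisy subgradient in x (minimization)
    gy = A.T @ x - c + noise_y              # noisy subgradient in y (maximization)

    zx += gx
    zy -= gy                                # sign flip: y-player ascends

    step = 1.0 / np.sqrt(t)                 # standard dual-averaging step size
    proj_err = 1.0 / t**1.5                 # projection error decaying fast enough
    x = project_box(-step * zx) + proj_err * rng.standard_normal(n)
    y = project_box(-step * zy) + proj_err * rng.standard_normal(m)

    x_avg += (x - x_avg) / t                # running (ergodic) averages
    y_avg += (y - y_avg) / t

print("averaged iterate x:", np.round(x_avg, 3))
print("averaged iterate y:", np.round(y_avg, 3))
```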

PAMM
2021
Vol 20 (1)
Author(s):
Ritukesh Bharali
Fredrik Larsson
Ralf Jänicke

2020
Vol 60 (11)
pp. 1787-1809
Author(s):
M. S. Alkousa
A. V. Gasnikov
D. M. Dvinskikh
D. A. Kovalev
F. S. Stonyakin

2013
Vol 46 (3)
Author(s):
Alicja Smoktunowicz
Felicja Okulicka-Dłużewska

The numerical stability of two main direct methods for solving the symmetric saddle-point problem is analyzed. The first is a generalization of Golub's method for the augmented system formulation (ASF) and uses the Householder QR decomposition. The second method is based on the singular value decomposition (SVD). Numerical comparisons of several direct methods are given.
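For readers unfamiliar with the augmented system structure, the small sketch below assembles a symmetric saddle-point (KKT) system and solves it once via a QR factorization and once via an SVD. It only illustrates the problem structure on random data; it is not the Golub-type ASF algorithm or the SVD-based method whose stability the paper analyzes.

```python
import numpy as np

# Illustrative only: assemble a small symmetric saddle-point system
#   [ A  B ] [x]   [f]
#   [ B' 0 ] [y] = [g]
# with A symmetric positive definite, and solve it two ways.

rng = np.random.default_rng(1)
n, m = 6, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # SPD (1,1) block
B = rng.standard_normal((n, m))      # generically full column rank
f = rng.standard_normal(n)
g = rng.standard_normal(m)

K = np.block([[A, B], [B.T, np.zeros((m, m))]])   # augmented (saddle-point) matrix
rhs = np.concatenate([f, g])

# Solve via a QR factorization of K
Q, R = np.linalg.qr(K)
sol_qr = np.linalg.solve(R, Q.T @ rhs)

# Solve via the SVD of K
U, s, Vt = np.linalg.svd(K)
sol_svd = Vt.T @ ((U.T @ rhs) / s)

print("max difference between the two solutions:",
      np.max(np.abs(sol_qr - sol_svd)))
```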


Acta Numerica
2013
Vol 22
pp. 509-575
Author(s):
Yurii Nesterov
Arkadi Nemirovski

In the past decade, problems related to l1/nuclear norm minimization have attracted much attention in the signal processing, machine learning and optimization communities. In this paper, devoted to l1/nuclear norm minimization as ‘optimization beasts’, we give a detailed description of two attractive first-order optimization techniques for solving problems of this type. The first one, aimed primarily at lasso-type problems, comprises fast gradient methods applied to composite minimization formulations. The second approach, aimed at Dantzig-selector-type problems, utilizes saddle-point first-order algorithms and reformulation of the problem of interest as a generalized bilinear saddle-point problem. For both approaches, we give complete and detailed complexity analyses and discuss the application domains.
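As a concrete instance of the first (composite-minimization) approach, the following is a generic FISTA-style fast gradient sketch for a lasso problem, min_x 0.5||Ax - b||^2 + lam||x||_1, on synthetic data. The step size comes from the spectral norm of A; this is a textbook method given for orientation, not the specific schemes developed in the survey.

```python
import numpy as np

# Generic FISTA-style fast gradient method for the lasso problem
#   min_x  0.5 * ||A x - b||_2^2 + lam * ||x||_1
# Problem data and parameters below are synthetic.

rng = np.random.default_rng(2)
m, n = 40, 100
A = rng.standard_normal((m, n))
x_true = np.zeros(n); x_true[:5] = rng.standard_normal(5)   # sparse signal
b = A @ x_true + 0.01 * rng.standard_normal(m)
lam = 0.1

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth part's gradient
x = np.zeros(n)
z = x.copy()
t = 1.0

for _ in range(300):
    grad = A.T @ (A @ z - b)
    x_new = soft_threshold(z - grad / L, lam / L)   # proximal gradient step
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov momentum
    x, t = x_new, t_new

obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.sum(np.abs(x))
print("lasso objective after 300 iterations:", round(obj, 4))
print("nonzeros recovered:", int(np.sum(np.abs(x) > 1e-3)))
```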


OPSEARCH
2016
Vol 53 (4)
pp. 917-933
Author(s):
Maria C. Maciel
Sandra A. Santos
Graciela N. Sottosanto
