A Broyden Class of Quasi-Newton Methods for Riemannian Optimization

2015 ◽  
Vol 25 (3) ◽  
pp. 1660-1685 ◽  
Author(s):  
Wen Huang ◽  
K. A. Gallivan ◽  
P.-A. Absil

Author(s):  
David Ek ◽  
Anders Forsgren

Abstract: The main focus of this paper is exact linesearch methods for minimizing a quadratic function whose Hessian is positive definite. We give a class of limited-memory quasi-Newton Hessian approximations which generate search directions parallel to those of the BFGS method, or equivalently, to those of the method of preconditioned conjugate gradients. In the setting of reduced Hessians, the class provides a dynamical framework for the construction of limited-memory quasi-Newton methods. These methods attain finite termination on quadratic optimization problems in exact arithmetic. We demonstrate the performance of the methods within this framework in finite-precision arithmetic by numerical simulations on sequences of related systems of linear equations originating from the CUTEst test collection. In addition, we give a compact representation of the Hessian approximations in the full Broyden class for the general unconstrained optimization problem. This representation consists of explicit matrices, with gradients as its only vector components.
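The BFGS-parallel search directions and the finite-termination property described in the abstract can be illustrated with the standard limited-memory two-loop recursion. The sketch below is a generic L-BFGS direction computation, not the authors' reduced-Hessian framework; the test problem (a small diagonal quadratic with exact linesearch) is an illustrative assumption.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Standard two-loop recursion: returns the quasi-Newton direction
    -H @ grad, where H is the implicit L-BFGS inverse-Hessian approximation
    built from the stored pairs s_i = x_{i+1} - x_i, y_i = g_{i+1} - g_i."""
    q = grad.copy()
    rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
    alphas = []
    # Backward pass, most recent pair first
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * s.dot(q)
        q -= a * y
        alphas.append(a)
    # Scale the initial inverse Hessian by gamma = s^T y / y^T y (last pair)
    gamma = s_list[-1].dot(y_list[-1]) / y_list[-1].dot(y_list[-1]) if s_list else 1.0
    r = gamma * q
    # Forward pass, oldest pair first
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * y.dot(r)
        r += (a - b) * s
    return -r

# With exact linesearch on f(x) = 0.5 x^T A x, the directions are parallel
# to conjugate-gradient directions, so the iteration terminates in <= n steps.
A = np.diag([1.0, 4.0])                 # SPD Hessian, n = 2 (illustrative)
x = np.array([1.0, 1.0])
s_hist, y_hist = [], []
for _ in range(2):
    g = A @ x
    d = lbfgs_direction(g, s_hist, y_hist)
    alpha = -(d @ g) / (d @ A @ d)      # exact linesearch for a quadratic
    x_new = x + alpha * d
    s_hist.append(x_new - x)
    y_hist.append(A @ x_new - g)
    x = x_new
final_grad_norm = np.linalg.norm(A @ x)
```

After n = 2 iterations the gradient vanishes up to rounding error, matching the exact-arithmetic finite-termination property the abstract states.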


Author(s):  
Anton Rodomanov ◽  
Yurii Nesterov

Abstract: We study the local convergence of classical quasi-Newton methods for nonlinear optimization. Although it was well established a long time ago that asymptotically these methods converge superlinearly, the corresponding rates of convergence still remain unknown. In this paper, we address this problem. We obtain the first explicit non-asymptotic rates of superlinear convergence for the standard quasi-Newton methods, which are based on the updating formulas from the convex Broyden class. In particular, for the well-known DFP and BFGS methods, we obtain rates of the form $$\left(\frac{n L^2}{\mu^2 k}\right)^{k/2}$$ and $$\left(\frac{n L}{\mu k}\right)^{k/2}$$, respectively, where $$k$$ is the iteration counter, $$n$$ is the dimension of the problem, $$\mu$$ is the strong convexity parameter, and $$L$$ is the Lipschitz constant of the gradient.
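The two rate bounds can be evaluated directly: both become superlinearly small once $$k$$ exceeds $$n L/\mu$$ (BFGS) or $$n L^2/\mu^2$$ (DFP). A minimal sketch of the formulas from the abstract; the function name and the demo parameters are illustrative assumptions, not from the paper.

```python
def broyden_rate_bound(n, L, mu, k, method="BFGS"):
    """Non-asymptotic superlinear rate bounds from the abstract:
    (nL/(mu k))^{k/2} for BFGS, (nL^2/(mu^2 k))^{k/2} for DFP."""
    kappa = L / mu if method == "BFGS" else (L / mu) ** 2
    return (n * kappa / k) ** (k / 2)

# Illustrative parameters: n = 10, L = 1, mu = 0.1, so L/mu = 10.
# The BFGS bound equals 1 exactly at k = n*L/mu = 100 and then
# decays superlinearly; the DFP bound needs k > n*L^2/mu^2 = 1000.
bfgs_at_threshold = broyden_rate_bound(10, 1.0, 0.1, 100)
bfgs_later = broyden_rate_bound(10, 1.0, 0.1, 400)
dfp_later = broyden_rate_bound(10, 1.0, 0.1, 400, method="DFP")
```

Since $$L/\mu \ge 1$$, the BFGS bound is never larger than the DFP bound at the same iteration, consistent with BFGS's reputation as the stronger update in practice.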


1985 ◽  
Vol 47 (4) ◽  
pp. 393-399 ◽  
Author(s):  
F. Biegler-König

2013 ◽  
Vol 33 (3) ◽  
pp. 517-542 ◽  
Author(s):  
El-Sayed M. E. Mostafa ◽  
Mohamed A. Tawhid ◽  
Eman R. Elwan

1994 ◽  
Vol 50 (1-3) ◽  
pp. 305-323 ◽  
Author(s):  
J.A. Ford ◽  
I.A. Moghrabi

2015 ◽  
Vol 406 ◽  
pp. 194-208 ◽  
Author(s):  
Dan Vladimir Nichita ◽  
Martin Petitfrere

2016 ◽  
Author(s):  
Steven H. Waldrip ◽  
Robert K. Niven
