A Globally Convergent Matrix-Free Method for Constrained Equations and Its Linear Convergence Rate

2014 · Vol 2014 · pp. 1-6
Author(s): Min Sun, Jing Liu

A matrix-free method for constrained equations is proposed, combining the well-known PRP (Polak-Ribière-Polyak) conjugate gradient method with the hyperplane projection method. The new method is not only derivative-free but also completely matrix-free, so it can be applied to large-scale constrained equations. Global convergence is obtained without any differentiability requirement on the constrained equations. Compared with existing gradient methods for such problems, the new method possesses a linear convergence rate under standard conditions, and a relaxation factor γ is attached to the update step to accelerate convergence. Preliminary numerical results show that the method is promising in practice.
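As a concrete illustration of the two ingredients named above, here is a minimal Python sketch that combines a derivative-free PRP-type direction (built from residual values rather than gradients) with a hyperplane projection step in the style of Solodov and Svaiter. The function names, the line-search constants sigma and rho, and the placement of the relaxation factor gamma are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def prp_hyperplane(F, proj_C, x0, tol=1e-6, max_iter=1000,
                   sigma=1e-4, rho=0.5, gamma=1.0):
    """Sketch: derivative-free PRP direction plus hyperplane projection.
    F      : monotone map R^n -> R^n (no Jacobian required)
    proj_C : Euclidean projection onto the feasible set C
    gamma  : relaxation factor scaling the projection step (assumed role)
    """
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    d = -Fx                                   # initial direction
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        # backtracking search: z = x + a*d with -F(z)^T d >= sigma*a*||d||^2
        a = 1.0
        z = x + a * d
        Fz = F(z)
        while -(Fz @ d) < sigma * a * (d @ d) and a > 1e-12:
            a *= rho
            z = x + a * d
            Fz = F(z)
        if np.linalg.norm(Fz) < tol:
            x, Fx = z, Fz
            break
        # project x onto the hyperplane {y : F(z)^T (y - z) = 0}, then
        # onto C; gamma relaxes the step length
        beta = gamma * (Fz @ (x - z)) / (Fz @ Fz)
        x_new = proj_C(x - beta * Fz)
        Fx_new = F(x_new)
        # PRP coefficient computed from residual values, not gradients
        beta_prp = (Fx_new @ (Fx_new - Fx)) / (Fx @ Fx)
        d = -Fx_new + beta_prp * d
        x, Fx = x_new, Fx_new
    return x

# toy usage: F(x) = x + sin(x) is monotone; C is the nonnegative orthant
root = prp_hyperplane(lambda v: v + np.sin(v),
                      lambda v: np.maximum(v, 0.0), np.ones(5))
```

Note that only F-values enter the loop: no Jacobian and no matrix storage, which is what makes this family of methods attractive at large scale.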

2019 · Vol 2019 (1)
Author(s): Shijie Sun, Meiling Feng, Luoyi Shi

Abstract This paper considers an iterative algorithm for solving the multiple-sets split equality problem (MSSEP) whose step size is independent of the norms of the related operators, and investigates its sublinear and linear convergence rates. In particular, we present a notion of bounded Hölder regularity for the MSSEP, a generalization of the well-known bounded linear regularity property, and give several sufficient conditions that ensure it. We then use this property to derive the sublinear and linear convergence rates of the algorithm. Finally, some numerical experiments are provided to verify the validity of our results.
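For orientation, the MSSEP asks for a point x in the intersection of sets C_i and a point y in the intersection of sets Q_j with Ax = By. The sketch below, under assumed update rules that are not necessarily the paper's, shows how a self-adaptive step size can be computed from the current residual alone, so that no estimate of ||A|| or ||B|| is ever needed, which is the norm-independence emphasized above.

```python
import numpy as np

def mssep_iterate(A, B, projs_C, projs_Q, x, y, max_iter=500, tol=1e-12):
    """Sketch of a projection iteration for the multiple-sets split
    equality problem: x in all C_i, y in all Q_j, with A x = B y.
    projs_C, projs_Q: lists of projection functions onto the C_i, Q_j."""
    for _ in range(max_iter):
        r = A @ x - B @ y                 # equality residual
        gx, gy = A.T @ r, -(B.T @ r)      # gradient of 0.5*||Ax - By||^2
        denom = gx @ gx + gy @ gy
        if denom < tol:
            break
        tau = (r @ r) / denom             # adaptive step, no operator norms
        x, y = x - tau * gx, y - tau * gy
        for P in projs_C:                 # cyclic projections onto the C_i
            x = P(x)
        for P in projs_Q:                 # and onto the Q_j
            y = P(y)
    return x, y
```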


Author(s): Ran Gu, Qiang Du

Abstract How to choose the step size of the gradient descent method has been a popular subject of research. In this paper we propose a modified limited memory steepest descent method (MLMSD). In each iteration, a selection rule picks a unique step size from a candidate set calculated by Fletcher's limited memory steepest descent method (LMSD), instead of sweeping through all the step sizes as in Fletcher's original LMSD algorithm. MLMSD is motivated by an inexact super-linear convergence rate analysis. The R-linear convergence of MLMSD is proved for strictly convex quadratic minimization problems. Numerical tests show that our algorithm is efficient and robust.
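The LMSD machinery referred to above can be made concrete for the quadratic case f(x) = 0.5 x^T A x - b^T x: the candidate set consists of reciprocals of Ritz values of A restricted to the span of the last m gradients. In the sketch below the selection rule simply takes the largest Ritz value (the most conservative step), which is a placeholder for illustration; the paper's MLMSD rule is more refined.

```python
import numpy as np

def lmsd_quadratic(A, b, x0, m=5, max_iter=500, tol=1e-8):
    """Limited-memory steepest descent sketch for f(x) = 0.5 x^T A x - b^T x,
    with A symmetric positive definite. One step size is picked per
    iteration from the Ritz-value candidate set (illustrative rule only)."""
    x = np.asarray(x0, dtype=float)
    G = []                                  # memory of recent gradients
    for _ in range(max_iter):
        g = A @ x - b
        if np.linalg.norm(g) < tol:
            break
        G = (G + [g])[-m:]                  # keep the last m gradients
        Q, _ = np.linalg.qr(np.column_stack(G))
        ritz = np.linalg.eigvalsh(Q.T @ A @ Q)   # Ritz values of A
        alpha = 1.0 / ritz[-1]              # conservative candidate
        x = x - alpha * g
    return x

# usage: ill-conditioned diagonal quadratic
n = 100
A = np.diag(np.linspace(1.0, 100.0, n))
xmin = lmsd_quadratic(A, np.ones(n), np.zeros(n))
```

With m = 1 this reduces to steepest descent with an exact line search; the memory is what lets the step sizes adapt to several curvature directions at once.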


2014 · Vol 511-512 · pp. 950-953
Author(s): Huan Xin Peng, Wen Kai Wang, Bin Liu

The convergence rate is very important in distributed consensus problems, especially for distributed consensus algorithms on large-scale complex networks. To accelerate convergence, we propose an optimized topology model obtained by randomly adding a few shortcuts to a nearest-neighbor coupling network, with the shortcuts following a normal distribution. Analyses and simulations show that the algebraic connectivity of the new model is larger than that of the NMW model, and that distributed consensus based on the new model converges faster than consensus based on the NMW model.
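The link between topology and convergence rate can be made concrete: for the standard consensus iteration x(k+1) = x(k) - eps * L x(k), where L is the graph Laplacian, the rate is governed by the algebraic connectivity lambda_2(L). The sketch below builds a nearest-neighbor coupling ring, adds a few uniformly random shortcuts (the normally distributed shortcut placement described above is not reproduced), and compares lambda_2 with and without them.

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest Laplacian eigenvalue; larger values mean the
    consensus iteration x <- x - eps*L*x converges faster."""
    L = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(L))[1]

def ring_with_shortcuts(n, k=2, n_shortcuts=0, rng=None):
    """Nearest-neighbor coupling ring (k neighbors per side) plus
    uniformly random shortcuts (a simplified stand-in for the model)."""
    if rng is None:
        rng = np.random.default_rng(0)
    adj = np.zeros((n, n))
    for i in range(n):
        for d in range(1, k + 1):
            j = (i + d) % n
            adj[i, j] = adj[j, i] = 1.0
    added = 0
    while added < n_shortcuts:
        i, j = rng.integers(0, n, size=2)
        if i != j and adj[i, j] == 0:
            adj[i, j] = adj[j, i] = 1.0
            added += 1
    return adj

# a handful of shortcuts raises lambda_2, and with it the consensus rate
print(algebraic_connectivity(ring_with_shortcuts(50)))
print(algebraic_connectivity(ring_with_shortcuts(50, n_shortcuts=10)))
```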


1992 · Vol 70 (2) · pp. 296-300
Author(s): Susumu Narita, Tai-ichi Shibuya

A new method is proposed for obtaining a few eigenvalues and eigenvectors of a large-scale RPA-type equation. Numerical tests of its convergence behavior show that the convergence rate is fast and quite satisfactory, and that it depends strongly on how the deviation vectors are estimated. The proposed scheme gives a better estimate of the deviation vectors than Davidson's scheme, and it is applicable to eigenvalue problems with nondiagonally dominant matrices as well. Keywords: large-scale eigenvalue problem, RPA-type equation, fast convergence.
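To illustrate the subspace-plus-correction structure such solvers share, here is a plain Davidson iteration for a symmetric matrix: project onto a growing subspace, extract Ritz pairs, and expand the subspace with diagonally preconditioned residuals (the role played by the deviation vectors). The paper's improved estimation of those vectors and its handling of the non-symmetric RPA block structure are not reproduced in this sketch.

```python
import numpy as np

def davidson_lowest(A, n_eig=2, max_iter=50, tol=1e-8):
    """Plain Davidson iteration for the lowest eigenpairs of a
    symmetric matrix A (illustrative only, no RPA structure)."""
    n = A.shape[0]
    V = np.eye(n, n_eig)                     # initial subspace basis
    diag = np.diag(A)
    for _ in range(max_iter):
        V, _ = np.linalg.qr(V)               # re-orthonormalize the basis
        vals, vecs = np.linalg.eigh(V.T @ A @ V)
        X = V @ vecs[:, :n_eig]              # Ritz vectors
        R = A @ X - X * vals[:n_eig]         # residuals of the Ritz pairs
        if np.linalg.norm(R) < tol:
            break
        # Davidson correction: diagonal preconditioner on each residual
        corr = [R[:, i] / (vals[i] - diag + 1e-12) for i in range(n_eig)]
        V = np.hstack([V] + [c[:, None] for c in corr])
    return vals[:n_eig], X
```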

