On modified proximal point algorithms for solving minimization problems and fixed point problems in CAT(κ) spaces

Author(s): Nuttapol Pakkaranang, Poom Kumam, Ching-Feng Wen, Jen-Chih Yao, Yeol Je Cho

2018, Vol. 51(1), pp. 277-294
Author(s): Kazeem O. Aremu, Chinedu Izuchukwu, Godwin C. Ugwunnadi, Oluwatosin T. Mewomo

Abstract In this paper, we introduce and study the class of demimetric mappings in CAT(0) spaces. We then propose a modified proximal point algorithm for approximating a common solution of a finite family of minimization problems and fixed point problems in CAT(0) spaces. Furthermore, we establish strong convergence of the proposed algorithm to a common solution of a finite family of minimization problems and fixed point problems for a finite family of demimetric mappings in complete CAT(0) spaces. A numerical example which illustrates the applicability of our proposed algorithm is also given. Our results improve and extend some recent results in the literature.
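The proximal point step referred to in this abstract is standardly defined through the Moreau–Yosida resolvent of a convex, lower semicontinuous function $f$ on a CAT(0) space $(X, d)$; the following is the standard form of that iteration as it appears in the literature, not the authors' full modified algorithm:

$J_{\lambda} x = \arg\min_{y \in X} \Big[ f(y) + \tfrac{1}{2\lambda}\, d^2(y, x) \Big], \qquad x_{n+1} = J_{\lambda_n} x_n, \quad \lambda_n > 0.$

Modified variants of the type studied here typically interleave this resolvent step with evaluations of the fixed-point mappings and a convex-combination (geodesic averaging) step.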


Optimization, 2019, Vol. 69(7-8), pp. 1655-1680
Author(s): Nopparat Wairojjana, Nuttapol Pakkaranang, Izhar Uddin, Poom Kumam, Aliyu Muhammed Awwal

2017, Vol. 96(1), pp. 162-170
Author(s): Shin-ya Matsushita

The Krasnosel’skiĭ–Mann (KM) iteration is a widely used method for solving fixed point problems. This paper investigates the convergence rate of the KM iteration. We first establish a new convergence rate for the KM iteration which improves the known big-$O$ rate to little-$o$ without any additional restrictions. The proof relies on a connection between the KM iteration and a technique for estimating the convergence rate of summable sequences. We then apply the result to obtain new convergence rates for the proximal point algorithm and the Douglas–Rachford method.
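The KM scheme itself is simple to state: $x_{k+1} = (1-\alpha_k)x_k + \alpha_k T(x_k)$ for a nonexpansive operator $T$. A minimal illustrative sketch in Python follows; the operator (a planar rotation, whose unique fixed point is the origin), the constant step size, and the iteration count are assumptions chosen for demonstration and are not from the paper:

```python
import numpy as np

def km_iteration(T, x0, alpha=0.5, n_iter=200):
    """Krasnosel'skii-Mann iteration x_{k+1} = (1-alpha)*x_k + alpha*T(x_k).

    Returns the final iterate and the residuals ||x_k - T(x_k)||, whose
    decay rate is the quantity analyzed in the paper.
    """
    x = np.asarray(x0, dtype=float)
    residuals = []
    for _ in range(n_iter):
        Tx = T(x)
        residuals.append(np.linalg.norm(x - Tx))
        x = (1 - alpha) * x + alpha * Tx
    return x, residuals

# Example operator: rotation by 90 degrees about the origin.
# Rotations are nonexpansive (in fact isometries), and this one has
# the origin as its unique fixed point.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: R @ x

x_star, res = km_iteration(T, x0=[1.0, 0.0])
```

Note that the plain Picard iteration $x_{k+1} = T(x_k)$ would cycle forever on this rotation; the averaging step with $\alpha \in (0,1)$ is what forces the iterates, and the residuals $\|x_k - T x_k\|$, to converge.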


2021, Vol. 10(1), pp. 1154-1177
Author(s): Patrick L. Combettes, Lilian E. Glaudin

Abstract Various strategies are available to construct iteratively a common fixed point of nonexpansive operators by activating only a block of operators at each iteration. In the more challenging class of composite fixed point problems involving operators that do not share common fixed points, current methods require the activation of all the operators at each iteration, and the question of maintaining convergence while updating only blocks of operators is open. We propose a method that achieves this goal and analyze its asymptotic behavior. Weak, strong, and linear convergence results are established by exploiting a connection with the theory of concentrating arrays. Applications to several nonlinear and nonsmooth analysis problems are presented, ranging from monotone inclusions and inconsistent feasibility problems, to variational inequalities and minimization problems arising in data science.
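The block-activation idea can be illustrated in its simplest classical instance: cyclically activating one projection operator per step to find a common fixed point of projectors (the method of cyclic projections). This Python sketch is only that classical toy, under the stated assumptions (projections onto two lines through the origin in the plane); it is not the composite block-iterative method of the paper:

```python
import numpy as np

def project_line(x, d):
    """Orthogonal projection of x onto the line through 0 with direction d."""
    d = d / np.linalg.norm(d)
    return np.dot(x, d) * d

def cyclic_projections(x0, directions, n_sweeps=60):
    """Activate one projector per step, cycling through the blocks.

    Each projector is nonexpansive, and a point fixed by all of them
    lies on every line, i.e. in their intersection.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_sweeps):
        for d in directions:
            x = project_line(x, np.asarray(d, dtype=float))
    return x

# Two distinct lines through the origin: their only common point
# (the unique common fixed point of the two projectors) is 0,
# which the iterates approach linearly.
x = cyclic_projections([1.0, 0.7], [[1.0, 0.0], [1.0, 1.0]])
```

In this consistent toy case every operator shares the fixed point; the harder setting treated in the abstract is precisely the one where no common fixed point exists and only a block of operators is updated per iteration.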

