Robust Stopping Criteria for Dykstra's Algorithm

2005 ◽ Vol 26 (4) ◽ pp. 1405–1414
Author(s): Ernesto G. Birgin, Marcos Raydan
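No abstract is available for this entry, but the topic lends itself to a short illustration. The following minimal Python sketch, assuming two simple convex sets (a halfspace and a box), shows Dykstra's alternating-projection algorithm together with an increment-based stopping test: instead of checking only whether ‖x_k − x_{k−1}‖ is small, which can happen long before convergence, it monitors the cycle-to-cycle change of the correction vectors. The sets, tolerances, and the monitored quantity c_k are illustrative assumptions, not the exact criteria proposed in the paper.

```python
import numpy as np

def proj_halfspace(x, a, beta):
    """Project x onto the halfspace {y : a.y <= beta}."""
    viol = a @ x - beta
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def proj_box(x, lo, hi):
    """Project x onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def dykstra(x0, tol=1e-12, max_cycles=10_000):
    """Dykstra's algorithm for two convex sets with an
    increment-based stopping test (illustrative choice)."""
    a, beta = np.array([1.0, 1.0]), 1.0       # halfspace: x1 + x2 <= 1
    x = x0.astype(float).copy()
    p = np.zeros_like(x)                      # correction for the halfspace
    q = np.zeros_like(x)                      # correction for the box
    for k in range(max_cycles):
        p_old, q_old = p.copy(), q.copy()
        y = proj_halfspace(x + p, a, beta)
        p = x + p - y                         # update first correction
        x = proj_box(y + q, 0.0, 2.0)
        q = y + q - x                         # update second correction
        # Monitor the change of the corrections over one cycle;
        # ||x_k - x_{k-1}|| alone can be tiny far from convergence.
        c_k = np.sum((p - p_old) ** 2) + np.sum((q - q_old) ** 2)
        if c_k < tol:
            return x, k + 1
    return x, max_cycles

# Nearest point of the intersection to (3.0, 2.5) is (0.75, 0.25).
x_star, cycles = dykstra(np.array([3.0, 2.5]))
print(x_star, cycles)
```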

Author(s): Alexander Haberl, Dirk Praetorius, Stefan Schimanko, Martin Vohralík

Abstract: We consider a second-order elliptic boundary value problem with a strongly monotone and Lipschitz-continuous nonlinearity. We design and study its adaptive numerical approximation, interconnecting a finite element discretization, the Banach–Picard linearization, and a contractive linear algebraic solver. In particular, we identify stopping criteria for the algebraic solver that, on the one hand, do not request an overly tight tolerance but, on the other hand, are sufficient for the inexact (perturbed) Banach–Picard linearization to remain contractive. Similarly, we identify suitable stopping criteria for the Banach–Picard iteration that leave an amount of linearization error that does not prevent the residual a posteriori error estimate from reliably steering the adaptive mesh refinement. For the resulting algorithm, we prove a contraction of the (doubly) inexact iterates after a finite number of mesh-refinement/linearization/algebraic solver steps, leading to its linear convergence. Moreover, for usual mesh-refinement rules, we also prove that the overall error decays at the optimal rate with respect to the number of elements (degrees of freedom) added with respect to the initial mesh. Finally, we prove that our fully adaptive algorithm drives the overall error down at the same optimal rate also with respect to the overall algorithmic cost, expressed as the cumulative sum of the number of mesh elements over all mesh-refinement, linearization, and algebraic solver steps. Numerical experiments support these theoretical findings and illustrate the optimal overall algorithmic cost of the fully adaptive algorithm on several test cases.
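The interplay of stopping criteria described here can be illustrated without any finite element machinery. The following minimal Python sketch applies the idea to a finite-dimensional strongly monotone system A(u) = Ku + arctan(u) = f: an outer Banach–Picard (Zarantonello-type) iteration whose linear step Kv = delta*(f − A(u)) is solved inexactly by a contractive Richardson solver, stopped once the algebraic residual drops below a fixed fraction lam_alg of the current linearization increment ‖v‖. The model problem, the step size delta, and the factor lam_alg are illustrative assumptions; the criteria in the paper are additionally coupled to an a posteriori discretization error estimator and to adaptive mesh refinement, which this sketch omits.

```python
import numpy as np

# Model problem: A(u) = K u + arctan(u) = f with K symmetric positive
# definite and well-conditioned; A is strongly monotone and Lipschitz.
n = 50
K = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)

def A(u):
    return K @ u + np.arctan(u)

delta = 0.5      # Banach-Picard step size (illustrative)
lam_alg = 0.1    # algebraic-vs-linearization balance factor (illustrative)
tol = 1e-8       # outer tolerance

def richardson(rhs, n_inner_max=10_000):
    """Contractive inner solver for K v = rhs.  It stops as soon as the
    algebraic residual is <= lam_alg * ||v||, i.e. the algebraic error is
    kept proportional to the linearization increment, never tighter."""
    omega = 1.0 / np.linalg.norm(K, 2)   # damped Richardson step size
    v = np.zeros_like(rhs)
    for _ in range(n_inner_max):
        res = rhs - K @ v
        # The max(..., tol) guard avoids a zero threshold at v = 0.
        if np.linalg.norm(res) <= lam_alg * max(np.linalg.norm(v), tol):
            break
        v += omega * res
    return v

u = np.zeros(n)
for i in range(1000):
    rhs = delta * (f - A(u))
    v = richardson(rhs)      # inexact (perturbed) linear solve
    u += v
    # Outer stopping criterion: the linearization increment is small.
    if np.linalg.norm(v) < tol:
        break

print(i + 1, np.linalg.norm(f - A(u)))   # outer steps, nonlinear residual
```

Because the algebraic inexactness is proportional to the current increment rather than to a fixed tight tolerance, the perturbed outer iteration stays contractive while the inner solver is never asked for more accuracy than the linearization error warrants, which mirrors the balance the abstract describes.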


Author(s): Fred J. Hickernell, Sou-Cheng T. Choi, Lan Jiang, Lluís Antoni Jiménez Rugama

2020 ◽ Vol 14 (4) ◽ pp. 285–311
Author(s): Bernd Bassimir, Manuel Schmitt, Rolf Wanka

Abstract: We study a variant of Particle Swarm Optimization that, whenever the so-called potential of the swarm in some dimension falls below a small user-chosen bound, replaces the regular velocity update in that dimension with a random velocity; the swarm then performs a forced move. In this paper, we are interested in how the swarm, by counting the forced moves, can decide for itself to stop, because finding a better candidate solution than the best one already found has become improbable. We formally prove that when the swarm is close to a (local) optimum, it behaves like a blindly searching cloud, and that the frequency of forced moves then exceeds a certain value that is independent of the objective function. Based on this observation, we define stopping criteria and evaluate them experimentally, showing that good candidate solutions can be found much faster than with a fixed upper bound on the number of iterations, and that the returned solutions are better than those obtained with other stopping criteria from the literature.
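The mechanism can be sketched in a few lines of Python. The example below assumes a standard global-best PSO on the sphere function, uses a simple velocity-plus-distance proxy for the per-dimension swarm potential, and stops once forced moves have occurred in a sufficiently large fraction of recent iterations; the paper's precise potential definition, bound, and frequency threshold differ, so all parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # illustrative objective function
    return float(np.sum(x ** 2))

N, D = 20, 5                         # particles, dimensions
W, C1, C2 = 0.72, 1.49, 1.49         # common PSO coefficients
DELTA = 1e-6                         # potential bound (user-chosen)
WINDOW, FREQ = 200, 0.2              # stop if >= 20% of the last 200
                                     # iterations contained a forced move

pos = rng.uniform(-5.0, 5.0, (N, D))
vel = np.zeros((N, D))
pbest, pbest_val = pos.copy(), np.array([sphere(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

forced = []                          # forced-move count per iteration
for it in range(100_000):
    r1, r2 = rng.random((N, D)), rng.random((N, D))
    vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    # Proxy potential per dimension: remaining movement capacity of the
    # swarm.  Where it drops below DELTA, force random velocities.
    potential = np.abs(vel).sum(axis=0) + np.abs(gbest - pos).sum(axis=0)
    low = potential < DELTA
    if low.any():
        vel[:, low] = rng.uniform(-DELTA, DELTA, (N, int(low.sum())))
    forced.append(int(low.sum()))
    pos += vel
    vals = np.array([sphere(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[np.argmin(pbest_val)].copy()
    # Stopping criterion: forced moves happen so frequently that the
    # swarm is effectively searching blindly around the found optimum.
    if it >= WINDOW and np.mean([c > 0 for c in forced[-WINDOW:]]) >= FREQ:
        break

print(it + 1, sphere(gbest))         # iterations used and best value
```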

