On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods

2013 · Vol 237 (1) · pp. 363–372
Author(s): Miquel Grau-Sánchez, Miquel Noguera, Sergio Amat
Mathematics · 2021 · Vol 9 (16) · pp. 1855
Author(s): Petko D. Proinov, Maria T. Vasileva

One of the famous third-order iterative methods for finding all the zeros of a polynomial simultaneously was introduced by Ehrlich in 1967. In this paper, we construct a new family of high-order iterative methods as a combination of Ehrlich’s iteration function and an arbitrary iteration function. We call these methods Ehrlich’s methods with correction. The paper provides a detailed local convergence analysis of the presented iterative methods for a large class of iteration functions. As a consequence, we obtain two types of local convergence theorems as well as semilocal convergence theorems (with computer-verifiable initial conditions). As special cases of the main results, we study the convergence of several particular iterative methods. The paper ends with some experiments that show the applicability of our semilocal convergence theorems.
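Ehrlich’s basic third-order step (without correction) can be sketched in a few lines. The polynomial, its derivative and the starting vector below are illustrative choices, not examples from the paper:

```python
import numpy as np

def ehrlich_step(p, dp, z):
    """One Ehrlich iteration on the vector z of simultaneous root approximations."""
    z_new = np.empty_like(z)
    for i, zi in enumerate(z):
        w = p(zi) / dp(zi)                        # Newton correction p/p'
        s = sum(1.0 / (zi - zj) for j, zj in enumerate(z) if j != i)
        z_new[i] = zi - w / (1.0 - w * s)         # third-order Ehrlich update
    return z_new

# Illustrative example: the cube roots of unity, p(z) = z^3 - 1.
p  = lambda z: z**3 - 1
dp = lambda z: 3 * z**2
z = np.array([1.2 + 0.1j, -0.4 + 0.9j, -0.6 - 0.8j])
for _ in range(8):
    z = ehrlich_step(p, dp, z)
```

Ehrlich’s method with correction, as studied in the paper, composes this update with an arbitrary iteration function; the sketch above shows only the base iteration.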


2018 · Vol 34 (1) · pp. 85–92
Author(s): Ion Pavaloiu

We consider an Aitken–Steffensen-type method in which the nodes are controlled by Newton and two-step Newton iterations. We prove a local convergence result showing that the iterations have q-convergence order 7. Under certain supplementary conditions, we obtain monotone convergence of the iterations, providing an alternative to the usual ball-attraction theorems. Numerical examples show that this method may, in some cases, have larger (possibly one-sided) convergence domains than other methods of similar convergence order.
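A schematic scalar version of an Aitken–Steffensen-type step with Newton-generated nodes might look as follows; the precise node construction and combination in the paper may differ from this sketch, and the test function is our own illustrative choice:

```python
def aitken_steffensen_newton(f, df, x, tol=1e-14, max_iter=20):
    """Aitken-Steffensen-type iteration whose divided-difference nodes u, v
    are produced by successive Newton steps (schematic sketch)."""
    for _ in range(max_iter):
        u = x - f(x) / df(x)              # node from one Newton step
        v = u - f(u) / df(u)              # node from a further Newton step
        if u == v:                        # nodes have merged: converged
            return v
        dd = (f(u) - f(v)) / (u - v)      # divided difference [u, v; f]
        x_new = v - f(v) / dd             # chord (Steffensen-type) correction
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative example (not from the paper): f(x) = x^2 - 2.
root = aitken_steffensen_newton(lambda x: x*x - 2, lambda x: 2*x, 1.5)
```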


Mathematics · 2022 · Vol 10 (1) · pp. 135
Author(s): Stoil I. Ivanov

In this paper, we establish two local convergence theorems that provide initial conditions and error estimates guaranteeing the Q-convergence of an extended version of the Chebyshev–Halley family of iterative methods for multiple polynomial zeros due to Osada (J. Comput. Appl. Math. 2008, 216, 585–599). Our results unify and complement earlier local convergence results for the Halley, Chebyshev and Super-Halley methods for multiple polynomial zeros. To the best of our knowledge, the results on Osada’s method for multiple polynomial zeros are the first of their kind in the literature. Moreover, our unified approach allows us to compare the convergence domains and error estimates of the mentioned famous methods and of several new randomly generated methods.
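For a zero of known multiplicity m, Osada’s third-order method is commonly written as xₙ₊₁ = xₙ − ½m(m+1)·f(xₙ)/f′(xₙ) + ½(m−1)²·f′(xₙ)/f″(xₙ). The sketch below implements this classical form only; the extended Chebyshev–Halley family analyzed in the paper is more general, and the test polynomial is our own choice:

```python
def osada(f, df, d2f, x, m, n_iter=8):
    """Osada's third-order method for a root of multiplicity m (classical form)."""
    for _ in range(n_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        if dfx == 0.0 or d2fx == 0.0:
            break                          # avoid division by zero at the root
        x = x - 0.5*m*(m + 1) * fx/dfx + 0.5*(m - 1)**2 * dfx/d2fx
    return x

# Illustrative example: (x - 1)^3 (x + 2) has a zero of multiplicity 3 at x = 1.
f   = lambda x: (x - 1)**3 * (x + 2)
df  = lambda x: 3*(x - 1)**2 * (x + 2) + (x - 1)**3
d2f = lambda x: 6*(x - 1) * (x + 2) + 6*(x - 1)**2
root = osada(f, df, d2f, 1.3, m=3)
```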


2019 · Vol 28 (1) · pp. 19–26
Author(s): Ioannis K. Argyros, Santhosh George

We present the local as well as the semilocal convergence of some derivative-free iterative methods for Banach space valued operators. These methods contain the secant and the Kurchatov method as special cases. The convergence is based on weak hypotheses that specialize to Lipschitz-continuous or Hölder-continuous hypotheses. The results are of theoretical and practical interest. In particular, the method compares favorably to other methods in concrete numerical examples involving systems of equations with a nondifferentiable term.
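The Kurchatov iteration, one of the two special cases mentioned, replaces the derivative by the divided difference taken over the symmetric pair (2xₙ − xₙ₋₁, xₙ₋₁). A scalar sketch, with an illustrative nondifferentiable equation of our own choosing rather than one of the paper’s examples:

```python
def kurchatov(f, x_prev, x, tol=1e-12, max_iter=30):
    """Kurchatov's derivative-free method: second-order iteration using
    the divided difference f[2x - x_prev, x_prev]."""
    for _ in range(max_iter):
        a, b = 2*x - x_prev, x_prev
        if a == b:                          # iterates have merged: converged
            break
        dd = (f(a) - f(b)) / (a - b)        # divided difference over (2x - x_prev, x_prev)
        x_prev, x = x, x - f(x) / dd
        if abs(x - x_prev) < tol:
            break
    return x

# Equation with a nondifferentiable term: x^2 + |x| - 4 = 0.
root = kurchatov(lambda x: x*x + abs(x) - 4, 1.0, 1.5)
```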


Mathematics · 2020 · Vol 8 (8) · pp. 1251
Author(s): Munish Kansal, Alicia Cordero, Sonia Bhalla, Juan R. Torregrosa

In the recent literature, very few high-order Jacobian-free methods with memory for solving nonlinear systems appear. In this paper, we introduce a new variant of King’s family of order four to solve nonlinear systems, along with its convergence analysis. The proposed family requires two divided difference operators and the computation of only one matrix inverse per iteration. Furthermore, we extend the proposed scheme up to sixth order of convergence with two additional functional evaluations. In addition, these schemes are further extended to methods with memory. We illustrate their applicability by performing numerical experiments on a wide variety of practical problems, including large-scale ones. It is observed that these methods produce approximations of greater accuracy and are more efficient in practice than the existing methods.
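In the scalar case, the flavor of a Jacobian-free King-type step can be sketched by replacing f′(xₙ) with the divided difference f[xₙ, xₙ + f(xₙ)]. This toy version is our own illustration and does not reproduce the system-level scheme (two divided difference operators, one matrix inverse per iteration) analyzed in the paper:

```python
def king_derivative_free(f, x, beta=0.0, tol=1e-13, max_iter=12):
    """Steffensen-type King iteration: the derivative is replaced by the
    divided difference f[x, x + f(x)] (scalar sketch)."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + fx
        dd = (f(w) - fx) / (w - x)          # divided difference f[x, w]
        y = x - fx / dd                     # Steffensen-type predictor
        fy = f(y)
        # King-type corrector with weight parameter beta (beta = 0: Ostrowski variant)
        x = y - (fy / dd) * (fx + beta*fy) / (fx + (beta - 2.0)*fy)
    return x

# Illustrative example: x^3 - x - 1 = 0 (real root near 1.3247).
root = king_derivative_free(lambda x: x**3 - x - 1, 1.3)
```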


Author(s): Andreas Griewank

Iterative methods for solving a square system of nonlinear equations g(x) = 0 often require that the sum-of-squares residual γ(x) ≡ ½‖g(x)‖² be reduced at each step. Since the gradient of γ depends on the Jacobian ∇g, this stabilization strategy is not easily implemented if only approximations Bk to ∇g are available. Therefore most quasi-Newton algorithms either include special updating steps or reset Bk to a divided difference estimate of ∇g whenever no satisfactory progress is made. Here the need for such back-up devices is avoided by a derivative-free line search in the range of g. Assuming that the Bk are generated from an arbitrary B0 by fixed-scale updates, we establish superlinear convergence from within any compact level set of γ on which g has a differentiable inverse function g⁻¹.
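The quasi-Newton setting can be illustrated with a plain Broyden iteration; the back-up devices and the paper’s derivative-free range-space line search are not reproduced here (full steps are taken), and the 2×2 test system and initial Jacobian estimate are our own illustrative choices:

```python
import numpy as np

def broyden(g, x, B, max_iter=25, tol=1e-10):
    """Broyden's (good) method: rank-one updates
    B_{k+1} = B_k + (dg - B_k s) s^T / (s^T s), full steps, no line search."""
    gx = g(x)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -gx)                      # quasi-Newton step
        x_new = x + s
        g_new = g(x_new)
        B = B + np.outer(g_new - gx - B @ s, s) / (s @ s)  # rank-one update
        x, gx = x_new, g_new
        if np.linalg.norm(gx) < tol:
            break
    return x

# Illustrative system: x^2 + y^2 = 4, x*y = 1.
g = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
x0 = np.array([2.0, 0.5])
B0 = np.array([[4.0, 1.0], [0.5, 2.0]])   # rough Jacobian estimate at x0
sol = broyden(g, x0, B0)
```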


Mathematics · 2018 · Vol 6 (11) · pp. 260
Author(s): Janak Sharma, Ioannis Argyros, Sunil Kumar

The convergence order of numerous iterative methods is obtained using derivatives of higher order, even though these derivatives do not appear in the methods themselves. Consequently, such methods cannot be applied to equations involving functions that lack these high-order derivatives, since their convergence is not guaranteed. In this paper, convergence is established using only the first derivative, thereby expanding the applicability of several popular methods.


Mathematics · 2019 · Vol 7 (1) · pp. 99
Author(s): Ioannis Argyros, Stepan Shakhno, Yurii Shunkin

We study an iterative differential-difference method for solving nonlinear least squares problems which uses, instead of the Jacobian, the sum of the derivative of the differentiable part of the operator and the divided difference of the nondifferentiable part. Moreover, we introduce a method that uses only the derivative of the differentiable part instead of the Jacobian. We present results, established in earlier work, on the convergence conditions, the convergence radius and the convergence order of the proposed methods. Numerical examples illustrate the theoretical results.
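The idea can be sketched for a toy one-parameter least squares problem with residual F(x) = G(x) + H(x), where G is differentiable and H is not: the Jacobian surrogate is G′(xₙ) plus the divided difference of H over the last two iterates. The decomposition below is our illustrative choice, not an example from the paper:

```python
import numpy as np

def dd_gauss_newton(G, dG, H, x_prev, x, n_iter=25):
    """Gauss-Newton-type iteration with the Jacobian surrogate
    A_n = G'(x_n) + H[x_n, x_{n-1}]  (scalar parameter, vector residuals)."""
    for _ in range(n_iter):
        if x == x_prev:                                  # iterates merged: converged
            break
        A = dG(x) + (H(x) - H(x_prev)) / (x - x_prev)    # derivative + divided difference
        F = G(x) + H(x)                                  # full residual vector
        x_prev, x = x, x - (A @ F) / (A @ A)             # normal-equation step
    return x

# Residuals F(x) = [x^2 - 2, |x| - sqrt(2)]: differentiable part G, nondifferentiable part H.
G  = lambda x: np.array([x**2 - 2.0, 0.0])
dG = lambda x: np.array([2.0*x, 0.0])
H  = lambda x: np.array([0.0, abs(x) - np.sqrt(2.0)])
sol = dd_gauss_newton(G, dG, H, 1.4, 1.5)
```

This zero-residual example converges to √2, where both residual components vanish.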

