New Families of Third-Order Iterative Methods for Finding Multiple Roots

2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
R. F. Lin ◽  
H. M. Ren ◽  
Z. Šmarda ◽  
Q. B. Wu ◽  
Y. Khan ◽  
...  

Two families of third-order iterative methods for finding multiple roots of nonlinear equations are developed in this paper. Mild conditions are given to assure the cubic convergence of two iteration schemes (I) and (II). The presented families include many third-order methods for finding multiple roots, such as the known Dong's methods and Neta's method. Some new concrete iterative methods are provided. Each member of the two families requires two evaluations of the function and one of its first derivative per iteration. All these methods require the knowledge of the multiplicity. The obtained methods are also compared in their performance with various other iteration methods via numerical examples, and it is observed that these have better performance than the modified Newton method, and demonstrate at least equal performance to iterative methods of the same order.
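
The modified Newton baseline mentioned above, for a root of known multiplicity m, can be sketched as follows; the function name and the example equation are illustrative choices, not taken from the paper:

```python
def modified_newton(f, fprime, x0, m, tol=1e-12, max_iter=50):
    """Modified Newton step x <- x - m*f(x)/f'(x): the classical baseline
    that restores quadratic convergence at a root of known multiplicity m."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= m * fx / fprime(x)
    return x

# Illustrative example: (x - 1)^3 has a root of multiplicity 3 at x = 1
root = modified_newton(lambda x: (x - 1.0)**3,
                       lambda x: 3.0 * (x - 1.0)**2, x0=2.0, m=3)
```

The third-order families of the paper improve on this scheme at the cost of one extra function evaluation per iteration.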

2021 ◽  
Vol 40 (3) ◽  
Author(s):  
Lv Zhang ◽  
Qing-Biao Wu ◽  
Min-Hong Chen ◽  
Rong-Fei Lin

In this paper, we mainly discuss iterative methods for solving nonlinear systems with complex symmetric Jacobian matrices. By applying an FPAE iteration (a fixed-point iteration adding asymptotical error) as the inner iteration of the Newton method and the modified Newton method, we obtain the so-called Newton-FPAE method and modified Newton-FPAE method. The local and semi-local convergence properties under a Lipschitz condition are analyzed. Finally, some numerical examples are given to demonstrate the feasibility and validity of the two new methods by comparing them with some other iterative methods.
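
The FPAE inner solver itself is specific to the paper, but the outer structure, Newton's method with an approximate inner linear solve, can be sketched generically. Below, a damped Richardson fixed-point iteration stands in for the inner solver, and the test system is a hypothetical example, not one from the paper:

```python
import numpy as np

def inexact_newton(F, J, x0, inner_steps=50, tol=1e-10, max_outer=100):
    """Inexact Newton method: each Newton system J(x) d = -F(x) is solved
    only approximately by a damped Richardson fixed-point inner iteration
    (a stand-in for an inner solver such as FPAE)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        A = J(x)
        alpha = 1.0 / np.linalg.norm(A, np.inf)   # simple damping parameter
        d = np.zeros_like(x)
        for _ in range(inner_steps):              # d <- d + alpha * (-Fx - A d)
            d += alpha * (-Fx - A @ d)
        x = x + d
    return x

# Hypothetical test system with root (1, 1)
F = lambda x: np.array([x[0]**3 + 4*x[0] - 5, x[0] + x[1] - 2])
J = lambda x: np.array([[3*x[0]**2 + 4, 0.0], [1.0, 1.0]])
sol = inexact_newton(F, J, np.array([2.0, 2.0]))
```

The design point is that the inner solver only needs to reduce the linear residual enough for the outer Newton iteration to keep converging, which is what makes cheap fixed-point inner iterations attractive.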


2013 ◽  
Vol 2013 ◽  
pp. 1-8 ◽  
Author(s):  
Gustavo Fernández-Torres ◽  
Juan Vásquez-Aquino

We present new modifications to Newton's method for solving nonlinear equations. The analysis of convergence shows that these methods have fourth-order convergence. Each of the three methods uses three functional evaluations. Thus, according to Kung-Traub's conjecture, these are optimal methods. With the previous ideas, we extend the analysis to functions with multiple roots. Several numerical examples are given to illustrate that the presented methods have better performance compared with Newton's classical method and other methods of fourth-order convergence recently published.
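
A classical example of an optimal fourth-order scheme in the sense of the Kung-Traub conjecture (order four from three evaluations: f(x), f'(x), f(y)) is Ostrowski's method, shown here as an illustrative sketch rather than as the specific methods of the paper:

```python
import math

def ostrowski(f, fprime, x0, tol=1e-14, max_iter=20):
    """Ostrowski's method: a Newton predictor step followed by a weighted
    corrector; fourth-order convergence from three evaluations per iteration."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = fprime(x)
        y = x - fx / dfx                            # Newton predictor
        fy = f(y)
        x = y - fy * fx / (dfx * (fx - 2.0 * fy))   # fourth-order corrector
    return x

# Illustrative example: the root of cos(x) - x near 0.739
r = ostrowski(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1.0, 1.0)
```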


2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Fiza Zafar ◽  
Gulshan Bibi

We present a family of fourteenth-order convergent iterative methods for solving nonlinear equations, built around a specific step which, when combined with any two-step iterative method of convergence order n, raises the convergence order to n + 10. Each member of this new class requires four evaluations of the function and one evaluation of the first derivative per iteration. Therefore, the efficiency index of this family is 14^(1/5) ≈ 1.695218203. Several numerical examples are given to show that the new methods of this family are comparable with the existing methods.
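
The efficiency index is a one-line computation; the order and evaluation counts below are the figures stated in the abstract:

```python
# Efficiency index p**(1/d): convergence order p achieved with d
# function/derivative evaluations per iteration.
order, evals = 14, 5          # four f evaluations plus one f' evaluation
efficiency = order ** (1 / evals)
```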


2017 ◽  
Vol 51 (1) ◽  
pp. 1-14
Author(s):  
Ioannis K. Argyros ◽  
Santhosh George

We present a local convergence analysis for a family of Steffensen-type third-order methods in order to approximate a solution of a nonlinear equation. We use hypotheses only up to the first derivative, in contrast to earlier studies such as [2, 4, 6-28], which use hypotheses up to the fourth derivative. This way the applicability of these methods is extended under weaker hypotheses. Moreover, the radius of convergence and computable error bounds on the distances involved are also given. Numerical examples are also presented in this study.
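
The derivative-free base scheme underlying Steffensen-type methods is the classical Steffensen iteration, sketched here for illustration (the paper's third-order family itself is not reproduced):

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Classical Steffensen iteration: Newton's method with f'(x) replaced
    by the divided difference (f(x + f(x)) - f(x)) / f(x), which keeps
    quadratic convergence without evaluating any derivative."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        g = (f(x + fx) - fx) / fx    # derivative-free slope estimate
        x -= fx / g
    return x

# Illustrative example: square root of 2
root = steffensen(lambda x: x * x - 2.0, 1.5)
```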


2016 ◽  
Vol 11 (10) ◽  
pp. 5774-5780
Author(s):  
Rajinder Thukral

A new one-point iterative method for solving nonlinear equations is constructed. It is proved that the new method has convergence order three. Per iteration, the new method requires two evaluations of the function. Kung and Traub conjectured that multipoint iteration methods without memory based on n evaluations can achieve maximum convergence order 2^(n-1); the new method, however, attains convergence order three, which exceeds the expected maximum convergence order of two. Hence, we demonstrate that the conjecture fails for a particular set of nonlinear equations. Numerical comparisons are included to demonstrate the exceptional convergence speed of the proposed method using only a few function evaluations.


2008 ◽  
Vol 18 (1) ◽  
pp. 53-61 ◽  
Author(s):  
Kuo-Jung Wu ◽  
Hsui-Li Lei ◽  
Sung-Te Jung ◽  
Peter Chu

This paper responds to the paper of Dohi, Kaio and Osaki published in RAIRO: Operations Research, 26, 1-14 (1992) on an EPQ model with present value. The purpose of this paper is threefold. First, we prove that the first derivative of the objective function is convex and increasing. Second, we apply the Newton method to find the optimal cycle time. Third, we provide some numerical examples to demonstrate that the Newton method is more efficient than the bisection method.
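
The claimed advantage of Newton over bisection can be illustrated on any increasing convex function standing in for the first derivative of the cost function; the function g below is a hypothetical stand-in, not the paper's EPQ objective:

```python
import math

def newton_count(g, gprime, t0, tol=1e-10):
    """Newton's method on g, returning the root and the iteration count."""
    t, n = t0, 0
    while abs(g(t)) > tol:
        t -= g(t) / gprime(t)
        n += 1
    return t, n

def bisection_count(g, a, b, tol=1e-10):
    """Bisection on [a, b] with g(a) < 0 < g(b), returning root and count."""
    n = 0
    while b - a > tol:
        mid = 0.5 * (a + b)
        if g(mid) > 0:
            b = mid
        else:
            a = mid
        n += 1
    return 0.5 * (a + b), n

# Hypothetical increasing convex stand-in for the cost derivative: e^t - 5
g = lambda t: math.exp(t) - 5.0
t_newton, k_newton = newton_count(g, lambda t: math.exp(t), 3.0)
t_bisect, k_bisect = bisection_count(g, 0.0, 3.0)
```

On such functions Newton needs only a handful of iterations, while bisection needs roughly log2((b - a)/tol) of them, which is the efficiency gap the paper's numerical examples quantify.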


Author(s):  
Ştefan Măruşter

The aim of this paper is to investigate the local convergence of the modified Newton method, i.e. the classical Newton method in which the first derivative is re-evaluated periodically after m steps. The convergence order is shown to be m + 1. A new algorithm is proposed for estimating the convergence radius of the method. We also propose a threshold for the number of steps after which it is recommended to re-evaluate the first derivative in the modified Newton method.
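
The scheme described, Newton's method with the derivative frozen for m steps at a time, can be sketched as follows; the test equation is an illustrative choice:

```python
def frozen_newton(f, fprime, x0, m, tol=1e-12, max_iter=100):
    """Newton's method with the first derivative re-evaluated only every
    m steps (the 'modified Newton' discussed above; order m + 1 per the paper)."""
    x = x0
    d = fprime(x0)
    for k in range(max_iter):
        if k % m == 0:
            d = fprime(x)   # periodic refresh of the frozen derivative
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / d
    return x

# Illustrative example: cube root of 8, derivative refreshed every 3 steps
root = frozen_newton(lambda x: x**3 - 8.0, lambda x: 3.0 * x * x, x0=3.0, m=3)
```

The trade-off the threshold addresses is visible in the loop: larger m saves derivative evaluations but lets the frozen slope drift further from the true one between refreshes.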


2020 ◽  
pp. 102-109
Author(s):  
Ioannis K. Argyros ◽  
Santhosh George

The local convergence analysis of iterative methods is important since it demonstrates the degree of difficulty of choosing initial points. In the present study, we introduce generalized multi-step high-order methods for solving nonlinear equations. The local convergence analysis is given using hypotheses only on the first derivative, which is the only derivative that actually appears in the methods, in contrast to earlier works using hypotheses on higher-order derivatives. This way we extend the applicability of these methods. The analysis includes a computable radius of convergence as well as error bounds based on Lipschitz-type conditions not given in earlier studies. Numerical examples conclude this study.

