98.07 The order of convergence of Newton's Method in special cases

2014 ◽  
Vol 98 (541) ◽  
pp. 119-120
Author(s):  
John Michael McNamee
2014 ◽  
Vol 10 (2) ◽  
pp. 21-31
Author(s):  
Manoj Kumar

Abstract: The aim of the present paper is to introduce and investigate a new open-type variant of Newton's method for solving nonlinear equations. The order of convergence of the proposed method is three. In addition to numerical tests verifying the theory, a comparison of the results for the proposed method with some existing ones is also given.
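For reference, the baseline that such variants improve on is the classical Newton iteration, which converges quadratically. The sketch below is the standard method, not the paper's open-type variant; the test function and tolerances are illustrative choices.

```python
# Classical Newton iteration: x_{n+1} = x_n - f(x_n)/f'(x_n).
# Quadratic convergence near a simple root; third-order variants
# such as the one described above aim to improve on this rate.
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / fprime(x)
    return x

# Illustrative example: root of x^2 - 2 (i.e., sqrt(2)).
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)
```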


2014 ◽  
Vol 2014 ◽  
pp. 1-18 ◽  
Author(s):  
Fiza Zafar ◽  
Nawab Hussain ◽  
Zirwah Fatimah ◽  
Athar Kharal

We give a four-step, multipoint iterative method without memory for solving nonlinear equations. The method is constructed using quasi-Hermite interpolation and has order of convergence sixteen. Since it requires four function evaluations and one derivative evaluation per step, it is optimal in the sense of the Kung–Traub conjecture. Comparisons are given with some other recently developed sixteenth-order methods. The interval Newton method is also used to find sufficiently accurate initial approximations. Some figures show the enclosure of finitely many zeros of nonlinear equations in an interval, and basins of attraction show the effectiveness of the method.
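The optimality claim can be checked arithmetically: the Kung–Traub conjecture states that a method without memory using d evaluations per step has optimal order 2^(d-1). With the five evaluations quoted above, this gives order sixteen, as a quick sketch confirms.

```python
# Kung–Traub conjecture: optimal order for a memoryless method with
# d evaluations per step is 2**(d - 1). The method above uses d = 5
# (four function + one derivative evaluations).
d = 5
optimal_order = 2 ** (d - 1)            # 16, matching the stated order
efficiency_index = optimal_order ** (1 / d)  # Ostrowski efficiency index
print(optimal_order, round(efficiency_index, 3))
```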


Mathematics ◽  
2019 ◽  
Vol 7 (3) ◽  
pp. 299 ◽  
Author(s):  
Ioannis Argyros ◽  
Á. Magreñán ◽  
Lara Orcos ◽  
Íñigo Sarría

The aim of this paper is to present a new semi-local convergence analysis for Newton’s method in a Banach space setting. The novelty of this paper is that, by using more precise Lipschitz constants than in earlier studies and our new idea of restricted convergence domains, we extend the applicability of Newton’s method as follows: the convergence domain is extended, the error estimates are tighter, and the information on the location of the solution is at least as precise as before. These advantages are obtained using the same information as before, since the new Lipschitz constants are tighter than, and special cases of, the ones used before. Numerical examples and applications are used to compare the new theoretical results favorably with earlier ones.


Symmetry ◽  
2019 ◽  
Vol 11 (9) ◽  
pp. 1106
Author(s):  
Alicia Cordero ◽  
Jonathan Franceschi ◽  
Juan R. Torregrosa ◽  
Anna C. Zagati

Several authors have designed variants of Newton’s method for solving nonlinear equations by using different means. This technique involves a symmetry in the corresponding fixed-point operator. In this paper, some known results about mean-based variants of Newton’s method (MBN) are re-analyzed from the point of view of convex combinations. A new test is developed to study the order of convergence of general MBN. Furthermore, a generalization of the Lehmer mean is proposed and discussed. Numerical tests are provided to support the theoretical results obtained and to compare the different methods employed. Some dynamical planes of the analyzed methods on several equations are presented, revealing the great differences among the MBN variants when it comes to determining the set of starting points that ensure convergence, and showing their symmetry in the complex plane.
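A typical mean-based variant replaces the derivative in the Newton step with a mean of derivative values at the current point and at the Newton point. The sketch below uses the arithmetic mean (the well-known Weerakoon–Fernando third-order scheme); it is one representative MBN, not necessarily the form analyzed in the paper, and the test function is an illustrative choice.

```python
# Arithmetic-mean Newton variant (Weerakoon–Fernando type):
# replace f'(x) in the Newton step by the mean of f' evaluated
# at x and at the classical Newton point y.
def mean_based_newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        y = x - fx / fprime(x)             # classical Newton point
        m = 0.5 * (fprime(x) + fprime(y))  # arithmetic mean of slopes
        x -= fx / m
    return x

# Illustrative example: root of x^3 - 2 (i.e., cube root of 2).
root = mean_based_newton(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.5)
```

Other means (harmonic, geometric, Lehmer) slot into the same template by changing the line that computes `m`.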


1967 ◽  
Vol 63 (1) ◽  
pp. 183-186 ◽  
Author(s):  
Irwin Manning

Abstract: A technique is developed for improving the speed and range of convergence of iteration procedures. The method is shown to include, as special cases, Newton's method and Aitken's δ2-acceleration process.
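Aitken's δ2-process, one of the special cases mentioned, extrapolates a linearly convergent sequence toward its limit from three successive iterates. A minimal sketch, with an illustrative fixed-point example not taken from the paper:

```python
import math

# Aitken's delta-squared acceleration: given three successive iterates
# x0, x1, x2 of a linearly convergent sequence, extrapolate toward the
# limit using x0 - (x1 - x0)**2 / (x2 - 2*x1 + x0).
def aitken(x0, x1, x2):
    denom = x2 - 2 * x1 + x0
    if denom == 0:          # sequence already (numerically) converged
        return x2
    return x0 - (x1 - x0) ** 2 / denom

# Example: x_{n+1} = cos(x_n) converges linearly to ~0.739085.
x0 = 1.0
x1 = math.cos(x0)
x2 = math.cos(x1)
acc = aitken(x0, x1, x2)    # closer to the limit than x2 itself
```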


2014 ◽  
Vol 540 ◽  
pp. 435-438
Author(s):  
Liang Fang

With the rapid development of information technology and the wide application of science and technology, nonlinear problems have become an important research direction in the field of numerical computation. In this paper, we mainly study iterative algorithms for nonlinear equations. We present and analyze two modified Newton-type methods with order of convergence six for solving nonlinear equations. The methods are free from second derivatives. Both of them require three evaluations of the function and two evaluations of the derivative in each step. Therefore the efficiency index of the presented methods is 1.431, which is better than that of the classical Newton's method, 1.414. Some numerical results illustrate that the proposed methods are more efficient and perform better than the classical Newton's method.
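The two efficiency indices quoted follow directly from Ostrowski's definition, order^(1/d) with d evaluations per step, and can be verified in a couple of lines:

```python
# Ostrowski efficiency index: order**(1/d), d = evaluations per step.
# Sixth-order method with d = 5 (three function + two derivative
# evaluations) vs classical Newton (order 2, d = 2).
sixth_order_index = 6 ** (1 / 5)   # ≈ 1.431
newton_index = 2 ** (1 / 2)        # ≈ 1.414
print(round(sixth_order_index, 3), round(newton_index, 3))
```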


2013 ◽  
Vol 22 (2) ◽  
pp. 127-134
Author(s):  
Gheorghe Ardelean ◽  
Laszlo Balog
In [YoonMe Ham et al., Some higher-order modifications of Newton’s method for solving nonlinear equations, J. Comput. Appl. Math., 222 (2008) 477–486], some higher-order modifications of Newton’s method for solving nonlinear equations are presented. In [Liang Fang et al., Some modifications of Newton’s method with higher-order convergence for solving nonlinear equations, J. Comput. Appl. Math., 228 (2009) 296–303], the authors point out some flaws in the results of YoonMe Ham et al. and present some modified variants of the method. In this paper we point out that the paper of Liang Fang et al. itself contains some flawed results, and we correct them by using symbolic computation in Mathematica. Moreover, we show that the main result in Theorem 3 of Liang Fang et al. is wrong: the order of convergence of the method is not 3m + 2 but 2m + 4. We also give the general expression of the convergence error.
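Order-of-convergence claims like these are commonly cross-checked numerically via the computational order of convergence (COC), estimated from three successive errors. The sketch below shows the standard COC formula on a synthetic quadratically convergent error sequence; it illustrates the diagnostic, not the specific methods under dispute.

```python
import math

# Computational order of convergence: given successive errors
# e_n = |x_n - root|, estimate rho from
#   rho ≈ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1}).
def coc(e0, e1, e2):
    return math.log(e2 / e1) / math.log(e1 / e0)

# Synthetic check: for e_{n+1} = C * e_n**2 the estimate should be 2.
e0, C = 1e-2, 1.0
e1 = C * e0 ** 2
e2 = C * e1 ** 2
print(round(coc(e0, e1, e2), 2))  # 2.0
```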


Mathematics ◽  
2018 ◽  
Vol 6 (12) ◽  
pp. 274
Author(s):  
Ioannis Argyros ◽  
Daniel González

We use Newton’s method to solve previously unsolved problems, expanding the applicability of the method. To achieve this, we use the idea of restricted domains, which allows for tighter Lipschitz constants than previously seen; this in turn leads to a tighter convergence analysis. The new developments were obtained using special cases of functions which had been used in earlier works. Numerical examples are used to illustrate the superiority of the new results.


Author(s):  
Mykhailo Bartish ◽  
Olha Kovalchuk ◽  
Nataliia Ohorodnyk

The use of a perturbation operator to construct new modifications of Newton's method for solving minimization problems, in particular the Ulm method of divided differences and Steffensen's method, is considered. As a result, the iteration produces a sequence of points that converges to the solution point.
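Steffensen's method, one of the modifications mentioned, replaces the derivative in Newton's step with a divided difference, keeping quadratic convergence while staying derivative-free. A minimal sketch for the scalar root-finding form (the paper treats the minimization setting; the test function here is an illustrative choice):

```python
# Steffensen's method: derivative-free Newton-type iteration that
# replaces f'(x) with the divided difference
#   g(x) = (f(x + f(x)) - f(x)) / f(x),
# retaining quadratic convergence near a simple root.
def steffensen(f, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        g = (f(x + fx) - fx) / fx  # divided-difference slope estimate
        x -= fx / g
    return x

# Illustrative example: root of x^2 - 2 (i.e., sqrt(2)).
root = steffensen(lambda x: x**2 - 2, 1.5)
```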

