A Convex Combination Approach for Mean-Based Variants of Newton’s Method

Symmetry ◽  
2019 ◽  
Vol 11 (9) ◽  
pp. 1106
Author(s):  
Alicia Cordero ◽  
Jonathan Franceschi ◽  
Juan R. Torregrosa ◽  
Anna C. Zagati

Several authors have designed variants of Newton’s method for solving nonlinear equations by using different means. This technique involves a symmetry in the corresponding fixed-point operator. In this paper, some known results about mean-based variants of Newton’s method (MBN) are re-analyzed from the point of view of convex combinations. A new test is developed to study the order of convergence of general MBN. Furthermore, a generalization of the Lehmer mean is proposed and discussed. Numerical tests are provided to support the theoretical results obtained and to compare the different methods employed. Some dynamical planes of the analyzed methods on several equations are presented, revealing the great differences among the MBN in the sets of starting points that ensure convergence, and showing their symmetry in the complex plane.
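As an illustration of the mean-based idea, here is a minimal sketch (not taken from the paper) of the classical arithmetic-mean variant, in which the derivative in Newton's step is replaced by the mean of f′ at the current point and at the Newton predictor; the test function and starting point are arbitrary choices:

```python
def mean_based_newton(f, df, x0, tol=1e-12, max_iter=50):
    # Arithmetic-mean variant of Newton's method (third order):
    # the derivative in the correction is the mean of f'(x_n) and
    # f'(y_n), where y_n is the plain Newton iterate.
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        y = x - fx / df(x)          # plain Newton predictor
        m = 0.5 * (df(x) + df(y))   # arithmetic mean of derivative values
        x = x - fx / m              # corrected step
    return x

root = mean_based_newton(lambda x: x**2 - 2.0, lambda x: 2.0 * x, 1.0)
```

Replacing the arithmetic mean here by a harmonic, geometric, or Lehmer-type mean, i.e. another convex combination of the two derivative values, yields other members of the MBN family the abstract refers to.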

2014 ◽  
Vol 10 (2) ◽  
pp. 21-31
Author(s):  
Manoj Kumar

The aim of the present paper is to introduce and investigate a new open-type variant of Newton's method for solving nonlinear equations. The order of convergence of the proposed method is three. In addition to numerical tests verifying the theory, a comparison of the results for the proposed method with some of the existing ones has also been given.
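A standard way to verify a claimed order of convergence numerically (a generic check, not the specific test of this paper) is the computational order of convergence, estimated from three consecutive errors; the sample errors below are synthetic:

```python
import math

def convergence_order(e_prev, e_cur, e_next):
    # Computational order of convergence from three consecutive errors:
    # p ≈ log(e_{n+1}/e_n) / log(e_n/e_{n-1}).
    return math.log(e_next / e_cur) / math.log(e_cur / e_prev)

# Errors shrinking cubically, as a third-order method would produce.
p = convergence_order(1e-1, 1e-3, 1e-9)
```

For a genuinely third-order method the estimate approaches 3 as the iterates near the root.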


Mathematics ◽  
2019 ◽  
Vol 7 (5) ◽  
pp. 463 ◽  
Author(s):  
Ioannis K. Argyros ◽  
Ángel Alberto Magreñán ◽  
Lara Orcos ◽  
Íñigo Sarría

Under the hypotheses that a function and its Fréchet derivative satisfy some generalized Newton–Mysovskii conditions, precise estimates on the radii of the convergence balls of Newton’s method, and of the uniqueness ball for the solution of the equations, are given for Banach space-valued operators. Some of the existing results are improved with the advantages of larger convergence region, tighter error estimates on the distances involved, and at-least-as-precise information on the location of the solution. These advantages are obtained using the same functions and Lipschitz constants as in earlier studies. Numerical examples are used to test the theoretical results.
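For context, results of this kind sharpen classical convergence-ball estimates such as the Rheinboldt/Traub-type radius 2/(3βL); a minimal sketch follows, with illustrative constants not taken from the paper:

```python
def newton_ball_radius(beta, lipschitz_L):
    # Classical baseline: if ||f'(x*)^{-1}|| <= beta and f' is Lipschitz
    # with constant L near the solution x*, Newton's method converges for
    # every starting point in the ball of radius 2 / (3 * beta * L).
    return 2.0 / (3.0 * beta * lipschitz_L)

r = newton_ball_radius(beta=1.0, lipschitz_L=2.0)   # r = 1/3
```

The generalized Newton–Mysovskii conditions in the paper yield larger radii than such baselines while using the same Lipschitz data.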


2014 ◽  
Vol 2014 ◽  
pp. 1-18 ◽  
Author(s):  
Fiza Zafar ◽  
Nawab Hussain ◽  
Zirwah Fatimah ◽  
Athar Kharal

We have given a four-step, multipoint iterative method without memory for solving nonlinear equations. The method is constructed by using quasi-Hermite interpolation and has order of convergence sixteen. As this method requires four function evaluations and one derivative evaluation at each step, it is optimal in the sense of the Kung and Traub conjecture. Comparisons are given with some other newly developed sixteenth-order methods. Interval Newton’s method is also used for finding sufficiently accurate initial approximations. Some figures show the enclosure of finitely many zeroes of nonlinear equations in an interval. Basins of attraction show the effectiveness of the method.
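The interval Newton step used to obtain guaranteed enclosures of a zero can be sketched as follows; this minimal version assumes the derivative enclosure does not contain zero and ignores the outward rounding a rigorous implementation would need, and the function and interval are illustrative:

```python
def interval_newton_step(f, dF, X):
    # One interval Newton step: with X = [a, b] and midpoint m,
    # N(X) = m - f(m) / F'(X) (interval division), and the new
    # enclosure is the intersection X ∩ N(X).
    a, b = X
    m = 0.5 * (a + b)
    dlo, dhi = dF(X)                 # enclosure of f' over X, 0 not inside
    fm = f(m)
    q = sorted((m - fm / dlo, m - fm / dhi))   # endpoints of N(X)
    return (max(a, q[0]), min(b, q[1]))        # intersect with X

# Enclose the zero of f(x) = x^2 - 2 in [1, 2]; f'(x) = 2x, so
# F'([a, b]) = [2a, 2b] on positive intervals.
X = (1.0, 2.0)
for _ in range(6):
    X = interval_newton_step(lambda x: x * x - 2.0,
                             lambda I: (2.0 * I[0], 2.0 * I[1]), X)
```

Each step roughly squares the width of the enclosure, which is how the figures in the paper isolate finitely many zeroes in an interval.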


Mathematics ◽  
2019 ◽  
Vol 7 (3) ◽  
pp. 299 ◽  
Author(s):  
Ioannis Argyros ◽  
Á. Magreñán ◽  
Lara Orcos ◽  
Íñigo Sarría

The aim of this paper is to present a new semi-local convergence analysis for Newton’s method in a Banach space setting. The novelty of this paper is that by using more precise Lipschitz constants than in earlier studies and our new idea of restricted convergence domains, we extend the applicability of Newton’s method as follows: the convergence domain is extended, the error estimates are tighter, and the information on the location of the solution is at least as precise as before. These advantages are obtained using the same information as before, since the new Lipschitz constants are tighter than, and special cases of, the ones used before. Numerical examples and applications are used to compare the theoretical results favorably with earlier ones.
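As background, the classical Newton–Kantorovich semi-local test that analyses of this kind refine reduces to a one-line check; the constants below are illustrative:

```python
def kantorovich_converges(L, eta):
    # Classical Newton-Kantorovich test: with eta >= ||F'(x0)^{-1} F(x0)||
    # and L a Lipschitz constant for F'(x0)^{-1} F', semi-local
    # convergence is guaranteed when h = L * eta <= 1/2.
    return L * eta <= 0.5

ok = kantorovich_converges(L=2.0, eta=0.2)   # h = 0.4, test passes
```

Restricting the domain on which L is taken, as the paper does, can only shrink L, so starting points rejected by the classical test may still be certified.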


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
S. Artidiello ◽  
A. Cordero ◽  
Juan R. Torregrosa ◽  
M. P. Vassileva

A class of optimal iterative methods for solving nonlinear equations is extended up to sixteenth-order of convergence. We design them by using the weight function technique, with functions of three variables. Some numerical tests are made in order to confirm the theoretical results and to compare the new methods with other known ones.
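A low-order instance of the weight-function technique (not the sixteenth-order class of the paper) is Ostrowski's optimal fourth-order method, where the Newton correction at the predictor is multiplied by a weight H(t) = 1/(1 − 2t) of the single variable t = f(y)/f(x); the test function and starting point are illustrative:

```python
def ostrowski(f, df, x, iters=5):
    # Ostrowski's optimal fourth-order method: a Newton predictor
    # followed by a correction scaled by the weight 1 / (1 - 2t).
    for _ in range(iters):
        fx = f(x)
        if fx == 0.0:                # already exactly at a root
            return x
        y = x - fx / df(x)           # Newton predictor
        t = f(y) / fx                # argument of the weight function
        x = y - (f(y) / df(x)) / (1.0 - 2.0 * t)
    return x

root = ostrowski(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, 1.5)
```

The sixteenth-order methods in the paper follow the same pattern with additional steps and weight functions of three variables.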


2020 ◽  
Vol 36 (36) ◽  
pp. 542-560
Author(s):  
Peter Kunkel

A smooth version of Sylvester's law of inertia is presented for symmetric matrix functions of constant rank. The techniques used in the proof are constructive but the resulting numerical approaches are unstable, and therefore require stabilization. Two different stabilization techniques are suggested, one based on a descent method and one based on Newton's method. Some numerical tests are included to demonstrate the applicability of the obtained numerical methods.
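Sylvester's law of inertia states that a congruence transform A → SᵀAS with nonsingular S preserves the counts of positive, negative, and zero eigenvalues. A minimal 2×2 check using closed-form eigenvalues follows; the matrices are illustrative:

```python
import math

def inertia_2x2(a, b, d, eps=1e-12):
    # Inertia (n_+, n_-, n_0) of the symmetric matrix [[a, b], [b, d]],
    # using the closed-form eigenvalues of a 2x2 symmetric matrix.
    t, s = a + d, math.hypot(a - d, 2.0 * b)
    eigs = (0.5 * (t + s), 0.5 * (t - s))
    return (sum(e > eps for e in eigs),
            sum(e < -eps for e in eigs),
            sum(abs(e) <= eps for e in eigs))

i1 = inertia_2x2(1.0, 0.0, -1.0)   # A = diag(1, -1)
# S^T A S with S = [[2, 1], [0, 1]] gives [[4, 2], [2, 0]]:
i2 = inertia_2x2(4.0, 2.0, 0.0)
```

Both matrices have inertia (1, 1, 0), as the law requires; the paper's contribution is a smooth pointwise version of this for matrix functions of constant rank.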


2014 ◽  
Vol 540 ◽  
pp. 435-438
Author(s):  
Liang Fang

With the rapid development of information technology and the wide application of science and technology, nonlinear problems have become an important research direction in the field of numerical calculation. In this paper, we mainly study iterative algorithms for nonlinear equations. We present and analyze two modified Newton-type methods with order of convergence six for solving nonlinear equations. The methods are free from second derivatives. Both of them require three evaluations of the function and two evaluations of the derivative in each step. Therefore, the efficiency index of the presented methods is 1.431, which is better than the 1.414 of the classical Newton's method. Some numerical results illustrate that the proposed methods are more efficient and perform better than the classical Newton's method.
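The efficiency indices quoted in the abstract follow from the standard definition EI = p^(1/m), with order p and m evaluations per step; a quick check:

```python
# Efficiency index EI = p**(1/m): order p achieved with m function or
# derivative evaluations per step.  The sixth-order methods above use
# 5 evaluations; classical Newton uses 2 for order 2.
ei_sixth = 6.0 ** (1.0 / 5.0)    # about 1.431
ei_newton = 2.0 ** (1.0 / 2.0)   # about 1.414
```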


2013 ◽  
Vol 22 (2) ◽  
pp. 127-134
Author(s):  
GHEORGHE ARDELEAN ◽  
LASZLO BALOG

In [YoonMe Ham et al., Some higher-order modifications of Newton’s method for solving nonlinear equations, J. Comput. Appl. Math., 222 (2008) 477–486], some higher-order modifications of Newton’s method for solving nonlinear equations are presented. In [Liang Fang et al., Some modifications of Newton’s method with higher-order convergence for solving nonlinear equations, J. Comput. Appl. Math., 228 (2009) 296–303], the authors point out some flaws in the results of YoonMe Ham et al. and present some modified variants of the method. In this paper we point out that the paper of Liang Fang et al. itself contains some flawed results, and we correct them by using symbolic computation in Mathematica. Moreover, we show that the main result in Theorem 3 of Liang Fang et al. is wrong: the order of convergence of the method isn't 3m + 2, but 2m + 4. We give the general expression of the convergence error too.


Author(s):  
Mykhailo Bartish ◽  
Olha Kovalchuk ◽  
Nataliia Ohorodnyk

The use of the perturbation operator to construct new modifications of Newton's method for solving minimization problems is considered, in particular the Ulm method with divided differences and Steffensen's method. As a result, we obtain a sequence of points that converges to the solution point.
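The derivative-free Steffensen iteration mentioned above replaces Newton's f′(xₙ) with the divided difference (f(xₙ + f(xₙ)) − f(xₙ))/f(xₙ), keeping quadratic convergence near a simple root; a minimal sketch, with an illustrative equation and starting point:

```python
def steffensen(f, x, tol=1e-12, max_iter=50):
    # Steffensen's method: Newton's step with the derivative replaced
    # by a divided difference, so no f' evaluation is needed.
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        g = (f(x + fx) - fx) / fx    # divided-difference slope estimate
        x = x - fx / g
    return x

root = steffensen(lambda x: x**2 - 2.0, 1.0)
```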

