ISSN 1749-3889 (print), 1749-3897 (online)
International Journal of Nonlinear Science
Vol. 13 (2012) No. 4, pp. 505-512

A New Derivative-free Iterative Method for Solving Nonlinear Equations with Third Order Convergence

Gustavo Fernández-Torres
Department of Oil Engineering, Universidad del Istmo Tehuantepec, Oaxaca, 70760, México

(Received 7 October 2011, accepted 4 May 2012)
Abstract: In this paper a new iterative method is proposed to find a root of a nonlinear equation. The new method does not use derivatives. When the starting value is selected to be close to a root, the proposed method has a cubic convergence order. To show the efficiency of the method, we give some numerical examples using the test functions in the references and compare the results obtained with the classical Newton method of second order.
Keywords: Cubic approximation; iterative method; nonlinear equations; root finding method
1 Introduction
Many iterative methods for solving the nonlinear equation f(x) = 0 have been proposed [1-8, 12]. Most of these methods are extensions of Newton's method and use derivatives of higher order. The use of these derivatives is a serious disadvantage because of the computational cost and the difficulty of evaluating the derivatives.
To overcome these difficulties, several methods that do not use derivatives have been proposed, as in [2], but their order of convergence is small. In [5] the author proposed an iterative method with a convergence order near 2 that does not use derivatives of the function f(x). The motivation for that method is Muller's method, but using a cubic polynomial.
In [5], we take an initial point x_0 sufficiently close to the desired root and the iterative method is defined as
$$x_{n+1} = x_n + \frac{6(x_n - x_{n-1})\,x_n K}{(K-L)(2x_n + x_{n-1}) + (M-K)(4x_n - x_{n-1})},$$
where
$$M = f(x_{n-1}), \qquad K = f(x_n), \qquad L = f(2x_n - x_{n-1}).$$
The error in this method depends on the choice of Newton's method to approximate the root of a cubic polynomial and on the choice of x_n - x_{n-1} to maintain the convergence order of Newton's method.
The purpose of this work is to develop new simple iterative methods that improve significantly on the error of the previous method. As a result, the proposed methods attain several convergence orders without requiring derivatives of the function f(x). The method maintains the initial considerations of Muller's method.
2 New Iterative Method
Under the assumption that a sufficiently differentiable function f(x) has a unique zero on an interval (a, b) with f(a)f(b) < 0, let x_0 ≠ 0 be an approximation sufficiently close to the root r of f(x). Take -1 < β < 1 sufficiently small, β ≠ 0, and consider the following particular interpolating polynomial that passes through the three points x_0, x_0 - β, x_0 + β:
$$p(x) = A(x-x_0)(x-x_0-\beta)(x-x_0+\beta) + 3Ax_0(x-x_0)(x-x_0-\beta) + B(x-x_0) + C. \qquad (1)$$
*Corresponding author. E-mail address: gusfer@sandunga.unistmo.edu.mx
Consider the following system of equations that the polynomial p(x) must verify:
$$f(x_0) = K = C = p(x_0), \qquad (2)$$
$$f(x_0+\beta) = L = \beta B + C = p(x_0+\beta), \qquad (3)$$
$$f(x_0-\beta) = M = 6\beta^2 x_0 A - \beta B + C = p(x_0-\beta). \qquad (4)$$
The solutions of the system are very clear:
$$C = K, \qquad B = \frac{L-K}{\beta}, \qquad A = \frac{M+L-2K}{6\beta^2 x_0}. \qquad (5)$$
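As a quick numerical illustration (not part of the original paper), the following Python sketch checks that, with the coefficients in (5), the polynomial (1) reproduces f at the three nodes x_0 and x_0 ± β. The test function, x_0, and β are arbitrary choices made only for this check.

```python
import math

def p(x, x0, beta, A, B, C):
    """Interpolating polynomial (1)."""
    return (A * (x - x0) * (x - x0 - beta) * (x - x0 + beta)
            + 3 * A * x0 * (x - x0) * (x - x0 - beta)
            + B * (x - x0) + C)

f = lambda x: math.cos(x) - x          # arbitrary test function
x0, beta = 1.0, 1e-2                   # arbitrary node and offset

K, L, M = f(x0), f(x0 + beta), f(x0 - beta)
C = K
B = (L - K) / beta
A = (M + L - 2 * K) / (6 * beta**2 * x0)   # coefficients (5)

for x, fx in [(x0, K), (x0 + beta, L), (x0 - beta, M)]:
    assert abs(p(x, x0, beta, A, B, C) - fx) < 1e-12
print("p interpolates f at x0, x0 + beta, x0 - beta")
```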
Now, consider the derivatives of the polynomial p(x):
$$p'(x) = 3A(x-x_0)^2 - A\beta^2 + 6Ax_0(x-x_0) - 3A\beta x_0 + B = 3A(x-x_0)(x+x_0) - A\beta^2 - 3A\beta x_0 + B,$$
$$p''(x) = 6Ax, \qquad p'''(x) = 6A.$$
The function f(x) can be approximated by the Taylor polynomial of second order about x_0, and since
$$p(x_0) = C, \qquad p'(x_0) = -A\beta^2 - 3A\beta x_0 + B, \qquad p''(x_0) = 6Ax_0, \qquad p'''(x_0) = 6A,$$
we have
$$f(x) \approx C + (-A\beta^2 - 3A\beta x_0 + B)(x-x_0) + 3Ax_0(x-x_0)^2. \qquad (6)$$
Observe that the coefficient of x - x_0 in the above equation can be written as
$$-A\beta^2 - 3A\beta x_0 + B = -A\beta(3x_0 + \beta) + B = \frac{3x_0(L-M) + \beta(2K-L-M)}{6\beta x_0} = \frac{L-M}{2\beta} + \frac{2K-L-M}{6x_0}.$$
Define S as
$$S = (L-M) + \frac{\beta(2K-L-M)}{3x_0}.$$
For x = r, using the above expressions in (6), we have
$$6A\beta x_0(r-x_0)^2 + S(r-x_0) + 2\beta K \approx 0.$$
Solving this quadratic equation gives
$$r \approx x_0 + \frac{-S \pm \sqrt{S^2 - 48AK\beta^2 x_0}}{12A\beta x_0},$$
which can be written as
$$r \approx x_0 + \frac{\beta}{2(2K-L-M)}\left[S \mp \sqrt{S^2 + 8K(2K-L-M)}\right].$$
As in Muller's method, to avoid the loss of accuracy caused by the subtraction of nearly equal numbers, we apply the equivalent formula
$$r \approx x_0 - \frac{4\beta K}{S \pm \sqrt{S^2 + 8K(2K-L-M)}}.$$
This formula provides two possibilities, depending on the sign preceding the radical term. The sign is chosen so that the denominator is the largest in magnitude, which selects the estimate closest to x_0. With this we have the following iterative method.
Let x_0 ≠ 0 be an approximation sufficiently close to a root r of f(x). Take -1 < β < 1 sufficiently small, β ≠ 0; then for n ≥ 0 we have
$$x_{n+1} = x_n - \frac{4\beta K}{S + \operatorname{sign}(S)\sqrt{S^2 + 8K(2K-L-M)}}, \qquad (7)$$
with
$$S = (L-M) + \frac{\beta(2K-L-M)}{3x_n}, \qquad (8)$$
and
$$f(x_n) = K, \qquad f(x_n+\beta) = L, \qquad f(x_n-\beta) = M. \qquad (9)$$
These methods depend on the selection of β. We now analyse four different values of β to define the new iterative methods (an illustrative implementation follows the list):
a) β = (x_n - x_{n-1})²,
b) β = x_n |x_n - x_{n-1}|,
c) β = x_n (x_n - x_{n-1}),
d) β = [x_n (x_n - x_{n-1})]^{9/8}.
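The following Python sketch (not from the original paper) implements iteration (7)-(9) with choice d) for β. The stopping tolerance, the maximum number of iterations, the small offset used to create the starting pair, and the use of the absolute value inside the fractional power (to keep β real) are assumptions made for illustration.

```python
import math

def derivative_free_iteration(f, x0, tol=1e-15, max_iter=50):
    """Iteration (7)-(9) with beta = [x_n (x_n - x_{n-1})]^(9/8), choice d)."""
    x_prev, x = x0, x0 * (1 + 1e-3)              # assumed offset to create the starting pair
    for n in range(max_iter):
        beta = abs(x * (x - x_prev)) ** (9 / 8)  # choice d); abs() keeps beta real (assumption)
        if beta == 0.0 or x == 0.0:
            break
        K, L, M = f(x), f(x + beta), f(x - beta)
        S = (L - M) + beta * (2 * K - L - M) / (3 * x)     # equation (8)
        disc = S * S + 8 * K * (2 * K - L - M)
        if disc < 0:
            disc = 0.0                                     # complex case not handled in this sketch
        denom = S + math.copysign(math.sqrt(disc), S)      # sign(S) as in (7)
        if denom == 0.0:
            break
        x_prev, x = x, x - 4 * beta * K / denom            # equation (7)
        if abs(x - x_prev) < tol:
            break
    return x, n + 1

root, its = derivative_free_iteration(lambda t: math.cos(t) - t, 1.0)
print(root, its)   # root of cos x - x, approximately 0.7390851332151607
```

Each pass of the loop uses three function evaluations (K, L and M), which is consistent with the NOFE counts reported for the new method in Section 4 (three evaluations per iteration).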
3 Convergence Analysis
Theorem 1. Let r ∈ I be a zero of a sufficiently differentiable function f : I → ℝ for an interval I. If x_0 ≠ 0 is sufficiently close to r, then the iterative method (7) with conditions (8) and (9) has orders of convergence 2, 2.4, 2.7 and 3 for the selections of β in a), b), c) and d), respectively. Moreover, for any choice of β the maximum convergence order is 3.
Proof. Suppose that the point x_n has been calculated, starting the process with x_0. Using (1), we know that
$$f(x) - p(x) = E(x),$$
where E(x) is the error of the interpolating polynomial, which can be calculated by the usual process and is given by
$$E(x) = (x-x_n)(x-x_n+\beta)(x-x_n-\beta)\,\frac{f'''(\xi_1) - 6A}{6}$$
for some ξ_1 between x_n - β and x_n + β. Thus, f(x) can be written as
$$f(x) = C + (-A\beta^2 - 3A\beta x_n + B)(x-x_n) + 3Ax_n(x-x_n)^2 + (x-x_n)^3\frac{f'''(\xi_1)}{6} - \beta^2(x-x_n)\frac{f'''(\xi_1)}{6} + \beta^2(x-x_n)A.$$
If x_{n+1} is a root of the polynomial in (6), C + (-Aβ² - 3Aβx_n + B)(x - x_n) + 3Ax_n(x - x_n)², then
$$f(x_{n+1}) = (x_{n+1}-x_n)^3\frac{f'''(\xi_1)}{6} - \beta^2(x_{n+1}-x_n)\frac{f'''(\xi_1)}{6} + \beta^2(x_{n+1}-x_n)A.$$
We may take x_{n+1} = 0; this is always possible since it only represents a shift of the origin. Thus the above expression can be written as
$$f(0) = -\frac{f'''(\xi_1)}{6}x_n^3 + \beta^2 x_n\left(\frac{f'''(\xi_1)}{6} - A\right).$$
We know that we can write (5) using divided differences, so for some ξ_2 ∈ (x_n - β, x_n + β), A can be expressed as
$$A = \frac{f''(\xi_2)}{3x_n}.$$
Therefore
$$f(0) = x_n(\beta^2 - x_n^2)\frac{f'''(\xi_1)}{6} + \beta^2\frac{f''(\xi_2)}{3}.$$
We now assume that the points x_n, x_{n-1} lie in a neighborhood of the root r. Thus, if we let ε_{n+1} = x_{n+1} - r = -r and ε_n = x_n - r, we have x_n - x_{n-1} = ε_n - ε_{n-1}. Considering the cases for β, we then have
a) β = (x_n - x_{n-1})²:
$$f(0) = x_n\left[(x_n-x_{n-1})^4 - x_n^2\right]\frac{f'''(\xi_1)}{6} + (x_n-x_{n-1})^4\,\frac{f''(\xi_2)}{3}.$$
b) β = x_n |x_n - x_{n-1}|:
$$f(0) = x_n^3\left[(x_n-x_{n-1})^2 - 1\right]\frac{f'''(\xi_1)}{6} + x_n^2(x_n-x_{n-1})^2\,\frac{f''(\xi_2)}{3}.$$
c) β = x_n (x_n - x_{n-1}):
$$f(0) = x_n^3\left[(x_n-x_{n-1})^2 - 1\right]\frac{f'''(\xi_1)}{6} + x_n^2(x_n-x_{n-1})^2\,\frac{f''(\xi_2)}{3}.$$
d) β = [x_n (x_n - x_{n-1})]^{9/8}:
$$f(0) = x_n\left\{[x_n(x_n-x_{n-1})]^{9/4} - x_n^2\right\}\frac{f'''(\xi_1)}{6} + [x_n(x_n-x_{n-1})]^{9/4}\,\frac{f''(\xi_2)}{3}.$$
We assume that the magnitude of the quantity ε_n is less than some upper bound ε_m. With this we make the assumption ε_{n+1} < ε_m, which is clear from the equality x_{n+1} - r = (x_{n+1} - y) + (y - r), with y the root of the polynomial p(x). Note that ε_{n+1} = O(ε_m³); thus, expanding all functions around r, we obtain
a) $\varepsilon_{n+1}f'(r) = \varepsilon_{n-1}^{4}\,\dfrac{f''(r)}{3} + O(\varepsilon_m^3)$.
b) $\varepsilon_{n+1}f'(r) = \varepsilon_n^{2}\varepsilon_{n-1}\,\dfrac{f''(r)}{3} + O(\varepsilon_m^3)$.
c) $\varepsilon_{n+1}f'(r) = \varepsilon_n^{2}\varepsilon_{n-1}^{2}\,\dfrac{f''(r)}{3} + O(\varepsilon_m^3)$.
d) $\varepsilon_{n+1}f'(r) = \varepsilon_n^{9/4}\varepsilon_{n-1}^{9/4}\,\dfrac{f''(r)}{3} + O(\varepsilon_m^3)$.
We suppose that ε_{n+1} is asymptotic to cε_n^p with p > 1. Consequently, by expressing the above equations in terms of ε_{n-1} and neglecting the terms of O(ε_m), we obtain
a) $c^{p+1}\varepsilon_{n-1}^{p^2} \approx \varepsilon_{n-1}^{4}\,\dfrac{f''(r)}{3f'(r)}$.
b) $c^{p+1}\varepsilon_{n-1}^{p^2} \approx \varepsilon_{n-1}^{2p+1}\,\dfrac{f''(r)}{3f'(r)}$.
c) $c^{p+1}\varepsilon_{n-1}^{p^2} \approx \varepsilon_{n-1}^{2p+2}\,\dfrac{f''(r)}{3f'(r)}$.
d) $c^{p+1}\varepsilon_{n-1}^{p^2} \approx \varepsilon_{n-1}^{\frac{9}{4}(p+1)}\,\dfrac{f''(r)}{3f'(r)}$.
In order to satisfy the previous asymptotic equations, it is evident that in each case we have (a small numerical check follows the list):
a) p² = 4, therefore the maximum value is p = 2.
b) p² = 2p + 1, therefore the maximum value is p = 1 + √2 ≈ 2.41.
c) p² = 2p + 2, therefore the maximum value is p = 1 + √3 ≈ 2.73.
d) p² = (9/4)(p + 1), therefore the maximum value is p = 3.
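As a quick arithmetic check (not part of the paper), the positive roots of these quadratics can be computed directly; the dictionary below is purely illustrative.

```python
import math

# p^2 = a*p + b  =>  positive root p = (a + sqrt(a^2 + 4b)) / 2
cases = {"a)": (0.0, 4.0), "b)": (2.0, 1.0), "c)": (2.0, 2.0), "d)": (9/4, 9/4)}
for label, (a, b) in cases.items():
    p = (a + math.sqrt(a * a + 4 * b)) / 2
    print(label, round(p, 2))   # a) 2.0, b) 2.41, c) 2.73, d) 3.0
```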
For any other value of β, such as β = x_n(x_n - x_{n-1})² or β = [x_n(x_n - x_{n-1})]^{3/2}, the corresponding terms ε_n²ε_{n-1}⁴ or ε_n³ε_{n-1}³ give the equations p² = 2p + 4 or p² = 3p + 3, with solutions p ≈ 3.23 or p ≈ 3.79, respectively. These are greater than p = 3. In these cases, we have
$$\varepsilon_{n+1}f'(r) = \varepsilon_n^{3}\,\frac{f'''(r)}{6} + O(\varepsilon_m^3).$$
Thus the associated equation will be
$$\frac{\varepsilon_{n+1}f'(r)}{\varepsilon_n^{3}} = \frac{f'''(r)}{6} + O(\varepsilon_m^3),$$
therefore the convergence order will also be 3.
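In practice, the theoretical order can be checked with the usual computational order of convergence, p ≈ ln(e_{n+1}/e_n)/ln(e_n/e_{n-1}), computed from three consecutive errors e_k = |x_k - r|. The sketch below (not from the paper) uses illustrative error values, not data from the experiments in Section 4.

```python
import math

def computational_order(e0, e1, e2):
    """Estimate the order p from three consecutive errors |x_k - r|."""
    return math.log(e2 / e1) / math.log(e1 / e0)

# Illustrative errors shrinking roughly cubically (hypothetical values)
print(round(computational_order(1e-2, 1e-6, 1e-18), 2))   # 3.0
```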
4 Numerical Experiments
We present some numerical tests of the proposed method. The test functions are the following:
1) f(x) = sin²x - x² + 1.
2) f(x) = cos x - x.
3) f(x) = (x - 1)³ - 1.
4) f(x) = x³ - 10.
5) f(x) = x³ - 2x² - 5.
6) f(x) = 1/x - sin x + 1.
7) f(x) = x³ + 4x² - 10.
8) f(x) = e^x - 3x².
9) f(x) = x² sin x - cos x.
10) f(x) = sin x - x/2.
The tables below present the method with cubic convergence order (New) and compare it with other methods proposed in [1-4, 6-8, 12] and with the classical Newton method of second order. In each table the initial value (x_0), the number of iterations (N) needed to obtain the approximation, the number of function evaluations (NOFE) and the error f(x_n) are shown. The abbreviation ND indicates that the result is not given in the paper referred to.
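For reference, a minimal version of the classical Newton iteration used as the second-order baseline in the tables might look as follows (a sketch under assumed tolerance and iteration limits, not the exact code behind the tables). Each Newton step costs two evaluations (f and f'), versus three (K, L, M) for the proposed method.

```python
def newton(f, fprime, x0, tol=1e-15, max_iter=100):
    """Classical Newton iteration, the second-order baseline in the tables."""
    x = x0
    for n in range(1, max_iter + 1):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x, n

# Test function 7): f(x) = x^3 + 4x^2 - 10, starting from x0 = 1
root, n = newton(lambda x: x**3 + 4*x**2 - 10, lambda x: 3*x**2 + 8*x, 1.0)
print(root, n)   # approximately 1.36523001341410 after a few iterations
```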
Table 1: Number of iterations and solution obtained for different methods
f(x) = sin²x - x² + 1
Method   x0   N   Obtained solution   f(x_n)   NOFE
Newton 1 6 1.40449164821621534112 2.75E-16 12
3 6 1.40449164821621534112 2.75E-16 12
[12] 1 4 1.40449164821621 1E-14 12
3 3 1.40449164821621 1E-14 9
[8] 1 3 ND 1E-14 9
3 3 ND 1E-14 9
[6] 1 4 ND 1E-14 12
[3] 1 4 ND 1E-15 9
New 1 3 1.40449164821534134 2.75E-16 9
3 4 1.40449164821621534112 2.75E-16 12
The exact solution expected is r= 1.40449164821534122603509
Table 2: Number of iterations and solution obtained for different methods
f(x) = cos x - x
Method   x0   N   Obtained solution   f(x_n)   NOFE
Newton -0.3 6 0.73908513321416067 5.12E-16 12
1 4 0.73908513321416067 5.12E-16 8
[12] -0.3 3 0.739085133214758 1E-14 9
1 2 0.739085133214758 1E-14 6
1.7 3 0.739085133214758 1E-14 9
[2] 0 3 ND 1E-10 9
New -0.3 3 0.73908513321416067 5.12E-16 9
1 3 0.73908513321416067 5.12E-16 9
1.7 3 0.73908513321416089 4.22E-16 9
The exact solution expected is r= 0.739085133215160641655312
Table 3: Number of iterations and solution obtained for different methods
f(x) = (x - 1)³ - 1
Method   x0   N   Obtained solution   f(x_n)   NOFE
Newton 2.5 6 2 0 12
[12] 2.5 4 2 0 10
[8] 2.5 3 ND ND 9
New 2.5 4 2 0 12
The exact solution expected is r= 2
Table 4: Number of iterations and solution obtained for different methods
f(x) = x³ - 10
Method   x0   N   Obtained solution   f(x_n)   NOFE
Newton 1.5 6 2.15443469003188381 1.29E-15 12
[12] 1.5 4 2.15443469003367 1E-14 12
[8] 1.5 4 ND 1E-14 12
New 1.5 3 2.15443469003188381 1.29E-15 9
The exact solution expected is r= 2.15443469003188372175929
Table 5: Number of iterations and solution obtained for different methods
f(x) = x³ - 2x² - 5
Method   x0   N   Obtained solution   f(x_n)   NOFE
Newton 2 6 2.69064744802861355 1.29E-15 12
[2] 2 3 ND 1E-10 9
New 2 3 2.69064744802861355 1.29E-15 9
The exact solution expected is r= 2.69064744802861375035079
Table 6: Number of iterations and solution obtained for different methods
f(x) = 1/x - sin x + 1
Method   x0   N   Obtained solution   f(x_n)   NOFE
Newton -1.3 25 -0.62944648407333337 1.25E-16 50
[2] -1.3 4 ND 1E-15 12
New -1.3 4 -0.62944648407333337 1.25E-16 12
The exact solution expected is r=0.629446484073333329964537
Table 7: Number of iterations and solution obtained for different methods
f(x) = x³ + 4x² - 10
Method   x0   N   Obtained solution   f(x_n)   NOFE
Newton 1 5 1.36523001341409689 6.95E-16 10
2 5 1.36523001341409689 6.95E-16 10
[12] 1 3 1.36523001341448 1E-14 9
2 3 1.36523001341448 1E-14 9
[8] 1 3 ND 1E-14 9
2 3 ND 1E-14 9
[2] 1 4 ND 1E-14 12
2 4 ND 1E-14 12
[7] 1 3 ND 1E-15 9
2 3 ND 1E-14 9
[6] 1 3 ND 1E-15 9
2 3 ND 1E-14 9
New 1 3 1.36523001341409689 6.95E-16 9
2 3 1.36523001341409689 6.95E-16 9
The exact solution expected is r= 1.36523001341409684576081
Table 8: Number of iterations and solution obtained for different methods
f(x) = e^x - 3x²
Method   x0   N   Obtained solution   f(x_n)   NOFE
Newton 0.5 6 0.91000757248870912 -1.63E-16 12
[4] 0.5 4 0.9100075727 ND 16
[1] 0.5 11 0.910007572489 0.0268E-12 33
New 0.5 3 0.91000757248870912 -1.63E-16 9
The exact solution expected is r= 0.910007572488709060657338
Table 9: Number of iterations and solution obtained for different methods
f(x) = x² sin x - cos x
Method   x0   N   Obtained solution   f(x_n)   NOFE
Newton 6 5 6.30830895523815105 -1.31E-14 10
[6] 6 3 6.3083089552381511 -1.32E-14 9
New 6 3 6.30830895523815105 -1.31E-14 9
The exact solution expected is r= 6.30830895523815137755327
Table 10: Number of iterations and solution obtained for different methods
f(x) = sin x - x/2
Method   x0   N   Obtained solution   f(x_n)   NOFE
Newton 1.6 5 1.89549426703398075 3.65E-16 10
2 4 1.89549426703398075 3.65E-16 8
[7] 1.6 3 ND 1E-14 9
2 2 ND 1E-14 6
New 1.6 3 1.89549426703398094 6.17E-18 9
2 3 1.89549426703398094 6.17E-18 9
The exact solution expected is r= 1.895494267033980947144036
5 Conclusions
The tables show that the new method can compete with the other methods presented in [1-8, 12]. The methods presented in those papers use derivatives of the function, except those in [1, 2, 5], and the results of the new method are more accurate.
In this paper, we have developed a new simple iterative method, given by (7), with the following important properties:
i. No derivatives of f(x) are required.
ii. The convergence order of the method is 3. In practice, for many numerical examples the present method performs better than Newton's method and as well as or better than other methods that use derivatives and have the same or a higher convergence order.
iii. As in Muller's method, the square root in the iteration formula allows the method to obtain complex roots of polynomials (a brief sketch follows).
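As an illustration of property iii (not from the paper), the same step can be carried out in complex arithmetic; for Muller-type methods the sign is then chosen to maximize the magnitude of the denominator. The starting values, the choice c) for β, and the fixed iteration count below are assumptions made for this example.

```python
import cmath

def step(f, x, x_prev):
    """One step of iteration (7)-(9) in complex arithmetic, with choice c) for beta."""
    beta = x * (x - x_prev)                          # choice c); well defined over the complex numbers
    K, L, M = f(x), f(x + beta), f(x - beta)
    S = (L - M) + beta * (2 * K - L - M) / (3 * x)   # equation (8)
    root = cmath.sqrt(S * S + 8 * K * (2 * K - L - M))
    denom = S + root if abs(S + root) >= abs(S - root) else S - root  # larger |denominator|
    return x - 4 * beta * K / denom                  # equation (7)

f = lambda z: z**2 + 1                               # roots are +i and -i
x_prev, x = 0.05 + 0.95j, 0.02 + 0.98j               # assumed starting values near +i
for _ in range(8):
    x_new = step(f, x, x_prev)
    if x_new == x:                                   # converged to machine precision
        break
    x_prev, x = x, x_new
print(x)                                             # should be close to 1j
```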
Acknowledgments
The author wishes to acknowledge the valuable participation of Professor Nicole Mercier and Professor Ted Fondulas in
the proof-reading of this paper.
References
[1] I. Abu-Alshaikh. A New Iterative Method for Solving Nonlinear Equations. World Academy of Science, Engineering and Technology, 5 (2005): 190-193.
[2] G. Calderón. Métodos Iterativos para Resolver Ecuaciones no Lineales. Revista Notas de Matemática, 3(1) (2007): 33-44.
[3] C. Chun. A Family of Composite Fourth-Order Iterative Methods for Solving Nonlinear Equations. Applied Mathematics and Computation, 2 (2007): 951-956.
[4] J. Feng. A New Two-step Method for Solving Nonlinear Equations. International Journal of Nonlinear Science, 8 (2009): 40-44.
[5] G. Fernández-Torres. A New Derivative-free Iterative Method for Solving Nonlinear Equations, preprint.
[6] K. Jisheng, L. Yitian, W. Xiuhua. A Modification of Newton Method with Third Order Convergence. Applied Mathematics and Computation, 181 (2006): 1106-1111.
[7] K. Jisheng, L. Yitian, W. Xiuhua. Composite Fourth-Order Iterative Method for Solving Non-linear Equations. Applied Mathematics and Computation, 2 (2007): 951-956.
[8] T. Lukić, N. M. Ralević. Newton's Method with Accelerated Convergence Modified by an Aggregation Operator. Proceedings of the 3rd Serbian-Hungarian Joint Symposium on Intelligent Systems, SCG, Subotica, (2005): 121-128.
[9] J. M. Ortega and W. C. Rheinboldt. Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York, 1970.
[10] A. M. Ostrowski. Solution of Equations and Systems of Equations. Academic Press, New York, 1960.
[11] J. F. Traub. Iterative Methods for the Solution of Equations. Chelsea Publishing, New York, 1977.
[12] S. Weerakoon and T. G. I. Fernando. A Variant of Newton's Method with Accelerated Third-Order Convergence. Applied Mathematics Letters, 13 (2000): 87-93.