INTERNATIONAL JOURNAL OF COMPUTER MATHEMATICS, 2017
https://doi.org/10.1080/00207160.2017.1366459
A family of Newton-type iterative methods using some special
self-accelerating parameters
Xiaofeng Wang
School of Mathematics and Physics, Bohai University, Jinzhou, China
ABSTRACT
In this paper, a family of Newton-type iterative methods with memory for solving nonlinear equations is obtained, which uses some special self-accelerating parameters. To this end, we first present two optimal fourth-order iterative methods without memory for solving nonlinear equations. Then we give a novel way to construct the self-accelerating parameter and obtain a family of Newton-type iterative methods with memory. The self-accelerating parameters have simple structure and are easy to calculate, so they do not increase the computational cost of the iterative methods. The convergence order of the new iterative method is increased from 4 to 2 + √7 ≈ 4.64575. Numerical comparisons are made with some known methods, by using the basins of attraction and through numerical computations, to demonstrate the efficiency and performance of the new methods. Experimental results show that, compared with existing methods, the new iterative methods with memory require less computing time.
ARTICLE HISTORY: Received 19 October 2016; Revised 12 April 2017; Accepted 19 June 2017
KEYWORDS: Iterative method with memory; self-accelerating parameter; root-finding; Newton method; convergence order
2010 AMS SUBJECT CLASSIFICATIONS: 65H05; 65B99
1. Introduction
In this paper, a family of Newton-type iterative methods with memory for finding a simple root of the nonlinear equation f(x) = 0 is given, where f : I ⊂ R → R is a scalar function on an open interval I. An iterative method with memory was considered for the first time by Traub [17] in 1964, who proposed the following method:

$$
\begin{cases}
x_0,\ \gamma_0\ \text{are given suitably},\quad w_0 = x_0 + \gamma_0 f(x_0),\\[4pt]
x_{n+1} = x_n - \dfrac{f(x_n)}{f[x_n, w_n]},\\[8pt]
N_1(x) = f(x_n) + (x - x_n)f[x_n, w_n],\quad \gamma_{n+1} = -1/N_1'(x_{n+1}),\\[4pt]
w_{n+1} = x_{n+1} + \gamma_{n+1} f(x_{n+1}).
\end{cases}
\tag{1}
$$

The convergence order of method (1) is 1 + √2. The self-accelerating parameter γ_n is calculated using information from the current and previous iterations. Method (1) tells us that it is possible to increase the convergence order by using a suitable self-accelerating parameter. An iterative method which uses a self-accelerating parameter is called a self-accelerating type method in this paper.
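For concreteness, the following minimal Mathematica sketch implements Traub's method (1); the test function f, the starting values x0 = 1.6 and γ0 = 0.1, the working precision and the iteration count are illustrative choices, not taken from [17].

    (* Minimal sketch of Traub's method with memory (1); f, x0, gamma0,
       precision and the iteration count are illustrative choices. *)
    f[x_] := x^5 + x^4 + 4 x^2 - 15;
    x = N[16/10, 200]; gamma = 1/10;
    Do[
      w = x + gamma f[x];
      fxw = (f[x] - f[w])/(x - w);   (* divided difference f[x_n, w_n] *)
      x = x - f[x]/fxw;              (* Traub's step *)
      gamma = -1/fxw;                (* gamma_{n+1} = -1/N_1'(x_{n+1}) = -1/f[x_n, w_n] *)
      Print[n, "  ", NumberForm[x, 25]],
      {n, 1, 8}]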
Inspired by Traub's idea, many efficient self-accelerating type methods have been proposed in recent years; see [6–9,11,13–16,18–24] and the references therein. For a self-accelerating type method, the convergence order can be improved by using more information to construct the self-accelerating parameter or by increasing the number of self-accelerating parameters. For example, using a self-accelerating parameter calculated by the secant approach, Petković et al. [13] proposed the following two-step iterative method with order 2 + √5 ≈ 4.236:

$$
\begin{cases}
z_n = x_n - \gamma_n f(x_n),\quad y_n = x_n - \dfrac{f(x_n)}{f[x_n, z_n]},\quad \gamma_n = \dfrac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})},\\[8pt]
x_{n+1} = y_n - \dfrac{f(y_n)}{f[x_n, z_n]}\left[1 + \dfrac{f(y_n)}{f(x_n)} + \dfrac{f(y_n)}{f(z_n)}\right].
\end{cases}
\tag{2}
$$
Zheng et al. [24] obtained the following two-step method with order (3 + √13)/2 ≈ 3.3028:

$$
\begin{cases}
\bar{x}_{n+1} = x_n - \dfrac{\lambda_n f^2(x_n)}{f(x_n + \lambda_n f(x_n)) - f(x_n)},\quad \lambda_n = -\dfrac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})},\\[8pt]
x_{n+1} = x_n - \dfrac{\lambda_n f^3(x_n)}{[f(x_n + \lambda_n f(x_n)) - f(x_n)][f(x_n) - f(\bar{x}_{n+1})]}.
\end{cases}
\tag{3}
$$
Using more information to calculate the self-accelerating parameter, Džunić and Petković [7] gave an n-point method with order $23\cdot 2^{n-4}$ $(n > 4)$, in which the self-accelerating parameter is calculated by a Newton interpolation polynomial of third degree. Wang and Zhang [18,20] proposed some iterative methods with memory, one of which is the following method:

$$
\begin{cases}
y_n = x_n - \dfrac{f(x_n)}{\lambda_n f(x_n) + f'(x_n)},\quad \lambda_n = -\dfrac{H_2''(x_n)}{2f'(x_n)},\\[8pt]
x_{n+1} = y_n - \dfrac{f(y_n)}{2\lambda_n f(x_n) + f'(x_n)}\left[1 + \dfrac{2f(y_n)}{f(x_n)} + \dfrac{f(y_n)^2}{f(x_n)^2}\right],
\end{cases}
\tag{4}
$$

where the self-accelerating parameter λ_n is calculated by the Hermite interpolation polynomial H_2(x) = H_2(x; x_n, x_n, y_{n−1}) of second degree. The convergence order of method (4) is (5 + √17)/2 ≈ 4.5616.
The convergence order of a self-accelerating type method can be improved greatly by increasing the number of self-accelerating parameters. Using two self-accelerating parameters, Džunić [6] obtained an efficient two-step method with order 7:

$$
\begin{cases}
\gamma_0,\ p_0\ \text{are given},\quad w_k = x_k + \gamma_k f(x_k),\\[4pt]
\gamma_k = -\dfrac{1}{N_3'(x_k)},\quad p_k = -\dfrac{N_4''(w_k)}{2N_4'(w_k)}\quad \text{for}\ k \ge 1,\\[8pt]
y_k = x_k - \dfrac{f(x_k)}{f[x_k, w_k] + p_k f(w_k)},\\[8pt]
x_{k+1} = y_k - \dfrac{(1 + t_k)f(y_k)}{f[x_k, w_k] + p_k f(w_k)},\quad t_k = \dfrac{f(y_k)}{f(x_k)},
\end{cases}
\tag{5}
$$

where N_3(x) = N_3(x; x_k, y_{k−1}, w_{k−1}, x_{k−1}) and N_4(x) = N_4(x; w_k, x_k, y_{k−1}, w_{k−1}, x_{k−1}) are Newton interpolating polynomials of third and fourth degree, respectively. Wang and Zhang [19] also
proposed a two-parameter iterative method with order 5.3059:

$$
\begin{cases}
y_n = x_n - \dfrac{f(x_n)}{f'(x_n) - \gamma_n f(x_n)},\quad \gamma_n = \dfrac{H_4''(x_n)}{2f'(x_n)},\quad \beta_n = -\dfrac{H_4'''(x_n)}{6f'(x_n)},\\[8pt]
x_{n+1} = y_n - \dfrac{f(y_n)}{2f[x_n, y_n] - f'(x_n)}\left[1 + \beta_n\,\dfrac{f(y_n)^2}{f'(x_n)^2}\right],
\end{cases}
\tag{6}
$$

where H_4(x) = H_4(x; x_n, x_n, y_{n−1}, x_{n−1}, x_{n−1}) is a Hermite interpolation polynomial of fourth degree.
Furthermore, using three self-accelerating parameters, Soleymani et al. [16], Lotfi and Assari [11] and Wang et al. [22] presented some efficient iterative methods with memory. Using n + 1 self-accelerating parameters, Wang and Zhang [21] derived an iterative method with the maximal convergence order $(2^{n+1}-1+\sqrt{2^{2(n+1)}+1})/2$. Lotfi's method [11] and Džunić's method [6] can be seen as special cases of our method [21]. Other self-accelerating type methods are discussed in [8,9,14,15].

For self-accelerating type methods, the convergence speed is considerably accelerated by employing the self-accelerating parameter, and the increase of convergence order is attained without any additional function evaluations. In summary, the self-accelerating parameters of self-accelerating type methods can be constructed by interpolation polynomials or by the secant approach. Besides these ways, it is worth investigating whether there are other ways to construct the self-accelerating parameter.
In this paper, we give a novel way to construct the self-accelerating parameter. The paper is organized as follows. In Section 2, we first derive two optimal fourth-order iterative methods without memory for solving nonlinear equations. Some novel self-accelerating parameters with simple structure are given in Section 3. Using these novel self-accelerating parameters, we obtain a family of Newton-type iterative methods with memory. The maximal convergence order of the new Newton-type iterative method with memory is 2 + √7 ≈ 4.64575. Since the acceleration of convergence is obtained without additional function evaluations, the computational efficiency of the new methods is significantly increased. Numerical examples are given in Section 4 to confirm the theoretical results. The dynamic behaviour of the iterative methods is analysed in Section 5. Section 6 is a short conclusion.
2. Two optimal fourth-order iterative methods without memory
Firstly, we consider the following one-parameter iterative scheme:

$$
\begin{cases}
z_n = x_n - \dfrac{f(x_n)}{f'(x_n)},\\[8pt]
y_n = x_n + \dfrac{z_n - x_n}{1 - \lambda(z_n - x_n)},\\[8pt]
x_{n+1} = y_n - \dfrac{f(y_n)}{f'(y_n)},
\end{cases}
\tag{7}
$$

where λ ∈ R. To avoid the evaluation of the first derivative f'(y_n), we approximate f(x) by a rational linear function of the form

$$
P(x) = \frac{A + B(x - x_n)}{1 + C(x - x_n)},
\tag{8}
$$

where the parameters A, B and C are determined by the following conditions:

$$
P(x_n) = f(x_n),\quad P(y_n) = f(y_n),\quad P'(x_n) = f'(x_n).
\tag{9}
$$
According to Equations (8) and (9), we obtain

$$
A = f(x_n), \tag{10}
$$

$$
B = f'(x_n) + f(x_n)\,\frac{f[x_n, y_n] - f'(x_n)}{f(x_n) - f(y_n)}, \tag{11}
$$

$$
C = \frac{f[x_n, y_n] - f'(x_n)}{f(x_n) - f(y_n)}, \tag{12}
$$

where f[x_n, y_n] = (f(x_n) − f(y_n))/(x_n − y_n) is the first-order divided difference.

Differentiation of Equation (8) gives

$$
P'(x) = \frac{B - AC}{[1 + C(x - x_n)]^2}. \tag{13}
$$

We approximate the derivative f'(y_n) by P'(y_n). Since B − AC = f'(x_n) and 1 + C(y_n − x_n) = f'(x_n)/f[x_n, y_n], we obtain

$$
f'(y_n) \approx P'(y_n) = \frac{(f[x_n, y_n])^2}{f'(x_n)}. \tag{14}
$$
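The identity (14) can also be checked symbolically. The following Mathematica fragment is an illustrative verification of (10)–(14); the names a1, b1, c1 stand for A, B, C and dfx for f'(x_n).

    (* Symbolic check that P'(y_n) = f[x_n,y_n]^2/f'(x_n) for the rational
       interpolant (8) with A, B, C taken from (10)-(12). *)
    fxy = (fx - fy)/(x - y);      (* divided difference f[x, y] *)
    a1 = fx;                      (* A, Eq. (10) *)
    c1 = (fxy - dfx)/(fx - fy);   (* C, Eq. (12) *)
    b1 = dfx + fx c1;             (* B, Eq. (11) *)
    Simplify[(b1 - a1 c1)/(1 + c1 (y - x))^2 - fxy^2/dfx]   (* returns 0 *)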
Substituting Equation (14) into Equation (7), we get a new one-parameter iterative method:

$$
\begin{cases}
z_n = x_n - \dfrac{f(x_n)}{f'(x_n)},\\[8pt]
y_n = x_n + \dfrac{z_n - x_n}{1 - \lambda(z_n - x_n)},\\[8pt]
x_{n+1} = y_n - \dfrac{f(y_n)}{f[x_n, y_n]}\,\dfrac{f'(x_n)}{f[x_n, y_n]},
\end{cases}
\tag{15}
$$

where λ ∈ R. Taking λ = 0, we get the Kung–Traub fourth-order method [10].
Furthermore, we construct the following two-parameter iterative method by adding a new step to method (15):

$$
\begin{cases}
z_n = x_n - \dfrac{f(x_n)}{f'(x_n)},\\[8pt]
y_n = x_n + \dfrac{z_n - x_n}{1 - \lambda(z_n - x_n)},\\[8pt]
w_n = y_n - \dfrac{f(y_n)}{f[x_n, y_n]}\,\dfrac{f'(x_n)}{f[x_n, y_n]},\\[8pt]
x_{n+1} = w_n - \gamma(w_n - y_n)(y_n - x_n)^2,
\end{cases}
\tag{16}
$$

where λ, γ ∈ R.
Using symbolic computation in the programming package Mathematica, we can find the convergence order and the asymptotic error constant (AEC) of the methods (15) and (16). For simplicity, we omit the iteration index n and write e instead of e_n. The approximation x_{n+1} to the root a will be denoted by x̂. For method (16), we define the errors e = x − a, e_y = y − a, e_z = z − a, e_w = w − a and e_1 = x̂ − a.

The following abbreviations are used in the program: ck = f^(k)(a)/(k! f'(a)), e = x − a, ey = y − a, ez = z − a, ew = w − a, e1 = x̂ − a, lp = λ, r = γ, fx = f(x), fy = f(y), df = f'(x), fxy = f[x, y].
Program (written in Mathematica):

    fx = a*(e + c2*e^2 + c3*e^3 + c4*e^4 + c5*e^5 + c6*e^6);  (* Taylor expansion of f about the root; here a stands for f'(a) *)
    df = D[fx, e];                                            (* f'(x) *)
    ez = Series[e - fx/df, {e, 0, 6}] // Simplify             (* error of the Newton step z *)
    ey = Series[e + (ez - e)/(1 - lp*(ez - e)), {e, 0, 4}] // Simplify   (* error of y *)
    fy = a*(ey + c2*ey^2 + c3*ey^3);                          (* f(y) expanded about the root *)
    fxy = (fx - fy)/(e - ey);                                 (* divided difference f[x, y] *)
    ew = Series[ey - fy*df/(fxy*fxy), {e, 0, 4}] // Simplify  (* error of the third step w *)
    e1 = Series[ew - r*(ew - ey)*(ey - e)^2, {e, 0, 4}] // Simplify      (* error of x_{n+1} in method (16) *)
$$
\text{Out[ez]} = c_2 e^2 + (-2c_2^2 + 2c_3)e^3 + O(e^4), \tag{17}
$$

$$
\text{Out[ey]} = (c_2 + \lambda)e^2 + (-2c_2^2 + 2c_3 - 2c_2\lambda - \lambda^2)e^3 + O(e^4), \tag{18}
$$

$$
\text{Out[ew]} = (c_2 + \lambda)(2c_2^2 - c_3 + c_2\lambda)e^4 + O(e^5), \tag{19}
$$

$$
\text{Out[e1]} = (c_2 + \lambda)(2c_2^2 - c_3 + c_2\lambda + \gamma)e^4 + O(e^5). \tag{20}
$$

The outputs (19) and (20) of the above program show that the convergence order of the iterative methods (15) and (16) is four. Altogether, we can state the following theorem.
Theorem 2.1: Let a ∈ I be a simple zero of a sufficiently differentiable function f : I ⊂ R → R for an open interval I. Then the iterative methods defined by Equations (15) and (16) are of fourth-order convergence and satisfy the following error equations:

$$
e_{n+1} = (c_2 + \lambda)(2c_2^2 - c_3 + c_2\lambda)e_n^4 + O(e_n^5) \tag{21}
$$

and

$$
e_{n+1} = (c_2 + \lambda)(2c_2^2 - c_3 + c_2\lambda + \gamma)e_n^4 + O(e_n^5), \tag{22}
$$

respectively.
Remark 2.2: Methods (15) and (16) reach the optimal order four requiring only three function evaluations per step, which agrees with the conjecture of Kung and Traub [10]. The Kung–Traub optimal fourth-order method [10] is a special case of method (15), obtained by taking λ = 0. The first two steps of the methods (15) and (16) belong to the second-order method developed by Wu [23]. In order to give a new technique to construct the self-accelerating parameter, we use the variant of Wu's method as the first two steps of the methods (15) and (16).
3. The new methods with memory and some novel self-accelerating parameters

In this section, we first improve the convergence order of the method (15) by using a simple self-accelerating parameter λ_n in place of the parameter λ. Then, using two self-accelerating parameters, we improve the convergence order of method (16). Similarly to the methods (4) and (6), we could construct the self-accelerating parameter by an interpolation polynomial, but we do not use this technique in this paper. Here, we give a novel way to construct the self-accelerating parameter. If λ = −c_2, then the convergence order of the method (15) can be improved. Thus, we should choose a suitable self-accelerating parameter λ_n to substitute for the parameter λ.
It is well known that Newton's method [12], x_{n+1} = x_n − f(x_n)/f'(x_n), converges quadratically. If the sequence {x_n} generated by Newton's method converges to a simple root a of the nonlinear equation, then the sequence {x_n} satisfies

$$
\lim_{n\to\infty} \frac{x_{n+1} - a}{(x_n - a)^2} = \lim_{n\to\infty} \frac{e_{n+1}}{e_n^2} = c_2, \tag{23}
$$

where c_2 = f''(a)/(2f'(a)) is the AEC, e_{n+1} = x_{n+1} − a and e_n = x_n − a.

If λ_n = −(x_{n+1} − a)/(x_n − a)^2, then the convergence order of the method (15) can be improved. Since the root a in Equation (23) is unknown, we use information from the current and previous iterations to approximate the root a and construct the following formulas for λ_n.

Formula 1:

$$
\lambda_n = \frac{x_n - z_{n-1}}{(y_{n-1} - x_{n-1})^2}. \tag{24}
$$

Formula 2:

$$
\lambda_n = \frac{2(x_n - z_{n-1})}{(z_n - x_{n-1})(x_n - x_{n-1})} + \frac{z_n - x_n}{(x_n - y_{n-1})(z_{n-1} - x_{n-1})} - \frac{x_n - z_{n-1}}{(y_{n-1} - x_{n-1})^2}. \tag{25}
$$
Remark 3.1: We now obtain the following one-parameter iterative method with memory:

$$
\begin{cases}
z_n = x_n - \dfrac{f(x_n)}{f'(x_n)},\\[8pt]
y_n = x_n + \dfrac{z_n - x_n}{1 - \lambda_n(z_n - x_n)},\\[8pt]
x_{n+1} = y_n - \dfrac{f(y_n)}{f[x_n, y_n]}\,\dfrac{f'(x_n)}{f[x_n, y_n]},
\end{cases}
\tag{26}
$$

where λ_n is calculated by one of the formulas (24)–(25) without any additional function evaluations.
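As an illustration, a minimal Mathematica sketch of method (26) with Formula 1 follows; the test function and starting point are those of f2 in Section 4, while λ0 = 0.1, the working precision and the iteration count are illustrative choices.

    (* Minimal sketch of method (26) with memory, lambda_n from Formula 1 (24). *)
    f[x_] := x^5 + x^4 + 4 x^2 - 15;
    x = N[16/10, 400]; lambda = 1/10;                   (* lambda0 = 0.1 as in Section 4 *)
    xOld = yOld = zOld = 0;
    Do[
      z = x - f[x]/f'[x];                               (* Newton step *)
      If[n > 1, lambda = (x - zOld)/(yOld - xOld)^2];   (* Formula 1, Eq. (24) *)
      y = x + (z - x)/(1 - lambda (z - x));
      fxy = (f[x] - f[y])/(x - y);                      (* f[x_n, y_n] *)
      {xOld, yOld, zOld} = {x, y, z};
      x = y - f[y]/fxy*f'[x]/fxy;                       (* third step of (26) *)
      Print[n, "  ", NumberForm[x, 25]],
      {n, 1, 5}]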
The concept of the R-order of convergence [12] and the following assertion (see [1, p. 287]) will be applied to estimate the convergence order of the iterative method with memory (26).

Theorem 3.2: If the errors of approximations e_j = x_j − a obtained in an iterative root-finding method IM satisfy

$$
e_{k+1} \sim \prod_{i=0}^{n} (e_{k-i})^{m_i},\quad k \ge k(\{e_k\}),
$$

then the R-order of convergence of IM, denoted by O_R(IM, a), satisfies the inequality O_R(IM, a) ≥ s*, where s* is the unique positive solution of the equation $s^{n+1} - \sum_{i=0}^{n} m_i s^{n-i} = 0$.
Theorem 3.3: Let the varying parameter λ_n in the iterative method (26) be calculated by Equation (24). If an initial approximation x_0 is sufficiently close to a simple root a of f(x), then the R-order of convergence of the iterative method (26) with memory is at least 2 + √5 ≈ 4.2361.
Proof: Let the sequence {x_n} generated by an iterative method converge to the root a of f(x) with R-order O_R(IM, a) ≥ r. Then we can write

$$
e_{n+1} \sim D_{n,r}\,e_n^r,\quad e_n = x_n - a, \tag{27}
$$

where D_{n,r} tends to the asymptotic error constant D_r of IM when n → ∞. So,

$$
e_{n+1} \sim D_{n,r}(D_{n-1,r}\,e_{n-1}^r)^r = D_{n,r}\,D_{n-1,r}^r\,e_{n-1}^{r^2}. \tag{28}
$$

In this case, we assume that the R-order of the iterative sequence {y_n} is p; then

$$
e_{n,y} \sim D_{n,p}\,e_n^p \sim D_{n,p}(D_{n-1,r}\,e_{n-1}^r)^p = D_{n,p}\,D_{n-1,r}^p\,e_{n-1}^{rp}. \tag{29}
$$

According to Equations (18) and (19), we get the corresponding error relations of method (26) with memory:

$$
e_{n,y} = y_n - a \sim (c_2 + \lambda_n)e_n^2, \tag{30}
$$

$$
e_{n+1} = x_{n+1} - a \sim (c_2 + \lambda_n)(2c_2^2 - c_3 + \lambda_n c_2)e_n^4 + O(e_n^5). \tag{31}
$$

Here, the higher-order terms in Equations (30)–(31) are omitted.

Substituting λ by λ_{n−1} and n by n − 1 in Equations (17), (18) and (19), we have

$$
x_n - z_{n-1} = -c_2 e_{n-1}^2 + 2(c_2^2 - c_3)e_{n-1}^3 + \big(-2c_2^3 - 3c_4 + 3c_2^2\lambda_{n-1} - c_3\lambda_{n-1} + c_2(6c_3 + \lambda_{n-1}^2)\big)e_{n-1}^4 + O(e_{n-1}^5), \tag{32}
$$

$$
y_{n-1} - x_{n-1} = -e_{n-1} + (c_2 + \lambda_{n-1})e_{n-1}^2 + (-2c_2^2 + 2c_3 - 2c_2\lambda_{n-1} - \lambda_{n-1}^2)e_{n-1}^3 + (4c_2^3 - 7c_2c_3 + 3c_4 + 5c_2^2\lambda_{n-1} - 4c_3\lambda_{n-1} + 3c_2\lambda_{n-1}^2 + \lambda_{n-1}^3)e_{n-1}^4 + O(e_{n-1}^5), \tag{33}
$$

then

$$
\lambda_n = \frac{x_n - z_{n-1}}{(y_{n-1} - x_{n-1})^2} = -c_2 - 2(c_3 + c_2\lambda_{n-1})e_{n-1} + (3c_2^3 - 2c_2c_3 - 3c_4 + 5\lambda_{n-1}c_2^2 - 5c_3\lambda_{n-1})e_{n-1}^2 + O(e_{n-1}^3), \tag{34}
$$

$$
c_2 + \lambda_n \sim -2(c_3 + c_2\lambda_{n-1})e_{n-1}. \tag{35}
$$

According to Equations (31) and (35), we get

$$
\begin{aligned}
e_{n+1} &\sim -2(c_3 + c_2\lambda_{n-1})e_{n-1}(2c_2^2 - c_3 + \lambda_n c_2)e_n^4\\
&\sim -2(c_3 + c_2\lambda_{n-1})e_{n-1}(2c_2^2 - c_3 + \lambda_n c_2)(D_{n-1,r}\,e_{n-1}^r)^4\\
&\sim -2(c_3 + c_2\lambda_{n-1})(2c_2^2 - c_3 + \lambda_n c_2)D_{n-1,r}^4\,e_{n-1}^{4r+1}.
\end{aligned}
\tag{36}
$$

Comparing the exponents of e_{n−1} appearing in relations (28) and (36), we get the equation

$$
r^2 = 4r + 1. \tag{37}
$$

The positive solution of Equation (37) is r = 2 + √5 ≈ 4.2361. Therefore, the R-order of the method with memory (26), when λ_n is calculated by Equation (24), is at least 4.2361.
Theorem 3.4: Let the varying parameter λ_n in the iterative method (26) be calculated by Equation (25). If an initial approximation x_0 is sufficiently close to a simple root a of f(x), then the R-order of convergence of the iterative method (26) with memory is at least 2 + √6 ≈ 4.4495.
Proof: Substituting λ by λ_{n−1} and n by n − 1 in Equations (17)–(19), we have

$$
z_n - x_{n-1} = -e_{n-1} + c_2(c_2 + \lambda_{n-1})^2(2c_2^2 - c_3 + c_2\lambda_{n-1})^2 e_{n-1}^8 + O(e_{n-1}^9), \tag{38}
$$

$$
x_n - x_{n-1} = -e_{n-1} + (c_2 + \lambda_{n-1})(2c_2^2 - c_3 + c_2\lambda_{n-1})e_{n-1}^4 + O(e_{n-1}^5), \tag{39}
$$

$$
z_n - x_n = -(c_2 + \lambda_{n-1})(2c_2^2 - c_3 + c_2\lambda_{n-1})e_{n-1}^4 + \big(10c_2^4 + 2c_3^2 + (16c_2^3 + 2c_4 - 3c_3\lambda_{n-1})\lambda_{n-1} + c_2^2(-14c_3 + 9\lambda_{n-1}^2) + 2c_2(c_4 - 7c_3\lambda_{n-1} + \lambda_{n-1}^3)\big)e_{n-1}^5 + O(e_{n-1}^6), \tag{40}
$$

$$
x_n - y_{n-1} = -(c_2 + \lambda_{n-1})e_{n-1}^2 + (2c_2^2 - 2c_3 + 2c_2\lambda_{n-1} + \lambda_{n-1}^2)e_{n-1}^3 + (-2c_2^3 - 2c_2^2\lambda_{n-1} + 6c_2c_3 - 3c_4 + 3c_3\lambda_{n-1} - 2c_2\lambda_{n-1}^2 - \lambda_{n-1}^3)e_{n-1}^4 + O(e_{n-1}^5), \tag{41}
$$

$$
z_{n-1} - x_{n-1} = -e_{n-1} + c_2 e_{n-1}^2 + 2(c_3 - c_2^2)e_{n-1}^3 + (4c_2^3 - 7c_2c_3 + 3c_4)e_{n-1}^4 + O(e_{n-1}^5). \tag{42}
$$

According to Equations (32)–(33) and (38)–(42), we obtain

$$
\lambda_n = \frac{2(x_n - z_{n-1})}{(z_n - x_{n-1})(x_n - x_{n-1})} + \frac{z_n - x_n}{(x_n - y_{n-1})(z_{n-1} - x_{n-1})} - \frac{x_n - z_{n-1}}{(y_{n-1} - x_{n-1})^2} = -c_2 + (c_2^3 + c_4 + 7c_2^2\lambda_{n-1} - c_3\lambda_{n-1} + 4c_2\lambda_{n-1}^2)e_{n-1}^2 + O(e_{n-1}^3), \tag{43}
$$

$$
c_2 + \lambda_n \sim (c_2^3 + c_4 + 7c_2^2\lambda_{n-1} - c_3\lambda_{n-1} + 4c_2\lambda_{n-1}^2)e_{n-1}^2. \tag{44}
$$

According to Equations (31) and (44), we get

$$
\begin{aligned}
e_{n+1} &\sim (c_2^3 + c_4 + 7c_2^2\lambda_{n-1} - c_3\lambda_{n-1} + 4c_2\lambda_{n-1}^2)e_{n-1}^2(2c_2^2 - c_3 + \lambda_n c_2)e_n^4\\
&\sim (c_2^3 + c_4 + 7c_2^2\lambda_{n-1} - c_3\lambda_{n-1} + 4c_2\lambda_{n-1}^2)e_{n-1}^2(2c_2^2 - c_3 + \lambda_n c_2)(D_{n-1,r}\,e_{n-1}^r)^4\\
&\sim (c_2^3 + c_4 + 7c_2^2\lambda_{n-1} - c_3\lambda_{n-1} + 4c_2\lambda_{n-1}^2)(2c_2^2 - c_3 + \lambda_n c_2)D_{n-1,r}^4\,e_{n-1}^{4r+2}.
\end{aligned}
\tag{45}
$$

Comparing the exponents of e_{n−1} appearing in relations (28) and (45), we get the equation

$$
r^2 = 4r + 2. \tag{46}
$$

The positive solution of Equation (46) is r = 2 + √6 ≈ 4.4495. Therefore, the R-order of the method with memory (26), when λ_n is calculated by Equation (25), is at least 4.4495.

The proof is completed.
Remark 3.5: From the error equation (22), we can see that the convergence order of the iterative method (16) can be further improved by taking the parameters λ = −c_2 and γ = c_3 − c_2^2. Similarly to the method (26), we can construct self-accelerating parameters λ_n and γ_n to substitute for the parameters λ and γ, respectively. So, we obtain a two-parameter iterative method with memory as follows:

$$
\begin{cases}
z_n = x_n - \dfrac{f(x_n)}{f'(x_n)},\\[8pt]
y_n = x_n + \dfrac{z_n - x_n}{1 - \lambda_n(z_n - x_n)},\\[8pt]
w_n = y_n - \dfrac{f(y_n)}{f[x_n, y_n]}\,\dfrac{f'(x_n)}{f[x_n, y_n]},\\[8pt]
x_{n+1} = w_n - \gamma_n(w_n - y_n)(y_n - x_n)^2,
\end{cases}
\tag{47}
$$

where the self-accelerating parameter λ_n can be calculated by Equation (25) and written as follows:

$$
\lambda_n = \frac{2(w_{n-1} - z_{n-1})}{(z_n - x_{n-1})(w_{n-1} - x_{n-1})} + \frac{z_n - w_{n-1}}{(w_{n-1} - y_{n-1})(z_{n-1} - x_{n-1})} - \frac{w_{n-1} - z_{n-1}}{(y_{n-1} - x_{n-1})^2}, \tag{48}
$$

and the self-accelerating parameter γ_n is calculated by the following scheme:

$$
\gamma_n = \frac{\lambda_n - \dfrac{w_{n-1} - z_{n-1}}{(z_n - x_{n-1})(w_{n-1} - x_{n-1})}}{2(x_{n-1} - w_{n-1})}. \tag{49}
$$

Based on Equation (22), we get the corresponding error relation of method (47) with memory:

$$
e_{n+1} = x_{n+1} - a \sim (c_2 + \lambda_n)\big[(c_2 + \lambda_n)c_2 + \gamma_n - c_3 + c_2^2\big]e_n^4 + O(e_n^5). \tag{50}
$$
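A corresponding minimal Mathematica sketch of method (47), with λ_n updated by (48) and γ_n by (49), is given below; again the test function, x0, λ0 = γ0 = 0.1, the working precision and the iteration count are illustrative choices.

    (* Minimal sketch of the two-parameter method with memory (47). *)
    f[x_] := x^5 + x^4 + 4 x^2 - 15;
    x = N[16/10, 400]; lambda = 1/10; gamma = 1/10;
    xOld = yOld = zOld = wOld = 0;
    Do[
      z = x - f[x]/f'[x];                              (* Newton step *)
      If[n > 1,
        lambda = 2 (wOld - zOld)/((z - xOld) (wOld - xOld)) +
          (z - wOld)/((wOld - yOld) (zOld - xOld)) -
          (wOld - zOld)/(yOld - xOld)^2;               (* Eq. (48) *)
        gamma = (lambda - (wOld - zOld)/((z - xOld) (wOld - xOld)))/
          (2 (xOld - wOld))];                          (* Eq. (49) *)
      y = x + (z - x)/(1 - lambda (z - x));
      fxy = (f[x] - f[y])/(x - y);                     (* f[x_n, y_n] *)
      w = y - f[y]/fxy*f'[x]/fxy;
      {xOld, yOld, zOld, wOld} = {x, y, z, w};
      x = w - gamma (w - y) (y - x)^2;                 (* last step of (47) *)
      Print[n, "  ", NumberForm[x, 25]],
      {n, 1, 5}]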
Theorem 3.6: Let the varying parameter λ_n in the iterative method (47) be calculated by Equation (48) and the varying parameter γ_n be calculated by Equation (49). If an initial approximation x_0 is sufficiently close to a simple root a of f(x), then the R-order of convergence of the iterative method (47) with memory is at least 2 + √7 ≈ 4.64575.
Proof: Using the results of Theorems 3.3 and 3.4, we have

$$
w_{n-1} - z_{n-1} = -c_2 e_{n-1}^2 + 2(c_2^2 - c_3)e_{n-1}^3 + \big(-2c_2^3 - 3c_4 + 3c_2^2\lambda_{n-1} - c_3\lambda_{n-1} + c_2(6c_3 + \lambda_{n-1}^2)\big)e_{n-1}^4 + O(e_{n-1}^5), \tag{51}
$$

$$
w_{n-1} - x_{n-1} = -e_{n-1} + (c_2 + \lambda_{n-1})(2c_2^2 - c_3 + c_2\lambda_{n-1})e_{n-1}^4 + O(e_{n-1}^5), \tag{52}
$$

$$
z_n - w_{n-1} = -(c_2 + \lambda_{n-1})(2c_2^2 - c_3 + c_2\lambda_{n-1})e_{n-1}^4 + \big(10c_2^4 + 2c_3^2 + (16c_2^3 + 2c_4 - 3c_3\lambda_{n-1})\lambda_{n-1} + c_2^2(-14c_3 + 9\lambda_{n-1}^2) + 2c_2(c_4 - 7c_3\lambda_{n-1} + \lambda_{n-1}^3)\big)e_{n-1}^5 + O(e_{n-1}^6), \tag{53}
$$

$$
w_{n-1} - y_{n-1} = -(c_2 + \lambda_{n-1})e_{n-1}^2 + (2c_2^2 - 2c_3 + 2c_2\lambda_{n-1} + \lambda_{n-1}^2)e_{n-1}^3 + (-2c_2^3 - 2c_2^2\lambda_{n-1} + 6c_2c_3 - 3c_4 + 3c_3\lambda_{n-1} - 2c_2\lambda_{n-1}^2 - \lambda_{n-1}^3)e_{n-1}^4 + O(e_{n-1}^5). \tag{54}
$$

From Equations (51)–(54), we get

$$
\lambda_n = \frac{2(w_{n-1} - z_{n-1})}{(z_n - x_{n-1})(w_{n-1} - x_{n-1})} + \frac{z_n - w_{n-1}}{(w_{n-1} - y_{n-1})(z_{n-1} - x_{n-1})} - \frac{w_{n-1} - z_{n-1}}{(y_{n-1} - x_{n-1})^2} = -c_2 + (c_2^3 + c_4 + 7c_2^2\lambda_{n-1} - c_3\lambda_{n-1} + 4c_2\lambda_{n-1}^2)e_{n-1}^2 + O(e_{n-1}^3), \tag{55}
$$

$$
\gamma_n = \frac{\lambda_n - \dfrac{w_{n-1} - z_{n-1}}{(z_n - x_{n-1})(w_{n-1} - x_{n-1})}}{2(x_{n-1} - w_{n-1})} = (c_3 - c_2^2) + \frac{1}{2}(3c_2^3 - 6c_2c_3 + 4c_4 + 4c_2^2\lambda_{n-1} + 3c_2\lambda_{n-1}^2)e_{n-1} + O(e_{n-1}^2). \tag{56}
$$
According to Equations (18), (20), (22), (55) and (56), we get

$$
\begin{aligned}
e_{n+1} &\sim (c_2 + \lambda_n)(2c_2^2 - c_3 + c_2\lambda_n + \gamma_n)e_n^4 + O(e_n^5)\\
&\sim (c_2 + \lambda_n)\big[c_2(c_2 + \lambda_n) + (c_2^2 - c_3) + \gamma_n\big]e_n^4 + O(e_n^5)\\
&\sim \tfrac{1}{2}(c_2^3 + c_4 + 7c_2^2\lambda_{n-1} - c_3\lambda_{n-1} + 4c_2\lambda_{n-1}^2)(3c_2^3 - 6c_2c_3 + 4c_4 + 4c_2^2\lambda_{n-1} + 3c_2\lambda_{n-1}^2)e_{n-1}^3\,e_n^4\\
&\sim \tfrac{1}{2}(c_2^3 + c_4 + 7c_2^2\lambda_{n-1} - c_3\lambda_{n-1} + 4c_2\lambda_{n-1}^2)(3c_2^3 - 6c_2c_3 + 4c_4 + 4c_2^2\lambda_{n-1} + 3c_2\lambda_{n-1}^2)D_{n-1,r}^4\,e_{n-1}^{4r+3}.
\end{aligned}
\tag{57}
$$

Comparing the exponents of e_{n−1} appearing in relations (28) and (57), we get the equation

$$
r^2 = 4r + 3. \tag{58}
$$

The positive solution of Equation (58) is r = 2 + √7 ≈ 4.64575. Therefore, the R-order of the method with memory (47) is at least 4.64575.

The proof is completed.
Table 1. Numerical results for f1(x) by the methods with and without memory.

Methods        |x1 − a|      |x2 − a|      |x3 − a|      |x4 − a|        ρ
(15)           0.23054e−3    0.93481e−14   0.25273e−55   0.13502e−221    4.0000000
(16)           0.23801e−3    0.11069e−13   0.51787e−55   0.24812e−220    4.0000000
ZM             0.52653e−2    0.44751e−7    0.15980e−23   0.61894e−78     3.3082725
PM             0.47398e−4    0.30448e−17   0.20742e−73   0.28696e−311    4.2348771
WM1            0.51371e−3    0.47887e−13   0.10861e−55   0.26710e−236    4.2352409
((26),(24))    0.23054e−3    0.29797e−15   0.73980e−66   0.11504e−280    4.2447985
((26),(25))    0.23054e−3    0.11530e−16   0.37701e−75   0.10351e−335    4.4551453
(47)           0.23801e−3    0.28636e−17   0.16584e−80   0.30286e−375    4.6608388
Table 2. Numerical results for f2(x) by the methods with and without memory.

Methods        |x1 − a|      |x2 − a|      |x3 − a|      |x4 − a|        ρ
(15)           0.43473e−2    0.70961e−9    0.50929e−36   0.13512e−144    4.0000000
(16)           0.45508e−2    0.90076e−9    0.13987e−35   0.81331e−143    4.0000000
ZM             0.42368e−1    0.19255e−4    0.35571e−15   0.10391e−50     3.3106279
PM             0.34012e−1    0.56555e−6    0.47462e−26   0.36556e−111    4.2395330
WM1            0.80255e−2    0.10174e−8    0.35663e−38   0.66482e−163    4.2345381
((26),(24))    0.43473e−2    0.20165e−10   0.55196e−45   0.89701e−192    4.2470385
((26),(25))    0.43473e−2    0.32346e−11   0.15068e−50   0.40255e−226    4.4639026
(47)           0.45508e−2    0.80166e−12   0.22689e−55   0.84517e−259    4.6713527
Table 3. Numerical results for f3(x) by the methods with and without memory.

Methods        |x1 − a|      |x2 − a|      |x3 − a|      |x4 − a|        ρ
(15)           0.34960e−3    0.42142e−15   0.88966e−63   0.17671e−253    4.0000000
(16)           0.51365e−3    0.58462e−15   0.98016e−63   0.77446e−254    4.0000000
ZM             0.12353e−1    0.19384e−7    0.17066e−26   0.18131e−89     3.3047880
PM             0.28807e−2    0.17703e−12   0.14180e−55   0.35820e−238    4.2369602
WM1            0.48052e−2    0.32661e−10   0.33069e−45   0.23474e−193    4.2334759
((26),(24))    0.34960e−3    0.18423e−14   0.85142e−64   0.58907e−272    4.2192971
((26),(25))    0.34960e−3    0.23180e−15   0.10493e−70   0.21832e−316    4.4391605
(47)           0.51365e−3    0.14740e−14   0.16327e−70   0.66622e−329    4.6177560
Table 4. Mean CPU time for the stopping criterion |x_{k+1} − x_k| < 10^{−200}.

f       ZM        PM        DM         WM1       WM2       ((26),(24))  ((26),(25))  (47)
f1      3.65167   3.83668   7.86026    3.33717   6.07311   2.88321      3.01768      2.08198
f2      1.37031   1.60400   4.57582    1.83301   3.67257   1.67264      1.56937      1.22523
f3      1.68761   1.99587   4.43760    2.32441   4.15212   1.23490      1.28794      1.38684
Total   6.70959   7.43655   16.87368   7.49459   13.8978   5.79075      5.87454      4.69405
Table 5. Mean CPU time for the stopping criterion |x_{k+1} − x_k| < 10^{−300}.

f       ZM        PM        DM         WM1       WM2        ((26),(24))  ((26),(25))  (47)
f1      5.25068   4.27068   9.04868    4.81075   6.26188    3.24762      2.99178      2.03238
f2      1.66484   1.76561   4.72589    1.90789   3.71531    1.60244      1.76468      1.46360
f3      1.91787   2.06982   5.24912    2.85419   4.16959    1.31540      1.77685      1.47826
Total   8.83339   8.10611   19.02369   9.57283   14.14678   6.16546      6.53331      4.97424
Table 6. Mean CPU time for the stopping criterion |x_{k+1} − x_k| < 10^{−400}.

f       ZM         PM         DM         WM1        WM2        ((26),(24))  ((26),(25))  (47)
f1      6.13801    5.53397    9.93476    5.00732    7.82657    3.74620      4.00423      2.64390
f2      2.68477    2.38338    5.85721    2.33408    4.25477    1.67139      1.92037      1.64955
f3      2.49570    2.58680    6.37576    2.78305    5.07970    1.38154      2.20242      1.65985
Total   11.31848   10.50451   22.16773   10.12445   17.16104   6.79913      8.12702      5.95330
Figure 1. Fractal results for the complex polynomial z^2 − 1. (a) Method (15); (b) method (16); (c) method PM; (d) method ZM; (e) method WM1; (f) method (26) with (24); (g) method (26) with (25); (h) method (47).
4. Numerical results

The method (2) (PM), method (3) (ZM), method (4) (WM1) and the new methods with and without memory are employed to solve the nonlinear functions f_i(x) (i = 1, 2, 3). The absolute errors |x_k − a| in the first four iterations are given in Tables 1–3, where a is the exact root computed with 2000 significant digits. The parameters λ = 0.1, λ_0 = 0.1, γ = 0.1 and γ_0 = 0.1 are used in the first iteration. The computational order of convergence ρ is defined by [5]

$$
\rho \approx \frac{\ln(|x_{n+1} - x_n|/|x_n - x_{n-1}|)}{\ln(|x_n - x_{n-1}|/|x_{n-1} - x_{n-2}|)}. \tag{59}
$$
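For instance, (59) can be evaluated from a list of successive iterates by the following small Mathematica helper; the name cocRho is ours, introduced only for illustration.

    (* Computational order of convergence (59) from at least four
       successive iterates; cocRho is an illustrative helper name. *)
    cocRho[xs_List] := Module[{d = Abs[Differences[xs]]},
      Log[d[[-1]]/d[[-2]]]/Log[d[[-2]]/d[[-3]]]]
    (* usage: cocRho[{x0, x1, x2, x3, x4}] *)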
Our methods are compared with the high-order method (5) (DM) and method (6) (WM2) in Tables 4–6, which give the mean CPU time (in seconds) over 50 runs of the different methods.
Figure 2. Fractal results for the complex polynomial z^3 − 1. (a) Method (15); (b) method (16); (c) method PM; (d) method ZM; (e) method WM1; (f) method (26) with (24); (g) method (26) with (25); (h) method (47).
The following test functions are used:

$$
f_1(x) = x\,e^{x^2} - \sin^2(x) + 3\cos(x) + 5,\quad a \approx -1.2076478271309189,\quad x_0 = -1.3,
$$
$$
f_2(x) = x^5 + x^4 + 4x^2 - 15,\quad a \approx 1.3474280989683050,\quad x_0 = 1.6,
$$
$$
f_3(x) = \arcsin(x^2 - 1) - 0.5x + 1,\quad a \approx 0.59481096839836918,\quad x_0 = 1.
$$
From Tables 1–3, we observe that the convergence orders of the methods (26) and (47) with memory are increased relative to the corresponding basic methods (15) and (16) without memory. The computational orders of convergence ρ are consistent with the theoretical orders. The convergence behaviour of the methods (26) and (47) with memory is better than that of the methods ZM, PM and WM1 for most examples. According to the results presented in Tables 4–6, our method (47) uses the least computing time and outperforms the high-order methods WM2 and DM. The main reason is that the accelerating parameters of the methods WM2 and DM are complicated and therefore cost more computing time to evaluate than those of our methods (26) and (47).
5. Dynamical analysis

The stability and reliability of an iterative method can be judged by its dynamical behaviour, and many authors have studied the dynamics of different iterative methods [2–4]. In this section, we compare our new methods with the methods ZM, PM and WM1 by using the basins of attraction for two complex polynomials f(z) = z^k − 1, k = 2, 3. We take the rectangle D = [−2.0, 2.0] × [−2.0, 2.0] ⊂ C and a grid of 300 × 300 points z_0 in D. A colour is assigned to each point z_0 ∈ D according to the simple root to which the corresponding iterative method starting from z_0 converges, and the point is painted black if the iterative method does not converge within the maximum number of iterations, which is 25. The sequence generated by an iterative method is deemed to reach a zero z* of the polynomial when |z_k − z*| < 10^−5. The parameters used in the iterative methods without memory are λ = 0.1 and γ = 0.1; the parameters λ_0 = 0.1 and γ_0 = 0.1 are used in the methods with memory.
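The experiment can be reproduced along the following lines. In this Mathematica sketch, plain Newton iteration stands in for the methods compared in the text, and the colour assignment is an illustrative choice; the region, grid, iteration cap and tolerance follow the description above.

    (* Basin-of-attraction sketch for z^3 - 1 on [-2,2]x[-2,2], 300x300
       grid, at most 25 iterations, tolerance 10^-5; plain Newton
       iteration stands in for the compared methods. *)
    roots = z /. NSolve[z^3 - 1 == 0, z];
    step[w_] := w - (w^3 - 1)/(3 w^2);                    (* Newton map *)
    paint[z0_] := Module[{w = N[z0], k = 0},
      While[k < 25 && Min[Abs[w - roots]] > 10^-5, w = step[w]; k++];
      If[k == 25, 0, First@Ordering[Abs[w - roots], 1]]]; (* 0 = non-convergent *)
    ArrayPlot[
      Table[paint[x + I y], {y, 2., -2., -4/299}, {x, -2., 2., 4/299}],
      ColorRules -> {0 -> Black, 1 -> Red, 2 -> Green, 3 -> Blue}]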
From Figures 1 and 2, we can see that our new methods (26) and (47) with memory have faster convergence than the iterative methods PM, ZM and (15)–(16). The new methods with memory also have fewer diverging points than the iterative methods (16) and PM.
6. Conclusions

The main contribution of this paper is a new approximate way to construct the self-accelerating parameter. Firstly, we presented two new fourth-order iterative methods without memory for solving nonlinear equations. Based on these new optimal methods without memory, new methods with memory were presented, which use some special self-accelerating parameters. The new self-accelerating parameters have simple structure and are easy to calculate, so they do not increase the computational cost of the iterative methods. Numerical comparisons were made with some known methods, by using the basins of attraction and through numerical computations, to demonstrate the efficiency and performance of the new methods. The experimental results show that our methods have better convergence behaviour.
Disclosure statement
No potential conflict of interest was reported by the author.

Funding
The project was supported by the National Natural Science Foundation of China [Nos. 11547005 and 61572082], the Doctor Startup Foundation of Liaoning Province of China [No. 201501196] and the Educational Commission Foundation of Liaoning Province of China [No. L2015012].
References
[1] G. Alefeld and J. Herzberger, Introduction to Interval Computation, Academic Press, New York, 1983.
[2] F.I. Chicharro, A. Cordero, J.M. Gutiérrez, and J.R. Torregrosa, Complex dynamics of derivative-free methods for nonlinear equations, Appl. Math. Comput. 219 (2013), pp. 7023–7035.
[3] C. Chun and B. Neta, The basins of attraction of Murakami's fifth order family of methods, Appl. Numer. Math. 110 (2016), pp. 14–25.
[4] A. Cordero, F. Soleymani, J.R. Torregrosa, and M. Zaka Ullah, Numerically stable improved Chebyshev–Halley type schemes for matrix sign function, J. Comput. Appl. Math. 318 (2017), pp. 189–198.
[5] A. Cordero and J.R. Torregrosa, Variants of Newton's method using fifth-order quadrature formulas, Appl. Math. Comput. 190 (2007), pp. 686–698.
[6] J. Džunić, On efficient two-parameter methods for solving nonlinear equations, Numer. Algorithms 63 (2013), pp. 549–569.
[7] J. Džunić and M.S. Petković, On generalized multipoint root-solvers with memory, J. Comput. Appl. Math. 236 (2012), pp. 2909–2920.
[8] J. Džunić and M.S. Petković, On generalized biparametric multipoint root finding methods with memory, J. Comput. Appl. Math. 255 (2014), pp. 362–375.
[9] M. Kansal, V. Kanwar, and S. Bhatia, Efficient derivative-free variants of Hansen–Patrick's family with memory for solving nonlinear equations, Numer. Algorithms 73 (2016), pp. 1017–1036. doi:10.1007/s11075-016-0127-6
[10] H.T. Kung and J.F. Traub, Optimal order of one-point and multipoint iterations, J. ACM 21 (1974), pp. 643–651.
[11] T. Lotfi and P. Assari, New three- and four-parametric iterative with memory methods with efficiency index near 2, Appl. Math. Comput. 270 (2015), pp. 1004–1010.
[12] J.M. Ortega and W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[13] M.S. Petković, S. Ilić, and J. Džunić, Derivative free two-point methods with and without memory for solving nonlinear equations, Appl. Math. Comput. 217 (2010), pp. 1887–1895.
[14] S. Sharifi, S. Siegmund, and M. Salimi, Solving nonlinear equations by a derivative-free form of the King's family with memory, Calcolo 53 (2016), pp. 201–215.
[15] J.R. Sharma, R.K. Guha, and P. Gupta, Some efficient derivative free methods with memory for solving nonlinear equations, Appl. Math. Comput. 219 (2012), pp. 699–707.
[16] F. Soleymani, T. Lotfi, E. Tavakoli, and F.K. Haghani, Several iterative methods with memory using self-accelerators, Appl. Math. Comput. 254 (2015), pp. 452–458.
[17] J.F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, New York, 1964.
[18] X. Wang and T. Zhang, A new family of Newton-type iterative methods with and without memory for solving nonlinear equations, Calcolo 51 (2014), pp. 1–15.
[19] X. Wang and T. Zhang, High-order Newton-type iterative methods with memory for solving nonlinear equations, Math. Commun. 19 (2014), pp. 91–109.
[20] X. Wang and T. Zhang, Some Newton-type iterative methods with and without memory for solving nonlinear equations, Int. J. Comput. Methods 11 (2014), p. 1350078.
[21] X. Wang and T. Zhang, Efficient n-point iterative methods with memory for solving nonlinear equations, Numer. Algorithms 70 (2015), pp. 357–375.
[22] X. Wang, T. Zhang, and Y. Qin, Efficient two-step derivative-free iterative methods with memory and their dynamics, Int. J. Comput. Math. 93 (2016), pp. 1423–1446.
[23] X. Wu, A new continuation Newton-like method and its deformation, Appl. Math. Comput. 112 (2000), pp. 75–78.
[24] Q. Zheng, P. Zhao, L. Zhang, and W. Ma, Variants of Steffensen-secant method and applications, Appl. Math. Comput. 216 (2010), pp. 3486–3496.