INTERNATIONAL JOURNAL OF COMPUTER MATHEMATICS, 2017
https://doi.org/10.1080/00207160.2017.1366459
A family of Newton-type iterative methods using some special
self-accelerating parameters
Xiaofeng Wang
School of Mathematics and Physics, Bohai University, Jinzhou, China
ABSTRACT
In this paper, a family of Newton-type iterative methods with memory is obtained for solving nonlinear equations, which uses some special self-accelerating parameters. To this end, we first present two optimal fourth-order iterative methods without memory for solving nonlinear equations. Then we give a novel way to construct the self-accelerating parameter and obtain a family of Newton-type iterative methods with memory. The self-accelerating parameters have the properties of simple structure and easy calculation, which do not increase the computational cost of the iterative methods. The convergence order of the new iterative method is increased from 4 to $2+\sqrt{7}\approx 4.64575$. Numerical comparisons are made with some known methods by using the basins of attraction and through numerical computations to demonstrate the efficiency and the performance of the new methods. Experiment results show that, compared with the existing methods, the new iterative methods with memory have the advantage of costing less computing time.
ARTICLE HISTORY
Received 19 October 2016; Revised 12 April 2017; Accepted 19 June 2017
KEYWORDS
Iterative method with memory; self-accelerating parameter; root-finding; Newton method; convergence order
2010 AMS SUBJECT CLASSIFICATIONS
65H05; 65B99
1. Introduction
In this paper, a family of Newton-type iterative methods with memory for finding a simple root of the nonlinear equation $f(x)=0$ is given, where $f: I\subset\mathbb{R}\rightarrow\mathbb{R}$ for an open interval $I$ is a scalar function. Iterative methods with memory were considered for the first time by Traub [17] in 1964, who proposed the following method
$$\begin{cases} x_0,\ \gamma_0\ \text{are given suitably},\\[1mm] x_{n+1}=x_n-\dfrac{f(x_n)}{f[x_n,w_n]},\\[1mm] N_1(x)=f(x_n)+(x-x_n)f[x_n,w_n],\quad \gamma_{n+1}=-1/N_1'(x_n),\\[1mm] w_{n+1}=x_{n+1}+\gamma_{n+1}f(x_{n+1}). \end{cases}\tag{1}$$
The convergence order of method (1) is $1+\sqrt{2}$. The self-accelerating parameter $\gamma_n$ is calculated by using information from the current and previous iterations. Method (1) tells us that it is possible to increase the convergence order using a suitable self-accelerating parameter. An iterative method which uses a self-accelerating parameter is called a self-accelerating type method in this paper.
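To make the role of the self-accelerating parameter concrete, the following short Mathematica sketch (ours, not from the original paper; the test equation $e^t-3t=0$ and the starting values are illustrative choices) iterates method (1), taking $w_0=x_0+\gamma_0 f(x_0)$:

f[t_] := Exp[t] - 3 t;                (* illustrative test equation f(t) = 0 *)
x = N[1/2, 300]; gamma = -1/10;       (* x0 and gamma0 chosen arbitrarily *)
Do[
  w = x + gamma f[x];                 (* w_n = x_n + gamma_n f(x_n) *)
  dd = (f[x] - f[w])/(x - w);         (* divided difference f[x_n, w_n] = N1'(x) *)
  x = x - f[x]/dd;                    (* secant-type step *)
  gamma = -1/dd,                      (* self-accelerating update gamma_{n+1} = -1/N1'(x_n) *)
  {8}];
Print[N[x, 20]]                       (* converges to the root near 0.6190612867 *)

The parameter update reuses quantities already computed in the step, so the acceleration costs no extra function evaluations.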
CONTACT Xiaofeng Wang, w200888w@163.com, School of Mathematics and Physics, Bohai University, Jinzhou, Liaoning 121000, China
© 2017 Informa UK Limited, trading as Taylor & Francis Group
Inspired by Traub's idea, many efficient self-accelerating type methods have been proposed in recent years, see [6-9,11,13-16,18-24] and references therein. For the self-accelerating type method, the convergence order can be improved by using more information to construct the self-accelerating parameter or by increasing the number of self-accelerating parameters. For example, using a self-accelerating parameter calculated by the secant approach, Petković et al. [13] proposed the following two-step iterative method with order $2+\sqrt{5}\approx 4.236$:
$$\begin{cases} z_n=x_n-\gamma_n f(x_n),\quad y_n=x_n-\dfrac{f(x_n)}{f[x_n,z_n]},\quad \gamma_n=\dfrac{x_n-x_{n-1}}{f(x_n)-f(x_{n-1})},\\[1mm] x_{n+1}=y_n-\dfrac{f(y_n)}{f[x_n,z_n]}\left(1+\dfrac{f(y_n)}{f(x_n)}+\dfrac{f(y_n)}{f(z_n)}\right). \end{cases}\tag{2}$$
Zheng et al. [24] obtained the following two-step method with order $(3+\sqrt{13})/2\approx 3.3028$:
$$\begin{cases} \bar{x}_{n+1}=x_n-\dfrac{\lambda_n f^2(x_n)}{f(x_n+\lambda_n f(x_n))-f(x_n)},\quad \lambda_n=-\dfrac{x_n-x_{n-1}}{f(x_n)-f(x_{n-1})},\\[1mm] x_{n+1}=x_n-\dfrac{\lambda_n f^3(x_n)}{[f(x_n+\lambda_n f(x_n))-f(x_n)][f(x_n)-f(\bar{x}_{n+1})]}. \end{cases}\tag{3}$$
Using more information to calculate the self-accelerating parameter, Džunić and Petković [7] gave an $n$-point method with order $2^n-3\cdot 2^{n-4}$ $(n>4)$, in which the self-accelerating parameter is calculated by a Newton interpolation polynomial of third degree. Wang and Zhang [18,20] proposed some iterative methods with memory, one of them being the following method
$$\begin{cases} y_n=x_n-\dfrac{f(x_n)}{\lambda_n f(x_n)+f'(x_n)},\quad \lambda_n=-\dfrac{H_2''(x_n)}{2f'(x_n)},\\[1mm] x_{n+1}=y_n-\dfrac{f(y_n)}{2\lambda_n f(x_n)+f'(x_n)}\left[1+\dfrac{2f(y_n)}{f(x_n)}+\left(\dfrac{f(y_n)}{f(x_n)}\right)^2\right], \end{cases}\tag{4}$$
where the self-accelerating parameter $\lambda_n$ is calculated by the Hermite interpolation polynomial $H_2(x)=H_2(x;x_n,x_n,y_{n-1})$ of second degree. The convergence order of method (4) is $(5+\sqrt{17})/2\approx 4.5616$.
The convergence order of the self-accelerating type method can be improved greatly by increasing the number of self-accelerating parameters. Using two self-accelerating parameters, Džunić [6] obtained an efficient two-step method with order 7:
$$\begin{cases} \gamma_0,\ p_0\ \text{are given},\quad w_k=x_k+\gamma_k f(x_k),\\[1mm] \gamma_k=-\dfrac{1}{N_3'(x_k)},\quad p_k=-\dfrac{N_4''(w_k)}{2N_4'(w_k)}\quad\text{for } k\geq 1,\\[1mm] y_k=x_k-\dfrac{f(x_k)}{f[x_k,w_k]+p_k f(w_k)},\\[1mm] x_{k+1}=y_k-\dfrac{(1+t_k)f(y_k)}{f[x_k,w_k]+p_k f(w_k)},\quad t_k=\dfrac{f(y_k)}{f(x_k)}, \end{cases}\tag{5}$$
where $N_3(x)=N_3(x;x_k,y_{k-1},w_{k-1},x_{k-1})$ and $N_4(x)=N_4(x;w_k,x_k,y_{k-1},w_{k-1},x_{k-1})$ are Newton interpolating polynomials of third and fourth degree, respectively. Wang and Zhang [19] also proposed a two-parameter iterative method with order 5.3059:
$$\begin{cases} y_n=x_n-\dfrac{f(x_n)}{f'(x_n)-\gamma_n f(x_n)},\quad \gamma_n=\dfrac{H_4''(x_n)}{2f'(x_n)},\quad \beta_n=-\dfrac{H_4'''(x_n)}{6f'(x_n)},\\[1mm] x_{n+1}=y_n-\dfrac{f(y_n)}{2f[x_n,y_n]-f'(x_n)}\left[1+\beta_n\left(\dfrac{f(y_n)}{f(x_n)}\right)^2\right], \end{cases}\tag{6}$$
where $H_4(x)=H_4(x;x_n,x_n,y_{n-1},x_{n-1},x_{n-1})$ is a Hermite interpolation polynomial of fourth degree. Furthermore, using three self-accelerating parameters, Soleymani et al. [16], Lotfi and Assari [11] and Wang et al. [22] presented some efficient iterative methods with memory, respectively. Using $n+1$ self-accelerating parameters, Wang and Zhang [21] derived an iterative method with the maximal convergence order $(2^{n+1}-1+\sqrt{2^{2(n+1)}+1})/2$. Lotfi's method [11] and Džunić's method [6] can be seen as special cases of our method [21]. Other self-accelerating type methods are discussed in [8,9,14,15].
For the self-accelerating type methods, the convergence speed of the iterative method is considerably accelerated by employing the self-accelerating parameter. The increase of convergence order is attained without any additional function evaluations. In summary, the self-accelerating parameters of self-accelerating type methods can be constructed by interpolation polynomials or by the secant approach. Besides these ways, it is worth investigating whether there are other ways to construct the self-accelerating parameter.
In this paper, we will give a novel way to construct the self-accelerating parameter. This paper is organized as follows. In Section 2, we first derive two optimal fourth-order iterative methods without memory for solving nonlinear equations. Some novel self-accelerating parameters with simple structure are given in Section 3. Using these novel self-accelerating parameters, we obtain a family of Newton-type iterative methods with memory. The maximal convergence order of the new Newton-type iterative method with memory is $2+\sqrt{7}\approx 4.64575$. Since the acceleration of convergence is obtained without additional function evaluations, the computational efficiency of the new methods is significantly increased. Numerical examples are given in Section 4 to confirm the theoretical results. The dynamic behaviour of the iterative methods is analysed in Section 5. Section 6 is a short conclusion.
2. Two optimal fourth-order iterative methods without memory
Firstly, we consider the following one-parameter iterative scheme
$$\begin{cases} z_n=x_n-\dfrac{f(x_n)}{f'(x_n)},\\[1mm] y_n=x_n+\dfrac{z_n-x_n}{1-\lambda(z_n-x_n)},\\[1mm] x_{n+1}=y_n-\dfrac{f(y_n)}{f'(y_n)}, \end{cases}\tag{7}$$
where $\lambda\in\mathbb{R}$. To avoid the evaluation of the first derivative $f'(y_n)$, we approximate $f(x)$ by a rational linear function of the form
$$P(x)=\frac{A+B(x-x_n)}{1+C(x-x_n)},\tag{8}$$
where the parameters $A$, $B$ and $C$ are determined by the following conditions
$$P(x_n)=f(x_n),\quad P(y_n)=f(y_n),\quad P'(x_n)=f'(x_n).\tag{9}$$
According to Equations (8) and (9), we obtain
$$A=f(x_n),\tag{10}$$
$$B=f'(x_n)+\frac{f(x_n)\left(f[x_n,y_n]-f'(x_n)\right)}{f(x_n)-f(y_n)},\tag{11}$$
$$C=\frac{f[x_n,y_n]-f'(x_n)}{f(x_n)-f(y_n)},\tag{12}$$
where $f[x_n,y_n]=(f(x_n)-f(y_n))/(x_n-y_n)$ is the first-order divided difference. Differentiation of Equation (8) gives
$$P'(x)=\frac{B-AC}{[1+C(x-x_n)]^2}.\tag{13}$$
We approximate the derivative $f'(y_n)$ by $P'(y_n)$ and obtain
$$f'(y_n)\approx P'(y_n)=\frac{(f[x_n,y_n])^2}{f'(x_n)}.\tag{14}$$
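Equation (14) can be verified symbolically; the following Mathematica check (ours, not from the original paper) solves the conditions (9) for the coefficients and confirms $P'(y_n)=(f[x_n,y_n])^2/f'(x_n)$. The symbols fx, fy and dfx stand for $f(x_n)$, $f(y_n)$ and $f'(x_n)$, and c is used instead of C, which is reserved in Mathematica:

p[t_] := (a + b (t - x))/(1 + c (t - x));           (* rational interpolant (8) *)
sol = Solve[{p[x] == fx, p[y] == fy, p'[x] == dfx}, {a, b, c}][[1]];
Simplify[(p'[y] /. sol) - ((fx - fy)/(x - y))^2/dfx] (* output: 0, confirming (14) *)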
Substituting Equation (14) into Equation (7), we get a new one-parameter iterative method
$$\begin{cases} z_n=x_n-\dfrac{f(x_n)}{f'(x_n)},\\[1mm] y_n=x_n+\dfrac{z_n-x_n}{1-\lambda(z_n-x_n)},\\[1mm] x_{n+1}=y_n-\dfrac{f(y_n)}{f[x_n,y_n]}\cdot\dfrac{f'(x_n)}{f[x_n,y_n]}, \end{cases}\tag{15}$$
where $\lambda\in\mathbb{R}$. Taking $\lambda=0$, we get the Kung-Traub fourth-order method [10].
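As an illustration (our own test problem, not from the paper), method (15) can be run in a few lines of Mathematica; with $\lambda=0$ the same code performs the Kung-Traub step:

f[t_] := t^3 + 4 t^2 - 10;                  (* illustrative test function *)
x = N[15/10, 400]; lambda = 1/10;
Do[
  z = x - f[x]/f'[x];                       (* Newton step *)
  y = x + (z - x)/(1 - lambda (z - x));     (* second step *)
  dd = (f[x] - f[y])/(x - y);               (* divided difference f[x_n, y_n] *)
  x = y - f[y]/dd*f'[x]/dd,                 (* third step of (15) *)
  {4}];
Print[N[x, 30]]                             (* root near 1.36523001341410 *)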
Furthermore, we construct the following two-parameter iterative method by adding a new step to method (15):
$$\begin{cases} z_n=x_n-\dfrac{f(x_n)}{f'(x_n)},\\[1mm] y_n=x_n+\dfrac{z_n-x_n}{1-\lambda(z_n-x_n)},\\[1mm] w_n=y_n-\dfrac{f(y_n)}{f[x_n,y_n]}\cdot\dfrac{f'(x_n)}{f[x_n,y_n]},\\[1mm] x_{n+1}=w_n-\gamma(w_n-y_n)(y_n-x_n)^2, \end{cases}\tag{16}$$
where $\lambda,\gamma\in\mathbb{R}$.
Using symbolic computation in the programming package Mathematica, we can find the convergence order and the asymptotic error constant (AEC) of the methods (15) and (16). For simplicity, we omit the iteration index $n$ and write $e$ instead of $e_n$. The approximation $x_{n+1}$ to the root $a$ will be denoted by $\hat{x}$. For method (16), we define the errors $e=x-a$, $e_y=y-a$, $e_z=z-a$, $e_w=w-a$, $e_1=\hat{x}-a$.
The following abbreviations are used in the program: $c_k=f^{(k)}(a)/(k!f'(a))$, e = x - a, ey = y - a, ez = z - a, ew = w - a, e1 = x̂ - a, lp = λ, r = γ, fx = f(x), fy = f(y), df = f'(x), fxy = f[x, y].
Program (written in Mathematica)
fx = a*(e + c2*e^2 + c3*e^3 + c4*e^4 + c5*e^5 + c6*e^6);  (* Taylor expansion of f about the root; here a stands for f'(a) *)
df = D[fx, e];                                            (* f'(x) *)
ez = Series[e - fx/df, {e, 0, 6}] // Simplify             (* Newton step: z - a *)
ey = Series[e + (ez - e)/(1 - lp*(ez - e)), {e, 0, 4}] // Simplify   (* second step: y - a *)
fy = a*(ey + c2*ey^2 + c3*ey^3);                          (* f(y) *)
fxy = (fx - fy)/(e - ey);                                 (* divided difference f[x, y] *)
ew = Series[ey - fy*df/(fxy*fxy), {e, 0, 4}] // Simplify  (* third step, method (15): w - a *)
e1 = Series[ew - r*(ew - ey)*(ey - e)^2, {e, 0, 4}] // Simplify   (* fourth step, method (16) *)
$$\text{Out[ez]}=c_2e^2+(-2c_2^2+2c_3)e^3+O[e]^4,\tag{17}$$
$$\text{Out[ey]}=(c_2+\lambda)e^2+(-2c_2^2+2c_3-2c_2\lambda-\lambda^2)e^3+O[e]^4,\tag{18}$$
$$\text{Out[ew]}=(c_2+\lambda)(2c_2^2-c_3+c_2\lambda)e^4+O[e]^5,\tag{19}$$
$$\text{Out[e1]}=(c_2+\lambda)(2c_2^2-c_3+c_2\lambda+\gamma)e^4+O[e]^5.\tag{20}$$
The outputs (19) and (20) of the above program mean that the convergence order of the iterative methods (15) and (16) is four. Altogether, we can state the following theorem.
Theorem 2.1: Let $a\in I$ be a simple zero of a sufficiently differentiable function $f: I\subset\mathbb{R}\rightarrow\mathbb{R}$ for an open interval $I$. Then the iterative methods defined by Equations (15) and (16) are of fourth-order convergence and satisfy the following error equations
$$e_{n+1}=(c_2+\lambda)(2c_2^2-c_3+c_2\lambda)e_n^4+O(e_n^5)\tag{21}$$
and
$$e_{n+1}=(c_2+\lambda)(2c_2^2-c_3+c_2\lambda+\gamma)e_n^4+O(e_n^5),\tag{22}$$
respectively.
Remark 2.2: Methods (15) and (16) reach the optimal order four requiring only three function evaluations per step, which agrees with the conjecture of Kung-Traub [10]. The Kung-Traub optimal fourth-order method [10] is a special case of method (15), obtained by taking $\lambda=0$. The first two steps of the methods (15) and (16) are a variant of the second-order method developed by Wu [23]. In order to give a new technique to construct the self-accelerating parameter, we use the variant of Wu's method as the first two steps of the methods (15) and (16).
3. The new methods with memory and some novel self-accelerating parameters
In this section, we will firstly improve the convergence order of the method (15) by using a simple self-accelerating parameter $\lambda_n$ to substitute the parameter $\lambda$. Then, using two self-accelerating parameters, we improve the convergence order of method (16). Similar to the methods (4) and (6), we could construct the self-accelerating parameter by an interpolation polynomial, but we do not use this technique in this paper. Here, we give a novel way to construct the self-accelerating parameter. If $\lambda=-c_2$, then the convergence order of the method (15) can be improved. Thus, we should choose a suitable self-accelerating parameter $\lambda_n$ to substitute the parameter $\lambda$.
It is well known that Newton's method [12], $x_{n+1}=x_n-f(x_n)/f'(x_n)$, converges quadratically. If the sequence $\{x_n\}$ generated by the Newton method converges to a simple root $a$ of the nonlinear equation, then the sequence $\{x_n\}$ satisfies the following expression
$$\lim_{n\to\infty}\frac{x_{n+1}-a}{(x_n-a)^2}=\lim_{n\to\infty}\frac{e_{n+1}}{e_n^2}=c_2,\tag{23}$$
where $c_2=f''(a)/(2f'(a))$ is the AEC, $e_{n+1}=x_{n+1}-a$ and $e_n=x_n-a$.
If $\lambda_n=-(x_{n+1}-a)/(x_n-a)^2$, then the convergence order of the method (15) can be improved. Since the root $a$ in Equation (23) is unknown, we use information from the current and previous iterations to approximate the root $a$ and construct the following formulas for $\lambda_n$:
Formula 1:
$$\lambda_n=\frac{x_n-z_{n-1}}{(y_{n-1}-x_{n-1})^2}.\tag{24}$$
Formula 2:
$$\lambda_n=\frac{2(x_n-z_{n-1})}{(z_n-x_{n-1})(x_n-x_{n-1})}+\frac{z_n-x_n}{(x_n-y_{n-1})(z_{n-1}-x_{n-1})}-\frac{x_n-z_{n-1}}{(y_{n-1}-x_{n-1})^2}.\tag{25}$$
Remark 3.1: We now obtain the following one-parameter iterative method with memory:
$$\begin{cases} z_n=x_n-\dfrac{f(x_n)}{f'(x_n)},\\[1mm] y_n=x_n+\dfrac{z_n-x_n}{1-\lambda_n(z_n-x_n)},\\[1mm] x_{n+1}=y_n-\dfrac{f(y_n)}{f[x_n,y_n]}\cdot\dfrac{f'(x_n)}{f[x_n,y_n]}, \end{cases}\tag{26}$$
where $\lambda_n$ is calculated by one of the formulas (24)-(25) without any additional function evaluations.
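The with-memory iteration is obtained from the sketch of method (15) above simply by updating $\lambda$ after each step; a minimal Mathematica version with Formula 1 (24), again on our illustrative test problem:

f[t_] := t^3 + 4 t^2 - 10;
x = N[15/10, 600]; lambda = 1/10;            (* lambda_0 = 0.1 as in Section 4 *)
Do[
  z = x - f[x]/f'[x];
  y = x + (z - x)/(1 - lambda (z - x));
  dd = (f[x] - f[y])/(x - y);
  xnew = y - f[y]/dd*f'[x]/dd;
  lambda = (xnew - z)/(y - x)^2;             (* Formula 1: (x_{n+1} - z_n)/(y_n - x_n)^2 *)
  x = xnew,
  {5}];
Print[N[x, 30]]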
The concept of the R-order of convergence [12] and the following assertion (see [1, p. 287]) will be applied to estimate the convergence order of the iterative method with memory (26).
Theorem 3.2: If the errors of approximations $e_j=x_j-a$ obtained in an iterative root-finding method IM satisfy
$$e_{k+1}\sim\prod_{i=0}^{n}(e_{k-i})^{m_i},\quad k\geq k(\{e_k\}),$$
then the R-order of convergence of IM, denoted by $O_R(\mathrm{IM},a)$, satisfies the inequality $O_R(\mathrm{IM},a)\geq s$, where $s$ is the unique positive solution of the equation $s^{n+1}-\sum_{i=0}^{n}m_is^{n-i}=0$.
Theorem 3.3: Let the varying parameter $\lambda_n$ in the iterative method (26) be calculated by Equation (24). If an initial approximation $x_0$ is sufficiently close to a simple root $a$ of $f(x)$, then the R-order of convergence of the iterative method (26) with memory is at least $2+\sqrt{5}\approx 4.2361$.
Proof: Let the sequence $\{x_n\}$ generated by an iterative method (IM) converge to the root $a$ of $f(x)$ with R-order $O_R(\mathrm{IM},a)\geq r$. Then we can write
$$e_{n+1}\sim D_{n,r}e_n^r,\quad e_n=x_n-a,\tag{27}$$
where $D_{n,r}$ tends to the AEC $D_r$ of IM when $n\to\infty$. So,
$$e_{n+1}\sim D_{n,r}(D_{n-1,r}e_{n-1}^r)^r=D_{n,r}D_{n-1,r}^re_{n-1}^{r^2}.\tag{28}$$
In this case, we assume that the R-order of the iterative sequence $\{y_n\}$ is $p$; then
$$e_{n,y}\sim D_{n,p}e_n^p\sim D_{n,p}(D_{n-1,r}e_{n-1}^r)^p=D_{n,p}D_{n-1,r}^pe_{n-1}^{rp}.\tag{29}$$
According to Equations (18) and (19), we get the corresponding error relations of method (26) with memory
$$e_{n,y}=y_n-a\sim(c_2+\lambda_n)e_n^2,\tag{30}$$
$$e_{n+1}=x_{n+1}-a\sim(c_2+\lambda_n)(2c_2^2-c_3+\lambda_nc_2)e_n^4+O(e_n^5).\tag{31}$$
Here, the higher order terms in Equations (30)-(31) are omitted.
Substituting $\lambda$ by $\lambda_n$ and $n$ by $n-1$ in Equations (17), (18) and (19), we have
$$x_n-z_{n-1}=-c_2e_{n-1}^2+2(c_2^2-c_3)e_{n-1}^3+(-2c_2^3-3c_4+3c_2^2\lambda_{n-1}-c_3\lambda_{n-1}+c_2(6c_3+\lambda_{n-1}^2))e_{n-1}^4+O(e_{n-1}^5),\tag{32}$$
$$y_{n-1}-x_{n-1}=-e_{n-1}+(c_2+\lambda_{n-1})e_{n-1}^2+(-2c_2^2+2c_3-2c_2\lambda_{n-1}-\lambda_{n-1}^2)e_{n-1}^3+(4c_2^3-7c_2c_3+3c_4+5c_2^2\lambda_{n-1}-4c_3\lambda_{n-1}+3c_2\lambda_{n-1}^2+\lambda_{n-1}^3)e_{n-1}^4+O(e_{n-1}^5),\tag{33}$$
then
$$\lambda_n=\frac{x_n-z_{n-1}}{(y_{n-1}-x_{n-1})^2}=-c_2-2(c_3+c_2\lambda_{n-1})e_{n-1}+(3c_2^3-2c_2c_3-3c_4+5\lambda_{n-1}c_2^2-5c_3\lambda_{n-1})e_{n-1}^2+O(e_{n-1}^3),\tag{34}$$
$$c_2+\lambda_n\sim-2(c_3+c_2\lambda_{n-1})e_{n-1}.\tag{35}$$
According to Equations (31) and (35), we get
$$e_{n+1}\sim-2(c_3+c_2\lambda_{n-1})e_{n-1}(2c_2^2-c_3+\lambda_nc_2)e_n^4\sim-2(c_3+c_2\lambda_{n-1})e_{n-1}(2c_2^2-c_3+\lambda_nc_2)(D_{n-1,r}e_{n-1}^r)^4\sim-2(c_3+c_2\lambda_{n-1})(2c_2^2-c_3+\lambda_nc_2)D_{n-1,r}^4e_{n-1}^{4r+1}.\tag{36}$$
By comparing the exponents of $e_{n-1}$ appearing in relations (28) and (36), we get the following equation
$$4r+1=r^2.\tag{37}$$
The positive solution of Equation (37) is $r=2+\sqrt{5}\approx 4.2361$. Therefore, the R-order of the method with memory (26), when $\lambda_n$ is calculated by Equation (24), is at least 4.2361.
Theorem 3.4: Let the varying parameter $\lambda_n$ in the iterative method (26) be calculated by Equation (25). If an initial approximation $x_0$ is sufficiently close to a simple root $a$ of $f(x)$, then the R-order of convergence of the iterative method (26) with memory is at least $2+\sqrt{6}\approx 4.4495$.
Proof: Substituting $\lambda$ by $\lambda_n$ and $n$ by $n-1$ in Equations (17)-(19), we have
$$z_n-x_{n-1}=-e_{n-1}+(c_2+\lambda_{n-1})^2(2c_2^2-c_3+c_2\lambda_{n-1})^2e_{n-1}^8+O(e_{n-1}^9),\tag{38}$$
$$x_n-x_{n-1}=-e_{n-1}+(c_2+\lambda_{n-1})(2c_2^2-c_3+c_2\lambda_{n-1})e_{n-1}^4+O(e_{n-1}^5),\tag{39}$$
$$z_n-x_n=-(c_2+\lambda_{n-1})(2c_2^2-c_3+c_2\lambda_{n-1})e_{n-1}^4+(10c_2^4+2c_3^2+(16c_2^3+2c_4-3c_3\lambda_{n-1})\lambda_{n-1}+c_2^2(-14c_3+9\lambda_{n-1}^2)+2c_2(c_4-7c_3\lambda_{n-1}+\lambda_{n-1}^3))e_{n-1}^5+O(e_{n-1}^6),\tag{40}$$
$$x_n-y_{n-1}=-(c_2+\lambda_{n-1})e_{n-1}^2+(2c_2^2-2c_3+2c_2\lambda_{n-1}+\lambda_{n-1}^2)e_{n-1}^3+(-2c_2^3-2c_2^2\lambda_{n-1}+6c_2c_3-3c_4+3c_3\lambda_{n-1}-2c_2\lambda_{n-1}^2-\lambda_{n-1}^3)e_{n-1}^4+O(e_{n-1}^5),\tag{41}$$
$$z_{n-1}-x_{n-1}=-e_{n-1}+c_2e_{n-1}^2+2(c_3-c_2^2)e_{n-1}^3+(4c_2^3-7c_2c_3+3c_4)e_{n-1}^4+O(e_{n-1}^5).\tag{42}$$
According to Equations (34) and (36)-(42), we obtain
$$\lambda_n=\frac{2(x_n-z_{n-1})}{(z_n-x_{n-1})(x_n-x_{n-1})}+\frac{z_n-x_n}{(x_n-y_{n-1})(z_{n-1}-x_{n-1})}-\frac{x_n-z_{n-1}}{(y_{n-1}-x_{n-1})^2}=-c_2+(c_2^3+c_4+7c_2^2\lambda_{n-1}-c_3\lambda_{n-1}+4c_2\lambda_{n-1}^2)e_{n-1}^2+O(e_{n-1}^3),\tag{43}$$
$$c_2+\lambda_n\sim(c_2^3+c_4+7c_2^2\lambda_{n-1}-c_3\lambda_{n-1}+4c_2\lambda_{n-1}^2)e_{n-1}^2+O(e_{n-1}^3).\tag{44}$$
According to Equations (31) and (44), we get
$$e_{n+1}\sim(c_2^3+c_4+7c_2^2\lambda_{n-1}-c_3\lambda_{n-1}+4c_2\lambda_{n-1}^2)e_{n-1}^2(2c_2^2-c_3+\lambda_nc_2)e_n^4\sim(c_2^3+c_4+7c_2^2\lambda_{n-1}-c_3\lambda_{n-1}+4c_2\lambda_{n-1}^2)e_{n-1}^2(2c_2^2-c_3+\lambda_nc_2)(D_{n-1,r}e_{n-1}^r)^4\sim(c_2^3+c_4+7c_2^2\lambda_{n-1}-c_3\lambda_{n-1}+4c_2\lambda_{n-1}^2)(2c_2^2-c_3+\lambda_nc_2)D_{n-1,r}^4e_{n-1}^{4r+2}.\tag{45}$$
By comparing the exponents of $e_{n-1}$ appearing in relations (28) and (45), we get the following equation
$$4r+2=r^2.\tag{46}$$
The positive solution of Equation (46) is $r=2+\sqrt{6}\approx 4.4495$. Therefore, the R-order of the method with memory (26), when $\lambda_n$ is calculated by Equation (25), is at least 4.4495.
The proof is completed.
Remark 3.5: From the error equation (22), we can see that the convergence order of the iterative method (16) can be further improved by taking the parameters $\lambda=-c_2$ and $\gamma=c_3-c_2^2$. Similar to the method (26), we can construct the self-accelerating parameters $\lambda_n$ and $\gamma_n$ to substitute the parameters $\lambda$ and $\gamma$, respectively. So, we obtain a two-parameter iterative method with memory as follows:
$$\begin{cases} z_n=x_n-\dfrac{f(x_n)}{f'(x_n)},\\[1mm] y_n=x_n+\dfrac{z_n-x_n}{1-\lambda_n(z_n-x_n)},\\[1mm] w_n=y_n-\dfrac{f(y_n)}{f[x_n,y_n]}\cdot\dfrac{f'(x_n)}{f[x_n,y_n]},\\[1mm] x_{n+1}=w_n-\gamma_n(w_n-y_n)(y_n-x_n)^2, \end{cases}\tag{47}$$
where the self-accelerating parameter $\lambda_n$ can be calculated by Equation (25), rewritten in terms of $w_{n-1}$ as
$$\lambda_n=\frac{2(w_{n-1}-z_{n-1})}{(z_n-x_{n-1})(w_{n-1}-x_{n-1})}+\frac{z_n-w_{n-1}}{(w_{n-1}-y_{n-1})(z_{n-1}-x_{n-1})}-\frac{w_{n-1}-z_{n-1}}{(y_{n-1}-x_{n-1})^2},\tag{48}$$
and the self-accelerating parameter $\gamma_n$ is calculated by the following scheme
$$\gamma_n=\frac{\lambda_n-\dfrac{w_{n-1}-z_{n-1}}{(z_n-x_{n-1})(w_{n-1}-x_{n-1})}}{2(x_{n-1}-w_{n-1})}.\tag{49}$$
Based on Equation (22), we get the corresponding error relation of method (47) with memory as follows:
$$e_{n+1}=x_{n+1}-a\sim(c_2+\lambda_n)[(c_2+\lambda_n)c_2+\gamma_n-c_3+c_2^2]e_n^4+O(e_n^5).\tag{50}$$
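In Mathematica, method (47) with the updates (48)-(49) can be sketched as follows (our illustration on the same test problem as before; the parameter formulas are used exactly as reconstructed above, and the first step runs with the initial values $\lambda_0=\gamma_0=0.1$):

f[t_] := t^3 + 4 t^2 - 10;
x = N[15/10, 1200]; lambda = 1/10; gamma = 1/10;
Do[
  z = x - f[x]/f'[x];
  If[n > 1,                                       (* updates (48)-(49) need z_n and the previous step *)
    lambda = 2 (wp - zp)/((z - xp) (wp - xp)) +
      (z - wp)/((wp - yp) (zp - xp)) - (wp - zp)/(yp - xp)^2;          (* Eq. (48) *)
    gamma = (lambda - (wp - zp)/((z - xp) (wp - xp)))/(2 (xp - wp))];  (* Eq. (49) *)
  y = x + (z - x)/(1 - lambda (z - x));
  dd = (f[x] - f[y])/(x - y);
  w = y - f[y]/dd*f'[x]/dd;
  {xp, yp, zp, wp} = {x, y, z, w};                (* store current values for the next update *)
  x = w - gamma (w - y) (y - x)^2,
  {n, 1, 5}];
Print[N[x, 30]]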
Theorem 3.6: Let the varying parameter $\lambda_n$ in the iterative method (47) be calculated by Equation (48) and the varying parameter $\gamma_n$ be calculated by Equation (49). If an initial approximation $x_0$ is sufficiently close to a simple root $a$ of $f(x)$, then the R-order of convergence of the iterative method (47) with memory is at least $2+\sqrt{7}\approx 4.64575$.
Proof: Using the results of Theorems 3.3 and 3.4, we have
$$w_{n-1}-z_{n-1}=-c_2e_{n-1}^2+2(c_2^2-c_3)e_{n-1}^3+(-2c_2^3-3c_4+3c_2^2\lambda_{n-1}-c_3\lambda_{n-1}+c_2(6c_3+\lambda_{n-1}^2))e_{n-1}^4+O(e_{n-1}^5),\tag{51}$$
$$w_{n-1}-x_{n-1}=-e_{n-1}+(c_2+\lambda_{n-1})(2c_2^2-c_3+c_2\lambda_{n-1})e_{n-1}^4+O(e_{n-1}^5),\tag{52}$$
$$z_n-w_{n-1}=-(c_2+\lambda_{n-1})(2c_2^2-c_3+c_2\lambda_{n-1})e_{n-1}^4+(10c_2^4+2c_3^2+(16c_2^3+2c_4-3c_3\lambda_{n-1})\lambda_{n-1}+c_2^2(-14c_3+9\lambda_{n-1}^2)+2c_2(c_4-7c_3\lambda_{n-1}+\lambda_{n-1}^3))e_{n-1}^5+O(e_{n-1}^6),\tag{53}$$
$$w_{n-1}-y_{n-1}=-(c_2+\lambda_{n-1})e_{n-1}^2+(2c_2^2-2c_3+2c_2\lambda_{n-1}+\lambda_{n-1}^2)e_{n-1}^3+(-2c_2^3-2c_2^2\lambda_{n-1}+6c_2c_3-3c_4+3c_3\lambda_{n-1}-2c_2\lambda_{n-1}^2-\lambda_{n-1}^3)e_{n-1}^4+O(e_{n-1}^5).\tag{54}$$
From Equations (51)-(54), we get
$$\lambda_n=\frac{2(w_{n-1}-z_{n-1})}{(z_n-x_{n-1})(w_{n-1}-x_{n-1})}+\frac{z_n-w_{n-1}}{(w_{n-1}-y_{n-1})(z_{n-1}-x_{n-1})}-\frac{w_{n-1}-z_{n-1}}{(y_{n-1}-x_{n-1})^2}=-c_2+(c_2^3+c_4+7c_2^2\lambda_{n-1}-c_3\lambda_{n-1}+4c_2\lambda_{n-1}^2)e_{n-1}^2+O(e_{n-1}^3),\tag{55}$$
$$\gamma_n=\frac{\lambda_n-\dfrac{w_{n-1}-z_{n-1}}{(z_n-x_{n-1})(w_{n-1}-x_{n-1})}}{2(x_{n-1}-w_{n-1})}=(-c_2^2+c_3)+\frac{1}{2}(3c_2^3-6c_2c_3+4c_4+4c_2^2\lambda_{n-1}+3c_2\lambda_{n-1}^2)e_{n-1}+O(e_{n-1}^2).\tag{56}$$
Table 1. Numerical results for f1(x) by the methods with and without memory.
Methods      |x1 - a|     |x2 - a|      |x3 - a|      |x4 - a|       ρ
(15)         0.23054e-3   0.93481e-14   0.25273e-55   0.13502e-221   4.0000000
(16)         0.23801e-3   0.11069e-13   0.51787e-55   0.24812e-220   4.0000000
ZM           0.52653e-2   0.44751e-7    0.15980e-23   0.61894e-78    3.3082725
PM           0.47398e-4   0.30448e-17   0.20742e-73   0.28696e-311   4.2348771
WM1          0.51371e-3   0.47887e-13   0.10861e-55   0.26710e-236   4.2352409
((26),(24))  0.23054e-3   0.29797e-15   0.73980e-66   0.11504e-280   4.2447985
((26),(25))  0.23054e-3   0.11530e-16   0.37701e-75   0.10351e-335   4.4551453
(47)         0.23801e-3   0.28636e-17   0.16584e-80   0.30286e-375   4.6608388

Table 2. Numerical results for f2(x) by the methods with and without memory.
Methods      |x1 - a|     |x2 - a|      |x3 - a|      |x4 - a|       ρ
(15)         0.43473e-2   0.70961e-9    0.50929e-36   0.13512e-144   4.0000000
(16)         0.45508e-2   0.90076e-9    0.13987e-35   0.81331e-143   4.0000000
ZM           0.42368e-1   0.19255e-4    0.35571e-15   0.10391e-50    3.3106279
PM           0.34012e-1   0.56555e-6    0.47462e-26   0.36556e-111   4.2395330
WM1          0.80255e-2   0.10174e-8    0.35663e-38   0.66482e-163   4.2345381
((26),(24))  0.43473e-2   0.20165e-10   0.55196e-45   0.89701e-192   4.2470385
((26),(25))  0.43473e-2   0.32346e-11   0.15068e-50   0.40255e-226   4.4639026
(47)         0.45508e-2   0.80166e-12   0.22689e-5    0.84517e-259   4.6713527

Table 3. Numerical results for f3(x) by the methods with and without memory.
Methods      |x1 - a|     |x2 - a|      |x3 - a|      |x4 - a|       ρ
(15)         0.34960e-3   0.42142e-15   0.88966e-63   0.17671e-253   4.0000000
(16)         0.51365e-3   0.58462e-15   0.98016e-63   0.77446e-254   4.0000000
ZM           0.12353e-1   0.19384e-7    0.17066e-26   0.18131e-89    3.3047880
PM           0.28807e-2   0.17703e-12   0.14180e-55   0.35820e-238   4.2369602
WM1          0.48052e-2   0.32661e-10   0.33069e-45   0.23474e-193   4.2334759
((26),(24))  0.34960e-3   0.18423e-14   0.85142e-64   0.58907e-272   4.2192971
((26),(25))  0.34960e-3   0.23180e-15   0.10493e-70   0.21832e-316   4.4391605
(47)         0.51365e-3   0.14740e-14   0.16327e-70   0.66622e-329   4.6177560

Table 4. Mean CPU time for the stopping criterion |x_{k+1} - x_k| < 10^{-200}.
f      ZM        PM        DM         WM1       WM2       ((26),(24))  ((26),(25))  (47)
f1     3.65167   3.83668   7.86026    3.33717   6.07311   2.88321      3.01768      2.08198
f2     1.37031   1.60400   4.57582    1.83301   3.67257   1.67264      1.56937      1.22523
f3     1.68761   1.99587   4.43760    2.32441   4.15212   1.23490      1.28794      1.38684
Total  6.70959   7.43655   16.87368   7.49459   13.8978   5.79075      5.87454      4.69405
Table 5. Mean CPU time for the stopping criterion |x_{k+1} - x_k| < 10^{-300}.
f      ZM        PM        DM         WM1       WM2        ((26),(24))  ((26),(25))  (47)
f1     5.25068   4.27068   9.04868    4.81075   6.26188    3.24762      2.99178      2.03238
f2     1.66484   1.76561   4.72589    1.90789   3.71531    1.60244      1.76468      1.46360
f3     1.91787   2.06982   5.24912    2.85419   4.16959    1.31540      1.77685      1.47826
Total  8.83339   8.10611   19.02369   9.57283   14.14678   6.16546      6.53331      4.97424

Table 6. Mean CPU time for the stopping criterion |x_{k+1} - x_k| < 10^{-400}.
f      ZM         PM         DM         WM1        WM2        ((26),(24))  ((26),(25))  (47)
f1     6.13801    5.53397    9.93476    5.00732    7.82657    3.74620      4.00423      2.64390
f2     2.68477    2.38338    5.85721    2.33408    4.25477    1.67139      1.92037      1.64955
f3     2.49570    2.58680    6.37576    2.78305    5.07970    1.38154      2.20242      1.65985
Total  11.31848   10.50451   22.16773   10.12445   17.16104   6.79913      8.12702      5.95330
Figure 1. Fractal results for the complex polynomial $z^2-1$. (a) Method (15); (b) Method (16); (c) Method PM; (d) Method ZM; (e) Method WM1; (f) Method ((26) with (24)); (g) Method ((26) with (25)); (h) Method (47).
According to Equations (18), (20), (22), (55) and (56), we get
$$e_{n+1}\sim(c_2+\lambda_n)(2c_2^2-c_3+c_2\lambda_n+\gamma_n)e_n^4+O(e_n^5)\sim(c_2+\lambda_n)[c_2(c_2+\lambda_n)+(c_2^2-c_3)+\gamma_n]e_n^4+O(e_n^5)\sim\frac{1}{2}(c_2^3+c_4+7c_2^2\lambda_{n-1}-c_3\lambda_{n-1}+4c_2\lambda_{n-1}^2)e_{n-1}^2(3c_2^3-6c_2c_3+4c_4+4c_2^2\lambda_{n-1}+3c_2\lambda_{n-1}^2)e_{n-1}e_n^4\sim\frac{1}{2}(c_2^3+c_4+7c_2^2\lambda_{n-1}-c_3\lambda_{n-1}+4c_2\lambda_{n-1}^2)(3c_2^3-6c_2c_3+4c_4+4c_2^2\lambda_{n-1}+3c_2\lambda_{n-1}^2)e_{n-1}^3e_n^4\sim\frac{1}{2}(c_2^3+c_4+7c_2^2\lambda_{n-1}-c_3\lambda_{n-1}+4c_2\lambda_{n-1}^2)(3c_2^3-6c_2c_3+4c_4+4c_2^2\lambda_{n-1}+3c_2\lambda_{n-1}^2)D_{n-1,r}^4e_{n-1}^{4r+3}.\tag{57}$$
By comparing the exponents of $e_{n-1}$ appearing in relations (28) and (57), we get the following equation
$$4r+3=r^2.\tag{58}$$
The positive solution of Equation (58) is $r=2+\sqrt{7}\approx 4.64575$. Therefore, the R-order of the method with memory (47) is at least 4.64575.
The proof is completed.
4. Numerical results
The method (2) (PM), method (3) (ZM), method (4) (WM1) and the new methods with and without memory are employed to solve the nonlinear functions $f_i(x)$ $(i=1,2,3)$. The absolute errors $|x_k-a|$ in the first four iterations are given in Tables 1-3, where $a$ is the exact root computed with 2000 significant digits. The parameters $\lambda=0.1$, $\lambda_0=0.1$, $\gamma=0.1$ and $\gamma_0=0.1$ are used in the first iteration. The computational order of convergence $\rho$ is defined by [5]:
$$\rho\approx\frac{\ln(|x_{n+1}-x_n|/|x_n-x_{n-1}|)}{\ln(|x_n-x_{n-1}|/|x_{n-1}-x_{n-2}|)}.\tag{59}$$
Our methods are compared with the high-order method (5) (DM) and method (6) (WM2) in Tables 4-6. Tables 4-6 give the mean CPU time (in seconds) over 50 runs of the different methods.
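The quantity (59) is computed from the last four iterates; a small Mathematica helper (ours, not from the paper) can be used for this:

(* Computational order of convergence (59) from four successive iterates *)
coc[x1_, x2_, x3_, x4_] :=
  Log[Abs[(x4 - x3)/(x3 - x2)]]/Log[Abs[(x3 - x2)/(x2 - x1)]];

Applied to the iterates of method (47) on $f_1$, for example, it gives a value close to the theoretical order $2+\sqrt{7}$ (cf. the last column of Table 1).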
Figure 2. Fractal results for the complex polynomial $z^3-1$. (a) Method (15); (b) Method (16); (c) Method PM; (d) Method ZM; (e) Method WM1; (f) Method ((26) with (24)); (g) Method ((26) with (25)); (h) Method (47).
The following test functions are used:
$$f_1(x)=xe^{x^2}-\sin^2(x)+3\cos(x)+5,\quad a\approx-1.2076478271309189,\quad x_0=-1.3,$$
$$f_2(x)=x^5+x^4+4x^2-15,\quad a\approx 1.3474280989683050,\quad x_0=1.6,$$
$$f_3(x)=\arcsin(x^2-1)-0.5x+1,\quad a\approx 0.59481096839836918,\quad x_0=1.$$
From Tables 1-3, we observe that the convergence orders of the methods (26) and (47) with memory are increased relative to the corresponding basic methods (15) and (16) without memory. The computational orders of convergence $\rho$ are consistent with the theoretical orders. The convergence behaviour of the methods (26) and (47) with memory is better than that of the methods ZM, PM and WM1 for most examples. According to the results presented in Tables 4-6, we can see that our method (47) uses the minimum computing time and is better than the high-order methods WM2 and DM. The main reason is that the accelerating parameters of methods WM2 and DM are too complex, which costs more computing time than that of our methods (26) and (47). For convenience of comparison, the computing time of our methods is set in bold in Tables 4-6.
5. Dynamical analysis
The stability and reliability of an iterative method can be judged by its dynamical behaviour. Many authors study the dynamics of different iterative methods, see [2-4]. In this section, we compare our new methods with methods ZM, PM and WM1 by using the basins of attraction for two complex polynomials $f(z)=z^k-1$, $k=2,3$. We take the rectangle $D=[-2.0,2.0]\times[-2.0,2.0]\subset\mathbb{C}$ and a grid of $300\times 300$ points $z_0$ in $D$. A colour is assigned to each point $z_0\in D$ according to the simple root at which the corresponding iterative method starting from $z_0$ converges, and the point is painted black if the iterative method does not converge after the maximum number of iterations. The maximum number of iterations is 25. The sequence generated by an iterative method is considered to reach a zero $z^*$ of the polynomial when $|z_k-z^*|<10^{-5}$. The parameters used in the iterative methods without memory are $\lambda=0.1$ and $\gamma=0.1$. The parameters $\lambda_0=0.1$ and $\gamma_0=0.1$ are used in the methods with memory.
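For reference, a basin plot of this kind can be generated with the following Mathematica sketch (ours; it uses the plain Newton step for $z^3-1$ as a placeholder, and the step function is where method (26) or (47) would be substituted):

(* Basins of attraction on [-2,2]x[-2,2], 300x300 grid, at most 25 iterations *)
roots = z /. NSolve[z^3 == 1, z];
step[u_] := u - (u^3 - 1)/(3 u^2);                  (* placeholder: Newton's method *)
basin[z0_] := Module[{u = z0, k = 0},
  While[k < 25 && Min[Abs[u - roots]] > 10^-5, u = step[u]; k++];
  If[Min[Abs[u - roots]] <= 10^-5, First[Ordering[Abs[u - roots], 1]], 0]];
ArrayPlot[Table[basin[x0 + I y0], {y0, -2., 2., 4/299}, {x0, -2., 2., 4/299}],
  ColorRules -> {1 -> Red, 2 -> Green, 3 -> Blue, 0 -> Black}]  (* black = divergence *)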
From Figures 1 and 2, we can see that our new methods (26) and (47) with memory have a faster convergence speed than the iterative methods PM, ZM and (15)-(16). The new methods with memory have fewer diverging points than the iterative methods (16) and PM.
6. Conclusions
The main contribution of this paper is that a new approximate method to construct the self-accelerating parameter is presented. Firstly, we presented two new fourth-order iterative methods without memory for solving nonlinear equations. Based on the new optimal methods without memory, the new methods with memory are presented, which use some special self-accelerating parameters. The new self-accelerating parameters have the properties of simple structure and easy calculation, and do not increase the computational cost of the iterative method. Numerical comparisons are made with some known methods by using the basins of attraction and through numerical computations to demonstrate the efficiency and the performance of the new methods. Experiment results show that our methods have better convergence behaviour.
Disclosure statement
No potential conflict of interest was reported by the author.
Funding
The project was supported by the National Natural Science Foundation of China [Nos. 11547005 and 61572082], the Doctor Startup Foundation of Liaoning Province of China [No. 201501196] and the Educational Commission Foundation of Liaoning Province of China [No. L2015012].
References
[1] G. Alefeld and J. Herzberger, Introduction to Interval Computations, Academic Press, New York, 1983.
[2] F.I. Chicharro, A. Cordero, J.M. Gutiérrez, and J.R. Torregrosa, Complex dynamics of derivative-free methods for nonlinear equations, Appl. Math. Comput. 219 (2013), pp. 7023-7035.
[3] C. Chun and B. Neta, The basins of attraction of Murakami's fifth order family of methods, Appl. Numer. Math. 110 (2016), pp. 14-25.
[4] A. Cordero, F. Soleymani, J.R. Torregrosa, and M. Zaka Ullah, Numerically stable improved Chebyshev-Halley type schemes for matrix sign function, J. Comput. Appl. Math. 318 (2017), pp. 189-198.
[5] A. Cordero and J.R. Torregrosa, Variants of Newton's method using fifth-order quadrature formulas, Appl. Math. Comput. 190 (2007), pp. 686-698.
[6] J. Džunić, On efficient two-parameter methods for solving nonlinear equations, Numer. Algorithms 63 (2013), pp. 549-569.
[7] J. Džunić and M.S. Petković, On generalized multipoint root-solvers with memory, J. Comput. Appl. Math. 236 (2012), pp. 2909-2920.
[8] J. Džunić and M.S. Petković, On generalized biparametric multipoint root finding methods with memory, J. Comput. Appl. Math. 255 (2014), pp. 362-375.
[9] M. Kansal, V. Kanwar, and S. Bhatia, Efficient derivative-free variants of Hansen-Patrick's family with memory for solving nonlinear equations, Numer. Algorithms 73 (2016), pp. 1017-1036. doi:10.1007/s11075-016-0127-6
[10] H.T. Kung and J.F. Traub, Optimal order of one-point and multipoint iterations, J. ACM 21 (1974), pp. 643-651.
[11] T. Lotfi and P. Assari, New three- and four-parametric iterative with memory methods with efficiency index near 2, Appl. Math. Comput. 270 (2015), pp. 1004-1010.
[12] J.M. Ortega and W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[13] M.S. Petković, S. Ilić, and J. Džunić, Derivative free two-point methods with and without memory for solving nonlinear equations, Appl. Math. Comput. 217 (2010), pp. 1887-1895.
[14] S. Sharifi, S. Siegmund, and M. Salimi, Solving nonlinear equations by a derivative-free form of the King's family with memory, Calcolo 53 (2016), pp. 201-215.
[15] J.R. Sharma, R.K. Guha, and P. Gupta, Some efficient derivative free methods with memory for solving nonlinear equations, Appl. Math. Comput. 219 (2012), pp. 699-707.
[16] F. Soleymani, T. Lotfi, E. Tavakoli, and F.K. Haghani, Several iterative methods with memory using self-accelerators, Appl. Math. Comput. 254 (2015), pp. 452-458.
[17] J.F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, New York, 1964.
[18] X. Wang and T. Zhang, A new family of Newton-type iterative methods with and without memory for solving nonlinear equations, Calcolo 51 (2014), pp. 1-15.
[19] X. Wang and T. Zhang, High-order Newton-type iterative methods with memory for solving nonlinear equations, Math. Commun. 19 (2014), pp. 91-109.
[20] X. Wang and T. Zhang, Some Newton-type iterative methods with and without memory for solving nonlinear equations, Int. J. Comput. Methods 11 (2014), p. 1350078.
[21] X. Wang and T. Zhang, Efficient n-point iterative methods with memory for solving nonlinear equations, Numer. Algorithms 70 (2015), pp. 357-375.
[22] X. Wang, T. Zhang, and Y. Qin, Efficient two-step derivative-free iterative methods with memory and their dynamics, Int. J. Comput. Math. 93 (2016), pp. 1423-1446.
[23] X. Wu, A new continuation Newton-like method and its deformation, Appl. Math. Comput. 112 (2000), pp. 75-78.
[24] Q. Zheng, P. Zhao, L. Zhang, and W. Ma, Variants of Steffensen-secant method and applications, Appl. Math. Comput. 216 (2010), pp. 3486-3496.