A family of Newton-type iterative methods using some special self-accelerating parameters

Xiaofeng Wang
School of Mathematics and Physics, Bohai University, Jinzhou 121000, Liaoning, China
w200888w@163.com

International Journal of Computer Mathematics (ISSN 0020-7160 print, 1029-0265 online)
Publisher: Taylor & Francis (Informa UK Limited, trading as Taylor & Francis Group)
DOI: 10.1080/00207160.2017.1366459 (http://dx.doi.org/10.1080/00207160.2017.1366459)
Accepted author version posted online: 10 Aug 2017
Downloaded by [Purdue University Libraries] on 10 August 2017, at 06:47

To cite this article: Xiaofeng Wang (2017): A family of Newton-type iterative methods using some special self-accelerating parameters, International Journal of Computer Mathematics, DOI: 10.1080/00207160.2017.1366459
Abstract
In this paper, a family of Newton-type iterative methods with memory is obtained for solving nonlinear equations, which uses some special self-accelerating parameters. To this end, we first present two optimal fourth-order iterative methods without memory for solving nonlinear equations. Then we give a novel way to construct the self-accelerating parameter and obtain a family of Newton-type iterative methods with memory. The self-accelerating parameters have simple structure and are easy to calculate, and they do not increase the computational cost of the iterative methods. The convergence order of the new iterative method is increased from 4 to \(2+\sqrt{7}\approx 4.64575\). Numerical comparisons are made with some known methods, by using the basins of attraction and through numerical computations, to demonstrate the efficiency and the performance of the new methods. Experimental results show that, compared with the existing methods, the new iterative methods with memory cost less computing time.
MSC: 65H05 65B99
Keywords: Iterative method with memory; Self-accelerating parameter; Root-finding; Newton
method; Convergence order
1. Introduction
In this paper, a family of Newton-type iterative methods with memory is given for finding a simple root of the nonlinear equation \(f(x)=0\), where \(f: I \subset \mathbb{R} \to \mathbb{R}\) is a scalar function on an open interval \(I\). The iterative method with memory was considered for the first time by Traub [1] in 1964, who proposed the following method:
\[
\begin{cases}
x_0,\ \gamma_0 \ \text{are given suitably},\\[1mm]
x_{n+1} = x_n - \dfrac{f(x_n)}{f[x_n, w_n]},\qquad w_n = x_n + \gamma_n f(x_n),\\[2mm]
N_1(x) = f(x_n) + (x - x_n)\, f[x_n, w_n],\qquad
\gamma_{n+1} = -\dfrac{1}{N_1'(x_{n+1})}.
\end{cases}
\tag{1}
\]
The convergence order of method (1) is \(1+\sqrt{2}\approx 2.414\). The self-accelerating parameter \(\gamma_n\) is calculated by using information from the current and previous iterations. Method (1) tells us that it is possible to increase the convergence order by using a suitable self-accelerating parameter. An iterative method which uses a self-accelerating parameter is called a self-accelerating type method in this paper.
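As a concrete illustration, a Traub-type self-accelerating iteration of the form (1) can be coded in a few lines. The sketch below is only a toy example: the test equation f(x) = x^3 - 2, the starting point x0 = 1.5 and gamma_0 = -0.01 are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a Traub-type self-accelerating method as in (1).
# Test equation f(x) = x**3 - 2, x0 and gamma0 are illustrative assumptions.

def traub_with_memory(f, x0, gamma0=-0.01, iters=10):
    x, gamma = x0, gamma0
    for _ in range(iters):
        fx = f(x)
        if fx == 0.0:
            break
        w = x + gamma * fx                 # w_n = x_n + gamma_n f(x_n)
        if w == x:                         # step underflow: converged
            break
        slope = (fx - f(w)) / (x - w)      # divided difference f[x_n, w_n]
        if slope == 0.0:
            break
        x = x - fx / slope                 # x_{n+1} = x_n - f(x_n)/f[x_n, w_n]
        gamma = -1.0 / slope               # gamma_{n+1} = -1/N_1'(x_{n+1})
    return x

root = traub_with_memory(lambda t: t**3 - 2, 1.5)
print(abs(root - 2 ** (1 / 3)))
```

With the self-accelerating parameter the iteration behaves, near the root, like a Steffensen-type method of order about 1 + sqrt(2), without any extra function evaluations per step.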
Inspired by Traub’s idea, many efficient self-accelerating type methods have been proposed in
recent years, see [2-17] and references therein. For the self-accelerating type method, the
convergence order can be improved by using more information to construct the self-accelerating
parameter or by increasing the number of self-accelerating parameters. For example, using a
self-accelerating parameter calculated by secant approach, Petković et al. [2] proposed the
following two-step iterative method with memory, of order \(2+\sqrt{5}\approx 4.236\):
\[
\begin{cases}
\gamma_n = -\dfrac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})},\qquad
z_n = x_n + \gamma_n f(x_n),\qquad
y_n = x_n - \dfrac{f(x_n)}{f[x_n, z_n]},\\[2mm]
x_{n+1} = y_n - \dfrac{f(y_n)\, f(z_n)}{\bigl(f(z_n) - f(y_n)\bigr)\, f[x_n, y_n]}.
\end{cases}
\tag{2}
\]
Zheng et al. [3] obtained the following two-step method with order \((3+\sqrt{13})/2 \approx 3.3028\):
\[
\begin{cases}
w_n = x_n + f(x_n),\qquad
y_n = x_n - \dfrac{f(x_n)^2}{f(w_n) - f(x_n)},\\[2mm]
x_{n+1} = y_n - \dfrac{f(y_n)}{f[x_n, w_n] + f[x_n, x_{n-1}] - f[w_n, x_{n-1}]}.
\end{cases}
\tag{3}
\]
Using more information to calculate the self-accelerating parameter, Džunić et al. [4] gave an \(n\)-point method with memory in which the self-accelerating parameter is calculated by a Newton interpolation polynomial of third degree. We [5-6] proposed some iterative methods with memory, one of which is the following method:
\[
\begin{cases}
y_n = x_n - \dfrac{f(x_n)}{f'(x_n) + \lambda_n f(x_n)},\qquad
\lambda_n = \dfrac{H_2''(x_n)}{2 f'(x_n)},\\[2mm]
x_{n+1} = y_n - \dfrac{f(y_n)}{f'(x_n) + \lambda_n f(x_n)}\Bigl(1 + 2\,\dfrac{f(y_n)}{f(x_n)}\Bigr),
\end{cases}
\tag{4}
\]
where the self-accelerating parameter \(\lambda_n\) is calculated by the Hermite interpolation polynomial \(H_2(x) = H_2(x;\, x_n, x_{n-1}, y_{n-1})\) of second degree. The convergence order of method (4) is \((5+\sqrt{17})/2 \approx 4.5616\).
The convergence order of the self-accelerating type method can be improved greatly by
increasing the number of self-accelerating parameters. Using two self-accelerating parameters,
Džunić et al. [7] obtained an efficient two-step method with order 7
\[
\begin{cases}
\gamma_0,\ p_0 \ \text{are given},\qquad w_k = x_k + \gamma_k f(x_k),\\[1mm]
\gamma_k = -\dfrac{1}{N_3'(x_k)},\qquad
p_k = -\dfrac{N_4''(w_k)}{2 N_4'(w_k)} \quad \text{for } k \ge 1,\\[2mm]
y_k = x_k - \dfrac{f(x_k)}{f[x_k, w_k] + p_k f(w_k)},\\[2mm]
x_{k+1} = y_k - (1 + t_k)\,\dfrac{f(y_k)}{f[x_k, w_k] + p_k f(w_k)},\qquad t_k = \dfrac{f(y_k)}{f(x_k)},
\end{cases}
\tag{5}
\]
where \(N_3(x) = N_3(x;\, x_k, y_{k-1}, w_{k-1}, x_{k-1})\) and \(N_4(x) = N_4(x;\, w_k, x_k, y_{k-1}, w_{k-1}, x_{k-1})\) are Newton interpolating polynomials of third and fourth degree, respectively. We [8] also proposed a two-parameter iterative method with order 5.3059:
\[
\begin{cases}
y_n = x_n - \dfrac{f(x_n)}{f'(x_n) + \lambda_n f(x_n) + \mu_n f(x_n)^2},\qquad
\lambda_n = \dfrac{H_4''(x_n)}{2 f'(x_n)},\qquad
\mu_n = \dfrac{H_4'''(x_n)}{6 f'(x_n)},\\[2mm]
x_{n+1} = y_n - \dfrac{f(y_n)}{2 f[x_n, y_n] - f'(x_n)},
\end{cases}
\tag{6}
\]
where \(H_4(x) = H_4(x;\, x_n, x_n, y_{n-1}, x_{n-1}, x_{n-1})\) is a Hermite interpolation polynomial of fourth degree (repeated nodes denote derivative conditions).
Furthermore, using three self-accelerating parameters, Soleymani et al. [9], Lotfi et al. [10] and
Wang et al. [11] presented some efficient iterative methods with memory, respectively. Using
\(n+1\) self-accelerating parameters, we [12] derived an iterative method with the maximal convergence order \(\bigl(2^{n+1}-1+\sqrt{2^{2(n+1)}-1}\,\bigr)/2\). Lotfi's method [10] and Džunić's method [7] can be seen as the
special cases of our method [12]. Other self-accelerating type methods are discussed in [13-16].
For the self-accelerating type methods, the convergence speed of the iterative method is
considerably accelerated by employing the self-accelerating parameter. The increase of convergence
order is attained without any additional function evaluations. In summary, the self-accelerating parameters of self-accelerating type methods can be constructed by interpolation polynomials or by the secant approach. Besides these ways, it is worth investigating whether there are other ways to construct the self-accelerating parameter.
In this paper, we will give a novel way to construct the self-accelerating parameter. This paper is
organized as follows. In Section 2, we first derive two optimal fourth-order iterative methods
without memory for solving nonlinear equations. Some novel self-accelerating parameters with
simple structure are given in Section 3. Using these novel self-accelerating parameters, we obtain a
family of Newton-type iterative methods with memory. The maximal convergence order of the new Newton-type iterative method with memory is \(2+\sqrt{7}\approx 4.64575\). Since the acceleration of convergence is obtained without additional function evaluations, the computational efficiency of the
new methods is significantly increased. Numerical examples are given in Section 4 to confirm
theoretical results. Dynamic behavior of the iterative methods is analyzed in Section 5. Section 6 is
a short conclusion.
2. Two optimal fourth-order iterative methods without memory
Firstly, we consider the following one-parameter iterative scheme
\[
\begin{cases}
z_n = x_n - \dfrac{f(x_n)}{f'(x_n)},\\[2mm]
y_n = x_n + \dfrac{z_n - x_n}{1 - \lambda (z_n - x_n)},\\[2mm]
x_{n+1} = y_n - \dfrac{f(y_n)}{f'(y_n)},
\end{cases}
\tag{7}
\]
where \(\lambda \in \mathbb{R}\). To avoid the evaluation of the first derivative \(f'(y_n)\), we approximate \(f(x)\) by a rational linear function of the form
\[
P(x) = \frac{A + B(x - x_n)}{1 + C(x - x_n)},
\tag{8}
\]
where the parameters \(A\), \(B\) and \(C\) are determined by the following conditions:
\[
P(x_n) = f(x_n),\qquad P(y_n) = f(y_n),\qquad P'(x_n) = f'(x_n).
\tag{9}
\]
According to (8) and (9), we obtain
\[
A = f(x_n),
\tag{10}
\]
\[
B = f'(x_n) + f(x_n)\,\frac{f[x_n, y_n] - f'(x_n)}{f(x_n) - f(y_n)},
\tag{11}
\]
\[
C = \frac{f[x_n, y_n] - f'(x_n)}{f(x_n) - f(y_n)},
\tag{12}
\]
where \(f[x_n, y_n] = \dfrac{f(x_n)-f(y_n)}{x_n-y_n}\) is the first-order divided difference.
Differentiation of (8) gives
\[
P'(x) = \frac{B - AC}{\bigl(1 + C(x - x_n)\bigr)^2}.
\tag{13}
\]
We approximate the derivative \(f'(y_n)\) by \(P'(y_n)\) and obtain
\[
f'(y_n) \approx P'(y_n) = \frac{\bigl(f[x_n, y_n]\bigr)^{2}}{f'(x_n)}.
\tag{14}
\]
Substituting (14) into (7), we get a new one-parameter iterative method
\[
\begin{cases}
z_n = x_n - \dfrac{f(x_n)}{f'(x_n)},\\[2mm]
y_n = x_n + \dfrac{z_n - x_n}{1 - \lambda (z_n - x_n)},\\[2mm]
x_{n+1} = y_n - \dfrac{f'(x_n)\, f(y_n)}{\bigl(f[x_n, y_n]\bigr)^{2}},
\end{cases}
\tag{15}
\]
where \(\lambda \in \mathbb{R}\). Taking \(\lambda = 0\), we get the Kung-Traub fourth-order method [17].
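Method (15) is straightforward to implement. The following is a minimal sketch, not the author's code; the test equation f(x) = x^3 - 2, its derivative, and the values x0 = 1.5, lambda = 0.1 are illustrative assumptions.

```python
# Minimal numerical sketch of method (15).  The test equation
# f(x) = x**3 - 2, its derivative and x0 are illustrative assumptions.

def method15(f, df, x0, lam=0.1, iters=6):
    x = x0
    for _ in range(iters):
        fx = f(x)
        if fx == 0.0:
            break
        z = x - fx / df(x)                      # Newton step z_n
        y = x + (z - x) / (1 - lam * (z - x))   # Wu-type step y_n
        if y == x:
            break
        fy = f(y)
        fxy = (fx - fy) / (x - y)               # divided difference f[x_n, y_n]
        if fxy == 0.0:
            break
        x = y - df(x) * fy / fxy**2             # f'(y_n) ~ f[x_n, y_n]^2 / f'(x_n)
    return x

approx = method15(lambda t: t**3 - 2, lambda t: 3 * t**2, 1.5)
print(abs(approx - 2 ** (1 / 3)))
```

Each step uses three function evaluations, f(x_n), f'(x_n) and f(y_n), in agreement with the optimal fourth order stated in Theorem 1.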
Furthermore, we construct the following two-parameter iterative method by adding a new step to
method (15)
\[
\begin{cases}
z_n = x_n - \dfrac{f(x_n)}{f'(x_n)},\\[2mm]
y_n = x_n + \dfrac{z_n - x_n}{1 - \lambda (z_n - x_n)},\\[2mm]
w_n = y_n - \dfrac{f'(x_n)\, f(y_n)}{\bigl(f[x_n, y_n]\bigr)^{2}},\\[2mm]
x_{n+1} = w_n - \gamma (w_n - y_n)(y_n - x_n)^2,
\end{cases}
\tag{16}
\]
where
,R

.
Using the symbolic computation in the programming package Mathematica, we can find the
convergence order and the asymptotic error constant (AEC) of the methods (15) and (16). For
simplicity, we omit the iteration index \(n\) and write \(e\) instead of \(e_n\). The approximation \(x_{n+1}\) to the root \(a\) will be denoted by \(\hat{x}\). For method (16), we define the errors
\[
e = x - a,\quad ey = y - a,\quad ez = z - a,\quad ew = w - a,\quad e1 = \hat{x} - a.
\]
The following abbreviations are used in the program:
\[
\texttt{ck} = \frac{f^{(k)}(a)}{k!\, f'(a)},\quad \texttt{lp} = \lambda,\quad \texttt{r} = \gamma,\quad \texttt{fla} = f'(a),\quad \texttt{fx} = f(x),\quad \texttt{fy} = f(y),\quad \texttt{df} = f'(x),\quad \texttt{fxy} = f[x, y].
\]
Program (written in Mathematica)
fx=fla*(e+c2*e^2+c3*e^3+c4*e^4+c5*e^5+c6*e^6);
df=D[fx,e];
ez=Series[e-fx/(df),{e,0,6}]//Simplify
ey=Series[e+(ez-e)/(1-lp*(ez-e)),{e,0,4}]//Simplify
fy=fla*(ey+c2*ey^2+c3*ey^3);
fxy=(fx-fy)/(e-ey);
ew=Series[ey-fy*df/(fxy*fxy),{e,0,4}]//Simplify
e1=Series[ew-r*(ew-ey)*(ey-e)^2,{e,0,4}]//Simplify
\[
\mathrm{Out[ez]} = c_2 e^2 + 2(c_3 - c_2^2)\, e^3 + O(e^4),
\tag{17}
\]
\[
\mathrm{Out[ey]} = (c_2 + \lambda)\, e^2 - \bigl(2c_2^2 - 2c_3 + 2\lambda c_2 + \lambda^2\bigr) e^3 + O(e^4),
\tag{18}
\]
\[
\mathrm{Out[ew]} = (c_2 + \lambda)\bigl(2c_2^2 - c_3 + \lambda c_2\bigr) e^4 + O(e^5),
\tag{19}
\]
\[
\mathrm{Out[e1]} = (c_2 + \lambda)\bigl(2c_2^2 - c_3 + \lambda c_2 + \gamma\bigr) e^4 + O(e^5).
\tag{20}
\]
The outputs (19) and (20) of the above program mean that the convergence order of the iterative
methods (15) and (16) is four. Altogether, we can state the following theorem.
Theorem 1. Let \(a \in I\) be a simple zero of a sufficiently differentiable function \(f: I \subset \mathbb{R} \to \mathbb{R}\) for an open interval \(I\). Then the iterative methods defined by (15) and (16) are of fourth-order convergence and satisfy the following error equations:
\[
e_{n+1} = (c_2+\lambda)\bigl(2c_2^2 - c_3 + \lambda c_2\bigr)\, e_n^4 + O(e_n^5)
\tag{21}
\]
and
\[
e_{n+1} = (c_2+\lambda)\bigl(2c_2^2 - c_3 + \lambda c_2 + \gamma\bigr)\, e_n^4 + O(e_n^5),
\tag{22}
\]
respectively, where \(c_k = f^{(k)}(a)/(k!\, f'(a))\).
Remark 1: Methods (15) and (16) reach the optimal order four while requiring only three function evaluations per step, which agrees with the conjecture of Kung and Traub [17]. The Kung-Traub optimal fourth-order method [17] is the special case of method (15) obtained by taking \(\lambda = 0\). The first two steps of methods (15) and (16) form the second-order method developed by Wu [18]. In order to give a new technique for constructing the self-accelerating parameter, we use this variant of Wu's method as the first two steps of methods (15) and (16).
3. The new methods with memory and some novel self-accelerating parameters
In this section, we first improve the convergence order of method (15) by using a simple self-accelerating parameter \(\lambda_n\) to substitute for the parameter \(\lambda\). Then, using two self-accelerating parameters, we improve the convergence order of method (16). Similar to methods (4) and (6), we could construct the self-accelerating parameter by an interpolation polynomial. But we
do not use this technique in this paper. Here, we give a novel way to construct the self-accelerating parameter. If \(\lambda = -c_2\), then the convergence order of method (15) can be improved, since the leading term in (21) vanishes. Thus, we should choose a suitable self-accelerating parameter \(\lambda_n\) to substitute for the parameter \(\lambda\).
It is well known that Newton's method [19], \(x_{n+1} = x_n - f(x_n)/f'(x_n)\), converges quadratically. If the sequence \(\{x_n\}\) generated by Newton's method converges to a simple root \(a\) of the nonlinear equation, then the sequence \(\{x_n\}\) satisfies
\[
\lim_{n\to\infty} \frac{x_{n+1}-a}{(x_n-a)^2} = \lim_{n\to\infty}\frac{e_{n+1}}{e_n^2} = c_2,
\tag{23}
\]
where \(c_2 = f''(a)/(2f'(a))\) is the asymptotic error constant, \(e_{n+1} = x_{n+1}-a\) and \(e_n = x_n - a\).
If \(\lambda_n = -(x_{n+1}-a)/(x_n-a)^2\), then \(\lambda_n \approx -c_2\) and the convergence order of method (15) can be improved.
Since the root \(a\) in (23) is unknown, we use information from the current and previous iterations to approximate the root \(a\) and construct the following formulas for \(\lambda_n\):
Formula 1:
\[
\lambda_n = \frac{x_n - z_{n-1}}{(y_{n-1} - x_{n-1})^2}.
\tag{24}
\]
Formula 2:
\[
\lambda_n = 2\,\frac{x_n - z_{n-1}}{(z_{n-1} - x_{n-1})(x_n - x_{n-1})}
- \frac{z_{n-1} - x_n}{(x_n - y_{n-1})(z_{n-1} - x_{n-1})}
+ \frac{x_n - z_{n-1}}{(y_{n-1} - x_{n-1})^2}.
\tag{25}
\]
Remark 2. We now obtain the following one-parameter iterative method with memory:
\[
\begin{cases}
z_n = x_n - \dfrac{f(x_n)}{f'(x_n)},\\[2mm]
y_n = x_n + \dfrac{z_n - x_n}{1 - \lambda_n (z_n - x_n)},\\[2mm]
x_{n+1} = y_n - \dfrac{f'(x_n)\, f(y_n)}{\bigl(f[x_n, y_n]\bigr)^{2}},
\end{cases}
\tag{26}
\]
where \(\lambda_n\) is calculated by one of the formulas (24)-(25) without any additional function evaluations.
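The with-memory variant only changes how lambda is chosen. The sketch below re-estimates lambda at each step by the iterate-difference quotient (x_n - z_{n-1})/(y_{n-1} - x_{n-1})^2, in the spirit of Formula 1; the test equation f(x) = x^3 - 2, the derivative, x0 and lambda_0 are again illustrative assumptions, not taken from the paper.

```python
# Sketch of the with-memory method (26): method (15) with lambda
# re-estimated at each step from the previous iteration's data.
# Test equation and starting values are illustrative assumptions.

def method26(f, df, x0, lam0=0.1, iters=4):
    x, lam = x0, lam0
    x_prev = y_prev = z_prev = None
    for _ in range(iters):
        fx = f(x)
        if fx == 0.0:
            break
        if z_prev is not None and y_prev != x_prev:
            lam = (x - z_prev) / (y_prev - x_prev) ** 2   # accelerator (Formula 1)
        z = x - fx / df(x)
        y = x + (z - x) / (1 - lam * (z - x))
        if y == x:
            break
        fy = f(y)
        fxy = (fx - fy) / (x - y)
        if fxy == 0.0:
            break
        x_prev, y_prev, z_prev = x, y, z
        x = y - df(x) * fy / fxy**2
    return x

approx = method26(lambda t: t**3 - 2, lambda t: 3 * t**2, 1.5)
print(abs(approx - 2 ** (1 / 3)))
```

The accelerator costs only a handful of arithmetic operations on already-computed iterates, which is the point of the "simple structure" claim: no interpolation polynomial and no extra function evaluations are needed.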
The concept of the R-order of convergence [19] and the following assertion (see [20, p.287]) will be
applied to estimate the convergence order of the iterative methods with memory (26).
Theorem 2. If the errors of approximations \(e_j = x_j - a\) obtained in an iterative root-finding method IM satisfy
\[
e_{k+1} \sim \prod_{i=0}^{n} e_{k-i}^{\,m_i}, \qquad k \ge n,
\]
then the R-order of convergence of IM, denoted by \(O_R(\mathrm{IM}, a)\), satisfies the inequality \(O_R(\mathrm{IM}, a) \ge s^{*}\), where \(s^{*}\) is the unique positive solution of the equation
\[
s^{n+1} - \sum_{i=0}^{n} m_i\, s^{n-i} = 0.
\]
Theorem 3. Let the varying parameter \(\lambda_n\) in the iterative method (26) be calculated by (24). If the initial approximation \(x_0\) is sufficiently close to a simple root \(a\) of \(f(x)\), then the R-order of convergence of the iterative method (26) with memory is at least \(2+\sqrt{5}\approx 4.2361\).
Proof. Let the sequence \(\{x_n\}\) generated by an iterative method (IM) converge to the root \(a\) of \(f(x)\) with R-order \(O_R(\mathrm{IM}, a) \ge r\). Then we can write
\[
e_{n+1} \sim D_{n,r}\, e_n^{\,r}, \qquad e_n = x_n - a,
\tag{27}
\]
where \(D_{n,r}\) tends to the asymptotic error constant \(D_r\) of IM as \(n \to \infty\). So
\[
e_{n+1} \sim D_{n,r}\bigl(D_{n-1,r}\, e_{n-1}^{\,r}\bigr)^{r} = D_{n,r} D_{n-1,r}^{\,r}\, e_{n-1}^{\,r^2}.
\tag{28}
\]
In this case, we assume that the R-order of the iterative sequence \(\{y_n\}\) is \(p\); then
\[
e_{n,y} \sim D_{n,p}\, e_n^{\,p} \sim D_{n,p}\bigl(D_{n-1,r}\, e_{n-1}^{\,r}\bigr)^{p} = D_{n,p} D_{n-1,r}^{\,p}\, e_{n-1}^{\,rp}.
\tag{29}
\]
According to (18) and (19), we get the corresponding error relations of method (26) with memory:
\[
e_{n,y} = y_n - a \sim (c_2 + \lambda_n)\, e_n^2,
\tag{30}
\]
\[
e_{n+1} = x_{n+1} - a \sim (c_2 + \lambda_n)\bigl(2c_2^2 - c_3 + \lambda_n c_2\bigr) e_n^4 + O(e_n^5).
\tag{31}
\]
Here, the higher-order terms in (30)-(31) are omitted.
Substituting \(\lambda\) by \(\lambda_n\) and \(n\) by \(n-1\) in (17), (18) and (19), we have
\[
x_n - z_{n-1} = -c_2 e_{n-1}^2 + 2(c_2^2 - c_3)\, e_{n-1}^3 + O(e_{n-1}^4),
\tag{32}
\]
\[
y_{n-1} - x_{n-1} = -e_{n-1} + (c_2 + \lambda_{n-1})\, e_{n-1}^2 + O(e_{n-1}^3),
\tag{33}
\]
then
\[
\lambda_n = \frac{x_n - z_{n-1}}{(y_{n-1} - x_{n-1})^2}
= -c_2 - 2(c_3 + c_2 \lambda_{n-1})\, e_{n-1} + O(e_{n-1}^2),
\tag{34}
\]
\[
c_2 + \lambda_n \sim -2(c_3 + c_2 \lambda_{n-1})\, e_{n-1}.
\tag{35}
\]
According to (31) and (35), we get
\[
e_{n+1} \sim (c_2+\lambda_n)\bigl(2c_2^2 - c_3 + \lambda_n c_2\bigr) e_n^4
\sim -2(c_3 + c_2 \lambda_{n-1})\bigl(2c_2^2 - c_3 + \lambda_n c_2\bigr)\, e_{n-1} \bigl(D_{n-1,r}\, e_{n-1}^{\,r}\bigr)^4
\sim -2(c_3 + c_2 \lambda_{n-1})\bigl(2c_2^2 - c_3 + \lambda_n c_2\bigr) D_{n-1,r}^{4}\, e_{n-1}^{\,4r+1}.
\tag{36}
\]
By comparing the exponents of \(e_{n-1}\) appearing in relations (28) and (36), we get the equation
\[
r^2 - 4r - 1 = 0.
\tag{37}
\]
The positive solution of equation (37) is \(r = 2+\sqrt{5} \approx 4.2361\). Therefore, the R-order of the method with memory (26), when \(\lambda_n\) is calculated by (24), is at least 4.2361.
Theorem 4. Let the varying parameter \(\lambda_n\) in the iterative method (26) be calculated by (25). If the initial approximation \(x_0\) is sufficiently close to a simple root \(a\) of \(f(x)\), then the R-order of convergence of the iterative method (26) with memory is at least \(2+\sqrt{6}\approx 4.4495\).
Proof. Substituting \(\lambda\) by \(\lambda_n\) and \(n\) by \(n-1\) in (17), (18) and (19), we have
\[
x_n - z_{n-1} = -c_2 e_{n-1}^2 + 2(c_2^2 - c_3)\, e_{n-1}^3 + O(e_{n-1}^4),
\tag{38}
\]
\[
x_n - x_{n-1} = -e_{n-1} + (c_2+\lambda_{n-1})\bigl(2c_2^2 - c_3 + \lambda_{n-1} c_2\bigr) e_{n-1}^4 + O(e_{n-1}^5),
\tag{39}
\]
\[
z_{n-1} - x_n = c_2 e_{n-1}^2 + 2(c_3 - c_2^2)\, e_{n-1}^3 + O(e_{n-1}^4),
\tag{40}
\]
\[
x_n - y_{n-1} = -(c_2+\lambda_{n-1})\, e_{n-1}^2 + O(e_{n-1}^3),
\tag{41}
\]
\[
z_{n-1} - x_{n-1} = -e_{n-1} + c_2 e_{n-1}^2 + 2(c_3 - c_2^2)\, e_{n-1}^3 - \bigl(4c_2^3 - 7c_2 c_3 + 3c_4\bigr) e_{n-1}^4 + O(e_{n-1}^5).
\tag{42}
\]
According to (34) and (38)-(42), we obtain
\[
\lambda_n = 2\,\frac{x_n - z_{n-1}}{(z_{n-1} - x_{n-1})(x_n - x_{n-1})}
- \frac{z_{n-1} - x_n}{(x_n - y_{n-1})(z_{n-1} - x_{n-1})}
+ \frac{x_n - z_{n-1}}{(y_{n-1} - x_{n-1})^2}
= -c_2 + O(e_{n-1}^2),
\tag{43}
\]
so that
\[
c_2 + \lambda_n \sim \bigl(4c_2^3 - 7c_2 c_3 + 3c_4\bigr)\, e_{n-1}^2.
\tag{44}
\]
According to (31) and (44), we get
\[
e_{n+1} \sim (c_2+\lambda_n)\bigl(2c_2^2 - c_3 + \lambda_n c_2\bigr) e_n^4
\sim \bigl(4c_2^3 - 7c_2 c_3 + 3c_4\bigr)\bigl(2c_2^2 - c_3 + \lambda_n c_2\bigr)\, e_{n-1}^2 \bigl(D_{n-1,r}\, e_{n-1}^{\,r}\bigr)^4
\sim C_n D_{n-1,r}^{4}\, e_{n-1}^{\,4r+2},
\tag{45}
\]
where \(C_n\) tends to a nonzero constant. By comparing the exponents of \(e_{n-1}\) appearing in relations (28) and (45), we get the equation
\[
r^2 - 4r - 2 = 0.
\tag{46}
\]
The positive solution of equation (46) is \(r = 2+\sqrt{6} \approx 4.4495\). Therefore, the R-order of the method with memory (26), when \(\lambda_n\) is calculated by (25), is at least 4.4495.
The proof is completed.
Remark 3. From the error equation (22), we can see that the convergence order of the iterative method (16) can be further improved by taking the parameters \(\lambda = -c_2\) and \(\gamma = c_3 - c_2^2\). Similar to the method (26), we can construct self-accelerating parameters \(\lambda_n\) and \(\gamma_n\) to substitute for the parameters \(\lambda\) and \(\gamma\), respectively. So we obtain the following two-parameter iterative method with memory:
\[
\begin{cases}
z_n = x_n - \dfrac{f(x_n)}{f'(x_n)},\\[2mm]
y_n = x_n + \dfrac{z_n - x_n}{1 - \lambda_n (z_n - x_n)},\\[2mm]
w_n = y_n - \dfrac{f'(x_n)\, f(y_n)}{\bigl(f[x_n, y_n]\bigr)^{2}},\\[2mm]
x_{n+1} = w_n - \gamma_n (w_n - y_n)(y_n - x_n)^2,
\end{cases}
\tag{47}
\]
where the self-accelerating parameter \(\lambda_n\) is calculated by (25), written with \(w_{n-1}\) in place of \(x_n\):
\[
\lambda_n = 2\,\frac{w_{n-1} - z_{n-1}}{(z_{n-1} - x_{n-1})(w_{n-1} - x_{n-1})}
- \frac{z_{n-1} - w_{n-1}}{(w_{n-1} - y_{n-1})(z_{n-1} - x_{n-1})}
+ \frac{w_{n-1} - z_{n-1}}{(y_{n-1} - x_{n-1})^2},
\tag{48}
\]
and the self-accelerating parameter \(\gamma_n\) is calculated by the following scheme:
\[
\gamma_n = \frac{1}{2(x_{n-1} - w_{n-1})}\left(\lambda_n + \frac{w_{n-1} - z_{n-1}}{(z_{n-1} - x_{n-1})(w_{n-1} - x_{n-1})}\right).
\tag{49}
\]
Based on (22), we get the corresponding error relation of method (47) with memory:
\[
e_{n+1} = x_{n+1} - a \sim (c_2+\lambda_n)\bigl[\bigl(2c_2^2 - c_3 + \lambda_n c_2\bigr) + \gamma_n\bigr] e_n^4 + O(e_n^5).
\tag{50}
\]
Theorem 5. Let the varying parameter \(\lambda_n\) in the iterative method (47) be calculated by (48), and the varying parameter \(\gamma_n\) be calculated by (49). If the initial approximation \(x_0\) is sufficiently close to a simple root \(a\) of \(f(x)\), then the R-order of convergence of the iterative method (47) with memory is at least \(2+\sqrt{7}\approx 4.64575\).
Proof. Using the results of Theorems 3 and 4, we have
\[
w_{n-1} - z_{n-1} = -c_2 e_{n-1}^2 + 2(c_2^2 - c_3)\, e_{n-1}^3 + O(e_{n-1}^4),
\tag{51}
\]
\[
w_{n-1} - x_{n-1} = -e_{n-1} + O(e_{n-1}^2),
\tag{52}
\]
\[
z_{n-1} - w_{n-1} = c_2 e_{n-1}^2 + 2(c_3 - c_2^2)\, e_{n-1}^3 + O(e_{n-1}^4),
\tag{53}
\]
\[
w_{n-1} - y_{n-1} = -(c_2 + \lambda_{n-1})\, e_{n-1}^2 + O(e_{n-1}^3).
\tag{54}
\]
From (51)-(54), we get for the parameter (48)
\[
\lambda_n = -c_2 + O(e_{n-1}^2),
\tag{55}
\]
and for the parameter (49)
\[
\gamma_n = c_3 - c_2^2 + O(e_{n-1}).
\tag{56}
\]
According to (18), (20), (22), (55) and (56), we get
\[
e_{n+1} \sim (c_2+\lambda_n)\bigl[\bigl(2c_2^2 - c_3 + \lambda_n c_2\bigr) + \gamma_n\bigr] e_n^4 .
\]
Since \(c_2 + \lambda_n = O(e_{n-1}^2)\) and \(2c_2^2 - c_3 + \lambda_n c_2 + \gamma_n = O(e_{n-1})\), it follows that
\[
e_{n+1} \sim C_n\, e_{n-1}^{3} \bigl(D_{n-1,r}\, e_{n-1}^{\,r}\bigr)^{4} \sim C_n D_{n-1,r}^{4}\, e_{n-1}^{\,4r+3},
\tag{57}
\]
where \(C_n\) tends to a nonzero constant. By comparing the exponents of \(e_{n-1}\) appearing in relations (28) and (57), we get the equation
\[
r^2 - 4r - 3 = 0.
\tag{58}
\]
The positive solution of equation (58) is \(r = 2+\sqrt{7} \approx 4.64575\). Therefore, the R-order of the method with memory (47) is at least 4.64575.
The proof is completed.
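The three quadratic order equations obtained in Theorems 3-5 differ only in the constant term, and their positive roots can be checked numerically:

```python
# Numerical check of the R-order equations r^2 - 4r - q = 0 (q = 1, 2, 3)
# arising in Theorems 3, 4 and 5.
import math

orders = {}
for q in (1, 2, 3):
    r = (4 + math.sqrt(16 + 4 * q)) / 2      # positive root of r^2 - 4r - q = 0
    orders[q] = round(r, 5)
print(orders)   # {1: 4.23607, 2: 4.44949, 3: 4.64575}
```

The values agree with the closed forms 2 + sqrt(5), 2 + sqrt(6) and 2 + sqrt(7) quoted in the theorems.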
4. Numerical results
The method (2) (PM), method (3) (ZM), method (4) (WM1) and the new methods with and without memory are employed to solve the nonlinear functions \(f_i(x)\ (i = 1, 2, 3)\). The absolute errors \(|x_k - a|\) in the first four iterations are given in Tables 1-3, where \(a\) is the exact root computed with 2000 significant digits. The parameters \(\lambda = 0.1\), \(\lambda_0 = 0.1\), \(\gamma = 0.1\) and \(\gamma_0 = 0.1\) are used in the first iteration. The computational order of convergence \(\rho\) is defined by [21]:
\[
\rho \approx \frac{\ln\bigl(|x_{n+1}-x_n|\,/\,|x_n-x_{n-1}|\bigr)}{\ln\bigl(|x_n-x_{n-1}|\,/\,|x_{n-1}-x_{n-2}|\bigr)}.
\tag{59}
\]
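The quotient (59) is easy to compute from the last four iterates. The sketch below applies it to a synthetic, exactly quadratically convergent sequence (an assumption for illustration, not one of the paper's iterations):

```python
# Computational order of convergence (59) from four successive iterates.
import math

def coc(x3, x2, x1, x0):
    """rho = ln(|x3-x2|/|x2-x1|) / ln(|x2-x1|/|x1-x0|)."""
    return (math.log(abs(x3 - x2) / abs(x2 - x1))
            / math.log(abs(x2 - x1) / abs(x1 - x0)))

# Synthetic quadratically convergent iterates x_k = a + 0.1**(2**k)
a = 1.0
xs = [a + 0.1 ** (2 ** k) for k in range(4)]
print(coc(xs[3], xs[2], xs[1], xs[0]))   # close to 2
```

For the methods of this paper the same helper, fed with the tabulated iterates, reproduces the rho values reported in the last column of Tables 1-3.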
Our methods are compared with high-order method (5) (DM) and method (6) (WM2) in Tables 4-6.
Tables 4-6 give the mean CPU time (in seconds) over 50 runs of the different methods.
The following test functions are used:
\[
f_1(x) = x e^{x^2} - \sin^2 x + 3\cos x + 5,\qquad a \approx -1.2076478271309189,\qquad x_0 = -1.3,
\]
\[
f_2(x) = x^5 + x^4 + 4x^2 - 15,\qquad a \approx 1.3474280989683050,\qquad x_0 = 1.6,
\]
\[
f_3(x) = \arcsin(x^2 - 1) - \frac{x}{2} + 1,\qquad a \approx 0.59481096839836918,\qquad x_0 = 1.
\]
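As a quick sanity check (in double precision only, not the paper's 2000-digit arithmetic), the quoted roots can be verified by evaluating the residuals:

```python
# Residual check of the three test functions at the roots quoted above.
import math

f1 = lambda x: x * math.exp(x**2) - math.sin(x) ** 2 + 3 * math.cos(x) + 5
f2 = lambda x: x**5 + x**4 + 4 * x**2 - 15
f3 = lambda x: math.asin(x**2 - 1) - x / 2 + 1

roots = [(f1, -1.2076478271309189),
         (f2, 1.3474280989683050),
         (f3, 0.59481096839836918)]
for f, a in roots:
    print(abs(f(a)) < 1e-10)   # True for each quoted root
```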
Table 1 Numerical results for f_1(x) by the methods with and without memory

Methods      |x1-a|       |x2-a|       |x3-a|       |x4-a|        rho
(15)         0.23054e-3   0.93481e-14  0.25273e-55  0.13502e-221  4.0000000
(16)         0.23801e-3   0.11069e-13  0.51787e-55  0.24812e-220  4.0000000
ZM           0.52653e-2   0.44751e-7   0.15980e-23  0.61894e-78   3.3082725
PM           0.47398e-4   0.30448e-17  0.20742e-73  0.28696e-311  4.2348771
WM1          0.51371e-3   0.47887e-13  0.10861e-55  0.26710e-236  4.2352409
((26),(24))  0.23054e-3   0.29797e-15  0.73980e-66  0.11504e-280  4.2447985
((26),(25))  0.23054e-3   0.11530e-16  0.37701e-75  0.10351e-335  4.4551453
(47)         0.23801e-3   0.28636e-17  0.16584e-80  0.30286e-375  4.6608388
Table 2 Numerical results for f_2(x) by the methods with and without memory

Methods      |x1-a|       |x2-a|       |x3-a|       |x4-a|        rho
(15)         0.43473e-2   0.70961e-9   0.50929e-36  0.13512e-144  4.0000000
(16)         0.45508e-2   0.90076e-9   0.13987e-35  0.81331e-143  4.0000000
ZM           0.42368e-1   0.19255e-4   0.35571e-15  0.10391e-50   3.3106279
PM           0.34012e-1   0.56555e-6   0.47462e-26  0.36556e-111  4.2395330
WM1          0.80255e-2   0.10174e-8   0.35663e-38  0.66482e-163  4.2345381
((26),(24))  0.43473e-2   0.20165e-10  0.55196e-45  0.89701e-192  4.2470385
((26),(25))  0.43473e-2   0.32346e-11  0.15068e-50  0.40255e-226  4.4639026
(47)         0.45508e-2   0.80166e-12  0.22689e-55  0.84517e-259  4.6713527
Table 3 Numerical results for f_3(x) by the methods with and without memory

Methods      |x1-a|       |x2-a|       |x3-a|       |x4-a|        rho
(15)         0.34960e-3   0.42142e-15  0.88966e-63  0.17671e-253  4.0000000
(16)         0.51365e-3   0.58462e-15  0.98016e-63  0.77446e-254  4.0000000
ZM           0.12353e-1   0.19384e-7   0.17066e-26  0.18131e-89   3.3047880
PM           0.28807e-2   0.17703e-12  0.14180e-55  0.35820e-238  4.2369602
WM1          0.48052e-2   0.32661e-10  0.33069e-45  0.23474e-193  4.2334759
((26),(24))  0.34960e-3   0.18423e-14  0.85142e-64  0.58907e-272  4.2192971
((26),(25))  0.34960e-3   0.23180e-15  0.10493e-70  0.21832e-316  4.4391605
(47)         0.51365e-3   0.14740e-14  0.16327e-70  0.66622e-329  4.6177560
Table 4 Mean CPU time for the stopping criterion \(|x_k - x_{k-1}| < 10^{-200}\)

f      ZM        PM        DM        WM1       WM2       (26),(24)  (26),(25)  (47)
f1     3.65167   3.83668   7.86026   3.33717   6.07311   2.88321    3.01768    2.08198
f2     1.37031   1.60400   4.57582   1.83301   3.67257   1.67264    1.56937    1.22523
f3     1.68761   1.99587   4.43760   2.32441   4.15212   1.23490    1.28794    1.38684
Total  6.70959   7.43655   16.87368  7.49459   13.8978   5.79075    5.87454    4.69405
Table 5 Mean CPU time for the stopping criterion \(|x_k - x_{k-1}| < 10^{-300}\)

f      ZM        PM        DM        WM1       WM2       (26),(24)  (26),(25)  (47)
f1     5.25068   4.27068   9.04868   4.81075   6.26188   3.24762    2.99178    2.03238
f2     1.66484   1.76561   4.72589   1.90789   3.71531   1.60244    1.76468    1.46360
f3     1.91787   2.06982   5.24912   2.85419   4.16959   1.31540    1.77685    1.47826
Total  8.83339   8.10611   19.02369  9.57283   14.14678  6.16546    6.53331    4.97424
Table 6 Mean CPU time for the stopping criterion \(|x_k - x_{k-1}| < 10^{-400}\)

f      ZM        PM        DM        WM1       WM2       (26),(24)  (26),(25)  (47)
f1     6.13801   5.53397   9.93476   5.00732   7.82657   3.74620    4.00423    2.64390
f2     2.68477   2.38338   5.85721   2.33408   4.25477   1.67139    1.92037    1.64955
f3     2.49570   2.58680   6.37576   2.78305   5.07970   1.38154    2.20242    1.65985
Total  11.31848  10.50451  22.16773  10.12445  17.16104  6.79913    8.12702    5.95330
From Tables 1-3, we observe that the convergence orders of the methods (26) and (47) with memory are increased relative to the corresponding basic methods (15) and (16) without memory. The computational orders of convergence \(\rho\) are consistent with the theoretical orders. The convergence behavior of the methods (26) and (47) with memory is better than that of the methods ZM, PM and WM1 for most examples. According to the results presented in Tables 4-6, our method (47) uses the least computing time and outperforms the high-order methods WM2 and DM. The main reason is that the accelerating parameters of the methods WM2 and DM are comparatively complex, and therefore cost more computing time than those of our methods (26) and (47).
5. Dynamical Analysis
The stability and reliability of an iterative method can be judged by its dynamical behavior. Many authors have studied the dynamics of different iterative methods; see [22-24]. In this section, we compare our new methods with the methods ZM, PM and WM1 by using the basins of attraction for the two complex polynomials \(z^k - 1,\ k = 2, 3\). We take a rectangle \(D = [-2.0, 2.0] \times [-2.0, 2.0] \subset \mathbb{C}\) and a grid of 300×300 starting points \(z_0\) in \(D\). A color is assigned to each point \(z_0 \in D\) according to the simple root to which the corresponding iterative method starting from \(z_0\) converges, and the point is painted black if the iterative method does not converge after the maximum number of iterations, which is 25. The sequence generated by an iterative method is regarded as having reached a zero \(z^{*}\) of the polynomial when \(|z_k - z^{*}| < 10^{-5}\). The parameters \(\lambda = 0.1\) and \(\gamma = 0.1\) are used in the iterative methods without memory; the parameters \(\lambda_0 = 0.1\) and \(\gamma_0 = 0.1\) are used in the methods with memory.
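The procedure just described can be sketched as follows. For illustration the iteration used here is plain Newton's method on z^3 - 1, and only a coarse 5x5 sample of the 300x300 grid is evaluated; both choices are assumptions made to keep the example small (the paper applies the same procedure to methods (15), (16), (26), (47), PM, ZM and WM1).

```python
# Sketch of the basin-of-attraction computation on D = [-2,2] x [-2,2].
# Newton's method on p(z) = z**3 - 1 is used for illustration only.
import cmath

roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # cube roots of 1

def basin_index(z, max_iter=25, tol=1e-5):
    for _ in range(max_iter):
        for i, r in enumerate(roots):
            if abs(z - r) < tol:
                return i           # color index of the attracting root
        if z == 0:
            return -1              # derivative vanishes
        z = z - (z**3 - 1) / (3 * z**2)
    return -1                      # black: no convergence in 25 iterations

coords = [-2.0, -1.0, 0.0, 1.0, 2.0]          # coarse sample of the grid
grid = [[basin_index(complex(x, y)) for x in coords] for y in coords]
for row in grid:
    print(row)
```

Replacing the Newton update with any of the methods above (and tracking the memory parameters per grid point) produces the fractal pictures in Figures 1-2.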
Figure 1: Fractal results for the complex polynomial \(z^2 - 1\).
(a): Method (15); (b): Method (16); (c): Method PM; (d): Method ZM; (e): Method WM1;
(f): Method ((26) with (24)); (g): Method ((26) with (25)); (h): Method (47).
Figure 2: Fractal results for the complex polynomial \(z^3 - 1\).
(a): Method (15); (b): Method (16); (c): Method PM; (d): Method ZM; (e): Method WM1;
(f): Method ((26) with (24)); (g): Method ((26) with (25)); (h): Method (47).
From Figures 1-2, we can see that our new methods (26) and (47) with memory have faster
convergence speed than the iterative methods PM, ZM and (15)-(16). The new methods with
memory have fewer diverging points than the iterative methods (16) and PM.
6. Conclusions
The main contribution of this paper is a new way to construct the self-accelerating parameter. Firstly, we presented two new optimal fourth-order iterative methods without memory for solving nonlinear equations. Based on these optimal methods without memory, new methods with memory were presented, which use some special self-accelerating parameters. The new self-accelerating parameters have simple structure and are easy to calculate, so they do not increase the computational cost of the iterative method. Numerical comparisons were made with some known methods, by using the basins of attraction and through numerical computations, to demonstrate the efficiency and the performance of the new methods. Experimental results show that our methods have better convergence behavior.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (Nos. 11547005 and 61572082), the Doctor Startup Foundation of Liaoning Province of China (No. 201501196), and the Educational Commission Foundation of Liaoning Province of China (No. L2015012).
References
[1] J. F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, NJ, 1964.
[2] M. S. Petković, S. Ilić, J. Džunić, Derivative free two-point methods with and without memory for solving
nonlinear equations, Appl. Math. Comput. 217 (2010) 1887-1895
[3] Q. Zheng, P. Zhao, L. Zhang, W. Ma, Variants of Steffensen-secant method and applications, Appl. Math.
Comp., 216(2010) 3486-3496.
[4] J. Džunić, M. S. Petković, On generalized multipoint root-solvers with memory, J. Comput. Appl. Math, 236
(2012) 2909-2920.
[5] X. Wang, T. Zhang, A new family of Newton-type iterative methods with and without memory for solving
nonlinear equations, Calcolo, 51 (2014) 1-15.
[6] X. Wang, T. Zhang, Some Newton-type iterative methods with and without memory for solving nonlinear
equations, Int. J. Comput. Methods, 11 (2014) 1350078.
[7] J. Džunić, On efficient two-parameter methods for solving nonlinear equations, Numer. Algor, 63 (2013)
549-569.
[8] X. Wang, T. Zhang, High-order Newton-type iterative methods with memory for solving nonlinear equations,
Math. Commun. 19 (2014) 91-109.
[9] F. Soleymani, T. Lotfi, E. Tavakoli, F. K. Haghani, Several iterative methods with memory using- accelerators,
Appl. Math. Comput., 254 (2015) 452-458.
[10] T. Lotfi, P. Assari, New three- and four-parametric iterative with memory methods with efficiency index near
2, Appl. Math. Comp. 270 (2015) 1004-1010
[11] X. Wang, T. Zhang, Y. Qin, Efficient two-step derivative-free iterative methods with memory and their
dynamics, Int. J. Comput. Math., 93 (2016)1423-1446
[12] X. Wang, T. Zhang, Efficient n-point iterative methods with memory for solving nonlinear equations, Numer.
Algor., 70 (2015) 357-375.
[13] J. R. Sharma, R. K. Guha, P. Gupta, Some efficient derivative free methods with memory for solving
nonlinear equations, Appl. Math. Comp. 219 (2012) 699-707.
[14] S. Sharifi, S. Siegmud, M. Salimi, Solving nonlinear equations by a derivative-free form of the King’s family
with memory, Calcolo, 53 (2016) 201-215 .
[15] M. Kansal, V. Kanwar, S. Bhatia, Efficient derivative-free variants of Hansen-Patrick’s family with memory
for solving nonlinear equations, Numer. Algor., (2016) doi:10.1007/s11075-016-0127-6
[16] J. Džunić, M. S. Petković, On generalized biparametric multipoint root finding methods with memory, J.
Comput. Appl. Math, 255 (2014) 362-375.
[17] X. Wu, A new continuation Newton-like method and its deformation. Appl. Math. Comput. 112(2000)75-78.
[18] H. T. Kung, J. F. Traub, Optimal order of one-point and multipoint iterations, J. ACM, 21 (1974) 643-651.
[19] J. M. Ortega, W. C. Rheinbolt, Iterative Solution of Nonlinear Equations in Several Variables, Academic
Press, New York, 1970.
[20] G. Alefeld, J. Herzberger, Introduction to Interval Computation, Academic Press, New York,1983.
[21] A. Cordero, J. R. Torregrosa, Variants of Newton’s Method using fifth-order quadrature formulas. Appl.
Math. Comput. 190 (2007) 686-698.
[22] A. Cordero, F. Soleymani, J. R. Torregrosa, M. Zaka Ullah, Numerically stable improved Chebyshev-Halley
type schemes for matrix sign function, J. Comput. Appl. Math. 318 (2017) 189-198.
[23] C. Chun, B. Neta, The basins of attraction of Murakami’s fifth order family of methods, Appl. Nume. Math.
110 (2016) 14-25.
[24] F. I. Chicharro, A. Cordero, J. M. Gutiérrez, J. R. Torregrosa, Complex dynamics of derivative-free methods
for nonlinear equations, Appl. Math. Comput. 219 (2013) 7023-7035.