System identification using hierarchical fuzzy neural networks with stable learning algorithms

Wen Yu and Marco A. Moreno-Armendariz

Abstract: Hierarchical fuzzy neural networks can use fewer rules to model nonlinear systems with high accuracy, but their structure is very complex and the normal training for hierarchical fuzzy neural networks is difficult to realize. In this paper we use a backpropagation-like approach to train the membership functions. The new learning schemes employ a time-varying learning rate that is determined from the input-output data and the model structure. Stable learning algorithms for the premise and the consequence parts of the fuzzy rules are proposed. The calculation of the learning rate does not need any prior information such as an estimate of the modeling error bounds. The new algorithms are very simple; we can even train each sub-block of the hierarchical fuzzy neural network independently.
I. INTRODUCTION
Both neural networks and fuzzy logic are universal estimators: they can approximate any nonlinear function to any prescribed accuracy, provided that sufficient hidden neurons and fuzzy rules are available. Recent results show that the fusion of these two different technologies is very effective for nonlinear system identification [1][2][8]. Gradient descent and backpropagation are commonly used to adjust the parameters of the membership functions (fuzzy sets) and the weights of the defuzzification (neural networks) in fuzzy neural networks. Slow convergence and local minima are the main drawbacks of these algorithms [9]. Some modifications have been derived in the recently published literature. [3] suggested a robust backpropagation law to resist the noise effect and reject error drift during the approximation. [1] used B-spline membership functions to minimize a robust objective function; their algorithm can improve the convergence speed. In [18] the structure and parameters of fuzzy neural systems were determined by RBF neural networks.
In the design of fuzzy systems it is common to use a table look-up approach, which is a time-consuming task. Especially when the numbers of inputs and membership functions are large, the number of fuzzy rules increases exponentially. The huge rule base would overload the memory and make the fuzzy system very hard to implement. In general, for $n$ input variables and $m$ membership functions for each variable, a neuro-fuzzy system requires $m^n$ rules. This phenomenon is called the "curse of dimensionality". In order to deal with the rule-explosion problem, a number of low-dimensional
Wen Yu is with the Departamento de Control Automatico,
CINVESTAV-IPN, Av. IPN 2508, México D.F., 07360, México
yuw@ctrl.cinvestav.mx
Marco A. Moreno-Armendariz is with Escuela de Ingeniería, Di-
rección de Posgrado e Investigación, LIDETEA, Universidad La Salle,
Benjamin Franklin 47, Col. Condesa, México D.F, 06140, México.
mmoreno@ci.ulsa.mx
fuzzy systems in a hierarchical form are constructed, instead of a single high-dimensional fuzzy system. This is the main idea of hierarchical fuzzy systems (HFS) [12][15]. It has been proven that hierarchical fuzzy systems are also universal estimators [17]. In [19] a hierarchical prioritized structure is able to introduce exceptions to more general rules by giving them a priority and introducing them at a higher level. The lowest level contains the default rules about the relationship being modeled. The middle level contains rules based on aggregation of exceptions to these default rules. The highest level of the structure contains specific exceptions not accounted for by the rest of the model. In [7] a method using intermediate mapping variables as temporal variables is presented to avoid designing the intermediate outputs. In [13] a type of HFS, called the Hierarchical Classifying-Type Fuzzy System (HCTFS), is used instead of a repetitive defuzzification process between subsystem layers, and the computational complexity is analyzed in terms of the mathematical operations and electronic components used.
When we cannot decide the membership functions a priori, we should use the input/output data to train the parameters of the membership functions, as in ANFIS [6] and gradient learning [15]. Even for a single fuzzy neural network, the training algorithm is complex [22]. It is very difficult to realize learning for hierarchical fuzzy neural networks if we use normal learning [16]. By using the backpropagation technique, gradient descent algorithms can be simplified for multilayer neural network training. Nevertheless, can hierarchical fuzzy neural networks be trained by a similar technique? To the best of our knowledge, the training of hierarchical fuzzy neural systems still uses the normal gradient algorithm [6][16].
The stability problem of fuzzy neural identification is very important in applications. It is well known that normal identification algorithms (for example, gradient descent and least squares) are stable in ideal conditions. In the presence of unmodeled dynamics, they might become unstable. The lack of robustness of parameter identification was demonstrated in [14] and became a hot issue in the 1980s, when some robust modification techniques were suggested [5]. The learning procedure of fuzzy neural networks can be regarded as a type of parameter identification. Gradient descent and backpropagation algorithms are stable if the fuzzy neural models can match the nonlinear plants exactly. However, some robust modifications must be applied to assure stability with respect to uncertainties. The projection operator is an effective tool to guarantee that the fuzzy model remains bounded [15]. It was also used by many fuzzy-neural systems [10]. Another general approach
Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference 2005, Seville, Spain, December 12-15, 2005. TuIC19.4. 0-7803-9568-9/05/$20.00 ©2005 IEEE. 4089
is to use robust adaptive techniques [5] in fuzzy neural modeling. For example, [16] applied a switching modification to prevent parameter drift. By using passivity theory, we successfully proved that for continuous-time recurrent neural networks, gradient descent algorithms without robust modification are stable and robust to any bounded uncertainties [20], and for continuous-time identification they are also robustly stable [21]. Nevertheless, do hierarchical fuzzy neural networks have similar characteristics?
In this paper a backpropagation-like approach is applied to system identification via hierarchical fuzzy neural networks. Both the premise and the consequent membership functions are assumed to be unknown. The new algorithms are very simple; we can even train the parameters of each sub-block independently. The new stable algorithms with time-varying learning rates are applied to hierarchical fuzzy neural networks. One example is given to illustrate the effectiveness of the suggested algorithms.
II. HIERARCHICAL FUZZY NEURAL NETWORKS
Consider the following discrete-time nonlinear system

$$y(k) = f[X(k)] = \Phi\left[ y(k-1),\, y(k-2),\, \ldots,\, u(k-1),\, u(k-2),\, \ldots \right] \tag{1}$$

where

$$X(k) = \left[ y(k-1),\, y(k-2),\, \ldots,\, u(k),\, u(k-1),\, \ldots \right]^T \tag{2}$$
A conventional fuzzy model with one output is presented as a collection of fuzzy rules in the following form (for example, the Mamdani fuzzy model [15]):

$$R^i:\ \text{IF } x_1 \text{ is } A_1^i \text{ and } \cdots \text{ and } x_n \text{ is } A_n^i \text{ THEN } \hat{y} \text{ is } B^i$$

We use $l$ ($i = 1, 2, \ldots, l$) fuzzy IF-THEN rules to perform a mapping from an input linguistic vector $X = [x_1, \ldots, x_n] \in \Re^n$ to an output linguistic variable $\hat{y}$. $A_1^i, \ldots, A_n^i$ and $B^i$ are standard fuzzy sets. Each input variable $x_j$ ($j = 1, 2, \ldots, n$) has $l_j$ fuzzy sets. In the case of full connection, $l = l_1 \times l_2 \times \cdots \times l_n$.
In order to design a conventional fuzzy system with a required accuracy, the number of rules has to increase exponentially with the number of input variables to the fuzzy system. Consider $n$ input variables and $m$ fuzzy sets for each input variable; then the number of rules in the fuzzy system is $m^n$. When $n$ is large, the number of rules becomes huge. A serious problem facing fuzzy system applications is how to deal with this rule-explosion problem. One approach to deal with this difficulty is to use hierarchical fuzzy systems. This kind of system has the nice property that the number of rules needed to construct the fuzzy system increases only linearly with the number of variables [16]. To represent the output of each hierarchical block, the $p$-th level output ($p > 1$) is

$$\hat{y}_p = \frac{\displaystyle\sum_{i=1}^{l_p} w_p^i \left[ \prod_{j=1}^{n_{p1}} A_{p,j}^i (x_{p,j}) \prod_{j=1}^{n_{p2}} D_{p,j}^i (\hat{y}_{p-1,j}) \right]}{\displaystyle\sum_{i=1}^{l_p} \left[ \prod_{j=1}^{n_{p1}} A_{p,j}^i (x_{p,j}) \prod_{j=1}^{n_{p2}} D_{p,j}^i (\hat{y}_{p-1,j}) \right]} \tag{3}$$
where $A_{p,j}^i$ and $D_{p,j}^i$ are the membership functions of the fuzzy sets $A_{p,j}^i$ and $D_{p,j}^i$, and $w_p^i$ is the point at which $B_p^i = 1$.
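As an illustration of how one block evaluates (3), here is a minimal sketch in Python/NumPy of the center-average output of a single fuzzy block with Gaussian premises. The two-rule parameters are invented for the example, not taken from the paper.

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    # Gaussian membership value exp(-((x - c) / sigma)^2)
    return np.exp(-((x - c) / sigma) ** 2)

def fuzzy_block_output(x, centers, sigmas, w):
    """Center-average output of one fuzzy block, as in (3)-(4).

    centers, sigmas: (l, n) arrays, one row of Gaussian parameters per rule;
    w: (l,) consequent points (where B^i = 1).
    Returns y_hat = a / b together with the firing strengths z and b.
    """
    z = np.prod(gaussian_mf(x[None, :], centers, sigmas), axis=1)  # rule firing strengths
    a = np.dot(w, z)   # weighted sum of consequents
    b = np.sum(z)      # normalizer
    return a / b, z, b

# Two rules, two inputs (hypothetical parameters chosen for the example)
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
sigmas = np.ones((2, 2))
w = np.array([-1.0, 1.0])
y_hat, z, b = fuzzy_block_output(np.array([0.0, 0.0]), centers, sigmas, w)
```

At the first rule's center that rule fires fully ($z^1 = 1$) while the second fires only with strength $e^{-2}$, so the output is pulled toward $w^1 = -1$.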
We use the following example to explain how to use the backpropagation technique for hierarchical fuzzy neural networks. Three fuzzy neural networks (FS1, FS2, FS3) form a hierarchical fuzzy neural network. Each subsystem has $l$ fuzzy rules, $n$ inputs and one output. If we use a singleton fuzzifier, Mamdani implication and center-average defuzzifier, the output can be expressed as (3), where
$A_j^i = \exp\left[ -\left( \frac{x_j - c_j^i}{\sigma_j^i} \right)^2 \right]$ is the membership function of the fuzzy set $A_j^i$, $w^i$ is the point at which $B^i = 1$, and $\hat{y}$ is the output of each fuzzy system. Let us define

$$z^i = \prod_{j=1}^{n} \exp\left[ -\left( \frac{x_j - c_j^i}{\sigma_j^i} \right)^2 \right], \qquad a = \sum_{i=1}^{l} w^i z^i, \qquad b = \sum_{i=1}^{l} z^i \tag{4}$$
So $\hat{y} = a/b$. The object of the identification problem is to determine the parameters $w^i$, $c_j^i$ and $\sigma_j^i$ such that the output $\hat{y}$ of the fuzzy neural network converges to the output $y$ of the plant. Using the chain rule,
$$\frac{\partial J}{\partial w^i} = \frac{\partial J}{\partial \hat{y}} \frac{\partial \hat{y}}{\partial w^i} = (\hat{y} - y) \frac{\partial (a/b)}{\partial w^i} = (\hat{y} - y) \frac{z^i}{b} \tag{5}$$

$w^i$ is updated by

$$w^i(k+1) = w^i(k) - \eta \frac{z^i}{b} (\hat{y} - y) \tag{6}$$
Since $\frac{\partial e^x}{\partial x} = e^x$,

$$\frac{\partial J}{\partial c_j^i} = \frac{\partial J}{\partial \hat{y}} \frac{\partial \hat{y}}{\partial z^i} \frac{\partial z^i}{\partial c_j^i} = (\hat{y} - y) \left[ \frac{w^i}{b} - \frac{a}{b^2} \right] z^i \left[ \frac{2 \left( x_j - c_j^i \right)}{\left( \sigma_j^i \right)^2} \right] \tag{7}$$

So $c_j^i$ is trained by

$$c_j^i(k+1) = c_j^i(k) - 2 \eta (\hat{y} - y) z^i \frac{\left( w^i - \hat{y} \right) \left( x_j - c_j^i \right)}{b \left( \sigma_j^i \right)^2} \tag{8}$$
Similarly,

$$\frac{\partial J}{\partial \sigma_j^i} = \frac{\partial J}{\partial \hat{y}} \frac{\partial \hat{y}}{\partial z^i} \frac{\partial z^i}{\partial \sigma_j^i} = (\hat{y} - y) \left[ \frac{w^i}{b} - \frac{a}{b^2} \right] z^i \left[ \frac{2 \left( x_j - c_j^i \right)^2}{\left( \sigma_j^i \right)^3} \right] \tag{9}$$

So $\sigma_j^i$ is trained by

$$\sigma_j^i(k+1) = \sigma_j^i(k) - 2 \eta (\hat{y} - y) z^i \frac{\left( w^i - \hat{y} \right) \left( x_j - c_j^i \right)^2}{b \left( \sigma_j^i \right)^3} \tag{10}$$
[Fig. 1. A hierarchical fuzzy neural network for identification: subsystems FS1 (inputs $x_{1,1}, \ldots, x_{n,1}$) and FS2 (inputs $x_{1,2}, \ldots, x_{n,2}$) produce $\hat{y}_1$ and $\hat{y}_2$, which become inputs $x_{1,3}, x_{2,3}, \ldots, x_{n,3}$ of FS3; its output $\hat{y}_3$ is compared with the plant output $y$, giving the errors $e_1$, $e_2$, $e_3$.]
If we define the identification error as $e = \hat{y}(k) - y(k)$, the gradient descent training is

$$\begin{aligned}
w^i(k+1) &= w^i(k) - \eta \frac{z^i}{b} e \\
c_j^i(k+1) &= c_j^i(k) - 2 \eta z^i(k) \frac{\left( w^i(k) - \hat{y}(k) \right) \left( x_j(k) - c_j^i(k) \right)}{b(k) \left( \sigma_j^i(k) \right)^2} e \\
\sigma_j^i(k+1) &= \sigma_j^i(k) - 2 \eta z^i(k) \frac{\left( w^i(k) - \hat{y}(k) \right) \left( x_j(k) - c_j^i(k) \right)^2}{b(k) \left( \sigma_j^i(k) \right)^3} e
\end{aligned} \tag{11}$$
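A compact sketch of one gradient step (11) for a single block may help. The function name, the learning rate value and the toy data below are ours; the update formulas follow (6), (8) and (10).

```python
import numpy as np

def train_step(x, e, centers, sigmas, w, eta):
    """One gradient descent step (11) on a single fuzzy block.

    e is the identification error assigned to this block, eta the learning rate.
    Returns updated copies of w, centers and sigmas.
    """
    z = np.prod(np.exp(-((x[None, :] - centers) / sigmas) ** 2), axis=1)
    b = z.sum()
    y_hat = np.dot(w, z) / b
    # Common factor 2 z^i (w^i - y_hat) / b of the premise-parameter gradients
    g = 2.0 * z * (w - y_hat) / b
    w_new = w - eta * (z / b) * e
    c_new = centers - eta * e * g[:, None] * (x[None, :] - centers) / sigmas ** 2
    s_new = sigmas - eta * e * g[:, None] * (x[None, :] - centers) ** 2 / sigmas ** 3
    return w_new, c_new, s_new

# Repeated steps on one sample shrink e = y_hat - y (toy parameters, eta chosen by hand)
x, y = np.array([0.5, -0.5]), 0.3
centers = np.array([[0.0, 0.0], [1.0, -1.0]])
sigmas = np.ones((2, 2))
w = np.array([-0.5, 0.8])
for _ in range(100):
    z = np.prod(np.exp(-((x[None, :] - centers) / sigmas) ** 2), axis=1)
    e = np.dot(w, z) / z.sum() - y
    w, centers, sigmas = train_step(x, e, centers, sigmas, w, eta=0.2)
```

Because all three parameter families descend the same cost $J = \frac{1}{2} e^2$, the error contracts at every step for a small enough learning rate.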
The inputs and output of each subsystem are defined as in Fig. 1, so the performance index becomes $J = \frac{1}{2} \left( \hat{y}_3 - y \right)^2$. For FS3, the learning algorithm is the same as (11); we only add subscripts to each variable, for example, $w^i(k+1) \to w_3^i(k+1)$, $c_j^i(k+1) \to c_{3,j}^i(k+1)$, $\sigma_j^i(k+1) \to \sigma_{3,j}^i(k+1)$, $x_{3,1}(k) = \hat{y}_1(k)$, $x_{3,2}(k) = \hat{y}_2(k)$, $e(k) = \hat{y}_3(k) - y(k) = e_3(k)$,

$$\hat{y}_3 = \left( \sum_{i=1}^{l_3} w_3^i \prod_{j=1}^{n_3} A_{3,j}^i (x_{3,j}) \right) \Big/ \left( \sum_{i=1}^{l_3} \prod_{j=1}^{n_3} A_{3,j}^i (x_{3,j}) \right) \tag{12}$$
For subsystem FS2, if we want to update $w_2^i$, we should calculate

$$\frac{\partial J}{\partial w_2^i} = \frac{\partial J}{\partial \hat{y}_3} \frac{\partial \hat{y}_3}{\partial \hat{y}_2} \frac{\partial \hat{y}_2}{\partial w_2^i} \tag{13}$$

From Fig. 1 we know that $\partial \hat{y}_3 / \partial \hat{y}_2$ corresponds to the input $x_{3,2}(k)$, so

$$\frac{\partial J}{\partial \hat{y}_3} = \hat{y}_3 - y, \qquad
\frac{\partial \hat{y}_3}{\partial \hat{y}_2} = \frac{\partial \hat{y}_3}{\partial z_3^i} \frac{\partial z_3^i}{\partial \hat{y}_2} = \left[ \frac{a_3}{b_3^2} - \frac{w_3^i}{b_3} \right] z_3^i \left[ \frac{2 \left( \hat{y}_2 - c_{3,2}^i \right)}{\left( \sigma_{3,2}^i \right)^2} \right], \qquad
\frac{\partial \hat{y}_2}{\partial w_2^i} = \frac{z_2^i}{b_2} \tag{14}$$
Because $\frac{a_3}{b_3^2} - \frac{w_3^i}{b_3} = \frac{\hat{y}_3 - w_3^i}{b_3}$,

$$\frac{\partial J}{\partial w_2^i} = \frac{z_2^i}{b_2} \, 2 \frac{\hat{y}_3 - w_3^i}{b_3} z_3^i \frac{\hat{y}_2 - c_{3,2}^i}{\left( \sigma_{3,2}^i \right)^2} e_3(k) \tag{15}$$

$w_2^i$ is updated by

$$w_2^i(k+1) = w_2^i(k) - \eta \frac{z_2^i}{b_2} \, 2 \frac{\hat{y}_3 - w_3^i}{b_3} z_3^i \frac{\hat{y}_2 - c_{3,2}^i}{\left( \sigma_{3,2}^i \right)^2} e_3(k) \tag{16}$$
[Fig. 2. Training in the general case: block $p$ (inputs $x_{1,p}, \ldots, x_{n_p,p}$, output $\hat{y}_p$, error $e_p$) feeds block $q$ (inputs $x_{1,q}, \ldots, x_{n_q,q}$, output $\hat{y}_q$, error $e_q$).]
Comparing to (6), if we define

$$e_2(k) = 2 \frac{\hat{y}_3 - w_3^i}{b_3} z_3^i \frac{\hat{y}_2 - c_{3,2}^i}{\left( \sigma_{3,2}^i \right)^2} e_3(k) \tag{17}$$

then, using (14), $e_2(k) = \frac{\partial \hat{y}_3}{\partial \hat{y}_2} e_3(k)$. Similarly we can obtain $e_1(k) = \frac{\partial \hat{y}_3}{\partial \hat{y}_1} e_3(k)$, where

$$\frac{\partial \hat{y}_3}{\partial \hat{y}_1} = 2 \frac{\hat{y}_3 - w_3^i}{b_3} z_3^i \frac{\hat{y}_1 - c_{3,1}^i}{\left( \sigma_{3,1}^i \right)^2} \tag{18}$$

With $e_1(k)$ and $e_2(k)$ we can train the subsystems FS1 and FS2 independently by the normal algorithm (11). In the general case shown in Fig. 2, the training procedure is as follows:
1) According to the structure of the hierarchical fuzzy neural network, we calculate the output of each sub-fuzzy neural network by (3). Some outputs of the fuzzy neural networks are the inputs of the next level.
2) Calculate the error for each block. We start from the last block; the identification error is

$$e_o(k) = \hat{y}_o(k) - y(k) \tag{19}$$

where $e_o(k)$ is the identification error, $\hat{y}_o(k)$ is the output of the whole hierarchical fuzzy neural network, and $y(k)$ is the output of the plant. Then we back-propagate the error through the structure of the hierarchical fuzzy neural network. In Fig. 2, we can calculate the error for block $p$ (defined as $e_p$) from its former block $q$ (defined as $e_q$). By the chain rule discussed above,

$$e_p = 2 \frac{\hat{y}_q - w_q^i}{b_q} z_q^i \frac{\hat{y}_p - c_{q,p}^i}{\left( \sigma_{q,p}^i \right)^2} e_q \tag{20}$$
3) Train the Gaussian functions (the membership functions in the premise and the consequent parts) of each block independently; for the $p$-th block the backpropagation-like algorithm is

$$\begin{aligned}
w_p^i(k+1) &= w_p^i(k) - \eta \frac{z_p^i}{b_p} e_p \\
c_{p,j}^i(k+1) &= c_{p,j}^i(k) - 2 \eta z_p^i \frac{\left( w_p^i - \hat{y}_p \right) \left( x_{p,j} - c_{p,j}^i \right)}{b_p \left( \sigma_{p,j}^i \right)^2} e_p \\
\sigma_{p,j}^i(k+1) &= \sigma_{p,j}^i(k) - 2 \eta z_p^i \frac{\left( w_p^i - \hat{y}_p \right) \left( x_{p,j} - c_{p,j}^i \right)^2}{b_p \left( \sigma_{p,j}^i \right)^3} e_p
\end{aligned} \tag{21}$$

where $z_p^i = \prod_{j=1}^{n_p} \exp\left[ -\left( \frac{x_{p,j} - c_{p,j}^i}{\sigma_{p,j}^i} \right)^2 \right]$ and $b_p = \sum_{i=1}^{l_p} z_p^i$.
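The three-step procedure can be sketched end to end. This is our own minimal Python rendering of the Fig. 1 topology (FS1 and FS2 feeding FS3), under the assumption that the per-rule terms of (20) are summed over the rules of the upper block, which is the exact chain-rule derivative of (3); all numeric parameters are illustrative.

```python
import numpy as np

def forward(x, p):
    # Output (3) of one block with Gaussian premises; p holds c, s (l x n) and w (l,)
    z = np.prod(np.exp(-((x[None, :] - p["c"]) / p["s"]) ** 2), axis=1)
    b = z.sum()
    return p["w"] @ z / b, z, b

def dy_dinput(x, p, j):
    # d y_hat / d x_j, summing the per-rule terms of (20) over the rules
    y_hat, z, b = forward(x, p)
    return np.sum(2.0 * (y_hat - p["w"]) * z * (x[j] - p["c"][:, j])
                  / (b * p["s"][:, j] ** 2))

def update(x, e, p, eta):
    # In-place gradient step (21) for one block with its back-propagated error e
    y_hat, z, b = forward(x, p)
    g = 2.0 * z * (p["w"] - y_hat) / b
    dc = eta * e * g[:, None] * (x[None, :] - p["c"]) / p["s"] ** 2
    ds = eta * e * g[:, None] * (x[None, :] - p["c"]) ** 2 / p["s"] ** 3
    p["w"] -= eta * (z / b) * e
    p["c"] -= dc
    p["s"] -= ds

def block(c, w):
    c = np.array(c, float)
    return {"c": c, "s": np.ones_like(c), "w": np.array(w, float)}

FS1 = block([[0.0, 0.5], [1.0, -0.5]], [0.2, -0.3])
FS2 = block([[0.5, 0.0], [-0.5, 1.0]], [0.4, 0.1])
FS3 = block([[0.0, 0.0], [0.5, 0.5]], [-0.2, 0.3])
x1, x2, y = np.array([0.3, -0.2]), np.array([0.1, 0.4]), 0.5

for _ in range(500):
    y1, _, _ = forward(x1, FS1)                # step 1: outputs of the lower level
    y2, _, _ = forward(x2, FS2)
    x3 = np.array([y1, y2])
    y3, _, _ = forward(x3, FS3)
    e3 = y3 - y                                # step 2: last-block error (19)
    e1 = dy_dinput(x3, FS3, 0) * e3            # back-propagated errors, as in (17)-(18)
    e2 = dy_dinput(x3, FS3, 1) * e3
    update(x3, e3, FS3, eta=0.05)              # step 3: train each block independently
    update(x1, e1, FS1, eta=0.05)
    update(x2, e2, FS2, eta=0.05)
```

Each block only ever sees its own inputs and its own scalar error, which is exactly what makes the sub-blocks trainable independently.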
III. STABLE LEARNING
If we define the identification error as

$$e(k) = \hat{y}(k) - y(k) \tag{22}$$

then by (20) $e(k)$ can be propagated to each sub-block as $e_p(k)$; there is a virtual output $y_p(k)$ of the plant corresponding to the output $\hat{y}_p(k)$ of the sub-block, so

$$e_p(k) = \hat{y}_p(k) - y_p(k) \tag{23}$$

For the $p$-th block, we assume the nonlinear plant can be expressed with Gaussian membership functions as

$$y_p = \left( \sum_{j=1}^{m} y_j^* \prod_{i=1}^{n} \exp\left[ -\left( \frac{x_i - c_{ji}^*}{\sigma_{ji}^*} \right)^2 \right] \right) \Big/ \left( \sum_{j=1}^{m} \prod_{i=1}^{n} \exp\left[ -\left( \frac{x_i - c_{ji}^*}{\sigma_{ji}^*} \right)^2 \right] \right) - \mu \tag{24}$$
where $y_j^*$, $c_{ji}^*$ and $\sigma_{ji}^*$ are unknown parameters which may minimize the modeling error $\mu$. In the case of three independent variables, a smooth function $f$ has the Taylor formula

$$f(x_1, x_2, x_3) = \sum_{k=0}^{l-1} \frac{1}{k!} \left[ \left( x_1 - x_1^0 \right) \frac{\partial}{\partial x_1} + \left( x_2 - x_2^0 \right) \frac{\partial}{\partial x_2} + \left( x_3 - x_3^0 \right) \frac{\partial}{\partial x_3} \right]^k f \,\bigg|_0 + R_l \tag{25}$$

where $R_l$ is the remainder of the Taylor formula. If we let $x_1, x_2, x_3$ correspond to $y_j$, $c_{ji}$ and $\sigma_{ji}$, and $x_1^0, x_2^0, x_3^0$ correspond to $y_j^*$, $c_{ji}^*$ and $\sigma_{ji}^*$, then
$$\hat{y}_p = y_p + \mu + \sum_{j=1}^{m} \left( y_j - y_j^* \right) \frac{z_j}{b} + \sum_{j=1}^{m} \sum_{i=1}^{n} \frac{\partial \hat{y}}{\partial c_{ji}} \left( c_{ji} - c_{ji}^* \right) + \sum_{j=1}^{m} \sum_{i=1}^{n} \frac{\partial \hat{y}}{\partial \sigma_{ji}} \left( \sigma_{ji} - \sigma_{ji}^* \right) + R_1 \tag{26}$$
where $R_1$ is the second-order approximation error of the Taylor series, and

$$\frac{\partial \hat{y}_p}{\partial c_{ji}} = 2 z_j(k) \frac{\left( y_j(k) - \hat{y}(k) \right) \left( x_i(k) - c_{ji}(k) \right)}{b(k) \left( \sigma_{ji}(k) \right)^2}, \qquad
\frac{\partial \hat{y}_p}{\partial \sigma_{ji}} = 2 z_j(k) \frac{\left( y_j(k) - \hat{y}(k) \right) \left( x_i(k) - c_{ji}(k) \right)^2}{b(k) \left( \sigma_{ji}(k) \right)^3} \tag{27}$$
So

$$\sum_{j=1}^{m} \sum_{i=1}^{n} \frac{\partial \hat{y}_p}{\partial c_{ji}} \left( c_{ji} - c_{ji}^* \right) = \frac{2}{b(k)} D_1^T(k) \tilde{C}(k), \qquad
\sum_{j=1}^{m} \sum_{i=1}^{n} \frac{\partial \hat{y}_p}{\partial \sigma_{ji}} \left( \sigma_{ji} - \sigma_{ji}^* \right) = \frac{2}{b(k)} D_2^T(k) \tilde{\sigma}(k) \tag{28}$$
where

$$\tilde{C}(k) = \left[ \left( c_{11} - c_{11}^* \right) \cdots \left( c_{1n} - c_{1n}^* \right) \cdots \left( c_{m1} - c_{m1}^* \right) \cdots \left( c_{mn} - c_{mn}^* \right) \right]^T \in \Re^{nm}$$

$$D_1(k) = \left[ z_1(k) \frac{\left( y_1(k) - \hat{y}(k) \right) \left( x_1(k) - c_{11}(k) \right)}{\left( \sigma_{11}(k) \right)^2} \;\cdots\; z_1(k) \frac{\left( y_1(k) - \hat{y}(k) \right) \left( x_n(k) - c_{1n}(k) \right)}{\left( \sigma_{1n}(k) \right)^2} \;\cdots\; z_m(k) \frac{\left( y_m(k) - \hat{y}(k) \right) \left( x_1(k) - c_{m1}(k) \right)}{\left( \sigma_{m1}(k) \right)^2} \;\cdots\; z_m(k) \frac{\left( y_m(k) - \hat{y}(k) \right) \left( x_n(k) - c_{mn}(k) \right)}{\left( \sigma_{mn}(k) \right)^2} \right]^T$$

$$\tilde{\sigma}(k) = \left[ \left( \sigma_{11} - \sigma_{11}^* \right) \cdots \left( \sigma_{1n} - \sigma_{1n}^* \right) \cdots \left( \sigma_{m1} - \sigma_{m1}^* \right) \cdots \left( \sigma_{mn} - \sigma_{mn}^* \right) \right]^T \in \Re^{nm}$$

$$D_2(k) = \left[ z_1(k) \frac{\left( y_1(k) - \hat{y}(k) \right) \left( x_1(k) - c_{11}(k) \right)^2}{\left( \sigma_{11}(k) \right)^3} \;\cdots\; z_1(k) \frac{\left( y_1(k) - \hat{y}(k) \right) \left( x_n(k) - c_{1n}(k) \right)^2}{\left( \sigma_{1n}(k) \right)^3} \;\cdots\; z_m(k) \frac{\left( y_m(k) - \hat{y}(k) \right) \left( x_1(k) - c_{m1}(k) \right)^2}{\left( \sigma_{m1}(k) \right)^3} \;\cdots\; z_m(k) \frac{\left( y_m(k) - \hat{y}(k) \right) \left( x_n(k) - c_{mn}(k) \right)^2}{\left( \sigma_{mn}(k) \right)^3} \right]^T \tag{29}$$
Substituting (28) into (26), the error (23) becomes

$$e_p(k) = \frac{1}{b(k)} z^T(k) \tilde{y}(k) + \frac{2}{b(k)} D_1^T(k) \tilde{C}(k) + \frac{2}{b(k)} D_2^T(k) \tilde{\sigma}(k) + \zeta(k) \tag{30}$$

where $\tilde{y}(k) = y(k) - y^*(k)$, $y(k) = \left[ y_1 \cdots y_m \right]^T$, $z(k) = \left[ z_1 \cdots z_m \right]^T$, and $\zeta(k) = R_1 + \mu$.
Theorem 1: If we use the Mamdani-type fuzzy neural network (3) to identify the nonlinear plant (1), the following backpropagation algorithm makes the identification error $e_p(k)$ bounded:

$$\begin{aligned}
y(k+1) &= y(k) - \frac{\eta(k)}{b(k)} z(k) e_p(k) \\
C(k+1) &= C(k) - 2 \frac{\eta(k)}{b(k)} D_1(k) e_p(k) \\
\sigma(k+1) &= \sigma(k) - 2 \frac{\eta(k)}{b(k)} D_2(k) e_p(k)
\end{aligned} \tag{31}$$

where $\eta(k) = \frac{\eta}{1 + \pi(k)}$, $\pi(k) = \| z(k) \|^2 + 4 \| D_1(k) \|^2 + 4 \| D_2(k) \|^2$, and $0 < \eta \leq \max_k \left\{ b^2(k) \right\}$. The average of the identification error satisfies

$$J = \limsup_{T \to \infty} \frac{1}{T} \sum_{k=1}^{T} e_p^2(k) \leq \frac{\bar{\zeta}}{\pi} \tag{32}$$

where $\pi = \frac{\eta}{(1 + \kappa)^2} > 0$ with $\kappa = \max_k \pi(k)$, and $\bar{\zeta} = \max_k \left[ \zeta^2(k) \right]$.
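The time-varying learning rate of (31) is easy to compute online from the current regressor norms; no bound on the modeling error $\zeta(k)$ is needed. Below is a hedged sketch; the function name, the default $\eta_0$ and the example vectors are ours.

```python
import numpy as np

def stable_rate(z, D1, D2, eta0=1.0):
    """Learning rate of Theorem 1: eta(k) = eta0 / (1 + pi(k)), with
    pi(k) = ||z||^2 + 4 ||D1||^2 + 4 ||D2||^2.
    Only current data enters; no prior bound on the modeling error is required."""
    pi_k = z @ z + 4.0 * (D1 @ D1) + 4.0 * (D2 @ D2)
    return eta0 / (1.0 + pi_k)

# The rate shrinks automatically when the gradient terms grow large
small = stable_rate(np.array([0.1, 0.1]), np.zeros(2), np.zeros(2))
large = stable_rate(np.array([10.0, 10.0]), np.zeros(2), np.zeros(2))
```

Here `small` is about $\eta_0 / 1.02$ while `large` is $\eta_0 / 201$: large regressors automatically slow the adaptation, which is what keeps the Lyapunov difference in the proof negative up to the $\zeta^2$ term.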
Proof: We select a positive definite scalar $L_k$ as

$$L_k = \| \tilde{y}(k) \|^2 + \| \tilde{C}(k) \|^2 + \| \tilde{\sigma}(k) \|^2 \tag{33}$$

The updating law (31) can be written as

$$\begin{aligned}
\tilde{y}(k+1) &= \tilde{y}(k) - \frac{\eta(k)}{b(k)} z(k) e(k) \\
\tilde{C}(k+1) &= \tilde{C}(k) - 2 \frac{\eta(k)}{b(k)} D_1(k) e(k) \\
\tilde{\sigma}(k+1) &= \tilde{\sigma}(k) - 2 \frac{\eta(k)}{b(k)} D_2(k) e(k)
\end{aligned} \tag{34}$$
So we have

$$\begin{aligned}
\Delta L_k &= \left\| \tilde{y}(k) - \frac{\eta(k)}{b(k)} z(k) e(k) \right\|^2 - \| \tilde{y}(k) \|^2 + \left\| \tilde{C}(k) - 2 \frac{\eta(k)}{b(k)} D_1(k) e(k) \right\|^2 - \| \tilde{C}(k) \|^2 \\
&\quad + \left\| \tilde{\sigma}(k) - 2 \frac{\eta(k)}{b(k)} D_2(k) e(k) \right\|^2 - \| \tilde{\sigma}(k) \|^2 \\
&= \eta^2(k) \left\| \frac{z(k)}{b(k)} \right\|^2 e^2(k) - 2 \eta(k) \frac{z^T(k) \tilde{y}(k)}{b(k)} e(k)
  + 4 \eta^2(k) \left\| \frac{D_1(k)}{b(k)} \right\|^2 e^2(k) - 4 \eta(k) \frac{D_1^T(k) \tilde{C}(k)}{b(k)} e(k) \\
&\quad + 4 \eta^2(k) \left\| \frac{D_2(k)}{b(k)} \right\|^2 e^2(k) - 4 \eta(k) \frac{D_2^T(k) \tilde{\sigma}(k)}{b(k)} e(k) \\
&= \frac{\eta^2(k)}{b^2(k)} e^2(k) \left( \| z(k) \|^2 + 4 \| D_1(k) \|^2 + 4 \| D_2(k) \|^2 \right) \\
&\quad - 2 \eta(k) e(k) \left[ \frac{1}{b(k)} z^T(k) \tilde{y}(k) + \frac{2}{b(k)} D_1^T(k) \tilde{C}(k) + \frac{2}{b(k)} D_2^T(k) \tilde{\sigma}(k) \right]
\end{aligned} \tag{35}$$
Because $e(k) = \frac{1}{b(k)} z^T(k) \tilde{y}(k) + \frac{2}{b(k)} D_1^T(k) \tilde{C}(k) + \frac{2}{b(k)} D_2^T(k) \tilde{\sigma}(k) + \zeta(k)$, the last term of (35) is $-2 \eta(k) e(k) \left[ e(k) - \zeta(k) \right]$. Because $\eta(k) > 0$,

$$\begin{aligned}
-2 \eta(k) e(k) \left[ e(k) - \zeta(k) \right]
&= -2 \eta(k) e^2(k) + 2 \eta(k) e(k) \zeta(k) \\
&\leq -2 \eta(k) e^2(k) + \eta(k) e^2(k) + \eta(k) \zeta^2(k) \\
&= -\eta(k) e^2(k) + \eta(k) \zeta^2(k)
\end{aligned} \tag{36}$$
So

$$\begin{aligned}
\Delta L_k &\leq \frac{\eta^2(k)}{b^2(k)} e^2(k) \left( \| z(k) \|^2 + 4 \| D_1(k) \|^2 + 4 \| D_2(k) \|^2 \right) - \eta(k) e^2(k) + \eta(k) \zeta^2(k) \\
&= -\eta(k) e^2(k) \left[ 1 - \frac{\eta(k)}{b^2(k)} \left( \| z(k) \|^2 + 4 \| D_1(k) \|^2 + 4 \| D_2(k) \|^2 \right) \right] + \eta(k) \zeta^2(k)
\end{aligned} \tag{37}$$
We define $\pi(k) = \| z(k) \|^2 + 4 \| D_1(k) \|^2 + 4 \| D_2(k) \|^2$ and choose $\eta(k) = \frac{\eta}{1 + \pi(k)}$. Because $0 < \eta \leq \max_k \left\{ b^2(k) \right\}$,

$$\begin{aligned}
\Delta L_k &\leq -\frac{\eta}{1 + \pi(k)} e^2(k) \left[ 1 - \frac{\eta}{b^2(k)} \frac{\pi(k)}{1 + \pi(k)} \right] + \eta(k) \zeta^2(k) \\
&\leq -\frac{\eta}{1 + \pi(k)} e^2(k) \left[ 1 - \frac{\pi(k)}{1 + \pi(k)} \right] + \eta(k) \zeta^2(k) \\
&\leq -\pi e^2(k) + \eta \zeta^2(k)
\end{aligned} \tag{38}$$
where $\pi$ is defined in (32). Because

$$n \left[ \min \left( \tilde{y}(k) \right) + \min \left( \tilde{c}(k) \right) + \min \left( \tilde{\sigma}(k) \right) \right] \leq \| \tilde{y}(k) \|^2 + \| \tilde{C}(k) \|^2 + \| \tilde{\sigma}(k) \|^2 = L_k \leq n \left[ \max \left( \tilde{y}(k) \right) + \max \left( \tilde{c}(k) \right) + \max \left( \tilde{\sigma}(k) \right) \right] \tag{39}$$

where

$$n \left[ \min \left( \tilde{y}(k) \right) + \min \left( \tilde{c}(k) \right) + \min \left( \tilde{\sigma}(k) \right) \right], \qquad n \left[ \max \left( \tilde{y}(k) \right) + \max \left( \tilde{c}(k) \right) + \max \left( \tilde{\sigma}(k) \right) \right] \tag{40}$$

are $K_\infty$-functions, $e_k^2$ is a $K_\infty$-function, and $\zeta^2(k)$ is a $K$-function. From (33) we know that $L_k$ is a function of $e(k)$ and $\zeta_k$, so $L_k$ admits a smooth ISS-Lyapunov function as
[Fig. 3. Hierarchical fuzzy neural network used to identify a nonlinear system: FS1 takes $y(k)$, $y(k-1)$, $y(k-2)$ and FS2 takes $u(k)$, $u(k-1)$; their outputs feed FS3, whose output $\hat{y}$ is compared with the plant output $y$.]
in Definition 2. From Theorem 1, the dynamics of the identification error are input-to-state stable. Because the "INPUT" $\zeta_k$ is bounded and the dynamics are ISS, the "STATE" $e_k$ is bounded.
(38) can be rewritten as

$$\Delta L_k \leq -\pi e_k^2 + \eta \zeta_k^2 \leq -\pi e_k^2 + \eta \bar{\zeta} \tag{41}$$

where $\bar{\zeta} = \max_k \left[ \zeta_k^2 \right]$. Summing (41) from $1$ up to $T$, and using $L_T > 0$ and the fact that $L_1$ is a constant, we obtain

$$L_T \leq L_1 - \pi \sum_{k=1}^{T} e_k^2 + T \eta \bar{\zeta}, \qquad
\pi \sum_{k=1}^{T} e_k^2 \leq L_1 - L_T + T \eta \bar{\zeta} \leq L_1 + T \eta \bar{\zeta} \tag{42}$$

so (32) is established.
IV. SIMULATIONS
We will use the nonlinear system proposed in [11] and [14] to illustrate the training algorithm for hierarchical fuzzy neural networks. The identified nonlinear plant is

$$y(k+1) = \frac{y(k)\, y(k-1)\, y(k-2)\, u(k-1) \left[ y(k-2) - 1 \right] + u(k)}{1 + y(k-1)^2 + y(k-2)^2}$$

The input vector is

$$X(k) = \left[ y(k),\, y(k-1),\, y(k-2),\, u(k),\, u(k-1) \right] \tag{43}$$

The unknown nonlinear system has the standard form

$$y(k+1) = f\left[ X(k) \right] \tag{44}$$

We use the following hierarchical fuzzy neural network to identify it; see Fig. 3.
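The benchmark plant is easy to simulate. The sketch below generates training data with a sinusoidal excitation; the exact training input of [14] is not reproduced here, so the input signal is an assumption of ours.

```python
import numpy as np

def plant_step(y, u, k):
    # One step of the benchmark plant of [11], [14]:
    # y(k+1) = [y(k) y(k-1) y(k-2) u(k-1) (y(k-2) - 1) + u(k)] / (1 + y(k-1)^2 + y(k-2)^2)
    num = y[k] * y[k - 1] * y[k - 2] * u[k - 1] * (y[k - 2] - 1.0) + u[k]
    den = 1.0 + y[k - 1] ** 2 + y[k - 2] ** 2
    return num / den

N = 200
u = np.sin(2.0 * np.pi * np.arange(N) / 25.0)  # assumed excitation signal
y = np.zeros(N + 1)
for k in range(2, N):
    y[k + 1] = plant_step(y, u, k)
```

Each $X(k) = [y(k), y(k-1), y(k-2), u(k), u(k-1)]$ with target $y(k+1)$ then gives one training pair, 200 of which are used in the experiment.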
We use 2 rules for each block, $l_1 = l_2 = l_3 = 2$. The input numbers for the blocks are $n_1 = 3$, $n_2 = 2$, $n_3 = 2$. We use 200 data points to train the model; the training input is the same as in [14]. The identification results are shown in Fig. 4.
Now we compare our algorithm with normal fuzzy neural networks [1][6][8]; here we use 20 rules, and the training rule is (11). Let us define the mean squared error over finite time as $J(k) = \frac{1}{2k} \sum_{i=1}^{k} e^2(i)$. The comparison results are shown in Fig. 5. We can see that, compared to normal fuzzy neural networks, hierarchical fuzzy neural networks can model the nonlinear system with fewer rules. By the training
4093
[Fig. 4. Identification with hierarchical fuzzy neural networks: the plant output $y$ and the model output $\hat{y}$ over 200 samples, both within $[-1, 1]$.]
[Fig. 5. Comparison of the mean squared error $J(k)$ for hierarchical fuzzy neural networks and normal fuzzy neural networks over 1000 samples.]
algorithm proposed in this paper, the convergence speed is
faster than the normal one.
V. CONCLUSIONS
In this paper we propose a simple training algorithm for hierarchical fuzzy neural networks. The modeling process can be realized in each sub-block independently. The new stable algorithms with time-varying learning rates are applied to hierarchical fuzzy neural networks. Future work will address structure training and adaptive control.
REFERENCES
[1] M. Brown, C.J. Harris, Neurofuzzy Adaptive Modelling and Control, Prentice Hall: New York, 1994.
[2] M.Y. Chen and D.A. Linkens, A systematic neuro-fuzzy modeling framework with application to material property prediction, IEEE Trans. Syst., Man, Cybern. B, Vol.31, 781-790, 2001.
[3] D.S. Chen and R.C. Jain, A robust back propagation learning algorithm for function approximation, IEEE Trans. Neural Networks, Vol.5, No.3, 1994.
[4] F.L. Chung, J.C. Duan, On multistage fuzzy neural network modeling, IEEE Transactions on Fuzzy Systems, Vol.8, No.2, 125-142, 2000.
[5] P.A. Ioannou and J. Sun, Robust Adaptive Control, Prentice-Hall, Inc., Upper Saddle River: NJ, 1996.
[6] J.-S. R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man and Cybernetics, Vol.23, 665-685, 1993.
[7] M.L. Lee, H.Y. Chung and F.M. Yu, Modeling of hierarchical fuzzy systems, Fuzzy Sets and Systems, Vol.138, 343-361, 2003.
[8] C.T. Lin and G. Lee, Neural Fuzzy Systems: A Neural-Fuzzy Synergism to Intelligent Systems, Prentice-Hall Inc., NJ, 1996.
[9] C.T. Lin, A neural fuzzy control system with structure and parameter learning, Fuzzy Sets and Systems, Vol.70, 183-212, 1995.
[10] Y.G. Leu, T.T. Lee and W.Y. Wang, Observer-based adaptive fuzzy-neural control for unknown nonlinear dynamical systems, IEEE Trans. Syst., Man, Cybern. B, Vol.29, 583-591, 1999.
[11] K.S. Narendra and K. Parthasarathy, Identification and control of dynamical systems using neural networks, IEEE Trans. Neural Networks, Vol.1, No.1, 4-27, 1990.
[12] G.V.S. Raju, J. Zhou and R.A. Kisner, Hierarchical fuzzy control, Int. J. of Control, Vol.54, No.5, 1201-1216, 1991.
[13] W. Rattasiri and S.K. Halgamuge, Computationally advantageous and stable hierarchical fuzzy systems for active suspension, IEEE Trans. on Industrial Electronics, Vol.50, No.1, 48-61, 2003.
[14] P.S. Sastry, G. Santharam, and K.P. Unnikrishnan, Memory neural networks for identification and control of dynamic systems, IEEE Trans. Neural Networks, Vol.5, 306-319, 1994.
[15] L.X. Wang, A Course in Fuzzy Systems and Control, Prentice Hall Inc., 1997.
[16] L.X. Wang, Analysis and design of hierarchical fuzzy systems, IEEE Transactions on Fuzzy Systems, Vol.7, No.3, 617-624, 1999.
[17] C. Wei and L.X. Wang, A note on universal approximation by hierarchical fuzzy systems, Information Sciences, Vol.123, 241-248, 2000.
[18] S. Wu and M.J. Er, Dynamic fuzzy neural networks: a novel approach to function approximation, IEEE Trans. Syst., Man, Cybern. B, Vol.30, 358-364, 2000.
[19] R.R. Yager, On the construction of hierarchical fuzzy systems models, IEEE Trans. Syst., Man, Cybern. C, Vol.28, 55-66, 1998.
[20] W. Yu and X. Li, Some stability properties of dynamic neural networks, IEEE Trans. Circuits and Systems, Part I, Vol.48, No.1, 256-259, 2001.
[21] W. Yu and X. Li, Some new results on system identification with dynamic neural networks, IEEE Trans. Neural Networks, Vol.12, No.2, 412-417, 2001.
[22] W. Yu and X. Li, Fuzzy identification using fuzzy neural networks with stable learning algorithms, IEEE Transactions on Fuzzy Systems, Vol.12, No.3, 411-420, 2004.