Zhang Neural Network without Using Time-Derivative Information
for Constant and Time-Varying Matrix Inversion
Yunong Zhang, Member, IEEE, Zenghai Chen, Ke Chen, and Binghuang Cai
Abstract— To obtain the inverses of time-varying matrices in real time, a special kind of recurrent neural network has recently been proposed by Zhang et al. It is proved that such a Zhang neural network (ZNN) can globally exponentially converge to the exact inverse of a given time-varying matrix. To determine the effect of the time-derivative term on global convergence, as well as for easier hardware implementation, the ZNN model without exploiting time-derivative information is investigated in this paper for inverting matrices online. Theoretical results for both the constant matrix-inversion case and the time-varying matrix-inversion case are presented for comparative and illustrative purposes. Computer-simulation results substantiate the presented theoretical results and demonstrate the importance of the time-derivative term of the given matrix to the exact convergence of the ZNN model to time-varying matrix inverses.
I. INTRODUCTION
The problem of obtaining the inverse of a matrix online arises in numerous fields of science, engineering, and business. It is usually a fundamental part of many solutions, e.g., an essential step in signal processing [1][2] and robot control [3][4].
The circuit-realizable dynamic-system approach is one of the important parallel-computational methods for solving matrix-inversion problems [1][3]-[12]. Recently, owing to in-depth research in neural networks, numerous dynamic and analog solvers in the form of recurrent neural networks have been developed and investigated [1][4][6][8][14]-[16]. The neural-dynamic approach is now regarded as a powerful alternative for online computation because of its parallel distributed nature and the convenience of hardware implementation [13][14].
A special kind of recurrent neural network with implicit dynamics has recently been proposed by Zhang et al. for time-varying equation solving; see, e.g., [4][9][10][12]. To solve for the inverse of time-varying matrix A(t) ∈ R^{n×n}, the following ZNN model can be established:

    A(t)Ẋ(t) = −Ȧ(t)X(t) − γF(A(t)X(t) − I),    (1)

where, starting from an initial condition X(0) ∈ R^{n×n}, X(t) is the activation state matrix corresponding to the time-varying inverse A⁻¹(t). On the other hand, to invert constant matrix A ∈ R^{n×n}, the derivative circuit realizing Ȧ(t)X(t) (in short, the D-circuit) is unnecessary because Ȧ(t) ≡ 0 in this case, and the above ZNN model (1) reduces to

    AẊ(t) = −γF(AX(t) − I).    (2)

Y. Zhang, Z. Chen, and B. Cai are with the Department of Electronics and Communication Engineering, Sun Yat-Sen University, Guangzhou 510275, China. K. Chen is with the School of Software, Sun Yat-Sen University, Guangzhou 510275, China (phone: +86-20-84113597; emails: ynzhang@ieee.org, zhynong@mail.sysu.edu.cn).
In (1) and (2), I ∈ R^{n×n} denotes the identity matrix, design parameter γ > 0 is used to scale the convergence rate of the neural solution, and F(·): R^{n×n} → R^{n×n} denotes a matrix-valued activation-function array of neural networks. Processing array F(·) is made of n² monotonically increasing odd activation functions f(·). For example, the first three of the following basic types of activation functions are depicted in Fig. 1:
• linear activation function f(u) = u;
• bipolar sigmoid activation function (with ξ ≥ 2)
  f(u) = (1 − exp(−ξu))/(1 + exp(−ξu));
• power activation function (with odd integer p ≥ 3) f(u) = u^p;
• and power-sigmoid activation function

    f(u) = { u^p,                                    if |u| > 1
           { δ(1 − exp(−ξu))/(1 + exp(−ξu)),        otherwise    (3)

  with ξ ≥ 2, p ≥ 3, and δ = (1 + exp(−ξ))/(1 − exp(−ξ)) > 1.
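The four activation-function types above can be sketched in code as follows (a minimal sketch; the function names are mine, not from the paper, and each function is meant to be applied elementwise over the n² error entries):

```python
import numpy as np

def f_linear(u):
    return u

def f_bipolar_sigmoid(u, xi=4.0):
    # (1 - exp(-xi*u)) / (1 + exp(-xi*u)), with xi >= 2
    return (1.0 - np.exp(-xi * u)) / (1.0 + np.exp(-xi * u))

def f_power(u, p=3):
    # p must be an odd integer >= 3 so the function stays odd
    return u ** p

def f_power_sigmoid(u, xi=4.0, p=3):
    # piecewise form (3): power branch outside [-1, 1], scaled sigmoid inside
    delta = (1.0 + np.exp(-xi)) / (1.0 - np.exp(-xi))  # > 1
    return np.where(np.abs(u) > 1.0, u ** p, delta * f_bipolar_sigmoid(u, xi))
```

All four are monotonically increasing odd functions, as the processing array F(·) requires; the scaling factor δ makes the two branches of (3) meet at |u| = 1.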
It has been shown that ZNN (1) can globally exponentially converge to the exact inverse of a given time-varying matrix [4][9][10][12]. To determine the effect of time-derivative information on neural matrix inversion (1), as well as for lower-complexity hardware implementation, ZNN models without using time-derivative information are investigated in this paper for both constant-matrix and time-varying-matrix inversion. For a comparison between the above ZNN model and the conventional gradient-based neural network (GNN) for matrix inversion, please see Appendix A. The remainder of this paper is thus organized in three sections. In Section II, we analyze and simulate the ZNN model (2) for constant matrix inversion. In Section III, we analyze and simulate the ZNN model (1) without the D-circuit for time-varying matrix inversion. Section IV concludes this paper with final remarks.
Before ending this introductory section, it is worth mentioning the main contributions of the article as follows.
1) We show that ZNN model (2) can globally exponentially converge to the exact inverse of a given constant nonsingular matrix.
978-1-4244-1821-3/08/$25.00 © 2008 IEEE
Authorized licensed use limited to: SUN YAT-SEN UNIVERSITY. Downloaded on December 16, 2008 at 08:58 from IEEE Xplore. Restrictions apply.
[Figure 1 omitted: plots of the linear, sigmoid, and power activation functions over u ∈ [−1, 1].]
Fig. 1. Activation function f(·) being the ijth element of array F(·)
2) We show that the time-derivative information of the given matrix (that is, the D-circuit) plays an important role in ZNN model (1), which inverts time-varying matrices in real time.
3) We substantiate that the ZNN model without the D-circuit can still work approximately for time-varying matrix inversion, if we pursue a simpler neural-circuit implementation and can accept a less accurate solution.
II. CONSTANT MATRIX INVERSION
In this section, the ZNN model without time-derivative information [i.e., neural dynamics (2)] is employed for inverting constant matrices online. Theoretical analysis is presented in detail and verified by computer simulations.
A. Theoretical Results
The following theoretical results are established about the global exponential convergence of ZNN (2), which inverts constant nonsingular matrix A ∈ R^{n×n} online.
Theorem 1: Consider constant nonsingular matrix A ∈ R^{n×n}. If a monotonically increasing odd activation-function array F(·) is used, then the state matrix X(t) of ZNN (2), starting from any initial state X(0) ∈ R^{n×n}, always converges to the constant theoretical inverse X* := A⁻¹ of matrix A. Moreover, for constant A, the ZNN model (2) possesses
1) global exponential convergence with rate γ if using the linear activation-function array F(X) = X;
2) global exponential convergence with rate ξγ/2 if using the bipolar sigmoid activation-function array;
3) convergence superior to situation 1) for the error range |[AX(t) − I]_ij| > 1 with i, j ∈ {1, 2, ···, n}, if using the power activation-function array; and
4) globally superior convergence to situation 1) if using the power-sigmoid activation-function array.
Proof: Omitted due to space limitation.
B. Simulative Verification
For illustration, let us consider the following constant matrix A ∈ R^{3×3} with its theoretical inverse given below for comparison:

    A = [[1, 0, 1], [1, 1, 0], [1, 1, 1]],    X* := A⁻¹ = [[1, 1, −1], [−1, 0, 1], [0, −1, 1]].
The ZNN model (2) solving for A⁻¹ can thus be written in the following specific form:

    [[1, 0, 1], [1, 1, 0], [1, 1, 1]] [[ẋ11, ẋ12, ẋ13], [ẋ21, ẋ22, ẋ23], [ẋ31, ẋ32, ẋ33]]
        = −γF([[1, 0, 1], [1, 1, 0], [1, 1, 1]] [[x11, x12, x13], [x21, x22, x23], [x31, x32, x33]] − [[1, 0, 0], [0, 1, 0], [0, 0, 1]]),
where processing array F(·) could typically be constructed by using n² power-sigmoid activation functions in the form of (3) with ξ = 4 and p = 3.
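The specific dynamics above can be integrated numerically with a simple forward-Euler scheme (a minimal sketch; the step size, horizon, and random seed are my choices, and np.linalg.solve is used only to integrate the implicit left-hand side AẊ):

```python
import numpy as np

def f_power_sigmoid(u, xi=4.0, p=3):
    # power-sigmoid activation (3), applied elementwise
    delta = (1.0 + np.exp(-xi)) / (1.0 - np.exp(-xi))
    sig = (1.0 - np.exp(-xi * u)) / (1.0 + np.exp(-xi * u))
    return np.where(np.abs(u) > 1.0, u ** p, delta * sig)

A = np.array([[1., 0., 1.],
              [1., 1., 0.],
              [1., 1., 1.]])
gamma, dt, T = 10.0, 1e-3, 10.0
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, (3, 3))            # X(0) randomly in [-2, 2]^{3x3}

for _ in range(int(T / dt)):
    E = A @ X - np.eye(3)                      # residual A X(t) - I
    X_dot = np.linalg.solve(A, -gamma * f_power_sigmoid(E))
    X = X + dt * X_dot                         # forward-Euler step

print(np.linalg.norm(X - np.linalg.inv(A)))    # solution error, near zero
```

In continuous time the residual obeys Ė = AẊ = −γF(E) entrywise, so each error entry decays monotonically; the Euler iteration mirrors this and the state settles at A⁻¹.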
As seen from Fig. 2, starting from any initial states randomly selected in [−2, 2]^{3×3}, the state matrices of ZNN model (2) all converge to the constant theoretical inverse A⁻¹. Evidently, the convergence time can be decreased considerably by increasing design parameter γ. It follows from this figure and other simulation data that design parameter γ plays an important role in the convergence speed of ZNN models. In addition, for the situation of using the power-sigmoid activation-function array, Fig. 3 shows that superior convergence can be achieved, as compared to the other activation-function situations. Note that ‖A‖_F := √(trace(AᵀA)) denotes hereafter the Frobenius norm of matrix A. Computer simulation has now substantiated the theoretical analysis presented in Subsection II-A.
III. TIME-VARYING MATRIX INVERSION
While Section II investigates the performance of ZNN model (2) inverting constant matrices, time-varying matrices are inverted by the ZNN model in this section.
For hardware implementation with lower complexity, however, instead of using ZNN (1), we could have the following simplified neural-dynamic model obtained by removing the D-circuit [i.e., the time-derivative term Ȧ(t)X(t)] from (1):

    A(t)Ẋ(t) = −γF(A(t)X(t) − I),    (4)

which could solve approximately for the time-varying inverse A⁻¹(t). Note that ZNN model (4) can be viewed as a time-varying version of ZNN model (2), obtained by replacing A therein with A(t).
A. Preliminaries
To lay a basis for further detailed analysis of ZNN model (4), the following invertibility condition and ‖A⁻¹(t)‖_F lemma are presented [4]. The former guarantees the uniform existence of the time-varying matrix inverse A⁻¹(t), whereas the latter gives a uniform upper bound on ‖A⁻¹(t)‖_F. They will be used in the theoretical analysis of ZNN model (4) in the ensuing subsection.
2008 International Joint Conference on Neural Networks (IJCNN 2008)
[Figure 2 omitted: nine state-trajectory subplots x11 through x33 versus time t (s), for (a) γ = 1 and (b) γ = 10.]
Fig. 2. Inversion of constant matrix A by ZNN (2) using the power-sigmoid activation-function array with ξ = 4 and p = 3
[Figure 3 omitted: four solution-error trajectories versus time t (s), one per activation-function case (linear, sigmoid, power, power-sigmoid).]
Fig. 3. Convergence comparison of solution error ‖X(t) − A⁻¹‖_F by ZNN (2) using different activation-function arrays with γ = 1
Condition: There exists a real number α > 0 such that

    min_{i∈{1,2,···,n}} |λ_i(A(t))| ≥ α,  ∀t ≥ 0,    (5)

where λ_i(·) denotes the ith eigenvalue of A(t) ∈ R^{n×n}.
Lemma: If A(t) satisfies the invertibility condition (5) with its norm uniformly upper bounded by β (i.e., ‖A(t)‖_F ≤ β, ∀t ≥ 0), then ‖A⁻¹(t)‖_F is uniformly upper bounded, i.e.,

    ‖A⁻¹(t)‖_F ≤ ϕ := Σ_{i=0}^{n−2} C_n^i β^{n−i−1}/α^{n−i} + n^{3/2}/α    (6)

for any time t ≥ 0, where C_n^i := n!/(i!(n − i)!) [4].
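The bound (6) is easy to evaluate numerically (a small sketch; the helper name is mine). For instance, the 2×2 matrix A(t) of Subsection III-C has eigenvalues sin t ± j cos t, so |λ_i(A(t))| = 1 (α = 1) and ‖A(t)‖_F = √2 (β = √2):

```python
from math import comb

def phi_bound(n, alpha, beta):
    # phi := sum_{i=0}^{n-2} C(n,i) * beta^{n-i-1} / alpha^{n-i} + n^{3/2}/alpha
    s = sum(comb(n, i) * beta ** (n - i - 1) / alpha ** (n - i)
            for i in range(n - 1))
    return s + n ** 1.5 / alpha

# n = 2, alpha = 1, beta = sqrt(2): the bound evaluates to 3*sqrt(2),
# safely above the true ||A^{-1}(t)||_F = sqrt(2) for that example.
print(phi_bound(2, 1.0, 2 ** 0.5))
```

As expected from a worst-case bound, ϕ overestimates the actual inverse norm; what matters for Theorem 2 below is only that it is finite and uniform in t.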
B. Theoretical Results
The following theoretical results are established about the solution-error bound of the simplified ZNN model (4) inverting time-varying nonsingular matrix A(t) online.
Theorem 2: Consider time-varying nonsingular matrix A(t) ∈ R^{n×n} which satisfies invertibility condition (5) and norm condition (6). If a monotonically increasing odd activation-function array F(·) is used, then the computational error ‖X(t) − A⁻¹(t)‖_F of ZNN (4), starting from any initial state X(0) ∈ R^{n×n}, is always upper bounded, with its steady-state solution error no greater than nεϕ²/(γρ − εϕ), provided that ‖Ȧ(t)‖_F ≤ ε for any t ∈ [0, ∞) and design parameter γ is large enough (γ > εϕ/ρ), where coefficient

    ρ := min{ max_{i,j∈{1,···,n}} f(|e_ij(0)|)/|e_ij(0)|, f′(0) },    (7)

with e_ij(0) := [A(0)X(0) − I]_ij, i, j ∈ {1, 2, ···, n}.
Proof: We can reformulate ZNN (4) as the following [with Δ_B(t) := −Ȧ(t) and Δ_C(t) := 0 ∈ R^{n×n}]:

    A(t)Ẋ(t) = −(Ȧ(t) + Δ_B)X(t) − γF(A(t)X(t) − I) + Δ_C,    (8)

which becomes exactly equation (10) of [4]. In view of ‖Δ_B‖_F = ‖−Ȧ‖_F = ‖Ȧ‖_F ≤ ε and ‖Δ_C‖_F = 0 for any t ∈ [0, ∞), we can now reuse the theoretical results of Theorem 2 in [4]. That is, the computational error ‖X(t) − A⁻¹(t)‖_F of neural dynamics (8) [equivalently, ZNN (4)] is always upper bounded. In addition, it follows immediately from Theorem 2 and equation (14) of [4] (see Appendix B) that its steady-state computational error satisfies

    lim_{t→∞} ‖X(t) − A⁻¹(t)‖_F ≤ nεϕ²/(γρ − εϕ).

Furthermore, design parameter γ is required therein to be greater than εϕ/ρ. In the original proof of Theorem 2 of [4], coefficient ρ > 0 is defined between f(e_ij(0))/e_ij(0) and f′(0). Following that proof and considering the worst case of such an error bound, we can determine the value of ρ as in (7). Specifically speaking, if the linear activation-function array F(X) = X is used, then ρ ≡ 1; if the bipolar sigmoid activation-function array is used, then ρ = max_{i,j∈{1,2,···,n}}(f(|e_ij(0)|)/|e_ij(0)|); and, if the power-sigmoid activation-function array (3) is used, then ρ ≥ 1 (where the strict inequality ">" holds in most situations). The proof is thus complete. □
[Figure 4 omitted: four state-trajectory subplots x11, x12, x21, x22 versus time t (s).]
Fig. 4. Inversion of time-varying matrix A(t) by ZNN (4) using the power-sigmoid activation-function array and with design parameter γ = 1, where dash-dotted curves denote the theoretical time-varying inverse A⁻¹(t)
[Figure 5 omitted: computational-error trajectories versus time t (s) for γ = 1 and γ = 10.]
Fig. 5. Computational error ‖X(t) − A⁻¹(t)‖_F by ZNN (4) using the power-sigmoid activation-function array and different values of parameter γ
C. Simulative Verification
For illustration and comparison, let us consider the following time-varying coefficient matrix with its theoretical inverse:

    A(t) = [[sin t, cos t], [−cos t, sin t]],    A⁻¹(t) = [[sin t, −cos t], [cos t, sin t]].
ZNN model (4) is thus in the specific form

    [[sin t, cos t], [−cos t, sin t]] [[ẋ11, ẋ12], [ẋ21, ẋ22]]
        = −γF([[sin t, cos t], [−cos t, sin t]] [[x11, x12], [x21, x22]] − [[1, 0], [0, 1]]),

where processing array F(·) could typically be constructed by using n² power-sigmoid activation functions in the form of (3) with ξ = 4 and p = 3.
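A forward-Euler simulation of this specific form makes the nonzero steady-state error tangible (a sketch; step size, horizon, and the zero initial state are my choices, and np.linalg.solve is used only to integrate the implicit dynamics — note that this particular A(t) is orthogonal, so A⁻¹(t) = Aᵀ(t)):

```python
import numpy as np

def f_power_sigmoid(u, xi=4.0, p=3):
    delta = (1.0 + np.exp(-xi)) / (1.0 - np.exp(-xi))
    sig = (1.0 - np.exp(-xi * u)) / (1.0 + np.exp(-xi * u))
    return np.where(np.abs(u) > 1.0, u ** p, delta * sig)

def A_of(t):
    return np.array([[np.sin(t), np.cos(t)],
                     [-np.cos(t), np.sin(t)]])

def steady_error(gamma, dt=1e-3, T=10.0):
    # integrate simplified ZNN (4) by forward Euler from X(0) = 0
    X = np.zeros((2, 2))
    for k in range(int(T / dt)):
        A = A_of(k * dt)
        X = X + dt * np.linalg.solve(
            A, -gamma * f_power_sigmoid(A @ X - np.eye(2)))
    return float(np.linalg.norm(X - A_of(T).T))  # A^{-1}(t) = A(t)^T here

e1, e10 = steady_error(1.0), steady_error(10.0)
print(e1, e10)  # both nonzero; the residual shrinks as gamma grows
```

The residual error is never driven to zero because the missing D-circuit term Ȧ(t)X(t) acts as a persistent disturbance, but raising γ from 1 to 10 visibly reduces it, consistent with the bound nεϕ²/(γρ − εϕ) of Theorem 2.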
Figs. 4 and 5 can thus be generated to show the performance of ZNN (4). According to Figs. 4 and 5, starting from initial states randomly selected in [−2, 2]^{2×2}, the state
[Figure 6 omitted: four state-trajectory subplots x11, x12, x21, x22 versus time t (s).]
Fig. 6. Inversion of time-varying matrix A(t) by ZNN (1) using the power-sigmoid activation-function array and with design parameter γ = 1
matrices of the presented ZNN model (4) could not converge to the theoretical inverse exactly. Instead, they could only approach an approximate solution of A⁻¹(t). In addition, as shown in Fig. 5, when we increase design parameter γ from 1 to 10, the steady-state computational error lim_{t→+∞} ‖X(t) − A⁻¹(t)‖_F decreases rapidly. However, there always exists a steady-state solution error which does not vanish to zero. These computer-simulation results have substantiated the theoretical results presented in Subsection III-B.
For comparison between the simplified ZNN model (4) and the original ZNN model (1) [which has the time-derivative term Ȧ(t)X(t)], we can generate Fig. 6 by applying ZNN (1) to this time-varying inversion. It shows the performance of the original ZNN model (1) for the time-varying matrix inversion under the same design parameters. Comparing Figs. 4 and 6, we see that the time-derivative information Ȧ(t) plays an important role in the convergence of ZNN models for time-varying matrix inversion.
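This comparison can also be checked numerically: integrating the original ZNN (1) with the D-circuit term retained, on the same A(t), the tracking error collapses to (essentially) zero (a minimal Euler sketch; step size, horizon, and zero initial state are my choices, and the small remaining residual is only discretization error):

```python
import numpy as np

def f_power_sigmoid(u, xi=4.0, p=3):
    delta = (1.0 + np.exp(-xi)) / (1.0 - np.exp(-xi))
    sig = (1.0 - np.exp(-xi * u)) / (1.0 + np.exp(-xi * u))
    return np.where(np.abs(u) > 1.0, u ** p, delta * sig)

def A_of(t):
    return np.array([[np.sin(t), np.cos(t)],
                     [-np.cos(t), np.sin(t)]])

def A_dot(t):
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t), np.cos(t)]])

gamma, dt, T = 1.0, 1e-3, 10.0
X = np.zeros((2, 2))
for k in range(int(T / dt)):
    t = k * dt
    A = A_of(t)
    # full ZNN (1): the D-circuit term -A_dot(t) X(t) is retained
    rhs = -A_dot(t) @ X - gamma * f_power_sigmoid(A @ X - np.eye(2))
    X = X + dt * np.linalg.solve(A, rhs)

err = float(np.linalg.norm(X - A_of(T).T))  # A^{-1}(t) = A(t)^T here
print(err)  # small: near-exact tracking even with gamma = 1
```

With the D-circuit, the error dynamics reduce to Ė = −γF(E) exactly, so even γ = 1 achieves near-exact tracking — in contrast with the visibly nonzero residual of the simplified model (4) at the same γ.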
IV. CONCLUSIONS
An efficient recurrent neural network for online time-varying matrix inversion has been proposed by Zhang et al. [4][9][10][12]. The performance analysis of such a ZNN model without the time-derivative term is presented in this paper. For comparative purposes, both constant matrix inversion and time-varying matrix inversion are analyzed. On one hand, as time derivative Ȧ ≡ 0 for a given constant matrix, the global exponential convergence of the ZNN model (2) for constant-matrix inversion can be achieved. On the other hand, without exploiting the Ȧ(t) information, the simplified ZNN model only approaches an approximate inverse (instead of the exact one). Simulation results have demonstrated the importance of the time-derivative information Ȧ(t), the absence of which limits the performance of the simplified ZNN model for online time-varying matrix inversion.
APPENDIX A
The design of the gradient-based neural network (GNN) and the ZNN models (1)-(2)-(4) can be viewed from the online solution of the following defining equation:

    AX(t) − I = 0,  t ∈ [0, +∞),    (9)

where coefficient A is an n-dimensional square matrix (constant for the GNN design, while allowed to be time-varying for the ZNN design), and X(t) ∈ R^{n×n} is to be solved for.
GNN Design
The GNN model for online constant-matrix inversion can be developed by the following procedure. Note that the gradient-descent design method can only be employed here for the constant-matrix inversion problem.
• Firstly, to solve (9) for X(t) via a neural-dynamic approach, we can define a scalar-valued norm-based error function E(t) = ‖AX(t) − I‖²_F /2. It is worth pointing out that the minimum point of this residual-error function is achieved with E(t) = 0 if and only if X(t) is the exact solution of equation (9) [in other words, X(t) = X* := A⁻¹].
• Secondly, a computational scheme can be designed to evolve along a descent direction of this error function E(t) until the minimum point X* is reached. Note that a typical descent direction is the negative gradient of E(t), i.e., −(∂E/∂X) ∈ R^{n×n}.
• Thirdly, in view of ∂E/∂X = Aᵀ(AX(t) − I) ∈ R^{n×n}, it follows from the gradient-descent design formula Ẋ(t) = −γ∂E/∂X that the following neural-dynamic equation can be adopted as the conventional GNN model for online constant-matrix inversion:

    Ẋ(t) = −γAᵀ(AX(t) − I),  t ∈ [0, +∞),

where design parameter γ > 0 is defined the same as in the aforementioned ZNN models.
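The three design steps above can be condensed into a few lines of simulation for the 3×3 constant matrix of Section II (a sketch; Euler step size and horizon are my choices):

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [1., 1., 0.],
              [1., 1., 1.]])
gamma, dt, T = 10.0, 1e-3, 10.0
X = np.zeros((3, 3))

for _ in range(int(T / dt)):
    grad = A.T @ (A @ X - np.eye(3))   # dE/dX for E(t) = ||A X - I||_F^2 / 2
    X = X - dt * gamma * grad          # Euler step of Xdot = -gamma * grad

print(np.linalg.norm(X - np.linalg.inv(A)))   # solution error, near zero
```

Unlike the ZNN dynamics, this explicit form needs no inversion of the left-hand side, but its convergence rate is governed by γλ_min(AᵀA) rather than γ directly, and the design is restricted to constant A.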
ZNN Design
The original ZNN model (1) for online constant and/or time-varying matrix inversion can be developed by the following procedure by Zhang et al. [4][9][10][12]; afterwards, it can be simplified to (2) and (4). Note that this ZNN design method can be employed for both constant and time-varying problem solving.
• Firstly, we construct a matrix-valued error function E(X(t), t) = AX(t) − I. Note that E(X(t), t) equals zero if and only if X(t) is the solution of (9).
• Secondly, the error-function time derivative Ė(X(t), t) is chosen to guarantee that every entry e_ij(t), i, j = 1, 2, ···, n, of E(X(t), t) converges to zero. Its general form can be given as the following:

    dE(X(t), t)/dt = −γF(E(X(t), t)),    (10)

where design parameter γ > 0 and activation-function array F(·) have been defined as in the previous sections.
• Finally, according to ZNN design formula (10), we can thus obtain the ZNN model (1) for time-varying matrix inversion (effective as well for constant matrix inversion). Other ZNN variants [such as (2) and (4)] can then be derived readily from this original ZNN model (1).
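The step from design formula (10) to model (1) can be made explicit by substituting E(X(t), t) = A(t)X(t) − I and applying the product rule (a worked expansion of the final step, not spelled out in the text above):

```latex
\dot{E}(X(t),t) \;=\; \frac{d}{dt}\bigl(A(t)X(t)-I\bigr)
\;=\; \dot{A}(t)X(t) + A(t)\dot{X}(t)
\;=\; -\gamma F\bigl(A(t)X(t)-I\bigr),
```

and rearranging gives A(t)Ẋ(t) = −Ȧ(t)X(t) − γF(A(t)X(t) − I), i.e., model (1); for constant A the term Ȧ(t)X(t) vanishes, yielding (2).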
APPENDIX B
Theorem 2 of [4]: Consider the general RNN model with implementation errors Δ_B and Δ_C in (8). If ‖Δ_B(t)‖_F ≤ ε₂ and ‖Δ_C(t)‖_F ≤ ε₃ for any t ∈ [0, ∞), then the computational error ‖X − X*‖_F is bounded, with steady-state residual error

    lim_{t→∞} ‖X(t) − X*(t)‖_F ≤ nϕ(ε₃ + ε₂ϕ)/(γρ − ε₂ϕ)    (14)

under the design-parameter requirement γ > ε₂ϕ/ρ, where the parameter ρ > 0 is defined between f(e_ij(0))/e_ij(0) and f′(0). Furthermore, as γ tends to positive infinity, the steady-state residual error can be diminished to zero.
REFERENCES
[1] R. J. Steriti and M. A. Fiddy, “Regularized image reconstruction
using SVD and a neural network method for matrix inversion,” IEEE
Transactions on Signal Processing, vol. 41, no. 10, pp. 3074-3077,
1993.
[2] Y. Zhang, W. E. Leithead, and D. J. Leith, "Time-series Gaussian process regression based on Toeplitz computation of O(N²) operations and O(N)-level storage," Proc. the 44th IEEE Conference on Decision and Control, Seville, 2005, pp. 3711-3716.
[3] R. H. Sturges Jr, “Analog matrix inversion (robot kinematics),” IEEE
Journal of Robotics and Automation, vol. 4, no. 2, pp. 157-162, 1988.
[4] Y. Zhang and S. S. Ge, "Design and analysis of a general recurrent neural network model for time-varying matrix inversion," IEEE Transactions on Neural Networks, vol. 16, no. 6, pp. 1477-1490, 2005.
[5] N. C. F. Carneiro and L. P. Caloba, “A new algorithm for analog
matrix inversion,” Proc. the 38th Midwest Symposium on Circuits and
Systems, Rio de Janeiro, vol. 1, 1995, pp. 401-404.
[6] F. L. Luo and B. Zheng, “Neural network approach to computing
matrix inversion,” Applied Math. Comput., vol. 47, pp. 109-120, 1992.
[7] R. K. Manherz, B. W. Jordan, and S. L. Hakimi, “Analog methods
for computation of the generalized inverse,” IEEE Transactions on
Automatic Control, vol. 13, no. 5, pp. 582-585, 1968.
[8] J. Song and Y. Yam, "Complex recurrent neural network for computing the inverse and pseudo-inverse of the complex matrix," Applied Mathematics and Computation, vol. 93, pp. 195-205, 1998.
[9] Y. Zhang and S. S. Ge, "A general recurrent neural network model for time-varying matrix inversion," Proc. the 42nd IEEE Conference on Decision and Control, Hawaii, 2003, pp. 6169-6174.
[10] Y. Zhang, K. Chen, and W. Ma, “Matlab simulation and comparison
of Zhang neural network and gradient neural network of online
solution of linear time-varying equations,” DCDIS Proc. International
Conference on Life System Modeling and Simulation (LSMS 2007),
Shanghai, 2007, pp. 450-454.
[11] Y. Zhang, K. Chen, W. Ma, and X. Li, “Matlab simulation of gradient-
based neural network for online matrix inversion,” Lecture Notes on
Artificial Intelligence, vol. 4682, pp. 98-109, 2007.
[12] Y. Zhang, D. Jiang, and J. Wang, "A recurrent neural network for solving Sylvester equation with time-varying coefficients," IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1053-1063, 2002.
[13] C. Mead, Analog VLSI and Neural Systems, Reading, Mass.: Addison-
Wesley, 1989.
[14] D. Tank and J. Hopfield, “Simple neural optimization networks: an
A/D converter, signal decision circuit, and a linear programming
circuit,” IEEE Trans. Circuits Syst., vol. 33, no. 5, pp. 533-541, 1986.
[15] Y. H. Kim, F. L. Lewis, and C. T. Abdallah, “A dynamic recurrent
neural-network-based adaptive observer for a class of nonlinear sys-
tems, ” Automatica, vol. 33, no. 8, pp. 1539-1543, 1997.
[16] J. Wang and G. Wu, “A multilayer recurrent neural network for on-line
synthesis of minimum-norm linear feedback control systems via pole
assignment,” Automatica, vol. 32, no. 3, pp. 435-442, 1996.