Automatica 41 (2005) 1405–1412 www.elsevier.com/locate/automatica
Brief paper
Delay-dependent stabilization of linear systems with time-varying state and input delays☆
Xian-Ming Zhang a,b, Min Wu a,∗, Jin-Hua She c, Yong He a,b
aSchool of Information Science and Engineering, Central South University, Changsha 410083, China
bSchool of Mathematical Science and Computing Technology, Central South University, Changsha 410083, China
cSchool of Bionics, Tokyo University of Technology, Tokyo 192-0982, Japan
Received 12 December 2003; received in revised form 8 January 2005; accepted 9 March 2005
Available online 23 May 2005
Abstract
The integral-inequality method is a new way of tackling the delay-dependent stabilization problem for a linear system with time-varying state and input delays: ẋ(t) = Ax(t) + A1x(t − h1(t)) + B1u(t) + B2u(t − h2(t)). In this paper, a new integral inequality for quadratic terms is first established. Then, it is used to obtain a new state- and input-delay-dependent criterion that ensures the stability of the closed-loop system with a memoryless state feedback controller. Finally, some numerical examples are presented to demonstrate that control systems designed based on the criterion are effective, even though neither (A, B1) nor (A + A1, B1) is stabilizable.
© 2005 Elsevier Ltd. All rights reserved.
Keywords: Input delays; State delays; Delay-dependent stability; Stabilization; Integral inequality; Linear matrix inequality (LMI)
1. Introduction
Time delays are frequently encountered in a variety of
dynamic systems, such as nuclear reactors, chemical engi-
neering systems, biological systems, and population dynam-
ics models (Kolmanovskii & Nosov, 1986; Kuang, 1993).
They are often a source of instability and degradation in
control performance in many control systems. The analysis
of the stability of dynamic control systems with delays and
the synthesis of controllers for them are important both in
theory and practice (see Niculescu, 2001; Gu, Kharitonov,
& Chen, 2003), and are thus of interest to a great number
☆ This paper was not presented at any IFAC meeting. This paper was
recommended for publication in revised form by Associate Editor Didier
Henrion under the direction of Editor Roberto Tempo. This work was
supported in part by the Natural Science Foundation of China under
Grant No. 60425310 and the Teaching and Research Award Program
for Outstanding Young Teachers in Higher Education Institutions of the
Ministry of Education, PR China (TRAPOYT).
∗Corresponding author. Tel.: +86731 8836091.
E-mail addresses: min@csu.edu.cn (M. Wu),
she@cc.teu.ac.jp (J.-H. She).
0005-1098/$ - see front matter © 2005 Elsevier Ltd. All rights reserved.
doi:10.1016/j.automatica.2005.03.009
of researchers (see Barmish, 1985; Xie, 1996; Gu, 2000;
Han, 2002). Recently, Richard (2003) summarized current
research on time-delay systems and listed four open prob-
lems, one of which is the following.
Open Problem 1. Consider a linear system with both state
and input delays:
ẋ(t) = Ax(t) + A1x(t − h) + B1u(t) + B2u(t − h).   (1)

If the pairs (A, B1) and/or (A + A1, B1) are not controllable, how can the term B2u(t − h) be used to achieve efficient control?
When A1 = 0, the system has an input delay. An easy
way to deal with it is to reduce it to an ordinary delay-free
system by the Artstein model reduction method (Kwon &
Pearson, 1980; Artstein, 1982; Choi & Chung, 1995). How-
ever, the complete transformation can only be obtained for
a fully known system. That is, this method is not valid when
the system contains a time-varying delay or uncertainties.
Furthermore, stabilizing controllers obtained by this method
are distributed, and therefore difficult to implement.
For the case A1 ≠ 0, Fiagbedzi and Pearson (1986, 1987)
designed a feedback controller to stabilize the system (1) by
transforming it into an ordinary delay-free system and us-
ing the concept of spectral stabilizability. The fact that this
method requires that the unstable poles of the system be
known exactly makes it difficult to use on a system with a
time-varying delay or uncertainties, and the resulting con-
troller is also distributed. Choi and Chung (1995), Kim,
Jeung, and Park (1996) and Han and Mehdi (1998) proposed
another method to directly design a robust stabilizing con-
troller for an uncertain system with state and input delays.
Their approach involves the design of a memoryless con-
troller to guarantee the stability of the closed-loop system.
Since this controller is independent of the delay, it tends to
be unduly conservative, especially when the actual delay is
small. To the best of our knowledge, surprisingly few delay-
dependent conditions have so far been established for the
open problem stated above.
This paper proposes a new method called the integral-
inequality method that can be used to study the delay-
dependent stabilization issue of the open problem for
time-varying delays. Incorporating Moon et al.’s in-
equality (Moon, Park, Kwon, & Lee, 2001) and the
Leibniz–Newton formula yields an integral inequality for
quadratic terms. This is used to obtain a new state- and
input-delay-dependent stabilization condition by means of
the Lyapunov–Krasovskii functional approach. It is easy to
show that the new criterion does not require any assumptions about the system matrices; e.g., neither (A, B1) nor (A + A1, B1) needs to be stabilizable. So, a control system designed based on this criterion is effective, even if neither (A, B1) nor (A + A1, B1) is stabilizable. Moreover, a numerical example shows that applying the new criterion to the system (1) with B2 = 0 yields a less conservative result than those obtained by Fridman and Shaked (2002, 2003) and Gao and Wang (2003).
Notation: Throughout this paper, the superscripts '−1' and 'T' stand for the inverse and transpose of a matrix, respectively; R^n denotes the n-dimensional Euclidean space; R^{n×m} is the set of all n × m real matrices; P > 0 means that the matrix P is positive definite; I is an appropriately dimensioned identity matrix; diag{···} denotes a block-diagonal matrix; block matrices are written row by row, with the rows separated by semicolons; and the symmetric terms in a symmetric matrix are denoted by ∗, e.g.,

[X, Y; ∗, Z] = [X, Y; Yᵀ, Z].
2. Problem statement
Consider the following system with time-varying state and input delays:

ẋ(t) = Ax(t) + A1x(t − h1(t)) + B1u(t) + B2u(t − h2(t)),  t > 0,
x(t) = φ(t),  t ∈ [−max{h̄1, h̄2}, 0],   (2)

where x(t) ∈ R^n and u(t) ∈ R^m are the state and control input, respectively; φ is a continuously differentiable initial function; A, A1, B1 and B2 are known constant real matrices with appropriate dimensions; and h1(t) and h2(t) are time-varying bounded delays satisfying

0 ≤ h1(t) ≤ h̄1,  0 ≤ h2(t) ≤ h̄2,  ḣ1(t) ≤ d < 1.   (3)
The memoryless state feedback controller

u(t) = Kx(t)   (4)

is employed to stabilize (2). The objective of this study is to develop a new delay-dependent stabilization method that provides a controller gain, K, as well as upper bounds, h̄1 and h̄2, on the delays such that the resulting closed-loop system, (2) and (4), is asymptotically stable for any h1(t) and h2(t) satisfying (3). For this purpose, the following lemmas are first introduced.
Lemma 1 (Moon et al., 2001). The following inequality holds for any a ∈ R^{na}, b ∈ R^{nb}, N ∈ R^{na×nb}, X ∈ R^{na×na}, Y ∈ R^{na×nb}, and Z ∈ R^{nb×nb}:

−2aᵀNb ≤ [a; b]ᵀ [X, Y − N; ∗, Z] [a; b],   (5)

where

[X, Y; ∗, Z] ≥ 0.
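As a numerical illustration (added here; not part of the original paper), the following Python sketch spot-checks inequality (5) with randomly generated matrices. The gap between the two sides equals [a; b]ᵀ[X, Y; ∗, Z][a; b], which is nonnegative whenever the constraint holds.

```python
import numpy as np

rng = np.random.default_rng(0)
na, nb = 3, 2

# Build a random PSD block matrix [[X, Y], [Y^T, Z]] by squaring a random matrix.
M = rng.standard_normal((na + nb, na + nb))
W = M.T @ M  # W >= 0 by construction
X, Y, Z = W[:na, :na], W[:na, na:], W[na:, na:]

N = rng.standard_normal((na, nb))
a = rng.standard_normal(na)
b = rng.standard_normal(nb)

# Left- and right-hand sides of inequality (5).
lhs = -2.0 * a @ N @ b
ab = np.concatenate([a, b])
bound = np.block([[X, Y - N], [(Y - N).T, Z]])
rhs = ab @ bound @ ab

# The gap equals [a; b]^T [[X, Y], [Y^T, Z]] [a; b], which is >= 0.
gap = ab @ W @ ab
assert abs((rhs - lhs) - gap) < 1e-9
assert lhs <= rhs + 1e-12
```

Since the proof of (5) is exactly the observation that the gap is a quadratic form in [a; b] with the PSD constraint matrix as its kernel, any random draw passes this check.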
Applying the above lemma yields the following integral
inequality for quadratic terms.
Lemma 2. Let x(t) ∈ R^n be a vector-valued function with first-order continuous-derivative entries. Then, the following integral inequality holds for any matrices X, M1, M2 ∈ R^{n×n} and Z ∈ R^{2n×2n}, and a scalar function h := h(t) ≥ 0:

−∫_{t−h}^{t} ẋᵀ(s)Xẋ(s) ds ≤ ζᵀ(t)Υζ(t) + hζᵀ(t)Zζ(t),   (6)

where

Υ := [M1ᵀ + M1, −M1ᵀ + M2; ∗, −M2ᵀ − M2],  ζ(t) := [x(t); x(t − h)],

[X, Y; ∗, Z] ≥ 0   (7)

with Y := [M1, M2].

Proof. From the Leibniz–Newton formula,

0 = x(t) − x(t − h) − ∫_{t−h}^{t} ẋ(s) ds.   (8)
So, the following equation holds for any N1, N2 ∈ R^{n×n}:

0 = 2[xᵀ(t)N1ᵀ + xᵀ(t − h)N2ᵀ] × [x(t) − x(t − h) − ∫_{t−h}^{t} ẋ(s) ds]
  = 2ζᵀ(t)Nᵀ[I, −I]ζ(t) − 2∫_{t−h}^{t} ζᵀ(t)Nᵀẋ(s) ds,   (9)

where N := [N1, N2]. Applying Lemma 1 with a := ẋ(s) and b := ζ(t) yields

−2∫_{t−h}^{t} ζᵀ(t)Nᵀẋ(s) ds ≤ ∫_{t−h}^{t} ẋᵀ(s)Xẋ(s) ds + 2ζᵀ(t)(Yᵀ − Nᵀ)[I, −I]ζ(t) + hζᵀ(t)Zζ(t).   (10)

Substituting (10) into (9) gives us

−∫_{t−h}^{t} ẋᵀ(s)Xẋ(s) ds ≤ 2ζᵀ(t)Yᵀ[I, −I]ζ(t) + hζᵀ(t)Zζ(t).   (11)

After a simple rearrangement, (11) yields (6). This completes the proof.
Remark 1. (6) is called an integral inequality. It plays a key role in the derivation of a criterion for delay-dependent stabilization in this paper. Note that the free matrices N1 and N2 introduced in the proof do not appear in the integral inequality. The conservatism of the descriptor model transformation method, which is closely related to free parameters, is discussed in the next section.

Remark 2. The integral inequality (6) is quite different from the ones used in Gu (2000). The free terms in (6), for example, M1 and M2, help in the design of the controller (4) for system (2), but the integral inequalities in Gu (2000) do not.

The integral inequality (6) holds under the inequality constraint (7). When X > 0, the constraint condition can be removed from the integral inequality. For example, taking Z = YᵀX⁻¹Y guarantees (7) because

[X, Y; ∗, Z] = [X, Y; ∗, YᵀX⁻¹Y] = GᵀG ≥ 0,

where

G := [X^{1/2}, X^{−1/2}Y; 0, 0].
This yields the following proposition.
Proposition 3. Let x(t) ∈ R^n be a vector-valued function with first-order continuous-derivative entries. Then, the following integral inequality holds for any matrices M1, M2 ∈ R^{n×n} and X = Xᵀ > 0, and a scalar function h := h(t) ≥ 0:

−∫_{t−h}^{t} ẋᵀ(s)Xẋ(s) ds ≤ ζᵀ(t)[M1ᵀ + M1, −M1ᵀ + M2; ∗, −M2ᵀ − M2]ζ(t)
    + hζᵀ(t)[M1ᵀ; M2ᵀ]X⁻¹[M1, M2]ζ(t),   (12)

where ζ(t) is defined in Lemma 2.
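A quick numerical sanity check of (12) (an illustration added here, not from the paper): take x(s) = [sin s, cos s]ᵀ, X = I, h = 1 and arbitrary M1, M2. The left-hand side is then exactly −1, and the inequality must hold for any choice of the free matrices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, t, h = 2, 2.0, 1.0

x = lambda s: np.array([np.sin(s), np.cos(s)])
xdot = lambda s: np.array([np.cos(s), -np.sin(s)])

X = np.eye(n)                       # X = X^T > 0
M1 = rng.standard_normal((n, n))    # free matrices: any choice is admissible
M2 = rng.standard_normal((n, n))

# Left-hand side: -integral of xdot(s)^T X xdot(s) over [t-h, t]
# (trapezoidal rule; the integrand here is identically 1, so lhs = -h).
s = np.linspace(t - h, t, 2001)
vals = np.array([xdot(si) @ X @ xdot(si) for si in s])
lhs = -float(np.sum(0.5 * (vals[:-1] + vals[1:]) * np.diff(s)))

# Right-hand side of (12) with zeta(t) = [x(t); x(t - h)].
zeta = np.concatenate([x(t), x(t - h)])
Upsilon = np.block([[M1.T + M1, -M1.T + M2],
                    [(-M1.T + M2).T, -M2.T - M2]])
Y = np.hstack([M1, M2])
rhs = zeta @ Upsilon @ zeta + h * zeta @ (Y.T @ np.linalg.inv(X) @ Y) @ zeta

assert abs(lhs + 1.0) < 1e-6
assert lhs <= rhs + 1e-9
```

Because Proposition 3 holds for any M1, M2 and any X > 0, the check passes regardless of the random seed.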
3. Main results
This section presents the delay-dependent stabilization
conditions obtained by means of the integral-inequality
method.
The closed-loop system constructed by means of (2) and (4) is given by

ẋ(t) = AKx(t) + A1x(t − h1(t)) + BKx(t − h2(t)),   (13)

where

AK = A + B1K,  BK = B2K.   (14)
The following theorem is obtained for system (13).
Theorem 4. For given numbers εi, βi, i = 1, 2, if there exist matrices P̄ > 0, R̄1 > 0, R̄2 > 0, Q̄ > 0 and a matrix Y such that the following LMI holds:

Ξ :=
[Ξ11  Ξ12  Ξ13    Ξ14     Ξ15      0       0      P̄]
[ ∗   Ξ22   0     Ξ24     Ξ25    h̄1R̄1     0      0]
[ ∗    ∗   Ξ33    Ξ34     Ξ35      0     h̄2R̄2    0]
[ ∗    ∗    ∗   −h̄1R̄1     0       0       0      0]
[ ∗    ∗    ∗     ∗     −h̄2R̄2     0       0      0]
[ ∗    ∗    ∗     ∗       ∗     −h̄1R̄1     0      0]
[ ∗    ∗    ∗     ∗       ∗       ∗     −h̄2R̄2   0]
[ ∗    ∗    ∗     ∗       ∗       ∗       ∗     −Q̄]
< 0,   (15)
where

Ξ11 = AP̄ + P̄Aᵀ + B1Y + YᵀB1ᵀ − ε1β1⁻¹(A1Q̄ + Q̄A1ᵀ) − ε2β2⁻¹(B2Y + YᵀB2ᵀ) − ε1²β1⁻²(1 − d)Q̄,
Ξ12 = β1⁻¹A1Q̄ + P̄ + ε1β1⁻¹Q̄ + ε1β1⁻²(1 − d)Q̄,
Ξ13 = β2⁻¹B2Y + P̄ + ε2β2⁻¹P̄,
Ξ14 = h̄1(YᵀB1ᵀ + P̄Aᵀ − ε1β1⁻¹Q̄A1ᵀ − ε2β2⁻¹YᵀB2ᵀ),
Ξ15 = h̄2(YᵀB1ᵀ + P̄Aᵀ − ε1β1⁻¹Q̄A1ᵀ − ε2β2⁻¹YᵀB2ᵀ),
Ξ22 = −2β1⁻¹Q̄ − β1⁻²(1 − d)Q̄,
Ξ24 = h̄1β1⁻¹Q̄A1ᵀ,
Ξ25 = h̄2β1⁻¹Q̄A1ᵀ,
Ξ33 = −2β2⁻¹P̄,
Ξ34 = h̄1β2⁻¹YᵀB2ᵀ,
Ξ35 = h̄2β2⁻¹YᵀB2ᵀ,

then the closed-loop system (13) is asymptotically stable and the state feedback control law is given by

u(t) = Y P̄⁻¹ x(t).   (16)
Proof. Choose a Lyapunov–Krasovskii functional candidate as follows:

V(t) = xᵀ(t)Px(t) + Σ_{j=1}^{2} ∫_{−h̄j}^{0} ∫_{t+θ}^{t} ẋᵀ(s)Rjẋ(s) ds dθ + ∫_{t−h1(t)}^{t} xᵀ(s)Qx(s) ds,

where P > 0, Q > 0, R1 > 0, and R2 > 0. Then, the time derivative of V(t) along the trajectory of (13) satisfies

V̇(t) = 2xᵀ(t)Pẋ(t) + Σ_{j=1}^{2} h̄jẋᵀ(t)Rjẋ(t) − (1 − ḣ1(t))xᵀ(t − h1(t))Qx(t − h1(t)) + xᵀ(t)Qx(t) − Σ_{j=1}^{2} ∫_{t−h̄j}^{t} ẋᵀ(s)Rjẋ(s) ds.   (17)
From (3), it is clear that the following is true for j = 1, 2:

−∫_{t−h̄j}^{t} ẋᵀ(s)Rjẋ(s) ds ≤ −∫_{t−hj(t)}^{t} ẋᵀ(s)Rjẋ(s) ds.   (18)

Applying the integral inequality (12) to the term on the right-hand side of (18) for any M1j, M2j ∈ R^{n×n} yields the following integral inequality for j = 1, 2:

−∫_{t−hj(t)}^{t} ẋᵀ(s)Rjẋ(s) ds ≤ ζjᵀ(t)[M1jᵀ + M1j, −M1jᵀ + M2j; ∗, −M2jᵀ − M2j]ζj(t)
    + h̄jζjᵀ(t)[M1jᵀ; M2jᵀ]Rj⁻¹[M1j, M2j]ζj(t),   (19)

where

ζjᵀ(t) = [xᵀ(t), xᵀ(t − hj(t))],  j = 1, 2.
Substituting (18) and (19) into (17), carrying out some algebraic manipulations, and rearranging the terms gives

V̇(t) ≤ ξᵀ(t)[H + Σ_{j=1}^{2} h̄jΓ1ᵀRjΓ1]ξ(t) + ξᵀ(t)[h̄1Γ2ᵀR1⁻¹Γ2 + h̄2Γ3ᵀR2⁻¹Γ3]ξ(t),   (20)

where

ξᵀ(t) = [xᵀ(t), xᵀ(t − h1(t)), xᵀ(t − h2(t))],

H = [H11, PA1 − M11ᵀ + M21, PBK − M12ᵀ + M22;
     ∗, −(1 − d)Q − M21ᵀ − M21, 0;
     ∗, ∗, −M22ᵀ − M22],

H11 = PAK + AKᵀP + Q + M11ᵀ + M11 + M12ᵀ + M12,

Γ1 = [AK, A1, BK],  Γ2 = [M11, M21, 0],  Γ3 = [M12, 0, M22].   (21)
From (20), we find that, if the following matrix inequality holds:

Φ :=
[ H    h̄1Γ1ᵀ    h̄2Γ1ᵀ    h̄1Γ2ᵀ    h̄2Γ3ᵀ]
[ ∗   −h̄1R1⁻¹     0         0        0  ]
[ ∗      ∗     −h̄2R2⁻¹      0        0  ]
[ ∗      ∗        ∗       −h̄1R1     0  ]
[ ∗      ∗        ∗         ∗     −h̄2R2]
< 0,   (22)

then applying the Schur complement (Bernussou, Peres, & Geromel, 1989) yields V̇(t) < 0. Thus, by using the Lyapunov–Krasovskii functional theorem (Proposition 5.2 in Gu et al., 2003), we can conclude that (13) is asymptotically stable.
In order to obtain a controller gain, K, from the nonlinear matrix inequality (22) (the nonlinearities come from Ri⁻¹, i = 1, 2), we first let

W = [P, 0, 0; M11, M21, 0; M12, 0, M22],  Ā = [AK, A1, BK; I, −I, 0; I, 0, −I].

Then,

H = WᵀĀ + ĀᵀW + diag{Q, −(1 − d)Q, 0},
Γ2ᵀ = Wᵀ[0; I; 0],  Γ3ᵀ = Wᵀ[0; 0; I].

Now, consider the case in which M11 = ε1P, M12 = ε2P, M21 = β1Q, M22 = β2P, β1 ≠ 0, and β2 ≠ 0. In this case, W is invertible; and

W⁻¹ = [P⁻¹, 0, 0;
       −ε1β1⁻¹Q⁻¹, β1⁻¹Q⁻¹, 0;
       −ε2β2⁻¹P⁻¹, 0, β2⁻¹P⁻¹].   (23)
Let T = diag{W⁻¹, I, I, R1⁻¹, R2⁻¹}. Then,

TᵀΦT =
[ HT   h̄1W⁻ᵀΓ1ᵀ   h̄2W⁻ᵀΓ1ᵀ   h̄1Λ1ᵀ    h̄2Λ2ᵀ ]
[ ∗    −h̄1R1⁻¹       0          0         0   ]
[ ∗       ∗       −h̄2R2⁻¹       0         0   ]
[ ∗       ∗          ∗       −h̄1R1⁻¹      0   ]
[ ∗       ∗          ∗          ∗      −h̄2R2⁻¹],   (24)

where

HT = ĀW⁻¹ + W⁻ᵀĀᵀ + W⁻ᵀdiag{Q, −(1 − d)Q, 0}W⁻¹,
Λ1 = [0, R1⁻¹, 0],  Λ2 = [0, 0, R2⁻¹].

After substituting (14) and (23) into (24), setting P̄ = P⁻¹, R̄1 = R1⁻¹, R̄2 = R2⁻¹, Q̄ = Q⁻¹, and Y = KP⁻¹, and performing some simple algebraic manipulations, we find that if LMI (15) holds, the Schur complement ensures that TᵀΦT < 0, and thus Φ < 0. So, the resulting closed-loop system (13) is asymptotically stable, and the desired controller is defined by (4) with K = Y P̄⁻¹. This completes the proof.
Free matrices are often introduced in the derivation of
delay-dependent stabilization criteria for a system with a
state delay. Since they are free, they should not be subject to
any constraints. However, they cannot ultimately be elimi-
nated from the conditions in existing criteria; and as a result,
they are in fact subject to constraints. In contrast, the con-
dition in Theorem 4 contains no free matrices at all. This
is the main reason why it produces less conservative results
than existing methods. To illustrate this point, we compare
Theorem 4 with the descriptor model transformation method
in Fridman and Shaked (2002, 2003) and Gao and Wang
(2003).
Consider the system

ẋ(t) = Ax(t) + A1x(t − h) + B1u(t)   (25)

in Fridman and Shaked (2002, 2003) and Gao and Wang (2003), which is a special case of (2) (B2 = 0). In order to derive a stabilization condition for the state feedback u(t) = Kx(t), the descriptor model transformation method introduces the following zero term into the derivative of the Lyapunov–Krasovskii functional:

0 = 2[xᵀ(t)P2ᵀ + ẋᵀ(t)P3ᵀ] × [−ẋ(t) + (A + B1K + A1)x(t) − A1 ∫_{t−h}^{t} ẋ(s) ds],   (26)
where P2 and P3 are free matrices. Moon et al.'s inequality is applied to bound the cross term and to derive some stabilization conditions. Since P2 and P3 are free, they should not appear in the bounding term, and should not be subject to any constraints. However, the free matrices in (19a) of Fridman and Shaked (2002) and (36) of Fridman and Shaked (2003) are restricted to Q2 + Q2ᵀ < 0 and Q3 + Q3ᵀ > 0, where Q2 = −P3⁻¹P2P1⁻¹ and Q3 = P3⁻¹ with P1 > 0. Similar conditions are also imposed in Gao and Wang (2003). This is the main reason for the conservatism of the descriptor model transformation method. On the other hand, for (25), Eq. (26) is equivalent to

0 = 2[xᵀ(t)P2ᵀ + {(A + B1K)x(t) + A1x(t − h)}ᵀP3ᵀ] × [A1x(t) − A1x(t − h) − A1 ∫_{t−h}^{t} ẋ(s) ds]
  = 2[xᵀ(t)N1ᵀ + xᵀ(t − h)N2ᵀ] × [x(t) − x(t − h) − ∫_{t−h}^{t} ẋ(s) ds],

where N1ᵀ = [P2ᵀ + (A + B1K)ᵀP3ᵀ]A1 and N2ᵀ = A1ᵀP3ᵀA1. When the cross term is bounded using Lemma 2, as stated in Remark 1, neither of the free matrices, P2 and P3, in N1 and N2 appears in the result. So, in general, the integral-inequality method produces less conservative results than the descriptor model transformation method.
Theorem 4 employs four tuning parameters: εi and βi (i = 1, 2). One way to adjust them is as follows. First, a consideration of (15) yields β2 > 0, β1 > −(1 − d)/2, and β1 ≠ 0. From (21), we know that the choice of εi < 0 (i = 1, 2) increases the degree of stability of H11 defined in (21). So, the tuning parameters are chosen under the following condition:

ε1 < 0,  ε2 < 0,  −β1 < (1 − d)/2,  β1 ≠ 0  and  β2 > 0,   (27)

and we define x = [ε1, ε2, β1, β2]ᵀ. Next, we choose the cost function to be f(x) = tmin, for which Ξ ≤ tminI, where Ξ is defined in (15). The scalar parameter tmin, which is a function of x, is obtained by solving the feasibility problem with the solver feasp in the LMI Toolbox (The MathWorks (1995, Version 1.0.8)). It is positive when there exists no feasible solution to the set of LMIs under consideration. Finally, applying a numerical optimization algorithm, such as fmincon in the Optimization Toolbox (The MathWorks (2004, Version 3)), to f(x) under the constraint (27) yields a locally convergent solution to the problem. If the resulting minimum value of the cost function is negative, then the tuning parameters that solve the problem are found. This method is summarized in the following algorithm.
Algorithm (Maximizing h̄1 > 0 for a fixed h̄2 > 0):
Step 1: Set a step length, hstep, for h̄1. Choose an upper bound, ub, and a lower bound, lb, on x satisfying (27). Select the initial values x0 for x and h10 for h̄1 (where h10 is sufficiently small). In addition, our experience shows that choosing x0 = [−1, −1, 1, 1]ᵀ works in a large number of cases. Solve the following problem:

min_x f(x),  subject to (27)   (28)

using the function fmincon with x0, h10, ub, and lb; and obtain a new value for the parameter vector x. If f(x) < 0, go to Step 2; otherwise, stop.
Step 2: Let x0 = x and h10 = h10 + hstep; and solve problem (28) again using the function fmincon with the new x0, h10, ub, and lb.
Step 3: If f(x) < 0, go to Step 2; otherwise, stop.
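The control flow of Steps 1–3 can be sketched in Python (an illustration added here, not part of the paper). The actual method evaluates f(x) = tmin with the LMI solver feasp and minimizes it with fmincon; in this sketch, those are replaced by a hypothetical toy surrogate `f_toy` and a crude deterministic compass search `local_search`, purely to show the outer line search on h̄1.

```python
import numpy as np

def local_search(f, x, iters=60, step=0.5):
    """Crude deterministic stand-in for MATLAB's fmincon (compass search)."""
    x = np.asarray(x, dtype=float)
    for _ in range(iters):
        improved = False
        for i in range(x.size):
            for d in (step, -step):
                cand = x.copy()
                cand[i] += d
                if f(cand) < f(x):
                    x, improved = cand, True
        if not improved:
            step *= 0.5        # refine the mesh when no move helps
    return x

def maximize_h1(f, x0, h_step=0.1, h10=0.2, max_steps=50):
    """Steps 1-3 of the algorithm: grow h1bar by h_step as long as the tuned
    cost f(x, h1bar) can still be driven negative (i.e. the LMI stays feasible)."""
    x, best_h1 = np.asarray(x0, dtype=float), None
    for _ in range(max_steps):
        x = local_search(lambda z: f(z, h10), x)   # solve problem (28)
        if f(x, h10) >= 0:                         # infeasible: stop
            break
        best_h1, h10 = h10, h10 + h_step           # Step 2: accept, enlarge h1bar
    return best_h1, x

# Hypothetical surrogate for t_min: negative only while h1bar < 0.75 and the
# tuning vector x is near [-1, -1, 1, 1] (the x0 recommended in Step 1).
def f_toy(x, h1bar):
    return float(np.sum((x - np.array([-1.0, -1.0, 1.0, 1.0])) ** 2) + h1bar - 0.75)

h1_max, x_opt = maximize_h1(f_toy, x0=[-0.5, -0.5, 0.5, 0.5])
assert h1_max is not None and abs(h1_max - 0.7) < 1e-6
```

With this surrogate, the search tunes x onto the feasible point and then advances h̄1 in steps of 0.1 until feasibility is lost just below 0.75, mirroring the stop condition of Step 3.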
Remark 3. For the above algorithm, a smaller step length for h̄1 results directly in an h̄1 with a higher accuracy; but the price we pay is an increase in computation time. To keep the computation time down, we can obtain a suitable h̄1 with a higher accuracy in two steps: First, choose a relatively large step length, e.g., hstep = 0.1, to solve (28) using the above algorithm and obtain an h̄1 with a low accuracy and the corresponding parameters x = [ε1, ε2, β1, β2]ᵀ. Then, use these parameters to solve (15), and thus obtain an h̄1 with a higher accuracy.
Remark 4. The criterion in Theorem 4 does not require any assumptions about the system matrices; e.g., the pairs (A, B1) and (A + A1, B1) need not be stabilizable. So, systems designed based on this criterion that have both state and input delays can be stabilized, even when neither (A, B1) nor (A + A1, B1) is stabilizable.
Remark 5. Theorem 4 employs the integral inequality (12). Employing (6) and (7) to bound (18) yields a more general result, but it makes the condition more complicated. On the other hand, when the symbol "≥" in (7) is replaced by ">", it can easily be shown that the result obtained by using (6) and (7) to bound (18) is the same as that obtained by using (12) in combination with the variable elimination technique (Gu, 2001).
4. Numerical examples
This section presents numerical examples that demon-
strate the validity of the method described above.
Example 5. Consider the following system:

ẋ(t) = Ax(t) + A1x(t − h1(t)) + B1u(t) + B2u(t − h2(t)),   (29)

where

A = [0, 0, 0, 0;
     0, 0.5, 0, 0;
     −0.5, 0, 0.3, 0;
     0, 0, 0, 1],

A1 = [−2, −0.5, 0, 0;
      −0.2, −1, 0, 0;
      0.5, 0, −2, −0.5;
      0, 0, 0, −1],

B1 = [1, 1, 1, 0]ᵀ,  B2 = [0, 1, 1, 1]ᵀ,
Table 1
Upper bound, h̄, and corresponding state feedback control law, K, for system (30)

Method                       h̄       K
Fridman and Shaked (2003)    1.408   Not provided
Fridman and Shaked (2002)    1.510   [−58.31  −294.9]
Gao and Wang (2003)          3.200   [−7.964  −14.77]
Theorem 4                    6.000   [−70.18  −77.67]
and there are two constant delays satisfying 0 ≤ hi ≤ h̄i, i = 1, 2.
It is clear that neither (A, B1) nor (A + A1, B1) of (29) is stabilizable. In spite of that, applying Theorem 4 yields a memoryless state feedback control law, u(t) = Kx(t), that stabilizes the system (29). The algorithm in Section 3 was used to find a maximum h̄1 for h̄2 = 0.1. Taking the initial values of the parameters to be x0 = [ε1, ε2, β1, β2]ᵀ = [−1, −1, 1, 1]ᵀ and h10 = 0.2, setting the step length to hstep = 0.1, and choosing the upper and lower bounds on x to be ub = [−0.01, −0.01, 5, 5]ᵀ and lb = [−4, −4, 0.1, 0.1]ᵀ, respectively, yielded a locally optimal combination: ε1 = −1.8953, ε2 = −1.4451, β1 = 2.7388 and β2 = 1.3654, which gave the maximum value h̄1 = 0.56. The corresponding control law was K = [0.0129, −0.0031, −0.0009, −0.3181]. However, no delay-independent state feedback control law can be found by using the methods in Choi and Chung (1995), Kim et al. (1996), and Han and Mehdi (1998). That is, their methods are inapplicable to this example.
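The non-stabilizability claim can be verified numerically with the PBH test (this check is an illustration added here, not part of the paper): rank [A − λI, B1] drops below n = 4 at the unstable eigenvalue λ = 1 of A, and rank [A + A1 − λI, B1] drops at λ = 0.

```python
import numpy as np

A = np.array([[ 0.0, 0.0, 0.0, 0.0],
              [ 0.0, 0.5, 0.0, 0.0],
              [-0.5, 0.0, 0.3, 0.0],
              [ 0.0, 0.0, 0.0, 1.0]])
A1 = np.array([[-2.0, -0.5,  0.0,  0.0],
               [-0.2, -1.0,  0.0,  0.0],
               [ 0.5,  0.0, -2.0, -0.5],
               [ 0.0,  0.0,  0.0, -1.0]])
B1 = np.array([[1.0], [1.0], [1.0], [0.0]])

def is_stabilizable(A, B, tol=1e-9):
    """PBH test: (A, B) is stabilizable iff rank [A - lam*I, B] = n
    for every eigenvalue lam of A with Re(lam) >= 0."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        if lam.real >= -tol:
            M = np.hstack([A - lam * np.eye(n), B.astype(complex)])
            if np.linalg.matrix_rank(M, tol=1e-7) < n:
                return False
    return True

assert not is_stabilizable(A, B1)        # fails at lam = 1: 4th state decoupled from B1
assert not is_stabilizable(A + A1, B1)   # fails at lam = 0: last row of [A+A1, B1] is zero
```

The failing rows are visible by inspection: the fourth row of A − I is zero and the fourth entry of B1 is zero, so the unstable mode at λ = 1 is uncontrollable through B1; the same happens for A + A1 at λ = 0.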
Example 6. Consider the following system:

ẋ(t) = Ax(t) + A1x(t − h) + Bu(t),   (30)

where

A = [0, 0; 0, 1],  A1 = [−1, −1; 0, −0.9],  B = [0; 1],

and there is a constant delay, h, satisfying 0 ≤ h ≤ h̄.
Equation (30) contains only a state delay. Fridman and Shaked (2002, 2003) and Gao and Wang (2003) calculated the upper bound h̄ for which a state feedback control law, K, exists to stabilize (30). Their results are listed in Table 1 along with the results obtained by Theorem 4 for ε1 = −0.11 and β1 = 0.01. Clearly, our method produces much less conservative results, thus demonstrating its validity.
This example shows that Theorem 4, which employs an
integral inequality, produces much less conservative results
than the descriptor model transformation method in Fridman
and Shaked (2002, 2003) and Gao and Wang (2003).
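As an added sanity check (not in the paper), note that in the limit h → 0 the closed loop for (30) becomes ẋ(t) = (A + A1 + BK)x(t), so every gain listed in Table 1 must at least make A + A1 + BK Hurwitz:

```python
import numpy as np

A  = np.array([[0.0, 0.0], [0.0, 1.0]])
A1 = np.array([[-1.0, -1.0], [0.0, -0.9]])
B  = np.array([[0.0], [1.0]])

gains = {
    "Fridman and Shaked (2002)": np.array([[-58.31, -294.9]]),
    "Gao and Wang (2003)":       np.array([[-7.964, -14.77]]),
    "Theorem 4":                 np.array([[-70.18, -77.67]]),
}

# Hurwitz-ness of A + A1 + B K is only a necessary condition (it ignores the
# delay), but every Table 1 entry must pass it.
for name, K in gains.items():
    eigs = np.linalg.eigvals(A + A1 + B @ K)
    assert np.all(eigs.real < 0), name
```

This is of course far weaker than the delay-dependent guarantee of Theorem 4; it merely confirms the tabulated gains are consistent in the delay-free limit.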
5. Conclusion
This paper has presented a state- and input-delay-dependent stabilization criterion for a system with both state and input delays that employs a memoryless state feedback control law. The stabilizing control law is obtained by using the Lyapunov–Krasovskii functional approach combined with an integral inequality. The criterion thus obtained does not require any additional assumptions about the system matrices, for example, that the pairs (A, B1) and (A + A1, B1) be stabilizable. So, the designed control law for a system with both state and input delays is effective, even when neither (A, B1) nor (A + A1, B1) is stabilizable. Numerical examples illustrate the design procedure and show that the criterion is less conservative than existing ones. Moreover, the proposed method can easily be applied to a delay system with uncertainties to yield a delay-dependent robust stabilization condition.
Acknowledgements
The authors would like to thank the Associate Editor and
the anonymous reviewers for their constructive comments
and suggestions to improve the quality of the paper.
References
Artstein, Z. (1982). Linear systems with delayed control: A reduction. IEEE Transactions on Automatic Control, 27, 869–879.
Barmish, B. R. (1985). Necessary and sufficient conditions for quadratic stabilizability of an uncertain system. Journal of Optimization Theory and Applications, 46(4), 399–408.
Bernussou, J., Peres, P. L. D., & Geromel, J. C. (1989). A linear programming oriented procedure for quadratic stabilization of uncertain systems. Systems & Control Letters, 13(1), 65–72.
Choi, H. H., & Chung, M. J. (1995). Memoryless stabilization of uncertain dynamic systems with time-varying delayed state and control. Automatica, 31, 1349–1351.
Fiagbedzi, Y. A., & Pearson, A. E. (1986). Feedback stabilization of linear autonomous time lag systems. IEEE Transactions on Automatic Control, AC-31(9), 847–855.
Fiagbedzi, Y. A., & Pearson, A. E. (1987). A multistage reduction technique for feedback stabilizing distributed time-lag systems. Automatica, 23(3), 311–326.
Fridman, E., & Shaked, U. (2002). An improved stabilization method for linear time-delay systems. IEEE Transactions on Automatic Control, 47(11), 1931–1937.
Fridman, E., & Shaked, U. (2003). Delay-dependent stability and H∞ control: Constant and time-varying delays. International Journal of Control, 76, 48–60.
Gao, H. J., & Wang, C. H. (2003). Comments and further results on "A descriptor system approach to H∞ control of linear time-delay systems". IEEE Transactions on Automatic Control, 48(3), 520–525.
Gu, K. (2000). An integral inequality in the stability problem of time-delay systems. In Proceedings of the 39th IEEE conference on decision and control (pp. 2805–2810).
Gu, K. (2001). A further refinement of discretized Lyapunov functional method for the stability of time-delay systems. International Journal of Control, 74(10), 967–976.
Gu, K., Kharitonov, V. L., & Chen, J. (2003). Stability of time-delay systems. Boston: Birkhäuser.
Han, Q. L., & Mehdi, D. (1998). Comments on "Robust control for parameter uncertain delay systems in state and control input". Automatica, 34, 1665–1666.
Han, Q. L. (2002). Robust stability of uncertain delay-differential systems of neutral type. Automatica, 38, 719–723.
Kim, J. H., Jeung, E. T., & Park, H. B. (1996). Robust control for parameter uncertain delay systems in state and control input. Automatica, 32, 1337–1339.
Kolmanovskii, V. B., & Nosov, V. R. (1986). Stability of functional differential equations. London: Academic Press.
Kuang, Y. (1993). Delay differential equations with applications in population dynamics. Boston: Academic Press.
Kwon, W. H., & Pearson, A. E. (1980). Feedback stabilization of linear systems with delayed control. IEEE Transactions on Automatic Control, 25, 266–269.
Moon, Y. S., Park, P. G., Kwon, W. H., & Lee, Y. S. (2001). Delay-dependent robust stabilization of uncertain state-delayed systems. International Journal of Control, 74(14), 1447–1455.
Niculescu, S. (2001). Delay effects on stability: A robust control approach. London: Springer.
Richard, J. (2003). Time-delay systems: An overview of some recent advances and open problems. Automatica, 39, 1667–1694.
Xie, L. (1996). Output feedback H∞ control of systems with parameter uncertainty. International Journal of Control, 63(4), 741–750.
The MathWorks. (1995). LMI Control Toolbox user's guide. Version 1. Natick: The MathWorks, Inc.
The MathWorks. (2004). Optimization Toolbox user's guide. Version 3. Natick: The MathWorks, Inc.
Xian-Ming Zhang was born in 1968. He received the M.S. degree in applied mathematics from Central South University, Changsha, China in 1991, and is currently working toward the Ph.D. degree in control theory and engineering at Central South University. His current research interests are time-delay systems, robust control, and its applications.

Min Wu was born in 1963. He received the B.S. and M.S. degrees in engineering from Central South University, Changsha, China in 1983 and 1986, respectively. He received the Ph.D. degree in engineering from Tokyo Institute of Technology, Tokyo, Japan in 1999. He is now a professor at Central South University. He received the control engineering practice paper prize of IFAC in 1999 (jointly with M. Nakano and J.-H. She). His current research interests are process control, robust control, and intelligent systems.

Jin-Hua She was born in 1963. He received a B.S. in engineering from Central South University, Changsha, China, in 1983, and an M.S. in 1990 and a Ph.D. in 1993 in engineering from the Tokyo Institute of Technology, Tokyo, Japan. In 1993, he joined the Department of Mechatronics, School of Engineering, Tokyo University of Technology; and in April 2004, he was transferred to the University's School of Bionics, where he is currently an associate professor. He received the control engineering practice paper prize of IFAC in 1999 (jointly with M. Wu and M. Nakano). His current research interests include the application of control theory, repetitive control, expert control, Internet-based engineering education, and robotics. He is a member of the Society of Instrument and Control Engineers (SICE), the Institute of Electrical Engineers of Japan (IEEJ), and the IEEE.
Yong He received his B.S. and M.S. in applied mathematics from Central South University, Changsha, China in 1991 and 1994, respectively. In July 1994, he joined the staff of the university, where he is currently an associate professor. He received his Ph.D. degree in control theory and engineering from Central South University, Changsha, China in 2004. His current research interests are time-delay systems, robust control and its applications, networked control systems, PID control, and neural networks.