Cheapest open-loop identification for control
X. Bombois (1), G. Scorletti (2), M. Gevers (3), R. Hildebrand (4), P. Van den Hof (1)
(1) Delft Center for Systems and Control, Delft University of Technology, The Netherlands
(2) GREYC, Equipe AUTO, ISMRA, Caen, France
(3) CESAME, Université Catholique de Louvain, Belgium
(4) LMC, Université Joseph Fourier, Grenoble, France
Abstract— This paper presents a new method of identification experiment design for control. Our objective is to design the open-loop identification experiment with minimal excitation such that the controller designed with the identified model stabilizes and achieves a prescribed level of H∞ performance with the unknown true system G0.
I. INTRODUCTION
A controller for a real-life system G0 is usually designed on the basis of a model Ĝ of G0 identified using data collected from the true system. When designing the identification experiment, the control engineer often has to make a trade-off between her/his desire of obtaining an accurate model and the economical constraint of keeping the experimental costs low. Obtaining an accurate model requires a long identification experiment and a powerful input signal, while keeping the experimental costs low corresponds to a short experiment time and the excitation of G0 with a low power signal.
The typical approach to this problem has been to maximize the accuracy of the identified model (possibly with a given, say, control-oriented objective in mind) for a given experiment time and under prespecified constraints on input power (see e.g. [10], [9], [7] and references therein). In this paper, we address this trade-off from the dual perspective; namely, we seek the least costly identification experiment leading to a required model accuracy, with a control-oriented objective in mind. More precisely, we assume that the experiment time is fixed, and we then define the least costly identification experiment for control as the experiment on G0 whose input signal power Pu is minimized under the constraint that the controller Ĉ designed from the identified model Ĝ is guaranteed to stabilize and to achieve sufficient performance with the unknown true G0. In this paper, the desired performance on G0 is expressed by magnitude bounds on one (or several) closed-loop transfer functions of the loop [Ĉ G0] (H∞ performance constraints).
This experiment design problem is solved in the following context. We assume that the identification experiment is performed in open loop using Prediction Error identification, with a model structure G(z,θ) to which the true G0 belongs [10]. This yields a model Ĝ = G(z,θ̂N) and an uncertainty region, centered on Ĝ, containing the true G0 at a user-chosen probability level. We use an additive description of this uncertainty region, whose size ru is a function of the input signal. In order to highlight this dependence, the uncertainty set is denoted Dru(θ̂N); it can be estimated from the data. Finally, the controller Ĉ to be applied to the true system is designed from Ĝ using a pre-defined H∞ control design method with fixed weights.

(This paper presents research results of the Belgian Programme on Interuniversity Attraction Poles, initiated by the Belgian Federal Science Policy Office. The scientific responsibility rests with its author(s).)
We propose the following two-step methodology to solve the experiment design problem with the cheapest experimental cost. In a first step, we determine the size radm(ω) of the largest additive uncertainty region that we can a priori tolerate around the to-be-identified model Ĝ for the controller Ĉ = C(Ĝ) to achieve the required H∞ performance level with all systems in this uncertainty region. In a second (identification design) step, we then deduce the least powerful quasi-stationary input signal u(t) such that the size ru(ω) of the identified uncertainty region Dru(θ̂N) is at each frequency smaller than the largest admissible uncertainty radius radm(ω). The solution of the second step is based on results of [12] which show that, for each quasi-stationary input signal u(t), one can define a finite-sized vector xu of moments of the input power spectrum Φu(ω), weighted with a special weight depending on the true system G0, with the property that both the inverse Pθ^-1 of the covariance matrix of the parameter vector identified with u(t) and the power Pu of u(t) are affine functions of xu. We show that the optimization of the power Pu of u(t) under the constraint ru(ω) ≤ radm(ω) ∀ω can therefore be reduced to a tractable LMI (Linear Matrix Inequality) optimization problem on the finite-sized vector of moments xu. A quasi-stationary input signal u(t) can easily be constructed from the optimal moment vector xu.
II. PREDICTION ERROR IDENTIFICATION ASPECTS

We consider the identification of a linear time-invariant single-input single-output system with a model structure M = {G(z,θ), H(z,θ)}, θ ∈ R^k, that is able to represent the true system. Thus, the true system is given by:
y(t) = G(z,θ0) u(t) + H(z,θ0) e(t)    (1)

for some unknown parameter vector θ0 ∈ R^k, and with e(t) a white noise of variance σe^2.
A model Ĝ = G(z,θ̂N), Ĥ = H(z,θ̂N) of the true system is obtained from N input-output data y(t) and u(t) (t = 1...N), using a Prediction Error criterion:

θ̂N ≜ arg min_θ (1/N) Σ_{t=1}^{N} ε(t,θ)^2  with  ε(t,θ) ≜ H(z,θ)^-1 ( y(t) − G(z,θ) u(t) ).
The cost of the identification experiment is determined by the total power Pu of the chosen input signal u(t):

Pu = (1/2π) ∫_{−π}^{π} Φu(ω) dω    (2)

where Φu(ω) is the power spectrum of the input signal u(t), assumed to be quasi-stationary. It is this power Pu that we shall seek to minimize.
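As a quick numerical illustration of (2) (ours, with made-up amplitudes): for a multisine with frequencies strictly inside (0, π), the integral of the line spectrum reduces to the DC power plus half the squared amplitudes, which can be checked against a long time average.

```python
import numpy as np

# Sketch (illustrative values, not from the paper): the total power Pu of
# a quasi-stationary multisine u(t) = A0 + sum_i Ai*cos(wi*t), 0 < wi < pi,
# is Pu = A0^2 + sum_i Ai^2/2, i.e. the integral (2) of its line spectrum.
A0, amps, freqs = 1.0, [0.8, 0.3], [0.5, 1.2]

t = np.arange(200_000)
u = A0 + sum(A * np.cos(w * t) for A, w in zip(amps, freqs))

P_time = np.mean(u**2)                        # time-average estimate of Pu
P_spec = A0**2 + sum(A**2 / 2 for A in amps)  # spectral-line evaluation of (2)
assert abs(P_time - P_spec) < 1e-2
```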
The identified parameter vector θ̂N is asymptotically normally distributed, θ̂N ∼ N(θ0, Pθ), and, given the full-order model structure assumption, the covariance matrix Pθ has the following expression [10]: Pθ = (σe^2/N) [ Ē ψ(t,θ0) ψ(t,θ0)^T ]^-1 with ψ(t,θ) = −∂ε(t,θ)/∂θ. The dependence of the covariance matrix Pθ on the power spectrum of the selected input signal u(t) is evidenced by the following expression of the inverse of Pθ [10]:

Pθ^-1 = (N/σe^2) (1/2π) ∫_{−π}^{π} Fu(e^jω,θ0) Fu(e^jω,θ0)* Φu(ω) dω + N (1/2π) ∫_{−π}^{π} Fe(e^jω,θ0) Fe(e^jω,θ0)* dω    (3)

Here, Fu(z,θ0) = ΛG(z,θ0)/H(z,θ0), Fe(z,θ0) = ΛH(z,θ0)/H(z,θ0), ΛG(z,θ) = ∂G(z,θ)/∂θ and ΛH(z,θ) = ∂H(z,θ)/∂θ.
By factoring Fu(z,θ0) as NFu(z,θ0)/dFu(z,θ0), where dFu(z,θ0) is the least common polynomial denominator, one can decompose Fu(z,θ0) Fu(z,θ0)* as follows:

Fu(z,θ0) Fu(z,θ0)* = NFu(z,θ0) NFu(z,θ0)* / |dFu(z,θ0)|^2 = (1/|dFu(z,θ0)|^2) Σ_{i=−n}^{n} M̃i(θ0) z^i    (4)

where the matrices M̃i(θ0) ∈ R^(k×k) (i = 0...n) satisfy M̃i(θ0) = M̃−i(θ0)^T. We now introduce the moment vector xu(θ0) ∈ R^(n+1) of the input signal with respect to the true system.
Definition 2.1: Consider the true system (1), the transfer vector Fu(z,θ0) defined below (3), the degree n of the decomposition (4), and an input signal u(t) with power spectrum Φu(ω). Then, the moment vector xu(θ0) ≜ (x0(θ0) x1(θ0) ... xn(θ0))^T of u(t) is a vector in R^(n+1) whose elements are defined as:

xi(θ0) = (1/2π) ∫_{−π}^{π} [ Φu(ω) / |dFu(e^jω,θ0)|^2 ] cos(iω) dω   (i = 0...n)    (5)
One can write a compact expression of both Pθ^-1 and Pu as an affine function of the elements of xu(θ0).

Proposition 2.1: [12] Consider an identification experiment performed on (1) using a quasi-stationary input signal u(t). Then the inverse of the covariance matrix Pθ ∈ R^(k×k) of the estimated parameter vector can be written as:

Pθ^-1 = M̄(θ0) + Σ_{i=0}^{n} Mi(θ0,σe^2) xi(θ0)    (6)

where M̄(θ0) = N (1/2π) ∫_{−π}^{π} Fe(e^jω,θ0) Fe(e^jω,θ0)* dω, and the matrices Mi(θ0,σe^2) ∈ R^(k×k) (i = 0...n) are defined using (4) as M0(θ0,σe^2) = (N/σe^2) M̃0(θ0) and Mi(θ0,σe^2) = (N/σe^2) ( M̃i(θ0) + M̃i(θ0)^T ) (i = 1...n).
Proposition 2.2: [12] Consider an input signal u(t) applied to the true system (1) with power spectrum Φu(ω). Then the total power Pu of u(t) is a linear function of the elements xi(θ0) of xu(θ0):

Pu = Σ_{i=0}^{n} ci(θ0) xi(θ0),    (7)

where the coefficients ci(θ0) are defined from the polynomial dFu(e^jω,θ0) as follows:

|dFu(e^jω,θ0)|^2 = c0(θ0) + Σ_{i=1}^{n} ci(θ0) cos(iω)    (8)
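The identity (7)-(8) can be checked numerically in the same illustrative setting (hypothetical dFu(z) = 1 − a z^-1 and a white input, none of it from the paper), where the power (2) is known to equal 1:

```python
import numpy as np

# Sketch continuing the illustrative setup: dFu(z) = 1 - a*z^-1 gives
# |dFu(e^jw)|^2 = (1 + a^2) - 2a*cos(w), i.e. c0 = 1 + a^2, c1 = -2a
# in (8). For a white input Phi_u(w) = 1 the power (2) is Pu = 1, and
# the affine expression (7) must recover it from the moments (5).
a = 0.5
c = np.array([1 + a**2, -2 * a])                 # coefficients in (8)
w = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
dFu_sq = (1 + a**2) - 2 * a * np.cos(w)

x = np.array([np.mean(np.cos(i * w) / dFu_sq) for i in range(2)])  # (5)
Pu = c @ x                                       # linear expression (7)
assert abs(Pu - 1.0) < 1e-8
```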
Notice that the moment vector xu(θ0) and the parametrizations of Pu and Pθ with respect to this moment vector are functions of the unknown true system (via θ0 and σe^2). Note also that, even though Pu itself is not a function of the true system, the parametrization of Pu in (7) is a function of the true system due to the dependence of xu(θ0) and of the coefficients ci(θ0) on the true system.
Using the asymptotic Gaussian distribution of the estimated parameter vector θ̂N, it is possible to define an (additive) uncertainty region Dru(θ̂N) around the identified model, containing the unknown true system G0(z) at any self-chosen probability level:

Dru(θ̂N) = { G(z) ∈ H∞ | |G(e^jω) − G(e^jω,θ̂N)| < ru(ω) ∀ω }    (9)

Consider the following first order approximation of G(z,θ0): G(z,θ0) ≈ G(z,θ̂N) + ΛG(z,θ0)^T (θ0 − θ̂N), with ΛG(z,θ) as defined below (3). Using this approximation, the size ru(ω) of Dru(θ̂N) can then be written as:
ru(ω) = √( χ λ1( T(e^jω,θ0) Pθ T(e^jω,θ0)^T ) )    (10)

where χ is a real constant dependent on the chosen probability level, T(e^jω,θ0) ≜ [ Re(ΛG(e^jω,θ0)^T) ; Im(ΛG(e^jω,θ0)^T) ] ∈ R^(2×k), and λ1(A) denotes the largest eigenvalue of A. The size ru(ω) of the uncertainty region containing G0 at a given probability level is a function of the covariance matrix Pθ and thus, by (3), a function of the input signal u(t) used during the identification experiment. Typically, the larger the power Pu of u(t), the smaller ru(ω). Note also that ru(ω) depends on θ0.
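A sketch of evaluating (10) at a single frequency, with made-up values for Pθ and for the gradient ΛG (both hypothetical, chosen only to exercise the formula):

```python
import numpy as np

# Sketch with made-up numbers: evaluating (10) at one frequency.
chi = 5.99                                    # illustrative probability constant
P_theta = np.array([[2e-3, 5e-4],
                    [5e-4, 1e-3]])            # illustrative covariance (k = 2)
LambdaG = np.array([0.8 - 0.3j, 0.1 + 0.5j])  # hypothetical gradient Lambda_G

T = np.vstack([LambdaG.real, LambdaG.imag])   # T(e^jw) in R^{2 x k}
r_u = np.sqrt(chi * np.linalg.eigvalsh(T @ P_theta @ T.T).max())
assert r_u > 0
```

As the code makes visible, ru scales with the square root of Pθ: quadrupling the covariance doubles the uncertainty radius.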
III. CONTROL DESIGN OBJECTIVES AND METHOD

As stated before, our aim is to design a “satisfactory” controller Ĉ(z) for the unknown true system G0 using an identified model Ĝ = G(z,θ̂N) of G0. A satisfactory controller must stabilize and achieve sufficient performance with G0. In this section, we define the concept of sufficient performance, as well as the control design method we use to design Ĉ from the identified model. We adopt the following performance measure for a loop [C G]:

J(G, C, Wl, Wr) = || Wl F(G,C) Wr ||∞  with  F(G,C) = [ 1/(1+GC)   G/(1+GC) ; C/(1+GC)   GC/(1+GC) ]    (11)
where Wl(z) and Wr(z) are given diagonal performance filters. This performance measure is quite general: J(G, C, Wl, Wr) ≤ 1 ensures that the four entries of Wl(z) F(G,C) Wr(z) have an H∞ norm smaller than one. Simpler H∞ criteria can be chosen as special cases; e.g. with Wl(z) = diag(W(z), 0) and Wr = diag(1, 0), J(G, C, Wl, Wr) ≤ 1 corresponds to ||W/(1+CG)||∞ ≤ 1.
We build performance filters Wl(z) and Wr(z) that reflect the performance specifications we want to achieve with the true system. Thus, the controller Ĉ will be deemed satisfactory if J(G0, Ĉ, Wl, Wr) ≤ 1. Here, as already mentioned, the controller Ĉ will be designed from the identified model Ĝ = G(z,θ̂N). In order to define the control design method leading to Ĉ = C(G(z,θ̂N)), we make the following assumption.
Assumption 3.1: We have a priori defined a set Θ of parameter vectors which we assume to contain any parameter vector θ̂N that would result from an identification under reasonable experimental conditions. We assume also that we have pre-selected a fixed control design method which maps any model G(z,θ) for θ ∈ Θ to one controller C(G(z,θ)). For each θ ∈ Θ, C(G(z,θ)) stabilizes G(z,θ) and achieves with this model a performance level

J(G(z,θ), C(G(z,θ)), Wl(z), Wr(z)) ≤ γ < 1,    (12)

where γ is a fixed scalar, strictly smaller than 1.
One design strategy that satisfies Assumption 3.1 is to choose C(G(z,θ)) as the central controller of a four-block H∞ control design method with performance objective (12) (see, in this respect, the recent results in [4]). If Assumption 3.1 holds, then the controller Ĉ = C(G(z,θ̂N)) designed from an identified model Ĝ = G(z,θ̂N) will achieve J(Ĝ, Ĉ, Wl, Wr) ≤ γ < 1. When this controller Ĉ is applied to the true system G0, the achieved performance will in most cases be poorer than the designed performance. By choosing the design criterion (12) with γ < 1, we ensure, however, that there is a whole (additive) set of systems around the to-be-identified G(z,θ̂N), characterized by a size r̄adm(ω, G(z,θ̂N)) (see the next section), that are also stabilized by Ĉ and that achieve J(G(z), Ĉ(z), Wl(z), Wr(z)) ≤ 1. Before we address this input design problem, we need to properly define the largest set of systems that are stabilized and achieve the required performance with a model-based controller. We do this in the next section.
IV. THE LARGEST UNCERTAINTY RADIUS radm(ω)

Let us consider one model G(z,θ) for some θ ∈ Θ and the controller C(G(z,θ)) designed with G(z,θ) using the design rule mentioned in Assumption 3.1. For any positive function r(ω), we can define an additive uncertainty set around this model G(z,θ) (cf. (9)):

Dr(θ) = { G(z) ∈ H∞ | |G(e^jω) − G(e^jω,θ)| < r(ω) ∀ω }    (13)

Consider now the set R of frequency functions r such that, for all G(z) ∈ Dr(θ),
i) the loop [C(G(z,θ)) G(z)] is stable, and
ii) J(G(z), C(G(z,θ)), Wl(z), Wr(z)) ≤ 1.
We then define r̄adm(ω, G(z,θ)) at each ω as

r̄adm(ω, G(z,θ)) = sup_{r ∈ R} r(ω).    (14)
Given the model G(z,θ) and the controller C(G(z,θ)), this largest additive uncertainty radius r̄adm(ω, G(z,θ)) can be computed by a classical ν-analysis problem [5]. The quantity r̄adm(ω, G(z,θ)) obviously depends on the value of G(z,θ), the center of the uncertainty set. In order to design an identification experiment, we would need to know the largest admissible uncertainty radius around the to-be-identified model G(z,θ̂N), i.e. r̄adm(ω, G(z,θ̂N)). Since our aim, eventually, is to do an a priori design of an identification input signal u(t) such that the size ru(ω) of the estimated uncertainty set (9) is smaller than r̄adm(ω, G(z,θ̂N)) for all ω, we cannot let the size of the admissible uncertainty for control, r̄adm(ω, G(z,θ̂N)), depend upon a model that has not yet been estimated. One possibility to tackle this difficulty is to approximate the unknown r̄adm(ω, G(z,θ̂N)) by the largest admissible uncertainty radius r̄adm(ω, G(z,θinit)) around G(z,θinit), an available model of the considered system. However, such an approach could lead to poor results if there is a large discrepancy between r̄adm(ω, G(z,θ̂N)) and r̄adm(ω, G(z,θinit)). In this paper, we shall instead use the set Θ of Assumption 3.1 to determine a lower bound radm(ω) for r̄adm(ω, G(z,θ̂N)). This is summarized in the following result.
Proposition 4.1: Consider the control design method of Assumption 3.1, and the performance measure J(G, C, Wl, Wr) of (11) which, for each θ ∈ Θ, satisfies (12). Then, for any θ̄ ∈ Θ, the controller C(G(z,θ̄)) designed from G(z,θ̄) stabilizes and achieves J(G(z), C(G(z,θ̄)), Wl, Wr) ≤ 1 with all systems G(z) in the additive uncertainty region Dradm(θ̄) (see (13)) centered at G(z,θ̄) and of size radm(ω) defined as:

radm(ω) ≜ min_{θ ∈ Θ} r̄adm(ω, G(z,θ)),    (15)

with r̄adm(ω, G(z,θ)) as in (14).
Computation of radm(ω). One method for the computation of radm(ω) is to use a gridding technique: for each ω, radm(ω) is computed as the smallest value of r̄adm(ω, G(z,θ)) over randomly selected values of θ. An alternative and more accurate method can be used if the set Θ is an ellipsoid. In that case, radm(ω) can be computed as the solution of a ν-analysis problem. Indeed, this quantity can be defined equivalently as radm(ω) = sup_{r ∈ R2} r(ω), where R2 is the set of frequency functions r such that

∀θ ∈ Θ and ∀Δ(z) ∈ { Δ(z) ∈ H∞ | |Δ(e^jω)| < r(ω) }:
the loop [C(G(z,θ)) G(z,θ)+Δ(z)] is stable and    (16)
J(G(z,θ)+Δ(z), C(G(z,θ)), Wl, Wr) ≤ 1.

This is a ν-analysis problem since both C(G(z,θ)) and G(z,θ) can be expressed as Linear Fractional Transformations (LFT) of the variable θ ∈ Θ, with Θ an ellipsoid. This LFT description is indeed possible for a model G(z,θ) identified using PE identification, with a resulting controller C(G(z,θ)) obeying Assumption 3.1: see [1], [4].
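The gridding approximation of (15) can be sketched as follows. Here r_adm_bar is a placeholder stand-in for the ν-analysis computation of r̄adm(ω, G(z,θ)) (which this sketch does not implement); only the min-over-θ structure of (15) is illustrated:

```python
import numpy as np

# Sketch of the gridding approximation of (15). r_adm_bar below is a
# hypothetical stand-in (NOT the real nu-analysis value); only the
# min-over-theta structure of (15) is illustrated.
rng = np.random.default_rng(0)

def r_adm_bar(w, theta):
    # placeholder frequency function, decreasing in w and in ||theta||
    return 1.0 / (1.0 + np.linalg.norm(theta) + w)

theta_c = np.array([0.5, -0.2])              # center of the ellipsoid Theta
thetas = [theta_c + 0.1 * rng.standard_normal(2) for _ in range(50)]

w_grid = np.linspace(0.01, np.pi, 20)
r_adm = np.array([min(r_adm_bar(w, th) for th in thetas) for w in w_grid])
assert np.all(r_adm > 0)
```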
V. IDENTIFICATION FOR CONTROL AT THE CHEAPEST COST

We know from the analysis of Section II that the true G0 lies, with a probability level that we can select, in the set Dru(θ̂N) defined by (9), where ru(ω) depends on the input signal spectrum and is given by (10). On the other hand, we know by Proposition 4.1 that the controller C(G(z,θ̂N)) computed from an identified model G(z,θ̂N) is satisfactory for all models in the set Dradm(θ̂N). By satisfactory is meant that Ĉ = C(G(z,θ̂N)) stabilizes and achieves J(G(z), Ĉ(z), Wl(z), Wr(z)) ≤ 1 ∀G ∈ Dradm(θ̂N). Putting these two results together, we conclude that a controller C(G(z,θ̂N)) satisfies the stability and performance requirements with G0 (at the desired probability level) if Dru(θ̂N) ⊆ Dradm(θ̂N), or equivalently if the size ru(ω) of Dru(θ̂N) is at each frequency smaller than the size radm(ω) of Dradm(θ̂N). We now seek the input signal with the cheapest cost that achieves this objective. Our cheapest experiment design problem for control can thus be re-formulated as follows:
Cheapest experiment design problem for control: Determine the stationary input signal u(t) for an identification experiment performed on G0 with N data in such a way that the total power Pu of u(t) is minimized, under the constraint that the size ru(ω) of the identified uncertainty region Dru(θ̂N) is at each frequency smaller than radm(ω).
Roughly speaking, the size ru(ω) of the identified uncertainty region Dru(θ̂N) increases when the input signal power Pu decreases. The cheapest identification experiment for control is thus the one for which the size ru(ω) of the identified uncertainty region Dru(θ̂N) is as large as possible under the constraint that ru(ω) ≤ radm(ω) ∀ω. This experiment design problem does not have a unique solution. Indeed, if u0(t) is one solution, then all signals having the same moment vector xu(θ0) as u0(t) also solve the optimization problem. Our approach will be to solve for the optimal moment vector xu(θ0), and then to realize an input that has this moment vector. The optimization problem on the moment vector would be exactly solvable if the expressions of ru(ω) and Pu as a function of the moment vector were not dependent on the unknown true system via θ0 and σe^2. Such dependence on the unknown true system is inherent to all identification experiment design problems. The classical approach, which we adopt here, is to replace the unknown θ0 and σe^2 in the optimization problem leading to xu(θ0) by available estimates θinit and σe,init^2 of those quantities. Consequently, the moment vector xu(θ0) of the input signal(s) that solve the experiment design problem described above can be approximated by the optimal vector xu,opt of the following optimization problem on the variable xu = (x0 x1 ... xn)^T ∈ R^(n+1):
min_{xu} Σ_{i=0}^{n} ci xi    (17)

subject to

χ λ1( T(e^jω) [ M̄ + Σ_{i=0}^{n} Mi xi ]^-1 T(e^jω)^T ) ≤ radm(ω)^2  ∀ω    (18)

and

R(xu) ≜ [ x0 x1 ... xn ; x1 x0 ... xn−1 ; ... ; xn xn−1 ... x0 ] ≥ 0    (19)
where we have used the shorthand notations ci = ci(θinit), T(e^jω) = T(e^jω,θinit), M̄ = M̄(θinit) and Mi = Mi(θinit, σe,init^2), together with the expressions of Pθ, ru(ω) and Pu as functions of the moment vector (see (6), (10) and (7)). The constraint (19) is there because a real vector xu = (x0 x1 ... xn)^T of dimension n+1 is the moment vector of a quasi-stationary signal u(t) if and only if the Toeplitz matrix R(xu) is positive semi-definite (see e.g. [7]). The optimization problem (17)-(19) can be solved exactly provided the frequency function radm(ω) can be written as (or approximated by) a rational transfer function. Indeed, by the Kalman-Yakubovich-Popov Lemma [11], the constraints (18), defined at each frequency, can be transformed into one single LMI, making the optimization problem tractable. This is summarized in the following theorem.
Theorem 5.1: Assume that there is a rational transfer function radm(z) such that |radm(e^jω)| = radm(ω) for the frequency function radm(ω) defined in (15). Then, the solution of the optimization problem (17)-(19) is the optimal vector xu,opt of the following tractable LMI optimization problem:

minimize Σ_{i=0}^{n} ci xi    (20)

over xu = (x0 ... xn)^T ∈ R^(n+1) and a matrix P = P*, subject to R(xu) ≥ 0 and

[ A*PA − P   A*PB ; B*PA   B*PB ] + (C D)* X (C D) ≥ 0    (21)

where R(xu) is defined in (19), X ∈ R^(3(k+2)×3(k+2)) is a matrix, independent of the frequency but linearly dependent on xu, defined as

X = [ 2 I2   0   0   0 ;
      0   2 ( M̄ + Σ_{i=0}^{n} Mi xi )   0   0 ;
      0   0   0   I_{k+2} ;
      0   0   I_{k+2}   0 ]    (22)

and (A, B, C, D) is a state-space representation of the system F(e^jω) ∈ C^(3(k+2)×(k+2)) defined as follows:

F(e^jω) = [ (radm(e^jω)/√χ) I2   0 ;
            0   Ik ;
            0   (1 ; −j) ΛG(e^jω,θinit)^T ;
            ΛG(e^jω,θinit) (1  −j)   0 ;
            I2   0 ;
            0   Ik ]    (23)

with χ and ΛG(z,θ) as defined below (10) and (3), respectively.
Proof. To prove the theorem, we prove that the infinite set of constraints (18) is equivalent to the existence of a matrix P = P* such that (21) is satisfied. For this purpose, we first rewrite the set of constraints (18) using the Schur complement [3] (as is proposed in [9] for a slightly different constraint):

[ (radm(e^jω)^2/χ) I2   T(e^jω) ; T(e^jω)^T   M̄ + Σ_{i=0}^{n} Mi xi ] ≥ 0  ∀ω    (24)

Via some simple algebraic manipulations, (24) can then be replaced by the following equivalent set of constraints: (1/2) F(e^jω)* X F(e^jω) ≥ 0 ∀ω, where X and F(e^jω) are defined in (22) and (23), respectively. It is then a consequence of the Kalman-Yakubovich-Popov Lemma [11] that this set of constraints, defined at each frequency, is fulfilled if and only if there exists P = P* such that (21) is fulfilled.
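The Schur-complement step can be checked numerically on made-up data: at a fixed frequency, (18) holds if and only if the block matrix in (24) is positive semi-definite there. The matrices below are illustrative stand-ins, not from the paper's example:

```python
import numpy as np

# Sketch (made-up data): numerical check of the Schur-complement step of
# the proof, i.e. that the frequency constraint (18) holds at a given w
# iff the block matrix (24) is positive semi-definite there.
chi, r_adm = 5.99, 0.15
M = np.array([[40.0, 5.0, 0.0],
              [5.0, 30.0, 2.0],
              [0.0, 2.0, 25.0]])            # stands in for Mbar + sum Mi*xi
T = np.array([[1.0, 0.2, -0.4],
              [0.3, -0.5, 0.8]])            # stands in for T(e^jw), 2 x k

lhs18 = chi * np.linalg.eigvalsh(T @ np.linalg.inv(M) @ T.T).max()
block24 = np.block([[r_adm**2 / chi * np.eye(2), T],
                    [T.T, M]])
psd24 = np.linalg.eigvalsh(block24).min() >= -1e-12

# the frequency-wise test (18) and the LMI form (24) must agree
assert (lhs18 <= r_adm**2) == psd24
```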
Determination of an input signal from the optimal moment vector. The optimal vector xu,opt is computed using Theorem 5.1. This vector xu,opt = xu,opt(θinit) is an approximation of the moment vector of the input signals that solve the cheapest experiment design problem for control. Consequently, provided that this approximation is reliable, we can determine one of the quasi-stationary input signals that solve this experiment design problem by determining one representation u(t) of xu,opt(θinit), i.e. a signal u(t) whose moment vector is equal to xu,opt(θinit). In [7], a method is presented that determines such an input signal as a sum of sinusoids.
VI. NUMERICAL ILLUSTRATION

In order to illustrate our results, we consider as true system the ARX system with B0(z) = 0.10276 + 0.18123 z^-1, A0(z) = 1 − 1.99185 z^-1 + 2.20265 z^-2 − 1.84083 z^-3 + 0.89413 z^-4, corrupted by a realization of a white noise signal of variance σe^2 = 0.5 [8]. We consider identification experiments on this true system with N = 500 data points and a full-order model structure. In this example, we restrict attention to the properties of the sensitivity function, and the control performance criterion J(G, C, Wl, Wr) is therefore defined as in (11) with the filters Wl(z) = diag( (0.5165 − 0.4632 z^-1)/(1 − 0.999455 z^-1), 0 ) and Wr(z) = diag(1, 0). The chosen control design method is the four-block H∞ control design method of [6], which has the characteristics described in Section III.
Design of the cheapest input signal u(t) for control. We first estimate radm(ω), then compute the optimal moment vector xu,opt, and finally design a particular input signal u(t) corresponding to xu,opt.
Fig. 1. radm(ω) (solid) and ru(ω) computed with θinit and the optimal xu,opt (dashdot).
In order to estimate radm(ω) (see (15)), we have applied the gridding technique described at the end of Section IV. The estimated frequency function radm(ω) is represented in Figure 1. The optimal moment vector xu,opt is determined by solving the LMI optimization problem of Theorem 5.1, for which we have used σe,init^2 = 0.5265 and an initial estimate θinit = (−1.9755, 2.1965, −1.8495, 0.8881, 0.0817, 0.172)^T. Figure 1 shows the frequency function ru(ω) that is computed using the left-hand side of the constraint (18) and the optimal moment vector xu,opt. By construction, the moment vector xu,opt is optimized in such a way that this frequency function ru(ω) is the largest possible under the constraint ru(ω) ≤ radm(ω) ∀ω. It could therefore appear surprising that the second peak of ru(ω) is so low and that ru(ω) decreases after this peak. The reason is that ru(ω) depends on the covariance matrix Pθ, which is parametrized by only n+1 components. Therefore the behaviour of ru(ω) at one frequency is dependent on its behaviour at other frequencies, and the second peak could not be increased without making the first peak larger than radm(ω) and/or ru(ω) larger than radm(ω) at low frequencies.
We have designed an input signal u(t) corresponding to xu,opt in the form of a periodic signal:

u(t) = 2.07 + 2.63 cos(0.17t) + 0.69 cos(0.4t) + 0.09 cos(1.44t) + 0.17 cos(πt)    (25)
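As a quick check (ours, not in the paper), the cost Pu of the input (25) can be estimated by time-averaging over a long discrete-time horizon; note that in discrete time the component at ω = π contributes its full squared amplitude rather than half of it:

```python
import numpy as np

# Check of the cost of the designed input (25): its total power Pu is
# estimated by time-averaging u(t)^2. For integer t, 0.17*cos(pi*t)
# equals 0.17*(-1)^t and therefore contributes 0.17^2 (not 0.17^2/2).
t = np.arange(500_000)
u = (2.07 + 2.63 * np.cos(0.17 * t) + 0.69 * np.cos(0.4 * t)
     + 0.09 * np.cos(1.44 * t) + 0.17 * np.cos(np.pi * t))

P_time = np.mean(u**2)
P_theory = 2.07**2 + (2.63**2 + 0.69**2 + 0.09**2) / 2 + 0.17**2
assert abs(P_time - P_theory) < 1e-2
```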
It is interesting to notice that this periodic input signal u(t) has components in the low frequencies (where radm(ω) is minimal) and at ω = 0.4 and ω = 1.44, where the two resonance peaks of G(z,θinit) are located.
Verification of the procedure. We have applied the input signal u(t) defined in (25) to the true system. From N = 500 recorded input-output data, we have identified a model Ĝ along with the additive uncertainty region Dru(θ̂N) around Ĝ. The size ru(ω) of this uncertainty region has not been estimated using the approximation (10), but computed exactly using an LMI optimization problem which can be found in [2]. From Ĝ, we have then designed a controller Ĉ using the H∞ control design method of [6], and we have verified whether Ĉ stabilizes and achieves the desired performance level with all systems in the identified Dru(θ̂N). This was indeed the case.
VII. CONCLUSIONS

We have presented a new approach to the interplay between identification design and robust control based on an identified model and its corresponding uncertainty set. Rather than seeking an experiment design that minimizes some control-oriented quality measure of the estimated uncertainty set for a predefined constraint on allowable input power, we have in this contribution sought the experiment design with the cheapest cost (as measured by total input signal power) that delivers just enough precision on the estimated uncertainty set for the robust control specifications to be satisfied.
REFERENCES
[1] X. Bombois, M. Gevers, G. Scorletti, and B.D.O. Anderson. Robustness analysis tools for an uncertainty set obtained by prediction error identification. Automatica, 37(10):1629–1636, 2001.
[2] X. Bombois, G. Scorletti, P. Van den Hof, and M. Gevers. Least
costly identification experiment for control. A solution based on
a high-order model approximation. In CD-ROM Proc. American
Control Conference, Boston, June 2004.
[3] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix
Inequalities in Systems and Control Theory, volume 15 of Studies in
Appl. Math. SIAM, Philadelphia, June 1994.
[4] M. Dinh, G. Scorletti, V. Fromion, and E. Magarotto. Parametrized
H∞controller design for adaptive trade-off by finite dimensional
LMI optimization. In CD-ROM Proc. European Control Conference,
Cambridge, United Kingdom, 2003.
[5] G. Ferreres and V. Fromion. Computation of the robustness margin
with the skewed µ-tool. Syst. Control Letters, 32:193–202, 1997.
[6] G. Ferreres and V. Fromion. H∞control for a flexible transmission
system. In CD-ROM Proc. European Control Conference, Brussels,
Belgium, 1997.
[7] R. Hildebrand and M. Gevers. Identification for control: optimal
input design with respect to a worst-case ν-gap cost function. SIAM
Journal on Control and Optimization, 41(5):1586–1608, 2003.
[8] I.D. Landau, D. Rey, A. Karimi, A. Voda, and A. Franco. A flexible
transmission system as a benchmark for robust digital control.
European Journal of Control, 1(2):77–96, 1995.
[9] K. Lindqvist and H. Hjalmarsson. Optimal input design using Linear
Matrix Inequalities. In CD-ROM Proc. IFAC Symposium on System
Identification, Santa Barbara, California, 2000.
[10] L. Ljung. System Identification: Theory for the User, 2nd Edition.
Prentice-Hall, Englewood Cliffs, NJ, 1999.
[11] V. M. Popov. Hyperstability of Control Systems. Springer-Verlag,
New York, 1973.
[12] M. Zarrop. Design for Dynamic System Identification. Lecture Notes
in Control and Inform. Sci. 21, Springer Verlag, Berlin, New-York,
1979.