Data-based continuous-time modelling of dynamic systems
Hugues Garnier
H. Garnier is with the Centre de Recherche en Automatique de Nancy (CRAN), Nancy-University, CNRS, BP 70239, 54506 Vandoeuvre-les-Nancy Cedex, France. hugues.garnier@uhp-nancy.fr
Abstract— Data-based identification of continuous-time models of dynamic systems is now a mature subject. In this contribution, we focus first on a refined instrumental variable method that yields parameter estimates with optimal statistical properties for hybrid continuous-time Box-Jenkins transfer function models. The second part of the paper describes further recent developments of this reliable estimation technique, including its extension to handle the non-uniformly sampled data situation, closed-loop and nonlinear model identification. It also discusses how the recently developed methods are implemented in the CONTSID toolbox for Matlab and the advantages of these direct schemes for continuous-time model identification.
I. INTRODUCTION
The identification of continuous-time (CT) models is a
problem of considerable importance that has applications in
virtually all disciplines of science. Early research on this
topic focussed on identification of CT models from CT data
(see e.g. [29], [30]). Subsequently, however, rapid develop-
ments in digital data acquisition and computers have resulted
in attention being shifted to the identification of discrete-
time (DT) models from sampled data, as documented in
many books (see e.g [34], [24] and [15]). Much less attention
has been devoted to CT modelling from DT data and many
practitioners appear unaware that such alternative methods
not only exist but may be better suited to their modelling
problems.
In order to identify a continuous-time model from time-
domain sampled data, two main time-domain approaches are
possible. In the first, ‘indirect’ approach, a DT model is
identified first using DT model identification methods, and
this is then converted into a CT model using a standard
algorithm for discrete to continuous-time conversion. In the
second, ‘direct’ approach the CT model is identified directly
from DT data. Direct data-based CT modelling is often
incorrectly presented as being too complicated but, as we
will see, the approaches are straightforward, reliable and
have proven useful in many practical applications. These
approaches have recently regained interest showing better
performance than indirect approaches for both linear and
nonlinear models, see e.g. [20], [21], [22], [12], [42]. The
main motivations for identifying CT models directly from
sampled data have been recently discussed in [7] (see also
the Conclusions Section in this paper). Exhaustive reviews of
direct estimation methods can be found in [33], [3], [5] and
[22]. Amongst the available identification approaches for CT input-output models, interest in instrumental variable (IV) methods has been growing in recent years [23], [34], [42], [28]. The main reason for this increasing interest is that IV methods offer performance similar to extended least squares (LS) methods or other prediction-error minimization (PEM) methods (see [21], [17]) and provide consistent results even for an imperfect noise structure, which is the case in most practical applications. The IV schemes considered here have the major advantage over PEM methods of being much less sensitive to the initialisation stage (see [20], [21], [16]). These IV approaches lead to optimal estimates in the linear time-invariant case if the system belongs to the defined model set. This paper concentrates on a reliable Instrumental Variable (IV)-based estimation method in particular, and presents the latest developments, including its use for closed-loop and nonlinear model identification.
II. RIVC FOR CT LINEAR MODELS
We focus on a statistically optimal method for the
identification of continuous-time hybrid Box–Jenkins (BJ)
transfer function models from discrete-time data [43]. Here,
the model of the dynamic system is estimated in continuous-
time, differential equation form, while the associated additive
noise model is estimated as a discrete-time, autoregressive
moving average (ARMA) process. This refined instrumental
variable method for continuous-time systems (RIVC) was
first developed in 1980 by Young and Jakeman [38] and its
simplest embodiment, the simplified RIVC (SRIVC) method,
has been used successfully for many years, demonstrating the
advantages that this stochastic formulation of the continuous-
time estimation problem provides in practical applications
(see, e.g., some recent such examples in [35], [41]).
However, the ‘simplification’ that characterises the name
of the SRIVC method is the assumption, for the purposes
of simplicity and algorithmic development, that the additive
noise is purely white in form. Such an approach is optimal
under this assumption and the inherent instrumental variable
aspects of the resulting algorithm ensure that the parame-
ter estimates are consistent and asymptotically unbiased in
statistical terms, even if the noise happens to be coloured.
However, the SRIVC estimates are not, in general, statisti-
cally efficient (minimum variance) in this situation because
the prefilters are not designed to account for the colour in
the noise process.
The hybrid RIVC estimation procedure follows logically
from the refined instrumental variable (RIV) method for
discrete-time models, first developed within a maximum
likelihood (ML) context by Young in 1976 [32] and com-
prehensively evaluated by Young and Jakeman [37], [10],
[34].
The RIV algorithm involves concurrent DT noise model
estimation and uses this estimated noise model in the
iterative-adaptive design of statistically optimal prefilters that
effectively attenuate noise outside the passband of the system and prewhiten the noise remaining within the passband. Similarly motivated prefilters are utilised in the RIVC algorithm
but they also provide a very convenient way of generating
the prefiltered derivatives of the input and output variables,
as required for CT model estimation.
The alternative hybrid form of the continuous-time trans-
fer function model is considered here for two reasons. First,
the approach is simple and straightforward: the theoretical
and practical problems associated with the estimation of
purely stochastic, continuous-time CAR or CARMA models
are avoided by formulating the problem in this manner.
Second, as pointed out above, one of the main functions of
the noise estimation is to improve the statistical efficiency
of the parameter estimation by introducing appropriately
defined prefilters into the estimation procedure. And, as we
shall see, this can be achieved adequately on the basis of
hybrid prefilters defined by reference to discrete-time AR or
ARMA noise models.
A. Problem Formulation
For simplicity of presentation, the formulation and so-
lution of the CT estimation problem will be restricted to
the case of a linear, single-input, single-output system. It is
assumed that the input $u(t)$ and the noise-free output $x(t)$ are related by the following constant coefficient, differential-delay equation,
\[
x^{(n)}(t) + a_1^o x^{(n-1)}(t) + \cdots + a_n^o x(t) = b_0^o u^{(m)}(t-\tau) + \cdots + b_m^o u(t-\tau) \tag{1}
\]
where $x^{(i)}(t)$ denotes the $i$th time derivative of the continuous-time signal $x(t)$ and $\tau$ is a pure time delay in time units. This is often assumed to be an integer number related to the sampling time, i.e., $\tau = n_k T_s$, but this is not essential: in this CT environment, 'fractional' time delays can be introduced if required (e.g., see [19], [40]). For simplicity, the time delay will not be considered in the following analysis but it can be accommodated straightforwardly if identified from the data. Equation (1) can also be written in the following compact transfer function (TF) form,
\[
x(t) = G_o(p)\,u(t) = \frac{B_o(p)}{A_o(p)}\,u(t) \tag{2}
\]
with
\[
B_o(p) = b_0^o p^m + b_1^o p^{m-1} + \cdots + b_m^o, \tag{2a}
\]
\[
A_o(p) = p^n + a_1^o p^{n-1} + \cdots + a_n^o, \qquad n \ge m \tag{2b}
\]
where $x(t)$ is the deterministic output of the system; $p$ is the differential operator, i.e., $p^i x(t) = \frac{d^i x(t)}{dt^i}$; $B_o(p)$ and $A_o(p)$ are assumed to be coprime; and the system is asymptotically stable. It is assumed that the input signal $\{u(t),\, t_1 < t < t_N\}$ is applied to the system and this gives rise to an output signal $\{x(t),\, t_1 < t < t_N\}$.
In order to obtain high-quality statistical estimation re-
sults, it is vital to consider the inevitable errors that will
affect the measured output signal. It is assumed here that $x(t)$ is corrupted by an additive, coloured measurement noise $\xi(t)$, so that the complete equation for the data-generating system, denoted by $\mathcal{S}$, can be written in the form,
\[
\mathcal{S}: \; y(t) = G_o(p)\,u(t) + H_o(p)\,e_o(t) \tag{3}
\]
or, in the alternative decomposed form that is more appropriate in the present context,
\[
\mathcal{S}: \begin{cases} x(t) = G_o(p)\,u(t) \\ \xi(t) = H_o(p)\,e_o(t) \\ y(t) = x(t) + \xi(t) \end{cases} \tag{4}
\]
where $H_o(p)$ is stable and invertible, while $e_o(t)$ is a zero-mean, continuous-time white noise source, which is assumed to be uncorrelated with the input $u(t)$. Finally, if the additive coloured noise $\xi(t)$ has rational spectral density, then a suitable parametric representation is the following continuous-time, autoregressive moving average (CARMA) model
\[
\xi(t) = H_o(p)\,e_o(t) = \frac{C_o(p)}{D_o(p)}\,e_o(t) \tag{5}
\]
where $C_o(p)$ and $D_o(p)$ are suitably defined polynomials in the $p$ operator.
Of course, in most practical situations, the input and output signals $u(t)$ and $y(t)$ will be sampled in discrete time. In the case of uniform sampling, at a constant sampling interval $T_s$, these sampled signals will be denoted by $u(t_k)$ and $y(t_k)$ and the output observation equation then takes the form,
\[
y(t_k) = x(t_k) + \xi(t_k), \qquad k = 1, \cdots, N \tag{6}
\]
where $x(t_k)$ is the sampled value of the unobserved, noise-free output $x(t)$. The objective is then to identify a suitable model structure for (4) and estimate the parameters that characterise this structure, based on these sampled input and output data $Z^N = \{u(t_k);\, y(t_k)\}_{k=1}^{N}$.
Given the discrete-time, sampled nature of the data, an obvious assumption is that the discrete-time, coloured noise associated with the sampled output measurement $y(t_k)$ has rational spectral density and so can be represented by a discrete-time ARMA model. The model set to be identified and estimated, denoted by $\mathcal{M}$, with system ($G$) and noise ($H$) models parameterised independently, then takes the form,
\[
\mathcal{M}: \{G(p, \rho),\, H(q, \eta)\} \tag{7}
\]
where $\rho$ and $\eta$ are parameter vectors that characterise the system and noise models, respectively. In particular, the system model is formulated in continuous-time terms
\[
\mathcal{G}: \; G(p, \rho) = \frac{B(p, \rho)}{A(p, \rho)} = \frac{b_0 p^m + b_1 p^{m-1} + \cdots + b_m}{p^n + a_1 p^{n-1} + \cdots + a_n} \tag{8}
\]
and the associated model parameters are stacked columnwise in the parameter vector,
\[
\rho = \begin{bmatrix} a_1 & \cdots & a_n & b_0 & \cdots & b_m \end{bmatrix}^T \in \mathbb{R}^{n+m+1} \tag{9}
\]
while the noise model is in discrete-time form
\[
\mathcal{H}: \; H(q, \eta) = \frac{C(q^{-1}, \eta)}{D(q^{-1}, \eta)} = \frac{1 + c_1 q^{-1} + \cdots + c_q q^{-q}}{1 + d_1 q^{-1} + \cdots + d_p q^{-p}} \tag{10}
\]
where $q^{-r}$ is the backward shift operator, i.e., $q^{-r} y(t_k) = y(t_{k-r})$, and the associated model parameters are stacked columnwise in the parameter vector,
\[
\eta = \begin{bmatrix} c_1 & \cdots & c_q & d_1 & \cdots & d_p \end{bmatrix}^T \in \mathbb{R}^{p+q} \tag{11}
\]
Consequently, the noise TF takes the usual ARMA model form
\[
\xi(t_k) = \frac{C(q^{-1}, \eta)}{D(q^{-1}, \eta)}\, e(t_k), \qquad e(t_k) \sim \mathcal{N}(0, \sigma^2) \tag{12}
\]
where, as shown, $e(t_k)$ is a zero-mean, normally distributed, discrete-time white noise sequence.
The structure $\mathcal{S}$ does not specify any common factors in the plant ($G_o$) and noise ($H_o$) components, so that these models can be parameterised independently. More formally, there exists the following decomposition of the parameter vector $\theta$ for the whole hybrid model,
\[
\theta = \begin{bmatrix} \rho \\ \eta \end{bmatrix} \tag{13}
\]
such that the model equations can be written in the form
\[
\mathcal{M}: \begin{cases} x(t) = G(p, \rho)\,u(t) \\ \xi(t_k) = H(q, \eta)\,e(t_k) \\ y(t_k) = x(t_k) + \xi(t_k) \end{cases} \tag{14}
\]
This model is considered as a hybrid Box–Jenkins model
because of its close relationship to the DT model considered
in great detail by Box and Jenkins in their seminal book on
time-series analysis, forecasting and control [1] and used as
the basis for the development of the original RIVC algorithm
[38]. Alternatively, the model can be written in the following
vector terms
\[
\mathcal{M}: \begin{cases} x^{(n)}(t) = \varphi^T(t)\,\rho \\ \xi(t_k) = \psi^T(t_k)\,\eta + e(t_k) \\ y(t_k) = x(t_k) + \xi(t_k) \end{cases} \tag{15}
\]
where,
\[
\varphi^T(t) = \begin{bmatrix} -x^{(n-1)}(t) & \cdots & -x(t) & u^{(m)}(t) & \cdots & u(t) \end{bmatrix} \tag{15a}
\]
\[
\psi^T(t_k) = \begin{bmatrix} -\xi(t_{k-1}) & \cdots & -\xi(t_{k-p}) & e(t_{k-1}) & \cdots & e(t_{k-q}) \end{bmatrix} \tag{15b}
\]
For the purposes of identification, the order of this single-input model (with the pure time delay $\tau$ now added for completeness) is denoted by $[n\; m\; \tau\; p\; q]$ and the complete identification problem can now be stated as follows:
Based on $N$ uniformly sampled measurements of the input and output, $Z^N = \{u(t_k);\, y(t_k)\}_{k=1}^{N}$, identify the orders $n$, $m$, $p$ and $q$ of the polynomials in the system and noise TF models, as well as any pure time delay $\tau$, and estimate the parameter vector $\theta$ in (13) whose parameters characterise these polynomials.
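To make the data-generating system (4)/(14) concrete, the following minimal Python sketch (not part of the original paper or of the CONTSID toolbox; all names are illustrative) simulates a hybrid Box–Jenkins system: the CT transfer function is simulated with SciPy, with the sampled input linearly interpolated between the sampling instants, and a DT ARMA noise sequence is added at those instants.

```python
# Illustrative sketch: simulating the hybrid Box-Jenkins data-generating
# system (4)/(14). Not the paper's algorithm, just the assumed data model.
import numpy as np
from scipy import signal

def simulate_hybrid_bj(num, den, c_poly, d_poly, u, Ts, noise_std, rng):
    """CT system G(p) = num/den driven by u, plus DT ARMA noise C/D."""
    N = len(u)
    t = np.arange(N) * Ts
    # Noise-free output x(t_k): G(p) simulated with linearly interpolated input
    _, x, _ = signal.lsim((num, den), U=u, T=t)
    # DT coloured noise xi(t_k) = [C(q^-1)/D(q^-1)] e(t_k)
    e = noise_std * rng.standard_normal(N)
    xi = signal.lfilter(c_poly, d_poly, e)
    return t, x + xi, x          # time, measured output y, noise-free output x

rng = np.random.default_rng(0)
Ts = 0.05
u = np.sign(np.sin(2 * np.pi * 0.2 * np.arange(2000) * Ts))   # square-wave input
# Example second-order system G(p) = 2 / (p^2 + 1.2 p + 4)
t, y, x = simulate_hybrid_bj([2.0], [1.0, 1.2, 4.0],
                             c_poly=[1.0, 0.5], d_poly=[1.0, -0.85],
                             u=u, Ts=Ts, noise_std=0.05, rng=rng)
```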
B. Optimal RIVC Estimation: Theoretical Motivation
The RIVC algorithm derives from the RIV algorithm for
DT systems. This was evolved by converting the maximum
likelihood (ML) estimation equations to a pseudo-linear form
[25] involving optimal prefilters [32], [38], [34]. A similar
analysis can be utilised in the present situation because the
problem is very similar, in both algebraic and statistical
terms. However, to conserve space, the discussion here will
be restricted to a simpler development of the RIVC algorithm
and we leave the interested reader to consult with these
earlier references for details of the ML analysis.
1) The Hybrid Box–Jenkins Estimation Model: Follow-
ing the usual prediction error minimisation (PEM) approach
in the present hybrid situation (which is ML estimation
because of the Gaussian assumptions on $e(t_k)$), a suitable error function $\varepsilon(t_k)$, at the $k$th sampling instant, is given by,
\[
\varepsilon(t_k) = \frac{D(q^{-1}, \eta)}{C(q^{-1}, \eta)} \left[ y(t_k) - \frac{B(p, \rho)}{A(p, \rho)}\, u(t_k) \right]
\]
which can be written as,
\[
\varepsilon(t_k) = \frac{D(q^{-1}, \eta)}{C(q^{-1}, \eta)}\, \frac{1}{A(p, \rho)} \left[ A(p, \rho)\, y(t_k) - B(p, \rho)\, u(t_k) \right] \tag{16}
\]
where the discrete-time prefilter $D(q^{-1}, \eta)/C(q^{-1}, \eta)$ will be recognised as the inverse of the ARMA($p$,$q$) noise model. Note that in these equations, we are mixing discrete and continuous-time operators somewhat informally in order to indicate the hybrid computational nature of the estimation problem being considered here. Thus, operations such as,
\[
\frac{B(p, \rho)}{A(p, \rho)}\, u(t_k)
\]
imply that the input variable $u(t_k)$ is interpolated in some manner. This is to allow for the inter-sample behaviour that is not available from the sampled data and so has to be inferred in order to allow for the continuous-time numerical integration of the associated differential equations.
Minimisation of a least squares criterion function in $\varepsilon(t_k)$, measured at the sampling instants, provides the basis for stochastic estimation. However, since the polynomial operators commute in this linear case, (16) can be considered in the alternative form,
\[
\varepsilon(t_k) = A(p, \rho)\, y_f(t_k) - B(p, \rho)\, u_f(t_k) \tag{17}
\]
where $y_f(t_k)$ and $u_f(t_k)$ represent the sampled outputs of the complete hybrid prefiltering operation involving the continuous-time filtering operations using the filter
\[
f_c(p, \rho) = \frac{1}{A(p, \rho)} \tag{18}
\]
as well as discrete-time filtering operations, using the inverse noise model filter
\[
f_d(q^{-1}, \eta) = \frac{D(q^{-1}, \eta)}{C(q^{-1}, \eta)} \tag{19}
\]
The associated, linear-in-the-parameters estimation model then takes the form
\[
y_f^{(n)}(t_k) = \varphi_f^T(t_k)\,\rho + \eta(t_k) \tag{20}
\]
where,
\[
\varphi_f^T(t_k) = \begin{bmatrix} -y_f^{(n-1)}(t_k) & \cdots & -y_f(t_k) & u_f^{(m)}(t_k) & \cdots & u_f(t_k) \end{bmatrix} \tag{21}
\]
and $\eta(t_k)$ is the continuous-time noise signal $\eta(t) = A(p, \rho)\,\xi(t)$ sampled at the $k$th sampling instant.
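Purely as an illustration of this hybrid prefiltering, the sketch below implements the CT part by simulating the transfer functions $p^i/A(p, \hat{\rho})$ (a state-variable-filter style realisation) and the DT part by the inverse noise model filter $f_d(q^{-1}, \hat{\eta}) = D(q^{-1})/C(q^{-1})$. The helper names are hypothetical and this is not the CONTSID implementation.

```python
# Hypothetical sketch of the hybrid prefiltering in (17)-(21): continuous-time
# filtering by p^i / A(p, rho_hat) to obtain filtered derivatives, followed by
# discrete-time filtering with the inverse noise model D(q^-1)/C(q^-1).
import numpy as np
from scipy import signal

def ct_filtered_derivatives(sig, a_poly, max_order, Ts):
    """Return [sig_f^(max_order), ..., sig_f^(1), sig_f] where
    sig_f^(i)(t) = [p^i / A(p)] sig(t) and a_poly holds the monic A(p) coefficients."""
    t = np.arange(len(sig)) * Ts
    outs = []
    for i in range(max_order, -1, -1):
        num = np.zeros(i + 1)
        num[0] = 1.0                                  # numerator polynomial p^i
        _, yf, _ = signal.lsim((num, a_poly), U=sig, T=t)
        outs.append(yf)
    return outs

def dt_prefilter(sig, c_poly, d_poly):
    """Discrete-time inverse noise model filter f_d(q^-1) = D(q^-1)/C(q^-1)."""
    return signal.lfilter(d_poly, c_poly, sig)
```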
2) RIVC Estimation: Optimal methods of IV estimation
(see, e.g., [32], [23]) normally involve an iterative (or re-
laxation) algorithm in which, at each iteration, the ‘auxiliary
model’ used to generate the instrumental variables, as well as
the associated prefilters, are updated, based on the parameter
estimates obtained at the previous iteration. Let us consider,
therefore, the jth iteration where we have access to the
estimate,
\[
\hat{\theta}_{j-1} = \begin{bmatrix} \hat{\rho}_{j-1} \\ \hat{\eta}_{j-1} \end{bmatrix} \tag{22}
\]
obtained previously at iteration $j-1$. The most important aspect of optimal IV estimation is the definition of an optimal instrumental variable. In the present context, this is generated from the output of the continuous-time auxiliary model,
\[
\hat{x}(t, \hat{\rho}_{j-1}) = G(p, \hat{\rho}_{j-1})\,u(t) \tag{23}
\]
which is prefiltered in the same hybrid manner as the other variables. The associated optimal IV vector $\hat{\varphi}_f(t_k)$ is then an estimate of the noise-free version of the vector $\varphi_f(t_k)$ in (21) and is defined as follows
\[
\hat{\varphi}_f(t_k) = \begin{bmatrix} -\hat{x}_f^{(n-1)}(t_k) & \cdots & -\hat{x}_f(t_k) & u_f^{(m)}(t_k) & \cdots & u_f(t_k) \end{bmatrix}^T \tag{24}
\]
where it should be noted that
\[
\hat{\varphi}_f(t_k) = \hat{\varphi}_f(t_k, \hat{\rho}_{j-1}, \hat{\eta}_j) \tag{25}
\]
because the instrumental variables are now prefiltered and so are a function of both the system parameter estimates at the previous iteration and the most recent noise model parameter estimates. For simplicity, however, these additional arguments will be omitted in the subsequent analysis. Note also that the noise-free version of the vector $\varphi_f(t_k)$ in (21), which we define as follows,
\[
\mathring{\varphi}_f^T(t_k) = \begin{bmatrix} -x_f^{(n-1)}(t_k) & \cdots & -x_f(t_k) & u_f^{(m)}(t_k) & \cdots & u_f(t_k) \end{bmatrix} \tag{26}
\]
where $x(t) = G_o(p)\,u(t)$, is referred to in Section II-D when considering the statistical properties of the optimal IV parameter estimates.
The IV optimisation problem can now be stated in the form
\[
\hat{\rho}_j(N) = \arg\min_{\rho} \left\| \left[ \frac{1}{N} \sum_{k=1}^{N} \hat{\varphi}_f(t_k)\,\varphi_f^T(t_k) \right] \rho - \left[ \frac{1}{N} \sum_{k=1}^{N} \hat{\varphi}_f(t_k)\, y_f^{(n)}(t_k) \right] \right\|^2_Q \tag{27}
\]
where $\|x\|^2_Q = x^T Q x$ and $Q = I$. This results in the well-known solution of the IV estimation (IV normal) equations
\[
\hat{\rho}_j(N) = \left[ \sum_{k=1}^{N} \hat{\varphi}_f(t_k)\,\varphi_f^T(t_k) \right]^{-1} \sum_{k=1}^{N} \hat{\varphi}_f(t_k)\, y_f^{(n)}(t_k) \tag{28}
\]
where $\hat{\rho}_j(N)$ is the IV estimate of the system model parameter vector at the $j$th iteration based on the appropriately prefiltered input/output data $Z^N = \{u(t_k);\, y(t_k)\}_{k=1}^{N}$.
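A minimal sketch of the en bloc IV solution (28), together with the covariance estimate (30) used later in Step 3 of the algorithm, is given below; the prefiltered regressor matrices are assumed to be already available and the variable names are illustrative.

```python
# Minimal sketch (assumed variable names) of the en bloc IV solution (28)
# and of the parametric error covariance estimate (30).
import numpy as np

def iv_estimate(phi_f, phi_f_hat, ynf):
    """phi_f, phi_f_hat: (N, n+m+1) prefiltered data and IV regressor matrices;
    ynf: (N,) prefiltered n-th output derivative y_f^(n)(t_k)."""
    M = phi_f_hat.T @ phi_f                 # sum of phi_hat(t_k) phi^T(t_k)
    b = phi_f_hat.T @ ynf                   # sum of phi_hat(t_k) y_f^(n)(t_k)
    rho_hat = np.linalg.solve(M, b)         # IV normal equations (28)
    resid = ynf - phi_f @ rho_hat
    sigma2 = resid @ resid / (len(ynf) - len(rho_hat))
    P_rho = sigma2 * np.linalg.inv(phi_f_hat.T @ phi_f_hat)   # covariance (30)
    return rho_hat, P_rho
```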
As regards the hybrid prefiltering, it will be noted from (25) that this involves the inverse noise model parameters $\hat{\eta}_j$ obtained at the current $j$th iteration. This is because, given $\hat{\rho}_{j-1}$, an estimate of the sampled noise signal $\xi(t_k)$ at the $j$th iteration is obtained by subtracting the sampled output of the auxiliary model equation (23) from the measured output $y(t_k)$, i.e.,
\[
\hat{\xi}(t_k) = y(t_k) - \hat{x}(t_k, \hat{\rho}_{j-1}) \tag{29}
\]
This estimate provides the basis for the estimation of the noise model parameter vector $\hat{\eta}_j$, using whatever ARMA model estimation algorithm is selected for this task.
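The choice of ARMA estimator is left open here. As one possible choice, the following illustrative Python sketch (assuming the statsmodels package) fits a zero-mean ARMA($p$,$q$) model to the estimated noise sequence and returns the $C(q^{-1})$ and $D(q^{-1})$ polynomial coefficients needed for the DT prefilter; any other ARMA estimator could equally be used.

```python
# One possible choice of ARMA estimator for the noise model, sketched with
# statsmodels; the text leaves the choice of algorithm open.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def estimate_arma_noise(xi_hat, p_ar, q_ma):
    """Fit a zero-mean ARMA(p_ar, q_ma) model to the estimated noise xi_hat and
    return the C (MA) and D (AR) polynomial coefficients in q^-1."""
    res = ARIMA(xi_hat, order=(p_ar, 0, q_ma), trend='n').fit()
    d_poly = np.r_[1.0, -res.arparams]      # D(q^-1) = 1 + d_1 q^-1 + ...
    c_poly = np.r_[1.0, res.maparams]       # C(q^-1) = 1 + c_1 q^-1 + ...
    return c_poly, d_poly
```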
C. The RIVC and SRIVC Algorithms
The iterative RIVC and SRIVC algorithms follow directly
from the RIV and SRIV algorithms for DT systems (e.g.,
[34]). This section summarises both algorithms.
1) The RIVC Algorithm: Bearing the analysis of the
previous subsection II-B.2 in mind, the main steps in the
RIVC algorithm are as follows:
Step 1. Initialisation: generate an initial estimate of the TF model parameter vector $\hat{\rho}_o$ using the simplified RIVC (SRIVC) algorithm (see subsection II-C.2) and use this to define the initial CT prefilter $f_c(p, \hat{\rho}_o)$.
Step 2. Iterative estimation.
for $j = 1$ : convergence
(i) Generate the IV series $\hat{x}(t, \hat{\rho}_{j-1})$ using the auxiliary model built up from the estimated polynomials $A(p, \hat{\rho}_{j-1})$ and $B(p, \hat{\rho}_{j-1})$ based on $\hat{\rho}_{j-1}$ at the previous $(j-1)$th iteration.
(ii) Prefilter the input $u(t_k)$, output $y(t_k)$ and instrumental variable $\hat{x}(t, \hat{\rho}_{j-1})$ by the continuous-time filter $f_c(p, \hat{\rho}_{j-1})$ in order to generate the filtered derivatives of these variables.
(iii) Obtain an optimal estimate of the noise model parameter vector $\hat{\eta}_j$ based on the estimated noise sequence $\hat{\xi}(t_k)$ from (29), using a selected ARMA estimation algorithm.
(iv) Sample the filtered derivative signals at the discrete-time sampling interval $T_s$ and prefilter these by the discrete-time filter $f_d(q^{-1}, \hat{\eta}_j)$, in order to define all the required elements in the data vector $\varphi_f(t_k)$, the IV vector $\hat{\varphi}_f(t_k)$ and the $n$th-order filtered derivative $y_f^{(n)}(t_k)$.
(v) Based on these prefiltered data, generate the latest estimate $\hat{\rho}_j$ of the system model parameter vector using the en bloc IV solution (28), or its recursive equivalent. Together with the estimate $\hat{\eta}_j$ of the noise model parameter vector from (iii), this provides the estimate $\hat{\theta}_j$ of the composite parameter vector at the $j$th iteration.
end
Step 3. After the convergence of the iterations is complete, compute the estimated parametric error covariance matrix $\hat{P}_\rho$, associated with the converged estimate $\hat{\rho}$ of the system model parameter vector, from the expression (see Section II-D),
\[
\hat{P}_\rho = \hat{\sigma}^2 \left[ \sum_{k=1}^{N} \hat{\varphi}_f(t_k)\,\hat{\varphi}_f^T(t_k) \right]^{-1} \tag{30}
\]
where $\hat{\varphi}_f(t_k)$ is the IV vector obtained at convergence and $\hat{\sigma}^2$ is the estimated residual variance.
2) The SRIVC Algorithm: It will be noted that the above formulation of the RIVC estimation problem is considerably simplified if it is assumed that the additive noise is white, i.e., $C(q^{-1}, \eta) = D(q^{-1}, \eta) = 1$. In this case, simplified RIVC (SRIVC) estimation involves only the parameters in the $A(p, \rho)$ and $B(p, \rho)$ polynomials and the prefiltering only involves the continuous-time prefilter $f_c(p, \rho) = 1/A(p, \rho)$. Consequently, the main steps in the SRIVC algorithm are the same as those in the RIVC algorithm, except that the noise model estimation and the subsequent discrete-time prefiltering in steps (iii) and (iv) of the iterative procedure are no longer required and are omitted.
It is worth noting that the RIVC algorithm has a much
longer computation time than the SRIVC algorithm. As a
result, it is advantageous to use the SRIVC algorithm for
initial model order identification and only employ the full
RIVC algorithm in those situations where the theoretical
assumptions are satisfied and it is essential to have the
most efficient parameter estimates and better estimates of
the uncertainty on the parameters. For day-to-day usage, the
SRIVC algorithm provides a quick and reliable approach to
continuous-time model identification and estimation.
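Purely for illustration, the SRIVC iteration can be sketched in Python as follows, reusing the hypothetical helpers ct_filtered_derivatives and iv_estimate introduced above; this is a simplified sketch under the white-noise assumption and is not the CONTSID srivc routine.

```python
# Schematic sketch of the SRIVC iteration (white-noise case), reusing the
# hypothetical helpers ct_filtered_derivatives and iv_estimate sketched above.
import numpy as np
from scipy import signal

def srivc(u, y, n, m, Ts, rho0, n_iter=20):
    t = np.arange(len(u)) * Ts
    rho = np.asarray(rho0, dtype=float)           # [a_1 .. a_n  b_0 .. b_m]
    for _ in range(n_iter):
        a_poly = np.r_[1.0, rho[:n]]              # A(p, rho_hat), monic
        b_poly = rho[n:]                          # B(p, rho_hat)
        # (in practice the estimated A(p) should be checked/projected to be stable)
        # (i) auxiliary-model (instrumental variable) output
        _, x_hat, _ = signal.lsim((b_poly, a_poly), U=u, T=t)
        # (ii) filtered derivatives through f_c(p) = 1/A(p, rho_hat)
        yd = ct_filtered_derivatives(y, a_poly, n, Ts)        # y_f^(n) .. y_f
        ud = ct_filtered_derivatives(u, a_poly, m, Ts)        # u_f^(m) .. u_f
        xd = ct_filtered_derivatives(x_hat, a_poly, n, Ts)    # x_f^(n) .. x_f
        ynf = yd[0]
        phi_f = np.column_stack([-d for d in yd[1:]] + list(ud))
        phi_hat = np.column_stack([-d for d in xd[1:]] + list(ud))
        # (v) en bloc IV solution (28) and covariance estimate (30)
        rho, P_rho = iv_estimate(phi_f, phi_hat, ynf)
    return rho, P_rho
```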
D. Theoretical Background and Statistical Properties of the
RIVC Estimates
The motivational arguments presented in Section II-
B suggest that, upon convergence, the RIVC parameter
estimates will possess the optimal statistical properties of
consistency and asymptotic efficiency when the additive
noise has a Gaussian normal probability distribution and
rational spectral density. This section presents more formal
analysis to verify further the optimality of the estimates and
confirm the asymptotic independence of the system and noise
model parameter estimates.
1) Optimality of RIVC Estimation: In the control and
systems literature, optimal IV estimation is usually consid-
ered in relation to the so-called ‘extended IV’ approach to
estimation, as developed for the DT case [23]. A similar
approach can be applied in the present CT case by re-writing the IV optimisation equation (27) in the following alternative form that explicitly reveals a continuous-time prefilter $f(p)$
\[
\hat{\rho}(N) = \arg\min_{\rho} \left\| \left[ \frac{1}{N} \sum_{k=1}^{N} \zeta_f(t_k)\, f(p)\,\varphi^T(t_k) \right] \rho - \left[ \frac{1}{N} \sum_{k=1}^{N} \zeta_f(t_k)\, f(p)\, y^{(n)}(t_k) \right] \right\|^2_Q \tag{31}
\]
where $f(p)$ is the stable prefilter, $\zeta_f(t_k)$ is the prefiltered instrumental vector $\zeta_f(t_k) = f(p)\,\zeta(t_k)$ and $Q$ is a positive-definite matrix. By definition, when $G_o \in \mathcal{G}$, the extended IV estimate provides a consistent estimate under the following two conditions
\[
\begin{cases}
\bar{E}\{\zeta_f(t_k)\, f(p)\,\varphi^T(t_k)\} \text{ is non-singular} \\
\bar{E}\{\zeta_f(t_k)\, f(p)\,\xi(t_k)\} = 0
\end{cases} \tag{32}
\]
Clearly, the selection of the instrumental variable vector $\zeta_f(t_k)$, the weighting matrix $Q$ and the prefilter $f(p)$ may have a considerable effect on the covariance matrix $P_\theta$ produced by the IV estimation algorithm.
In the open-loop situation, the Cramér–Rao lower bound on $P_\theta$ for any unbiased identification method (e.g., [23], [15]) defines the optimal solution. In this regard, it has been shown that the minimum value of the covariance matrix $P_\theta$, as a function of the design variables $\zeta_f(t_k)$, $f(p)$ and $Q$, exists and is given by
\[
P_\theta \ge P_\theta^{\mathrm{opt}}
\]
with
\[
P_\theta^{\mathrm{opt}} = \left[ \bar{E}\{\mathring{\zeta}_f(t_k)\,\mathring{\zeta}_f^T(t_k)\} \right]^{-1} \tag{33}
\]
where $\mathring{\zeta}_f(t_k)$ is the optimally prefiltered IV vector, with the associated design variables defined as
\[
Q = I, \tag{34a}
\]
\[
f(p) = \frac{1}{H_o(p)\,A_o(p)} = \frac{D_o(p)}{C_o(p)\,A_o(p)}, \tag{34b}
\]
\[
\mathring{\zeta}(t_k) = \begin{bmatrix} -x^{(n-1)}(t_k) & \cdots & -x(t_k) & u^{(m)}(t_k) & \cdots & u(t_k) \end{bmatrix}^T \tag{34c}
\]
so that,
\[
\mathring{\zeta}_f(t_k) = f(p)\,\mathring{\zeta}(t_k) \tag{35}
\]
which will be recognised as the noise-free, prefiltered vector $\mathring{\varphi}_f(t_k)$ defined earlier in (26).
2) Comments:
Not surprisingly, the above analysis justifies the RIVC algorithmic design that iteratively updates those aspects of the theoretical solution that are not known a priori: in this case, the unknown model polynomials and the noise-free output of the system that is, of course, the source of the instrumental variables. If it is assumed that, in all identifiable situations, the RIVC algorithm converges in the sense that $\hat{\rho} \to \rho$ and $\hat{\eta} \to \eta$, then the RIVC estimates will be consistent and asymptotically efficient.
The optimal filter f(p)in (34b) is formulated in CT
terms. In the proposed RIVC algorithm, this filter takes
a hybrid form, as discussed in the previous sections.
One very important aspect of TF modelling is the identification of the model structure: i.e., the degrees $n$, $m$, $p$, and $q$ of the model polynomials and any associated pure time delay $\tau$. A model order selection method associated with the SRIVC model estimation method allows the user to automatically search over a whole range of different model orders. Two statistical measures are then used to help the user choose the best model structure (see Subsection II-E and [43]).
Both RIVC/SRIVC routines are available in the CONTSID (see Section IV below) and CAPTAIN (http://www.es.lancs.ac.uk/cres/captain/) toolboxes for MATLAB.
E. Model Order Identification
One very important aspect of TF modelling is the identification of the model structure: i.e., the degrees $n$, $m$, $p$, and $q$ of the model polynomials and any associated pure time delay $\tau$. One statistical measure that is useful in this regard is the coefficient of determination $R_T^2$, defined as follows
\[
R_T^2 = 1 - \frac{\sigma_{\hat{\xi}}^2}{\sigma_y^2} \tag{36}
\]
where $\sigma_{\hat{\xi}}^2$ is the variance of the estimated noise $\hat{\xi}(t_k)$ and $\sigma_y^2$ is the variance of the measured output $y(t_k)$. $R_T^2$ is clearly a normalised measure of how much of the output variance is explained by the deterministic system part of the estimated model. However, it is well known that this measure, on its own, is not sufficient to avoid over-parametrisation and identify a parsimonious model, so that other model order identification statistics are required. In this regard, because the SRIVC and RIVC methods exploit optimal instrumental variable methodology, they are able to utilise the special properties of the instrumental product matrix (IPM) [39]; in particular, the YIC statistic [34], which is defined as follows
\[
\mathrm{YIC} = \log_e \frac{\hat{\sigma}^2}{\sigma_y^2} + \log_e \{\mathrm{NEVN}\}; \qquad \mathrm{NEVN} = \frac{1}{n_\theta} \sum_{i=1}^{n_\theta} \frac{\hat{p}_{ii}}{\hat{\theta}_i^2} \tag{37}
\]
Here, $n_\theta = n + m + p + q + 1$ is the number of estimated parameters; $\hat{p}_{ii}$ is the $i$th diagonal element of the block-diagonal covariance matrix $P_\theta$, where,
\[
P_\theta = \begin{bmatrix} P_\rho & 0 \\ 0 & P_\eta \end{bmatrix} \tag{38}
\]
and so is an estimate of the variance of the estimated uncertainty on the $i$th parameter estimate. $\hat{\theta}_i^2$ is the square of the $i$th parameter estimate in the $\theta$ vector, so that the ratio $\hat{p}_{ii}/\hat{\theta}_i^2$ is a normalised measure of the uncertainty on the $i$th parameter estimate.
From the definition of $R_T^2$, we see that the first term in the YIC is simply a relative measure of how well the model explains the data: the smaller the model residuals, the more negative the term becomes. The normalised error variance norm (NEVN) term, on the other hand, provides a measure of the conditioning of the IPM, which needs to be inverted when the IV normal equations are solved (see, e.g., [34]): if the model is overparameterised, then it can be shown that the IPM will tend to singularity and, because of its ill-conditioning, the elements of its inverse (in the form here of the covariance matrix $P_\theta$) will increase in value, often by several orders of magnitude. When this happens, the second term in the YIC tends to dominate the criterion function, indicating over-parametrisation.
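For illustration only, $R_T^2$ and YIC can be computed as follows from quantities assumed to be available after estimation (all names are hypothetical):

```python
# Illustrative computation of R_T^2 (36) and YIC (37) from quantities assumed
# to be available after (S)RIVC estimation; variable names are hypothetical.
import numpy as np

def rt2(y, xi_hat):
    """Coefficient of determination R_T^2 from the measured output y and the
    estimated noise xi_hat = y - x_hat."""
    return 1.0 - np.var(xi_hat) / np.var(y)

def yic(y, resid, theta_hat, P_theta):
    """YIC statistic from residuals, parameter estimates and their covariance."""
    sigma2_hat = np.var(resid)
    nevn = np.mean(np.diag(P_theta) / theta_hat**2)     # NEVN term in (37)
    return np.log(sigma2_hat / np.var(y)) + np.log(nevn)
```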
It is important to note that, based on practical experience, the YIC is normally best considered during SRIVC identification, which is much less computationally intensive than RIVC identification, so allowing for much faster investigation of the model order range selected by the user. In this situation, $n_\theta$ is replaced by $n_\rho = n + m + 1$ and the $\hat{p}_{ii}$ are obtained by reference to the covariance matrix $P_\rho$.
Although heuristic, the YIC has proven very useful in practical identification terms. It should not, however, be used as the sole arbiter of model order: rather, the combination of $R_T^2$ and YIC provides an indication of the best parsimonious models, which can then be evaluated by other standard statistical measures (e.g., the auto- and partial autocorrelation of the model residuals, the cross-correlation of the residuals with the input signal $u(t_k)$, etc.). Also, the physical interpretation of the model can often provide valuable information on the model adequacy: for instance, a model with complex eigenvalues caused by overparametrisation may prove incompatible with the non-oscillatory nature of the physical system under study.
III. LATEST DEVELOPMENTS FOR THE RIVC METHOD
Recent developments have aimed at extending the RIVC method to handle a wider range of practical situations, in order to enhance the application field of direct CT model identification.
A. Multiple-input Systems
It is clearly straightforward to extend the RIVC/SRIVC
methods to the multiple-input situation if the TF denominator
is common to all input channels. The situation is not so
straightforward in the case where there are different denom-
inator polynomials for each input channel. However, follow-
ing the RIV approach for DT systems [9], the algorithms can
be extended to handle this situation [4]: indeed, the current
version of RIVC in the CONTSID Toolbox provides this
option.
B. Non-uniformly Sampled Data
One advantage of the SRIVC approach to continuous-
time modelling is that it can be based on irregularly sampled
data and can handle ‘fractional’ pure time delays. The current
implementation of the SRIVC algorithm in the CONTSID
Toolbox can handle irregularly sampled data. However, the
RIVC algorithm has not yet been upgraded in this regard
because it requires additional interpolation and re-sampling
in order to generate a regularly sampled series for the ARMA
noise model estimation parts of the algorithm.
C. Closed-loop Model Identification
Provided there is an external command input signal, the
identification and estimation of a system within a closed
automatic control loop has always been straightforward when
using IV estimation methodology [31], [8]. In the case of
the RIVC/SRIVC algorithms, a two-stage approach, such as
that used by Van den Hof [27] for discrete-time systems, is
the most effective, since it does not require prior knowledge
of the control system. Recent research [44] has shown
that a modification of this approach employing the SRIVC
algorithm (rather than the FIR model estimation used by Van
den Hof) for estimating the control input signal, followed by
full RIVC estimation of the system, based on this estimated
control input, works extremely well.
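The following Python sketch outlines the two-stage idea under simplifying assumptions; for brevity it reuses the hypothetical srivc helper sketched earlier in both stages, whereas the scheme of [44] uses the SRIVC algorithm only to estimate the noise-free control input and then applies full RIVC estimation of the system in the second stage.

```python
# Conceptual sketch of the two-stage closed-loop scheme described above,
# reusing the hypothetical srivc helper; not the algorithm of [44] itself.
import numpy as np
from scipy import signal

def two_stage_closed_loop(r, u, y, Ts, n1, m1, n2, m2, rho0_stage1, rho0_stage2):
    # Stage 1: model the transfer from the external command r to the plant
    # input u, then reconstruct a noise-free estimate of the control input.
    rho_ru, _ = srivc(r, u, n1, m1, Ts, rho0_stage1)
    t = np.arange(len(r)) * Ts
    a1 = np.r_[1.0, rho_ru[:n1]]
    b1 = rho_ru[n1:]
    _, u_hat, _ = signal.lsim((b1, a1), U=r, T=t)      # noise-free input estimate
    # Stage 2: estimate the plant model from the estimated input u_hat to y.
    rho_plant, P_rho = srivc(u_hat, y, n2, m2, Ts, rho0_stage2)
    return rho_plant, P_rho
```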
D. Hammerstein and LPV Model Identification
Direct identification of CT nonlinear models is still an immature subject. This section briefly discusses the extension of the RIVC method to the identification of Hammerstein and LPV CT Box–Jenkins models. In the case of the Hammerstein hybrid BJ model, the nonlinear function $f(\cdot)$ is assumed to be a sum of known basis functions $\gamma_1, \gamma_2, \ldots, \gamma_l$ given as:
\[
\bar{u}(t) = \sum_{i=1}^{l} \alpha_i\,\gamma_i(u(t)) \qquad \text{with } \alpha_1 = 1. \tag{39}
\]
The hybrid CT BJ Hammerstein model is described by the following input-output relationship:
\[
\begin{cases}
x(t) = G(p)\,\bar{u}(t) \\
\xi(t_k) = H(q^{-1})\,e(t_k) \\
y(t_k) = x(t_k) + \xi(t_k)
\end{cases} \tag{40}
\]
where
\[
G(p) = \frac{B(p)}{A(p)} \tag{41}
\]
and where the coloured noise associated with the sampled output measurement $y(t_k)$ has rational spectral density and can be represented by a discrete-time autoregressive moving average (ARMA) model:
\[
\xi(t_k) = H(q^{-1})\,e(t_k) = \frac{C(q^{-1})}{D(q^{-1})}\,e(t_k) \tag{42}
\]
The RIVC method has very recently been extended to estimate the parameters of such CT hybrid BJ Hammerstein models [12], [13].
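As a simple illustration of the static input nonlinearity parameterisation (39), the basis-expanded input channels can be formed as follows; the full RIVC extension of [12], [13] is considerably more involved and is not reproduced here.

```python
# Minimal sketch: forming the basis-expanded input channels for the
# Hammerstein model (39)-(40); only the static nonlinearity is illustrated.
import numpy as np

def hammerstein_channels(u, basis_funcs):
    """Return the matrix [gamma_1(u) ... gamma_l(u)]; with alpha_1 fixed to 1,
    x(t) = G(p) sum_i alpha_i gamma_i(u(t)) can be treated as a multi-input
    linear model whose inputs are the transformed signals gamma_i(u)."""
    return np.column_stack([g(u) for g in basis_funcs])

# Example: polynomial basis gamma_i(u) = u^i, i = 1, 2, 3
u = np.linspace(-1.0, 1.0, 500)
U_bar = hammerstein_channels(u, [lambda v: v, lambda v: v**2, lambda v: v**3])
```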
So-called linear parameter-varying (LPV) models have been the subject of recent interest. The RIVC approach has recently been extended to estimate CT LPV input/output models [14].
IV. SOFTWARE ASPECTS - THE CONTSID
TOOLBOX
The field of system identification is an extensive and
versatile area. It is easy to get confused by the vast number
of approaches and variants of methods available. We have
seen so far that direct continuous-time model identification
from sampled data is now a mature subject and it is important
to package the identification tools in a user-friendly way. An
attempt to do this has been made with the CONTinuous-time System IDentification (CONTSID) toolbox for MATLAB.
The CONTSID toolbox was first released in 1999 [2]. It
has gone through several updates. The key features of the
CONTSID toolbox are [6]:
- it supports most of the time-domain methods developed over the last thirty years [3] for identifying linear dynamic continuous-time parametric models from measured input/output sampled data;
- it provides transfer function and state-space model identification methods for single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems, including both traditional and more recent approaches;
- it can handle mildly irregularly sampled data in a straightforward way;
- it may be seen as an add-on to the System Identification (SID) toolbox for MATLAB; to facilitate its use, it has been given a similar setup to the SID toolbox;
- it provides a flexible graphical user interface (GUI) that lets the user analyse the experimental data, identify and evaluate models in an easy way;
- it can be freely downloaded from http://www.cran.uhp-nancy.fr/contsid/
The latest version of the CONTSID toolbox has the following
three major additions:
- it supports errors-in-variables CT transfer function model identification [18], [26];
- it provides routines to estimate linear CT transfer function models in closed loop [8], [44];
- it includes methods to identify nonlinear CT Hammerstein models [12], [13].
V. CONCLUSIONS
This paper has first described the full RIVC algorithm for
identifying hybrid Box–Jenkins transfer function models for
linear, continuous-time systems from discrete-time, sampled
data. The latest developments of the RIVC approach for non-
uniformly sampled data, closed-loop identification as well
as for nonlinear Hammerstein and LPV model identification
have also been briefly discussed.
It is felt that continuous-time model identification, based
on a stochastic formulation of the transfer function estimation
problem, provides a theoretically elegant and practically
useful approach to the modelling of stochastic dynamic
systems from sampled data.
It is an approach that has many advantages in scientific terms since it provides differential equation models that conform with the models used in most scientific research, where conservation equations are normally formulated in terms of differential equations. The resulting model is also defined by a unique set of parameter values that do not depend on the sampling interval, so eliminating the discrete-to-continuous-time conversion that is an essential element of indirect approaches based on discrete-time model estimation. These direct continuous-time model identification methods have proven to be particularly well suited in the case of mildly non-uniformly sampled data, dominant system modes with widely different natural frequencies (stiff systems), fast sampled data, or when the input does not respect the zero-order hold assumption. Last but not least, these direct data-based CT modelling methods have proven successful in many practical applications and are available as user-friendly and computationally efficient algorithms in the CONTSID toolbox for MATLAB.
VI. ACKNOWLEDGMENTS
I am extremely grateful to Professor Peter Young for teaching me the basics of the Refined Instrumental Variable estimation concept. I would also like to gratefully acknowledge the contributions of all of my students to the developments presented in this paper: Michel Mensler, Marion Gilson, Eric Huselstein, Stephane Thil, Damien Kuss and Vincent Laurain.
REFERENCES
[1] G.E.P. Box and G.M. Jenkins. Time Series Analysis: Forecasting and Control. Holden-Day, San Francisco, 1970.
[2] H. Garnier and M. Mensler. CONTSID: a continuous-time system
identification toolbox for Matlab. 5th European Control Conference
(ECC’99), Karlsruhe (Germany), September 1999.
[3] H. Garnier, M. Mensler, and A. Richard. Continuous-time model iden-
tification from sampled data. Implementation issues and performance
evaluation. International Journal of Control, 76(13):1337–1357, 2003.
[4] H. Garnier, M. Gilson, P.C. Young, and E. Huselstein. An optimal IV
technique for identifying continuous-time transfer function model of
multiple input systems. Control Engineering Practice, 15(4):471–486,
2007.
[5] H. Garnier and L. Wang (Eds). Identification of Continuous-time
Models from Sampled Data. Springer-Verlag, London, 2008.
[6] H. Garnier, M. Gilson, T. Bastogne, and M. Mensler. CONTSID
toolbox: a software support for continuous-time data-based modelling.
In Identification of continuous-time models from sampled data, H.
Garnier and L. Wang (Eds.), Springer, London, pages 249–290, 2008.
[7] H. Garnier, L. Wang, and P.C. Young. Direct Identification of
Continuous-time Models from Sampled Data: Issues, Basic Solutions
and Relevance. In Identification of continuous-time models from
sampled data, H. Garnier and L. Wang (Eds.), Springer, London, pages
1–29, 2008.
[8] M. Gilson, H. Garnier, P.C. Young, and P. Van den Hof. Instrumental
variable methods for continuous-time closed-loop model identification.
In Identification of continuous-time models from sampled data, H.
Garnier and L. Wang (Eds.), Springer-Verlag, London, pp. 133-160,
2008.
[9] A.J. Jakeman, L.P. Steele, and P.C. Young. Instrumental variable
algorithms for multiple input systems described by multiple transfer
functions. IEEE Transactions on Systems, Man, and Cybernetics,
SMC-10:593–602, 1980.
[10] A.J. Jakeman and P.C. Young. Refined instrumental variable methods
of time-series analysis: Part II, multivariable systems. International
Journal of Control, 29:621–644, 1979.
[11] A.J. Jakeman and P.C. Young. Advanced methods of recursive time-
series analysis. International Journal of Control, 37:1291–1310, 1983.
[12] V. Laurain, M. Gilson, H. Garnier, and P.C. Young. Refined instrumen-
tal variable methods for identification of Hammerstein continuous-time
Box-Jenkins models. 47th IEEE Conference on Decision and Control
(CDC’2008), Cancun (Mexico), December 2008.
[13] V. Laurain, M. Gilson, and H. Garnier. Refined instrumental variable
methods for Hammerstein Box-Jenkins models. In System Identifica-
tion, Environmetric Modelling and Control System Design, L. Wang,
H. Garnier and T. Jakeman (Eds.), Springer, London, 2011.
[14] V. Laurain, M. Gilson, R. Toth, and H. Garnier. Direct identification of continuous-time LPV input/output models. IET Control Theory and Applications, special issue "Continuous-time model identification", 2011.
[15] L. Ljung. System Identification. Theory for the User. Prentice Hall,
Upper Saddle River, 2nd edition, 1999.
[16] L. Ljung. Initialisation aspects for subspace and output-error iden-
tification methods. European Control Conference, Cambridge, UK,
2003.
[17] L. Ljung. Experiments with identification of continuous-time models.
In 15th IFAC Symposium on System Identification, Saint-Malo, France,
July 2009.
[18] K. Mahata and H. Garnier. Identification of continuous-time errors-
in-variables models. Automatica, 46(9):1477–1490, 2006.
[19] K. Mahata and H. Garnier. Identification of continuous-time Box-
Jenkins models with arbitrary time-delay. 46th Conference on Decision
and Control (CDC’2007), New Orleans, LA, USA, December 2007.
[20] G.P. Rao and H. Garnier. Numerical illustrations of the relevance
of direct continuous-time model identification. 15th IFAC World
Congress, Barcelona, Spain, July 2002.
[21] G.P. Rao and H. Garnier. Identification of continuous-time systems:
direct or indirect? Systems Science, 30(3):25–50, 2004.
[22] G. P. Rao and H. Unbehauen. Identification of continuous-time
systems. IEE Proceedings Control Theory & Appl., 153(2), March
2006.
[23] T. Söderström and P. Stoica. Instrumental Variable Methods for System Identification. Springer-Verlag, New York, 1983.
[24] T. Söderström and P. Stoica. System Identification. Series in Systems and Control Engineering. Prentice Hall, 1989.
[25] V. Solo. Time Series Recursions and Stochastic Approximation. PhD
thesis, Australian National University, Canberra, Australia, 1978.
[26] S. Thil, H. Garnier, and M. Gilson. Third-order cumulants based methods for continuous-time errors-in-variables model identification. Automatica, 44(3), 2008.
[27] P. Van den Hof. Closed-loop issues in system identification. Annual
Reviews in Control, 22:173–186, 1998.
[28] L. Wang, H. Garnier and T. Jakeman (Eds). System Identification,
Environmetric Modelling and Control System Design. Springer-Verlag,
2011.
[29] P.C. Young. In flight dynamic checkout - a discussion. IEEE
Transactions on Aerospace, AS2(3):1106–1111, 1964.
[30] P.C. Young. The determination of the parameters of a dynamic process.
Radio and Electronic Engineering (Journal of IERE), 29:345–361,
1965.
[31] P.C. Young. An instrumental variable method for real-time identifica-
tion of a noisy process. Automatica, 6:271–287, 1970.
[32] P.C. Young. Some observations on instrumental variable methods of
time-series analysis. International Journal of Control, 23:593–612,
1976.
[33] P.C. Young. Parameter estimation for continuous-time models - a
survey. Automatica, 17(1):23–39, 1981.
[34] P.C. Young. Recursive Estimation and Time-Series Analysis. Springer-
Verlag, Berlin, 1984.
[35] P.C. Young. Data-based mechanistic modeling of engineering systems.
Journal of Vibration and Control, 4:5–28, 1998.
[36] P.C. Young. Data-based mechanistic modeling of environmental, eco-
logical, economic and engineering systems. Journal of Environmental
Modelling and Software, 13:105–122, 1998.
[37] P.C. Young and A.J. Jakeman. Refined instrumental variable methods
of time-series analysis: Part I, SISO systems. International Journal of
Control, 29:1–30, 1979.
[38] P.C. Young and A.J. Jakeman. Refined instrumental variable methods
of time-series analysis: Part III, extensions. International Journal of
Control, 31:741–764, 1980.
[39] P.C. Young, A.J. Jakeman, and R. McMurtrie. An instrumental variable
method for model order identification. Automatica, 16:281–296, 1980.
[40] P.C. Young. The data-based mechanistic approach to the modelling,
forecasting and control of environmental systems. Annual Reviews in
Control, 30:169–182, 2006.
[41] P.C. Young and H. Garnier. Identification and estimation of
continuous-time, data-based mechanistic models for environmental
systems. Environmental Modelling & Software, 21:1055–1072, 2006.
[42] P.C. Young. The refined instrumental variable method: unified estimation of discrete and continuous-time transfer function models. Journal Européen des Systèmes Automatisés, 2008.
[43] P.C. Young, H. Garnier, and M. Gilson. Refined instrumental variable
identification of continuous-time hybrid Box-Jenkins models. In Iden-
tification of continuous-time models from sampled data, H. Garnier
and L. Wang (Eds.), pages 91–132. Springer-Verlag, London, 2008.
[44] P.C. Young, H. Garnier, and M. Gilson. Simple refined IV methods
of closed-loop system identification. 15th IFAC Symposium on System
Identification (SYSID’2009), Saint-Malo (France), July 2009.