Fig. 1. Model structure selection, filter tuning, and parameter estimation.

Source publication
Conference Paper
Identification of linear time-invariant multivariable systems can best be understood as comprising three separate problems: selection of system model structure, filter design, and parameter estimation itself. In previous contributions we approached the first using matchable-observable models originally developed in the adaptive control literature,...

Context in source publication

Context 1
... one picks the list l which results in the best performance, according to a criterion J(·) established beforehand. The strategy is depicted in Figure 1. In order to keep the number of steps finite, an upper bound n_max on the McMillan degree is defined, and the model order is chosen over the interval p ≤ n ≤ n_max as a compromise between complexity and performance. ...
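A minimal sketch of this order-selection loop, assuming hypothetical estimate_model and criterion_J routines (neither is from the source paper): the search sweeps n from p to n_max, scores each candidate with J, and keeps the best-scoring model.

```python
# Sketch of the order-selection strategy described above (names are illustrative,
# not the authors' code): sweep the model order n over p..n_max, estimate a model
# for each candidate order, score it with the criterion J, and keep the best one.

def select_model_order(data, p, n_max, estimate_model, criterion_J):
    """estimate_model(data, n) -> model; criterion_J(model, data) -> cost (lower is better)."""
    best = None
    for n in range(p, n_max + 1):            # finite search: p <= n <= n_max
        model = estimate_model(data, n)      # estimate parameters at candidate order n
        cost = criterion_J(model, data)      # performance criterion fixed beforehand
        if best is None or cost < best[0]:
            best = (cost, n, model)
    return best                              # (cost, chosen order, model)
```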

Citations

... Bad choices of A_0 lead to inaccurate models, while good choices lead to excellent ones. From (18), it can be seen that if L = K, the prediction error e_y(k) becomes white noise. As a result, A_0 = A + KC is the optimal value of A_0, and the minimizer of J is a prediction error (PEM) estimator. ...
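For context, the whiteness argument rests on the standard steady-state innovations (Kalman predictor) form; the sketch below uses the usual textbook sign conventions, which may differ from those of the citing paper's equation (18):

```latex
\begin{aligned}
x(k+1) &= A\,x(k) + B\,u(k) + K\,e(k), \qquad y(k) = C\,x(k) + e(k),\\
\hat{x}(k+1) &= A\,\hat{x}(k) + B\,u(k) + L\bigl(y(k) - C\,\hat{x}(k)\bigr), \qquad e_y(k) = y(k) - C\,\hat{x}(k).
\end{aligned}
```

With L = K, the prediction error e_y(k) coincides with the innovations e(k) and is therefore white, which is what singles out the corresponding observer matrix A_0 as the optimal choice.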
Article
System identification approaches have been used to design an experiment, generate data, and estimate dynamical system models for Just Walk, a behavioral intervention intended to increase physical activity in sedentary adults. The estimated models serve a number of important purposes, such as understanding the factors that influence behavior and as the basis for using control systems as decision algorithms in optimized interventions. A class of identification algorithms known as matchable-observable linear identification has been reformulated and adapted to estimate linear time-invariant models from data obtained from this intervention. The experimental design, estimation algorithms, and validation procedures are described, with the best models estimated from data corresponding to an individual intervention participant. The results provide insights into the individual and the intervention, which can be used to improve the design of future studies.
... Since the accuracy of the estimated parameters depends on the characteristic polynomial of the observer, this polynomial is also optimized. In [2], the barycenter method performs this optimization. The method generates a set of curiosities (polynomials) and assigns a weight to each curiosity. ...
... It gives greater weight to the curiosities that achieve better performance (lower cost). In [3] and [2], three different ways of generating curiosities were proposed: ...
... 3) Dominant Pole Initialization [2]: curiosities are defined by two parameters, obtained by characterizing α as the characteristic polynomial of a low-pass filter with a pair of dominant complex poles, parametrized in terms of the cutoff frequency ω_c and the damping factor ζ. ...
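An illustrative sketch of the two ingredients just described, under stated assumptions rather than as the cited authors' implementation: the dominant-pole mapping discretizes a second-order continuous-time prototype at a sample period Ts (an assumption; the exact construction is not given in the snippet), and the barycenter combination uses an exponential-of-negative-cost weighting so that lower-cost curiosities receive larger weights.

```python
import numpy as np

# Illustrative sketch (not the cited authors' code) of (i) dominant-pole
# "curiosities" -- candidate observer characteristic polynomials built from a
# cutoff frequency w_c and damping factor zeta -- and (ii) the barycenter
# weighting, which favors curiosities with lower cost.

def dominant_pole_curiosity(w_c, zeta, Ts, n):
    """Monic degree-n polynomial whose dominant complex pole pair comes from
    sampling s^2 + 2*zeta*w_c*s + w_c^2 at period Ts; remaining poles at the origin."""
    wd = w_c * np.sqrt(max(1.0 - zeta**2, 0.0))
    r = np.exp(-zeta * w_c * Ts)
    poles = [r * np.exp(1j * wd * Ts), r * np.exp(-1j * wd * Ts)] + [0.0] * (n - 2)
    return np.real(np.poly(poles))           # characteristic polynomial coefficients

def barycenter(curiosities, costs, nu=1.0):
    """Weighted average of curiosities; lower cost => larger weight exp(-nu * cost)."""
    costs = np.asarray(costs, dtype=float)
    w = np.exp(-nu * (costs - costs.min()))   # shift costs for numerical stability
    w = w / w.sum()
    return np.average(np.asarray(curiosities), axis=0, weights=w)
```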
... In the case of a "black-box" approach, it is still possible to pick l_o using some structure selection method, e.g. [7], [13]. ...
... Build ϕ_i(t_k) as in (8). Solve for θ_i in (13). end for. Compute A_m, B_m, C_m, and D_m of model (1)-(2). ...
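The listing fragment above amounts to one linear least-squares problem per output channel. A hedged sketch of that pattern, with build_regressor standing in (hypothetically) for the regressor construction of equation (8):

```python
import numpy as np

# Generic per-output batch least-squares loop in the spirit of the listing above
# (illustrative only; build_regressor is a hypothetical stand-in for equation (8)).

def estimate_per_output(Y, U, build_regressor):
    """Y: (N, n_y) outputs, U: (N, n_u) inputs.
    build_regressor(i, Y, U) -> (Phi_i, y_i) for output channel i."""
    thetas = []
    for i in range(Y.shape[1]):                              # for i = 1 .. n_y
        Phi_i, y_i = build_regressor(i, Y, U)                # regressor stacked over time
        theta_i, *_ = np.linalg.lstsq(Phi_i, y_i, rcond=None)  # linear least squares
        thetas.append(theta_i)
    return thetas   # the model matrices A_m, B_m, C_m, D_m are assembled from these
```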
... In this paper we follow a line of research [10], [11], [12] that gave promising results in batch identification, and which has some advantages that we believe are worth exploring for recursive identification. It employs quasi-canonical MIMO state-space representations, proposed in the context of adaptive control, which have the properties of observability, matchpoint controllability, and matchability. ...
... for i = 1 to n_y do: Build ϕ_{i,k} as in (7). Compute the Kalman gain (11). Update θ_{i,k} and P_{i,k} using (10) and (12), respectively. ...
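A compact sketch of the standard recursive least-squares step that this loop describes for one output channel; it uses the textbook gain and covariance recursions with a forgetting factor, not necessarily the citing paper's exact equations (10)-(12):

```python
import numpy as np

# Textbook recursive least-squares update for one output channel, matching the
# structure of the loop above (Kalman-type gain, parameter and covariance update).

def rls_update(theta, P, phi, y, lam=0.99):
    """One RLS step with forgetting factor lam.
    theta: (p,) parameters, P: (p, p) covariance, phi: (p,) regressor, y: scalar output."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)        # gain vector ("Kalman gain")
    e = y - phi @ theta                  # a priori prediction error
    theta = theta + k * e                # parameter update
    P = (P - np.outer(k, Pphi)) / lam    # covariance update (P assumed symmetric)
    return theta, P
```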
... If necessary, the observability indices may be determined off-line, using some structure selection method, e.g. [3], [11]. ...
Conference Paper
This paper presents a recursive parameter estimation algorithm based on a matchable-observable parameterization of multivariable process models. As a consequence of the properties of the models used, no undesired pole-zero cancellations appear, the number of model parameters is not excessive, linear least-squares estimation methods are applicable, and parameter estimation can be accomplished without the need for iterative or nonlinear optimization. The performance of the algorithm developed is assessed in comparison with a well-established recursive subspace method, in a simulation study with time-invariant and time-varying scenarios. The results obtained demonstrate the accuracy and effectiveness of the proposed approach.
Conference Paper
In this paper an identification method for state-space LPV models is presented. The method is based on a particular parameterization that can be written in linear regression form, which enables model estimation to be handled using the Least-Squares Support Vector Machine (LS-SVM). The regression form has a set of design variables that act as filter poles on the underlying basis functions. In order to preserve the meaning of the kernel functions (crucial in the LS-SVM context), these are filtered by a 2D system with the predictor dynamics. A data-driven, direct-optimization-based approach for tuning this filter is proposed. The method is assessed on a simulated example, and the results obtained are twofold. First, in spite of the difficult nonlinearities involved, the nonparametric algorithm was able to learn the underlying dependencies on the scheduling signal. Second, a significant improvement in the performance of the proposed method is observed when compared with that achieved by placing the predictor poles at the origin of the complex plane, which is equivalent to an estimator based on an LPV auto-regressive (LPV-ARX) structure.