Tracking MPC with non-convex steady state admissible sets
Andres Cotorruelo*, Daniel Limon**, Emanuele Garone*, Daniel R. Ramirez**
* Service d'Automatique et d'Analyse des Systèmes, École polytechnique de Bruxelles, Université Libre de Bruxelles, 50, av. F.D. Roosevelt, CP 165/55, 1050 Brussels, Belgium (e-mails: {acotorru, egarone}@ulb.ac.be)
** Departamento de Ingeniería de Sistemas y Automática, Universidad de Sevilla, Escuela Superior de Ingenieros, Camino de los Descubrimientos s/n, 41092 Seville, Spain (e-mails: {dlm, danirr}@us.es)
Abstract: In this paper we propose an extension to the existing Model Predictive Control scheme for tracking. The extension handles the case where the set of steady-state admissible outputs is non-convex, by means of a transformation that maps this set into a convex one. In the proposed scheme, the cost function and constraints of the usual tracking MPC are modified so that the controller can drive the system to any point of the admissible steady-state domain without violating the constraints. The paper discusses the feasibility and stability of the proposed approach, and a final simulation demonstrates its effectiveness.
Keywords: Predictive control, Target Tracking, Transformations, Constraint satisfaction
problems.
1. INTRODUCTION
Model Predictive Control (MPC) is often the scheme of choice for constrained control problems, as reported in Morari and Lee (1999) and Qin and Badgwell (2003).
Most MPC schemes are designed for set-point regulation (typically to the origin) rather than for reference tracking, although it is not uncommon for the set point to change during operation. When this happens, the controller may lose feasibility under the new conditions (Bemporad et al., 1997).
A different approach to the tracking problem is the Extended Command Governor, first introduced in Gilbert and Ong (2009), in which the reference applied to the controller is the output of an optimization problem. Another approach, the Explicit Reference Governor (ERG), is able to solve the reference tracking problem for non-convex sets of steady-state admissible outputs (Nicotra and Garone, 2016). The ERG scheme explicitly calculates the applied reference without the need for on-line optimization.
The tracking MPC formulation was first proposed for linear systems in Limón et al. (2008) and further extended in subsequent publications. In Limon et al. (2018), the authors extend the formulation to nonlinear systems with piecewise constant reference signals. The main limitation of this formulation is its inability to deal with non-convex sets of admissible steady-state outputs, which the authors handle by restricting the operation of the MPC to a convex subset of admissible outputs. This paper overcomes this limitation by means of a transformation between the non-convex set of admissible steady-state outputs and a convex parameter set.
⋆ This research has been funded by the FNRS MIS "Optimization-free Control of Nonlinear Systems subject to Constraints", Ref. F.4526.17, and by Ministerio de Economía y Competitividad of Spain under project DPI2016-76493-C3-1-R, co-financed by European FEDER Funds.
The paper is organized as follows. In Section 2 the problem is stated, followed by a brief summary of the tracking MPC formulation and its application to a case study in Section 3. In Section 4 the formulation is extended to deal with non-convex sets of admissible steady-state outputs, and it is applied to the same case study in Section 5. The paper ends with conclusions in Section 6.
1.1 Notation
I_n ∈ R^{n×n} is the n-dimensional identity matrix; ||x||_P is the weighted Euclidean norm ||x||_P = sqrt(x^T P x), with x ∈ R^n and P ∈ R^{n×n}; for column vectors a and b, (a, b) denotes [a^T b^T]^T; N_0 denotes the set of natural numbers including zero, N_0 = N ∪ {0}; Im(f) denotes the image of a function f.
2. PROBLEM STATEMENT
Let a discrete-time system be defined by the following
equations:
x(t+1) = f(x(t), u(t))   (1)
y(t) = h(x(t), u(t))
where x ∈ R^n is the system state, u ∈ R^m the control input, and y ∈ R^p the system output.
We denote the steady state, input and output of the system by x_s, u_s and y_s, respectively. By definition, these vectors satisfy:
x_s = f(x_s, u_s),   y_s = h(x_s, u_s).   (2)
We assume that system (1) is subject to state and input constraints of the form:
(x(t), u(t)) ∈ Z,  t ∈ N_0   (3)
where Z ⊂ R^{n+m} is a closed nonempty set.
In order to avoid loss of controllability due to active constraints, the following restricted constraint set is defined:
Ẑ = {z : z + e ∈ Z, ∀ |e| ≤ ε}   (4)
where ε > 0 is an arbitrarily small scalar. The admissible set of equilibrium states for which the constraints are inactive is defined as:
Z_s = {(x, u) ∈ Ẑ : x = f(x, u)}   (5)
Y_s = {y = h(x, u) : (x, u) ∈ Z_s}   (6)
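For intuition, membership in the shrunk set Ẑ of (4) can be sketched for a simple box constraint set. The box, the value of ε, and the use of the infinity norm for e are illustrative assumptions made here, not part of the paper:

```python
# Membership tests for a box Z = [-1, 1]^2 and its eps-shrunk version
# Z_hat of Eq. (4). With the infinity norm for e (an assumption), the
# condition "z + e in Z for all |e| <= eps" reduces, for a box, to
# shrinking every bound by eps.
EPS = 1e-2

def in_Z(z):
    # z inside the original box Z
    return all(-1.0 <= zi <= 1.0 for zi in z)

def in_Z_hat(z):
    # z inside the shrunk box Z_hat
    return all(-1.0 + EPS <= zi <= 1.0 - EPS for zi in z)

# a point on the boundary of Z belongs to Z but not to Z_hat
z = (1.0, 0.0)
print(in_Z(z), in_Z_hat(z))
```

This makes explicit why ε buys a margin: equilibria in Ẑ keep all constraints strictly inactive.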
It is assumed that Z_s is nonempty and that there exist a locally Lipschitz continuous function g_x : Y_s → R^n and a continuous function g_u : Y_s → R^m such that:
x_s = g_x(y_s),  u_s = g_u(y_s),  Im((g_x, g_u)) ⊆ Z_s   (7)
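For a concrete (hypothetical) instance of the maps in (7), consider a scalar LTI system x(t+1) = a x(t) + b u(t) with y(t) = x(t); the equilibrium conditions (2) then give g_x and g_u in closed form:

```python
# Hypothetical scalar LTI system x+ = a*x + b*u, y = x (not the paper's
# example). The steady-state maps of Eq. (7) follow from solving
# x_s = a*x_s + b*u_s together with y_s = x_s.
a, b = 0.9, 0.5

def g_x(ys):
    # at equilibrium the output equals the state, so g_x is the identity here
    return ys

def g_u(ys):
    # solve x_s = a*x_s + b*u_s for u_s, with x_s = ys
    return (1.0 - a) / b * ys

# sanity check: (g_x(ys), g_u(ys)) is indeed a fixed point of the dynamics
ys = 2.0
xs, us = g_x(ys), g_u(ys)
assert abs(a * xs + b * us - xs) < 1e-12
```

For multivariable LTI systems the same idea amounts to solving the linear system [A − I, B; C, D] (x_s, u_s) = (0, y_s).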
3. TRACKING MPC
In (Limon et al., 2018) the authors introduce an MPC formulation of the tracking problem for nonlinear systems. The main difference with respect to traditional MPC schemes is the addition of a virtual reference, y_s, to the set of decision variables.
The evolution of the system tends towards this virtual reference, while y_s itself is steered towards the actual reference y_t. This is done by adding an extra term to the cost function that penalizes the deviation of the virtual reference from the actual reference. This is the so-called offset cost function, denoted by V_O(·) : R^p → R. V_O is assumed to be convex, positive definite and subdifferentiable, as discussed in Ferramosca et al. (2011).
Accordingly, the typical cost function in the tracking MPC formulation is:
V_N(x, y_t; u, y_s) = Σ_{j=0}^{N−1} ℓ(x(j) − x_s, u(j) − u_s) + V_O(y_s − y_t)   (8)
where y_t is the reference, u = (u(0), u(1), ..., u(N−1)) is the sequence of calculated control inputs, and ℓ : R^n × R^m → R is the stage cost function.
The other major difference with respect to classical MPC is that the predicted evolution is forced to reach the artificial steady state, x_s, after N steps. This is equivalent to choosing the terminal control law as the steady-state input, u_s = g_u(y_s), and Γ = {(x, y_s) : x = g_x(y_s), y_s ∈ Y_t} as the terminal set.
At this point, the optimization problem to be solved at each time step by the MPC can be formulated as:
min_{u, y_s} V_N(x, y_t; u, y_s)   (9)
s.t.
x(0) = x,   (10)
x(j+1) = f(x(j), u(j)),  j = 0, ..., N−1   (11)
(x(j), u(j)) ∈ Z,  j = 0, ..., N−1   (12)
x_s = g_x(y_s),  u_s = g_u(y_s)   (13)
y_s ∈ Y_t   (14)
x(N) = x_s   (15)
where Y_t is a suitable convex inner approximation of Y_s.
Under these conditions, Limon et al. (2018) prove the recursive feasibility of the MPC scheme and show that, for a constant reference y_t, the output y asymptotically converges to the approximation of y_t in Y_t that minimizes V_O(y − y_t). Clearly, the main limitation of this scheme is that, if Y_s is markedly non-convex, the use of Y_t can be very conservative.
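To fix ideas about the structure of problem (9)–(15), the following sketch builds the predicted trajectory, checks the constraints, and evaluates the cost V_N for a candidate pair (u, y_s) on a hypothetical scalar integrator. All system data and weights here are illustrative assumptions, not the paper's example; an actual MPC would minimize this cost over u and y_s with a numerical solver:

```python
# Hypothetical scalar integrator x+ = x + u with y = x, so g_x(ys) = ys and
# g_u(ys) = 0; constraints |x| <= 1 and |u| <= 0.1 (illustrative only).
N = 5

def predict(x0, u):
    # roll the model forward over the horizon, Eq. (11)
    traj = [x0]
    for uj in u:
        traj.append(traj[-1] + uj)
    return traj

def cost_and_feasibility(x0, yt, u, ys, Q=1.0, R=0.1, T=10.0):
    traj = predict(x0, u)
    xs, us = ys, 0.0                                # g_x, g_u, Eq. (13)
    feasible = all(abs(x) <= 1.0 for x in traj[:-1])
    feasible &= all(abs(uj) <= 0.1 for uj in u)     # (x(j), u(j)) in Z, Eq. (12)
    feasible &= abs(traj[-1] - xs) < 1e-9           # terminal constraint, Eq. (15)
    stage = sum(Q * (x - xs) ** 2 + R * (uj - us) ** 2
                for x, uj in zip(traj[:-1], u))     # stage cost sum in Eq. (8)
    offset = T * (ys - yt) ** 2                     # offset cost V_O in Eq. (8)
    return stage + offset, feasible

# a candidate that reaches ys = 0.5 from x0 = 0 in 5 equal steps toward yt = 0.8
V, ok = cost_and_feasibility(0.0, 0.8, [0.1] * 5, 0.5)
```

The offset term T(y_s − y_t)² is what pulls the virtual reference toward the actual one; the terminal equality is what keeps the problem recursively feasible when y_t jumps.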
The goal of this paper is to extend the MPC scheme proposed by Limon et al. (2018) to enable it to directly make use of possibly non-convex sets Y_s.
In order to understand the need in Limon et al. (2018) for a convex Y_t, it is instructive to see what happens when a non-convex Y_s is used in (14).
As an illustrative example, consider the following discrete-time LTI system:
x(t+1) = [1 1 0 0; 0 1 0 0; 0 0 1 1; 0 0 0 1] x(t) + [0.5 0; 1 0; 0 0.5; 0 1] u(t)   (16)
y(t) = [1 0 0 0; 0 0 1 0] x(t)
The system is subject to the following constraints:
−0.1 ≤ u(j) ≤ 0.1,  j = 0, ..., N−1   (17)
y(j) ∈ Ω,  j = 0, ..., N−1   (18)
y_s ∈ Ω   (19)
where Ω is the set represented in yellow in Figures 1, 3 and 6.
This example can be interpreted as the control of the position of a planar robot confined in Ω. For the implementation of the controller, the following stage cost and offset cost functions are used:
ℓ(x(j) − x_s, u(j) − u_s) = ||x(j) − x_s||²_Q + ||u(j) − u_s||²_R   (20)
V_O(y_s − y_t) = ||y_s − y_t||²_T   (21)
with weighting matrices Q = I_4, R = 0.1 I_2 and T = 10 I_2. A prediction and control horizon of N_c = N_p = N = 5 steps is used.
Figure 1 shows the evolution of the system. As can be seen, the controller is unable to find a path converging to the desired reference, and instead steers the system to another equilibrium point. In the following section, a modification of the tracking MPC scheme is presented with which the controller is able to steer the system to the desired set point.
Fig. 1. System output (solid black line, black diamonds), trajectory of y_s (solid blue line, blue circles), starting point (red diamond) and reference (green triangle) in the original space, solved with the original tracking MPC scheme.
4. EXTENSION TO DEAL WITH NON-CONVEX Y_s
Let φ(·) : R^{n_y} → R^{n_θ} be a Lipschitz continuous function that maps the set of steady-state admissible references Y_s into a convex set Θ, i.e., a function such that the set Θ = {θ : θ = φ(y), y ∈ Y_s} is convex. We denote by φ^{−1} the inverse transformation, which satisfies:
φ^{−1}(φ(y)) = y
At this point the main idea is to define the offset cost function as a norm-based distance in the domain Θ, i.e.
V_O = V_O(θ − φ(y_t))   (22)
In the previous formulation, the offset cost function penalizes the deviation of the virtual reference y_s from the setpoint y_t. In this formulation, the penalization is on the deviation of the transformed virtual reference θ = φ(y_s) from the transformed setpoint φ(y_t).
In the same fashion, constraints (13) need to be changed to take the new mapping into account. This is done as follows:
x_s = ĝ_x(θ),  u_s = ĝ_u(θ)   (23)
where ĝ_x(·) = g_x(φ^{−1}(·)) and ĝ_u(·) = g_u(φ^{−1}(·)).
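Equation (23) is plain function composition, which can be sketched as follows; the one-dimensional maps φ, g_x, g_u used here are hypothetical stand-ins chosen only so the composition is visible:

```python
import math

# Hypothetical one-dimensional mapping phi(y) = y**3 (invertible) and
# steady-state maps for the toy integrator x+ = x + u, y = x.
def phi(y):
    return y ** 3

def phi_inv(t):
    # real cube root, handling negative arguments
    return math.copysign(abs(t) ** (1.0 / 3.0), t)

def g_x(ys):
    return ys

def g_u(ys):
    return 0.0

# Eq. (23): hat{g}_x = g_x o phi^{-1}, hat{g}_u = g_u o phi^{-1}
def g_x_hat(theta):
    return g_x(phi_inv(theta))

def g_u_hat(theta):
    return g_u(phi_inv(theta))

# the composed map recovers the equilibrium state from the parameter theta
assert abs(g_x_hat(phi(0.5)) - 0.5) < 1e-9
```

The optimizer thus works entirely in θ, and the original equilibrium pair is recovered through ĝ_x, ĝ_u.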
The proposed modification of the tracking MPC formulation is the following:
min_{u, θ} V_N(x, y_t; u, θ) = Σ_{j=0}^{N−1} ℓ(x(j) − x_s, u(j) − u_s) + V_O(θ − φ(y_t))   (24)
s.t.
x(0) = x,   (25)
x(j+1) = f(x(j), u(j)),  j = 0, ..., N−1   (26)
(x(j), u(j)) ∈ Z,  j = 0, ..., N−1   (27)
x_s = ĝ_x(θ),  u_s = ĝ_u(θ)   (28)
θ ∈ Θ   (29)
x(N) = x_s   (30)
With the proposed modification, θ is steered towards φ(y_t) while remaining in the interior of Θ. This means that y_s converges to y_t along a trajectory that never leaves Y_s.
Remark 1. In general, the construction of the mapping φ might not be straightforward. As noted in Section 6, the characterization of classes of non-convex sets that can easily be mapped to convex ones is left for future research.
5. ILLUSTRATIVE EXAMPLE
In this section, the proposed formulation is implemented
for system (16) subject to constraints (17)–(19).
The first step is to find the mapping function φ and its inverse φ^{−1}. It is easy to realize that Ω is convex in polar coordinates; therefore, the following transformation is defined:
y_s = φ^{−1}(θ) = [ ((θ_1 + 1)/2) cos((3π/2) θ_2) ; ((θ_1 + 1)/2) sin((3π/2) θ_2) ]   (31)
θ = φ(y_s) = [ 2||y_s|| − 1 ; (2/(3π)) atan2(y_{s,2}, y_{s,1}) ]   (32)
φ(Y_s) = Θ = {θ ∈ R² : 0 ≤ θ_i ≤ 1, i = 1, 2}   (33)
This transformation maps the set Ω to the unit box [0, 1]²; the transformation of the space is shown in Figure 2.
Fig. 2. Transformation of the space.
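A sketch of the transformation pair (31)–(32) in code, under the assumption that the angle returned by the four-quadrant arctangent is wrapped to [0, 2π) so that the whole angular extent of Ω is covered:

```python
import math

# Mapping (31)-(32) for the annular-sector set Omega of the example:
# radius in [1/2, 1], angle in [0, 3*pi/2].
def phi(ys):
    # Eq. (32): (theta_1, theta_2) from (y_s1, y_s2)
    r = math.hypot(ys[0], ys[1])
    ang = math.atan2(ys[1], ys[0]) % (2 * math.pi)   # wrap to [0, 2*pi)
    return (2 * r - 1, 2 * ang / (3 * math.pi))

def phi_inv(theta):
    # Eq. (31): (y_s1, y_s2) from (theta_1, theta_2)
    r = (theta[0] + 1) / 2
    ang = 3 * math.pi / 2 * theta[1]
    return (r * math.cos(ang), r * math.sin(ang))

# round trip on an interior point of Theta = [0, 1]^2
theta = (0.4, 0.7)
back = phi(phi_inv(theta))
assert all(abs(a - b) < 1e-9 for a, b in zip(theta, back))
```

The round-trip check confirms that φ and φ^{−1} are mutual inverses on Θ, which is what constraint (28) relies on.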
For the simulation, the same functions and parameters as in Section 3 are used. Figures 3 and 4 report the evolution of the system in the original and transformed spaces, respectively; Figure 5 shows the evolution of the inputs. One can see that, with the proposed scheme, the trajectory converges to the desired steady state. Finally, Figure 6 shows how the controller drives the system to the nearest admissible steady state when the reference is inadmissible.
In Figures 3, 4 and 6 the system output is depicted as a
solid black line with black diamonds, the trajectory of ys
as a solid blue line with blue circles, the starting point as
a red diamond, and the reference as a green triangle.
6. CONCLUSION
In this paper an extension to the tracking MPC proposed in Limon et al. (2018) has been presented. With it, the formulation is able to deal with non-convex sets Y_s,
Fig. 3. System output, evolution of ys, starting point
and reference in the original space solved with the
proposed modification.
Fig. 4. System output, evolution of ys, starting point and
reference in the transformed space.
Fig. 5. Evolution of the input signal.
Fig. 6. System output, evolution of ys, starting point and
reference in the original space with an inadmissible
reference.
reducing possible conservatism. Future research will focus on characterizing classes of non-convex domains that can be mapped onto convex domains.
REFERENCES
Bemporad, A., Casavola, A., and Mosca, E. (1997). Nonlinear control of constrained linear systems via predictive reference management. IEEE Transactions on Automatic Control, 42(3), 340–349.
Ferramosca, A. et al. (2011). Model predictive control of systems with changing setpoints.
Gilbert, E.G. and Ong, C.J. (2009). An extended command governor for constrained linear systems with disturbances. In Proceedings of the 48th IEEE Conference on Decision and Control, held jointly with the 2009 28th Chinese Control Conference (CDC/CCC 2009), 6929–6934. IEEE.
Limón, D., Alvarado, I., Alamo, T., and Camacho, E.F. (2008). MPC for tracking piecewise constant references for constrained linear systems. Automatica, 44(9), 2382–2387.
Limon, D., Ferramosca, A., Alvarado, I., and Alamo, T. (2018). Nonlinear MPC for tracking piece-wise constant reference signals. IEEE Transactions on Automatic Control.
Morari, M. and Lee, J.H. (1999). Model predictive control: past, present and future. Computers & Chemical Engineering, 23(4–5), 667–682.
Nicotra, M. and Garone, E. (2016). Constrained control of nonlinear systems: the explicit reference governor and its application to unmanned aerial vehicles.
Qin, S.J. and Badgwell, T.A. (2003). A survey of industrial model predictive control technology. Control Engineering Practice, 11(7), 733–764.