IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 43, NO. 7, JULY 1998 983
Separate-Bias Estimation with
Reduced-Order Kalman Filters
David Haessig and Bernard Friedland
Abstract—This paper presents the optimal two-stage Kalman filter for
systems that involve noise-free observations and constant but unknown
bias. Like the full-order separate-bias Kalman filter presented in 1969 [1],
this new filter provides an alternative to state vector augmentation and
offers the same potential for improved numerical accuracy and reduced
computational burden. When dealing with systems involving accurate,
essentially noise-free measurements, this new filter offers an additional
advantage, a reduction in filter order. The optimal separate-bias reduced-
order estimator involves a reduced-order filter for estimating the state,
the order equalling the number of states less the number of observations.
Index Terms—Reduced-order Kalman filter, separate-bias estimation,
two-stage filtering.
I. INTRODUCTION
Consider the problem of estimating the state of a linear dynamic
system influenced by a constant but unknown bias vector b. One
method for handling this estimation problem is through state aug-
mentation. The bias vector is appended to the state vector to form
the augmented state vector, and the bias dynamic equation, \dot{b} = 0,
is appended to the original process dynamics equation to form an
augmented dynamic system. A Kalman filter applied to this new
system then estimates the bias terms as well as the state of the original
problem. However, when the number of bias terms is comparable
to or larger than the number of states of the original problem,
the filter implementation involves computations with much larger
matrices, which increases the likelihood of numerical conditioning
difficulties, and in some cases precludes their solution and the
accurate estimation of the state and bias. This motivated an effort,
in the late 1960s, to develop a method which avoided the numerical
inaccuracies introduced by computations with large matrices. The
outcome of the effort is a technique known as separate-bias estimation
[1], also more recently called two-stage estimation. This method
separates the estimation of the bias from that of the dynamic state,
thereby reducing the size of the matrices and vectors involved in the
filtering computations. Two separate, uncoupled filters run in parallel
to generate the optimal estimate of the bias and of the “bias-free”
state (i.e., the state that would result if the bias were zero). The
optimal estimate of the actual state
is then generated by adding
a bias correction term to the “bias-free” state estimate, downstream
of both stages of the filter. Thus, the separate-bias filter includes
two filters that are essentially uncoupled, which gives rise to its
potential for improved numerical accuracy, conditioning, and reduced
computational burden.
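The augmentation approach described above can be made concrete with a small numerical sketch. The discrete-time example below is purely illustrative (all model values are assumptions, not taken from the paper): it stacks a scalar state with a constant bias, runs a single Kalman filter on the augmented system, and recovers the bias at the cost of propagating the larger augmented covariance.

```python
import numpy as np

# Illustrative discrete-time sketch (values are assumptions, not from the paper):
# a scalar state x driven by a constant unknown bias b.  State augmentation
# stacks z = [x, b] and runs one Kalman filter on the augmented system.
rng = np.random.default_rng(0)

b_true = 2.0                      # constant unknown bias
F = np.array([[0.95, 1.0],        # x[k+1] = 0.95*x[k] + b + w[k]
              [0.0,  1.0]])       # b[k+1] = b[k]  (constant-bias dynamics)
H = np.array([[1.0, 0.0]])        # only x is measured
Q = np.diag([0.01, 0.0])          # no process noise drives the bias state
R = np.array([[0.1]])

z = np.array([0.0, b_true])       # true augmented state
zhat = np.zeros(2)                # augmented estimate
P = 10.0 * np.eye(2)              # large initial uncertainty

for _ in range(300):
    z = F @ z + np.array([rng.normal(0.0, 0.1), 0.0])      # simulate
    y = H @ z + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)  # noisy measurement
    zhat = F @ zhat                                        # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                                    # update
    K = P @ H.T @ np.linalg.inv(S)
    zhat = zhat + K @ (y - H @ zhat)
    P = (np.eye(2) - K @ H) @ P

print(f"bias estimate: {zhat[1]:.2f} (true value {b_true})")
```

Because the bias state carries no process noise, its covariance decreases monotonically and the estimate settles on the true constant; the separate-bias arrangement obtains the same estimates without ever forming the full augmented covariance.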
The separate-bias filter as originally given in [1] has been the focus
of significant interest and activity. Scores of references to the original
paper have appeared in the literature (see the reference list given in
[2]). And as is often the case with techniques that attract significant
use, there have been a number of alternate derivations and extensions.
Mendel [3] in 1978 and Ignagni [4] in 1981 developed two different
Manuscript received April 10, 1996; revised December 19, 1996.
D. Haessig is with GEC-Marconi Hazeltine Corporation, Wayne, NJ 07474-
0932 USA (e-mail: haessig@systems.gec.com).
B. Friedland is with the Department of Electrical and Computer Engineer-
ing, New Jersey Institute of Technology, Newark, NJ 07102 USA.
Publisher Item Identifier S 0018-9286(98)04642-X.
derivations of the separate-bias filter. In both, the bias terms are
assumed to be unknown and constant. In recent years, efforts have
focused on the ramifications of nonconstant bias terms, i.e., biases that
vary as a random walk process, \dot{b} = w, where w is a vector of
zero-mean, Gaussian, white noise processes. Ignagni was the first to
consider this case in [5], where he develops a two-stage estimator
that is similar to the original in form and complexity, but which
is suboptimal. He points out, however, that in many applications
the bias terms vary slowly so that the degree of suboptimality is
insignificant or acceptable. Alouani et al. in [6] propose a different
two-stage estimator which is optimal if an algebraic constraint on
the bias and state process noise covariances is satisfied. However,
since this constraint is almost never satisfied in practice, it was
their conclusion that all practical two-stage estimators based on
their proposed solution would be suboptimal. An optimal two-stage
filter which does not impose a constraint on the process noise
covariances was presented by Hsieh et al. in [7]. The proposed filter
is significantly more complex than the original. Hsieh points
out that the additional complexity and computational burden could
be unwarranted in some cases. It was not stated whether the
original advantages of the two-stage estimator—improved numerical
conditioning and potentially reduced computational burden—remain
intact in Hsieh's proposed filter. Nevertheless, the cost for achieving
optimality appears to be the loss of at least some part of those original
advantages.
As noted, all previous efforts to extend the original separate-
bias estimator have concerned the ramifications of random bias.
In this paper we take the separate-bias estimator in a completely
new direction and consider a different case—that of noise-free
measurements. Bias state process noise is again assumed to be zero.
It is well known that when measurement noise is zero, the optimal
Kalman filter is one whose order is lower than that of the original
dynamic process [8]. The order of the filter is reduced by the number
of noise-free measurements available. This is also the case with the
separate-bias Kalman estimator when noise-free measurements are
involved. The order of the “bias-free” state estimator is reduced by
the number of noise-free measurements. In this paper the reduced-
order form of the separate-bias estimator is developed for systems in
which the entire measurement vector is noise-free.
There are many practical examples of systems that could benefit
from this type of filter. Consider, for instance, the laboratory calibra-
tion of a strapdown inertial system. The measurement vector consists
of the position and attitude of the unit being calibrated. All six of
these measurements are essentially known—the linear positions being
identically zero because the unit does not move translationally, and
the angular positions being measured very precisely using a rotary
position table. The calibration task involves the estimation of 18
constants (two scale-factor errors, biases, and misalignments per axis)
and 12 states (the angular and linear positions and velocities). With
a centralized Kalman filter developed using state augmentation or a
separate-bias Kalman filter, the dimension of the filter is 30. With
a reduced-order centralized or a reduced-order separate-bias Kalman
filter, the dimension of the filter is reduced to 24. The computational
advantages are obvious.
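A rough operation count illustrates the savings for this calibration example. Covariance propagation dominates the filter cost and scales roughly as the cube of the filter order; the cubic scaling is a standard back-of-the-envelope estimate, not a figure from the paper.

```python
# Rough cost comparison for the calibration example above: covariance
# propagation dominates the filter cost and scales roughly as n**3 in
# the filter order n (a back-of-the-envelope estimate).
n_full = 30      # 12 dynamic states + 18 calibration constants
n_reduced = 24   # order drops by the 6 noise-free measurements

cost_ratio = n_full ** 3 / n_reduced ** 3
print(f"approximate covariance-propagation cost ratio: {cost_ratio:.2f}")

# distinct entries of the symmetric covariance matrix that must be propagated
def cov_entries(n: int) -> int:
    return n * (n + 1) // 2

print(cov_entries(n_full), cov_entries(n_reduced))
```

By this estimate the reduced-order filter is roughly twice as cheap per covariance update, in addition to propagating 300 rather than 465 distinct covariance entries.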
This paper is organized as follows. Section II contains a statement
of the problem and the development of the main result. Section III
contains the conclusions. In the Appendix we present the general
form of the reduced-order Kalman filter, which serves as a starting
point for the derivation of the separate-bias reduced-order Kalman
0018–9286/98$10.00 1998 IEEE
estimator.
II. MAIN RESULT
The problem under consideration is that of simultaneously estimat-
ing the state x and bias vector b of a linear process
\dot{x} = Ax + Bu + Eb + v    (1)
with observations
y = Hx    (2)
where x is the state vector, b is a vector of constant but unknown
biases, u is the control vector, y is the measurement vector, v is a
white Gaussian noise process with spectral density matrix Q, and
where A, B, E, and H are coefficient matrices, possibly time-varying.
To develop the separate-
bias form of the reduced-order Kalman filter for this system, we first
apply the general reduced-order Kalman filter (given in the Appendix)
to a system involving an unknown bias vector. This yields a filter
arrangement consisting of two coupled reduced-order Kalman filters,
one providing the optimal estimate of the unmeasured state
and
the other the optimal estimate of the bias. From this coupled filter,
the separate-bias reduced-order Kalman filter is derived.
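The class of systems described by (1) and (2) can be sketched numerically before turning to the derivation. In the fragment below, all matrix values are illustrative assumptions (not taken from the paper); it simulates a stable second-order system driven by white process noise and a constant unknown bias, with a noise-free observation of the first state.

```python
import numpy as np

# Illustrative simulation of a system of the form (1)-(2): linear dynamics
# driven by white process noise and a constant unknown bias b, observed
# through a noise-free measurement y = H x.  All values are assumptions.
rng = np.random.default_rng(1)
dt = 0.01
A = np.array([[0.0,  1.0],
              [-1.0, -0.5]])      # stable second-order dynamics
E = np.array([[0.0],
              [1.0]])             # channel through which the bias enters
H = np.array([[1.0, 0.0]])        # noise-free measurement of the first state
b = np.array([0.5])               # constant unknown bias (db/dt = 0)

x = np.zeros(2)
for _ in range(2000):
    v = 0.05 * np.sqrt(dt) * rng.normal(size=2)  # Euler-discretized white noise
    x = x + (A @ x + E @ b) * dt + v
    y = H @ x                                    # exactly H x: no measurement noise

# the constant bias pushes the state toward the offset -inv(A) @ E @ b
x_ss = -np.linalg.solve(A, E @ b)
print(y, x_ss)
```

The simulated measurement settles near the bias-induced offset, which is exactly the kind of coupling between state and bias that the estimators below must disentangle.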
A. Reduced-Order Kalman Filter Applied to a System with Bias
1) State Equations: The application of the reduced-order Kalman
filter to a system with unknown bias begins with the partitioning of
the state vector of the system, (1) and (2), into directly measured and
unmeasured substates
(3)
(4)
The more general observation equation (2) can be converted into this
simpler form (4) by defining a new substate and applying the corresponding
change of variable to both (1) and (2). The
bias vector
is appended to the state vector of (3), forming the new
state vector. In accordance with the reduced-order filter
given in the Appendix, we define the subvector of unmeasured states
to contain the unmeasured dynamic states and the unknown
bias vector. The subvector of directly measured states is set equal to
the measurement y.
Equation (3) then becomes
where use has been made of the bias dynamic equation, \dot{b} = 0. The
submatrices of (41) in the general reduced-order Kalman filter are
therefore
(5)
Fig. 1. Coupled reduced-order bias and state estimators.
The gain matrix, its time derivative, and the vector appearing in (43)
and (44) are partitioned accordingly.
These and the partitioned matrices in (5), when substituted into the
general reduced-order Kalman filtering equations (42)–(48), yield
(6)
(7)
(8)
(9)
(10)
This form of the reduced-order Kalman filter, (6)–(10), has the
structure shown in Fig. 1. The bias and state estimators are mutually
coupled through the filter dynamic equations, (9) and (10), and
through their covariance propagation equations, as shown below.
2) Variance Equations: The matrix of (46) is partitioned in
accordance with the substates contained in the vector of unmeasured
states: the diagonal blocks are the autocovariance of the estimate of
the state and the autocovariance of the estimate of the bias, and the
off-diagonal block is the cross covariance of the two estimates.
Using the submatrices (5) and the covariance matrix equation (46)
we derive
(11a)
(11b)
(11c)
The partitioned Kalman matrices in (7) and (8), as defined by (45),
can be expressed as
(12a)
(12b)
Fig. 2. Separate-bias reduced-order Kalman filter.
with
(13)
Thus, (6)–(10) and (11)–(13) define the (coupled) reduced-order
Kalman filter for systems with bias.
B. Separate-Bias Reduced-Order Kalman Filter
The desired form for the separate-bias reduced-order Kalman filter
is as shown in Fig. 2. In this form the mutual coupling is eliminated.
In the full-order case, the input to the separate-bias estimator is the
residual of the “bias-free” estimator [9]. Since there is no residual
in a reduced-order filter, both the measurement
and the “bias-
free” estimator output, which together are somewhat equivalent to the
“bias-free” residual, are used as inputs to the separate-bias estimator.
1) “Bias-Free” State Estimator: It is noted that (11b) and (11c)
together are homogeneous in the bias and cross covariances. Hence, if
both are zero initially, then
(14)
holds for all subsequent time, and the state covariance satisfies
(15)
The interpretation of (14) and (15) is that if the bias is perfectly
known at the initial time, then by virtue of \dot{b} = 0, it is perfectly
known thereafter, and the estimation problem reduces to that in which
there is no bias. The bias-free estimator is therefore the reduced-order
Kalman filter, (7), (9), and (12a), with the simplifications that result
when the bias and cross covariances vanish
(16a)
(16b)
(16c)
where the covariance is given by (15). (Wiggles rather than hats are
used to denote the new variables of the “bias-free” filter.)
2) Transformation of State Equations: As in the case of the full-
order separate-bias Kalman filter, we introduce the transformation
(17)
where the new variable is the estimate of the state that would result
if no bias were present. The matrix of bias-correction weights is to be
determined such that this relationship (17) holds. To this
end, we substitute (7), (8), and (16a) into (17), yielding
(18)
For this expression to hold at all times, independent of the estimator
states, the terms multiplying the bias estimate must cancel; thus
(19)
which leaves
(20)
In order for (19) and (20) to hold we must have
(21)
Into this last equation we substitute (9), (10), and (16b). Then, using
(19), (21), and (8) to simplify the result, one finds
Finally, using (19) to eliminate , we have
which is satisfied when
(22)
This is a matrix differential equation governing the bias correction
weighting matrix. Since the eigenvalues of its coefficient matrix
are the same as those of the “bias-free” filter [see
(16b)], (22) is guaranteed to be stable.
3) Transformation of Variance: With the state equations decou-
pled, what remains is the decoupling of the equations governing the
covariance of the “bias-free” and separate-bias filters. The covariance,
defined originally by (46) and in partitioned form by (11), is
expressed in terms of the covariance
that applies when the bias
is known, plus a correction which depends on the covariance of the
bias estimate
. The covariance matrix that applies when the bias
is perfectly known is to be denoted by
(23)
where the state covariance block is the solution to (15) with its initial
value given. It is noted that it is also the solution to (46) when
(24)
If the bias is not perfectly known, however, (24) is not the correct
initial condition. Instead, the initial covariance will be
(25)
where the initial bias and cross covariances may or may not be zero.
The question is how much the covariance blocks change as the result
of changing the
initial conditions from (24) to (25). This is answered by making use
of the fact that if
is a solution to (46), then any other solution can
be expressed as follows [10]:
(26)
where
(27)
(28)
In (26)–(28), the matrix can be partitioned as
(29)
Fig. 3. Separate-bias reduced-order Kalman filter.
By substituting into (27) the definitions given by (47) and the
submatrices in (5), one finds that
(30a)
i.e., a constant matrix (30b)
Similarly, (28) becomes
(31)
Then, by substituting (23) and (29) into (26)
(32)
Hence, with (32) it is possible to avoid the solution of the mutually
coupled equations (11a)–(11c) to determine the covariance blocks.
Instead, one needs only to compute the decoupled quantities, using the
equations which are not mutually coupled: (15), (30a), (30b), and (31).
The initial conditions of (30) and (31) must be properly selected so
that (26) reproduces the initial covariance (25).
These initial conditions are not unique. For the important special
case in which the initial cross covariance is zero, i.e., when there is
no a priori correlation between the state and bias, one choice of initial
conditions is
(33)
In this case
for all , and
(34)
Now, upon use of (16c) and (33), one finds that (30a) reduces to
(35)
Similarly, (31) and (12b) become
(36)
(37)
Note that (35) is the same as the matrix differential equation (22)
governing the bias correction matrix. Hence, by choosing equal initial
conditions, the two solutions coincide for all time. This simultaneously
satisfies the state transformation relationship (19) and the variance
transformation equations given by (33)–(37).
4) Separate-Bias Estimator: The dependence of the separate-bias
estimator on the optimal state estimate
is eliminated by substituting
(17) into (10), yielding
(38)
This equation, in conjunction with (8), (35), (36), and (37),
defines the bias estimation portion of the separate-bias estimator
shown in the upper half of Fig. 3, with the appropriate initial
conditions. The
“bias-free” estimator, shown in the lower half of Fig. 3, is given by
(16b), (16c), and (15). The initial conditions follow from (16a)
and (32).
5) Steady-State Observer: In certain applications the accuracy
and complexity of the time-varying Kalman filter may not be needed,
and in its place a steady-state observer may suffice. The steady-state
separate-bias reduced-order observer has the same structure as that
shown in Fig. 3; however, the Kalman gain matrices are
replaced by constant matrices determined in some other way, e.g.,
pole placement. The bias correction matrix of (22) then becomes
a constant matrix, obtained by setting the derivative in (22) to zero.
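As an illustration of the pole-placement alternative, the following sketch computes a constant observer gain so that the error dynamics have prescribed eigenvalues; the system matrices, pole locations, and the use of Ackermann's formula are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Constant-gain observer by pole placement for a small illustrative system.
# Ackermann's formula for the observer gain:
#   L = p(A) @ inv(O) @ e_n,
# where p(s) is the desired characteristic polynomial of A - L H and O is
# the observability matrix.  All numerical values are assumptions.
A = np.array([[0.0,  1.0],
              [-2.0, -3.0]])
H = np.array([[1.0, 0.0]])

# desired error-dynamics poles at -4 and -5: p(s) = s^2 + 9 s + 20
pA = A @ A + 9.0 * A + 20.0 * np.eye(2)

O = np.vstack([H, H @ A])                 # observability matrix
e_last = np.array([0.0, 1.0])             # last unit vector
L = (pA @ np.linalg.solve(O, e_last)).reshape(2, 1)

err_eigs = np.sort(np.linalg.eigvals(A - L @ H).real)
print(L.ravel(), err_eigs)
```

With the gain held constant, the error dynamics A - L @ H are fixed and stable by construction, which is exactly the property the steady-state observer trades against the optimality of the time-varying gains.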
III. CONCLUSION
A special form of Kalman filter applicable to systems involving
unknown biases and noise-free observations was derived. The optimal
estimator was shown to involve a reduced-order filter for estimating
the state, the order equalling the number of states less the number
of noise-free measurements. This filtering arrangement offers in the
reduced-order case the same advantages offered by the full-order
separate-bias Kalman filter [1]—the potential for better numerical
conditioning and reduced computational burden compared to that of
the centralized Kalman filter based on state augmentation.
APPENDIX
REDUCED-ORDER KALMAN FILTER
The general reduced-order Kalman filter serves as a starting point
for the derivation of the separate-bias form of the reduced-order
Kalman filter. The specific form of the reduced-order Kalman filter
used applies to systems representable as
(39)
(40)
where the overbarred quantities are the state vector, the observation
vector, the control vector, and the white process noise vector with a
given spectral density matrix. Observation noise is absent, which is
the basic assumption of the reduced-order Kalman filter. It is also assumed,
without any great loss in generality, that the state variables are defined
so that the first block of state variables is measured directly (i.e.,
the observation equals the leading substate) and the remaining states
are not measured at all. This corresponds
to a partitioning of the state vector and matrices in (39) and (40)
as follows:
(41)
(The overbars are used here for consistency with the notation em-
ployed in Section II.) As shown in [9], the reduced-order Kalman
filter for the process with the matrices partitioned as above is given
by
(42)
(43)
with
(44)
The Kalman gain and the covariance of the error in estimating the
unmeasured states are given by
(45)
(46)
where
(47)
(48)
The time derivative of the Kalman gain matrix in (44) can be
generated by differentiating (45) with the help of (46).
In these expressions it is assumed that the matrix
is nonsingular,
or equivalently, that the submatrix
is of full rank. Thus, reduced-
order Kalman filters of this form exist only for systems which have
an independent source of noise driving each element of the vector of
directly measured states
[9].
REFERENCES
[1] B. Friedland, “Treatment of bias in recursive filtering,” IEEE Trans.
Automat. Contr., vol. AC-14, pp. 359–367, Aug. 1969.
[2] B. Friedland, “Separate-bias estimation and some applications,” in Control and
Dynamic Systems, vol. 20, C. T. Leondes, Ed. New York: Academic,
1983, pp. 1–45.
[3] J. M. Mendel and N. D. Washburn, “Multistage estimation of bias states
in linear systems,” Int. J. Contr., vol. 28, no. 4, pp. 511–524, 1978.
[4] M. B. Ignagni, “An alternate derivation and extension of Friedland’s
two-stage Kalman estimator,” IEEE Trans. Automat. Contr., vol. 26, pp.
746–750, June 1981.
[5] M. B. Ignagni, “Separate-bias Kalman estimator with bias state noise,” IEEE
Trans. Automat. Contr., vol. 35, pp. 338–341, Mar. 1990.
[6] A. T. Alouani, P. Xia, T. R. Rice, and W. D. Blair, “On the optimality
of two-stage state estimation in the presence of random bias,” IEEE
Trans. Automat. Contr., vol. 38, pp. 1279–1282, Aug. 1993.
[7] C. S. Hsieh and F. C. Chen, “Optimal solution of the two-stage Kalman
estimator,” in Proc. 34th Conf. Decision and Contr., New Orleans, LA,
Dec. 1995, pp. 1532–1537.
[8] A. Bryson and D. Johansen, “Linear filtering for time-varying systems
using measurements containing colored noise,” IEEE Trans. Automat.
Contr., vol. 10, pp. 4–10, Jan. 1965.
[9] B. Friedland, “On the properties of reduced-order Kalman filters,” IEEE
Trans. Automat. Contr., vol. 34, pp. 321–324, Mar. 1989.
[10] B. Friedland, “On solutions of the Riccati equation in optimization problems,”
IEEE Trans. Automat. Contr., vol. 12, pp. 303–304, June 1967.
Robust Stabilization for Continuous-Time Systems with
Slowly Time-Varying Uncertain Real Parameters
Wassim M. Haddad and Vikram Kapila
Abstract—In this paper the authors construct a new class of parameter-
dependent Lyapunov functions to guarantee robust stability in the pres-
ence of time-varying rate-restricted plant uncertainty. Extensions to a
class of time-varying nonlinear uncertainty that generalize the multivari-
able Popov criterion are also considered. These results are then used for
controller synthesis to address the problem of robust stabilization in the
presence of slowly time-varying real parameters.
Index Terms—Absolute stability, Popov criterion, real parameter un-
certainty, robust stabilization, time-varying uncertainty.
I. INTRODUCTION
In a recent paper [5] a refined Lyapunov function technique was
developed to overcome some of the current limitations of Lyapunov
function theory for the problem of robust stability and performance
Manuscript received August 16, 1995; revised September 27, 1996. This
work was supported in part by the National Science Foundation under Grant
ECS-9496249 and the Air Force Office of Scientific Research under Grant
F49620-96-1-0125.
W. M. Haddad is with the School of Aerospace Engineering, Geor-
gia Institute of Technology, Atlanta, GA 30332-0150 USA (e-mail:
wm.haddad@aerospace.gatech.edu).
V. Kapila is with the Department of Mechanical, Aerospace, and
Manufacturing Engineering, Polytechnic University, Brooklyn, NY 11201
USA.
Publisher Item Identifier S 0018-9286(98)04641-8.