IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 43, NO. 7, JULY 1998 983
Separate-Bias Estimation with
Reduced-Order Kalman Filters
David Haessig and Bernard Friedland
Abstract—This paper presents the optimal two-stage Kalman filter for
systems that involve noise-free observations and constant but unknown
bias. Like the full-order separate-bias Kalman filter presented in 1969 [1],
this new filter provides an alternative to state vector augmentation and
offers the same potential for improved numerical accuracy and reduced
computational burden. When dealing with systems involving accurate,
essentially noise-free measurements, this new filter offers an additional
advantage, a reduction in filter order. The optimal separate-bias reduced-
order estimator involves a reduced-order filter for estimating the state,
the order equalling the number of states less the number of observations.
Index Terms—Reduced-order Kalman filter, separate-bias estimation,
two-stage filtering.
I. INTRODUCTION
Consider the problem of estimating the state of a linear dynamic
system influenced by a constant but unknown bias vector b. One
method for handling this estimation problem is through state
augmentation. The bias vector is appended to the state vector to form
the augmented state vector, and the bias dynamic equation $\dot{b} = 0$
is appended to the original process dynamics equation to form an
augmented dynamic system. A Kalman filter applied to this new
system then estimates the bias terms as well as the state of the original
problem. However, when the number of bias terms is comparable
to or larger than the number of states of the original problem,
the filter implementation involves computations with much larger
matrices, which increases the likelihood of numerical conditioning
difficulties and, in some cases, precludes the accurate estimation of
the state and bias. This motivated an effort,
in the late 1960’s, to develop a method which avoided the numerical
inaccuracies introduced by computations with large matrices. The
outcome of the effort is a technique known as separate-bias estimation
[1], also more recently called two-stage estimation. This method
separates the estimation of the bias from that of the dynamic state,
thereby reducing the size of the matrices and vectors involved in the
filtering computations. Two separate, uncoupled filters run in parallel
to generate the optimal estimate of the bias and of the “bias-free”
state (i.e., the state that would result if the bias were zero). The
optimal estimate of the actual state
is then generated by adding
a bias correction term to the “bias-free” state estimate, downstream
of both stages of the filter. Thus, the separate-bias filter includes
two filters that are essentially uncoupled, which gives rise to its
potential for improved numerical accuracy, conditioning, and reduced
computational burden.
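For readers who want to experiment, the augmentation baseline described above can be sketched for a scalar discrete-time system driven by a constant unknown bias. This is our own toy model, not an example from the paper; the system, gains, and noise levels are illustrative assumptions.

```python
import random

def augmented_kf(ys, a, q, r):
    """Kalman filter on the augmented state [x, b] for the toy model
    x[k+1] = a*x[k] + b + w[k], b constant, y[k] = x[k] + v[k]."""
    xh, bh = 0.0, 0.0                      # initial estimates
    pxx, pxb, pbb = 1.0, 0.0, 4.0          # initial covariance blocks
    for y in ys:
        # Predict with F = [[a, 1], [0, 1]] and Q = diag(q, 0).
        xh, bh = a * xh + bh, bh
        pxx, pxb, pbb = (a * a * pxx + 2 * a * pxb + pbb + q,
                         a * pxb + pbb,
                         pbb)
        # Update with H = [1, 0] and measurement variance r.
        s = pxx + r
        kx, kb = pxx / s, pxb / s
        e = y - xh
        xh, bh = xh + kx * e, bh + kb * e
        pxx, pxb, pbb = (1 - kx) * pxx, (1 - kx) * pxb, pbb - kb * pxb
    return xh, bh

# Simulate the toy system and recover the bias from noisy measurements.
random.seed(1)
a, b_true = 0.9, 2.0
x, ys = 0.0, []
for _ in range(300):
    x = a * x + b_true
    ys.append(x + random.gauss(0.0, 0.1))
xh, bh = augmented_kf(ys, a=a, q=1e-6, r=0.01)
```

With 300 measurements the bias estimate settles near its true value; the point of the separate-bias method is that the same result can be obtained without ever forming the augmented covariance.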
The separate-bias filter as originally given in [1] has been the focus
of significant interest and activity. Scores of references to the original
paper have appeared in the literature (see the reference list given in
[2]). And as is often the case with techniques that attract significant
use, there have been a number of alternate derivations and extensions.
Mendel [3] in 1978 and Ignagni [4] in 1981 developed two different
Manuscript received April 10, 1996; revised December 19, 1996.
D. Haessig is with GEC-Marconi Hazeltine Corporation, Wayne, NJ 07474-
0932 USA (e-mail: haessig@systems.gec.com).
B. Friedland is with the Department of Electrical and Computer Engineer-
ing, New Jersey Institute of Technology, Newark, NJ 07102 USA.
Publisher Item Identifier S 0018-9286(98)04642-X.
derivations of the separate-bias filter. In both, the bias terms are
assumed to be unknown and constant. In recent years, efforts have
focused on the ramifications of nonconstant bias terms, i.e., biases that
vary as a random walk process, $\dot{b} = \nu$, where $\nu$ is a vector of
zero-mean, Gaussian, white noise processes. Ignagni was the first to
consider this case in [5], where he develops a two-stage estimator
that is similar to the original in form and complexity, but which
is suboptimal. He points out, however, that in many applications
the bias terms vary slowly so that the degree of suboptimality is
insignificant or acceptable. Alouani et al. in [6] propose a different
two-stage estimator which is optimal if an algebraic constraint on
the bias and state process noise covariances is satisfied. However,
since this constraint is almost never satisfied in practice, it was
their conclusion that all practical two-stage estimators based on
their proposed solution would be suboptimal. An optimal two-stage
filter which does not impose a constraint on the process noise
covariances was presented by Hsieh et al. in [7]. The proposed filter
is significantly more complex than the original. Hsieh points
out that the additional complexity and computational burden could
be unwarranted in some cases. It was not stated whether the
original advantages of the two-stage estimator—improved numerical
conditioning and potentially reduced computational burden—remain
intact in Hsieh’s proposed filter. Nevertheless, the cost for achieving
optimality appears to be the loss of at least some part of those original
advantages.
So, as noted, all of the efforts to extend the original separate-
bias estimator have concerned the ramifications of random bias.
In this paper we take the separate-bias estimator in a completely
new direction and consider a different case—that of noise-free
measurements. Bias state process noise is again assumed to be zero.
It is well known that when measurement noise is zero, the optimal
Kalman filter is one whose order is lower than that of the original
dynamic process [8]. The order of the filter is reduced by the number
of noise-free measurements available. This is also the case with the
separate-bias Kalman estimator when noise-free measurements are
involved. The order of the “bias-free” state estimator is reduced by
the number of noise-free measurements. In this paper the reduced-
order form of the separate-bias estimator is developed for systems in
which the entire measurement vector is noise-free.
There are many practical examples of systems that could benefit
from this type of filter. Consider, for instance, the laboratory calibra-
tion of a strapdown inertial system. The measurement vector consists
of the position and attitude of the unit being calibrated. All six of
these measurements are essentially known—the linear positions being
identically zero because the unit does not move translationally, and
the angular positions being measured very precisely using a rotary
position table. The calibration task involves the estimation of 18
constants (two scale-factor errors, two biases, and two misalignments per axis)
and 12 states (the angular and linear positions and velocities). With
a centralized Kalman filter developed using state augmentation or a
separate-bias Kalman filter, the dimension of the filter is 30. With
a reduced-order centralized or a reduced-order separate-bias Kalman
filter, the dimension of the filter is reduced to 24. The computational
advantages are obvious.
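The dimension counting in this example is simple enough to spell out (a trivial sketch of our own; the function and its name are not from the paper):

```python
def filter_order(n_states, n_biases, n_noise_free_meas=0, reduced=False):
    """Order of the estimator: the augmented dimension (states plus bias
    terms), minus the number of noise-free measurements when the
    reduced-order form is used."""
    n = n_states + n_biases
    return n - n_noise_free_meas if reduced else n

# Strapdown calibration example: 12 states, 18 constants, 6 noise-free measurements.
full = filter_order(12, 18)                                   # augmented or full-order separate-bias
reduced = filter_order(12, 18, n_noise_free_meas=6, reduced=True)
```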
This paper is organized as follows. Section II contains a statement
of the problem and the development of the main result. Section III
contains the conclusions. In the Appendix we present the general
form of the reduced-order Kalman filter, which serves as a starting
point for the derivation of the separate-bias reduced-order Kalman
0018–9286/98$10.00 1998 IEEE
estimator.
II. MAIN RESULT
The problem under consideration is that of simultaneously estimating
the state x and bias vector b of a linear process
(1)
with observations
(2)
where x is the state vector, b is a vector of constant but unknown
biases, u is the control vector, y is the measurement vector, w is a
white Gaussian noise process with spectral density matrix Q, and where
the remaining coefficient matrices are possibly time-varying. To develop
the separate-
bias form of the reduced-order Kalman filter for this system, we first
apply the general reduced-order Kalman filter (given in the Appendix)
to a system involving an unknown bias vector. This yields a filter
arrangement consisting of two coupled reduced-order Kalman filters,
one providing the optimal estimate of the unmeasured state and
the other the optimal estimate of the bias. From this coupled filter,
the separate-bias reduced-order Kalman filter is derived.
A. Reduced-Order Kalman Filter Applied to a System with Bias
1) State Equations: The application of the reduced-order Kalman
filter to a system with unknown bias begins with the partitioning of
the state vector of the system, (1) and (2), into directly measured and
unmeasured substates
(3)
(4)
The more general observation equation (2) can be converted into this
simpler form (4) by defining a new substate and applying the corresponding
change of variable to both (1) and (2). The bias vector b is appended to
the state vector of (3), forming a new state vector. In accordance with
the reduced-order filter given in the Appendix, we define the subvector of
unmeasured states to contain the unmeasured dynamic states and the unknown
bias vector. The subvector of directly measured states is set equal to the
measurement y.
Equation (3) then becomes
where use has been made of the bias dynamic equation $\dot{b} = 0$. The
submatrices of (41) in the general reduced-order Kalman filter are
therefore
(5)
Fig. 1. Coupled reduced-order bias and state estimators.
The gain matrix, its time derivative, and the vector of (43) and
(44) are partitioned accordingly
These and the partitioned matrices in (5), when substituted into the
general reduced-order Kalman filtering equations (42)–(48), yield
(6)
(7)
(8)
(9)
(10)
This form of the reduced-order Kalman filter, (6)–(10), has the
structure shown in Fig. 1. The bias and state estimators are mutually
coupled through the filter dynamic equations, (9) and (10), and
through their covariance propagation equations, as shown below.
2) Variance Equations: The covariance matrix of (46) is partitioned in
accordance with the substates contained in the vector of unmeasured
states, its blocks being the autocovariance of the state estimate, the
autocovariance of the bias estimate, and the cross covariance between
the two.
Using the submatrices (5) and the covariance matrix equation (46)
we derive
(11a)
(11b)
(11c)
The partitioned Kalman matrices in (7) and (8), as defined by (45),
can be expressed as
(12a)
(12b)
Fig. 2. Separate-bias reduced-order Kalman filter.
with
(13)
Thus, (6)–(10) and (11)–(13) define the (coupled) reduced-order
Kalman filter for systems with bias.
B. Separate-Bias Reduced-Order Kalman Filter
The desired form for the separate-bias reduced-order Kalman filter
is as shown in Fig. 2. In this form the mutual coupling is eliminated.
In the full-order case, the input to the separate-bias estimator is the
residual of the “bias-free” estimator [9]. Since there is no residual
in a reduced-order filter, both the measurement
and the “bias-
free” estimator output, which together are somewhat equivalent to the
“bias-free” residual, are used as inputs to the separate-bias estimator.
1) “Bias-Free” State Estimator: It is noted that (11b) and (11c)
together are homogeneous in the cross covariance and the bias
autocovariance. Hence, if both vanish initially, then
(14)
for all subsequent time, and hence the state autocovariance satisfies
(15)
The interpretation of (14) and (15) is that if the bias is perfectly
known initially, then by virtue of $\dot{b} = 0$ it is perfectly known
thereafter, and the estimation problem reduces to that in which there is
no bias. The bias-free estimator is therefore the reduced-order Kalman
filter, (7), (9), and (12a), with the simplifications that result when
the bias-related covariances vanish:
(16a)
(16b)
(16c)
where the covariance is given by (15). (Wiggles rather than hats are
used to denote the new variables of the “bias-free” filter.)
2) Transformation of State Equations: As in the case of the full-
order separate-bias Kalman filter, we introduce the transformation
(17)
where the wiggled variable is the estimate of the state that would apply
if no bias were present. The matrix V is to be determined such that this
relationship (17) holds. To this end, we substitute (7), (8), and (16a)
into (17), yielding
(18)
For this expression to hold for all time, independent of the estimator
states, the terms multiplying those states must cancel; thus
(19)
which leaves
(20)
In order for (19) and (20) to hold we must have
(21)
Into this last equation we substitute (9), (10), and (16b). Then, using
(19), (21), and (8) to simplify the result, one finds
Finally, using (19) once more to simplify, we have
which is satisfied when
(22)
This is a matrix differential equation governing the bias correction
weighting matrix V. Since the eigenvalues of its homogeneous part are
the same as those of the “bias-free” filter [see (16b)], (22) is
guaranteed to be stable.
3) Transformation of Variance: With the state equations decou-
pled, what remains is the decoupling of the equations governing the
covariance of the “bias-free” and separate-bias filters. The covariance
, defined originally by (46) and in partitioned form by (11), is
expressed in terms of the covariance that applies when the bias is known,
plus a correction which depends on the covariance of the bias estimate.
The covariance matrix that applies when the bias is perfectly known is
denoted by
(23)
where the bias-free covariance is the solution to (15) with its initial
condition given. It is noted that it is also the solution to (46) when
(24)
If the bias is not perfectly known, however, (24) is not the correct
initial condition. Instead, the initial covariance will be
(25)
where the initial cross covariance and bias covariance may or may not be
zero. The question is how much the covariance blocks change as the result
of changing the initial conditions from (24) to (25). This is answered by
making use of the fact that if one solution to (46) is known, then any
other solution can be expressed as follows [10]:
(26)
where
(27)
(28)
In (26)–(28), the matrix defined by (27) can be partitioned as
(29)
Fig. 3. Separate-bias reduced-order Kalman filter.
By substituting into (27) the definitions given by (47) and the
submatrices of (5), one finds that
(30a)
i.e., one partition remains constant (30b)
Similarly, (28) becomes
(31)
Then, by substituting (23) and (29) into (26)
(32)
Hence, with (32) it is possible to avoid solving the mutually coupled
equations (11a)–(11c) for the covariance blocks. Instead, one needs only
to compute the quantities appearing in (32), using the mutually uncoupled
equations (15), (30a), (30b), and (31).
The initial conditions of (30) and (31) must be properly selected to
reproduce the initial covariance (25).
These initial conditions are not unique. For the important special
case in which the initial cross covariance is zero, i.e., when there is
no a priori correlation between the state and bias, one choice of initial
conditions is
(33)
In this case the corresponding partition vanishes for all time, and
(34)
Now, upon use of (16c) and (33), one finds that (30a) reduces to
(35)
Similarly, (31) and (12b) become
(36)
(37)
Note that (35) is the same as the matrix differential equation (22)
governing V. Hence, by matching the initial conditions, the two solutions
coincide for all time. This simultaneously satisfies the state
transformation relationship (19) and the variance transformation
equations given by (33)–(37).
4) Separate-Bias Estimator: The dependence of the separate-bias
estimator on the optimal state estimate
is eliminated by substituting
(17) into (10), yielding
(38)
This equation, in conjunction with (8), (35), (36), and (37), defines the
bias estimation portion of the separate-bias estimator shown in the upper
half of Fig. 3, with the a priori bias estimate and its covariance as the
appropriate initial conditions. The
“bias-free” estimator, shown in the lower half of Fig. 3, is given by
(16b), (16c), and (15), with initial conditions obtained from (16a)
and (32).
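As a sanity check on the structure of Fig. 3 (a bias-free estimate corrected by a weighted bias estimate), consider a static scalar toy problem of our own construction, not taken from the paper: a single measurement y = x + b + v with independent zero-mean priors on x and b (variances px, pb) and noise variance r. The correction weight below is derived for this toy case only; it is not the paper's matrix governed by (22).

```python
def joint_estimate(y, px, pb, r):
    """Jointly optimal (MMSE) estimates of x and b from y = x + b + v."""
    s = px + pb + r
    return px * y / s, pb * y / s

def two_stage_estimate(y, px, pb, r):
    """Separate-bias form: a bias-free estimate, a bias estimate driven by
    the bias-free residual, then a weighted correction."""
    x_free = px * y / (px + r)                   # estimate assuming b = 0
    resid = y - x_free                           # bias-free residual
    kb = pb * (px + r) / (r * (px + pb + r))     # residual-to-bias gain
    b_hat = kb * resid                           # separate bias estimate
    v = -px / (px + r)                           # correction weight (toy case)
    return x_free + v * b_hat, b_hat

x1, b1 = joint_estimate(1.7, px=1.0, pb=0.5, r=0.2)
x2, b2 = two_stage_estimate(1.7, px=1.0, pb=0.5, r=0.2)
```

The two routes agree to machine precision, illustrating that separating the bias estimate costs nothing in optimality in this toy setting.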
5) Steady-State Observer: In certain applications the accuracy
and complexity of the time-varying Kalman filter may not be needed,
and in its place a steady-state observer may suffice. The steady-state
separate-bias reduced-order observer has the same structure as that
shown in Fig. 3; however, the Kalman gain matrices are replaced by
constant matrices determined in some other way, e.g., pole placement.
The bias correction matrix V of (22) then becomes a constant matrix,
obtained by setting the derivative in (22) to zero.
III. CONCLUSION
A special form of Kalman filter applicable to systems involving
unknown biases and noise-free observations was derived. The optimal
estimator was shown to involve a reduced-order filter for estimating
the state, the order equalling the number of states less the number
of noise-free measurements. This filtering arrangement offers in the
reduced-order case the same advantages offered by the full-order
separate-bias Kalman filter [1]—the potential for better numerical
conditioning and reduced computational burden compared to that of
the centralized Kalman filter based on state augmentation.
APPENDIX
REDUCED-ORDER KALMAN FILTER
The general reduced-order Kalman filter serves as a starting point
for the derivation of the separate-bias form of the reduced-order
Kalman filter. The specific form of the reduced-order Kalman filter
used here applies to systems representable as
(39)
(40)
where x is the state vector, y is the observation vector, u is the
control vector, and w is the white process noise vector with spectral
density matrix Q. Observation noise is absent, which is the basic
assumption of the reduced-order Kalman filter. It is also assumed,
without any great loss in generality, that the state variables are defined
so that the first m of them are measured directly (i.e., they coincide
with the observation y) and the remaining n − m are not measured at all.
This corresponds
to a partitioning of the state vector and matrices in (39) and (40)
as follows:
(41)
(The overbars are used here for consistency with the notation em-
ployed in Section II.) As shown in [9], the reduced-order Kalman
filter for the process with the matrices partitioned as above is given
by
(42)
(43)
with
(44)
The Kalman gain and the covariance of the error in estimating the
unmeasured substate are given by
(45)
(46)
where
(47)
(48)
The time derivative of the Kalman gain matrix in (44) can be
generated by differentiating (45) with the help of (46).
In these expressions it is assumed that the process noise covariance
submatrix associated with the directly measured states is nonsingular,
i.e., of full rank. Thus, reduced-order Kalman filters of this form exist
only for systems which have an independent source of noise driving each
element of the vector of directly measured states [9].
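A constant-gain illustration of this appendix's setup (our own second-order example with x1 measured exactly; the gain is picked by pole placement, as in the steady-state observer of Section II, rather than from (45) and (46)):

```python
def reduced_order_observer(dt=1e-3, steps=3000, L=4.0):
    """Reduced-order observer for x1' = x2, x2' = -2*x1 - x2, y = x1
    (noise-free).  With z = xhat2 - L*y, the observer is
        z' = (a22 - L*a12)*z + ((a22 - L*a12)*L + a21 - L*a11)*y,
        xhat2 = z + L*y,
    and the error e = x2 - xhat2 obeys e' = (a22 - L*a12)*e."""
    a11, a12, a21, a22 = 0.0, 1.0, -2.0, -1.0
    f = a22 - L * a12                  # observer pole (here -5)
    c = f * L + a21 - L * a11          # coefficient on the measurement
    x1, x2, z = 1.0, 0.0, 0.0          # true state and observer state
    for _ in range(steps):             # forward-Euler integration
        y = x1
        z, x1, x2 = (z + dt * (f * z + c * y),
                     x1 + dt * x2,
                     x2 + dt * (a21 * x1 + a22 * x2))
    return x2, z + L * x1

x2_true, x2_hat = reduced_order_observer()
```

After three simulated seconds the estimate of the unmeasured state x2 has converged to the true value even though x2 is never measured directly; the observer has order one, not two.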
REFERENCES
[1] B. Friedland, “Treatment of bias in recursive filtering,” IEEE Trans.
Automat. Contr., vol. AC-14, pp. 359–367, Aug. 1969.
[2] B. Friedland, “Separate-bias estimation and some applications,” in Control and
Dynamic Systems, vol. 20, C. T. Leondes, Ed. New York: Academic,
1983, pp. 1–45.
[3] J. M. Mendel and N. D. Washburn, “Multistage estimation of bias states
in linear systems,” Int. J. Contr., vol. 28, no. 4, pp. 511–524, 1978.
[4] M. B. Ignagni, “An alternate derivation and extension of Friedland’s
two-stage Kalman estimator,” IEEE Trans. Automat. Contr., vol. 26, pp.
746–750, June 1981.
[5] M. B. Ignagni, “Separate-bias Kalman estimator with bias state noise,” IEEE
Trans. Automat. Contr., vol. 35, pp. 338–341, Mar. 1990.
[6] A. T. Alouani, P. Xia, T. R. Rice, and W. D. Blair, “On the optimality
of two-stage state estimation in the presence of random bias,” IEEE
Trans. Automat. Contr., vol. 38, pp. 1279–1282, Aug. 1993.
[7] C.-S. Hsieh and F.-C. Chen, “Optimal solution of the two-stage Kalman
estimator,” in Proc. 34th Conf. Decision and Contr., New Orleans, LA,
Dec. 1995, pp. 1532–1537.
[8] A. Bryson and D. Johansen, “Linear filtering for time-varying systems
using measurements containing colored noise,” IEEE Trans. Automat.
Contr., vol. 10, pp. 4–10, Jan. 1965.
[9] B. Friedland, “On the properties of reduced-order Kalman filters,” IEEE
Trans. Automat. Contr., vol. 34, pp. 321–324, Mar. 1989.
[10] B. Friedland, “On solutions of the Riccati equation in optimization problems,”
IEEE Trans. Automat. Contr., vol. 12, pp. 303–304, June 1967.
... Among the main aspects characterizing the complexity of problems in science and engineering are the nonlinearities (Bendat 1998; Schoukens and Ljung 2019), uncertainties (Martynyuk et al. 2019), noisy and non-stationary environment (Hendricks et al. 2008;Moss and McClintock 1989), temporal variability (Tomás-Rodríguez and Banks 2010), among others. Computational modeling approaches which consider these complexities in their formulations do have better performance for facing to conditions of stability and convergence (Bonyadi and Michalewicz 2016;Boutayeb et al. 1997), polarized parametric estimation (Chan et al. 2020;Haessig and Friedland 1998), non-modeled dynamics (Khayyam et al. 2020), high approximation and prediction errors (Tang et al. 2020). In data analysis, an increasingly concern from researchers is related to the presence of several types of uncertainties such as inaccuracy and incompleteness of data and information, parametric and structural uncertainties, propagation and accumulation of uncertainties, and unknown initial conditions, that must be taken into account by modeling approaches in order to ensure accurate models for real-world problems (Wang and Zhao 2013). ...
Article
Full-text available
In this paper, a methodology for design of fuzzy Kalman filter, using interval type-2 fuzzy models, in discrete time domain, via spectral decomposition of experimental data, is proposed. The adopted methodology consists of recursive parametric estimation of local state space linear submodels of interval type-2 fuzzy Kalman filter for tracking and forecasting of the dynamics inherited to experimental data, using an interval type-2 fuzzy version of Observer/Kalman Filter Identification (OKID) algorithm. The partitioning of the experimental data is performed by interval type-2 fuzzy Gustafson–Kessel clustering algorithm. The interval Kalman gains in the consequent proposition of interval type-2 fuzzy Kalman filter are updated according to unobservable components computed by recursive spectral decomposition of experimental data. Computational results illustrate the efficiency of proposed methodology for filtering and tracking the time delayed state variables of Chen’s chaotic attractor in a noisy environment, and experimental results illustrate its applicability for adaptive and real time forecasting the dynamic spread behavior of novel Coronavirus 2019 (COVID-19) outbreak in Brazil.
... Thanks to the high computing power of modern processors, micro-controllers, or even field-programmable gate arrays (FPGAs), it has become possible to deploy intelligent and sophisticated control approaches, e.g., observer-based control, utilizing only a minimal number of sensors, see [11] and [12]. Contributions like [13][14][15][16] reflect the progress in theoretical studies of Kalman filters (KFs), especially concerning robustness and the ability to deal with unknown or inaccessible disturbances or model uncertainty. In many situations, time-varying disturbances like friction effects can be modeled as additional unknown inputs. ...
Article
Full-text available
In this contribution, a gain adaptation for sliding mode control (SMC) is proposed that uses both linear model predictive control (LMPC) and an estimator-based disturbance compensation. Its application is demonstrated with an electromagnetic actuator. The SMC is based on a second-order model of the electric actuator, a direct current (DC) drive, where the current dynamics and the dynamics of the motor angular velocity are addressed. The error dynamics of the SMC are stabilized by a moving horizon MPC and a Kalman filter (KF) that estimates a lumped disturbance variable. In the application under consideration, this lumped disturbance variable accounts for nonlinear friction as well as model uncertainty. Simulation results point out the benefits regarding a reduction of chattering and a high control accuracy.
... But, implementing this augmented strategy may load infeasible computation or even diverge for ill-conditioned systems. Friedland proposed a paralleling reduced-order filtering to separate the bias estimates, and demonstrated that it is equivalent to the above augmenting strategy [3]. Lin et al. estimated the biases by using the local unassociated track estimates at a single time [4]. ...
Article
Full-text available
To mitigate the negative effects of the sensor measurement biases for the maneuvering target, a novel incremental center differential Kalman filter (ICDKF) algorithm is proposed. Based on the principle of independent incremental random process, the incremental measurement equation is modeled to preprocess the sensor measurement biases. Then, a general ICDKF algorithm is proposed by augmenting the process and measurement noises into the state vector to mitigate the negative effects of the sensor biases. For the system with additive noises, an additive ICDKF algorithm is derived by introducing the incremental measurement equation to reduce the measurement biases. Numerical simulations for four types of sensor biases are designed to demonstrate that the proposed ICDKF can effectively mitigate the measurement biases compared to the CDKF.
... Furthermore, the technique is very sensitive to the noise level of the intermittent displacement measurement since the bias correction is solely based on this intermittent displacement measurement. On the contrary, the bias term is explicitly included in the state vector and directly estimated in many engineering problems which deal with biased measurements [41,42]. ...
... This filter is no longer optimal because the covariance propagation and gain calculation are now based on a reduced model [22] with model replacement [11]. One of the objectives of linear covariance analysis is to determine the effects of these suboptimal schemes on the true navigation error. ...
Chapter
During the 2011-2012 winter semester, graduate students in the Department of Aerospace Engineering, Technion were asked to reproduce the results of an attitude estimation paper by Prof. Itzhack Bar-Itzhack, “True Covariance Simulation of the EUVE Update Filter”, as a homework assignment for a special topics course on linear covariance analysis. The students reproduced both the expected filter errors and the true filter errors as reported by Bar-Itzhack using covariance analysis. Bar Itzhack’s work was then extended to determine the closed-loop pointing/control errors, again using linear covariance techniques. The control problem included star-tracker and gyro errors, magnetic torquer actuation errors, random disturbance moments, a suboptimal Kalman filter with model replacement, and a simple proportional-derivative control law. Using an augmented state formulation, covariance techniques were used to determine the variances of the expected and true attitude estimation errors, the variances of the true pointing errors of the closed-loop system, and the variance of the required control effort. Results were verified by nonlinear Monte Carlo analysis. The linear covariance analysis proved to be a very useful and fast analysis tool for the preliminary design of attitude determination and control systems.
... To remedy the situation, there have been suggestions to estimate both the system state and the disturbance simultaneously in the observer equation. For the case of a constant or slowly varying unknown disturbance, the designs in [2][3][4] are applicable. In some particular cases, if the knowledge of a disturbance model is known a priori, we can combine the system model with the disturbance model and construct an observer for the combined models to obtain accurate state and disturbance estimates [5][6][7][8]. ...
Article
When there are external disturbances acting on the system, the conventional Luenberger observer design for state estimation usually results in a biased state estimate. This paper presents a robust state and disturbance observer design that gives both accurate state and disturbance estimates in the face of large disturbances. The proposed robust observer is structurally different from the conventional one in the sense that a disturbance estimation term is included in the observer equation. With this disturbance estimation term, the robust observer design problem is skillfully transformed into a disturbance rejection control problem. We then can utilize the standard H∞ control design tools to optimize the robust observer between the disturbance rejection ability and noise immune ability. An important advantage of the proposed robust observer is that it applies to both minimum-phase systems and non-minimum phase systems.
Conference Paper
In the present article, an alternative methodology is presented to tackle the filtering problem in presence of unknown bias. The basic idea is to construct an H ∞ estimator of the estimation error due to the unknown bias. The bias aware filter can be implemented by means of two filters, namely the standard H 2 Kalman filter and the H ∞ estimation error estimator. On the other hand, the bias aware filter can also be implemented as a single filter. Depending on user's need, one or the other implementation can be done.
Thesis
Full-text available
On-line, simultaneous state and parameters estimation in deterministic, nonlinear dynamic systems of known structure is the problem considered. Available methods are few and fall short of user needs in that they are difficult to apply, their applicability is restricted to limited classes of systems, and for some, conditions guaranteeing their convergence don’t exist. The new methods developed herein are placed into two categories: those that involve the use of Riccati equations, and those that do not. Two of the new methods do not use Riccati equations, and each is considered to be a different extension of Friedland’s parameter observer for nonlinear systems with full state availability to the case of partial state availability. One is essentially a reduced-order variant of a state and parameter estimator developed by Raghavan. The other is developed by the direct extension of Friedland’s parameter observer to the case of partial state availability. Both are shown to be globally asymptotically stable for nonlinear systems affine in the unknown parameters and involving nonlinearities that depend on known quantities, a class restriction also true of existing state and parameter estimation methods. The two new methods offer, however, the advantages of improved computational efficiency and the potential for superior transient performance, which is demonstrated in a simulation example. Of the new methods that do involve a Riccati equation, there are three. The first is the separate-bias form of the reduced-order Kalman filter. The scope of this filter is somewhat broader than the others developed herein in that it is an optimal filter for linear, stochastic systems involving noise-free observations. To apply this filter to the joint state and parameter estimation problem, one interprets the unknown parameters as constant biases. For the system class defined above, the method is globally asymptotically stable. 
The second Riccati-equation-based method is derived by applying an existing method, the State-Dependent Algebraic Riccati Equation (SDARE) filtering method, to the problem of state and parameter estimation. It is shown to work well in several nonlinear examples involving a few unknown parameters; however, as the number of parameters increases, the method's applicability diminishes due to an apparent loss of observability within the filter, which hinders the generation of filter gains. The third is a new filtering method that uses a State-Dependent Differential Riccati Equation (SDDRE) for the generation of filter gains and, through its use, avoids the "observability" shortcomings of the SDARE method. This filter is similar to the Extended Kalman Filter (EKF) and is compared to the EKF with regard to stability, through a Lyapunov analysis, and with regard to performance, in a fourth-order stepper motor simulation involving five unknown parameters. For the very broad class of systems that are bilinear in the state and unknown parameters, potentially involving products of unmeasured states and unknown parameters, the EKF is shown to possess a semi-global region of asymptotic stability, given the assumption of observability and controllability along estimated trajectories. The stability of the new SDDRE filter is discussed.
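The SDARE idea summarized above can be sketched numerically: freeze the state-dependent coefficient matrix A(x̂) at the current estimate and solve the filter algebraic Riccati equation for the gain. The pendulum-like parameterization, the noise covariances, and the use of a Kleinman (policy-iteration) ARE solver below are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def solve_lyapunov(F, M):
    """Solve F P + P F' = -M by Kronecker vectorization (column-major vec)."""
    n = F.shape[0]
    I = np.eye(n)
    vecP = np.linalg.solve(np.kron(I, F) + np.kron(F, I),
                           -M.reshape(-1, order="F"))
    return vecP.reshape((n, n), order="F")

def sdare_filter_gain(A, C, Q, R, iters=50):
    """Kleinman iteration for the filter ARE
       A P + P A' - P C' R^{-1} C P + Q = 0, returning K = P C' R^{-1}.
       Needs a stabilizing start; K = 0 works here because A is stable."""
    K = np.zeros((A.shape[0], C.shape[0]))
    P = None
    for _ in range(iters):
        F = A - K @ C                       # current closed-loop matrix
        P = solve_lyapunov(F, Q + K @ R @ K.T)
        K = P @ C.T @ np.linalg.inv(R)      # improved filter gain
    return K, P

# State-dependent coefficient frozen at the current estimate
# (pendulum-like parameterization; all numbers are assumptions).
x_hat = np.array([0.3, 0.0])
A = np.array([[0.0, 1.0],
              [-9.81 * np.cos(x_hat[0]), -0.5]])
C = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)
R = np.array([[0.01]])

K, P = sdare_filter_gain(A, C, Q, R)
# Residual of the ARE; near zero when the iteration has converged.
residual = A @ P + P @ A.T - P @ C.T @ np.linalg.inv(R) @ C @ P + Q
```

In a full SDARE filter this gain computation would be repeated at each time step as the estimate, and hence A(x̂), changes; the sketch shows a single such step.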
Article
The rotation periods of pulsars are highly stable, and pulsar measurements can be used to correct the clock error of a satellite-borne atomic clock. To address the problem that even a small system bias can significantly degrade the performance of a pulsar timing system, an algorithm for the pulsar timing system with system bias is proposed. Based on the principle of the pulsar timing system, the system bias was modeled. Using the two-stage Kalman filter, the system state and the system bias were decoupled and estimated. Simulation results show that the proposed algorithm can effectively reduce the impact of the system bias and improve the performance of the pulsar timing system.
Article
The advantages of the bias-separated filter implementation as compared with Kalman filtering are pointed out. A method of derivation of the bias-separated filter structure, based on the theory of linear observers, is presented. Some alternative derivations are also described. The extension from a constant to a time-varying bias, to nonlinear systems, and to noise on the bias is discussed. Problems of fixed-interval smoothing and failure detection and estimation are considered. Additional applications to trajectory estimation, aided-inertial navigation, calibration, satellite-attitude estimation, and process control are illustrated. Some areas requiring further investigation are pointed out.
Article
This paper provides an alternative, constructive derivation of Friedland's (1966) method for recursive bias filtering, and extends his method to the case where one may wish to increase (or decrease) the number of biases. We show that it is possible to add (or delete) bias states in such a manner that previously computed quantities can be used to obtain new estimates of the dynamical state vector and the now-larger bias vector. Adding (or deleting) bias states is important when, for example, the bias states are used to model constant but unknown instrumentation error sources, of which there can be a large number.
Conference Paper
Several known results are unified by considering properties of reduced-order Kalman filters. For the case in which the number of noise sources equals the number of observations, it is shown that the reduced-order Kalman filter achieves zero steady-state variance of the estimation error if and only if the plant has no transmission zeros in the right-half plane, since these would be among the poles of the Kalman filter. The reduced-order Kalman filter cannot achieve zero variance of the estimation error if the number of independent noise sources exceeds the number of observations. It is also shown that the reduced-order Kalman filter achieves the generalized Doyle-Stein condition for robustness when the noise sources are collocated with the control inputs. When there are more observations than noise sources, additional noise sources can be postulated to improve the observer frequency response without diminishing robustness.
Article
An alternate simplified derivation of Friedland's two-stage Kalman estimator is given for a somewhat more general class of problems than considered by Friedland. Friedland's result is also extended to encompass two variations on the basic idea which are of practical interest.
Article
The problem of estimating the state $x$ of a linear process in the presence of a constant but unknown bias vector $b$ is considered. This bias vector influences the dynamics and/or the observations. It is shown that the optimum estimate $\hat{x}$ of the state can be expressed as
$$\hat{x} = \tilde{x} + V_x \hat{b} \qquad (1)$$
where $\tilde{x}$ is the bias-free estimate, computed as if no bias were present, $\hat{b}$ is the optimum estimate of the bias, and $V_x$ is a matrix which can be interpreted as the ratio of the covariance of $\tilde{x}$ and $\hat{b}$ to the variance of $\hat{b}$. Moreover, $\hat{b}$ can be computed in terms of the residuals in the bias-free estimate, and the matrix $V_x$ depends only on matrices which arise in the computation of the bias-free estimate. As a result, the computation of the optimum estimate $\hat{x}$ is effectively decoupled from the estimate of the bias $\hat{b}$, except for the final addition indicated by (1).
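A compact discrete-time numerical sketch of this decomposition (the double-integrator plant, noise levels, and initializations are assumptions for illustration): a bias-free filter runs as if no bias were present, a bias filter processes its residuals, and the two are combined through the sensitivity matrix $V_x$.

```python
import numpy as np

# Plant: double integrator with an unknown constant acceleration bias b.
dt = 0.1
A  = np.array([[1.0, dt], [0.0, 1.0]])
Bb = np.array([[0.0], [dt]])     # how the bias enters the dynamics
C  = np.array([[1.0, 0.0]])      # position measurement
Qw = 1e-6 * np.eye(2)            # process-noise covariance assumed by the filter
Rv = np.array([[1e-4]])          # measurement-noise covariance assumed by the filter

b_true = 0.5
x_true = np.zeros(2)

x_tilde = np.zeros(2)            # bias-free estimate  (tilde{x})
P = np.eye(2)                    # its covariance
b_hat = np.zeros(1)              # bias estimate       (hat{b})
M = 10.0 * np.eye(1)             # bias covariance
V = np.zeros((2, 1))             # sensitivity matrix  (V_x)

for _ in range(400):
    # Truth is simulated noise-free here so convergence is easy to check.
    x_true = A @ x_true + Bb[:, 0] * b_true
    y = C @ x_true

    # --- bias-free Kalman filter, computed as if no bias were present ---
    x_pred = A @ x_tilde
    P = A @ P @ A.T + Qw
    r = y - C @ x_pred                       # bias-free residual
    S_cov = C @ P @ C.T + Rv
    K = P @ C.T @ np.linalg.inv(S_cov)
    x_tilde = x_pred + K @ r
    P = (np.eye(2) - K @ C) @ P

    # --- sensitivity of the bias-free residual to the bias ---
    U = A @ V + Bb
    S = C @ U
    V = U - K @ S

    # --- bias filter, driven by the bias-free residuals ---
    Kb = M @ S.T @ np.linalg.inv(S_cov + S @ M @ S.T)
    b_hat = b_hat + Kb @ (r - S @ b_hat)
    M = M - Kb @ S @ M

# hat{x} = tilde{x} + V_x hat{b}: the final addition of Eq. (1)
x_hat = x_tilde + V @ b_hat
```

With noise-free data the bias estimate converges to the true bias and the corrected estimate tracks the true state, while the bias-free filter runs entirely unaware of $b$.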
Article
The difference $D$ between two solutions $S$ and $M$ of the matrix Riccati equation
$$-\dot{M} = MA + A'M + MBM + C$$
is given by $D = RQ^{-1}R'$, where $-\dot{R} = (A + SB)R$ and $-\dot{Q} = RBR'$. These relations can be used to evaluate $M(t)$ for $t < T$ arising in optimization problems in which $M(T)$ does not exist. The relations can also be used to compare the solution of the Riccati equation with its asymptotic solution.
Article
The Kalman-Bucy filter for continuous linear dynamic systems assumes all measurements contain "white" noise, i.e. noise with correlation times short compared to times of interest in the system. It is shown here that if correlation times are not short, or if some measurements are free of noise, the optimal filter is a modification of the Kalman-Bucy filter which, in general, contains differentiators as well as integrators. It is also shown for this case that the estimate and its covariance matrix are, in general, discontinuous at the time when measurements are begun. The case of random bias errors in the measurements is shown by example to be a limiting case of colored noise.
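The noise-free-measurement limiting case described here is what makes a reduced-order estimator possible: the measured component of the state can be read off directly, and only the remainder needs an observer. A minimal discrete-time sketch under assumed system matrices, in which a differenced-measurement term plays the role of the differentiator mentioned in the abstract:

```python
import numpy as np

# y = x1 is measured exactly (noise-free), so only x2 needs an estimator.
# The system matrices and error pole below are illustrative assumptions.
A = np.array([[1.0, 0.1],
              [-0.05, 0.98]])
a11, a12 = A[0]
a21, a22 = A[1]

L = (a22 - 0.2) / a12       # places the error eigenvalue (a22 - L*a12) at 0.2

x_true = np.array([1.0, -2.0])
x2_hat = 0.0                # first-order observer for the unmeasured state
y_prev = x_true[0]

for _ in range(100):
    x_true = A @ x_true
    y = x_true[0]           # exact measurement of x1
    # (y - a11*y_prev) is the discrete analogue of a differentiated
    # measurement: it exposes a12 * x2 for correcting the estimate.
    x2_hat = a22 * x2_hat + a21 * y_prev + L * (y - a11 * y_prev - a12 * x2_hat)
    y_prev = y
```

The estimation error obeys e[k+1] = (a22 - L*a12) * e[k], so with the pole at 0.2 the error of the unmeasured state decays geometrically to zero.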
Article
Sufficient conditions for the optimality of a two-stage state estimator in the presence of random bias are derived. Under an algebraic constraint on the correlation between the state and bias process noises, the optimal estimate of the system state can be obtained as a linear combination of the output of the first stage (a bias-free filter) and the second stage (a bias filter). Because the algebraic constraint is restrictive in practice, the results indirectly indicate that for most practical systems the proposed solution to the two-stage estimation problem will be suboptimal.
Article
A modified decoupled Kalman estimator suitable for use when the bias vector varies as a random-walk process is defined and demonstrated in a practical application consisting of the calibration of a strapdown inertial navigation system. The estimation system accuracy associated with the modified estimator is shown to be essentially the same as that of the generalized partitioned Kalman estimator. Considering that the sensor error random rates assumed in the example are on the order of 5 to 10 times greater than normally associated with contemporary strapdown systems, it may be inferred that inertial navigation systems possessing more typical sensor error random growth characteristics should be amenable to a decoupled estimator approach in a broad spectrum of aided-navigation system applications. This should also be true in a variety of other applications in which the bias vector experiences only limited random variation.