IEEE Robotics and Automation Letters. Preprint version. Accepted January, 2020.
Robust Incremental State Estimation through
Covariance Adaptation
Ryan M. Watson¹, Jason N. Gross¹, Clark N. Taylor², and Robert C. Leishman²

¹Department of Mechanical and Aerospace Engineering, West Virginia University, Morgantown, WV
²Autonomy and Navigation Technology Center, Air Force Institute of Technology

All software developed to enable the evaluation presented in this study is publicly available at https://github.com/wvu-navLab/ICE.
Abstract—Recent advances in the fields of robotics and automation have spurred significant interest in robust state estimation. To enable robust state estimation, several methodologies have been proposed. One such technique, which has shown promising performance, is the concept of iteratively estimating a Gaussian Mixture Model (GMM), based upon the state estimation residuals, to characterize the measurement uncertainty model. Through this iterative process, the measurement uncertainty model is more accurately characterized, which enables robust state estimation through the appropriate de-weighting of erroneous observations. This approach, however, has traditionally required a batch estimation framework to enable the estimation of the measurement uncertainty model, which is not advantageous to robotic applications. In this paper, we propose an efficient, incremental extension to the measurement uncertainty model estimation paradigm. The incremental covariance estimation (ICE) approach, as detailed within this paper, is evaluated on several collected data sets, where it is shown to provide a significant increase in localization accuracy when compared to other state-of-the-art robust, incremental estimation algorithms.
I. INTRODUCTION
THE ability to infer information about the system and
the operating environment is one of the key components
enabling many robotic applications. To equip robotic platforms
with this capability, several state estimation frameworks [1]
have been developed (e.g., the Kalman filter [2], or the particle
filter [3]).
The traditional state estimation methodologies perform adequately when the collected observations adhere to the a priori models. However, in many robotic applications of interest, the observations can be degraded (e.g., global navigation satellite system (GNSS) observations in an urban environment, or RGB observations in a low-light setting), which causes a deviation between the collected observations and the assumed models. When this deviation is present, the traditional state estimation schemes (i.e., estimators that utilize the l2-norm exclusively to construct the cost function) can break down [4].
To overcome the breakdown of traditional state estimators in data-degraded scenarios, several robust estimation schemes have been developed. These robust estimation schemes compensate for erroneous observations by either adapting the measurement function [5], [6] or by adapting the measurement uncertainty model [7]. As discussed within [8], there is an equivalence between these two compensation schemes. Thus, within this work, the focus will be on erroneous observation compensation through measurement uncertainty model adaptation.
To enable this measurement uncertainty model adaptation in practice, several implementations have been developed. Specifically, these implementations fall into one of two paradigms. The methods that fall into the first framework are the group of consensus-seeking approaches (i.e., the approaches that conduct optimization with a trusted subset of the original observations), such as realizing, reversing, recovering (RRR) [9], single-cluster spectral graph partitioning (SCGP) [10], and l1 relaxation [11]. The methods that fall into the second framework are the group of de-weighting approaches (i.e., the approaches that conduct optimization with all the observations; however, they remain robust by reducing the contribution of observations based upon their deviation from the assumed model), such as maximum likelihood type estimators (m-estimators) [7], switchable constraints [12], and dynamic covariance scaling (DCS) [13].
To extend the robust state estimation through covariance adaptation approach from the traditional uni-modal uncertainty model paradigm to a multi-modal implementation, the max-mixtures (MM) [14] approach was developed. The MM approach mitigates the increased computational complexity generally assumed to accompany the incorporation of multi-modal uncertainty models by first assuming that the uncertainty model can be represented by a Gaussian mixture model (GMM), then selecting the single Gaussian component from the GMM that maximizes the likelihood of the individual observation given the current state estimate.
When initially proposed, the MM approach utilized a static measurement uncertainty model (i.e., the multi-modal measurement uncertainty model is assumed known a priori). To extend the MM approach to scenarios where the measurement uncertainty model is not accurately characterized a priori, several works [15]–[17] have investigated the concept of adapting the measurement uncertainty model based upon the state estimation residuals [18].
Through this adaptive process, the measurement uncertainty model is more accurately characterized, which enables robust state estimation through the appropriate de-weighting of erroneous observations. This approach, however, has traditionally required a batch [19] or fixed-lag [20] estimation framework to enable the estimation of the measurement uncertainty model, which is not advantageous to most robotic applications, as incremental updates are usually required. Additionally, as we will discuss in this paper, these approaches are inefficient with respect to both memory and computation in the estimation of the measurement uncertainty model.
Within this paper, we propose a novel extension to the measurement uncertainty model estimation paradigm. Specifically, we propose an efficient, incremental extension of the methodology. The efficiency of the approach is granted by incrementally adapting the uncertainty model with only a small subset of informative state estimation residuals (i.e., the state estimation residuals which do not adhere to the a priori model), which is a key differentiating factor between the proposed approach and those previously developed [15], [20]. The incremental nature of the approach is granted through recent advances within the probabilistic graphical model community (i.e., through the utilization of the incremental smoothing and mapping (iSAM2) [21] algorithm), in conjunction with the ability to merge GMMs [22].
To provide a discussion of the proposed incremental covariance estimation (ICE) approach, the remainder of the paper is organized as follows. First, a brief introduction to state estimation is provided in Section II, with a specific emphasis placed on the current limitations of robust state estimation. Based upon the discussion provided in Section II, the discussion turns to the proposed ICE robust framework in Section III. In Section IV, the proposed ICE approach is validated on several collected GNSS data sets, where improved estimation accuracy is observed when compared to other state-of-the-art robust state estimators. Finally, the paper terminates in Section V with a brief conclusion and a discussion of future research.
II. STATE ESTIMATION
A. Batch Estimation
For the sake of completeness, a succinct review of state
estimation and its robust variants is detailed in this section.
For a more thorough examination of the topic, the reader is
referred to Section II of [19].
To begin, the general state estimation problem can be formulated as the process of inferring a set of states X that is, in some sense, in best agreement with the provided information Y. The metric utilized to quantify agreement, in this work, is the maximization of the posterior distribution (i.e., the maximum a posteriori (MAP) state estimate X̂), as presented in Eq. 1.

$$\hat{X} = \mathop{\mathrm{argmax}}_{X} \; p(X \mid Y) \tag{1}$$
To enable the implementation of the MAP estimation problem, the factor graph [23] formulation can be utilized. The factor graph is a probabilistic graphical model framework which enables the factorization of the posterior distribution into a product of functions that operate on a reduced domain, as shown in Eq. 2,

$$p(X \mid Y) \propto \prod_{n=1}^{N} \psi_n(A_n, B_n), \tag{2}$$

where ψ_n(A_n, B_n) is an application-specific, domain-reduced function (i.e., a factor in the factor graph model), which operates on A_n ⊆ {X_1, X_2, ..., X_n} and B_n ⊆ {Y_1, Y_2, ..., Y_m}.
When utilizing the factor graph formulation, as a means to enable a computationally efficient implementation, it is commonly assumed that each factor within the factorization adheres to a Gaussian noise model. With this assumption in place, the estimation problem presented in Eq. 1 is reduced to finding the set of states which minimizes the squared sum of weighted residuals [24], as presented in Eq. 3,

$$\hat{X} = \mathop{\mathrm{argmin}}_{X} \sum_{n=1}^{N} \lVert r_n(X) \rVert_{\Lambda_n} \quad \text{s.t.} \quad r_n(X) \triangleq y_n - h_n(X), \tag{3}$$

where r_n(X) is an observation residual, h_n is a function that maps the state estimate to the observation domain, Λ_n is the utilized covariance (i.e., residual weighting) matrix, and ‖·‖ is defined as the l2-norm.
B. Incremental Estimation
For many applications, the information is provided incrementally. When this is the case, the estimation framework discussed previously is inefficient due to the need to re-factor the entire measurement Jacobian matrix every time new information is provided.

To overcome this computational limitation, the concept of incrementally updating the matrix factorization (e.g., the QR-decomposition) was studied within [25]. Within [25], the incremental updating of the matrix factorization is enabled by first augmenting the previous factorization, then restoring the upper-triangular form of the factorization through the utilization of Givens rotations.¹

¹See Section 5.1.8 of [26] for a thorough review of Givens rotations with applications to least squares (LS).
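As a concrete illustration of this mechanism (a minimal numpy sketch, not the iSAM implementation itself), the following appends a single new measurement row to an existing triangular factor R and restores triangularity with Givens rotations; the helper names are illustrative, and the recovered factor matches a full re-factorization only up to row signs.

```python
import numpy as np

def givens(a, b):
    """Compute c, s such that [[c, s], [-s, c]] @ [a, b]^T = [r, 0]^T."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def qr_append_row(R, row):
    """Append one measurement row to an upper-triangular factor R and
    restore triangularity with a sequence of Givens rotations."""
    R = np.vstack([R, row]).astype(float)
    n = R.shape[1]
    for j in range(n):
        a, b = R[j, j], R[-1, j]
        if b == 0.0:
            continue
        c, s = givens(a, b)
        Rj, Rm = R[j, :].copy(), R[-1, :].copy()
        R[j, :] = c * Rj + s * Rm    # rotate the pivot row ...
        R[-1, :] = -s * Rj + c * Rm  # ... and zero the appended row
    return R[:-1, :]                 # last row is now (numerically) zero

# usage: start from the R factor of an existing Jacobian, add one row
A = np.random.randn(6, 3)
R0 = np.linalg.qr(A)[1]
R1 = qr_append_row(R0, np.random.randn(3))
# R1 matches the R factor of the fully re-factored system, up to row signs
```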
The approach proposed within [25] does have one key limitation, which is the requirement to conduct a periodic batch re-computation of the QR-decomposition of the entire measurement Jacobian matrix to enable variable re-ordering. This batch re-computation is utilized to maintain the sparsity of the upper-triangular system. To mitigate this batch re-computation, the Bayes tree [27] was introduced. This directed graphical model directly represents the square root information matrix and can be easily computed from the associated factor graph in a two-step process, as detailed in [21]. Due to the structure of the Bayes tree graphical model, this methodology removes the requirement to re-factor the entire system when new information is added. Instead, only the affected section of the Bayes tree is re-factored, as detailed within [21]. This approach to state estimation is titled iSAM2, and is the approach utilized within this study.
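For reference, the incremental update pattern with iSAM2 looks like the following minimal sketch using GTSAM's Python bindings on a toy odometry chain. It assumes the `gtsam` package and its current `noiseModel` API, and is illustrative glue rather than the GNSS factor graph used in Section IV.

```python
import numpy as np
import gtsam

# One-time setup: the iSAM2 engine maintains the Bayes tree internally.
isam = gtsam.ISAM2()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

# Initial epoch: anchor the first pose with a prior factor.
graph = gtsam.NonlinearFactorGraph()
values = gtsam.Values()
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))
values.insert(0, gtsam.Pose2(0.0, 0.0, 0.0))
isam.update(graph, values)

# Subsequent epochs: only the *new* factors and values are passed in;
# iSAM2 re-factors only the affected portion of the Bayes tree.
for t in range(1, 5):
    graph = gtsam.NonlinearFactorGraph()
    values = gtsam.Values()
    graph.add(gtsam.BetweenFactorPose2(t - 1, t,
                                       gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))
    values.insert(t, gtsam.Pose2(float(t), 0.0, 0.0))
    isam.update(graph, values)
    estimate = isam.calculateEstimate()
```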
C. Robust Estimation
Utilizing the iSAM2 approach provides an efficient estimation framework when the provided information adheres to the a priori models. However, when the provided information does not adhere to the a priori models, the estimator can break down [4]. This property is not exclusive to the iSAM2 framework; instead, it is a fundamental property of any estimation framework that exclusively utilizes the l2-norm to construct its cost function.
To overcome this limitation, several robust estimation frameworks have been proposed (e.g., m-estimators [7], switchable constraints [12], and MM [14]). Linking all of these estimation frameworks is the concept of enabling robust estimation through appropriately weighting (i.e., scaling the assumed covariance model) the contribution of each information source based upon the level of adherence between the information and the a priori model. To implement this concept, the iteratively re-weighted least squares (IRLS) formulation [28], as provided in Eq. 4, can be utilized, where the weighting function w(·) is dependent upon the utilized robust estimation framework (e.g., DCS [13]).

$$\hat{X} = \mathop{\mathrm{argmin}}_{X} \sum_{n=1}^{N} w_n(e_n)\, e_n \quad \text{s.t.} \quad e_n \triangleq \lVert r_n(X) \rVert_{\Lambda_n} \tag{4}$$
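As an illustration of Eq. 4, the sketch below runs IRLS on a contaminated linear regression, with the DCS scaling of [13] used as the weighting function. Treating the squared scale factor as w_n(e_n) and the generic kernel width `phi` are assumptions of this sketch; the problem itself is synthetic.

```python
import numpy as np

def dcs_weight(e2, phi):
    """Dynamic covariance scaling: s = min(1, 2*phi / (phi + e2)),
    applied here as the weight s**2 on each squared residual."""
    s = np.minimum(1.0, 2.0 * phi / (phi + e2))
    return s ** 2

def irls(A, y, phi=1.0, iters=20):
    """IRLS for the linear model y = A x + noise with DCS re-weighting."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]   # l2 initialization
    for _ in range(iters):
        r = y - A @ x
        W = np.diag(dcs_weight(r ** 2, phi))   # re-weight from residuals
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return x

# usage: a line fit with 20% gross outliers
rng = np.random.default_rng(0)
t = rng.uniform(0, 10, 100)
A = np.column_stack([t, np.ones_like(t)])
y = 2.0 * t + 1.0 + 0.1 * rng.standard_normal(100)
y[:20] += rng.uniform(5, 20, 20)   # erroneous observations
x_robust = irls(A, y)              # close to [2, 1] despite the outliers
```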
To extend robust state estimation from the traditional uni-modal uncertainty model paradigm to a multi-modal implementation, the MM [14] approach was developed. The MM approach mitigates the increased computational complexity generally assumed to accompany the incorporation of multi-modal uncertainty models by first assuming that the uncertainty model can be represented by a GMM, then selecting the single Gaussian component from the GMM that maximizes the likelihood of the individual observation given the current state estimate.
The MM approach was extended within the batch covariance estimation (BCE) framework [15], [19] to enable the estimation of the multi-modal covariance models during optimization. The BCE approach enables the estimation of the multi-modal covariance model through the utilization of variational inference (VI) [29] on the current set of state estimation residuals. The BCE approach provided promising results, with the primary limitation being the batch estimation nature of the framework. To overcome this computational limitation, an extension to the BCE approach, as described within Section III, which enables efficient incremental updating while maintaining the robust characteristics, is proposed within this paper.
III. PROPOSED APPROACH
To facilitate a discussion of the proposed ICE framework, the assumed data model is first explained. Then, a method for incremental measurement uncertainty model adaptation is presented. Finally, pulling the previously mentioned topics together, the discussion concludes with an overview of the proposed ICE framework.
A. Data Model
As calculated by the estimator, a set of state estimation residuals R = {r_1, r_2, ..., r_N | r_n ≜ y_n − h_n(X)} is provided. The set of state estimation residuals can be characterized by a GMM, which, for this work, will act as the measurement uncertainty model, GMM_g. As proposed within [14], with the intent to minimize the computational complexity of the optimization problem, the GMM can be reduced to selecting the most likely component from the mixture model to approximately characterize each observation, as depicted in Eq. 5, where µ_m is the component's mean and Λ_m is the component's covariance.

$$r_n \sim \max_{m} \, w_m \, \mathcal{N}(r_n \mid \theta_m) \quad \text{s.t.} \quad \theta_m \triangleq \{\mu_m, \Lambda_m\} \tag{5}$$
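As a concrete illustration, the Eq. 5 component selection can be sketched in a few lines of Python. The two-component inlier/outlier numbers in the usage are hypothetical, loosely echoing the initial covariance model of Section IV with an assumed, inflated outlier component.

```python
import numpy as np
from scipy.stats import multivariate_normal

def most_likely_component(r, weights, means, covs):
    """Eq. 5: pick the single GMM component that maximizes the weighted
    likelihood of the residual r (the max-mixtures approximation)."""
    likes = [w * multivariate_normal.pdf(r, mean=mu, cov=cov)
             for w, mu, cov in zip(weights, means, covs)]
    return int(np.argmax(likes))

# usage: a hypothetical two-component inlier/outlier model
weights = [0.9, 0.1]
means = [np.zeros(2), np.zeros(2)]
covs = [np.diag([2.5**2, 0.25**2]),    # a priori (inlier) model
        np.diag([25.0**2, 2.5**2])]    # assumed inflated outlier mode
k = most_likely_component(np.array([8.0, 0.1]), weights, means, covs)
```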
For this work, it is additionally assumed that the set of residuals, R, can be partitioned into two distinct groups. The first group is the set of all residuals which sufficiently adhere to the a priori covariance model (i.e., do not deviate sufficiently from the most likely component within GMM_g), which will be indicated by the set R_I. The second group is the set of residuals which do not sufficiently adhere to the a priori covariance model, which will be indicated by the set R_O.

To quantify the level of adherence to the a priori uncertainty model, the z-test, as provided in Eq. 6, is employed. Within Eq. 6, µ and σ are the mean and standard deviation of the most likely component from GMM_g for the state estimation residual r_n. Utilizing the z-test as a metric to quantify the level of agreement between the set of state estimation residuals and the a priori uncertainty model, we can more concretely define the two groupings as R_I = {r | r ∈ R, Z(r, φ) < T_r}² and R_O = {r | r ∈ R, r ∉ R_I}.

$$Z(r_n, \phi) = \frac{r_n - \mu}{\sigma} \quad \text{s.t.} \quad \phi \triangleq \{\mu, \sigma\} \tag{6}$$

²T_r is a user-defined parameter that encodes the acceptable amount an observation can deviate from the a priori model, in terms of multiples of the standard deviation. For the presented evaluation, the 3-σ heuristic (i.e., T_r = 3.0) was utilized to encode the acceptable amount of deviation.
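A sketch of this partitioning step is given below. It reuses the `most_likely_component` helper from the Eq. 5 sketch above, and it assumes the z-test is applied element-wise with the absolute z-value for vector residuals, since the paper does not spell out the vector case.

```python
import numpy as np

def partition_residuals(residuals, components, T_r=3.0):
    """Partition residuals into adhering (R_I) and non-adhering (R_O) sets
    via the z-test of Eq. 6. `components` holds (weight, mean, cov) tuples
    for GMM_g; each residual is tested against its most likely component."""
    R_I, R_O = [], []
    for r in residuals:
        k = most_likely_component(r, *zip(*components))  # Eq. 5 selection
        _, mu, cov = components[k]
        z = np.abs((r - mu) / np.sqrt(np.diag(cov)))     # element-wise z-test
        (R_I if np.all(z < T_r) else R_O).append(r)
    return np.array(R_I), np.array(R_O)
```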
B. Uncertainty Model Adaptation
By definition, the set R_O is not accurately characterized by GMM_g; thus, it is desired to adapt the uncertainty model to more accurately represent the new observations. To enable the adaptation of the uncertainty model, a two-step procedure is utilized. This procedure starts by estimating a new GMM, which will be indicated by GMM_n, based solely on the set R_O. Then, GMM_n is merged into the prior model (i.e., GMM_g) to provide a more accurate characterization of the measurement uncertainty model. This procedure is elaborated upon in Section III-B1 and Section III-B2, respectively.
1) Variational Clustering: To estimate GMM_n, the set of model parameters which maximizes the log marginal likelihood, as depicted in Eq. 7, must be calculated. In Eq. 7, θ is the set of mean vectors and covariance matrices which define the new GMM, and Z is an assignment variable (i.e., the variable Z assigns each r ∈ R_O to a specific component within the model).

$$\log p(R_O) = \log \int p(R_O, \theta, Z) \, dZ \, d\theta \tag{7}$$
In general, the integral presented in Eq. 7 is computationally intractable [30]. Thus, a method of approximate integration must be implemented. For this work, the VI³ [30], [31] approach is utilized, primarily due to this class of algorithms' run-time performance when compared to sampling-based approaches (i.e., Monte Carlo methods [32]).

³The libcluster [31] software library was utilized to implement ICE.
To enable a VI based approximation, several parameters
must be defined a priori (e.g., the prior distribution over the
model parameters θ). The specific instantiation of VI utilized
in the ICE algorithm is adopted from the previously proposed
BCE approach. Thus, we have omitted a detailed discussion
of the utilized inference approach within this article, as a
discussion of the utilized VI approach is already detailed in
Section III.A of [19].
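For illustration, an off-the-shelf stand-in for this step is sketched below. The paper's implementation uses libcluster [31]; this sketch uses scikit-learn's Dirichlet-process variational GMM instead, so the priors and numerical behavior will differ, and the pruning threshold is an assumption.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def fit_new_gmm(R_O, max_components=5):
    """Fit GMM_n to the non-adhering residual set R_O (an (m, d) array)
    by variational inference, then prune effectively empty components."""
    vb = BayesianGaussianMixture(
        n_components=max_components,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
    ).fit(np.asarray(R_O))
    keep = vb.weights_ > 1e-2   # assumed pruning threshold
    return vb.weights_[keep], vb.means_[keep], vb.covariances_[keep]
```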
2) Efficient GMM Merging: To enable the second step of the measurement uncertainty model adaptation (i.e., the merging of GMM_n into the prior model GMM_g), an implementation of the algorithm presented in [22] is utilized. To provide a description of the approach, let us evaluate the equivalence between g_n ≜ {w_n, µ_n, Λ_n} ∈ GMM_n (e.g., the first component in GMM_n) and g_g ≜ {w_g, µ_g, Λ_g} ∈ GMM_g (e.g., the first component in GMM_g).

To test the equivalence, we will first extract the set of observations R_{O,g_n} ⊆ R_O that corresponds to the set of state estimation residuals characterized by component g_n. Utilizing R_{O,g_n}, it is desired to check if the set of state estimation residuals has an equivalent covariance to the hypothesis covariance model (i.e., we want to see if Λ_n = Λ_g, where Λ_n = cov(R_{O,g_n}) and Λ_g is the hypothesis covariance from g_g).

To determine if our two GMM components have an equivalent covariance model, we must first transform the set of observations R_{O,g_n} with the Cholesky decomposition of our hypothesis covariance.⁴ This transformation provides us with a new data set, defined as Y = {y = L⁻¹r | r ∈ R_{O,g_n}, Λ_g = LLᵀ}.

⁴This whitening process is conducted because the covariance test is only valid for unit covariance matrices.
Utilizing the transformed set of state estimation residuals Y, the W-statistic [33] can be constructed, as provided in Eq. 8, to test the equivalence of covariance matrices. Within Eq. 8, Λ_y = cov(Y), m is the cardinality of the set Y (i.e., m = |Y|), and d is the dimension of the state estimation residuals (i.e., y_m ∈ R^d).

$$W = \frac{1}{d} \mathrm{Tr}\big[(\Lambda_y - I)^2\big] - \frac{d}{m} \Big[ \frac{1}{d} \mathrm{Tr}(\Lambda_y) \Big]^2 + \frac{d}{m} \tag{8}$$

The W-statistic is known to have an asymptotic χ² distribution with d(d+1)/2 degrees of freedom, as depicted in Eq. 9. Thus, a Chi-square test with a user-defined critical value is utilized to test the equivalence of covariance matrices.

$$\frac{m \, W \, d}{2} \sim \chi^2_{d(d+1)/2} \tag{9}$$
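Under the reconstructions of Eqs. 8 and 9 above, the whitening and covariance test might be sketched as follows; the significance level `alpha` is an assumed, user-defined parameter.

```python
import numpy as np
from scipy.stats import chi2

def covariances_equivalent(R_O_gn, Lambda_g, alpha=0.05):
    """Test Lambda_n == Lambda_g via the W-statistic (Eqs. 8-9). The
    residuals (an (m, d) array) are first whitened by the Cholesky factor
    of the hypothesis covariance, reducing the test to Lambda_y == I."""
    L = np.linalg.cholesky(Lambda_g)
    Y = np.linalg.solve(L, np.asarray(R_O_gn).T).T   # y = L^{-1} r
    m, d = Y.shape
    Lam_y = np.cov(Y, rowvar=False)
    E = Lam_y - np.eye(d)
    W = (np.trace(E @ E) / d
         - (d / m) * (np.trace(Lam_y) / d) ** 2 + d / m)   # Eq. 8
    stat = m * W * d / 2.0                                  # Eq. 9
    return stat < chi2.ppf(1.0 - alpha, d * (d + 1) // 2)
```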
To test the equivalence of mean vectors, the T-statistic [34], as provided in Eq. 10, is utilized. Within Eq. 10, µ_n is the mean of the component of GMM_n, and µ_g is the mean vector of the component of GMM_g. The T-statistic is utilized to test the equivalence of mean vectors because it is known to have an asymptotic F distribution, as depicted in Eq. 11. Thus, an F-test with a user-defined critical value is utilized to test the equivalence of mean vectors.

$$T^2 = m \, \lVert \mu_n - \mu_g \rVert_{\Lambda_y} \tag{10}$$

$$\frac{m - d}{d(m - 1)} \, T^2 \sim F_{d,\, m-d} \tag{11}$$
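Similarly, a sketch of the mean test of Eqs. 10 and 11, reading the Eq. 10 norm as the Λ_y-weighted (Mahalanobis) norm and again assuming a user-defined significance level:

```python
import numpy as np
from scipy.stats import f as f_dist

def means_equivalent(mu_n, mu_g, Lambda_y, m, alpha=0.05):
    """Test mu_n == mu_g via Hotelling's T-statistic (Eqs. 10-11)."""
    diff = mu_n - mu_g
    d = diff.shape[0]
    T2 = m * diff @ np.linalg.solve(Lambda_y, diff)   # Eq. 10
    F = (m - d) / (d * (m - 1)) * T2                  # Eq. 11
    return F < f_dist.ppf(1.0 - alpha, d, m - d)
```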
If both the mean and covariance of two components are found to be equivalent, then the new component g_n is merged with the prior component g_g to adapt the measurement uncertainty model GMM_g. To adapt the measurement uncertainty model, the mean, covariance, and weighting can be updated, as presented in Eqs. 12, 13, and 14, respectively. Within Eqs. 12, 13, and 14, N is the total number of points which are characterized by GMM_g, M is the total number of points which are characterized by GMM_n, and m is the number of points which are characterized by component g_n.

$$\mu = \frac{N w_g \mu_g + m \mu_n}{N w_g + m} \tag{12}$$

$$\Lambda = \frac{N w_g \Lambda_g + m \Lambda_n}{N w_g + m} + \frac{N w_g \mu_g \mu_g^T + m \mu_n \mu_n^T}{N w_g + m} - \mu \mu^T \tag{13}$$

$$w = \frac{N w_g + m}{N + M} \tag{14}$$
If the new component g_n does not match a component within GMM_g, then the mean and covariance of g_n are added to GMM_g. When the new component is added to GMM_g, its weighting is set as presented in Eq. 15, where N, M, and m are as defined above. Additionally, the weightings of all of the remaining components in GMM_g are updated according to Eq. 16.

$$w = \frac{m}{N + M} \tag{15}$$

$$w = \frac{N w_g}{N + M} \tag{16}$$
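The merge and add updates of Eqs. 12–16 reduce to a few lines; treating N·w_g as the effective point count of the prior component is our reading of the equations above.

```python
import numpy as np

def merge_component(N, M, m, w_g, mu_g, Lam_g, mu_n, Lam_n):
    """Merge a matched component g_n into g_g (Eqs. 12-14). N and M are
    the point counts characterized by GMM_g and GMM_n; m is the count
    characterized by g_n."""
    c_g = N * w_g                                  # effective count of g_g
    mu = (c_g * mu_g + m * mu_n) / (c_g + m)                        # Eq. 12
    Lam = ((c_g * Lam_g + m * Lam_n) / (c_g + m)
           + (c_g * np.outer(mu_g, mu_g) + m * np.outer(mu_n, mu_n))
           / (c_g + m)
           - np.outer(mu, mu))                                      # Eq. 13
    w = (c_g + m) / (N + M)                                         # Eq. 14
    return w, mu, Lam

def rescale_weights(N, M, m, prior_weights):
    """When g_n is added as a new component: its weight (Eq. 15) and the
    rescaled weights of the existing components (Eq. 16)."""
    w_new = m / (N + M)
    return w_new, [N * w / (N + M) for w in prior_weights]
```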
Through the utilization of the mixture model merging approach developed within [22], and outlined in this section, the measurement uncertainty model can be adapted online. This adaptation is conducted without the need to store all previous state estimation residuals (i.e., only the most recent residuals R_O which do not adhere to the a priori model are required), which dramatically reduces the computational and memory cost of the proposed approach.
C. Algorithm Overview
With the discussion provided in the previous sections, the
conversation can now turn to an overview of the proposed
robust estimation framework. To facilitate a discussion, a
graphical overview of the ICE framework is depicted in Fig.
1.
From Fig. 1, it is shown that the ICE algorithm starts at each epoch by calculating the set of state estimation residuals R_t from the current set of observations Y_t and the state propagated from the previous epoch, X_{t−1}. As discussed within Section III-A, this set of state estimation residuals R_t can be partitioned into two distinct groups (i.e., the set of state estimation residuals which correspond to erroneous observations, R_{O,t}, and the set of state estimation residuals which correspond to observations that adhere to the a priori model, R_{I,t}) through the utilization of the z-test.
The set R_{O,t} is then appended to the previous set of state estimation residuals which correspond to erroneous observations, R_O. If the length of R_O is greater than a user-defined threshold⁵ (i.e., if |R_O| > T_c), the set is utilized to modify the measurement uncertainty model, as described in Section III-B. After the adaptation of the uncertainty model, the set R_O is cleared, and the set of observations which adhere to the a priori model, R_{I,t}, is incorporated. With the incorporation of the new observations, a new state estimate is provided, following the discussion provided in Section II-B.

⁵The specific realization of this threshold has the potential to greatly affect the estimation accuracy and run-time performance of the ICE algorithm. Within this study, T_c was set to 1,000, as that provided an acceptable compromise between run-time performance and the covariance estimation accuracy of the VI algorithm for a 2-dimensional covariance model. A thorough sensitivity analysis of the ICE approach will be the subject of a future study.
If the length of the set of state estimation residuals which correspond to erroneous observations, R_O, is less than the user-defined threshold, then the uncertainty model is not adapted for the current epoch. Instead, the previous measurement uncertainty model is utilized to incorporate the new set of observations which adhere to the a priori model. With the new observations incorporated, a new state estimate is provided, as described in Section II-B. This process continues in an iterative fashion for as long as needed (e.g., until the data collection terminates).
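To summarize the control flow, the following is a deliberately simplified, scalar, self-contained sketch of the Fig. 1 logic. Moment matching stands in for the variational clustering and merging steps of Section III-B, no estimator is attached, and all numerical values are synthetic.

```python
import numpy as np

def ice_filter(residuals, mu=0.0, sigma=2.5, T_r=3.0, T_c=1000):
    """Scalar sketch of the ICE control flow (Fig. 1): residuals passing
    the z-test flow to the optimizer; the rest are buffered and, once the
    buffer exceeds T_c, summarized into a new mixture component."""
    outlier_components = []            # (mean, std) of adapted components
    buffer, inliers = [], []
    for r in residuals:
        if abs((r - mu) / sigma) < T_r:   # z-test against a priori model
            inliers.append(r)             # adhering: used in optimization
        else:
            buffer.append(r)              # non-adhering: used for adaptation
        if len(buffer) > T_c:             # adapt the uncertainty model
            outlier_components.append((np.mean(buffer), np.std(buffer)))
            buffer.clear()
    return np.array(inliers), outlier_components

# usage: a nominal residual stream contaminated by a biased outlier mode
rng = np.random.default_rng(1)
stream = np.where(rng.uniform(size=5000) < 0.3,
                  rng.normal(15.0, 5.0, 5000),
                  rng.normal(0.0, 2.5, 5000))
inliers, adapted = ice_filter(stream)
```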
IV. RESULTS
A. Data Collection
To conduct an evaluation of the proposed robust estimation
framework, a collection of three kinematic GNSS data sets is
utilized. These GNSS data sets, as can be visualized through
their ground traces, which are shown in Fig. 2, were made
publicly available and are described within [19].
For these data collects, the binary in-phase and quadrature (IQ) data in the L1 band was recorded. By recording the IQ data in place of the GNSS receiver-dependent observations (i.e., the pseudorange and carrier-phase observables), the same data collect can be utilized to generate several sets of observations with varying levels of degradation by playing it back through a software-defined GNSS receiver [35] with different sets of tracking parameters. Specifically, the receiver-dependent observations can be generated off-line by playing the IQ data into a GNSS receiver, where the level of degradation is varied by changing the GNSS receiver's tracking parameters (i.e., changing the bandwidth of the phase lock loop (PLL), the delay lock loop (DLL), and the correlator spacing). For a detailed discussion of the impact that the GNSS receiver tracking parameters can have on the quality of the generated observables, the reader is referred to [36], [37], which is reviewed in [19].

Fig. 1: Graphical depiction of the proposed robust state estimation algorithm, titled incremental covariance estimation (ICE). [Flowchart: gathered observations Y_t and the a priori information yield residuals R_t; the z-test splits R_t into R_{I,t} and R_{O,t}; when |R_O| > T_c, variational clustering produces GMM_n, which is merged into GMM_g; new observations are added and an incremental state update yields X_t and GMM_g, with the loop advancing as t = t + Δt.]

Fig. 2: Ground trace for the three utilized GNSS data sets. The white trace corresponds to data collect 1, the green trace to data collect 2, and the blue trace to data collect 3.
For this study, two sets of observations are generated (i.e.,
a low-quality and high-quality data set) for each of the data
collects. The specific GNSS receiver tracking parameters uti-
lized to generate the low-quality and high-quality observations
are provided within Table III of [19].
B. Factor Graph Construction
The specific factor graph construction utilized to enable GNSS-based inference in this evaluation is adopted from [38]. Thus, a detailed review of the utilized construction has been omitted from this paper; however, for a succinct review of the utilized GNSS observation model, and the utilized method for incorporating GNSS observations into the factor graph, the reader is referred to Section II of [38]. Aside from the GNSS observations, the only other information incorporated in the factor graph is a process noise constraint between consecutive state estimates.
C. Evaluation
With the discussed GNSS observations and the specific factor graph construction, an evaluation of the proposed methodology can be conducted. To provide a comparison for the proposed approach, four additional estimation frameworks are utilized. The first comparison methodology is the traditional l2-norm based estimator. The second comparison methodology is the MM approach, which has a static measurement error covariance model (i.e., a fixed two-component measurement error covariance model). The third comparison methodology is the DCS approach, where the DCS approach is utilized because it is both a closed-form version of switchable constraints and a specific implementation of an m-estimator [13]. All of the above discussed robust estimators are built upon the iSAM2 algorithm [21], as implemented within the Georgia Tech Smoothing and Mapping (GTSAM) library [39]. As a final method of comparison, the BCE [19] approach is utilized.

All of the utilized estimation frameworks are provided the same initial measurement covariance model (i.e., Λ_o = diag(2.5², 0.25²)). Additionally, the estimator-specific hyper-parameters utilized within this evaluation are provided in Table I.
TABLE I: Robust optimization parameter definitions, where K_ρ and K_Φ are the DCS pseudorange and carrier-phase observation kernel widths, respectively.

Methodology | Parameter    | Value
DCS         | K_ρ          | 2.5
            | K_Φ          | 0.25
MM          | weighting    | 1
            | scale factor | 10
ICE         | T_c          | 1,000
            | T_r          | 3.0
1) Localization Performance: To enable the assessment of the localization performance, a reference ground-truth must be established. To generate this ground-truth, a differential GNSS solution (i.e., real-time kinematic (RTK)⁶) is utilized, which is known to provide centimeter-level localization accuracy [36]. With the RTK-generated reference ground-truth solution, the localization performance, as quantified through the residual-sum-of-squares (RSOS) positioning error of the five estimation frameworks when low-quality observations are utilized, is provided in Table II. From Table II, it can be seen that all four of the robust estimation frameworks provide a significant increase in localization accuracy, with respect to the median, when compared to the traditional l2-norm approach.

⁶This solution was realized with RTKLIB [40].
Additionally, it should be noted that the ICE approach provides the most accurate solution, with respect to median error, for all three data collects when low-quality observations are utilized. At first, it may seem counterintuitive that the ICE implementation out-performs the BCE algorithm with respect to median positioning error; however, this result is expected, as the ICE algorithm initially rejects observations that do not adhere to the observation model (i.e., the ICE algorithm does not include erroneous observations within optimization; instead, it utilizes those observations to adapt the measurement covariance model). On the other hand, the BCE approach never rejects observations (i.e., all observations are utilized during optimization), thus creating a possibility of bias in the solution.
To continue the localization performance evaluation, we can assess the localization performance of the four estimation frameworks with the high-quality observations, as provided in Table III. From Table III, it should first be noted that all four estimation frameworks provide comparable localization statistics, as would be expected when the utilized observations adhere to the a priori measurement error covariance model. However, it can also be noted that the ICE approach provides the most accurate localization statistics, among the incremental estimation algorithms, the majority of the time.
2) Covariance Estimation Analysis: To continue the evaluation, the estimated covariance from the ICE approach is assessed. Within this assessment, we have two primary objectives. First, we would like to show that the incrementally estimated covariance represents the measurement uncertainty model. Second, we would like to show that the covariance estimation process is conducted efficiently.
To enable this assessment, the high-quality observations are utilized, as provided in Fig. 3. Within Fig. 3, the black points correspond to the state estimation residuals of observations which sufficiently adhere to the a priori measurement error uncertainty model. The red points, in contrast, correspond to the state estimation residuals of observations which were not well defined by the a priori measurement uncertainty model, and thus were not included during optimization; however, they were utilized to modify the measurement uncertainty model. Additionally, the ellipses correspond to components of the incrementally estimated measurement error uncertainty model, with 95% confidence.
From Fig. 3, it can be seen that the incrementally estimated measurement uncertainty models closely resemble the assumed model for the high-quality observations (i.e., an inlier distribution which characterizes a majority of the observations, and outlier distributions which characterize a small percentage of erroneous observations). This is specifically evident for data collects 1 and 3, as depicted in Fig. 3a and Fig. 3c, respectively.
To verify the efficiency of the covariance adaptation approach, we can evaluate the number of times the measurement uncertainty model was adapted. For data collects 1 and 3, as depicted in Fig. 3a and Fig. 3c, the covariance model
TABLE II: Horizontal RSOS localization error results when low-fidelity receiver tracking parameters are utilized to generate the observations. The green and red cell entries correspond to the minimum and maximum statistics for the incremental estimation frameworks, respectively. As a comparison to the incremental estimators, the positioning performance provided by the previously proposed BCE approach is also provided.

(a) Localization results for data collect 1.

(m.)      | L2    | DCS  | MM    | ICE   | BCE
mean      | 2.51  | 0.99 | 1.66  | 0.73  | 1.12
median    | 2.57  | 0.64 | 1.63  | 0.56  | 1.05
std. dev. | 1.41  | 0.98 | 1.05  | 0.72  | 0.40
max       | 10.78 | 9.71 | 10.06 | 13.19 | 6.36

(b) Localization results for data collect 2.

(m.)      | L2    | DCS   | MM    | ICE   | BCE
mean      | 4.00  | 4.00  | 3.12  | 2.11  | 2.86
median    | 2.48  | 2.08  | 1.94  | 0.93  | 1.78
std. dev. | 3.87  | 4.59  | 3.92  | 2.10  | 1.90
max       | 29.18 | 31.05 | 31.40 | 23.02 | 11.16

(c) Localization results for data collect 3.

(m.)      | L2    | DCS   | MM    | ICE   | BCE
mean      | 4.94  | 4.16  | 4.51  | 4.35  | 4.83
median    | 4.41  | 2.82  | 3.62  | 1.48  | 5.54
std. dev. | 2.97  | 3.54  | 3.33  | 5.23  | 2.08
max       | 29.53 | 30.38 | 28.30 | 26.61 | 10.45
TABLE III: Horizontal RSOS localization error results when high-fidelity receiver tracking parameters are utilized to generate the observations. The green and red cell entries correspond to the minimum and maximum statistics for the incremental estimation frameworks, respectively. As a comparison to the incremental estimators, the positioning performance provided by the previously proposed BCE approach is also provided.

(a) Localization results for data collect 1.

(m.)      | L2   | DCS  | MM   | ICE  | BCE
mean      | 0.44 | 0.43 | 0.41 | 0.42 | 0.64
median    | 0.37 | 0.36 | 0.35 | 0.35 | 0.66
std. dev. | 0.30 | 0.27 | 0.29 | 0.28 | 0.42
max       | 5.38 | 5.33 | 5.35 | 5.22 | 6.04

(b) Localization results for data collect 2.

(m.)      | L2   | DCS  | MM    | ICE  | BCE
mean      | 0.79 | 0.81 | 0.84  | 0.79 | 0.58
median    | 0.82 | 0.81 | 0.84  | 0.83 | 0.56
std. dev. | 0.46 | 0.46 | 0.50  | 0.46 | 0.40
max       | 3.97 | 3.93 | 10.77 | 2.95 | 3.91

(c) Localization results for data collect 3.

(m.)      | L2   | DCS  | MM    | ICE  | BCE
mean      | 1.09 | 1.10 | 1.11  | 1.07 | 0.90
median    | 0.96 | 0.95 | 1.00  | 0.89 | 0.82
std. dev. | 0.67 | 0.73 | 0.72  | 0.66 | 0.38
max       | 7.83 | 7.83 | 18.08 | 7.82 | 3.67
Fig. 3: Incrementally estimated measurement error covariance model when the observations are generated with high-fidelity receiver tracking parameters.
(a) Incrementally estimated measurement error covariance model for data collect 1. For this measurement uncertainty model, approximately 91% of the observations are characterized by component 1.
(b) Incrementally estimated measurement error covariance model for data collect 2. For this data collect, only 249 observations did not adhere to the a priori measurement uncertainty model.
(c) Incrementally estimated measurement error covariance model for data collect 3. For this measurement uncertainty model, approximately 98% of the observations are characterized by component 1.
was only adapted once, to enable the incorporation of two outlier distributions. For data collect 2, as depicted in Fig. 3b, no covariance adaptation step was conducted; instead, only 249 observations were rejected. In contrast, if the covariance model were naively adapted every time the number of residuals exceeded the residual cardinality threshold, then data collect 1 would have required 75 adaptations, data collect 2 would have required 57 adaptations, and data collect 3 would have required 91 adaptations. Thus, the incorporation of the z-test to partition the set of residuals dramatically increased the efficiency of the proposed approach.
3) Run-time Analysis: To conclude the evaluation of the proposed methodology, a run-time comparison⁷ for all of the incremental estimators is provided in Fig. 4. From Fig. 4, it is shown that the l2-norm, DCS, and MM approaches all provide comparable run-time performance.

In Fig. 4, it is also clearly shown that the ICE methodology provides the slowest average run-time; however, this slower run-time, which is still on average approximately 25 Hz, could prove to be a valid compromise when considering the significant increase in localization accuracy granted by the approach. Additionally, although the ICE approach is the slowest, it is possible, while not exploited in the current implementation, to implement the algorithm such that covariance adaptation and state estimation run in parallel.

⁷This run-time comparison was conducted on a 2.8 GHz Intel Core i7-7700HQ processor.
V. CONCLUSION
Within this paper, we propose a novel extension to the measurement uncertainty model estimation paradigm for enabling robust state estimation. Specifically, we propose an efficient, incremental extension of the methodology. The efficiency of the approach is granted by adapting the uncertainty model with only a small subset of informative state estimation residuals (i.e., the state estimation residuals which do not adhere to the a priori model). The incremental nature of the approach is granted through recent advances within the probabilistic graphical model community, and the ability to merge GMMs.
Fig. 4: Estimator update time in milliseconds for each of the
incremental estimation frameworks over all data collects.
To evaluate the proposed ICE approach, three degraded
GNSS data sets are utilized. Based upon the results obtained
on these data sets, the proposed approach provides promising
results. Specifically, the proposed ICE approach provides
significantly increased localization performance when utilizing
degraded data, when compared to other state-of-the-art robust,
incremental estimation algorithms.
REFERENCES
[1] D. Simon, Optimal state estimation: Kalman, H infinity, and nonlinear
approaches. John Wiley & Sons, 2006.
[2] R. E. Kalman, “A new approach to linear filtering and prediction
problems,” Journal of basic Engineering, vol. 82, no. 1, pp. 35–45,
1960.
[3] S. Thrun, W. Burgard, and D. Fox, Probabilistic robotics. MIT press,
2005.
[4] F. R. Hampel, “Contribution to the theory of robust estimation,” Ph. D.
Thesis, University of California, Berkeley, 1968.
[5] J. Hidalgo-Carrió, D. Hennes, J. Schwendner, and F. Kirchner, “Gaussian
process estimation of odometry errors for localization and mapping,”
in 2017 IEEE International Conference on Robotics and Automation
(ICRA), pp. 5696–5701, IEEE, 2017.
[6] T. Tang, D. Yoon, F. Pomerleau, and T. D. Barfoot, “Learning a bias
correction for lidar-only motion estimation,” in 2018 15th Conference
on Computer and Robot Vision (CRV), pp. 166–173, IEEE, 2018.
[7] P. J. Huber, Robust Statistics. Wiley New York, 1981.
[8] A. Kvas and T. Mayer-Gürr, “Grace gravity field recovery with back-
ground model uncertainties,” Journal of geodesy, vol. 93, no. 12,
pp. 2543–2552, 2019.
[9] Y. Latif, C. Cadena, and J. Neira, "Realizing, reversing, recovering: Incremental robust loop closing over time using the iRRR algorithm," in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4211–4217, IEEE, 2012.
[10] E. Olson, M. R. Walter, S. J. Teller, and J. J. Leonard, “Single-cluster
spectral graph partitioning for robotics applications.,” in Robotics:
Science and Systems, pp. 265–272, 2005.
[11] L. Carlone, A. Censi, and F. Dellaert, "Selecting good measurements via l1 relaxation: A convex approach for robust estimation over graphs," in 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2667–2674, IEEE, 2014.
[12] N. Sünderhauf and P. Protzel, “Switchable constraints for robust pose
graph SLAM,” in 2012 IEEE/RSJ International Conference on Intelli-
gent Robots and Systems, pp. 1879–1884, IEEE, 2012.
[13] P. Agarwal, G. D. Tipaldi, L. Spinello, C. Stachniss, and W. Burgard, "Robust map optimization using dynamic covariance scaling," in 2013 IEEE International Conference on Robotics and Automation, pp. 62–69, Citeseer, 2013.
[14] E. Olson and P. Agarwal, “Inference on networks of mixtures for robust
robot mapping,” The International Journal of Robotics Research, vol. 32,
no. 7, pp. 826–840, 2013.
[15] R. M. Watson, C. N. Taylor, R. C. Leishman, and J. N. Gross, "Batch Measurement Error Covariance Estimation for Robust Localization," in Proceedings of the 31st International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS+ 2018), pp. 2429–2439, 2018.
[16] T. Pfeifer and P. Protzel, “Robust sensor fusion with self-tuning mixture
models,” in 2018 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS), pp. 3678–3685, IEEE, 2018.
[17] D. Wang, J. Xue, Z. Tao, Y. Zhong, D. Cui, S. Du, and N. Zheng, "Accurate mix-norm-based scan matching," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1665–1671, IEEE, 2018.
[18] G. Agamennoni, P. Furgale, and R. Siegwart, "Self-tuning m-estimators," in 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 4628–4635, IEEE, 2015.
[19] R. M. Watson, J. N. Gross, C. N. Taylor, and R. C. Leishman, “En-
abling Robust State Estimation through Measurement Error Covariance
Adaptation,” IEEE Transactions on Aerospace and Electronic Systems,
pp. 1–1, 2019.
[20] T. Pfeifer and P. Protzel, "Incrementally learned mixture models for GNSS localization," arXiv preprint arXiv:1904.13279, 2019.
[21] M. Kaess, H. Johannsson, R. Roberts, V. Ila, J. J. Leonard, and
F. Dellaert, “iSAM2: Incremental smoothing and mapping using the
Bayes tree,” The International Journal of Robotics Research, vol. 31,
no. 2, pp. 216–235, 2012.
[22] M. Song and H. Wang, “Highly efficient incremental estimation of
Gaussian mixture models for online data stream clustering,” in Intelligent
Computing: Theory and Applications III, vol. 5803, pp. 174–184,
International Society for Optics and Photonics, 2005.
[23] F. Dellaert, M. Kaess, et al., “Factor graphs for robot perception,”
Foundations and Trends® in Robotics, vol. 6, no. 1-2, pp. 1–139, 2017.
[24] F. Dellaert and M. Kaess, “Square Root SAM: Simultaneous localization
and mapping via square root information smoothing,” The International
Journal of Robotics Research, vol. 25, no. 12, pp. 1181–1203, 2006.
[25] M. Kaess, A. Ranganathan, and F. Dellaert, “Fast Incremental Square
Root Information Smoothing.,” in IJCAI, pp. 2129–2134, 2007.
[26] G. H. Golub and C. F. van Loan, Matrix Computations. JHU Press,
fourth ed., 2013.
[27] M. Kaess, V. Ila, R. Roberts, and F. Dellaert, “The Bayes tree: An
algorithmic foundation for probabilistic robot mapping,” in Algorithmic
Foundations of Robotics IX, pp. 157–173, Springer, 2010.
[28] Z. Zhang, “Parameter estimation techniques: A tutorial with application
to conic fitting,” Image and vision Computing, vol. 15, no. 1, pp. 59–76,
1997.
[29] D. M. Blei, M. I. Jordan, et al., “Variational inference for Dirichlet
process mixtures,” Bayesian analysis, vol. 1, no. 1, pp. 121–143, 2006.
[30] M. Beal, Variational algorithms for approximate Bayesian inference.
PhD thesis, University of London, 2003.
[31] D. Steinberg, An unsupervised approach to modelling visual data. PhD
thesis, University of Sydney., 2013.
[32] A. Doucet and X. Wang, "Monte Carlo methods for signal processing: a review in the statistical signal processing context," IEEE Signal Processing Magazine, vol. 22, no. 6, pp. 152–170, 2005.
[33] O. Ledoit, M. Wolf, et al., “Some hypothesis tests for the covariance
matrix when the dimension is large compared to the sample size,” The
Annals of Statistics, vol. 30, no. 4, pp. 1081–1102, 2002.
[34] H. Hotelling, "The generalization of Student's ratio," in Breakthroughs in Statistics, pp. 54–65, Springer, 1992.
[35] C. Fernandez-Prades, J. Arribas, P. Closas, C. Aviles, and L. Esteve, "GNSS-SDR: An open source tool for researchers and developers," in Proceedings of the 24th International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS 2011), pp. 780–794, 2011.
[36] E. Kaplan and C. Hegarty, Understanding GPS: Principles and Applications. Artech House, 2005.
[37] A. Van Dierendonck, P. Fenton, and T. Ford, “Theory and performance
of narrow correlator spacing in a gps receiver,” Navigation, vol. 39,
no. 3, pp. 265–283, 1992.
[38] R. M. Watson and J. N. Gross, “Evaluation of kinematic precise point
positioning convergence with an incremental graph optimizer,” in Po-
sition, Location and Navigation Symposium (PLANS), 2018 IEEE/ION,
pp. 589–596, IEEE, 2018.
[39] F. Dellaert, “Factor graphs and gtsam: A hands-on introduction,” tech.
rep., Georgia Institute of Technology, 2012.
[40] T. Takasu, “RTKLIB: An open source program package for GNSS
positioning,” 2011.
ResearchGate has not been able to resolve any citations for this publication.
Article
Full-text available
In this article, we present a computationally efficient method to incorporate background model uncertainties into the gravity field recovery process. While the geophysical models typically used during the processing of GRACE data, such as the atmosphere and ocean dealiasing product, have been greatly improved over the last years, they are still a limiting factor of the overall solution quality. Our idea is to use information about the uncertainty of these models to find a more appropriate stochastic model for the GRACE observations within the least squares adjustment, thus potentially improving the gravity field estimates. We used the ESA Earth System Model to derive uncertainty estimates for the atmosphere and ocean dealiasing product in the form of an autoregressive model. To assess our approach, we computed time series of monthly GRACE solutions from L1B data in the time span of 2005 to 2010 with and without the derived error model. Intercomparisons between these time series show that noise is reduced on all spatial scales, with up to 25% RMS reduction for Gaussian filter radii from 250 to 300 km, while preserving the monthly signal. We further observe a better agreement between formal and empirical errors, which supports our conclusion that used uncertainty information does improve the stochastic description of the GRACE observables.
Conference Paper
Full-text available
The factor graph has become the standard framework for representing a plethora of robotic navigation problems. One primary reason for this adoption by the community is the fast and efficient inference that can be conducted over the graph when a unimodal Gaussian noise model is assumed. However, the unimodal Gaussian noise model assumption does not reflect reality in many situations, particularly measurements that may include gross outliers (e.g. feature tracking between images, place recognition, or GNSS multipath). To combat this issue, several methodologies have been proposed for conducting robust inference on factor graphs. These models work by reducing the contribution of constraints that do not adhere to the specified noise model by scaling the corresponding elements of the information matrix. A unifying assumption shared by the proposed robust graph inference algorithms is that the measurement noise model is known a priori and that the specified noise model does not vary with time. In the situation where the measurement model is not fully known, rejecting the outliers can become far more difficult. To overcome this issue, a novel method is proposed that utilizes a non-parametric soft clustering algorithm to iteratively estimate the measurement error covariance matrix. The estimated covariance mixture model is then used within the max-mixtures framework to mitigate the effect of false constraints. The proposed methodology provides robust optimization in the face of faulty measurements where little or no information is provided about the measurement uncertainty
Conference Paper
Full-text available
Highly accurate mapping and localization is of prime importance for mobile robotics, and its core lies in efficient scan matching. Previous research are focusing on designing a robust objective function and the residual error distribution is often ignored or simply assumed as unitary or mixture of simple distributions. In this paper, a mixture of exponential power (MoEP) distributions is proposed to approximate the residual error distribution. The objective function induced by MoEP-based residual error modelling ensembles a mix-norm-based scan matching (MiNoM), which enhances the matching accuracy and convergence characteristic. Both the parameters of transformation (rotation and translation) and residual error distribution are estimated efficiently via an EM-like algorithm. The optimization of MiNoM is iteratively achieved via two phases: An on-line parameter learning (OPL) phase to learn residual error distribution for better representation according to the likelihood field model (LFM), and an iteratively reweighted least squares (IRLS) phase to attain transformation for accuracy and efficiency. Extensive experimental results validate that the proposed MiNoM outperforms several state-of-the-art scan matching algorithms in both convergence characteristic and matching accuracy.
Conference Paper
Full-text available
Estimation techniques to precisely localize a kine-matic platform with GNSS observables can be broadly partitioned into two categories: differential, or undifferenced. The differential techniques (e.g., real-time kinematic (RTK)) have several attractive properties, such as correlated error mitigation and fast convergence; however, to support a differential processing scheme, an infrastructure of reference stations within a proximity of the platform must be in place to construct observation corrections. This infrastructure requirement makes differential processing techniques infeasible in many locations. To mitigate the need for additional receivers within proximity of the platform, the precise point positioning (PPP) method utilizes accurate orbit and clock models to localize the platform. The autonomy of PPP from local reference stations make it an attractive processing scheme for several applications; however, a current disadvantage of PPP is the slow positioning convergence when compared to differential techniques. In this paper, we evaluate the convergence properties of PPP with an incremental graph optimization scheme (Incremental Smoothing and Mapping (iSAM2)), which allows for real-time filtering and smoothing. The characterization is first conducted through a Monte Carlo analysis within a simulation environment, which allows for the variations of parameters, such as atmospheric conditions, satellite geometry, and intensity of multipath. Then, an example collected data set is utilized to validate the trends presented in the simulation study.
Conference Paper
Full-text available
Robust navigation in urban environments has received a considerable amount of both academic and commercial interest over recent years. This is primarily due to large commercial organizations such as Google and Uber stepping into the autonomous navigation market. Most of this research has shied away from Global Navigation Satellite System (GNSS) based navigation. The aversion to utilizing GNSS data is due to the degraded nature of the data in urban environment (e.g., multipath, poor satellite visibility). The degradation of the GNSS data in urban environments makes it such that traditional (GNSS) positioning methods ( e.g., extended Kalman filter, particle filters) perform poorly. However, recent advances in robust graph theoretic based sensor fusion methods, primarily applied to Simultaneous Localization and Mapping (SLAM) based robotic applications, can also be applied to GNSS data processing. This paper will utilize one such method known as the factor graph in conjunction several robust optimization techniques to evaluate their applicability to robust GNSS data processing. The goals of this study are two-fold. First, for GNSS applications, we will experimentally evaluate the effectiveness of robust optimization techniques within a graph theoretic estimation framework. Second, by releasing the software developed and data sets used for this study, we will introduce a new open-source front-end to the Georgia Tech Smoothing and Mapping (GTSAM) library for the purpose of integrating GNSS pseudorange observations.
Article
Accurate platform localization is an integral component of most robotic systems. As these robotic systems become more ubiquitous, it is necessary to develop robust state estimation algorithms that are able to withstand novel and non-cooperative environments. When dealing with novel and non-cooperative environments, little is known a priori about the measurement error uncertainty, thus, there is a requirement that the uncertainty models of the localization algorithm be adaptive. Within this paper, we propose the batch covariance estimation technique, which enables robust state estimation through the iterative adaptation of the measurement uncertainty model. The adaptation of the measurement uncertainty model is granted through non-parametric clustering of the residuals, which enables the characterization of the measurement uncertainty via a Gaussian mixture model. The provided Gaussian mixture model can be utilized within any non-linear least squares optimization algorithm by approximately characterizing each observation with the sufficient statistics of the assigned cluster (i.e., each observation's uncertainty model is updated based upon the assignment provided by the nonparametric clustering algorithm). The proposed algorithm is verified on several GNSS collected data sets, where it is shown that the proposed technique exhibits some advantages when compared to other robust estimation techniques when confronted with degraded data quality.
Conference Paper
GNSS localization is an important part of today's autonomous systems, although it suffers from non-Gaussian errors caused by non-line-of-sight effects. Recent methods are able to mitigate these effects by including the corresponding distributions in the sensor fusion algorithm. However, these approaches require prior knowledge about the sensor's distribution, which is often not available. We introduce a novel sensor fusion algorithm based on variational Bayesian inference, that is able to approximate the true distribution with a Gaussian mixture model and to learn its parametrization online. The proposed Incremental Variational Mixture algorithm automatically adapts the number of mixture components to the complexity of the measurement's error distribution. We compare the proposed algorithm against current state-of-the-art approaches using a collection of open access real world datasets and demonstrate its superior localization accuracy.
Conference Paper
A fundamental problem of non-linear state estimation in robotics is the violation of assumptions about the sensors' error distribution. State of the art approaches reduce the impact of these violations with robust cost functions or predefined non-Gaussian error models. Both require extensive parameter tuning and fail if the sensors' error characteristic changes over time, due to environmental changes, ageing or sensor malfunctions. We demonstrate how the error distribution itself can be part of the state estimation process. Based on an efficient approximation of a Gaussian mixture, we optimize the sensor model simultaneously during the standard state estimation. Due to an implicit expectation-maximization approach, we achieve a fast convergence without prior knowledge of the true distribution parameters. We implement this self-tuning algorithm in a least-squares optimization framework and demonstrate its real time capability on a real world dataset for satellite localization of a driving vehicle. The resulting estimation quality is superior to previous robust algorithms.