IEEE TRANSACTIONS ON SIGNAL PROCESSING 1
Discrete Time q-Lag Maximum Likelihood FIR Smoothing and Iterative Recursive Algorithm

Shunyi Zhao, Senior Member, IEEE, Jinfu Wang, Yuriy S. Shmaliy, Fellow, IEEE, and Fei Liu, Member, IEEE
Abstract—The finite impulse response (FIR) approach is known to be more robust than the Kalman approach. In this paper, we derive a batch q-lag maximum likelihood (ML) FIR smoother for full covariance matrices and represent it with an iterative algorithm using recursions for diagonal covariance matrices. It is shown that, under ideal conditions of a fully known model, the ML FIR smoother occupies an intermediate place between the more accurate Rauch-Tung-Striebel (RTS) smoother and the less accurate unbiased FIR smoother. With uncertainties and errors in noise covariances, ML FIR smoothing is significantly superior to RTS smoothing. It is also shown experimentally that ML FIR smoothing is more robust than RTS smoothing against measurement outliers.

Index Terms—State space, maximum likelihood, FIR smoother, Kalman smoother, UFIR smoother.
I. INTRODUCTION
THE denoising state estimation technique called smoothing is used in real time when some time delay is allowed in the output. Unlike optimal filtering, which deals with unique Kalman recursions, optimal smoothing can be organized in different ways. Moreover, noticing that the coordinate basis in state space is arbitrary for any linear system, Moore argued in [1] that there is an infinity of smoothing algorithms. Even so, only two general approaches are available to design smoothers [2]-[5]: infinite impulse response (IIR), such as the recursive Kalman filter, and finite impulse response (FIR).

In recursive smoothing, state estimates are improved by processing backward available Kalman estimates in the fixed-lag, fixed-interval, and fixed-point smoothing algorithms [6]. In fixed-lag smoothing [7], the optimal estimate is achieved with a fixed time delay (lag) q. The widely used fixed-interval Rauch-Tung-Striebel (RTS) optimal smoother [8], also known as the Kalman smoother, takes data from a fixed interval to obtain a q-lag smoothed estimate therein. Because of its simplicity
Manuscript received January 13, 2021; revised July 26, 2021 and September 30, 2021; accepted November 7, 2021. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Victor Elvira. This work was supported in part by the National Natural Science Foundation of China under Grants 61973136 and 61833007, in part by the Natural Science Foundation of Jiangsu Province under Grant BK20211528, and in part by Mexican CONACyT-SEP Project A1-S-10287, Funding CB2017-2018. (Corresponding author: Yuriy S. Shmaliy.)
Shunyi Zhao, Jinfu Wang, and Fei Liu are with the Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi, China (e-mail: shunyi.s.y@gmail.com; jingwang@jiangnan.edu.cn; fliu@jiangnan.edu.cn).
Yuriy S. Shmaliy is with the Department of Electronics, Universidad de Guanajuato, Salamanca 36855, Mexico (e-mail: shmaliy@ugto.mx).
Digital Object Identifier 10.1109/TSP.2021.3127677
and accuracy, it has been extended for various scenarios using techniques such as Huber densities and H∞ performance [5], [9], [10]. Another widely used fixed-interval optimal smoother was proposed by Bierman [11] and is now better known as the modified Bryson-Frazier (MBF) smoother. A specific property of fixed-point smoothing is that it improves the estimate with each new data point.

Unlike recursive KF-based schemes, batch FIR smoothing deals with data on the averaging horizon $[m, k]$ of $N$ points, where $m = k - N + 1$ and $N$ is the horizon length, and gives a q-lag smoothed estimate for $0 < q < N - 1$. A classic example is the Savitzky-Golay (SG) FIR smoothing filter [12], which associated the estimate with the middle of the averaging horizon, although not in state space. Various FIR smoothers can be designed using optimal, maximum likelihood (ML), and unbiased FIR approaches based on forward and backward state space models and filters. Even so, FIR smoothing is still less used in state space, and its properties are not well understood.

First attempts to develop an optimal FIR (OFIR) smoother were made in [13]. The solution was found by modifying the receding horizon (RH) FIR predictive filter [14], whose computational complexity $O(N^2)$ was reduced to $O(N)$ using the system matrix property. At about the same time, another order-recursive FIR smoother was obtained in [15] using the Cholesky factorization. Further progress in RH FIR smoothing was achieved in 2007-2014 by W. H. Kwon and his followers. The minimum variance FIR smoother was derived in discrete time in [16], [17] and in continuous time in [18], where bias-constrained solutions were found for a fixed lag. Another similar fixed-lag RH ML FIR smoother was proposed in [21]. In [19], it was argued that a lag size of half an averaging horizon is not generally best in the mean square error sense, unlike the setting in the SG filter. This was later investigated in detail in [20], where the SG filter was generalized as a special case of even-order polynomials. In [22], it was shown that the ML FIR structure is universal for all types of FIR state estimators with built-in unbiasedness. Some other approaches [23], [24] can also be considered to develop FIR estimators.

Several norm-bounded RH FIR smoothers, called robust, have also been obtained. The energy-to-error RH FIR smoother was derived in [25] for deterministic discrete models affected by disturbances. Under the same conditions, a minimax robust RH FIR smoother was developed and investigated in continuous time in [26]. The H∞ RH FIR smoother was designed in [27] using a linear matrix inequality.
1053-587X © 2021 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See https://www.ieee.org/publications/rights/index.html for more information.
Since predictive RH smoothing is not required in signal processing, a universal p-shift OFIR filter was developed [28], [29], which becomes a q-lag OFIR smoother by setting $p = -q$. By neglecting noise, this solution becomes a q-lag unbiased FIR (UFIR) smoother that produces a smoothed estimate in one step, provided that the UFIR filtering estimate [30] is available. Later, the UFIR smoother was investigated in detail for polynomial models in [20] and compared to the RTS (or Kalman) smoother in [31].

Summing up the results of this brief review, we note that the theory of q-lag ML FIR smoothing has not yet been developed. It is thus unclear whether the q-lag ML FIR smoothing estimate can be computed iteratively, since the recursive forms for this smoothing remain unknown. Noting that FIR smoothing is superior to predictive RH FIR smoothing, we view the development of q-lag FIR smoothing algorithms as an important signal processing problem.

In this paper, we derive a q-lag ML FIR smoother in the discrete-time state space, find recursive forms for white Gaussian noise, and design an iterative algorithm. We also investigate properties of this smoother and test it by simulations and experimentally based on examples of tracking and localization. As benchmarks, we employ the RTS and UFIR smoothers. The main contributions are the following:
- Batch q-lag ML FIR smoother operating with full covariance matrices.
- Recursive forms for diagonal block covariance matrices.
- Iterative q-lag ML FIR smoothing algorithm using recursions.
- Evidence of better robustness of q-lag ML FIR smoothing under temporary model uncertainties, errors in noise covariances, and outliers.

The rest of the paper is organized as follows. Section II discusses the model and formulates the problem. Derivation of the batch q-lag ML FIR smoother is given in Section III. Recursive forms and an iterative q-lag ML FIR smoothing algorithm are obtained in Section IV. An analysis of the q-lag ML FIR smoother performance is given in Section V. Numerical testing of the target tracking problem is carried out in Section VI. Experimental verification is given in Section VII, and concluding remarks are drawn in Section VIII.
II. MODEL AND PROBLEM FORMULATION

Consider a discrete time-varying process represented in state space by the following equations,

$x_n = A_n x_{n-1} + B_n w_n$, (1)
$y_n = C_n x_n + v_n$, (2)

where $n$ is the time index, $x_n \in \mathbb{R}^K$ is the state vector, $y_n \in \mathbb{R}^M$ is the measurement vector, $A_n \in \mathbb{R}^{K \times K}$, $B_n \in \mathbb{R}^{K \times L}$, and $C_n \in \mathbb{R}^{M \times K}$ are known matrices, and $w_n \sim \mathcal{N}(0, Q_n) \in \mathbb{R}^L$ and $v_n \sim \mathcal{N}(0, R_n) \in \mathbb{R}^M$ are zero mean white Gaussian process noise and measurement noise, respectively, having the covariances $Q_n = E\{w_n w_n^T\}$ and $R_n = E\{v_n v_n^T\}$. It is assumed that $w_n$ and $v_n$ are mutually uncorrelated, $E\{w_i v_j^T\} = 0$ for all $i$ and $j$, and the process satisfies the requirements of observability and controllability [33], [34].
To derive a FIR estimator, a common strategy is to extend model (1) and (2) on the averaging horizon $[m = n - N + 1, n]$ of $N$ points as [28], [29], [35]

$X_{m,n} = A_{m,n} x_m + B_{m,n} W_{m,n}$, (3)
$Y_{m,n} = C_{m,n} x_m + G_{m,n} W_{m,n} + V_{m,n}$, (4)

where the extended vectors are defined as

$X_{m,n} = [x_m^T \; x_{m+1}^T \; \ldots \; x_n^T]^T \in \mathbb{R}^{NK}$,
$Y_{m,n} = [y_m^T \; y_{m+1}^T \; \ldots \; y_n^T]^T \in \mathbb{R}^{NM}$,
$W_{m,n} = [w_m^T \; w_{m+1}^T \; \ldots \; w_n^T]^T \in \mathbb{R}^{NL}$,
$V_{m,n} = [v_m^T \; v_{m+1}^T \; \ldots \; v_n^T]^T \in \mathbb{R}^{NM}$,
and the extended matrices are given by

$A_{m,n} = [I, \; A_{m+1}^T, \; \ldots, \; (A_{n-1}^{m+1})^T, \; (A_n^{m+1})^T]^T$, (5)

$B_{m,n} = \begin{bmatrix} B_m & 0 & \ldots & 0 & 0 \\ A_{m+1} B_m & B_{m+1} & \ldots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ A_{n-1}^{m+1} B_m & A_{n-1}^{m+2} B_{m+1} & \ldots & B_{n-1} & 0 \\ A_n^{m+1} B_m & A_n^{m+2} B_{m+1} & \ldots & A_n B_{n-1} & B_n \end{bmatrix}$, (6)

$C_{m,n} = \bar{C}_{m,n} A_{m,n}$, (7)
$G_{m,n} = \bar{C}_{m,n} B_{m,n}$, (8)
$\bar{C}_{m,n} = \mathrm{diag}(\underbrace{C_m \; C_{m+1} \; \ldots \; C_n}_{N})$, (9)

where the product of system matrices is defined by

$A_r^g = \begin{cases} A_r A_{r-1} \cdots A_g, & g < r \\ A_r, & g = r \\ I, & g = r + 1 \\ 0, & g > r + 1 \end{cases}$ (10)
This model assumes that the state $x_m$ at $m$ is known and thus $w_m = 0$.
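To make the bookkeeping in (5)-(10) concrete, the following sketch (ours, not from the paper) builds the extended matrices for a small time-invariant example and checks that (3) reproduces a trajectory simulated by (1) when $w_m = 0$; all dimensions and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, L = 5, 2, 1                              # horizon length and dimensions
A = [np.array([[1.0, 0.3], [0.0, 1.0]])] * N   # A_{m+j}, local index j = 0..N-1
B = [np.array([[0.5], [1.0]])] * N
C = [np.array([[1.0, 0.0]])] * N

def prod_A(g, r):
    """A_r A_{r-1} ... A_g in local indices, per (10); identity if g = r + 1."""
    P = np.eye(K)
    for t in range(r, g - 1, -1):
        P = A[t] @ P
    return P

# Extended matrices (5)-(9); block row j corresponds to time m + j
A_mn = np.vstack([prod_A(1, j) for j in range(N)])                 # (5)
B_mn = np.zeros((N * K, N * L))
for j in range(N):
    for i in range(j + 1):
        B_mn[j*K:(j+1)*K, i*L:(i+1)*L] = prod_A(i + 1, j) @ B[i]   # (6)
Cbar = np.kron(np.eye(N), C[0])        # diag(C_m ... C_n), time-invariant here
C_mn = Cbar @ A_mn                     # (7)
G_mn = Cbar @ B_mn                     # (8)

# Sanity check: (3) reproduces the recursion (1) when w_m = 0
x_m = rng.standard_normal(K)
W = rng.standard_normal(N * L)
W[:L] = 0.0                            # the model assumes w_m = 0
X, x = [x_m], x_m
for j in range(1, N):
    x = A[j] @ x + B[j] @ W[j*L:(j+1)*L]
    X.append(x)
X_ext = A_mn @ x_m + B_mn @ W          # extended model (3)
```

The check `np.allclose(np.concatenate(X), X_ext)` holds, which confirms the block indexing convention of (6) and (10).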
FIR smoothing can be organized on $[m, n]$ if we introduce a lag $0 < q < N - 1$ and define the q-lag estimate as

$\hat{x}_{n-q} \triangleq \hat{x}_{n-q|n} = H_{m,n}(q) Y_{m,n}$, (11)

where $H_{m,n}(q)$ is the FIR smoother gain and $Y_{m,n}$ is a vector of real data collected on $[m, n]$. The approach was originally proposed to develop the p-shift OFIR filter for time-invariant systems without input [28], [29]. Note that, for zero noise, this filter becomes a p-shift UFIR filter [30]. By setting $q = -p$, we have the q-lag OFIR and UFIR smoothers with the following extreme properties. The OFIR smoother requires all information about noise and initial values and is thus not robust. The UFIR smoother [31] does not require such information and is thus robust, but not optimal.
An in-between solution is the maximum likelihood (ML) FIR estimator [22], [32], which does not require the initial state and unifies the minimum variance UFIR estimator [36], the optimal unbiased FIR (OUFIR) estimator [37], and the OFIR estimator with embedded unbiasedness [38]. Although the ML FIR estimator is more robust than the OFIR estimator, so far no ML FIR smoothing solution has been offered to the engineer, which motivates our present work.

The problem is now formulated as follows. Given model (1) and (2), we would like to obtain a batch q-lag ML FIR smoother by maximizing the likelihood $p(Y_{m,n} | x_{n-q})$, find recursive forms under white Gaussian noise, and develop an iterative algorithm. We also wish to investigate the trade-off between the q-lag ML FIR, UFIR, and RTS smoothers by simulation and test them experimentally.
III. BATCH q-LAG ML FIR SMOOTHER
In this section, we derive a batch q-lag ML FIR smoother by maximizing the likelihood function of the FIR estimate (11) on $[m, n]$.

Theorem 1: Given model (1) and (2) with a nonsingular $A_n$ and uncorrelated $w_n \sim \mathcal{N}(0, Q_n)$ and $v_n \sim \mathcal{N}(0, R_n)$. The q-lag ML FIR smoothing estimate is obtained by

$\hat{x}_{n-q} = H_{m,n}(q) Y_{m,n}$ (12a)
$= [\tilde{C}_{m,n}^T(q) \Sigma_{m,n}^{-1}(q) \tilde{C}_{m,n}(q)]^{-1} \tilde{C}_{m,n}^T(q) \Sigma_{m,n}^{-1}(q) Y_{m,n}$, (12b)

where $\tilde{C}_{m,n}(q) = C_{m,n} (A_{n-q}^{m+1})^{-1}$ and $\Sigma_{m,n}(q)$ is given by

$\Sigma_{m,n}(q) = [G_{m,n} - \tilde{C}_{m,n}(q) B_{m,n}^{(N-q)}] Q_{m,n} [\cdots]^T + R_{m,n}$, (13)

in which $Q_{m,n}$ and $R_{m,n}$ are the covariances of $W_{m,n}$ and $V_{m,n}$, and $[\cdots]$ denotes a repetition of the relevant preceding bracketed term.
Proof: To prove (12b), extract the state $x_{n-q}$ from (3) with respect to $x_m$ as

$x_{n-q} = A_{n-q}^{m+1} x_m + B_{m,n}^{(N-q)} W_{m,n}$, (14)

where $B_{m,n}^{(N-q)}$ is the $(N-q)$th block row in (6), given by

$B_{m,n}^{(N-q)} = [\underbrace{A_{n-q}^{m+1} B_m \;\; A_{n-q}^{m+2} B_{m+1} \;\; \ldots \;\; B_{n-q}}_{N-q} \;\; \underbrace{0 \;\; \ldots \;\; 0}_{q}]$.

Multiply both sides of (14) by $(A_{n-q}^{m+1})^{-1}$ (we first assume that $A_{n-q}^{m+1}$ is invertible and will later remove this assumption), move the second term on the right-hand side to the left-hand side, and obtain the back-in-time state equation

$x_m = (A_{n-q}^{m+1})^{-1} x_{n-q} - (A_{n-q}^{m+1})^{-1} B_{m,n}^{(N-q)} W_{m,n}$. (15)
Next, substitute (15) into (4) and write the measurement residual as

$Y_{m,n} - \tilde{C}_{m,n}(q) x_{n-q} = N_{m,n}(q)$, (16)

where the noise vector is given by

$N_{m,n}(q) = [G_{m,n} - \tilde{C}_{m,n}(q) B_{m,n}^{(N-q)}] W_{m,n} + V_{m,n}$. (17)

For uncorrelated white Gaussian $w_n$ and $v_n$, write the likelihood of $Y_{m,n}$ as

$p(Y_{m,n} | x_{n-q}) \propto \exp\left\{-\frac{1}{2} [Y_{m,n} - \tilde{C}_{m,n}(q) x_{n-q}]^T \Sigma_{m,n}^{-1}(q) [\cdots]\right\}$, (18)

and write the weighting matrix as

$\Sigma_{m,n}(q) = E\{N_{m,n}(q) N_{m,n}^T(q)\} = [G_{m,n} - \tilde{C}_{m,n}(q) B_{m,n}^{(N-q)}] Q_{m,n} [\cdots]^T + R_{m,n}$. (19)

Define the ML FIR estimate of $x_{n-q}$ by the condition

$\left.\frac{\partial p(Y_{m,n} | x_{n-q})}{\partial x_{n-q}}\right|_{x_{n-q} = \hat{x}_{n-q}} = 0$,

minimize the quadratic form in the exponent of (18), and obtain

$0 = \tilde{C}_{m,n}^T(q) \Sigma_{m,n}^{-1}(q) [Y_{m,n} - \tilde{C}_{m,n}(q) \hat{x}_{n-q}]$. (20)

Finally, use (20), represent the q-lag ML FIR smoothed estimate $\hat{x}_{n-q}$ given by (11) as (12a), and complete the proof. ∎
As can be seen, (12b) has the weighted least squares structure [39] with the weighting matrix $\Sigma_{m,n}(q)$ given by (13).
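As a sanity check on this structure, here is a small scalar sketch of (12b)-(13) (our illustration; the model values are arbitrary, not from the paper). Since $C_{m,n} = \tilde{C}_{m,n}(q) A_{n-q}^{m+1}$, noiseless data $Y_{m,n} = C_{m,n} x_m$ must be recovered exactly as $\hat{x}_{n-q} = A_{n-q}^{m+1} x_m$, whatever covariances the smoother assumes.

```python
import numpy as np

a, b, c = 0.98, 1.0, 1.0          # scalar A_n, B_n, C_n (time-invariant)
N, q = 8, 3                       # horizon and lag, 0 < q < N - 1
Q, R = 0.04, 0.25                 # noise variances assumed by the smoother

# Extended scalar matrices on [m, n], local index j = 0..N-1
A_mn = np.array([a**j for j in range(N)])                        # (5)
B_mn = np.array([[a**(j - i) * b if i <= j else 0.0
                  for i in range(N)] for j in range(N)])         # (6)
C_mn = c * A_mn                                                  # (7)
G_mn = c * B_mn                                                  # (8)
B_row = B_mn[N - 1 - q]                # B^{(N-q)}_{m,n}, the (N-q)th block row
Qm = Q * np.eye(N)
Qm[0, 0] = 0.0                         # the model assumes w_m = 0
Rm = R * np.eye(N)

A_nq = a**(N - 1 - q)                  # A^{m+1}_{n-q}
C_tld = C_mn / A_nq                    # Ctilde_{m,n}(q) = C_{m,n} (A^{m+1}_{n-q})^{-1}
E = G_mn - np.outer(C_tld, B_row)      # G_{m,n} - Ctilde_{m,n}(q) B^{(N-q)}_{m,n}
Sigma = E @ Qm @ E.T + Rm              # (13)

def ml_fir_smooth(Y):
    """Batch q-lag ML FIR estimate (12b), scalar-state case."""
    Si = np.linalg.inv(Sigma)
    return float(C_tld @ Si @ Y) / float(C_tld @ Si @ C_tld)

x_m = 2.0
xhat = ml_fir_smooth(C_mn * x_m)       # noiseless data: equals a**(N-1-q) * x_m
```

This also illustrates why the batch form is costly for large N: it builds and inverts the $N \times N$ (in general $NM \times NM$) matrix $\Sigma_{m,n}(q)$, which motivates the recursions of Section IV.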
IV. RECURSIVE FORMS AND ITERATIVE ALGORITHM

Theorem 1 states that the q-lag ML FIR smoother is constructed using N-varying block matrices $\tilde{C}_{m,n}(q)$ and $\Sigma_{m,n}(q)$, which implies a high computational complexity when N is large. Moreover, the assumption that the matrix $A_n$ is nonsingular limits the scope. These problems can be solved if we find recursive forms for iteratively computing the q-lag ML FIR estimate $\hat{x}_{n-q}$, which will be discussed next.
A. Modified Smoother Gain

To find recursions for (12b), we first modify the gain $H_{m,n}(q)$ by decomposing the weighting matrix. In doing so, we represent the matrix $\Sigma_{m,n}(q)$ specified by (19) as

$\Sigma_{m,n}(q) = F_{m,n} - \bar{F}_{m,n}(q)$, (21)

where the following auxiliary matrices are introduced,

$F_{m,n} = G_{m,n} Q_{m,n} G_{m,n}^T + R_{m,n}$, (22)
$\bar{F}_{m,n}(q) = G_{m,n} Q_{m,n} \mathcal{B}_{m,n}^T(q) + \mathcal{B}_{m,n}(q) Q_{m,n} [G_{m,n} - \mathcal{B}_{m,n}(q)]^T$, (23)

and $\mathcal{B}_{m,n}(q) = \tilde{C}_{m,n}(q) B_{m,n}^{(N-q)}$. Next, to avoid inverting the matrix $A_n$, we represent the gain $H_{m,n}(q)$ as shown in the following lemma.
Algorithm 1: Batch q-Lag ML FIR Smoothing Algorithm
Data: N, y_n, Q_n, R_n, q
Result: x̂_{n-q}
1: for n = N, N + 1, ... do
2:   m = n - N + 1
3:   F_{m,n} = G_{m,n} Q_{m,n} G_{m,n}^T + R_{m,n}
4:   D_{m,n} = (C_{m,n}^T F_{m,n}^{-1} C_{m,n})^{-1}
5:   U_{m,n} = C_{m,n} D_{m,n} C_{m,n}^T
6:   H̄_{m,n}(q) = A_{n-q}^{m+1} D_{m,n} C_{m,n}^T F_{m,n}^{-1}
7:   H̃_{m,n}(q) = B_{m,n}^{(N-q)} Q_{m,n} G_{m,n}^T F_{m,n}^{-1} (I - U_{m,n} F_{m,n}^{-1})
8:   x̂_{n-q} = [H̄_{m,n}(q) + H̃_{m,n}(q)] Y_{m,n}
9: end for
Lemma 1: The q-lag ML FIR smoother gain $H_{m,n}(q)$ is given in Theorem 1 by (12b). Its equivalent formulation, avoiding the inversion of $A_{n-q}^{m+1}$, is the following,

$H_{m,n}(q) = \bar{H}_{m,n}(q) + \tilde{H}_{m,n}(q)$, (24)

where the sub-gains are defined by

$\bar{H}_{m,n}(q) = A_{n-q}^{m+1} D_{m,n} C_{m,n}^T F_{m,n}^{-1}$, (25)
$\tilde{H}_{m,n}(q) = B_{m,n}^{(N-q)} Q_{m,n} G_{m,n}^T F_{m,n}^{-1} (I - U_{m,n} F_{m,n}^{-1})$, (26)

using the auxiliary matrices $D_{m,n} = (C_{m,n}^T F_{m,n}^{-1} C_{m,n})^{-1}$ and $U_{m,n} = C_{m,n} D_{m,n} C_{m,n}^T$.

Proof: The proof is postponed to Appendix A. ∎

Using (24) given by Lemma 1, a pseudo code of the batch q-lag ML FIR smoothing algorithm is listed as Algorithm 1. As can be seen, the algorithm does not involve the inversion of $A_{n-q}^{m+1}$, unlike the original form stated by Theorem 1.
B. Iterative q-Lag ML FIR Smoothing Algorithms

The decomposition (24) allows finding recursive forms for the iterative computation of (12b). To make it possible, we first represent (12b) as

$\hat{x}_{n-q} = \bar{H}_{m,n}(q) Y_{m,n} + \tilde{H}_{m,n}(q) Y_{m,n} = \bar{x}_n(q) + \tilde{x}_n(q)$. (27)

The following theorem establishes recursive forms for (27).
Theorem 2: Given the batch q-lag ML FIR smoothing estimate specified by Lemma 1 through (24). The filtering estimate is provided by computing

$\hat{x}_i = \hat{x}_{i-1} + (K_i + \bar{K}_i)(y_i - C_i A_i \hat{x}_{i-1})$ (28)

from $i = p + 1$ to $i = n$, where $p = \max(K, 2)$, the gains $K_i$ and $\bar{K}_i$ are given by

$K_i = P_i C_i^T (C_i P_i C_i^T + R_i)^{-1}$, (29)
$\bar{K}_i = (I - K_i C_i) \Xi_i D_{m,i} L_i$, (30)

$L_i = \Xi_i^T C_i^T (C_i P_i C_i^T + R_i)^{-1}$, and

$P_i = A_i (I - K_{i-1} C_{i-1}) P_{i-1} A_i^T + B_i Q_i B_i^T$, (31)
$\Xi_i = A_i (I - K_{i-1} C_{i-1}) \Xi_{i-1}$, (32)
$D_{m,i} = (D_{m,i-1}^{-1} + L_i C_i \Xi_i)^{-1}$. (33)

Given $n - q < i \leq n$ and the initial values $\hat{x}_{n-q}(q) = \hat{x}_{n-q}$, $P'_{n-q} = P_{n-q}$, and $\Xi'_{n-q} = \Xi_{n-q}$, the smoothed estimate is computed by

$\hat{x}_i(q) = \hat{x}_{i-1}(q) + (K'_i + \bar{K}'_i)(y_i - C_i A_i \hat{x}_{i-1})$ (34)

from $i = n - q + 1$ to $i = n$, where

$K'_i = P'_i C_i^T (C_i P'_i C_i^T + R_i)^{-1}$,
$\bar{K}'_i = (\Xi'_i - K'_i C_i \Xi_i) D_{m,i} L_i$,
$P'_i = (P'_{i-1} - K'_{i-1} C_{i-1} P_{i-1}) A_i^T$, (35)
$\Xi'_i = \Xi'_{i-1} - K'_i C_i \Xi_i$. (36)

The output $\hat{x}_{n-q}$ is taken as $\hat{x}_{n-q} = \hat{x}_i(q)$ when $i = n$ for the initial values of

$\hat{x}_\varepsilon = (\bar{H}_{m,\varepsilon} + \tilde{H}_{m,\varepsilon}^{[1]}) Y_{m,\varepsilon}$, (37)
$D_{m,\varepsilon} = (C_{m,\varepsilon}^T F_{m,\varepsilon}^{-1} C_{m,\varepsilon})^{-1}$, (38)
$\Xi_\varepsilon = A_\varepsilon^{m+1} - A_\varepsilon \Upsilon_{\varepsilon-1} F_{m,\varepsilon-1}^{-1} C_{m,\varepsilon-1}$, (39)
$P_\varepsilon = B_{m,\varepsilon}^{(\varepsilon-m+1)} Q_{m,\varepsilon} B_{m,\varepsilon}^{(\varepsilon-m+1)T} - A_\varepsilon \Upsilon_{\varepsilon-1} F_{m,\varepsilon-1}^{-1} A_\varepsilon^T$, (40)

where $\Upsilon_{\varepsilon-1}$ is defined by (B.4) with $n = \varepsilon$.

Proof: Use the decomposition (24), represent $\hat{x}_{n-q}$ as (27), and follow Appendix B to complete the proof. ∎

Algorithm 2: Iterative q-Lag ML FIR Smoothing Algorithm
Data: N, K, y_n, Q_n, R_n, q
Result: x̂_{n-q}
1: for n = N, N + 1, ... do
2:   m = n - N + 1, p = max(K, 2)
3:   Compute x̂_p, D_{m,p}, Ξ_p, and P_p
4:   for i = p + 1, ..., n do
5:     Compute P_i, Ξ_i, and D_{m,i}
6:     Compute K_i and K̄_i
7:     x̂_i = x̂_{i-1} + (K_i + K̄_i)(y_i - C_i A_i x̂_{i-1})
8:     if n - q < i then
9:       Set x̂_{n-q-1}(q) = x̂_{n-q-1}, P'_{n-q-1} = P_{n-q-1}, and Ξ'_{n-q-1} = Ξ_{n-q-1}
10:      Compute P'_i, Ξ'_i, K'_i, and K̄'_i
11:      x̂_i(q) = x̂_{i-1}(q) + (K'_i + K̄'_i)(y_i - C_i A_i x̂_{i-1})
12:    end if
13:  end for
14:  x̂_{n-q} = x̂_n(q)
15: end for

A pseudo code of the iterative q-lag ML FIR smoothing algorithm is listed as Algorithm 2, where all matrices and vectors are specified above. First, using an auxiliary time variable i changing from p + 1 to n, the ML FIR filtering estimate x̂_i is computed. Then the q-lag ML FIR smoothing estimate is computed for n - q < i ≤ n, and the output is taken as x̂_{n-q} = x̂_i(q) when i = n. Note that x̂_i specified by (28) is an auxiliary state estimate introduced in Algorithm 2. It is thus not related to the batch estimate x̂_{n-q} provided in (12a), although the notations are similar.
V. PERFORMANCE ANALYSIS

Provided with the batch q-lag ML FIR smoothed estimate (Theorem 1) and the iterative computation algorithm using recursions (Theorem 2), we can now analyze the q-lag ML FIR smoother performance.

A. Estimation Errors and Unbiasedness

Defining the error covariance of the q-lag ML FIR smoother at $n - q$ as

$\Delta P_n(q) = E\{(x_{n-q} - \hat{x}_{n-q})(x_{n-q} - \hat{x}_{n-q})^T\}$, (41)

substituting $x_{n-q}$ with (14), $\hat{x}_{n-q}$ with (11), and using (24), we obtain

$\Delta P_n(q) = [B_{m,n}^{(N-q)} - H_{m,n}(q) G_{m,n}] Q_{m,n} [\cdots]^T + H_{m,n}(q) R_{m,n} H_{m,n}^T(q)$. (42)

Next, substituting the second $H_{m,n}(q)$ on the right-hand side of (42) with (24) brings (42) to

$\Delta P_n(q) = \bar{H}_{m,n}(q) R_{m,n} \bar{H}_{m,n}^T(q) + [B_{m,n}^{(N-q)} - H_{m,n}(q) G_{m,n}] Q_{m,n} [\cdots]^T + \tilde{H}_{m,n}(q) R_{m,n} [\bar{H}_{m,n}(q) + \tilde{H}_{m,n}(q)]^T + \bar{H}_{m,n}(q) R_{m,n} \tilde{H}_{m,n}^T(q)$. (43)

Note that all of the terms in (43) depend on the process noise covariance (the third and fourth terms through $\tilde{H}_{m,n}(q)$), except for the first term, which depends only on the measurement noise covariance.
We now apply the unbiasedness condition $E\{x_{n-q}\} = E\{\hat{x}_{n-q}\}$ to $\hat{x}_{n-q}$ specified by (11) with (24) and to $x_{n-q}$ given by (14). The averaging gives

$E\{x_{n-q}\} = A_{n-q}^{m+1} E\{x_m\}$, (44)
$E\{\hat{x}_{n-q}\} = [\bar{H}_{m,n}(q) + \tilde{H}_{m,n}(q)] C_{m,n} E\{x_m\}$. (45)

Thus, for the q-lag ML FIR smoother, we have two unbiasedness constraints: $A_{n-q}^{m+1} = \bar{H}_{m,n}(q) C_{m,n}$ and $0 = \tilde{H}_{m,n}(q) C_{m,n}$.
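These two constraints are easy to verify numerically. The scalar sketch below (ours; the model values are illustrative) builds the sub-gains (25) and (26) and checks that $\bar{H}_{m,n}(q) C_{m,n} = A_{n-q}^{m+1}$ and $\tilde{H}_{m,n}(q) C_{m,n} = 0$ hold to machine precision.

```python
import numpy as np

a, b, c = 1.02, 1.0, 1.0          # scalar A_n, B_n, C_n
N, q = 6, 2
Q, R = 0.09, 0.5

# Extended scalar matrices per (5)-(8)
A_mn = np.array([a**j for j in range(N)])
B_mn = np.array([[a**(j - i) * b if i <= j else 0.0
                  for i in range(N)] for j in range(N)])
C_mn = c * A_mn
G_mn = c * B_mn
B_row = B_mn[N - 1 - q]            # B^{(N-q)}_{m,n}
Qm, Rm = Q * np.eye(N), R * np.eye(N)
A_nq = a**(N - 1 - q)              # A^{m+1}_{n-q}

# Auxiliary matrices and sub-gains (22), (25), (26)
F = G_mn @ Qm @ G_mn.T + Rm
Fi = np.linalg.inv(F)
D = 1.0 / float(C_mn @ Fi @ C_mn)              # D_{m,n}, scalar state
U = D * np.outer(C_mn, C_mn)                   # U_{m,n}
H_bar = A_nq * D * (C_mn @ Fi)                 # (25)
H_tld = (B_row @ Qm @ G_mn.T @ Fi) @ (np.eye(N) - U @ Fi)   # (26)

c1 = float(H_bar @ C_mn)           # should equal A^{m+1}_{n-q}
c2 = float(H_tld @ C_mn)           # should equal 0
```

The second constraint holds because $(I - U_{m,n} F_{m,n}^{-1}) C_{m,n} = C_{m,n} - C_{m,n} D_{m,n} (C_{m,n}^T F_{m,n}^{-1} C_{m,n}) = 0$ by the definition of $D_{m,n}$.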
B. Special Cases

The following special cases can now be considered. In the case of filtering, $q = 0$, the gain $\bar{H}_{m,n}(q)$ defined by (25) and the gain $\tilde{H}_{m,n}(q)$ defined by (26) become

$\bar{H}_{m,n}(0) = A_n^{m+1} D_{m,n} C_{m,n}^T F_{m,n}^{-1}$,
$\tilde{H}_{m,n}(0) = \bar{B}_{m,n} Q_{m,n} G_{m,n}^T F_{m,n}^{-1} (I - U_{m,n} F_{m,n}^{-1})$,

where $\bar{B}_{m,n}$ denotes the last block row in $B_{m,n}$. Note that the optimal UFIR (OUFIR) filter [37] can be considered as a special case of the q-lag ML FIR smoother.

In another extreme, when $Q_n \ll I$, we have $\tilde{H}_{m,n}(q) \approx 0$ and $F_{m,n} \approx R_{m,n}$, which transforms $H_{m,n}(q)$ to

$H_{m,n}(q) = A_{n-q}^{m+1} (C_{m,n}^T R_{m,n}^{-1} C_{m,n})^{-1} C_{m,n}^T R_{m,n}^{-1}$. (46)

For a diagonal measurement noise covariance $R_{m,n} = \delta_v^2 I$ and negligible system noise, $Q_n \to 0$, gain (46) can further be reduced to the q-lag UFIR smoother gain [20]

$\lim_{Q_n \to 0} H_{m,n}(q) = A_{n-q}^{m+1} (C_{m,n}^T C_{m,n})^{-1} C_{m,n}^T$, (47)

which ignores any information about zero mean noise and can be computed in one step by scaling the UFIR filter gain [30]. It then follows that the proposed method inherits some properties of the UFIR approach.
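The limit (47) can be checked numerically. The scalar sketch below (ours, with illustrative values) computes the full ML FIR gain from (12b) with a vanishingly small $Q_n$ and compares it with the one-step UFIR gain (47).

```python
import numpy as np

a, c = 0.97, 1.0
N, q = 7, 2
Q, R = 1e-12, 0.3                 # nearly zero process noise, per the limit Q_n -> 0

# Extended scalar matrices per (5)-(8)
A_mn = np.array([a**j for j in range(N)])
B_mn = np.array([[a**(j - i) if i <= j else 0.0
                  for i in range(N)] for j in range(N)])
C_mn = c * A_mn
G_mn = c * B_mn
B_row = B_mn[N - 1 - q]
A_nq = a**(N - 1 - q)
C_tld = C_mn / A_nq

# Full ML FIR gain from (12b)-(13) with Q_{m,n} = Q I and R_{m,n} = R I
E = G_mn - np.outer(C_tld, B_row)
Sigma = Q * (E @ E.T) + R * np.eye(N)
Si = np.linalg.inv(Sigma)
H_ml = (C_tld @ Si) / float(C_tld @ Si @ C_tld)

# One-step UFIR smoother gain (47), scalar case
H_ufir = A_nq * C_mn / float(C_mn @ C_mn)
```

The UFIR gain also satisfies the unbiasedness constraint $H_{m,n}(q) C_{m,n} = A_{n-q}^{m+1}$ exactly, consistent with Section V-A.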
VI. NUMERICAL SIMULATIONS

In this section, we test the q-lag ML FIR smoother numerically on different models and investigate the trade-off with the RTS and UFIR smoothers. Our main goal is to evaluate accuracy and robustness on the optimal horizon of $N_{\mathrm{opt}}$ points [30] using the root mean square error (RMSE) as a performance criterion.
A. Moving Target Tracking

We now consider an example of moving target tracking using model (1) and (2) with a switching system matrix such that $A_n = A(1)$ when $1 < n \leq 150$, $A_n = A(2)$ when $150 < n \leq 250$, and $A_n = A(3)$ otherwise. In particular, we specify the matrices as

$A(1) = \begin{bmatrix} 1 & 0 & T_s & 0 \\ 0 & 1 & 0 & T_s \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$, $B_n = \begin{bmatrix} T_s^2/2 & 0 \\ 0 & T_s^2/2 \\ T_s & 0 \\ 0 & T_s \end{bmatrix}$,

$A(i) = \begin{bmatrix} 1 & 0 & \frac{\sin(w_i T_s)}{w_i} & -\frac{1-\cos(w_i T_s)}{w_i} \\ 0 & 1 & \frac{1-\cos(w_i T_s)}{w_i} & \frac{\sin(w_i T_s)}{w_i} \\ 0 & 0 & \cos(w_i T_s) & -\sin(w_i T_s) \\ 0 & 0 & \sin(w_i T_s) & \cos(w_i T_s) \end{bmatrix}$,

where $i = 2, 3$, $T_s = 1$ s, $w_2 = -0.2$ rad/s, $w_3 = 0.2$ rad/s, and $C_n = [C'_n; C''_n]$, where $C'_n = [1 \; 0 \; 0 \; 0]$ and $C''_n = [0 \; 1 \; 0 \; 0]$. The covariances of the process and measurement noise are specified as $Q_n = 0.1^2 I_{L \times L}$ and $R_n = 10^2 I_{M \times M}$, respectively. The process starts from $\mathcal{N}(x_0; \hat{x}_0, P_0)$ with $\hat{x}_0 = [250 \; 250 \; 2.5 \; 11.5]^T$ and $P_0 = I$, and is run over 500 data points. Next, we consider two scenarios.
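Under the stated specification, the trajectory generation can be sketched as follows (our code, not the paper's; the helper names are ours, and $A(i)$ is taken as the standard coordinated-turn matrix).

```python
import numpy as np

Ts = 1.0
B = np.array([[Ts**2/2, 0.0], [0.0, Ts**2/2], [Ts, 0.0], [0.0, Ts]])
C = np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]])
A1 = np.eye(4)
A1[0, 2] = A1[1, 3] = Ts                           # A(1), constant velocity

def A_turn(w):
    """Coordinated-turn matrix A(i) for turn rate w (rad/s)."""
    s, c = np.sin(w * Ts), np.cos(w * Ts)
    return np.array([[1, 0, s / w, -(1 - c) / w],
                     [0, 1, (1 - c) / w, s / w],
                     [0, 0, c, -s],
                     [0, 0, s, c]])

rng = np.random.default_rng(1)
sq, sr = 0.1, 10.0                                  # noise std per Q_n, R_n
x = np.array([250.0, 250.0, 2.5, 11.5])
xs, ys = [], []
for n in range(1, 501):
    A = A1 if n <= 150 else A_turn(-0.2) if n <= 250 else A_turn(0.2)
    x = A @ x + B @ (sq * rng.standard_normal(2))   # (1)
    ys.append(C @ x + sr * rng.standard_normal(2))  # (2)
    xs.append(x)
xs, ys = np.array(xs), np.array(ys)
```

The coordinated-turn matrix is volume-preserving (its determinant is one), so the maneuvering segments neither inflate nor shrink the state.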
1) Ideal Conditions: Here we assume that all of the model parameters are set accurately and sketch typical estimation errors in Fig. 1 for $N_{\mathrm{opt}} = 40$ and $q = 8$. As can be seen, all smoothers produce consistent estimates. Even so, RTS smoothing gives slightly more stable estimates and suboptimal UFIR smoothing slightly less accurate estimates.

Since the horizon length N and lag q both affect the accuracy, we next compute the RMSE $\sqrt{\mathrm{tr}(P_n)}$ as a function of q and N over $[m, n]$. We then show in Fig. 2 the smoothing effect with respect to $n = 500$ for a constant matrix A, and in Fig. 3 for a time-varying matrix A with respect to $n = 300$. In
Fig. 1. Typical estimation errors produced by the smoothing algorithms over $n \in [100, 300]$ under the ideal condition of a fully known process: (a) first state, (b) second state, (c) third state, and (d) fourth state.
Fig. 2(a), which corresponds to $N_{\mathrm{opt}} = 50$, we plot the RMSE $\sqrt{\mathrm{tr}(P_{500})}$ for $q = 0$ and $\sqrt{\mathrm{tr}(P_{451})}$ for $q = 49$. The RMSE $\sqrt{\mathrm{tr}(P_n)}$ of the RTS smoother is shown with respect to $n = 500$ for $n = 500 - q$, $q \in [0, N - 1]$. Here, the empirical RMSE averaged over 1000 Monte Carlo runs for each time instant is also provided and represented with solid lines of different colors for the different approaches. Note that for exactly known model parameters, the difference between the empirical and theoretical RMSEs is insignificant, and we omit the empirical RMSEs in Fig. 3. In Fig. 3, we move the anchor point from $n = 500$ to $n = 300$ and keep the same parameters as in Fig. 2. Since the system matrix $A_n$ varies in this case from A(1) to A(2) at $n = 150$ and from A(2) to A(3) at $n = 250$, the state-space model on $[m, n]$ becomes time-varying when $N = 50$, $N = 80$, and $N = 200$.

The following conclusions can be drawn from the analysis of Figs. 2 and 3, assuming ideal conditions:
- RMSEs produced by the q-lag ML FIR and UFIR smoothers (Fig. 2) are symmetric functions on $[m, n]$ for $0 < q < N_{\mathrm{opt}} - 1$, while errors in the RTS reduce with increasing q and reach a steady state.
- The q-lag ML FIR smoother occupies an intermediate place between the RTS and UFIR smoothers. When q is small, it performs similarly to the RTS smoother. For large $N \gg 1$, the proposed ML FIR smoother performs similarly to the RTS smoother, and the allowed lag q becomes larger.
2) Temporary Model Uncertainty: We next investigate the performance under temporary model uncertainties, which are associated with some unpredictable jumps in the process. We simulate the uncertainty by injecting into the process an additional signal $d_n = [0, 0, 10, 0]^T$ when $200 \leq n \leq 205$ and ignoring it in the algorithms. Fig. 4 shows typical smoothing errors produced by the algorithms, where $N = 30$ and $q = 20$ are set for the UFIR and ML FIR smoothers. Even a quick look reveals that the q-lag ML FIR smoother makes a significant improvement over the RTS smoother and performs a bit better than the UFIR smoother. Specifically, responding to the uncertainty, the ML FIR smoother produces smaller overshoots and much shorter transients than the RTS smoother.
3) Inaccurate Noise Statistics: Since the q-lag UFIR smoother completely ignores zero mean noise, the question arises of errors in the ML FIR and RTS smoothers under incorrectly set noise covariances. Therefore, we investigate the sensitivities of the smoothing algorithms to errors in noise covariances by introducing in the algorithms two scaling factors α and β as $\alpha^2 Q_n$ and $R_n / \beta^2$. Note that $\alpha = 1$ and $\beta = 1$ make the model ideal.

Typical RMSEs produced by the smoothers as 2D functions of α and β are shown in Fig. 5, where the β-axis is repeated three times to merge all the plots together for clarity. Since the q-lag UFIR smoother is (α, β)-invariant, its RMSE is constant in the (α, β) plane. That is, errors in noise statistics do not affect the q-lag UFIR smoother performance. On the contrary, the RTS smoother is sensitive to α and β and gives the largest RMSEs when α and β deviate from one. The q-lag ML FIR smoother occupies an in-between place. It performs similarly to the q-lag UFIR smoother when α is relatively small. This observation confirms the theoretical analysis provided for (47). Thus, we conclude that in terms of robustness to errors in noise covariances, the three smoothers perform differently, and the proposed ML FIR smoother is an intermediate solution.
B. Signal Reconstruction

Referring to the better robustness of the q-lag ML FIR estimator to temporary model uncertainties and errors in noise covariances, we now wonder how accurate it is in signal reconstruction. We consider a two-state harmonic state model and represent it by (1) and (2) with $B_n = I$, $C_n = [0, 1]$, $A_n = \begin{bmatrix} 1 & 0 \\ \tau & 1 \end{bmatrix}$, and $\tau = 0.1$ s. The noise covariances are determined as $R_n = 0.25$ and

$Q_n = \begin{bmatrix} \tau & \tau^2/2 \\ \tau^2/2 & \tau^3/3 \end{bmatrix}$.
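As a quick sketch (ours, not the paper's code), the model above can be simulated directly; we check that the stated $Q_n$ is a valid positive semidefinite covariance and generate the 250-point process.

```python
import numpy as np

tau = 0.1
A = np.array([[1.0, 0.0], [tau, 1.0]])                 # A_n of the two-state model
Qn = np.array([[tau, tau**2 / 2], [tau**2 / 2, tau**3 / 3]])
Rn = 0.25

# Q_n must be a valid covariance: symmetric positive semidefinite
eigs = np.linalg.eigvalsh(Qn)

rng = np.random.default_rng(2)
Lc = np.linalg.cholesky(Qn)                            # sampling factor for w_n
x = np.zeros(2)
xs, ys = [], []
for n in range(250):
    x = A @ x + Lc @ rng.standard_normal(2)            # (1), B_n = I
    ys.append(x[1] + np.sqrt(Rn) * rng.standard_normal())  # (2), C_n = [0, 1]
    xs.append(x)
xs = np.array(xs)
```

Here the first state is a random walk and the second integrates it through the off-diagonal $\tau$ in $A_n$, matching the description that follows.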
Following [6], we generate the first state process as a random walk and the second state process as the integral of the first state plus random noise. The whole process is generated over 250 points for the initial values chosen as $\hat{x}_0 = [0, 0]^T$ and $P_0 = I$.

The reconstructed signals obtained by the smoothers in the time interval 50...230 are shown in Fig. 6 for $N = 50$ and $q = 20$. It can be seen that all smoothers recover the signal well, although the UFIR smoother does it less accurately. This is because information about the noise covariances is ignored by the UFIR smoother. However, this smoother will improve performance if we set N optimally to $N_{\mathrm{opt}}$ [30].
We will now test all smoothers, assuming errors in noise covariances, and illustrate the results in Fig. 7 for $0.1 Q_n$ and $10 R_n$, and in Fig. 8 for $0.1^2 Q_n$ and $10^2 R_n$. The first point to notice is that errors in the noise covariances make the RTS smoother very inaccurate. In contrast, ML FIR smoothing demonstrates much
Fig. 2. RMSEs computed as functions of q for different N and constant matrix A: (a) N = 50, (b) N = 80, and (c) N = 200. The anchor point is n = 500. Solid lines depict empirical RMSEs.
Fig. 3. RMSEs computed as functions of q for different N and time-varying matrix A: (a) N = 50, (b) N = 80, and (c) N = 200. The anchor point is n = 300.
Fig. 4. Typical errors produced by the smoothing algorithms under a temporary model uncertainty: (a) first state, (b) second state, (c) third state, and (d) fourth state.
better robustness, although its performance also degrades. Indeed, with very imprecise noise statistics, the RTS smoother cannot track the signal (Fig. 8), whereas the ML FIR smoother does it appropriately depending on the horizon length N. It is also worth noticing that ML FIR smoothing performs similarly to UFIR smoothing when $Q_n \ll I$, which is consistent with the results in Fig. 5.
VII. EXPERIMENTAL TESTING

Finally, we test the smoothers on the indoor quadrotor localization problem using ultra-wide bandwidth (UWB) measurements [40], [41] obtained experimentally [37] in the facilities of Jiangnan University. The planned path is shown in Fig. 9 along with noisy UWB data, where multiple outliers can be recognized. The state vector is assigned as $x_n = [x_{E,n}, \dot{x}_{E,n}, x_{N,n}, \dot{x}_{N,n}, x_{V,n}, \dot{x}_{V,n}]^T$, where $x_{E,n}$, $x_{N,n}$, and $x_{V,n}$ denote the quadrotor location in the east, north, and vertical directions, respectively, and $\dot{x}_{E,n}$, $\dot{x}_{N,n}$, and $\dot{x}_{V,n}$ are the corresponding velocities. The planned path is used as a "true" trajectory with coordinates $(x_{E,n}, x_{N,n}, x_{V,n})$, and the observation matrix is specified by

$C_n = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{bmatrix}$.

To carry out smoothed localization, we use model (1) with the system matrix $A_n$ such that its diagonal elements are equal to unity, the other nonzero components are $A_{12} = A_{34} = A_{56} = T_s = 0.2$ s,
Fig. 5. Typical RMSEs as 2D functions of α and β produced by the smoothers: (a) first state, (b) second state, (c) third state, and (d) fourth state. To merge the plots, the β-axis is repeated three times in the range of [0.1, 3].
Fig. 6. Signal reconstruction by smoothing on the time interval of N = 50 points.
Fig. 7. Signal reconstruction by smoothing under errors in noise covariances simulated as 0.1 Q_n and 10 R_n for N = 40, N = 80, and N = 100.
Fig. 8. Signal reconstruction by smoothing under errors in the noise covariances simulated as 0.1^2 Q_n and 10^2 R_n for N = 40, N = 80, and N = 100.
Fig. 9. Planned quadrotor path and UWB measurement data with multiple outliers.
Bnis an identity matrix, and the system noise covariance is given 431
by 432
Qn=δ2
w
⎡
⎢
⎢
⎢
⎢
⎢
⎢
⎣
τ2/3τ/20000
τ/210000
00τ2/3τ/20 0
00τ/21 0 0
0000τ2/3τ/2
0000τ/21
⎤
⎥
⎥
⎥
⎥
⎥
⎥
⎦
,
where $\tau = 1$ s. The measurement noise, extracted with respect to the planned path along each of the coordinates, and the corresponding histograms are shown in Fig. 10, where outlier clusters can be recognized. As can be seen, the histograms are generally asymmetric on a long time scale, heavy-tailed, and do not obey the Gaussian law exactly. That is, the Gaussian assumption used to derive the ML FIR smoother does not hold in this scenario, which affects the performance significantly. Consequently, we expect larger errors to appear in the smoothed estimates, and effort should therefore be made to determine the noise covariances correctly.
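The model just described can be assembled numerically as in the sketch below, under the stated parameters ($T_s = 0.2$ s, $\tau = 1$ s, $\sigma_w = 0.1$ m/s). This is not the authors' code; the state ordering $[x_E, \dot{x}_E, x_N, \dot{x}_N, x_V, \dot{x}_V]$ is an assumption inferred from the structure of $C_n$ and $Q_n$.

```python
import numpy as np

# Sketch of the state-space model described above, not the authors' code.
# Assumed state ordering: [x_E, xdot_E, x_N, xdot_N, x_V, xdot_V], inferred
# from the structure of C_n and Q_n.
Ts = 0.2        # sampling interval, s
tau = 1.0       # parameter of Q_n, s
sigma_w = 0.1   # system noise standard deviation, m/s

A = np.eye(6)                      # A_n: unit diagonal ...
for k in (0, 2, 4):
    A[k, k + 1] = Ts               # ... with A12 = A34 = A56 = Ts

C = np.zeros((3, 6))               # C_n: picks out the three positions
C[0, 0] = C[1, 2] = C[2, 4] = 1.0

block = sigma_w**2 * np.array([[tau**2 / 3, tau / 2],
                               [tau / 2,    1.0    ]])
Q = np.kron(np.eye(3), block)      # Q_n: block-diagonal, one block per axis

print(A.shape, C.shape, Q.shape)   # (6, 6) (3, 6) (6, 6)
```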
Fig. 10. Measurement noise extracted with respect to the planned path: (a) the east direction, (b) north direction, and (c) vertical direction.
Fig. 11. RMSEs produced by the smoothing algorithms for $N = 90$ and $q = 20$, averaged over $[40 \ldots 80]$ s. The zoomed area corresponds to $[44.5 \ldots 46.5]$ s.
Referring to the histograms shown in Fig. 10 and the data outliers, we assign $\sigma_v = 1$ m for the measurement noise and obtain $R_n = \sigma_v^2 I$. Analyzing the quadrotor dynamics, we conclude that its average speed is about $0.5$ m/s. For a speed standard deviation of 20%, we set $\sigma_w = 0.1$ m/s for the second, fourth, and sixth states. We also set $N = 90$ and $q = 20$ and compute the corresponding RMSEs over a time span of $[40 \ldots 80]$ s (Fig. 11). In this case, there is no essential difference between the estimates, although ML FIR smoothing is slightly more accurate and UFIR smoothing appears to be less robust to outliers.
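The windowed RMSE evaluation used for Fig. 11 can be sketched as below: per-coordinate errors of a smoothed track against the planned ("true") path are averaged over the $[40, 80]$ s window. The toy path and noise here are illustrative placeholders, not the UWB data.

```python
import numpy as np

# Hedged sketch of the RMSE evaluation used for Fig. 11: errors of a smoothed
# track against the planned ("true") path, averaged over a time window.
# The toy path and noise below are illustrative, not the UWB data.
def rmse_over_window(x_true, x_hat, t, t_start=40.0, t_end=80.0):
    """Per-coordinate RMSE of x_hat vs. x_true for t in [t_start, t_end]."""
    mask = (t >= t_start) & (t <= t_end)
    err = x_hat[mask] - x_true[mask]
    return np.sqrt(np.mean(err**2, axis=0))

t = np.arange(0.0, 100.0, 0.2)                      # Ts = 0.2 s grid
x_true = np.column_stack([t, np.sin(t), 0.1 * t])   # toy 3-D "planned path"
x_hat = x_true + 0.05 * np.random.default_rng(0).standard_normal(x_true.shape)
print(rmse_over_window(x_true, x_hat, t))           # one RMSE per coordinate
```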
To achieve better smoothing in the presence of outliers, we set $\sigma_v = 10^2$ m to effectively filter out fast excursions and keep the same parameters as in the previous case. The RMSEs sketched in Fig. 12 show a dramatic difference. Indeed, while ML FIR and UFIR smoothing remove outliers in their very consistent estimates, RTS smoothing does so with larger errors and generally poorer performance.
Fig. 12. RMSEs produced by the smoothing algorithms for $\sigma_v = 10^2$ m, $N = 90$, and $q = 40$, averaged over the time interval $[40 \ldots 80]$ s.
A. Computational Complexity

We finish this section by measuring the time consumed by the estimators. Although computation time is usually not an issue in practical applications of smoothing algorithms, we measure it for comparison by running each of the smoothers 20 times and averaging the elapsed time. We use Matlab R2020a on a MacBook with an 8-core Intel Core i9 CPU (2.4 GHz) and 32 GB RAM. The batch ML FIR smoother consumes the longest time of 15.23 s, and the iterative ML FIR algorithm consumes 5.34 s. The UFIR smoother consumes 4.74 s, and the recursive RTS smoother consumes the shortest time of 0.77 s. Thus, we conclude that the higher robustness of the q-lag ML FIR smoother is achieved at the expense of computation time.
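The timing methodology above (20 runs, averaged elapsed time) can be sketched as follows. The two estimator stubs below merely stand in for the batch and recursive smoothers, so the absolute times are meaningless; only the harness is of interest.

```python
import time
import numpy as np

# Sketch of the timing methodology above: run each estimator 20 times and
# average the elapsed wall-clock time. The two stubs below merely stand in
# for the batch and recursive smoothers; only the harness is of interest.
def batch_stub(Y):
    H = np.ones((6, Y.size)) / Y.size     # placeholder batch gain
    return H @ Y                          # one large matrix-vector product

def recursive_stub(Y):
    x = np.zeros(6)
    for y in Y:                           # one cheap update per sample
        x = x + 0.01 * (y - x[0])
    return x

def avg_elapsed(f, Y, runs=20):
    t0 = time.perf_counter()
    for _ in range(runs):
        f(Y)
    return (time.perf_counter() - t0) / runs

Y = np.random.default_rng(2).standard_normal(5000)
print(avg_elapsed(batch_stub, Y), avg_elapsed(recursive_stub, Y))
```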
VIII. CONCLUSION

The q-lag ML FIR smoother, obtained in this paper in batch form and represented with an iterative algorithm using recursions, has demonstrated better robustness than the Kalman (RTS) smoother. The ML FIR smoother also works more accurately than the RH FIR smoother developed at an earlier stage. Simulations have shown that ML FIR smoothing is superior to RTS smoothing under uncertainties caused by temporary model errors and errors in the noise covariances. An experimental example of quadrotor localization using UWB data confirmed that q-lag ML FIR smoothing is more accurate and more robust in the presence of outliers. The price paid is a larger computation time, which, however, is not a problem in post-processing. We are therefore currently working on the design of a fast q-lag ML FIR smoothing algorithm and plan to report the results in the near future. A theoretical comparison of the ML FIR smoother with other available FIR-type smoothers will also be our next topic.
APPENDIX A
PROOF OF LEMMA 1
Consider (21) and use the matrix inversion lemma [42]
$$
(X - Y)^{-1} = X^{-1} + (X - Y)^{-1} Y X^{-1}
$$
to represent the inverse of (21) as
$$
\Sigma_{m,n}^{-1}(q) = F_{m,n}^{-1} + \Sigma_{m,n}^{-1}(q)\,\bar{F}_{m,n}(q)\,F_{m,n}^{-1}. \tag{A.1}
$$
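The matrix inversion identity quoted above can be checked numerically on random matrices; this sketch assumes only that $X$ and $X - Y$ are nonsingular.

```python
import numpy as np

# Numerical check of the matrix inversion identity quoted above:
#   (X - Y)^{-1} = X^{-1} + (X - Y)^{-1} Y X^{-1},
# which holds whenever X and X - Y are nonsingular (an assumption here).
rng = np.random.default_rng(0)
n = 5
X = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned
Y = 0.1 * rng.standard_normal((n, n))

lhs = np.linalg.inv(X - Y)
rhs = np.linalg.inv(X) + np.linalg.inv(X - Y) @ Y @ np.linalg.inv(X)
print(np.allclose(lhs, rhs))  # True
```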
Omit $q$ for simplicity when there is no confusion, introduce a dummy variable $s \triangleq \{m, n\}$, and reassign the matrices as $\Sigma_{m,n}(q) \Rightarrow \Sigma_s$, $\bar{F}_{m,n}(q) \Rightarrow \bar{F}_s$, $\tilde{C}_{m,n}(q) \Rightarrow \tilde{C}_s$, $H_{m,n}(q) \Rightarrow H_s$, and $B_{m,n}(q) \Rightarrow B_s$.

Introduce the auxiliary block matrices $\mathcal{F}_s = \tilde{C}_s^T F_s^{-1} \tilde{C}_s$ and $\bar{\mathcal{F}}_s = \tilde{C}_s^T \Sigma_s^{-1} \bar{F}_s F_s^{-1} \tilde{C}_s$ and rewrite $H_s$ given by (19) as
$$
H_s = \mathcal{F}_s^{-1} \tilde{C}_s^T \Sigma_s^{-1} - H'_s, \tag{A.2}
$$
where $H'_s = (\mathcal{F}_s + \bar{\mathcal{F}}_s)^{-1} \bar{\mathcal{F}}_s \mathcal{F}_s^{-1} \tilde{C}_s^T \Sigma_s^{-1}$. Replace $\Sigma_s^{-1}$ on the right-hand side of (A.2) by (A.1) and obtain
$$
H_s = \mathcal{F}_s^{-1} \tilde{C}_s^T F_s^{-1} + \check{H}_s - H'_s, \tag{A.3}
$$
where $\check{H}_s = \mathcal{F}_s^{-1} \tilde{C}_s^T \Sigma_s^{-1} \bar{F}_s F_s^{-1}$.
Substitute $\Sigma_s^{-1}$ in $\check{H}_s$ with (A.1), introduce $S_s = \mathcal{F}_s^{-1} \tilde{C}_s^T \Sigma_s^{-1} \bar{F}_s F_s^{-1} \bar{F}_s F_s^{-1}$ and $\bar{S}_s = \mathcal{F}_s^{-1} \tilde{C}_s^T F_s^{-1} \bar{F}_s F_s^{-1}$, omit $q$ again, and arrive at
$$
\check{H}_s = S_s + \bar{S}_s. \tag{A.4}
$$
Consider $\bar{F}_s$ given by (23), multiply $G_s$ from the left-hand side with the identity matrix $\tilde{C}_s \tilde{C}_s^T (\tilde{C}_s \tilde{C}_s^T)^{-1}$, discard $\tilde{C}_s$ from both sides, and come up with
$$
\bar{F}_s = \tilde{C}_s B_s^{(N-q)} Q_s G_s^T + L_s, \tag{A.5}
$$
where $L_s = [\tilde{C}_s^T (\tilde{C}_s \tilde{C}_s^T)^{-1} G_s - B_s^{(N-q)}] Q_s B_s^T$. Substitute (A.5) into $\bar{S}_s$ and obtain
$$
\bar{S}_s = \bigl( B_s^{(N-q)} Q_s G_s^T + L_s \bigr) F_s^{-1}. \tag{A.6}
$$
Reasoning similarly, consider $\bar{\mathcal{F}}_s$ defined by (A.2) and transform $S_s$ to $S_s = \mathcal{F}_s^{-1} \bar{\mathcal{F}}_s \bar{S}_s$, where $\bar{\mathcal{F}}_s = (\mathcal{F}_s + \bar{\mathcal{F}}_s) \bar{S}_s \tilde{C}_s$ is obtained by substituting $\bar{F}_s$ given by (A.5) into $\bar{\mathcal{F}}_s$. Then rewrite $S_s$ as
$$
S_s = (\bar{S}_s + S_s)\tilde{F}_s = \bar{S}_s \tilde{F}_s (I - \tilde{F}_s)^{-1}, \tag{A.7}
$$
where $\tilde{F}_s \triangleq \tilde{C}_s \bar{S}_s = \bar{F}_s F_s^{-1}$. At this point, represent $\check{H}_s$ specified by (A.4) equivalently as
$$
\check{H}_s = \bar{S}_s + \bar{S}_s \tilde{F}_s (I - \tilde{F}_s)^{-1}. \tag{A.8}
$$
Using $\bar{\mathcal{F}}_s = (\mathcal{F}_s + \bar{\mathcal{F}}_s) \bar{S}_s \tilde{C}_s$, transform the term $H'_s$ on the right-hand side of (A.3) to
$$
H'_s = \bar{S}_s \tilde{C}_s \mathcal{F}_s^{-1} \tilde{C}_s^T \Sigma_s^{-1}
= \bar{S}_s U_s F_s^{-1} + \underbrace{\bar{S}_s U_s \Sigma_s^{-1}}_{H'_s}\,\tilde{F}_s, \tag{A.9}
$$
where $U_s \triangleq \tilde{C}_s \mathcal{F}_s^{-1} \tilde{C}_s^T$ and the second equality is obtained by substituting $\Sigma_s^{-1}$ with (A.1). Denote the second term on the right-hand side of (A.9) as $\Psi_s \triangleq H'_s \tilde{F}_s$ and write
$$
H'_s = \bar{S}_s U_s F_s^{-1} + \Psi_s. \tag{A.10}
$$
Substitute (A.10) into $\Psi_s$, make some rearrangements, and obtain
$$
\Psi_s = \bar{S}_s U_s F_s^{-1} \tilde{F}_s (I - \tilde{F}_s)^{-1}. \tag{A.11}
$$
Next, subtract $H'_s$ from $\check{H}_s$ to have
$$
\check{H}_s - H'_s = \tilde{H}_s + \Delta H_s, \tag{A.12}
$$
where $\tilde{H}_s = \bar{S}_s (I - U_s F_s^{-1})$ and $\Delta H_s = \bar{S}_s (I - U_s F_s^{-1}) \tilde{F}_s (I - \tilde{F}_s)^{-1}$. Consider the equality
$$
(I - U_s F_s^{-1}) \tilde{F}_s = (\tilde{C}_s - U_s F_s^{-1} \tilde{C}_s) \bar{S}_s = 0,
$$
which guarantees $\Delta H_s = 0$ and reduces (A.12) to $\check{H}_s - H'_s = \tilde{H}_s$. Substitute $\bar{S}_s$ in $\tilde{H}_s$ with (A.6), use the property
$$
0 = B_s^{(N-q)T} \bigl( \tilde{C}_s^T F_s^{-1} - \tilde{C}_s^T F_s^{-1} U_s F_s^{-1} \bigr)
= B_s^{(N-q)T} \Bigl( \tilde{C}_s^T F_s^{-1} - \underbrace{\tilde{C}_s^T F_s^{-1} \tilde{C}_s}_{\mathcal{F}_s}\, \mathcal{F}_s^{-1} \tilde{C}_s^T F_s^{-1} \Bigr),
$$
and simplify $\tilde{H}_s$ to
$$
\tilde{H}_s = B_s^{(N-q)} Q_s G_s^T F_s^{-1} \bigl( I - U_s F_s^{-1} \bigr). \tag{A.13}
$$
Finally, combine (A.13) with (A.3), introduce $\bar{H}_{m,n} = \mathcal{F}_{m,n}^{-1} \tilde{C}_{m,n}^T F_{m,n}^{-1}$, and represent the gain $H_{m,n}$ as $H_{m,n} = \bar{H}_{m,n} + \tilde{H}_{m,n}$. Use the definition of $\mathcal{F}_{m,n}$, where $\tilde{C}_{m,n}$ is specified by (15), rewrite $H_{m,n}(q)$ in the compact form of (24) by removing the inverse of $A_{n-q}^{m+1}$, and complete the proof.
APPENDIX B
PROOF OF THEOREM 2

Use the following lemma.
Lemma 2: Let the nonsingular matrix $F_{m,n}$ be given by (22). Then its inverse is
$$
F_{m,n}^{-1} = \Pi_{m,n}^{-1} Z_{m,n}^{-1}, \tag{B.1}
$$
where, by introducing $Z'_{m,n} = \Upsilon_{n-1}^T A_n^T C_n^T R_n^{-1}$,
$$
\Pi_{m,n} = \begin{bmatrix} F_{m,n-1} & 0 \\ 0 & R_n \end{bmatrix},
$$
$$
Z_{m,n}^{-1} = \begin{bmatrix}
I + Z'_{m,n} \bar{Z}_{m,n}^{-1} Z''_{m,n} & -Z'_{m,n} \bar{Z}_{m,n}^{-1} \\
-\bar{Z}_{m,n}^{-1} Z''_{m,n} & \bar{Z}_{m,n}^{-1}
\end{bmatrix}, \tag{B.2}
$$
$$
\Upsilon_{n-1} = \bar{B}_{m,n-1} Q_{m,n-1} G_{m,n-1}^T, \tag{B.3}
$$
$$
Z''_{m,n} = C_n A_n \Upsilon_{n-1} F_{m,n-1}^{-1}, \tag{B.4}
$$
$$
\bar{Z}_{m,n} = I + C_n P_n C_n^T R_n^{-1}, \tag{B.5}
$$
$$
P_n = O_n - A_n \Upsilon_{n-1} F_{m,n-1}^{-1} \Upsilon_{n-1}^T A_n^T, \tag{B.6}
$$
$$
O_n = A_n \bar{B}_{m,n-1} Q_{m,n-1} \bar{B}_{m,n-1}^T A_n^T + B_n Q_n B_n^T = \bar{B}_{m,n} Q_{m,n} \bar{B}_{m,n}^T. \tag{B.7}
$$
Proof: Decompose $G_{m,n}$ given by (8) as
$$
G_{m,n} = \begin{bmatrix} G_{m,n-1} & 0 \\ C_n A_n \bar{B}_{m,n-1} & C_n B_n \end{bmatrix},
$$
where $\bar{B}_{m,n-1}$ denotes the last vector row in $B_{m,n-1}$. For diagonal $Q_{m,n}$ and $R_{m,n}$, write
$$
F_{m,n} = \Pi_{m,n} + \bar{\Pi}_{m,n}, \tag{B.8}
$$
where
$$
\bar{\Pi}_{m,n} = \begin{bmatrix} 0 & \Upsilon_{n-1}^T A_n^T C_n^T \\ C_n A_n \Upsilon_{n-1} & C_n O_n C_n^T \end{bmatrix}.
$$
Since $\Pi_{m,n}$ is invertible, the inversion of (B.8) gives (B.1) with $Z_{m,n} = I + \bar{\Pi}_{m,n} \Pi_{m,n}^{-1}$. Use the Schur complement [43], compute the inverse matrix $Z_{m,n}^{-1}$, arrive at (B.2), and complete the proof.
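The inversion mechanics behind Lemma 2, namely that $F = \Pi + \bar{\Pi}$ with invertible $\Pi$ gives $F^{-1} = \Pi^{-1} Z^{-1}$ for $Z = I + \bar{\Pi}\Pi^{-1}$, can be verified numerically; random well-conditioned matrices stand in here for the structured blocks (B.3)-(B.7).

```python
import numpy as np

# Numerical sanity check of the inversion mechanics behind Lemma 2: for
# F = Pi + Pibar with Pi invertible, Z = I + Pibar @ inv(Pi) satisfies
# F = Z @ Pi, hence inv(F) = inv(Pi) @ inv(Z) as in (B.1).
# Random well-conditioned matrices stand in for the structured blocks.
rng = np.random.default_rng(1)
n = 6
Pi = rng.standard_normal((n, n)) + n * np.eye(n)   # invertible stand-in
Pibar = 0.2 * rng.standard_normal((n, n))

F = Pi + Pibar
Z = np.eye(n) + Pibar @ np.linalg.inv(Pi)
print(np.allclose(np.linalg.inv(F), np.linalg.inv(Pi) @ np.linalg.inv(Z)))  # True
```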
Introduce an auxiliary time index $i$, use Lemma 1, and find recursions for the iterative algorithm. To do this, replace $n$ by $i$, use (B.1) with $n = i$ to compute $F_{m,i}^{-1}$, and decompose $C_{m,i}^T F_{m,i}^{-1}$ as
$$
C_{m,i}^T F_{m,i}^{-1} = \bigl[\, C_{m,i-1}^T F_{m,i-1}^{-1} \;\;\; (C_i A_i^{m+1})^T R_i^{-1} \,\bigr] Z_{m,i}^{-1}
= \bigl[\, C_{m,i-1}^T F_{m,i-1}^{-1} - L_i Z''_{m,i} \;\;\; L_i \,\bigr], \tag{B.9}
$$
where
$$
L_i = \Xi_i^T C_i^T R_i^{-1} \bar{Z}_{m,i}^{-1}, \tag{B.10}
$$
$$
\Xi_i = A_i^{m+1} - A_i \Upsilon_{i-1} F_{m,i-1}^{-1} C_{m,i-1}. \tag{B.11}
$$
Substitute (B.9) into $D_{m,n}^{-1}$ and obtain the equality $D_{m,i}^{-1} = D_{m,i-1}^{-1} + L_i C_i \Xi_i$, from which find
$$
D_{m,i} = D_{m,i-1} - D_{m,i} L_i C_i \Xi_i D_{m,i-1}. \tag{B.12}
$$
Define $X_i \triangleq B_{m,i}^{(N-q)} Q_{m,i} G_{m,i}^T$, follow the lines above, and obtain
$$
X_i F_{m,i}^{-1} = \bigl[\, X_{i-1} F_{m,i-1}^{-1} - K_i Z''_{m,i} \;\;\; K_i \,\bigr], \tag{B.13}
$$
where $K_i$ is defined as
$$
K_i = P_i C_i^T R_i^{-1} \bar{Z}_{m,i}^{-1}, \tag{B.14}
$$
$$
P_i = O_i - X_{i-1} F_{m,i-1}^{-1} \Upsilon_{i-1}^T A_i^T, \tag{B.15}
$$
with $O_i = B_{m,i-1}^{(N-q)} Q_{m,i-1} \bar{B}_{m,i-1}^T A_i^T = B_{m,i}^{(N-q)} Q_{m,i} \bar{B}_{m,i}^T$.
A. Recursive Forms for $\bar{x}_i(q)$ and $\tilde{x}_i(q)$ With $i \leq n - q$

Follow the definition of $D_{m,i}$ provided after (26) and establish the inequality
$$
\delta_{\min} C_{m,i}^T C_{m,i} \preceq C_{m,i}^T F_{m,i} C_{m,i}, \tag{B.16}
$$
where $\delta_{\min}$ denotes the minimum eigenvalue of the nonsingular matrix $F_{m,i}$. To guarantee the invertibility of $D_{m,i}$, change $i$ starting from a value $\varepsilon$ such that $C_{m,i}^T C_{m,i} \succ 0$ is satisfied. Given $\varepsilon = \max(m + K - 1, m + 1)$, where $K$ is the number of the states, observe that $C_{m,i}^T C_{m,i}$ is invertible for $\varepsilon \leq i \leq n$ if $[A_i, C_i]$ is uniformly detectable [44]. Since, for $\varepsilon \leq i \leq n - q$, the gain $H_{m,i}(q)$ specified by (24) has the structure of the ML FIR smoother gain with $q = 0$, obtain
$$
H_{m,i}(0) = A_i^{m+1} D_{m,i} C_{m,i}^T F_{m,i}^{-1}
+ \bar{B}_{m,i} Q_{m,i} G_{m,i}^T F_{m,i}^{-1} \bigl( I - U_{m,i} F_{m,i}^{-1} \bigr),
$$
the recursive forms for which were found in [37] and are summarized by (28)-(33).
B. Recursions of $\bar{x}_i(q)$ and $\tilde{x}_i(q)$ With $n - q < i \leq n$

Consider $\bar{x}_i(q)$ for $n - q < i \leq n$ and decompose (27) using (B.9) as
$$
\bar{x}_i(q) = A_i(q) D_{m,i} \bigl( C_{m,i-1}^T F_{m,i-1}^{-1} Y_{m,i-1} + L_i \bar{y}_i \bigr), \tag{B.17}
$$
where $A_i(q) = I_i \times \cdots \times I_{n-q+1} \times A_{n-q}^{m+1}$, $\bar{y}_i = y_i - Z''_{m,i} Y_{m,i-1}$, and $I_i$ denotes the identity matrix assigned for time step $i$. Note that $A_i(q) = A_{i-1}(q)$ for $n - q < i \leq n$.

Substitute $D_{m,i}$ in (B.17) with (B.12), rearrange the terms, and obtain
$$
\bar{x}_i(q) = \bar{x}_{i-1}(q) - A_{i-1}(q) D_{m,i} L_i \hat{x}_{i-1}^{a-c}
+ A_{i-1}(q) D_{m,i} L_i \bar{y}_i, \tag{B.18}
$$
where $\hat{x}_{i-1}^{a-c} = C_i A_i (\hat{x}_{i-1}^{a} - \hat{x}_{i-1}^{c})$ and
$$
\hat{x}_{i-1}^{a} = A_{i-1}^{m+1} D_{m,i-1} C_{m,i-1}^T F_{m,i-1}^{-1} Y_{m,i-1}, \tag{B.19}
$$
$$
\hat{x}_{i-1}^{c} = \Upsilon_{i-1} F_{m,i-1}^{-1} C_{m,i-1} D_{m,i-1} C_{m,i-1}^T F_{m,i-1}^{-1} Y_{m,i-1}. \tag{B.20}
$$
Next, consider $\tilde{x}_i(q)$, introduce $\tilde{x}_i^{f}(q) \triangleq X_i F_{m,i}^{-1} Y_{m,i}$ and $\tilde{x}_i^{h}(q) \triangleq X_i F_{m,i}^{-1} C_{m,i} D_{m,i} C_{m,i}^T F_{m,i}^{-1} Y_{m,i}$, and obtain
$$
\tilde{x}_i(q) = \tilde{x}_i^{f}(q) - \tilde{x}_i^{h}(q). \tag{B.21}
$$
Using (B.13), represent $\tilde{x}_i^{f}(q)$ recursively as
$$
\tilde{x}_i^{f}(q) = \tilde{x}_{i-1}^{f}(q) + K_i \bigl( y_i - Z''_{m,i} Y_{m,i-1} \bigr). \tag{B.22}
$$
Observe that $\tilde{x}_i^{h}(q)$ satisfies
$$
\tilde{x}_i^{h}(q) = \bigl( X_{i-1} F_{m,i-1}^{-1} C_{m,i-1} + K_i C_i \Xi_i \bigr) D_{m,i}
\bigl( C_{m,i-1}^T F_{m,i-1}^{-1} Y_{m,i-1} + L_i \bar{y}_i \bigr), \tag{B.23}
$$
combine it with (B.12), and represent it as
$$
\begin{aligned}
\tilde{x}_i^{h}(q) = {} & \tilde{x}_{i-1}^{h}(q) - X_{i-1} F_{m,i-1}^{-1} C_{m,i-1} D_{m,i} L_i \hat{x}_{i-1}^{a-c} \\
& + K_i \hat{x}_{i-1}^{a-c} - K_i C_i \Xi_i D_{m,i} L_i \hat{x}_{i-1}^{a-c} \\
& + X_{i-1} F_{m,i-1}^{-1} C_{m,i-1} D_{m,i} L_i \bar{y}_i \\
& + K_i C_i \Xi_i D_{m,i} L_i \bar{y}_i.
\end{aligned} \tag{B.24}
$$
Combine (B.18), (B.22), and (B.24), use the property $\bar{y}_i - \hat{x}_{i-1}^{a-c} = y_i - C_i A_i \hat{x}_{i-1}$, and end up with the recursion
$$
\hat{x}_i(q) = \hat{x}_{i-1}(q) + (K_i + \bar{K}_i)(y_i - C_i A_i \hat{x}_{i-1}), \tag{B.25}
$$
where $K_i$ is given by (B.14), $\hat{x}_{i-1} = H_{m,i-1}(0) Y_{m,i-1}$ is the ML FIR filtering estimate, and
$$
\bar{K}_i = (\Xi'_i - K_i C_i \Xi_i) D_{m,i} L_i, \tag{B.26}
$$
$$
\Xi'_i = A_{i-1}(q) - X_{i-1} F_{m,i-1}^{-1} C_{m,i-1}. \tag{B.27}
$$
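The predictor-corrector structure of (B.25) can be illustrated with a minimal fixed-gain recursion. The constant gain below is only a stand-in for $K_i + \bar{K}_i$, whose true values follow from (B.14) and (B.26), so this sketch shows the update form, not the ML FIR smoother's accuracy.

```python
import numpy as np

# Minimal sketch of the recursion structure in (B.25):
#   x_hat <- x_hat + K_total @ (y - C @ A @ x_hat),
# where the constant K_total below is only a stand-in for K_i + Kbar_i,
# whose true values follow from (B.14) and (B.26). This illustrates the
# update form, not the ML FIR smoother's accuracy.
rng = np.random.default_rng(3)
A = np.array([[1.0, 0.2], [0.0, 1.0]])   # position-velocity model, Ts = 0.2
C = np.array([[1.0, 0.0]])               # position is measured
K_total = np.array([[0.5], [0.3]])       # illustrative fixed gain

x_true = np.zeros(2)
x_hat = np.zeros(2)
for _ in range(200):
    x_true = A @ x_true + 0.01 * rng.standard_normal(2)   # simulated state
    y = C @ x_true + 0.1 * rng.standard_normal(1)         # noisy measurement
    x_hat = x_hat + K_total @ (y - C @ (A @ x_hat))       # (B.25)-type update

print(x_hat)
```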
C. Recursions of $\Xi'_i$ and $P_i$

Since $A_i(q) = A_{i-1}(q)$ holds true for $n - q < i \leq n$, set $i = i + 1$, refer to (B.13), and arrive at (36). For $O_i$ specified after (B.15), write $O_i = O_{i-1} A_i^T$ and come up with the recursion for $P_i$ specified by (36). Take the initial values $\hat{x}_{n-q}(q)$, $\Xi'_{n-q}$, and $P_{n-q}$ from $\hat{x}_{n-q}(q) = \hat{x}_{n-q}$, $\Xi'_{n-q} = \Xi_{n-q}$, and $P_{n-q} = P_{n-q}$, and complete the proof.
REFERENCES
[1] J. B. Moore, "Discrete-time fixed-lag smoothing algorithms," Automatica, vol. 9, no. 2, pp. 163-173, Mar. 1973.
[2] J. S. Meditch, "A survey of data smoothing for linear and nonlinear dynamic systems," Automatica, vol. 9, no. 2, pp. 151-162, Mar. 1973.
[3] M. S. Grewal and A. P. Andrews, Kalman Filtering: Theory and Practice Using MATLAB, 3rd ed. Hoboken, NJ, USA: Wiley, 2008.
[4] B. Ristic, S. Arulampalam, and N. Gordon, Beyond the Kalman Filter. Boston, MA, USA: Artech House, 2004.
[5] M. Bell, "The iterated Kalman smoother as a Gauss-Newton method," SIAM J. Optim., vol. 4, no. 3, pp. 626-636, Aug. 1994.
[6] G. Einicke, Smoothing, Filtering and Prediction: Estimating the Past, Present and Future. Rijeka, Croatia: InTech, 2012.
[7] B. D. Anderson and J. B. Moore, Optimal Filtering. Englewood Cliffs, NJ, USA: Prentice-Hall, 1979.
[8] H. E. Rauch, F. Tung, and C. T. Striebel, "Maximum likelihood estimates of linear dynamic systems," AIAA J., vol. 3, no. 8, pp. 1445-1450, 1965.
[9] C. D. Karlgaard, "Nonlinear regression Huber-Kalman filtering and fixed-interval smoothing," J. Guid., Control, Dyn., vol. 38, no. 3, pp. 322-330, 2015.
[10] E. Blanco, P. Neveus, and G. Thomas, "The H-infinity fixed-interval smoothing problem for continuous systems," IEEE Trans. Signal Process., vol. 54, no. 11, pp. 4085-4090, Nov. 2006.
[11] G. J. Bierman, Factorization Methods for Discrete Sequential Estimation, vol. 128. New York, NY, USA: Academic Press, 1977.
[12] A. Savitzky and M. J. E. Golay, "Smoothing and differentiation of data by simplified least squares procedures," Anal. Chem., vol. 36, no. 8, pp. 1627-1639, 1964.
[13] W. H. Kwon, K. S. Lee, and J. H. Lee, "Fast algorithms for optimal FIR filter and smoother of discrete-time state-space models," Automatica, vol. 30, no. 3, pp. 489-492, Mar. 1994.
[14] O. K. Kwon, W. H. Kwon, and K. S. Lee, "FIR filters and recursive forms for discrete-time state-space models," Automatica, vol. 25, no. 5, pp. 715-728, Sep. 1989.
[15] J. T. Yuan and J. A. Stuller, "Order-recursive FIR smoothers," IEEE Trans. Signal Process., vol. 42, no. 5, pp. 1242-1246, May 1994.
[16] B. K. Kwon, S. Han, O. K. Kwon, and W. H. Kwon, "Minimum variance FIR smoothers for discrete-time state space models," IEEE Signal Process. Lett., vol. 14, no. 8, pp. 557-560, Aug. 2007.
[17] B. K. Kwon and S. Han, "An optimal fixed-lag FIR smoother for discrete time-varying state space models," J. Inst. Control, Robot. Syst., vol. 20, no. 1, pp. 1-5, 2014.
[18] B. K. Kwon, S. Han, and W. H. Kwon, "Minimum variance FIR smoothers for continuous-time state space signal models," IEEE Signal Process. Lett., vol. 14, no. 12, pp. 1024-1027, Dec. 2007.
[19] B. K. Kwon, J.-W. Choi, J. H. Park, S. Han, and W. H. Kwon, "A best lag size of minimum variance FIR smoothers," IEEE Signal Process. Lett., vol. 16, no. 4, pp. 307-310, Apr. 2009.
[20] Y. S. Shmaliy and L. Morales-Mendoza, "FIR smoothing of discrete-time polynomial models in state space," IEEE Trans. Signal Process., vol. 58, no. 5, pp. 2544-2555, May 2010.
[21] C. K. Ahn and P. S. Kim, "Fixed-lag maximum likelihood FIR smoother for state-space models," IEICE Electron. Exp., vol. 5, no. 1, pp. 11-16, 2008.
[22] S. Zhao and Y. S. Shmaliy, "Unified maximum likelihood form for bias constrained FIR filters," IEEE Signal Process. Lett., vol. 23, no. 12, pp. 1848-1852, Dec. 2016.
[23] F. Ding, "State filtering and parameter estimation for state space systems with scarce measurements," Signal Process., vol. 104, pp. 369-380, Nov. 2014.
[24] X. Zhang and F. Ding, "Adaptive parameter estimation for a general dynamical system with unknown states," Int. J. Robust Nonlinear Control, vol. 30, no. 4, pp. 1351-1372, 2020.
[25] S. Han and W. H. Kwon, "L2-E FIR smoothers for deterministic discrete-time state-space signal models," IEEE Trans. Autom. Control, vol. 52, no. 5, pp. 927-931, May 2007.
[26] S. Han, B. K. Kwon, and W. H. Kwon, "Minimax FIR smoothers for deterministic continuous-time state space models," Automatica, vol. 45, no. 6, pp. 1561-1566, Jun. 2009.
[27] C. K. Ahn and S. H. Han, "New H-infinity FIR smoother for linear discrete-time state-space models," IEICE Trans. Commun., vol. E91.B, no. 3, pp. 896-899, 2008.
[28] Y. S. Shmaliy, "Linear optimal FIR estimation of discrete time-invariant state-space models," IEEE Trans. Signal Process., vol. 58, no. 6, pp. 3086-3096, Jun. 2010.
[29] Y. S. Shmaliy and O. Ibarra-Manzano, "Time-variant linear optimal finite impulse response estimator for discrete state-space models," Int. J. Adapt. Control Signal Process., vol. 26, no. 2, pp. 95-104, 2012.
[30] Y. S. Shmaliy, S. Zhao, and C. K. Ahn, "Unbiased FIR filtering: An iterative alternative to Kalman filtering ignoring noise and initial conditions," IEEE Control Syst. Mag., vol. 37, no. 5, pp. 70-89, Oct. 2017.
[31] D. Simon and Y. S. Shmaliy, "Unified forms for Kalman and finite impulse response filtering and smoothing," Automatica, vol. 49, no. 6, pp. 1892-1899, Jun. 2013.
[32] S. Zhao, Y. S. Shmaliy, F. Liu, and S. Khan, "Unbiased, optimal, and in-betweens: The trade-off in discrete FIR filtering," IET Signal Process., vol. 10, no. 4, pp. 325-334, Jun. 2016.
[33] D. G. Nicolao, "On the time-varying Riccati difference equation of optimal filtering," SIAM J. Control Optim., vol. 30, no. 6, pp. 1251-1269, Nov. 1992.
[34] F. L. Lewis, D. Vrabie, and V. L. Syrmos, Optimal Control. Hoboken, NJ, USA: Wiley, 2012.
[35] A. H. Jazwinski, Stochastic Processes and Filtering Theory. Chelmsford, MA, USA: Courier Corp., 2007.
[36] S. Zhao, Y. S. Shmaliy, B. Huang, and F. Liu, "Minimum variance unbiased FIR filter for discrete time-variant systems," Automatica, vol. 53, no. 3, pp. 355-361, Mar. 2015.
[37] S. Zhao, Y. S. Shmaliy, and F. Liu, "Fast Kalman-like optimal unbiased FIR filtering with applications," IEEE Trans. Signal Process., vol. 64, no. 9, pp. 2284-2297, May 2016.
[38] S. Zhao, Y. S. Shmaliy, F. Liu, O. Ibarra-Manzano, and S. Khan, "Effect of embedded unbiasedness on discrete-time optimal FIR filtering estimates," EURASIP J. Adv. Signal Process., vol. 83, pp. 1-13, 2015.
[39] R. Isermann and M. Münchhof, Identification of Dynamic Systems: An Introduction With Applications. Berlin, Germany: Springer, 2010.
[40] M. Z. Win and R. A. Scholtz, "Ultra-wide bandwidth time-hopping spread-spectrum impulse radio for wireless multiple-access communication," IEEE Trans. Commun., vol. 48, no. 4, pp. 679-691, Apr. 2000.
[41] W. Suwansantisuk, M. Z. Win, and L. A. Shepp, "On the performance of wide-bandwidth signal acquisition in dense multipath channels," IEEE Trans. Veh. Technol., vol. 48, no. 4, pp. 679-691, Apr. 2000.
[42] T. Kailath, A. H. Sayed, and B. Hassibi, Linear Estimation. Upper Saddle River, NJ, USA: Prentice-Hall, 2000.
[43] W. H. Kwon and S. Han, Receding Horizon Control: Model Predictive Control for State Models. London, U.K.: Springer, 2005.
[44] K. Åström, Introduction to Stochastic Control Theory. New York, NY, USA: Dover, 1970.
Shunyi Zhao (Senior Member, IEEE) received the Ph.D. degree in control theory and application from the Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Institute of Automation, Jiangnan University, Wuxi, China, in 2015. From 2013 to 2014, he was a Visiting Student with the Department of Chemical and Materials Engineering, University of Alberta, Edmonton, AB, Canada, where he was a Postdoctoral Fellow from 2015 to 2018. In 2015, he joined Jiangnan University as an Associate Professor, and he is currently a Professor. His research interests include statistical signal processing, Bayesian estimation theory, and fault detection and diagnosis. Dr. Zhao was the recipient of the Alexander von Humboldt Research Fellowship in Germany, the Excellent Ph.D. Thesis Award (2016) in Jiangsu Province, China, and a nomination for Excellent Doctoral Thesis from the Chinese Association of Automation in 2016.
Jinfu Wang received the B.Sc. degree from the Department of Automation, Northeast Petroleum University, Daqing, China, in 2020. He is currently working toward the M.Sc. degree with the Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Institute of Automation, Jiangnan University, Wuxi, China. His research interests include state estimation and Bayesian data analysis.
Yuriy S. Shmaliy (Fellow, IEEE) received the B.S., M.S., and Ph.D. degrees in electrical engineering from Kharkiv Aviation Institute, Kharkiv, Ukraine, in 1974, 1976, and 1982, respectively, and the D.Sc. degree in electrical engineering from the USSR Government in 1992. Since 1986, he has been a Full Professor. From 1985 to 1999, he was with Kharkiv Military University, Kharkiv, Ukraine. In 1992, he founded the Scientific Center Sichron and was the Director in 2002. Since 1999, he has been with the Universidad de Guanajuato, Guanajuato, Mexico, and from 2012 to 2015, he headed the Department of Electronics Engineering of this university.

He has 498 journal and conference papers and holds 81 patents. He has authored the books Continuous-Time Signals (Springer, 2006), Continuous-Time Systems (Springer, 2007), and GPS-Based Optimal FIR Filtering of Clock Models (Nova Science Publications, 2009). He also edited the book Probability: Interpretation, Theory and Applications (Nova Science Publications, 2012). His current interests include robust state estimation, statistical signal processing, and stochastic system theory. His discrete orthogonal polynomials are called Discrete Shmaliy Moments. He was awarded the title of Honorary Radio Engineer of the USSR in 1991 and was with the Ukrainian State Award Committee on Science and Technology in 1998-1999. He was the recipient of the Royal Academy of Engineering Newton Research Collaboration Program Award in 2015, the IEEE Latin America Eminent Engineer Award in 2021, and several best conference paper awards. He has been invited many times to give tutorial, seminar, and plenary lectures.
Fei Liu (Member, IEEE) received the B.S. degree in electrical technology and the M.S. degree in industrial automation from the Wuxi Institute of Light Industry, China, in 1987 and 1990, respectively, and the Ph.D. degree in control science and control engineering from Zhejiang University, Hangzhou, China, in 2002. From 1990 to 1999, he was an Assistant, a Lecturer, and an Associate Professor with the Wuxi Institute of Light Industry. Since 2003, he has been a Professor with the Institute of Automation, Jiangnan University, Wuxi, China. From 2005 to 2006, he was a Visiting Professor with the University of Manchester, Manchester, U.K. His research interests include advanced control theory and applications, batch process control engineering, statistical monitoring and diagnosis, and intelligent techniques in industrial processes.