A Generalized Autocovariance Least-Squares Method for Covariance Estimation
Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad; Jørgensen, Sten Bay
Published in:
American Control Conference 2007
Link to article, DOI:
10.1109/ACC.2007.4282878
Publication date:
2007
Document Version
Publisher's PDF, also known as Version of record
Link back to DTU Orbit
Citation (APA):
Åkesson, B. M., Jørgensen, J. B., Poulsen, N. K., & Jørgensen, S. B. (2007). A Generalized Autocovariance
Least-Squares Method for Covariance Estimation. In American Control Conference 2007 IEEE.
https://doi.org/10.1109/ACC.2007.4282878
Proceedings of the 2007 American Control Conference
Marriott Marquis Hotel at Times Square
New York City, USA, July 11-13, 2007
ThC07.2

A Generalized Autocovariance Least-Squares Method for Covariance Estimation

Bernt M. Åkesson (CAPEC, Department of Chemical Engineering, Technical University of Denmark, DK-2800 Kgs. Lyngby; Email: baa@kt.dtu.dk)
John Bagterp Jørgensen (Informatics and Mathematical Modelling, Technical University of Denmark, DK-2800 Kgs. Lyngby; Email: jbj@imm.dtu.dk)
Niels Kjølstad Poulsen (Informatics and Mathematical Modelling, Technical University of Denmark, DK-2800 Kgs. Lyngby)
Sten Bay Jørgensen (CAPEC, Department of Chemical Engineering, Technical University of Denmark, DK-2800 Kgs. Lyngby; Email: sbj@kt.dtu.dk)
Abstract— A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.

Index Terms— Covariance estimation, optimal estimation, state estimation.

I. INTRODUCTION

The Kalman filter requires knowledge about the noise statistics. In practical applications, however, the noise covariances are generally not known. The autocovariance least-squares (ALS) method was presented by Odelson et al. [1] as a technique for estimating the system and sensor noise covariances from plant data. The technique was shown to give unbiased estimates with smaller variance than previously proposed methods, such as the correlation method by Mehra [2]. The objective of this paper is to demonstrate how the method can be extended to systems where the system noise and the sensor noise are mutually correlated. Moreover, the generalized method works with both the predicting and the filtering form of the Kalman filter.

II. GENERALIZED AUTOCOVARIANCE LEAST-SQUARES ESTIMATION

Consider a linear time-invariant system in discrete time,

    xk+1 = Axk + Buk + Gwk
    yk = Cxk + vk    (1)

where A ∈ R^(nx×nx), B ∈ R^(nx×nu), G ∈ R^(nx×nw) and C ∈ R^(ny×nx). The process noise wk and the measurement noise vk are zero-mean white noise processes according to

    [wk; vk] ~ N( [0; 0], [Qw Swv; SwvT Rv] ).    (2)

Assume that a suboptimal stationary Kalman filter is used to estimate the state. The filter is based on initial guesses of the covariances Qw, Rv and Swv. The filter can be either in the one-step predicting form,

    x̂k+1|k = Ax̂k|k−1 + Buk + Kp(yk − Cx̂k|k−1),    (3)

or in the filtering form,

    x̂k|k = x̂k|k−1 + Kf(yk − Cx̂k|k−1),    (4)

and the Kalman filter gains are defined as

    Kp = (APpCT + GSwv)(CPpCT + Rv)−1    (5)
    Kf = PpCT(CPpCT + Rv)−1    (6)

and Pp is the covariance of the state prediction error, x̃k|k−1 = xk − x̂k|k−1. The covariance Pp = E[x̃k|k−1 x̃k|k−1T] is obtained as the solution to the Riccati equation

    Pp = APpAT + GQwGT − (APpCT + GSwv)(CPpCT + Rv)−1(CPpAT + SwvTGT).    (7)

A general state-space model of the measurement prediction/estimate error can be defined,

    x̃k+1|k = Āx̃k|k−1 + Ḡw̄k
    ek = C̄x̃k|k−1 + H̄vk    (8)

where ek and the system matrices have different definitions, as shown in Table I, depending on which form of the filter is used.

TABLE I
DEFINITIONS OF SYMBOLS IN (8) FOR EACH KALMAN FILTER FORM

    Symbol   Predicting Form     Filtering Form
    ek       yk − Cx̂k|k−1        yk − Cx̂k|k
    Ā        A − KpC             A − KpC
    Ḡ        [G  −Kp]            [G  −Kp]
    C̄        C                   C − CKfC
    H̄        I                   I − CKf

The noise w̄k in (8) is defined as

    w̄k = [wk; vk]

with properties

    [w̄k; vk] ~ N( [0; 0], [Q̄w S̄wv; S̄wvT Rv] )    (9)

where

    Q̄w = E[w̄k w̄kT] = [Qw Swv; SwvT Rv],   S̄wv = E[w̄k vkT] = [Swv; Rv].
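The quantities in (5)-(7) and (9) can be illustrated numerically. The sketch below is not part of the paper: it iterates the Riccati equation (7) to stationarity by fixed-point iteration, forms the gains (5)-(6), and assembles the stacked covariances of (9). Here numpy is assumed available, and the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def kalman_setup(A, C, G, Qw, Rv, Swv, iters=1000):
    """Iterate the Riccati equation (7) to a stationary Pp, then form
    the predicting gain Kp per (5) and the filtering gain Kf per (6)."""
    nx = A.shape[0]
    Pp = np.eye(nx)
    for _ in range(iters):
        S = C @ Pp @ C.T + Rv                            # innovation covariance
        K = (A @ Pp @ C.T + G @ Swv) @ np.linalg.inv(S)
        Pp = A @ Pp @ A.T + G @ Qw @ G.T - K @ S @ K.T   # one Riccati step, eq. (7)
    S = C @ Pp @ C.T + Rv
    Kp = (A @ Pp @ C.T + G @ Swv) @ np.linalg.inv(S)     # eq. (5)
    Kf = Pp @ C.T @ np.linalg.inv(S)                     # eq. (6)
    # Stacked noise covariances of the augmented noise wbar = [w; v], eq. (9):
    Qbar = np.block([[Qw, Swv], [Swv.T, Rv]])            # Qbar_w = E[wbar wbar']
    Sbar = np.vstack([Swv, Rv])                          # Sbar_wv = E[wbar v']
    return Kp, Kf, Pp, Qbar, Sbar
```

For a stable scalar example (A = 0.9, C = G = 1, Qw = Rv = 1, Swv = 0) the iteration converges to Pp ≈ 1.48, and Kp = A·Kf as expected when Swv = 0.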
The autocovariance of the measurement prediction or estimate error is given by

    Re,0 = E[ek ekT] = C̄ Pp C̄T + H̄ Rv H̄T    (10)
    Re,j = E[ek+j ekT] = C̄ Ā^j Pp C̄T + C̄ Ā^(j−1) G Swv H̄T − C̄ Ā^(j−1) Kp Rv H̄T,  j ≥ 1.    (11)

The autocovariance matrix is defined as

              [ Re,0      Re,1T     ···   Re,L−1T ]
    Re(L) =   [ Re,1      Re,0      ···   Re,L−2T ]    (12)
              [   ⋮         ⋮        ⋱       ⋮    ]
              [ Re,L−1    Re,L−2    ···   Re,0    ]

and can be written as

    Re(L) = O Pp OT + Z (⊕_{i=1}^{L} Qw) ZT + Z (⊕_{i=1}^{L} Swv) ΨT + Ψ (⊕_{i=1}^{L} SwvT) ZT + Ψ (⊕_{i=1}^{L} Rv) ΨT    (13)

where

    Z = Γ (⊕_{i=1}^{L} G),   Ψ = Γ (⊕_{i=1}^{L} (−Kp)) + ⊕_{i=1}^{L} H̄

and

        [ C̄        ]         [ 0          0    ···   0    0 ]
    O = [ C̄Ā       ],    Γ = [ C̄          0    ···   0    0 ]
        [ ⋮        ]         [ C̄Ā         C̄    ···   0    0 ]
        [ C̄Ā^(L−1) ]         [ ⋮                 ⋱    ⋮      ]
                              [ C̄Ā^(L−2)  ···   C̄Ā   C̄   0 ]

We apply the vec operator to (13) in order to state the problem as a linear least-squares problem. The vec operator performs stacking of the matrix columns to form a column vector [3]. By applying the rules for the vec operator, we write the Lyapunov equation for Pp in stacked form, with the subscript s used as shorthand for the vec operator, i.e. vec(A) = As:

    vec(Pp) = vec(Ā Pp ĀT) + vec(Ḡ Q̄w ḠT),  so that  (Pp)s = (I_{nx²} − Ā ⊗ Ā)−1 (Ḡ Q̄w ḠT)s.    (14)

We introduce three permutation matrices. For an m × n matrix A we define a permutation matrix Um,n,L, which is an mnL² × mn matrix of zeros and ones satisfying

    vec(⊕_{i=1}^{L} A) = Um,n,L vec(A).    (15)

For a square matrix of size p × p we have the permutation matrix Up,L = Up,p,L. Finally, there is the vec-permutation matrix (or commutation matrix) Tm,n, such that for an m × n matrix A [3], [4],

    vec(AT) = Tn,m vec(A).    (16)

Applying the vec operator to (13) yields

    (Re(L))s = [(Z ⊗ Z) U_{nw,L} + D (G ⊗ G)] (Qw)s
             + [(Ψ ⊗ Z) U_{nw,ny,L} − D (I_{nx²} + T_{nx,nx}) (Kp ⊗ G) + (Z ⊗ Ψ) U_{ny,nw,L} T_{ny,nw}] (Swv)s
             + [(Ψ ⊗ Ψ) U_{ny,L} + D (Kp ⊗ Kp)] (Rv)s    (17)

in which

    D = (O ⊗ O)(I_{nx²} − Ā ⊗ Ā)−1.    (18)

Given a sequence of data {ei}_{i=1}^{Nd}, the estimate of the autocovariance can be computed by

    R̂e,j = (1/(Nd − j)) Σ_{i=1}^{Nd−j} e_{i+j} eiT,    (19)

where Nd is the length of the data sequence. The estimated autocovariance matrix R̂e(L) can be formed analogously to (12) using the estimates (19). Now (17) can be written in the form of a linear least-squares problem

    Φ = min_{Qw,Swv,Rv} ‖ A [(Qw)s; (Swv)s; (Rv)s] − (R̂e(L))s ‖₂²    (20)

where A denotes the coefficient matrix defined by (17), and where additional constraints may be necessary in order to ensure positive semidefiniteness of the covariance matrices. As noted in [1], a short data sequence or significant model error may result in covariance estimates that are not positive definite. This problem can be remedied by stating (20) as a convex semidefinite programming problem [1].

The optimal Kalman filter gain can then be computed from the estimated covariances by (5) or (6) after solving the Riccati equation (7).

Note that for the case with a predicting Kalman filter and with Swv = 0, (17) takes the form presented in [1].
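To illustrate the data-driven step, the lag autocovariances (19) and their block-Toeplitz stacking (12) can be sketched as below. This is an illustration assuming numpy, with hypothetical variable and function names, not code from the paper.

```python
import numpy as np

def autocov_estimates(E, L):
    """Given innovation data E of shape (Nd, ny), return the lag-j sample
    autocovariances Rhat[j] = 1/(Nd-j) * sum_i e_{i+j} e_i^T, per (19)."""
    Nd, ny = E.shape
    return [sum(np.outer(E[i + j], E[i]) for i in range(Nd - j)) / (Nd - j)
            for j in range(L)]

def stack_autocov(Rhat):
    """Form the (L*ny) x (L*ny) block-Toeplitz matrix Rhat_e(L) of (12):
    block (r, c) is Rhat[r-c] on and below the diagonal, Rhat[c-r]^T above."""
    L = len(Rhat)
    rows = [[Rhat[r - c] if r >= c else Rhat[c - r].T for c in range(L)]
            for r in range(L)]
    return np.block(rows)
```

The stacked matrix can then be vectorized to give (R̂e(L))s and matched against the linear map in (17) to solve the least-squares problem (20).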
III. CONCLUSIONS AND FUTURE WORK

A. Conclusions

A generalization of the autocovariance least-squares method by Odelson et al. has been presented. The generalization is applicable to systems with mutually correlated disturbances and also works with data generated by the filtering form of the Kalman filter.

B. Future Work

Solution methods for the constrained least-squares problem will be investigated. The estimation method will be applied to realistic examples.
REFERENCES

[1] B. J. Odelson, M. R. Rajamani, and J. B. Rawlings, "A new autocovariance least-squares method for estimating noise covariances," Automatica, vol. 42, no. 2, pp. 303–308, 2006.
[2] R. K. Mehra, "On the identification of variances and adaptive Kalman filtering," IEEE Transactions on Automatic Control, vol. AC-15, no. 2, pp. 175–184, 1970.
[3] J. W. Brewer, "Kronecker products and matrix calculus in system theory," IEEE Transactions on Circuits and Systems, vol. 25, no. 9, pp. 772–781, 1978.
[4] J. R. Magnus and H. Neudecker, "The commutation matrix: some properties and applications," The Annals of Statistics, vol. 7, no. 2, pp. 381–394, 1979.