5. Linear Stochastic Models
The latter condition means that the covariance of any two elements depends
only on their temporal separation |t − s|. Notice that, if the elements of the
sequence are normally distributed, then the two conditions are sufficient to
establish strict stationarity. On their own, they constitute the conditions of
weak or second-order stationarity.
The condition on the covariances implies that the dispersion matrix of the
vector [x1 , x2 , . . . , xn ] is a bisymmetric Laurent matrix of the form
(5.2)
$$\Gamma = \begin{bmatrix} \gamma_0 & \gamma_1 & \gamma_2 & \cdots & \gamma_{n-1} \\ \gamma_1 & \gamma_0 & \gamma_1 & \cdots & \gamma_{n-2} \\ \gamma_2 & \gamma_1 & \gamma_0 & \cdots & \gamma_{n-3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \gamma_{n-1} & \gamma_{n-2} & \gamma_{n-3} & \cdots & \gamma_0 \end{bmatrix},$$
wherein the generic element in the (i, j)th position is γ|i−j| = C(xi , xj ). Given
that a sequence of observations of a time series represents only a segment of
a single realisation of a stochastic process, one might imagine that there is
little chance of making valid inferences about the parameters of the process.
However, provided that the process x(t) is stationary and provided that the
statistical dependencies between widely separated elements of the sequence are
weak, it is possible to estimate consistently those parameters of the process
which express the dependence of proximate elements of the sequence. If one
is prepared to make sufficiently strong assumptions about the nature of the
process, then a knowledge of such parameters may be all that is needed for a
complete characterisation of the process.
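As a computational aside, the matrix of (5.2) is easily assembled from a given set of autocovariances. The following is a minimal sketch in numpy; the function name and the test values are illustrative choices, not taken from the text:

```python
import numpy as np

def dispersion_matrix(gamma):
    """Build the Toeplitz dispersion matrix of (5.2), whose (i, j)th
    element is gamma[|i - j|], from the values gamma_0, ..., gamma_{n-1}."""
    n = len(gamma)
    return np.array([[gamma[abs(i - j)] for j in range(n)]
                     for i in range(n)])

# For instance, with gamma_0 = 2.0, gamma_1 = 1.0 and gamma_2 = 0.5:
print(dispersion_matrix([2.0, 1.0, 0.5]))
```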
Moving-Average Processes
The qth-order moving average process, or MA(q) process, is defined by the
equation
(5.3) y(t) = µ0 ε(t) + µ1 ε(t − 1) + · · · + µq ε(t − q),
where ε(t) is a white-noise process consisting of a sequence of independently
and identically distributed random variables with E{ε(t)} = 0. The equation
is normalised either by setting µ0 = 1 or by
setting V {ε(t)} = σε2 = 1. The equation can be written in summary notation
as y(t) = µ(L)ε(t), where µ(L) = µ0 + µ1 L + · · · + µq Lq is a polynomial in the
lag operator.
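To fix ideas, the MA(q) equation can be simulated directly. This is a minimal sketch, assuming Gaussian white noise; the coefficients in the example are arbitrary illustrative values:

```python
import numpy as np

def simulate_ma(mu, n, sigma=1.0, seed=0):
    """Simulate n values of y(t) = mu[0]*eps(t) + ... + mu[q]*eps(t-q)."""
    q = len(mu) - 1
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, size=n + q)   # i.i.d. white-noise disturbances
    # each y(t) is a weighted sum of q + 1 successive disturbances
    return np.convolve(eps, mu, mode="valid")

y = simulate_ma([1.0, 0.6, 0.3], n=500)        # an MA(2) with mu_0 = 1
```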
A moving-average process is clearly stationary since any two elements
yt and ys represent the same function of the vectors [εt , εt−1 , . . . , εt−q ] and
[εs , εs−1 , . . . , εs−q ] which are identically distributed. In addition to the
condition of stationarity, it is usually required that a moving-average process
should be invertible, such that it can be expressed in the form µ−1(L)y(t) = ε(t),
where the LHS embodies a convergent sum of past values of y(t). This is an
infinite-order autoregressive representation of the process. The representation
is available only if all the roots of the equation µ(z) = µ0 + µ1 z + · · · + µq z q = 0
lie outside the unit circle. This conclusion follows from our discussion of partial
fractions.
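The condition on the roots can be checked numerically. A small sketch, with illustrative coefficients; note that numpy's root finder expects the coefficient of the highest power first:

```python
import numpy as np

def is_invertible(mu):
    """True if every root of mu(z) = mu[0] + mu[1]*z + ... + mu[q]*z^q
    lies outside the unit circle."""
    roots = np.roots(mu[::-1])            # highest power first
    return bool(np.all(np.abs(roots) > 1.0))

print(is_invertible([1.0, -0.5]))  # theta = 0.5: root at z = 2, invertible
print(is_invertible([1.0, -2.0]))  # theta = 2: root at z = 1/2, not invertible
```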
As an example, let us consider the first-order moving-average process which
is defined by
(5.4) y(t) = ε(t) − θε(t − 1) = (1 − θL)ε(t).
Provided that |θ| < 1, this can be written in autoregressive form as
(5.5)
$$\varepsilon(t) = (1 - \theta L)^{-1} y(t) = \bigl\{ y(t) + \theta y(t-1) + \theta^{2} y(t-2) + \cdots \bigr\}.$$
Imagine that |θ| > 1 instead. Then, to obtain a convergent series, we have to
write

(5.6)
$$y(t+1) = \varepsilon(t+1) - \theta\varepsilon(t) = -\theta(1 - L^{-1}/\theta)\varepsilon(t),$$

where L⁻¹ε(t) = ε(t + 1). This gives ε(t) = −θ⁻¹(1 − L⁻¹/θ)⁻¹y(t + 1), which
is a convergent sum of the future values of the sequence; such a representation,
which requires a knowledge of the future, is of no use in forecasting.
The autocovariances of the MA(q) process may be found as follows:

(5.8)
$$\gamma_\tau = E(y_t y_{t-\tau}) = E\Bigl\{\sum_i \mu_i \varepsilon_{t-i} \sum_j \mu_j \varepsilon_{t-\tau-j}\Bigr\} = \sum_i \sum_j \mu_i \mu_j E(\varepsilon_{t-i} \varepsilon_{t-\tau-j}).$$

Since ε(t) is a white-noise sequence of independently and identically distributed
random variables, there is E(εt−i εt−τ−j) = σε² when i = τ + j, and the
expectation is zero otherwise. Therefore

(5.10)
$$\gamma_\tau = \sigma_\varepsilon^2 \sum_j \mu_j \mu_{j+\tau}.$$
For the first-order process of (5.4), wherein µ0 = 1 and µ1 = −θ, this formula
yields

(5.12)
$$\gamma_0 = \sigma_\varepsilon^2(1 + \theta^2), \qquad \gamma_1 = -\sigma_\varepsilon^2\theta, \qquad \gamma_\tau = 0 \quad\text{if}\quad \tau > 1.$$
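The formula of (5.10) translates directly into code, and the MA(1) values of (5.12) provide a check. The function below is an illustrative sketch:

```python
import numpy as np

def ma_autocovariance(mu, tau, sigma2=1.0):
    """gamma_tau = sigma_eps^2 * sum_j mu[j] * mu[j + tau], as in (5.10)."""
    mu = np.asarray(mu, dtype=float)
    q, tau = len(mu) - 1, abs(tau)
    if tau > q:
        return 0.0                 # MA(q) autocovariances vanish beyond lag q
    return sigma2 * float(np.dot(mu[:q + 1 - tau], mu[tau:]))

theta = 0.5
mu = [1.0, -theta]                 # the MA(1) process of (5.4)
print(ma_autocovariance(mu, 0))    # 1 + theta^2 = 1.25
print(ma_autocovariance(mu, 1))    # -theta      = -0.5
print(ma_autocovariance(mu, 2))    # 0.0
```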
Autoregressive Processes
The pth-order autoregressive process, or AR(p) process, is defined by the
equation

$$\alpha_0 y(t) + \alpha_1 y(t-1) + \cdots + \alpha_p y(t-p) = \varepsilon(t),$$

which is normalised by setting α0 = 1. The equation can be written in summary
notation as α(L)y(t) = ε(t), where α(L) = α0 + α1L + · · · + αpL^p is a
polynomial in the lag operator.
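The AR(p) equation can be simulated recursively. A hedged sketch, assuming Gaussian disturbances and zero-valued initial conditions whose effect is removed by a burn-in period:

```python
import numpy as np

def simulate_ar(alpha, n, sigma=1.0, burn=200, seed=0):
    """Simulate alpha[0]*y(t) + ... + alpha[p]*y(t-p) = eps(t), alpha[0] = 1."""
    p = len(alpha) - 1
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, size=n + burn)
    y = np.zeros(n + burn)
    for t in range(p, n + burn):
        y[t] = eps[t] - sum(alpha[i] * y[t - i] for i in range(1, p + 1))
    return y[burn:]                # discard the start-up transient

# y(t) = 0.9*y(t-1) + eps(t) corresponds to alpha = [1.0, -0.9]
y = simulate_ar([1.0, -0.9], n=500)
```

Notice the sign convention: with α0 = 1, the coefficient α1 = −φ corresponds to the process y(t) = φy(t − 1) + ε(t).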
As an example, consider the first-order autoregressive process defined by

$$(1 - \phi L)y(t) = \varepsilon(t).$$

Provided that the process is stationary with |φ| < 1, it can be represented in
moving-average form as

$$y(t) = (1 - \phi L)^{-1}\varepsilon(t) = \varepsilon(t) + \phi\varepsilon(t-1) + \phi^2\varepsilon(t-2) + \cdots.$$
The autocovariances of the process can be found by using the formula of (10),
which is applicable to moving-average processes of finite or infinite order. Thus

(5.20)
$$\gamma_\tau = E(y_t y_{t-\tau}) = E\Bigl\{\sum_i \phi^i \varepsilon_{t-i} \sum_j \phi^j \varepsilon_{t-\tau-j}\Bigr\} = \sum_i \sum_j \phi^i \phi^j E(\varepsilon_{t-i} \varepsilon_{t-\tau-j});$$

whence, on setting i = τ + j as before, it follows that

$$\gamma_\tau = \sigma_\varepsilon^2 \phi^\tau \sum_j \phi^{2j} = \frac{\sigma_\varepsilon^2 \phi^\tau}{1 - \phi^2}.$$
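This result can be checked against simulated data, reusing the simulate_ar sketch above. The sample autocovariances, computed here with the usual divisor n, should approach σε²φ^τ/(1 − φ²) as the sample grows:

```python
import numpy as np

def sample_autocov(y, tau):
    """The (biased) sample autocovariance at lag tau, dividing by n."""
    y = y - y.mean()
    n = len(y)
    return float(np.dot(y[:n - tau], y[tau:])) / n

phi = 0.9
y = simulate_ar([1.0, -phi], n=20000, seed=1)
for tau in range(4):
    theory = phi**tau / (1.0 - phi**2)     # with sigma_eps^2 = 1
    print(tau, round(theory, 3), round(sample_autocov(y, tau), 3))
```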
Given α0 = 1 and the values for γ0 , γ1 , γ2 , we can find σε2 and α1 , α2 . Con-
versely, given α0 , α1 , α2 and σε2 , we can find γ0 , γ1 , γ2 . It is worth recalling at
this juncture that the normalisation σε2 = 1 might have been chosen instead
of α0 = 1. This would have rendered the equations more easily intelligible.
Notice also how the matrix following the first equality is folded across the axis
which divides it vertically to give the matrix which follows the second equality.
Pleasing effects of this sort often arise in time-series analysis.
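The correspondence between the parameters and the autocovariances can be exercised in code. The sketch below, whose function name is illustrative, solves the Yule–Walker equations for α1, . . . , αp and σε², given γ0, . . . , γp and the normalisation α0 = 1:

```python
import numpy as np

def yule_walker(gamma):
    """Given gamma_0, ..., gamma_p, find alpha_1, ..., alpha_p and
    sigma_eps^2 under the normalisation alpha_0 = 1."""
    p = len(gamma) - 1
    # Toeplitz system: sum_i alpha_i * gamma_|t - i| = 0 for t = 1, ..., p
    G = np.array([[gamma[abs(t - i)] for i in range(1, p + 1)]
                  for t in range(1, p + 1)])
    alpha = np.linalg.solve(G, -np.asarray(gamma[1:], dtype=float))
    sigma2 = gamma[0] + float(np.dot(alpha, gamma[1:]))   # the tau = 0 equation
    return alpha, sigma2

# An AR(1) with phi = 0.9, sigma_eps^2 = 1 has gamma_tau = 0.9^tau/(1 - 0.81)
alpha, sigma2 = yule_walker([1 / 0.19, 0.9 / 0.19])
print(alpha, sigma2)               # approximately [-0.9] and 1.0
```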
The solution of the Yule–Walker equations of order r, when it is extended by a
zero-valued element, satisfies the system

(5.29)
$$\begin{bmatrix} \gamma_0 & \gamma_1 & \cdots & \gamma_r & \gamma_{r+1} \\ \gamma_1 & \gamma_0 & \cdots & \gamma_{r-1} & \gamma_r \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \gamma_r & \gamma_{r-1} & \cdots & \gamma_0 & \gamma_1 \\ \gamma_{r+1} & \gamma_r & \cdots & \gamma_1 & \gamma_0 \end{bmatrix} \begin{bmatrix} 1 \\ \alpha_{1(r)} \\ \vdots \\ \alpha_{r(r)} \\ 0 \end{bmatrix} = \begin{bmatrix} \sigma^2_{(r)} \\ 0 \\ \vdots \\ 0 \\ g \end{bmatrix},$$

wherein

(5.30)
$$g = \sum_{j=0}^{r} \alpha_{j(r)} \gamma_{r+1-j} \quad\text{with}\quad \alpha_{0(r)} = 1.$$
The counterpart of (5.29), in which the order of the coefficients is reversed,
is the system

(5.31)
$$\begin{bmatrix} \gamma_0 & \gamma_1 & \cdots & \gamma_r & \gamma_{r+1} \\ \gamma_1 & \gamma_0 & \cdots & \gamma_{r-1} & \gamma_r \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \gamma_r & \gamma_{r-1} & \cdots & \gamma_0 & \gamma_1 \\ \gamma_{r+1} & \gamma_r & \cdots & \gamma_1 & \gamma_0 \end{bmatrix} \begin{bmatrix} 0 \\ \alpha_{r(r)} \\ \vdots \\ \alpha_{1(r)} \\ 1 \end{bmatrix} = \begin{bmatrix} g \\ 0 \\ \vdots \\ 0 \\ \sigma^2_{(r)} \end{bmatrix},$$

which follows from the symmetry of the dispersion matrix about its secondary
diagonal. The two systems of equations (29) and (31) can be combined, by adding
c times the latter to the former, to give
(5.32)
$$\begin{bmatrix} \gamma_0 & \gamma_1 & \cdots & \gamma_r & \gamma_{r+1} \\ \gamma_1 & \gamma_0 & \cdots & \gamma_{r-1} & \gamma_r \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \gamma_r & \gamma_{r-1} & \cdots & \gamma_0 & \gamma_1 \\ \gamma_{r+1} & \gamma_r & \cdots & \gamma_1 & \gamma_0 \end{bmatrix} \begin{bmatrix} 1 \\ \alpha_{1(r)} + c\alpha_{r(r)} \\ \vdots \\ \alpha_{r(r)} + c\alpha_{1(r)} \\ c \end{bmatrix} = \begin{bmatrix} \sigma^2_{(r)} + cg \\ 0 \\ \vdots \\ 0 \\ g + c\sigma^2_{(r)} \end{bmatrix}.$$
If c = −g/σ²(r), then the final element in the vector on the RHS becomes zero,
and the system becomes the set of Yule–Walker equations of order r + 1. The
solution of the equations, from the last element αr+1(r+1) = c through to the
variance term σ²(r+1), is given by
(5.34)
$$\alpha_{r+1(r+1)} = -\frac{1}{\sigma^2_{(r)}}\Bigl\{\sum_{j=0}^{r} \alpha_{j(r)}\gamma_{r+1-j}\Bigr\},$$

$$\begin{bmatrix} \alpha_{1(r+1)} \\ \vdots \\ \alpha_{r(r+1)} \end{bmatrix} = \begin{bmatrix} \alpha_{1(r)} \\ \vdots \\ \alpha_{r(r)} \end{bmatrix} + \alpha_{r+1(r+1)} \begin{bmatrix} \alpha_{r(r)} \\ \vdots \\ \alpha_{1(r)} \end{bmatrix},$$

$$\sigma^2_{(r+1)} = \sigma^2_{(r)}\bigl\{1 - (\alpha_{r+1(r+1)})^2\bigr\}.$$
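The recursion of (5.34) is what is commonly called the Durbin–Levinson algorithm, and it is readily implemented. A minimal sketch, with an illustrative function name, which builds the Yule–Walker solutions order by order:

```python
import numpy as np

def levinson_durbin(gamma):
    """Apply the recursion of (5.34) to gamma_0, ..., gamma_p, returning
    the coefficients [1, alpha_1(p), ..., alpha_p(p)] and sigma^2_(p)."""
    alpha = np.array([1.0])
    sigma2 = gamma[0]                           # the order-0 variance
    for r in range(len(gamma) - 1):
        # g = sum_j alpha_j(r) * gamma_{r+1-j}, as in (5.30)
        g = float(np.dot(alpha, gamma[r + 1:0:-1]))
        c = -g / sigma2                         # c = alpha_{r+1(r+1)}
        # fold the reversed coefficients back, as in (5.32) and (5.34)
        alpha = (np.concatenate([alpha, [0.0]])
                 + c * np.concatenate([[0.0], alpha[::-1]]))
        sigma2 *= 1.0 - c * c                   # sigma^2_(r+1)
    return alpha, sigma2

# autocovariances of an AR(1) with phi = 0.9 and sigma_eps^2 = 1:
gamma = [0.9**tau / 0.19 for tau in range(3)]
print(levinson_durbin(gamma))      # approximately [1, -0.9, 0] and 1.0
```

Apart from the sign convention implied by the normalisation α0 = 1, the values αr+1(r+1) generated at the successive steps constitute the partial autocorrelation function.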
For the ARMA(p, q) process α(L)y(t) = µ(L)ε(t), the autocovariance generating
function is

(5.37)
$$\gamma(z) = \sigma_\varepsilon^2\,\frac{\mu(z)\mu(z^{-1})}{\alpha(z)\alpha(z^{-1})}.$$
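Numerically, the autocovariances implied by (5.37) can be recovered by expanding y(t) = {µ(L)/α(L)}ε(t) as a long moving average and applying the formula of (5.10) to the resulting coefficients. A sketch, assuming stationarity and using an arbitrary truncation length:

```python
import numpy as np

def arma_autocov(alpha, mu, sigma2=1.0, ntau=10, nterms=2000):
    """Autocovariances of alpha(L)y(t) = mu(L)eps(t), computed from the
    (truncated) psi-weights of the form y(t) = psi(L)eps(t)."""
    psi = np.zeros(nterms)
    mu = list(mu) + [0.0] * nterms             # pad with mu_j = 0 for j > q
    for j in range(nterms):
        # alpha(L)psi(L) = mu(L) gives psi_j = (mu_j - sum_i alpha_i psi_{j-i})/alpha_0
        psi[j] = (mu[j] - sum(alpha[i] * psi[j - i]
                              for i in range(1, len(alpha)) if j >= i)) / alpha[0]
    # gamma_tau = sigma^2 * sum_j psi_j * psi_{j+tau}, as in (5.10)
    return [sigma2 * float(np.dot(psi[:nterms - t], psi[t:]))
            for t in range(ntau)]

gamma = arma_autocov([1.0, -0.9], [1.0, 0.5])  # an illustrative ARMA(1, 1)
```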
The autocovariances themselves obey a simple recursion. Multiplying the
equation of the process by y(t − τ) and taking expectations gives

(5.38)
$$\sum_i \alpha_i \gamma_{i-\tau} = \sum_i \mu_i \delta_{i-\tau},$$

where γτ−i = E(yt−τ yt−i) and δi−τ = E(yt−τ εt−i). Since εt−i is uncorrelated
with yt−τ whenever it is subsequent to the latter, it follows that δi−τ = 0 if
τ > i. Since the index i on the RHS of the equation (38) runs from 0 to q, it
follows that
(5.39)
$$\sum_i \alpha_i \gamma_{i-\tau} = 0 \quad\text{if}\quad \tau > q.$$
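This property can be verified with the arma_autocov sketch above: for the illustrative ARMA(1, 1), the sums Σi αi γi−τ vanish, to within truncation error, at every lag τ > q = 1:

```python
alpha, mu, q = [1.0, -0.9], [1.0, 0.5], 1
gamma = arma_autocov(alpha, mu, ntau=6)
for tau in range(q + 1, 6):
    # gamma is symmetric, so gamma_{i - tau} = gamma[|i - tau|]
    print(tau, sum(alpha[i] * gamma[abs(i - tau)] for i in range(len(alpha))))
```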