Lecture 2: ARMA Models*
1 ARMA Process
As we have remarked, dependence is very common in time series observations. To model this dependence, we start with univariate ARMA models. To motivate the model, we can track two lines of thinking. First, for a series $x_t$, we can model the level of the current observation as depending on the level of lagged observations. For example, if we observe a high GDP realization this quarter, we would expect GDP in the next few quarters to be high as well. This way of thinking can be represented by an AR model. The AR(1) (autoregressive of order one) process can be written as:
$$x_t = \phi x_{t-1} + \epsilon_t$$
where $\epsilon_t \sim WN(0, \sigma^2)$, and we keep this assumption throughout this lecture. Similarly, the AR($p$) (autoregressive of order $p$) process can be written as:
$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \ldots + \phi_p x_{t-p} + \epsilon_t.$$
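As an illustration (a minimal sketch, not part of the lecture; the parameter values are arbitrary), an AR(1) process can be simulated by iterating the recursion directly. With $|\phi| < 1$ the process is stationary and its lag-1 autocorrelation equals $\phi$, which the sample autocorrelation should roughly recover:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(phi, sigma, n, burn=200):
    """Simulate x_t = phi * x_{t-1} + eps_t with eps_t ~ WN(0, sigma^2),
    discarding an initial burn-in so the start value does not matter."""
    eps = rng.normal(0.0, sigma, n + burn)
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = phi * x[t - 1] + eps[t]
    return x[burn:]

x = simulate_ar1(phi=0.8, sigma=1.0, n=500)
# Sample lag-1 autocorrelation; should be near phi = 0.8 for this stationary AR(1).
acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]
```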
In a second way of thinking, we can model the observation of a random variable at time $t$ as affected not only by the shock at time $t$, but also by the shocks that took place before time $t$. For example, if we observe a negative shock to the economy, say a catastrophic earthquake, then we would expect this negative effect to influence the economy not only in the period in which it takes place, but also in the near future. This kind of thinking can be represented by an MA model. The MA(1) (moving average of order one) and MA($q$) (moving average of order $q$) processes can be written as
$$x_t = \epsilon_t + \theta \epsilon_{t-1}$$
and
$$x_t = \epsilon_t + \theta_1 \epsilon_{t-1} + \ldots + \theta_q \epsilon_{t-q}.$$
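An MA($q$) process can likewise be built directly from the white-noise shocks (again a hedged sketch with arbitrary parameters, not from the lecture). A defining feature worth checking numerically is that its autocorrelation is zero beyond lag $q$:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ma(theta, sigma, n):
    """Simulate x_t = eps_t + theta_1 eps_{t-1} + ... + theta_q eps_{t-q}."""
    theta = np.asarray(theta)
    q = len(theta)
    eps = rng.normal(0.0, sigma, n + q)
    # eps[t-q:t][::-1] lines eps_{t-1}, ..., eps_{t-q} up with theta_1, ..., theta_q.
    return np.array([eps[t] + theta @ eps[t - q:t][::-1] for t in range(q, n + q)])

x = simulate_ma(theta=[0.5], sigma=1.0, n=2000)
# For MA(1): rho(1) = theta / (1 + theta^2) = 0.4, and rho(k) = 0 for k > 1.
acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]
acf3 = np.corrcoef(x[:-3], x[3:])[0, 1]
```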
If we combine these two models, we get a general ARMA($p, q$) model,
$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \ldots + \phi_p x_{t-p} + \epsilon_t + \theta_1 \epsilon_{t-1} + \ldots + \theta_q \epsilon_{t-q}.$$
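Combining the two recursions gives a direct simulation of, say, an ARMA(1,1) process (an illustrative sketch with arbitrary parameter values, not part of the lecture):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_arma11(phi, theta, sigma, n, burn=200):
    """Simulate x_t = phi x_{t-1} + eps_t + theta eps_{t-1}, dropping a burn-in."""
    eps = rng.normal(0.0, sigma, n + burn)
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = phi * x[t - 1] + eps[t] + theta * eps[t - 1]
    return x[burn:]

x = simulate_arma11(phi=0.7, theta=0.3, sigma=1.0, n=1000)
# With |phi| < 1 the process is stationary with mean zero.
sample_mean = np.mean(x)
```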
The ARMA model provides one of the basic tools in time series modeling. In the next few sections, we will discuss how to draw inferences using a univariate ARMA model.
* Copyright 2002-2006 by Ling Hu.
2 Lag Operators
Lag operators enable us to represent an ARMA model in a much more concise way. Applying the lag operator (denoted $L$) once, we move the index back one time unit; applying it $k$ times, we move the index back $k$ units:
$$L x_t = x_{t-1}$$
$$L^2 x_t = x_{t-2}$$
$$\vdots$$
$$L^k x_t = x_{t-k}$$
The lag operator is distributive over the addition operator, i.e.
$$L(x_t + y_t) = x_{t-1} + y_{t-1}.$$
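In code, the lag operator is simply a shift of the series' index, which makes the distributive property easy to verify numerically (a small sketch, not from the lecture; the `lag` helper is a hypothetical name):

```python
import numpy as np

def lag(x, k=1):
    """Apply L^k to an array: (L^k x)_t = x_{t-k}.
    The first k entries have no lagged value and are set to NaN."""
    out = np.full(len(x), np.nan)
    out[k:] = x[:-k]
    return out

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([10.0, 20.0, 30.0, 40.0])
# Distributivity: L(x + y) = Lx + Ly wherever both sides are defined.
lhs = lag(x + y)
rhs = lag(x) + lag(y)
```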
Using lag operators, we can rewrite the ARMA models as:
$$\text{AR(1)}: \quad (1 - \phi L) x_t = \epsilon_t$$
$$\text{AR}(p): \quad (1 - \phi_1 L - \phi_2 L^2 - \ldots - \phi_p L^p) x_t = \epsilon_t$$
$$\text{MA(1)}: \quad x_t = (1 + \theta L) \epsilon_t$$
$$\text{MA}(q): \quad x_t = (1 + \theta_1 L + \theta_2 L^2 + \ldots + \theta_q L^q) \epsilon_t$$
Let $\phi_0 = 1$, $\theta_0 = 1$ and define the lag polynomials
$$\phi(L) = 1 - \phi_1 L - \phi_2 L^2 - \ldots - \phi_p L^p$$
$$\theta(L) = 1 + \theta_1 L + \theta_2 L^2 + \ldots + \theta_q L^q$$
With lag polynomials, we can rewrite an ARMA process in a more compact way:
$$\text{AR}: \quad \phi(L) x_t = \epsilon_t$$
$$\text{MA}: \quad x_t = \theta(L) \epsilon_t$$
$$\text{ARMA}: \quad \phi(L) x_t = \theta(L) \epsilon_t$$
3 Invertibility
Given a time series probability model, usually we can find multiple ways to represent it. Which representation to choose depends on our problem. For example, to study impulse-response functions (section 4), MA representations may be more convenient; while to estimate an ARMA model, AR representations may be more convenient, as usually $x_t$ is observable while $\epsilon_t$ is not.