Lecture 1: Stationary Time Series*

1 Introduction
If a random variable $X$ is indexed to time, usually denoted by $t$, the observations $\{X_t, t \in \mathbb{T}\}$ are called a time series, where $\mathbb{T}$ is a time index set (for example, $\mathbb{T} = \mathbb{Z}$, the set of integers).
Time series data are very common in empirical economic studies. Figure 1 plots some frequently used variables. The upper left figure plots quarterly GDP from 1947 to 2001; the upper right figure plots the residuals after linearly detrending the logarithm of GDP; the lower left figure plots monthly S&P 500 index data from 1990 to 2001; and the lower right figure plots the log difference of the monthly S&P 500. As you can see, these four series display quite different patterns over time. Investigating and modeling these different patterns is an important part of this course.
In this course, you will find that many of the techniques (estimation methods, inference procedures, etc.) you have learned in your general econometrics course are still applicable in time series analysis. However, time series data have some special features compared to cross-sectional data. For example, when working with cross-sectional data, it usually makes sense to assume that the observations are independent of each other, whereas time series data are very likely to display some degree of dependence over time. More importantly, for time series data we can observe only one history of the realizations of the variable. For example, suppose you obtain a series of US weekly stock index data for the last 50 years. This sample can be said to be large in terms of sample size; however, it is still only one data point, as it is only one of the many possible realizations.
2 Autocovariance Functions
In modeling a finite number of random variables, a covariance matrix is usually computed to summarize the dependence between these variables. For a time series $\{X_t\}_{t=-\infty}^{\infty}$, we need to model the dependence over an infinite number of random variables. The autocovariance and autocorrelation functions provide us with a tool for this purpose.
Definition 1 (Autocovariance function). The autocovariance function of a time series $\{X_t\}$ with $\mathrm{Var}(X_t) < \infty$ is defined by
$$\gamma_X(s,t) = \mathrm{Cov}(X_s, X_t) = \mathrm{E}[(X_s - \mathrm{E}X_s)(X_t - \mathrm{E}X_t)].$$
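For readers who want to connect the population quantity above to data, the sketch below (an addition, not part of the original notes) computes the usual sample analogue $\hat{\gamma}(h) = n^{-1}\sum_{t=1}^{n-h}(x_{t+h} - \bar{x})(x_t - \bar{x})$ for an observed series; dividing by $n$ rather than $n-h$ is a common convention assumed here, and NumPy is the only dependency.

```python
import numpy as np

def sample_autocovariance(x, h):
    """Sample autocovariance of the observed series x at lag h.

    Uses the convention gamma_hat(h) = (1/n) * sum_{t=1}^{n-h}
    (x_{t+h} - xbar) * (x_t - xbar), dividing by n rather than n - h.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    h = abs(int(h))                  # the autocovariance is symmetric in the lag
    if h >= n:
        return 0.0                   # no overlapping observations at this lag
    xbar = x.mean()
    return np.sum((x[h:] - xbar) * (x[: n - h] - xbar)) / n
```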
Example 1 (Moving average process). Let $\epsilon_t \sim \mathrm{i.i.d.}(0, 1)$, and
$$X_t = \epsilon_t + 0.5\,\epsilon_{t-1}.$$
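The autocovariances of this process can be checked by simulation. The short sketch below (an illustration added here, not from the original notes) draws standard normal innovations, which satisfy the i.i.d.$(0,1)$ assumption, and compares the sample autocovariances with the values $\gamma_X(0) = 1 + 0.5^2 = 1.25$, $\gamma_X(\pm 1) = 0.5$, and $\gamma_X(h) = 0$ for $|h| > 1$ that follow directly from Definition 1.

```python
import numpy as np

# Simulate X_t = eps_t + 0.5 * eps_{t-1} with eps_t drawn i.i.d. from N(0, 1)
# (the standard normal is one choice consistent with the i.i.d.(0, 1) assumption).
rng = np.random.default_rng(0)       # arbitrary seed, for reproducibility
n = 100_000
eps = rng.standard_normal(n + 1)
x = eps[1:] + 0.5 * eps[:-1]

xbar = x.mean()
for h in (0, 1, 2):
    gamma_hat = np.sum((x[h:] - xbar) * (x[: n - h] - xbar)) / n
    print(h, gamma_hat)              # roughly 1.25, 0.5, and 0
```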
*Copyright 2002-2006 by Ling Hu.
