# Lecture 1: Stationary Time Series


## 1 Introduction

If a random variable X is indexed to time, usually denoted by t, the observations {X_t, t ∈ T} are called a time series, where T is a time index set (for example, T = Z, the set of integers).

Time series data are very common in empirical economic studies. Figure 1 plots some frequently used variables. The upper left panel plots quarterly GDP from 1947 to 2001; the upper right panel plots the residuals after linearly detrending the logarithm of GDP; the lower left panel plots monthly S&P 500 index data from 1990 to 2001; and the lower right panel plots the log differences of the monthly S&P 500. As you can see, these four series display quite different patterns over time. Investigating and modeling these different patterns is an important part of this course.

In this course, you will find that many of the techniques (estimation methods, inference procedures, etc.) you have learned in your general econometrics course are still applicable in time series analysis. However, time series data have some special features compared to cross-sectional data. For example, when working with cross-sectional data it usually makes sense to assume that the observations are independent of each other, whereas time series data are very likely to display some degree of dependence over time. More importantly, for a time series we can observe only one history of the realizations of the variable. For example, suppose you obtain a series of US weekly stock index data for the last 50 years. This sample can be said to be large in terms of sample size; however, it is still one data point, as it is only one of many possible realizations.

## 2 Autocovariance Functions

In modeling a finite number of random variables, a covariance matrix is usually computed to summarize the dependence between these variables. For a time series {X_t}, t = −∞, …, ∞, we need to model the dependence over an infinite number of random variables.
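The two transformations mentioned above (linear detrending of a log series, and log differencing) can be sketched in a few lines. This is a minimal illustration on a synthetic series standing in for the data in Figure 1; the actual GDP and S&P data are not reproduced here.

```python
import numpy as np

# Synthetic stand-in for a trending economic level series (hypothetical,
# not the actual GDP or S&P 500 data plotted in Figure 1).
rng = np.random.default_rng(0)
n = 200
level = 100.0 * np.exp(0.01 * np.arange(n) + 0.02 * rng.standard_normal(n).cumsum())

log_level = np.log(level)

# Linear detrending: regress log(level) on a constant and a time trend,
# then keep the residuals (as in the upper-right panel of Figure 1).
t = np.arange(n, dtype=float)
X = np.column_stack([np.ones(n), t])
beta, *_ = np.linalg.lstsq(X, log_level, rcond=None)
detrended = log_level - X @ beta

# Log differences: log(X_t) - log(X_{t-1}), which approximates the
# period-to-period growth rate or return (lower-right panel of Figure 1).
returns = np.diff(log_level)
```

By construction, OLS residuals from a regression that includes a constant have sample mean zero, and differencing shortens the series by one observation.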
The autocovariance and autocorrelation functions provide us a tool for this purpose.

Definition 1 (Autocovariance function). The autocovariance function of a time series {X_t} with Var(X_t) < ∞ is defined by

γ_X(s, t) = Cov(X_s, X_t) = E[(X_s − EX_s)(X_t − EX_t)].

Example 1 (Moving average process). Let ε_t ~ i.i.d.(0, 1), and

X_t = ε_t + 0.5 ε_{t−1};

Copyright 2002–2006 by Ling Hu.

[Figure 1: Plots of some economic variables. Panels: quarterly GDP (1947–2001); detrended log(GDP); monthly S&P 500 index (1990–2001); monthly S&P 500 index returns.]
then E(X_t) = 0 and γ_X(s, t) = E(X_s X_t). Let s ≤ t. When s = t,

γ_X(t, t) = E(X_t^2) = E[(ε_t + 0.5 ε_{t−1})^2] = 1 + 0.25 = 1.25;

when t = s + 1,

γ_X(t, t + 1) = E[(ε_t + 0.5 ε_{t−1})(ε_{t+1} + 0.5 ε_t)] = 0.5;

and when t − s > 1, γ_X(s, t) = 0, since X_s and X_t then share no common ε terms.
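These theoretical values can be checked by simulation. The sketch below draws a long realization of the MA(1) process X_t = ε_t + 0.5 ε_{t−1} with standard normal innovations and compares sample autocovariances at lags 0, 1, and 2 with γ(0) = 1.25, γ(1) = 0.5, and γ(2) = 0 (the helper `sample_autocov` is our own, not from any library).

```python
import numpy as np

# Simulate X_t = e_t + 0.5 * e_{t-1} with e_t i.i.d. N(0, 1).
rng = np.random.default_rng(42)
n = 200_000
e = rng.standard_normal(n + 1)
x = e[1:] + 0.5 * e[:-1]

def sample_autocov(x, h):
    """Sample autocovariance at lag h: mean of (x_t - xbar)(x_{t+h} - xbar)."""
    xc = x - x.mean()
    return float((xc[: len(x) - h] * xc[h:]).mean())

gamma0 = sample_autocov(x, 0)  # theory: 1 + 0.25 = 1.25
gamma1 = sample_autocov(x, 1)  # theory: 0.5
gamma2 = sample_autocov(x, 2)  # theory: 0 (no shared innovations)
```

With 200,000 observations the Monte Carlo error is small, so each sample autocovariance should land within a few hundredths of its theoretical value.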

