Lecture 2: ARMA Models*

1 ARMA Process

As we have remarked, dependence is very common in time series observations. To model this dependence, we start with univariate ARMA models. To motivate the model, we can follow two lines of thinking.

First, for a series $x_t$, we can model the level of its current observation as depending on the level of its lagged observations. For example, if we observe a high GDP realization this quarter, we would expect GDP in the next few quarters to be good as well. This line of thinking can be represented by an AR model. The AR(1) (autoregressive of order one) can be written as
$$x_t = \phi x_{t-1} + \epsilon_t,$$
where $\epsilon_t \sim WN(0, \sigma^2)$, an assumption we maintain throughout this lecture. Similarly, the AR($p$) (autoregressive of order $p$) can be written as
$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \ldots + \phi_p x_{t-p} + \epsilon_t.$$

In the second line of thinking, we can model the observation of a random variable at time $t$ as affected not only by the shock at time $t$, but also by the shocks that took place before time $t$. For example, if we observe a negative shock to the economy, say a catastrophic earthquake, then we would expect this negative effect to affect the economy not only at the time it takes place, but also in the near future. This line of thinking can be represented by an MA model. The MA(1) (moving average of order one) and MA($q$) (moving average of order $q$) can be written as
$$x_t = \epsilon_t + \theta \epsilon_{t-1}$$
and
$$x_t = \epsilon_t + \theta_1 \epsilon_{t-1} + \ldots + \theta_q \epsilon_{t-q}.$$

If we combine these two models, we get the general ARMA($p,q$) model,
$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \ldots + \phi_p x_{t-p} + \epsilon_t + \theta_1 \epsilon_{t-1} + \ldots + \theta_q \epsilon_{t-q}.$$

The ARMA model provides one of the basic tools in time series modeling. In the next few sections, we will discuss how to draw inferences using a univariate ARMA model.

* Copyright 2002–2006 by Ling Hu.

2 Lag Operators

Lag operators enable us to present an ARMA model in a much more concise way.
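The AR, MA, and ARMA recursions defined in Section 1 can be checked directly by simulation. The sketch below is my own illustration, not part of the original notes: it simulates a general ARMA($p,q$) process from white-noise shocks and verifies that an AR(1) with $\phi = 0.8$ (an illustrative value) produces a sample lag-1 autocorrelation near $\phi$, as the model implies.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_arma(phi, theta, n, sigma=1.0, burn=200):
    """Simulate x_t = sum_i phi_i x_{t-i} + eps_t + sum_j theta_j eps_{t-j},
    with eps_t ~ WN(0, sigma^2). `burn` initial draws are discarded so the
    start-up transient (x initialized at zero) fades before sampling."""
    p, q = len(phi), len(theta)
    eps = rng.normal(0.0, sigma, n + burn)
    x = np.zeros(n + burn)
    for t in range(max(p, q, 1), n + burn):
        ar = sum(phi[i] * x[t - 1 - i] for i in range(p))
        ma = sum(theta[j] * eps[t - 1 - j] for j in range(q))
        x[t] = ar + eps[t] + ma
    return x[burn:]

# AR(1) with phi = 0.8: the sample lag-1 autocorrelation should be close to 0.8.
x = simulate_arma(phi=[0.8], theta=[], n=5000)
r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
```

Setting `theta=[0.5]` instead would give an ARMA(1,1); the same loop covers the pure MA case with `phi=[]`.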
Applying the lag operator (denoted $L$) once moves the index back one time unit, and applying it $k$ times moves the index back $k$ units:
$$L x_t = x_{t-1}, \quad L^2 x_t = x_{t-2}, \quad \ldots, \quad L^k x_t = x_{t-k}.$$
The lag operator is distributive over the addition operator, i.e.
$$L(x_t + y_t) = x_{t-1} + y_{t-1}.$$
Using lag operators, we can rewrite the ARMA models as:
$$\mathrm{AR}(1): \quad (1 - \phi L) x_t = \epsilon_t$$
$$\mathrm{AR}(p): \quad (1 - \phi_1 L - \phi_2 L^2 - \ldots - \phi_p L^p) x_t = \epsilon_t$$
$$\mathrm{MA}(1): \quad x_t = (1 + \theta L) \epsilon_t$$
$$\mathrm{MA}(q): \quad x_t = (1 + \theta_1 L + \theta_2 L^2 + \ldots + \theta_q L^q) \epsilon_t$$
Let $\phi = \phi_1$, $\theta = \theta_1$, and define the lag polynomials
$$\phi(L) = 1 - \phi_1 L - \phi_2 L^2 - \ldots - \phi_p L^p$$
$$\theta(L) = 1 + \theta_1 L + \theta_2 L^2 + \ldots + \theta_q L^q$$
With lag polynomials, we can rewrite an ARMA process in a more compact way:
$$\mathrm{AR}: \quad \phi(L) x_t = \epsilon_t$$
$$\mathrm{MA}: \quad x_t = \theta(L) \epsilon_t$$
$$\mathrm{ARMA}: \quad \phi(L) x_t = \theta(L) \epsilon_t$$

3 Invertibility

Given a time series probability model, we can usually find multiple ways to represent it. Which representation to choose depends on our problem. For example, to study impulse-response functions (Section 4), MA representations may be more convenient; while to estimate an ARMA...
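The role of the lag polynomials $\phi(L)$ and $\theta(L)$ can be illustrated numerically. The sketch below is my own addition with illustrative parameter values ($\phi = 0.8$, $\theta = 0.5$, not taken from the notes): it locates the roots of the two polynomials, which govern stationarity and invertibility, and expands $(1 - \phi L)^{-1}$ as a geometric series to recover the MA($\infty$) coefficients of a stationary AR(1).

```python
import numpy as np

# AR(1) polynomial phi(z) = 1 - 0.8 z. np.roots takes coefficients
# highest degree first, so phi(z) corresponds to [-0.8, 1.0].
phi = 0.8
ar_root = np.roots([-phi, 1.0])[0]  # root at z = 1/phi = 1.25

# Because the root lies outside the unit circle, 1/(1 - phi*L) has the
# convergent geometric expansion 1 + phi*L + phi^2*L^2 + ..., so the
# AR(1) has MA(infinity) coefficients psi_j = phi^j.
psi = phi ** np.arange(10)

# MA(1) polynomial theta(z) = 1 + 0.5 z, i.e. coefficients [0.5, 1.0].
# Its root z = -2 is also outside the unit circle, so this MA(1) is
# invertible: it admits an AR(infinity) representation.
theta = 0.5
ma_root = np.roots([theta, 1.0])[0]
```

With $\phi = 0.8$ the MA($\infty$) weights `psi` decay geometrically ($1, 0.8, 0.64, \ldots$), which is exactly the impulse-response pattern Section 4 studies.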
This note was uploaded on 07/17/2008 for the course ECON 840, taught by Professor De Jong during the Spring '08 term at Ohio State.