EC3062 ECONOMETRICS

LINEAR STOCHASTIC MODELS

Let {x_{τ+1}, x_{τ+2}, ..., x_{τ+n}} denote n consecutive elements from a stochastic process. If their joint distribution does not depend on τ, regardless of the size of n, then the process is strictly stationary. Any two segments of equal length will have the same distribution, with

(1)    E(x_t) = μ < ∞ for all t,   and   C(x_{τ+t}, x_{τ+s}) = γ_{|t−s|}.

The condition on the covariances implies that the dispersion matrix of the vector [x_1, x_2, ..., x_n] is a bisymmetric Laurent matrix of the form

(2)    Γ = \begin{bmatrix} γ_0 & γ_1 & γ_2 & \cdots & γ_{n−1} \\ γ_1 & γ_0 & γ_1 & \cdots & γ_{n−2} \\ γ_2 & γ_1 & γ_0 & \cdots & γ_{n−3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ γ_{n−1} & γ_{n−2} & γ_{n−3} & \cdots & γ_0 \end{bmatrix},

wherein the generic element in the (i, j)th position is γ_{|i−j|} = C(x_i, x_j).

Moving-Average Processes

The qth-order moving-average MA(q) process is defined by

(3)    y(t) = μ_0 ε(t) + μ_1 ε(t−1) + ··· + μ_q ε(t−q),

where ε(t) = {ε_t; t = 0, ±1, ±2, ...} is a sequence of i.i.d. random variables with E{ε(t)} = 0 and V(ε_t) = σ_ε², defined on a doubly-infinite set of integers. We can set μ_0 = 1. The equation can also be written as y(t) = μ(L)ε(t), where μ(L) = μ_0 + μ_1 L + ··· + μ_q L^q is a polynomial in the lag operator L, for which L^j x(t) = x(t−j).

This process is stationary, since any two elements y_t and y_s are the same function of [ε_t, ε_{t−1}, ..., ε_{t−q}] and [ε_s, ε_{s−1}, ..., ε_{s−q}], which are identically distributed. If the roots of the polynomial equation μ(z) = μ_0 + μ_1 z + ··· + μ_q z^q = 0 lie outside the unit circle, then the process is invertible, such that μ^{−1}(L)y(t) = ε(t), which is an infinite-order autoregressive representation.

Example. Consider the first-order moving-average MA(1) process

(4)    y(t) = ε(t) − θε(t−1) = (1 − θL)ε(t).

Provided that |θ| < 1, this can be written in autoregressive form as

       ε(t) = \frac{1}{1 − θL} y(t) = y(t) + θy(t−1) + θ²y(t−2) + ··· .

Imagine that |θ| > 1 instead. Then, to obtain a convergent series, we have to write

       y(t+1) = ε(t+1) − θε(t) = −θ(1 − L^{−1}/θ)ε(t),

where L^{−1}ε(t) = ε(t+1). This gives

(7)    ε(t) = −\frac{θ^{−1}}{1 − L^{−1}/θ} y(t+1) = −\left\{ \frac{y(t+1)}{θ} + \frac{y(t+2)}{θ²} + ··· \right\}.

Normally, this would have no reasonable meaning.

The Autocovariances of a Moving-Average Process

Consider

(8)    γ_τ = E(y_t y_{t−τ}) = E\Big( \sum_i μ_i ε_{t−i} \sum_j μ_j ε_{t−τ−j} \Big) = \sum_i \sum_j μ_i μ_j E(ε_{t−i} ε_{t−τ−j}).

Since ε(t) is a sequence of independently and identically distributed random variables with zero expectations, it follows that

(9)    E(ε_{t−i} ε_{t−τ−j}) = 0 if i ≠ τ + j,   and   E(ε_{t−i} ε_{t−τ−j}) = σ_ε² if i = τ + j.

Therefore

(10)   γ_τ = σ_ε² \sum_j μ_j μ_{j+τ}.

Now let τ = 0, 1, ..., q. This gives

(11)   γ_0 = σ_ε²(μ_0² + μ_1² + ··· + μ_q²),
       γ_1 = σ_ε²(μ_0 μ_1 + μ_1 μ_2 + ··· + μ_{q−1} μ_q),
       ...
       γ_q = σ_ε² μ_0 μ_q.

Also, γ_τ = 0 for all τ > q.

The first-order moving-average process y(t) = ε(t) − θε(t−1) has the following autocovariances:

(12)   γ_0 = σ_ε²(1 + θ²),   γ_1 = −σ_ε² θ,   γ_τ = 0 if τ > 1.

For a vector y = [y_0, y_1, ..., y_{T−1}] of T consecutive elements from a first-order moving-average process, the dispersion matrix is

(13)   D(y) = σ_ε² \begin{bmatrix} 1+θ² & −θ & 0 & \cdots & 0 \\ −θ & 1+θ² & −θ & \cdots & 0 \\ 0 & −θ & 1+θ² & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1+θ² \end{bmatrix}.

In general, the dispersion matrix of a qth-order moving-average process has q subdiagonal and q supradiagonal bands of nonzero elements and zero elements elsewhere.
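The mapping in (10) from the coefficients μ_0, ..., μ_q to the autocovariances γ_0, ..., γ_q is easily checked numerically. The following is a minimal sketch in Python; the function name and the use of numpy are choices of this illustration rather than part of the notes.

```python
import numpy as np

def ma_autocovariances(mu, sigma2, max_lag):
    """Autocovariances of the MA(q) process y(t) = mu_0*e(t) + ... + mu_q*e(t-q),
    using gamma_tau = sigma2 * sum_j mu_j * mu_{j+tau}, which vanishes for tau > q."""
    q = len(mu) - 1
    gamma = np.zeros(max_lag + 1)
    for tau in range(min(q, max_lag) + 1):
        gamma[tau] = sigma2 * np.dot(mu[: q + 1 - tau], mu[tau:])
    return gamma

# MA(1) check against (12): gamma_0 = sigma2*(1 + theta^2), gamma_1 = -sigma2*theta, zero thereafter
theta = 0.5
print(ma_autocovariances(np.array([1.0, -theta]), sigma2=1.0, max_lag=3))
```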
The empirical autocovariance of lag τ ≤ T − 1 is

       c_τ = \frac{1}{T} \sum_{t=0}^{T−1−τ} (y_t − \bar{y})(y_{t+τ} − \bar{y}),   with   \bar{y} = \frac{1}{T} \sum_{t=0}^{T−1} y_t.

Notice that c_{T−1} = T^{−1}(y_0 − \bar{y})(y_{T−1} − \bar{y}) comprises only the first and the last elements of the sample.

[Figure 1. The graph of 125 observations on a simulated series generated by an MA(2) process y(t) = (1 + 1.25L + 0.80L²)ε(t).]

[Figure 2. The theoretical autocorrelations of the MA(2) process y(t) = (1 + 1.25L + 0.80L²)ε(t) (the solid bars) together with their empirical counterparts, calculated from a simulated series of 125 values.]

Autoregressive Processes

The pth-order autoregressive AR(p) process is defined by

(17)   α_0 y(t) + α_1 y(t−1) + ··· + α_p y(t−p) = ε(t).

Setting α_0 = 1 identifies y(t) as the output. This can be written as α(L)y(t) = ε(t), where α(L) = α_0 + α_1 L + ··· + α_p L^p. For the process to be stationary, the roots of the equation α(z) = α_0 + α_1 z + ··· + α_p z^p = 0 must lie outside the unit circle. This condition enables us to write the autoregressive process as an infinite-order moving-average process in the form of y(t) = α^{−1}(L)ε(t).

Example. Consider the AR(1) process defined by

(18)   ε(t) = y(t) − φy(t−1) = (1 − φL)y(t).

Provided that the process is stationary with |φ| < 1, it can be represented in moving-average form as

(19)   y(t) = \frac{1}{1 − φL} ε(t) = ε(t) + φε(t−1) + φ²ε(t−2) + ··· .

The autocovariances of the AR(1) process can be found in the manner of an MA process. Thus

(20)   γ_τ = E(y_t y_{t−τ}) = E\Big( \sum_i φ^i ε_{t−i} \sum_j φ^j ε_{t−τ−j} \Big) = \sum_i \sum_j φ^i φ^j E(ε_{t−i} ε_{t−τ−j}).

Since, as in (9), E(ε_{t−i} ε_{t−τ−j}) = 0 if i ≠ τ + j and E(ε_{t−i} ε_{t−τ−j}) = σ_ε² if i = τ + j, it follows that

(21)   γ_τ = σ_ε² \sum_j φ^j φ^{j+τ} = \frac{σ_ε² φ^τ}{1 − φ²}.

For a vector y = [y_0, y_1, ..., y_{T−1}] of T consecutive elements from a first-order autoregressive process, the dispersion matrix has the form

(22)   D(y) = \frac{σ_ε²}{1 − φ²} \begin{bmatrix} 1 & φ & φ² & \cdots & φ^{T−1} \\ φ & 1 & φ & \cdots & φ^{T−2} \\ φ² & φ & 1 & \cdots & φ^{T−3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ φ^{T−1} & φ^{T−2} & φ^{T−3} & \cdots & 1 \end{bmatrix}.

The Autocovariances of an Autoregressive Process

Multiplying \sum_i α_i y_{t−i} = ε_t by y_{t−τ} and taking expectations gives

(24)   \sum_i α_i E(y_{t−i} y_{t−τ}) = E(ε_t y_{t−τ}).

Taking account of the normalisation α_0 = 1, we find that

(25)   E(ε_t y_{t−τ}) = σ_ε² if τ = 0,   and   E(ε_t y_{t−τ}) = 0 if τ > 0.

Therefore, on setting E(y_{t−i} y_{t−τ}) = γ_{τ−i}, equation (24) gives

(26)   \sum_i α_i γ_{τ−i} = σ_ε² if τ = 0,   and   \sum_i α_i γ_{τ−i} = 0 if τ > 0.

The second equation enables us to generate the sequence {γ_p, γ_{p+1}, ...} given p starting values γ_0, γ_1, ..., γ_{p−1}. According to (26), there is

       α_0 γ_τ + α_1 γ_{τ−1} + ··· + α_p γ_{τ−p} = 0   for τ > 0.

Thus, given γ_{τ−1}, γ_{τ−2}, ..., γ_{τ−p} for τ ≥ p, we can find

       γ_τ = −α_1 γ_{τ−1} − α_2 γ_{τ−2} − ··· − α_p γ_{τ−p}.

By letting τ = 0, 1, ..., p in (26), we generate a set of p + 1 equations, which can be arrayed in matrix form as follows:

(27)   \begin{bmatrix} γ_0 & γ_1 & γ_2 & \cdots & γ_p \\ γ_1 & γ_0 & γ_1 & \cdots & γ_{p−1} \\ γ_2 & γ_1 & γ_0 & \cdots & γ_{p−2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ γ_p & γ_{p−1} & γ_{p−2} & \cdots & γ_0 \end{bmatrix} \begin{bmatrix} 1 \\ α_1 \\ α_2 \\ \vdots \\ α_p \end{bmatrix} = \begin{bmatrix} σ_ε² \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.

These are the Yule–Walker equations, which can be used for generating the values γ_0, γ_1, ..., γ_p from the values α_1, ..., α_p, σ_ε², or vice versa.
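Both uses of the Yule–Walker equations are easy to mechanise. The sketch below is an illustration of the algebra above, not a routine from the notes; the function names are hypothetical and numpy is assumed. It builds the folded coefficient matrix implied by (26)–(27) and solves the system in either direction.

```python
import numpy as np

def gammas_from_alphas(alpha, sigma2):
    """Given AR coefficients [1, a1, ..., ap] and the innovation variance,
    solve the Yule-Walker equations (27) for gamma_0, ..., gamma_p."""
    p = len(alpha) - 1
    A = np.zeros((p + 1, p + 1))
    for tau in range(p + 1):
        for i in range(p + 1):
            A[tau, abs(tau - i)] += alpha[i]   # gamma_{|tau-i|} picks up alpha_i (the "folding")
    rhs = np.zeros(p + 1)
    rhs[0] = sigma2
    return np.linalg.solve(A, rhs)

def alphas_from_gammas(gamma):
    """Given gamma_0, ..., gamma_p, recover [a1, ..., ap] and sigma^2 from the same equations."""
    p = len(gamma) - 1
    R = np.array([[gamma[abs(tau - i)] for i in range(1, p + 1)] for tau in range(1, p + 1)])
    a = np.linalg.solve(R, -np.array(gamma[1:]))
    sigma2 = gamma[0] + np.dot(a, gamma[1:])
    return a, sigma2

# The AR(2) process used in the figures, (1 - 0.273L + 0.81L^2)y(t) = e(t):
print(gammas_from_alphas([1.0, -0.273, 0.81], 1.0))
```

From the starting values γ_0, γ_1, γ_2 returned for the AR(2) case, the later autocovariances follow by the recursion γ_τ = −α_1 γ_{τ−1} − α_2 γ_{τ−2}.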
Example. For an example of the two uses of the Yule–Walker equations, consider the AR(2) process. In this case,

(28)   \begin{bmatrix} γ_0 & γ_1 & γ_2 \\ γ_1 & γ_0 & γ_1 \\ γ_2 & γ_1 & γ_0 \end{bmatrix} \begin{bmatrix} α_0 \\ α_1 \\ α_2 \end{bmatrix} = \begin{bmatrix} α_2 & α_1 & α_0 & 0 & 0 \\ 0 & α_2 & α_1 & α_0 & 0 \\ 0 & 0 & α_2 & α_1 & α_0 \end{bmatrix} \begin{bmatrix} γ_2 \\ γ_1 \\ γ_0 \\ γ_1 \\ γ_2 \end{bmatrix} = \begin{bmatrix} α_0 & α_1 & α_2 \\ α_1 & α_0+α_2 & 0 \\ α_2 & α_1 & α_0 \end{bmatrix} \begin{bmatrix} γ_0 \\ γ_1 \\ γ_2 \end{bmatrix} = \begin{bmatrix} σ_ε² \\ 0 \\ 0 \end{bmatrix}.

Given α_0 = 1 and the values for γ_0, γ_1, γ_2, we can find σ_ε² and α_1, α_2. Conversely, given α_0, α_1, α_2 and σ_ε², we can find γ_0, γ_1, γ_2. Notice how the matrix following the first equality is folded across the axis which divides it vertically to give the matrix which follows the second equality.

[Figure 3. The graph of 125 observations on a simulated series generated by an AR(2) process (1 − 0.273L + 0.81L²)y(t) = ε(t).]

[Figure 4. The theoretical autocorrelations of the AR(2) process (1 − 0.273L + 0.81L²)y(t) = ε(t) (the solid bars) together with their empirical counterparts, calculated from a simulated series of 125 values.]

The Partial Autocorrelation Function

Let α_r(r) be the coefficient associated with y(t−r) in an autoregressive process of order r whose parameters correspond to the autocovariances γ_0, γ_1, ..., γ_r. Then the sequence {α_r(r); r = 1, 2, ...}, of which the index corresponds to models of increasing orders, constitutes the partial autocorrelation function. In effect, α_r(r) indicates the role, in explaining the variance of y(t), that is due to y(t−r) when y(t−1), ..., y(t−r+1) are also taken into account.

The sample partial autocorrelation p_τ at lag τ is the correlation between the two sets of residuals obtained from regressing the elements y_t and y_{t−τ} on the set of intervening values y_{t−1}, y_{t−2}, ..., y_{t−τ+1}. The partial autocorrelation measures the dependence between y_t and y_{t−τ} after the effect of the intervening values has been removed.

The theoretical partial autocorrelation function of an AR(p) process is zero-valued for all τ > p. Likewise, all elements of the sample partial autocorrelation function are expected to be close to zero for lags greater than p.

[Figure 5. The theoretical partial autocorrelations of the AR(2) process (1 − 0.273L + 0.81L²)y(t) = ε(t) together with their empirical counterparts, calculated from a simulated series of 125 values.]

[Figure 6. The theoretical partial autocorrelations of the MA(2) process y(t) = (1 + 1.25L + 0.80L²)ε(t) together with their empirical counterparts, calculated from a simulated series of 125 values.]

Autoregressive Moving-Average Processes

The autoregressive moving-average ARMA(p, q) process of orders p and q is defined by

(36)   α_0 y(t) + α_1 y(t−1) + ··· + α_p y(t−p) = μ_0 ε(t) + μ_1 ε(t−1) + ··· + μ_q ε(t−q).

The equation is normalised by setting α_0 = 1 and μ_0 = 1, and it can be denoted by α(L)y(t) = μ(L)ε(t). Provided that the roots of the equation α(z) = 0 lie outside the unit circle, the process can be described as an infinite-order MA process: y(t) = α^{−1}(L)μ(L)ε(t). Conversely, provided that the roots of the equation μ(z) = 0 lie outside the unit circle, the process can be described as an infinite-order AR process: μ^{−1}(L)α(L)y(t) = ε(t).
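Series such as those shown in Figures 1, 3 and 7 can be generated directly from the difference equation α(L)y(t) = μ(L)ε(t). The following is a minimal sketch of such a simulation; the function name, the burn-in length and the random seed are choices of this illustration, not taken from the notes.

```python
import numpy as np

def simulate_arma(alpha, mu, n, sigma=1.0, burn=200, seed=0):
    """Simulate n values of alpha(L)y(t) = mu(L)e(t), with alpha = [1, a1, ..., ap] and
    mu = [1, m1, ..., mq]; a burn-in period lets the effect of the zero start-up values die away."""
    rng = np.random.default_rng(seed)
    p, q = len(alpha) - 1, len(mu) - 1
    e = sigma * rng.standard_normal(n + burn + q)
    y = np.zeros(n + burn + q)
    for t in range(max(p, q), n + burn + q):
        ma_part = sum(mu[j] * e[t - j] for j in range(q + 1))
        ar_part = sum(alpha[i] * y[t - i] for i in range(1, p + 1))
        y[t] = ma_part - ar_part        # alpha_0 = 1
    return y[-n:]

# e.g. the ARMA(2,1) process of Figure 7: (1 - 0.273L + 0.81L^2)y(t) = (1 + 0.9L)e(t)
series = simulate_arma([1.0, -0.273, 0.81], [1.0, 0.9], n=125)
```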
The Autocovariances of an ARMA Process

Multiplying \sum_i α_i y_{t−i} = \sum_i μ_i ε_{t−i} by y_{t−τ} and taking expectations gives

(38)   \sum_i α_i γ_{τ−i} = \sum_i μ_i δ_{i−τ},

where γ_{τ−i} = E(y_{t−τ} y_{t−i}) and δ_{i−τ} = E(y_{t−τ} ε_{t−i}). Since ε_{t−i} is uncorrelated with y_{t−τ} whenever it is subsequent to the latter, it follows that δ_{i−τ} = 0 if τ > i. Since the index i on the RHS of equation (38) runs from 0 to q, it follows that

(39)   \sum_i α_i γ_{τ−i} = 0   if τ > q.

Given the q + 1 values δ_0, δ_1, ..., δ_q, and p initial values γ_0, γ_1, ..., γ_{p−1} for the autocovariances, equation (38) can be solved recursively to obtain the subsequent values {γ_p, γ_{p+1}, ...}.

To find the requisite values δ_0, δ_1, ..., δ_q, consider multiplying the equation \sum_i α_i y_{t−i} = \sum_i μ_i ε_{t−i} by ε_{t−τ} and taking expectations. This gives

(40)   \sum_i α_i δ_{τ−i} = μ_τ σ_ε²,

where δ_{τ−i} = E(y_{t−i} ε_{t−τ}). The equation may be rewritten as

(41)   δ_τ = \frac{1}{α_0}\Big( μ_τ σ_ε² − \sum_{i=1} α_i δ_{τ−i} \Big),

wherein δ_{τ−i} = 0 when τ − i < 0. By setting τ = 0, 1, ..., q, we can generate recursively the required values δ_0, δ_1, ..., δ_q.

Example. Consider the ARMA(2, 2) model, which gives the equation

(42)   α_0 y_t + α_1 y_{t−1} + α_2 y_{t−2} = μ_0 ε_t + μ_1 ε_{t−1} + μ_2 ε_{t−2}.

Multiplying by y_t, y_{t−1} and y_{t−2} and taking expectations gives

(43)   \begin{bmatrix} γ_0 & γ_1 & γ_2 \\ γ_1 & γ_0 & γ_1 \\ γ_2 & γ_1 & γ_0 \end{bmatrix} \begin{bmatrix} α_0 \\ α_1 \\ α_2 \end{bmatrix} = \begin{bmatrix} δ_0 & δ_1 & δ_2 \\ 0 & δ_0 & δ_1 \\ 0 & 0 & δ_0 \end{bmatrix} \begin{bmatrix} μ_0 \\ μ_1 \\ μ_2 \end{bmatrix}.

Multiplying by ε_t, ε_{t−1} and ε_{t−2} and taking expectations gives

(44)   \begin{bmatrix} δ_0 & 0 & 0 \\ δ_1 & δ_0 & 0 \\ δ_2 & δ_1 & δ_0 \end{bmatrix} \begin{bmatrix} α_0 \\ α_1 \\ α_2 \end{bmatrix} = \begin{bmatrix} σ_ε² & 0 & 0 \\ 0 & σ_ε² & 0 \\ 0 & 0 & σ_ε² \end{bmatrix} \begin{bmatrix} μ_0 \\ μ_1 \\ μ_2 \end{bmatrix}.

When the latter equations are written as

(45)   \begin{bmatrix} α_0 & 0 & 0 \\ α_1 & α_0 & 0 \\ α_2 & α_1 & α_0 \end{bmatrix} \begin{bmatrix} δ_0 \\ δ_1 \\ δ_2 \end{bmatrix} = σ_ε² \begin{bmatrix} μ_0 \\ μ_1 \\ μ_2 \end{bmatrix},

they can be solved recursively for δ_0, δ_1 and δ_2 on the assumption that the values of α_0, α_1, α_2 and σ_ε² are known. Notice that, when we adopt the normalisation α_0 = μ_0 = 1, we get δ_0 = σ_ε².

When the equations (43) are rewritten as

(46)   \begin{bmatrix} α_0 & α_1 & α_2 \\ α_1 & α_0+α_2 & 0 \\ α_2 & α_1 & α_0 \end{bmatrix} \begin{bmatrix} γ_0 \\ γ_1 \\ γ_2 \end{bmatrix} = \begin{bmatrix} μ_0 & μ_1 & μ_2 \\ μ_1 & μ_2 & 0 \\ μ_2 & 0 & 0 \end{bmatrix} \begin{bmatrix} δ_0 \\ δ_1 \\ δ_2 \end{bmatrix},

they can be solved for γ_0, γ_1 and γ_2. Thus the starting values are obtained, which enable the equation

(47)   α_0 γ_τ + α_1 γ_{τ−1} + α_2 γ_{τ−2} = 0,   τ > 2,

to be solved recursively to generate the succeeding values {γ_3, γ_4, ...} of the autocovariances.

[Figure 7. The graph of 125 observations on a simulated series generated by an ARMA(2, 1) process (1 − 0.273L + 0.81L²)y(t) = (1 + 0.9L)ε(t).]

[Figure 8. The theoretical autocorrelations of the ARMA(2, 1) process (1 − 0.273L + 0.81L²)y(t) = (1 + 0.9L)ε(t) together with their empirical counterparts, calculated from a simulated series of 125 values.]

[Figure 9. The theoretical partial autocorrelations of the ARMA(2, 1) process (1 − 0.273L + 0.81L²)y(t) = (1 + 0.9L)ε(t) together with their empirical counterparts, calculated from a simulated series of 125 values.]
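The two-step scheme of (41) and (43)–(47) extends to general orders p and q. The sketch below is an illustration under that reading of the recursions, with a hypothetical function name and numpy assumed: it computes δ_0, ..., δ_q from (41), solves the τ = 0, ..., max(p, q) instances of (38) for the starting autocovariances, and then applies the recursion (39) for the higher lags.

```python
import numpy as np

def arma_autocovariances(alpha, mu, sigma2, max_lag):
    """Autocovariances of alpha(L)y(t) = mu(L)e(t), with alpha = [1, a1, ..., ap]
    and mu = [1, m1, ..., mq]."""
    p, q = len(alpha) - 1, len(mu) - 1
    # Step 1: delta_0, ..., delta_q from (41), with delta_k = 0 for k < 0 and alpha_0 = 1.
    delta = np.zeros(q + 1)
    for tau in range(q + 1):
        delta[tau] = mu[tau] * sigma2 - sum(alpha[i] * delta[tau - i]
                                            for i in range(1, min(p, tau) + 1))
    # Step 2: starting values from sum_i alpha_i gamma_{|tau-i|} = sum_{i>=tau} mu_i delta_{i-tau}.
    m = max(p, q)
    A = np.zeros((m + 1, m + 1))
    b = np.zeros(m + 1)
    for tau in range(m + 1):
        for i in range(p + 1):
            A[tau, abs(tau - i)] += alpha[i]
        b[tau] = sum(mu[i] * delta[i - tau] for i in range(tau, q + 1))
    gamma = list(np.linalg.solve(A, b))
    # Step 3: gamma_tau = -(a1*gamma_{tau-1} + ... + ap*gamma_{tau-p}) for tau > max(p, q).
    for tau in range(m + 1, max_lag + 1):
        gamma.append(-sum(alpha[i] * gamma[tau - i] for i in range(1, p + 1)))
    return np.array(gamma[: max_lag + 1])

# e.g. the ARMA(2,1) process of Figures 7-9
print(arma_autocovariances([1.0, -0.273, 0.81], [1.0, 0.9], sigma2=1.0, max_lag=5))
```

For an MA(1) or AR(1) input the routine reproduces the closed forms in (12) and (21), which provides a simple check on the recursions.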