# Notes on the Book: Time Series Analysis: Forecasting and Control

Notes on the book *Time Series Analysis: Forecasting and Control* by George E. P. Box and Gwilym M. Jenkins

John L. Weatherwax ([email protected])

June 20, 2008

## Chapter 2 (The Autocorrelation Function and the Spectrum)

### Notes on the Text

#### Notes on positive definiteness and the autocovariance matrix

The book defines the autocovariance matrix $\Gamma_n$ of a stochastic process as

$$
\Gamma_n =
\begin{bmatrix}
\gamma_0 & \gamma_1 & \gamma_2 & \cdots & \gamma_{n-1} \\
\gamma_1 & \gamma_0 & \gamma_1 & \cdots & \gamma_{n-2} \\
\gamma_2 & \gamma_1 & \gamma_0 & \cdots & \gamma_{n-3} \\
\vdots & \vdots & \vdots & & \vdots \\
\gamma_{n-1} & \gamma_{n-2} & \gamma_{n-3} & \cdots & \gamma_0
\end{bmatrix}. \tag{1}
$$

Holding on to this definition for a moment, consider the derived time series $L_t$ given by

$$
L_t = l_1 z_t + l_2 z_{t-1} + \cdots + l_n z_{t-n+1}.
$$

We can compute the variance of this series using the definition $\operatorname{var}[L_t] = E[(L_t - \bar{L})^2]$. We first evaluate the mean of $L_t$:

$$
\bar{L} = E[l_1 z_t + l_2 z_{t-1} + \cdots + l_n z_{t-n+1}] = (l_1 + l_2 + \cdots + l_n)\,\mu,
$$

since $z_t$ is assumed stationary, so that $E[z_t] = \mu$ for all $t$. We then have

$$
L_t - \bar{L} = l_1 (z_t - \mu) + l_2 (z_{t-1} - \mu) + l_3 (z_{t-2} - \mu) + \cdots + l_n (z_{t-n+1} - \mu),
$$

so that when we square this expression we get

$$
(L_t - \bar{L})^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} l_i l_j \, (z_{t-(i-1)} - \mu)(z_{t-(j-1)} - \mu).
$$

Taking the expectation of both sides to compute the variance, and using $E[(z_{t-(i-1)} - \mu)(z_{t-(j-1)} - \mu)] = \gamma_{|i-j|}$, gives

$$
\operatorname{var}[L_t] = \sum_{i=1}^{n} \sum_{j=1}^{n} l_i l_j \, \gamma_{|i-j|}.
$$

The expression on the right-hand side is the same as the quadratic form

$$
\begin{bmatrix} l_1 & l_2 & l_3 & \cdots & l_n \end{bmatrix}
\begin{bmatrix}
\gamma_0 & \gamma_1 & \gamma_2 & \cdots & \gamma_{n-1} \\
\gamma_1 & \gamma_0 & \gamma_1 & \cdots & \gamma_{n-2} \\
\gamma_2 & \gamma_1 & \gamma_0 & \cdots & \gamma_{n-3} \\
\vdots & \vdots & \vdots & & \vdots \\
\gamma_{n-1} & \gamma_{n-2} & \gamma_{n-3} & \cdots & \gamma_0
\end{bmatrix}
\begin{bmatrix} l_1 \\ l_2 \\ l_3 \\ \vdots \\ l_n \end{bmatrix}. \tag{2}
$$
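As a quick numerical sanity check of the identity $\operatorname{var}[L_t] = \sum_{i,j} l_i l_j \gamma_{|i-j|}$, here is a minimal Python sketch. It is not from the book: the AR(1) process (with $\phi = 0.6$ and unit innovation variance, for which $\gamma_k = \phi^{|k|}/(1-\phi^2)$) and the weight vector `l` are illustrative choices.

```python
import random

random.seed(0)

phi, sigma2 = 0.6, 1.0  # AR(1) coefficient and innovation variance (illustrative)
# Theoretical autocovariances of a stationary AR(1): gamma_k = sigma^2 phi^|k| / (1 - phi^2).
gamma = lambda k: sigma2 * phi ** abs(k) / (1 - phi ** 2)

# Simulate a long stationary AR(1) series z_t = phi * z_{t-1} + eps_t.
T = 400_000
z = [0.0]
for _ in range(T):
    z.append(phi * z[-1] + random.gauss(0.0, sigma2 ** 0.5))
z = z[1000:]  # drop burn-in so the retained series is approximately stationary

# Derived series L_t = l_1 z_t + l_2 z_{t-1} + l_3 z_{t-2} (weights are arbitrary).
l = [0.5, -0.3, 0.2]
n = len(l)
L = [sum(l[i] * z[t - i] for i in range(n)) for t in range(n - 1, len(z))]

# Sample variance of L_t versus the quadratic form sum_{i,j} l_i l_j gamma_{|i-j|}.
mean_L = sum(L) / len(L)
var_L = sum((x - mean_L) ** 2 for x in L) / len(L)
quad = sum(l[i] * l[j] * gamma(i - j) for i in range(n) for j in range(n))
print(var_L, quad)  # the two should agree to within sampling error
```

The agreement is only approximate: the sample variance fluctuates around the theoretical quadratic form, with the error shrinking as the simulated series grows.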
Thus, since $\operatorname{var}[L_t] > 0$ (from its definition) for every nonzero choice of $l_1, l_2, l_3, \ldots, l_n$, the quadratic form in Equation 2 is positive for all nonzero vectors with components $l_1, l_2, l_3, \ldots, l_n$, and we have shown that the autocovariance matrix $\Gamma_n$ is positive definite. Since the autocorrelation matrix $P_n$ is a scaled version of $\Gamma_n$, it too is positive definite.

Given the fact that $P_n$ is positive definite, we can use standard properties of positive definite matrices to derive properties of the correlations $\rho_k$. Given a matrix $Q$ of size $n \times n$, we define the principal minors of $Q$ to be the determinants of smaller square matrices obtained from $Q$. These smaller submatrices are selected from $Q$ by choosing a set of indices from $1$ to $n$ representing the rows (and columns) we want to keep: if you view the selected indices as row indices of the original matrix $Q$, then the column indices we select must equal the row indices we select. As an example, if the matrix $Q$ is $6 \times 6$, we could construct one of its principal minors from the first, third, and sixth rows (and the same columns). If we denote the
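These facts can also be checked numerically: building $\Gamma_n$ from theoretical autocovariances, every leading principal minor should be positive and the quadratic form $l^\top \Gamma_n l$ should be positive for any nonzero $l$. A minimal Python sketch, again using illustrative AR(1) autocovariances (the choices $\phi = 0.6$ and $n = 4$ are arbitrary, not from the text):

```python
import random

random.seed(1)

phi = 0.6
# AR(1) autocovariances with unit innovation variance: gamma_k = phi^|k| / (1 - phi^2).
gamma = lambda k: phi ** abs(k) / (1 - phi ** 2)

n = 4
# Toeplitz autocovariance matrix Gamma_n, entry (i, j) = gamma_{|i-j|}.
Gamma = [[gamma(i - j) for j in range(n)] for i in range(n)]

def det(M):
    """Determinant by Gaussian elimination (pivots stay positive for a PD matrix)."""
    A = [row[:] for row in M]
    d = 1.0
    for k in range(len(A)):
        d *= A[k][k]
        for i in range(k + 1, len(A)):
            f = A[i][k] / A[k][k]
            for j in range(k, len(A)):
                A[i][j] -= f * A[k][j]
    return d

# Leading principal minors of a positive definite matrix are all positive.
minors = [det([row[:m] for row in Gamma[:m]]) for m in range(1, n + 1)]
print(minors)

# The quadratic form l^T Gamma l is positive for (almost surely) nonzero random l.
for _ in range(5):
    l = [random.uniform(-1.0, 1.0) for _ in range(n)]
    q = sum(l[i] * Gamma[i][j] * l[j] for i in range(n) for j in range(n))
    assert q > 0
```

This checks only the leading principal minors; for a symmetric matrix, positivity of all leading principal minors is already equivalent to positive definiteness (Sylvester's criterion).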
