Notes on the Book: Time Series Analysis: Forecasting and Control by George E. P. Box and Gwilym M. Jenkins

John L. Weatherwax
[email protected]

June 20, 2008

Chapter 2 (The Autocorrelation Function and the Spectrum)

Notes on the Text

Notes on positive definiteness and the autocovariance matrix

The book defined the autocovariance matrix $\Gamma_n$ of a stochastic process as
$$
\Gamma_n =
\begin{bmatrix}
\gamma_0 & \gamma_1 & \gamma_2 & \cdots & \gamma_{n-1} \\
\gamma_1 & \gamma_0 & \gamma_1 & \cdots & \gamma_{n-2} \\
\gamma_2 & \gamma_1 & \gamma_0 & \cdots & \gamma_{n-3} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\gamma_{n-1} & \gamma_{n-2} & \gamma_{n-3} & \cdots & \gamma_0
\end{bmatrix}. \tag{1}
$$
Holding that definition for a moment, consider the derived time series $L_t$ given by
$$
L_t = l_1 z_t + l_2 z_{t-1} + \cdots + l_n z_{t-n+1}.
$$
We can compute the variance of this series using the definition $\mathrm{var}[L_t] = E[(L_t - \bar{L})^2]$. We first evaluate the mean of $L_t$:
$$
\bar{L} = E[l_1 z_t + l_2 z_{t-1} + \cdots + l_n z_{t-n+1}] = (l_1 + l_2 + \cdots + l_n)\mu,
$$
since $z_t$ is assumed stationary, so that $E[z_t] = \mu$ for all $t$. We then have
$$
L_t - \bar{L} = l_1(z_t - \mu) + l_2(z_{t-1} - \mu) + l_3(z_{t-2} - \mu) + \cdots + l_n(z_{t-n+1} - \mu),
$$
so that when we square this expression we get
$$
(L_t - \bar{L})^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} l_i l_j \,(z_{t-(i-1)} - \mu)(z_{t-(j-1)} - \mu).
$$
Taking the expectation of both sides to compute the variance and using
$$
E[(z_{t-(i-1)} - \mu)(z_{t-(j-1)} - \mu)] = \gamma_{|i-j|}
$$
gives
$$
\mathrm{var}[L_t] = \sum_{i=1}^{n} \sum_{j=1}^{n} l_i l_j \,\gamma_{|i-j|}.
$$
The expression on the right-hand side is the same as the quadratic form
$$
\begin{bmatrix} l_1 & l_2 & l_3 & \cdots & l_n \end{bmatrix}
\begin{bmatrix}
\gamma_0 & \gamma_1 & \gamma_2 & \cdots & \gamma_{n-1} \\
\gamma_1 & \gamma_0 & \gamma_1 & \cdots & \gamma_{n-2} \\
\gamma_2 & \gamma_1 & \gamma_0 & \cdots & \gamma_{n-3} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\gamma_{n-1} & \gamma_{n-2} & \gamma_{n-3} & \cdots & \gamma_0
\end{bmatrix}
\begin{bmatrix} l_1 \\ l_2 \\ l_3 \\ \vdots \\ l_n \end{bmatrix}. \tag{2}
$$
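The identity $\mathrm{var}[L_t] = l^{T}\Gamma_n l$ can be checked numerically. A minimal NumPy sketch (not from the book), using the autocovariances of a hypothetical AR(1) process, for which $\gamma_k = \sigma^2 \phi^k / (1 - \phi^2)$:

```python
import numpy as np

# Autocovariances of a hypothetical AR(1) process z_t = phi*z_{t-1} + a_t,
# for which gamma_k = sigma2 * phi**k / (1 - phi**2).
phi, sigma2, n = 0.6, 1.0, 5
gamma = sigma2 * phi ** np.arange(n) / (1 - phi ** 2)

# Gamma_n is the symmetric Toeplitz matrix of Equation (1):
# its (i, j) entry is gamma_{|i-j|}.
idx = np.arange(n)
Gamma = gamma[np.abs(idx[:, None] - idx[None, :])]

# var[L_t] = l' Gamma_n l must be positive for any nonzero weights l.
rng = np.random.default_rng(0)
quad_forms = [l @ Gamma @ l for l in rng.standard_normal((100, n))]
print(min(quad_forms) > 0)                     # positive in every trial
print(np.all(np.linalg.eigvalsh(Gamma) > 0))   # eigenvalue check agrees
```

Both checks agree because positive definiteness of $\Gamma_n$ is equivalent to all of its eigenvalues being positive.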
Thus, since $\mathrm{var}[L_t] > 0$ (from its definition) for all nonzero choices of $l_1, l_2, l_3, \cdots, l_n$, the quadratic form in Equation 2 is positive for every nonzero vector with components $l_1, l_2, l_3, \cdots, l_n$, and we have shown that the autocovariance matrix $\Gamma_n$ is positive definite. Since the autocorrelation matrix $P_n$ is a scaled version of $\Gamma_n$ (each entry divided by $\gamma_0$), it too is positive definite.

Given the fact that $P_n$ is positive definite, we can use standard properties of positive definite matrices to derive properties of the correlations $\rho_k$. Given a matrix $Q$ of size $n \times n$, we define the principal minors of $Q$ to be the determinants of smaller square matrices obtained from $Q$. The smaller submatrices are formed by selecting a set of indices from 1 to $n$ and keeping the corresponding rows (and columns). Thus if you view the indices selected as the indices of rows from the original matrix...
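The principal-minor criterion can be made concrete. A small sketch (the values of $\rho_1$, $\rho_2$ below are hypothetical, chosen only for illustration) showing that every leading principal minor of a positive definite $P_n$ is positive; in particular the $2 \times 2$ minor gives $1 - \rho_1^2 > 0$, i.e. $|\rho_1| < 1$:

```python
import numpy as np

def leading_principal_minors(Q):
    """Determinants of the upper-left k x k submatrices of Q, k = 1..n."""
    n = Q.shape[0]
    return [np.linalg.det(Q[:k, :k]) for k in range(1, n + 1)]

# Hypothetical autocorrelations of a stationary process.
rho1, rho2 = 0.5, 0.2
P3 = np.array([[1.0,  rho1, rho2],
               [rho1, 1.0,  rho1],
               [rho2, rho1, 1.0]])

minors = leading_principal_minors(P3)
# For P_n positive definite every leading principal minor is positive;
# the second minor equals 1 - rho1**2, forcing |rho_1| < 1.
print(all(m > 0 for m in minors))
```

Choosing values that violate the constraints (e.g. $\rho_1 = 0.9$, $\rho_2 = -0.9$) makes the third minor negative, showing that such a pair cannot be the autocorrelations of a stationary process.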
This note was uploaded on 04/01/2012 for the course ORIE 5550 taught by Professor Matteson during the Spring '12 term at Cornell University (Engineering School).