CHAPTER 10

10.1 Let

    \alpha(1) = y(1)                                  (1)
    \alpha(2) = y(2) + A_{1,1} y(1)                   (2)

where the matrix A_{1,1} is to be determined. This matrix is chosen so as to make the innovations processes α(1) and α(2) uncorrelated with each other; that is,

    E[\alpha(2) \alpha^H(1)] = 0                      (3)

Substituting Eqs. (1) and (2) into (3) gives

    E[y(2) y^H(1)] + A_{1,1} E[y(1) y^H(1)] = 0

Postmultiplying both sides of this equation by the inverse of E[y(1) y^H(1)] and rearranging,

    A_{1,1} = -E[y(2) y^H(1)] \{E[y(1) y^H(1)]\}^{-1}

We may rewrite Eqs. (1) and (2) in the compact form

    \begin{bmatrix} \alpha(1) \\ \alpha(2) \end{bmatrix} =
    \begin{bmatrix} I & 0 \\ A_{1,1} & I \end{bmatrix}
    \begin{bmatrix} y(1) \\ y(2) \end{bmatrix}

This relation shows that, given the observation vectors y(1) and y(2), we may compute the innovations processes α(1) and α(2). The block lower triangular transformation matrix is invertible, since its determinant equals 1. Hence, we may recover y(1) and y(2) from α(1) and α(2) by using the relation

    \begin{bmatrix} y(1) \\ y(2) \end{bmatrix} =
    \begin{bmatrix} I & 0 \\ A_{1,1} & I \end{bmatrix}^{-1}
    \begin{bmatrix} \alpha(1) \\ \alpha(2) \end{bmatrix} =
    \begin{bmatrix} I & 0 \\ -A_{1,1} & I \end{bmatrix}
    \begin{bmatrix} \alpha(1) \\ \alpha(2) \end{bmatrix}
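The two-vector construction can be checked numerically. The sketch below (an illustrative example with assumed dimensions and synthetic data, not part of the original solution) estimates the required covariances from samples, forms A_{1,1} as derived above, and confirms that the resulting α(2) is uncorrelated with α(1):

```python
import numpy as np

# Numerical check of the 10.1 construction (illustrative data, assumed sizes):
# form A11 = -E[y(2) y(1)^H] {E[y(1) y(1)^H]}^{-1} from sample covariances and
# verify that alpha(2) = y(2) + A11 y(1) is uncorrelated with alpha(1) = y(1).
rng = np.random.default_rng(0)
m, N = 3, 100_000                        # vector dimension, number of samples

B = rng.standard_normal((m, m))          # induces correlation between y1 and y2
y1 = rng.standard_normal((m, N))
y2 = B @ y1 + 0.5 * rng.standard_normal((m, N))

R11 = y1 @ y1.T / N                      # sample E[y(1) y(1)^H]
R21 = y2 @ y1.T / N                      # sample E[y(2) y(1)^H]
A11 = -R21 @ np.linalg.inv(R11)          # the matrix derived above

alpha1 = y1                              # alpha(1) = y(1)
alpha2 = y2 + A11 @ y1                   # alpha(2) = y(2) + A11 y(1)
cross = alpha2 @ alpha1.T / N            # sample E[alpha(2) alpha(1)^H]
print(np.max(np.abs(cross)) < 1e-8)      # prints True: decorrelation holds
```

Because A_{1,1} is computed from the same sample covariances used in the check, the residual cross-correlation cancels to machine precision rather than merely to sampling accuracy.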
In general, we may express the innovations process α(n) as a linear combination of the observation vectors y(1), y(2), …, y(n):

    \alpha(n) = y(n) + A_{n-1,1} y(n-1) + \cdots + A_{n-1,n-1} y(1)
              = \sum_{k=1}^{n} A_{n-1,k-1} y(n-k+1),   n = 1, 2, \ldots

where A_{n-1,0} = I. The set of matrices {A_{n-1,k}} is chosen to satisfy the condition

    E[\alpha(n+1) \alpha^H(n)] = 0,   n = 1, 2, \ldots

We may thus write

    \begin{bmatrix} \alpha(1) \\ \alpha(2) \\ \vdots \\ \alpha(n) \end{bmatrix} =
    \begin{bmatrix}
      I           & 0           & \cdots & 0 \\
      A_{1,1}     & I           & \cdots & 0 \\
      \vdots      & \vdots      & \ddots & \vdots \\
      A_{n-1,n-1} & A_{n-1,n-2} & \cdots & I
    \end{bmatrix}
    \begin{bmatrix} y(1) \\ y(2) \\ \vdots \\ y(n) \end{bmatrix}

The block lower triangular transformation matrix is invertible, since its determinant equals one. Hence, we may go back and forth between the set of observation vectors {y(1), y(2), …, y(n)} and the corresponding set of innovations processes {α(1), α(2), …, α(n)} without any loss of information.

10.2 First, we note that

    E[\varepsilon(n, n-1) v_1^H(n)] = E[x(n) v_1^H(n)] - E[\hat{x}(n \mid \mathcal{Y}_{n-1}) v_1^H(n)]

Since the estimate x̂(n | 𝒴_{n-1}) consists of a linear combination of the observation vectors y(1), …, y(n-1), and since
    E[y(k) v_1^H(n)] = 0,   0 \le k \le n

it follows that

    E[\hat{x}(n \mid \mathcal{Y}_{n-1}) v_1^H(n)] = 0

We also have

    E[x(n) v_1^H(n)] = \Phi(n, 0) E[x(0) v_1^H(n)] + \sum_{i=1}^{n-1} \Phi(n, i) E[v_1(i) v_1^H(n)]

Since, by hypothesis,

    E[x(0) v_1^H(n)] = 0

and

    E[v_1(i) v_1^H(n)] = 0,   0 \le i \le n-1

it follows that

    E[x(n) v_1^H(n)] = 0

Accordingly, we deduce that

    E[\varepsilon(n, n-1) v_1^H(n)] = 0

Next, we note that

    E[\varepsilon(n, n-1) v_2^H(n)] = E[x(n) v_2^H(n)] - E[\hat{x}(n \mid \mathcal{Y}_{n-1}) v_2^H(n)]

We have

    E[x(n) v_2^H(n)] = 0

Also, since x̂(n | 𝒴_{n-1}) consists of a linear combination of y(1), …, y(n-1), and since

    E[y(k) v_2^H(n)] = 0,   1 \le k \le n-1

it follows that

    E[\hat{x}(n \mid \mathcal{Y}_{n-1}) v_2^H(n)] = 0

We therefore conclude that

    E[\varepsilon(n, n-1) v_2^H(n)] = 0
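These two orthogonality results can be verified exactly, without simulation noise, for a concrete model. The sketch below assumes a scalar state-space model x(k+1) = a·x(k) + v1(k), y(k) = c·x(k) + v2(k) with made-up parameters (not from the text); it propagates second moments in closed form and checks that the predicted state error ε(n, n−1) is orthogonal to both v1(n) and v2(n):

```python
import numpy as np

# Exact covariance check of the two orthogonality results above, using an
# assumed scalar state-space model (hypothetical parameters, not from the text):
#   x(k+1) = a x(k) + v1(k),   y(k) = c x(k) + v2(k),
# with x(1), {v1(k)}, {v2(k)} zero-mean and mutually uncorrelated.
a, c, p0, q1, q2, n = 0.8, 1.0, 1.0, 0.5, 0.3, 5

# Represent every signal as a row of coefficients over the uncorrelated
# "sources" s = [x(1), v1(1..n), v2(1..n)]; then E[u w^H] = u @ cov_s @ w^T.
dim = 1 + 2 * n
cov_s = np.diag([p0] + [q1] * n + [q2] * n)

def cross(U, W):                            # exact E[U W^H] from coefficients
    return U @ cov_s @ W.T

X = np.zeros((n, dim)); X[0, 0] = 1.0       # row k represents x(k+1)
for k in range(1, n):
    X[k] = a * X[k - 1]                     # x(k+1) = a x(k) ...
    X[k, 1 + (k - 1)] += 1.0                # ... + v1(k)

Y = c * X                                   # row k represents y(k+1)
for k in range(n):
    Y[k, 1 + n + k] += 1.0                  # + v2(k+1)

# eps(n, n-1) = x(n) - xhat(n | y(1), ..., y(n-1)), via the LMMSE projection.
Yp = Y[: n - 1]                             # past observations y(1..n-1)
proj = cross(X[n - 1 :], Yp) @ np.linalg.inv(cross(Yp, Yp))
eps = X[n - 1 :] - proj @ Yp                # coefficients of eps(n, n-1)

v1n = np.zeros((1, dim)); v1n[0, n] = 1.0       # v1(n)
v2n = np.zeros((1, dim)); v2n[0, 2 * n] = 1.0   # v2(n)
print(abs(cross(eps, v1n)[0, 0]) < 1e-12,       # E[eps v1(n)^H] = 0
      abs(cross(eps, v2n)[0, 0]) < 1e-12)       # E[eps v2(n)^H] = 0
```

Because the second moments are propagated symbolically through the coefficient rows, the two expectations come out identically zero, mirroring the algebraic argument above.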
10.3 The estimated state-error vector equals

    \varepsilon(i, n) = x(i) - \hat{x}(i \mid \mathcal{Y}_n)
                      = x(i) - \sum_{k=1}^{n} b_i(k) \alpha(k)

The expected value of the squared norm of ε(i, n) equals

    E[\|\varepsilon(i, n)\|^2] = E[\varepsilon^H(i, n) \varepsilon(i, n)]
      = \sum_{k=1}^{n} \sum_{l=1}^{n} b_i^H(k) b_i(l) E[\alpha^*(k) \alpha(l)]
        - \sum_{k=1}^{n} b_i^H(k) E[x(i) \alpha^*(k)]
        - \sum_{k=1}^{n} E[x^H(i) \alpha(k)] b_i(k)
        + E[x^H(i) x(i)]

Differentiating this index of performance with respect to the vector b_i(k) and setting the result equal to zero, we find that the optimum value of b_i(k) is determined by

    2 b_i(k) E[\alpha(k) \alpha^*(k)] - 2 E[x(i) \alpha^*(k)] = 0

Hence, the optimum value of b_i(k) equals

    b_i(k) = \frac{E[x(i) \alpha^*(k)]}{\sigma_k^2}
where

    \sigma_k^2 = E[\alpha(k) \alpha^*(k)] = E[|\alpha(k)|^2]

Correspondingly, the estimate of the state vector equals

    \hat{x}(i \mid \mathcal{Y}_n) = \sum_{k=1}^{n} b_i(k) \alpha(k)

where b_i(k) is the optimum coefficient just derived.
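The optimality of this choice of b_i(k) can be illustrated on synthetic data. The sketch below (a scalar example with assumed, made-up mixing weights, not part of the original solution) forms b(k) from sample moments of mutually orthogonal innovations, recovers the underlying weights, and confirms that perturbing any single coefficient increases the sample mean-square error:

```python
import numpy as np

# Check of the 10.3 result on synthetic data (assumed scalar example):
# with mutually orthogonal innovations alpha(k), the MSE-optimal weights are
# b(k) = E[x alpha*(k)] / E[|alpha(k)|^2], one innovation at a time.
rng = np.random.default_rng(1)
N, n = 200_000, 3
scales = np.array([[1.0], [2.0], [0.5]])         # per-innovation std deviations
alpha = rng.standard_normal((n, N)) * scales     # orthogonal innovations
x = 0.7 * alpha[0] + 0.2 * alpha[1] - 1.1 * alpha[2] + rng.standard_normal(N)

b = (alpha @ x / N) / np.mean(alpha**2, axis=1)  # E[x alpha(k)] / E[alpha(k)^2]
mse = np.mean((x - b @ alpha) ** 2)              # sample MSE at the optimum

# Perturbing any single coefficient away from b increases the sample MSE.
worse = all(np.mean((x - (b + d) @ alpha) ** 2) > mse for d in 0.2 * np.eye(n))
print(worse, b.round(2))
```

Note that the formula decouples across k only because the innovations are orthogonal; for raw (correlated) observations the full covariance matrix would have to be inverted instead.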