EE 562a Homework Solutions 5        22 March 2006

1. (a) Since the observation and the desired signal are jointly Gaussian and zero mean, we know that the optimal unconstrained solution is the LMMSE estimator. Therefore we have

$$\hat{v}_i(u) \triangleq E\{v(u) \mid \mathbf{x}_i(u)\} = \mathbf{R}_{vx}(i)\,\mathbf{R}_x^{-1}(i)\,\mathbf{x}_i(u), \qquad i = 1, 2, 3, \ldots$$

In particular, for $i = k+1$ we have

$$\hat{v}_{k+1}(u) = \mathbf{R}_{vx}(k+1)\,\mathbf{R}_x^{-1}(k+1)\,\mathbf{x}_{k+1}(u).$$

The key to this problem is to partition the observations as

$$\mathbf{x}_{k+1}(u) = \begin{bmatrix} x(u,k+1) \\ \mathbf{x}_k(u) \end{bmatrix}.$$

With this we obtain the following updates, where $\mathbf{r}_{xv}(k+1) \triangleq E\{v(u)\,x(u,k+1)\}$ is the ($n \times 1$) cross-correlation of the desired signal with the new observation and $\mathbf{r}_x(k+1) \triangleq E\{x(u,k+1)\,\mathbf{x}_k(u)\}$ is the ($k \times 1$) cross-correlation of the new observation with the past observations:

$$\mathbf{R}_{vx}(k+1) = E\left\{ v(u) \left[\, x(u,k+1) \;\; \mathbf{x}_k^t(u) \,\right] \right\} = \left[\, \mathbf{r}_{xv}(k+1) \;\; \mathbf{R}_{vx}(k) \,\right]$$

$$\mathbf{R}_x(k+1) = E\left\{ \begin{bmatrix} x(u,k+1) \\ \mathbf{x}_k(u) \end{bmatrix} \left[\, x(u,k+1) \;\; \mathbf{x}_k^t(u) \,\right] \right\} = \begin{bmatrix} \sigma_x^2(k+1) & \mathbf{r}_x^t(k+1) \\ \mathbf{r}_x(k+1) & \mathbf{R}_x(k) \end{bmatrix}.$$

We use the matrix inversion lemma to get

$$\mathbf{R}_x^{-1}(k+1) = \begin{bmatrix} \beta^{-1} & -\beta^{-1}\,\mathbf{r}_x^t(k+1)\,\mathbf{R}_x^{-1}(k) \\ -\beta^{-1}\,\mathbf{R}_x^{-1}(k)\,\mathbf{r}_x(k+1) & \mathbf{R}_x^{-1}(k) + \beta^{-1}\,\mathbf{R}_x^{-1}(k)\,\mathbf{r}_x(k+1)\,\mathbf{r}_x^t(k+1)\,\mathbf{R}_x^{-1}(k) \end{bmatrix},$$

where the partitioning matches that of $\mathbf{x}_{k+1}(u)$ and we have introduced the scalar $\beta$ (the Schur complement) for simplicity:

$$\beta = \sigma_x^2(k+1) - \mathbf{r}_x^t(k+1)\,\mathbf{R}_x^{-1}(k)\,\mathbf{r}_x(k+1).$$

We can then multiply the two partitioned matrices together, yielding

$$\mathbf{R}_{vx}(k+1)\,\mathbf{R}_x^{-1}(k+1) = \left[\, \mathbf{M} \;\; \mathbf{P} \,\right],$$

where the matrices $\mathbf{M}$ and $\mathbf{P}$ are

$$\mathbf{M} = \beta^{-1} \left[\, \mathbf{r}_{xv}(k+1) - \mathbf{R}_{vx}(k)\,\mathbf{R}_x^{-1}(k)\,\mathbf{r}_x(k+1) \,\right] \qquad (n \times 1)$$

$$\mathbf{P} = \mathbf{R}_{vx}(k)\,\mathbf{R}_x^{-1}(k) - \beta^{-1} \left[\, \mathbf{r}_{xv}(k+1) - \mathbf{R}_{vx}(k)\,\mathbf{R}_x^{-1}(k)\,\mathbf{r}_x(k+1) \,\right] \mathbf{r}_x^t(k+1)\,\mathbf{R}_x^{-1}(k) \qquad (n \times k).$$

Using this partitioned form, along with the partitioned version of $\mathbf{x}_{k+1}(u)$, gives

$$\hat{v}_{k+1}(u) = \mathbf{M}\,x(u,k+1) + \mathbf{P}\,\mathbf{x}_k(u)$$
$$= \mathbf{R}_{vx}(k)\,\mathbf{R}_x^{-1}(k)\,\mathbf{x}_k(u) + \beta^{-1} \left[\, \mathbf{r}_{xv}(k+1) - \mathbf{R}_{vx}(k)\,\mathbf{R}_x^{-1}(k)\,\mathbf{r}_x(k+1) \,\right] \left[\, x(u,k+1) - \mathbf{r}_x^t(k+1)\,\mathbf{R}_x^{-1}(k)\,\mathbf{x}_k(u) \,\right]$$
$$= \hat{v}_k(u) + \mathbf{g}(k+1) \left[\, x(u,k+1) - \mathbf{r}_x^t(k+1)\,\mathbf{R}_x^{-1}(k)\,\mathbf{x}_k(u) \,\right].$$
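As a sanity check on the derivation in part (a), the sketch below compares the recursive update against the batch LMMSE estimate for a randomly generated joint covariance, taking the desired signal $v(u)$ to be scalar ($n = 1$) for simplicity. The use of NumPy, the seed, and all variable names are illustrative choices of this write-up, not part of the original solution.

```python
# Numerical check of the recursive LMMSE update from part (a).
# Verifies that v_hat_{k+1} = v_hat_k + g(k+1) * innovation matches
# the batch estimate R_vx(k+1) R_x^{-1}(k+1) x_{k+1}(u).
import numpy as np

rng = np.random.default_rng(0)
k = 4

# Fabricate a valid joint covariance for the stacked vector [v, x(u,k+1), x_k(u)].
A = rng.standard_normal((k + 2, k + 2))
C = A @ A.T  # positive definite by construction

# Partition: index 0 -> v(u), index 1 -> x(u,k+1), indices 2.. -> x_k(u)
R_vx_k1 = C[0, 1:]     # R_vx(k+1)
R_x_k1 = C[1:, 1:]     # R_x(k+1)
r_xv = C[0, 1]         # r_xv(k+1) = E{v(u) x(u,k+1)}  (scalar here, n = 1)
R_vx_k = C[0, 2:]      # R_vx(k)
r_x = C[1, 2:]         # r_x(k+1) = E{x(u,k+1) x_k(u)}
sigma2 = C[1, 1]       # sigma_x^2(k+1)
R_x_k = C[2:, 2:]      # R_x(k)

x_k1 = rng.standard_normal(k + 1)   # one realization [x(u,k+1); x_k(u)]
x_new, x_k = x_k1[0], x_k1[1:]

# Batch estimates at times k and k+1
v_hat_k = R_vx_k @ np.linalg.solve(R_x_k, x_k)
v_hat_k1_batch = R_vx_k1 @ np.linalg.solve(R_x_k1, x_k1)

# Recursive update with the Kalman gain of part (a)
Rinv_r = np.linalg.solve(R_x_k, r_x)
g = (r_xv - R_vx_k @ Rinv_r) / (sigma2 - r_x @ Rinv_r)
innovation = x_new - r_x @ np.linalg.solve(R_x_k, x_k)
v_hat_k1_rec = v_hat_k + g * innovation

assert np.isclose(v_hat_k1_batch, v_hat_k1_rec)
```

Because the identity is purely algebraic, the check passes for any positive definite joint covariance, not just this random draw.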
This is in the desired form, with the Kalman gain vector defined by

$$\mathbf{g}(k+1) = \frac{\mathbf{r}_{xv}(k+1) - \mathbf{R}_{vx}(k)\,\mathbf{R}_x^{-1}(k)\,\mathbf{r}_x(k+1)}{\sigma_x^2(k+1) - \mathbf{r}_x^t(k+1)\,\mathbf{R}_x^{-1}(k)\,\mathbf{r}_x(k+1)},$$

where $\mathbf{r}_{xv}(k+1) = E\{v(u)\,x(u,k+1)\}$ and $\mathbf{r}_x(k+1) = E\{x(u,k+1)\,\mathbf{x}_k(u)\}$.

(b) The innovation provided by $x(u,k+1)$ is

$$x(u,k+1) - \mathbf{r}_x^t(k+1)\,\mathbf{R}_x^{-1}(k)\,\mathbf{x}_k(u),$$

where the subtracted term can be identified as the MMSE estimate of $x(u,k+1)$ based on the observation $\mathbf{x}_k(u)$. So the form of this recursive estimator is intuitively satisfying: we update the estimate by a linear function of the error in predicting $x(u,k+1)$ from $\mathbf{x}_k(u)$. This may not seem all that useful if you consider the fact that we still need to estimate $x(u,k+1)$ from $\mathbf{x}_k(u)$, which means that we need to compute $\mathbf{R}_x^{-1}(k)$ in order to update the estimate. A recursion on this inverse matrix can be found when we have a fixed model for the relation between the observation and the desired signal. For example, the state-variable model would have the linear observation equation: ...
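The block-inverse formula from part (a) can itself serve as a recursion for $\mathbf{R}_x^{-1}(k)$: given the previous inverse, the new cross-correlation vector, and the new variance, the extended inverse follows without recomputing anything from scratch. The helper below is a sketch of that idea; the function name `extend_inverse` and the test setup are assumptions of this write-up, not part of the original solution.

```python
import numpy as np

def extend_inverse(Rinv_k, r, sigma2):
    """Given R_x^{-1}(k), r = r_x(k+1) = E{x(u,k+1) x_k(u)}, and
    sigma2 = sigma_x^2(k+1), return R_x^{-1}(k+1) via the block
    (matrix inversion lemma) formula, with the new sample ordered first."""
    q = Rinv_k @ r
    beta = sigma2 - r @ q  # Schur complement (scalar)
    top_left = np.array([[1.0 / beta]])
    off = -q / beta
    bottom_right = Rinv_k + np.outer(q, q) / beta
    return np.block([[top_left, off[None, :]],
                     [off[:, None], bottom_right]])

# Check against a direct inverse for a random positive definite R_x(k+1).
rng = np.random.default_rng(1)
k = 3
A = rng.standard_normal((k + 1, k + 1))
R_k1 = A @ A.T
R_k = R_k1[1:, 1:]
r = R_k1[0, 1:]
sigma2 = R_k1[0, 0]

Rinv_k1 = extend_inverse(np.linalg.inv(R_k), r, sigma2)
assert np.allclose(Rinv_k1, np.linalg.inv(R_k1))
```

Each update costs $O(k^2)$ rather than the $O(k^3)$ of a fresh inversion, which is the practical point of the recursive form.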
Spring '07, Todd Brun
