275A-Solutions-5

ECE 275A Homework #5 Solutions – Fall 2009

1. We can write the problem in the equivalent form y = Ax + n, where A = (1, 1, 1)ᵀ, n ∼ N(0, Σ), and Σ = diag(σ₁², σ₂², σ₃²) is assumed known. Note that A is one-to-one. This yields y ∼ N(Ax, Σ) and the corresponding likelihood function

    L(x; y) = p(y; x) = (2π)^(−n/2) |Σ|^(−1/2) exp{ −(1/2) ‖y − Ax‖²_{Σ⁻¹} }.

(a) Maximizing the likelihood function with respect to x is equivalent to minimizing the negative log-likelihood function, −ln p(y; x), which in turn is equivalent to solving the weighted least-squares problem

    x̂_MLE = arg min_x ‖y − Ax‖²_W,   W = Σ⁻¹.   (1)

With A* = AᵀW = AᵀΣ⁻¹ = (1/σ₁², 1/σ₂², 1/σ₃²), we have x̂_MLE = A⁺y, where A⁺ is the Σ⁻¹-weighted pseudoinverse

    A⁺ = (A*A)⁻¹A* = (AᵀΣ⁻¹A)⁻¹AᵀΣ⁻¹ = (σ₂²σ₃², σ₁²σ₃², σ₁²σ₂²) / (σ₂²σ₃² + σ₁²σ₃² + σ₁²σ₂²).

(b) From part (a), we have

    x̂_MLE = A⁺y = (σ₂²σ₃² y₁ + σ₁²σ₃² y₂ + σ₁²σ₂² y₃) / (σ₂²σ₃² + σ₁²σ₃² + σ₁²σ₂²) = α₁y₁ + α₂y₂ + α₃y₃,

where the weighting coefficients αᵢ are nonnegative and sum to one. We see that the MLE is a convex combination (weighted average) of the measurements.

(c) The matrix A has full column rank and therefore A⁺ is a left inverse, A⁺A = 1. We have

    x̂_MLE = A⁺y = A⁺(Ax + n) = x + A⁺n,

and therefore E_x{x̂_MLE} = x for all x, showing that the MLE is absolutely unbiased. Let x̃ = x̂_MLE − x = A⁺n. Then

    Cov_x{x̃} = E_x{x̃x̃ᵀ} = A⁺ E_x{nnᵀ} (A⁺)ᵀ = A⁺Σ(A⁺)ᵀ = (AᵀΣ⁻¹A)⁻¹,

or

    Cov_x{x̂_MLE} = Cov_x{x̃} = σ₁²σ₂²σ₃² / (σ₂²σ₃² + σ₁²σ₃² + σ₁²σ₂²),

for all x.

(d) Case (i). For σ₁ → 0, we obtain x̂_MLE = y₁ and Cov_x{x̂_MLE} = 0. That is, even a single perfect measurement allows us to identify x with zero error, a most reasonable result.

Case (ii).
For σ₁ → ∞ (and assuming that the other two measurement variances remain finite), the solution completely discounts the infinitely noisy (and hence worthless) measurement y₁ in favor of the finite-uncertainty measurements y₂ and y₃. Again, a very reasonable result. The resulting solution can be seen to be equal to the optimal MLE computed when only the measurements y₂ and y₃ are available.

Case (iii). When σ₁ = σ₂ = σ₃, we have x̂_MLE = (1/3)(y₁ + y₂ + y₃). This is reasonable, as here there is no rational reason to prefer any one measurement over the other two. Thus, by symmetry we expect the answer to be the simple sample mean, which is indeed the case. With Σ = σ²I, the weighted least-squares problem (1) becomes equivalent to an unweighted least-squares problem, because an overall constant factor (here, σ²) does not affect the solution of the optimization problem. The solution to the unweighted problem is readily shown to be the same.
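The closed-form results above can be checked numerically. The sketch below, using hypothetical variances and measurements (the specific values are not from the solutions), verifies that the general weighted least-squares pseudoinverse (AᵀΣ⁻¹A)⁻¹AᵀΣ⁻¹ reproduces the convex-combination weights of part (a) and the variance formula of part (c):

```python
# Numerical check of the closed-form MLE for y = Ax + n with
# A = (1, 1, 1)^T and n ~ N(0, diag(s1^2, s2^2, s3^2)).
# The variances s2 and measurements y below are hypothetical test values.
import numpy as np

s2 = np.array([4.0, 1.0, 0.25])   # sigma_i^2 (hypothetical)
y = np.array([2.0, 1.5, 1.8])     # measurements (hypothetical)
A = np.ones((3, 1))
Sigma_inv = np.diag(1.0 / s2)

# General weighted pseudoinverse: (A^T Sigma^-1 A)^-1 A^T Sigma^-1
A_plus = np.linalg.inv(A.T @ Sigma_inv @ A) @ A.T @ Sigma_inv
x_wls = (A_plus @ y).item()

# Closed-form weights from part (a): alpha_i is proportional to the
# product of the other two variances.
num = np.array([s2[1] * s2[2], s2[0] * s2[2], s2[0] * s2[1]])
alpha = num / num.sum()
x_closed = float(alpha @ y)

# Part (b): the weights form a convex combination.
assert np.all(alpha >= 0) and np.isclose(alpha.sum(), 1.0)
assert np.isclose(x_wls, x_closed)

# Part (c): estimator variance (A^T Sigma^-1 A)^-1 equals the
# product formula sigma1^2 sigma2^2 sigma3^2 / (sum of pair products).
var_wls = np.linalg.inv(A.T @ Sigma_inv @ A).item()
var_closed = s2.prod() / num.sum()
assert np.isclose(var_wls, var_closed)

print("x_hat =", x_wls, " var =", var_wls)
```

Setting all three entries of `s2` equal reduces `alpha` to (1/3, 1/3, 1/3), recovering the sample-mean result of case (iii).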
This note was uploaded on 01/14/2011 for the course ECE 210a taught by Professor Chandrasekara during the Fall '08 term at UCSB.
