# ECE 275A Homework #4 Solutions – Fall 2009

1. (a) We have

$$
\begin{aligned}
\ell(x) &= x^H \Pi x - 2\,\mathrm{Re}\{x^H B y\} + y^H W y \\
&= x^H \Pi x - x^H B y - y^H B^H x + y^H W y \\
&= x^H \Pi x - x^H \Pi\,\Pi^{-1} B y - y^H B^H \Pi^{-1}\,\Pi x + y^H W y \\
&= (x - \Pi^{-1} B y)^H\, \Pi\, (x - \Pi^{-1} B y) + y^H W y - y^H B^H \Pi^{-1} B y \\
&= (x - \Pi^{-1} B y)^H\, \Pi\, (x - \Pi^{-1} B y) + y^H (W - B^H \Pi^{-1} B)\, y
\end{aligned}
$$

Thus for all $x$,

$$\ell(x) \ge y^H (W - B^H \Pi^{-1} B)\, y,$$

with equality if and only if $x = \Pi^{-1} B y$. Thus we have proved that

$$\hat{x} = \Pi^{-1} B y = \arg\min_x \ell(x), \qquad \ell(\hat{x}) = y^H (W - B^H \Pi^{-1} B)\, y = \min_x \ell(x).$$

(b) It is straightforward to apply this result to the full column-rank, weighted least-squares problem:

$$
\begin{aligned}
\ell(x) &= \| y - Ax \|_W^2 = (y - Ax)^H W (y - Ax) \\
&= x^H \underbrace{A^H W A}_{\Pi}\, x - x^H \underbrace{A^H W}_{B}\, y - y^H \underbrace{W A}_{B^H}\, x + y^H W y \\
&= x^H \Pi x - x^H B y - y^H B^H x + y^H W y \\
&= x^H \Pi x - 2\,\mathrm{Re}\{x^H B y\} + y^H W y
\end{aligned}
$$

With $A$ full column rank and $W = W^H > 0$, the matrix $\Pi$ is Hermitian and full rank. Thus the weighted least-squares estimate of $x$ is

$$\hat{x} = \Pi^{-1} B y = (A^H W A)^{-1} A^H W y$$

with optimal (minimal) least-squares cost

$$\ell(\hat{x}) = y^H (W - B^H \Pi^{-1} B)\, y = y^H \bigl( W - W A (A^H W A)^{-1} A^H W \bigr)\, y.$$

**Comment.** Suppose that $\langle y_1, y_2 \rangle = y_1^H W y_2$ and $\langle x_1, x_2 \rangle = x_1^H x_2$ (i.e., $\Omega = I$).
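As a numerical sanity check (not part of the original solution), a short NumPy sketch can confirm the closed form: with $\Pi = A^H W A$ and $B = A^H W$, the estimate $\hat{x} = \Pi^{-1} B y$ attains the cost $y^H (W - B^H \Pi^{-1} B)\, y$, and any perturbation away from $\hat{x}$ only increases the cost. All matrix sizes and the random-seed setup here are illustrative choices, not from the problem statement.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 3
# Generic complex A is full column rank with probability one
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
y = rng.standard_normal(m) + 1j * rng.standard_normal(m)
# Hermitian positive-definite weight: W = M^H M + m I
M = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
W = M.conj().T @ M + m * np.eye(m)

Pi = A.conj().T @ W @ A              # Pi = A^H W A  (Hermitian, full rank)
B = A.conj().T @ W                   # B  = A^H W
x_hat = np.linalg.solve(Pi, B @ y)   # x_hat = Pi^{-1} B y

def cost(x):
    """Weighted least-squares cost ||y - Ax||_W^2."""
    e = y - A @ x
    return (e.conj() @ W @ e).real

# Closed-form optimal cost: y^H (W - B^H Pi^{-1} B) y
opt = (y.conj() @ (W - B.conj().T @ np.linalg.solve(Pi, B)) @ y).real
assert np.isclose(cost(x_hat), opt)

# Perturbing x_hat in any direction cannot decrease the cost
for _ in range(5):
    dx = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    assert cost(x_hat + dx) >= cost(x_hat) - 1e-9
```

The completion-of-square argument guarantees the perturbation check can never fail: the excess cost is exactly $(x - \hat{x})^H \Pi (x - \hat{x}) \ge 0$.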

Then

$$
A^* = A^H W, \qquad A^+ = (A^* A)^{-1} A^* = (A^H W A)^{-1} A^H W,
$$

$$
P_{\mathcal{R}(A)} = A A^+ = A (A^H W A)^{-1} A^H W, \qquad
P_{\mathcal{N}(A^*)} = I - P_{\mathcal{R}(A)}.
$$

This shows that the optimal cost can be rewritten as

$$
\ell(\hat{x}) = y^H W \bigl( I - P_{\mathcal{R}(A)} \bigr)\, y
= y^H W P_{\mathcal{N}(A^*)}\, y
= \bigl\langle y,\, P_{\mathcal{N}(A^*)}^2\, y \bigr\rangle
= \bigl\langle P_{\mathcal{N}(A^*)}\, y,\, P_{\mathcal{N}(A^*)}\, y \bigr\rangle
$$

or

$$\ell(\hat{x}) = \| P_{\mathcal{N}(A^*)}\, y \|^2 = \| P_{\mathcal{N}(A^*)}\, y \|_W^2.$$

What is the optimal cost if $y \in \mathcal{R}(A)$? Does this make sense?

Note that the optimal error (which must be orthogonal to the range of $A$) is

$$\hat{e} = y - \hat{y} = y - P_{\mathcal{R}(A)}\, y = \bigl( I - P_{\mathcal{R}(A)} \bigr)\, y = P_{\mathcal{N}(A^*)}\, y.$$

Therefore the optimal cost can also be written as

$$\ell(\hat{x}) = \| \hat{e} \|^2 = \| \hat{e} \|_W^2,$$

showing that the optimal least-squares cost is the minimal residual error "power".
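The projector identities above can also be checked numerically: $P_{\mathcal{R}(A)} = A(A^H W A)^{-1} A^H W$ is idempotent, the residual $\hat{e} = P_{\mathcal{N}(A^*)}\, y$ satisfies $A^* \hat{e} = 0$ (orthogonality to $\mathcal{R}(A)$ in the $W$-weighted inner product), and the cost decomposes by Pythagoras. The sketch below uses illustrative random data, not values from the assignment.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 2
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
y = rng.standard_normal(m) + 1j * rng.standard_normal(m)

# Hermitian positive-definite weight defining <y1, y2> = y1^H W y2
M = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
W = M.conj().T @ M + m * np.eye(m)

# A^+ = (A^* A)^{-1} A^*  with adjoint A^* = A^H W
A_plus = np.linalg.solve(A.conj().T @ W @ A, A.conj().T @ W)
P_R = A @ A_plus          # projector onto R(A), orthogonal w.r.t. <.,.>_W
P_N = np.eye(m) - P_R     # projector onto N(A^*)

def ip(u, v):
    """Weighted inner product <u, v> = u^H W v."""
    return u.conj() @ W @ v

y_hat = P_R @ y           # optimal fit
e_hat = P_N @ y           # optimal residual

assert np.allclose(P_R @ P_R, P_R)             # idempotent: P^2 = P
assert np.allclose(A.conj().T @ W @ e_hat, 0)  # A^* e_hat = 0: residual _|_ R(A)
assert np.isclose(ip(y_hat, e_hat), 0)         # <y_hat, e_hat> = 0
# Pythagoras: ||y||_W^2 = ||y_hat||_W^2 + ||e_hat||_W^2
assert np.isclose(ip(y, y).real,
                  ip(y_hat, y_hat).real + ip(e_hat, e_hat).real)
```

Note that $P_{\mathcal{R}(A)}$ is not symmetric as a matrix; it is self-adjoint only with respect to the $W$-weighted inner product, which is exactly why the orthogonality checks use `ip` rather than the Euclidean dot product.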