EE 562a Homework Solutions 2 — 5 February 2007

1. Solution: When you see a problem like this you should immediately find the eigenvalues/eigenvectors. This is easy for the given matrix because the third component of y(u) is uncorrelated with the first two components. This implies that

    e_3 = [0 0 1]^t

is an eigenvector with corresponding eigenvalue λ_3 = 3. The other two eigenvectors can be found from the eigenvectors of the 2×2 matrix

    [ 3  1 ]
    [ 1  3 ],

so the other eigenvector/eigenvalue pairs are

    e_1 = (1/√2) [1  -1  0]^t,  λ_1 = 2,
    e_2 = (1/√2) [1   1  0]^t,  λ_2 = 4.

(a) The mean squared length of y(u) is

    E{ ||y(u)||^2 } = E{ y^t(u) y(u) } = E{ tr( y(u) y^t(u) ) } = tr( E{ y(u) y^t(u) } ) = tr(R_y) = tr(K_y) = 9,

where we used m_y = 0 and tr(·) is the trace function.

(b),(c) It is always true that the variance computation is

    Var[ b^t y(u) ] = b^t K_y b.

In addition, since E{ b^t y(u) } = b^t m_y = 0, there is no difference between the variance and the second moment (what would change if the mean of y(u) were not 0?). It then follows that

    b_max = e_max = (1/√2) [1  1  0]^t,   Var[ b_max^t y(u) ] = λ_max = 4,
    b_min = e_min = (1/√2) [1 -1  0]^t,   Var[ b_min^t y(u) ] = λ_min = 2.

(d) This is simply Chebyshev's inequality:

    P{ ||y(u)|| > 10 } ≤ E{ ||y(u)||^2 } / 10^2 = 9/100 = 0.09.

2. Solution: In each of the parts of this problem we want to choose a deterministic matrix H and a deterministic vector c so that the random vector x(u) = H w(u) + c has the desired second-order statistics. We are given that K_w = I and m_w = 0.

    [Block diagram: w(u) → H → H w(u) + c, with c added at the output, yielding the desired 2nd-order statistics.]

In other words, we solve for H and c from

    m_x = E{ H w(u) + c },   K_x = E{ [H w(u)] [H w(u)]† }.

Simplifying gives

    c = m_x,   H H† = K_x,

so the choice of c is obvious, and H results from a factorization of the nonnegative definite covariance matrix.

(a) Of course c = m_x = [1 2 3]^t.
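The claims in Problem 1 can be sanity-checked numerically. This is a minimal sketch, assuming K_y is the matrix implied by the stated eigenpairs (the covariance itself is not reproduced in this preview, so the matrix below is a reconstruction):

```python
import numpy as np

# Reconstructed covariance: 2x2 block [[3,1],[1,3]] for the first two
# components, third component uncorrelated with variance 3 (assumption).
K_y = np.array([[3.0, 1.0, 0.0],
                [1.0, 3.0, 0.0],
                [0.0, 0.0, 3.0]])

# Eigenvalues should be 2, 3, 4 (eigvalsh returns them in ascending order)
eigvals = np.linalg.eigvalsh(K_y)
print(eigvals)            # [2. 3. 4.]

# (a) Mean squared length = tr(K_y) = 9 (since m_y = 0)
print(np.trace(K_y))      # 9.0

# (b),(c) Variances of the extremal unit-norm linear functionals
b_max = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
b_min = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
print(b_max @ K_y @ b_max)  # 4.0 = lambda_max
print(b_min @ K_y @ b_min)  # 2.0 = lambda_min

# (d) Chebyshev bound on P{ ||y(u)|| > 10 }
print(np.trace(K_y) / 10**2)  # 0.09
```

Note that the variance of b^t y(u) is maximized (minimized) over unit vectors b exactly at the eigenvector of the largest (smallest) eigenvalue, which is what the computation confirms.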
We'll find H by the "direct method," taking H lower triangular:

    H = [ h_11   0     0   ]
        [ h_21  h_22   0   ]
        [ h_31  h_32  h_33 ].

Substituting this into H H† = K_x yields the following equations:

    h_11^2 = 1,             h_11 h_21 = 1,                h_11 h_31 = 1,
    h_21^2 + h_22^2 = 2,    h_21 h_31 + h_22 h_32 = 2,    h_31^2 + h_32^2 + h_33^2 = 3.

Solving this system of equations in the order in which they are written gives

    H = [ 1  0  0 ]
        [ 1  1  0 ]
        [ 1  1  1 ].

(b) Again c = m_y = [1 1 1 1]^t. By solving det(K_y − λI) = 0, the eigenvalues of this rank-2 matrix are easily shown to be 4, 2, and 0 (twice). Corresponding eigenvectors e are then found by solving K_y e = λ e for each choice of the eigenvalue λ. In this case there is a two-dimensional space of eigenvectors with eigenvalue 0, and any pair of linearly independent vectors from this space is a basis for that set. Because we want to use the eigenvectors as columns of an orthogonal/unitary matrix, we add the constraint that the eigenvectors form an orthonormal set. Orthonormal eigenvectors and their eigenvalues ...
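The direct method in part (a) is exactly a Cholesky factorization of K_x, so the hand computation can be checked numerically. A minimal sketch, assuming K_x = H H^t with the H found above (i.e., K_x = [[1,1,1],[1,2,2],[1,2,3]], which is consistent with the six equations solved):

```python
import numpy as np

# The hand-solved lower-triangular factor from part (a)
H = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])

# Covariance implied by the six equations (assumption: K_x is not
# reproduced in this extract, but H H^t reconstructs it)
K_x = H @ H.T
print(K_x)  # [[1. 1. 1.] [1. 2. 2.] [1. 2. 3.]]

# NumPy's Cholesky factorization recovers the same lower-triangular H,
# since the Cholesky factor of a positive definite matrix is unique
L = np.linalg.cholesky(K_x)
print(np.allclose(L, H))  # True

# The simulated vector x(u) = H w(u) + c then has mean c and covariance K_x
c = np.array([1.0, 2.0, 3.0])
w = np.random.standard_normal(3)   # K_w = I, m_w = 0
x = H @ w + c
```

For part (b), a Cholesky factorization would fail because the 4×4 covariance is only rank 2; there one instead builds H from the orthonormal eigenvectors and the square roots of the nonzero eigenvalues, as the eigen-decomposition in the text sets up.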
Spring '07 — Todd Brun