EE 278 Statistical Signal Processing
Handout #19, Friday, August 14, 2009

Final Exam Solutions

1. (13 points)

(a) (9 points) We first calculate the expectations that will be used later on:

$$E(X^2) = 1, \qquad E(XY_1) = E(X^2) + E(XZ_1) = 1, \qquad E(XY_2) = 1,$$
$$E(Y_1^2) = E(X^2) + E(Z_1^2) + 2E(XZ_1) = 1 + 1 + 0 = 2, \qquad E(Y_2^2) = 2, \qquad E(Y_1 Y_2) = 1 + \rho.$$

Since $X$, $Y_1$, and $Y_2$ are all zero mean, the estimator has the form $\hat{X} = aY_1 + bY_2$. From the orthogonality principle,

$$E\big(Y_1(X - aY_1 - bY_2)\big) = 0 \;\Rightarrow\; 1 = 2a + (1 + \rho)b,$$
$$E\big(Y_2(X - aY_1 - bY_2)\big) = 0 \;\Rightarrow\; 1 = (1 + \rho)a + 2b.$$

Solving this pair of equations gives

$$a_{\text{opt}} = b_{\text{opt}} = \frac{1}{3 + \rho}.$$

The mean square error of this estimator is

$$E(X - a_{\text{opt}}Y_1 - b_{\text{opt}}Y_2)^2
  = E(X^2) - \begin{bmatrix} E(XY_1) & E(XY_2) \end{bmatrix}
             \begin{bmatrix} a_{\text{opt}} \\ b_{\text{opt}} \end{bmatrix}
  = 1 - \frac{2}{\rho + 3} = \frac{\rho + 1}{\rho + 3}.$$

(b) (4 points) For the second part we differentiate the MMSE with respect to $\rho$:

$$\frac{\partial\,\text{MMSE}}{\partial \rho}
  = \frac{(\rho + 3) - (\rho + 1)}{(\rho + 3)^2}
  = \frac{2}{(\rho + 3)^2} > 0.$$

Since the MMSE is strictly increasing in $\rho$, the minimum occurs at $\rho = -1$ (MMSE $= 0$) and the worst case at $\rho = 1$ (MMSE $= 1/2$). This also makes intuitive sense: when the noises are negatively correlated they partially cancel each other in the estimate, which is the most favorable case.

2. (15 points) Convergence in distribution. From the definition of $s(x)$ and the CDF of $X_n$, $F_{X_n}(x) = 1 - e^{-\lambda x/n}$, we can compute the CDF of $Y_n$ directly. Let $0 < y < 1$. Then

$$\begin{aligned}
F_{Y_n}(y) &= P\{Y_n \le y\}
  = P\{X_n \in (0, y] \text{ or } X_n \in (1, 1+y] \text{ or } X_n \in (2, 2+y] \text{ or } \ldots\} \\
&= \sum_{i=0}^{\infty} P\{X_n \in (i, i+y]\}
  = \sum_{i=0}^{\infty} \left( e^{-\lambda i/n} - e^{-\lambda (i+y)/n} \right)
  = \left( \sum_{i=0}^{\infty} e^{-\lambda i/n} \right) \left( 1 - e^{-\lambda y/n} \right)
  = \frac{1 - e^{-\lambda y/n}}{1 - e^{-\lambda/n}}.
\end{aligned}$$

Both numerator and denominator tend to $0$ as $n \to \infty$, so by l'Hôpital's rule (differentiating with respect to $n$),

$$\lim_{n \to \infty} F_{Y_n}(y)
  = \lim_{n \to \infty} \frac{(-e^{-\lambda y/n})(\lambda y/n^2)}{(-e^{-\lambda/n})(\lambda/n^2)}
  = y \lim_{n \to \infty} \frac{e^{-\lambda y/n}}{e^{-\lambda/n}} = y.$$

Thus $Y_n$ converges in distribution to a uniform random variable: $Y_n \to Y \sim \text{Unif}[0, 1]$ in distribution.

3. (27 points)

• Process 1: First notice that the process is a Markov process: given $X(t-1)$, the only source of randomness remaining in $X(t)$ is $Z(t)$, which is independent of $X(1), X(2), \ldots, X(t-2)$. To check wide-sense stationarity we examine the first- and second-order moments. We know that $E(Z(t)) = 0$ and $E(Z(t)^2) = 1/3$. For the first moment of the process,

$$E(X(t)) = \alpha E(X(t-1)) + E(Z(t)) = \alpha E(X(t-1))
  \;\Rightarrow\; E(X(t)) = \alpha^{t-1} E(X(1)) = 0.$$

...
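The closed-form result of Problem 1 is easy to sanity-check by Monte Carlo. The sketch below assumes a concrete model consistent with the stated moments: $X \sim N(0,1)$ and $Y_i = X + Z_i$, with $Z_1, Z_2$ zero mean, unit variance, and correlation $\rho$, independent of $X$. The Gaussian choice is an assumption made only for simulation; the solution itself uses second moments alone.

```python
# Monte Carlo check of the LMMSE coefficients and MMSE from Problem 1.
import numpy as np

rng = np.random.default_rng(0)
rho = 0.5
n = 1_000_000

X = rng.standard_normal(n)
# Draw (Z1, Z2) with covariance [[1, rho], [rho, 1]] so E(Y1*Y2) = 1 + rho.
cov = np.array([[1.0, rho], [rho, 1.0]])
Z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
Y1, Y2 = X + Z[:, 0], X + Z[:, 1]

a = b = 1.0 / (3.0 + rho)  # a_opt = b_opt = 1/(3 + rho) from the solution
mse = np.mean((X - a * Y1 - b * Y2) ** 2)
print(f"empirical MSE = {mse:.4f}, predicted (rho+1)/(rho+3) = {(rho + 1) / (rho + 3):.4f}")
```

For $\rho = 0.5$ both numbers come out near $3/7 \approx 0.4286$, matching the derived MMSE.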
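Problem 2's limit can also be confirmed by simulation. The sketch below assumes $s(x)$ is the fractional-part (sawtooth) map, $s(x) = x \bmod 1$, which is consistent with the interval events $\{X_n \in (i, i+y]\}$ in the derivation, and that $X_n$ is exponential with the stated CDF.

```python
# Empirical CDF of Y_n = s(X_n) vs. the exact formula and the Unif[0,1] limit.
import numpy as np

rng = np.random.default_rng(1)
lam, n, m = 2.0, 1000, 500_000

# X_n has CDF 1 - exp(-lam*x/n), i.e. exponential with rate lam/n (mean n/lam).
Xn = rng.exponential(scale=n / lam, size=m)
Yn = Xn % 1.0  # assumed sawtooth map s(x): keep the fractional part

for y in (0.1, 0.5, 0.9):
    exact = (1 - np.exp(-lam * y / n)) / (1 - np.exp(-lam / n))
    print(f"y={y}: empirical={np.mean(Yn <= y):.4f}, exact F_Yn={exact:.4f}, limit={y}")
```

For large $n$ the empirical CDF, the exact expression, and the limit $y$ all agree closely, illustrating $Y_n \to \text{Unif}[0,1]$ in distribution.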
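Finally, a minimal simulation of Process 1 from Problem 3 makes the mean recursion concrete. The recursion $X(t) = \alpha X(t-1) + Z(t)$ is implied by the solution's first-moment computation; taking $Z(t) \sim \text{Unif}[-1, 1]$ and initializing with $X(1) = Z(1)$ are assumptions chosen only to match $E(Z(t)) = 0$, $E(Z(t)^2) = 1/3$, and $E(X(1)) = 0$.

```python
# Sample-mean check that E(X(t)) = alpha^(t-1) E(X(1)) = 0 for all t.
import numpy as np

rng = np.random.default_rng(2)
alpha, T, paths = 0.9, 200, 20_000

# Unif[-1, 1] has mean 0 and variance 1/3, matching the stated moments of Z(t).
Z = rng.uniform(-1.0, 1.0, size=(paths, T))
X = np.empty((paths, T))
X[:, 0] = Z[:, 0]  # assumed zero-mean initialization
for t in range(1, T):
    X[:, t] = alpha * X[:, t - 1] + Z[:, t]

# The sample mean across paths stays near zero at every time step.
print(f"max |sample mean| over t: {np.abs(X.mean(axis=0)).max():.4f}")
```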