EE 278 Statistical Signal Processing
Handout #13, November 4, 2009
Sample Midterm Examination Problems

1. Inequalities. Label each of the following statements with =, ≤, ≥, or None. Label a statement with = if equality always holds. Label a statement with ≤ or ≥ if the corresponding inequality holds in general and strict inequality holds sometimes. If no such equality or inequality holds in general, label the statement as None. Justify your answers.

a. P(A) vs. 1 − (P(A^c, B) + P(A^c, B^c)).
b. E(X1 X2 | X3) vs. E(X1 | X3) E(X2 | X3) if X1 and X2 are independent.
c. E[Var(X | Y, Z)] vs. E[Var(X | Y)].
d. E[Var(X | Y)] vs. E[Var(X | g(Y))]. (Hint: use the result of part (c).)
e. E_Z[E(X^2 | Z) E(Y^2 | Z)] vs. [E_Z(Cov(X, Y | Z))]^2.
f. E[log2(1 + sqrt(X))] vs. 1 if X ≥ 0 and E(X) ≤ 1.
g. P{(XY)^2 > 16} vs. 1/8 if E(X^4) = E(Y^4) = 2.

Solution

a. =. By the law of total probability,

   P(A) = 1 − P(A^c) = 1 − (P(A^c, B) + P(A^c, B^c)).

b. None. Independence does not necessarily imply conditional independence.

c. ≤. By the law of conditional variance (conditioning both sides on Y and then taking expectations), it follows that

   E[Var(X | Y)] = E[Var(X | Y, Z)] + E[Var(E(X | Y, Z) | Y)].

   Thus E[Var(X | Y, Z)] ≤ E[Var(X | Y)]. This makes sense: with more observations (Y, Z), the MSE of the best estimate of the signal X should be no larger than when observing Y alone.

d. ≤. From the previous result, it follows that

   E[Var(X | Y, g(Y))] ≤ E[Var(X | g(Y))].

   But g(Y) is completely determined by Y, thus E[Var(X | Y, g(Y))] = E[Var(X | Y)]. This result makes sense because in general Y provides at least as much information about the signal X as any function of it does.

e. ≥. First note that E(X^2 | Z) ≥ Var(X | Z) and E(Y^2 | Z) ≥ Var(Y | Z). Thus

   E(X^2 | Z) E(Y^2 | Z) ≥ Var(X | Z) Var(Y | Z).

   Now, using the Cauchy–Schwarz inequality, we obtain

   Var(X | Z) Var(Y | Z) ≥ (Cov(X, Y | Z))^2.
   Taking expectations of both sides, we obtain

   E[Var(X | Z) Var(Y | Z)] ≥ E[(Cov(X, Y | Z))^2].

   But by Jensen's inequality, E[(Cov(X, Y | Z))^2] ≥ [E(Cov(X, Y | Z))]^2.

f. ≤. We use Jensen's inequality twice and the fact that E(X) ≤ 1:

   E[log2(1 + sqrt(X))] ≤ log2(1 + E(sqrt(X))) ≤ log2(1 + sqrt(E(X))) ≤ log2(1 + 1) = 1.

...
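The inequalities in parts (c), (f), and (g) can be sanity-checked by Monte Carlo simulation. The sketch below picks specific illustrative distributions that are not part of the problem: a jointly Gaussian model X = Y + Z + W for part (c), an exponential X with mean 1 for part (f), and independent zero-mean Gaussians scaled so that E(X^4) = 2 for part (g).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

# Part (c): E[Var(X|Y,Z)] <= E[Var(X|Y)].
# With X = Y + Z + W and Y, Z, W i.i.d. N(0,1), the MMSE estimates are
# E(X|Y) = Y and E(X|Y,Z) = Y + Z; mean squared residuals estimate both sides.
Y = rng.standard_normal(n)
Z = rng.standard_normal(n)
W = rng.standard_normal(n)
X = Y + Z + W
mse_Y = np.mean((X - Y) ** 2)          # approximates E[Var(X|Y)]   = 2
mse_YZ = np.mean((X - (Y + Z)) ** 2)   # approximates E[Var(X|Y,Z)] = 1
assert mse_YZ <= mse_Y

# Part (f): E[log2(1 + sqrt(X))] <= 1 when X >= 0 and E(X) <= 1.
# An exponential with mean 1 satisfies both conditions.
Xf = rng.exponential(1.0, n)
lhs_f = np.mean(np.log2(1 + np.sqrt(Xf)))
assert lhs_f <= 1.0

# Part (g): P{(XY)^2 > 16} <= 1/8 when E(X^4) = E(Y^4) = 2.
# For X ~ N(0, s^2), E(X^4) = 3*s^4, so s = (2/3)**0.25 gives E(X^4) = 2.
s = (2.0 / 3.0) ** 0.25
Xg = s * rng.standard_normal(n)
Yg = s * rng.standard_normal(n)
prob_g = np.mean((Xg * Yg) ** 2 > 16)
assert prob_g <= 1 / 8
```

Observing fewer samples or other distributions satisfying the stated moment constraints should still respect the inequalities, since each holds in general.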
Fall '09, Balaji Prabhakar, Signal Processing