# EE 278 Statistical Signal Processing: Homework #3 Solutions


EE 278 Statistical Signal Processing, Summer 2009
Monday, July 27, 2009, Handout #7
Homework #3 Solutions

1. (20 points) Schwarz inequality.

a. Consider the quadratic equation in the parameter $a$:
$$0 = E((X + aY)^2) = E(X^2) + 2a\,E(XY) + a^2 E(Y^2).$$
Since $E((X + aY)^2)$ is the expected value of a nonnegative random variable, $E((X + aY)^2) \ge 0$ for every $a$. If $E((X + aY)^2) > 0$ for all $a$, the quadratic has two complex (non-real) roots, while if $E((X + aY)^2) = 0$ for some $a$, it has one real root. In either case the discriminant must satisfy
$$4(E(XY))^2 - 4E(X^2)E(Y^2) \le 0,$$
which we can rewrite as
$$(E(XY))^2 \le E(X^2)E(Y^2).$$

Here is another proof. Let $a = \pm\sqrt{E(X^2)/E(Y^2)}$. Plugging these values into $E((X + aY)^2) \ge 0$, we obtain
$$-E(XY) \le \sqrt{E(X^2)E(Y^2)} \quad\text{and}\quad E(XY) \le \sqrt{E(X^2)E(Y^2)}.$$
Combining the two inequalities for $E(XY)$ yields
$$|E(XY)| \le \sqrt{E(X^2)E(Y^2)}, \quad\text{i.e.,}\quad E^2(XY) \le E(X^2)E(Y^2),$$
which is what we set out to prove.

b. If $X = cY$ then
$$E^2(XY) = E^2(cY^2) = c^2 (E(Y^2))^2 = E((cY)^2)\,E(Y^2) = E(X^2)E(Y^2).$$
Conversely, if $E^2(XY) = E(X^2)E(Y^2)$ then the discriminant of the quadratic equation in part (a) is 0. Therefore $E((X + aY)^2) = 0$ for some $a$, which means that $X + aY = 0$ with probability 1. Hence $X = -aY$ with probability 1. Clearly, $a = \pm\sqrt{E(X^2)/E(Y^2)}$.

c. The square of the correlation coefficient is by definition
$$\rho_{X,Y}^2 = \frac{\mathrm{Cov}^2(X, Y)}{\mathrm{Var}(X)\,\mathrm{Var}(Y)}.$$
If we define the random variables $U = X - E(X)$ and $V = Y - E(Y)$, then
$$\rho_{X,Y}^2 = \frac{(E(UV))^2}{E(U^2)E(V^2)} \le 1,$$
where the inequality follows from the Schwarz inequality. Therefore $|\rho_{X,Y}| \le 1$.
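The inequalities in parts (a) and (c) can also be sanity-checked numerically. The sketch below (an illustrative addition, not part of the original handout; it assumes numpy is available) verifies them on sample moments, where both hold exactly because an empirical distribution is itself a valid probability distribution:

```python
# Empirical check of the Schwarz inequality (part a) and the correlation
# bound (part c). Illustrative only; the sample sizes and the relation
# y = 0.5 x + noise are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = 0.5 * x + rng.normal(size=100_000)   # Y correlated with X

# (a) E^2(XY) <= E(X^2) E(Y^2), with expectations replaced by sample means
assert np.mean(x * y) ** 2 <= np.mean(x ** 2) * np.mean(y ** 2)

# (c) |rho_{X,Y}| <= 1 for the sample correlation coefficient
rho = np.corrcoef(x, y)[0, 1]
assert abs(rho) <= 1.0
```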

d. By the Schwarz inequality,
$$E^2(XY) \le E(X^2)E(Y^2) \quad\Longrightarrow\quad E(XY) \le \sqrt{E(X^2)E(Y^2)}.$$
Therefore
$$E((X + Y)^2) = E(X^2) + 2E(XY) + E(Y^2) \le E(X^2) + 2\sqrt{E(X^2)}\sqrt{E(Y^2)} + E(Y^2) = \left(\sqrt{E(X^2)} + \sqrt{E(Y^2)}\right)^2.$$

2. (20 points) Jensen inequality.

a. For two $x$ points with nonzero $p_X(x)$, the proof follows by the definition of convexity. Now, suppose that the proof holds for $n$ $x$ points with nonzero $p_X(x)$. To prove that it holds for $n + 1$ points, we need to show that
$$g\left(\sum_{i=1}^{n+1} p_X(x_i)\, x_i\right) \le \sum_{i=1}^{n+1} p_X(x_i)\, g(x_i).$$
Define
$$\lambda = \sum_{i=1}^{n} p_X(x_i), \quad\text{so}\quad p_X(x_{n+1}) = 1 - \lambda.$$
Then
$$\sum_{i=1}^{n+1} p_X(x_i)\, g(x_i) = (1 - \lambda)\, g(x_{n+1}) + \lambda \sum_{i=1}^{n} \frac{p_X(x_i)}{\lambda}\, g(x_i) \ge (1 - \lambda)\, g(x_{n+1}) + \lambda\, g\left(\sum_{i=1}^{n} \frac{p_X(x_i)}{\lambda}\, x_i\right) \ge g\left((1 - \lambda)\, x_{n+1} + \lambda \sum_{i=1}^{n} \frac{p_X(x_i)}{\lambda}\, x_i\right) = g\left(\sum_{i=1}^{n+1} p_X(x_i)\, x_i\right),$$
where the first inequality follows from the induction hypothesis and the second follows from the definition of convexity.

b. Once a function $g(X)$ has been shown to be convex, applying Jensen's inequality $E(g(X)) \ge g(E(X))$ gives the stated relationship.

i. $e^{2x}$ is convex since $\frac{d^2}{dx^2}\, e^{2x} = 4e^{2x} \ge 0$, and therefore $E(e^{2X}) \ge e^{2E(X)}$.

ii. $-\ln(x)$ is convex since $\frac{d^2}{dx^2}\left(-\ln(x)\right) = \frac{1}{x^2} \ge 0$,
and therefore $E(-\ln(X)) \ge -\ln(E(X))$. Consequently, by linearity of expectation, $E(\ln(X)) \le \ln(E(X))$.

iii. Let $Y = X^2$. Now $y^6$ is convex since $\frac{d^2}{dy^2}\, y^6 = 30y^4 \ge 0$, and therefore
$$E(X^{12}) = E(Y^6) \ge (E(Y))^6 = (E(X^2))^6.$$

3. (10 points) Conditional expectation. We are given $X \mid (\Lambda = \lambda) \sim \mathrm{Exp}(\lambda)$. Therefore $E(X \mid \Lambda = \lambda) = \frac{1}{\lambda}$, and
$$E(X) = E_\Lambda\left(E_X(X \mid \Lambda = \lambda)\right) = E_\Lambda\left(\frac{1}{\Lambda}\right) = \int_{-\infty}^{+\infty} \frac{1}{\lambda}\, f_\Lambda(\lambda)\, d\lambda = \int_0^1 \frac{1}{\lambda} \cdot \frac{5}{3}\lambda^{2/3}\, d\lambda = \frac{5}{3}\int_0^1 \lambda^{-1/3}\, d\lambda = \frac{5}{3}\left[\frac{\lambda^{2/3}}{2/3}\right]_0^1 = \frac{5}{2}.$$
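The three Jensen-inequality results in problem 2(b) can be checked empirically. The sketch below (an illustrative addition, not part of the original handout; it assumes numpy is available) verifies each inequality on sample means, where it holds exactly because the empirical distribution of any sample is itself a valid probability distribution:

```python
# Empirical check of problem 2(b). The uniform sample on (0.1, 2.0) is an
# arbitrary choice of a positive random variable so that ln(x) is defined.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.1, 2.0, size=100_000)

# (i)  e^{2x} is convex:  E(e^{2X}) >= e^{2 E(X)}
assert np.mean(np.exp(2 * x)) >= np.exp(2 * np.mean(x))

# (ii) -ln(x) is convex:  E(ln X) <= ln(E(X))
assert np.mean(np.log(x)) <= np.log(np.mean(x))

# (iii) with Y = X^2 and y^6 convex:  E(X^12) >= (E(X^2))^6
assert np.mean(x ** 12) >= np.mean(x ** 2) ** 6
```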
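The closed-form answer $E(X) = 5/2$ in problem 3 can be confirmed by evaluating the integral numerically. The sketch below (an illustrative addition, not part of the original handout; it assumes numpy is available) uses a midpoint rule, which handles the integrable singularity of $\lambda^{-1/3}$ at 0 since no grid point sits at the endpoint:

```python
# Midpoint-rule approximation of E(X) = int_0^1 (1/l) * (5/3) l^(2/3) dl.
# The closed form gives exactly 5/2.
import numpy as np

n = 1_000_000
h = 1.0 / n
lam = (np.arange(n) + 0.5) * h                      # cell midpoints in (0, 1)
integrand = (1.0 / lam) * (5.0 / 3.0) * lam ** (2.0 / 3.0)
e_x = h * integrand.sum()                           # midpoint-rule estimate
assert abs(e_x - 2.5) < 1e-3                        # agrees with 5/2
```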

