
Since $p_\Theta(1) = p_\Theta(2) = \frac{1}{2}$ and $p_{X_1X_2}(x_1, x_2)$ is constant for any given $(x_1, x_2)$, the MAP rule reduces to the maximum likelihood rule, which sets $D(x_1, x_2)$ to the $\theta$ that maximizes $p_{X_1X_2\mid\Theta}(x_1, x_2\mid\theta)$. Now

$$
p_{X_1X_2\mid\Theta}(x_1, x_2\mid\theta)
= \int_{-\infty}^{\infty} f_{X_1X_2P\mid\Theta}(x_1, x_2, p\mid\theta)\,dp
= \int_{-\infty}^{\infty} p_{X_1X_2\mid P\Theta}(x_1, x_2\mid p, \theta)\, f_{P\mid\Theta}(p\mid\theta)\,dp
= \int_{-\infty}^{\infty} p_{X_1\mid P}(x_1\mid p)\, p_{X_2\mid P}(x_2\mid p)\, f_{P\mid\Theta}(p\mid\theta)\,dp.
$$

The last step follows from the fact that, given $P$, $X_1$ and $X_2$ are independent and no longer depend on $\Theta$.
Therefore

$$
p_{X_1X_2\mid\Theta}(0, 0\mid 2)
= \int_{-\infty}^{\infty} p_{X_1\mid P}(0\mid p)\, p_{X_2\mid P}(0\mid p)\, f_{P\mid\Theta}(p\mid 2)\,dp
= \int_0^1 (1-p)^2 \cdot 1\,dp
= \left[\, p - p^2 + \frac{p^3}{3} \,\right]_0^1
= \frac{1}{3}.
$$

The second step follows from the fact that given $P = p$, $X_1$ and $X_2$ are independent $\mathrm{Bern}(p)$ random variables. Similarly,

$$
p_{X_1X_2\mid\Theta}(0, 1\mid 2) = p_{X_1X_2\mid\Theta}(1, 0\mid 2) = \int_0^1 p(1-p)\cdot 1\,dp = \frac{1}{6},
\qquad
p_{X_1X_2\mid\Theta}(1, 1\mid 2) = \int_0^1 p^2 \cdot 1\,dp = \frac{1}{3}.
$$

On the other hand,

$$
p_{X_1X_2\mid\Theta}(0, 0\mid 1) = p_{X_1X_2\mid\Theta}(0, 1\mid 1) = p_{X_1X_2\mid\Theta}(1, 0\mid 1) = p_{X_1X_2\mid\Theta}(1, 1\mid 1) = \frac{1}{4}.
$$

Thus the optimal decision rule is

$$
D(x_1, x_2) = \begin{cases} 2 & x_1 = x_2 \\ 1 & \text{otherwise.} \end{cases}
$$

(b) The probability of error is

$$
P_e = \mathrm{P}\{D(X_1, X_2) \ne \Theta\}
= p_{X_1X_2\Theta}(0, 0, 1) + p_{X_1X_2\Theta}(0, 1, 2) + p_{X_1X_2\Theta}(1, 0, 2) + p_{X_1X_2\Theta}(1, 1, 1)
$$
$$
= p_\Theta(1)\bigl(p_{X_1X_2\mid\Theta}(0, 0\mid 1) + p_{X_1X_2\mid\Theta}(1, 1\mid 1)\bigr) + p_\Theta(2)\bigl(2\, p_{X_1X_2\mid\Theta}(0, 1\mid 2)\bigr)
= \frac{1}{2}\left(\frac{1}{4} + \frac{1}{4}\right) + \frac{1}{2}\cdot\frac{2}{6}
= \frac{1}{4} + \frac{1}{6}
= \frac{5}{12}.
$$
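As a sanity check, the result $P_e = 5/12 \approx 0.4167$ can be reproduced by simulation. This is a sketch under the model implied by the solution (an assumption, since the problem statement is not included here): $\Theta$ is uniform on $\{1, 2\}$; when $\Theta = 1$ the coin is fair, and when $\Theta = 2$ its bias $P$ is uniform on $[0, 1]$.

```python
import random

def simulate(trials=200_000, seed=0):
    """Monte Carlo estimate of the error probability of the rule
    D(x1, x2) = 2 if x1 == x2 else 1, under the assumed model:
    Theta uniform on {1, 2}; P = 1/2 if Theta = 1, P ~ Unif[0, 1] if Theta = 2."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        theta = rng.choice((1, 2))
        p = 0.5 if theta == 1 else rng.random()
        x1 = rng.random() < p  # Bern(p) flips, independent given P = p
        x2 = rng.random() < p
        decision = 2 if x1 == x2 else 1
        errors += (decision != theta)
    return errors / trials

print(simulate())  # should land close to 5/12
```

With 200,000 trials the estimate typically agrees with $5/12$ to within about $\pm 0.003$.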
4. (a) (3 points) With $U(x)$ denoting the unit step function, argue as follows:

$$
f_X(x) = \lambda e^{-\lambda x}\, U(x)
$$
$$
f_{Y\mid X}(y\mid x) = f_{Z\mid X}(y - x\mid x) = f_Z(y - x) = \nu e^{-\nu(y-x)}\, U(y - x)
$$
$$
f_{XY}(x, y) = f_X(x)\, f_{Y\mid X}(y\mid x) = \lambda\nu e^{-\lambda x} e^{-\nu(y-x)}\, U(x)\, U(y - x) = \lambda\nu e^{(\nu-\lambda)x - \nu y}\, U(x)\, U(y - x).
$$

(b) (5 points) Since $Y = X + Z$ and $X$ and $Z$ are independent, $f_Y$ can be computed as the convolution of $f_X$ and $f_Z$. Thus, for $y \ge 0$,

$$
f_Y(y) = \int_0^y \nu e^{-\nu z}\, \lambda e^{-\lambda(y-z)}\,dz
= \nu\lambda e^{-\lambda y} \int_0^y e^{z(\lambda-\nu)}\,dz
= \nu\lambda e^{-\lambda y}\, \frac{1}{\lambda-\nu}\Bigl[\, e^{z(\lambda-\nu)} \Bigr]_0^y
= \frac{\nu\lambda}{\lambda-\nu}\, e^{-\lambda y}\bigl(e^{y(\lambda-\nu)} - 1\bigr)
= \frac{\nu\lambda}{\nu-\lambda}\bigl(e^{-\lambda y} - e^{-\nu y}\bigr),
$$

so

$$
f_Y(y) = U(y)\,\frac{\nu\lambda}{\nu-\lambda}\bigl(e^{-\lambda y} - e^{-\nu y}\bigr).
$$

(c) (4 points) Use the definition of conditional PDF,

$$
f_{X\mid Y}(x\mid y) = \frac{f_{XY}(x, y)}{f_Y(y)}.
$$

Restricting the arguments to $y \ge x \ge 0$ and using the previous results,

$$
f_{X\mid Y}(x\mid y)
= \frac{\lambda\nu e^{(\nu-\lambda)x - \nu y}}{\frac{\nu\lambda}{\nu-\lambda}\bigl(e^{-\lambda y} - e^{-\nu y}\bigr)}
= \frac{(\nu-\lambda)\, e^{(\nu-\lambda)x}}{e^{(\nu-\lambda)y} - 1}.
$$

For $x \notin [0, y]$, we have $f_{X\mid Y}(x\mid y) = 0$, since $X, Z > 0$ and $Y = X + Z$.

(d) (5 points) The MMSE estimate is $\mathrm{E}(X\mid Y)$. We already know the conditional pdf $f_{X\mid Y}(x\mid y)$, which we can integrate against to obtain the conditional expectation. For $y \ge 0$,

$$
\mathrm{E}(X\mid Y = y) = \frac{\nu-\lambda}{e^{(\nu-\lambda)y} - 1} \int_0^y x e^{(\nu-\lambda)x}\,dx.
$$
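The density $f_Y$ derived in part (b) can be checked numerically. Integrating it gives the CDF $F_Y(y) = 1 - \bigl(\nu e^{-\lambda y} - \lambda e^{-\nu y}\bigr)/(\nu - \lambda)$, which a Monte Carlo draw of $Y = X + Z$ should match; the rates $\lambda = 1$, $\nu = 2$ below are illustrative assumptions (any $\lambda \ne \nu$ works):

```python
import math
import random

LAM, NU = 1.0, 2.0  # example rates, an assumption for illustration

def F_Y(y):
    """CDF obtained by integrating f_Y(y) = nu*lam/(nu-lam) * (e^{-lam*y} - e^{-nu*y})."""
    return 1.0 - (NU * math.exp(-LAM * y) - LAM * math.exp(-NU * y)) / (NU - LAM)

def empirical_cdf(y, trials=200_000, seed=1):
    """Empirical P(Y <= y) for Y = X + Z with X ~ Exp(lam), Z ~ Exp(nu) independent."""
    rng = random.Random(seed)
    count = sum(rng.expovariate(LAM) + rng.expovariate(NU) <= y for _ in range(trials))
    return count / trials

for y in (0.5, 1.0, 2.0):
    print(y, F_Y(y), empirical_cdf(y))
```

The two columns agree to a few parts in a thousand, which is consistent with the $O(1/\sqrt{n})$ Monte Carlo error.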
Let $c = \nu - \lambda$. Using integration by parts,

$$
\int x e^{cx}\,dx = \frac{x}{c} e^{cx} - \int \frac{1}{c} e^{cx}\,dx = \frac{x}{c} e^{cx} - \frac{1}{c^2} e^{cx} = \frac{1}{c}\left(x - \frac{1}{c}\right) e^{cx},
$$

so

$$
\mathrm{E}(X\mid Y = y)
= \frac{c}{e^{cy}-1} \cdot \frac{1}{c}\left[\left(x - \frac{1}{c}\right) e^{cx}\right]_0^y
= \frac{1}{e^{cy}-1}\left(\left(y - \frac{1}{c}\right) e^{cy} + \frac{1}{c}\right)
= -\frac{1}{c} + \frac{y e^{cy}}{e^{cy}-1}
= \frac{y}{1 - e^{-(\nu-\lambda)y}} - \frac{1}{\nu-\lambda}.
$$

(e) (3 points) We need to find

$$
\hat{X}_{\mathrm{MAP}} = \arg\max_{0 \le x \le y} \frac{(\nu-\lambda)\, e^{(\nu-\lambda)x}}{e^{(\nu-\lambda)y} - 1}.
$$

Since $\nu - \lambda > 0$, the function to be maximized is monotonically increasing in $x$, so the maximum is attained at the largest allowable $x$. We conclude $\hat{X}_{\mathrm{MAP}} = y$.
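The closed form for $\mathrm{E}(X\mid Y=y)$ can be verified by integrating $x\,f_{X\mid Y}(x\mid y)$ numerically over $[0, y]$ and comparing. The rates below are illustrative assumptions with $\nu > \lambda$:

```python
import math

LAM, NU = 1.0, 2.0  # example rates with nu > lam, an assumption for illustration
C = NU - LAM

def f_x_given_y(x, y):
    """Conditional density from part (c), valid for 0 <= x <= y."""
    return C * math.exp(C * x) / (math.exp(C * y) - 1.0)

def mmse_closed_form(y):
    """Closed form from part (d): y / (1 - e^{-C y}) - 1 / C."""
    return y / (1.0 - math.exp(-C * y)) - 1.0 / C

def mmse_numeric(y, n=100_000):
    """Trapezoidal quadrature of E(X | Y = y) = integral_0^y x f_{X|Y}(x|y) dx."""
    h = y / n
    total = 0.5 * (0.0 + y * f_x_given_y(y, y))  # endpoint terms (x = 0 contributes 0)
    for k in range(1, n):
        x = k * h
        total += x * f_x_given_y(x, y)
    return total * h

y = 1.5
print(mmse_closed_form(y), mmse_numeric(y))
```

The quadrature result matches the closed form to high precision, which checks both the integration-by-parts step and the normalization of $f_{X\mid Y}$.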
EE 278, Statistical Signal Processing
Saturday, July 18, 2009
Handout #8: Sample Midterm

This is a sample midterm.
