# MIT6_041F10_assn11_sol


Massachusetts Institute of Technology
Department of Electrical Engineering & Computer Science
6.041/6.431: Probabilistic Systems Analysis (Fall 2010)

**Problem Set 11 Solutions**

1. Check book solutions.

2. (a) To find the MAP estimate, we need the value $x$ that maximizes the conditional density $f_{X|Y}(x \mid y)$; we take its derivative with respect to $x$ and set it to 0. By Bayes' rule,

$$f_{X|Y}(x \mid y) = \frac{p_{Y|X}(y \mid x)\, f_X(x)}{p_Y(y)} = \frac{e^{-x} x^y}{y!} \cdot \frac{\mu e^{-\mu x}}{p_Y(y)} = \frac{\mu}{y!\, p_Y(y)}\, x^y e^{-(\mu+1)x}.$$

Differentiating,

$$\frac{d}{dx} f_{X|Y}(x \mid y) = \frac{\mu}{y!\, p_Y(y)}\, x^{y-1} e^{-(\mu+1)x} \bigl(y - x(\mu+1)\bigr).$$

Since the only factor that depends on $x$ and can take the value 0 is $\bigl(y - x(\mu+1)\bigr)$, the maximum is achieved at

$$\hat{x}_{\text{MAP}}(y) = \frac{y}{1+\mu}.$$

It is easy to check that this value is indeed a maximum (the first derivative changes from positive to negative at this value).

(b) i. To show the given identity, we use Bayes' rule. We first compute the denominator $p_Y(y)$:

$$p_Y(y) = \int_0^\infty \frac{e^{-x} x^y}{y!}\, \mu e^{-\mu x}\, dx = \frac{\mu}{(1+\mu)^{y+1}} \int_0^\infty \frac{(1+\mu)^{y+1} x^y e^{-(1+\mu)x}}{y!}\, dx = \frac{\mu}{(1+\mu)^{y+1}},$$

where the last integral equals 1 because the integrand is an Erlang density of order $y+1$ with rate $1+\mu$. Then we substitute into the equation derived in part (a):

$$f_{X|Y}(x \mid y) = \frac{\mu}{y!\, p_Y(y)}\, x^y e^{-(\mu+1)x} = \frac{\mu\,(1+\mu)^{y+1}}{\mu\, y!}\, x^y e^{-(\mu+1)x} = \frac{(1+\mu)^{y+1}}{y!}\, x^y e^{-(\mu+1)x}.$$

Thus $\lambda = 1 + \mu$.
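As a quick numerical sanity check of the MAP derivation above, one can evaluate the log-posterior $y \log x - (\mu+1)x$ (the log of $x^y e^{-(\mu+1)x}$, up to a constant) on a grid and verify that the maximizer lands near $y/(1+\mu)$. The values of $\mu$ and $y$ below are arbitrary illustrative choices, not part of the problem statement.

```python
import math

# Illustrative (assumed) values, not from the problem statement.
mu, y = 2.0, 5

def log_posterior(x):
    # Log of x**y * exp(-(mu+1)*x), dropping constants in x.
    return y * math.log(x) - (mu + 1) * x

# Grid search over x in (0, 20].
grid = [i * 1e-4 for i in range(1, 200001)]
x_map = max(grid, key=log_posterior)

print(round(x_map, 3))          # should be close to y/(1+mu) = 5/3
print(round(y / (1 + mu), 3))
```

Since the log-posterior is strictly concave in $x$, the grid argmax sits within one grid step of the analytical maximizer.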

ii. We first manipulate $x f_{X|Y}(x \mid y)$:

$$x f_{X|Y}(x \mid y) = \frac{(1+\mu)^{y+1}}{y!}\, x^{y+1} e^{-(\mu+1)x} = \frac{y+1}{1+\mu} \cdot \frac{(1+\mu)^{y+2}}{(y+1)!}\, x^{y+1} e^{-(\mu+1)x} = \frac{y+1}{1+\mu}\, f_{X|Y}(x \mid y+1).$$

Now we can find the conditional expectation estimator:

$$\hat{x}_{\text{CE}}(y) = \mathbf{E}[X \mid Y = y] = \int_0^\infty x f_{X|Y}(x \mid y)\, dx = \frac{y+1}{1+\mu} \int_0^\infty f_{X|Y}(x \mid y+1)\, dx = \frac{y+1}{1+\mu}.$$

(c) The conditional expectation estimator is always higher than the MAP estimator by $\frac{1}{1+\mu}$.

3. (a) The likelihood function is

$$\mathbf{P}(T_1 = t_1, \ldots, T_k = t_k \mid Q = q) = q^k (1-q)^{\sum_{i=1}^k t_i - k}.$$

To maximize this probability we set its derivative with respect to $q$ to zero:

$$k q^{k-1} (1-q)^{\sum_{i=1}^k t_i - k} - \Bigl(\textstyle\sum_{i=1}^k t_i - k\Bigr) q^k (1-q)^{\sum_{i=1}^k t_i - k - 1} = 0,$$

or equivalently

$$k(1-q) - \Bigl(\textstyle\sum_{i=1}^k t_i - k\Bigr) q = 0,$$

which yields

$$\hat{Q}_k = \frac{k}{\sum_{i=1}^k T_i}.$$
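The ML estimator in 3(a) can be checked by simulation: draw $k$ geometric samples with success probability $q$ and confirm that $k / \sum_i t_i$ lands near the true $q$. The true $q$, sample size, and seed below are arbitrary choices for illustration, not values from the problem.

```python
import random

random.seed(0)

q_true = 0.3       # assumed true success probability (illustrative)
k = 100_000        # number of geometric samples

def geometric(q):
    """Number of Bernoulli(q) trials up to and including the first success."""
    t = 1
    while random.random() > q:
        t += 1
    return t

samples = [geometric(q_true) for _ in range(k)]
q_hat = k / sum(samples)   # the ML estimator k / sum(t_i) derived above

print(round(q_hat, 3))     # should be close to q_true = 0.3
```

With $k = 10^5$ samples the standard deviation of $\hat{Q}_k$ is roughly $q^2 \sqrt{(1-q)/(k q^2)} \approx 10^{-3}$, so the estimate reliably agrees with $q$ to about two decimal places.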