MIT6_262S11_lec21



By Bayes' rule, the a posteriori probability of hypothesis $H = 0$ given the observation $\mathbf{y}$ is

$$\Pr\{H = 0 \mid \mathbf{y}\} = \frac{p_0\, f_{Y|H}(\mathbf{y} \mid 0)}{p_0\, f_{Y|H}(\mathbf{y} \mid 0) + p_1\, f_{Y|H}(\mathbf{y} \mid 1)}.$$

Comparing $\Pr\{H = 0 \mid \mathbf{y}\}$ and $\Pr\{H = 1 \mid \mathbf{y}\}$,

$$\frac{\Pr\{H = 0 \mid \mathbf{y}\}}{\Pr\{H = 1 \mid \mathbf{y}\}} = \frac{p_0\, f_{Y|H}(\mathbf{y} \mid 0)}{p_1\, f_{Y|H}(\mathbf{y} \mid 1)}.$$

The probability that $H = \ell$ is the correct hypothesis, given the observation, is $\Pr\{H = \ell \mid \mathbf{Y} = \mathbf{y}\}$. Thus we maximize the a posteriori probability of choosing correctly by choosing the maximum over $\ell$ of $\Pr\{H = \ell \mid \mathbf{Y} = \mathbf{y}\}$. This is called the MAP rule (maximum a posteriori probability). It requires knowing $p_0$ and $p_1$.

The MAP rule (and other decision rules) is clearer if we define the likelihood ratio,

$$\Lambda(\mathbf{y}) = \frac{f_{Y|H}(\mathbf{y} \mid 0)}{f_{Y|H}(\mathbf{y} \mid 1)}.$$

The MAP rule is then

$$\Lambda(\mathbf{y}) > p_1/p_0: \ \text{select } \hat{h} = 0; \qquad \Lambda(\mathbf{y}) \le p_1/p_0: \ \text{select } \hat{h} = 1.$$

Many decision rules, including the most common and the most sensible, are rules that compare $\Lambda(\mathbf{y})$ to a fixed threshold, say $\eta$, independent of $\mathbf{y}$. Such decision rules vary only in the way that $\eta$ is chosen (a small code sketch of such a test appears at the end of this excerpt).

Example: For maximum likelihood, the threshold is $\eta = 1$ (this is MAP for $p_0 = p_1$, but it is also used in other ways).

Back to random walks: Not...
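To make the likelihood-ratio threshold test above concrete, here is a minimal Python sketch of the MAP rule. The Gaussian observation model ($Y = a + Z$ under $H = 0$, $Y = -a + Z$ under $H = 1$, with $Z$ zero-mean Gaussian noise) and all parameter values are assumptions introduced for illustration; they are not from the lecture.

```python
# A minimal sketch of the MAP rule for binary hypothesis testing.
# The Gaussian observation model and the parameter values below are
# illustrative assumptions, not part of the lecture notes.

import math

def likelihood_ratio(y, a=1.0, sigma=1.0):
    """Lambda(y) = f_{Y|H}(y|0) / f_{Y|H}(y|1) for the assumed model."""
    f0 = math.exp(-(y - a) ** 2 / (2 * sigma ** 2))   # density under H=0 (up to a constant)
    f1 = math.exp(-(y + a) ** 2 / (2 * sigma ** 2))   # density under H=1 (same constant)
    return f0 / f1                                    # normalizing constants cancel

def map_decision(y, p0=0.5, p1=0.5):
    """Select h_hat = 0 when Lambda(y) > p1/p0, else select h_hat = 1."""
    eta = p1 / p0   # the MAP threshold; maximum likelihood uses eta = 1
    return 0 if likelihood_ratio(y) > eta else 1

print(map_decision(0.3))                   # equal priors: decides 0
print(map_decision(-0.3))                  # equal priors: decides 1
print(map_decision(0.3, p0=0.1, p1=0.9))   # strong prior toward H=1: decides 1
```

With equal priors the threshold is 1 and the rule reduces to maximum likelihood (in this symmetric model, deciding by the sign of $y$); skewing the priors moves the threshold $\eta = p_1/p_0$ and biases the decision toward the a priori more likely hypothesis.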

