will be θ_ML = 1, which predicts that we will never flip tails! However, we, the modeler, suspect that the coin is probably fair, and can assign α = β = 3 (or some other choice with α = β), and we get θ_MAP = 3/5.
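The 3/5 figure above can be checked with a small sketch, assuming the standard Beta-Bernoulli posterior mode θ_MAP = (k + α − 1)/(m + α + β − 2) for k heads in m flips (the function names here are illustrative, not from the text):

```python
def theta_ml(k, m):
    """Maximum-likelihood estimate: the empirical frequency of heads."""
    return k / m

def theta_map(k, m, alpha, beta):
    """MAP estimate of P(Heads): the mode of the Beta(k + alpha, m - k + beta) posterior."""
    return (k + alpha - 1) / (m + alpha + beta - 2)

# A single flip that comes up heads: ML says heads is certain,
# but a Beta(3, 3) prior pulls the estimate toward fairness.
print(theta_ml(1, 1))           # 1.0
print(theta_map(1, 1, 3, 3))    # 0.6, i.e. 3/5 as in the text
```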
Question: How would you set α and β for the coin toss under a strong prior belief vs. a weak prior belief that the probability of Heads was 1/8?
For large samples it is easy to see for the coin flipping that the effect of the prior goes to zero:
lim_{m→∞} θ_MAP = lim_{m→∞} θ_ML = θ_true.

Why? Recall what we know about regularization in machine learning: data plus knowledge implies generalization. The prior is the "knowledge" part. One could interpret the MAP estimate as a regularized version of the ML estimate, or a version with "shrinkage."
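A quick simulation illustrates this convergence. This is a minimal sketch, assuming Bernoulli flips with a hypothetical true heads probability of 1/8 and the posterior-mode formula θ_MAP = (k + α − 1)/(m + α + β − 2):

```python
import random

random.seed(0)
theta_true = 0.125   # hypothetical true P(Heads), chosen to match 1/8
alpha = beta = 3     # the symmetric prior from the text

for m in (10, 1000, 100000):
    # Count heads in m simulated flips.
    k = sum(random.random() < theta_true for _ in range(m))
    ml = k / m
    map_ = (k + alpha - 1) / (m + alpha + beta - 2)
    # As m grows, the gap between the MAP and ML estimates shrinks
    # and both approach theta_true.
    print(m, round(ml, 4), round(map_, 4))
```

The gap between the two estimates is (2m − 4k)/(m(m + 4)) here, which is bounded by 2/(m + 4), so the prior's influence provably vanishes as m grows.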
Example 1. (Rare Events) The MAP estimate is particularly useful when
dealing with rare events. Suppose we are trying to estimate the probability that a given credit card transaction is fraudulent. Perhaps we...