

od model, maximum likelihood is guaranteed to find the correct distribution as m goes to infinity. In proving consistency, we do not get finite-sample guarantees like with statistical learning theory, and data are always finite.

Coin Flip Example Part 2. Returning to the coin flip example, equation (2), the log-likelihood is

    R(θ) = m_H log θ + (m − m_H) log(1 − θ).

We can maximize this by differentiating, setting the derivative to zero, and doing a few lines of algebra:

    0 = dR(θ)/dθ = m_H/θ − (m − m_H)/(1 − θ)
    m_H (1 − θ̂_ML) = (m − m_H) θ̂_ML
    m_H − θ̂_ML m_H = m θ̂_ML − θ̂_ML m_H
    θ̂_ML = m_H / m.                                        (5)

(It turns out not to be difficult to verify that this is indeed a maximum.) In this case, the maximum likelihood estimate is exactly what we intuitively thought we should do: estimate θ as the observed proportion of Heads.

2.2 Maximum a posteriori (MAP) estimation

The MAP estimate is a pointwise estimate with a Bayesian flavor. Rather than finding θ tha...
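As a quick numerical sanity check (not part of the original notes), the sketch below maximizes the log-likelihood R(θ) over a fine grid of θ values and confirms that the maximizer matches the closed-form answer m_H/m from equation (5). The counts m_H = 37 and m = 100 are made-up example values.

```python
import math

def log_likelihood(theta, m_H, m):
    # R(theta) = m_H * log(theta) + (m - m_H) * log(1 - theta)
    return m_H * math.log(theta) + (m - m_H) * math.log(1 - theta)

def mle_by_grid(m_H, m, steps=100000):
    # Brute-force check: evaluate R on a fine grid over (0, 1)
    # and return the theta that maximizes it.
    best_ll, best_theta = max(
        (log_likelihood(k / steps, m_H, m), k / steps)
        for k in range(1, steps)
    )
    return best_theta

m_H, m = 37, 100          # hypothetical coin-flip counts
theta_hat = mle_by_grid(m_H, m)
print(theta_hat)          # grid maximizer
print(m_H / m)            # closed-form estimate from equation (5)
```

The grid search is only a verification device; the derivation in the notes gives the exact answer directly, with no optimization needed.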

This note was uploaded on 03/24/2014 for the course MIT 15.097 taught by Professor Cynthia Rudin during the Spring '12 term at MIT.
