Lecture 21 Notes



## Posterior for the coin-flip model

With a Beta(2, 2) prior on the heads probability p (density proportional to p(1 − p)) and i.i.d. Bernoulli observations x_i:

$$
P(p \mid D) = \frac{1}{Z}\, p(1-p) \prod_i p^{x_i} (1-p)^{1-x_i}
            = \frac{1}{Z}\, p^{1+\sum_i x_i} (1-p)^{1+\sum_i (1-x_i)}
            = \mathrm{Beta}\!\left(2+\textstyle\sum_i x_i,\; 2+\textstyle\sum_i (1-x_i)\right)
$$

[Figure: three density plots over p ∈ [0, 1]: the Beta(2, 2) prior for p, the posterior after 4 H, 7 T, and the posterior after 10 H, 19 T.]

## Predictive distribution

The posterior is nice, but it doesn't directly tell us what we need to know. We care more about P(x_{N+1} | x_1, …, x_N). By the law of total probability and conditional independence:

$$
P(x_{N+1} \mid D) = \int P(x_{N+1}, \theta \mid D)\, d\theta
                  = \int P(x_{N+1} \mid \theta)\, P(\theta \mid D)\, d\theta
$$

## Coin flip example

After 10 H, 19 T: p ~ Beta(12, 21).

- E(x_{N+1} | p) = p
- E(x_{N+1} | D) = E(p | D) = a / (a + b) = 12/33

So we predict a 36.4% chance of H on the next flip.

## Approximate Bayes

The coin-flip example was easy. In general, computing the posterior (or the predictive distribution) may be hard. Solution: use the approximate integration techniques we've studied!

## Bayes as numerical integration

Parameters θ, data D:

P(θ | D) = P(D | θ) P(θ) / P(D)

Usually P(θ) is simple, and so is P(D | θ); the hard part is the normalizer P(D) = ∫ P(D | θ) P(θ) dθ. So

P(θ | D) ∝ P(D | θ) P(θ)

- Similarly for a conditional model: if X ⊥ θ, then P(θ | X, Y) ∝ P(Y | θ, X) P(θ)

This is perfect for Metropolis–Hastings (MH).
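The conjugate update and predictive mean from the coin-flip example can be sketched in a few lines. This is an illustrative sketch, not code from the lecture; the function names are ours, and we assume the Beta(2, 2) prior used above.

```python
# Sketch of the Beta-Bernoulli conjugate update from the notes.
# Assumes a Beta(a, b) prior with a = b = 2 (density proportional to p(1-p));
# function names are illustrative, not from the lecture.

def beta_posterior(flips, a=2, b=2):
    """Return (a, b) of the Beta posterior after observing 0/1 flips."""
    heads = sum(flips)
    tails = len(flips) - heads
    return a + heads, b + tails

def predictive_heads(a, b):
    """P(next flip = H | data) = posterior mean of p = a / (a + b)."""
    return a / (a + b)

# 10 heads, 19 tails, as in the lecture example
flips = [1] * 10 + [0] * 19
a, b = beta_posterior(flips)                   # Beta(12, 21)
print(a, b, round(predictive_heads(a, b), 3))  # 12 21 0.364
```

Note that the predictive probability is the posterior mean of p, matching the 12/33 ≈ 36.4% figure in the example.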
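To make the "perfect for MH" point concrete, here is a minimal random-walk Metropolis–Hastings sketch for the same coin-flip posterior. It uses only the unnormalized density P(D | p) P(p), exactly the quantity available without knowing P(D). The sampler settings (step size, step count, burn-in) are our assumptions, not the lecture's.

```python
# Random-walk Metropolis-Hastings targeting the unnormalized posterior
# p(1-p) * p^heads * (1-p)^tails, i.e. likelihood times Beta(2,2) prior.
# All names and tuning constants here are illustrative.
import math
import random

def log_unnorm_posterior(p, heads, tails):
    """log of p^(heads+1) * (1-p)^(tails+1); -inf outside (0, 1)."""
    if not 0.0 < p < 1.0:
        return -math.inf
    return (heads + 1) * math.log(p) + (tails + 1) * math.log(1 - p)

def mh_sample(heads, tails, n_steps=20000, step=0.1, seed=0):
    rng = random.Random(seed)
    p, samples = 0.5, []
    for _ in range(n_steps):
        prop = p + rng.gauss(0, step)  # symmetric random-walk proposal
        log_ratio = (log_unnorm_posterior(prop, heads, tails)
                     - log_unnorm_posterior(p, heads, tails))
        # accept with probability min(1, posterior ratio)
        if rng.random() < math.exp(min(0.0, log_ratio)):
            p = prop
        samples.append(p)
    return samples

samples = mh_sample(10, 19)
est = sum(samples[5000:]) / len(samples[5000:])  # discard burn-in, average
# est should be close to the exact posterior mean 12/33
```

Because the acceptance ratio divides one unnormalized density by another, the unknown constant P(D) cancels, which is why MH needs only P(D | θ) P(θ).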

This note was uploaded on 01/24/2014 for the course CS 15-780 taught by Professor Bryant during the Fall '09 term at Carnegie Mellon.
