MIT15_097S12_lec15



Unformatted text preview: , u(yi ) = yi , φ(θ) = −θ, and g (θ) = θ. Then, f (yi )g (θ) exp (φ(θ)u(yi )) = θe−θyi = p(yi |θ). Thus the exponential distribution is an exponential family. 4 Posterior asymptotics Up to this point, we have defined a likelihood model that is parameterized by θ, assigned a prior distribution to θ, and then computed the posterior p(θ|y ). There are two natural questions that arise. First, what if we choose the ‘wrong’ likelihood model? That is, what if the data were actually gener­ ated by some distribution q (y ) such that q (y ) = p(y |θ) for any θ, but we use p(y |θ) as our likelihood model? Second, what if we assign the ‘wrong’ prior? We can answer both of these questions asymptotically as m → ∞. First we 15 must develop a little machinery from information theory. A useful way to measure the dissimilarity between two probability distribu­ tions is the Kullback-Leibler (KL) divergence, defined for two distributions p(y ) and q (y ) as: � q (y ) q (y ) = q (y ) log dy. D(q (·)||p(·)) := 1y∼q...

