Mixtures of Conjugate Priors

$$p(\theta) = \sum_{k=1}^{K} w_k\, p_k(\theta)$$

- $w_k$ is a mixture weight, with $0 < w_k < 1$ and $\sum_k w_k = 1$.
- $p_k(\theta)$ is a conjugate prior for $\theta$:
  $$p_k(\theta) = c_k\, c(\theta)^{n_{0k}} \exp(\theta\, n_{0k} t_{0k}),$$
  where $c_k$ is the normalizing constant.

Bayes' theorem combines the mixture prior with an exponential-family likelihood:

$$p(\theta) = \sum_k w_k c_k\, c(\theta)^{n_{0k}} \exp(\theta\, n_{0k} t_{0k})$$
$$p(Y \mid \theta) = c_Y\, c(\theta)^{n} \exp(\theta\, n\, \bar t(Y))$$
$$p(\theta \mid Y) \propto p(\theta)\, p(Y \mid \theta)$$
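As a concrete instance of the conjugate form above (a worked Bernoulli/Beta example added for illustration; it is not part of the original notes), write the Bernoulli model in natural-parameter form:

```latex
\theta = \log\frac{\pi}{1-\pi}, \qquad
p(y \mid \theta) = c(\theta)\,\exp(\theta\, y), \qquad
c(\theta) = \bigl(1 + e^{\theta}\bigr)^{-1}.
```

A conjugate prior $p_k(\theta) \propto c(\theta)^{n_{0k}} \exp(\theta\, n_{0k} t_{0k})$ then corresponds, after changing variables back to $\pi$, to a $\mathrm{Beta}(a_k, b_k)$ density with $a_k = n_{0k} t_{0k}$ and $b_k = n_{0k}(1 - t_{0k})$ (the Jacobian $d\pi/d\theta = \pi(1-\pi)$ accounts for the usual $-1$ in the Beta exponents), so a mixture of such priors is a mixture of Beta distributions.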


Result

$$p(\theta \mid Y) \propto c_Y\, c(\theta)^{n} \exp(\theta\, n\, \bar t(Y)) \sum_k w_k c_k\, c(\theta)^{n_{0k}} \exp(\theta\, n_{0k} t_{0k})$$

Collecting terms, each summand is again of conjugate form:

$$p(\theta \mid Y) \propto \sum_k w_k c_k c_Y\, c(\theta)^{n_{0k} + n} \exp\!\bigl(\theta\,(n_{0k} t_{0k} + n\, \bar t(Y))\bigr).$$

Subject to integration to 1, the posterior is therefore again a mixture of conjugate priors with updated parameters $n_k = n_{0k} + n$ and $t_k = (n_{0k} t_{0k} + n\, \bar t(Y))/(n_{0k} + n)$, and updated weights proportional to $w_k c_k / \tilde c_k$, where $\tilde c_k$ is the normalizing constant of the updated $k$-th component.
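The update can be sketched numerically in the Beta/binomial case: each component updates conjugately, and the mixture weights are rescaled by each component's marginal likelihood, i.e. the ratio of the new to old normalizing constants. This is a minimal sketch (the helper `update_mixture` is hypothetical, not from the notes), using only the standard library:

```python
from math import lgamma, exp

def betaln(a, b):
    # log of the Beta function, computed via log-gamma
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def update_mixture(weights, params, y, n):
    """Posterior for a mixture-of-Beta prior after y successes in n trials.

    Each component Beta(a_k, b_k) updates conjugately to
    Beta(a_k + y, b_k + n - y); the weight of component k is rescaled by
    its marginal likelihood, which (up to a shared binomial coefficient)
    is B(a_k + y, b_k + n - y) / B(a_k, b_k).
    """
    new_params = [(a + y, b + n - y) for a, b in params]
    log_m = [betaln(a2, b2) - betaln(a, b)
             for (a, b), (a2, b2) in zip(params, new_params)]
    unnorm = [w * exp(lm) for w, lm in zip(weights, log_m)]
    total = sum(unnorm)
    new_weights = [u / total for u in unnorm]
    return new_weights, new_params

# Example: flat component vs. informative component, 8 successes in 10 trials.
w, p = update_mixture([0.5, 0.5], [(1.0, 1.0), (10.0, 2.0)], y=8, n=10)
# The data favor the informative component, so w[1] grows above 0.5.
```

The rescaling by $B(a_k + y,\, b_k + n - y)/B(a_k, b_k)$ is exactly the $w_k c_k / \tilde c_k$ weight update: components whose prior parameters are consistent with the data gain posterior weight.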