CSci 5512: Gibbs Sampling for Approximate Inference in Bayesian Networks

Let p(X1, ..., Xn | e1, ..., em) denote the joint distribution of a set of random variables (X1, ..., Xn) conditioned on a set of evidence variables (e1, ..., em). Gibbs sampling is an algorithm to generate a sequence of samples from such a joint probability distribution. The purpose of such a sequence is to approximate the joint distribution (as with a histogram), or to compute an integral (such as an expected value).

Gibbs sampling is applicable when the joint distribution is not known explicitly, but the conditional distribution of each variable is known. The Gibbs sampling algorithm generates an instance from the distribution of each variable in turn, conditional on the current values of the other variables. It can be shown that the sequence of samples comprises a Markov chain, and the stationary distribution of that Markov chain is just the sought-after joint distribution. Gibbs sampling is particularly well-adapted to sampling the posterior distribution of a Bayesian network, since Bayesian networks are typically specified as a collection of conditional distributions.
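As a concrete illustration, the following is a minimal sketch of Gibbs sampling on the classic four-variable "sprinkler" network (Cloudy, Sprinkler, Rain, WetGrass). The network structure and CPT values below are assumptions chosen for the example, not taken from the note. With the evidence WetGrass = true fixed, the sampler resamples each non-evidence variable in turn from its conditional distribution given the current values of the others, exactly as described above, and estimates P(Rain = true | WetGrass = true).

```python
import random

# CPTs for the sprinkler network (illustrative values; an assumption of this example).
# Structure: Cloudy -> Sprinkler, Cloudy -> Rain, (Sprinkler, Rain) -> WetGrass
P_C = 0.5                                   # P(Cloudy = T)
P_S = {True: 0.1, False: 0.5}               # P(Sprinkler = T | Cloudy)
P_R = {True: 0.8, False: 0.2}               # P(Rain = T | Cloudy)
P_W = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.00}  # P(WetGrass = T | Sprinkler, Rain)

def joint(c, s, r, w):
    """Joint probability of one full assignment, from the CPT factorisation."""
    p = P_C if c else 1 - P_C
    p *= P_S[c] if s else 1 - P_S[c]
    p *= P_R[c] if r else 1 - P_R[c]
    p *= P_W[(s, r)] if w else 1 - P_W[(s, r)]
    return p

def gibbs_rain_given_wet(n_samples, burn_in=1000, seed=0):
    """Estimate P(Rain = T | WetGrass = T) by Gibbs sampling."""
    rng = random.Random(seed)
    # Start from a state consistent with the evidence; W is clamped throughout.
    state = {"C": True, "S": True, "R": True, "W": True}
    rain_count = 0
    for i in range(burn_in + n_samples):
        for var in ("C", "S", "R"):  # resample each non-evidence variable in turn
            on = dict(state); on[var] = True
            off = dict(state); off[var] = False
            pt = joint(on["C"], on["S"], on["R"], on["W"])
            pf = joint(off["C"], off["S"], off["R"], off["W"])
            # Sampling each variable conditioned on all the others reduces to
            # normalising the joint over that variable's two values.
            state[var] = rng.random() < pt / (pt + pf)
        if i >= burn_in and state["R"]:
            rain_count += 1
    return rain_count / n_samples

if __name__ == "__main__":
    print(gibbs_rain_given_wet(20000))
```

For these CPTs the exact posterior is P(Rain = T | WetGrass = T) = 0.4581 / (0.4581 + 0.189), roughly 0.708, so the estimate should settle near that value as the number of samples grows. Note the conditional used at each step depends only on the variable's Markov blanket; computing it by normalising the full joint, as done here, is correct but only practical for a toy network.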
This note was uploaded on 02/07/2012 for the course CSCI 5512 taught by Professor Staff during the Spring '08 term at Minnesota.