9.8-BayesianEst-BayesianNetworks

Machine Learning, Srihari
Bayesian Parameter Estimation in Bayesian Networks
Sargur Srihari, srihari@cedar.buffalo.edu
Topics
1. A Bayesian network in which parameters are included as variable nodes
2. Global parameter independence, which leads to global decomposition
3. Examples of Bayesian parameter estimation for Bayesian networks:
   - The ICU network
   - Text classification: naïve Bayes models and latent Dirichlet allocation
Inclusion of parameters as variables
- The Bayesian framework requires that we specify a joint distribution over the unknown parameters θ and the data instances D.
- This joint distribution can itself be represented as a Bayesian network whose nodes are both the instance variables and the parameters.
Simple example with parameters
- Consider the two-node network X → Y.
- Training data D = {x[m], y[m]} for m = 1, ..., M.
- Unknown parameter vectors θ_X and θ_{Y|X}.
- A meta-network describes the learning set-up.
(Figure: plate model and ground Bayesian network)
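As a concrete (hypothetical) instance of this set-up, the sketch below generates a training set D = {x[m], y[m]} from a binary X → Y network. The CPD values theta_x and theta_y_given_x are made-up illustrative numbers, not taken from the slides:

```python
import random

# Hypothetical ground-truth CPDs for a binary X -> Y network:
# theta_x = P(X=1); theta_y_given_x[x] = P(Y=1 | X=x).
theta_x = 0.3
theta_y_given_x = {0: 0.8, 1: 0.2}

def sample_instance(rng):
    """Draw one (x[m], y[m]) pair by forward sampling the X -> Y network."""
    x = 1 if rng.random() < theta_x else 0
    y = 1 if rng.random() < theta_y_given_x[x] else 0
    return x, y

rng = random.Random(0)
M = 1000
D = [sample_instance(rng) for _ in range(M)]
```

Each (x[m], y[m]) pair corresponds to one instantiation of the plate in the plate model; the ground Bayesian network replicates the X → Y fragment M times, with all copies sharing the same parameter nodes θ_X and θ_{Y|X}.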
Global parameter independence
- Let G be a Bayesian network with parameters
  θ = (θ_{X_1|Pa_{X_1}}, ..., θ_{X_n|Pa_{X_n}}).
- A prior P(θ) satisfies global parameter independence if, in the prior distribution, the parameters of different nodes are mutually independent:
  P(θ) = ∏_i P(θ_{X_i|Pa_{X_i}})
(Figure: a network with global parameter independence)
Use of global parameter independence
- The assumption is commonly made, but it is not always appropriate.
- Student example: if a student takes two courses from the same instructor, the grade distributions in the two courses may be the same, so the corresponding parameters are dependent.
Global decomposition
- Assume global parameter independence; an important conclusion follows: given fully observed data, the parameters of different CPDs are d-separated.
- In the X → Y network, if x[m] and y[m] are observed for all m, then θ_X and θ_{Y|X} are d-separated given D, i.e.,
  P(θ_X, θ_{Y|X} | D) = P(θ_X | D) P(θ_{Y|X} | D)
- Conclusion: given the data set D, we can determine the posterior over θ_X independently of the posterior over θ_{Y|X}.
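This decomposition means each posterior depends only on its own sufficient statistics. A minimal sketch, assuming binary X and Y with Beta(1, 1) priors on θ_X and on each θ_{Y|X=x}; the toy data set and the helper beta_posterior are illustrative, not from the slides:

```python
def beta_posterior(values, alpha=1.0, beta=1.0):
    """Update a Beta(alpha, beta) prior with 0/1 Bernoulli observations.

    Returns the posterior (alpha', beta'), where alpha' adds the count
    of ones and beta' adds the count of zeros.
    """
    ones = sum(values)
    return alpha + ones, beta + len(values) - ones

# D is a fully observed data set of (x[m], y[m]) pairs.
D = [(0, 1), (0, 1), (1, 0), (0, 0), (1, 0), (1, 1)]

# Posterior over theta_X uses only the x[m] column.
post_x = beta_posterior([x for x, _ in D])

# Posterior over theta_{Y|X=x} uses only the instances in which X = x.
post_y_given_x = {
    x: beta_posterior([y for xm, y in D if xm == x]) for x in (0, 1)
}
```

Because the joint posterior factorizes, no information flows between the two computations: updating θ_X never requires revisiting the Y column, and vice versa.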

