# 9.8 Bayesian Parameter Estimation in Bayesian Networks

Machine Learning, Sargur Srihari

## Topics

1. Bayesian networks in which parameters are included as variable nodes
2. Global parameter independence, which leads to global decomposition
3. Examples of Bayesian parameter estimation for Bayesian networks:
   1. ICU network
   2. Text classification: naïve Bayes models and latent Dirichlet allocation
## Inclusion of Parameters as Variables

The Bayesian framework requires specifying a joint distribution over the unknown parameters $\theta$ and the data instances $D$. This joint distribution can in turn be represented as a Bayesian network whose nodes are the instance variables and the parameters.

## Simple Example with Parameters

Consider the network $X \rightarrow Y$ with training data $D = \{x[m], y[m]\}$ for $m = 1, \ldots, M$ and unknown parameter vectors $\theta_X$ and $\theta_{Y|X}$. The meta-network describing this learning set-up can be drawn either as a plate model or as the ground Bayesian network.
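As a minimal sketch of this set-up, the snippet below generates training data $D = \{x[m], y[m]\}$ from a binary $X \rightarrow Y$ network. The specific parameter values (`theta_X`, `theta_Y_given_X`) are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth parameters for a binary X -> Y network:
# theta_X = P(X=1); theta_Y_given_X[x] = P(Y=1 | X=x)
theta_X = 0.7
theta_Y_given_X = np.array([0.2, 0.9])

# Generate M complete training instances D = {x[m], y[m]}, m = 1..M
M = 1000
x = rng.binomial(1, theta_X, size=M)
y = rng.binomial(1, theta_Y_given_X[x])

D = np.column_stack([x, y])
print(D.shape)  # (1000, 2): one row per instance, columns for X and Y
```

In the meta-network, these parameters would themselves be variable nodes; here they are fixed only to produce a data set for the later examples.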
## Global Parameter Independence

Let $G$ be a Bayesian network with parameters $\theta = (\theta_{X_1 \mid \mathrm{Pa}_{X_1}}, \ldots, \theta_{X_n \mid \mathrm{Pa}_{X_n}})$. A prior $P(\theta)$ satisfies global parameter independence if, in the prior distribution over parameters, the parameter nodes are independent of each other:

$$P(\theta) = \prod_i P(\theta_{X_i \mid \mathrm{Pa}_{X_i}})$$
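A small sketch of this factorization for the binary $X \rightarrow Y$ network: independent Beta priors are placed on $\theta_X$, $\theta_{Y|X=0}$, and $\theta_{Y|X=1}$, and the joint prior density is their product. The Beta hyperparameters below are illustrative choices, not values from the text.

```python
import math

def beta_pdf(t, a, b):
    """Density of Beta(a, b) evaluated at t."""
    const = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return const * t ** (a - 1) * (1 - t) ** (b - 1)

def prior(theta_X, theta_Y0, theta_Y1, a=1.0, b=1.0):
    """Globally independent prior:
    P(theta) = P(theta_X) * P(theta_{Y|X=0}) * P(theta_{Y|X=1})."""
    return (beta_pdf(theta_X, a, b)
            * beta_pdf(theta_Y0, a, b)
            * beta_pdf(theta_Y1, a, b))

# With uniform Beta(1, 1) priors, each factor is 1, so the product is 1.0
print(prior(0.5, 0.2, 0.9))
```

Each factor depends only on its own parameter vector, which is exactly the product form of $P(\theta)$ above.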

## Use of Global Parameter Independence

Global parameter independence is commonly assumed, but it is not always appropriate. Student example: if a student takes two courses from the same instructor, the grade distributions for the two courses may be the same, so their parameters are dependent.
## Global Decomposition

Assume global parameter independence. An important conclusion follows: the parameters for different CPDs are d-separated given complete data. In the $X \rightarrow Y$ network, if $x[m]$ and $y[m]$ are observed for all $m$, then $\theta_X$ and $\theta_{Y|X}$ are d-separated, i.e.,

$$P(\theta_X, \theta_{Y|X} \mid D) = P(\theta_X \mid D) \, P(\theta_{Y|X} \mid D)$$

Conclusion: given the data set $D$, we can determine the posterior over $\theta_X$ independently of the posterior over $\theta_{Y|X}$.
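The decomposition can be sketched concretely with Beta-Bernoulli conjugate updates: with complete data, each parameter's posterior is computed from its own sufficient statistics alone, never jointly. The data-generating values and the Beta(1, 1) priors below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Complete data for a binary X -> Y network (hypothetical generating values).
x = rng.binomial(1, 0.7, size=500)
y = rng.binomial(1, np.where(x == 1, 0.9, 0.2))

# Beta(1, 1) prior hyperparameters for every parameter (an assumption).
a, b = 1.0, 1.0

# Posterior over theta_X depends only on the counts of X.
post_X = (a + x.sum(), b + (1 - x).sum())

# Posterior over theta_{Y|X=v} depends only on Y-counts within the X=v slice.
post_Y = {}
for v in (0, 1):
    mask = x == v
    post_Y[v] = (a + y[mask].sum(), b + (1 - y[mask]).sum())

print(post_X, post_Y)
```

Note that `post_X` never touches `y`, and each `post_Y[v]` touches only the instances with $X = v$: this is the factorization $P(\theta_X, \theta_{Y|X} \mid D) = P(\theta_X \mid D)\,P(\theta_{Y|X} \mid D)$ in action.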

