# BN-structure-2 - Bayesian Networks Structure Learning (Part 2)


## Bayesian Networks Structure Learning (Part 2)

BMI/CS 576, www.biostat.wisc.edu/bmi576/
Mark Craven, [email protected]
Fall 2011

## The structure learning task

Structure learning methods have two main components:

1. a scheme for scoring a given BN structure
2. a search procedure for exploring the space of structures

## Bayesian network structure learning

We need a scoring function to evaluate candidate networks; Friedman et al. use one of the form

$$\text{score}(G : D) = \log P(G \mid D) \propto \log P(D \mid G) + \log P(G)$$

where $\log P(D \mid G)$ is the log probability of the data $D$ given graph $G$, and $\log P(G)$ is the log prior probability of graph $G$. They take a Bayesian approach to computing the first term:

$$P(D \mid G) = \int P(D \mid G, \theta)\, P(\theta \mid G)\, d\theta$$

i.e., they don't commit to a particular set of parameters in the Bayes net.

## The Bayesian approach to structure learning

How can we calculate the probability of the data without using specific parameters (i.e., the probabilities in the CPDs)? Let's consider a simple case: estimating the parameter $\theta$ of a weighted coin.
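For intuition about integrating out the parameters, here is a minimal sketch (not from the lecture; the function name and default uniform Beta(1, 1) prior are my own choices) that computes $\log P(D)$ in closed form for the single-coin case, where the integral over $\theta$ reduces to a ratio of Beta functions:

```python
import math


def log_marginal_likelihood(m_h, m_t, a_h=1.0, a_t=1.0):
    """Log marginal likelihood of a specific sequence of m_h heads and
    m_t tails, integrating theta out under a Beta(a_h, a_t) prior:

        P(D) = integral_0^1 theta^m_h (1-theta)^m_t * Beta(theta; a_h, a_t) dtheta
             = B(a_h + m_h, a_t + m_t) / B(a_h, a_t)

    where B is the Beta function, computed via log-gamma for stability.
    """
    def log_beta(a, b):
        return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

    return log_beta(a_h + m_h, a_t + m_t) - log_beta(a_h, a_t)


# With a uniform prior and 3 heads / 2 tails, P(D) = B(4, 3) = 1/60.
print(math.exp(log_marginal_likelihood(3, 2)))
```

Per-variable scores of exactly this form (with Dirichlet priors generalizing the Beta) are what a structure-scoring function sums over the nodes of a candidate graph.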
## The Beta distribution

Suppose we're taking a Bayesian approach to estimating the parameter $\theta$ of a weighted coin. The Beta distribution provides an appropriate prior:

$$P(\theta) = \frac{\Gamma(\alpha_h + \alpha_t)}{\Gamma(\alpha_h)\,\Gamma(\alpha_t)}\; \theta^{\alpha_h - 1} (1 - \theta)^{\alpha_t - 1}$$

where $\alpha_h$ is the number of "imaginary" heads we have seen already, $\alpha_t$ is the number of "imaginary" tails we have seen already, and $\Gamma$ is the continuous generalization of the factorial function.

Suppose now we're given a data set $D$ in which we observe $M_h$ heads and $M_t$ tails. The posterior distribution is also Beta:

$$P(\theta \mid D) = \frac{\Gamma(\alpha_h + M_h + \alpha_t + M_t)}{\Gamma(\alpha_h + M_h)\,\Gamma(\alpha_t + M_t)}\; \theta^{\alpha_h + M_h - 1} (1 - \theta)^{\alpha_t + M_t - 1} = \text{Beta}(\alpha_h + M_h,\, \alpha_t + M_t)$$

We say that the set of Beta distributions is a *conjugate family* for binomial sampling.

## The Beta distribution (continued)

Assume we have a distribution $P(\theta)$ that is $\text{Beta}(\alpha_h, \alpha_t)$. What is the marginal probability (i.e., averaging over all $\theta$) that our next coin flip comes up heads?

$$P(X = \text{heads}) = \int_0^1 P(X = \text{heads} \mid \theta)\, P(\theta)\, d\theta = \int_0^1 \theta\, P(\theta)\, d\theta = \frac{\alpha_h}{\alpha_h + \alpha_t}$$
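We can sanity-check the closed-form result $\alpha_h / (\alpha_h + \alpha_t)$ against a direct numerical integration of $\int_0^1 \theta\, P(\theta)\, d\theta$. A minimal sketch (function names are mine; midpoint-rule integration is an arbitrary choice):

```python
import math


def prob_next_heads(a_h, a_t):
    """Marginal P(next flip = heads) under a Beta(a_h, a_t) prior:
    the mean of the Beta distribution, a_h / (a_h + a_t)."""
    return a_h / (a_h + a_t)


def prob_next_heads_numeric(a_h, a_t, n=200_000):
    """Midpoint-rule approximation of integral_0^1 theta * Beta(theta; a_h, a_t) dtheta."""
    log_norm = math.lgamma(a_h + a_t) - math.lgamma(a_h) - math.lgamma(a_t)
    total = 0.0
    for k in range(n):
        theta = (k + 0.5) / n
        total += theta * math.exp(log_norm + (a_h - 1) * math.log(theta)
                                  + (a_t - 1) * math.log(1 - theta))
    return total / n


# Beta(3, 2) prior: closed form gives 3/5 = 0.6.
print(prob_next_heads(3, 2))
```

This is exactly the kind of parameter-free probability that the Bayesian score uses: predictions are averaged over all parameter settings, weighted by the prior.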

## This note was uploaded on 12/15/2011 for the course BMI 576 taught by Professor Staff during the Fall '11 term at Wisc Green Bay.
