Ch11.1-BasicSampling

Basic Sampling Methods
Sargur Srihari
[email protected]
Topics
1. Motivation
2. Ancestral Sampling
3. Basic Sampling Algorithms
4. Rejection Sampling
5. Importance Sampling
6. Sampling-Importance-Resampling
1. Motivation
• When exact inference is intractable, we need some form of approximation
  – This is true of most probabilistic models of practical significance
• Inference methods based on numerical sampling are known as Monte Carlo techniques
• In most situations we need to evaluate expectations of functions of the unobserved variables, e.g., in order to make predictions
  – Rather than the posterior distribution itself
Task
• Find the expectation E[f] of some function f(z) with respect to a distribution p(z)
  – Components of z can be discrete, continuous, or a combination of the two
  – The function can be f(z) = z, f(z) = z^2, etc.
• We wish to evaluate

      E[f] = \int f(z)\, p(z)\, dz

  – Assume it is too complex to be evaluated analytically, e.g., when p(z) is a mixture of Gaussians
  – Note: EM with a Gaussian mixture model is for clustering; our current interest is inference
  – In the discrete case, the integral is replaced by a summation
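As a concrete check (not from the original slides): for the particular choice f(z) = z^2 with a mixture of Gaussians p(z) = \sum_k \pi_k \, \mathcal{N}(z \mid \mu_k, \sigma_k^2), the expectation does happen to have a closed form,

      E[z^2] = \sum_k \pi_k \left( \mu_k^2 + \sigma_k^2 \right),

so low-dimensional cases like this are useful as sanity checks on sampling estimators; in general, and especially for posteriors over many variables, no such closed form is available.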
Sampling: main idea
• Obtain a set of samples z^{(l)}, l = 1, ..., L, drawn independently from the distribution p(z)
• The expectation E[f] is then approximated by

      \hat{f} = \frac{1}{L} \sum_{l=1}^{L} f(z^{(l)})

  which is called an estimator (a numerical sketch follows this slide)
• Then E[\hat{f}] = E[f], so the estimator has the correct mean
• The estimator variance is

      \operatorname{var}[\hat{f}] = \frac{1}{L}\, E\!\left[ (f - E[f])^2 \right]

  so the accuracy of the estimator does not depend on the dimensionality of z
  – High accuracy is achievable with few (10-20) independent samples
• However:
  1. The samples may not be independent, so the effective sample size may be smaller than the apparent sample size
  2. If f(z) is small where p(z) is high, and vice versa, the expectation is dominated by regions of small probability, requiring large sample sizes
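A minimal sketch of the estimator above (not part of the original slides): here p(z) is a two-component Gaussian mixture, which we can sample from directly, and f(z) = z^2. The mixture parameters are made-up values chosen so the exact answer from the closed form above is easy to verify.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gmm(L, weights, means, stds):
    """Draw L independent samples from a 1-D mixture of Gaussians."""
    k = rng.choice(len(weights), size=L, p=weights)  # choose a component per sample
    return rng.normal(means[k], stds[k])             # then sample within that component

# Made-up mixture parameters, for illustration only
weights = np.array([0.3, 0.7])
means   = np.array([-2.0, 1.0])
stds    = np.array([0.5, 1.0])

# Simple Monte Carlo estimate of E[f] for f(z) = z^2
z = sample_gmm(10_000, weights, means, stds)
f_hat = np.mean(z**2)   # f_hat = (1/L) * sum_l f(z^(l))
print(f_hat)            # exact value: 0.3*(4 + 0.25) + 0.7*(1 + 1) = 2.675
```

Because the samples here are iid, the estimate tightens at the usual 1/sqrt(L) rate regardless of how many components the mixture has, which is the dimensionality-independence claim above.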
2. Ancestral Sampling
• If the joint distribution is represented by a directed graph with no observed variables, a straightforward sampling method exists
• The distribution is specified by

      p(z) = \prod_{i=1}^{M} p(z_i \mid \mathrm{pa}_i)

  where z_i is the set of variables associated with node i, and pa_i is the set of variables associated with the parents of node i
• To obtain samples from the joint, we make one pass through the set of variables in order, visiting each node after its parents and sampling z_i from p(z_i | pa_i)
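A minimal sketch of ancestral sampling (not part of the original slides), using a hypothetical three-node chain A -> B -> C over binary variables; the conditional probability tables are made-up for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up conditional probability tables for the chain A -> B -> C
p_a = 0.6                         # P(A = 1)
p_b_given_a = {0: 0.2, 1: 0.8}    # P(B = 1 | A = a)
p_c_given_b = {0: 0.1, 1: 0.9}    # P(C = 1 | B = b)

def ancestral_sample():
    """One pass through the nodes in order, visiting each after its parents."""
    a = int(rng.random() < p_a)
    b = int(rng.random() < p_b_given_a[a])
    c = int(rng.random() < p_c_given_b[b])
    return a, b, c

# Each call is an exact draw from the joint p(A, B, C) = p(A) p(B|A) p(C|B)
samples = [ancestral_sample() for _ in range(10_000)]
print(sum(c for _, _, c in samples) / len(samples))  # estimate of P(C = 1), exactly 0.548
```

The key point is that each conditional p(z_i | pa_i) is easy to sample from once the parent values are fixed, so one topological pass yields an exact sample from the full joint without any iteration or rejection.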