Ch11.1-BasicSampling

# Basic Sampling Methods (Machine Learning, Sargur Srihari, [email protected])



## Topics

1. Motivation
2. Ancestral Sampling
3. Basic Sampling Algorithms
4. Rejection Sampling
5. Importance Sampling
6. Sampling-Importance-Resampling
## 1. Motivation

- When exact inference is intractable, we need some form of approximation. This is true of most probabilistic models of practical significance.
- Inference methods based on numerical sampling are known as Monte Carlo techniques.
- Most situations require evaluating expectations of functions of the unobserved variables, e.g., in order to make predictions, rather than the full posterior distribution itself.

## Task

- Find the expectation E[f] of some function f(**z**) with respect to a distribution p(**z**).
  - Components of **z** can be discrete, continuous, or a combination.
  - The function can be f(z) = z, f(z) = z², etc.
- We wish to evaluate

  $$E[f] = \int f(\mathbf{z})\, p(\mathbf{z})\, d\mathbf{z}$$

  assuming it is too complex to be evaluated analytically, e.g., when p(**z**) is a mixture of Gaussians.
  - Note: EM with a GMM is for clustering; our current interest is inference.
  - In the discrete case, the integral is replaced by a summation.
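As a minimal sketch of the discrete case, where the integral becomes a sum: the states and probabilities below are made-up illustrative values, not from the slides.

```python
import numpy as np

# Hypothetical discrete distribution p(z) over four states (illustrative values).
z_values = np.array([0.0, 1.0, 2.0, 3.0])
p_z = np.array([0.1, 0.4, 0.3, 0.2])   # probabilities sum to 1

def f(z):
    return z ** 2                       # the function whose expectation we want

# Discrete case: E[f] = sum over states of f(z) * p(z).
expectation = np.sum(f(z_values) * p_z)
# 0.1*0 + 0.4*1 + 0.3*4 + 0.2*9 = 3.4
```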
## Sampling: main idea

- Obtain a set of samples z^(l), where l = 1, ..., L, drawn independently from the distribution p(**z**).
- The expectation E[f] is approximated by the estimator

  $$\hat{f} = \frac{1}{L} \sum_{l=1}^{L} f(\mathbf{z}^{(l)})$$

- Then E[f̂] = E[f], so the estimator has the correct mean.
- The estimator variance is

  $$\mathrm{var}[\hat{f}] = \frac{1}{L}\, E\!\left[(f - E[f])^2\right]$$

  so the accuracy of the estimator does not depend on the dimensionality of **z**. High accuracy is possible with few (10-20) independent samples.
- However:
  1. Samples may not be i.i.d., so the effective sample size may be smaller than the apparent sample size.
  2. If f(**z**) is small where p(**z**) is high and vice versa, the expectation is dominated by regions of small probability, requiring large sample sizes.
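The estimator above can be sketched in a few lines. As an assumption for illustration, p(z) is taken to be a standard normal we can sample from directly (in practice p(z) is often only available indirectly), and f(z) = z², whose true expectation is 1.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 10_000
z = rng.standard_normal(L)     # L independent samples z^(l) ~ p(z) = N(0, 1)

def f(z):
    return z ** 2              # E[z^2] = 1 under a standard normal

f_hat = np.mean(f(z))          # the estimator f_hat = (1/L) * sum_l f(z^(l))
# f_hat is close to the true value 1; its variance shrinks as 1/L,
# independently of the dimensionality of z.
```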

## 2. Ancestral Sampling

- If the joint distribution is represented by a directed graph with no observed variables, a straightforward sampling method exists.
- The distribution is specified by

  $$p(\mathbf{z}) = \prod_{i=1}^{M} p(\mathbf{z}_i \mid \mathrm{pa}_i)$$

  where z_i is the set of variables associated with node i, and pa_i is the set of variables associated with the parents of node i.
- To obtain samples from the joint, we make one pass through the set of variables in order, sampling each node conditioned on the already-sampled values of its parents.

## This document was uploaded on 02/25/2012.



