
# CSE 6740 Lecture 24: How Do I Evaluate High-Dimensional Integrals? (Sampling)

Alexander Gray ([email protected]), Georgia Institute of Technology

## Today

1. Integration and Sampling
2. Monte Carlo Variance Reduction
3. Markov Chain Monte Carlo

## Integration and Sampling

Why integration, and why sampling.

## Integration

Suppose we want to find

$$I = \int b(x)\,dx. \qquad (1)$$

If $x$ is low-dimensional, we can use standard quadrature techniques. However, quadrature techniques effectively grid up the space, so their cost is exponential in the dimensionality $D$ of $x$.

Now suppose we have the form

$$b(x) = a(x) f(x), \qquad (2)$$

where $f$ is a probability density function. We get this form whenever we want to compute the expected value of a function $a(x)$, where $x \sim f$:

$$I = E(a) = \int a(x) f(x)\,dx. \qquad (3)$$

## Integration and Sampling

The law of large numbers ensures that the sample mean over iid samples from $f$ converges to the integral:

$$\hat{I} = \frac{1}{S} \sum_{s=1}^{S} a(x_s) \to E(a) \qquad (4)$$

as $S \to \infty$. $\hat{I}$ is an unbiased estimator of $I$. This is called *Monte Carlo integration*.

Its error is effectively its variance, which is

$$\frac{1}{S} \int \big(a(x) - E(a)\big)^2 f(x)\,dx = \sigma_a^2 / S. \qquad (5)$$

An estimate of this is

$$\hat{\sigma}^2 = \frac{1}{S-1} \sum_{s=1}^{S} \big(a(x_s) - \hat{I}\big)^2. \qquad (6)$$

Expectations are ubiquitous in statistics, but this idea happens to be critical for making Bayesian statistics practicable. Recall that for a dataset $\{x\} \equiv \{x_1, \dots, x_N\}$, the likelihood is

$$f(\{x\} \mid \theta) = f(x_1, \dots, x_N \mid \theta) = \prod_{i=1}^{N} f(x_i \mid \theta) = L(\theta), \qquad (7)$$

and the posterior is

$$f(\theta \mid \{x\}) = \frac{f(\{x\} \mid \theta)\, f(\theta)}{\int f(\{x\} \mid \theta)\, f(\theta)\,d\theta} = \frac{L(\theta)\, f(\theta)}{c} \propto L(\theta)\, f(\theta), \qquad (8)$$

where $c = \int f(\{x\} \mid \theta)\, f(\theta)\,d\theta$.

## Integration and Sampling

Bayesians want to compute the posterior mean

$$\bar{\theta} = E(\theta) = \int \theta\, f(\theta \mid \{x\})\,d\theta. \qquad (9)$$

Note that this has the form we specified, with $\theta \sim f(\theta \mid \{x\})$ and $a(\theta) = \theta$. So if we can draw samples $\theta_1, \dots, \theta_S$ from the posterior $f(\theta \mid \{x\})$, then

$$\frac{1}{S} \sum_{s=1}^{S} \theta_s \to E(\theta) \qquad (10)$$

as $S \to \infty$.

Bayesians also want to compute the $1-\alpha$ posterior interval $(a, b)$ such that $\int_{-\infty}^{a} f(\theta \mid \{x\})\,d\theta = \int_{b}^{\infty} f(\theta \mid \{x\})\,d\theta = \alpha/2$, so that

$$P\big(\theta \in (a, b) \mid \{x\}\big) = \int_{a}^{b} f(\theta \mid \{x\})\,d\theta = 1 - \alpha. \qquad (11)$$

This can also be done by drawing samples $\theta_s$ from the posterior $f(\theta \mid \{x\})$. We can approximate the posterior....
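The Monte Carlo estimator of equations (4)–(6) can be sketched in a few lines of numpy. The function name `monte_carlo_mean` and the choice of test integrand are illustrative assumptions, not from the lecture; as a check, it estimates $E(x^2) = 1$ for a standard normal, where the true value is known.

```python
import numpy as np

def monte_carlo_mean(a, sampler, S, rng):
    """Monte Carlo estimate of E[a(x)] for x ~ f, per eqs. (4)-(6).

    a       -- the function whose expectation we want
    sampler -- draws S iid samples from the density f
    Returns (I_hat, var_hat): the sample-mean estimate and the
    estimated variance of that estimate, sigma_hat^2 / S.
    """
    xs = sampler(S, rng)               # iid samples x_1, ..., x_S from f
    vals = a(xs)
    I_hat = vals.mean()                # eq. (4): sample mean
    var_hat = vals.var(ddof=1) / S     # eqs. (5)-(6): hat{sigma}^2 / S
    return I_hat, var_hat

rng = np.random.default_rng(0)
# Illustrative check: a(x) = x^2 under f = N(0, 1), so E(a) = 1 exactly.
I_hat, var_hat = monte_carlo_mean(lambda x: x**2,
                                  lambda S, r: r.standard_normal(S),
                                  100_000, rng)
```

Note that the accuracy depends only on $S$ and $\sigma_a^2$, not on the dimensionality of $x$; this is precisely why Monte Carlo integration avoids the exponential cost of quadrature.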
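The posterior mean (10) and the equal-tail $1-\alpha$ interval (11) can likewise be read off posterior draws. As a minimal sketch, the conjugate setup below (a Normal likelihood with known variance and a Normal prior, so the posterior is Normal and trivial to sample) is an assumption for illustration, not an example from the lecture; in practice the draws $\theta_s$ would come from an MCMC sampler.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed conjugate model: x_i ~ N(theta, 1), prior theta ~ N(0, 10^2).
x = rng.normal(2.0, 1.0, size=50)            # observed dataset {x}
N, prior_var, lik_var = len(x), 100.0, 1.0
post_var = 1.0 / (1.0 / prior_var + N / lik_var)
post_mean = post_var * (x.sum() / lik_var)   # prior mean is 0

# Draw theta_1, ..., theta_S from the posterior f(theta | {x}).
S, alpha = 100_000, 0.05
theta = rng.normal(post_mean, np.sqrt(post_var), size=S)

theta_bar = theta.mean()                     # eq. (10): approximates E(theta | {x})
a, b = np.quantile(theta, [alpha / 2, 1 - alpha / 2])  # eq. (11): equal-tail interval
```

The empirical quantiles of the draws play the role of the integrals in (11): with enough samples, the fraction of draws below $a$ and above $b$ is $\alpha/2$ each, so $(a, b)$ approximates the $1-\alpha$ posterior interval.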