# mcmc2: Lecture III

Contents of Lecture III:

1. The Central Limit Theorem and Binning
2. Gaussian Error Analysis for Large and Small Samples
3. The Jackknife Approach

## The Central Limit Theorem and Binning

How is the sum of two independent random variables,

$$y^r = x_1^r + x_2^r, \qquad (1)$$

distributed? We denote the probability density of $y^r$ by $g(y)$. The corresponding cumulative distribution function is given by

$$G(y) = \int_{x_1+x_2 \le y} f_1(x_1)\, f_2(x_2)\, dx_1\, dx_2 = \int_{-\infty}^{+\infty} f_1(x)\, F_2(y-x)\, dx,$$

where $F_2(x)$ is the distribution function of the random variable $x_2^r$. We take the derivative and obtain the probability density of $y^r$:

$$g(y) = \frac{dG(y)}{dy} = \int_{-\infty}^{+\infty} f_1(x)\, f_2(y-x)\, dx. \qquad (2)$$
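As a minimal numerical sketch (the helper names below are ours, not from the lecture), the convolution integral of Eq. (2) can be evaluated by a midpoint Riemann sum for two Uniform(0,1) densities; the result reproduces the triangular density of their sum:

```python
# Sketch: evaluate g(y) = integral of f1(x) * f2(y - x) dx, Eq. (2),
# numerically for two Uniform(0,1) densities (helper names are ours).
def f_uniform(x):
    # density of a Uniform(0,1) random variable
    return 1.0 if 0.0 < x <= 1.0 else 0.0

def convolve(f1, f2, y, a=-1.0, b=3.0, n=40000):
    # midpoint-rule approximation of the convolution integral at y
    h = (b - a) / n
    return h * sum(f1(a + (k + 0.5) * h) * f2(y - (a + (k + 0.5) * h))
                   for k in range(n))

# The sum of two Uniform(0,1) variables has the triangular density,
# so the convolution should give approximately 0.5 at y = 0.5 and
# 1.0 at y = 1.0.
g_half = convolve(f_uniform, f_uniform, 0.5)
g_one = convolve(f_uniform, f_uniform, 1.0)
```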
The probability density of a sum of two independent random variables is the convolution of the probability densities of these random variables. Example: sums of uniform random numbers, corresponding to sums of independent copies of a uniformly distributed random variable $x^r \in (0,1]$:

(a) Let $y^r = x_1^r + x_2^r$; then

$$g_2(y) = \begin{cases} y & \text{for } 0 \le y \le 1, \\ 2-y & \text{for } 1 \le y \le 2, \\ 0 & \text{elsewhere}. \end{cases} \qquad (3)$$

(b) Let $y^r = x_1^r + x_2^r + x_3^r$; then

$$g_3(y) = \begin{cases} y^2/2 & \text{for } 0 \le y \le 1, \\ (-2y^2+6y-3)/2 & \text{for } 1 \le y \le 2, \\ (y-3)^2/2 & \text{for } 2 \le y \le 3, \\ 0 & \text{elsewhere}. \end{cases} \qquad (4)$$
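The closed forms (3) and (4) can be checked directly; a small sketch (function names are ours) verifies that both densities are properly normalized:

```python
# Sketch: closed-form densities g2 and g3 of Eqs. (3) and (4) for sums
# of two and three Uniform(0,1] variables; we check that both integrate
# to one (helper names are ours).
def g2(y):
    if 0.0 <= y <= 1.0:
        return y
    if 1.0 < y <= 2.0:
        return 2.0 - y
    return 0.0

def g3(y):
    if 0.0 <= y <= 1.0:
        return 0.5 * y ** 2
    if 1.0 < y <= 2.0:
        return 0.5 * (-2.0 * y ** 2 + 6.0 * y - 3.0)
    if 2.0 < y <= 3.0:
        return 0.5 * (y - 3.0) ** 2
    return 0.0

def trapezoid(f, a, b, n=10000):
    # composite trapezoidal rule for the integral of f over [a, b]
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))

area2 = trapezoid(g2, 0.0, 2.0)  # should be close to 1
area3 = trapezoid(g3, 0.0, 3.0)  # should be close to 1
```

Note that both densities are continuous across their breakpoints, e.g. $g_3(1) = 1/2$ from either branch.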

The convolution (2) takes on a simple form in Fourier space. In statistics, the Fourier transform of the probability density is known as the characteristic function, defined as the expectation value of $e^{itx^r}$:

$$\phi(t) = \left\langle e^{itx^r} \right\rangle = \int_{-\infty}^{+\infty} e^{itx}\, f(x)\, dx. \qquad (5)$$

The characteristic function is particularly useful for investigating sums of random variables, $y^r = x_1^r + x_2^r$:

$$\phi_y(t) = \left\langle e^{it(x_1^r + x_2^r)} \right\rangle = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} e^{itx_1}\, e^{itx_2}\, f_1(x_1)\, f_2(x_2)\, dx_1\, dx_2 = \phi_{x_1}(t)\, \phi_{x_2}(t).$$

The characteristic function of a sum of random variables is the product of their characteristic functions. The result generalizes immediately to $N$ random variables,

$$y^r = x_1^r + \dots + x_N^r. \qquad (6)$$
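The product rule can be verified numerically for the uniform example above. This sketch (helper names are ours) uses the known closed form $\phi(t) = (e^{it}-1)/(it)$ for a Uniform(0,1) variable and compares $\phi(t)^2$ against a direct integral of $e^{ity}$ times the triangular density of Eq. (3):

```python
import cmath

# Sketch: verify phi_y(t) = phi_{x1}(t) * phi_{x2}(t) for the sum of
# two independent Uniform(0,1) variables (helper names are ours).
def phi_u(t):
    # characteristic function of Uniform(0,1): (e^{it} - 1) / (it)
    return (cmath.exp(1j * t) - 1.0) / (1j * t)

def g2(y):
    # triangular density of the sum, Eq. (3), for 0 <= y <= 2
    return y if y <= 1.0 else 2.0 - y

def phi_sum_numeric(t, n=20000):
    # midpoint rule for the integral of e^{ity} g2(y) over [0, 2]
    h = 2.0 / n
    return h * sum(cmath.exp(1j * t * (h * (k + 0.5))) * g2(h * (k + 0.5))
                   for k in range(n))

t = 1.7
# the two evaluations should agree up to quadrature error
diff = abs(phi_sum_numeric(t) - phi_u(t) ** 2)
```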
The characteristic function of $y^r$ is

$$\phi_y(t) = \prod_{i=1}^{N} \phi_{x_i}(t), \qquad (7)$$

and the probability density of $y^r$ is the Fourier back-transform of this characteristic function,

$$g(y) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} dt\, e^{-ity}\, \phi_y(t). \qquad (8)$$

The probability density of the sample mean is obtained as follows. The arithmetic mean of $y^r$ is $\overline{x}^r = y^r/N$. We denote the probability density of $y^r$ by $g_N(y)$ and the probability density of the arithmetic mean by $\widehat{g}_N(\overline{x})$. They are related by

$$\widehat{g}_N(\overline{x}) = N\, g_N(N \overline{x}). \qquad (9)$$
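The back-transform (8) can also be checked numerically for the sum of two uniforms: truncating the $t$-integral at a large cutoff and applying a midpoint rule should recover the triangular density. A sketch, with helper names and the cutoff choice being ours:

```python
import cmath

# Sketch: recover the triangular density of the sum of two Uniform(0,1)
# variables from its characteristic function via Eq. (8). phi_u is the
# known closed form for Uniform(0,1); helper names are ours.
def phi_u(t):
    # characteristic function of Uniform(0,1)
    return (cmath.exp(1j * t) - 1.0) / (1j * t)

def g_from_phi(y, T=400.0, n=80000):
    # midpoint rule for (1/2pi) * integral over [-T, T] of
    # e^{-ity} [phi_u(t)]^2 dt; the midpoint grid avoids the removable
    # singularity of phi_u at t = 0, and |phi_u(t)|^2 ~ 1/t^2 makes the
    # truncation error small
    h = 2.0 * T / n
    s = sum(cmath.exp(-1j * (-T + (k + 0.5) * h) * y)
            * phi_u(-T + (k + 0.5) * h) ** 2
            for k in range(n))
    return (h * s / (2.0 * cmath.pi)).real

# the triangular density has its peak value 1.0 at y = 1.0
g_peak = g_from_phi(1.0)
```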

This follows by substituting $y = N\overline{x}$ into $g_N(y)\, dy$:

$$1 = \int_{-\infty}^{+\infty} g_N(y)\, dy = \int_{-\infty}^{+\infty} g_N(N\overline{x})\, N\, d\overline{x} = \int_{-\infty}^{+\infty} \widehat{g}_N(\overline{x})\, d\overline{x}.$$

Example:

[Figure 1: Probability densities for the arithmetic means of two and three uniformly distributed random variables, $\widehat{g}_2(\overline{x})$ and $\widehat{g}_3(\overline{x})$, respectively.]
This suggests that sampling leads to convergence of the mean by reducing its variance. We use the characteristic function to understand the general behavior. The characteristic function of a sum of $N$ independent, identically distributed random variables is the $N$-th power of the individual characteristic function,

$$\phi_y(t) = [\phi_x(t)]^N. \qquad (10)$$

The characteristic function of the corresponding arithmetic average is

$$\widehat{\phi}_N(t) = \int_{-\infty}^{+\infty} d\overline{x}\, e^{it\overline{x}}\, \widehat{g}_N(\overline{x}) = \int_{-\infty}^{+\infty} N\, d\overline{x}\, e^{it\overline{x}}\, g_N(N\overline{x}) = \int_{-\infty}^{+\infty} dy\, \exp\!\left( i\, \frac{t}{N}\, y \right) g_N(y),$$

so that $\widehat{\phi}_N(t) = \phi_y(t/N) = [\phi_x(t/N)]^N$.
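For $N = 2$ uniforms this relation can again be checked numerically: the mean has density $2\,g_2(2\overline{x})$ on $[0,1]$ by Eq. (9), and its characteristic function should equal $[\phi_x(t/2)]^2$. A sketch with helper names of our own choosing:

```python
import cmath

# Sketch: check that the characteristic function of the arithmetic mean
# of N = 2 Uniform(0,1) variables equals [phi_u(t/2)]^2. The mean has
# density 2 * g2(2 * xbar) on [0, 1]; helper names are ours.
def phi_u(t):
    # characteristic function of Uniform(0,1)
    return (cmath.exp(1j * t) - 1.0) / (1j * t)

def g2(y):
    # triangular density of the sum of two Uniform(0,1) variables
    return y if y <= 1.0 else 2.0 - y

def phi_mean_numeric(t, n=20000):
    # midpoint rule for the integral over [0, 1] of
    # e^{it xbar} * 2 * g2(2 * xbar) d(xbar)
    h = 1.0 / n
    return h * sum(cmath.exp(1j * t * (h * (k + 0.5)))
                   * 2.0 * g2(2.0 * h * (k + 0.5))
                   for k in range(n))

t = 2.3
# the two evaluations should agree up to quadrature error
diff = abs(phi_mean_numeric(t) - phi_u(t / 2.0) ** 2)
```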
