Contents of Lecture III

1. The Central Limit Theorem and Binning
2. Gaussian Error Analysis for Large and Small Samples
3. The Jackknife Approach

1 The Central Limit Theorem and Binning

How is the sum of two independent random variables

$$ y^r = x_1^r + x_2^r \quad (1) $$

distributed? We denote the probability density of $y^r$ by $g(y)$. The corresponding cumulative distribution function is given by

$$ G(y) = \int_{x_1 + x_2 \le y} f_1(x_1)\, f_2(x_2)\, dx_1\, dx_2 = \int_{-\infty}^{+\infty} f_1(x)\, F_2(y - x)\, dx $$

where $F_2(x)$ is the distribution function of the random variable $x_2^r$. Taking the derivative, we obtain the probability density of $y^r$:

$$ g(y) = \frac{dG(y)}{dy} = \int_{-\infty}^{+\infty} f_1(x)\, f_2(y - x)\, dx. \quad (2) $$

The probability density of a sum of two independent random variables is the convolution of the probability densities of these random variables.

Example: Sums of uniform random numbers, i.e., sums of independent copies of a random variable $x^r$ distributed uniformly on $(0,1]$:

(a) Let $y^r = x_1^r + x_2^r$; then

$$ g_2(y) = \begin{cases} y & \text{for } 0 \le y \le 1, \\ 2 - y & \text{for } 1 \le y \le 2, \\ 0 & \text{elsewhere.} \end{cases} \quad (3) $$

(b) Let $y^r = x_1^r + x_2^r + x_3^r$; then

$$ g_3(y) = \begin{cases} y^2/2 & \text{for } 0 \le y \le 1, \\ (-2y^2 + 6y - 3)/2 & \text{for } 1 \le y \le 2, \\ (y - 3)^2/2 & \text{for } 2 \le y \le 3, \\ 0 & \text{elsewhere.} \end{cases} \quad (4) $$

The convolution (2) takes on a simple form in Fourier space. In statistics the Fourier transform of the probability density is known as the characteristic function, defined as the expectation value of $e^{itx^r}$:

$$ \phi(t) = \langle e^{itx^r} \rangle = \int_{-\infty}^{+\infty} e^{itx}\, f(x)\, dx. \quad (5) $$

The characteristic function is particularly useful for investigating sums of random variables, $y^r = x_1^r + x_2^r$:

$$ \phi_y(t) = \left\langle e^{itx_1^r + itx_2^r} \right\rangle = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} e^{itx_1}\, e^{itx_2}\, f_1(x_1)\, f_2(x_2)\, dx_1\, dx_2 = \phi_{x_1}(t)\, \phi_{x_2}(t). \quad (6) $$

The characteristic function of a sum of random variables is the product of their characteristic functions. The result generalizes immediately to $N$ random variables, $y^r = x_1^r + \cdots + x_N^r$.
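The piecewise densities (3) and (4) can be verified by direct sampling; below is a minimal Monte Carlo sketch in Python (the helper names `g2` and `empirical_density` are illustrative, not from the lecture):

```python
import random

def g2(y):
    """Triangular density of y = x1 + x2, xi uniform on (0, 1]; Eq. (3)."""
    if 0.0 <= y <= 1.0:
        return y
    if 1.0 < y <= 2.0:
        return 2.0 - y
    return 0.0

def empirical_density(n_samples, n_terms, y, half_width=0.05):
    """Estimate the density of a sum of n_terms uniform random numbers
    at the point y by counting samples in a small window around y."""
    hits = 0
    for _ in range(n_samples):
        s = sum(random.random() for _ in range(n_terms))
        if abs(s - y) < half_width:
            hits += 1
    return hits / (n_samples * 2.0 * half_width)

random.seed(0)
# At the peak y = 1 the density of x1 + x2 should be close to g2(1) = 1.
print(abs(empirical_density(200_000, 2, 1.0) - g2(1.0)) < 0.06)
```

The window estimate carries a small bias of order `half_width` squared (the density is averaged over the window), so the tolerance is deliberately loose.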
The characteristic function of $y^r$ is

$$ \phi_y(t) = \prod_{i=1}^{N} \phi_{x_i}(t) \quad (7) $$

and the probability density of $y^r$ is the inverse Fourier transform of this characteristic function:

$$ g(y) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} dt\, e^{-ity}\, \phi_y(t). \quad (8) $$

The probability density of the sample mean is obtained as follows: the arithmetic mean of $y^r$ is $\overline{x}^r = y^r/N$. We denote the probability density of $y^r$ by $g_N(y)$ and the probability density of the arithmetic mean by $\widehat{g}_N(\overline{x})$. They are related by

$$ \widehat{g}_N(\overline{x}) = N\, g_N(N \overline{x}). \quad (9) $$

This follows by substituting $y = N \overline{x}$ into $g_N(y)\, dy$:

$$ 1 = \int_{-\infty}^{+\infty} g_N(y)\, dy = \int_{-\infty}^{+\infty} g_N(N \overline{x})\, N\, d\overline{x} = \int_{-\infty}^{+\infty} \widehat{g}_N(\overline{x})\, d\overline{x}. $$

Example: see Figure 1.

Figure 1: Probability densities for the arithmetic means of two and three uniformly distributed random variables, $\widehat{g}_2(\overline{x})$ and $\widehat{g}_3(\overline{x})$, respectively.
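Relation (9) can be checked numerically as well: for $N = 3$, the density of the arithmetic mean at $\overline{x}$ should equal $3\, g_3(3\overline{x})$, with $g_3$ from Eq. (4). A small Monte Carlo sketch (the helper names `g3` and `mean_density` are illustrative, not from the lecture):

```python
import random

def g3(y):
    """Density of y = x1 + x2 + x3, xi uniform on (0, 1]; Eq. (4)."""
    if 0.0 <= y <= 1.0:
        return y * y / 2.0
    if 1.0 < y <= 2.0:
        return (-2.0 * y * y + 6.0 * y - 3.0) / 2.0
    if 2.0 < y <= 3.0:
        return (y - 3.0) ** 2 / 2.0
    return 0.0

def mean_density(n_samples, n_terms, xbar, half_width=0.02):
    """Estimate the density of the arithmetic mean of n_terms uniform
    random numbers at the point xbar via a small counting window."""
    hits = 0
    for _ in range(n_samples):
        m = sum(random.random() for _ in range(n_terms)) / n_terms
        if abs(m - xbar) < half_width:
            hits += 1
    return hits / (n_samples * 2.0 * half_width)

random.seed(1)
N, xbar = 3, 0.5
# Eq. (9): the density of the mean at xbar equals N * g_N(N * xbar).
est = mean_density(400_000, N, xbar)
exact = N * g3(N * xbar)
print(abs(est - exact) < 0.1)
```

Note the factor $N$ in Eq. (9) at work: the mean's density peaks at $\widehat{g}_3(1/2) = 3\, g_3(3/2) = 2.25$, much narrower and taller than $g_3$ itself.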
This note was uploaded on 11/10/2011 for the course PHY 5157 taught by Professor Berg during the Fall '08 term at University of Florida.