LAW OF LARGE NUMBERS

Lecture 14, ORIE 3500/5500, Summer 2009, Chen

Class Today
- Law of Large Numbers
- Convergence
- Normal (Gaussian) Distribution

1 Law of Large Numbers

The law of large numbers, or l.l.n., is one of the most important theorems in probability and is the backbone of most statistical procedures.

Theorem. If $X_1, \ldots, X_n$ are independent and identically distributed (iid) with mean $\mu$, then the sample mean $\bar{X}_n$ converges to the true mean $\mu$ as $n$ increases, that is,

    $\bar{X}_n \to \mu$ as $n \to \infty$.

Before we try to see why we should expect this, let us recall a few properties of the sample mean,

    $\bar{X}_n = \dfrac{X_1 + \cdots + X_n}{n}$.

1. Expected value of the sample mean:

    $E(\bar{X}_n) = E\left(\dfrac{X_1 + \cdots + X_n}{n}\right) = \dfrac{1}{n} E(X_1 + \cdots + X_n) = \dfrac{1}{n}\, n E(X_1) = \mu$.

2. If $\mathrm{var}(X_1) = \sigma^2$, then the variance of the sample mean is

    $\mathrm{var}(\bar{X}_n) = \mathrm{var}\left(\dfrac{X_1 + \cdots + X_n}{n}\right) = \dfrac{1}{n^2} \mathrm{var}(X_1 + \cdots + X_n)$
    $\phantom{\mathrm{var}(\bar{X}_n)} = \dfrac{1}{n^2} \left(\mathrm{var}(X_1) + \cdots + \mathrm{var}(X_n)\right)$  (by independence)
    $\phantom{\mathrm{var}(\bar{X}_n)} = \dfrac{1}{n^2}\, n\, \mathrm{var}(X_1) = \dfrac{\sigma^2}{n}$.

This means that the variance of the sample mean decreases as the sample size increases. Recall that the variance of a random variable measures the dispersion of the random variable about its mean. So if the variance is decreasing to 0, then the random variable is slowly shrinking to its mean: $\bar{X}_n$ becomes more and more concentrated around the population mean. Chebyshev's inequality completes the argument. For any $\epsilon > 0$,

    $P\left[\,|\bar{X}_n - \mu| > \epsilon\,\right] \le \dfrac{\mathrm{var}(\bar{X}_n)}{\epsilon^2} = \dfrac{\sigma^2}{n\epsilon^2} \to 0$ as $n \to \infty$.

This shows that for whatever small positive number $\epsilon$ we choose, the probability that the sample mean is more than distance $\epsilon$ away from the true mean goes to zero. So we have proved the LLN in the case when the variance of $X_1$ is finite.
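The argument above can be checked numerically. The sketch below (not part of the original lecture; the Exponential(1) distribution, sample sizes, and replication count are my own choices) draws many independent copies of $\bar{X}_n$ and confirms that its average sits near $\mu$ while its empirical variance shrinks like $\sigma^2/n$:

```python
# Monte Carlo check of the LLN argument: for Exponential(1) data,
# mu = 1 and sigma^2 = 1, so var(Xbar_n) should be close to 1/n.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2 = 1.0, 1.0  # mean and variance of Exponential(1)

for n in (10, 100, 10_000):
    # 2000 independent sample means, each computed from n observations
    xbars = rng.exponential(scale=1.0, size=(2000, n)).mean(axis=1)
    print(f"n={n:6d}  mean of Xbar_n = {xbars.mean():.4f}  "
          f"var of Xbar_n = {xbars.var():.6f}  (sigma^2/n = {sigma2 / n:.6f})")
```

As $n$ grows, the printed variance of $\bar{X}_n$ tracks $\sigma^2/n$, which is exactly the quantity Chebyshev's inequality uses to bound $P[|\bar{X}_n - \mu| > \epsilon]$.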
This note was uploaded on 09/22/2009 for the course ORIE 3500 taught by Professor Weber during the Summer '08 term at Cornell University (Engineering School).