Lecture 14
ORIE 3500/5500, Summer 2009, Chen

Class Today: Law of Large Numbers, Convergence, Normal (Gaussian) Distribution

1 Law of Large Numbers

The law of large numbers, or l.l.n., is one of the most important theorems in probability and is the backbone of most statistical procedures.

Theorem. If $X_1, \ldots, X_n$ are independent and identically distributed (iid) with mean $\mu$, then the sample mean $\bar{X}_n$ converges to the true mean $\mu$ as $n$ increases, that is,
$$\bar{X}_n \longrightarrow \mu, \quad n \to \infty.$$

Before we try to see why we should expect this, let us recall a few properties of the sample mean,
$$\bar{X}_n = \frac{X_1 + \cdots + X_n}{n}.$$

1. Expected value of the sample mean:
$$E(\bar{X}_n) = E\!\left(\frac{X_1 + \cdots + X_n}{n}\right) = \frac{1}{n}\,E(X_1 + \cdots + X_n) = \frac{1}{n}\, n E(X_1) = \mu.$$
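As a quick numerical illustration of this convergence (not part of the original notes), here is a minimal simulation sketch. It assumes, purely for illustration, that the $X_i$ are Exponential(1), so the true mean is $\mu = 1$, and prints the sample mean for increasing $n$; the distribution, seed, and sample sizes are all arbitrary choices.

```python
import numpy as np

# Minimal sketch: watch the sample mean X-bar_n approach the true mean mu as n grows.
# Assumption (not from the notes): X_i ~ Exponential(1), so mu = 1.
rng = np.random.default_rng(0)
mu = 1.0

for n in (10, 100, 1_000, 10_000, 100_000):
    x = rng.exponential(scale=1.0, size=n)   # X_1, ..., X_n, iid draws
    xbar = x.mean()                          # sample mean X-bar_n
    print(f"n = {n:>7d}   sample mean = {xbar:.4f}   |error| = {abs(xbar - mu):.4f}")
```

Running the sketch, the absolute error typically shrinks as $n$ increases, which is exactly the behavior the theorem describes.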
2. If $\mathrm{var}(X_1) = \sigma^2$, then the variance of the sample mean is
$$\mathrm{var}(\bar{X}_n) = \mathrm{var}\!\left(\frac{X_1 + \cdots + X_n}{n}\right) = \frac{1}{n^2}\,\mathrm{var}(X_1 + \cdots + X_n) = \frac{1}{n^2}\bigl(\mathrm{var}(X_1) + \cdots + \mathrm{var}(X_n)\bigr) \quad \text{(by independence)}$$
$$= \frac{1}{n^2}\, n \cdot \mathrm{var}(X_1) = \frac{\sigma^2}{n}.$$

This means that the variance of the sample mean decreases as the sample size increases. Recall that the variance of a random variable measures its dispersion about its mean. So if the variance is decreasing to 0, the random variable is slowly shrinking to its mean; it becomes more and more concentrated around the population mean.

Chebyshev's inequality completes the argument. For any $\epsilon > 0$,
$$P\bigl[\,|\bar{X}_n - \mu| > \epsilon\,\bigr] \le \frac{\mathrm{var}(\bar{X}_n)}{\epsilon^2} = \frac{\sigma^2/n}{\epsilon^2} \to 0, \quad n \to \infty.$$

This shows that however small a positive number $\epsilon$ we choose, the probability that the sample mean is more than distance $\epsilon$ away from the true mean goes to zero. So we have proved the LLN in the case when the variance of $X_1$ is finite. It can be proved without this assumption as well; note that the statement of the LLN does not assume anything about the variance of $X_1$.
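To see the Chebyshev bound $\sigma^2/(n\epsilon^2)$ in action, here is another small sketch (again not from the original notes). It assumes $X_i \sim$ Exponential(1), so $\mu = 1$ and $\sigma^2 = 1$, fixes $\epsilon = 0.1$, and compares a Monte Carlo estimate of $P[|\bar{X}_n - \mu| > \epsilon]$ with the bound; the choice of distribution, $\epsilon$, and number of repetitions are illustrative assumptions.

```python
import numpy as np

# Sketch: compare the empirical probability P[|X-bar_n - mu| > eps]
# with the Chebyshev bound sigma^2 / (n * eps^2).
# Assumptions (not from the notes): X_i ~ Exponential(1), so mu = 1, sigma^2 = 1.
rng = np.random.default_rng(1)
mu, sigma2, eps = 1.0, 1.0, 0.1
trials = 1_000  # Monte Carlo repetitions used to estimate the probability

for n in (100, 1_000, 10_000):
    # Each row is one sample X_1, ..., X_n; take the sample mean of each row.
    xbar = rng.exponential(scale=1.0, size=(trials, n)).mean(axis=1)
    empirical = np.mean(np.abs(xbar - mu) > eps)  # estimated P[|X-bar_n - mu| > eps]
    bound = sigma2 / (n * eps**2)                 # Chebyshev upper bound
    print(f"n = {n:>6d}   empirical = {empirical:.4f}   Chebyshev bound = {bound:.4f}")
```

Both the empirical probability and the bound shrink toward zero as $n$ grows, matching the argument above; the bound itself is usually far from tight, but it is enough to prove the convergence.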