The sample mean of $n$ observations is defined by

$$\bar{X}_n = \bar{X} = \frac{X_1 + X_2 + \cdots + X_n}{n}, \quad n = 1, 2, \ldots$$

Recall that

$$E(\bar{X}_n) = \mu \quad \text{and} \quad \mathrm{Var}(\bar{X}_n) = \frac{\sigma^2}{n}, \quad n = 1, 2, \ldots$$

Conclusion: the sample mean becomes more and more concentrated around the true (population) mean $\mu$.

The Law of Large Numbers: the sample mean of a sequence of iid random variables with mean $\mu$ converges to this true mean as the sample size $n$ grows:

$$\bar{X}_n \to \mu \quad \text{as } n \to \infty.$$

1. Markov's inequality. For any nonnegative random variable $X$ and any number $a > 0$,
$$P(X > a) \le \frac{EX}{a}.$$

2. Chebyshev's inequality. For any random variable $X$ with mean $\mu$ and any number $h > 0$,
$$P(|X - \mu| > h) \le \frac{\mathrm{Var}(X)}{h^2}.$$

3. The Law of Large Numbers. For any $\varepsilon > 0$,
$$P(|\bar{X}_n - \mu| > \varepsilon) \to 0 \quad \text{as } n \to \infty.$$
This note was uploaded on 12/03/2010 for the course OR&IE 3500 taught by Professor Samorodnitsky during the Fall '10 term at Cornell University (Engineering School).