The sample mean of n observations is defined by

    \bar{X}_n = \bar{X} = (X_1 + X_2 + ... + X_n) / n,    n = 1, 2, ...

Recall that E(\bar{X}_n) = \mu and Var(\bar{X}_n) = \sigma^2 / n, n = 1, 2, ...

Conclusion: the sample mean becomes more and more concentrated around the true (population) mean \mu.

The Law of Large Numbers: the sample mean of a sequence of iid random variables with mean \mu converges to this true mean as the sample size n grows:

    \bar{X}_n \to \mu  as  n \to \infty.

1. Markov's inequality. For any nonnegative random variable X and any number a > 0,

    P(X > a) \le E(X) / a.

2. Chebyshev's inequality. For any random variable X with mean \mu and any number h > 0,

    P(|X - \mu| > h) \le Var(X) / h^2.

3. The Law of Large Numbers. For any \epsilon > 0,

    P(|\bar{X}_n - \mu| > \epsilon) \to 0  as  n \to \infty.
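The concentration described above is easy to see by simulation. The sketch below (my own illustration, not part of the lecture) draws iid Uniform(0, 1) variables, for which \mu = 1/2 and \sigma^2 = 1/12, watches the sample mean approach \mu as n grows, and compares an empirical tail probability against the Chebyshev bound Var(\bar{X}_n)/h^2 = \sigma^2/(n h^2). The sample sizes, the threshold h, and the trial count are arbitrary choices for the demonstration.

```python
import random

random.seed(0)

# iid Uniform(0,1): mu = 1/2, sigma^2 = 1/12.
mu, var = 0.5, 1 / 12

def sample_mean(n):
    """Sample mean Xbar_n of n iid Uniform(0,1) draws."""
    return sum(random.random() for _ in range(n)) / n

# Law of Large Numbers: |Xbar_n - mu| shrinks as n grows.
for n in (10, 1000, 100000):
    print(n, abs(sample_mean(n) - mu))

# Chebyshev: P(|Xbar_n - mu| > h) <= Var(Xbar_n)/h^2 = var/(n*h^2).
# Estimate the left-hand side by repeated simulation.
h, n, trials = 0.05, 200, 2000
exceed = sum(abs(sample_mean(n) - mu) > h for _ in range(trials)) / trials
bound = var / (n * h ** 2)
print(exceed, "<=", bound)
```

Chebyshev's inequality is typically loose: here the bound is about 0.17, while the empirical exceedance frequency is usually far smaller, which is consistent with the inequality holding for any distribution.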
Fall '10, SAMORODNITSKY