The sample mean of $n$ observations is defined by
$$\bar{X}_n = \bar{X} = \frac{X_1 + X_2 + \cdots + X_n}{n}, \qquad n = 1, 2, \ldots$$
Recall that
$$E(\bar{X}_n) = \mu \quad \text{and} \quad \mathrm{Var}(\bar{X}_n) = \frac{\sigma^2}{n}, \qquad n = 1, 2, \ldots$$

**Conclusion:** The sample mean becomes more and more concentrated around the (population) true mean $\mu$.

**The Law of Large Numbers:** The sample mean of a sequence of iid random variables with mean $\mu$ converges to this true mean as the sample size $n$ grows:
$$\bar{X}_n \to \mu \quad \text{as } n \to \infty.$$

1. **Markov's inequality.** For any nonnegative random variable $X$ and any number $a > 0$,
   $$P(X > a) \le \frac{E(X)}{a}.$$
2. **Chebyshev's inequality.** For any random variable $X$ with mean $\mu$ and any number $h > 0$,
   $$P(|X - \mu| > h) \le \frac{\mathrm{Var}(X)}{h^2}.$$
3. **The Law of Large Numbers.** For any $\varepsilon > 0$,
   $$P\big(|\bar{X}_n - \mu| > \varepsilon\big) \to 0 \quad \text{as } n \to \infty.$$
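The concentration described above can be checked by simulation. The sketch below (an illustration, not part of the original notes) draws iid Uniform(0, 1) variables, for which $\mu = 1/2$ and $\sigma^2 = 1/12$, watches the sample mean settle toward $\mu$ as $n$ grows, and compares an empirical tail probability against the Chebyshev bound $\mathrm{Var}(\bar{X}_n)/h^2$:

```python
import random

random.seed(0)

# Assumed setup for illustration: iid Uniform(0, 1) draws,
# so the true mean is mu = 1/2 and the variance is sigma^2 = 1/12.
mu, var = 0.5, 1 / 12

def sample_mean(n):
    """Sample mean of n iid Uniform(0, 1) observations."""
    return sum(random.random() for _ in range(n)) / n

# Law of Large Numbers: the sample mean concentrates around mu as n grows.
for n in (10, 1_000, 100_000):
    print(f"n = {n:>7}: sample mean = {sample_mean(n):.4f}")

# Chebyshev check: P(|X̄_n - mu| > h) <= Var(X̄_n) / h^2 = (sigma^2 / n) / h^2.
n, h, trials = 100, 0.05, 2_000
hits = sum(abs(sample_mean(n) - mu) > h for _ in range(trials))
empirical = hits / trials
bound = (var / n) / h**2
print(f"empirical tail probability {empirical:.3f} <= Chebyshev bound {bound:.3f}")
```

The empirical tail probability typically comes out well under the Chebyshev bound, which is expected: Chebyshev's inequality holds for every distribution with finite variance, so it is usually far from tight for any particular one.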


- '10
- SAMORODNITSKY
- Normal Distribution, Variance, Laplace, Probability theory, Characteristic function