
# b.lect17 - Outline The LLN and the Consistency of Averages...


Lecture 17 — Chapter 5: Two Basic Approximation Results
M. George Akritas

## Outline

- The LLN and the Consistency of Averages
- Convolutions
  - Distribution of Sums of Normal Random Variables
- The Central Limit Theorem
  - The DeMoivre-Laplace Theorem

## The LLN and the Consistency of Averages

- In Chapter 1 we saw that each of $\overline{X}$, $\widehat{p}$ and $S^2$ approximates the value of the population parameter it estimates.
- In Lab 4 we mentioned that the above (as well as all estimators used in statistics) have the consistency property.
- The theoretical underpinning of the consistency property is the Law of Large Numbers, or LLN for short.

**Theorem (The Law of Large Numbers).** Let $X_1, \ldots, X_n$ be a simple random sample from a population with finite mean $\mu$. Then, for the estimation error $|\overline{X} - \mu|$ we have
$$|\overline{X} - \mu| \to 0, \quad \text{as } n \to \infty.$$

### Commentaries

1. The statement in Theorem 5.2.1, p. 241, involves a function $g$: setting $Y_i = g(X_i)$ we have that the estimation error $|\overline{Y} - \mu_Y|$ tends to zero as $n \to \infty$.
   - Using $g(x) = x^2$, the LLN yields the consistency of $S^2$, since
     $$S^2 = \frac{n}{n-1} \left[ \frac{1}{n} \sum_{i=1}^{n} X_i^2 - (\overline{X})^2 \right].$$
2. Theorem 5.2.1 states the LLN in terms of a type of convergence, called *convergence in probability*: $|\overline{Y} - \mu_Y| \to 0$ in probability as $n \to \infty$, meaning that
   $$P(|\overline{Y} - \mu_Y| > \epsilon) \to 0, \quad \text{for any } \epsilon > 0.$$
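The LLN and the $g(x) = x^2$ commentary can be illustrated with a short simulation. (The lecture gives no code; Python and the Uniform(0, 1) population are assumptions for illustration.)

```python
import random

random.seed(0)

# Population: Uniform(0, 1), so mu = 0.5 and sigma^2 = 1/12.
mu, sigma2 = 0.5, 1 / 12

for n in (100, 10_000, 1_000_000):
    x = [random.random() for _ in range(n)]
    xbar = sum(x) / n
    # S^2 via the identity in Commentary 1: (n/(n-1)) [ mean(X_i^2) - xbar^2 ]
    s2 = n / (n - 1) * (sum(v * v for v in x) / n - xbar ** 2)
    print(f"n={n:>9,}  |xbar - mu| = {abs(xbar - mu):.5f}  "
          f"|S^2 - sigma^2| = {abs(s2 - sigma2):.5f}")
```

As $n$ grows, both estimation errors shrink toward zero: consistency of $\overline{X}$ directly, and of $S^2$ via the LLN applied to $Y_i = X_i^2$.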
### Commentaries (continued)

3. Convergence in probability is easily proved via Chebyshev's Inequality, which states that
   $$P(|\overline{Y} - \mu_Y| > \epsilon) \le \frac{\mathrm{Var}(\overline{Y})}{\epsilon^2}. \tag{2.1}$$
   Since $\mathrm{Var}(\overline{Y}) = \sigma_Y^2 / n$, the right-hand side of (2.1) is
   $$\frac{\mathrm{Var}(\overline{Y})}{\epsilon^2} = \frac{\sigma_Y^2}{n \epsilon^2} \to 0, \quad \text{as } n \to \infty.$$

## Convolutions

### Distribution of Sums of Normal Random Variables

- The distribution of the sum of two independent random variables is called the *convolution* of the two distributions.
- We saw that:
  1. If $X_i \sim \mathrm{Bin}(n_i, p)$, $i = 1, \ldots, k$, are independent, then $X_1 + \cdots + X_k \sim \mathrm{Bin}(n_1 + \cdots + n_k, p)$. ...
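The Chebyshev bound (2.1) can be checked numerically. This is a sketch, not part of the lecture: Python, a Uniform(0, 1) population, and the particular $n$, $\epsilon$, and repetition count are all assumptions.

```python
import random

random.seed(1)

# Population: Uniform(0, 1), so mu = 0.5 and sigma^2 = 1/12.
mu, var = 0.5, 1 / 12
n, eps, reps = 50, 0.1, 20_000

# Estimate P(|Ybar - mu| > eps) by repeated sampling.
exceed = 0
for _ in range(reps):
    ybar = sum(random.random() for _ in range(n)) / n
    if abs(ybar - mu) > eps:
        exceed += 1

empirical = exceed / reps
bound = var / (n * eps ** 2)   # sigma^2 / (n * eps^2), the RHS of (2.1)
print(f"empirical tail prob = {empirical:.4f}, Chebyshev bound = {bound:.4f}")
```

The empirical tail probability sits well below the Chebyshev bound, which is what (2.1) guarantees; the bound itself shrinks like $1/n$.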
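The binomial convolution fact above can be verified exactly by convolving the two pmfs. (A short Python check, not from the lecture; the choice of $n_1 = 4$, $n_2 = 6$, $p = 0.3$ is an arbitrary example.)

```python
from math import comb

def binom_pmf(n, p, k):
    """P(X = k) for X ~ Bin(n, p)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

n1, n2, p = 4, 6, 0.3

# Convolution: P(X1 + X2 = s) = sum_k P(X1 = k) * P(X2 = s - k)
for s in range(n1 + n2 + 1):
    conv = sum(binom_pmf(n1, p, k) * binom_pmf(n2, p, s - k)
               for k in range(max(0, s - n2), min(n1, s) + 1))
    direct = binom_pmf(n1 + n2, p, s)
    assert abs(conv - direct) < 1e-12
print("convolution of Bin(4, 0.3) and Bin(6, 0.3) matches Bin(10, 0.3)")
```

This works only because both summands share the same success probability $p$; sums of binomials with different $p$'s are not binomial.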