Therefore, $Z_n \to_d N(0,1)$.

13 Asymptotic inference about the mean

Let $X_1, X_2, \dots$ be an infinite sequence of iid random variables with common expected value $\mu = E(X)$ and common variance $\sigma^2 = \mathrm{Var}(X)$. Suppose we observe just the first $n$ random variables, $X_1, \dots, X_n$. What can we say about the expected value $\mu$?

The law of large numbers tells us that the sample mean $\bar{X}_n$ should be close to $\mu$ when $n$ is large. The central limit theorem can be used to get a more precise idea of just how close $\bar{X}_n$ should be to $\mu$. Mathematically, the central limit theorem says that
$$\sqrt{n}\,(\bar{X}_n - \mu) \to_d N(0, \sigma^2).$$

We cannot know the exact value of $\sigma^2$ merely by observing our sample $X_1, \dots, X_n$. But we can estimate it with the sample variance $\hat{\sigma}^2_n$, defined as
$$\hat{\sigma}^2_n = \frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X}_n)^2.$$
Some people prefer to define the sample variance with a divisor of $n-1$ instead of $n$, but this makes no difference to us, since we will be looking at limiting results as $n \to \infty$. With a little algebra, we can see that
$$\hat{\sigma}^2_n = \frac{1}{n} \sum_{i=1}^{n} X_i^2 - \bar{X}_n^2.$$

The law of large numbers tells us that $\frac{1}{n} \sum_{i=1}^{n} X_i^2 \to_p E(X^2)$ as $n \to \infty$. It also tells us that $\bar{X}_n \to_p E(X)$, and so the continuous mapping theorem implies that $-\bar{X}_n^2 \to_p -E(X)^2$. Therefore, applying part 1 of Slutsky's theorem, we find that
$$\hat{\sigma}^2_n \to_p E(X^2) - E(X)^2 = \mathrm{Var}(X) = \sigma^2$$
as $n \to \infty$. In other words, the sample variance $\hat{\sigma}^2_n$ provides a consistent estimate of the population variance $\sigma^2$. Since the square root function is continuous, the continuous mapping theorem implies that $\hat{\sigma}_n \to_p \sigma$. Combining this result with part 3 of Slutsky's theorem and the fact that $\sqrt{n}\,(\bar{X}_n - \mu) \to_d N(0, \sigma^2)$, we find that
$$\frac{\bar{X}_n - \mu}{\hat{\sigma}_n / \sqrt{n}} = \frac{\sqrt{n}\,(\bar{X}_n - \mu)}{\hat{\sigma}_n} \to_d \frac{N(0, \sigma^2)}{\sigma} = N(0, 1),$$
provided that $\sigma > 0$ (which will be true unless every $X_i$ takes the same value).
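The derivation above can be checked numerically. The following is a minimal simulation sketch in Python (the function name `studentized_mean` and the choice of an Exponential(1) sample, for which $\mu = 1$ and $\sigma^2 = 1$, are illustrative assumptions, not part of the text): with a large $n$, the divisor-$n$ sample variance should land near $\sigma^2$, and the studentized mean should behave like a single draw from $N(0,1)$ even though the underlying data are far from normal.

```python
import random
import math

random.seed(1)

def studentized_mean(xs, mu):
    """Return (sigma_hat_sq, t) for sample xs and true mean mu, using the
    divisor-n sample variance from the text:
      sigma_hat_sq = (1/n) * sum (X_i - Xbar_n)^2
      t            = (Xbar_n - mu) / (sigma_hat_n / sqrt(n))
    """
    n = len(xs)
    xbar = sum(xs) / n
    sigma_hat_sq = sum((x - xbar) ** 2 for x in xs) / n
    t = (xbar - mu) / math.sqrt(sigma_hat_sq / n)
    return sigma_hat_sq, t

# Exponential(1) draws: mu = 1, sigma^2 = 1, and decidedly non-normal.
n = 100_000
xs = [random.expovariate(1.0) for _ in range(n)]
sigma_hat_sq, t = studentized_mean(xs, mu=1.0)
print(sigma_hat_sq)  # consistent estimate: close to sigma^2 = 1
print(t)             # approximately a single N(0, 1) draw
```

One can also verify the algebraic identity from the text, $\hat{\sigma}^2_n = \frac{1}{n}\sum X_i^2 - \bar{X}_n^2$, by computing both sides on the same sample.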
We have shown that the distribution of the ratio $\dfrac{\bar{X}_n - \mu}{\hat{\sigma}_n / \sqrt{n}}$ approximates the $N(0,1)$ distribution in large samples. This fact can be used to test hypotheses about the value of $\mu$, or to form confidence intervals for $\mu$.

Suppose we wish to test the hypothesis that $\mu = \mu_0$. Here, $\mu$ is the true expected value of $X$, which we do not know, and $\mu_0$ is a conjectured value of $\mu$. We can calculate the t-statistic
$$t_n = \frac{\bar{X}_n - \mu_0}{\hat{\sigma}_n / \sqrt{n}}$$
from our sample $X_1, \dots, X_n$. If the hypothesis $\mu = \mu_0$ is true, then our earlier argument shows that $t_n$ has the $N(0,1)$ distribution in large samples. Therefore, it should lie between $-1.96$ and $1.96$ with probability approximately equal to $0.95$. On the other hand, if $\mu \neq \mu_0$, we have no reason to believe that $t_n$ will lie in this range, and in fact it can be shown that it will be greater than $1.96$ in absolute value with probability approaching one as $n \to \infty$. Therefore, if we calculate $t_n$ and find that it does not lie between $-1.96$ and $1.96$, we reject the hypothesis $\mu = \mu_0$.
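The rejection rule above can be sketched as a short simulation. This is an illustrative example, not part of the text: the function names and the specific normal samples (true mean $0$ or $0.2$, $\sigma = 1$) are my own choices. Under the true hypothesis the rejection rate should be near the nominal $0.05$; under the false hypothesis it should be near $1$, matching the power claim in the text.

```python
import random
import math

random.seed(2)

def t_stat(xs, mu0):
    """t_n = (Xbar_n - mu0) / (sigma_hat_n / sqrt(n)), divisor-n variance."""
    n = len(xs)
    xbar = sum(xs) / n
    sigma_hat = math.sqrt(sum((x - xbar) ** 2 for x in xs) / n)
    return (xbar - mu0) / (sigma_hat / math.sqrt(n))

def rejects(xs, mu0, crit=1.96):
    """Reject H0: mu = mu0 when |t_n| > 1.96 (asymptotic 5% level)."""
    return abs(t_stat(xs, mu0)) > crit

n, trials = 500, 2000

# Under H0 (true mean really is 0): rejection rate should be near 0.05.
null_rate = sum(rejects([random.gauss(0.0, 1.0) for _ in range(n)], 0.0)
                for _ in range(trials)) / trials

# Under the alternative (true mean 0.2, testing mu0 = 0): rate near 1.
alt_rate = sum(rejects([random.gauss(0.2, 1.0) for _ in range(n)], 0.0)
               for _ in range(trials)) / trials

print(null_rate, alt_rate)
```

The same statistic inverted gives the usual large-sample confidence interval $\bar{X}_n \pm 1.96\,\hat{\sigma}_n/\sqrt{n}$, which is the confidence-interval use mentioned above.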