Armed with Chebyshev's inequality and our four basic facts, we may proceed to proving the law of large numbers. We need to show that
\[
\lim_{n \to \infty} P\left( \left| \bar{X}_n - \mu \right| > \varepsilon \right) = 0 \quad \text{for all } \varepsilon > 0.
\]
For any $\varepsilon > 0$, Chebyshev's inequality tells us that
\[
P\left( \left| \bar{X}_n - \mu \right| > \varepsilon \right) \le \frac{1}{\varepsilon^2} \, E\left( (\bar{X}_n - \mu)^2 \right).
\]
Therefore, to show that $\bar{X}_n \to_p \mu$, it is enough for us to show that
\[
\lim_{n \to \infty} E\left( (\bar{X}_n - \mu)^2 \right) = 0.
\]
Using facts (1) and (2) above, we can see that
\[
E(\bar{X}_n) = E\left( \frac{1}{n} \sum_{i=1}^{n} X_i \right) = \frac{1}{n} E\left( \sum_{i=1}^{n} X_i \right) = \frac{1}{n} \sum_{i=1}^{n} E(X_i) = \frac{1}{n} \sum_{i=1}^{n} \mu = \mu.
\]
Therefore,
\[
E\left( (\bar{X}_n - \mu)^2 \right) = E\left( (\bar{X}_n - E(\bar{X}_n))^2 \right) = \mathrm{Var}(\bar{X}_n),
\]
and so to complete the proof, we need only show that $\lim_{n \to \infty} \mathrm{Var}(\bar{X}_n) = 0$. Using fact (3) above, we can see that
\[
\mathrm{Var}(\bar{X}_n) = \mathrm{Var}\left( \frac{1}{n} \sum_{i=1}^{n} X_i \right) = \frac{1}{n^2} \, \mathrm{Var}\left( \sum_{i=1}^{n} X_i \right).
\]
Fact (4) tells us that the variance of the sum of the $X_i$'s is equal to the sum of the variances of the $X_i$'s, plus two times the sum of all the covariances between different $X_i$'s. But we assumed earlier that the $X_i$'s are iid, meaning in particular that all the $X_i$'s are independent of one another. Therefore, all of the covariances are equal to zero, and so the variance of the sum of the $X_i$'s is just the sum of the variances:
\[
\mathrm{Var}\left( \sum_{i=1}^{n} X_i \right) = \sum_{i=1}^{n} \mathrm{Var}(X_i) = \sum_{i=1}^{n} \sigma^2 = n\sigma^2.
\]
We have now established that
\[
\mathrm{Var}(\bar{X}_n) = \frac{1}{n^2} \, \mathrm{Var}\left( \sum_{i=1}^{n} X_i \right) = \frac{\sigma^2}{n},
\]
which converges to zero as $n \to \infty$. This completes our proof of the law of large numbers.
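As a quick numerical illustration of the result (not part of the proof), the sketch below simulates many sample means of iid draws and compares the observed variance of $\bar{X}_n$ with $\sigma^2/n$. The choice of the Exponential(1) distribution, the sample sizes, and the number of replications are illustrative assumptions, not anything specified in these notes.

```python
# Illustrative check of the law of large numbers: for iid draws with
# mean mu and variance sigma^2, the sample mean should concentrate
# around mu, with Var(X_bar_n) close to sigma^2 / n.
# Exponential(1) (mu = 1, sigma^2 = 1) is an arbitrary example choice.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2 = 1.0, 1.0          # mean and variance of Exponential(1)
reps = 2000                    # independent replications per sample size

for n in [10, 100, 1000]:
    # reps independent sample means, each based on n iid draws
    xbar = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
    print(f"n={n:5d}  mean(xbar)={xbar.mean():.4f}  "
          f"var(xbar)={xbar.var():.6f}  sigma^2/n={sigma2 / n:.6f}")
```

As $n$ grows, the simulated means cluster around $\mu = 1$ and the simulated variance of $\bar{X}_n$ tracks $\sigma^2/n$, which are exactly the two calculations used in the proof.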
10 Convergence in distribution

Suppose we have an infinite sequence of random variables $Z_1, Z_2, Z_3, \ldots$. Previously, we used the notion of convergence in probability to describe the possibility that this sequence of random variables may appear to be converging to some constant value. Convergence in distribution describes a different kind of convergence. Suppose that each $Z_n$ has cdf $F_n$. If there is a random variable $Z$ with cdf $F$, and if the cdfs $F_n$ become arbitrarily close to the cdf $F$ as $n \to \infty$, then $Z_n$ is said to converge in distribution to $Z$. To be more precise, the sequence of random variables $Z_1, Z_2, Z_3, \ldots$ is said to converge in distribution to $Z$ if
\[
\lim_{n \to \infty} F_n(x) = F(x) \quad \text{for all } x.
\]
We write this as $Z_n \to_d Z$.

For a simple example of convergence in distribution, suppose that each $Z_n$ has the $U(0, 1 + 1/n)$ distribution. In this case, the cdf of $Z_n$ resembles the cdf of the $U(0, 1)$ distribution more and more closely as $n \to \infty$. The cdf of $Z_n$ is given by
\[
F_n(x) =
\begin{cases}
0 & \text{for } x < 0 \\
\frac{nx}{n+1} & \text{for } 0 \le x \le 1 + \frac{1}{n} \\
1 & \text{for } x > 1 + \frac{1}{n},
\end{cases}
\]
while the cdf of the $U(0, 1)$ distribution is given by
\[
F(x) =
\begin{cases}
0 & \text{for } x < 0 \\
x & \text{for } 0 \le x \le 1 \\
1 & \text{for } x > 1.
\end{cases}
\]
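A minimal numerical sketch of this example, assuming NumPy (the evaluation grid and the values of $n$ are arbitrary choices): it evaluates $F_n$ and $F$ on a grid and prints the largest gap $\max_x |F_n(x) - F(x)|$, which shrinks as $n \to \infty$, in line with the definition above.

```python
# Illustrative check (not from the notes) that F_n -> F pointwise for
# Z_n ~ U(0, 1 + 1/n) and Z ~ U(0, 1): the largest gap between the two
# cdfs on a grid should shrink roughly like 1/(n+1).
import numpy as np

def F_n(x, n):
    # cdf of U(0, 1 + 1/n): 0 below 0, nx/(n+1) on [0, 1 + 1/n], 1 above
    return np.clip(n * x / (n + 1), 0.0, 1.0)

def F(x):
    # cdf of U(0, 1)
    return np.clip(x, 0.0, 1.0)

xs = np.linspace(-0.5, 2.0, 2001)   # evaluation grid (arbitrary choice)
for n in [1, 10, 100, 1000]:
    gap = np.max(np.abs(F_n(xs, n) - F(xs)))
    print(f"n={n:5d}  max |F_n(x) - F(x)| = {gap:.5f}")
```

Here the largest gap occurs at $x = 1$, where $F(1) = 1$ but $F_n(1) = n/(n+1)$, so the gap equals $1/(n+1)$ and vanishes in the limit.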
