elemprob-fall2010-page42

When $b - a$ is small, there is a correction that makes things more accurate, namely to replace $a$ by $a - \frac{1}{2}$ and $b$ by $b + \frac{1}{2}$. This correction never hurts and is sometimes necessary. For example, in tossing a coin 100 times, there is positive probability that there are exactly 50 heads, while without the correction the answer given by the normal approximation would be 0.

An example. We toss a coin 100 times. What is the probability of getting 49, 50, or 51 heads?

Answer. We write $P(49 \le S_n \le 51) = P(48.5 \le S_n \le 51.5)$ and then continue as above.

17 Limit laws

Suppose the $X_i$ are independent and all have the same distribution. In the case of continuous or discrete random variables, this means they all have the same density. We say the $X_i$ are i.i.d., which stands for "independent and identically distributed." Let $S_n = \sum_{i=1}^n X_i$.
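The continuity-corrected approximation in the coin example can be checked against the exact binomial probability. A minimal sketch using only the Python standard library (the helper name `normal_cdf` is illustrative, not from the notes):

```python
from math import comb, erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

n, p = 100, 0.5
mu = n * p                      # mean of S_n = 50
sigma = sqrt(n * p * (1 - p))   # standard deviation = 5

# Normal approximation with the continuity correction:
# P(49 <= S_n <= 51) = P(48.5 <= S_n <= 51.5)
approx = normal_cdf((51.5 - mu) / sigma) - normal_cdf((48.5 - mu) / sigma)

# Exact binomial probability, for comparison
exact = sum(comb(n, k) for k in range(49, 52)) / 2**n

print(f"approx = {approx:.4f}, exact = {exact:.4f}")
```

Both values come out near 0.236, while integrating from 49 to 51 without the correction gives roughly 0.159, illustrating why the correction matters when $b - a$ is small.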

$S_n$ is called the partial sum process.

Theorem 17.1. Suppose $E|X_i| < \infty$ and let $\mu = E X_i$. Then $S_n/n \to \mu$.

This is known as the strong law of large numbers (SLLN). The convergence here means that $S_n(\omega)/n \to \mu$ for every $\omega \in S$, where $S$ is the probability space, except possibly for a set of $\omega$ of probability 0. The proof of Theorem 17.1 is quite hard, and we prove a weaker version, the weak law of large numbers (WLLN). The WLLN states that for every $a > 0$,
$$P\left( \left| \frac{S_n}{n} - E X_1 \right| > a \right) \to 0 \quad \text{as } n \to \infty.$$
It is not even that easy to give an example of random variables that satisfy the WLLN but not the SLLN. Before proving the WLLN, we need an inequality called Chebyshev's inequality.
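The convergence $S_n/n \to \mu$ is easy to see empirically along a single sample path. A minimal simulation sketch, assuming fair coin tosses so that $\mu = E X_1 = 1/2$ (the variable names and checkpoints are illustrative):

```python
import random

random.seed(0)  # fix the sample path so the run is reproducible

mu = 0.5  # E[X_1] for a fair coin: X_i = 1 for heads, 0 for tails

# Track the running average S_n / n along one sequence of tosses
s = 0
for n in range(1, 100_001):
    s += random.randint(0, 1)
    if n in (100, 10_000, 100_000):
        print(f"n = {n:>7}: S_n/n = {s / n:.4f}")
```

As $n$ grows, the printed averages settle near $\mu = 0.5$, which is what the SLLN predicts for almost every sample path.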

This note was uploaded on 12/29/2011 for the course MATH 316 taught by Professor Ansan during the Spring '10 term at SUNY Stony Brook.
