
elemprob-fall2010-page42

When b − a is small, there is a correction that makes the normal approximation more accurate, namely: replace a by a − 1/2 and b by b + 1/2. This correction never hurts and is sometimes necessary. For example, in tossing a coin 100 times, there is positive probability that there are exactly 50 heads, while without the correction the answer given by the normal approximation would be 0.

An example. We toss a coin 100 times. What is the probability of getting 49, 50, or 51 heads?

Answer. We write P(49 ≤ S_n ≤ 51) = P(48.5 ≤ S_n ≤ 51.5) and then continue as above.

17 Limit laws

Suppose the X_i are independent and have the same distribution. In the case of continuous or discrete random variables, this means they all have the same density. We say the X_i are i.i.d., which stands for "independent and identically distributed." Let S_n = X_1 + ⋯ + X_n.
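The coin example above can be checked numerically. Below is a minimal sketch in Python (standard library only, and the fair-coin parameters n = 100, p = 1/2 are taken from the example): the exact binomial answer is compared against the normal approximation both with and without the continuity correction.

```python
from math import comb, erf, sqrt

def normal_cdf(x, mu, sigma):
    # Normal CDF expressed through the error function.
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

n, p = 100, 0.5
mu, sigma = n * p, sqrt(n * p * (1 - p))  # mu = 50, sigma = 5

# Exact binomial probability P(49 <= S_n <= 51).
exact = sum(comb(n, k) for k in range(49, 52)) / 2**n

# Normal approximation with the continuity correction:
# P(49 <= S_n <= 51) is approximated by P(48.5 <= S_n <= 51.5).
with_corr = normal_cdf(51.5, mu, sigma) - normal_cdf(48.5, mu, sigma)

# Without the correction the interval is too narrow and the answer is off.
without_corr = normal_cdf(51, mu, sigma) - normal_cdf(49, mu, sigma)

print(round(exact, 4), round(with_corr, 4), round(without_corr, 4))
```

The corrected approximation lands very close to the exact value (about 0.236), while the uncorrected one is noticeably too small.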
S_n is called the partial sum process.

Theorem 17.1. Suppose E|X_i| < ∞ and let μ = E X_i. Then S_n/n → μ.

This is known as the strong law of large numbers (SLLN). The convergence here means that S_n(ω)/n → μ for every ω ∈ S, where S is the probability space, except possibly for a set of ω of probability 0.

The proof of Theorem 17.1 is quite hard, and we prove a weaker version, the weak law of large numbers (WLLN). The WLLN states that for every a > 0,

P(|S_n/n − E X_1| > a) → 0 as n → ∞.

It is not even that easy to give an example of random variables that satisfy the WLLN but not the SLLN. Before proving the WLLN, we need an inequality called Chebyshev's inequality.
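The WLLN statement can be illustrated by simulation. A minimal sketch, assuming fair-coin Bernoulli X_i (an illustrative choice, so E X_1 = 1/2): for each n we estimate P(|S_n/n − E X_1| > a) by repeated trials, and the estimates should shrink toward 0 as n grows.

```python
import random

random.seed(0)  # fixed seed for a reproducible run

def sample_mean(n):
    # S_n / n for n fair-coin tosses (Bernoulli(1/2) X_i).
    return sum(random.random() < 0.5 for _ in range(n)) / n

# Monte Carlo estimate of P(|S_n/n - E X_1| > a) for a = 0.05.
a, trials = 0.05, 2000
results = []
for n in (10, 100, 1000):
    freq = sum(abs(sample_mean(n) - 0.5) > a for _ in range(trials)) / trials
    results.append(freq)
    print(n, freq)
```

The printed frequencies decrease sharply with n (roughly 0.75, 0.27, and near 0), which is exactly the convergence the WLLN asserts.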