O.H. Probability and Markov Chains – MATH 2561/2571 E09

3 Convergence of random variables

In probability theory one uses many different modes of convergence of random variables, many of which are crucial for applications. In this section we shall consider some of the most important of them: convergence in $L^p$, convergence in probability, and almost sure convergence.

3.1 Weak laws of large numbers

Definition 3.1. Let $p > 0$ be fixed. We say that a sequence $X_j$, $j \ge 1$, of random variables converges to a random variable $X$ in $L^p$ (write $X_n \xrightarrow{L^p} X$) as $n \to \infty$, if
$$ \mathbb{E}\,|X_n - X|^p \to 0 \qquad \text{as } n \to \infty. $$

Example 3.2. Let $(X_n)_{n \ge 1}$ be a sequence of random variables such that for some real numbers $(a_n)_{n \ge 1}$ and probabilities $(r_n)_{n \ge 1} \subset [0,1]$ we have
$$ \mathbb{P}(X_n = a_n) = r_n, \qquad \mathbb{P}(X_n = 0) = 1 - r_n. \tag{3.1} $$
Then $X_n \xrightarrow{L^p} 0$ iff $\mathbb{E}\,|X_n|^p \equiv |a_n|^p\, r_n \to 0$ as $n \to \infty$.

The following result is often referred to as the $L^2$ weak law of large numbers ($L^2$-WLLN).

Theorem 3.3. Let $X_j$, $j \ge 1$, be a sequence of uncorrelated random variables with $\mathbb{E} X_j = \mu$ and $\mathrm{Var}(X_j) \le C < \infty$. Denote $S_n = X_1 + \cdots + X_n$. Then $\frac{1}{n} S_n \xrightarrow{L^2} \mu$ as $n \to \infty$.

Proof. Since the $X_j$ are uncorrelated, $\mathrm{Var}(S_n) = \sum_{j=1}^n \mathrm{Var}(X_j) \le Cn$, so the claim is immediate from
$$ \mathbb{E}\Bigl( \tfrac{1}{n} S_n - \mu \Bigr)^2 = \frac{\mathbb{E}(S_n - n\mu)^2}{n^2} = \frac{\mathrm{Var}(S_n)}{n^2} \le \frac{Cn}{n^2} \to 0 \qquad \text{as } n \to \infty. \qquad \square $$

Definition 3.4. We say that a sequence $X_j$, $j \ge 1$, of random variables converges to a random variable $X$ in probability (write $X_n \xrightarrow{P} X$) as $n \to \infty$, if for every fixed $\varepsilon > 0$
$$ \mathbb{P}\bigl( |X_n - X| \ge \varepsilon \bigr) \to 0 \qquad \text{as } n \to \infty. $$

Example 3.5. Let the sequence $(X_n)_{n \ge 1}$ be as in (3.1). Then for every $\varepsilon > 0$
$$ \mathbb{P}\bigl( |X_n| \ge \varepsilon \bigr) \le \mathbb{P}\bigl( X_n \ne 0 \bigr) = r_n, $$
so that $X_n \xrightarrow{P} 0$ if $r_n \to 0$ as $n \to \infty$.

In fact, the usual Weak Law of Large Numbers (WLLN) is nothing else than a convergence in probability result:

Theorem 3.6. Under the conditions of Theorem 3.3, $\frac{1}{n} S_n \xrightarrow{P} \mu$ as $n \to \infty$.

Exercise 3.7. Derive Theorem 3.6 from the Chebyshev inequality.

We prove Theorem 3.6 using the following simple fact:

Lemma 3.8. Let $X_j$, $j \ge 1$, be a sequence of random variables. If $X_n \xrightarrow{L^p} X$ for some fixed $p > 0$, then $X_n \xrightarrow{P} X$ as $n \to \infty$.

Proof. Applying the generalized Markov inequality with $g(x) = x^p$ and $Z_n = |X_n - X| \ge 0$, we get, for every fixed $\varepsilon > 0$,
$$ \mathbb{P}\bigl( Z_n \ge \varepsilon \bigr) \equiv \mathbb{P}\bigl( |X_n - X|^p \ge \varepsilon^p \bigr) \le \frac{\mathbb{E}\,|X_n - X|^p}{\varepsilon^p} \to 0 \qquad \text{as } n \to \infty. \qquad \square $$

Proof of Theorem 3.6. The result follows immediately from Theorem 3.3 and Lemma 3.8. $\square$

As the following example shows, a high-dimensional cube is almost a sphere.

Example 3.9. Let $X_j$, $j \ge 1$, be i.i.d. with $X_j \sim U(-1, 1)$. Then the variables $Y_j = (X_j)^2$ satisfy $\mathbb{E} Y_j = \frac{1}{3}$ and $\mathrm{Var}(Y_j) \le \mathbb{E}[(Y_j)^2] = \mathbb{E}[(X_j)^4] \le 1$. Hence, by Theorem 3.6, $\frac{1}{n} \sum_{j=1}^{n} (X_j)^2 \xrightarrow{P} \frac{1}{3}$; equivalently, with probability approaching one the Euclidean norm of the random point $(X_1, \dots, X_n)$ of the cube $[-1,1]^n$ is close to $\sqrt{n/3}$, so most of the cube's volume concentrates near a sphere of that radius. [...]
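The following numerical sketch (an editorial addition, not part of the original notes) contrasts the two modes of convergence defined above using the two-point variables of (3.1). The particular choice $a_n = n$, $r_n = 1/n$ is an assumption made for the demo: it gives $\mathbb{P}(|X_n| \ge \varepsilon) = 1/n \to 0$, so $X_n \xrightarrow{P} 0$ as in Example 3.5, while $\mathbb{E}|X_n| = |a_n| r_n = 1$ for every $n$, so the $L^1$ criterion of Example 3.2 fails.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_X(n, size):
    """Draw `size` copies of X_n from (3.1) with a_n = n, r_n = 1/n (demo choice)."""
    return np.where(rng.random(size) < 1.0 / n, float(n), 0.0)

eps = 0.5
for n in [10, 100, 1000, 10_000]:
    x = sample_X(n, 200_000)
    p_tail = np.mean(np.abs(x) >= eps)   # estimates P(|X_n| >= eps) = 1/n -> 0
    l1 = np.mean(np.abs(x))              # estimates E|X_n| = |a_n| r_n = 1
    print(f"n={n:6d}   P(|X_n| >= {eps}) ~ {p_tail:.4f}   E|X_n| ~ {l1:.3f}")
```

The tail probability shrinks while the $L^1$ norm stays near 1, showing that convergence in probability does not imply convergence in $L^p$; the converse implication is exactly Lemma 3.8.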
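Next, a simulation sketch of Theorems 3.3 and 3.6 (again an addition; the i.i.d. $U(0,1)$ summands are an assumed concrete choice, giving $\mu = 1/2$ and $C = \mathrm{Var}(X_j) = 1/12$). It estimates the $L^2$ error $\mathbb{E}(\frac{1}{n} S_n - \mu)^2$, which the proof of Theorem 3.3 bounds by $C/n$, and the tail $\mathbb{P}(|\frac{1}{n} S_n - \mu| \ge \varepsilon)$, which Chebyshev's inequality (Exercise 3.7) bounds by $C/(n\varepsilon^2)$.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, C = 0.5, 1.0 / 12.0        # mean and variance of U(0, 1)
eps, trials = 0.05, 5_000

for n in [10, 100, 1000]:
    # Each row is one realisation of X_1, ..., X_n; row means are S_n / n.
    means = rng.random((trials, n)).mean(axis=1)
    mse = np.mean((means - mu) ** 2)            # ~ E(S_n/n - mu)^2 <= C/n
    tail = np.mean(np.abs(means - mu) >= eps)   # ~ P(|S_n/n - mu| >= eps)
    cheb = min(1.0, C / (n * eps**2))           # Chebyshev bound (Exercise 3.7)
    print(f"n={n:5d}   L2 error ~ {mse:.2e} (bound {C/n:.2e})   "
          f"tail ~ {tail:.4f} (bound {cheb:.3f})")
```

Both estimates decay with $n$ and sit below their theoretical bounds, illustrating the $1/n$ rate in the proof of Theorem 3.3.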
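Finally, a sketch of the geometric picture in Example 3.9 ("a high-dimensional cube is almost a sphere"). It samples uniform points of the cube $[-1,1]^n$ and records $\|X\|_2 / \sqrt{n} = (\frac{1}{n}\sum_j (X_j)^2)^{1/2}$, which by the example concentrates near $1/\sqrt{3} \approx 0.577$; the sample sizes and dimensions are arbitrary demo choices.

```python
import numpy as np

rng = np.random.default_rng(2)
points = 2_000

for n in [10, 100, 1000, 5000]:
    X = rng.uniform(-1.0, 1.0, size=(points, n))      # uniform points of [-1, 1]^n
    scaled = np.linalg.norm(X, axis=1) / np.sqrt(n)   # ||X||_2 / sqrt(n)
    print(f"n={n:5d}   mean ~ {scaled.mean():.4f}   std ~ {scaled.std():.5f}"
          f"   (target 1/sqrt(3) = {1/np.sqrt(3):.4f})")
```

As $n$ grows the spread collapses, i.e., almost all of the cube's volume lies near the sphere of radius $\sqrt{n/3}$.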