…ving the important results for expectation that were stated in Lecture #19. Recall that a random variable is said to have finite mean, or have finite expectation, or be integrable, if E(|X|) < ∞. The vector space of all integrable random variables on a given probability space (Ω, F, P) is denoted by L1, and if 1 ≤ p < ∞, then
Lp = {random variables X : Ω → R such that |X|^p ∈ L1}.
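To make "finite mean" concrete, here is a numerical sketch of a random variable that is *not* integrable. The standard Cauchy distribution, with density f(x) = 1/(π(1 + x²)), is an illustrative choice not taken from the notes: the truncated integral of |x|f(x) over [−M, M] equals log(1 + M²)/π, which grows without bound as M → ∞, so E(|X|) = ∞.

```python
import math

def truncated_mean_abs(M, steps=200_000):
    """Numerically integrate |x| * f(x) over [-M, M] by the midpoint rule,
    where f is the standard Cauchy density 1 / (pi * (1 + x^2))."""
    h = 2 * M / steps
    total = 0.0
    for i in range(steps):
        x = -M + (i + 0.5) * h
        total += abs(x) / (math.pi * (1 + x * x)) * h
    return total

# The exact value is log(1 + M^2) / pi, which diverges as M grows,
# so the standard Cauchy random variable is not in L1.
```

Doubling M adds roughly log(4)/π ≈ 0.44 to the truncated integral, so the running "mean of |X|" never settles down to a finite value.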
Theorem 21.1 (Cauchy-Schwarz Inequality). If X, Y ∈ L2, then XY ∈ L1 and
[E(XY)]² ≤ E(X²)E(Y²).
Proof. Since 0 ≤ (X + Y)² = X² + Y² + 2XY and 0 ≤ (X − Y)² = X² + Y² − 2XY, we
conclude that 2|XY| ≤ X² + Y², implying 2E(|XY|) ≤ E(X²) + E(Y²). Thus, if X, Y ∈ L2,
we conclude that XY ∈ L1. For every x ∈ R, note that
0 ≤ E((xX + Y )2 ) = x2 E(X 2 ) + 2xE(XY ) + E(Y 2 ).
Since x2 E(X 2 ) + 2xE(XY ) + E(Y 2 ) is a non-negative quadratic in x, its discriminant is
necessarily non-positive; that is,
4[E(XY )]2 − 4E(X 2 )E(Y 2 ) ≤ 0,
or, equivalently,
|E(XY )| ≤ E(X 2 )E(Y 2 )
as required.
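The inequality can be sanity-checked numerically by replacing each expectation with a sample average. This is a heuristic Monte Carlo check, not a proof; the particular choice of X standard normal and Y = 0.5X + noise is an assumption for illustration only.

```python
import random

random.seed(0)

# Empirical check of the Cauchy-Schwarz inequality
# [E(XY)]^2 <= E(X^2) E(Y^2), with expectations replaced by
# sample averages over n i.i.d. draws.
n = 100_000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [0.5 * x + random.gauss(0.0, 1.0) for x in xs]  # Y correlated with X

E_XY = sum(x * y for x, y in zip(xs, ys)) / n
E_X2 = sum(x * x for x in xs) / n
E_Y2 = sum(y * y for y in ys) / n

# Here E(XY) is about 0.5 while sqrt(E(X^2) E(Y^2)) is about 1.12,
# so the bound holds with room to spare.
assert E_XY ** 2 <= E_X2 * E_Y2
```

Equality holds exactly when Y is (almost surely) a scalar multiple of X, which is when the quadratic in the proof has a double root.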
Theorem 21.2. Let X : (Ω, F , P) → (R, B ) be a random variable.
(a) (Markov’s Inequality) If X ∈ L1 , then
P{|X| ≥ a} ≤ E(|X|)/a for every a > 0.
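Markov's bound can likewise be checked against simulated data. The exponential(1) sample below is an illustrative assumption (not from the notes); it has E(|X|) = 1, so for a = 2 the bound predicts a tail probability of at most 1/2, while the true value is e⁻² ≈ 0.135.

```python
import random

random.seed(1)

# Empirical check of Markov's inequality P(|X| >= a) <= E(|X|) / a
# using sample averages over n i.i.d. exponential(1) draws.
n = 100_000
a = 2.0
xs = [random.expovariate(1.0) for _ in range(n)]

E_absX = sum(abs(x) for x in xs) / n          # approximately 1
p_tail = sum(1 for x in xs if abs(x) >= a) / n  # approximately exp(-2)

assert p_tail <= E_absX / a
```

The gap between 0.135 and 0.5 shows Markov's inequality is often loose; its value is that it needs nothing beyond integrability of X.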
(b) (Chebyshev’s...
