is undefined if both $\int_0^\infty x f_X(x)\,dx = \infty$ and $\int_{-\infty}^0 (-x) f_X(x)\,dx = \infty$. Otherwise, the expected value is given by

$$E[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx. \qquad (1.22)$$

An example of a rv whose expectation is undefined is given by the following example of a
Cauchy rv.

Example 1.3.3. A Cauchy rv has the probability density $f_X(x) = \frac{1}{\pi(1+x^2)}$. Note that, both as $x \to \infty$ and as $x \to -\infty$, $x/(1+x^2)$ goes to zero as $1/x$. Thus $\int_0^\infty x f_X(x)\,dx$ and $\int_{-\infty}^0 -x f_X(x)\,dx$ are both infinite and $E[X]$ is undefined. On the other hand, we see from symmetry that the Cauchy principal value of the integral in (1.22) is given by

$$\lim_{A \to \infty} \int_{-A}^{+A} \frac{x}{\pi(1+x^2)}\,dx = 0.$$

The rv M in Example 1.3.3 has this same behavior, i.e., $\lim_{A \to \infty} \sum_{m=-A}^{A} m\, p_M(m) = 0$. For most uses of expectations, a rv whose expectation is undefined behaves very differently from one whose expectation is defined, and the existence of a Cauchy principal value is not of great significance.
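As a numerical illustration of the Cauchy example (a minimal sketch; the truncation points, grid sizes, and helper names below are my own choices, not the text's), a midpoint Riemann sum shows the one-sided integral growing without bound while the symmetric integral stays at zero:

```python
import math

def cauchy_density(x):
    """Cauchy probability density f_X(x) = 1 / (pi * (1 + x^2))."""
    return 1.0 / (math.pi * (1.0 + x * x))

def riemann(f, a, b, n=100_000):
    """Midpoint Riemann sum of f over [a, b] with n slices."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# The one-sided integral of x f_X(x) grows without bound: its closed form
# is ln(1 + A^2) / (2 pi), which diverges as A -> infinity.
for A in (10.0, 100.0, 1000.0):
    numeric = riemann(lambda x: x * cauchy_density(x), 0.0, A)
    closed = math.log(1.0 + A * A) / (2.0 * math.pi)
    print(f"A = {A:7.1f}: integral = {numeric:.4f}, closed form = {closed:.4f}")

# The symmetric integral (the Cauchy principal value) vanishes for every A,
# because the integrand x f_X(x) is an odd function.
print(riemann(lambda x: x * cauchy_density(x), -100.0, 100.0))
```

The growth of the one-sided integral as $\ln A$ makes the divergence, and hence the undefined expectation, visible directly.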
The expected value of a rv X will also be called its mean and will be denoted both by $E[X]$ and by $\overline{X}$.
We next derive an alternate expression for the expected value in (1.20) of a positive discrete rv. This new expression is given directly in terms of the distribution function. We then use this new expression as a general definition of expectation which applies to all rv's, whether discrete, continuous, or arbitrary. It contains none of the convergence questions that sometimes arise for continuous rv's with very wild densities.
For a positive discrete rv X, we will show that (1.20) is simply the integral of the complementary distribution function, where the complementary distribution function of a rv is defined as $\Pr\{X > x\} = 1 - F_X(x)$.

$$E[X] = \int_0^\infty [1 - F_X(x)]\,dx. \qquad (1.23)$$

[Figure 1.3 sketch omitted; it plots $1 - F_X(x)$ with marked values $a_1, a_2, a_3, a_4$ and rectangles $a_i\, p_X(a_i)$.]

Figure 1.3: For this example, X takes on four possible values, $0 \le a_1 < a_2 < a_3 < a_4$. Thus $1 - F_X(x) = \Pr\{X > x\} = 1$ for $x < a_1$. For $a_1 \le x < a_2$, $\Pr\{X > x\} = 1 - p_X(a_1)$, and $\Pr\{X > x\}$ has similar drops as x reaches $a_2$, $a_3$, and $a_4$. $E[X]$, from (1.20), is $\sum_i a_i\, p_X(a_i)$, which is the sum of the rectangles in the figure. This is also the area under the curve $1 - F_X(x)$, i.e., $\int_0^\infty [1 - F_X(x)]\,dx$. It can be seen that this argument applies to any nonnegative rv, thus verifying (1.23).

This relationship is just as important for conceptual purposes as for computational purposes.
To derive it for a discrete random variable, consider the sketch of the complementary distribution function, $\Pr\{X > x\} = 1 - F_X(x)$, in Figure 1.3. The figure shows that $\sum_i a_i\, p_X(a_i)$ is equal to the area under the curve $1 - F_X(x)$ from $x = 0$ to $\infty$.
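This area argument is easy to check numerically. The sketch below uses a hypothetical four-valued rv (the values and probabilities are illustrative choices, not from the text) and compares the direct sum from (1.20) with the tail integral of (1.23):

```python
# Hypothetical four-point discrete rv, 0 <= a1 < a2 < a3 < a4, as in Figure 1.3.
a = [0.5, 1.5, 2.0, 3.0]   # possible values (illustrative)
p = [0.2, 0.3, 0.1, 0.4]   # their probabilities (sum to 1)

# Direct expectation from (1.20): sum of a_i * p_X(a_i).
mean_direct = sum(ai * pi for ai, pi in zip(a, p))

def complementary_cdf(x):
    """Pr{X > x} = 1 - F_X(x): total probability of values exceeding x."""
    return sum(pi for ai, pi in zip(a, p) if ai > x)

# (1.23): integrate 1 - F_X(x) from 0 to infinity.  Since Pr{X > x} = 0
# beyond the largest value, the integral stops at a4; use a fine midpoint grid.
n = 100_000
upper = max(a)
h = upper / n
mean_tail = sum(complementary_cdf((k + 0.5) * h) for k in range(n)) * h

print(mean_direct, mean_tail)  # the two agree up to the grid resolution
```

The integrand is a nonnegative step function, so the Riemann sum converges without any subtlety, which is exactly the point of the graphical argument.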
For an arbitrary nonnegative rv, we can visualize quantizing the distribution function, using the above argument, and then passing to the limit of arbitrarily fine quantizing. Since there are no mathematical subtleties in integrating an arbitrary nonnegative decreasing function, this is not only useful in calculating expectations, but is also the fundamental way to define expectation. Note that it is possible for $\overline{X}$ to be infinite, but this simply arises when the integral in (1.23) is infinite. One can also derive (1.23) from (1.22) by integration by parts, but it is the more fundamental graphical argument that shows that the complementary distribution function can always be integrated with either a finite or infinite value.
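The same tail integral applies directly to a continuous nonnegative rv. As a sketch (the exponential density and its rate are my own illustrative choices, not the text's), the integral of $1 - F_X(x)$ recovers the known mean $1/\lambda$:

```python
import math

lam = 2.0   # rate of an exponential rv (illustrative); its mean is 1/lam = 0.5

def ccdf(x):
    """Complementary distribution 1 - F_X(x) = exp(-lam * x) for x >= 0."""
    return math.exp(-lam * x)

# Integrate 1 - F_X(x) over [0, B] for a large B; the tail beyond B is
# negligible because the complementary distribution decays exponentially.
B, n = 20.0, 100_000
h = B / n
mean_via_tail = sum(ccdf((k + 0.5) * h) for k in range(n)) * h

print(mean_via_tail)  # close to 1/lam
```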
For an arbitrary (rather than nonnegative) rv, the mean, as illustrated in Figure 1.4 and analyzed in Exercise 1.7, generalizes to

$$E[X] = -\int_{-\infty}^0 F_X(x)\,dx + \int_0^\infty (1 - F_X(x))\,dx. \qquad (1.24)$$

[Figure 1.4 sketch omitted; curve labels: $F_X(x)$, $F_X(0)$, $1 - F_X(x)$, $1 - F_X(0)$.]

Figure 1.4: The figure shows the complementary distribution function, $1 - F_X(x)$, of a rv X for x between 0 and $\infty$ and shows the distribution function, $F_X(x)$, for x between $-\infty$ and 0. The expectation of X, $E[X]$, is the area under the curve on the right less the area under the curve on the left. If the area under each curve is finite, then $E[X]$ is finite. If one area is finite and the other infinite, then $E[X]$ is $\pm\infty$, and if both are infinite then $E[X]$ is undefined.

Definition 1.6. The expectation of a rv X for which at least one of the two terms in (1.24) is finite is given by (1.24). If both these terms are infinite, the expectation is undefined.
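A quick numerical sanity check of (1.24), using a hypothetical discrete rv that takes both negative and positive values (all numbers below are illustrative assumptions, not from the text):

```python
# Hypothetical rv taking both signs; values and probabilities are illustrative.
vals  = [-2.0, -0.5, 1.0, 3.0]
probs = [ 0.1,  0.3, 0.4, 0.2]

# Direct expectation: sum of v * Pr{X = v}.
mean_direct = sum(v * p for v, p in zip(vals, probs))

def cdf(x):
    """Distribution function F_X(x) = Pr{X <= x}."""
    return sum(p for v, p in zip(vals, probs) if v <= x)

# (1.24): E[X] = -integral of F_X over (-inf, 0] plus integral of 1 - F_X
# over [0, inf).  F_X vanishes below the smallest value and 1 - F_X vanishes
# above the largest, so finite integration ranges suffice.
n = 100_000
lo, hi = min(vals), max(vals)

h_neg = -lo / n
neg_part = sum(cdf(lo + (k + 0.5) * h_neg) for k in range(n)) * h_neg

h_pos = hi / n
pos_part = sum(1.0 - cdf((k + 0.5) * h_pos) for k in range(n)) * h_pos

mean_via_1_24 = -neg_part + pos_part
print(mean_direct, mean_via_1_24)  # the two agree up to the grid resolution
```

Here the negative-side area enters with a minus sign, matching the "area on the right less area on the left" reading of Figure 1.4.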
Given this as the definition of expectation, one can use the arguments above to see that the formulas for discrete and continuous expectations in (1.20) and (1.22) respectively follow from this definition.

1.3.7 Random variables as functions of other random variables

Random variables (rv's) are often defined in terms of each other. For example, if g is a function from R to R and X is a rv, then $Y = g(X)$ is the random variable that maps each sample point $\omega$ into the composite function $g(X(\omega))$. As indicated in Exercise 1.15, one can find the expected value of Y (if it is defined) in either of the following...
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R.srikant during the Spring '09 term at University of Illinois, Urbana Champaign.