Chapter 6: Sums of Random Variables
Introduction
There are many applications of probability theory in which random variables of the form $W_n = X_1 + \cdots + X_n$ appear. Our goal is to derive the probability model of $W_n$.

First, we will consider expected values related to $W_n$ rather than a complete model of $W_n$. There are many applications where this information is essentially all we need. We will then consider techniques that allow us to derive a complete model of $W_n$ when $X_1, \ldots, X_n$ are mutually independent. As we shall see, a useful way to analyze the sum of independent random variables is to transform the PDF or PMF of each random variable $X_1, \ldots, X_n$ to a moment generating function.
6.1 Expected Values of Sums
Section 4.7 in Chapter 4 addressed computing the expected values and variances of pairs of random variables. Those theorems can be generalized to describe the expected values and variances of sums of random variables. Generalizing Theorem 4.14 results in the following theorem, which states that the expected value of the sum equals the sum of the expected values, whether or not $X_1, \ldots, X_n$ are independent.
Theorem 6.1
For any set of random variables $X_1, \ldots, X_n$, the expected value of $W_n = X_1 + \cdots + X_n$ is
$$E[W_n] = E[X_1] + E[X_2] + \cdots + E[X_n].$$
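Because expected value is additive regardless of dependence, Theorem 6.1 can be checked numerically even when the summands are strongly dependent. The following Monte Carlo sketch is not part of the text; the die-like distributions, the forced dependence $X_2 = X_1$, and the sample size are illustrative choices.

```python
import random

random.seed(1)

# Estimate E[W_3] for three dependent variables and compare with
# E[X_1] + E[X_2] + E[X_3] = 3.5 + 3.5 + 3.5 = 10.5 (Theorem 6.1).
trials = 200_000
total = 0
for _ in range(trials):
    x1 = random.randint(1, 6)    # X_1 ~ Uniform{1, ..., 6}
    x2 = x1                      # X_2 = X_1: deliberately dependent
    x3 = random.randint(1, 6)    # X_3 ~ Uniform{1, ..., 6}
    total += x1 + x2 + x3

e_w = total / trials
print(e_w)   # close to 10.5 even though X_1 and X_2 are dependent
```

Note that no independence assumption was needed; dependence between $X_1$ and $X_2$ affects the variance of $W_3$, but not its mean.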
Similarly, generalizing Theorem 4.15 results in the following theorem regarding the variance of a sum of random variables.
Theorem 6.2
The variance of $W_n = X_1 + \cdots + X_n$ is
$$\operatorname{Var}[W_n] = \sum_{i=1}^{n} \operatorname{Var}[X_i] + 2 \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \operatorname{Cov}[X_i, X_j].$$
Note that in terms of the random vector $\mathbf{X} = [X_1 \cdots X_n]'$, $\operatorname{Var}[W_n]$ is the sum of all the elements in the covariance matrix $\mathbf{C}_X$. The diagonal of this matrix consists of the $\operatorname{Var}[X_i] = \operatorname{Cov}[X_i, X_i]$ terms, and the off-diagonal elements are the $\operatorname{Cov}[X_i, X_j] = \operatorname{Cov}[X_j, X_i]$ terms. The variance of the sum of the random variables $X_1, \ldots, X_n$ simplifies when $X_1, \ldots, X_n$ are uncorrelated, as stated in the following theorem, because under these conditions $\operatorname{Cov}[X_i, X_j] = 0$ for $i \neq j$.
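The observation that $\operatorname{Var}[W_n]$ equals the sum of all entries of $\mathbf{C}_X$ can be verified directly on sample data. The sketch below assumes NumPy is available; the particular distributions and the induced correlation between the first two variables are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw correlated samples and check that the sample variance of
# W_3 = X_1 + X_2 + X_3 equals the sum of all entries of the sample
# covariance matrix C_X (Theorem 6.2 restated via the matrix).
n_samples = 100_000
x1 = rng.normal(size=n_samples)
x2 = 0.5 * x1 + rng.normal(size=n_samples)   # correlated with X_1
x3 = rng.normal(size=n_samples)
X = np.vstack([x1, x2, x3])                  # rows are X_1, X_2, X_3

C = np.cov(X)                  # 3 x 3 sample covariance matrix C_X
w = X.sum(axis=0)              # samples of W_3
print(C.sum(), w.var(ddof=1))  # the two numbers agree
```

The agreement here is exact (up to floating point), not just approximate, because sample covariance is bilinear in the same way as $\operatorname{Cov}$.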
Theorem 6.3
When $X_1, \ldots, X_n$ are uncorrelated,
$$\operatorname{Var}[W_n] = \operatorname{Var}[X_1] + \cdots + \operatorname{Var}[X_n].$$
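A quick numerical sanity check of Theorem 6.3, as a sketch: the uniform variables below are chosen purely for illustration, using the fact that $\operatorname{Var}[\text{Uniform}(0,b)] = b^2/12$.

```python
import random

random.seed(2)

# For independent (hence uncorrelated) X_1, X_2, X_3, the sample
# variance of W_3 should be close to Var[X_1] + Var[X_2] + Var[X_3].
trials = 200_000
w = [random.uniform(0, 1) + random.uniform(0, 2) + random.uniform(0, 3)
     for _ in range(trials)]
mean_w = sum(w) / trials
var_w = sum((v - mean_w) ** 2 for v in w) / (trials - 1)

# Theoretical value: Var[Uniform(0, b)] = b^2 / 12, so the sum is
# (1 + 4 + 9) / 12.
print(var_w, (1 + 4 + 9) / 12)
```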
In the following examples, we make use of a technique whereby we define the random variable $X_i$ as an indicator variable such that
$$X_i = \begin{cases} 1 & \text{if some particular event $i$ occurs,} \\ 0 & \text{otherwise.} \end{cases}$$
Example 1 (pages 245–246)
At a party of $n \ge 2$ people, each person throws a hat into a common box. The box is shaken and each person blindly draws a hat from the box without replacement. A match occurs if a person draws his or her own hat. Let $V_n$ denote the number of matches.

(a) Determine $E[V_n]$ and $\operatorname{Var}[V_n]$, the expected value and variance of the number of matches.

(b) Suppose after drawing a hat, each person immediately returns to the box the hat that he or she drew, prior to the next person's draw of a hat; that is, the hats are now being drawn with replacement. Determine the expected value and variance of the number of matches.
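The indicator-variable technique applies directly here: writing $V_n = X_1 + \cdots + X_n$ with $X_i = 1$ if person $i$ draws his or her own hat, each $X_i$ has $P[X_i = 1] = 1/n$ in both schemes, so $E[V_n] = 1$ by Theorem 6.1. The simulation below is a sketch, not part of the text's solution; the choices $n = 10$ and the trial count are illustrative. It should approximate the classical answers: variance near 1 without replacement, and near $1 - 1/n$ with replacement (where $V_n$ is Binomial$(n, 1/n)$).

```python
import random

random.seed(3)

n, trials = 10, 200_000

def matches_without_replacement():
    """Count matches when hats are drawn without replacement."""
    hats = list(range(n))
    random.shuffle(hats)                 # a uniformly random permutation
    return sum(hats[i] == i for i in range(n))

def matches_with_replacement():
    """Count matches when each draw is independent and uniform."""
    return sum(random.randrange(n) == i for i in range(n))

results = {}
for draw in (matches_without_replacement, matches_with_replacement):
    samples = [draw() for _ in range(trials)]
    mean = sum(samples) / trials
    var = sum((v - mean) ** 2 for v in samples) / (trials - 1)
    results[draw.__name__] = (mean, var)
    print(draw.__name__, round(mean, 3), round(var, 3))
```

Note that the means agree across the two schemes even though the joint behavior of the indicators differs; only the covariance terms of Theorem 6.2, and hence the variances, change.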