Theorem 5.2

If $X_1, \ldots, X_n$ are continuous random variables with joint PDF $f_{X_1,\ldots,X_n}(x_1, \ldots, x_n)$, then

$$f_{X_1,\ldots,X_n}(x_1, \ldots, x_n) \geq 0,$$

$$F_{X_1,\ldots,X_n}(x_1, \ldots, x_n) = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_n} f_{X_1,\ldots,X_n}(u_1, \ldots, u_n)\, du_1 \cdots du_n,$$

$$\int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f_{X_1,\ldots,X_n}(x_1, \ldots, x_n)\, dx_1 \cdots dx_n = 1.$$
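The three properties above can be checked numerically. The following is a sketch (the example variables and all parameter values are chosen here for illustration, not taken from the text): for $n = 2$ independent Exponential(1) random variables, the joint PDF is $f(x_1, x_2) = e^{-x_1} e^{-x_2}$ for $x_1, x_2 \geq 0$, and a midpoint Riemann sum approximates both the total integral (property 3) and the CDF integral (property 2).

```python
import math

# Joint PDF of two independent Exponential(1) variables (illustrative choice,
# not from the text): f(x1, x2) = e^{-x1} e^{-x2} for x1, x2 >= 0.
def f(x1, x2):
    return math.exp(-x1 - x2) if x1 >= 0 and x2 >= 0 else 0.0

def riemann_2d(g, a1, b1, a2, b2, steps=400):
    """Midpoint-rule approximation of the double integral of g
    over the rectangle [a1, b1] x [a2, b2]."""
    h1 = (b1 - a1) / steps
    h2 = (b2 - a2) / steps
    total = 0.0
    for i in range(steps):
        u1 = a1 + (i + 0.5) * h1
        for j in range(steps):
            u2 = a2 + (j + 0.5) * h2
            total += g(u1, u2)
    return total * h1 * h2

# Property 3: the PDF integrates to 1 (infinite range truncated at 10,
# which loses only about e^{-10} of mass in each coordinate).
total_mass = riemann_2d(f, 0, 10, 0, 10)

# Property 2: F(1, 2) is the integral of f over (-inf, 1] x (-inf, 2];
# for independent exponentials the exact value is (1 - e^{-1})(1 - e^{-2}).
cdf_12 = riemann_2d(f, 0, 1, 0, 2)
exact_cdf_12 = (1 - math.exp(-1)) * (1 - math.exp(-2))
```

The truncation at 10 and the step count are accuracy/runtime trade-offs; a finer grid or a wider box tightens the approximation.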
In many situations, an event $A$ is described in terms of a property of $X_1, \ldots, X_n$ (for example, $A$ is the event that $\max_i X_i \leq 100$). To find the probability of the event $A$, we sum the joint PMF or integrate the joint PDF over all $x_1, \ldots, x_n$ that belong to $A$, as stated in the following theorem.
Theorem 5.3

The probability of an event $A$ expressed in terms of the random variables $X_1, \ldots, X_n$ is, in the discrete case,

$$P[A] = \sum_{(x_1,\ldots,x_n) \in A} P_{X_1,\ldots,X_n}(x_1, \ldots, x_n),$$

(where the single summation actually represents a multiple sum over the $n$ random variables) and, in the continuous case,

$$P[A] = \int \cdots \int_A f_{X_1,\ldots,X_n}(x_1, \ldots, x_n)\, dx_1 \cdots dx_n.$$
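The discrete case of Theorem 5.3 can be illustrated with a small example (the example data here is my own choice, not from the text): for two fair dice $X_1, X_2$ with joint PMF $P(x_1, x_2) = 1/36$ on $\{1,\ldots,6\}^2$, the probability of the event $A = \{\max(X_1, X_2) \leq 4\}$ is the sum of the joint PMF over all outcomes in $A$, which is $16/36$.

```python
from itertools import product

# Joint PMF of two independent fair dice (illustrative choice, not from
# the text): each of the 36 outcome pairs has probability 1/36.
def joint_pmf(x1, x2):
    return 1 / 36 if 1 <= x1 <= 6 and 1 <= x2 <= 6 else 0.0

# P[A] for A = {max(X1, X2) <= 4}: sum the joint PMF over outcomes in A.
# There are 4 * 4 = 16 such pairs, so P[A] = 16/36.
p_A = sum(joint_pmf(x1, x2)
          for x1, x2 in product(range(1, 7), repeat=2)
          if max(x1, x2) <= 4)
```

The same pattern — iterate over the sample space, filter by membership in $A$, and sum the joint PMF — applies to any discrete event.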
Example 1 (page 213)

Consider a set of $n$ independent trials in which there are $r$ possible outcomes $s_1, \ldots, s_r$ for each trial. In each trial, $P[s_i] = p_i$. Let $N_i$ equal the number of times that outcome $s_i$ occurs over the $n$ trials. What is the joint PMF of $N_1, \ldots, N_r$?

It turns out that the solution to this problem appeared in Theorem 1.19, which used the multinomial coefficient. We now see that the solution to this problem actually represents a type of joint PMF known as the multinomial distribution:
$$P_{N_1,\ldots,N_r}(n_1, \ldots, n_r) = \binom{n}{n_1, \ldots, n_r} p_1^{n_1} \cdots p_r^{n_r}.$$