Copyright © 2006 by Karl Sigman
1 Review of Probability
Random variables are denoted by X, Y, Z, etc. The cumulative distribution function (c.d.f.) of a random variable X is denoted by F(x) = P(X ≤ x), −∞ < x < ∞, and if the random variable is continuous then its probability density function is denoted by f(x), which is related to F(x) via

    f(x) = F′(x) = (d/dx) F(x),        F(x) = ∫_{−∞}^{x} f(y) dy.
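The density–c.d.f. relation above can be checked numerically. The following is a minimal sketch, assuming as an example the Exponential(1) distribution (not taken from the notes), where f(x) = e^{−x} and F(x) = 1 − e^{−x} for x ≥ 0:

```python
import math

# Assumed example: Exponential(1), with density f and c.d.f. F.
def f(x):
    return math.exp(-x) if x >= 0 else 0.0

def F(x):
    return 1.0 - math.exp(-x) if x >= 0 else 0.0

def integrate(func, a, b, n=100_000):
    # midpoint-rule approximation of the integral of func over [a, b]
    h = (b - a) / n
    return sum(func(a + (i + 0.5) * h) for i in range(n)) * h

x = 2.0
# F(x) = ∫_{-∞}^{x} f(y) dy; the density is 0 below 0, so integrate from 0
approx = integrate(f, 0.0, x)
assert abs(approx - F(x)) < 1e-6

# f(x) = F'(x): central-difference approximation of the derivative
h = 1e-6
deriv = (F(x + h) - F(x - h)) / (2 * h)
assert abs(deriv - f(x)) < 1e-4
```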
The probability mass function (p.m.f.) of a discrete random variable is given by

    p(k) = P(X = k), −∞ < k < ∞,

for integers k.
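As a concrete sketch of a p.m.f. (the example is an assumption, not from the notes), a fair six-sided die has p(k) = 1/6 for k = 1, …, 6, and the probabilities over all possible values must sum to 1:

```python
# Assumed example: p.m.f. of a fair six-sided die.
def p(k):
    return 1.0 / 6.0 if 1 <= k <= 6 else 0.0

# the p.m.f. must sum to 1 over all possible values
total = sum(p(k) for k in range(1, 7))
assert abs(total - 1.0) < 1e-12

# p(k) = P(X = k), e.g. P(X = 3)
assert abs(p(3) - 1.0 / 6.0) < 1e-12
```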
1 − F(x) = P(X > x) is called the tail of X and is denoted by F̄(x) = 1 − F(x). Whereas F(x) increases to 1 as x → ∞ and decreases to 0 as x → −∞, the tail F̄(x) decreases to 0 as x → ∞ and increases to 1 as x → −∞.
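The opposite monotone behavior of F and its tail can be seen in a short sketch, again assuming the Exponential(1) example with F(x) = 1 − e^{−x} for x ≥ 0:

```python
import math

# Assumed example: Exponential(1) c.d.f. and its tail F̄(x) = 1 - F(x).
def F(x):
    return 1.0 - math.exp(-x) if x >= 0 else 0.0

def F_bar(x):
    return 1.0 - F(x)

# F increases toward 1 while the tail decreases toward 0 as x grows
xs = [0.0, 1.0, 2.0, 5.0, 10.0]
assert all(F(a) <= F(b) for a, b in zip(xs, xs[1:]))
assert all(F_bar(a) >= F_bar(b) for a, b in zip(xs, xs[1:]))
assert F_bar(0.0) == 1.0  # all mass lies above 0 for this example
```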
If a r.v. X has a certain distribution with c.d.f. F(x) = P(X ≤ x), then we write, for simplicity of expression,

    X ∼ F.        (1)
1.1 Moments and variance
The expected value of a r.v. is denoted by E(X) and defined by

    E(X) = ∑_{k=−∞}^{∞} k p(k),        discrete case,

    E(X) = ∫_{−∞}^{∞} x f(x) dx,        continuous case.
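Both definitions can be evaluated directly. A minimal sketch, using the assumed fair-die and Exponential(1) examples (which are illustrations, not part of the notes):

```python
import math

# discrete case: E(X) = Σ k p(k); assumed example is a fair six-sided die
p = {k: 1.0 / 6.0 for k in range(1, 7)}
mean_die = sum(k * pk for k, pk in p.items())
assert abs(mean_die - 3.5) < 1e-12  # (1+2+...+6)/6 = 3.5

# continuous case: E(X) = ∫ x f(x) dx; assumed example is Exponential(1),
# f(x) = e^{-x} for x >= 0, integrated by the midpoint rule on [0, 50]
def f(x):
    return math.exp(-x)

n, b = 200_000, 50.0
h = b / n
mean_exp = sum((i + 0.5) * h * f((i + 0.5) * h) for i in range(n)) * h
assert abs(mean_exp - 1.0) < 1e-3  # Exponential(1) has mean 1
```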
E(X) is also referred to as the first moment or mean of X (or of its distribution). Higher moments E(X^n), n ≥ 1, can be computed via

    E(X^n) = ∑_{k=−∞}^{∞} k^n p(k),        discrete case,

    E(X^n) = ∫_{−∞}^{∞} x^n f(x) dx,        continuous case,
and more generally E(g(X)) for a function g = g(x) can be computed via

    E(g(X)) = ∑_{k=−∞}^{∞} g(k) p(k),        discrete case,

    E(g(X)) = ∫_{−∞}^{∞} g(x) f(x) dx,        continuous case.
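The same sums and integrals handle higher moments and general g. A sketch under the same assumed examples (fair die, Exponential(1)):

```python
import math

# discrete: E(X^2) = Σ k^2 p(k); for a fair die this is (1+4+9+16+25+36)/6
second_moment = sum(k ** 2 * (1.0 / 6.0) for k in range(1, 7))
assert abs(second_moment - 91.0 / 6.0) < 1e-12

# continuous: E(X^2) = ∫ x^2 e^{-x} dx = 2 for Exponential(1),
# approximated by the midpoint rule on [0, 60]
n, b = 200_000, 60.0
h = b / n
m2 = sum(((i + 0.5) * h) ** 2 * math.exp(-(i + 0.5) * h) for i in range(n)) * h
assert abs(m2 - 2.0) < 1e-3

# general g: with g(x) = e^{-x}, E(g(X)) = ∫ e^{-2x} dx = 1/2 for Exponential(1)
eg = sum(math.exp(-2.0 * (i + 0.5) * h) * h for i in range(n))
assert abs(eg - 0.5) < 1e-3
```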