4  The standard normal distribution
The standard normal distribution is another very important example of a continuous distribution. A random variable $X$ is said to have the standard normal distribution, written $X \sim N(0, 1)$, if it is continuous with pdf $f_X$ given by
\[
f_X(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}.
\]
The graph of this function is the well-known “bell curve”. One can show that the area under the bell curve is equal to one, but this is a little tricky and we will not provide the details.
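Although we omit the analytic proof, the unit area can be checked numerically. The sketch below is our own illustration, not part of the text; the truncation of the integral to $[-10, 10]$ and the grid size are assumptions (the tails beyond $\pm 10$ are negligibly small).

```python
import math

def std_normal_pdf(x):
    """Standard normal density: e^(-x^2/2) / sqrt(2*pi)."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Midpoint Riemann sum for the area under the bell curve on [-10, 10].
n = 200_000
a, b = -10.0, 10.0
dx = (b - a) / n
area = 0.0
for i in range(n):
    x = a + (i + 0.5) * dx  # midpoint of the i-th subinterval
    area += std_normal_pdf(x) * dx
print(round(area, 6))  # 1.0
```

The midpoint rule is accurate enough here that the truncated integral agrees with 1 to many decimal places.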
The numbers 0 and 1 in the expression $N(0, 1)$ refer to the expected value and variance of the standard normal distribution, which are zero and one respectively. We can show that the expected value is zero in the following way:
\begin{align*}
E(X) &= \int_{-\infty}^{\infty} x \cdot \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2} \,dx
= -\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} (-x)\, e^{-x^2/2} \,dx \\
&= -\frac{1}{\sqrt{2\pi}} \Bigl[\, e^{-x^2/2} \,\Bigr]_{-\infty}^{\infty}
= -\frac{1}{\sqrt{2\pi}} (0 - 0) = 0.
\end{align*}
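The cancellation of the two halves of this integral can also be seen numerically. The sketch below is an illustration of ours (with an assumed truncation at $\pm 10$): it approximates $E(X)$ by a midpoint sum of the odd integrand $x \, f_X(x)$.

```python
import math

def std_normal_pdf(x):
    """Standard normal density: e^(-x^2/2) / sqrt(2*pi)."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Midpoint-rule approximation of E(X) on [-10, 10]. The integrand
# x * f(x) is odd, so contributions at x and -x cancel exactly.
n = 200_000
a, b = -10.0, 10.0
dx = (b - a) / n
mean = 0.0
for i in range(n):
    x = a + (i + 0.5) * dx
    mean += x * std_normal_pdf(x) * dx
print(mean)  # a value within 1e-6 of 0
```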
To show that the variance of the standard normal distribution is one, we use a
slightly more complicated argument involving integration by parts:
\begin{align*}
\mathrm{Var}(X) &= E(X^2) = \int_{-\infty}^{\infty} x^2 \cdot \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2} \,dx
= \int_{-\infty}^{\infty} \left( -\frac{x}{\sqrt{2\pi}} \right) \cdot \left( -x\, e^{-x^2/2} \right) dx \\
&= \Bigl[ -\frac{x}{\sqrt{2\pi}}\, e^{-x^2/2} \Bigr]_{-\infty}^{\infty}
+ \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2} \,dx
= (0 - 0) + 1 = 1.
\end{align*}
Here the first equality uses $\mathrm{Var}(X) = E(X^2) - (E(X))^2$ together with $E(X) = 0$, and the remaining integral in the last line equals one because it is the area under the bell curve.
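The same midpoint-sum sketch used above (again our own illustration, with an assumed truncation at $\pm 10$) confirms that $E(X^2)$ is numerically indistinguishable from 1:

```python
import math

def std_normal_pdf(x):
    """Standard normal density: e^(-x^2/2) / sqrt(2*pi)."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Midpoint-rule approximation of Var(X) = E(X^2) on [-10, 10];
# the tails contribute a negligible amount to the integral.
n = 200_000
a, b = -10.0, 10.0
dx = (b - a) / n
var = 0.0
for i in range(n):
    x = a + (i + 0.5) * dx
    var += x * x * std_normal_pdf(x) * dx
print(round(var, 6))  # 1.0
```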
The standard normal distribution is referred to as “standard” because it has mean zero and variance one. More generally, we say that $X$ is normally distributed with mean $\mu$ and variance $\sigma^2$, written $X \sim N(\mu, \sigma^2)$, if it is continuous with pdf
\[
f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(x-\mu)^2/2\sigma^2}.
\]
One can show, using arguments similar to those used in the previous paragraph, that this formula for $f_X$ does indeed yield $E(X) = \mu$ and $\mathrm{Var}(X) = \sigma^2$.
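We do not repeat those arguments, but they can be checked numerically for particular parameter values. In the sketch below the choice $\mu = 3$, $\sigma^2 = 4$ is our own illustrative assumption, as is the truncation to $\mu \pm 10\sigma$:

```python
import math

def normal_pdf(x, mu, sigma2):
    """Density of N(mu, sigma^2): e^(-(x-mu)^2 / (2 sigma^2)) / sqrt(2 pi sigma^2)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

# Illustrative parameters (assumed, not from the text).
mu, sigma2 = 3.0, 4.0
sigma = math.sqrt(sigma2)

# Midpoint rule on [mu - 10*sigma, mu + 10*sigma]; tails are negligible.
n = 200_000
a, b = mu - 10 * sigma, mu + 10 * sigma
dx = (b - a) / n
mean = 0.0
second_moment = 0.0
for i in range(n):
    x = a + (i + 0.5) * dx
    p = normal_pdf(x, mu, sigma2) * dx
    mean += x * p
    second_moment += x * x * p
variance = second_moment - mean ** 2  # Var(X) = E(X^2) - (E(X))^2
print(round(mean, 4), round(variance, 4))  # 3.0 4.0
```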
A very useful property of the normal distribution is that a linear function of a normal random variable is still normal, but with a different mean and variance. That is, if $X \sim N(\mu, \sigma^2)$, then $a + bX \sim N(a + b\mu, b^2\sigma^2)$. This property of the normal distribution allows us to “standardize” a normal random variable by subtracting its mean and dividing by its standard deviation: the standardized random variable $(X - \mu)/\sigma$ will then have the standard normal distribution.
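Standardization is easy to see in simulation. The sketch below (our own illustration; the parameters $\mu = 3$, $\sigma = 2$, the sample size, and the seed are all assumptions) draws from $N(\mu, \sigma^2)$, applies $Z = (X - \mu)/\sigma$, and checks that the sample mean and variance of $Z$ are close to 0 and 1:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Simulate X ~ N(mu, sigma^2), then standardize: Z = (X - mu) / sigma.
mu, sigma = 3.0, 2.0  # assumed example parameters
xs = [random.gauss(mu, sigma) for _ in range(200_000)]
zs = [(x - mu) / sigma for x in xs]

# By the linear-transformation property, Z ~ N(0, 1).
z_mean = sum(zs) / len(zs)
z_var = sum((z - z_mean) ** 2 for z in zs) / len(zs)
print(z_mean, z_var)  # sample mean close to 0, sample variance close to 1
```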
5  Joint mass and density functions
The notions of discrete and continuous probability distributions may easily be extended to the case where we have two (or more) random variables. Suppose $(X, Y)$ is a pair of random variables. The pair $(X, Y)$ is said to be discrete if it can take on only a finite number of pairs of values. The probabilities with which $(X, Y)$ is equal to each of those pairs are given by its \emph{joint probability mass function}, or joint pmf, which we denote $f_{XY}$. The probability that $(X, Y)$ is equal to some fixed pair of values $(x, y)$ is equal to $f_{XY}(x, y)$:
\[
P(X = x \text{ and } Y = y) = f_{XY}(x, y).
\]
Since the values of $f_{XY}(x, y)$ for different $(x, y)$'s represent probabilities, we must have $f_{XY}(x, y) \ge 0$ for all $x$ and $y$, and the sum of the probabilities must equal one:
\[
\sum_{x,y} f_{XY}(x, y) = 1.
\]
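A joint pmf for a discrete pair is just a table of probabilities indexed by value pairs. The small example below is our own illustration (the values and probabilities are assumed); it shows both defining conditions and how $P(X = x \text{ and } Y = y)$ is read directly from the table:

```python
# A joint pmf for a discrete pair (X, Y) taking values in {0, 1} x {0, 1}.
# The probabilities here are illustrative, not from the text.
joint_pmf = {
    (0, 0): 0.10, (0, 1): 0.20,
    (1, 0): 0.30, (1, 1): 0.40,
}

# Condition 1: every value f_XY(x, y) is nonnegative.
assert all(p >= 0 for p in joint_pmf.values())

# Condition 2: the probabilities sum to one.
total = sum(joint_pmf.values())
print(round(total, 6))  # 1.0

# P(X = 1 and Y = 0) = f_XY(1, 0), read straight from the table.
print(joint_pmf[(1, 0)])  # 0.3
```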