The first equality comes from $\mathrm{Var}(u_i \mid X_i) = E(u_i^2 \mid X_i) - E(u_i \mid X_i)^2$ and the assumption $E(u_i \mid X_i) = 0$, while the second equality comes from the fact that given $X_i$ (i.e., if we know $X_i$), the variation of $Y_i$ comes only from the variation of $u_i$.
Therefore, we only need to find $P(Y_i = 1 \mid X_i)$ and $P(Y_i = 0 \mid X_i)$ to calculate $E(u_i^2 \mid X_i)$. Let us take conditional expectations on both sides of the model. From the model $Y_i = \beta_0 + \beta_1 X_i + u_i$, we get
$$E(Y_i \mid X_i) = E(\beta_0 + \beta_1 X_i + u_i \mid X_i) = E(\beta_0 \mid X_i) + E(\beta_1 X_i \mid X_i) + E(u_i \mid X_i) = \beta_0 + \beta_1 X_i.$$
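Since this is just a statement about a conditional mean, it can be checked by simulation at one fixed value of $X_i$. In the sketch below, the parameter values are hypothetical, chosen only so that the implied probability lies in $(0, 1)$.

```python
import random

# Hypothetical illustration values (not from the text); p must lie in (0, 1)
beta0, beta1 = 0.2, 0.5
x = 0.6                        # condition on a fixed X_i = x
p = beta0 + beta1 * x          # the derivation says E(Y_i | X_i = x) = beta0 + beta1*x

random.seed(0)
n = 200_000
# Given X_i = x, Y_i takes the value 1 with probability p and 0 otherwise
y = [1 if random.random() < p else 0 for _ in range(n)]

mean_y = sum(y) / n            # sample analogue of E(Y_i | X_i = x)
print(round(mean_y, 2))
```

With $n = 200{,}000$ draws, the printed sample mean should sit very close to $\beta_0 + \beta_1 x = 0.5$.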
Here we exploited the properties of conditional expectations (which should be familiar) and the assumption $E(u_i \mid X_i) = 0$. On the other hand, since $Y_i$ is a Bernoulli random variable given $X_i$,
$$E(Y_i \mid X_i) = 1 \cdot P(Y_i = 1 \mid X_i) + 0 \cdot P(Y_i = 0 \mid X_i) = P(Y_i = 1 \mid X_i).$$
This, combined with the result above, yields
$$P(Y_i = 1 \mid X_i) = \beta_0 + \beta_1 X_i, \qquad P(Y_i = 0 \mid X_i) = 1 - P(Y_i = 1 \mid X_i) = 1 - \beta_0 - \beta_1 X_i.$$
The conditional variance of $Y_i$ given $X_i$ is then obtained from the product of $P(Y_i = 1 \mid X_i)$ and $P(Y_i = 0 \mid X_i)$:
$$\mathrm{Var}(Y_i \mid X_i) = (\beta_0 + \beta_1 X_i)(1 - \beta_0 - \beta_1 X_i).$$
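The variance formula can be checked numerically in the same spirit; the parameter values below are hypothetical and used only for illustration.

```python
import random

# Hypothetical illustration values (not from the text)
beta0, beta1 = 0.2, 0.5
x = 0.6
p = beta0 + beta1 * x          # P(Y_i = 1 | X_i = x)

random.seed(1)
n = 200_000
# Given X_i = x, Y_i is Bernoulli with success probability p
y = [1 if random.random() < p else 0 for _ in range(n)]

mean_y = sum(y) / n
var_y = sum((yi - mean_y) ** 2 for yi in y) / n   # sample conditional variance
# The derivation says Var(Y_i | X_i = x) = (beta0 + beta1*x) * (1 - beta0 - beta1*x)
print(round(var_y, 3), round(p * (1 - p), 3))
```

The two printed numbers should agree to about two decimal places; note also that the formula makes the conditional variance of $Y_i$ depend on the value of $X_i$.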
(If $Z$ is a random variable which is one with probability $p$ and zero with probability $q = 1 - p$ (i.e., $Z$ is a Bernoulli random variable), then $E(Z) = p$ and $\mathrm{Var}(Z) = p \cdot q$. Likewise, the variance of $Y_i$ given $X_i$ is $p \cdot q$, where $p$ is $P(Y_i = 1 \mid X_i)$ and $q$ is $P(Y_i = 0 \mid X_i)$.)
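The quoted Bernoulli variance fact follows in one line from the observation that a zero-one variable equals its own square, $Z^2 = Z$:
$$\mathrm{Var}(Z) = E(Z^2) - E(Z)^2 = E(Z) - E(Z)^2 = p - p^2 = p(1 - p) = p \cdot q.$$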
Winter '08, Stohs