Class Notes for Econometrics
Variance of Ordinary Least Squares under the Gauss-Markov Assumptions
Jean Eid
Assume that the following hold:

MLR1: Linearity in the parameters
MLR2: Independence of the error term $u$ and random sampling
MLR3: No perfect collinearity
MLR4: Zero conditional mean
MLR5: Homoskedasticity, i.e. $\mathrm{var}(u_i \mid x) = \sigma^2$
Mean Squared Error
Suppose we have data $x_1, x_2, x_3, \dots, x_N$ that we think are coming from a model

$$y_i = \beta_0 + \beta_1 x_i + u_i$$
Suppose that MLR1–MLR5 hold. Our goal is to use the information available in the data to make guesses about $\beta_0$ and $\beta_1$. One such guess would be the OLS estimator. Another is the method of moments estimator; a third would be, whatever the data are, to guess $\beta_0$ to be 1 and $\beta_1$ to be 5. Ideally, we need some measure that tells us, for example, whether OLS is better than the third guess made above. One way is to pick only estimators that are unbiased. In the last lecture notes we proved that OLS is unbiased. However, suppose you have two estimators that are both unbiased: how do you choose which one to work with? One other property we would like is for the estimate to converge in probability to the true parameter. We write this as
$$\lim_{n \to \infty} P\left( \left| \hat{\beta}_{0,n} - \beta_0 \right| < \epsilon \right) = 1 \quad \text{for every } \epsilon > 0$$
where $\hat{\beta}_{0,n}$ is the OLS estimate of $\beta_0$ based on $n$ observations. What we are trying to say here is that we
want an estimator that gives us an estimate that has the following property. As we increase the sample
size, the probability that our estimate is within any given distance of the true value of the parameter approaches 1. In other words, we become more and more certain that our estimate is close to the true value in the population. This is a very important property, and we call it consistency.
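To make consistency concrete, here is a small simulation sketch (not part of the original notes; the true values $\beta_0 = 2$, $\beta_1 = 3$ and the error scale $\sigma = 1$ are illustrative assumptions). As the sample size grows, the OLS estimates settle near the true parameters:

```python
import random

def ols(x, y):
    """OLS estimates of (beta0, beta1) in y_i = beta0 + beta1*x_i + u_i."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sxy / sxx
    b0 = ybar - b1 * xbar
    return b0, b1

def simulate(n, beta0=2.0, beta1=3.0, sigma=1.0, seed=0):
    """Draw n observations from the model and return the OLS estimates."""
    rng = random.Random(seed)
    x = [rng.uniform(0.0, 10.0) for _ in range(n)]
    y = [beta0 + beta1 * xi + rng.gauss(0.0, sigma) for xi in x]
    return ols(x, y)

# Estimates should settle near (2, 3) as n grows.
for n in (10, 100, 10000):
    print(n, simulate(n))
```

The spread of the estimates around the true values shrinks as $n$ increases, which is exactly the behaviour the probability statement above describes.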
We can prove this property for OLS through the expectation and the variance of the estimator. This approach is called mean squared error (MSE) consistency, which implies consistency. The MSE is defined in the following way:

$$\mathrm{MSE} = E\left[ \left( \delta(\mathrm{data}) - \theta \right)^2 \right]$$
where $\delta(\mathrm{data})$ is an estimate of $\theta$. In the case above, $\hat{\beta}_0$ is an estimate of $\beta_0$. The reason we write the function $\delta$ this way is to stress that estimates are functions of the data; for example, the OLS estimates are functions of $x_1, x_2, x_3, \dots, x_N$ and $y_1, y_2, y_3, \dots, y_N$. However, to conserve space, I will from now on write $\delta$ to mean $\delta(\mathrm{data})$.
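To see how the MSE can rank estimators, here is a small Monte Carlo sketch (not part of the original notes; the true values $(\beta_0, \beta_1) = (2, 3)$, the error scale, and the sample size are illustrative assumptions). It compares OLS with the "always guess (1, 5)" estimator mentioned earlier:

```python
import random

def ols(x, y):
    """OLS estimates of (beta0, beta1) in y_i = beta0 + beta1*x_i + u_i."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b1 = sxy / sxx
    return ybar - b1 * xbar, b1

def constant_guess(x, y):
    """Ignore the data entirely and guess beta0 = 1, beta1 = 5."""
    return 1.0, 5.0

def mc_mse(estimator, beta0=2.0, beta1=3.0, n=50, reps=500, seed=0):
    """Monte Carlo approximation of MSE = E[(delta(data) - theta)^2]."""
    rng = random.Random(seed)
    se0 = se1 = 0.0
    for _ in range(reps):
        x = [rng.uniform(0.0, 10.0) for _ in range(n)]
        y = [beta0 + beta1 * xi + rng.gauss(0.0, 1.0) for xi in x]
        b0, b1 = estimator(x, y)
        se0 += (b0 - beta0) ** 2
        se1 += (b1 - beta1) ** 2
    return se0 / reps, se1 / reps

print("OLS   MSE:", mc_mse(ols))
print("guess MSE:", mc_mse(constant_guess))  # exactly (1.0, 4.0)
```

The constant guess has MSE exactly $(1-2)^2 = 1$ and $(5-3)^2 = 4$ here, while the OLS MSEs are far smaller, which is the kind of comparison the definition above makes possible.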
We say an estimator is mean squared error consistent when

$$\lim_{n \to \infty} \mathrm{MSE} = 0$$
Now, look at the definition of MSE, and let $\mu_\delta$ denote the population expected value of $\delta(\mathrm{data})$.
$$
\begin{aligned}
E\left[ (\delta - \theta)^2 \right]
&= E\left[ (\delta - \mu_\delta + \mu_\delta - \theta)^2 \right] \\
&= E\left[ (\delta - \mu_\delta)^2 + (\mu_\delta - \theta)^2 + 2 (\delta - \mu_\delta)(\mu_\delta - \theta) \right] \\
&= E\left[ (\delta - \mu_\delta)^2 \right] + (\mu_\delta - \theta)^2 + 2 (\mu_\delta - \theta) \, E\left[ \delta - \mu_\delta \right] \\
&= \mathrm{var}(\delta) + (\mu_\delta - \theta)^2
\end{aligned}
$$

where the middle term is a constant, so its expectation is itself, and the cross term vanishes because $E[\delta - \mu_\delta] = E[\delta] - \mu_\delta = 0$. In other words, the MSE is the variance of the estimator plus the square of its bias.
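As a numerical sanity check (an illustrative sketch, not from the notes: the estimator $\delta = 0.9 \bar{x}$, the target $\theta = 5$, and the sampling setup are assumptions chosen to make the bias visible), the variance-plus-squared-bias decomposition of the MSE can be verified on simulated draws of a deliberately biased estimator:

```python
import random

def mse_decomposition(estimates, theta):
    """Return (MSE, variance + squared bias) over draws of an estimator."""
    m = len(estimates)
    mu = sum(estimates) / m                        # Monte Carlo E[delta]
    mse = sum((d - theta) ** 2 for d in estimates) / m
    var = sum((d - mu) ** 2 for d in estimates) / m
    bias_sq = (mu - theta) ** 2
    return mse, var + bias_sq

rng = random.Random(1)
theta = 5.0
# delta = 0.9 * sample mean: deliberately biased for theta
draws = []
for _ in range(2000):
    sample = [rng.gauss(theta, 2.0) for _ in range(20)]
    draws.append(0.9 * sum(sample) / len(sample))

mse, var_plus_bias_sq = mse_decomposition(draws, theta)
print(mse, var_plus_bias_sq)  # the two numbers agree
```

Because the same draws are used for both sides, the identity holds exactly (up to floating-point rounding), mirroring the algebra above.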
This note was uploaded on 10/16/2011 for the course ECONOMICS 655 taught by Professor Jean Eid during the Fall '11 term at Wilfrid Laurier University.