LECTURE 7: INFERENCE WITH OLS
PDFs OF OLS ESTIMATORS
• We need to add one more assumption to the CLRM.

Classical Linear Regression Model (Revisited)

Assumptions of the CLRM:
1. For each i, the population regression function of Y_i given (X_i1, X_i2, X_i3, …, X_ik) is linear, i.e.

   Y_i = β0 + β1 X_i1 + β2 X_i2 + β3 X_i3 + … + βk X_ik + ε_i   (PRF)
2. (X_i1, X_i2, X_i3, …, X_ik) are nonstochastic variables (i.e., their values are fixed numbers in repeated samples).
3. The expected, or mean, value of the disturbance term ε_i is zero:

   E(ε_i) = 0
4. The variance of each ε_i is constant for all i, that is, ε_i is homoskedastic:

   Var(ε_i) = σ²
5. There is no correlation between any two error terms. This is the assumption of no autocorrelation, or no serial correlation:

   Cov(ε_i, ε_j) = 0   ∀ i ≠ j
6. No exact collinearity exists between X_1 and X_2.
7. In the PRF Y_i = β0 + β1 X_i + ε_i, the disturbance ε_i is normally distributed:

   ε_i ~ N(0, σ²)
• Assumption 7 is equivalent to assuming that Y is normally distributed with mean equal to β0 + β1 X_1 + β2 X_2 + … + βk X_k and variance equal to σ²:

  Y ~ N(β0 + β1 X_1 + β2 X_2 + … + βk X_k, σ²)
• Normality may be a bad assumption, for example for nonnegative variables (e.g., wages, prices) or for variables that take on only a small number of values. Sometimes taking a nonlinear transformation (e.g., the natural logarithm) of the dependent variable makes normality plausible.
• Normality is a convenient assumption because it implies normality of the OLS estimators (since they are linear functions of the normal Y's).
• We know that a linear function of a normally distributed variable is itself normally distributed. If our PRF is

  Y_i = β0 + β1 X_i + ε_i

  then, since the OLS estimators β̂0 and β̂1 are linear functions of the ε_i, they are themselves normally distributed.
  β̂0 ~ N(β0, σ²_β̂0),   where   var(β̂0) = σ²_β̂0 = σ² Σ X_i² / (n Σ (X_i − X̄)²)

  β̂1 ~ N(β1, σ²_β̂1),   where   var(β̂1) = σ²_β̂1 = σ² / Σ (X_i − X̄)²
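These sampling distributions can be checked by simulation. The sketch below (pure Python; the values of β0, β1, σ, and the fixed X's are made up for illustration, not from the lecture) draws many samples from the PRF, re-estimates the slope each time, and compares the mean and variance of the β̂1 draws to β1 and σ² / Σ(X_i − X̄)².

```python
import random

# Illustrative simulation with assumed parameter values (not from the notes):
# Y_i = beta0 + beta1*X_i + eps_i, eps_i ~ N(0, sigma^2), X's fixed.
random.seed(0)
beta0, beta1, sigma = 1.0, 2.0, 3.0
x = [float(i) for i in range(20)]           # nonstochastic regressors
xbar = sum(x) / len(x)
sxx = sum((xi - xbar) ** 2 for xi in x)     # sum of (X_i - Xbar)^2

def ols_slope():
    """Draw one sample and return the OLS slope estimate."""
    y = [beta0 + beta1 * xi + random.gauss(0.0, sigma) for xi in x]
    ybar = sum(y) / len(y)
    return sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx

draws = [ols_slope() for _ in range(20000)]
mean_hat = sum(draws) / len(draws)
var_hat = sum((d - mean_hat) ** 2 for d in draws) / len(draws)

print(mean_hat)                              # close to beta1
print(var_hat, sigma ** 2 / sxx)             # both near sigma^2 / Sxx
```

The empirical mean and variance of the simulated β̂1 draws should line up with the formulas above, which is exactly what the normality result predicts.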
• More generally:

  (β̂j − βj) / se(β̂j) ~ N(0, 1)
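One concrete consequence of this standardization is that, when σ is known, roughly 95% of standardized slope estimates should land in [−1.96, 1.96]. The sketch below checks that by simulation, again with made-up values for β0, β1, σ, and the X's.

```python
import random

# Illustrative check with assumed parameter values (not from the notes):
# with sigma known, (beta1_hat - beta1) / se(beta1_hat) is standard normal,
# so about 95% of draws should fall inside [-1.96, 1.96].
random.seed(2)
beta0, beta1, sigma = 1.0, 2.0, 3.0
x = [float(i) for i in range(15)]
xbar = sum(x) / len(x)
sxx = sum((xi - xbar) ** 2 for xi in x)
se = (sigma ** 2 / sxx) ** 0.5               # true se(beta1_hat)

def z_draw():
    """Draw one sample and return the standardized slope estimate."""
    y = [beta0 + beta1 * xi + random.gauss(0.0, sigma) for xi in x]
    ybar = sum(y) / len(y)
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    return (b1 - beta1) / se

zs = [z_draw() for _ in range(10000)]
coverage = sum(1 for z in zs if -1.96 <= z <= 1.96) / len(zs)
print(coverage)                              # close to 0.95
```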
FUN FACTS
• The (conditional) standard error of β̂j depends on the unknown σ². If we use the unbiased estimator

  σ̂² = (1 / (n − k)) Σ_{i=1}^{n} ε̂_i²

  to estimate the standard error of β̂j, then the distribution of the standardized β̂j is no longer standard normal but will be asymptotically normal:

  (β̂j − βj) / ŝe(β̂j) ~ᴬ N(0, 1)
• Even if assumption 7 does not hold, this asymptotic normality continues to apply in large samples.
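The estimator σ̂² above is easy to compute from the OLS residuals. The sketch below (assumed values throughout; k is taken to count the estimated coefficients, matching the lecture's n − k formula) estimates σ² by RSS / (n − k) and then plugs it into the standard-error formula for the slope.

```python
import random

# Illustrative computation with assumed parameter values (not from the notes):
# sigma2_hat = RSS / (n - k) from the residuals of a simple regression,
# then se_hat(beta1_hat) = sqrt(sigma2_hat / Sxx).
random.seed(1)
n, k = 200, 2                                # k = number of estimated coefficients
beta0, beta1, sigma = 1.0, 2.0, 3.0
x = [float(i) for i in range(n)]
y = [beta0 + beta1 * xi + random.gauss(0.0, sigma) for xi in x]

xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
b0 = ybar - b1 * xbar

rss = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
sigma2_hat = rss / (n - k)                   # unbiased estimator of sigma^2
se_b1_hat = (sigma2_hat / sxx) ** 0.5        # estimated se of the slope

print(sigma2_hat)                            # near sigma^2 for large n
print(se_b1_hat)
```

With a large sample, σ̂² should sit near the true σ², and the standardized slope built from ŝe behaves approximately like a standard normal draw, as the asymptotic result states.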
This note was uploaded on 03/20/2009 for the course ECON 103 taught by Professor Sandra Black during the Winter '07 term at UCLA.