1.1 Two-variable OLS: distributions
73-261 Econometrics
August 30
Wooldridge 2.5

Announcements
Quiz this Friday, covering the material from this & previous week
HW#1 – will be complete by end of day
  Due next Wed., Sept. 8
  You can do all of it after today
Next Monday – Labor Day
  No class next Monday, Sept. 6
© CMU / Y. Kryukov

Review & plan
Up to now: the true coefficients $(\beta_0, \beta_1)$ are constants
Our estimates $\hat\beta_0, \hat\beta_1$ are random variables
Today:
  Distribution of the estimates
  So we can tell how precise our estimates are
  And do statistical tests
Approach
Want the distribution of the $\hat\beta$'s
  The $\hat\beta$'s are functions of the $X_i$'s and $Y_i$'s
  The $Y_i$'s are functions of the $X_i$'s and $u_i$'s
We observe the $X_i$'s
  So we condition on them, i.e. we hold the $X_i$'s fixed
  Some books just treat $X_i$ as part of the population
What could the $\hat\beta$'s be, given $X_i$?
  The only independent random variable is $u_i$
  Which affects the $\hat\beta$'s via the $Y_i$'s
How do the $\hat\beta$'s depend on the $Y_i$'s and $u_i$'s?
Estimator as a function of error
Take the estimator of $\beta_1$; some algebra gives us:
$$\hat\beta_1 = \sum_{i=1}^{N} \frac{X_i - \bar X}{N\left(\overline{X^2} - \bar X^2\right)}\, Y_i = \sum_{i=1}^{N} W_i Y_i$$
Expand $Y_i = \beta_0 + \beta_1 X_i + u_i$ and simplify:
$$\hat\beta_1 = 0 \cdot \beta_0 + 1 \cdot \beta_1 + \sum_{i=1}^{N} W_i u_i$$
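A quick numerical sketch (not from the slides; the data and seed are invented) can confirm the weighted-sum form: the weights $W_i$ reproduce the textbook slope formula and satisfy $\sum_i W_i = 0$ and $\sum_i W_i X_i = 1$, which is exactly why $\beta_0$ gets coefficient 0 and $\beta_1$ gets coefficient 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: y = 2 + 3x + u  (true beta0 = 2, beta1 = 3)
N = 50
x = rng.normal(size=N)
u = rng.normal(size=N)
y = 2 + 3 * x + u

# OLS weights: W_i = (x_i - xbar) / sum_j (x_j - xbar)^2
w = (x - x.mean()) / np.sum((x - x.mean()) ** 2)

beta1_hat = np.sum(w * y)                                  # sum_i W_i y_i
beta1_direct = np.cov(x, y, bias=True)[0, 1] / np.var(x)   # textbook formula

print(np.isclose(beta1_hat, beta1_direct))                 # True
# Weight identities: sum W_i = 0 and sum W_i x_i = 1
print(np.isclose(w.sum(), 0.0), np.isclose(np.sum(w * x), 1.0))
```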
Mean of $\hat\beta_1$
Mean, conditional on $X = (X_1, \ldots, X_N)$:
$$E[\hat\beta_1 \mid X] = \beta_1 + \sum_{i=1}^{N} W_i\, E[u_i \mid X]$$
Key assumption: $E[u \mid x] = 0$, satisfied if $u$ is independent of $x$.
The assumption gives us an unbiased estimator:
$$E[\hat\beta_1 \mid X] = \beta_1$$
Assumption failure is called omitted variable bias:
a third variable correlated with both $x$ and $y$
Variance of $\hat\beta_1$
$$V[\hat\beta_1 \mid X] = \sum_{i=1}^{N} W_i^2\, V[u_i \mid X]$$
Another assumption: $V[u \mid x] = \sigma^2$,
satisfied if $u$ is independent of $x$.
If the assumption holds, the variance becomes:
$$V[\hat\beta_1 \mid X] = \frac{\sigma^2}{\sum_{i=1}^{N} (X_i - \bar X)^2}$$
Assumption failure = heteroskedasticity:
the estimate is OK, but the tests are invalid
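Both results can be checked with a small Monte Carlo sketch (an invented data-generating process, not from the slides): hold $X$ fixed, redraw $u$ many times, and compare the simulated mean and variance of $\hat\beta_1$ with $\beta_1$ and $\sigma^2 / \sum_i (X_i - \bar X)^2$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed X (we condition on it); redraw u across many simulated samples.
N, reps = 30, 20000
beta0, beta1, sigma = 2.0, 3.0, 1.5
x = rng.normal(size=N)

sxx = np.sum((x - x.mean()) ** 2)
w = (x - x.mean()) / sxx                 # OLS weights W_i

U = rng.normal(scale=sigma, size=(reps, N))
Y = beta0 + beta1 * x + U                # one sample per row
draws = Y @ w                            # beta1_hat for every replication

# Theory: E[b1|X] = beta1,  V[b1|X] = sigma^2 / sum (x_i - xbar)^2
print(draws.mean())                      # close to 3.0
print(draws.var(), sigma**2 / sxx)       # approximately equal
```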
Mean and Variance of $\hat\beta_0$
$\hat\beta_0$ is again a linear function of the $Y_i$'s:
$$\hat\beta_0 = \bar Y - \hat\beta_1 \bar X = \sum_{i=1}^{N} \left(\tfrac{1}{N} - W_i \bar X\right) Y_i$$
Under the same assumptions:
$$E[\hat\beta_0 \mid X] = \beta_0, \qquad V[\hat\beta_0 \mid X] = \frac{\sigma^2 \sum_{i=1}^{N} X_i^2}{N \sum_{i=1}^{N} (X_i - \bar X)^2}$$
Proofs use the same approach as for $\hat\beta_1$
Covariance of $\hat\beta_0$ and $\hat\beta_1$
Conditional on $X$, $\hat\beta_0$ and $\hat\beta_1$ are correlated:
both are weighted sums of the same $u_i$'s
Yet more derivations give us:
$$\mathrm{cov}[\hat\beta_0, \hat\beta_1 \mid X] = E\left[(\hat\beta_0 - \beta_0)(\hat\beta_1 - \beta_1) \mid X\right] = -\frac{\sigma^2 \sum_{i=1}^{N} X_i}{N \sum_{i=1}^{N} (X_i - \bar X)^2}$$
The example demonstrates this.
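As a sketch (invented data-generating process), the covariance formula can also be checked by simulation; note that with $\bar X > 0$ the formula predicts a negative covariance between the intercept and slope estimates.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed X, many redraws of u: compare cov(b0_hat, b1_hat | X) to the formula.
N, reps = 25, 20000
beta0, beta1, sigma = 1.0, -2.0, 1.0
x = rng.uniform(1, 5, size=N)            # positive xbar => negative covariance

sxx = np.sum((x - x.mean()) ** 2)
Y = beta0 + beta1 * x + rng.normal(scale=sigma, size=(reps, N))

b1s = Y @ ((x - x.mean()) / sxx)         # slope estimate per replication
b0s = Y.mean(axis=1) - b1s * x.mean()    # intercept estimate per replication

theory = -sigma**2 * x.sum() / (N * sxx)   # = -sigma^2 * xbar / Sxx
print(np.cov(b0s, b1s)[0, 1], theory)      # approximately equal, both negative
```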
Example 3: Estimate distribution
How do we know $\sigma^2$?
Is $\sigma$ part of the population, the data, or an estimate?
  It is part of the population.
  We need to estimate it.
What is $\sigma^2$? The variance of $u$.
How do you estimate that?
  Do we observe $u$? Is it in our data? No.
  Can our estimates help us? Yes: use the residuals $\hat U_i$.
Estimating $\sigma^2$
Did we see an estimate of variance before? Apply it to $\hat U$.
We can prove that:
$$\sum_{i=1}^{N} \hat U_i = 0$$
The formula is:
$$\hat\sigma^2 = \frac{1}{N-2} \sum_{i=1}^{N} \hat U_i^2$$
Why $(N - 2)$?
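A short sketch with simulated data (invented, with known $\sigma = 2$) illustrates both facts: the residuals sum to zero by construction, and $\hat\sigma^2$ lands near the true $\sigma^2 = 4$.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical sample with known error s.d. sigma = 2
N = 100
x = rng.normal(size=N)
y = 1.0 + 2.0 * x + rng.normal(scale=2.0, size=N)

sxx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
b0 = y.mean() - b1 * x.mean()

resid = y - (b0 + b1 * x)                 # the residuals U_i hat

print(np.isclose(resid.sum(), 0.0))       # True: first-order condition for b0
sigma2_hat = np.sum(resid**2) / (N - 2)
print(sigma2_hat)                         # close to sigma^2 = 4
```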
Estimated variance of betas
Take the variance formulas and replace the actual $\sigma^2$ with the estimate $\hat\sigma^2$:
$$\hat V[\hat\beta_1 \mid X] = \frac{\hat\sigma^2}{\sum_{i=1}^{N} (X_i - \bar X)^2}$$
$$\hat V[\hat\beta_0 \mid X] = \frac{\hat\sigma^2 \sum_{i=1}^{N} X_i^2}{N \sum_{i=1}^{N} (X_i - \bar X)^2}$$
$$\widehat{\mathrm{cov}}[\hat\beta_0, \hat\beta_1 \mid X] = -\frac{\hat\sigma^2 \sum_{i=1}^{N} X_i}{N \sum_{i=1}^{N} (X_i - \bar X)^2}$$
Standard errors
Sample counterpart of the standard deviation:
$$\mathrm{s.e.}(\hat\beta_j) = \sqrt{\hat V[\hat\beta_j \mid X]}$$
Informal "rule of thumb":
If $|\hat\beta_j| \geq 2 \cdot \mathrm{s.e.}(\hat\beta_j)$, it is likely that $\beta_j \neq 0$,
i.e. $x_j$ has a meaningful effect on $y$.
This is a simplified version of a formal statistical test (which we will derive later).
Software reports s.e. along with the coefficients.
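Putting the pieces together, a sketch on invented data (true slope 1.5, so the rule of thumb should fire) computes both standard errors from the slide formulas and applies the 2-standard-error rule:

```python
import numpy as np

rng = np.random.default_rng(6)

N = 60
x = rng.normal(size=N)
y = 0.5 + 1.5 * x + rng.normal(size=N)

sxx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)
sigma2_hat = np.sum(resid**2) / (N - 2)

se_b1 = np.sqrt(sigma2_hat / sxx)
se_b0 = np.sqrt(sigma2_hat * np.sum(x**2) / (N * sxx))

# Rule of thumb: |b| >= 2 * s.e. suggests the coefficient is nonzero
print(abs(b1) >= 2 * se_b1)   # True here, since the true slope is 1.5
```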
Variance-covariance matrix
Matrix of the estimated variances and covariances of the $\hat\beta$'s:
$$\hat V\!\left(\begin{bmatrix} \hat\beta_0 \\ \hat\beta_1 \end{bmatrix}\right) = \begin{bmatrix} \hat V(\hat\beta_0) & \widehat{\mathrm{cov}}(\hat\beta_0, \hat\beta_1) \\ \widehat{\mathrm{cov}}(\hat\beta_0, \hat\beta_1) & \hat V(\hat\beta_1) \end{bmatrix}$$
It can be reported after running a regression in statistical software.
It is used in various tests.
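As a sketch (invented data), the 2×2 matrix built from the slide formulas agrees entry-by-entry with the matrix form $\hat\sigma^2 (X'X)^{-1}$ that statistical software computes:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical sample
N = 40
x = rng.normal(size=N)
y = 1.0 + 2.0 * x + rng.normal(size=N)

X = np.column_stack([np.ones(N), x])        # design matrix [1, x]
b = np.linalg.lstsq(X, y, rcond=None)[0]    # (b0_hat, b1_hat)
resid = y - X @ b
sigma2_hat = resid @ resid / (N - 2)

# Entry-by-entry from the slide formulas
sxx = np.sum((x - x.mean()) ** 2)
V = np.array([
    [sigma2_hat * np.sum(x**2) / (N * sxx), -sigma2_hat * x.sum() / (N * sxx)],
    [-sigma2_hat * x.sum() / (N * sxx),      sigma2_hat / sxx],
])

# Matrix form: sigma2_hat * (X'X)^{-1}
V_matrix = sigma2_hat * np.linalg.inv(X.T @ X)
print(np.allclose(V, V_matrix))   # True
```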
Degrees of freedom (d.f.)
$$\hat\sigma^2 = \frac{1}{N-2} \sum_{i=1}^{N} \hat U_i^2 \qquad\qquad \hat\sigma_Y^2 = \frac{1}{N-1} \sum_{i=1}^{N} (Y_i - \bar Y)^2$$
$(N - 2)$ and $(N - 1)$ are called degrees of freedom:
  We can prove that they eliminate bias
  d.f. = # of r.v.'s ($N$) minus # of estimates:
    $\bar Y$ is an estimate, hence $(N - 1)$ in $\hat\sigma_Y^2$
    $\hat U_i$ is a function of the two $\hat\beta$'s, hence $(N - 2)$ in $\hat\sigma^2$
Why are we subtracting the # of estimates?
  Because we use the estimates to minimize $\sum_{i=1}^{N} (\ldots)^2$
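The bias correction can be seen numerically in a Monte Carlo sketch (invented data-generating process): dividing the sum of squared residuals by $N$ underestimates $\sigma^2$, while dividing by $N - 2$ hits it.

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo: scale the sum of squared residuals by 1/N vs. 1/(N-2).
N, reps, sigma2 = 10, 20000, 4.0
x = rng.normal(size=N)                     # fixed regressor
X = np.column_stack([np.ones(N), x])

# Residual-maker matrix M: resid = M @ y; M annihilates the column space of X
M = np.eye(N) - X @ np.linalg.inv(X.T @ X) @ X.T

Y = 1.0 + 2.0 * x + rng.normal(scale=np.sqrt(sigma2), size=(reps, N))
R = Y @ M                                  # residuals for every replication
ssr = np.sum(R**2, axis=1)

print((ssr / N).mean())        # biased downward: E = sigma2 * (N-2)/N = 3.2
print((ssr / (N - 2)).mean())  # unbiased: E = sigma2 = 4.0
```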
Two-variable OLS, N = 2
How do you fit a line to two points?
Draw a line through them. What is $\sum_{i=1}^{N} \hat U_i^2$?
$$\sum_{i=1}^{N} \hat U_i^2 = 0$$
There is no need for OLS when $N \leq 2$.
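A two-line check with two invented points: the OLS formulas reduce to the line through the points, so every residual is exactly zero.

```python
import numpy as np

# With N = 2, the fitted line passes through both points exactly.
x = np.array([1.0, 3.0])
y = np.array([2.0, 8.0])

sxx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx   # slope = 3
b0 = y.mean() - b1 * x.mean()                        # intercept = -1

resid = y - (b0 + b1 * x)
print(resid)                      # [0. 0.]
print(np.sum(resid**2))           # 0.0
```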
Two-variable OLS, N = 4
How many points cannot be fit on the line? How many $\hat U_i$'s $\neq 0$?
$2 = N - 2$