that is,

E(u) = 0
Cov(x_j, u) = 0, for j = 1, 2, ..., k

VER. 10/23/2012. © P. KOLM

Large Sample Inference (1/2)
Recall:
• Under the classical linear model (CLM) assumptions, the sampling
distributions are normal, so we could derive t  and F distributions for
hypothesis testing
• The normality was due to assuming the population error was normal
• The assumption of normal errors implied that the distribution of the
dependent variable, y, given the independent variables, x_1, ..., x_k, was
normal as well

Large Sample Inference (2/2)
• There are many situations in which this exact normality assumption will fail
o For example: Any skewed variable (like hedge fund returns, option
returns, wages, savings, etc.) cannot be normally distributed, since a
normal distribution is symmetric
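The skewness point can be made concrete with a small simulation (my own illustration, not part of the slides): sample skewness of an exponential "return-like" sample versus a normal one.

```python
import numpy as np

rng = np.random.default_rng(2)

# A skewed "return-like" sample vs. a symmetric normal one (illustration only)
skewed = rng.exponential(1.0, size=100_000)
normal = rng.normal(size=100_000)

def skewness(x):
    """Sample skewness: third central moment divided by the cubed std."""
    x = np.asarray(x)
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

print(skewness(skewed))  # exponential: population skewness is 2
print(skewness(normal))  # normal: population skewness is 0
```

A nonzero skewness is incompatible with normality, which is exactly why exact normal inference fails for such variables.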
• Note that the normality assumption is not needed to conclude that OLS is
BLUE
o We only used it for inference (hypothesis testing)

Central Limit Theorem
• Recall: The central limit theorem states that the standardized average of
any population with mean μ and variance σ² is asymptotically N(0,1), that is

Z = (Ȳ − μ) / (σ/√n) ∼ N(0,1) (asymptotically)

• Asymptotic normality means that P(Z < z) → Φ(z) as n → ∞ (where Φ(z)
denotes the standard normal CDF)
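The CLT statement above can be checked with a quick Monte Carlo sketch (my own illustration, not from the lecture): even for a heavily skewed population such as the exponential, the standardized sample mean is close to N(0,1) for moderate n.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1_000      # sample size per draw
reps = 5_000   # number of Monte Carlo replications

# Population: exponential(1), heavily skewed, with mean 1 and variance 1
mu, sigma = 1.0, 1.0

# Draw `reps` samples of size n and standardize each sample mean
ybar = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
z = (ybar - mu) / (sigma / np.sqrt(n))

# Despite the skewed population, Z behaves like a standard normal:
# mean near 0, standard deviation near 1
print(z.mean(), z.std())
```

The same experiment with any other finite-variance population gives the same limiting behavior, which is the content of the theorem.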
• Based on the central limit theorem, we can show that OLS estimators are
asymptotically normal. In particular, we have the important result on the
next page

Asymptotic Normality of OLS (1/2)
Facts: Under the Gauss-Markov assumptions, we have that

1. For each j = 1, ..., k (excluding the intercept, j = 0),

(β̂_j − β_j) / se(β̂_j) ∼ N(0,1) (asymptotically)

where se(β̂_j) = σ̂ / √(SST_j (1 − R_j²)) (the usual OLS standard error)

2. σ̂² = (1 / (n − k − 1)) Σ_{i=1}^{n} û_i² is a consistent estimator of σ²

3. √n (β̂_j − β_j) ∼ N(0, Avar(√n (β̂_j − β_j))) (asymptotically),

where Avar(√n (β̂_j − β_j)) is the asymptotic variance of √n (β̂_j − β_j)
(for the slope coefficients). We say that β̂_j is asymptotically normally
distributed

Remarks:
• Because the t distribution approaches the normal distribution for large df,
we can write

(β̂_j − β_j) / se(β̂_j) ∼ t_{n−k−1} ∼ N(0,1) (asymptotically)

• Note that while we no longer need to assume normality with a large sample,
we do still need homoscedasticity (For you: Why?)
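The remark above can be illustrated with a Monte Carlo sketch (my own, not from the slides): with skewed but homoscedastic mean-zero errors, the OLS t ratio for a slope coefficient is still approximately standard normal in large samples.

```python
import numpy as np

rng = np.random.default_rng(1)

n, reps = 500, 5_000
beta0, beta1 = 1.0, 2.0
tstats = []

for _ in range(reps):
    x = rng.normal(size=n)
    u = rng.exponential(1.0, size=n) - 1.0   # skewed, mean-zero, homoscedastic errors
    y = beta0 + beta1 * x + u

    # OLS via the design matrix [1, x]
    X = np.column_stack([np.ones(n), x])
    XtX_inv = np.linalg.inv(X.T @ X)
    bhat = XtX_inv @ X.T @ y

    resid = y - X @ bhat
    s2 = resid @ resid / (n - 2)             # sigma-hat^2 with df = n - k - 1
    se1 = np.sqrt(s2 * XtX_inv[1, 1])        # usual OLS standard error of beta1-hat

    tstats.append((bhat[1] - beta1) / se1)

t = np.asarray(tstats)
# Non-normal errors, yet the t ratio is close to N(0,1): mean near 0, std near 1
print(t.mean(), t.std())
```

If the errors were heteroscedastic, the usual standard error in `se1` would be inconsistent and the t ratio would no longer converge to N(0,1), which is why homoscedasticity is still needed.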
• Hypothesis testing: Asymptotic normality implies that . . .
o . . . the t statistic is approximately normal if the sample size is large
enough → we can conduct t-tests of single regression coefficients using
the normal distribution
o . . . the F statistic is approximately F distributed if the sample is large
enough → thus for testing exclusion restrictions or other multiple
hypotheses, nothing changes from what we have done before

OLS is Asymptotically Efficient
• Under the Gauss-Markov assumptions, the OLS estimators will have the
smallest asymptotic variances
• We...
This document was uploaded on 02/17/2014 for the course COURANT G63.2751.0 at NYU.