E(β̂_j) = β_j,  j = 0, 1, ..., k

(For you: Prove this.)

Technical remark: Only MLR.1-MLR.4 are required to show this

VER. 9/25/2012. © P. KOLM

Main Results for the Multivariate Case: The Sampling Variance

The sampling variances of the estimators are given by

Var(β̂_j) = σ² / [SST_j (1 − R_j²)],  j = 1, ..., k

where
SST_j = Σ_{i=1}^n (x_ij − x̄_j)²

R_j² is the R-squared from regressing x_j on all other independent variables (all other x's)
We estimate σ² by

σ̂² = SSR / (n − k − 1) = (Σ_{i=1}^n û_i²) / (n − k − 1)

o Note, here df denotes the degrees of freedom, that is

df = #observations − #(estimated parameters) = n − (k + 1) = n − k − 1
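The variance formula above can be checked numerically. A minimal NumPy sketch (the data, coefficients, and variable names below are illustrative, not from the lecture): it computes SST_j, R_j², and σ̂² by hand and verifies that σ̂² / [SST_j (1 − R_j²)] matches the usual matrix expression σ̂² [(X'X)⁻¹]_jj.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: y = 1 + 2*x1 + 3*x2 + u, homoskedastic errors (MLR.1-MLR.5)
n, sigma = 500, 1.5
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)           # x2 correlated with x1, so R_1^2 > 0
y = 1.0 + 2.0 * x1 + 3.0 * x2 + sigma * rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])    # intercept + k = 2 regressors
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# sigma^2 estimate: SSR / (n - k - 1)
u_hat = y - X @ beta_hat
k = X.shape[1] - 1
sigma2_hat = (u_hat @ u_hat) / (n - k - 1)

# Var(beta_hat_1) via the slide's formula: sigma^2 / (SST_1 * (1 - R_1^2))
SST1 = np.sum((x1 - x1.mean()) ** 2)
# R_1^2: R-squared from regressing x1 on the other regressors (here just x2)
Z = np.column_stack([np.ones(n), x2])
g, *_ = np.linalg.lstsq(Z, x1, rcond=None)
R1sq = 1 - np.sum((x1 - Z @ g) ** 2) / SST1
var_b1 = sigma2_hat / (SST1 * (1 - R1sq))

# Cross-check against the matrix formula sigma^2 * [(X'X)^-1]_jj
var_matrix = sigma2_hat * np.linalg.inv(X.T @ X)[1, 1]
assert np.isclose(var_b1, var_matrix)
```

The agreement is exact (up to floating point): by the partialling-out (Frisch-Waugh) algebra, [(X'X)⁻¹]_jj = 1 / [SST_j (1 − R_j²)].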
Graph: The Sampling Variance of β̂₁ as a Function of R₁²

Some Important Terminology
Standard deviation of β̂_j:

sd(β̂_j) = √Var(β̂_j) = σ / √[SST_j (1 − R_j²)]

Standard error of β̂_j:

se(β̂_j) = σ̂ / √[SST_j (1 − R_j²)]

where

σ̂ = √σ̂² = √[SSR / (n − k − 1)]

Note that σ is not known and therefore needs to be estimated (i.e., replaced by σ̂)

Therefore, it is the standard error that we use in hypothesis testing

Remarks
Var(β̂_j) = σ² / [SST_j (1 − R_j²)]

1. A larger error variance, σ², implies a larger variance for the OLS estimators
2. A larger total sample variation, SST_j, implies a smaller variance for the OLS estimators
3. A larger R_j² implies a larger variance for the estimators (e.g. multicollinearity)
4. Technical remark: The Gauss-Markov assumptions (MLR.1-MLR.5) are required to derive the sampling variance

Main Results for the Multivariate Case: The Gauss-Markov Theorem

Given the Gauss-Markov assumptions (MLR.1-MLR.5) it can be shown that OLS
is the best linear unbiased estimator (BLUE)
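A quick Monte Carlo sketch of what "best" means here: it compares the OLS slope with a second linear unbiased estimator (a simple grouping estimator that runs a line through the two half-sample means, chosen purely for illustration). Both are unbiased, but OLS has the smaller sampling variance, as Gauss-Markov predicts. All numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed design, simple regression y = b0 + b1*x + u (b0 = 1, b1 = 2, illustrative)
n, b0, b1 = 200, 1.0, 2.0
x = rng.uniform(0, 10, size=n)
w = (x - x.mean()) / np.sum((x - x.mean()) ** 2)    # OLS weights: b1_ols = w @ y

# A different linear unbiased estimator: slope through the two half-sample means
lo, hi = x < np.median(x), x >= np.median(x)
w_alt = np.where(hi, 1 / hi.sum(), -1 / lo.sum()) / (x[hi].mean() - x[lo].mean())

ols_draws, alt_draws = [], []
for _ in range(5000):
    y = b0 + b1 * x + rng.normal(size=n)             # fresh errors each replication
    ols_draws.append(w @ y)
    alt_draws.append(w_alt @ y)

# Both estimators center on the true slope, but OLS has the smaller variance
print("OLS:      mean %.3f, var %.6f" % (np.mean(ols_draws), np.var(ols_draws)))
print("grouping: mean %.3f, var %.6f" % (np.mean(alt_draws), np.var(alt_draws)))
assert np.var(ols_draws) < np.var(alt_draws)
```

Note that the comparison conditions on a fixed x; both sets of weights depend only on x, which is what "linear in y" means in the theorem.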
What does this mean?

OLS is guaranteed to be optimal amongst all linear estimators (under the Gauss-Markov assumptions)

"Best" or "optimal" = lowest variance

"Linear" = the class of linear estimators

(For you: Why is this important? Do we care?)

Main Results for the Multivariate Case: "OLS Under CLM is MVUE"

Given the classical linear model assumptions (MLR.1-MLR.6) it can be shown that OLS is not only BLUE, but also the minimum variance unbiased estimator (MVUE)

What does this mean?

Under the CLM assumptions, OLS has the smallest variance amongst all unbiased estimators

This holds for all unbiased estimators, not just the linear ones!

Goodness-of-Fit

R² can of course also be used in the multiple regression context: the proportion of the variation in y explained by the independent x-variables

R² = SSE / SST = 1 − SSR / SST,  0 ≤ R² ≤ 1

Remarks:

R² can never decrease when another independent variable is added to a regression, and usually will increase

o This is because SSR is non-increasing in k

Because R² will increase with the number of independent variables, it is not a good way to compare models

Adjusted R-Squared

Adjusted R-Squared is defined by
R̄² ≡ 1 − [SSR / (n − k − 1)] / [SST / (n − 1)]
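A small NumPy sketch of the remarks above (the data and names are illustrative): adding even a pure-noise regressor never lowers R², while adjusted R² penalizes the wasted degree of freedom.

```python
import numpy as np

rng = np.random.default_rng(2)

def r2_stats(y, X):
    """Return (R^2, adjusted R^2); X is assumed to include a constant column."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ssr = np.sum((y - X @ beta) ** 2)          # residual sum of squares
    sst = np.sum((y - y.mean()) ** 2)          # total sum of squares
    n, k = X.shape[0], X.shape[1] - 1          # k regressors besides the constant
    r2 = 1 - ssr / sst
    r2_adj = 1 - (ssr / (n - k - 1)) / (sst / (n - 1))
    return r2, r2_adj

n = 100
x1 = rng.normal(size=n)
x_noise = rng.normal(size=n)                   # pure noise, unrelated to y
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

X_small = np.column_stack([np.ones(n), x1])
X_big = np.column_stack([np.ones(n), x1, x_noise])

r2_s, adj_s = r2_stats(y, X_small)
r2_b, adj_b = r2_stats(y, X_big)

# R^2 can never decrease when a regressor is added (SSR is non-increasing in k)
assert r2_b >= r2_s
print("small model: R^2 = %.4f, adj R^2 = %.4f" % (r2_s, adj_s))
print("big model:   R^2 = %.4f, adj R^2 = %.4f" % (r2_b, adj_b))
```

This is why R̄², not R², is the better of the two for comparing models with different numbers of regressors.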
This document was uploaded on 02/17/2014 for the course COURANT G63.2751.0 at NYU, Fall '14.