Lecture Notes 5: Multicollinearity and Heteroskedasticity

Multicollinearity

Multicollinearity refers to the situation in which one regressor is a perfect (or nearly perfect) linear function of the other regressors.

Example: For the regression model

$Y_i = \beta_1 + \beta_2 X_{2i} + \beta_3 X_{3i} + \beta_4 X_{4i} + u_i,$

the three explanators $X_2$, $X_3$, and $X_4$ are perfectly collinear if

$\lambda_2 X_{2i} + \lambda_3 X_{3i} + \lambda_4 X_{4i} = 0$ for all $i$,

where the constants $\lambda_2$, $\lambda_3$, $\lambda_4$ are not all zero.

Implications: Under perfect collinearity the regression coefficients are indeterminate and their standard errors are infinite.

Example: For the model $Y_i = \beta_1 + \beta_2 X_{2i} + \beta_3 X_{3i} + u_i$, the estimated version in deviation form is

$\hat{y}_i = \hat{\beta}_2 x_{2i} + \hat{\beta}_3 x_{3i},$

where $y_i = Y_i - \bar{Y}$, $x_{2i} = X_{2i} - \bar{X}_2$, and $x_{3i} = X_{3i} - \bar{X}_3$. The OLS estimator of $\beta_2$ is

$\hat{\beta}_2 = \frac{(\sum y_i x_{2i})(\sum x_{3i}^2) - (\sum y_i x_{3i})(\sum x_{2i} x_{3i})}{(\sum x_{2i}^2)(\sum x_{3i}^2) - (\sum x_{2i} x_{3i})^2}.$

If $x_{3i} = \lambda x_{2i}$ for some constant $\lambda$, both the numerator and the denominator reduce to zero, so $\hat{\beta}_2 = 0/0$ and, by the same argument, $\hat{\beta}_3 = 0/0$: the estimates are indeterminate.

Example: $\text{Consumption}_i = \beta_1 + \beta_2\,\text{Income}_i + \beta_3\,\text{Wealth}_i + u_i$

Problem: Here Income and Wealth are highly correlated, since wealthier people generally tend to have higher incomes.

Solution: We need a sufficient number of sample observations of wealthy individuals with low incomes and of high-income individuals with little wealth.

Practical Consequences:
1. OLS estimators have large variances and covariances, making precise estimation difficult.
2. Confidence intervals are wider, so the null hypothesis that a coefficient is zero is accepted more readily.
3. The t-ratios of one or more coefficients tend to be insignificant.
4. $R^2$ can nevertheless be high.

Consequences 1-3 are illustrated numerically in the first sketch following the Detection list.

Detection:
1. A high $R^2$ but few significant t-ratios.
2. High pairwise correlations among regressors (say, in excess of 0.8).
3. The Variance Inflation Factor (VIF). If VIF = 1, there is no collinearity. In a multiple regression model with k regressors one can calculate k different VIFs, one for each regressor $X_j$, by running an auxiliary OLS regression of $X_j$ on all the other explanatory variables in the original regression. With $R_j^2$ the coefficient of determination from that auxiliary regression, $\text{VIF}_j = 1/(1 - R_j^2)$; a VIF calculation is sketched below.
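To make consequences 1-3 concrete, here is a minimal simulation sketch of the Consumption-Income-Wealth example. All numbers (sample size, coefficients, noise levels) are illustrative assumptions, not figures from the notes. With Wealth an exact multiple of Income, $X'X$ is singular and OLS breaks down; with Wealth only approximately proportional to Income, OLS runs but the slope standard errors are inflated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
income = rng.normal(50.0, 10.0, n)

# Perfect collinearity: Wealth is an exact linear function of Income.
# The design matrix is rank deficient, so (X'X) cannot be inverted and
# the OLS coefficients are indeterminate.
wealth = 10.0 * income
X = np.column_stack([np.ones(n), income, wealth])
print("rank:", np.linalg.matrix_rank(X))   # 2, fewer than the 3 columns
# np.linalg.inv(X.T @ X) here raises LinAlgError: Singular matrix

# Near-perfect collinearity: Wealth = 10*Income plus a little noise.
# OLS is computable, but the slope standard errors are badly inflated.
wealth = 10.0 * income + rng.normal(0.0, 1.0, n)
cons = 5.0 + 0.6 * income + 0.01 * wealth + rng.normal(0.0, 2.0, n)
X = np.column_stack([np.ones(n), income, wealth])
beta = np.linalg.lstsq(X, cons, rcond=None)[0]
resid = cons - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])           # error-variance estimate
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
print("coefficients:", beta)   # erratic slope estimates
print("std. errors: ", se)     # large relative to the slopes
```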
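And here is a sketch of detection method 3: VIFs computed by the auxiliary-regression route described above, i.e. $\text{VIF}_j = 1/(1 - R_j^2)$. The simulated regressors are again an illustrative assumption; with real data, pass your own regressor matrix to vif().

```python
import numpy as np

# Illustrative simulated regressors (an assumption, not data from the
# notes): Wealth tracks Income almost exactly.
rng = np.random.default_rng(1)
income = rng.normal(50.0, 10.0, 200)
wealth = 10.0 * income + rng.normal(0.0, 1.0, 200)

def vif(regressors):
    """VIF_j = 1 / (1 - R_j^2), where R_j^2 is the R-squared from the
    auxiliary regression of column j on all remaining columns plus an
    intercept, as described in Detection point 3."""
    n, k = regressors.shape
    vifs = []
    for j in range(k):
        y = regressors[:, j]
        others = np.delete(regressors, j, axis=1)
        X = np.column_stack([np.ones(n), others])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        vifs.append(1.0 / (1.0 - r2))
    return vifs

print(vif(np.column_stack([income, wealth])))   # both VIFs far above 1
```

A common rule of thumb treats a VIF above roughly 10 as a sign of serious collinearity.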