Lectures 35-37

Lecture 35: Multicollinearity

We are now going to look at the second classical regression assumption, which concerns multicollinearity. Recall that multicollinearity exists when one independent variable can be written as a linear function of the remaining independent variables:

$$X_{it} = B_0 + B_1 X_{1t} + B_2 X_{2t} + \dots + B_{i-1} X_{i-1,t} + B_{i+1} X_{i+1,t} + \dots + B_k X_{kt}.$$

Note that we have not included $X_{it}$ itself on the right-hand side of the equation. If we had, we would have perfect correlation, since a variable is always perfectly correlated with itself! A variable having such a high correlation with the dependent variable is known as a dominant variable. Dominant variables are examples of tautologies, i.e., things that are definitionally true. We learn nothing by including them, hence we do not include them!

Note that it is not necessary for every coefficient in the above equation to be non-zero; it is sufficient that one or more be non-zero. When the equation above holds exactly, we say that we have perfect multicollinearity. In that case all of one independent variable can be explained by the remaining independent variables, and we will not be able to estimate the regression equation: at a certain step in the estimation the computer would be asked to divide by zero.

You can see this most clearly in the case of two independent variables. Suppose that

$$X_{2t} = B_0 + B_1 X_{1t}.$$

Now recall the expression for computing the standard error of the estimated coefficient:

$$SE(\hat{B}_1) = \sqrt{\frac{S^2}{\left(\sum_t (X_{1t} - \bar{X}_1)^2\right)\left(1 - r_{12}^2\right)}}.$$

As the two independent variables become more and more correlated, the correlation coefficient $r_{12}$ gets closer and closer to 1. If there is perfect correlation, $r_{12} = 1$, so $1 - r_{12}^2 = 0$ and we have a zero in the denominator, which we know is not permissible.

The solution for perfect multicollinearity is simple. Since one of the independent variables is completely explained by the remaining independent variables, nothing is lost if we drop it. And that is what we do!

Imperfect multicollinearity. The real problem arises when we have some multicollinearity, but not perfect multicollinearity. As we will see, the closer we get to perfect multicollinearity, the greater the problem becomes. To say that we have some multicollinearity is to say that some portion of one independent variable can be explained by the remaining independent variables. Since only some portion is explained, some portion is not. We can write this as:

$$X_{it} = B_0 + B_1 X_{1t} + \dots + B_{i-1} X_{i-1,t} + B_{i+1} X_{i+1,t} + \dots + B_k X_{kt} + \varepsilon_t.$$

Note the inclusion of the stochastic term $\varepsilon_t$ to signify that not all of the independent variable is explained by the remaining independent variables. Note again that it is not necessary that all of the coefficients be non-zero.
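The notes contain no code, but a small numerical sketch may make the division-by-zero point concrete. The following Python script is an illustration added here, not part of the original lecture; the sample size, seed, and coefficient values are arbitrary choices. It simulates two regressors whose correlation we control, fits the regression by ordinary least squares, and reports $SE(\hat{B}_1)$: the standard error grows as $r_{12}$ approaches 1, and at $r_{12} = 1$ the coefficients cannot be estimated at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # sample size (arbitrary choice for the demo)

for rho in [0.0, 0.5, 0.9, 0.99, 1.0]:
    x1 = rng.normal(size=n)
    # Build x2 with correlation rho against x1; rho = 1.0 makes x2 an
    # exact linear function of x1, i.e., perfect multicollinearity.
    x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
    y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

    X = np.column_stack([np.ones(n), x1, x2])
    XtX = X.T @ X
    if np.linalg.matrix_rank(XtX) < XtX.shape[0]:
        # Perfect multicollinearity: X'X is singular, and solving the
        # normal equations would require the forbidden division by zero.
        print(f"rho={rho:4.2f}: X'X is singular, regression cannot be estimated")
        continue

    b = np.linalg.solve(XtX, X.T @ y)      # OLS coefficients
    resid = y - X @ b
    s2 = resid @ resid / (n - 3)           # S^2: estimated error variance
    # With two regressors, s2 * inv(X'X)[1,1] equals the squared standard
    # error above: S^2 / (sum((X1 - mean(X1))^2) * (1 - r12^2)).
    se_b1 = np.sqrt(s2 * np.linalg.inv(XtX)[1, 1])
    r12 = np.corrcoef(x1, x2)[0, 1]
    print(f"rho={rho:4.2f}: sample r12={r12:6.3f}, SE(B1)={se_b1:.3f}")
```

Running it shows $SE(\hat{B}_1)$ rising sharply between rho = 0.9 and rho = 0.99, and the rho = 1.0 case reproduces the perfect-multicollinearity failure described above.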
What problems are caused by imperfect multicollinearity?
