CHAPTER 3
TEACHING NOTES
For undergraduates, I do not work through most of the derivations in this chapter, at least not in detail. Rather, I focus on interpreting the assumptions, which mostly concern the population. Other than random sampling, the only assumption that involves more than population considerations is the assumption about no perfect collinearity, where the possibility of perfect collinearity in the sample (even if it does not occur in the population) should be touched on. The more important issue is perfect collinearity in the population, but this is fairly easy to dispense with via examples. These come from my experiences with the kinds of model specification issues that beginners have trouble with.
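One sample-based version of the problem can be shown numerically. The sketch below (simulated data; the variable names and units are purely illustrative) includes the same income variable twice in different units, so the regressor matrix loses full column rank and OLS is not defined:

```python
import numpy as np

# Sketch of perfect collinearity in the sample: including an explanatory
# variable twice in different units (income in dollars and income in
# thousands of dollars) makes the columns of X linearly dependent,
# so X'X is singular and the OLS estimator does not exist.
rng = np.random.default_rng(0)
n = 100
inc = rng.uniform(10_000, 90_000, size=n)   # income in dollars (simulated)
X = np.column_stack([np.ones(n), inc, inc / 1000.0])  # third column = second / 1000

rank = np.linalg.matrix_rank(X)
print(rank)   # 2, not 3: the no-perfect-collinearity assumption fails
```

The fix, of course, is simply to drop one of the two redundant columns; the model contains the same information either way.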
The comparison of simple and multiple regression estimates – based on the particular sample at hand, as opposed to their statistical properties – usually makes a strong impression. Sometimes I do not bother with the “partialling out” interpretation of multiple regression.
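For instructors who do cover it, the partialling-out interpretation is easy to verify numerically. A minimal sketch with simulated data (the coefficients and seed are arbitrary): regress x1 on the other regressors, keep the residuals, and regress y on those residuals; the slope matches the multiple regression coefficient on x1 exactly.

```python
import numpy as np

# "Partialling out" sketch: the multiple regression coefficient on x1
# equals the slope from regressing y on the residuals of x1 after
# removing the part explained by x2. All data are simulated.
rng = np.random.default_rng(0)
n = 500
x2 = rng.normal(size=n)
x1 = 0.5 * x2 + rng.normal(size=n)          # x1 correlated with x2
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]   # multiple regression

# Partial x2 (and the intercept) out of x1, then regress y on the residuals.
Z = np.column_stack([np.ones(n), x2])
r1 = x1 - Z @ np.linalg.lstsq(Z, x1, rcond=None)[0]
beta_partial = (r1 @ y) / (r1 @ r1)

print(beta_full[1], beta_partial)   # identical up to rounding
```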
As far as statistical properties go, notice how I treat the problem of including an irrelevant variable: no separate derivation is needed, as the result follows from Theorem 3.1.
I do like to derive the omitted variable bias in the simple case. This is not much more difficult than showing unbiasedness of OLS in the simple regression case under the first four Gauss-Markov assumptions. It is important to get the students thinking about this problem early on, and before too many additional (unnecessary) assumptions have been introduced.
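The algebra behind the derivation can also be checked in a sample. In the sketch below (simulated data; all coefficients are illustrative), the short-regression slope satisfies the exact in-sample identity: it equals the long-regression slope on x1 plus the long-regression slope on x2 times the slope from regressing x2 on x1.

```python
import numpy as np

# Omitted variable bias, simple case. The "long" regression includes
# x1 and x2; the "short" regression omits x2. In the sample,
#   tilde_b1 = hat_b1 + hat_b2 * delta1,
# where delta1 is the slope from regressing x2 on x1.
# All data and true coefficients below are simulated for illustration.
rng = np.random.default_rng(1)
n = 1000
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(size=n)            # x2 correlated with x1
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])
b_hat = np.linalg.lstsq(X, y, rcond=None)[0]  # long regression

cov = lambda a, b: np.cov(a, b)[0, 1]
b1_tilde = cov(x1, y) / np.var(x1, ddof=1)    # short regression slope
delta1 = cov(x1, x2) / np.var(x1, ddof=1)     # slope of x2 on x1

print(b1_tilde, b_hat[1] + b_hat[2] * delta1)  # identical up to rounding
```

Because x2 has a positive partial effect and is positively correlated with x1, the short slope is biased upward here, which is exactly the sign the usual two-by-two bias table predicts.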
I have intentionally kept the discussion of multicollinearity to a minimum. This partly indicates my bias, but it also reflects reality. It is, of course, very important for students to understand the potential consequences of having highly correlated independent variables. But this is often beyond our control, except that we can ask less of our multiple regression analysis. If two or more explanatory variables are highly correlated in the sample, we should not expect to precisely estimate their ceteris paribus effects in the population.
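The loss of precision is easy to demonstrate by simulation. The sketch below (simulated data; the sample size, correlation, and replication count are all arbitrary choices) compares the Monte Carlo standard deviation of a slope estimate with uncorrelated versus highly correlated regressors; the inflation is roughly the square root of 1/(1 - R_j^2), the usual variance inflation factor.

```python
import numpy as np

# Monte Carlo sketch of imperfect multicollinearity: high correlation
# between x1 and x2 inflates the sampling variance of the slope on x1,
# roughly by the factor 1/(1 - rho^2). Data are simulated.
rng = np.random.default_rng(2)

def slope_sd(rho, n=500, reps=300):
    """Monte Carlo standard deviation of the OLS slope on x1
    when corr(x1, x2) = rho in the population."""
    estimates = []
    for _ in range(reps):
        x1 = rng.normal(size=n)
        x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
        y = 1.0 + x1 + x2 + rng.normal(size=n)
        X = np.column_stack([np.ones(n), x1, x2])
        estimates.append(np.linalg.lstsq(X, y, rcond=None)[0][1])
    return np.std(estimates)

print(slope_sd(0.0), slope_sd(0.95))  # the second is several times larger
```

Note that nothing here violates any Gauss-Markov assumption: the estimator is still unbiased in both cases; it is simply less precise when the regressors are highly correlated.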
I find extensive treatments of multicollinearity, where one “tests” or somehow “solves” the multicollinearity problem, to be misleading, at best. Even the organization of some texts gives the impression that imperfect multicollinearity is somehow a violation of the Gauss-Markov assumptions: they include multicollinearity in a chapter or part of the book devoted to “violation of the basic assumptions,” or something like that. I have noticed that master’s students who have had some undergraduate econometrics are often confused on the multicollinearity issue. It is very important that students not confuse multicollinearity among the included explanatory variables in a regression model with the bias caused by omitting an important variable.