Multiple Regression Analysis: Inference
PAM 3100, Professor Michael Lovenheim, Fall 2010
Overview

Thus far, we have seen that under the Gauss-Markov assumptions, OLS is unbiased and we can solve for the variance of the OLS estimators. We care not only about the expected value of our estimators but also about their precision, so that we can make statistical inference about the population.

For example, say we estimate an elasticity of cigarette sales of -0.150. With what certainty can we rule out that the true elasticity is 0 or -1?

We therefore need the "sampling distribution" of the OLS estimators. Once we have the distribution of the estimators, we can see how likely it would be to pull our estimates out of a distribution with any given true mean.
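To make the elasticity example concrete, here is a minimal sketch of the distance, in standard-error units, between the estimate and each hypothesized true value. The standard error of 0.060 is made up purely for illustration; it is not from the slides.

```python
# Sketch: how many standard errors separate a hypothetical estimate
# from candidate "true" values? (The standard error is made up.)
beta_hat = -0.150   # estimated elasticity from the slide example
se = 0.060          # hypothetical standard error, for illustration only

for h0 in (0.0, -1.0):
    t = (beta_hat - h0) / se
    print(f"H0: beta = {h0:5.2f}  ->  t = {t:6.2f}")

# With se = 0.06, the estimate sits 2.5 standard errors from 0 but
# roughly 14 from -1, so -1 is far easier to rule out than 0.
```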
Normality Assumption

Recall that there are 5 Gauss-Markov assumptions. To derive the distribution of our estimators, we need a sixth assumption, often called the normality assumption:

6) The population error $U$ is independent of the explanatory variables $x_1, x_2, \ldots, x_k$ and is normally distributed with zero mean and variance $\sigma^2$: $U \sim N(0, \sigma^2)$.

Another way to write this assumption is that $Y \mid X \sim N(\beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k, \sigma^2)$.

Note that Assumption 6 implies that $E[U \mid X] = E[U] = 0$ and $\mathrm{Var}(U \mid X) = \sigma^2$.

We call Assumptions 1-6 the Classical Linear Model (CLM) assumptions.
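A minimal simulation sketch of the data-generating process Assumption 6 describes; all parameter values here are arbitrary illustrative choices, not from the slides.

```python
import numpy as np

# Sketch: generate data satisfying the CLM assumptions, with the
# error u drawn independently of x and normally distributed.
rng = np.random.default_rng(0)
n = 1_000
beta0, beta1, sigma = 1.0, 0.5, 2.0   # arbitrary illustrative values

x = rng.uniform(0, 10, size=n)        # x can follow any distribution
u = rng.normal(0.0, sigma, size=n)    # u ~ N(0, sigma^2), independent of x
y = beta0 + beta1 * x + u             # so y | x ~ N(beta0 + beta1*x, sigma^2)
```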
Normality Assumption

How strong is this assumption? In reality, there are many variables that are not normally distributed. Often, logging variables makes them approximately normal, e.g., wages and prices; we call such variables log-normal.

It turns out that as long as the sample size is large enough, non-normality of the dependent variable is not a problem: this is what the Central Limit Theorem buys us.

A bigger problem is that non-normal variables often suggest the linear functional form is inappropriate. This violates Assumption 1 of the CLM, which can make our estimates biased. The solutions to this problem are mostly beyond this course.
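A rough Monte Carlo sketch of the Central Limit Theorem point: even with strongly skewed (log-normal, demeaned) errors, the sampling distribution of the OLS slope looks close to normal at a moderate sample size. All values are illustrative.

```python
import numpy as np

# Sketch: the OLS slope is approximately normal even with skewed errors.
rng = np.random.default_rng(1)
n, reps, beta1 = 200, 5_000, 0.5
x = rng.uniform(0, 10, size=n)

slopes = np.empty(reps)
for r in range(reps):
    u = rng.lognormal(0.0, 1.0, size=n) - np.exp(0.5)  # skewed, mean zero
    y = beta1 * x + u
    slopes[r] = np.polyfit(x, y, 1)[0]   # OLS slope via degree-1 fit

# A histogram of `slopes` is close to bell-shaped and centered at 0.5.
print(slopes.mean(), slopes.std())
```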
Normality of OLS Estimators

Theorem: Under the CLM assumptions, conditional on the sample values of the independent variables, $\hat{\beta}_j \sim N(\beta_j, \mathrm{Var}(\hat{\beta}_j))$. Therefore,

$\frac{\hat{\beta}_j - \beta_j}{\mathrm{sd}(\hat{\beta}_j)} \sim N(0, 1)$.

The idea of the proof is that $\hat{\beta}_j$ equals $\beta_j$ plus a linear combination of normally distributed random errors with mean zero. A linear combination of normal random variables is also normal, and we know from the previous section what the mean and variance of $\hat{\beta}_j$ are.
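A hedged Monte Carlo sketch of the theorem for the simple-regression case, using the standard formula $\mathrm{sd}(\hat{\beta}_1) = \sigma / \sqrt{\sum_i (x_i - \bar{x})^2}$; the parameter values are arbitrary.

```python
import numpy as np
from scipy import stats

# Sketch: under CLM, (beta1_hat - beta1) / sd(beta1_hat) ~ N(0, 1).
rng = np.random.default_rng(2)
n, reps = 50, 10_000
beta1, sigma = 0.5, 2.0
x = rng.uniform(0, 10, size=n)               # held fixed across replications
sd_beta1 = sigma / np.sqrt(((x - x.mean()) ** 2).sum())

z = np.empty(reps)
for r in range(reps):
    y = beta1 * x + rng.normal(0.0, sigma, size=n)
    b1 = np.polyfit(x, y, 1)[0]              # OLS slope estimate
    z[r] = (b1 - beta1) / sd_beta1

# z should be indistinguishable from standard-normal draws.
print(stats.kstest(z, "norm"))               # expect a large p-value
```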
A Review of Hypothesis Testing

Hypothesis testing begins by stating a null hypothesis about a parameter, denoted $H_0$.
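As a minimal illustrative sketch of where this leads, here is a test of the null hypothesis $H_0: \beta_1 = 0$ on simulated data, reading the t-statistic and p-value off a standard OLS fit; the data-generating values are made up.

```python
import numpy as np
import statsmodels.api as sm

# Sketch: testing H0: beta1 = 0 on simulated data with OLS.
rng = np.random.default_rng(3)
n = 100
x = rng.uniform(0, 10, size=n)
y = 1.0 + 0.5 * x + rng.normal(0.0, 2.0, size=n)

X = sm.add_constant(x)                  # adds the intercept column
res = sm.OLS(y, X).fit()
print(res.tvalues[1], res.pvalues[1])   # t-stat and p-value for H0: beta1 = 0
```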