# ECON 103, Lecture 8: Inference with OLS (Maria Casanova)

Maria Casanova, February 4th (version 0)

Requirements for this lecture: Chapter 5 of Stock and Watson.

## 1. Distribution of OLS estimators

Consider the following linear regression model:

$$Y_i = \beta_0 + \beta_1 X_{1i} + \dots + \beta_k X_{ki} + \varepsilon_i, \qquad \varepsilon_i \mid X \sim N(0, \sigma^2)$$

This model maintains the least squares assumptions (assumptions 1 to 4 in Lecture 7) plus two additional ones:

- **Assumption 5:** the variance of each $\varepsilon_i$ is constant given $X$; that is, $\varepsilon_i$ is homoskedastic.
- **Assumption 6:** given $X$, $\varepsilon_i$ is normally distributed.

Under assumptions 1 to 6 we can derive the exact distribution of the OLS estimators. In particular, if $\varepsilon$ is normally distributed conditional on $X$, so is $Y$ (as it is a linear function of $\varepsilon$):

$$Y \mid X \sim N\big(E(Y \mid X),\ \mathrm{Var}(Y \mid X)\big)$$

The mean of $Y$ is equal to:

$$E(Y \mid X) = E(\beta_0 + \beta_1 X_1 + \dots + \beta_k X_k + \varepsilon \mid X) = \beta_0 + \beta_1 X_1 + \dots + \beta_k X_k$$

since $E(\varepsilon \mid X) = 0$ under the least squares assumptions. The variance of $Y$ is equal to:

$$\mathrm{Var}(Y \mid X) = \mathrm{Var}(\beta_0 + \beta_1 X_1 + \dots + \beta_k X_k + \varepsilon \mid X) = \mathrm{Var}(\varepsilon \mid X) = \sigma^2$$

Then:

$$Y \mid X \sim N(\beta_0 + \beta_1 X_1 + \dots + \beta_k X_k,\ \sigma^2)$$

Normality may be a bad assumption, for example for non-negative variables (e.g. wages, prices) or for variables that take on only a small number of values. Sometimes taking a nonlinear transformation of $Y$ (e.g. the natural logarithm) makes normality more plausible.

Normality is a convenient assumption because it implies that the OLS estimators are exactly normally distributed (since they are linear functions of $Y$).
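The exact normality of the OLS estimators can be checked by simulation. The sketch below (all parameter values are illustrative choices, not from the lecture) repeatedly draws normal errors for a fixed set of regressors, re-estimates the slope each time, and compares the mean and variance of the estimates to the theoretical values derived below.

```python
import numpy as np

# Monte Carlo check: with normal, homoskedastic errors, the OLS slope
# estimator is exactly normal with variance sigma^2 / sum((x - xbar)^2).
# beta0, beta1, sigma, and n below are illustrative assumptions.
rng = np.random.default_rng(0)
beta0, beta1, sigma, n = 1.0, 2.0, 1.5, 50
x = rng.uniform(0, 10, n)                 # regressors held fixed across draws
theoretical_var = sigma**2 / np.sum((x - x.mean())**2)

reps = 5000
slopes = np.empty(reps)
for r in range(reps):
    eps = rng.normal(0, sigma, n)         # Ass5 (constant variance) + Ass6 (normal)
    y = beta0 + beta1 * x + eps
    # closed-form OLS slope: sum((x - xbar)(y - ybar)) / sum((x - xbar)^2)
    slopes[r] = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)

print(slopes.mean())   # close to beta1 = 2.0 (unbiasedness)
print(slopes.var())    # close to theoretical_var
```

Because the regressors are held fixed across replications, the simulated sampling distribution isolates exactly the conditional-on-$X$ distribution discussed in the lecture.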
Therefore, in the regression with a single regressor:

$$\hat\beta_0 \sim N(\beta_0, \sigma^2_{\hat\beta_0}), \qquad \sigma^2_{\hat\beta_0} = \frac{\sum X_i^2}{n \sum (X_i - \bar X)^2}\,\sigma^2$$

$$\hat\beta_1 \sim N(\beta_1, \sigma^2_{\hat\beta_1}), \qquad \sigma^2_{\hat\beta_1} = \frac{\sigma^2}{\sum (X_i - \bar X)^2}$$

More generally:

$$\frac{\hat\beta_j - \beta_j}{\sigma_{\hat\beta_j}} \sim N(0, 1)$$

The (conditional) variance of $\hat\beta_j$ depends on the unknown parameter $\sigma^2$. In practice, we substitute it with its unbiased estimator:

$$\hat\sigma^2 = \frac{1}{n - k} \sum_{i=1}^{n} \hat\varepsilon_i^2$$

As a consequence of this substitution, the distribution of the standardized $\hat\beta_j$ is no longer standard normal but a $t$ distribution with $n - k$ degrees of freedom:

$$\frac{\hat\beta_j - \beta_j}{\mathrm{s.e.}(\hat\beta_j)} \sim t_{n-k}$$

The $t$ distribution converges to a normal distribution as the degrees of freedom grow.
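The $t$-based standardization above can be sketched numerically for a single-regressor model. The data-generating values below are illustrative assumptions; the degrees-of-freedom divisor is $n$ minus the number of estimated coefficients (two here: intercept and slope), matching the lecture's $n - k$ when $k$ counts all estimated coefficients.

```python
import numpy as np

# Sketch: estimate sigma^2 from residuals and form the t statistic for the
# slope. The parameter values below are illustrative assumptions.
rng = np.random.default_rng(1)
n, beta0, beta1, sigma = 40, 1.0, 0.5, 2.0
x = rng.uniform(0, 5, n)
y = beta0 + beta1 * x + rng.normal(0, sigma, n)

# OLS estimates (closed form for one regressor)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
b0 = y.mean() - b1 * x.mean()

# Unbiased estimate of sigma^2: residual sum of squares divided by the
# degrees of freedom, n minus the number of estimated coefficients (p = 2).
resid = y - (b0 + b1 * x)
p = 2
sigma2_hat = np.sum(resid**2) / (n - p)
se_b1 = np.sqrt(sigma2_hat / np.sum((x - x.mean())**2))

# t statistic for H0: beta1 = 0; under H0 it follows a t(n - p) distribution,
# so the null is rejected at the 5% level when |t| exceeds the t critical
# value (roughly 2 for moderate sample sizes).
t_stat = b1 / se_b1
print(t_stat)
```

A useful sanity check on any OLS fit is that the residuals sum to zero and are orthogonal to the regressor; both hold by construction of the normal equations.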