ECON 103, Lecture 8: Inference with OLS
Maria Casanova
April 23rd (version 2)
Requirements for this lecture: Chapter 5 of Stock and Watson.
1. Distribution of OLS estimators

Let's consider the following linear regression model:

$$Y_i = \beta_0 + \beta_1 X_{1i} + \dots + \beta_k X_{ki} + \varepsilon_i, \qquad \varepsilon_i \mid X \sim N(0, \sigma^2)$$

This model maintains the least squares assumptions (assumptions 1 to 4 in Lecture 7) plus two additional ones:

- Assumption 5: The variance of each $\varepsilon_i$ is constant given $X$; that is, $\varepsilon_i$ is homoskedastic.
- Assumption 6: Given $X$, $\varepsilon_i$ is normally distributed.
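To make the setup concrete, here is a minimal simulation sketch in Python (not from the lecture; the parameter values are hypothetical). It generates data satisfying assumptions 1 to 6 with a single regressor and recovers the coefficients by OLS via the normal equations.

```python
# Sketch: simulate data satisfying assumptions 1-6 (single regressor).
import numpy as np

rng = np.random.default_rng(0)
n = 500
beta0, beta1, sigma = 2.0, 0.5, 1.0   # hypothetical true parameters

X = rng.uniform(0, 10, size=n)
eps = rng.normal(0, sigma, size=n)    # eps | X ~ N(0, sigma^2): homoskedastic, normal
Y = beta0 + beta1 * X + eps

# OLS via least squares on the design matrix (constant column plus X)
Z = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(Z, Y, rcond=None)
print(beta_hat)                        # should be close to (2.0, 0.5)
```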
Under assumptions 1 to 6 we can derive the exact distribution of the OLS estimators.

In particular, if $\varepsilon$ is normally distributed conditional on $X$, so is $Y$ (as it is a linear function of $\varepsilon$):

$$Y \mid X \sim N\big(E(Y \mid X),\, \mathrm{Var}(Y \mid X)\big)$$

The mean of $Y$ is equal to:

$$\begin{aligned}
E(Y \mid X) &= E(\beta_0 + \beta_1 X_1 + \dots + \beta_k X_k + \varepsilon \mid X) \\
&= E(\beta_0 + \beta_1 X_1 + \dots + \beta_k X_k \mid X) + E(\varepsilon \mid X) \\
&= \beta_0 + \beta_1 X_1 + \dots + \beta_k X_k,
\end{aligned}$$

where the last equality uses $E(\varepsilon \mid X) = 0$.
The variance of $Y$ is equal to:

$$\begin{aligned}
\mathrm{Var}(Y \mid X) &= \mathrm{Var}(\beta_0 + \beta_1 X_1 + \dots + \beta_k X_k + \varepsilon \mid X) \\
&= \mathrm{Var}(\beta_0 + \beta_1 X_1 + \dots + \beta_k X_k \mid X) + \mathrm{Var}(\varepsilon \mid X) = \sigma^2,
\end{aligned}$$

where the first term vanishes because, conditional on $X$, $\beta_0 + \beta_1 X_1 + \dots + \beta_k X_k$ is a constant. Then:

$$Y \mid X \sim N(\beta_0 + \beta_1 X_1 + \dots + \beta_k X_k,\, \sigma^2)$$

Normality may be a bad assumption, for example for non-negative variables (e.g. wages, prices) or for variables that take on only a small number of values. Sometimes taking a nonlinear transformation of $Y$ (e.g. the natural logarithm) makes normality more plausible.
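A quick sketch of the log-transformation point, assuming (purely for illustration) that wages follow a lognormal distribution: the levels are non-negative and right-skewed, while their natural log is normal by construction.

```python
# Sketch: a log transform can make normality far more plausible.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
wages = rng.lognormal(mean=3.0, sigma=0.6, size=10_000)  # non-negative, skewed

print(skew(wages))           # clearly positive: right-skewed in levels
print(skew(np.log(wages)))   # near zero: approximately symmetric in logs
```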
Normality is a convenient assumption because it implies that the OLS estimators are exactly normally distributed (since they are linear functions of $Y$). Therefore, in the single-regressor case,

$$\hat\beta_0 \sim N\big(\beta_0,\, \sigma^2_{\hat\beta_0}\big), \qquad \sigma^2_{\hat\beta_0} = \frac{\sum X_i^2}{n \sum (X_i - \bar{X})^2}\,\sigma^2$$

$$\hat\beta_1 \sim N\big(\beta_1,\, \sigma^2_{\hat\beta_1}\big), \qquad \sigma^2_{\hat\beta_1} = \frac{1}{\sum (X_i - \bar{X})^2}\,\sigma^2$$

More generally:

$$\frac{\hat\beta_j - \beta_j}{\sigma_{\hat\beta_j}} \sim N(0, 1)$$
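A Monte Carlo sketch of this exact-normality result (illustrative parameter values, single regressor): holding $X$ fixed across replications, since the distribution is conditional on $X$, the slope estimator standardized by the $\sigma_{\hat\beta_1}$ formula above should have mean 0 and standard deviation 1.

```python
# Sketch: standardized OLS slope is exactly N(0,1) under assumptions 1-6.
import numpy as np

rng = np.random.default_rng(2)
n, reps = 50, 20_000
beta0, beta1, sigma = 1.0, 2.0, 1.5   # hypothetical true values

X = rng.uniform(0, 10, size=n)        # fixed design across replications
Sxx = np.sum((X - X.mean()) ** 2)
sd_b1 = np.sqrt(sigma**2 / Sxx)       # sigma_{beta1-hat} from the formula above

z = np.empty(reps)
for r in range(reps):
    Y = beta0 + beta1 * X + rng.normal(0, sigma, size=n)
    b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / Sxx   # OLS slope
    z[r] = (b1 - beta1) / sd_b1

print(z.mean(), z.std())              # should be close to 0 and 1
```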
The (conditional) variance of $\hat\beta_j$ depends on the unknown parameter $\sigma^2$. In practice, we substitute it with its unbiased estimator:

$$\hat\sigma^2 = \frac{1}{n - k} \sum_{i=1}^{n} \hat\varepsilon_i^2$$

As a consequence of this substitution, the distribution of the standardized $\hat\beta_j$ is no longer standard normal but a $t$ with $n - k$ degrees of freedom:

$$\frac{\hat\beta_j - \beta_j}{s.e.(\hat\beta_j)} \sim t_{n-k}$$

The $t$ distribution converges to a normal when $n$ is large.
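A sketch of this substitution step with simulated data: estimate $\sigma^2$ from the residuals and build $s.e.(\hat\beta_1)$ by hand. Here $k$ is taken to count all estimated coefficients including the intercept, which matches the residual degrees of freedom that statsmodels uses for its homoskedastic standard errors; the true values below are hypothetical.

```python
# Sketch: hand-built sigma^2-hat, standard error, and t-statistic,
# compared against statsmodels' homoskedastic standard error.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 30
X = rng.uniform(0, 10, size=n)
Y = 1.0 + 2.0 * X + rng.normal(0, 1.5, size=n)   # true beta1 = 2.0

Z = sm.add_constant(X)
fit = sm.OLS(Y, Z).fit()

resid = Y - Z @ fit.params
k = Z.shape[1]                               # all estimated coefficients
sigma2_hat = np.sum(resid**2) / (n - k)      # unbiased estimator of sigma^2
Sxx = np.sum((X - X.mean()) ** 2)
se_b1 = np.sqrt(sigma2_hat / Sxx)            # s.e.(beta1-hat)

t_stat = (fit.params[1] - 2.0) / se_b1       # ~ t_{n-k} under H0: beta1 = 2
print(se_b1, fit.bse[1])                     # the two standard errors agree
print(t_stat)
```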