ECON 103, Lecture 8: Inference with OLS
Maria Casanova
February 4th (version 0)

Requirements for this lecture: Chapter 5 of Stock and Watson.

1. Distribution of OLS estimators

Let's consider the following linear regression model:

    Y_i = \beta_0 + \beta_1 X_{1i} + \dots + \beta_k X_{ki} + \varepsilon_i,   \varepsilon_i \mid X \sim N(0, \sigma^2)

This model maintains the least squares assumptions (assumptions 1 to 4 in Lecture 7) plus two additional ones:

Ass5: The variance of each \varepsilon_i is constant given X; that is, \varepsilon_i is homoskedastic.
Ass6: Given X, \varepsilon_i is normally distributed.

Under assumptions 1 to 6 we can derive the exact distribution of the OLS estimators. In particular, if \varepsilon is normally distributed conditional on X, so is Y (as it is a linear function of \varepsilon):

    Y \mid X \sim N( E(Y \mid X), Var(Y \mid X) )

The mean of Y is equal to:

    E(Y \mid X) = E(\beta_0 + \beta_1 X_1 + \dots + \beta_k X_k + \varepsilon \mid X)
                = E(\beta_0 + \beta_1 X_1 + \dots + \beta_k X_k \mid X) + E(\varepsilon \mid X)
                = \beta_0 + \beta_1 X_1 + \dots + \beta_k X_k

The variance of Y is equal to:

    Var(Y \mid X) = Var(\beta_0 + \beta_1 X_1 + \dots + \beta_k X_k + \varepsilon \mid X)
                  = Var(\beta_0 + \beta_1 X_1 + \dots + \beta_k X_k \mid X) + Var(\varepsilon \mid X)
                  = \sigma^2

Then:

    Y \sim N(\beta_0 + \beta_1 X_1 + \dots + \beta_k X_k, \sigma^2)

Normality may be a bad assumption, for example for non-negative variables (e.g. wages, prices) or for variables that take on only a small number of values. Sometimes taking a nonlinear transformation of Y (e.g. taking the natural logarithm) makes normality more plausible.

Normality is a convenient assumption because it implies that the OLS estimators are exactly normally distributed (since they are linear functions of Y). Therefore,

    \hat\beta_0 \sim N(\beta_0, \sigma^2_{\hat\beta_0}),   \sigma^2_{\hat\beta_0} = \frac{\sum X_i^2}{n \sum (X_i - \bar X)^2} \sigma^2

    \hat\beta_1 \sim N(\beta_1, \sigma^2_{\hat\beta_1}),   \sigma^2_{\hat\beta_1} = \frac{1}{\sum (X_i - \bar X)^2} \sigma^2

More generally:

    \frac{\hat\beta_j - \beta_j}{\sigma_{\hat\beta_j}} \sim N(0, 1)
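As a back-of-the-envelope check (not part of the lecture), a short Monte Carlo simulation can illustrate the result above for the single-regressor case: with normal errors, the OLS slope is exactly normal with variance \sigma^2 / \sum (X_i - \bar X)^2. All names and parameter values below are illustrative assumptions.

```python
import numpy as np

# Illustrative simulation: with normal, homoskedastic errors, the OLS
# slope estimator is normal with variance sigma^2 / sum((X_i - Xbar)^2).
# We verify the mean and variance by repeated sampling with X held fixed.
rng = np.random.default_rng(0)

n, beta0, beta1, sigma = 50, 1.0, 2.0, 3.0
X = rng.uniform(0, 10, size=n)              # fixed design across replications
var_theory = sigma**2 / np.sum((X - X.mean()) ** 2)

def ols_slope(y, x):
    """Slope of the least-squares fit of y on x (with an intercept)."""
    return np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

reps = 20_000
slopes = np.empty(reps)
for r in range(reps):
    eps = rng.normal(0, sigma, size=n)      # Ass6: normal errors given X
    Y = beta0 + beta1 * X + eps
    slopes[r] = ols_slope(Y, X)

print(f"mean of slope estimates: {slopes.mean():.4f} (true beta1 = {beta1})")
print(f"empirical variance     : {slopes.var():.5f}")
print(f"theoretical variance   : {var_theory:.5f}")
```

The empirical mean and variance of the simulated slopes should come out close to \beta_1 and the theoretical variance formula, respectively.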
The (conditional) variance of \hat\beta_j depends on the unknown parameter \sigma^2. In practice, we substitute it with its unbiased estimator:

    \hat\sigma^2 = \frac{1}{n-k} \sum_{i=1}^{n} \hat\varepsilon_i^2

As a consequence of this substitution, the distribution of the standardized \hat\beta_j is no longer standard normal but a t distribution with n - k degrees of freedom:

    \frac{\hat\beta_j - \beta_j}{s.e.(\hat\beta_j)} \sim t_{n-k}

The t distribution converges to a normal distribution as the degrees of freedom n - k grow.
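The standardization step can be sketched in code for the single-regressor case (a sketch under our own naming; the data-generating values are assumptions, not from the lecture): estimate \sigma^2 from the residuals, form the standard error of the slope, and build the t statistic with n - k degrees of freedom.

```python
import numpy as np

# Sketch of the standardization step for a one-regressor model:
# s2 = (1/(n-k)) * sum(residuals^2) estimates sigma^2 unbiasedly,
# and (b1 - beta1_H0) / se(b1) follows a t distribution with n - k df.
rng = np.random.default_rng(1)

n = 30
beta0, beta1, sigma = 1.0, 2.0, 3.0      # illustrative true parameters
X = rng.uniform(0, 10, size=n)
Y = beta0 + beta1 * X + rng.normal(0, sigma, size=n)

# OLS fit of Y on X with an intercept
Sxx = np.sum((X - X.mean()) ** 2)
b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / Sxx
b0 = Y.mean() - b1 * X.mean()

resid = Y - (b0 + b1 * X)
k = 2                                    # parameters estimated: intercept + slope
s2 = np.sum(resid ** 2) / (n - k)        # unbiased estimator of sigma^2
se_b1 = np.sqrt(s2 / Sxx)                # standard error of the slope

t_stat = (b1 - 0) / se_b1                # t statistic for H0: beta1 = 0
print(f"slope = {b1:.3f}, s.e. = {se_b1:.3f}, t (df = {n - k}) = {t_stat:.2f}")
```

With the true slope set to 2 and a sample of 30, the t statistic for H0: \beta_1 = 0 comes out far above conventional critical values, so the null would be rejected.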

This note was uploaded on 03/15/2010 for the course ECON 103, taught by Professor Sandra Black during the Winter '07 term at UCLA.
