Economics 513, USC, Fall 2005

Lecture 5. Ordinary Least Squares: Estimation, Inference and Predicting Outcomes

Let us review the basics of the linear model. We have n units (individuals, firms, or other economic agents) drawn randomly from a large population. On each unit i we observe an outcome y_i and a K-dimensional vector of explanatory variables x_i = (x_{i1}, x_{i2}, \ldots, x_{iK})' (where typically the first covariate is a constant, x_{i1} = 1 for all i = 1, \ldots, n). We are interested in explaining the distribution of y_i in terms of the explanatory variables x_i using a linear model:

    y_i = x_i' \beta + \varepsilon_i.    (1)

In matrix notation, Y = X\beta + \varepsilon, or, avoiding vector and matrix notation completely,

    y_i = \beta_1 x_{i1} + \ldots + \beta_K x_{iK} + \varepsilon_i = \sum_{k=1}^{K} \beta_k x_{ik} + \varepsilon_i.

We consider a sequence of increasingly weaker assumptions on the relation between \varepsilon_i and x_i. First, we assume that the residuals \varepsilon_i are independent of the covariates (or regressors) and normally distributed with mean zero and variance \sigma^2:

Assumption 1: \varepsilon_i | x_i \sim N(0, \sigma^2).

We can weaken this considerably. First, we could relax normality and assume only independence:

Assumption 2: \varepsilon_i \perp x_i, combined with the normalization E[\varepsilon_i] = 0.

We can weaken this assumption further by requiring only mean-independence:

Assumption 3: E[\varepsilon_i | x_i] = 0,

or, even further, by requiring only zero correlation:

Assumption 4: E[\varepsilon_i x_i] = 0.

We will also assume that the observations are drawn randomly from some population. We could do most of the analysis assuming instead that the covariates are fixed, but this complicates matters for some results and does not help very much.

Assumption 5: The pairs (x_i, y_i) are independent draws from some common distribution, with the first two moments of x_i finite.

The (ordinary) least squares estimator for \beta solves

    \min_{\beta} \sum_{i=1}^{n} (y_i - x_i' \beta)^2.

This leads to

    \hat{\beta} = (X'X)^{-1} (X'Y).
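As a numerical sketch of the estimator above (the simulated data, sample sizes, and parameter values here are my own illustration, not from the notes), the closed form (X'X)^{-1}(X'Y) can be computed directly and checked against a library least-squares routine:

```python
import numpy as np

# Simulate from the linear model y_i = x_i' beta + eps_i (Assumption 1:
# normal errors independent of the covariates; values chosen for illustration).
rng = np.random.default_rng(0)
n, K = 200, 3
beta = np.array([1.0, -2.0, 0.5])
# First covariate is a constant, x_{i1} = 1 for all i, as in the notes.
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
eps = rng.normal(scale=1.0, size=n)
Y = X @ beta + eps

# OLS estimator: beta_hat = (X'X)^{-1} (X'Y). Solving the normal equations
# (X'X) beta_hat = X'Y is numerically preferable to forming the inverse.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Cross-check against NumPy's least-squares solver, which minimizes
# sum_i (y_i - x_i' beta)^2 directly.
beta_ls, *_ = np.linalg.lstsq(X, Y, rcond=None)
```

Both routes minimize the same sum of squared residuals, so the two answers agree up to floating-point precision.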
Note that we adopt the notation \hat{\beta} for the OLS estimator instead of b. In the sequel I will also use \hat{\varepsilon} for e, and \hat{\sigma}^2 for s^2.

Under Assumption 1, the (exact) distribution of the OLS estimator is

    \hat{\beta} | X \sim N(\beta, \sigma^2 (X'X)^{-1}).

Without normality of the \varepsilon_i it is difficult to derive the exact distribution of \hat{\beta}. However, we have

    \hat{\beta} = \beta + \left( \frac{1}{n} \sum_{i=1}^{n} x_i x_i' \right)^{-1} \frac{1}{n} \sum_{i=1}^{n} x_i \varepsilon_i.

Under Assumption 4 (and hence under the stronger Assumptions 1-3), E[x_i \varepsilon_i] = 0, so that by the Law of Large Numbers, using the random sampling in Assumption 5,

    \frac{1}{n} \sum_{i=1}^{n} x_i \varepsilon_i \to_p 0    and    \frac{1}{n} \sum_{i=1}^{n} x_i x_i' \to_p E[x x'],

so that

    \hat{\beta} \to_p \beta.

Also, under Assumption 5 (combined with Assumption 4 or any of the stronger Assumptions 1-3) and a second moment condition on \varepsilon (variance finite and equal to \sigma^2), we can establish asymptotic normality by the Central Limit Theorem.
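The consistency argument above can be illustrated with a small simulation (again with parameter values and sample sizes of my own choosing): as n grows, the sampling error of \hat{\beta} around \beta shrinks, because the averages in the decomposition converge to their population counterparts.

```python
import numpy as np

beta = np.array([1.0, 0.5])  # true coefficients (illustrative)

def ols_error(n, rng):
    """Draw one sample of size n from the linear model and return
    the Euclidean distance between beta_hat and the true beta."""
    X = np.column_stack([np.ones(n), rng.normal(size=n)])  # constant + one regressor
    eps = rng.normal(size=n)  # mean-zero errors independent of X
    Y = X @ beta + eps
    beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
    return np.linalg.norm(beta_hat - beta)

rng = np.random.default_rng(1)
# Estimation error at a small and at a large sample size: consistency
# (beta_hat ->_p beta) predicts the large-n error is much smaller,
# of order 1/sqrt(n) by the Central Limit Theorem.
err_small = ols_error(100, rng)
err_large = ols_error(100_000, rng)
```

With n growing by a factor of 1000, the typical error shrinks by roughly sqrt(1000), consistent with the root-n rate delivered by the Central Limit Theorem.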