3.2 OLS Fitted Values and Residuals

After obtaining the OLS estimates, we can obtain fitted (predicted) values for y:

(3.20) \hat{y}_i = \hat{\beta}_0 + \hat{\beta}_1 x_{i1} + \hat{\beta}_2 x_{i2} + \dots + \hat{\beta}_k x_{ik}

Given our actual and predicted values for y, as in the simple regression case we can obtain residuals:

(3.21) \hat{u}_i = y_i - \hat{y}_i

A positive \hat{u}_i indicates underprediction (y_i > \hat{y}_i), and a negative \hat{u}_i indicates overprediction (y_i < \hat{y}_i).

We can extend the single-variable case to obtain important properties of the fitted values and residuals:

1) The sample average of the residuals is zero; therefore \bar{y} = \bar{\hat{y}}.

2) The sample covariance between each independent variable and the OLS residuals is zero. Since the fitted values are linear functions of the independent variables and the OLS estimates, the sample covariance between the OLS fitted values and the OLS residuals is also zero.

3) The point (\bar{x}_1, \bar{x}_2, \dots, \bar{x}_k, \bar{y}) is always on the OLS regression line:

\bar{y} = \hat{\beta}_0 + \hat{\beta}_1 \bar{x}_1 + \hat{\beta}_2 \bar{x}_2 + \dots + \hat{\beta}_k \bar{x}_k

Notes: these properties come from the first-order conditions (FOCs) in (3.13). The first FOC says that the sum of the residuals is zero, which proves (1); the remaining FOCs imply zero covariance between each independent variable and \hat{u}, which proves (2); and (3) follows from (1).

3.2 "Partialling Out"

In multiple regression analysis, we do not need explicit formulas to obtain the OLS estimates of the \beta_j. However, explicit formulas reveal interesting properties. In the two-independent-variable case:

(3.22) \hat{\beta}_1 = \sum_i \hat{r}_{i1} y_i \Big/ \sum_i \hat{r}_{i1}^2

where the \hat{r}_{i1} are the residuals from regressing x_1 on x_2, i.e. from the regression

\hat{x}_1 = \hat{\phi}_0 + \hat{\phi}_1 x_2.

The \hat{r}_{i1} are the part of x_{i1} that is uncorrelated with x_{i2}; equivalently, \hat{r}_{i1} is x_{i1} after x_{i2}'s effect has been "partialled out" or "netted out". Thus \hat{\beta}_1 measures x_1's effect on y after x_2 has been partialled out. In a regression with k independent variables, the residuals come from a regression of x_1 on ALL the other x's; in that case \hat{\beta}_1 measures x_1's effect on y after all the other x's have been partialled out.

3.2 Comparing Simple and Multiple Regressions ...
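The residual properties and the partialling-out formula (3.22) are easy to verify numerically. The following is a minimal sketch in NumPy on simulated data; the variable names and the data-generating process are illustrative assumptions, not from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated data (illustrative only): x1 and x2 are correlated
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(size=n)

# Multiple regression of y on a constant, x1, and x2 via least squares
X = np.column_stack([np.ones(n), x1, x2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Fitted values and residuals, as in (3.20) and (3.21)
y_hat = X @ beta_hat
u_hat = y - y_hat

# Property 1: residuals average to zero, so mean(y) == mean(y_hat)
assert abs(u_hat.mean()) < 1e-8
# Property 2: zero sample covariance with each regressor, hence with y_hat
assert abs(np.cov(x1, u_hat)[0, 1]) < 1e-8
assert abs(np.cov(y_hat, u_hat)[0, 1]) < 1e-8
# Property 3: the point of means lies on the regression line
assert abs(beta_hat @ np.array([1.0, x1.mean(), x2.mean()]) - y.mean()) < 1e-8

# "Partialling out" (3.22): regress x1 on a constant and x2, keep residuals
Z = np.column_stack([np.ones(n), x2])
phi_hat, *_ = np.linalg.lstsq(Z, x1, rcond=None)
r_hat = x1 - Z @ phi_hat

beta1_partial = (r_hat @ y) / (r_hat @ r_hat)

# Same coefficient on x1 either way
assert abs(beta1_partial - beta_hat[1]) < 1e-8
print("beta1 (multiple):", beta_hat[1], "beta1 (partialled out):", beta1_partial)
```

The last assertion is the point of the "partialling out" interpretation: the coefficient on x_1 from the full regression is identical to the slope from regressing y on the residualized x_1.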
This note was uploaded on 03/14/2009 for the course ECON ECON 399 taught by Professor Priemaza during the Spring '09 term at University of Alberta.