LinearRegression3

Parallels with the Two-Variable Case (VER. 9/25/2012. © P. KOLM)


- b0 is the intercept
- b1, ..., bk are the slope parameters
- u is the error term (or disturbance)

As before:
- We make a zero conditional mean assumption, E(u | x1, x2, ..., xk) = 0
- We solve for b0, b1, ..., bk by minimizing the sum of squared residuals
  o That's why we call it ordinary least squares (OLS)

Example

Multiple regression model:
  wage = b0 + b1 educ + b2 exper + u

Fitted regression model:
  ŵage = b̂0 + b̂1 educ + b̂2 exper

Interpretation: b̂1 is the estimated increase in the wage for a unit increase in educ, holding exper constant.

Example (Excel Output)

SUMMARY OUTPUT

Regression Statistics
  Multiple R         0.368583072
  R Square           0.135853481
  Adjusted R Square  0.13399909
  Standard Error     376.2948248
  Observations       935

ANOVA
               df    SS            MS            F            Significance F
  Regression     2   20747023.1    10373511.55   73.26040306  2.81657E-30
  Residual     932   131969145.1   141597.7952
  Total        934   152716168.2

             Coefficients    Standard Error   t Stat         P-value       Lower 95%      Upper 95%
  Intercept  -272.5278605    107.2627094      -2.540751226   0.01122266    -483.0322737   -62.02344727
  educ       76.21638857     6.296603998      12.10436429    1.98778E-31   63.85922421    88.57355293
  exper      17.63776991     3.1617754        5.578438593    3.18016E-08   11.43274601    23.84279381

Example (Matlab Output)

Ordinary Least-squares Estimates
Dependent Variable = wage
R-squared      = 0.1359
Rbar-squared   = 0.1340
sigma^2        = 141597.7952
Durbin-Watson  = 1.8348
Nobs, Nvars    = 935, 3
***************************************************************
Variable      Coefficient     t-statistic    t-probability
intercept     -272.527860     -2.540751      0.011223
educ          76.216389       12.104364      0.000000
exper         17.637770       5.578439       0.000000
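The OLS recipe above (choose b0, b1, ..., bk to minimize the sum of squared residuals) can be sketched numerically with the normal equations. This is a minimal illustration on synthetic data, not the actual wage data set behind the Excel/Matlab output; the "true" coefficients and the noise level below are made up for the sketch.

```python
import numpy as np

# Synthetic sample (hypothetical values, chosen to resemble the example's scale)
rng = np.random.default_rng(0)
n = 935
educ = rng.uniform(8, 18, n)
exper = rng.uniform(0, 30, n)
u = rng.normal(0, 300, n)                      # error term
wage = -270 + 76 * educ + 18 * exper + u       # population model wage = b0 + b1*educ + b2*exper + u

# Design matrix with a constant column for the intercept
X = np.column_stack([np.ones(n), educ, exper])

# Normal equations: (X'X) b = X'y  <=>  minimizing the sum of squared residuals
beta_hat = np.linalg.solve(X.T @ X, X.T @ wage)

# Residuals, error variance estimate (the "sigma^2" / MS Residual in the output),
# standard errors, and t-statistics
resid = wage - X @ beta_hat
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
t_stats = beta_hat / se
print(beta_hat)
```

A defining property of the OLS solution is that the residuals are orthogonal to every column of X, which is exactly the first-order condition of the minimization.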
Example - Interpretation

The fitted regression line is:
  wage = -272.5 + 76.22 educ + 17.64 exper

- Holding experience fixed, a one-year increase in education increases the predicted monthly wage by 76.22 dollars
- Holding education fixed, a one-year increase in experience increases the predicted monthly wage by 17.64 dollars
- When education and experience are zero, the wage is predicted to be -$272.5

Classical Linear Regression Assumptions (Multivariable Case)

[MLR.1] The population model is linear in parameters:
          y = b0 + b1 x1 + b2 x2 + ... + bk xk + u
[MLR.2] {(xi1, xi2, ..., xik, yi) : i = 1, 2, ..., n} is a random sample from the population model, so that
          yi = b0 + b1 xi1 + b2 xi2 + ... + bk xik + ui
[MLR.3] E(u | x1, x2, ..., xk) = 0, implying that all of the explanatory variables are uncorrelated with the error
[MLR.4] None of the x's is constant, and there are no exact linear relationships among them
[MLR.5] Homoskedasticity: Var(u | x1, x2, ..., xk) = s^2
[MLR.6] Normality: u ~ N(0, s^2) (needed for hypothesis testing, etc.)

- MLR.1-MLR.5 are known as the Gauss-Markov assumptions
- MLR.1-MLR.6 are called the classical linear model (CLM) assumptions

We now take a look at the main results for the multivariate case.

Main Results for the Multivariate Case: Unbiasedness

OLS is unbiased, that is E...
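Assumption MLR.4 (no constant x and no exact linear relationships among the regressors) is what makes X'X invertible, so the OLS minimization has a unique solution. A quick numerical check is the rank of the design matrix. The variables below are made-up stand-ins for educ and exper, used only to illustrate the check.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
educ = rng.uniform(8, 18, n)
exper = rng.uniform(0, 30, n)

# Design matrix satisfying MLR.4: a constant plus two non-collinear regressors
X_ok = np.column_stack([np.ones(n), educ, exper])

# Appending educ + exper creates an exact linear relationship among the columns,
# violating MLR.4: X'X becomes singular and OLS has no unique solution
X_bad = np.column_stack([X_ok, educ + exper])

print(np.linalg.matrix_rank(X_ok))   # 3: full column rank, MLR.4 holds
print(np.linalg.matrix_rank(X_bad))  # 3, but X_bad has 4 columns: MLR.4 violated
```

Statistical packages typically refuse to fit (or silently drop a column) in the X_bad case; checking the rank yourself makes the failure explicit.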

This document was uploaded on 02/17/2014 for the course COURANT G63.2751.0 at NYU.
