**Statistics 191: Introduction to Applied Statistics**

*Multiple linear regression*

Jonathan Taylor, Department of Statistics, Stanford University

January 26, 2010

**Outline**

- Specifying the model.
- Fitting the model: least squares.
- Interpretation of the coefficients.
- More on F-statistics.
- Matrix approach to linear regression.
- T-statistics revisited.
- More F-statistics.
- Tests involving more than one $\beta$.

**Job supervisor data**

| Variable | Description |
| --- | --- |
| $Y$ | Overall supervisor job rating |
| $X_1$ | How well do they handle complaints |
| $X_2$ | Do they allow special privileges |
| $X_3$ | Give opportunity to learn new things |
| $X_4$ | Raises based on performance |
| $X_5$ | Too critical of poor performance |
| $X_6$ | Good rate of advancement |

**Job supervisor data: R code**

*(Slide shows R code for loading and examining the data.)*

**Specifying the model**

Rather than one predictor, we now have $p = 6$ predictors:

$$Y_i = \beta_0 + \beta_1 X_{i1} + \cdots + \beta_p X_{ip} + \varepsilon_i$$

Errors $\varepsilon_i$ are assumed independent $N(0, \sigma^2)$, as in simple linear regression. The coefficients are called *(partial) regression coefficients* because they allow for the effect of the other variables.
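The model above can be made concrete by simulating from it. The slides use R; this is a minimal numpy sketch instead, with made-up coefficient values (the names `beta` and the sample size are illustrative, not from the slides):

```python
import numpy as np

# Simulate from the multiple linear regression model
#   Y_i = beta_0 + beta_1 X_i1 + ... + beta_p X_ip + eps_i
# with p = 6 predictors, as in the supervisor data.
rng = np.random.default_rng(0)
n, p = 30, 6
X = rng.normal(size=(n, p))                 # predictors X_i1, ..., X_ip
beta = np.array([2.0, 1.0, 0.5, 0.0, -0.3, 0.8, 0.1])  # beta_0..beta_6 (hypothetical)
eps = rng.normal(scale=1.0, size=n)         # errors: independent N(0, sigma^2)

Xd = np.column_stack([np.ones(n), X])       # design matrix: intercept column + predictors
Y = Xd @ beta + eps                         # responses generated by the model
```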
**Geometry of least squares**

*(Slide shows a figure illustrating the geometry of least squares.)*

**Fitting the model: least squares**

Just as in simple linear regression, the model is fit by minimizing

$$SSE(\beta_0, \ldots, \beta_p) = \sum_{i=1}^{n} \left( Y_i - \Big( \beta_0 + \sum_{j=1}^{p} \beta_j X_{ij} \Big) \right)^2 = \left\| Y - \widehat{Y}(\beta) \right\|^2$$

The minimizers $\widehat{\beta} = (\widehat{\beta}_0, \ldots, \widehat{\beta}_p)$ are the least squares estimates; they are also normally distributed, as in simple linear regression.

**Error component: estimating $\sigma^2$**

As in simple regression,

$$\widehat{\sigma}^2 = \frac{SSE}{n - p - 1} \sim \sigma^2 \cdot \frac{\chi^2_{n-p-1}}{n - p - 1},$$

independent of $\widehat{\beta}$. Why $\chi^2_{n-p-1}$? Typically, the degrees of freedom in the estimate of $\sigma^2$ is $n$ minus the number of parameters in the regression function.

**Interpretation of the $\beta_j$'s**

Supervisor example: take $\beta_1$. This is the amount by which the average job rating increases for a one-unit increase in "Handles complaints", keeping everything else constant.
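The fitting and variance-estimation steps above can be sketched numerically. This is a minimal illustration on simulated data using numpy's least-squares solver (all names and values are hypothetical, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 6
X = rng.normal(size=(n, p))
Xd = np.column_stack([np.ones(n), X])        # design matrix with intercept
beta_true = np.array([1.0, 0.8, 0.0, 0.5, -0.2, 0.3, 0.0])  # hypothetical
Y = Xd @ beta_true + rng.normal(scale=0.5, size=n)

# Least squares estimates: minimize SSE(beta) = ||Y - Xd @ beta||^2
beta_hat, _, _, _ = np.linalg.lstsq(Xd, Y, rcond=None)

# Estimate of sigma^2 with n - p - 1 degrees of freedom:
#   sigma_hat^2 = SSE / (n - p - 1)
resid = Y - Xd @ beta_hat
sse = resid @ resid
sigma2_hat = sse / (n - p - 1)
```

Here `beta_hat[1]` plays the role of $\widehat{\beta}_1$: the estimated change in the mean response for a one-unit increase in the first predictor, holding the others fixed.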
