# 4 Multiple Linear Regression - Statistics 191: Introduction to Applied Statistics

Statistics 191: Introduction to Applied Statistics
Multiple linear regression
Jonathan Taylor, Department of Statistics, Stanford University
January 26, 2010
## Outline

- Specifying the model
- Fitting the model: least squares
- Interpretation of the coefficients
- More on F-statistics
- Matrix approach to linear regression
- T-statistics revisited
- More F-statistics
- Tests involving more than one β
## Job supervisor data

| Variable | Description |
|----------|-------------|
| Y | Overall supervisor job rating |
| X1 | How well do they handle complaints |
| X2 | Do they allow special privileges |
| X3 | Give opportunity to learn new things |
| X4 | Raises based on performance |
| X5 | Too critical of poor performance |
| X6 | Good rate of advancement |
## Job supervisor data: R code
## Specifying the model

Multiple linear regression model. Rather than one predictor, we have p = 6 predictors:

$$Y_i = \beta_0 + \beta_1 X_{i1} + \cdots + \beta_p X_{ip} + \varepsilon_i$$

Errors ε are assumed independent N(0, σ²), as in simple linear regression. The coefficients are called (partial) regression coefficients because they "allow" for the effect of the other variables.
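The original slides use R; as a rough sketch of the same model in NumPy, the following simulates data from this specification with made-up coefficients (the dimensions mirror the supervisor example, but all numeric values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: n observations, p = 6 predictors,
# as in the supervisor example. All values below are made up.
n, p = 30, 6
X = rng.normal(size=(n, p))
beta0 = 2.0
beta = np.array([1.0, 0.5, -0.3, 0.2, 0.0, 0.1])
sigma = 1.0

# Y_i = beta_0 + sum_j beta_j X_ij + eps_i, with independent
# N(0, sigma^2) errors, as the model specifies.
eps = rng.normal(scale=sigma, size=n)
Y = beta0 + X @ beta + eps
```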
## Geometry of least squares
## Fitting the model: least squares

Just as in simple linear regression, the model is fit by minimizing

$$\text{SSE}(\beta_0, \ldots, \beta_p) = \sum_{i=1}^{n} \left( Y_i - \beta_0 - \sum_{j=1}^{p} \beta_j X_{ij} \right)^2 = \left\lVert Y - \widehat{Y}(\beta) \right\rVert^2$$

The minimizers $\widehat{\beta} = (\widehat{\beta}_0, \ldots, \widehat{\beta}_p)$ are the "least squares estimates"; like the simple-regression estimates, they are normally distributed.
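A minimal sketch of the fit, assuming simulated data (the slides do this in R; this NumPy version computes the same least squares estimates two equivalent ways):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 6

# Design matrix with an intercept column prepended (illustrative data).
X = rng.normal(size=(n, p))
Xd = np.column_stack([np.ones(n), X])
beta_true = np.array([2.0, 1.0, 0.5, -0.3, 0.2, 0.0, 0.1])
Y = Xd @ beta_true + rng.normal(size=n)

# Least squares estimates minimize ||Y - Xd @ beta||^2.
beta_hat, _, _, _ = np.linalg.lstsq(Xd, Y, rcond=None)

# Equivalent closed form via the normal equations: (X'X)^{-1} X'Y.
beta_ne = np.linalg.solve(Xd.T @ Xd, Xd.T @ Y)
```

Both routes give the same $\widehat{\beta}$; `lstsq` is preferred numerically because it avoids explicitly forming $X^\top X$.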
## Error component: estimating σ²

As in simple regression,

$$\widehat{\sigma}^2 = \frac{\text{SSE}}{n - p - 1} \sim \sigma^2 \cdot \frac{\chi^2_{n-p-1}}{n - p - 1},$$

independent of $\widehat{\beta}$. Why $\chi^2_{n-p-1}$? Typically, the degrees of freedom in the estimate of $\sigma^2$ is $n$ minus the number of parameters in the regression function; here there are $p + 1$ parameters (intercept plus $p$ slopes), giving $n - p - 1$.
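Continuing the simulated-data sketch (all data here are illustrative assumptions), the estimate is just the residual sum of squares divided by the degrees of freedom:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 30, 6

X = rng.normal(size=(n, p))
Xd = np.column_stack([np.ones(n), X])
Y = Xd @ np.ones(p + 1) + rng.normal(size=n)

beta_hat, *_ = np.linalg.lstsq(Xd, Y, rcond=None)
resid = Y - Xd @ beta_hat
sse = resid @ resid

# Unbiased estimate of sigma^2: n observations minus (p + 1)
# fitted parameters leaves n - p - 1 degrees of freedom.
sigma2_hat = sse / (n - p - 1)
```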
## Interpretation of the β_j's

Supervisor example: take $\beta_1$. This is the amount the average job rating increases for one "unit" of "handles complaints", keeping everything else constant.
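The "keeping everything else constant" qualifier matters when predictors are correlated. A small sketch with made-up correlated predictors (not the supervisor data) contrasts the partial coefficient of $X_1$ from the multiple regression with the marginal slope from regressing $Y$ on $X_1$ alone:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500

# Two correlated predictors (illustrative values, not the supervisor data).
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

# Partial coefficient of x1: multiple regression holds x2 fixed.
Xd = np.column_stack([np.ones(n), x1, x2])
b_multi = np.linalg.lstsq(Xd, y, rcond=None)[0]

# Marginal slope of y on x1 alone: absorbs x2's effect because
# x2 co-moves with x1 (here roughly 2 + 3 * 0.8 = 4.4).
Xs = np.column_stack([np.ones(n), x1])
b_simple = np.linalg.lstsq(Xs, y, rcond=None)[0]
```

`b_multi[1]` is near the partial effect 2, while `b_simple[1]` is near 4.4: the simple regression slope credits $X_1$ with part of $X_2$'s effect, which is exactly what "keeping everything else constant" rules out.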