
# Handout 11: Multiple Regression


For the electric-bill data we have $Y$ = annual electric bill, $X_1$ = monthly household income, $X_2$ = number of persons, and $X_3$ = living area. The model is

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \varepsilon.$$

The fitted regression is

$$\hat{Y} = -358.4 + 0.0751\,X_1 + 55.09\,X_2 + 0.2811\,X_3,$$

with $SSE(X_1, X_2, X_3) = 550163$ and $SSTO = 3701668$. The correlation matrix is:

|         | Bill | Income | Persons | Area |
|---------|------|--------|---------|------|
| Bill    | 1    | .837   | .494    | .905 |
| Income  |      | 1      | .143    | .961 |
| Persons |      |        | 1       | .366 |
| Area    |      |        |         | 1    |

Three testing problems.

**I.** An overall test for the regression: $H_0: \beta_1 = \beta_2 = \beta_3 = 0$ against $H_1$: at least one of $\beta_1, \beta_2, \beta_3$ is nonzero. Note that "$H_0$ is true" is equivalent to "knowing $X_1, X_2, X_3$ does not help in predicting $Y$."

**II.** Can a predictor variable, say $X_1$, be dropped from the model? This is equivalent to asking "does adding the variable $X_1$ to the model $Y = \beta_0 + \beta_2 X_2 + \beta_3 X_3 + \varepsilon$ improve the prediction of $Y$?" To settle the question "can we drop the variable $X_1$ from the full model $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \varepsilon$," we test $H_0: \beta_1 = 0$ against $H_1: \beta_1 \neq 0$. There are two (equivalent) tests for this: (a) the t-test, and (b) the (partial) F-test.

**III.** Can we drop more than one variable, say $X_1$ and $X_3$, from the model $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \varepsilon$? An equivalent question is: does the addition of the variables $X_1$ and $X_3$ to the model $Y = \beta_0 + \beta_2 X_2 + \varepsilon$ significantly improve the prediction of $Y$? Statistically this is equivalent to testing $H_0: \beta_1 = \beta_3 = 0$ against $H_1$: not both of $\beta_1$ and $\beta_3$ are zero.
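The overall F statistic for test I can be computed directly from the SSE and SSTO values given above. A minimal sketch in Python; note that the handout does not state the sample size, so the value of `n` below is a hypothetical placeholder (the coefficient of determination $R^2$, however, does not depend on $n$):

```python
# Overall F-test for H0: beta1 = beta2 = beta3 = 0.
# SSE and SSTO are the values given in the handout; the sample size
# n is NOT given there, so n = 25 is a hypothetical placeholder.
SSE = 550163.0    # SSE(X1, X2, X3): error sum of squares, full model
SSTO = 3701668.0  # total sum of squares
k = 3             # number of predictors
n = 25            # hypothetical sample size (not stated in the handout)

SSR = SSTO - SSE         # regression sum of squares
MSR = SSR / k            # regression mean square, df = k
MSE = SSE / (n - k - 1)  # error mean square, df = n - k - 1
F = MSR / MSE            # overall F statistic
R2 = SSR / SSTO          # coefficient of determination (independent of n)

print(F, R2)
```

Whatever the true $n$, here $R^2 \approx 0.851$, so the three predictors together explain about 85% of the variation in the bill; we would reject $H_0$ at level $\alpha$ if $F$ exceeds the critical value $F(1-\alpha;\, k,\, n-k-1)$.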

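The partial F-test used in problems II and III compares the SSE of a reduced model against the SSE of the full model. A general sketch, assuming both SSE values are available; the `sse_reduced` and `df_full_error` values in the example call are made-up illustrative numbers, since the handout does not report the SSE of any reduced model or the sample size:

```python
def partial_f(sse_reduced, sse_full, q, df_full_error):
    """Partial F statistic for dropping q predictors from the full model.

    F* = [(SSE_reduced - SSE_full) / q] / [SSE_full / df_full_error],
    where df_full_error = n - p is the error df of the full model.
    """
    return ((sse_reduced - sse_full) / q) / (sse_full / df_full_error)

# Hypothetical example for problem II: testing H0: beta1 = 0 (drop X1).
# sse_full is from the handout; sse_reduced = 700000 and
# df_full_error = 21 are illustrative placeholder values.
f_star = partial_f(sse_reduced=700000.0, sse_full=550163.0,
                   q=1, df_full_error=21)
print(f_star)
```

For $q = 1$ (problem II) this partial F statistic equals the square of the t statistic for $\beta_1$, which is why the two tests are equivalent; for problem III one would use $q = 2$ with the reduced model containing only $X_2$.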