Lecture 9 slides (Categories)


Last time

- Discussed the basics of multiple linear regression, from both algebraic and geometric perspectives
- Showed that the least squares estimators are unbiased and found their variance-covariance matrix

Lecture 9 – p. 1/33

Today's material

- Finish 3.4 (have now covered 3.1–3.4)
- Start 3.10 (will come back to other sections later)

Lecture 9 – p. 2/33

Partitioning Regression Sums of Squares

- If we order the covariates so that the intercept vector (X_0) and the other \tilde{p} variables are numbered X_0, X_1, \ldots, X_{\tilde{p}}
- Can then write

  SS_{Reg} = y^t \left[ X (X^t X)^{-1} X^t - \tilde{X} (\tilde{X}^t \tilde{X})^{-1} \tilde{X}^t \right] y + \tilde{b}^t \tilde{X}^t \tilde{X} \tilde{b} - n \bar{y}^2
           = R(\beta_p, \beta_{p-1}, \ldots, \beta_{\tilde{p}+1} \mid \beta_{\tilde{p}}, \ldots, \beta_0) + R(\beta_{\tilde{p}}, \ldots, \beta_0)

Lecture 9 – p. 3/33

Partitioning Regression Sums of Squares

- R(\beta_p, \beta_{p-1}, \ldots, \beta_{\tilde{p}+1} \mid \beta_{\tilde{p}}, \ldots, \beta_0) is the additional regression sum of squares explained by covariates X_{\tilde{p}+1}, \ldots, X_p beyond the smaller model using only \tilde{X}
- Partitioning allows us to test complex hypotheses
- R(\beta_{\tilde{p}}, \ldots, \beta_0) = \tilde{b}^t \tilde{X}^t \tilde{X} \tilde{b} - n \bar{y}^2 is equivalent to the regression sum of squares under a null hypothesis that (\beta_{\tilde{p}+1}, \ldots, \beta_p) = \beta^* = 0, i.e. for y = \tilde{X} \tilde{\beta} + X^* \beta^* + \epsilon, where X^* contains the covariate vectors from X not contained in \tilde{X}, \beta^* is fixed to be 0

Lecture 9 – p. 4/33
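The partition above can be verified numerically. The sketch below uses Python with NumPy and fully synthetic data (the lecture itself uses R); it checks that SS_Reg for the full model splits exactly into the reduced-model regression sum of squares plus the extra sum of squares from the remaining covariates.

```python
# Numerical check of the partition
#   SS_Reg = R(beta* | beta_tilde) + [b~' X~' X~ b~ - n ybar^2]
# using synthetic data; all numbers here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, p_tilde, p = 50, 2, 4          # p_tilde covariates in the reduced model, p in the full

X_full = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # intercept X_0 first
X_red = X_full[:, : p_tilde + 1]                                 # intercept + first p_tilde covariates
beta = np.array([1.0, 0.5, -0.3, 0.8, 0.0])
y = X_full @ beta + rng.normal(scale=0.5, size=n)

def hat(X):
    """Projection (hat) matrix X (X'X)^{-1} X'."""
    return X @ np.linalg.solve(X.T @ X, X.T)

ss_reg_full = y @ hat(X_full) @ y - n * y.mean() ** 2  # SS_Reg for the full model
ss_reg_red = y @ hat(X_red) @ y - n * y.mean() ** 2    # R(beta_p~, ..., beta_0)
r_extra = y @ (hat(X_full) - hat(X_red)) @ y           # R(beta* | beta_tilde)

print(np.isclose(ss_reg_full, r_extra + ss_reg_red))   # the partition holds: prints True
```

The identity is exact because the column space of the reduced design matrix is contained in that of the full design matrix, so the two projections nest.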
Testing complex hypotheses

- To test a null hypothesis that a subset of the parameters, \beta^* = 0, we intuitively want to look for significant explanation beyond the best explanation obtained by maximizing only over \tilde{X}
- Another way of looking at it is to examine the amount by which the residual sum of squares is reduced by allowing \beta^* \neq 0
- Can write R(\beta^* \mid \tilde{\beta}) = SS_{Reg,H_1} - SS_{Reg,H_0}
- That means R(\beta^* \mid \tilde{\beta}) = (SS_{Tot} - SS_{Res,H_1}) - (SS_{Tot} - SS_{Res,H_0}) = SS_{Res,H_0} - SS_{Res,H_1}

Lecture 9 – p. 5/33

Testing subsets of parameters

- Under H_0, R(\beta^* \mid \tilde{\beta}) / \sigma^2 has a \chi^2_{p_1} distribution, where p_1 = p - \tilde{p}
- Also under H_0, s^2 (n - k) / \sigma^2, where k = p + 1, has a \chi^2_{n-k} distribution
- Therefore, under H_0,

  F = \frac{R(\beta^* \mid \tilde{\beta}) / p_1}{s^2} \sim F_{p_1, n-k}

Lecture 9 – p. 6/33

Back to Stat course example

- Assume that we're testing for a significant association with either the homework grade OR the midterm grade
- Therefore, our hypotheses would be:

  H_0: \beta_{hw} = \beta_{midterm} = 0
  H_1: \beta_{hw} \neq 0 OR \beta_{midterm} \neq 0

Lecture 9 – p. 7/33

Example: Stat500

> none.mod <- lm(final ~ 1)
> all.mod <- lm(final ~ hw + midterm)
> anova(none.mod, all.mod)
Analysis of Variance Table

Model 1: final ~ 1
Model 2: final ~ hw + midterm
  Res.Df     RSS Df Sum of Sq      F    Pr(>F)
1     54 1325.25
2     52  925.96  2    399.28 11.211 8.948e-05 ***

Lecture 9 – p. 8/33
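The partial F statistic above can be computed directly from the two residual sums of squares. The sketch below is in Python/NumPy rather than R, with synthetic data and illustrative dimensions; it implements F = [R(\beta^* | \tilde{\beta}) / p_1] / s^2 via SS_{Res,H_0} - SS_{Res,H_1}.

```python
# Partial F-test for H0: beta* = 0, computed as
#   F = [(RSS_H0 - RSS_H1) / p1] / s^2,  p1 = p - p_tilde,  s^2 = RSS_H1 / (n - k).
# Synthetic data; dimensions chosen for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n, p_tilde, p = 60, 1, 3
k = p + 1                                 # total parameters in the full model (incl. intercept)
p1 = p - p_tilde                          # number of parameters being tested

X_full = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
X_red = X_full[:, : p_tilde + 1]
y = X_full @ np.array([2.0, 1.0, 0.0, 0.0]) + rng.normal(size=n)  # beta* truly 0 here

def rss(X, y):
    """Residual sum of squares from a least-squares fit of y on X."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ b
    return r @ r

rss_h0, rss_h1 = rss(X_red, y), rss(X_full, y)
s2 = rss_h1 / (n - k)                     # unbiased estimate of sigma^2 under H1
F = ((rss_h0 - rss_h1) / p1) / s2         # ~ F_{p1, n-k} under H0
```

Because the reduced model is nested in the full one, RSS_H0 >= RSS_H1 always, so F is nonnegative.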
Example: Stat500

> ### Same test appears at bottom
> summary(all.mod)
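The R call anova(none.mod, all.mod) on the earlier slide can be mimicked without R. The sketch below is Python/NumPy with synthetic stand-ins for hw, midterm, and final (not the actual Stat500 grades), comparing the intercept-only model to final ~ hw + midterm.

```python
# Python analogue of R's anova(none.mod, all.mod): compare final ~ 1
# against final ~ hw + midterm. The grade data below are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n = 55
hw = rng.uniform(50, 100, size=n)
midterm = rng.uniform(40, 100, size=n)
final = 10 + 0.3 * hw + 0.5 * midterm + rng.normal(scale=5, size=n)

X0 = np.ones((n, 1))                               # Model 1: final ~ 1
X1 = np.column_stack([np.ones(n), hw, midterm])    # Model 2: final ~ hw + midterm

def rss(X, y):
    """Residual sum of squares from a least-squares fit."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ b
    return r @ r

rss0, rss1 = rss(X0, final), rss(X1, final)
df_num, df_den = 2, n - 3                          # Df and Res.Df, as in the anova table
F = ((rss0 - rss1) / df_num) / (rss1 / df_den)
print(f"F = {F:.3f} on {df_num} and {df_den} df")
```

This reproduces the F column of the ANOVA table; the slide's remark that the "same test appears at bottom" refers to the overall F statistic reported at the end of R's summary(all.mod) output, which tests exactly this intercept-only-vs-full comparison.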
