
CHAPTER 3: MULTIPLE REGRESSION

1. Consider the set of statistical subjects (units). Examples of such sets:
- $\{1, 2, 3, \ldots, n\}$
- {All patients in a clinical trial}
- {All the samples tested in an experiment}

On subject $i$, we observe $y_i, x_{i1}, \ldots, x_{ip}$.

2. The basic equation of multiple regression is
$$y_i = \sum_{j=1}^{p} x_{ij}\beta_j + \varepsilon_i, \qquad 1 \le i \le n,$$
where the $\varepsilon_i$ satisfy one of the same two conditions as in Chapter 2:
(a) the $\varepsilon_i$ are uncorrelated, with mean 0 and common variance $\sigma^2$, or
(b) the $\varepsilon_i$ are independent $N[0, \sigma^2]$.
Often we assume $x_{i1} = 1$ for all $i$, so that $\beta_1$ is an intercept.

3. Method of least squares: choose estimates $\hat\beta_1, \ldots, \hat\beta_p$ to minimize
$$S = \sum_{i=1}^{n}\Bigl(y_i - \sum_{j=1}^{p} x_{ij}\beta_j\Bigr)^2.$$
Differentiating with respect to $\beta_1, \ldots, \beta_p$ leads to a set of $p$ simultaneous linear equations in $p$ unknowns:
$$\sum_{i=1}^{n} x_{ik}\Bigl(y_i - \sum_{j=1}^{p} x_{ij}\hat\beta_j\Bigr) = 0, \qquad k = 1, 2, \ldots, p.$$
These equations are known as the normal equations, and they are a fundamental starting point for the analysis of linear models.

4. Rewrite in vector notation. Define
$$Y = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}, \quad
\beta = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_p \end{pmatrix}, \quad
\varepsilon = \begin{pmatrix} \varepsilon_1 \\ \vdots \\ \varepsilon_n \end{pmatrix}, \quad
X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1p} \\ x_{21} & x_{22} & \cdots & x_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{np} \end{pmatrix}.$$
Then $Y = X\beta + \varepsilon$. The normal equations may also be written in vector notation as
$$X^T Y = X^T X \hat\beta.$$
Usually we assume $X^T X$ is invertible; then
$$\hat\beta = (X^T X)^{-1} X^T Y.$$

5. Geometrical explanation. Let $x_i = (x_{1i}, \ldots, x_{ni})^T$ denote the $i$th column of $X$. Both $Y$ and the $x_i$'s are points in $\mathbb{R}^n$. Define the inner product on $\mathbb{R}^n$ by $\langle x, y \rangle = x^T y$, so that
$$\|x - y\|^2 = \langle x - y, x - y \rangle = (x - y)^T (x - y),$$
and $\mathrm{SSE} = \|Y - X\hat\beta\|^2$. Method of least squares: choose the estimate $\hat\beta$ so that $\|Y - X\hat\beta\|^2$ is minimized.

6. Geometrical explanation (continued) ...
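The least-squares computation above can be sketched numerically. The following is a minimal NumPy example with made-up illustrative data (the values are not from the notes): it solves the normal equations $X^T X \hat\beta = X^T Y$, checks the answer against NumPy's own least-squares routine, and verifies the geometric fact that the residual $Y - X\hat\beta$ is orthogonal to every column of $X$.

```python
import numpy as np

# Illustrative data: n = 5 observations, p = 2 coefficients.
# The first column of X is all ones, so beta_1 is an intercept.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

# Solve the normal equations: (X^T X) beta_hat = X^T y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Same answer via the library's least-squares routine.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

# SSE = ||y - X beta_hat||^2.
residual = y - X @ beta_hat
sse = residual @ residual

print(beta_hat)                           # intercept and slope estimates
print(np.allclose(beta_hat, beta_lstsq))  # the two methods agree
print(X.T @ residual)                     # ~ [0, 0]: residual orthogonal
                                          # to the columns of X
```

The last line is the geometric content of the normal equations: $X^T(Y - X\hat\beta) = 0$ says the residual vector is perpendicular to the column space of $X$, which is why $X\hat\beta$ is the orthogonal projection of $Y$ onto that space.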

This note was uploaded on 11/17/2011 for the course STOR 664 taught by Professor Staff during the Fall '11 term at UNC.
