LARS tutorial

Section 1: Linear Models

The linear model has been the mainstay of statistics. Despite the great inroads made by modern nonparametric regression techniques, linear models remain important, and so we need to understand them well:

- theory of least squares
- computational aspects
- distributional aspects
- linear models in R
- formulas for expressing models
- contrasts

Theory of Least Squares

We have $N$ measurements $x_i \in \mathbb{R}^p$, $y_i \in \mathbb{R}$, $i = 1, \ldots, N$, with $N > p$.

Linear model:

$$ y_i = \beta_0 + \sum_{j=1}^{p} x_{ij}\beta_j + \varepsilon_i \qquad (1) $$

with $\varepsilon_i$ i.i.d., $E(\varepsilon_i) = 0$, $\mathrm{Var}(\varepsilon_i) = \sigma^2$.

We either assume the linear model is correct, or, more realistically, think of it as a linear approximation to the regression model $E(y_i \mid x_i) = f(x_i)$. Either way, the most popular way of fitting the model is least squares: pick $\beta_0$ and $\beta_j$, $j = 1, \ldots, p$, to minimize

$$ \mathrm{RSS}(\beta_0, \beta_1, \ldots, \beta_p) = \sum_{i=1}^{N} \Big( y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j \Big)^2 \qquad (2) $$

Vector notation

Absorb $\beta_0$ into $\beta$, and augment the vector $x_i$ with a 1 (and let the new dimension be $p$ for simplicity). Write

$$ y = \begin{pmatrix} y_1 \\ \vdots \\ y_N \end{pmatrix} \quad (N \times 1), \qquad X = \begin{pmatrix} x_1^T \\ \vdots \\ x_N^T \end{pmatrix} \quad (N \times p). $$

Then (2) can be written

$$ \mathrm{RSS}(\beta) = \lVert y - X\beta \rVert^2 = (y - X\beta)^T (y - X\beta) \qquad (3) $$

Setting the derivative to zero,

$$ \frac{\partial\, \mathrm{RSS}}{\partial \beta} = -2\, X^T (y - X\beta) = 0 \;\Longrightarrow\; \hat{\beta} = (X^T X)^{-1} X^T y $$

if $X^T X$ is invertible. This is the textbook solution to the least squares problem.

Geometry of Least Squares

The geometrical solution is more revealing.

[Figure: the response $y$, its projection $\hat{y} = X\hat{\beta}$, and the subspace $M$ spanned by the columns $x_1, \ldots, x_p$ of $X$.]

$\hat{y} = X\hat{\beta}$ is the orthogonal projection of $y$ onto the subspace $M \subset \mathbb{R}^N$ spanned by the columns of $X$. This is true even if $X$ is not of full column rank. Proof: Pythagoras.

$$ \hat{y} \text{ minimizes } \lVert y - \hat{y} \rVert^2 \text{ over } M \;\Longleftrightarrow\; (y - X\hat{\beta}) \perp x_j \ \forall j \ (\text{$x_j$ is a column of $X$ here}) \;\Longleftrightarrow\; X^T (y - X\hat{\beta}) = 0 $$

Computational Aspects

QR decomposition of $X$:

$$ X_{N \times p} = Q_{N \times N} R_{N \times p} = \begin{pmatrix} Q_1 & Q_2 \end{pmatrix} R $$

where $Q$ has orthonormal columns: $Q^T Q = I$ (and rows?). $R$ is upper triangular, and may not have full rank:

- Rank $p$: $R = \begin{pmatrix} R_1 \\ 0 \end{pmatrix}$ with $R_1$ a nonsingular $p \times p$ upper-triangular block.
- Rank $r < p$: $R = \begin{pmatrix} R_{11} & R_{12} \\ 0 & 0 \end{pmatrix}$ with $R_{11}$ a nonsingular $r \times r$ upper-triangular block.

For the full rank case (since $Q$ is orthogonal),

$$ \lVert y - X\beta \rVert^2 = \lVert Q^T y - R\beta \rVert^2 = \lVert Q_1^T y - R_1\beta \rVert^2 + \lVert Q_2^T y \rVert^2 \ldots $$
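
Since the notes turn to linear models in R later on, here is a minimal sketch in R of the textbook solution above, on simulated data (the sample size, coefficients, and variable names are illustrative assumptions, not from the notes). It solves the normal equations $X^T X \beta = X^T y$ directly and checks the result against lm().

    ## Simulated example: "textbook" least squares vs. lm()
    set.seed(1)
    N <- 100; p <- 3
    X0 <- matrix(rnorm(N * p), N, p)                  # raw predictors x_i in R^p
    y  <- drop(2 + X0 %*% c(1, -0.5, 0.25) + rnorm(N, sd = 0.5))

    X <- cbind(1, X0)                                 # absorb beta_0: augment each x_i with a 1
    beta_hat <- solve(crossprod(X), crossprod(X, y))  # solves (X^T X) beta = X^T y

    fit <- lm(y ~ X0)                                 # same model; the formula adds the intercept
    all.equal(drop(beta_hat), unname(coef(fit)))      # TRUE, up to numerical error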
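
The projection picture can also be checked numerically: the residual $y - X\hat{\beta}$ is orthogonal to every column of $X$ (the normal equations), and Pythagoras gives $\lVert y \rVert^2 = \lVert \hat{y} \rVert^2 + \lVert y - \hat{y} \rVert^2$. A small sketch, reusing the simulated X, y, and beta_hat from the previous snippet:

    ## Orthogonal projection onto M = col(X), continuing the example above
    H     <- X %*% solve(crossprod(X)) %*% t(X)      # hat matrix: projects onto the column space of X
    y_hat <- drop(H %*% y)                           # identical to X %*% beta_hat
    res   <- y - y_hat

    max(abs(crossprod(X, res)))                      # ~ 0: X^T (y - X beta_hat) = 0
    all.equal(sum(y^2), sum(y_hat^2) + sum(res^2))   # Pythagoras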

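The QR route can be sketched with base R's qr(), again reusing the same simulated X, y, and beta_hat (assuming the full-rank case, so column pivoting plays no role): minimizing $\lVert y - X\beta \rVert^2$ reduces to back-solving the triangular system $R_1\beta = Q_1^T y$, and the leftover term $\lVert Q_2^T y \rVert^2$ is the residual sum of squares.

    ## QR decomposition route (full-rank case), continuing the example above
    qx <- qr(X)                                  # X = QR; this is what lm() uses internally
    Q1 <- qr.Q(qx)                               # N x ncol(X) "thin" Q_1 with orthonormal columns
    R1 <- qr.R(qx)                               # ncol(X) x ncol(X) upper-triangular R_1
    beta_qr <- backsolve(R1, crossprod(Q1, y))   # solve R_1 beta = Q_1^T y
    all.equal(drop(beta_qr), drop(beta_hat))     # agrees with the normal-equations solution
    ## equivalently: qr.coef(qx, y), which also handles rank deficiency via pivoting

    ## ||Q_2^T y||^2 = ||y||^2 - ||Q_1^T y||^2 equals the residual sum of squares
    all.equal(sum((y - drop(X %*% beta_qr))^2),
              sum(y^2) - sum(crossprod(Q1, y)^2))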
