LARS Tutorial - Section 1: Linear Models

Section 1: Linear Models

The linear model has been the mainstay of statistics. Despite the great inroads made by modern nonparametric regression techniques, linear models remain important, and so we need to understand them well. Topics:

- theory of least squares
- computational aspects
- distributional aspects
- linear models in R
- formulas for expressing models
- contrasts
Theory of Least Squares

We have N measurements x_i \in \mathbb{R}^p, y_i \in \mathbb{R}, i = 1, \ldots, N, with N > p.

Linear model:

    y_i = \beta_0 + \sum_{j=1}^{p} x_{ij} \beta_j + \varepsilon_i    (1)

with the \varepsilon_i i.i.d., E(\varepsilon_i) = 0, Var(\varepsilon_i) = \sigma^2.

We either assume the linear model is correct, or more realistically think of it as a linear approximation to the regression model E(y_i \mid x_i) = f(x_i).

Either way, the most popular way of fitting the model is least squares: pick \beta_0 and \beta_j, j = 1, \ldots, p, to minimize

    RSS(\beta_0, \beta_1, \ldots, \beta_p) = \sum_{i=1}^{N} \Big( y_i - \beta_0 - \sum_{j=1}^{p} x_{ij} \beta_j \Big)^2    (2)
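As a concrete illustration (not part of the original notes), the short R sketch below simulates data from model (1) and fits it by least squares; the sample size, dimension, and coefficient values are arbitrary choices made only for this example:

    # Simulate N observations from the linear model (1), then fit by least squares.
    set.seed(1)
    N <- 100; p <- 3
    X    <- matrix(rnorm(N * p), N, p)       # predictors x_i in R^p
    beta <- c(2, -1, 0.5)                    # illustrative slope coefficients
    y    <- drop(1 + X %*% beta + rnorm(N))  # beta_0 = 1, epsilon_i ~ N(0, 1)

    fit <- lm(y ~ X)   # least squares: minimizes RSS (2) over beta_0, beta_1, ..., beta_p
    coef(fit)          # estimated intercept and slopes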
Vector notation

Absorb \beta_0 into \beta, and augment the vector x_i with a leading 1 (and let the new dimension be p for simplicity). Write

    y = (y_1, \ldots, y_N)^T  (N \times 1),    X = \begin{pmatrix} x_1^T \\ \vdots \\ x_N^T \end{pmatrix}  (N \times p).

Then (2) can be written as

    RSS(\beta) = \| y - X\beta \|^2 = (y - X\beta)^T (y - X\beta)    (3)

Setting the gradient to zero,

    \partial RSS / \partial \beta = -2 X^T (y - X\beta) = 0  \;\Rightarrow\;  \hat{\beta} = (X^T X)^{-1} X^T y

if X^T X is invertible. This is the textbook solution to the least squares problem.
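A minimal check of this formula in R, continuing the simulated example above (the names Xa and beta_hat exist only in this sketch):

    Xa <- cbind(1, X)                              # absorb beta_0: augment each x_i with a leading 1
    beta_hat <- solve(t(Xa) %*% Xa, t(Xa) %*% y)   # (X^T X)^{-1} X^T y
    all.equal(drop(beta_hat), unname(coef(fit)))   # agrees with lm() from the earlier sketch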
Geometry of Least Squares

The geometrical solution is more revealing.

[Figure: y is projected orthogonally onto the subspace M spanned by the columns x_1, \ldots, x_p of X; the projection is \hat{y} = X\hat{\beta}, and the residual y - \hat{y} is perpendicular to M.]

\hat{y} = X\hat{\beta} is the orthogonal projection of y onto the subspace M \subset \mathbb{R}^N spanned by the columns of X. This is true even if X is not of full column rank. Proof: Pythagoras.

    y - \hat{y} \perp M
    \iff (y - X\hat{\beta}) \perp x_j for all j    (x_j is a column of X here)
    \iff X^T (y - X\hat{\beta}) = 0
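Continuing the same simulated example, the orthogonality condition can be verified numerically in R:

    res <- y - Xa %*% beta_hat   # residual vector y - X beta_hat
    max(abs(t(Xa) %*% res))      # X^T (y - X beta_hat): zero up to rounding error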
Computational Aspects

QR decomposition of X:

    X_{N \times p} = Q_{N \times N} R_{N \times p} = \begin{pmatrix} Q_1 & Q_2 \end{pmatrix} \begin{pmatrix} R_1 \\ 0 \end{pmatrix}

where Q has orthonormal columns (and, being square, orthonormal rows as well): Q^T Q = Q Q^T = I. R is upper triangular, and may not have full rank:

    Rank p:      R_1 is a p \times p upper-triangular, invertible matrix.
    Rank r < p:  R_1 = \begin{pmatrix} R_{11} & R_{12} \\ 0 & 0 \end{pmatrix}, where R_{11} is r \times r, upper triangular and invertible.
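In R (still using the simulated example), qr() computes this decomposition; the sketch below assumes the full-rank case with no column pivoting:

    qrx <- qr(Xa)                              # QR decomposition of the (augmented) X
    Q1  <- qr.Q(qrx)                           # N x (p+1) matrix Q_1 with orthonormal columns
    R1  <- qr.R(qrx)                           # (p+1) x (p+1) upper-triangular R_1
    max(abs(crossprod(Q1) - diag(ncol(Q1))))   # Q_1^T Q_1 = I, numerically
    max(abs(Q1 %*% R1 - Xa))                   # Q_1 R_1 reproduces X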
For the full rank case,

    \| y - X\beta \|^2 = \| Q^T (y - X\beta) \|^2 = \| Q_1^T y - R_1 \beta \|^2 + \| Q_2^T y \|^2

so that

    \hat{\beta} = R_1^{-1} Q_1^T y,    RSS(\hat{\beta}) = \| Q_2^T y \|^2.

Effects: e = Q^T y, the coordinates of y on the columns of Q.
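The same quantities, computed in R from the decomposition in the previous sketch (again assuming full rank and no pivoting):

    e <- qr.qty(qrx, y)                        # effects: e = Q^T y
    k <- ncol(Xa)
    beta_qr <- backsolve(R1, e[1:k])           # beta_hat = R_1^{-1} Q_1^T y (R_1 upper triangular)
    rss     <- sum(e[-(1:k)]^2)                # RSS(beta_hat) = ||Q_2^T y||^2
    all.equal(drop(beta_qr), drop(beta_hat))   # agrees with the normal-equations solution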