Linear, Ridge Regression, and Principal Component Analysis
Jia Li
Department of Statistics, The Pennsylvania State University
Email: jiali@stat.psu.edu
http://www.stat.psu.edu/jiali

Introduction to Regression

Input vector: X = (X1, X2, ..., Xp).
Output: Y is real-valued.
Goal: predict Y from X by a function f(X) so that the expected loss E[L(Y, f(X))] is minimized.
Squared loss: L(Y, f(X)) = (Y - f(X))².
The optimal predictor under squared loss is

    f*(X) = argmin_f E(Y - f(X))² = E(Y | X).

The conditional mean E(Y | X) is called the regression function.
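The optimality of the conditional mean under squared loss can be checked numerically. Below is a minimal sketch (assuming Python with numpy; the discrete data-generating process is invented purely for illustration): the per-group sample mean approximates E(Y | X), and any perturbed predictor incurs a larger empirical squared loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate (X, Y) with X taking a few discrete values, Y = X^2 + noise.
X = rng.integers(0, 3, size=100_000)                   # X in {0, 1, 2}
Y = X.astype(float) ** 2 + rng.normal(0, 1, size=X.size)

# f(X) = E(Y | X), estimated by the per-group sample mean.
cond_mean = np.array([Y[X == v].mean() for v in (0, 1, 2)])

# Any other predictor g(X) should do no better under squared loss.
other = cond_mean + np.array([0.5, -0.3, 0.2])         # arbitrary perturbation

mse_f = np.mean((Y - cond_mean[X]) ** 2)
mse_g = np.mean((Y - other[X]) ** 2)
print(mse_f < mse_g)   # the conditional mean achieves the smaller loss
```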
Example
The number of active physicians in a Standard Metropolitan Statistical Area (SMSA), denoted by Y, is expected to be related to total population (X1, measured in thousands), land area (X2, measured in square miles), and total personal income (X3, measured in millions of dollars). Data are collected for 141 SMSAs, as shown in the following table.

      i     X1     X2     X3      Y
      1   9387   1348  72100  25627
      2   7031   4069  52737  15389
      3   7017   3719  54542  13326
    ...    ...    ...    ...    ...
    139    233   1011   1337    264
    140    232    813   1589    371
    141    231    654   1148    140

Goal: predict Y from X1, X2, and X3.

Linear Methods

The linear regression model is

    f(X) = β0 + Σ_{j=1}^p Xj βj.

What if the model is not true?
- It is a good approximation.
- Because of the lack of training data and/or smarter algorithms, it is the most we can extract robustly from the data.

Comments on the Xj:
- Quantitative inputs.
- Transformations of quantitative inputs, e.g., log(·), √(·).
- Basis expansions, e.g., X2 = X1², X3 = X1³, or interactions such as X3 = X1·X2.

Estimation

The problem of finding the regression function E(Y | X) is reduced to estimating the βj, j = 0, 1, ..., p.
Training data: {(x1, y1), (x2, y2), ..., (xN, yN)}, where xi = (xi1, xi2, ..., xip).
Denote β = (β0, β1, ..., βp)^T.
The expected loss E(Y - f(X))² is approximated by the empirical loss RSS(β)/N:

    RSS(β) = Σ_{i=1}^N (yi - f(xi))² = Σ_{i=1}^N (yi - β0 - Σ_{j=1}^p xij βj)².

Notation

The input matrix X, of dimension N × (p + 1):

    | 1  x1,1  x1,2  ...  x1,p |
    | 1  x2,1  x2,2  ...  x2,p |
    | ...  ...   ...  ...  ... |
    | 1  xN,1  xN,2  ...  xN,p |

The output vector y = (y1, y2, ..., yN)^T.
The estimate of β is β̂. The fitted values at the training inputs are

    ŷi = β̂0 + Σ_{j=1}^p xij β̂j,

collected in the fitted vector ŷ = (ŷ1, ŷ2, ..., ŷN)^T.

Point Estimate

The least squares estimate of β is

    β̂ = (X^T X)^{-1} X^T y.

The fitted value vector is

    ŷ = X β̂ = X (X^T X)^{-1} X^T y.

Hat matrix:

    H = X (X^T X)^{-1} X^T.

Geometric Interpretation

- Each column of X is a vector in an N-dimensional space (NOT the p-dimensional feature space): X = (x0, x1, ..., xp).
- The fitted vector ŷ is a linear combination of the column vectors xj, j = 0, 1, ..., p, so ŷ lies in the subspace spanned by the xj.
- RSS(β̂) = ||y - ŷ||².
- y - ŷ is perpendicular to that subspace, i.e., ŷ is the projection of y onto the subspace.
- The geometric interpretation is very helpful for understanding coefficient shrinkage and subset selection.
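The closed-form least squares solution and the hat matrix can be verified on simulated data. A small sketch (numpy assumed; the design and coefficients are hypothetical), checking that H is symmetric and idempotent and that the residual is orthogonal to the columns of X:

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 50, 3

# Design matrix with an intercept column, as in the notation above.
X = np.column_stack([np.ones(N), rng.normal(size=(N, p))])
beta_true = np.array([2.0, 1.0, -0.5, 0.3])
y = X @ beta_true + rng.normal(0, 0.1, size=N)

# Least squares estimate: beta_hat = (X^T X)^{-1} X^T y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Hat matrix H = X (X^T X)^{-1} X^T projects y onto the column space of X.
H = X @ np.linalg.solve(X.T @ X, X.T)
y_hat = H @ y

# Geometric facts: H is symmetric and idempotent, and the residual
# y - y_hat is orthogonal to every column of X.
print(np.allclose(H, H.T), np.allclose(H @ H, H))
print(np.allclose(X.T @ (y - y_hat), 0))
```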
Example Results for the SMSA Problem

    Ŷi = 143.89 + 0.341 Xi1 - 0.0193 Xi2 + 0.255 Xi3,    RSS(β̂) = 52942336.

If the Linear Model Is True

Suppose E(Y | X) = β0 + Σ_{j=1}^p Xj βj. Then the least squares estimate is unbiased:

    E(β̂j) = βj,    j = 0, 1, ..., p.

To draw inferences about β, further assume

    Y = E(Y | X) + ε,

where ε ~ N(0, σ²) and ε is independent of X. The xij are regarded as fixed; the Yi are random due to ε.
Estimation accuracy: Var(β̂) = (X^T X)^{-1} σ².
Under these assumptions, β̂ ~ N(β, (X^T X)^{-1} σ²).
Confidence intervals can be computed and significance tests can be done.
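The unbiasedness and the covariance formula can be illustrated by simulation under the stated model: fix X, redraw the noise many times, and refit. A rough sketch (numpy; all parameter values are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma = 40, 0.5
X = np.column_stack([np.ones(N), rng.normal(size=N)])  # fixed design
beta = np.array([1.0, 2.0])

# Repeatedly redraw the noise and refit; only Y is random, X is fixed.
reps = 5000
ests = np.empty((reps, 2))
XtX_inv = np.linalg.inv(X.T @ X)
for r in range(reps):
    y = X @ beta + rng.normal(0, sigma, size=N)
    ests[r] = XtX_inv @ X.T @ y

# E(beta_hat) is close to beta, and Cov(beta_hat) to (X^T X)^{-1} sigma^2.
print(ests.mean(axis=0))
print(np.cov(ests.T))
print(XtX_inv * sigma**2)
```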
Gauss-Markov Theorem

Assume the linear model is true. For any linear combination of the parameters β0, ..., βp, denoted by θ = a^T β, the estimate a^T β̂ is unbiased since β̂ is unbiased. The least squares estimate of θ is

    θ̂ = a^T β̂ = a^T (X^T X)^{-1} X^T y ≡ ã^T y,

which is linear in y.

Suppose c^T y is another unbiased linear estimate of θ, i.e., E(c^T y) = θ. The least squares estimate yields the minimum variance among all linear unbiased estimates:

    Var(ã^T y) ≤ Var(c^T y).

The coefficients βj, j = 0, 1, ..., p, are special cases of a^T β in which a has a single nonzero element, equal to 1.

Subset Selection and Coefficient Shrinkage

A biased estimate may yield better prediction accuracy. As an illustration, assume θ̂ ~ N(1, 1), so the squared loss of the unbiased estimate is E(θ̂ - 1)² = Var(θ̂) = 1. For a shrunken estimate θ̃ = a θ̂ with a ≤ 1,

    E(θ̃ - 1)² = Var(θ̃) + (E(θ̃) - 1)² = a² + (a - 1)²,

which is smaller than 1 for any 0 < a < 1: the squared error loss is reduced by shrinking the estimate.

Practical consideration: interpretation. Sometimes we are not satisfied with a "black box".

Subset Selection

- To choose k predictor variables from the total of p, search for the subset yielding the minimum RSS(β̂).
- Forward stepwise selection: start with the intercept, then sequentially add the predictor that most improves the fit.
- Backward stepwise selection: start with the full model, then sequentially delete predictors.
- How to choose k: stop forward or backward stepwise selection when no predictor produces an F-ratio statistic greater than a threshold.

Ridge Regression
Centered inputs. Suppose the xj, j = 1, ..., p, have had their means removed. Then

    β̂0 = ȳ = Σ_{i=1}^N yi / N.

If we also remove the mean of the yi, we can assume

    E(Y | X) = Σ_{j=1}^p βj Xj.

The input matrix X then has p (rather than p + 1) columns, and

    β̂ = (X^T X)^{-1} X^T y,    ŷ = X (X^T X)^{-1} X^T y.

Singular Value Decomposition (SVD)

If the column vectors of X are orthonormal, i.e., the variables Xj, j = 1, 2, ..., p, are uncorrelated and have unit norm, then the β̂j are simply the coordinates of y on the orthonormal basis X. In general, write

    X = U D V^T,

where
- U = (u1, u2, ..., up) is an N × p orthogonal matrix; the uj form an orthonormal basis for the space spanned by the column vectors of X.
- V = (v1, v2, ..., vp) is a p × p orthogonal matrix; the vj form an orthonormal basis for the space spanned by the row vectors of X.
- D = diag(d1, d2, ..., dp), where d1 ≥ d2 ≥ ... ≥ dp ≥ 0 are the singular values of X.

Principal Components

The sample covariance matrix of X is S = X^T X / N. Eigen decomposition of X^T X:

    X^T X = (U D V^T)^T (U D V^T) = V D U^T U D V^T = V D² V^T.

The eigenvectors vj of X^T X are called the principal component directions of X. It is easy to see that

    zj = X vj = uj dj,

so uj is simply the projection of the row vectors of X (the input predictor vectors) onto the direction vj, scaled by dj. For example,

    z1 = ( X1,1 v1,1 + X1,2 v1,2 + ... + X1,p v1,p,
           X2,1 v1,1 + X2,2 v1,2 + ... + X2,p v1,p,
           ...,
           XN,1 v1,1 + XN,2 v1,2 + ... + XN,p v1,p )^T.

The principal components of X are zj = dj uj, j = 1, ..., p. The first principal component z1 has the largest sample variance among all normalized linear combinations of the columns of X:

    Var(z1) = d1² / N.

Subsequent principal components zj have maximum variance dj² / N, subject to being orthogonal to the earlier ones.
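These SVD and principal-component identities are easy to confirm numerically. A sketch on centered, simulated inputs (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
N, p = 200, 4
X = rng.normal(size=(N, p)) @ rng.normal(size=(p, p))  # correlated columns
X -= X.mean(axis=0)                                    # centered inputs

# Thin SVD: X = U D V^T, with U (N x p), V (p x p), D the singular values.
U, d, Vt = np.linalg.svd(X, full_matrices=False)

# The principal component directions are eigenvectors of X^T X = V D^2 V^T.
evals = np.linalg.eigvalsh(X.T @ X)[::-1]              # sorted descending

# z_j = X v_j = d_j u_j: projection on direction v_j, scaled by d_j.
z1 = X @ Vt[0]
print(np.allclose(z1, d[0] * U[:, 0]))
print(np.allclose(evals, d**2))

# The first principal component has the largest sample variance, d_1^2 / N.
print(np.isclose(z1.var(), d[0]**2 / N))
```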
Ridge Regression

Minimize a penalized residual sum of squares:

    β̂_ridge = argmin_β { Σ_{i=1}^N (yi - β0 - Σ_{j=1}^p xij βj)² + λ Σ_{j=1}^p βj² }.

Equivalently,

    β̂_ridge = argmin_β Σ_{i=1}^N (yi - β0 - Σ_{j=1}^p xij βj)²    subject to    Σ_{j=1}^p βj² ≤ s.

λ (or s) controls the model complexity.
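Since the penalized criterion is strictly convex for λ > 0, its minimizer can be checked against perturbations. A sketch (numpy; simulated, centered data), using the standard closed-form ridge solution (X^T X + λI)^{-1} X^T y:

```python
import numpy as np

rng = np.random.default_rng(4)
N, p, lam = 100, 3, 2.0
X = rng.normal(size=(N, p))
X -= X.mean(axis=0)                      # centered inputs, no intercept
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=N)
y -= y.mean()

def prss(beta):
    """Penalized residual sum of squares."""
    r = y - X @ beta
    return r @ r + lam * beta @ beta

# Closed-form ridge solution.
b_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Any perturbation of the solution increases the penalized criterion.
worse = all(prss(b_ridge) < prss(b_ridge + eps)
            for eps in rng.normal(size=(20, p)) * 0.1)
print(worse)
```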
Solution

With centered inputs,

    RSS(λ) = (y - Xβ)^T (y - Xβ) + λ β^T β,

and

    β̂_ridge = (X^T X + λI)^{-1} X^T y.

A solution exists even when X^T X is singular, i.e., has zero eigenvalues. When X^T X is ill-conditioned (nearly singular), the ridge regression solution is more robust.

Geometric Interpretation

Center the inputs. Consider the fitted response

    ŷ = X β̂_ridge = X (X^T X + λI)^{-1} X^T y
      = U D (D² + λI)^{-1} D U^T y
      = Σ_{j=1}^p uj · dj² / (dj² + λ) · uj^T y,

where the uj are the normalized principal components of X. Ridge regression shrinks the coordinates with respect to the orthonormal basis formed by the principal components. The coordinate with respect to a principal component with smaller variance is shrunk more.
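The identity above — the ridge fit equals the principal-component expansion of y with each coordinate damped by dj²/(dj² + λ) — can be verified directly. A sketch (numpy; simulated data, λ chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(5)
N, p, lam = 80, 4, 3.0
X = rng.normal(size=(N, p))
X -= X.mean(axis=0)
y = rng.normal(size=N)
y -= y.mean()

# Ridge fit from the closed form.
y_ridge = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Same fit via the SVD: sum_j u_j * d_j^2/(d_j^2 + lam) * (u_j^T y).
U, d, Vt = np.linalg.svd(X, full_matrices=False)
shrink = d**2 / (d**2 + lam)
y_svd = U @ (shrink * (U.T @ y))

print(np.allclose(y_ridge, y_svd))
# Directions with smaller d_j are shrunk more: the factors decrease with j.
print(np.all(np.diff(shrink) <= 0))
```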
Instead of using X = (X1, X2, ..., Xp) as the predicting variables, use the transformed variables (Xv1, Xv2, ..., Xvp) as predictors. The input matrix becomes X̃ = UD (note X = U D V^T). Then for the new inputs,

    β̂j_ridge = dj / (dj² + λ) · uj^T y,    Var(β̂j) = σ² / dj²,

where σ² is the variance of the error term in the linear model. The factor of shrinkage given by ridge regression is

    dj² / (dj² + λ).

[Figure: the geometric interpretation of principal components and of shrinkage by ridge regression.]

Compare the squared loss E(β̂j - βj)²:
- Without shrinkage: σ² / dj².
- With shrinkage: Bias² + Variance = (λ / (dj² + λ))² βj² + (dj² / (dj² + λ))² σ² / dj².

The ratio of the squared loss with shrinkage to that without shrinkage is

    dj² (dj² + λ² βj² / σ²) / (dj² + λ)².

[Figure: the ratio between the squared loss with and without shrinkage; the amount of shrinkage is set by λ = 1.0, and the four curves correspond to βj²/σ² = 0.5, 1.0, 2.0, 4.0.]

When βj²/σ² = 0.5, 1.0, or 2.0, shrinkage always leads to lower squared loss. When βj²/σ² = 4.0, shrinkage leads to lower squared loss only when dj² ≤ 0.5 (dj ≤ about 0.71). Shrinkage is more beneficial when dj² is small.
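The conclusions about the loss ratio can be reproduced by tabulating the formula over a grid of singular values. A sketch (numpy; the grid is arbitrary):

```python
import numpy as np

def loss_ratio(d, lam, snr):
    """Squared loss with shrinkage / without, where snr = beta_j^2 / sigma^2."""
    d2 = d**2
    return d2 * (d2 + lam**2 * snr) / (d2 + lam)**2

d = np.linspace(0.05, 5.0, 500)

# For beta_j^2/sigma^2 in {0.5, 1, 2}, shrinkage (lam = 1) always helps.
for snr in (0.5, 1.0, 2.0):
    print(snr, np.all(loss_ratio(d, 1.0, snr) < 1.0))

# For beta_j^2/sigma^2 = 4, it helps only when d^2 <= 0.5 (d <= ~0.71).
r = loss_ratio(d, 1.0, 4.0)
print(np.all(r[d**2 < 0.5] < 1.0), np.all(r[d**2 > 0.5] > 1.0))
```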
Principal Components Regression (PCR)

Instead of smoothly shrinking the coordinates on the principal components, PCR either does not shrink a coordinate at all or shrinks it to zero. Principal components regression forms the derived input columns zm = X vm and then regresses y on z1, z2, ..., zM for some M ≤ p. The p - M smallest-eigenvalue components are discarded.

The Lasso

The lasso estimate is defined by

    β̂_lasso = argmin_β Σ_{i=1}^N (yi - β0 - Σ_{j=1}^p xij βj)²    subject to    Σ_{j=1}^p |βj| ≤ s.

Comparison with ridge regression: the L2 penalty Σ_{j=1}^p βj² is replaced by the L1 lasso penalty Σ_{j=1}^p |βj|. Some of the coefficients may be shrunk to exactly zero. Orthonormal columns in X are assumed in the following figure.

[Figure: comparison of lasso and ridge shrinkage under orthonormal X.]
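With exactly orthonormal columns in X, both estimators have simple closed forms: ridge scales every least squares coefficient by 1/(1 + λ), while the lasso — in its equivalent penalized form (1/2)||y - Xβ||² + λ Σ|βj|, a standard reformulation of the constrained problem — soft-thresholds each coefficient. A sketch (numpy; the data and λ are invented):

```python
import numpy as np

rng = np.random.default_rng(6)
N, p = 60, 4

# Build an X with exactly orthonormal columns via QR.
X, _ = np.linalg.qr(rng.normal(size=(N, p)))
beta = np.array([3.0, 1.5, 0.0, 0.1])        # one zero, one tiny coefficient
y = X @ beta + rng.normal(0, 0.05, size=N)

b_ls = X.T @ y                               # least squares (X^T X = I)
lam = 0.5

# Ridge: uniform proportional shrinkage of every coefficient.
b_ridge = b_ls / (1.0 + lam)

# Lasso under (1/2)||y - Xb||^2 + lam * sum|b_j|: soft-thresholding.
b_lasso = np.sign(b_ls) * np.maximum(np.abs(b_ls) - lam, 0.0)

print(b_ridge)   # every coefficient nonzero, all shrunk toward 0
print(b_lasso)   # small coefficients are set exactly to 0
```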
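Finally, the principal components regression procedure described earlier can be sketched in a few lines: form zm = X vm, keep the first M, and regress y on them. Because the zm are orthogonal, the coefficients decouple (numpy; the data and the choice M = 2 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
N, p, M = 100, 5, 2
X = rng.normal(size=(N, p)) @ rng.normal(size=(p, p))   # correlated columns
X -= X.mean(axis=0)
y = rng.normal(size=N)
y -= y.mean()

# Derived inputs z_m = X v_m from the SVD X = U D V^T.
U, d, Vt = np.linalg.svd(X, full_matrices=False)
Z = X @ Vt[:M].T                     # first M principal components

# Regress y on z_1, ..., z_M; the columns of Z are orthogonal, so the
# coefficients decouple: theta_m = z_m^T y / ||z_m||^2 = z_m^T y / d_m^2.
theta = (Z.T @ y) / (d[:M] ** 2)
y_pcr = Z @ theta

# Check against a full least squares fit of y on Z.
theta_ls, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(np.allclose(theta, theta_ls))
```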
This note was uploaded on 02/04/2012 for the course STAT 557 taught by Professor Jiali during the Fall '09 term at Pennsylvania State University, University Park.