5 Basis Expansions and Regularization

5.1 Introduction

We have already made use of models linear in the input features, both for regression and classification. Linear regression, linear discriminant analysis, logistic regression and separating hyperplanes all rely on a linear model. It is extremely unlikely that the true function f(X) is actually linear in X. In regression problems, f(X) = E(Y | X) will typically be nonlinear and nonadditive in X, and representing f(X) by a linear model is usually a convenient, and sometimes a necessary, approximation. Convenient, because a linear model is easy to interpret, and is the first-order Taylor approximation to f(X). Sometimes necessary, because with N small and/or p large, a linear model might be all we are able to fit to the data without overfitting. Likewise in classification, a linear, Bayes-optimal decision boundary implies that some monotone transformation of Pr(Y = 1 | X) is linear in X. This is inevitably an approximation.

In this chapter and the next we discuss popular methods for moving beyond linearity. The core idea in this chapter is to augment or replace the vector of inputs X with additional variables, which are transformations of X, and then use linear models in this new space of derived input features.

Denote by h_m(X) : \mathbb{R}^p \mapsto \mathbb{R} the mth transformation of X, m = 1, ..., M. We then model

    f(X) = \sum_{m=1}^{M} \beta_m h_m(X),        (5.1)

a linear basis expansion in X.
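As a concrete illustration of (5.1), and not part of the text itself, here is a minimal Python sketch: pick some transformations h_m, build the derived design matrix, and fit the coefficients β_m by ordinary least squares. The helper name basis_expansion and the particular basis (intercept, X, X²) are arbitrary illustrative choices.

```python
import numpy as np

def basis_expansion(x):
    """Map a 1-D input to derived features h_1(x), ..., h_M(x).
    Here M = 3: an intercept, the input itself, and its square."""
    return np.column_stack([np.ones_like(x), x, x**2])

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=100)
y = np.sin(x) + rng.normal(scale=0.3, size=100)   # a nonlinear true f(X)

H = basis_expansion(x)                        # N x M matrix with entries h_m(x_i)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # linear fit in the derived features
f_hat = H @ beta                              # fitted f(x) = sum_m beta_m h_m(x)
```

Once H is formed, the fit is ordinary linear regression; nothing about the fitting machinery changes.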
The beauty of this approach is that once the basis functions h_m have been determined, the models are linear in these new variables, and the fitting proceeds as before. Some simple and widely used examples of the h_m are the following:

• h_m(X) = X_m, m = 1, ..., p, recovers the original linear model.

• h_m(X) = X_j^2 or h_m(X) = X_j X_k allows us to augment the inputs with polynomial terms to achieve higher-order Taylor expansions. Note, however, that the number of variables grows exponentially in the degree of the polynomial. A full quadratic model in p variables requires O(p^2) square and cross-product terms, or more generally O(p^d) for a degree-d polynomial.

• h_m(X) = log(X_j), \sqrt{X_j}, ... permits other nonlinear transformations of single inputs. More generally one can use similar functions involving several inputs, such as h_m(X) = ||X||.

• h_m(X) = I(L_m \le X_k < U_m), an indicator for a region of X_k. Breaking the range of X_k up into M_k such nonoverlapping regions results in a model with a piecewise constant contribution for X_k (a small sketch of this construction appears at the end of this section).

Sometimes the problem at hand will call for particular basis functions h_m, such as logarithms or power functions. More often, however, we use the basis expansions as a device to achieve more flexible representations for f(X).
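To make the last item concrete, here is a hedged sketch (again not from the text) of the indicator basis h_m(X) = I(L_m ≤ X_k < U_m): cut the range of one input into M_k nonoverlapping regions and fit one coefficient per region. The breakpoints and the choice M_k = 4 are arbitrary.

```python
import numpy as np

def indicator_basis(x, breaks):
    """One column per region [breaks[m], breaks[m+1]); columns are disjoint."""
    return np.column_stack([
        (x >= lo) & (x < hi) for lo, hi in zip(breaks[:-1], breaks[1:])
    ]).astype(float)

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=200)
y = np.cos(3 * x) + rng.normal(scale=0.2, size=200)

breaks = np.linspace(0, 1 + 1e-9, 5)          # 4 nonoverlapping regions covering [0, 1]
H = indicator_basis(x, breaks)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # piecewise-constant fit
```

Because the regions are disjoint, each β_m is simply the mean of y over its region, which is exactly the piecewise-constant contribution the text describes.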