
# 7 Model Assessment and Selection




## 7.1 Introduction

The generalization performance of a learning method relates to its prediction capability on independent test data. Assessment of this performance is extremely important in practice, since it guides the choice of learning method or model, and gives us a measure of the quality of the ultimately chosen model. In this chapter we describe and illustrate the key methods for performance assessment, and show how they are used to select models. We begin the chapter with a discussion of the interplay between bias, variance and model complexity.

## 7.2 Bias, Variance and Model Complexity

Figure 7.1 illustrates the important issue in assessing the ability of a learning method to generalize. Consider first the case of a quantitative or interval scale response. We have a target variable $Y$, a vector of inputs $X$, and a prediction model $\hat{f}(X)$ that has been estimated from a training set $\mathcal{T}$. The loss function for measuring errors between $Y$ and $\hat{f}(X)$ is denoted by $L(Y, \hat{f}(X))$. Typical choices are

$$
L(Y, \hat{f}(X)) =
\begin{cases}
(Y - \hat{f}(X))^2 & \text{squared error} \\
|Y - \hat{f}(X)| & \text{absolute error.}
\end{cases}
\tag{7.1}
$$

© Springer Science+Business Media, LLC 2009. T. Hastie et al., *The Elements of Statistical Learning*, Second Edition, DOI: 10.1007/b94608_7.

[Figure 7.1 (plot of prediction error versus model complexity in degrees of freedom) omitted.] FIGURE 7.1. Behavior of test sample and training sample error as the model complexity is varied. The light blue curves show the training error $\overline{\mathrm{err}}$, while the light red curves show the conditional test error $\mathrm{Err}_{\mathcal{T}}$ for 100 training sets of size 50 each, as the model complexity is increased. The solid curves show the expected test error $\mathrm{Err}$ and the expected training error $\mathrm{E}[\overline{\mathrm{err}}]$.
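The two loss functions in (7.1) can be sketched in a few lines of code. This is a minimal illustration on hypothetical data, not part of the text; the arrays `y` and `y_hat` stand in for $Y$ and $\hat{f}(X)$.

```python
import numpy as np

# Hypothetical responses Y and model predictions f_hat(X)
y = np.array([3.0, -0.5, 2.0, 7.0])
y_hat = np.array([2.5, 0.0, 2.0, 8.0])

# L(Y, f_hat(X)) = (Y - f_hat(X))^2 : squared error
squared_error = (y - y_hat) ** 2
# L(Y, f_hat(X)) = |Y - f_hat(X)|   : absolute error
absolute_error = np.abs(y - y_hat)

print(squared_error.mean())   # -> 0.375
print(absolute_error.mean())  # -> 0.5
```

Squared error penalizes large residuals more heavily than absolute error, which is why the two losses can favor different models on the same data.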
*Test error*, also referred to as *generalization error*, is the prediction error over an independent test sample,

$$
\mathrm{Err}_{\mathcal{T}} = \mathrm{E}[L(Y, \hat{f}(X)) \mid \mathcal{T}],
\tag{7.2}
$$

where both $X$ and $Y$ are drawn randomly from their joint distribution (population). Here the training set $\mathcal{T}$ is fixed, and test error refers to the error for this specific training set. A related quantity is the expected prediction error (or expected test error),

$$
\mathrm{Err} = \mathrm{E}[L(Y, \hat{f}(X))] = \mathrm{E}[\mathrm{Err}_{\mathcal{T}}].
\tag{7.3}
$$

Note that this expectation averages over everything that is random, including the randomness in the training set that produced $\hat{f}$. Figure 7.1 shows the prediction error (light red curves) $\mathrm{Err}_{\mathcal{T}}$ for 100 simulated training sets each of size 50. The lasso (Section 3.4.2) was used to produce the sequence of fits. The solid red curve is the average, and hence an estimate of $\mathrm{Err}$.
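The distinction between the conditional error $\mathrm{Err}_{\mathcal{T}}$ of (7.2) and the expected error $\mathrm{Err}$ of (7.3) can be made concrete by simulation. The sketch below is an assumption-laden stand-in for the text's lasso experiment: a made-up data-generating process ($Y = \sin 2X$ plus Gaussian noise) and a polynomial least-squares fit replace the lasso, and a large test sample approximates the population expectation. Only the structure (100 training sets of size 50, squared-error loss) mirrors Figure 7.1.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n):
    # Hypothetical population: Y = sin(2X) + noise (an assumption, not from the text)
    x = rng.uniform(0.0, 3.0, n)
    y = np.sin(2 * x) + rng.normal(0.0, 0.3, n)
    return x, y

# A large test sample stands in for the (X, Y) population
x_test, y_test = simulate(10_000)

def err_T(x_train, y_train, degree=5):
    # Conditional test error Err_T, eq. (7.2): fit f_hat on ONE fixed
    # training set T, then average squared-error loss over the population.
    coef = np.polyfit(x_train, y_train, degree)
    return np.mean((y_test - np.polyval(coef, x_test)) ** 2)

# Expected test error Err = E[Err_T], eq. (7.3): average Err_T over
# many independently drawn training sets of size 50.
errors = [err_T(*simulate(50)) for _ in range(100)]
print(np.mean(errors))  # Monte Carlo estimate of Err
```

Each entry of `errors` corresponds to one light red curve's value at a fixed complexity in Figure 7.1; their average corresponds to the solid red curve.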

## This note was uploaded on 07/14/2010 for the course STAT 132 taught by Professor Haulk during the Spring '10 term at UBC.
