CS229 Lecture notes
Andrew Ng

Part VI
Learning Theory

1 Bias/variance tradeoff

When talking about linear regression, we discussed the problem of whether to fit a "simple" model such as the linear "y = θ_0 + θ_1 x," or a more "complex" model such as the polynomial "y = θ_0 + θ_1 x + ··· + θ_5 x^5." We saw the following example:

[Figure: three plots of y against x for the same training set, showing (left to right) a linear fit, a quadratic fit, and a 5th-order polynomial fit.]

Fitting a 5th order polynomial to the data (rightmost figure) did not result in a good model. Specifically, even though the 5th order polynomial did a very good job predicting y (say, prices of houses) from x (say, living area) for the examples in the training set, we do not expect the model shown to be a good one for predicting the prices of houses not in the training set. In other words, what has been learned from the training set does not generalize well to other houses. The generalization error (which will be made formal shortly) of a hypothesis is its expected error on examples not necessarily in the training set.

Both the models in the leftmost and the rightmost figures above have large generalization error. However, the problems that the two models suffer from are very different. If the relationship between y and x is not linear, then even if we were fitting a linear model to a very large amount of training data, the linear model would still fail to accurately capture the structure in the data. Informally, we define the bias of a model to be the expected generalization error even if we were to fit it to a very (say, infinitely) large training set. Thus, for the problem above, the linear model suffers from large bias, and may underfit (i.e., fail to capture structure exhibited by) the data.

Apart from bias, there is a second component to the generalization error, consisting of the variance of a model fitting procedure.
Specifically, when fitting a 5th order polynomial as in the rightmost figure, there is a large risk that we are fitting patterns in the data that happened to be present in our small, finite training set, but that do not reflect the wider pattern of the relationship between x and y. This could be, say, because in the training set we just happened by chance to get a slightly more-expensive-than-average house here, and a slightly less-expensive-than-average house there, and so on. By fitting these "spurious" patterns in the training set, we might again obtain a model with large generalization error. In this case, we say the model has large variance.

Often, there is a tradeoff between bias and variance. If our model is too "simple" and has very few parameters, then it may have large bias (but small variance); if it is too "complex" and has very many parameters, then it may suffer from large variance (but have smaller bias). In the example above, fitting a quadratic function does better than either of the extremes of a first or a fifth order polynomial.
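The tradeoff above can be sketched numerically. The following is a minimal experiment, not part of the original notes: it assumes a synthetic quadratic ground truth (standing in for the housing data) and uses numpy's least-squares polynomial fit. Polynomials of degree 1, 2, and 5 are fit to many small training sets, and their error on one large held-out set approximates the generalization error of each degree.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    # Assumed ground truth (illustrative, not from the notes):
    # y is quadratic in x, plus Gaussian noise.
    x = rng.uniform(0, 4, size=n)
    y = 1.0 + 0.5 * x + 0.25 * x**2 + rng.normal(scale=0.3, size=n)
    return x, y

# A large held-out set approximates the expected error on unseen examples.
x_test, y_test = sample(5_000)

def avg_test_mse(degree, trials=200, n_train=8):
    """Average held-out MSE of degree-`degree` fits over many small training sets."""
    total = 0.0
    for _ in range(trials):
        x_tr, y_tr = sample(n_train)
        coeffs = np.polyfit(x_tr, y_tr, deg=degree)
        total += float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    return total / trials

errors = {d: avg_test_mse(d) for d in (1, 2, 5)}
# Degree 1 underfits (large bias: more data would not remove its error);
# degree 5 overfits each 8-point training set (large variance);
# degree 2 matches the true structure and generalizes best.
```

Averaging over many resampled training sets is what isolates the variance term: each individual degree-5 fit tracks its own training set closely, but the fits disagree wildly with one another, so their average held-out error is high even though their bias is low.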