Bias and variance (the bias-variance tradeoff) for model complexity control.

How do we choose a good classifier? Our goal is to find a classifier that minimizes the true error rate P(f(X) != Y).

Figure 1: An example of a model with a family of polynomials.

Recall the empirical error rate. Given a classifier f and data (x_1, y_1), ..., (x_n, y_n), we apply f to each x_i and take the average of the mistakes to get the empirical error rate

    err_hat(f) = (1/n) * sum_{i=1}^{n} 1[f(x_i) != y_i],

which is an estimate of the true error rate, i.e. of the probability that f misclassifies a new point. There is a downward bias to this estimate when f is fit on the same data, meaning it is on average less than the true error rate. As we increase model complexity from low to high, the training error rate always decreases. When we apply the model to test data, however, the error rate decreases only up to a point and then increases, since the model has not seen that data before. This is explained as follows: training error decreases as we fit the model better by increasing its complexity, but, as we have seen, this complex model does not generalize well, resulting in a larger test error. We use the test data (the test-sample curve shown in Figure 2) to get an empirical error rate; the right complexity is defined as the point where the test error rate is minimal, and this is one idea behind complexity control.

Figure 2

We assume that we have samples x_1, ..., x_n (observations) that follow some (possibly unknown) distribution. We want to estimate a parameter theta, or some other quantity, of the unknown distribution. This parameter may be the mean mu or the variance sigma^2. The unknown parameter theta is a fixed real number. To estimate it, we use an estimator theta_hat = T(x_1, ..., x_n), which is a function of our observations.

Figure 3

One property we desire of the estimator is that it is correct on average, that is, it is unbiased: E[theta_hat] = theta. However, there is a more important property for an estimator than just being unbiased: the mean squared error,

    MSE(theta_hat) = E[(theta_hat - theta)^2] = Var(theta_hat) + Bias(theta_hat)^2.

In statistics, there are problems for which it may be good to use an estimator with a small bias.
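The training-versus-test behavior described above can be sketched with a small simulation. This is an illustrative example, not from the notes: the data-generating function (a noisy cubic), the sample sizes, and the degree range are all assumptions chosen to make the effect visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: noisy samples from a cubic; sizes are illustrative.
def make_data(n):
    x = rng.uniform(-1, 1, n)
    y = x**3 - x + rng.normal(scale=0.2, size=n)
    return x, y

x_train, y_train = make_data(30)
x_test, y_test = make_data(2000)

def mse(coeffs, x, y):
    """Empirical (squared) error of a fitted polynomial on data (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

train_err, test_err = {}, {}
for degree in range(1, 13):           # complexity = polynomial degree
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err[degree] = mse(coeffs, x_train, y_train)
    test_err[degree] = mse(coeffs, x_test, y_test)

# Training error keeps shrinking as complexity grows (downward bias),
# while test error is minimized at some intermediate "right" complexity.
best = min(test_err, key=test_err.get)
print("degree with minimum test error:", best)
```

Picking the degree that minimizes the test error is exactly the complexity-control idea behind Figure 2.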
In some cases, an estimator with a small bias may have a smaller mean squared error or be median-un...