Lecture 19: Leave-one-out approximations
Sayan Mukherjee

Description: We introduce the idea of cross-validation, with leave-one-out as its extreme form. We show that the leave-one-out estimate is almost unbiased. We then derive a series of approximations and bounds on the leave-one-out error that are used for computational efficiency, first for the least-squares loss and then for the SVM loss function. We close by noting that, in a worst-case analysis, the leave-one-out error is not a significantly better estimate of expected error than the training error.
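The computational appeal of these approximations can be seen in the least-squares case, where the leave-one-out error has an exact closed form. A minimal sketch (notation and variable names are mine, not the lecture's): with hat matrix H = X(XᵀX)⁻¹Xᵀ and training residual eᵢ, the i-th leave-one-out residual equals eᵢ / (1 − Hᵢᵢ), so the full leave-one-out error requires only one fit instead of n.

```python
import numpy as np

# Synthetic regression data (illustrative only).
rng = np.random.default_rng(0)
n, d = 40, 3
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# One least-squares fit on all n points.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Diagonal of the hat matrix H = X (X^T X)^{-1} X^T.
h = np.einsum("ij,ji->i", X, np.linalg.solve(X.T @ X, X.T))

# Closed-form leave-one-out residuals: e_i / (1 - H_ii).
loo_fast = np.mean((resid / (1.0 - h)) ** 2)

# Brute-force leave-one-out: n separate fits.
errs = []
for i in range(n):
    mask = np.arange(n) != i
    b, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    errs.append((y[i] - X[i] @ b) ** 2)
loo_slow = np.mean(errs)

assert np.isclose(loo_fast, loo_slow)  # the shortcut is exact for least squares
```

For the SVM loss no such exact identity exists, which is why the lecture turns to approximations and bounds instead.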
Suggested Reading

- V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
- O. Chapelle et al. Choosing Multiple Parameters for Support Vector Machines. Machine Learning, 2002.
- G. Wahba. Spline Models for Observational Data. Series in Applied Mathematics, Vol. 59, SIAM, 1990.
- T. Jaakkola and D. Haussler. Probabilistic kernel regression models. In Proceedings of the Seventh International Workshop on Artificial Intelligence and Statistics, 1999.