Announcements

- Final: 7–8:15 PM, Wed. 12/15, here
- Q/A session: 11 AM–noon, Mon. 12/13, 2405 SC
- Projects (for 4 credits) due Tue. 12/7:
  - Code
  - Sample I/O (if it doesn't work, say so)
  - Paper discussing:
    - What you did & why
    - What you learned
    - How you would do it differently given ...

Computational Learning Theory: How Much Data is Enough?

- The training set is evidence for which h ∈ H is:
  - Correct: [Simple / Proper / Realizable] learning
  - Best: Agnostic learning
- Remember: the training set = labeled, independent samples from an underlying population.
- Suppose we perform well on the training set. How well will we perform on the underlying population?
- This is the test accuracy, or utility, of a concept (not how well it classifies the training set).

What Makes a Learning Problem Hard?

- How do we measure "hard"? Computation time? Space complexity?
- What is the valuable resource? Training examples.
- Hard learning problems require more training examples.
- The hardest learning problems require the entire example space to be labeled.

[Simple] Learning: the PAC Formulation

- Probably Approximately Correct.
- Example space X is sampled with a fixed but unknown distribution D.
- Some target concept h* ∈ H is used to label an iid (according to D) sample S of N examples; |S| = N.
- H is finite.
- Algorithm: return any h ∈ H that agrees with all N training examples in S.
- Choose N sufficiently large that, with high confidence (1 − δ), h has accuracy of at least 1 − ε, where 0 < ε, δ << 1:

  N ≥ (1/ε) (ln |H| + ln (1/δ))

Simple Learning (a simple derivation)

- What is the probability that a bad hypothesis looks good? ...
Fall '08
Levinson, S.
