Lect25

Announcements

• Final: 7-8:15 PM, Wed. 12/15, here
• Q/A session: 11-noon, Mon. 12/13, 2405SC
• Projects (for 4 credits) due Tue. 12/7
  – Code
  – Sample I/O (if it doesn't work, say so)
  – Paper discussing
    • What you did & why
    • What you learned
    • How you would do it differently given…

Computational Learning Theory: How Much Data Is Enough?

• The training set is evidence for which h ∈ H is
  – Correct: [Simple, Proper, Realizable??] learning
  – Best: Agnostic learning
• Remember: the training set = labeled independent samples from an underlying population
• Suppose we perform well on the training set
• How well will we perform on the underlying population?
• This is the test accuracy or utility of a concept (not how well it classifies the training set)

What Makes a Learning Problem Hard?

• How do we measure "hard"?
• Computation time? Space complexity?
• What is the valuable resource? Training examples
• Hard learning problems require more training examples
• The hardest learning problems require the entire example space to be labeled

[Simple] Learning

• PAC formulation: Probably Approximately Correct
• Example space X, sampled with a fixed but unknown distribution D
• Some target concept h* ∈ H is used to label an i.i.d. (according to D) sample S of N examples
• Finite H
• Algorithm: return any h ∈ H that agrees with all N training examples in S (|S| = N)
• Choose N sufficiently large that, with high confidence (1 − δ), h has accuracy of at least 1 − ε, where 0 < ε, δ << 1:

  N ≥ (1/ε) (ln |H| + ln (1/δ))

Simple Learning (simple derivation)

• What is the probability that a bad hypothesis looks good?...
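The sample-size bound above is easy to evaluate numerically. The sketch below computes the smallest N satisfying N ≥ (1/ε)(ln|H| + ln(1/δ)); the hypothesis-class size and the ε, δ values are illustrative assumptions, not numbers from the lecture:

```python
import math

def pac_sample_bound(h_size: int, eps: float, delta: float) -> int:
    """Smallest integer N with N >= (1/eps) * (ln|H| + ln(1/delta)).

    With this many i.i.d. training examples, any h in H consistent with
    all of them has true error below eps with probability >= 1 - delta.
    """
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / eps)

# Illustrative numbers: |H| = 2**10 hypotheses, 90% accuracy (eps = 0.1),
# 95% confidence (delta = 0.05).
print(pac_sample_bound(2**10, eps=0.1, delta=0.05))  # -> 100
```

Note that N grows only logarithmically in |H| and in 1/δ, but linearly in 1/ε, so demanding higher accuracy is far more expensive than demanding higher confidence.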