# Generalization Bounds and Stability

9.520 Class 06, 26 February 2003. Alex Rakhlin.

## Plan

- Generalization bounds
- Stability
- Generalization bounds using stability
## Algorithms

We define an algorithm $A$ to be a mapping from a training set $S = \{z_1, \ldots, z_n\}$ to a function $f_S$. Here, $z_i = (x_i, y_i)$. Throughout the next several lectures, we assume that $A$ is deterministic, and that $A$ does not depend on the ordering of the points in the training set. These assumptions are not very restrictive, but they greatly simplify the math. How can we measure the "goodness" of $f_S$?
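As a concrete instance (illustrative, not from the lecture), one-dimensional least squares is such a mapping: it is deterministic, and because it only computes sums over $S$, it does not depend on the ordering of the training points. A minimal sketch:

```python
def least_squares_algorithm(S):
    """An algorithm A: maps a training set S = [(x_1, y_1), ...] to a
    function f_S, here by 1-D ordinary least squares. The sums below are
    order-independent, so A does not depend on the ordering of S."""
    n = len(S)
    xbar = sum(x for x, _ in S) / n
    ybar = sum(y for _, y in S) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in S)
             / sum((x - xbar) ** 2 for x, _ in S))
    intercept = ybar - slope * xbar
    return lambda x: intercept + slope * x   # the learned function f_S

S = [(0.0, 0.1), (1.0, 1.9), (2.0, 4.2)]
f_S = least_squares_algorithm(S)
# Same training set in a different order yields the same f_S.
assert abs(f_S(1.5) - least_squares_algorithm(list(reversed(S)))(1.5)) < 1e-9
```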

## Risks

Recall that in Lecture 2 we defined the true (expected) risk:

$$I[f_S] = \mathbb{E}_{(x,y)}\left[ V(f_S(x), y) \right] = \int V(f_S(x), y)\, d\mu(x, y)$$

and the empirical risk:

$$I_S[f_S] = \frac{1}{n} \sum_{i=1}^{n} V(f_S(x_i), y_i).$$

Note: the true and empirical risks are denoted in Bousquet and Elisseeff by $R(A, S)$ and $\hat{R}(A, S)$, respectively, to emphasize the algorithm that produced $f_S$.

Note: the loss is sometimes written as $c(f, z) = V(f(x), y)$, where $z = (x, y)$.
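The empirical risk is directly computable from $S$. A minimal sketch with the square loss (the data and learned function here are made up for illustration):

```python
def empirical_risk(f_S, S, V):
    """I_S[f_S] = (1/n) * sum_{i=1}^{n} V(f_S(x_i), y_i)."""
    return sum(V(f_S(x), y) for x, y in S) / len(S)

square_loss = lambda fx, y: (fx - y) ** 2

S = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
f_S = lambda x: 0.9 * x                      # stand-in for a learned function
print(empirical_risk(f_S, S, square_loss))   # (0 + 0.01 + 0.04) / 3
```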
## Generalization Bounds

Our goal is to choose an algorithm $A$ so that $I[f_S]$ will be small. This is difficult because we can't measure $I[f_S]$. We can, however, measure $I_S[f_S]$. A generalization bound is a (probabilistic) bound on how big the defect

$$D[f_S] = I[f_S] - I_S[f_S]$$

can be. If we can bound the defect and we observe that $I_S[f_S]$ is small, then $I[f_S]$ must be small. Note that this is consistency, as we defined it in Lecture 2: $D[f_S] \to 0$ as $n \to \infty$.
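We cannot measure $I[f_S]$ on real data, but on synthetic data, where we control the distribution, we can approximate it with a large fresh sample and observe the defect directly. A hedged sketch (the distribution, loss, and sample sizes are invented for illustration):

```python
import random

random.seed(0)

def draw(n):
    # Synthetic distribution: x uniform on [0, 1], y = x + Gaussian noise.
    pts = []
    for _ in range(n):
        x = random.uniform(0.0, 1.0)
        pts.append((x, x + random.gauss(0.0, 0.1)))
    return pts

V = lambda fx, y: (fx - y) ** 2
risk = lambda f, data: sum(V(f(x), y) for x, y in data) / len(data)

S = draw(30)                  # the (small) training set we actually see
f_S = lambda x: x             # pretend the algorithm recovered the target
I_S = risk(f_S, S)            # empirical risk I_S[f_S]
I = risk(f_S, draw(200_000))  # large fresh sample approximates I[f_S]
D = I - I_S                   # the defect D[f_S] = I[f_S] - I_S[f_S]
print(I_S, I, D)
```

Here the true risk is close to the noise variance (0.01), while the 30-point empirical risk fluctuates around it; the defect is the gap between the two.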

## Properties of Generalization Bounds, I

What will a generalization bound depend on? A generalization bound is a way of saying that the performance of a function on the training set has to be similar to its performance on future examples. For this reason, generalization bounds are always probabilistic: they hold with some (high) probability, to take into account the (low) chance that you'll see a very unrepresentative training set.
Generalization bounds also depend on some measure of the size of the hypothesis space we allow ourselves to choose from. As the hypothesis space gets smaller, the generalization bound gets tighter (but the empirical performance will often go down); as it gets bigger, the bound gets looser. Finally, the bound depends on the number of samples we have: more samples yield a tighter bound.
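These dependencies can be made concrete in the simplest setting, a finite hypothesis space $\mathcal{H}$ with a loss bounded in $[0, 1]$, where Hoeffding's inequality plus a union bound give: with probability at least $1 - \delta$, $|I[f] - I_S[f]| \le \sqrt{\ln(2|\mathcal{H}|/\delta) / (2n)}$ for every $f \in \mathcal{H}$. This classical bound is not derived in this lecture; the sketch below just evaluates it to show both dependencies:

```python
import math

def hoeffding_union_bound(card_H, n, delta=0.05):
    """Uniform deviation bound for a finite hypothesis space H and a
    loss in [0, 1]: with probability >= 1 - delta, for all f in H,
    |I[f] - I_S[f]| <= sqrt(ln(2*|H|/delta) / (2*n))."""
    return math.sqrt(math.log(2 * card_H / delta) / (2 * n))

# Bigger hypothesis space => looser bound; more samples => tighter bound.
assert hoeffding_union_bound(10**6, 1000) > hoeffding_union_bound(10, 1000)
assert hoeffding_union_bound(10, 10**4) < hoeffding_union_bound(10, 1000)
```

Note the logarithmic dependence on $|\mathcal{H}|$ and the $1/\sqrt{n}$ dependence on the sample size; the stability-based bounds developed next avoid measuring the hypothesis space size at all.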
