4.4 Other Bounds on Generalisation and Luckiness
The previous section considered bounds on generalisation performance in terms of measures of the margin distribution.
We argued that bounds applicable in high dimensional spaces must be able to take advantage of favourable input
distributions, those that are in some sense aligned with the target function. Such bounds must avoid dependence on the
dimension of the input space in favour of dependence on quantities measured as a result of running the training
algorithm, quantities that effectively assess how favourable the input distribution is. We described three results
showing dependence on three different measures of the margin distribution. In this section we briefly argue that bounds
of this type need not depend on the margin at all. In particular, the size of a sample compression scheme can be used to
bound the generalisation error by a relatively straightforward argument due to Littlestone and Warmuth. A sample
compression scheme is defined by a fixed rule for reconstructing a hypothesis from a subset of the training examples.
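As an illustration of how such a bound behaves, the following sketch computes one common form of a Littlestone–Warmuth style compression bound: with probability at least 1 - delta, a hypothesis reconstructed from d of the m training examples and consistent with the remaining m - d has error at most (ln C(m, d) + ln(1/delta)) / (m - d). The function name and this particular form of the bound are assumptions for the sketch, not the book's exact statement.

```python
import math

def compression_bound(m, d, delta):
    """Littlestone-Warmuth style compression bound (illustrative form).

    m     -- number of training examples
    d     -- size of the compression set (examples needed to
             reconstruct the hypothesis, e.g. support vectors)
    delta -- confidence parameter; the bound holds with
             probability at least 1 - delta
    """
    # ln C(m, d) computed via log-gamma to avoid huge intermediate
    # binomial coefficients: ln m! - ln d! - ln (m - d)!
    log_choose = (math.lgamma(m + 1)
                  - math.lgamma(d + 1)
                  - math.lgamma(m - d + 1))
    # Error bound: (ln C(m, d) + ln(1/delta)) / (m - d)
    return (log_choose + math.log(1.0 / delta)) / (m - d)

# A hypothesis reconstructable from 10 of 1000 examples generalises
# well; the bound tightens as m grows with d fixed.
b_small = compression_bound(1000, 10, 0.05)
b_large = compression_bound(2000, 10, 0.05)
```

Note that the bound is dimension-free: it depends only on the sample size m, the compression set size d, and the confidence delta, mirroring the section's point that margin quantities are not the only route to dimension-independent guarantees.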
This note was uploaded on 10/15/2011 for the course MBAHRM 565 taught by Professor Profbhattacharya during the Spring '11 term at IIT Kanpur.