Lecture 11. Optimistic VC inequality. 18.465

Last time we proved the Pessimistic VC inequality:
\[
\mathbb{P}\left( \sup_{C \in \mathcal{C}} \left| \frac{1}{n} \sum_{i=1}^{n} I(X_i \in C) - P(C) \right| \geq t \right) \leq 4 \left( \frac{2en}{V} \right)^{V} e^{-nt^2/8},
\]
which can be rewritten, with
\[
t = \sqrt{\frac{8\left(\log 4 + V \log\frac{2en}{V} + u\right)}{n}},
\]
as
\[
\mathbb{P}\left( \sup_{C \in \mathcal{C}} \left| \frac{1}{n} \sum_{i=1}^{n} I(X_i \in C) - P(C) \right| \leq \sqrt{\frac{8\left(\log 4 + V \log\frac{2en}{V} + u\right)}{n}} \right) \geq 1 - e^{-u}.
\]
Hence, the rate is \(\sqrt{V \log n / n}\).
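Filling in the algebra behind this choice of t: it is exactly the value that makes the pessimistic tail bound equal to \(e^{-u}\). Setting the two sides equal and solving,

```latex
4\left(\frac{2en}{V}\right)^{V} e^{-nt^2/8} = e^{-u}
\;\Longleftrightarrow\;
\frac{nt^2}{8} = \log 4 + V \log\frac{2en}{V} + u
\;\Longleftrightarrow\;
t = \sqrt{\frac{8\left(\log 4 + V \log\frac{2en}{V} + u\right)}{n}}.
```

Plugging this t back into the tail bound gives the high-probability statement above.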
In this lecture we will prove the Optimistic VC inequality, which improves on this rate when P(C) is small.
As before, we have pairs (X_i, Y_i), Y_i = ±1. These examples are labeled according to some unknown C_0 such that Y = 1 if X ∈ C_0 and Y = -1 if X ∉ C_0.
Let C = {C : C ⊆ X}, a set of classifiers. A classifier C makes a mistake if X ∈ (C \ C_0) ∪ (C_0 \ C) = C △ C_0.
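As a concrete illustration (with a hypothetical setup not from the lecture: X uniform on [0,1], the true set C_0 and the classifier C both intervals), the event "C misclassifies X" coincides pointwise with the event "X ∈ C △ C_0", so the empirical mistake rate is exactly the empirical frequency of the symmetric difference:

```python
import random

C0 = (0.0, 0.5)  # hypothetical true set: label is +1 on C0, -1 off it
C = (0.2, 0.7)   # hypothetical classifier, also an interval

def label(x):
    # Y = +1 if X in C0, else Y = -1
    return 1 if C0[0] <= x <= C0[1] else -1

def predict(x):
    # classifier predicts +1 if X in C, else -1
    return 1 if C[0] <= x <= C[1] else -1

def in_sym_diff(x):
    # X lies in C triangle C0 iff it is in exactly one of the two sets
    return (C0[0] <= x <= C0[1]) != (C[0] <= x <= C[1])

random.seed(0)
xs = [random.random() for _ in range(100_000)]

# Empirical mistake rate and empirical frequency of the symmetric difference:
mistake_rate = sum(predict(x) != label(x) for x in xs) / len(xs)
sym_diff_rate = sum(in_sym_diff(x) for x in xs) / len(xs)

# Both estimate P(C triangle C0) = |[0,0.2)| + |(0.5,0.7]| = 0.4 here.
print(mistake_rate, sym_diff_rate)
```

The two rates agree exactly on every sample, since "mistake" and "X ∈ C △ C_0" are the same event; their common value is the empirical estimate of the generalization error P(C △ C_0).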
Similarly to last lecture, we can derive bounds on
\[
\sup_{C \in \mathcal{C}} \left| \frac{1}{n} \sum_{i=1}^{n} I(X_i \in C \triangle C_0) - P(C \triangle C_0) \right|,
\]
where P(C △ C_0) is the generalization error.
Let C^△ = {C △ C_0 : C ∈ C}. One can prove that VC(C^△) ≤ VC(C) and Δ_n(C^△, X_1, ..., X_n) ≤ Δ_n(C, X_1, ..., X_n).
Spring '07, Panchenko, Statistics.