lecture14-SVMs-handout-6-per

Datasets that are linearly separable with some noise work out great

…onding xi is a support vector.

  Then the classifying function will have the form:

    f(x) = Σ αi yi xiT x + b

  Notice that it relies on an inner product between the test point x and the support vectors xi – we will return to this later.

  Also keep in mind that solving the optimization problem involved computing the inner products xiT xj between all pairs of training points.
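As a sketch of this decision function, the sum over support vectors can be evaluated directly with inner products. The support vectors, multipliers αi, and bias b below are hypothetical illustrative values, not from the lecture:

```python
import numpy as np

# Hypothetical support vectors, labels, and learned multipliers (alpha_i > 0).
# These values are illustrative only, not taken from the lecture.
support_vectors = np.array([[1.0, 2.0], [2.0, 0.5], [-1.0, -1.5]])
y = np.array([1.0, 1.0, -1.0])      # class labels y_i in {-1, +1}
alpha = np.array([0.4, 0.2, 0.6])   # Lagrange multipliers alpha_i
b = -0.5                            # bias term

def f(x):
    """f(x) = sum_i alpha_i * y_i * <x_i, x> + b.
    Note only inner products between x and the support vectors are needed."""
    return np.sum(alpha * y * (support_vectors @ x)) + b

x_test = np.array([1.5, 1.0])
print(f(x_test))            # score of the test point: 3.4
print(np.sign(f(x_test)))   # predicted class: 1.0
```

Only the support vectors (points with αi > 0) enter the sum, which is why the rest of the training set can be discarded after training.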
Sec. 15.2.1 – Introduction to Information Retrieval

Soft Margin Classification

  If the training data is not linearly separable, slack variables ξi can be added to allow misclassification of difficult or noisy examples.
  - Allow some errors
    - Let some points be moved to where they belong, at a cost
  - Still, try to minimize training set errors, and to place the hyperplane "far" from each class (large margin)


Soft Margin Classification – Mathematically

  The old formulation:

    Find w and b such that Φ(w) = ½ wTw is minimized,
    and for all {(xi, yi)}: yi (wTxi + b) ≥ 1

  The new formulation, incorporating slack variables ξi:

    Find w and b such that Φ(w) = ½ wTw + C Σξi is minimized,
    and for all {(xi, yi)}: yi (wTxi + b) ≥ 1 − ξi, with ξi ≥ 0 for all i

  Parameter C can be viewed as a way to control overfitting – a regularization term.
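To make the role of the slack variables concrete, the sketch below evaluates the smallest feasible slack ξi = max(0, 1 − yi (wTxi + b)) and the soft-margin objective ½ wTw + C Σξi for a candidate hyperplane. The data, (w, b), and C are hypothetical values chosen for illustration:

```python
import numpy as np

# Hypothetical training data (labels in {-1, +1}) and a candidate hyperplane.
X = np.array([[2.0, 2.0], [1.0, 3.0], [-1.0, -1.0], [0.5, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = np.array([1.0, 1.0])
b = -1.0
C = 10.0  # regularization parameter: larger C penalizes slack more heavily

# Smallest feasible slack per point: xi_i = max(0, 1 - y_i (w.x_i + b)).
# Points on the correct side of the margin get xi_i = 0.
margins = y * (X @ w + b)
xi = np.maximum(0.0, 1.0 - margins)

# Soft-margin objective: (1/2) w.w + C * sum(xi)
objective = 0.5 * (w @ w) + C * xi.sum()
print(xi)         # [0.  0.  0.  0.5] -- only the last point violates the margin
print(objective)  # 6.0
```

Raising C makes margin violations more expensive (less slack tolerated, risk of overfitting); lowering C trades training errors for a wider margin.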
Soft Margin Classification – Solution

  The dual problem for soft margin classification:


Sec. 15.1 – Introduction to Information Retrieval

Classification with SVMs

  Given a new point x, we can score its projection onto the …
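The statement of the dual problem is cut off in this preview; in its standard textbook form it is: maximize Σαi − ½ Σi Σj αi αj yi yj xiT xj, subject to 0 ≤ αi ≤ C and Σαi yi = 0 (identical to the hard-margin dual except for the upper bound C on each αi). A minimal sketch evaluating this objective for a feasible α, with all data and values hypothetical:

```python
import numpy as np

# Standard soft-margin SVM dual (textbook form; the lecture's statement is
# cut off in this preview):
#   maximize  sum_i alpha_i - 1/2 sum_i sum_j alpha_i alpha_j y_i y_j <x_i, x_j>
#   subject to 0 <= alpha_i <= C and sum_i alpha_i y_i = 0
X = np.array([[1.0, 1.0], [2.0, 0.0], [-1.0, -1.0], [0.0, -2.0]])  # hypothetical
y = np.array([1.0, 1.0, -1.0, -1.0])
C = 1.0

def dual_objective(alpha):
    # Gram matrix of inner products, weighted by the label products y_i y_j.
    # Note the dual touches the data only through inner products x_i.x_j,
    # which is what makes the kernel trick possible later.
    G = (X @ X.T) * np.outer(y, y)
    return alpha.sum() - 0.5 * (alpha @ G @ alpha)

# A feasible alpha: each entry in [0, C] and sum(alpha * y) = 0.
alpha = np.array([0.3, 0.2, 0.5, 0.0])
print(dual_objective(alpha))  # dual objective value, ~ -0.04 here
```

A QP solver would maximize this objective over the feasible set; the sketch only shows how the objective and constraints are assembled from the training data.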
This document was uploaded on 02/26/2014.