Support Vector Machines For Classification
9.520 Class 06, 24 February 2003
Ryan Rifkin

Plan

• Regularization derivation of SVMs
• Geometric derivation of SVMs
• Optimality, Duality and Large Scale SVMs
• SVMs and RLSC: Compare and Contrast

The Regularization Setting (Again)

We are given \ell examples (x_1, y_1), \ldots, (x_\ell, y_\ell), with x_i \in \mathbb{R}^n and y_i \in \{-1, 1\} for all i. As mentioned last class, we can find a classification function by solving a regularized learning problem:

    \min_{f \in \mathcal{H}} \frac{1}{\ell} \sum_{i=1}^{\ell} V(y_i, f(x_i)) + \lambda \|f\|_K^2.

Note that in this class we specifically consider binary classification.

The Hinge Loss

The classical SVM arises by considering the specific loss function

    V(f(x), y) \equiv (1 - y f(x))_+,

where

    (k)_+ \equiv \max(k, 0).

[Figure: plot of the hinge loss (1 - y f(x))_+ as a function of y \cdot f(x); the loss is zero for y f(x) \ge 1 and increases linearly as y f(x) decreases below 1.]

Substituting In The Hinge Loss

With the hinge loss, our regularization problem becomes

    \min_{f \in \mathcal{H}} \frac{1}{\ell} \sum_{i=1}^{\ell} (1 - y_i f(x_i))_+ + \lambda \|f\|_K^2.

Slack Variables

This problem is nondifferentiable (because of the "kink" in V), so we introduce slack variables \xi_i to make the problem easier to work with:

    \min_{f \in \mathcal{H}} \frac{1}{\ell} \sum_{i=1}^{\ell} \xi_i + \lambda \|f\|_K^2
    subject to:  y_i f(x_i) \ge 1 - \xi_i,    i = 1, \ldots, \ell
                 \xi_i \ge 0,                 i = 1, \ldots, \ell

Applying The Representer Theorem

Substituting in

    f^*(x) = \sum_{i=1}^{\ell} c_i K(x, x_i),

we arrive at a constrained quadratic programming problem:

    \min_{c \in \mathbb{R}^\ell} \frac{1}{\ell} \sum_{i=1}^{\ell} \xi_i + \lambda c^T K c
    subject to:  y_i \sum_{j=1}^{\ell} c_j K(x_i, x_j) \ge 1 - \xi_i,    i = 1, \ldots, \ell
                 \xi_i \ge 0,                                            i = 1, \ldots, \ell

Adding A Bias Term

If we add an unregularized bias term b, which presents some theoretical difficulties to be discussed later, we arrive at the "primal" SVM:

    \min_{c \in \mathbb{R}^\ell, \xi \in \mathbb{R}^\ell} \frac{1}{\ell} \sum_{i=1}^{\ell} \xi_i + \lambda c^T K c
    subject to:  y_i \left( \sum_{j=1}^{\ell} c_j K(x_i, x_j) + b \right) \ge 1 - \xi_i,    i = 1, \ldots, \ell
                 \xi_i \ge 0,                                                               i = 1, \ldots, \ell
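Before passing to the QP machinery, the regularized hinge-loss problem above can also be attacked directly. The sketch below minimizes (1/\ell) \sum_i (1 - y_i f(x_i))_+ + \lambda c^T K c over the expansion coefficients c by subgradient descent; this is an illustration of the objective, not the dual route the slides take, and the RBF kernel, toy data, and step-size choices are all assumptions for the example.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def train_hinge_subgradient(K, y, lam=0.1, lr=0.01, n_iter=2000):
    """Minimize (1/l) * sum_i (1 - y_i (K c)_i)_+ + lam * c^T K c
    over the expansion coefficients c by subgradient descent."""
    l = len(y)
    c = np.zeros(l)
    for _ in range(n_iter):
        margins = y * (K @ c)
        active = margins < 1                       # points on the "kink" side of the hinge
        subgrad = -(K[:, active] @ y[active]) / l  # subgradient of the hinge term
        subgrad += 2 * lam * (K @ c)               # gradient of the regularizer
        c -= lr * subgrad
    return c

# Toy data: two well-separated Gaussian clusters (illustrative only)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (20, 2)), rng.normal(2.0, 0.5, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)

K = rbf_kernel(X, gamma=0.5)
c = train_hinge_subgradient(K, y)
preds = np.sign(K @ c)
print("training accuracy:", np.mean(preds == y))
```

Because the hinge has a kink at y f(x) = 1, this is a subgradient rather than a gradient method; the QP formulation with slack variables on the slides exists precisely to avoid this nondifferentiability.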
Forming the Lagrangian

For reasons that will be clear in a few slides, we derive the Wolfe dual quadratic program using Lagrange multiplier techniques:

    L(c, \xi, b, \alpha, \zeta) = \frac{1}{\ell} \sum_{i=1}^{\ell} \xi_i + \lambda c^T K c
        - \sum_{i=1}^{\ell} \alpha_i \left[ y_i \left( \sum_{j=1}^{\ell} c_j K(x_i, x_j) + b \right) - 1 + \xi_i \right]
        - \sum_{i=1}^{\ell} \zeta_i \xi_i.

We want to minimize L with respect to c, b, and \xi, and maximize L with respect to \alpha and \zeta, subject to the constraints of the primal problem and nonnegativity constraints on \alpha and \zeta.

Eliminating b and \xi

    \frac{\partial L}{\partial b} = 0 \implies \sum_{i=1}^{\ell} \alpha_i y_i = 0

    \frac{\partial L}{\partial \xi_i} = 0 \implies \frac{1}{\ell} - \alpha_i - \zeta_i = 0 \implies 0 \le \alpha_i \le \frac{1}{\ell}

We write a reduced Lagrangian in terms of the remaining variables:

    L_R(c, \alpha) = \lambda c^T K c - \sum_{i=1}^{\ell} \alpha_i \left( y_i \sum_{j=1}^{\ell} c_j K(x_i, x_j) - 1 \right)

Eliminating c

Assuming the K matrix is invertible,

    \frac{\partial L_R}{\partial c} = 0 \implies 2 \lambda K c - K Y \alpha = 0 \implies c_i = \frac{\alpha_i y_i}{2 \lambda},

where Y is a diagonal matrix whose i'th diagonal element is y_i, so Y\alpha is a vector whose i'th element is \alpha_i y_i.

The Dual Program...
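Substituting c = Y\alpha / (2\lambda) back into the reduced Lagrangian yields the standard dual objective \sum_i \alpha_i - \frac{1}{4\lambda} \alpha^T Y K Y \alpha, maximized subject to the box constraints 0 \le \alpha_i \le 1/\ell derived above (and \sum_i \alpha_i y_i = 0 when the bias b is present). As a minimal numerical sketch of this derivation, the following solves the dual with SciPy's general-purpose SLSQP solver and recovers c via the stationarity condition; the linear kernel, toy data, and \lambda are illustrative assumptions, and a real implementation would use a dedicated QP or SMO-style solver.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: two well-separated clusters with a linear kernel (illustrative only)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 0.4, (10, 2)), rng.normal(2.0, 0.4, (10, 2))])
y = np.array([-1.0] * 10 + [1.0] * 10)
l, lam = len(y), 0.1
K = X @ X.T                                   # linear-kernel Gram matrix
YKY = (y[:, None] * K) * y[None, :]           # Y K Y with Y = diag(y)

def neg_dual(alpha):
    # Negated dual objective: -(sum(alpha) - alpha^T Y K Y alpha / (4*lam))
    return -(alpha.sum() - alpha @ YKY @ alpha / (4 * lam))

res = minimize(
    neg_dual,
    x0=np.full(l, 1 / (2 * l)),               # feasible interior starting point
    method="SLSQP",
    bounds=[(0, 1 / l)] * l,                  # 0 <= alpha_i <= 1/l
    constraints=[{"type": "eq", "fun": lambda a: a @ y}],  # sum_i alpha_i y_i = 0
)
alpha = res.x

# Recover expansion coefficients from the stationarity condition:
c = alpha * y / (2 * lam)                     # c_i = alpha_i * y_i / (2*lam)

# Check: 2*lam*K c - K Y alpha should vanish
residual = np.linalg.norm(2 * lam * (K @ c) - K @ (y * alpha))
print("stationarity residual:", residual)
```

The box constraint 0 \le \alpha_i \le 1/\ell is exactly the condition obtained from eliminating \xi above, and the equality constraint \sum_i \alpha_i y_i = 0 is the condition obtained from eliminating b.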
This note was uploaded on 11/11/2011 for the course BIO 9.07 taught by Professor Ruthrosenholtz during the Spring '04 term at MIT.