class05 - Support Vector Machines For Classification
9.520 Class 05, 22 February 2006
Ryan Rifkin


Plan

• Regularization derivation of SVMs
• Geometric derivation of SVMs
• Optimality, Duality, and Large-Scale SVMs
• SVMs and RLSC: Compare and Contrast

The Regularization Setting (Again)

We are given $n$ examples $(x_1, y_1), \ldots, (x_n, y_n)$, with $x_i \in \mathbb{R}^d$ and $y_i \in \{-1, 1\}$ for all $i$. As mentioned last class, we find a classification function by solving a regularization problem:

    \min_{f \in \mathcal{H}} \; \frac{1}{n} \sum_{i=1}^{n} V(y_i, f(x_i)) + \lambda \|f\|_K^2 .

In this class we specifically consider binary classification.

The Hinge Loss

The classical SVM arises by considering the specific loss function

    V(f(x), y) \equiv (1 - y f(x))_+ ,

where $(k)_+ \equiv \max(k, 0)$.

[Figure: the hinge loss $(1 - y f(x))_+$ plotted against $y f(x)$; it is zero for $y f(x) \ge 1$ and increases linearly as $y f(x)$ falls below 1.]

Substituting In The Hinge Loss

With the hinge loss, our regularization problem becomes

    \min_{f \in \mathcal{H}} \; \frac{1}{n} \sum_{i=1}^{n} (1 - y_i f(x_i))_+ + \lambda \|f\|_K^2 .

Slack Variables

This problem is non-differentiable (because of the "kink" in $V$), so we introduce slack variables $\xi_i$ to make the problem easier to work with:

    \min_{f \in \mathcal{H}} \; \frac{1}{n} \sum_{i=1}^{n} \xi_i + \lambda \|f\|_K^2
    subject to:  y_i f(x_i) \ge 1 - \xi_i ,    i = 1, \ldots, n
                 \xi_i \ge 0 ,                 i = 1, \ldots, n

Applying The Representer Theorem

Substituting in

    f^*(x) = \sum_{i=1}^{n} c_i K(x, x_i) ,

we arrive at a constrained quadratic programming problem:

    \min_{c \in \mathbb{R}^n ,\, \xi \in \mathbb{R}^n} \; \frac{1}{n} \sum_{i=1}^{n} \xi_i + \lambda c^T K c
    subject to:  y_i \sum_{j=1}^{n} c_j K(x_i, x_j) \ge 1 - \xi_i ,    i = 1, \ldots, n
                 \xi_i \ge 0 ,    i = 1, \ldots, n

Adding A Bias Term

If we add an unregularized bias term $b$, we arrive at the "primal" SVM:

    \min_{c \in \mathbb{R}^n ,\, \xi \in \mathbb{R}^n ,\, b \in \mathbb{R}} \; \frac{1}{n} \sum_{i=1}^{n} \xi_i + \lambda c^T K c
    subject to:  y_i \left( \sum_{j=1}^{n} c_j K(x_i, x_j) + b \right) \ge 1 - \xi_i ,    i = 1, \ldots, n
                 \xi_i \ge 0 ,    i = 1, \ldots, n

Forming the Lagrangian

We derive the Wolfe dual quadratic program using Lagrange multiplier techniques:

    L(c, \xi, b, \alpha, \zeta) = \frac{1}{n} \sum_{i=1}^{n} \xi_i + \lambda c^T K c
        - \sum_{i=1}^{n} \alpha_i \left[ y_i \left( \sum_{j=1}^{n} c_j K(x_i, x_j) + b \right) - 1 + \xi_i \right]
        - \sum_{i=1}^{n} \zeta_i \xi_i .

We want to minimize $L$ with respect to $c$, $b$, and $\xi$, and to maximize $L$ with respect to $\alpha$ and $\zeta$, subject to the constraints of the primal problem and nonnegativity constraints on $\alpha$ and $\zeta$.

Eliminating b and ξ

    \frac{\partial L}{\partial b} = 0 \implies \sum_{i=1}^{n} \alpha_i y_i = 0

    \frac{\partial L}{\partial \xi_i} = 0 \implies \frac{1}{n} - \alpha_i - \zeta_i = 0 \implies 0 \le \alpha_i \le \frac{1}{n}

The box constraint on $\alpha_i$ follows because both $\alpha_i$ and $\zeta_i$ are nonnegative. We write a reduced Lagrangian in terms of the remaining variables:

    L_R(c, \alpha) = \lambda c^T K c - \sum_{i=1}^{n} \alpha_i \left( y_i \sum_{j=1}^{n} c_j K(x_i, x_j) - 1 \right) .

Eliminating c

Assuming the matrix $K$ is invertible,

    \frac{\partial L_R}{\partial c} = 0 \implies 2 \lambda K c - K Y \alpha = 0 \implies c_i = \frac{\alpha_i y_i}{2 \lambda} ,

where $Y$ is the diagonal matrix whose $i$'th diagonal element is $y_i$, so that $Y \alpha$ is the vector whose $i$'th element is $\alpha_i y_i$.

...
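To make the regularized problem concrete, here is a minimal numerical sketch (an illustration, not code from the slides): it minimizes the finite-dimensional hinge objective $\frac{1}{n} \sum_i (1 - y_i f(x_i))_+ + \lambda c^T K c$ by subgradient descent rather than by solving the slack-variable QP. The two-blob synthetic data, the Gaussian kernel, and the names and values `gaussian_kernel`, `lam`, and `eta` are all assumptions made for the example.

```python
import numpy as np

# Two Gaussian blobs as toy data: x_i in R^2, y_i in {-1, +1}.
rng = np.random.default_rng(0)
n = 40
X = np.vstack([rng.normal(-1.0, 0.7, size=(n // 2, 2)),
               rng.normal(+1.0, 0.7, size=(n // 2, 2))])
y = np.hstack([-np.ones(n // 2), np.ones(n // 2)])

def gaussian_kernel(A, B, sigma=1.0):
    """K[i, j] = exp(-||A_i - B_j||^2 / (2 sigma^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

K = gaussian_kernel(X, X)
lam = 0.1  # the regularization parameter lambda

def objective(c):
    # (1/n) sum_i (1 - y_i f(x_i))_+  +  lam c^T K c,
    # with f(x_i) = sum_j c_j K(x_i, x_j) by the representer theorem.
    f = K @ c
    return np.maximum(1.0 - y * f, 0.0).mean() + lam * (c @ K @ c)

# Subgradient descent: the objective is non-differentiable at the
# hinge's kink (y_i f(x_i) = 1), but a subgradient step still works.
c = np.zeros(n)
eta = 0.05
for _ in range(2000):
    f = K @ c
    violated = (1.0 - y * f) > 0.0  # examples inside the margin
    subgrad = -(K @ (violated * y)) / n + 2.0 * lam * (K @ c)
    c -= eta * subgrad

print(f"objective = {objective(c):.4f}")
print(f"train accuracy = {np.mean(np.sign(K @ c) == y):.2f}")
```

Subgradient descent tolerates the kink in $V$ but converges slowly; the slack-variable reformulation above exists precisely so that standard QP machinery can be applied instead.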
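The dual derivation can be checked numerically as well. The sketch below (again an illustration, not the course's code) drops the unregularized bias $b$, so the equality constraint $\sum_i \alpha_i y_i = 0$ disappears and the Wolfe dual reduces to maximizing $\sum_i \alpha_i - \frac{1}{4\lambda} \alpha^T Y K Y \alpha$ over the box $0 \le \alpha_i \le \frac{1}{n}$, which plain projected gradient ascent can solve; the expansion coefficients are then recovered via $c_i = \alpha_i y_i / (2\lambda)$.

```python
import numpy as np

# Same toy setup as the previous sketch, repeated so this block runs alone.
rng = np.random.default_rng(0)
n = 40
X = np.vstack([rng.normal(-1.0, 0.7, size=(n // 2, 2)),
               rng.normal(+1.0, 0.7, size=(n // 2, 2))])
y = np.hstack([-np.ones(n // 2), np.ones(n // 2)])
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-d2 / 2.0)  # Gaussian kernel, sigma = 1
lam = 0.1

# Without the bias b, the Wolfe dual is the box-constrained QP
#   max_alpha  sum_i alpha_i - (1/(4 lam)) alpha^T (Y K Y) alpha
#   s.t.       0 <= alpha_i <= 1/n,
# which projected gradient ascent handles directly.
Y = np.diag(y)  # Y_ii = y_i, as in the slides
YKY = Y @ K @ Y

alpha = np.zeros(n)
eta = 0.005
for _ in range(20000):
    grad = 1.0 - (YKY @ alpha) / (2.0 * lam)           # dual gradient
    alpha = np.clip(alpha + eta * grad, 0.0, 1.0 / n)  # project onto the box

# "Eliminating c" gave c_i = alpha_i y_i / (2 lam); recover it here.
c = alpha * y / (2.0 * lam)

print("support vectors (alpha_i > 0):", int((alpha > 1e-8).sum()), "of", n)
print("train accuracy:", np.mean(np.sign(K @ c) == y))
```

Up to optimization tolerance this agrees with the primal sketch above on the same data, and examples with $\alpha_i = 0$ drop out of the kernel expansion entirely, which is the source of the SVM's sparsity.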
