Support Vector Machines For Classification

9.520 Class 05, 22 February 2006

Ryan Rifkin
Plan

- Regularization derivation of SVMs
- Geometric derivation of SVMs
- Optimality, Duality and Large Scale SVMs
- SVMs and RLSC: Compare and Contrast
The Regularization Setting (Again)

We are given n examples $(x_1, y_1), \ldots, (x_n, y_n)$, with $x_i \in \mathbb{R}^d$ and $y_i \in \{-1, 1\}$ for all $i$. As mentioned last class, we find a classification function by solving a regularization problem:

$$\min_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^{n} V(y_i, f(x_i)) + \lambda \|f\|_K^2.$$

In this class we specifically consider binary classification.
The Hinge Loss

The classical SVM arises by considering the specific loss function

$$V(f(x), y) \equiv (1 - yf(x))_+,$$

where $(k)_+ \equiv \max(k, 0)$.
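As a quick numerical check (a sketch added for illustration; the function name hinge_loss is ours, not from the slides), the hinge loss is one line of numpy:

```python
import numpy as np

def hinge_loss(y, fx):
    """Hinge loss V(f(x), y) = (1 - y*f(x))_+ = max(1 - y*f(x), 0), elementwise."""
    return np.maximum(1.0 - y * fx, 0.0)

# Points classified correctly with margin y*f(x) >= 1 incur zero loss;
# everything else is penalized linearly in the margin violation.
y = np.array([1.0, 1.0, -1.0, -1.0])
fx = np.array([2.0, 0.5, -0.2, 1.0])
print(hinge_loss(y, fx))  # [0.  0.5  0.8  2. ]
```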
[Figure: the hinge loss plotted as a function of y · f(x); zero for y · f(x) ≥ 1, increasing linearly as y · f(x) decreases below 1.]
Substituting In The Hinge Loss

With the hinge loss, our regularization problem becomes

$$\min_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^{n} (1 - y_i f(x_i))_+ + \lambda \|f\|_K^2.$$
Slack Variables

This problem is non-differentiable (because of the "kink" in V), so we introduce slack variables ξ_i to make the problem easier to work with:

$$\begin{aligned}
\min_{f \in \mathcal{H}} \quad & \frac{1}{n} \sum_{i=1}^{n} \xi_i + \lambda \|f\|_K^2 \\
\text{subject to:} \quad & y_i f(x_i) \ge 1 - \xi_i, \quad i = 1, \ldots, n, \\
& \xi_i \ge 0, \quad i = 1, \ldots, n.
\end{aligned}$$

The two formulations are equivalent: at the optimum each ξ_i sits on the larger of its two lower bounds, so ξ_i = max(0, 1 − y_i f(x_i)) = (1 − y_i f(x_i))_+.
Applying The Representer Theorem

Substituting in

$$f(x) = \sum_{i=1}^{n} c_i K(x, x_i),$$

we arrive at a constrained quadratic programming problem:

$$\begin{aligned}
\min_{c \in \mathbb{R}^n} \quad & \frac{1}{n} \sum_{i=1}^{n} \xi_i + \lambda c^T K c \\
\text{subject to:} \quad & y_i \sum_{j=1}^{n} c_j K(x_i, x_j) \ge 1 - \xi_i, \quad i = 1, \ldots, n, \\
& \xi_i \ge 0, \quad i = 1, \ldots, n.
\end{aligned}$$
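Concretely (a minimal numpy sketch, not from the slides; the Gaussian kernel and the bandwidth sigma are our choice of example), the matrix K and the expansion f(x) = ∑_i c_i K(x, x_i) look like:

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    """Gram matrix with entries K[i, j] = exp(-||x1_i - x2_j||^2 / (2 sigma^2))."""
    sq_dists = (
        np.sum(X1**2, axis=1)[:, None]
        + np.sum(X2**2, axis=1)[None, :]
        - 2.0 * X1 @ X2.T
    )
    return np.exp(-sq_dists / (2.0 * sigma**2))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))       # n training points in d dimensions
K = gaussian_kernel(X, X)          # the n-by-n matrix K appearing in the QP

def f(X_new, c, X, sigma=1.0):
    """Evaluate f(x) = sum_i c_i K(x, x_i) at each row of X_new."""
    return gaussian_kernel(X_new, X, sigma) @ c
```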
Adding A Bias Term

If we add an unregularized bias term b, we arrive at the "primal" SVM:

$$\begin{aligned}
\min_{c \in \mathbb{R}^n,\, b \in \mathbb{R}} \quad & \frac{1}{n} \sum_{i=1}^{n} \xi_i + \lambda c^T K c \\
\text{subject to:} \quad & y_i \Big( \sum_{j=1}^{n} c_j K(x_i, x_j) + b \Big) \ge 1 - \xi_i, \quad i = 1, \ldots, n, \\
& \xi_i \ge 0, \quad i = 1, \ldots, n.
\end{aligned}$$
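The primal is directly solvable with an off-the-shelf QP solver. Here is a sketch using cvxopt (our choice of solver, not the slides'): stack the variables as z = (c, b, ξ) ∈ ℝ^{2n+1}; cvxopt minimizes (1/2) z^T P z + q^T z subject to G z ≤ h.

```python
import numpy as np
from cvxopt import matrix, solvers

def svm_primal(K, y, lam):
    """Solve the primal SVM QP over z = (c, b, xi); returns (c, b, xi).

    K is assumed symmetric positive semidefinite; add a small ridge
    (e.g. 1e-8 * I) to K if the solver reports a singular KKT system.
    """
    n = K.shape[0]
    # Objective: (1/n) sum(xi) + lam * c'Kc  ==  (1/2) z'Pz + q'z
    P = np.zeros((2 * n + 1, 2 * n + 1))
    P[:n, :n] = 2.0 * lam * K
    q = np.concatenate([np.zeros(n + 1), np.ones(n) / n])
    # Margin constraints: -y_i (sum_j c_j K_ij + b) - xi_i <= -1
    G_margin = np.hstack([-y[:, None] * K, -y[:, None], -np.eye(n)])
    # Nonnegativity: -xi_i <= 0
    G_pos = np.hstack([np.zeros((n, n + 1)), -np.eye(n)])
    G = np.vstack([G_margin, G_pos])
    h = np.concatenate([-np.ones(n), np.zeros(n)])
    solvers.options["show_progress"] = False
    sol = solvers.qp(matrix(P), matrix(q), matrix(G), matrix(h))
    z = np.array(sol["x"]).ravel()
    return z[:n], z[n], z[n + 1:]
```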
Forming the Lagrangian

We derive the Wolfe dual quadratic program using Lagrange multiplier techniques:

$$L(c, \xi, b, \alpha, \zeta) = \frac{1}{n} \sum_{i=1}^{n} \xi_i + \lambda c^T K c - \sum_{i=1}^{n} \alpha_i \Big( y_i \Big( \sum_{j=1}^{n} c_j K(x_i, x_j) + b \Big) - 1 + \xi_i \Big) - \sum_{i=1}^{n} \zeta_i \xi_i$$

We want to minimize L with respect to c, b, and ξ, and maximize L with respect to α and ζ, subject to the constraints of the primal problem and nonnegativity constraints on α and ζ.
Eliminating b and ξ

Setting the derivatives with respect to b and ξ to zero:

$$\frac{\partial L}{\partial b} = 0 \implies \sum_{i=1}^{n} \alpha_i y_i = 0$$

$$\frac{\partial L}{\partial \xi_i} = 0 \implies \frac{1}{n} - \alpha_i - \zeta_i = 0 \implies 0 \le \alpha_i \le \frac{1}{n}$$

We write a reduced Lagrangian in terms of the remaining variables:

$$L_R(c, \alpha) = \lambda c^T K c - \sum_{i=1}^{n} \alpha_i \Big( y_i \sum_{j=1}^{n} c_j K(x_i, x_j) - 1 \Big)$$
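To fill in the step left implicit here: grouping the terms of L that involve ξ and b gives

$$\sum_{i=1}^{n} \Big( \frac{1}{n} - \alpha_i - \zeta_i \Big) \xi_i \;-\; b \sum_{i=1}^{n} \alpha_i y_i,$$

and both sums vanish by the two stationarity conditions above, leaving exactly L_R.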
Eliminating c

Assuming the K matrix is invertible,

$$\frac{\partial L_R}{\partial c} = 0 \implies 2\lambda K c - K Y \alpha = 0 \implies c_i = \frac{\alpha_i y_i}{2\lambda}, \quad \text{i.e.,} \quad c = \frac{Y\alpha}{2\lambda},$$

where Y is a diagonal matrix whose i'th diagonal element is y_i; Yα is a vector whose i'th element is α_i y_i.
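Substituting c = Yα/(2λ) back into L_R makes the dual objective on the next slide drop out. Filling in the algebra:

$$\lambda c^T K c = \frac{1}{4\lambda} \alpha^T Y K Y \alpha, \qquad \sum_{i=1}^{n} \alpha_i y_i \sum_{j=1}^{n} c_j K(x_i, x_j) = \frac{1}{2\lambda} \alpha^T Y K Y \alpha,$$

so

$$L_R(\alpha) = \sum_{i=1}^{n} \alpha_i - \frac{1}{4\lambda} \alpha^T Y K Y \alpha.$$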
The Dual Program

Substituting in our expression for c, we are left with the following "dual" program:

$$\begin{aligned}
\max_{\alpha \in \mathbb{R}^n} \quad & \sum_{i=1}^{n} \alpha_i - \frac{1}{4\lambda} \alpha^T Q \alpha \\
\text{subject to:} \quad & \sum_{i=1}^{n} y_i \alpha_i = 0, \\
& 0 \le \alpha_i \le \frac{1}{n}, \quad i = 1, \ldots, n.
\end{aligned}$$

Here, Q is the matrix Q = YKY, i.e., Q_ij = y_i y_j K(x_i, x_j), as the substitution above shows.
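Like the primal, the dual is a standard QP: box constraints, one equality constraint, and a quadratic objective. A sketch with cvxopt (again our choice of solver; cvxopt minimizes, so we negate the objective):

```python
import numpy as np
from cvxopt import matrix, solvers

def svm_dual(K, y, lam):
    """Solve the dual SVM QP; returns alpha and the expansion coefficients c."""
    n = K.shape[0]
    Q = (y[:, None] * K) * y[None, :]   # Q = YKY, i.e. Q_ij = y_i y_j K(x_i, x_j)
    # max sum(alpha) - (1/4 lam) a'Qa  ==  min (1/2) a'(Q / 2 lam) a - 1'a
    P = matrix(Q / (2.0 * lam))
    q = matrix(-np.ones(n))
    # Box constraints 0 <= alpha_i <= 1/n, written as G alpha <= h
    G = matrix(np.vstack([-np.eye(n), np.eye(n)]))
    h = matrix(np.concatenate([np.zeros(n), np.ones(n) / n]))
    # Equality constraint: sum_i y_i alpha_i = 0
    A = matrix(y[None, :].astype(float))
    b = matrix(0.0)
    solvers.options["show_progress"] = False
    sol = solvers.qp(P, q, G, h, A, b)
    alpha = np.array(sol["x"]).ravel()
    c = y * alpha / (2.0 * lam)         # c = Y alpha / (2 lam), from the previous slide
    return alpha, c
```

Recovering the bias takes one more step: for any i with 0 < α_i < 1/n, complementary slackness forces y_i(∑_j c_j K(x_i, x_j) + b) = 1, so b can be read off any support vector sitting exactly on the margin.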