
Support Vector Machines For Classification
9.520 Class 05, 22 February 2006
Ryan Rifkin

Plan

- Regularization derivation of SVMs
- Geometric derivation of SVMs
- Optimality, Duality and Large Scale SVMs
- SVMs and RLSC: Compare and Contrast

The Regularization Setting (Again)

We are given n examples (x_1, y_1), ..., (x_n, y_n), with x_i \in \mathbb{R}^d and y_i \in \{-1, 1\} for all i. As mentioned last class, we find a classification function by solving a regularization problem:

    \min_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^n V(y_i, f(x_i)) + \lambda \|f\|_K^2 .

In this class we specifically consider binary classification.

The Hinge Loss

The classical SVM arises by considering the specific loss function

    V(f(x), y) \equiv (1 - y f(x))_+ ,  where  (k)_+ \equiv \max(k, 0).

[Figure: the hinge loss (1 - y f(x))_+ plotted as a function of y f(x).]

Substituting In The Hinge Loss

With the hinge loss, our regularization problem becomes

    \min_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^n (1 - y_i f(x_i))_+ + \lambda \|f\|_K^2 .

Slack Variables

This problem is non-differentiable (because of the kink in V), so we introduce slack variables \xi_i to make the problem easier to work with:

    \min_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^n \xi_i + \lambda \|f\|_K^2
    subject to:  y_i f(x_i) \ge 1 - \xi_i,   i = 1, ..., n
                 \xi_i \ge 0,                i = 1, ..., n

Applying The Representer Theorem

Substituting in

    f(x) = \sum_{i=1}^n c_i K(x, x_i),

we arrive at a constrained quadratic programming problem:

    \min_{c \in \mathbb{R}^n, \, \xi \in \mathbb{R}^n} \frac{1}{n} \sum_{i=1}^n \xi_i + \lambda c^T K c
    subject to:  y_i \sum_{j=1}^n c_j K(x_i, x_j) \ge 1 - \xi_i,   i = 1, ..., n
                 \xi_i \ge 0,                                      i = 1, ..., n

Adding A Bias Term

If we add an unregularized bias term b, we arrive at the primal SVM:

    \min_{c \in \mathbb{R}^n, \, \xi \in \mathbb{R}^n, \, b \in \mathbb{R}} \frac{1}{n} \sum_{i=1}^n \xi_i + \lambda c^T K c
    subject to:  y_i \left( \sum_{j=1}^n c_j K(x_i, x_j) + b \right) \ge 1 - \xi_i,   i = 1, ..., n
                 \xi_i \ge 0,                                                          i = 1, ..., n

Forming the Lagrangian

We derive the Wolfe dual quadratic program using Lagrange multiplier techniques:

    L(c, \xi, b, \alpha, \zeta) = \frac{1}{n} \sum_{i=1}^n \xi_i + \lambda c^T K c
        - \sum_{i=1}^n \alpha_i \left[ y_i \left( \sum_{j=1}^n c_j K(x_i, x_j) + b \right) - 1 + \xi_i \right]
        - \sum_{i=1}^n \zeta_i \xi_i .

We want to minimize L with respect to c, b, and \xi, and maximize L with respect to \alpha and \zeta, subject to the constraints of the primal problem and nonnegativity constraints on \alpha and \zeta.

Eliminating b and \xi

    \frac{\partial L}{\partial b} = 0 \implies \sum_{i=1}^n \alpha_i y_i = 0

    \frac{\partial L}{\partial \xi_i} = 0 \implies \frac{1}{n} - \alpha_i - \zeta_i = 0
        \implies 0 \le \alpha_i \le \frac{1}{n}

We write a reduced Lagrangian in terms of the remaining variables:

    L_R(c, \alpha) = \lambda c^T K c - \sum_{i=1}^n \alpha_i \left( y_i \sum_{j=1}^n c_j K(x_i, x_j) - 1 \right).

Eliminating c

Assuming the matrix K is invertible,

    \frac{\partial L_R}{\partial c} = 0 \implies 2 \lambda K c - K Y \alpha = 0 \implies c_i = \frac{\alpha_i y_i}{2 \lambda},

where Y is a diagonal matrix whose ith diagonal element is y_i, and Y\alpha is a vector whose ith element is \alpha_i y_i. ...
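The excerpt ends before the reduced Lagrangian is turned into the final dual problem. As an added illustration (not part of the excerpt above), substituting the stationarity condition c = Y\alpha / (2\lambda) back into L_R yields the familiar box-constrained dual, assuming the same scaling used in the formulas above:

    % illustration only: this substitution step is not shown in the excerpt
    \begin{aligned}
    L_R(c, \alpha) &= \lambda c^T K c - \alpha^T Y K c + \sum_{i=1}^n \alpha_i , \\
    c = \tfrac{1}{2\lambda} Y \alpha \;\Longrightarrow\;
    L_R(\alpha) &= \tfrac{1}{4\lambda} \alpha^T Y K Y \alpha
                   - \tfrac{1}{2\lambda} \alpha^T Y K Y \alpha + \sum_{i=1}^n \alpha_i
                 = \sum_{i=1}^n \alpha_i - \tfrac{1}{4\lambda} \alpha^T Y K Y \alpha ,
    \end{aligned}

to be maximized over \alpha subject to the conditions already derived: \sum_i \alpha_i y_i = 0 and 0 \le \alpha_i \le 1/n.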
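For concreteness, here is a minimal numerical sketch (not from the lecture) of the Tikhonov problem above, written directly in the representer-theorem form f(x) = \sum_j c_j K(x, x_j) + b and minimized by subgradient descent on the hinge loss rather than by solving the constrained QP or its dual. The Gaussian kernel, the value of lam, the step size, the function names, and the synthetic data are all illustrative assumptions.

    # Sketch only: subgradient descent on (1/n) sum_i (1 - y_i f(x_i))_+ + lam * c'Kc
    import numpy as np

    def gaussian_kernel(X1, X2, sigma=1.0):
        # K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2)); kernel choice is an assumption
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def train_kernel_svm(X, y, lam=0.1, steps=2000, lr=0.05):
        n = len(y)
        K = gaussian_kernel(X, X)
        c, b = np.zeros(n), 0.0
        for _ in range(steps):
            f = K @ c + b                        # f(x_i) for all training points
            active = (y * f < 1).astype(float)   # points violating the margin
            # subgradient of the hinge term plus gradient of lam * c'Kc
            grad_c = -(K @ (active * y)) / n + 2.0 * lam * (K @ c)
            grad_b = -np.sum(active * y) / n
            c -= lr * grad_c
            b -= lr * grad_b
        return c, b, K

    # Tiny usage example on synthetic two-cluster data in R^2.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1.0, 0.5, size=(20, 2)),
                   rng.normal(+1.0, 0.5, size=(20, 2))])
    y = np.hstack([-np.ones(20), np.ones(20)])
    c, b, K = train_kernel_svm(X, y)
    print("training accuracy:", np.mean(np.sign(K @ c + b) == y))

Because the hinge-loss objective and the slack-variable QP are equivalent (the slacks simply name the hinge violations), this sketch optimizes the same quantity as the primal SVM above, just without forming the constraints explicitly.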