# Series 4 - Exercises, Machine Learning (SVMs)


Machine Learning Laboratory, Dept. of Computer Science, ETH Zürich
Prof. Dr. Joachim M. Buhmann
Email questions to: Alberto Giovanni Busetto [email protected]
Exercises, Machine Learning AS 2012
Series 4, Nov 6th, 2012 (SVMs)

## Problem 1 (Support Vector Machines)

The objective of this exercise is to implement the 1-norm soft margin support vector machine. This SVM is defined by the optimization problem

$$\min_{\mathbf{w},\, b,\, \xi} \;\; \langle \mathbf{w}, \mathbf{w} \rangle + C \sum_{i=1}^{l} \xi_i$$

subject to the constraints

$$y_i \left( \langle \mathbf{w}, \mathbf{x}_i \rangle + b \right) \geq 1 - \xi_i, \qquad \xi_i \geq 0.$$

Here, $\mathbf{w}$ denotes the weight vector of the hyperplane, $\mathbf{x}_i$ are the training data points, and $\xi_i$ the corresponding slack variables. The dual optimization problem is given by

$$\max_{\alpha} \;\; W(\alpha) = \sum_{i=1}^{l} \alpha_i - \frac{1}{2} \sum_{i=1}^{l} \sum_{j=1}^{l} y_i y_j \alpha_i \alpha_j K(\mathbf{x}_i, \mathbf{x}_j),$$

subject to the constraints

$$\sum_{i=1}^{l} \alpha_i y_i = 0, \qquad 0 \leq \alpha_i \leq C.$$

Here $y_1, \dots, y_l$ denote the class labels for the training vectors $\mathbf{x}_i$, the $\alpha_i$ are the Lagrange parameters, and $K$ denotes the kernel function. We write $\alpha_i^*$ for the optimal values of the Lagrange parameters computed as the solution of the above dual problem.

The offset (bias) $b^*$ can be computed by noting that any support vector $\mathbf{x}_i$ for which $0 < \alpha_i < C$ satisfies $y_i f(\mathbf{x}_i) = 1$, resulting in

$$b^* := y_i - \sum_{j \in \mathrm{SV}} y_j \alpha_j^* K(\mathbf{x}_i, \mathbf{x}_j).$$

Whilst we can solve this for any support vector $\mathbf{x}_i$, for numerical stability we compute the offset $b^*$ by averaging the individual offset values given by all the support vectors.

From the solution of the maximization problem and the bias, the classification function is constructed as

$$f(\mathbf{x}) := \sum_{i=1}^{l} y_i \alpha_i^* K(\mathbf{x}_i, \mathbf{x}) + b^*.$$

The predicted class label $\mathrm{hyp} \in \{-1, 1\}$ of a test point $\mathbf{x}$ is determined as $\mathrm{hyp} := \mathrm{sign}(f(\mathbf{x}))$.

The SVM implementation consists of two MATLAB functions, one for training,

    [yALPHA, B, SV] = svmtrain(SAMPLES, CLASSES, C, KERNEL, PARAM)

and one for classification:

    hyp = svmclass(X, yALPHA, B, SV, KERNEL, PARAM)
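The bias averaging and the classification function above can be sketched in a few lines. The exercise asks for a MATLAB implementation; the following Python sketch (with hypothetical helper names `compute_bias` and `decision_function`, and the Lagrange parameters assumed already computed) only illustrates the two formulas, not the dual solver itself:

```python
import numpy as np

def compute_bias(X, y, alpha, C, kernel, tol=1e-8):
    """Average the per-support-vector offsets b_i = y_i - sum_j y_j alpha_j K(x_i, x_j)
    over the support vectors with 0 < alpha_i < C, as described above."""
    sv = alpha > tol                    # support vectors (alpha_i > 0)
    margin = sv & (alpha < C - tol)     # those strictly inside (0, C)
    offsets = []
    for i in np.where(margin)[0]:
        s = sum(y[j] * alpha[j] * kernel(X[i], X[j]) for j in np.where(sv)[0])
        offsets.append(y[i] - s)
    return float(np.mean(offsets))

def decision_function(x, X, y, alpha, b, kernel):
    """f(x) = sum_i y_i alpha_i K(x_i, x) + b; the predicted label is sign(f(x))."""
    return sum(y[i] * alpha[i] * kernel(X[i], x) for i in range(len(y))) + b
```

For the trivial two-point problem `X = [[-1], [1]]`, `y = [-1, 1]` with a linear kernel, the known dual solution is `alpha = [0.5, 0.5]`, which yields a zero bias and a positive decision value for any positive test point.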


The parameters are:

- `yALPHA`: Lagrange parameters of the dual problem weighted by the labels, $y_i \alpha_i$.
- `B`: Bias.
- `SV`: The support vectors.
- `SAMPLES`: The matrix of input vectors from the training data set, with each row corresponding to one data vector.
- `CLASSES`: Vector of training class labels; `CLASSES(i)` specifies the class label of `SAMPLES(i,:)`.
- `C`: Soft margin parameter ($C$ in the optimization problem above).
- `KERNEL`: We want to be able to specify different types of kernels (see below), so we hand over this string with possible values `'linear'`, `'polynomial'`, or `'rbf'`.
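The training step amounts to solving the dual QP under the box and equality constraints above. A minimal Python sketch, using a general-purpose SciPy solver rather than the dedicated QP/SMO routine a real implementation would use (the kernel `PARAM` is assumed to be the polynomial degree or the RBF width; these conventions are assumptions, not specified in the sheet):

```python
import numpy as np
from scipy.optimize import minimize

def make_kernel(name, param=None):
    """Return K(x, z) for the kernel names above; 'param' plays the role of PARAM."""
    if name == 'linear':
        return lambda x, z: float(np.dot(x, z))
    if name == 'polynomial':
        return lambda x, z: (1.0 + np.dot(x, z)) ** param
    if name == 'rbf':
        return lambda x, z: float(np.exp(-np.sum((x - z) ** 2) / (2.0 * param ** 2)))
    raise ValueError(f'unknown kernel: {name}')

def svm_train_dual(X, y, C, kernel):
    """Maximize W(alpha) subject to sum_i alpha_i y_i = 0 and 0 <= alpha_i <= C
    by minimizing -W(alpha) with SLSQP."""
    l = len(y)
    K = np.array([[kernel(X[i], X[j]) for j in range(l)] for i in range(l)])
    Q = (y[:, None] * y[None, :]) * K              # Q_ij = y_i y_j K(x_i, x_j)
    neg_W = lambda a: 0.5 * a @ Q @ a - a.sum()    # -W(alpha)
    grad = lambda a: Q @ a - np.ones(l)
    res = minimize(neg_W, np.zeros(l), jac=grad, method='SLSQP',
                   bounds=[(0.0, C)] * l,
                   constraints=[{'type': 'eq', 'fun': lambda a: a @ y}])
    return res.x
```

On the two-point toy problem from before, this recovers the analytic solution $\alpha_1 = \alpha_2 = 0.5$.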
