… which is used to construct SVM solutions that are nonlinear in the data. Results of some experiments inspired by these arguments are also presented. The author gives numerous examples and proofs of most of the key theorems; he hopes readers will find old material cast in a fresh light, since the paper also includes some new material.

Limitation of SVM algorithm [33] (http://www.cse.unr.edu/~bebis/MathMethods/SVM/lecture.pdf)

The biggest limitation of SVMs lies in the choice of the kernel; the best choice of kernel for a given problem is still an open research problem. A second limitation is speed and size, mostly in training: training on large data sets can be slow and memory-intensive. Testing, by contrast, is cheap, because the trained classifier typically retains only a small number of support vectors, which minimizes the computational requirements at test time.

Non-linear hypersurfaces and Non-Separable classes - November 20, 2009

Kernel Trick (http://en.wikipedia.org/wiki/Kernel_trick)

We discussed the curse of dimensionality at the beginning of this course; now, however, we turn to the power of high dimensions in order to find a hyperplane that linearly separates two classes of data points. To build intuition, imagine a two-dimensional person confined in a two-dimensional prison. If we magically give the person a third dimension, he can escape: the prison and the person become linearly separable with respect to the third dimension. The intuition behind the "kernel trick" is to map the data to a higher-dimensional space in which the two classes are linearly separable by a hyperplane. The original optimal hyperplane algorithm, proposed by Vladimir Vapnik (http://en.wikipedia.org/wiki/Vladimir_Vapnik) in 1963, was a linear classifier.
However, in 1992, Bernhard Boser, Isabelle Guyon, and Vapnik suggested a way to create non-linear classifiers by applying the kernel trick to maximum-margin hyperplanes. The algori…
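The "escape to a higher dimension" analogy above can be sketched numerically. The example below is not from the course notes; it is a minimal illustration in which points inside a circle and points outside it, inseparable by any line in 2D, become separable by a flat plane once we add a hypothetical third coordinate z = x² + y²:

```python
import numpy as np

rng = np.random.default_rng(0)

# Class 0: points inside the unit circle (radius < 0.7).
inner = rng.uniform(-0.7, 0.7, size=(60, 2))
inner = inner[np.hypot(inner[:, 0], inner[:, 1]) < 0.7]

# Class 1: points in an annulus outside the unit circle (radius 1.3 to 2.0).
angles = rng.uniform(0.0, 2.0 * np.pi, size=50)
radii = rng.uniform(1.3, 2.0, size=50)
outer = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])

def lift(points):
    """Map (x, y) -> (x, y, x^2 + y^2): the extra 'prison-escape' dimension."""
    z = points[:, 0] ** 2 + points[:, 1] ** 2
    return np.column_stack([points, z])

# In the lifted 3D space, the plane z = 1 separates the classes perfectly:
# every inner point has z < 0.49, every outer point has z > 1.69.
assert np.all(lift(inner)[:, 2] < 1.0)
assert np.all(lift(outer)[:, 2] > 1.0)
```

No line can separate these two classes in the original 2D space, yet a single linear threshold on the third coordinate does the job after the lift, which is exactly the intuition the kernel trick exploits.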
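The point of the kernel trick, as applied to maximum-margin hyperplanes, is that the inner product in the high-dimensional space can be computed without ever constructing the mapped vectors. As a sketch (this specific feature map is a standard textbook example, not taken from the notes), the degree-2 polynomial kernel K(x, y) = (x·y)² on R² equals the ordinary dot product under the explicit map φ(x) = (x₁², √2·x₁x₂, x₂²):

```python
import numpy as np

def phi(x):
    """Explicit feature map into R^3 for the degree-2 polynomial kernel on R^2."""
    return np.array([x[0] ** 2, np.sqrt(2.0) * x[0] * x[1], x[1] ** 2])

def poly_kernel(x, y):
    """K(x, y) = (x . y)^2, computed entirely in the original 2D space."""
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

# The kernel evaluated in 2D agrees with the dot product in the lifted space.
assert np.isclose(poly_kernel(x, y), np.dot(phi(x), phi(y)))
```

This identity is what lets an SVM maximize the margin in the lifted space while only ever evaluating K on pairs of original data points.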