The algorithm is very similar, except that every dot product is replaced by a non-linear kernel function, as below. This allows the algorithm to fit the maximum-margin hyperplane in a transformed feature space.

We have seen SVM as a linear classification problem that finds the maximum-margin hyperplane in the given input space. However, for many real-world problems a more complex decision boundary is required. The following simple method was devised to solve the same linear classification problem, but in a higher-dimensional "feature space" in which a maximum-margin hyperplane is better suited.

Let $\phi : \mathbb{R}^d \to H$ be a mapping from the input space into a higher-dimensional feature space $H$. We wish to find a $\phi$ such that our data will be suited for separation by a hyperplane. Given this function, we are led to solve the previous constrained quadratic optimization on the transformed dataset: minimize $\frac{1}{2}\|w\|^2$ such that $y_i \,(w \cdot \phi(x_i) + b) \ge 1$ for all $i$.

[Figure: a cartoon point, "stuck" among points of the other class in the plane, escapes through the third dimension, illustrating how mapping the data into a higher-dimensional space can make it linearly separable.]

The solution to this optimization problem is now well known; however, a workable $\phi$ must still be determined. Possibly the largest drawback of this method is that we must compute the inner product of two vectors in the high-dimensional space. As the number of dimensions increases, the inner product becomes computationally intensive or even impossible to evaluate directly. However, we have a very useful result that says there exists a class of functions $K$ which satisfy the above requirements, i.e. for which

    $K(x_i, x_j) = \phi(x_i) \cdot \phi(x_j)$,

where $K$ is a kernel function in the input space satisfying Mercer's condition (http://en.wikipedia.org/wiki/Mercer%27s_condition), which guarantees that $K$ indeed corresponds to some mapping function $\phi$. As a result, if the objective function depends on inner products but not on coordinates, we can always use the kernel function to implicitly compute inner products in the feature space without ever storing the huge high-dimensional vectors.
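The kernel identity above can be checked concretely. The following sketch (not from the text; the degree-2 polynomial kernel and its explicit feature map are a standard illustrative choice) verifies that evaluating the kernel in the 2-D input space gives the same number as an explicit dot product in the 3-D feature space:

```python
import numpy as np

# Explicit feature map for the degree-2 polynomial kernel on R^2:
# phi(x) = (x1^2, x2^2, sqrt(2)*x1*x2), chosen so that
# phi(x) . phi(z) = (x . z)^2.
def phi(x):
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

def poly_kernel(x, z):
    # Kernel evaluated entirely in the low-dimensional input space.
    return np.dot(x, z) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])

lhs = np.dot(phi(x), phi(z))   # inner product in the 3-D feature space
rhs = poly_kernel(x, z)        # same quantity, never forming phi(.)
print(lhs, rhs)                # equal up to floating-point error
```

Here the feature space is only 3-dimensional, so the saving is small; for a degree-$p$ polynomial kernel on $\mathbb{R}^d$ the explicit feature space has $O(d^p)$ coordinates while the kernel still costs one dot product.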
Not only does this solve the computational problem, it also no longer requires us to explicitly determine a specific mapping function in order to use this method. In fact, it is now possible to work in an infinite-dimensional feature space, such as the one induced by the Gaussian (RBF) kernel.
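As a small sketch of this last point (my example, not from the text): the Gaussian RBF kernel corresponds to an infinite-dimensional feature map, yet evaluating it costs only one squared distance, and Mercer's condition shows up numerically as the Gram matrix being symmetric positive semi-definite:

```python
import numpy as np

# Gaussian (RBF) kernel: K(x, z) = exp(-gamma * ||x - z||^2).
# Its feature space is infinite-dimensional, but K itself is cheap.
def rbf_kernel(x, z, gamma=0.5):
    return np.exp(-gamma * np.sum((x - z) ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))   # 5 sample points in R^3

# Gram (kernel) matrix K_ij = K(x_i, x_j)
K = np.array([[rbf_kernel(a, b) for b in X] for a in X])

# Mercer's condition in practice: K is symmetric positive
# semi-definite, hence a valid inner product in *some* feature space.
eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() >= -1e-10)  # True: no significantly negative eigenvalues
```

Because the SVM dual objective touches the data only through such inner products, this Gram matrix is all the optimizer ever needs.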