The material covered in the first five chapters has given us the foundation on which to introduce Support Vector Machines, the learning approach originally developed by Vapnik and co-workers. Support Vector Machines are a system for efficiently training the linear learning machines introduced in Chapter 2 in the kernel-induced feature spaces described in Chapter 3, while respecting the insights provided by the generalisation theory of Chapter 4, and exploiting the optimisation theory of Chapter 5. An important feature of these systems is that, while enforcing the learning biases suggested by the generalisation theory, they also produce ‘sparse’ dual representations of the hypothesis, resulting in extremely efficient algorithms. This is due to the Karush–Kuhn–Tucker conditions, which hold for the solution and play a crucial role in the practical implementation and analysis of these machines. Another important feature of the Support Vector approach is that, due to Mercer's conditions on the kernels, the corresponding optimisation problems are convex and hence have no local minima. This fact, together with the reduced number of nonzero parameters, marks a clear distinction between these systems and other pattern recognition algorithms, such as neural networks. This chapter will also describe the optimisation
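The sparsity and convexity properties described above can be illustrated with a minimal sketch, not taken from the text: solving the soft-margin SVM dual by projected gradient ascent on a toy two-class problem. The bias term is omitted (a common simplification) so that the equality constraint drops out and only the box constraints remain; the dataset, learning rate, and function names are all illustrative choices, not the book's algorithm.

```python
import numpy as np

def train_svm_dual(X, y, C=10.0, lr=1e-3, steps=5000):
    """Sketch: maximise the dual objective sum(alpha) - 0.5 * alpha' Q alpha
    subject to 0 <= alpha_i <= C, by projected gradient ascent (no bias term)."""
    n = len(y)
    K = X @ X.T                  # linear kernel Gram matrix (Mercer kernel)
    Q = K * np.outer(y, y)       # positive semidefinite, so the dual is concave
    alpha = np.zeros(n)
    for _ in range(steps):
        grad = 1.0 - Q @ alpha               # gradient of the dual objective
        alpha = np.clip(alpha + lr * grad, 0.0, C)  # project onto the box
    return alpha

# Two well-separated Gaussian clusters, labels +1 and -1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 0.3, (20, 2)), rng.normal(-2.0, 0.3, (20, 2))])
y = np.hstack([np.ones(20), -np.ones(20)])

alpha = train_svm_dual(X, y)
support = alpha > 1e-6           # nonzero multipliers mark the support vectors
print(f"support vectors: {support.sum()} of {len(y)}")
```

Because the dual is concave over a convex feasible region, any local maximum found this way is global, and the KKT conditions force α_i = 0 for every point lying strictly outside the margin, which is where the sparsity of the dual representation comes from: only a handful of the 40 multipliers end up nonzero.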