…classes are not linearly separable, a hyperplane is selected such that as few document vectors as possible lie on the "wrong" side. SVMs can be used with non-linear predictors by transforming the usual input features in a non-linear way, e.g. by defining a feature map

φ(t_1, ..., t_N) = (t_1, ..., t_N, t_1^2, t_1 t_2, ..., t_N t_{N-1}, t_N^2)

Subsequently, a hyperplane may be defined in the expanded input space. Obviously, such non-linear transformations may be defined in a large number of ways.

Band 20 – 2005 · Hotho, Nürnberger, and Paaß

The most important property of SVMs is that learning is nearly independent of the dimensionality of the feature space. SVMs rarely require feature selection, as they inherently select the data points (the support vectors) required for a good classification. This allows good generalization even in the presence of a large number of features and makes SVMs especially suitable for the classification of texts (Joachims 1998). In the case of textual data, the choice of the kernel function has a minimal effect on the accuracy of classification …
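As a concrete illustration of the quadratic feature map φ above, the following sketch expands a vector (t_1, ..., t_N) into the original features plus all pairwise products t_i·t_j (including the squares). The function name is hypothetical; the point is only that the expansion is explicit, so a hyperplane in the expanded space corresponds to a quadratic surface in the original space.

```python
from itertools import combinations_with_replacement

def quadratic_feature_map(t):
    """Expand (t_1,...,t_N) to (t_1,...,t_N, t_1^2, t_1*t_2, ..., t_N^2)."""
    expanded = list(t)
    # Append every product t_i * t_j with i <= j (squares and cross terms).
    for i, j in combinations_with_replacement(range(len(t)), 2):
        expanded.append(t[i] * t[j])
    return expanded

# An N-dimensional vector expands to N + N*(N+1)/2 dimensions.
print(quadratic_feature_map([1.0, 2.0]))  # [1.0, 2.0, 1.0, 2.0, 4.0]
```

In practice the expansion is rarely materialized; a polynomial kernel computes the same inner products implicitly. The explicit version is shown only to make the mapping tangible.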
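The suitability of linear SVMs for high-dimensional text data (Joachims 1998) can be sketched with a minimal example, assuming scikit-learn is available; the toy corpus and its labels are invented for illustration.

```python
# Minimal sketch: tf-idf features + linear SVM for text classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

docs = [
    "the stock market fell sharply today",
    "investors fear further interest rate hikes",
    "the team won the championship game",
    "a thrilling match ended in overtime",
]
labels = ["finance", "finance", "sports", "sports"]

# Each document becomes a high-dimensional, sparse tf-idf vector; the
# linear SVM learns a separating hyperplane without any feature selection.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)
clf = LinearSVC()
clf.fit(X, labels)

print(clf.predict(vectorizer.transform(["interest rate news"]))[0])
```

Note that no dimensionality reduction step is needed: the support vectors determined during training are what constrain the decision boundary, which is why accuracy degrades little as the vocabulary grows.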