…coding approach.

A paper uses the LS-SVM framework to solve the KLR problem. In that paper, the authors show that minimizing the negative penalized log-likelihood criterion is equivalent to solving, in each iteration, a weighted version of least squares support vector machines (wLS-SVMs). In the derivation it turns out that the global regularization term is reflected as usual in each step. A similar iterative weighting of wLS-SVMs, with different weighting factors, has been reported to converge to an SVM solution.

Unlike SVMs, KLR is by its nature not sparse and needs all training samples in its final model. Several adaptations of the original algorithm have been proposed to obtain sparseness: one uses a sequential minimal optimization (SMO) approach, and in another the binary KLR problem is reformulated as a geometric programming problem that can be solved efficiently by an interior-point algorithm.

In the LS-SVM framework, fixed-size LS-SVM has shown its value on large data sets. It approximates the feature map using a spectral decomposition, which leads to a sparse representation of the model when estimating in the primal space. The authors use this technique as a practical implementation of KLR with estimation in the primal space. To reduce the size of the Hessian, an alternating descent version of Newton's method is used, which has the extra advantage that it can easily be applied in a distributed computing environment. The proposed algorithm is compared to existing algorithms on small- to large-scale benchmark data sets.

Paper's Link: [17] (ftp://ftp.esat.kuleuven.ac.be/pub/SISTA/karsmakers/20070424IJCNN_pk.pdf)
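To make the two ingredients above concrete, here is a minimal Python sketch combining a spectral (Nystrom-type) approximation of the feature map with IRLS/Newton estimation of penalized logistic regression in the primal space; the per-iteration weighted least-squares structure of IRLS is what the wLS-SVM equivalence refers to. Everything here is an illustrative assumption rather than the paper's implementation: the RBF kernel, the randomly chosen prototype subset, the function names and parameters, and a plain full Newton step standing in for the paper's alternating descent variant.

import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """RBF kernel matrix between row-vector sets A and B (assumed kernel choice)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def nystrom_features(X, prototypes, sigma=1.0, eps=1e-10):
    """Approximate feature map phi(x) from the spectral decomposition
    of the small kernel matrix on a fixed-size prototype subset."""
    Kmm = rbf_kernel(prototypes, prototypes, sigma)
    evals, evecs = np.linalg.eigh(Kmm)
    keep = evals > eps                      # drop numerically zero directions
    scale = evecs[:, keep] / np.sqrt(evals[keep])
    return rbf_kernel(X, prototypes, sigma) @ scale

def fit_klr_primal(Phi, y, lam=1e-2, n_iter=25):
    """IRLS / Newton iterations for L2-penalized logistic regression in the
    primal; each step solves a weighted least-squares problem. y in {0, 1}."""
    n, d = Phi.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Phi @ w))  # current class probabilities
        W = p * (1 - p)                     # IRLS weights
        grad = Phi.T @ (p - y) + lam * w    # regularization enters every step
        H = Phi.T @ (Phi * W[:, None]) + lam * np.eye(d)
        w -= np.linalg.solve(H, grad)       # full Newton step (simplification)
    return w

# Toy usage: two Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.r_[np.zeros(50), np.ones(50)]
proto = X[rng.choice(len(X), 20, replace=False)]   # fixed-size subset
Phi = nystrom_features(X, proto)
w = fit_klr_primal(Phi, y)
print(f"training accuracy: {((Phi @ w > 0) == y).mean():.2f}")

Note how sparseness arises in this setup: the final model depends only on the 20 prototype points through the approximate feature map, not on all training samples as in plain KLR.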
Perceptron (Foundation of Neural Network)

Separating Hyperplane Classifiers

A separating hyperplane classifier tries to separate the data using linear decision boundaries. When the classes overlap, it can be generalized to the support vector machine, which constructs nonlinear boundaries by constructing a linear boundary in an enlarged...

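As a concrete illustration of learning a separating hyperplane, here is a minimal sketch of the classical Rosenblatt perceptron update rule; the toy data, names, and parameters are assumptions for illustration, not part of the course notes.

import numpy as np

def perceptron(X, y, lr=1.0, n_epochs=100):
    """Learn a separating hyperplane w.x + b = 0; y must be in {-1, +1}.
    The loop terminates early only if the data are linearly separable."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(n_epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:     # misclassified (or on boundary)
                w += lr * yi * xi          # rotate hyperplane toward xi
                b += lr * yi
                errors += 1
        if errors == 0:                    # separating hyperplane found
            break
    return w, b

# Toy usage: two linearly separable clusters (illustrative data).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (30, 2)), rng.normal(2, 0.5, (30, 2))])
y = np.r_[-np.ones(30), np.ones(30)]
w, b = perceptron(X, y)
print("all points separated:", np.all(y * (X @ w + b) > 0))

When the classes overlap, no such hyperplane exists and the loop above never converges; that failure mode is exactly what motivates the SVM generalization mentioned in the paragraph above.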