fa13-cs188-lecture-22-1PP


... weight vectors (or the feature vectors)!

Dual Perceptron
- Start with zero counts (alpha)
- Pick up training instances one by one
- Try to classify x_n
- If correct: no change!
- If wrong: lower the count of the wrong class (for this instance), raise the count of the right class (for this instance)

Kernelized Perceptron
- If we had a black box (kernel) K that told us the dot product of two examples x and x':
  - Could work entirely with the dual representation
  - No need to ever take dot products ("kernel trick")
- Like nearest neighbor: work with black-box similarities
- Downside: slow if many examples get nonzero alpha

Kernels: Who Cares?
- So far: a very strange way of doing a very simple calculation
- "Kernel trick": we can substitute any* similarity function in place of the dot product
- Lets us learn new kinds of hypotheses
*...
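The slides above describe the dual perceptron and its kernelized variant only in prose. Below is a minimal Python sketch of that update rule for a multiclass setup. The function and variable names (train_dual_perceptron, linear_kernel, predict) and the use of NumPy are illustrative assumptions, not from the lecture.

    # Minimal sketch of the (multiclass) dual / kernelized perceptron.
    # Names and data layout are illustrative assumptions.
    import numpy as np

    def linear_kernel(x, xp):
        """Plain dot product: with this kernel, we recover the ordinary
        dual perceptron."""
        return np.dot(x, xp)

    def train_dual_perceptron(X, y, classes, K=linear_kernel, epochs=10):
        """X: list of feature vectors; y: list of class labels.
        Returns alpha[c][n]: how much training instance n contributes
        to the score of class c."""
        n = len(X)
        # Start with zero counts (alpha) for every (class, instance) pair.
        alpha = {c: np.zeros(n) for c in classes}

        # Precompute the Gram matrix: all pairwise kernel values
        # ("black-box similarities"); no explicit weight vector is kept.
        G = np.array([[K(X[i], X[j]) for j in range(n)] for i in range(n)])

        for _ in range(epochs):
            # Pick up training instances one by one and try to classify them.
            for i in range(n):
                # Score each class as an alpha-weighted sum of kernel values.
                scores = {c: alpha[c] @ G[i] for c in classes}
                guess = max(scores, key=scores.get)
                if guess != y[i]:
                    # Wrong: lower the count of the wrong class for this
                    # instance, raise the count of the right class.
                    alpha[guess][i] -= 1.0
                    alpha[y[i]][i] += 1.0
        return alpha

    def predict(x, X, alpha, K=linear_kernel):
        """Classify a new example using only kernel evaluations against
        training instances with nonzero alpha."""
        scores = {c: sum(a * K(X[n], x) for n, a in enumerate(alpha[c]) if a != 0)
                  for c in alpha}
        return max(scores, key=scores.get)

Passing any similarity function as K in place of linear_kernel is exactly the "kernel trick" the last slide mentions. Note that predict must loop over every training instance with a nonzero alpha, which is the downside noted above: classification gets slow if many examples end up with nonzero counts.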