Perceptron Learning Algorithm
Jia Li
Department of Statistics, The Pennsylvania State University
Email: jiali@stat.psu.edu
http://www.stat.psu.edu/jiali

Separating Hyperplanes

- Construct linear decision boundaries that explicitly try to separate the data into different classes as well as possible.
- Good separation is defined in a specific mathematical form.
- Even when the training data can be perfectly separated by hyperplanes, LDA or other linear methods developed under a statistical framework may not achieve perfect separation.

Review of Vector Algebra

- A hyperplane or affine set L is defined by the linear equation
  L = {x : f(x) = \beta_0 + \beta^T x = 0}.
- For any two points x_1 and x_2 lying in L, \beta^T (x_1 - x_2) = 0, and hence \beta^* = \beta / \|\beta\| is the unit vector normal to the surface of L.
- For any point x_0 in L, \beta^T x_0 = -\beta_0.
- The signed distance of any point x to L is given by
  \beta^{*T} (x - x_0) = \frac{1}{\|\beta\|} (\beta^T x + \beta_0) = \frac{1}{\|f'(x)\|} f(x).
  Hence f(x) is proportional to the signed distance from x to the hyperplane defined by f(x) = 0.

Rosenblatt's Perceptron Learning

- Goal: find a separating hyperplane by minimizing the distance of misclassified points to the decision boundary.
- Code the two classes by y_i = 1, -1.
- If y_i = 1 is misclassified, \beta^T x_i + \beta_0 < 0; if y_i = -1 is misclassified, \beta^T x_i + \beta_0 > 0.
- Since the signed distance from x_i to the decision boundary is (\beta^T x_i + \beta_0) / \|\beta\|, the distance from a misclassified x_i to the decision boundary is -y_i (\beta^T x_i + \beta_0) / \|\beta\|.
- Denote the set of misclassified points by M. The goal is to minimize
  D(\beta, \beta_0) = -\sum_{i \in M} y_i (\beta^T x_i + \beta_0).

Stochastic Gradient Descent

- To minimize D(\beta, \beta_0), compute the gradient (assuming M is fixed):
  \frac{\partial D(\beta, \beta_0)}{\partial \beta} = -\sum_{i \in M} y_i x_i,
  \frac{\partial D(\beta, \beta_0)}{\partial \beta_0} = -\sum_{i \in M} y_i.
- Stochastic gradient descent is used to minimize this piecewise linear criterion: \beta and \beta_0 are adjusted after each misclassified point is visited (a code sketch of these updates follows the Issues list below).
- The update is
  (\beta, \beta_0) \leftarrow (\beta, \beta_0) + \rho (y_i x_i, y_i).
  Here \rho is the learning rate, which in this case can be taken to be 1 without loss of generality. (Note: if \beta^T x + \beta_0 = 0 is the decision boundary, then \rho \beta^T x + \rho \beta_0 = 0 defines the same boundary.)

Issues

- If the classes are linearly separable, the algorithm converges to a separating hyperplane in a finite number of steps.
- The algorithm nevertheless has a number of problems:
  - When the data are separable, there are many solutions, and which one is found depends on the starting values.
  - The number of steps can be very large; the smaller the gap between the classes, the longer it takes to find a separating hyperplane.
  - When the data are not separable, the algorithm will not converge, and cycles develop. The cycles can be long and therefore hard to detect.
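To make the update rule concrete, here is a minimal NumPy sketch of the stochastic updates described above. It is not part of the original slides: the random visiting order, the epoch cap, and treating points exactly on the boundary as misclassified are implementation choices added for illustration.

```python
import numpy as np

def perceptron(X, y, rho=1.0, max_epochs=1000, seed=0):
    """Rosenblatt's perceptron updates: (beta, beta0) <- (beta, beta0) + rho*(y_i x_i, y_i).

    X: (N, p) array of inputs; y: (N,) array of labels in {+1, -1}.
    If the classes are not linearly separable the updates cycle forever,
    hence the epoch cap.
    """
    rng = np.random.default_rng(seed)
    N, p = X.shape
    beta = np.zeros(p)
    beta0 = 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for i in rng.permutation(N):                  # visit points in random order
            if y[i] * (X[i] @ beta + beta0) <= 0:     # misclassified (or on the boundary)
                beta += rho * y[i] * X[i]             # beta  <- beta  + rho * y_i * x_i
                beta0 += rho * y[i]                   # beta0 <- beta0 + rho * y_i
                mistakes += 1
        if mistakes == 0:                             # every point correctly classified
            return beta, beta0
    return beta, beta0
```

On separable data the returned hyperplane depends on the seed and the starting values, which illustrates the non-uniqueness noted in the Issues list.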
Optimal Separating Hyperplanes

- Suppose the two classes can be linearly separated.
- The optimal separating hyperplane separates the two classes and maximizes the distance to the closest point from either class.
- There is a unique solution.
- It tends to have better classification performance on test data.
- The optimization problem is
  \max_{\beta, \beta_0, \|\beta\| = 1} C  subject to  y_i (\beta^T x_i + \beta_0) \geq C, i = 1, ..., N.
  Every point is at least a distance C away from the decision boundary \beta^T x + \beta_0 = 0.
- For any solution of the optimization problem, any positively scaled multiple is a solution as well. We can therefore drop the constraint \|\beta\| = 1 and set \|\beta\| = 1/C. The optimization problem is then equivalent to
  \min_{\beta, \beta_0} \frac{1}{2} \|\beta\|^2  subject to  y_i (\beta^T x_i + \beta_0) \geq 1, i = 1, ..., N.
  This is a convex optimization problem.
- The Lagrange (primal) function, to be minimized with respect to \beta and \beta_0, is
  L_P = \frac{1}{2} \|\beta\|^2 - \sum_{i=1}^{N} a_i [y_i (\beta^T x_i + \beta_0) - 1].
  Setting the derivatives with respect to \beta and \beta_0 to zero, we obtain
  \beta = \sum_{i=1}^{N} a_i y_i x_i,   0 = \sum_{i=1}^{N} a_i y_i.
- Substituting these into L_P, we obtain the Wolfe dual
  L_D = \sum_{i=1}^{N} a_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{k=1}^{N} a_i a_k y_i y_k x_i^T x_k,  subject to  a_i \geq 0.
  This is a simpler convex optimization problem.
- The Karush-Kuhn-Tucker conditions require
  a_i [y_i (\beta^T x_i + \beta_0) - 1] = 0  for all i.
  If a_i > 0, then y_i (\beta^T x_i + \beta_0) = 1, that is, x_i is on the boundary of the slab. If y_i (\beta^T x_i + \beta_0) > 1, that is, x_i is not on the boundary of the slab, then a_i = 0.
- The points x_i on the boundary of the slab are called support points. The solution vector is a linear combination of the support points:
  \beta = \sum_{i : a_i > 0} a_i y_i x_i.
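As an illustration of the equivalent convex problem above, here is a minimal sketch, not from the original slides, that solves the primal directly with the cvxpy modeling library (an assumed dependency); the function name and tolerance are illustrative choices. Support points are identified as those sitting on the boundary of the slab, y_i (\beta^T x_i + \beta_0) = 1, in line with the KKT discussion.

```python
import numpy as np
import cvxpy as cp

def optimal_separating_hyperplane(X, y, tol=1e-6):
    """Solve  min (1/2)||beta||^2  s.t.  y_i (beta^T x_i + beta_0) >= 1.

    X: (N, p) array, y: (N,) array with entries in {+1, -1}.
    Assumes the two classes are linearly separable; otherwise the
    problem is infeasible and the solver reports failure.
    """
    N, p = X.shape
    beta = cp.Variable(p)
    beta0 = cp.Variable()
    margins = cp.multiply(y, X @ beta + beta0)        # y_i (beta^T x_i + beta_0)
    problem = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(beta)),
                         [margins >= 1])
    problem.solve()
    # Support points lie on the boundary of the slab: y_i (beta^T x_i + beta_0) = 1.
    achieved = y * (X @ beta.value + beta0.value)
    support = np.where(achieved <= 1 + tol)[0]
    return beta.value, beta0.value, support
```

Since the rescaling sets \|\beta\| = 1/C, the maximal margin can be read off from the solution as 1/np.linalg.norm(beta).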