…a linear combination of components.
A Perceptron Example
The perceptron network can find the decision boundary line even if we don't know how to draw the line ourselves. We just have to give it some labelled examples first. For example:
Features: x1, x2, x3    Answer
1, 0, 0                 +1
1, 0, 1                 +1
1, 1, 0                 +1
0, 0, 1                 -1
0, 1, 1                 -1
1, 1, 1                 -1

The perceptron starts out not knowing how to separate the answers, so it guesses. For example, we input 1,0,0 and it guesses -1, but the right answer is +1. So the
perceptron adjusts its line and we try the next example. Eventually the perceptron will have all the answers right.
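This guess-and-adjust loop can be sketched in plain Python on the example table above. The zero initialization, unit step size, and epoch cap are illustrative choices, not specified in the notes:

```python
# Minimal perceptron training loop for the example table above.
# Zero initialization and a step size of 1 are arbitrary choices.

examples = [
    ((1, 0, 0), +1), ((1, 0, 1), +1), ((1, 1, 0), +1),
    ((0, 0, 1), -1), ((0, 1, 1), -1), ((1, 1, 1), -1),
]

def train_perceptron(data, max_epochs=100):
    w = [0.0, 0.0, 0.0]   # one weight per feature
    b = 0.0               # intercept
    for _ in range(max_epochs):
        mistakes = 0
        for x, y in data:
            guess = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if guess != y:
                # Misclassified: move the boundary toward the point.
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                mistakes += 1
        if mistakes == 0:  # every point on the correct side: terminate
            break
    return w, b

w, b = train_perceptron(examples)
print(w, b)
```

Because this particular table is linearly separable, the loop is guaranteed to stop with every example classified correctly.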
wikicoursenote.com/w/index.php?title=Stat841&printable=yes 35/74 10/09/2013 Stat841 - Wiki Course Notes
The Perceptron (Lecture October 23, 2009)
A Perceptron can be modeled as shown in Figure 1 of the previous lecture, where x1, ..., xd represent the feature data and the unit computes β^T x + β0, where β0 is the model intercept and β^T x is a linear combination of the inputs weighted by β; sign(β^T x + β0) indicates the sign of the expression and returns the label of the data point. The Perceptron algorithm seeks a linear boundary between two classes. A linear decision boundary can be written as the set of points x satisfying β^T x + β0 = 0.
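As a sketch of this model, assuming the notation above (β for the weights, β0 for the intercept; the specific numbers below are only illustrative):

```python
# Sketch of the perceptron model described above: the predicted
# label is sign(beta^T x + beta_0).

def perceptron_label(x, beta, beta0):
    """Return the predicted class label, +1 or -1, for feature vector x."""
    activation = sum(b * xi for b, xi in zip(beta, x)) + beta0
    return 1 if activation > 0 else -1

# With beta = [2, -1, -1] and beta0 = -0.5 (an assumed separating
# hyperplane for the earlier example table), the point (1, 0, 0)
# gets label +1 and (1, 1, 1) gets label -1:
print(perceptron_label((1, 0, 0), [2, -1, -1], -0.5))  # prints 1
print(perceptron_label((1, 1, 1), [2, -1, -1], -0.5))  # prints -1
```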
The algorithm begins with an arbitrary hyperplane β, β0 (initial guess). Its goal is to minimize the distance between the decision boundary and the misclassified data points. This is illustrated in Figure 2. It attempts to find the optimal β, β0 by iteratively adjusting the decision boundary until all points are on the correct side of the boundary. It terminates when there are no misclassified points.

Figure 2: This figure shows a misclassified point
and the movement of the decision boundary.

Derivation: The distance between the decision boundary and misclassified points
If x1 and x2 both lie on the decision boundary, then β^T x1 + β0 = 0 and β^T x2 + β0 = 0, so β^T (x1 - x2) = 0, where β^T x denotes an inner product. Since the inner product is 0, β is orthogonal to any vector lying in the decision boundary.
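This orthogonality step can be checked numerically. The hyperplane and the two boundary points below are assumed purely for illustration:

```python
# Numeric check of the derivation: if x1 and x2 both lie on the
# decision boundary beta^T x + beta_0 = 0, then beta^T (x1 - x2) = 0,
# i.e. beta is orthogonal to any vector lying in the boundary.
# beta, beta0, and the two boundary points are assumed for illustration.

beta = [2.0, -1.0, -1.0]
beta0 = -0.5

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Two points on the plane 2x - y - z - 0.5 = 0:
x1 = [0.25, 0.0, 0.0]   # 2*0.25 - 0 - 0 - 0.5 = 0
x2 = [1.0, 1.5, 0.0]    # 2*1 - 1.5 - 0 - 0.5 = 0

assert abs(dot(beta, x1) + beta0) < 1e-12
assert abs(dot(beta, x2) + beta0) < 1e-12

# The difference vector lies in the boundary; its inner product
# with beta vanishes:
print(dot(beta, [a - b for a, b in zip(x1, x2)]))  # prints 0.0
```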