NN_perceptron_1
Single Layer Neural Network
Xingquan (Hill) Zhu
Outline
- Perceptron for classification
- Perceptron training rule
- Why does the perceptron training rule work?
- Gradient descent learning rule
- Incremental (stochastic) gradient descent
- Delta rule (Adaline: Adaptive Linear Element)
Perceptron: Architecture
We consider a feed-forward neural network with one layer. It is sufficient to study single-layer perceptrons with just one neuron.
Single-Layer Perceptrons
Generalization to single-layer perceptrons with more neurons is easy, because:
- The output units are independent of each other
- Each weight affects only one of the outputs
Perceptron: Neuron Model
The (McCulloch-Pitts) perceptron is a single-layer NN with a non-linear activation ϕ, the sign function: the neuron outputs ϕ(v) = +1 if its weighted input sum v ≥ 0, and -1 otherwise.
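As a concrete sketch of this neuron model (my own illustrative code, not the slides'; the names `sign` and `perceptron_output` are assumptions):

```python
# McCulloch-Pitts perceptron: output = sign(weighted sum of inputs + bias).
# All names here are illustrative; the slides give only the math.

def sign(v):
    """Sign activation phi: +1 if v >= 0, else -1."""
    return 1 if v >= 0 else -1

def perceptron_output(weights, bias, x):
    """Compute phi(w . x + w0) for a single input vector x."""
    v = sum(w_i * x_i for w_i, x_i in zip(weights, x)) + bias
    return sign(v)
```

With weights (1, 1) and bias -1.5, for example, this neuron already computes AND on ±1 inputs, foreshadowing the AND example later in the slides.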
Perceptron for Classification
The perceptron is used for binary classification. Given training examples of classes C1 and C2, train the perceptron so that it classifies the training examples correctly:
- If the output of the perceptron is +1, the input is assigned to class C1
- If the output is -1, the input is assigned to class C2
Perceptron Training
How can we train a perceptron for a classification task? We try to find values for the weights such that the training examples are correctly classified. Geometrically, we try to find a hyperplane that separates the examples of the two classes.
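The perceptron training rule named in the outline can be sketched as follows. This is a hedged illustration, not the slides' own code; the function names, the learning rate value, and the zero initialization are my choices.

```python
# Perceptron training rule sketch: on a misclassified example, nudge each
# weight by eta * (t - o) * x_i and the bias by eta * (t - o), where t is
# the target and o the current output. Names and eta are assumptions.

def sign(v):
    """Sign activation: +1 if v >= 0, else -1."""
    return 1 if v >= 0 else -1

def train_perceptron(samples, n_inputs, eta=0.1, epochs=100):
    """samples: list of (x, target) pairs, targets in {-1, +1}."""
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for x, t in samples:
            o = sign(sum(wi * xi for wi, xi in zip(w, x)) + b)
            if o != t:                      # update only on errors
                mistakes += 1
                for i in range(n_inputs):
                    w[i] += eta * (t - o) * x[i]
                b += eta * (t - o)
        if mistakes == 0:                   # all examples correct: stop
            break
    return w, b
```

For linearly separable data this rule is guaranteed to find a separating hyperplane in finitely many updates (the perceptron convergence theorem), which is the "why does it work" question from the outline.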
Perceptron Geometric View
The equation below describes a (hyper-)plane in the input space consisting of real-valued 2D vectors. The plane splits the input space into two regions, each of them describing one class:

    Σ_{i=1}^{2} w_i x_i + w_0 = 0

[Figure: the decision boundary w1 x1 + w2 x2 + w0 = 0 in the (x1, x2) plane, with the decision region for C1 on one side and C2 on the other.]
Example: AND
Here is a representation of the AND function. White means false and black means true for the output; -1 means false and +1 means true for the input:
-1 AND -1 = false
-1 AND +1 = false
+1 AND -1 = false
+1 AND +1 = true
Example: AND (continued)
A linear decision surface separates the false instances from the true ones.
Example: AND (continued)
Watch a perceptron learn the AND function:
[Animation: successive decision boundaries as the perceptron learns AND.]
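The animation cannot be reproduced here, but the training run it depicts can be sketched in code (my own illustrative code; the learning rate 0.1 and zero-initialized weights are assumptions):

```python
# Train a perceptron on AND with the rule w_i <- w_i + eta*(t - o)*x_i.
# Because AND is linearly separable, training converges to zero mistakes.

def sign(v):
    return 1 if v >= 0 else -1

def train(samples, n_inputs, eta=0.1, epochs=100):
    w, b = [0.0] * n_inputs, 0.0
    mistakes = 0
    for _ in range(epochs):
        mistakes = 0
        for x, t in samples:
            o = sign(sum(wi * xi for wi, xi in zip(w, x)) + b)
            if o != t:
                mistakes += 1
                for i in range(n_inputs):
                    w[i] += eta * (t - o) * x[i]
                b += eta * (t - o)
        if mistakes == 0:
            break  # converged: a separating line has been found
    return w, b, mistakes

# AND on {-1, +1} inputs, encoded as in the slide (false = -1, true = +1)
AND = [((-1, -1), -1), ((-1, 1), -1), ((1, -1), -1), ((1, 1), 1)]
w, b, mistakes = train(AND, 2)
```

With these settings the run converges within a couple of epochs, ending with `mistakes == 0` and a line that puts only (+1, +1) on the positive side.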
Example: XOR
Here is the XOR function:
-1 XOR -1 = false
-1 XOR +1 = true
+1 XOR -1 = true
+1 XOR +1 = false
Perceptrons cannot learn such linearly inseparable functions.
Example: XOR (continued)
Watch a perceptron try to learn XOR:
[Animation: the decision boundary oscillating as the perceptron fails to separate the XOR classes.]
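The failure can be demonstrated with the same training-rule sketch used for AND (illustrative code, not the slides' animation). No line separates XOR's two classes, so every pass over the data misclassifies at least one example, no matter how long we train:

```python
# The perceptron training rule applied to XOR: since XOR is not linearly
# separable, the mistake count never reaches zero and training never stops
# early.

def sign(v):
    return 1 if v >= 0 else -1

def train(samples, n_inputs, eta=0.1, epochs=100):
    w, b = [0.0] * n_inputs, 0.0
    mistakes = 0
    for _ in range(epochs):
        mistakes = 0
        for x, t in samples:
            o = sign(sum(wi * xi for wi, xi in zip(w, x)) + b)
            if o != t:
                mistakes += 1
                for i in range(n_inputs):
                    w[i] += eta * (t - o) * x[i]
                b += eta * (t - o)
        if mistakes == 0:
            break
    return w, b, mistakes

XOR = [((-1, -1), -1), ((-1, 1), 1), ((1, -1), 1), ((1, 1), -1)]
w, b, mistakes = train(XOR, 2)  # mistakes stays positive: XOR is inseparable
```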
Example
An 8×8 pattern of ±1 pixel values depicting the digit 3:
-1 -1 -1 -1 -1 -1 -1 -1
-1 -1 +1 +1 +1 +1 -1 -1
-1 -1 -1 -1 -1 +1 -1 -1
-1 -1 -1 +1 +1 +1 -1 -1
-1 -1 -1 -1 -1 +1 -1 -1
-1 -1 -1 -1 -1 +1 -1 -1
-1 -1 +1 +1 +1 +1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1
Example
How do we train a perceptron to recognize this 3? Assign -1 to the weights of input values that equal -1, +1 to the weights of input values that equal +1, and -63 to the bias. The output of the perceptron is then +1 when presented with a "perfect" 3, and at most -1 for all other patterns: flipping even one pixel drops the weighted sum from 64 to at most 62, and 62 - 63 = -1.
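This hand-built construction can be checked directly (illustrative code; the 8×8 pattern is my transcription of the slide's grid, and the names `THREE` and `recognize` are my own):

```python
# Weights copy the "perfect 3" pattern; bias = -63. A perfect match gives a
# weighted sum of 64 (each of the 64 products (+/-1)*(+/-1) is +1), so
# 64 - 63 = 1 -> output +1. Any other +/-1 pattern agrees on at most 63
# pixels, giving a sum of at most 62 and output sign(62 - 63) = -1.

def sign(v):
    return 1 if v >= 0 else -1

THREE = [
    -1, -1, -1, -1, -1, -1, -1, -1,
    -1, -1, +1, +1, +1, +1, -1, -1,
    -1, -1, -1, -1, -1, +1, -1, -1,
    -1, -1, -1, +1, +1, +1, -1, -1,
    -1, -1, -1, -1, -1, +1, -1, -1,
    -1, -1, -1, -1, -1, +1, -1, -1,
    -1, -1, +1, +1, +1, +1, -1, -1,
    -1, -1, -1, -1, -1, -1, -1, -1,
]

weights = list(THREE)  # w_i = +1 where the pattern is +1, else -1
bias = -63

def recognize(x):
    """+1 only for the exact pattern the weights were copied from."""
    return sign(sum(w * xi for w, xi in zip(weights, x)) + bias)
```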
Example
A slightly different 3 (one pixel changed from the perfect pattern):
-1 -1 -1 -1 -1 -1 -1 -1
-1 -1 +1 +1 +1 +1 -1 -1
-1 -1 -1 -1 -1 +1 -1 -1
-1 -1 -1 +1 +1 +1 -1 -1
-1 +1 -1 -1 -1 +1 -1 -1
-1 -1 -1 -1 -1 +1 -1 -1
-1 -1 +1 +1 +1 +1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1
Example What if a slightly different 3 is to be recognized, like the one in the previous slide?

This note was uploaded on 11/15/2011 for the course CAP 4630 taught by Professor Staff during the Fall '08 term at FAU.