3_10_09_SupervisedLearning

This model develops quasi-periodic columns, with variations due to random initial conditions.
Unsupervised learning is self-organization to maximize extraction of information from input.
How can we modify networks like these to learn how to perform tasks well?
One way is by having a supervisor tell the network whether its performance is good.
Another way is by having a supervisor tell the network the correct answer.
In learning models, the dynamics of firing are much faster than those of synaptic plasticity. Hence, a good approximation is to take the firing rate at its steady state,

    v = w · u

The simplest rule following Hebb's conjecture is

    τ_w dw/dt = v u

In the simplest case, one uses this rule with v = w · u substituted directly.
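As a minimal sketch (the names and parameter values here are illustrative assumptions, not from the notes), the basic Hebb rule can be integrated with Euler steps, with the fast firing dynamics replaced by the steady state v = w · u:

```python
import numpy as np

# Sketch of the basic Hebb rule tau_w dw/dt = v u, with firing taken at
# its steady state v = w . u. Input, initial weights, and parameters are
# illustrative assumptions.
rng = np.random.default_rng(0)
u = rng.normal(size=5)            # a fixed input pattern
w0 = 0.1 * rng.normal(size=5)     # small random initial weights
tau_w, dt, steps = 10.0, 0.1, 200

w = w0.copy()
for _ in range(steps):
    v = w @ u                     # firing assumed at steady state
    w += (dt / tau_w) * v * u     # Euler step of the Hebb rule

# The plain rule is unstable: the component of w along u grows without
# bound, which is what motivates adding a decay term.
```

Running this shows the weight vector growing along the input direction, illustrating why the plain Hebb rule needs stabilization.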
In Hebbian supervised learning without error correction, the output v is the correct answer given in samples (superscripts label samples, not powers):

    τ_w dw/dt = (1/N_S) Σ_{m=1}^{N_S} v^m u^m

For stability of the weights, we add a decay term:

    τ_w dw/dt = −α w + (1/N_S) Σ_{m=1}^{N_S} v^m u^m
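A sketch of the averaged rule with decay (the sample data and parameter values are assumed for illustration): integrating τ_w dw/dt = −α w + (1/N_S) Σ v^m u^m drives the weights to the fixed point where decay balances the Hebbian drive, w* = (1/(α N_S)) Σ v^m u^m.

```python
import numpy as np

# Sketch: supervised Hebbian rule with decay, averaged over N_S samples.
# The inputs u^m and targets v^m are random illustrative data.
rng = np.random.default_rng(1)
N, N_S = 4, 6
U = rng.normal(size=(N_S, N))            # rows are the sample inputs u^m
v = rng.choice([-1.0, 1.0], size=N_S)    # correct answers v^m
alpha, tau_w, dt = 0.5, 5.0, 0.05

drive = (v @ U) / N_S                    # (1/N_S) sum_m v^m u^m
w = np.zeros(N)
for _ in range(5000):
    w += (dt / tau_w) * (-alpha * w + drive)

w_star = drive / alpha                   # fixed point: decay balances drive
```

Unlike the plain Hebb rule, the trajectory converges exponentially (time constant τ_w/α) instead of diverging.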
Example 1: In a perceptron, a nonlinear map that classifies binary-vector inputs into one of two categories, the desired output is one of two values, v = +1 or v = −1, according to the category. A perceptron classifies inputs perfectly only under the condition of linear separability: there must exist a hyperplane dividing the input space such that inputs on one side correspond to v = 1 and inputs on the other side to v = −1.
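When the data are linearly separable, the classical perceptron learning rule (brought in here for illustration; the notes above only state the separability condition) finds a separating hyperplane in finitely many updates. A sketch with hypothetical 2-D data labeled by a hidden hyperplane:

```python
import numpy as np

# Sketch: labels v^m = sign(w_true . u^m) come from a hidden hyperplane,
# so the data are linearly separable by construction. All data here are
# hypothetical.
rng = np.random.default_rng(2)
w_true = np.array([1.0, -2.0])
U = rng.normal(size=(40, 2))
U = U[np.abs(U @ w_true) > 0.5]     # keep points with a clear margin
v = np.sign(U @ w_true)             # desired outputs, +1 or -1

# Perceptron learning rule: nudge w toward v^m u^m on each mistake.
w = np.zeros(2)
for _ in range(1000):
    mistakes = 0
    for u_m, v_m in zip(U, v):
        if np.sign(w @ u_m) != v_m:
            w += v_m * u_m
            mistakes += 1
    if mistakes == 0:               # every sample classified correctly
        break
```

By the perceptron convergence theorem, the number of updates is bounded when a separating hyperplane with positive margin exists; for non-separable data the loop would never reach zero mistakes.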