# 3_10_09_SupervisedLearning - Outline of the Lecture: Hebbian Supervised Learning


Outline of the Lecture: Hebbian supervised learning, both without error correction and with error correction.

Hebbian supervised learning benefits from external information about the truth.
This model develops quasi-periodic columns, with variations due to random initial conditions.

Unsupervised learning is self-organization to maximize extraction of information from input.
How can we modify networks like these to learn how to perform tasks well?

One way is by having a supervisor tell the network whether its performance is good.
Another way is by having a supervisor tell the network what the correct answer is.

In learning models, the dynamics of firing are much faster than those of synaptic plasticity. Hence, a good approximation is to take the firing rate at its steady state,

$$v = \mathbf{w} \cdot \mathbf{u}.$$

The simplest rule following Hebb's conjecture is

$$\tau_w \frac{d\mathbf{w}}{dt} = v\,\mathbf{u}.$$
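A minimal numerical sketch of this basic Hebbian rule may help; it uses an assumed Euler discretization with hypothetical sizes, time constant, and step size not given in the lecture:

```python
import numpy as np

# Sketch (assumed discretization): Euler integration of the basic
# Hebbian rule  tau_w dw/dt = v*u,  with the fast steady state v = w.u.
rng = np.random.default_rng(0)
tau_w, dt = 100.0, 1.0              # hypothetical time constant and step
w = rng.normal(scale=0.1, size=5)   # initial synaptic weights (hypothetical size)
u = rng.normal(size=5)              # a fixed input pattern

norms = []
for _ in range(200):
    v = w @ u                        # firing treated as already at steady state
    w = w + (dt / tau_w) * v * u     # Hebbian weight update
    norms.append(np.linalg.norm(w))

# Under the plain rule the weight component along u grows without bound,
# which is why stabilizing terms such as decay are introduced.
print(norms[0] < norms[-1])
```

The run illustrates the well-known instability of unmodified Hebbian plasticity: each step multiplies the projection of the weights onto the input, so some stabilizing mechanism is needed.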
In Hebbian supervised learning without error correction, the output $v$ is set to the correct answer given in a set of samples, and the rule is averaged over those samples (superscripts index samples, not powers):

$$\tau_w \frac{d\mathbf{w}}{dt} = \frac{1}{N_S}\sum_{m=1}^{N_S} v^m\,\mathbf{u}^m.$$

For stability of the weights, we add a decay term:

$$\tau_w \frac{d\mathbf{w}}{dt} = -\alpha\,\mathbf{w} + \frac{1}{N_S}\sum_{m=1}^{N_S} v^m\,\mathbf{u}^m.$$
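Setting the time derivative of the decay-stabilized rule to zero gives a closed-form steady state, $\mathbf{w} = \frac{1}{\alpha N_S}\sum_m v^m \mathbf{u}^m$. A sketch of that steady state, with hypothetical sample count, input dimension, and decay constant (none specified in the excerpt):

```python
import numpy as np

# Sketch (assumed parameters): supervised Hebbian learning with decay.
# At steady state, -a*w + (1/N_S) * sum_m v^m u^m = 0, so
#   w = (1/(a*N_S)) * sum_m v^m u^m.
rng = np.random.default_rng(1)
N_S, dim, a = 4, 50, 1.0                        # hypothetical values
U = rng.choice([-1.0, 1.0], size=(N_S, dim))    # input samples u^m (rows)
v_target = rng.choice([-1.0, 1.0], size=N_S)    # supervised outputs v^m

w = (v_target @ U) / (a * N_S)                  # steady-state weights

# With nearly orthogonal random patterns, sign(w . u^m) tends to
# recover the supervised output v^m.
acc = np.mean(np.sign(U @ w) == v_target)       # fraction recalled correctly
```

This is essentially a correlation-based (Hopfield-style) storage of input-output pairs; recall quality degrades as the number of samples grows relative to the input dimension.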

Example 1: In a perceptron, a nonlinear map that classifies binary-vector inputs into one of two categories, the desired output is one of two values.
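The preview cuts off before the perceptron is defined, but the map it describes can be sketched; the $\pm 1$ output coding, the sign nonlinearity, and the threshold $\gamma$ below are standard perceptron conventions assumed here, not taken from this excerpt:

```python
import numpy as np

# Hypothetical sketch of a perceptron: a nonlinear map classifying
# binary-vector inputs into one of two categories.
# Assumed conventions: outputs coded as +/-1, sign nonlinearity,
# threshold gamma.
def perceptron_output(w, u, gamma=0.0):
    """Classify input u into +1 or -1 via a thresholded linear sum."""
    return 1.0 if w @ u - gamma >= 0 else -1.0

w = np.array([0.5, -0.25, 1.0])
print(perceptron_output(w, np.array([1.0, 1.0, 1.0])))  # 1.25 >= 0  -> 1.0
print(perceptron_output(w, np.array([0.0, 1.0, 0.0])))  # -0.25 < 0 -> -1.0
```

The hard sign nonlinearity is what makes the perceptron a two-category classifier rather than a linear unit.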