3_10_09_SupervisedLearning

Outline of the Lecture

- Hebbian Supervised Learning Without Error Correction
- Hebbian Supervised Learning With Error Correction

Hebbian Supervised Learning Benefits from External Information about the Truth

[Slide figure: "Spider!"]

This model develops quasi-periodic columns, with variations due to random initial conditions. Unsupervised learning is self-organization that maximizes the extraction of information from the input. How can we modify networks like these so that they learn to perform tasks well? One way is to have a supervisor tell the network whether its performance is good. Another way is to have a supervisor tell the network what the correct answer is.

In learning models, the dynamics of firing are much faster than those of synaptic plasticity. Hence, a good approximation is

    $v = \vec{w} \cdot \vec{u}$

The simplest rule following Hebb's conjecture is

    $\tau_w \, \frac{d\vec{w}}{dt} = v \, \vec{u}$

In the simplest case, one uses this rule with $v = \vec{w} \cdot \vec{u}$. In Hebbian supervised learning without error correction, the output $v$ is instead clamped to the correct answer given in the samples (superscripts index samples; they are not powers):

    $\tau_w \, \frac{d\vec{w}}{dt} = \frac{1}{N_S} \sum_{m=1}^{N_S} v^m \, \vec{u}^m$

For stability of the weights, we add a decay term:

    $\tau_w \, \frac{d\vec{w}}{dt} = -\alpha \, \vec{w} + \frac{1}{N_S} \sum_{m=1}^{N_S} v^m \, \vec{u}^m$

Example 1: In a perceptron, a nonlinear map that classifies binary-vector inputs into one of two categories, the desired output is $v^m = \pm 1$. A perceptron can classify all of its inputs perfectly only under the condition that the two categories are linearly separable.
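As a concrete illustration of the decay-stabilized rule, below is a minimal numerical sketch in Python/NumPy. The toy dataset, parameter values, and names (tau_w, alpha, teacher) are illustrative assumptions, not from the lecture: it assumes a linear unit $v = \vec{w} \cdot \vec{u}$ trained on binary $\pm 1$ inputs whose desired outputs $v^m = \pm 1$ come from a hypothetical teacher, as in the perceptron example.

    import numpy as np

    # Minimal sketch (assumed setup, not the lecture's code): a linear unit
    # v = w . u trained with the decay-stabilized supervised Hebbian rule
    #     tau_w dw/dt = -alpha w + (1/N_S) sum_m v^m u^m
    # on toy +/-1 inputs labeled by a hypothetical "teacher" vector.

    rng = np.random.default_rng(0)

    N_S, N_u = 50, 11                             # samples, input dimension (odd, so dot products never equal 0)
    u = rng.choice([-1.0, 1.0], size=(N_S, N_u))  # sample inputs u^m
    teacher = rng.choice([-1.0, 1.0], size=N_u)   # hypothetical truth defining the two categories
    v = np.sign(u @ teacher)                      # desired outputs v^m in {-1, +1}

    # Without error correction, the supervisor supplies v^m directly, so the
    # drive on the weights is simply the sample average of v^m u^m.
    drive = (v[:, None] * u).mean(axis=0)

    # Euler integration of tau_w dw/dt = -alpha w + drive.
    tau_w, alpha, dt = 100.0, 1.0, 1.0
    w = np.zeros(N_u)
    for _ in range(2000):
        w += (dt / tau_w) * (-alpha * w + drive)

    # The decay term makes the weights settle at w = drive / alpha.
    print("at fixed point:", np.allclose(w, drive / alpha, atol=1e-3))

    # Perceptron-style readout: classify by the sign of w . u. Linear
    # separability is necessary for perfect classification, though plain
    # Hebbian averaging is not guaranteed to achieve it even then.
    print("training-set sign agreement:", np.mean(np.sign(u @ w) == v))

With decay, the weights relax to the fixed point $\vec{w} = \frac{1}{\alpha N_S} \sum_m v^m \vec{u}^m$; setting $\alpha = 0$ recovers the unstable rule, whose weights grow without bound.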

BME 575L, Professor Grzywacz, Spring 2009, USC.
