For "batch learning" with all the data included, the learning rule becomes

\Delta W = k \sum_{m=1}^{M} \bigl( y^{(m)} - \hat{y}^{(m)} \bigr)\, x^{(m)T}, \qquad \hat{y}^{(m)} = W x^{(m)},

where y^{(m)} is the desired output vector and \hat{y}^{(m)} is the actual output vector for input x^{(m)}. When learning stops (\Delta W = 0), we have Y X^T = W X X^T (the normal equation). Thus W = Y X^T (X X^T)^{-1} = Y X^\dagger if the matrix inverse exists (see the numerical sketch at the end of this section).

Computational theories of learning
•  Supervised learning: A teaching signal knows the exact value of the desired output and corrects the error of the actual output. Examples: simple and multilayer perceptrons.
•  Unsupervised learning: No explicit teaching signal. Examples: Hebb rule, self-organizing maps.
•  Reinforcement learning: A reward signal is given without knowing the exact desired output.

[Attached excerpt: Schultz, Behavioral and Brain Functions 2010, 6:24, on the role of midbrain dopaminergic activity in reward-dependent learning; the preview text is truncated.]
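To make the normal-equation result concrete, here is a minimal numerical sketch. The dimensions, learning rate k, iteration count, and random data are illustrative assumptions, not values from the notes; it runs the batch delta rule above and checks that the converged W matches Y X^T (X X^T)^{-1} and the pseudoinverse solution Y X^\dagger.

```python
import numpy as np

# Illustrative sketch: toy sizes, learning rate, and random data are assumptions.
rng = np.random.default_rng(0)
n_in, n_out, M = 4, 2, 50            # input dim, output dim, number of patterns
X = rng.normal(size=(n_in, M))        # columns are the input vectors x(m)
Y = rng.normal(size=(n_out, M))       # columns are the desired outputs y(m)

# Batch delta rule: dW = k * sum_m (y(m) - W x(m)) x(m)^T, written in matrix form.
W = np.zeros((n_out, n_in))
k = 0.01                              # learning rate (assumed small enough to converge)
for _ in range(2000):
    dW = k * (Y - W @ X) @ X.T
    W += dW

# At convergence (dW = 0): Y X^T = W X X^T, so W = Y X^T (X X^T)^(-1) = Y X^dagger.
W_normal = Y @ X.T @ np.linalg.inv(X @ X.T)
W_pinv = Y @ np.linalg.pinv(X)

print(np.allclose(W, W_normal, atol=1e-6))   # True once the batch rule has converged
print(np.allclose(W_normal, W_pinv))         # True when X X^T is invertible
```

The iterative rule and the closed-form pseudoinverse agree here because X has full row rank, so (X X^T)^{-1} exists.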