quiz3_2008

[Figure: scatter plot of data points labeled m, s, a, and b; vertical axis "density" (0-6), horizontal axis 0-5]


You need not hand this in.

Backpropagation Notes

New weights depend on a learning rate and the derivative of a performance function with respect to the weights:

(1)  w_i' = w_i + r ∂P/∂w_i

Using the chain rule, where y_i designates a neuron's output:

(2)  ∂P/∂w_i = (∂P/∂y_i)(∂y_i/∂w_i)

[Diagram: a neuron with input x_i, weight w_i, and output y_i]

For the standard performance function, where y* is the desired final output and y is the actual final output, P = -1/2 Σ (y* - y)^2:

(3)  ∂P/∂y_i = ∂/∂y_i ( -1/2 (y* - y)^2 ) = (y* - y)

For a neural net, the total input to a neuron is z = Σ_i w_i x_i. (Note that x_i is sometimes written y_i to indicate that in a multilayer network, the input to one node is the output of a node in the previous layer.)

For a sigmoid neural net, the output of a neuron with total input z is y = s(z) = 1 / (1 + e^(-z)). Recall that the derivative is ∂y/∂z = y(1 - y).

So for the output layer of a sigmoid neural net:

(4)  ∂y_i/∂w_i = (∂y/∂z)(∂z/∂w_i) = y(1 - y) x_i
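
Substituting (3) and (4) into (2) and then into (1) gives the output-layer update w_i' = w_i + r (y* - y) y(1 - y) x_i. The Python sketch below illustrates that combined update for a single sigmoid output neuron; the function names, learning rate, and sample input values are illustrative choices, not part of the original notes.

import math

def sigmoid(z):
    # y = s(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + math.exp(-z))

def update_output_weights(w, x, y_star, r):
    # One gradient step for a single sigmoid output neuron, per
    # equations (1)-(4): w_i' = w_i + r * (y* - y) * y * (1 - y) * x_i
    z = sum(wi * xi for wi, xi in zip(w, x))  # total input z = sum_i w_i x_i
    y = sigmoid(z)                            # actual output
    dP_dy = y_star - y                        # equation (3)
    dy_dz = y * (1.0 - y)                     # sigmoid derivative used in (4)
    return [wi + r * dP_dy * dy_dz * xi for wi, xi in zip(w, x)]

# Illustrative values: two inputs, desired output 1.0, learning rate r = 0.5
w = [0.2, -0.4]
x = [1.0, 0.5]
print(update_output_weights(w, x, y_star=1.0, r=0.5))

Because P = -1/2 Σ (y* - y)^2 is largest (zero) when y matches y*, the step adds r ∂P/∂w_i rather than subtracting it, matching the ascent-on-performance convention of equation (1).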
