ln023 - Training using Errors

Training using Errors
In training a neural network, error is very important: only errors allow us to refine the network weights. We continue to refine the weights until the network classifies perfectly, or within an acceptable error margin.
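The refine-until-acceptable loop described above can be sketched as follows. This is a minimal illustration, not part of the notes: the function names `update` and `error`, the margin value, and the epoch cap are all illustrative assumptions.

```python
# Sketch of error-driven training: refine weights until the network
# classifies perfectly or the error falls below an acceptable margin.
# (`update`, `error`, and `margin` are illustrative names, not from the notes.)

def train_until_acceptable(weights, examples, update, error, margin=0.05, max_epochs=1000):
    """Repeatedly refine `weights` with `update` until `error` <= margin."""
    for epoch in range(max_epochs):
        if error(weights, examples) <= margin:
            return weights, epoch              # acceptable error margin reached
        for x, y in examples:
            weights = update(weights, x, y)    # errors drive each refinement
    return weights, max_epochs
```

The stopping test comes first so a network that already classifies acceptably is not disturbed.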
Errors in the Perceptron
The perceptron computes an output f(x) from its inputs. Because there is only a single layer, we can update the weights directly from the error between the target y and the output f(x): w_i ← w_i + η (y − f(x)) x_i.
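A minimal sketch of this single-layer update, assuming a threshold activation and the standard perceptron rule w_i ← w_i + η (y − f(x)) x_i. The AND-gate data, the value of `eta`, and the epoch count are illustrative choices, not from the notes:

```python
# Perceptron rule w_i <- w_i + eta * (y - f(x)) * x_i
# (the AND-gate data and `eta` are illustrative, not from the notes).

def predict(weights, bias, x):
    """Threshold activation f(x): fire iff the weighted sum exceeds 0."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train_perceptron(examples, eta=0.1, epochs=50):
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in examples:
            err = y - predict(weights, bias, x)   # the error drives the update
            weights = [w + eta * err * xi for w, xi in zip(weights, x)]
            bias += eta * err
    return weights, bias

# Learning the linearly separable AND function:
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this rule finds a perfect classifier.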
Backpropagation(training_examples, η, in, out, hidden)

Each training example is a pair of the form (x, y), where x is the vector of network input values and y is the vector of target network output values. η is the learning rate, in is the number of network inputs, out is the number of output units, and hidden is the number of units in the hidden layer. The output from unit i is denoted o_i, and the weight from unit i to unit j is denoted w_ij.

Create a feed-forward network with in inputs, hidden hidden units, and out output units.
Initialize all network weights to small random numbers (between −0.05 and 0.05).
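The setup above can be sketched in Python. This is a hedged, minimal implementation assuming sigmoid units and the standard stochastic gradient-descent weight updates; names such as `n_in`, `n_hidden`, `n_out` (since `in` is a reserved word in Python), the bias handling, and the epoch count are illustrative assumptions:

```python
import math
import random

def make_network(n_in, n_hidden, n_out, seed=0):
    """Feed-forward net; weights start as small random numbers in [-0.05, 0.05]."""
    rng = random.Random(seed)
    w_ih = [[rng.uniform(-0.05, 0.05) for _ in range(n_in + 1)]      # +1 slot: bias
            for _ in range(n_hidden)]
    w_ho = [[rng.uniform(-0.05, 0.05) for _ in range(n_hidden + 1)]
            for _ in range(n_out)]
    return w_ih, w_ho

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def forward(net, x):
    """Return hidden activations h and output activations o for input x."""
    w_ih, w_ho = net
    xs = list(x) + [1.0]                                             # bias input
    h = [sigmoid(sum(w * xi for w, xi in zip(row, xs))) for row in w_ih]
    hs = h + [1.0]
    o = [sigmoid(sum(w * hi for w, hi in zip(row, hs))) for row in w_ho]
    return h, o

def backpropagation(training_examples, eta, n_in, n_out, n_hidden, epochs=2000, seed=0):
    net = make_network(n_in, n_hidden, n_out, seed)
    w_ih, w_ho = net
    for _ in range(epochs):
        for x, y in training_examples:
            h, o = forward(net, x)
            # Output-unit error terms: delta_k = o_k (1 - o_k) (y_k - o_k)
            delta_o = [ok * (1 - ok) * (yk - ok) for ok, yk in zip(o, y)]
            # Hidden-unit error terms: delta_j = h_j (1 - h_j) * sum_k w_kj delta_k
            delta_h = [hj * (1 - hj) * sum(w_ho[k][j] * delta_o[k]
                                           for k in range(n_out))
                       for j, hj in enumerate(h)]
            xs, hs = list(x) + [1.0], h + [1.0]
            for k in range(n_out):                   # update hidden->output weights
                for j in range(n_hidden + 1):
                    w_ho[k][j] += eta * delta_o[k] * hs[j]
            for j in range(n_hidden):                # update input->hidden weights
                for i in range(n_in + 1):
                    w_ih[j][i] += eta * delta_h[j] * xs[i]
    return net
```

The biases are folded in as an extra always-1 input to each layer, which keeps the update rules uniform across all weights.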