layers and adjust weights to minimize error.
Back-propagation neural networks are a good method in the following situations:

The problem is very complex, the number of input or output data points is very large, and there is no obvious way to relate the input to the output.
The solution varies over time within the bounds of the given input and output data, or the output is not easy to measure.

Stat841 - Wiki Course Notes
Neural Networks (NN) - October 30, 2009
Back-propagation
The idea is that we first feed an input (we can normalize the data before feeding it) from the training set to the neural network, then find the error at the output, and then propagate the error back to the previous layers; for each edge with weight w_ij we find the derivative ∂err/∂w_ij. Having these derivatives at hand, we adjust each weight by taking steps proportional to the negative of the gradient, so as to decrease the error at the output. The next step is to apply the next input from the training set and go through the same adjustment procedure.

Overview of the back-propagation algorithm:
1. Feed a point (x, y) in the training set to the network, and find the output of all the nodes.
2. Evaluate δ_k = ŷ_k − y_k for all output units, where y_k is the expected output and ŷ_k is the real output.
3. By propagating to the previous layers, evaluate all the δ's for the hidden units: δ_i = σ'(a_i) Σ_k w_ik δ_k, where i is associated with the previous layer.
4. Using the δ's, find all the derivatives: ∂err/∂w_ij = δ_j z_i.
5. Adjust each weight by taking steps proportional to the negative of the gradient: w_ij ← w_ij − ρ ∂err/∂w_ij.
6. Feed the next point in the training set and repeat the above steps.
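The six steps above can be sketched in plain Python for a network with one hidden layer. This is a minimal illustration, not the course's code: the sigmoid activation, the squared-error loss, and the absence of bias terms are all simplifying assumptions.

```python
import math
import random

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def train_step(x, y, W1, W2, lr):
    """One pass of steps 1-5 for a single training point (x, y).

    W1 maps inputs to hidden units, W2 maps hidden units to outputs;
    both are lists of rows, one row of incoming weights per unit.
    Returns the squared error before the weight update.
    """
    # Step 1: forward pass -- hidden activations z, then outputs yhat.
    z = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    yhat = [sigmoid(sum(w * zi for w, zi in zip(row, z))) for row in W2]
    # Step 2: deltas at the output units (squared-error loss,
    # sigmoid outputs, so sigma'(a) = yhat * (1 - yhat)).
    d2 = [(yh - yk) * yh * (1 - yh) for yh, yk in zip(yhat, y)]
    # Step 3: propagate the deltas back to the hidden layer.
    d1 = [z[i] * (1 - z[i]) * sum(W2[k][i] * d2[k] for k in range(len(d2)))
          for i in range(len(z))]
    # Steps 4-5: each derivative is delta_j * input_i; step against it.
    for k in range(len(W2)):
        for i in range(len(z)):
            W2[k][i] -= lr * d2[k] * z[i]
    for j in range(len(W1)):
        for i in range(len(x)):
            W1[j][i] -= lr * d1[j] * x[i]
    return sum((yh - yk) ** 2 for yh, yk in zip(yhat, y))
```

Step 6 is then just a loop over the training set, calling `train_step` for each point until the total error stops shrinking.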
Advantages of back-propagation:

It reduces the cost of computing derivatives by a factor of the number of derivatives to be calculated when minimizing the error.
It allows higher degrees of nonlinearity and precision to be applied to problems.
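The first advantage can be made concrete by comparing back-propagation with finite differences on a toy model. The single sigmoid unit below is an illustrative assumption, not anything from the notes: finite differences need two extra forward passes per weight, while back-propagation recovers every derivative from one forward and one backward sweep.

```python
import math

def loss(w, x, y):
    """One forward pass: squared error of a single sigmoid unit."""
    yhat = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
    return (yhat - y) ** 2

def grad_backprop(w, x, y):
    """All derivatives from one forward plus one backward sweep."""
    yhat = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
    delta = 2.0 * (yhat - y) * yhat * (1.0 - yhat)  # d err / d a
    return [delta * xi for xi in x]                 # d err / d w_i

def grad_finite_diff(w, x, y, eps=1e-6):
    """Two extra forward passes per weight: cost grows with len(w)."""
    g = []
    for i in range(len(w)):
        wp = list(w); wp[i] += eps
        wm = list(w); wm[i] -= eps
        g.append((loss(wp, x, y) - loss(wm, x, y)) / (2 * eps))
    return g
```

Both functions return the same gradient, but `grad_finite_diff` evaluates the network 2·len(w) times, which is exactly the factor the advantage refers to.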
How to initialize the weights

This still leaves the question of how to initialize the weights. The method of choosing weights mentioned in class was to randomize the weights before the first step. This is not likely to be near...
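A sketch of the randomization mentioned in class, with a hypothetical helper name and an assumed uniform range of [-eps, eps] (the notes are cut off before specifying one):

```python
import random

def init_weights(n_in, n_out, eps=0.1, seed=None):
    """Return an n_out x n_in weight matrix of small random values.

    Small random weights break the symmetry between hidden units; if
    every weight started at zero, all units in a layer would compute
    identical outputs and receive identical gradient updates forever.
    """
    rng = random.Random(seed)
    return [[rng.uniform(-eps, eps) for _ in range(n_in)]
            for _ in range(n_out)]
```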