Chapter 4 --- Multilayer Perceptron


Chapter 4 --- Multilayer Perceptron 2
Multilayer Perceptron
- A generalization of the single-layer perceptron that enhances its computational power.
- Training method: the error backpropagation algorithm, which is based on the error-correction learning rule.
- Requirement: the nonlinear neuronal function should be smooth (i.e., differentiable everywhere).

Chapter 4 --- Multilayer Perceptron 3
Multilayer Perceptron
(figure: network architecture; not recoverable from the text extraction)

Chapter 4 --- Multilayer Perceptron 4
Backpropagation Training Algorithm
- A systematic method for training multilayer ANNs: the error is propagated backward to adjust the weights during the training phase, hence the name backpropagation training.
- Requires that the nonlinear neuronal function be differentiable everywhere. A good choice is the sigmoid function (also called the logistic or squashing function).

Chapter 4 --- Multilayer Perceptron 5
Backpropagation Training Algorithm
- Training objective: adjust the weights so that applying a set of inputs produces the desired set of outputs.
- Belongs to the category of supervised learning.

Chapter 4 --- Multilayer Perceptron 6
Graphical Representation
(figure: signal-flow graph; not recoverable from the text extraction)

Chapter 4 --- Multilayer Perceptron 7
Mathematical Analysis
- Consider neuron $j$:
  $v_j(n) = \sum_i w_{ji}(n)\, y_i(n)$ and $y_j(n) = \varphi(v_j(n))$
- Define the error signal $e_j(n) = d_j(n) - y_j(n)$.
- Define the instantaneous squared error for output neuron $j$ as $\frac{1}{2} e_j^2(n)$.
- The instantaneous sum of squared errors of the network is
  $E(n) = \frac{1}{2} \sum_j e_j^2(n)$,
  where the summation is over all output neurons.

Chapter 4 --- Multilayer Perceptron 8
Mathematical Analysis
Make use of the steepest gradient-descent concept, where the local gradient points to the required changes in the synaptic weights.
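The forward pass and the instantaneous error $E(n)$ defined above can be sketched numerically. This is a minimal sketch assuming NumPy; the weight and input values are invented for illustration:

```python
import numpy as np

def sigmoid(v):
    """Sigmoid (logistic) activation: differentiable everywhere, as backprop requires."""
    return 1.0 / (1.0 + np.exp(-v))

def sigmoid_prime(v):
    """Derivative of the sigmoid: phi'(v) = phi(v) * (1 - phi(v))."""
    s = sigmoid(v)
    return s * (1.0 - s)

# Forward pass for a single neuron j (assumed example values):
w = np.array([0.5, -0.3, 0.8])      # synaptic weights w_ji(n)
y_in = np.array([1.0, 0.2, -0.4])   # input signals y_i(n)
d = 1.0                             # desired (target) output d_j(n)

v = w @ y_in        # induced local field v_j(n) = sum_i w_ji(n) y_i(n)
y = sigmoid(v)      # neuron output y_j(n) = phi(v_j(n))
e = d - y           # error signal e_j(n) = d_j(n) - y_j(n)
E = 0.5 * e**2      # instantaneous squared error (1/2) e_j^2(n)
```

Note that the sigmoid squashes any input into (0, 1), which is why it is also called a squashing function.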
By the chain rule,
$$\frac{\partial E(n)}{\partial w_{ji}(n)} = \frac{\partial E(n)}{\partial e_j(n)} \cdot \frac{\partial e_j(n)}{\partial y_j(n)} \cdot \frac{\partial y_j(n)}{\partial v_j(n)} \cdot \frac{\partial v_j(n)}{\partial w_{ji}(n)} = e_j(n) \cdot (-1) \cdot \varphi'(v_j(n)) \cdot y_i(n)$$
The weight correction is therefore
$$\Delta w_{ji}(n) = -\eta\, \frac{\partial E(n)}{\partial w_{ji}(n)} = \eta\, \delta_j(n)\, y_i(n), \quad \text{where } \delta_j(n) = e_j(n)\, \varphi'(v_j(n)) \qquad \text{---- (4.1)}$$

Chapter 4 --- Multilayer Perceptron 9
Mathematical Analysis
If neuron $j$ is an output neuron: easy, as we know the target values of the output neurons.
$$e_j(n) = d_j(n) - y_j(n) \qquad \text{---- (4.2)}$$
$$\delta_j(n) = e_j(n)\, \varphi'(v_j(n)) \qquad \text{---- (4.3)}$$
In words:
(weight correction $\Delta w_{ji}(n)$) = (learning-rate parameter $\eta$) x (local gradient $\delta_j(n)$) x (input signal of neuron $j$, $y_i(n)$)

Chapter 4 --- Multilayer Perceptron 10
Mathematical Analysis
If neuron $j$ is a hidden neuron: difficult, as there is no target value for hidden neurons. ...
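A single gradient-descent update for an output neuron, following Eqs. (4.1)-(4.3), might look like the sketch below. The learning rate and numeric values are assumed for illustration, and `sigmoid` / `sigmoid_prime` stand in for $\varphi$ and $\varphi'$:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def sigmoid_prime(v):
    s = sigmoid(v)
    return s * (1.0 - s)  # phi'(v) = phi(v) * (1 - phi(v))

eta = 0.1                            # learning-rate parameter (assumed)
w = np.array([0.5, -0.3, 0.8])       # weights w_ji(n) (assumed example values)
y_in = np.array([1.0, 0.2, -0.4])    # input signals y_i(n)
d = 1.0                              # target d_j(n)

v = w @ y_in                         # induced local field v_j(n)
e = d - sigmoid(v)                   # e_j(n) = d_j(n) - y_j(n)           -- Eq. (4.2)
delta = e * sigmoid_prime(v)         # local gradient delta_j(n)          -- Eq. (4.3)
dw = eta * delta * y_in              # Delta w_ji(n) = eta delta_j(n) y_i -- Eq. (4.1)
w = w + dw                           # apply the weight correction
```

With a small $\eta$, this step reduces the instantaneous squared error, since `dw` points along the negative gradient of $E(n)$ with respect to the weights.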

This note was uploaded on 04/13/2011 for the course EE 4210 taught by Professor Wong during the Spring '10 term at City University of Hong Kong.
