# Chapter 19, Sections 1–5: lecture notes

## Learning curves

[Figure: proportion correct on the test set as a function of training set size, comparing a decision tree and a perceptron.] (Chapter 19, Sections 1–5, slide 12)


## Multilayer perceptrons

Layers are usually fully connected; the number of hidden units is typically chosen by hand.

[Figure: a feed-forward network with a layer of input units (activations $a_k$), a layer of hidden units ($a_j$) reached through weights $W_{k,j}$, and a layer of output units ($a_i$) reached through weights $W_{j,i}$.]
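The layer structure above can be sketched as a forward pass in NumPy. This is a minimal illustration, not the book's code: a sigmoid is assumed for the generic activation $g$, and the layer sizes and example input are arbitrary.

```python
import numpy as np

def g(x):
    """Activation function; a sigmoid is assumed here (the slides use a generic g)."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(a_k, W_kj, W_ji):
    """Propagate input activations a_k through one hidden layer to outputs a_i."""
    in_j = W_kj @ a_k        # weighted sums arriving at hidden units
    a_j = g(in_j)            # hidden activations
    in_i = W_ji @ a_j        # weighted sums arriving at output units
    a_i = g(in_i)            # output activations
    return a_i

rng = np.random.default_rng(0)
W_kj = rng.normal(size=(3, 2))   # 2 input units -> 3 hidden units (fully connected)
W_ji = rng.normal(size=(1, 3))   # 3 hidden units -> 1 output unit
print(forward(np.array([0.5, -1.0]), W_kj, W_ji))
```

Note that "fully connected" shows up here as dense weight matrices: every unit in one layer contributes to every weighted sum in the next.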
## Expressiveness of MLPs

All continuous functions can be represented with 2 layers; all functions with 3 layers.

[Figure: two surface plots of $h_W(x_1, x_2)$ for $x_1, x_2 \in [-4, 4]$, illustrating the kinds of surfaces a small network can represent.]
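One standard way to see where this expressiveness comes from (a sketch, not taken from the slides' text): two opposite-facing sigmoid units sum to a "ridge" in one input. The `steepness` and `offset` values below are illustrative.

```python
import numpy as np

def g(x):
    """Sigmoid (soft threshold)."""
    return 1.0 / (1.0 + np.exp(-x))

def ridge(x1, steepness=5.0, offset=2.0):
    # Two soft thresholds facing opposite directions: their sum is about 1
    # near x1 = 0 (on the ridge) and about 0 far away on either side.
    return g(steepness * (x1 + offset)) + g(-steepness * (x1 - offset)) - 1.0

print(ridge(0.0))    # on the ridge: near 1
print(ridge(10.0))   # off the ridge: near 0
```

Combining two such ridges at right angles yields a localized bump, and sums of bumps can approximate arbitrary continuous surfaces, which is the intuition behind the 2-layer claim.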


## Back-propagation learning

Output layer: same as for a single-layer perceptron,

$$W_{j,i} \leftarrow W_{j,i} + \alpha \times a_j \times \Delta_i \quad\text{where } \Delta_i = Err_i \times g'(in_i)$$

Hidden layer: back-propagate the error from the output layer:

$$\Delta_j = g'(in_j) \sum_i W_{j,i} \Delta_i\,.$$

Update rule for weights in the hidden layer:

$$W_{k,j} \leftarrow W_{k,j} + \alpha \times a_k \times \Delta_j\,.$$

(Most neuroscientists deny that back-propagation occurs in the brain.)
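These update rules translate directly into code. The sketch below assumes a sigmoid $g$ (so $g'(in) = g(in)(1 - g(in))$) and $Err_i = y_i - a_i$; the learning rate, network size, and training loop are illustrative choices.

```python
import numpy as np

def g(x):
    return 1.0 / (1.0 + np.exp(-x))

def g_prime(in_):
    s = g(in_)
    return s * (1.0 - s)   # derivative of the sigmoid

def backprop_step(a_k, y, W_kj, W_ji, alpha=0.5):
    """One gradient step following the update rules on this slide."""
    # Forward pass
    in_j = W_kj @ a_k
    a_j = g(in_j)
    in_i = W_ji @ a_j
    a_i = g(in_i)
    # Output layer: Delta_i = Err_i * g'(in_i), with Err_i = y_i - a_i
    delta_i = (y - a_i) * g_prime(in_i)
    # Hidden layer: Delta_j = g'(in_j) * sum_i W_{j,i} Delta_i
    delta_j = g_prime(in_j) * (W_ji.T @ delta_i)
    # Updates: W_{j,i} += alpha * a_j * Delta_i ; W_{k,j} += alpha * a_k * Delta_j
    W_ji = W_ji + alpha * np.outer(delta_i, a_j)
    W_kj = W_kj + alpha * np.outer(delta_j, a_k)
    return W_kj, W_ji

# Repeated steps on a single example should drive the output toward y.
rng = np.random.default_rng(1)
W_kj, W_ji = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))
x, y = np.array([1.0, -1.0]), np.array([1.0])
for _ in range(500):
    W_kj, W_ji = backprop_step(x, y, W_kj, W_ji)
print(g(W_ji @ g(W_kj @ x)))  # output after training; should be near 1
```

Note how the hidden-layer rule reuses the $\Delta_i$ values already computed for the output layer; that reuse is what makes back-propagation efficient.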
## Back-propagation derivation

The squared error on a single example is defined as

$$E = \frac{1}{2}\sum_i (y_i - a_i)^2\,,$$

where the sum is over the nodes in the output layer.

$$\begin{aligned}
\frac{\partial E}{\partial W_{j,i}} &= -(y_i - a_i)\,\frac{\partial a_i}{\partial W_{j,i}} = -(y_i - a_i)\,\frac{\partial g(in_i)}{\partial W_{j,i}} \\
&= -(y_i - a_i)\,g'(in_i)\,\frac{\partial\, in_i}{\partial W_{j,i}} = -(y_i - a_i)\,g'(in_i)\,\frac{\partial}{\partial W_{j,i}}\left(\sum_j W_{j,i}\,a_j\right) \\
&= -(y_i - a_i)\,g'(in_i)\,a_j = -\,a_j\,\Delta_i
\end{aligned}$$
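The result $\partial E / \partial W_{j,i} = -a_j \Delta_i$ can be checked numerically against finite differences. The weights and example below are arbitrary illustrative values, and a sigmoid is assumed for $g$.

```python
import numpy as np

def g(x):
    return 1.0 / (1.0 + np.exp(-x))

# Fixed example and weights (illustrative values)
a_k = np.array([0.5, -1.0])
y = np.array([1.0])
W_kj = np.array([[0.1, -0.2], [0.4, 0.3], [-0.5, 0.2]])
W_ji = np.array([[0.3, -0.1, 0.2]])

def E(W_ji_):
    """Squared error as a function of the output-layer weights."""
    a_j = g(W_kj @ a_k)
    a_i = g(W_ji_ @ a_j)
    return 0.5 * np.sum((y - a_i) ** 2)

# Analytic gradient from the derivation: dE/dW_{j,i} = -a_j * Delta_i
a_j = g(W_kj @ a_k)
a_i = g(W_ji @ a_j)
delta_i = (y - a_i) * a_i * (1.0 - a_i)   # g'(in) = g(in)(1 - g(in))
analytic = -np.outer(delta_i, a_j)

# Numerical gradient by central finite differences
eps = 1e-6
numeric = np.zeros_like(W_ji)
for j in range(W_ji.shape[1]):
    Wp, Wm = W_ji.copy(), W_ji.copy()
    Wp[0, j] += eps
    Wm[0, j] -= eps
    numeric[0, j] = (E(Wp) - E(Wm)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))  # should be tiny (finite-difference error only)
```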


## Back-propagation derivation contd.

$$\begin{aligned}
\frac{\partial E}{\partial W_{k,j}} &= -\sum_i (y_i - a_i)\,\frac{\partial a_i}{\partial W_{k,j}} = -\sum_i (y_i - a_i)\,\frac{\partial g(in_i)}{\partial W_{k,j}} \\
&= -\sum_i (y_i - a_i)\,g'(in_i)\,\frac{\partial\, in_i}{\partial W_{k,j}} = -\sum_i \Delta_i\,\frac{\partial}{\partial W_{k,j}}\left(\sum_j W_{j,i}\,a_j\right) \\
&= -\sum_i \Delta_i\,W_{j,i}\,\frac{\partial a_j}{\partial W_{k,j}} = -\sum_i \Delta_i\,W_{j,i}\,\frac{\partial g(in_j)}{\partial W_{k,j}} \\
&= -\sum_i \Delta_i\,W_{j,i}\,g'(in_j)\,\frac{\partial\, in_j}{\partial W_{k,j}} = -\sum_i \Delta_i\,W_{j,i}\,g'(in_j)\,\frac{\partial}{\partial W_{k,j}}\left(\sum_k W_{k,j}\,a_k\right) \\
&= -\sum_i \Delta_i\,W_{j,i}\,g'(in_j)\,a_k = -\,a_k\,\Delta_j
\end{aligned}$$
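The hidden-layer result $\partial E / \partial W_{k,j} = -a_k \Delta_j$ can be verified the same way. Again, the weights and example are arbitrary illustrative values and a sigmoid is assumed for $g$; here two output units are used so that the sum over $i$ in $\Delta_j$ is exercised.

```python
import numpy as np

def g(x):
    return 1.0 / (1.0 + np.exp(-x))

a_k = np.array([0.5, -1.0])
y = np.array([1.0, 0.0])
W_kj = np.array([[0.1, -0.2], [0.4, 0.3], [-0.5, 0.2]])  # 2 inputs -> 3 hidden
W_ji = np.array([[0.3, -0.1, 0.2], [-0.4, 0.5, 0.1]])    # 3 hidden -> 2 outputs

def E(W_kj_):
    """Squared error as a function of the hidden-layer weights."""
    a_j = g(W_kj_ @ a_k)
    a_i = g(W_ji @ a_j)
    return 0.5 * np.sum((y - a_i) ** 2)

# Analytic gradient: dE/dW_{k,j} = -a_k * Delta_j, where
# Delta_j = g'(in_j) * sum_i W_{j,i} Delta_i and Delta_i = (y_i - a_i) g'(in_i)
a_j = g(W_kj @ a_k)
a_i = g(W_ji @ a_j)
delta_i = (y - a_i) * a_i * (1.0 - a_i)
delta_j = a_j * (1.0 - a_j) * (W_ji.T @ delta_i)
analytic = -np.outer(delta_j, a_k)

# Numerical gradient by central finite differences
eps = 1e-6
numeric = np.zeros_like(W_kj)
for r in range(W_kj.shape[0]):
    for c in range(W_kj.shape[1]):
        Wp, Wm = W_kj.copy(), W_kj.copy()
        Wp[r, c] += eps
        Wm[r, c] -= eps
        numeric[r, c] = (E(Wp) - E(Wm)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))  # should be tiny
```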
## Back-propagation learning contd.