A two-layer perceptron network capable of computing XOR. The number inside each perceptron is that perceptron's explicit threshold; the numbers annotating the arrows are the weights on its inputs. This net assumes that a unit outputs zero (not 1) whenever its threshold is not reached.

[Figure: the XOR dataset plotted over inputs x and y; points marked ! are FALSE, the remaining points TRUE. Note: no linear decision surface exists for this dataset.]

Representational Power

Every bounded continuous function can be approximated with arbitrarily small error by a network with one hidden layer. Any function can be approximated to arbitrary accuracy by a network with two hidden layers.

Hidden Layer Representations

Target function: can this be learned?

Learned hidden-unit values, one 3-bit row per training input:

  1 0 0
  0 0 1
  0 1 0
  1 1 1
  0 0 0
  0 1 1
  1 0 1
  1 1 0

Hidden layers allow a network to invent appropriate internal representations.
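The XOR network described above can be sketched with threshold units. The exact weights and thresholds from the figure are not preserved in this text, so the values below are one common, assumed choice: the hidden units compute OR and NAND, and the output unit ANDs them.

```python
def unit(inputs, weights, threshold):
    """Threshold unit: outputs 1 if the weighted sum reaches the
    threshold, and 0 (not 1) otherwise, as the notes specify."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

def xor_net(x, y):
    # Assumed weights/thresholds, not the figure's originals:
    h1 = unit([x, y], [1, 1], 1)      # OR: fires if either input is 1
    h2 = unit([x, y], [-1, -1], -1)   # NAND: fires unless both inputs are 1
    return unit([h1, h2], [1, 1], 2)  # AND of the two hidden units

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "->", xor_net(x, y))
```

Because XOR is not linearly separable, no single threshold unit can compute it; the hidden layer is what makes the composition possible.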
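The 8-row table of hidden-unit values resembles the classic 8-3-8 encoder example, in which a network learns to map eight one-hot inputs back to themselves through three hidden units (this framing is an assumption; the original target-function figure is not preserved). A small sketch of the key property, that the invented codes are exactly the eight distinct 3-bit patterns:

```python
# Hidden codes from the table above, one row per training input.
codes = [
    (1, 0, 0),
    (0, 0, 1),
    (0, 1, 0),
    (1, 1, 1),
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

# The eight codes are pairwise distinct and cover all 2**3 patterns,
# so three hidden units suffice to distinguish the eight inputs.
assert len(set(codes)) == 8 == 2 ** 3
print("distinct 3-bit codes:", len(set(codes)))
```

The network was never told to use a binary encoding; it invented this internal representation on its own, which is the point of the final remark in the notes.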
Spring '11, Staff, Computer Science, Machine Learning