
# ln021 - Multi-Layer Perceptrons and XOR


Learning

We have seen machine learning with different representations:

1. Decision trees: a symbolic representation of various decision rules, a "disjunction of conjunctions".
2. Perceptrons: learned weights that represent a linear decision surface classifying a set of objects into two groups.

Different representations give rise to different hypothesis (or model) spaces. Machine learning algorithms search these model spaces for the best-fitting model. (Chap. 19, Alex)
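The perceptron weight-learning idea above can be sketched as follows (a minimal illustration; the function and variable names are our own, not from the lecture):

```python
# A minimal perceptron sketch: a threshold unit plus the classic
# mistake-driven update rule. Names and hyperparameters are made up.

def predict(w, b, x):
    """Threshold unit: +1 if w.x + b > 0, else -1."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else -1

def train_perceptron(data, labels, epochs=20, lr=0.1):
    n = len(data[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            if predict(w, b, x) != y:  # misclassified:
                # nudge the decision surface toward the correct side
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# AND is linearly separable, so the perceptron converges on it.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, -1, -1, 1]
w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])  # → [-1, -1, -1, 1], matching y
```

On a linearly separable dataset such as AND, the perceptron convergence theorem guarantees this loop eventually stops making mistakes.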

Perceptron Learning Revisited (R demo)
What About Non-Linearity?

[Figure: points in the x1-x2 plane labeled +1 and -1, arranged so that no single line separates the two classes.]

Can we learn this decision surface? Yes: with multi-layer perceptrons.
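That no single line separates such data can be checked for the XOR points by brute force (a sketch, not a proof: it only scans a coarse grid of candidate weights and biases):

```python
# Brute-force check that no linear classifier w1*x + w2*y + b > 0
# gets all four XOR points right. The grid is our own choice.
from itertools import product

xor_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
grid = [i / 10 for i in range(-20, 21)]  # candidates in [-2, 2]

def separates(w1, w2, b):
    return all((w1 * x + w2 * y + b > 0) == (t == 1)
               for (x, y), t in xor_data)

found = any(separates(w1, w2, b) for w1, w2, b in product(grid, repeat=3))
print(found)  # → False: no candidate line classifies all four points
```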

Multi-Layer Perceptrons

[Figure: a feed-forward network. Inputs x0, x1, x2, ..., x(n-1), xn form the input layer, feed into a hidden layer, and the hidden layer feeds an output layer producing y. Each unit applies a combination function followed by a transfer function; the slide also labels a linear unit.]
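A forward pass through such units can be sketched as below, taking the combination function to be a weighted sum and the transfer function to be the sigmoid (the weights and the tiny 2-2-1 shape are made up for illustration):

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def unit(weights, bias, inputs):
    s = sum(w * x for w, x in zip(weights, inputs)) + bias  # combination
    return sigmoid(s)                                       # transfer

# Forward pass through a tiny 2-2-1 network (weights are made up).
def forward(x):
    h = [unit([1.0, 1.0], -0.5, x),   # hidden unit 1
         unit([1.0, 1.0], -1.5, x)]   # hidden unit 2
    return unit([1.0, -1.0], -0.5, h)  # output unit

print(round(forward([0, 1]), 3))  # → 0.437
```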
Artificial Neural Networks: Feed-Forward with Backpropagation

[Figure: the same layered network (inputs x0 ... xn, hidden layer, output y), with signals feeding forward from inputs to output and errors propagating backward from the output to adjust the weights.]
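A minimal sketch of signal feed-forward and error backpropagation on the XOR data, assuming a 2-2-1 sigmoid network trained with per-example gradient descent (all names and hyperparameters are our own choices, not from the lecture):

```python
import math, random

random.seed(0)

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

# 2 inputs -> 2 hidden units -> 1 output, all sigmoid.
# Each weight row carries a trailing bias term.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
W2 = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
    y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + W2[2])
    return h, y

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
lr = 0.5

def epoch_error():
    return sum((t - forward(x)[1]) ** 2 for x, t in data)

err_before = epoch_error()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Output delta: error times the sigmoid derivative y*(1-y).
        d_out = (y - t) * y * (1 - y)
        # Hidden deltas: error flows backward through W2.
        d_h = [d_out * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Gradient-descent weight updates.
        for j in range(2):
            W2[j] -= lr * d_out * h[j]
        W2[2] -= lr * d_out
        for j in range(2):
            W1[j][0] -= lr * d_h[j] * x[0]
            W1[j][1] -= lr * d_h[j] * x[1]
            W1[j][2] -= lr * d_h[j]
err_after = epoch_error()
print(err_before, "->", err_after)  # error typically far below its start
```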

Learning the XOR Function


A multi-layer perceptron capable of calculating XOR. The numbers within the perceptrons represent each perceptron's explicit threshold; the numbers annotating the arrows represent the weights on the inputs. This net assumes that zero (not -1) is output when a threshold is not reached.

[Figure: the four XOR points plotted in the x-y plane, with the TRUE and FALSE points marked by different symbols.] Note: no linear decision surface exists for this dataset.

Representational Power

- Every bounded continuous function can be approximated with arbitrarily small error by a network with one hidden layer.
- Any function can be approximated to arbitrary accuracy by a network with two hidden layers.

Hidden Layer Representations

Target function: can this be learned? The (rounded) hidden-unit activations learned for the eight input patterns:

1 0 0
0 0 1
0 1 0
1 1 1
0 0 0
0 1 1
1 0 1
1 1 0

Each pattern receives a distinct 3-bit code: hidden layers allow a network to invent appropriate internal representations.
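The XOR threshold network described above can be sketched with step units; the thresholds and weights below are one standard choice, not necessarily the exact values on the slide:

```python
# Two-layer threshold network for XOR. The threshold plays the role of
# the number inside each perceptron; a unit outputs 0 (not -1) when its
# threshold is not reached, as the slide assumes.

def step(s, threshold):
    return 1 if s >= threshold else 0

def xor(x, y):
    h_or  = step(x + y, 0.5)        # fires if x OR y
    h_and = step(x + y, 1.5)        # fires if x AND y
    return step(h_or - h_and, 0.5)  # OR but not AND = XOR

for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, y, xor(x, y))  # → 0, 1, 1, 0
```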