# EE4210 Solution to Tutorial 3: Train a Perceptron to Perform the Logical NOR Function

**(a)** Activation function:

    y = φ(v) = { +1, v ≥ 0
               { -1, v < 0

Use the conventional perceptron training algorithm:

- Correctly classified => Δwi = 0 for i = 1, 2
- Wrongly classified => Δwi = η d xi = d xi, since η = 1

In vector form, v = wᵀx and Δw = d x, where

    xᵀ = [-1  x1  x2],  wᵀ = [θ  w1  w2],  Δwᵀ = [Δθ  Δw1  Δw2]

**1st Epoch**

| xᵀ = [-1, x1, x2] | d | wᵀ = [θ, w1, w2] | v | y | Δwᵀ = [Δθ, Δw1, Δw2] |
|---|---|---|---|---|---|
| [-1, 1, 1] | -1 | [0, 0, 0] | 0 | 1 | [1, -1, -1] |
| [-1, 1, -1] | -1 | [1, -1, -1] | -1 | -1 | [0, 0, 0] |
| [-1, -1, 1] | -1 | [1, -1, -1] | -1 | -1 | [0, 0, 0] |
| [-1, -1, -1] | 1 | [1, -1, -1] | 1 | 1 | [0, 0, 0] |

**2nd Epoch**

| xᵀ = [-1, x1, x2] | d | wᵀ = [θ, w1, w2] | v | y | Δwᵀ = [Δθ, Δw1, Δw2] |
|---|---|---|---|---|---|
| [-1, 1, 1] | -1 | [1, -1, -1] | -3 | -1 | [0, 0, 0] |

Converged after 1 epoch (the second epoch produces no weight changes); the weight values are w1 = w2 = -1 and θ = 1.

Decision boundary: w1 x1 + w2 x2 - θ = 0 => -x1 - x2 - 1 = 0 => x2 = -x1 - 1 (slope = -1, x2-intercept = -1).

[Figure: decision boundary x2 = -x1 - 1 in the (x1, x2) plane, separating the three y = -1 points from the y = 1 point at (-1, -1).]

**(b)** Activation function:

    y = φ(v) = { +1, v ≥ 0
               { -1, v < 0

Use the delta rule training algorithm: Δwi = η(d - y)xi = (d - y)xi, since η = 1. ...
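The epoch-by-epoch hand calculation in part (a) can be sketched in code. This is a minimal illustration, assuming bipolar inputs/targets and the augmented vector convention xᵀ = [-1, x1, x2], wᵀ = [θ, w1, w2] used above; the function names `sign` and `train_perceptron` are illustrative, not from the tutorial.

```python
def sign(v):
    """Threshold activation: +1 if v >= 0, else -1 (as in part (a))."""
    return 1 if v >= 0 else -1

def train_perceptron(samples, epochs=10):
    """Conventional perceptron rule with eta = 1 on augmented vectors."""
    w = [0.0, 0.0, 0.0]  # [theta, w1, w2], initialised to zero
    for _ in range(epochs):
        changed = False
        for x, d in samples:
            v = sum(wi * xi for wi, xi in zip(w, x))
            y = sign(v)
            if y != d:  # wrongly classified => delta_w = d * x (eta = 1)
                w = [wi + d * xi for wi, xi in zip(w, x)]
                changed = True
        if not changed:  # a full epoch with no corrections => converged
            break
    return w

# Bipolar NOR: output +1 only when both inputs are -1 (logical false).
# Each sample is ([-1, x1, x2], d), matching the table above.
nor_samples = [
    ([-1,  1,  1], -1),
    ([-1,  1, -1], -1),
    ([-1, -1,  1], -1),
    ([-1, -1, -1],  1),
]

w = train_perceptron(nor_samples)
print(w)  # -> [1.0, -1.0, -1.0], i.e. theta = 1, w1 = w2 = -1
```

Only the first pattern of the first epoch triggers a correction, so the result matches the hand-worked table: θ = 1, w1 = w2 = -1.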
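The delta-rule update Δwi = (d - y)xi stated in part (b) can be sketched similarly. Since the preview is truncated, this is an assumption-laden sketch: it reuses the bipolar threshold activation and augmented-vector convention from part (a), and the names `sign` and `delta_rule_epoch` are hypothetical.

```python
def sign(v):
    """Threshold activation: +1 if v >= 0, else -1 (assumed same as part (a))."""
    return 1 if v >= 0 else -1

def delta_rule_epoch(w, samples, eta=1.0):
    """One pass over the samples, applying delta_w_i = eta * (d - y) * x_i."""
    for x, d in samples:
        v = sum(wi * xi for wi, xi in zip(w, x))
        y = sign(v)
        # With a bipolar threshold output, (d - y) is 0 or +/-2, so each
        # correction is twice the size of the perceptron-rule step d * x_i.
        w = [wi + eta * (d - y) * xi for wi, xi in zip(w, x)]
    return w

# Same bipolar NOR patterns as part (a): ([-1, x1, x2], d)
nor_samples = [
    ([-1,  1,  1], -1),
    ([-1,  1, -1], -1),
    ([-1, -1,  1], -1),
    ([-1, -1, -1],  1),
]

w = delta_rule_epoch([0.0, 0.0, 0.0], nor_samples)
print(w)  # -> [2.0, -2.0, -2.0]: same boundary x2 = -x1 - 1, scaled weights
```

Note that the resulting weights define the same decision boundary as part (a), just scaled by 2, because only the sign of v matters for classification.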