# hw2_soln - CS 478 Machine Learning: Homework 2 Suggested Solutions

## 1 Separable or Not?

(a) See the following decision tree (the tree figure is not included in this copy).

(b) If we draw a large enough sample, there will be at least two points at each of the four positions. Since the labels contain no noise, two examples at each position guarantee that the 3-NN majority vote returns the correct label.

Comment: Most students who made mistakes on this question assumed that there could be only one training point at each of the four positions, which is not true. Some also erroneously thought that a training point lying at exactly the same position as the test point does not count as a neighbour.
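The 3-NN argument in part (b) can be checked with a quick sketch. The four positions and their labels below are a made-up XOR-style layout chosen for illustration (the homework's actual positions may differ); with two noise-free copies of each point, every query's two nearest neighbours share its true label, so the 3-NN vote is always correct:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority vote among the k nearest training points (L1 distance)."""
    neighbours = sorted(
        train,
        key=lambda p: abs(p[0][0] - query[0]) + abs(p[0][1] - query[1]),
    )[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical XOR-style layout: four positions, two training copies of each.
positions = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
train = [(pos, label) for pos, label in positions.items() for _ in range(2)]

# The two nearest neighbours of any query at a position are its own two
# copies, so the 3-NN majority is the correct label at every position.
for pos, label in positions.items():
    assert knn_predict(train, pos) == label
```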

(c) Standardization maps each example $\vec{x}_i$ to $\vec{z}_i$ with $z_{ij} = (x_{ij} - \mu_j)/\sigma_j$. Given a separator $(\vec{w}, b)$ for the original data, set $w'_j = w_j \sigma_j$ and $b' = b + \vec{w} \cdot \vec{\mu}$. Then

$$
\begin{aligned}
y_i(\vec{w}' \cdot \vec{z}_i + b')
&= y_i\Big(\sum_{j=1}^{d} w'_j z_{ij} + b'\Big) \\
&= y_i\Big(\sum_{j=1}^{d} w_j \sigma_j \,\frac{x_{ij} - \mu_j}{\sigma_j} + b + \vec{w} \cdot \vec{\mu}\Big) \\
&= y_i\Big(\sum_{j=1}^{d} w_j x_{ij} - \sum_{j=1}^{d} w_j \mu_j + b + \vec{w} \cdot \vec{\mu}\Big) \\
&= y_i(\vec{w} \cdot \vec{x}_i - \vec{w} \cdot \vec{\mu} + b + \vec{w} \cdot \vec{\mu}) \\
&= y_i(\vec{w} \cdot \vec{x}_i + b) > 0. \qquad (1)
\end{aligned}
$$

Thus a linearly separable dataset is still linearly separable after standardization.

## 2 More than Average

(a) The perceptron converges after 3 iterations. Number of mistakes made per epoch: 2, 2, 1, 0, 0.

(b) The original file lists all the positive examples followed by all the negative ones. While the algorithm sweeps through the positive examples it receives no updates from the negative examples, and vice versa. It therefore takes more passes to accumulate enough updates from both classes for the perceptron to converge. However, although the number of epochs increases, the total number of updates made can be similar.

(c) training: [0.82175, 0.828375, 0.8220625, 0.7988125, 0.802, 0.80925, 0.82175, 0.815875, 0.8183125, 0.8208125]

test: [0.82375, 0.8285, 0.81875, 0.79225, 0.795, 0.80075, 0.82275, ...]
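The algebra in part 1(c) can be spot-checked numerically. The feature matrix, weight vector, and bias below are made up purely for illustration; the point is that the transformed separator gives exactly the same margins on the standardized data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))             # made-up feature matrix
w = np.array([1.5, -2.0, 0.5])           # arbitrary separator weights
b = 0.3
y = np.sign(X @ w + b)                   # labels consistent with (w, b)

mu, sigma = X.mean(axis=0), X.std(axis=0)
Z = (X - mu) / sigma                     # standardized features

w_new = w * sigma                        # w'_j = w_j * sigma_j
b_new = b + w @ mu                       # b'   = b + w . mu

# The margins are identical term by term, so separability is preserved.
assert np.allclose(Z @ w_new + b_new, X @ w + b)
```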
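The per-epoch mistake counts in 2(a) come from the standard perceptron update rule. Here is a minimal sketch of that loop on made-up toy data (not the homework dataset), showing how the counts are collected and how convergence is detected by a clean pass:

```python
import numpy as np

def perceptron_epochs(X, y, max_epochs=10):
    """Run the perceptron; return the number of mistakes made in each epoch."""
    w, b = np.zeros(X.shape[1]), 0.0
    mistakes_per_epoch = []
    for _ in range(max_epochs):
        mistakes = 0
        for x_i, y_i in zip(X, y):
            if y_i * (w @ x_i + b) <= 0:        # misclassified (or on boundary)
                w, b = w + y_i * x_i, b + y_i   # standard perceptron update
                mistakes += 1
        mistakes_per_epoch.append(mistakes)
        if mistakes == 0:                       # converged: a full clean pass
            break
    return mistakes_per_epoch

# Made-up linearly separable toy data, just to exercise the loop.
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
print(perceptron_epochs(X, y))  # the list ends with 0 once no mistakes occur
```

On data ordered with all positives first (as in 2(b)), the same loop simply takes more epochs before the clean pass appears.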

This note was uploaded on 10/02/2008 for the course CS 478 taught by Professor Joachims during the Spring '08 term at Cornell University (Engineering School).
