Machine Learning
Homework 3 Solutions
Problem 1
Sketch of decision boundaries using kNN classifiers.
[Figure: decision-boundary sketches for N = 1 and N = 3 not reproduced here.]
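For reference, a minimal pure-Python kNN classifier of the sort one could use to trace such decision boundaries by classifying every point on a grid. This is our own sketch; the toy points and labels are made up for illustration.

```python
from collections import Counter

def knn_predict(train, labels, query, k):
    """Classify `query` by majority vote among its k nearest training points."""
    # Indices of training points sorted by squared Euclidean distance to the query.
    order = sorted(range(len(train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], query)))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D training set (hypothetical).
train = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (3.0, 3.0), (4.0, 3.0), (3.0, 4.0)]
labels = ["A", "A", "A", "B", "B", "B"]

print(knn_predict(train, labels, (0.5, 0.5), 1))  # nearest point is class A
print(knn_predict(train, labels, (3.5, 3.5), 3))  # all 3 nearest are class B
```

Sweeping `query` over a grid and recording the predicted label at each point traces out the decision boundary for a given k.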
Problem 2
The key point of this problem is to understand that the training samples are randomly generated, and that the training set determines your test error.
(a)
The probability of making an error using kNN is

$$P_n(e) = P_c(P_1, C_2) + P_c(P_2, C_1),$$

where $P_c(P_1, C_2)$ denotes the probability that the test point actually lies in hypersphere $C_2$ but the kNN method returns the result $C_1$. The only way kNN can make such a mistake is if the test point falls in one hypersphere while no more than $(k-1)/2$ of the $n$ training samples fall in that same hypersphere, so that a majority of the $k$ nearest neighbors belong to the other class.
$$P_c(P_1, C_2) = P(C_2)\,P_c(P_1 \mid C_2) = \frac{1}{2}\sum_{j=0}^{(k-1)/2}\binom{n}{j}\left(\frac{1}{2}\right)^{j}\left(\frac{1}{2}\right)^{n-j} = \frac{1}{2}\left(\frac{1}{2}\right)^{n}\sum_{j=0}^{(k-1)/2}\binom{n}{j}$$
By symmetry, $P_c(P_2, C_1)$ has the same value, so

$$P_n(e) = P_c(P_1, C_2) + P_c(P_2, C_1) = \left(\frac{1}{2}\right)^{n}\sum_{j=0}^{(k-1)/2}\binom{n}{j}.$$
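As a quick sanity check (our own sketch, not part of the original solution; the helper name `knn_error` is ours), the closed form above can be evaluated directly for small n and odd k:

```python
from math import comb

def knn_error(n, k):
    """Exact P_n(e) = (1/2)^n * sum_{j=0}^{(k-1)/2} C(n, j), for odd k."""
    return sum(comb(n, j) for j in range((k - 1) // 2 + 1)) / 2 ** n

print(knn_error(10, 1))  # 1/1024 = 0.0009765625
print(knn_error(10, 3))  # (1 + 10)/1024 = 0.0107421875
```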
(b)
Show that for this case the 1-nearest neighbor rule has a lower error probability than the k-nearest neighbor rule for all k > 1.
Since $P_n(e) = \left(\frac{1}{2}\right)^{n}\sum_{j=0}^{(k-1)/2}\binom{n}{j}$, the $k = 1$ rule keeps only the $j = 0$ term, giving $P_n(e) = (1/2)^n$. For any $k > 1$ the sum contains the additional strictly positive terms $\binom{n}{j}$ for $1 \le j \le (k-1)/2$, so its error probability is strictly larger. Hence the 1-nearest neighbor rule has a lower error probability than the $k$-nearest neighbor rule for all $k > 1$.
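This can also be checked empirically. Below is a simulation sketch under the problem's assumptions (the function name `simulate_error` is ours): by symmetry each training sample lands in the test point's hypersphere with probability 1/2, so the setup reduces to binomial counting, and an error occurs exactly when at most (k-1)/2 samples share the test point's hypersphere.

```python
import random
from math import comb

def simulate_error(n, k, trials=200_000, seed=0):
    """Monte-Carlo estimate of P_n(e): error iff at most (k-1)/2 of the n
    training samples land in the same hypersphere as the test point."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        same = sum(rng.random() < 0.5 for _ in range(n))  # samples in test's sphere
        if same <= (k - 1) // 2:
            errors += 1
    return errors / trials

exact_1nn = comb(8, 0) / 2 ** 8                 # (1/2)^8
exact_3nn = (comb(8, 0) + comb(8, 1)) / 2 ** 8  # (1 + 8) / 2^8
print(simulate_error(8, 1), exact_1nn)
print(simulate_error(8, 3), exact_3nn)
```

The simulated errors match the closed form, and the 1-NN error is the smaller of the two, as claimed.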
Spring '09, yangliu