Question

4. (Classification, 10 pt) (you might be able to do this only after Thursday's class)

Suppose you are working on a binary classification problem, so C = {0, 1}, say (think of 0 as spam and 1 as not-spam). Now suppose that instead of the 0-1 loss function, your loss function is:

    l(φ(x), y) = 100  if φ(x) = 0, y = 1,
                 1    if φ(x) = 1, y = 0,
                 0    otherwise (no loss for a correct classification).

For any classification rule φ, let R(φ) = E(l(φ(X), Y)) denote the risk of the classifier, i.e. the average loss made by the classifier on a new "typical" data point (X, Y). By repeating the calculations we did in class when proving the optimality of the Bayes classifier for 0-1 loss, find the optimal classifier for the above loss function (i.e. find an expression in terms of the conditional probability mass function of Y given X = x).
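One way to set up the calculation, sketched by analogy with the 0-1-loss argument the question refers to (the source itself contains no solution, so treat this as an outline to check, not the graded answer): condition on X = x, write p(x) = P(Y = 1 | X = x), and compare the conditional risks of the two possible predictions.

```latex
% Conditional risk of each prediction at a fixed x, with p(x) = P(Y=1 | X=x):
%   predict 0:  risk = 100 * p(x)        (loss 100 incurred exactly when Y = 1)
%   predict 1:  risk = 1 * (1 - p(x))    (loss 1 incurred exactly when Y = 0)
% Choosing the smaller conditional risk pointwise minimizes R(phi), so
\[
  \phi^{*}(x) =
  \begin{cases}
    1 & \text{if } 100\,p(x) > 1 - p(x), \text{ i.e. } p(x) > \tfrac{1}{101},\\[2pt]
    0 & \text{otherwise,}
  \end{cases}
  \qquad p(x) := \mathbb{P}(Y = 1 \mid X = x).
\]
```

Intuitively, because misclassifying a not-spam message (y = 1) costs 100 times as much as the reverse error, the optimal rule predicts 1 whenever P(Y = 1 | X = x) exceeds the much lower threshold 1/101 rather than the symmetric 1/2.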

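A small numeric sanity check of the same comparison may help; the names `cond_risk` and `bayes_predict` are illustrative and not part of the problem statement:

```python
# Sketch: compare conditional risks under the asymmetric loss from the problem.
# The loss values come from the problem; the function names are our own.

L_FN = 100.0  # loss when phi(x) = 0 but y = 1
L_FP = 1.0    # loss when phi(x) = 1 but y = 0

def cond_risk(pred: int, p1: float) -> float:
    """Expected loss of predicting `pred` when P(Y=1 | X=x) = p1."""
    if pred == 0:
        return L_FN * p1          # loss incurred only if Y = 1
    return L_FP * (1.0 - p1)      # loss incurred only if Y = 0

def bayes_predict(p1: float) -> int:
    """Choose the label with strictly smaller conditional risk."""
    return 1 if cond_risk(1, p1) < cond_risk(0, p1) else 0

# The decision flips at p1 = L_FP / (L_FP + L_FN) = 1/101, matching the
# threshold obtained by equating 100*p1 with 1*(1 - p1).
```

Just below the threshold the rule predicts 0, just above it 1, which is the cost-sensitive analogue of the usual 1/2 cutoff for 0-1 loss.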