Question

4. (Classification, 10 pt) (you might be able to do this only after Thursday's class) Suppose you are working on a binary classification problem, so C = {0, 1} say (think of 0 as spam and 1 as not-spam). Now suppose that instead of the 0-1 loss function, your loss function is

    ℓ(φ(x), y) = 100  if φ(x) = 0, y = 1,
                 1    if φ(x) = 1, y = 0.

For any classification rule φ, let R(φ) = E(ℓ(φ(X), Y)) denote the risk of the classifier, i.e. the average loss incurred by the classifier on a new "typical" data point (X, Y). By repeating the calculations we did in class when proving the optimality of the Bayes classifier for 0-1 loss, find the optimal classifier for the above loss function (i.e. find an expression in terms of the conditional probability mass function of Y given X = x).
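For context, here is a sketch of how the in-class argument for the Bayes classifier adapts to this loss. It assumes (as with 0-1 loss, though the problem does not state it) that the loss is 0 whenever φ(x) = y, and it writes η(x) = P(Y = 1 | X = x) as shorthand; the symbol η is not in the original problem.

Conditioning on X = x, the expected loss of each possible prediction is

    E[ℓ(0, Y) | X = x] = 100 · P(Y = 1 | X = x) = 100 η(x),
    E[ℓ(1, Y) | X = x] = 1 · P(Y = 0 | X = x) = 1 − η(x).

Choosing, at every x, the prediction with the smaller conditional expected loss gives the rule: predict 1 whenever 100 η(x) ≥ 1 − η(x), i.e. whenever 101 η(x) ≥ 1, so

    φ*(x) = 1  if P(Y = 1 | X = x) ≥ 1/101,
    φ*(x) = 0  otherwise.

Because misclassifying y = 1 costs 100 times as much as misclassifying y = 0, the usual 1/2 threshold of the Bayes classifier drops to 1/101.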
