Question

# 4. (Classification, 10 pt)

Suppose you are working on a binary classification problem, so $\mathcal{C} = \{0, 1\}$ (think of 0 as spam and 1 as not-spam). Now suppose that instead of the 0-1 loss function, your loss function is:

$$
\ell(\varphi(x), y) =
\begin{cases}
100 & \text{if } \varphi(x) = 0,\ y = 1, \\
1 & \text{if } \varphi(x) = 1,\ y = 0.
\end{cases}
$$

For any classification rule $\varphi$, let $R(\varphi) = \mathbb{E}[\ell(\varphi(X), Y)]$ denote the risk of the classifier, i.e. the average loss made by the classifier on a new "typical" data point $(X, Y)$. By repeating the calculations we did in class when proving the optimality of the Bayes classifier for 0-1 loss, find the optimal classifier for the above loss function (i.e. find an expression in terms of the conditional probability mass function of $Y$ given $X = x$).
