# HW4 - MS&E 226 "Small" Data Problem Set 4


MS&E 226 Problem Set 4, "Small" Data
Due: November 18, 2016, 5:00 PM, submitted through Gradescope

**Problem 1. (Naive Bayes)** In this problem, we study a particularly simple approach to classification: the naive Bayes classifier. This is one of the most widely used classification techniques, and despite its simplicity it is known to work surprisingly well in a wide range of settings.

We build the naive Bayes classifier for a spam detection problem. The data we work with is the dataset `spam.csv`.¹ This dataset has $n = 4601$ rows; each row represents information about a single e-mail message. There are 49 columns. The last column indicates whether the email was spam or not ($Y = 1$ means spam, $Y = 0$ means not spam). The remaining $p = 48$ columns are binary values. Each column $j = 1, \ldots, 48$ represents a word, and the value in a particular row indicates whether that word was present in the email ($X_j = 1$ means the word was present, $X_j = 0$ means it was not).

As we discussed in class, the optimal classifier for 0-1 loss is the *Bayes classifier*: given a covariate vector $\vec{X}$, we compute the probabilities $P(Y = 0 \mid \vec{X})$ and $P(Y = 1 \mid \vec{X})$. If the message is more likely to be spam than not, we predict 1; otherwise we predict 0. A limitation of the Bayes classifier is that it requires knowing the population model. In practice, we rarely know these probabilities, so they must be estimated from the data. That is where the naive Bayes approach comes in.

(a) In general, we know that

$$P(Y = y \mid \vec{X} = \vec{x}) = \frac{P(Y = y, \vec{X} = \vec{x})}{P(\vec{X} = \vec{x})}.$$

That means a message with covariates $\vec{x}$ is more likely to be spam than not if $P(Y = 1, \vec{X} = \vec{x}) > P(Y = 0, \vec{X} = \vec{x})$, since the denominator does not depend on $Y$. Show that

$$\hat{Q}^{EPM}_{\vec{x}, y} = \frac{1}{n} \sum_{i=1}^{n} I\{\vec{X}_i = \vec{x},\, Y_i = y\}$$

is an unbiased estimate of $P(Y = y, \vec{X} = \vec{x})$. This is called the *empirical population model* (EPM).
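The EPM estimator above is just an empirical frequency: the fraction of training rows whose covariates and label exactly match $(\vec{x}, y)$. A minimal sketch in Python, using synthetic uniform binary data in place of the real `spam.csv` (the data, the small `p`, and the test point `x` here are all hypothetical, chosen so that exact matches actually occur):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data standing in for spam.csv: n rows, p binary word
# indicators X, and a binary label Y (1 = spam). p is kept small so that
# exact matches of a covariate vector x are common.
n, p = 1000, 3
X = rng.integers(0, 2, size=(n, p))
Y = rng.integers(0, 2, size=n)

def q_epm(X, Y, x, y):
    """Empirical population model: fraction of rows with X_i == x and Y_i == y."""
    match = np.all(X == x, axis=1) & (Y == y)
    return match.mean()

# The EPM classifier predicts 1 iff Q(x, 1) > Q(x, 0).
x = np.array([1, 0, 1])
pred = int(q_epm(X, Y, x, 1) > q_epm(X, Y, x, 0))
```

Because the indicator events partition the sample, the estimates sum to 1 over all $2^{p+1}$ cells, consistent with their interpretation as an estimated joint distribution.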
Consider the following classifier: if $\hat{Q}^{EPM}_{\vec{X}, 1} > \hat{Q}^{EPM}_{\vec{X}, 0}$, predict 1; otherwise predict 0. Explain why this is not a good classifier if $n \ll 2^{p+1}$.

(b) (DELETED)

¹ This dataset is derived from a spam dataset in the UCI ML repository; see .edu/ml/datasets/Spambase.
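To see the scale of the problem concretely: with $p = 48$ binary covariates there are $2^{p+1}$ possible $(\vec{x}, y)$ cells, vastly more than $n = 4601$ rows, so almost every cell has an empirical count of zero. A short sketch with synthetic uniform bits (not the real `spam.csv`, whose covariates are far from uniform, but the counting argument is the same):

```python
import numpy as np

rng = np.random.default_rng(0)

# Same dimensions as the spam data, but with synthetic uniform bits.
n, p = 4601, 48
X = rng.integers(0, 2, size=(n, p))

# Number of possible (x, y) cells vs. distinct covariate patterns observed.
n_cells = 2 ** (p + 1)                          # about 5.6e14
n_distinct = len({tuple(row) for row in X})     # at most n = 4601

# A fresh message almost surely has a pattern never seen in training, so
# Q(x, 1) = Q(x, 0) = 0 and the EPM classifier gives an uninformative tie.
fraction_covered = n_distinct / n_cells
```

The training data can cover at most an $n / 2^{p+1}$ fraction of the cells, which is why the EPM is useless in this regime even though it is unbiased.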

The naive Bayes classifier takes a different approach to reduce this "curse of dimensionality." Note that by Bayes' rule we have:

$$P(Y = y \mid \vec{X} = \vec{x}) = \frac{P(\vec{X} = \vec{x} \mid Y = y)\, P(Y = y)}{P(\vec{X} = \vec{x})}. \tag{1}$$

The naive Bayes classifier assumes that the covariates are independent given $Y = y$; that is:

$$P(\vec{X} = \vec{x} \mid Y = y) = \prod_{j=1}^{p} P(X_j = x_j \mid Y = y).$$

As before, we can drop the denominator $P(\vec{X} = \vec{x})$, so we want to classify $Y = 1$ if:

$$P(Y = 1) \prod_{j=1}^{p} P(X_j = x_j \mid Y = 1) > P(Y = 0) \prod_{j=1}^{p} P(X_j = x_j \mid Y = 0),$$

and we want to classify $Y = 0$ otherwise. The naive Bayes classifier uses an estimate for each of the quantities in the preceding expression.
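One way the estimation step above can be sketched in Python: estimate $P(Y = y)$ by class frequencies and $P(X_j = 1 \mid Y = y)$ by per-class word frequencies, then compare the two products in log space. The Laplace smoothing parameter `alpha` and the toy data are assumptions for illustration; the problem statement itself does not specify how the estimates are formed.

```python
import numpy as np

def fit_naive_bayes(X, Y, alpha=1.0):
    """Estimate P(Y = y) and P(X_j = 1 | Y = y) from binary data.

    alpha is Laplace smoothing, assumed here to avoid zero probabilities.
    """
    priors = np.array([(Y == 0).mean(), (Y == 1).mean()])
    cond = np.vstack([
        (X[Y == y].sum(axis=0) + alpha) / ((Y == y).sum() + 2 * alpha)
        for y in (0, 1)
    ])  # cond[y, j] = estimated P(X_j = 1 | Y = y)
    return priors, cond

def predict(priors, cond, x):
    """Predict 1 iff log P(Y=1) + sum_j log P(X_j=x_j | Y=1) wins."""
    scores = [
        np.log(priors[y]) + np.sum(np.log(np.where(x == 1, cond[y], 1 - cond[y])))
        for y in (0, 1)
    ]
    return int(scores[1] > scores[0])

# Hypothetical toy data: word 0 is indicative of spam, word 1 is not.
X = np.array([[0, 0], [0, 1], [0, 0], [0, 1],
              [1, 0], [1, 1], [1, 0], [1, 1]])
Y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
priors, cond = fit_naive_bayes(X, Y)
label = predict(priors, cond, np.array([1, 0]))  # word 0 present
```

Working in log space avoids numerical underflow: with $p = 48$ factors, the raw products are tiny, but comparing sums of logs preserves the inequality.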