Logistic Regression and Decision Trees
CS 221 Section 4
October 16, 2009

Today we will derive the gradient descent update rule for logistic regression using maximum likelihood, and also go over an example of creating decision trees.

1 Maximum likelihood

Maximum likelihood is a general parameter estimation method. The intuition behind maximum likelihood is that we want to choose a hypothesis which makes the observed data as probable as possible. This requires us to make assumptions about the way our data is generated; in general we will define probabilistic models that describe the data-generating process.

1.1 Example

Suppose we are given the task of predicting the probability that a future tossed thumbtack will land with the pointy side up. To aid us in this task we are given a dataset containing the results of a set of tosses of the thumbtack in question. How should we proceed?

Let's model a thumbtack flip as a Bernoulli random variable, where the probability that the thumbtack lands point up is $\theta$ and the probability that it lands point down is $1 - \theta$. Let's also assume that each toss is independent, with its result drawn from the same Bernoulli distribution.

Say that our data $D$ contains 8 examples where the thumbtack landed point up, and 2 where it landed point down. We can now talk about the probability of this data, given the model parameter $\theta$. This probability is

\[ p(D; \theta) = \theta^8 (1 - \theta)^2 \]

We call this probability the likelihood. Our task is to choose the parameter $\theta$ that we feel best describes the probability that the thumbtack lands point up, and we have a tool that tells us the likelihood of any $\theta$ we pick. Which $\theta$ should we pick?

The principle of maximum likelihood says that we should choose $\theta$ so as to make the probability of the data as high as possible, i.e. we should choose the value of $\theta$ that maximizes the likelihood. So, what would this be for our example? We need to solve

\[ \hat{\theta} = \arg\max_{\theta} \; \theta^8 (1 - \theta)^2 \]

In general it is awkward to take derivatives of products like this; instead we can maximize the log-likelihood $\log p(D; \theta)$, which gives the same answer because the logarithm is a monotonically increasing function. We find the maximum likelihood estimate in our example by setting the derivative of $\log p(D; \theta)$ with respect to $\theta$ to zero:

\begin{align*}
\frac{\partial}{\partial \theta} \log p(D; \theta)
  &= \frac{\partial}{\partial \theta} \log\!\left( \theta^8 (1 - \theta)^2 \right) \\
  &= \frac{\partial}{\partial \theta} \left( \log \theta^8 + \log (1 - \theta)^2 \right) \\
  &= \frac{\partial}{\partial \theta} \left( 8 \log \theta + 2 \log (1 - \theta) \right) \\
  &= \frac{8}{\theta} - \frac{2}{1 - \theta} = 0
\end{align*}

Solving for $\theta$:

\[ \frac{2}{1 - \theta} = \frac{8}{\theta} \quad\Rightarrow\quad 2\theta = 8 - 8\theta \quad\Rightarrow\quad 10\theta = 8 \quad\Rightarrow\quad \theta = 0.8 \]

Thus, 0.8 is our maximum likelihood estimate for the parameter $\theta$; in other words, the data we saw is most likely if $\theta = 0.8$. Note that this matches the intuition of the situation: if someone had asked you what the probability of a thumbtack landing point up was, and also told you that it landed point up 8 out of 10 times previously, you might have answered 80% as your best guess.
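To make the estimate above easy to check, here is a minimal Python sketch (not part of the original handout; the function and variable names are illustrative). It evaluates the Bernoulli log-likelihood $8 \log \theta + 2 \log(1 - \theta)$ on a grid of candidate values and confirms that the maximizer matches the closed-form answer $\theta = 8/10$.

```python
import math

def log_likelihood(theta, ups, downs):
    """Bernoulli log-likelihood: ups * log(theta) + downs * log(1 - theta)."""
    return ups * math.log(theta) + downs * math.log(1 - theta)

ups, downs = 8, 2  # thumbtack landed point up 8 times, point down 2 times

# Closed-form MLE derived above: the fraction of point-up tosses.
theta_mle = ups / (ups + downs)
print(f"closed-form MLE:    {theta_mle:.3f}")    # 0.800

# Numerical check: the log-likelihood should peak at the same value.
grid = [i / 1000 for i in range(1, 1000)]        # theta in (0, 1)
theta_best = max(grid, key=lambda t: log_likelihood(t, ups, downs))
print(f"grid-search argmax: {theta_best:.3f}")   # 0.800
```

Because this log-likelihood is concave in $\theta$, grid search, the closed-form solution, and a gradient-based method would all agree on the same maximizer.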