CS229 Lecture notes
Andrew Ng

Part IV
Generative Learning algorithms

So far, we've mainly been talking about learning algorithms that model p(y | x; θ), the conditional distribution of y given x. For instance, logistic regression modeled p(y | x; θ) as h_θ(x) = g(θ^T x), where g is the sigmoid function. In these notes, we'll talk about a different type of learning algorithm.

Consider a classification problem in which we want to learn to distinguish between elephants (y = 1) and dogs (y = 0), based on some features of an animal. Given a training set, an algorithm like logistic regression or the perceptron algorithm (basically) tries to find a straight line, that is, a decision boundary, that separates the elephants and dogs. Then, to classify a new animal as either an elephant or a dog, it checks on which side of the decision boundary it falls, and makes its prediction accordingly.

Here's a different approach. First, looking at elephants, we can build a model of what elephants look like. Then, looking at dogs, we can build a separate model of what dogs look like. Finally, to classify a new animal, we can match the new animal against the elephant model, and match it against the dog model, to see whether the new animal looks more like the elephants or more like the dogs we had seen in the training set.

Algorithms that try to learn p(y | x) directly (such as logistic regression), or algorithms that try to learn mappings directly from the space of inputs X to the labels {0, 1} (such as the perceptron algorithm), are called discriminative learning algorithms. Here, we'll talk about algorithms that instead try to model p(x | y) (and p(y)). These algorithms are called generative learning algorithms. For instance, if y indicates whether an example is a dog (0) or an elephant (1), then p(x | y = 0) models the distribution of dogs' features, and p(x | y = 1) models the distribution of elephants' features.

After modeling p(y) (called the class priors) and p(x | y), our algorithm can then use Bayes rule to derive the posterior distribution on y given x:

    p(y | x) = p(x | y) p(y) / p(x).

Here, the denominator is given by p(x) = p(x | y = 1) p(y = 1) + p(x | y = 0) p(y = 0) (you should be able to verify that this is true from the standard properties of probabilities), and thus can also be expressed in terms of the quantities p(x | y) and p(y) that we've learned. Actually, if we're calculating p(y | x) in order to make a prediction, then we don't actually need to calculate the denominator, since

    arg max_y p(y | x) = arg max_y p(x | y) p(y) / p(x)
                       = arg max_y p(x | y) p(y). ...
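The Bayes rule and arg max identities above translate directly into code. The following Python snippet is a minimal sketch of this prediction step, not part of the original notes: the class priors and the 1-D Gaussian class-conditional densities are hypothetical stand-ins for models that would actually be fit to training data.

    from scipy.stats import norm

    # Hypothetical, already-estimated pieces of a generative model:
    # class priors p(y) and class-conditional densities p(x | y).
    priors = {0: 0.7, 1: 0.3}                 # p(y = 0), p(y = 1)
    cond = {
        0: norm(loc=2.0, scale=1.0),          # p(x | y = 0), e.g. "dog" features
        1: norm(loc=6.0, scale=1.5),          # p(x | y = 1), e.g. "elephant" features
    }

    def posterior(x):
        # Bayes rule: p(y | x) = p(x | y) p(y) / p(x)
        joint = {y: cond[y].pdf(x) * priors[y] for y in (0, 1)}   # p(x | y) p(y)
        p_x = sum(joint.values())                                 # p(x) by total probability
        return {y: joint[y] / p_x for y in (0, 1)}

    def predict(x):
        # arg max_y p(x | y) p(y); dropping p(x) cannot change the arg max
        return max((0, 1), key=lambda y: cond[y].pdf(x) * priors[y])

    x_new = 4.0
    print(posterior(x_new))   # full posterior distribution over y
    print(predict(x_new))     # same label as the arg max of the posterior

Note that predict only compares p(x | y) p(y) across labels; dividing by p(x) would scale both classes by the same constant, which is why the denominator can be skipped when all we want is the predicted label.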