CS229 Lecture notes
Andrew Ng

Part IV
Generative Learning algorithms

So far, we've mainly been talking about learning algorithms that model p(y|x; θ), the conditional distribution of y given x. For instance, logistic regression modeled p(y|x; θ) as h_θ(x) = g(θ^T x), where g is the sigmoid function. In these notes, we'll talk about a different type of learning algorithm.

Consider a classification problem in which we want to learn to distinguish between elephants (y = 1) and dogs (y = 0), based on some features of an animal. Given a training set, an algorithm like logistic regression or the perceptron algorithm (basically) tries to find a straight line (that is, a decision boundary) that separates the elephants and the dogs. Then, to classify a new animal as either an elephant or a dog, it checks on which side of the decision boundary it falls, and makes its prediction accordingly.

Here's a different approach. First, looking at elephants, we can build a model of what elephants look like. Then, looking at dogs, we can build a separate model of what dogs look like. Finally, to classify a new animal, we can match the new animal against the elephant model, and match it against the dog model, to see whether the new animal looks more like the elephants or more like the dogs we had seen in the training set.

Algorithms that try to learn p(y|x) directly (such as logistic regression), or algorithms that try to learn mappings directly from the space of inputs X to the labels {0, 1} (such as the perceptron algorithm), are called discriminative learning algorithms. Here, we'll talk about algorithms that instead try to model p(x|y) (and p(y)). These algorithms are called generative learning algorithms. For instance, if y indicates whether an example is a dog (0) or an elephant (1), then p(x|y = 0) models the distribution of dogs' features, and p(x|y = 1) models the distribution of elephants' features.
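To make this "match against each class model" idea concrete, here is a minimal sketch (not from the notes): it fits one very simple density model per class, a one-dimensional Gaussian over a single made-up feature, and classifies a new example by whichever class model assigns it the higher likelihood. The data, the feature, and the Gaussian choice are illustrative assumptions only; the notes develop a proper model for p(x|y) below.

```python
import numpy as np

def fit_class_model(x):
    """Fit a very simple density model (a 1-D Gaussian) to one class's features."""
    return x.mean(), x.std()

def log_density(x_new, params):
    """Log-density of a new feature value under a fitted class model."""
    mu, sigma = params
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - (x_new - mu) ** 2 / (2 * sigma ** 2)

# Toy 1-D feature (say, body weight in kg) for dogs (y = 0) and elephants (y = 1).
dogs = np.array([20.0, 35.0, 25.0, 40.0])
elephants = np.array([4000.0, 5200.0, 4700.0])

dog_model = fit_class_model(dogs)            # "what dogs look like"
elephant_model = fit_class_model(elephants)  # "what elephants look like"

x_new = 4500.0  # feature of a new animal
# Match the new animal against each class model and pick whichever fits better.
prediction = 1 if log_density(x_new, elephant_model) > log_density(x_new, dog_model) else 0
print(prediction)  # 1: the new animal looks more like the elephants in the training set
```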
After modeling p(y) (called the class priors) and p(x|y), our algorithm can then use Bayes rule to derive the posterior distribution on y given x:

    p(y|x) = \frac{p(x|y)\, p(y)}{p(x)}.

Here, the denominator is given by p(x) = p(x|y=1)p(y=1) + p(x|y=0)p(y=0) (you should be able to verify that this is true from the standard properties of probabilities), and thus can also be expressed in terms of the quantities p(x|y) and p(y) that we've learned. Actually, if we're calculating p(y|x) in order to make a prediction, then we don't actually need to calculate the denominator, since

    \arg\max_y p(y|x) = \arg\max_y \frac{p(x|y)\, p(y)}{p(x)} = \arg\max_y p(x|y)\, p(y).

1  Gaussian discriminant analysis

The first generative learning algorithm that we'll look at is Gaussian discriminant analysis (GDA). In this model, we'll assume that p(x|y) is distributed according to a multivariate normal distribution. Let's talk briefly about the properties of multivariate normal distributions before moving on to the GDA model itself.
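Before getting into those details, here is a minimal numeric sketch of the prediction step such a model leads to (my own illustration with made-up priors and Gaussian class-conditionals, not part of the notes): it scores each class by p(x|y)p(y) and checks that dropping the denominator p(x) does not change the arg max, as in the identity above.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Made-up class priors p(y) and Gaussian class-conditionals p(x|y)
# (the kind of model GDA fits; the numbers are purely illustrative).
prior = {0: 0.7, 1: 0.3}
cond = {
    0: multivariate_normal(mean=[0.0, 0.0], cov=np.eye(2)),
    1: multivariate_normal(mean=[2.0, 2.0], cov=np.eye(2)),
}

x = np.array([1.5, 1.0])  # a new input to classify

# Unnormalized scores p(x|y) p(y).
joint = {y: cond[y].pdf(x) * prior[y] for y in (0, 1)}

# Full posterior p(y|x) = p(x|y) p(y) / p(x), where p(x) = sum_y p(x|y) p(y).
p_x = sum(joint.values())
posterior = {y: joint[y] / p_x for y in (0, 1)}

# p(x) is the same for every y, so both rankings pick the same class.
assert max(joint, key=joint.get) == max(posterior, key=posterior.get)
print(posterior)
```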