CS229 Lecture notes
Andrew Ng

1  The perceptron and large margin classifiers

In this final set of notes on learning theory, we will introduce a different model of machine learning. Specifically, we have so far been considering batch learning settings, in which we are first given a training set to learn with, and our hypothesis h is then evaluated on separate test data. In this set of notes, we will consider the online learning setting, in which the algorithm has to make predictions continuously even while it's learning.

In this setting, the learning algorithm is given a sequence of examples (x^(1), y^(1)), (x^(2), y^(2)), ..., (x^(m), y^(m)) in order. Specifically, the algorithm first sees x^(1) and is asked to predict what it thinks y^(1) is. After making its prediction, the true value of y^(1) is revealed to the algorithm (and the algorithm may use this information to perform some learning). The algorithm is then shown x^(2) and again asked to make a prediction, after which y^(2) is revealed, and it may again perform some more learning. This proceeds until we reach (x^(m), y^(m)). In the online learning setting, we are interested in the total...
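The predict-then-reveal protocol above can be sketched in code. The following is a minimal illustration, not the notes' own algorithm statement: it assumes the learner is a perceptron with labels in {-1, +1}, predicts sign(theta . x) before each true label is "revealed", updates only on mistakes, and counts the total number of mistakes over the sequence (the `stream` data below is invented for the example).

```python
import numpy as np

def online_perceptron(examples):
    """Run the perceptron through a stream of (x, y) pairs, y in {-1, +1}.

    At each step the learner predicts before the true label is revealed,
    then updates only when its prediction was wrong. Returns the total
    number of mistakes made over the whole sequence.
    """
    theta = None
    mistakes = 0
    for x, y in examples:
        x = np.asarray(x, dtype=float)
        if theta is None:
            theta = np.zeros_like(x)  # start with the zero hypothesis
        # Predict sign(theta . x) before seeing y (ties predict -1 here).
        prediction = 1 if theta @ x > 0 else -1
        # The true label y is now "revealed"; learn from it if we erred.
        if prediction != y:
            mistakes += 1
            theta = theta + y * x  # perceptron update on a mistake
    return mistakes

# Hypothetical linearly separable stream: label is the sign of x's
# first coordinate.
stream = [((1.0, 0.5), 1), ((-1.0, 0.2), -1),
          ((2.0, -0.3), 1), ((-1.5, 0.1), -1)]
print(online_perceptron(stream))  # prints 1 (one mistake on the first example)
```

Note that the hypothesis is never evaluated on a held-out test set; the quantity of interest is how often the algorithm errs while it is still learning.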
This note was uploaded on 01/24/2010 for the course CS 229 at Stanford.