Stanford | CS 229
AI
Professors
• Andrew Ng

#### 28 sample documents related to CS 229

• Stanford CS 229
CS145 Midterm Examination Autumn 2006, Prof. Widom Please read all instructions (including these) carefully. There are 6 problems on the exam, with a varying number of points for each problem and subproblem for a total of 75 points to be completed in 75

• Stanford CS 229
List of related AI Classes CS229 covered a broad swath of topics in machine learning, compressed into a single quarter. Machine learning is a hugely inter-disciplinary topic, and there are many other sub-communities of AI working on related topics, or wor

• Stanford CS 229
Hidden Markov Models Fundamentals Daniel Ramage CS229 Section Notes December 1, 2007 How can we apply machine learning to data that is represented as a sequence of observations over time? For instance, we might be interested in discovering the sequence of
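
The section notes above ask how to compute with sequences of observations over time. The forward algorithm is the standard way to evaluate the probability of an observation sequence under an HMM; a minimal pure-Python sketch, where the states, transition, and emission probabilities are invented for illustration and not taken from the notes:

```python
# Forward algorithm for a toy HMM: P(observations) by summing over hidden paths.
# All probabilities below are made-up illustrative values.
INIT = {"rain": 0.6, "sun": 0.4}
TRANS = {"rain": {"rain": 0.7, "sun": 0.3},
         "sun":  {"rain": 0.4, "sun": 0.6}}
EMIT = {"rain": {"umbrella": 0.9, "walk": 0.1},
        "sun":  {"umbrella": 0.2, "walk": 0.8}}

def forward(obs):
    """Return P(obs) = sum over hidden state paths of P(path, obs)."""
    # alpha[s] = P(observations so far, current state = s)
    alpha = {s: INIT[s] * EMIT[s][obs[0]] for s in INIT}
    for o in obs[1:]:
        alpha = {s: sum(alpha[p] * TRANS[p][s] for p in alpha) * EMIT[s][o]
                 for s in INIT}
    return sum(alpha.values())
```

As a sanity check, the probabilities of all length-1 observation sequences sum to 1.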

• Stanford CS 229
CS229 Lecture notes Andrew Ng Supervised learning Let's start by talking about a few examples of supervised learning problems. Suppose we have a dataset giving the living areas and prices of 47 houses from Portland, Oregon: Living area (feet²) 2104 1600 2
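
These notes develop linear regression fitted by the LMS (batch gradient descent) update; a minimal sketch on an invented toy dataset, not the 47-house Portland data:

```python
# Batch gradient descent for linear regression (LMS rule):
#   theta_j := theta_j - alpha * sum_i (h(x_i) - y_i) * x_ij
# Toy data generated exactly from y = 1 + 2x (invented for illustration).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

theta0, theta1 = 0.0, 0.0
alpha = 0.05                      # learning rate, small enough to converge here
for _ in range(10000):
    # gradient of J(theta) = (1/2) * sum_i (theta0 + theta1*x_i - y_i)^2
    g0 = sum((theta0 + theta1 * x) - y for x, y in zip(xs, ys))
    g1 = sum(((theta0 + theta1 * x) - y) * x for x, y in zip(xs, ys))
    theta0 -= alpha * g0
    theta1 -= alpha * g1
```

Because the data are exactly linear, the iterates converge to the intercept 1 and slope 2.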

• Stanford CS 229
CS229 Lecture notes Andrew Ng Part IV Generative Learning algorithms So far, we've mainly been talking about learning algorithms that model p(y|x; θ), the conditional distribution of y given x. For instance, logistic regression modeled p(y|x; θ) as hθ(x) =

• Stanford CS 229
CS229 Lecture notes Andrew Ng Part V Support Vector Machines This set of notes presents the Support Vector Machine (SVM) learning algorithm. SVMs are among the best (and many believe are indeed the best) off-the-shelf supervised learning algorithms. To tell t

• Stanford CS 229
CS229 Lecture notes Andrew Ng Part VI Learning Theory 1 Bias/variance tradeoff When talking about linear regression, we discussed the problem of whether to fit a simple model such as the linear y = θ0 + θ1 x, or a more complex model such as the polynomial y = θ0

• Stanford CS 229
CS229 Lecture notes Andrew Ng Part VI Regularization and model selection Suppose we are trying to select among several different models for a learning problem. For instance, we might be using a polynomial regression model hθ(x) = g(θ0 + θ1 x + θ2 x^2 + · · · + θk x^k),

• Stanford CS 229
CS229 Lecture notes Andrew Ng 1 The perceptron and large margin classifiers In this final set of notes on learning theory, we will introduce a different model of machine learning. Specifically, we have so far been considering batch learning settings in which we a
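
The perceptron discussed in these notes makes a prediction on each example and updates its weights only when it errs; a minimal online-learning sketch on invented, linearly separable toy data:

```python
# Online perceptron: h(x) = 1 if theta . x >= 0 else 0,
# update theta := theta + alpha * (y - h(x)) * x on each mistake.
# Toy separable data (invented): feature vector is [bias, x], label 1 iff x >= 2.
data = [([1.0, 0.0], 0), ([1.0, 1.0], 0), ([1.0, 3.0], 1), ([1.0, 4.0], 1)]

def h(theta, x):
    return 1 if sum(t * xi for t, xi in zip(theta, x)) >= 0 else 0

theta = [0.0, 0.0]
alpha = 1.0
for _ in range(100):              # passes over the data; halts once error-free,
    mistakes = 0                  # which is guaranteed for separable data
    for x, y in data:
        pred = h(theta, x)
        if pred != y:
            mistakes += 1
            theta = [t + alpha * (y - pred) * xi for t, xi in zip(theta, x)]
    if mistakes == 0:
        break
```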

• Stanford CS 229
CS229 Lecture notes Andrew Ng The k-means clustering algorithm In the clustering problem, we are given a training set {x(1), . . . , x(m)}, and want to group the data into a few cohesive clusters. Here, x(i) ∈ Rn as usual; but no labels y(i) are given
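
The k-means algorithm in these notes alternates between assigning each point to its nearest centroid and recomputing each centroid as its cluster's mean; a minimal 1-D sketch with invented data and a deterministic initialization:

```python
# k-means on 1-D points: alternate assignment and centroid-update steps
# until the centroids stop moving. Data and initial centroids are invented.
points = [0.0, 1.0, 0.9, 10.0, 10.2, 9.8]
centroids = [0.0, 10.0]                      # deterministic initialization

while True:
    # assignment step: each point joins the cluster of its nearest centroid
    clusters = [[] for _ in centroids]
    for p in points:
        j = min(range(len(centroids)), key=lambda j: (p - centroids[j]) ** 2)
        clusters[j].append(p)
    # update step: each centroid becomes the mean of its assigned points
    new = [sum(c) / len(c) if c else centroids[j]
           for j, c in enumerate(clusters)]
    if new == centroids:
        break
    centroids = new
```

On this data the loop converges in one pass, to the means of the two obvious groups.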

• Stanford CS 229
CS229 Lecture notes Andrew Ng Mixtures of Gaussians and the EM algorithm In this set of notes, we discuss the EM (Expectation-Maximization) algorithm for density estimation. Suppose that we are given a training set {x(1), . . . , x(m)} as usual. Since we are in
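
EM for a Gaussian mixture alternates an E-step (soft-assign each point a responsibility under each component) with an M-step (re-estimate each component's weight, mean, and variance from those responsibilities); a minimal 1-D, two-component sketch on invented data with a deterministic initialization:

```python
import math

# EM for a mixture of two 1-D Gaussians (toy data invented for illustration).
data = [-0.5, 0.0, 0.5, 4.5, 5.0, 5.5]

def gauss(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

phi = 0.5                        # mixing weight of component 1
mu = [min(data), max(data)]      # deterministic initialization
var = [1.0, 1.0]

for _ in range(50):
    # E-step: responsibility of component 1 for each point
    w = []
    for x in data:
        p1 = phi * gauss(x, mu[0], var[0])
        p2 = (1 - phi) * gauss(x, mu[1], var[1])
        w.append(p1 / (p1 + p2))
    # M-step: re-estimate parameters from the soft assignments
    n1 = sum(w)
    n2 = len(data) - n1
    phi = n1 / len(data)
    mu = [sum(wi * x for wi, x in zip(w, data)) / n1,
          sum((1 - wi) * x for wi, x in zip(w, data)) / n2]
    # floor the variances so a component cannot collapse onto one point
    var = [max(sum(wi * (x - mu[0]) ** 2 for wi, x in zip(w, data)) / n1, 1e-6),
           max(sum((1 - wi) * (x - mu[1]) ** 2 for wi, x in zip(w, data)) / n2, 1e-6)]
```

On this well-separated data the means converge to the two cluster centers, 0 and 5.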

• Stanford CS 229
CS229 Lecture notes Andrew Ng Part IX The EM algorithm In the previous set of notes, we talked about the EM algorithm as applied to fitting a mixture of Gaussians. In this set of notes, we give a broader view of the EM algorithm, and show how it can be appl

• Stanford CS 229
CS229 Lecture notes Andrew Ng Part X Factor analysis When we have data x(i) ∈ Rn that comes from a mixture of several Gaussians, the EM algorithm can be applied to fit a mixture model. In this setting, we usually imagine problems where we have sufficient data

• Stanford CS 229
CS229 Lecture notes Andrew Ng Part XI Principal components analysis In our discussion of factor analysis, we gave a way to model data x ∈ Rn as approximately lying in some k-dimensional subspace, where k ≪ n. Specifically, we imagined that each point x(i) was
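
PCA's first principal component is the leading eigenvector of the covariance matrix of the mean-centered data; for 2-D data the 2×2 symmetric eigenproblem has a closed form, so a pure-Python sketch needs no linear-algebra library (the points below are invented, lying exactly on the line y = x):

```python
import math

# Top principal component of 2-D data: leading eigenvector of the
# covariance (1/m) * sum of x x^T over mean-centered points.
pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]   # toy collinear data
m = len(pts)
mx = sum(p[0] for p in pts) / m
my = sum(p[1] for p in pts) / m
centered = [(x - mx, y - my) for x, y in pts]

# 2x2 covariance matrix [[a, b], [b, c]]
a = sum(x * x for x, _ in centered) / m
b = sum(x * y for x, y in centered) / m
c = sum(y * y for _, y in centered) / m

# closed-form leading eigenvalue/eigenvector of a symmetric 2x2 matrix
lam = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b * b)
vx, vy = b, lam - a              # valid when b != 0
norm = math.hypot(vx, vy)
u = (vx / norm, vy / norm)       # unit-length first principal component
```

Since the toy points lie on y = x, the component comes out as (1/√2, 1/√2).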

• Stanford CS 229
CS229 Lecture notes Andrew Ng Part XII Independent Components Analysis Our next topic is Independent Components Analysis (ICA). Similar to PCA, this will find a new basis in which to represent our data. However, the goal is very different. As a motivating exa

• Stanford CS 229
CS229 Lecture notes Andrew Ng Part XIII Reinforcement Learning and Control We now begin our study of reinforcement learning and adaptive control. In supervised learning, we saw algorithms that tried to make their outputs mimic the labels y given in the tr

• Stanford CS 229
CS 229 Machine Learning Handout #1: Course Information Teaching Staff and Contact Info Professor: Andrew Ng Office: Gates 156 TA: Paul Baumstarck Office: B24B TA: Catie Chang Office: B24A TA: Chuong (Tom) Do Office: B24A TA: Zico Kolter (head TA) Office:

• Stanford CS 229
Advice for applying Machine Learning Andrew Y. Ng, Stanford University Today's Lecture Advice on how to get learning algorithms to work on different applications. Most of today's material is not very mathematical. But it's also some of the hardest materi

• Stanford CS 229
CS229 Practice Midterm CS 229, Autumn 2007 Practice Midterm Notes: 1. The midterm will have about 5-6 long questions, and about 8-10 short questions. Space will be provided on the actual midterm for you to write your answers. 2. The midterm is meant to

• Stanford CS 229
CS229 Problem Set #1 CS 229, Public Course Problem Set #1: Supervised Learning 1. Newton's method for computing least squares In this problem, we will prove that if we use Newton's method to solve the least squares optimization problem, then we only need one
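
The point of that exercise is that least squares has a quadratic objective, so the Newton step θ := θ − H⁻¹∇J(θ) lands exactly at the minimizer from any starting point. A sketch on an invented two-parameter problem, with the 2×2 system solved by hand:

```python
# Newton's method on least squares J(theta) = (1/2) sum_i (theta^T x_i - y_i)^2.
# Gradient = X^T (X theta - y); Hessian = X^T X is constant, so one Newton
# step from anywhere reaches the minimizer. Toy data; features are [1, x].
X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
y = [1.0, 2.0, 4.0, 5.0]

def newton_step(theta):
    # Hessian H = X^T X and gradient g = X^T (X theta - y)
    H = [[sum(r[i] * r[j] for r in X) for j in range(2)] for i in range(2)]
    resid = [sum(t * xi for t, xi in zip(theta, r)) - yi for r, yi in zip(X, y)]
    g = [sum(r[i] * e for r, e in zip(X, resid)) for i in range(2)]
    # solve H d = g for the 2x2 case (Cramer's rule), then theta := theta - d
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    d0 = (H[1][1] * g[0] - H[0][1] * g[1]) / det
    d1 = (H[0][0] * g[1] - H[1][0] * g[0]) / det
    return [theta[0] - d0, theta[1] - d1]

theta = newton_step([0.0, 0.0])   # a single step from the origin
```

A second step leaves θ unchanged, which is the behavior the problem asks you to prove.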

• Stanford CS 229
CS229 Problem Set #2 CS 229, Public Course Problem Set #2: Kernels, SVMs, and Theory 1. Kernel ridge regression In contrast to ordinary least squares which has a cost function J(θ) = (1/2) Σ_{i=1}^{m} (θᵀx(i) − y(i))², we can also add a term that penalizes lar
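
Penalizing large parameters with a term (λ/2)‖θ‖² gives ridge regression, and in its kernelized form the solution is α = (K + λI)⁻¹y with predictions f(x) = Σᵢ αᵢ k(xᵢ, x). A sketch with an RBF kernel on invented 1-D data, solving the small linear system with hand-rolled Gaussian elimination:

```python
import math

# Kernel ridge regression: alpha = (K + lambda*I)^(-1) y,
# f(x) = sum_i alpha_i * k(x_i, x). Toy 1-D data, invented for illustration.
xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0, 0.0]
lam = 1e-3                           # regularization strength (a choice)

def k(a, b):
    return math.exp(-(a - b) ** 2)   # RBF kernel, bandwidth 1 (a choice)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

K = [[k(a, b) for b in xs] for a in xs]
A = [[K[i][j] + (lam if i == j else 0.0) for j in range(len(xs))] for i in range(len(xs))]
alpha = solve(A, ys)

def f(x):
    return sum(ai * k(xi, x) for ai, xi in zip(alpha, xs))
```

With λ this small, the fit nearly interpolates the training points; a larger λ trades training error for smaller ‖α‖.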

• Stanford CS 229
CS229 Problem Set #3 CS 229, Public Course Problem Set #3: Learning Theory and Unsupervised Learning 1. Uniform convergence and Model Selection In this problem, we will prove a bound on the error of a simple model selection procedure. Let there be a bin

• Stanford CS 229
CS229 Problem Set #4 CS 229, Public Course Problem Set #4: Unsupervised Learning and Reinforcement Learning 1. EM for supervised learning In class we applied EM to the unsupervised learning setting. In particular, we represented p(x) by marginalizing ov

• Stanford CS 229
CS229 Final Project Guidelines CS 229, Autumn 2007 Final Project Guidelines and Suggestions 1 Project overview One of CS229's goals is to prepare you to (i) apply state-of-the-art machine learning algorithms to an application, and (ii) do research in mac

• Stanford CS 229
CS229 Problem Set #2 Solutions CS 229, Public Course Problem Set #2 Solutions: Kernels, SVMs, and Theory 1. Kernel ridge regression In contrast to ordinary least squares which has a cost function J(θ) = (1/2) Σ_{i=1}^{m} (θᵀx(i) − y(i))², we can also add a ter

• Stanford CS 229
CS229 Problem Set #3 Solutions CS 229, Public Course Problem Set #3 Solutions: Learning Theory and Unsupervised Learning 1. Uniform convergence and Model Selection In this problem, we will prove a bound on the error of a simple model selection procedure

• Stanford CS 229
CS229 Problem Set #4 Solutions CS 229, Public Course Problem Set #4 Solutions: Unsupervised Learning and Reinforcement Learning 1. EM for supervised learning In class we applied EM to the unsupervised learning setting. In particular, we represented p(x)

• Stanford CS 229
CS 229 Machine Learning Handout #2: Tentative Course Schedule Syllabus Introduction (1 class) Basic concepts. Supervised learning. (6 classes) Supervised learning setup. LMS. Logistic regression. Perceptron. Exponential family. Generative learning algorit