# CS229 Lecture Notes: Supervised Learning (Andrew Ng)


## Supervised learning

Let's start by talking about a few examples of supervised learning problems. Suppose we have a dataset giving the living areas and prices of 47 houses from Portland, Oregon:

| Living area (ft²) | Price ($1000s) |
|---|---|
| 2104 | 400 |
| 1600 | 330 |
| 2400 | 369 |
| 1416 | 232 |
| 3000 | 540 |
| ⋮ | ⋮ |

We can plot this data. [Figure: scatter plot of housing prices, with price (in $1000s) on the vertical axis and living area in square feet on the horizontal axis.]

Given data like this, how can we learn to predict the prices of other houses in Portland, as a function of the size of their living areas?
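As a concrete illustration, the table above can be held in a few lines of Python. The baseline "predictor" below is a placeholder that ignores its input; it is not the method developed in these notes, just a stand-in for the function we want to learn:

```python
# The Portland housing data from the table above, as parallel lists.
living_area_sqft = [2104, 1600, 2400, 1416, 3000]
price_k = [400, 330, 369, 232, 540]  # prices in $1000s

def predict_mean(_area_sqft):
    """Baseline that ignores the input and returns the mean price.

    Supervised learning aims to do better by actually using living area.
    """
    return sum(price_k) / len(price_k)
```

Any learned predictor should at least beat this input-independent baseline.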

*CS229, Winter 2003*

To establish notation for future use, we'll use x^(i) to denote the "input" variables (living area in this example), also called input *features*, and y^(i) to denote the "output" or *target* variable that we are trying to predict (price). A pair (x^(i), y^(i)) is called a *training example*, and the dataset that we'll be using to learn, a list of m training examples {(x^(i), y^(i)); i = 1, ..., m}, is called a *training set*. Note that the superscript "(i)" in the notation is simply an index into the training set, and has nothing to do with exponentiation. We will also use X to denote the space of input values, and Y the space of output values. In this example, X = Y = ℝ.

To describe the supervised learning problem slightly more formally, our goal is, given a training set, to learn a function h : X → Y so that h(x) is a "good" predictor for the corresponding value of y. For historical reasons, this function h is called a *hypothesis*. Seen pictorially, the process is therefore like this: [Figure: the training set is fed to a learning algorithm, which outputs a hypothesis h; h maps an input x (living area of a house) to a predicted y (predicted price of the house).]

When the target variable that we're trying to predict is continuous, such as in our housing example, we call the learning problem a *regression* problem. When y can take on only a small number of discrete values (such as if, given the living area, we wanted to predict if a dwelling is a house or an apartment, say), we call it a *classification* problem.
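The notation above can be sketched in a few lines of Python (a minimal illustration; the hypothesis shown is an arbitrary placeholder, not a model fitted to the data):

```python
# A training set is a list of m pairs (x, y); values are from the housing table.
training_set = [(2104, 400), (1600, 330), (2400, 369), (1416, 232), (3000, 540)]
m = len(training_set)

# The superscript (i) is just an index into this list, not exponentiation:
x_1, y_1 = training_set[0]  # the first training example (x^(1), y^(1))

def h(x):
    """A hypothesis h : X -> Y maps an input to a predicted output.

    The coefficient here is arbitrary, chosen only to illustrate the interface.
    """
    return 0.17 * x
```

Because y here is continuous (a price), this is a regression problem; if h instead returned one of a few discrete labels, it would be classification.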
## Part I: Linear Regression

To make our housing example more interesting, let's consider a slightly richer dataset in which we also know the number of bedrooms in each house:

| Living area (ft²) | #bedrooms | Price ($1000s) |
|---|---|---|
| 2104 | 3 | 400 |
| 1600 | 3 | 330 |
| 2400 | 3 | 369 |
| 1416 | 2 | 232 |
| 3000 | 4 | 540 |
| ⋮ | ⋮ | ⋮ |

Here, the x's are two-dimensional vectors in ℝ². For instance, x^(i)_1 is the living area of the i-th house in the training set, and x^(i)_2 is its number of bedrooms. (In general, when designing a learning problem, it will be up to you to decide what features to choose, so if you are out in Portland gathering housing data, you might also decide to include other features such as whether each house has a fireplace, the number of bathrooms, and so on. We'll say more about feature selection later; for now, let's take the features as given.)

To perform supervised learning, we must decide how we're going to represent functions/hypotheses h in a computer. As an initial choice, let's say we decide to approximate y as a linear function of x:

h_θ(x) = θ_0 + θ_1 x_1 + θ_2 x_2

Here, the θ_i's are the *parameters* (also called *weights*) parameterizing the space of linear functions mapping from X to Y.
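This linear hypothesis can be sketched as a dot product by prepending the constant feature x_0 = 1 (the θ values below are illustrative, not parameters fitted to the housing data):

```python
import numpy as np

# h_theta(x) = theta_0 + theta_1 * x_1 + theta_2 * x_2, written as theta . x
# with an augmented input x = [1, x_1, x_2].
theta = np.array([50.0, 0.1, 20.0])  # [theta_0, theta_1, theta_2], illustrative

def h(theta, living_area, bedrooms):
    x = np.array([1.0, living_area, bedrooms])  # augmented input, x_0 = 1
    return float(np.dot(theta, x))

prediction = h(theta, 2104, 3)  # 50 + 0.1*2104 + 20*3 = 320.4
```

Writing the hypothesis as a dot product makes the later vectorized treatment of many examples at once natural.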
