6.867 Machine learning
Mid-term exam
October 8, 2003
(2 points) Your name and MIT ID:
J. J. Doe, MIT ID# 000000000
Problem 1
In this problem we use sequential active learning to estimate a linear model
y = w1 x + w0 + ε
where the input space (x values) are restricted to be within [1,
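A minimal sketch of the kind of query rule sequential active learning uses for such a linear model, assuming Gaussian noise and a variance-reduction criterion; the function names, candidate grid, and observed points below are illustrative, not from the exam:

```python
# Sketch: sequential active learning for y = w1*x + w0 + noise.
# Assumed query rule: pick the candidate x whose prediction currently has
# the largest variance, proportional to phi(x)^T (X^T X)^{-1} phi(x)
# with feature map phi(x) = [1, x].

def xtx(xs):
    """2x2 matrix sum of phi(x) phi(x)^T over the observed inputs."""
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    return [[n, sx], [sx, sxx]]

def inv2(m):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def pred_variance(x, a_inv):
    """phi(x)^T A^{-1} phi(x): the variance score for candidate x."""
    phi = [1.0, x]
    row = [sum(a_inv[i][j] * phi[j] for j in range(2)) for i in range(2)]
    return sum(phi[i] * row[i] for i in range(2))

def next_query(observed_xs, candidates):
    a_inv = inv2(xtx(observed_xs))
    return max(candidates, key=lambda x: pred_variance(x, a_inv))

# With observations clustered near 0, the most informative next input
# sits at the boundary of the allowed interval.
print(next_query([-0.1, 0.0, 0.1], [-1.0, -0.5, 0.0, 0.5, 1.0]))
```

For this model the score 1/3 + 50x² grows with |x|, so the rule queries an endpoint of the interval first, which matches the usual intuition that boundary points pin down the slope fastest.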
6.867 Machine learning
Final exam
December 3, 2004
Your name and MIT ID:
(Optional) The grade you would give to yourself + a brief justification.
Cite as: Tommi Jaakkola, course materials for 6.867 Machine Learning, Fall 2006. MIT OpenCourseWare
(http://ocw.mit.edu/), Massachusetts Institute of Technology.
6.867 Machine learning
Final exam (Fall 2003)
December 14, 2003
Problem 1: your information
1.1. Your name and MIT ID:
1.2. The grade you would give to yourself + brief justification (if you feel that
there's no question your grade should be an A, then just
6.867 Machine learning and neural networks
FALL 2001 Final exam
December 11, 2001
(2 points) Your name and MIT ID #:
(4 points) The grade you would give to yourself + brief justification. If you
feel that there's no question that your grade should be A (an
6.867 Machine learning
Final exam
December 5, 2002
(2 points) Your name and MIT ID:
(4 points) The grade you would give to yourself + a brief justification:
Problem 1
We wish to estimate a mixture of two experts model for the data displayed in Figure 1.
T
6.867 Machine learning
Final exam
December 5, 2002
(2 points) Your name and MIT ID:
J Doe, #000
(4 points) The grade you would give to yourself + a brief justification:
A, or perhaps A- if there are any typos or other errors in the solutions.
Problem 1
We
6.867 Machine learning
Final exam (Fall 2003)
December 10, 2003
Problem 1: your information
1.1. Your name and MIT ID:
1.2. The grade you would give to yourself + brief justification (if you feel that
there's no question your grade should be an A, then just
6.867 Machine learning
Mid-term exam
October 8, 2003
(2 points) Your name and MIT ID:
Problem 1
We are interested here in a particular 1-dimensional linear regression problem. The dataset
corresponding to this problem has n examples (x1, y1), . . . , (
6.867 Machine learning
Mid-term exam
October 22, 2002
(2 points) Your name and MIT ID:
Problem 1
We are interested here in a particular 1-dimensional linear regression problem. The dataset
corresponding to this problem has n examples (x1, y1), . . . ,
6.867 Machine learning
Mid-term exam
October 13, 2006
(2 points) Your name and MIT ID:
Problem 1
Suppose we are trying to solve an active learning problem, where the possible inputs you
can select form a discrete set. Specifically, we have a set of N unlab
6.867 Machine learning
Mid-term exam
October 15, 2003
(2 points) Your name and MIT ID:
SOLUTIONS
Problem 1
Suppose we are trying to solve an active learning problem, where the possible inputs you
can select form a discrete set. Specifically, we have a set
6.867 Machine learning
Mid-term exam
October 13, 2004
(2 points) Your name and MIT ID:
Problem 1
[Figure: three plots, labeled A, B, and C, each plotted against input x; panels annotated "noise".]
1. (6 points) Each plot above claims to represent prediction errors as a function of
x f
6.867 Machine learning
Mid-term exam
October 13, 2004
(2 points) Your name and MIT ID:
T. Assistant, 968672004
Problem 1
[Figure: three plots, labeled A, B, and C, of prediction error against x.]
1. (6 points) Each plot above claims to represent prediction errors as
6.867 Machine learning
Final exam
December 3, 2004
Your name and MIT ID:
J. D. 00000000
(Optional) The grade you would give to yourself + a brief justification.
A. why not?
6.867 Machine learning
Mid-term exam
October 18, 2006
(2 points) Your name and MIT ID:
The MIT License (MIT)
Copyright (c) 2013 Eric Romano (@gelstudios).
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restrict
[![Build Status](https://travis-ci.org/gelstudios/gitfiti.svg?branch=master)](https://travis-ci.org/gelstudios/gitfiti)
gitfiti _noun_ : Carefully crafted graffiti in a github commit history calendar.
An example of gitfiti in the wild:
![alt text](https://r
ECE 901
Lecture 13: Maximum Likelihood Estimation
R. Nowak
5/17/2009
The focus of this lecture is to consider another approach to learning, based on maximum likelihood
estimation. Unlike earlier approaches considered here, we are willing to make somewhat st
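As a concrete illustration of the maximum likelihood idea (the Gaussian choice and the data below are my own, not from the lecture): for an i.i.d. Gaussian sample, the log-likelihood is maximized in closed form by the sample mean and the biased sample variance.

```python
import math

# Maximum likelihood for an i.i.d. Gaussian sample: the log-likelihood
#   L(mu, s2) = -n/2 * log(2*pi*s2) - sum((x - mu)^2) / (2*s2)
# is maximized in closed form by the sample mean and the biased
# (divide-by-n) sample variance.

def log_likelihood(data, mu, s2):
    n = len(data)
    return (-0.5 * n * math.log(2 * math.pi * s2)
            - sum((x - mu) ** 2 for x in data) / (2 * s2))

def gaussian_mle(data):
    n = len(data)
    mu = sum(data) / n
    s2 = sum((x - mu) ** 2 for x in data) / n   # note: n, not n - 1
    return mu, s2

data = [2.1, 1.9, 2.4, 1.6, 2.0]
mu_hat, s2_hat = gaussian_mle(data)

# Nudging either parameter away from the MLE can only lower the likelihood.
best = log_likelihood(data, mu_hat, s2_hat)
assert best >= log_likelihood(data, mu_hat + 0.1, s2_hat)
assert best >= log_likelihood(data, mu_hat, s2_hat + 0.1)
```

The divide-by-n variance is the point where MLE differs from the familiar unbiased estimator: maximizing likelihood trades a small bias for the exact stationary point of L.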
Advice for applying machine learning
Deciding what to try next
Machine Learning
Debugging a learning algorithm:
Suppose you have implemented regularized linear regression to predict housing prices.
However, whe
Neural Networks: Representation
Nonlinear hypotheses
Machine Learning
Nonlinear Classification
[Figure: classification data plotted against features x1 and x2.]
Housing features: size, # bedrooms, # floors, age
Andrew Ng
What is this?
You see this:
But the camera sees this:
Comput
Regularization
The problem of overfitting
Machine Learning
Example: Linear regression (housing prices)
[Figure: three plots of Price against Size.]
Overfitting: If we have too many features, the learned hypothesis
may fit the training
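The overfitting claim can be made concrete with a small sketch (the data and model choices are mine, not from the slide): a polynomial with as many parameters as training points fits the training set exactly, while a straight line leaves residual error but is far less flexible.

```python
# Overfitting sketch: a polynomial with as many parameters as training
# points fits the training set exactly; a straight line does not.
# The data below is illustrative, roughly linear with noise.

def lagrange_predict(xs, ys, x):
    """Interpolating polynomial through all (xs, ys), evaluated at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def fit_line(xs, ys):
    """Ordinary least-squares line through (xs, ys)."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    w1 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
          / sum((x - xbar) ** 2 for x in xs))
    return ybar - w1 * xbar, w1

# Noisy, roughly linear "price vs. size" training data.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.2, 1.8, 3.3, 3.7, 5.1]

train_err_poly = sum((lagrange_predict(xs, ys, x) - y) ** 2
                     for x, y in zip(xs, ys))
w0, w1 = fit_line(xs, ys)
train_err_line = sum((w0 + w1 * x - y) ** 2 for x, y in zip(xs, ys))

# The interpolant drives training error to (numerically) zero; the line
# leaves residual error -- yet the line typically generalizes better here,
# because the interpolant has also fit the noise.
print(train_err_poly, train_err_line)
```

Zero training error is therefore not evidence of a good hypothesis; it is often a symptom of exactly the problem this slide names.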
Linear Regression with multiple variables
Multiple features
Machine Learning
Multiple features (variables).

Size (feet^2)   Price ($1000)
2104            460
1416            232
1534            315
852             178

Multiple features
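As a hedged sketch, the single-feature version of this table can be fit in closed form via the normal equations; with the slide's additional features the same equations apply with a wider design matrix. The fit below is my own illustration, not part of the slide:

```python
# Fit price ≈ w0 + w1*size to the table above via the normal equations
# (X^T X) w = X^T y, where each row of X is [1, size].

sizes = [2104.0, 1416.0, 1534.0, 852.0]     # feet^2
prices = [460.0, 232.0, 315.0, 178.0]       # $1000s

n = len(sizes)
sx = sum(sizes)
sxx = sum(x * x for x in sizes)
sy = sum(prices)
sxy = sum(x * y for x, y in zip(sizes, prices))

# Solve the 2x2 system [[n, sx], [sx, sxx]] [w0, w1]^T = [sy, sxy]^T.
det = n * sxx - sx * sx
w0 = (sxx * sy - sx * sxy) / det
w1 = (n * sxy - sx * sy) / det

print(w0, w1)   # w1 > 0: larger houses predict higher prices
```

With more features (bedrooms, floors, age), X gains one column per feature and the same (X^T X) w = X^T y system is solved, just in higher dimension.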
Linear Algebra review (optional)
Matrices and vectors
Machine Learning
Matrix: Rectangular array of numbers.
Dimension of matrix: number of rows × number of columns
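A minimal illustration of the rows × columns convention (the entries below are arbitrary):

```python
# A matrix as a list of rows; its dimension is rows x columns.
A = [[1402, 191],
     [1371, 821],
     [949, 1437],
     [147, 1448]]   # illustrative entries

rows = len(A)
cols = len(A[0])
print(f"{rows} x {cols} matrix")   # 4 x 2 matrix
```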
Matrix Elements (entries of ma