6.867 Machine learning
Mid-term exam
October 8, 2003
(2 points) Your name and MIT ID:
J. J. Doe, MIT ID# 000000000
Problem 1
In this problem we use sequential active learning to estimate a linear model
y = w1 x + w0 + ε
where the input space (x values) are restricted to be within [1,
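The problem statement breaks off here. As a rough illustration of sequential active learning for this model (my own sketch with hypothetical names, not the exam's solution), one greedy strategy repeatedly queries the input whose prediction currently has the largest variance, i.e. the x maximizing phi(x)^T (X^T X)^{-1} phi(x) with phi(x) = [1, x]:

```python
import numpy as np

def select_next_input(queried_xs, candidates):
    """Pick the candidate x with the largest current predictive variance."""
    X = np.array([[1.0, x] for x in queried_xs])
    A_inv = np.linalg.inv(X.T @ X + 1e-8 * np.eye(2))  # tiny ridge for stability
    scores = [np.array([1.0, x]) @ A_inv @ np.array([1.0, x]) for x in candidates]
    return candidates[int(np.argmax(scores))]

candidates = np.linspace(-1.0, 1.0, 201)   # discretized input space
queried = [-1.0, 1.0]                       # start with the two extremes
for _ in range(4):
    queried.append(select_next_input(queried, candidates))

print(queried)  # the endpoints are the most informative inputs for a line
```

For a linear model the chosen points pile up at the boundary of the input range, which matches the usual intuition that the extremes pin down a line best.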
6.867 Machine learning
Mid-term exam
October 18, 2006
(2 points) Your name and MIT ID:
Cite as: Tommi Jaakkola, course materials for 6.867 Machine Learning, Fall 2006. MIT OpenCourseWare
(http://ocw.mit.edu/), Massachusetts Institute of Technology.
6.867 Machine learning
Final exam
December 3, 2004
Your name and MIT ID:
J. D. 00000000
(Optional) The grade you would give to yourself + a brief justification.
A. why not?
6.867 Machine learning
Mid-term exam
October 13, 2004
(2 points) Your name and MIT ID:
T. Assistant, 968672004
Problem 1
[Figure: three plots, labeled A, B, and C, of prediction error; only the 0/1 axis tick labels survived extraction.]
1. (6 points) Each plot above claims to represent prediction errors as a function of
6.867 Machine learning
Mid-term exam
October 15, 2003
(2 points) Your name and MIT ID:
SOLUTIONS
Problem 1
Suppose we are trying to solve an active learning problem, where the possible inputs you
can select form a discrete set. Specifically, we have a set
6.867 Machine learning
Mid-term exam
October 13, 2006
(2 points) Your name and MIT ID:
Problem 1
Suppose we are trying to solve an active learning problem, where the possible inputs you
can select form a discrete set. Specifically, we have a set of N unlab
6.867 Machine learning
Mid-term exam
October 22, 2002
(2 points) Your name and MIT ID:
Problem 1
We are interested here in a particular 1-dimensional linear regression problem. The dataset
corresponding to this problem has n examples (x1 , y1 ), . . . ,
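The fragment cuts off before the question, but for a 1-dimensional linear regression dataset (x1, y1), ..., (xn, yn) the least-squares fit has the standard closed form, sketched below (names are mine): w1 = Cov(x, y) / Var(x) and w0 = mean(y) - w1 * mean(x).

```python
import numpy as np

def fit_1d_least_squares(x, y):
    """Ordinary least squares for y = w1*x + w0 in one dimension."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    w0 = y.mean() - w1 * x.mean()
    return w1, w0

x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]          # exactly y = 2x + 1
w1, w0 = fit_1d_least_squares(x, y)
print(w1, w0)                      # 2.0 1.0
```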
6.867 Machine learning
Mid-term exam
October 8, 2003
(2 points) Your name and MIT ID:
Problem 1
We are interested here in a particular 1-dimensional linear regression problem. The dataset
corresponding to this problem has n examples (x1 , y1 ), . . . , (
Machine Learning Library Reference
Boca Raton Documentation Team
Copyright 2013 HPCC Systems. All rights reserved
We welcome your comments and feedback abo
Outline: Basis Expansion and Regularization
Piece-wise Polynomials and Splines
Wavelet Smoothing
Smoothing Splines
Automatic Selection of the Smoothing Parameters
Nonparametric Logistic Regression
Multidimensional Splines
Prof. Liqing Zhang, Dept. Com
Outline: Linear Classifiers
Linear Regression
Linear and Quadratic Discriminant Functions
Reduced Rank Linear Discriminant Analysis
Logistic Regression
Separating Hyperplanes
Dept. Computer Science & Engineering, Shanghai Jiao Tong University
2016/12/12
Outline: Linear Methods for Regression
The simple linear regression model
Multiple linear regression
Model selection and shrinkage: the state of the art
Dept. Computer Science & Engineering, Shanghai Jiao Tong University
2016/12/12
Outline: Overview of Supervised Learning
Linear Regression and Nearest Neighbors method
Statistical Decision Theory
Local Methods in High Dimensions
Statistical Models, Supervised Learning and Function Approximation
Structured Regression Models
Class
The Elements of Statistical Learning, Chapter 4
[email protected]
January 6, 2011
Ex. 4.1 Show how to solve the generalized eigenvalue problem $\max_a a^T B a$ subject to $a^T W a = 1$ by
transforming it to a standard eigenvalue problem.
Answer W is the common covariance matri
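The answer is truncated, but the transformation the exercise asks for can be sketched numerically (names are mine, not the text's): write W = C C^T by Cholesky and substitute b = C^T a, which turns the problem into maximizing b^T (C^{-1} B C^{-T}) b subject to b^T b = 1, a standard symmetric eigenproblem solved by the top eigenvector.

```python
import numpy as np

def generalized_top_eigvec(B, W):
    """Maximize a^T B a subject to a^T W a = 1 via a Cholesky transform."""
    C = np.linalg.cholesky(W)            # W = C C^T, W symmetric positive definite
    C_inv = np.linalg.inv(C)
    M = C_inv @ B @ C_inv.T              # standard symmetric eigenproblem
    vals, vecs = np.linalg.eigh(M)
    b = vecs[:, -1]                      # unit eigenvector of the largest eigenvalue
    a = np.linalg.solve(C.T, b)          # map back: a = C^{-T} b
    return vals[-1], a

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
B = X.T @ X                              # symmetric PSD
W = np.eye(3) + 0.1 * np.ones((3, 3))    # symmetric PD
val, a = generalized_top_eigvec(B, W)
# a satisfies the constraint and the stationarity condition B a = val * W a
print(round(float(a @ W @ a), 6), bool(np.allclose(B @ a, val * W @ a)))
```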
The Elements of Statistical Learning, Chapter 5
[email protected]
January 6, 2011
Ex. 5.9 Derive the Reinsch form $S_\lambda = (I + \lambda K)^{-1}$ for the smoothing spline.
Answer Let $K = (N^T)^{-1} \Omega_N N^{-1}$, so $K$ does not depend on $\lambda$, and we have $\Omega_N = N^T K N$ and
\[
S_\lambda = N (N^T N + \lambda \Omega_N)^{-1} N^T
          = N \big( N^T (I + \lambda K) N \big)^{-1} N^T
          = N N^{-1} (I + \lambda K)^{-1} (N^T)^{-1} N^T
          = (I + \lambda K)^{-1}.
\]
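The Reinsch identity is easy to sanity-check numerically (my own sketch; matrix names follow the exercise): with an invertible basis matrix N and penalty Omega = N^T K N, the smoother N (N^T N + lam * Omega)^{-1} N^T should equal (I + lam * K)^{-1}.

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 6, 0.3
N = rng.standard_normal((n, n)) + n * np.eye(n)   # comfortably invertible basis matrix
K0 = rng.standard_normal((n, n))
K = K0 @ K0.T                                      # symmetric PSD "penalty" matrix
Omega = N.T @ K @ N                                # so K = (N^T)^{-1} Omega N^{-1}

S_basis = N @ np.linalg.inv(N.T @ N + lam * Omega) @ N.T
S_reinsch = np.linalg.inv(np.eye(n) + lam * K)
print(np.allclose(S_basis, S_reinsch))             # True
```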
The Elements of Statistical Learning, Chapter 6
[email protected]
January 6, 2011
Ex. 6.2 Show that $\sum_{i=1}^{N} (x_i - x_0)\, l_i(x_0) = 0$ for local linear regression. Define $b_j(x_0) = \sum_{i=1}^{N} (x_i - x_0)^j\, l_i(x_0)$. Show that $b_0(x_0) = 1$ for local polynomial regression of an
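Both identities can be checked numerically (a sketch; function names and the Gaussian kernel choice are mine). The equivalent-kernel weights of local linear regression are $l(x_0)^T = b(x_0)^T (B^T W B)^{-1} B^T W$ with $b(x_0) = (1, x_0)$, and they reproduce constants and the linear term exactly:

```python
import numpy as np

def local_linear_weights(x, x0, bandwidth=0.5):
    """Equivalent-kernel weights l_i(x0) of local linear regression."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)   # Gaussian kernel weights
    B = np.column_stack([np.ones_like(x), x])        # local basis (1, x)
    W = np.diag(w)
    return np.array([1.0, x0]) @ np.linalg.solve(B.T @ W @ B, B.T @ W)

x = np.linspace(0.0, 1.0, 11)
l = local_linear_weights(x, x0=0.3)
# b_0(x0) = sum_i l_i(x0) = 1 and sum_i (x_i - x0) l_i(x0) = 0
print(np.isclose(l.sum(), 1.0), np.isclose(((x - 0.3) * l).sum(), 0.0))  # True True
```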
The Elements of Statistical Learning, Chapter 2
[email protected]
January 3, 2011
Ex. 2.1 Suppose each of K-classes has an associated target tk , which is a vector of all zeros, except
a one in the kth position. Show that classifying to the largest element of y a
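The exercise statement is truncated, but the claim it leads to is easy to illustrate (my sketch): with one-hot targets $t_k$, $\|\hat{y} - t_k\|^2 = \|\hat{y}\|^2 - 2\hat{y}_k + 1$, so the closest target is the one indexed by the largest element of $\hat{y}$.

```python
import numpy as np

rng = np.random.default_rng(2)
K = 4
targets = np.eye(K)                      # t_k = k-th standard basis vector
for _ in range(100):
    yhat = rng.standard_normal(K)
    closest = np.argmin([np.sum((yhat - t) ** 2) for t in targets])
    assert closest == np.argmax(yhat)    # nearest target <=> largest element
print("argmin distance == argmax element for all trials")
```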
The Elements of Statistical Learning, Chapter 3
[email protected]
January 4, 2011
Ex. 3.5 Consider the ridge regression problem (3.41). Show that this problem is equivalent to the
problem
\[
\hat{\beta}^c = \arg\min_{\beta^c} \left\{ \sum_{i=1}^{N} \left[ y_i - \beta_0^c - \sum_{j=1}^{p} (x_{ij} - \bar{x}_j)\, \beta_j^c \right]^2 + \lambda \sum_{j=1}^{p} (\beta_j^c)^2 \right\}
\]
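The claimed equivalence between the centered and uncentered ridge problems can be verified numerically (a sketch; the data and variable names are mine): the slopes agree, and $\beta_0 = \bar{y} - \sum_j \bar{x}_j \beta_j^c$ recovers the intercept.

```python
import numpy as np

rng = np.random.default_rng(3)
N, p, lam = 50, 3, 2.0
X = rng.standard_normal((N, p)) + 5.0          # deliberately uncentered inputs
y = X @ np.array([1.0, -2.0, 0.5]) + 3.0 + 0.1 * rng.standard_normal(N)

# centered form: ridge on centered X and y, intercept handled separately
Xc = X - X.mean(axis=0)
yc = y - y.mean()
beta_c = np.linalg.solve(Xc.T @ Xc + lam * np.eye(p), Xc.T @ yc)
beta0 = y.mean() - X.mean(axis=0) @ beta_c

# direct form: intercept column included, but only the slopes penalized
Xa = np.column_stack([np.ones(N), X])
P = np.diag([0.0] + [1.0] * p)                 # no penalty on the intercept
beta_full = np.linalg.solve(Xa.T @ Xa + lam * P, Xa.T @ y)

print(np.allclose(beta_full[0], beta0), np.allclose(beta_full[1:], beta_c))
```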
6.867 Machine learning
Final exam (Fall 2003)
December 14, 2003
Problem 1: your information
1.1. Your name and MIT ID:
1.2. The grade you would give to yourself + brief justification (if you feel that
there's no question your grade should be an A, then just
6.867 Machine learning and neural networks
FALL 2001 Final exam
December 11, 2001
(2 points) Your name and MIT ID #:
(4 points) The grade you would give to yourself + brief justification. If you
feel that there's no question that your grade should be A (an
6.867 Machine learning
Final exam
December 5, 2002
(2 points) Your name and MIT ID:
(4 points) The grade you would give to yourself + a brief justification:
Problem 1
We wish to estimate a mixture of two experts model for the data displayed in Figure 1.
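Figure 1 is not recoverable here. As a simplified sketch of the estimation idea, the EM loop below fits a mixture of two linear regression experts with an input-independent gate (a full mixture of experts would make the gate depend on x as well); the data, initialization, and names are all my own illustration, not the exam's.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = rng.uniform(-1, 1, n)
z = rng.random(n) < 0.5
y = np.where(z, 2 * x + 1, -x) + 0.05 * rng.standard_normal(n)  # two linear regimes

X = np.column_stack([np.ones(n), x])
w = [np.array([0.5, 1.0]), np.array([0.0, -0.5])]   # initial expert weights
pi, sigma2 = np.array([0.5, 0.5]), 0.25

for _ in range(50):
    # E-step: posterior responsibility of each expert for each point
    dens = np.stack([pi[k] * np.exp(-(y - X @ w[k]) ** 2 / (2 * sigma2))
                     for k in range(2)])
    r = dens / dens.sum(axis=0)
    # M-step: weighted least squares per expert, then gate and noise updates
    for k in range(2):
        W = np.diag(r[k])
        w[k] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    pi = r.sum(axis=1) / n
    sigma2 = (r * (y - np.stack([X @ w[k] for k in range(2)])) ** 2).sum() / n

for wk in w:
    print(np.round(wk, 2))   # recovered (intercept, slope) per expert
```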
Clustering: unsupervised learning introduction
Machine Learning, Andrew Ng
Supervised learning. Training set: [labeled examples; slide figure not recovered]
Unsupervised learning. Training set: [unlabeled examples; slide figure not recovered]
Applications of clustering
Market s
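The slide fragment introduces clustering as unsupervised learning. A minimal sketch of k-means (Lloyd's algorithm), the first clustering method these lectures develop, is below; the function and data here are my own illustration.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: alternate assignment and mean-update steps."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: nearest center for every point
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        # update step: each center moves to the mean of its assigned points
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centers, labels

# two well-separated blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(3, 0.1, (50, 2))])
centers, labels = kmeans(X, k=2)
print(np.round(sorted(centers[:, 0]), 1))   # one center near 0, one near 3
```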