CSC321 Winter 2015 - Assignment 2
Convolutional Neural Nets
Due Date: Tuesday March 10, 2015 (at the start of the class).
TA: Ryan Kiros ([email protected])
In this assignment we apply neural ne
from pylab import *
from numpy import *
import matplotlib.pyplot as plt
import matplotlib.cbook as cbook
import time
from scipy.misc import imread
from scipy.misc import imresize
import matplotlib.ima
probability and matrix decomposition tutorial
Paul Vicol
February 7, 2017
CSC 321, University of Toronto
1
tutorial outline
1. Review of Probability
2. Expectation and Variance
3. Matrix Terminolog
4.
import os
import pandas
from matplotlib.pyplot import *
from numpy import *
from numpy.linalg import norm
os.chdir("/home/guerzhoy/Desktop/CSC321/tutorials/tut1")
def f(x, y, theta):
    x = vstack( (ones
Restricted Boltzmann Machines
http://deeplearning4j.org/rbm-mnist-tutorial.html
Slides from Hugo Larochelle, CSC321: Intro to Machine Learning and Neural Networks, Winter 2016
Geoffrey Hinton, and Yosh
Autoencoders/Diabolo networks
Slides from Hugo Larochelle, CSC321: Intro to Machine Learning and Neural Networks, Winter 2016
Geoffrey Hinton, and Yoshua Bengio
Michael Guerzhoy
Goal
Want to obtain g
from pylab import *
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cbook as cbook
import time
from scipy.misc import imread
from scipy.misc import imresize
import matplotlib.imag
CSC321 Winter 2017
Homework 3
Homework 3 Solutions
1. Hard-Coding a Network. The idea is that each of the hidden units in the first layer will
respond to a violation of one of the inequalities. The ou
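The idea of hidden units that each fire on a violated inequality can be sketched numerically. This is a minimal illustration, not the assigned network: the region (a triangle defined by x1 >= 0, x2 >= 0, x1 + x2 <= 1), the weights, and the hard-threshold activation are all assumed for the example.

```python
import numpy as np

def step(z):
    """Hard threshold: 1 where z > 0, else 0."""
    return (z > 0).astype(float)

# Rows of W encode the violated form of each inequality:
#   -x1 > 0,  -x2 > 0,  x1 + x2 - 1 > 0
W = np.array([[-1.,  0.],
              [ 0., -1.],
              [ 1.,  1.]])
b = np.array([0., 0., -1.])

def in_region(x):
    h = step(W @ x + b)          # h[i] = 1 iff inequality i is violated
    return step(0.5 - h.sum())   # output 1 iff no hidden unit fired

print(in_region(np.array([0.2, 0.2])))  # inside the triangle  -> 1.0
print(in_region(np.array([0.8, 0.8])))  # outside the triangle -> 0.0
```

Each hidden unit detects exactly one violation, so the output unit only needs to check that the total number of violations is zero.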
CSC320: Assignment # 1
Due on Friday, January 1, 2016
Firstname Lastname
January 24, 2015
1
Firstname Lastname
CSC320 (L0101): Assignment # 1
PROBLEM 0
Problem 0
Dataset description
The dataset consi
CSC321 Winter 2017
Homework 5
Homework 5 Solutions
1. Regularized Linear Regression
(a) Find an expression for the weight w which minimizes the L2-regularized loss:
$E_{L2} = E + \frac{\lambda}{2}\|w\|^2$
We have to minimi
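The derivation can be checked numerically. This sketch assumes the common scaling $E = \frac{1}{2}\|Xw - y\|^2$ (the exact form of E in the homework may differ); setting the gradient $X^T(Xw - y) + \lambda w$ to zero gives $w = (X^T X + \lambda I)^{-1} X^T y$.

```python
import numpy as np

# Verify that the closed-form w makes the gradient of the
# L2-regularized loss vanish (E and lam scaling assumed as above).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
y = rng.standard_normal(50)
lam = 0.1

w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

grad = X.T @ (X @ w - y) + lam * w   # should be ~0 at the minimizer
print(np.abs(grad).max())
```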
Copyright Policy
All content included on the Site or third-party platforms as part of the class, such
as text, graphics, logos, button icons, images, audio clips, video clips, live
streams, digital do
A Brief Intro to Bayesian Inference
Thomas Bayes (c. 1701-1761)
CSC321: Intro to Machine Learning and Neural Networks, Winter 2016
Michael Guerzhoy
Tossing a Coin
Suppose the coin came up Heads 65 ti
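The coin-tossing posterior can be sketched with a conjugate Beta prior. The total number of tosses is not given in the excerpt, so the counts below (65 heads, 35 tails) are illustrative assumptions; with a uniform Beta(1, 1) prior, the posterior over the heads-probability is Beta(1 + heads, 1 + tails).

```python
# Bayesian inference for a coin's heads-probability theta,
# with assumed counts and a uniform Beta(1, 1) prior.
heads, tails = 65, 35
a, b = 1 + heads, 1 + tails   # Beta posterior parameters

post_mean = a / (a + b)            # posterior mean: 66/102
post_map = (a - 1) / (a + b - 2)   # MAP estimate: 65/100 = 0.65
print(post_mean, post_map)
```

With a flat prior the MAP estimate coincides with the maximum-likelihood frequency, while the posterior mean is pulled slightly toward 1/2.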
import os
import pandas
from matplotlib.pyplot import *
from numpy import *
from numpy.linalg import norm
os.chdir("/home/guerzhoy/Desktop/Link to CSC321/tutorials/tut1")
def f(x, y, theta):
    x = vstac
Preventing Overfitting in Neural Networks
John Klossner, The New Yorker
CSC321: Intro to Machine Learning and Neural Networks, Winter 2016
Slides from Geoffrey Hinton
Michael Guerzhoy
Overfitting
The
Multilayer Neural Network for
Classification (to be modified later)
[Slide figure: diagram of a multilayer network. The output for class i is large if the probability that the correct class is i is high; the targets are encoded using one-hot encoding; a hidden layer is shown.]
Solutions
1. Regularized linear regression.
(a) [3 pts] The gradient descent update rules for the regularized cost
function $E_{reg}$ will be of the form:
$w_j \leftarrow w_j - \alpha \frac{\partial E_{reg}}{\partial w_j}$
$b \leftarrow b - \alpha \frac{\partial E_{reg}}{\partial b}$
Now for the weights w_j we
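The update rules above can be sketched as a training loop. The cost scaling here, $E_{reg} = \frac{1}{2N}\sum_i (w^T x_i + b - y_i)^2 + \frac{\lambda}{2}\|w\|^2$, and all data are assumptions for the example; note the bias is conventionally not regularized.

```python
import numpy as np

# Gradient descent on an assumed L2-regularized squared-error cost.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 2))
y = X @ np.array([2.0, -1.0]) + 0.5   # true weights [2, -1], bias 0.5

w, b = np.zeros(2), 0.0
alpha, lam = 0.1, 0.01
N = len(y)
for _ in range(500):
    r = X @ w + b - y                     # residuals
    w -= alpha * (X.T @ r / N + lam * w)  # weight update (regularized)
    b -= alpha * r.mean()                 # bias update (not regularized)

print(w, b)  # near [2, -1] and 0.5, with w shrunk slightly by lam
```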
Tutorial 1. Linear Regression
January 11, 2017
1
Tutorial: Linear Regression
Agenda:
1. Spyder interface
2. Linear regression running example: Boston data
3. Vectorize cost function
4. Closed form sol
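Agenda items 3 and 4 can be sketched together. This uses synthetic data rather than the Boston set, and the cost scaling is an assumption; with a column of ones appended for the bias, the closed-form solution is $\theta = (X^T X)^{-1} X^T y$.

```python
import numpy as np

# Vectorized cost and closed-form (normal-equations) solution
# for 1-D linear regression on synthetic data.
rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=200)
y = 3.0 * x + 4.0 + 0.1 * rng.standard_normal(200)

X = np.column_stack([np.ones_like(x), x])   # design matrix with bias column

def cost(theta, X, y):
    r = X @ theta - y
    return r @ r / (2 * len(y))             # vectorized squared-error cost

theta = np.linalg.solve(X.T @ X, X.T @ y)   # closed-form solution
print(theta, cost(theta, X, y))             # theta near [4, 3]
```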
MixtureModel
March 24, 2017
1
Tutorial: Mixture Model
Agenda:
1. Multivariate Gaussian
2. Maximum Likelihood estimation of the mean parameter
3. Bayesian estimation of the mean parameter
4. Expectatio
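Items 2 and 3 of the agenda can be sketched for a 1-D Gaussian with known variance. All numbers here are illustrative assumptions: with a conjugate prior $\mu \sim N(\mu_0, s_0^2)$, the ML estimate is the sample average and the posterior mean shrinks it toward $\mu_0$.

```python
import numpy as np

# ML vs. Bayesian estimation of a Gaussian mean (known variance).
rng = np.random.default_rng(3)
sigma, mu0, s0 = 1.0, 0.0, 1.0          # assumed noise level and prior
x = rng.normal(2.0, sigma, size=50)     # data with true mean 2.0

mu_ml = x.mean()                         # maximum-likelihood estimate
n = len(x)
# Posterior mean for the conjugate normal prior:
mu_post = (mu0 / s0**2 + x.sum() / sigma**2) / (1 / s0**2 + n / sigma**2)
print(mu_ml, mu_post)   # mu_post lies between mu0 and mu_ml
```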
CSC 321 H1S
Gradients and Steepest Ascent
Winter 2016
Suppose we have a function f of two variables. At the point (x0, y0), what is the vector that points in
the direction of steepest ascent?
As dis
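The answer (the gradient) can be verified numerically: among all unit directions, the finite-difference increase of f is largest along grad f. The function f below is an illustrative choice, not one from the handout.

```python
import numpy as np

# Check that the gradient is the direction of steepest ascent
# for f(x, y) = x^2 + 3xy at (1, 2).
def f(x, y):
    return x**2 + 3 * x * y

x0, y0 = 1.0, 2.0
grad = np.array([2 * x0 + 3 * y0, 3 * x0])   # analytic gradient: (8, 3)

eps = 1e-4
best = None
for ang in np.linspace(0, 2 * np.pi, 1000):
    d = np.array([np.cos(ang), np.sin(ang)])  # unit direction
    gain = f(x0 + eps * d[0], y0 + eps * d[1]) - f(x0, y0)
    if best is None or gain > best[0]:
        best = (gain, d)

unit_grad = grad / np.linalg.norm(grad)
print(best[1], unit_grad)   # nearly the same direction
```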
CSC 321 H1S
Tutorial 1 outline (Last update: January 23, 2016)
Winter 2016
Objective function for linear regression:
$C(\theta; x, y) = \sum_i \left(y^{(i)} - \theta^T x^{(i)}\right)^2.$
Have the students work on obtaining the gradien
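The gradient the students are asked to derive is $\nabla_\theta C = -2 X^T (y - X\theta)$; a quick sketch checks it against finite differences on assumed random data.

```python
import numpy as np

# Analytic gradient of C(theta) = sum_i (y_i - theta^T x_i)^2,
# checked by central finite differences.
rng = np.random.default_rng(4)
X = rng.standard_normal((30, 3))
y = rng.standard_normal(30)
theta = rng.standard_normal(3)

def C(theta):
    r = y - X @ theta
    return r @ r

grad = -2 * X.T @ (y - X @ theta)

h = 1e-6
num = np.array([(C(theta + h * e) - C(theta - h * e)) / (2 * h)
                for e in np.eye(3)])
print(np.abs(grad - num).max())   # agreement to roundoff
```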
CSC 321 H1S
Metropolis Algorithm Tutorial (Last update: April 8, 2016)
Winter 2016
def p_star(w, x, y, sigma, sigma_w):
    loglik = sum(-.5*log(2*pi*sigma**2) - (dot(w, x)-y)**2/(2
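A minimal Metropolis sampler built around a log-density like p_star can be sketched as follows; here the target is an assumed 1-D standard normal rather than the tutorial's posterior over w.

```python
import numpy as np

# Minimal Metropolis algorithm: symmetric Gaussian proposal,
# accept with probability min(1, p(prop)/p(current)).
rng = np.random.default_rng(5)

def log_p(w):
    return -0.5 * w**2   # unnormalized log-density of N(0, 1)

w = 0.0
samples = []
for _ in range(20000):
    prop = w + rng.normal(0, 1.0)                    # symmetric proposal
    if np.log(rng.uniform()) < log_p(prop) - log_p(w):
        w = prop                                     # accept; else keep w
    samples.append(w)

samples = np.array(samples[5000:])   # drop burn-in
print(samples.mean(), samples.std())  # near 0 and 1
```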
Welcome to CSC321
Vincent Van Gogh, The Starry Night (1889)
http://clowndotpy.com/deepdream/
CSC321: Intro to Machine Learning and Neural Networks, Winter 2016
Many slides from Daniel Sheldon
Michael G
Training RBMs
http://deeplearning4j.org/rbm-mnist-tutorial.html
Slides from Hugo Larochelle, CSC321: Intro to Machine Learning and Neural Networks, Winter 2016
Geoffrey Hinton, and Yoshua
Michael Guerz
CSC321 Winter 2017
Homework 8
Homework 8 Solutions
Deadline: Wednesday, March 29, at 11:59pm.
1. Categorical Distribution.
(a) As maximizing the probability $p(X; \theta)$ is equivalent to maximizing $\log p(X;$
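The result being derived can be sketched numerically: maximizing $\log p(X; \theta)$ for a categorical distribution gives $\theta_k = N_k / N$, the empirical frequency of category k. The data below are illustrative.

```python
import numpy as np

# ML estimate of categorical parameters = category frequencies.
x = np.array([0, 2, 2, 1, 0, 2, 1, 2])   # observed categories
K = 3
counts = np.bincount(x, minlength=K)     # N_k for each category
theta_ml = counts / len(x)
print(theta_ml)   # [0.25, 0.25, 0.5]
```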
Solutions to Tutorial3 Question 2
Calculate backprop equations:
Network architecture:
z = Wx
h = ReLU(z)
$y = v^T h$
$E = \frac{1}{2}(y - t)^2$
Solution:
Backprop equations in scalar form with indices:
$\bar{E} = 1$
$\bar{y} = \bar{E}($
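The backprop equations for this network can be written in vectorized form and checked against finite differences; the shapes and values below are illustrative assumptions.

```python
import numpy as np

# Forward and backward pass for z = Wx, h = ReLU(z), y = v^T h,
# E = (1/2)(y - t)^2, with a finite-difference check on one weight.
rng = np.random.default_rng(6)
W = rng.standard_normal((4, 3))
v = rng.standard_normal(4)
x = rng.standard_normal(3)
t = 0.7

# Forward pass
z = W @ x
h = np.maximum(z, 0.0)       # ReLU
y = v @ h
E = 0.5 * (y - t)**2

# Backward pass (bar = error signal dE/d·)
y_bar = y - t                # dE/dy
v_bar = y_bar * h            # dE/dv
h_bar = y_bar * v            # dE/dh
z_bar = h_bar * (z > 0)      # ReLU gate
W_bar = np.outer(z_bar, x)   # dE/dW

# Finite-difference check on W[1, 2]
eps = 1e-6
W2 = W.copy(); W2[1, 2] += eps
E2 = 0.5 * (v @ np.maximum(W2 @ x, 0.0) - t)**2
print((E2 - E) / eps, W_bar[1, 2])   # nearly equal
```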
Introduction to Convolutional Networks
Slides from Geoffrey Hinton, Alyosha Efros,
Andrej Karpathy
CSC321: Intro to Machine Learning and Neural Networks, Winter 2016
Michael Guerzhoy
Computing Feature
Midterm for CSC321, Intro to Neural Networks
Winter 2017, afternoon section
Tuesday, Feb. 28, 1:10-2pm
Name:
Student number:
This is a closed-book test. It is marked out of 15. Please answer
ALL