E6892 Bayesian Models for Machine Learning
Columbia University, Fall 2015
Lecture 6, 10/15/2015
Instructor: John Paisley
Variational inference review (simple notation)
For this quick review, we compress the model into a simple notation. We have data X generated
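The setup is cut off here, but as an illustration of the quantity variational inference optimizes, the following is a minimal sketch of a Monte Carlo estimate of the variational objective (ELBO) for a toy conjugate model; the Gaussian model, variable names, and sample sizes below are assumptions for illustration, not values from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=50)   # toy data (assumed model, not from the notes)

def elbo(m, s, n_samples=20000):
    """Monte Carlo estimate of L = E_q[ln p(x, theta)] - E_q[ln q(theta)]
    for theta ~ N(0, 1), x_i | theta ~ N(theta, 1), and q(theta) = N(m, s^2)."""
    theta = rng.normal(m, s, size=n_samples)
    log_prior = -0.5 * theta**2 - 0.5 * np.log(2 * np.pi)
    log_lik = (-0.5 * ((x[None, :] - theta[:, None]) ** 2).sum(axis=1)
               - 0.5 * len(x) * np.log(2 * np.pi))
    log_q = -0.5 * ((theta - m) / s) ** 2 - np.log(s) - 0.5 * np.log(2 * np.pi)
    return np.mean(log_prior + log_lik - log_q)

# This toy model is conjugate, so the exact posterior N(sum(x)/(n+1), 1/(n+1))
# is available in closed form; the ELBO is maximized when q equals it.
n = len(x)
m_star, s_star = x.sum() / (n + 1), np.sqrt(1.0 / (n + 1))
```

Since the ELBO equals the log evidence minus KL(q || posterior), evaluating it at the exact posterior gives a larger value than at any other q, which is a quick sanity check on the estimator.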

Lecture 7, 10/30/2015
Let's look at another example of a standard model that is easily learned with variational inference.
Latent Dirichlet allocation (LDA)
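The section is truncated here, but LDA's generative process can be sketched directly. The hyperparameters, vocabulary size, and word counts below are illustrative assumptions, not values from the lecture:

```python
import numpy as np

rng = np.random.default_rng(1)
K, V, n_words = 3, 8, 20          # topics, vocabulary size, words per document (made up)
alpha, eta = 0.5, 0.3             # Dirichlet hyperparameters (made up)

# topic-word distributions: beta_k ~ Dirichlet(eta, ..., eta)
beta = rng.dirichlet(np.full(V, eta), size=K)

def generate_doc():
    theta = rng.dirichlet(np.full(K, alpha))     # topic proportions for this document
    words = []
    for _ in range(n_words):
        z = rng.choice(K, p=theta)               # draw a topic indicator
        words.append(rng.choice(V, p=beta[z]))   # draw a word from that topic
    return words

doc = generate_doc()
```

Each document gets its own Dirichlet-distributed topic proportions, while the topic-word distributions are shared across the corpus; inference reverses this process to recover theta and beta from the observed words.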

Lecture 9, 11/12/2015
Clustering with the Gaussian mixture model (GMM)
We next look at a fundamental problem in machine learning: clustering data. There are
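As a concrete companion to this section, here is a minimal EM sketch for a two-component 1-D GMM. This is the standard maximum-likelihood EM recursion rather than necessarily the inference scheme this lecture develops, and the toy data and initializations are chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
# toy 1-D data from two well-separated Gaussians (illustrative, not from the notes)
x = np.concatenate([rng.normal(-3, 1, 100), rng.normal(3, 1, 100)])

K = 2
pi = np.full(K, 1.0 / K)          # mixing weights
mu = np.array([-1.0, 1.0])        # component means (arbitrary distinct init)
var = np.ones(K)                  # component variances

for _ in range(50):
    # E-step: responsibilities r[i, k] proportional to pi_k * N(x_i | mu_k, var_k)
    log_r = (np.log(pi) - 0.5 * np.log(2 * np.pi * var)
             - 0.5 * (x[:, None] - mu) ** 2 / var)
    r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # M-step: weighted maximum-likelihood updates
    Nk = r.sum(axis=0)
    pi = Nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
```

On well-separated data the learned means land near the true component means, and the responsibilities r give the soft cluster assignments.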

Lecture 12, 12/10/2015
Non-negative matrix factorization
Goal: We have an M × N data matrix X where Xij ≥ 0. We want to approximate this with a
product of two nonnegative matrices.
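The factorization goal above can be illustrated with the classic multiplicative-update algorithm of Lee and Seung for the squared Euclidean objective; this is a standard non-Bayesian baseline, not necessarily the method this lecture develops, and the sizes and iteration counts are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, K = 20, 30, 5
X = rng.random((M, K)) @ rng.random((K, N))   # nonnegative matrix with an exact rank-K factorization

W = rng.random((M, K)) + 0.1                  # nonnegative initializations
H = rng.random((K, N)) + 0.1
eps = 1e-12                                   # guards against division by zero

for _ in range(1000):
    # Lee-Seung multiplicative updates: keep W and H nonnegative while
    # monotonically decreasing ||X - WH||_F^2
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

Because the updates only ever multiply by nonnegative ratios, W and H stay elementwise nonnegative throughout, which is the defining constraint of NMF.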

Lecture 10, 11/19/2015
Mixture models with Dirichlet priors
Review: We have data X = {x1, . . . , xn}. We will assume x ∈ Rd, but the following discussion
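To make the setup concrete, here is a sketch of the generative side of a K-component Gaussian mixture with a Dirichlet prior on the mixing weights; the dimensions, K, and the hyperparameter alpha below are made-up illustration values, and the unit-variance Gaussian components are an assumed likelihood:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, K = 200, 2, 4
alpha = 1.0

pi = rng.dirichlet(np.full(K, alpha / K))     # mixing weights ~ Dirichlet(alpha/K, ..., alpha/K)
mu = rng.normal(0, 5, size=(K, d))            # component means (illustrative prior draw)
c = rng.choice(K, size=n, p=pi)               # cluster indicator c_i ~ Discrete(pi)
X = mu[c] + rng.normal(0, 1, size=(n, d))     # x_i ~ N(mu_{c_i}, I)
```

Inference then reverses this process: given X, we infer the posterior over the indicators c, the weights pi, and the component parameters.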