...isotropic Gaussian noise in observed dimensions; Probabilistic PCA: constrained covariance; Factor Analysis

Relation to Neural Networks
•  PCA is closely related to a particular form of neural network
•  An autoencoder is a neural network whose outputs are its own inputs
•  The goal is to minimize reconstruction error

Autoencoders
•  Define: $z = g(Wx)$, $\hat{x} = g(Vz)$
•  Goal: minimize $\frac{1}{2N} \sum_{n=1}^{N} \| x_n - \hat{x}_n \|^2$
•  If $g$ is linear: minimize $\frac{1}{2N} \sum_{n=1}^{N} \| x_n - V W x_n \|^2$
•  In other words, the optimal solution is PCA (see the linear-autoencoder sketch after these slides)

Autoencoders: Nonlinear PCA
•  What if $g(\cdot)$ is not linear?
•  Then we are basically doing nonlinear PCA (see the second sketch below)
•  There are some subtleties (see Bishop), but in general this is an accurate description

Comparing Reconstructions

Independent Components Analysis (ICA)
•  ICA is another continuous latent variable model, but it has a non-Gaussian and factorized prior on the latent variables
•  Why non-Gaussian? In PPCA and FA the form of the latent space is specified, but not the particular choice of coordinates: both are invariant to rotations in latent space
•  Note that in PCA the data distribution is uncorrelated, but not necessarily independent
•  ICA finds independent components through both assumptions: non-Gaussian and factorized (see the FastICA sketch below)

Blind Source Separation...
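The linear claim above is easy to check numerically. Below is a minimal NumPy sketch, mine rather than the course's code, that trains a linear autoencoder by gradient descent on the reconstruction loss and compares its error to the rank-k PCA reconstruction; all names (X, W, V, k) are illustrative choices.

```python
# Sketch: a linear autoencoder trained on (1/2N) * sum ||x_n - V W x_n||^2
# reaches the same reconstruction error as rank-k PCA.
import numpy as np

rng = np.random.default_rng(0)
N, d, k = 500, 10, 3                                   # samples, input dim, latent dim
X = rng.normal(size=(N, 5)) @ rng.normal(size=(5, d)) / np.sqrt(5)
X -= X.mean(axis=0)                                    # center the data, as PCA assumes

# PCA baseline: reconstruct from the top-k principal directions (rows of Vt).
Vt = np.linalg.svd(X, full_matrices=False)[2]
P = Vt[:k].T @ Vt[:k]                                  # rank-k projection matrix
pca_err = np.mean(np.sum((X - X @ P) ** 2, axis=1))

# Linear autoencoder z = W x, x_hat = V z, trained by plain gradient descent.
W = 0.1 * rng.normal(size=(k, d))
V = 0.1 * rng.normal(size=(d, k))
lr = 0.05
for _ in range(5000):
    R = X - X @ W.T @ V.T                              # residuals x_n - V W x_n, shape (N, d)
    gV = -(R.T @ (X @ W.T)) / N                        # gradient of the loss w.r.t. V
    gW = -(V.T @ R.T @ X) / N                          # gradient of the loss w.r.t. W
    V -= lr * gV
    W -= lr * gW
ae_err = np.mean(np.sum((X - X @ W.T @ V.T) ** 2, axis=1))

print(f"PCA reconstruction error:       {pca_err:.5f}")
print(f"Linear AE reconstruction error: {ae_err:.5f}")  # matches PCA up to optimization error
```

Note that gradient descent recovers the same k-dimensional principal subspace as PCA, though W and V themselves are only determined up to an invertible transformation of the latent space.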
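For the nonlinear case, here is a companion sketch (again mine, not the slides') using g = tanh on the encoder; the decoder output is kept linear, a common variant, so reconstructions are not confined to (-1, 1). Gradients are written out by hand.

```python
# Sketch: nonlinear autoencoder with z = tanh(W x), x_hat = V z,
# i.e. a basic form of "nonlinear PCA".
import numpy as np

rng = np.random.default_rng(1)
N, d, k = 500, 10, 3
X = rng.normal(size=(N, 5)) @ rng.normal(size=(5, d)) / np.sqrt(5)
X -= X.mean(axis=0)

W = 0.1 * rng.normal(size=(k, d))
V = 0.1 * rng.normal(size=(d, k))
lr = 0.02
for _ in range(5000):
    A = X @ W.T                       # pre-activations, shape (N, k)
    Z = np.tanh(A)                    # nonlinear codes z_n = g(W x_n)
    R = X - Z @ V.T                   # residuals x_n - V z_n
    dZ = -(R @ V) / N                 # backprop through the linear decoder
    dA = dZ * (1.0 - Z ** 2)          # backprop through tanh: g'(a) = 1 - tanh(a)^2
    V -= lr * (-(R.T @ Z) / N)
    W -= lr * (dA.T @ X)

loss = np.mean(np.sum((X - np.tanh(X @ W.T) @ V.T) ** 2, axis=1))
print(f"Nonlinear AE reconstruction error: {loss:.5f}")
```

On near-Gaussian data like this, the nonlinearity gains little over the linear model; the nonlinear version pays off when the data lie near a curved low-dimensional manifold.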
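ICA's use for blind source separation can be sketched with scikit-learn's FastICA (the algorithm choice is my assumption; the slides do not name one). Two independent non-Gaussian sources are mixed linearly; ICA recovers them up to sign and permutation, while PCA, which only decorrelates, does not.

```python
# Sketch: blind source separation of two non-Gaussian sources with FastICA.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                       # source 1: sinusoid
s2 = np.sign(np.sin(3 * t))              # source 2: square wave (strongly non-Gaussian)
S = np.c_[s1, s2] + 0.05 * rng.normal(size=(2000, 2))
S /= S.std(axis=0)

A = np.array([[1.0, 0.5], [0.5, 2.0]])   # the "unknown" mixing matrix
X = S @ A.T                              # observed mixtures

S_ica = FastICA(n_components=2, random_state=0).fit_transform(X)
S_pca = PCA(n_components=2).fit_transform(X)

# Each ICA component should correlate strongly with one true source (up to
# sign/permutation); PCA components remain mixtures, since decorrelation
# alone is blind to rotations of the latent space.
def best_abs_corr(est, src):
    c = np.corrcoef(est.T, src.T)[:2, 2:]
    return np.abs(c).max(axis=1)

print("ICA |corr| with true sources:", best_abs_corr(S_ica, S))
print("PCA |corr| with true sources:", best_abs_corr(S_pca, S))
```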