Adaptive Algorithms for PCA

Review of Principal Component Analysis

PCA solves the matrix equation R V = V Lambda, i.e., it diagonalizes the covariance matrix R of the input signal: Lambda = V^T R V, where the columns of V are the eigenvectors of R and Lambda is the diagonal matrix of eigenvalues. The eigenvectors act as eigenfilters and span the frequency spectrum of the input signal; this interpretation holds only when the dimensionality of the data is very high (it follows from the spectral decomposition property of eigenvectors). Being linearly independent (in fact orthonormal), the eigenvectors form an optimal basis for the signal space, which is the motivation for using them in data compression (the Karhunen-Loeve transform, KLT).

How to solve PCA using adaptive structures?

Although many numerical techniques (e.g., the SVD) solve PCA in batch mode, real-time applications require iterative algorithms that update the solution one sample of data at a time. (In MATLAB, the batch solution is a call to the eig function.)

Method I: Minimization of the mean square reconstruction error

Let the input be x in R^N. Project it onto M components, y = W^T x, and reconstruct x_hat = W y, where W is an N x M weight matrix. The desired response is the input itself, so we minimize the cost function

    J = E{ ||x - x_hat||^2 }

using a gradient method with a constraint on W. The weight matrix converges to W = V T, a rotation of the (leading) eigenvector matrix, where T is a square orthogonal matrix.
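The batch eigendecomposition described above (MATLAB's eig) has a direct NumPy analogue. A minimal sketch; the synthetic data, dimensions, and sample count are illustrative assumptions, not from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean input with correlated components (illustrative data).
n_samples, n_dims = 5000, 4
A = rng.normal(size=(n_dims, n_dims))
x = rng.normal(size=(n_samples, n_dims)) @ A.T
x -= x.mean(axis=0)

# Estimate the covariance matrix R of the input signal.
R = x.T @ x / n_samples

# Diagonalize: solve R V = V Lambda (eigh exploits the symmetry of R).
eigvals, V = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]        # sort by decreasing eigenvalue
eigvals, V = eigvals[order], V[:, order]

# In the eigenbasis the covariance is diagonal: Lambda = V^T R V.
Lambda = V.T @ R @ V
```

Keeping only the first M columns of V gives the KLT compression basis: those eigenvectors capture the largest share of the input variance.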
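The notes do not name the specific constrained gradient rule; one well-known realization of this sample-by-sample minimization is Oja's subspace rule, whose per-sample update is DeltaW = eta (x - x_hat) y^T. A sketch under that assumption; the learning rate, iteration count, and input covariance are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 4, 2                 # input dimension N, number of components M
eta = 0.005                 # learning rate (illustrative choice)

# Zero-mean input with a diagonal covariance, so the leading M
# eigenvectors are the first M coordinate axes (illustrative data).
R = np.diag([3.0, 2.0, 0.5, 0.1])
W = 0.1 * rng.normal(size=(N, M))

for _ in range(40000):
    x = rng.multivariate_normal(np.zeros(N), R)
    y = W.T @ x                        # projection:     y = W^T x
    x_hat = W @ y                      # reconstruction: x_hat = W y
    # Constrained gradient step on J = E{||x - x_hat||^2}
    # (Oja's subspace rule, assumed here):
    W += eta * np.outer(x - x_hat, y)
```

At convergence W = V T for some square orthogonal T: the columns of W are orthonormal and span the leading M-dimensional eigen-subspace, but they are not in general the individual eigenvectors, which is exactly the rotation ambiguity stated in the notes.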
This note was uploaded on 06/05/2011 for the course EEL 6502 taught by Professor Principe during the Spring '08 term at University of Florida.