Adaptive Algorithms for PCA – Part II
Recap:
- Oja's rule is the basic learning rule for PCA; it extracts the first principal component.
- A deflation procedure can be used to estimate the minor eigencomponents.
- Sanger's rule performs an on-line deflation and uses Oja's rule to estimate each eigencomponent (see the sketch after this list).

Problems with Sanger's rule:
- Strictly speaking, Sanger's rule is non-local, which makes it somewhat harder to implement in VLSI. Non-local rules are also termed biologically implausible (as engineers, we do not care much about this).
- Sanger's rule converges slowly. We will see later that many algorithms for PCA converge slowly.
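To make the update concrete, here is a minimal NumPy sketch of Sanger's rule (the generalized Hebbian algorithm). The function name, learning rates, and the toy data in the usage example are illustrative choices, not from the lecture.

```python
import numpy as np

def sanger_update(W, x, lr=0.01):
    """One on-line step of Sanger's rule (generalized Hebbian algorithm).

    W  : (m, n) weight matrix; row i estimates the i-th principal component.
    x  : (n,) zero-mean input sample.
    lr : learning rate.
    """
    y = W @ x  # network outputs
    # The lower-triangular term performs the on-line deflation:
    # component i is trained on the input minus the parts already
    # explained by components 1..i. The diagonal gives Oja's
    # normalization; with m = 1 this reduces to Oja's rule.
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# Toy usage: estimate the top-2 components of 3-D data whose
# variance is dominated by the first two axes.
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 3)) * np.array([3.0, 1.0, 0.3])
X -= X.mean(axis=0)
W = 0.1 * rng.standard_normal((2, 3))
for x in X:
    W = sanger_update(W, x, lr=0.002)
# Rows of W now align (up to sign) with the top eigenvectors of cov(X).
```

The deflation term is what makes the rule non-local: updating row i needs the outputs of all rows above it, not just the activity at that synapse.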
Other Adaptive Structures for PCA
The first step is to change the architecture of the network so that the update rules become local.

[Figure: network with input x(n), feedforward weights W, and lateral weights C]
This is the Rubner-Tavan model. The output vector y is given by

y = Wx + Cy

[Figure: two-input, two-output network with feedforward weights w1, w2 and a lateral weight c]

For the two-output network in the figure,

y1 = w1^T x
y2 = w2^T x + c y1

C is a lower triangular matrix with zero diagonal, so each output feeds only the outputs below it.
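Below is a minimal NumPy sketch of one on-line step of such a network, assuming a Hebbian update for the feedforward weights W (with explicit row normalization) and an anti-Hebbian update for the lateral weights C. The learning rates and the normalization scheme are illustrative choices, not taken from the lecture.

```python
import numpy as np

def rubner_tavan_step(W, C, x, lr_w=0.002, lr_c=0.01):
    """One on-line step of a Rubner-Tavan-style network (a sketch).

    W : (m, n) feedforward weights.
    C : (m, m) strictly lower-triangular lateral weights.
    x : (n,) zero-mean input sample.
    """
    m = W.shape[0]
    y = np.zeros(m)
    # C is strictly lower triangular, so the outputs can be computed
    # sequentially: y_i = w_i^T x + sum_{j < i} c_ij * y_j.
    for i in range(m):
        y[i] = W[i] @ x + C[i, :i] @ y[:i]
    # Hebbian update for the feedforward weights; each row is
    # renormalized to unit length to keep the weights bounded
    # (an illustrative choice; an Oja-style decay term also works).
    W += lr_w * np.outer(y, x)
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    # Anti-Hebbian update for the lateral weights drives the output
    # correlations E[y_i * y_j] toward zero; k=-1 keeps C strictly
    # lower triangular.
    C -= lr_c * np.tril(np.outer(y, y), k=-1)
    return W, C, y
```

Note that every update uses only the activities at the two ends of the corresponding connection, which is exactly what makes these rules local, in contrast to Sanger's rule.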