EE7750 MACHINE RECOGNITION OF PATTERNS Lecture 8: Principal Component Analysis
Curse of Dimensionality
For a fixed dataset size, the performance of a classifier will eventually begin to degrade as the dimensionality of the feature space increases (a numerical sketch of this effect follows).
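This peaking behavior can be seen numerically. Below is a minimal Python sketch, not taken from the lecture: a 1-NN classifier is trained on a fixed-size training set in which only the first feature carries class information, and accuracy drops toward chance as noise dimensions are added. All sample sizes and the class separation of 2.0 are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_accuracy(D, n_train=50, n_test=500):
    """1-NN accuracy for a two-class problem where only the first
    feature is informative; the other D-1 features are pure noise."""
    def sample(n):
        y = rng.integers(0, 2, size=n)
        X = rng.normal(size=(n, D))
        X[:, 0] += 2.0 * y          # class signal lives in feature 0 only
        return X, y

    Xtr, ytr = sample(n_train)
    Xte, yte = sample(n_test)
    # 1-NN: predict the label of the closest training sample (Euclidean).
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(axis=2)
    return (ytr[d2.argmin(axis=1)] == yte).mean()

for D in (1, 2, 5, 10, 50, 200):
    print(f"D = {D:3d}: 1-NN accuracy ~ {knn_accuracy(D):.2f}")
```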
Curse of Dimensionality
What are the implications of the curse of dimensionality?
- To maintain a given sampling density, the number of examples must grow exponentially with the dimensionality. For example, to have M samples per bin along each dimension, we need M^D samples in a D-dimensional space (see the sketch below).
- The complexity of the density estimate is higher in higher-dimensional spaces: more samples are needed to learn the density well.
How do we beat the curse of dimensionality?
- By incorporating prior knowledge
- By imposing smoothness on the density estimate
- By reducing dimensionality
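As a quick numerical check of the M^D claim, here is a minimal Python sketch; the choice of M = 10 samples per bin and the particular dimensions printed are arbitrary, not from the lecture:

```python
# Required sample count M**D grows exponentially with dimension D.
M = 10  # desired samples per bin along each axis (illustrative choice)
for D in (1, 2, 3, 5, 10):
    print(f"D = {D:2d}: need M**D = {M**D:,} samples")
```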
Dimensionality Reduction
How can we reduce dimensionality?
Feature extraction finds a mapping from the original high-dimensional feature vector to a lower-dimensional vector such that the transformed vector preserves (most of) the information; a minimal PCA sketch follows.
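Since this lecture's topic is Principal Component Analysis, here is a minimal NumPy sketch of PCA used as a dimensionality-reducing feature extraction. The random data, the dimensions (D = 10 down to d = 2), and the sample count are illustrative assumptions, not values from the lecture:

```python
import numpy as np

# Toy data: N samples in D dimensions (illustrative values).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))          # N = 500, D = 10

# 1. Center the data.
X_centered = X - X.mean(axis=0)

# 2. Covariance matrix and its eigendecomposition.
cov = np.cov(X_centered, rowvar=False)  # D x D covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues

# 3. Keep the d eigenvectors with the largest eigenvalues.
d = 2
W = eigvecs[:, ::-1][:, :d]             # D x d projection matrix

# 4. Project: each 10-D sample becomes a 2-D feature vector.
Y = X_centered @ W                      # N x d

print(Y.shape)                          # (500, 2)
```

The projection W maximizes the variance retained by the d-dimensional representation, which is the usual sense in which PCA "preserves most of the information."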