
EE7750 MACHINE RECOGNITION OF PATTERNS
Lecture 7: Nonparametric Density Estimation 2
k Nearest Neighbor (kNN) Approach

Fix k, and determine the minimum volume V that encloses the k nearest samples from the dataset. The density estimate at a point x is then p(x) ≈ k / (n V(x)), where n is the total number of samples.
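A minimal sketch of this estimator in Python with NumPy; the 1-D setting, the Gaussian sample, and the value k = 10 are illustrative assumptions, not values from the lecture.

import numpy as np

def knn_density(x, data, k):
    """kNN density estimate: p(x) ~ k / (n * V(x)).

    V(x) is the volume of the smallest ball centered at x that
    encloses the k nearest samples; in 1-D that ball is the
    interval [x - r, x + r], so V(x) = 2 * r.
    """
    n = len(data)
    dists = np.sort(np.abs(data - x))  # distances to all samples, ascending
    r = dists[k - 1]                   # radius reaching the k-th nearest sample
    return k / (n * 2.0 * r)           # k / (n * V)

# Illustrative usage: a standard normal sample has density ~0.399 at x = 0.
rng = np.random.default_rng(0)
data = rng.standard_normal(1000)
print(knn_density(0.0, data, k=10))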
kNN Approach

In general, the estimates obtained by the kNN approach are not very satisfactory:
- The estimates are prone to noise.
- They may have discontinuities: the estimate jumps wherever the identity of the k-th nearest neighbor changes.
- The result is not a true density: its integral over all of feature space diverges.
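To see the divergence concretely: far from the data, the radius to the k-th nearest neighbor grows like the distance |x| itself, so the estimate decays only like 1/|x|, and the integral of 1/|x| is infinite. Below is a small numeric check of this tail behavior, reusing the 1-D estimator sketched above (the sample and k are again assumptions).

import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal(1000)
k, n = 10, len(data)

for x in (10.0, 100.0, 1000.0):
    r = np.sort(np.abs(data - x))[k - 1]  # distance to the k-th neighbor
    p = k / (n * 2.0 * r)
    # p(x) * |x| settles near k / (2n): the tail decays like 1/|x|,
    # too slowly for the estimate to integrate to a finite value.
    print(f"x = {x:6.0f}   p(x) = {p:.2e}   p(x)*|x| = {p * x:.4f}")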
The value of k
K Nearest Neighbor Classification
kNN Classifier

kNN is considered a lazy learning algorithm:
- It defers data processing until it receives a request to classify a sample.
- It classifies by using its stored training data.

The opposite strategy is eager learning, which:
- Compiles the training data into a model or compressed description (such as density parameters in statistical PR, or graph structure and weights in neural PR), and then discards the training data.
- Classifies a sample based on the stored model.

During training, lazy algorithms have less computational cost. During testing, eager algorithms have less computational cost.

kNN Classifier

Large k yields smoother decision regions. If k is too large, the locality of the estimate is destroyed because examples farther from the query are taken into account.
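A minimal classifier sketch in Python with NumPy, showing the lazy strategy concretely: fit() merely memorizes the data, and all distance computation is deferred to query time. The class name, the toy data, and k = 5 are illustrative assumptions.

import numpy as np
from collections import Counter

class KNNClassifier:
    """Lazy learner: stores the training set verbatim; no model is built."""

    def __init__(self, k):
        self.k = k

    def fit(self, X, y):
        # "Training" is pure memorization -- this is what makes kNN lazy.
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y)
        return self

    def predict_one(self, x):
        # All work happens at query time: find the k nearest stored
        # samples and take a majority vote over their labels.
        dists = np.linalg.norm(self.X - x, axis=1)
        nearest = np.argsort(dists)[:self.k]
        return Counter(self.y[nearest]).most_common(1)[0][0]

# Illustrative usage on two 2-D Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
clf = KNNClassifier(k=5).fit(X, y)
print(clf.predict_one(np.array([2.5, 2.5])))  # expected: class 1

Raising k in this sketch widens the voting neighborhood, which is exactly the smoothing-versus-locality trade-off described above.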