Eager learning compiles the training data into a model or compressed description (such as density parameters in statistical PR, or the graph structure and weights in neural PR), and then discards the training data; a sample is classified using only the stored model. During training, lazy algorithms have the lower computational cost; during testing, eager algorithms have the lower computational cost.

kNN Classifier: Large k yields smoother decision regions. If k is too large, the locality of the estimate is destroyed, because examples far from the query are taken into account.
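The eager-learning description above can be sketched with a minimal nearest-class-mean classifier: "training" compresses the data into one mean vector per class, after which the raw training examples are no longer needed. The function names here are hypothetical, not from the course materials.

```python
import math

def train_class_means(train_X, train_y):
    """Eager 'training': compress the data into per-class mean vectors.

    The returned dict is the stored model; the training examples
    themselves can be discarded afterwards.
    """
    sums, counts = {}, {}
    for x, label in zip(train_X, train_y):
        acc = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: tuple(s / counts[label] for s in acc)
            for label, acc in sums.items()}

def predict_nearest_mean(model, query):
    """Classify using only the stored model (nearest class mean)."""
    return min(model, key=lambda label: math.dist(model[label], query))

model = train_class_means([(0.0,), (1.0,), (5.0,), (6.0,)],
                          ["a", "a", "b", "b"])
print(model)                                # {'a': (0.5,), 'b': (5.5,)}
print(predict_nearest_mean(model, (2.0,)))  # 'a': closer to 0.5 than to 5.5
```

Note how the cost profile matches the notes: training does real work (one pass over the data), while each test query only compares against the compact model.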
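The effect of k on locality can be seen in a small kNN sketch (hypothetical helper names; assumes Euclidean distance and majority voting). With a tiny k the single nearby outlier decides the vote; with a moderate k the local "a" neighbors outvote it; with k equal to the whole training set, locality is destroyed and the faraway "b" cluster dominates.

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, query, k):
    """Classify `query` by majority vote among its k nearest training points.

    Lazy learning: no model is built in advance; all training data is
    kept, and distances are computed at test time.
    """
    dists = sorted(
        (math.dist(x, query), label) for x, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Class "a" cluster near 0-1, class "b" cluster near 5-6,
# plus one "b" outlier at 0.9 sitting inside the "a" region.
train_X = [(0.0,), (0.5,), (1.0,), (0.9,), (5.0,), (5.5,), (6.0,)]
train_y = ["a", "a", "a", "b", "b", "b", "b"]

query = (0.92,)
print(knn_predict(train_X, train_y, query, 1))  # 'b': the outlier alone decides
print(knn_predict(train_X, train_y, query, 3))  # 'a': local neighbors outvote it
print(knn_predict(train_X, train_y, query, 7))  # 'b': k too large, far cluster wins
```

The k=7 case illustrates the warning in the notes: once k includes examples far from the query, the estimate is no longer local.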
This note was uploaded on 09/21/2010 for the course EE EE7750 taught by Professor Bahadirgunturk during the Fall '10 term at LSU.