14.3 Nearest-Neighbor Learning Algorithm

Introduction to Information Retrieval

…classifications are consistent with the given training data. It is little used outside text classification.

Sec. 14.3: kNN = k Nearest Neighbor

To classify a document d into class c:
- Define the k-neighborhood N as the k nearest neighbors of d.
- Count the number i of documents in N that belong to c.
- Estimate P(c|d) as i/k.
- Choose as class argmax_c P(c|d) [= the majority class].

kNN has been used quite effectively for text classification, but it is in general worse than Naïve Bayes. Again, it is cheap to train and to test documents.

Sec. 14.3: Example: k = 6 (6NN)

[Figure: a test document to be classified among training documents from the classes Government, Science, and Arts; the question posed is P(science | test document).]

Sec. 14.3: Nearest-Neighbor Learning Algorithm

- Learning is just storing the representations of the training examples in D.
- Testing instance x (under 1NN):
  - Compute the similarity between x and all examples in D.
  - Assign x the category of the most similar example in D.
- kNN does not explicitly compute a generalization or category prototypes.
- Also called: case-based learning, memory-based learning, lazy learning.
- Rationale of kNN: the contiguity hypothesis.

Sec. 14.3: kNN Is Close to Optimal

Cover and Hart (1967): asymptotically, the error rate of 1-nearest-neighbor classification is less than twice the Bayes rate [the error rate of a classifier that knows the model that generated the data].

Sec. 14.3: k Nearest Neighbor

Using only the closest example (1NN) to determine the class is subject to errors due to: …
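The kNN procedure on these slides (find the k training documents most similar to d, estimate P(c|d) as i/k, return the majority class) can be sketched in Python. The toy documents, the use of cosine similarity over raw term counts (rather than a real tf-idf pipeline), and all names below are illustrative assumptions, not part of the original slides.

```python
# Sketch of kNN text classification as described on the slides.
# Assumptions: documents are bags of words (term-count dicts),
# similarity is cosine over raw counts; data is made up.
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity of two sparse term-count dicts."""
    dot = sum(u[t] * v.get(t, 0) for t in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_classify(d, training, k):
    """training: list of (term-count dict, class) pairs.
    Returns (argmax_c P(c|d), estimated P(c|d))."""
    # k-neighborhood N: the k training documents most similar to d
    neighbors = sorted(training, key=lambda ex: cosine(d, ex[0]), reverse=True)[:k]
    # i = number of neighbors in class c; estimate P(c|d) as i/k
    counts = Counter(c for _, c in neighbors)
    c, i = counts.most_common(1)[0]  # majority class = argmax_c P(c|d)
    return c, i / k

# Toy usage with hypothetical documents
train = [
    (Counter("quantum physics lab".split()), "Science"),
    (Counter("particle physics experiment".split()), "Science"),
    (Counter("election vote senate".split()), "Government"),
]
print(knn_classify(Counter("physics experiment results".split()), train, k=3))
# → ('Science', 0.6666666666666666)
```

Note that "learning" here is nothing but storing `train`; all work happens at classification time, which is why the slides call kNN lazy or memory-based learning.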