The specific order of the words in a document is not very important. Moreover, we may assume that for documents of a given class a word appears in the document irrespective of the presence of other words. This leads to a simple formula for the conditional
probability of words given a class L_c:

    p(t_1, ..., t_{n_i} | L_c) = ∏_{j=1}^{n_i} p(t_j | L_c)

Combining this "naïve" independence assumption with the Bayes formula
deﬁnes the Naïve Bayes classiﬁer (Good 1965). Simpliﬁcations of this sort are
required because many thousands of different words occur in a corpus.
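The factorization above can be illustrated with a short sketch. The word probabilities below are invented for illustration; under the independence assumption, the probability of a document given a class is simply the product of the per-word probabilities p(t_j | L_c).

```python
from math import prod

# Hypothetical per-word probabilities p(t_j | L_c) for one class
# (illustrative values, not from the text).
p_word_given_class = {"good": 0.2, "movie": 0.1, "great": 0.05}

doc = ["good", "movie", "great"]

# Naive independence assumption: p(t_1, ..., t_n | L_c) = prod_j p(t_j | L_c)
p_doc = prod(p_word_given_class[t] for t in doc)
print(p_doc)  # 0.2 * 0.1 * 0.05 = 0.001
```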
The naïve Bayes classifier involves a learning step which simply requires estimating the probabilities of words p(t_j | L_c) in each class by their relative frequencies in the documents of a training set labelled with L_c. In the classification step the estimated probabilities are used to classify a new instance according to the Bayes rule. In order to reduce the number of probabilities p(t_j | L_m) to be estimated, we can...
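The two steps described above can be sketched as follows. The toy training documents and labels are invented for illustration, and add-one (Laplace) smoothing is a common refinement not mentioned in the text, included here so unseen words do not yield zero probabilities.

```python
from collections import Counter, defaultdict
from math import log

# Illustrative labelled training set (not from the text).
train = [
    (["good", "great", "movie"], "pos"),
    (["bad", "awful", "movie"], "neg"),
]

# Learning step: estimate p(t_j | L_c) by relative word frequencies
# in the documents of each class.
word_counts = defaultdict(Counter)
class_docs = Counter()
for words, label in train:
    word_counts[label].update(words)
    class_docs[label] += 1

vocab = {w for words, _ in train for w in words}
n_docs = sum(class_docs.values())

def log_posterior(words, label):
    # log p(L_c) + sum_j log p(t_j | L_c), with add-one smoothing.
    total = sum(word_counts[label].values())
    lp = log(class_docs[label] / n_docs)
    for w in words:
        lp += log((word_counts[label][w] + 1) / (total + len(vocab)))
    return lp

def classify(words):
    # Classification step: pick the class maximizing the Bayes rule score.
    return max(class_docs, key=lambda c: log_posterior(words, c))

print(classify(["good", "movie"]))  # "pos"
```

Working in log space avoids numerical underflow when multiplying many small probabilities, which matters once documents contain more than a handful of words.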