SP11 cs188 lecture 20 -- naive bayes II++ 6PP

CS 188: Artificial Intelligence, Spring 2011
Lecture 20: Naïve Bayes
4/11/2011
Pieter Abbeel, UC Berkeley (slides adapted from Dan Klein)

Announcements
- W4 due right now
- P4 out, due Friday
- Tracking of top 20 teams begins tonight

Survey
- More comprehensive discussion on Wednesday
- Immediate action points:
  - Graduate / undergraduate assessment
  - Slides++
  - Two review sessions: request sent to campus
  - Camera operator tracks all activity

Today
- Naïve Bayes
  - Inference
  - Parameter estimation
  - Generalization and overfitting
  - Smoothing
- General classification concepts
  - Confidences
  - Precision-recall

Example Classification Tasks
- In classification, we predict labels y (classes) for inputs x
- Examples:
  - Spam detection (input: document; classes: spam / ham)
  - OCR (input: images; classes: characters)
  - Medical diagnosis (input: symptoms; classes: diseases)
  - Automatic essay grading (input: document; classes: grades)
  - Fraud detection (input: account activity; classes: fraud / no fraud)
  - Customer service email routing
  - ... and many more
- Classification is an important commercial technology!

Bayes Nets for Classification
- One method of classification: use a probabilistic model!
- Features are observed random variables F_i
- Y is the query variable
- Use probabilistic inference to compute the most likely Y
- You already know how to do this inference
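The classification-by-inference idea above can be sketched in a few lines. This is a minimal illustration, not code from the lecture: the spam/ham feature name (`has_link`) and all probability tables are made-up numbers chosen only to show the mechanics of picking the most likely Y.

```python
# Classify by probabilistic inference: Y is the query variable, the
# features F_i are observed, and we pick the Y with highest probability.
# All tables below are illustrative assumptions, not real estimates.

def most_likely_label(prior, cond, features):
    """Return argmax_y P(y) * prod_i P(f_i | y), the most likely Y."""
    best_y, best_score = None, -1.0
    for y, p_y in prior.items():
        score = p_y
        for name, value in features.items():
            score *= cond[name][y][value]  # multiply in P(f_i | y)
        if score > best_score:
            best_y, best_score = y, score
    return best_y

# Toy spam/ham model with a single binary feature.
prior = {"spam": 0.4, "ham": 0.6}
cond = {
    "has_link": {"spam": {True: 0.8, False: 0.2},
                 "ham":  {True: 0.3, False: 0.7}},
}
print(most_likely_label(prior, cond, {"has_link": True}))  # prints "spam"
```

Here spam scores 0.4 x 0.8 = 0.32 against ham's 0.6 x 0.3 = 0.18, so the observed link tips the decision to spam; with `has_link` False the same arithmetic favors ham.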
General Naïve Bayes
- A general naive Bayes model: P(Y, F_1 ... F_n) = P(Y) Π_i P(F_i | Y)
- We only specify how each feature depends on the class
- Total number of parameters is linear in n
- Parameter counts:
  - |Y| parameters for the prior P(Y)
  - n x |F| x |Y| parameters for the naive Bayes model
  - |Y| x |F|^n parameters for the full joint distribution

Inference for Naïve Bayes
- Goal: compute the posterior over causes, P(Y | F_1 ... F_n)
- Step 1: get the joint probability of causes and evidence
- Step 2: get the probability of the evidence
- Step 3: renormalize

General Naïve Bayes
- What do we need in order to use naive Bayes?
  - Inference (you know this part)
    - Start with a bunch of conditionals: P(Y) and the P(F_i | Y) tables
    - Use standard inference to compute P(Y | F_1 ... F_n)
    - Nothing new here
  - Estimates of the local conditional probability tables
    - P(Y), the prior over labels
    - P(F_i | Y) for each feature (evidence variable)
    - These probabilities are collectively called the parameters of the model and denoted by θ
    - Up until now, we assumed these appeared by magic, but they typically come from training data: we'll look at this now

A Digit Recognizer
- Input: pixel grids
- Output: a digit 0-9

Naïve Bayes for Digits
- Simple version:
  - One feature F_ij for each grid position <i,j>
  - Possible feature values are on / off, based on whether the intensity is more or less than 0.5 in the underlying image
  - Each input maps to a feature vector, e.g.
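The three inference steps and the pixel-feature design above can be combined into one short sketch. The 0.5 intensity threshold follows the slide's feature definition; the two digit classes, the tiny 1x2 "image", and every probability table are assumptions made up for illustration.

```python
# Naive Bayes inference in three steps, plus the on/off pixel features.
# All parameters here are illustrative, not trained values.

def binarize(pixels, threshold=0.5):
    """One on/off feature F_ij per grid position <i,j>."""
    return {(i, j): intensity > threshold
            for i, row in enumerate(pixels)
            for j, intensity in enumerate(row)}

def posterior(prior, cond, features):
    # Step 1: joint probability of each cause with the evidence,
    #   P(y, f_1 ... f_n) = P(y) * prod_i P(f_i | y)
    joint = {}
    for y, p_y in prior.items():
        p = p_y
        for f, value in features.items():
            p *= cond[f][y][value]
        joint[y] = p
    # Step 2: probability of the evidence, P(f_1 ... f_n) = sum over y
    evidence = sum(joint.values())
    # Step 3: renormalize to get the posterior P(y | f_1 ... f_n)
    return {y: p / evidence for y, p in joint.items()}

# Illustrative parameters: class 0 tends to turn pixels on, class 1 off.
prior = {0: 0.5, 1: 0.5}
on_for_0 = {True: 0.9, False: 0.1}
on_for_1 = {True: 0.2, False: 0.8}
cond = {(0, 0): {0: on_for_0, 1: on_for_1},
        (0, 1): {0: on_for_0, 1: on_for_1}}

print(posterior(prior, cond, binarize([[0.8, 0.1]])))
```

With one bright and one dark pixel, the joints are 0.5 x 0.9 x 0.1 = 0.045 for class 0 and 0.5 x 0.2 x 0.8 = 0.08 for class 1; dividing by their sum (Step 3) yields a posterior that sums to 1 and favors class 1.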

This note was uploaded on 08/26/2011 for the course CS 188 taught by Professor Staff during the Spring '08 term at Berkeley.

