CS 188                    Introduction to Artificial Intelligence
Spring 2009               Final Exam

INSTRUCTIONS

- You have 3 hours.
- The exam is closed book, closed notes except a two-page crib sheet, double-sided.
- Please use non-programmable calculators only.
- Mark your answers ON THE EXAM ITSELF. If you are not sure of your answer you may wish to provide a brief explanation. All short answer sections can be successfully answered in a few sentences at most.

Last Name:
First Name:
SID:
Login:
GSI:
Section Time:

All the work on this exam is my own. (please sign)

For staff use only:
  Q. 1: /16   Q. 2: /15   Q. 3: /20   Q. 4: /10   Q. 5: /21   Q. 6: /18   Total: /100

1. (12 points) Linear Naive Bayes

Recall that a Naive Bayes classifier with observed random variables F_i (i = 1, ..., n) and the query variable Y uses the classification rule:

    arg max_y P(y | f_1, ..., f_n) = arg max_y P(y) ∏_{i=1}^{n} P(f_i | y)

And a linear classifier (for example, a perceptron) uses the classification rule:

    arg max_y ∑_{i=0}^{n} w_{y,i} f_i,   where f_0 = 1 is a bias feature.

(a) (8 pt) Consider a Naive Bayes classifier with binary-valued features, i.e. f_i ∈ {0, 1}. Prove that it is also a linear classifier, by defining weights w_{y,i} (for i = 0, ..., n) such that both decision rules above are equivalent. The weights should be expressed in terms of the Naive Bayes probabilities P(y) and P(f_i | y). You may assume that all the Naive Bayes probabilities are nonzero.

(b) (4 pt) For the training set below with binary features F_1 and F_2 and label Y, either name a smoothing method that would estimate a naive Bayes model that would correctly classify all training set data, or state that it is impossible (i.e., there is no smoothing method that would give appropriate probabilities).

    F_1   F_2   Y
     1     1    1
     1     1    1

2. (15 points) Blind Connect Three

In Connect Three, players alternate dropping pieces into one of four columns.
A player wins by having three consecutive pieces of their color either horizontally, vertically, or diagonally. Assume columns have infinite height. A dropped piece always occupies the lowest open space in that column. You are playing the game blindfolded against a random opponent. You can't see the opponent's moves. ...
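Question 1(a) above asks for weights under which the two decision rules agree; taking logarithms of the Naive Bayes rule suggests a natural choice. The following is a minimal Python sketch of that equivalence for a single binary feature. The model probabilities here are made-up illustration values, not part of the exam, and the weight definitions are one candidate answer, not the official solution:

```python
import math

# Hypothetical toy Naive Bayes model: one binary feature F1, binary label Y.
# All probability values below are assumptions chosen for illustration.
p_y = {0: 0.6, 1: 0.4}              # P(Y = y)
p_f1_given_y = {0: 0.2, 1: 0.9}     # P(F1 = 1 | Y = y)

def nb_score(y, f):
    """Naive Bayes score P(y) * P(f | y) for a single binary feature f."""
    p_f = p_f1_given_y[y] if f == 1 else 1 - p_f1_given_y[y]
    return p_y[y] * p_f

def linear_score(y, f):
    """Linear score w_{y,0} * 1 + w_{y,1} * f with log-derived weights.

    w_{y,0} = log P(y) + log P(F1 = 0 | y)   (bias weight, multiplies f_0 = 1)
    w_{y,1} = log P(F1 = 1 | y) - log P(F1 = 0 | y)
    """
    w0 = math.log(p_y[y]) + math.log(1 - p_f1_given_y[y])
    w1 = math.log(p_f1_given_y[y]) - math.log(1 - p_f1_given_y[y])
    return w0 + w1 * f

# The linear score equals log(nb_score), so argmax decisions coincide.
for f in (0, 1):
    nb_pred = max(p_y, key=lambda y: nb_score(y, f))
    lin_pred = max(p_y, key=lambda y: linear_score(y, f))
    assert nb_pred == lin_pred
```

Because log is monotonic, maximizing the linear score is the same as maximizing the Naive Bayes product, which is the equivalence the question is after.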
This note was uploaded on 04/16/2010 for the course COMPUTER S 188 taught by Professor Abbel during the Fall '10 term at University of California, Berkeley.