Computer Science 188 - Spring 2002 - Russell - Final Exam

CS 188 Introduction to AI, Spring 2002, Stuart Russell. Final.

You have 2 hours and 50 minutes. The exam is open-book, open-notes. 100 points total. You will not necessarily finish all questions, so do your best ones first. Write your answers in blue books. Check you haven't skipped any by accident. Hand them all in. Panic not.

HAND IN THE EXAM COPY AS WELL AS YOUR BLUE BOOKS. DO NOT DISCLOSE ANY EXAM CONTENT OR DISCUSS WITH OTHER STUDENTS!

1. (12 pts.) True/False
Decide if each of the following is true or false. If you are not sure, you may wish to provide a brief explanation to follow your answer.
(a) (2) The truth of any English sentence can be determined given a grammar and given semantic definitions for all the words.
(b) (2) Using dynamic Bayesian networks for speech recognition instead of HMMs does not necessarily change the complexity of the problem.
(c) (2) It is not always possible to determine the size of an object from a single image.
(d) (2) There is no clause that, when resolved with itself, yields (after factoring) the clause (¬P ∨ ¬Q).
(e) (2) Every partial-order plan with no open conditions and no possible threats has a linearization that is a correct solution.
(f) (2) There exists a set S of Horn clauses such that the assignment in which every symbol is false is not a model of S.

2. (15 pts.) Logic
(a) (2) Translate into good, natural English (no xs and ys!):
    ∀x, y, l   SpeaksLanguage(x, l) ∧ SpeaksLanguage(y, l) ⇒ Understands(x, y) ∧ Understands(y, x)
(b) (3) Translate into first-order logic the following sentences:
    i. "If someone understands someone, then he is that someone's friend."
    ii. "Friendship is transitive."
Remember to define all predicates, functions, or constants, and avoid the LongPredicateNames trap.
(c) (5) Suppose that Ann and Bob speak French and Bob and Cal speak German. Prove, using any first-order logical theorem-proving method you like, that Ann is Cal's friend, using as axioms the sentences from parts (a) and (b). Explain each step in detail, including any unifications required. You may abbreviate any symbols as necessary.
(d) (5) Give a formal proof that the sentence in (a) is entailed by the sentence
    ∀x, y, l   SpeaksLanguage(x, l) ∧ SpeaksLanguage(y, l) ⇒ Understands(x, y)

3. (14 pts.) Games
Consider a two-player game featuring a board with four locations, numbered 1 through 4 and arranged in a line. Each player has a single token. Player A starts with his token on space 1, and player B starts with his token on space 4. Player A moves first. The two players take turns moving, and each player must move his token to an open adjacent space in either direction. If the opponent occupies an adjacent space, then a player may jump over the opponent to the next open space, if any. (For example, if A is on 3 and B is on 2, then A may move back to 1.) The game ends when one player reaches the opposite end of the board. If player A reaches space 4 first, then the value of the game is +1; if player B reaches space 1 first, then the value of the game is -1.
(a) (5) On a fresh page, draw the complete game tree, using the following conventions:
    - Write each state as (s_A, s_B), where s_A and s_B denote the token locations.
    - Put the terminal states in square boxes, and annotate each with its game value in a circle.
    - Put loop states (states that already appear on the path to the root) in double square boxes. Since it is not clear how to assign values to loop states, annotate each with a "?" in a circle.
(b) (4) Now mark each node with its backed-up minimax value (also in a circle). Explain in words how you handled the "?" values, and why.
(c) (5) Explain why the standard minimax algorithm would fail on this game tree and briefly sketch how you might fix it, drawing on your answer to (b). Does your modified algorithm give optimal decisions for all games with loops?
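The jump rule in question 3 is the part most often misread, so here is a minimal Python sketch of a successor function for the game exactly as described above; the function name and the (s_A, s_B) tuple encoding are illustrative choices, not something the exam prescribes.

```python
def successors(state, player):
    """Legal moves in the 4-square line game: step to an open adjacent
    square, or jump over the adjacent opponent to the next square if it
    is open and on the board."""
    s_a, s_b = state
    me, other = (s_a, s_b) if player == 'A' else (s_b, s_a)
    moves = []
    for step in (-1, +1):                     # try both directions
        nxt = me + step
        if not 1 <= nxt <= 4:
            continue                          # off the board
        if nxt != other:
            moves.append(nxt)                 # adjacent square is open
        else:
            jump = nxt + step                 # opponent is adjacent: try the jump
            if 1 <= jump <= 4:
                moves.append(jump)
    if player == 'A':
        return [(m, s_b) for m in moves]
    return [(s_a, m) for m in moves]

# From the start state (1, 4) it is A's move and the only successor is (2, 4);
# from (3, 2) with A to move, A may jump back to 1 or step forward to 4.
print(successors((1, 4), 'A'))   # [(2, 4)]
print(successors((3, 2), 'A'))   # [(1, 2), (4, 2)]
```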
4. (12 pts.) MDPs and Games (ANSWER Q.3 FIRST)
Now we will take a different approach to the game in Q.3, viewing it in the framework of MDPs. Here is the state-space graph for the game, showing moves by A as solid lines and moves by B as dashed lines.
[Figure: state-space graph over the states (1,4), (2,4), (3,4), (1,3), (2,3), (4,3), (1,2), (3,2), (4,2), and (2,1), with the terminal states marked.]
(a) (5) Consider a general zero-sum, turn-taking, stochastic MDP with two players A and B. Let U_A(s) be the utility of state s when it is A's turn to move in s, and let U_B(s) be the utility of state s when it is B's turn to move in s. Let R(s) be the reward in s. All rewards and utilities are calculated from A's point of view (just as in a minimax game tree). Write down Bellman equations defining U_A(s) and U_B(s).
(b) (5) Briefly explain how to do two-player value iteration with these equations, and apply value iteration to the game from Q.3 using the following table. We have initialized U_A and marked the terminal values as fixed. Your job is to complete the next two rows (in your blue book).

           (1,4)  (2,4)  (3,4)  (1,3)  (2,3)  (4,3)  (1,2)  (3,2)  (4,2)  (2,1)
    U_A      0      0      0      0      0     +1      0      0     +1     -1
    U_B                                        +1                   +1     -1
    U_A                                        +1                   +1     -1

(c) (2) Define a suitable termination condition for two-player value iteration.

5. (22 pts.) Statistical learning, Bayes nets
In this question we will look at maximum likelihood learning, as discussed in the lecture on Chapter 19.
(a) (2) Consider a single Boolean random variable Y (the "classification"). Let the prior probability P(Y = true) be π. Let's try to find π, given a training set D = (y_1, ..., y_N) with N independent samples of Y. Furthermore, suppose p of the N are positive and n of the N are negative. Write down an expression for the likelihood of D (i.e., the probability of seeing this particular sequence of examples, given a fixed value of π) in terms of π, p, and n.
(b) (3) By differentiating the log likelihood L, find the value of π that maximizes the likelihood.
(c) (2) Now suppose we add k Boolean random variables X_1, X_2, ..., X_k (the "attributes") that describe each sample, and suppose we assume that the attributes are conditionally independent of each other given the goal Y. Draw the Bayes net corresponding to this assumption.
(d) (4) Write down the likelihood for the data including the attributes, using the following additional notation:
    - α_i is P(X_i = true | Y = true).
    - β_i is P(X_i = true | Y = false).
    - p_i^+ is the count of samples for which X_i = true and Y = true.
    - n_i^+ is the count of samples for which X_i = false and Y = true.
    - p_i^- is the count of samples for which X_i = true and Y = false.
    - n_i^- is the count of samples for which X_i = false and Y = false.
[Hint: consider first the probability of seeing a single example with specified values for X_1, X_2, ..., X_k and Y.]
(e) (5) By differentiating the log likelihood L, find the values of α_i and β_i (in terms of the various counts) that maximize the likelihood, and say in words what these values represent.
(f) (3) Let k = 2, and consider a data set with 4 examples as follows [the table of examples is not reproduced in this copy]. Compute the maximum likelihood estimates of π, α_1, α_2, β_1, and β_2.
(g) (2) Given these estimates of π, α_1, α_2, β_1, and β_2, what are the posterior probabilities P(Y = true | x_1, x_2) for each example?
(h) (2) Comment on the connection between this result and the capabilities of a single-layer perceptron.
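The counting notation in question 5 maps directly onto code. The sketch below is only an illustration of the standard frequency-count maximum-likelihood estimates for a naive Bayes model of this shape; the function and variable names, and the tiny data set at the end, are made up for the example and are not the exam's data.

```python
def naive_bayes_mle(examples, k):
    """Frequency-count ML estimates for a Boolean class Y and k Boolean
    attributes X_1..X_k that are conditionally independent given Y.

    `examples` is a list of (x, y) pairs, where x is a tuple of k booleans
    and y is a boolean. Returns (pi, alpha, beta) where alpha[i] estimates
    P(X_i = true | Y = true) and beta[i] estimates P(X_i = true | Y = false).
    """
    N = len(examples)
    p = sum(1 for _, y in examples if y)        # number of positive examples
    n = N - p                                   # number of negative examples
    pi = p / N
    alpha, beta = [], []
    for i in range(k):
        p_plus = sum(1 for x, y in examples if y and x[i])       # X_i true, Y true
        p_minus = sum(1 for x, y in examples if not y and x[i])  # X_i true, Y false
        alpha.append(p_plus / p if p else 0.0)
        beta.append(p_minus / n if n else 0.0)
    return pi, alpha, beta

# Tiny made-up data set with k = 2 (not the exam's table):
data = [((True, False), True), ((True, True), True),
        ((False, True), False), ((False, False), False)]
print(naive_bayes_mle(data, 2))   # (0.5, [1.0, 0.5], [0.0, 0.5])
```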
6. (15 pts.) Natural language
The next page shows the lexicon and grammar rules for "wumpus pidgin" (slightly modified from the book).
(a) (3) Which of the following sentences are generated by the grammar (possibly more than one):
    i. I see the gold but it is near the smelly wumpus.
    ii. I shoot the breeze back east in Boston.
    iii. You that smell the wumpus that stinks I go and you kill it.
(b) (4) Propose a modified rule for relative clauses that also allows the sentence "The wumpus that the dogs see stinks."
(c) (4) Show a parse tree of this sentence using your new rule.
(d) (4) In English it is also legal to say "The wumpus the dogs see stinks," omitting the word "that". It is not, however, legal to say "The wumpus the dogs I smell see stinks."[1] Make minimal adjustments to the grammar to allow the first sentence but not to allow the second.

[1] This is the result of removing two "that"s from "The wumpus that the dogs that I smell see stinks." Said carefully, the latter is really a sentence of English.

The lexicon for wumpus pidgin:
    Noun → stench | breeze | glitter | nothing | wumpus | pit | dogs | gold | east
    Verb → is | see | smell | shoot | feel | stinks | go | grab | carry | kill | turn
    Adjective → right | left | east | south | back | smelly
    Adverb → here | there | nearby | ahead | right | left | east | south | back
    Pronoun → me | you | I | it
    Name → John | Mary | Boston | Aristotle
    Article → the | a | an
    Preposition → to | in | on | near
    Conjunction → and | or | but
    Digit → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9

The grammar for wumpus pidgin, with an example phrase for each rule:
    S → NP VP                    (I + feel a breeze)
      | S Conjunction S          (I feel a breeze + and + I smell a wumpus)
    NP → Pronoun                 (I)
       | Noun                    (dogs)
       | Article Noun            (the + wumpus)
       | Digit Digit             (3 4)
       | NP PP                   (the wumpus + to the east)
       | NP RelClause            (the wumpus + that is smelly)
    VP → Verb                    (stinks)
       | VP NP                   (feel + a breeze)
       | VP Adjective            (is + smelly)
       | VP PP                   (turn + to the east)
       | VP Adverb               (go + ahead)
    PP → Preposition NP          (to + the east)
    RelClause → that VP          (that + is smelly)

7. (10 pts.) Robotics
Consider the two-link robotic arm shown in the following figure. The arm rotates at the pivot in the center, and its position is defined by two angles: θ_1 is the angle between the x-axis and the first link (ranging from 0 to 2π), and θ_2 is the angle between the first and second links (also ranging from 0 to 2π). The square block on the left and the vertical wall on the right represent obstacles. Both a start configuration (solid lines) and a goal configuration (dashed lines) are shown.
[Figure: the two-link arm with the square obstacle and the wall, followed by several candidate configuration-space diagrams; not reproduced in this copy.]
(a) (5) Choose the appropriate configuration space from above.
(b) (5) Copy the configuration space diagram, mark the start and goal configurations, and show an appropriate plan for the robot to move from the start to the goal.
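Configuration-space questions like question 7 are easier to reason about if you can map a configuration (θ_1, θ_2) back into the workspace. The forward-kinematics sketch below is only illustrative: the link lengths and pivot location are placeholders, since the exam's figure (which fixes them) is not reproduced here.

```python
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0, pivot=(0.0, 0.0)):
    """Workspace coordinates of the elbow and tip of a two-link arm.

    theta1 is measured from the x-axis to link 1, and theta2 from link 1 to
    link 2 (both in radians, 0 to 2*pi), as in question 7. The link lengths
    l1, l2 and the pivot location are arbitrary placeholder values.
    """
    x0, y0 = pivot
    x1 = x0 + l1 * math.cos(theta1)              # end of link 1 (the elbow)
    y1 = y0 + l1 * math.sin(theta1)
    x2 = x1 + l2 * math.cos(theta1 + theta2)     # end of link 2 (the tip)
    y2 = y1 + l2 * math.sin(theta1 + theta2)
    return (x1, y1), (x2, y2)

# With theta2 = 0 the arm is straight, so the tip lies two link lengths
# from the pivot along the theta1 direction (up to floating-point error).
print(forward_kinematics(math.pi / 2, 0.0))
```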