CS221 Midterm 1

STANFORD UNIVERSITY
CS 221 Practice Midterm, Fall 2007

Question                        Points
1  Short Answers                /32
2  Motion Planning              /12
3  Search Space Formulation     /16
4  A*                           /12
5  Supervised Learning          /20
6  Reinforcement Learning       /14
7  Constraint Satisfaction      /14
Total                           /120

Name of Student:

Exam policy: This exam is open-book and open-notes. Any printed material is allowed. However, the use of mobile devices is not permitted. This includes laptops, cellular phones, and pagers.

Time: 3 hours.

The Stanford University Honor Code: I attest that I have not given or received aid in this examination, and that I have done my share and taken an active part in seeing to it that others as well as myself uphold the spirit and letter of the Honor Code.

Signed:

1. Short answers [32 points]

The following questions require a true/false answer accompanied by one sentence of explanation, or a very short answer. To discourage random guessing, one point will be deducted for a wrong answer on multiple-choice (such as yes/no or true/false) questions! Also, no credit will be given for answers without a correct explanation.

(a) [4 points] For this question only, assume that there are no ties in the priority queue for A* search or uniform-cost search (i.e., all f and g values are unique).

i. [2 points] A* search with an admissible heuristic never expands more nodes than a uniform-cost search for the same problem. [True/False]

ii. [2 points] A* search with an inadmissible heuristic never expands more nodes than a uniform-cost search for the same problem. [True/False]

(b) [3 points] Linear regression as studied in class optimizes $\sum_i \left( y^{(i)} - \sum_j \theta_j x_j^{(i)} \right)^2$ over a training set. The maximum likelihood estimate for $\theta$ will be unchanged if we optimize $\sum_i \left( y^{(i)} - \sum_j \theta_j x_j^{(i)} \right)^4$ instead.
[True/False]

(c) [2 points] For any bounded, continuous, differentiable function $f$, gradient descent with a small enough learning rate always converges to the global minimum. [True/False]

(d) [2 points] We can find the maximum of a function such as $f(\theta) = -(\theta - 5)^2$ analytically by setting the derivative $\partial f / \partial \theta$ to zero and solving for $\theta$. Given this general method for maximizing a function, why did we need a gradient ascent method for maximizing the log-likelihood for logistic regression?

(e) [6 points] At any stage in CSP search, the variable ordering picks the next uninstantiated variable to instantiate. One common heuristic is the minimum remaining values heuristic (MRV), which picks the variable with the fewest remaining legal values. Given a CSP with binary constraints, consider two search algorithms: arc consistency with an arbitrary variable ordering (AC), and forward checking with variable ordering using the minimum remaining values heuristic (FC+MRV). Which of the following is true? ...
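The squared-error objective referenced in question (b) can be computed directly from its definition. Below is a minimal sketch; the helper name and toy data are illustrative, not part of the exam:

```python
def squared_loss(theta, X, Y):
    # J(theta) = sum_i ( y^(i) - sum_j theta_j * x_j^(i) )^2
    return sum(
        (y - sum(t * x for t, x in zip(theta, xs))) ** 2
        for xs, y in zip(X, Y)
    )

# Toy data: x = [1, x1] so theta = [intercept, slope]; y = 1 + x1.
X = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
Y = [2.0, 3.0, 4.0]
print(squared_loss([1.0, 1.0], X, Y))  # → 0.0 (perfect fit)
```

Raising the residuals to the fourth power instead simply replaces `** 2` with `** 4` in the comprehension, which is a different objective function over the same data.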
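Question (d) contrasts analytic maximization with gradient ascent. As a point of reference, the quadratic $f(\theta) = -(\theta - 5)^2$ can also be maximized numerically; the sketch below (function names and step counts are illustrative) follows the gradient $f'(\theta) = -2(\theta - 5)$ uphill:

```python
def gradient_ascent(grad, theta0, lr=0.1, steps=200):
    # Repeatedly step in the direction of the gradient.
    theta = theta0
    for _ in range(steps):
        theta += lr * grad(theta)
    return theta

# f(theta) = -(theta - 5)^2, so f'(theta) = -2 * (theta - 5).
theta_hat = gradient_ascent(lambda t: -2.0 * (t - 5.0), theta0=0.0)
print(round(theta_hat, 6))  # → 5.0, matching the analytic maximizer
```

For this quadratic the analytic and numerical answers agree; the exam question asks why the analytic route is unavailable for the logistic regression log-likelihood.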
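The MRV heuristic described in question (e) reduces to picking the unassigned variable whose current domain is smallest. A minimal sketch, with hypothetical variable names and domains:

```python
def select_mrv_variable(unassigned, domains):
    # Minimum remaining values: choose the unassigned variable
    # with the fewest legal values left in its domain.
    return min(unassigned, key=lambda v: len(domains[v]))

domains = {"A": [1, 2, 3], "B": [2], "C": [1, 3]}
print(select_mrv_variable(["A", "B", "C"], domains))  # → B
```

In FC+MRV, forward checking prunes domains after each assignment, and this selection rule is then applied to the pruned domains.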