CS221 Practice Midterm, Fall 2007
STANFORD UNIVERSITY

Question                        Points
1. Short Answers                 /32
2. Motion Planning               /12
3. Search Space Formulation      /16
4. A*                            /12
5. Supervised Learning           /20
6. Reinforcement Learning        /14
7. Constraint Satisfaction       /14
Total                           /120

Name of Student:

Exam policy: This exam is open-book and open-notes. Any printed material is allowed. However, the use of mobile devices is not permitted. This includes laptops, cellular phones, and pagers.

Time: 3 hours.

The Stanford University Honor Code: I attest that I have not given or received aid in this examination, and that I have done my share and taken an active part in seeing to it that others as well as myself uphold the spirit and letter of the Honor Code.

Signed:

1. Short answers [32 points]

The following questions require a true/false answer accompanied by one sentence of explanation, or a very short answer. To discourage random guessing, one point will be deducted for a wrong answer on a multiple-choice (such as yes/no or true/false) question! Also, no credit will be given for answers without a correct explanation.

(a) [4 points] For this question only, assume that there are no ties in the priority queue for A* search or uniform-cost search (i.e., all f and g values are unique).

    i.  [2 points] A* search with an admissible heuristic never expands more nodes than a uniform-cost search for the same problem. [True/False]

    ii. [2 points] A* search with an inadmissible heuristic never expands more nodes than a uniform-cost search for the same problem. [True/False]

(b) [3 points] Linear regression as studied in class optimizes $\sum_i \big( y^{(i)} - \sum_j \theta_j x_j^{(i)} \big)^2$ over a training set. The maximum likelihood estimate for $\theta$ will be unchanged if we optimize $\sum_i \big( y^{(i)} - \sum_j \theta_j x_j^{(i)} \big)^4$ instead. [True/False]

(c) [2 points] For any bounded, continuous, differentiable function $f$, gradient descent with a small enough learning rate always converges to the global minimum. [True/False]

(d) [2 points] We can find the maximum of a function such as $f(\theta) = -(\theta - 5)^2$ analytically by setting the derivative $\partial f / \partial \theta$ to zero and solving for $\theta$. Given this general method for maximizing a function, why did we need a gradient ascent method for maximizing the log-likelihood for logistic regression?

(e) [6 points] At any stage in CSP search, the variable ordering picks the next uninstantiated variable to instantiate. One common heuristic is the minimum remaining values heuristic (MRV), which picks the variable with the fewest remaining legal values. Given a CSP with binary constraints, consider two search algorithms: arc consistency with an arbitrary variable ordering (AC), and forward checking with variable ordering using the minimum remaining values heuristic (FC+MRV). Which of the following is true? ...
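As an aside on question (e): the minimum remaining values heuristic amounts to picking the unassigned variable with the smallest current domain. The sketch below is illustrative only, not part of the exam; `select_mrv_variable` and the toy domains are invented for the example.

```python
# Illustrative sketch of the MRV variable-ordering heuristic described
# in question (e). Names and data here are invented for the example.

def select_mrv_variable(domains, assignment):
    """Pick the uninstantiated variable with the fewest remaining legal values."""
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

# Toy CSP state: Y has only one legal value left, so MRV instantiates it first.
domains = {"X": {1, 2, 3}, "Y": {2}, "Z": {1, 3}}
print(select_mrv_variable(domains, assignment={}))  # -> Y
```

Forward checking would then prune, from each neighboring domain, any value inconsistent with the assignment just made; that pruning is what keeps the remaining-value counts current for MRV on the next step.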