CS570 Midterm Exam
March 25, 2009

Name: ____________________

Question    Your Points    Points
1                          17
2                          14
3                          6
4                          6
5                          4
6                          15
7                          8
8                          15
Total                      85
1 True/False (17 points)

For each of the following statements, answer True or False. Also, add a short explanation of your answer. An answer without any explanation will get zero points. (Reference sketches for the concepts in A–E appear after the questions.)

A. (3 points) Best-first search is a greedier algorithm than A* search. Thus, it may find sub-optimal solutions, but by expanding nodes closer to the goal sooner, it is guaranteed to find a goal sooner (after an equal or fewer number of node expansions) than A*. [True / False]

B. (3 points) Let h_1 and h_2 be two admissible heuristic functions. Then max(h_1, 0.5 * h_2) is also admissible. [True / False]

C. Suppose you have a CSP problem, and you run arc consistency starting from the initial state (before any variables are assigned). After applying arc consistency, every variable has one or more possible values, and there is a variable V_i whose domain D_i has exactly one possible value remaining (|D_i| = 1).

(a) (2 points) There must be at least one solution to this CSP problem. [True / False]
(b) (2 points) Any solution to this CSP problem must have the variable V_i instantiated to the value in D_i. [True / False]

D. The standard alpha-beta pruning performs a depth-first exploration (to a specified depth) of the game tree.

(a) (2 points) Alpha-beta pruning can be generalized to do a breadth-first exploration of the game tree and still get the optimal answer. [True / False]

(b) (2 points) Alpha-beta pruning can be generalized to do an iterative-deepening exploration of the game tree and still get the optimal answer. [True / False]

E. (3 points) Suppose you have an MDP, where the reward is 0 per step until the robot gets to a goal (but not terminal) state, and the reward is 1 from then on (the rewards keep accumulating). Also, γ < 1. Suppose that you have two policies, π_1 and π_2, both of which are guaranteed to get your robot to the goal. Also, suppose that, starting from any state s, the expected (i.e., average) number of steps that π_1 takes to get to the goal is less than the expected number of steps that π_2
[The preview ends here: the remainder of question E and questions 2–8 are not included.]
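On question A: greedy best-first search and A* differ only in how the frontier is ordered. The following is a minimal sketch, not taken from the exam, in which the graph interface neighbors(n) and the heuristic h(n) are hypothetical caller-supplied functions; it shows that greedy best-first ranks frontier nodes by h(n) alone, while A* ranks them by g(n) + h(n).

    import heapq
    import itertools

    def priority_search(start, goal, neighbors, h, use_astar=True):
        """Graph search with a priority-queue frontier.

        neighbors(n) yields (step_cost, m) pairs; h(n) estimates cost-to-go.
        A* orders the frontier by f(n) = g(n) + h(n); greedy best-first
        (use_astar=False) orders it by h(n) alone.
        """
        tie = itertools.count()   # tie-breaker so the heap never compares nodes
        frontier = [(h(start), next(tie), 0, start)]
        parent = {start: None}
        best_g = {start: 0}
        while frontier:
            _, _, g, node = heapq.heappop(frontier)
            if node == goal:      # rebuild the path via parent pointers
                path = []
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return g, path[::-1]
            for step_cost, nxt in neighbors(node):
                g2 = g + step_cost
                if g2 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g2
                    parent[nxt] = node
                    f = g2 + h(nxt) if use_astar else h(nxt)
                    heapq.heappush(frontier, (f, next(tie), g2, nxt))
        return None               # goal unreachable

With use_astar=False the frontier ignores the path cost g accumulated so far, so the first goal popped need not be the cheapest; whether it is always popped after no more expansions than A* performs is the claim the question asks you to judge.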

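On question B: the definition to apply is the standard one, namely that an admissible heuristic never overestimates the true cost-to-go:

    \[
    h \text{ is admissible} \iff 0 \le h(n) \le h^{*}(n) \text{ for every node } n,
    \]

where h*(n) is the cost of an optimal path from n to the nearest goal. The question asks whether this bound survives taking a pointwise max with a down-scaled admissible heuristic.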
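On question C: arc consistency enforces only pairwise consistency between variable domains. Below is a compact sketch of the AC-3 pruning loop, written for illustration; the domains/constraints inputs and the trailing example are hypothetical, not from the exam.

    from collections import deque

    def ac3(domains, constraints):
        """AC-3. domains: dict var -> set of values; constraints: dict
        mapping each directed arc (x, y) to a predicate allowed(vx, vy).
        Prunes domains in place; returns False if a domain is wiped out."""
        queue = deque(constraints)            # every directed arc (x, y)
        while queue:
            x, y = queue.popleft()
            allowed = constraints[(x, y)]
            # Drop values of x that have no supporting value left in y.
            pruned = {vx for vx in domains[x]
                      if not any(allowed(vx, vy) for vy in domains[y])}
            if pruned:
                domains[x] -= pruned
                if not domains[x]:
                    return False              # inconsistency detected
                queue.extend(arc for arc in constraints if arc[1] == x)
        return True

    # Example: A != B over domains {1, 2} is already arc-consistent, so
    # nothing is pruned; whether surviving domains alone guarantee a
    # solution is exactly what C(a) asks.
    doms = {"A": {1, 2}, "B": {1, 2}}
    cons = {("A", "B"): lambda x, y: x != y,
            ("B", "A"): lambda x, y: x != y}
    print(ac3(doms, cons), doms)              # -> True {'A': {1, 2}, 'B': {1, 2}}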

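On question D: the standard algorithm does a depth-first walk, and its cutoffs use the bounds alpha and beta accumulated along the current root-to-node path. A minimal sketch, with hypothetical game-specific callbacks children(state) and evaluate(state):

    def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
        """Depth-first alpha-beta search to a fixed depth. children(state)
        yields successor states; evaluate(state) scores a position from
        the maximizing player's point of view."""
        kids = list(children(state))
        if depth == 0 or not kids:            # depth limit or terminal position
            return evaluate(state)
        if maximizing:
            value = float("-inf")
            for child in kids:
                value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                             False, children, evaluate))
                alpha = max(alpha, value)
                if alpha >= beta:
                    break                     # beta cutoff: MIN avoids this branch
            return value
        else:
            value = float("inf")
            for child in kids:
                value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                             True, children, evaluate))
                beta = min(beta, value)
                if beta <= alpha:
                    break                     # alpha cutoff: MAX avoids this branch
            return value

Both sub-questions turn on whether those path-local bounds can still be maintained when the tree is explored in a different order.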
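On question E: the quantity compared between π_1 and π_2 is the expected discounted return

    \[
    V^{\pi}(s) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^{t} r_t \,\Big|\, s_0 = s,\ \pi\Big], \qquad 0 \le \gamma < 1 .
    \]

Under the reward structure in E, a single trajectory that first reaches the goal at step N earns roughly \sum_{t=N}^{\infty} \gamma^{t} = \gamma^{N} / (1 - \gamma) (up to exactly which step the first reward of 1 arrives); the question compares policies by their expected N.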