ICS 171, Summer 2000: Homework 4 Solutions

1. Design evaluation functions for any two of backgammon, chess, checkers, tic-tac-toe, Othello, Connect-4, or any two board games of your choice.

There are many possible answers here. Your evaluation function needs to do at least two things: (1) it needs to map board states to a real-valued number, and (2) it needs to adjust the score for both the player's and the opponent's configurations, e.g., score = quality of state for player 1 - quality of state for player 2. See question 5.1 for an example of an evaluation function for tic-tac-toe. Note that a particularly bad evaluation function for tic-tac-toe is something like f = (number of X's) - (number of O's). This function clearly fails to distinguish between any board states at the same depth in the search: all boards at the same depth will have identical f values, since players alternate moves.

2. Shown below is a game tree where the root node is a MAX node. Assume that: the tree is explored by minimax and alpha-beta in a left-to-right manner; the tree is explored to depth 3 and no further; and the numbers beneath the leaves of the tree are the evaluation-function values for the corresponding states. (1) Write in the boxes the minimax values for each state. Indicate the move chosen by MAX (the computer) as its first move.

[Figure: three-level game tree with solution values filled in, reconstructed from the flattened original.]

    MAX (root) = 2
      MIN = 2
        MAX = 2   (leaves: 2, -3)
        MAX = 8   (leaves: 8, 5)
      MIN = 1
        MAX = 3   (leaves: 3, -2)
        MAX = 1   (leaves: -2, 1)
        MAX = 5   (leaves: 5, 4)
      MIN = 1
        MAX = 1   (leaves: 1, -1)
        MAX = 7   (leaves: -1, 7)

MAX's first move is to the leftmost MIN node, which has minimax value 2; the other two MIN children are worth only 1 each.

3. Shown below is the same game tree as in Question 2. Again, the root node is a MAX node. Which states will not be evaluated in minimax search with alpha-beta pruning? Show the nodes that are not evaluated by marking them with an X. By "not evaluate" we mean that no minimax values are calculated for that node.

[Figure: same tree with the pruned nodes marked X. The X marks, reconstructed by tracing alpha-beta left to right over the tree above, fall on: the second leaf (value 5) of the second MAX node under the leftmost MIN node; the entire third MAX node under the middle MIN node (leaves 5 and 4); and the entire second MAX node under the rightmost MIN node (leaves -1 and 7). Nine of the fourteen leaves are evaluated.]

4. The minimax algorithm returns the best move for MAX under the assumption that MIN plays optimally. What happens when MIN plays suboptimally? (Russell & Norvig, page 148.)
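The minimax values in Question 2 and the pruning in Question 3 can be checked with a short script. This is a sketch, not part of the original solution set: the nested-list encoding of the tree and the function names are my own, but the leaf values are exactly those in the figure.

```python
# The Question 2/3 game tree: internal nodes are lists of children, leaves are
# evaluation-function values. Children are ordered left to right.
import math

TREE = [
    [[2, -3], [8, 5]],            # leftmost MIN node
    [[3, -2], [-2, 1], [5, 4]],   # middle MIN node
    [[1, -1], [-1, 7]],           # rightmost MIN node
]

def minimax(node, maximizing):
    """Plain minimax: evaluates every leaf."""
    if not isinstance(node, list):                     # leaf
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf, evaluated=None):
    """Alpha-beta: same value as minimax, but skips pruned subtrees.
    `evaluated` collects the leaf values actually examined, in order."""
    if not isinstance(node, list):                     # leaf
        if evaluated is not None:
            evaluated.append(node)
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta, evaluated))
            alpha = max(alpha, value)
            if alpha >= beta:     # beta cutoff: MIN would never allow this line
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta, evaluated))
            beta = min(beta, value)
            if alpha >= beta:     # alpha cutoff: MAX already has a better option
                break
        return value

seen = []
print(minimax(TREE, True))                     # -> 2 (root minimax value)
print(alphabeta(TREE, True, evaluated=seen))   # -> 2 (identical value)
print(seen)                                    # -> [2, -3, 8, 3, -2, -2, 1, 1, -1]
```

The nine leaves in `seen` are exactly those not marked X in the Question 3 answer; the five missing values (5, 5, 4, -1, 7) are the pruned ones.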
The outcome for MAX can only be the same or better if MIN plays suboptimally, compared to MIN playing optimally. If a deterministic model is available for MIN's irrationality, then it can be applied to the game tree in the same way as the optimal policy. However, in all real games the opponent is only "reasonable" in some unspecified way, and a pure minimax strategy can do far worse than some other schemes. Suppose MAX assumes MIN is rational and minimax says MIN will win. In such cases, all moves are losing and are "equally good", including those that lose immediately! A better algorithm would make moves for which it will be very difficult for MIN to find the winning line. Notice also that minimax never sets "traps." (Solution from R&N.)
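The first claim above (MAX does the same or better against a suboptimal MIN) can be verified exhaustively on the Question 2 tree. This is a sketch of my own, not part of the original solutions: it lets MAX play minimax, then enumerates every reply MIN could make, optimal or not, and checks that no reply drops the outcome below the root's minimax value.

```python
# Same tree encoding as in Question 2: lists are internal nodes, numbers are leaves.
TREE = [
    [[2, -3], [8, 5]],            # leftmost MIN node
    [[3, -2], [-2, 1], [5, 4]],   # middle MIN node
    [[1, -1], [-1, 7]],           # rightmost MIN node
]

def minimax(node, maximizing):
    if not isinstance(node, list):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

root_value = minimax(TREE, True)

# MAX's first move: the child with the best minimax value (the leftmost MIN node).
best_child = max(TREE, key=lambda child: minimax(child, False))

# Every move MIN could make in reply, with MAX then finishing optimally.
outcomes = [minimax(grandchild, True) for grandchild in best_child]

print(root_value)   # -> 2
print(outcomes)     # -> [2, 8]: every outcome is >= the guaranteed value 2
```

An optimal MIN yields exactly the minimax value (2); the suboptimal reply hands MAX an 8. The guarantee is one-sided: minimax bounds MAX's outcome from below but does not exploit MIN's mistakes in advance, which is the point of the "traps" remark above.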