lecture4 ch5

Artificial Intelligence: A Modern Approach

Lecture 4: Game Playing and Search
ICS 171, Summer 2000

Outline
Computer programs that play 2-player games
  • game playing as search, with the complication of an opponent
General principles of game playing and search
  • evaluation functions, the minimax principle
  • alpha-beta pruning, heuristic techniques
Status of game-playing systems
  • in chess, checkers, backgammon, Othello, etc., computers routinely defeat leading world players
Applications?
  • think of "nature" as an opponent: economics, medicine, etc.
Chess Rating Scale
[Chart: chess ratings (1200 to 3000) plotted over the years 1966 to 1997, comparing Garry Kasparov (current World Champion) with the Deep Thought and Deep Blue programs]

Game-Playing and AI
Game playing is a good problem for AI research:
all the information is available
  • i.e., human and computer have equal information
game playing is non-trivial
  • need to display "human-like" intelligence
  • some games (such as chess) are very complex
  • requires decision-making within a time limit
    – more realistic than other search problems
games are played in a controlled environment
  • can do experiments, repeat games, etc.: good for evaluating research systems
can compare humans and computers directly
  • can evaluate the percentage of wins/losses to quantify performance
Search and Game Playing
Consider a board game, e.g., chess, checkers, tic-tac-toe:
  • a configuration of the board = a unique arrangement of "pieces"
  • each possible configuration = a state in the search space
Statement of the game as a search problem (a code sketch of this formulation follows below):
  • States = board configurations
  • Operators = legal moves
  • Initial state = current configuration
  • Terminal state (goal) = winning configuration

Game Tree Representation
New aspect of the search problem: there is an opponent we cannot control. How can we handle this?
[Figure: game tree rooted at start state S, with alternating levels of computer moves and opponent moves; a possible goal state G (a winning situation for the computer) appears lower in the tree]
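The slides state the search formulation abstractly; as a concrete illustration, here is a minimal Python sketch (my own addition, not part of the slides) of tic-tac-toe cast as a search problem, with board configurations as states, legal moves as operators, and a terminal test. All names (initial_state, legal_moves, apply_move, winner, is_terminal) are illustrative.

# Hypothetical sketch: tic-tac-toe as a search problem (states, operators, terminal test).
EMPTY, X, O = ".", "X", "O"

def initial_state():
    # Initial state = current configuration; here, an empty 3x3 board as a 9-tuple.
    return (EMPTY,) * 9

def legal_moves(state):
    # Operators = legal moves: the indices of the empty squares.
    return [i for i, cell in enumerate(state) if cell == EMPTY]

def apply_move(state, move, player):
    # Applying an operator produces a new board configuration (a new state).
    board = list(state)
    board[move] = player
    return tuple(board)

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(state):
    # Return X or O if that player owns a complete line, else None.
    for a, b, c in LINES:
        if state[a] != EMPTY and state[a] == state[b] == state[c]:
            return state[a]
    return None

def is_terminal(state):
    # Terminal state (goal test) = a winning configuration, or a full board (draw).
    return winner(state) is not None or not legal_moves(state)

For example, legal_moves(initial_state()) lists all nine squares, and is_terminal detects both wins and drawn (full) boards.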
Complexity of Game Playing
Imagine we could predict the opponent's moves given each computer move. How complex would the search be in this case?
  • worst case: O(b^d)
Chess:
  • b ≈ 35 (average branching factor)
  • d ≈ 100 (depth of the game tree for a typical game)
  • b^d ≈ 35^100 ≈ 10^154 nodes!! ("only" about 10^40 legal states)
Tic-Tac-Toe:
  • ~5 legal moves per turn, at most 9 moves in a game
  • 5^9 = 1,953,125
  • 9! = 362,880 (computer goes first)
  • 8! = 40,320 (computer goes second)
Well-known games can produce enormous search trees (a quick numeric check of these figures appears below).

Utility Functions
Utility function: defined for each terminal state in a game
  • assigns a numeric value to each terminal state
  • these numbers represent how "valuable" the state is for the computer:
    – positive for winning
    – negative for losing
    – zero for a draw
Typical values range from -infinity (lost) to +infinity (won), or lie in [-1, +1].
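As a quick numeric check of the game-tree sizes quoted on the Complexity of Game Playing slide above, the short Python snippet below (my addition, not from the slides) reproduces the figures; the only non-obvious step is that 35^100 has about 100 * log10(35) ≈ 154 decimal digits.

import math

print(5 ** 9)                 # 1953125: the crude "about 5 moves per ply, 9 plies" bound
print(math.factorial(9))      # 362880 move orderings when the computer moves first
print(math.factorial(8))      # 40320 move orderings when the computer moves second
print(100 * math.log10(35))   # ~154.4, so 35^100 is roughly 10^154 nodes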
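Here is a minimal sketch of the utility idea for tic-tac-toe, reusing the winner() helper from the search-problem sketch above; the +1 / -1 / 0 values follow the slide's convention (positive for a computer win, negative for a loss, zero for a draw). The function name and signature are my own.

def utility(state, computer_player):
    # Utility of a terminal state from the computer's point of view.
    w = winner(state)            # winner() from the earlier sketch
    if w is None:
        return 0                 # draw (callers should check is_terminal first)
    return 1 if w == computer_player else -1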
Greedy Search with Utilities
A greedy search strategy using utility functions.
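The slide names a greedy strategy without spelling it out here, so the following is only a sketch of one plausible reading: look one move ahead and pick the move whose resulting state scores highest, ignoring the opponent's reply. It reuses legal_moves and apply_move from the earlier sketch; the evaluate parameter is a stand-in for a utility or evaluation function and is my own naming.

def greedy_move(state, computer_player, evaluate):
    # Greedy one-ply strategy: choose the move whose immediate successor
    # looks best according to `evaluate`, ignoring the opponent's reply.
    best_move, best_value = None, float("-inf")
    for move in legal_moves(state):                    # operators = legal moves
        successor = apply_move(state, move, computer_player)
        value = evaluate(successor, computer_player)   # e.g. utility() on terminal states
        if value > best_value:
            best_move, best_value = move, value
    return best_move

Because it ignores the opponent's best response, such a greedy player is easily trapped; that shortcoming is what the minimax principle listed in the outline is meant to address.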