CPS 170 Search I
Ron Parr
With thanks to Vince Conitzer for some slides and figures

What is Search?
• Search is a basic problem-solving method
• We start in an initial state
• We examine states that are (usually) connected by a sequence of actions to the initial state
• Note: Search is a thought experiment
• We aim to find a solution, which is a sequence of actions that brings us from the initial state to the goal state, minimizing cost

Search vs. Web Search
• When we issue a search query using Google, does Google really go poking around the web for us?
• Not in real time!
• Google spiders the web continually, caches results
• Uses the PageRank algorithm to find the most "popular" web pages that are consistent with your query

Overview
• Problem Formulation
• Uninformed Search – DFS, BFS, IDDFS, etc.
• Informed Search – Greedy, A*
• Properties of Heuristics

Problem Formulation
• Four components of a search problem
  – Initial State
  – Actions
  – Goal Test
  – Path Cost
• Optimal solution = lowest path cost to goal

Example: Path Planning
[Figure: a road map with edge costs between cities, a Start city, and a Goal city.]
Find the shortest route from one city to another using highways.

Example: 8(15)-puzzle
[Figure: a scrambled 3x3 board (possible start state), a solution path, and the goal state with tiles 1–8 in order.]
Actions: UP, DOWN, RIGHT, LEFT

"Real" Problems
• Robot motion planning
• Drug design
• Logistics
  – Route planning
  – Tour planning
• Assembly sequencing
• Internet routing

Why Use Search?
• Other algorithms exist for these problems:
  – Dijkstra's algorithm
  – Dynamic programming
  – All-pairs shortest path
• Use search when it is too expensive to enumerate all states
• 8-puzzle has 362,880 states
• 15-puzzle has ~1.3 trillion states
• 24-puzzle has ~10^25 states

Basic Search Concepts
• Assume a tree-structured space (for now)
• Nodes: Places in the search tree (states exist in the problem space)
• Search tree: portion of the state space visited so far
• Actions: Connect states to next states
• Expansion: Generation of the next states for a state
• Frontier: Set of states visited, but not expanded
• Branching factor: Max no. of successors = b
• Goal depth: Depth of the shallowest goal = d

Example Search Tree
[Figure: an 8-puzzle search tree with branching factor b = 2; the frontier is the set of states generated but not yet expanded.]
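The Actions (UP, DOWN, RIGHT, LEFT) and the expansion step can be made concrete for the 8-puzzle. This is a minimal sketch under assumed conventions not given in the slides: states are 9-tuples listed row by row, with 0 standing for the blank, and the function name is illustrative.

```python
def successors(state):
    """Yield (action, next_state) pairs reachable by sliding the blank."""
    i = state.index(0)                # position of the blank
    row = i // 3
    moves = {"UP": -3, "DOWN": 3, "LEFT": -1, "RIGHT": 1}
    for action, delta in moves.items():
        j = i + delta
        if not 0 <= j < 9:
            continue                  # would slide off the top or bottom
        if action in ("LEFT", "RIGHT") and j // 3 != row:
            continue                  # would wrap around a row edge
        s = list(state)
        s[i], s[j] = s[j], s[i]       # swap the blank with the adjacent tile
        yield action, tuple(s)
```

With the blank in a corner only two actions apply, and with the blank in the center four apply, so the branching factor b ranges from 2 to 4.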
Generic Search Algorithm

Function Tree-Search(problem, Queuing-Fn)
  fringe = Make-Queue(Make-Node(Initial-State(problem)))
  loop do
    if empty(fringe) then return failure
    node = pop(fringe)
    if Goal-Test(problem, State(node)) then return node
    fringe = Add-To-Queue(fringe, expand(node, problem))
  end

Interesting details are in the implementation of Add-To-Queue

Evaluating Search Algorithms
• Completeness:
  – Is the algorithm guaranteed to find a solution when there is one?
• Optimality:
  – Does the algorithm find the optimal solution?
• Time complexity
• Space complexity

Uninformed Search: BFS
Frontier is a FIFO
[Figure: a binary tree with nodes numbered in breadth-first (level-by-level) expansion order.]

BFS Properties
• Completeness: Y
• Optimality: Y (for uniform cost)
• Time complexity: O(b^(d+1))
• Space complexity: O(b^(d+1))

Uninformed Search: DFS
Frontier is a LIFO
[Figure: a binary tree with nodes numbered in depth-first expansion order.]

DFS Properties
• Completeness: N (unless tree is finite)
• Optimality: N
• Time complexity: O(b^(m+1)) (m = depth we hit, m > d?)
• Space complexity: O(bm)

Iterative Deepening
• Want:
  – DFS memory requirements
  – BFS optimality, completeness
• Idea:
  – Do a depth-limited DFS for depth m
  – Iterate over m

IDDFS
[Figure: a binary tree with nodes numbered in iterative-deepening expansion order, revisiting shallow nodes on each deeper pass.]

IDDFS Properties
• Completeness: Y
• Optimality: Y (whenever BFS is optimal)
• Time complexity: O(b^(d+2))
• Space complexity: O(bd)

IDDFS vs. BFS
Theorem: IDDFS visits no more than twice as many nodes for a binary tree as BFS.
Proof: Assume the tree bottoms out at depth d. BFS visits:
  2^(d+1) − 1
nodes.
In the worst case, IDDFS does no more than:
  sum_{i=1}^{d} (2^(i+1) − 1) = (2^(d+2) − 4) − d ≤ 2(2^(d+1) − 1) = 2 × BFS(d)
What about b-ary trees? IDDFS's relative cost is lower!

Bi-directional Search
[Figure: two search frontiers of depth d/2 growing from the start and from the goal and meeting in the middle; image from csalbpc3.massey.ac.nz/notes/59302/fig03.17.gif]
  b^(d/2) + b^(d/2) << b^d
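The meeting-in-the-middle idea can be sketched as a bidirectional BFS on an unweighted graph. All names here are illustrative, and the sketch assumes actions are invertible, so one successor function serves both directions; a careful implementation would also scan the whole layer where the frontiers first touch before declaring the minimum.

```python
from collections import deque

def bidirectional_bfs(start, goal, neighbors):
    """Search forward from start and backward from goal, one layer at a time,
    stopping when the two frontiers touch. Assumes invertible actions."""
    if start == goal:
        return 0
    dist = ({start: 0}, {goal: 0})      # distances discovered from each end
    queues = (deque([start]), deque([goal]))
    while queues[0] and queues[1]:
        for side in (0, 1):
            q, here, other = queues[side], dist[side], dist[1 - side]
            for _ in range(len(q)):     # expand exactly one layer
                node = q.popleft()
                for nxt in neighbors(node):
                    if nxt in other:    # frontiers have met
                        return here[node] + 1 + other[nxt]
                    if nxt not in here:
                        here[nxt] = here[node] + 1
                        q.append(nxt)
    return None  # no path

# Toy graph: a path 0 - 1 - ... - 9, so the true distance from 0 to 9 is 9
nbrs = lambda n: [m for m in (n - 1, n + 1) if 0 <= m <= 9]
print(bidirectional_bfs(0, 9, nbrs))    # each side explores only ~half the path
```

Each frontier reaches depth only about d/2, which is where the b^(d/2) + b^(d/2) << b^d saving comes from; non-unique goals and non-invertible actions are exactly what breaks the backward half.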
Issues with Bi-directional Search
• Uniqueness of goal
  – Suppose the goal is parking your car
  – Huge no. of possible goal states (configurations of other vehicles)
• Invertibility of actions

What About Repeated States (graphs)
[Figure: a small graph with states A, B, C and edge costs 2, 2, 3; cycles can produce exponentially large search trees.]
• Can cause incompleteness or enormous runtimes
• Can maintain a list of previously visited states to avoid this
  – If a new path to the same state has greater cost, don't pursue it further
  – Leads to a time/space tradeoff
• "Algorithms that forget their history are doomed to repeat it" [Russell and Norvig]

Informed Search
• Idea: Give the search algorithm hints
• Heuristic function: h(x)
• h(x) = estimate of cost to goal from x
• If h(x) is 100% accurate, then we can find the goal in O(bd) time

Greedy Search
• Expand the node with lowest h(x)
• Optimal if h(x) is 100% correct
• How can we get into trouble with this?

What Price Greed?
[Figure: an initial state, a goal, and intermediate states labeled with heuristic values h = 1 and h = 2; the h values lure greedy search down a costly path.]
What's broken with greedy search?

A*
• Path cost so far: g(x)
• Total cost estimate: f(x) = g(x) + h(x)
• Maintain the frontier as a priority queue
• O(bd) time if h is 100% accurate
• We want h to be an admissible heuristic
• Admissible: never overestimates cost

Some A* Properties
• Implies h(x) = 0 if x is a goal state
• Implies f(x) = cost to goal if x is a goal state and x is popped off the queue
• What if h(x) = 0 for all x?
  – Is this admissible?
  – What does the algorithm do?

Optimality of A*
• If h is admissible, A* is optimal
• Proof (by contradiction):
  – Suppose a suboptimal solution node n with solution value f(n) > C* is about to be expanded (where C* is optimal)
  – Let n* be a goal state found on the optimal path
  – There must be some node n' that is currently in the fringe and on the path to n*
  – We have f(n) > C*, and f(n') = g(n') + h(n') ≤ C*
  – But then, n' should be expanded first (contradiction)

Does A* fix the greedy problem?
[Figure: the same h-labeled state space as in "What Price Greed?"; A*'s g + h values steer it back to the short path.]

A* is optimally efficient
• A* is optimally efficient: Any other optimal algorithm must expand at least the nodes A* expands
• Proof:
  – Besides the solution, A* expands the nodes with g(n) + h(n) < C*
    • Assuming it does not expand non-solution nodes with g(n) + h(n) = C*
  – Any other optimal algorithm must expand at least these nodes (since there may be a better solution there)
• Note: This argument assumes that the other algorithm uses the same heuristic h

Properties of Heuristics
• h2 dominates h1 if h2(x) ≥ h1(x) for all x
• Does this mean that h2 is better?
• Suppose you have multiple admissible heuristics. How do you combine them?

Designing heuristics
• One strategy for designing heuristics: relax the problem
• The "number of misplaced tiles" heuristic corresponds to a relaxed problem where tiles can jump to any location, even if something else is already there
• The "sum of Manhattan distances" heuristic corresponds to a relaxed problem where multiple tiles can occupy the same spot
• The ideal relaxed problem is
  – easy to solve,
  – not much cheaper to solve than the original problem
• Some programs can successfully create heuristics automatically
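Putting the pieces together, here is a sketch of A* on the 8-puzzle with both relaxed-problem heuristics. The goal layout, the state encoding (9-tuples read row by row, 0 = blank), and every name are assumptions for illustration, not taken from the slides.

```python
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)    # assumed goal layout; 0 is the blank

def misplaced(state):
    # "Number of misplaced tiles": tiles may jump anywhere in the relaxation.
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def manhattan(state):
    # "Sum of Manhattan distances": tiles may share squares in the relaxation.
    total = 0
    for i, tile in enumerate(state):
        if tile:
            g = GOAL.index(tile)
            total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total

def neighbors(state):
    # States reachable by sliding the blank one square.
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def astar(start, h):
    # Frontier is a priority queue ordered by f(x) = g(x) + h(x).
    frontier = [(h(start), 0, start)]
    best_g = {start: 0}               # repeated-state check (see graphs slide)
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if state == GOAL:
            return g                  # path cost of the solution found
        for nxt in neighbors(state):
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None

start = (1, 2, 3, 4, 5, 6, 0, 7, 8)   # two slides of the blank from GOAL
print(astar(start, manhattan), astar(start, misplaced))
```

Both heuristics are admissible because each relaxation only removes constraints, and Manhattan distance dominates the misplaced-tiles count, so it tends to pop fewer nodes; setting h(x) = 0 for all x turns this into uniform-cost search.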
This note was uploaded on 02/17/2012 for the course COMPSCI 170 taught by Professor Parr during the Spring '11 term at Duke.