
# Chapter 3: Solving Problems by Searching


## Outline

- Problem-solving agents
- Problem types
- Problem formulation
- Example problems
- Basic search algorithms

## Problem-solving agents

```
function Simple-Problem-Solving-Agent(percept) returns an action
    static: seq, an action sequence, initially empty
            state, some description of the current world state
            goal, a goal, initially null
            problem, a problem formulation

    state ← Update-State(state, percept)
    if seq is empty then
        goal ← Formulate-Goal(state)
        problem ← Formulate-Problem(state, goal)
        seq ← Search(problem)
        if seq is failure then return a null action
    action ← First(seq)
    seq ← Rest(seq)
    return action
```

Note: this is offline problem solving; the solution is executed "eyes closed." Online problem solving involves acting without complete knowledge.

## Example: Romania

On holiday in Romania; currently in Arad. Flight leaves tomorrow from Bucharest.

- Formulate goal: be in Bucharest
- Formulate problem:
  - states: various cities
  - actions: drive between cities
- Find solution: sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
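The agent loop above can be sketched in Python. This is a minimal illustration, not a reference implementation: the `formulate_goal`, `formulate_problem`, and `search` callables are hypothetical stand-ins supplied by the caller, and the `update_state` shown simply takes the percept as the state (i.e., it assumes full observability).

```python
class SimpleProblemSolvingAgent:
    """Sketch of Simple-Problem-Solving-Agent: plan once, then execute
    the stored action sequence 'eyes closed' until it runs out."""

    def __init__(self, formulate_goal, formulate_problem, search):
        self.seq = []          # action sequence, initially empty
        self.state = None      # current world-state description
        self.formulate_goal = formulate_goal        # hypothetical helper
        self.formulate_problem = formulate_problem  # hypothetical helper
        self.search = search                        # hypothetical helper

    def update_state(self, state, percept):
        # Assumes a fully observable world: the percept IS the state.
        return percept

    def __call__(self, percept):
        self.state = self.update_state(self.state, percept)
        if not self.seq:                        # no plan left: (re)plan
            goal = self.formulate_goal(self.state)
            problem = self.formulate_problem(self.state, goal)
            self.seq = self.search(problem) or []   # failure -> empty plan
            if not self.seq:
                return None                     # null action
        # action <- First(seq); seq <- Rest(seq)
        return self.seq.pop(0)
```

With a search stub that always returns the Arad-to-Bucharest plan from the Romania example, calling the agent repeatedly yields one action per step.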

## Example: Romania

*(Figure: map of Romania with road distances in km, e.g., Arad-Sibiu 140, Sibiu-Fagaras 99, Fagaras-Bucharest 211, Rimnicu Vilcea-Pitesti 97, Pitesti-Bucharest 101.)*

## Problem types

- Deterministic, fully observable ⇒ **single-state problem**
  - Agent knows exactly which state it will be in; solution is a sequence
- Non-observable ⇒ **conformant problem**
  - Agent may have no idea where it is; solution (if any) is a sequence
- Nondeterministic and/or partially observable ⇒ **contingency problem**
  - Percepts provide new information about the current state
  - Solution is a contingent plan or a policy
  - Often interleave search and execution
- Unknown state space ⇒ **exploration problem** ("online")

## Example: vacuum world

*(Figure: the eight vacuum-world states, numbered 1-8.)*

- Single-state, start in #5. Solution? [Right, Suck]
- Conformant, start in {1, 2, 3, 4, 5, 6, 7, 8}; e.g., Right goes to {2, 4, 6, 8}. Solution?
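The outline's "basic search algorithms" can be previewed on the Romania map. The sketch below, under the assumption that the map is stored as a plain adjacency dict (only the relevant portion is included), uses breadth-first search to find the Arad-to-Bucharest route from the earlier slide. Note BFS minimizes the number of road segments, not driving distance; minimizing kilometers would require a cost-aware search such as uniform-cost search.

```python
from collections import deque

# Partial road map taken from the slide's figure (undirected roads).
roads = {
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Zerind": ["Arad", "Oradea"],
    "Oradea": ["Zerind", "Sibiu"],
    "Timisoara": ["Arad", "Lugoj"],
    "Lugoj": ["Timisoara", "Mehadia"],
    "Sibiu": ["Arad", "Oradea", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti", "Craiova"],
    "Pitesti": ["Rimnicu Vilcea", "Craiova", "Bucharest"],
    "Craiova": ["Rimnicu Vilcea", "Pitesti", "Dobreta"],
    "Bucharest": ["Fagaras", "Pitesti", "Giurgiu", "Urziceni"],
}

def bfs(start, goal):
    """Breadth-first search; returns a path with the fewest road segments."""
    frontier = deque([[start]])   # queue of partial paths
    explored = set()
    while frontier:
        path = frontier.popleft()
        city = path[-1]
        if city == goal:
            return path
        if city in explored:
            continue
        explored.add(city)
        for nxt in roads.get(city, []):
            frontier.append(path + [nxt])
    return None                    # no route found
```

Running `bfs("Arad", "Bucharest")` recovers the solution sequence from the slide: Arad, Sibiu, Fagaras, Bucharest.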
- Conformant solution: [Right, Suck, Left, Suck]
- Contingency:
  - Murphy's Law (nondeterministic): Suck can dirty a clean carpet; start in #5.
  - Local sensing (partially observable): percepts give dirt at the current location only; start in {#5, #7}. Solution?
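The conformant case can be checked mechanically by tracking a belief state, i.e., the set of states the agent might be in. The numbering below is an assumption inferred from the slide's facts (Right maps {1..8} to {2, 4, 6, 8}, and #5 is solved by [Right, Suck]): odd states have the agent in the left square, even in the right, with pairs ordered both-dirty, left-dirty, right-dirty, clean.

```python
# Assumed state numbering (loc, dirt-left, dirt-right), consistent with
# the slide: 1/2 both dirty, 3/4 left dirty, 5/6 right dirty, 7/8 clean.
STATES = {1: ('L', 1, 1), 2: ('R', 1, 1), 3: ('L', 1, 0), 4: ('R', 1, 0),
          5: ('L', 0, 1), 6: ('R', 0, 1), 7: ('L', 0, 0), 8: ('R', 0, 0)}
NUM = {v: k for k, v in STATES.items()}

def step(s, action):
    """Deterministic transition model (no Murphy's Law here)."""
    loc, dl, dr = STATES[s]
    if action == 'Right':
        loc = 'R'
    elif action == 'Left':
        loc = 'L'
    elif action == 'Suck':
        if loc == 'L':
            dl = 0
        else:
            dr = 0
    return NUM[(loc, dl, dr)]

def predict(belief, plan):
    """Apply a plan to a belief state (a set of possible states)."""
    for a in plan:
        belief = {step(s, a) for s in belief}
    return belief
```

Applying [Right, Suck, Left, Suck] to the full belief state {1, ..., 8} collapses it to a single clean state, which is why that sequence is a conformant solution: it works no matter where the agent started.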
