CS221 Lecture notes
Basic search

Recall that, in the last lecture, we showed how discretization techniques, such as grids, visibility graphs, and probabilistic roadmaps, can be used to convert a continuous motion-planning problem into a search problem on a discrete graph. How can we search graphs like these efficiently? In the next two lectures, we will present algorithms for solving search problems on discrete graphs. We will first discuss blind search (also called uninformed search), where we know nothing except for the nodes and edges that make up the graph. We will then describe heuristic search, in which we use knowledge about the problem to greatly speed up the search. These search algorithms are quite general, and are widely used in many areas of AI. Throughout this lecture, our motivating example will be a toy problem known as the 8-puzzle, shown in Figure 1.

1 Search formalism

A discrete graph search problem comprises:

States. These correspond to the possible states of the world, such as points in configuration space, or board positions for the 8-puzzle. We typically denote an individual state by s, and the set of all states by S.

(Directed) edges. There is a directed edge from state s1 to state s2 if s2 can be reached from s1 in one step. We assume directed edges for generality; an undirected edge can be represented as a pair of directed edges. We typically use e to denote an edge, and E the set of all edges.

Cost function. A nonnegative function g : E → R+. (This notation means g is a function mapping from the set of edges E into the set of nonnegative real numbers R+.)

Figure 1: The 8-puzzle. There are 8 tiles, numbered 1 through 8, which slide around on the board. In the initial state, the tiles are scrambled. The goal is to put the numbers in order, with the blank square in the lower right-hand corner. (a) The initial state. (b) The goal state.

Initial state. Usually a single state s ∈ S.

Goal.
For generality, we represent the goal with a goal test, a function that tells us whether any particular state s is a goal state. This is because our task may be to reach any state within some goal region, rather than one very specific, single goal state. For instance, in motion planning, the goal test might be "the end of the finger presses the elevator button." This cannot be described with a single goal state, since many different configurations of the robot's joints are probably consistent with the goal. In problems where we are interested in reaching one particular goal state, the goal test is simply a function that returns true for a state s if and only if s is that goal state.

Given this definition of a search problem, there are two ways that we can represent the search space:

Explicitly. In this case, we explicitly construct the entire graph of the search space in computer memory. Thus, we would create a list of all the states, all the edges, and all the costs. The goal test is also represented explicitly.
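To make the formalism concrete, here is a minimal Python sketch of the 8-puzzle as a search problem, together with a tiny explicit graph for contrast. The names (`neighbors`, `is_goal`, `explicit_graph`) and the toy 4-node graph are illustrative choices, not from the notes:

```python
from typing import Dict, Iterator, List, Tuple

State = Tuple[int, ...]  # 8-puzzle board in row-major order; 0 marks the blank square

def neighbors(s: State) -> Iterator[Tuple[State, int]]:
    """Yield (successor state, edge cost) pairs.

    Each directed edge corresponds to sliding one tile into the
    blank square; every move has cost 1.
    """
    i = s.index(0)                   # position of the blank
    row, col = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = 3 * r + c
            t = list(s)
            t[i], t[j] = t[j], t[i]  # slide the tile at j into the blank
            yield tuple(t), 1

GOAL: State = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # Figure 1(b): blank at lower right

def is_goal(s: State) -> bool:
    """Goal test: here only one state qualifies, but in general any
    predicate over states (a goal region) will do."""
    return s == GOAL

# An *explicit* representation, by contrast, lists every state, edge,
# and cost up front, e.g. as an adjacency map (toy 4-node graph):
explicit_graph: Dict[str, List[Tuple[str, float]]] = {
    "a": [("b", 1.0), ("c", 4.0)],
    "b": [("c", 1.0), ("d", 5.0)],
    "c": [("d", 1.0)],
    "d": [],
}
```

Note that `neighbors` never builds the full graph: the 8-puzzle has 9!/2 reachable states, so generating edges on demand is far cheaper than the explicit adjacency-map style shown at the bottom.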
Winter '09 · KOLLER, NG · Artificial Intelligence