Lecture Notes CMSC 251

[Figure 27: BFS tree. (Figure omitted.)]

For a directed graph the analysis is essentially the same.

Lecture 23: All-Pairs Shortest Paths (Tuesday, April 21, 1998)

Read: Chapt 26 (up to Section 26.2) in CLR.

All-Pairs Shortest Paths: Last time we showed how to compute shortest paths starting at a designated source vertex, assuming that there are no weights on the edges. Today we talk about a considerable generalization of this problem. First, we compute shortest paths not from a single vertex, but from every vertex in the graph. Second, we allow edges in the graph to have numeric weights.

Let G = (V, E) be a directed graph with edge weights. If (u, v) in E is an edge of G, then the weight of this edge is denoted W(u, v). Intuitively, this weight denotes the distance of the road from u to v, or more generally the cost of traveling from u to v. For now, let us think of the weights as being positive values, but we will see that the algorithm that we are about to present can handle negative weights as well, in special cases. Intuitively a negative weight means that you get paid for traveling from u to v.

Given a path pi = <u_0, u_1, ..., u_k>, the cost of this path is the sum of the edge weights:

    cost(pi) = W(u_0, u_1) + W(u_1, u_2) + ... + W(u_{k-1}, u_k) = sum_{i=1}^{k} W(u_{i-1}, u_i).

(We will avoid using the term length, since it can be confused with the number of edges on the path.) The distance between two vertices is the cost of the minimum cost path between them. We consider the problem of determining the cost of the shortest path between all pairs of vertices in a weighted directed graph. We will present two algorithms for this problem. The first is a rather naive Theta(n^4) algorithm, and the second is a Theta(n^3) algorithm. The latter is called the Floyd-Warshall algorithm.
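The path-cost definition above is easy to check on a small example. The following is a minimal sketch (the graph, its weights, and the helper name path_cost are all illustrative, not part of the notes), representing W as a matrix with infinity marking missing edges:

```python
import math

INF = math.inf  # marks the absence of an edge (u, v)

# A small hypothetical weighted digraph on vertices 0..3.
W = [
    [0,   3,   8,   INF],
    [INF, 0,   INF, 1  ],
    [INF, 4,   0,   INF],
    [2,   INF, INF, 0  ],
]

def path_cost(W, path):
    """cost(pi) = sum over i = 1..k of W[u_{i-1}][u_i]."""
    return sum(W[path[i - 1]][path[i]] for i in range(1, len(path)))

# The path <0, 1, 3> has cost W(0,1) + W(1,3) = 3 + 1 = 4.
print(path_cost(W, [0, 1, 3]))
```

Note that the cost sums over the k edges of the path, not its k+1 vertices, which is exactly why the notes avoid the ambiguous word "length".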
Both algorithms are based on a completely different algorithm design technique, called dynamic programming. For these algorithms, we will assume that the digraph is represented as an adjacency matrix, rather than the more common adjacency list. Recall that adjacency lists are generally more efficient for sparse graphs (and large graphs tend to be sparse). However, storing all the distance information between each pair of vertices will quickly yield a dense digraph (since typically almost every vertex can reach almost every other vertex). Therefore, since the output will be dense, there is no real harm in using the adjacency matrix.

Because both algorithms are matrix-based, we will employ common matrix notation, using i, j and k to denote vertices rather than u, v, and w as we usually do. Let G = (V, E, w) denote the input digraph and its edge weight function. The edge weights may be positive, zero, or negative, but we assume that there are no cycles whose total weight is negative. It is easy to see why this causes problems: if the shortest path ever entered such a cycle, it would never exit. Why? Because by going around the cycle over and over, the cost of the path keeps decreasing, so no path of minimum cost would exist.
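The trouble with negative-weight cycles can be seen concretely. Below is a minimal sketch (the graph and weights are made up for illustration): the cycle 1 -> 2 -> 1 has total weight 1 + (-3) = -2, so each extra trip around it lowers the cost of a walk from 0 to 1 by 2, and no minimum-cost walk exists:

```python
import math

INF = math.inf

# Hypothetical digraph with a negative-weight cycle 1 -> 2 -> 1 (weight -2).
W = [
    [0,   4,   INF],
    [INF, 0,   1  ],
    [INF, -3,  0  ],
]

def cost(path):
    # Sum of edge weights along the walk, as in the cost(pi) definition.
    return sum(W[path[i - 1]][path[i]] for i in range(1, len(path)))

walk = [0, 1]
for _ in range(3):
    print(cost(walk))   # costs decrease: 4, then 2, then 0, ...
    walk += [2, 1]      # go around the negative cycle once more
```

This is why both algorithms in this lecture assume no negative-weight cycles, even though they tolerate individual negative edge weights.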