UMass Lowell Computer Science 91.503 Analysis of Algorithms
Prof. Karen Daniels, Spring 2011, Lecture 4
Tuesday, 2/15/11
Graph Algorithms, Part 1: Shortest Paths (Chapters 24-25)

Introductory Graph Concepts
G = (V,E); vertex degree; self-loops.
Undirected graph: no self-loops; adjacency is symmetric.
Directed graph (digraph): degree split into in-degree and out-degree; self-loops allowed.
[Figures: example undirected and directed graphs on vertices A-F]
This treatment follows the 91.503 textbook (Cormen et al.); some definitions differ slightly from other graph literature.

Introductory Graph Concepts: Representations
Undirected graph (vertices A-F):

Adjacency matrix:          Adjacency list:
    A B C D E F
A   0 1 1 0 0 0            A: B, C
B   1 0 1 0 1 1            B: A, C, E, F
C   1 1 0 0 0 0            C: A, B
D   0 0 0 0 1 0            D: E
E   0 1 0 1 0 1            E: B, D, F
F   0 1 0 0 1 0            F: B, E

Directed graph (digraph) on the same vertices:

Adjacency matrix:          Adjacency list:
    A B C D E F
A   0 1 1 0 0 0            A: B, C
B   0 0 1 0 1 1            B: C, E, F
C   0 0 0 0 0 0            C: (none)
D   0 0 0 1 0 0            D: D
E   0 1 0 1 0 0            E: B, D
F   0 0 0 0 1 0            F: E

Introductory Graph Concepts: Paths and Cycles
Path <v0, v1, ..., vk>: length = number of edges; simple if all vertices are distinct.
Directed graph: <v0, v1, ..., vk> forms a cycle if v0 = vk and k >= 1; the cycle is simple if v1, v2, ..., vk are also distinct; a self-loop is a cycle of length 1. (Most of our cycle work will be for directed graphs.) Examples: path <A,B,F>; simple cycle <E,B,F,E>.
Undirected graph: <v0, v1, ..., vk> forms a (simple) cycle if v0 = vk, k >= 3, and v1, v2, ..., vk are distinct. Example: simple cycle <A,B,C,A> = <B,C,A,B>.
Introductory Graph Concepts: Connectivity
Undirected graph: connected means every pair of vertices is joined by a path. Connected components are the equivalence classes under the "is reachable from" relation. [Figures: one graph with a single connected component; another with 2 connected components]
Directed graph: strongly connected means every pair of vertices is reachable from each other. Strongly connected components are the equivalence classes under the "mutually reachable" relation. [Figure: a digraph that is not strongly connected, with one strongly connected component highlighted]

Elementary Graph Algorithms: Searching (DFS, BFS)
For an unweighted directed or undirected graph G = (V,E).
Time: O(V + E) with an adjacency list; O(V^2) with an adjacency matrix.
The predecessor subgraph forms a forest of spanning trees.
Breadth-First Search (BFS): records the shortest-path distance (in edges) from the source to each reachable vertex; the foundation of many shortest-path algorithms.
Depth-First Search (DFS): records encountering and finishing times during traversal; these times have a "well-formed" nested (( )( )) structure.
Vertex color shows status: not yet encountered; encountered but not yet finished; finished.
Every edge of an undirected graph G is either a tree edge or a back edge; the color of a vertex when an edge to it is first tested determines the edge type.
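As a concrete illustration of the bookkeeping described above, here is a minimal DFS sketch (a hypothetical example, not the course handout's pseudocode), run on the example digraph from the representations slide:

```python
# Minimal DFS recording discovery/finishing times.
# Colors: WHITE = not yet encountered, GRAY = encountered but not
# finished, BLACK = finished.
def dfs(adj):
    color = {v: "WHITE" for v in adj}
    disc, fin = {}, {}
    time = [0]  # boxed counter so the nested function can mutate it

    def visit(u):
        time[0] += 1
        disc[u] = time[0]
        color[u] = "GRAY"
        for v in adj[u]:
            if color[v] == "WHITE":
                visit(v)
        color[u] = "BLACK"
        time[0] += 1
        fin[u] = time[0]

    for u in adj:            # restart so every component is covered
        if color[u] == "WHITE":
            visit(u)
    return disc, fin

# Digraph from the representations slide: A->B,C; B->C,E,F; D->D; E->B,D; F->E
adj = {"A": ["B", "C"], "B": ["C", "E", "F"], "C": [],
       "D": ["D"], "E": ["B", "D"], "F": ["E"]}
disc, fin = dfs(adj)
```

The (disc, fin) intervals are properly nested, mirroring the "(( )( ))" structure noted above.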
See the 91.404 DFS/BFS slide show and the DFS/BFS handout for pseudocode.

Shortest Paths (Chapters 24 & 25)
Chapter 24: Single-Source Shortest Paths (but not Section 24.4 on Difference Constraints)
Chapter 25: All-Pairs Shortest Paths (but not Section 25.3 on Johnson's Algorithm for Sparse Graphs)
BFS as a Basis for Some Shortest-Path Algorithms
For an unweighted, undirected graph G = (V,E). BFS time: O(V + E) with an adjacency list; O(V^2) with an adjacency matrix.
Source/Sink Shortest Path: given 2 vertices u, v, find the shortest path in G from u to v. Solution: BFS starting at u; stop at v.
Single-Source Shortest Paths: given a vertex u, find the shortest path in G from u to each vertex. Solution: BFS starting at u; full BFS tree.
All-Pairs Shortest Paths: find the shortest path in G from each vertex u to each vertex v. Solution: for each u, BFS starting at u; full BFS tree. Time: O(V(V + E)) with an adjacency list; O(V^3) with an adjacency matrix.
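The single-source case above can be sketched as follows (a hypothetical illustration, not the handout's pseudocode), using the unweighted, undirected example graph; the first time BFS reaches a vertex, it has found a shortest (fewest-edge) path:

```python
from collections import deque

# BFS from a source vertex: O(V + E) with an adjacency list.
def bfs_distances(adj, source):
    dist = {source: 0}
    parent = {source: None}      # predecessor subgraph = BFS tree
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:    # first encounter = shortest distance
                dist[v] = dist[u] + 1
                parent[v] = u
                queue.append(v)
    return dist, parent

# Undirected example graph from the representations slide.
adj = {"A": ["B", "C"], "B": ["A", "C", "E", "F"], "C": ["A", "B"],
       "D": ["E"], "E": ["B", "D", "F"], "F": ["B", "E"]}
dist, parent = bfs_distances(adj, "A")
```

Following parent pointers from any vertex back to "A" recovers a shortest path, which is exactly the full-BFS-tree solution described above.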
...but for weighted, directed graphs? (source: based on Sedgewick, Graph Algorithms)

Shortest Path Applications
Edge weights for a weighted, directed graph G = (V,E) can be stored in an adjacency matrix or in adjacency (linked) lists.
Weight ~ Cost ~ Distance. Applications: road maps; airline routes; telecommunications network routing; VLSI design routing.
source: based on Sedgewick, Graph Algorithms

Shortest Path Trees
source: Sedgewick, Graph Algorithms
Example from the previous slide; a new example has the same vertices and edges but different edge weights.
A Shortest Path Tree gives the shortest path from the root to each other vertex; it is a spanning tree. Here assume vertex 0 is the root.
[Figure: two shortest-path trees, with edge weights .99, .83, .45, .21, .38, .36, .5, .41, .1, .51]
A shortest path need not be unique, but the shortest-path total weight is unique.

Shortest Path Principles: Optimal Substructure
Lemma: any subpath of a shortest path is a shortest path. Proof: cut-and-paste argument.
Let p be a shortest path from u to v, decomposed as u ~(p_ux)~> x ~(p_xy)~> y ~(p_yv)~> v. Suppose there existed a shorter path p'_xy from x to y, i.e., w(p'_xy) < w(p_xy). Then splicing p'_xy in place of p_xy forms a path from u to v with smaller weight than p, contradicting the assumption that p is a shortest path.
source: 91.503 textbook Cormen et al.

Shortest Path Principles: Relaxation
"Relax" a constraint to try to improve the solution. Relaxation of an edge (u,v): test whether the shortest path to v [found so far] can be improved by going through u.
[Figure: example weighted digraph on vertices A-G with edge weights 1-8, highlighting an edge (u,v) being relaxed]
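The relaxation test can be sketched in a few lines; the names d, pred, and the example weights below are hypothetical, not from the slides:

```python
# Relax edge (u, v): can the shortest-path estimate d[v] be improved
# by going through u? If so, update the estimate and the predecessor.
def relax(u, v, w, d, pred):
    if d[u] + w[(u, v)] < d[v]:
        d[v] = d[u] + w[(u, v)]
        pred[v] = u

d = {"u": 5, "v": 14}          # hypothetical current estimates
w = {("u", "v"): 2}
pred = {"u": None, "v": None}
relax("u", "v", w, d, pred)    # 5 + 2 < 14, so d["v"] improves to 7
```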
source: 91.503 textbook Cormen et al.

Shortest Path Principles (cont.)
For a weighted, directed graph G = (V,E) with no negative-weight cycles. These properties are based on calling INIT-SINGLE-SOURCE once and then calling RELAX zero or more times.
δ(s,v) = shortest-path weight; v.d = shortest-path-weight estimate.
One proof uses the upper-bound property; another uses the convergence property and induction.
[Figure: example weighted digraph on vertices A-G]
source: 91.503 textbook Cormen et al.

Single-Source Shortest Paths (Chapters 24 & 25)
Chapter 24: Single-Source Shortest Paths (but not Section 24.4 on Difference Constraints)

Single-Source Shortest Paths: Bellman-Ford
For a weighted, directed graph G = (V,E) in which negative edge weights are allowed. Time is in O(VE) (why this upper bound?). Each relaxation updates v.d if u.d + w(u,v) < v.d; a final pass detects a negative-weight cycle.
source: 91.503 textbook Cormen et al.

Bellman-Ford (cont.)
Board work: example of negative-weight cycle detection. Note: edges are relaxed here in lexicographic order; other orders are possible.
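A compact Bellman-Ford sketch under the conventions above (the example graph is hypothetical). The |V|-1 passes answer the "why this upper bound?" question: a shortest path has at most |V|-1 edges, so |V|-1 full passes of relaxation suffice.

```python
import math

# Bellman-Ford: |V|-1 passes relaxing every edge, then one extra pass;
# any further improvement means a reachable negative-weight cycle.
def bellman_ford(vertices, edges, source):
    d = {v: math.inf for v in vertices}
    d[source] = 0
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    for u, v, w in edges:            # detection pass
        if d[u] + w < d[v]:
            return None              # negative-weight cycle found
    return d

vertices = ["s", "a", "b"]
edges = [("s", "a", 4), ("s", "b", 5), ("b", "a", -3)]
d = bellman_ford(vertices, edges, "s")   # d["a"] improves via b
```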
source: 91.503 textbook Cormen et al.

Single-Source Shortest Paths in a DAG
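A sketch of the DAG approach: topologically sort the vertices, then relax each vertex's outgoing edges once in that order, for O(V + E) total. The graph and weights below are a hypothetical example, not from the slides.

```python
import math

# DAG shortest paths: topological sort (Kahn-style), then one
# relaxation sweep in topological order.
def dag_shortest_paths(adj, source):
    # adj maps u -> list of (v, weight)
    indeg = {v: 0 for v in adj}
    for u in adj:
        for v, _ in adj[u]:
            indeg[v] += 1
    order, ready = [], [v for v in adj if indeg[v] == 0]
    while ready:
        u = ready.pop()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    d = {v: math.inf for v in adj}
    d[source] = 0
    for u in order:                  # relax edges in topological order
        for v, w in adj[u]:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    return d

# Hypothetical PERT-style DAG with task times as edge weights.
adj = {"s": [("a", 2), ("b", 6)], "a": [("b", 3), ("t", 7)],
       "b": [("t", 1)], "t": []}
d = dag_shortest_paths(adj, "s")
```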
Useful application: PERT charts for scheduling activities.
source: materials to accompany 91.503 textbook Cormen et al.

Single-Source Shortest Paths: Dijkstra's Algorithm
Dijkstra's algorithm solves the problem efficiently for the case in which all weights are nonnegative (as in the example graph). [Figure: example digraph with nonnegative edge weights]
Dijkstra's algorithm maintains a set S of vertices whose final shortest-path weights have already been determined. It also maintains, for each vertex v not in S, an upper bound d[v] on the weight of a shortest path from source s to v. The algorithm repeatedly selects the vertex u in V - S with minimum bound d[u], inserts u into S, and relaxes all edges leaving u (determines whether passing through u makes it "faster" to get to a vertex adjacent to u).

Single-Source Shortest Paths: Dijkstra's Algorithm (cont.)
For a nonnegative-weighted (why?), directed graph G = (V,E). Similar to a weighted version of BFS, with an implicit DECREASE-KEY. Loop invariant: at the start of each iteration of the while loop, v.d = δ(s,v) for all v in S. Greedy strategy: where is the greedy choice being made?
source: 91.503 textbook Cormen et al.

Single-Source Shortest Paths: Dijkstra's Algorithm (cont.)
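The loop described above can be sketched with a binary-heap priority queue; the lazy "already finalized" check below stands in for an explicit DECREASE-KEY, and the example graph is hypothetical:

```python
import heapq
import math

# Dijkstra with a binary heap; nonnegative edge weights only.
def dijkstra(adj, source):
    d = {v: math.inf for v in adj}
    d[source] = 0
    pq = [(0, source)]            # fringe of (estimate, vertex) candidates
    done = set()                  # the set S of finalized vertices
    while pq:
        du, u = heapq.heappop(pq) # greedy choice: minimum estimate d[u]
        if u in done:
            continue              # stale entry; lazy deletion
        done.add(u)
        for v, w in adj[u]:       # relax all edges leaving u
            if du + w < d[v]:
                d[v] = du + w
                heapq.heappush(pq, (d[v], v))
    return d

adj = {"s": [("a", 10), ("b", 5)], "a": [("t", 1)],
       "b": [("a", 3), ("t", 9)], "t": []}
d = dijkstra(adj, "s")
```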
For a nonnegative-weighted, directed graph G = (V,E). How is amortized analysis helping here? With a Fibonacci Heap (Ch. 20), the time is O(V lg V + E) (amortized analysis). PFS = Priority-First Search = generalized graph search with a priority queue to determine the next step. Fringe = set of edges that are candidates for being added next to the shortest-path tree.
sources: Sedgewick, Graph Algorithms & 91.503 textbook Cormen et al.

All-Pairs Shortest Paths (Chapter 25)
Chapter 25: All-Pairs Shortest Paths (but not Section 25.3 on Johnson's Algorithm for Sparse Graphs)

Transitive Closure (Matrix): Unweighted, Directed Graph
Transitive closure concepts will be useful for the all-pairs shortest-path calculation in directed, weighted graphs. "Self-loops" are added to G for algorithmic purposes. The transitive closure graph contains edge (u,v) if there exists a directed path in G from u to v.
source: Sedgewick, Graph Algorithms

Transitive Closure (Matrix)
[Figure: G and G^2, with "self-loops" added for algorithmic purposes]
Boolean matrix product: and, or replace *, +.
source: Sedgewick, Graph Algorithms

Transitive Closure (Matrix)
Algorithm 1: find A, A^2, A^3, ..., A^(V-1) (why this upper limit?).
Time: O(V^4).
Algorithm 2: find A, A^2, A^4, ..., A^V (by repeated squaring).
Time: O(V^3 lg V).
Algorithm 3 [Warshall], time O(V^3):
  for i = 0 to V-1
    for s = 0 to V-1
      for t = 0 to V-1
        if A[s][i] and A[i][t] then A[s][t] = 1
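Algorithm 3 translates directly to code. This sketch assumes a 0/1 adjacency matrix t with self-loops already added, as on the slides; the 4-vertex digraph (0->1->2, 3->0) is a hypothetical example:

```python
# Warshall's O(V^3) transitive closure on a boolean adjacency matrix,
# with self-loops t[s][s] = 1 added for algorithmic purposes.
def warshall(t):
    n = len(t)
    for i in range(n):            # i = maximum intermediate vertex label
        for s in range(n):
            for u in range(n):
                if t[s][i] and t[i][u]:
                    t[s][u] = 1
    return t

# Hypothetical digraph: 0->1, 1->2, 3->0, plus self-loops.
t = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 0],
     [1, 0, 0, 1]]
warshall(t)   # afterwards t[s][u] = 1 iff u is reachable from s
```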
source: Sedgewick, Graph Algorithms

Transitive Closure (Matrix)
Warshall
Good for dense graphs (why?). Successive iterations consider 0, 1, 2, 3, 4, then 5 as the maximum intermediate vertex label.
source: Sedgewick, Graph Algorithms

Transitive Closure (Matrix)
Warshall Correctness, by induction on i:
Inductive hypothesis: the i-th iteration of the loop sets A[s][t] to 1 iff there is a directed path from s to t with (internal) indices at most i.
Inductive step for i+1 (sketch), with 2 cases for a path <s ... t>:
1. All internal indices are at most i: covered by the inductive hypothesis in a prior iteration, so A[s][t] is already set to 1.
2. Some internal index exceeds i (i.e., equals i+1): A[s][i+1] and A[i+1][t] were set in a prior iteration, so A[s][t] is set to 1 in the current iteration.
source: Sedgewick, Graph Algorithms

All Shortest Paths
[Figure: weighted digraph with edge weights .41, .29, .45, .38, .36, .50, .32, .21, .51, and a table of total path weights, e.g., from 1 to 0]
source: Sedgewick, Graph Algorithms

All Shortest Paths (Compact): total distance of shortest path
[Figure: the same weighted digraph with its compact all-pairs tables]
Entry (s,t) gives the next vertex on the shortest path from s to t (our textbook uses the predecessor instead).
source: Sedgewick, Graph Algorithms
Slides courtesy of Prof. Pecelli
Algorithms for ALL shortest-path pairs.
If all the edge weights were nonnegative, we could use Dijkstra's algorithm from each vertex: O(V(V lg V + E)), which, for a dense graph, could be O(V^3).
If we allow negative edge weights, we must use Bellman-Ford: O(VE) from each vertex, which could be O(V^2 E) = O(V^4).
Can we do better? Is there some advantage in looking for algorithms that would compute ALL the path lengths simultaneously?
We separate the problem into two parts: dense graphs and sparse graphs.
3/2/2011
Dense Graphs: the Floyd-Warshall Algorithm.
The family of algorithms normally presented in this context is based on analogs of matrix multiplication, more precisely dynamic programming: graphs are represented as adjacency matrices rather than adjacency lists. This will lead us to the Floyd-Warshall algorithm. Johnson's algorithm, used for sparse graphs and discussed in the textbook (Section 25.3), uses adjacency lists.
Note: social networks (the rage today) seem to be based on "sparse graphs" (E = O(V log V)) even though many nodes have large numbers of edges.
Some notation.
G = (V, E); |V| = n.
W = (w_ij), where w_ij = weight of edge (i, j); w_ii = 0; w_ij = infinity if (i, j) is not in E.
Negative edges are allowed; no negative cycles.
D = (d_ij) = weight matrix for shortest paths: d_ij = δ(i, j).
Π = (π_ij) = predecessor matrix on shortest paths: π_ij = predecessor of j in some shortest path from i to j; π_ij = NIL if i = j or there is no path from i to j.
The i-th row of Π gives a shortest-path tree with root i.
G_π = predecessor graph; G_{π,i} = (V_{π,i}, E_{π,i}) = predecessor subgraph for i, where V_{π,i} = {j in V : π_ij != NIL} ∪ {i} and E_{π,i} = {(π_ij, j) : j in V_{π,i} - {i}}.
Shortest Paths & Matrix Multiplication.
We first need to characterize the structure of the solutions; we try a dynamic programming approach, which will also give us a way to carry the program to success:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct the optimal solution from computed information.
Structure of a shortest path.
By Lemma 24.1 we know that subpaths of shortest paths are shortest paths. Let W = (w_ij) be the adjacency matrix of the graph. Let p be a path from i to j with at most m edges (no negative cycles implies p is finite).
If i = j, p has weight 0 and no edges.
If i != j, p can be decomposed as i ~(p')~> k -> j, where p' has at most m-1 edges. Lemma 24.1 gives p' as a shortest path from i to k, so δ(i, j) = δ(i, k) + w_kj.
We are now ready to set up a recursive dynamic programming algorithm to compute the solution.
Recursive solution.
Let l_ij^(m) denote the minimum weight of any path from i to j using no more than m edges. The recursion gives (Eq. 25.2):
  l_ij^(0) = 0 if i = j, infinity otherwise;
  l_ij^(m) = min( l_ij^(m-1), min over 1 <= k <= n of { l_ik^(m-1) + w_kj } )  for m >= 1.
The actual shortest-path lengths eventually stabilize: δ(i, j) = l_ij^(n-1) = l_ij^(n) = l_ij^(n+1) = ...
Shortest Paths & Matrix Multiplication.
We can translate this idea from Eq. (25.2) into pseudocode, where the input matrices are W and L^(m) and the output is L^(m+1).
If we look at the operations min and +, and replace them by + and *, we see a correspondence with matrix multiplication. Substitutions:
  l^(m-1) -> a
  w -> b
  l^(m) -> c
  min -> +
  + -> *
Each iteration of the algorithm has cost Θ(n^3). Because of the previous argument on termination, the total cost, and hence the running time, is in Θ(n^4) (similar to Transitive Closure Algorithm 1).
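One (min, +) product step, matching the substitution table above; the small matrix W below is a hypothetical example:

```python
import math

# One (min, +) "matrix product" step L' = L x W: the analog of matrix
# multiplication with min in place of + and + in place of *.
def extend_shortest_paths(L, W):
    n = len(L)
    return [[min(L[i][k] + W[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

inf = math.inf
W = [[0, 3, inf],
     [inf, 0, 2],
     [inf, inf, 0]]
L2 = extend_shortest_paths(W, W)   # shortest-path weights using <= 2 edges
```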
We can speed up the running time to Θ(n^3 log n) by using a well-known trick in matrix power computations:
  A^(2n) = A^n * A^n
  A^(2n+1) = A * A^(2n) = A * (A^n * A^n)
This is similar to Transitive Closure Algorithm 2. We'll see this same trick used to good effect in the implementation of RSA encryption.
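The squaring trick applied to the (min, +) product: only about lg(n-1) products are needed because L^(m) stabilizes once m >= n-1. The example matrix is hypothetical.

```python
import math

# (min, +) product of two n x n matrices.
def min_plus(A, B):
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Repeated squaring: L^(2m) = L^(m) x L^(m), stopping once m >= n-1.
def faster_all_pairs(W):
    n = len(W)
    L, m = W, 1
    while m < n - 1:
        L = min_plus(L, L)
        m *= 2
    return L

inf = math.inf
W = [[0, 5, inf, inf],      # hypothetical 4-cycle 0->1->2->3->0
     [inf, 0, 1, inf],
     [inf, inf, 0, 2],
     [1, inf, inf, 0]]
D = faster_all_pairs(W)
```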
Floyd-Warshall: can we find a better dynamic programming algorithm, e.g., Θ(V^3)? Still with no negative-weight cycles.
The only option open to us (short of finding a much better matrix multiplication algorithm) is to find a more efficient characterization of the structure of a shortest path, which is what allows us to use the dynamic programming framework.
Def.: an intermediate vertex of a simple path p = <v1, v2, ..., vl> is any vertex of p other than v1 or vl.
Observation: if V = {v1, v2, ..., vn} = {1, ..., n}, consider a subset {1, 2, ..., k} for some k. For any pair of vertices i, j in V, consider all the paths from i to j whose intermediate vertices are all drawn from {1, 2, ..., k}, and let p be a (simple) minimum-weight path among them.
1. If k is not an intermediate vertex of p, then all intermediate vertices of p are in {1, 2, ..., k-1}. Thus a shortest path from i to j with all intermediate vertices in {1, 2, ..., k-1} is also a shortest path from i to j with all intermediate vertices in {1, 2, ..., k}.
2. If k is an intermediate vertex of p, then decompose p as i ~(p1)~> k ~(p2)~> j. By Lemma 24.1, p1 is a shortest path from i to k with all intermediate vertices in {1, 2, ..., k}. Since k is not an intermediate vertex of p1, all of p1's intermediate vertices are in {1, 2, ..., k-1}: p1 is a shortest path from i to k with intermediate vertices in {1, 2, ..., k-1}; similarly for p2 from k to j.
A picture that might help: [figure: p decomposed as p1 from i to k and p2 from k to j]
A recursive solution. Let d_ij^(k) be the weight of a shortest path from i to j with intermediate vertices in {1, ..., k}. When k = 0, the path can have no intermediate vertices, so d_ij^(0) = w_ij. From the discussion on the previous slides:
  d_ij^(k) = min( d_ij^(k-1), d_ik^(k-1) + d_kj^(k-1) )  for k >= 1.
Since for any path all intermediate vertices are in {1, ..., n}, d_ij^(n) = δ(i, j).
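The recurrence translates into a triply nested loop; the 3-vertex example matrix below (with one negative edge but no negative cycles) is hypothetical:

```python
import math

# Floyd-Warshall: after round k, d[i][j] is the weight of a shortest
# i -> j path whose intermediate vertices all lie in the first k vertices.
def floyd_warshall(W):
    n = len(W)
    d = [row[:] for row in W]            # D^(0) = W
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

inf = math.inf
W = [[0, 3, 8],
     [inf, 0, -4],
     [2, inf, 0]]
D = floyd_warshall(W)   # e.g., 0 -> 1 -> 2 has weight 3 + (-4) = -1
```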
Running time is in Θ(n^3).
Similar to Transitive Closure Algorithm 3 [Warshall].

Shortest Path Algorithms (summary; source: Sedgewick, Graph Algorithms)

Connected Dominating Set Literature
Paper: "On Calculating Connected Dominating Set for Efficient Routing in Ad Hoc Wireless Networks" by Wu and Li, DIAL M 1999.
Goal & Approach of the Paper
Goal: efficient routing for mobile hosts; all hosts have the same wireless transmission range.
Approach: represent the network by an undirected, unweighted graph G = (V, E); 2 nodes are connected if both hosts are within transmission range. Find a small, connected, dominating set of gateway nodes. In addition, this set must include all intermediate nodes on every shortest path between 2 nodes in the dominating set (all-pairs shortest paths). Maintain a routing table for gateway nodes. Use a heuristic, due to the suspected intractability of the minimum dominating set problem.
[Figure: example network with nodes 1-9]
source: "On Calculating Connected Dominating Set for Efficient Routing in Ad-Hoc Wireless Networks"

Dominating Set
[Figure: example graph with nodes 1-9] (source: Garey & Johnson)

Marking Heuristic
Initialize each node's marker to F (false). Each vertex v exchanges its open neighbor set N(v) = {u : {u, v} in E} with each of its neighbors. Each vertex changes its marker to T (true) if it can determine that it has 2 (directly) unconnected neighbors. The resulting V' induces the reduced graph G' = G[V'].
THEOREM 1 (reworded): if G is (not completely) connected, then V' is a connected dominating set of G.
THEOREM 2 (reworded): G' is connected.
THEOREM 3 (reworded): the shortest path between any 2 nodes of G' does not include any non-gateway node as an intermediate node.
source: "On Calculating Connected Dominating Set for Efficient Routing in Ad-Hoc Wireless Networks"

Enhanced Marking Heuristic
RULES 1 & 2 try to reduce the number of gateway nodes, assuming an ordering on vertex id's in V'. Define the closed neighbor set of v in V': N[v] = N(v) ∪ {v}.
RULE 1: consider 2 vertices v and u in G'. If N[v] ⊆ N[u] in G and id(v) < id(u), change the marker of v to F if node v is marked.
RULE 2: given 2 marked neighbors u and w of marked vertex v in G', if N(v) ⊆ N(u) ∪ N(w) in G and id(v) = min{id(v), id(u), id(w)}, then change the marker of v to F.
The paper also discusses a dynamic network with hosts turning on and off. Experimental simulation uses random graphs.
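The basic marking rule can be sketched as follows; the adjacency lists and node ids below are a hypothetical example, and this covers only the initial heuristic, not RULES 1 & 2:

```python
# Wu-Li marking heuristic: mark v as a gateway iff v has two neighbors
# that are not directly connected to each other.
def mark_gateways(adj):
    marked = set()
    for v, nbrs in adj.items():
        for i, u in enumerate(nbrs):
            for w in nbrs[i + 1:]:
                if w not in adj[u]:      # u and w are unconnected
                    marked.add(v)
    return marked

# Hypothetical network: path 1-2-3 joined to triangle 3-4-5.
adj = {1: [2], 2: [1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
marked = mark_gateways(adj)   # only 2 and 3 lie between unconnected pairs
```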
[Figures: RULE 1 and RULE 2 examples with vertices u, v, w and ids id=1, id=2]
source: "On Calculating Connected Dominating Set for Efficient Routing in Ad-Hoc Wireless Networks"