
Chapter 6

Dynamic programming

In the preceding chapters we have seen some elegant design principles—such as divide-and-conquer, graph exploration, and greedy choice—that yield definitive algorithms for a variety of important computational tasks. The drawback of these tools is that they can only be used on very specific types of problems. We now turn to the two sledgehammers of the algorithms craft, dynamic programming and linear programming, techniques of very broad applicability that can be invoked when more specialized methods fail. Predictably, this generality often comes with a cost in efficiency.

6.1 Shortest paths in dags, revisited

At the conclusion of our study of shortest paths (Chapter 4), we observed that the problem is especially easy in directed acyclic graphs (dags). Let's recapitulate this case, because it lies at the heart of dynamic programming.

The special distinguishing feature of a dag is that its nodes can be linearized; that is, they can be arranged on a line so that all edges go from left to right (Figure 6.1).

[Figure 6.1: A dag on nodes S, A, B, C, D, E with edge lengths, and its linearization (topological ordering) S, C, A, B, D, E.]

To see why this helps with shortest paths, suppose we want to figure out distances from node S to the other nodes. For concreteness, let's focus on node D. The only way to get to it is through its
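Linearization is just topological ordering, and it can be computed in linear time. A minimal sketch, using Kahn's algorithm (repeatedly peel off a node with no remaining incoming edges); the edge pairs below are our reading of Figure 6.1, not something stated in the text:

```python
from collections import deque

def linearize(nodes, edges):
    """Topologically order a dag (Kahn's algorithm): repeatedly
    output a node whose remaining in-degree is zero."""
    indegree = {v: 0 for v in nodes}
    for u, v in edges:
        indegree[v] += 1
    queue = deque(v for v in nodes if indegree[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for x, y in edges:
            if x == u:
                indegree[y] -= 1
                if indegree[y] == 0:
                    queue.append(y)
    return order

# Edge pairs as we read them off Figure 6.1 (an assumption, lengths omitted).
edges = [("S", "A"), ("S", "C"), ("C", "A"), ("A", "B"),
         ("C", "D"), ("B", "D"), ("B", "E"), ("D", "E")]
order = linearize(["S", "A", "B", "C", "D", "E"], edges)
# In a valid linearization, every edge goes from left to right.
assert all(order.index(u) < order.index(v) for u, v in edges)
```

Any valid topological order works for the shortest-path computation that follows; a dag may have several.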
predecessors, B or C; so to find the shortest path to D, we need only compare these two routes:

dist(D) = min{dist(B) + 1, dist(C) + 3}.

A similar relation can be written for every node. If we compute these dist values in the left-to-right order of Figure 6.1, we can always be sure that by the time we get to a node v, we already have all the information we need to compute dist(v). We are therefore able to compute all distances in a single pass:

    initialize all dist(·) values to ∞
    dist(s) = 0
    for each v ∈ V \ {s}, in linearized order:
        dist(v) = min_{(u,v) ∈ E} {dist(u) + l(u, v)}

Notice that this algorithm is solving a collection of subproblems, {dist(u) : u ∈ V}. We start with the smallest of them, dist(s), since we immediately know its answer to be 0. We then proceed with progressively "larger" subproblems—distances to vertices that are further and further along in the linearization—where we are thinking of a subproblem as large if we need to have solved a lot of other subproblems before we can get to it.

This is a very general technique. At each node, we compute some function of the values of the node's predecessors. It so happens that our particular function is a minimum of sums, but we could just as well make it a maximum, in which case we would get longest paths in the dag. Or we could use a product instead of a sum inside the brackets, in which case we would end up computing the path with the smallest product of edge lengths.
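The single-pass algorithm can be sketched directly in Python. The text above confirms only the two edge lengths entering D (B→D of length 1, C→D of length 3); the remaining lengths are our reading of Figure 6.1 and should be treated as assumptions. The `combine` parameter makes the min/max generality explicit:

```python
import math

def dag_paths(order, edges, s, combine=min):
    """Compute dist values over a dag given in linearized order.
    `edges` maps (u, v) to the length l(u, v); pass combine=min for
    shortest paths or combine=max for longest paths."""
    worst = math.inf if combine is min else -math.inf
    dist = {v: worst for v in order}
    dist[s] = 0
    for v in order:                     # left-to-right: predecessors are done
        if v == s:
            continue
        via_preds = [dist[u] + l for (u, w), l in edges.items() if w == v]
        if via_preds:
            dist[v] = combine(via_preds)
    return dist

# Edge lengths as we read them off Figure 6.1 (only B->D = 1 and
# C->D = 3 are confirmed by the text above; the rest are assumed).
edges = {("S", "A"): 1, ("S", "C"): 2, ("C", "A"): 4, ("A", "B"): 6,
         ("C", "D"): 3, ("B", "D"): 1, ("B", "E"): 2, ("D", "E"): 1}
order = ["S", "C", "A", "B", "D", "E"]  # the linearization from Figure 6.1

dist = dag_paths(order, edges, "S")
# Exactly the relation in the text: dist(D) = min{dist(B)+1, dist(C)+3}
assert dist["D"] == min(dist["B"] + 1, dist["C"] + 3)

# Swapping min for max yields longest paths in the dag, as noted above.
longest = dag_paths(order, edges, "S", combine=max)
```

Because every predecessor of v appears before v in the linearized order, each dist value is final the moment it is computed, which is why one pass suffices.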

This note was uploaded on 12/15/2009 for the course CS 473 taught by Professor Viswanathan during the Spring '08 term at University of Illinois at Urbana–Champaign.


