Algorithms    Lecture 6: Advanced Dynamic Programming Tricks    [Fa'10]

    Ninety percent of science fiction is crud. But then, ninety percent of everything
    is crud, and it's the ten percent that isn't crud that is important.
        — [Theodore] Sturgeon's Law (1953)

6  Advanced Dynamic Programming Tricks ⋆

Dynamic programming is a powerful technique for efficiently solving recursive problems, but it's hardly the end of the story. In many cases, once we have a basic dynamic programming algorithm in place, we can make further improvements to bring down the running time or the space usage. We saw one example in the Fibonacci number algorithm. Buried inside the naïve iterative Fibonacci algorithm is a recursive problem—computing a power of a matrix—that can be solved more efficiently by dynamic programming techniques—in this case, repeated squaring.

6.1  Saving Space: Divide and Conquer

Just as we did for the Fibonacci recurrence, we can reduce the space complexity of our edit distance algorithm from O(mn) to O(m + n) by only storing the current and previous rows of the memoization table. This 'sliding window' technique provides an easy space improvement for most (but not all) dynamic programming algorithms.

Unfortunately, this technique seems to be useful only if we are interested in the cost of the optimal edit sequence, not if we want the optimal edit sequence itself. By throwing away most of the table, we apparently lose the ability to walk backward through the table to recover the optimal sequence.

Fortunately for memory-misers, in 1975 Dan Hirschberg discovered a simple divide-and-conquer strategy that allows us to compute the optimal edit sequence in O(mn) time, using just O(m + n) space. The trick is to record not just the edit distance for each pair of prefixes, but also a single position in the middle of the optimal editing sequence for that prefix. Specifically, any optimal editing sequence that transforms A[1..m] into B[1..
n] can be split into two smaller editing sequences: one transforming A[1..m/2] into B[1..h] for some integer h, the other transforming A[m/2 + 1..m] into B[h + 1..n].

To compute this breakpoint h, we define a second function Half(i, j) such that some optimal edit sequence from A[1..i] to B[1..j] contains an optimal edit sequence from A[1..m/2] to B[1..Half(i, j)]. We can define this function recursively as follows:

    Half(i, j) =
        ∞                if i < m/2
        j                if i = m/2
        Half(i−1, j)     if i > m/2 and Edit(i, j) = Edit(i−1, j) + 1
        Half(i, j−1)     if i > m/2 and Edit(i, j) = Edit(i, j−1) + 1
        Half(i−1, j−1)   otherwise

(Because there may be more than one optimal edit sequence, this is not the only correct definition.) A simple inductive argument implies that Half(m, n) is indeed the correct value of h. We can easily modify our earlier algorithm so that it computes Half...
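The 'sliding window' trick from the start of this section can be sketched in Python. The function name `edit_distance` is mine; the cost model (unit cost for each insertion, deletion, and substitution) matches the standard edit distance recurrence assumed in these notes:

```python
def edit_distance(A, B):
    """Edit distance between strings A and B in O(m + n) space.

    Only the previous and current rows of the DP table are kept
    (the 'sliding window'), so we recover the optimal *cost* but
    not the edit sequence itself.
    """
    m, n = len(A), len(B)
    prev = list(range(n + 1))              # row 0: Edit(0, j) = j insertions
    for i in range(1, m + 1):
        curr = [i] + [0] * n               # Edit(i, 0) = i deletions
        for j in range(1, n + 1):
            curr[j] = min(
                prev[j] + 1,                           # delete A[i]
                curr[j - 1] + 1,                       # insert B[j]
                prev[j - 1] + (A[i - 1] != B[j - 1]),  # match or substitute
            )
        prev = curr
    return prev[n]
```

For example, `edit_distance("FOOD", "MONEY")` returns 4 (three substitutions plus one insertion), while using only two rows of length n + 1 instead of the full (m + 1) × (n + 1) table.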
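The Half recurrence can be evaluated row by row alongside Edit, again keeping only two rows of each table. Here is a sketch of that modification; the name `half_breakpoint` and the tie-breaking order (deletion, then insertion, then diagonal) are my choices, since, as noted above, other tie-breaks are equally correct:

```python
def half_breakpoint(A, B):
    """Return (Edit(m, n), Half(m, n)) using O(m + n) space.

    Half(i, j) records the column where some optimal edit sequence
    for A[1..i] -> B[1..j] crosses row mid = m // 2, following the
    recurrence in the notes.  Half(m, n) is Hirschberg's breakpoint h.
    """
    m, n = len(A), len(B)
    mid = m // 2
    INF = float("inf")
    prev_e = list(range(n + 1))                      # Edit(0, j) = j
    # Half(0, j): defined (= j) only if row 0 is the middle row.
    prev_h = list(range(n + 1)) if mid == 0 else [INF] * (n + 1)
    for i in range(1, m + 1):
        curr_e = [i] + [0] * n                       # Edit(i, 0) = i
        if i < mid:
            curr_h = [INF] * (n + 1)                 # haven't reached row mid
        elif i == mid:
            curr_h = list(range(n + 1))              # Half(mid, j) = j
        else:
            curr_h = [prev_h[0]] + [0] * n           # column 0: only deletions
        for j in range(1, n + 1):
            curr_e[j] = min(
                prev_e[j] + 1,                           # delete A[i]
                curr_e[j - 1] + 1,                       # insert B[j]
                prev_e[j - 1] + (A[i - 1] != B[j - 1]),  # match or substitute
            )
            if i > mid:                              # propagate the breakpoint
                if curr_e[j] == prev_e[j] + 1:
                    curr_h[j] = prev_h[j]            # came via deletion
                elif curr_e[j] == curr_e[j - 1] + 1:
                    curr_h[j] = curr_h[j - 1]        # came via insertion
                else:
                    curr_h[j] = prev_h[j - 1]        # came via the diagonal
        prev_e, prev_h = curr_e, curr_h
    return prev_e[n], prev_h[n]
```

For A = "FOOD", B = "MONEY" this returns edit distance 4 with breakpoint h = 2: some optimal sequence transforms "FO" into "MO" and "OD" into "NEY", and the two subproblem costs (1 and 3) sum to 4. Recursing on the two halves, as Hirschberg's algorithm does, recovers the full edit sequence in O(mn) time and O(m + n) space.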