Algorithms
Non-Lecture C: Advanced Dynamic Programming Tricks
Ninety percent of science fiction is crud.
But then, ninety percent of everything is crud,
and it's the ten percent that isn't crud that is important.
— [Theodore] Sturgeon's Law (1953)
C Advanced Dynamic Programming Tricks
Dynamic programming is a powerful technique for efficiently solving recursive problems, but it's hardly
the end of the story. In many cases, once we have a basic dynamic programming algorithm in place,
we can make further improvements to bring down the running time or the space usage. We saw one
example in the Fibonacci number algorithm. Buried inside the naïve iterative Fibonacci algorithm is a
recursive problem (computing a power of a matrix) that can be solved more efficiently by dynamic
programming techniques, in this case repeated squaring.
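To make the aside above concrete, here is a minimal sketch of the repeated-squaring idea (the function names are mine, not from the text): the nth Fibonacci number is the top-right entry of the matrix [[1,1],[1,0]] raised to the nth power, and that power can be computed with O(log n) matrix multiplications.

```python
def mat_mult(X, Y):
    """Multiply two 2x2 matrices, each given as a row-major 4-tuple."""
    (a, b, c, d), (e, f, g, h) = X, Y
    return (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

def fib(n):
    """Compute the nth Fibonacci number via repeated squaring of
    [[1,1],[1,0]], using O(log n) matrix multiplications."""
    result = (1, 0, 0, 1)            # identity matrix
    base = (1, 1, 1, 0)              # the Fibonacci matrix
    while n > 0:
        if n & 1:                    # this bit of n is set:
            result = mat_mult(result, base)
        base = mat_mult(base, base)  # square the base each round
        n >>= 1
    return result[1]                 # top-right entry is F(n)
```

Each arithmetic operation here acts on numbers that grow with n, so the "O(log n)" count is in matrix multiplications, not bit operations.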
C.1 Saving Space: Divide and Conquer
Just as we did for the Fibonacci recurrence, we can reduce the space complexity of our edit distance
algorithm from O(mn) to O(m + n) by storing only the current and previous rows of the memoization
table. This 'sliding window' technique provides an easy space improvement for most (but not all)
dynamic programming algorithms.
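The sliding-window technique can be sketched as follows (a sketch, not code from the text; the cost model is the standard unit-cost insertion/deletion/substitution recurrence for Edit(i, j)):

```python
def edit_distance(A, B):
    """Edit distance between strings A and B, keeping only two rows of
    the memoization table: O(mn) time, O(n) <= O(m + n) space."""
    m, n = len(A), len(B)
    prev = list(range(n + 1))        # row i = 0: Edit(0, j) = j
    for i in range(1, m + 1):
        curr = [i] + [0] * n         # Edit(i, 0) = i
        for j in range(1, n + 1):
            cost = 0 if A[i - 1] == B[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # delete A[i]
                          curr[j - 1] + 1,     # insert B[j]
                          prev[j - 1] + cost)  # substitute or match
        prev = curr                  # slide the window down one row
    return prev[n]
```

Only the final numerical distance survives; the full table, and with it the ability to trace back an optimal edit sequence, is gone.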
Unfortunately, this technique seems to be useful only if we are interested in the cost of the optimal
edit sequence, not if we want the optimal edit sequence itself. By throwing away most of the rows of
the table, we apparently lose the ability to walk backward through the table and reconstruct the actual
editing sequence. Now what?
Fortunately for memory misers, in 1975 Dan Hirschberg discovered a simple divide-and-conquer
strategy that allows us to compute the optimal edit sequence in O(mn) time, using just O(m + n) space.
The trick is to record not just the edit distance for each pair of prefixes, but also a single position in the
middle of the editing sequence for that prefix. Specifically, the optimal editing sequence that transforms
A[1..m] into B[1..n] can be split into two smaller editing sequences, one transforming A[1..m/2]
into B[1..h] for some integer h, the other transforming A[m/2 + 1..m] into B[h + 1..n]. To compute this
breakpoint h, we define a second function Half(i, j) as follows:
Half(i, j) =
    ∞                 if i < m/2
    j                 if i = m/2
    Half(i − 1, j)    if i > m/2 and Edit(i, j) = Edit(i − 1, j) + 1
    Half(i, j − 1)    if i > m/2 and Edit(i, j) = Edit(i, j − 1) + 1
    Half(i − 1, j − 1)    otherwise
A simple inductive argument implies that Half(m, n) is the correct value of h. We can easily modify our
earlier algorithm so that it computes Half(m, n) at the same time as the edit distance Edit(m, n), all in
O(mn) time, using only O(m) space.
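One way this modification might look (a sketch under my own naming, not Hirschberg's complete algorithm — only the subroutine that finds the breakpoint h alongside the edit distance, keeping two rows of each table):

```python
def edit_and_half(A, B):
    """Compute (Edit(m, n), Half(m, n)) in one pass over the table,
    storing only two rows of each: O(mn) time, O(n) space."""
    m, n = len(A), len(B)
    INF = float('inf')
    half_row = m // 2
    # Row i = 0: Edit(0, j) = j; Half(0, j) follows the first two cases.
    edit_prev = list(range(n + 1))
    half_prev = list(range(n + 1)) if half_row == 0 else [INF] * (n + 1)
    for i in range(1, m + 1):
        edit_curr = [i] + [0] * n          # Edit(i, 0) = i
        half_curr = [0] * (n + 1)
        for j in range(n + 1):
            if j > 0:
                cost = 0 if A[i - 1] == B[j - 1] else 1
                edit_curr[j] = min(edit_prev[j] + 1,       # deletion
                                   edit_curr[j - 1] + 1,   # insertion
                                   edit_prev[j - 1] + cost)
            # The Half recurrence, case by case.
            if i < half_row:
                half_curr[j] = INF
            elif i == half_row:
                half_curr[j] = j
            elif edit_curr[j] == edit_prev[j] + 1:
                # (Always fires when j == 0, since Edit(i, 0) = Edit(i-1, 0) + 1,
                # so the branches below never index at j - 1 = -1.)
                half_curr[j] = half_prev[j]
            elif edit_curr[j] == edit_curr[j - 1] + 1:
                half_curr[j] = half_curr[j - 1]
            else:
                half_curr[j] = half_prev[j - 1]
        edit_prev, half_prev = edit_curr, half_curr
    return edit_prev[n], half_prev[n]
```

The returned h splits B so that the two half-size subproblems can then be solved recursively, which is where the divide-and-conquer part of Hirschberg's strategy takes over.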
Now, to compute the optimal editing sequence that transforms