Thus, we can compute N_{0,n-1} with an algorithm that consists primarily of three
nested for loops. The outer loop is executed n times. The middle loop is executed at most n times. And the innermost loop is also executed at most n times.
Therefore, the total running time of this algorithm is O(n^3).

Theorem 5.15: Given a chain-product of n two-dimensional matrices, we can
compute a parenthesization of this chain that achieves the minimum number of
scalar multiplications in O(n^3) time.
Proof: We have shown above how we can compute the optimal number of scalar
multiplications. But how do we recover the actual parenthesization?
The method for computing the parenthesization itself is actually quite straightforward. We modify the algorithm for computing N_{i,j} values so that any time we
find a new minimum value for N_{i,j}, we store, with N_{i,j}, the index k that allowed us
to achieve this minimum.
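The algorithm described above can be sketched in Python as follows. This is a minimal illustration, not the book's own code: it assumes the matrices are 0-indexed as A_0, ..., A_{n-1}, with A_i of dimensions d[i] × d[i+1], and the function and variable names are illustrative.

```python
def matrix_chain(d):
    """Dynamic program for the matrix chain-product problem.

    d is the dimension list of length n+1: matrix A_i is d[i] x d[i+1].
    N[i][j] holds the minimum number of scalar multiplications needed to
    compute the product A_i ... A_j; split[i][j] records the index k that
    achieved this minimum, so the parenthesization can be recovered.
    """
    n = len(d) - 1                       # number of matrices in the chain
    N = [[0] * n for _ in range(n)]
    split = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):       # length of the subchain A_i ... A_j
        for i in range(n - length + 1):
            j = i + length - 1
            N[i][j] = float('inf')
            for k in range(i, j):        # try every split point k
                cost = N[i][k] + N[k + 1][j] + d[i] * d[k + 1] * d[j + 1]
                if cost < N[i][j]:       # new minimum: remember k as well
                    N[i][j] = cost
                    split[i][j] = k
    return N, split

def parenthesize(split, i, j):
    """Recover the optimal parenthesization of A_i ... A_j as a string."""
    if i == j:
        return "A%d" % i
    k = split[i][j]
    return "(%s %s)" % (parenthesize(split, i, k),
                        parenthesize(split, k + 1, j))
```

For example, `matrix_chain([30, 35, 15, 5, 10, 20, 25])` describes a chain of six matrices; the optimal cost is 15125 scalar multiplications, and `parenthesize` reconstructs the parenthesization that achieves it.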
In Figure 5.6, we illustrate the way the dynamic programming solution to the
matrix chain-product problem fills in the array N: entry N_{i,j} is computed from the entries N_{i,k} and N_{k+1,j}, by minimizing N_{i,k} + N_{k+1,j} + d_i d_{k+1} d_{j+1} over all k.

Figure 5.6: Illustration of the way the matrix chain-product dynamic-programming algorithm fills in the array N.
Now that we have worked through a complete example of the use of the dynamic programming method, let us discuss the general aspects of the dynamic programming technique as it can be applied to other problems.

5.3.2 The General Technique
The dynamic programming technique is used primarily for optimization problems,
where we wish to find the "best" way of doing something. Often the number of
different ways of doing that "something" is exponential, so a brute-force search
for the best is computationally infeasible for all but the smallest problem sizes.
We can apply the dynamic programming technique in such situations, however, if
the problem has a certain amount of structure that we can exploit. This structure
involves the following three components:
Simple Subproblems: There has to be some way of breaking the global optimization problem into subproblems, each having a similar structure to the original
problem. Moreover, there should be a simple way of defining subproblems
with just a few indices, like i, j, k, and so on.

Subproblem Optimality: An optimal solution to the global problem must be a
composition of optimal subproblem solutions, using a relatively simple combining operation. We should not be able to find a globally optimal solution
that contains suboptimal subproblems.

Subproblem Overlap: Optimal solutions to unrelated subproblems can contain
subproblems in common. Indeed, such overlap improves the efficiency of a
dynamic programming algorithm that stores solutions to subproblems.
Now that we have given the general components of a dynamic programming
algorithm, we next give another example of its use.

5.3.3 The 0-1 Knapsack Problem
Suppose a hiker is about to go on a trek through a rain forest carrying a single
knapsack. Suppose further that she knows the maximum total weight W that she
can carry, and she has a set S of n different useful items that she can potentially take
with her, such as a folding chair, a tent, and a copy of this book. Let us assume that
each item i has an integer weight w_i and a benefit value b_i, which is the utility value
that our hiker assigns to item i. Her problem, of course, is to optimize the total
value of the set T of items that she takes with her, without going over the weight
limit W . That is, she has the following objective:
maximize ∑_{i∈T} b_i    subject to    ∑_{i∈T} w_i ≤ W.

Her problem is an instance of the 0-1 knapsack problem. This problem is called
a "0-1" problem, because each item must be entirely accepted or rejected. We
consider the fractional version of this problem in Section 5.1.1, and we study how
knapsack problems arise in the context of Internet auctions in Exercise R5.12.

A First Attempt at Characterizing Subproblems
We can easily solve the 0-1 knapsack problem in Θ(2^n) time, of course, by enumerating all subsets of S and selecting the one that has the highest total benefit from
among all those with total weight not exceeding W. This would be an inefficient
algorithm, however. Fortunately, we can derive a dynamic programming algorithm
for the 0-1 knapsack problem that runs much faster than this in most cases.
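To make the Θ(2^n) baseline concrete, here is a sketch of the brute-force enumeration in Python. It represents each item as a (weight, benefit) pair; the function name and item representation are illustrative assumptions, not part of the text.

```python
from itertools import combinations

def knapsack_brute_force(items, W):
    """Try all 2^n subsets of items, where each item is a (weight, benefit)
    pair, and return the highest total benefit achievable within weight W,
    together with a subset achieving it."""
    best_benefit, best_subset = 0, ()
    for r in range(len(items) + 1):
        for subset in combinations(items, r):     # every subset of size r
            weight = sum(w for w, b in subset)
            benefit = sum(b for w, b in subset)
            if weight <= W and benefit > best_benefit:
                best_benefit, best_subset = benefit, subset
    return best_benefit, best_subset
```

For instance, with items [(2, 3), (3, 4), (4, 5), (5, 6)] and W = 5, the best choice is the first two items, for total benefit 7 at total weight exactly 5. The nested loops visit all 2^n subsets, which is exactly the exponential cost the dynamic programming formulation below is designed to avoid.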
As with many dynamic programming problems, one of the hardest parts of
designing such an algorithm for the 0-1 knapsack problem is to find a nice characterization for subproblems (so that we satisfy the three properties of a dynamic
programming algorithm). To simplify the discussion, number the items in S as
1, 2, ..., n and define, for each k ∈ {1, 2, ..., n}, the subset

S_k = {items in S labeled 1, 2, ..., k}.

One possibility is for us to define subproblems by using a parameter k so that subproblem k is the best way to fill the knapsack using only items from the set S_k. This
is a va...