…number of multiplications performed by each one. Unfortunately, the set of all
different parenthesizations of the expression for A is equal in number to the set of
all different binary trees that have n external nodes. This number is exponential in
n. Thus, this straightforward ("brute force") algorithm runs in exponential time, since there are an exponential number of ways to parenthesize an associative arithmetic expression (the number equals the nth Catalan number, which is Ω(4^n / n^{3/2})).

Defining Subproblems

We can improve significantly on the brute-force algorithm, however, by making a few observations about the nature of the matrix chain-product problem. The first observation is that the problem can be split into subproblems. In this case, we can define a number of different subproblems, each of which
is to compute the best parenthesization for some subexpression Ai · Ai+1 · · · Aj. As a concise notation, we use Ni,j to denote the minimum number of multiplications needed to compute this subexpression. Thus, the original matrix chain-product problem can be characterized as that of computing the value of N0,n−1. This observation is important, but we need one more in order to apply the dynamic programming technique.

Characterizing Optimal Solutions
The other important observation we can make about the matrix chain-product problem is that it is possible to characterize an optimal solution to a particular subproblem in terms of optimal solutions to its subproblems. We call this property the subproblem optimality condition.
In the case of the matrix chain-product problem, we observe that, no matter how we parenthesize a subexpression, there has to be some final matrix multiplication that we perform. That is, a full parenthesization of a subexpression Ai · Ai+1 · · · Aj has to be of the form (Ai · · · Ak) · (Ak+1 · · · Aj), for some k ∈ {i, i + 1, . . . , j − 1}.
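To see this structure concretely, the following sketch (our own helper, with a hypothetical name) builds every full parenthesization of a chain by choosing the place of the final multiplication, exactly as described above:

```python
def parenthesizations(names):
    """All full parenthesizations of a chain, built from the final split."""
    if len(names) == 1:
        return [names[0]]
    out = []
    for k in range(1, len(names)):          # position of the final multiplication
        for left in parenthesizations(names[:k]):
            for right in parenthesizations(names[k:]):
                out.append("(" + left + "*" + right + ")")
    return out
```

For a chain of four matrices this yields five expressions, such as ((A0*A1)*(A2*A3)) and (A0*(A1*(A2*A3))), matching the Catalan count mentioned earlier.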
Moreover, for whichever k is the right one, the products (Ai · · · Ak) and (Ak+1 · · · Aj) must also be solved optimally. If this were not so, then there would be a globally optimal solution that had one of these subproblems solved suboptimally. But this is impossible, since we could then reduce the total number of multiplications by replacing the current subproblem solution with an optimal solution for that subproblem. This observation implies a way of explicitly defining the optimization problem for Ni,j in terms of other optimal subproblem solutions. Namely, we can compute Ni,j by considering each place k where we could put the final multiplication and taking the
minimum over all such choices.

Chapter 5. Fundamental Techniques

Designing a Dynamic Programming Algorithm
The above discussion implies that we can characterize the optimal subproblem solution Ni,j as

Ni,j = min{ Ni,k + Nk+1,j + di dk+1 dj+1 : i ≤ k < j },

where we note that

Ni,i = 0,
since no work is needed for a subexpression comprising a single matrix. That is, Ni,j is the minimum, taken over all possible places to perform the final multiplication, of the number of multiplications needed to compute each subexpression plus the number of multiplications needed to perform the final matrix multiplication.
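To make the cost of evaluating this equation naively concrete, here is a small sketch (our own instrumentation, not part of the book's algorithm) that counts how many times each subproblem (i, j) is reached when the recurrence is expanded with no table of stored values:

```python
from collections import Counter

def count_subproblem_calls(n):
    """Count how often each subproblem (i, j) is evaluated when the
    recurrence for N_{i,j} is expanded naively, with no table."""
    calls = Counter()

    def visit(i, j):
        calls[(i, j)] += 1
        for k in range(i, j):       # each split recurses on both halves
            visit(i, k)
            visit(k + 1, j)

    visit(0, n - 1)
    return calls
```

For n = 5 there are only 15 distinct subproblems, yet the naive expansion makes 3^{n−1} = 81 visits in total; storing each Ni,j value once is what avoids this blowup.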
The equation for Ni,j looks similar to the recurrence equations we derive for divide-and-conquer algorithms, but this is only a superficial resemblance, for there is an aspect of the equation for Ni,j that makes it difficult to use divide-and-conquer to compute Ni,j. In particular, there is a sharing of subproblems going on that prevents us from dividing the problem into completely independent subproblems (as we would need to do to apply the divide-and-conquer technique). We can, nevertheless, use the equation for Ni,j to derive an efficient algorithm by computing Ni,j values in a bottom-up fashion, and storing intermediate solutions in a table of Ni,j values. We can begin simply enough by assigning Ni,i = 0 for i = 0, 1, . . . , n − 1.
We can then apply the general equation for Ni,j to compute the Ni,i+1 values, since they depend only on the Ni,i and Ni+1,i+1 values, which are available. Given the Ni,i+1 values, we can then compute the Ni,i+2 values, and so on. Therefore, we can build Ni,j values up from previously computed values until we can finally compute the value of N0,n−1, which is the number we are searching for. The details of this dynamic programming solution are given in Algorithm 5.5.
Algorithm MatrixChain(d0, . . . , dn):
   Input: Sequence d0, . . . , dn of integers
   Output: For i, j = 0, . . . , n − 1, the minimum number of multiplications Ni,j needed to compute the product Ai · Ai+1 · · · Aj, where Ak is a dk × dk+1 matrix

   for i ← 0 to n − 1 do
      Ni,i ← 0
   for b ← 1 to n − 1 do
      for i ← 0 to n − b − 1 do
         j ← i + b
         Ni,j ← +∞
         for k ← i to j − 1 do
            Ni,j ← min{Ni,j, Ni,k + Nk+1,j + di dk+1 dj+1}
Algorithm 5.5: Dynamic programming algorithm for the matrix chain-product problem.
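Algorithm 5.5 translates almost line for line into executable code. The sketch below (our own Python rendering; the function name is ours) returns the full table of Ni,j values:

```python
def matrix_chain(d):
    """Dynamic programming for the matrix chain-product problem.

    d is the sequence d_0, ..., d_n, so that A_k is a d[k] x d[k+1]
    matrix; returns the table N with N[i][j] = minimum number of
    scalar multiplications needed to compute A_i * A_{i+1} * ... * A_j.
    """
    n = len(d) - 1
    N = [[0] * n for _ in range(n)]          # N[i][i] = 0 for every i
    for b in range(1, n):                    # b = j - i: subchains of b + 1 matrices
        for i in range(0, n - b):
            j = i + b
            N[i][j] = float("inf")
            for k in range(i, j):            # place of the final multiplication
                N[i][j] = min(N[i][j],
                              N[i][k] + N[k + 1][j] + d[i] * d[k + 1] * d[j + 1])
    return N
```

For example, with d = [10, 30, 5, 60] (a 10×30 times 30×5 times 5×60 chain), N[0][2] is 4500, achieved by multiplying the first two matrices first. The three nested loops give O(n^3) running time using O(n^2) space for the table.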