can be done in O(n^2) time. Thus, the above set of equations gives rise to a divide-and-conquer algorithm whose running time T(n) is characterized by the recurrence

T(n) = 8T(n/2) + bn^2,

for some constant b > 0. Unfortunately, this recurrence implies that T(n) is O(n^3) by the master theorem; hence, the algorithm is no better than the straightforward matrix multiplication algorithm.
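To make the recurrence concrete, here is a minimal sketch of one level of the block decomposition, using the quadrant names A through H and I, J, K, L from the text. The helper functions and their names are my own illustration, not the book's code, and it assumes n is even:

```python
def mat_mult(X, Y):
    """Standard O(n^3) multiplication of square matrices (lists of lists)."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(X, Y):
    n = len(X)
    return [[X[i][j] + Y[i][j] for j in range(n)] for i in range(n)]

def quadrants(M):
    """Split M into (top-left, top-right, bottom-left, bottom-right)."""
    h = len(M) // 2
    return ([row[:h] for row in M[:h]], [row[h:] for row in M[:h]],
            [row[:h] for row in M[h:]], [row[h:] for row in M[h:]])

def block_mult(X, Y):
    """One level of divide-and-conquer: 8 half-size multiplications."""
    A, B, C, D = quadrants(X)
    E, F, G, H = quadrants(Y)
    I = mat_add(mat_mult(A, E), mat_mult(B, G))  # top-left of Z
    J = mat_add(mat_mult(A, F), mat_mult(B, H))  # top-right of Z
    K = mat_add(mat_mult(C, E), mat_mult(D, G))  # bottom-left of Z
    L = mat_add(mat_mult(C, F), mat_mult(D, H))  # bottom-right of Z
    # Reassemble Z = [[I, J], [K, L]] from its four quadrants.
    return ([i + j for i, j in zip(I, J)] +
            [k + l for k, l in zip(K, L)])
```

Counting the eight calls to mat_mult on matrices of size (n/2) × (n/2), plus the O(n^2) additions, gives exactly the recurrence T(n) = 8T(n/2) + bn^2.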
Interestingly, there is an algorithm known as Strassen's Algorithm that organizes arithmetic involving the subarrays A through H so that we can compute I, J, K, and L using just seven recursive matrix multiplications. It is somewhat mysterious how Strassen discovered these equations, but we can easily verify that they work correctly. We begin Strassen's Algorithm by defining seven submatrix products:
S1 = A(F − H)
S2 = (A + B)H
S3 = (C + D)E
S4 = D(G − E)
S5 = (A + D)(E + H)
S6 = (B − D)(G + H)
S7 = (A − C)(E + F).

Given these seven submatrix products, we can compute I as
I = S5 + S6 + S4 − S2
  = (A + D)(E + H) + (B − D)(G + H) + D(G − E) − (A + B)H
  = AE + DE + AH + DH + BG − DG + BH − DH + DG − DE − AH − BH
  = AE + BG.

We can compute J as
J = S1 + S2
  = A(F − H) + (A + B)H
  = AF − AH + AH + BH
  = AF + BH.

We can compute K as

K = S3 + S4
  = (C + D)E + D(G − E)
  = CE + DE + DG − DE
  = CE + DG.

Finally, we can compute L as
L = S1 − S7 − S3 + S5
  = A(F − H) − (A − C)(E + F) − (C + D)E + (A + D)(E + H)
  = AF − AH − AE + CE − AF + CF − CE − DE + AE + DE + AH + DH
  = CF + DH.

Thus, we can compute Z = XY using seven recursive multiplications of matrices of
size (n/2) × (n/2); hence, we can characterize the running time T(n) as

T(n) = 7T(n/2) + bn^2,

for some constant b > 0. Thus, by the master theorem, we have the following:

Theorem 5.13: We can multiply two n × n matrices in O(n^(log 7)) time.
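The seven products S1 through S7 translate directly into code. The following is a minimal Python sketch, assuming n is a power of two; the function name and the scalar base case are my own choices, not part of the text:

```python
def strassen(X, Y):
    """Multiply two n x n matrices (n a power of two) via 7 recursive products."""
    n = len(X)
    if n == 1:  # base case: a single scalar multiplication
        return [[X[0][0] * Y[0][0]]]
    h = n // 2
    # Split X into quadrants A, B, C, D and Y into quadrants E, F, G, H.
    A = [row[:h] for row in X[:h]]; B = [row[h:] for row in X[:h]]
    C = [row[:h] for row in X[h:]]; D = [row[h:] for row in X[h:]]
    E = [row[:h] for row in Y[:h]]; F = [row[h:] for row in Y[:h]]
    G = [row[:h] for row in Y[h:]]; H = [row[h:] for row in Y[h:]]

    add = lambda P, Q: [[p + q for p, q in zip(pr, qr)] for pr, qr in zip(P, Q)]
    sub = lambda P, Q: [[p - q for p, q in zip(pr, qr)] for pr, qr in zip(P, Q)]

    # The seven submatrix products from the text.
    S1 = strassen(A, sub(F, H))
    S2 = strassen(add(A, B), H)
    S3 = strassen(add(C, D), E)
    S4 = strassen(D, sub(G, E))
    S5 = strassen(add(A, D), add(E, H))
    S6 = strassen(sub(B, D), add(G, H))
    S7 = strassen(sub(A, C), add(E, F))

    I = sub(add(add(S5, S6), S4), S2)   # S5 + S6 + S4 - S2 = AE + BG
    J = add(S1, S2)                     # S1 + S2 = AF + BH
    K = add(S3, S4)                     # S3 + S4 = CE + DG
    L = sub(add(S1, S5), add(S7, S3))   # S1 - S7 - S3 + S5 = CF + DH
    # Reassemble Z = [[I, J], [K, L]] from its four quadrants.
    return [i + j for i, j in zip(I, J)] + [k + l for k, l in zip(K, L)]
```

The seven recursive calls on (n/2) × (n/2) matrices, together with the O(n^2) work in the additions and subtractions, are exactly what the recurrence T(n) = 7T(n/2) + bn^2 counts.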
Thus, with a fair bit of additional complication, we can perform the multiplication for n × n matrices in O(n^2.808) time, which is o(n^3) time. As admittedly complicated as Strassen's matrix multiplication is, there are actually much more complicated matrix multiplication algorithms, with running times as low as O(n^2.376).

5.3 Dynamic Programming
In this section, we discuss the dynamic programming algorithm-design technique. This technique is similar to the divide-and-conquer technique in that it can be applied to a wide variety of different problems. Conceptually, however, the dynamic programming technique differs from divide-and-conquer: the divide-and-conquer technique can be easily explained in a sentence or two and can be well illustrated with a single example, whereas dynamic programming takes a bit more explaining and multiple examples before it can be fully appreciated.
The extra effort needed to fully appreciate dynamic programming is well worth it, though. There are few algorithmic techniques that can take problems that seem to require exponential time and produce polynomial-time algorithms to solve them. Dynamic programming is one such technique. In addition, the algorithms that result from applications of the dynamic programming technique are usually quite simple, often needing little more than a few lines of code to describe some nested loops for filling in a table.

5.3.1 Matrix Chain-Product
Rather than starting out with an explanation of the general components of the dynamic programming technique, we start out instead by giving a classic, concrete example. Suppose we are given a collection of n two-dimensional matrices for which we wish to compute the product

A = A0 · A1 · A2 · · · An−1,

where Ai is a di × di+1 matrix, for i = 0, 1, 2, . . . , n − 1. In the standard matrix multiplication algorithm (which is the one we will use), to multiply a d × e matrix B times an e × f matrix C, we compute the (i, j)th entry of the product as
∑_{k=0}^{e−1} B[i, k] · C[k, j].

This definition implies that matrix multiplication is associative, that is, it implies that B · (C · D) = (B · C) · D. Thus, we can parenthesize the expression for A any way we wish and we will end up with the same answer. We will not necessarily perform the same number of primitive (that is, scalar) multiplications in each parenthesization, however, as is illustrated in the following example.
Example 5.14: Let B be a 2 × 10 matrix, let C be a 10 × 50 matrix, and let D be a 50 × 20 matrix. Computing B · (C · D) requires 2 · 10 · 20 + 10 · 50 · 20 = 10400 multiplications, whereas computing (B · C) · D requires 2 · 10 · 50 + 2 · 50 · 20 = 3000 multiplications.

The matrix chain-product problem is to determine the parenthesization of the expression defining the product A that minimizes the total number of scalar multiplications performed. Of course, one way to solve this problem is to simply enumerate all the possible ways of parenthesizing the expression for A and determine the numb...
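The counts in Example 5.14 follow from the fact that a (d × e) times (e × f) product costs d · e · f scalar multiplications. A short script can check both parenthesizations; the helper name here is illustrative:

```python
def mult_cost(d1, d2, d3):
    """Scalar multiplications to compute a (d1 x d2) times (d2 x d3) product."""
    return d1 * d2 * d3

# B is 2 x 10, C is 10 x 50, D is 50 x 20 (Example 5.14).
# B * (C * D): first C*D (a 10 x 20 result), then B times that result.
cost_right = mult_cost(10, 50, 20) + mult_cost(2, 10, 20)  # 10000 + 400
# (B * C) * D: first B*C (a 2 x 50 result), then that result times D.
cost_left = mult_cost(2, 10, 50) + mult_cost(2, 50, 20)    # 1000 + 2000
print(cost_right, cost_left)  # prints: 10400 3000
```

The gap between 10400 and 3000 multiplications for the same three matrices is exactly what makes the choice of parenthesization worth optimizing.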