In this subsection, we consider the problem of multiplying big integers, that is,
integers represented by a large number of bits that cannot be handled directly by
the arithmetic unit of a single processor. Multiplying big integers has applications
to data security, where big integers are used in encryption schemes.
Given two big integers I and J represented with n bits each, we can easily
compute I + J and I − J in O(n) time. Efficiently computing the product I · J using
the common grade-school algorithm requires, however, O(n^2) time. In the rest
of this section, we show that by using the divide-and-conquer technique, we can
design a subquadratic-time algorithm for multiplying two n-bit integers.
Let us assume that n is a power of two (if this is not the case, we can pad I and J
with 0's). We can therefore divide the bit representations of I and J in half, with one
half representing the higher-order bits and the other representing the lower-order
bits. In particular, if we split I into I_h and I_l and J into J_h and J_l, then

I = I_h 2^(n/2) + I_l,
J = J_h 2^(n/2) + J_l.

Also, observe that multiplying a binary number I by a power of two, 2^k, is
trivial: it simply involves shifting the number I left (that is, in the higher-order
direction) by k bit positions. Thus, provided a left-shift operation takes constant
time, multiplying an integer by 2^k takes O(k) time.
Let us focus on the problem of computing the product I · J. Given the expansion
of I and J above, we can rewrite I · J as

I · J = (I_h 2^(n/2) + I_l) · (J_h 2^(n/2) + J_l) = I_h J_h 2^n + I_l J_h 2^(n/2) + I_h J_l 2^(n/2) + I_l J_l.
Thus, we can compute I · J by applying a divide-and-conquer algorithm that divides
the bit representations of I and J in half, recursively computes the four products
of n/2 bits each (as described above), and then merges the solutions to
these subproducts in O(n) time using addition and multiplication by powers of two.
We can terminate the recursion when we get down to the multiplication of two 1-bit
numbers, which is trivial. This divide-and-conquer algorithm has a running time
that can be characterized by the following recurrence (for n ≥ 2):

T(n) = 4T(n/2) + cn,

for some constant c > 0. We can then apply the master theorem to note that the
special function is n^(log_b a) = n^(log_2 4) = n^2 in this case; hence, we are in Case 1 and T(n)
is Θ(n^2). Unfortunately, this is no better than the grade-school algorithm.
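The four-product recursion just described can be sketched in Python as follows (the function name and the explicit bit-count parameter n are our own; n is assumed to be a power of two):

```python
def multiply4(i: int, j: int, n: int) -> int:
    """Multiply two n-bit nonnegative integers using four recursive
    subproducts of n/2 bits each (n assumed a power of two)."""
    if n == 1:
        return i * j  # multiplying two 1-bit numbers is trivial
    half = n // 2
    mask = (1 << half) - 1
    ih, il = i >> half, i & mask  # split i into high/low halves
    jh, jl = j >> half, j & mask  # split j into high/low halves
    # Merge: Ih*Jh*2^n + Il*Jh*2^(n/2) + Ih*Jl*2^(n/2) + Il*Jl
    return ((multiply4(ih, jh, half) << n)
            + (multiply4(il, jh, half) << half)
            + (multiply4(ih, jl, half) << half)
            + multiply4(il, jl, half))
```

The shifts implement the multiplications by 2^n and 2^(n/2); together with the additions, the merge step is O(n).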
The master method gives us some insight into how we might improve this
algorithm. If we can reduce the number of recursive calls, then we will reduce the
complexity of the special function used in the master theorem, which is currently
the dominating factor in our running time. Fortunately, if we are a little more clever
in how we define the subproblems to solve recursively, we can in fact reduce the
number of recursive calls by one. In particular, consider the product

(I_h − I_l) · (J_l − J_h) = I_h J_l − I_l J_l − I_h J_h + I_l J_h.
This is admittedly a strange product to consider, but it has an interesting property.
When expanded out, it contains two of the products we want to compute (namely,
I_h J_l and I_l J_h) and two products that we need to compute anyway (namely, I_h J_h and
I_l J_l). Thus, we can compute I · J as follows:

I · J = I_h J_h 2^n + [(I_h − I_l) · (J_l − J_h) + I_h J_h + I_l J_l] 2^(n/2) + I_l J_l.
This computation requires the recursive computation of three products of n/2 bits
each, plus O(n) additional work. Thus, it results in a divide-and-conquer algorithm
with a running time characterized by the following recurrence equation (for n ≥ 2):

T(n) = 3T(n/2) + cn,

for some constant c > 0.
Theorem 5.12: We can multiply two n-bit integers in O(n^1.585) time.

Proof: We apply the master theorem with the special function n^(log_b a) = n^(log_2 3);
hence, we are in Case 1 and T(n) is Θ(n^(log_2 3)), which is itself O(n^1.585).

Using divide-and-conquer, we have designed an algorithm for integer
multiplication that is asymptotically faster than the straightforward quadratic-time method.
We can actually do even better than this, achieving a running time that is "almost"
O(n log n), by using a more complex divide-and-conquer algorithm called the fast
Fourier transform, which we discuss in Section 10.4.

5.2.3 Matrix Multiplication
Suppose we are given two n × n matrices X and Y, and we wish to compute their
product Z = XY, which is defined so that

Z[i, j] = Σ_{k=0}^{n−1} X[i, k] · Y[k, j],

which is an equation that immediately gives rise to a simple O(n^3)-time algorithm.
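The defining equation translates directly into the familiar triple loop (a minimal sketch; the function name is our own, and matrices are represented as lists of lists):

```python
def matmul_naive(x, y):
    """Direct O(n^3) product of two n x n matrices given as lists of lists,
    computing Z[i][j] as the sum over k of X[i][k] * Y[k][j]."""
    n = len(x)
    z = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                z[i][j] += x[i][k] * y[k][j]
    return z
```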
Another way of viewing this product is in terms of submatrices. That is, let
us assume that n is a power of two and let us partition X, Y, and Z each into four
(n/2) × (n/2) matrices, so that we can rewrite Z = XY as

[ I  J ]   [ A  B ] [ E  F ]
[ K  L ] = [ C  D ] [ G  H ].

Thus,

I = AE + BG,
J = AF + BH,
K = CE + DG,
L = CF + DH.

We can use this set of equations in a divide-and-conquer algorithm that
computes Z = XY by computing I, J, K, and L from the subarrays A through H. By the
above equations, we can compute I, J, K, and L from the eight recursively
computed matrix products on (n/2) × (n/2) subarrays, plus four additions that...
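The eight-subproduct scheme just described can be sketched in Python as follows (a sketch under the same assumptions: n a power of two, matrices as lists of lists; all names are our own):

```python
def block_matmul(x, y):
    """Divide-and-conquer n x n matrix product via eight recursively
    computed (n/2) x (n/2) subproducts and four additions."""
    n = len(x)
    if n == 1:
        return [[x[0][0] * y[0][0]]]
    half = n // 2

    def quad(m, r, c):  # extract the half x half submatrix at (r, c)
        return [row[c:c + half] for row in m[r:r + half]]

    def add(m1, m2):  # entrywise sum of two half x half matrices
        return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(m1, m2)]

    a, b, c, d = quad(x, 0, 0), quad(x, 0, half), quad(x, half, 0), quad(x, half, half)
    e, f, g, h = quad(y, 0, 0), quad(y, 0, half), quad(y, half, 0), quad(y, half, half)
    i = add(block_matmul(a, e), block_matmul(b, g))  # I = AE + BG
    j = add(block_matmul(a, f), block_matmul(b, h))  # J = AF + BH
    k = add(block_matmul(c, e), block_matmul(d, g))  # K = CE + DG
    l = add(block_matmul(c, f), block_matmul(d, h))  # L = CF + DH
    # Reassemble Z from its four quadrants.
    return ([ri + rj for ri, rj in zip(i, j)]
            + [rk + rl for rk, rl in zip(k, l)])
```

With eight subproducts the recurrence is T(n) = 8T(n/2) + bn^2, which the master theorem resolves to Θ(n^3), so this version is no faster than the direct triple loop; its interest lies in setting up the submatrix framework.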