In this subsection, we consider the problem of multiplying big integers, that is, integers represented by a large number of bits that cannot be handled directly by the arithmetic unit of a single processor. Multiplying big integers has applications to data security, where big integers are used in encryption schemes. Given two big integers I and J represented with n bits each, we can easily compute I + J and I − J in O(n) time. Efficiently computing the product I · J using the common grade-school algorithm requires, however, $O(n^2)$ time. In the rest of this section, we show that by using the divide-and-conquer technique, we can design a subquadratic-time algorithm for multiplying two n-bit integers.

Let us assume that n is a power of two (if this is not the case, we can pad I and J with 0's). We can therefore divide the bit representations of I and J in half, with one half representing the higher-order bits and the other representing the lower-order bits. In particular, if we split I into $I_h$ and $I_l$ and J into $J_h$ and $J_l$, then

$$I = I_h 2^{n/2} + I_l, \qquad J = J_h 2^{n/2} + J_l.$$

Also, observe that multiplying a binary number I by a power of two, $2^k$, is trivial: it simply involves shifting the number I left (that is, in the higher-order direction) by k bit positions. Thus, provided a left-shift operation takes constant time, multiplying an integer by $2^k$ takes O(k) time.

Let us focus on the problem of computing the product I · J. Given the expansion of I and J above, we can rewrite I · J as

$$I \cdot J = (I_h 2^{n/2} + I_l) \cdot (J_h 2^{n/2} + J_l) = I_h J_h 2^n + I_l J_h 2^{n/2} + I_h J_l 2^{n/2} + I_l J_l.$$

Thus, we can compute I · J by applying a divide-and-conquer algorithm that divides the bit representations of I and J in half, recursively computes the four products of (n/2)-bit integers shown above, and then merges the solutions to these subproducts in O(n) time using addition and multiplication by powers of two. We can terminate the recursion when we get down to the multiplication of two 1-bit numbers, which is trivial. This divide-and-conquer algorithm has a running time that can be characterized by the following recurrence (for n ≥ 2):

$$T(n) = 4T(n/2) + cn,$$

for some constant c > 0. We can then apply the master theorem to note that the special function is $n^{\log_b a} = n^{\log_2 4} = n^2$ in this case; hence, we are in Case 1 and T(n) is $\Theta(n^2)$. Unfortunately, this is no better than the grade-school algorithm.

The master method gives us some insight into how we might improve this algorithm. If we can reduce the number of recursive calls, then we will reduce the complexity of the special function used in the master theorem, which is currently the dominating factor in our running time. Fortunately, if we are a little more clever in how we define the subproblems to solve recursively, we can in fact reduce the number of recursive calls by one. In particular, consider the product

$$(I_h - I_l) \cdot (J_l - J_h) = I_h J_l - I_l J_l - I_h J_h + I_l J_h.$$

This is admittedly a strange product to consider, but it has an interesting property. When expanded out, it contains two of the products we want to compute (namely, $I_h J_l$ and $I_l J_h$) and two products that can be computed recursively (namely, $I_h J_h$ and $I_l J_l$). Thus, we can compute I · J as follows:

$$I \cdot J = I_h J_h 2^n + \left[(I_h - I_l) \cdot (J_l - J_h) + I_h J_h + I_l J_l\right] 2^{n/2} + I_l J_l.$$

This computation requires the recursive computation of three products of n/2 bits each, plus O(n) additional work.
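To make the three-product scheme concrete, here is a minimal Python sketch of it. The function name karatsuba and the use of Python's built-in arbitrary-precision integers in place of an explicit bit representation are our own illustrative choices, not part of the text.

    def karatsuba(I, J, n):
        # Multiply two n-bit nonnegative integers I and J, where n is a
        # power of two, using three recursive half-size products.
        if n == 1:
            return I * J                     # 1-bit base case is trivial
        half = n // 2
        mask = (1 << half) - 1
        Ih, Il = I >> half, I & mask         # I = Ih * 2^(n/2) + Il
        Jh, Jl = J >> half, J & mask         # J = Jh * 2^(n/2) + Jl
        P1 = karatsuba(Ih, Jh, half)         # Ih * Jh
        P2 = karatsuba(Il, Jl, half)         # Il * Jl
        # Third product (Ih - Il) * (Jl - Jh): track the sign separately so
        # the recursive call only sees nonnegative half-size operands.
        a, b = Ih - Il, Jl - Jh
        sign = -1 if (a < 0) != (b < 0) else 1
        P3 = sign * karatsuba(abs(a), abs(b), half)
        # I * J = P1 * 2^n + (P3 + P1 + P2) * 2^(n/2) + P2; the left shifts
        # implement the multiplications by powers of two.
        return (P1 << n) + ((P3 + P1 + P2) << half) + P2

For example, karatsuba(0b1101, 0b1011, 4) returns 143, matching 13 · 11.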
Thus, it results in a divide-and-conquer algorithm with a running time characterized by the following recurrence equation (for n ≥ 2):

$$T(n) = 3T(n/2) + cn,$$

for some constant c > 0.

Theorem 5.12: We can multiply two n-bit integers in $O(n^{1.585})$ time.

Proof: We apply the master theorem with the special function $n^{\log_b a} = n^{\log_2 3}$; hence, we are in Case 1 and T(n) is $\Theta(n^{\log_2 3})$, which is itself $O(n^{1.585})$.

Using divide-and-conquer, we have designed an algorithm for integer multiplication that is asymptotically faster than the straightforward quadratic-time method. We can actually do even better than this, achieving a running time that is "almost" O(n log n), by using a more complex divide-and-conquer algorithm called the fast Fourier transform, which we discuss in Section 10.4.

5.2.3 Matrix Multiplication

Suppose we are given two n × n matrices X and Y, and we wish to compute their product Z = XY, which is defined so that

$$Z[i, j] = \sum_{k=0}^{n-1} X[i, k] \cdot Y[k, j],$$

an equation that immediately gives rise to a simple $O(n^3)$-time algorithm.

Another way of viewing this product is in terms of submatrices. That is, let us assume that n is a power of two and let us partition X, Y, and Z each into four (n/2) × (n/2) matrices, so that we can rewrite Z = XY as

$$\begin{pmatrix} I & J \\ K & L \end{pmatrix} = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} E & F \\ G & H \end{pmatrix}.$$

Thus,

$$\begin{aligned} I &= AE + BG \\ J &= AF + BH \\ K &= CE + DG \\ L &= CF + DH. \end{aligned}$$

We can use this set of equations in a divide-and-conquer algorithm that computes Z = XY by computing I, J, K, and L from the subarrays A through H. By the above equations, we can compute I, J, K, and L from the eight recursively computed matrix products on (n/2) × (n/2) subarrays, plus four additions that can be done in $O(n^2)$ time.
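Here is a minimal Python sketch of the eight-product scheme described above. The helper names add and block_multiply, and the representation of matrices as lists of lists, are our own assumptions; the sketch assumes n is a power of two.

    def add(X, Y):
        # Entrywise sum of two equal-size square matrices.
        n = len(X)
        return [[X[i][j] + Y[i][j] for j in range(n)] for i in range(n)]

    def block_multiply(X, Y):
        # Compute Z = XY for n x n matrices (n a power of two) via eight
        # recursive products on (n/2) x (n/2) blocks plus four additions.
        n = len(X)
        if n == 1:
            return [[X[0][0] * Y[0][0]]]
        h = n // 2
        # Partition X into blocks A, B, C, D and Y into blocks E, F, G, H.
        A = [row[:h] for row in X[:h]]; B = [row[h:] for row in X[:h]]
        C = [row[:h] for row in X[h:]]; D = [row[h:] for row in X[h:]]
        E = [row[:h] for row in Y[:h]]; F = [row[h:] for row in Y[:h]]
        G = [row[:h] for row in Y[h:]]; H = [row[h:] for row in Y[h:]]
        I = add(block_multiply(A, E), block_multiply(B, G))  # I = AE + BG
        J = add(block_multiply(A, F), block_multiply(B, H))  # J = AF + BH
        K = add(block_multiply(C, E), block_multiply(D, G))  # K = CE + DG
        L = add(block_multiply(C, F), block_multiply(D, H))  # L = CF + DH
        # Reassemble Z = [[I, J], [K, L]] row by row.
        return [I[i] + J[i] for i in range(h)] + [K[i] + L[i] for i in range(h)]

As written, this performs eight half-size products, so by the master theorem its running time still satisfies $T(n) = 8T(n/2) + bn^2$, which is $\Theta(n^3)$.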