December 29, 2009 16:54  World Scientific Book  9in x 6in  soltys_alg

Preface

This book is an introduction to the analysis of algorithms, from the point of view of proving algorithm correctness. Our theme is the following: how do we argue mathematically, without a burden of excessive formalism, that a given algorithm does what it is supposed to do? And why is this important? In the words of C.A.R. Hoare:
    As far as the fundamental science is concerned, we still certainly do not know how to prove programs correct. We need a lot of steady progress in this area, which one can foresee, and a lot of breakthroughs where people suddenly find there's a simple way to do something that everybody hitherto has thought to be far too difficult.¹

Software engineers know many examples of things going terribly wrong because of program errors; their particular favorites are the following two.² The blackout in the American Northeast during the summer of 2003 was due to a software bug in an energy management system; an alarm that should have been triggered never went off, leading to a chain of events that climaxed in a cascading blackout. The Ariane 5, flight 501, the maiden flight of the rocket on June 4, 1996, ended with an explosion 40 seconds into the flight; this $500 million loss was caused by an overflow in the conversion from a 64-bit floating point number to a 16-bit signed integer.

While the goal of absolute certainty in program correctness is elusive, we can develop methods and techniques for reducing errors. The aim of this book is modest: we want to present an introduction to the analysis of algorithms—the "ideas" behind programs, and show how to prove their correctness.
¹ From An Interview with C.A.R. Hoare, in [Shustek (2009)].
² These two examples come from [van Vliet (2000)], where many more instances of spectacular failures may be found.

The algorithm may be correct, but the implementation itself might be flawed. Some syntactical errors in the program implementation may be uncovered by a compiler or translator—which in turn could also be buggy—but there might be other hidden errors. The hardware itself might be faulty; the libraries on which the program relies at run time might be unreliable, etc. It is the task of the software engineer to write code that works in such a delicate environment, prone to errors. Finally, the algorithmic content of a piece of software might be very small; the majority of the lines of code could be the "menial" task of interface programming. Thus, the ability to argue correctly about the soundness of an algorithm is only one of many facets of the task at hand, yet an important one, if only for the pedagogical reason of learning to argue rigorously about algorithms.

We begin this book with a chapter of preliminaries, containing the key ideas of induction and invariance, and the framework of pre/postconditions and loop invariants. The remaining, purely algorithmic, contents of the book are as follows. We present three standard algorithm design techniques in eponymous chapters: greedy algorithms, dynamic programming, and the divide and conquer paradigm. We are concerned with the correctness of algorithms, rather than, say, efficiency or the underlying data structures. For example, in the chapter on the greedy paradigm we explore in depth the idea of a promising partial solution, a powerful technique for proving the correctness of greedy algorithms. We also include online algorithms and the idea of competitive analysis, and the last chapter is an introduction to randomized algorithms, with a section on cryptography. The intended audience for this book consists of undergraduate students in computer science and mathematics.
The book is very self-contained: the first chapter, Preliminaries, reviews induction and the invariance principle. It also introduces the aforementioned ideas of pre/postconditions, loop invariants and termination—in other words, it sets the mathematical stage for the rest of the book. Not much mathematics is assumed (besides some tame forays into linear algebra and number theory), but a certain penchant for discrete mathematics is considered helpful.

Algorithms solve problems, and many of the problems in this book fall under the category of optimization problems, whether cost minimization, such as Kruskal's algorithm for computing minimum cost spanning trees (section 2.1), or profit maximization, such as selecting the most profitable subset of activities (section 4.5).

This book draws on many sources. First of all, [Cormen et al. (2001)] is a fantastic reference for anyone who is learning algorithms. I have also used as a reference the elegantly written [Kleinberg and Tardos (2006)]. A classic in the field is [Knuth (1997)], and I base my presentation of online algorithms on the material in [Borodin and El-Yaniv (1998)]. I have learned greedy algorithms and dynamic programming from Stephen A. Cook at the University of Toronto. Appendix B, on relations, is based on handwritten lecture slides of Ryszard Janicki.

No book on algorithms is complete without a short introduction to the "big-Oh" notation. We say that g(n) ∈ O(f(n)) if there exist constants c, n0 such that for all n ≥ n0, g(n) ≤ c·f(n). The "little-oh" notation, g(n) ∈ o(f(n)), denotes that lim_{n→∞} g(n)/f(n) = 0. We also say that g(n) ∈ Ω(f(n)) if there exist constants c, n0 such that for all n ≥ n0, g(n) ≥ c·f(n). Finally, we say that g(n) ∈ Θ(f(n)) if both g(n) ∈ O(f(n)) and g(n) ∈ Ω(f(n)).
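These definitions can be exercised numerically. The Python sketch below is our own illustration, not code from the book, and the helper name is hypothetical; it checks the big-Oh condition g(n) ≤ c·f(n) over a finite range. For g(n) = 3n + 5 and f(n) = n, the constants c = 4 and n0 = 5 work, since 3n + 5 ≤ 4n exactly when n ≥ 5. A finite check is evidence, of course, not a proof.

```python
# Hypothetical helper (ours, not the book's): checks g(n) <= c * f(n)
# for all n in [n0, horizon).  A finite check is only evidence of
# membership in O(f); the actual proof here is 3n + 5 <= 4n  <=>  n >= 5.
def witnesses_big_oh(g, f, c, n0, horizon=10000):
    return all(g(n) <= c * f(n) for n in range(n0, horizon))

g = lambda n: 3 * n + 5   # g(n) = 3n + 5
f = lambda n: n           # f(n) = n
```

For instance, the pair (c, n0) = (4, 5) passes the check, while c = 3 fails for every n0, since 3n + 5 > 3n always; and the check also passes with the roles of f and g swapped, so in fact g(n) ∈ Θ(n).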
The ubiquitous floor and ceiling functions are defined, respectively, as follows: ⌊x⌋ = max{n ∈ Z | n ≤ x} and ⌈x⌉ = min{n ∈ Z | n ≥ x}.

Contents

Preface
1. Preliminaries
   1.1 Induction
   1.2 Invariance
   1.3 Correctness of algorithms
       1.3.1 Division algorithm
       1.3.2 Euclid's algorithm
       1.3.3 Palindromes algorithm
       1.3.4 Further examples
       1.3.5 Recursion and fixed points
   1.4 Stable marriage
   1.5 Answers to selected problems
   1.6 Notes
2. Greedy Algorithms
   2.1 Minimum cost spanning trees
   2.2 Jobs with deadlines and profits
   2.3 Further examples and problems
       2.3.1 Make change
       2.3.2 Maximum weight matching
       2.3.3 Shortest path
   2.4 Answers to selected problems
   2.5 Notes
3. Divide and Conquer
   3.1 Mergesort
   3.2 Multiplying numbers in binary
   3.3 Savitch's algorithm
   3.4 Answers to selected problems
   3.5 Notes
4. Dynamic Programming
   4.1 Longest monotone subsequence problem
   4.2 All pairs shortest path problem
   4.3 Simple knapsack problem
       4.3.1 Dispersed knapsack
   4.4 General knapsack problem
   4.5 Activity selection problem
   4.6 Jobs with deadlines, durations and profits
   4.7 Further examples and problems
       4.7.1 Consecutive subsequence sum problem
       4.7.2 Context free grammars
   4.8 Answers to selected problems
   4.9 Notes
5. Online Algorithms
   5.1 List accessing problem
   5.2 Paging
       5.2.1 Demand paging
       5.2.2 FIFO
       5.2.3 LRU
       5.2.4 Marking algorithms
       5.2.5 FWF
       5.2.6 LFD
   5.3 Answers to selected problems
   5.4 Notes
6. Randomized Algorithms
   6.1 Perfect matching
   6.2 Pattern matching
   6.3 Primality testing
   6.4 Public key cryptography
       6.4.1 Diffie-Hellman key exchange
       6.4.2 ElGamal
       6.4.3 RSA
   6.5 Answers to selected problems
   6.6 Notes
Appendix A  Number Theory and Group Theory
   A.1 Answers to selected problems
   A.2 Notes
Appendix B  Relations
   B.1 Closure
   B.2 Equivalence relation
   B.3 Partial orders
   B.4 Lattices
   B.5 Fixed point theory
   B.6 Answers to selected problems
   B.7 Notes
Bibliography
Index

Chapter 1

Preliminaries

1.1 Induction

Let N = {0, 1, 2, . .
.} be the set of natural numbers. Suppose that S is a subset of N with the following two properties: first, 0 ∈ S, and second, whenever n ∈ S, then n + 1 ∈ S as well. Then, invoking the Induction Principle (IP), we can conclude that S = N.

We shall use the IP with a more convenient notation; let P be a property of natural numbers, in other words, P is a unary relation such that P(i) is either true or false. The relation P may be identified with a set SP in the obvious way, i.e., i ∈ SP iff P(i) is true. For example, if P is the property of being prime, then P(2) and P(3) are true, but P(6) is false, and SP = {2, 3, 5, 7, 11, . . .}. Using this notation the IP may be stated as:

    [P(0) ∧ ∀n(P(n) → P(n + 1))] → ∀m P(m),                    (1.1)

for any (unary) relation P over N. In practice, we use (1.1) as follows: first we prove that P(0) holds (this is the basis case). Then we show that ∀n(P(n) → P(n + 1)) (this is the induction step). Finally, using (1.1) and modus ponens, we conclude that ∀m P(m).

As an example, let P be the assertion "the sum of the first i odd numbers equals i²." We follow the convention that the sum of an empty set of numbers is zero; thus P(0) holds, as the set of the first zero odd numbers is an empty set. P(3) is also true, as 1 + 3 + 5 = 9 = 3². We want to show that in fact ∀m P(m) (i.e., P is always true, and so SP = N). We use induction. The basis case is P(0), and we already showed that it holds. Suppose now that the assertion holds for n, i.e., the sum of the first n odd numbers is n², i.e., 1 + 3 + 5 + · · · + (2n − 1) = n² (this is our inductive hypothesis or inductive assumption). Consider the sum of the first (n + 1)
odd numbers, 1 + 3 + 5 + · · · + (2n − 1) + (2n + 1) = n² + (2n + 1) = (n + 1)², and so we just proved the induction step; by the IP we have ∀m P(m).

Problem 1.1. Prove that 1 + Σ_{j=0}^{i} 2^j = 2^{i+1}.

Sometimes it is convenient to start our induction higher than at 0. We have the following generalized induction principle:

    [P(k) ∧ ∀n(P(n) → P(n + 1))] → (∀m ≥ k)P(m),               (1.2)

for any predicate P and any number k. Note that (1.2) follows easily from (1.1) if we simply let P′(i) be P(i + k), and do the usual induction on the predicate P′(i).

Problem 1.2. Use induction to prove that for n ≥ 1, 1³ + 2³ + 3³ + · · · + n³ = (1 + 2 + 3 + · · · + n)².

Problem 1.3. For every n ≥ 1, consider a square of size 2ⁿ × 2ⁿ where one square is missing. Show that the resulting square can be filled with "L" shapes—that is, with clusters of three squares, where the three squares do not form a line.

Problem 1.4. In the generalized IP (1.2) we can replace the induction step ∀n(P(n) → P(n + 1)) with (∀n ≥ k)(P(n) → P(n + 1)). Explain why both versions effectively yield the same principle.

Problem 1.5. The Fibonacci sequence is defined as follows: f₀ = 0, f₁ = 1, and f_{i+2} = f_{i+1} + f_i for i ≥ 0. Prove that for all n ≥ 1 we have:

    [ 1 1 ]^n   [ f_{n+1}  f_n     ]
    [ 1 0 ]   = [ f_n      f_{n-1} ]

Problem 1.6. Prove the following: if m divides n, then f_m divides f_n, i.e., m | n implies f_m | f_n.

The Complete Induction Principle (CIP) is just like the IP except that in the induction step we show that if P(i) holds for all i ≤ n, then P(n + 1) also holds, i.e., the induction step is now ∀n((∀i ≤ n)P(i) → P(n + 1)).

Problem 1.7. Use the CIP to prove that every number (in N) greater than 1 may be written as a product of one or more prime numbers.

Problem 1.8.
Suppose that we have a (Swiss) chocolate bar consisting of a number of squares arranged in a rectangular pattern. Our task is to split the bar into small squares (always breaking along the lines between the squares) with a minimum number of breaks. How many breaks will it take? Make an educated guess, and prove it by induction.

The Least Number Principle (LNP) says that every nonempty subset of the natural numbers must have a least element. A direct consequence of the LNP is that every decreasing nonnegative sequence of integers must terminate; that is, if R = {r₁, r₂, r₃, . . .} ⊆ N where r_i > r_{i+1} for all i, then R is a finite subset of N. We are going to be using the LNP to show termination of algorithms.

Problem 1.9. Show that IP, CIP, and LNP are equivalent principles.

There are three standard ways to list the nodes of a binary tree. We present them below, together with a recursive procedure that lists the nodes according to each scheme.

Infix: left subtree, root, right subtree.
Prefix: root, left subtree, right subtree.
Postfix: left subtree, right subtree, root.

See the example in figure 1.1.

        1
       / \
      2   3
         / \
        4   5
       / \
      6   7

    infix: 2,1,6,4,7,3,5    prefix: 1,2,3,4,6,7,5    postfix: 2,6,7,4,5,3,1

Fig. 1.1  A binary tree with the corresponding representations.

Note that some authors use different names for infix, prefix, and postfix; they call them inorder, preorder, and postorder, respectively.

Problem 1.10. Show that given any two representations we can obtain from them the third one, or, put another way, from any two representations we can reconstruct the tree. Show, using induction, that your reconstruction is correct. Then show that having just one representation is not enough.

1.2 Invariance

The Invariance Technique (IT) is a method for proving assertions about the outcomes of procedures. The IT identifies some property that remains true throughout the execution of a procedure.
Then, once the procedure terminates, we use this property to prove assertions about the output.

Fig. 1.2  An 8 × 8 board.

As an example, consider an 8 × 8 board from which two squares from opposing corners have been removed (see figure 1.2). The area of the board is 64 − 2 = 62 squares. Now suppose that we have 31 dominoes of size 1 × 2. We want to show that the board cannot be covered by them. Verifying this by brute force (that is, examining all possible coverings) is an extremely laborious job. However, using the IT we argue as follows: color the squares as a chess board. Each domino, covering two adjacent squares, covers 1 white and 1 black square, and, hence, each placement covers as many white squares as it covers black squares. Note that the number of white squares and the number of black squares differ by 2—opposite corners lying on the same diagonal have the same color—and, hence, no placement of dominoes yields a cover; done!

More formally, we place the dominoes one by one on the board, any way we want. The invariant is that after placing each new domino, the number of covered white squares is the same as the number of covered black squares. We prove that this is an invariant by induction on the number of placed dominoes. The basis case is when zero dominoes have been placed (so zero black and zero white squares are covered). In the induction step, we add one more domino which, no matter how we place it, covers one white and one black square, thus maintaining the property. At the end, when we are done placing dominoes, we would have to have as many white squares as black squares covered, which is not possible due to the nature of the coloring of the board (i.e., the number of black and white squares is not the same). Note that this argument extends easily to the n × n board.

Problem 1.11. Let n be an odd number, and suppose that we have the set {1, 2, . . . , 2n}.
We pick any two numbers a, b in the set, delete them from the set, and replace them with a − b. Continue repeating this until just one number remains in the set; show that this remaining number must be odd.

The next three problems have the common theme of social gatherings. We always assume that relations of likes and dislikes, of being an enemy or a friend, are symmetric relations: that is, if a likes b, then b also likes a, etc. See appendix B (page 119) for background on relations.

Problem 1.12. At a country club, each member dislikes at most three other members. There are two tennis courts; show that each member can be assigned to one of the two courts in such a way that at most one person they dislike is also playing on the same court.

Problem 1.13. You are hosting a dinner party where 2n people are going to be sitting at a round table. As it happens in any social clique, animosities are rife, but you know that everyone sitting at the table dislikes at most (n − 1) people; show that you can make seating arrangements so that nobody sits next to someone they dislike.

Problem 1.14. Handshakes are exchanged at a meeting. We call a person an odd person if he has exchanged an odd number of handshakes. Show that, at any moment, there is an even number of odd persons.

1.3 Correctness of algorithms

How can we prove that an algorithm is correct?¹ We make two assertions, called the precondition and the postcondition; by correctness we mean that whenever the precondition holds before the algorithm executes, the postcondition will hold after it executes. By termination we mean that whenever the precondition holds, the algorithm will stop running after finitely many steps. Correctness without termination is called partial correctness, and correctness per se is partial correctness with termination.
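These notions can be made concrete in a few lines of code. The Python sketch below is our own example, not the book's, and echoes the induction example of section 1.1: it computes the sum of the first n odd numbers, with the precondition, the loop invariant total = i², and the postcondition total = n² checked by assertions.

```python
def sum_of_first_odds(n):
    """Sum the first n odd numbers, asserting pre/postconditions
    and a loop invariant along the way (illustration only)."""
    assert n >= 0                  # precondition
    total, i = 0, 0
    while i < n:
        assert total == i * i      # loop invariant: sum of first i odds is i^2
        total += 2 * i + 1         # add the (i+1)-st odd number
        i += 1
    assert total == n * n          # postcondition follows from the invariant
    return total
```

Note how the proof obligations line up with the assertions: the invariant holds on entry (basis case, total = 0 = 0²), is preserved by the loop body (induction step), and together with the exit condition i = n yields the postcondition.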
¹ A wonderful introduction to this topic can be found in [Harel (1987)], in chapter 5, "The correctness of algorithms, or getting it done right."

A fundamental notion in the analysis of algorithms is that of a loop invariant; it is an assertion that stays true after each execution of a "while" (or "for") loop. Coming up with the right assertion, and proving it, is a creative endeavor. Once the loop invariant has been shown to hold, it is used for proving partial correctness of the algorithm. So the criterion for selecting a loop invariant is that it helps in proving the postcondition. In general many different loop invariants (and for that matter pre and postconditions) may yield a desirable proof of correctness; the art of the analysis of algorithms consists in selecting them judiciously. We usually need induction to prove that a chosen loop invariant holds after each iteration of a loop, and usually we also need the precondition as an assumption in this proof.

An implicit precondition of all the algorithms in this section is that the numbers are in Z.

1.3.1 Division algorithm

We analyze the algorithm for integer division, algorithm 1.1. Note that the q and r returned by the division algorithm are usually denoted as div(x, y) (the quotient) and rem(x, y) (the remainder), respectively.

Algorithm 1.1 Division
Precondition: x ≥ 0 ∧ y > 0
 1: q ← 0
 2: r ← x
 3: while y ≤ r do
 4:   r ← r − y
 5:   q ← q + 1
 6: end while
 7: return q, r
Postcondition: x = (q · y) + r ∧ 0 ≤ r < y

We propose the following assertion as the loop invariant:

    x = (q · y) + r ∧ r ≥ 0.                                   (1.3)

We show that (1.3) holds after each iteration of the loop. Basis case (i.e., zero iterations of the loop—we are just before line 3 of the algorithm): q = 0, r = x, so x = (q · y) + r, and since x ≥ 0 and r = x, r ≥ 0.
Induction step: suppose x = (q · y) + r ∧ r ≥ 0, and we go once more through the loop; let q′, r′ be the new values of q, r, respectively (computed in lines 4 and 5 of the algorithm). Since we executed the loop one more time, it follows that y ≤ r (this is the condition checked in line 3 of the algorithm), and since r′ = r − y, we have that r′ ≥ 0. Thus, x = (q · y) + r = ((q + 1) · y) + (r − y) = (q′ · y) + r′, and so q′, r′ still satisfy the loop invariant (1.3).

Now we use the loop invariant to show that (if the algorithm terminates) the postcondition of the division algorithm holds, if the precondition holds. This is very easy in this case, since the loop ends when it is no longer true that y ≤ r, i.e., when it is true that r < y. On the other hand, (1.3) holds after each iteration, and in particular the last iteration. Putting together (1.3) and r < y we get our postcondition, and hence partial correctness.

To show termination we use the least number principle (LNP). We need to relate some nonnegative monotone decreasing sequence to the algorithm; just consider r₀, r₁, r₂, . . ., where r₀ = x, and r_i is the value of r after the i-th iteration. Note that r_{i+1} = r_i − y. First, r_i ≥ 0, because the algorithm enters the while loop only if y ≤ r, and second, r_{i+1} < r_i, since y > 0. By the LNP such a sequence "cannot go on forever" (in the sense that the set {r_i | i = 0, 1, 2, . . .} is a subset of the natural numbers, and so it has a least element), and so the algorithm must terminate. Thus we have shown full correctness of the division algorithm.

1.3.2 Euclid's algorithm

Given two positive integers a and b, their greatest common divisor, denoted gcd(a, b), is the largest positive integer that divides them both. Euclid's algorithm, presented as algorithm 1.2, is a procedure for finding the greatest common divisor of two numbers.
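As a brief aside before developing Euclid's algorithm: algorithm 1.1 of the previous subsection translates almost line for line into Python. The sketch below is our own transcription, not code from the book; the invariant (1.3) is checked by an assertion on every pass through the loop, and the postcondition on exit.

```python
def divide(x, y):
    """Algorithm 1.1 (Division): returns (q, r) with x = q*y + r, 0 <= r < y."""
    assert x >= 0 and y > 0                 # precondition
    q, r = 0, x
    while y <= r:
        assert x == q * y + r and r >= 0    # loop invariant (1.3)
        r = r - y
        q = q + 1
    assert x == q * y + r and 0 <= r < y    # postcondition
    return q, r
```

For nonnegative x and positive y this agrees with Python's built-in divmod, which provides a convenient independent check.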
It is one of the oldest known algorithms—it appeared in Euclid's Elements (Book 7, Propositions 1 and 2) around 300 BC. Note that to compute rem(n, m) in lines 1 and 3 of Euclid's algorithm we need to use algorithm 1.1 (the division algorithm) as a subroutine; this is a typical "composition" of algorithms. Also note that lines 1 and 3 are executed from left to right, so in particular in line 3 we first do m ← n, then n ← r, and finally r ← rem(m, n). This is important for the algorithm to work correctly.

Algorithm 1.2 Euclid
Precondition: a > 0 ∧ b > 0
 1: m ← a ; n ← b ; r ← rem(m, n)
 2: while (r > 0) do
 3:   m ← n ; n ← r ; r ← rem(m, n)
 4: end while
 5: return n
Postcondition: n = gcd(a, b)

To prove the correctness of Euclid's algorithm we are going to show that after each iteration of the while loop the following assertion holds:

    gcd(m, n) = gcd(a, b),                                     (1.4)

that is, (1.4) is our loop invariant. We prove this by induction on the number of iterations. Basis case: after zero iterations (i.e., just before the while loop starts—so after executing line 1 and before executing line 2) we have that m = a and n = b, so (1.4) holds trivially. For the induction step, suppose that gcd(a, b) = gcd(m, n), and we go through the loop one more time, yielding m′, n′. We want to show that gcd(m, n) = gcd(m′, n′). Note that from line 3 of the algorithm we see that m′ = n, n′ = r = rem(m, n). In other words, it is enough to prove that in general gcd(m, n) = gcd(n, rem(m, n)).

Problem 1.15. Show that for all m, n > 0, gcd(m, n) = gcd(n, rem(m, n)).

Now the correctness of Euclid's algorithm follows from (1.4), since the algorithm stops when r = rem(m, n) = 0, so m = q · n, and so gcd(m, n) = n.

Problem 1.16. Show that Euclid's algorithm terminates.

Problem 1.17. Do you have any ideas on how to speed up Euclid's algorithm?

Problem 1.18.
Modify Euclid's algorithm to obtain the so-called extended Euclid's algorithm, which given integers m, n, outputs integers a, b such that am + bn = g = gcd(m, n). (a) Use the LNP to show that if g = gcd(m, n), then there exist a, b such that am + bn = g. (b) Design Euclid's extended algorithm. (c) The usual extended Euclid's algorithm has a running time polynomial in min{m, n}; show that this is the running time of your algorithm, or modify your algorithm so that it runs in this time.

1.3.3 Palindromes algorithm

Algorithm 1.3 tests strings for palindromes, which are strings that read the same backwards as forwards, for example, madamimadam or racecar.

Algorithm 1.3 Palindromes
Precondition: n ≥ 1 ∧ A[1 . . . n] is a character array
 1: i ← 1
 2: while (i ≤ ⌊n/2⌋) do
 3:   if (A[i] ≠ A[n − i + 1]) then
 4:     return F
 5:   end if
 6:   i ← i + 1
 7: end while
 8: return T
Postcondition: return T iff A is a palindrome

Let the loop invariant be: after the k-th iteration, i = k + 1 and for all j such that 1 ≤ j ≤ k, A[j] = A[n − j + 1]. We prove that the loop invariant holds by induction on k. Basis case: before any iterations take place (i.e., after zero iterations), there are no j's such that 1 ≤ j ≤ 0, so the second part of the loop invariant is (vacuously) true. The first part of the loop invariant holds since i is initially set to 1. Induction step: we know that after k iterations, A[j] = A[n − j + 1] for all 1 ≤ j ≤ k; after one more iteration we know that A[k + 1] = A[n − (k + 1) + 1], so the statement follows for all 1 ≤ j ≤ k + 1. This proves the loop invariant.

Problem 1.19. Using the loop invariant, argue the partial correctness of the palindromes algorithm. Show that the algorithm for palindromes always terminates.

1.3.4 Further examples

Problem 1.20. Give an algorithm which on the input "a positive integer n," outputs "yes" if n = 2^k (i.e., n is a power of 2), and "no" otherwise.
Prove that your algorithm is correct.

Problem 1.21. What does algorithm 1.4 compute? Prove your claim.

Algorithm 1.4 Problem 1.21
 1: x ← m ; y ← n ; z ← 0
 2: while (x ≠ 0) do
 3:   if (rem(x, 2) = 1) then
 4:     z ← z + y
 5:   end if
 6:   x ← div(x, 2)
 7:   y ← y · 2
 8: end while
 9: return z

Problem 1.22. What does algorithm 1.5 compute? Assume that a, b are positive integers (i.e., assume that the precondition is that a, b > 0).

Algorithm 1.5 Problem 1.22
 1: while (a > 0) do
 2:   if (a < b) then
 3:     (a, b) ← (2a, b − a)
 4:   else
 5:     (a, b) ← (a − b, 2b)
 6:   end if
 7: end while

For which starting a, b does this algorithm terminate? In how many steps does it terminate, if it does terminate?

Problem 1.23. The following problem requires some linear algebra.² Let {v₁, v₂, . . . , vₙ} be a basis for a vector space V ⊆ Rⁿ; {v₁, v₂, . . . , vₙ} are linearly independent and span V. Consider algorithm 1.6, where v · w denotes the dot-product of the two vectors. Show that algorithm 1.6 produces an orthogonal basis {v₁*, v₂*, . . . , vₙ*} for the vector space V. In other words, show that v_i* · v_j* = 0 when i ≠ j, and that
span{v₁*, v₂*, . . . , vₙ*} = span{v₁, v₂, . . . , vₙ}.

Justify why in line 4 of the algorithm we never divide by zero.

Based on the examples presented thus far, it may appear that it is fairly clear to the naked eye whether an algorithm terminates or not, and that the difficulty consists in coming up with a proof. But that is not the case.
² A great and accessible introduction to linear algebra can be found in [Halmos (1995)].

Algorithm 1.6 Gram-Schmidt
Precondition: {v₁, . . . , vₙ} a basis for Rⁿ
 1: v₁* ← v₁
 2: for i = 2, 3, . . . , n do
 3:   for j = 1, 2, . . . , (i − 1) do
 4:     µ_ij ← (v_i · v_j*)/‖v_j*‖²
 5:   end for
 6:   v_i* ← v_i − Σ_{j=1}^{i−1} µ_ij v_j*
 7: end for
Postcondition: {v₁*, . . . , vₙ*} an orthogonal basis for Rⁿ

Clearly, if we have a trivial algorithm consisting of a single while-loop with the condition i ≥ 0, where the body of the loop consists of the single command i ← i + 1, then we can immediately conclude that this while-loop will never terminate. But what about algorithm 1.7? Does it terminate?

Algorithm 1.7 Ulam's algorithm
Precondition: a > 0
x ← a
while last three values of x not 4, 2, 1 do
  if x is even then
    x ← x/2
  else
    x ← 3x + 1
  end if
end while

For example, if a = 22, then one can check that x takes on the following values: 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1, and algorithm 1.7 terminates. It is conjectured that regardless of the initial value of a, as long as a is a positive integer, algorithm 1.7 terminates. This conjecture is known as "Ulam's problem."³ No one has been able to prove that algorithm 1.7 terminates, and in fact proving termination would involve solving a difficult open mathematical problem.
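Algorithm 1.7 is easy to experiment with. The Python sketch below is our own; it stops when x reaches 1, which for every starting value ever tried amounts to the same thing as the "last three values 4, 2, 1" test. Keep in mind that termination of this loop for all a > 0 is exactly the open conjecture, so the code is only known to halt on the inputs that have been checked.

```python
def ulam_trajectory(a):
    """Return the sequence of values taken by x in algorithm 1.7.
    Termination for all a > 0 is precisely Ulam's problem -- unproven!"""
    assert a > 0                               # precondition
    x, seen = a, [a]
    while x != 1:
        x = x // 2 if x % 2 == 0 else 3 * x + 1
        seen.append(x)
    return seen
```

In particular, ulam_trajectory(22) reproduces the sixteen values listed above.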
³ It is also called the "Collatz Conjecture," "Syracuse Problem," "Kakutani's Problem," or "Hasse's Algorithm." While it is true that a rose by any other name would smell just as sweet, the preponderance of names shows that the conjecture is a very alluring mathematical problem.

1.3.5 Recursion and fixed points

So far we have proved the correctness of while-loops and for-loops, but there is another way of "looping," using recursive procedures, i.e., algorithms that "call themselves." We are going to see examples of such algorithms in the chapter on the divide and conquer method. There is a robust theory of correctness of recursive algorithms based on fixed point theory, and in particular on Kleene's theorem (see appendix B, theorem B.32). We briefly illustrate this approach with an example. We are going to be using partial orders; all the necessary background can be found in appendix B, in section B.3. Consider the recursive algorithm 1.8.

Algorithm 1.8 F(x, y)
 1: if x = y then
 2:   return y + 1
 3: else
 4:   return F(x, F(x − 1, y + 1))
 5: end if

To see how this algorithm works, consider computing F(4, 2). First, in line 1 it is established that 4 ≠ 2, and so we must compute F(4, F(3, 3)). We first compute F(3, 3) recursively: in line 1 it is now established that 3 = 3, and so in line 2 y is set to 4 and that is the value returned, i.e., F(3, 3) = 4. Now we can go back and compute F(4, F(3, 3)) = F(4, 4); again, recursively, we establish in line 1 that 4 = 4, and so in line 2 y is set to 5 and this is the value returned, i.e., F(4, 2) = 5. On the other hand, it is easy to see that F(3, 5) = F(3, F(2, 6)) = F(3, F(2, F(1, 7))) = · · · , and this procedure never ends, as x will never equal y. Thus F is not a total function, i.e., not defined on all (x, y) ∈ Z × Z.

Problem 1.24. What is the domain of definition of F as computed by algorithm 1.8? That is, the domain of F is Z × Z, while the domain of definition is the largest subset S ⊆ Z × Z such that F is defined for all (x, y) ∈ S. We have seen already that (4, 2) ∈ S while (3, 5) ∉ S.
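Algorithm 1.8 can be run directly. The Python transcription below is ours; it reproduces the computation F(4, 2) = 5 traced above. One caveat: on arguments outside the domain of definition, such as (3, 5), the recursion never terminates (in practice Python raises a RecursionError once the call stack overflows), so we only call it on arguments known to lie in S.

```python
def F(x, y):
    """Algorithm 1.8 -- a partial function; e.g. F(3, 5) recurses forever."""
    if x == y:
        return y + 1               # line 2 of algorithm 1.8
    return F(x, F(x - 1, y + 1))   # line 4 of algorithm 1.8
```

Running F(4, 2) first resolves the inner call F(3, 3) = 4 and then F(4, 4) = 5, exactly as in the trace in the text.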
We now consider three different functions, all given by algorithms that are not recursive: algorithms 1.9, 1.10 and 1.11, computing functions f1, f2 and f3, respectively.

Algorithm 1.9 f1(x, y)
if x = y then
  return y + 1
else
  return x + 1
end if

The function f1 has an interesting property: if we were to replace F in algorithm 1.8 with f1, we would get back F. In other words, given algorithm 1.8, if we were to replace line 4 with f1(x, f1(x − 1, y + 1)), and compute f1 with the (non-recursive) algorithm 1.9, then algorithm 1.8 thus modified would still be computing F(x, y). Therefore, we say that the function f1 is a fixed point of the recursive algorithm 1.8.

For example, recall that we have already shown that F(4, 2) = 5, using the recursive algorithm 1.8 for computing F. Replace line 4 in algorithm 1.8 with f1(x, f1(x − 1, y + 1)) and compute F(4, 2) anew; since 4 ≠ 2 we go directly to line 4, where we compute f1(4, f1(3, 3)) = f1(4, 4) = 5. Notice that this last computation was not recursive, as we computed f1 directly with algorithm 1.9, and that we have obtained the same value. Consider now f2, f3, computed by algorithms 1.10, 1.11, respectively.

Algorithm 1.10 f2(x, y)
if x ≥ y then
  return x + 1
else
  return y − 1
end if

Algorithm 1.11 f3(x, y)
if x ≥ y ∧ (x − y is even) then
  return x + 1
end if

Notice that in algorithm 1.11, if it is not the case that x ≥ y and x − y is even, then the output is undefined. Thus f3 is a partial function, and if x < y or x − y is odd, then (x, y) is not in its domain of definition.

Problem 1.25. Prove that f1, f2, f3 are all fixed points of algorithm 1.8.

The function f3 has one additional property.
For every pair of integers x, y such that f3(x, y) is defined, that is, x ≥ y and x − y is even, both f1(x, y) and f2(x, y) are also defined and have the same value as f3(x, y). We say that f3 is less defined than or equal to f1 and f2, and write f3 ⊑ f1 and f3 ⊑ f2; that is, we have defined (informally) a partial order ⊑ on partial functions f : Z × Z −→ Z.

Problem 1.26. Show that f3 ⊑ f1 and f3 ⊑ f2. Recall the notion of a domain of definition introduced in problem 1.24. Let S1, S2, S3 be the domains of definition of f1, f2, f3, respectively. You must show that S3 ⊆ S1 and S3 ⊆ S2.

It can be shown that f3 has this property not only with respect to f1 and f2, but also with respect to all fixed points of algorithm 1.8. Moreover, f3(x, y) is the only function having this property, and therefore f3 is said to be the least (defined) fixed point of algorithm 1.8. It is an important application of Kleene’s theorem (theorem B.32) that every recursive algorithm has a unique least fixed point. This unique least fixed point may be seen as the “meaning” or “interpretation” of the recursive algorithm, and may be used for its proof of correctness. This example illustrates an approach to the correctness of recursive algorithms; for more material we direct the interested reader to [Manna (1974)].

1.4 Stable marriage

We end this chapter with a very nice algorithm that is used in practice. It has two typical applications: the college admission process and matching interns with hospitals. An instance of the stable marriage problem of size n consists of two disjoint finite sets of equal size: a set of boys B = {b1, b2, . . . , bn}, and a set of girls G = {g1, g2, . . . , gn}. Let “<i” denote the ranking of boy bi; that is, g <i g′ means that boy bi prefers g over g′. Similarly, “<j” denotes the ranking of girl gj. Each boy bi has such a ranking (linear ordering) <i of G which reflects his preference for the girls that he wants to marry.
Similarly, each girl gj has a ranking (linear ordering) <j of B which reflects her preference for the boys she would like to marry.
Fig. 1.3 Blocking pair: b and g prefer each other to their respective partners pM(b) and pM(g).

A matching (or marriage) M is a 1-1 correspondence between B and G. We say that b and g are partners in M if they are matched in M, and write pM(b) = g and pM(g) = b. A matching M is unstable if there is a pair (b, g) from B × G such that b and g are not partners in M, but b prefers g to pM(b) and g prefers b to pM(g). Such a pair (b, g) is said to block the matching M and is called a blocking pair for M (see figure 1.3). A matching M is stable if there is no blocking pair for M.

A result of Gale and Shapley from 1962 is that any marriage problem has a solution, i.e., there always exists a stable matching, no matter what the lists of preferences are. In fact, they give an algorithm which produces a solution in n stages and takes O(n³) steps; we present their algorithm in this section (algorithm 1.12). The matching M is produced in stages Ms, so that bt always has a partner at stage s ≥ t, and pMt(bt) <t pMt+1(bt) <t · · · . On the other hand, for each g ∈ G, if g has a partner at stage t, then g will have a partner at each stage s ≥ t, and pMt(g) >t pMt+1(g) >t · · · . Thus, as s increases, the partners of bt become less preferable and the partners of g become more preferable.

At the end of stage s, assume that we have produced a matching Ms = {(b1, g1,s), . . . , (bs, gs,s)}, where the notation gi,s means that gi,s is the partner of boy bi at the end of stage s. We will say that partners in Ms are engaged. The idea is that at stage s + 1, bs+1 will try to get a partner by proposing to the girls in G in his order of preference. When bs+1 proposes to a girl gj, gj accepts his proposal if either gj is not currently engaged, or she is currently engaged to a less preferable boy b, i.e., bs+1 <j b. In the case where gj prefers bs+1 over her current partner b, gj breaks off the engagement with b, and b then has to search for a new partner.

Problem 1.27. Show that each b need propose at most once to each g.
Algorithm 1.12 Gale-Shapley
Stage 1: b1 chooses the first girl g in his preference list and we set M1 = {(b1, g)}.
Stage s + 1:
  M ← Ms
  b∗ ← bs+1
  Then b∗ proposes to the girls in order of his preference until one accepts; girl g accepts the proposal as long as she is either not engaged or prefers b∗ to her current partner pM(g). We then proceed according to one of the following two cases:
  (i) If g was not engaged, then we terminate the stage and set Ms+1 ← M ∪ {(b∗, g)}.
  (ii) If g was engaged to b, then we set
         M ← (M − {(b, g)}) ∪ {(b∗, g)}
         b∗ ← b
       and repeat.

From problem 1.27 we see that we can make each boy keep a bookmark on his list of preferences, and this bookmark only moves forward. When a boy’s turn to choose comes, he starts proposing from the point where his bookmark is, and by the time he is done, his bookmark has moved only forward. Note that at stage s + 1, a boy’s bookmark cannot move beyond girl number s on his list without his choosing someone (after stage s only s girls are engaged). As the boys take turns, each boy’s bookmark advances, so some boy’s bookmark (among the boys in {b1, . . . , bs+1}) will eventually reach a point where he must choose a girl.

The discussion in the above paragraph shows that stage s + 1 in algorithm 1.12 must end. The concern here was that case (ii) of stage s + 1 might end up being circular, but the fact that the bookmarks are advancing shows that this is not possible. Furthermore, this gives an upper bound of (s + 1)² steps at stage s + 1 of the procedure. This means that there are n stages, and each stage takes O(n²) steps, and hence algorithm 1.12 takes O(n³) steps altogether. The question, of course, is what do we mean by a step?
Computers operate on binary strings, yet here the implicit assumption is that we compare numbers and access the lists of preferences in a single step. But the cost of these operations is negligible when compared to our idealized running time, and so we allow ourselves this poetic license to bound the overall running time.

Problem 1.28. Show that there is exactly one girl who was not engaged at stage s but is engaged at stage s + 1, and that, for each girl gj who is engaged in Ms, gj will be engaged in Ms+1 and pMs+1(gj) <j pMs(gj). (Thus, once gj becomes engaged, she will remain engaged, and her partners will only gain in preference as the stages proceed.)

Problem 1.29. Suppose that |B| = |G| = n. Show that at the end of stage n, Mn will be a stable marriage.

We say that a pair (b, g) is feasible if there exists a stable matching in which b and g are partners. We say that a matching is boy-optimal if every boy is paired with his highest-ranked feasible partner. We say that a matching is boy-pessimal if every boy is paired with his lowest-ranked feasible partner. Similarly, we define girl-optimal and girl-pessimal.

Problem 1.30. Show that our version of the algorithm produces a boy-optimal and girl-pessimal stable matching.

1.5 Answers to selected problems

Problem 1.2. Basis case: n = 1; then 1³ = 1². For the induction step:

(1 + 2 + 3 + · · · + n + (n + 1))²
 = (1 + 2 + 3 + · · · + n)² + 2(1 + 2 + 3 + · · · + n)(n + 1) + (n + 1)²
 = (1³ + 2³ + 3³ + · · · + n³) + 2(1 + 2 + 3 + · · · + n)(n + 1) + (n + 1)²   (by the induction hypothesis)
 = (1³ + 2³ + 3³ + · · · + n³) + 2 · (n(n + 1)/2) · (n + 1) + (n + 1)²
 = (1³ + 2³ + 3³ + · · · + n³) + n(n + 1)² + (n + 1)²
 = (1³ + 2³ + 3³ + · · · + n³) + (n + 1)³.

Problem 1.3. It is important to interpret the statement of the problem correctly: when it says that one square is missing, it means that any square may be missing.
So the basis case is: given a 2 × 2 square, there are four possible ways for a square to be missing; in each case, the remaining squares form an “L.” These four possibilities are drawn in figure 1.4.

Fig. 1.4 The four different “L” shapes.

Suppose the claim holds for n, and consider a square of size 2^(n+1) × 2^(n+1). Divide it into four quadrants of equal size. No matter which square we choose to be missing, it will be in one of the four quadrants; that quadrant can be filled with “L” shapes (i.e., shapes of the form given by figure 1.4) by the induction hypothesis. As to the remaining three quadrants, put an “L” in them in such a way that it straddles all three of them (the “L” wraps around the center, staying in those three quadrants). The remaining squares of each quadrant can now be filled with “L” shapes by the induction hypothesis.

Problem 1.5. The basis case is n = 1, and it is immediate. For the induction step, assume the equality holds for exponent n, and show that it holds for exponent n + 1:

( 1 1 ; 1 0 )^(n+1) = ( 1 1 ; 1 0 )^n ( 1 1 ; 1 0 ) = ( f_{n+1} f_n ; f_n f_{n−1} ) ( 1 1 ; 1 0 ) = ( f_{n+1} + f_n   f_{n+1} ; f_n + f_{n−1}   f_n ),

where ( a b ; c d ) denotes the 2 × 2 matrix with rows (a, b) and (c, d). The rightmost matrix can be simplified, using the definition of the Fibonacci numbers, to the desired form.

Problem 1.6. m|n iff n = km, so show that f_m | f_{km} by induction on k. If k = 1, there is nothing to prove. Otherwise, f_{(k+1)m} = f_{km+m}. Now, using a separate inductive argument, show that for y ≥ 1, f_{x+y} = f_y f_{x+1} + f_{y−1} f_x, and finish the proof. To show this last statement, let y = 1, and note that f_y f_{x+1} + f_{y−1} f_x = f_1 f_{x+1} + f_0 f_x = f_{x+1}. Assume now that f_{x+y} = f_y f_{x+1} + f_{y−1} f_x holds. Consider f_{x+(y+1)} = f_{(x+y)+1} = f_{x+y} + f_{(x+y)−1} = f_{x+y} + f_{x+(y−1)} = (f_y f_{x+1} + f_{y−1} f_x) + (f_{y−1} f_{x+1} + f_{y−2} f_x) = f_{x+1}(f_y + f_{y−1}) + f_x(f_{y−1} + f_{y−2}) = f_{x+1} f_{y+1} + f_x f_y.

Problem 1.7.
Note that this is almost the Fundamental Theorem of Arithmetic; what is missing is the fact that, up to reordering of primes, this representation is unique. The proof of this can be found in appendix A, theorem A.2.

Problem 1.8. Let our assertion P(n) be: the minimal number of breaks to break up a chocolate bar of n squares is n − 1. Note that this says that n − 1 breaks are sufficient, and n − 2 are not. Basis case: a single square requires no breaks. Induction step: suppose that we have m + 1 squares. No matter how we break the bar into two smaller pieces of a and b squares each, a + b = m + 1. By the induction hypothesis, the “a” piece requires a − 1 breaks, and the “b” piece requires b − 1 breaks, so together the number of breaks is (a − 1) + (b − 1) + 1 = a + b − 1 = m = (m + 1) − 1, and we are done (note that the extra 1 comes from the initial break that divides the chocolate bar into the “a” and the “b” pieces). So the “boring” way of breaking up the chocolate (first into rows, and then each row separately into pieces) is in fact optimal.

Problem 1.9. Let IP be: [P(0) ∧ (∀n)(P(n) → P(n + 1))] → (∀m)P(m) (where n, m range over the natural numbers), and let LNP be: every nonempty subset of the natural numbers has a least element. These two principles are equivalent, in the sense that each can be shown from the other. Indeed:

LNP ⇒ IP: Suppose we have [P(0) ∧ (∀n)(P(n) → P(n + 1))], but that it is not the case that (∀m)P(m). Then the set S of m’s for which P(m) is false is nonempty. By the LNP we know that S has a least element. We know this element is not 0, as P(0) was assumed. So this element can be expressed as n + 1 for some natural number n. But since n + 1 is the least such number, P(n) must hold. This is a contradiction, as we assumed that (∀n)(P(n) → P(n + 1)), and here we have an n such that P(n) but not P(n + 1).

IP ⇒ LNP: Suppose that S is a nonempty subset of the natural numbers.
Suppose that it does not have a least element; let P(n) be the assertion “all elements up to and including n are not in S.” We know that P(0) must be true, for otherwise 0 would be in S, and it would then be the least element (by definition of 0). Suppose P(n) is true (so none of {0, 1, 2, . . . , n} is in S). Suppose P(n + 1) were false: then n + 1 would necessarily be in S (as we know that none of {0, 1, 2, . . . , n} is in S), and thereby n + 1 would be the smallest element in S, contradicting our assumption. So we have shown [P(0) ∧ (∀n)(P(n) → P(n + 1))]. By IP we can therefore conclude that (∀m)P(m). But this means that S is empty. Contradiction. Thus S must have a least element. Showing that IP and CIP are equivalent is straightforward.

Problem 1.10. We use the example in figure 1.1. Suppose that we want to obtain the tree from the infix (2164735) and prefix (1234675) encodings: from the prefix encoding we know that 1 is the root, and thus from the infix encoding we know that the left subtree has the infix encoding 2, and so the prefix encoding 2, and the right subtree has the infix encoding 64735 and so the prefix encoding 34675, and we proceed recursively.

Problem 1.11. Consider the following invariant: the sum S of the numbers currently in the set is odd. Basis case: S = 1 + 2 + · · · + 2n = n(2n + 1), which is odd. Induction step: assume S is odd, and let S′ be the result of one more iteration; then S′ = S + |a − b| − a − b = S − 2 min(a, b), and since 2 min(a, b) is even and S was odd by the induction hypothesis, it follows that S′ must be odd as well. At the end, when there is just one number left, say x, S = x, so x is odd.

Problem 1.12. To solve this problem we must provide both an algorithm and an invariant for it. The algorithm works as follows: initially, divide the club into any two groups.
Let H be the total number of enemies that members have within their own group. Now repeat the following loop: while there is a member m who has at least two enemies in his own group, move m to the other group (where m must have at most one enemy). Thus, each time m switches groups, H decreases. The invariant here is “H decreases monotonically.” Since a sequence of nonnegative integers cannot decrease forever, when H reaches its minimum we obtain the required distribution.

Problem 1.13. At first, arrange the guests in any way; let H be the number of neighboring hostile pairs. We find an algorithm that reduces H whenever H > 0. Suppose H > 0, and let (A, B) be a hostile couple, sitting side by side, in the clockwise order A, B. Traverse the table, clockwise, until we find another couple (A′, B′), with B′ the clockwise neighbor of A′, such that A, A′ and B, B′ are friends. Such a couple must exist: there are 2n − 2 − 1 = 2n − 3 candidates for A′ (these are all the people sitting clockwise after B who have a neighbor sitting next to them, again clockwise, and that neighbor is neither A nor B). As A has at least n friends (among people other than itself), out of these 2n − 3 candidates at least n − 1 are friends of A. If each of these friends had an enemy of B sitting next to it (again, going clockwise), then B would have at least n enemies, which is not possible; so there must be an A′, a friend of A, whose clockwise neighbor B′ is a friend of B; see figure 1.5. Note that when n = 1 no one has enemies, so this analysis is applicable when n ≥ 2, in which case 2n − 3 ≥ 1. Now the situation around the table is . . . , A, B, . . . , A′, B′, . . .. Reverse the segment of guests from B to A′ (i.e., take its mirror image) to reduce H by 1.

Fig. 1.5 List of guests sitting around the table, in clockwise order, starting at A: A, B, c1, c2, . . . , c2n−3, c2n−2. We are interested in friends of A among c1, c2, . . . , c2n−3, to make sure that there is a neighbor to the right, and that that neighbor is not A or B; of course, the table wraps around at c2n−2, so the next neighbor, clockwise, of c2n−2 is A. As A has at most n − 1 enemies, A has at least n friends (not counting itself; self-love does not count as friendship). Those n friends of A are among the c’s, but if we exclude c2n−2 it follows that A has at least n − 1 friends among c1, c2, . . . , c2n−3. If the clockwise neighbor ci+1 of each such friend ci, 1 ≤ i ≤ 2n − 3, were an enemy of B, then, as B already has A as an enemy, it would follow that B has n enemies, which is not possible.

Keep repeating this procedure while H > 0; eventually H = 0 (by the LNP), at which point there are no neighbors that dislike each other.

Problem 1.14. We partition the participants into the set E of even persons and the set O of odd persons. We observe that, during the handshaking ceremony, the size of O cannot change its parity. Indeed, if two odd persons shake hands, |O| decreases by 2. If two even persons shake hands, |O| increases by 2, and if an even and an odd person shake hands, |O| does not change. Since initially |O| = 0, the parity of |O| is preserved.

Problem 1.15. Suppose that i|m and i|n. Then i|(m − qn) = r = rem(m, n). Therefore i ≤ gcd(n, rem(m, n)), and as this is true for every such i, it is in particular true for i = gcd(m, n); thus gcd(m, n) ≤ gcd(n, rem(m, n)). Conversely, suppose that i|n and i|rem(m, n). Then i|(qn + r) = m, so i ≤ gcd(m, n), and again, gcd(n, rem(m, n)) meets the condition of being such an i, so we have gcd(n, rem(m, n)) ≤ gcd(m, n). Both inequalities taken together give us gcd(m, n) = gcd(n, rem(m, n)).

Problem 1.16. Let ri be the value of r after the i-th iteration of the loop. Note that r0 = rem(m, n) = rem(a, b) ≥ 0, and in fact every ri ≥ 0 by definition of remainder.
Furthermore, ri+1 = rem(m′, n′) = rem(n, ri) < ri, and so we have a decreasing, yet nonnegative, sequence of numbers; by the LNP this must terminate.

Problem 1.17. When m < n, rem(m, n) = m, and so m′ = n and n′ = m. Thus we execute one iteration of the loop only to swap m and n. We could therefore add one line at the beginning of Euclid’s algorithm to check whether m < n, and if that is the case, swap them.

Problem 1.18. (a) We show that if d = gcd(a, b), then there exist u, v such that au + bv = d. Let S = {ax + by : ax + by > 0}; clearly S ≠ ∅. By the LNP there exists a least g ∈ S, say g = ax0 + by0. We show that g = d. Let a = q · g + r, 0 ≤ r < g. Suppose that r > 0; then r = a − q · g = a − q(ax0 + by0) = a(1 − qx0) + b(−qy0). Thus r ∈ S, but r < g, a contradiction. So r = 0, and thus g|a; a similar argument shows that g|b. It remains to show that g is greater than any other common divisor of a, b. Suppose c|a and c|b; then c|(ax0 + by0), and so c|g, which means that c ≤ g. Thus g = gcd(a, b) = d. (b) See page 13, algorithm E, in [Knuth (1997)]; also page 292, algorithm A.5, in [Delfs and Knebl (2007)]. (c) On pp. 292–293 in [Delfs and Knebl (2007)] there is a nice analysis of their version of the algorithm. They bound the running time in terms of Fibonacci numbers, and obtain the desired bound on the running time.

Problem 1.19. For partial correctness of algorithm 1.3, we show that if the precondition holds, and if the algorithm terminates, then the postcondition will hold. So assume the precondition, and suppose first that A is not a palindrome.
Then there exists a smallest i0 (one exists, and so by the LNP a smallest one exists) such that A[i0] ≠ A[n − i0 + 1], and so, after the first i0 − 1 iterations of the while-loop, we know from the loop invariant that i = (i0 − 1) + 1 = i0, and so line 4 is executed and the algorithm returns F. Therefore, “A is not a palindrome” ⇒ “return F.”

Suppose now that A is a palindrome. Then line 4 is never executed (as no such i0 exists), and so after the k = ⌊n/2⌋-th iteration of the while-loop we know from the loop invariant that i = ⌊n/2⌋ + 1, so the while-loop is not executed any more, and the algorithm moves on to line 8 and returns T. Therefore, “A is a palindrome” ⇒ “return T.” Therefore the postcondition, “return T iff A is a palindrome,” holds. Note that we have only used part of the loop invariant, namely the fact that after the k-th iteration, i = k + 1; it still holds that after the k-th iteration, A[j] = A[n − j + 1] for 1 ≤ j ≤ k, but we do not need this fact in the above proof.

To show that the algorithm does actually terminate, let di = ⌊n/2⌋ − i. By the precondition, we know that n ≥ 1. The sequence d1, d2, d3, . . . is a decreasing sequence of nonnegative integers (because i ≤ ⌊n/2⌋ while the loop runs), so by the LNP it is finite, and so the loop terminates.

Problem 1.20. The solution is given by algorithm 1.13.

Algorithm 1.13 Powers of 2
Precondition: n ≥ 1
x ← n
while (x > 1) do
  if (2 | x) then
    x ← x/2
  else
    stop and return “no”
  end if
end while
return “yes”
Postcondition: “yes” ⇐⇒ n is a power of 2

Let the loop invariant be: “x is a power of 2 iff n is a power of 2.” We show the loop invariant by induction on the number of iterations of the main loop. Basis case: zero iterations; since x ← n, we have x = n, so obviously x is a power of 2 iff n is a power of 2. For the induction step, note that if we ever get to update x, we have x′ = x/2, and clearly x′ is a power of 2 iff x is.
Note that the algorithm always terminates (let x0 = n and xi+1 = xi/2, and apply the LNP as usual). We can now prove correctness: if the algorithm returns “yes”, then after the last iteration of the loop x = 1 = 2^0, and by the loop invariant n is a power of 2. If, on the other hand, n is a power of 2, then so is every x, so eventually x = 1, and the algorithm returns “yes”.

Problem 1.21. Algorithm 1.4 computes the product of m and n; that is, the returned z = m · n. A good loop invariant is x · y + z = m · n.

Problem 1.23. We are going to prove a loop invariant on the outer loop of algorithm 1.6, that is, on the for-loop (indexed on i) that starts on line 2 and ends on line 7. Our invariant consists of two parts: after the k-th iteration of the loop, the following two statements hold true:
(1) the set {v1*, . . . , vk+1*} is orthogonal, and
(2) span{v1*, . . . , vk+1*} = span{v1, . . . , vk+1}.

Basis case: after zero iterations of the for-loop, that is, before the for-loop is ever executed, we have, from line 1 of the algorithm, that v1* ← v1. The first statement is true because {v1*} is orthogonal (a set consisting of a single nonzero vector is always orthogonal, and v1* = v1 ≠ 0 because the assumption, i.e., the precondition, is that {v1, . . . , vn} is linearly independent, so none of these vectors can be zero), and the second statement also holds trivially, since if v1* = v1 then span{v1*} = span{v1}.

Induction step: suppose that the two conditions hold after the first k iterations of the loop; we are going to show that they continue to hold after the (k + 1)-st iteration. Consider:
vk+2* = vk+2 − Σ_{j=1}^{k+1} µ_{(k+2)j} vj*,

which we obtain directly from line 6 of the algorithm; note that the outer for-loop is indexed on i, which goes from 2 to n, so after the k-th execution of line 2, for k ≥ 1, the value of the index i is k + 1. We show the first statement, i.e., that {v1*, . . . , vk+2*} is orthogonal. Since, by the induction hypothesis, we know that {v1*, . . . , vk+1*} is already orthogonal, it is enough to show that for 1 ≤ l ≤ k + 1, vl* · vk+2* = 0, which we do next:

vl* · vk+2* = vl* · ( vk+2 − Σ_{j=1}^{k+1} µ_{(k+2)j} vj* )
            = (vl* · vk+2) − Σ_{j=1}^{k+1} µ_{(k+2)j} (vl* · vj*)

and since vl* · vj* = 0 unless l = j, we have:

            = (vl* · vk+2) − µ_{(k+2)l} (vl* · vl*)

and using line 4 of the algorithm we write:
            = (vl* · vk+2) − ( (vk+2 · vl*) / ‖vl*‖² ) (vl* · vl*) = 0,

where we have used the fact that vl* · vl* = ‖vl*‖² and that vl* · vk+2 = vk+2 · vl*.

For the second statement of the loop invariant we need to show that

span{v1*, . . . , vk+2*} = span{v1, . . . , vk+2},      (1.5)

assuming, by the induction hypothesis, that span{v1*, . . . , vk+1*} = span{v1, . . . , vk+1}. The argument will be based on line 6 of the algorithm, which provides us with the following equality:
vk+2* = vk+2 − Σ_{j=1}^{k+1} µ_{(k+2)j} vj*.      (1.6)

Given the induction hypothesis, to show (1.5) we need only show the following two things:
(1) vk+2 ∈ span{v1*, . . . , vk+2*}, and
(2) vk+2* ∈ span{v1, . . . , vk+2}.

Using (1.6) we obtain immediately that vk+2 = vk+2* + Σ_{j=1}^{k+1} µ_{(k+2)j} vj*, and so we have (1). To show (2) we note that
span{v1, . . . , vk+1, vk+2} = span{v1*, . . . , vk+1*, vk+2}

by the induction hypothesis, and so we have what we need directly from (1.6). Finally, note that we never divide by zero in line 4 of the algorithm, because we always divide by ‖vj*‖², and the only way for the norm to be zero is if the vector vj* itself is zero. But we know from the postcondition that {v1*, . . . , vn*} is a basis, and so these vectors must be linearly independent, and so none of them can be zero.

Problem 1.24. Let S ⊆ Z × Z be the set consisting precisely of those pairs of integers (x, y) such that x ≥ y and x − y is even. We are going to prove that S is the domain of definition of F. First, if x < y then x ≠ y, and so we go on to compute F(x, F(x − 1, y + 1)), and now we must compute F(x − 1, y + 1); but if x < y, then clearly x − 1 < y + 1; this condition is preserved, and so we end up having to compute F(x − i, y + i) for all i, and the recursion never “bottoms out.” Suppose instead that x − y is odd. Then x ≠ y (as 0 is even!), so again we go on to F(x, F(x − 1, y + 1)); if x − y is odd, so is (x − 1) − (y + 1) = x − y − 2. Again we end up having to compute F(x − i, y + i) for all i, and the recursion never terminates. Thus no pair in the complement of S is in the domain of definition of F.

Suppose now that (x, y) ∈ S. Then x ≥ y and x − y is even; thus x − y = 2i for some i ≥ 0. We show, by induction on i, that the algorithm terminates on such (x, y) and outputs x + 1. Basis case: i = 0, so x = y, and the algorithm returns y + 1, which is x + 1. Suppose now that x − y = 2(i + 1). Then x ≠ y, and so we compute F(x, F(x − 1, y + 1)). But (x − 1) − (y + 1) = x − y − 2 = 2(i + 1) − 2 = 2i, with i ≥ 0, and so by induction F(x − 1, y + 1) terminates and outputs (x − 1) + 1 = x. So now we must compute F(x, x), which is just x + 1, and we are done.

Problem 1.25. We show that f1 is a fixed point of algorithm 1.8.
Recall that in problem 1.24 we showed that the domain of definition of F, the function computed by algorithm 1.8, is S = {(x, y) : x − y = 2i, i ≥ 0}. Now we show that if we replace F in algorithm 1.8 by f1, the new algorithm, which is algorithm 1.14, still computes F, albeit not recursively (as f1 is defined by algorithm 1.9, which is not recursive).

Algorithm 1.14 Algorithm 1.8 with F replaced by f1
1: if x = y then
2:   return y + 1
3: else
4:   return f1(x, f1(x − 1, y + 1))
5: end if

We proceed as follows: if (x, y) ∈ S, then x − y = 2i with i ≥ 0. On such (x, y) we know, from problem 1.24, that F(x, y) = x + 1. Now consider the output of algorithm 1.14 on such a pair (x, y). If i = 0, then it returns y + 1 = x + 1, and we are done. If i > 0, then it computes f1(x, f1(x − 1, y + 1)) = f1(x, x) = x + 1, and we are done. To see why f1(x − 1, y + 1) = x, notice that there are two cases: first, if x − 1 = y + 1, then the algorithm for f1 (algorithm 1.9) returns (y + 1) + 1 = (x − 1) + 1 = x. Second, if x − 1 > y + 1 (and that is the only other possibility), algorithm 1.9 returns (x − 1) + 1 = x as well.

Problem 1.27. After b has proposed to g for the first time, whether this proposal was successful or not, the partners of g can only get better. Thus there is no need for b to try again.

Problem 1.28. bs+1 proposes to the girls according to his list of preference; some g ends up accepting, and if the g who accepted bs+1 was free, she is the new girl with a partner. Otherwise, some b∗ ∈ {b1, . . . , bs} became disengaged, and we repeat the same argument. The g’s disengage only if a better b proposes, so it is true that pMs+1(gj) <j pMs(gj).

Problem 1.29. Suppose that we have a blocking pair {b, g}, meaning that {(b, g′), (b′, g)} ⊆ Mn, but b prefers g to g′, and g prefers b to b′. Either b proposed to g before b′ did, or after.
If b proposed to g before b′ did, then g would have been with b or someone better when b′ came around, so g would not have become engaged to b′. On the other hand, since (b′, g) is a pair in Mn, no better offer was made to g after the offer of b′, so b could not have come after b′. In either case we get an impossibility, and so there is no blocking pair {b, g}.

Problem 1.30. To show that the matching is boy-optimal, we argue by contradiction. Let “g is an optimal partner for b” mean that among all stable matchings, g is the best partner that b can get. We run the Gale-Shapley algorithm, and let b be the first boy who is rejected by his optimal partner g. This means that g has already been paired with some b′, and g prefers b′ to b. Furthermore, g is at least as desirable to b′ as b′’s own optimal partner (since the proposal of b is the first time during the run of the algorithm that a boy is rejected by his optimal partner). Since g is optimal for b, we know (by definition) that there exists some stable matching S where (b, g) is a pair. On the other hand, the optimal partner of b′ is ranked (by b′, of course) at most as high as g, and since g is taken by b, whoever b′ is paired with in S, say g′, b′ prefers g to g′. This gives us an unstable pairing, because b′ and g prefer each other to the partners they have in S.

To show that the Gale-Shapley algorithm is girl-pessimal, we use the fact that it is boy-optimal (which we just showed). Again, we argue by contradiction. Suppose there is a stable matching S where g is paired with b′, and g prefers b to b′, where (b, g) is a pair produced by the Gale-Shapley algorithm. By boy-optimality, we know that in S we have (b, g′), where g′ is not higher on the preference list of b than g, and since g is already paired with b′ in S, we know that g′ is actually lower. This says that S is unstable, since b and g would rather be together than with their partners in S.
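Algorithm 1.12 and the stability check of problem 1.29 are easy to prototype. The following sketch is in Python, with function and variable names of our own choosing; it processes proposals in an arbitrary order rather than in the stage-by-stage fashion above, but it is a classical fact that the boy-proposing outcome does not depend on the order of proposals:

```python
def gale_shapley(boy_pref, girl_pref):
    """Boy-proposing Gale-Shapley.

    boy_pref[b] / girl_pref[g] list partners from most to least preferred.
    Returns a dict girl -> boy.  Each boy's 'bookmark' (proposal index)
    only moves forward, as in problem 1.27, giving O(n^2) proposals.
    """
    # rank[g][b] = position of b on g's list; lower means preferred
    rank = {g: {b: i for i, b in enumerate(p)} for g, p in girl_pref.items()}
    next_choice = {b: 0 for b in boy_pref}   # each boy's bookmark
    engaged_to = {}                          # girl -> boy
    free = list(boy_pref)
    while free:
        b = free.pop()
        g = boy_pref[b][next_choice[b]]
        next_choice[b] += 1                  # the bookmark never moves back
        if g not in engaged_to:
            engaged_to[g] = b
        elif rank[g][b] < rank[g][engaged_to[g]]:
            free.append(engaged_to[g])       # her old partner is jilted
            engaged_to[g] = b
        else:
            free.append(b)                   # g rejects b; he tries again
    return engaged_to

def is_stable(match, boy_pref, girl_pref):
    """True iff no pair (b, g) blocks the matching (girl -> boy dict)."""
    rank_g = {g: {b: i for i, b in enumerate(p)} for g, p in girl_pref.items()}
    for g_cur, b in match.items():
        for g in boy_pref[b]:
            if g == g_cur:
                break                        # only girls b prefers to g_cur
            if rank_g[g][b] < rank_g[g][match[g]]:
                return False                 # (b, g) is a blocking pair
    return True
```

For instance, with boys = {'a': ['x','y','z'], 'b': ['y','x','z'], 'c': ['x','y','z']} and girls = {'x': ['b','a','c'], 'y': ['a','b','c'], 'z': ['a','b','c']}, the algorithm produces the stable matching x-a, y-b, z-c.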
1.6 Notes

This book is about proving things about algorithms: their correctness, their termination, their running time, etc. The art of mathematical proofs is a difficult art to master; a very good place to start is [Velleman (2006)].

N (the set of natural numbers) and IP (the induction principle) are very tightly related; the rigorous definition of N, as a set-theoretic object, is the following: it is the unique set satisfying these three properties: (i) it contains 0; (ii) if n is in it, then so is n + 1; and (iii) it satisfies the induction principle (which in this context is stated as follows: if S is a subset of N, and S satisfies (i) and (ii) above, then in fact S = N).

The references in this paragraph are with respect to Knuth's seminal The Art of Computer Programming, [Knuth (1997)]. For an extensive study of Euclid's algorithm see §1.1. Problem 1.2 comes from §1.2.1, problem #8, pg. 19. See §2.3.1, pg. 318, for more background on tree traversals. For the history of the concepts of pre- and postconditions, and loop invariants, see pg. 17.

See [Zingaro (2008)] for a book dedicated to the idea of invariants in the context of proving correctness of algorithms. A great source of problems on the invariance principle, that is, section 1.2, is chapter 1 in [Engel (1998)].

The example about the 8 × 8 board with two squares missing (figure 1.2) comes from [Dijkstra (1989)]. The palindrome madamimadam comes from Joyce's Ulysses. Section 1.3.5, on the correctness of recursive algorithms, is based on chapter 5 of [Manna (1974)]. Section 1.4 is based on §2 in [Cenzer and Remmel (2001)]. For another presentation of the Stable Marriage problem see chapter 1 in [Kleinberg and Tardos (2006)].
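As a final check, the case analysis for algorithm 1.14 at the start of this section can be verified mechanically. The behavior of f1 below is an assumption read off that case analysis (algorithm 1.9 itself is not reproduced in this section): f1(x, y) = y + 1 when x = y, and x + 1 when x > y. The function names are, of course, illustrative.

```python
def f1(x, y):
    # Assumed behavior of algorithm 1.9, per the case analysis above.
    return y + 1 if x == y else x + 1

def algorithm_1_14(x, y):
    # Algorithm 1.8 with the recursive call to F replaced by f1.
    if x == y:
        return y + 1
    return f1(x, f1(x - 1, y + 1))

# On every (x, y) in S = {(x, y) : x - y = 2i, i >= 0},
# the output should be F(x, y) = x + 1.
for i in range(50):
    y = 3
    x = y + 2 * i
    assert algorithm_1_14(x, y) == x + 1
```

The loop exercises both branches of the proof: i = 0 hits the base case, and i > 0 goes through f1(x, f1(x − 1, y + 1)).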
This note was uploaded on 11/30/2010 for the course CAS CS 2ME3 taught by Professor Soltys during the Spring '10 term at McMaster University.