
1 The Role of Algorithms in Computing

What are algorithms? Why is the study of algorithms worthwhile? What is the role of algorithms relative to other technologies used in computers? In this chapter, we will answer these questions.

1.1 Algorithms

Informally, an algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output.

We can also view an algorithm as a tool for solving a well-specified computational problem. The statement of the problem specifies in general terms the desired input/output relationship. The algorithm describes a specific computational procedure for achieving that input/output relationship.

For example, one might need to sort a sequence of numbers into nondecreasing order. This problem arises frequently in practice and provides fertile ground for introducing many standard design techniques and analysis tools. Here is how we formally define the sorting problem:

Input: A sequence of n numbers ⟨a_1, a_2, ..., a_n⟩.
Output: A permutation (reordering) ⟨a′_1, a′_2, ..., a′_n⟩ of the input sequence such that a′_1 ≤ a′_2 ≤ ··· ≤ a′_n.

For example, given the input sequence ⟨31, 41, 59, 26, 41, 58⟩, a sorting algorithm returns as output the sequence ⟨26, 31, 41, 41, 58, 59⟩. Such an input sequence is called an instance of the sorting problem. In general, an instance of a problem consists of the input (satisfying whatever constraints are imposed in the problem statement) needed to compute a solution to the problem.

Sorting is a fundamental operation in computer science (many programs use it as an intermediate step), and as a result a large number of good sorting algorithms have been developed. Which algorithm is best for a given application depends on—among other factors—the number of items to be sorted, the extent to which the items are already somewhat sorted, possible restrictions on the item values, and the kind of storage device to be used: main memory, disks, or tapes.

An algorithm is said to be correct if, for every input instance, it halts with the correct output. We say that a correct algorithm solves the given computational problem. An incorrect algorithm might not halt at all on some input instances, or it might halt with an answer other than the desired one. Contrary to what one might expect, incorrect algorithms can sometimes be useful, if their error rate can be controlled. We shall see an example of this in Chapter 31 when we study algorithms for finding large prime numbers. Ordinarily, however, we shall be concerned only with correct algorithms.

An algorithm can be specified in English, as a computer program, or even as a hardware design. The only requirement is that the specification must provide a precise description of the computational procedure to be followed.

What kinds of problems are solved by algorithms?

Sorting is by no means the only computational problem for which algorithms have been developed. (You probably suspected as much when you saw the size of this book.)
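As a quick concreteness check of the sorting specification above, here is a minimal sketch in Python (an illustrative aside only; the book specifies algorithms in pseudocode, and Python's built-in sorted() merely stands in for a correct sorting algorithm):

    # One instance of the sorting problem: the input sequence used in the text.
    instance = [31, 41, 59, 26, 41, 58]

    # A correct sorting algorithm must return a permutation of its input
    # arranged into nondecreasing order.
    output = sorted(instance)

    assert output == [26, 31, 41, 41, 58, 59]                # the output quoted in the text
    assert sorted(output) == sorted(instance)                # it is a permutation of the input
    assert all(output[k] <= output[k + 1] for k in range(len(output) - 1))   # nondecreasing

Checks of this kind (a permutation of the input, arranged in nondecreasing order) are exactly what "correct output" means for the sorting problem.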
Practical applications of algorithms are ubiquitous and include the following examples:

• The Human Genome Project has the goals of identifying all the 100,000 genes in human DNA, determining the sequences of the 3 billion chemical base pairs that make up human DNA, storing this information in databases, and developing tools for data analysis. Each of these steps requires sophisticated algorithms. While the solutions to the various problems involved are beyond the scope of this book, ideas from many of the chapters in this book are used in the solution of these biological problems, thereby enabling scientists to accomplish tasks while using resources efficiently. The savings are in time, both human and machine, and in money, as more information can be extracted from laboratory techniques.

• The Internet enables people all around the world to quickly access and retrieve large amounts of information. In order to do so, clever algorithms are employed to manage and manipulate this large volume of data. Examples of problems which must be solved include finding good routes on which the data will travel (techniques for solving such problems appear in Chapter 24), and using a search engine to quickly find pages on which particular information resides (related techniques are in Chapters 11 and 32).

• Electronic commerce enables goods and services to be negotiated and exchanged electronically. The ability to keep information such as credit card numbers, passwords, and bank statements private is essential if electronic commerce is to be used widely. Public-key cryptography and digital signatures (covered in Chapter 31) are among the core technologies used and are based on numerical algorithms and number theory.

• In manufacturing and other commercial settings, it is often important to allocate scarce resources in the most beneficial way. An oil company may wish to know where to place its wells in order to maximize its expected profit. A candidate for the presidency of the United States may want to determine where to spend money buying campaign advertising in order to maximize the chances of winning an election. An airline may wish to assign crews to flights in the least expensive way possible, making sure that each flight is covered and that government regulations regarding crew scheduling are met. An Internet service provider may wish to determine where to place additional resources in order to serve its customers more effectively. All of these are examples of problems that can be solved using linear programming, which we shall study in Chapter 29.

While some of the details of these examples are beyond the scope of this book, we do give underlying techniques that apply to these problems and problem areas. We also show how to solve many concrete problems in this book, including the following:

• We are given a road map on which the distance between each pair of adjacent intersections is marked, and our goal is to determine the shortest route from one intersection to another. The number of possible routes can be huge, even if we disallow routes that cross over themselves. How do we choose which of all possible routes is the shortest? Here, we model the road map (which is itself a model of the actual roads) as a graph (which we will meet in Chapter 10 and Appendix B), and we wish to find the shortest path from one vertex to another in the graph. We shall see how to solve this problem efficiently in Chapter 24.
• We are given a sequence ⟨A_1, A_2, ..., A_n⟩ of n matrices, and we wish to determine their product A_1 A_2 ··· A_n. Because matrix multiplication is associative, there are several legal multiplication orders. For example, if n = 4, we could perform the matrix multiplications as if the product were parenthesized in any of the following orders: (A_1 (A_2 (A_3 A_4))), (A_1 ((A_2 A_3) A_4)), ((A_1 A_2)(A_3 A_4)), ((A_1 (A_2 A_3)) A_4), or (((A_1 A_2) A_3) A_4). If these matrices are all square (and hence the same size), the multiplication order will not affect how long the matrix multiplications take. If, however, these matrices are of differing sizes (yet their sizes are compatible for matrix multiplication), then the multiplication order can make a very big difference. The number of possible multiplication orders is exponential in n, and so trying all possible orders may take a very long time. We shall see in Chapter 15 how to use a general technique known as dynamic programming to solve this problem much more efficiently.

• We are given an equation ax ≡ b (mod n), where a, b, and n are integers, and we wish to find all the integers x, modulo n, that satisfy the equation. There may be zero, one, or more than one such solution. We can simply try x = 0, 1, ..., n − 1 in order (a brute-force sketch of this approach appears just after this list), but Chapter 31 shows a more efficient method.

• We are given n points in the plane, and we wish to find the convex hull of these points. The convex hull is the smallest convex polygon containing the points. Intuitively, we can think of each point as being represented by a nail sticking out from a board. The convex hull would be represented by a tight rubber band that surrounds all the nails. Each nail around which the rubber band makes a turn is a vertex of the convex hull. (See Figure 33.6 on page 948 for an example.) Any of the 2^n subsets of the points might be the vertices of the convex hull. Knowing which points are vertices of the convex hull is not quite enough, either, since we also need to know the order in which they appear. There are many choices, therefore, for the vertices of the convex hull. Chapter 33 gives two good methods for finding the convex hull.
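As a small illustration of the brute-force approach mentioned in the modular-equation problem above, here is a sketch in Python (the function name and the choice of Python are illustrative assumptions, not part of the text; Chapter 31's number-theoretic method is far more efficient):

    def solve_congruence_brute_force(a, b, n):
        """Return every x in {0, 1, ..., n-1} satisfying a*x ≡ b (mod n),
        found by simply trying each candidate in order."""
        return [x for x in range(n) if (a * x - b) % n == 0]

    print(solve_congruence_brute_force(4, 8, 12))   # [2, 5, 8, 11]: more than one solution
    print(solve_congruence_brute_force(2, 3, 4))    # []: no solution exists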
These lists are far from exhaustive (as you again have probably surmised from this book's heft), but exhibit two characteristics that are common to many interesting algorithms.

1. There are many candidate solutions, most of which are not what we want. Finding one that we do want can present quite a challenge.

2. There are practical applications. Of the problems in the above list, shortest paths provides the easiest examples. A transportation firm, such as a trucking or railroad company, has a financial interest in finding shortest paths through a road or rail network because taking shorter paths results in lower labor and fuel costs. Or a routing node on the Internet may need to find the shortest path through the network in order to route a message quickly.

Data structures

This book also contains several data structures. A data structure is a way to store and organize data in order to facilitate access and modifications. No single data structure works well for all purposes, and so it is important to know the strengths and limitations of several of them.

Technique

Although you can use this book as a "cookbook" for algorithms, you may someday encounter a problem for which you cannot readily find a published algorithm (many of the exercises and problems in this book, for example!). This book will teach you techniques of algorithm design and analysis so that you can develop algorithms on your own, show that they give the correct answer, and understand their efficiency.

Hard problems

Most of this book is about efficient algorithms. Our usual measure of efficiency is speed, i.e., how long an algorithm takes to produce its result. There are some problems, however, for which no efficient solution is known. Chapter 34 studies an interesting subset of these problems, which are known as NP-complete.

Why are NP-complete problems interesting? First, although no efficient algorithm for an NP-complete problem has ever been found, nobody has ever proven that an efficient algorithm for one cannot exist. In other words, it is unknown whether or not efficient algorithms exist for NP-complete problems. Second, the set of NP-complete problems has the remarkable property that if an efficient algorithm exists for any one of them, then efficient algorithms exist for all of them. This relationship among the NP-complete problems makes the lack of efficient solutions all the more tantalizing. Third, several NP-complete problems are similar, but not identical, to problems for which we do know of efficient algorithms. A small change to the problem statement can cause a big change to the efficiency of the best known algorithm.

It is valuable to know about NP-complete problems because some of them arise surprisingly often in real applications. If you are called upon to produce an efficient algorithm for an NP-complete problem, you are likely to spend a lot of time in a fruitless search. If you can show that the problem is NP-complete, you can instead spend your time developing an efficient algorithm that gives a good, but not the best possible, solution.

As a concrete example, consider a trucking company with a central warehouse. Each day, it loads up the truck at the warehouse and sends it around to several locations to make deliveries. At the end of the day, the truck must end up back at the warehouse so that it is ready to be loaded for the next day. To reduce costs, the company wants to select an order of delivery stops that yields the lowest overall distance traveled by the truck. This problem is the well-known "traveling-salesman problem," and it is NP-complete. It has no known efficient algorithm. Under certain assumptions, however, there are efficient algorithms that give an overall distance that is not too far above the smallest possible. Chapter 35 discusses such "approximation algorithms."

Exercises

1.1-1 Give a real-world example in which one of the following computational problems appears: sorting, determining the best order for multiplying matrices, or finding the convex hull.

1.1-2 Other than speed, what other measures of efficiency might one use in a real-world setting?

1.1-3 Select a data structure that you have seen previously, and discuss its strengths and limitations.

1.1-4 How are the shortest-path and traveling-salesman problems given above similar? How are they different?

1.1-5 Come up with a real-world problem in which only the best solution will do. Then come up with one in which a solution that is "approximately" the best is good enough.

1.2 Algorithms as a technology

Suppose computers were infinitely fast and computer memory was free. Would you have any reason to study algorithms?
The answer is yes, if for no other reason than that you would still like to demonstrate that your solution method terminates and does so with the correct answer.

If computers were infinitely fast, any correct method for solving a problem would do. You would probably want your implementation to be within the bounds of good software engineering practice (i.e., well designed and documented), but you would most often use whichever method was the easiest to implement.

Of course, computers may be fast, but they are not infinitely fast. And memory may be cheap, but it is not free. Computing time is therefore a bounded resource, and so is space in memory. These resources should be used wisely, and algorithms that are efficient in terms of time or space will help you do so.

Efficiency

Algorithms devised to solve the same problem often differ dramatically in their efficiency. These differences can be much more significant than differences due to hardware and software.

As an example, in Chapter 2, we will see two algorithms for sorting. The first, known as insertion sort, takes time roughly equal to c_1·n^2 to sort n items, where c_1 is a constant that does not depend on n. That is, it takes time roughly proportional to n^2. The second, merge sort, takes time roughly equal to c_2·n·lg n, where lg n stands for log_2 n and c_2 is another constant that also does not depend on n. Insertion sort usually has a smaller constant factor than merge sort, so that c_1 < c_2. We shall see that the constant factors can be far less significant in the running time than the dependence on the input size n. Where merge sort has a factor of lg n in its running time, insertion sort has a factor of n, which is much larger. Although insertion sort is usually faster than merge sort for small input sizes, once the input size n becomes large enough, merge sort's advantage of lg n vs. n will more than compensate for the difference in constant factors. No matter how much smaller c_1 is than c_2, there will always be a crossover point beyond which merge sort is faster.

For a concrete example, let us pit a faster computer (computer A) running insertion sort against a slower computer (computer B) running merge sort. They each must sort an array of one million numbers. Suppose that computer A executes one billion instructions per second and computer B executes only ten million instructions per second, so that computer A is 100 times faster than computer B in raw computing power. To make the difference even more dramatic, suppose that the world's craftiest programmer codes insertion sort in machine language for computer A, and the resulting code requires 2n^2 instructions to sort n numbers. (Here, c_1 = 2.) Merge sort, on the other hand, is programmed for computer B by an average programmer using a high-level language with an inefficient compiler, with the resulting code taking 50·n·lg n instructions (so that c_2 = 50). To sort one million numbers, computer A takes

    2 · (10^6)^2 instructions / (10^9 instructions/second) = 2000 seconds,

while computer B takes

    50 · 10^6 · lg 10^6 instructions / (10^7 instructions/second) ≈ 100 seconds.

By using an algorithm whose running time grows more slowly, even with a poor compiler, computer B runs 20 times faster than computer A! The advantage of merge sort is even more pronounced when we sort ten million numbers: where insertion sort takes approximately 2.3 days, merge sort takes under 20 minutes.
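The arithmetic behind these figures is easy to reproduce. Here is a minimal sketch in Python that recomputes the quoted running times (the helper names are illustrative assumptions; the constants 2, 50, 10^9, and 10^7 come directly from the scenario above):

    import math

    def computer_a_seconds(n, c1=2, ips=1e9):
        # Computer A: machine-coded insertion sort taking c1 * n^2 instructions.
        return c1 * n**2 / ips

    def computer_b_seconds(n, c2=50, ips=1e7):
        # Computer B: high-level-language merge sort taking c2 * n * lg(n) instructions.
        return c2 * n * math.log2(n) / ips

    for n in (10**6, 10**7):
        print(n, computer_a_seconds(n), computer_b_seconds(n))
    # n = 10**6: about 2000 s versus about 100 s
    # n = 10**7: about 200000 s (roughly 2.3 days) versus about 1163 s (under 20 minutes)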
In general, as the problem size increases, so does the relative advantage of merge sort.

Algorithms and other technologies

The example above shows that algorithms, like computer hardware, are a technology. Total system performance depends on choosing efficient algorithms as much as on choosing fast hardware. Just as rapid advances are being made in other computer technologies, they are being made in algorithms as well.

You might wonder whether algorithms are truly that important on contemporary computers in light of other advanced technologies, such as

• hardware with high clock rates, pipelining, and superscalar architectures,
• easy-to-use, intuitive graphical user interfaces (GUIs),
• object-oriented systems, and
• local-area and wide-area networking.

The answer is yes. Although there are some applications that do not explicitly require algorithmic content at the application level (e.g., some simple web-based applications), most also require a degree of algorithmic content on their own. For example, consider a web-based service that determines how to travel from one location to another. (Several such services existed at the time of this writing.) Its implementation would rely on fast hardware, a graphical user interface, wide-area networking, and also possibly on object orientation. However, it would also require algorithms for certain operations, such as finding routes (probably using a shortest-path algorithm), rendering maps, and interpolating addresses.

Moreover, even an application that does not require algorithmic content at the application level relies heavily upon algorithms. Does the application rely on fast hardware? The hardware design used algorithms. Does the application rely on graphical user interfaces? The design of any GUI relies on algorithms. Does the application rely on networking? Routing in networks relies heavily on algorithms. Was the application written in a language other than machine code? Then it was processed by a compiler, interpreter, or assembler, all of which make extensive use of algorithms. Algorithms are at the core of most technologies used in contemporary computers.

Furthermore, with the ever-increasing capacities of computers, we use them to solve larger problems than ever before. As we saw in the above comparison between insertion sort and merge sort, it is at larger problem sizes that the differences in efficiencies between algorithms become particularly prominent.

Having a solid base of algorithmic knowledge and technique is one characteristic that separates the truly skilled programmers from the novices. With modern computing technology, you can accomplish some tasks without knowing much about algorithms, but with a good background in algorithms, you can do much, much more.

Exercises

1.2-1 Give an example of an application that requires algorithmic content at the application level, and discuss the function of the algorithms involved.

1.2-2 Suppose we are comparing implementations of insertion sort and merge sort on the same machine. For inputs of size n, insertion sort runs in 8n^2 steps, while merge sort runs in 64·n·lg n steps. For which values of n does insertion sort beat merge sort?

1.2-3 What is the smallest value of n such that an algorithm whose running time is 100n^2 runs faster than an algorithm whose running time is 2^n on the same machine?
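Exercises 1.2-2 and 1.2-3 ask for crossover points of the kind discussed under "Efficiency" above. Without stating the answers here, the following hedged sketch in Python shows one way to search for them numerically (the helper function and the search bound of 200 are illustrative assumptions, not part of the text):

    import math

    def n_values_where(condition, n_range):
        """Return every n in n_range for which condition(n) holds."""
        return [n for n in n_range if condition(n)]

    # Exercise 1.2-2: the n for which insertion sort's 8 n^2 steps are fewer
    # than merge sort's 64 n lg n steps.
    insertion_wins = n_values_where(lambda n: 8 * n * n < 64 * n * math.log2(n), range(2, 200))

    # Exercise 1.2-3: the smallest n for which 100 n^2 is smaller than 2^n.
    crossover = min(n_values_where(lambda n: 100 * n * n < 2**n, range(1, 200)))

    print(insertion_wins, crossover)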
Problems

1-1 Comparison of running times
For each function f(n) and time t in the following table, determine the largest size n of a problem that can be solved in time t, assuming that the algorithm to solve the problem takes f(n) microseconds.

              1 second   1 minute   1 hour   1 day   1 month   1 year   1 century
    lg n
    √n
    n
    n lg n
    n^2
    n^3
    2^n
    n!

Chapter notes

There are many excellent texts on the general topic of algorithms, including those by Aho, Hopcroft, and Ullman [5, 6], Baase and Van Gelder [26], Brassard and Bratley [46, 47], Goodrich and Tamassia [128], Horowitz, Sahni, and Rajasekaran [158], Kingston [179], Knuth [182, 183, 185], Kozen [193], Manber [210], Mehlhorn [217, 218, 219], Purdom and Brown [252], Reingold, Nievergelt, and Deo [257], Sedgewick [269], Skiena [280], and Wilf [315]. Some of the more practical aspects of algorithm design are discussed by Bentley [39, 40] and Gonnet [126]. Surveys of the field of algorithms can also be found in the Handbook of Theoretical Computer Science, Volume A [302] and the CRC Handbook on Algorithms and Theory of Computation [24]. Overviews of the algorithms used in computational biology can be found in textbooks by Gusfield [136], Pevzner [240], Setubal and Meidanis [272], and Waterman [309].

2 Getting Started

This chapter will familiarize you with the framework we shall use throughout the book to think about the design and analysis of algorithms. It is self-contained, but it does include several references to material that will be introduced in Chapters 3 and 4. (It also contains several summations, which Appendix A shows how to solve.)

We begin by examining the insertion sort algorithm to solve the sorting problem introduced in Chapter 1. We define a "pseudocode" that should be familiar to readers who have done computer programming and use it to show how we shall specify our algorithms. Having specified the algorithm, we then argue that it correctly sorts and we analyze its running time. The analysis introduces a notation that focuses on how that time increases with the number of items to be sorted. Following our discussion of insertion sort, we introduce the divide-and-conquer approach to the design of algorithms and use it to develop an algorithm called merge sort. We end with an analysis of merge sort's running time.

2.1 Insertion sort

Our first algorithm, insertion sort, solves the sorting problem introduced in Chapter 1:

Input: A sequence of n numbers ⟨a_1, a_2, ..., a_n⟩.
Output: A permutation (reordering) ⟨a′_1, a′_2, ..., a′_n⟩ of the input sequence such that a′_1 ≤ a′_2 ≤ ··· ≤ a′_n.

The numbers that we wish to sort are also known as the keys.

In this book, we shall typically describe algorithms as programs written in a pseudocode that is similar in many respects to C, Pascal, or Java. If you have been introduced to any of these languages, you should have little trouble reading our algorithms. What separates pseudocode from "real" code is that in pseudocode, we employ whatever expressive method is most clear and concise to specify a given algorithm. Sometimes, the clearest method is English, so do not be surprised if you come across an English phrase or sentence embedded within a section of "real" code. Another difference between pseudocode and real code is that pseudocode is not typically concerned with issues of software engineering.
Issues of data abstraction, modularity, and error handling are often ignored in order to convey the essence of the algorithm more concisely.

We start with insertion sort, which is an efficient algorithm for sorting a small number of elements. Insertion sort works the way many people sort a hand of playing cards. We start with an empty left hand and the cards face down on the table. We then remove one card at a time from the table and insert it into the correct position in the left hand. To find the correct position for a card, we compare it with each of the cards already in the hand, from right to left, as illustrated in Figure 2.1. At all times, the cards held in the left hand are sorted, and these cards were originally the top cards of the pile on the table.

[Figure 2.1: Sorting a hand of cards using insertion sort.]

Our pseudocode for insertion sort is presented as a procedure called INSERTION-SORT, which takes as a parameter an array A[1..n] containing a sequence of length n that is to be sorted. (In the code, the number n of elements in A is denoted by length[A].) The input numbers are sorted in place: the numbers are rearranged within the array A, with at most a constant number of them stored outside the array at any time. The input array A contains the sorted output sequence when INSERTION-SORT is finished.

[Figure 2.2: The operation of INSERTION-SORT on the array A = ⟨5, 2, 4, 6, 1, 3⟩. Array indices appear above the rectangles, and values stored in the array positions appear within the rectangles. (a)–(e) The iterations of the for loop of lines 1–8. In each iteration, the black rectangle holds the key taken from A[j], which is compared with the values in shaded rectangles to its left in the test of line 5. Shaded arrows show array values moved one position to the right in line 6, and black arrows indicate where the key is moved to in line 8. (f) The final sorted array.]

INSERTION-SORT(A)
1  for j ← 2 to length[A]
2      do key ← A[j]
3         ▷ Insert A[j] into the sorted sequence A[1..j−1].
4         i ← j − 1
5         while i > 0 and A[i] > key
6             do A[i+1] ← A[i]
7                i ← i − 1
8         A[i+1] ← key
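For readers who want to experiment with running code, here is a direct Python transcription of INSERTION-SORT (an illustrative sketch, not part of the text; Python lists are 0-indexed, so the pseudocode's indices 1..n become 0..n−1 and the test i > 0 becomes i >= 0):

    def insertion_sort(A):
        """Sort the list A in place into nondecreasing order, mirroring INSERTION-SORT."""
        for j in range(1, len(A)):            # line 1: j runs over the "unsorted" positions
            key = A[j]                        # line 2
            # Insert A[j] into the sorted sequence A[0..j-1] (the line-3 comment).
            i = j - 1                         # line 4
            while i >= 0 and A[i] > key:      # line 5
                A[i + 1] = A[i]               # line 6
                i = i - 1                     # line 7
            A[i + 1] = key                    # line 8
        return A

    print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6], as in Figure 2.2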
Loop invariants and the correctness of insertion sort

Figure 2.2 shows how this algorithm works for A = ⟨5, 2, 4, 6, 1, 3⟩. The index j indicates the "current card" being inserted into the hand. At the beginning of each iteration of the "outer" for loop, which is indexed by j, the subarray consisting of elements A[1..j−1] constitutes the currently sorted hand, and elements A[j+1..n] correspond to the pile of cards still on the table. In fact, elements A[1..j−1] are the elements originally in positions 1 through j−1, but now in sorted order. We state these properties of A[1..j−1] formally as a loop invariant:

At the start of each iteration of the for loop of lines 1–8, the subarray A[1..j−1] consists of the elements originally in A[1..j−1], but in sorted order.

We use loop invariants to help us understand why an algorithm is correct. We must show three things about a loop invariant:

Initialization: It is true prior to the first iteration of the loop.

Maintenance: If it is true before an iteration of the loop, it remains true before the next iteration.

Termination: When the loop terminates, the invariant gives us a useful property that helps show that the algorithm is correct.

When the first two properties hold, the loop invariant is true prior to every iteration of the loop. Note the similarity to mathematical induction, where to prove that a property holds, you prove a base case and an inductive step. Here, showing that the invariant holds before the first iteration is like the base case, and showing that the invariant holds from iteration to iteration is like the inductive step.

The third property is perhaps the most important one, since we are using the loop invariant to show correctness. It also differs from the usual use of mathematical induction, in which the inductive step is used infinitely; here, we stop the "induction" when the loop terminates.

Let us see how these properties hold for insertion sort.

Initialization: We start by showing that the loop invariant holds before the first loop iteration, when j = 2. (See footnote 1.) The subarray A[1..j−1], therefore, consists of just the single element A[1], which is in fact the original element in A[1]. Moreover, this subarray is sorted (trivially, of course), which shows that the loop invariant holds prior to the first iteration of the loop.

Maintenance: Next, we tackle the second property: showing that each iteration maintains the loop invariant. Informally, the body of the outer for loop works by moving A[j−1], A[j−2], A[j−3], and so on by one position to the right until the proper position for A[j] is found (lines 4–7), at which point the value of A[j] is inserted (line 8). A more formal treatment of the second property would require us to state and show a loop invariant for the "inner" while loop. At this point, however, we prefer not to get bogged down in such formalism, and so we rely on our informal analysis to show that the second property holds for the outer loop.

Termination: Finally, we examine what happens when the loop terminates. For insertion sort, the outer for loop ends when j exceeds n, i.e., when j = n + 1. Substituting n + 1 for j in the wording of the loop invariant, we have that the subarray A[1..n] consists of the elements originally in A[1..n], but in sorted order. But the subarray A[1..n] is the entire array! Hence, the entire array is sorted, which means that the algorithm is correct.

Footnote 1: When the loop is a for loop, the moment at which we check the loop invariant just prior to the first iteration is immediately after the initial assignment to the loop-counter variable and just before the first test in the loop header. In the case of INSERTION-SORT, this time is after assigning 2 to the variable j but before the first test of whether j ≤ length[A].

We shall use this method of loop invariants to show correctness later in this chapter and in other chapters as well.

Pseudocode conventions

We use the following conventions in our pseudocode.

1. Indentation indicates block structure. For example, the body of the for loop that begins on line 1 consists of lines 2–8, and the body of the while loop that begins on line 5 contains lines 6–7 but not line 8. Our indentation style applies to if-then-else statements as well. Using indentation instead of conventional indicators of block structure, such as begin and end statements, greatly reduces clutter while preserving, or even enhancing, clarity. (See footnote 2.)

Footnote 2: In real programming languages, it is generally not advisable to use indentation alone to indicate block structure, since levels of indentation are hard to determine when code is split across pages.
2. The looping constructs while, for, and repeat and the conditional constructs if, then, and else have interpretations similar to those in Pascal. (See footnote 3.) There is one subtle difference with respect to for loops, however: in Pascal, the value of the loop-counter variable is undefined upon exiting the loop, but in this book, the loop counter retains its value after exiting the loop. Thus, immediately after a for loop, the loop counter's value is the value that first exceeded the for loop bound. We used this property in our correctness argument for insertion sort. The for loop header in line 1 is for j ← 2 to length[A], and so when this loop terminates, j = length[A] + 1 (or, equivalently, j = n + 1, since n = length[A]).

Footnote 3: Most block-structured languages have equivalent constructs, though the exact syntax may differ from that of Pascal.

3. The symbol "▷" indicates that the remainder of the line is a comment.

4. A multiple assignment of the form i ← j ← e assigns to both variables i and j the value of expression e; it should be treated as equivalent to the assignment j ← e followed by the assignment i ← j.

5. Variables (such as i, j, and key) are local to the given procedure. We shall not use global variables without explicit indication.

6. Array elements are accessed by specifying the array name followed by the index in square brackets. For example, A[i] indicates the ith element of the array A. The notation ".." is used to indicate a range of values within an array. Thus, A[1..j] indicates the subarray of A consisting of the j elements A[1], A[2], ..., A[j].

7. Compound data are typically organized into objects, which are composed of attributes or fields. A particular field is accessed using the field name followed by the name of its object in square brackets. For example, we treat an array as an object with the attribute length indicating how many elements it contains. To specify the number of elements in an array A, we write length[A]. Although we use square brackets for both array indexing and object attributes, it will usually be clear from the context which interpretation is intended.

A variable representing an array or object is treated as a pointer to the data representing the array or object. For all fields f of an object x, setting y ← x causes f[y] = f[x]. Moreover, if we now set f[x] ← 3, then afterward not only is f[x] = 3, but f[y] = 3 as well. In other words, x and y point to ("are") the same object after the assignment y ← x. Sometimes, a pointer will refer to no object at all. In this case, we give it the special value NIL.

8. Parameters are passed to a procedure by value: the called procedure receives its own copy of the parameters, and if it assigns a value to a parameter, the change is not seen by the calling procedure. When objects are passed, the pointer to the data representing the object is copied, but the object's fields are not. For example, if x is a parameter of a called procedure, the assignment x ← y within the called procedure is not visible to the calling procedure. The assignment f[x] ← 3, however, is visible.

9. The boolean operators "and" and "or" are short circuiting. That is, when we evaluate the expression "x and y" we first evaluate x.
If x evaluates to FALSE, then the entire expression cannot evaluate to TRUE, and so we do not evaluate y. If, on the other hand, x evaluates to TRUE, we must evaluate y to determine the value of the entire expression. Similarly, in the expression "x or y" we evaluate the expression y only if x evaluates to FALSE. Short-circuiting operators allow us to write boolean expressions such as "x ≠ NIL and f[x] = y" without worrying about what happens when we try to evaluate f[x] when x is NIL.

Exercises

2.1-1 Using Figure 2.2 as a model, illustrate the operation of INSERTION-SORT on the array A = ⟨31, 41, 59, 26, 41, 58⟩.

2.1-2 Rewrite the INSERTION-SORT procedure to sort into nonincreasing instead of nondecreasing order.

2.1-3 Consider the searching problem:

Input: A sequence of n numbers A = ⟨a_1, a_2, ..., a_n⟩ and a value v.
Output: An index i such that v = A[i] or the special value NIL if v does not appear in A.

Write pseudocode for linear search, which scans through the sequence, looking for v. Using a loop invariant, prove that your algorithm is correct. Make sure that your loop invariant fulfills the three necessary properties.

2.1-4 Consider the problem of adding two n-bit binary integers, stored in two n-element arrays A and B. The sum of the two integers should be stored in binary form in an (n + 1)-element array C. State the problem formally and write pseudocode for adding the two integers.

2.2 Analyzing algorithms

Analyzing an algorithm has come to mean predicting the resources that the algorithm requires. Occasionally, resources such as memory, communication bandwidth, or computer hardware are of primary concern, but most often it is computational time that we want to measure. Generally, by analyzing several candidate algorithms for a problem, a most efficient one can be easily identified. Such analysis may indicate more than one viable candidate, but several inferior algorithms are usually discarded in the process.

Before we can analyze an algorithm, we must have a model of the implementation technology that will be used, including a model for the resources of that technology and their costs. For most of this book, we shall assume a generic one-processor, random-access machine (RAM) model of computation as our implementation technology and understand that our algorithms will be implemented as computer programs. In the RAM model, instructions are executed one after another, with no concurrent operations. In later chapters, however, we shall have occasion to investigate models for digital hardware.

Strictly speaking, one should precisely define the instructions of the RAM model and their costs. To do so, however, would be tedious and would yield little insight into algorithm design and analysis. Yet we must be careful not to abuse the RAM model. For example, what if a RAM had an instruction that sorts? Then we could sort in just one instruction. Such a RAM would be unrealistic, since real computers do not have such instructions. Our guide, therefore, is how real computers are designed. The RAM model contains instructions commonly found in real computers: arithmetic (add, subtract, multiply, divide, remainder, floor, ceiling), data movement (load, store, copy), and control (conditional and unconditional branch, subroutine call and return). Each such instruction takes a constant amount of time.

The data types in the RAM model are integer and floating point.
Although we typically do not concern ourselves with precision in this book, in some applications precision is crucial. We also assume a limit on the size of each word of data. For example, when working with inputs of size n, we typically assume that integers are represented by c lg n bits for some constant c ≥ 1. We require c ≥ 1 so that each word can hold the value of n, enabling us to index the individual input elements, and we restrict c to be a constant so that the word size does not grow arbitrarily. (If the word size could grow arbitrarily, we could store huge amounts of data in one word and operate on it all in constant time—clearly an unrealistic scenario.)

Real computers contain instructions not listed above, and such instructions represent a gray area in the RAM model. For example, is exponentiation a constant-time instruction? In the general case, no; it takes several instructions to compute x^y when x and y are real numbers. In restricted situations, however, exponentiation is a constant-time operation. Many computers have a "shift left" instruction, which in constant time shifts the bits of an integer by k positions to the left. In most computers, shifting the bits of an integer by one position to the left is equivalent to multiplication by 2. Shifting the bits by k positions to the left is equivalent to multiplication by 2^k. Therefore, such computers can compute 2^k in one constant-time instruction by shifting the integer 1 by k positions to the left, as long as k is no more than the number of bits in a computer word. We will endeavor to avoid such gray areas in the RAM model, but we will treat computation of 2^k as a constant-time operation when k is a small enough positive integer.

In the RAM model, we do not attempt to model the memory hierarchy that is common in contemporary computers. That is, we do not model caches or virtual memory (which is most often implemented with demand paging). Several computational models attempt to account for memory-hierarchy effects, which are sometimes significant in real programs on real machines. A handful of problems in this book examine memory-hierarchy effects, but for the most part, the analyses in this book will not consider them. Models that include the memory hierarchy are quite a bit more complex than the RAM model, so that they can be difficult to work with. Moreover, RAM-model analyses are usually excellent predictors of performance on actual machines.

Analyzing even a simple algorithm in the RAM model can be a challenge. The mathematical tools required may include combinatorics, probability theory, algebraic dexterity, and the ability to identify the most significant terms in a formula. Because the behavior of an algorithm may be different for each possible input, we need a means for summarizing that behavior in simple, easily understood formulas.

Even though we typically select only one machine model to analyze a given algorithm, we still face many choices in deciding how to express our analysis. We would like a way that is simple to write and manipulate, shows the important characteristics of an algorithm's resource requirements, and suppresses tedious details.

Analysis of insertion sort

The time taken by the INSERTION-SORT procedure depends on the input: sorting a thousand numbers takes longer than sorting three numbers. Moreover, INSERTION-SORT can take different amounts of time to sort two input sequences of the same size depending on how nearly sorted they already are.
In general, the time taken by an algorithm grows with the size of the input, so it is traditional to describe the running time of a program as a function of the size of its input. To do so, we need to define the terms "running time" and "size of input" more carefully.

The best notion for input size depends on the problem being studied. For many problems, such as sorting or computing discrete Fourier transforms, the most natural measure is the number of items in the input—for example, the array size n for sorting. For many other problems, such as multiplying two integers, the best measure of input size is the total number of bits needed to represent the input in ordinary binary notation. Sometimes, it is more appropriate to describe the size of the input with two numbers rather than one. For instance, if the input to an algorithm is a graph, the input size can be described by the numbers of vertices and edges in the graph. We shall indicate which input size measure is being used with each problem we study.

The running time of an algorithm on a particular input is the number of primitive operations or "steps" executed. It is convenient to define the notion of step so that it is as machine-independent as possible. For the moment, let us adopt the following view. A constant amount of time is required to execute each line of our pseudocode. One line may take a different amount of time than another line, but we shall assume that each execution of the ith line takes time c_i, where c_i is a constant. This viewpoint is in keeping with the RAM model, and it also reflects how the pseudocode would be implemented on most actual computers. (See footnote 4.)

Footnote 4: There are some subtleties here. Computational steps that we specify in English are often variants of a procedure that requires more than just a constant amount of time. For example, later in this book we might say "sort the points by x-coordinate," which, as we shall see, takes more than a constant amount of time. Also, note that a statement that calls a subroutine takes constant time, though the subroutine, once invoked, may take more. That is, we separate the process of calling the subroutine—passing parameters to it, etc.—from the process of executing the subroutine.

In the following discussion, our expression for the running time of INSERTION-SORT will evolve from a messy formula that uses all the statement costs c_i to a much simpler notation that is more concise and more easily manipulated. This simpler notation will also make it easy to determine whether one algorithm is more efficient than another.

We start by presenting the INSERTION-SORT procedure with the time "cost" of each statement and the number of times each statement is executed. For each j = 2, 3, ..., n, where n = length[A], we let t_j be the number of times the while loop test in line 5 is executed for that value of j. When a for or while loop exits in the usual way (i.e., due to the test in the loop header), the test is executed one time more than the loop body. We assume that comments are not executable statements, and so they take no time.
INSERTION-SORT(A)                                                cost    times
1  for j ← 2 to length[A]                                        c_1     n
2      do key ← A[j]                                             c_2     n − 1
3         ▷ Insert A[j] into the sorted sequence A[1..j−1].      0       n − 1
4         i ← j − 1                                              c_4     n − 1
5         while i > 0 and A[i] > key                             c_5     Σ_{j=2}^{n} t_j
6             do A[i+1] ← A[i]                                   c_6     Σ_{j=2}^{n} (t_j − 1)
7                i ← i − 1                                       c_7     Σ_{j=2}^{n} (t_j − 1)
8         A[i+1] ← key                                           c_8     n − 1

The running time of the algorithm is the sum of running times for each statement executed; a statement that takes c_i steps to execute and is executed n times will contribute c_i·n to the total running time. (See footnote 5.) To compute T(n), the running time of INSERTION-SORT, we sum the products of the cost and times columns, obtaining

    T(n) = c_1·n + c_2·(n − 1) + c_4·(n − 1) + c_5·Σ_{j=2}^{n} t_j + c_6·Σ_{j=2}^{n} (t_j − 1) + c_7·Σ_{j=2}^{n} (t_j − 1) + c_8·(n − 1).

Footnote 5: This characteristic does not necessarily hold for a resource such as memory. A statement that references m words of memory and is executed n times does not necessarily consume mn words of memory in total.

Even for inputs of a given size, an algorithm's running time may depend on which input of that size is given. For example, in INSERTION-SORT, the best case occurs if the array is already sorted. For each j = 2, 3, ..., n, we then find that A[i] ≤ key in line 5 when i has its initial value of j − 1. Thus t_j = 1 for j = 2, 3, ..., n, and the best-case running time is

    T(n) = c_1·n + c_2·(n − 1) + c_4·(n − 1) + c_5·(n − 1) + c_8·(n − 1)
         = (c_1 + c_2 + c_4 + c_5 + c_8)·n − (c_2 + c_4 + c_5 + c_8).

This running time can be expressed as a·n + b for constants a and b that depend on the statement costs c_i; it is thus a linear function of n.

If the array is in reverse sorted order—that is, in decreasing order—the worst case results. We must compare each element A[j] with each element in the entire sorted subarray A[1..j−1], and so t_j = j for j = 2, 3, ..., n. Noting that

    Σ_{j=2}^{n} j = n(n + 1)/2 − 1   and   Σ_{j=2}^{n} (j − 1) = n(n − 1)/2

(see Appendix A for a review of how to solve these summations), we find that in the worst case, the running time of INSERTION-SORT is

    T(n) = c_1·n + c_2·(n − 1) + c_4·(n − 1) + c_5·(n(n + 1)/2 − 1) + c_6·(n(n − 1)/2) + c_7·(n(n − 1)/2) + c_8·(n − 1)
         = (c_5/2 + c_6/2 + c_7/2)·n^2 + (c_1 + c_2 + c_4 + c_5/2 − c_6/2 − c_7/2 + c_8)·n − (c_2 + c_4 + c_5 + c_8).

This worst-case running time can be expressed as a·n^2 + b·n + c for constants a, b, and c that again depend on the statement costs c_i; it is thus a quadratic function of n.
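To make these formulas concrete, here is a small instrumented sketch in Python that counts how many times the while-loop test of line 5 executes, i.e. the sum of the t_j (the instrumentation is an illustrative assumption, not part of the text). On an already-sorted input of n elements the count is n − 1, matching the best case, and on a reverse-sorted input it is n(n + 1)/2 − 1, matching the worst case:

    def while_test_count(A):
        """Sort A in place and return how many times the line-5 test runs (the sum of the t_j)."""
        tests = 0
        for j in range(1, len(A)):
            key = A[j]
            i = j - 1
            while True:
                tests += 1                          # one execution of the line-5 test
                if not (i >= 0 and A[i] > key):
                    break
                A[i + 1] = A[i]
                i = i - 1
            A[i + 1] = key
        return tests

    n = 1000
    print(while_test_count(list(range(n))))         # already sorted: 999 = n - 1
    print(while_test_count(list(range(n, 0, -1))))  # reverse sorted: 500499 = n(n+1)/2 - 1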
Typically, as in insertion sort, the running time of an algorithm is fixed for a given input, although in later chapters we shall see some interesting "randomized" algorithms whose behavior can vary even for a fixed input.

Worst-case and average-case analysis

In our analysis of insertion sort, we looked at both the best case, in which the input array was already sorted, and the worst case, in which the input array was reverse sorted. For the remainder of this book, though, we shall usually concentrate on finding only the worst-case running time, that is, the longest running time for any input of size n. We give three reasons for this orientation.

• The worst-case running time of an algorithm is an upper bound on the running time for any input. Knowing it gives us a guarantee that the algorithm will never take any longer. We need not make some educated guess about the running time and hope that it never gets much worse.

• For some algorithms, the worst case occurs fairly often. For example, in searching a database for a particular piece of information, the searching algorithm's worst case will often occur when the information is not present in the database. In some searching applications, searches for absent information may be frequent.

• The "average case" is often roughly as bad as the worst case. Suppose that we randomly choose n numbers and apply insertion sort. How long does it take to determine where in subarray A[1..j−1] to insert element A[j]? On average, half the elements in A[1..j−1] are less than A[j], and half the elements are greater. On average, therefore, we check half of the subarray A[1..j−1], so t_j = j/2. If we work out the resulting average-case running time, it turns out to be a quadratic function of the input size, just like the worst-case running time.

In some particular cases, we shall be interested in the average-case or expected running time of an algorithm; in Chapter 5, we shall see the technique of probabilistic analysis, by which we determine expected running times. One problem with performing an average-case analysis, however, is that it may not be apparent what constitutes an "average" input for a particular problem. Often, we shall assume that all inputs of a given size are equally likely. In practice, this assumption may be violated, but we can sometimes use a randomized algorithm, which makes random choices, to allow a probabilistic analysis.

Order of growth

We used some simplifying abstractions to ease our analysis of the INSERTION-SORT procedure. First, we ignored the actual cost of each statement, using the constants c_i to represent these costs. Then, we observed that even these constants give us more detail than we really need: the worst-case running time is a·n^2 + b·n + c for some constants a, b, and c that depend on the statement costs c_i. We thus ignored not only the actual statement costs, but also the abstract costs c_i.

We shall now make one more simplifying abstraction. It is the rate of growth, or order of growth, of the running time that really interests us. We therefore consider only the leading term of a formula (e.g., a·n^2), since the lower-order terms are relatively insignificant for large n. We also ignore the leading term's constant coefficient, since constant factors are less significant than the rate of growth in determining computational efficiency for large inputs. Thus, we write that insertion sort, for example, has a worst-case running time of Θ(n^2) (pronounced "theta of n-squared"). We shall use Θ-notation informally in this chapter; it will be defined precisely in Chapter 3.

We usually consider one algorithm to be more efficient than another if its worst-case running time has a lower order of growth. Due to constant factors and lower-order terms, this evaluation may be in error for small inputs. But for large enough inputs, a Θ(n^2) algorithm, for example, will run more quickly in the worst case than a Θ(n^3) algorithm.

Exercises

2.2-1 Express the function n^3/1000 − 100n^2 − 100n + 3 in terms of Θ-notation.

2.2-2 Consider sorting n numbers stored in array A by first finding the smallest element of A and exchanging it with the element in A[1]. Then find the second smallest element of A, and exchange it with A[2]. Continue in this manner for the first n − 1 elements of A. Write pseudocode for this algorithm, which is known as selection sort. What loop invariant does this algorithm maintain? Why does it need to run for only the first n − 1 elements, rather than for all n elements? Give the best-case and worst-case running times of selection sort in Θ-notation.

2.2-3 Consider linear search again (see Exercise 2.1-3).
How many elements of the input sequence need to be checked on the average, assuming that the element being searched for is equally likely to be any element in the array? How about in the worst case? What are the average-case and worst-case running times of linear search in Θ-notation? Justify your answers.

2.2-4 How can we modify almost any algorithm to have a good best-case running time?

2.3 Designing algorithms

There are many ways to design algorithms. Insertion sort uses an incremental approach: having sorted the subarray A[1..j−1], we insert the single element A[j] into its proper place, yielding the sorted subarray A[1..j].

In this section, we examine an alternative design approach, known as "divide-and-conquer." We shall use divide-and-conquer to design a sorting algorithm whose worst-case running time is much less than that of insertion sort. One advantage of divide-and-conquer algorithms is that their running times are often easily determined using techniques that will be introduced in Chapter 4.

2.3.1 The divide-and-conquer approach

Many useful algorithms are recursive in structure: to solve a given problem, they call themselves recursively one or more times to deal with closely related subproblems. These algorithms typically follow a divide-and-conquer approach: they break the problem into several subproblems that are similar to the original problem but smaller in size, solve the subproblems recursively, and then combine these solutions to create a solution to the original problem.

The divide-and-conquer paradigm involves three steps at each level of the recursion:

Divide the problem into a number of subproblems.

Conquer the subproblems by solving them recursively. If the subproblem sizes are small enough, however, just solve the subproblems in a straightforward manner.

Combine the solutions to the subproblems into the solution for the original problem.

The merge sort algorithm closely follows the divide-and-conquer paradigm. Intuitively, it operates as follows.

Divide: Divide the n-element sequence to be sorted into two subsequences of n/2 elements each.

Conquer: Sort the two subsequences recursively using merge sort.

Combine: Merge the two sorted subsequences to produce the sorted answer.

The recursion "bottoms out" when the sequence to be sorted has length 1, in which case there is no work to be done, since every sequence of length 1 is already in sorted order.

The key operation of the merge sort algorithm is the merging of two sorted sequences in the "combine" step. To perform the merging, we use an auxiliary procedure MERGE(A, p, q, r), where A is an array and p, q, and r are indices numbering elements of the array such that p ≤ q < r. The procedure assumes that the subarrays A[p..q] and A[q+1..r] are in sorted order. It merges them to form a single sorted subarray that replaces the current subarray A[p..r].

Our MERGE procedure takes time Θ(n), where n = r − p + 1 is the number of elements being merged, and it works as follows. Returning to our card-playing motif, suppose we have two piles of cards face up on a table. Each pile is sorted, with the smallest cards on top. We wish to merge the two piles into a single sorted output pile, which is to be face down on the table. Our basic step consists of choosing the smaller of the two cards on top of the face-up piles, removing it from its pile (which exposes a new top card), and placing this card face down onto the output pile.
We repeat this step until one input pile is empty, at which time we just take the remaining input pile and place it face down onto the output pile. Computationally, each basic step takes constant time, since we are checking just two top cards. Since we perform at most n basic steps, merging takes Θ(n) time.

The following pseudocode implements the above idea, but with an additional twist that avoids having to check whether either pile is empty in each basic step. The idea is to put on the bottom of each pile a sentinel card, which contains a special value that we use to simplify our code. Here, we use ∞ as the sentinel value, so that whenever a card with ∞ is exposed, it cannot be the smaller card unless both piles have their sentinel cards exposed. But once that happens, all the nonsentinel cards have already been placed onto the output pile. Since we know in advance that exactly r − p + 1 cards will be placed onto the output pile, we can stop once we have performed that many basic steps.

MERGE(A, p, q, r)
 1  n_1 ← q − p + 1
 2  n_2 ← r − q
 3  create arrays L[1..n_1 + 1] and R[1..n_2 + 1]
 4  for i ← 1 to n_1
 5      do L[i] ← A[p + i − 1]
 6  for j ← 1 to n_2
 7      do R[j] ← A[q + j]
 8  L[n_1 + 1] ← ∞
 9  R[n_2 + 1] ← ∞
10  i ← 1
11  j ← 1
12  for k ← p to r
13      do if L[i] ≤ R[j]
14            then A[k] ← L[i]
15                 i ← i + 1
16            else A[k] ← R[j]
17                 j ← j + 1

In detail, the MERGE procedure works as follows. Line 1 computes the length n_1 of the subarray A[p..q], and line 2 computes the length n_2 of the subarray A[q+1..r]. We create arrays L and R ("left" and "right"), of lengths n_1 + 1 and n_2 + 1, respectively, in line 3. The for loop of lines 4–5 copies the subarray A[p..q] into L[1..n_1], and the for loop of lines 6–7 copies the subarray A[q+1..r] into R[1..n_2]. Lines 8–9 put the sentinels at the ends of the arrays L and R.

[Figure 2.3: The operation of lines 10–17 in the call MERGE(A, 9, 12, 16), when the subarray A[9..16] contains the sequence ⟨2, 4, 5, 7, 1, 2, 3, 6⟩. After copying and inserting sentinels, the array L contains ⟨2, 4, 5, 7, ∞⟩, and the array R contains ⟨1, 2, 3, 6, ∞⟩. Lightly shaded positions in A contain their final values, and lightly shaded positions in L and R contain values that have yet to be copied back into A. Taken together, the lightly shaded positions always comprise the values originally in A[9..16], along with the two sentinels. Heavily shaded positions in A contain values that will be copied over, and heavily shaded positions in L and R contain values that have already been copied back into A. (a)–(h) The arrays A, L, and R, and their respective indices k, i, and j prior to each iteration of the loop of lines 12–17. (i) The arrays and indices at termination. At this point, the subarray in A[9..16] is sorted, and the two sentinels in L and R are the only two elements in these arrays that have not been copied into A.]
Lines 10–17, illustrated in Figure 2.3, perform the r − p + 1 basic steps by maintaining the following loop invariant: At the start of each iteration of the for loop of lines 12–17, the subarray A[ p . . k − 1] contains the k − p smallest elements of L [1 . . n 1 + 1] and R [1 . . n 2 + 1], in sorted order. Moreover, L [i ] and R [ j ] are the smallest elements of their arrays that have not been copied back into A. We must show that this loop invariant holds prior to the first iteration of the for loop of lines 12–17, that each iteration of the loop maintains the invariant, and that the invariant provides a useful property to show correctness when the loop terminates. Initialization: Prior to the first iteration of the loop, we have k = p , so that the subarray A[ p . . k − 1] is empty. This empty subarray contains the k − p = 0 2.3 8 9 Designing algorithms 10 11 12 13 14 15 16 17 8 9 10 11 12 13 14 15 16 17 31 A… 1 1 3 2 4 2 5 3 1 k 2 1 3 2 6… 3 A… 5 1 1 3 2 4 2 5 3 4 2 k 1 3 2 6… 3 L 2 4 i 2 5 7∞ R1 (e) 2 3 6∞ j 4 L 2 4 2 5 i 7∞ R1 (f) 2 3 6∞ j 4 5 A… 1 8 1 3 9 10 11 12 13 14 15 16 17 2 4 2 5 3 4 5 1 3 k 2 2 6… 3 A… 5 1 8 1 3 9 10 11 12 13 14 15 16 17 2 4 2 5 3 4 5 1 6 2 6… k 3 3 L 2 4 2 5 7∞ i R1 (g) 3 6∞ j 4 L 2 4 2 5 7∞ i R1 (h) 2 6∞ j 4 5 A… 1 8 1 3 9 10 11 12 13 14 15 16 17 2 4 2 5 3 4 5 1 6 2 7… k 3 3 L 2 4 2 5 7∞ i R1 (i) 2 6∞ j 4 5 smallest elements of L and R , and since i = j = 1, both L [i ] and R [ j ] are the smallest elements of their arrays that have not been copied back into A. Maintenance: To see that each iteration maintains the loop invariant, let us first suppose that L [i ] ≤ R [ j ]. Then L [i ] is the smallest element not yet copied back into A. Because A[ p . . k − 1] contains the k − p smallest elements, after line 14 copies L [i ] into A[k ], the subarray A[ p . . k ] will contain the k − p + 1 smallest elements. Incrementing k (in the for loop update) and i (in line 15) reestablishes the loop invariant for the next iteration. If instead L [i ] > R [ j ], then lines 16–17 perform the appropriate action to maintain the loop invariant. Termination: At termination, k = r + 1. By the loop invariant, the subarray A[ p . . k − 1], which is A[ p . . r ], contains the k − p = r − p + 1 smallest elements of L [1 . . n 1 + 1] and R [1 . . n 2 + 1], in sorted order. The arrays L and R together contain n 1 + n 2 + 2 = r − p + 3 elements. All but the two largest have been copied back into A, and these two largest elements are the sentinels. To see that the M ERGE procedure runs in (n ) time, where n = r − p + 1, observe that each of lines 1–3 and 8–11 takes constant time, the for loops of 32 Chapter 2 Getting Started lines 4–7 take (n 1 + n 2 ) = (n ) time,6 and there are n iterations of the for loop of lines 12–17, each of which takes constant time. We can now use the M ERGE procedure as a subroutine in the merge sort algorithm. The procedure M ERGE -S ORT ( A, p , r ) sorts the elements in the subarray A[ p . . r ]. If p ≥ r , the subarray has at most one element and is therefore already sorted. Otherwise, the divide step simply computes an index q that partitions A[ p . . r ] into two subarrays: A[ p . . q ], containing n /2 elements, and A[q + 1 . . r ], containing n /2 elements.7 M ERGE -S ORT ( A, p , r ) 1 if p < r 2 then q ← ( p + r )/2 3 M ERGE -S ORT ( A, p , q ) 4 M ERGE -S ORT ( A, q + 1, r ) 5 M ERGE ( A, p , q , r ) To sort the entire sequence A = A[1], A[2], . . . 
, A[n ] , we make the initial call M ERGE -S ORT ( A, 1, length [ A]), where once again length[ A] = n . Figure 2.4 illustrates the operation of the procedure bottom-up when n is a power of 2. The algorithm consists of merging pairs of 1-item sequences to form sorted sequences of length 2, merging pairs of sequences of length 2 to form sorted sequences of length 4, and so on, until two sequences of length n /2 are merged to form the final sorted sequence of length n . 2.3.2 Analyzing divide-and-conquer algorithms When an algorithm contains a recursive call to itself, its running time can often be described by a recurrence equation or recurrence, which describes the overall running time on a problem of size n in terms of the running time on smaller inputs. We can then use mathematical tools to solve the recurrence and provide bounds on the performance of the algorithm. A recurrence for the running time of a divide-and-conquer algorithm is based on the three steps of the basic paradigm. As before, we let T (n ) be the running time on a problem of size n . If the problem size is small enough, say n ≤ c 6 We shall see in Chapter 3 how to formally interpret equations containing -notation. 7 The expression x denotes the least integer greater than or equal to x , and x denotes the greatest integer less than or equal to x . These notations are defined in Chapter 3. The easiest way to verify that setting q to ( p + r )/2 yields subarrays A[ p . . q ] and A[q + 1 . . r ] of sizes n /2 and n /2 , respectively, is to examine the four cases that arise depending on whether each of p and r is odd or even. 2.3 Designing algorithms 33 sorted sequence 1 2 2 3 merge 2 4 merge 2 merge 5 2 4 5 4 merge 7 1 7 1 merge 3 2 3 5 7 1 2 merge 2 merge 6 6 3 6 4 5 6 7 initial sequence Figure 2.4 The operation of merge sort on the array A = 5, 2, 4, 7, 1, 3, 2, 6 . The lengths of the sorted sequences being merged increase as the algorithm progresses from bottom to top. for some constant c, the straightforward solution takes constant time, which we write as (1). Suppose that our division of the problem yields a subproblems, each of which is 1/b the size of the original. (For merge sort, both a and b are 2, but we shall see many divide-and-conquer algorithms in which a = b.) If we take D (n ) time to divide the problem into subproblems and C (n ) time to combine the solutions to the subproblems into the solution to the original problem, we get the recurrence T (n ) = (1) if n ≤ c , aT (n /b) + D (n ) + C (n ) otherwise . In Chapter 4, we shall see how to solve common recurrences of this form. Analysis of merge sort Although the pseudocode for M ERGE -S ORT works correctly when the number of elements is not even, our recurrence-based analysis is simplified if we assume that the original problem size is a power of 2. Each divide step then yields two subsequences of size exactly n /2. In Chapter 4, we shall see that this assumption does not affect the order of growth of the solution to the recurrence. 34 Chapter 2 Getting Started We reason as follows to set up the recurrence for T (n ), the worst-case running time of merge sort on n numbers. Merge sort on just one element takes constant time. When we have n > 1 elements, we break down the running time as follows. Divide: The divide step just computes the middle of the subarray, which takes constant time. Thus, D (n ) = (1). Conquer: We recursively solve two subproblems, each of size n /2, which contributes 2T (n /2) to the running time. 
Combine: We have already noted that the M ERGE procedure on an n -element subarray takes time (n ), so C (n ) = (n ). When we add the functions D (n ) and C (n ) for the merge sort analysis, we are adding a function that is (n ) and a function that is (1). This sum is a linear function of n , that is, (n ). Adding it to the 2T (n /2) term from the “conquer” step gives the recurrence for the worst-case running time T (n ) of merge sort: T (n ) = (1) 2T (n /2) + if n = 1 , (n ) if n > 1 . (2.1) In Chapter 4, we shall see the “master theorem,” which we can use to show that T (n ) is (n lg n ), where lg n stands for log 2 n . Because the logarithm function grows more slowly than any linear function, for large enough inputs, merge sort, with its (n lg n ) running time, outperforms insertion sort, whose running time is (n 2), in the worst case. We do not need the master theorem to intuitively understand why the solution to the recurrence (2.1) is T (n ) = (n lg n ). Let us rewrite recurrence (2.1) as T (n ) = c if n = 1 , 2T (n /2) + cn if n > 1 , (2.2) where the constant c represents the time required to solve problems of size 1 as well as the time per array element of the divide and combine steps. 8 Figure 2.5 shows how we can solve the recurrence (2.2). For convenience, we assume that n is an exact power of 2. Part (a) of the figure shows T (n ), which in part (b) has been expanded into an equivalent tree representing the recurrence. The cn term is the root (the cost at the top level of recursion), and the two subtrees 8 It is unlikely that the same constant exactly represents both the time to solve problems of size 1 and the time per array element of the divide and combine steps. We can get around this problem by letting c be the larger of these times and understanding that our recurrence gives an upper bound on the running time, or by letting c be the lesser of these times and understanding that our recurrence gives a lower bound on the running time. Both bounds will be on the order of n lg n and, taken together, give a (n lg n ) running time. 2.3 Designing algorithms 35 T(n) cn cn T(n/2) T(n/2) cn/2 cn/2 T(n/4) (a) (b) T(n/4) (c) T(n/4) T(n/4) cn cn cn/2 cn/2 cn lg n cn/4 cn/4 cn/4 cn/4 cn c c c c n (d) c … c c Total: cn lg n + cn Figure 2.5 The construction of a recursion tree for the recurrence T (n ) = 2T (n /2) + cn . Part (a) shows T (n ), which is progressively expanded in (b)–(d) to form the recursion tree. The fully expanded tree in part (d) has lg n + 1 levels (i.e., it has height lg n , as indicated), and each level contributes a total cost of cn . The total cost, therefore, is cn lg n + cn , which is (n lg n ). … cn 36 Chapter 2 Getting Started of the root are the two smaller recurrences T (n /2). Part (c) shows this process carried one step further by expanding T (n /2). The cost for each of the two subnodes at the second level of recursion is cn /2. We continue expanding each node in the tree by breaking it into its constituent parts as determined by the recurrence, until the problem sizes get down to 1, each with a cost of c. Part (d) shows the resulting tree. Next, we add the costs across each level of the tree. The top level has total cost cn , the next level down has total cost c(n /2) + c(n /2) = cn , the level after that has total cost c(n /4) + c(n /4) + c(n /4) + c(n /4) = cn , and so on. In general, the level i below the top has 2i nodes, each contributing a cost of c(n /2 i ), so that the i th level below the top has total cost 2 i c(n /2i ) = cn . 
At the bottom level, there are n nodes, each contributing a cost of c, for a total cost of cn . The total number of levels of the “recursion tree” in Figure 2.5 is lg n + 1. This fact is easily seen by an informal inductive argument. The base case occurs when n = 1, in which case there is only one level. Since lg 1 = 0, we have that lg n + 1 gives the correct number of levels. Now assume as an inductive hypothesis that the number of levels of a recursion tree for 2 i nodes is lg 2i + 1 = i + 1 (since for any value of i , we have that lg 2i = i ). Because we are assuming that the original input size is a power of 2, the next input size to consider is 2 i +1 . A tree with 2i +1 nodes has one more level than a tree of 2 i nodes, and so the total number of levels is (i + 1) + 1 = lg 2i +1 + 1. To compute the total cost represented by the recurrence (2.2), we simply add up the costs of all the levels. There are lg n + 1 levels, each costing cn , for a total cost of cn (lg n + 1) = cn lg n + cn . Ignoring the low-order term and the constant c gives the desired result of (n lg n ). Exercises 2.3-1 Using Figure 2.4 as a model, illustrate the operation of merge sort on the array A = 3, 41, 52, 26, 38, 57, 9, 49 . 2.3-2 Rewrite the M ERGE procedure so that it does not use sentinels, instead stopping once either array L or R has had all its elements copied back to A and then copying the remainder of the other array back into A. 2.3-3 Use mathematical induction to show that when n is an exact power of 2, the solution of the recurrence Problems for Chapter 2 37 T (n ) = is T (n ) = n lg n . 2 if n = 2 , 2T (n /2) + n if n = 2k , for k > 1 2.3-4 Insertion sort can be expressed as a recursive procedure as follows. In order to sort A[1 . . n ], we recursively sort A[1 . . n − 1] and then insert A[n ] into the sorted array A[1 . . n − 1]. Write a recurrence for the running time of this recursive version of insertion sort. 2.3-5 Referring back to the searching problem (see Exercise 2.1-3), observe that if the sequence A is sorted, we can check the midpoint of the sequence against v and eliminate half of the sequence from further consideration. Binary search is an algorithm that repeats this procedure, halving the size of the remaining portion of the sequence each time. Write pseudocode, either iterative or recursive, for binary search. Argue that the worst-case running time of binary search is (lg n ). 2.3-6 Observe that the while loop of lines 5 – 7 of the I NSERTION -S ORT procedure in Section 2.1 uses a linear search to scan (backward) through the sorted subarray A[1 . . j − 1]. Can we use a binary search (see Exercise 2.3-5) instead to improve the overall worst-case running time of insertion sort to (n lg n )? 2.3-7 Describe a (n lg n )-time algorithm that, given a set S of n integers and another integer x , determines whether or not there exist two elements in S whose sum is exactly x . Problems 2-1 Insertion sort on small arrays in merge sort Although merge sort runs in (n lg n ) worst-case time and insertion sort runs in (n 2 ) worst-case time, the constant factors in insertion sort make it faster for small n . Thus, it makes sense to use insertion sort within merge sort when subproblems become sufficiently small. Consider a modification to merge sort in which n / k sublists of length k are sorted using insertion sort and then merged using the standard merging mechanism, where k is a value to be determined. a. 
Show that the n / k sublists, each of length k , can be sorted by insertion sort in (nk ) worst-case time. 38 Chapter 2 Getting Started b. Show that the sublists can be merged in (n lg(n / k )) worst-case time. c. Given that the modified algorithm runs in (nk + n lg(n / k )) worst-case time, what is the largest asymptotic ( -notation) value of k as a function of n for which the modified algorithm has the same asymptotic running time as standard merge sort? d. How should k be chosen in practice? 2-2 Correctness of bubblesort Bubblesort is a popular sorting algorithm. It works by repeatedly swapping adjacent elements that are out of order. B UBBLESORT ( A) 1 for i ← 1 to length[ A] 2 do for j ← length[ A] downto i + 1 3 do if A[ j ] < A[ j − 1] 4 then exchange A[ j ] ↔ A[ j − 1] a. Let A denote the output of B UBBLESORT ( A). To prove that B UBBLESORT is correct, we need to prove that it terminates and that A [1] ≤ A [2] ≤ · · · ≤ A [n ] , (2.3) where n = length [ A]. What else must be proved to show that B UBBLESORT actually sorts? The next two parts will prove inequality (2.3). b. State precisely a loop invariant for the for loop in lines 2–4, and prove that this loop invariant holds. Your proof should use the structure of the loop invariant proof presented in this chapter. c. Using the termination condition of the loop invariant proved in part (b), state a loop invariant for the for loop in lines 1–4 that will allow you to prove inequality (2.3). Your proof should use the structure of the loop invariant proof presented in this chapter. d. What is the worst-case running time of bubblesort? How does it compare to the running time of insertion sort? Problems for Chapter 2 39 2-3 Correctness of Horner’s rule The following code fragment implements Horner’s rule for evaluating a polynomial n P (x ) = ak x k = a0 + x (a1 + x (a2 + · · · + x (an−1 + xan ) · · ·)) , given the coefficients a0 , a1 , . . . , an and a value for x : 1 y←0 2 i ←n 3 while i ≥ 0 4 do y ← ai + x · y 5 i ←i −1 a. What is the asymptotic running time of this code fragment for Horner’s rule? b. Write pseudocode to implement the naive polynomial-evaluation algorithm that computes each term of the polynomial from scratch. What is the running time of this algorithm? How does it compare to Horner’s rule? c. Prove that the following is a loop invariant for the while loop in lines 3 –5. At the start of each iteration of the while loop of lines 3–5, y= n −(i +1) k =0 k =0 ak +i +1 x k . Interpret a summation with no terms as equaling 0. Your proof should follow the structure of the loop invariant proof presented in this chapter and should show that, at termination, y = n=0 ak x k . k d. Conclude by arguing that the given code fragment correctly evaluates a polynomial characterized by the coefficients a 0 , a1 , . . . , an . 2-4 Inversions Let A[1 . . n ] be an array of n distinct numbers. If i < j and A[i ] > A[ j ], then the pair (i , j ) is called an inversion of A. a. List the five inversions of the array 2, 3, 8, 6, 1 . b. What array with elements from the set {1, 2, . . . , n } has the most inversions? How many does it have? 40 Chapter 2 Getting Started c. What is the relationship between the running time of insertion sort and the number of inversions in the input array? Justify your answer. d. Give an algorithm that determines the number of inversions in any permutation on n elements in (n lg n ) worst-case time. (Hint: Modify merge sort.) 
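As an aside, the Horner's-rule fragment in Problem 2-3 is easy to run directly. The following Python sketch is one possible transcription; the coefficient list a holding a0, a1, . . . , an is an assumed representation, not anything specified in the problem.

def horner(a, x):
    # Evaluate a[0] + a[1]*x + ... + a[n]*x**n by Horner's rule.
    y = 0
    for coeff in reversed(a):  # i runs from n down to 0, as in the fragment
        y = coeff + x * y
    return y

# Example: P(x) = 1 + 2x + 3x^2 at x = 2 gives 1 + 4 + 12 = 17.
print(horner([1, 2, 3], 2))  # prints 17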
Chapter notes In 1968, Knuth published the first of three volumes with the general title The Art of Computer Programming [182, 183, 185]. The first volume ushered in the modern study of computer algorithms with a focus on the analysis of running time, and the full series remains an engaging and worthwhile reference for many of the topics presented here. According to Knuth, the word “algorithm” is derived from the name “al-Khowˆ rizmˆ,” a ninth-century Persian mathematician. a ı Aho, Hopcroft, and Ullman [5] advocated the asymptotic analysis of algorithms as a means of comparing relative performance. They also popularized the use of recurrence relations to describe the running times of recursive algorithms. Knuth [185] provides an encyclopedic treatment of many sorting algorithms. His comparison of sorting algorithms (page 381) includes exact step-counting analyses, like the one we performed here for insertion sort. Knuth’s discussion of insertion sort encompasses several variations of the algorithm. The most important of these is Shell’s sort, introduced by D. L. Shell, which uses insertion sort on periodic subsequences of the input to produce a faster sorting algorithm. Merge sort is also described by Knuth. He mentions that a mechanical collator capable of merging two decks of punched cards in a single pass was invented in 1938. J. von Neumann, one of the pioneers of computer science, apparently wrote a program for merge sort on the EDVAC computer in 1945. The early history of proving programs correct is described by Gries [133], who credits P. Naur with the first article in this field. Gries attributes loop invariants to R. W. Floyd. The textbook by Mitchell [222] describes more recent progress in proving programs correct. Introduction This part presents several algorithms that solve the following sorting problem: Input: A sequence of n numbers a1 , a2 , . . . , an . Output: A permutation (reordering) a 1 , a2 , . . . , an of the input sequence such that a1 ≤ a2 ≤ · · · ≤ an . The input sequence is usually an n -element array, although it may be represented in some other fashion, such as a linked list. The structure of the data In practice, the numbers to be sorted are rarely isolated values. Each is usually part of a collection of data called a record. Each record contains a key, which is the value to be sorted, and the remainder of the record consists of satellite data, which are usually carried around with the key. In practice, when a sorting algorithm permutes the keys, it must permute the satellite data as well. If each record includes a large amount of satellite data, we often permute an array of pointers to the records rather than the records themselves in order to minimize data movement. In a sense, it is these implementation details that distinguish an algorithm from a full-blown program. Whether we sort individual numbers or large records that contain numbers is irrelevant to the method by which a sorting procedure determines the sorted order. Thus, when focusing on the problem of sorting, we typically assume that the input consists only of numbers. The translation of an algorithm for sorting numbers into a program for sorting records is conceptually straightforward, although in a given engineering situation there may be other subtleties that make the actual programming task a challenge. 124 Part II Sorting and Order Statistics Why sorting? Many computer scientists consider sorting to be the most fundamental problem in the study of algorithms. 
There are several reasons:

• Sometimes the need to sort information is inherent in an application. For example, in order to prepare customer statements, banks need to sort checks by check number.

• Algorithms often use sorting as a key subroutine. For example, a program that renders graphical objects that are layered on top of each other might have to sort the objects according to an “above” relation so that it can draw these objects from bottom to top. We shall see numerous algorithms in this text that use sorting as a subroutine.

• There is a wide variety of sorting algorithms, and they use a rich set of techniques. In fact, many important techniques used throughout algorithm design are represented in the body of sorting algorithms that have been developed over the years. In this way, sorting is also a problem of historical interest.

• Sorting is a problem for which we can prove a nontrivial lower bound (as we shall do in Chapter 8). Our best upper bounds match the lower bound asymptotically, and so we know that our sorting algorithms are asymptotically optimal. Moreover, we can use the lower bound for sorting to prove lower bounds for certain other problems.

• Many engineering issues come to the fore when implementing sorting algorithms. The fastest sorting program for a particular situation may depend on many factors, such as prior knowledge about the keys and satellite data, the memory hierarchy (caches and virtual memory) of the host computer, and the software environment. Many of these issues are best dealt with at the algorithmic level, rather than by “tweaking” the code.

Sorting algorithms

We introduced two algorithms that sort n real numbers in Chapter 2. Insertion sort takes Θ(n²) time in the worst case. Because its inner loops are tight, however, it is a fast in-place sorting algorithm for small input sizes. (Recall that a sorting algorithm sorts in place if only a constant number of elements of the input array are ever stored outside the array.) Merge sort has a better asymptotic running time, Θ(n lg n), but the MERGE procedure it uses does not operate in place.

In this part, we shall introduce two more algorithms that sort arbitrary real numbers. Heapsort, presented in Chapter 6, sorts n numbers in place in O(n lg n) time. It uses an important data structure, called a heap, with which we can also implement a priority queue.

Quicksort, in Chapter 7, also sorts n numbers in place, but its worst-case running time is Θ(n²). Its average-case running time is Θ(n lg n), though, and it generally outperforms heapsort in practice. Like insertion sort, quicksort has tight code, so the hidden constant factor in its running time is small. It is a popular algorithm for sorting large input arrays.

Insertion sort, merge sort, heapsort, and quicksort are all comparison sorts: they determine the sorted order of an input array by comparing elements. Chapter 8 begins by introducing the decision-tree model in order to study the performance limitations of comparison sorts. Using this model, we prove a lower bound of Ω(n lg n) on the worst-case running time of any comparison sort on n inputs, thus showing that heapsort and merge sort are asymptotically optimal comparison sorts.

Chapter 8 then goes on to show that we can beat this lower bound of Ω(n lg n) if we can gather information about the sorted order of the input by means other than comparing elements. The counting sort algorithm, for example, assumes that the input numbers are in the set {1, 2, . .
. , k }. By using array indexing as a tool for determining relative order, counting sort can sort n numbers in (k + n ) time. Thus, when k = O (n ), counting sort runs in time that is linear in the size of the input array. A related algorithm, radix sort, can be used to extend the range of counting sort. If there are n integers to sort, each integer has d digits, and each digit is in the set {1, 2, . . . , k }, then radix sort can sort the numbers in (d (n + k )) time. When d is a constant and k is O (n ), radix sort runs in linear time. A third algorithm, bucket sort, requires knowledge of the probabilistic distribution of numbers in the input array. It can sort n real numbers uniformly distributed in the half-open interval [0, 1) in average-case O (n ) time. Order statistics The i th order statistic of a set of n numbers is the i th smallest number in the set. One can, of course, select the i th order statistic by sorting the input and indexing the i th element of the output. With no assumptions about the input distribution, this method runs in (n lg n ) time, as the lower bound proved in Chapter 8 shows. In Chapter 9, we show that we can find the i th smallest element in O (n ) time, even when the elements are arbitrary real numbers. We present an algorithm with tight pseudocode that runs in (n 2) time in the worst case, but linear time on average. We also give a more complicated algorithm that runs in O (n ) worst-case time. Background Although most of this part does not rely on difficult mathematics, some sections do require mathematical sophistication. In particular, the average-case analyses of quicksort, bucket sort, and the order-statistic algorithm use probability, which is 126 Part II Sorting and Order Statistics reviewed in Appendix C, and the material on probabilistic analysis and randomized algorithms in Chapter 5. The analysis of the worst-case linear-time algorithm for order statistics involves somewhat more sophisticated mathematics than the other worst-case analyses in this part. 6 Heapsort In this chapter, we introduce another sorting algorithm. Like merge sort, but unlike insertion sort, heapsort’s running time is O (n lg n ). Like insertion sort, but unlike merge sort, heapsort sorts in place: only a constant number of array elements are stored outside the input array at any time. Thus, heapsort combines the better attributes of the two sorting algorithms we have already discussed. Heapsort also introduces another algorithm design technique: the use of a data structure, in this case one we call a “heap,” to manage information during the execution of the algorithm. Not only is the heap data structure useful for heapsort, but it also makes an efficient priority queue. The heap data structure will reappear in algorithms in later chapters. We note that the term “heap” was originally coined in the context of heapsort, but it has since come to refer to “garbage-collected storage,” such as the programming languages Lisp and Java provide. Our heap data structure is not garbage-collected storage, and whenever we refer to heaps in this book, we shall mean the structure defined in this chapter. 6.1 Heaps The (binary) heap data structure is an array object that can be viewed as a nearly complete binary tree (see Section B.5.3), as shown in Figure 6.1. Each node of the tree corresponds to an element of the array that stores the value in the node. The tree is completely filled on all levels except possibly the lowest, which is filled from the left up to a point. 
An array A that represents a heap is an object with two attributes: length[ A], which is the number of elements in the array, and heap-size [ A], the number of elements in the heap stored within array A. That is, although A[1 . . length [ A]] may contain valid numbers, no element past A[heap-size [ A]], where heap-size [ A] ≤ length[ A], is an element of the heap. 128 Chapter 6 Heapsort 1 16 2 3 14 4 5 6 10 7 1 2 3 4 5 6 7 8 9 10 8 8 9 10 7 4 1 (a) 9 3 16 14 10 8 7 9 3 2 4 1 2 (b) Figure 6.1 A max-heap viewed as (a) a binary tree and (b) an array. The number within the circle at each node in the tree is the value stored at that node. The number above a node is the corresponding index in the array. Above and below the array are lines showing parent-child relationships; parents are always to the left of their children. The tree has height three; the node at index 4 (with value 8) has height one. The root of the tree is A[1], and given the index i of a node, the indices of its parent PARENT (i ), left child L EFT (i ), and right child R IGHT (i ) can be computed simply: PARENT (i ) return i /2 L EFT (i ) return 2i R IGHT (i ) return 2i + 1 On most computers, the L EFT procedure can compute 2i in one instruction by simply shifting the binary representation of i left one bit position. Similarly, the R IGHT procedure can quickly compute 2i + 1 by shifting the binary representation of i left one bit position and adding in a 1 as the low-order bit. The PARENT procedure can compute i /2 by shifting i right one bit position. In a good implementation of heapsort, these three procedures are often implemented as “macros” or “in-line” procedures. There are two kinds of binary heaps: max-heaps and min-heaps. In both kinds, the values in the nodes satisfy a heap property, the specifics of which depend on the kind of heap. In a max-heap, the max-heap property is that for every node i other than the root, A[PARENT (i )] ≥ A[i ] , 6.1 Heaps 129 that is, the value of a node is at most the value of its parent. Thus, the largest element in a max-heap is stored at the root, and the subtree rooted at a node contains values no larger than that contained at the node itself. A min-heap is organized in the opposite way; the min-heap property is that for every node i other than the root, A[PARENT (i )] ≤ A[i ] . The smallest element in a min-heap is at the root. For the heapsort algorithm, we use max-heaps. Min-heaps are commonly used in priority queues, which we discuss in Section 6.5. We shall be precise in specifying whether we need a max-heap or a min-heap for any particular application, and when properties apply to either max-heaps or min-heaps, we just use the term “heap.” Viewing a heap as a tree, we define the height of a node in a heap to be the number of edges on the longest simple downward path from the node to a leaf, and we define the height of the heap to be the height of its root. Since a heap of n elements is based on a complete binary tree, its height is (lg n ) (see Exercise 6.1-2). We shall see that the basic operations on heaps run in time at most proportional to the height of the tree and thus take O (lg n ) time. The remainder of this chapter presents five basic procedures and shows how they are used in a sorting algorithm and a priority-queue data structure. • The M AX -H EAPIFY procedure, which runs in O (lg n ) time, is the key to maintaining the max-heap property. The B UILD -M AX -H EAP procedure, which runs in linear time, produces a maxheap from an unordered input array. 
The H EAPSORT procedure, which runs in O (n lg n ) time, sorts an array in place. The M AX -H EAP -I NSERT , H EAP -E XTRACT-M AX, H EAP -I NCREASE -K EY, and H EAP -M AXIMUM procedures, which run in O (lg n ) time, allow the heap data structure to be used as a priority queue. • • • Exercises 6.1-1 What are the minimum and maximum numbers of elements in a heap of height h ? 6.1-2 Show that an n -element heap has height lg n . 130 Chapter 6 Heapsort 6.1-3 Show that in any subtree of a max-heap, the root of the subtree contains the largest value occurring anywhere in that subtree. 6.1-4 Where in a max-heap might the smallest element reside, assuming that all elements are distinct? 6.1-5 Is an array that is in sorted order a min-heap? 6.1-6 Is the sequence 23, 17, 14, 6, 13, 10, 1, 5, 7, 12 a max-heap? 6.1-7 Show that, with the array representation for storing an n -element heap, the leaves are the nodes indexed by n /2 + 1, n /2 + 2, . . . , n . 6.2 Maintaining the heap property M AX -H EAPIFY is an important subroutine for manipulating max-heaps. Its inputs are an array A and an index i into the array. When M AX -H EAPIFY is called, it is assumed that the binary trees rooted at L EFT (i ) and R IGHT (i ) are max-heaps, but that A[i ] may be smaller than its children, thus violating the max-heap property. The function of M AX -H EAPIFY is to let the value at A[i ] “float down” in the maxheap so that the subtree rooted at index i becomes a max-heap. M AX -H EAPIFY ( A, i ) 1 l ← L EFT (i ) 2 r ← R IGHT (i ) 3 if l ≤ heap-size [ A] and A[l ] > A[i ] 4 then largest ← l 5 else largest ← i 6 if r ≤ heap-size [ A] and A[r ] > A[largest ] 7 then largest ← r 8 if largest = i 9 then exchange A[i ] ↔ A[largest ] 10 M AX -H EAPIFY ( A, largest ) Figure 6.2 illustrates the action of M AX -H EAPIFY. At each step, the largest of the elements A[i ], A[L EFT (i )], and A[R IGHT (i )] is determined, and its index is 6.2 Maintaining the heap property 1 1 131 16 2 3 2 16 3 i 4 4 5 6 10 7 4 14 5 6 10 7 14 8 9 10 7 8 1 (a) 1 9 3 i 8 4 9 10 7 8 1 (b) 9 3 2 2 16 2 3 14 4 5 6 10 7 8 8 9 7 4 i 10 9 3 2 1 (c) Figure 6.2 The action of M AX -H EAPIFY( A , 2), where heap-size[ A] = 10. (a) The initial configuration, with A[2] at node i = 2 violating the max-heap property since it is not larger than both children. The max-heap property is restored for node 2 in (b) by exchanging A[2] with A[4], which destroys the max-heap property for node 4. The recursive call M AX -H EAPIFY( A , 4) now has i = 4. After swapping A[4] with A[9], as shown in (c), node 4 is fixed up, and the recursive call M AX -H EAPIFY( A , 9) yields no further change to the data structure. stored in largest . If A[i ] is largest, then the subtree rooted at node i is a max-heap and the procedure terminates. Otherwise, one of the two children has the largest element, and A[i ] is swapped with A[largest ], which causes node i and its children to satisfy the max-heap property. The node indexed by largest , however, now has the original value A[i ], and thus the subtree rooted at largest may violate the maxheap property. Consequently, M AX -H EAPIFY must be called recursively on that subtree. The running time of M AX -H EAPIFY on a subtree of size n rooted at given node i is the (1) time to fix up the relationships among the elements A[i ], A[L EFT (i )], and A[R IGHT (i )], plus the time to run M AX -H EAPIFY on a subtree rooted at one of the children of node i . 
The children’s subtrees each have size at most 2n /3—the worst case occurs when the last row of the tree is exactly half full—and the running time of M AX -H EAPIFY can therefore be described by the recurrence 132 Chapter 6 Heapsort T (n ) ≤ T (2n /3) + (1) . The solution to this recurrence, by case 2 of the master theorem (Theorem 4.1), is T (n ) = O (lg n ). Alternatively, we can characterize the running time of M AX H EAPIFY on a node of height h as O (h ). Exercises 6.2-1 Using Figure 6.2 as a model, illustrate the operation of M AX -H EAPIFY ( A, 3) on the array A = 27, 17, 3, 16, 13, 10, 1, 5, 7, 12, 4, 8, 9, 0 . 6.2-2 Starting with the procedure M AX -H EAPIFY, write pseudocode for the procedure M IN -H EAPIFY ( A, i ), which performs the corresponding manipulation on a minheap. How does the running time of M IN -H EAPIFY compare to that of M AX H EAPIFY? 6.2-3 What is the effect of calling M AX -H EAPIFY ( A, i ) when the element A[i ] is larger than its children? 6.2-4 What is the effect of calling M AX -H EAPIFY ( A, i ) for i > heap-size [ A]/2? 6.2-5 The code for M AX -H EAPIFY is quite efficient in terms of constant factors, except possibly for the recursive call in line 10, which might cause some compilers to produce inefficient code. Write an efficient M AX -H EAPIFY that uses an iterative control construct (a loop) instead of recursion. 6.2-6 Show that the worst-case running time of M AX -H EAPIFY on a heap of size n is (lg n ). (Hint: For a heap with n nodes, give node values that cause M AX H EAPIFY to be called recursively at every node on a path from the root down to a leaf.) 6.3 Building a heap We can use the procedure M AX -H EAPIFY in a bottom-up manner to convert an array A[1 . . n ], where n = length[ A], into a max-heap. By Exercise 6.1-7, the 6.3 Building a heap 133 elements in the subarray A[( n /2 + 1) . . n ] are all leaves of the tree, and so each is a 1-element heap to begin with. The procedure B UILD -M AX -H EAP goes through the remaining nodes of the tree and runs M AX -H EAPIFY on each one. B UILD -M AX -H EAP ( A) 1 heap-size [ A] ← length[ A] 2 for i ← length[ A]/2 downto 1 3 do M AX -H EAPIFY ( A, i ) Figure 6.3 shows an example of the action of B UILD -M AX -H EAP. To show why B UILD -M AX -H EAP works correctly, we use the following loop invariant: At the start of each iteration of the for loop of lines 2–3, each node i + 1, i + 2, . . . , n is the root of a max-heap. We need to show that this invariant is true prior to the first loop iteration, that each iteration of the loop maintains the invariant, and that the invariant provides a useful property to show correctness when the loop terminates. Initialization: Prior to the first iteration of the loop, i = n /2 . Each node n /2 + 1, n /2 + 2, . . . , n is a leaf and is thus the root of a trivial max-heap. Maintenance: To see that each iteration maintains the loop invariant, observe that the children of node i are numbered higher than i . By the loop invariant, therefore, they are both roots of max-heaps. This is precisely the condition required for the call M AX -H EAPIFY ( A, i ) to make node i a max-heap root. Moreover, the M AX -H EAPIFY call preserves the property that nodes i + 1, i + 2, . . . , n are all roots of max-heaps. Decrementing i in the for loop update reestablishes the loop invariant for the next iteration. Termination: At termination, i = 0. By the loop invariant, each node 1, 2, . . . , n is the root of a max-heap. In particular, node 1 is. 
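To make these procedures concrete, here is one possible Python transcription of PARENT, LEFT, RIGHT, MAX-HEAPIFY, and BUILD-MAX-HEAP. It is only a sketch: it assumes 0-based list indexing (so the children of node i sit at 2i + 1 and 2i + 2 rather than 2i and 2i + 1) and passes the heap size explicitly instead of storing it as an attribute of A.

def parent(i):
    return (i - 1) // 2

def left(i):
    return 2 * i + 1

def right(i):
    return 2 * i + 2

def max_heapify(A, i, heap_size):
    # Let A[i] "float down" until the subtree rooted at i is a max-heap,
    # assuming the subtrees rooted at left(i) and right(i) already are.
    l, r = left(i), right(i)
    largest = l if l < heap_size and A[l] > A[i] else i
    if r < heap_size and A[r] > A[largest]:
        largest = r
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        max_heapify(A, largest, heap_size)

def build_max_heap(A):
    # Convert an arbitrary list A into a max-heap, working bottom-up
    # from the last non-leaf node down to the root.
    for i in range(len(A) // 2 - 1, -1, -1):
        max_heapify(A, i, len(A))

# Example (matches Figure 6.3): build_max_heap turns
# [4, 1, 3, 2, 16, 9, 10, 14, 8, 7] into [16, 14, 10, 8, 7, 9, 3, 2, 4, 1].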
We can compute a simple upper bound on the running time of B UILD -M AX H EAP as follows. Each call to M AX -H EAPIFY costs O (lg n ) time, and there are O (n ) such calls. Thus, the running time is O (n lg n ). This upper bound, though correct, is not asymptotically tight. We can derive a tighter bound by observing that the time for M AX -H EAPIFY to run at a node varies with the height of the node in the tree, and the heights of most nodes are small. Our tighter analysis relies on the properties that an n -element heap has height lg n (see Exercise 6.1-2) and at most n /2h +1 nodes of any height h (see Exercise 6.3-3). The time required by M AX -H EAPIFY when called on a node of height h is O (h ), so we can express the total cost of B UILD -M AX -H EAP as 134 Chapter 6 Heapsort A4 1 3 2 16 9 10 14 8 1 7 1 4 2 3 2 4 3 1 4 5 6 3 7 4 1 5 6 3 7 2 8 9 i 16 10 9 10 i 8 2 9 10 16 8 7 (b) 1 9 10 14 8 7 (a) 1 14 4 2 3 2 4 3 1 4 5 6 3 16 9 i 7 4 i 10 8 1 5 6 10 7 14 8 9 10 14 9 10 16 8 7 (d) 1 9 3 2 8 7 (c) 1 2 i 2 4 3 2 16 3 16 4 5 6 10 7 4 14 5 6 10 7 14 8 9 10 7 8 1 (e) 9 3 8 8 9 10 7 4 1 (f) 9 3 2 2 Figure 6.3 The operation of B UILD -M AX -H EAP, showing the data structure before the call to M AX -H EAPIFY in line 3 of B UILD -M AX -H EAP. (a) A 10-element input array A and the binary tree it represents. The figure shows that the loop index i refers to node 5 before the call M AX -H EAPIFY( A , i ). (b) The data structure that results. The loop index i for the next iteration refers to node 4. (c)–(e) Subsequent iterations of the for loop in B UILD -M AX -H EAP. Observe that whenever M AX -H EAPIFY is called on a node, the two subtrees of that node are both max-heaps. (f) The max-heap after B UILD -M AX -H EAP finishes. 6.4 lg n h =0 The heapsort algorithm lg n 135 n 2h +1 O (h ) = O n h =0 h 2h . The last summation can be evaluated by substituting x = 1/2 in the formula (A.8), which yields ∞ h =0 h 2h = = 2. lg n 1/2 (1 − 1/2)2 Thus, the running time of B UILD -M AX -H EAP can be bounded as On h =0 h 2h =On ∞ h =0 h 2h = O (n ) . Hence, we can build a max-heap from an unordered array in linear time. We can build a min-heap by the procedure B UILD -M IN -H EAP , which is the same as B UILD -M AX -H EAP but with the call to M AX -H EAPIFY in line 3 replaced by a call to M IN -H EAPIFY (see Exercise 6.2-2). B UILD -M IN -H EAP produces a min-heap from an unordered linear array in linear time. Exercises 6.3-1 Using Figure 6.3 as a model, illustrate the operation of B UILD -M AX -H EAP on the array A = 5, 3, 17, 10, 84, 19, 6, 22, 9 . 6.3-2 Why do we want the loop index i in line 2 of B UILD -M AX -H EAP to decrease from length[ A]/2 to 1 rather than increase from 1 to length[ A]/2 ? 6.3-3 Show that there are at most n /2h +1 nodes of height h in any n -element heap. 6.4 The heapsort algorithm The heapsort algorithm starts by using B UILD -M AX -H EAP to build a max-heap on the input array A[1 . . n ], where n = length[ A]. Since the maximum element of the array is stored at the root A[1], it can be put into its correct final position 136 Chapter 6 Heapsort by exchanging it with A[n ]. If we now “discard” node n from the heap (by decrementing heap-size [ A]), we observe that A[1 . . (n − 1)] can easily be made into a max-heap. The children of the root remain max-heaps, but the new root element may violate the max-heap property. All that is needed to restore the max-heap property, however, is one call to M AX -H EAPIFY ( A, 1), which leaves a max-heap in A[1 . . (n − 1)]. 
The heapsort algorithm then repeats this process for the maxheap of size n − 1 down to a heap of size 2. (See Exercise 6.4-2 for a precise loop invariant.) H EAPSORT ( A) 1 B UILD -M AX -H EAP ( A) 2 for i ← length[ A] downto 2 3 do exchange A[1] ↔ A[i ] 4 heap-size [ A] ← heap-size [ A] − 1 5 M AX -H EAPIFY ( A, 1) Figure 6.4 shows an example of the operation of heapsort after the max-heap is initially built. Each max-heap is shown at the beginning of an iteration of the for loop of lines 2–5. The H EAPSORT procedure takes time O (n lg n ), since the call to B UILD -M AX H EAP takes time O (n ) and each of the n − 1 calls to M AX -H EAPIFY takes time O (lg n ). Exercises 6.4-1 Using Figure 6.4 as a model, illustrate the operation of H EAPSORT on the array A = 5, 13, 2, 25, 7, 17, 20, 8, 4 . 6.4-2 Argue the correctness of H EAPSORT using the following loop invariant: At the start of each iteration of the for loop of lines 2–5, the subarray A[1 . . i ] is a max-heap containing the i smallest elements of A[1 . . n ], and the subarray A[i + 1 . . n ] contains the n − i largest elements of A[1 . . n ], sorted. 6.4-3 What is the running time of heapsort on an array A of length n that is already sorted in increasing order? What about decreasing order? 6.4-4 Show that the worst-case running time of heapsort is (n lg n ). 6.4 The heapsort algorithm 137 16 14 8 2 4 1 (a) 7 9 10 3 2 4 1 8 7 16 i 14 10 9 (b) 3 2 4 8 i 14 16 7 10 9 1 (c) 3 9 8 4 i 10 14 16 (d) 7 1 3 2 10 4 14 16 7 2 8 3 1 (e) i9 10 1 14 16 4 2 7 3 8i (f) 9 4 2 1 10 14 i7 16 (g) 8 3 9 i4 10 14 16 2 7 3 1 8 (h) 9 10 4 14 16 1 7 2 3i 8 (i) 9 1 i2 4 10 14 16 (j) (k) 7 8 3 9 A 1 23 47 8 9 10 14 16 Figure 6.4 The operation of H EAPSORT. (a) The max-heap data structure just after it has been built by B UILD -M AX -H EAP. (b)–(j) The max-heap just after each call of M AX -H EAPIFY in line 5. The value of i at that time is shown. Only lightly shaded nodes remain in the heap. (k) The resulting sorted array A. 138 Chapter 6 Heapsort 6.4-5 Show that when all elements are distinct, the best-case running time of heapsort is (n lg n ). 6.5 Priority queues Heapsort is an excellent algorithm, but a good implementation of quicksort, presented in Chapter 7, usually beats it in practice. Nevertheless, the heap data structure itself has enormous utility. In this section, we present one of the most popular applications of a heap: its use as an efficient priority queue. As with heaps, there are two kinds of priority queues: max-priority queues and min-priority queues. We will focus here on how to implement max-priority queues, which are in turn based on max-heaps; Exercise 6.5-3 asks you to write the procedures for min-priority queues. A priority queue is a data structure for maintaining a set S of elements, each with an associated value called a key. A max-priority queue supports the following operations. I NSERT ( S , x ) inserts the element x into the set S . This operation could be written as S ← S ∪ {x }. M AXIMUM ( S ) returns the element of S with the largest key. E XTRACT-M AX ( S ) removes and returns the element of S with the largest key. I NCREASE -K EY ( S , x , k ) increases the value of element x ’s key to the new value k , which is assumed to be at least as large as x ’s current key value. One application of max-priority queues is to schedule jobs on a shared computer. The max-priority queue keeps track of the jobs to be performed and their relative priorities. 
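Before continuing with priority queues, note that the HEAPSORT procedure of Section 6.4 can be expressed in a few lines of Python, assuming the max_heapify and build_max_heap sketches given earlier (0-based indices, explicit heap size):

def heapsort(A):
    # Sort A in place in O(n lg n) time using a max-heap.
    build_max_heap(A)
    heap_size = len(A)
    for i in range(len(A) - 1, 0, -1):
        A[0], A[i] = A[i], A[0]   # move the current maximum into its final slot
        heap_size -= 1            # "discard" node i from the heap
        max_heapify(A, 0, heap_size)

# Example: heapsort([16, 14, 10, 8, 7, 9, 3, 2, 4, 1]) would need the list bound
# to a name first, e.g. A = [...]; heapsort(A) leaves A in increasing order.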
When a job is finished or interrupted, the highest-priority job is selected from those pending using E XTRACT-M AX. A new job can be added to the queue at any time using I NSERT. Alternatively, a min-priority queue supports the operations I NSERT, M INIMUM, E XTRACT-M IN, and D ECREASE -K EY. A min-priority queue can be used in an event-driven simulator. The items in the queue are events to be simulated, each with an associated time of occurrence that serves as its key. The events must be simulated in order of their time of occurrence, because the simulation of an event can cause other events to be simulated in the future. The simulation program uses E XTRACT-M IN at each step to choose the next event to simulate. As new events are produced, they are inserted into the min-priority queue using I NSERT. We shall see other uses for min-priority queues, highlighting the D ECREASE -K EY operation, in Chapters 23 and 24. 6.5 Priority queues 139 Not surprisingly, we can use a heap to implement a priority queue. In a given application, such as job scheduling or event-driven simulation, elements of a priority queue correspond to objects in the application. It is often necessary to determine which application object corresponds to a given priority-queue element, and viceversa. When a heap is used to implement a priority queue, therefore, we often need to store a handle to the corresponding application object in each heap element. The exact makeup of the handle (i.e., a pointer, an integer, etc.) depends on the application. Similarly, we need to store a handle to the corresponding heap element in each application object. Here, the handle would typically be an array index. Because heap elements change locations within the array during heap operations, an actual implementation, upon relocating a heap element, would also have to update the array index in the corresponding application object. Because the details of accessing application objects depend heavily on the application and its implementation, we shall not pursue them here, other than noting that in practice, these handles do need to be correctly maintained. Now we discuss how to implement the operations of a max-priority queue. The procedure H EAP -M AXIMUM implements the M AXIMUM operation in (1) time. H EAP -M AXIMUM ( A) 1 return A[1] The procedure H EAP -E XTRACT-M AX implements the E XTRACT-M AX operation. It is similar to the for loop body (lines 3–5) of the H EAPSORT procedure. H EAP -E XTRACT-M AX ( A) 1 if heap-size [ A] < 1 2 then error “heap underflow” 3 max ← A[1] 4 A[1] ← A[heap-size [ A]] 5 heap-size [ A] ← heap-size [ A] − 1 6 M AX -H EAPIFY ( A, 1) 7 return max The running time of H EAP -E XTRACT-M AX is O (lg n ), since it performs only a constant amount of work on top of the O (lg n ) time for M AX -H EAPIFY. The procedure H EAP -I NCREASE -K EY implements the I NCREASE -K EY operation. The priority-queue element whose key is to be increased is identified by an index i into the array. The procedure first updates the key of element A[i ] to its new value. Because increasing the key of A[i ] may violate the max-heap property, the procedure then, in a manner reminiscent of the insertion loop (lines 5–7) of I NSERTION -S ORT from Section 2.1, traverses a path from this node toward the 140 Chapter 6 Heapsort root to find a proper place for the newly increased key. 
During this traversal, it repeatedly compares an element to its parent, exchanging their keys and continuing if the element’s key is larger, and terminating if the element’s key is smaller, since the max-heap property now holds. (See Exercise 6.5-5 for a precise loop invariant.) H EAP -I NCREASE -K EY ( A, i , key ) 1 if key < A[i ] 2 then error “new key is smaller than current key” 3 A[i ] ← key 4 while i > 1 and A[PARENT (i )] < A[i ] 5 do exchange A[i ] ↔ A[PARENT (i )] 6 i ← PARENT (i ) Figure 6.5 shows an example of a H EAP -I NCREASE -K EY operation. The running time of H EAP -I NCREASE -K EY on an n -element heap is O (lg n ), since the path traced from the node updated in line 3 to the root has length O (lg n ). The procedure M AX -H EAP -I NSERT implements the I NSERT operation. It takes as an input the key of the new element to be inserted into max-heap A. The procedure first expands the max-heap by adding to the tree a new leaf whose key is −∞. Then it calls H EAP -I NCREASE -K EY to set the key of this new node to its correct value and maintain the max-heap property. M AX -H EAP -I NSERT ( A, key ) 1 heap-size [ A] ← heap-size [ A] + 1 2 A[heap-size [ A]] ← −∞ 3 H EAP -I NCREASE -K EY ( A, heap-size [ A], key ) The running time of M AX -H EAP -I NSERT on an n -element heap is O (lg n ). In summary, a heap can support any priority-queue operation on a set of size n in O (lg n ) time. Exercises 6.5-1 Illustrate the operation of H EAP -E XTRACT-M AX on the heap A = 15, 13, 9, 5, 12, 8, 7, 4, 0, 6, 2, 1 . 6.5-2 Illustrate the operation of M AX -H EAP -I NSERT ( A, 10) on the heap A = 15, 13, 9, 5, 12, 8, 7, 4, 0, 6, 2, 1 . Use the heap of Figure 6.5 as a model for the H EAP I NCREASE -K EY call. 6.5 Priority queues 141 16 14 8 i 2 4 1 (a) 2 15 7 9 10 3 8 i 1 14 7 16 10 9 3 (b) 16 14 i 15 2 8 1 (c) 7 9 10 3 2 14 8 1 i 15 7 16 10 9 3 (d) Figure 6.5 The operation of H EAP -I NCREASE -K EY. (a) The max-heap of Figure 6.4(a) with a node whose index is i heavily shaded. (b) This node has its key increased to 15. (c) After one iteration of the while loop of lines 4–6, the node and its parent have exchanged keys, and the index i moves up to the parent. (d) The max-heap after one more iteration of the while loop. At this point, A[PARENT(i )] ≥ A[i ]. The max-heap property now holds and the procedure terminates. 6.5-3 Write pseudocode for the procedures H EAP -M INIMUM, H EAP -E XTRACT-M IN, H EAP -D ECREASE -K EY, and M IN -H EAP -I NSERT that implement a min-priority queue with a min-heap. 6.5-4 Why do we bother setting the key of the inserted node to −∞ in line 2 of M AX H EAP -I NSERT when the next thing we do is increase its key to the desired value? 142 Chapter 6 Heapsort 6.5-5 Argue the correctness of H EAP -I NCREASE -K EY using the following loop invariant: At the start of each iteration of the while loop of lines 4–6, the array A[1 . . heap-size [ A]] satisfies the max-heap property, except that there may be one violation: A[i ] may be larger than A[PARENT (i )]. 6.5-6 Show how to implement a first-in, first-out queue with a priority queue. Show how to implement a stack with a priority queue. (Queues and stacks are defined in Section 10.1.) 6.5-7 The operation H EAP -D ELETE ( A, i ) deletes the item in node i from heap A. Give an implementation of H EAP -D ELETE that runs in O (lg n ) time for an n -element max-heap. 6.5-8 Give an O (n lg k )-time algorithm to merge k sorted lists into one sorted list, where n is the total number of elements in all the input lists. 
(Hint: Use a minheap for k -way merging.) Problems 6-1 Building a heap using insertion The procedure B UILD -M AX -H EAP in Section 6.3 can be implemented by repeatedly using M AX -H EAP -I NSERT to insert the elements into the heap. Consider the following implementation: B UILD -M AX -H EAP ( A) 1 heap-size [ A] ← 1 2 for i ← 2 to length[ A] 3 do M AX -H EAP -I NSERT ( A, A[i ]) a. Do the procedures B UILD -M AX -H EAP and B UILD -M AX -H EAP always create the same heap when run on the same input array? Prove that they do, or provide a counterexample. b. Show that in the worst case, B UILD -M AX -H EAP requires build an n -element heap. (n lg n ) time to Problems for Chapter 6 143 6-2 Analysis of d -ary heaps A d-ary heap is like a binary heap, but (with one possible exception) non-leaf nodes have d children instead of 2 children. a. How would you represent a d -ary heap in an array? b. What is the height of a d -ary heap of n elements in terms of n and d ? c. Give an efficient implementation of E XTRACT-M AX in a d -ary max-heap. Analyze its running time in terms of d and n . d. Give an efficient implementation of I NSERT in a d -ary max-heap. Analyze its running time in terms of d and n . e. Give an efficient implementation of I NCREASE -K EY ( A, i , k ), which first sets A[i ] ← max( A[i ], k ) and then updates the d -ary max-heap structure appropriately. Analyze its running time in terms of d and n . 6-3 Young tableaus An m × n Young tableau is an m × n matrix such that the entries of each row are in sorted order from left to right and the entries of each column are in sorted order from top to bottom. Some of the entries of a Young tableau may be ∞, which we treat as nonexistent elements. Thus, a Young tableau can be used to hold r ≤ mn finite numbers. a. Draw a 4×4 Young tableau containing the elements {9, 16, 3, 2, 4, 8, 5, 14, 12} . b. Argue that an m × n Young tableau Y is empty if Y [1, 1] = ∞. Argue that Y is full (contains mn elements) if Y [m , n ] < ∞. c. Give an algorithm to implement E XTRACT-M IN on a nonempty m × n Young tableau that runs in O (m + n ) time. Your algorithm should use a recursive subroutine that solves an m × n problem by recursively solving either an (m − 1) × n or an m × (n − 1) subproblem. (Hint: Think about M AX H EAPIFY.) Define T ( p ), where p = m + n , to be the maximum running time of E XTRACT-M IN on any m × n Young tableau. Give and solve a recurrence for T ( p ) that yields the O (m + n ) time bound. d. Show how to insert a new element into a nonfull m × n Young tableau in O (m + n ) time. e. Using no other sorting method as a subroutine, show how to use an n × n Young tableau to sort n 2 numbers in O (n 3 ) time. 144 Chapter 6 Heapsort f. Give an O (m +n )-time algorithm to determine whether a given number is stored in a given m × n Young tableau. Chapter notes The heapsort algorithm was invented by Williams [316], who also described how to implement a priority queue with a heap. The B UILD -M AX -H EAP procedure was suggested by Floyd [90]. We use min-heaps to implement min-priority queues in Chapters 16, 23, and 24. We also give an implementation with improved time bounds for certain operations in Chapters 19 and 20. Faster implementations of priority queues are possible for integer data. 
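Before turning to those specialized integer priority queues, it may help to see the ordinary binary-heap operations of Section 6.5 as running code. The following Python sketch reuses the parent and max_heapify helpers from the earlier sketches; it represents the heap as a Python list whose length is the heap size (so MAX-HEAP-INSERT appends rather than growing a fixed-size array), and it compares bare keys, ignoring the satellite data and handles discussed above.

def heap_maximum(A):
    return A[0]

def heap_extract_max(A):
    # Remove and return the largest key in O(lg n) time.
    if len(A) < 1:
        raise IndexError("heap underflow")
    maximum = A[0]
    A[0] = A[-1]
    A.pop()
    if A:
        max_heapify(A, 0, len(A))
    return maximum

def heap_increase_key(A, i, key):
    # Increase A[i] to key (which must not be smaller) and restore the heap
    # by walking the new key up toward the root.
    if key < A[i]:
        raise ValueError("new key is smaller than current key")
    A[i] = key
    while i > 0 and A[parent(i)] < A[i]:
        A[i], A[parent(i)] = A[parent(i)], A[i]
        i = parent(i)

def max_heap_insert(A, key):
    # Insert key in O(lg n) time by adding a -infinity leaf and increasing it.
    A.append(float('-inf'))
    heap_increase_key(A, len(A) - 1, key)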
A data structure invented by van Emde Boas [301] supports the operations M INIMUM, M AXIMUM, I NSERT, D ELETE, S EARCH, E XTRACT-M IN, E XTRACT-M AX, P RE DECESSOR, and S UCCESSOR in worst-case time O (lg lg C ), subject to the restriction that the universe of keys is the set {1, 2, . . . , C }. If the data are b-bit integers, and the computer memory consists of addressable b-bit words, Fredman and Willard [99] showed how to implement M INIMUM in O (1) time and I NSERT and E XTRACT-M IN in O ( lg n ) time. Thorup [299] has improved the O ( lg n ) bound to O ((lg lg n )2 ) time. This bound uses an amount of space unbounded in n , but it can be implemented in linear space by using randomized hashing. An important special case of priority queues occurs when the sequence of E XTRACT-M IN operations is monotone, that is, the values returned by successive E XTRACT-M IN operations are monotonically increasing over time. This case arises in several important applications, such as Dijkstra’s single-source shortestpaths algorithm, which is discussed in Chapter 24, and in discrete-event simulation. For Dijkstra’s algorithm it is particularly important that the D ECREASE -K EY operation be implemented efficiently. For the monotone case, if the data are integers in the range 1, 2, . . . , C , Ahuja, Melhorn, Orlin, and Tarjan [8] describe how to implement E XTRACT-M IN and I NSERT in O (lg C ) amortized time (see Chapter 17 for more on amortized analysis) and D ECREASE -K EY in O (1) time, using a data structure called a radix heap. The O (lg C ) bound can be improved to O ( lg C ) using Fibonacci heaps (see Chapter 20) in conjunction with radix heaps. The bound was further improved to O (lg 1/3+ C ) expected time by Cherkassky, Goldberg, and Silverstein [58], who combine the multilevel bucketing structure of Denardo and Fox [72] with the heap of Thorup mentioned above. Raman [256] further improved these results to obtain a bound of O (min(lg 1/4+ C , lg1/3+ n )), for any fixed > 0. More detailed discussions of these results can be found in papers by Raman [256] and Thorup [299]. 7 Quicksort Quicksort is a sorting algorithm whose worst-case running time is (n 2) on an input array of n numbers. In spite of this slow worst-case running time, quicksort is often the best practical choice for sorting because it is remarkably efficient on the average: its expected running time is (n lg n ), and the constant factors hidden in the (n lg n ) notation are quite small. It also has the advantage of sorting in place (see page 16), and it works well even in virtual memory environments. Section 7.1 describes the algorithm and an important subroutine used by quicksort for partitioning. Because the behavior of quicksort is complex, we start with an intuitive discussion of its performance in Section 7.2 and postpone its precise analysis to the end of the chapter. Section 7.3 presents a version of quicksort that uses random sampling. This algorithm has a good average-case running time, and no particular input elicits its worst-case behavior. The randomized algorithm is analyzed in Section 7.4, where it is shown to run in (n 2 ) time in the worst case and in O (n lg n ) time on average. 7.1 Description of quicksort Quicksort, like merge sort, is based on the divide-and-conquer paradigm introduced in Section 2.3.1. Here is the three-step divide-and-conquer process for sorting a typical subarray A[ p . . r ]. Divide: Partition (rearrange) the array A[ p . . r ] into two (possibly empty) subarrays A[ p . . 
q − 1] and A[q + 1 . . r ] such that each element of A[ p . . q − 1] is less than or equal to A[q ], which is, in turn, less than or equal to each element of A[q + 1 . . r ]. Compute the index q as part of this partitioning procedure. Conquer: Sort the two subarrays A[ p . . q − 1] and A[q + 1 . . r ] by recursive calls to quicksort. Combine: Since the subarrays are sorted in place, no work is needed to combine them: the entire array A[ p . . r ] is now sorted. 146 Chapter 7 Quicksort The following procedure implements quicksort. Q UICKSORT ( A, p , r ) 1 if p < r 2 then q ← PARTITION ( A, p , r ) 3 Q UICKSORT ( A, p , q − 1) 4 Q UICKSORT ( A, q + 1, r ) To sort an entire array A, the initial call is Q UICKSORT ( A, 1, length [ A]). Partitioning the array The key to the algorithm is the PARTITION procedure, which rearranges the subarray A[ p . . r ] in place. PARTITION ( A, p , r ) 1 x ← A[r ] 2 i ← p−1 3 for j ← p to r − 1 4 do if A[ j ] ≤ x 5 then i ← i + 1 6 exchange A[i ] ↔ A[ j ] 7 exchange A[i + 1] ↔ A[r ] 8 return i + 1 Figure 7.1 shows the operation of PARTITION on an 8-element array. PARTITION always selects an element x = A[r ] as a pivot element around which to partition the subarray A[ p . . r ]. As the procedure runs, the array is partitioned into four (possibly empty) regions. At the start of each iteration of the for loop in lines 3–6, each region satisfies certain properties, which we can state as a loop invariant: At the beginning of each iteration of the loop of lines 3–6, for any array index k , 1. If p ≤ k ≤ i , then A[k ] ≤ x . 2. If i + 1 ≤ k ≤ j − 1, then A[k ] > x . 3. If k = r , then A[k ] = x . Figure 7.2 summarizes this structure. The indices between j and r − 1 are not covered by any of the three cases, and the values in these entries have no particular relationship to the pivot x . We need to show that this loop invariant is true prior to the first iteration, that each iteration of the loop maintains the invariant, and that the invariant provides a useful property to show correctness when the loop terminates. 7.1 Description of quicksort 147 (a) i p,j 28 p,i j 28 p,i 28 p,i 28 p 2 p 2 p 2 p 2 p 2 i 1 7 13 5 6 r 4 r 4 r 4 r 4 r 4 r 4 r 4 r 4 r 8 (b) 7 j 7 13 5 6 (c) 13 j 13 j 83 5 6 (d) 7 5 6 (e) 7 i 3 i 3 i 3 i 3 5 j 5 6 (f) 1 87 6 j 6 (g) 1 87 5 (h) 1 87 5 6 (i) 1 47 5 6 Figure 7.1 The operation of PARTITION on a sample array. Lightly shaded array elements are all in the first partition with values no greater than x . Heavily shaded elements are in the second partition with values greater than x . The unshaded elements have not yet been put in one of the first two partitions, and the final white element is the pivot. (a) The initial array and variable settings. None of the elements have been placed in either of the first two partitions. (b) The value 2 is “swapped with itself” and put in the partition of smaller values. (c)–(d) The values 8 and 7 are added to the partition of larger values. (e) The values 1 and 8 are swapped, and the smaller partition grows. (f) The values 3 and 8 are swapped, and the smaller partition grows. (g)–(h) The larger partition grows to include 5 and 6 and the loop terminates. (i) In lines 7–8, the pivot element is swapped so that it lies between the two partitions. Initialization: Prior to the first iteration of the loop, i = p − 1, and j = p . There are no values between p and i , and no values between i + 1 and j − 1, so the first two conditions of the loop invariant are trivially satisfied. 
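Before completing the loop-invariant argument (the maintenance and termination steps follow), here is a small Python sketch of PARTITION in 0-based indexing, run on the example array of Figure 7.1. It is illustrative only, not the book's reference code.

def partition(A, p, r):
    # Partition A[p..r] around the pivot x = A[r]; on return the pivot sits
    # at the returned index q, with A[p..q-1] <= x and A[q+1..r] > x.
    x = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] <= x:              # grow the region of elements <= x
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

A = [2, 8, 7, 1, 3, 5, 6, 4]       # the input of Figure 7.1
q = partition(A, 0, len(A) - 1)
print(q, A)                        # 3 [2, 1, 3, 4, 7, 5, 6, 8]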
The assignment in line 1 satisfies the third condition. Maintenance: As Figure 7.3 shows, there are two cases to consider, depending on the outcome of the test in line 4. Figure 7.3(a) shows what happens when 148 Chapter 7 Quicksort p ≤x i j r x unrestricted >x Figure 7.2 The four regions maintained by the procedure PARTITION on a subarray A[ p . . r ]. The values in A[ p . . i ] are all less than or equal to x , the values in A[i + 1 . . j − 1] are all greater than x , and A[r ] = x . The values in A[ j . . r − 1] can take on any values. A[ j ] > x ; the only action in the loop is to increment j . After j is incremented, condition 2 holds for A[ j − 1] and all other entries remain unchanged. Figure 7.3(b) shows what happens when A[ j ] ≤ x ; i is incremented, A[i ] and A[ j ] are swapped, and then j is incremented. Because of the swap, we now have that A[i ] ≤ x , and condition 1 is satisfied. Similarly, we also have that A[ j − 1] > x , since the item that was swapped into A[ j − 1] is, by the loop invariant, greater than x . Termination: At termination, j = r . Therefore, every entry in the array is in one of the three sets described by the invariant, and we have partitioned the values in the array into three sets: those less than or equal to x , those greater than x , and a singleton set containing x . The final two lines of PARTITION move the pivot element into its place in the middle of the array by swapping it with the leftmost element that is greater than x . The output of PARTITION now satisfies the specifications given for the divide step. The running time of PARTITION on the subarray A[ p . . r ] is (n ), where n = r − p + 1 (see Exercise 7.1-3). Exercises 7.1-1 Using Figure 7.1 as a model, illustrate the operation of PARTITION on the array A = 13, 19, 9, 5, 12, 8, 7, 4, 11, 2, 6, 21 . 7.1-2 What value of q does PARTITION return when all elements in the array A[ p . . r ] have the same value? Modify PARTITION so that q = ( p + r )/2 when all elements in the array A[ p . . r ] have the same value. 7.1-3 Give a brief argument that the running time of PARTITION on a subarray of size n is (n ). 7.2 Performance of quicksort 149 p (a) ≤x p ≤x p (b) ≤x p ≤x i j >x >x r x i j r x >x i j ≤x >x i j r x r x >x Figure 7.3 The two cases for one iteration of procedure PARTITION. (a) If A[ j ] > x , the only action is to increment j , which maintains the loop invariant. (b) If A[ j ] ≤ x , index i is incremented, A[i ] and A[ j ] are swapped, and then j is incremented. Again, the loop invariant is maintained. 7.1-4 How would you modify Q UICKSORT to sort into nonincreasing order? 7.2 Performance of quicksort The running time of quicksort depends on whether the partitioning is balanced or unbalanced, and this in turn depends on which elements are used for partitioning. If the partitioning is balanced, the algorithm runs asymptotically as fast as merge sort. If the partitioning is unbalanced, however, it can run asymptotically as slowly as insertion sort. In this section, we shall informally investigate how quicksort performs under the assumptions of balanced versus unbalanced partitioning. Worst-case partitioning The worst-case behavior for quicksort occurs when the partitioning routine produces one subproblem with n − 1 elements and one with 0 elements. (This claim is proved in Section 7.4.1.) Let us assume that this unbalanced partitioning arises in each recursive call. The partitioning costs (n ) time. 
Since the recursive call on an array of size 0 just returns, T(0) = Θ(1), and the recurrence for the running time is

T(n) = T(n − 1) + T(0) + Θ(n) = T(n − 1) + Θ(n) .

Intuitively, if we sum the costs incurred at each level of the recursion, we get an arithmetic series (equation (A.2)), which evaluates to Θ(n^2). Indeed, it is straightforward to use the substitution method to prove that the recurrence T(n) = T(n − 1) + Θ(n) has the solution T(n) = Θ(n^2). (See Exercise 7.2-1.)

Thus, if the partitioning is maximally unbalanced at every recursive level of the algorithm, the running time is Θ(n^2). Therefore the worst-case running time of quicksort is no better than that of insertion sort. Moreover, the Θ(n^2) running time occurs when the input array is already completely sorted—a common situation in which insertion sort runs in O(n) time.

Best-case partitioning

In the most even possible split, PARTITION produces two subproblems, each of size no more than n/2, since one is of size ⌊n/2⌋ and one of size ⌈n/2⌉ − 1. In this case, quicksort runs much faster. The recurrence for the running time is then

T(n) ≤ 2T(n/2) + Θ(n) ,

which by case 2 of the master theorem (Theorem 4.1) has the solution T(n) = O(n lg n). Thus, the equal balancing of the two sides of the partition at every level of the recursion produces an asymptotically faster algorithm.

Balanced partitioning

The average-case running time of quicksort is much closer to the best case than to the worst case, as the analyses in Section 7.4 will show. The key to understanding why is to understand how the balance of the partitioning is reflected in the recurrence that describes the running time.

Suppose, for example, that the partitioning algorithm always produces a 9-to-1 proportional split, which at first blush seems quite unbalanced. We then obtain the recurrence

T(n) ≤ T(9n/10) + T(n/10) + cn

on the running time of quicksort, where we have explicitly included the constant c hidden in the Θ(n) term. Figure 7.4 shows the recursion tree for this recurrence. Notice that every level of the tree has cost cn, until a boundary condition is reached at depth log_10 n = Θ(lg n), and then the levels have cost at most cn. The recursion terminates at depth log_{10/9} n = Θ(lg n). The total cost of quicksort is therefore O(n lg n).

Figure 7.4 A recursion tree for QUICKSORT in which PARTITION always produces a 9-to-1 split, yielding a running time of O(n lg n). Nodes show subproblem sizes, with per-level costs on the right. The per-level costs include the constant c implicit in the Θ(n) term.

Thus, with a 9-to-1 proportional split at every level of recursion, which intuitively seems quite unbalanced, quicksort runs in O(n lg n) time—asymptotically the same as if the split were right down the middle. In fact, even a 99-to-1 split yields an O(n lg n) running time. The reason is that any split of constant proportionality yields a recursion tree of depth Θ(lg n), where the cost at each level is O(n). The running time is therefore O(n lg n) whenever the split has constant proportionality.

Intuition for the average case

To develop a clear notion of the average case for quicksort, we must make an assumption about how frequently we expect to encounter the various inputs.
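Before making that assumption precise, here is a quick numerical check, not part of the text, of the claim above that a constant-proportionality split such as 9-to-1 already gives roughly n lg n total cost. The unit cost per element and the base case T(n) = 1 for n ≤ 1 are arbitrary choices made only for this experiment.

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = T(9n/10) + T(n/10) + n, mirroring the 9-to-1 recurrence above,
    # with the arbitrary base case T(n) = 1 for n <= 1.
    if n <= 1:
        return 1
    return T(9 * n // 10) + T(n // 10) + n

for n in (1000, 10000, 100000):
    # If T(n) grew quadratically, this ratio would grow roughly tenfold from
    # line to line; instead it creeps up only slightly toward a constant.
    print(n, round(T(n) / (n * math.log2(n)), 2))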
The behavior of quicksort is determined by the relative ordering of the values in the array elements given as the input, and not by the particular values in the array. As in our probabilistic analysis of the hiring problem in Section 5.2, we will assume for now that all permutations of the input numbers are equally likely. When we run quicksort on a random input array, it is unlikely that the partitioning always happens in the same way at every level, as our informal analysis has assumed. We expect that some of the splits will be reasonably well balanced and that some will be fairly unbalanced. For example, Exercise 7.2-6 asks you to show 152 Chapter 7 Quicksort n Θ(n) 0 (n–1)/2 – 1 (a) n–1 (n–1)/2 (n–1)/2 (b) (n–1)/2 n Θ(n) Figure 7.5 (a) Two levels of a recursion tree for quicksort. The partitioning at the root costs n and produces a “bad” split: two subarrays of sizes 0 and n − 1. The partitioning of the subarray of size n − 1 costs n − 1 and produces a “good” split: subarrays of size (n − 1)/2 − 1 and (n − 1)/2. (b) A single level of a recursion tree that is very well balanced. In both parts, the partitioning cost for the subproblems shown with elliptical shading is (n ). Yet the subproblems remaining to be solved in (a), shown with square shading, are no larger than the corresponding subproblems remaining to be solved in (b). that about 80 percent of the time PARTITION produces a split that is more balanced than 9 to 1, and about 20 percent of the time it produces a split that is less balanced than 9 to 1. In the average case, PARTITION produces a mix of “good” and “bad” splits. In a recursion tree for an average-case execution of PARTITION, the good and bad splits are distributed randomly throughout the tree. Suppose for the sake of intuition, however, that the good and bad splits alternate levels in the tree, and that the good splits are best-case splits and the bad splits are worst-case splits. Figure 7.5(a) shows the splits at two consecutive levels in the recursion tree. At the root of the tree, the cost is n for partitioning, and the subarrays produced have sizes n − 1 and 0: the worst case. At the next level, the subarray of size n − 1 is best-case partitioned into subarrays of size (n − 1)/2 − 1 and (n − 1)/2. Let’s assume that the boundary-condition cost is 1 for the subarray of size 0. The combination of the bad split followed by the good split produces three subarrays of sizes 0, (n − 1)/2 − 1, and (n − 1)/2 at a combined partitioning cost of (n ) + (n − 1) = (n ). Certainly, this situation is no worse than that in Figure 7.5(b), namely a single level of partitioning that produces two subarrays of size (n − 1)/2, at a cost of (n ). Yet this latter situation is balanced! Intuitively, the (n − 1) cost of the bad split can be absorbed into the (n ) cost of the good split, and the resulting split is good. Thus, the running time of quicksort, when levels alternate between good and bad splits, is like the running time for good splits alone: still O (n lg n ), but with a slightly larger constant hidden by the O -notation. We shall give a rigorous analysis of the average case in Section 7.4.2. 7.3 A randomized version of quicksort 153 Exercises 7.2-1 Use the substitution method to prove that the recurrence T (n ) = T (n − 1) + has the solution T (n ) = (n 2 ), as claimed at the beginning of Section 7.2. (n ) 7.2-2 What is the running time of Q UICKSORT when all elements of array A have the same value? 
7.2-3 Show that the running time of Q UICKSORT is (n 2) when the array A contains distinct elements and is sorted in decreasing order. 7.2-4 Banks often record transactions on an account in order of the times of the transactions, but many people like to receive their bank statements with checks listed in order by check number. People usually write checks in order by check number, and merchants usually cash them with reasonable dispatch. The problem of converting time-of-transaction ordering to check-number ordering is therefore the problem of sorting almost-sorted input. Argue that the procedure I NSERTION -S ORT would tend to beat the procedure Q UICKSORT on this problem. 7.2-5 Suppose that the splits at every level of quicksort are in the proportion 1 − α to α , where 0 < α ≤ 1/2 is a constant. Show that the minimum depth of a leaf in the recursion tree is approximately − lg n / lg α and the maximum depth is approximately − lg n / lg(1 − α). (Don’t worry about integer round-off.) 7.2-6 Argue that for any constant 0 < α ≤ 1/2, the probability is approximately 1 − 2α that on a random input array, PARTITION produces a split more balanced than 1 − α to α . 7.3 A randomized version of quicksort In exploring the average-case behavior of quicksort, we have made an assumption that all permutations of the input numbers are equally likely. In an engineering situation, however, we cannot always expect it to hold. (See Exercise 7.2-4.) As we saw in Section 5.3, we can sometimes add randomization to an algorithm in order to obtain good average-case performance over all inputs. Many people regard the 154 Chapter 7 Quicksort resulting randomized version of quicksort as the sorting algorithm of choice for large enough inputs. In Section 5.3, we randomized our algorithm by explicitly permuting the input. We could do so for quicksort also, but a different randomization technique, called random sampling, yields a simpler analysis. Instead of always using A[r ] as the pivot, we will use a randomly chosen element from the subarray A[ p . . r ]. We do so by exchanging element A[r ] with an element chosen at random from A[ p . . r ]. This modification, in which we randomly sample the range p , . . . , r , ensures that the pivot element x = A[r ] is equally likely to be any of the r − p + 1 elements in the subarray. Because the pivot element is randomly chosen, we expect the split of the input array to be reasonably well balanced on average. The changes to PARTITION and Q UICKSORT are small. In the new partition procedure, we simply implement the swap before actually partitioning: R ANDOMIZED -PARTITION ( A, p , r ) 1 i ← R ANDOM( p , r ) 2 exchange A[r ] ↔ A[i ] 3 return PARTITION ( A, p , r ) The new quicksort calls R ANDOMIZED -PARTITION in place of PARTITION: R ANDOMIZED -Q UICKSORT ( A, p , r ) 1 if p < r 2 then q ← R ANDOMIZED -PARTITION ( A, p , r ) 3 R ANDOMIZED -Q UICKSORT ( A, p , q − 1) 4 R ANDOMIZED -Q UICKSORT ( A, q + 1, r ) We analyze this algorithm in the next section. Exercises 7.3-1 Why do we analyze the average-case performance of a randomized algorithm and not its worst-case performance? 7.3-2 During the running of the procedure R ANDOMIZED -Q UICKSORT, how many calls are made to the random-number generator R ANDOM in the worst case? How about in the best case? Give your answer in terms of -notation. 7.4 Analysis of quicksort 155 7.4 Analysis of quicksort Section 7.2 gave some intuition for the worst-case behavior of quicksort and for why we expect it to run quickly. 
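As a concrete reference point for that analysis, here is one way the procedures of Section 7.3 might be rendered in Python (0-based indices). This is a sketch rather than the book's reference code, and it repeats the Lomuto partition routine of Section 7.1 so that it is self-contained.

import random

def partition(A, p, r):
    # Lomuto partition of Section 7.1: pivot x = A[r]; returns its final index.
    x = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

def randomized_partition(A, p, r):
    # Swap A[r] with a uniformly chosen element of A[p..r] before partitioning,
    # so the pivot is equally likely to be any element of the subarray.
    i = random.randint(p, r)
    A[r], A[i] = A[i], A[r]
    return partition(A, p, r)

def randomized_quicksort(A, p, r):
    if p < r:
        q = randomized_partition(A, p, r)
        randomized_quicksort(A, p, q - 1)
        randomized_quicksort(A, q + 1, r)

A = [2, 8, 7, 1, 3, 5, 6, 4]
randomized_quicksort(A, 0, len(A) - 1)
print(A)   # [1, 2, 3, 4, 5, 6, 7, 8]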
In this section, we analyze the behavior of quicksort more rigorously. We begin with a worst-case analysis, which applies to either Q UICKSORT or R ANDOMIZED -Q UICKSORT, and conclude with an average-case analysis of R ANDOMIZED -Q UICKSORT. 7.4.1 Worst-case analysis We saw in Section 7.2 that a worst-case split at every level of recursion in quicksort produces a (n 2 ) running time, which, intuitively, is the worst-case running time of the algorithm. We now prove this assertion. Using the substitution method (see Section 4.1), we can show that the running time of quicksort is O (n 2 ). Let T (n ) be the worst-case time for the procedure Q UICKSORT on an input of size n . We have the recurrence T (n ) = max (T (q ) + T (n − q − 1)) + 0≤q ≤n −1 (n ) , (7.1) where the parameter q ranges from 0 to n − 1 because the procedure PARTITION produces two subproblems with total size n − 1. We guess that T (n ) ≤ cn 2 for some constant c. Substituting this guess into recurrence (7.1), we obtain T (n ) ≤ 0≤q ≤n −1 max (cq 2 + c(n − q − 1)2 ) + 0≤q ≤n −1 (n ) (n ) . = c · max (q 2 + (n − q − 1)2 ) + The expression q 2 + (n − q − 1)2 achieves a maximum over the parameter’s range 0 ≤ q ≤ n − 1 at either endpoint, as can be seen since the second derivative of the expression with respect to q is positive (see Exercise 7.4-3). This observation gives us the bound max0≤q ≤n−1 (q 2 + (n − q − 1)2 ) ≤ (n − 1)2 = n 2 − 2n + 1. Continuing with our bounding of T (n ), we obtain T (n ) ≤ cn 2 − c(2n − 1) + ≤ cn 2 , (n ) since we can pick the constant c large enough so that the c(2n − 1) term dominates the (n ) term. Thus, T (n ) = O (n 2 ). We saw in Section 7.2 a specific case in which quicksort takes (n 2 ) time: when partitioning is unbalanced. Alternatively, Exercise 7.4-1 asks you to show that recurrence (7.1) has a solution of T (n ) = (n 2 ). Thus, the (worst-case) running time of quicksort is (n 2). 156 Chapter 7 Quicksort 7.4.2 Expected running time We have already given an intuitive argument why the average-case running time of R ANDOMIZED -Q UICKSORT is O (n lg n ): if, in each level of recursion, the split induced by R ANDOMIZED -PARTITION puts any constant fraction of the elements on one side of the partition, then the recursion tree has depth (lg n ), and O (n ) work is performed at each level. Even if we add new levels with the most unbalanced split possible between these levels, the total time remains O (n lg n ). We can analyze the expected running time of R ANDOMIZED -Q UICKSORT precisely by first understanding how the partitioning procedure operates and then using this understanding to derive an O (n lg n ) bound on the expected running time. This upper bound on the expected running time, combined with the (n lg n ) best-case bound we saw in Section 7.2, yields a (n lg n ) expected running time. Running time and comparisons The running time of Q UICKSORT is dominated by the time spent in the PARTI TION procedure. Each time the PARTITION procedure is called, a pivot element is selected, and this element is never included in any future recursive calls to Q UICK SORT and PARTITION. Thus, there can be at most n calls to PARTITION over the entire execution of the quicksort algorithm. One call to PARTITION takes O (1) time plus an amount of time that is proportional to the number of iterations of the for loop in lines 3–6. Each iteration of this for loop performs a comparison in line 4, comparing the pivot element to another element of the array A. 
Therefore, if we can count the total number of times that line 4 is executed, we can bound the total time spent in the for loop during the entire execution of Q UICKSORT. Lemma 7.1 Let X be the number of comparisons performed in line 4 of PARTITION over the entire execution of Q UICKSORT on an n -element array. Then the running time of Q UICKSORT is O (n + X ). Proof By the discussion above, there are n calls to PARTITION, each of which does a constant amount of work and then executes the for loop some number of times. Each iteration of the for loop executes line 4. Our goal, therefore is to compute X , the total number of comparisons performed in all calls to PARTITION. We will not attempt to analyze how many comparisons are made in each call to PARTITION. Rather, we will derive an overall bound on the total number of comparisons. To do so, we must understand when the algorithm compares two elements of the array and when it does not. For ease of analysis, we rename the elements of the array A as z 1 , z 2 , . . . , z n , with z i being the i th smallest 7.4 Analysis of quicksort 157 element. We also define the set Z i j = {z i , z i +1 , . . . , z j } to be the set of elements between z i and z j , inclusive. When does the algorithm compare z i and z j ? To answer this question, we first observe that each pair of elements is compared at most once. Why? Elements are compared only to the pivot element and, after a particular call of PARTITION finishes, the pivot element used in that call is never again compared to any other elements. Our analysis uses indicator random variables (see Section 5.2). We define X i j = I {z i is compared to z j } , where we are considering whether the comparison takes place at any time during the execution of the algorithm, not just during one iteration or one call of PARTI TION. Since each pair is compared at most once, we can easily characterize the total number of comparisons performed by the algorithm: X= n −1 n Xij . i =1 j =i +1 Taking expectations of both sides, and then using linearity of expectation and Lemma 5.1, we obtain E [X] = E = = n −1 n −1 n Xij E [Xij ] i =1 j =i +1 n n −1 i =1 j =i +1 n i =1 j =i +1 Pr {z i is compared to z j } . (7.2) It remains to compute Pr {z i is compared to z j }. It is useful to think about when two items are not compared. Consider an input to quicksort of the numbers 1 through 10 (in any order), and assume that the first pivot element is 7. Then the first call to PARTITION separates the numbers into two sets: {1, 2, 3, 4, 5, 6} and {8, 9, 10}. In doing so, the pivot element 7 is compared to all other elements, but no number from the first set (e.g., 2) is or ever will be compared to any number from the second set (e.g., 9). In general, once a pivot x is chosen with z i < x < z j , we know that z i and z j cannot be compared at any subsequent time. If, on the other hand, z i is chosen as a pivot before any other item in Z i j , then z i will be compared to each item in Z i j , except for itself. Similarly, if z j is chosen as a pivot before any other item in Z i j , then z j will be compared to each item in Z i j , except for itself. In our example, the 158 Chapter 7 Quicksort values 7 and 9 are compared because 7 is the first item from Z 7,9 to be chosen as a pivot. In contrast, 2 and 9 will never be compared because the first pivot element chosen from Z 2,9 is 7. Thus, z i and z j are compared if and only if the first element to be chosen as a pivot from Z i j is either z i or z j . 
We now compute the probability that this event occurs. Prior to the point at which an element from Z i j has been chosen as a pivot, the whole set Z i j is together in the same partition. Therefore, any element of Z i j is equally likely to be the first one chosen as a pivot. Because the set Z i j has j − i + 1 elements, the probability that any given element is the first one chosen as a pivot is 1/( j − i + 1). Thus, we have Pr {z i is compared to z j } = Pr {z i or z j is first pivot chosen from Z i j } = Pr {z i is first pivot chosen from Z i j } + Pr {z j is first pivot chosen from Z i j } 1 1 + = j −i +1 j −i +1 2 . = j −i +1 (7.3) The second line follows because the two events are mutually exclusive. Combining equations (7.2) and (7.3), we get that E [X] = n −1 n i =1 j =i +1 2 . j −i +1 We can evaluate this sum using a change of variables (k = j − i ) and the bound on the harmonic series in equation (A.7): E [X] = = < = n −1 n n −1 n −i i =1 j =i +1 2 j −i +1 n −1 i =1 k =1 n 2 k+1 2 k n −1 i =1 i =1 k =1 O (lg n ) (7.4) Thus we conclude that, using R ANDOMIZED -PARTITION, the expected running time of quicksort is O (n lg n ). = O (n lg n ) . Problems for Chapter 7 159 Exercises 7.4-1 Show that in the recurrence T (n ) = max (T (q ) + T (n − q − 1)) + 0≤q ≤n −1 (n ) , T (n ) = (n 2 ). (n lg n ). 7.4-2 Show that quicksort’s best-case running time is 7.4-3 Show that q 2 + (n − q − 1)2 achieves a maximum over q = 0, 1, . . . , n − 1 when q = 0 or q = n − 1. 7.4-4 Show that R ANDOMIZED -Q UICKSORT ’s expected running time is (n lg n ). 7.4-5 The running time of quicksort can be improved in practice by taking advantage of the fast running time of insertion sort when its input is “nearly” sorted. When quicksort is called on a subarray with fewer than k elements, let it simply return without sorting the subarray. After the top-level call to quicksort returns, run insertion sort on the entire array to finish the sorting process. Argue that this sorting algorithm runs in O (nk + n lg(n / k )) expected time. How should k be picked, both in theory and in practice? 7.4-6 Consider modifying the PARTITION procedure by randomly picking three elements from array A and partitioning about their median (the middle value of the three elements). Approximate the probability of getting at worst an α -to-(1 − α) split, as a function of α in the range 0 < α < 1. Problems 7-1 Hoare partition correctness The version of PARTITION given in this chapter is not the original partitioning algorithm. Here is the original partition algorithm, which is due to T. Hoare: 160 Chapter 7 Quicksort H OARE -PARTITION ( A, p , r ) 1 x ← A[ p ] 2 i ← p−1 3 j ←r +1 4 while TRUE 5 do repeat j ← j − 1 6 until A[ j ] ≤ x 7 repeat i ← i + 1 8 until A[i ] ≥ x 9 if i < j 10 then exchange A[i ] ↔ A[ j ] 11 else return j a. Demonstrate the operation of H OARE -PARTITION on the array A = 13, 19, 9, 5, 12, 8, 7, 4, 11, 2, 6, 21 , showing the values of the array and auxiliary values after each iteration of the for loop in lines 4–11. The next three questions ask you to give a careful argument that the procedure H OARE -PARTITION is correct. Prove the following: b. The indices i and j are such that we never access an element of A outside the subarray A[ p . . r ]. c. When H OARE -PARTITION terminates, it returns a value j such that p ≤ j < r . d. Every element of A[ p . . j ] is less than or equal to every element of A[ j + 1 . . r ] when H OARE -PARTITION terminates. 
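For experimenting with parts (a)-(d), here is a direct Python transcription of HOARE-PARTITION above (0-based indices; the repeat-until loops become while loops with a break). It is a sketch for exploration, not a verified reference implementation.

def hoare_partition(A, p, r):
    # Pivot value x = A[p]; i sweeps right and j sweeps left, and the split
    # into A[p..j] and A[j+1..r] is reported through the returned index j.
    x = A[p]
    i = p - 1
    j = r + 1
    while True:
        while True:               # repeat j <- j - 1 until A[j] <= x
            j -= 1
            if A[j] <= x:
                break
        while True:               # repeat i <- i + 1 until A[i] >= x
            i += 1
            if A[i] >= x:
                break
        if i < j:
            A[i], A[j] = A[j], A[i]
        else:
            return j

# On the array of part (a), every element of A[0..j] ends up <= every
# element of A[j+1..r], and p <= j < r as parts (c) and (d) assert.
A = [13, 19, 9, 5, 12, 8, 7, 4, 11, 2, 6, 21]
j = hoare_partition(A, 0, len(A) - 1)
print(j, A[:j + 1], A[j + 1:])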
The PARTITION procedure in Section 7.1 separates the pivot value (originally in A[r ]) from the two partitions it forms. The H OARE -PARTITION procedure, on the other hand, always places the pivot value (originally in A[ p ]) into one of the two partitions A[ p . . j ] and A[ j + 1 . . r ]. Since p ≤ j < r , this split is always nontrivial. e. Rewrite the Q UICKSORT procedure to use H OARE -PARTITION. 7-2 Alternative quicksort analysis An alternative analysis of the running time of randomized quicksort focuses on the expected running time of each individual recursive call to Q UICKSORT, rather than on the number of comparisons performed. a. Argue that, given an array of size n , the probability that any particular element is chosen as the pivot is 1/ n . Use this to define indicator random variables X i = I {i th smallest element is chosen as the pivot }. What is E [ X i ]? Problems for Chapter 7 161 b. Let T (n ) be a random variable denoting the running time of quicksort on an array of size n . Argue that n E [T (n )] = E q =1 X q (T (q − 1) + T (n − q ) + (n )) . (7.5) c. Show that equation (7.5) simplifies to E [T (n )] = d. Show that n −1 2 n n −1 q =0 E [T (q )] + (n ) . (7.6) 1 1 k lg k ≤ n 2 lg n − n 2 . 2 8 k =1 (7.7) (Hint: Split the summation into two parts, one for k = 1, 2, . . . , n /2 − 1 and one for k = n /2 , . . . , n − 1.) e. Using the bound from equation (7.7), show that the recurrence in equation (7.6) has the solution E [T (n )] = (n lg n ). (Hint: Show, by substitution, that E [T (n )] ≤ an log n − bn for some positive constants a and b.) 7-3 Stooge sort Professors Howard, Fine, and Howard have proposed the following “elegant” sorting algorithm: S TOOGE -S ORT ( A, i , j ) 1 if A[i ] > A[ j ] 2 then exchange A[i ] ↔ A[ j ] 3 if i + 1 ≥ j 4 then return 5 k ← ( j − i + 1)/3 6 S TOOGE -S ORT ( A, i , j − k ) 7 S TOOGE -S ORT ( A, i + k , j ) 8 S TOOGE -S ORT ( A, i , j − k ) £ Round down. £ First two-thirds. £ Last two-thirds. £ First two-thirds again. a. Argue that, if n = length[ A], then S TOOGE -S ORT ( A, 1, length [ A]) correctly sorts the input array A[1 . . n ]. b. Give a recurrence for the worst-case running time of S TOOGE -S ORT and a tight asymptotic ( -notation) bound on the worst-case running time. 162 Chapter 7 Quicksort c. Compare the worst-case running time of S TOOGE -S ORT with that of insertion sort, merge sort, heapsort, and quicksort. Do the professors deserve tenure? 7-4 Stack depth for quicksort The Q UICKSORT algorithm of Section 7.1 contains two recursive calls to itself. After the call to PARTITION, the left subarray is recursively sorted and then the right subarray is recursively sorted. The second recursive call in Q UICKSORT is not really necessary; it can be avoided by using an iterative control structure. This technique, called tail recursion, is provided automatically by good compilers. Consider the following version of quicksort, which simulates tail recursion. Q UICKSORT ( A, p , r ) 1 while p < r 2 do £ Partition and sort left subarray. 3 q ← PARTITION ( A, p , r ) 4 Q UICKSORT ( A, p , q − 1) 5 p ←q +1 a. Argue that Q UICKSORT ( A, 1, length [ A]) correctly sorts the array A. Compilers usually execute recursive procedures by using a stack that contains pertinent information, including the parameter values, for each recursive call. The information for the most recent call is at the top of the stack, and the information for the initial call is at the bottom. 
When a procedure is invoked, its information is pushed onto the stack; when it terminates, its information is popped. Since we assume that array parameters are represented by pointers, the information for each procedure call on the stack requires O (1) stack space. The stack depth is the maximum amount of stack space used at any time during a computation. b. Describe a scenario in which the stack depth of Q UICKSORT is n -element input array. (n ) on an (lg n ). c. Modify the code for Q UICKSORT so that the worst-case stack depth is Maintain the O (n lg n ) expected running time of the algorithm. 7-5 Median-of-3 partition One way to improve the R ANDOMIZED -Q UICKSORT procedure is to partition around a pivot that is chosen more carefully than by picking a random element from the subarray. One common approach is the median-of-3 method: choose the pivot as the median (middle element) of a set of 3 elements randomly selected from the subarray. (See Exercise 7.4-6.) For this problem, let us assume that the elements in the input array A[1 . . n ] are distinct and that n ≥ 3. We denote the Problems for Chapter 7 163 sorted output array by A [1 . . n ]. Using the median-of-3 method to choose the pivot element x , define pi = Pr {x = A [i ]}. a. Give an exact formula for pi as a function of n and i for i = 2, 3, . . . , n − 1. (Note that p1 = pn = 0.) b. By what amount have we increased the likelihood of choosing the pivot as x = A [ (n + 1)/2 ], the median of A[1 . . n ], compared to the ordinary implementation? Assume that n → ∞, and give the limiting ratio of these probabilities. c. If we define a “good” split to mean choosing the pivot as x = A [i ], where n /3 ≤ i ≤ 2n /3, by what amount have we increased the likelihood of getting a good split compared to the ordinary implementation? (Hint: Approximate the sum by an integral.) d. Argue that in the (n lg n ) running time of quicksort, the median-of-3 method affects only the constant factor. 7-6 Fuzzy sorting of intervals Consider a sorting problem in which the numbers are not known exactly. Instead, for each number, we know an interval on the real line to which it belongs. That is, we are given n closed intervals of the form [a i , bi ], where ai ≤ bi . The goal is to fuzzy-sort these intervals, i.e., produce a permutation i 1 , i 2 , . . . , i n of the intervals such that there exist c j ∈ [ai j , bi j ] satisfying c1 ≤ c2 ≤ · · · ≤ cn . a. Design an algorithm for fuzzy-sorting n intervals. Your algorithm should have the general structure of an algorithm that quicksorts the left endpoints (the a i ’s), but it should take advantage of overlapping intervals to improve the running time. (As the intervals overlap more and more, the problem of fuzzy-sorting the intervals gets easier and easier. Your algorithm should take advantage of such overlapping, to the extent that it exists.) b. Argue that your algorithm runs in expected time (n lg n ) in general, but runs in expected time (n ) when all of the intervals overlap (i.e., when there exists a value x such that x ∈ [ai , bi ] for all i ). Your algorithm should not be checking for this case explicitly; rather, its performance should naturally improve as the amount of overlap increases. 164 Chapter 7 Quicksort Chapter notes The quicksort procedure was invented by Hoare [147]; Hoare’s version appears in Problem 7-1. The PARTITION procedure given in Section 7.1 is due to N. Lomuto. The analysis in Section 7.4 is due to Avrim Blum. 
Sedgewick [268] and Bentley [40] provide a good reference on the details of implementation and how they matter. McIlroy [216] showed how to engineer a “killer adversary” that produces an array on which virtually any implementation of quicksort takes (n 2) time. If the implementation is randomized, the adversary produces the array after seeing the random choices of the quicksort algorithm. 8 Sorting in Linear Time We have now introduced several algorithms that can sort n numbers in O (n lg n ) time. Merge sort and heapsort achieve this upper bound in the worst case; quicksort achieves it on average. Moreover, for each of these algorithms, we can produce a sequence of n input numbers that causes the algorithm to run in (n lg n ) time. These algorithms share an interesting property: the sorted order they determine is based only on comparisons between the input elements. We call such sorting algorithms comparison sorts. All the sorting algorithms introduced thus far are comparison sorts. In Section 8.1, we shall prove that any comparison sort must make (n lg n ) comparisons in the worst case to sort n elements. Thus, merge sort and heapsort are asymptotically optimal, and no comparison sort exists that is faster by more than a constant factor. Sections 8.2, 8.3, and 8.4 examine three sorting algorithms—counting sort, radix sort, and bucket sort—that run in linear time. Needless to say, these algorithms use operations other than comparisons to determine the sorted order. Consequently, the (n lg n ) lower bound does not apply to them. 8.1 Lower bounds for sorting In a comparison sort, we use only comparisons between elements to gain order information about an input sequence a 1 , a2 , . . . , an . That is, given two elements ai and a j , we perform one of the tests ai < a j , ai ≤ a j , ai = a j , ai ≥ a j , or ai > a j to determine their relative order. We may not inspect the values of the elements or gain order information about them in any other way. In this section, we assume without loss of generality that all of the input elements are distinct. Given this assumption, comparisons of the form a i = a j are useless, so we can assume that no comparisons of this form are made. We also note that the comparisons ai ≤ a j , ai ≥ a j , ai > a j , and ai < a j are all equivalent in that 166 Chapter 8 Sorting in Linear Time 1:2 ≤ 2:3 ≤ 〈1,2,3〉 ≤ 〈1,3,2〉 > 1:3 > 〈3,1,2〉 ≤ 〈2,1,3〉 ≤ 〈2,3,1〉 > 1:3 > 2:3 > 〈3,2,1〉 Figure 8.1 The decision tree for insertion sort operating on three elements. An internal node annotated by i : j indicates a comparison between ai and a j . A leaf annotated by the permutation π (1), π(2), . . . , π(n ) indicates the ordering aπ(1) ≤ aπ(2) ≤ · · · ≤ aπ(n) . The shaded path indicates the decisions made when sorting the input sequence a1 = 6, a2 = 8, a3 = 5 ; the permutation 3, 1, 2 at the leaf indicates that the sorted ordering is a3 = 5 ≤ a1 = 6 ≤ a2 = 8. There are 3! = 6 possible permutations of the input elements, so the decision tree must have at least 6 leaves. they yield identical information about the relative order of a i and a j . We therefore assume that all comparisons have the form a i ≤ a j . The decision-tree model Comparison sorts can be viewed abstractly in terms of decision trees. A decision tree is a full binary tree that represents the comparisons between elements that are performed by a particular sorting algorithm operating on an input of a given size. Control, data movement, and all other aspects of the algorithm are ignored. 
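As a quick numerical preview of the lower bound developed below (this sketch is not part of the text): since a decision tree for n elements must have at least n! reachable leaves, its height, and hence the worst-case number of comparisons, is at least lg(n!). The following Python lines compare that bound with n lg n.

import math

# ceil(lg(n!)) is a lower bound on the worst-case number of comparisons
# made by any comparison sort on n distinct elements.
for n in (4, 8, 16, 32, 64):
    bound = math.ceil(math.log2(math.factorial(n)))
    print(n, bound, round(n * math.log2(n)))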
Figure 8.1 shows the decision tree corresponding to the insertion sort algorithm from Section 2.1 operating on an input sequence of three elements. In a decision tree, each internal node is annotated by i : j for some i and j in the range 1 ≤ i , j ≤ n , where n is the number of elements in the input sequence. Each leaf is annotated by a permutation π (1), π(2), . . . , π(n ) . (See Section C.1 for background on permutations.) The execution of the sorting algorithm corresponds to tracing a path from the root of the decision tree to a leaf. At each internal node, a comparison ai ≤ a j is made. The left subtree then dictates subsequent comparisons for ai ≤ a j , and the right subtree dictates subsequent comparisons for ai > a j . When we come to a leaf, the sorting algorithm has established the ordering aπ(1) ≤ aπ(2) ≤ · · · ≤ aπ(n) . Because any correct sorting algorithm must be able to produce each permutation of its input, a necessary condition for a comparison sort to be correct is that each of the n ! permutations on n elements must appear as one of the leaves of the decision tree, and that each of these leaves must be reachable from the root by a path corresponding to an actual execution of the comparison sort. (We shall refer to such leaves as “reachable.”) Thus, we shall consider only decision trees in which each permutation appears as a reachable leaf. 8.1 Lower bounds for sorting 167 A lower bound for the worst case The length of the longest path from the root of a decision tree to any of its reachable leaves represents the worst-case number of comparisons that the corresponding sorting algorithm performs. Consequently, the worst-case number of comparisons for a given comparison sort algorithm equals the height of its decision tree. A lower bound on the heights of all decision trees in which each permutation appears as a reachable leaf is therefore a lower bound on the running time of any comparison sort algorithm. The following theorem establishes such a lower bound. Theorem 8.1 Any comparison sort algorithm requires (n lg n ) comparisons in the worst case. Proof From the preceding discussion, it suffices to determine the height of a decision tree in which each permutation appears as a reachable leaf. Consider a decision tree of height h with l reachable leaves corresponding to a comparison sort on n elements. Because each of the n ! permutations of the input appears as some leaf, we have n ! ≤ l . Since a binary tree of height h has no more than 2 h leaves, we have n ! ≤ l ≤ 2h , which, by taking logarithms, implies h ≥ lg(n !) (since the lg function is monotonically increasing) = (n lg n ) (by equation (3.18)) . Corollary 8.2 Heapsort and merge sort are asymptotically optimal comparison sorts. Proof The O (n lg n ) upper bounds on the running times for heapsort and merge sort match the (n lg n ) worst-case lower bound from Theorem 8.1. Exercises 8.1-1 What is the smallest possible depth of a leaf in a decision tree for a comparison sort? 8.1-2 Obtain asymptotically tight bounds on lg(n !) without using Stirling’s approximation. Instead, evaluate the summation n=1 lg k using techniques from Seck tion A.2. 168 Chapter 8 Sorting in Linear Time 8.1-3 Show that there is no comparison sort whose running time is linear for at least half of the n ! inputs of length n . What about a fraction of 1/ n of the inputs of length n ? What about a fraction 1/2n ? 8.1-4 You are given a sequence of n elements to sort. The input sequence consists of n / k subsequences, each containing k elements. 
The elements in a given subsequence are all smaller than the elements in the succeeding subsequence and larger than the elements in the preceding subsequence. Thus, all that is needed to sort the whole sequence of length n is to sort the k elements in each of the n / k subsequences. Show an (n lg k ) lower bound on the number of comparisons needed to solve this variant of the sorting problem. (Hint: It is not rigorous to simply combine the lower bounds for the individual subsequences.) 8.2 Counting sort Counting sort assumes that each of the n input elements is an integer in the range 0 to k , for some integer k . When k = O (n ), the sort runs in (n ) time. The basic idea of counting sort is to determine, for each input element x , the number of elements less than x . This information can be used to place element x directly into its position in the output array. For example, if there are 17 elements less than x , then x belongs in output position 18. This scheme must be modified slightly to handle the situation in which several elements have the same value, since we don’t want to put them all in the same position. In the code for counting sort, we assume that the input is an array A[1 . . n ], and thus length [ A] = n . We require two other arrays: the array B [1 . . n ] holds the sorted output, and the array C [0 . . k ] provides temporary working storage. C OUNTING -S ORT ( A, B , k ) 1 for i ← 0 to k 2 do C [i ] ← 0 3 for j ← 1 to length[ A] 4 do C [ A[ j ]] ← C [ A[ j ]] + 1 5 £ C [i ] now contains the number of elements equal to i . 6 for i ← 1 to k 7 do C [i ] ← C [i ] + C [i − 1] 8 £ C [i ] now contains the number of elements less than or equal to i . 9 for j ← length[ A] downto 1 10 do B [C [ A[ j ]]] ← A[ j ] 11 C [ A[ j ]] ← C [ A[ j ]] − 1 8.2 1 2 3 4 5 Counting sort 6 7 8 1 0 1 2 3 4 5 2 3 4 5 6 7 169 8 A25 0 1 3 2 0 3 2 4 3 5 0 3 B 0 1 2 3 4 5 3 C2 2 4 7 7 8 C20 2 3 0 1 (b) 6 7 8 1 2 3 4 5 6 7 8 C2 2 4 6 7 8 (a) 1 2 3 4 5 (c) B 0 0 1 2 3 4 5 3 B 0 0 1 2 3 4 3 5 3 1 2 3 4 5 6 7 8 B0 0 2 2 3 3 3 5 C12 4 6 7 8 C1 2 4 5 7 8 (f) (d) (e) Figure 8.2 The operation of C OUNTING -S ORT on an input array A[1 . . 8], where each element of A is a nonnegative integer no larger than k = 5. (a) The array A and the auxiliary array C after line 4. (b) The array C after line 7. (c)–(e) The output array B and the auxiliary array C after one, two, and three iterations of the loop in lines 9–11, respectively. Only the lightly shaded elements of array B have been filled in. (f) The final sorted output array B . Figure 8.2 illustrates counting sort. After the initialization in the for loop of lines 1–2, we inspect each input element in the for loop of lines 3–4. If the value of an input element is i , we increment C [i ]. Thus, after line 4, C [i ] holds the number of input elements equal to i for each integer i = 0, 1, . . . , k . In lines 6–7, we determine for each i = 0, 1, . . . , k , how many input elements are less than or equal to i by keeping a running sum of the array C . Finally, in the for loop of lines 9–11, we place each element A[ j ] in its correct sorted position in the output array B . If all n elements are distinct, then when we first enter line 9, for each A[ j ], the value C [ A[ j ]] is the correct final position of A[ j ] in the output array, since there are C [ A[ j ]] elements less than or equal to A[ j ]. Because the elements might not be distinct, we decrement C [ A[ j ]] each time we place a value A[ j ] into the B array. 
Decrementing C [ A[ j ]] causes the next input element with a value equal to A[ j ], if one exists, to go to the position immediately before A[ j ] in the output array. How much time does counting sort require? The for loop of lines 1–2 takes time (k ), the for loop of lines 3–4 takes time (n ), the for loop of lines 6–7 takes time (k ), and the for loop of lines 9–11 takes time (n ). Thus, the overall time is (k +n ). In practice, we usually use counting sort when we have k = O (n ), in which case the running time is (n ). Counting sort beats the lower bound of (n lg n ) proved in Section 8.1 because it is not a comparison sort. In fact, no comparisons between input elements occur 170 Chapter 8 Sorting in Linear Time anywhere in the code. Instead, counting sort uses the actual values of the elements to index into an array. The (n lg n ) lower bound for sorting does not apply when we depart from the comparison-sort model. An important property of counting sort is that it is stable: numbers with the same value appear in the output array in the same order as they do in the input array. That is, ties between two numbers are broken by the rule that whichever number appears first in the input array appears first in the output array. Normally, the property of stability is important only when satellite data are carried around with the element being sorted. Counting sort’s stability is important for another reason: counting sort is often used as a subroutine in radix sort. As we shall see in the next section, counting sort’s stability is crucial to radix sort’s correctness. Exercises 8.2-1 Using Figure 8.2 as a model, illustrate the operation of C OUNTING -S ORT on the array A = 6, 0, 2, 0, 1, 3, 4, 6, 1, 3, 2 . 8.2-2 Prove that C OUNTING -S ORT is stable. 8.2-3 Suppose that the for loop header in line 9 of the C OUNTING -S ORT procedure is rewritten as 9 for j ← 1 to length [ A] Show that the algorithm still works properly. Is the modified algorithm stable? 8.2-4 Describe an algorithm that, given n integers in the range 0 to k , preprocesses its input and then answers any query about how many of the n integers fall into a range [a . . b] in O (1) time. Your algorithm should use (n + k ) preprocessing time. 8.3 Radix sort Radix sort is the algorithm used by the card-sorting machines you now find only in computer museums. The cards are organized into 80 columns, and in each column a hole can be punched in one of 12 places. The sorter can be mechanically “programmed” to examine a given column of each card in a deck and distribute the 8.3 Radix sort 171 3 29 4 57 6 57 8 39 4 36 7 20 3 55 720 355 436 457 657 329 839 7 20 3 29 4 36 8 39 3 55 4 57 6 57 329 355 436 457 657 720 839 Figure 8.3 The operation of radix sort on a list of seven 3-digit numbers. The leftmost column is the input. The remaining columns show the list after successive sorts on increasingly significant digit positions. Shading indicates the digit position sorted on to produce each list from the previous one. card into one of 12 bins depending on which place has been punched. An operator can then gather the cards bin by bin, so that cards with the first place punched are on top of cards with the second place punched, and so on. For decimal digits, only 10 places are used in each column. (The other two places are used for encoding nonnumeric characters.) A d -digit number would then occupy a field of d columns. 
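Before following how radix sort orchestrates its per-digit passes, here is a short Python sketch of the COUNTING-SORT procedure of Section 8.2 (0-based arrays, keys in the range 0..k). Decrementing the counter before writing adapts the 1-based pseudocode to 0-based indexing; this is an implementation choice, not the book's code.

def counting_sort(A, k):
    # Stable counting sort of a list A of integers in 0..k, following the
    # three passes of COUNTING-SORT in Section 8.2.
    C = [0] * (k + 1)
    for a in A:
        C[a] += 1                  # C[i] = number of elements equal to i
    for i in range(1, k + 1):
        C[i] += C[i - 1]           # C[i] = number of elements <= i
    B = [0] * len(A)
    for a in reversed(A):          # walk backwards to preserve stability
        C[a] -= 1
        B[C[a]] = a
    return B

print(counting_sort([2, 5, 3, 0, 2, 3, 0, 3], 5))   # [0, 0, 2, 2, 3, 3, 3, 5]

Because the sort is stable, it can serve directly as the per-digit subroutine that radix sort requires.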
Since the card sorter can look at only one column at a time, the problem of sorting n cards on a d -digit number requires a sorting algorithm. Intuitively, one might want to sort numbers on their most significant digit, sort each of the resulting bins recursively, and then combine the decks in order. Unfortunately, since the cards in 9 of the 10 bins must be put aside to sort each of the bins, this procedure generates many intermediate piles of cards that must be kept track of. (See Exercise 8.3-5.) Radix sort solves the problem of card sorting counterintuitively by sorting on the least significant digit first. The cards are then combined into a single deck, with the cards in the 0 bin preceding the cards in the 1 bin preceding the cards in the 2 bin, and so on. Then the entire deck is sorted again on the second-least significant digit and recombined in a like manner. The process continues until the cards have been sorted on all d digits. Remarkably, at that point the cards are fully sorted on the d -digit number. Thus, only d passes through the deck are required to sort. Figure 8.3 shows how radix sort operates on a “deck” of seven 3-digit numbers. It is essential that the digit sorts in this algorithm be stable. The sort performed by a card sorter is stable, but the operator has to be wary about not changing the order of the cards as they come out of a bin, even though all the cards in a bin have the same digit in the chosen column. In a typical computer, which is a sequential random-access machine, radix sort is sometimes used to sort records of information that are keyed by multiple fields. For example, we might wish to sort dates by three keys: year, month, and day. We could run a sorting algorithm with a comparison function that, given two dates, compares years, and if there is a tie, compares months, and if another tie occurs, 172 Chapter 8 Sorting in Linear Time compares days. Alternatively, we could sort the information three times with a stable sort: first on day, next on month, and finally on year. The code for radix sort is straightforward. The following procedure assumes that each element in the n -element array A has d digits, where digit 1 is the lowest-order digit and digit d is the highest-order digit. R ADIX -S ORT ( A, d ) 1 for i ← 1 to d 2 do use a stable sort to sort array A on digit i Lemma 8.3 Given n d -digit numbers in which each digit can take on up to k possible values, R ADIX -S ORT correctly sorts these numbers in (d (n + k )) time. Proof The correctness of radix sort follows by induction on the column being sorted (see Exercise 8.3-3). The analysis of the running time depends on the stable sort used as the intermediate sorting algorithm. When each digit is in the range 0 to k − 1 (so that it can take on k possible values), and k is not too large, counting sort is the obvious choice. Each pass over n d -digit numbers then takes time (n + k ). There are d passes, so the total time for radix sort is (d (n + k )). When d is constant and k = O (n ), radix sort runs in linear time. More generally, we have some flexibility in how to break each key into digits. Lemma 8.4 Given n b-bit numbers and any positive integer r ≤ b, R ADIX -S ORT correctly sorts these numbers in ((b/ r )(n + 2r )) time. Proof For a value r ≤ b, we view each key as having d = b/ r digits of r bits each. Each digit is an integer in the range 0 to 2r − 1, so that we can use counting sort with k = 2r − 1. 
(For example, we can view a 32-bit word as having 4 8-bit digits, so that b = 32, r = 8, k = 2r − 1 = 255, and d = b/ r = 4.) Each pass of counting sort takes time (n + k ) = (n + 2r ) and there are d passes, for a total running time of (d (n + 2r )) = ((b/ r )(n + 2r )). For given values of n and b, we wish to choose the value of r , with r ≤ b, that minimizes the expression (b/ r )(n + 2r ). If b < lg n , then for any value of r ≤ b, we have that (n + 2r ) = (n ). Thus, choosing r = b yields a running time of (b/b)(n + 2b ) = (n ), which is asymptotically optimal. If b ≥ lg n , then choosing r = lg n gives the best time to within a constant factor, which we can see as follows. Choosing r = lg n yields a running time of (bn / lg n ). As we increase r above lg n , the 2r term in the numerator increases faster than 8.3 Radix sort 173 the r term in the denominator, and so increasing r above lg n yields a running time of (bn / lg n ). If instead we were to decrease r below lg n , then the b/ r term increases and the n + 2r term remains at (n ). Is radix sort preferable to a comparison-based sorting algorithm, such as quicksort? If b = O (lg n ), as is often the case, and we choose r ≈ lg n , then radix sort’s running time is (n ), which appears to be better than quicksort’s average-case time of (n lg n ). The constant factors hidden in the -notation differ, however. Although radix sort may make fewer passes than quicksort over the n keys, each pass of radix sort may take significantly longer. Which sorting algorithm is preferable depends on the characteristics of the implementations, of the underlying machine (e.g., quicksort often uses hardware caches more effectively than radix sort), and of the input data. Moreover, the version of radix sort that uses counting sort as the intermediate stable sort does not sort in place, which many of the (n lg n )-time comparison sorts do. Thus, when primary memory storage is at a premium, an in-place algorithm such as quicksort may be preferable. Exercises 8.3-1 Using Figure 8.3 as a model, illustrate the operation of R ADIX -S ORT on the following list of English words: COW, DOG, SEA, RUG, ROW, MOB, BOX, TAB, BAR, EAR, TAR, DIG, BIG, TEA, NOW, FOX. 8.3-2 Which of the following sorting algorithms are stable: insertion sort, merge sort, heapsort, and quicksort? Give a simple scheme that makes any sorting algorithm stable. How much additional time and space does your scheme entail? 8.3-3 Use induction to prove that radix sort works. Where does your proof need the assumption that the intermediate sort is stable? 8.3-4 Show how to sort n integers in the range 0 to n 2 − 1 in O (n ) time. 8.3-5 In the first card-sorting algorithm in this section, exactly how many sorting passes are needed to sort d -digit decimal numbers in the worst case? How many piles of cards would an operator need to keep track of in the worst case? 174 Chapter 8 Sorting in Linear Time 8.4 Bucket sort Bucket sort runs in linear time when the input is drawn from a uniform distribution. Like counting sort, bucket sort is fast because it assumes something about the input. Whereas counting sort assumes that the input consists of integers in a small range, bucket sort assumes that the input is generated by a random process that distributes elements uniformly over the interval [0, 1). (See Section C.2 for a definition of uniform distribution.) 
The idea of bucket sort is to divide the interval [0, 1) into n equal-sized subintervals, or buckets, and then distribute the n input numbers into the buckets. Since the inputs are uniformly distributed over [0, 1), we don’t expect many numbers to fall into each bucket. To produce the output, we simply sort the numbers in each bucket and then go through the buckets in order, listing the elements in each. Our code for bucket sort assumes that the input is an n -element array A and that each element A[i ] in the array satisfies 0 ≤ A[i ] < 1. The code requires an auxiliary array B [0 . . n − 1] of linked lists (buckets) and assumes that there is a mechanism for maintaining such lists. (Section 10.2 describes how to implement basic operations on linked lists.) B UCKET-S ORT ( A) 1 n ← length[ A] 2 for i ← 1 to n 3 do insert A[i ] into list B [ n A[i ] ] 4 for i ← 0 to n − 1 5 do sort list B [i ] with insertion sort 6 concatenate the lists B [0], B [1], . . . , B [n − 1] together in order Figure 8.4 shows the operation of bucket sort on an input array of 10 numbers. To see that this algorithm works, consider two elements A[i ] and A[ j ]. Assume without loss of generality that A[i ] ≤ A[ j ]. Since n A[i ] ≤ n A[ j ] , element A[i ] is placed either into the same bucket as A[ j ] or into a bucket with a lower index. If A[i ] and A[ j ] are placed into the same bucket, then the for loop of lines 4–5 puts them into the proper order. If A[i ] and A[ j ] are placed into different buckets, then line 6 puts them into the proper order. Therefore, bucket sort works correctly. To analyze the running time, observe that all lines except line 5 take O (n ) time in the worst case. It remains to balance the total time taken by the n calls to insertion sort in line 5. To analyze the cost of the calls to insertion sort, let n i be the random variable denoting the number of elements placed in bucket B [i ]. Since insertion sort runs in quadratic time (see Section 2.2), the running time of bucket sort is 8.4 Bucket sort 175 1 2 3 4 5 6 7 8 9 10 A .78 .17 .39 .26 .72 .94 .21 .12 .23 .68 (a) B 0 1 2 3 4 5 6 7 8 9 .12 .21 .39 .17 .23 .26 .68 .72 .94 (b) .78 Figure 8.4 The operation of B UCKET-S ORT. (a) The input array A[1 . . 10]. (b) The array B [0 . . 9] of sorted lists (buckets) after line 5 of the algorithm. Bucket i holds values in the half-open interval [i /10, (i + 1)/10). The sorted output consists of a concatenation in order of the lists B [0], B [1], . . . , B [9]. n −1 i =0 T (n ) = (n ) + O (n 2 ) . i Taking expectations of both sides and using linearity of expectation, we have E [T (n )] = E = = We claim that E [n 2 ] = 2 − 1/ n i (8.2) (n ) + (n ) + (n ) + n −1 i =0 n −1 i =0 O (n 2 ) i (by linearity of expectation) (by equation (C.21)) . (8.1) E [ O (n 2 )] i O (E [n 2 ]) i n −1 i =0 for i = 0, 1, . . . , n − 1. It is no surprise that each bucket i has the same value of E [n 2 ], since each value in the input array A is equally likely to fall in any bucket. i To prove equation (8.2), we define indicator random variables X i j = I { A[ j ] falls in bucket i } for i = 0, 1, . . . , n − 1 and j = 1, 2, . . . , n . Thus, 176 Chapter 8 n Sorting in Linear Time ni = Xij . j =1 To compute E [n 2 ], we expand the square and regroup terms: i E [n 2 ] i n 2 =E n Xij j =1 n =E = E n X i j X ik j =1 k =1 n j =1 X i2j + 1≤ j ≤n 1≤k ≤n k= j = j =1 E X i2j + X i j X ik (8.3) E [ X i j X ik ] , E [ X i j X ik ] = E [ X i j ] E [ X ik ] 11 = · nn 1 = . 
Exercises

8.4-1
Using Figure 8.4 as a model, illustrate the operation of BUCKET-SORT on the array A = ⟨.79, .13, .16, .64, .39, .20, .89, .53, .71, .42⟩.

8.4-2
What is the worst-case running time for the bucket-sort algorithm? What simple change to the algorithm preserves its linear expected running time and makes its worst-case running time O(n lg n)?

8.4-3
Let X be a random variable that is equal to the number of heads in two flips of a fair coin. What is E[X²]? What is E²[X]?

8.4-4
We are given n points in the unit circle, p_i = (x_i, y_i), such that 0 < x_i² + y_i² ≤ 1 for i = 1, 2, . . . , n. Suppose that the points are uniformly distributed; that is, the probability of finding a point in any region of the circle is proportional to the area of that region. Design a Θ(n) expected-time algorithm to sort the n points by their distances d_i = √(x_i² + y_i²) from the origin. (Hint: Design the bucket sizes in BUCKET-SORT to reflect the uniform distribution of the points in the unit circle.)

8.4-5
A probability distribution function P(x) for a random variable X is defined by P(x) = Pr{X ≤ x}. Suppose that a list of n random variables X_1, X_2, . . . , X_n is drawn from a continuous probability distribution function P that is computable in O(1) time. Show how to sort these numbers in linear expected time.

Problems

8-1  Average-case lower bounds on comparison sorting
In this problem, we prove an Ω(n lg n) lower bound on the expected running time of any deterministic or randomized comparison sort on n distinct input elements. We begin by examining a deterministic comparison sort A with decision tree T_A. We assume that every permutation of A's inputs is equally likely.

a. Suppose that each leaf of T_A is labeled with the probability that it is reached given a random input. Prove that exactly n! leaves are labeled 1/n! and that the rest are labeled 0.

b. Let D(T) denote the external path length of a decision tree T; that is, D(T) is the sum of the depths of all the leaves of T. Let T be a decision tree with k > 1 leaves, and let LT and RT be the left and right subtrees of T. Show that D(T) = D(LT) + D(RT) + k.

c. Let d(k) be the minimum value of D(T) over all decision trees T with k > 1 leaves. Show that d(k) = min_{1≤i≤k−1} {d(i) + d(k − i) + k}. (Hint: Consider a decision tree T with k leaves that achieves the minimum.
Let i 0 be the number of leaves in LT and k − i 0 the number of leaves in RT .) d. Prove that for a given value of k > 1 and i in the range 1 ≤ i ≤ k − 1, the function i lg i + (k − i ) lg(k − i ) is minimized at i = k /2. Conclude that d (k ) = (k lg k ). e. Prove that D (T A ) = (n ! lg(n !)), and conclude that the expected time to sort n elements is (n lg n ). Now, consider a randomized comparison sort B . We can extend the decision-tree model to handle randomization by incorporating two kinds of nodes: ordinary comparison nodes and “randomization” nodes. A randomization node models a random choice of the form R ANDOM(1, r ) made by algorithm B ; the node has r children, each of which is equally likely to be chosen during an execution of the algorithm. f. Show that for any randomized comparison sort B , there exists a deterministic comparison sort A that makes no more comparisons on the average than B does. 8-2 Sorting in place in linear time Suppose that we have an array of n data records to sort and that the key of each record has the value 0 or 1. An algorithm for sorting such a set of records might possess some subset of the following three desirable characteristics: Problems for Chapter 8 179 1. The algorithm runs in O (n ) time. 2. The algorithm is stable. 3. The algorithm sorts in place, using no more than a constant amount of storage space in addition to the original array. a. Give an algorithm that satisfies criteria 1 and 2 above. b. Give an algorithm that satisfies criteria 1 and 3 above. c. Give an algorithm that satisfies criteria 2 and 3 above. d. Can any of your sorting algorithms from parts (a)–(c) be used to sort n records with b-bit keys using radix sort in O (bn ) time? Explain how or why not. e. Suppose that the n records have keys in the range from 1 to k . Show how to modify counting sort so that the records can be sorted in place in O (n + k ) time. You may use O (k ) storage outside the input array. Is your algorithm stable? (Hint: How would you do it for k = 3?) 8-3 Sorting variable-length items a. You are given an array of integers, where different integers may have different numbers of digits, but the total number of digits over all the integers in the array is n . Show how to sort the array in O (n ) time. b. You are given an array of strings, where different strings may have different numbers of characters, but the total number of characters over all the strings is n . Show how to sort the strings in O (n ) time. (Note that the desired order here is the standard alphabetical order; for example, a < ab < b.) 8-4 Water jugs Suppose that you are given n red and n blue water jugs, all of different shapes and sizes. All red jugs hold different amounts of water, as do the blue ones. Moreover, for every red jug, there is a blue jug that holds the same amount of water, and vice versa. It is your task to find a grouping of the jugs into pairs of red and blue jugs that hold the same amount of water. To do so, you may perform the following operation: pick a pair of jugs in which one is red and one is blue, fill the red jug with water, and then pour the water into the blue jug. This operation will tell you whether the red or the blue jug can hold more water, or if they are of the same volume. Assume that such a comparison takes one time unit. Your goal is to find an algorithm that 180 Chapter 8 Sorting in Linear Time makes a minimum number of comparisons to determine the grouping. Remember that you may not directly compare two red jugs or two blue jugs. a. 
Describe a deterministic algorithm that uses jugs into pairs. (n 2 ) comparisons to group the b. Prove a lower bound of (n lg n ) for the number of comparisons an algorithm solving this problem must make. c. Give a randomized algorithm whose expected number of comparisons is O (n lg n ), and prove that this bound is correct. What is the worst-case number of comparisons for your algorithm? 8-5 Average sorting Suppose that, instead of sorting an array, we just require that the elements increase on average. More precisely, we call an n -element array A k-sorted if, for all i = 1, 2, . . . , n − k , the following holds: i +k −1 j =i A[ j ] k ≤ i +k j =i +1 A[ j ] k . a. What does it mean for an array to be 1-sorted? b. Give a permutation of the numbers 1, 2, . . . , 10 that is 2-sorted, but not sorted. c. Prove that an n -element array is k -sorted if and only if A[i ] ≤ A[i + k ] for all i = 1, 2, . . . , n − k . d. Give an algorithm that k -sorts an n -element array in O (n lg(n / k )) time. We can also show a lower bound on the time to produce a k -sorted array, when k is a constant. e. Show that a k -sorted array of length n can be sorted in O (n lg k ) time. (Hint: Use the solution to Exercise 6.5-8. ) f. Show that when k is a constant, it requires (n lg n ) time to k -sort an n -element array. (Hint: Use the solution to the previous part along with the lower bound on comparison sorts.) 8-6 Lower bound on merging sorted lists The problem of merging two sorted lists arises frequently. It is used as a subroutine of M ERGE -S ORT, and the procedure to merge two sorted lists is given as M ERGE in Section 2.3.1. In this problem, we will show that there is a lower bound of 2n − 1 Notes for Chapter 8 181 on the worst-case number of comparisons required to merge two sorted lists, each containing n items. First we will show a lower bound of 2n − o(n ) comparisons by using a decision tree. a. Show that, given 2n numbers, there are two sorted lists, each with n numbers. 2n n possible ways to divide them into b. Using a decision tree, show that any algorithm that correctly merges two sorted lists uses at least 2n − o(n ) comparisons. Now we will show a slightly tighter 2n − 1 bound. c. Show that if two elements are consecutive in the sorted order and from opposite lists, then they must be compared. d. Use your answer to the previous part to show a lower bound of 2n − 1 comparisons for merging two sorted lists. Chapter notes The decision-tree model for studying comparison sorts was introduced by Ford and Johnson [94]. Knuth’s comprehensive treatise on sorting [185] covers many variations on the sorting problem, including the information-theoretic lower bound on the complexity of sorting given here. Lower bounds for sorting using generalizations of the decision-tree model were studied comprehensively by Ben-Or [36]. Knuth credits H. H. Seward with inventing counting sort in 1954, and also with the idea of combining counting sort with radix sort. Radix sorting starting with the least significant digit appears to be a folk algorithm widely used by operators of mechanical card-sorting machines. According to Knuth, the first published reference to the method is a 1929 document by L. J. Comrie describing punched-card equipment. Bucket sorting has been in use since 1956, when the basic idea was proposed by E. J. Isaac and R. C. Singleton. Munro and Raman [229] give a stable sorting algorithm that performs O (n 1+ ) comparisons in the worst case, where 0 < ≤ 1 is any fixed constant. 
Although any of the O (n lg n )-time algorithms make fewer comparisons, the algorithm by Munro and Raman moves data only O (n ) times and operates in place. The case of sorting n b-bit integers in o(n lg n ) time has been considered by many researchers. Several positive results have been obtained, each under slightly different assumptions about the model of computation and the restrictions placed on the algorithm. All the results assume that the computer memory is divided into 182 Chapter 8 Sorting in Linear Time addressable b-bit words. Fredman and Willard [99] introduced the fusion tree data structure and used it to sort n integers in O (n lg n / lg lg n ) time. This bound was later improved to O (n lg n ) time by Andersson [16]. These algorithms require the use of multiplication and several precomputed constants. Andersson, Hagerup, Nilsson, and Raman [17] have shown how to sort n integers in O (n lg lg n ) time without using multiplication, but their method requires storage that can be unbounded in terms of n . Using multiplicative hashing, one can reduce the storage needed to O (n ), but the O (n lg lg n ) worst-case bound on the running time becomes an expected-time bound. Generalizing the exponential search trees of Andersson [16], Thorup [297] gave an O (n (lg lg n ) 2 )-time sorting algorithm that does not use multiplication or randomization, and uses linear space. Combining these techniques with some new ideas, Han [137] improved the bound for sorting to O (n lg lg n lg lg lg n ) time. Although these algorithms are important theoretical breakthroughs, they are all fairly complicated and at the present time seem unlikely to compete with existing sorting algorithms in practice. 9 Medians and Order Statistics The i th order statistic of a set of n elements is the i th smallest element. For example, the minimum of a set of elements is the first order statistic (i = 1), and the maximum is the n th order statistic (i = n ). A median, informally, is the “halfway point” of the set. When n is odd, the median is unique, occurring at i = (n + 1)/2. When n is even, there are two medians, occurring at i = n /2 and i = n /2 + 1. Thus, regardless of the parity of n , medians occur at i = (n + 1)/2 (the lower median) and i = (n + 1)/2 (the upper median). For simplicity in this text, however, we consistently use the phrase “the median” to refer to the lower median. This chapter addresses the problem of selecting the i th order statistic from a set of n distinct numbers. We assume for convenience that the set contains distinct numbers, although virtually everything that we do extends to the situation in which a set contains repeated values. The selection problem can be specified formally as follows: Input: A set A of n (distinct) numbers and a number i , with 1 ≤ i ≤ n . Output: The element x ∈ A that is larger than exactly i − 1 other elements of A. The selection problem can be solved in O (n lg n ) time, since we can sort the numbers using heapsort or merge sort and then simply index the i th element in the output array. There are faster algorithms, however. In Section 9.1, we examine the problem of selecting the minimum and maximum of a set of elements. More interesting is the general selection problem, which is investigated in the subsequent two sections. Section 9.2 analyzes a practical algorithm that achieves an O (n ) bound on the running time in the average case. Section 9.3 contains an algorithm of more theoretical interest that achieves the O (n ) running time in the worst case. 
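As a concrete baseline for the definitions above, here is a small Python sketch of selection by sorting and of the lower and upper medians, which occur at positions ⌊(n + 1)/2⌋ and ⌈(n + 1)/2⌉ respectively; the function names are illustrative only, and the algorithms of Sections 9.2 and 9.3 improve on the sorting approach.

```python
def select_by_sorting(a, i):
    """Return the i-th order statistic (i is 1-based) by sorting and indexing:
    the O(n lg n) baseline mentioned above."""
    assert 1 <= i <= len(a)
    return sorted(a)[i - 1]

def medians(a):
    """Return (lower median, upper median) of a nonempty sequence, i.e. the
    elements at positions floor((n+1)/2) and ceil((n+1)/2)."""
    n, s = len(a), sorted(a)
    return s[(n + 1) // 2 - 1], s[(n + 2) // 2 - 1]

if __name__ == "__main__":
    print(select_by_sorting([3, 2, 9, 0, 7, 5, 4, 8, 6, 1], 4))  # 3
    print(medians([1, 2, 3, 4]))   # (2, 3): two medians when n is even
    print(medians([1, 2, 3]))      # (2, 2): a unique median when n is odd
```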
184 Chapter 9 Medians and Order Statistics 9.1 Minimum and maximum How many comparisons are necessary to determine the minimum of a set of n elements? We can easily obtain an upper bound of n − 1 comparisons: examine each element of the set in turn and keep track of the smallest element seen so far. In the following procedure, we assume that the set resides in array A, where length [ A] = n . M INIMUM ( A) 1 min ← A[1] 2 for i ← 2 to length[ A] 3 do if min > A[i ] 4 then min ← A[i ] 5 return min Finding the maximum can, of course, be accomplished with n − 1 comparisons as well. Is this the best we can do? Yes, since we can obtain a lower bound of n − 1 comparisons for the problem of determining the minimum. Think of any algorithm that determines the minimum as a tournament among the elements. Each comparison is a match in the tournament in which the smaller of the two elements wins. The key observation is that every element except the winner must lose at least one match. Hence, n − 1 comparisons are necessary to determine the minimum, and the algorithm M INIMUM is optimal with respect to the number of comparisons performed. Simultaneous minimum and maximum In some applications, we must find both the minimum and the maximum of a set of n elements. For example, a graphics program may need to scale a set of (x , y ) data to fit onto a rectangular display screen or other graphical output device. To do so, the program must first determine the minimum and maximum of each coordinate. It is not difficult to devise an algorithm that can find both the minimum and the maximum of n elements using (n ) comparisons, which is asymptotically optimal. Simply find the minimum and maximum independently, using n − 1 comparisons for each, for a total of 2n − 2 comparisons. In fact, at most 3 n /2 comparisons are sufficient to find both the minimum and the maximum. The strategy is to maintain the minimum and maximum elements seen thus far. Rather than processing each element of the input by comparing it against the current minimum and maximum, at a cost of 2 comparisons per element, 9.2 Selection in expected linear time 185 we process elements in pairs. We compare pairs of elements from the input first with each other, and then we compare the smaller to the current minimum and the larger to the current maximum, at a cost of 3 comparisons for every 2 elements. Setting up initial values for the current minimum and maximum depends on whether n is odd or even. If n is odd, we set both the minimum and maximum to the value of the first element, and then we process the rest of the elements in pairs. If n is even, we perform 1 comparison on the first 2 elements to determine the initial values of the minimum and maximum, and then process the rest of the elements in pairs as in the case for odd n . Let us analyze the total number of comparisons. If n is odd, then we perform 3 n /2 comparisons. If n is even, we perform 1 initial comparison followed by 3(n − 2)/2 comparisons, for a total of 3n /2 − 2. Thus, in either case, the total number of comparisons is at most 3 n /2 . Exercises 9.1-1 Show that the second smallest of n elements can be found with n + lg n − 2 comparisons in the worst case. (Hint: Also find the smallest element.) 9.1-2 Show that 3n /2 − 2 comparisons are necessary in the worst case to find both the maximum and minimum of n numbers. (Hint: Consider how many numbers are potentially either the maximum or minimum, and investigate how a comparison affects these counts.) 
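Returning to the pairwise technique of Section 9.1, the following Python sketch finds both the minimum and the maximum with at most 3⌊n/2⌋ comparisons, initializing the extremes for odd and even n as described above; it is an illustrative translation of the strategy, not pseudocode from the text.

```python
def min_and_max(a):
    """Find both the minimum and maximum of a with at most 3*floor(n/2)
    comparisons: compare each pair of inputs with each other, then the smaller
    against the running minimum and the larger against the running maximum."""
    n = len(a)
    if n == 0:
        raise ValueError("empty input")
    if n % 2 == 1:                 # odd n: both extremes start at the first element
        lo = hi = a[0]
        start = 1
    else:                          # even n: one comparison initializes both
        lo, hi = (a[0], a[1]) if a[0] < a[1] else (a[1], a[0])
        start = 2
    for i in range(start, n - 1, 2):
        x, y = a[i], a[i + 1]
        small, large = (x, y) if x < y else (y, x)
        if small < lo:
            lo = small
        if large > hi:
            hi = large
    return lo, hi

if __name__ == "__main__":
    print(min_and_max([5, 1, 4, 2, 8, 3]))   # (1, 8)
    print(min_and_max([7]))                  # (7, 7)
```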
9.2 Selection in expected linear time The general selection problem appears more difficult than the simple problem of finding a minimum. Yet, surprisingly, the asymptotic running time for both problems is the same: (n ). In this section, we present a divide-and-conquer algorithm for the selection problem. The algorithm R ANDOMIZED -S ELECT is modeled after the quicksort algorithm of Chapter 7. As in quicksort, the idea is to partition the input array recursively. But unlike quicksort, which recursively processes both sides of the partition, R ANDOMIZED -S ELECT only works on one side of the partition. This difference shows up in the analysis: whereas quicksort has an expected running time of (n lg n ), the expected time of R ANDOMIZED -S ELECT is (n ). R ANDOMIZED -S ELECT uses the procedure R ANDOMIZED -PARTITION introduced in Section 7.3. Thus, like R ANDOMIZED -Q UICKSORT, it is a randomized algorithm, since its behavior is determined in part by the output of a random-number 186 Chapter 9 Medians and Order Statistics generator. The following code for R ANDOMIZED -S ELECT returns the i th smallest element of the array A[ p . . r ]. R ANDOMIZED -S ELECT ( A, p , r, i ) 1 if p = r 2 then return A[ p ] 3 q ← R ANDOMIZED -PARTITION ( A, p , r ) 4 k ←q − p+1 5 if i = k £ the pivot value is the answer 6 then return A[q ] 7 elseif i < k 8 then return R ANDOMIZED -S ELECT ( A, p , q − 1, i ) 9 else return R ANDOMIZED -S ELECT ( A, q + 1, r, i − k ) After R ANDOMIZED -PARTITION is executed in line 3 of the algorithm, the array A[ p . . r ] is partitioned into two (possibly empty) subarrays A[ p . . q − 1] and A[q + 1 . . r ] such that each element of A[ p . . q − 1] is less than or equal to A[q ], which in turn is less than each element of A[q + 1 . . r ]. As in quicksort, we will refer to A[q ] as the pivot element. Line 4 of R ANDOMIZED -S ELECT computes the number k of elements in the subarray A[ p . . q ], that is, the number of elements in the low side of the partition, plus one for the pivot element. Line 5 then checks whether A[q ] is the i th smallest element. If it is, then A[q ] is returned. Otherwise, the algorithm determines in which of the two subarrays A[ p . . q − 1] and A[q + 1 . . r ] the i th smallest element lies. If i < k , then the desired element lies on the low side of the partition, and it is recursively selected from the subarray in line 8. If i > k , however, then the desired element lies on the high side of the partition. Since we already know k values that are smaller than the i th smallest element of A[ p . . r ]—namely, the elements of A[ p . . q ]—the desired element is the (i − k )th smallest element of A[q + 1 . . r ], which is found recursively in line 9. The code appears to allow recursive calls to subarrays with 0 elements, but Exercise 9.2-1 asks you to show that this situation cannot happen. The worst-case running time for R ANDOMIZED -S ELECT is (n 2), even to find the minimum, because we could be extremely unlucky and always partition around the largest remaining element, and partitioning takes (n ) time. The algorithm works well in the average case, though, and because it is randomized, no particular input elicits the worst-case behavior. The time required by R ANDOMIZED -S ELECT on an input array A[ p . . r ] of n elements is a random variable that we denote by T (n ), and we obtain an upper bound on E [T (n )] as follows. Procedure R ANDOMIZED -PARTITION is equally likely to return any element as the pivot. 
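Before continuing with the analysis, here is a minimal Python sketch of RANDOMIZED-SELECT with a Lomuto-style RANDOMIZED-PARTITION, translated to 0-based indexing; the call to random.randint to pick the pivot stands in for the RANDOM procedure assumed by the text.

```python
import random

def randomized_partition(a, p, r):
    """Partition a[p..r] around a uniformly random pivot; return the pivot index."""
    i = random.randint(p, r)
    a[i], a[r] = a[r], a[i]              # move the random pivot to the end
    x, store = a[r], p
    for j in range(p, r):
        if a[j] <= x:
            a[store], a[j] = a[j], a[store]
            store += 1
    a[store], a[r] = a[r], a[store]
    return store

def randomized_select(a, p, r, i):
    """Return the i-th smallest element of a[p..r] (i is 1-based within the subarray)."""
    if p == r:
        return a[p]
    q = randomized_partition(a, p, r)
    k = q - p + 1                        # size of the low side, counting the pivot
    if i == k:
        return a[q]                      # the pivot value is the answer
    if i < k:
        return randomized_select(a, p, q - 1, i)
    return randomized_select(a, q + 1, r, i - k)

if __name__ == "__main__":
    A = [3, 2, 9, 0, 7, 5, 4, 8, 6, 1]
    print(randomized_select(A, 0, len(A) - 1, 5))   # 4, the 5th smallest
```

With the procedure in view, we return to the analysis; recall from above that RANDOMIZED-PARTITION is equally likely to return any element of A[p . . r] as the pivot.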
Therefore, for each k such that 1 ≤ k ≤ n , the subarray A[ p . . q ] has k elements (all less than or equal to the pivot) with 9.2 Selection in expected linear time 187 probability 1/ n . For k = 1, 2, . . . , n , we define indicator random variables X k where and so we have E [ X k ] = 1/ n . X k = I {the subarray A[ p . . q ] has exactly k elements } , (9.1) When we call R ANDOMIZED -S ELECT and choose A[q ] as the pivot element, we do not know, a priori, if we will terminate immediately with the correct answer, recurse on the subarray A[ p . . q − 1], or recurse on the subarray A[q + 1 . . r ]. This decision depends on where the i th smallest element falls relative to A[q ]. Assuming that T (n ) is monotonically increasing, we can bound the time needed for the recursive call by the time needed for the recursive call on the largest possible input. In other words, we assume, to obtain an upper bound, that the i th element is always on the side of the partition with the greater number of elements. For a given call of R ANDOMIZED -S ELECT, the indicator random variable X k has the value 1 for exactly one value of k , and it is 0 for all other k . When X k = 1, the two subarrays on which we might recurse have sizes k − 1 and n − k . Hence, we have the recurrence n T (n ) ≤ = k =1 n k =1 X k · (T (max(k − 1, n − k )) + O (n )) ( X k · T (max(k − 1, n − k )) + O (n )) . Taking expected values, we have E [T (n )] n ≤E n k =1 X k · T (max(k − 1, n − k )) + O (n ) (by linearity of expectation) = = = k =1 n k =1 n k =1 E [ X k · T (max(k − 1, n − k ))] + O (n ) E [ X k ] · E [T (max(k − 1, n − k ))] + O (n ) (by equation (C.23)) 1 · E [T (max(k − 1, n − k ))] + O (n ) n (by equation (9.1)) . In order to apply equation (C.23), we rely on X k and T (max(k − 1, n − k )) being independent random variables. Exercise 9.2-2 asks you to justify this assertion. Let us consider the expression max(k − 1, n − k ). We have 188 Chapter 9 Medians and Order Statistics max(k − 1, n − k ) = If n is even, each term from T ( n /2 ) up to T (n − 1) appears exactly twice in the summation, and if n is odd, all these terms appear twice and T ( n /2 ) appears once. Thus, we have 2 E [T (n )] ≤ n n −1 k = n /2 k − 1 if k > n /2 , n − k if k ≤ n /2 . E [T (k )] + O (n ) . We solve the recurrence by substitution. Assume that T (n ) ≤ cn for some constant c that satisfies the initial conditions of the recurrence. We assume that T (n ) = O (1) for n less than some constant; we shall pick this constant later. We also pick a constant a such that the function described by the O (n ) term above (which describes the non-recursive component of the running time of the algorithm) is bounded from above by an for all n > 0. Using this inductive hypothesis, we have E [T (n )] ≤ = = 2 n n −1 k = n /2 ck + an k− n /2 −1 k =1 2c n n −1 k =1 k + an 2c (n − 1)n ( n /2 − 1) n /2 + an − n 2 2 2c (n − 1)n (n /2 − 2)(n /2 − 1) ≤ − + an n 2 2 2c n 2 − n n 2 /4 − 3n /2 + 2 = − + an n 2 2 c 3n 2 n = + − 2 + an n 4 2 3n 1 2 =c +− + an 4 2n 3cn c ≤ + + an 4 2 cn c = cn − − − an . 4 2 In order to complete the proof, we need to show that for sufficiently large n , this last expression is at most cn or, equivalently, that cn /4 − c/2 − an ≥ 0. If we add c/2 to both sides and factor out n , we get n (c/4 − a ) ≥ c/2. As long as we 9.3 Selection in worst-case linear time 189 choose the constant c so that c/4 − a > 0, i.e., c > 4a , we can divide both sides by c/4 − a , giving n≥ c/2 2c = . 
c/4 − a c − 4a Thus, if we assume that T (n ) = O (1) for n < 2c/(c − 4a ), we have T (n ) = O (n ). We conclude that any order statistic, and in particular the median, can be determined on average in linear time. Exercises 9.2-1 Show that in R ANDOMIZED -S ELECT, no recursive call is ever made to a 0-length array. 9.2-2 Argue that the indicator random variable X k and the value T (max(k − 1, n − k )) are independent. 9.2-3 Write an iterative version of R ANDOMIZED -S ELECT . 9.2-4 Suppose we use R ANDOMIZED -S ELECT to select the minimum element of the array A = 3, 2, 9, 0, 7, 5, 4, 8, 6, 1 . Describe a sequence of partitions that results in a worst-case performance of R ANDOMIZED -S ELECT . 9.3 Selection in worst-case linear time We now examine a selection algorithm whose running time is O (n ) in the worst case. Like R ANDOMIZED -S ELECT , the algorithm S ELECT finds the desired element by recursively partitioning the input array. The idea behind the algorithm, however, is to guarantee a good split when the array is partitioned. S ELECT uses the deterministic partitioning algorithm PARTITION from quicksort (see Section 7.1), modified to take the element to partition around as an input parameter. The S ELECT algorithm determines the i th smallest of an input array of n > 1 elements by executing the following steps. (If n = 1, then S ELECT merely returns its only input value as the i th smallest.) 1. Divide the n elements of the input array into n /5 groups of 5 elements each and at most one group made up of the remaining n mod 5 elements. 190 Chapter 9 Medians and Order Statistics x Figure 9.1 Analysis of the algorithm S ELECT. The n elements are represented by small circles, and each group occupies a column. The medians of the groups are whitened, and the median-ofmedians x is labeled. (When finding the median of an even number of elements, we use the lower median.) Arrows are drawn from larger elements to smaller, from which it can be seen that 3 out of every full group of 5 elements to the right of x are greater than x , and 3 out of every group of 5 elements to the left of x are less than x . The elements greater than x are shown on a shaded background. 2. Find the median of each of the n /5 groups by first insertion sorting the elements of each group (of which there are at most 5) and then picking the median from the sorted list of group elements. 3. Use S ELECT recursively to find the median x of the n /5 medians found in step 2. (If there are an even number of medians, then by our convention, x is the lower median.) 4. Partition the input array around the median-of-medians x using the modified version of PARTITION. Let k be one more than the number of elements on the low side of the partition, so that x is the k th smallest element and there are n − k elements on the high side of the partition. 5. If i = k , then return x . Otherwise, use S ELECT recursively to find the i th smallest element on the low side if i < k , or the (i − k )th smallest element on the high side if i > k . To analyze the running time of S ELECT, we first determine a lower bound on the number of elements that are greater than the partitioning element x . Figure 9.1 is helpful in visualizing this bookkeeping. At least half of the medians found in step 2 are greater than1 the median-of-medians x . Thus, at least half of the n /5 groups 1 Because of our assumption that the numbers are distinct, we can say “greater than” and “less than” without being concerned about equality. 
9.3 Selection in worst-case linear time 191 contribute 3 elements that are greater than x , except for the one group that has fewer than 5 elements if 5 does not divide n exactly, and the one group containing x itself. Discounting these two groups, it follows that the number of elements greater than x is at least 3n 1n −6. −2 ≥ 3 25 10 Similarly, the number of elements that are less than x is at least 3n /10 − 6. Thus, in the worst case, S ELECT is called recursively on at most 7n /10 + 6 elements in step 5. We can now develop a recurrence for the worst-case running time T (n ) of the algorithm S ELECT. Steps 1, 2, and 4 take O (n ) time. (Step 2 consists of O (n ) calls of insertion sort on sets of size O (1).) Step 3 takes time T ( n /5 ), and step 5 takes time at most T (7n /10 + 6), assuming that T is monotonically increasing. We make the assumption, which seems unmotivated at first, that any input of 140 or fewer elements requires O (1) time; the origin of the magic constant 140 will be clear shortly. We can therefore obtain the recurrence T (n ) ≤ (1) if n ≤ 140 , T ( n /5 ) + T (7n /10 + 6) + O (n ) if n > 140 . We show that the running time is linear by substitution. More specifically, we will show that T (n ) ≤ cn for some suitably large constant c and all n > 0. We begin by assuming that T (n ) ≤ cn for some suitably large constant c and all n ≤ 140; this assumption holds if c is large enough. We also pick a constant a such that the function described by the O (n ) term above (which describes the non-recursive component of the running time of the algorithm) is bounded above by an for all n > 0. Substituting this inductive hypothesis into the right-hand side of the recurrence yields T (n ) ≤ ≤ = = c n /5 + c(7n /10 + 6) + an cn /5 + c + 7cn /10 + 6c + an 9cn /10 + 7c + an cn + (−cn /10 + 7c + an ) , (9.2) which is at most cn if Inequality (9.2) is equivalent to the inequality c ≥ 10a (n /(n − 70)) when n > 70. Because we assume that n ≥ 140, we have n /(n − 70) ≤ 2, and so choosing c ≥ 20a will satisfy inequality (9.2). (Note that there is nothing special about the constant 140; we could replace it by any integer strictly greater than 70 and then choose c accordingly.) The worst-case running time of S ELECT is therefore linear. −cn /10 + 7c + an ≤ 0 . 192 Chapter 9 Medians and Order Statistics As in a comparison sort (see Section 8.1), S ELECT and R ANDOMIZED -S ELECT determine information about the relative order of elements only by comparing elements. Recall from Chapter 8 that sorting requires (n lg n ) time in the comparison model, even on average (see Problem 8-1). The linear-time sorting algorithms in Chapter 8 make assumptions about the input. In contrast, the linear-time selection algorithms in this chapter do not require any assumptions about the input. They are not subject to the (n lg n ) lower bound because they manage to solve the selection problem without sorting. Thus, the running time is linear because these algorithms do not sort; the lineartime behavior is not a result of assumptions about the input, as was the case for the sorting algorithms in Chapter 8. Sorting requires (n lg n ) time in the comparison model, even on average (see Problem 8-1), and thus the method of sorting and indexing presented in the introduction to this chapter is asymptotically inefficient. Exercises 9.3-1 In the algorithm S ELECT, the input elements are divided into groups of 5. Will the algorithm work in linear time if they are divided into groups of 7? 
Argue that S ELECT does not run in linear time if groups of 3 are used. 9.3-2 Analyze S ELECT to show that if n ≥ 140, then at least n /4 elements are greater than the median-of-medians x and at least n /4 elements are less than x . 9.3-3 Show how quicksort can be made to run in O (n lg n ) time in the worst case. 9.3-4 Suppose that an algorithm uses only comparisons to find the i th smallest element in a set of n elements. Show that it can also find the i − 1 smaller elements and the n − i larger elements without performing any additional comparisons. 9.3-5 Suppose that you have a “black-box” worst-case linear-time median subroutine. Give a simple, linear-time algorithm that solves the selection problem for an arbitrary order statistic. 9.3-6 The k th quantiles of an n -element set are the k − 1 order statistics that divide the sorted set into k equal-sized sets (to within 1). Give an O (n lg k )-time algorithm to list the k th quantiles of a set. 9.3 Selection in worst-case linear time 193 Figure 9.2 Professor Olay needs to determine the position of the east-west oil pipeline that minimizes the total length of the north-south spurs. 9.3-7 Describe an O (n )-time algorithm that, given a set S of n distinct numbers and a positive integer k ≤ n , determines the k numbers in S that are closest to the median of S . 9.3-8 Let X [1 . . n ] and Y [1 . . n ] be two arrays, each containing n numbers already in sorted order. Give an O (lg n )-time algorithm to find the median of all 2n elements in arrays X and Y . 9.3-9 Professor Olay is consulting for an oil company, which is planning a large pipeline running east to west through an oil field of n wells. From each well, a spur pipeline is to be connected directly to the main pipeline along a shortest path (either north or south), as shown in Figure 9.2. Given x - and y -coordinates of the wells, how should the professor pick the optimal location of the main pipeline (the one that minimizes the total length of the spurs)? Show that the optimal location can be determined in linear time. 194 Chapter 9 Medians and Order Statistics Problems 9-1 Largest i numbers in sorted order Given a set of n numbers, we wish to find the i largest in sorted order using a comparison-based algorithm. Find the algorithm that implements each of the following methods with the best asymptotic worst-case running time, and analyze the running times of the algorithms in terms of n and i . a. Sort the numbers, and list the i largest. b. Build a max-priority queue from the numbers, and call E XTRACT-M AX i times. c. Use an order-statistic algorithm to find the i th largest number, partition around that number, and sort the i largest numbers. 9-2 Weighted median For n distinct elements x 1 , x 2 , . . . , x n with positive weights w1 , w2 , . . . , wn such that n=1 wi = 1, the weighted (lower) median is the element x k satisfying i wi < xi <xk 1 2 and xi >xk wi ≤ 1 . 2 a. Argue that the median of x 1 , x 2 , . . . , x n is the weighted median of the x i with weights wi = 1/ n for i = 1, 2, . . . , n . b. Show how to compute the weighted median of n elements in O (n lg n ) worstcase time using sorting. c. Show how to compute the weighted median in (n ) worst-case time using a linear-time median algorithm such as S ELECT from Section 9.3. The post-office location problem is defined as follows. We are given n points p1 , p2 , . . . , pn with associated weights w1 , w2 , . . . , wn . 
We wish to find a point p (not necessarily one of the input points) that minimizes the sum n=1 wi d ( p , pi ), i where d (a , b) is the distance between points a and b. d. Argue that the weighted median is a best solution for the 1-dimensional postoffice location problem, in which points are simply real numbers and the distance between points a and b is d (a , b) = |a − b|. Notes for Chapter 9 195 e. Find the best solution for the 2-dimensional post-office location problem, in which the points are (x , y ) coordinate pairs and the distance between points a = (x 1 , y1 ) and b = (x 2 , y2 ) is the Manhattan distance given by d (a , b) = | x 1 − x 2 | + | y1 − y 2 |. 9-3 Small order statistics The worst-case number T (n ) of comparisons used by S ELECT to select the i th order statistic from n numbers was shown to satisfy T (n ) = (n ), but the constant hidden by the -notation is rather large. When i is small relative to n , we can implement a different procedure that uses S ELECT as a subroutine but makes fewer comparisons in the worst case. a. Describe an algorithm that uses U i (n ) comparisons to find the i th smallest of n elements, where Ui (n ) = T (n ) if i ≥ n /2 , n /2 + Ui ( n /2 ) + T (2i ) otherwise . (Hint: Begin with n /2 disjoint pairwise comparisons, and recurse on the set containing the smaller element from each pair.) b. Show that, if i < n /2, then Ui (n ) = n + O (T (2i ) lg(n / i )). c. Show that if i is a constant less than n /2, then U i (n ) = n + O (lg n ). d. Show that if i = n / k for k ≥ 2, then Ui (n ) = n + O (T (2n / k ) lg k ). Chapter notes The worst-case linear-time median-finding algorithm was devised by Blum, Floyd, Pratt, Rivest, and Tarjan [43]. The fast average-time version is due to Hoare [146]. Floyd and Rivest [92] have developed an improved average-time version that partitions around an element recursively selected from a small sample of the elements. It is still unknown exactly how many comparisons are needed to determine the median. A lower bound of 2n comparisons for median finding was given by Bent and John [38]. An upper bound of 3n was given by Schonhage, Paterson, and Pippenger [265]. Dor and Zwick [79] have improved on both of these bounds; their upper bound is slightly less than 2.95n and the lower bound is slightly more than 2n . Paterson [239] describes these results along with other related work. III Data Structures Introduction Sets are as fundamental to computer science as they are to mathematics. Whereas mathematical sets are unchanging, the sets manipulated by algorithms can grow, shrink, or otherwise change over time. We call such sets dynamic. The next five chapters present some basic techniques for representing finite dynamic sets and manipulating them on a computer. Algorithms may require several different types of operations to be performed on sets. For example, many algorithms need only the ability to insert elements into, delete elements from, and test membership in a set. A dynamic set that supports these operations is called a dictionary. Other algorithms require more complicated operations. For example, min-priority queues, which were introduced in Chapter 6 in the context of the heap data structure, support the operations of inserting an element into and extracting the smallest element from a set. The best way to implement a dynamic set depends upon the operations that must be supported. 
Elements of a dynamic set In a typical implementation of a dynamic set, each element is represented by an object whose fields can be examined and manipulated if we have a pointer to the object. (Section 10.3 discusses the implementation of objects and pointers in programming environments that do not contain them as basic data types.) Some kinds of dynamic sets assume that one of the object’s fields is an identifying key field. If the keys are all different, we can think of the dynamic set as being a set of key values. The object may contain satellite data, which are carried around in other object fields but are otherwise unused by the set implementation. It may also have 198 Part III Data Structures fields that are manipulated by the set operations; these fields may contain data or pointers to other objects in the set. Some dynamic sets presuppose that the keys are drawn from a totally ordered set, such as the real numbers, or the set of all words under the usual alphabetic ordering. (A totally ordered set satisfies the trichotomy property, defined on page 49.) A total ordering allows us to define the minimum element of the set, for example, or speak of the next element larger than a given element in a set. Operations on dynamic sets Operations on a dynamic set can be grouped into two categories: queries, which simply return information about the set, and modifying operations, which change the set. Here is a list of typical operations. Any specific application will usually require only a few of these to be implemented. S EARCH ( S , k ) A query that, given a set S and a key value k , returns a pointer x to an element in S such that key[x ] = k , or NIL if no such element belongs to S . I NSERT ( S , x ) A modifying operation that augments the set S with the element pointed to by x . We usually assume that any fields in element x needed by the set implementation have already been initialized. D ELETE ( S , x ) A modifying operation that, given a pointer x to an element in the set S , removes x from S . (Note that this operation uses a pointer to an element x , not a key value.) M INIMUM ( S ) A query on a totally ordered set S that returns a pointer to the element of S with the smallest key. M AXIMUM ( S ) A query on a totally ordered set S that returns a pointer to the element of S with the largest key. S UCCESSOR ( S , x ) A query that, given an element x whose key is from a totally ordered set S , returns a pointer to the next larger element in S , or NIL if x is the maximum element. P REDECESSOR ( S , x ) A query that, given an element x whose key is from a totally ordered set S , returns a pointer to the next smaller element in S , or NIL if x is the minimum element. Part III Data Structures 199 The queries S UCCESSOR and P REDECESSOR are often extended to sets with nondistinct keys. For a set on n keys, the normal presumption is that a call to M INI MUM followed by n − 1 calls to S UCCESSOR enumerates the elements in the set in sorted order. The time taken to execute a set operation is usually measured in terms of the size of the set given as one of its arguments. For example, Chapter 13 describes a data structure that can support any of the operations listed above on a set of size n in time O (lg n ). Overview of Part III Chapters 10–14 describe several data structures that can be used to implement dynamic sets; many of these will be used later to construct efficient algorithms for a variety of problems. 
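To make the operation contracts above concrete, here is a toy Python sketch of a dynamic set over totally ordered keys. It is deliberately naive, backed by a sorted list, so INSERT and DELETE cost Θ(n), and it identifies elements by key rather than by pointer, unlike the DELETE operation described above; the class and method names are illustrative only, and the data structures of Chapters 10–14 support these operations far more efficiently.

```python
import bisect

class SortedListSet:
    """A toy dynamic set supporting the queries and modifying operations listed
    above, stored as a sorted Python list of keys."""
    def __init__(self):
        self._keys = []

    def insert(self, k):
        bisect.insort(self._keys, k)                 # Theta(n) shift in the worst case

    def delete(self, k):
        i = bisect.bisect_left(self._keys, k)
        if i < len(self._keys) and self._keys[i] == k:
            self._keys.pop(i)

    def search(self, k):
        i = bisect.bisect_left(self._keys, k)
        return k if i < len(self._keys) and self._keys[i] == k else None

    def minimum(self):
        return self._keys[0] if self._keys else None

    def maximum(self):
        return self._keys[-1] if self._keys else None

    def successor(self, k):
        i = bisect.bisect_right(self._keys, k)       # next key strictly larger than k
        return self._keys[i] if i < len(self._keys) else None

    def predecessor(self, k):
        i = bisect.bisect_left(self._keys, k)        # next key strictly smaller than k
        return self._keys[i - 1] if i > 0 else None

if __name__ == "__main__":
    s = SortedListSet()
    for k in [9, 16, 4, 1]:
        s.insert(k)
    print(s.minimum(), s.maximum())           # 1 16
    print(s.successor(4), s.predecessor(4))   # 9 1
    s.delete(4)
    print(s.search(4))                        # None
```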
Another important data structure—the heap—has already been introduced in Chapter 6. Chapter 10 presents the essentials of working with simple data structures such as stacks, queues, linked lists, and rooted trees. It also shows how objects and pointers can be implemented in programming environments that do not support them as primitives. Much of this material should be familiar to anyone who has taken an introductory programming course. Chapter 11 introduces hash tables, which support the dictionary operations I N SERT, D ELETE, and S EARCH. In the worst case, hashing requires (n ) time to perform a S EARCH operation, but the expected time for hash-table operations is O (1). The analysis of hashing relies on probability, but most of the chapter requires no background in the subject. Binary search trees, which are covered in Chapter 12, support all the dynamicset operations listed above. In the worst case, each operation takes (n ) time on a tree with n elements, but on a randomly built binary search tree, the expected time for each operation is O (lg n ). Binary search trees serve as the basis for many other data structures. Red-black trees, a variant of binary search trees, are introduced in Chapter 13. Unlike ordinary binary search trees, red-black trees are guaranteed to perform well: operations take O (lg n ) time in the worst case. A red-black tree is a balanced search tree; Chapter 18 presents another kind of balanced search tree, called a Btree. Although the mechanics of red-black trees are somewhat intricate, you can glean most of their properties from the chapter without studying the mechanics in detail. Nevertheless, walking through the code can be quite instructive. In Chapter 14, we show how to augment red-black trees to support operations other than the basic ones listed above. First, we augment them so that we can dynamically maintain order statistics for a set of keys. Then, we augment them in a different way to maintain intervals of real numbers. 10 Elementary Data Structures In this chapter, we examine the representation of dynamic sets by simple data structures that use pointers. Although many complex data structures can be fashioned using pointers, we present only the rudimentary ones: stacks, queues, linked lists, and rooted trees. We also discuss a method by which objects and pointers can be synthesized from arrays. 10.1 Stacks and queues Stacks and queues are dynamic sets in which the element removed from the set by the D ELETE operation is prespecified. In a stack, the element deleted from the set is the one most recently inserted: the stack implements a last-in, first-out, or LIFO, policy. Similarly, in a queue, the element deleted is always the one that has been in the set for the longest time: the queue implements a first-in, first-out, or FIFO, policy. There are several efficient ways to implement stacks and queues on a computer. In this section we show how to use a simple array to implement each. Stacks The I NSERT operation on a stack is often called P USH, and the D ELETE operation, which does not take an element argument, is often called P OP. These names are allusions to physical stacks, such as the spring-loaded stacks of plates used in cafeterias. The order in which plates are popped from the stack is the reverse of the order in which they were pushed onto the stack, since only the top plate is accessible. As shown in Figure 10.1, we can implement a stack of at most n elements with an array S [1 . . n ]. 
The array has an attribute top[ S ] that indexes the most recently inserted element. The stack consists of elements S [1 . . top[ S ]], where S [1] is the element at the bottom of the stack and S [top[ S ]] is the element at the top. 10.1 1 2 Stacks and queues 3 4 5 6 7 1 2 3 4 5 6 7 1 2 3 4 5 6 201 7 S 15 6 2 9 S 15 6 2 9 17 3 top[S] = 6 (b) S 15 6 2 9 17 3 top[S] = 5 (c) top[S] = 4 (a) Figure 10.1 An array implementation of a stack S . Stack elements appear only in the lightly shaded positions. (a) Stack S has 4 elements. The top element is 9. (b) Stack S after the calls P USH( S , 17) and P USH( S , 3). (c) Stack S after the call P OP( S ) has returned the element 3, which is the one most recently pushed. Although element 3 still appears in the array, it is no longer in the stack; the top is element 17. When top[ S ] = 0, the stack contains no elements and is empty. The stack can be tested for emptiness by the query operation S TACK -E MPTY. If an empty stack is popped, we say the stack underflows, which is normally an error. If top[ S ] exceeds n , the stack overflows. (In our pseudocode implementation, we don’t worry about stack overflow.) The stack operations can each be implemented with a few lines of code. S TACK -E MPTY ( S ) 1 if top[ S ] = 0 2 then return TRUE 3 else return FALSE P USH ( S , x ) 1 top[ S ] ← top[ S ] + 1 2 S [top[ S ]] ← x P OP ( S ) 1 if S TACK -E MPTY ( S ) 2 then error “underflow” 3 else top[ S ] ← top[ S ] − 1 4 return S [top[ S ] + 1] Figure 10.1 shows the effects of the modifying operations P USH and P OP. Each of the three stack operations takes O (1) time. Queues We call the I NSERT operation on a queue E NQUEUE, and we call the D ELETE operation D EQUEUE; like the stack operation P OP, D EQUEUE takes no element 202 Chapter 10 1 Elementary Data Structures 2 3 4 5 6 7 8 9 10 11 12 (a) Q 15 6 head[Q] = 7 1 2 3 4 5 6 7 8 9 8 4 tail[Q] = 12 9 10 11 12 (b) Q3 5 tail[Q] = 3 15 6 head[Q] = 7 5 6 7 8 9 8 4 17 1 2 3 4 9 10 11 12 (c) Q3 5 tail[Q] = 3 15 6 9 8 4 17 head[Q] = 8 Figure 10.2 A queue implemented using an array Q [1 . . 12]. Queue elements appear only in the lightly shaded positions. (a) The queue has 5 elements, in locations Q [7 . . 11]. (b) The configuration of the queue after the calls E NQUEUE( Q , 17), E NQUEUE( Q , 3), and E NQUEUE( Q , 5). (c) The configuration of the queue after the call D EQUEUE( Q ) returns the key value 15 formerly at the head of the queue. The new head has key 6. argument. The FIFO property of a queue causes it to operate like a line of people in the registrar’s office. The queue has a head and a tail. When an element is enqueued, it takes its place at the tail of the queue, just as a newly arriving student takes a place at the end of the line. The element dequeued is always the one at the head of the queue, like the student at the head of the line who has waited the longest. (Fortunately, we don’t have to worry about computational elements cutting into line.) Figure 10.2 shows one way to implement a queue of at most n − 1 elements using an array Q [1 . . n ]. The queue has an attribute head [ Q ] that indexes, or points to, its head. The attribute tail[ Q ] indexes the next location at which a newly arriving element will be inserted into the queue. The elements in the queue are in locations head[Q], head[Q] +1, . . . , tail[ Q ] − 1, where we “wrap around” in the sense that location 1 immediately follows location n in a circular order. When head [ Q ] = tail[ Q ], the queue is empty. 
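Before completing the details of ENQUEUE and DEQUEUE below, here is a Python sketch of both array-based structures, the bounded stack and the circular queue, using 0-based indexing (so an empty stack has top = −1 rather than top[S] = 0) and with explicit underflow and overflow checks, which the stack pseudocode above omits.

```python
class ArrayStack:
    """A stack of at most n elements stored in an array, as in Figure 10.1."""
    def __init__(self, n):
        self.S = [None] * n
        self.top = -1                         # empty stack

    def empty(self):
        return self.top == -1

    def push(self, x):
        if self.top + 1 == len(self.S):
            raise OverflowError("stack overflow")
        self.top += 1
        self.S[self.top] = x

    def pop(self):
        if self.empty():
            raise IndexError("underflow")
        self.top -= 1
        return self.S[self.top + 1]

class ArrayQueue:
    """A circular queue of at most n - 1 elements in an array of length n,
    with head and tail indices that wrap around as described above."""
    def __init__(self, n):
        self.Q = [None] * n
        self.head = 0
        self.tail = 0                         # head == tail means empty

    def enqueue(self, x):
        if (self.tail + 1) % len(self.Q) == self.head:
            raise OverflowError("queue overflow")
        self.Q[self.tail] = x
        self.tail = (self.tail + 1) % len(self.Q)

    def dequeue(self):
        if self.head == self.tail:
            raise IndexError("underflow")
        x = self.Q[self.head]
        self.head = (self.head + 1) % len(self.Q)
        return x

if __name__ == "__main__":
    s = ArrayStack(6)
    for v in [15, 6, 2, 9, 17, 3]:
        s.push(v)
    print(s.pop(), s.pop())            # 3 17
    q = ArrayQueue(6)
    for v in [4, 1, 3]:
        q.enqueue(v)
    print(q.dequeue(), q.dequeue())    # 4 1
```

Returning to the array Q[1 . . n] of the text: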
Initially, we have head[Q] = tail[Q] = 1. When the queue is empty, an attempt to dequeue an element causes the queue to underflow. When head [ Q ] = tail[ Q ] + 1, the queue is full, and an attempt to enqueue an element causes the queue to overflow. 10.1 Stacks and queues 203 In our procedures E NQUEUE and D EQUEUE, the error checking for underflow and overflow has been omitted. (Exercise 10.1-4 asks you to supply code that checks for these two error conditions.) E NQUEUE ( Q , x ) 1 Q [tail[ Q ]] ← x 2 if tail[ Q ] = length[ Q ] 3 then tail[ Q ] ← 1 4 else tail[ Q ] ← tail[ Q ] + 1 D EQUEUE ( Q ) 1 x ← Q [head [ Q ]] 2 if head [ Q ] = length[ Q ] 3 then head [ Q ] ← 1 4 else head [ Q ] ← head [ Q ] + 1 5 return x Figure 10.2 shows the effects of the E NQUEUE and D EQUEUE operations. Each operation takes O (1) time. Exercises 10.1-1 Using Figure 10.1 as a model, illustrate the result of each operation in the sequence P USH ( S , 4), P USH ( S , 1), P USH ( S , 3), P OP ( S ), P USH ( S , 8), and P OP ( S ) on an initially empty stack S stored in array S [1 . . 6]. 10.1-2 Explain how to implement two stacks in one array A[1 . . n ] in such a way that neither stack overflows unless the total number of elements in both stacks together is n . The P USH and P OP operations should run in O (1) time. 10.1-3 Using Figure 10.2 as a model, illustrate the result of each operation in the sequence E NQUEUE ( Q , 4), E NQUEUE ( Q , 1), E NQUEUE ( Q , 3), D EQUEUE ( Q ), E NQUEUE ( Q , 8), and D EQUEUE ( Q ) on an initially empty queue Q stored in array Q [1 . . 6]. 10.1-4 Rewrite E NQUEUE and D EQUEUE to detect underflow and overflow of a queue. 204 Chapter 10 Elementary Data Structures 10.1-5 Whereas a stack allows insertion and deletion of elements at only one end, and a queue allows insertion at one end and deletion at the other end, a deque (doubleended queue) allows insertion and deletion at both ends. Write four O (1)-time procedures to insert elements into and delete elements from both ends of a deque constructed from an array. 10.1-6 Show how to implement a queue using two stacks. Analyze the running time of the queue operations. 10.1-7 Show how to implement a stack using two queues. Analyze the running time of the stack operations. 10.2 Linked lists A linked list is a data structure in which the objects are arranged in a linear order. Unlike an array, though, in which the linear order is determined by the array indices, the order in a linked list is determined by a pointer in each object. Linked lists provide a simple, flexible representation for dynamic sets, supporting (though not necessarily efficiently) all the operations listed on page 198. As shown in Figure 10.3, each element of a doubly linked list L is an object with a key field and two other pointer fields: next and prev. The object may also contain other satellite data. Given an element x in the list, next [x ] points to its successor in the linked list, and prev[x ] points to its predecessor. If prev[x ] = NIL , the element x has no predecessor and is therefore the first element, or head, of the list. If next [x ] = NIL , the element x has no successor and is therefore the last element, or tail, of the list. An attribute head [ L ] points to the first element of the list. If head [ L ] = NIL , the list is empty. A list may have one of several forms. It may be either singly linked or doubly linked, it may be sorted or not, and it may be circular or not. If a list is singly linked, we omit the prev pointer in each element. 
If a list is sorted, the linear order of the list corresponds to the linear order of keys stored in elements of the list; the minimum element is the head of the list, and the maximum element is the tail. If the list is unsorted, the elements can appear in any order. In a circular list, the prev pointer of the head of the list points to the tail, and the next pointer of the tail of the list points to the head. The list may thus be viewed as a ring of elements. In the remainder of this section, we assume that the lists with which we are working are unsorted and doubly linked. 10.2 Linked lists 205 prev (a) head[L] 9 key 16 next 4 1 (b) head[L] 25 9 16 4 1 (c) head[L] 25 9 16 1 Figure 10.3 (a) A doubly linked list L representing the dynamic set {1, 4, 9, 16}. Each element in the list is an object with fields for the key and pointers (shown by arrows) to the next and previous objects. The next field of the tail and the prev field of the head are NIL, indicated by a diagonal slash. The attribute head [ L ] points to the head. (b) Following the execution of L IST-I NSERT( L , x ), where key[x ] = 25, the linked list has a new object with key 25 as the new head. This new object points to the old head with key 9. (c) The result of the subsequent call L IST-D ELETE( L , x ), where x points to the object with key 4. Searching a linked list The procedure L IST-S EARCH ( L , k ) finds the first element with key k in list L by a simple linear search, returning a pointer to this element. If no object with key k appears in the list, then NIL is returned. For the linked list in Figure 10.3(a), the call L IST-S EARCH ( L , 4) returns a pointer to the third element, and the call L IST-S EARCH ( L , 7) returns NIL. L IST-S EARCH ( L , k ) 1 x ← head [ L ] 2 while x = NIL and key[x ] = k 3 do x ← next [x ] 4 return x To search a list of n objects, the L IST-S EARCH procedure takes worst case, since it may have to search the entire list. Inserting into a linked list Given an element x whose key field has already been set, the L IST-I NSERT procedure “splices” x onto the front of the linked list, as shown in Figure 10.3(b). (n ) time in the 206 Chapter 10 Elementary Data Structures L IST-I NSERT ( L , x ) 1 next [x ] ← head [ L ] 2 if head [ L ] = NIL 3 then prev[head [ L ]] ← x 4 head [ L ] ← x 5 prev[x ] ← NIL The running time for L IST-I NSERT on a list of n elements is O (1). Deleting from a linked list The procedure L IST-D ELETE removes an element x from a linked list L . It must be given a pointer to x , and it then “splices” x out of the list by updating pointers. If we wish to delete an element with a given key, we must first call L IST-S EARCH to retrieve a pointer to the element. L IST-D ELETE ( L , x ) 1 if prev[x ] = NIL 2 then next [prev[x ]] ← next [x ] 3 else head [ L ] ← next [x ] 4 if next [x ] = NIL 5 then prev[next [x ]] ← prev[x ] Figure 10.3(c) shows how an element is deleted from a linked list. L IST-D ELETE runs in O (1) time, but if we wish to delete an element with a given key, (n ) time is required in the worst case because we must first call L IST-S EARCH. Sentinels The code for L IST-D ELETE would be simpler if we could ignore the boundary conditions at the head and tail of the list. L IST-D ELETE ( L , x ) 1 next [prev[x ]] ← next [x ] 2 prev[next [x ]] ← prev[x ] A sentinel is a dummy object that allows us to simplify boundary conditions. For example, suppose that we provide with list L an object nil[ L ] that represents NIL but has all the fields of the other list elements. 
Wherever we have a reference to NIL in list code, we replace it by a reference to the sentinel nil[ L ]. As shown in Figure 10.4, this turns a regular doubly linked list into a circular, doubly linked list with a sentinel, in which the sentinel nil[ L ] is placed between the head and 10.2 Linked lists 207 (a) nil[L] 9 16 4 1 (b) nil[L] (c) nil[L] 25 9 16 4 1 (d) nil[L] 25 9 16 4 Figure 10.4 A circular, doubly linked list with a sentinel. The sentinel nil[ L ] appears between the head and tail. The attribute head [ L ] is no longer needed, since we can access the head of the list by next[nil[ L ]]. (a) An empty list. (b) The linked list from Figure 10.3(a), with key 9 at the head and key 1 at the tail. (c) The list after executing L IST-I NSERT ( L , x ), where key[x ] = 25. The new object becomes the head of the list. (d) The list after deleting the object with key 1. The new tail is the object with key 4. tail; the field next [nil[ L ]] points to the head of the list, and prev[nil[ L ]] points to the tail. Similarly, both the next field of the tail and the prev field of the head point to nil[ L ]. Since next [nil[ L ]] points to the head, we can eliminate the attribute head [ L ] altogether, replacing references to it by references to next [nil[ L ]]. An empty list consists of just the sentinel, since both next [nil[ L ]] and prev[nil[ L ]] can be set to nil[ L ]. The code for L IST-S EARCH remains the same as before, but with the references to NIL and head [ L ] changed as specified above. L IST-S EARCH ( L , k ) 1 x ← next [nil[ L ]] 2 while x = nil[ L ] and key[x ] = k 3 do x ← next [x ] 4 return x We use the two-line procedure L IST-D ELETE to delete an element from the list. We use the following procedure to insert an element into the list. L IST-I NSERT ( L , x ) 1 next [x ] ← next [nil[ L ]] 2 prev[next [nil[ L ]]] ← x 3 next [nil[ L ]] ← x 4 prev[x ] ← nil[ L ] 208 Chapter 10 Elementary Data Structures Figure 10.4 shows the effects of L IST-I NSERT and L IST-D ELETE on a sample list. Sentinels rarely reduce the asymptotic time bounds of data structure operations, but they can reduce constant factors. The gain from using sentinels within loops is usually a matter of clarity of code rather than speed; the linked list code, for example, is simplified by the use of sentinels, but we save only O (1) time in the L IST-I NSERT and L IST-D ELETE procedures. In other situations, however, the use of sentinels helps to tighten the code in a loop, thus reducing the coefficient of, say, n or n 2 in the running time. Sentinels should not be used indiscriminately. If there are many small lists, the extra storage used by their sentinels can represent significant wasted memory. In this book, we use sentinels only when they truly simplify the code. Exercises 10.2-1 Can the dynamic-set operation I NSERT be implemented on a singly linked list in O (1) time? How about D ELETE? 10.2-2 Implement a stack using a singly linked list L . The operations P USH and P OP should still take O (1) time. 10.2-3 Implement a queue by a singly linked list L . The operations E NQUEUE and D E QUEUE should still take O (1) time. 10.2-4 As written, each loop iteration in the L IST-S EARCH procedure requires two tests: one for x = nil[ L ] and one for key[x ] = k . Show how to eliminate the test for x = nil[ L ] in each iteration. 10.2-5 Implement the dictionary operations I NSERT, D ELETE, and S EARCH using singly linked, circular lists. What are the running times of your procedures? 
10.2-6 The dynamic-set operation U NION takes two disjoint sets S 1 and S2 as input, and it returns a set S = S1 ∪ S2 consisting of all the elements of S1 and S2. The sets S1 and S2 are usually destroyed by the operation. Show how to support U NION in O (1) time using a suitable list data structure. 10.3 Implementing pointers and objects 209 10.2-7 Give a (n )-time nonrecursive procedure that reverses a singly linked list of n elements. The procedure should use no more than constant storage beyond that needed for the list itself. 10.2-8 Explain how to implement doubly linked lists using only one pointer value np[x ] per item instead of the usual two (next and prev). Assume that all pointer values can be interpreted as k -bit integers, and define np[x ] to be np[x ] = next [x ] XOR prev[x ], the k -bit “exclusive-or” of next [x ] and prev[x ]. (The value NIL is represented by 0.) Be sure to describe what information is needed to access the head of the list. Show how to implement the S EARCH, I NSERT, and D ELETE operations on such a list. Also show how to reverse such a list in O (1) time. 10.3 Implementing pointers and objects How do we implement pointers and objects in languages, such as Fortran, that do not provide them? In this section, we shall see two ways of implementing linked data structures without an explicit pointer data type. We shall synthesize objects and pointers from arrays and array indices. A multiple-array representation of objects We can represent a collection of objects that have the same fields by using an array for each field. As an example, Figure 10.5 shows how we can implement the linked list of Figure 10.3(a) with three arrays. The array key holds the values of the keys currently in the dynamic set, and the pointers are stored in the arrays next and prev. For a given array index x , key[x ], next [x ], and prev[x ] represent an object in the linked list. Under this interpretation, a pointer x is simply a common index into the key, next , and prev arrays. In Figure 10.3(a), the object with key 4 follows the object with key 16 in the linked list. In Figure 10.5, key 4 appears in key[2], and key 16 appears in key[5], so we have next [5] = 2 and prev[2] = 5. Although the constant NIL appears in the next field of the tail and the prev field of the head, we usually use an integer (such as 0 or −1) that cannot possibly represent an actual index into the arrays. A variable L holds the index of the head of the list. In our pseudocode, we have been using square brackets to denote both the indexing of an array and the selection of a field (attribute) of an object. Either way, the meanings of key[x ], next [x ], and prev[x ] are consistent with implementation practice. 210 Chapter 10 1 Elementary Data Structures 2 3 4 5 6 7 8 L 7 next key prev 3 4 5 1 2 2 16 7 5 9 Figure 10.5 The linked list of Figure 10.3(a) represented by the arrays key, next, and prev. Each vertical slice of the arrays represents a single object. Stored pointers correspond to the array indices shown at the top; the arrows show how to interpret them. Lightly shaded object positions contain list elements. The variable L keeps the index of the head. A single-array representation of objects The words in a computer memory are typically addressed by integers from 0 to M − 1, where M is a suitably large integer. In many programming languages, an object occupies a contiguous set of locations in the computer memory. 
A pointer is simply the address of the first memory location of the object, and other memory locations within the object can be indexed by adding an offset to the pointer. We can use the same strategy for implementing objects in programming environments that do not provide explicit pointer data types. For example, Figure 10.6 shows how a single array A can be used to store the linked list from Figures 10.3(a) and 10.5. An object occupies a contiguous subarray A[ j . . k ]. Each field of the object corresponds to an offset in the range from 0 to k − j , and a pointer to the object is the index j . In Figure 10.6, the offsets corresponding to key, next , and prev are 0, 1, and 2, respectively. To read the value of prev[i ], given a pointer i , we add the value i of the pointer to the offset 2, thus reading A[i + 2]. The single-array representation is flexible in that it permits objects of different lengths to be stored in the same array. The problem of managing such a heterogeneous collection of objects is more difficult than the problem of managing a homogeneous collection, where all objects have the same fields. Since most of the data structures we shall consider are composed of homogeneous elements, it will be sufficient for our purposes to use the multiple-array representation of objects. Allocating and freeing objects To insert a key into a dynamic set represented by a doubly linked list, we must allocate a pointer to a currently unused object in the linked-list representation. Thus, it is useful to manage the storage of objects not currently used in the linked-list representation so that one can be allocated. In some systems, a garbage collector is responsible for determining which objects are unused. Many applications, 10.3 Implementing pointers and objects 1 2 3 4 5 6 7 8 9 211 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 L 19 A 4 7 13 1 4 16 4 19 key prev next 9 13 Figure 10.6 The linked list of Figures 10.3(a) and 10.5 represented in a single array A. Each list element is an object that occupies a contiguous subarray of length 3 within the array. The three fields key, next, and prev correspond to the offsets 0, 1, and 2, respectively. A pointer to an object is an index of the first element of the object. Objects containing list elements are lightly shaded, and arrows show the list ordering. however, are simple enough that they can bear responsibility for returning an unused object to a storage manager. We shall now explore the problem of allocating and freeing (or deallocating) homogeneous objects using the example of a doubly linked list represented by multiple arrays. Suppose that the arrays in the multiple-array representation have length m and that at some moment the dynamic set contains n ≤ m elements. Then n objects represent elements currently in the dynamic set, and the remaining m −n objects are free; the free objects can be used to represent elements inserted into the dynamic set in the future. We keep the free objects in a singly linked list, which we call the free list. The free list uses only the next array, which stores the next pointers within the list. The head of the free list is held in the global variable free. When the dynamic set represented by linked list L is nonempty, the free list may be intertwined with list L , as shown in Figure 10.7. Note that each object in the representation is either in list L or in the free list, but not in both. The free list is a stack: the next object allocated is the last one freed. 
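The ALLOCATE-OBJECT and FREE-OBJECT procedures are given next in pseudocode. As a concrete sketch of the whole multiple-array representation with an intertwined free list, one might write the following in Python; the 0-based indices and the use of None in place of NIL are assumptions of this sketch, not features of the text.

class MultiArrayList:
    """Doubly linked list stored in parallel arrays key, next, and prev,
    with the unused slots chained through next into a free list.
    (Sketch only: indices are 0-based and None stands in for NIL.)"""
    def __init__(self, m):
        self.key  = [None] * m
        self.next = [None] * m
        self.prev = [None] * m
        self.head = None
        for i in range(m - 1):          # initially every slot is free
            self.next[i] = i + 1
        self.free = 0 if m > 0 else None

    def allocate_object(self):
        # ALLOCATE-OBJECT: pop a slot off the free list (a stack)
        if self.free is None:
            raise MemoryError("out of space")
        x = self.free
        self.free = self.next[x]
        return x

    def free_object(self, x):
        # FREE-OBJECT: push slot x back onto the free list
        self.next[x] = self.free
        self.free = x

    def list_insert(self, k):
        # allocate a slot, store the key, and splice it onto the front of the list
        x = self.allocate_object()
        self.key[x] = k
        self.next[x] = self.head
        if self.head is not None:
            self.prev[self.head] = x
        self.head = x
        self.prev[x] = None
        return x

    def list_delete(self, x):
        # splice slot x out of the list and return it to the free list
        if self.prev[x] is not None:
            self.next[self.prev[x]] = self.next[x]
        else:
            self.head = self.next[x]
        if self.next[x] is not None:
            self.prev[self.next[x]] = self.prev[x]
        self.free_object(x)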
We can use a list implementation of the stack operations P USH and P OP to implement the procedures for allocating and freeing objects, respectively. We assume that the global variable free used in the following procedures points to the first element of the free list. A LLOCATE -O BJECT () 1 if free = NIL 2 then error “out of space” 3 else x ← free 4 free ← next [x ] 5 return x 212 Chapter 10 Elementary Data Structures 1 2 3 4 5 6 7 8 1 2 3 4 5 6 7 8 free L 4 7 next key prev free L 8 4 next key prev 3 4 5 8 1 2 21 16 7 5 9 6 3 4 5 721 1 25 16 2 7 (b) 5 9 4 6 (a) 1 2 3 4 5 6 7 8 free L 5 4 next key prev 3 4 7 78 1 25 2 (c) 1 2 9 4 6 Figure 10.7 The effect of the A LLOCATE -O BJECT and F REE -O BJECT procedures. (a) The list of Figure 10.5 (lightly shaded) and a free list (heavily shaded). Arrows show the free-list structure. (b) The result of calling A LLOCATE -O BJECT() (which returns index 4), setting key[4] to 25, and calling L IST-I NSERT( L , 4). The new free-list head is object 8, which had been next[4] on the free list. (c) After executing L IST-D ELETE( L , 5), we call F REE -O BJECT(5). Object 5 becomes the new free-list head, with object 8 following it on the free list. F REE -O BJECT (x ) 1 next [x ] ← free 2 free ← x The free list initially contains all n unallocated objects. When the free list has been exhausted, the A LLOCATE -O BJECT procedure signals an error. It is common to use a single free list to service several linked lists. Figure 10.8 shows two linked lists and a free list intertwined through key, next , and prev arrays. The two procedures run in O (1) time, which makes them quite practical. They can be modified to work for any homogeneous collection of objects by letting any one of the fields in the object act like a next field in the free list. Exercises 10.3-1 Draw a picture of the sequence 13, 4, 8, 19, 5, 11 stored as a doubly linked list using the multiple-array representation. Do the same for the single-array representation. 10.3 Implementing pointers and objects 213 free 10 L2 9 L1 3 1 2 3 4 5 6 7 8 9 10 next 5 68 21 key k1 k2 k3 k5 k6 k7 prev 7 6 139 7 k9 4 Figure 10.8 Two linked lists, L 1 (lightly shaded) and L 2 (heavily shaded), and a free list (darkened) intertwined. 10.3-2 Write the procedures A LLOCATE -O BJECT and F REE -O BJECT for a homogeneous collection of objects implemented by the single-array representation. 10.3-3 Why don’t we need to set or reset the prev fields of objects in the implementation of the A LLOCATE -O BJECT and F REE -O BJECT procedures? 10.3-4 It is often desirable to keep all elements of a doubly linked list compact in storage, using, for example, the first m index locations in the multiple-array representation. (This is the case in a paged, virtual-memory computing environment.) Explain how the procedures A LLOCATE -O BJECT and F REE -O BJECT can be implemented so that the representation is compact. Assume that there are no pointers to elements of the linked list outside the list itself. (Hint: Use the array implementation of a stack.) 10.3-5 Let L be a doubly linked list of length m stored in arrays key, prev, and next of length n . Suppose that these arrays are managed by A LLOCATE -O BJECT and F REE -O BJECT procedures that keep a doubly linked free list F . Suppose further that of the n items, exactly m are on list L and n − m are on the free list. Write a procedure C OMPACTIFY-L IST ( L , F ) that, given the list L and the free list F , moves the items in L so that they occupy array positions 1, 2, . . . 
, m and adjusts the free list F so that it remains correct, occupying array positions m + 1, m + 2, . . . , n . The running time of your procedure should be (m ), and it should use only a constant amount of extra space. Give a careful argument for the correctness of your procedure. 214 Chapter 10 Elementary Data Structures 10.4 Representing rooted trees The methods for representing lists given in the previous section extend to any homogeneous data structure. In this section, we look specifically at the problem of representing rooted trees by linked data structures. We first look at binary trees, and then we present a method for rooted trees in which nodes can have an arbitrary number of children. We represent each node of a tree by an object. As with linked lists, we assume that each node contains a key field. The remaining fields of interest are pointers to other nodes, and they vary according to the type of tree. Binary trees As shown in Figure 10.9, we use the fields p , left , and right to store pointers to the parent, left child, and right child of each node in a binary tree T . If p [x ] = NIL , then x is the root. If node x has no left child, then left [x ] = NIL , and similarly for the right child. The root of the entire tree T is pointed to by the attribute root [T ]. If root [T ] = NIL , then the tree is empty. Rooted trees with unbounded branching The scheme for representing a binary tree can be extended to any class of trees in which the number of children of each node is at most some constant k : we replace the left and right fields by child 1 , child 2 , . . . , child k . This scheme no longer works when the number of children of a node is unbounded, since we do not know how many fields (arrays in the multiple-array representation) to allocate in advance. Moreover, even if the number of children k is bounded by a large constant but most nodes have a small number of children, we may waste a lot of memory. Fortunately, there is a clever scheme for using binary trees to represent trees with arbitrary numbers of children. It has the advantage of using only O (n ) space for any n -node rooted tree. The left-child, right-sibling representation is shown in Figure 10.10. As before, each node contains a parent pointer p , and root [T ] points to the root of tree T . Instead of having a pointer to each of its children, however, each node x has only two pointers: 1. left-child [x ] points to the leftmost child of node x , and 2. right-sibling [x ] points to the sibling of x immediately to the right. If node x has no children, then left-child [x ] = NIL , and if node x is the rightmost child of its parent, then right-sibling [x ] = NIL . 10.4 Representing rooted trees 215 root[T] Figure 10.9 The representation of a binary tree T . Each node x has the fields p[x ] (top), left[x ] (lower left), and right[x ] (lower right). The key fields are not shown. root[T] Figure 10.10 The left-child, right-sibling representation of a tree T . Each node x has fields p[x ] (top), left-child[x ] (lower left), and right-sibling[x ] (lower right). Keys are not shown. 216 Chapter 10 Elementary Data Structures Other tree representations We sometimes represent rooted trees in other ways. In Chapter 6, for example, we represented a heap, which is based on a complete binary tree, by a single array plus an index. The trees that appear in Chapter 21 are traversed only toward the root, so only the parent pointers are present; there are no pointers to children. Many other schemes are possible. 
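For concreteness, here is a small Python sketch of the two node layouts described above; the attribute names mirror the text's fields, but the classes themselves (and the children generator) are illustrative rather than prescribed.

class BinaryTreeNode:
    """Binary-tree node with parent, left-child, and right-child pointers (p, left, right)."""
    def __init__(self, key):
        self.key = key
        self.p = None        # parent; None at the root
        self.left = None     # left child
        self.right = None    # right child

class LCRSNode:
    """Left-child, right-sibling node for a rooted tree with unbounded branching."""
    def __init__(self, key):
        self.key = key
        self.p = None                # parent
        self.left_child = None       # leftmost child
        self.right_sibling = None    # sibling immediately to the right

def children(x):
    """Yield the children of LCRS node x from left to right, in time linear in their number."""
    c = x.left_child
    while c is not None:
        yield c
        c = c.right_sibling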
Which scheme is best depends on the application. Exercises 10.4-1 Draw the binary tree rooted at index 6 that is represented by the following fields. index 1 2 3 4 5 6 7 8 9 10 key 12 15 4 10 2 18 7 14 21 5 left 7 8 10 5 NIL right 3 NIL NIL 9 NIL 1 NIL 4 NIL 6 NIL NIL 2 NIL NIL 10.4-2 Write an O (n )-time recursive procedure that, given an n -node binary tree, prints out the key of each node in the tree. 10.4-3 Write an O (n )-time nonrecursive procedure that, given an n -node binary tree, prints out the key of each node in the tree. Use a stack as an auxiliary data structure. 10.4-4 Write an O (n )-time procedure that prints all the keys of an arbitrary rooted tree with n nodes, where the tree is stored using the left-child, right-sibling representation. 10.4-5 Write an O (n )-time nonrecursive procedure that, given an n -node binary tree, prints out the key of each node. Use no more than constant extra space outside of the tree itself and do not modify the tree, even temporarily, during the procedure. Problems for Chapter 10 217 10.4-6 The left-child, right-sibling representation of an arbitrary rooted tree uses three pointers in each node: left-child , right-sibling , and parent . From any node, its parent can be reached and identified in constant time and all its children can be reached and identified in time linear in the number of children. Show how to use only two pointers and one boolean value in each node so that the parent of a node or all of its children can be reached and identified in time linear in the number of children. Problems 10-1 Comparisons among lists For each of the four types of lists in the following table, what is the asymptotic worst-case running time for each dynamic-set operation listed? unsorted, singly linked S EARCH( L , k ) I NSERT( L , x ) D ELETE( L , x ) S UCCESSOR( L , x ) P REDECESSOR( L , x ) M INIMUM( L ) M AXIMUM( L ) sorted, singly linked unsorted, doubly linked sorted, doubly linked 10-2 Mergeable heaps using linked lists A mergeable heap supports the following operations: M AKE -H EAP (which creates an empty mergeable heap), I NSERT, M INIMUM, E XTRACT-M IN, and U NION. 1 Show how to implement mergeable heaps using linked lists in each of the following cases. Try to make each operation as efficient as possible. Analyze the running time of each operation in terms of the size of the dynamic set(s) being operated on. a. Lists are sorted. 1 Because we have defined a mergeable heap to support M INIMUM and E XTRACT-M IN , we can also refer to it as a mergeable min-heap. Alternatively, if it supported M AXIMUM and E XTRACT-M AX, it would be a mergeable max-heap. 218 Chapter 10 Elementary Data Structures b. Lists are unsorted. c. Lists are unsorted, and dynamic sets to be merged are disjoint. 10-3 Searching a sorted compact list Exercise 10.3-4 asked how we might maintain an n -element list compactly in the first n positions of an array. We shall assume that all keys are distinct and that the compact list is also sorted, that is, key[i ] < key[next [i ]] for all i = 1, 2, . . . , n such that next [i ] = NIL . Under these assumptions, you will √ show that the following randomized algorithm can be used to search the list in O ( n ) expected time. 
C OMPACT-L IST-S EARCH ( L , n , k ) 1 i ← head [ L ] 2 while i = NIL and key[i ] < k 3 do j ← R ANDOM(1, n ) 4 if key[i ] < key[ j ] and key[ j ] ≤ k 5 then i ← j 6 if key[i ] = k 7 then return i 8 i ← next [i ] 9 if i = NIL or key[i ] > k 10 then return NIL 11 else return i If we ignore lines 3–7 of the procedure, we have an ordinary algorithm for searching a sorted linked list, in which index i points to each position of the list in turn. The search terminates once the index i “falls off” the end of the list or once key[i ] ≥ k . In the latter case, if key[i ] = k , clearly we have found a key with the value k . If, however, key[i ] > k , then we will never find a key with the value k , and so terminating the search was the right thing to do. Lines 3–7 attempt to skip ahead to a randomly chosen position j . Such a skip is beneficial if key[ j ] is larger than key[i ] and no larger than k ; in such a case, j marks a position in the list that i would have to reach during an ordinary list search. Because the list is compact, we know that any choice of j between 1 and n indexes some object in the list rather than a slot on the free list. Instead of analyzing the performance of C OMPACT-L IST-S EARCH directly, we shall analyze a related algorithm, C OMPACT-L IST-S EARCH , which executes two separate loops. This algorithm takes an additional parameter t which determines an upper bound on the number of iterations of the first loop. Problems for Chapter 10 219 C OMPACT-L IST-S EARCH ( L , n , k , t ) 1 i ← head [ L ] 2 for q ← 1 to t 3 do j ← R ANDOM (1, n ) 4 if key[i ] < key[ j ] and key[ j ] ≤ k 5 then i ← j 6 if key[i ] = k 7 then return i 8 while i = NIL and key[i ] < k 9 do i ← next [i ] 10 if i = NIL or key[i ] > k 11 then return NIL 12 else return i To compare the execution of the algorithms C OMPACT-L IST-S EARCH ( L , k ) and C OMPACT-L IST-S EARCH ( L , k , t ), assume that the sequence of integers returned by the calls of R ANDOM(1, n ) is the same for both algorithms. a. Suppose that C OMPACT-L IST-S EARCH ( L , k ) takes t iterations of the while loop of lines 2–8. Argue that C OMPACT-L IST-S EARCH ( L , k , t ) returns the same answer and that the total number of iterations of both the for and while loops within C OMPACT-L IST-S EARCH is at least t . In the call C OMPACT-L IST-S EARCH ( L , k , t ), let X t be the random variable that describes the distance in the linked list (that is, through the chain of next pointers) from position i to the desired key k after t iterations of the for loop of lines 2–7 have occurred. b. Argue that the expected running time of C OMPACT-L IST-S EARCH ( L , k , t ) is O (t + E [ X t ]). c. Show that E [ X t ] ≤ d. Show that n −1 t r =0 r n r =1 (1 − r / n )t . (Hint: Use equation (C.24).) ≤ n t +1 /(t + 1). e. Prove that E [ X t ] ≤ n /(t + 1). f. Show that C OMPACT-L IST-S EARCH ( L , k , t ) runs in O (t + n / t ) expected time. √ g. Conclude that C OMPACT-L IST-S EARCH runs in O ( n ) expected time. h. Why do we assume that all keys are distinct in C OMPACT-L IST-S EARCH? Argue that random skips do not necessarily help asymptotically when the list contains repeated key values. 220 Chapter 10 Elementary Data Structures Chapter notes Aho, Hopcroft, and Ullman [6] and Knuth [182] are excellent references for elementary data structures. Many other texts cover both basic data structures and their implementation in a particular programming language. 
Examples of these types of textbooks include Goodrich and Tamassia [128], Main [209], Shaffer [273], and Weiss [310, 312, 313]. Gonnet [126] provides experimental data on the performance of many data structure operations. The origin of stacks and queues as data structures in computer science is unclear, since corresponding notions already existed in mathematics and paper-based business practices before the introduction of digital computers. Knuth [182] cites A. M. Turing for the development of stacks for subroutine linkage in 1947. Pointer-based data structures also seem to be a folk invention. According to Knuth, pointers were apparently used in early computers with drum memories. The A-1 language developed by G. M. Hopper in 1951 represented algebraic formulas as binary trees. Knuth credits the IPL-II language, developed in 1956 by A. Newell, J. C. Shaw, and H. A. Simon, for recognizing the importance and promoting the use of pointers. Their IPL-III language, developed in 1957, included explicit stack operations. 11 Hash Tables Many applications require a dynamic set that supports only the dictionary operations I NSERT, S EARCH, and D ELETE. For example, a compiler for a computer language maintains a symbol table, in which the keys of elements are arbitrary character strings that correspond to identifiers in the language. A hash table is an effective data structure for implementing dictionaries. Although searching for an element in a hash table can take as long as searching for an element in a linked list— (n ) time in the worst case—in practice, hashing performs extremely well. Under reasonable assumptions, the expected time to search for an element in a hash table is O (1). A hash table is a generalization of the simpler notion of an ordinary array. Directly addressing into an ordinary array makes effective use of our ability to examine an arbitrary position in an array in O (1) time. Section 11.1 discusses direct addressing in more detail. Direct addressing is applicable when we can afford to allocate an array that has one position for every possible key. When the number of keys actually stored is small relative to the total number of possible keys, hash tables become an effective alternative to directly addressing an array, since a hash table typically uses an array of size proportional to the number of keys actually stored. Instead of using the key as an array index directly, the array index is computed from the key. Section 11.2 presents the main ideas, focusing on “chaining” as a way to handle “collisions” in which more than one key maps to the same array index. Section 11.3 describes how array indices can be computed from keys using hash functions. We present and analyze several variations on the basic theme. Section 11.4 looks at “open addressing,” which is another way to deal with collisions. The bottom line is that hashing is an extremely effective and practical technique: the basic dictionary operations require only O (1) time on the average. Section 11.5 explains how “perfect hashing” can support searches in O (1) worstcase time, when the set of keys being stored is static (that is, when the set of keys never changes once stored). 222 Chapter 11 Hash Tables 11.1 Direct-address tables Direct addressing is a simple technique that works well when the universe U of keys is reasonably small. Suppose that an application needs a dynamic set in which each element has a key drawn from the universe U = {0, 1, . . . , m − 1}, where m is not too large. 
We shall assume that no two elements have the same key. To represent the dynamic set, we use an array, or direct-address table, denoted by T [0 . . m − 1], in which each position, or slot, corresponds to a key in the universe U . Figure 11.1 illustrates the approach; slot k points to an element in the set with key k . If the set contains no element with key k , then T [k ] = NIL . The dictionary operations are trivial to implement. D IRECT-A DDRESS -S EARCH (T , k ) return T [k ] D IRECT-A DDRESS -I NSERT (T , x ) T [key[x ]] ← x D IRECT-A DDRESS -D ELETE (T , x ) T [key[x ]] ← NIL Each of these operations is fast: only O (1) time is required. For some applications, the elements in the dynamic set can be stored in the direct-address table itself. That is, rather than storing an element’s key and satellite data in an object external to the direct-address table, with a pointer from a slot in the table to the object, we can store the object in the slot itself, thus saving space. Moreover, it is often unnecessary to store the key field of the object, since if we have the index of an object in the table, we have its key. If keys are not stored, however, we must have some way to tell if the slot is empty. Exercises 11.1-1 Suppose that a dynamic set S is represented by a direct-address table T of length m . Describe a procedure that finds the maximum element of S . What is the worst-case performance of your procedure? 11.1-2 A bit vector is simply an array of bits (0’s and 1’s). A bit vector of length m takes much less space than an array of m pointers. Describe how to use a bit vector 11.1 Direct-address tables 223 T 0 9 1 U (universe of keys) 0 6 7 4 2 5 3 8 1 2 3 4 5 6 7 8 9 key 2 3 5 satellite data K (actual keys) 8 Figure 11.1 Implementing a dynamic set by a direct-address table T . Each key in the universe U = {0, 1, . . . , 9} corresponds to an index in the table. The set K = {2, 3, 5, 8} of actual keys determines the slots in the table that contain pointers to elements. The other slots, heavily shaded, contain NIL. to represent a dynamic set of distinct elements with no satellite data. Dictionary operations should run in O (1) time. 11.1-3 Suggest how to implement a direct-address table in which the keys of stored elements do not need to be distinct and the elements can have satellite data. All three dictionary operations (I NSERT, D ELETE, and S EARCH) should run in O (1) time. (Don’t forget that D ELETE takes as an argument a pointer to an object to be deleted, not a key.) 11.1-4 We wish to implement a dictionary by using direct addressing on a huge array. At the start, the array entries may contain garbage, and initializing the entire array is impractical because of its size. Describe a scheme for implementing a directaddress dictionary on a huge array. Each stored object should use O (1) space; the operations S EARCH, I NSERT, and D ELETE should take O (1) time each; and the initialization of the data structure should take O (1) time. (Hint: Use an additional stack, whose size is the number of keys actually stored in the dictionary, to help determine whether a given entry in the huge array is valid or not.) 224 Chapter 11 Hash Tables 11.2 Hash tables The difficulty with direct addressing is obvious: if the universe U is large, storing a table T of size |U | may be impractical, or even impossible, given the memory available on a typical computer. Furthermore, the set K of keys actually stored may be so small relative to U that most of the space allocated for T would be wasted. 
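For later comparison with hash tables, the three direct-address procedures of Section 11.1 translate almost line for line into code. A minimal sketch, assuming each stored element is an object carrying a key attribute in the range 0 to m − 1:

class DirectAddressTable:
    """Direct-address table T[0 .. m-1]; slot k holds the element with key k, or None."""
    def __init__(self, m):
        self.slots = [None] * m

    def search(self, k):
        # DIRECT-ADDRESS-SEARCH, O(1)
        return self.slots[k]

    def insert(self, x):
        # DIRECT-ADDRESS-INSERT, O(1); x.key must lie in 0 .. m-1
        self.slots[x.key] = x

    def delete(self, x):
        # DIRECT-ADDRESS-DELETE, O(1)
        self.slots[x.key] = None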
When the set K of keys stored in a dictionary is much smaller than the universe U of all possible keys, a hash table requires much less storage than a directaddress table. Specifically, the storage requirements can be reduced to ( | K |) while we maintain the benefit that searching for an element in the hash table still requires only O (1) time. The only catch is that this bound is for the average time, whereas for direct addressing it holds for the worst-case time. With direct addressing, an element with key k is stored in slot k . With hashing, this element is stored in slot h (k ); that is, we use a hash function h to compute the slot from the key k . Here h maps the universe U of keys into the slots of a hash table T [0 . . m − 1]: We say that an element with key k hashes to slot h (k ); we also say that h (k ) is the hash value of key k . Figure 11.2 illustrates the basic idea. The point of the hash function is to reduce the range of array indices that need to be handled. Instead of |U | values, we need to handle only m values. Storage requirements are correspondingly reduced. There is one hitch: two keys may hash to the same slot. We call this situation a collision. Fortunately, there are effective techniques for resolving the conflict created by collisions. Of course, the ideal solution would be to avoid collisions altogether. We might try to achieve this goal by choosing a suitable hash function h . One idea is to make h appear to be “random,” thus avoiding collisions or at least minimizing their number. The very term “to hash,” evoking images of random mixing and chopping, captures the spirit of this approach. (Of course, a hash function h must be deterministic in that a given input k should always produce the same output h (k ).) Since |U | > m , however, there must be at least two keys that have the same hash value; avoiding collisions altogether is therefore impossible. Thus, while a welldesigned, “random”-looking hash function can minimize the number of collisions, we still need a method for resolving the collisions that do occur. The remainder of this section presents the simplest collision resolution technique, called chaining. Section 11.4 introduces an alternative method for resolving collisions, called open addressing. h : U → {0, 1, . . . , m − 1} . 11.2 Hash tables 225 T 0 U (universe of keys) k1 K (actual keys) k4 k2 k5 k3 h(k1) h(k4) h(k2) = h(k5) h(k3) m–1 Figure 11.2 Using a hash function h to map keys to hash-table slots. Keys k 2 and k5 map to the same slot, so they collide. T U (universe of keys) k1 K (actual keys) k6 k4 k2 k7 k8 k5 k3 k5 k3 k8 k2 k7 k1 k4 k6 Figure 11.3 Collision resolution by chaining. Each hash-table slot T [ j ] contains a linked list of all the keys whose hash value is j . For example, h (k 1 ) = h (k4 ) and h (k 5 ) = h (k2 ) = h (k7 ). Collision resolution by chaining In chaining, we put all the elements that hash to the same slot in a linked list, as shown in Figure 11.3. Slot j contains a pointer to the head of the list of all stored elements that hash to j ; if there are no such elements, slot j contains NIL. The dictionary operations on a hash table T are easy to implement when collisions are resolved by chaining. 226 Chapter 11 Hash Tables C HAINED -H ASH -I NSERT (T , x ) insert x at the head of list T [h (key[x ])] C HAINED -H ASH -S EARCH (T , k ) search for an element with key k in list T [h (k )] C HAINED -H ASH -D ELETE (T , x ) delete x from the list T [h (key[x ])] The worst-case running time for insertion is O (1). 
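These one-line procedures translate directly into code. The sketch below uses ordinary Python lists as the per-slot chains rather than the doubly linked lists of Section 10.2, so deletion here scans the chain instead of running in O(1); the hash function h is supplied as a parameter.

class ChainedHashTable:
    """Hash table with collisions resolved by chaining.
    Each slot holds a Python list that plays the role of the chain."""
    def __init__(self, m, h):
        self.h = h                           # hash function mapping keys into 0 .. m-1
        self.table = [[] for _ in range(m)]

    def insert(self, x):
        # CHAINED-HASH-INSERT: place x at the head of its chain, O(1);
        # assumes an element with key x.key is not already present
        self.table[self.h(x.key)].insert(0, x)

    def search(self, k):
        # CHAINED-HASH-SEARCH: scan the chain for an element with key k
        for x in self.table[self.h(k)]:
            if x.key == k:
                return x
        return None

    def delete(self, x):
        # CHAINED-HASH-DELETE: remove object x from its chain
        # (O(1) with the text's doubly linked lists; O(chain length) here)
        self.table[self.h(x.key)].remove(x)

For instance, ChainedHashTable(9, lambda k: k % 9) matches the setting of Exercise 11.2-2 below.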
The insertion procedure is fast in part because it assumes that the element x being inserted is not already present in the table; this assumption can be checked if necessary (at additional cost) by performing a search before insertion. For searching, the worst-case running time is proportional to the length of the list; we shall analyze this operation more closely below. Deletion of an element x can be accomplished in O (1) time if the lists are doubly linked. (Note that C HAINED -H ASH -D ELETE takes as input an element x and not its key k , so we don’t have to search for x first. If the lists were singly linked, it would not be of great help to take as input the element x rather than the key k . We would still have to find x in the list T [h (key[x ])], so that the next link of x ’s predecessor could be properly set to splice x out. In this case, deletion and searching would have essentially the same running time.) Analysis of hashing with chaining How well does hashing with chaining perform? In particular, how long does it take to search for an element with a given key? Given a hash table T with m slots that stores n elements, we define the load factor α for T as n / m , that is, the average number of elements stored in a chain. Our analysis will be in terms of α , which can be less than, equal to, or greater than 1. The worst-case behavior of hashing with chaining is terrible: all n keys hash to the same slot, creating a list of length n . The worst-case time for searching is thus (n ) plus the time to compute the hash function—no better than if we used one linked list for all the elements. Clearly, hash tables are not used for their worst-case performance. (Perfect hashing, described in Section 11.5, does however provide good worst-case performance when the set of keys is static.) The average performance of hashing depends on how well the hash function h distributes the set of keys to be stored among the m slots, on the average. Section 11.3 discusses these issues, but for now we shall assume that any given element is equally likely to hash into any of the m slots, independently of where any other element has hashed to. We call this the assumption of simple uniform hashing. 11.2 Hash tables 227 For j = 0, 1, . . . , m − 1, let us denote the length of the list T [ j ] by n j , so that n = n 0 + n 1 + · · · + n m −1 , (11.1) and the average value of n j is E [n j ] = α = n / m . We assume that the hash value h (k ) can be computed in O (1) time, so that the time required to search for an element with key k depends linearly on the length n h (k ) of the list T [h (k )]. Setting aside the O (1) time required to compute the hash function and to access slot h (k ), let us consider the expected number of elements examined by the search algorithm, that is, the number of elements in the list T [h (k )] that are checked to see if their keys are equal to k . We shall consider two cases. In the first, the search is unsuccessful: no element in the table has key k . In the second, the search successfully finds an element with key k . Theorem 11.1 In a hash table in which collisions are resolved by chaining, an unsuccessful search takes expected time (1 + α), under the assumption of simple uniform hashing. Proof Under the assumption of simple uniform hashing, any key k not already stored in the table is equally likely to hash to any of the m slots. 
The expected time to search unsuccessfully for a key k is the expected time to search to the end of list T[h(k)], which has expected length E[n_{h(k)}] = α. Thus, the expected number of elements examined in an unsuccessful search is α, and the total time required (including the time for computing h(k)) is Θ(1 + α). The situation for a successful search is slightly different, since each list is not equally likely to be searched. Instead, the probability that a list is searched is proportional to the number of elements it contains. Nonetheless, the expected search time is still Θ(1 + α). Theorem 11.2 In a hash table in which collisions are resolved by chaining, a successful search takes time Θ(1 + α), on the average, under the assumption of simple uniform hashing. Proof We assume that the element being searched for is equally likely to be any of the n elements stored in the table. The number of elements examined during a successful search for an element x is 1 more than the number of elements that appear before x in x's list. Elements before x in the list were all inserted after x was inserted, because new elements are placed at the front of the list. To find the expected number of elements examined, we take the average, over the n elements x in the table, of 1 plus the expected number of elements added to x's list after x was added to the list. Let x_i denote the i-th element inserted into the table, for i = 1, 2, ..., n, and let k_i = key[x_i]. For keys k_i and k_j, we define the indicator random variable X_{ij} = I{h(k_i) = h(k_j)}. Under the assumption of simple uniform hashing, we have Pr{h(k_i) = h(k_j)} = 1/m, and so by Lemma 5.1, E[X_{ij}] = 1/m. Thus, the expected number of elements examined in a successful search is
\[
\begin{aligned}
\mathrm{E}\!\left[\frac{1}{n}\sum_{i=1}^{n}\left(1+\sum_{j=i+1}^{n}X_{ij}\right)\right]
 &= \frac{1}{n}\sum_{i=1}^{n}\left(1+\sum_{j=i+1}^{n}\mathrm{E}[X_{ij}]\right) && \text{(by linearity of expectation)} \\
 &= \frac{1}{n}\sum_{i=1}^{n}\left(1+\sum_{j=i+1}^{n}\frac{1}{m}\right) \\
 &= 1+\frac{1}{nm}\sum_{i=1}^{n}(n-i) \\
 &= 1+\frac{1}{nm}\left(\sum_{i=1}^{n}n-\sum_{i=1}^{n}i\right) \\
 &= 1+\frac{1}{nm}\left(n^{2}-\frac{n(n+1)}{2}\right) && \text{(by equation (A.1))} \\
 &= 1+\frac{n-1}{2m} \\
 &= 1+\frac{\alpha}{2}-\frac{\alpha}{2n}\;.
\end{aligned}
\]
Thus, the total time required for a successful search (including the time for computing the hash function) is Θ(2 + α/2 − α/2n) = Θ(1 + α). What does this analysis mean? If the number of hash-table slots is at least proportional to the number of elements in the table, we have n = O(m) and, consequently, α = n/m = O(m)/m = O(1). Thus, searching takes constant time on average. Since insertion takes O(1) worst-case time and deletion takes O(1) worst-case time when the lists are doubly linked, all dictionary operations can be supported in O(1) time on average. Exercises 11.2-1 Suppose we use a hash function h to hash n distinct keys into an array T of length m. Assuming simple uniform hashing, what is the expected number of collisions? More precisely, what is the expected cardinality of {{k, l} : k ≠ l and h(k) = h(l)}? 11.2-2 Demonstrate the insertion of the keys 5, 28, 19, 15, 20, 33, 12, 17, 10 into a hash table with collisions resolved by chaining. Let the table have 9 slots, and let the hash function be h(k) = k mod 9. 11.2-3 Professor Marley hypothesizes that substantial performance gains can be obtained if we modify the chaining scheme so that each list is kept in sorted order. How does the professor's modification affect the running time for successful searches, unsuccessful searches, insertions, and deletions?
11.2-4 Suggest how storage for elements can be allocated and deallocated within the hash table itself by linking all unused slots into a free list. Assume that one slot can store a flag and either one element plus a pointer or two pointers. All dictionary and free-list operations should run in O (1) expected time. Does the free list need to be doubly linked, or does a singly linked free list suffice? 11.2-5 Show that if |U | > nm , there is a subset of U of size n consisting of keys that all hash to the same slot, so that the worst-case searching time for hashing with chaining is (n ). 11.3 Hash functions In this section, we discuss some issues regarding the design of good hash functions and then present three schemes for their creation. Two of the schemes, hashing by division and hashing by multiplication, are heuristic in nature, whereas the third scheme, universal hashing, uses randomization to provide provably good performance. What makes a good hash function? A good hash function satisfies (approximately) the assumption of simple uniform hashing: each key is equally likely to hash to any of the m slots, independently of where any other key has hashed to. Unfortunately, it is typically not possible to check this condition, since one rarely knows the probability distribution according to which the keys are drawn, and the keys may not be drawn independently. 230 Chapter 11 Hash Tables Occasionally we do know the distribution. For example, if the keys are known to be random real numbers k independently and uniformly distributed in the range 0 ≤ k < 1, the hash function satisfies the condition of simple uniform hashing. In practice, heuristic techniques can often be used to create a hash function that performs well. Qualitative information about distribution of keys may be useful in this design process. For example, consider a compiler’s symbol table, in which the keys are character strings representing identifiers in a program. Closely related symbols, such as pt and pts, often occur in the same program. A good hash function would minimize the chance that such variants hash to the same slot. A good approach is to derive the hash value in a way that is expected to be independent of any patterns that might exist in the data. For example, the “division method” (discussed in Section 11.3.1) computes the hash value as the remainder when the key is divided by a specified prime number. This method frequently gives good results, assuming that the prime number is chosen to be unrelated to any patterns in the distribution of keys. Finally, we note that some applications of hash functions might require stronger properties than are provided by simple uniform hashing. For example, we might want keys that are “close” in some sense to yield hash values that are far apart. (This property is especially desirable when we are using linear probing, defined in Section 11.4.) Universal hashing, described in Section 11.3.3, often provides the desired properties. Interpreting keys as natural numbers Most hash functions assume that the universe of keys is the set N = {0, 1, 2, . . . } of natural numbers. Thus, if the keys are not natural numbers, a way is found to interpret them as natural numbers. For example, a character string can be interpreted as an integer expressed in suitable radix notation. 
Thus, the identifier pt might be interpreted as the pair of decimal integers (112, 116), since p = 112 and t = 116 in the ASCII character set; then, expressed as a radix-128 integer, pt becomes (112 · 128) + 116 = 14452. It is usually straightforward in an application to devise some such method for interpreting each key as a (possibly large) natural number. In what follows, we assume that the keys are natural numbers. 11.3.1 The division method h (k ) = k m In the division method for creating hash functions, we map a key k into one of m slots by taking the remainder of k divided by m . That is, the hash function is h (k ) = k mod m . 11.3 Hash functions 231 For example, if the hash table has size m = 12 and the key is k = 100, then h (k ) = 4. Since it requires only a single division operation, hashing by division is quite fast. When using the division method, we usually avoid certain values of m . For example, m should not be a power of 2, since if m = 2 p , then h (k ) is just the p lowest-order bits of k . Unless it is known that all low-order p -bit patterns are equally likely, it is better to make the hash function depend on all the bits of the key. As Exercise 11.3-3 asks you to show, choosing m = 2 p − 1 when k is a character string interpreted in radix 2 p may be a poor choice, because permuting the characters of k does not change its hash value. A prime not too close to an exact power of 2 is often a good choice for m . For example, suppose we wish to allocate a hash table, with collisions resolved by chaining, to hold roughly n = 2000 character strings, where a character has 8 bits. We don’t mind examining an average of 3 elements in an unsuccessful search, so we allocate a hash table of size m = 701. The number 701 is chosen because it is a prime near 2000/3 but not near any power of 2. Treating each key k as an integer, our hash function would be h (k ) = k mod 701 . 11.3.2 The multiplication method The multiplication method for creating hash functions operates in two steps. First, we multiply the key k by a constant A in the range 0 < A < 1 and extract the fractional part of k A. Then, we multiply this value by m and take the floor of the result. In short, the hash function is h (k ) = m (k A mod 1) , where “k A mod 1” means the fractional part of k A, that is, k A − k A . An advantage of the multiplication method is that the value of m is not critical. We typically choose it to be a power of 2 (m = 2 p for some integer p ) since we can then easily implement the function on most computers as follows. Suppose that the word size of the machine is w bits and that k fits into a single word. We restrict A to be a fraction of the form s /2w , where s is an integer in the range 0 < s < 2 w . Referring to Figure 11.4, we first multiply k by the w -bit integer s = A · 2 w . The result is a 2w -bit value r 1 2w + r0 , where r1 is the high-order word of the product and r0 is the low-order word of the product. The desired p -bit hash value consists of the p most significant bits of r 0 . Although this method works with any value of the constant A, it works better with some values than with others. The optimal choice depends on the characteristics of the data being hashed. Knuth [185] suggests that 232 Chapter 11 Hash Tables PSfrag replacements w bits k × s = A · 2w r0 extract p bits h (k ) r1 Figure 11.4 The multiplication method of hashing. The w-bit representation of the key k is multiplied by the w-bit value s = A · 2w . 
The p highest-order bits of the lower w-bit half of the product form the desired hash value h (k ). √ A ≈ ( 5 − 1)/2 = 0.6180339887 . . . (11.2) is likely to work reasonably well. As an example, suppose we have k = 123456, p = 14, m = 2 14 = 16384, and w = 32. Adapting Knuth’s√ suggestion, we choose A to be the fraction of the form s /232 that is closest to ( 5 − 1)/2, so that A = 2654435769/232 . Then k · s = 327706022297664 = (76300 · 232 ) + 17612864, and so r 1 = 76300 and r0 = 17612864. The 14 most significant bits of r 0 yield the value h (k ) = 67. 11.3.3 Universal hashing If a malicious adversary chooses the keys to be hashed by some fixed hash function, then he can choose n keys that all hash to the same slot, yielding an average retrieval time of (n ). Any fixed hash function is vulnerable to such terrible worstcase behavior; the only effective way to improve the situation is to choose the hash function randomly in a way that is independent of the keys that are actually going to be stored. This approach, called universal hashing, can yield provably good performance on average, no matter what keys are chosen by the adversary. The main idea behind universal hashing is to select the hash function at random from a carefully designed class of functions at the beginning of execution. As in the case of quicksort, randomization guarantees that no single input will always evoke worst-case behavior. Because of the randomization, the algorithm can behave differently on each execution, even for the same input, guaranteeing good average-case performance for any input. Returning to the example of a compiler’s symbol table, we find that the programmer’s choice of identifiers cannot now cause consistently poor hashing performance. Poor performance occurs only when the compiler chooses a random hash function that causes the set of identifiers to hash 11.3 Hash functions 233 poorly, but the probability of this situation occurring is small and is the same for any set of identifiers of the same size. Let H be a finite collection of hash functions that map a given universe U of keys into the range {0, 1, . . . , m − 1}. Such a collection is said to be universal if for each pair of distinct keys k , l ∈ U , the number of hash functions h ∈ H for which h (k ) = h (l ) is at most |H | / m . In other words, with a hash function randomly chosen from H , the chance of a collision between distinct keys k and l is no more than the chance 1/ m of a collision if h (k ) and h (l ) were randomly and independently chosen from the set {0, 1, . . . , m − 1}. The following theorem shows that a universal class of hash functions gives good average-case behavior. Recall that n i denotes the length of list T [i ]. Theorem 11.3 Suppose that a hash function h is chosen from a universal collection of hash functions and is used to hash n keys into a table T of size m , using chaining to resolve collisions. If key k is not in the table, then the expected length E [n h (k ) ] of the list that key k hashes to is at most α . If key k is in the table, then the expected length E [n h (k ) ] of the list containing key k is at most 1 + α . Proof We note that the expectations here are over the choice of the hash function, and do not depend on any assumptions about the distribution of the keys. For each pair k and l of distinct keys, define the indicator random variable X kl = I {h (k ) = h (l )}. 
Since by definition, a single pair of keys collides with probability at most 1/ m , we have Pr {h (k ) = h (l )} ≤ 1/ m , and so Lemma 5.1 implies that E [ X kl ] ≤ 1/ m . Next we define, for each key k , the random variable Y k that equals the number of keys other than k that hash to the same slot as k , so that Yk = X kl . l ∈T l =k Thus we have E [Yk ] = E = ≤ X kl l ∈T l =k E [ X kl ] 1 . m (by linearity of expectation) l ∈T l =k l ∈T l =k The remainder of the proof depends on whether key k is in table T . 234 Chapter 11 Hash Tables • If k ∈ T , then n h (k ) = Yk and |{l : l ∈ T and l = k }| = n . Thus E [n h (k ) ] = E [Yk ] ≤ n / m = α . If k ∈ T , then because key k appears in list T [h (k )] and the count Y k does not include key k , we have n h (k ) = Yk + 1 and |{l : l ∈ T and l = k }| = n − 1. Thus E [n h (k ) ] = E [Yk ] + 1 ≤ (n − 1)/ m + 1 = 1 + α − 1/ m < 1 + α . • The following corollary says universal hashing provides the desired payoff: it is now impossible for an adversary to pick a sequence of operations that forces the worst-case running time. By cleverly randomizing the choice of hash function at run time, we guarantee that every sequence of operations can be handled with good expected running time. Corollary 11.4 Using universal hashing and collision resolution by chaining in a table with m slots, it takes expected time (n ) to handle any sequence of n I NSERT, S EARCH and D ELETE operations containing O (m ) I NSERT operations. Proof Since the number of insertions is O (m ), we have n = O (m ) and so α = O (1). The I NSERT and D ELETE operations take constant time and, by Theorem 11.3, the expected time for each S EARCH operation is O (1). By linearity of expectation, therefore, the expected time for the entire sequence of operations is O (n ). Designing a universal class of hash functions It is quite easy to design a universal class of hash functions, as a little number theory will help us prove. You may wish to consult Chapter 31 first if you are unfamiliar with number theory. We begin by choosing a prime number p large enough so that every possible key k is in the range 0 to p − 1, inclusive. Let Z p denote the set {0, 1, . . . , p − 1}, and let Z∗ denote the set {1, 2, . . . , p − 1}. Since p is prime, we can solve equap tions modulo p with the methods given in Chapter 31. Because we assume that the size of the universe of keys is greater than the number of slots in the hash table, we hav p > m . We now define the hash function h a ,b for any a ∈ Z∗ and any b ∈ Z p using a p linear transformation followed by reductions modulo p and then modulo m : h a ,b (k ) = ((ak + b) mod p ) mod m . (11.3) For example, with p = 17 and m = 6, we have h 3,4 (8) = 5. The family of all such hash functions is H p,m = h a ,b : a ∈ Z∗ and b ∈ Z p . p (11.4) 11.3 Hash functions 235 Each hash function h a ,b maps Z p to Zm . This class of hash functions has the nice property that the size m of the output range is arbitrary—not necessarily prime—a feature which we shall use in Section 11.5. Since there are p − 1 choices for a and there are p choices for b, there are p ( p − 1) hash functions in H p,m . Theorem 11.5 The class H p,m of hash functions defined by equations (11.3) and (11.4) is universal. Proof Consider two distinct keys k and l from Z p , so that k = l . For a given hash function h a ,b we let r = (ak + b) mod p , s = (al + b) mod p . r − s ≡ a (k − l ) (mod p ) . We first note that r = s . Why? 
Observe that It follows that r = s because p is prime and both a and (k − l ) are nonzero modulo p , and so their product must also be nonzero modulo p by Theorem 31.6. Therefore, during the computation of any h a ,b in H p,m , distinct inputs k and l map to distinct values r and s modulo p ; there are no collisions yet at the “mod p level.” Moreover, each of the possible p ( p − 1) choices for the pair (a , b) with a = 0 yields a different resulting pair (r, s ) with r = s , since we can solve for a and b given r and s : a = ((r − s )((k − l )−1 mod p )) mod p , b = (r − ak ) mod p , where ((k − l )−1 mod p ) denotes the unique multiplicative inverse, modulo p , of k − l . Since there are only p ( p − 1) possible pairs (r, s ) with r = s , there is a one-to-one correspondence between pairs (a , b) with a = 0 and pairs (r, s ) with r = s . Thus, for any given pair of inputs k and l , if we pick (a , b) uniformly at random from Z∗ × Z p , the resulting pair (r, s ) is equally likely to be any pair of p distinct values modulo p . It then follows that the probability that distinct keys k and l collide is equal to the probability that r ≡ s (mod m ) when r and s are randomly chosen as distinct values modulo p . For a given value of r , of the p − 1 possible remaining values for s , the number of values s such that s = r and s ≡ r (mod m ) is at most p / m − 1 ≤ (( p + m − 1)/ m ) − 1 (by inequality (3.7)) = ( p − 1)/ m . The probability that s collides with r when reduced modulo m is at most (( p − 1)/ m )/( p − 1) = 1/ m . 236 Chapter 11 Hash Tables Pr {h a ,b (k ) = h a ,b (l )} ≤ 1/ m , Therefore, for any pair of distinct values k , l ∈ Z p , so that H p,m is indeed universal. Exercises 11.3-1 Suppose we wish to search a linked list of length n , where each element contains a key k along with a hash value h (k ). Each key is a long character string. How might we take advantage of the hash values when searching the list for an element with a given key? 11.3-2 Suppose that a string of r characters is hashed into m slots by treating it as a radix-128 number and then using the division method. The number m is easily represented as a 32-bit computer word, but the string of r characters, treated as a radix-128 number, takes many words. How can we apply the division method to compute the hash value of the character string without using more than a constant number of words of storage outside the string itself? 11.3-3 Consider a version of the division method in which h (k ) = k mod m , where m = 2 p − 1 and k is a character string interpreted in radix 2 p . Show that if string x can be derived from string y by permuting its characters, then x and y hash to the same value. Give an example of an application in which this property would be undesirable in a hash function. 11.3-4 Consider a hash table of size m = 1000 and a corresponding hash function h (k ) = √ m (k A mod 1) for A = ( 5 − 1)/2. Compute the locations to which the keys 61, 62, 63, 64, and 65 are mapped. 11.3-5 Define a family H of hash functions from a finite set U to a finite set B to be -universal if for all pairs of distinct elements k and l in U , Pr {h (k ) = h (l )} ≤ , where the probability is taken over the drawing of hash function h at random from the family H . Show that an -universal family of hash functions must have 1 1 ≥ − . | B | |U | 11.4 Open addressing 237 11.3-6 Let U be the set of n -tuples of values drawn from Z p , and let B = Z p , where p is prime. 
Define the hash function h b : U → B for b ∈ Z p on an input n -tuple a0 , a1 , . . . , an−1 from U as h b ( a0 , a1 , . . . , an−1 ) = n −1 j =0 ajbj and let H = {h b : b ∈ Z p }. Argue that H is ((n − 1)/ p )-universal according to the definition of -universal in Exercise 11.3-5. (Hint: See Exercise 31.4-4.) 11.4 Open addressing In open addressing, all elements are stored in the hash table itself. That is, each table entry contains either an element of the dynamic set or NIL. When searching for an element, we systematically examine table slots until the desired element is found or it is clear that the element is not in the table. There are no lists and no elements stored outside the table, as there are in chaining. Thus, in open addressing, the hash table can “fill up” so that no further insertions can be made; the load factor α can never exceed 1. Of course, we could store the linked lists for chaining inside the hash table, in the otherwise unused hash-table slots (see Exercise 11.2-4), but the advantage of open addressing is that it avoids pointers altogether. Instead of following pointers, we compute the sequence of slots to be examined. The extra memory freed by not storing pointers provides the hash table with a larger number of slots for the same amount of memory, potentially yielding fewer collisions and faster retrieval. To perform insertion using open addressing, we successively examine, or probe, the hash table until we find an empty slot in which to put the key. Instead of being fixed in the order 0, 1, . . . , m − 1 (which requires (n ) search time), the sequence of positions probed depends upon the key being inserted. To determine which slots to probe, we extend the hash function to include the probe number (starting from 0) as a second input. Thus, the hash function becomes h : U × {0, 1, . . . , m − 1} → {0, 1, . . . , m − 1} . With open addressing, we require that for every key k , the probe sequence h (k , 0), h (k , 1), . . . , h (k , m − 1) be a permutation of 0, 1, . . . , m − 1 , so that every hash-table position is eventually considered as a slot for a new key as the table fills up. In the following pseudocode, 238 Chapter 11 Hash Tables we assume that the elements in the hash table T are keys with no satellite information; the key k is identical to the element containing key k . Each slot contains either a key or NIL (if the slot is empty). H ASH -I NSERT (T , k ) 1 i ←0 2 repeat j ← h (k , i ) 3 if T [ j ] = NIL 4 then T [ j ] ← k 5 return j 6 else i ← i + 1 7 until i = m 8 error “hash table overflow” The algorithm for searching for key k probes the same sequence of slots that the insertion algorithm examined when key k was inserted. Therefore, the search can terminate (unsuccessfully) when it finds an empty slot, since k would have been inserted there and not later in its probe sequence. (This argument assumes that keys are not deleted from the hash table.) The procedure H ASH -S EARCH takes as input a hash table T and a key k , returning j if slot j is found to contain key k , or NIL if key k is not present in table T . H ASH -S EARCH (T , k ) 1 i ←0 2 repeat j ← h (k , i ) 3 if T [ j ] = k 4 then return j 5 i ←i +1 6 until T [ j ] = NIL or i = m 7 return NIL Deletion from an open-address hash table is difficult. When we delete a key from slot i , we cannot simply mark that slot as empty by storing NIL in it. Doing so might make it impossible to retrieve any key k during whose insertion we had probed slot i and found it occupied. 
One solution is to mark the slot by storing in it the special value DELETED instead of NIL. We would then modify the procedure HASH-INSERT to treat such a slot as if it were empty, so that a new key can be inserted. No modification of HASH-SEARCH is needed, since it will pass over DELETED values while searching. When we use the special value DELETED, however, search times are no longer dependent on the load factor α, and for this reason chaining is more commonly selected as a collision resolution technique when keys must be deleted.

In our analysis, we make the assumption of uniform hashing: we assume that each key is equally likely to have any of the m! permutations of 0, 1, . . . , m − 1 as its probe sequence. Uniform hashing generalizes the notion of simple uniform hashing defined earlier to the situation in which the hash function produces not just a single number, but a whole probe sequence. True uniform hashing is difficult to implement, however, and in practice suitable approximations (such as double hashing, defined below) are used.

Three techniques are commonly used to compute the probe sequences required for open addressing: linear probing, quadratic probing, and double hashing. These techniques all guarantee that h(k, 0), h(k, 1), . . . , h(k, m − 1) is a permutation of 0, 1, . . . , m − 1 for each key k. None of these techniques fulfills the assumption of uniform hashing, however, since none of them is capable of generating more than m^2 different probe sequences (instead of the m! that uniform hashing requires). Double hashing has the greatest number of probe sequences and, as one might expect, seems to give the best results.

Linear probing

Given an ordinary hash function h′ : U → {0, 1, . . . , m − 1}, which we refer to as an auxiliary hash function, the method of linear probing uses the hash function

h(k, i) = (h′(k) + i) mod m

for i = 0, 1, . . . , m − 1. Given key k, the first slot probed is T[h′(k)], i.e., the slot given by the auxiliary hash function. We next probe slot T[h′(k) + 1], and so on up to slot T[m − 1]. Then we wrap around to slots T[0], T[1], . . ., until we finally probe slot T[h′(k) − 1]. Because the initial probe determines the entire probe sequence, there are only m distinct probe sequences.

Linear probing is easy to implement, but it suffers from a problem known as primary clustering. Long runs of occupied slots build up, increasing the average search time. Clusters arise because an empty slot preceded by i full slots gets filled next with probability (i + 1)/m. Long runs of occupied slots tend to get longer, and the average search time increases.

Quadratic probing

Quadratic probing uses a hash function of the form

h(k, i) = (h′(k) + c1·i + c2·i^2) mod m ,   (11.5)

where h′ is an auxiliary hash function, c1 and c2 ≠ 0 are auxiliary constants, and i = 0, 1, . . . , m − 1. The initial position probed is T[h′(k)]; later positions probed are offset by amounts that depend in a quadratic manner on the probe number i. This method works much better than linear probing, but to make full use of the hash table, the values of c1, c2, and m are constrained.

Figure 11.5 Insertion by double hashing. Here we have a hash table of size 13 with h1(k) = k mod 13 and h2(k) = 1 + (k mod 11). Since 14 ≡ 1 (mod 13) and 14 ≡ 3 (mod 11), the key 14 is inserted into empty slot 9, after slots 1 and 5 are examined and found to be occupied.
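To make these probe-sequence formulas concrete, here is a minimal Python sketch of the three generators; double hashing, used in Figure 11.5, is defined formally just below. The auxiliary hash functions h_aux, h1, and h2 and the constants c1 and c2 are assumptions supplied by the caller.

# A minimal sketch of probe-sequence generators for open addressing. Each
# returns a function h(k, i) of the kind used by HASH-INSERT and HASH-SEARCH.

def linear_probe(h_aux, m):
    return lambda k, i: (h_aux(k) + i) % m

def quadratic_probe(h_aux, m, c1, c2):
    return lambda k, i: (h_aux(k) + c1 * i + c2 * i * i) % m

def double_hash_probe(h1, h2, m):
    return lambda k, i: (h1(k) + i * h2(k)) % m

# Reproducing Figure 11.5: m = 13, h1(k) = k mod 13, h2(k) = 1 + (k mod 11).
h = double_hash_probe(lambda k: k % 13, lambda k: 1 + k % 11, 13)
print([h(14, i) for i in range(3)])   # prints [1, 5, 9]

The printed probe sequence matches the figure: slots 1 and 5 are occupied, so key 14 lands in slot 9. Note that for linear and quadratic probing the whole sequence is determined by the initial probe h_aux(k), which is why only m distinct sequences arise.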
Problem 11-3 shows one way to select these parameters. Also, if two keys have the same initial probe position, then their probe sequences are the same, since h (k 1 , 0) = h (k2 , 0) implies h (k1 , i ) = h (k2 , i ). This property leads to a milder form of clustering, called secondary clustering. As in linear probing, the initial probe determines the entire sequence, so only m distinct probe sequences are used. Double hashing Double hashing is one of the best methods available for open addressing because the permutations produced have many of the characteristics of randomly chosen permutations. Double hashing uses a hash function of the form h (k , i ) = (h 1 (k ) + ih 2 (k )) mod m , where h 1 and h 2 are auxiliary hash functions. The initial probe is to position T [h 1 (k )]; successive probe positions are offset from previous positions by the amount h 2 (k ), modulo m . Thus, unlike the case of linear or quadratic probing, the probe sequence here depends in two ways upon the key k , since the initial probe position, the offset, or both, may vary. Figure 11.5 gives an example of insertion by double hashing. The value h 2 (k ) must be relatively prime to the hash-table size m for the entire hash table to be searched. (See Exercise 11.4-3.) A convenient way to ensure this 11.4 Open addressing 241 condition is to let m be a power of 2 and to design h 2 so that it always produces an odd number. Another way is to let m be prime and to design h 2 so that it always returns a positive integer less than m . For example, we could choose m prime and let h 1 (k ) = k mod m , h 2 (k ) = 1 + (k mod m ) , where m is chosen to be slightly less than m (say, m − 1). For example, if k = 123456, m = 701, and m = 700, we have h 1 (k ) = 80 and h 2 (k ) = 257, so the first probe is to position 80, and then every 257th slot (modulo m ) is examined until the key is found or every slot is examined. Double hashing improves over linear or quadratic probing in that (m 2 ) probe sequences are used, rather than (m ), since each possible (h 1 (k ), h 2 (k )) pair yields a distinct probe sequence. As a result, the performance of double hashing appears to be very close to the performance of the “ideal” scheme of uniform hashing. Analysis of open-address hashing Our analysis of open addressing, like our analysis of chaining, is expressed in terms of the load factor α = n / m of the hash table, as n and m go to infinity. Of course, with open addressing, we have at most one element per slot, and thus n ≤ m , which implies α ≤ 1. We assume that uniform hashing is used. In this idealized scheme, the probe sequence h (k , 0), h (k , 1), . . . , h (k , m − 1) used to insert or search for each key k is equally likely to be any permutation of 0, 1, . . . , m − 1 . Of course, a given key has a unique fixed probe sequence associated with it; what is meant here is that, considering the probability distribution on the space of keys and the operation of the hash function on the keys, each possible probe sequence is equally likely. We now analyze the expected number of probes for hashing with open addressing under the assumption of uniform hashing, beginning with an analysis of the number of probes made in an unsuccessful search. Theorem 11.6 Given an open-address hash table with load factor α = n / m < 1, the expected number of probes in an unsuccessful search is at most 1/(1 − α), assuming uniform hashing. 
Proof In an unsuccessful search, every probe but the last accesses an occupied slot that does not contain the desired key, and the last slot probed is empty. Let us define the random variable X to be the number of probes made in an unsuccessful search, and let us also define the event A i , for i = 1, 2, . . ., to be the event that 242 Chapter 11 Hash Tables there is an i th probe and it is to an occupied slot. Then the event { X ≥ i } is the intersection of events A1 ∩ A2 ∩ · · · ∩ Ai −1 . We will bound Pr { X ≥ i } by bounding Pr { A1 ∩ A2 ∩ · · · ∩ Ai −1 }. By Exercise C.2-6, Pr { A1 ∩ A2 ∩ · · · ∩ Ai −1 } = Pr { A1 } · Pr { A2 | A1 } · Pr { A3 | A1 ∩ A2 } · · · Pr { Ai −1 | A1 ∩ A2 ∩ · · · ∩ Ai −2 } . Since there are n elements and m slots, Pr { A1 } = n / m . For j > 1, the probability that there is a j th probe and it is to an occupied slot, given that the first j − 1 probes were to occupied slots, is (n − j + 1)/(m − j + 1). This probability follows because we would be finding one of the remaining (n − ( j − 1)) elements in one of the (m − ( j − 1)) unexamined slots, and by the assumption of uniform hashing, the probability is the ratio of these quantities. Observing that n < m implies that (n − j )/(m − j ) ≤ n / m for all j such that 0 ≤ j < m , we have for all i such that 1 ≤ i ≤ m, Pr { X ≥ i } = n−i +2 n n−1 n−2 · · ··· m m−1 m−2 m −i +2 n i −1 ≤ m i −1 =α . ∞ i =1 ∞ i =1 ∞ i =0 Now we use equation (C.24) to bound the expected number of probes: E [X] = ≤ = = Pr { X ≥ i } α i −1 αi 1 . 1−α The above bound of 1+α +α 2 +α 3 +· · · has an intuitive interpretation. One probe is always made. With probability approximately α , the first probe finds an occupied slot so that a second probe is necessary. With probability approximately α 2 , the first two slots are occupied so that a third probe is necessary, and so on. If α is a constant, Theorem 11.6 predicts that an unsuccessful search runs in O (1) time. For example, if the hash table is half full, the average number of probes in an unsuccessful search is at most 1/(1 − .5) = 2. If it is 90 percent full, the average number of probes is at most 1/(1 − .9) = 10. 11.4 Open addressing 243 Theorem 11.6 gives us the performance of the H ASH -I NSERT procedure almost immediately. Corollary 11.7 Inserting an element into an open-address hash table with load factor α requires at most 1/(1 − α) probes on average, assuming uniform hashing. Proof An element is inserted only if there is room in the table, and thus α < 1. Inserting a key requires an unsuccessful search followed by placement of the key in the first empty slot found. Thus, the expected number of probes is at most 1/(1 − α). Computing the expected number of probes for a successful search requires a little more work. Theorem 11.8 Given an open-address hash table with load factor α < 1, the expected number of probes in a successful search is at most 1 1 ln , α 1−α assuming uniform hashing and assuming that each key in the table is equally likely to be searched for. Proof A search for a key k follows the same probe sequence as was followed when the element with key k was inserted. By Corollary 11.7, if k was the (i + 1)st key inserted into the hash table, the expected number of probes made in a search for k is at most 1/(1 − i / m ) = m /(m − i ). 
Averaging over all n keys in the hash table gives us the average number of probes in a successful search: 1 n n −1 i =0 m m −i = = m n n −1 i =0 1 ( Hm − Hm −n ) , α 1 m −i where Hi = ij =1 1/ j is the i th harmonic number (as defined in equation (A.7)). Using the technique of bounding a summation by an integral, as described in Section A.2, we obtain 244 Chapter 11 Hash Tables 1 ( Hm − Hm −n ) = α ≤ = = 1 α m 1/ k k =m −n +1 m for a bound on the expected number of probes in a successful search. If the hash table is half full, the expected number of probes in a successful search is less than 1.387. If the hash table is 90 percent full, the expected number of probes is less than 2.559. Exercises 11.4-1 Consider inserting the keys 10, 22, 31, 4, 15, 28, 17, 88, 59 into a hash table of length m = 11 using open addressing with the primary hash function h (k ) = k mod m . Illustrate the result of inserting these keys using linear probing, using quadratic probing with c1 = 1 and c2 = 3, and using double hashing with h 2 (k ) = 1 + (k mod (m − 1)). 11.4-2 Write pseudocode for H ASH -D ELETE as outlined in the text, and modify H ASH I NSERT to handle the special value DELETED. 11.4-3 Suppose that we use double hashing to resolve collisions; that is, we use the hash function h (k , i ) = (h 1 (k ) + ih 2 (k )) mod m . Show that if m and h 2 (k ) have greatest common divisor d ≥ 1 for some key k , then an unsuccessful search for key k examines (1/d )th of the hash table before returning to slot h 1 (k ). Thus, when d = 1, so that m and h 2 (k ) are relatively prime, the search may examine the entire hash table. (Hint: See Chapter 31.) 11.4-4 Consider an open-address hash table with uniform hashing. Give upper bounds on the expected number of probes in an unsuccessful search and on the expected number of probes in a successful search when the load factor is 3/4 and when it is 7/8. 1 (1/x ) dx α m −n m 1 ln α m−n 1 1 ln α 1−α (by inequality (A.12)) 11.5 Perfect hashing 245 11.4-5 Consider an open-address hash table with a load factor α . Find the nonzero value α for which the expected number of probes in an unsuccessful search equals twice the expected number of probes in a successful search. Use the upper bounds given by Theorems 11.6 and 11.8 for these expected numbers of probes. 11.5 Perfect hashing Although hashing is most often used for its excellent expected performance, hashing can be used to obtain excellent worst-case performance when the set of keys is static: once the keys are stored in the table, the set of keys never changes. Some applications naturally have static sets of keys: consider the set of reserved words in a programming language, or the set of file names on a CD-ROM. We call a hashing technique perfect hashing if the worst-case number of memory accesses required to perform a search is O (1). The basic idea to create a perfect hashing scheme is simple. We use a two-level hashing scheme with universal hashing at each level. Figure 11.6 illustrates the approach. The first level is essentially the same as for hashing with chaining: the n keys are hashed into m slots using a hash function h carefully selected from a family of universal hash functions. Instead of making a list of the keys hashing to slot j , however, we use a small secondary hash table S j with an associated hash function h j . By choosing the hash functions h j carefully, we can guarantee that there are no collisions at the secondary level. 
In order to guarantee that there are no collisions at the secondary level, however, we will need to let the size m j of hash table S j be the square of the number n j of keys hashing to slot j . While having such a quadratic dependence of m j on n j may seem likely to cause the overall storage requirements to be excessive, we shall show that by choosing the first level hash function well, the expected total amount of space used is still O (n ). We use hash functions chosen from the universal classes of hash functions of Section 11.3.3. The first-level hash function is chosen from the class H p,m , where as in Section 11.3.3, p is a prime number greater than any key value. Those keys 246 Chapter 11 Hash Tables T 0 1 2 3 4 5 6 7 8 S m0 a0 b0 0 1 0 0 10 S2 m2 a2 b2 0 4 10 18 60 75 0 1 2 3 m5 a5 b5 1 0 0 70 m7 a7 b7 0 9 23 88 40 0 1 2 S5 S7 37 3 4 5 6 7 22 8 Figure 11.6 Using perfect hashing to store the set K = {10, 22, 37, 40, 60, 70, 75}. The outer hash function is h (k ) = ((ak + b) mod p) mod m , where a = 3, b = 42, p = 101, and m = 9. For example, h (75) = 2, so key 75 hashes to slot 2 of table T . A secondary hash table S j stores all keys hashing to slot j . The size of hash table S j is m j , and the associated hash function is h j (k ) = ((a j k + b j ) mod p) mod m j . Since h 2 (75) = 1, key 75 is stored in slot 1 of secondary hash table S2 . There are no collisions in any of the secondary hash tables, and so searching takes constant time in the worst case. hashing to slot j are re-hashed into a secondary hash table S j of size m j using a hash function h j chosen from the class H p,m j .1 We shall proceed in two steps. First, we shall determine how to ensure that the secondary tables have no collisions. Second, we shall show that the expected amount of memory used overall—for the primary hash table and all the secondary hash tables—is O (n ). Theorem 11.9 If we store n keys in a hash table of size m = n 2 using a hash function h randomly chosen from a universal class of hash functions, then the probability of there being any collisions is less than 1/2. Proof There are n pairs of keys that may collide; each pair collides with prob2 ability 1/ m if h is chosen at random from a universal family H of hash functions. Let X be a random variable that counts the number of collisions. When m = n 2 , the expected number of collisions is 1 When n = m = 1, we don’t really need a hash function for slot j ; when we choose a hash j j function h a ,b (k ) = ((ak + b) mod p) mod m j for such a slot, we just use a = b = 0. 11.5 Perfect hashing 247 E [X] = n 1 ·2 n 2 2 n −n 1 = ·2 2 n < 1/2 . (Note that this analysis is similar to the analysis of the birthday paradox in Section 5.4.1.) Applying Markov’s inequality (C.29), Pr { X ≥ t } ≤ E [ X ] / t , with t = 1 completes the proof. In the situation described in Theorem 11.9, where m = n 2 , it follows that a hash function h chosen at random from H is more likely than not to have no collisions. Given the set K of n keys to be hashed (remember that K is static), it is thus easy to find a collision-free hash function h with a few random trials. When n is large, however, a hash table of size m = n 2 is excessive. Therefore, we adopt the two-level hashing approach, and we use the approach of Theorem 11.9 only to hash the entries within each slot. An outer, or first-level, hash function h is used to hash the keys into m = n slots. 
Then, if n j keys hash to slot j , a secondary hash table S j of size m j = n 2 is used to provide collision-free constantj time lookup. We now turn to the issue of ensuring that the overall memory used is O (n ). Since the size m j of the j th secondary hash table grows quadratically with the number n j of keys stored, there is a risk that the overall amount of storage could be excessive. If the first-level table size is m = n , then the amount of memory used is O (n ) for the primary hash table, for the storage of the sizes m j of the secondary hash tables, and for the storage of the parameters a j and b j defining the secondary hash functions h j drawn from the class H p,m j of Section 11.3.3 (except when n j = 1 and we use a = b = 0). The following theorem and a corollary provide a bound on the expected combined sizes of all the secondary hash tables. A second corollary bounds the probability that the combined size of all the secondary hash tables is superlinear. Theorem 11.10 If we store n keys in a hash table of size m = n using a hash function h randomly chosen from a universal class of hash functions, then E m −1 j =0 n 2 < 2n , j where n j is the number of keys hashing to slot j . Proof We start with the following identity, which holds for any nonnegative integer a : 248 Chapter 11 Hash Tables a2 = a + 2 We have E m −1 j =0 a . 2 (11.6) n2 j =E =E m −1 j =0 nj + 2 nj 2 m −1 j =0 (by equation (11.6)) nj 2 (by linearity of expectation) (by equation (11.1)) (since n is not a random variable) . m −1 j =0 nj + 2E m −1 j =0 = E [n ] + 2 E = n + 2E m −1 j =0 nj 2 nj 2 − To evaluate the summation m=01 n2j , we observe that it is just the total number j of collisions. By the properties of universal hashing, the expected value of this summation is at most n (n − 1) n−1 n1 = = , 2m 2 2m since m = n . Thus, E m −1 j =0 n2 j ≤ n+2 n−1 2 = 2n − 1 < 2n . Corollary 11.11 If we store n keys in a hash table of size m = n using a hash function h randomly chosen from a universal class of hash functions and we set the size of each secondary hash table to m j = n 2 for j = 0, 1, . . . , m − 1, then the expected amount j of storage required for all secondary hash tables in a perfect hashing scheme is less than 2n . Proof Since m j = n 2 for j = 0, 1, . . . , m − 1, Theorem 11.10 gives j Problems for Chapter 11 m −1 j =0 m −1 j =0 249 E mj =E n 2 < 2n , j (11.7) which completes the proof. Corollary 11.12 If we store n keys in a hash table of size m = n using a hash function h randomly chosen from a universal class of hash functions and we set the size of each secondary hash table to m j = n 2 for j = 0, 1, . . . , m − 1, then the probability that the j total storage used for secondary hash tables exceeds 4n is less than 1/2. Proof Again we apply Markov’s inequality (C.29), Pr { X ≥ t } ≤ E [ X ] / t , this − time to inequality (11.7), with X = m=01 m j and t = 4n : j Pr m −1 j =0 m j ≥ 4n ≤ < E 4n m −1 j =0 mj 2n 4n = 1/2 . From Corollary 11.12, we see that testing a few randomly chosen hash functions from the universal family will quickly yield one that uses a reasonable amount of storage. Exercises 11.5-1 Suppose that we insert n keys into a hash table of size m using open addressing and uniform hashing. Let p (n , m ) be the probability that no collisions occur. Show that −n (n −1)/2m p( . (Hint: See equation (3.11).) Argue that when n exceeds √ n, m) ≤ e m , the probability of avoiding collisions goes rapidly to zero. 
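The analysis above suggests a direct construction: draw first-level functions until the total secondary size is below 4n (Corollary 11.12), then draw each secondary function until its slots are collision-free (Theorem 11.9). Here is a minimal Python sketch along those lines; the prime p (which must exceed every key), the function names, and the use of Python's random module are assumptions of the sketch, and keys is assumed to be a nonempty collection of distinct nonnegative integers.

import random

# A minimal sketch of two-level (FKS-style) perfect hashing for a static key
# set. Both levels draw functions h(k) = ((a*k + b) mod p) mod m from the
# universal class H_{p,m} of Section 11.3.3.

def random_hash(p, m):
    a = random.randrange(1, p)        # a drawn from Z_p^*, b from Z_p
    b = random.randrange(0, p)
    return lambda k: ((a * k + b) % p) % m

def build_perfect_table(keys, p):
    n = len(keys)                     # assumes n >= 1 and all keys < p
    while True:
        h = random_hash(p, n)         # first-level function into m = n slots
        buckets = [[] for _ in range(n)]
        for k in keys:
            buckets[h(k)].append(k)
        if sum(len(b) ** 2 for b in buckets) >= 4 * n:
            continue                  # retry; succeeds with prob. > 1/2 (Cor. 11.12)
        secondary = []
        for bucket in buckets:
            mj = max(len(bucket) ** 2, 1)
            while True:               # collision-free with prob. > 1/2 (Thm. 11.9)
                hj = random_hash(p, mj)
                slots = [None] * mj
                ok = True
                for k in bucket:
                    j = hj(k)
                    if slots[j] is not None:
                        ok = False
                        break
                    slots[j] = k
                if ok:
                    secondary.append((hj, slots))
                    break
        return h, secondary

def lookup(table, k):
    h, secondary = table
    hj, slots = secondary[h(k)]
    return slots[hj(k)] == k

For the key set of Figure 11.6, build_perfect_table([10, 22, 37, 40, 60, 70, 75], 101) yields a table in which every lookup takes two table accesses, although the randomly drawn parameters a_j, b_j will generally differ from those shown in the figure.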
Problems 11-1 Longest-probe bound for hashing A hash table of size m is used to store n items, with n ≤ m /2. Open addressing is used for collision resolution. a. Assuming uniform hashing, show that for i = 1, 2, . . . , n , the probability that the i th insertion requires strictly more than k probes is at most 2 −k . 250 Chapter 11 Hash Tables b. Show that for i = 1, 2, . . . , n , the probability that the i th insertion requires more than 2 lg n probes is at most 1/ n 2 . Let the random variable X i denote the number of probes required by the i th insertion. You have shown in part (b) that Pr { X i > 2 lg n } ≤ 1/ n 2 . Let the random variable X = max1≤i ≤n X i denote the maximum number of probes required by any of the n insertions. c. Show that Pr { X > 2 lg n } ≤ 1/ n . d. Show that the expected length E [ X ] of the longest probe sequence is O (lg n ). 11-2 Slot-size bound for chaining Suppose that we have a hash table with n slots, with collisions resolved by chaining, and suppose that n keys are inserted into the table. Each key is equally likely to be hashed to each slot. Let M be the maximum number of keys in any slot after all the keys have been inserted. Your mission is to prove an O (lg n / lg lg n ) upper bound on E [ M ], the expected value of M . a. Argue that the probability Q k that exactly k keys hash to a particular slot is given by Qk = 1 n k 1 1− n n −k n . k b. Let Pk be the probability that M = k , that is, the probability that the slot containing the most keys contains k keys. Show that Pk ≤ n Q k . c. Use Stirling’s approximation, equation (3.17), to show that Q k < ek / k k . d. Show that there exists a constant c > 1 such that Q k0 < 1/ n 3 for k0 = c lg n / lg lg n . Conclude that Pk < 1/ n 2 for k ≥ k0 = c lg n / lg lg n . e. Argue that E [ M ] ≤ Pr M > c lg n lg lg n · n + Pr M ≤ c lg n lg lg n · c lg n . lg lg n Conclude that E [ M ] = O (lg n / lg lg n ). 11-3 Quadratic probing Suppose that we are given a key k to search for in a hash table with positions 0, 1, . . . , m − 1, and suppose that we have a hash function h mapping the key space into the set {0, 1, . . . , m − 1}. The search scheme is as follows. Problems for Chapter 11 251 2. Probe in position i for the desired key k . If you find it, or if this position is empty, terminate the search. 3. Set j ← ( j + 1) mod m and i ← (i + j ) mod m , and return to step 2. Assume that m is a power of 2. a. Show that this scheme is an instance of the general “quadratic probing” scheme by exhibiting the appropriate constants c 1 and c2 for equation (11.5). b. Prove that this algorithm examines every table position in the worst case. 11-4 k -universal hashing and authentication Let H = {h } be a class of hash functions in which each h maps the universe U of keys to {0, 1, . . . , m − 1}. We say that H is k-universal if, for every fixed sequence of k distinct keys x (1) , x (2) , . . . , x (k ) and for any h chosen at random from H , the sequence h (x (1) ), h (x (2) ), . . . , h (x (k ) ) is equally likely to be any of the m k sequences of length k with elements drawn from {0, 1, . . . , m − 1}. a. Show that if H is 2-universal, then it is universal. b. Let U be the set of n -tuples of values drawn from Z p , and let B = Z p , where p is prime. For any n -tuple a = a0 , a1 , . . . , an−1 of values from Z p and for any b ∈ Z p , define the hash function h a ,b : U → B on an input n -tuple x = x 0 , x 1 , . . . , x n−1 from U as h a ,b (x ) = n −1 j =0 1. Compute the value i ← h (k ), and set j ← 0. 
a j x j + b mod p and let H = {h a ,b }. Argue that H is 2-universal. c. Suppose that Alice and Bob agree secretly on a hash function h a ,b from a 2universal family H of hash functions. Later, Alice sends a message m to Bob over the Internet, where m ∈ U . She authenticates this message to Bob by also sending an authentication tag t = h a ,b (m ), and Bob checks that the pair (m , t ) he receives satisfies t = h a ,b (m ). Suppose that an adversary intercepts (m , t ) en route and tries to fool Bob by replacing the pair with a different pair (m , t ). Argue that the probability that the adversary succeeds in fooling Bob into accepting (m , t ) is at most 1/ p , no matter how much computing power the adversary has. 252 Chapter 11 Hash Tables Chapter notes Knuth [185] and Gonnet [126] are excellent references for the analysis of hashing algorithms. Knuth credits H. P. Luhn (1953) for inventing hash tables, along with the chaining method for resolving collisions. At about the same time, G. M. Amdahl originated the idea of open addressing. Carter and Wegman introduced the notion of universal classes of hash functions in 1979 [52]. Fredman, Koml´ s, and Szemer´ di [96] developed the perfect hashing scheme o e for static sets presented in Section 11.5. An extension of their method to dynamic sets, handling insertions and deletions in amortized expected time O (1), has been given by Dietzfelbinger et al. [73]. 12 Binary Search Trees Search trees are data structures that support many dynamic-set operations, including S EARCH, M INIMUM, M AXIMUM, P REDECESSOR, S UCCESSOR, I NSERT, and D ELETE. Thus, a search tree can be used both as a dictionary and as a priority queue. Basic operations on a binary search tree take time proportional to the height of the tree. For a complete binary tree with n nodes, such operations run in (lg n ) worst-case time. If the tree is a linear chain of n nodes, however, the same operations take (n ) worst-case time. We shall see in Section 12.4 that the expected height of a randomly built binary search tree is O (lg n ), so that basic dynamic-set operations on such a tree take (lg n ) time on average. In practice, we can’t always guarantee that binary search trees are built randomly, but there are variations of binary search trees whose worst-case performance on basic operations can be guaranteed to be good. Chapter 13 presents one such variation, red-black trees, which have height O (lg n ). Chapter 18 introduces B-trees, which are particularly good for maintaining databases on random-access, secondary (disk) storage. After presenting the basic properties of binary search trees, the following sections show how to walk a binary search tree to print its values in sorted order, how to search for a value in a binary search tree, how to find the minimum or maximum element, how to find the predecessor or successor of an element, and how to insert into or delete from a binary search tree. The basic mathematical properties of trees appear in Appendix B. 12.1 What is a binary search tree? A binary search tree is organized, as the name suggests, in a binary tree, as shown in Figure 12.1. Such a tree can be represented by a linked data structure in which each node is an object. In addition to a key field and satellite data, each node 254 Chapter 12 Binary Search Trees 5 3 2 5 7 8 2 3 7 5 5 8 (a) (b) Figure 12.1 Binary search trees. For any node x , the keys in the left subtree of x are at most key[x ], and the keys in the right subtree of x are at least key[x ]. 
Different binary search trees can represent the same set of values. The worst-case running time for most search-tree operations is proportional to the height of the tree. (a) A binary search tree on 6 nodes with height 2. (b) A less efficient binary search tree with height 4 that contains the same keys. contains fields left , right , and p that point to the nodes corresponding to its left child, its right child, and its parent, respectively. If a child or the parent is missing, the appropriate field contains the value NIL. The root node is the only node in the tree whose parent field is NIL. The keys in a binary search tree are always stored in such a way as to satisfy the binary-search-tree property: Let x be a node in a binary search tree. If y is a node in the left subtree of x , then key[ y ] ≤ key[x ]. If y is a node in the right subtree of x , then key[x ] ≤ key[ y ]. Thus, in Figure 12.1(a), the key of the root is 5, the keys 2, 3, and 5 in its left subtree are no larger than 5, and the keys 7 and 8 in its right subtree are no smaller than 5. The same property holds for every node in the tree. For example, the key 3 in Figure 12.1(a) is no smaller than the key 2 in its left subtree and no larger than the key 5 in its right subtree. The binary-search-tree property allows us to print out all the keys in a binary search tree in sorted order by a simple recursive algorithm, called an inorder tree walk. This algorithm is so named because the key of the root of a subtree is printed between the values in its left subtree and those in its right subtree. (Similarly, a preorder tree walk prints the root before the values in either subtree, and a postorder tree walk prints the root after the values in its subtrees.) To use the following procedure to print all the elements in a binary search tree T , we call I NORDER -T REE -WALK (root [T ]). 12.1 What is a binary search tree? 255 I NORDER -T REE -WALK (x ) 1 if x = NIL 2 then I NORDER -T REE -WALK (left [x ]) 3 print key[x ] 4 I NORDER -T REE -WALK (right [x ]) As an example, the inorder tree walk prints the keys in each of the two binary search trees from Figure 12.1 in the order 2, 3, 5, 5, 7, 8. The correctness of the algorithm follows by induction directly from the binary-search-tree property. It takes (n ) time to walk an n -node binary search tree, since after the initial call, the procedure is called recursively exactly twice for each node in the tree—once for its left child and once for its right child. The following theorem gives a more formal proof that it takes linear time to perform an inorder tree walk. Theorem 12.1 If x is the root of an n -node subtree, then the call I NORDER -T REE -WALK (x ) takes (n ) time. Proof Let T (n ) denote the time taken by I NORDER -T REE -WALK when it is called on the root of an n -node subtree. I NORDER -T REE -WALK takes a small, constant amount of time on an empty subtree (for the test x = NIL ), and so T (0) = c for some positive constant c. For n > 0, suppose that I NORDER -T REE -WALK is called on a node x whose left subtree has k nodes and whose right subtree has n − k − 1 nodes. The time to perform I NORDER -T REE -WALK (x ) is T (n ) = T (k ) + T (n − k − 1) + d for some positive constant d that reflects the time to execute I NORDER -T REE -WALK (x ), exclusive of the time spent in recursive calls. We use the substitution method to show that T (n ) = (n ) by proving that T (n ) = (c + d )n + c. For n = 0, we have (c + d ) · 0 + c = c = T (0). 
For n > 0, we have T (n ) = = = = T (k ) + T (n − k − 1) + d ((c + d )k + c) + ((c + d )(n − k − 1) + c) + d (c + d )n + c − (c + d ) + c + d (c + d )n + c , which completes the proof. 256 Chapter 12 Binary Search Trees Exercises 12.1-1 For the set of keys {1, 4, 5, 10, 16, 17, 21} , draw binary search trees of height 2, 3, 4, 5, and 6. 12.1-2 What is the difference between the binary-search-tree property and the min-heap property (see page 129)? Can the min-heap property be used to print out the keys of an n -node tree in sorted order in O (n ) time? Explain how or why not. 12.1-3 Give a nonrecursive algorithm that performs an inorder tree walk. (Hint: There is an easy solution that uses a stack as an auxiliary data structure and a more complicated but elegant solution that uses no stack but assumes that two pointers can be tested for equality.) 12.1-4 Give recursive algorithms that perform preorder and postorder tree walks in time on a tree of n nodes. (n ) 12.1-5 Argue that since sorting n elements takes (n lg n ) time in the worst case in the comparison model, any comparison-based algorithm for constructing a binary search tree from an arbitrary list of n elements takes (n lg n ) time in the worst case. 12.2 Querying a binary search tree A common operation performed on a binary search tree is searching for a key stored in the tree. Besides the S EARCH operation, binary search trees can support such queries as M INIMUM, M AXIMUM, S UCCESSOR, and P REDECESSOR. In this section, we shall examine these operations and show that each can be supported in time O (h ) on a binary search tree of height h . Searching We use the following procedure to search for a node with a given key in a binary search tree. Given a pointer to the root of the tree and a key k , T REE -S EARCH returns a pointer to a node with key k if one exists; otherwise, it returns NIL. 12.2 Querying a binary search tree 257 15 6 3 2 4 7 13 9 17 18 20 Figure 12.2 Queries on a binary search tree. To search for the key 13 in the tree, we follow the path 15 → 6 → 7 → 13 from the root. The minimum key in the tree is 2, which can be found by following left pointers from the root. The maximum key 20 is found by following right pointers from the root. The successor of the node with key 15 is the node with key 17, since it is the minimum key in the right subtree of 15. The node with key 13 has no right subtree, and thus its successor is its lowest ancestor whose left child is also an ancestor. In this case, the node with key 15 is its successor. T REE -S EARCH (x , k ) 1 if x = NIL or k = key[x ] 2 then return x 3 if k < key[x ] 4 then return T REE -S EARCH (left [x ], k ) 5 else return T REE -S EARCH (right [x ], k ) The procedure begins its search at the root and traces a path downward in the tree, as shown in Figure 12.2. For each node x it encounters, it compares the key k with key[x ]. If the two keys are equal, the search terminates. If k is smaller than key[x ], the search continues in the left subtree of x , since the binary-searchtree property implies that k could not be stored in the right subtree. Symmetrically, if k is larger than key[x ], the search continues in the right subtree. The nodes encountered during the recursion form a path downward from the root of the tree, and thus the running time of T REE -S EARCH is O (h ), where h is the height of the tree. The same procedure can be written iteratively by “unrolling” the recursion into a while loop. On most computers, this version is more efficient. 
I TERATIVE -T REE -S EARCH (x , k ) 1 while x = NIL and k = key[x ] 2 do if k < key[x ] 3 then x ← left[x ] 4 else x ← right [x ] 5 return x 258 Chapter 12 Binary Search Trees Minimum and maximum An element in a binary search tree whose key is a minimum can always be found by following left child pointers from the root until a NIL is encountered, as shown in Figure 12.2. The following procedure returns a pointer to the minimum element in the subtree rooted at a given node x . T REE -M INIMUM (x ) 1 while left [x ] = NIL 2 do x ← left [x ] 3 return x The binary-search-tree property guarantees that T REE -M INIMUM is correct. If a node x has no left subtree, then since every key in the right subtree of x is at least as large as key[x ], the minimum key in the subtree rooted at x is key[x ]. If node x has a left subtree, then since no key in the right subtree is smaller than key[x ] and every key in the left subtree is not larger than key[x ], the minimum key in the subtree rooted at x can be found in the subtree rooted at left [x ]. The pseudocode for T REE -M AXIMUM is symmetric. T REE -M AXIMUM (x ) 1 while right [x ] = NIL 2 do x ← right [x ] 3 return x Both of these procedures run in O (h ) time on a tree of height h since, as in T REE S EARCH, the sequence of nodes encountered forms a path downward from the root. Successor and predecessor Given a node in a binary search tree, it is sometimes important to be able to find its successor in the sorted order determined by an inorder tree walk. If all keys are distinct, the successor of a node x is the node with the smallest key greater than key[x ]. The structure of a binary search tree allows us to determine the successor of a node without ever comparing keys. The following procedure returns the successor of a node x in a binary search tree if it exists, and NIL if x has the largest key in the tree. 12.2 Querying a binary search tree 259 T REE -S UCCESSOR (x ) 1 if right [x ] = NIL 2 then return T REE -M INIMUM (right [x ]) 3 y ← p [x ] 4 while y = NIL and x = right [ y ] 5 do x ← y 6 y ← p[ y ] 7 return y The code for T REE -S UCCESSOR is broken into two cases. If the right subtree of node x is nonempty, then the successor of x is just the leftmost node in the right subtree, which is found in line 2 by calling T REE -M INIMUM (right [x ]). For example, the successor of the node with key 15 in Figure 12.2 is the node with key 17. On the other hand, as Exercise 12.2-6 asks you to show, if the right subtree of node x is empty and x has a successor y , then y is the lowest ancestor of x whose left child is also an ancestor of x . In Figure 12.2, the successor of the node with key 13 is the node with key 15. To find y , we simply go up the tree from x until we encounter a node that is the left child of its parent; this is accomplished by lines 3–7 of T REE -S UCCESSOR. The running time of T REE -S UCCESSOR on a tree of height h is O (h ), since we either follow a path up the tree or follow a path down the tree. The procedure T REE -P REDECESSOR, which is symmetric to T REE -S UCCESSOR, also runs in time O (h ). Even if keys are not distinct, we define the successor and predecessor of any node x as the node returned by calls made to T REE -S UCCESSOR (x ) and T REE P REDECESSOR(x ), respectively. In summary, we have proved the following theorem. Theorem 12.2 The dynamic-set operations S EARCH, M INIMUM, M AXIMUM, S UCCESSOR, and P REDECESSOR can be made to run in O (h ) time on a binary search tree of height h . 
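As a concrete companion to these procedures, here is a minimal Python sketch of the query operations on nodes carrying key, left, right, and p fields; the Node class and the use of None for NIL are assumptions of the sketch, not part of the text.

# A minimal sketch of the query procedures of Section 12.2.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.p = None               # parent pointer

def tree_search(x, k):
    # iterative form, as in ITERATIVE-TREE-SEARCH
    while x is not None and k != x.key:
        x = x.left if k < x.key else x.right
    return x

def tree_minimum(x):
    while x.left is not None:
        x = x.left
    return x

def tree_maximum(x):
    while x.right is not None:
        x = x.right
    return x

def tree_successor(x):
    if x.right is not None:
        return tree_minimum(x.right)
    y = x.p                          # climb until x is a left child
    while y is not None and x is y.right:
        x, y = y, y.p
    return y

On the tree of Figure 12.2, for example, tree_search follows the path 15 → 6 → 7 → 13 when searching for key 13, and tree_successor on the node with key 13 climbs to the node with key 15, as described above.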
Exercises 12.2-1 Suppose that we have numbers between 1 and 1000 in a binary search tree and want to search for the number 363. Which of the following sequences could not be the sequence of nodes examined? a. 2, 252, 401, 398, 330, 344, 397, 363. b. 924, 220, 911, 244, 898, 258, 362, 363. 260 Chapter 12 Binary Search Trees c. 925, 202, 911, 240, 912, 245, 363. d. 2, 399, 387, 219, 266, 382, 381, 278, 363. e. 935, 278, 347, 621, 299, 392, 358, 363. 12.2-2 Write recursive versions of the T REE -M INIMUM and T REE -M AXIMUM procedures. 12.2-3 Write the T REE -P REDECESSOR procedure. 12.2-4 Professor Bunyan thinks he has discovered a remarkable property of binary search trees. Suppose that the search for key k in a binary search tree ends up in a leaf. Consider three sets: A, the keys to the left of the search path; B , the keys on the search path; and C , the keys to the right of the search path. Professor Bunyan claims that any three keys a ∈ A, b ∈ B , and c ∈ C must satisfy a ≤ b ≤ c. Give a smallest possible counterexample to the professor’s claim. 12.2-5 Show that if a node in a binary search tree has two children, then its successor has no left child and its predecessor has no right child. 12.2-6 Consider a binary search tree T whose keys are distinct. Show that if the right subtree of a node x in T is empty and x has a successor y , then y is the lowest ancestor of x whose left child is also an ancestor of x . (Recall that every node is its own ancestor.) 12.2-7 An inorder tree walk of an n -node binary search tree can be implemented by finding the minimum element in the tree with T REE -M INIMUM and then making n − 1 calls to T REE -S UCCESSOR. Prove that this algorithm runs in (n ) time. 12.2-8 Prove that no matter what node we start at in a height-h binary search tree, k successive calls to T REE -S UCCESSOR take O (k + h ) time. 12.2-9 Let T be a binary search tree whose keys are distinct, let x be a leaf node, and let y be its parent. Show that key[ y ] is either the smallest key in T larger than key[x ] or the largest key in T smaller than key[x ]. 12.3 Insertion and deletion 261 12.3 Insertion and deletion The operations of insertion and deletion cause the dynamic set represented by a binary search tree to change. The data structure must be modified to reflect this change, but in such a way that the binary-search-tree property continues to hold. As we shall see, modifying the tree to insert a new element is relatively straightforward, but handling deletion is somewhat more intricate. Insertion To insert a new value v into a binary search tree T , we use the procedure T REE I NSERT. The procedure is passed a node z for which key[z ] = v , left [z ] = NIL , and right [z ] = NIL . It modifies T and some of the fields of z in such a way that z is inserted into an appropriate position in the tree. T REE -I NSERT (T , z ) 1 y ← NIL 2 x ← root [T ] 3 while x = NIL 4 do y ← x 5 if key[z ] < key[x ] 6 then x ← left [x ] 7 else x ← right [x ] 8 p [z ] ← y 9 if y = NIL 10 then root [T ] ← z 11 else if key[z ] < key[ y ] 12 then left [ y ] ← z 13 else right [ y ] ← z £ Tree T was empty Figure 12.3 shows how T REE -I NSERT works. Just like the procedures T REE S EARCH and I TERATIVE -T REE -S EARCH, T REE -I NSERT begins at the root of the tree and traces a path downward. The pointer x traces the path, and the pointer y is maintained as the parent of x . 
After initialization, the while loop in lines 3–7 causes these two pointers to move down the tree, going left or right depending on the comparison of key[z ] with key[x ], until x is set to NIL. This NIL occupies the position where we wish to place the input item z . Lines 8–13 set the pointers that cause z to be inserted. Like the other primitive operations on search trees, the procedure T REE -I NSERT runs in O (h ) time on a tree of height h . 262 Chapter 12 Binary Search Trees 12 5 2 9 13 15 17 18 19 Figure 12.3 Inserting an item with key 13 into a binary search tree. Lightly shaded nodes indicate the path from the root down to the position where the item is inserted. The dashed line indicates the link in the tree that is added to insert the item. Deletion The procedure for deleting a given node z from a binary search tree takes as an argument a pointer to z . The procedure considers the three cases shown in Figure 12.4. If z has no children, we modify its parent p [z ] to replace z with NIL as its child. If the node has only a single child, we “splice out” z by making a new link between its child and its parent. Finally, if the node has two children, we splice out z ’s successor y , which has no left child (see Exercise 12.2-5) and replace z ’s key and satellite data with y ’s key and satellite data. The code for T REE -D ELETE organizes these three cases a little differently. T REE -D ELETE (T , z ) 1 if left [z ] = NIL or right [z ] = NIL 2 then y ← z 3 else y ← T REE -S UCCESSOR (z ) 4 if left [ y ] = NIL 5 then x ← left [ y ] 6 else x ← right [ y ] 7 if x = NIL 8 then p [x ] ← p [ y ] 9 if p [ y ] = NIL 10 then root [T ] ← x 11 else if y = left [ p [ y ]] 12 then left [ p [ y ]] ← x 13 else right [ p [ y ]] ← x 14 if y = z 15 then key[z ] ← key[ y ] 16 copy y ’s satellite data into z 17 return y 12.3 Insertion and deletion 263 15 5 3 10 6 7 (a) 15 5 3 10 6 7 (b) 15 z5 3 10 y6 7 (c) 12 13 18 16 20 23 7 3 10 y6 z5 12 13 18 16 z 20 23 6 7 3 10 5 12 13 z 18 16 20 23 6 7 3 10 5 15 16 12 18 20 23 15 20 12 13 18 23 15 16 12 13 18 20 23 7 3 10 6 15 16 12 13 18 20 23 Figure 12.4 Deleting a node z from a binary search tree. Which node is actually removed depends on how many children z has; this node is shown lightly shaded. (a) If z has no children, we just remove it. (b) If z has only one child, we splice out z . (c) If z has two children, we splice out its successor y , which has at most one child, and then replace z ’s key and satellite data with y ’s key and satellite data. In lines 1–3, the algorithm determines a node y to splice out. The node y is either the input node z (if z has at most 1 child) or the successor of z (if z has two children). Then, in lines 4–6, x is set to the non-NIL child of y , or to NIL if y has no children. The node y is spliced out in lines 7–13 by modifying pointers in p [ y ] and x . Splicing out y is somewhat complicated by the need for proper handling of the boundary conditions, which occur when x = NIL or when y is the root. Finally, in lines 14–16, if the successor of z was the node spliced out, y ’s key and satellite data are moved to z , overwriting the previous key and satellite data. The node y is returned in line 17 so that the calling procedure can recycle it via the free list. The procedure runs in O (h ) time on a tree of height h . 264 Chapter 12 Binary Search Trees In summary, we have proved the following theorem. Theorem 12.3 The dynamic-set operations I NSERT and D ELETE can be made to run in O (h ) time on a binary search tree of height h . 
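Continuing the sketch given after Theorem 12.2, here are TREE-INSERT and TREE-DELETE in the same style; the BST class holding the root pointer is an illustrative assumption, tree_successor is the function sketched earlier, and satellite data is ignored.

# A minimal sketch of TREE-INSERT and TREE-DELETE; None plays the role of NIL.

class BST:
    def __init__(self):
        self.root = None

def tree_insert(T, z):
    y, x = None, T.root
    while x is not None:              # walk down to find z's parent
        y = x
        x = x.left if z.key < x.key else x.right
    z.p = y
    if y is None:
        T.root = z                    # tree T was empty
    elif z.key < y.key:
        y.left = z
    else:
        y.right = z

def tree_delete(T, z):
    # y is the node actually spliced out: z itself, or z's successor
    y = z if z.left is None or z.right is None else tree_successor(z)
    x = y.left if y.left is not None else y.right
    if x is not None:
        x.p = y.p
    if y.p is None:
        T.root = x
    elif y is y.p.left:
        y.p.left = x
    else:
        y.p.right = x
    if y is not z:
        z.key = y.key                 # move y's key into z (satellite data would move likewise)
    return y

Calling tree_insert with nodes keyed 12, 5, 2, 9, 18, 15, 17, 19 and then 13 should reproduce the insertion of Figure 12.3, with 13 ending up as the left child of 15.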
Exercises 12.3-1 Give a recursive version of the T REE -I NSERT procedure. 12.3-2 Suppose that a binary search tree is constructed by repeatedly inserting distinct values into the tree. Argue that the number of nodes examined in searching for a value in the tree is one plus the number of nodes examined when the value was first inserted into the tree. 12.3-3 We can sort a given set of n numbers by first building a binary search tree containing these numbers (using T REE -I NSERT repeatedly to insert the numbers one by one) and then printing the numbers by an inorder tree walk. What are the worstcase and best-case running times for this sorting algorithm? 12.3-4 Suppose that another data structure contains a pointer to a node y in a binary search tree, and suppose that y ’s predecessor z is deleted from the tree by the procedure T REE -D ELETE . What problem can arise? How can T REE -D ELETE be rewritten to solve this problem? 12.3-5 Is the operation of deletion “commutative” in the sense that deleting x and then y from a binary search tree leaves the same tree as deleting y and then x ? Argue why it is or give a counterexample. 12.3-6 When node z in T REE -D ELETE has two children, we could splice out its predecessor rather than its successor. Some have argued that a fair strategy, giving equal priority to predecessor and successor, yields better empirical performance. How might T REE -D ELETE be changed to implement such a fair strategy? 12.4 Randomly built binary search trees 265 12.4 Randomly built binary search trees We have shown that all the basic operations on a binary search tree run in O (h ) time, where h is the height of the tree. The height of a binary search tree varies, however, as items are inserted and deleted. If, for example, the items are inserted in strictly increasing order, the tree will be a chain with height n − 1. On the other hand, Exercise B.5-4 shows that h ≥ lg n . As with quicksort, we can show that the behavior of the average case is much closer to the best case than the worst case. Unfortunately, little is known about the average height of a binary search tree when both insertion and deletion are used to create it. When the tree is created by insertion alone, the analysis becomes more tractable. Let us therefore define a randomly built binary search tree on n keys as one that arises from inserting the keys in random order into an initially empty tree, where each of the n ! permutations of the input keys is equally likely. (Exercise 12.4-3 asks you to show that this notion is different from assuming that every binary search tree on n keys is equally likely.) In this section, we shall show that the expected height of a randomly built binary search tree on n keys is O (lg n ). We assume that all keys are distinct. We start by defining three random variables that help measure the height of a randomly built binary search tree. We denote the height of a randomly built binary search on n keys by X n , and we define the exponential height Y n = 2 X n . When we build a binary search tree on n keys, we choose one key as that of the root, and we let Rn denote the random variable that holds this key’s rank within the set of n keys. The value of Rn is equally likely to be any element of the set {1, 2, . . . , n }. If Rn = i , then the left subtree of the root is a randomly built binary search tree on i − 1 keys, and the right subtree is a randomly built binary search tree on n − i keys. 
Because the height of a binary tree is one more than the larger of the heights of the two subtrees of the root, the exponential height of a binary tree is twice the larger of the exponential heights of the two subtrees of the root. If we know that Rn = i , we therefore have that Yn = 2 · max(Yi −1 , Yn−i ) . As base cases, we have Y1 = 1, because the exponential height of a tree with 1 node is 20 = 1 and, for convenience, we define Y0 = 0. Next we define indicator random variables Z n,1 , Z n,2 , . . . , Z n,n , where Because Rn is equally likely to be any element of {1, 2, . . . , n }, we have that Pr { Rn = i } = 1/ n for i = 1, 2, . . . , n , and hence, by Lemma 5.1, (12.1) Z n,i = I { Rn = i } . for i = 1, 2, . . . , n . Because exactly one value of Z n,i is 1 and all others are 0, we also have E [ Z n,i ] = 1/ n , 266 Chapter 12 n Binary Search Trees Yn = We will show that E [Yn ] is polynomial in n , which will ultimately imply that E [ X n ] = O (lg n ). The indicator random variable Z n,i = I { Rn = i } is independent of the values of Yi −1 and Yn−i . Having chosen Rn = i , the left subtree, whose exponential height is Yi −1 , is randomly built on the i − 1 keys whose ranks are less than i . This subtree is just like any other randomly built binary search tree on i − 1 keys. Other than the number of keys it contains, this subtree’s structure is not affected at all by the choice of Rn = i ; hence the random variables Yi −1 and Z n,i are independent. Likewise, the right subtree, whose exponential height is Y n−i , is randomly built on the n − i keys whose ranks are greater than i . Its structure is independent of the value of Rn , and so the random variables Yn−i and Z n,i are independent. Hence, n i =1 Z n,i (2 · max(Yi −1 , Yn−i )) . E [Yn ] = E n i =1 Z n,i (2 · max(Yi −1 , Yn−i )) (by linearity of expectation) = = = = ≤ i =1 n i =1 n i =1 E [ Z n,i (2 · max(Yi −1 , Yn−i ))] E [ Z n,i ] E [2 · max(Yi −1 , Yn−i )] (by independence) 1 · E [2 · max(Yi −1 , Yn−i )] n n (by equation (12.1)) (by equation (C.21)) (by Exercise C.3-4) . 2 n 2 n i =1 n i =1 E [max(Yi −1 , Yn−i )] (E [Yi −1 ] + E [Yn−i ]) Each term E [Y0 ] , E [Y1 ] , . . . , E [Yn−1 ] appears twice in the last summation, once as E [Yi −1 ] and once as E [Yn−i ], and so we have the recurrence E [Yn ] ≤ 4 n n −1 i =0 E [Yi ] . (12.2) Using the substitution method, we will show that for all positive integers n , the recurrence (12.2) has the solution E [Yn ] ≤ 1 n+3 . 4 3 In doing so, we will use the identity 12.4 n −1 i =0 Randomly built binary search trees 267 i +3 n+3 = . 3 4 (12.3) (Exercise 12.4-1 asks you to prove this identity.) For the base case, we verify that the bound 1 = Y 1 = E [Y1 ] ≤ 4 n 4 n 1 n 1 n 1 n 1 4 1 4 n −1 i =0 1 1+3 =1 3 4 holds. For the substitution, we have that E [Yn ] ≤ = = = = = = E [Yi ] 1 i +3 4 3 i +3 3 (by equation (12.3)) (by the inductive hypothesis) n −1 i =0 n −1 i =0 n+3 4 (n + 3)! · 4! (n − 1)! (n + 3)! · 3! n ! n+3 . 3 We have bounded E [Yn ], but our ultimate goal is to bound E [ X n ]. As Exercise 12.4-4 asks you to show, the function f (x ) = 2 x is convex (see page 1109). Therefore, we can apply Jensen’s inequality (C.25), which says that 2E[ X n ] ≤ E [2 X n ] = E [Yn ] , to derive that 1 n+3 2E[ X n ] ≤ 4 3 1 (n + 3)(n + 2)(n + 1) = · 4 6 n 3 + 6n 2 + 11n + 6 = . 24 Taking logarithms of both sides gives E [ X n ] = O (lg n ). 
Thus, we have proven the following: 268 Chapter 12 Binary Search Trees Theorem 12.4 The expected height of a randomly built binary search tree on n keys is O (lg n ). Exercises 12.4-1 Prove equation (12.3). 12.4-2 Describe a binary search tree on n nodes such that the average depth of a node in the tree is (lg n ) but the height of the tree is ω(lg n ). Give an asymptotic upper bound on the height of an n -node binary search tree in which the average depth of a node is (lg n ). 12.4-3 Show that the notion of a randomly chosen binary search tree on n keys, where each binary search tree of n keys is equally likely to be chosen, is different from the notion of a randomly built binary search tree given in this section. (Hint: List the possibilities when n = 3.) 12.4-4 Show that the function f (x ) = 2 x is convex. 12.4-5 Consider R ANDOMIZED -Q UICKSORT operating on a sequence of n input numbers. Prove that for any constant k > 0, all but O (1/ n k ) of the n ! input permutations yield an O (n lg n ) running time. Problems 12-1 Binary search trees with equal keys Equal keys pose a problem for the implementation of binary search trees. a. What is the asymptotic performance of T REE -I NSERT when used to insert n items with identical keys into an initially empty binary search tree? We propose to improve T REE -I NSERT by testing before line 5 whether or not key[z ] = key[x ] and by testing before line 11 whether or not key[z ] = key[ y ]. If equality holds, we implement one of the following strategies. For each strategy, find the asymptotic performance of inserting n items with identical keys into an initially empty binary search tree. (The strategies are described for line 5, in which Problems for Chapter 12 269 0 0 1 1 0 10 1 011 0 100 1 1 1011 Figure 12.5 A radix tree storing the bit strings 1011, 10, 011, 100, and 0. Each node’s key can be determined by traversing the path from the root to that node. There is no need, therefore, to store the keys in the nodes; the keys are shown here for illustrative purposes only. Nodes are heavily shaded if the keys corresponding to them are not in the tree; such nodes are present only to establish a path to other nodes. we compare the keys of z and x . Substitute y for x to arrive at the strategies for line 11.) b. Keep a boolean flag b[x ] at node x , and set x to either left [x ] or right [x ] based on the value of b[x ], which alternates between FALSE and TRUE each time x is visited during insertion of a node with the same key as x . c. Keep a list of nodes with equal keys at x , and insert z into the list. d. Randomly set x to either left [x ] or right [x ]. (Give the worst-case performance and informally derive the average-case performance.) 12-2 Radix trees Given two strings a = a0 a1 . . . a p and b = b0 b1 . . . bq , where each ai and each b j is in some ordered set of characters, we say that string a is lexicographically less than string b if either 1. there exists an integer j , where 0 ≤ j ≤ min( p , q ), such that a i = bi for all i = 0, 1, . . . , j − 1 and a j < b j , or 2. p < q and ai = bi for all i = 0, 1, . . . , p . For example, if a and b are bit strings, then 10100 < 10110 by rule 1 (letting j = 3) and 10100 < 101000 by rule 2. This is similar to the ordering used in English-language dictionaries. The radix tree data structure shown in Figure 12.5 stores the bit strings 1011, 10, 011, 100, and 0. When searching for a key a = a 0 a1 . . . 
a p , we go left at a node 270 Chapter 12 Binary Search Trees of depth i if ai = 0 and right if ai = 1. Let S be a set of distinct binary strings whose lengths sum to n . Show how to use a radix tree to sort S lexicographically in (n ) time. For the example in Figure 12.5, the output of the sort should be the sequence 0, 011, 10, 100, 1011. 12-3 Average node depth in a randomly built binary search tree In this problem, we prove that the average depth of a node in a randomly built binary search tree with n nodes is O (lg n ). Although this result is weaker than that of Theorem 12.4, the technique we shall use reveals a surprising similarity between the building of a binary search tree and the running of R ANDOMIZED -Q UICKSORT from Section 7.3. We define the total path length P (T ) of a binary tree T as the sum, over all nodes x in T , of the depth of node x , which we denote by d (x , T ). a. Argue that the average depth of a node in T is 1 n d (x , T ) = 1 P (T ) . n x ∈T Thus, we wish to show that the expected value of P (T ) is O (n lg n ). b. Let TL and TR denote the left and right subtrees of tree T , respectively. Argue that if T has n nodes, then P ( T ) = P ( TL ) + P ( T R ) + n − 1 . c. Let P (n ) denote the average total path length of a randomly built binary search tree with n nodes. Show that P (n ) = 1 n n −1 i =0 ( P (i ) + P (n − i − 1) + n − 1) . d. Show that P (n ) can be rewritten as P (n ) = 2 n n −1 k =1 P (k ) + (n ) . e. Recalling the alternative analysis of the randomized version of quicksort given in Problem 7-2, conclude that P (n ) = O (n lg n ). At each recursive invocation of quicksort, we choose a random pivot element to partition the set of elements being sorted. Each node of a binary search tree partitions the set of elements that fall into the subtree rooted at that node. Problems for Chapter 12 271 f. Describe an implementation of quicksort in which the comparisons to sort a set of elements are exactly the same as the comparisons to insert the elements into a binary search tree. (The order in which comparisons are made may differ, but the same comparisons must be made.) 12-4 Number of different binary trees Let bn denote the number of different binary trees with n nodes. In this problem, you will find a formula for bn , as well as an asymptotic estimate. a. Show that b0 = 1 and that, for n ≥ 1, bn = n −1 k =0 bk bn−1−k . b. Referring to Problem 4-5 for the definition of a generating function, let B (x ) be the generating function B (x ) = ∞ n =0 bn x n . Show that B (x ) = x B (x )2 + 1, and hence one way to express B (x ) in closed form is B (x ) = √ 1 1 − 1 − 4x . 2x f (k ) (a ) ( x − a )k , k! The Taylor expansion of f (x ) around the point x = a is given by f (x ) = ∞ k =0 where f (k ) (x ) is the k th derivative of f evaluated at x . c. Show that bn = 2n 1 n+1 n √ (the n th Catalan number) by using the Taylor expansion of 1 − 4x around x = 0. (If you wish, instead of using the Taylor expansion, you may use the generalization of the binomial expansion (C.4) to nonintegral exponents n , where for any real number n and for any integer k , we interpret n to be k n (n − 1) · · · (n − k + 1)/ k ! if k ≥ 0, and 0 otherwise.) 272 Chapter 12 Binary Search Trees d. Show that 4n bn = √ 3/2 (1 + O (1/ n )) . πn Chapter notes Knuth [185] contains a good discussion of simple binary search trees as well as many variations. Binary search trees seem to have been independently discovered by a number of people in the late 1950’s. 
Radix trees are often called tries, which comes from the middle letters in the word retrieval. They are also discussed by Knuth [185]. Section 15.5 will show how to construct an optimal binary search tree when search frequencies are known prior to constructing the tree. That is, given the frequencies of searching for each key and the frequencies of searching for values that fall between keys in the tree, we construct a binary search tree for which a set of searches that follows these frequencies examines the minimum number of nodes. The proof in Section 12.4 that bounds the expected height of a randomly built binary search tree is due to Aslam [23]. Mart´nez and Roura [211] give randomized ı algorithms for insertion into and deletion from binary search trees in which the result of either operation is a random binary search tree. Their definition of a random binary search tree differs slightly from that of a randomly built binary search tree in this chapter, however. 13 Red-Black Trees Chapter 12 showed that a binary search tree of height h can implement any of the basic dynamic-set operations—such as S EARCH, P REDECESSOR, S UCCESSOR, M INIMUM, M AXIMUM, I NSERT, and D ELETE—in O (h ) time. Thus, the set operations are fast if the height of the search tree is small; but if its height is large, their performance may be no better than with a linked list. Red-black trees are one of many search-tree schemes that are “balanced” in order to guarantee that basic dynamic-set operations take O (lg n ) time in the worst case. 13.1 Properties of red-black trees A red-black tree is a binary search tree with one extra bit of storage per node: its color, which can be either RED or BLACK. By constraining the way nodes can be colored on any path from the root to a leaf, red-black trees ensure that no such path is more than twice as long as any other, so that the tree is approximately balanced. Each node of the tree now contains the fields color , key, left , right , and p . If a child or the parent of a node does not exist, the corresponding pointer field of the node contains the value NIL. We shall regard these NIL’s as being pointers to external nodes (leaves) of the binary search tree and the normal, key-bearing nodes as being internal nodes of the tree. A binary search tree is a red-black tree if it satisfies the following red-black properties: 1. Every node is either red or black. 2. The root is black. 3. Every leaf (NIL) is black. 4. If a node is red, then both its children are black. 5. For each node, all paths from the node to descendant leaves contain the same number of black nodes. 274 Chapter 13 Red-Black Trees Figure 13.1(a) shows an example of a red-black tree. As a matter of convenience in dealing with boundary conditions in red-black tree code, we use a single sentinel to represent NIL (see page 206). For a red-black tree T , the sentinel nil[T ] is an object with the same fields as an ordinary node in the tree. Its color field is BLACK, and its other fields— p , left, right , and key—can be set to arbitrary values. As Figure 13.1(b) shows, all pointers to NIL are replaced by pointers to the sentinel nil[T ]. We use the sentinel so that we can treat a NIL child of a node x as an ordinary node whose parent is x . Although we instead could add a distinct sentinel node for each NIL in the tree, so that the parent of each NIL is well defined, that approach would waste space. Instead, we use the one sentinel nil[T ] to represent all the NIL’s—all leaves and the root’s parent. 
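To make the sentinel convention concrete, the following minimal Python sketch shows one way to represent nodes and the tree skeleton; the attribute names mirror the text's fields (x.left for left[x], x.p for p[x], and so on), while the class names Node and RBTree and the string-valued colors are illustrative choices rather than anything prescribed by the pseudocode.

RED = "RED"
BLACK = "BLACK"

class Node:
    def __init__(self, key, color=RED):
        self.key = key
        self.color = color
        # These are replaced by the tree's sentinel when the node is linked in.
        self.left = self.right = self.p = None

class RBTree:
    def __init__(self):
        # A single sentinel object stands in for every NIL leaf and for the
        # root's parent; it is always black.
        self.nil = Node(key=None, color=BLACK)
        self.root = self.nil

Because nil is an ordinary, always-black node, later procedures can read nil.color or even set nil.p without first testing for a missing child.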
The values of the fields p , left , right , and key of the sentinel are immaterial, although we may set them during the course of a procedure for our convenience. We generally confine our interest to the internal nodes of a red-black tree, since they hold the key values. In the remainder of this chapter, we omit the leaves when we draw red-black trees, as shown in Figure 13.1(c). We call the number of black nodes on any path from, but not including, a node x down to a leaf the black-height of the node, denoted bh(x ). By property 5, the notion of black-height is well defined, since all descending paths from the node have the same number of black nodes. We define the black-height of a red-black tree to be the black-height of its root. The following lemma shows why red-black trees make good search trees. Lemma 13.1 A red-black tree with n internal nodes has height at most 2 lg(n + 1). Proof We start by showing that the subtree rooted at any node x contains at least 2bh(x ) − 1 internal nodes. We prove this claim by induction on the height of x . If the height of x is 0, then x must be a leaf (nil[T ]), and the subtree rooted at x indeed contains at least 2bh(x ) − 1 = 20 − 1 = 0 internal nodes. For the inductive step, consider a node x that has positive height and is an internal node with two children. Each child has a black-height of either bh(x ) or bh(x ) − 1, depending on whether its color is red or black, respectively. Since the height of a child of x is less than the height of x itself, we can apply the inductive hypothesis to conclude that each child has at least 2bh(x )−1 − 1 internal nodes. Thus, the subtree rooted at x contains at least (2bh(x )−1 − 1) + (2bh(x )−1 − 1) + 1 = 2bh(x ) − 1 internal nodes, which proves the claim. To complete the proof of the lemma, let h be the height of the tree. According to property 4, at least half the nodes on any simple path from the root to a leaf, not 13.1 Properties of red-black trees 275 3 3 2 2 1 1 NIL 26 41 2 1 1 1 17 2 14 1 21 1 2 30 38 1 47 NIL 10 1 16 NIL 1 19 1 23 NIL 1 28 NIL NIL 7 NIL NIL 12 NIL 1 15 NIL NIL 20 NIL NIL NIL 35 NIL 39 NIL 3 NIL NIL NIL NIL NIL (a) 26 17 14 10 7 3 12 15 16 19 20 21 23 28 35 30 38 39 41 47 nil[T] (b) 26 17 14 10 7 3 12 15 16 19 20 (c) 21 23 28 35 30 38 39 41 47 Figure 13.1 A red-black tree with black nodes darkened and red nodes shaded. Every node in a red-black tree is either red or black, the children of a red node are both black, and every simple path from a node to a descendant leaf contains the same number of black nodes. (a) Every leaf, shown as a NIL, is black. Each non-NIL node is marked with its black-height; NIL’s have black-height 0. (b) The same red-black tree but with each NIL replaced by the single sentinel nil[T ], which is always black, and with black-heights omitted. The root’s parent is also the sentinel. (c) The same red-black tree but with leaves and the root’s parent omitted entirely. We shall use this drawing style in the remainder of this chapter. 276 Chapter 13 Red-Black Trees including the root, must be black. Consequently, the black-height of the root must be at least h /2; thus, Moving the 1 to the left-hand side and taking logarithms on both sides yields lg(n + 1) ≥ h /2, or h ≤ 2 lg(n + 1). 
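The bound in Lemma 13.1 is easy to check programmatically on any concrete tree. The sketch below, a companion to the classes above, computes the black-height of the root by walking down left children (representative only if property 5 already holds, so that assumption is explicit) and verifies that the height is at most twice the black-height, and hence at most 2 lg(n + 1).

def black_height(x, nil):
    # bh(x): black nodes on a path from (but not including) x down to a leaf,
    # counting the black NIL leaf.  Property 5 is assumed, so following left
    # children is as good as any other path.
    if x is nil:
        return 0
    bh = 1                      # the NIL leaf at the bottom is black
    x = x.left
    while x is not nil:
        if x.color == BLACK:
            bh += 1
        x = x.left
    return bh

def height(x, nil):
    # Height in edges; the height of a leaf (the sentinel) is 0, as in the proof.
    if x is nil:
        return 0
    return 1 + max(height(x.left, nil), height(x.right, nil))

def satisfies_lemma_13_1(T):
    # Lemma 13.1 in executable form: h <= 2 * bh(root) <= 2 lg(n + 1).
    return height(T.root, T.nil) <= 2 * black_height(T.root, T.nil)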
An immediate consequence of this lemma is that the dynamic-set operations S EARCH, M INIMUM, M AXIMUM, S UCCESSOR, and P REDECESSOR can be implemented in O (lg n ) time on red-black trees, since they can be made to run in O (h ) time on a search tree of height h (as shown in Chapter 12) and any red-black tree on n nodes is a search tree with height O (lg n ). (Of course, references to NIL in the algorithms of Chapter 12 would have to be replaced by nil[T ].) Although the algorithms T REE -I NSERT and T REE -D ELETE from Chapter 12 run in O (lg n ) time when given a red-black tree as input, they do not directly support the dynamic-set operations I NSERT and D ELETE, since they do not guarantee that the modified binary search tree will be a red-black tree. We shall see in Sections 13.3 and 13.4, however, that these two operations can indeed be supported in O (lg n ) time. Exercises 13.1-1 In the style of Figure 13.1(a), draw the complete binary search tree of height 3 on the keys {1, 2, . . . , 15}. Add the NIL leaves and color the nodes in three different ways such that the black-heights of the resulting red-black trees are 2, 3, and 4. 13.1-2 Draw the red-black tree that results after T REE -I NSERT is called on the tree in Figure 13.1 with key 36. If the inserted node is colored red, is the resulting tree a red-black tree? What if it is colored black? 13.1-3 Let us define a relaxed red-black tree as a binary search tree that satisfies redblack properties 1, 3, 4, and 5. In other words, the root may be either red or black. Consider a relaxed red-black tree T whose root is red. If we color the root of T black but make no other changes to T , is the resulting tree a red-black tree? 13.1-4 Suppose that we “absorb” every red node in a red-black tree into its black parent, so that the children of the red node become children of the black parent. (Ignore what happens to the keys.) What are the possible degrees of a black node after all its red children are absorbed? What can you say about the depths of the leaves of the resulting tree? n ≥ 2h /2 − 1 . 13.2 Rotations 277 13.1-5 Show that the longest simple path from a node x in a red-black tree to a descendant leaf has length at most twice that of the shortest simple path from node x to a descendant leaf. 13.1-6 What is the largest possible number of internal nodes in a red-black tree with blackheight k ? What is the smallest possible number? 13.1-7 Describe a red-black tree on n keys that realizes the largest possible ratio of red internal nodes to black internal nodes. What is this ratio? What tree has the smallest possible ratio, and what is the ratio? 13.2 Rotations The search-tree operations T REE -I NSERT and T REE -D ELETE , when run on a redblack tree with n keys, take O (lg n ) time. Because they modify the tree, the result may violate the red-black properties enumerated in Section 13.1. To restore these properties, we must change the colors of some of the nodes in the tree and also change the pointer structure. We change the pointer structure through rotation, which is a local operation in a search tree that preserves the binary-search-tree property. Figure 13.2 shows the two kinds of rotations: left rotations and right rotations. When we do a left rotation on a node x , we assume that its right child y is not nil[T ]; x may be any node in the tree whose right child is not nil[T ]. The left rotation “pivots” around the link from x to y . 
It makes y the new root of the subtree, with x as y ’s left child and y ’s left child as x ’s right child. The pseudocode for L EFT-ROTATE assumes that right [x ] = nil[T ] and that the root’s parent is nil[T ]. 278 Chapter 13 Red-Black Trees x LEFT-ROTATE(T, x) y x y α β γ RIGHT-ROTATE(T, y) γ β α Figure 13.2 The rotation operations on a binary search tree. The operation L EFT-ROTATE(T , x ) transforms the configuration of the two nodes on the left into the configuration on the right by changing a constant number of pointers. The configuration on the right can be transformed into the configuration on the left by the inverse operation R IGHT-ROTATE(T , y ). The letters α , β , and γ represent arbitrary subtrees. A rotation operation preserves the binary-search-tree property: the keys in α precede key[x ], which precedes the keys in β , which precede key[ y ], which precedes the keys in γ . L EFT-ROTATE (T , x ) 1 y ← right [x ] £ Set y . 2 right [x ] ← left [ y ] £ Turn y ’s left subtree into x ’s right subtree. 3 p [left [ y ]] ← x 4 p [ y ] ← p [x ] £ Link x ’s parent to y . 5 if p [x ] = nil[T ] 6 then root [T ] ← y 7 else if x = left [ p [x ]] 8 then left [ p [x ]] ← y 9 else right [ p [x ]] ← y 10 left [ y ] ← x £ Put x on y ’s left. 11 p [x ] ← y Figure 13.3 shows how L EFT-ROTATE operates. The code for R IGHT-ROTATE is symmetric. Both L EFT-ROTATE and R IGHT-ROTATE run in O (1) time. Only pointers are changed by a rotation; all other fields in a node remain the same. Exercises 13.2-1 Write pseudocode for R IGHT-ROTATE. 13.2-2 Argue that in every n -node binary search tree, there are exactly n − 1 possible rotations. 13.2 Rotations 279 7 4 3 2 LEFT-ROTATE(T, x) 12 6 9 14 17 20 11 x 18 y 19 22 7 4 3 2 6 9 12 x 11 14 17 20 18 y 19 22 Figure 13.3 An example of how the procedure L EFT-ROTATE(T , x ) modifies a binary search tree. Inorder tree walks of the input tree and the modified tree produce the same listing of key values. 13.2-3 Let a , b, and c be arbitrary nodes in subtrees α , β , and γ , respectively, in the left tree of Figure 13.2. How do the depths of a , b, and c change when a left rotation is performed on node x in the figure? 13.2-4 Show that any arbitrary n -node binary search tree can be transformed into any other arbitrary n -node binary search tree using O (n ) rotations. (Hint: First show that at most n − 1 right rotations suffice to transform the tree into a right-going chain.) 13.2-5 We say that a binary search tree T1 can be right-converted to binary search tree T2 if it is possible to obtain T2 from T1 via a series of calls to R IGHT-ROTATE. Give an example of two trees T1 and T2 such that T1 cannot be right-converted to T2 . Then show that if a tree T1 can be right-converted to T2 , it can be right-converted using O (n 2 ) calls to R IGHT-ROTATE. 280 Chapter 13 Red-Black Trees 13.3 Insertion Insertion of a node into an n -node red-black tree can be accomplished in O (lg n ) time. We use a slightly modified version of the T REE -I NSERT procedure (Section 12.3) to insert node z into the tree T as if it were an ordinary binary search tree, and then we color z red. To guarantee that the red-black properties are preserved, we then call an auxiliary procedure RB-I NSERT-F IXUP to recolor nodes and perform rotations. The call RB-I NSERT (T , z ) inserts node z , whose key field is assumed to have already been filled in, into the red-black tree T . 
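Before turning to the pseudocode for RB-INSERT, note that LEFT-ROTATE from the previous section translates into Python almost line for line under the node and sentinel conventions sketched earlier; the mirror-image right_rotate (the subject of Exercise 13.2-1) is obtained by exchanging "left" and "right" throughout. This is an illustrative sketch, not part of the text's own pseudocode.

def left_rotate(T, x):
    y = x.right                  # set y; assumes x.right is not T.nil
    x.right = y.left             # turn y's left subtree into x's right subtree
    y.left.p = x                 # harmless even when y.left is the sentinel
    y.p = x.p                    # link x's parent to y
    if x.p is T.nil:
        T.root = y
    elif x is x.p.left:
        x.p.left = y
    else:
        x.p.right = y
    y.left = x                   # put x on y's left
    x.p = y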
RB-I NSERT (T , z ) 1 y ← nil[T ] 2 x ← root [T ] 3 while x = nil[T ] 4 do y ← x 5 if key[z ] < key[x ] 6 then x ← left[x ] 7 else x ← right [x ] 8 p [z ] ← y 9 if y = nil[T ] 10 then root [T ] ← z 11 else if key[z ] < key[ y ] 12 then left [ y ] ← z 13 else right [ y ] ← z 14 left [z ] ← nil[T ] 15 right [z ] ← nil[T ] 16 color [z ] ← RED 17 RB-I NSERT-F IXUP (T , z ) There are four differences between the procedures T REE -I NSERT and RBI NSERT. First, all instances of NIL in T REE -I NSERT are replaced by nil[T ]. Second, we set left [z ] and right [z ] to nil[T ] in lines 14–15 of RB-I NSERT, in order to maintain the proper tree structure. Third, we color z red in line 16. Fourth, because coloring z red may cause a violation of one of the red-black properties, we call RB-I NSERT-F IXUP (T , z ) in line 17 of RB-I NSERT to restore the red-black properties. 13.3 Insertion 281 RB-I NSERT-F IXUP (T , z ) 1 while color [ p [z ]] = RED 2 do if p [z ] = left [ p [ p [z ]]] 3 then y ← right [ p [ p [z ]]] 4 if color [ y ] = RED 5 then color [ p [z ]] ← BLACK 6 color [ y ] ← BLACK 7 color [ p [ p [z ]]] ← RED 8 z ← p [ p [z ]] 9 else if z = right [ p [z ]] 10 then z ← p [z ] 11 L EFT-ROTATE (T , z ) 12 color [ p [z ]] ← BLACK 13 color [ p [ p [z ]]] ← RED 14 R IGHT-ROTATE (T , p [ p [z ]]) 15 else (same as then clause with “right” and “left” exchanged) 16 color [root [T ]] ← BLACK £ Case 1 £ Case 1 £ Case 1 £ Case 1 £ Case 2 £ Case 2 £ Case 3 £ Case 3 £ Case 3 To understand how RB-I NSERT-F IXUP works, we shall break our examination of the code into three major steps. First, we shall determine what violations of the red-black properties are introduced in RB-I NSERT when the node z is inserted and colored red. Second, we shall examine the overall goal of the while loop in lines 1–15. Finally, we shall explore each of the three cases 1 into which the while loop is broken and see how they accomplish the goal. Figure 13.4 shows how RB-I NSERT-F IXUP operates on a sample red-black tree. Which of the red-black properties can be violated upon the call to RB-I NSERTF IXUP? Property 1 certainly continues to hold, as does property 3, since both children of the newly inserted red node are the sentinel nil[T ]. Property 5, which says that the number of black nodes is the same on every path from a given node, is satisfied as well, because node z replaces the (black) sentinel, and node z is red with sentinel children. Thus, the only properties that might be violated are property 2, which requires the root to be black, and property 4, which says that a red node cannot have a red child. Both possible violations are due to z being colored red. Property 2 is violated if z is the root, and property 4 is violated if z ’s parent is red. Figure 13.4(a) shows a violation of property 4 after the node z has been inserted. The while loop in lines 1–15 maintains the following three-part invariant: 1 Case 2 falls through into case 3, and so these two cases are not mutually exclusive. 282 Chapter 13 Red-Black Trees 11 2 (a) 1 5 z 4 7 8y Case 1 14 15 11 2 (b) 1 5 4 7 z 8 Case 2 14 y 15 11 7 (c) 1 4 7 z (d) 1 4 2 5 8 11 14 15 z 2 5 8 Case 3 14 y 15 Figure 13.4 The operation of RB-I NSERT-F IXUP. (a) A node z after insertion. Since z and its parent p[z ] are both red, a violation of property 4 occurs. Since z ’s uncle y is red, case 1 in the code can be applied. Nodes are recolored and the pointer z is moved up the tree, resulting in the tree shown in (b). Once again, z and its parent are both red, but z ’s uncle y is black. 
Since z is the right child of p[z ], case 2 can be applied. A left rotation is performed, and the tree that results is shown in (c). Now z is the left child of its parent, and case 3 can be applied. A right rotation yields the tree in (d), which is a legal red-black tree. 13.3 Insertion 283 At the start of each iteration of the loop, a. Node z is red. b. If p [z ] is the root, then p [z ] is black. c. If there is a violation of the red-black properties, there is at most one violation, and it is of either property 2 or property 4. If there is a violation of property 2, it occurs because z is the root and is red. If there is a violation of property 4, it occurs because both z and p [z ] are red. Part (c), which deals with violations of red-black properties, is more central to showing that RB-I NSERT-F IXUP restores the red-black properties than parts (a) and (b), which we use along the way to understand situations in the code. Because we will be focusing on node z and nodes near it in the tree, it is helpful to know from part (a) that z is red. We shall use part (b) to show that the node p [ p [z ]] exists when we reference it in lines 2, 3, 7, 8, 13, and 14. Recall that we need to show that a loop invariant is true prior to the first iteration of the loop, that each iteration maintains the loop invariant, and that the loop invariant gives us a useful property at loop termination. We start with the initialization and termination arguments. Then, as we examine how the body of the loop works in more detail, we shall argue that the loop maintains the invariant upon each iteration. Along the way, we will also demonstrate that there are two possible outcomes of each iteration of the loop: the pointer z moves up the tree, or some rotations are performed and the loop terminates. Initialization: Prior to the first iteration of the loop, we started with a red-black tree with no violations, and we added a red node z . We show that each part of the invariant holds at the time RB-I NSERT-F IXUP is called: a. When RB-I NSERT-F IXUP is called, z is the red node that was added. b. If p [z ] is the root, then p [z ] started out black and did not change prior to the call of RB-I NSERT-F IXUP . c. We have already seen that properties 1, 3, and 5 hold when RB-I NSERTF IXUP is called. If there is a violation of property 2, then the red root must be the newly added node z , which is the only internal node in the tree. Because the parent and both children of z are the sentinel, which is black, there is not also a violation of property 4. Thus, this violation of property 2 is the only violation of redblack properties in the entire tree. If there is a violation of property 4, then because the children of node z are black sentinels and the tree had no other violations prior to z being added, the violation must be because both z and p [z ] are red. Moreover, there are no other violations of red-black properties. 284 Chapter 13 Red-Black Trees Termination: When the loop terminates, it does so because p [z ] is black. (If z is the root, then p [z ] is the sentinel nil[T ], which is black.) Thus, there is no violation of property 4 at loop termination. By the loop invariant, the only property that might fail to hold is property 2. Line 16 restores this property, too, so that when RB-I NSERT-F IXUP terminates, all the red-black properties hold. 
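When testing an implementation of the fixup, it is convenient to assert exactly the two properties that the invariant says are at stake at termination. The following minimal sketch, under the same node conventions as before, checks property 2 (a black root) and property 4 (no red node with a red child); it is a testing aid, not part of the procedure itself.

def no_red_red(x, nil):
    # Property 4 holds below x: no red node has a red child.
    if x is nil:
        return True
    if x.color == RED and (x.left.color == RED or x.right.color == RED):
        return False
    return no_red_red(x.left, nil) and no_red_red(x.right, nil)

def fixup_postcondition(T):
    # On return from RB-INSERT-FIXUP, both checks should succeed.
    return T.root.color == BLACK and no_red_red(T.root, T.nil)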
Maintenance: There are actually six cases to consider in the while loop, but three of them are symmetric to the other three, depending on whether z ’s parent p [z ] is a left child or a right child of z ’s grandparent p [ p [z ]], which is determined in line 2. We have given the code only for the situation in which p [z ] is a left child. The node p [ p [z ]] exists, since by part (b) of the loop invariant, if p [z ] is the root, then p [z ] is black. Since we enter a loop iteration only if p [z ] is red, we know that p [z ] cannot be the root. Hence, p [ p [z ]] exists. Case 1 is distinguished from cases 2 and 3 by the color of z ’s parent’s sibling, or “uncle.” Line 3 makes y point to z ’s uncle right [ p [ p [z ]]], and a test is made in line 4. If y is red, then case 1 is executed. Otherwise, control passes to cases 2 and 3. In all three cases, z ’s grandparent p [ p [z ]] is black, since its parent p [z ] is red, and property 4 is violated only between z and p [z ]. Case 1: z’s uncle y is red Figure 13.5 shows the situation for case 1 (lines 5–8). Case 1 is executed when both p [z ] and y are red. Since p [ p [z ]] is black, we can color both p [z ] and y black, thereby fixing the problem of z and p [z ] both being red, and color p [ p [z ]] red, thereby maintaining property 5. We then repeat the while loop with p [ p [z ]] as the new node z . The pointer z moves up two levels in the tree. Now we show that case 1 maintains the loop invariant at the start of the next iteration. We use z to denote node z in the current iteration, and z = p [ p [z ]] to denote the node z at the test in line 1 upon the next iteration. a. Because this iteration colors p [ p [z ]] red, node z is red at the start of the next iteration. b. The node p [z ] is p [ p [ p [z ]]] in this iteration, and the color of this node does not change. If this node is the root, it was black prior to this iteration, and it remains black at the start of the next iteration. c. We have already argued that case 1 maintains property 5, and it clearly does not introduce a violation of properties 1 or 3. If node z is the root at the start of the next iteration, then case 1 corrected the lone violation of property 4 in this iteration. Since z is red and it is the root, property 2 becomes the only one that is violated, and this violation is due to z . 13.3 Insertion 285 C (a) A Dy Bz A new z C D α β δ ε α β B δ γ ε γ C (b) z A B Dy B A new z C D γ β δ ε α γ β δ ε α Figure 13.5 Case 1 of the procedure RB-I NSERT. Property 4 is violated, since z and its parent p[z ] are both red. The same action is taken whether (a) z is a right child or (b) z is a left child. Each of the subtrees α , β , γ , δ , and ε has a black root, and each has the same black-height. The code for case 1 changes the colors of some nodes, preserving property 5: all downward paths from a node to a leaf have the same number of blacks. The while loop continues with node z ’s grandparent p[ p[z ]] as the new z . Any violation of property 4 can now occur only between the new z , which is red, and its parent, if it is red as well. If node z is not the root at the start of the next iteration, then case 1 has not created a violation of property 2. Case 1 corrected the lone violation of property 4 that existed at the start of this iteration. It then made z red and left p [z ] alone. If p [z ] was black, there is no violation of property 4. If p [z ] was red, coloring z red created one violation of property 4 between z and p [z ]. 
Case 2: z’s uncle y is black and z is a right child Case 3: z’s uncle y is black and z is a left child In cases 2 and 3, the color of z ’s uncle y is black. The two cases are distinguished by whether z is a right or left child of p [z ]. Lines 10–11 constitute case 2, which is shown in Figure 13.6 together with case 3. In case 2, node z is a right child of its parent. We immediately use a left rotation to transform the situation into case 3 (lines 12–14), in which node z is a left child. Because both z and p [z ] are red, the rotation affects neither the black-height of nodes nor property 5. Whether we enter case 3 directly or through case 2, z ’s uncle y is black, since otherwise we would have executed case 1. Additionally, the node p [ p [z ]] exists, since we have argued that this node existed at the time that 286 Chapter 13 Red-Black Trees C A C B δy Bz z A B δy γ z A C α β α β γ δ γ Case 2 α β Case 3 Figure 13.6 Cases 2 and 3 of the procedure RB-I NSERT. As in case 1, property 4 is violated in either case 2 or case 3 because z and its parent p[z ] are both red. Each of the subtrees α , β , γ , and δ has a black root (α , β , and γ from property 4, and δ because otherwise we would be in case 1), and each has the same black-height. Case 2 is transformed into case 3 by a left rotation, which preserves property 5: all downward paths from a node to a leaf have the same number of blacks. Case 3 causes some color changes and a right rotation, which also preserve property 5. The while loop then terminates, because property 4 is satisfied: there are no longer two red nodes in a row. lines 2 and 3 were executed, and after moving z up one level in line 10 and then down one level in line 11, the identity of p [ p [z ]] remains unchanged. In case 3, we execute some color changes and a right rotation, which preserve property 5, and then, since we no longer have two red nodes in a row, we are done. The body of the while loop is not executed another time, since p [z ] is now black. Now we show that cases 2 and 3 maintain the loop invariant. (As we have just argued, p [z ] will be black upon the next test in line 1, and the loop body will not execute again.) a. Case 2 makes z point to p [z ], which is red. No further change to z or its color occurs in cases 2 and 3. b. Case 3 makes p [z ] black, so that if p [z ] is the root at the start of the next iteration, it is black. c. As in case 1, properties 1, 3, and 5 are maintained in cases 2 and 3. Since node z is not the root in cases 2 and 3, we know that there is no violation of property 2. Cases 2 and 3 do not introduce a violation of property 2, since the only node that is made red becomes a child of a black node by the rotation in case 3. Cases 2 and 3 correct the lone violation of property 4, and they do not introduce another violation. Having shown that each iteration of the loop maintains the invariant, we have shown that RB-I NSERT-F IXUP correctly restores the red-black properties. 13.3 Insertion 287 Analysis What is the running time of RB-I NSERT? Since the height of a red-black tree on n nodes is O (lg n ), lines 1–16 of RB-I NSERT take O (lg n ) time. In RB-I NSERTF IXUP, the while loop repeats only if case 1 is executed, and then the pointer z moves two levels up the tree. The total number of times the while loop can be executed is therefore O (lg n ). Thus, RB-I NSERT takes a total of O (lg n ) time. Interestingly, it never performs more than two rotations, since the while loop terminates if case 2 or case 3 is executed. 
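As a concrete companion to the pseudocode and the case analysis above, here is a hedged Python sketch of the whole insertion path, reusing the Node/RBTree conventions sketched earlier; left_rotate is the rotation sketched above, and right_rotate is assumed to be defined as its mirror image. It is meant to illustrate the structure of the algorithm, not to serve as a definitive implementation.

def rb_insert(T, z):
    # Ordinary binary-search-tree insertion, with T.nil in place of NIL.
    y = T.nil
    x = T.root
    while x is not T.nil:
        y = x
        x = x.left if z.key < x.key else x.right
    z.p = y
    if y is T.nil:
        T.root = z
    elif z.key < y.key:
        y.left = z
    else:
        y.right = z
    z.left = z.right = T.nil
    z.color = RED                      # may violate property 2 or property 4
    rb_insert_fixup(T, z)

def rb_insert_fixup(T, z):
    while z.p.color == RED:
        if z.p is z.p.p.left:
            y = z.p.p.right                    # z's uncle
            if y.color == RED:                 # case 1: recolor, move z up two levels
                z.p.color = BLACK
                y.color = BLACK
                z.p.p.color = RED
                z = z.p.p
            else:
                if z is z.p.right:             # case 2: rotate z's parent into case 3
                    z = z.p
                    left_rotate(T, z)
                z.p.color = BLACK              # case 3: recolor and rotate grandparent
                z.p.p.color = RED
                right_rotate(T, z.p.p)         # assumed mirror image of left_rotate
        else:                                  # symmetric: p[z] is a right child
            y = z.p.p.left
            if y.color == RED:
                z.p.color = BLACK
                y.color = BLACK
                z.p.p.color = RED
                z = z.p.p
            else:
                if z is z.p.left:
                    z = z.p
                    right_rotate(T, z)
                z.p.color = BLACK
                z.p.p.color = RED
                left_rotate(T, z.p.p)
    T.root.color = BLACK

A short smoke test is to insert a handful of keys and then run the checks sketched earlier (the height bound and the fixup postcondition).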
Exercises 13.3-1 In line 16 of RB-I NSERT, we set the color of the newly inserted node z to red. Notice that if we had chosen to set z ’s color to black, then property 4 of a red-black tree would not be violated. Why didn’t we choose to set z ’s color to black? 13.3-2 Show the red-black trees that result after successively inserting the keys 41, 38, 31, 12, 19, 8 into an initially empty red-black tree. 13.3-3 Suppose that the black-height of each of the subtrees α, β, γ , δ, ε in Figures 13.5 and 13.6 is k . Label each node in each figure with its black-height to verify that property 5 is preserved by the indicated transformation. 13.3-4 Professor Teach is concerned that RB-I NSERT-F IXUP might set color [nil[T ]] to RED, in which case the test in line 1 would not cause the loop to terminate when z is the root. Show that the professor’s concern is unfounded by arguing that RBI NSERT-F IXUP never sets color [nil[T ]] to RED. 13.3-5 Consider a red-black tree formed by inserting n nodes with RB-I NSERT. Argue that if n > 1, the tree has at least one red node. 13.3-6 Suggest how to implement RB-I NSERT efficiently if the representation for redblack trees includes no storage for parent pointers. 288 Chapter 13 Red-Black Trees 13.4 Deletion Like the other basic operations on an n -node red-black tree, deletion of a node takes time O (lg n ). Deleting a node from a red-black tree is only slightly more complicated than inserting a node. The procedure RB-D ELETE is a minor modification of the T REE -D ELETE procedure (Section 12.3). After splicing out a node, it calls an auxiliary procedure RB-D ELETE -F IXUP that changes colors and performs rotations to restore the redblack properties. RB-D ELETE (T , z ) 1 if left [z ] = nil[T ] or right [z ] = nil[T ] 2 then y ← z 3 else y ← T REE -S UCCESSOR (z ) 4 if left [ y ] = nil[T ] 5 then x ← left [ y ] 6 else x ← right [ y ] 7 p [x ] ← p [ y ] 8 if p [ y ] = nil[T ] 9 then root [T ] ← x 10 else if y = left [ p [ y ]] 11 then left [ p [ y ]] ← x 12 else right [ p [ y ]] ← x 13 if y = z 14 then key[z ] ← key[ y ] 15 copy y ’s satellite data into z 16 if color [ y ] = BLACK 17 then RB-D ELETE -F IXUP (T , x ) 18 return y There are three differences between the procedures T REE -D ELETE and RBD ELETE. First, all references to NIL in T REE -D ELETE are replaced by references to the sentinel nil[T ] in RB-D ELETE. Second, the test for whether x is NIL in line 7 of T REE -D ELETE is removed, and the assignment p [x ] ← p [ y ] is performed unconditionally in line 7 of RB-D ELETE. Thus, if x is the sentinel nil[T ], its parent pointer points to the parent of the spliced-out node y . Third, a call to RBD ELETE -F IXUP is made in lines 16–17 if y is black. If y is red, the red-black properties still hold when y is spliced out, for the following reasons: • • no black-heights in the tree have changed, no red nodes have been made adjacent, and 13.4 Deletion 289 • since y could not have been the root if it was red, the root remains black. The node x passed to RB-D ELETE -F IXUP is one of two nodes: either the node that was y ’s sole child before y was spliced out if y had a child that was not the sentinel nil[T ], or, if y had no children, x is the sentinel nil[T ]. In the latter case, the unconditional assignment in line 7 guarantees that x ’s parent is now the node that was previously y ’s parent, whether x is a key-bearing internal node or the sentinel nil[T ]. 
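As with insertion, the splicing logic of RB-DELETE carries over to Python nearly line for line. In the sketch below, tree_successor is assumed to be the standard binary-search-tree successor routine of Chapter 12 (adapted to use the sentinel), rb_delete_fixup is the procedure examined next, and only the key is copied in the y ≠ z case; copying satellite data depends on the reader's own node type.

def rb_delete(T, z):
    # Choose y, the node that will actually be spliced out.
    if z.left is T.nil or z.right is T.nil:
        y = z
    else:
        y = tree_successor(T, z)       # assumed: z's successor, which has at most one child
    # x is y's only child, or the sentinel if y has no children.
    x = y.left if y.left is not T.nil else y.right
    x.p = y.p                          # unconditional, even when x is the sentinel
    if y.p is T.nil:
        T.root = x
    elif y is y.p.left:
        y.p.left = x
    else:
        y.p.right = x
    if y is not z:
        z.key = y.key                  # move y's key (and, in general, satellite data) into z
    if y.color == BLACK:
        rb_delete_fixup(T, x)          # only splicing out a black node can break the properties
    return y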
We can now examine how the procedure RB-D ELETE -F IXUP restores the redblack properties to the search tree. RB-D ELETE -F IXUP (T , x ) 1 while x = root [T ] and color [x ] = BLACK 2 do if x = left[ p [x ]] 3 then w ← right [ p [x ]] 4 if color [w ] = RED 5 then color [w ] ← BLACK £ Case 1 6 color [ p [x ]] ← RED £ Case 1 7 L EFT-ROTATE (T , p [x ]) £ Case 1 8 w ← right [ p [x ]] £ Case 1 9 if color [left [w ]] = BLACK and color [right [w ]] = BLACK 10 then color [w ] ← RED £ Case 2 11 x ← p [x ] £ Case 2 12 else if color [right [w ]] = BLACK 13 then color [left [w ]] ← BLACK £ Case 3 14 color [w ] ← RED £ Case 3 15 R IGHT-ROTATE (T , w) £ Case 3 16 w ← right [ p [x ]] £ Case 3 17 color [w ] ← color [ p [x ]] £ Case 4 18 color [ p [x ]] ← BLACK £ Case 4 19 color [right [w ]] ← BLACK £ Case 4 20 L EFT-ROTATE (T , p [x ]) £ Case 4 21 x ← root [T ] £ Case 4 22 else (same as then clause with “right” and “left” exchanged) 23 color [x ] ← BLACK If the spliced-out node y in RB-D ELETE is black, three problems may arise. First, if y had been the root and a red child of y becomes the new root, we have violated property 2. Second, if both x and p [ y ] (which is now also p [x ]) were red, then we have violated property 4. Third, y ’s removal causes any path that previously contained y to have one fewer black node. Thus, property 5 is now violated by any ancestor of y in the tree. We can correct this problem by saying 290 Chapter 13 Red-Black Trees that node x has an “extra” black. That is, if we add 1 to the count of black nodes on any path that contains x , then under this interpretation, property 5 holds. When we splice out the black node y , we “push” its blackness onto its child. The problem is that now node x is neither red nor black, thereby violating property 1. Instead, node x is either “doubly black” or “red-and-black,” and it contributes either 2 or 1, respectively, to the count of black nodes on paths containing x . The color attribute of x will still be either RED (if x is red-and-black) or BLACK (if x is doubly black). In other words, the extra black on a node is reflected in x ’s pointing to the node rather than in the color attribute. The procedure RB-D ELETE -F IXUP restores properties 1, 2, and 4. Exercises 13.4-1 and 13.4-2 ask you to show that the procedure restores properties 2 and 4, and so in the remainder of this section, we shall focus on property 1. The goal of the while loop in lines 1–22 is to move the extra black up the tree until 1. x points to a red-and-black node, in which case we color x (singly) black in line 23, 2. x points to the root, in which case the extra black can be simply “removed,” or 3. suitable rotations and recolorings can be performed. Within the while loop, x always points to a nonroot doubly black node. We determine in line 2 whether x is a left child or a right child of its parent p [x ]. (We have given the code for the situation in which x is a left child; the situation in which x is a right child—line 22—is symmetric.) We maintain a pointer w to the sibling of x . Since node x is doubly black, node w cannot be nil[T ]; otherwise, the number of blacks on the path from p [x ] to the (singly black) leaf w would be smaller than the number on the path from p [x ] to x . The four cases2 in the code are illustrated in Figure 13.7. Before examining each case in detail, let’s look more generally at how we can verify that the transformation in each of the cases preserves property 5. 
The key idea is that in each case the number of black nodes (including x ’s extra black) from (and including) the root of the subtree shown to each of the subtrees α, β, . . . , ζ is preserved by the transformation. Thus, if property 5 holds prior to the transformation, it continues to hold afterward. For example, in Figure 13.7(a), which illustrates case 1, the number of black nodes from the root to either subtree α or β is 3, both before and after the transformation. (Again, remember that node x adds an extra black.) Similarly, the number of black nodes from the root to any of γ , δ , ε , and ζ is 2, both before and after the transformation. In Figure 13.7(b), the counting must involve the value c of the color attribute of the root of the subtree shown, which can be either RED or BLACK. If we define count(RED ) = 0 and count(BLACK ) = 1, then the num2 As in RB-I NSERT-F IXUP , the cases in RB-D ELETE -F IXUP are not mutually exclusive. 13.4 Deletion 291 B (a) xA Dw Case 1 B E xA new w C D E α β γ C ε δ ζ δ ε ζ Case 2 α β γ Bc (b) xA Dw new x A Bc D α β γ C E α ζ Case 3 β γ C E δ ε δ ε ζ Bc (c) xA Dw Bc xA C new w α β γ C E α ζ Case 4 β γ δ D E δ ε ε Bc (d) xA Dw Dc B E A C c′ ε E ζ α β γ C c′ ζ new x = root[T] δ ε ζ α β γ δ Figure 13.7 The cases in the while loop of the procedure RB-D ELETE -F IXUP. Darkened nodes have color attributes BLACK, heavily shaded nodes have color attributes RED, and lightly shaded nodes have color attributes represented by c and c , which may be either RED or BLACK. The letters α, β, . . . , ζ represent arbitrary subtrees. In each case, the configuration on the left is transformed into the configuration on the right by changing some colors and/or performing a rotation. Any node pointed to by x has an extra black and is either doubly black or red-and-black. The only case that causes the loop to repeat is case 2. (a) Case 1 is transformed to case 2, 3, or 4 by exchanging the colors of nodes B and D and performing a left rotation. (b) In case 2, the extra black represented by the pointer x is moved up the tree by coloring node D red and setting x to point to node B . If we enter case 2 through case 1, the while loop terminates because the new node x is red-and-black, and therefore the value c of its color attribute is RED. (c) Case 3 is transformed to case 4 by exchanging the colors of nodes C and D and performing a right rotation. (d) In case 4, the extra black represented by x can be removed by changing some colors and performing a left rotation (without violating the red-black properties), and the loop terminates. 292 Chapter 13 Red-Black Trees ber of black nodes from the root to α is 2 + count(c), both before and after the transformation. In this case, after the transformation, the new node x has color attribute c, but this node is really either red-and-black (if c = RED ) or doubly black (if c = BLACK ). The other cases can be verified similarly (see Exercise 13.4-5). Case 1: x’s sibling w is red Case 1 (lines 5–8 of RB-D ELETE -F IXUP and Figure 13.7(a)) occurs when node w , the sibling of node x , is red. Since w must have black children, we can switch the colors of w and p [x ] and then perform a left-rotation on p [x ] without violating any of the red-black properties. The new sibling of x , which is one of w ’s children prior to the rotation, is now black, and thus we have converted case 1 into case 2, 3, or 4. Cases 2, 3, and 4 occur when node w is black; they are distinguished by the colors of w ’s children. 
Case 2: x’s sibling w is black, and both of w ’s children are black In case 2 (lines 10–11 of RB-D ELETE -F IXUP and Figure 13.7(b)), both of w ’s children are black. Since w is also black, we take one black off both x and w , leaving x with only one black and leaving w red. To compensate for removing one black from x and w , we would like to add an extra black to p [x ], which was originally either red or black. We do so by repeating the while loop with p [x ] as the new node x . Observe that if we enter case 2 through case 1, the new node x is red-and-black, since the original p [x ] was red. Hence, the value c of the color attribute of the new node x is RED, and the loop terminates when it tests the loop condition. The new node x is then colored (singly) black in line 23. Case 3: x’s sibling w is black, w ’s left child is red, and w ’s right child is black Case 3 (lines 13–16 and Figure 13.7(c)) occurs when w is black, its left child is red, and its right child is black. We can switch the colors of w and its left child left [w ] and then perform a right rotation on w without violating any of the red-black properties. The new sibling w of x is now a black node with a red right child, and thus we have transformed case 3 into case 4. Case 4: x’s sibling w is black, and w ’s right child is red Case 4 (lines 17–21 and Figure 13.7(d)) occurs when node x ’s sibling w is black and w ’s right child is red. By making some color changes and performing a left rotation on p [x ], we can remove the extra black on x , making it singly black, without violating any of the red-black properties. Setting x to be the root causes the while loop to terminate when it tests the loop condition. 13.4 Deletion 293 Analysis What is the running time of RB-D ELETE? Since the height of a red-black tree of n nodes is O (lg n ), the total cost of the procedure without the call to RB-D ELETE F IXUP takes O (lg n ) time. Within RB-D ELETE -F IXUP , cases 1, 3, and 4 each terminate after performing a constant number of color changes and at most three rotations. Case 2 is the only case in which the while loop can be repeated, and then the pointer x moves up the tree at most O (lg n ) times and no rotations are performed. Thus, the procedure RB-D ELETE -F IXUP takes O (lg n ) time and performs at most three rotations, and the overall time for RB-D ELETE is therefore also O (lg n ). Exercises 13.4-1 Argue that after executing RB-D ELETE -F IXUP, the root of the tree must be black. 13.4-2 Argue that if in RB-D ELETE both x and p [ y ] are red, then property 4 is restored by the call RB-D ELETE -F IXUP (T , x ). 13.4-3 In Exercise 13.3-2, you found the red-black tree that results from successively inserting the keys 41, 38, 31, 12, 19, 8 into an initially empty tree. Now show the red-black trees that result from the successive deletion of the keys in the order 8, 12, 19, 31, 38, 41. 13.4-4 In which lines of the code for RB-D ELETE -F IXUP might we examine or modify the sentinel nil[T ]? 13.4-5 In each of the cases of Figure 13.7, give the count of black nodes from the root of the subtree shown to each of the subtrees α, β, . . . , ζ , and verify that each count remains the same after the transformation. When a node has a color attribute c or c , use the notation count(c) or count(c ) symbolically in your count. 13.4-6 Professors Skelton and Baron are concerned that at the start of case 1 of RBD ELETE -F IXUP , the node p [x ] might not be black. If the professors are correct, then lines 5–6 are wrong. 
Show that p [x ] must be black at the start of case 1, so that the professors have nothing to worry about. 294 Chapter 13 Red-Black Trees 13.4-7 Suppose that a node x is inserted into a red-black tree with RB-I NSERT and then immediately deleted with RB-D ELETE. Is the resulting red-black tree the same as the initial red-black tree? Justify your answer. Problems 13-1 Persistent dynamic sets During the course of an algorithm, we sometimes find that we need to maintain past versions of a dynamic set as it is updated. Such a set is called persistent. One way to implement a persistent set is to copy the entire set whenever it is modified, but this approach can slow down a program and also consume much space. Sometimes, we can do much better. Consider a persistent set S with the operations I NSERT, D ELETE, and S EARCH, which we implement using binary search trees as shown in Figure 13.8(a). We maintain a separate root for every version of the set. In order to insert the key 5 into the set, we create a new node with key 5. This node becomes the left child of a new node with key 7, since we cannot modify the existing node with key 7. Similarly, the new node with key 7 becomes the left child of a new node with key 8 whose right child is the existing node with key 10. The new node with key 8 becomes, in turn, the right child of a new root r with key 4 whose left child is the existing node with key 3. We thus copy only part of the tree and share some of the nodes with the original tree, as shown in Figure 13.8(b). Assume that each tree node has the fields key, left , and right but no parent field. (See also Exercise 13.3-6.) a. For a general persistent binary search tree, identify the nodes that need to be changed to insert a key k or delete a node y . b. Write a procedure P ERSISTENT-T REE -I NSERT that, given a persistent tree T and a key k to insert, returns a new persistent tree T that is the result of inserting k into T . c. If the height of the persistent binary search tree T is h , what are the time and space requirements of your implementation of P ERSISTENT-T REE -I NSERT ? (The space requirement is proportional to the number of new nodes allocated.) d. Suppose that we had included the parent field in each node. In this case, P ERSISTENT-T REE -I NSERT would need to perform additional copying. Prove that P ERSISTENT-T REE -I NSERT would then require (n ) time and space, where n is the number of nodes in the tree. Problems for Chapter 13 295 4 r r 4 4 r′ 3 8 3 8 8 2 7 10 2 7 7 10 5 (a) (b) Figure 13.8 (a) A binary search tree with keys 2, 3, 4, 7, 8, 10. (b) The persistent binary search tree that results from the insertion of key 5. The most recent version of the set consists of the nodes reachable from the root r , and the previous version consists of the nodes reachable from r . Heavily shaded nodes are added when key 5 is inserted. e. Show how to use red-black trees to guarantee that the worst-case running time and space are O (lg n ) per insertion or deletion. 13-2 Join operation on red-black trees The join operation takes two dynamic sets S 1 and S2 and an element x such that for any x 1 ∈ S1 and x 2 ∈ S2 , we have key[x 1 ] ≤ key[x ] ≤ key[x 2 ]. It returns a set S = S1 ∪ {x } ∪ S2. In this problem, we investigate how to implement the join operation on red-black trees. a. Given a red-black tree T , we store its black-height as the field bh[T ]. 
Argue that this field can be maintained by RB-I NSERT and RB-D ELETE without requiring extra storage in the nodes of the tree and without increasing the asymptotic running times. Show that while descending through T , we can determine the black-height of each node we visit in O (1) time per node visited. We wish to implement the operation RB-J OIN (T1 , x , T2 ), which destroys T1 and T2 and returns a red-black tree T = T1 ∪ {x } ∪ T2 . Let n be the total number of nodes in T1 and T2 . b. Assume that bh[T1 ] ≥ bh[T2 ]. Describe an O (lg n )-time algorithm that finds a black node y in T1 with the largest key from among those nodes whose blackheight is bh[T2 ]. 296 Chapter 13 Red-Black Trees c. Let Ty be the subtree rooted at y . Describe how T y ∪ {x } ∪ T2 can replace Ty in O (1) time without destroying the binary-search-tree property. d. What color should we make x so that red-black properties 1, 3, and 5 are maintained? Describe how properties 2 and 4 can be enforced in O (lg n ) time. e. Argue that no generality is lost by making the assumption in part (b). Describe the symmetric situation that arises when bh[T1 ] ≤ bh[T2 ]. f. Argue that the running time of RB-J OIN is O (lg n ). 13-3 AVL trees An AVL tree is a binary search tree that is height balanced: for each node x , the heights of the left and right subtrees of x differ by at most 1. To implement an AVL tree, we maintain an extra field in each node: h [x ] is the height of node x . As for any other binary search tree T , we assume that root [T ] points to the root node. a. Prove that an AVL tree with n nodes has height O (lg n ). (Hint: Prove that in an AVL tree of height h , there are at least Fh nodes, where Fh is the h th Fibonacci number.) b. To insert into an AVL tree, a node is first placed in the appropriate place in binary search tree order. After this insertion, the tree may no longer be height balanced. Specifically, the heights of the left and right children of some node may differ by 2. Describe a procedure BALANCE (x ), which takes a subtree rooted at x whose left and right children are height balanced and have heights that differ by at most 2, i.e., |h [right [x ]] − h [left [x ]]| ≤ 2, and alters the subtree rooted at x to be height balanced. (Hint: Use rotations.) c. Using part (b), describe a recursive procedure AVL-I NSERT (x , z ), which takes a node x within an AVL tree and a newly created node z (whose key has already been filled in), and adds z to the subtree rooted at x , maintaining the property that x is the root of an AVL tree. As in T REE -I NSERT from Section 12.3, assume that key[z ] has already been filled in and that left [z ] = NIL and right [z ] = NIL ; also assume that h [z ] = 0. Thus, to insert the node z into the AVL tree T , we call AVL-I NSERT (root [T ], z ). d. Give an example of an n -node AVL tree in which an AVL-I NSERT operation causes (lg n ) rotations to be performed. 13-4 Treaps If we insert a set of n items into a binary search tree, the resulting tree may be horribly unbalanced, leading to long search times. As we saw in Section 12.4, however, Problems for Chapter 13 297 G: 4 B: 7 A: 10 E: 23 H: 5 K: 65 I: 73 Figure 13.9 A treap. Each node x is labeled with key[x ] : priority[x ]. For example, the root has key G and priority 4. randomly built binary search trees tend to be balanced. Therefore, a strategy that, on average, builds a balanced tree for a fixed set of items is to randomly permute the items and then insert them in that order into the tree. 
What if we do not have all the items at once? If we receive the items one at a time, can we still randomly build a binary search tree out of them? We will examine a data structure that answers this question in the affirmative. A treap is a binary search tree with a modified way of ordering the nodes. Figure 13.9 shows an example. As usual, each node x in the tree has a key value key[x ]. In addition, we assign priority [x ], which is a random number chosen independently for each node. We assume that all priorities are distinct and also that all keys are distinct. The nodes of the treap are ordered so that the keys obey the binary-searchtree property and the priorities obey the min-heap order property: • • • If v is a left child of u , then key[v ] < key[u ]. If v is a right child of u , then key[v ] > key[u ]. If v is a child of u , then priority[v ] > priority [u ]. (This combination of properties is why the tree is called a “treap;” it has features of both a binary search tree and a heap.) It helps to think of treaps in the following way. Suppose that we insert nodes x 1 , x 2 , . . . , x n , with associated keys, into a treap. Then the resulting treap is the tree that would have been formed if the nodes had been inserted into a normal binary search tree in the order given by their (randomly chosen) priorities, i.e., priority [x i ] < priority [x j ] means that x i was inserted before x j . a. Show that given a set of nodes x 1 , x 2 , . . . , x n , with associated keys and priorities (all distinct), there is a unique treap associated with these nodes. 298 Chapter 13 Red-Black Trees b. Show that the expected height of a treap is for a value in the treap is (lg n ). (lg n ), and hence the time to search Let us see how to insert a new node into an existing treap. The first thing we do is assign to the new node a random priority. Then we call the insertion algorithm, which we call T REAP -I NSERT, whose operation is illustrated in Figure 13.10. c. Explain how T REAP -I NSERT works. Explain the idea in English and give pseudocode. (Hint: Execute the usual binary-search-tree insertion procedure and then perform rotations to restore the min-heap order property.) d. Show that the expected running time of T REAP -I NSERT is (lg n ). T REAP -I NSERT performs a search and then a sequence of rotations. Although these two operations have the same expected running time, they have different costs in practice. A search reads information from the treap without modifying it. In contrast, a rotation changes parent and child pointers within the treap. On most computers, read operations are much faster than write operations. Thus we would like T REAP -I NSERT to perform few rotations. We will show that the expected number of rotations performed is bounded by a constant. In order to do so, we will need some definitions, which are illustrated in Figure 13.11. The left spine of a binary search tree T is the path from the root to the node with the smallest key. In other words, the left spine is the path from the root that consists of only left edges. Symmetrically, the right spine of T is the path from the root consisting of only right edges. The length of a spine is the number of nodes it contains. e. Consider the treap T immediately after x is inserted using T REAP -I NSERT. Let C be the length of the right spine of the left subtree of x . Let D be the length of the left spine of the right subtree of x . Prove that the total number of rotations that were performed during the insertion of x is equal to C + D . 
We will now calculate the expected values of C and D . Without loss of generality, we assume that the keys are 1, 2, . . . , n , since we are comparing them only to one another. For nodes x and y , where y = x , let k = key[x ] and i = key[ y ]. We define indicator random variables X i,k = I { y is in the right spine of the left subtree of x (in T ) } . f. Show that X i,k = 1 if and only if priority [ y ] > priority [x ], key[ y ] < key[x ], and, for every z such that key[ y ] < key[z ] < key[x ], we have priority [ y ] < priority [z ]. Problems for Chapter 13 299 G: 4 B: 7 A: 10 E: 23 H: 5 K: 65 I: 73 (a) C: 25 A: 10 B: 7 G: 4 H: 5 E: 23 C: 25 (b) K: 65 I: 73 G: 4 D: 9 A: 10 B: 7 E: 23 C: 25 D: 9 (c) H: 5 K: 65 I: 73 A: 10 B: 7 G: 4 H: 5 E: 23 D: 9 C: 25 (d) K: 65 I: 73 G: 4 B: 7 A: 10 D: 9 C: 25 E: 23 H: 5 K: 65 I: 73 F: 2 F: 2 … A: 10 B: 7 D: 9 C: 25 E: 23 G: 4 H: 5 K: 65 I: 73 (e) (f) Figure 13.10 The operation of T REAP -I NSERT. (a) The original treap, prior to insertion. (b) The treap after inserting a node with key C and priority 25. (c)–(d) Intermediate stages when inserting a node with key D and priority 9. (e) The treap after the insertion of parts (c) and (d) is done. (f) The treap after inserting a node with key F and priority 2. 300 Chapter 13 Red-Black Trees 15 9 3 6 (a) 12 21 18 25 3 6 9 12 15 18 25 21 (b) Figure 13.11 Spines of a binary search tree. The left spine is shaded in (a), and the right spine is shaded in (b). g. Show that Pr { X i,k = 1} = h. Show that E [C ] = k −1 j =1 (k − i − 1)! 1 = . (k − i + 1)! (k − i + 1)(k − i ) 1 1 =1− . j ( j + 1) k 1 . n−k+1 i. Use a symmetry argument to show that E [ D] = 1 − j. Conclude that the expected number of rotations performed when inserting a node into a treap is less than 2. Chapter notes The idea of balancing a search tree is due to Adel’son-Vel’ski˘ and Landis [2], who ı introduced a class of balanced search trees called “AVL trees” in 1962, described in Problem 13-3. Another class of search trees, called “2-3 trees,” was introduced by J. E. Hopcroft (unpublished) in 1970. Balance is maintained in a 2-3 tree by manipulating the degrees of nodes in the tree. A generalization of 2-3 trees introduced by Bayer and McCreight [32], called B-trees, is the topic of Chapter 18. Red-black trees were invented by Bayer [31] under the name “symmetric binary B-trees.” Guibas and Sedgewick [135] studied their properties at length and introduced the red/black color convention. Andersson [15] gives a simpler-to-code Notes for Chapter 13 301 variant of red-black trees. Weiss [311] calls this variant AA-trees. An AA-tree is similar to a red-black tree except that left children may never be red. Treaps were proposed by Seidel and Aragon [271]. They are the default implementation of a dictionary in LEDA, which is a well-implemented collection of data structures and algorithms. There are many other variations on balanced binary trees, including weightbalanced trees [230], k -neighbor trees [213], and scapegoat trees [108]. Perhaps the most intriguing are the “splay trees” introduced by Sleator and Tarjan [281], which are “self-adjusting.” (A good description of splay trees is given by Tarjan [292].) Splay trees maintain balance without any explicit balance condition such as color. Instead, “splay operations” (which involve rotations) are performed within the tree every time an access is made. The amortized cost (see Chapter 17) of each operation on an n -node tree is O (lg n ). Skip lists [251] are an alternative to balanced binary trees. 
A skip list is a linked list that is augmented with a number of additional pointers. Each dictionary operation runs in expected time O (lg n ) on a skip list of n items. Introduction Graphs are a pervasive data structure in computer science, and algorithms for working with them are fundamental to the field. There are hundreds of interesting computational problems defined in terms of graphs. In this part, we touch on a few of the more significant ones. Chapter 22 shows how we can represent a graph on a computer and then discusses algorithms based on searching a graph using either breadth-first search or depth-first search. Two applications of depth-first search are given: topologically sorting a directed acyclic graph and decomposing a directed graph into its strongly connected components. Chapter 23 describes how to compute a minimum-weight spanning tree of a graph. Such a tree is defined as the least-weight way of connecting all of the vertices together when each edge has an associated weight. The algorithms for computing minimum spanning trees are good examples of greedy algorithms (see Chapter 16). Chapters 24 and 25 consider the problem of computing shortest paths between vertices when each edge has an associated length or “weight.” Chapter 24 considers the computation of shortest paths from a given source vertex to all other vertices, and Chapter 25 considers the computation of shortest paths between every pair of vertices. Finally, Chapter 26 shows how to compute a maximum flow of material in a network (directed graph) having a specified source of material, a specified sink, and specified capacities for the amount of material that can traverse each directed edge. This general problem arises in many forms, and a good algorithm for computing maximum flows can be used to solve a variety of related problems efficiently. 526 Part VI Graph Algorithms In describing the running time of a graph algorithm on a given graph G = ( V , E ), we usually measure the size of the input in terms of the number of vertices | V | and the number of edges | E | of the graph. That is, there are two relevant parameters describing the size of the input, not just one. We adopt a common notational convention for these parameters. Inside asymptotic notation (such as O -notation or -notation), and only inside such notation, the symbol V denotes | V | and the symbol E denotes | E |. For example, we might say, “the algorithm runs in time O ( V E ),” meaning that the algorithm runs in time O ( | V | | E |). This convention makes the running-time formulas easier to read, without risk of ambiguity. Another convention we adopt appears in pseudocode. We denote the vertex set of a graph G by V [G ] and its edge set by E [G ]. That is, the pseudocode views vertex and edge sets as attributes of a graph. 22 Elementary Graph Algorithms This chapter presents methods for representing a graph and for searching a graph. Searching a graph means systematically following the edges of the graph so as to visit the vertices of the graph. A graph-searching algorithm can discover much about the structure of a graph. Many algorithms begin by searching their input graph to obtain this structural information. Other graph algorithms are organized as simple elaborations of basic graph-searching algorithms. Techniques for searching a graph are at the heart of the field of graph algorithms. Section 22.1 discusses the two most common computational representations of graphs: as adjacency lists and as adjacency matrices. 
Section 22.2 presents a simple graph-searching algorithm called breadth-first search and shows how to create a breadth-first tree. Section 22.3 presents depth-first search and proves some standard results about the order in which depth-first search visits vertices. Section 22.4 provides our first real application of depth-first search: topologically sorting a directed acyclic graph. A second application of depth-first search, finding the strongly connected components of a directed graph, is given in Section 22.5. 22.1 Representations of graphs There are two standard ways to represent a graph G = ( V , E ): as a collection of adjacency lists or as an adjacency matrix. Either way is applicable to both directed and undirected graphs. The adjacency-list representation is usually preferred, because it provides a compact way to represent sparse graphs—those for which | E | is much less than | V |2 . Most of the graph algorithms presented in this book assume that an input graph is represented in adjacency-list form. An adjacency-matrix representation may be preferred, however, when the graph is dense— | E | is close to | V |2 —or when we need to be able to tell quickly if there is an edge connecting two given vertices. For example, two of the all-pairs shortest-paths algorithms pre- 528 Chapter 22 Elementary Graph Algorithms 1 2 3 5 4 (a) 1 2 3 4 5 2 1 2 2 4 5 5 4 5 1 (b) 3 3 2 4 1 2 3 4 5 1 0 1 0 0 1 2 1 0 1 1 1 3 0 1 0 1 0 (c) 4 0 1 1 0 1 5 1 1 0 1 0 Figure 22.1 Two representations of an undirected graph. (a) An undirected graph G having five vertices and seven edges. (b) An adjacency-list representation of G . (c) The adjacency-matrix representation of G . 1 2 3 4 5 (a) 6 1 2 3 4 5 6 2 5 6 2 4 6 (b) 4 5 1 2 3 4 5 6 1 0 0 0 0 0 0 2 1 0 0 1 0 0 3 0 0 0 0 0 0 4 1 0 0 0 1 0 5 0 1 1 0 0 0 6 0 0 1 0 0 1 (c) Figure 22.2 Two representations of a directed graph. (a) A directed graph G having six vertices and eight edges. (b) An adjacency-list representation of G . (c) The adjacency-matrix representation of G . sented in Chapter 25 assume that their input graphs are represented by adjacency matrices. The adjacency-list representation of a graph G = ( V , E ) consists of an array Adj of | V | lists, one for each vertex in V . For each u ∈ V , the adjacency list Adj[u ] contains all the vertices v such that there is an edge (u , v) ∈ E . That is, Adj[u ] consists of all the vertices adjacent to u in G . (Alternatively, it may contain pointers to these vertices.) The vertices in each adjacency list are typically stored in an arbitrary order. Figure 22.1(b) is an adjacency-list representation of the undirected graph in Figure 22.1(a). Similarly, Figure 22.2(b) is an adjacency-list representation of the directed graph in Figure 22.2(a). If G is a directed graph, the sum of the lengths of all the adjacency lists is | E |, since an edge of the form (u , v) is represented by having v appear in Adj[u ]. If G is an undirected graph, the sum of the lengths of all the adjacency lists is 2 | E |, since if (u , v) is an undirected edge, then u appears in v ’s adjacency list and vice versa. 22.1 Representations of graphs 529 For both directed and undirected graphs, the adjacency-list representation has the desirable property that the amount of memory it requires is ( V + E ). Adjacency lists can readily be adapted to represent weighted graphs, that is, graphs for which each edge has an associated weight, typically given by a weight function w : E → R. 
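To make the two representations concrete, here is a minimal Python sketch that builds an adjacency list, an adjacency matrix, and a weighted adjacency-list variant from an edge list. The function names, the dictionary-of-lists layout, and the 1-based indexing are assumptions of this example rather than anything prescribed by the text; the sample edge list is read off the adjacency matrix of Figure 22.1(c).

    from collections import defaultdict

    def adjacency_list(edges, directed=False):
        """Adj[u] is the list of vertices adjacent to u."""
        adj = defaultdict(list)
        for u, v in edges:
            adj[u].append(v)
            if not directed:
                adj[v].append(u)
        return adj

    def adjacency_matrix(n, edges, directed=False):
        """(n+1) x (n+1) 0/1 matrix; row and column 0 are unused so indices stay 1-based."""
        a = [[0] * (n + 1) for _ in range(n + 1)]
        for u, v in edges:
            a[u][v] = 1
            if not directed:
                a[v][u] = 1
        return a

    def weighted_adjacency_list(weighted_edges, directed=False):
        """Adj[u] holds (v, w(u, v)) pairs, storing each weight with the edge it belongs to."""
        adj = defaultdict(list)
        for u, v, w in weighted_edges:
            adj[u].append((v, w))
            if not directed:
                adj[v].append((u, w))
        return adj

    # Edges of the undirected graph of Figure 22.1(a), read off the matrix in Figure 22.1(c).
    edges = [(1, 2), (1, 5), (2, 3), (2, 4), (2, 5), (3, 4), (4, 5)]
    Adj = adjacency_list(edges)
    A = adjacency_matrix(5, edges)
    assert sum(len(Adj[u]) for u in Adj) == 2 * len(edges)                    # 2|E| list entries
    assert all(A[i][j] == A[j][i] for i in range(1, 6) for j in range(1, 6))  # A equals its transpose

The two assertions check properties discussed in this section: for an undirected graph the adjacency lists contain 2|E| entries in total, and the adjacency matrix is symmetric.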
For example, let G = ( V , E ) be a weighted graph with weight function w . The weight w(u , v) of the edge (u , v) ∈ E is simply stored with vertex v in u ’s adjacency list. The adjacency-list representation is quite robust in that it can be modified to support many other graph variants. A potential disadvantage of the adjacency-list representation is that there is no quicker way to determine if a given edge (u , v) is present in the graph than to search for v in the adjacency list Adj[u ]. This disadvantage can be remedied by an adjacency-matrix representation of the graph, at the cost of using asymptotically more memory. (See Exercise 22.1-8 for suggestions of variations on adjacency lists that permit faster edge lookup.) For the adjacency-matrix representation of a graph G = ( V , E ), we assume that the vertices are numbered 1, 2, . . . , | V | in some arbitrary manner. Then the adjacency-matrix representation of a graph G consists of a | V | × | V | matrix A = (ai j ) such that ai j = 1 if (i , j ) ∈ E , 0 otherwise . Figures 22.1(c) and 22.2(c) are the adjacency matrices of the undirected and directed graphs in Figures 22.1(a) and 22.2(a), respectively. The adjacency matrix of a graph requires ( V 2 ) memory, independent of the number of edges in the graph. Observe the symmetry along the main diagonal of the adjacency matrix in Figure 22.1(c). We define the transpose of a matrix A = (a i j ) to be the matrix AT = (aiTj ) given by aiTj = a j i . Since in an undirected graph, (u , v) and (v, u ) represent the same edge, the adjacency matrix A of an undirected graph is its own transpose: A = AT . In some applications, it pays to store only the entries on and above the diagonal of the adjacency matrix, thereby cutting the memory needed to store the graph almost in half. Like the adjacency-list representation of a graph, the adjacency-matrix representation can be used for weighted graphs. For example, if G = ( V , E ) is a weighted graph with edge-weight function w , the weight w(u , v) of the edge (u , v) ∈ E is simply stored as the entry in row u and column v of the adjacency matrix. If an edge does not exist, a NIL value can be stored as its corresponding matrix entry, though for many problems it is convenient to use a value such as 0 or ∞. Although the adjacency-list representation is asymptotically at least as efficient as the adjacency-matrix representation, the simplicity of an adjacency matrix may make it preferable when graphs are reasonably small. Moreover, if the graph is unweighted, there is an additional advantage in storage for the adjacency-matrix 530 Chapter 22 Elementary Graph Algorithms representation. Rather than using one word of computer memory for each matrix entry, the adjacency matrix uses only one bit per entry. Exercises 22.1-1 Given an adjacency-list representation of a directed graph, how long does it take to compute the out-degree of every vertex? How long does it take to compute the in-degrees? 22.1-2 Give an adjacency-list representation for a complete binary tree on 7 vertices. Give an equivalent adjacency-matrix representation. Assume that vertices are numbered from 1 to 7 as in a binary heap. 22.1-3 The transpose of a directed graph G = ( V , E ) is the graph G T = ( V , E T ), where E T = {(v, u ) ∈ V × V : (u , v) ∈ E }. Thus, G T is G with all its edges reversed. Describe efficient algorithms for computing G T from G , for both the adjacencylist and adjacency-matrix representations of G . Analyze the running times of your algorithms. 
22.1-4 Given an adjacency-list representation of a multigraph G = ( V , E ), describe an O ( V + E )-time algorithm to compute the adjacency-list representation of the “equivalent” undirected graph G = ( V , E ), where E consists of the edges in E with all multiple edges between two vertices replaced by a single edge and with all self-loops removed. 22.1-5 The square of a directed graph G = ( V , E ) is the graph G 2 = ( V , E 2 ) such that (u , w) ∈ E 2 if and only if for some v ∈ V , both (u , v) ∈ E and (v, w) ∈ E . That is, G 2 contains an edge between u and w whenever G contains a path with exactly two edges between u and w . Describe efficient algorithms for computing G 2 from G for both the adjacency-list and adjacency-matrix representations of G . Analyze the running times of your algorithms. 22.1-6 When an adjacency-matrix representation is used, most graph algorithms require time ( V 2 ), but there are some exceptions. Show that determining whether a directed graph G contains a universal sink—a vertex with in-degree | V | − 1 and out-degree 0—can be determined in time O ( V ), given an adjacency matrix for G . 22.2 Breadth-first search 531 Describe what the entries of the matrix product B B T represent, where B T is the transpose of B . 22.1-8 Suppose that instead of a linked list, each array entry Adj[u ] is a hash table containing the vertices v for which (u , v) ∈ E . If all edge lookups are equally likely, what is the expected time to determine whether an edge is in the graph? What disadvantages does this scheme have? Suggest an alternate data structure for each edge list that solves these problems. Does your alternative have disadvantages compared to the hash table? 22.1-7 The incidence matrix of a directed graph G = ( V , E ) is a | V | × | E | matrix B = (bi j ) such that −1 if edge j leaves vertex i , if edge j enters vertex i , bi j = 1 0 otherwise . 22.2 Breadth-first search Breadth-first search is one of the simplest algorithms for searching a graph and the archetype for many important graph algorithms. Prim’s minimum-spanningtree algorithm (Section 23.2) and Dijkstra’s single-source shortest-paths algorithm (Section 24.3) use ideas similar to those in breadth-first search. Given a graph G = ( V , E ) and a distinguished source vertex s , breadth-first search systematically explores the edges of G to “discover” every vertex that is reachable from s . It computes the distance (smallest number of edges) from s to each reachable vertex. It also produces a “breadth-first tree” with root s that contains all reachable vertices. For any vertex v reachable from s , the path in the breadth-first tree from s to v corresponds to a “shortest path” from s to v in G , that is, a path containing the smallest number of edges. The algorithm works on both directed and undirected graphs. Breadth-first search is so named because it expands the frontier between discovered and undiscovered vertices uniformly across the breadth of the frontier. That is, the algorithm discovers all vertices at distance k from s before discovering any vertices at distance k + 1. To keep track of progress, breadth-first search colors each vertex white, gray, or black. All vertices start out white and may later become gray and then black. A vertex is discovered the first time it is encountered during the search, at which time it becomes nonwhite. 
Gray and black vertices, therefore, have been discovered, but breadth-first search distinguishes between them to ensure that the search proceeds in a breadth-first manner. If (u, v) ∈ E and vertex u is black, then vertex v is either gray or black; that is, all vertices adjacent to black vertices have been discovered. Gray vertices may have some adjacent white vertices; they represent the frontier between discovered and undiscovered vertices.

Breadth-first search constructs a breadth-first tree, initially containing only its root, which is the source vertex s. Whenever a white vertex v is discovered in the course of scanning the adjacency list of an already discovered vertex u, the vertex v and the edge (u, v) are added to the tree. We say that u is the predecessor or parent of v in the breadth-first tree. Since a vertex is discovered at most once, it has at most one parent. Ancestor and descendant relationships in the breadth-first tree are defined relative to the root s as usual: if u is on a path in the tree from the root s to vertex v, then u is an ancestor of v and v is a descendant of u.

The breadth-first-search procedure BFS below assumes that the input graph G = (V, E) is represented using adjacency lists. It maintains several additional data structures with each vertex in the graph. The color of each vertex u ∈ V is stored in the variable color[u], and the predecessor of u is stored in the variable π[u]. If u has no predecessor (for example, if u = s or u has not been discovered), then π[u] = NIL. The distance from the source s to vertex u computed by the algorithm is stored in d[u]. The algorithm also uses a first-in, first-out queue Q (see Section 10.1) to manage the set of gray vertices.

BFS(G, s)
 1  for each vertex u ∈ V[G] − {s}
 2      do color[u] ← WHITE
 3         d[u] ← ∞
 4         π[u] ← NIL
 5  color[s] ← GRAY
 6  d[s] ← 0
 7  π[s] ← NIL
 8  Q ← ∅
 9  ENQUEUE(Q, s)
10  while Q ≠ ∅
11      do u ← DEQUEUE(Q)
12         for each v ∈ Adj[u]
13             do if color[v] = WHITE
14                    then color[v] ← GRAY
15                         d[v] ← d[u] + 1
16                         π[v] ← u
17                         ENQUEUE(Q, v)
18         color[u] ← BLACK

Figure 22.3 The operation of BFS on an undirected graph. Tree edges are shown shaded as they are produced by BFS. Within each vertex u is shown d[u]. The queue Q is shown at the beginning of each iteration of the while loop of lines 10–18. Vertex distances are shown next to vertices in the queue.

Figure 22.3 illustrates the progress of BFS on a sample graph. The procedure BFS works as follows. Lines 1–4 paint every vertex white, set d[u] to be infinity for each vertex u, and set the parent of every vertex to be NIL. Line 5 paints the source vertex s gray, since it is considered to be discovered when the procedure begins. Line 6 initializes d[s] to 0, and line 7 sets the predecessor of the source to be NIL. Lines 8–9 initialize Q to the queue containing just the vertex s.
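A fairly direct Python transcription of the pseudocode may help in experimenting with the procedure; the line numbers in the surrounding discussion refer to the pseudocode above, not to this sketch. The graph is assumed to be a dictionary mapping every vertex to its adjacency list, and None stands in for NIL; those representation choices are assumptions of this example, not part of the text.

    from collections import deque

    WHITE, GRAY, BLACK = "white", "gray", "black"
    INF = float("inf")

    def bfs(adj, s):
        """Breadth-first search on adj (dict: u -> list of neighbors; every vertex is a key).
        Returns (d, pi): the distances and predecessors maintained by the pseudocode."""
        color = {u: WHITE for u in adj}
        d = {u: INF for u in adj}
        pi = {u: None for u in adj}          # None plays the role of NIL
        color[s], d[s] = GRAY, 0
        q = deque([s])                       # FIFO queue of gray vertices
        while q:
            u = q.popleft()
            for v in adj[u]:
                if color[v] == WHITE:        # v is discovered for the first time
                    color[v] = GRAY
                    d[v] = d[u] + 1
                    pi[v] = u
                    q.append(v)
            color[u] = BLACK                 # u's adjacency list has been fully examined
        return d, pi

    # On the graph of Figure 22.1(a), starting from vertex 1, the distances come out as
    # d = {1: 0, 2: 1, 5: 1, 3: 2, 4: 2}; the predecessor dictionary pi is the π field
    # used later in this section to reconstruct shortest paths.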
534 Chapter 22 Elementary Graph Algorithms The while loop of lines 10–18 iterates as long as there remain gray vertices, which are discovered vertices that have not yet had their adjacency lists fully examined. This while loop maintains the following invariant: At the test in line 10, the queue Q consists of the set of gray vertices. Although we won’t use this loop invariant to prove correctness, it is easy to see that it holds prior to the first iteration and that each iteration of the loop maintains the invariant. Prior to the first iteration, the only gray vertex, and the only vertex in Q , is the source vertex s . Line 11 determines the gray vertex u at the head of the queue Q and removes it from Q . The for loop of lines 12–17 considers each vertex v in the adjacency list of u . If v is white, then it has not yet been discovered, and the algorithm discovers it by executing lines 14–17. It is first grayed, and its distance d [v ] is set to d [u ] + 1. Then, u is recorded as its parent. Finally, it is placed at the tail of the queue Q . When all the vertices on u ’s adjacency list have been examined, u is blackened in lines 11–18. The loop invariant is maintained because whenever a vertex is painted gray (in line 14) it is also enqueued (in line 17), and whenever a vertex is dequeued (in line 11) it is also painted black (in line 18). The results of breadth-first search may depend upon the order in which the neighbors of a given vertex are visited in line 12: the breadth-first tree may vary, but the distances d computed by the algorithm will not. (See Exercise 22.2-4.) Analysis Before proving the various properties of breadth-first search, we take on the somewhat easier job of analyzing its running time on an input graph G = ( V , E ). We use aggregate analysis, as we saw in Section 17.1. After initialization, no vertex is ever whitened, and thus the test in line 13 ensures that each vertex is enqueued at most once, and hence dequeued at most once. The operations of enqueuing and dequeuing take O (1) time, so the total time devoted to queue operations is O ( V ). Because the adjacency list of each vertex is scanned only when the vertex is dequeued, each adjacency list is scanned at most once. Since the sum of the lengths of all the adjacency lists is ( E ), the total time spent in scanning adjacency lists is O ( E ). The overhead for initialization is O ( V ), and thus the total running time of BFS is O ( V + E ). Thus, breadth-first search runs in time linear in the size of the adjacency-list representation of G . Shortest paths At the beginning of this section, we claimed that breadth-first search finds the distance to each reachable vertex in a graph G = ( V , E ) from a given source vertex s ∈ V . Define the shortest-path distance δ(s , v) from s to v as the minimum number of edges in any path from vertex s to vertex v ; if there is no path from s to v , 22.2 Breadth-first search 535 then δ(s , v) = ∞. A path of length δ(s , v) from s to v is said to be a shortest path 1 from s to v . Before showing that breadth-first search actually computes shortestpath distances, we investigate an important property of shortest-path distances. Lemma 22.1 Let G = ( V , E ) be a directed or undirected graph, and let s ∈ V be an arbitrary vertex. Then, for any edge (u , v) ∈ E , δ(s , v) ≤ δ(s , u ) + 1 . Proof If u is reachable from s , then so is v . 
In this case, the shortest path from s to v cannot be longer than the shortest path from s to u followed by the edge (u , v), and thus the inequality holds. If u is not reachable from s , then δ(s , u ) = ∞, and the inequality holds. We want to show that BFS properly computes d [v ] = δ(s , v) for each vertex v ∈ V . We first show that d [v ] bounds δ(s , v) from above. Lemma 22.2 Let G = ( V , E ) be a directed or undirected graph, and suppose that BFS is run on G from a given source vertex s ∈ V . Then upon termination, for each vertex v ∈ V , the value d [v ] computed by BFS satisfies d [v ] ≥ δ(s , v). Proof We use induction on the number of E NQUEUE operations. Our inductive hypothesis is that d [v ] ≥ δ(s , v) for all v ∈ V . The basis of the induction is the situation immediately after s is enqueued in line 9 of BFS. The inductive hypothesis holds here, because d [s ] = 0 = δ(s , s ) and d [v ] = ∞ ≥ δ(s , v) for all v ∈ V − {s }. For the inductive step, consider a white vertex v that is discovered during the search from a vertex u . The inductive hypothesis implies that d [u ] ≥ δ(s , u ). From the assignment performed by line 15 and from Lemma 22.1, we obtain d [v ] = d [u ] + 1 ≥ δ(s , u ) + 1 ≥ δ(s , v) . 1 In Chapters 24 and 25, we shall generalize our study of shortest paths to weighted graphs, in which every edge has a real-valued weight and the weight of a path is the sum of the weights of its constituent edges. The graphs considered in the present chapter are unweighted or, equivalently, all edges have unit weight. 536 Chapter 22 Elementary Graph Algorithms Vertex v is then enqueued, and it is never enqueued again because it is also grayed and the then clause of lines 14–17 is executed only for white vertices. Thus, the value of d [v ] never changes again, and the inductive hypothesis is maintained. To prove that d [v ] = δ(s , v), we must first show more precisely how the queue Q operates during the course of BFS. The next lemma shows that at all times, there are at most two distinct d values in the queue. Lemma 22.3 Suppose that during the execution of BFS on a graph G = ( V , E ), the queue Q contains the vertices v1 , v2 , . . . , vr , where v1 is the head of Q and vr is the tail. Then, d [vr ] ≤ d [v1 ] + 1 and d [vi ] ≤ d [vi +1 ] for i = 1, 2, . . . , r − 1. Proof The proof is by induction on the number of queue operations. Initially, when the queue contains only s , the lemma certainly holds. For the inductive step, we must prove that the lemma holds after both dequeuing and enqueuing a vertex. If the head v 1 of the queue is dequeued, v2 becomes the new head. (If the queue becomes empty, then the lemma holds vacuously.) By the inductive hypothesis, d [v1 ] ≤ d [v2 ]. But then we have d [vr ] ≤ d [v1 ] + 1 ≤ d [v2 ] + 1, and the remaining inequalities are unaffected. Thus, the lemma follows with v2 as the head. Enqueuing a vertex requires closer examination of the code. When we enqueue a vertex v in line 17 of BFS, it becomes vr +1 . At that time, we have already removed vertex u , whose adjacency list is currently being scanned, from the queue Q , and by the inductive hypothesis, the new head v 1 has d [v1 ] ≥ d [u ]. Thus, d [vr +1 ] = d [v ] = d [u ] + 1 ≤ d [v1 ] + 1. From the inductive hypothesis, we also have d [vr ] ≤ d [u ] + 1, and so d [vr ] ≤ d [u ] + 1 = d [v ] = d [vr +1 ], and the remaining inequalities are unaffected. Thus, the lemma follows when v is enqueued. 
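Lemma 22.3 can also be observed empirically. The short sketch below, under the same dictionary-of-lists representation assumed earlier, runs a stripped-down BFS and asserts before every dequeue that the d values in the queue are nondecreasing from head to tail and that the tail exceeds the head by at most 1 (so at most two distinct values are ever present). This is only an illustrative check, not a substitute for the proof.

    from collections import deque

    def bfs_check_queue_invariant(adj, s):
        """BFS that asserts the invariant of Lemma 22.3 before each dequeue."""
        INF = float("inf")
        d = {u: INF for u in adj}
        d[s] = 0
        q = deque([s])
        discovered = {s}
        while q:
            ds = [d[v] for v in q]
            assert all(ds[i] <= ds[i + 1] for i in range(len(ds) - 1))   # nondecreasing d values
            assert ds[-1] <= ds[0] + 1                                   # tail is at most head + 1
            u = q.popleft()
            for v in adj[u]:
                if v not in discovered:
                    discovered.add(v)
                    d[v] = d[u] + 1
                    q.append(v)
        return d

    # Exercising the check on the undirected graph of Figure 22.1(a):
    adj = {1: [2, 5], 2: [1, 5, 3, 4], 3: [2, 4], 4: [2, 5, 3], 5: [4, 1, 2]}
    bfs_check_queue_invariant(adj, 1)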
The following corollary shows that the d values at the time that vertices are enqueued are monotonically increasing over time. Corollary 22.4 Suppose that vertices vi and v j are enqueued during the execution of BFS, and that vi is enqueued before v j . Then d [vi ] ≤ d [v j ] at the time that v j is enqueued. Proof Immediate from Lemma 22.3 and the property that each vertex receives a finite d value at most once during the course of BFS. We can now prove that breadth-first search correctly finds shortest-path distances. 22.2 Breadth-first search 537 Theorem 22.5 (Correctness of breadth-first search) Let G = ( V , E ) be a directed or undirected graph, and suppose that BFS is run on G from a given source vertex s ∈ V . Then, during its execution, BFS discovers every vertex v ∈ V that is reachable from the source s , and upon termination, d [v ] = δ(s , v) for all v ∈ V . Moreover, for any vertex v = s that is reachable from s , one of the shortest paths from s to v is a shortest path from s to π [v ] followed by the edge (π [v ], v). Proof Assume, for the purpose of contradiction, that some vertex receives a d value not equal to its shortest path distance. Let v be the vertex with minimum δ(s , v) that receives such an incorrect d value; clearly v = s . By Lemma 22.2, d [v ] ≥ δ(s , v), and thus we have that d [v ] > δ(s , v). Vertex v must be reachable from s , for if it is not, then δ(s , v) = ∞ ≥ d [v ]. Let u be the vertex immediately preceding v on a shortest path from s to v , so that δ(s , v) = δ(s , u ) + 1. Because δ(s , u ) < δ(s , v), and because of how we chose v , we have d [u ] = δ(s , u ). Putting these properties together, we have Now consider the time when BFS chooses to dequeue vertex u from Q in line 11. At this time, vertex v is either white, gray, or black. We shall show that in each of these cases, we derive a contradiction to inequality (22.1). If v is white, then line 15 sets d [v ] = d [u ] + 1, contradicting inequality (22.1). If v is black, then it was already removed from the queue and, by Corollary 22.4, we have d [v ] ≤ d [u ], again contradicting inequality (22.1). If v is gray, then it was painted gray upon dequeuing some vertex w , which was removed from Q earlier than u and for which d [v ] = d [w ] + 1. By Corollary 22.4, however, d [w ] ≤ d [u ], and so we have d [v ] ≤ d [u ] + 1, once again contradicting inequality (22.1). Thus we conclude that d [v ] = δ(s , v) for all v ∈ V . All vertices reachable from s must be discovered, for if they were not, they would have infinite d values. To conclude the proof of the theorem, observe that if π [v ] = u , then d [v ] = d [u ] + 1. Thus, we can obtain a shortest path from s to v by taking a shortest path from s to π [v ] and then traversing the edge (π [v ], v). Breadth-first trees The procedure BFS builds a breadth-first tree as it searches the graph, as illustrated in Figure 22.3. The tree is represented by the π field in each vertex. More formally, for a graph G = ( V , E ) with source s , we define the predecessor subgraph of G as G π = ( Vπ , E π ), where and Vπ = {v ∈ V : π [v ] = NIL } ∪ {s } d [v ] > δ(s , v) = δ(s , u ) + 1 = d [u ] + 1 . (22.1) 538 Chapter 22 Elementary Graph Algorithms The predecessor subgraph G π is a breadth-first tree if Vπ consists of the vertices reachable from s and, for all v ∈ Vπ , there is a unique simple path from s to v in G π that is also a shortest path from s to v in G . 
A breadth-first tree is in fact a tree, since it is connected and | E π | = | Vπ | − 1 (see Theorem B.2). The edges in E π are called tree edges. After BFS has been run from a source s on a graph G , the following lemma shows that the predecessor subgraph is a breadth-first tree. Lemma 22.6 When applied to a directed or undirected graph G = ( V , E ), procedure BFS constructs π so that the predecessor subgraph G π = ( Vπ , E π ) is a breadth-first tree. Proof Line 16 of BFS sets π [v ] = u if and only if (u , v) ∈ E and δ(s , v) < ∞— that is, if v is reachable from s —and thus Vπ consists of the vertices in V reachable from s . Since G π forms a tree, by Theorem B.2, it contains a unique path from s to each vertex in Vπ . By applying Theorem 22.5 inductively, we conclude that every such path is a shortest path. The following procedure prints out the vertices on a shortest path from s to v , assuming that BFS has already been run to compute the shortest-path tree. P RINT-PATH (G , s , v) 1 if v = s 2 then print s 3 else if π [v ] = NIL 4 then print “no path from” s “to” v “exists” 5 else P RINT-PATH (G , s , π [v ]) 6 print v This procedure runs in time linear in the number of vertices in the path printed, since each recursive call is for a path one vertex shorter. Exercises 22.2-1 Show the d and π values that result from running breadth-first search on the directed graph of Figure 22.2(a), using vertex 3 as the source. 22.2-2 Show the d and π values that result from running breadth-first search on the undirected graph of Figure 22.3, using vertex u as the source. E π = {(π [v ], v) : v ∈ Vπ − {s }} . 22.2 Breadth-first search 539 22.2-3 What is the running time of BFS if its input graph is represented by an adjacency matrix and the algorithm is modified to handle this form of input? 22.2-4 Argue that in a breadth-first search, the value d [u ] assigned to a vertex u is independent of the order in which the vertices in each adjacency list are given. Using Figure 22.3 as an example, show that the breadth-first tree computed by BFS can depend on the ordering within adjacency lists. 22.2-5 Give an example of a directed graph G = ( V , E ), a source vertex s ∈ V , and a set of tree edges E π ⊆ E such that for each vertex v ∈ V , the unique path in the graph ( V , E π ) from s to v is a shortest path in G , yet the set of edges E π cannot be produced by running BFS on G , no matter how the vertices are ordered in each adjacency list. 22.2-6 There are two types of professional wrestlers: “good guys” and “bad guys.” Between any pair of professional wrestlers, there may or may not be a rivalry. Suppose we have n professional wrestlers and we have a list of r pairs of wrestlers for which there are rivalries. Give an O (n + r )-time algorithm that determines whether it is possible to designate some of the wrestlers as good guys and the remainder as bad guys such that each rivalry is between a good guy and a bad guy. If is it possible to perform such a designation, your algorithm should produce it. 22.2-7 The diameter of a tree T = ( V , E ) is given by u ,v ∈ V max δ(u , v) ; that is, the diameter is the largest of all shortest-path distances in the tree. Give an efficient algorithm to compute the diameter of a tree, and analyze the running time of your algorithm. 22.2-8 Let G = ( V , E ) be a connected, undirected graph. Give an O ( V + E )-time algorithm to compute a path in G that traverses each edge in E exactly once in each direction. 
Describe how you can find your way out of a maze if you are given a large supply of pennies. 540 Chapter 22 Elementary Graph Algorithms 22.3 Depth-first search The strategy followed by depth-first search is, as its name implies, to search “deeper” in the graph whenever possible. In depth-first search, edges are explored out of the most recently discovered vertex v that still has unexplored edges leaving it. When all of v ’s edges have been explored, the search “backtracks” to explore edges leaving the vertex from which v was discovered. This process continues until we have discovered all the vertices that are reachable from the original source vertex. If any undiscovered vertices remain, then one of them is selected as a new source and the search is repeated from that source. This entire process is repeated until all vertices are discovered. As in breadth-first search, whenever a vertex v is discovered during a scan of the adjacency list of an already discovered vertex u , depth-first search records this event by setting v ’s predecessor field π [v ] to u . Unlike breadth-first search, whose predecessor subgraph forms a tree, the predecessor subgraph produced by a depth-first search may be composed of several trees, because the search may be repeated from multiple sources.2 The predecessor subgraph of a depth-first search is therefore defined slightly differently from that of a breadth-first search: we let G π = ( V , E π ), where The predecessor subgraph of a depth-first search forms a depth-first forest composed of several depth-first trees. The edges in E π are called tree edges. As in breadth-first search, vertices are colored during the search to indicate their state. Each vertex is initially white, is grayed when it is discovered in the search, and is blackened when it is finished, that is, when its adjacency list has been examined completely. This technique guarantees that each vertex ends up in exactly one depth-first tree, so that these trees are disjoint. Besides creating a depth-first forest, depth-first search also timestamps each vertex. Each vertex v has two timestamps: the first timestamp d [v ] records when v is first discovered (and grayed), and the second timestamp f [v ] records when the search finishes examining v ’s adjacency list (and blackens v ). These timestamps 2 It may seem arbitrary that breadth-first search is limited to only one source whereas depth-first E π = {(π [v ], v) : v ∈ V and π [v ] = NIL } . search may search from multiple sources. Although conceptually, breadth-first search could proceed from multiple sources and depth-first search could be limited to one source, our approach reflects how the results of these searches are typically used. Breadth-first search is usually employed to find shortest-path distances (and the associated predecessor subgraph) from a given source. Depth-first search is often a subroutine in another algorithm, as we shall see later in this chapter. 22.3 Depth-first search 541 are used in many graph algorithms and are generally helpful in reasoning about the behavior of depth-first search. The procedure DFS below records when it discovers vertex u in the variable d [u ] and when it finishes vertex u in the variable f [u ]. These timestamps are integers between 1 and 2 | V |, since there is one discovery event and one finishing event for each of the | V | vertices. For every vertex u , d [u ] < f [u ] . (22.2) Vertex u is WHITE before time d [u ], GRAY between time d [u ] and time f [u ], and BLACK thereafter. 
The following pseudocode is the basic depth-first-search algorithm. The input graph G may be undirected or directed. The variable time is a global variable that we use for timestamping.

DFS(G)
1  for each vertex u ∈ V[G]
2      do color[u] ← WHITE
3         π[u] ← NIL
4  time ← 0
5  for each vertex u ∈ V[G]
6      do if color[u] = WHITE
7             then DFS-VISIT(u)

DFS-VISIT(u)
1  color[u] ← GRAY                ▷ White vertex u has just been discovered.
2  time ← time + 1
3  d[u] ← time
4  for each v ∈ Adj[u]             ▷ Explore edge (u, v).
5      do if color[v] = WHITE
6             then π[v] ← u
7                  DFS-VISIT(v)
8  color[u] ← BLACK                ▷ Blacken u; it is finished.
9  f[u] ← time ← time + 1

Figure 22.4 illustrates the progress of DFS on the graph shown in Figure 22.2. Procedure DFS works as follows. Lines 1–3 paint all vertices white and initialize their π fields to NIL. Line 4 resets the global time counter. Lines 5–7 check each vertex in V in turn and, when a white vertex is found, visit it using DFS-VISIT. Every time DFS-VISIT(u) is called in line 7, vertex u becomes the root of a new tree in the depth-first forest. When DFS returns, every vertex u has been assigned a discovery time d[u] and a finishing time f[u].

Figure 22.4 The progress of the depth-first-search algorithm DFS on a directed graph. As edges are explored by the algorithm, they are shown as either shaded (if they are tree edges) or dashed (otherwise). Nontree edges are labeled B, C, or F according to whether they are back, cross, or forward edges. Vertices are timestamped by discovery time/finishing time.

In each call DFS-VISIT(u), vertex u is initially white. Line 1 paints u gray, line 2 increments the global variable time, and line 3 records the new value of time as the discovery time d[u]. Lines 4–7 examine each vertex v adjacent to u and recursively visit v if it is white. As each vertex v ∈ Adj[u] is considered in line 4, we say that edge (u, v) is explored by the depth-first search. Finally, after every edge leaving u has been explored, lines 8–9 paint u black and record the finishing time in f[u].

Note that the results of depth-first search may depend upon the order in which the vertices are examined in line 5 of DFS, and upon the order in which the neighbors of a vertex are visited in line 4 of DFS-VISIT. These different visitation orders tend not to cause problems in practice, as any depth-first search result can usually be used effectively, with essentially equivalent results.

What is the running time of DFS? The loops on lines 1–3 and lines 5–7 of DFS take time Θ(V), exclusive of the time to execute the calls to DFS-VISIT. As we did for breadth-first search, we use aggregate analysis. The procedure DFS-VISIT is called exactly once for each vertex v ∈ V, since DFS-VISIT is invoked only on white vertices and the first thing it does is paint the vertex gray.
During an execution of DFS-V ISIT (v), the loop on lines 4–7 is executed |Adj[v ]| times. Since v ∈V |Adj[v ]| = (E ) , ( E ). The running time of the total cost of executing lines 4–7 of DFS-V ISIT is DFS is therefore ( V + E ). Properties of depth-first search Depth-first search yields valuable information about the structure of a graph. Perhaps the most basic property of depth-first search is that the predecessor subgraph G π does indeed form a forest of trees, since the structure of the depthfirst trees exactly mirrors the structure of recursive calls of DFS-V ISIT. That is, u = π [v ] if and only if DFS-V ISIT (v) was called during a search of u ’s adjacency list. Additionally, vertex v is a descendant of vertex u in the depth-first forest if and only if v is discovered during the time in which u is gray. Another important property of depth-first search is that discovery and finishing times have parenthesis structure. If we represent the discovery of vertex u with a left parenthesis “(u ” and represent its finishing by a right parenthesis “u )”, then the history of discoveries and finishings makes a well-formed expression in the sense that the parentheses are properly nested. For example, the depth-first search of Figure 22.5(a) corresponds to the parenthesization shown in Figure 22.5(b). Another way of stating the condition of parenthesis structure is given in the following theorem. Theorem 22.7 (Parenthesis theorem) In any depth-first search of a (directed or undirected) graph G = ( V , E ), for any two vertices u and v , exactly one of the following three conditions holds: • the intervals [d [u ], f [u ]] and [d [v ], f [v ]] are entirely disjoint, and neither u nor v is a descendant of the other in the depth-first forest, the interval [d [u ], f [u ]] is contained entirely within the interval [d [v ], f [v ]], and u is a descendant of v in a depth-first tree, or the interval [d [v ], f [v ]] is contained entirely within the interval [d [u ], f [u ]], and v is a descendant of u in a depth-first tree. • • 544 Chapter 22 Elementary Graph Algorithms y 3/6 (a) 4/5 x B z 2/9 F s 1/10 C 12/13 v t 11/16 B 14/15 u C 7/8 w C C s z (b) y x w v t u 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 (s (z (y (x x) y) (w w) z) s) (t (v v) (u u) t) s C B y C x z F C w v t C B u (c) Figure 22.5 Properties of depth-first search. (a) The result of a depth-first search of a directed graph. Vertices are timestamped and edge types are indicated as in Figure 22.4. (b) Intervals for the discovery time and finishing time of each vertex correspond to the parenthesization shown. Each rectangle spans the interval given by the discovery and finishing times of the corresponding vertex. Tree edges are shown. If two intervals overlap, then one is nested within the other, and the vertex corresponding to the smaller interval is a descendant of the vertex corresponding to the larger. (c) The graph of part (a) redrawn with all tree and forward edges going down within a depth-first tree and all back edges going up from a descendant to an ancestor. 22.3 Depth-first search 545 Proof We begin with the case in which d [u ] < d [v ]. There are two subcases to consider, according to whether d [v ] < f [u ] or not. The first subcase occurs when d [v ] < f [u ], so v was discovered while u was still gray. This implies that v is a descendant of u . Moreover, since v was discovered more recently than u , all of its outgoing edges are explored, and v is finished, before the search returns to and finishes u . 
In this case, therefore, the interval [d [v ], f [v ]] is entirely contained within the interval [d [u ], f [u ]]. In the other subcase, f [u ] < d [v ], and inequality (22.2) implies that the intervals [d [u ], f [u ]] and [d [v ], f [v ]] are disjoint. Because the intervals are disjoint, neither vertex was discovered while the other was gray, and so neither vertex is a descendant of the other. The case in which d [v ] < d [u ] is similar, with the roles of u and v reversed in the above argument. Corollary 22.8 (Nesting of descendants’ intervals) Vertex v is a proper descendant of vertex u in the depth-first forest for a (directed or undirected) graph G if and only if d [u ] < d [v ] < f [v ] < f [u ]. Proof Immediate from Theorem 22.7. The next theorem gives another important characterization of when one vertex is a descendant of another in the depth-first forest. Theorem 22.9 (White-path theorem) In a depth-first forest of a (directed or undirected) graph G = ( V , E ), vertex v is a descendant of vertex u if and only if at the time d [u ] that the search discovers u , vertex v can be reached from u along a path consisting entirely of white vertices. Proof ⇒: Assume that v is a descendant of u . Let w be any vertex on the path between u and v in the depth-first tree, so that w is a descendant of u . By Corollary 22.8, d [u ] < d [w ], and so w is white at time d [u ]. ⇐: Suppose that vertex v is reachable from u along a path of white vertices at time d [u ], but v does not become a descendant of u in the depth-first tree. Without loss of generality, assume that every other vertex along the path becomes a descendant of u . (Otherwise, let v be the closest vertex to u along the path that doesn’t become a descendant of u .) Let w be the predecessor of v in the path, so that w is a descendant of u (w and u may in fact be the same vertex) and, by Corollary 22.8, f [w ] ≤ f [u ]. Note that v must be discovered after u is discovered, but before w is finished. Therefore, d [u ] < d [v ] < f [w ] ≤ f [u ]. Theorem 22.7 then implies that the interval [d [v ], f [v ]] is contained entirely within the interval [d [u ], f [u ]]. By Corollary 22.8, v must after all be a descendant of u . 22.3 Depth-first search 547 In an undirected graph, there may be some ambiguity in the type classification, since (u , v) and (v, u ) are really the same edge. In such a case, the edge is classified as the first type in the classification list that applies. Equivalently (see Exercise 22.3-5), the edge is classified according to whichever of (u , v) or (v, u ) is encountered first during the execution of the algorithm. We now show that forward and cross edges never occur in a depth-first search of an undirected graph. Theorem 22.10 In a depth-first search of an undirected graph G , every edge of G is either a tree edge or a back edge. Proof Let (u , v) be an arbitrary edge of G , and suppose without loss of generality that d [u ] < d [v ]. Then, v must be discovered and finished before we finish u (while u is gray), since v is on u ’s adjacency list. If the edge (u , v) is explored first in the direction from u to v , then v is undiscovered (white) until that time, for otherwise we would have explored this edge already in the direction from v to u . Thus, (u , v) becomes a tree edge. If (u , v) is explored first in the direction from v to u , then (u , v) is a back edge, since u is still gray at the time the edge is first explored. We shall see several applications of these theorems in the following sections. 
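To complement the pseudocode of this section, here is a compact Python sketch of DFS on a directed graph that records discovery and finishing timestamps and labels each explored edge as a tree, back, forward, or cross edge, using the color test suggested above (a gray target gives a back edge; for a finished target, discovery times separate forward edges from cross edges). The dictionary-based graph representation and the recursive helper are assumptions of this example; recursion depth would become a concern on very large graphs.

    WHITE, GRAY, BLACK = 0, 1, 2

    def dfs(adj):
        """DFS of a directed graph adj (dict: u -> list of successors; every vertex is a key).
        Returns discovery times d, finishing times f, and a type label for each explored edge."""
        color = {u: WHITE for u in adj}
        d, f, edge_type = {}, {}, {}
        time = 0

        def visit(u):
            nonlocal time
            color[u] = GRAY
            time += 1
            d[u] = time
            for v in adj[u]:
                if color[v] == WHITE:
                    edge_type[(u, v)] = "tree"
                    visit(v)
                elif color[v] == GRAY:
                    edge_type[(u, v)] = "back"       # v is an ancestor still on the recursion stack
                elif d[u] < d[v]:
                    edge_type[(u, v)] = "forward"    # v is a finished descendant of u
                else:
                    edge_type[(u, v)] = "cross"      # v finished before u was discovered
            color[u] = BLACK
            time += 1
            f[u] = time

        for u in adj:
            if color[u] == WHITE:
                visit(u)
        return d, f, edge_type

Because the neighbor lists are scanned in the order given, the exact timestamps and nontree-edge labels can differ between runs on differently ordered inputs, exactly as noted earlier, while the tree edges still form a depth-first forest and the d and f values still obey the parenthesis theorem.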
Exercises 22.3-1 Make a 3-by-3 chart with row and column labels WHITE, GRAY, and BLACK. In each cell (i , j ), indicate whether, at any point during a depth-first search of a directed graph, there can be an edge from a vertex of color i to a vertex of color j . For each possible edge, indicate what edge types it can be. Make a second such chart for depth-first search of an undirected graph. 22.3-2 Show how depth-first search works on the graph of Figure 22.6. Assume that the for loop of lines 5–7 of the DFS procedure considers the vertices in alphabetical order, and assume that each adjacency list is ordered alphabetically. Show the discovery and finishing times for each vertex, and show the classification of each edge. 22.3-3 Show the parenthesis structure of the depth-first search shown in Figure 22.4. 548 Chapter 22 Elementary Graph Algorithms q s v w t x z y r u Figure 22.6 A directed graph for use in Exercises 22.3-2 and 22.5-2. 22.3-4 Show that edge (u , v) is a. a tree edge or forward edge if and only if d [u ] < d [v ] < f [v ] < f [u ], b. a back edge if and only if d [v ] < d [u ] < f [u ] < f [v ], and c. a cross edge if and only if d [v ] < f [v ] < d [u ] < f [u ]. 22.3-5 Show that in an undirected graph, classifying an edge (u , v) as a tree edge or a back edge according to whether (u , v) or (v, u ) is encountered first during the depth-first search is equivalent to classifying it according to the priority of types in the classification scheme. 22.3-6 Rewrite the procedure DFS, using a stack to eliminate recursion. 22.3-7 Give a counterexample to the conjecture that if there is a path from u to v in a directed graph G , and if d [u ] < d [v ] in a depth-first search of G , then v is a descendant of u in the depth-first forest produced. 22.3-8 Give a counterexample to the conjecture that if there is a path from u to v in a directed graph G , then any depth-first search must result in d [v ] ≤ f [u ]. 22.3-9 Modify the pseudocode for depth-first search so that it prints out every edge in the directed graph G , together with its type. Show what modifications, if any, must be made if G is undirected. 22.4 Topological sort 549 22.3-10 Explain how a vertex u of a directed graph can end up in a depth-first tree containing only u , even though u has both incoming and outgoing edges in G . 22.3-11 Show that a depth-first search of an undirected graph G can be used to identify the connected components of G , and that the depth-first forest contains as many trees as G has connected components. More precisely, show how to modify depth-first search so that each vertex v is assigned an integer label cc[v ] between 1 and k , where k is the number of connected components of G , such that cc[u ] = cc[v ] if and only if u and v are in the same connected component. 22.3-12 A directed graph G = ( V , E ) is singly connected if u Y v implies that there is at most one simple path from u to v for all vertices u , v ∈ V . Give an efficient algorithm to determine whether or not a directed graph is singly connected. 22.4 Topological sort This section shows how depth-first search can be used to perform a topological sort of a directed acyclic graph, or a “dag” as it is sometimes called. A topological sort of a dag G = ( V , E ) is a linear ordering of all its vertices such that if G contains an edge (u , v), then u appears before v in the ordering. (If the graph is not acyclic, then no linear ordering is possible.) 
A topological sort of a graph can be viewed as an ordering of its vertices along a horizontal line so that all directed edges go from left to right. Topological sorting is thus different from the usual kind of “sorting” studied in Part II. Directed acyclic graphs are used in many applications to indicate precedences among events. Figure 22.7 gives an example that arises when Professor Bumstead gets dressed in the morning. The professor must don certain garments before others (e.g., socks before shoes). Other items may be put on in any order (e.g., socks and pants). A directed edge (u , v) in the dag of Figure 22.7(a) indicates that garment u must be donned before garment v . A topological sort of this dag therefore gives an order for getting dressed. Figure 22.7(b) shows the topologically sorted dag as an ordering of vertices along a horizontal line such that all directed edges go from left to right. The following simple algorithm topologically sorts a dag. 550 Chapter 22 Elementary Graph Algorithms 11/16 undershorts pants shirt 1/8 belt tie 2/5 socks 17/18 watch 9/10 shoes 13/14 12/15 (a) 6/7 jacket 3/4 (b) socks 17/18 undershorts 11/16 pants 12/15 shoes 13/14 watch 9/10 shirt 1/8 belt 6/7 tie 2/5 jacket 3/4 Figure 22.7 (a) Professor Bumstead topologically sorts his clothing when getting dressed. Each directed edge (u , v) means that garment u must be put on before garment v . The discovery and finishing times from a depth-first search are shown next to each vertex. (b) The same graph shown topologically sorted. Its vertices are arranged from left to right in order of decreasing finishing time. Note that all directed edges go from left to right. T OPOLOGICAL -S ORT (G ) 1 call DFS(G ) to compute finishing times f [v ] for each vertex v 2 as each vertex is finished, insert it onto the front of a linked list 3 return the linked list of vertices Figure 22.7(b) shows how the topologically sorted vertices appear in reverse order of their finishing times. We can perform a topological sort in time ( V + E ), since depth-first search takes ( V + E ) time and it takes O (1) time to insert each of the | V | vertices onto the front of the linked list. We prove the correctness of this algorithm using the following key lemma characterizing directed acyclic graphs. Lemma 22.11 A directed graph G is acyclic if and only if a depth-first search of G yields no back edges. Proof ⇒: Suppose that there is a back edge (u , v). Then, vertex v is an ancestor of vertex u in the depth-first forest. There is thus a path from v to u in G , and the back edge (u , v) completes a cycle. 22.4 Topological sort 551 m q t x n r u y o s v z p w Figure 22.8 A dag for topological sorting. ⇐: Suppose that G contains a cycle c. We show that a depth-first search of G yields a back edge. Let v be the first vertex to be discovered in c, and let (u , v) be the preceding edge in c. At time d [v ], the vertices of c form a path of white vertices from v to u . By the white-path theorem, vertex u becomes a descendant of v in the depth-first forest. Therefore, (u , v) is a back edge. Theorem 22.12 T OPOLOGICAL -S ORT (G ) produces a topological sort of a directed acyclic graph G . Proof Suppose that DFS is run on a given dag G = ( V , E ) to determine finishing times for its vertices. It suffices to show that for any pair of distinct vertices u , v ∈ V , if there is an edge in G from u to v , then f [v ] < f [u ]. Consider any edge (u , v) explored by DFS(G ). 
When this edge is explored, v cannot be gray, since then v would be an ancestor of u and (u , v) would be a back edge, contradicting Lemma 22.11. Therefore, v must be either white or black. If v is white, it becomes a descendant of u , and so f [v ] < f [u ]. If v is black, it has already been finished, so that f [v ] has already been set. Because we are still exploring from u , we have yet to assign a timestamp to f [u ], and so once we do, we will have f [v ] < f [u ] as well. Thus, for any edge (u , v) in the dag, we have f [v ] < f [u ], proving the theorem. Exercises 22.4-1 Show the ordering of vertices produced by T OPOLOGICAL -S ORT when it is run on the dag of Figure 22.8, under the assumption of Exercise 22.3-2. 552 Chapter 22 Elementary Graph Algorithms 22.4-2 Give a linear-time algorithm that takes as input a directed acyclic graph G = ( V , E ) and two vertices s and t , and returns the number of paths from s to t in G . For example, in the directed acyclic graph of Figure 22.8, there are exactly four paths from vertex p to vertex v : pov , por y v , posr y v , and psr y v . (Your algorithm only needs to count the paths, not list them.) 22.4-3 Give an algorithm that determines whether or not a given undirected graph G = ( V , E ) contains a cycle. Your algorithm should run in O ( V ) time, independent of | E |. 22.4-4 Prove or disprove: If a directed graph G contains cycles, then T OPOLOGICAL S ORT(G ) produces a vertex ordering that minimizes the number of “bad” edges that are inconsistent with the ordering produced. 22.4-5 Another way to perform topological sorting on a directed acyclic graph G = ( V , E ) is to repeatedly find a vertex of in-degree 0, output it, and remove it and all of its outgoing edges from the graph. Explain how to implement this idea so that it runs in time O ( V + E ). What happens to this algorithm if G has cycles? 22.5 Strongly connected components We now consider a classic application of depth-first search: decomposing a directed graph into its strongly connected components. This section shows how to do this decomposition using two depth-first searches. Many algorithms that work with directed graphs begin with such a decomposition. After decomposition, the algorithm is run separately on each strongly connected component. The solutions are then combined according to the structure of connections between components. Recall from Appendix B that a strongly connected component of a directed graph G = ( V , E ) is a maximal set of vertices C ⊆ V such that for every pair of vertices u and v in C , we have both u Y v and v Y u ; that is, vertices u and v are reachable from each other. Figure 22.9 shows an example. Our algorithm for finding strongly connected components of a graph G = ( V , E ) uses the transpose of G , which is defined in Exercise 22.1-3 to be the graph G T = ( V , E T ), where E T = {(u , v) : (v, u ) ∈ E }. That is, E T consists of the edges of G with their directions reversed. Given an adjacency-list representation of G , the time to create G T is O ( V + E ). It is interesting to observe that G and G T have 22.5 Strongly connected components 553 a 13/14 (a) 12/15 e a b 11/16 c 1/10 d 8/9 3/4 f b 2/7 g c 5/6 h d (b) e f g h cd (c) abe fg h Figure 22.9 (a) A directed graph G . The strongly connected components of G are shown as shaded regions. Each vertex is labeled with its discovery and finishing times. Tree edges are shaded. (b) The graph G T , the transpose of G . 
The depth-first forest computed in line 3 of S TRONGLY-C ONNECTED C OMPONENTS is shown, with tree edges shaded. Each strongly connected component corresponds to one depth-first tree. Vertices b, c, g , and h , which are heavily shaded, are the roots of the depthfirst trees produced by the depth-first search of G T . (c) The acyclic component graph G SCC obtained by contracting all edges within each strongly connected component of G so that only a single vertex remains in each component. exactly the same strongly connected components: u and v are reachable from each other in G if and only if they are reachable from each other in G T . Figure 22.9(b) shows the transpose of the graph in Figure 22.9(a), with the strongly connected components shaded. The following linear-time (i.e., ( V + E )-time) algorithm computes the strongly connected components of a directed graph G = ( V , E ) using two depth-first searches, one on G and one on G T . 554 Chapter 22 Elementary Graph Algorithms S TRONGLY-C ONNECTED -C OMPONENTS (G ) 1 call DFS(G ) to compute finishing times f [u ] for each vertex u 2 compute G T 3 call DFS(G T ), but in the main loop of DFS, consider the vertices in order of decreasing f [u ] (as computed in line 1) 4 output the vertices of each tree in the depth-first forest formed in line 3 as a separate strongly connected component The idea behind this algorithm comes from a key property of the component graph G SCC = ( V SCC , E SCC ), which we define as follows. Suppose that G has strongly connected components C 1 , C2 , . . . , Ck . The vertex set V SCC is {v1 , v2 , . . . , vk }, and it contains a vertex vi for each strongly connected component Ci of G . There is an edge (vi , v j ) ∈ E SCC if G contains a directed edge (x , y ) for some x ∈ C i and some y ∈ C j . Looked at another way, by contracting all edges whose incident vertices are within the same strongly connected component of G , the resulting graph is G SCC . Figure 22.9(c) shows the component graph of the graph in Figure 22.9(a). The key property is that the component graph is a dag, which the following lemma implies. Lemma 22.13 Let C and C be distinct strongly connected components in directed graph G = ( V , E ), let u , v ∈ C , let u , v ∈ C , and suppose that there is a path u Y u in G . Then there cannot also be a path v Y v in G . Proof If there is a path v Y v in G , then there are paths u Y u Y v and v Y v Y u in G . Thus, u and v are reachable from each other, thereby contradicting the assumption that C and C are distinct strongly connected components. We shall see that by considering vertices in the second depth-first search in decreasing order of the finishing times that were computed in the first depth-first search, we are, in essence, visiting the vertices of the component graph (each of which corresponds to a strongly connected component of G ) in topologically sorted order. Because S TRONGLY-C ONNECTED -C OMPONENTS performs two depth-first searches, there is the potential for ambiguity when we discuss d [u ] or f [u ]. In this section, these values always refer to the discovery and finishing times as computed by the first call of DFS, in line 1. We extend the notation for discovery and finishing times to sets of vertices. If U ⊆ V , then we define d (U ) = min u ∈U {d [u ]} and f (U ) = maxu ∈U { f [u ]}. That is, d (U ) and f (U ) are the earliest discovery time and latest finishing time, respectively, of any vertex in U . 
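Before turning to the correctness argument, it may help to see the whole procedure in runnable form. The sketch below is a Python rendering of STRONGLY-CONNECTED-COMPONENTS under the usual dictionary-of-lists assumption: one DFS of G records vertices in order of increasing finishing time (the reversed order is the list TOPOLOGICAL-SORT of Section 22.4 would return when G is a dag), the transpose G T is built explicitly, and a second DFS of G T starts its trees in order of decreasing finishing time. The sample edge list at the end is my reading of Figure 22.9(a) and should be treated as illustrative.

    def strongly_connected_components(adj):
        """adj maps each vertex of a directed graph to its list of successors.
        Returns the strongly connected components, each as a list of vertices,
        in the order the second depth-first search discovers them."""
        # First DFS of G: append each vertex to `order` as it finishes.
        visited, order = set(), []
        def visit(u):
            visited.add(u)
            for v in adj[u]:
                if v not in visited:
                    visit(v)
            order.append(u)                    # u finishes here
        for u in adj:
            if u not in visited:
                visit(u)

        # Build the transpose G^T by reversing every edge.
        adj_t = {u: [] for u in adj}
        for u in adj:
            for v in adj[u]:
                adj_t[v].append(u)

        # Second DFS, on G^T, choosing roots by decreasing finishing time.
        visited.clear()
        sccs = []
        def visit_t(u, component):
            visited.add(u)
            component.append(u)
            for v in adj_t[u]:
                if v not in visited:
                    visit_t(v, component)
        for u in reversed(order):
            if u not in visited:
                sccs.append([])
                visit_t(u, sccs[-1])
        return sccs

    # One reading of the graph of Figure 22.9(a); the exact edge list is illustrative.
    g = {'a': ['b'], 'b': ['c', 'e', 'f'], 'c': ['d', 'g'], 'd': ['c', 'h'],
         'e': ['a', 'f'], 'f': ['g'], 'g': ['f', 'h'], 'h': ['h']}
    print(strongly_connected_components(g))    # components {a, b, e}, {c, d}, {f, g}, {h}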
22.5 Strongly connected components 555 The following lemma and its corollary give a key property relating strongly connected components and finishing times in the first depth-first search. Lemma 22.14 Let C and C be distinct strongly connected components in directed graph G = ( V , E ). Suppose that there is an edge (u , v) ∈ E , where u ∈ C and v ∈ C . Then f (C ) > f (C ). Proof There are two cases, depending on which strongly connected component, C or C , had the first discovered vertex during the depth-first search. If d (C ) < d (C ), let x be the first vertex discovered in C . At time d [x ], all vertices in C and C are white. There is a path in G from x to each vertex in C consisting only of white vertices. Because (u , v) ∈ E , for any vertex w ∈ C , there is also a path at time d [x ] from x to w in G consisting only of white vertices: x Y u → v Y w . By the white-path theorem, all vertices in C and C become descendants of x in the depth-first tree. By Corollary 22.8, f [x ] = f (C ) > f (C ). If instead we have d (C ) > d (C ), let y be the first vertex discovered in C . At time d [ y ], all vertices in C are white and there is a path in G from y to each vertex in C consisting only of white vertices. By the white-path theorem, all vertices in C become descendants of y in the depth-first tree, and by Corollary 22.8, f [ y ] = f (C ). At time d [ y ], all vertices in C are white. Since there is an edge (u , v) from C to C , Lemma 22.13 implies that there cannot be a path from C to C . Hence, no vertex in C is reachable from y . At time f [ y ], therefore, all vertices in C are still white. Thus, for any vertex w ∈ C , we have f [w ] > f [ y ], which implies that f (C ) > f (C ). The following corollary tells us that each edge in G T that goes between different strongly connected components goes from a component with an earlier finishing time (in the first depth-first search) to a component with a later finishing time. Corollary 22.15 Let C and C be distinct strongly connected components in directed graph G = ( V , E ). Suppose that there is an edge (u , v) ∈ E T , where u ∈ C and v ∈ C . Then f (C ) < f (C ). Proof Since (u , v) ∈ E T , we have (v, u ) ∈ E . Since the strongly connected components of G and G T are the same, Lemma 22.14 implies that f (C ) < f (C ). Corollary 22.15 provides the key to understanding why the S TRONGLYC ONNECTED -C OMPONENTS procedure works. Let us examine what happens when we perform the second depth-first search, which is on G T . We start with the strongly connected component C whose finishing time f (C ) is maximum. The 556 Chapter 22 Elementary Graph Algorithms search starts from some vertex x ∈ C , and it visits all vertices in C . By Corollary 22.15, there are no edges in G T from C to any other strongly connected component, and so the search from x will not visit vertices in any other component. Thus, the tree rooted at x contains exactly the vertices of C . Having completed visiting all vertices in C , the search in line 3 selects as a root a vertex from some other strongly connected component C whose finishing time f (C ) is maximum over all components other than C . Again, the search will visit all vertices in C , but by Corollary 22.15, the only edges in G T from C to any other component must be to C , which we have already visited. In general, when the depth-first search of G T in line 3 visits any strongly connected component, any edges out of that component must be to components that were already visited. 
The following theorem formalizes this argument.

Theorem 22.16
STRONGLY-CONNECTED-COMPONENTS(G) correctly computes the strongly connected components of a directed graph G.

Proof  We argue by induction on the number of depth-first trees found in the depth-first search of G^T in line 3 that the vertices of each tree form a strongly connected component. The inductive hypothesis is that the first k trees produced in line 3 are strongly connected components. The basis for the induction, when k = 0, is trivial.

In the inductive step, we assume that each of the first k depth-first trees produced in line 3 is a strongly connected component, and we consider the (k + 1)st tree produced. Let the root of this tree be vertex u, and let u be in strongly connected component C. Because of how we choose roots in the depth-first search in line 3, f[u] = f(C) > f(C′) for any strongly connected component C′ other than C that has yet to be visited. By the inductive hypothesis, at the time that the search visits u, all other vertices of C are white. By the white-path theorem, therefore, all other vertices of C are descendants of u in its depth-first tree. Moreover, by the inductive hypothesis and by Corollary 22.15, any edges in G^T that leave C must be to strongly connected components that have already been visited. Thus, no vertex in any strongly connected component other than C will be a descendant of u during the depth-first search of G^T. Thus, the vertices of the depth-first tree in G^T that is rooted at u form exactly one strongly connected component, which completes the inductive step and the proof.

Here is another way to look at how the second depth-first search operates. Consider the component graph (G^T)^SCC of G^T. If we map each strongly connected component visited in the second depth-first search to a vertex of (G^T)^SCC, the vertices of (G^T)^SCC are visited in the reverse of a topologically sorted order. If we reverse the edges of (G^T)^SCC, we get the graph ((G^T)^SCC)^T. Because ((G^T)^SCC)^T = G^SCC (see Exercise 22.5-4), the second depth-first search visits the vertices of G^SCC in topologically sorted order.

Exercises

22.5-1
How can the number of strongly connected components of a graph change if a new edge is added?

22.5-2
Show how the procedure STRONGLY-CONNECTED-COMPONENTS works on the graph of Figure 22.6. Specifically, show the finishing times computed in line 1 and the forest produced in line 3. Assume that the loop of lines 5–7 of DFS considers vertices in alphabetical order and that the adjacency lists are in alphabetical order.

22.5-3
Professor Deaver claims that the algorithm for strongly connected components can be simplified by using the original (instead of the transpose) graph in the second depth-first search and scanning the vertices in order of increasing finishing times. Is the professor correct?

22.5-4
Prove that for any directed graph G, we have ((G^T)^SCC)^T = G^SCC. That is, the transpose of the component graph of G^T is the same as the component graph of G.

22.5-5
Give an O(V + E)-time algorithm to compute the component graph of a directed graph G = (V, E). Make sure that there is at most one edge between two vertices in the component graph your algorithm produces.
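Once the components are known, the component graph G^SCC defined earlier can itself be assembled in linear expected time. The following Python sketch is one possible approach in the spirit of Exercise 22.5-5 (not the book's solution); it assumes the strongly_connected_components function sketched earlier in this section, and it uses a hash set of component-id pairs so that each edge of G^SCC appears at most once.

# Build G^SCC: one vertex per strongly connected component, and at most one
# edge between any two component vertices.  The hash set makes the running
# time expected, rather than worst-case, linear.

def component_graph(graph):
    components = strongly_connected_components(graph)

    # Map each vertex of G to the index of its component (a vertex of G^SCC).
    scc_of = {}
    for i, component in enumerate(components):
        for u in component:
            scc_of[u] = i

    # Record each inter-component edge at most once.
    scc_edges = set()
    for u in graph:
        for v in graph[u]:
            if scc_of[u] != scc_of[v]:
                scc_edges.add((scc_of[u], scc_of[v]))

    # Adjacency lists of G^SCC, indexed by component number.
    scc_adj = {i: [] for i in range(len(components))}
    for (i, j) in scc_edges:
        scc_adj[i].append(j)
    return components, scc_adj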
22.5-6
Given a directed graph G = (V, E), explain how to create another graph G′ = (V, E′) such that (a) G′ has the same strongly connected components as G, (b) G′ has the same component graph as G, and (c) E′ is as small as possible. Describe a fast algorithm to compute G′.

22.5-7
A directed graph G = (V, E) is said to be semiconnected if, for all pairs of vertices u, v ∈ V, we have u ⇝ v or v ⇝ u. Give an efficient algorithm to determine whether or not G is semiconnected. Prove that your algorithm is correct, and analyze its running time.

Problems

22-1  Classifying edges by breadth-first search
A depth-first forest classifies the edges of a graph into tree, back, forward, and cross edges. A breadth-first tree can also be used to classify the edges reachable from the source of the search into the same four categories.

a. Prove that in a breadth-first search of an undirected graph, the following properties hold:
   1. There are no back edges and no forward edges.
   2. For each tree edge (u, v), we have d[v] = d[u] + 1.
   3. For each cross edge (u, v), we have d[v] = d[u] or d[v] = d[u] + 1.

b. Prove that in a breadth-first search of a directed graph, the following properties hold:
   1. There are no forward edges.
   2. For each tree edge (u, v), we have d[v] = d[u] + 1.
   3. For each cross edge (u, v), we have d[v] ≤ d[u] + 1.
   4. For each back edge (u, v), we have 0 ≤ d[v] ≤ d[u].

22-2  Articulation points, bridges, and biconnected components
Let G = (V, E) be a connected, undirected graph. An articulation point of G is a vertex whose removal disconnects G. A bridge of G is an edge whose removal disconnects G. A biconnected component of G is a maximal set of edges such that any two edges in the set lie on a common simple cycle. Figure 22.10 illustrates these definitions. We can determine articulation points, bridges, and biconnected components using depth-first search. Let G_π = (V, E_π) be a depth-first tree of G.

Figure 22.10  The articulation points, bridges, and biconnected components of a connected, undirected graph for use in Problem 22-2. The articulation points are the heavily shaded vertices, the bridges are the heavily shaded edges, and the biconnected components are the edges in the shaded regions, with a bcc numbering shown.

a. Prove that the root of G_π is an articulation point of G if and only if it has at least two children in G_π.

b. Let v be a nonroot vertex of G_π. Prove that v is an articulation point of G if and only if v has a child s such that there is no back edge from s or any descendant of s to a proper ancestor of v.

c. Let
   low[v] = min { d[v], d[w] : (u, w) is a back edge for some descendant u of v } .
Show how to compute low[v] for all vertices v ∈ V in O(E) time.

d. Show how to compute all articulation points in O(E) time.

e. Prove that an edge of G is a bridge if and only if it does not lie on any simple cycle of G.

f. Show how to compute all the bridges of G in O(E) time.

g. Prove that the biconnected components of G partition the nonbridge edges of G.

h. Give an O(E)-time algorithm to label each edge e of G with a positive integer bcc[e] such that bcc[e] = bcc[e′] if and only if e and e′ are in the same biconnected component.
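For parts (c) and (d) of Problem 22-2, one standard organization is a single depth-first search that maintains d[v] and low[v] together. The Python sketch below is an illustration under the definitions above, not the book's intended solution; it assumes a connected, undirected simple graph given as a dictionary mapping each vertex to the list of its neighbors.

# Compute low[v] for every vertex and collect the articulation points.
# low[v] is the minimum of d[v] and of d[w] over back edges (u, w) with u a
# descendant of v, exactly as defined in part (c).

def articulation_points(graph):
    d = {}                 # discovery times d[v]
    low = {}               # low[v] as defined in Problem 22-2(c)
    parent = {}
    points = set()
    time = [0]

    def dfs(u):
        time[0] += 1
        d[u] = low[u] = time[0]
        children = 0
        for w in graph[u]:
            if w not in d:                  # tree edge (u, w)
                parent[w] = u
                children += 1
                dfs(w)
                low[u] = min(low[u], low[w])
                # A nonroot vertex u is an articulation point if some child w
                # has no back edge from its subtree to a proper ancestor of u.
                if parent[u] is not None and low[w] >= d[u]:
                    points.add(u)
            elif w != parent[u]:            # back edge (u, w)
                low[u] = min(low[u], d[w])
        # The root is an articulation point iff it has at least two children.
        if parent[u] is None and children >= 2:
            points.add(u)

    root = next(iter(graph))
    parent[root] = None
    dfs(root)
    return low, points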
22-3  Euler tour
An Euler tour of a connected, directed graph G = (V, E) is a cycle that traverses each edge of G exactly once, although it may visit a vertex more than once.

a. Show that G has an Euler tour if and only if in-degree(v) = out-degree(v) for each vertex v ∈ V.

b. Describe an O(E)-time algorithm to find an Euler tour of G if one exists. (Hint: Merge edge-disjoint cycles.)

22-4  Reachability
Let G = (V, E) be a directed graph in which each vertex u ∈ V is labeled with a unique integer L(u) from the set {1, 2, ..., |V|}. For each vertex u ∈ V, let R(u) = {v ∈ V : u ⇝ v} be the set of vertices that are reachable from u. Define min(u) to be the vertex in R(u) whose label is minimum, i.e., min(u) is the vertex v such that L(v) = min {L(w) : w ∈ R(u)}. Give an O(V + E)-time algorithm that computes min(u) for all vertices u ∈ V.

Chapter notes

Even [87] and Tarjan [292] are excellent references for graph algorithms. Breadth-first search was discovered by Moore [226] in the context of finding paths through mazes. Lee [198] independently discovered the same algorithm in the context of routing wires on circuit boards. Hopcroft and Tarjan [154] advocated the use of the adjacency-list representation over the adjacency-matrix representation for sparse graphs and were the first to recognize the algorithmic importance of depth-first search. Depth-first search has been widely used since the late 1950's, especially in artificial intelligence programs. Tarjan [289] gave a linear-time algorithm for finding strongly connected components. The algorithm for strongly connected components in Section 22.5 is adapted from Aho, Hopcroft, and Ullman [6], who credit it to S. R. Kosaraju (unpublished) and M. Sharir [276]. Gabow [101] also developed an algorithm for strongly connected components that is based on contracting cycles and uses two stacks to make it run in linear time. Knuth [182] was the first to give a linear-time algorithm for topological sorting.