    if there is a machine j with no task conflicting with task i then
        schedule task i on machine j
    else
        m ← m + 1    {add a new machine}
        schedule task i on machine m

Algorithm 5.3: A greedy algorithm for the task scheduling problem.

262    Chapter 5. Fundamental Techniques

Correctness of Greedy Task Scheduling

In the algorithm TaskSchedule, we begin with no machines and we consider the tasks in a greedy fashion, ordered by their start times. For each task i, if we have a machine that can handle task i, then we schedule i on that machine. Otherwise, we allocate a new machine, schedule i on it, and repeat this greedy selection process until we have considered all the tasks in T. The fact that the TaskSchedule algorithm works correctly is established by the following theorem.

Theorem 5.2: Given a set of n tasks specified by their start and finish times, Algorithm TaskSchedule produces a schedule of the tasks with the minimum number of machines in O(n log n) time.

Proof: We can show that the simple greedy algorithm, TaskSchedule, finds an optimal schedule on the minimum number of machines by a simple contradiction argument. So, suppose the algorithm does not work. That is, suppose the algorithm finds a nonconflicting schedule using k machines but there is a nonconflicting schedule that uses only k − 1 machines. Let k be the last machine allocated by our algorithm, and let i be the first task scheduled on k. By the structure of the algorithm, when we scheduled i, each of the machines 1 through k − 1 contained tasks that conflict with i. Since they conflict with i and because we consider tasks ordered by their start times, all the tasks currently conflicting with task i must have start times less than or equal to s_i, the start time of i, and finish times after s_i. In other words, these tasks not only conflict with task i, they all conflict with each other.
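The greedy scheduling rule above can be sketched in Python. This is not the book's code; it is a minimal sketch that keeps a min-heap of machine finish times (the standard way to reach the O(n log n) bound), and it assumes a task finishing at time t does not conflict with one starting at t:

```python
import heapq

def task_schedule(tasks):
    """Greedy task scheduling: return the minimum number of machines
    needed so that no two tasks on the same machine overlap.

    tasks: list of (start, finish) pairs with start < finish.
    """
    # Consider tasks ordered by start time (the greedy order from the text).
    tasks = sorted(tasks)
    finish_heap = []  # min-heap of finish times, one entry per machine
    for start, finish in tasks:
        if finish_heap and finish_heap[0] <= start:
            # Some machine is free by time `start`: reuse it by
            # replacing its finish time with this task's finish time.
            heapq.heapreplace(finish_heap, finish)
        else:
            # Every machine conflicts with this task: allocate a new one.
            heapq.heappush(finish_heap, finish)
    return len(finish_heap)
```

Sorting costs O(n log n) and each of the n heap operations costs O(log n), matching the bound claimed in Theorem 5.2.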
But this means we have k tasks in our set T that conflict with each other, which implies it is impossible for us to schedule all the tasks in T using only k − 1 machines. Therefore, k is the minimum number of machines needed to schedule all the tasks in T. We leave as a simple exercise (R-5.2) the job of showing how to implement Algorithm TaskSchedule in O(n log n) time.

We consider several other applications of the greedy method in this book, including two problems in string compression (Section 9.3), where the greedy approach gives rise to a construction known as Huffman coding, and graph algorithms (Section 7.3), where the greedy approach is used to solve shortest path and minimum spanning tree problems. The next technique we discuss is the divide-and-conquer technique, which is a general methodology for using recursion to design efficient algorithms.

5.2 Divide-and-Conquer

The divide-and-conquer technique involves solving a particular computational problem by dividing it into one or more subproblems of smaller size, recursively solving each subproblem, and then "merging" or "marrying" the solutions to the subproblem(s) to produce a solution to the original problem. We can model the divide-and-conquer approach by using a parameter n to denote the size of the original problem, and letting S(n) denote this problem. We solve the problem S(n) by solving a collection of k subproblems S(n_1), S(n_2), . . . , S(n_k), where n_i < n for i = 1, . . . , k, and then merging the solutions to these subproblems. For example, in the classic merge-sort algorithm (Section 4.1), S(n) denotes the problem of sorting a sequence of n numbers. Merge-sort solves problem S(n) by dividing it into two subproblems S(⌊n/2⌋) and S(⌈n/2⌉), recursively solving these two subproblems, and then merging the resulting sorted sequences into a single sorted sequence that yields a solution to S(n). The merging step takes O(n) time.
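The merge-sort scheme just described can be sketched directly, a divide step, two recursive calls, and an O(n)-time merge (a sketch, not the book's Section 4.1 code):

```python
def merge_sort(seq):
    """Divide-and-conquer sort: split the input, recurse on each half,
    then merge the two sorted halves in O(n) time."""
    n = len(seq)
    if n < 2:
        return list(seq)  # base case: already sorted
    mid = n // 2
    left = merge_sort(seq[:mid])   # subproblem S(floor(n/2))
    right = merge_sort(seq[mid:])  # subproblem S(ceil(n/2))
    # Merge step: repeatedly take the smaller front element.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])   # one of these two tails is empty
    merged.extend(right[j:])
    return merged
```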
Thus, the total running time of the merge-sort algorithm is O(n log n). As with the merge-sort algorithm, the general divide-and-conquer technique can be used to build algorithms that have fast running times.

5.2.1 Divide-and-Conquer Recurrence Equations

To analyze the running time of a divide-and-conquer algorithm we utilize a recurrence equation (Section 1.1.4). That is, we let a function T(n) denote the running time of the algorithm on an input of size n, and characterize T(n) using an equation that relates T(n) to values of the function T for problem sizes smaller than n. In the case of the merge-sort algorithm, we get the recurrence equation

    T(n) = { b              if n < 2
           { 2T(n/2) + bn   if n ≥ 2,

for some constant b > 0, making the simplifying assumption that n is a power of 2. In fact, throughout this section, we make the simplifying assumption that n is an appropriate power, so that we can avoid using floor and ceiling functions. Every asymptotic statement we make about recurrence equations will still be true, even if we relax this assumption, but justifying this fact formally involves long and boring proofs. As we observed above, we can show that T(n) is O(n log n) in this case. In general, however, we will possibly get a...
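For powers of 2, the recurrence above can be checked numerically against its closed form T(n) = bn log₂ n + bn (a small sketch; the function name `T` simply mirrors the text's notation):

```python
def T(n, b=1):
    """Evaluate the merge-sort recurrence:
    T(n) = b if n < 2, else 2*T(n/2) + b*n,
    assuming n is a power of 2 as in the text."""
    if n < 2:
        return b
    return 2 * T(n // 2, b) + b * n

# For n = 2**k, unrolling the recurrence gives
# T(n) = b*n*k + b*n, which is O(n log n).
```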