SHEN'S CLASS NOTES

Chapter 16  Greedy Algorithms

Like dynamic programming, greedy algorithms are used to solve optimization problems.

Examples of optimization problems:
(1) Find the largest number among n numbers.
(2) Find the minimum spanning tree (MST) of a given graph.
(3) Find the shortest path from vertex a to vertex z in a graph.

A greedy algorithm also works in stages:
(1) Initially, the greedy algorithm provides a simple partial solution (or a feasible solution) to the problem. For example, it can start with a single vertex for the MST problem.
(2) At each stage, the greedy algorithm grows the current feasible or partial solution from the previous stage into a larger, better, or more complete solution.

After a number of stages, the algorithm stops when:
(1) An optimal solution is obtained in O(f(n)) time. In this case, we say that the algorithm solves the problem in O(f(n)) time.
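Example (1) above, finding the largest of n numbers, already fits this staged pattern: start with a trivial partial solution and grow it one step at a time. A minimal Python sketch (the function name is ours, for illustration):

```python
def find_max(numbers):
    # Stage 0: a simple partial solution -- the first element alone.
    best = numbers[0]
    # Each stage grows the solution by considering one more number
    # and greedily keeping the larger of the two.
    for x in numbers[1:]:
        if x > best:
            best = x
    return best
```

Here the "greedy choice" at each stage is local and never reconsidered, which is the hallmark of the method; after n - 1 stages the algorithm stops with an optimal solution in O(n) time.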
(2) A good but sub-optimal solution is obtained. In this case, we say that the algorithm is an approximation algorithm.

Note that greedy algorithms differ from dynamic programming in that a greedy algorithm usually grows only one partial solution at each stage, whereas dynamic programming develops a set of solutions at each stage and dynamically determines which solutions from previous stages should be used. Moreover, dynamic programming must solve all smaller problems optimally. However, dynamic programming and greedy algorithms do share a common idea: solve a large problem by solving smaller problems. This idea is also shared by divide-and-conquer algorithms, which take a top-down approach.

16.1 An Activity-Selection Problem

Suppose we have a set of n proposed activities: S = {a_1, a_2, ..., a_n}. Each activity a_i has a start time s_i and a finish time f_i, where 0 ≤ s_i < f_i < ∞. Moreover, we assume that all these activities share a common resource. Therefore, if a_i is selected, it will take place and occupy the resource during the time interval [s_i, f_i).

Definition 16.1 Two activities a_i and a_j are compatible if the intervals [s_i, f_i) and [s_j, f_j) do not overlap, that is, f_i ≤ s_j or f_j ≤ s_i.
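Because the intervals are half-open, two activities where one finishes exactly when the other starts are compatible. Definition 16.1 translates directly into code; a small sketch (the function name is ours, for illustration):

```python
def compatible(a, b):
    """Each activity is a (start, finish) pair occupying the
    half-open interval [start, finish). Two activities are
    compatible iff their intervals do not overlap, i.e.
    f_i <= s_j or f_j <= s_i (Definition 16.1)."""
    (s_i, f_i), (s_j, f_j) = a, b
    return f_i <= s_j or f_j <= s_i
```

For example, activities occupying [1, 4) and [4, 6) are compatible: the first releases the resource at time 4, exactly when the second acquires it.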
Definition 16.2 The activity-selection problem is to select a maximum-size subset of mutually compatible activities.

Example. Consider the following set S of activities.

    i     1   2   3   4   5   6   7   8   9   10  11
    s_i   1   3   0   5   3   5   6   8   8   2   12
    f_i   4   5   6   7   8   9   10  11  12  13  14

We notice that {a_3, a_9, a_11} are three mutually compatible activities (Fig. 16-1). However, {a_3, a_9, a_11} is not the largest such set. An optimal one is {a_1, a_4, a_8, a_11} (Fig. 16-2).

So, how do we solve the problem? There are many ways to solve it.
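The notes leave the solution open at this point. One standard greedy approach is to sort the activities by finish time and repeatedly pick the next activity whose start time is at or after the finish time of the last activity chosen. A minimal Python sketch, using the example data from the table above (the function name and the dict representation are ours, for illustration):

```python
def select_activities(activities):
    """Greedy earliest-finish-time selection.
    activities: dict mapping a name to a (start, finish) pair."""
    chosen = []
    last_finish = 0
    # Consider activities in order of increasing finish time.
    for name, (s, f) in sorted(activities.items(), key=lambda kv: kv[1][1]):
        # Greedy choice: take the activity if it is compatible with
        # everything chosen so far, i.e. it starts no earlier than
        # the finish time of the last chosen activity.
        if s >= last_finish:
            chosen.append(name)
            last_finish = f
    return chosen

# The eleven activities from the example table.
S = {
    "a1": (1, 4),  "a2": (3, 5),   "a3": (0, 6),  "a4": (5, 7),
    "a5": (3, 8),  "a6": (5, 9),   "a7": (6, 10), "a8": (8, 11),
    "a9": (8, 12), "a10": (2, 13), "a11": (12, 14),
}
```

On this input the sketch selects a_1, a_4, a_8, a_11, matching the optimal set shown in Fig. 16-2. Intuitively, finishing as early as possible leaves the resource free for as many remaining activities as possible.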

This note was uploaded on 04/12/2008 for the course CS 592 taught by Professor Shen during the Fall '05 term at the University of Missouri-Kansas City.