Lecture 36
CSE 331
Nov 28, 2011

New office: 319 Davis

CSEd Week celebrations: http://csedweek.wordpress.com/ (volunteers needed!)

Warning on blog posts/scribes: AT MOST 3 blog posts (and scribes) per lecture. Volunteers needed for today.

A new experiment today

Weighted Interval Scheduling
Input: n jobs (s_i, t_i, v_i)

Output: A schedule S s.t. no two jobs in S have a conflict

Goal: max Σ_{j in S} v_j

Assume: jobs are sorted by their finish time

Couple more definitions
p(j) = largest i < j s.t. job i does not conflict with job j (p(j) = 0 if no such i exists)

OPT(j) = optimal value on instance 1, .., j

Note that p(j) < j.

Property of OPT
Case 1: j in OPT(j). Then no job that conflicts with j can be in OPT(j), so the rest of the solution is optimal for jobs 1, .., p(j).
Case 2: j not in OPT(j). Then OPT(j) = OPT(j-1).

Hence: OPT(j) = max { v_j + OPT(p(j)), OPT(j-1) }

Given OPT(1), …, OPT(j-1), how can one figure out whether j is in the optimal solution or not?

A recursive algorithm
ComputeOpt(j)
  If j = 0 then return 0
  return max { v_j + ComputeOpt(p(j)), ComputeOpt(j-1) }

Proof of correctness by induction on j: correct for j = 0; by the inductive hypothesis the recursive calls return OPT(p(j)) and OPT(j-1), and OPT(j) = max { v_j + OPT(p(j)), OPT(j-1) }.

Exponential Running Time
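To see the blow-up concretely, here is the recursion above as a runnable sketch (Python; the jobs are hypothetical illustration data, and p is precomputed with binary search), with a counter on the number of recursive calls:

```python
from bisect import bisect_right

# Hypothetical illustration data: jobs sorted by finish time, as (start, finish, value).
jobs = [(0, 3, 2), (1, 5, 4), (4, 6, 4), (2, 8, 7), (7, 9, 2), (3, 10, 1)]
n = len(jobs)
starts = [s for s, f, v in jobs]
finishes = [f for s, f, v in jobs]
values = [v for s, f, v in jobs]

# p[j] (1-indexed): largest i < j whose finish time is at most job j's start,
# i.e. the number of jobs that finish by the time job j starts; 0 if none.
p = [0] * (n + 1)
for j in range(1, n + 1):
    p[j] = bisect_right(finishes, starts[j - 1])

calls = 0  # count recursive calls to expose the exponential blow-up

def compute_opt(j):
    # OPT(j) = max { v_j + OPT(p(j)), OPT(j-1) }, with OPT(0) = 0
    global calls
    calls += 1
    if j == 0:
        return 0
    return max(values[j - 1] + compute_opt(p[j]), compute_opt(j - 1))

opt = compute_opt(n)  # only n+1 distinct OPT values, yet far more calls
```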
[Figure: recursion tree for ComputeOpt(5) in the case p(j) = j-2 for every j. The same subproblems recur: OPT(5) calls OPT(4) and OPT(3), each of those calls OPT(3)/OPT(2) and OPT(2)/OPT(1), and so on. Only 5 distinct OPT values, yet exponentially many recursive calls. Formal proof: exercise.]

A recursive algorithm
MComputeOpt(j)
  If j = 0 then return 0
  If M[j] is not null then return M[j]
  M[j] = max { v_j + MComputeOpt(p(j)), MComputeOpt(j-1) }
  return M[j]

Claim: MComputeOpt(j) = OPT(j). Run time = O(# recursive calls).

Bounding # recursions
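The bound can be checked on a runnable memoized sketch (Python; same style of hypothetical job data as before, with the memo M as a list in which None plays the role of "null", plus a call counter):

```python
from bisect import bisect_right

# Hypothetical illustration data: jobs sorted by finish time, as (start, finish, value).
jobs = [(0, 3, 2), (1, 5, 4), (4, 6, 4), (2, 8, 7), (7, 9, 2), (3, 10, 1)]
n = len(jobs)
starts = [s for s, f, v in jobs]
finishes = [f for s, f, v in jobs]
values = [v for s, f, v in jobs]

p = [0] * (n + 1)
for j in range(1, n + 1):
    p[j] = bisect_right(finishes, starts[j - 1])  # largest non-conflicting i < j

M = [None] * (n + 1)  # M[j] caches OPT(j); None means "not yet assigned"
calls = 0

def m_compute_opt(j):
    global calls
    calls += 1
    if j == 0:
        return 0
    if M[j] is not None:
        return M[j]
    M[j] = max(values[j - 1] + m_compute_opt(p[j]), m_compute_opt(j - 1))
    return M[j]

opt = m_compute_opt(n)
# Each M[j] is assigned at most once, and only an assignment spawns two new
# calls, so the total number of calls is O(n).
```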
MComputeOpt(j)
  If j = 0 then return 0
  If M[j] is not null then return M[j]
  M[j] = max { v_j + MComputeOpt(p(j)), MComputeOpt(j-1) }
  return M[j]

Whenever a recursive call is made, an M value is assigned. At most n values of M can be assigned, so O(n) calls overall.

Property of OPT

OPT(j) = max { v_j + OPT(p(j)), OPT(j-1) }

Given OPT(1), …, OPT(j-1), one can compute OPT(j).

Recursion + memory = Iteration
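As a runnable sketch of that idea (Python; the jobs are hypothetical illustration data and p is precomputed with binary search), the M table can be filled in increasing order of j with a plain loop:

```python
from bisect import bisect_right

# Hypothetical illustration data: jobs sorted by finish time, as (start, finish, value).
jobs = [(0, 3, 2), (1, 5, 4), (4, 6, 4), (2, 8, 7), (7, 9, 2), (3, 10, 1)]
n = len(jobs)
starts = [s for s, f, v in jobs]
finishes = [f for s, f, v in jobs]
values = [v for s, f, v in jobs]

p = [0] * (n + 1)
for j in range(1, n + 1):
    p[j] = bisect_right(finishes, starts[j - 1])

# Fill M[0..n] in increasing order of j; the entries M[p(j)] and M[j-1] needed
# at step j are always ready, so one O(n) loop replaces the recursion.
M = [0] * (n + 1)
for j in range(1, n + 1):
    M[j] = max(values[j - 1] + M[p[j]], M[j - 1])
# M[n] is the optimal value OPT(n)
```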
IterativeComputeOpt

Iteratively compute the OPT(j) values:

  M[0] = 0
  For j = 1, …, n
    M[j] = max { v_j + M[p(j)], M[j-1] }

Then M[j] = OPT(j) for every j, and the run time is O(n).

Reading Assignment
Sec 6.1, 6.2 of [KT]

When to use Dynamic Programming

[Photo: Richard Bellman]

- There are polynomially many subproblems
- The optimal solution can be computed from solutions to subproblems
- There is an ordering among subproblems that allows for an iterative solution

Shortest Path Problem
Input: A (directed) graph G=(V,E), where every edge e has a cost c_e (which can be < 0), and a node t in V

Output: A shortest path from every s to t

[Figure: example graph in which a negative cycle makes the shortest path from s to t have cost negative infinity.]

Assume that G has no negative cycle.

Today's agenda
Dynamic Program for shortest path

May the Bellman force be with you...
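As a preview of that dynamic program, a minimal Bellman-Ford-style relaxation sketch (Python; the graph is made-up illustration data, and d[v] tracks the cheapest known v-to-t cost):

```python
# Made-up illustration graph: directed edges (u, v, cost); costs may be
# negative, but there is no negative cycle (as the lecture assumes).
edges = [("s", "a", 4), ("s", "b", 2), ("b", "a", -1), ("a", "t", 1), ("b", "t", 5)]
nodes = {"s", "a", "b", "t"}
INF = float("inf")

# d[v] = cost of the cheapest v-to-t path found so far.
d = {v: INF for v in nodes}
d["t"] = 0

# With no negative cycles, some shortest path uses at most |V|-1 edges,
# so |V|-1 rounds of relaxing every edge suffice.
for _ in range(len(nodes) - 1):
    for u, v, c in edges:
        if c + d[v] < d[u]:
            d[u] = c + d[v]

# d["s"] now holds the shortest s-to-t cost: 2, via s -> b -> a -> t
```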
Fall '11, RUDRA