MIT1_204S10_lec13
1.204 Lecture 13
Dynamic programming: Method; Resource allocation

Introduction

• Divide and conquer starts with the entire problem, divides it into subproblems, and then combines them into a solution
  – This is a top-down approach
• Dynamic programming starts with the smallest, simplest subproblems and combines them in stages to obtain solutions to larger subproblems, until we get the solution to the original problem
  – This is a bottom-up approach
• Dynamic programming is used much more than divide and conquer
  – It is more flexible and controllable
  – It is more efficient on most problems, since it must consider far fewer combinations

Principle of optimality

• "Principle of optimality": in an optimal sequence of decisions or choices, each subsequence must also be optimal
  – For some problems, an optimal sequence may be found by making decisions one at a time and never making a mistake
    • True for greedy algorithms (except label-correcting algorithms)
  – For many problems, it is not possible to make stepwise decisions based only on local information so that the sequence of decisions is optimal
• One way to solve such problems is to enumerate all possible decision sequences and choose the best
• Dynamic programming can drastically reduce the amount of computation by avoiding sequences that cannot be optimal, by the "principle of optimality"

Project selection example

• Suppose we have:
  – A $4 million budget
  – 3 possible projects (e.g., flood control)
• Each project can be funded in $1 million increments, from $0 to $4 million
• Each increment produces a different marginal benefit
  – Dynamic programming problems are usually discrete, not continuous
• We want to find the plan that produces the maximum benefit
• Stages are the number of decisions to be made
  – We have 3 stages, since we have 3 projects
• States are the number of distinct possibilities
  – At each stage there are 5 states ($0, $1, $2, $3, $4 million)

Project selection formulation

• We build a multistage graph to represent this problem:
  – A source node at the start of the graph, representing a 'null' initial stage
  – A set of nodes at each stage, one for each state
  – A sink node at the end of the graph, which is a collapsed representation of the final state
• Each node is characterized by V(i,j):
  – V(i,j) is the value (benefit) obtained up to (but not including) stage i by committing j resources
  – Each node also stores its predecessor node in P(i)
• Each arc is characterized by E(m,n):
  – E(m,n) is the value obtained by spending n resources on project m

Project selection data

  Investment ($M)   Project 0 benefit   Project 1 benefit   Project 2 benefit
        1                  6                   5                   1
        2                  8                  11                   4
        3                  8                  16                   5
        4                 14                  17                   6

• In theory, projects could have dependencies, but in practice it's an improbable model. In the example above:
  – Project 1's benefits could depend on project 0's investment
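The stage/state recurrence described above can be sketched in code. This is a minimal bottom-up illustration, not the course's own implementation; the `benefit` table plays the role of E(m,n) and was recovered from a garbled extraction of the slide, so treat the specific numbers as assumptions.

```python
# Bottom-up stage/state DP for the project-selection example (a sketch).
# benefit[m][n] = benefit of spending n units on project m, i.e. E(m, n);
# values recovered from the garbled slide table, so treat them as assumptions.

BUDGET = 4  # $4 million, in $1M increments

benefit = [
    [0, 6, 8, 8, 14],    # project 0
    [0, 5, 11, 16, 17],  # project 1
    [0, 1, 4, 5, 6],     # project 2
]

def allocate(benefit, budget):
    """Return (max total benefit, spend per project)."""
    # V[j] = best benefit achievable with j units committed to the
    # projects considered so far (the V(i, j) node values)
    V = [0] * (budget + 1)
    choice = []  # choice[m][j] = optimal spend on project m in state j
    for b in benefit:                    # one stage per project
        new_V = [0] * (budget + 1)
        best = [0] * (budget + 1)
        for j in range(budget + 1):      # state after this stage
            for n in range(j + 1):       # decision: spend n on this project
                v = V[j - n] + b[n]
                if v > new_V[j]:
                    new_V[j], best[j] = v, n
        V = new_V
        choice.append(best)
    # Walk the stored decisions backward, like the predecessor labels P(i)
    plan, j = [], budget
    for m in range(len(benefit) - 1, -1, -1):
        n = choice[m][j]
        plan.append(n)
        j -= n
    plan.reverse()
    return V[budget], plan

value, plan = allocate(benefit, BUDGET)
print(value, plan)  # 22, [1, 3, 0]: $1M to project 0, $3M to project 1
```

Each pass over `benefit` is one stage of the multistage graph: `new_V[j]` is the best arc value plus predecessor value entering state j, which is the principle of optimality applied stage by stage.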
This note was uploaded on 12/04/2011 for the course ESD 1.204, taught by Professor George Kocur during the Spring '10 term at MIT.
