
# Lec13_14_15_DynamicProgramming - Dynamic Programming


The basic idea is drawn from the intuition behind divide and conquer: one implicitly explores the space of all possible solutions by decomposing the problem into a series of subproblems, and then building up correct solutions to larger and larger subproblems.
The term Dynamic Programming comes from control theory, not computer science. "Programming" refers to the use of tables (arrays) to construct a solution. The technique is used extensively in Operations Research, taught in the Math department.

The Main Idea
In dynamic programming we usually reduce time by increasing the amount of space. We solve the problem by solving subproblems of increasing size and (usually) saving each optimal solution in a table. The table is then used for finding the optimal solutions to larger problems. Time is saved since each subproblem is solved only once.
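As a minimal sketch of the table idea, here is a bottom-up computation of the binomial coefficient via Pascal's rule. The function name and layout are my own; the lecture only names the binomial coefficient as an example.

```python
def binomial(n, k):
    # C[i][j] holds "i choose j"; each entry is computed exactly once
    # from smaller subproblems using Pascal's rule:
    #   C(i, j) = C(i-1, j-1) + C(i-1, j)
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1          # base cases on the table's border
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]
```

Note the space/time trade: the table costs O(nk) space, but every entry is filled once, so the whole computation is O(nk) time instead of the exponential time of the naive recursion.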
When is Dynamic Programming used?
Dynamic programming is used for problems in which an optimal solution to the original problem can be found from optimal solutions to subproblems of the original problem. Usually, a recursive algorithm can solve the problem, but that algorithm computes the optimal solution to the same subproblem more than once and is therefore slow. The two examples (Fibonacci and the binomial coefficient) have such a recursive algorithm. Dynamic programming reduces the time by computing the optimal solution of a subproblem only once and saving its value. The saved value is then used whenever the same subproblem needs to be solved again.

Principle of Optimality (Optimal Substructure)
The principle of optimality applies to a problem, not to an algorithm. A large number of optimization problems satisfy this principle. Principle of optimality: given an optimal sequence of decisions or choices, each subsequence must also be optimal.
Principle of optimality - shortest path problem
Problem: given a graph G and vertices s and t, find a shortest path in G from s to t.
Theorem: a subpath P' (from s' to t') of a shortest path P is a shortest path from s' to t' in the subgraph G' induced by P'. (Subpaths are paths that start or end at an intermediate vertex of P.)
Proof: if P' were not a shortest path from s' to t' in G', we could replace the subpath from s' to t' in P with the shortest path in G' from s' to t'. The result would be a shorter path from s to t than P, contradicting our assumption that P is a shortest path from s to t.
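This optimal-substructure property is exactly what a shortest-path DP exploits: the distance to each vertex is built from the (already optimal) distances to its predecessors. As a minimal sketch, here is the recurrence on a directed acyclic graph, where vertices can be processed in topological order; the edge list and vertex names are made up for illustration, and the topological order is assumed to be given.

```python
import math

def dag_shortest_paths(edges, topo_order, s):
    # dist[v] = length of a shortest path from s to v.
    # Optimal substructure gives the recurrence:
    #   dist[v] = min over edges (u, v, w) of dist[u] + w
    dist = {v: math.inf for v in topo_order}
    dist[s] = 0
    incoming = {v: [] for v in topo_order}
    for u, v, w in edges:
        incoming[v].append((u, w))
    for v in topo_order:            # subproblems in increasing order
        for u, w in incoming[v]:
            dist[v] = min(dist[v], dist[u] + w)
    return dist
```

Each dist[u] is finalized before any edge out of u is relaxed, so every subproblem is solved exactly once, giving O(V + E) time.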

Principle of optimality - example
[Figure: a weighted graph G with vertices a, b, c, d, e, f; the path P = {(a,b), (b,c), (c,d), (d,e)} and its subpath P' = {(c,d), (d,e)} in the induced subgraph G'.]
P' must be a shortest path from c to e in G', otherwise P cannot be a shortest path from a to e in G.
Principle of optimality - MST problem (minimum spanning tree)
Problem: given an undirected connected graph G, find a minimum spanning tree.
Theorem: any subtree T' of an MST T of G is an MST of the subgraph G' of G induced by the vertices of T'.
