
# Dynamic Programming II


Many of the slides are from Prof. Plaisted's resources at the University of North Carolina at Chapel Hill.

## Dynamic Programming

Similar to divide-and-conquer, dynamic programming breaks a problem down into smaller sub-problems that are solved recursively. In contrast to divide-and-conquer, DP is applicable when the sub-problems are not independent, i.e. when sub-problems share sub-sub-problems. It solves every sub-sub-problem just once and saves the result in a table to avoid duplicated computation.
## Elements of DP Algorithms

- Sub-structure: decompose the problem into smaller sub-problems. Express the solution of the original problem in terms of solutions to the smaller problems.
- Table-structure: store the answers to the sub-problems in a table, because sub-problem solutions may be used many times.
- Bottom-up computation: combine solutions of smaller sub-problems to solve larger sub-problems, eventually arriving at a solution to the complete problem.
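The three elements above can be illustrated with a minimal bottom-up sketch. Fibonacci is used here purely as an illustration; it is not one of the slides' examples:

```python
def fib(n):
    """Bottom-up DP: each sub-problem fib(i) is solved once and stored."""
    if n < 2:
        return n
    table = [0] * (n + 1)      # table-structure: answers to sub-problems
    table[1] = 1
    for i in range(2, n + 1):  # bottom-up computation
        # sub-structure: fib(i) expressed via smaller sub-problems
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib(10))  # 55
```

Without the table, the naive recursion would recompute shared sub-sub-problems exponentially many times.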

## Applicability to Optimization Problems

- Optimal sub-structure (principle of optimality): for the global problem to be solved optimally, each sub-problem should be solved optimally. This is often violated when sub-problems overlap: by being "less optimal" on one sub-problem, we may make big savings on another.
- Small number of sub-problems: many NP-hard problems can be formulated as DP problems, but these formulations are not efficient, because the number of sub-problems is exponentially large. Ideally, the number of sub-problems should be at most polynomial.
## Optimized Chain Operations

Determine the optimal sequence for performing a series of operations. (This general class of problem is important in compiler design for code optimization and in databases for query optimization.) For example, given a series of matrices $A_1 \dots A_n$, we can "parenthesize" the expression however we like, since matrix multiplication is associative (but not commutative). Multiplying a $p \times q$ matrix $A$ by a $q \times r$ matrix $B$ yields a $p \times r$ matrix $C$ (the number of columns of $A$ must equal the number of rows of $B$).

## Matrix Multiplication

If $C = AB$ for a $p \times q$ matrix $A$ and a $q \times r$ matrix $B$, then in particular, for $1 \le i \le p$ and $1 \le j \le r$,

$$C[i,j] = \sum_{k=1}^{q} A[i,k] \, B[k,j].$$

Each entry of $C$ takes $q$ scalar multiplications, so computing $C$ takes $p \cdot q \cdot r$ scalar multiplications in total.
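A direct transcription of this formula, as a sketch using plain nested lists (not any particular matrix library; the function name is mine):

```python
def mat_mult(A, B):
    """Multiply a p x q matrix A by a q x r matrix B.

    C[i][j] = sum over k of A[i][k] * B[k][j], so the total work is
    p * q * r scalar multiplications.
    """
    p, q, r = len(A), len(B), len(B[0])
    # number of columns of A must equal number of rows of B
    assert all(len(row) == q for row in A)
    return [[sum(A[i][k] * B[k][j] for k in range(q)) for j in range(r)]
            for i in range(p)]

print(mat_mult([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```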
## Chain Matrix Multiplication

Given a sequence of matrices $A_1 A_2 \dots A_n$ and dimensions $p_0, p_1, \dots, p_n$, where $A_i$ has dimension $p_{i-1} \times p_i$, determine the multiplication sequence that minimizes the number of operations. The algorithm does not perform the multiplications; it just figures out the best order in which to perform them.
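The standard DP for this problem can be sketched as follows (function and variable names are mine, not the slides'). A table `m[i][j]` holds the minimum cost of computing $A_i \dots A_j$, filled bottom-up by increasing chain length:

```python
def matrix_chain_order(p):
    """Given dimensions p (A_i is p[i-1] x p[i]), return the minimum
    number of scalar multiplications and the optimal split table."""
    n = len(p) - 1                                # number of matrices
    m = [[0] * (n + 1) for _ in range(n + 1)]     # m[i][j]: min cost of A_i..A_j
    s = [[0] * (n + 1) for _ in range(n + 1)]     # s[i][j]: best outermost split
    for length in range(2, n + 1):                # chain length, bottom-up
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):                 # try each outermost split
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k
    return m[1][n], s
```

For dimensions `p = [5, 4, 6, 2]` (three matrices, matching the example on the next slide), `matrix_chain_order(p)[0]` returns 88.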

## Example: CMM

Consider three matrices: $A_1$ is $5 \times 4$, $A_2$ is $4 \times 6$, and $A_3$ is $6 \times 2$.

- Mult[$(A_1 A_2) A_3$] = (5·4·6) + (5·6·2) = 180
- Mult[$A_1 (A_2 A_3)$] = (4·6·2) + (5·4·2) = 88

Even for this small example, considerable savings can be achieved by reordering the evaluation sequence.
## Naive Algorithm

If we have just one item, there is only one way to parenthesize. If we have $n$ items, there are $n-1$ places where the outermost pair of parentheses can split the list: after the first item, after the second, and so on. Counting recursively over all splits, the number of parenthesizations is the $(n-1)$-st Catalan number, which grows exponentially in $n$, so brute-force enumeration is infeasible.


## This note was uploaded on 12/03/2011 for the course COT 5407 taught by Professor Staff during the Fall '08 term at FIU.
