Dynamic Programming Pattern
Problem
Many problems exhibit a natural optimal substructure: by optimally solving a
sequence of local subproblems, one can arrive at a globally optimal solution.
There can also be significant parallelism in solving independent local
subproblems. How can we organize data and computation to arrive at the
globally optimal solution efficiently?
Context
In many problems, such as finding the critical path in circuit timing analysis,
finding the most likely sequence of signals in a symbol state space, or finding
the minimum edit distance between two strings, the solution space is exponential
in the size of the input: one could, in principle, concurrently check an
exponential number of alternative solutions and compare them to find the optimal
solution to the problem.
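As a concrete illustration of this exponential solution space, consider the minimum edit distance example named above. A sketch of the direct recursion (the function name is illustrative, not from the source):

```python
def edit_distance_naive(a: str, b: str) -> int:
    """Minimum edit distance by direct recursion.

    The same subproblems are re-solved many times, so the running
    time grows exponentially with the input length.
    """
    if not a:
        return len(b)  # insert the rest of b
    if not b:
        return len(a)  # delete the rest of a
    if a[0] == b[0]:
        return edit_distance_naive(a[1:], b[1:])  # characters match
    return 1 + min(
        edit_distance_naive(a[1:], b),      # delete from a
        edit_distance_naive(a, b[1:]),      # insert into a
        edit_distance_naive(a[1:], b[1:]),  # substitute
    )
```

Each call branches into up to three recursive calls, which is what makes the unstructured search exponential.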
By imposing a computation sequence based on the problem's structure, one can
reduce the amount of computation for some classes of these problems from
exponential to polynomial run time. The computation order (or sequence) limits
the amount of parallelism in the problem. However, for large inputs (on the
order of thousands to billions of elements), exponential-time algorithms are not
computationally practical. Polynomial-time algorithms leverage problem structure
to restrict the computation sequence and avoid exponential computation.
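For the edit-distance example, imposing a fixed fill order on a table of subproblems yields a polynomial-time sketch: each of the (m+1)(n+1) subproblems is solved exactly once, for O(mn) work (function and variable names here are illustrative):

```python
def edit_distance(a: str, b: str) -> int:
    """Bottom-up DP: d[i][j] is the edit distance between a[:i] and b[:j]."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i            # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j            # insert all of b[:j]
    # Fixed computation order: each cell reads only already-filled cells.
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # match or substitute
    return d[m][n]
```

The imposed order trades the freedom to explore alternatives in any sequence for a guarantee that no subproblem is ever recomputed.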
There are two ways to compute the globally optimal solution: top-down and
bottom-up. The top-down approach starts from the top-level problem and
recursively divides it into subproblems until it reaches the smallest
subproblems, which it can solve trivially. Each higher-level problem obtains
optimal solutions from its subproblems in order to produce its own optimal
solution. In contrast, the bottom-up approach has no recursive dividing phase;
it simply starts from the smallest subproblems and propagates their results up
to the higher-level problems. The top-down approach should use memoization to
avoid redundant computation.
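A minimal sketch of the top-down variant with memoization, again using the edit-distance example (names are illustrative):

```python
from functools import lru_cache

def edit_distance_memo(a: str, b: str) -> int:
    """Top-down DP: recurse from the full problem, caching each
    (i, j) subproblem so it is solved at most once."""
    @lru_cache(maxsize=None)  # memoization: the cache is the DP table
    def solve(i: int, j: int) -> int:
        if i == len(a):
            return len(b) - j  # insert the rest of b
        if j == len(b):
            return len(a) - i  # delete the rest of a
        if a[i] == b[j]:
            return solve(i + 1, j + 1)
        return 1 + min(solve(i + 1, j),       # delete
                       solve(i, j + 1),       # insert
                       solve(i + 1, j + 1))   # substitute
    return solve(0, 0)
```

Without the cache this is the exponential recursion; with it, each subproblem is computed once, matching the bottom-up cost.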
The parallel opportunities in this pattern are similar to those in the
Divide-and-Conquer pattern, with the following three properties: 1) there are
natural initial division boundaries in the problem; 2) there are frequent,
well-defined reduction and synchronization points in the algorithm; and 3) the
number of fan-ins is strictly limited by the problem.
The two main differences compared to the Divide-and-Conquer pattern are: 1) the
presence of overlapping shared subproblems, and 2) the exponential size of the
overall problem, which prohibits starting with the problem as a whole and then
applying divide-and-conquer techniques. In this pattern, the starting point is
often a naturally defined set of subproblems, and computation is often limited
to a wavefront of subproblems.
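The wavefront can be made explicit in the edit-distance table by filling it along anti-diagonals: all cells with the same i + j depend only on earlier diagonals, so each diagonal is a natural unit of parallel work. A sketch (written sequentially for clarity; names are illustrative):

```python
def edit_distance_wavefront(a: str, b: str) -> int:
    """Fill the DP table one anti-diagonal (i + j = k) at a time.
    Cells on a single diagonal are mutually independent, so each
    diagonal could be computed in parallel."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for k in range(m + n + 1):                    # one wavefront per diagonal
        for i in range(max(0, k - n), min(m, k) + 1):
            j = k - i
            if i == 0:
                d[i][j] = j                       # base case: insertions
            elif j == 0:
                d[i][j] = i                       # base case: deletions
            else:
                cost = 0 if a[i - 1] == b[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,
                              d[i][j - 1] + 1,
                              d[i - 1][j - 1] + cost)
    return d[m][n]
```

The inner loop over one diagonal is the bounded parallel region; the loop over k supplies the frequent synchronization points noted above.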
Fall '12, KarlLieberherr