What is Dynamic Programming

Like DaC, dynamic programming is another useful method for designing efficient algorithms.

Why the name? A quote from Richard Bellman, Eye of the Hurricane: An Autobiography:

"I spent the Fall quarter (of 1950) at RAND. My first task was to find a name for multistage decision processes. An interesting question is, where did the name, dynamic programming, come from? The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the word "research." ... I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. ... Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities."

© Xin He (University at Buffalo), CSE 431/531 Algorithm Analysis and Design

General Description of Dynamic Programming

Divide the problem into smaller subproblems (of the same type). Solve each subproblem. Combine the solutions of the subproblems into the solution of the original problem.

Looks familiar? It is identical to DaC! But the two differ substantially in the details.

For DaC: the subproblems are independent; they do not overlap. We solve the subproblems in a top-down fashion: solve the largest subproblem first, then the second largest, and so on, usually by recursive calls.

For dynamic programming: the subproblems are intermingled; they do overlap. We solve the subproblems in a bottom-up fashion: solve the smallest subproblem first, then the second smallest, and so on.

Example for DaC: MergeSort

To sort the array A[1..n], divide the task into two subproblems: sort A[1..n/2] and sort A[(n/2+1)..n]. The two subproblems do not overlap.
They are totally independent. We solve these two largest subproblems by recursive calls, then move on to the smaller subproblems: sort A[1..n/4], sort A[(n/4+1)..n/2], and so on.

Example for Dynamic Programming

Fib(n) = Fib(n-1) + Fib(n-2)

We divide Fib(n) into two subproblems, Fib(n-1) and Fib(n-2). They overlap; they are not totally independent: Fib(n-1) contains Fib(n-2) as a subproblem. If we solve the largest subproblems Fib(n-1) and Fib(n-2) first by recursive calls, as we did for DaC, we get an exponential-time algorithm.

Fib(n)
1: Fib[0] = 0; Fib[1] = 1
2: for i = 2 to n do
3:   Fib[i] = Fib[i-1] + Fib[i-2]
4: end for
5: output Fib[n]

This solves the smallest subproblem Fib[0] first, then Fib[1], and so on, bottom-up.
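As a concrete illustration of the top-down DaC pattern, here is a minimal MergeSort sketch in Python (the function name and list-based merge are illustrative choices, not part of the slides):

```python
def merge_sort(a):
    """Top-down DaC: the two halves are independent subproblems."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # sort A[1..n/2]
    right = merge_sort(a[mid:])   # sort A[(n/2+1)..n]
    # Combine step: merge the two independently sorted halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

Note that neither recursive call ever looks at the other half of the array, which is exactly the non-overlapping property the slides emphasize.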
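The bottom-up Fibonacci pseudocode above can be written directly in Python (a minimal sketch; the function name is an illustrative choice):

```python
def fib(n):
    """Bottom-up dynamic programming: smallest subproblems first."""
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    # Each entry depends only on the two entries already computed,
    # so one left-to-right pass suffices: O(n) time instead of
    # the exponential time of the naive recursion.
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

Each subproblem is solved exactly once and its answer stored in the table, which is what distinguishes this from the exponential top-down recursion.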