Algorithms Lecture 3: Dynamic Programming

    Those who cannot remember the past are doomed to repeat it.
    — George Santayana, The Life of Reason, Book I: Introduction and Reason in Common Sense (1905)

    The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the word 'research'. I'm not using the term lightly; I'm using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term 'research' in his presence. You can imagine how he felt, then, about the term 'mathematical'. The RAND Corporation was employed by the Air Force, and the Air Force had Wilson as its boss, essentially. Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What title, what name, could I choose?
    — Richard Bellman, on the origin of his term 'dynamic programming' (1984)

    If we all listened to the professor, we may be all looking for professor jobs.
    — Pittsburgh Steelers' head coach Bill Cowher, responding to David Romer's dynamic-programming analysis of football strategy (2003)

3 Dynamic Programming

3.1 Fibonacci Numbers

The Fibonacci numbers F_n, named after Leonardo Fibonacci Pisano, the mathematician who popularized 'algorism' in Europe in the 13th century, are defined as follows: F_0 = 0, F_1 = 1, and F_n = F_{n-1} + F_{n-2} for all n ≥ 2. The recursive definition of Fibonacci numbers immediately gives us a recursive algorithm for computing them:

    RecFibo(n):
        if n < 2
            return n
        else
            return RecFibo(n - 1) + RecFibo(n - 2)

How long does this algorithm take? Except for the recursive calls, the entire algorithm requires only a constant number of steps: one comparison and possibly one addition.
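As a sketch, the RecFibo pseudocode above translates directly into Python (the function name is my own choice, not from the text):

```python
def rec_fibo(n):
    """Naive recursive Fibonacci, mirroring the RecFibo pseudocode."""
    if n < 2:
        # Base cases: F_0 = 0 and F_1 = 1, so we can return n itself.
        return n
    # Recursive case: F_n = F_{n-1} + F_{n-2}.
    return rec_fibo(n - 1) + rec_fibo(n - 2)
```

For example, rec_fibo(10) returns 55. As the analysis below shows, this direct translation is dramatically slow for large n, which is exactly the point the lecture is building toward.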
If T(n) represents the number of recursive calls to RecFibo, we have the recurrence

    T(0) = 1,  T(1) = 1,  T(n) = T(n-1) + T(n-2) + 1.

This looks an awful lot like the recurrence for Fibonacci numbers! The annihilator method gives us an asymptotic bound of Θ(φ^n), where φ = (√5 + 1)/2 ≈ 1.61803398875, the so-called golden ratio, is the largest root of the polynomial r^2 - r - 1. But it's fairly easy to prove (hint, hint) the exact solution T(n) = 2 F_{n+1} - 1. In other words, computing F_n using this algorithm takes more than twice as many steps as just counting to F_n!

Another way to see this is that RecFibo is building a big binary tree of additions, with nothing but zeros and ones at the leaves. Since the eventual output is F_n, our algorithm must call RecFibo(1) (which returns 1) exactly F_n times. A quick inductive argument implies that RecFibo(0) is called exactly F_{n-1} times. Thus, the recursion tree has F_n + F_{n-1} = F_{n+1} leaves, and therefore, because it's a full binary tree, it must have 2...