Section Notes 10
CS51 Spring 2009
Week of April 19, 2009

1 Outline

1. Dynamic Programming
2. Segmented Least Squares
3. Tail Recursion

At the end of this section, you should know how Dynamic Programming can be used to reduce the complexity of problems like Segmented Least Squares, and you should be able to eliminate unnecessary stack usage in some recursive functions.

2 Dynamic Programming

Despite the fact that you are in a Computer Science course, the concept of Dynamic Programming is not related, at a fundamental level, to computer programming or programming languages. Rather, Dynamic Programming is a form of optimization that happens to be well suited to implementation on a computer. Specifically, it provides an efficient method for solving a certain class of problems: those that exhibit optimal substructure and overlapping subproblems. In this section we will discuss the theory behind Dynamic Programming and review an example of one application.

2.1 What problems can it solve?

Dynamic Programming is used to solve problems that can be expressed in terms of smaller subproblems. These problems share two properties:

1. Optimal Substructure - optimal solutions to subproblems can be combined into an optimal solution to the whole problem. Viewed from the other direction, the optimal solution to the whole problem contains optimal solutions to its subproblems. For example, suppose we use DP to find routes between cities represented as nodes in a graph. If we know that the shortest path from Boston to Miami goes through New York, then the New-York-to-Miami portion of that path, generated as part of the larger solution, is also the shortest path between those two nodes.

2. Overlapping Subproblems - as in divide and conquer, the same smaller problem is solved for different parameters.
However, unlike with basic recursion, these smaller problems overlap. For example, Fib(n) is defined in terms of Fib(n-1) and Fib(n-2). The definition of Fib(n) overlaps with the definition of Fib(n-1), which is itself defined in terms of Fib(n-2) and Fib(n-3): both depend on the subproblem Fib(n-2).

2.2 Memoization

One key idea in Dynamic Programming is that problems consisting of overlapping subproblems can often be solved efficiently by memoizing the answers to those subproblems. As we discussed last week in section, memoization is a technique for reducing the running time of a program: it saves the result of each function call in a table, so that the result need not be recomputed when the same function is called again with the same arguments. A classic example of a function that could benefit from memoization is fib, the function that computes the nth Fibonacci number. To see this in action, let's consider the simple, unmemoized fib function:

    (define (fib n)
      (cond [(= n 0) 1]
            [(= n 1) 1]
            [else (+ (fib (- n 1)) (fib (- n 2)))]))

The problem with this implementation can be demonstrated by analyzing what function calls will be...
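The preview is cut off here, but the overlapping calls described above suggest the fix: cache each result the first time it is computed. Below is a minimal sketch of a memoized fib using a mutable hash table (make-hash, hash-ref, and hash-set! as in PLT Scheme); it is an illustration of the memoization idea, not necessarily the implementation these notes go on to present:

```scheme
;; Table mapping n -> Fib(n), shared across all calls.
(define fib-table (make-hash))

(define (fib-memo n)
  (cond
    ;; Base cases, matching the unmemoized version above.
    [(= n 0) 1]
    [(= n 1) 1]
    ;; If Fib(n) was already computed, this clause returns
    ;; the cached (truthy) value directly.
    [(hash-ref fib-table n #f)]
    [else
     ;; Otherwise compute it once, cache it, and return it.
     (let ([result (+ (fib-memo (- n 1)) (fib-memo (- n 2)))])
       (hash-set! fib-table n result)
       result)]))
```

With memoization, each Fib(k) for k <= n is computed only once, so (fib-memo n) performs a number of additions linear in n, instead of the exponentially many calls made by the unmemoized version.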
This note was uploaded on 07/26/2009 for the course CS51, taught by Professor Greg Morrisett during the Spring '09 term at Harvard.