Illinois Institute of Technology
Department of Computer Science

CS 430 Introduction to Algorithms, Spring Semester, 2009
Lectures 10–11: February 25–March 2, 2009

1 Dynamic Programming

We now introduce a useful approach to algorithm design: dynamic programming. (The word "programming" here is used as in the phrase "television programming," that is, meaning to set a schedule, not in the sense of computer programming, which originally had that sense also.) Dynamic programming is typically used on optimization problems, particularly optimization problems that exhibit optimal substructure: an optimal solution is composed of optimal solutions to smaller subproblems. In dynamic programming, we store the solutions to these subproblems in a table so that we avoid computing them multiple times.

1.1 Matrix-chain multiplication

In the problem of matrix-chain multiplication we are given a sequence (or chain) of n matrices that we would like to multiply:

    A_1 A_2 A_3 ... A_n

CLRS describes an algorithm for multiplying two matrices, but in this case we are multiplying together many matrices, and have to do these multiplications two matrices at a time. Note that matrix multiplication is associative: that is, A_1 (A_2 A_3) = (A_1 A_2) A_3. Thus, if we wanted to compute A_1 A_2 A_3, we could either first multiply A_1 and A_2 and then multiply the result by A_3, or we could first multiply A_2 and A_3 and then multiply A_1 by the result. If the results are the same, why do we care? Recall that the efficiency of the CLRS matrix multiplication algorithm depends on the dimensions of the matrices. Specifically, multiplying an a × b matrix by a b × c matrix requires roughly abc computation steps. In our example, we will let A_1 be a p × p_1 matrix, A_2 be a p_1 × p_2 matrix, and A_3 be a p_2 × p_3 matrix. Let us first consider A_1 (A_2 A_3).
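As a concrete check of the abc step count, here is a small Python sketch of the standard triple-loop matrix multiply, instrumented to count scalar multiplications. (The function names matmul and matmul_cost are illustrative, not from the notes or from CLRS.)

```python
def matmul_cost(a, b, c):
    # Scalar multiplications performed by the standard algorithm
    # when multiplying an (a x b) matrix by a (b x c) matrix.
    return a * b * c

def matmul(A, B):
    """Standard triple-loop multiply; returns (product, multiplication count)."""
    a, b = len(A), len(A[0])
    assert len(B) == b, "inner dimensions must agree"
    c = len(B[0])
    count = 0
    C = [[0] * c for _ in range(a)]
    for i in range(a):
        for k in range(b):
            for j in range(c):
                C[i][j] += A[i][k] * B[k][j]
                count += 1
    return C, count
```

Running this on a 2 × 3 times 3 × 2 product performs exactly 2 · 3 · 2 = 12 scalar multiplications, matching matmul_cost(2, 3, 2).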
Multiplying A_2 and A_3 using our simple algorithm will take Θ(p_1 p_2 p_3) steps, and multiplying A_1 by A_2 A_3 will take Θ(p p_1 p_3) steps. (Recall that A_2 A_3 is a p_1 × p_3 matrix.) Thus under this parenthesization the total multiplication will take roughly p p_1 p_3 + p_1 p_2 p_3 steps. Now consider (A_1 A_2) A_3. Multiplying A_1 and A_2 will take Θ(p p_1 p_2) steps, and multiplying A_1 A_2 by A_3 will take Θ(p p_2 p_3) steps. Thus under this parenthesization the total multiplication will take roughly p p_1 p_2 + p p_2 p_3 steps. For any given values of p, p_1, p_2, and p_3, one of the parenthesizations will therefore likely yield a more efficient multiplication procedure. For example, if p = 1, p_1 = 2, p_2 = 3, and p_3 = 4, then the first parenthesization will require roughly p p_1 p_3 + p_1 p_2 p_3 = 8 + 24 = 32 steps, while the second parenthesization will require only roughly p p_1 p_2 + p p_2 p_3 = 6 + 12 = 18 steps....
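The comparison above generalizes to chains of any length: the bottom-up dynamic program from CLRS (MATRIX-CHAIN-ORDER) fills a table m[i][j] with the cheapest cost of computing A_i ... A_j, trying every split point k. A Python sketch of that recurrence follows (the function name and variable names are my own; the recurrence itself is the CLRS one).

```python
def matrix_chain_order(p):
    """Minimum scalar multiplications to compute A_1 A_2 ... A_n,
    where A_i is a p[i-1] x p[i] matrix (so len(p) == n + 1).

    Bottom-up DP: m[i][j] holds the cheapest cost of the subchain
    A_i ... A_j, built from shorter subchains already in the table."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # subchain length
        for i in range(1, n - length + 2):  # subchain start
            j = i + length - 1              # subchain end
            m[i][j] = float("inf")
            for k in range(i, j):           # split: (A_i..A_k)(A_{k+1}..A_j)
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
    return m[1][n]
```

On the example from the notes, matrix_chain_order([1, 2, 3, 4]) returns 18, the cost of the better parenthesization (A_1 A_2) A_3 computed above.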
This note was uploaded on 04/07/2009 for the course CS 430 taught by Professor Kapoor during the Spring '08 term at Illinois Tech.