UMass Lowell Computer Science 91.503 Analysis of Algorithms
Prof. Karen Daniels
Spring 2011

Design Patterns for Optimization Problems: Dynamic Programming
Topics: Matrix Parenthesization, Longest Common Subsequence, Activity Selection

Algorithmic Paradigm Context
All three paradigms view a problem as a collection of subproblems:

Divide & Conquer: independent subproblems; the number of subproblems depends on partitioning factors; the characteristic running time typically involves a log function of n; solve the subproblems, then combine.

Dynamic Programming: overlapping subproblems with "recursive" nature; the number of distinct subproblems is typically small (polynomial), though each may be reached many times; running time depends on the number and difficulty of subproblems; solve subproblem(s), then make a choice.

Greedy Algorithm: typically a sequential dependence among subproblems; preprocessing is typically a sort; running time is often dominated by the n log n sort; make a choice, then solve subproblem(s).

Dynamic programming and greedy algorithms are primarily for optimization problems (find an optimal solution). Both rely on optimal substructure: an optimal solution to the problem contains within it optimal solutions to subproblems. A greedy algorithm additionally needs the greedy choice property: a locally optimal choice produces a globally optimal solution. A heuristic version is useful for bounding the optimal value.

Dynamic Programming Approach to Optimization Problems
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in bottom-up fashion.
4. Construct an optimal solution from computed information.
source: 91.503 textbook (Cormen et al.)

Dynamic Programming: Matrix Parenthesization

Example: Matrix Parenthesization
Definitions
Given a "chain" of n matrices ⟨A1, A2, ..., An⟩: compute the product A1 A2 ... An efficiently. Minimize the "cost" = the number of scalar multiplications. Multiplication order matters!
Step 1: Characterizing an Optimal Solution Observation:
Any parenthesization of Ai Ai+1 ... Aj must split the product between Ak and Ak+1 for some k (i <= k < j).

THM (Optimal Matrix Parenthesization):
If an optimal parenthesization of Ai Ai+1 ... Aj splits between Ak and Ak+1, then the parenthesization of the prefix Ai Ai+1 ... Ak must be an optimal parenthesization.
Why? If there existed a less costly way to parenthesize the prefix, then substituting that parenthesization would yield a less costly way to parenthesize Ai Ai+1 ... Aj, contradicting the optimality of the original parenthesization. This is a common DP proof technique: a "cut-and-paste" proof by contradiction.
Step 2: A Recursive Solution

Recursive definition of the minimum parenthesization cost:
m[i,j] = 0                                                              if i = j
m[i,j] = min over i <= k < j of { m[i,k] + m[k+1,j] + p_{i-1} p_k p_j }   if i < j

where each matrix Ai has dimensions p_{i-1} x p_i.

How many distinct subproblems? One for each pair (i,j) with 1 <= i <= j <= n, i.e., Θ(n^2).
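This recurrence can be evaluated top-down directly; a minimal Python sketch, with memoization standing in for the m table (the function name `min_mults` is ours, not the textbook's):

```python
import math
from functools import lru_cache

def min_mults(p):
    """Evaluate m[i,j] top-down from the recurrence above.

    p is the dimension list: matrix A_i has dimensions p[i-1] x p[i],
    so a chain of n matrices is described by n + 1 numbers.
    """
    n = len(p) - 1  # number of matrices in the chain

    @lru_cache(maxsize=None)
    def m(i, j):
        if i == j:          # a single matrix needs no multiplications
            return 0
        # try every split point k, as in the recurrence
        return min(m(i, k) + m(k + 1, j) + p[i - 1] * p[k] * p[j]
                   for k in range(i, j))

    return m(1, n)
```

For two matrices the cost is forced: `min_mults([10, 20, 30])` is 10*20*30 = 6000.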
source: 91.503 textbook Cormen, et al. Example: Matrix Parenthesization Step 3: Computing Optimal Costs 2,500 2,625 2 625 1,000 1 000 0 s: value of k that achieves optimal cost in computing m[i, j] source: 91.503 textbook Cormen, et al. Example: Matrix Parenthesization
Step 4: Constructing an Optimal Solution

PRINT-OPTIMAL-PARENS(s, i, j)
1  if i == j
2      print "A"i
3  else print "("
4      PRINT-OPTIMAL-PARENS(s, i, s[i,j])
5      PRINT-OPTIMAL-PARENS(s, s[i,j] + 1, j)
6      print ")"

Memoization

Memoization provides dynamic-programming efficiency, but with a top-down strategy: use recursion, and fill in the table "on demand". (Compare RECURSIVE-MATRIX-CHAIN, which recurses without a table and takes exponential time.)

MEMOIZED-MATRIX-CHAIN(p)
1  n = p.length - 1
2  let m[1..n, 1..n] be a new table
3  for i = 1 to n
4      for j = i to n
5          m[i,j] = ∞
6  return LOOKUP-CHAIN(m, p, 1, n)

LOOKUP-CHAIN(m, p, i, j)
1  if m[i,j] < ∞
2      return m[i,j]
3  if i == j
4      m[i,j] = 0
5  else for k = i to j - 1
6      q = LOOKUP-CHAIN(m, p, i, k) + LOOKUP-CHAIN(m, p, k+1, j) + p_{i-1} p_k p_j
7      if q < m[i,j]
8          m[i,j] = q
9  return m[i,j]

Dynamic Programming: Longest Common Subsequence

Example: Longest Common Subsequence (LCS): Motivation

A strand of DNA is a string over the finite set {A, C, G, T}; each element of the set is a base: adenine, cytosine, guanine, or thymine. To compare DNA similarities, consider:
S1 = ACCGGTCGAGTGCGCGGAAGCCGGCCGAA
S2 = GTCGTTCGGAATGCCGTTGCTCTGTAAA
One measure of similarity: find the longest string S3 containing bases that also appear (not necessarily consecutively) in both S1 and S2:
S3 = GTCGTCGGAAGCCGGCCGAA
Example: LCS
Definitions

Sequence Z = ⟨z1, z2, ..., zk⟩ is a subsequence of X = ⟨x1, x2, ..., xm⟩ if there is a strictly increasing sequence ⟨i1, i2, ..., ik⟩ of indices of X such that x_{ij} = zj for j = 1, 2, ..., k.
Example: ⟨B, C, D, B⟩ is a subsequence of ⟨A, B, C, B, D, A, B⟩, with index sequence ⟨2, 3, 5, 7⟩.

Z is a common subsequence of X and Y if Z is a subsequence of both X and Y.
Example: ⟨B, C, A⟩ is a common subsequence of ⟨A, B, C, B, D, A, B⟩ and ⟨B, D, C, A, B, A⟩, but it is not a longest common subsequence. Longest? ⟨B, C, B, A⟩, of length 4.

Longest Common Subsequence Problem: Given 2 sequences X, Y, find a maximum-length common subsequence Z.
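The subsequence definition amounts to a single left-to-right scan; a small Python sketch (`is_subsequence` is our name):

```python
def is_subsequence(z, x):
    """Return True iff z is a subsequence of x, i.e. there is a
    strictly increasing index sequence into x that spells out z."""
    it = iter(x)
    # each `in` search consumes the iterator, so matches in x
    # are forced to occur at strictly increasing positions
    return all(symbol in it for symbol in z)
```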
Example: LCS
Step 1: Characterize an LCS

THM 15.1 (Optimal LCS Substructure): Given sequences X = ⟨x1, ..., xm⟩ and Y = ⟨y1, ..., yn⟩, for any LCS Z = ⟨z1, ..., zk⟩ of X and Y:
1. If xm = yn, then zk = xm = yn and Z_{k-1} is an LCS of X_{m-1} and Y_{n-1}.
2. If xm ≠ yn and zk ≠ xm, then Z is an LCS of X_{m-1} and Y.
3. If xm ≠ yn and zk ≠ yn, then Z is an LCS of X and Y_{n-1}.

PROOF: based on producing contradictions.
1 a) Suppose zk ≠ xm = yn. Appending xm = yn to Z yields a longer common subsequence, contradicting the longest nature of Z.
  b) To establish the longest nature of Z_{k-1}, suppose a common subsequence W of X_{m-1} and Y_{n-1} has length > k-1. Appending xm = yn to W yields a common subsequence of length > k, a contradiction.
2. A common subsequence W of X_{m-1} and Y of length > k would also be a common subsequence of X and Y, contradicting the longest nature of Z.
3. Similar to the proof of (2).
Example: LCS
Step 2: A Recursive Solution

Implications of Theorem 15.1: is xm = yn?
yes: find LCS(X_{m-1}, Y_{n-1}); then LCS1(X, Y) = LCS(X_{m-1}, Y_{n-1}) + xm
no:  find LCS(X_{m-1}, Y) and LCS(X, Y_{n-1}); then LCS2(X, Y) = max(LCS(X_{m-1}, Y), LCS(X, Y_{n-1}))
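The yes/no case split above can be evaluated top-down with memoization; a minimal Python sketch for the LCS length (`lcs_length` is our name, not the textbook's):

```python
from functools import lru_cache

def lcs_length(x, y):
    """Memoized recursion following the case split: if the last
    symbols match, extend an LCS of both prefixes; otherwise take
    the better of dropping the last symbol of x or of y."""
    @lru_cache(maxsize=None)
    def rec(i, j):                  # LCS length of x[:i] and y[:j]
        if i == 0 or j == 0:
            return 0
        if x[i - 1] == y[j - 1]:    # case 1: last symbols match
            return rec(i - 1, j - 1) + 1
        return max(rec(i - 1, j), rec(i, j - 1))  # cases 2 and 3
    return rec(len(x), len(y))
```

On the example above, lcs_length("ABCBDAB", "BDCABA") is 4, matching ⟨B, C, B, A⟩.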
Step 2: A Recursive Solution (continued)

Overlapping subproblem structure: LCS(X, Y) depends on LCS(X_{m-1}, Y) and LCS(X, Y_{n-1}), and both of those depend on the shared subproblem LCS(X_{m-1}, Y_{n-1}).

Θ(mn) distinct subproblems.

Recurrence for the length of an optimal solution:
c[i,j] = 0                            if i = 0 or j = 0
c[i,j] = c[i-1, j-1] + 1              if i, j > 0 and xi = yj
c[i,j] = max(c[i, j-1], c[i-1, j])    if i, j > 0 and xi ≠ yj

Conditions of the problem can exclude some subproblems!

Example: LCS
Step 3: Compute Length of an LCS
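Steps 3 and 4 can be sketched bottom-up in Python: fill the c table from the recurrence, then retrace the choices to recover an LCS (a sketch; the textbook's version records explicit b-table arrows instead of re-deriving the choices):

```python
def lcs(x, y):
    """Bottom-up fill of the c table, then walk back from c[m][n]
    to reconstruct one longest common subsequence."""
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # reconstruct by retracing which case produced each entry
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i -= 1
            j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```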
What is the asymptotic worst-case time complexity? Θ(mn): the table has (m+1)(n+1) entries and each is filled in constant time.

[Slide figure omitted: the filled-in c table, with arrows (the b table) recording which case produced each entry.]

Example: LCS
Step 4: Construct an LCS

[Slide figure omitted: following the b-table arrows back from c[m,n] recovers an LCS in O(m+n) time.]

Dynamic Programming ... leading to a Greedy Algorithm: Activity Selection

Activity Selection Optimization Problem
Problem Instance: Set S = {1, 2, ..., n} of n activities. Each activity i has a start time si and a finish time fi, with si ≤ fi. Activities i, j are compatible iff they are non-overlapping: the intervals [si, fi) and [sj, fj) do not intersect.
Objective: select a maximum-sized set of mutually compatible activities.

source: 91.404 textbook (Cormen et al.)
[Slide chart omitted: "Activity Time Duration" for activities 1 through 8 over times 1 through 16.]

Activity Selection
Define, for activities ai and aj:
Sij = { ak ∈ S : fi ≤ sk < fk ≤ sj }
(activities that start after ai finishes and finish before aj starts).

A solution to Sij that includes ak produces 2 subproblems:
1) Sik (start after ai finishes; finish before ak starts)
2) Skj (start after ak finishes; finish before aj starts)

Let c[i,j] = the size of a maximum-size subset of mutually compatible activities in Sij. Then:

c[i,j] = 0                                                            if Sij = ∅
c[i,j] = max over i < k < j with ak ∈ Sij of { c[i,k] + c[k,j] + 1 }    if Sij ≠ ∅

...
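This recurrence can be evaluated directly; a minimal Python sketch under stated assumptions (`max_compatible` is our name; the sentinel activities a0, finishing at time 0, and a_{n+1}, starting at infinity, are the standard device for bounding Sij, and the recurrence assumes activities indexed in order of finish time, so we sort first):

```python
import math
from functools import lru_cache

def max_compatible(activities):
    """Size of a maximum set of mutually compatible activities,
    computed from the c[i,j] recurrence over the sets S_ij.

    activities: list of (start, finish) pairs with start < finish.
    """
    # sentinels a_0 and a_{n+1}; real activities sorted by finish time
    acts = ([(-math.inf, 0)]
            + sorted(activities, key=lambda a: a[1])
            + [(math.inf, math.inf)])
    n = len(acts) - 2

    @lru_cache(maxsize=None)
    def c(i, j):
        best = 0                       # c[i,j] = 0 when S_ij is empty
        for k in range(i + 1, j):
            s_k, f_k = acts[k]
            # a_k is in S_ij: starts after a_i finishes, ends before a_j starts
            if acts[i][1] <= s_k and f_k <= acts[j][0]:
                best = max(best, c(i, k) + 1 + c(k, j))
        return best

    return c(0, n + 1)
```

As the heading "...leading to a Greedy Algorithm" suggests, this DP is overkill for the problem: repeatedly taking the compatible activity with the earliest finish time achieves the same optimum with far less work.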
This note was uploaded on 02/13/2012 for the course CS 91.503 taught by Professor Staff during the Spring '11 term at UMass Lowell.