13 Column generation methods

We return to the question of solving large-scale integer programs and their linear programming relaxations. We have already studied a technique for solving large-scale (but still compactly represented) linear programs by constraint generation. In constraint generation methods, we ignore many of the explicitly given constraints, solve the remaining LP, and then check whether the resulting optimal solution satisfies all of the constraints that were ignored; if so, the solution is optimal for the entire LP (by the relaxation principle); if not, then some constraint is violated by the current optimum, and we add it to the LP and re-solve. Now we will focus on an approach dual to constraint generation, known as column generation.

We will consider the cutting stock problem and the "pattern" formulation that was introduced in recitation section earlier. This problem concerns the efficient production of a specified demand for given widths of paper rolls (called finals), so as to minimize the number of original longer rolls (called raws) used. The input consists of a number of raws of fixed width (for example, 100 inches), and there is a demand for a given number of finals of each width: for example, 97 rolls of width 45 inches, 610 rolls of width 36 inches, 395 rolls of width 31 inches, and 211 rolls of width 14 inches. The goal is to cut all of these finals from as few raws as possible. (This example is from the textbook by Chvátal on linear programming.) If one thinks about cutting one raw, then we specify the number of finals of width 45, the number of width 36, the number of width 31, and the number of width 14. If those four numbers are given by the vector (f1, f2, f3, f4), then for this pattern to be feasible, given that the raw is of width 100, we must have

    45 f1 + 36 f2 + 31 f3 + 14 f4 <= 100.
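To make the feasibility condition concrete, here is a minimal sketch in Python; the constant names and the function are my own illustration, not from the notes:

```python
RAW_WIDTH = 100            # width of each raw, in inches
WIDTHS = [45, 36, 31, 14]  # final widths from the example

def is_feasible(pattern):
    """A pattern (f1, f2, f3, f4) fits on one raw iff the total cut width
    45*f1 + 36*f2 + 31*f3 + 14*f4 does not exceed the raw width."""
    return sum(f * w for f, w in zip(pattern, WIDTHS)) <= RAW_WIDTH

print(is_feasible((1, 1, 0, 1)))  # 45 + 36 + 14 = 95 <= 100, so True
print(is_feasible((0, 0, 0, 8)))  # 8 * 14 = 112 > 100, so False
```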
So, for example, possible vectors include (2, 0, 0, 0), (0, 2, 0, 0), (0, 0, 3, 0), (0, 0, 0, 7), (1, 1, 0, 1), (0, 2, 0, 2), and so on. In total, there are exactly 37 ways to specify a non-trivial pattern for cutting a raw. Some might seem not so useful: at first, it might appear wasteful to use (0, 0, 0, 5), for example, since one could instead cut 7 finals of width 14 from a raw, but this lesser pattern might still be used to complete a packing nonetheless (if, say, only 5 finals of width 14 were required).

We can build a table of these patterns as a matrix A = (a_ij), where (the transpose of) each feasible vector is a column of A: if (f1, f2, f3, f4) is the j-th of those 37 patterns, then a_1j = f1, a_2j = f2, a_3j = f3, and a_4j = f4. That is, a_ij is the number of finals of the i-th width produced by using the j-th pattern to cut a raw. Let N be the number of distinct patterns (here N = 37), and let b_i denote the demand for finals of the i-th width, w_i. We can now construct an integer programming formulation of our optimization model. Let x_j be a decision variable that specifies the number of raws to be cut according to the j-th pattern. Then, using as few raws as possible while meeting the demand for each width gives the formulation

    minimize    x_1 + x_2 + ... + x_N
    subject to  a_i1 x_1 + a_i2 x_2 + ... + a_iN x_N >= b_i,  for i = 1, ..., 4,
                x_j >= 0 and integer,  for j = 1, ..., N.
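As a quick sanity check on these definitions, the sketch below enumerates the feasible patterns (confirming N = 37), builds A column by column, and constructs one feasible cutting plan with a simple first-fit heuristic. The heuristic and all names are my own illustration, not the column generation method these notes develop:

```python
from itertools import product

RAW_WIDTH = 100
WIDTHS = [45, 36, 31, 14]       # final widths w_i, sorted decreasing
DEMANDS = [97, 610, 395, 211]   # demands b_i from the example

# Enumerate all non-trivial feasible patterns: integers f_i >= 0 with
# 45 f1 + 36 f2 + 31 f3 + 14 f4 <= 100, not all zero.
bounds = [RAW_WIDTH // w for w in WIDTHS]   # max copies of each width alone
patterns = [
    p for p in product(*(range(b + 1) for b in bounds))
    if 0 < sum(f * w for f, w in zip(p, WIDTHS)) <= RAW_WIDTH
]
N = len(patterns)               # should be 37

# Build A = (a_ij): the j-th pattern is the j-th column of A.
A = [[patterns[j][i] for j in range(N)] for i in range(len(WIDTHS))]

# First-fit decreasing heuristic: fill one raw at a time with the widest
# finals still demanded; this yields a feasible, not necessarily optimal, plan.
remaining = dict(zip(WIDTHS, DEMANDS))
raws_used = 0
while any(remaining[w] > 0 for w in WIDTHS):
    raws_used += 1
    space = RAW_WIDTH
    for w in WIDTHS:
        while remaining[w] > 0 and w <= space:
            remaining[w] -= 1
            space -= w

# Crude lower bound: total demanded width / raw width, rounded up.
material_bound = -(-sum(w * b for w, b in zip(WIDTHS, DEMANDS)) // RAW_WIDTH)
print(N, raws_used, material_bound)
```

Any feasible plan here must use at least the material bound of 416 raws; the optimal value of the LP relaxation of the formulation above is a bound at least this strong, and computing it without writing down all N columns is exactly where column generation comes in.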