Mathematical Programming 25 (1983) 228–239
North-Holland Publishing Company

THE MINIMUM COST FLOW PROBLEM: A UNIFYING APPROACH TO DUAL ALGORITHMS AND A NEW TREE-SEARCH ALGORITHM*

Refael HASSIN
Statistics Department, Tel Aviv University, Tel Aviv, Israel

Received 20 January 1981
Revised manuscript received 27 November 1981

This paper is concerned with the minimum cost flow problem. It is shown that the class of dual algorithms which solve this problem consists of different variants of a common general algorithm. We develop a new variant which is, in fact, a new form of the 'primal-dual algorithm' and which has several interesting properties. It uses, explicitly, only dual variables. The slope of the change in the (dual) objective is monotone. The bound on the maximum number of iterations to solve a problem with integral bounds on the flow is better than bounds for other algorithms.

Key words: Minimum Cost Network Flow, Tree-Search Algorithm, Primal-Dual Algorithm.

Let $(N, A)$ be a directed network, where $N$ is a finite set and where $A \subseteq N \times N$. We investigate the familiar minimal cost network flow problem:

(P) Minimize $\sum_{(i,j) \in A} c_{ij} x_{ij}$

subject to
$$\sum_{(i,j) \in A} x_{ij} - \sum_{(j,i) \in A} x_{ji} = 0, \quad i \in N, \qquad (1)$$
$$d_{ij} \le x_{ij} \le k_{ij}, \quad (i,j) \in A. \qquad (2)$$

In problem (P), $x_{ij}$ is the flow along arc $(i, j)$, $c_{ij}$ is the unit cost of this flow, and $d_{ij}$ and $k_{ij}$ are lower and upper bounds on this flow, possibly with $k_{ij} = \infty$. The flow into each node equals the flow out of it, and therefore (P) is a circulation problem. The dual to (P) can be written as follows:

(D) Maximize $\sum_{(i,j) \in A} \{ d_{ij}(V_{ij})^+ + k_{ij}(V_{ij})^- \}$

subject to
$$U_i - U_j + V_{ij} = c_{ij}, \quad (i, j) \in A, \qquad (3)$$
$$U_i, V_{ij} \text{ unrestricted}, \quad i \in N, \ (i, j) \in A. \qquad (4)$$

In (D) and throughout the paper $(x)^+ = \max\{0, x\}$ and $(x)^- = \min\{0, x\}$.

* This paper is part of the author's doctoral dissertation submitted at Yale University.

Numerous computational and theoretical works have developed algorithms for
solving the minimal cost network flow problem (cf. [1–6, 8–13]). Recently, Jensen and Barnes [9] classified minimum cost network algorithms. They define four classes: primal, dual node-infeasible, dual arc-infeasible, and primal-dual. In this paper we investigate the essential features of these algorithms in order to find the relations between them, and to evaluate the advantages each one has over the others. We divide the most common algorithms into two classes, the primal and the dual algorithms, each class consisting of different variants of a common general algorithm. Further, we show that the so-called primal-dual algorithm can be carried out in the same manner as a pure dual algorithm by using only dual variables and a slight modification of the well-known version of this algorithm. Finally, we show that this modification presents two valuable properties: the slope of the change in the (dual) objective is monotone, and the bound on the maximum number of iterations is better (at least in some special cases) than the bounds for other algorithms.

We note that in [5] and [6] the superiority of the primal simplex procedure for the minimum cost problem was attested. However, it is important to have improved dual algorithms, as these methods are useful in some situations, such as
sensitivity analysis.

1. Notation and terminology

A path in $(N, A)$ is a sequence $(a_1, \ldots, a_n)$ of $n$ ($n \ge 1$) arcs having, for $m = 1, \ldots, n$, arc $a_m \in A$ and either $a_m = (i_m, i_{m+1})$ or $a_m = (i_{m+1}, i_m)$. This path is a cycle if $i_1 = i_{n+1}$. Arc $a_m$ in this path has positive orientation if $a_m = (i_m, i_{m+1})$ and negative orientation if $a_m = (i_{m+1}, i_m)$. A cycle is a directed cycle if all its arcs have positive orientation. If arc $(i, j)$ has cost $c_{ij}$, then the cost of this path is the sum of the costs of its positively oriented arcs less the sum of the costs of its negatively oriented arcs.

A subgraph $(M, B)$ of $(N, A)$ has $\emptyset \ne M \subseteq N$, $B \subseteq M \times M$ and $B \subseteq A$. (We allow a subgraph to have no arcs.) A subgraph $(M, B)$ of $(N, A)$ is connected if $(M, B)$ contains a path from each node $i \in M$ to each node $j \in M$ having $j \ne i$. A subgraph is called a tree if it is connected and if it has no cycles. A set of node-disjoint trees is called a forest.

For sets $S \subseteq N$, $T \subseteq N$, $B \subseteq A$ and a function $f$, we have the following definitions:
$$S' = \{i \in N:\ i \notin S\},$$
$$(S, T) = \{(i, j) \in A:\ i \in S,\ j \in T\},$$
$$f(B) = \sum_{(i,j) \in B} f_{ij}.$$

2. Classification of algorithms

The most common algorithms for solving network flow problems can be
classified into 'primal' and 'dual' algorithms, according to the method by which they improve solutions and the optimality criterion they use. We note that some algorithms which use both primal and dual variables may belong to both classes.

Primal Algorithms. For a given feasible circulation $x_{ij}$, define the modified network with the set of arcs $A^m$ and the costs $c^m$ as follows:

$(i, j) \in A^m$ and $c^m_{ij} = c_{ij}$ if $(i, j) \in A$ and $x_{ij} < k_{ij}$;
$(i, j) \in A^m$ and $c^m_{ij} = -c_{ji}$ if $(j, i) \in A$ and $x_{ji} > d_{ji}$.

To improve the solution, find in the modified network a directed cycle with negative cost (i.e., a negative cycle) and increase all the flow values of its arcs by the same amount until some $x_{ij}$ not previously equal to one of its bounds becomes equal to it (and a modified cost changes). The following theorem gives a necessary and sufficient condition for the termination of the algorithm.

Theorem 1 (Busacker and Saaty [2]). A feasible solution to (P) is optimal if and only if the modified network $(N, A^m)$ has no negative cycles.

Dual Algorithms. In our study of (D), we call $U_i$ the potential of node $i$, and $V_{ij}$
the reduced cost of arc $(i, j)$. Each reduced cost $V_{ij}$ appears in exactly one dual constraint. The potentials $U_i$ and the reduced costs $V_{ij}$ are not restricted in sign. Consequently, every set $\{U_i\}$ of potentials is dual feasible, since (3) is satisfied by taking $V_{ij} = c_{ij} - U_i + U_j$.

For a given feasible set of reduced costs $V_{ij}$, define the modified network $(N, A)$ with upper ($b_{ij}$) and lower ($a_{ij}$) bounds as follows:
$$(a_{ij}, b_{ij}) = \begin{cases} (d_{ij}, d_{ij}) & \text{if } V_{ij} > 0,\\ (d_{ij}, k_{ij}) & \text{if } V_{ij} = 0,\\ (k_{ij}, k_{ij}) & \text{if } V_{ij} < 0. \end{cases}$$
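To make the case table concrete, here is a small Python sketch. It is our own illustration, not code from the paper, and all names in it are hypothetical; it builds the modified bounds from the reduced costs and also evaluates the quantity $I(M) = a(M', M) - b(M, M')$ used in (5) below.

```python
def modified_bounds(V, d, k):
    """Return {arc: (a, b)} following the case table for (a_ij, b_ij)."""
    bounds = {}
    for arc in V:
        if V[arc] > 0:            # V_ij > 0: flow pinned at the lower bound
            bounds[arc] = (d[arc], d[arc])
        elif V[arc] < 0:          # V_ij < 0: flow pinned at the upper bound
            bounds[arc] = (k[arc], k[arc])
        else:                     # V_ij = 0: the original bounds remain
            bounds[arc] = (d[arc], k[arc])
    return bounds

def improvement(M, V, d, k):
    """I(M) = a(M', M) - b(M, M') over the modified bounds (display (5))."""
    bounds = modified_bounds(V, d, k)
    a_in = sum(bounds[(i, j)][0] for (i, j) in bounds if i not in M and j in M)
    b_out = sum(bounds[(i, j)][1] for (i, j) in bounds if i in M and j not in M)
    return a_in - b_out
```

A set $M$ with `improvement(M, ...) > 0` is exactly an improving set in the sense defined below.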
Find a set $M \subset N$ with $I(M) > 0$, where
$$I(M) = a(M', M) - b(M, M'), \qquad (5)$$
and increase all potentials in the set $M$ by the same amount $\epsilon$, until some $V_{ij}$ previously not 0 becomes 0 (and a modified bound changes).

The effect of this step is to increase $V_{ij}$ by $\epsilon$ for $(i, j) \in (M', M)$ and to decrease $V_{ij}$ by $\epsilon$ for $(i, j) \in (M, M')$. It is an elementary matter to check that for $0 \le \epsilon < T(M)$, where
$$T(M) = \min\bigl(\{-V_{ij}:\ (i, j) \in (M', M),\ V_{ij} < 0\} \cup \{V_{ij}:\ (i, j) \in (M, M'),\ V_{ij} > 0\}\bigr), \qquad (6)$$
with the convention $\min \emptyset = \infty$, the change in the objective of (D) is $I(M) \cdot \epsilon$. Since $I(M) > 0$ we obtain a new feasible solution with a greater objective value. Therefore we call $M$ an improving set. If $T(M) = \infty$, then (D) is feasible and unbounded, which indicates that (P) is infeasible. The following theorem gives a necessary and sufficient condition for the termination of the algorithm.

Theorem 2. A feasible solution to (D) is optimal if and only if the modified
network has no improving sets.

Proof. The condition is trivially necessary. To prove sufficiency, we use 'Hoffman's Existence Theorem for Circulations' [7; 13, p. 268]: A feasible circulation exists in a network $(N, A)$ if and only if $k(M, M') \ge d(M', M)$ for every $M \subseteq N$.

Suppose no improving sets exist. By Hoffman's Theorem, there exists a feasible circulation with respect to the modified bounds. That is,
$$x_{ij} = d_{ij} \quad \text{for } V_{ij} > 0,$$
$$d_{ij} \le x_{ij} \le k_{ij} \quad \text{for } V_{ij} = 0,$$
$$x_{ij} = k_{ij} \quad \text{for } V_{ij} < 0.$$
By 'complementary slackness' this circulation is an optimal solution to (P) and the set of reduced costs constitutes an optimal solution to (D).

The most important part of the algorithm is the search for improving sets, and
it is here that the various algorithms differ.

3. Some existing dual algorithms

In this section we demonstrate how some of the most common existing algorithms fit into the class of dual algorithms described in Section 2.

3.1. The out-of-kilter algorithm

The deviation of an arc is defined as:
$$\begin{cases} d_{ij} - x_{ij} & \text{if } V_{ij} \ge 0,\ x_{ij} < d_{ij},\\ x_{ij} - d_{ij} & \text{if } V_{ij} > 0,\ x_{ij} > d_{ij},\\ x_{ij} - k_{ij} & \text{if } V_{ij} \le 0,\ x_{ij} > k_{ij},\\ k_{ij} - x_{ij} & \text{if } V_{ij} < 0,\ x_{ij} < k_{ij},\\ 0 & \text{otherwise}. \end{cases}$$
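The deviation table translates directly into code. The following Python function is our own illustrative sketch (not from the paper); it measures how far one arc's flow lies outside the interval prescribed by the sign of its reduced cost.

```python
def deviation(V, x, d, k):
    """Deviation of a single arc from its modified bounds (case table above)."""
    if V >= 0 and x < d:
        return d - x          # flow below the lower bound while V_ij >= 0
    if V > 0 and x > d:
        return x - d          # V_ij > 0 requires x_ij = d_ij
    if V <= 0 and x > k:
        return x - k          # flow above the upper bound while V_ij <= 0
    if V < 0 and x < k:
        return k - x          # V_ij < 0 requires x_ij = k_ij
    return 0                  # the arc is 'in kilter'
```

The out-of-kilter algorithm below repeatedly drives these per-arc deviations to zero.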
Improving sets are chosen as follows:

Step 1: Arbitrarily choose an arc with a positive deviation.

Step 2: By any flow algorithm (e.g. the labeling algorithm), try to find a cycle which includes this arc, such that, by increasing all flows in arcs oriented in one way, and decreasing flows in arcs of opposite orientation, no deviation is increased (and the deviation of the original arc is decreased).

Step 3: By repeating Step 2, attempt to decrease the deviation of the arc to zero. If the attempt succeeds, go to Step 1. Else, let $M$ be the set of labeled nodes; then $I(M) > 0$.

The last assertion requires proof. If $(i, j)$ is an arc for which the origin $i$ is labeled, and the extremity $j$ cannot be labeled, then either $V_{ij} > 0$ and $x_{ij} \ge d_{ij}$, or $V_{ij} \le 0$ and $x_{ij} \ge k_{ij}$. If $(i, j)$ is an arc for which the extremity $j$ is labeled, and the origin $i$ is not labeled, then either $V_{ij} \ge 0$ and $x_{ij} \le d_{ij}$, or $V_{ij} < 0$ and $x_{ij} \le k_{ij}$. Since $x$ is a circulation,
$$b(M, M') = d(\{(i, j) \in (M, M'):\ V_{ij} > 0\}) + k(\{(i, j) \in (M, M'):\ V_{ij} \le 0\})$$
$$\le x(M, M') = x(M', M)$$
$$\le d(\{(i, j) \in (M', M):\ V_{ij} \ge 0\}) + k(\{(i, j) \in (M', M):\ V_{ij} < 0\}) = a(M', M).$$
However, since at least one arc in $(M, M') \cup (M', M)$ has positive deviation (by construction), one of the inequalities is strict, and $b(M, M') < a(M', M)$, i.e., $I(M) > 0$.

3.2. The dual simplex algorithm

The dual simplex algorithm maintains a circulation and a spanning tree $T$
with $V = 0$, such that all arcs not in the tree satisfy complementary slackness: $x_{ij} = d_{ij}$ if $V_{ij} > 0$, $x_{ij} = k_{ij}$ if $V_{ij} < 0$. For simplicity we assume that the current dual solution is nondegenerate. In this case, all arcs not in $T$ have nonzero reduced costs. The algorithm chooses the arc $(m, n) \in A$ which has the maximum deviation from the feasible region ($d_{ij} - x_{ij}$ for $x_{ij} < d_{ij}$, $x_{ij} - k_{ij}$ for $x_{ij} > k_{ij}$). The tree is cut at this arc and its components are $M$, $M'$. One of them is an improving set. Its potentials are increased until a new tree is obtained. Then flow is sent through the unique path of $T$ connecting the end nodes of the arc which blocks the change of potentials, to create a new primal solution.

To see that either $I(M) > 0$ or $I(M') > 0$, suppose for example that $(m, n) \in (M, M')$. If $x_{mn} > k_{mn}$, then
$$I(M) = a(M', M) - b(M, M')$$
$$= \bigl[d(\{(i, j) \in (M', M):\ V_{ij} > 0\}) + k(\{(i, j) \in (M', M):\ V_{ij} < 0\})\bigr]$$
$$\quad - \bigl[d(\{(i, j) \in (M, M'):\ V_{ij} > 0\}) + k(\{(i, j) \in (M, M'):\ V_{ij} < 0\}) + k_{mn}\bigr]$$
$$> x(M', M) - x(M, M') = 0.$$
Similarly, if $x_{mn} < d_{mn}$, then $I(M') > 0$.

3.3. The primal-dual algorithm

For each set of reduced costs, a flow is constructed such that complementary
slackness holds, and the sum of 'node infeasibilities' is minimum. This requires solving the following 'restricted' primal problem:
$$\text{minimize } \sum_{i \in N} (Y_i^+ + Y_i^-)$$
subject to
$$x(i, N) - x(N, i) + Y_i^+ - Y_i^- = 0, \quad i \in N,$$
$$x_{ij} = k_{ij}, \quad V_{ij} < 0,$$
$$x_{ij} = d_{ij}, \quad V_{ij} > 0,$$
$$d_{ij} \le x_{ij} \le k_{ij}, \quad V_{ij} = 0,$$
$$Y_i^+, Y_i^- \ge 0.$$
If the solution equals zero, then the flow values $x$ constitute a feasible circulation and complementary slackness holds. Hence, this circulation is optimal. Otherwise, let $U$ be the corresponding optimal dual solution. Then the set of nodes $i$ for which $U_i > 0$ is an improving set. In fact, for every vertex of the dual polyhedron $U_i \in \{+1, -1\}$, and the set $\{i:\ U_i = +1\}$ is an improving set.

4. A tree-search algorithm

The utility of Theorem 2 is enhanced if one finds efficient ways to determine
improving sets. So far we described some common methods that perform the search. We describe below a more direct algorithm which is a modified version of the primal-dual algorithm. This algorithm solves the 'restricted' dual problem directly by using explicitly only dual variables.

Network $(N, A)$ is said to have independent costs if it contains no simple cycle whose cost is 0. It is always possible to perturb the $c_{ij}$'s so that a network has this property [3, p. 231]. To simplify the exposition, we assume throughout that network $(N, A)$ has independent costs. We note, however, that the algorithm described in this section can be executed also with costs which are not independent. In this case, when some reduced costs become simultaneously zero, all except for one are assumed to retain their sign. Consequently it may happen that $T(M) = 0$ in equation (6).

Consider any feasible solution to (D), and set $F = \{(i, j):\ V_{ij} = 0,\ d_{ij} < k_{ij}\}$, so that $F$ is the set of arcs whose reduced costs are (currently) 0. Suppose $F$ contained a simple cycle. Sum (3) to see that the cost of this cycle equals 0. But this contradicts our assumption of independent costs. Hence, $F$ has no cycle. Consequently, $(N, F)$ is a forest.

Call $M$ a good improving set if $I(M) > 0$ and if $I(M) > I(S)$ for every proper subset $S$ of $M$. Call $M$ the best improving set if $M$ is the unique good improving
set such that $I(M) \ge I(S)$ for every $S \subseteq N$.

The tree-search algorithm locates the best improving set:

Step 1: Set $f(i) = 0$ for each $i \in N$. Set $F = \{(i, j) \in A:\ V_{ij} = 0,\ d_{ij} < k_{ij}\}$.

Step 2: For each $(i, j) \in F$, set $f(i) \leftarrow [f(i) + 1]$ and $f(j) \leftarrow [f(j) + 1]$. (Step 2 initializes $f(i)$ to the number of arcs in $F$ which are incident with node $i$.)

Step 3: Set $S = N$, $M = \emptyset$ and $P(i) = \{i\}$, $I(i) = I(\{i\})$ for each $i \in N$. (This procedure is intended to terminate with $S = \emptyset$ and $M =$ best improving set. At each stage $I(i)$ is the 'effective' improvement for node $i$, and $P(i)$ is a set of nodes which belong to the best improving set if and only if node $i$ belongs to the set.)

Step 4: Stop if $S = \emptyset$. Else find $i \in S$ such that $f(i) \le 1$. Set $S \leftarrow [S - \{i\}]$.

Step 5: If $I(i) \le 0$, go to Step 8. Else set $M \leftarrow [M \cup P(i)]$.

Step 6: If $f(i) = 0$, go to Step 4. Else find the unique $j$ such that either $(i, j) \in F$ or $(j, i) \in F$.

Step 7: Set $f(j) \leftarrow [f(j) - 1]$. If $(i, j) \in F$, set $I(j) \leftarrow [I(j) + b_{ij} - a_{ij}]$ and $F \leftarrow [F - \{(i, j)\}]$. Else set $I(j) \leftarrow [I(j) + b_{ji} - a_{ji}]$ and $F \leftarrow [F - \{(j, i)\}]$. Go to Step 4. (The new value of $I(j)$ represents the change in (5) caused by joining $j$ to $M$, given $i \in M$.)

Step 8: If $f(i) = 0$, go to Step 4. Else find the unique $j$ such that either $(i, j) \in F$ or $(j, i) \in F$. Set $f(j) \leftarrow [f(j) - 1]$. If $(i, j) \in F$ set $\alpha \leftarrow (i, j)$, and if $(j, i) \in F$ set $\alpha \leftarrow (j, i)$. Set $F \leftarrow F - \alpha$. If $I(i) + b_{ij} - a_{ij} \le 0$ and $\alpha = (i, j)$, or if $I(i) + b_{ji} - a_{ji} \le 0$ and $\alpha = (j, i)$, go to Step 4.

Step 9: Set $P(j) \leftarrow [P(j) \cup P(i)]$. If $\alpha = (i, j)$ set $I(j) \leftarrow [I(j) + I(i) + b_{ij} - a_{ij}]$. Else set $I(j) \leftarrow [I(j) + I(i) + b_{ji} - a_{ji}]$. Go to Step 4.

Theorem 3. The tree-search algorithm finds the best improving set.

Proof. Denote by $T$ the best improving set. Suppose that at a certain stage of the algorithm
(i) $P(i) \subseteq T$ or $P(i) \subseteq T'$, for every $i \in N$;
(ii) $M \subseteq T$, $N - S - M \subseteq T'$.

Note that since $i \in P(i)$, assumption (i) implies that for every $i \in N$:
(iii) $P(i) \subseteq T$ if, and only if, $i \in T$.

These assumptions clearly hold when Step 3 is executed, and we must show that they still hold after each execution of Steps 5 and 9.

In Step 5, if $I(i) > 0$, then joining $i$ to any set which contains $M$ increases the value of (5) of this set. Since by (ii) $M \subseteq T$, also $i \in T$, and by (iii) $P(i) \subseteq T$. If $I(i) \le 0$ the value of (5) will not increase and $P(i) \subseteq T'$. We conclude that (ii) is preserved in Step 5.

In Step 9, if either $I(i) + b_{ij} - a_{ij} > 0$ and $(i, j) \in F$ or $I(i) + b_{ji} - a_{ji} > 0$ and $(j, i) \in F$, then joining $i$ to any set containing $j$ increases the value of (5) for this set. In any other case the value of (5) decreases. Together with (iii) this implies that (i) is preserved by this step.

Since (ii) holds until the algorithm terminates with $S = \emptyset$, the final set $M$ satisfies $M \subseteq T$ and $M' \subseteq T'$, so that $M = T$.

An alternative search policy is to apply the tree-search algorithm until any
(good but not necessarily the best) improving set is found, and then to change its potentials. This policy requires less computation in each iteration. However, when best improving sets are found, some theorems, including a bound on the number of iterations, can be proved. We now state and prove these theorems.

Theorem 4. Let $M_r$ be the best improving set at iteration $r$. Let $I(M, r)$ be the value of $I(M)$ at iteration $r$. Then $I(M_r, r)$ is nonincreasing in $r$.

Proof. Suppose, for example, that only arc $(i, j) \in (M_r, M'_r)$ blocks the change of potentials at iteration $r$. (The same proof holds when more than one arc blocks the change.) The only decrease in $I(M_r)$ is caused by $b_{ij}$, which was changed from $d_{ij}$ to $k_{ij}$. (If $(j, i) \in (M'_r, M_r)$, then the decrease is caused only by $a_{ji}$, which was changed from $k_{ji}$ to $d_{ji}$.)

(a) Suppose $i \in M_{r+1}$; then $M_{r+1} \supseteq M_r$. If $M_{r+1} = M_r$, then $I(M_{r+1}, r+1) = I(M_r, r) + a_{ij} - b_{ij} < I(M_r, r)$. If $M_{r+1} \ne M_r$, then $M_{r+1} \supset M_r$, $j \in M_{r+1}$ and $I(M_{r+1}, r+1) = I(M_{r+1}, r) \le I(M_r, r)$.

(b) Suppose $i \notin M_{r+1}$; then $M_{r+1} \subset M_r$, and since $M_r$ is a good improving set, $I(M_{r+1}, r+1) = I(M_{r+1}, r) < I(M_r, r)$.

Note that by perturbation it is possible to ensure that only one arc blocks the
change of potentials. In this case either $M_{r+1} \subseteq M_r$ or $M_{r+1} \supseteq M_r$ in each iteration.

Corollary 1. If $I(M_r, r) = I(M_{r+1}, r+1)$, then $M_{r+1} \supset M_r$.
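Steps 1-9 of the tree-search algorithm admit a compact programmatic rendering. The following Python sketch is our own illustration, not the paper's code, and its names are hypothetical: it takes the modified bounds $(a_{ij}, b_{ij})$ and the zero-reduced-cost arc set $F$ as explicit inputs, and it collapses the two orientation cases of Steps 7 and 9 into one update, since both add the same quantity $b - a$ of the connecting arc.

```python
def best_improving_set(nodes, bounds, F):
    """Leaf-by-leaf scan of the forest (N, F) locating the best improving set.

    `bounds` maps each arc (i, j) to its modified bounds (a_ij, b_ij);
    `F` lists the arcs with zero reduced cost (the forest of Section 4).
    """
    # Step 3 initialization: I({i}) = a(N - i, i) - b(i, N - i).
    I = {i: sum(ab[0] for (u, v), ab in bounds.items() if v == i)
            - sum(ab[1] for (u, v), ab in bounds.items() if u == i)
         for i in nodes}
    P = {i: {i} for i in nodes}        # nodes tied to i's membership decision
    f = {i: 0 for i in nodes}          # Steps 1-2: degree of i in the forest F
    F = set(F)
    for (i, j) in F:
        f[i] += 1
        f[j] += 1
    S = set(nodes)
    M = set()
    while S:                           # Step 4: pick a leaf (or isolated node)
        i = next(u for u in S if f[u] <= 1)   # exists since F is a forest
        S.remove(i)
        arc = next(((u, v) for (u, v) in F if i in (u, v)), None)
        if I[i] > 0:                   # Step 5: i and all of P(i) join M
            M |= P[i]
            if arc is not None:        # Step 7: credit neighbor j with b - a
                (u, v) = arc
                j = v if u == i else u
                f[j] -= 1
                F.remove(arc)
                I[j] += bounds[arc][1] - bounds[arc][0]
        elif arc is not None:          # Steps 8-9: i joins only if j does
            (u, v) = arc
            j = v if u == i else u
            f[j] -= 1
            F.remove(arc)
            gain = I[i] + bounds[arc][1] - bounds[arc][0]
            if gain > 0:               # Step 9: tie i's fate to j's
                P[j] |= P[i]
                I[j] += gain
    return M
```

On a three-node example where neither endpoint of the single zero-cost arc improves alone but the pair does, the scan returns the pair, illustrating how Step 9 propagates conditional improvements up the forest.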
Corollary 2. The same value of $I(M_r, r)$ cannot recur more than $|N| - 1$ times.

Corollary 3. Suppose all bounds are integers; then no more than $|N| \cdot I(M_1, 1)$ iterations are needed to find an optimal solution.

Since the order of work needed in each iteration is $|N|^2$, the bound is of order $|N|^3 \cdot I(M_1, 1)$. This bound is better than the bound for the out-of-kilter method, which is $|N|^3$ times the sum of initial primal infeasibilities (cf. [12]).

Corollary 4. The direction of change in the reduced cost of any arc cannot be
opposite in two successive iterations.

Corollary 5. If the same arc blocks the change of potentials in iterations $m$ and $n$, $m < n$, then $m \le n - 3$.

Proof of Corollary 5. Suppose that $(i, j)$ blocks in iteration $m$. In iteration $m + 1$, $V_{ij}$ may become nonzero. In iteration $m + 2$ the direction of change in $V_{ij}$ cannot be opposite. Only in iteration $m + 3$ can $(i, j)$ block again.

Theorem 5. The algorithm converges to an optimal solution in a finite number of iterations.

Proof. $I(M)$ is equal to a linear combination $\sum \delta_{ij} z_{ij}$, where $\delta_{ij} \in \{0, 1, -1\}$ and $z_{ij} \in \{d_{ij}, k_{ij}\}$. Hence there is only a finite set of possible values of $I(M)$ for $M$ = the best improving set. By Corollary 2 of Theorem 4, the algorithm terminates after a finite number of iterations.

Theorem 6. Let $M_r$, $I(M, r)$ be as in Theorem 4. Then $\bigcap_r M_r \ne \emptyset$. (In other
words, there exists at least one node which is included in all best improving sets.)

Proof. Suppose the assertion is false. Then there exist $r$ and $M$ such that $I(M, r) > 0$, $I(S, r) \le 0$ for all $S \subset M$ (i.e. $M$ is a minimal improving set in iteration $r$) and $(\bigcap_{t=1}^{r} M_t) \cap M = \emptyset$.

Let $s = \max\{t:\ t < r,\ M \text{ is not a minimal improving set in iteration } t\}$. Then either (a) or (b) holds:

(a) For all $S \subseteq M$, $I(S, s) \le 0$. If $M_s \cap M = \emptyset$, then $M_s$ is not the best improving set (since if we join $M$ to $M_s$ we obtain a better set). If $M_s \cap M = M_s$, then $I(M, s) \ge I(M, s + 1)$. But this is a contradiction, since we assumed $I(M, s + 1) > 0 \ge I(M, s)$. Therefore $M \cap M_s \ne \emptyset$ and $M \cap M'_s \ne \emptyset$, and $M$ is incident to the blocking arc in iteration $s$. Since $M$ is a minimal improving set in iteration $s + 1$, $I(M \cap M_s, s) = I(M \cap M_s, s + 1) \le 0$. But $I(M, s + 1) > 0$; hence by joining $M \cap M'_s$ to $M_s$ we obtain a better set. Again, this is a contradiction.

(b) There exists $S \subset M$ such that $I(S, s) > 0$. Replace $M$ by $S$, $r$ by $s$ and restart the procedure. Since $N$ is finite, the process must result in a contradiction. Therefore there exists at least one node which is included in all best improving sets.