15.083/6.859J
Integer Programming & Combinatorial Optimization
Solutions to Problem Set 2
October 18, 2004

1. BW 2.15
As shown in BW Example 2.3, the class of clique tree inequalities defined in Eq. (2.22),

$$\sum_{i=1}^{h}\sum_{e\in\delta(H_i)} x_e + \sum_{i=1}^{t}\sum_{e\in\delta(T_i)} x_e \;\ge\; 3t + 2h - 1,$$

is valid when $h = 1$. We shall then use induction on $h$ to show validity in general.
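Before the induction, a quick numerical sanity check of the $h = 1$ base case (the comb inequality) may help; the instance below is my own toy example, not part of the solution. It builds a Hamiltonian tour on six nodes with one handle and three teeth and evaluates both sides of the inequality, which here holds with equality:

```python
def tour_edges(tour):
    """Edges of a Hamiltonian tour given as a node sequence."""
    return list(zip(tour, tour[1:])) + [(tour[-1], tour[0])]

def crossing(edges, node_set):
    """Number of tour edges with exactly one endpoint in node_set."""
    return sum((u in node_set) != (v in node_set) for u, v in edges)

# Hypothetical comb: handle H = {0,1,2}, teeth T_i = {i, i+3}, t = 3 (odd).
H = {0, 1, 2}
teeth = [{0, 3}, {1, 4}, {2, 5}]
edges = tour_edges([0, 3, 4, 1, 2, 5])  # a Hamiltonian tour

h, t = 1, len(teeth)
lhs = crossing(edges, H) + sum(crossing(edges, T) for T in teeth)
rhs = 3 * t + 2 * h - 1
print(lhs, rhs)  # → 10 10: the inequality is tight for this tour
assert lhs >= rhs
```

This tour attains the bound exactly, which is consistent with the comb inequalities being facet-defining.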
By multiplying the degree constraints by $\tfrac12$ and summing over the nodes of each $H_i$ and $T_i$, we get

$$\sum_{e\in E(H_i)} x_e + \frac12\sum_{e\in\delta(H_i)} x_e = |H_i|, \quad i = 1,\dots,h,$$
$$\sum_{e\in E(T_i)} x_e + \frac12\sum_{e\in\delta(T_i)} x_e = |T_i|, \quad i = 1,\dots,t.$$

Substituting these into the clique tree inequality (multiplied by $\tfrac12$), we get

$$\sum_{i=1}^{h}\Big(|H_i| - \sum_{e\in E(H_i)} x_e\Big) + \sum_{i=1}^{t}\Big(|T_i| - \sum_{e\in E(T_i)} x_e\Big) \ge \frac{3t+2h-1}{2},$$

i.e., the equivalent form

$$\sum_{i=1}^{h}\sum_{e\in E(H_i)} x_e + \sum_{i=1}^{t}\sum_{e\in E(T_i)} x_e \le \sum_{i=1}^{h}|H_i| + \sum_{i=1}^{t}|T_i| - \frac{3t+2h-1}{2}.$$

Let $C(H,T)$ be a clique tree with $|H| = h+1$, $|T| = t$. Without loss of generality, consider handle $H_{h+1}$ and let the teeth intersecting it be $T_1,\dots,T_k$, where $k$ is odd. Removing the nodes in $H_{h+1}\setminus\bigcup_{i=1}^{k} T_i$ leaves us with $k$ clique trees $C_i(H(C_i), T(C_i))$, $i = 1,\dots,k$, each with at most $h$ handles. Therefore, by the induction hypothesis, for all $i$,

$$\sum_{H\in H(C_i)}\sum_{e\in E(H)} x_e + \sum_{T\in T(C_i)}\sum_{e\in E(T)} x_e \le \sum_{H\in H(C_i)}|H| + \sum_{T\in T(C_i)}|T| - \frac{3|T(C_i)| + 2|H(C_i)| - 1}{2}.$$

Summing over the $k$ inequalities, and given $\sum_{i=1}^{k}|H(C_i)| = h$ and $\sum_{i=1}^{k}|T(C_i)| = t$,

$$\sum_{i=1}^{h}\sum_{e\in E(H_i)} x_e + \sum_{i=1}^{t}\sum_{e\in E(T_i)} x_e \le \sum_{i=1}^{h}|H_i| + \sum_{i=1}^{t}|T_i| - \frac{3t+2h-k}{2}.$$

From $H_{h+1}$, we have the degree identity

$$\sum_{e\in E(H_{h+1})} x_e + \frac12\sum_{e\in\delta(H_{h+1})} x_e = |H_{h+1}|.$$

Adding $-\tfrac12 x_e \le 0$ to this identity for all $e\in\delta(H_{h+1})\setminus\bigcup_{i=1}^{t} E(T_i)$, we get

$$\sum_{e\in E(H_{h+1})} x_e + \frac12\sum_{i=1}^{t}\sum_{e\in\delta(H_{h+1})\cap E(T_i)} x_e \le |H_{h+1}|.$$

Since $H_{h+1}$ intersects $k$ teeth, $\sum_{i=1}^{t}\sum_{e\in\delta(H_{h+1})\cap E(T_i)} x_e \ge k$, and we get

$$\sum_{e\in E(H_{h+1})} x_e \le |H_{h+1}| - \frac{k}{2};$$

since the left-hand side is integral and $k$ is odd,

$$\sum_{e\in E(H_{h+1})} x_e \le |H_{h+1}| - \frac{k+1}{2}.$$

Adding the contribution from $H_{h+1}$ to that of the other handles, we get

$$\sum_{i=1}^{h+1}\sum_{e\in E(H_i)} x_e + \sum_{i=1}^{t}\sum_{e\in E(T_i)} x_e \le \sum_{i=1}^{h+1}|H_i| + \sum_{i=1}^{t}|T_i| - \frac{3t+2h-k+k+1}{2} = \sum_{i=1}^{h+1}|H_i| + \sum_{i=1}^{t}|T_i| - \frac{3t+2(h+1)-1}{2},$$

which is the clique tree inequality for $h+1$ handles in its equivalent form.
3t+2(h+1)−1 2 � 2. BW 3.13
The three matroids, M1 , M2 and M3 are:
(a) M1 is the matroid (A, I2 ) as deﬁned in the book where I2 is the collection of acyclic subgraphs.
(b) Deﬁne M2 as the matroid (A, I5 ) where I5 is the collection of arcs in which no more than
one arc enters each node.
(c) Deﬁne M3 as the matroid (A, I6 ) where I6 is the collection of arcs in which no more than
one arc leaves each node.
Since we know how to represent the convex hull of the incidence vectors of the independent sets of a matroid as an integral polyhedron, and a Hamiltonian path in $D$ contains $|V| - 1$ arcs, a Hamiltonian path in digraph $D$ can be formulated as the integer program

$$Z_{IP} = \max \sum_{e\in A} x_e$$
$$\text{s.t.}\quad \sum_{e\in S} x_e \le m_1(S), \quad S\subseteq A,$$
$$\sum_{e\in S} x_e \le m_2(S), \quad S\subseteq A,$$
$$\sum_{e\in S} x_e \le m_3(S), \quad S\subseteq A,$$
$$\sum_{e\in A} x_e = |V| - 1,$$
$$x_e \in \{0,1\}, \quad e\in A,$$

where $m_1(\cdot)$, $m_2(\cdot)$, and $m_3(\cdot)$ are the rank functions corresponding to matroids $M_1$, $M_2$, and $M_3$, respectively.
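The three independence conditions are easy to test directly. The following sketch (my own illustration; the arc lists and helper name are invented) checks whether an arc set is a common independent set of $M_1$, $M_2$, $M_3$: acyclic as an undirected graph, in-degree at most one, and out-degree at most one. A Hamiltonian path is exactly such a set of size $|V| - 1$.

```python
def is_common_independent(arcs, n):
    """Check independence in M1 (acyclic), M2 (in-degree <= 1), M3 (out-degree <= 1)."""
    indeg = [0] * n
    outdeg = [0] * n
    parent = list(range(n))  # union-find over the underlying undirected graph

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for u, v in arcs:
        outdeg[u] += 1
        indeg[v] += 1
        if outdeg[u] > 1 or indeg[v] > 1:
            return False  # violates M3 or M2
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # closes a cycle: violates M1
        parent[ru] = rv
    return True

# A Hamiltonian path 0 -> 1 -> 2 -> 3 in a 4-node digraph:
print(is_common_independent([(0, 1), (1, 2), (2, 3)], 4))  # → True
# Two arcs entering node 2 violate M2:
print(is_common_independent([(0, 2), (1, 2), (2, 3)], 4))  # → False
```

This mirrors why the formulation works: feasible 0-1 points with $\sum_e x_e = |V| - 1$ are exactly the Hamiltonian paths of $D$.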
3. BW 3.17
The dual is

$$\max \sum_{i\in V} b_i p_i - \sum_{(i,j)\in E} u_{ij} q_{ij}$$
$$\text{s.t.}\quad p_i - p_j - q_{ij} \le c_{ij}, \quad (1)$$
$$q_{ij} \ge 0. \quad (2)$$

Let $(p^*, q^*)$ be a dual optimal solution; since $u_{ij} \ge 0$, $q^*_{ij} = \max\{0,\ p^*_i - p^*_j - c_{ij}\}$. Consider the following randomized rounding algorithm for arbitrary integral $c$:
(a) Generate $U \sim \mathrm{Uniform}(0,1)$.
(b) For all $i$: if $p^*_i - \lfloor p^*_i\rfloor \le U$, set $\bar p_i = \lfloor p^*_i\rfloor$; if $p^*_i - \lfloor p^*_i\rfloor > U$, set $\bar p_i = \lfloor p^*_i\rfloor + 1$.
(c) If $q^*_{ij} > 0$, set $\bar q_{ij} = \bar p_i - \bar p_j - c_{ij}$.
(d) If $q^*_{ij} = 0$, set $\bar q_{ij} = 0$.

Claim: $(\bar p, \bar q)$ is dual feasible.
Proof:
Write $p^*_i = \lfloor p^*_i\rfloor + \mathrm{frac}(p^*_i)$. If $p^*_i$ and $p^*_j$ are both rounded up or both rounded down, constraints (1) and (2) are satisfied by $\bar p_i$, $\bar p_j$, and $\bar q_{ij}$.

Case $q^*_{ij} > 0$:
If $p^*_i$ is rounded up and $p^*_j$ is rounded down, both constraints (1) and (2) are satisfied by $\bar p_i$, $\bar p_j$, and $\bar q_{ij}$.
If $p^*_i$ is rounded down and $p^*_j$ is rounded up, then $\mathrm{frac}(p^*_j) > \mathrm{frac}(p^*_i)$, since both comparisons use the same $U$. Since $q^*_{ij} = p^*_i - p^*_j - c_{ij} = \lfloor p^*_i\rfloor + \mathrm{frac}(p^*_i) - \lfloor p^*_j\rfloor - \mathrm{frac}(p^*_j) - c_{ij} > 0$ and $c_{ij}$ is integral, $\lfloor p^*_i\rfloor - \lfloor p^*_j\rfloor - c_{ij} \ge 1$. Therefore $\bar q_{ij} = \bar p_i - \bar p_j - c_{ij} = \lfloor p^*_i\rfloor - \lfloor p^*_j\rfloor - 1 - c_{ij} \ge 0$. Both constraints (1) and (2) are met.

Case $q^*_{ij} = 0$: here $p^*_i - p^*_j - c_{ij} \le 0$.
If $p^*_i$ is rounded up and $p^*_j$ is rounded down, then $\mathrm{frac}(p^*_i) > \mathrm{frac}(p^*_j)$. From $p^*_i - p^*_j - c_{ij} = \lfloor p^*_i\rfloor + \mathrm{frac}(p^*_i) - \lfloor p^*_j\rfloor - \mathrm{frac}(p^*_j) - c_{ij} \le 0$ and integrality of $c_{ij}$, $\lfloor p^*_i\rfloor - \lfloor p^*_j\rfloor - c_{ij} \le -1$. Therefore $\bar p_i - \bar p_j - c_{ij} = \lfloor p^*_i\rfloor + 1 - \lfloor p^*_j\rfloor - c_{ij} \le 0$, and with $\bar q_{ij} = 0$, both (1) and (2) are met.
If $p^*_i$ is rounded down and $p^*_j$ is rounded up, both constraints (1) and (2) are satisfied by $\bar p_i$, $\bar p_j$, and $\bar q_{ij}$. □

Under the proposed randomized rounding scheme, $E[\bar p] = p^*$, since $\bar p_i = \lfloor p^*_i\rfloor + 1$ with probability $\mathrm{frac}(p^*_i)$. As $\bar q_{ij}$ is either $0$ or a linear function of $\bar p$, taking expectations gives $E[\bar q] = q^*$.
Therefore $E[Z_H] = Z_{DLP}$, the optimal value of the relaxed dual LP. Hence $Z_{DIP}$, the optimal value of the dual IP, equals $Z_{DLP}$ for all integral $c$.
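A small simulation sketch of this scheme may make the feasibility claim concrete. The instance below is invented (any fractional $p^*$ and integral $c$ would do); for each draw of the common threshold $U$, the rounded pair $(\bar p, \bar q)$ is checked against constraints (1) and (2):

```python
import math
import random

def round_duals(p_star, c, U):
    """One draw of the rounding scheme: a single threshold U shared by all p_i."""
    p_bar = {i: math.floor(v) + (1 if v - math.floor(v) > U else 0)
             for i, v in p_star.items()}
    q_bar = {}
    for (i, j), cij in c.items():
        q_star = max(0.0, p_star[i] - p_star[j] - cij)
        # Step (c) when q*_ij > 0, step (d) when q*_ij = 0.
        q_bar[(i, j)] = (p_bar[i] - p_bar[j] - cij) if q_star > 0 else 0
    return p_bar, q_bar

# Hypothetical instance: fractional optimal p*, integral arc costs c.
p_star = {1: 0.3, 2: 1.7, 3: 2.4}
c = {(2, 1): 1, (3, 2): 0, (1, 3): -3}

rng = random.Random(0)
for _ in range(1000):
    U = rng.random()
    p_bar, q_bar = round_duals(p_star, c, U)
    for (i, j), cij in c.items():
        assert p_bar[i] - p_bar[j] - q_bar[(i, j)] <= cij  # constraint (1)
        assert q_bar[(i, j)] >= 0                          # constraint (2)
print("dual feasible on every draw")
```

The shared $U$ is essential: rounding each $p_i$ with an independent threshold would break the frac-comparison argument in the proof.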
4. BW 3.18
Let $G' = (S\cup\{u\}, E')$ be the graph obtained by contracting in $G$ the node set $V\setminus S$ to a single node $u$. Define $x'\in\mathbb{R}^{|E'|}$ as follows:

$$x'_e := x_e \quad\text{for } e\in E(S),$$
$$x'_{(s,u)} := \sum_{r\in V\setminus S,\ (s,r)\in E} x_{(s,r)} \quad\text{for } s\in S,\ (s,u)\in E'.$$

Claim: $x'\in P_{G'}$.
Proof:
For $S'\subseteq S$ we obtain

$$\sum_{e\in\delta_{G'}(S')} x'_e = \sum_{\substack{e\in\delta_{G'}(S')\\ e\in E(S)}} x_e + \sum_{\substack{s\in S'\\ (s,u)\in E'}}\ \sum_{\substack{r\in V\setminus S\\ (s,r)\in E}} x_{(s,r)} = \sum_{e\in\delta(S')} x_e.$$

Therefore, if $|S'|$ is odd and $|S'|\ge 3$, then $\sum_{e\in\delta_{G'}(S')} x'_e \ge 1$. Define $V' = S\cup\{u\}$; for all $S'\subseteq V'$ where $|S'|$ is odd, $|S'|\ge 3$, and $u\in S'$,

$$\sum_{e\in\delta_{G'}(S')} x'_e = \sum_{e\in\delta_{G'}(V'\setminus S')} x'_e \ge 1.$$

Furthermore, for $s\in S$,

$$\sum_{e\in\delta_{G'}(s)} x'_e = \sum_{e\in\delta(s)} x_e = 1,$$

and

$$\sum_{e\in\delta_{G'}(u)} x'_e = \sum_{e\in\delta_{G'}(S)} x'_e = \sum_{e\in\delta(S)} x_e = 1.$$

The last step is given in the text. From the above, $0\le x'_e\le 1$ for all $e\in E'$ is also satisfied. This shows that $x'\in P_{G'}$ and that $x'$ satisfies (3.16) with respect to $G'$. For completeness, since $G$ is a minimal example, $x'$ is a convex combination of perfect matchings in $G'$, i.e.,

$$x' = \sum_{M'\in\mathcal{M}'} \lambda_{M'}\,\chi^{M'}, \quad\text{where } \lambda_{M'}\ge 0,\ \sum_{M'\in\mathcal{M}'}\lambda_{M'} = 1,$$

and $\mathcal{M}'$ denotes the set of all perfect matchings in $G'$.
Similarly, letting $G'' = ((V\setminus S)\cup\{t\}, E'')$ be the graph obtained by contracting $S$ to a single node $t$, we can show that $x''\in P_{G''}$, so that $x'' = \sum_{M''\in\mathcal{M}''}\mu_{M''}\,\chi^{M''}$ with $\mu_{M''}\ge 0$ and $\sum_{M''\in\mathcal{M}''}\mu_{M''} = 1$, where $\mathcal{M}''$ denotes the set of all perfect matchings in $G''$.

Define

$$x = \sum_{M\in\mathcal{M}} \alpha_M\,\chi^M,$$

where $\mathcal{M}$ denotes the set of all perfect matchings in $G$. The coefficients $\alpha_M$ are defined as follows:
If $M$ is a perfect matching in $G$ such that $|M\cap\delta(S)| > 1$, then we set $\alpha_M := 0$.
If $M$ is a perfect matching in $G$ such that $|M\cap\delta(S)| = 1$, then by contracting $V\setminus S$ or $S$ we obtain perfect matchings $M'$ in $G'$ and $M''$ in $G''$, respectively: if $\{(s,r)\} = M\cap\delta(S)$ where $s\in S$, $r\in V\setminus S$, then

$$M' = (M\cap E(S))\cup\{(s,u)\}\in\mathcal{M}', \qquad M'' = \{(t,r)\}\cup(M\cap E(V\setminus S))\in\mathcal{M}''.$$

Set

$$\alpha_M := \frac{x_{(s,r)}}{x'_{(s,u)}\cdot x''_{(t,r)}}\,\lambda_{M'}\,\mu_{M''}.$$

Claim: The above defines a convex combination of perfect matchings.
Proof:
We have that $\alpha_M\ge 0$ for all $M\in\mathcal{M}$. For $(s,r)\in\delta(S)$, $s\in S$, $r\in V\setminus S$, we have

$$\sum_{\substack{M\in\mathcal{M}\\ M\cap\delta(S)=\{(s,r)\}}} \alpha_M = \frac{x_{(s,r)}}{x'_{(s,u)}\cdot x''_{(t,r)}} \underbrace{\Bigg(\sum_{\substack{M'\in\mathcal{M}'\\ (s,u)\in M'}}\lambda_{M'}\Bigg)}_{=\,x'_{(s,u)}} \underbrace{\Bigg(\sum_{\substack{M''\in\mathcal{M}''\\ (t,r)\in M''}}\mu_{M''}\Bigg)}_{=\,x''_{(t,r)}} = x_{(s,r)}.$$

This implies that

$$\sum_{M\in\mathcal{M}}\alpha_M = \sum_{(s,r)\in\delta(S)}\ \sum_{\substack{M\in\mathcal{M}\\ M\cap\delta(S)=\{(s,r)\}}}\alpha_M = \sum_{e\in\delta(S)} x_e = 1,$$

as well as

$$\Big(\sum_{M\in\mathcal{M}}\alpha_M\chi^M\Big)_{(s,r)} = \sum_{\substack{M\in\mathcal{M}\\ M\cap\delta(S)=\{(s,r)\}}}\alpha_M = x_{(s,r)} \quad\text{for } (s,r)\in\delta(S).$$

For $e\in E(S)$ we have

$$\Big(\sum_{M\in\mathcal{M}}\alpha_M\chi^M\Big)_e = \sum_{(s,r)\in\delta(S)} \frac{x_{(s,r)}}{x'_{(s,u)}\cdot x''_{(t,r)}} \Bigg(\sum_{\substack{M'\in\mathcal{M}'\\ (s,u)\in M',\ e\in M'}}\lambda_{M'}\Bigg) \underbrace{\Bigg(\sum_{\substack{M''\in\mathcal{M}''\\ (t,r)\in M''}}\mu_{M''}\Bigg)}_{=\,x''_{(t,r)}}$$
$$= \sum_{s\in S}\underbrace{\Bigg(\sum_{\substack{r\in V\setminus S\\ (s,r)\in E}} \frac{x_{(s,r)}}{x'_{(s,u)}}\Bigg)}_{=\,1} \sum_{\substack{M'\in\mathcal{M}'\\ (s,u)\in M',\ e\in M'}}\lambda_{M'} = \Big(\sum_{M'\in\mathcal{M}'}\lambda_{M'}\chi^{M'}\Big)_e = x'_e = x_e,$$

where the last grouping uses the fact that every $M'\in\mathcal{M}'$ contains exactly one edge incident to $u$.

For $e\in E(V\setminus S)$ we have, symmetrically,

$$\Big(\sum_{M\in\mathcal{M}}\alpha_M\chi^M\Big)_e = \sum_{(s,r)\in\delta(S)} \frac{x_{(s,r)}}{x'_{(s,u)}\cdot x''_{(t,r)}} \underbrace{\Bigg(\sum_{\substack{M'\in\mathcal{M}'\\ (s,u)\in M'}}\lambda_{M'}\Bigg)}_{=\,x'_{(s,u)}} \Bigg(\sum_{\substack{M''\in\mathcal{M}''\\ (t,r)\in M'',\ e\in M''}}\mu_{M''}\Bigg)$$
$$= \sum_{r\in V\setminus S}\underbrace{\Bigg(\sum_{\substack{s\in S\\ (s,r)\in E}} \frac{x_{(s,r)}}{x''_{(t,r)}}\Bigg)}_{=\,1} \sum_{\substack{M''\in\mathcal{M}''\\ (t,r)\in M'',\ e\in M''}}\mu_{M''} = \Big(\sum_{M''\in\mathcal{M}''}\mu_{M''}\chi^{M''}\Big)_e = x''_e = x_e.$$

This gives

$$x = \sum_{M\in\mathcal{M}}\alpha_M\chi^M, \quad\text{where } \alpha_M\ge 0,\ \sum_{M\in\mathcal{M}}\alpha_M = 1. \qquad\square$$

5. Hands On Exercise: Facility Location
Some figures may differ from your observations (e.g., running times).
(a) Solving FL.prj and AFL.prj with cuts
i. Compare the size of the two formulations in terms of the number of constraints and variables. Which formulation is more compact?
FL is a larger formulation as compared to AFL; AFL is the more compact of the two.

                   FL     AFL
    constraints   4200     220
    variables     4020    4020

ii. Which formulation takes a shorter solution time? How many cuts were used? How many nodes were evaluated?
FL solves faster.

                   FL      AFL
    Time (s)      0.42     4.81
    Cuts          none     660 implied bound cuts
    Nodes            0       19
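The counts in the first table are consistent with an instance of 20 candidate facilities and 200 clients (my assumption; the .prj data are not shown here). With assignment variables x_ij and open/close variables y_i, the sizes can be reproduced as follows:

```python
def formulation_sizes(n_fac, n_cli):
    """Constraint/variable counts for the disaggregated (FL) and aggregated (AFL) models."""
    n_vars = n_fac * n_cli + n_fac      # x_ij for every pair, plus y_i; same in both models
    fl_cons = n_cli + n_fac * n_cli     # one assignment row per client + x_ij <= y_i per pair
    afl_cons = n_cli + n_fac            # one assignment row per client + sum_j x_ij <= n_cli * y_i
    return fl_cons, afl_cons, n_vars

fl_cons, afl_cons, n_vars = formulation_sizes(20, 200)
print(fl_cons, afl_cons, n_vars)  # → 4200 220 4020
```

The disaggregated constraints x_ij <= y_i are what make FL both larger and tighter: AFL replaces 4000 of them with just 20 aggregated rows.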
iii. Did we converge because the duality gap met the tolerance, or did we exhaust the branch-and-bound tree? How do you know?
For AFL, we exhausted the branch-and-bound tree: there are no nodes left to be explored at the end of the CPLEX run. For FL, the problem was solved in the root relaxation using heuristics.
iv. Why does it take longer to solve the reduced formulation? Do the constraints in FL describe the convex hull of integer solutions?
AFL, while more compact, is a weaker formulation than FL: its linear relaxation is not as close to the convex hull of the integer solutions as that of FL.
The constraints in FL do not describe the convex hull of integer solutions. (Facility location is NP-hard, so we do not expect a tractable complete description of its convex hull.)
v. In solving FL, will more time be required to solve the problem if we do not use cuts? Why?
No. FL is solved during the root relaxation; cuts were not used in solving the problem even though CPLEX was allowed to generate them.
(b) Solving FL.prj and AFL.prj without cuts
i. What are the new solution times and numbers of nodes evaluated? Does it take longer or shorter to solve FL and AFL?

                   FL      AFL
    Time (s)      0.42     9.52
    Nodes            0    >2900

It takes longer to solve AFL; FL takes the same amount of time to solve.
ii. Why are more nodes explored when solving AFL without cut generation? Why are there many cutoffs before the solution run terminates?
Without cuts, the branch-and-bound tree to be evaluated is larger; consequently, more nodes have to be evaluated. There are many cutoffs because of the branch-and-bound process: remaining nodes are shown to be unable to improve on the current solution, and those branches are cut off. This problem is relatively easy, and we can solve it via an exhaustive exploration of the branch-and-bound tree.
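The effect described above can be mimicked in a toy branch-and-bound, where a tighter bound plays the role that cuts play inside CPLEX: it triggers more cutoffs, so fewer nodes are explored. A minimal sketch on an invented 0/1 knapsack instance (not related to the facility-location data):

```python
def knapsack_bb(values, weights, cap, use_lp_bound):
    """Depth-first branch and bound; returns (best value, nodes explored)."""
    n = len(values)
    # Sort by value density so the fractional (LP) bound is valid.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    values = [values[i] for i in order]
    weights = [weights[i] for i in order]
    best = 0
    nodes = 0

    def bound(k, cap_left):
        """Upper bound on what items k..n-1 can still contribute."""
        total = 0.0
        for i in range(k, n):
            if not use_lp_bound:
                total += values[i]            # loose bound: ignore capacity entirely
            elif weights[i] <= cap_left:
                total += values[i]
                cap_left -= weights[i]
            else:                             # LP bound: fractional piece of item i
                total += values[i] * cap_left / weights[i]
                break
        return total

    def dfs(k, cap_left, value):
        nonlocal best, nodes
        nodes += 1
        best = max(best, value)
        if k == n or value + bound(k, cap_left) <= best:
            return  # cutoff: this subtree cannot improve on the incumbent
        if weights[k] <= cap_left:
            dfs(k + 1, cap_left - weights[k], value + values[k])
        dfs(k + 1, cap_left, value)

    dfs(0, cap, 0)
    return best, nodes

values = [60, 100, 120, 40, 30, 90, 70, 20]
weights = [10, 20, 30, 15, 5, 25, 30, 8]
opt_tight, nodes_tight = knapsack_bb(values, weights, 60, use_lp_bound=True)
opt_loose, nodes_loose = knapsack_bb(values, weights, 60, use_lp_bound=False)
assert opt_tight == opt_loose          # both bounds are valid: same optimum
print(nodes_tight, "<=", nodes_loose)  # the tighter bound never explores more nodes
```

Both runs find the same optimum because both bounds are valid upper bounds; the tight bound simply certifies "cannot improve" earlier, exactly the mechanism behind the cutoffs observed in the CPLEX log.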
(c) FLhard.lp and AFLhard.lp
i. Why is this problem harder?
The problem is harder simply because the cost values (facility and service costs) are different; the numerical properties of the instance are different.
ii. Record the time taken and the number of nodes evaluated. Does it always take longer to solve without cuts? Why?

               FL cuts   FL no cuts   AFL cuts   AFL no cuts
    Time        19.19        18.79      39.03      > 2 min
    Nodes         100          193       1383      > 23600

No. Using cuts requires more time per node evaluation, but fewer nodes may need to be evaluated. When the cuts are not helpful, more time is required when cuts are used.
iii. If you have the luxury of time but your only option is to solve via the AFL formulation without the use of cuts, what problems might you encounter if the problem has more facilities and clients, and you just let CPLEX run continuously?
You might run out of memory because of the growth of the branch-and-bound tree. Also, CPLEX might not terminate because it simply takes too long to evaluate the whole branch-and-bound tree.