MATH 2510 Optimisation
Friday, 30 January, 2009
Lecture 3
Lecturer: Aram W. Harrow
Farkas’ Lemma and strong LP duality
3.1 Using reductions between LPs to extend Farkas' Lemma
Yesterday we saw how different forms of linear programs could be reduced to one another. Today, we will use these reductions to extend Farkas' Lemma and derive other strong alternatives (i.e. either system A is feasible or system B is, but not both).
Theorem 3.1. Fix $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. Then $Ax \le b$ is infeasible iff there exists $z$ such that $z^T b < 0$, $A^T z = 0$ and $z \ge 0$.
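As a sanity check, the alternative in Theorem 3.1 can be verified numerically on a tiny instance. The matrix, right-hand side, and witness below are my own illustrative choices, not from the lecture: the system $x \le -1$, $-x \le -1$ is clearly infeasible, and $z = (1,1)$ certifies this.

```python
# Sketch: check the witness conditions of Theorem 3.1 on a hand-picked example.
# The system  x <= -1  and  -x <= -1  is infeasible, since it forces
# x <= -1 and x >= 1 simultaneously.

A = [[1.0], [-1.0]]          # 2x1 matrix
b = [-1.0, -1.0]
z = [1.0, 1.0]               # candidate infeasibility witness

# z^T b
zTb = sum(zi * bi for zi, bi in zip(z, b))

# A^T z  (an n-vector; here n = 1)
ATz = [sum(A[i][j] * z[i] for i in range(len(A))) for j in range(len(A[0]))]

print(zTb)                          # -2.0, strictly negative
print(ATz)                          # [0.0]
print(all(zi >= 0.0 for zi in z))   # True
```

The three printed checks are exactly the conditions $z^T b < 0$, $A^T z = 0$ and $z \ge 0$ from the theorem.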
Proof. ($\Rightarrow$) Suppose $Ax \le b$ is infeasible. Then the set
$$C := \{ y \in \mathbb{R}^m : \exists x \in \mathbb{R}^n \text{ s.t. } y \ge Ax \}$$
is convex (proof: directly check the definition of convexity) and $b \notin C$. Thus the separating hyperplane theorem tells us that there exists $z \in \mathbb{R}^m$ such that
$$z^T b < \min_{y \in C} z^T y = \min_{\substack{x \in \mathbb{R}^n \\ y \ge Ax}} z^T y = \min_{\substack{x \in \mathbb{R}^n \\ s \in \mathbb{R}^m_+}} z^T (Ax + s) = \min_{x \in \mathbb{R}^n} z^T A x + \min_{s \in \mathbb{R}^m_+} z^T s.$$
In the second equality, we have used the fact that $y \ge Ax$ is equivalent to $\exists s \ge 0$ with $Ax + s = y$. To analyse the minimisations in the last expression, note that the first equals $-\infty$ unless $z^T A = 0$ (indeed, if $A^T z \ne 0$, taking $x = -t\,A^T z$ gives $z^T A x = -t\|A^T z\|^2 \to -\infty$ as $t \to \infty$), and the second equals $-\infty$ unless $z \ge 0$. However, they are lower-bounded by $z^T b$, which is finite, and so it follows that $A^T z = 0$, $z \ge 0$, and both minimisations equal zero. Thus $z^T b < 0$, $A^T z = 0$ and $z \ge 0$, as desired.
($\Leftarrow$) This is the easy direction. We will prove it by contradiction. Suppose there exist $x$ and $z$ with $Ax \le b$, $z^T b < 0$, $A^T z = 0$ and $z \ge 0$. Then
$$0 > z^T b \ge z^T A x = 0,$$
where the $\ge$ uses the fact that $z \ge 0$ and $b \ge Ax$. This is a contradiction, so our assumption that a feasible $x$ existed must have been false. We say that $z$ is a witness to the infeasibility of $Ax \le b$. (Were an $x$ to exist, we could equally well say that it is a witness to the infeasibility of $z^T b < 0$, $A^T z = 0$ and $z \ge 0$.)
What about a slightly more complicated system of constraints? In addition to constraining $Ax \le b$, we will also demand that $x \ge 0$. Since $x \ge 0$ is equivalent to $(-I)x \le 0$, we can express both conditions together as
$$\begin{pmatrix} A \\ -I \end{pmatrix} x \le \begin{pmatrix} b \\ 0 \end{pmatrix}.$$
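This stacking step is purely mechanical. As a sketch in pure Python (the function name is my own choice, not from the lecture):

```python
def stack_nonnegativity(A, b):
    """Given Ax <= b, return (A', b') encoding both Ax <= b and x >= 0
    as the single system A'x <= b', where A' = [A; -I] and b' = [b; 0].
    (Function name is an illustrative choice.)"""
    m, n = len(A), len(A[0])
    neg_I = [[-1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    A_stacked = [row[:] for row in A] + neg_I       # (m + n) x n
    b_stacked = list(b) + [0.0] * n                 # length m + n
    return A_stacked, b_stacked

A2, b2 = stack_nonnegativity([[1.0, 2.0]], [3.0])
print(A2)   # [[1.0, 2.0], [-1.0, 0.0], [0.0, -1.0]]
print(b2)   # [3.0, 0.0, 0.0]
```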
Applying Thm. 3.1, we find that there exist $z \in \mathbb{R}^m$, $w \in \mathbb{R}^n$ such that
$$0 > \begin{pmatrix} z \\ w \end{pmatrix}^T \begin{pmatrix} b \\ 0 \end{pmatrix} = z^T b = b^T z.$$
Next, $\begin{pmatrix} z \\ w \end{pmatrix} \ge 0$, meaning that both $z \ge 0$ and $w \ge 0$. And
$$0 = \begin{pmatrix} A \\ -I \end{pmatrix}^T \begin{pmatrix} z \\ w \end{pmatrix} = \begin{pmatrix} A^T & -I \end{pmatrix} \begin{pmatrix} z \\ w \end{pmatrix} = A^T z - w,$$
meaning that $w = A^T z$. Putting this together, we find we can eliminate $w$ and are left with the constraints
$$b^T z < 0, \qquad z \ge 0, \qquad A^T z \ge 0. \tag{3.1}$$
We have proven one direction of
Theorem 3.2. Fix $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. Then $Ax \le b$, $x \ge 0$ is infeasible iff there exists $z \in \mathbb{R}^m$ such that $z^T b < 0$, $A^T z \ge 0$ and $z \ge 0$.
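Theorem 3.2 can likewise be spot-checked on a tiny instance of my own choosing: $Ax \le b$ with $A = (1)$, $b = (-1)$ asks for $x \le -1$, which no $x \ge 0$ satisfies, and $z = 1$ meets all three witness conditions.

```python
# Spot-check of Theorem 3.2 on a 1x1 example (data chosen for illustration).
# The system  x <= -1  with  x >= 0  is infeasible.
A = [[1.0]]
b = [-1.0]
z = [1.0]

zTb = sum(zi * bi for zi, bi in zip(z, b))
ATz = [sum(A[i][j] * z[i] for i in range(len(A))) for j in range(len(A[0]))]

print(zTb)                           # -1.0 < 0
print(all(v >= 0.0 for v in ATz))    # True: A^T z = [1.0] >= 0
print(all(zi >= 0.0 for zi in z))    # True
```

Note that here $A^T z \ge 0$ replaces the equality $A^T z = 0$ of Theorem 3.1.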
An alternate proof of Thm. 3.2 is to follow the proof of Thm. 3.1 until the step where we minimise over $x$. Then instead of minimising $z^T A x$ over $x \in \mathbb{R}^n$, we are minimising $z^T A x$ over $x \ge 0$. In the former case, we need to have $z^T A = 0$ to avoid obtaining $-\infty$ from the minimisation; in the latter, we merely need $z^T A \ge 0$.