MATH 3/M2510 Optimisation                                         Friday, 30 January 2009

Lecture 3: Farkas' Lemma and strong LP duality                    Lecturer: Aram W. Harrow

3.1 Using reductions between LPs to extend Farkas' Lemma

Yesterday we saw how different forms of linear programs can be reduced to one another. Today we will use these reductions to extend Farkas' Lemma and derive other strong alternatives (i.e. statements of the form "either system A is feasible or system B is, but never both").

Theorem 3.1. Fix $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. Then $Ax \le b$ is infeasible iff there exists $z$ such that $z^T b < 0$, $A^T z = 0$ and $z \ge 0$.

Proof. ($\Rightarrow$) Suppose $Ax \le b$ is infeasible. Then the set
\[
C := \{\, y \in \mathbb{R}^m : \exists x \in \mathbb{R}^n \text{ s.t. } y \ge Ax \,\}
\]
is convex (proof: directly check the definition of convexity) and $b \notin C$. Thus the separating hyperplane theorem tells us that there exists $z \in \mathbb{R}^m$ such that
\[
z^T b \;<\; \min_{y \in C} z^T y
\;=\; \min_{x \in \mathbb{R}^n,\; y \ge Ax} z^T y
\;=\; \min_{x \in \mathbb{R}^n,\; s \in \mathbb{R}^m_+} z^T (Ax + s)
\;=\; \min_{x \in \mathbb{R}^n} z^T A x \;+\; \min_{s \in \mathbb{R}^m_+} z^T s .
\]
In the second equality we have used the fact that $y \ge Ax$ is equivalent to the existence of $s \ge 0$ with $Ax + s = y$. To analyse the minimisations in the last expression, note that the first equals $-\infty$ unless $z^T A = 0$, and the second equals $-\infty$ unless $z \ge 0$. However, both are lower-bounded by $z^T b > -\infty$, so it follows that $A^T z = 0$, $z \ge 0$ and both minimisations equal zero. Thus $z^T b < 0$, $A^T z = 0$ and $z \ge 0$, as desired.

($\Leftarrow$) This is the easy direction, which we prove by contradiction. Suppose there exist $x$ and $z$ with $Ax \le b$, $z^T b < 0$, $A^T z = 0$ and $z \ge 0$. Then
\[
0 \;>\; z^T b \;\ge\; z^T A x \;=\; 0 ,
\]
where the $\ge$ uses the fact that $z \ge 0$ and $b \ge Ax$. This is a contradiction, so our assumption that a feasible $x$ existed must have been false.

We say that $z$ is a witness to the infeasibility of $Ax \le b$. (Were such an $x$ to exist, we could equally well say that it is a witness to the infeasibility of the system $z^T b < 0$, $A^T z = 0$, $z \ge 0$.)
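It can be helpful to see Theorem 3.1 on a tiny instance. The following sketch is an added illustration rather than part of the lecture: the pair $A, b$ is an assumed toy example for which $Ax \le b$ is infeasible, SciPy's linprog is used to confirm infeasibility, and a witness is then found by minimising $b^T z$ subject to $A^T z = 0$, $z \ge 0$; the extra constraint $\mathbf{1}^T z \le 1$ is only a normalisation so that the witness LP has a finite optimum.

```python
# Added illustration of Theorem 3.1 (not from the lecture notes).
# A and b below are an assumed toy example; SciPy is assumed available.
import numpy as np
from scipy.optimize import linprog

# The system  x <= -1  and  -x <= -1  (i.e. x >= 1)  has no solution.
A = np.array([[1.0], [-1.0]])
b = np.array([-1.0, -1.0])
m, n = A.shape

# 1) Check infeasibility of Ax <= b (zero objective, x unrestricted in sign).
primal = linprog(c=np.zeros(n), A_ub=A, b_ub=b, bounds=[(None, None)] * n)
print("Ax <= b feasible?", primal.status == 0)            # expect: False

# 2) Look for a Farkas witness:  minimise b^T z  s.t.  A^T z = 0, z >= 0.
#    The constraint 1^T z <= 1 only normalises z so the LP stays bounded.
witness_lp = linprog(c=b,
                     A_eq=A.T, b_eq=np.zeros(n),
                     A_ub=np.ones((1, m)), b_ub=[1.0],
                     bounds=[(0, None)] * m)
z = witness_lp.x
print("witness z =", z)
print("z^T b =", b @ z, "  A^T z =", A.T @ z, "  z >= 0:", bool(np.all(z >= 0)))
```

Any positive rescaling of the printed $z$ is, of course, also a witness; the normalisation merely picks one representative.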
What about a slightly more complicated system of constraints? In addition to constraining $Ax \le b$ we will also demand that $x \ge 0$. Since $x \ge 0$ is equivalent to $(-I)x \le 0$, we can express both conditions together as
\[
\begin{pmatrix} A \\ -I \end{pmatrix} x \;\le\; \begin{pmatrix} b \\ 0 \end{pmatrix} .
\]
Applying Thm. 3.1, we find that there exist $z \in \mathbb{R}^m$ and $w \in \mathbb{R}^n$ such that
\[
0 \;>\; \begin{pmatrix} z \\ w \end{pmatrix}^T \begin{pmatrix} b \\ 0 \end{pmatrix} \;=\; z^T b \;=\; b^T z .
\]
Next, $\begin{pmatrix} z \\ w \end{pmatrix} \ge 0$, meaning that both $z \ge 0$ and $w \ge 0$. And
\[
0 \;=\; \begin{pmatrix} A \\ -I \end{pmatrix}^T \begin{pmatrix} z \\ w \end{pmatrix} \;=\; A^T z - w ,
\]
meaning that $w = A^T z$. Putting this together, we can eliminate $w$ and are left with the constraints
\[
b^T z < 0, \qquad z \ge 0, \qquad A^T z \ge 0. \tag{3.1}
\]
We have proven one direction of

Theorem 3.2. Fix $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. Then $Ax \le b$, $x \ge 0$ is infeasible iff there exists $z \in \mathbb{R}^m$ such that $z^T b < 0$, $A^T z \ge 0$ and $z \ge 0$.

An alternative proof of Thm. 3.2 is to follow the proof of Thm. 3.1 up to the step where we minimise over $x$. Then, instead of minimising $z^T A x$ over $x \in \mathbb{R}^n$, we minimise $z^T A x$ over $x \ge 0$. In the former case we need $z^T A = 0$ to avoid obtaining $-\infty$ from the minimisation; in the latter, we merely need $z^T A \ge 0$, since then the minimum over $x \ge 0$ is attained at $x = 0$.
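The same kind of numerical check works for Theorem 3.2. Again this is an added sketch with assumed example data, not something from the notes: the only changes are that the primal system now imposes $x \ge 0$, and the witness search asks for $A^T z \ge 0$ rather than $A^T z = 0$, exactly as in (3.1).

```python
# Added illustration of Theorem 3.2 (not from the lecture notes);
# A and b are again an assumed toy example.
import numpy as np
from scipy.optimize import linprog

# x1 + x2 <= -1 together with x >= 0 has no solution.
A = np.array([[1.0, 1.0]])
b = np.array([-1.0])
m, n = A.shape

# 1) Check infeasibility of Ax <= b, x >= 0.
primal = linprog(c=np.zeros(n), A_ub=A, b_ub=b, bounds=[(0, None)] * n)
print("Ax <= b, x >= 0 feasible?", primal.status == 0)    # expect: False

# 2) Look for a witness of (3.1):  z >= 0,  A^T z >= 0,  b^T z < 0.
#    A^T z >= 0 is written as -A^T z <= 0; 1^T z <= 1 again just normalises z.
witness_lp = linprog(c=b,
                     A_ub=np.vstack([-A.T, np.ones((1, m))]),
                     b_ub=np.concatenate([np.zeros(n), [1.0]]),
                     bounds=[(0, None)] * m)
z = witness_lp.x
print("witness z =", z, "  b^T z =", b @ z, "  A^T z =", A.T @ z)
```

In this instance the witness LP attains a negative value precisely because the primal system is infeasible, matching the strong alternative (3.1).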