Homework #2, Problem 1
(a)
Any feasible solution x must satisfy δ̲ ≤ x_j ≤ δ̄ for all j (each variable is bounded below and above by finite constants), and all c_j are finite constants. The product of two finite numbers is finite, as is a finite sum of finite numbers. Thus the objective value Σ_j c_j x_j is finite for every feasible solution, and so the optimal objective value must be finite.
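The finiteness claim can be written as a one-line bound (a sketch; here δ denotes a common finite bound on every |x_j|, as in the constraints above):

```latex
\Bigl|\sum_{j} c_j x_j\Bigr| \;\le\; \sum_{j} |c_j|\,|x_j| \;\le\; \delta \sum_{j} |c_j| \;<\; \infty .
```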
(b)
Extreme points of F occur at intersections of linearly independent constraints that hold with equality. Since an LP has only finitely many linear constraints, there are only finitely many such intersections, and consequently finitely many extreme points.
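This count can be made concrete (a sketch; the symbols m and n are introduced here for illustration, not from the problem statement): with n variables and m linear constraints, each extreme point is determined by a choice of n linearly independent constraints holding at equality, so

```latex
\#\{\text{extreme points}\} \;\le\; \binom{m}{n} \;<\; \infty .
```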
Notice that this linear system looks similar to statement (I) in Farkas' Theorem. The first step is to combine the first two equations: form a matrix A consisting of M with a row of 1's appended at the bottom, and a column vector x* consisting of x with a 1 appended at the bottom. This gives the following system:
\[
\begin{bmatrix}
M_1 & M_2 & \cdots & M_k \\
1 & 1 & \cdots & 1
\end{bmatrix}
\lambda
=
\begin{bmatrix}
x \\ 1
\end{bmatrix},
\qquad \lambda \ge 0,
\]
or Aλ = x*, λ ≥ 0.
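As a quick numerical sanity check of this construction (a sketch in plain Python; the triangle data and the helper name `gauss_solve` are made-up examples, not part of the problem), one can build the augmented system explicitly for a small case with k = n + 1, where the system is square and solvable by Gaussian elimination:

```python
# Small numerical illustration of the system A·lambda = x*, lambda >= 0
# (the triangle data and `gauss_solve` are assumptions for this demo).

def gauss_solve(A, b):
    """Solve the square system A x = b by Gaussian elimination with
    partial pivoting.  A is a list of rows, b a list of right-hand sides."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix [A | b]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))  # partial pivot
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Extreme points M_1, M_2, M_3 of a triangle in R^2, and a point x inside it.
M1, M2, M3 = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
x = (0.25, 0.25)

# Build A (the M_i as columns, plus a row of 1's) and x* (x with a 1 appended).
A = [[M1[0], M2[0], M3[0]],
     [M1[1], M2[1], M3[1]],
     [1.0,   1.0,   1.0]]
x_star = [x[0], x[1], 1.0]

lam = gauss_solve(A, x_star)       # lam == [0.5, 0.25, 0.25]
assert all(l >= 0.0 for l in lam)  # lambda >= 0: x is a convex combination
```

Here the row of 1's enforces that the weights λ sum to 1, so λ ≥ 0 is exactly the condition that x is a convex combination of the extreme points.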
The goal is to prove that there exists a vector λ such that Aλ = x*, λ ≥ 0. The best way to prove this is to suppose that no such λ exists and arrive at a contradiction.
If no such λ exists, then, by Farkas' Theorem, there exists a y such that yA ≥ 0 and yx* < 0. Note that since no such λ exists and M_1, M_2, …, M_k are the only extreme points, x* cannot be an extreme point.
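The two alternatives in Farkas' Theorem are mutually exclusive, which is what drives the contradiction: if some λ ≥ 0 did satisfy Aλ = x*, then for the y above,

```latex
y x^* \;=\; y (A\lambda) \;=\; (yA)\,\lambda \;\ge\; 0,
```

since both yA ≥ 0 and λ ≥ 0, contradicting yx* < 0.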
Spring '08, TODD