2 Convex and Linear Optimization

2.1 Convexity and Strong Duality
Let $S \subseteq \mathbb{R}^n$. $S$ is called a convex set if for all $\delta \in [0, 1]$, $x, y \in S$ implies that $\delta x + (1-\delta)y \in S$. A function $f : S \to \mathbb{R}$ is called a convex function if for all $x, y \in S$ and $\delta \in [0, 1]$,
$$\delta f(x) + (1-\delta) f(y) \geq f(\delta x + (1-\delta) y).$$
A point $x \in S$ is called an extreme point of $S$ if for all $y, z \in S$ and $\delta \in (0, 1)$, $x = \delta y + (1-\delta)z$ implies that $x = y = z$. A point $x \in S$ is called an interior point of $S$ if there exists $\epsilon > 0$ such that $\{y : \|y - x\|_2 \leq \epsilon\} \subseteq S$. The set of all interior points of $S$ is called the interior of $S$.
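These definitions can be probed numerically. The following sketch (illustrative only; the function names are my own) tests the defining inequality of convexity at randomly sampled points. Random sampling can refute convexity by finding a violated instance, but passing all trials never proves it.

```python
import random

def is_convex_on_samples(f, domain_sampler, trials=10000, tol=1e-9):
    """Randomly test the convexity inequality
    delta*f(x) + (1-delta)*f(y) >= f(delta*x + (1-delta)*y).
    One failure disproves convexity; passing all trials is only evidence."""
    for _ in range(trials):
        x, y = domain_sampler(), domain_sampler()
        d = random.random()  # delta in [0, 1]
        z = d * x + (1 - d) * y
        if d * f(x) + (1 - d) * f(y) < f(z) - tol:
            return False
    return True

sample = lambda: random.uniform(-10.0, 10.0)
print(is_convex_on_samples(lambda x: x * x, sample))   # x^2 is convex on R
print(is_convex_on_samples(lambda x: x ** 3, sample))  # x^3 is not convex on R
```

For $f(x) = x^2$ the gap $\delta f(x) + (1-\delta)f(y) - f(\delta x + (1-\delta)y) = \delta(1-\delta)(x-y)^2$ is nonnegative, so no violation is ever found; for $f(x) = x^3$ a violation such as $x = -2$, $y = 0$, $\delta = 1/2$ is found almost immediately.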
We saw in the previous lecture that strong duality is equivalent to the existence of
a supporting hyperplane. The following result establishes a sufficient condition for the
latter.
Theorem 2.1 (Supporting Hyperplane Theorem). Suppose that $\phi$ is convex and $b \in \mathbb{R}^m$ lies in the interior of the set of points where $\phi$ is finite. Then there exists a (non-vertical) supporting hyperplane to $\phi$ at $b$.
The following result identifies a condition that guarantees convexity of $\phi$.

Theorem 2.2. Consider the optimization problem to

minimize $f(x)$
subject to $h(x) \leq b$
$x \in X,$

and let $\phi$ be given by $\phi(b) = \inf_{x \in X(b)} f(x)$, where $X(b) = \{x \in X : h(x) \leq b\}$. Then $\phi$ is convex when $X$, $f$, and $h$ are convex.
Proof. Consider $b_1, b_2 \in \mathbb{R}^m$ such that $\phi(b_1)$ and $\phi(b_2)$ are defined, and let $\delta \in [0, 1]$ and $b = \delta b_1 + (1-\delta) b_2$. Further consider $x_1 \in X(b_1)$, $x_2 \in X(b_2)$, and let $x = \delta x_1 + (1-\delta) x_2$. Then convexity of $X$ implies that $x \in X$, and convexity of $h$ that
$$h(x) = h(\delta x_1 + (1-\delta) x_2) \leq \delta h(x_1) + (1-\delta) h(x_2) \leq \delta b_1 + (1-\delta) b_2 = b,$$
where the second inequality uses $h(x_1) \leq b_1$ and $h(x_2) \leq b_2$. Thus $x \in X(b)$, and by convexity of $f$,
$$\phi(b) \leq f(x) = f(\delta x_1 + (1-\delta) x_2) \leq \delta f(x_1) + (1-\delta) f(x_2).$$
This holds for all $x_1 \in X(b_1)$ and $x_2 \in X(b_2)$, so taking infima on the right-hand side yields $\phi(b) \leq \delta \phi(b_1) + (1-\delta) \phi(b_2)$.
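A concrete instance of Theorem 2.2 (my own illustrative example, not from the notes): take $X = \mathbb{R}$, $f(x) = x^2$, and $h(x) = -x$, all convex. Then $X(b) = \{x : -x \leq b\} = \{x : x \geq -b\}$, and since the unconstrained minimizer $0$ is simply clamped to the feasible half-line, $\phi(b) = \max(-b, 0)^2$, which is convex as the theorem predicts:

```python
def phi(b):
    """phi(b) = inf { x^2 : -x <= b } = max(-b, 0)^2,
    the unconstrained minimizer 0 clamped to the feasible set x >= -b."""
    x_star = max(-b, 0.0)
    return x_star * x_star

# Spot-check the convexity inequality
# delta*phi(b1) + (1-delta)*phi(b2) >= phi(delta*b1 + (1-delta)*b2):
b1, b2, delta = -3.0, 2.0, 0.25
lhs = delta * phi(b1) + (1 - delta) * phi(b2)
rhs = phi(delta * b1 + (1 - delta) * b2)
print(lhs, rhs, lhs >= rhs)  # 2.25 0.0 True
```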
Observe that an equality constraint $h(x) = b$ is equivalent to the pair of constraints $h(x) \leq b$ and $-h(x) \leq -b$. In this case, the above result requires that $X$, $f$, $h$, and $-h$ are all convex, which in particular requires that $h$ is linear.
2.2 Linear Programs

A linear program is an optimization problem in which the objective and all constraints are linear. It has the form

minimize $c^T x$
subject to $a_i^T x \geq b_i, \quad i \in M_1$
$a_i^T x \leq b_i, \quad i \in M_2$
$a_i^T x = b_i, \quad i \in M_3$
$x_j \geq 0, \quad j \in N_1$
$x_j \leq 0, \quad j \in N_2$

where $c \in \mathbb{R}^n$ is a cost vector, $x \in \mathbb{R}^n$ is a vector of decision variables, and constraints are given by $a_i \in \mathbb{R}^n$ and $b_i \in \mathbb{R}$ for $i \in \{1, \ldots, m\}$. Index sets $M_1, M_2, M_3 \subseteq \{1, \ldots, m\}$ and $N_1, N_2 \subseteq \{1, \ldots, n\}$ specify the form of each constraint and the sign restriction, if any, on each variable.
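The definition of an extreme point connects directly to linear programming: when the feasible region of an LP is nonempty and bounded, an optimum is attained at an extreme point (a vertex). For a two-variable LP this can be exploited by brute force: intersect pairs of constraint boundaries, keep the feasible intersections, and pick the best. A minimal sketch with made-up example data, with all constraints rewritten in the form $Ax \leq b$ ($\geq$-constraints and sign constraints are encoded by negating rows):

```python
from itertools import combinations

# minimize c^T x subject to A x <= b, x in R^2.
# Example data (illustrative): maximize x1 + 2*x2, i.e. minimize -x1 - 2*x2,
# subject to x1 + x2 <= 4, x1 <= 3, x2 <= 3, x1 >= 0, x2 >= 0.
c = [-1.0, -2.0]
A = [[1.0, 1.0], [1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
b = [4.0, 3.0, 3.0, 0.0, 0.0]  # last two rows encode the sign constraints

def intersect(r1, r2, s1, s2):
    """Solve the 2x2 system r1 . x = s1, r2 . x = s2 by Cramer's rule;
    return None if the two boundary lines are parallel."""
    det = r1[0] * r2[1] - r1[1] * r2[0]
    if abs(det) < 1e-12:
        return None
    x1 = (s1 * r2[1] - r1[1] * s2) / det
    x2 = (r1[0] * s2 - s1 * r2[0]) / det
    return (x1, x2)

def feasible(x, tol=1e-9):
    return all(row[0] * x[0] + row[1] * x[1] <= bi + tol
               for row, bi in zip(A, b))

vertices = []
for i, j in combinations(range(len(A)), 2):
    v = intersect(A[i], A[j], b[i], b[j])
    if v is not None and feasible(v):
        vertices.append(v)

best = min(vertices, key=lambda v: c[0] * v[0] + c[1] * v[1])
print(best)  # -> (1.0, 3.0)
```

This enumerates all $\binom{m}{2}$ candidate points, so it is only viable for tiny instances; the simplex method, developed later, walks between extreme points instead of listing them all.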