We have $X = \{(x, z) : x \geqslant 0, z \geqslant 0\} \subseteq \mathbb{R}^{m+n}$. The Lagrangian is given by
$$L((x, z), \lambda) = c^T x - \lambda^T (Ax - z - b) = (c^T - \lambda^T A)\, x + \lambda^T z + \lambda^T b$$
and has a finite minimum over $X$ if and only if $\lambda \in Y = \{\mu \in \mathbb{R}^m : c^T - \mu^T A \geqslant 0, \mu \geqslant 0\}$.
For $\lambda \in Y$, the minimum of $L((x, z), \lambda)$ is attained when both $(c^T - \lambda^T A)\, x = 0$ and $\lambda^T z = 0$, and thus
$$g(\lambda) = \inf_{(x, z) \in X} L((x, z), \lambda) = \lambda^T b.$$
We obtain the dual
$$\max \{ b^T \lambda : A^T \lambda \leqslant c,\ \lambda \geqslant 0 \}. \tag{2.3}$$
The dual of (2.2) can be determined analogously as $\max \{ b^T \lambda : A^T \lambda \leqslant c \}$.
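As a quick numerical sanity check of the dual (2.3), one can solve a small primal of the form $\min\{c^T x : Ax \geqslant b, x \geqslant 0\}$ and its dual with an off-the-shelf LP solver and observe that the optimal values coincide. The sketch below assumes `scipy` is available; the instance (the particular $c$, $A$, $b$) is illustrative, not taken from the text. Note that `linprog` expects constraints in the form $A_{ub}\, x \leqslant b_{ub}$, so the primal constraints are negated.

```python
# Sketch: primal and dual of a small LP attain the same optimal value.
# The instance below is illustrative, not from the text.
import numpy as np
from scipy.optimize import linprog

c = np.array([2.0, 3.0])
A = np.array([[1.0, 2.0], [2.0, 1.0]])
b = np.array([3.0, 3.0])

# Primal: min c^T x  s.t.  Ax >= b, x >= 0.
# linprog uses A_ub x <= b_ub, so multiply the constraints by -1.
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=(0, None))

# Dual (2.3): max b^T lam  s.t.  A^T lam <= c, lam >= 0,
# expressed as a minimization of -b^T lam.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=(0, None))

print(primal.fun)   # optimal primal value
print(-dual.fun)    # optimal dual value; the two agree
```

The agreement of the two printed values is an instance of strong duality for linear programs.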
2.4 Complementary Slackness
An important relationship between primal and dual solutions is provided by conditions
known as
complementary slackness
. Complementary slackness requires that slack does
not occur simultaneously in a variable, of the primal or dual, and the corresponding
constraint, of the dual or primal. Here, a variable is said to have slack if its value is
nonzero, and an inequality constraint is said to have slack if it does not hold with
equality. It is not hard to see that complementary slackness is a necessary condition for
optimality. Indeed, if complementary slackness were violated by some variable and the
corresponding constraint, reducing the value of the variable would reduce the value of
the Lagrangian, contradicting optimality of the current solution. The following result
formalizes this intuition.
Theorem 2.4. Let $x$ and $\lambda$ be feasible solutions for the primal (2.1) and the dual (2.3), respectively. Then $x$ and $\lambda$ are optimal if and only if they satisfy complementary slackness, i.e., if
$$(c^T - \lambda^T A)\, x = 0 \quad \text{and} \quad \lambda^T (Ax - b) = 0.$$
Proof. If $x$ and $\lambda$ are optimal, then
$$c^T x = \lambda^T b = \inf_{x' \in X} \bigl( c^T x' - \lambda^T (Ax' - b) \bigr) \leqslant c^T x - \lambda^T (Ax - b) \leqslant c^T x.$$
Since the first and last term are the same, the two inequalities must hold with equality.
Therefore, $\lambda^T b = c^T x - \lambda^T (Ax - b) = (c^T - \lambda^T A)\, x + \lambda^T b$, and thus $(c^T - \lambda^T A)\, x = 0$.
Furthermore, $c^T x - \lambda^T (Ax - b) = c^T x$, and thus $\lambda^T (Ax - b) = 0$.
If on the other hand $(c^T - \lambda^T A)\, x = 0$ and $\lambda^T (Ax - b) = 0$, then
$$c^T x = c^T x - \lambda^T (Ax - b) = (c^T - \lambda^T A)\, x + \lambda^T b = \lambda^T b,$$
and by weak duality $x$ and $\lambda$ must be optimal.
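The two conditions of Theorem 2.4 can also be checked numerically: solve a small primal and its dual (2.3) and evaluate $(c^T - \lambda^T A)\, x$ and $\lambda^T (Ax - b)$ at the computed optima. The sketch below assumes `scipy`; the instance is illustrative, not from the text, and is chosen so that one primal constraint and one dual constraint actually have slack.

```python
# Sketch: verify the complementary slackness conditions of Theorem 2.4
# on a small LP. The instance below is illustrative, not from the text.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0], [1.0, 0.0]])
b = np.array([2.0, 1.0])

x = linprog(c, A_ub=-A, b_ub=-b, bounds=(0, None)).x      # primal optimum
lam = linprog(-b, A_ub=A.T, b_ub=c, bounds=(0, None)).x   # dual optimum

# (c^T - lam^T A) x = 0: a primal variable is nonzero only if the
# corresponding dual constraint is tight.
print((c - A.T @ lam) @ x)
# lam^T (Ax - b) = 0: a dual variable is nonzero only if the
# corresponding primal constraint is tight.
print(lam @ (A @ x - b))
```

In this instance the second primal constraint has slack at the optimum, and correspondingly the second dual variable is zero, so both products vanish.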
2.5 Shadow Prices
A more intuitive understanding of Lagrange multipliers can be obtained by again viewing (1.1) as a family of problems parameterized by $b \in \mathbb{R}^m$. As before, let $\varphi(b) = \inf \{ f(x) : h(x) = b, x \in \mathbb{R}^n \}$. It turns out that at the optimum, the Lagrange multipliers equal the partial derivatives of $\varphi$.
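For linear programs this shadow-price interpretation can be observed directly: perturb one component of $b$ by a small $\varepsilon$, re-solve, and compare the finite-difference estimate of $\partial \varphi / \partial b_i$ with the optimal dual variable $\lambda_i$. The sketch below assumes `scipy` and a nondegenerate optimum (so that $\varphi$ is differentiable at $b$); the LP instance is illustrative, not from the text.

```python
# Sketch: at a nondegenerate optimum, the dual variables match the
# partial derivatives of phi(b). Illustrative instance, not from the text.
import numpy as np
from scipy.optimize import linprog

c = np.array([2.0, 3.0])
A = np.array([[1.0, 2.0], [2.0, 1.0]])
b = np.array([3.0, 3.0])

def phi(b):
    # phi(b) = min{c^T x : Ax >= b, x >= 0}
    return linprog(c, A_ub=-A, b_ub=-b, bounds=(0, None)).fun

lam = linprog(-b, A_ub=A.T, b_ub=c, bounds=(0, None)).x  # dual optimum

eps = 1e-4
for i in range(len(b)):
    db = np.zeros_like(b)
    db[i] = eps
    # Finite-difference estimate of d phi / d b_i next to lam[i].
    print((phi(b + db) - phi(b)) / eps, lam[i])
```

Since $\varphi$ is piecewise linear in $b$ for an LP, the finite difference matches $\lambda_i$ exactly (up to solver tolerance) as long as the perturbation stays within one linearity region.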