ARE211, Fall 2007
NPP2: TUE, NOV 13, 2007
PRINTED: DECEMBER 14, 2007
(LEC# 22)
Contents

6. Nonlinear Programming Problems and the Kuhn Tucker conditions (cont)  1
6.2. Necessary and sufficient conditions for a solution to an NPP (cont)  1
6.2.1. Preliminaries: the problem of the vanishing gradient  2
6.2.2. Preliminaries: The relationship between quasiconcavity and the Hessian of f  5
6.2.3. Preliminaries: The relationship between (strict) quasiconcavity and (strict) concavity  6
6.3. Sufficient Conditions for a solution to the NPP  8
6. Nonlinear Programming Problems and the Kuhn Tucker conditions (cont)

6.2. Necessary and sufficient conditions for a solution to an NPP (cont)
So far we’ve only established necessary conditions for a solution to the NPP. But we can’t stop here: we could have found a minimum on the constraint set, and the same KKT conditions would be satisfied. In this lecture we focus on finding sufficient conditions for a solution, and in particular, conditions under which the KKT conditions will be both necessary and almost but not quite sufficient for a solution. The basic sufficiency conditions we’re going to rely on are that the objective function f is strictly quasiconcave while the constraint functions are quasiconvex. But there are a lot of subtleties that we need to address. We begin with some preliminary issues.
6.2.1. Preliminaries: the problem of the vanishing gradient.

In addition to the usual quasiconcavity/quasiconvexity conditions, we have to deal with the familiar annoyance posed by the example: max x^3 on [−1, 1]. Obviously this problem has a solution at x = 1. However, at x = 0, the KKT conditions are satisfied: the gradient is zero, and so it can be written as a nonnegative linear combination of the constraint vectors, with weight zero applied to each of the constraints, neither of which is satisfied with equality. That is, f′(x) = 0, which is the sum of the gradients of the two constraints at zero, each weighted by zero. So without imposing a restriction that excludes functions such as this, we cannot say that satisfying the KKT conditions is sufficient for a max when the objective and constraint functions have the right “quasi” properties.
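As an illustration (a numeric sketch of my own, not from the notes), the following checks that KKT stationarity holds at x = 0 for max x^3 on [−1, 1] with both multipliers set to zero, even though x = 0 is not a maximum:

```python
# f(x) = x^3 on [-1, 1]: KKT stationarity holds at x = 0 with zero
# multipliers, yet x = 0 is not a max (f(1) is larger).

def f(x):
    return x ** 3

def df(x):
    return 3 * x ** 2

# Constraints written as g1(x) = x - 1 <= 0 and g2(x) = -1 - x <= 0,
# with gradients g1' = 1 and g2' = -1.
lam1, lam2 = 0.0, 0.0                # multipliers: neither constraint binds at x = 0
x = 0.0
stationarity = df(x) - (lam1 * 1 + lam2 * (-1))

print(stationarity)                  # 0.0: the KKT conditions are satisfied at x = 0
print(f(1.0) > f(x))                 # True: so x = 0 cannot be the maximum
```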
To exclude this case, we could assume that f has a nonvanishing gradient. But this restriction throws the baby out with the bathwater: e.g., the problem max x(1 − x) s.t. x ∈ [0, 1] has a global max at 0.5, at which point the gradient vanishes.
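A small sketch of this second example (the grid search is my own device, not from the notes): the global max of x(1 − x) on [0, 1] sits exactly where the derivative 1 − 2x vanishes, so ruling out vanishing gradients would rule out this well-behaved problem too.

```python
# f(x) = x(1 - x) on [0, 1]: the global max is at x = 0.5,
# exactly where f'(x) = 1 - 2x vanishes.

def f(x):
    return x * (1 - x)

def df(x):
    return 1 - 2 * x

grid = [i / 1000 for i in range(1001)]   # grid on [0, 1]
x_star = max(grid, key=f)                # argmax over the grid

print(x_star)        # 0.5
print(df(x_star))    # 0.0: the gradient vanishes at the max
```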
So we want to exclude precisely those functions that have vanishing gradients at x’s which are not unconstrained maxima. The following condition on f, called pseudoconcavity in S&B (the original name) and M.K.9 in MWG, does just this, in addition to implying quasiconcavity:

∀ x, x′ ∈ X, if f(x′) > f(x) then ∇f(x) · (x′ − x) > 0.        (1)
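As a quick numerical sanity check of condition (1) (a sketch of my own; the helper satisfies_1 is hypothetical, and a finite grid can only refute, not prove, the condition): x^3 fails (1) precisely because of its vanishing gradient at 0, while the concave function x(1 − x) passes on every sampled pair.

```python
# Check condition (1) on a grid of pairs: for every sampled x, x',
# f(x') > f(x) must imply df(x) * (x' - x) > 0.

def satisfies_1(f, df, points):
    """True if condition (1) holds for every sampled pair of points."""
    return all(
        df(x) * (xp - x) > 0
        for x in points
        for xp in points
        if f(xp) > f(x)
    )

pts = [i / 10 - 1 for i in range(21)]     # grid on [-1, 1]

cube = lambda x: x ** 3
d_cube = lambda x: 3 * x ** 2
hump = lambda x: x * (1 - x)
d_hump = lambda x: 1 - 2 * x

# cube fails: f(1) > f(0) but d_cube(0) * (1 - 0) = 0, not > 0.
print(satisfies_1(cube, d_cube, pts))     # False
# hump is concave, hence pseudoconcave, and passes on this grid.
print(satisfies_1(hump, d_hump, pts))     # True
```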
Note that (1) says a couple of things. First, it says that a necessary condition for f(x′) > f(x) is that dx = (x′ − x) makes an acute angle with the gradient of f. (This looks very much like quasiconcavity.) Second, it implies that if ∇f(x) = 0 at x, then there can be no x′ with f(x′) > f(x), i.e., x must be an unconstrained global maximum of f.
Fall ’07, Simon