Notes on optimization
Francesc M. Torralba
January 12, 2005
1 Unconstrained optimization
An unconstrained optimization problem is a problem of the form

\max_x f(x) \quad \text{subject to} \quad x \in X

or

\min_x f(x) \quad \text{subject to} \quad x \in X

where f : \mathbb{R}^n \to \mathbb{R} is at least once differentiable (i.e. at least the first derivative exists) and X is an open set, i.e. every feasible point is "surrounded"[1] by other feasible points. You can also think of an open set as a set that does not contain its own boundary. According to this definition, the following problem
\min_x e^x \quad \text{s.t.} \quad x > 0

is an unconstrained problem, because f(x) = e^x is infinitely differentiable and the set of strictly positive real numbers is open (the boundary of the set of strictly positive numbers is not contained in the set itself).
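As a minimal numerical sketch of this example (the function and sample points are chosen here for illustration, not taken from the notes): e^x on the open feasible set x > 0 gets arbitrarily close to e^0 = 1 as x shrinks toward 0, but no strictly positive x attains that infimum, so the minimum does not exist even though the problem is "unconstrained" in the sense above.

```python
import math

# Objective f(x) = e^x on the open set X = (0, infinity).
# The infimum over X is e^0 = 1, but x = 0 is not feasible,
# so no feasible point attains it.
for x in [1.0, 0.1, 0.01, 0.001]:
    print(x, math.exp(x))
# The values decrease toward 1 as x approaches 0 from the right,
# yet exp(x) > 1 for every strictly positive x.
```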
[1] For a more formal definition of open set, consult any textbook in real analysis (for example, Kolmogorov and Fomin (1970)).
1.1 First order conditions
From your calculus class you probably know Fermat's rule: in a univariate optimization problem without constraints, if the objective function, f, is differentiable, then the first derivative of the objective function, f', must be equal to zero at any point at which f attains a maximum or a minimum:

\frac{df(x^*)}{dx} = 0

where x^* is a point that maximizes or minimizes the objective function. The points at which this condition holds will be called critical points. The reason why Fermat's rule holds is pretty intuitive: if the first derivative is positive at a given point x, then the function is strictly increasing around x, so the value of the function strictly increases or decreases depending on whether we take points to the right or to the left of x, respectively. But then f cannot attain either a maximum or a minimum at x. On the other hand, if the first derivative is negative at a given point x, then the function is strictly decreasing around that point, and therefore the value of the function increases or decreases depending on whether we take points to the left or to the right of x, respectively; again, x can be neither a maximum nor a minimum. Only when we pick a critical point x^*, where the first derivative of the objective function is zero, can we have a maximum or a minimum.
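The rule can be checked on a small example (the function below is a hypothetical illustration, not one from the notes): for f(x) = (x - 2)^2 + 1, solving f'(x) = 2(x - 2) = 0 gives the critical point x^* = 2, and nearby function values confirm it is a minimizer.

```python
# Hypothetical example: f(x) = (x - 2)^2 + 1, minimized at x* = 2.
def f(x):
    return (x - 2) ** 2 + 1

def f_prime(x):
    return 2 * (x - 2)   # analytic first derivative

# Fermat's rule: f'(x*) = 0 at any interior maximizer or minimizer.
x_star = 2.0             # solve 2(x - 2) = 0  =>  x = 2
assert f_prime(x_star) == 0.0

# Values on either side of x* are strictly larger, so x* is a minimizer.
print(f(1.9), f(x_star), f(2.1))
```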
Remarks:
1) For a multivariate problem, where the objective function is defined over \mathbb{R}^n, a critical point is a vector at which all the partial derivatives of the objective function are zero.
2) Fermat's rule gives us a necessary, not a sufficient, condition that all maxima/minima satisfy. In other words, all maxima/minima in an unconstrained optimization problem must satisfy Fermat's rule, but not all critical points are maxima/minima. In some cases we will find that some points at which all the partial derivatives of the objective function are zero are neither maximizers nor minimizers. We will need additional criteria (the second order conditions) to discern maxima/minima from the whole set of critical points.
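Both remarks can be illustrated with one small multivariate example (chosen here for illustration, not taken from the notes): f(x, y) = x^2 - y^2 has both partial derivatives equal to zero at the origin, so (0, 0) is a critical point, yet it is neither a maximizer nor a minimizer (it is a saddle point).

```python
# Hypothetical example: f(x, y) = x^2 - y^2, a saddle at the origin.
def f(x, y):
    return x ** 2 - y ** 2

def grad(x, y):
    return (2 * x, -2 * y)   # (df/dx, df/dy)

# The origin is a critical point: all partial derivatives vanish there.
assert grad(0.0, 0.0) == (0.0, 0.0)

# But it is neither a maximum nor a minimum: moving along the x-axis
# increases f, while moving along the y-axis decreases it.
print(f(0.1, 0.0), f(0.0, 0.0), f(0.0, 0.1))
```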