For unconstrained optimization involving two variables, the analytic approach
is to write what are called the “partial” derivatives of the function. We then set
both partial derivatives equal to 0, which creates two equations in two unknowns.
We then solve these equations to find the stationary point(s). This is only easy to
do when the original function is a quadratic, because then the two equations will
be linear. Otherwise, the equations will be non-linear, and all we would have ac-
complished is the conversion of a nonlinear optimization problem into a problem
of solving nonlinear equations. In other words, we might not be any further ahead.
If we are successful in obtaining an analytical solution for the stationary point, we
then need to do a second-order test, which is more complicated than it is for the
single-variable case. The details of how to find partial derivatives, and how to do
the second-order test, are described in Appendix D.
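As a concrete sketch of the two-variable quadratic case, the symbolic work can be done with SymPy. The objective function below is made up for illustration; the point is that its two stationarity equations come out linear:

```python
import sympy as sp

x, y = sp.symbols('x y')
# A made-up quadratic objective (any quadratic gives linear stationarity equations)
f = x**2 + x*y + y**2 - 3*x - 6*y

# Take both partial derivatives, set them to 0, and solve the linear system
fx, fy = sp.diff(f, x), sp.diff(f, y)
stationary = sp.solve([fx, fy], [x, y])

# Second-order test: for a local minimum of a two-variable function,
# the Hessian needs a positive leading entry and a positive determinant
H = sp.hessian(f, (x, y))
is_min = H[0, 0] > 0 and H.det() > 0
```

Here the equations 2x + y − 3 = 0 and x + 2y − 6 = 0 give the single stationary point (0, 3), and the Hessian test confirms it is a local minimum.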
With n variables, we need to find n partial derivatives, set them equal to 0,
and then solve these n equations in n unknowns.
Again, some or even all of
these n equations could be non-linear, and finding a closed-form solution might
be impossible. Furthermore, there is a very extensive procedure for determining
whether a local minimum or a local maximum has been found. A brief outline is
provided in Appendix D.
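For n variables, the second-order test examines the Hessian matrix of second partial derivatives at the stationary point; one common version of the test checks the signs of its eigenvalues. A minimal numerical sketch (the Hessian values below are made up):

```python
import numpy as np

# Hypothetical Hessian of a three-variable function at a stationary point
H = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# The Hessian is symmetric, so eigvalsh gives real eigenvalues
eigenvalues = np.linalg.eigvalsh(H)

if np.all(eigenvalues > 0):
    verdict = "local minimum"        # positive definite Hessian
elif np.all(eigenvalues < 0):
    verdict = "local maximum"        # negative definite Hessian
else:
    verdict = "inconclusive"         # mixed signs: saddle point, or zeros: more work needed
```

For this particular matrix all three eigenvalues are positive, so the stationary point would be a local minimum.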
Naturally, adding constraints complicates things further. It is important that
the feasible region be convex. A region is convex if we can take any two points
in the region, draw a line between them, and have all points on the line between
the two points also lie in the region. For example, a solid sphere (a ball) is a
convex region. A doughnut, however, is not convex.
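The ball-versus-doughnut distinction can be checked directly from the definition: pick two points in the region and test whether a point between them stays inside. A small sketch, with made-up regions and radii:

```python
import numpy as np

def in_ball(p):
    """Solid sphere (ball) of radius 1 centred at the origin: a convex region."""
    return np.linalg.norm(p) <= 1.0

def in_annulus(p):
    """2-D cross-section of a doughnut (0.5 <= radius <= 1): not convex."""
    return 0.5 <= np.linalg.norm(p) <= 1.0

# Two points in the annulus whose connecting segment leaves the region:
a, b = np.array([0.9, 0.0]), np.array([-0.9, 0.0])
midpoint = (a + b) / 2                       # this is the origin
segment_stays_inside = in_annulus(midpoint)  # False: the annulus is not convex
```

The midpoint of (0.9, 0) and (−0.9, 0) is the origin, which lies in the doughnut's hole, so the annulus fails the convexity test; no such counterexample exists for the ball.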
Convexity, along with other conditions on
the constraint set and the function we are optimizing, helps ensure that a local
optimum is also a global optimum. The entire set of conditions for optimality is
known as the Karush-Kuhn-Tucker (KKT) conditions.
All of this is very complex. We soon learn that closed-form analytic solutions
are not usually available, and therefore we must use an algorithm. The GRG (Gen-
eralized Reduced Gradient) Algorithm is a general-purpose algorithm for solving
constrained multivariate optimization problems. We saw this earlier when solving
a problem with one variable and no constraints; it is built into the Excel Solver. When
used, it solves to find a local point of optimality, and verifies that the conditions
for local optimality are satisfied at that point. However, the Solver has no way of
telling if the feasible region is convex, nor can it tell if the function being opti-
mized only has one stationary point. Unless the user has knowledge about these
things, the solution found by the Solver cannot be guaranteed to be a global
optimum; all that is known is that it is better than all neighbouring points.
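This start-point dependence can be seen with any gradient-based local solver. The sketch below uses SciPy's SLSQP method (not GRG itself, but similar in spirit to what the Excel Solver does) on a made-up objective with two local minima:

```python
import numpy as np
from scipy.optimize import minimize

def f(v):
    x, y = v
    return (x**2 - 1)**2 + y**2   # two local minima, at (1, 0) and (-1, 0)

# The same local solver, run from two different starting points
sol_right = minimize(f, x0=[0.5, 0.3], method="SLSQP")
sol_left  = minimize(f, x0=[-0.5, 0.3], method="SLSQP")
```

Starting to the right of the origin the solver converges near (1, 0); starting to the left it converges near (−1, 0). Each answer satisfies the local optimality conditions, yet neither run, on its own, reveals that another optimum exists.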
