Chapter 20 Problems with Equality Constraints
An Introduction to Optimization
Fall 2015
Dr Mohsen Sojoudi
1 Introduction
We solve a class of nonlinear constrained optimization problems that can be formulated as

minimize $f(x)$
subject to $h_1(x) = 0, \ldots, h_m(x) = 0$,

where $x \in \mathbb{R}^n$, $f : \mathbb{R}^n \to \mathbb{R}$, $h_i : \mathbb{R}^n \to \mathbb{R}$, and $m \le n$. In vector notation, the problem reads: minimize $f(x)$ subject to $h(x) = 0$, where $h = [h_1, \ldots, h_m]^T : \mathbb{R}^n \to \mathbb{R}^m$.
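As a concrete illustration (my example, not from the text), a small instance of this problem class can be solved by writing the Lagrange condition $\nabla f(x) = \lambda \nabla h(x)$ together with the constraint and solving the resulting system. The MATLAB sketch below does this for minimizing $x_1^2 + x_2^2$ subject to $x_1 + x_2 - 1 = 0$:

% Hypothetical worked example: minimize x1^2 + x2^2 subject to
% x1 + x2 - 1 = 0. The Lagrange condition grad f = lambda*grad h
% plus the constraint give a 3x3 linear system in (x1, x2, lambda):
%   2*x1 - lambda = 0,  2*x2 - lambda = 0,  x1 + x2 = 1.
A = [2 0 -1; 0 2 -1; 1 1 0];
b = [0; 0; 1];
z = A\b;      % z = [x1; x2; lambda]
disp(z');     % expected: 0.5  0.5  1.0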
Chapter 21 Problems with Inequality Constraints
1 Karush-Kuhn-Tucker Condition
Consider the following problem:

minimize $f(x)$
subject to $h(x) = 0$, $g(x) \le 0$,

where $f : \mathbb{R}^n \to \mathbb{R}$, $h : \mathbb{R}^n \to \mathbb{R}^m$, $g : \mathbb{R}^n \to \mathbb{R}^p$, and $m \le n$.
Definition 21.1. An inequality constraint $g_j(x) \le 0$ is said to be active at $x^*$ if $g_j(x^*) = 0$, and inactive at $x^*$ if $g_j(x^*) < 0$.
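The active/inactive distinction is easy to check numerically. A minimal sketch (the constraint functions and the test point are my own illustration):

% Check which inequality constraints g_j(x) <= 0 are active at x,
% i.e., hold with equality (within a tolerance).
g = @(x) [x(1) + x(2) - 2;   % g1(x) <= 0
          -x(1)];            % g2(x) <= 0
x = [0; 2];
active = abs(g(x)) < 1e-8;   % logical mask of active constraints
disp(find(active)');         % both g1 and g2 are active at this x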
Chapter 15 Introduction to Linear Programming
1 Brief History of Linear Programming
The goal of linear programming is to determine the values of decision variables that maximize or minimize a linear objective function, subject to a set of linear constraints.
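For instance, a two-variable LP can be passed to MATLAB's linprog (this requires the Optimization Toolbox; the particular objective and constraints below are my own illustration, not from the text):

% Maximize x1 + 2*x2 subject to x1 + x2 <= 4, x1, x2 >= 0,
% written as minimization of -x1 - 2*x2 because linprog minimizes.
f = [-1; -2];
A = [1 1]; b = 4;        % inequality constraint A*x <= b
lb = [0; 0];             % nonnegativity bounds
x = linprog(f, A, b, [], [], lb);
disp(x');                % expected maximizer: [0 4]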
Chapter 19 Integer Linear Programming
1 Introduction
Integer linear programming (ILP), or simply integer programming, is the class of linear programming problems with the additional constraint that the components of the solution must be integers.
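As a sketch (assuming the Optimization Toolbox's intlinprog is available; the data are my own illustration), the integrality constraint is expressed by listing the indices of the integer-constrained variables:

% Maximize x1 + x2 subject to 2*x1 + 2*x2 <= 5 with x1, x2
% nonnegative integers; the LP relaxation attains 2.5, but the
% best integer objective value is 2.
f = [-1; -1];
intcon = [1 2];          % indices of integer-constrained variables
A = [2 2]; b = 5;
lb = [0; 0];
x = intlinprog(f, intcon, A, b, [], [], lb);
disp(x');                % e.g. [2 0] or [1 1]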
Chapter 16 Simplex Method
1 Solving Linear Equations Using Row Operations
An elementary row operation on a given matrix is an algebraic manipulation of the matrix that corresponds to one of the following: interchanging two rows, multiplying a row by a nonzero scalar, or adding a scalar multiple of one row to another row.
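In MATLAB, each of the three operations is a one-line indexing manipulation, and rref applies them systematically to reach reduced row echelon form (a small sketch with a matrix of my own choosing):

A = [1 2 3; 2 5 7; 3 7 11];
A(2,:) = A(2,:) - 2*A(1,:);        % add a multiple of one row to another
A([1 3],:) = A([3 1],:);           % interchange two rows
A(1,:) = 0.5*A(1,:);               % multiply a row by a nonzero scalar
R = rref([1 2 3; 2 5 7; 3 7 11]);  % this matrix is invertible, so R = I
disp(R);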
Chapter 12 Solving Linear Equations
1 Least-Squares Analysis
Consider a system of linear equations $Ax = b$, where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, $m \ge n$, and $\operatorname{rank} A = n$. Note that the number of unknowns, $n$, is no larger than the number of equations, $m$.
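Numerically, the least-squares solution solves the normal equations $A^T A x = A^T b$; in MATLAB the backslash operator computes the same minimizer more stably (the data below are my own illustration):

A = [1 0; 1 1; 1 2];       % 3x2, rank 2: more equations than unknowns
b = [1; 2; 2];
x_ne = (A'*A)\(A'*b);      % normal-equations solution
x_bs = A\b;                % QR-based least squares; same minimizer
disp([x_ne x_bs]);         % both columns equal [7/6; 1/2]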
Chapter 14 Global Search Algorithms
1 Introduction
We discuss various search methods that attempt to search throughout the entire feasible set. These methods use only objective function values and require no derivatives.
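A minimal sketch of such a method, a naive random search over a box using function evaluations only (the objective and box are my own illustration, not an algorithm from the text):

f = @(x) (x(1)-4)^4 + (x(2)-3)^2;   % sample objective
lo = [-10; -10]; hi = [10; 10];     % feasible box
best = inf; xbest = [];
for k = 1:10000
  x = lo + (hi - lo).*rand(2,1);    % uniform sample in the box
  if f(x) < best, best = f(x); xbest = x; end
end
disp(xbest');                       % near [4 3] with high probability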
Chapter 8 Gradient Methods
1 Introduction
Recall that a level set of a function $f : \mathbb{R}^n \to \mathbb{R}$ is the set of points $x$ satisfying $f(x) = c$ for some constant $c$. Thus, a point $x_0 \in \mathbb{R}^n$ is on the level set corresponding to level $c$ if $f(x_0) = c$. In the case of functions of two variables ($n = 2$), a level set is a curve in the plane.
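Level sets are easy to visualize in MATLAB: each contour line below is the set $\{x : f(x) = c\}$ for one value of $c$ (the quadratic $f$ is my own illustration):

[X1, X2] = meshgrid(-3:0.05:3, -3:0.05:3);
F = X1.^2 + 4*X2.^2;            % f(x) = x1^2 + 4*x2^2 on a grid
contour(X1, X2, F, [1 2 4 8]);  % level sets at c = 1, 2, 4, 8
xlabel('x_1'); ylabel('x_2');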
Chapter 11 Quasi-Newton Methods
1 Introduction
In Newton's method, for a general nonlinear objective function, convergence to a solution cannot be guaranteed from an arbitrary initial point $x^{(0)}$. If $x^{(0)}$ is not sufficiently close to the solution, the algorithm may not possess the descent property.
Chapter 5 Elements of Calculus
1 Sequences and Limits
A sequence of real numbers can be viewed as a set of numbers $\{a_1, a_2, \ldots\}$, which is often also denoted as $\{a_n\}$ or $\{a_n\}_{n=1}^{\infty}$. A sequence $\{a_n\}$ is increasing if $a_n < a_{n+1}$ for all $n$. If $a_n \le a_{n+1}$ for all $n$, then we say that the sequence is nondecreasing.
Chapter 9 Newton's Method
1 Introduction
The steepest descent method uses only first derivatives in selecting a suitable search direction. Newton's method (sometimes called the Newton-Raphson method) uses both first and second derivatives, and generally performs better than steepest descent when the initial point is close to the minimizer.
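A minimal sketch of the resulting iteration, assuming user-supplied handles for the gradient and Hessian (the names are illustrative):

function x = newton_min(grad, hess, x, maxiter)
% Newton's method: at each step solve H*d = g and move to x - d.
for k = 1:maxiter
  g = feval(grad, x);
  H = feval(hess, x);
  x = x - H\g;
end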
Chapter 6 Basics of Set-Constrained and Unconstrained Optimization
1 Introduction
Consider the optimization problem

minimize $f(x)$
subject to $x \in \Omega$.

The function $f : \mathbb{R}^n \to \mathbb{R}$ that we wish to minimize is a real-valued function called the objective function (or cost function).
Chapter 7 One-Dimensional Search Methods
1 Golden Section Search
We wish to determine the minimizer of a function $f$ over a closed interval, say $[a_0, b_0]$. The only assumption is that the objective function $f$ is unimodal, which means that $f$ has exactly one local minimizer in $[a_0, b_0]$.
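A streamlined sketch of golden section search (it evaluates $f$ twice per iteration for clarity; the textbook version reuses one of the two evaluations):

function [a, b] = golden_section(f, a, b, tol)
rho = (3 - sqrt(5))/2;       % approximately 0.382
while b - a > tol
  x1 = a + rho*(b - a);
  x2 = b - rho*(b - a);
  if f(x1) < f(x2)
    b = x2;                  % minimizer lies in [a, x2]
  else
    a = x1;                  % minimizer lies in [x1, b]
  end
end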
Chapter 4 Concepts from Geometry
1 Line Segments
The line segment between two points $x$ and $y$ in $\mathbb{R}^n$ is the set of points on the straight line joining $x$ and $y$. If $z$ lies on the line segment, then $z - y = \alpha(x - y)$ for some $\alpha \in [0, 1]$. Hence, $z = \alpha x + (1 - \alpha) y$, and the line segment can be written as $\{\alpha x + (1 - \alpha) y : \alpha \in [0, 1]\}$.
Chapter 3 Transformations
1 Linear Transformations
A function $\mathcal{L} : \mathbb{R}^n \to \mathbb{R}^m$ is called a linear transformation if
1. $\mathcal{L}(a x) = a \mathcal{L}(x)$ for every $x \in \mathbb{R}^n$ and $a \in \mathbb{R}$; and
2. $\mathcal{L}(x + y) = \mathcal{L}(x) + \mathcal{L}(y)$ for every $x, y \in \mathbb{R}^n$.
If we fix the bases for $\mathbb{R}^n$ and $\mathbb{R}^m$, then the linear transformation $\mathcal{L}$ can be represented by a matrix.
Chapter 2 Vector Spaces and Matrices
1 Vectors and Matrices
n-dimensional column vector and row vector
Properties
2 Linear Independence
A set of vectors $\{a_1, \ldots, a_k\}$ is said to be linearly independent if the equality $\alpha_1 a_1 + \cdots + \alpha_k a_k = 0$ implies that all coefficients $\alpha_i$, $i = 1, \ldots, k$, are equal to zero.
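Numerically, linear independence of the columns of a matrix can be tested by comparing the rank with the number of columns (the matrix is my own illustration):

A = [1 0 1; 0 1 1; 1 1 2];     % third column = first + second
if rank(A) == size(A, 2)
  disp('columns are linearly independent');
else
  disp('columns are linearly dependent');  % printed here: rank is 2
end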
Number of iterations = 8
The reader is again cautioned not to draw any conclusions about the superiority or inferiority of any of the formulas for $H_k$ based only on the above single numerical experiment.
11.10
a. The plot of the level sets of $f$ ...
EXERCISES
12.1 A rock is accelerated to 3, 5, and 6 m/s² by applying forces of 1, 2, and 3 N, respectively. Assuming that Newton's law $F = ma$ holds, where $F$ is the force and $a$ is the acceleration, estimate the mass $m$ of the rock.
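Since the model $F = ma$ gives three equations in the single unknown $m$, the natural estimate is the least-squares solution $m = (a^T F)/(a^T a)$; a minimal MATLAB sketch:

a = [3; 5; 6];       % measured accelerations (m/s^2)
F = [1; 2; 3];       % applied forces (N)
m = (a'*F)/(a'*a);   % least-squares estimate, min ||F - m*a||^2
disp(m);             % m = 31/70, approximately 0.443 kg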
function alpha=linesearch_secant(grad,x,d)
%Line search using the secant method
epsilon=10^(-4); %line search tolerance
max=100; %maximum number of iterations
alpha_curr=0;
alpha=0.001;
dphi_zero=feval(grad,x)'*d;
dphi_curr=dphi_zero;
i=0;
while abs(dphi_curr)>epsilon*abs(dphi_zero)
  alpha_old=alpha_curr;
  alpha_curr=alpha;
  dphi_old=dphi_curr;
  dphi_curr=feval(grad,x+alpha_curr*d)'*d;
  %secant step toward a zero of phi'(alpha)
  alpha=(dphi_curr*alpha_old-dphi_old*alpha_curr)/(dphi_curr-dphi_old);
  i=i+1;
  if i>=max, break; end %safeguard on iteration count
end %while
b. From part a, we have the explicit formula for $x^{(k)}$ in terms of $x^{(0)}$. Therefore, as long as $x^{(0)} \neq 0$, the sequence $\{x^{(k)}\}$ does not converge to 0.
9.3
a. Clearly $f(x) \ge 0$ for all $x$. We have
$$f(x) = 0 \iff x_2 - x_1^2 = 0 \text{ and } 1 - x_1 = 0 \iff x = [1, 1]^T.$$
Hence, $f(x) > f([1, 1]^T)$ for all $x \neq [1, 1]^T$, and therefore $x^* = [1, 1]^T$ is the unique global minimizer of $f$.
The formulas above give us conjugate gradient algorithms that do not require explicit knowledge of the Hessian matrix $Q$. All we need are the objective function and gradient values at each iteration. For the quadratic case, the three expressions for $\beta_k$ are exactly equal.
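A minimal sketch of such a Hessian-free iteration using the Fletcher-Reeves formula for $\beta_k$ (the function name is mine; it assumes the secant line search routine listed elsewhere in these notes):

function x = cg_fr(grad, x, maxiter)
g = feval(grad, x);
d = -g;                           % start with the steepest descent direction
for k = 1:maxiter
  alpha = linesearch_secant(grad, x, d);
  x = x + alpha*d;
  g_new = feval(grad, x);
  beta = (g_new'*g_new)/(g'*g);   % Fletcher-Reeves beta_k: no Hessian needed
  d = -g_new + beta*d;            % new conjugate direction
  g = g_new;
end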
and $f(x) = f(x^*)$ if and only if $x = x^*$.
6.25
Write $u = [u_1, \ldots, u_n]^T$. We have
$$x_n = a x_{n-1} + b u_n = a(a x_{n-2} + b u_{n-1}) + b u_n = a^2 x_{n-2} + a b u_{n-1} + b u_n$$
$$= a^n x_0 + a^{n-1} b u_1 + \cdots + a b u_{n-1} + b u_n = c^T u$$
(using $x_0 = 0$), where $c = [a^{n-1} b, \ldots, a b, b]^T$. Therefore, the problem can be rewritten as an optimization over $u$ alone.
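The expansion can be checked numerically (the values of $a$, $b$, and $n$ are chosen arbitrarily):

a = 0.9; b = 2; n = 5;
u = randn(n, 1);
x = 0;                           % x_0 = 0
for k = 1:n, x = a*x + b*u(k); end
c = b*(a.^(n-1:-1:0))';          % c = [a^(n-1)*b, ..., a*b, b]'
disp([x, c'*u]);                 % the two values agree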
Hence,
$$H_2 = H_1 + \left(1 + \frac{\Delta g^{(1)T} H_1 \Delta g^{(1)}}{\Delta g^{(1)T} \Delta x^{(1)}}\right)\frac{\Delta x^{(1)} \Delta x^{(1)T}}{\Delta x^{(1)T} \Delta g^{(1)}} - \frac{\Delta x^{(1)} \Delta g^{(1)T} H_1 + H_1 \Delta g^{(1)} \Delta x^{(1)T}}{\Delta g^{(1)T} \Delta x^{(1)}} = \begin{bmatrix} 2 & 3 \\ 3 & 5 \end{bmatrix}.$$
Note that indeed $H_2 Q = Q H_2 = I_2$, and hence $H_2 = Q^{-1}$.
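For reference, the update used above (it has the form of the BFGS inverse-Hessian update) is easy to code; a sketch in which dx and dg stand for $\Delta x^{(1)}$ and $\Delta g^{(1)}$, and H for $H_1$:

function H = bfgs_update(H, dx, dg)
% One update of the inverse-Hessian approximation.
rho = dg'*dx;                        % curvature term dg'*dx
H = H + (1 + (dg'*H*dg)/rho)*(dx*dx')/rho ...
      - (H*dg*dx' + dx*dg'*H)/rho;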
For nonquadratic problems, quasi-Newton algorithms will not usually converge in $n$ steps.
In some applications, the matrix $S(x)$ involving the second derivatives of the function $r$ can be ignored because its components are negligibly small. In this case, Newton's algorithm reduces to what is commonly called the Gauss-Newton method:
$$x^{(k+1)} = x^{(k)} - \left(J(x^{(k)})^T J(x^{(k)})\right)^{-1} J(x^{(k)})^T r(x^{(k)}),$$
where $J$ denotes the Jacobian matrix of $r$.
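A minimal Gauss-Newton sketch, assuming user-supplied handles r (the residual vector) and J (its Jacobian); both names are illustrative:

function x = gauss_newton(r, J, x, maxiter)
for k = 1:maxiter
  Jk = feval(J, x);
  rk = feval(r, x);
  x = x - (Jk'*Jk)\(Jk'*rk);   % Gauss-Newton step: S(x) dropped from Newton
end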
end
else
disp('Final point =');
disp(xnew');
disp('Number of iterations =');
disp(k);
end %if
To apply the above MATLAB routine to the function in Example 8.1, we need the following M-file to specify the gradient.
function y=g(x)
%gradient of f(x)=(x1-4)^4+(x2-3)^2+4*(x3+5)^4
y=[4*(x(1)-4)^3; 2*(x(2)-3); 16*(x(3)+5)^3];
EXERCISES
7.1 Suppose that we have a unimodal function over the interval [5, 8]. Give an example of a desired final uncertainty range where the golden section method requires at least four iterations, whereas the Fibonacci method requires only three.
6. Basics of Unconstrained Optimization
6.1
a. In this case, $x^*$ is definitely not a local minimizer. To see this, note that $d = [1, -2]^T$ is a feasible direction at $x^*$. However, $d^T \nabla f(x^*) = -1 < 0$, which violates the FONC.
b. In this case, $x^*$ satisfies ...
[Figure 6.5: graph of $f$]
EXERCISES
6.1 Consider the problem

minimize $f(x)$
subject to $x \in \Omega$,

where $f \in \mathcal{C}^2$. For each of the following specifications for $\Omega$, $x^*$, and $f$, determine if the given point $x^*$ is (i) definitely a local minimizer, (ii) definitely not a local minimizer, or (iii) possibly a local minimizer.