Michael T. Heath, Scientific Computing
Optimization Problems / One-Dimensional Optimization / Multi-Dimensional Optimization
Definitions / Existence and Uniqueness / Optimality Conditions

First-Order Optimality Condition

For function of one variable, one can find extremum by differentiating function and setting derivative to zero.

For continuously differentiable f : S ⊆ Rⁿ → R, any interior point x∗ of S at which f has local minimum must be critical point of f.

Second-Order Optimality Condition

Hessian matrix of f has entries

    {Hf (x)}ij = ∂²f (x) / (∂xi ∂xj)

which is symmetric.
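As a concrete illustration of these two conditions, here is a minimal NumPy sketch; the quadratic f and all names are made up for this example, not taken from the slides:

```python
import numpy as np

# Made-up example: f(x) = (x0 - 1)^2 + 2*(x1 + 2)^2, minimized at (1, -2)
def grad_f(x):
    return np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)])

def hess_f(x):
    return np.array([[2.0, 0.0], [0.0, 4.0]])  # symmetric, as the slides note

x_star = np.array([1.0, -2.0])

# First-order condition: gradient vanishes at the interior minimum
print(grad_f(x_star))                       # [0. 0.]

# Second-order check: Hessian eigenvalues all positive -> local minimum
print(np.linalg.eigvalsh(hess_f(x_star)))   # [2. 4.]
```

A positive definite Hessian at a critical point certifies a strict local minimum; an indefinite one would indicate a saddle point.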
Constrained Optimality

Necessary condition for constrained local minimum is

    −∇f (x∗ ) = Jg^T (x∗ ) λ

where Jg is Jacobian matrix of g, and λ is vector of Lagrange multipliers.

This condition says we cannot reduce objective function without violating constraints.

Lagrangian function L(x, λ) = f (x) + λ^T g (x) has gradient

    ∇L(x, λ) = [ ∇f (x) + Jg^T (x) λ
                 g (x)              ]

Its Hessian is given by

    HL (x, λ) = [ B (x, λ)   Jg^T (x)
                  Jg (x)     O        ]

where

    B (x, λ) = Hf (x) + Σ_{i=1}^{m} λi Hgi (x)
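These blocks can be assembled directly. A sketch for a small made-up equality-constrained problem (minimize x0² + x1² subject to x0 + x1 = 1; the problem and all names are illustrative, not from the slides):

```python
import numpy as np

# Made-up problem: minimize f(x) = x0^2 + x1^2  s.t.  g(x) = x0 + x1 - 1 = 0
def grad_f(x): return 2.0 * x
def hess_f(x): return 2.0 * np.eye(2)
def g(x):      return np.array([x[0] + x[1] - 1.0])
def jac_g(x):  return np.array([[1.0, 1.0]])      # 1 x 2 Jacobian of g

def grad_L(x, lam):
    # gradient of Lagrangian: [ grad f(x) + Jg^T(x) lam ; g(x) ]
    return np.concatenate([grad_f(x) + jac_g(x).T @ lam, g(x)])

def hess_L(x, lam):
    # [ B  Jg^T ; Jg  O ] with B = Hf + sum_i lam_i * Hg_i (here each Hg_i = 0,
    # since the constraint is linear)
    B, J = hess_f(x), jac_g(x)
    return np.block([[B, J.T], [J, np.zeros((1, 1))]])

x_star, lam_star = np.array([0.5, 0.5]), np.array([-1.0])
print(grad_L(x_star, lam_star))   # [0. 0. 0.]: (x*, lam*) is critical point of L
```

At the solution, ∇f = (1, 1) is exactly cancelled by Jg^T λ = (−1, −1), which is the necessary condition above.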
Constrained Optimality, continued

Together, necessary condition and feasibility imply critical point of Lagrangian function,

    ∇L(x, λ) = [ ∇f (x) + Jg^T (x) λ
                 g (x)              ] = 0

If inequalities are present, then KKT optimality conditions also require nonnegativity of Lagrange multipliers corresponding to inequalities, and complementarity condition.

Hessian of Lagrangian is symmetric, but not positive definite, so critical point of L is saddle point rather than minimum or maximum.

Critical point (x∗ , λ∗ ) of L is constrained minimum of f if B (x∗ , λ∗ ) is positive definite on null space of Jg (x∗ ).

If columns of Z form basis for null space, then test projected Hessian Z^T B Z for positive definiteness.
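Both the saddle-point structure and the projected-Hessian test can be verified numerically. A sketch with made-up example values (B = 2I, Jg = [1 1], matching a simple equality-constrained quadratic); using the SVD to build the null-space basis is one common choice, not the only one:

```python
import numpy as np

B = 2.0 * np.eye(2)               # B(x*, lam*) block (made-up example values)
J = np.array([[1.0, 1.0]])        # constraint Jacobian Jg(x*)

# Null-space basis Z of J from the SVD: rows of Vt past rank(J) span null(J)
_, _, Vt = np.linalg.svd(J)
Z = Vt[J.shape[0]:].T             # assumes J has full row rank

# Projected Hessian Z^T B Z: positive definite => constrained minimum
print(np.linalg.eigvalsh(Z.T @ B @ Z))   # [2.]: positive

# Full Lagrangian Hessian is indefinite, so (x*, lam*) is a saddle point of L
H_L = np.block([[B, J.T], [J, np.zeros((1, 1))]])
print(np.linalg.eigvalsh(H_L))    # one negative, two positive eigenvalues
```

This shows why testing H_L itself for positive definiteness is the wrong check: it is indefinite by construction, while the projected Hessian correctly certifies the constrained minimum.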
Sensitivity and Conditioning
Function minimization and equation solving are closely related problems, but their sensitivities differ.

In one dimension, absolute condition number of root x∗ of equation f (x) = 0 is 1/|f ′(x∗ )|, so if |f (x̂)| ≤ ε, then |x̂ − x∗ | may be as large as ε/|f ′(x∗ )|.

For minimizing f , Taylor series expansion

    f (x̂) = f (x∗ + h) = f (x∗ ) + f ′(x∗ ) h + ½ f ″(x∗ ) h² + O(h³)

shows that, since f ′(x∗ ) = 0, if |f (x̂) − f (x∗ )| ≤ ε, then |x̂ − x∗ | may be as large as √(2ε/|f ″(x∗ )|).

Unimodality

For minimizing function of one variable, we need “bracket” for solution analogous to sign change for nonlinear equation.

Real-valued function f is unimodal on interval [a, b] if there is unique x∗ ∈ [a, b] such that f (x∗ ) is minimum of f on [a, b], and f is strictly decreasing for x ≤ x∗ , strictly increasing for x∗ ≤ x.
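The sensitivity result above (a minimizer accurate to ε in function value may be off by about √ε in x) can be checked numerically; a small sketch with a made-up function and tolerance, chosen only for illustration:

```python
import math

# Made-up example: f(x) = (x - 1)^2, minimum at x* = 1, f''(x*) = 2
f = lambda x: (x - 1.0) ** 2

eps = 1e-12
h = math.sqrt(2.0 * eps / 2.0)    # = sqrt(eps): predicted uncertainty in x

# Moving a full h = 1e-6 away from x* changes f by only ~eps = 1e-12, so
# function values alone cannot locate the minimum more tightly than ~sqrt(eps)
print(f(1.0 + h) - f(1.0))

# Contrast with root finding, where |xhat - x*| is only ~ eps / |f'(x*)|
```

Roughly half the significant digits are lost when locating a minimum from function values, which is why minimization tolerances are usually set no tighter than about the square root of machine precision.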
Fall '11, Wasfy, Mechanical Engineering