2. UNCONSTRAINED OPTIMIZATION

In this chapter, we develop principles for identifying and validating candidates for optimality in problems with no explicit constraints:

    min f(x) over all x ∈ S,

where S is typically the whole space R^n or some simply described subset, such as an interval or a neighborhood of a given point. To lay the foundation for these developments, we first review and refine the one-dimensional optimization principles from Calculus. These are then extended in a natural way to higher dimensions, where they are made practical by incorporating ideas from Linear Algebra.

2.1 First-Order Necessary Conditions for Optimality

We start with the one-dimensional case of optimizing a function on an interval in R. The following theorem, due to Fermat, is the best-known result in optimization theory.

Theorem 2.1.1 (Necessary condition for one-dimensional optimality). Suppose x̄ is a local minimizer (or maximizer) of f on the interval I ⊆ R. If x̄ is not an endpoint of I and f is differentiable at x̄, then f'(x̄) = 0.

Definition 2.1.2 (Critical point). We say that x̄ is a critical point for f if f'(x̄) = 0.

Observation 2.1.3 (Identifying candidates via necessary conditions). Note that Theorem 2.1.1 provides us with candidates for optimality. If x̄ is a local minimizer for f on I ⊆ R, then one of the following conditions must hold: (a) x̄ is an endpoint of I; (b) f is not differentiable at x̄; or (c) x̄ is a critical point for f. We may discard any points that do not satisfy at least one of these three conditions. Any rule for selecting candidates in this way is called a necessary condition for optimality.

Proof of Theorem 2.1.1. Recall that

    f'(x̄) = lim_{x → x̄} (f(x) − f(x̄)) / (x − x̄).

Since x̄ is a local minimizer on I and not an endpoint of I, there must exist δ > 0 so that f(x̄) ≤ f(x) for all x ∈ (x̄ − δ, x̄ + δ).
In other words, f(x) − f(x̄) ≥ 0 for all x close to x̄, so

    (f(x) − f(x̄)) / (x − x̄) ≤ 0   if x ∈ (x̄ − δ, x̄),

and

    (f(x) − f(x̄)) / (x − x̄) ≥ 0   if x ∈ (x̄, x̄ + δ).

Thus,

    lim_{x → x̄⁻} (f(x) − f(x̄)) / (x − x̄) ≤ 0   and   lim_{x → x̄⁺} (f(x) − f(x̄)) / (x − x̄) ≥ 0,

so f'(x̄) = 0.

By considering one-dimensional derivatives along the coordinate axes, we can extend this result to higher dimensions.

Theorem 2.1.4 (Necessary condition for multidimensional optimality). Consider a set S ⊆ R^n and a function f : S → R. Suppose that x̄ is a local minimizer for f on S. If x̄ is an interior point of S and ∇f(x̄) exists, then ∇f(x̄) = 0.

Proof. Given a coordinate i, we need to show that ∂f(x̄)/∂x_i = 0. Define

    φ_i(t) = f(x̄_1, …, x̄_{i−1}, t, x̄_{i+1}, …, x̄_n). …
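The candidate-selection rule of Observation 2.1.3 can be sketched in code. The following Python snippet is our own illustration, not from the text: the function f(x) = x³ − 3x on [−2, 3], its hand-computed derivative, and the candidate list are all assumptions chosen for the demo. Since this f is differentiable everywhere, condition (b) contributes no candidates, and the minimizer must be an endpoint or a critical point.

```python
# Illustration of Observation 2.1.3 (a sketch; the example function is our own choice):
# to minimize a differentiable f on a closed interval [a, b], compare f at the
# endpoints (condition (a)) and at the critical points f'(x) = 0 (condition (c)).

def f(x):
    return x**3 - 3*x

def fprime(x):
    return 3*x**2 - 3  # derivative computed by hand

a, b = -2.0, 3.0
# Critical points: 3x^2 - 3 = 0  =>  x = -1 or x = 1, both inside [a, b].
critical_points = [-1.0, 1.0]

# f is differentiable everywhere, so condition (b) contributes no candidates.
candidates = [a, b] + critical_points
minimizer = min(candidates, key=f)
print(minimizer, f(minimizer))  # the endpoint x = -2 and the critical point x = 1
                                # both attain the minimum value f = -2
```

Note how the necessary condition shrinks an uncountable search space to four points; this is exactly the practical value of Observation 2.1.3.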
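Theorem 2.1.4 can likewise be checked numerically. The sketch below is our own illustration under stated assumptions: the quadratic f(x, y) = (x − 1)² + (y + 2)², its minimizer (1, −2), and the step size h are all chosen for the demo. The helper `partial` estimates one partial derivative by a central difference, mirroring the one-dimensional slices φ_i used in the proof.

```python
# A numerical check of Theorem 2.1.4 (our own example): at an interior local
# minimizer x̄ of a differentiable f, every partial derivative ∂f(x̄)/∂x_i vanishes.

def f(x, y):
    return (x - 1.0)**2 + (y + 2.0)**2  # unique minimizer at (1, -2)

def partial(g, point, i, h=1e-6):
    """Central-difference estimate of the i-th partial derivative of g at `point`,
    i.e. the derivative of the one-dimensional slice phi_i from the proof."""
    p_plus = list(point)
    p_minus = list(point)
    p_plus[i] += h
    p_minus[i] -= h
    return (g(*p_plus) - g(*p_minus)) / (2 * h)

xbar = (1.0, -2.0)
grad = [partial(f, xbar, i) for i in range(2)]
print(grad)  # both components are (numerically) 0, as Theorem 2.1.4 predicts
```

At any point other than x̄ the estimated gradient would be nonzero, which is why solving ∇f(x) = 0 is the multidimensional analogue of the critical-point condition.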
This note was uploaded on 03/18/2012 for the course MTH 432 taught by Professor Douglas Ward during the Spring '12 term at Miami University.