Nonlinear Optimization

James V. Burke
University of Washington
Contents

Chapter 1. Introduction
Chapter 2. Review of Matrices and Block Structures
  1. Rows and Columns
  2. Matrix Multiplication
  3. Block Matrix Multiplication
  4. Gauss-Jordan Elimination Matrices and Reduction to Reduced Echelon Form
  5. Some Special Square Matrices
  6. The LU Factorization
  7. Solving Equations with the LU Factorization
  8. The Four Fundamental Subspaces and Echelon Form
Chapter 3. The Linear Least Squares Problem
  1. Applications
  2. Optimality in the Linear Least Squares Problem
  3. Orthogonal Projection onto a Subspace
  4. Minimal Norm Solutions to Ax = b
  5. Gram-Schmidt Orthogonalization, the QR Factorization, and Solving the Normal Equations
Chapter 4. Optimization of Quadratic Functions
  1. Eigenvalue Decomposition of Symmetric Matrices
  2. Optimality Properties of Quadratic Functions
  3. Minimization of a Quadratic Function on an Affine Set
  4. The Principal Minor Test for Positive Definiteness
  5. The Cholesky Factorizations
  6. Linear Least Squares Revisited
  7. The Conjugate Gradient Algorithm
Chapter 5. Elements of Multivariable Calculus
  1. Norms and Continuity
  2. Differentiation
  3. The Delta Method for Computing Derivatives
  4. Differential Calculus
  5. The Mean Value Theorem
Chapter 6. Optimality Conditions for Unconstrained Problems
  1. Existence of Optimal Solutions
  2. First-Order Optimality Conditions
  3. Second-Order Optimality Conditions
  4. Convexity
Chapter 7. Optimality Conditions for Constrained Optimization
  1. First-Order Conditions
  2. Regularity and Constraint Qualifications
  3. Second-Order Conditions
  4. Optimality Conditions in the Presence of Convexity
  5. Convex Optimization, Saddle Point Theory, and Lagrangian Duality
Chapter 8. Line Search Methods
  1. The Basic Backtracking Algorithm
  2. The Wolfe Conditions
Chapter 9. Search Directions for Unconstrained Optimization
  1. Rate of Convergence
  2. Newton's Method for Solving Equations
  3. Newton's Method for Minimization
  4. Matrix Secant Methods
Index
CHAPTER 1

Introduction

In mathematical optimization we seek to either minimize or maximize a function over a set of alternatives. The function is called the objective function, and we allow it to be extended real-valued in the sense that at each point its value is either a real number or one of the infinite values ±∞. The set of alternatives is called the constraint region. Since every maximization problem can be restated as a minimization problem by simply replacing the objective f0 with its negative −f0 (and vice versa), we choose to focus only on minimization problems. We denote such problems using the notation

(1)    minimize_{x ∈ X}  f0(x)    subject to    x ∈ Ω,

where f0 : X → ℝ ∪ {±∞} is the objective function, X is the space over which the optimization occurs, and Ω ⊆ X is the constraint region. This is a very general description of an optimization problem and as one might imagine
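The conversion between maximization and minimization described above can be illustrated with a small numerical sketch (not from the text; the objective, interval, and helper function below are purely illustrative). To maximize f0(x) = −(x − 3)² over Ω = [0, 10], we minimize −f0 over the same set, here with a simple grid search:

```python
def f0(x):
    """Illustrative objective to be maximized: a downward parabola peaking at x = 3."""
    return -(x - 3.0) ** 2

def minimize_on_grid(g, lo, hi, steps=10_000):
    """Return the point of an evenly spaced grid on [lo, hi] minimizing g.

    A crude stand-in for a real minimization routine, sufficient to
    illustrate the max-to-min conversion in problem (1).
    """
    best_x, best_val = lo, g(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        val = g(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x

# Maximizing f0 over [0, 10] is the same as minimizing -f0 over [0, 10]:
x_star = minimize_on_grid(lambda x: -f0(x), 0.0, 10.0)
# x_star is (up to grid resolution) the maximizer x = 3 of f0.
```

The maximum value of f0 is then recovered as f0(x_star) = −min(−f0), which is exactly the sign-flip relation max f0 = −min(−f0) used in the text to reduce every maximization problem to the minimization form (1).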
