
# Chapter 6: Optimization


Topics:

- Steepest descent method (gradient methods)
- Newton's method
- Optimality conditions
- Convexity
- Example 1: strategic bidding in energy
- Example 2: classification, machine learning
- Example 3: the Netflix problem revisited

Convexity
Unconstrained optimization:

min f(x), where f is a real-valued function of n variables.

Example (the Rosenbrock function): f(x, y) = (1 − x)² + 100(y − x²)²

Constrained optimization:

min f(x) s.t. h(x) = 0, g(x) ≥ 0
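As a minimal sketch of the steepest descent method on the Rosenbrock function above: each iteration moves opposite the gradient. The starting point, fixed step size, and iteration count are illustrative choices, not from the slides.

```python
# Steepest descent on f(x, y) = (1 - x)^2 + 100*(y - x^2)^2,
# whose unique minimizer is (1, 1).

def f(x, y):
    return (1 - x)**2 + 100 * (y - x**2)**2

def grad(x, y):
    # Partial derivatives of f, computed by hand.
    dfdx = -2 * (1 - x) - 400 * x * (y - x**2)
    dfdy = 200 * (y - x**2)
    return dfdx, dfdy

def steepest_descent(x, y, step=1e-3, iters=100_000):
    # Fixed step size; in practice a line search picks the step each iteration.
    for _ in range(iters):
        gx, gy = grad(x, y)
        x, y = x - step * gx, y - step * gy
    return x, y

x, y = steepest_descent(-1.0, 1.0)
```

The narrow curved valley of the Rosenbrock function is exactly why steepest descent needs many iterations here, and why Newton's method (the next topic) is attractive.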

Chapter 2: Identifying a Local Solution

Thm 1 (First-Order Necessary Conditions). If x* is a local minimizer of f and f is continuously differentiable, then ∇f(x*) = 0.

Thm 2 (Second-Order Necessary Conditions). If x* is a local minimizer of f and ∇²f is continuous, then ∇f(x*) = 0 and ∇²f(x*) ⪰ 0 (positive semidefinite).

These conditions are necessary but not sufficient. Example: f(x) = x³ at x* = 0.
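The f(x) = x³ example can be checked directly: both necessary conditions hold at x* = 0, yet 0 is not a local minimizer. A short sketch (the probe point 1e-6 is an illustrative choice):

```python
# Necessary but not sufficient: f(x) = x^3 at x* = 0.

def f(x):
    return x**3

def fprime(x):
    return 3 * x**2

def fsecond(x):
    return 6 * x

x_star = 0.0
first_order = fprime(x_star) == 0      # f'(0) = 0: holds
second_order = fsecond(x_star) >= 0    # f''(0) = 0 >= 0: holds (semidefinite)
# But f takes smaller values just to the left of 0, so 0 is not a local min.
is_local_min = f(x_star) <= f(-1e-6)
```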
Gradient Vectors, Hessian Matrices

Example: f(x, y) = x² + 0.5y²

∇f(x, y) = (∂f/∂x, ∂f/∂y) = (2x, y)

∇²f(x, y) = [ ∂²f/∂x²  ∂²f/∂x∂y ; ∂²f/∂y∂x  ∂²f/∂y² ] = [ 2 0 ; 0 1 ]

If f is twice continuously differentiable, the Hessian matrix is symmetric.
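The worked example can be verified numerically with central finite differences. This is a sketch; the evaluation point (1, 2) and step sizes h are illustrative choices.

```python
# Numerical check: f(x, y) = x^2 + 0.5*y^2 has gradient (2x, y)
# and Hessian [[2, 0], [0, 1]].

def f(x, y):
    return x**2 + 0.5 * y**2

def num_grad(x, y, h=1e-6):
    # Central differences for the two first partials.
    gx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    gy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return gx, gy

def num_hessian(x, y, h=1e-4):
    hxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    hyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    # Mixed partial; for twice continuously differentiable f,
    # d2f/dxdy = d2f/dydx, which is why the Hessian is symmetric.
    hxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return [[hxx, hxy], [hxy, hyy]]

gx, gy = num_grad(1.0, 2.0)   # analytic gradient at (1, 2) is (2, 2)
H = num_hessian(1.0, 2.0)     # analytic Hessian is [[2, 0], [0, 1]]
```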

Thm 4. Suppose that …