…ined tolerance (a small number!)
[For you: How do you know how many correct digits you have in your solution?]

RISK AND PORTFOLIO MANAGEMENT WITH ECONOMETRICS, VER. 11/21/2012. © P. KOLM.

Overview of Line-Search and Newton-Type Methods of Unconstrained Optimization

We first describe the Newton method for the one-dimensional unconstrained optimization problem

    min_x f(x)

where we assume that the first- and second-order derivatives of f exist.
Assume we have an approximation x_k of the optimal solution x* and we want to compute a "better" approximation x_{k+1}. The Taylor series expansion around x_k is given by

    f(x_k + h) = f(x_k) + f'(x_k) h + (1/2) f''(x_k) h^2 + O(h^3)
where h is some small number. If h is small enough, we can instead solve the optimization problem

    min_h  f(x_k) + f'(x_k) h + (1/2) f''(x_k) h^2
This is a simple quadratic optimization problem in h. Taking the derivative with respect to h and setting it to zero, we have

    f'(x_k) + f''(x_k) h = 0

Solving for h we obtain

    h = -f'(x_k) / f''(x_k)

Therefore, the new approximation x_{k+1} becomes

    x_{k+1} = x_k + h = x_k - f'(x_k) / f''(x_k)

This is the Newton method for the one-dimensional unconstrained optimization problem above.

N-Dimensional Unconstrained Optimization

The Newton method is easily extended to N-dimensional problems and then takes
the form

    x_{k+1} = x_k + h = x_k - [∇²f(x_k)]^{-1} ∇f(x_k)

where x_{k+1} and x_k are N-dimensional vectors, and ∇f(x_k) and ∇²f(x_k) are the gradient and the Hessian of f at x_k, respectively.

Note: You should not calculate [∇²f(x_k)]^{-1} explicitly and then multiply it with ∇f(x_k). [For you: Why?] Rather, you should solve the linear system for h

    ∇²f(x_k) h = -∇f(x_k)

Line Search Strategies

The Newton method is a so-called line search strategy: after the kth step, x_k is
given and the (k+1)th approximation is calculated according to the iterative scheme

    x_{k+1} = x_k + γ p_k

where p_k ∈ R^N is the search direction chosen by the algorithm.
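The iterative scheme above, with the Newton direction obtained by solving the linear system ∇²f(x_k) h = -∇f(x_k) rather than forming the inverse Hessian, can be sketched in Python. This is a minimal illustration, not the notes' own code; the quadratic test function, its derivatives, the tolerance, and all names are assumptions:

```python
import numpy as np

def newton_line_search(grad, hess, x0, gamma=1.0, tol=1e-10, max_iter=50):
    """Iterate x_{k+1} = x_k + gamma * p_k with the Newton search direction."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # gradient near zero: stop
            break
        # Newton direction: solve H p = -g instead of computing H^{-1} g
        p = np.linalg.solve(hess(x), -g)
        x = x + gamma * p
    return x

# Hypothetical test problem: f(x) = 0.5 x'Ax - b'x, whose minimizer solves Ax = b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
hess = lambda x: A

x_star = newton_line_search(grad, hess, x0=[5.0, -5.0])
```

For a quadratic objective the Newton step with γ = 1 reaches the minimizer in a single iteration, which is a useful sanity check for an implementation.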
[Of course, in the case of the Newton method the search direction is chosen to be p_k = -[∇²f(x_k)]^{-1} ∇f(x_k) and γ = 1.]

Other search directions lead to algorithms with different properties.

Example: Method of Steepest Descent

The search direction is chosen as
    p_k = -∇f(x_k)

Steepest descent requires only the first-order derivatives of the function f, and not the second-order derivatives required by the Newton method. Therefore, a steepest descent iteration is computationally cheaper to perform than a Newton iteration.

Convergence

Steepest descent and the Newton method have different convergence properties. The rate of convergence to...
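The steepest descent iteration p_k = -∇f(x_k) can be sketched with a constant step size as below. The test function, the step size γ, and the iteration budget are assumptions for illustration; practical implementations choose γ by a line search at each step:

```python
import numpy as np

def steepest_descent(grad, x0, gamma=0.1, tol=1e-8, max_iter=10_000):
    """Iterate x_{k+1} = x_k - gamma * grad_f(x_k); uses first-order info only."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            return x, k               # converged after k iterations
        x = x - gamma * g             # search direction p_k = -grad f(x_k)
    return x, max_iter

# Hypothetical quadratic test problem: f(x) = 0.5 x'Ax - b'x, minimizer solves Ax = b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_sd, n_iters = steepest_descent(lambda x: A @ x - b, x0=[5.0, -5.0])
```

On this quadratic, steepest descent takes many iterations to reach the point that a Newton step finds in one, illustrating the trade-off: each iteration is cheaper, but convergence is slower.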
This document was uploaded on 02/17/2014 for the course COURANT G63.2751.0 at NYU.