In most instances, analytic
derivatives will also increase the numerical stability and accuracy of the
algorithm.

RISK AND PORTFOLIO MANAGEMENT WITH ECONOMETRICS, VER. 11/21/2012. © P. KOLM. 52

Dense vs. Sparse and Medium- vs. Large-Size Problems

When many decision variables are involved (for nonlinear problems more than a thousand or tens of thousands, and for linear problems more than
a hundred thousand) we refer to the problem as a large-scale optimization problem.

For efficiency reasons, large-scale numerical algorithms try to take advantage of the specific structure of a particular problem. For example, so-called sparse matrix techniques are often used when possible, in order to improve the efficiency of the linear-algebra computations inside the routines.

User Interface and Settings

Good optimization software allows the user to specify different options and settings of the algorithms, such as the maximum number of iterations or
function evaluations allowed, the convergence criteria and tolerances, and so on. Many optimization platforms also provide a pre-optimization phase that analyzes the problem at hand and produces diagnostics to help choose the most suitable algorithm.

Normally, there is also software support for checking that analytically supplied derivatives are correct by comparing them with numerical approximations.

References
Frank J. Fabozzi, Sergio M. Focardi and Petter N. Kolm (2006). Financial Modeling of the Equity Market: From CAPM to Cointegration. Hoboken, New Jersey: John Wiley & Sons, Inc.

1. Herzel, S., "Arbitrage Opportunities on Derivatives: A Linear Programming Approach," Technical Report, Department of Economics, University of Perugia, 2000.
2. Dempster, M. A. H., Hutton, J. P., and Richards, D. G., "LP Valuation of Exotic American Options Exploiting Structure," Computational Finance, Vol. 2, No. 1, 1998.
3. Bertsimas, D., Lauprete, G., and Samarov, "Shortfall As Risk Measure: Properties, Optimization, and Applications," Journal of Economic Dynamics and Control, Vol. 88, No. 7, 2004.
4. A function f is said to be concave if −f is convex.

5. A set C is a cone if for all x ∈ C it follows that ax ∈ C for all a ≥ 0. A convex cone is a cone with the property that x + y ∈ C for all x, y ∈ C.

6. In this formulation the constraint 0 ≤ wᵢ ≤ dᵢ ensures that the weight of asset i is equal to 0 in the optimal solution if dᵢ is equal to 0. If dᵢ = 1, then the constraint is redundant, because all asset weights are at most 1.
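A minimal sketch of how these on/off bounds work in practice, using scipy.optimize.linprog. The expected returns, the three-asset universe, and the choice of objective are illustrative assumptions, not from the text:

```python
import numpy as np
from scipy.optimize import linprog

mu = np.array([0.10, 0.07, 0.05])   # hypothetical expected returns
d = np.array([1, 0, 1])             # d_i = 0 forces asset i out of the portfolio

# maximize mu'w  subject to  sum(w) = 1  and  0 <= w_i <= d_i
res = linprog(c=-mu,
              A_eq=np.ones((1, 3)), b_eq=[1.0],
              bounds=list(zip(np.zeros(3), d.astype(float))))

w = res.x
print(w)  # asset 2 is forced to zero because d[1] = 0
```

Because the bound for an excluded asset collapses to the single point 0, the solver never allocates to it, while assets with dᵢ = 1 keep the usual [0, 1] range.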
7. Strictly speaking, we also need to require that the gradient vectors (1) ∇hⱼ(x*) for all j, and (2) ∇gᵢ(x*) for all indices i for which gᵢ(x*) = 0, are linearly independent.
8. We emphasize that [∇²f(xₖ)]⁻¹∇f(xₖ) is shorthand for solving the linear system ∇²f(xₖ)h = ∇f(xₖ).

9. The name of this method comes from the fact that at the point xₖ the direction given by −∇f(xₖ) is the direction in which the function f decreases most rapidly. The step size γ can be chosen in a variety of ways.

10. The Newton method can be shown to guarantee that the value of the objective function decreases at each iteration when the Hessian matrices ∇²f(xₖ) are positive definite and have condition numbers that can be uniformly bounded. For the method of steepest descent, these requirements are not needed for the same property to hold.
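A minimal numerical sketch of the two updates discussed in notes 8-10. The test function f(x) = Σᵢ (exp(xᵢ) − xᵢ), the starting point, and the step size are illustrative assumptions, chosen so that the Hessian is positive definite with a bounded condition number near the minimizer at 0:

```python
import numpy as np

# Hypothetical test function f(x) = sum(exp(x_i) - x_i), minimized at x = 0
def grad(x):
    return np.exp(x) - 1.0       # analytic gradient of f

def hess(x):
    return np.diag(np.exp(x))    # analytic Hessian of f (positive definite)

# Newton's method: solve the linear system hess(x) h = grad(x)
# rather than forming the inverse Hessian explicitly (note 8)
x = np.array([1.0, -0.5])
for _ in range(20):
    h = np.linalg.solve(hess(x), grad(x))
    x = x - h

# Steepest descent: step in the direction -grad(x), here with an
# assumed fixed step size gamma = 0.5 (note 9)
y = np.array([1.0, -0.5])
for _ in range(200):
    y = y - 0.5 * grad(y)

print(x, y)  # both approach the minimizer [0, 0]
```

Solving the linear system is both cheaper and numerically more stable than computing the inverse Hessian and multiplying, which is the point of the shorthand in note 8.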
11. We obtain the approximations through the second- and first-order Taylor expansions

f(xₖ + δ) = f(xₖ) + ∇f(xₖ)δ + ½ δ′∇²f(xₖ)δ + O(‖δ‖³)
hᵢ(xₖ + δ) = hᵢ(xₖ) + ∇hᵢ(xₖ)δ + O(‖δ‖²), i = 1, …, I
gⱼ(xₖ + δ) = gⱼ(xₖ) + ∇gⱼ(xₖ)δ + O(‖δ‖²), j = 1, …, J
We note that by using only first-order Taylor expansions we would obtain a linear approximation to the nonlinear programming problem. This is the basic idea behind Sequential Linear Programming (SLP), in which a sequence of linear approximations is solved by linear programming to produce a solution of the nonlinear programming problem.
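The order of the approximation error in note 11 can be verified numerically. The sketch below uses a hypothetical smooth two-variable function with its analytic gradient and Hessian (the function, test point, and direction are all illustrative assumptions):

```python
import numpy as np

# Hypothetical smooth test function with analytic gradient and Hessian
def f(x):
    return np.exp(x[0]) + x[0] * x[1] + x[1] ** 2

def grad(x):
    return np.array([np.exp(x[0]) + x[1], x[0] + 2 * x[1]])

def hess(x):
    return np.array([[np.exp(x[0]), 1.0], [1.0, 2.0]])

xk = np.array([0.3, -0.2])   # arbitrary expansion point
d0 = np.array([1.0, -1.0])   # arbitrary direction

# Error of the second-order Taylor model for shrinking step lengths
errors = []
for t in [1e-1, 1e-2, 1e-3]:
    d = t * d0
    quad = f(xk) + grad(xk) @ d + 0.5 * d @ hess(xk) @ d
    errors.append(abs(f(xk + d) - quad))

# Each tenfold reduction in ||d|| shrinks the error roughly
# 1000-fold, consistent with the O(||d||^3) remainder in note 11
print(errors)
```

Dropping the quadratic term from `quad` and rerunning would show the error shrinking only 100-fold per decade, the O(‖δ‖²) behavior of the first-order (SLP-style) model.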
12. See, for example, Borchers, B., and Mitchell, J. E. (1994), "An Improved Branch and Bound Algorithm for Mixed Integer Nonlinear Programs," Computers and Operations Research, Vol. 21, No. 4, pp. 359–367.
This document was uploaded on 02/17/2014 for the course COURANT G63.2751.0 at NYU.