Continuous Optimization Problems

Mathematical optimization techniques are crucial in the design and construction of portfolios of financial assets. It is assumed that you have taken an introductory course in linear and nonlinear programming, such as MA 614, and are familiar with the major classes of optimization problems (e.g., linear versus nonlinear problems, continuous versus discrete problems, unconstrained versus constrained problems). The material that follows is meant as a quick review of optimization problems with continuous variables.

Problems without Constraints

The simplest class of optimization problems involves finding the values of m decision variables or parameters u_1, u_2, \ldots, u_m that minimize a performance index L, an objective function of these parameters. We will denote this function as

    L = L(u_1, u_2, \ldots, u_m) = L(\mathbf{u})

where u is the m-dimensional column vector of decision variables. Note that we will use the vector-matrix notation for multivariate functions that we defined and developed in Lecture 3. If there are no constraints on the possible values of the vector u, and if the function L(u) has first and second partial derivatives everywhere, then the necessary conditions for a minimum are

    \frac{\partial L}{\partial \mathbf{u}} = \mathbf{g} = \mathbf{0}    (1)

and

    \frac{\partial^2 L}{\partial \mathbf{u}^2} = \mathbf{H} \ge \mathbf{0}    (2)
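As a minimal numerical sketch of checking conditions (1) and (2), assuming NumPy is available (the objective function and candidate point below are made up for illustration, not taken from the lecture), the gradient and Hessian can be approximated by central finite differences and tested at a candidate point:

```python
import numpy as np

def L(u):
    # Illustrative objective (an assumption for this sketch):
    # L(u) = u1^2 + 2*u2^2, which has its minimum at the origin.
    return u[0] ** 2 + 2.0 * u[1] ** 2

def gradient(L, u, h=1e-5):
    """Central-difference approximation of the gradient g = dL/du."""
    g = np.zeros_like(u)
    for i in range(len(u)):
        e = np.zeros_like(u)
        e[i] = h
        g[i] = (L(u + e) - L(u - e)) / (2 * h)
    return g

def hessian(L, u, h=1e-4):
    """Central-difference approximation of the Hessian H = d^2L/du^2."""
    m = len(u)
    H = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            ei = np.zeros(m); ei[i] = h
            ej = np.zeros(m); ej[j] = h
            H[i, j] = (L(u + ei + ej) - L(u + ei - ej)
                       - L(u - ei + ej) + L(u - ei - ej)) / (4 * h ** 2)
    return H

u_star = np.array([0.0, 0.0])  # candidate stationary point
g = gradient(L, u_star)
H = hessian(L, u_star)

print(np.allclose(g, 0.0, atol=1e-6))     # equation (1): gradient vanishes
print(np.all(np.linalg.eigvalsh(H) > 0))  # H positive definite, so a local minimum
```

Because the eigenvalues of H are strictly positive here, the point satisfies not only the necessary condition (2) but also the stronger sufficient condition discussed next.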

Equation (2) defines the m × m Hessian matrix of second partial derivatives, which must be positive semi-definite at the optimal solution u*. That is, the matrix H has non-negative eigenvalues, or equivalently x'Hx ≥ 0 for all m-dimensional vectors x. All points that satisfy equation (1) are called stationary points. Sufficient conditions for a local minimum are equation (1) and

    \frac{\partial^2 L}{\partial \mathbf{u}^2} = \mathbf{H} > \mathbf{0}    (3)

That is, the Hessian matrix of second partial derivatives must be positive definite at the optimal solution u*. If equation (1) is satisfied but |∂²L/∂u²| = 0, that is, the determinant of the Hessian matrix is zero (i.e., one or more of its eigenvalues is zero), additional information is needed to establish whether or not the point is a minimum. Such a point is called a singular point. Note that if L is a linear function of u, then ∂²L/∂u² = 0 everywhere, and, in general, a minimum does not exist.

Example 1

Let A be a symmetric m × m matrix and b an m-dimensional column vector. Suppose that L is the following quadratic form:

    L(\mathbf{u}) = \tfrac{1}{2}\,\mathbf{u}'\mathbf{A}\mathbf{u} - \mathbf{b}'\mathbf{u}    (4)
The gradient vector and Hessian are given by

    \mathbf{g} = \mathbf{A}\mathbf{u} - \mathbf{b}

    \mathbf{H} = \mathbf{A}

If matrix A is positive definite, then matrix H is positive definite. It follows that matrix A is non-singular, and there is a unique optimal solution obtained by setting the gradient vector equal to zero and solving for the decision vector:

    \mathbf{u}^* = \mathbf{A}^{-1}\mathbf{b}

One type of portfolio optimization problem we will be solving can be expressed in a form similar to this example, where the m × m matrix A is the covariance matrix of returns, which can be shown to be positive definite. Hence, the necessary and sufficient conditions are met. Only one solution exists, and it is said to be a global optimal solution.
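The solution u* = A⁻¹b of the example above can be computed in a few lines, assuming NumPy; the matrix A and vector b below are made-up numbers chosen only so that A is symmetric positive definite, as a covariance matrix of returns would be:

```python
import numpy as np

# Illustrative data (an assumption for this sketch): a symmetric
# positive-definite matrix A and a vector b, as in the quadratic
# form L(u) = 0.5 u'Au - b'u of equation (4).
A = np.array([[0.10, 0.02],
              [0.02, 0.08]])
b = np.array([0.04, 0.03])

# A is positive definite (all eigenvalues > 0), so H = A is positive
# definite and the minimizer is unique and global.
assert np.all(np.linalg.eigvalsh(A) > 0)

# Solve A u* = b rather than forming A^{-1} explicitly, which is
# cheaper and numerically more stable than np.linalg.inv(A) @ b.
u_star = np.linalg.solve(A, b)

# The gradient g = Au - b vanishes at u*, confirming equation (1).
g = A @ u_star - b
print(u_star)
print(np.allclose(g, 0.0))
```

Using `np.linalg.solve` instead of an explicit inverse is the standard design choice here; for the large, symmetric positive-definite covariance matrices that arise in portfolio problems, a Cholesky-based solve is the usual refinement.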
