
# Solution Methods for Quadratic Optimization


Robert M. Freund, April 1, 2004. © 2004 Massachusetts Institute of Technology.

## 1 Outline

- Active Set Methods for Quadratic Optimization
- Pivoting Algorithms for Quadratic Optimization
- Four Key Ideas that Motivate Interior Point Methods
- Interior-Point Methods for Quadratic Optimization
- Reduced Gradient Algorithm for Quadratic Optimization
- Some Computational Results

## 2 Active Set Methods for Quadratic Optimization

In a constrained optimization problem, some constraints will be inactive at the optimal solution and so can be ignored, while others will be active at the optimal solution. If we knew which constraints fell into which category, we could throw away the inactive constraints and reduce the dimension of the problem by maintaining the active constraints at equality. Of course, in practice we typically do not know which constraints will be active or inactive at the optimal solution. Active set methods are designed to make an intelligent guess of the active set of constraints and to modify this guess at each iteration. Herein we describe a relatively simple active-set method that can be used to solve quadratic optimization problems.

Consider a quadratic optimization problem in the format:

$$
\mathrm{QP}: \quad \text{minimize}_{x} \ \tfrac{1}{2} x^T Q x + c^T x \quad \text{s.t.} \quad Ax = b, \quad x \ge 0. \tag{1}
$$
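If the correct active set were known, QP would reduce to an equality-constrained problem whose optimality conditions form a single linear system. As a minimal sketch (not from the notes; `numpy` and an invertible KKT matrix are assumed), one can solve the relaxation of (1) that ignores the bounds $x \ge 0$:

```python
import numpy as np

def solve_eq_qp(Q, A, c, b):
    """Solve  min 0.5 x'Qx + c'x  s.t.  Ax = b  (nonnegativity ignored)
    by solving the KKT system  [Q  -A'; A  0][x; p] = [-c; b]."""
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, -A.T],
                  [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-c, b]))
    return sol[:n], sol[n:]          # primal x, equality multipliers p

# Tiny instance:  min x1^2 + x2^2  s.t.  x1 + x2 = 1
Q = 2.0 * np.eye(2)
c = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, p = solve_eq_qp(Q, A, c, b)   # x = [0.5, 0.5], p = [1.0]
```

On this instance the relaxed solution already satisfies $x \ge 0$, so it also solves QP; in general the bounds must be handled by the active-set machinery described next.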
The KKT optimality conditions for QP are as follows:

$$
Ax = b, \quad x \ge 0, \qquad Qx + c - A^T p - s = 0, \quad s \ge 0, \qquad x_j s_j = 0, \ j = 1, \ldots, n.
$$

We suppose for this problem that $n$ is very large, but that intuition suggests that very few variables $x_j$ will be non-zero at the optimal solution. Suppose that we have a feasible solution $x$, and that we partition the components of $x$ into $x = (x_\beta, x_\eta)$, where $x_\beta \ge 0$ and $x_\eta = 0$. In this partition, the number of elements of $\beta$ will be small relative to $n$. Using this partition, we can re-write the data for our problem as:

$$
Q = \begin{pmatrix} Q_{\beta\beta} & Q_{\beta\eta} \\ Q_{\eta\beta} & Q_{\eta\eta} \end{pmatrix}, \qquad
A = \begin{bmatrix} A_\beta & A_\eta \end{bmatrix}, \qquad
c = \begin{pmatrix} c_\beta \\ c_\eta \end{pmatrix}.
$$

Now let us "guess" that the variables $x_j$, $j \in \eta$, will be zero in the optimal solution. That being the case, we can eliminate the variables $x_\eta$ from further consideration for the moment. We will then concentrate our efforts on solving the much smaller problem:

$$
\mathrm{QP}_\beta: \quad z_\beta = \text{minimum}_{x_\beta} \ \tfrac{1}{2} x_\beta^T Q_{\beta\beta} x_\beta + c_\beta^T x_\beta \quad \text{s.t.} \quad A_\beta x_\beta = b, \quad x_\beta \ge 0.
$$

Let $\bar{x}_\beta$ be the optimal solution of $\mathrm{QP}_\beta$, let $\bar{p}$ be the associated KKT multipliers for the constraints "$A_\beta x_\beta = b$", and let $\bar{s}_\beta$ denote the associated KKT multipliers on the nonnegativity constraints "$x_\beta \ge 0$". Then $\bar{x}_\beta$ and $(\bar{p}, \bar{s}_\beta)$ will satisfy the KKT optimality conditions for $\mathrm{QP}_\beta$, namely:

$$
A_\beta \bar{x}_\beta = b, \quad \bar{x}_\beta \ge 0, \qquad Q_{\beta\beta} \bar{x}_\beta + c_\beta - A_\beta^T \bar{p} - \bar{s}_\beta = 0, \quad \bar{s}_\beta \ge 0, \qquad (\bar{x}_\beta)^T \bar{s}_\beta = 0.
$$

We can expand $\bar{x}_\beta$ into the following feasible solution to the original problem: $\bar{x} = (\bar{x}_\beta, \bar{x}_\eta) = (\bar{x}_\beta, 0)$, by setting all of the variables $\bar{x}_\eta := 0$. This solution will of course be feasible for the original problem, but will it be optimal? The answer lies, of course, in checking the KKT optimality conditions.
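To make the bookkeeping concrete, here is an illustrative sketch (not from the notes; the helper names are my own, `numpy` is assumed, and $\mathrm{QP}_\beta$ is solved assuming its bounds $x_\beta \ge 0$ are inactive at the optimum, so the equality-constrained KKT system suffices):

```python
import numpy as np

def solve_restricted_qp(Q, A, c, b, beta):
    """Solve QP_beta, assuming x_beta > 0 at its optimum so that only
    the equality constraints A_beta x_beta = b are binding."""
    Qbb = Q[np.ix_(beta, beta)]
    Ab, cb = A[:, beta], c[beta]
    k, m = len(beta), A.shape[0]
    K = np.block([[Qbb, -Ab.T],
                  [Ab, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-cb, b]))
    return sol[:k], sol[k:]          # x_beta, multipliers p

def expand_and_check(Q, A, c, b, beta, x_beta, p, tol=1e-9):
    """Expand x_beta with zeros on eta and test the KKT conditions
    of the ORIGINAL problem QP; returns (x, s, is_optimal)."""
    n = Q.shape[0]
    x = np.zeros(n)
    x[beta] = x_beta
    s = Q @ x + c - A.T @ p          # dual slacks for the full problem
    primal_ok = np.allclose(A @ x, b) and (x >= -tol).all()
    dual_ok = (s >= -tol).all()
    comp_ok = abs(x @ s) <= tol
    return x, s, primal_ok and dual_ok and comp_ok

# Instance:  min ||x||^2  s.t.  x1 + x2 + x3 + x4 = 1,  x >= 0.
Q = 2.0 * np.eye(4)
c = np.zeros(4)
A = np.array([[1.0, 1.0, 1.0, 1.0]])
b = np.array([1.0])

# Guess beta = {0, 1}: feasible but NOT optimal; s has negative
# entries on eta, so the check fails (ok is False).
x_beta, p = solve_restricted_qp(Q, A, c, b, [0, 1])
x, s, ok = expand_and_check(Q, A, c, b, [0, 1], x_beta, p)

# Guess beta = {0, 1, 2, 3}: the expanded point x = [0.25]*4 satisfies
# all KKT conditions of QP, so the check passes (ok2 is True).
x_beta2, p2 = solve_restricted_qp(Q, A, c, b, [0, 1, 2, 3])
_, _, ok2 = expand_and_check(Q, A, c, b, [0, 1, 2, 3], x_beta2, p2)
```

In the first guess the negative entries of $s$ on $\eta$ are exactly the KKT violations; in the full active-set method such variables would be candidates to enter $\beta$ at the next iteration.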

*This note was uploaded on 12/04/2011 for the course ESD 15.094, taught by Professor Jie Sun during the Spring '04 term at MIT.*
