10-725 Optimization, Spring 2008: Homework 1 Solutions
Due: Wednesday, February 6, beginning of class

1  L∞-Regularized Regression [Han, 5 points]

2  Art Class [Gaurav, 10 points]

This question is designed to help you visualize the geometry of linear programming. Recall that if an LP in standard form has n variables and m equality constraints, then the solution lies in an (n - m)-dimensional subspace. Consider the following linear program:

    max  3x_1 + x_2 - x_3 + x_4 - x_5
    s.t. 3x_1 - x_2 - 7x_3 + x_4 + x_5 = -3
         x_1 + x_3 + x_5 = 2
         3x_1 - 4x_3 + x_4 + 2x_5 = 1
         x_i >= 0  for all i

1. [5 pts] Draw the feasible set of solutions for the above LP. Enumerate the coordinates of the extreme vertices of the feasible set.

SOLUTION: Since there are 5 variables and 3 equality constraints in the problem, the solution space is a 2-dimensional affine subspace. We can thus transform this problem into an equivalent optimization problem that involves only 2 variables. To do this, we need to select a two-dimensional basis for the solution space. Let us choose our basis as consisting of the variables B = {x_1, x_3}. Note that we can choose any linearly independent pair as our basis, as long as each basis direction has a nonzero projection onto the solution space (i.e., the vectors do not belong to the null space). For example, we could have chosen {x_4 + x_5, x_1 + x_2} as our basis, though this would make the calculations messy. In the following analysis, we use the basis B = {x_1, x_3}. We now express all variables in terms of our basis variables.
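The step announced above — solving the three equality constraints for the non-basis variables (x_2, x_4, x_5) as affine functions of the basis B = {x_1, x_3} — can be mechanized. Below is a small self-contained Python sketch (not part of the original solution) that does the elimination with exact rational arithmetic; the constraint signs are taken from this reconstruction of the garbled preview, so treat them as an assumption to be checked against the hand derivation.

```python
from fractions import Fraction as F

# Equality constraints, columns ordered (x1..x5); signs are an assumption
# reconstructed from the solution:
#   3x1 - x2 - 7x3 +  x4 +  x5 = -3
#    x1      +  x3       +  x5 =  2
#   3x1      - 4x3 +  x4 + 2x5 =  1

# Coefficient matrix of the non-basis variables (x2, x4, x5):
M = [[F(-1), F(1), F(1)],
     [F(0),  F(0), F(1)],
     [F(0),  F(1), F(2)]]

# Right-hand sides after moving the basis columns (x1, x3) across;
# each row is an affine vector (constant, coeff of x1, coeff of x3).
rhs = [[F(-3), F(-3), F(7)],   # -3 - 3x1 + 7x3
       [F(2),  F(-1), F(-1)],  #  2 -  x1 -  x3
       [F(1),  F(-3), F(4)]]   #  1 - 3x1 + 4x3

def solve_affine(M, rhs):
    """Gauss-Jordan elimination where each RHS entry is an affine vector."""
    n = len(M)
    M = [row[:] for row in M]
    rhs = [row[:] for row in rhs]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
                rhs[r] = [a - f * b for a, b in zip(rhs[r], rhs[col])]
    return [[v / M[i][i] for v in rhs[i]] for i in range(n)]

x2, x4, x5 = solve_affine(M, rhs)
print("x2 =", x2)  # (2, 1, -2):  x2 =  2 + x1 - 2x3
print("x4 =", x4)  # (-3, -1, 6): x4 = -3 - x1 + 6x3
print("x5 =", x5)  # (2, -1, -1): x5 =  2 - x1 -  x3
```

Using `Fraction` rather than floats keeps every intermediate value exact, so the printed affine coefficients can be compared directly with the hand derivation.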
    x_1 + x_3 + x_5 = 2
        =>  x_5 = 2 - x_1 - x_3                                          (1)

    3x_1 - 4x_3 + x_4 + 2x_5 = 1
        =>  x_4 = 1 - 3x_1 + 4x_3 - 2(2 - x_1 - x_3)
        =>  x_4 = -3 - x_1 + 6x_3                                        (2)

    3x_1 - x_2 - 7x_3 + x_4 + x_5 = -3
        =>  x_2 = 3x_1 - 7x_3 + (2 - x_1 - x_3) + (-3 - x_1 + 6x_3) + 3
        =>  x_2 = x_1 - 2x_3 + 2                                         (3)

Using (1), (2) and (3), the objective function becomes

    max 3x_1 + x_2 - x_3 + x_4 - x_5
      = max 3x_1 + (x_1 - 2x_3 + 2) - x_3 + (-3 - x_1 + 6x_3) - (2 - x_1 - x_3)
      = max 4x_1 + 4x_3 - 3

Thus, the new optimization problem (equivalent to the original optimization problem) is

    max  4x_1 + 4x_3 - 3
    s.t. x_1 >= 0                                (4)
         x_1 - 2x_3 + 2 >= 0     (from (3))      (5)
         x_3 >= 0                                (6)
         -3 - x_1 + 6x_3 >= 0    (from (2))      (7)
         2 - x_1 - x_3 >= 0      (from (1))      (8)

Since this optimization problem involves only 2 variables, we can visualize it as shown in Figure 1. From the diagram, the extreme vertices of the feasible set are the intersections of the constraint pairs {4,5}, {4,7}, {5,8}, {7,8}. In the original coordinates (x_1, x_2, x_3, x_4, x_5), these points are:

    {4,5}: (0, 0, 1, 3, 1)
    {4,7}: (0, 1, 1/2, 0, 3/2)
    {5,8}: (2/3, 0, 4/3, 13/3, 0)
    {7,8}: (9/7, 13/7, 5/7, 0, 0)

COMMON MISTAKE 1: Not choosing a basis. Simply enumerating all the basic solutions and then checking which ones are feasible will give you the coordinates of the extreme points of the feasible region. However, in order to make a diagram that is to scale, you must choose a basis and project the region.
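The vertex enumeration can be cross-checked mechanically: intersect every pair of the five constraint boundaries (4)-(8) in the (x_1, x_3) plane, keep the intersections satisfying all constraints, and lift each back to 5 dimensions. The sketch below (not part of the original solution) does this in plain Python with exact rationals; the constraint coefficients encode this document's reconstruction of (4)-(8), so treat the signs as an assumption.

```python
from fractions import Fraction as F
from itertools import combinations

# Reduced constraints g_i(x1, x3) = c0 + c1*x1 + c3*x3 >= 0,
# numbered (4)-(8) as in the solution (signs assumed from the reconstruction).
cons = {
    4: (F(0), F(1), F(0)),    # x1 >= 0
    5: (F(2), F(1), F(-2)),   # x2 =  2 + x1 - 2x3 >= 0
    6: (F(0), F(0), F(1)),    # x3 >= 0
    7: (F(-3), F(-1), F(6)),  # x4 = -3 - x1 + 6x3 >= 0
    8: (F(2), F(-1), F(-1)),  # x5 =  2 - x1 -  x3 >= 0
}

def intersect(g, h):
    """Solve g = 0, h = 0: two lines in the (x1, x3) plane, via Cramer's rule."""
    (a0, a1, a3), (b0, b1, b3) = g, h
    det = a1 * b3 - a3 * b1
    if det == 0:
        return None  # parallel boundaries
    x1 = (-a0 * b3 + a3 * b0) / det
    x3 = (-a1 * b0 + a0 * b1) / det
    return x1, x3

vertices = {}
for i, j in combinations(cons, 2):
    p = intersect(cons[i], cons[j])
    if p is None:
        continue
    x1, x3 = p
    # Keep only intersections that satisfy every constraint.
    if all(c0 + c1 * x1 + c3 * x3 >= 0 for c0, c1, c3 in cons.values()):
        # Lift back to 5D using (1)-(3).
        x2 = 2 + x1 - 2 * x3
        x4 = -3 - x1 + 6 * x3
        x5 = 2 - x1 - x3
        vertices[(i, j)] = (x1, x2, x3, x4, x5)

for pair, v in sorted(vertices.items()):
    print(pair, tuple(str(c) for c in v))
```

Exactly four of the ten constraint pairs yield feasible intersections, matching the pairs {4,5}, {4,7}, {5,8}, {7,8} and the coordinates listed in the solution.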
This note was uploaded on 05/25/2008 for the course MACHINE LE 10708 taught by Professor Carlos Guestrin during the Spring '07 term at Carnegie Mellon.