6.867 Machine learning

1 Lagrange multipliers and optimization problems

We present here a very simple tutorial example of using and understanding Lagrange multipliers. Let w be a scalar parameter we wish to estimate and x a fixed scalar. We wish to solve the following (tiny) SVM-like optimization problem:

    minimize (1/2) w^2  subject to  wx - 1 ≥ 0                (1)

This is difficult only because of the constraint. We would rather solve an unconstrained version of the problem but, somehow, we have to take the constraint into account. We can do this by including the constraint itself in the minimization objective, as this allows us to twist the solution towards satisfying the constraint. We need to know how much to emphasize the constraint, and this is what the Lagrange multiplier does. We will denote the Lagrange multiplier by α to be consistent with the SVM problem. So we have now constructed a new minimization problem (still minimizing with respect to w) that includes the constraint as an additional linear term:

    J(w; α) = (1/2) w^2 - α (wx - 1)                          (2)

The Lagrange multiplier α appears here as a parameter. You might view this new objective with some suspicion, since we appear to have lost the information about what type of constraint we had, i.e., whether the constraint was...
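The objective in (2) can be worked through concretely. The following sketch (not part of the original notes; the dual-maximization step and the choice x = 2.0 are added here for illustration) minimizes J(w; α) over w analytically, then picks the α that enforces the constraint:

```python
# Illustrative numerical check of the Lagrangian in equation (2):
#   J(w; alpha) = 0.5 * w**2 - alpha * (w*x - 1)
# The value x = 2.0 is an arbitrary choice for demonstration.
x = 2.0

# Minimizing J over w: dJ/dw = w - alpha*x = 0  =>  w = alpha * x.
# Substituting back gives the dual  g(alpha) = alpha - 0.5 * alpha**2 * x**2,
# which is maximized over alpha >= 0 at alpha = 1 / x**2.
alpha = 1.0 / x**2
w = alpha * x          # optimal w = 1/x

print(w)               # 0.5, i.e. 1/x
print(w * x - 1.0)     # 0.0: the constraint wx - 1 >= 0 holds with equality
```

Note that the constraint is active (wx - 1 = 0) at the solution, which is why the multiplier α comes out strictly positive, consistent with the SVM intuition that α > 0 corresponds to a support vector.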
This note was uploaded on 02/20/2012 for the course ECE 8443 taught by Professor Staff during the Spring '10 term at University of Houston.