lagrangeMultipliers_MIT_CSAIL

6.867 Machine learning

Lagrange multipliers and optimization problems

We'll present here a very simple tutorial example of using and understanding Lagrange multipliers. Let w be a scalar parameter we wish to estimate and x a fixed scalar. We wish to solve the following (tiny) SVM-like optimization problem:

    minimize (1/2) w^2  subject to  wx - 1 ≥ 0                    (1)

This is difficult only because of the constraint. We'd rather solve an unconstrained version of the problem but, somehow, we have to take the constraint into account. We can do this by including the constraint itself in the minimization objective, as it allows us to twist the solution towards satisfying the constraint. We need to know how much to emphasize the constraint, and this is what the Lagrange multiplier is doing. We will denote the Lagrange multiplier by λ to be consistent with the SVM problem. So we have now constructed a new minimization problem (still minimizing with respect to w) that includes the constraint as an additional linear term:

    J(w; λ) = (1/2) w^2 - λ (wx - 1)                              (2)

The Lagrange multiplier λ appears here as a parameter. You might view this new objective a bit suspiciously, since we appear to have lost the information about what type of constraint we had, i.e., whether the constraint was ...
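As a concrete sanity check of equation (2), the sketch below (a minimal illustration, assuming an arbitrary value x = 2.0 that is not from the text) minimizes J(w; λ) over w in closed form via dJ/dw = w - λx = 0, picks λ so the constraint wx - 1 ≥ 0 is active, and compares the result against a brute-force search over feasible w:

```python
# Numeric check of the Lagrangian J(w; lam) = 0.5*w**2 - lam*(w*x - 1)
# for the tiny problem (1). x = 2.0 is an illustrative choice, not from the text.
x = 2.0

# For fixed lam, J is minimized over w where dJ/dw = w - lam*x = 0.
def w_star(lam):
    return lam * x

# Choosing lam so the constraint holds with equality (w*x = 1, i.e. the
# constraint is active at the optimum) gives lam = 1 / x**2.
lam = 1.0 / x**2
w = w_star(lam)  # w = 1/x

# Brute-force comparison: minimize 0.5*w**2 over a grid of feasible w (w*x >= 1).
candidates = [1.0 / x + 0.001 * k for k in range(1000)]
brute = min(candidates, key=lambda v: 0.5 * v**2)

assert abs(w - 1.0 / x) < 1e-12
assert abs(brute - w) < 1e-3
print(w)
```

The brute-force minimizer over the feasible set agrees with the Lagrangian solution, which is the point of introducing λ: the right multiplier makes the unconstrained problem reproduce the constrained optimum.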