…as we knew, since the constraint binds.

b) We notice that, from condition 2), if x_1* > 0 but x_2* = 0, then

    f_1(x_1*, x_2*) / g_1(x_1*, x_2*) = λ* ≥ f_2(x_1*, x_2*) / g_2(x_1*, x_2*).

This condition, that f_1/g_1 ≥ f_2/g_2 when x_2* = 0, should make intuitive sense in this case. You could think of the term f_1(x_1*, x_2*) / g_1(x_1*, x_2*), loosely, as the marginal increment in the objective function from a marginal change in the constraint (adjusted for the fact that changes in x_1 move along g). The same can be said about f_2(x_1*, x_2*) / g_2(x_1*, x_2*). If they are equal, then, at the margin, they sort of give equal 'push' to the objective function. This is what happens in Case a). If one variable gives greater 'push' than the other at the margin, we'd expect that the other variable isn't a factor at the optimum, e.g., x_2* = 0.

The above conditions 1-3 are referred to as Kuhn-Tucker conditions. It is important to note that these are necessary but not sufficient conditions for an optimum. However, if we have enough conditions to ensure the solution to these conditions is unique, then we are assured that the solution to the Kuhn-Tucker conditions will be the solution to our original maximization problem.
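The 'push' interpretation can be checked numerically. The following is a minimal sketch under my own assumptions: the specific problem (maximize f(x_1, x_2) = x_1·x_2 subject to x_1 + 2x_2 ≤ 4, with x_1, x_2 ≥ 0) is illustrative and not from the notes. At its interior optimum the two ratios f_i/g_i are equal, and their common value λ* matches the marginal gain in the maximized objective when the constraint constant is relaxed.

```python
# Illustrative problem (my own, not from the notes):
#   maximize f(x1, x2) = x1 * x2   subject to   g(x1, x2) = x1 + 2*x2 <= 4,  x1, x2 >= 0.
# At the interior optimum the constraint binds and f1/g1 = f2/g2 = lambda.

def value(c):
    """Maximized objective as a function of the constraint constant c.
    Solving f1/g1 = f2/g2 (i.e. x2/1 = x1/2) with x1 + 2*x2 = c gives
    x1 = c/2, x2 = c/4, so the maximized value is c**2 / 8."""
    return c**2 / 8

c = 4.0
x1, x2 = c / 2, c / 4           # optimal choices: (2, 1)
push1 = x2 / 1                  # f1/g1 = x2 / 1
push2 = x1 / 2                  # f2/g2 = x1 / 2
lam = push1                     # their common value is the multiplier

# lambda should equal the marginal value of relaxing the constraint:
eps = 1e-6
marginal = (value(c + eps) - value(c)) / eps

print(push1, push2)             # equal 'push': 1.0 1.0
print(round(marginal, 4))       # ~ lambda = 1.0
```

Relaxing the constraint from c to c + ε raises the maximized objective by about λ*·ε, which is exactly the "marginal increment in the objective from a marginal change in the constraint."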
Taking all this into account, we have a general formulation for our first‐order conditions and
the K‐T conditions:
Z ≡ f(x_1, …, x_n) + Σ_{i=1}^{m} λ_i [c_i − g^i(x_1, …, x_n)]

Here, g^i denotes constraint i; the superscript is not referring to the partial derivative of g with respect to x_i. We call this a Kuhn-Tucker Lagrangian. We have left out the non-negativity constraint for each x_j, but it is implicit in our formulation.

Our first-order conditions are then:

1. ∂Z/∂x_j ≤ 0 for j = 1, 2, …, n
2. x_j · (∂Z/∂x_j) = 0 for j = 1, 2, …, n
3. ∂Z/∂λ_i ≥ 0 and λ_i ≥ 0 for i = 1, 2, …, m
4. λ_i · (∂Z/∂λ_i) = 0 for i = 1, 2, …, m
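Given a candidate solution, conditions 1-4 can be verified mechanically. Below is a hedged sketch on a toy problem of my own devising (maximize f = x_1 + 2x_2 subject to x_1 + x_2 ≤ 1, x_1, x_2 ≥ 0); the candidate optimum x* = (0, 1) with λ* = 2 is a corner solution where x_1 'isn't a factor':

```python
# Toy problem (my own illustration, not from the notes):
#   maximize f(x1, x2) = x1 + 2*x2   subject to   g(x1, x2) = x1 + x2 <= 1,  x1, x2 >= 0.
# Candidate optimum: x* = (0, 1) with multiplier lambda* = 2 (a corner: x1* = 0).

def f(x): return x[0] + 2 * x[1]
def g(x): return x[0] + x[1]

c = 1.0
def Z(x, lam):                   # Kuhn-Tucker Lagrangian
    return f(x) + lam * (c - g(x))

def dZ_dx(x, lam, j, h=1e-6):    # numerical partial of Z w.r.t. x_j
    xp = list(x)
    xp[j] += h
    return (Z(xp, lam) - Z(x, lam)) / h

x_star, lam_star = [0.0, 1.0], 2.0
tol = 1e-4

cond1 = all(dZ_dx(x_star, lam_star, j) <= tol for j in range(2))               # dZ/dx_j <= 0
cond2 = all(abs(x_star[j] * dZ_dx(x_star, lam_star, j)) <= tol for j in range(2))
cond3 = (c - g(x_star)) >= -tol and lam_star >= 0                              # dZ/dlambda >= 0
cond4 = abs(lam_star * (c - g(x_star))) <= tol

print(cond1, cond2, cond3, cond4)   # all True at the candidate optimum
```

Note how condition 1 holds with strict inequality for x_1 (∂Z/∂x_1 = 1 − λ* = −1 < 0), which is compatible with condition 2 only because x_1* = 0, i.e., the complementary-slackness logic described above.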
An Example
Consider the problem of a firm that produces two product lines,