\[
\frac{d\phi(\alpha)}{d\alpha}
= \frac{\partial f(x_1^*(\alpha),\ldots,x_n^*(\alpha);\alpha)}{\partial x_1}\cdot\frac{dx_1^*(\alpha)}{d\alpha}
+ \cdots
+ \frac{\partial f(x_1^*(\alpha),\ldots,x_n^*(\alpha);\alpha)}{\partial x_n}\cdot\frac{dx_n^*(\alpha)}{d\alpha}
+ \frac{\partial f(x_1^*(\alpha),\ldots,x_n^*(\alpha);\alpha)}{\partial\alpha}
\]
where the last term is obviously the direct effect of α upon the objective function. The first n terms are there because a change in α affects the optimal xi values, which in turn affect the objective function. All in all, this derivative is a big mess.
However, if we recall the first order conditions to this problem, we see that since ∂f/∂x1 = ⋯ = ∂f/∂xn = 0 at the optimum, all of these first n terms are zero, so that we just get:
\[
\frac{d\phi(\alpha)}{d\alpha} = \frac{\partial f(x_1^*(\alpha),\ldots,x_n^*(\alpha);\alpha)}{\partial\alpha}
\]
This means that when we evaluate how the optimal value function is affected when we change a parameter, we only have to consider that parameter's direct effect on the objective function, and can ignore the indirect effects caused by the resulting changes in the optimal values of the control variables. If we keep this in mind, we can save a lot of time.
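As a quick numerical sanity check of this result (a sketch, not part of the handout: the quadratic objective f(x; α) = −x² + αx, the parameter value, and the finite-difference step are all assumptions chosen for illustration), we can compare the total derivative dφ/dα with the partial derivative ∂f/∂α evaluated at the fixed optimum:

```python
# Numerical check of the envelope theorem for an unconstrained problem.
# Illustrative objective: f(x; a) = -x**2 + a*x, whose maximizer is
# x*(a) = a/2, so phi(a) = f(x*(a); a) = a**2/4 and dphi/da = a/2.
# The envelope theorem says dphi/da equals df/da evaluated at x*(a).

def f(x, a):
    return -x**2 + a * x

def x_star(a):
    return a / 2.0          # maximizer of f(., a)

def phi(a):
    return f(x_star(a), a)  # optimal value function

a = 3.0
h = 1e-6

# Total derivative of the optimal value function (central difference),
# which lets x* adjust as a changes.
dphi_da = (phi(a + h) - phi(a - h)) / (2 * h)

# Direct (partial) effect of a on f, holding x fixed at x*(a).
x0 = x_star(a)
df_da = (f(x0, a + h) - f(x0, a - h)) / (2 * h)

print(round(dphi_da, 4), round(df_da, 4))  # prints: 1.5 1.5
```

The two numbers agree: the indirect effect through the re-optimized x* contributes nothing, exactly as the derivation above says.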
This also works for constrained maximization problems. Consider the problem
\[
\max_{x_1,\ldots,x_n} f(x_1,\ldots,x_n;\alpha) \quad \text{subject to} \quad g(x_1,\ldots,x_n;\alpha) = c
\]
Once again, we get the optimal value function by plugging the optimal values of the control variables (namely x1*(α),..., xn*(α)) into the objective function:
\[
\phi(\alpha) \equiv f(x_1^*(\alpha),\ldots,x_n^*(\alpha);\alpha)
\]
Note that since these values must also satisfy the constraint, we also have:
\[
c - g(x_1^*(\alpha),\ldots,x_n^*(\alpha);\alpha) \equiv 0
\]
Econ 100A 16 Mathematical Handout

so we can multiply by λ(α) and add to the previous equation to get:
\[
\phi(\alpha) \equiv f(x_1^*(\alpha),\ldots,x_n^*(\alpha);\alpha) + \lambda(\alpha)\cdot\big[c - g(x_1^*(\alpha),\ldots,x_n^*(\alpha);\alpha)\big]
\]
which is the same as if we had plugged the optimal values x1*(α),..., xn*(α) and λ*(α) directly into the Lagrangian formula, or in other words:
\[
\phi(\alpha) \equiv \mathcal{L}(x_1^*(\alpha),\ldots,x_n^*(\alpha),\lambda^*(\alpha);\alpha) \equiv f(x_1^*(\alpha),\ldots,x_n^*(\alpha);\alpha) + \lambda^*(\alpha)\cdot\big[c - g(x_1^*(\alpha),\ldots,x_n^*(\alpha);\alpha)\big]
\]
Now if we differentiate the above identity with respect to α, we get:
\[
\frac{d\phi(\alpha)}{d\alpha}
= \frac{\partial\mathcal{L}(x_1^*(\alpha),\ldots,x_n^*(\alpha),\lambda^*(\alpha);\alpha)}{\partial x_1}\cdot\frac{dx_1^*(\alpha)}{d\alpha}
+ \cdots
+ \frac{\partial\mathcal{L}(x_1^*(\alpha),\ldots,x_n^*(\alpha),\lambda^*(\alpha);\alpha)}{\partial x_n}\cdot\frac{dx_n^*(\alpha)}{d\alpha}
+ \frac{\partial\mathcal{L}(x_1^*(\alpha),\ldots,x_n^*(\alpha),\lambda^*(\alpha);\alpha)}{\partial\lambda}\cdot\frac{d\lambda^*(\alpha)}{d\alpha}
+ \frac{\partial\mathcal{L}(x_1^*(\alpha),\ldots,x_n^*(\alpha),\lambda^*(\alpha);\alpha)}{\partial\alpha}
\]
But once again, since the first order conditions for the constrained maximization problem are ∂L/∂x1 = ⋯ = ∂L/∂xn = ∂L/∂λ = 0, all but the last of these right hand terms are zero, so we get:
\[
\frac{d\phi(\alpha)}{d\alpha} = \frac{\partial\mathcal{L}(x_1^*(\alpha),\ldots,x_n^*(\alpha),\lambda^*(\alpha);\alpha)}{\partial\alpha}
\]
In other words, we only have to take into account the direct effect of α on the Lagrangian function, and can ignore the indirect effects due to changes in the optimal values of the xi's and λ. An extremely helpful thing to know.

I. DETERMINANTS, SYSTEMS OF LINEAR EQUATIONS & CRAMER'S RULE
The Determinant of a Matrix

In order to...
