optimal value function.
Sometimes we will be optimizing subject to a constraint on the control variables (such as the
budget constraint of the consumer). Since this constraint may also depend upon the parameter(s),
our problem becomes:
   max  ƒ(x1,..., xn; α)
x1,..., xn

   subject to  g(x1,..., xn; α) = c

(Note that we now have an additional parameter, namely the constant c.) In this case we still
define the solution functions and optimal value function in the same way – we just have to
remember to take into account the constraint. Although it is possible that there could be more
than one constraint in a given problem, we will only consider problems with a single constraint.
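The definitions above can be made concrete with a small symbolic sketch. The objective ƒ(x1, x2) = x1·x2 and the constraint x1 + x2 = c below are hypothetical functional forms chosen for tractability (they are not from the handout); the point is only to show that the solution functions and the optimal value function each depend on the parameter c:

```python
import sympy as sp

x1, c = sp.symbols("x1 c", positive=True)

# Hypothetical problem: max x1*x2 subject to x1 + x2 = c.
# Substituting the constraint (x2 = c - x1) gives an unconstrained problem in x1.
f = x1 * (c - x1)

# Solve the first order condition df/dx1 = 0 for the solution function x1*(c).
x1_star = sp.solve(sp.diff(f, x1), x1)[0]   # c/2
x2_star = c - x1_star                        # c/2

# Optimal value function: the objective evaluated at the solution functions.
V = sp.simplify(f.subs(x1, x1_star))         # c**2/4

print(x1_star, x2_star, V)
```

Here x1*(c) = x2*(c) = c/2 are the solution functions and V(c) = c²/4 is the optimal value function, exactly the objects defined in the text.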
For example, if we were looking at the profit maximization problem, the control variables would
be the quantities of inputs and outputs chosen by the firm, the parameters would be the current
input and output prices, the constraint would be the production function, and the optimal value
function would be the firm’s “profit function,” i.e., the highest attainable level of profits given
current input and output prices.
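The profit-function idea can be sketched the same way. The one-input technology ƒ(x) = √x below is my own illustrative assumption (the handout does not specify a functional form); p and w stand for the output and input prices:

```python
import sympy as sp

p, w, x = sp.symbols("p w x", positive=True)

# Hypothetical firm: output sqrt(x), output price p, input price w.
profit = p * sp.sqrt(x) - w * x

# The first order condition pins down the input demand (a solution function).
x_star = sp.solve(sp.diff(profit, x), x)[0]   # p**2/(4*w**2)

# The profit function: the optimal value function pi(p, w).
pi = sp.simplify(profit.subs(x, x_star))      # p**2/(4*w)

print(x_star, pi)
```

The resulting π(p, w) = p²/(4w) is the "highest attainable level of profits given current input and output prices" described in the text, expressed as a function of the parameters.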
In economics we are interested in how both the optimal values of the control variables and the
optimal attainable value vary with the parameters. In other words, we will be interested in
differentiating both the solution functions and the optimal value function with respect to the
parameters. Before we can do this, however, we need to know how to solve unconstrained or
constrained optimization problems.

Econ 100A Mathematical Handout

First Order Conditions for Unconstrained Optimization Problems

The first order conditions for the unconstrained optimization problem:
   max  ƒ(x1,..., xn)
x1,..., xn

are simply that each of the partial derivatives of the objective function be zero at the solution
values (x1*,..., xn*), i.e., that:

ƒ1(x1*,..., xn*) = 0
   ⋮
ƒn(x1*,..., xn*) = 0

The intuition is that if you want to be at a “mountain top” (a maximum) or the “bottom of a
bowl” (a minimum) it must be the case that no small change in any control variable can
bowl” (a minimum) it must be the case that no small change in any control variable be able to
move you up or down. That means that the partial derivatives of ƒ(x1,..., xn) with respect to each
xi must be zero.

Second Order Conditions for Unconstrained Optimization Problems

If our optimization problem is a maximization problem, the second order condition for this
solution to be a local maximum is that ƒ(x1, ..., xn) be a weakly concave function of (x1,..., xn) (i.e.,
a mountain top) in the locality of this point. Thus, if there is only one control variable, the
second order condition is that ƒ″(x*) < 0 at the optimal value x* of the control variable. If there
are two control variables, it turns out that the conditions are:
ƒ11(x1*, x2*) < 0,   ƒ22(x1*, x2*) < 0,   and

| ƒ11(x1*, x2*)   ƒ12(x1*, x2*) |
| ƒ21(x1*, x2*)   ƒ22(x1*, x2*) |  > 0

When we have a minimization problem, the second order condition for this solution...
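The two-variable conditions can be checked mechanically. The quadratic objective below, ƒ(x1, x2) = −x1² − x2² + x1·x2, is my own illustrative choice (not from the handout), picked so the second order conditions hold at the critical point:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2", real=True)

# Illustrative concave objective (not from the handout).
f = -x1**2 - x2**2 + x1 * x2

# First order conditions: both partial derivatives equal zero.
crit = sp.solve([sp.diff(f, x1), sp.diff(f, x2)], [x1, x2])   # {x1: 0, x2: 0}

# Second order conditions: f11 < 0, f22 < 0, and the 2x2 Hessian determinant > 0.
H = sp.hessian(f, (x1, x2))          # Matrix([[-2, 1], [1, -2]])
f11, det = H[0, 0], H.det()          # -2 and 3

print(crit, f11, det)
```

Since ƒ11 = −2 < 0, ƒ22 = −2 < 0, and the determinant is 3 > 0, the critical point (0, 0) satisfies the second order conditions for a maximum.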