Say that ƒ(x1,..., xn) is homogeneous of degree one, so that we have ƒ(λ⋅x1, ...,λ⋅xn) ≡ λ⋅ƒ(x1,..., xn).
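This identity is easy to check numerically. A minimal sketch, using an assumed Cobb-Douglas function (its exponents sum to one, so it is homogeneous of degree 1 — this example is illustrative, not from the handout):

```python
# Check f(lam*x1, lam*x2) == lam * f(x1, x2) for a degree-1 homogeneous function.
# Cobb-Douglas with exponents summing to 1 is an assumed, illustrative choice.

def f(x1, x2):
    return x1 ** 0.3 * x2 ** 0.7

lam = 4.0
lhs = f(lam * 2.0, lam * 5.0)   # scale both inputs by lam, then evaluate
rhs = lam * f(2.0, 5.0)          # evaluate first, then scale the output
print(lhs, rhs)  # the two values coincide up to floating-point rounding
```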
Differentiating this identity with respect to λ yields:
∑i=1,...,n ƒi(λ⋅x1,...,λ⋅xn) ⋅ xi ≡ ƒ(x1,..., xn)   for all x1,..., xn and all λ > 0

and setting λ = 1 then gives:

∑i=1,...,n ƒi(x1,..., xn) ⋅ xi ≡ ƒ(x1,..., xn)   for all x1,..., xn

which is called Euler’s theorem, and which will turn out to have very important implications for
the distribution of income among factors of production.
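Euler’s theorem can likewise be verified numerically. A minimal sketch, again using an assumed Cobb-Douglas example, with the partial derivatives ƒi approximated by central differences:

```python
# Verify Euler's theorem: sum_i f_i(x) * x_i == f(x) for a degree-1 homogeneous f.
# The Cobb-Douglas form below is an assumed, illustrative example.

def f(x1, x2):
    return x1 ** 0.3 * x2 ** 0.7

def partial(g, args, i, h=1e-6):
    """Central-difference approximation to the i-th partial derivative of g."""
    up = list(args)
    dn = list(args)
    up[i] += h
    dn[i] -= h
    return (g(*up) - g(*dn)) / (2 * h)

x = (2.0, 5.0)
euler_sum = sum(partial(f, x, i) * x[i] for i in range(len(x)))
print(euler_sum, f(*x))  # agree up to finite-difference error
```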
Here’s another useful result: if a function is homogeneous of degree 1, then its partial derivatives
are all homogeneous of degree 0. To see this, take the identity ƒ(λ⋅x1,...,λ⋅xn) ≡ λ⋅ƒ(x1,..., xn) and
this time differentiate with respect to xi, to get:

λ⋅ƒi(λ⋅x1,...,λ⋅xn) ≡ λ⋅ƒi(x1,..., xn)   for all x1,..., xn and λ > 0

or equivalently:

ƒi(λ⋅x1,...,λ⋅xn) ≡ ƒi(x1,..., xn)   for all x1,..., xn and λ > 0

which establishes our result. In other words, if a production function exhibits constant returns to
scale (i.e., is homogeneous of degree 1), the marginal products of all the factors will be scale
invariant (i.e., homogeneous of degree 0).

E. OPTIMIZATION #1: SOLVING OPTIMIZATION PROBLEMS
The General Structure of Optimization Problems

Economics is full of optimization (maximization or minimization) problems: the maximization
of utility, the minimization of expenditure, the minimization of cost, the maximization of profits,
etc. Understanding these is a lot easier if one knows what is systematic about such problems.
Each optimization problem has an objective function ƒ(x1,..., xn;α1,...,αm) which we are trying to
either maximize or minimize (in our examples, we’ll always be maximizing). This function
depends upon both the control variables x1,..., xn which we (or the economic agent) are able to
set, as well as some parameters α1,...,αm, which are given as part of the problem. Thus a general
unconstrained maximization problem takes the form:
   max    ƒ(x1,..., xn; α1,..., αm)
x1,..., xn

Consider the following one-parameter maximization problem:
   max    ƒ(x1,..., xn; α)
x1,..., xn

(It’s only for simplicity that we assume just one parameter. All of our results will apply to the
general case of many parameters α1,...,αm.) We represent the solutions to this problem, which
obviously depend upon the values of the parameter(s), by the n solution functions:
x1∗ = x1∗(α)
x2∗ = x2∗(α)
⋮
xn∗ = xn∗(α)

It is often useful to ask “how well have we done?” or in other words, “how high can we get
ƒ(x1,..., xn;α), given the value of the parameter α?” This is obviously determined by substituting
in the optimal solutions back into the objective function, to obtain:
φ(α) ≡ ƒ(x1∗,..., xn∗; α) ≡ ƒ(x1∗(α),..., xn∗(α); α)

and φ(α) is called the...
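A concrete illustration of the solution function and of φ(α); the one-variable quadratic objective below is an assumed example (not from the handout), chosen so the maximizer can be found by hand:

```python
# Illustrative (assumed) problem: maximize f(x; a) = -(x - a)**2 + a over x.
# First-order condition: -2*(x - a) = 0, so the solution function is x*(a) = a.

def f(x, a):
    return -(x - a) ** 2 + a

def x_star(a):
    return a  # solution function from the first-order condition

def phi(a):
    # optimal value function: substitute the solution back into the objective
    return f(x_star(a), a)

print(x_star(2.5), phi(2.5))  # x*(2.5) = 2.5 and phi(2.5) = 2.5
```

Note how φ depends only on the parameter α: the control variable has been optimized out by substituting x∗(α) back into the objective.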