Looking back at our Hessian matrix (in general terms):

$$H \equiv \begin{bmatrix} f_{11} & f_{12} & \cdots & f_{1n} \\ f_{21} & f_{22} & \cdots & f_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ f_{n1} & f_{n2} & \cdots & f_{nn} \end{bmatrix}$$

Let $|H_i|$ be the determinant of the matrix consisting of the first $i$ rows and columns of $H$:

$$|H_i| \equiv \begin{vmatrix} f_{11} & \cdots & f_{1i} \\ \vdots & \ddots & \vdots \\ f_{i1} & \cdots & f_{ii} \end{vmatrix}$$

$|H_i|$ is called the $i$th-order principal minor of $H$. For those of you unfamiliar with matrix algebra, the determinant of a 1st-order matrix is $|H_1| = f_{11}$; for a 2nd-order matrix, $|H_2| = f_{11}f_{22} - f_{12}f_{21}$. For a 3rd-order matrix:

$$|H_3| = f_{11}(f_{22}f_{33} - f_{23}f_{32}) - f_{12}(f_{21}f_{33} - f_{23}f_{31}) + f_{13}(f_{21}f_{32} - f_{22}f_{31}).$$

Higher-order determinants can be obtained by breaking them down into determinants of matrices of lower order, as we have done here for the 3rd-order matrix.

2. Optimization in Economics
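The principal minors $|H_i|$ can be computed directly as determinants of the upper-left $i \times i$ submatrices. A minimal numerical sketch, using a hypothetical $3 \times 3$ Hessian (the numbers are purely illustrative):

```python
# Compute the ith-order leading principal minors |H_i| of an
# illustrative (hypothetical) 3x3 Hessian matrix.
import numpy as np

H = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# |H_i| is the determinant of the submatrix of the first i rows and columns.
minors = [np.linalg.det(H[:i, :i]) for i in range(1, H.shape[0] + 1)]

# By hand: |H_1| = 2, |H_2| = 2*3 - 1*1 = 5,
# |H_3| = 2*(3*2 - 1*1) - 1*(1*2 - 0*1) + 0 = 8.
print(minors)
```

Here every $|H_i|$ is positive, which is the sign pattern associated with a positive definite Hessian (the convexity/minimum case discussed above).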
The above machinery is all well and good, except for the fact that in
economics, most of our problems deal with constrained optimization. Take,
for example, the standard consumer problem we've mentioned previously:

Choose $x \in B(p, m) \equiv \{\, x \in \mathbb{R}^K_+ : p \cdot x \le m \,\}$ to maximize $u(x)$, where $p \cdot x \equiv p_1 x_1 + \cdots + p_K x_K$.

It doesn't make sense to simply say the agent wants to maximize her
utility $u(x)$; given the typical view in economics that individuals have
insatiable wants, the consumer will want to set $x = \infty$! Above, we have
been talking about the possibility of maximizing or minimizing a function,
with little additional constraints on the values may take, other than the
fact that it must lie in the domain of the function.
Fortunately, there is a practical way we can introduce our constraints and
utilize some of the good mathematics regarding concavity, convexity, and
optima discussed above. This is due to the insights of Joseph Louis Lagrange,
an Italian mathematician active in the latter half of the 1700s to early 1800s.
Side Note: $p$ is a row vector with K columns, and $x$ is a row vector with K columns. Technically, we should write the product $p \cdot x$ as $p x^T$, where $x^T$ is the transpose of $x$, a column vector with K rows. But we write $p \cdot x$ as a shortcut for $p x^T$.

Method of Lagrange

The Lagrange method is simple enough. Convert a constrained optimization into a new
optimization problem, one that ‘looks’ unconstrained. However, if that new optimization
problem has a solution, the additional conditions we impose on the pro...
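To make the Lagrange method concrete, here is a minimal symbolic sketch for the consumer problem, assuming an illustrative utility $u(x_1, x_2) = x_1 x_2$ (a monotone transformation of Cobb-Douglas, chosen only to keep the algebra simple); the names and numbers are hypothetical, not from the notes:

```python
# Sketch of the Lagrange method for the consumer problem, assuming
# the illustrative utility u(x1, x2) = x1 * x2.
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam', positive=True)
p1, p2, m = sp.symbols('p1 p2 m', positive=True)

u = x1 * x2                              # utility to be maximized
L = u + lam * (m - p1*x1 - p2*x2)        # Lagrangian with the budget constraint

# First-order conditions: set all partial derivatives of L to zero.
foc = [sp.Eq(sp.diff(L, v), 0) for v in (x1, x2, lam)]
sol = sp.solve(foc, [x1, x2, lam], dict=True)[0]

# The familiar Cobb-Douglas demands: x1* = m/(2 p1), x2* = m/(2 p2).
print(sol[x1], sol[x2])
```

The constrained problem has been converted into an unconstrained-looking one: maximizing $L$ over $(x_1, x_2, \lambda)$, exactly the conversion described above.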
