Saving Under Uncertainty
Peter Ireland∗
EC720.01  Math for Economists
Boston College, Department of Economics
Fall 2010
This last example presents a dynamic, stochastic optimization problem that is simple
enough to allow a relatively straightforward application of the Kuhn-Tucker theorem. The
optimality conditions derived with the help of the Lagrangian and the Kuhn-Tucker theorem
can then be compared with those that can be derived with the help of the Bellman equation
and dynamic programming.

1 The Problem

Consider the simplest possible dynamic, stochastic optimization problem with:
Two periods, t = 0 and t = 1
No uncertainty at t = 0
Two possible states at t = 1:
Good, or high, state H occurs with probability π
Bad, or low, state L occurs with probability 1 − π

Notation for a consumer's problem:

y_0 = income at t = 0
c_0 = consumption at t = 0
s = savings at t = 0, carried into t = 1 (s can be negative, that is, the consumer
is allowed to borrow)
r = interest rate on savings
y_1^H = income at t = 1 in the high state
y_1^L = income at t = 1 in the low state
y_1^H > y_1^L makes H the good state and L the bad state
c_1^H = consumption at t = 1 in the high state
c_1^L = consumption at t = 1 in the low state

∗Copyright 2010 by Peter Ireland. Redistribution is permitted for educational and research purposes,
so long as no changes are made. All copies must be provided free of charge and must include this copyright
notice.
Expected utility:
u(c_0) + β E[u(c_1)] = u(c_0) + βπ u(c_1^H) + β(1 − π) u(c_1^L)
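Concretely, the objective can be evaluated in code; a minimal sketch in Python, assuming log utility and illustrative values for β and π (none of these numbers come from the text):

```python
import math

def expected_utility(c0, c1H, c1L, beta=0.95, pi=0.5, u=math.log):
    """u(c0) + beta*[pi*u(c1H) + (1 - pi)*u(c1L)] for an assumed u, beta, pi."""
    return u(c0) + beta * (pi * u(c1H) + (1 - pi) * u(c1L))

# Example: sure consumption of 1.0 today, risky consumption tomorrow.
print(expected_utility(1.0, 1.2, 0.8))
```

Any concave utility function can be passed in place of `math.log`.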
Constraints:
y_0 ≥ c_0 + s

(1 + r)s + y_1^H ≥ c_1^H

(1 + r)s + y_1^L ≥ c_1^L

The problem:

max_{c_0, s, c_1^H, c_1^L}  u(c_0) + βπ u(c_1^H) + β(1 − π) u(c_1^L)

subject to

y_0 ≥ c_0 + s,

(1 + r)s + y_1^H ≥ c_1^H,

and

(1 + r)s + y_1^L ≥ c_1^L.

Notes:
There are two constraints for period t = 1: one for each possible realization of y_1.

What makes the problem interesting is that savings s at t = 0 must be chosen before
income y_1 at t = 1 is known.

From the viewpoint of t = 0, uncertainty about y_1 induces uncertainty about c_1: the
consumer must choose a "contingency plan" for c_1.

In this simple case, it's not really that much of a problem to deal with all of the
constraints in forming a Lagrangian.

In this simple case, it's relatively easy to describe the contingency plan using the
notation c_1^H and c_1^L to distinguish between consumption at t = 1 in each of the
two states.

But, as the number of periods and/or the number of possible states grows, these
notational burdens become increasingly tedious, which is what motivates our interest
in dynamic programming as a way of dealing with stochastic problems.

2 The Kuhn-Tucker Formulation

Set up the Lagrangian, using separate multipliers λ_1^H and λ_1^L for each constraint at t = 1:
L(c_0, s, c_1^H, c_1^L, λ_0, λ_1^H, λ_1^L) = u(c_0) + βπ u(c_1^H) + β(1 − π) u(c_1^L) + λ_0(y_0 − c_0 − s)
    + λ_1^H[(1 + r)s + y_1^H − c_1^H] + λ_1^L[(1 + r)s + y_1^L − c_1^L]
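One way to double-check the first-order conditions below is to differentiate this Lagrangian symbolically. A sketch using SymPy, with a generic (undefined) utility function u; the library choice and variable names are assumptions for illustration, not part of the original notes:

```python
import sympy as sp

# Choice variables, multipliers, and parameters of the two-period problem.
c0, s, c1H, c1L = sp.symbols('c0 s c1H c1L', positive=True)
lam0, lamH, lamL = sp.symbols('lambda0 lambdaH lambdaL')
y0, y1H, y1L, r, beta, pi = sp.symbols('y0 y1H y1L r beta pi', positive=True)
u = sp.Function('u')  # generic utility, left unevaluated

# The Lagrangian, term by term as in the text.
Lagr = (u(c0) + beta * pi * u(c1H) + beta * (1 - pi) * u(c1L)
        + lam0 * (y0 - c0 - s)
        + lamH * ((1 + r) * s + y1H - c1H)
        + lamL * ((1 + r) * s + y1L - c1L))

# Differentiating with respect to each choice variable reproduces the FOCs.
for var in (c0, s, c1H, c1L):
    print(var, ':', sp.diff(Lagr, var))
```

Each printed expression matches the corresponding first-order condition derived below.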
FOC for c_0:

u'(c_0) − λ_0 = 0

FOC for s:

−λ_0 + λ_1^H(1 + r) + λ_1^L(1 + r) = 0

FOC for c_1^H:

βπ u'(c_1^H) − λ_1^H = 0

FOC for c_1^L:

β(1 − π) u'(c_1^L) − λ_1^L = 0

Use the FOCs for c_0, c_1^H, and c_1^L to eliminate reference to the multipliers λ_0, λ_1^H, and λ_1^L
in the FOC for s:
u'(c_0) = βπ u'(c_1^H)(1 + r) + β(1 − π) u'(c_1^L)(1 + r)    (1)

Together with the binding constraints

y_0 = c_0 + s,

(1 + r)s + y_1^H = c_1^H,

and

(1 + r)s + y_1^L = c_1^L,
(1) gives us a system of 4 equations in the 4 unknowns: c_0, s, c_1^H, and c_1^L.
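This system is easy to solve numerically. Since the binding constraints express c_0, c_1^H, and c_1^L as functions of s, equation (1) becomes a single equation in s. A sketch assuming log utility u(c) = ln c and illustrative parameter values (both are assumptions, not part of the original problem statement):

```python
def solve_savings(y0=1.0, y1H=1.5, y1L=0.5, r=0.05, beta=0.95, pi=0.5):
    """Find the s solving (1) under u(c) = ln(c), so u'(c) = 1/c, by bisection."""
    def euler_residual(s):
        c0 = y0 - s                       # binding period-0 constraint
        c1H = (1 + r) * s + y1H           # binding constraint, high state
        c1L = (1 + r) * s + y1L           # binding constraint, low state
        return 1.0 / c0 - beta * (1 + r) * (pi / c1H + (1 - pi) / c1L)

    lo = -y1L / (1 + r) + 1e-9            # keep c1L > 0 (borrowing limit)
    hi = y0 - 1e-9                        # keep c0 > 0
    for _ in range(200):                  # residual is increasing in s, so bisect
        mid = 0.5 * (lo + hi)
        if euler_residual(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

s = solve_savings()
print(s)
```

With these parameters the consumer saves a modest positive amount, smoothing consumption against the bad state.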
Note also that (1) can be written more compactly as
u'(c_0) = β(1 + r)E[u'(c_1)],

which is a special case of the more general optimality condition that we derived previously in the "saving with multiple random returns" example, reflecting that in this
simple example:

There are only two periods.

The return on the single asset is known.

3 The Dynamic Programming Formulation

Consider "restarting" the problem at t = 1, in state H, given that s has already been
chosen and y_1^H already determined.
The consumer solves the static problem:
max_{c_1^H} u(c_1^H) subject to (1 + r)s + y_1^H ≥ c_1^H.

The solution is trivial: set

c_1^H = (1 + r)s + y_1^H
Hence, if we define the maximum value function

v(s, y_1^H) = max_{c_1^H} u(c_1^H) subject to (1 + r)s + y_1^H ≥ c_1^H

then we know right away that

v(s, y_1^H) = u[(1 + r)s + y_1^H]

and hence

v_1(s, y_1^H) = (1 + r)u'[(1 + r)s + y_1^H] = (1 + r)u'(c_1^H)    (2)

Likewise, if we restart the problem at t = 1 in state L, given that s has already been chosen
and y_1^L already determined, then
v(s, y_1^L) = max_{c_1^L} u(c_1^L) subject to (1 + r)s + y_1^L ≥ c_1^L

and we know right away that

v(s, y_1^L) = u[(1 + r)s + y_1^L]

and

v_1(s, y_1^L) = (1 + r)u'[(1 + r)s + y_1^L] = (1 + r)u'(c_1^L)    (3)
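The derivative formula can be verified numerically by finite differences; a quick check assuming log utility and illustrative values of r, s, and y_1 (all assumptions for illustration):

```python
import math

# Check v_1(s, y1) = (1 + r) * u'((1 + r)s + y1) under u(c) = ln(c).
r, s, y1 = 0.05, 0.1, 1.5
v = lambda s_: math.log((1 + r) * s_ + y1)   # v(s, y1) = u[(1 + r)s + y1]

h = 1e-6
numeric = (v(s + h) - v(s - h)) / (2 * h)    # central finite difference
analytic = (1 + r) / ((1 + r) * s + y1)      # (1 + r) * u'(c1) with u'(c) = 1/c
print(numeric, analytic)
```

The two numbers agree to high precision, confirming the envelope-style derivative.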
Now back up to t = 0, and consider the problem
max_{c_0, s} u(c_0) + β E v(s, y_1) subject to y_0 ≥ c_0 + s

or, even more simply,

max_s u(y_0 − s) + β E v(s, y_1)    (4)

Equation (4) is like the Bellman equation for the consumer's problem:
The problem described on the right-hand side is a static problem: the dynamic programming approach breaks the dynamic program down into a sequence of static
problems.
Note, too, that the problem is an unconstrained optimization problem.
And note that in (4), the "maximize with respect to c_1^H and c_1^L" part of the original
dynamic problem has been moved inside the expectation term, sidestepping the
need to talk explicitly about "contingency plans" for the future.
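Because (4) is a one-dimensional unconstrained problem, it can be solved by direct maximization over s. A sketch assuming log utility and the same illustrative parameters as before (assumptions, not from the original):

```python
import math

y0, y1H, y1L, r, beta, pi = 1.0, 1.5, 0.5, 0.05, 0.95, 0.5

def objective(s):
    """u(y0 - s) + beta * E[v(s, y1)], with v from the t = 1 subproblems."""
    return (math.log(y0 - s)
            + beta * (pi * math.log((1 + r) * s + y1H)
                      + (1 - pi) * math.log((1 + r) * s + y1L)))

# Ternary search: valid because the objective is strictly concave in s.
lo, hi = -y1L / (1 + r) + 1e-9, y0 - 1e-9
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if objective(m1) < objective(m2):
        lo = m1
    else:
        hi = m2
s_star = 0.5 * (lo + hi)
print(s_star)
```

The maximizer s_star satisfies the first-order condition derived next.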
Take the FOC for the value of s that solves the problem in (4):
−u'(y_0 − s) + β E v_1(s, y_1) = 0

and rewrite it using (2) and (3) as

u'(c_0) = β E[(1 + r)u'(c_1)] = β(1 + r)π u'(c_1^H) + β(1 + r)(1 − π) u'(c_1^L)    (5)

Notes:
Together with the binding constraints
y_0 = c_0 + s,

(1 + r)s + y_1^H = c_1^H,

and

(1 + r)s + y_1^L = c_1^L,

(5) gives us a system of 4 equations in the 4 unknowns: c_0, s, c_1^H, and c_1^L.
This system of equations is exactly the same one that we derived earlier with the help
of the Lagrangian and the Kuhn-Tucker theorem.
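That equivalence can be confirmed numerically: solving the first-order condition by root finding (the Kuhn-Tucker route) and maximizing the Bellman objective directly (the dynamic programming route) yield the same savings choice. A sketch assuming log utility and illustrative parameters (both assumptions):

```python
import math

y0, y1H, y1L, r, beta, pi = 1.0, 1.5, 0.5, 0.05, 0.95, 0.5

def foc(s):
    """Common first-order condition u'(c0) - beta*(1+r)*E[u'(c1)] under u = ln."""
    return (1 / (y0 - s)
            - beta * (1 + r) * (pi / ((1 + r) * s + y1H)
                                + (1 - pi) / ((1 + r) * s + y1L)))

# Kuhn-Tucker route: root of the FOC, by bisection.
lo, hi = -y1L / (1 + r) + 1e-9, y0 - 1e-9
while hi - lo > 1e-12:
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if foc(mid) < 0 else (lo, mid)
s_kt = 0.5 * (lo + hi)

# Dynamic programming route: argmax of the Bellman objective, on a fine grid.
def bellman(s):
    return (math.log(y0 - s)
            + beta * (pi * math.log((1 + r) * s + y1H)
                      + (1 - pi) * math.log((1 + r) * s + y1L)))

a, b = -y1L / (1 + r) + 1e-9, y0 - 1e-9
grid = [a + i * (b - a) / 20000 for i in range(20001)]
s_dp = max(grid, key=bellman)

print(s_kt, s_dp)
```

Up to grid resolution, the two answers coincide, as the derivation says they must.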