Larry Karp, Notes for Dynamics, October 2001


IV. Uncertainty

1) Maximizing expected utility with a random time of death.
2) Affecting the probability of catastrophe: avoidable and unavoidable risk.

Begin with a review of hazard functions. J is a random variable, the time at which a discrete change (e.g. catastrophe, death, maybe something less severe) occurs. F(t) is the probability that J ≤ t, i.e. F(t) is the CDF of J. F′(t) is the associated density. h(t) is the associated "hazard rate", i.e., h(t)dt = probability that the event occurs over (t, t + dt) given that the event has not yet occurred:

    h(t) = Pr{disaster next instant | no disaster so far}.

Remember that P(A|B) = P(A ∩ B)/P(B). Think of the event A as being "disaster in the next instant" and the event B as being "no disaster so far". Using Bayes' Rule, the fact that Pr(A and B) = Pr(A),¹ and the definition (the hazard rate is simply this conditional probability), we have

    h(t) = F′(t)/[1 − F(t)].

Simple problem in which the controller does not affect the hazard rate.

Statement of the problem: optimal consumption with a random time of death.
F(t) = Pr{dying by time t}; T is an upper bound on the lifetime; f(t) = F′(t); U(c) = flow of utility. Maximize the expected PDV of utility:

(1)    max ∫₀ᵀ [z(t) + a(t)W(K(t))] F′(t) dt,    z(t) ≡ ∫₀ᵗ e^{−rs} U(c_s) ds

s.t.    K̇ = iK + y(t) − c(t),    K₀ given,

where W(K) is the current value of the bequest, so a(t)W(K) is the present value at time 0 of a bequest made at time t; y is the wage. The quantity z(t) + a(t)W(K(t)) is utility conditional on death at t. Note that ż(t) = e^{−rt}U(c_t). Convert the problem to a familiar form, using integration by parts.

¹ The probability that there is a disaster next instant and the disaster has not yet occurred equals the probability that there is a disaster next instant, because if the disaster has already occurred, there is zero probability that it will occur next instant.
    ∫₀ᵀ z(t) dF(t) = F(t)z(t)|₀ᵀ − ∫₀ᵀ F(t) (dz/dt) dt    (use z(0) = 0, F(T) = 1)

                   = ∫₀ᵀ e^{−rs} U(c_s) ds − ∫₀ᵀ e^{−rs} U(c_s) F(s) ds,

so

(1) = ∫₀ᵀ [e^{−rs}(1 − F(s)) U(c_s) + a(s) W(K_s) F′(s)] ds ≡ ∫₀ᵀ G ds.

You either die and get utility aW, or live and get utility U(c). The introduction of uncertainty is similar to a change in the discount factor. Note that the scrap value function -- the bequest function -- is now part of the integral.

Note: We have converted the original stochastic problem into a problem that is formally equivalent to a deterministic control problem. The solution to this problem gives us a control rule telling us how much to consume at each point in time, conditional upon death not having occurred. Thus, the optimal control is no longer "exactly" open loop. (An open loop control rule is one that depends on time and the initial value of the state variable. Here the control rule depends on time, the initial value of the state variable, and the condition "death has not yet occurred".) For more complicated stochastic control problems, we will want to condition the optimal control on "more things", such as the value of a state variable at the time a decision is made.

Characterize the solution. The Hamiltonian is

    H = G + λ(iK + y − c)

(2)    ∂H/∂c = G_c − λ = e^{−rs}(1 − F) U′(c_s) − λ = 0.

Interpret the last equality in (2): the present value of the probability of not having died by t, times the marginal utility given that you haven't died, should equal the shadow value of the state.

(3)    λ̇ = −∂H/∂K = −iλ − a(s) W′(K_s) F′(s).

Differentiate (2) with respect to time, use (3), and substitute (2) back in to obtain

    ċ = [r + h(t) − i] U′(c)/U″(c) − h(t) e^{rt} a(t) W′(K)/U″(c),

where

    h(t) ≡ F′(t)/[1 − F(t)] = (prob. of dying over the next dt)/(prob. of being alive at t)

is the hazard rate: with uncertainty, we discount by r + h(t). h(t) is the probability of dying during the next dt units of time, given that death has not yet occurred. This is the hazard rate that we discussed before. Remember Bayes' Rule: P(A|B) = P(A ∩ B)/P(B). Interpret the equilibrium condition when W ≡ 0.
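Two of the steps above can be checked numerically: the hazard identity h(t) = F′(t)/[1 − F(t)], and the integration-by-parts identity used to convert (1). The sketch below uses illustrative assumptions that are not part of the model: an exponential CDF for part (a), and a constant utility flow U = 1 with a uniform death time F(t) = t/T for part (b).

```python
import math

# (a) Hazard identity: for the exponential CDF F(t) = 1 - e^{-lam*t},
#     h(t) = F'(t)/(1 - F(t)) should equal the constant lam.
def hazard(F, t, dt=1e-6):
    Fprime = (F(t + dt) - F(t - dt)) / (2.0 * dt)   # finite-difference density
    return Fprime / (1.0 - F(t))

lam = 0.3
F_exp = lambda t: 1.0 - math.exp(-lam * t)
h_at_2 = hazard(F_exp, 2.0)                         # ~ 0.3

# (b) Integration by parts: with z(0) = 0 and F(T) = 1,
#     int_0^T z(t) dF(t) = int_0^T e^{-rt} U(c_t) (1 - F(t)) dt.
#     Toy case: U = 1, so z(t) = (1 - e^{-rt})/r, and F(t) = t/T, so F' = 1/T.
r, T, n = 0.05, 50.0, 100000
dt = T / n

def trap(f):                                        # trapezoid rule on [0, T]
    return sum(0.5 * (f(k * dt) + f((k + 1) * dt)) * dt for k in range(n))

z = lambda t: (1.0 - math.exp(-r * t)) / r
lhs = trap(lambda t: z(t) / T)                      # int z(t) F'(t) dt
rhs = trap(lambda t: math.exp(-r * t) * (1.0 - t / T))
print(h_at_2, lhs, rhs)                             # h ~ 0.3, lhs ~ rhs
```

The toy choices make both sides computable in closed form, but the same check works for any CDF with F(0) = 0 and F(T) = 1.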
Mention the special case U(c) = ln c, so that U′(c)/U″(c) = −c and (with W ≡ 0) ċ/c = i − r − h(t).

Transversality condition: K(T) free ⇒ λ(T) = 0, which is automatically satisfied by (2) since F(T) = 1.

In the previous problem, risk was exogenous. Now consider the case where risk is endogenous. Follow Clark and Reed: a consumption/pollution tradeoff with the threat of catastrophe. Here the controller affects the probability of catastrophe.

Description of the problem:
P = pollution stock; g(P) is decay; c = consumption; z(c) = contribution of consumption to pollution.

(1)    Ṗ = z(c) − g(P)

U(c, P) = utility ≡ V(z, P).

(As an exercise, set up the Hamiltonian for the deterministic problem and find the conditions that determine the steady state. Later we will compare these to the conditions that determine the steady state under risk.)

h(t) = h(P(t)) is the hazard function. Disaster means no more utility. J is a random variable, the time of the disaster. Define F(t) ≡ prob. of disaster by t, so that

    F′(t)/[1 − F(t)] = h(t)    (definition of the hazard).

Utility given that the disaster occurs at J = τ is ∫₀^τ e^{−rt} V(z_t, P_t) dt, so expected utility is

    ∫₀^∞ [∫₀^τ e^{−rt} V(z_t, P_t) dt] dF(τ).

Integrate by parts as before, writing ζ(τ) ≡ ∫₀^τ e^{−rt} V(z_t, P_t) dt (the symbol ζ avoids a clash with the emissions z):

    ∫₀^∞ ζ(τ) dF(τ) = F(τ)ζ(τ)|₀^∞ − ∫₀^∞ F(t) e^{−rt} V(z_t, P_t) dt.

Now use F(0) = 0, F(∞) = 1 to rewrite the maximand as

    ∫₀^∞ [1 − F(t)] e^{−rt} V(z_t, P_t) dt.

The last integral is the expression we want to maximize. Make a change of variables to simplify the problem:

    S(t) = 1 − F(t) = prob. of being alive at t (the "survival function")

(2)    Ṡ(t) = −h(t)S(t),    y ≡ −ln S (so e^{−y} = S)

(3)    ẏ = h(P(t)).

Now replace 1 − F = S by e^{−y}, so the maximand becomes

    ∫₀^∞ e^{−(rt + y(t))} V(z_t, P_t) dt.

The advantage of this formulation is that it is now transparent that this kind of uncertainty changes the problem by changing the effective discount rate. The optimizer discounts the future more heavily because she may not be around to enjoy it. Recall that h(t) = h(P(t)). The hazard function depends on the current stock. The decision-maker controls the evolution of the stock.
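The "effective discount rate" point can be illustrated numerically in the special case of a constant hazard (a hypothetical simplification: h ≡ λ, constant flow payoff V). Survival-weighted discounting, ∫₀^∞ e^{−rt} S(t) V dt, should then equal V/(r + λ):

```python
import math

# Survival-weighted discounting with a constant hazard lam (an assumed
# special case): S(t) = e^{-lam*t}, so the expected PDV of a constant
# flow V is int_0^inf e^{-rt} S(t) V dt = V/(r + lam).
r, lam, V = 0.04, 0.1, 1.0
dt, T = 0.001, 400.0              # long horizon approximates infinity
total = 0.0
for k in range(int(T / dt)):      # trapezoid rule
    t0, t1 = k * dt, (k + 1) * dt
    f0 = math.exp(-r * t0) * math.exp(-lam * t0) * V   # discount x survival
    f1 = math.exp(-r * t1) * math.exp(-lam * t1) * V
    total += 0.5 * dt * (f0 + f1)
print(total, V / (r + lam))       # both ~ 7.1429
```

With a state-dependent hazard h(P), λ is replaced by a time path h(P(t)), but the structure is the same: risk enters only through the discounting.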
The control problem is

    max_z ∫₀^∞ e^{−(rt + y)} V(z, P) dt

(1)    Ṗ = z − g(P)
(3)    ẏ = h(P)
       P(0) = P₀,    y(0) = 0,

and the current value Hamiltonian is

(4)    H_cv = e^{−y} V(P, z) + λ₁[z − g(P)] + λ₂ h(P),

where λ₁ and λ₂ are the costate variables associated with the states P and y. Note that y is treated as a state variable. We have simplified the problem by "getting rid of" the uncertainty, but at the cost of adding a new state variable.

The system (1) and (3) is recursive. This makes it a bit easier to characterize the solution. Note that ẏ will never equal 0, so we will not get the "usual sort" of steady state -- i.e., one in which all state variables stop changing. However, there is a limiting value of P, i.e., a steady state at which the state P stops changing: Ṗ = 0. Ṗ = 0 defines a conditional steady state, i.e. one which is conditional upon the disaster not having occurred. Whenever I speak of a steady state for the problem with risk, I mean a conditional steady state.

Necessary conditions:

(5)    e^{−y} V_z + λ₁ = 0
(6)    λ̇₁ = rλ₁ − e^{−y} V_P + λ₁ g′(P) − λ₂ h′(P)
(7)    λ̇₂ = rλ₂ + e^{−y} V.

From now on we will be interested only in the steady state analysis, i.e., in the limiting value of P, conditional upon the disaster not occurring. As we noted above, the state y grows without bound, so we cannot speak of its limiting value, nor can we obtain the steady state simply by setting the time derivatives in equations 1, 3, 6 and 7 equal to 0. However, we can proceed as follows. Totally differentiate (5) with respect to time, using the costate equations 6 and 7, and then use 5 to eliminate λ₁. Carrying out this algebra gives an equation for ż of the form ż = a(z, P) + b(z, P, D₂; h), where

(8)    D₂ ≡ e^y λ₂

and the functions a( ) and b( ) are given by

(9)    a ≡ {(r + g′)V_z + V_P − V_zP (z − g(P))}/V_zz
(10)   b ≡ {h(P)V_z + h′ D₂}/V_zz.

I have decomposed ż into two functions: a( ) is the time derivative of z (under optimal control) when there is no risk (h ≡ 0); the function b( ) collects all the terms that involve h, i.e., those terms that involve risk.
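Before turning to the comparative statics, the recursive structure of (1) and (3) can be made concrete with a toy simulation. The functional forms g(P) = δP and h(P) = αP, and the constant emission rate z, are hypothetical choices for illustration only: P approaches its conditional steady state z/δ, while y accumulates h(P) without bound, so the survival probability e^{−y} keeps falling.

```python
import math

# Euler integration of the recursive system:
#   P' = z - delta*P   (constant z; hypothetical decay g(P) = delta*P)
#   y' = alpha*P       (hypothetical hazard h(P) = alpha*P)
delta, alpha, z = 0.1, 0.02, 1.0
P, y, dt = 0.0, 0.0, 0.01
for _ in range(20000):              # integrate to t = 200
    P += dt * (z - delta * P)
    y += dt * alpha * P
survival = math.exp(-y)             # S(t) = e^{-y(t)}
print(P, survival)                  # P -> z/delta = 10; survival in (0, 1)
```

This is the sense in which the steady state is "conditional": P settles down, but the probability of still being alive to enjoy it never stops shrinking.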
I will use this decomposition for comparative statics of the steady state, below. A (conditional) steady state requires (using equation 1) that emissions = decay, or:

(11)    z = g(P).

P is constant if and only if z is constant. In order for z to be constant, we require a( ) + b( ) = 0. However, if both z and P are constant, then D₂ must be constant. Using the definition of D₂ and equation (7), we have Ḋ₂ = [r + h(P)]D₂ + V, so Ḋ₂ = 0 if and only if

(12)    D₂ = −V/[r + h(P)].

Using (12) to eliminate D₂ in the function b( ), we can write ż = 0 iff

(13)    [r + g′(P) + h(P)] V_z + V_P = h′(P) V/(r + h).

Equations (11)-(13) are three equations in three unknowns, z, P and D₂. These three equations give the steady state values. Note that equations 11 and 13 are independent of 12.

***************** Digression. Before studying the effect of uncertainty on the steady state, I want to explain why it makes sense that D₂ is constant in the steady state, whereas, as we noted, y and λ₂ are not constant. Define W(t) = W(P(t)) as the current value of the expectation at time t of the value of the program, given the state P(t) and optimal behavior:

    E ∫_t^∞ e^{−r(s−t)} V(z, P) ds ≡ W_t

(where it is understood, without additional notation, that I am evaluating the payoff under optimal behavior). Let P* and z* be the (conditional) steady state, i.e. the values of P and z at which the optimal trajectory remains constant (conditional on continued survival). It should be clear that when P = P*, the value W(P*) is constant (conditional on the disaster not occurring).

Define J(P(0), y(0), t) as the expectation, at time 0, of the current value at time t of the program. (I am at time 0 with pollution stock P(0), and I ask: what is my expectation today of the current value at time t of the continuation payoff at time t? I am discounting the stream of returns after time t only back to time t, not to time 0.) Since y(0) = 0 in every problem, I can suppress this argument. (The value of P at time 0, on the other hand, could be different in different problems.)
With this definition, J(P, t) = S(t)W(t). In words, J(P, t) is the expectation at time t of the future payoff, conditional on survival until time t, times the probability of survival until time t. (I'm taking the expectation, with respect to a conditioning event -- survival -- of the conditional expectation of the payoff given survival.) Now recall the interpretation of the current shadow value λ₂:

(14)    λ₂ = ∂[S(t)W(t)]/∂y = −e^{−y(t)} W(t)

(I have used the definition e^{−y} = S). Rearranging equation (14), we have −W = e^y λ₂, which we previously defined (equation 8) as D₂. We also noted that in the (conditional) steady state, W must be constant. Therefore, in the steady state D₂ must be constant. This is what we wanted to show. (Phew!) (As a consistency check, combining D₂ = −W with (12) gives W = V/[r + h(P)] in the steady state: the value of receiving the constant flow V forever, discounted at the risk-adjusted rate r + h.)

End digression. ****************************

How does the inclusion of uncertainty affect the steady state? Does the decision-maker become more or less cautious, i.e., does uncertainty cause P* to increase or decrease? Note that if h ≡ 0, (13) ⇒ (r + g′)V_z + V_P = 0. This equation says that in the absence of risk it is optimal to strike the following balance: an extra unit of consumption today increases the current flow of utility by V_z but increases pollution costs in the future by V_P. These future costs need to be discounted by the interest rate r and the decay rate (of pollution) g′. A larger value of r means that you value the future less, and a larger value of g′ means that the pollution decays more rapidly, and therefore does less damage in the future. Thus, larger values of either r or g′ mean that the steady state stock will be larger. We obtained this result in Notes #3.

We can analyze the effect of the inclusion of uncertainty by examining the comparative statics of the deterministic steady state, using the stability condition. Consider the deterministic system. As we saw, this is given by Ṗ = z − g(P), ż = a( ), where the function a( ) is defined in (9). The linearization of this system is

(15)    [Ṗ; ż] = A [P − P*; z − z*],    A ≡ [−g′  1; a_P  a_z].
We know (see Notes #3) that if we have a steady state in the deterministic problem, it must be a saddle point, and therefore the determinant of A must be negative. Here we are not demonstrating stability; we are assuming it. Now we will use this assumption for the comparative statics exercise, much as we use the assumption of negative definiteness in a static optimization problem.

Consider the system of equations z − g(P) = 0 and a(z, P) + b = 0. When b = 0 we have the equations for the deterministic steady state, and when b is given by equation (10) we have the equations for the steady state under risk. How does the value of b affect the steady state? Totally differentiate the system, treating b as a constant parameter, and use Cramer's rule and the fact that |A| < 0 to obtain

(16)    dP/db = 1/|A| < 0.

Equation (16) says that an increase in b decreases the steady state. Thus, if b > 0 at the steady state in the risky situation, the (conditional) steady state under risk is lower than the deterministic steady state. Using equations (10) and (12), and the fact that V_zz < 0, we see that b > 0 iff

(17)    V_z/V < h′/[(r + h)h],

where (17) is evaluated at the steady state under risk. We see that (17) will be satisfied if h′ is sufficiently large or if h is sufficiently small (and h′ remains strictly positive). The inequality in (17) is reversed if h′ is small and/or h is large. To summarize, including risk has two opposing effects on the steady state:

(a) It increases steady state pollution and consumption if h is independent of P. (Eat, drink and be merry, for tomorrow we die.)
(b) It decreases steady state pollution and consumption if h′(P) is large. (Ascetics live longer.)

The distinction between avoidable and unavoidable risk is crucial.

**** Tsur and ????, 1996 JEDC, consider a different type of avoidable risk. In their problem, a catastrophe happens when the state -- e.g. the level of greenhouse gasses -- reaches (or is above) a critical level. The optimizer has a probability distribution over this level, but does not know its value.
Pollution also causes a flow of disutility, so in the absence of the threat of catastrophe it would be optimal to move the state to a finite level. With uncertainty, optimal behavior depends on the initial condition. If the initial condition is low, you want to keep the state low (relative to the certainty case) to reduce risk. But if the initial condition is very high, you know that the critical level is higher than the level you would want to take the state to (absent risk), so the uncertainty doesn't matter.
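Returning to the comparative statics result (16), dP/db = 1/|A| < 0: it can be checked with explicit functional forms. The forms below, V(z, P) = ln z − βP and g(P) = δP, are hypothetical choices made only so that a(z, P) = βz² − (r + δ)z and the perturbed steady state solves a quadratic. Treating b as a parameter, the finite-difference derivative of the steady state P with respect to b should match 1/|A|:

```python
import math

# Hypothetical forms: V = ln z - beta*P, g(P) = delta*P, so V_z = 1/z,
# V_P = -beta, V_zP = 0, V_zz = -1/z**2, and (9) gives
#   a(z, P) = beta*z**2 - (r + delta)*z.
# The steady state solves z = delta*P (eq. 11) and a(z, P) + b = 0.
r, delta, beta = 0.04, 0.1, 0.5

def steady_P(b):
    # saddle-stable (larger) root of beta*z^2 - (r+delta)*z + b = 0
    z = ((r + delta) + math.sqrt((r + delta) ** 2 - 4.0 * beta * b)) / (2.0 * beta)
    return z / delta                    # P from (11): z = g(P) = delta*P

db = 1e-6
dP_db = (steady_P(db) - steady_P(0.0)) / db
z0 = (r + delta) / beta                 # steady-state z at b = 0
# |A| with a_P = 0 and a_z = 2*beta*z0 - (r + delta); negative at the stable root
detA = -delta * (2.0 * beta * z0 - (r + delta))
print(dP_db, 1.0 / detA)                # both ~ -71.4: dP/db = 1/|A| < 0
```

The larger root is the relevant one because it is the root at which |A| < 0, i.e. the saddle-stable steady state assumed in the text.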