Larry Karp, Notes for Dynamics, September 2001

III. The Maximum Principle

1) Necessary conditions using the Maximum Principle.
2) Relation between COV and Maximum Principle approaches.
3) Various terminal conditions; sufficiency.
4) An example with two state variables.
5) Current value Hamiltonian.
6) The recipe for analyzing one-state-variable autonomous problems, with an example.
7) Comparative statics of steady state; comparative dynamics.

1. Problem setup and derivation of necessary conditions

$x$ = state, $u$ = control:

(1)  Maximize $\int_{t_0}^{T} f(t,x,u)\,dt$

(2)  s.t. $\dot x = g(x,u,t)$ (state eqn), $x_0$ fixed, $x_T$ free, $t_0, T$ fixed.

If (2) is satisfied, the payoff in (1) must equal

(3)  $\int_{t_0}^{T} \left[ f(t,x,u) + \lambda\left( g(\cdot) - \dot x \right) \right] dt$.

If $\lambda$ is continuously differentiable, we can integrate $\int_{t_0}^{T} \lambda \dot x\,dt$ by parts and substitute into (3):

(4)  $\int_{t_0}^{T} \left[ f(\cdot) + \lambda g(\cdot) + \dot\lambda x \right] dt + \lambda(t_0)x(t_0) - \lambda(T)x(T)$.

We want to choose a control function $u(t)$ to maximize (1). Suppose $u^*(t)$ is the optimal control function, and $x^*(t)$ is the solution to (2) with $u = u^*(t)$. Define $u = u^*(t) + a h(t)$, where $a$ is a parameter and $h(t)$ is an arbitrary continuous fixed function. Define $y(t,a)$ as the solution to (2) when $u = u^*(t) + a h(t)$, so $y(t,0) = x^*(t)$ and $y(t_0,a) = x_0$. If $x^*$ is optimal, $a = 0$ must maximize

$J(a) = \int_{t_0}^{T} \left[ f\left(t, y(t,a), u^*(t) + a h(t)\right) + \lambda g(\cdot) + \dot\lambda\, y(t,a) \right] dt + \lambda(t_0)\,y(t_0) - \lambda(T)\,y(T,a)$  (using (4)).

(5)  $J'(0) = \int_{t_0}^{T} \left[ \left(f_x + \lambda g_x + \dot\lambda\right)\frac{\partial y}{\partial a} + \left(f_u + \lambda g_u\right)h(t) \right] dt \;-\; \lambda(T)\,\frac{\partial y(T,0)}{\partial a} \;\stackrel{\text{set}}{=}\; 0$.

Choose $\lambda$ to satisfy $\dot\lambda = -(\lambda g_x + f_x)$, so the first term vanishes, and choose $\lambda(T) = 0$, so the last term vanishes. So (5) becomes

$J'(0) = \int_{t_0}^{T} \left(f_u + \lambda g_u\right) h(t)\,dt \;\stackrel{\text{set}}{=}\; 0$.

This holds for arbitrary $h(t)$, so it must hold for $h(t) = f_u + \lambda g_u$, in which case

$J'(0) = \int_{t_0}^{T} \left(f_u + \lambda g_u\right)^2 dt = 0 \;\Rightarrow\; f_u + \lambda g_u \equiv 0$.

Summarize: define $H = f(\cdot) + \lambda g(\cdot)$. The necessary conditions are

$\dot x = g(\cdot)$, $x_0$ given  (state eqn)
$\dot\lambda = -\dfrac{\partial H}{\partial x} = -(f_x + \lambda g_x)$  (costate eqn),  $\lambda(T) = 0$  (transversality condition)
$\dfrac{\partial H}{\partial u} = f_u + \lambda g_u = 0$  (at interior optimum)

Discuss this as a TPBVP (two-point boundary value problem).
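The TPBVP structure can be made concrete with a toy example (my own illustration, not from the notes): maximize $\int_0^1 -(x^2+u^2)\,dt$ subject to $\dot x = u$, $x(0)=1$, $x(1)$ free. The necessary conditions give $u = \lambda/2$, $\dot\lambda = 2x$, with the split boundary conditions $x(0)=1$, $\lambda(1)=0$; a standard way to solve such a TPBVP numerically is to "shoot" on the unknown initial costate. This is only a sketch under the stated assumptions:

```python
import math

# Toy problem (my example): max ∫_0^1 -(x^2 + u^2) dt, s.t. x' = u, x(0) = 1, x(1) free.
# H = -(x^2 + u^2) + lam*u:
#   H_u = -2u + lam = 0   =>  u = lam/2
#   x'   = lam/2
#   lam' = -H_x = 2x,  with lam(1) = 0  (transversality condition)
# Shooting method: guess lam(0), integrate forward, adjust until lam(1) = 0.

def terminal_costate(lam0, n=1000):
    """Integrate the state/costate system forward with RK4; return lam(1)."""
    h = 1.0 / n
    x, lam = 1.0, lam0
    rhs = lambda x, lam: (lam / 2.0, 2.0 * x)
    for _ in range(n):
        a1, b1 = rhs(x, lam)
        a2, b2 = rhs(x + h/2*a1, lam + h/2*b1)
        a3, b3 = rhs(x + h/2*a2, lam + h/2*b2)
        a4, b4 = rhs(x + h*a3, lam + h*b3)
        x += h/6*(a1 + 2*a2 + 2*a3 + a4)
        lam += h/6*(b1 + 2*b2 + 2*b3 + b4)
    return lam

# lam(1) is increasing in lam(0) for this linear system, so bisect
lo, hi = -5.0, 5.0
for _ in range(60):
    mid = (lo + hi) / 2
    if terminal_costate(mid) > 0:
        hi = mid
    else:
        lo = mid
lam0 = (lo + hi) / 2
print(lam0)   # analytic answer: -2*tanh(1) ≈ -1.5232
```

For this linear-quadratic example the answer can be checked analytically ($x'' = x$ gives $\lambda(0) = -2\tanh 1$); in nonlinear problems the same shooting logic applies but the map from $\lambda(0)$ to $\lambda(T)$ need not be monotone.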
2. Relation between the Maximum Principle and COV

Show that the necessary conditions imply the Euler equation and the transversality condition, for the problem

$\max \int_{t_0}^{T} f\left(t, x(t), u(t)\right) dt$  s.t. $\dot x(t) = u(t)$, $x(t_0)$ given, $x(T)$ free.

Recall:
E.E.: $f_x - \dfrac{d}{dt} f_{\dot x} = 0$
Transversality condition: $f_{\dot x}(T) = 0$

Hamiltonian: $H = f(t,x,u) + \lambda u$.

$\dfrac{\partial H}{\partial u} = f_u + \lambda = 0$  (FOC)
$\dot\lambda = -f_x$  (costate equation)
$\lambda(T) = 0$  (transversality condition, TC)

Differentiating the FOC and using the costate equation ⇒ E.E.: $f_x - \frac{d}{dt} f_u = 0$ (and $f_u = f_{\dot x}$ since $u = \dot x$). TC and FOC imply $0 = \lambda(T) = -f_u(T)$ ⇒ $f_{\dot x}(T) = 0$. $H_{uu} \le 0$ (Legendre condition) ⇒ $f_{\dot x \dot x} \le 0$.

3. Sufficiency, and different terminal conditions (Chapters 7 and 15, K&S)

Sufficiency: Suppose $f, g$ are concave in $x, u$. Let $x^*(t), \lambda^*(t)$ satisfy the necessary conditions and (for cases where $g$ is nonlinear in either $x$ or $u$) $\lambda^*(t) \ge 0$. Then the necessary conditions are sufficient. (If $g$ is linear in $x$ and $u$, no sign restriction on $\lambda^*(t)$ is needed.)

The assumption on the functions $f$ and $g$ is sometimes not satisfied in economic problems. A less restrictive sufficiency theorem uses the maximized Hamiltonian -- the Hamiltonian obtained by replacing the control $u$ with the function that maximizes the Hamiltonian, i.e. that satisfies the necessary condition. Typically this means replacing the control $u$ with a function of the state, the costate variable, and time. If this maximized Hamiltonian is concave in the state, the necessary conditions are sufficient for maximization. Although less restrictive, the assumption that the maximized Hamiltonian is concave in the state may not be satisfied in reasonable problems. (Example: the social planner's resource extraction problem with linear demand and costs bilinear in stock and rate of extraction.)

Another approach to verifying that the necessary conditions give the correct solution is to show that an optimal solution exists, and then to show that the solution to the necessary conditions is unique.

**********************

Interpretation of $\lambda^*(t)$: Let $V(x_0, t_0)$ be the maximized value of the objective, given the initial condition.
If $V_x(\cdot)$ exists, $\lambda^*(t_0) = V_x(x_0, t_0)$: the shadow value of the state. This relation holds at every point in time, not just at $t_0$.

Implications of this interpretation for other terminal conditions:

(i) $x(T)$ fixed. There is no reason to suppose $V_x(x(T), T) = 0$, so we have only $\lambda(T) \ge 0$ (when $g$ is concave).

(ii) Scrap function $\phi(x(T), T)$:

$V(x_0, t_0) = \max \int_{t_0}^{T} f(\cdot)\,dt + \phi(x_T, T)$  s.t. $\dot x = g(\cdot)$.

At $T$, $V(x(T), T) \equiv \phi(x_T, T)$ ⇒ $\lambda(T) = V_x(x_T, T) = \phi_x(\cdot)$.

Interpretation of the Hamiltonian: If the value function $V(x,t)$ is differentiable wrt $t$, then $H = -V_t$. Consider the optimization problem over $[t, T]$. Suppose that you increase the time you begin from $t$ to $t + dt$. Using Leibniz's rule, this increase results in a loss to the value of the program of $f(\cdot)$ (the flow payoff), so the direct contribution (to the value of the program) of the increase in $t$ is $-f$. However, over the interval $dt$, $x$ changes by the amount $g\,dt$. Each unit of $x$ is "worth" $\lambda$, so the value of this change is $\lambda g\,dt$. Thus, the total change in the value of the program per unit of time is $J_t = -(f + \lambda g) = -H$.

Explain how this interpretation is related to the transversality condition. If $T$ is a choice variable (and there is no scrap function), you choose it so that $J_t(T) = 0$ ⇒ $H(T) = 0$; this equation is the transversality condition when $T$ is free and there is no scrap function.

Give an example of a scrap function. Suppose that once you have decided to stop harvesting the fish -- i.e. you have reached time $T$ -- you can sell the pond for recreational uses. The present value of the price that you get for this sale depends on $T$ and possibly on the stock $x$. (For example, $x$ may determine the quality of the pond, not just the stock of fish. In other words, the meaning of $x$ depends on the problem -- it is not necessarily a stock of fish.) If you have a scrap function $\phi(x,T)$ and $T$ is a choice variable, we have (at the optimum) $J_t(T) = \phi_T(x,T)$ ⇒ $H(T) + \phi_T = 0$, which is the transversality condition when $T$ is free and there is a scrap function.
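The claim $H = -V_t$ can also be seen from the dynamic programming (Bellman) principle; a sketch, stated informally under the usual differentiability assumptions:

```latex
% Bellman's principle over a short interval dt:
V(x,t) = \max_u \Big\{ f(t,x,u)\,dt + V\big(x + g(x,u,t)\,dt,\; t + dt\big) \Big\}
% Expand V to first order in dt and divide by dt:
0 = V_t + \max_u \big\{ f(t,x,u) + V_x\, g(x,u,t) \big\}
% With \lambda = V_x, the maximized Hamiltonian satisfies
H^* \equiv \max_u \big( f + \lambda g \big) = -V_t .
```

The same expansion delivers both interpretations at once: $\lambda = V_x$ (shadow value of the state) and $H = -V_t$ (marginal cost of delaying the start of the program).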
[Note: If you have more than one state variable, each requires a costate variable. The next problem in the notes has two state variables.]

4. Two-state-variable, nonrenewable resource problem

(I never do this problem in lectures, but I think that it is worth looking at carefully, as it shows you how to use the necessary conditions, and it illustrates the kinds of results that you can obtain.)

These notes are taken from Kamien and Schwartz, beginning pg 150. I've added a few details, but dropped some others, so you may want to go through the example with their book open.

$c$ = consumption; $u(c) = \ln c$ = utility
$x$ = natural resource stock, $\dot x = -R$, $R$ = rate of extraction
$K$ = capital stock; output $Q = A K^{1-a} R^{a}$, $\dot K = Q - c$, $0 < a < 1$
$y = R/K$ = resource-to-capital ratio (so $Q = A K^{1-a} R^a = A K y^a$) ⇒

(1)  $\dot x = -Ky$,  $x_0$ given,  $x(t) \ge 0$
(2)  $\dot K = A K y^a - c$,  $K_0$ given,  $K(t) \ge 0$

The problem is:

$\max_{c,y} \int_0^T \ln c(t)\,dt$  s.t. (1), (2).

No discounting in this problem, for simplicity. Note that $y, c > 0$, so we don't have to worry about nonnegativity constraints (as $c \to 0$, $U' \to \infty$ and $U \to -\infty$; as $y \to 0$, the marginal product of the resource $\to \infty$).

Let $\lambda_1, \lambda_2$ be the costates associated with the states $x, K$:

$H = \ln c - \lambda_1 K y + \lambda_2 \left(A K y^a - c\right)$

$\lambda_1$ = shadow value of the resource; $\lambda_2$ = shadow value of capital.

Necessary conditions:

(3)  $H_c = \dfrac{1}{c} - \lambda_2 = 0$  (marginal utility of consumption = shadow value of capital)
(4)  $H_y = -\lambda_1 K + a \lambda_2 A K y^{a-1} = 0$  (interpreted below)
(5)  $\dot\lambda_1 = -H_x = 0$ ⇒ $\lambda_1$ constant
(6)  $\dot\lambda_2 = -H_K = \lambda_1 y - \lambda_2 A y^a$

We will show that $y^{-a}$ grows linearly over time, and that consumption $c$ grows faster or slower than linearly as $a < 1/2$ (resource not very important) or $a > 1/2$ (resource very important).

When $K > 0$, (4) ⇒

(7)  $\lambda_1 = a \lambda_2 A y^{a-1}$  (shadow value of resource = shadow value of capital × marginal product of resource).

Differentiate both sides of (7) and use (5) (skipping algebra) to obtain

$0 = a \dot\lambda_2 A y^{a-1} + a \lambda_2 A (a-1) y^{a-2} \dot y$.
(8)  $\dfrac{\dot\lambda_2}{\lambda_2} = (1-a)\dfrac{\dot y}{y}$.

Use (7) to eliminate $\lambda_1$ in (6): $\dot\lambda_2 = a\lambda_2 A y^{a-1}\cdot y - \lambda_2 A y^a$, i.e.

(9)  $\dot\lambda_2 = -(1-a)\,\lambda_2 A y^{a}$.

(8) & (9) ⇒ $\dot y = -A y^{1+a}$ (skipping algebra), i.e. $y^{-(1+a)}\,dy = -A\,dt$. Integrate:

(10)  $y^{-a} = A(at + k_1)$,  $k_1$ a constant of integration, i.e. $y = \left[A(at+k_1)\right]^{-1/a}$

(10')  $A y^a = \dfrac{1}{at + k_1}$.

(10') ⇒ $y^a$ is a decreasing function of time, so (recall $y = R/K$) the resource-to-capital ratio is falling over time; output per unit of capital, $Q/K = A y^a = (at+k_1)^{-1}$, is also falling, while $y^{-a} = A(at+k_1)$ rises linearly over time at rate $aA$.

Find an expression for $c(t)$. Sub (10') into (9), so $\dot\lambda_2/\lambda_2 = -(1-a)/(at+k_1)$, and integrate:

$\ln \lambda_2 = -\dfrac{1-a}{a}\ln(at + k_1) + k_2$,  $k_2$ a constant

(11)  $\lambda_2(t) = k_3\,(at+k_1)^{-\frac{1-a}{a}}$,  $k_3 = e^{k_2}$.

(11) & (3) ⇒

(12)  $c = \dfrac{1}{\lambda_2} = \dfrac{1}{k_3}(at+k_1)^{\frac{1-a}{a}}$.

Then

$\dot c = \dfrac{1-a}{k_3}(at+k_1)^{\frac{1-2a}{a}} > 0$,  $\ddot c = \dfrac{(1-a)(1-2a)}{k_3}(at+k_1)^{\frac{1-3a}{a}} \gtrless 0$ as $a \lessgtr 1/2$.

(If $a$ is small, the resource is relatively unimportant, and consumption grows faster than linearly.)

To complete the solution we need to find $k_1$ and $k_3$ (these appear in eqn (12)).

Show $x(T) = 0$: Suppose not. Then $\lambda_1(T) = 0$ (because the constraint $x(T) \ge 0$ is not binding, by hypothesis); but $\lambda_1$ is constant by (5) ⇒ $\lambda_1(t) \equiv 0$ ⇒ $\lambda_2(t) \equiv 0$ (by (7)) ⇒ $c = \infty$ (by (3)), a contradiction. Conclude $x(T) = 0$.

Show $K(T) = 0$ (at the end of the period, producing almost nothing, eating capital): Suppose $K(T) > 0$. Then $\lambda_2(T) = 0$ ⇒ (by (11)) $0 = k_3(aT+k_1)^{-\frac{1-a}{a}}$ ⇒ $k_1 = \infty$ ⇒ $\lambda_2(t) \equiv 0$ ⇒ by (3) $c(t) \equiv \infty$, infeasible. Conclude $K(T) = 0$.

Now we have the boundary conditions $x(T) = K(T) = 0$, which we can use together with (1), (2), and other information about the path to complete the solution. Use (10') and (12) in (2) ⇒

(13)  $\dot K = (at+k_1)^{-1} K - \dfrac{1}{k_3}(at+k_1)^{\frac{1-a}{a}}$.

We will use the integrating factor $(at+k_1)^{-1/a}$, for which

$\dfrac{d}{dt}(at+k_1)^{-1/a} = -(at+k_1)^{-\frac{1+a}{a}}$,

so that

$\dfrac{d}{dt}\left[(at+k_1)^{-1/a} K\right] = (at+k_1)^{-1/a}\left[\dot K - (at+k_1)^{-1}K\right]$.

Rewrite (13) as

(14)  $\dfrac{d}{dt}\left[(at+k_1)^{-1/a} K\right] = -\dfrac{1}{k_3}(at+k_1)^{\frac{1-2a}{a}}$.

Integrate both sides of (14) from 0 to $T$, using $K(T) = 0$:

(15)  $K(T)(aT+k_1)^{-1/a} - K(0)\,k_1^{-1/a} = -K(0)\,k_1^{-1/a} = -\dfrac{1}{k_3}\int_0^T (at+k_1)^{\frac{1-2a}{a}}\,dt$.

This is one equation in the two unknowns $k_1$ and $k_3$.
Use (15) to write $k_1$ as a function of $k_3$ and $T$: $k_1(T, k_3)$. Substitute $k_1(T, k_3)$ and $y(t) = \left[A(at+k_1)\right]^{-1/a}$ into (1) and integrate, using $x(T) = 0$, to find $k_3$.

5. Current value Hamiltonians and autonomous control problems

A problem is autonomous if time appears only via discounting. Some definitions of autonomous problems include the requirement that the horizon is infinite. Most autonomous problems that we will discuss have an infinite horizon. If time does not appear except via discounting, and if the horizon is infinite, the optimal control depends only on the state variable, not on calendar time. However, if time does not appear except via discounting, and if the horizon is finite, the optimal control will (typically) depend on time, because as time marches along, you get closer to the end of the problem -- that is, something substantive changes exogenously with the passage of time.

The following transformation provides a way of removing the time dependence that arises from discounting. This transformation is sometimes useful even if the problem is not autonomous, that is, even if there is another source of time dependence. Note that the following notes do not assume that the functions $f$ and $g$ are independent of time.

Current and present value Hamiltonians: $\lambda(t)$ is the present value, at time 0, of the shadow value at $t$. The current value at $t$ of the shadow value at $t$ is

(1)  $m(t) \equiv e^{rt}\lambda(t)$.

Present value Hamiltonian:

$H^{P.V.} = e^{-rt} f(t,x,u) + \lambda(t)\,g(t,x,u)$.

Current value Hamiltonian:

(2)  $H^{C.V.} = f(t,x,u) + m(t)\,g(t,x,u) = e^{rt} H^{P.V.}$.

(1) ⇒

(3)  $\dot m = e^{rt}\dot\lambda + r e^{rt}\lambda$.

Recall

(4)  $\dot\lambda = -H_x^{P.V.} = -e^{-rt}H_x^{C.V.}$  (using (2)).

Use (2) and (4) in (3): $\dot m = -e^{rt}e^{-rt}H_x^{C.V.} + r m(t)$, i.e.

(5)  $\dot m = -H_x^{C.V.} + r m$.

Also, $\max_u H^{P.V.}$ is equivalent to $\max_u H^{C.V.}$.

This transformation, from present value to current value Hamiltonian, is especially useful for infinite horizon autonomous problems. In such a problem, the transversality condition is replaced by the requirement that variables reach a steady state.
With discounting, the present value of the costate variable (typically) approaches 0 (since the discount factor goes to 0), but the current value of the costate variable approaches a nonzero value. The next example illustrates this.

6. Example of analysis of an infinite horizon problem (similar to K&S pp. 166)

Example ($w(u)$ = benefit of consumption, $D(x)$ = pollution damage):

$\max \int_0^\infty e^{-rt}\left[ w(u) - D(x) \right] dt$

(1)  s.t. $\dot x = u - bx$

($w$ concave, $D$ convex). $H$ is the current value Hamiltonian, $\lambda$ is the current value costate variable:

$H = w(u) - D(x) + \lambda(u - bx)$

(2)  $\dfrac{\partial H}{\partial u} = w'(u) + \lambda = 0$  (interpret)

(3)  $\dot\lambda = D'(x) + (b + r)\lambda$

To analyze an autonomous control problem with a single state variable (infinite horizon), and no binding control or state constraints (e.g., non-negativity constraints), you follow this recipe:

a) Write down the Hamiltonian and the necessary conditions (2) and (3), assuming an interior solution.

b) Decide whether you want to use:

Option (i): Analysis in state-control space. Use (2) to write $\lambda$ as a function of $u$ and $x$, i.e. $\lambda = \lambda^*(u,x)$. Differentiate (2) wrt time, sub (3) into the result, and use $\lambda = \lambda^*(u,x)$ to eliminate $\lambda$. Now you have a system of two differential equations in $x$ and $u$.

Option (ii): Analysis in state-costate space. Use (2) to write $u$ as a function of $\lambda$ and $x$, i.e. $u = u^*(\lambda,x)$, and substitute this into the state equation (1) and the costate equation (3) to eliminate $u$. Now you have a system of two differential equations in $x$ and $\lambda$.

c) Whichever of the two options you chose, you now have a system of two autonomous differential equations. Find the steady states.

d) Use the methods described in the first set of notes to graph the phase plane: i) find and graph the isoclines; ii) draw in the directional arrows.

e) Linearize the nonlinear system at the steady state(s) and determine the stability of each steady state.

f) Do comparative statics on the steady state and comparative dynamics of the trajectories to the steady states.

I will illustrate this recipe using Option (i), analysis in state-control space.
(K&S analyze the same problem in state-costate space. You should study the two approaches in detail so that you understand that they are equivalent.)

We have already done step (a).

Step b: Differentiate (2) wrt time and use (3) to obtain an ODE in $u$:

$w''(u)\,\dot u + D'(x) - (b + r)\,w'(u) = 0$, i.e.

(4)  $\dot u = \dfrac{(b+r)\,w'(u) - D'(x)}{w''(u)} \equiv g(u,x)$.

Steps c and d: Analyze (1) and (4) using the methods discussed in the first week. (I notice some unfortunate notation here. In the first part of these notes I used the function $g(\cdot)$ to denote the equation of motion for the state. Here I am using $g(\cdot)$ to denote the time derivative of the optimal control rule. I hope that this does not cause confusion.)

From (1): $\dot x = 0 \iff u = bx$, and $\dot x > 0$ when $u > bx$.

From (4): $\dot u = 0 \iff$

(5)  $(b+r)\,w'(u) - D'(x) = 0$.

To find the slope of the isocline, take the total differential of (5):

(6)  $(b+r)\,w''(u)\,du - D''(x)\,dx = 0$

(7)  $\left.\dfrac{du}{dx}\right|_{\dot u = 0} = \dfrac{D''(x)}{(b+r)\,w''(u)} < 0$.

For $u$ above the $\dot u = 0$ isocline, the numerator of (4) is negative; since $w'' < 0$, conclude $\dot u > 0$ above the isocline.

For general problems it is not so easy to determine the direction of motion off the isocline. The systematic approach is to take the derivative $\partial g/\partial u$, evaluated where $g = 0$ (on the isocline). If I evaluate this derivative I see that $\partial g/\partial u$ (evaluated where $g = 0$) $> 0$. This tells me that if I move slightly above the isocline, $\dot u > 0$ (since $\dot u = 0$ on the isocline). Equivalently, I could have examined $\partial g/\partial x$ (evaluated where $g = 0$). [Figure 1]

Step e: Linearize $g(u,x)$ in (4) at the steady state $(u_\infty, x_\infty)$:

$g_u(u_\infty, x_\infty) = \dfrac{w''\,(b+r)\,w'' - 0\cdot w'''}{(w'')^2} = b + r$,  $g_x(u_\infty, x_\infty) = -\dfrac{D''}{w''} > 0$.

The linearized system is

(8)  $\begin{pmatrix}\dot x\\ \dot u\end{pmatrix} = \begin{pmatrix} -b & 1\\[2pt] -\dfrac{D''}{w''} & b+r \end{pmatrix}\begin{pmatrix}x\\ u\end{pmatrix} \equiv A\begin{pmatrix}x\\ u\end{pmatrix}$,  $|A| = -b(b+r) + \dfrac{D''}{w''} < 0$.

(I dropped the constant term in this linearization, since it has no effect on the stability properties. Equivalently, you can think of having made a change of variables -- i.e. translating the axes so that the origin is at the steady state. I described this procedure in Notes 1 near page 10.)
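Steps c and e can be carried out explicitly once functional forms are chosen. A sketch with hypothetical quadratic forms (my assumption, not from the notes): $w(u) = \alpha u - u^2/2$ (concave) and $D(x) = d x^2/2$ (convex), so $w'' = -1$ and $D'' = d$:

```python
import numpy as np

# Hypothetical functional forms: w(u) = alpha*u - u^2/2, D(x) = d*x^2/2,
# so w' = alpha - u, w'' = -1, D' = d*x, D'' = d. Parameters are illustrative.
alpha, b, r, d = 1.0, 0.1, 0.05, 0.2

# Steady state (step c): u = b x and (b+r) w'(u) = D'(x)
#   => (b+r)(alpha - b x) = d x  =>  x* = (b+r)*alpha / (d + b*(b+r))
x_ss = (b + r) * alpha / (d + b * (b + r))
u_ss = b * x_ss

# Linearization (8), step e: A = [[-b, 1], [-D''/w'', b+r]]
A = np.array([[-b, 1.0],
              [-d / (-1.0), b + r]])
eigs = np.linalg.eigvals(A)
print("det(A) =", np.linalg.det(A))   # negative, as the notes claim
print("eigenvalues:", sorted(eigs))   # one negative, one positive: saddle point
```

Since $|A| < 0$, the eigenvalues are real and of opposite sign, which is the saddle-point property established in the text; the same computation works for any concave $w$ and convex $D$ once $w''$ and $D''$ are evaluated at the steady state.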
Since the determinant equals the product of the eigenvalues, conclude that there is one positive and one negative eigenvalue, so the equilibrium is a saddle point.

$x^*$ is approached monotonically. (This result holds for all saddle point equilibria in autonomous problems with a single state variable. The intuition is as follows. The optimal $u$ is a function of the state, i.e. $u = U(x)$, so the optimally controlled system is $\dot x = f(x,u) = f(x, U(x)) \equiv z(x)$. Now, suppose that $x^*$ were not approached monotonically. Then the state would have to reverse direction at some $\hat x \ne x^*$ such that $z(\hat x) = 0$. This is a contradiction, since such an $\hat x$ is itself a steady state: once the state hits a level of $x$ at which $z(x) = 0$, the state does not subsequently change. Therefore, the state could not pass through a point at which $z(x) = 0$ on the way to the steady state. Thus $z(x)$ never changes sign, and $x$ is monotonic.)

Monotonicity of the state does not hold in problems with two or more state variables. For this problem $u^*$ is also approached monotonically, but monotonicity of the control is not a general result.

The dashed line in figure 1 is the stable saddle path (one of the separatrices) of the system. This dashed line is the graph of the optimal control rule.

At this point I want to work on the intuition of why the steady state is a saddle point; here I take up a comment I made in the first set of notes. If the steady state were an unstable node, there would exist no trajectory that satisfies the necessary conditions and also converges to the steady state. If the steady state were a stable node, there would be infinitely many paths that satisfy the necessary conditions and also converge to the steady state. That is, widely different control rules would all satisfy the necessary conditions. In that case, the necessary conditions would not tell me much about the optimal solution to the problem. If the steady state is a saddle point (as is the case), then there is exactly one trajectory that satisfies the necessary conditions and converges. That is, there is one optimal control rule.
************ Digression: We have established saddle point stability by examining the linearized system. This is the typical approach, and in any case, you need the information we have obtained in order to do comparative statics on the steady state. However, it's worth knowing that you can show that a steady state is a saddle point using geometric arguments. Here's how. (See Clark, chapter 6.)

The isoclines divide the phase space into isosectors. An isosector is "terminal" if a trajectory never leaves the isosector once it has entered. Analysis of terminal isosectors enables us to determine the type of a steady state. See Figure 2. In panel A, (ii) and (iv) are terminal isosectors. If I reverse the direction of the arrows (panel B), (i) and (iii) are terminal isosectors. Trajectories in these isosectors diverge. In this case, the steady state is a saddle point. Note that in both cases the steady state is unstable. However, the nature of this unstable steady state is different. With a saddle point, there exists a trajectory that approaches the steady state, and all other trajectories diverge from the steady state. With an unstable node, all trajectories diverge from the steady state. [Figure 2]

In panel (C), (i) and (iii) are terminal isosectors. When I reverse the arrows (panel D), (ii) and (iv) are terminal, but trajectories in these isosectors converge. In this case, the steady state is an unstable node.

Now apply this recipe to Figure 1, where (ii) & (iv) are terminal isosectors. If I reverse the direction of the arrows, (i) & (iii) are terminal isosectors for which trajectories diverge from the origin ⇒ the steady state is a saddle point.

(Caution: Clark states that if there do not exist terminal isosectors, the equilibrium is not a saddle point. This statement is incorrect.) End digression. *********************

Step f: Comparative statics on the steady state. I want to determine how the steady state changes with changes in the exogenous parameters.
The steady state is given by the system $u - bx = 0$, $g(\cdot) = 0$. I want to totally differentiate this wrt $x$, $u$, and the exogenous parameters (here, $b$ and $r$). This total differentiation gives

$A\begin{pmatrix}dx\\ du\end{pmatrix} = \begin{pmatrix} x\,db \\[2pt] -\dfrac{w'}{w''}\,db - \dfrac{w'}{w''}\,dr \end{pmatrix}$,

where the matrix $A$ was given in equation (8), and we saw that $|A| < 0$. Using Cramer's Rule, or inverting $A$, we obtain the comparative statics results: $du/db$, $du/dr$, and $dx/dr$ are all positive. (An increase in $r$ increases the steady state stock of pollution, $x^*$. We will use this fact for the comparative dynamics of the optimal trajectory, below.) The final comparative statics result is

$\dfrac{dx}{db} = \dfrac{(b+r)x + w'/w''}{|A|}$,

which has an ambiguous sign.

Intuition: $b$ is the decay rate. A larger $b$ means that the stock of pollution decays more quickly, so emissions (the flow) cause less damage in the future. Consequently, it is optimal to pollute more. Since the flow of pollution increases, but the stock decays more quickly, the net effect on the steady state stock is uncertain. A larger $r$ means the future is discounted more heavily, so it is optimal to pollute more at each point in time.

MAIN POINT: In order to obtain any comparative statics results we needed to know the sign of $|A|$. We know that this determinant is negative, because we know that if the system converges to a steady state, the steady state is a saddle point. Thus we use the assumption of convergence to a steady state for these comparative statics in much the same way that we use the assumption of optimality (e.g. negative definiteness) for comparative statics of static optimization problems, or for comparative statics of equilibrium problems. (Remember the role of the assumption of stability in some equilibrium problems, e.g. the Marshall-Lerner condition in trade models.) This problem is simple enough that you can confirm that the optimal trajectory does approach the steady state -- that is, the approach to a steady state is a result, not an assumption.
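The Cramer's-Rule computation can be carried out numerically. A sketch under the same hypothetical quadratic forms used above ($w(u) = \alpha u - u^2/2$, $D(x) = d x^2/2$, my assumption); it reproduces the three unambiguous signs and also exhibits the ambiguity of $dx/db$ by varying the parameters:

```python
import numpy as np

# Hypothetical quadratic forms: w' = alpha - u, w'' = -1, D' = d*x, D'' = d.
def steady_state(alpha, b, r, d):
    # u = b x and (b+r)(alpha - u) = d x  =>  x* = (b+r)*alpha / (d + b*(b+r))
    x = (b + r) * alpha / (d + b * (b + r))
    return x, b * x

def comparative_statics(alpha, b, r, d):
    x, u = steady_state(alpha, b, r, d)
    wp, wpp, Dpp = alpha - u, -1.0, d
    A = np.array([[-b, 1.0], [-Dpp / wpp, b + r]])      # matrix from equation (8)
    # Total differentiation: A (dx, du)' = (x db, -(w'/w'')(db + dr))'
    dx_db, du_db = np.linalg.solve(A, [x, -wp / wpp])   # db = 1, dr = 0
    dx_dr, du_dr = np.linalg.solve(A, [0.0, -wp / wpp]) # db = 0, dr = 1
    return dx_db, du_db, dx_dr, du_dr

dx_db, du_db, dx_dr, du_dr = comparative_statics(1.0, 0.1, 0.05, 0.2)
print(du_db > 0, du_dr > 0, dx_dr > 0)                 # all positive, as in the text

# dx/db is ambiguous; with these forms its sign flips with the parameters:
print(comparative_statics(1.0, 0.1, 0.05, 0.2)[0])     # > 0 here
print(comparative_statics(1.0, 0.5, 0.5, 0.2)[0])      # < 0 here
```

With these forms one can verify by hand that the sign of $dx/db$ equals the sign of $d - (b+r)^2$, which is exactly the ambiguity the notes point to.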
The stable path is the only trajectory that satisfies both the necessary conditions and the "transversality condition at infinity", which requires that the product of the state and the present value of the costate approaches 0. This transversality condition is the limit, as $T$ goes to infinity, of the transversality condition for finite $T$.

In more general problems it might be difficult to show that the optimal trajectory converges to a steady state. We sometimes merely assume that the optimal trajectory approaches a steady state. This assumption, together with the fact that the steady state must be a saddle point, implies that the determinant of $A$ is negative. I want to emphasize this point. For this particular problem we were able to verify by calculation that the steady state is a saddle point, and we used this fact to perform comparative statics experiments. In a more general problem, this calculation may be difficult or ambiguous. However, for a control problem with one state variable, we know that if the trajectory approaches a steady state, it must be the case that the steady state is a saddle point. Thus, in some problems we might simply assume that the trajectory approaches a steady state, and use this assumption to sign the determinant. (This procedure should not seem odd to you. The maximand in a static problem may not be globally concave, or the equilibrium in an equilibrium problem may not be unique. However, in performing comparative statics experiments, we assume that the point we are perturbing is a local max in one case, or a stable equilibrium in the other.)

The previous discussion involved comparative statics of the steady state. Now we want to ask what happens to the entire trajectory, or equivalently, to the control rule, when a parameter changes. This kind of question is known as a comparative dynamics question. Now consider comparative dynamics wrt $r$, with $r_2 > r_1$. An increase in $r$ shifts up the $\dot u = 0$ isocline, as shown in figure 3.
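Under the hypothetical quadratic specification used earlier ($w(u) = \alpha u - u^2/2$, $D(x) = d x^2/2$, my assumption), the comparative-dynamics claim can be computed directly, because with quadratic forms the problem is linear-quadratic and the stable saddle path of the linearized system (8) is exactly the optimal control rule, a line through the steady state with slope $\mu + b$ ($\mu$ the stable eigenvalue). A sketch comparing the rules for two discount rates:

```python
import numpy as np

# Hypothetical quadratic forms (assumption): w(u) = alpha*u - u^2/2, D(x) = d*x^2/2.
alpha, b, d = 1.0, 0.1, 0.2

def saddle_path(r):
    # Steady state and linearization (8), with w'' = -1, D'' = d
    x_ss = (b + r) * alpha / (d + b * (b + r))
    u_ss = b * x_ss
    A = np.array([[-b, 1.0], [d, b + r]])
    mu = min(np.linalg.eigvals(A).real)   # stable (negative) eigenvalue
    slope = mu + b                        # slope of the stable eigenvector: from
                                          # row 1 of A v = mu v, v2/v1 = mu + b
    return lambda x: u_ss + slope * (x - x_ss)

r1, r2 = 0.05, 0.15
u_r1, u_r2 = saddle_path(r1), saddle_path(r2)
x0 = 0.3   # an initial pollution stock below both steady states
print(u_r1(x0), u_r2(x0))   # the higher-r rule prescribes more emissions at x0
```

For every $x$ checked below both steady states, the rule for $r_2 > r_1$ lies above the rule for $r_1$, which is the result the crossing argument in the text establishes in general.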
We know that $x^*$ increases. I have drawn two candidate optimal trajectories, $S_1$ and $S_2$, corresponding to $r_1$ and $r_2$. If the trajectories are as shown, then for given $x$, an increase in $r$ increases $u$. Alternatively, if the optimal trajectory corresponding to $r_2$ looks like $S_3$, then for some values of $x$, an increase in $r$ leads to a decrease in $u$. We can rule out this possibility. [Figure 3]

Suppose $x_0 < x_1^*$ and $S_1$ is the saddle path for $r_1$. We want to show that if $r$ increases from $r_1$ to $r_2$, the stable path shifts up (so $u$ is higher for given $x$). Suppose not, and suppose the new path looks like $S_3$. Then the two paths intersect at some point $(\tilde x, \tilde u)$. The slope of an optimal path is (using (1) and (4))

$\dfrac{du}{dx} = \dfrac{\dot u}{\dot x} = \dfrac{(b+r)\,w'(u) - D'(x)}{w''(u)\,(u - bx)}$.

At the point of intersection $(\tilde x, \tilde u)$,

(9)  $\left.\dfrac{du}{dx}\right|_{S_1} = \dfrac{(b+r_1)\,w' - D'}{w''\,(\tilde u - b\tilde x)} \;<\; \dfrac{(b+r_2)\,w' - D'}{w''\,(\tilde u - b\tilde x)} = \left.\dfrac{du}{dx}\right|_{S_3}$

⇒ $r_1\,\dfrac{w'}{w''(\tilde u - b\tilde x)} < r_2\,\dfrac{w'}{w''(\tilde u - b\tilde x)}$. The common factor $\dfrac{w'}{w''(\tilde u - b\tilde x)}$ is negative, so this implies $r_1 > r_2$ -- a contradiction. Therefore, when $r$ is increased, the new saddle path cannot intersect the original path.

Now I'll ask a question about the value of the program, rather than about the nature of the optimal control. How does the value of the optimal program change due to a parameter change? E.g., in the previous problem, what is the effect of a change in $r$ on the payoff?

Trick: treat $r$ as a state variable (with $\dot r = 0$), with costate $\rho$, and interpret $\rho$:

$H^{P.V.} = e^{-rt}\left[w(u) - D(x)\right] + \lambda^{P.V.}(u - bx) + \rho\cdot 0$

$\dot\rho = -\dfrac{\partial H^{P.V.}}{\partial r} = t\,e^{-rt}\left[w(u) - D(x)\right]$.

Integrate from 0 to $T$:

$\rho(T) - \rho(0) = \int_0^T t\,e^{-rt}\left[w(u) - D(x)\right] dt$.

Let $T \to \infty$ and use $\rho(\infty) = 0$ ⇒

$\rho(0) = -\int_0^\infty t\,e^{-rt}\left[w(u) - D(x)\right] dt$,

where $\rho(0)$, like any costate, is the shadow value of its "state" -- here, the marginal effect of $r$ on the value of the program. Conclude that a sufficient condition for an increase in $r$ to decrease the PDV of the stream of payoffs is that the stream of payoffs is positive along the entire trajectory. (See LaFrance and Barney, and Caputo, for more general treatments of comparative dynamics of control problems.)

************ A comment on numerical solutions. You can often learn a lot about a problem using numerical solutions. Here's how you can numerically approximate the optimal control rule for a one-state-variable control problem where the optimally controlled trajectory converges to a steady state.
Using the necessary conditions, we see that the optimal trajectory must satisfy the ODE $du/dx = g(u,x)/(u - bx)$. In order to solve this ODE I need a boundary value. I know the value of $x$ at the initial time, but I don't know the corresponding value of $u$, so information about the initial condition is no help. I need to know a point that lies on the optimal trajectory. The only such point that I know is the steady state. The problem is, I cannot numerically solve the ODE using the steady state as the boundary condition, because at that point both the numerator and the denominator of the ODE are 0. Therefore I pick a point close to the steady state. But what does "close" mean? I know that the separatrix (i.e. the saddle path) of the linearized system is tangent to the separatrix of the nonlinear system, and I know that the latter is the (graph of the) control rule. Therefore I use as my boundary condition a point close to the steady state, on the stable saddle path of the linearized system. One of your problem sets asks you to do this.
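The procedure just described can be sketched in code. I again use the hypothetical quadratic forms $w(u) = \alpha u - u^2/2$, $D(x) = d x^2/2$ (my assumption); with these forms the true control rule happens to be the linear saddle path itself, which gives us something to check the numerical answer against:

```python
import numpy as np

# Hypothetical quadratic forms: w' = alpha - u, w'' = -1, D' = d*x, D'' = d.
alpha, b, r, d = 1.0, 0.1, 0.05, 0.2

def g(u, x):
    # Equation (4): u' = [(b+r) w'(u) - D'(x)] / w''(u)
    return ((b + r) * (alpha - u) - d * x) / (-1.0)

# Steady state and linearization (8)
x_ss = (b + r) * alpha / (d + b * (b + r))
u_ss = b * x_ss
A = np.array([[-b, 1.0], [d, b + r]])
mu = min(np.linalg.eigvals(A).real)    # stable eigenvalue
slope = mu + b                         # slope of the linearized saddle path

# Boundary condition: a point near the steady state ON the linearized saddle path
eps = 1e-4
x, u = x_ss - eps, u_ss + slope * (-eps)

# Integrate du/dx = g(u,x)/(u - b x) away from the steady state with RK4
f = lambda x, u: g(u, x) / (u - b * x)
h = -1e-3                              # step in x: move left, away from x_ss
while x > 0.35:
    s1 = f(x, u); s2 = f(x + h/2, u + h/2*s1)
    s3 = f(x + h/2, u + h/2*s2); s4 = f(x + h, u + h*s3)
    u += h/6 * (s1 + 2*s2 + 2*s3 + s4)
    x += h

# In this LQ case the exact rule is u = u_ss + slope*(x - x_ss); compare:
print(u, u_ss + slope * (x - x_ss))
```

Integrating away from the steady state is numerically benign: in that direction (backward in time) nearby trajectories are attracted to the separatrix, so small errors damp rather than grow. For a genuinely nonlinear $w$ or $D$ the computed curve would be the object of interest itself, with no closed form to compare against.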
This note was uploaded on 08/01/2008 for the course ARE 263 taught by Professor Karp during the Fall '06 term at University of California, Berkeley.