Dynamic Programming

Peter Ireland*
EC720.01 - Math for Economists
Boston College, Department of Economics
Fall 2009

* Copyright (c) 2009 by Peter Ireland. Redistribution is permitted for educational and research purposes, so long as no changes are made. All copies must be provided free of charge and must include this copyright notice.

We have now studied two ways of solving dynamic optimization problems, one based on the Kuhn-Tucker theorem and the other based on the maximum principle. These two methods lead us to the same sets of optimality conditions; they differ only in how those optimality conditions are derived. Here, we will consider a third way of solving dynamic optimization problems: the method of dynamic programming. We will see, once again, that dynamic programming leads us to the same set of optimality conditions that the Kuhn-Tucker theorem does; once again, this new method differs from the others only in how the optimality conditions are derived.

While the maximum principle lends itself equally well to dynamic optimization problems set in discrete time and in continuous time, dynamic programming is easiest to apply in discrete-time settings. On the other hand, dynamic programming, unlike the Kuhn-Tucker theorem and the maximum principle, can be used quite easily to solve problems in which optimal decisions must be made under conditions of uncertainty. Thus, in our discussion of dynamic programming, we will begin by considering dynamic programming under certainty; later, we will move on to consider stochastic dynamic programming.

References: Dixit, Chapter 11. Acemoglu, Chapters 6 and 16.

1 Dynamic Programming Under Certainty

1.1 A Perfect Foresight Dynamic Optimization Problem in Discrete Time

No uncertainty.

Discrete time, infinite horizon: t = 0, 1, 2, ...
y_t = stock, or state, variable
z_t = flow, or control, variable

Objective function:

\sum_{t=0}^{\infty} \beta^t F(y_t, z_t; t), where 1 > \beta > 0 is the discount factor.

Constraint describing the evolution of the state variable:

Q(y_t, z_t; t) \geq y_{t+1} - y_t, or equivalently y_t + Q(y_t, z_t; t) \geq y_{t+1}, for all t = 0, 1, 2, ...

Constraint applying to the variables within each period:

c \geq G(y_t, z_t; t) for all t = 0, 1, 2, ...

Constraint on the initial value of the state variable:

y_0 given.

The problem: choose sequences \{z_t\}_{t=0}^{\infty} and \{y_t\}_{t=1}^{\infty} to maximize the objective function subject to all of the constraints.

Notes:

a) It is important for the application of dynamic programming that the problem is additively time separable: that is, the values of F, Q, and G at time t must depend only on the values of y_t and z_t at that same time t.

b) Once again, it must be emphasized that although the constraint describing the evolution of the state variable and the constraint applying to the variables within each period can each be written in the form of a single equation, these constraints must hold for all t = 0, 1, 2, .... Thus, each equation actually represents an infinite number of constraints.
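To make the abstract problem concrete, here is a minimal value-iteration sketch in Python for a discretized special case. The functional forms are illustrative assumptions, not from the notes: F(y_t, z_t; t) = ln z_t (a "cake-eating" payoff), Q(y_t, z_t; t) = -z_t so that the state evolves as y_{t+1} = y_t - z_t, and \beta = 0.95. Value iteration repeatedly applies the recursion V(y) = max_z [F(y, z) + \beta V(y + Q(y, z))], which is the dynamic-programming idea developed in what follows.

```python
# Value-iteration sketch for a discretized cake-eating problem
# (illustrative assumptions, not the notes' general setup):
#   state y_t on a grid, control z_t = y_t - y_{t+1}, payoff ln(z_t).
import numpy as np

beta = 0.95                           # discount factor, 1 > beta > 0
grid = np.linspace(1e-3, 1.0, 101)    # grid of possible values for the state y_t
V = np.zeros(len(grid))               # initial guess for the value function

for _ in range(2000):
    V_new = np.empty_like(V)
    for i, y in enumerate(grid):
        feasible = grid <= y                  # next-period states y' reachable from y
        z = y - grid[feasible]                # implied controls z = y - y'
        # payoff today plus discounted continuation value at y'
        values = np.log(np.maximum(z, 1e-10)) + beta * V[feasible]
        V_new[i] = values.max()
    gap = np.max(np.abs(V_new - V))
    V = V_new
    if gap < 1e-8:                            # stop once the iteration has converged
        break
```

After convergence, V approximates the value of entering a period with each stock level on the grid; because a larger stock enlarges the feasible set, V should be increasing in y.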