DP_SL: Dynamic Programming
Prof. Lutz Hendricks
September 15, 2009

Introduction

This section presents useful theorems that characterize the solution to a dynamic programming (DP) problem. There is no need to memorize these results, but you should know that they exist and can be looked up when you need them.

Generic problem

Problem P1 (the sequence problem):

  V(x(0)) = max_{{x(t+1)}_{t=0}^∞} Σ_{t=0}^∞ β^t U(x(t), x(t+1))

subject to

  x(t+1) ∈ G(x(t)),  x(0) given.

Here x(t) ∈ X ⊆ R^k, where X is the set of allowed states, and the correspondence G : X → X describes the feasible successor states.

Generic problem: assumptions that could be relaxed, at a cost

1. Stationarity: U and G do not depend on t.
2. Utility is additively separable, which delivers time consistency.

Mapping into the growth model

  max_{{k(t+1), c(t)}_{t=0}^∞} Σ_{t=0}^∞ β^t u(c(t))

subject to

  k(t+1) = f(k(t)) − c(t) ≥ 0,  k(0) given.

Mapping into the growth model

  U(k(t), k(t+1)) = u( f(k(t)) − k(t+1) )

  G(k(t)) = { k(t+1) : k(t+1) ∈ [0, f(k(t))] }

Recursive problem

Problem P2 (the Bellman equation):

  V(x) = max_{y ∈ G(x)} { U(x, y) + β V(y) },  ∀ x ∈ X.

The question: when is solving P1 equivalent to solving P2?
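The growth-model mapping above can be made concrete in code. This is a minimal sketch under assumed functional forms that are not in the slides: f(k) = k^α and u(c) = log(c), with the state space discretized on a grid.

```python
import numpy as np

alpha = 0.3  # assumed capital share (illustrative, not from the slides)

def f(k):
    """Production function f(k) = k**alpha (assumed form)."""
    return k ** alpha

def U(k, k_next):
    """Period payoff U(k(t), k(t+1)) = u(f(k(t)) - k(t+1)) with u = log."""
    c = f(k) - k_next
    return np.log(c) if c > 0 else -np.inf  # infeasible choices get -inf

def G(k, grid):
    """Feasible successor states: grid points in [0, f(k)]."""
    return grid[(grid >= 0) & (grid <= f(k))]

grid = np.linspace(0.01, 1.0, 50)
print(G(1.0, grid).max() <= f(1.0))  # every feasible k' satisfies k' <= f(k)
```

Encoding infeasible choices as payoff −∞ is a standard trick: it lets a solver maximize over the whole grid while respecting G.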
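On a finite state grid, the right-hand side of the Bellman equation defines an operator T with T(V)(x) = max_{y ∈ G(x)} U(x, y) + β V(y). A sketch, assuming X is discretized and U_mat[i, j] = U(x_i, x_j) with −∞ marking infeasible transitions (an illustrative setup, not from the slides):

```python
import numpy as np

def bellman_operator(V, U_mat, beta):
    """Return T(V)(x_i) = max_j { U(x_i, x_j) + beta * V(x_j) }."""
    return np.max(U_mat + beta * V[np.newaxis, :], axis=1)

# Tiny example: two states, all transitions feasible.
U_mat = np.array([[1.0, 0.0],
                  [0.0, 2.0]])
V0 = np.zeros(2)
V1 = bellman_operator(V0, U_mat, beta=0.9)
print(V1)  # -> [1. 2.]
```

Iterating T from any starting V converges to the fixed point V when β < 1, which is the computational counterpart of the P1/P2 equivalence question.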

Solution

A solution is a policy function π : X → X and a value function V(x) such that

  V(x) = U(x, π(x)) + β V(π(x)),  ∀ x ∈ X.

When y = π(x), now and forever, the maximum value is attained.

Dynamic Programming Theorems

The payoff of DP: it is easier to prove that solutions exist, are unique, monotone, etc. We state some assumptions and the theorems that use them.
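The pair (π, V) can be computed by value function iteration, and the fixed-point identity V(x) = U(x, π(x)) + β V(π(x)) can be checked numerically. A sketch on a two-state grid, where U_mat[i, j] = U(x_i, x_j) and −∞ encodes y_j ∉ G(x_i) (illustrative numbers, not from the slides):

```python
import numpy as np

beta = 0.9
U_mat = np.array([[1.0, 0.0],
                  [-np.inf, 2.0]])  # from state 1, only y = x_1 is feasible

V = np.zeros(2)
for _ in range(1000):
    Q = U_mat + beta * V[np.newaxis, :]   # Q[i, j] = U(x_i, x_j) + beta V(x_j)
    V_new = Q.max(axis=1)                 # Bellman update
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

pi = Q.argmax(axis=1)  # policy: index of the maximizing y for each state x
# Check the solution property V(x) = U(x, pi(x)) + beta * V(pi(x)):
gap = np.abs(V - (U_mat[np.arange(2), pi] + beta * V[pi]))
print(pi, gap.max() < 1e-8)  # -> [1 1] True
```

Here both states choose y = x_1 forever, and the computed V satisfies the recursive identity up to numerical tolerance.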

Assumption 1

Denote the set of feasible paths starting from x(0) by Φ(x(0)).

1. G(x) is nonempty for all x ∈ X.
   Needed to prevent a currently good-looking path from running into "dead ends."
2. lim_{n→∞} Σ_{t=0}^{n} β^t U(x(t), x(t+1)) exists for all x(0) ∈ X and all feasible paths x ∈ Φ(x(0)).
   Utility cannot be unbounded.

Assumption 2

1. The set X in which x lives is compact.
2. G is compact-valued and continuous.
3. U is continuous.

Notes:
- Compactness avoids existence issues: without it, there could always be a slightly better x.
- A compact X creates trouble for endogenous growth models, but this can be relaxed.