PDE for Finance Notes, Spring 2011, Section 7
Notes by Robert V. Kohn, Courant Institute of Mathematical Sciences. For use only in connection with the NYU course PDE for Finance, G63.2706. Prepared in 2003; minor updates made in 2011.

About the final exam: Our final exam will be Monday May 9 (the last week of classes, not exam week). It will be at our usual time (5:10-7pm), in our usual location (WWH 517). You may bring two sheets of notes (8.5 x 11, both sides, any font). The preparation of such notes is an excellent study tool. For an idea what to expect, see the 2003 final exam, which is posted with my 2003 lecture notes. In general: the exam problems will address topics you have seen on HW or in class, formulated in such a way that, if you understand the material, each question can be answered relatively quickly.

*******************

Discrete-time dynamic programming. This section achieves two goals at once. One is to demonstrate the utility of discrete-time dynamic programming as a flexible tool for decision-making in the presence of uncertainty. The second is to introduce some financially relevant applications. To achieve these goals we shall discuss three specific examples: (1) optimal control of execution costs (following a paper by Bertsimas and Lo); (2) a discrete-time version of when to sell an asset (following Bertsekas' book Dynamic Programming: Deterministic and Stochastic Models); and (3) least-squares replication of a European option (following a paper by Bertsimas, Kogan, and Lo).

In the context of this course it was natural to address continuous-time problems first, because we began the semester with stochastic differential equations and their relation to PDEs. Most courses on optimal control would, however, discuss the discrete-time setting first, because it is in many ways easier and more flexible. Indeed, continuous-time dynamic programming uses stochastic differential equations, Itô's formula, and the HJB equation.
Discrete-time dynamic programming, on the other hand, uses little more than basic probability and the viewpoint of dynamic programming. Of course many problems have both discrete- and continuous-time versions, and it is often enlightening to consider both (or compare the two). A general discussion of the discrete-time setting, with many examples, can be found in Dimitri Bertsekas, Dynamic Programming: Deterministic and Stochastic Models, Prentice-Hall, 1987, especially Chapter 2. Our approach here is different: we shall explain the method by presenting a few financially relevant examples.

Example 1: Optimal control of execution costs. This example is taken from the recent article: Dimitris Bertsimas and Andrew Lo, Optimal control of execution costs, J....
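Before working through the examples, it may help to see the backward-induction viewpoint in code. The following is a minimal sketch (not from the notes) of a toy version of the asset-selling problem of Example 2; the offer distribution, horizon, and discount factor are all hypothetical choices made purely for illustration.

```python
# Backward induction for a toy "when to sell an asset" problem.
# At each period t = 0, ..., T-1 an i.i.d. offer w is drawn uniformly
# from `offers`; the seller either accepts w (and stops) or waits,
# discounting future value by beta. Rejected offers cannot be recalled.
# The value of entering period t before seeing the offer is
#     V_t = E[ max(w, beta * V_{t+1}) ],   with V_T = 0,
# and the optimal policy is a threshold rule: accept w iff
#     w >= beta * V_{t+1}.

def solve_selling_problem(offers, T, beta=0.95):
    V = [0.0] * (T + 1)            # V[T] = 0: unsold at the horizon is worthless
    thresholds = [0.0] * T         # acceptance threshold at each period
    for t in range(T - 1, -1, -1):
        cont = beta * V[t + 1]     # value of rejecting and waiting one period
        thresholds[t] = cont
        # Expectation over the (uniform) offer distribution:
        V[t] = sum(max(w, cont) for w in offers) / len(offers)
    return V, thresholds

offers = [1.0, 2.0, 3.0, 4.0]      # equally likely offers (assumed)
V, thresholds = solve_selling_problem(offers, T=5)
```

Note the two hallmarks of discrete-time dynamic programming: the value function is computed backward from the terminal condition, and the optimal decision at each period falls out of the comparison inside the max. In particular the thresholds decrease as the horizon approaches, so the seller becomes less selective near the deadline.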