PDE for Finance Notes, Spring 2011 – Section 4
Notes by Robert V. Kohn, Courant Institute of Mathematical Sciences. For use only in connection with the NYU course PDE for Finance, G63.2706. Prepared in 2003, minor updates made in 2011.

Deterministic optimal control. We began the semester by studying stochastic differential equations and the associated linear partial differential equations – the backward Kolmogorov equation (a PDE for the expected payoff of a generalized option) and the forward Kolmogorov equation (a PDE for the evolving probability density).

We're heading toward stochastic control. That theory considers SDE's over which we have some influence, modelling for example the value of a portfolio. One typical goal is to maximize the utility of final-time wealth. The task is then two-fold: (i) to identify an optimal strategy, and (ii) to evaluate the associated "value function" u(x,t) – the optimal utility of final-time wealth, if the system starts in state x at time t. It solves the Hamilton-Jacobi-Bellman (HJB) equation – the analogue for stochastic control of the backward Kolmogorov equation. The HJB equation is usually nonlinear, due to the effect of our decision-making. Like the backward Kolmogorov equation, it must be solved backward in time. Underlying the derivation and solution of the HJB equation is the dynamic programming principle – a powerful scheme for solving optimization problems by gradually increasing the time-to-maturity (or a similar parameter).

Selected financial applications of stochastic control include: (a) optimizing the allocation of assets among distinct risky investment opportunities; (b) optimizing the rate at which to spend income from an investment portfolio; (c) optimal hedging of an option on a non-traded underlying; and (d) pricing of American options (i.e. optimization of the "exercise rule"). All these problems involve a blend of (i) stochasticity and (ii) control.
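To make the dynamic programming principle concrete, here is a minimal sketch of backward induction for a toy discrete-time deterministic control problem. The grid, dynamics, and costs are illustrative assumptions, not taken from these notes; the point is only the recursive structure: the value function at time t is computed from the value function at time t+1 by optimizing over the current control.

```python
# Hedged sketch: dynamic programming by backward induction for a toy
# deterministic control problem (all problem data here is illustrative).

def solve_dp(T, states, controls, step, running_cost, final_cost):
    """Compute the value function u[t][x] and an optimal policy by
    backward induction:
        u(x, T) = final_cost(x)
        u(x, t) = min over a of [ running_cost(x, a) + u(step(x, a), t+1) ]
    """
    u = {T: {x: final_cost(x) for x in states}}
    policy = {}
    for t in range(T - 1, -1, -1):
        u[t] = {}
        policy[t] = {}
        for x in states:
            best = None
            for a in controls:
                x_next = step(x, a)
                if x_next not in u[t + 1]:
                    continue  # this control leaves the grid; skip it
                cost = running_cost(x, a) + u[t + 1][x_next]
                if best is None or cost < best:
                    best = cost
                    policy[t][x] = a
            u[t][x] = best
    return u, policy

# Toy instance: steer the state toward 0, paying |x| each period plus a
# small control cost; the terminal cost is x**2.
states = range(-5, 6)
controls = (-1, 0, 1)
u, policy = solve_dp(
    T=4,
    states=states,
    controls=controls,
    step=lambda x, a: x + a,
    running_cost=lambda x, a: abs(x) + 0.1 * abs(a),
    final_cost=lambda x: x ** 2,
)
```

Note that the recursion runs from the final time backward, exactly as the HJB equation is solved backward in time; increasing T corresponds to the "gradually increasing time-to-maturity" viewpoint in the text.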
We already understand stochasticity. This section provides an introduction to optimal control in the simpler deterministic setting. In Sections 5 and beyond, we'll combine the two ingredients to address financially significant examples. The material covered in this section is "standard," however I don't know many good places to read about it. The book by Fleming and Rishel, Deterministic and Stochastic Optimal Control, covers everything here and much more – but it goes far deeper than the level of this class. Roughly the same comment applies to the book by Bertsekas, Dynamic Programming and Optimal Control. The charming and inexpensive book A. Dixit, Optimization in Economic Theory (Oxford Univ Press, 1990, paperback) covers some closely related material.
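For orientation, the deterministic finite-horizon problem developed in the body of this section leads to an HJB equation of the following standard form (the notation here is a generic assumption, not yet fixed by the notes): with dynamics dy/ds = f(y, a), running payoff h, and final-time payoff g, the value function u(x, t) satisfies

\[
u_t + \max_{a \in A} \bigl\{ f(x,a) \cdot \nabla_x u + h(x,a) \bigr\} = 0, \qquad u(x,T) = g(x),
\]

solved backward in time from the final condition at t = T, in parallel with the backward Kolmogorov equation discussed above. (For a minimization problem, the max becomes a min.)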
This note was uploaded on 01/02/2012 for the course FINANCE 347 taught by Professor Bayou during the Fall '11 term at NYU.