PDE for Finance Notes, Spring 2011 Section 5
Notes by Robert V. Kohn, Courant Institute of Mathematical Sciences. For use only in connection with the NYU course PDE for Finance, G63.2706. Prepared in 2003; minor updates made in 2011.

Stochastic optimal control. Stochastic optimal control is like deterministic optimal control except that (i) the equation of state is a stochastic differential equation, and (ii) the goal is to maximize or minimize the expected utility or cost. To see the structure of the theory in a simple, uncluttered way, we begin by examining what becomes of a standard deterministic utility maximization problem when the state equation is perturbed by a little noise. Then we present a finance classic: Merton's analysis of optimal consumption and investment, in the simplest meaningful case (a single risky asset and a risk-free account). My treatment follows more or less the one in Fleming and Rishel's book Deterministic and Stochastic Optimal Control (Springer-Verlag, 1975). However, my best recommendation for reading on this topic and related ones is the book by F.-R. Chang, Stochastic Optimization in Continuous Time, Cambridge Univ Press (on reserve in the CIMS library). It has lots of examples and is very readable (though the version of the Merton optimal consumption and investment problem considered there is a special case of the one considered here, with maturity $T = \infty$).

****************************

Perturbation of a deterministic problem by small noise. We've discussed at length the deterministic dynamic programming problem with state equation
\[
dy/ds = f(y(s), \alpha(s)) \quad \text{for } t < s < T, \qquad y(t) = x,
\]
controls $\alpha(s) \in A$, and objective
\[
\max_\alpha \int_t^T h(y(s), \alpha(s))\, ds + g(y(T)).
\]
Its value function satisfies the HJB equation
\[
u_t + H(\nabla u, x) = 0 \quad \text{for } t < T, \qquad u(x, T) = g(x),
\]
with Hamiltonian
\[
H(p, x) = \max_{a \in A} \{ f(x, a) \cdot p + h(x, a) \}. \tag{1}
\]
Let us show (heuristically) that when the state is perturbed by a little noise, the value function of the resulting stochastic control problem solves the perturbed HJB equation
\[
u_t + H(\nabla u, x) + \tfrac{1}{2} \epsilon^2 \Delta u = 0 \tag{2}
\]
where $H$ is still given by (1), and $\Delta u = \sum_i \partial^2 u / \partial x_i^2$.

Our phrase "perturbing the state by a little noise" means this: we replace the ODE governing the state by the stochastic differential equation (SDE)
\[
dy = f(y, \alpha)\, ds + \epsilon\, dw,
\]
keeping the initial condition $y(t) = x$. Here $w$ is a standard, vector-valued Brownian motion (each component $w_i$ is a scalar-valued Brownian motion, and different components are independent). The evolution of the state is now stochastic, hence so is the value of the utility. Our goal in the stochastic setting is to maximize the expected utility. The value function is thus
\[
u(x, t) = \max_\alpha E_{y(t)=x} \left[ \int_t^T h(y(s), \alpha(s))\, ds + g(y(T)) \right].
\]
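A quick sketch of the standard heuristic behind (2) (a reconstruction, not the notes' own derivation, which continues beyond this excerpt): by the dynamic programming principle, over a short interval $[t, t + \Delta t]$ with the control frozen at a value $a$,
\[
u(x, t) \approx \max_{a \in A} E\left[ h(x, a)\, \Delta t + u\big(y(t + \Delta t), t + \Delta t\big) \right].
\]
Applying Itô's formula to $u(y(s), s)$ along $dy = f\, ds + \epsilon\, dw$ and taking expectations (the $dw$ term has mean zero) gives
\[
E\left[ u\big(y(t + \Delta t), t + \Delta t\big) \right] \approx u(x, t) + \left( u_t + f(x, a) \cdot \nabla u + \tfrac{1}{2} \epsilon^2 \Delta u \right) \Delta t.
\]
Substituting, cancelling $u(x, t)$, dividing by $\Delta t$, and letting $\Delta t \to 0$ leaves
\[
u_t + \max_{a \in A} \{ f(x, a) \cdot \nabla u + h(x, a) \} + \tfrac{1}{2} \epsilon^2 \Delta u = 0,
\]
which is (2), since the diffusion term does not involve the control.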
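Equation (2) is also easy to experiment with numerically. Here is a minimal sketch, not from the notes: the model $f(x,a) = a$, $h(x,a) = -x^2 - 0.1 a^2$, $g(x) = -x^2$, and all parameter values are illustrative choices. It evaluates the Hamiltonian (1) by brute force over a finite control set and marches (2) backward in time with explicit centered differences, which is adequate for a rough sketch (a production solver would use an upwind or otherwise monotone scheme).

```python
import numpy as np

A = np.array([-1.0, 0.0, 1.0])             # finite control set (illustrative)

def f(x, a):  return a * np.ones_like(x)   # drift f(x,a) (illustrative)
def h(x, a):  return -x**2 - 0.1 * a**2    # running reward (illustrative)
def g(x):     return -x**2                 # terminal reward (illustrative)

def hamiltonian(p, x):
    # H(p, x) = max over a in A of { f(x,a) * p + h(x,a) }, as in (1)
    return np.max([f(x, a) * p + h(x, a) for a in A], axis=0)

# March u_t + H(u_x, x) + 0.5*eps^2*u_xx = 0 backward from u(x, T) = g(x).
eps, T, nx, nt = 0.1, 1.0, 201, 2000
x = np.linspace(-2.0, 2.0, nx)
dx, dt = x[1] - x[0], T / nt
u = g(x)                                   # terminal condition at t = T
for _ in range(nt):                        # step from t = T down to t = 0
    ux = np.gradient(u, dx)                # centered u_x, one-sided at edges
    uxx = np.gradient(ux, dx)              # crude u_xx; fine for a sketch
    u = u + dt * (hamiltonian(ux, x) + 0.5 * eps**2 * uxx)

print("u(x=0, t=0) approx:", u[nx // 2])
```

The explicit time step here is well inside the stability limit set by the diffusion coefficient $\epsilon^2/2$ and the grid spacing; halving dx requires roughly quartering dt.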
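The defining formula for $u$ can likewise be probed by Monte Carlo: for any fixed admissible control, simulating the SDE by Euler-Maruyama and averaging the realized utility estimates the expected utility of that control, which is a lower bound for $u$ since the value function maximizes over controls. Again a sketch under the same illustrative model; the feedback rule $\alpha(x) = -\mathrm{sign}(x)$ is a hypothetical choice, not an optimal one.

```python
import numpy as np

def f(x, a):  return a                     # same illustrative model as above
def h(x, a):  return -x**2 - 0.1 * a**2
def g(x):     return -x**2

def alpha(x):
    return -np.sign(x)                     # hypothetical feedback control

def expected_utility(x0, T=1.0, eps=0.1, npaths=20000, nsteps=200, seed=0):
    # Estimate E_{y(0)=x0}[ int_0^T h(y, alpha(y)) ds + g(y(T)) ] by
    # Euler-Maruyama simulation of dy = f(y, alpha) ds + eps dw.
    rng = np.random.default_rng(seed)
    dt = T / nsteps
    y = np.full(npaths, float(x0))
    running = np.zeros(npaths)
    for _ in range(nsteps):
        a = alpha(y)
        running += h(y, a) * dt            # accumulate the running reward
        y += f(y, a) * dt + eps * np.sqrt(dt) * rng.standard_normal(npaths)
    return (running + g(y)).mean()

# A lower bound on u(x0, 0), since u maximizes over all controls.
print("expected utility of this control at x0 = 0.5:", expected_utility(0.5))
```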