Copyright © 2009 by Karl Sigman

1 Stopping Times

1.1 Stopping Times: Definition

Given a stochastic process X = {X_n : n ≥ 0}, a random time τ is a discrete random variable on the same probability space as X, taking values in the time set ℕ = {0, 1, 2, ...}. X_τ denotes the state at the random time τ; if τ = n, then X_τ = X_n.

If we were to observe the values X_0, X_1, ... sequentially in time and then stop doing so right after some time n, basing our decision to stop on (at most) only what we have seen thus far, then we have the essence of a stopping time. The basic feature is that we do not know the future, hence we can't base our decision to stop now on knowing the future.

To make this precise, let the total information known up to time n, for any given n ≥ 0, be defined as all the information (events) contained in {X_0, ..., X_n}: for example, events of the form {X_0 ∈ A_0, X_1 ∈ A_1, ..., X_n ∈ A_n}, where the A_i ⊆ S are subsets of the state space.

Definition 1.1 Let X = {X_n : n ≥ 0} be a stochastic process. A stopping time τ with respect to X is a random time such that for each n ≥ 0, the event {τ = n} is completely determined by (at most) the total information known up to time n, {X_0, ..., X_n}.

In the context of gambling, in which X_n denotes our total earnings after the nth gamble, a stopping time τ is thus a rule that tells us at what time to stop gambling. Our decision to stop after a given gamble can only depend (at most) on the information known at that time (not on future information). If X_n denotes the price of a stock at time n and τ denotes the time at which we will sell the stock (or buy the stock), then our decision to sell (or buy) at a given time can only depend on the information known at that time (not on future information). The time at which one might exercise an option is yet another example.

Remark 1.1 All of this can be defined analogously for a sequence {X_1, X_2, ...
} in which time is strictly positive, n = 1, 2, ...: τ is a stopping time with respect to this sequence if {τ = n} is completely determined by (at most) the total information known up to time n, {X_1, ..., X_n}.

1.2 Examples

1. (First passage times / hitting times / Gambler's ruin problem:) Suppose that X has a discrete state space and let i be a fixed state. Let τ = min{n ≥ 0 : X_n = i}. This is called the first passage time of the process into state i; it is also called the hitting time of the process to state i. More generally, we can let A be a collection of states, such as A = {2, 3, 9} or A = {2, 4, 6, 8, ...}, and then τ is the first passage time (hitting time) into the set A: τ = min{n ≥ 0 : X_n ∈ A}. ...
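The contrast in Definition 1.1 can be made concrete with a small sketch (the function names and the sample path are illustrative, not from the notes). The first function computes τ = min{n ≥ 0 : X_n ∈ A} by scanning the observed values one step at a time, so its decision to stop at n uses only {X_0, ..., X_n}; the second locates the time at which the path attains its maximum, which is not a stopping time because it needs the entire future of the path before it can decide.

```python
def first_passage_time(path, A):
    """tau = min{n >= 0 : X_n in A}. A stopping time: the decision to
    stop at time n looks only at path[0..n], never ahead."""
    for n, x in enumerate(path):
        if x in A:
            return n
    return None  # the set A was never entered within the observed horizon

def time_of_maximum(path):
    """NOT a stopping time: identifying the overall maximum requires
    seeing the whole path, i.e. future information."""
    return max(range(len(path)), key=lambda n: path[n])

# Gambler's-ruin-style sample path: X_n = total earnings after n gambles.
path = [0, -1, 0, 1, 2, 1]
print(first_passage_time(path, {-1}))  # first hit of state -1 -> 1
print(time_of_maximum(path))           # maximum (value 2) occurs at n = 4
```

Note that `first_passage_time` can return `None`: over a finite observed horizon the path may never enter A, which corresponds to the fact that a first passage time may be infinite.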
This note was uploaded on 10/16/2010 for the course IEOR 4701, taught by Professor Karl Sigman during the Summer '10 term at Columbia.