# martingaleNotes - IEOR 4106: Introduction to Operations Research: Stochastic Models


IEOR 4106: Introduction to Operations Research: Stochastic Models
Spring 2009, Professor Whitt
Supplementary Notes on Martingales and Brownian Motion
April 28-30, 2009

## 1. Martingales

We start by defining a martingale, working in discrete time.

**Definition 0.1** Let $\{X_n : n \ge 0\}$ and $\{Y_n : n \ge 0\}$ be stochastic processes (sequences of random variables). We say that $\{X_n : n \ge 0\}$ is a **martingale with respect to** $\{Y_n : n \ge 0\}$ if

(i) $E[|X_n|] < \infty$ for all $n \ge 0$, and

(ii) $E[X_{n+1} \mid Y_0, Y_1, \dots, Y_n] = X_n$ for all $n \ge 0$.

**a. More on Definition 0.1.** In Definition 0.1 we think of the stochastic process $\{Y_n : n \ge 0\}$ as constituting the **history** or **information**. Then $\{Y_k : 0 \le k \le n\}$ is the history up to (and including) time $n$. The random variables $Y_k$ could be random vectors, as we illustrate below.

We simply say that $\{X_n : n \ge 0\}$ is a **martingale** if $\{X_n : n \ge 0\}$ is a martingale with respect to $\{X_n : n \ge 0\}$; i.e., if the history process $\{Y_n : n \ge 0\}$ is the stochastic process $\{X_n : n \ge 0\}$ itself. We then also say that $\{X_n : n \ge 0\}$ is a martingale with respect to its **internal history** (the history generated by $\{X_n : n \ge 0\}$).

In the literature on martingales, the histories are usually characterized via sigma-fields of events, denoted by $\mathcal{F}_n$ for $n \ge 0$. We know whether or not each of the events in $\mathcal{F}_n$ occurred by time $n$. We then write, instead of (ii) above,

(ii) $E[X_{n+1} \mid \mathcal{F}_n] = X_n$ for all $n \ge 0$,

where $\mathcal{F}_n$ is understood to be the history up to time $n$. With that notation, we assume the history is cumulative, starting at time 0. Then $\mathcal{F}_n$ can be understood to be shorthand for $\{Y_k : 0 \le k \le n\}$.

**b. Conditional Expectation.** In order to understand the definitions above, we need to understand conditional expectation. The basic concepts are reviewed in the first four sections of Chapter 3 in Ross. In particular, we need to know what $E[X \mid Y]$ means for random variables or random vectors $X$ and $Y$. For this, see p. 106 of Ross.
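The definition can be checked numerically on a concrete example. The sketch below (not from the notes; the function name and parameters are illustrative) uses the simple symmetric random walk $X_n = Y_1 + \dots + Y_n$ with i.i.d. steps $P(Y_k = +1) = P(Y_k = -1) = 1/2$, which is a martingale with respect to its steps because $E[X_{n+1} \mid Y_1, \dots, Y_n] = X_n + E[Y_{n+1}] = X_n$. Since the next step is independent of the past, conditioning on the current value $X_n$ alone suffices here, so we estimate $E[X_{n+1} \mid X_n = s]$ by Monte Carlo and confirm it is close to $s$.

```python
import random
from collections import defaultdict

def conditional_next_means(n=10, n_paths=200_000, seed=42):
    """Estimate E[X_{n+1} | X_n = s] by grouping simulated paths on X_n."""
    rng = random.Random(seed)
    sums = defaultdict(float)    # running sum of X_{n+1} over paths with X_n = s
    counts = defaultdict(int)    # number of paths with X_n = s
    for _ in range(n_paths):
        # Simulate X_n for the simple symmetric random walk, then one more step.
        x_n = sum(rng.choice((-1, 1)) for _ in range(n))
        x_next = x_n + rng.choice((-1, 1))
        sums[x_n] += x_next
        counts[x_n] += 1
    # Only report values of s seen often enough for a stable estimate.
    return {s: sums[s] / counts[s] for s in counts if counts[s] >= 5_000}

estimates = conditional_next_means()
for s, est in sorted(estimates.items()):
    print(f"E[X_11 | X_10 = {s:+d}] is approximately {est:+.3f}")
```

Each printed estimate should be within Monte Carlo error of the conditioning value $s$ itself, which is exactly condition (ii) of Definition 0.1 for this walk.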
By $E[X \mid Y]$ we mean a random variable. In particular, $E[X \mid Y] = E[X \mid Y = y]$ when $Y = y$. Thus $E[X \mid Y]$ can be regarded as a deterministic function of the random variable $Y$, so it is itself a random variable. Since (in the discrete case)

$$E[X] = \sum_y E[X \mid Y = y] P(Y = y) = E[E[X \mid Y]],$$

we have the fundamental relation

$$E[E[X \mid Y]] = E[X]$$

for all random variables $X$ and $Y$. As a consequence, for a martingale $\{X_n : n \ge 0\}$ with respect to $\{Y_n : n \ge 0\}$, we have

$$E[X_{n+1}] = E[E[X_{n+1} \mid Y_0, Y_1, \dots, Y_n]] = E[X_n]$$

for all $n \ge 0$. Thus, by mathematical induction, for a martingale $E[X_n] = E[X_0]$ for all $n \ge 1$. This last expected-value relation is a consequence of the martingale property, but it is not equivalent to it; the martingale property implies more than that.
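The tower identity $E[E[X \mid Y]] = E[X]$ can also be verified by exact computation on a small finite distribution. The joint pmf below is a hand-built illustration (not from the notes); fractions keep the arithmetic exact, so the two sides agree with no rounding.

```python
from fractions import Fraction as F

# A small hypothetical joint pmf: (x, y) -> P(X = x, Y = y).
pmf = {
    (1, 0): F(1, 8), (2, 0): F(1, 8), (3, 0): F(1, 4),
    (1, 1): F(1, 4), (2, 1): F(1, 8), (3, 1): F(1, 8),
}
assert sum(pmf.values()) == 1  # sanity check: probabilities sum to one

# Marginal pmf of Y: P(Y = y) = sum_x P(X = x, Y = y).
p_y = {}
for (x, y), p in pmf.items():
    p_y[y] = p_y.get(y, F(0)) + p

def cond_exp_x_given(y):
    """E[X | Y = y] = sum_x x * P(X = x | Y = y)."""
    return sum(x * p for (x, yy), p in pmf.items() if yy == y) / p_y[y]

# Left side: E[X] computed directly from the joint pmf.
ex_direct = sum(x * p for (x, _), p in pmf.items())
# Right side: E[E[X|Y]] = sum_y E[X | Y = y] * P(Y = y).
ex_tower = sum(cond_exp_x_given(y) * py for y, py in p_y.items())

print(ex_direct, ex_tower)  # prints: 2 2
```

Here $E[X \mid Y = 0] = 9/4$ and $E[X \mid Y = 1] = 7/4$, each occurring with probability $1/2$, so the tower side averages to $2$, exactly matching the direct computation of $E[X]$.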

## This note was uploaded on 01/26/2011 for the course IEOR 4106 taught by Professor Whitt during the Spring '08 term at Columbia.
