# Topics Covered - Introductory Engineering Stochastic Processes (ORIE 361)


Introductory Engineering Stochastic Processes, ORIE 361. Instructor: Mark E. Lewis, Associate Professor, School of Operations Research and Information Engineering, Cornell University. Spring 2008.

## Disclaimer

This file can be used as a study guide. Please note that as the semester progresses, some of the material may be adjusted (added to or subtracted from), since the class may require further clarification on some topics and less on others, depending on your strengths and weaknesses.
## Preliminaries

Course prerequisites:

1. Basic knowledge of random variables
   - Discrete random variables
   - Continuous random variables
2. Independence
3. Expectation
   - Functions of random variables (law of the unconscious statistician)
   - Expectation of linear combinations of random variables
   - Variance of linear combinations of independent random variables
4. Moment generating functions (mgf's)
   - $k$th moments via differentiating mgf's
   - Mgf of sums of independent random variables
   - Uniqueness of mgf's (equal mgf's imply equal distributions)
5. Conditional probability and expectation
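Two of the prerequisite facts above can be checked quickly by simulation. The sketch below (my own illustration, not from the notes) estimates the mean and variance of a sum of two independent Uniform(0, 1) variables, which theory says should be $1/2 + 1/2 = 1$ and $1/12 + 1/12 = 1/6$:

```python
import random
import statistics

# Illustrative simulation (not from the course notes) of two prerequisite facts:
#   E[X + Y] = E[X] + E[Y]            (linearity, always holds)
#   Var(X + Y) = Var(X) + Var(Y)      (requires independence)
random.seed(0)
n = 200_000

x = [random.uniform(0, 1) for _ in range(n)]  # Uniform(0,1): mean 1/2, variance 1/12
y = [random.uniform(0, 1) for _ in range(n)]  # an independent copy
s = [a + b for a, b in zip(x, y)]

mean_s = statistics.fmean(s)   # should be close to 1
var_s = statistics.pvariance(s)  # should be close to 1/6 ~ 0.1667

print(round(mean_s, 2), round(var_s, 2))
```

Note that the variance identity would fail without independence; for example, $\mathrm{Var}(X + X) = 4\,\mathrm{Var}(X)$, not $2\,\mathrm{Var}(X)$.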

## Markov Chains (DTMCs)

Define a stochastic process:

- A sequence of random variables.
- An outcome is called a *sample path*.
- The set of possible outcomes is called the *state space*, say $S$.

The Markov property: given the present, the future is independent of the past. When the state space is discrete,

$$P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i)$$

for all $i, j, i_{n-1}, \ldots, i_0 \in S$. You can think of this as a refinement of independence; under full independence the left-hand side would equal $P(X_{n+1} = j)$.

A discrete-time stochastic process with the Markov property is called a *discrete-time Markov chain (DTMC)*. Since $p_{ij} = P(X_{n+1} = j \mid X_n = i)$ is defined for all $i, j \in S$, this defines a matrix $P$ (with $(i, j)$th element $p_{ij}$). The $p_{ij}$'s are independent of $n$; this is called *time homogeneity*.
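A DTMC is easy to simulate once the matrix $P$ is written down: row $i$ gives the distribution of $X_{n+1}$ given $X_n = i$. The sketch below uses a hypothetical 3-state matrix of my own (not from the notes); each row of a valid transition matrix must sum to 1.

```python
import random

# Hypothetical 3-state transition matrix (illustrative, not from the notes).
# Row i is the conditional distribution of X_{n+1} given X_n = i.
P = [
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
]
states = list(range(len(P)))

def step(i):
    """One transition of the chain: sample X_{n+1} from row i of P."""
    return random.choices(states, weights=P[i])[0]

def sample_path(i0, n):
    """Simulate one sample path X_0, X_1, ..., X_n starting from state i0."""
    path = [i0]
    for _ in range(n):
        path.append(step(path[-1]))
    return path

random.seed(1)
print(sample_path(0, 10))
```

Time homogeneity shows up in the code as the fact that `step` uses the same matrix `P` at every $n$.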
## Transient Distributions

$n$-step transition probabilities: $P(X_n = j \mid X_0 = i) = p^{(n)}_{ij}$.

Chapman-Kolmogorov equations:

$$p^{(n+m)}_{ij} = \sum_{k \in S} p^{(n)}_{ik}\, p^{(m)}_{kj} = (P^n P^m)_{ij}$$

(just multiply matrices).

Initial distributions: let $\alpha_j = P(X_0 = j)$; then

$$P(X_n = k) = \sum_{j \in S} \alpha_j\, p^{(n)}_{jk}$$

(condition on the initial state).
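The "just multiply matrices" remark can be verified numerically. A minimal plain-Python sketch, using an illustrative 2-state matrix of my own, checks that $P^{n+m} = P^n P^m$ and computes the time-$n$ distribution from an initial distribution $\alpha$:

```python
# Illustrative check of the Chapman-Kolmogorov identity P^(n+m) = P^n P^m
# (matrix and initial distribution are my own examples, not from the notes).

def matmul(A, B):
    """Plain-Python matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matpow(P, n):
    """P^n by repeated multiplication (the identity matrix for n = 0)."""
    result = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        result = matmul(result, P)
    return result

P = [[0.9, 0.1],
     [0.5, 0.5]]

lhs = matpow(P, 5)                        # P^(2+3)
rhs = matmul(matpow(P, 2), matpow(P, 3))  # P^2 P^3

# Time-n distribution: P(X_n = k) = sum_j alpha_j p_jk^(n), i.e. alpha P^n.
alpha = [0.5, 0.5]
dist5 = matmul([alpha], matpow(P, 5))[0]

print(all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
          for i in range(2) for j in range(2)))  # True
```

Treating `alpha` as a row vector and multiplying on the left is exactly the "condition on the initial state" formula above.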

## Long-Run Behavior (as $n \to \infty$)

We need to classify states. Starting in state $i$:

1. *Transient*: the chain returns to state $i$ only a finite number of times.
2. *Recurrent*: the chain returns to state $i$ an infinite number of times.

More formally, let $T_i = \min\{n > 0 \mid X_n = i\}$ (the first return time):

1. Transient: $P(T_i < \infty \mid X_0 = i) < 1$.
2. Recurrent: $P(T_i < \infty \mid X_0 = i) = 1$.

Several equivalent checks: let $f_i$ = the probability that, starting in $i$, we ever return to state $i$.

1. Transient: $f_i < 1$. Alternatively, $\sum_{n=0}^{\infty} p^{(n)}_{ii} < \infty$.
2. Recurrent: $f_i = 1$. Alternatively, $\sum_{n=0}^{\infty} p^{(n)}_{ii} = \infty$.

$j$ is *accessible* from $i$ (written $i \to j$; this is not a limit) if $p^{(n)}_{ij} > 0$ for some $n \geq 0$ (note that $n = 0$ is included). $j$ *communicates* with $i$ (written $i \leftrightarrow j$) if $i \to j$ and $j \to i$. Communication is an equivalence relation.
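Since communication is an equivalence relation, it partitions the state space into communicating classes, which can be computed from the positive entries of $P$ alone. A hedged sketch, with an example matrix of my own in which state 2 is absorbing:

```python
# Illustrative computation of communicating classes (example is my own,
# not from the notes). i -> j iff p_ij^(n) > 0 for some n >= 0; since n = 0
# is allowed, every state is accessible from itself.

def reachable(P, i):
    """All states accessible from i via transitions of positive probability."""
    seen = {i}  # n = 0 is included, so i is always accessible from itself
    stack = [i]
    while stack:
        u = stack.pop()
        for v, p in enumerate(P[u]):
            if p > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def communicating_classes(P):
    """Equivalence classes of the relation 'i -> j and j -> i'."""
    reach = [reachable(P, i) for i in range(len(P))]
    classes = []
    for i in range(len(P)):
        cls = frozenset(j for j in reach[i] if i in reach[j])
        if cls not in classes:
            classes.append(cls)
    return classes

# States 0 and 1 communicate but can leak into the absorbing state 2,
# so {0, 1} is a transient class and {2} a recurrent one.
P = [[0.5, 0.4, 0.1],
     [0.6, 0.4, 0.0],
     [0.0, 0.0, 1.0]]
print(communicating_classes(P))  # [frozenset({0, 1}), frozenset({2})]
```

Note that this reachability computation only identifies the classes; deciding recurrence versus transience within a class uses the $f_i$ or $\sum_n p^{(n)}_{ii}$ criteria above.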

This note was uploaded on 04/03/2008 for the course ORIE 361, taught by Professor M. Lewis, during the Spring '07 term at Cornell University (Engineering School).
