# EquilibriumDistributions_R - Stat 333 Equilibrium Distributions and the Renewal Theorem

Let $X_n$ be a discrete-time Markov chain with state space $S$ and transition matrix $P$. Let the row vector $p_n$ denote the marginal distribution of the chain at step $n$, i.e., $p_n(j) = P(X_n = j)$ for $j \in S$. Note that $p_n(j) \ge 0$ and $\sum_{j \in S} p_n(j) = 1$, so $p_n$ is a *probability vector*.

**Theorem (the updating rule):** $p_{n+1} = p_n P$.

Suppose $C$ is a closed class with submatrix $P_C$. A probability vector $\pi$ is called an *equilibrium distribution* for the class if it satisfies

$$\pi = \pi P_C \quad \text{subject to} \quad \sum_{j \in C} \pi_j = 1 \qquad (*)$$

Thus an equilibrium distribution remains unchanged under updating. (Note: we write $\pi_j$, rather than $\pi(j)$, for convenience, because there is no need to indicate the time $n$ of the transition here.)

$\pi$ may be determined either by solving the system of linear equations given by (*), or by guessing at the solution and then verifying that (*) holds.

**Main Applied Result:** Let $C$ be a class (periodic or aperiodic) of a Markov chain. Then (a) if $C$ consists of positive recurrent states, then $C$ has a unique equilibrium distribution $\pi$ ...
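The updating rule $p_{n+1} = p_n P$ and the fixed-point property $\pi = \pi P$ can be illustrated numerically. A minimal sketch, assuming a hypothetical 3-state transition matrix (not from the notes) chosen so the whole chain forms one closed, aperiodic, positive recurrent class:

```python
import numpy as np

# Hypothetical 3-state transition matrix (illustration only, not from
# the notes): the chain is a single closed, aperiodic, positive
# recurrent class, so a unique equilibrium distribution exists.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

# Start from an arbitrary probability vector p_0 and apply the
# updating rule p_{n+1} = p_n P repeatedly.
p = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    p = p @ P

# p_n has converged to the equilibrium distribution, here
# pi = (9/28, 3/7, 1/4), which a further update leaves unchanged.
print(p)
print(p @ P)
```

Because this class is aperiodic and positive recurrent, $p_n \to \pi$ for any starting vector $p_0$; for a periodic class the iterates would cycle rather than converge, even though $\pi$ still exists and satisfies (*).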
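The notes say $\pi$ may be found by solving the system of linear equations in (*). One way to sketch that route (again with a hypothetical transition matrix, an assumption for illustration) is to rewrite $\pi = \pi P$ as $(P - I)^\top \pi^\top = 0$, append the normalization constraint $\sum_j \pi_j = 1$ as an extra equation, and solve the stacked system:

```python
import numpy as np

# Hypothetical transition matrix (illustration only, not from the notes).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])
n = P.shape[0]

# pi = pi P  is equivalent to  (P - I)^T pi^T = 0.  Stack the
# normalization row sum_j pi_j = 1 underneath as an extra equation.
A = np.vstack([(P - np.eye(n)).T, np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0

# The system is consistent (an equilibrium distribution exists),
# so least squares recovers pi exactly.
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)        # (9/28, 3/7, 1/4) for this matrix
print(pi @ P)    # verification of (*): updating leaves pi unchanged
```

Guess-and-verify, the other method mentioned in the notes, amounts to checking `np.allclose(pi, pi @ P)` and `pi.sum() == 1` for a candidate vector.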
