Chapter 26: Decomposition of Stationary Processes into Ergodic Components

This chapter is concerned with the decomposition of asymptotically mean-stationary processes into ergodic components.

Section 26.1 shows how to write the stationary distribution as a mixture of distributions, each of which is stationary and ergodic, and each of which is supported on a distinct part of the state space. This is connected to ideas in nonlinear dynamics, each ergodic component being a different basin of attraction.

Section 26.2 lays out some connections to statistical inference: ergodic components can be seen as minimal sufficient statistics, and they lead to powerful tests.

26.1 Construction of the Ergodic Decomposition

In the last lecture, we saw that the stationary distributions of a given dynamical system form a convex set, with the ergodic distributions as its extremal points. A standard result in convex analysis is that any point in a convex set can be represented as a convex combination of the extremal points. Thus, any stationary distribution can be represented as a mixture of stationary and ergodic distributions. We would like to be able to determine the weights used in the mixture, and, even more, to give them some meaningful stochastic interpretation.

Let us begin by thinking about the effective distribution we get from taking time averages starting from a given point. For every measurable set $B$ and every finite $t$, the time average $A_t \mathbf{1}_B(x) = t^{-1} \sum_{n=0}^{t-1} \mathbf{1}_B(T^n x)$ is a well-defined measurable function. As $B$ ranges over the $\sigma$-field $\mathcal{X}$, holding $x$ and $t$ fixed, we get a set function, and one which, moreover, meets the requirements for being a probability measure. Suppose we go further and pass to the limit.

Definition 316 (Long-Run Distribution) The long-run distribution starting from the point $x$ is the set function $\lambda(x)$, defined through

    $\lambda(x, B) = \lim_{t \to \infty} A_t \mathbf{1}_B(x)$,

when the limit exists for all $B \in \mathcal{X}$.
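To make the definition concrete, here is a minimal numerical sketch (my own toy example, not from the text): a hypothetical map $T$ on $[0,1)$ whose two halves are each invariant, with an irrational rotation acting inside each half, so each half is a separate ergodic component (a basin of attraction in the sense above). Estimating $\lambda(x, B)$ by averaging $\mathbf{1}_B$ along the orbit shows that the long-run distribution depends only on which component the starting point $x$ lies in.

```python
import math

def T(x):
    """Toy map on [0, 1) with two ergodic components (hypothetical example):
    the halves [0, 0.5) and [0.5, 1) are each invariant, and T acts as an
    irrational rotation within whichever half contains x."""
    a = (math.sqrt(5) - 1) / 4  # irrational step, scaled to a half-circle of length 0.5
    if x < 0.5:
        return (x + a) % 0.5
    return 0.5 + ((x - 0.5 + a) % 0.5)

def long_run_probability(x, indicator, t=100_000):
    """Estimate lambda(x, B) = lim_t A_t 1_B(x) by averaging the indicator
    function of B over the first t steps of the orbit of x."""
    total = 0
    for _ in range(t):
        total += indicator(x)
        x = T(x)
    return total / t

B = lambda x: x < 0.5  # the set B = [0, 0.5)

# An orbit started in the lower half never leaves it, so lambda(x, B) = 1;
# started in the upper half, lambda(x, B) = 0.
print(long_run_probability(0.1, B))  # 1.0
print(long_run_probability(0.9, B))  # 0.0
```

Within a single component the rotation is ergodic, so the long-run distribution is uniform on that half; the time average of the indicator of any subinterval converges to its (normalized) length, as the ergodic theorem of the previous lectures predicts.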
If $\lambda(x)$ exists, $x$ is an ergodic point. The set of all ergodic points is $E$. Notice that whether or not $\lambda(x)$ exists depends only on $x$ (and on $T$ and $\mathcal{X}$); the initial distribution has nothing to do with it. Let us look at some properties of the long-run distributions. (The name "ergodic point" is justified by one of them, Proposition 318.)

Proposition 317 If $x \in E$, then $\lambda(x)$ is a probability distribution.

Proof: For every $t$, the set function $B \mapsto A_t \mathbf{1}_B(x)$ is clearly a probability measure. Since $\lambda(x)$ is defined by passage to the limit, the Vitali-Hahn Theorem (285) says that $\lambda(x)$ must be one as well. $\Box$

Proposition 318 If $x \in E$, then $\lambda(x)$ is ergodic.

Proof: For every invariant set $I$, $\mathbf{1}_I(T^n x) = \mathbf{1}_I(x)$ for all $n$, so each time average $A_t \mathbf{1}_I(x)$ equals $\mathbf{1}_I(x)$. Hence the limit $A \mathbf{1}_I(x)$ exists and is either 0 or 1. This means $\lambda(x)$ assigns every invariant set either probability 0 or probability 1, so by Definition 300 it is ergodic. $\Box$
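The mechanism in the proof of Proposition 318 can be checked by hand on a finite toy system (again a hypothetical example, not from the text): for a permutation with two cycles, the indicator of an invariant set is constant along every orbit, so every time average $A_t \mathbf{1}_I(x)$, and hence the limit, is already 0 or 1.

```python
# Finite toy system: a permutation of {0, 1, 2, 3} with two cycles,
# (0 1) and (2 3). Each cycle is an ergodic component, and I = {0, 1}
# is an invariant set: T^{-1} I = I.
T = {0: 1, 1: 0, 2: 3, 3: 2}
I = {0, 1}

def running_averages(x, n=6):
    """Return the time averages A_t 1_I(x) for t = 1, ..., n along the orbit of x."""
    avgs, total = [], 0
    for t in range(1, n + 1):
        total += 1 if x in I else 0
        x = T[x]
        avgs.append(total / t)
    return avgs

print(running_averages(0))  # [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
print(running_averages(2))  # [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

The averages are constant in $t$ at $\mathbf{1}_I(x)$, exactly as the proof asserts; no limit-taking is even needed for invariant sets.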
  • Spring '06
  • Shalizi