Chapter 32

Large Deviations for Markov Sequences

This chapter establishes large deviations principles for Markov sequences as natural consequences of the large deviations principles for IID sequences in Chapter 31. (LDPs for continuous-time Markov processes will be treated in the chapter on Freidlin-Wentzell theory.)

Section 32.1 uses the exponential-family representation of Markov sequences to establish an LDP for the two-dimensional empirical distribution (pair measure). The rate function is a relative entropy.

Section 32.2 extends the results of Section 32.1 to other observables for Markov sequences, such as the empirical process and time averages of functions of the state.

For the whole of this chapter, let $X_1, X_2, \ldots$ be a homogeneous Markov sequence, taking values in a Polish space $\Xi$, with transition probability kernel $\mu$, initial distribution $\nu$, and invariant distribution $\rho$. If $\Xi$ is not discrete, we will assume that $\nu$ and $\rho$ have densities $n$ and $r$ with respect to some reference measure, and that $\mu(x, dy)$ has density $m(x, y)$ with respect to that same reference measure, for all $x$. (LDPs can be proved for Markov sequences without such density assumptions (see, e.g., Ellis (1988)), but the argument is more complicated.)

32.1 Large Deviations for the Pair Measure of Markov Sequences

It is perhaps not sufficiently appreciated that Markov sequences form exponential families (Billingsley, 1961; Küchler and Sørensen, 1997). Suppose $\Xi$ is discrete. Then

\begin{align}
P\left(X_1^t = x_1^t\right) &= \nu(x_1) \prod_{i=1}^{t-1} \mu(x_i, x_{i+1}) \tag{32.1}\\
&= \nu(x_1)\, e^{\sum_{i=1}^{t-1} \log \mu(x_i, x_{i+1})} \tag{32.2}\\
&= \nu(x_1)\, e^{\sum_{(x,y) \in \Xi^2} T_{x,y}(x_1^t) \log \mu(x,y)} \tag{32.3}
\end{align}

where $T_{x,y}(x_1^t)$ counts the number of times the state $y$ follows the state $x$ in the sequence $x_1^t$, i.e., it gives the transition counts.
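For a finite state space, the equivalence of the product form (32.1) and the exponential-family form (32.3) is easy to check numerically. The following sketch uses an illustrative two-state chain; the specific numbers for $\nu$ and $\mu$ are hypothetical, not taken from the text.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(42)

# Illustrative two-state chain (these numbers are hypothetical).
nu = np.array([0.6, 0.4])          # initial distribution nu
mu = np.array([[0.9, 0.1],
               [0.3, 0.7]])        # transition probabilities mu(x, y)

# Simulate a path x_1^t.
t = 12
x = [rng.choice(2, p=nu)]
for _ in range(t - 1):
    x.append(rng.choice(2, p=mu[x[-1]]))

# Transition counts T_{x,y}(x_1^t): number of times y follows x.
T = Counter(zip(x[:-1], x[1:]))

# Product form, Eq. (32.1).
p_product = nu[x[0]] * np.prod([mu[a, b] for a, b in zip(x[:-1], x[1:])])

# Exponential-family form, Eq. (32.3): the probability depends on the
# path only through the initial state and the transition counts.
p_expfam = nu[x[0]] * np.exp(sum(n * np.log(mu[a, b])
                                 for (a, b), n in T.items()))

assert np.isclose(p_product, p_expfam)
```

Note that any two paths with the same initial state and the same transition counts receive the same probability, which is exactly what makes the counts sufficient statistics.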
What we have just established is that the Markov chains on $\Xi$ with a given initial distribution form an exponential family, whose natural sufficient statistics are the transition counts, and whose natural parameters are the logarithms of the transition probabilities. (If $\Xi$ is not discrete, but we make the density assumptions mentioned at the beginning of this chapter, we can write

\begin{align}
p_{X_1^t}(x_1^t) &= n(x_1) \prod_{i=1}^{t-1} m(x_i, x_{i+1}) \tag{32.4}\\
&= n(x_1)\, e^{\int_{\Xi^2} dT(x_1^t) \log m(x,y)} \tag{32.5}
\end{align}

where now $T$...
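Because the transition counts are the natural sufficient statistics, inference about the chain can proceed entirely through them; in particular, the maximum-likelihood estimate of $\mu$ is the row-normalized count matrix. This is a standard fact about discrete Markov chains (going back to Billingsley, 1961), not something derived in the text above; the observed path below is likewise hypothetical.

```python
import numpy as np

# Hypothetical observed path on a two-state space (illustrative only).
path = [0, 0, 1, 0, 1, 1, 1, 0, 0, 1]
k = 2  # number of states

# Transition counts T_{x,y}: the natural sufficient statistics.
T = np.zeros((k, k))
for a, b in zip(path[:-1], path[1:]):
    T[a, b] += 1

# MLE of the transition matrix: normalize each row of the count matrix.
mu_hat = T / T.sum(axis=1, keepdims=True)
# For this path, T = [[2, 3], [2, 2]], so mu_hat = [[0.4, 0.6], [0.5, 0.5]].
```

The same row-normalization is what connects the empirical pair measure of Section 32.1 to an estimated transition kernel.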