ECE 5510: Random Processes Lecture Notes Fall 2009

Lecture 22

Today: (1) Markov Property (2) Markov Chains

HW 9 due today at 5pm; HW 10 due Thu Dec 10 at 10:45am (on web). Application Assignment 6 due Tue, Dec 8 at midnight.

1 Markov Processes

We've talked about:

1. i.i.d. random sequences
2. WSS random sequences

Each sample of an i.i.d. sequence has no dependence on past samples. Each sample of a WSS sequence may depend on many (possibly infinitely many) previous samples. Now we'll talk specifically about random processes for which the distribution of X_{n+1} depends at most on the most recent sample. A random process like this is said to have the Markov property. A quick way to describe a Markov process is to say that, given the present value, its future is independent of the past.

It turns out there is quite a variety of random processes which have the Markov property. The benefit is that you can do a lot of analysis using a program like Matlab and come up with valuable answers for the design of systems.

1.1 Definition

Defn: Markov Process
A discrete-time random process X_n is Markov if it has the property that

    P[X_{n+1} | X_n, X_{n-1}, X_{n-2}, ...] = P[X_{n+1} | X_n]

A continuous-time random process X(t) is Markov if it has the property that, for t_{n+1} > t_n > t_{n-1} > t_{n-2} > ...,

    P[X(t_{n+1}) | X(t_n), X(t_{n-1}), X(t_{n-2}), ...] = P[X(t_{n+1}) | X(t_n)]

Examples: For each one, write P[X(t_{n+1}) | X(t_n), X(t_{n-1}), X(t_{n-2}), ...] and P[X(t_{n+1}) | X(t_n)]:

- Brownian motion: the value of X_{n+1} is equal to X_n plus the random motion that occurs between time n and n + 1.
- Any independent-increments process.
- Gambling or investment value over time.
- Digital computers and control systems. The state is described by what is in the computer's memory, and the transitions may be nonrandom (described by a deterministic algorithm) or random. Randomness may arrive from input signals.
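The notes point out that analysis of Markov processes is often done numerically in a program like Matlab. As a minimal sketch in Python (function name `simulate_random_walk` and the +/-1 step distribution are my own choices for illustration), here is a simulation of one of the examples above, a discrete random walk: because each increment is independent of everything before it, the distribution of X_{n+1} depends only on X_n, which is exactly the Markov property.

```python
import random

def simulate_random_walk(n_steps, seed=None):
    """Simulate a random walk X_{n+1} = X_n + W_n, where the
    increments W_n are i.i.d. +/-1 steps.

    Each increment is independent of the past, so the distribution
    of X_{n+1} given the whole history equals its distribution given
    X_n alone: the process is Markov.
    """
    rng = random.Random(seed)
    x = 0
    path = [x]
    for _ in range(n_steps):
        x += rng.choice([-1, 1])  # independent increment
        path.append(x)
    return path
```

The same structure applies to any independent-increments process: only the distribution of the increment changes.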
Notes:

- If you need more than one past sample to predict a future value, then the process is not Markov.
- The value X_n is also called the state. The change from X_n to X_{n+1} is called the state transition.
- i.i.d. random processes are also Markov.

1.2 Visualization

We make diagrams to show the possible progression of a Markov process. Each state is a circle, and each transition is an arrow labeled with the probability of that transition.

Example: Discrete Telegraph Wave r.p.
Let X_n be a Binomial r.p. with parameter p, and let Y_n = (-1)^{X_n}. Each time a trial is a success, the r.p. switches from +1 to -1 or vice versa. See the state transition diagram drawn in Figure 1.

[Figure 1: A state transition diagram for the Discrete Telegraph Wave. The two states are +1 and -1; each state transitions to the other with probability p and stays the same with probability 1 - p.]
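The state transition diagram for the telegraph wave can also be simulated directly. A minimal Python sketch (function name `telegraph_wave` is my own): at each step a Bernoulli trial succeeds with probability p, in which case the state switches sign; otherwise it stays put, matching the transition probabilities in the diagram.

```python
import random

def telegraph_wave(n_steps, p, seed=None):
    """Simulate the Discrete Telegraph Wave: start at Y_0 = +1 and,
    at each step, switch sign with probability p (a Bernoulli
    'success') or remain in the same state with probability 1 - p."""
    rng = random.Random(seed)
    y = 1
    samples = [y]
    for _ in range(n_steps):
        if rng.random() < p:  # trial is a success: switch state
            y = -y
        samples.append(y)
    return samples
```

With p = 0 the wave never switches, and with p = 1 it alternates every step; intermediate p gives random dwell times in each state.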
This note was uploaded on 09/15/2011 for the course ECE 5510 taught by Professor Chen, R. during the Fall '08 term at the University of Utah.