For the two-state chain with generator
$$Q = \begin{pmatrix} -\lambda & \lambda \\ \mu & -\mu \end{pmatrix},$$
the transition matrix is
$$P(t) = \frac{1}{\lambda + \mu}\begin{pmatrix} \mu & \lambda \\ \mu & \lambda \end{pmatrix} + \frac{e^{-(\lambda + \mu)t}}{\lambda + \mu}\begin{pmatrix} \lambda & -\lambda \\ -\mu & \mu \end{pmatrix}.$$
As $t \to \infty$ the second term vanishes, so every row of $P(t)$ converges to the same asymptotic distribution $\big(\tfrac{\mu}{\lambda+\mu}, \tfrac{\lambda}{\lambda+\mu}\big)$, regardless of the initial state ("loss of memory").
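A quick numerical check of this closed form against the matrix exponential; this sketch is not from the slides, and the rates lam, mu and time t are arbitrary illustrative values:

```python
# Check the two-state closed form against expm(Q t).
# lam, mu, t are hypothetical values, not from the slides.
import numpy as np
from scipy.linalg import expm

lam, mu, t = 2.0, 3.0, 0.7
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])

limit = np.array([[mu, lam],
                  [mu, lam]]) / (lam + mu)            # "loss of memory" part
transient = np.exp(-(lam + mu) * t) / (lam + mu) * np.array([[ lam, -lam],
                                                             [-mu,   mu]])

print(np.allclose(expm(Q * t), limit + transient))    # True
```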
Analytically Solving for P(t)

Poisson process:
$$P(t, i, i+j) = e^{-\lambda t}\,\frac{(\lambda t)^j}{j!}, \qquad j \ge 0,\ t \ge 0,$$
i.e. $N(t) - N(0) \overset{D}{=} \text{Poisson}(\lambda t)$.

How to verify:
$$P'(t, i, i+j) = \sum_{k=0}^{\infty} Q(i, k)\, P(t, k, i+j) = -\lambda\, P(t, i, i+j) + \lambda\, P(t, i+1, i+j).$$
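The sum collapses to two terms because only $Q(i,i) = -\lambda$ and $Q(i,i+1) = \lambda$ are nonzero in row $i$. A sanity check of the identity, using hypothetical values of $\lambda$, $t$, and $j$ (this sketch is not from the slides):

```python
# Numerical check of the backwards equation for the Poisson process.
# lam, t, j are hypothetical values chosen for illustration.
from scipy.stats import poisson

lam, t, j = 1.5, 2.0, 4
P = lambda s, k: poisson.pmf(k, lam * s)      # P(s, i, i + k)

h = 1e-6                                      # central finite difference for P'
lhs = (P(t + h, j) - P(t - h, j)) / (2 * h)
rhs = -lam * P(t, j) + lam * P(t, j - 1)      # only k = i and k = i + 1 contribute

print(lhs, rhs)                               # the two values agree closely
```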
Analytically Solving for P(t)

Numerically solving for P(t):

- If one is interested in only one row of P(t), one can use simulation.
- Compute $\exp(Qt) = \sum_{n=0}^{m} \frac{Q^n t^n}{n!}$ for a large value of m (see the sketch below). [Computes P(t) only for one value of t]
- Solve the system of ordinary differential equations from either Slide 45 or Slide 46 via time-stepping. [Computes P(Δ), P(2Δ), P(3Δ), ...]
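A minimal sketch of the truncated-series approach, compared against scipy.linalg.expm; the 3-state generator Q is hypothetical, and the terms are accumulated iteratively to avoid computing large factorials:

```python
# Truncated power series for exp(Q t) vs. scipy's expm.
# Q is a hypothetical 3-state generator (rows sum to zero).
import numpy as np
from scipy.linalg import expm

Q = np.array([[-2.0,  1.0,  1.0],
              [ 0.5, -1.0,  0.5],
              [ 1.0,  2.0, -3.0]])
t, m = 0.5, 50

term = np.eye(3)                  # n = 0 term: Q^0 t^0 / 0! = I
P_series = term.copy()
for n in range(1, m + 1):
    term = term @ (Q * t) / n     # Q^n t^n / n!, built up iteratively
    P_series += term

print(np.allclose(P_series, expm(Q * t)))   # True
```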
8.13 The Forwards Equations

Given $\mu(x) = P(X(0) = x)$ for $x \in S$ (the initial distribution of X), our goal is to compute the row vector $p(t) = (p(t, y) : y \in S)$, where $p(t, y) = P(X(t) = y)$.

Then
$$p(t) = \mu P(t), \qquad p'(t) = \mu P'(t) = \mu P(t) Q = p(t) Q,$$
using $P'(t) = P(t) Q$. So $(p(t) : t \ge 0)$ satisfies
$$p'(t) = p(t) Q \quad \text{s/t} \quad p(0) = \mu.$$
The Forwards Equations

Forwards equations in continuous time:
$$p'(t) = p(t) Q$$

Forwards equations in discrete time:
$$p_n - p_{n-1} = p_{n-1}(P - I)$$
(see Slide 9 of Topic 1).

Time-stepping the forwards equations involves only matrix/vector multiplies, which is more efficient than computing the full matrix P(t); a sketch follows below.
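A minimal sketch of Euler time-stepping for the forwards equations, assuming the hypothetical 3-state generator Q from the earlier sketch and an initial distribution concentrated on state 0:

```python
# Euler time-stepping of the forwards equations p'(t) = p(t) Q.
# Q, mu, dt, and n_steps are hypothetical; the loop reaches t = 1.
import numpy as np

Q = np.array([[-2.0,  1.0,  1.0],
              [ 0.5, -1.0,  0.5],
              [ 1.0,  2.0, -3.0]])
mu = np.array([1.0, 0.0, 0.0])    # X(0) = 0 with probability 1

dt, n_steps = 1e-3, 1000
p = mu.copy()
for _ in range(n_steps):
    p = p + dt * (p @ Q)          # one vector/matrix multiply per step

print(p, p.sum())                 # approximates mu @ expm(Q); sums to ~1
```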
8.14 The Backwards Equations

Given a reward function $r(x)$ for $x \in S$, our goal is to compute the expected reward column vector $u(t) = (u(t, x) : x \in S)$, where $u(t, x) = E_x\, r(X(t))$.

Then
$$u(t) = P(t)\, r, \qquad u'(t) = P'(t)\, r = Q P(t)\, r = Q u(t),$$
using $P'(t) = Q P(t)$. So $(u(t) : t \ge 0)$ satisfies
$$u'(t) = Q u(t) \quad \text{s/t} \quad u(0) = r.$$
The Backwards Equations

Backwards equations in continuous time:
$$u'(t) = Q u(t)$$

Backwards equations in discrete time:
$$u_n - u_{n-1} = (P - I)\, u_{n-1}$$

Time-stepping the backwards equations likewise involves only matrix/vector multiplies; a sketch follows below.

Recap: $P'(t) = P(t) Q$ and $P'(t) = Q P(t)$ hold with $P(0) = I$; $p'(t) = p(t) Q$ with $p(0) = \mu$; $u'(t) = Q u(t)$ with $u(0) = r$. Each can be solved by simulation, Mathematica, etc.
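The analogous sketch for the backwards equations; the reward vector r is hypothetical, and Q is reused from the forwards sketch:

```python
# Euler time-stepping of the backwards equations u'(t) = Q u(t).
# The reward vector r is hypothetical; Q is as in the forwards sketch.
import numpy as np

Q = np.array([[-2.0,  1.0,  1.0],
              [ 0.5, -1.0,  0.5],
              [ 1.0,  2.0, -3.0]])
r = np.array([0.0, 1.0, 5.0])     # hypothetical reward r(x)

dt, n_steps = 1e-3, 1000
u = r.copy()
for _ in range(n_steps):
    u = u + dt * (Q @ u)          # one matrix/vector multiply per step

print(u)                          # u[x] approximates E_x r(X(1))
```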
8.15 First Transition Analysis for Markov Jump Processes

Follows closely what we did for DTMC's. In fact, to compute entrance/absorption probabilities, it is a special case of our earlier theory:
$$u(x) = P_x(X(T) \in A), \qquad T = \inf\{t \ge 0 : X(t) \in C^c\},\ A \subseteq C^c.$$
Note that
$$u(x) = P_x\big(Y_{\tilde T} \in A\big), \qquad \tilde T = \inf\{n \ge 0 : Y_n \in C^c\},$$
where $(Y_n)$ is the embedded jump chain with transition matrix R.
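A sketch of the absorption-probability computation via the embedded jump chain, for a hypothetical 4-state chain with transient set C = {0, 1} and target A = {2} (state 3 is a second absorbing state); the generator Q below is invented for illustration:

```python
# Absorption probabilities via first transition analysis on the jump chain.
# Q is a hypothetical 4-state generator: C = {0, 1} transient, A = {2}.
import numpy as np

Q = np.array([[-3.0,  1.0, 1.0, 1.0],
              [ 2.0, -4.0, 1.0, 1.0],
              [ 0.0,  0.0, 0.0, 0.0],
              [ 0.0,  0.0, 0.0, 0.0]])
C, A = [0, 1], [2]
lam = -np.diag(Q)                         # holding rates lambda(x)

R = Q[C, :] / lam[C, None]                # R(x, y) = Q(x, y)/lambda(x), y != x
for k, x in enumerate(C):
    R[k, x] = 0.0                         # the jump chain never stays put

# First transition analysis:
# u(x) = sum_{y in C} R(x, y) u(y) + sum_{y in A} R(x, y)
R_CC, b = R[:, C], R[:, A].sum(axis=1)
u = np.linalg.solve(np.eye(len(C)) - R_CC, b)
print(u)                                  # P_x(X(T) in A) for x in C; here [0.5, 0.5]
```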
First Transition Analysis

Computing expected hitting times:
$$T = \inf\{t \ge 0 : X(t) \in C^c\}, \qquad C^c \subseteq S, \qquad u(x) = E_x T, \quad x \in C.$$
Conditioning on the first jump (with $R(x, y) = Q(x, y)/\lambda(x)$ for $y \ne x$):
$$u(x) = \frac{1}{\lambda(x)} + \sum_{y \in C} R(x, y)\, u(y), \qquad x \in C.$$
Equivalently,
$$\lambda(x)\, u(x) = 1 + \sum_{y \ne x} Q(x, y)\, u(y), \qquad x \in C,$$
i.e.
$$\sum_{y \in C} Q(x, y)\, u(y) = -1, \qquad x \in C.$$
First Transition Analysis

Continuous-time:
$$\sum_{y \in C} Q(x, y)\, u(y) = -1, \qquad x \in C.$$

Discrete-time:
$$\sum_{y \in C} (P - I)(x, y)\, u(y) = -1, \qquad x \in C.$$
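A sketch of the continuous-time linear solve, reusing the hypothetical 4-state generator from the absorption-probability sketch with C = {0, 1}:

```python
# Expected hitting times via sum_{y in C} Q(x, y) u(y) = -1.
# Reuses the hypothetical 4-state generator Q with C = {0, 1}.
import numpy as np

Q = np.array([[-3.0,  1.0, 1.0, 1.0],
              [ 2.0, -4.0, 1.0, 1.0],
              [ 0.0,  0.0, 0.0, 0.0],
              [ 0.0,  0.0, 0.0, 0.0]])
C = [0, 1]

Q_CC = Q[np.ix_(C, C)]                    # restrict Q to the transient set C
u = np.linalg.solve(Q_CC, -np.ones(len(C)))
print(u)                                  # E_x T for x in C; here [0.5, 0.5]
```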
