Consider a continuous-time Markov chain with two states, 1 and 0, where 0 is an absorbing state. Let state 1 be the initial state. The process spends an exponential time $T_1 \sim \exp(\lambda)$ in state 1 and then jumps to state 0. In state 1 the reward rate is 1, and at the jump epoch there is no lump-sum reward. In state 0 the process collects no rewards. Let the discount factor be $\alpha$ and let $T \sim \exp(\alpha)$.
The total discounted rewards under the two definitions are
\[
J_1 = \int_0^{T_1} e^{-\alpha t}\, dt = \frac{1}{\alpha}\bigl(1 - e^{-\alpha T_1}\bigr),
\qquad
J_2 = \int_0^{T \wedge T_1} dt = T \wedge T_1.
\]
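As a brief check not spelled out in the excerpt, the two definitions have the same expectation, which is what makes the variance comparison below meaningful: using $\mathbb{E}\, e^{-\alpha T_1} = M_{T_1}(-\alpha) = \lambda/(\lambda+\alpha)$ and the fact that $T \wedge T_1 \sim \exp(\lambda + \alpha)$,

```latex
\[
\mathbb{E}[J_1]
  = \frac{1}{\alpha}\Bigl(1 - \frac{\lambda}{\lambda+\alpha}\Bigr)
  = \frac{1}{\lambda+\alpha},
\qquad
\mathbb{E}[J_2]
  = \mathbb{E}[T \wedge T_1]
  = \frac{1}{\lambda+\alpha}.
\]
```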
For the first definition,
\[
\operatorname{var}(J_1)
  = \frac{1}{\alpha^2}\operatorname{var}\bigl(e^{-\alpha T_1}\bigr)
  = \frac{1}{\alpha^2}\Bigl(M_{T_1}(-2\alpha) - \bigl(M_{T_1}(-\alpha)\bigr)^2\Bigr)
  = \frac{\lambda}{(\lambda+\alpha)^2(\lambda+2\alpha)},
\]
where $M_X(s)$ is the moment generating function of a random variable $X$. In particular, $M_{T_1}(s) = \lambda/(\lambda - s)$, so $M_{T_1}(-2\alpha) = \lambda/(\lambda+2\alpha)$ and $M_{T_1}(-\alpha) = \lambda/(\lambda+\alpha)$.

1212 E. A. FEINBERG AND J. FEI

Since $T \wedge T_1$ is an exponential random variable with intensity $\lambda + \alpha$,
\[
\operatorname{var}(J_2) = \frac{1}{(\lambda+\alpha)^2}.
\]
Thus, $\operatorname{var}(J_1) < \operatorname{var}(J_2)$, since $\operatorname{var}(J_1) = \frac{\lambda}{\lambda+2\alpha}\operatorname{var}(J_2)$ and $\lambda/(\lambda+2\alpha) < 1$.
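The inequality is easy to check numerically. The sketch below is an addition, not part of the paper; the parameter values $\lambda = 1$, $\alpha = 0.5$ are arbitrary illustrative choices. It simulates $T_1 \sim \exp(\lambda)$ and an independent $T \sim \exp(\alpha)$, forms $J_1$ and $J_2$ as defined above, and compares the empirical variances against the closed-form expressions.

```python
import math
import random
import statistics

def simulate(lam: float, alpha: float, n: int, seed: int = 0):
    """Draw n samples of (J1, J2) for the two-state absorbing chain.

    J1 discounts the unit reward stream by e^{-alpha t} up to the
    sojourn time T1; J2 runs the undiscounted stream until the
    independent killing time T ~ exp(alpha) or T1, whichever is first.
    """
    rng = random.Random(seed)
    j1, j2 = [], []
    for _ in range(n):
        t1 = rng.expovariate(lam)    # sojourn time in state 1
        t = rng.expovariate(alpha)   # independent killing time
        j1.append((1.0 - math.exp(-alpha * t1)) / alpha)
        j2.append(min(t, t1))
    return j1, j2

lam, alpha = 1.0, 0.5                # arbitrary illustrative parameters
j1, j2 = simulate(lam, alpha, n=200_000)

v1 = statistics.pvariance(j1)
v2 = statistics.pvariance(j2)
v1_exact = lam / ((lam + alpha) ** 2 * (lam + 2 * alpha))
v2_exact = 1.0 / (lam + alpha) ** 2

print(f"var(J1): empirical {v1:.4f}, exact {v1_exact:.4f}")
print(f"var(J2): empirical {v2:.4f}, exact {v2_exact:.4f}")
print("var(J1) < var(J2):", v1 < v2)
```

With these parameters the exact values are $\operatorname{var}(J_1) = 1/4.5 \approx 0.222$ and $\operatorname{var}(J_2) = 1/2.25 \approx 0.444$, and the empirical estimates land close to them.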
Example 2.2. Consider a discrete-time Markov chain where at each jump the proc...