3) As t → ∞, N(t) → ∞, and thus, by the strong law of large numbers, the first term on the
right side of (3.33) approaches E[Rn] with probability 1. Also the second term approaches
1/X̄ by the strong law for renewal processes. Thus the product of the two terms approaches
the limit E[Rn]/X̄. The right-hand term of (3.32) is handled in almost the same way:
$$\frac{\sum_{n=1}^{N(t)+1} R_n}{t} \;=\; \frac{\sum_{n=1}^{N(t)+1} R_n}{N(t)+1} \cdot \frac{N(t)+1}{N(t)} \cdot \frac{N(t)}{t}. \qquad (3.34)$$

It is seen that the terms on the right side of (3.34) approach limits as before and thus
the term on the left approaches E[Rn]/X̄ with probability 1. Since the upper and lower
bounds in (3.32) approach the same limit, $(1/t)\int_0^t R(\tau)\, d\tau$ approaches the same limit and
the theorem is proved.
The restriction to nonnegative renewal-reward functions in Theorem 3.6 is slightly artificial.
The same result holds for nonpositive reward functions simply by changing the directions
of the inequalities in (3.32). Assuming that E[Rn] exists (i.e., that both its positive and
negative parts are finite), the same result applies in general by splitting an arbitrary reward
function into a positive and negative part. This gives us the corollary:
Corollary 3.1. Let {R(t); t > 0} be a renewal-reward function for a renewal process with
expected inter-renewal time E[X] = X̄. If E[Rn] exists, then with probability 1

$$\lim_{t\to\infty} \frac{1}{t} \int_{\tau=0}^{t} R(\tau)\, d\tau \;=\; \frac{E[R_n]}{\overline{X}}. \qquad (3.35)$$
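Corollary 3.1 can be spot-checked numerically. The following Monte Carlo sketch (an illustration, not part of the original text) assumes inter-renewal times uniform on (0.5, 1.5) and a reward Rn = Xn² earned over each interval; the accumulated reward per unit time should then approach E[Rn]/X̄ = E[X²]/E[X] = 13/12:

```python
import random

def renewal_reward_average(n_renewals, draw_x, reward):
    """Simulate n_renewals inter-renewal intervals and return
    (accumulated reward) / (elapsed time), as in Corollary 3.1."""
    total_time = 0.0
    total_reward = 0.0
    for _ in range(n_renewals):
        x = draw_x()               # inter-renewal time X_n
        total_time += x
        total_reward += reward(x)  # reward R_n earned over this interval
    return total_reward / total_time

random.seed(1)
# X uniform on (0.5, 1.5): E[X] = 1, E[X^2] = 1 + 1/12 = 13/12
avg = renewal_reward_average(200_000,
                             lambda: random.uniform(0.5, 1.5),
                             lambda x: x * x)
print(avg)  # close to E[R_n]/X-bar = 13/12 ≈ 1.083
```

Any distribution and any reward that is a function of the interval length could be substituted; the ratio form of the estimate mirrors the sandwich argument in the proof.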
Example 3.4.4. (Distribution of Residual Life) Example 3.4.1 treated the time-average
value of the residual life Y(t). Suppose, however, that we would like to find the time-average
distribution function of Y(t), i.e., the fraction of time that Y(t) ≤ y as a function of y. The approach, which applies to a wide variety of applications, is to use an indicator function
(for a given value of y) as a reward function. That is, define R(t) to have the value 1 for all
t such that Y(t) ≤ y and to have the value 0 otherwise. Figure 3.10 illustrates this function
for a given sample path. Expressing this reward function in terms of Z(t) and X(t), we
have
$$R(t) = R(Z(t), X(t)) = \begin{cases} 1, & X(t) - Z(t) \le y \\ 0, & \text{otherwise.} \end{cases}$$
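As a concrete illustration, this indicator reward can be sketched in code. The helper below (names are illustrative, not from the text) recovers the age Z(t), duration X(t), and hence R(t) from a sorted list of renewal epochs, assuming t falls before the last epoch:

```python
import bisect

def residual_indicator(epochs, t, y):
    """R(t) = 1 if Y(t) = X(t) - Z(t) <= y, else 0.
    epochs: sorted renewal epochs S_1 < S_2 < ... (S_0 = 0 implicit);
    assumes 0 <= t < epochs[-1] so the next renewal exists."""
    i = bisect.bisect_right(epochs, t)      # index of next renewal after t
    prev = epochs[i - 1] if i > 0 else 0.0  # last renewal at or before t
    age = t - prev                          # Z(t)
    duration = epochs[i] - prev             # X(t)
    return 1 if duration - age <= y else 0  # Y(t) = X(t) - Z(t)

epochs = [1.0, 3.5, 4.0, 7.0]
print(residual_indicator(epochs, 2.0, 1.0))  # Y(2.0) = 1.5 > 1, so 0
print(residual_indicator(epochs, 3.0, 1.0))  # Y(3.0) = 0.5 <= 1, so 1
```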
Figure 3.10: Reward function to find the time-average fraction of time that {Y(t) ≤ y}. For the sample function in the figure, X1 > y, X2 > y, and X4 > y, but X3 < y.

Note that if an inter-renewal interval is smaller than y (such as the third interval in Figure
3.10), then R(t) has the value one over the entire interval, whereas if the interval is greater
than y, then R(t) has the value one only over the final y units of the interval. Thus
Rn = min[y, Xn]. Note that the random variable min[y, Xn] is equal to Xn for Xn ≤ y, and
thus has the same distribution function as Xn in the range 0 to y. Figure 3.11 illustrates
this in terms of the complementary distribution function. From the figure, we see that
$$E[R_n] = E[\min(X, y)] = \int_{x=0}^{\infty} \Pr\{\min(X, y) > x\}\, dx = \int_{x=0}^{y} \Pr\{X > x\}\, dx. \qquad (3.36)$$

Figure 3.11: Rn for the distribution of residual life. The curve Pr{min(X, y) > x} coincides with Pr{X > x} for x < y and drops to 0 at x = y.
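Equation (3.36) is easy to spot-check by simulation. For instance (an illustrative choice, not from the text), taking X uniform on (0, 1) and y = 0.4, the integral gives E[min(X, y)] = y − y²/2 = 0.32, and a Monte Carlo estimate should agree:

```python
import random

random.seed(2)
y, n = 0.4, 500_000
# X uniform on (0, 1): Pr{X > x} = 1 - x, so (3.36) gives
# E[min(X, y)] = integral from 0 to y of (1 - x) dx = y - y^2/2 = 0.32
mc = sum(min(random.random(), y) for _ in range(n)) / n
print(mc)  # close to 0.32
```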
Let $F_Y(y) = \lim_{t\to\infty} (1/t)\int_0^t R(\tau)\, d\tau$ denote the time-average fraction of time that the
residual life is less than or equal to y. From Theorem 3.6 and Eq. (3.36), we then have
$$F_Y(y) \;=\; \frac{E[R_n]}{\overline{X}} \;=\; \frac{1}{\overline{X}} \int_{x=0}^{y} \Pr\{X > x\}\, dx. \qquad (3.37)$$
As a check, note that the right side of (3.37) is increasing in y and approaches 1 as y → ∞, since the integral approaches $\int_0^\infty \Pr\{X > x\}\, dx = \overline{X}$.
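As a further check on (3.37), suppose X is exponential with rate λ, so that X̄ = 1/λ and Pr{X > x} = e^{−λx}; then F_Y(y) = 1 − e^{−λy}, i.e., the time-average residual life has the same exponential distribution (the memoryless property). A short simulation (an illustration, not part of the original text), using the fact that Y(t) ≤ y during exactly the final min(y, Xn) units of each interval, agrees:

```python
import math
import random

random.seed(3)
lam, y = 1.5, 0.8
n = 400_000
time_below = 0.0   # total time during which Y(t) <= y
total = 0.0        # total elapsed time
for _ in range(n):
    x = random.expovariate(lam)  # inter-renewal time X_n
    time_below += min(x, y)      # Y(t) <= y over the final min(y, X_n) units
    total += x
fy = time_below / total
expected = 1 - math.exp(-lam * y)  # (3.37) evaluated for exponential X
print(fy, expected)
```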
In the development so far, the reward function R(t) has been a function solely of the age and
duration intervals. In more general situations, where the renewal process is embedded in
some more complex process, it is often desirable to define R(t) to depend on other aspects of
the process as well. The important thing here is for the reward function to be independent of
the renewal process outside the given inter-renewal interval, so that the accumulated rewards over successive inter-renewal intervals are IID random variables. Under this circumstance,
Theorem 3.6 clearly remains valid. The subsequent examples of Little's theorem and the
M/G/1 expected queueing delay both use this more general type of renewal-reward function.
The above time-average is sometimes visualized by the following type of experiment. For
some given large time t, let T be a uniformly distributed random variable over (0, t]; T is
independent of the renewal-reward process under consideration. Then $(1/t)\int_0^t R(\tau)\, d\tau$ is
the expected value (over T) of R(T) for a given sample point of {R(τ); τ > 0}. Theorem 3.6
states that in the limit t → ∞, all sample points (except a set of probability 0) yield the
same expected value over T. This approach of viewing a time-average as a random choice
of time is referred to as random incidence. Random incidence is awkward mathematically,
since the random variable T changes with the overall time t and has n...
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R.srikant during the Spring '09 term at University of Illinois, Urbana Champaign.