number of trials required when the number is chosen in advance.
Another example occurs in tree searches where a path is explored until further extensions
of the path appear to be unprofitable.

The first careful study of experimental situations where the number of trials depends on the
data was made by the statistician Abraham Wald and led to the field of sequential analysis.
Wald’s equality, in the next subsection, is quite simple but crucial to the study of these
situations. Wald’s equality will be used again, along with a generating function equality
known as Wald’s identity, when we study random walks.
An important part of experiments that stop after a random number of trials is the rule
for stopping. Such a rule must specify, for each sample function, the trial at which the
experiment stops, i.e., the final trial after which no more trials are performed. Thus the
rule for stopping must specify a positive, integer valued, random variable J , called the
stopping time, mapping sample functions into the trial at which the experiment stops.
We still view the sample space as the set of sample value sequences for the never-ending
sequence of random variables X1, X2, . . . . That is, even if the experiment is stopped at the
end of the second trial, we still visualize the 3rd, 4th, . . . random variables as having sample
values as part of the sample function. In other words, we visualize that the experiment
continues forever, but that the observer stops watching after the stopping point.
From the standpoint of applications, it doesn't make any difference whether the experiment continues or not after the observer stops watching. From a mathematical standpoint,
however, it is far preferable to view the experiment as continuing so as to avoid confusion
and ambiguity about what it means for the variables X1 , X2 , . . . to be IID when the very
existence of later variables depends on earlier sample values.
The intuitive notion of stopping includes the notion that a decision to stop before trial n
should depend only on the results before trial n. In other words, we want to exclude from
stopping rules those rules that allow the experimenter to peek at subsequent values before
making the decision to stop or not.3 That is, the event {J ≥ n}, i.e., the event that
the nth experiment is performed, should be independent of Xn and all subsequent trials.
More precisely,
Definition 3.1. A stopping time4 J for a sequence of rv's X1, X2, . . . , is a positive
integer-valued rv such that for each n ≥ 1, the event {J ≥ n} is statistically independent of
(Xn, Xn+1, . . . ).

It is convenient in working with stopping rules to use an indicator function In for the event
{J ≥ n} for each n ≥ 1. That is, In = 1 if J ≥ n and In = 0 otherwise. A stopping
time is then a positive integer-valued rv for which each associated indicator function In is
independent of Xn, Xn+1, . . . for each n. Thus the rv In is a binary rv that takes the value
1 if the experiment includes the nth trial, and the value 0 otherwise. Since we assume that
the first observation is always made (i.e., J is a positive random variable), I1 = 1 with
probability 1.
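As a concrete illustration (a hypothetical sketch, not from the text), consider the threshold rule J = min{n : Xn ≥ c}. The sketch below computes the indicator functions In for this rule; note that each In is decided before Xn is observed, so it depends only on X1, . . . , Xn−1:

```python
import random

def indicators(xs, c, n_max):
    """Indicator functions In = 1 if J >= n, else 0, for the (hypothetical)
    threshold stopping time J = min{n : Xn >= c}. Each In is decided
    BEFORE Xn is observed, so the rule never peeks at the current trial."""
    inds = []
    stopped = False
    for n in range(1, n_max + 1):
        inds.append(0 if stopped else 1)     # In depends only on X1, ..., X(n-1)
        if not stopped and xs[n - 1] >= c:
            stopped = True                   # trial n is the last one performed
    return inds

random.seed(1)
xs = [random.random() for _ in range(10)]    # IID uniform samples
I = indicators(xs, 0.9, 10)
assert I[0] == 1                                  # I1 = 1 with probability 1
assert all(I[j] >= I[j + 1] for j in range(9))    # In = 1 implies Ij = 1 for j < n
```

The two assertions check exactly the properties discussed in the text: I1 = 1 with probability 1, and the indicator sequence is nonincreasing.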
We can view In as a decision rule exercised by an observer to determine whether to continue
with the nth trial. In many applications, however, including that of establishing the
elementary renewal theorem, there is no real notion of an observer, but only of the specification
of some condition, not involving peeking, that is met after some random number of trials.
Since J ≥ n implies that J ≥ j for all j < n, the indicator functions have the corresponding
property that In = 1 implies that Ij = 1 for j < n. Also, since J is a rv, and thus finite with
probability 1, limn→∞ Pr{In = 1} must be 0. Thus, according to the definition, stopping
must take place eventually, although not necessarily with any finite bound. We see that
each decision rule In is a function of the stopping time J, and the stopping time J is also
determined by all the decision rules (see Exercise 3.4). The notion that a stopping rule
should not allow peeking then means, for each n > 1, that In, the decision whether or not
to observe Xn, should depend only on X1, . . . , Xn−1.
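To make the no-peeking condition concrete, the hypothetical sketch below compares a legal stopping rule for IID Bernoulli trials (stop at the first success) against a peeking rule (stop just before the first success). For the peeking rule, the event {J ≥ n} forces Xn = 0, violating the independence requirement of Definition 3.1:

```python
import random

random.seed(2)
p, n, N = 0.3, 3, 100_000        # Bernoulli(p) trials; check independence at trial n
cnt = [0, 0]                     # runs with J >= n, for [legal rule, peeking rule]
hit = [0, 0]                     # of those, runs where Xn = 1
for _ in range(N):
    xs = [1 if random.random() < p else 0 for _ in range(20)]
    k = xs.index(1) + 1 if 1 in xs else 21   # 1-based trial of the first success
    # Legal rule: stop AT the first success (J = k); the decision never peeks.
    # Peeking rule: stop just BEFORE the first success (J = k - 1), which
    # requires looking at X_{J+1} before deciding to stop.
    for i, J in enumerate((k, k - 1)):
        if J >= n:
            cnt[i] += 1
            hit[i] += xs[n - 1]
print(hit[0] / cnt[0])   # ~ p: the event {J >= n} tells us nothing about Xn
print(hit[1] / cnt[1])   # exactly 0: given {J >= n}, Xn is forced to be 0
```

For the legal rule the empirical conditional probability of Xn = 1 given {J ≥ n} matches p, while for the peeking rule it is exactly 0, mirroring the poker example in the footnote below.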
3 For example, poker players do not take kindly to a player who bets on a hand and then withdraws his
bet when someone else wins the hand.
4 Stopping times are sometimes called optional stopping times.

3.3.3 Wald's equality

An important question that arises with stopping rules is to evaluate the sum SJ of the
random variables up to the stopping time, i.e., SJ = X1 + X2 + · · · + XJ. Many gambling
strategies and investing strategies involve some sort of rule for when to stop, and it is
important to understand the rv SJ. Wald's equality provides a very simple way to
find E [SJ ].
Theorem 3.3 (Wald’s Equality). Let {Xn ; n ≥ 1} be IID rv’s, e...
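The conclusion of Wald's equality, E [SJ ] = E [X] E [J] for a stopping time J with E [J] < ∞, can be sanity-checked by simulation. The sketch below (a hypothetical example, not from the text) stops IID Bernoulli(p) trials at the first success, so that SJ = 1 on every run while E [J] = 1/p:

```python
import random

random.seed(0)
p, N = 0.3, 200_000          # Bernoulli(p) trials, N independent experiments
tot_s = tot_j = 0
for _ in range(N):
    s = j = 0
    while True:              # stopping time J = first n with Xn = 1 (geometric)
        j += 1
        x = 1 if random.random() < p else 0
        s += x
        if x == 1:
            break
    tot_s += s               # SJ for this run (always 1 under this rule)
    tot_j += j               # J for this run
# Wald's equality: E[SJ] = E[X] E[J] = p * (1/p) = 1
print(tot_s / N)             # 1.0 exactly, since SJ = 1 on every run
print(tot_j / N)             # ~ 1/p, about 3.33
```

The sample average of SJ equals E [X] times the sample average of J, as the equality predicts; the geometric stopping rule just makes both sides easy to compute in closed form.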