the nodes at which stopping occurs) are marked with large dots and the intermediate
nodes (the other nodes) with small dots. Note that each leaf in the tree has a one-to-one correspondence with an initial segment of the tree, so the stopping nodes can
be unambiguously viewed either as leaves of the tree or as initial segments of the sample
sequences. Note that in both of these examples, the stopping rule determines which initial segment of
any given sample sequence satisfies the rule. The distribution of each Xn , and even whether
or not the sequence is IID, is usually not relevant for defining these stopping rules. In other
words, the conditions about statistical independence used in Chapter 3 for the indicator
functions of stopping rules are quite unnatural for most applications.
The essence of a stopping rule, however, is illustrated quite well in Figure 7.7. If one stops
at some initial segment of a sample sequence, then one cannot stop again at some longer
initial segment of the same sample sequence. This leads us to the following deﬁnitions of
stopping nodes, stopping rules, and stopping times.
Definition 7.2 (Stopping nodes). Given a sequence {Xn ; n ≥ 1} of rv's, a collection of
stopping nodes is a collection of initial segments of the sample sequences of {Xn ; n ≥ 1}.
If an initial segment of one sequence is a stopping node, then it is a stopping node for all
sequences with that same initial segment. Also, no stopping node can be an initial segment
of any other stopping node.
This definition is less abstract when each Xn is discrete with a finite number, say m, of
possible values. In this case, as illustrated in Figure 7.7, the set of sequences is represented
by a tree in which each node has one branch coming in from the root and m branches going
out. Each stopping node corresponds to 'pruning' the tree at that node. All the sequences
with that given initial segment can then be ignored since they all have that same initial
segment, i.e., stopping node. In this sense, every 'pruning' of the tree corresponds to a
collection of stopping nodes.
In information theory, such a collection of stopping nodes is called a prefix-free source
code. Each segment corresponding to a stopping node is used as a codeword for some given
message. If a sequence of consecutive segments is transmitted, a receiver can parse the
incoming letters into segments by using the fact that no stopping node is an initial segment
of any other stopping node.
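The parsing argument above can be illustrated with a small sketch. The codeword set below is purely illustrative (it is not taken from the text); any collection in which no codeword is a prefix of another, i.e., a collection of stopping nodes, parses the same way.

```python
# Illustrative prefix-free code: no codeword is a prefix of another,
# so each codeword is a "stopping node" of the binary tree.
CODEWORDS = {"0": "a", "10": "b", "110": "c", "111": "d"}

def parse(stream):
    """Split a bit string into codewords; each match is a stopping node,
    after which parsing restarts from the root of the tree."""
    messages, segment = [], ""
    for bit in stream:
        segment += bit
        if segment in CODEWORDS:   # stopping node reached: emit and restart
            messages.append(CODEWORDS[segment])
            segment = ""
    if segment:
        raise ValueError("stream ended inside a codeword")
    return messages

print(parse("0110100111"))  # ['a', 'c', 'b', 'a', 'd']
```

Because no stopping node is an initial segment of any other, the greedy scan above never has to back up: the first codeword it completes is the only possible one.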
Definition 7.3 (Stopping rule and stopping time). A stopping rule for {Xn ; n ≥ 1}
is a rule that determines a collection of stopping nodes. A stopping time is a perhaps
defective rv whose value, for a sample sequence with a stopping node, is the length of the
initial segment for that node. Its value, for a sample sequence with no stopping node, is
inﬁnite.
For most interesting stopping rules, sample sequences exist that have no stopping nodes.
For the example of a random walk with two thresholds, there are many sequences that stay
inside the thresholds forever. As shown by Lemma 7.1, however, this set of sequences has
zero probability, and thus the stopping time is a (non-defective) rv. We see from this that,
although stopping rules are generally deﬁned without the use of a probability measure,
and the mapping from sample sequences to stopping nodes is similarly independent of the
probability measure, the question of whether the stopping time is defective and whether it
has moments is very dependent on the probability measure.
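A minimal simulation makes this distinction concrete. The sketch below assumes an illustrative walk with steps ±1 (equiprobable) and hypothetical thresholds α = 5, β = −5; these choices are not from the text. The stopping rule itself never consults the step distribution, but whether stops occur, and how long they take, does.

```python
import random

def stopping_time(alpha, beta, step, max_n=10_000):
    """Smallest n with S_n >= alpha or S_n <= beta; None if no stopping
    node appears within max_n steps (a sequence with no stopping node)."""
    s = 0.0
    for n in range(1, max_n + 1):
        s += step()
        if s >= alpha or s <= beta:
            return n
    return None

random.seed(1)
step = lambda: random.choice([-1, 1])     # illustrative symmetric +/-1 walk
times = [stopping_time(5, -5, step) for _ in range(2000)]
print(all(t is not None for t in times))  # every run stopped
print(sum(times) / len(times))            # empirical E[N], near 25 here
```

For this symmetric walk the known absorption result E[N] = α|β| = 25 is recovered empirically; a walk with a different step distribution would stop at different times under the identical rule.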
Theorem 7.2 (Wald's identity). Let {Xi ; i ≥ 1} be IID and let γ(r) = ln{E[erX ]} be
the semi-invariant moment generating function of each Xi . Assume γ(r) is finite in an
open interval (r− , r+ ) with r− < 0 < r+ . For each n ≥ 1, let Sn = X1 + · · · + Xn . Let α > 0
and β < 0 be arbitrary real numbers, and let N be the smallest n for which either Sn ≥ α
or Sn ≤ β . Then for all r ∈ (r− , r+ ),

E [exp(rSN − N γ(r))] = 1.        (7.24)

We first show how to use and interpret this theorem, and then prove it. The proof is quite
simple, but will mean more after understanding the surprising power of this result. Wald’s
identity can be thought of as a generating function form of Wald’s equality as established
in Theorem 3.3. First note that the trial N at which a threshold is crossed in the theorem
is a stopping time in the terminology of Chapter 3. Also, if we take the derivative with
respect to r of both sides of (7.24), we get
E [(SN − N γ 0 (r)) exp{rSN − N γ(r)}] = 0.
Setting r = 0 and recalling that γ(0) = 0 and γ 0 (0) = X̄ , this becomes Wald's equality,

E [SN ] = E [N ] X̄ .        (7.25)

Note that this derivation of Wald's equality is restricted to a random walk with two thresholds (and this automatically satisfies the constraint in Wald's equality that E [N ] < ∞). The
result in Chapter 3 was more general, applying ...
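Both (7.24) and (7.25) can be checked by Monte Carlo simulation. The sketch below assumes a hypothetical walk with Xi = +1 with probability p = 0.4 and −1 otherwise, so that γ(r) = ln(p e^r + (1−p) e^(−r)) and X̄ = 2p − 1; the thresholds α = 4, β = −4 and the value r = 0.5 are likewise illustrative choices, not from the text.

```python
import math
import random

random.seed(2)
p = 0.4                          # illustrative: X = +1 w.p. p, -1 w.p. 1-p
alpha, beta = 4, -4              # hypothetical thresholds

def gamma(r):
    """Semi-invariant MGF ln E[e^{rX}] for this two-valued step."""
    return math.log(p * math.exp(r) + (1 - p) * math.exp(-r))

def run():
    """Walk until S_n >= alpha or S_n <= beta; return (S_N, N)."""
    s, n = 0, 0
    while beta < s < alpha:
        s += 1 if random.random() < p else -1
        n += 1
    return s, n

samples = [run() for _ in range(50_000)]
r = 0.5                          # any r in the interval where gamma is finite
lhs = sum(math.exp(r * s - n * gamma(r)) for s, n in samples) / len(samples)
mean_S = sum(s for s, _ in samples) / len(samples)
mean_N = sum(n for _, n in samples) / len(samples)
print(lhs)                          # Wald's identity (7.24): close to 1
print(mean_S, mean_N * (2 * p - 1)) # Wald's equality (7.25): the two agree
```

The empirical average of exp(rSN − Nγ(r)) stays near 1 for any r tried in the finite interval, while the last line checks E[SN] against E[N] X̄.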
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R. Srikant during the Spring '09 term at the University of Illinois, Urbana-Champaign.