for all y. The MAP rule is to choose H1 or H0 depending on whether the
quantity on the right is positive or negative, i.e.,

\[
\sum_{i=1}^{n} z_i \;
\begin{cases}
\; > \ln(p_0/p_1); & \text{choose } H_1 \\
\; < \ln(p_0/p_1); & \text{choose } H_0 \\
\; = \ln(p_0/p_1); & \text{don't care; choose either}
\end{cases}
\tag{7.14}
\]

Conditional on H0, the rv's {Yi; 1 ≤ i ≤ n} are IID. Since Zi = ln[f(Yi | H1)/f(Yi | H0)] for
1 ≤ i ≤ n, and since Zi is the same finite function of Yi for all i, we see that each Zi is a rv
and that Z1, . . . , Zn are IID conditional on H0. Similarly, Z1, . . . , Zn are IID conditional
on H1.
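As an illustration of the MAP rule (7.14), the following sketch sums the sample log-likelihood ratios and compares the sum with the threshold ln(p0/p1). The Gaussian model (H0: Y ~ N(0,1), H1: Y ~ N(1,1)) is an assumption made here for concreteness; it is not specified in the text.

```python
import math
import random

def map_decision(ys, f1, f0, p0, p1):
    """MAP rule of (7.14): sum z_i = ln[f(y_i|H1)/f(y_i|H0)] over the
    observations and compare with the threshold ln(p0/p1)."""
    s = sum(math.log(f1(y) / f0(y)) for y in ys)
    thresh = math.log(p0 / p1)
    if s > thresh:
        return 1          # choose H1
    if s < thresh:
        return 0          # choose H0
    return 0              # "don't care": choose either; we pick H0

# Illustrative (assumed) model: H0: N(0,1), H1: N(1,1).
def gauss_pdf(y, mean):
    return math.exp(-(y - mean) ** 2 / 2) / math.sqrt(2 * math.pi)

f0 = lambda y: gauss_pdf(y, 0.0)
f1 = lambda y: gauss_pdf(y, 1.0)

random.seed(1)
ys = [random.gauss(1.0, 1.0) for _ in range(100)]  # observations drawn under H1
print(map_decision(ys, f1, f0, p0=0.5, p1=0.5))
```

For this model each z_i reduces to y_i - 1/2, so with 100 observations drawn under H1 the sum has conditional mean 50 and the rule all but surely chooses H1.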
Without conditioning on H0 or H1, neither the rv's Y1, . . . , Yn nor the rv's Z1, . . . , Zn are
IID. Thus it is important to keep in mind the basic structure of this problem: initially
a sample value is chosen for H. Then n observations, IID conditional on H, are made.
Naturally the observer does not observe the original selection for H; only the observations are seen.
Conditional on H0 , the sum on the left in (7.14) is thus the sample value of the nth term in
the random walk Sn = Z1 + · · · + Zn based on the rv’s {Zi ; i ≥ 1} conditional on H0 . The
MAP rule chooses H1 , thus making an error conditional on H0 , if Sn is greater than the
threshold ln[p0 /p1 ]. Similarly, conditional on H1 , Sn = Z1 + · · · + Zn is the nth term in a
random walk with the conditional probabilities from H1 , and an error is made, conditional
on H1 , if Sn is less than the threshold ln[p0 /p1 ].
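These two conditional error events can be estimated by simulating the random walk Sn under each hypothesis. A minimal Monte Carlo sketch, again under an assumed Gaussian model (H0: N(0,1), H1: N(1,1), for which Z_i = Y_i - 1/2; this model is not from the text):

```python
import math
import random

def error_prob_given_H0(n, p0, p1, trials=20000, seed=0):
    """Estimate Pr{S_n > ln(p0/p1) | H0}, i.e., the probability that the MAP
    rule errs conditional on H0, by simulating the random walk
    S_n = Z_1 + ... + Z_n under H0.  Assumed model: Z_i = Y_i - 1/2 with
    Y_i ~ N(0,1) under H0 (illustrative only)."""
    rng = random.Random(seed)
    thresh = math.log(p0 / p1)
    errors = 0
    for _ in range(trials):
        s_n = sum(rng.gauss(0.0, 1.0) - 0.5 for _ in range(n))
        if s_n > thresh:
            errors += 1
    return errors / trials

print(error_prob_given_H0(n=10, p0=0.5, p1=0.5))
```

For this model, S_n conditional on H0 is N(-n/2, n), so the exact value with equal priors is Q(sqrt(n)/2), about 0.057 for n = 10; the simulation should land close to that.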
It is interesting to observe that the sum ∑i zi in (7.14) depends only on the observations and not
on p0, whereas the threshold ln(p0/p1) depends only on p0 and not on the observations.
Naturally the marginal probability distribution of ∑i Zi does depend on p0 (and on the
conditioning), but ∑i zi is a function only of the observations, so its value does not depend
on p0.

The decision rule in (7.14) is called a threshold test in the sense that ∑i zi is compared with
a threshold to make a decision. There are a number of other formulations of the problem
that also lead to threshold tests. For example, maximum likelihood (ML) detection chooses
the hypothesis i that maximizes f(y | Hi), and thus corresponds to a threshold at 0. The
ML rule has the property that it minimizes the maximum of the two conditional error probabilities, Pr{e | H0} and Pr{e | H1};
this has obvious benefits when one is unsure of the a priori probabilities.
In many detection situations there are unequal costs associated with the two kinds of errors.
For example, one kind of error in a medical test could lead to death of the patient and the
other to an unneeded medical procedure. A minimum cost decision minimizes the expected
cost over the two types of errors. As shown in Exercise 7.5, this is also a threshold test.
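The text defers the details to Exercise 7.5; as a sketch of the standard calculation, write C_{ij} for the cost of deciding H_i when H_j holds (this notation is an assumption, not taken from the text). Comparing the conditional expected costs of the two decisions given the observation y, the minimum cost rule chooses H1 exactly when

\[
p_1\, f(\mathbf{y} \mid H_1)\,(C_{01}-C_{11}) \;>\; p_0\, f(\mathbf{y} \mid H_0)\,(C_{10}-C_{00}),
\]

which, after taking logarithms and using the IID factorization of the likelihoods, becomes

\[
\sum_{i=1}^{n} z_i \;>\; \ln\frac{p_0}{p_1} \;+\; \ln\frac{C_{10}-C_{00}}{C_{01}-C_{11}},
\]

i.e., the only change from (7.14) is a shift of the threshold by the log cost ratio.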
Finally, one might impose the constraint that Pr{error | H1} must be less than some tolerable limit α, and then minimize Pr{error | H0} subject to this constraint. The solution to this
is called a Neyman-Pearson threshold test (see Exercise 7.6). The Neyman-Pearson test is
of particular interest since it does not require any assumptions about a priori probabilities.
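A Neyman-Pearson threshold can be found empirically: set the threshold at the α-quantile of Sn under H1 (so that Pr{error | H1} is roughly α) and then read off the resulting Pr{error | H0}. A minimal Monte Carlo sketch under the same assumed Gaussian model (H0: N(0,1), H1: N(1,1); not from the text, and ignoring the randomized-test refinement at the boundary):

```python
import random

def neyman_pearson_threshold(alpha, n=10, trials=20000, seed=0):
    """Pick the threshold t as the empirical alpha-quantile of S_n under H1,
    so roughly a fraction alpha of the H1 walks end below t (an error given
    H1), then estimate the resulting Pr{error | H0}.
    Assumed model: Z_i = Y_i - 1/2 with Y_i ~ N(1,1) under H1, N(0,1) under H0."""
    rng = random.Random(seed)
    s1 = sorted(sum(rng.gauss(1.0, 1.0) - 0.5 for _ in range(n))
                for _ in range(trials))
    t = s1[int(alpha * trials)]      # empirical alpha-quantile of S_n under H1
    s0 = [sum(rng.gauss(0.0, 1.0) - 0.5 for _ in range(n))
          for _ in range(trials)]
    p_err_h0 = sum(s > t for s in s0) / trials
    return t, p_err_h0

t, p = neyman_pearson_threshold(alpha=0.05)
print(round(t, 2), round(p, 3))
```

Note that the threshold produced this way depends only on the conditional distributions, not on any prior p0, consistent with the remark above.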
So far we have assumed that a decision is made after n observations. In many situations
there is a cost associated with observations and one would prefer, after a given number of
observations, to make a decision if the resulting probability of error is small enough, and to continue with more observations otherwise. Common sense dictates such a strategy, and the
branch of probability theory analyzing such strategies is called sequential analysis, which is
based on the results in the next section.
Essentially, we will see that the appropriate way to vary the number of observations based on
the result of the observations is as follows: The probability of error under either hypothesis
is based on Sn = Z1 + · · · + Zn. Thus we will see that the appropriate rule is to choose H0
if the sample value of Sn is less than some negative threshold β, to choose H1 if the sample
value of Sn is at least some positive threshold α, and to continue testing if the sample value
has not crossed either threshold.
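The stopping rule just described (Wald's sequential probability ratio test) can be sketched as follows. The Gaussian model and the particular thresholds are illustrative assumptions, not values from the text:

```python
import random

def sprt(z_stream, alpha, beta):
    """Sequential test: accumulate S_n = Z_1 + ... + Z_n; stop with H1 when
    S_n >= alpha (positive threshold), with H0 when S_n <= beta (negative
    threshold), and keep observing otherwise.  Returns (decision, n)."""
    assert alpha > 0 > beta
    s = 0.0
    for n, z in enumerate(z_stream, start=1):
        s += z
        if s >= alpha:
            return 1, n       # crossed the positive threshold: choose H1
        if s <= beta:
            return 0, n       # crossed the negative threshold: choose H0
    return None, n            # stream exhausted with no decision

# Assumed model: H0: Y ~ N(0,1), H1: Y ~ N(1,1), so Z_i = Y_i - 1/2.
def llr_stream(true_mean, seed):
    rng = random.Random(seed)
    while True:
        yield rng.gauss(true_mean, 1.0) - 0.5

decision, n = sprt(llr_stream(true_mean=1.0, seed=3), alpha=4.0, beta=-4.0)
print(decision, n)
```

Under H1 the increments have conditional mean +1/2, so the walk drifts toward the positive threshold and typically stops after a number of observations on the order of alpha divided by that drift.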
The previous examples have all involved random walks crossing thresholds, and we now turn
to the systematic study of threshold crossing problems. First we look at single thresholds,
so that one question of interest is to find Pr{Sn ≥ α} for an arbitrary integer n ≥ 1 and
arbitrary α > 0. Another question is whether Sn ≥ α for any n ≥ 1. We then turn to random
walks with both a positive and negative threshold. Here, some questions of interest are to
find the probability that the positive threshold is crossed before the negative threshold, to
find the distribution of the threshold crossing time given the particular threshold crossed,
and to find the overshoot when a threshold is crossed.

7.4 Threshold crossing probabilities in random walks

Let {Xi; i ≥ 1} be a sequence of IID random variables with the distribution function FX(x),
and let {Sn ; n ...
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R.srikant during the Spring '09 term at University of Illinois, Urbana Champaign.