a flag indicating the end of the packet. To prevent this problem,
an extra binary digit of value 0 is inserted after each appearance of 011111 in the original
string (this can be deleted after reception).
Suppose we want to find the expected number of inserted bits in a string of length n. For
each position i ≥ 6 in the original string, define Xi as a rv whose value is 1 if an insertion
occurs after the ith data bit. The total number of insertions is then just the sum of Xi from
i = 6 to n inclusive. Since E[Xi] = 2⁻⁶, the expected number of insertions is (n − 5)2⁻⁶.
Note that the positions in which the insertions occur are highly dependent, and the problem
would be quite difficult if one didn’t use (1.31) to avoid worrying about the dependence.
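This linearity argument is easy to check by simulation. The sketch below (string length and trial count are arbitrary illustrative choices, not from the text) counts appearances of 011111 in random bit strings and compares the average against (n − 5)2⁻⁶:

```python
import random

def count_insertions(s):
    # A 0 is stuffed after each appearance of 011111; the pattern
    # cannot overlap itself, so the non-overlapping str.count is exact.
    return s.count('011111')

random.seed(1)
n, trials = 1000, 5000   # arbitrary illustrative sizes
avg = sum(count_insertions(''.join(random.choices('01', k=n)))
          for _ in range(trials)) / trials
print(avg, (n - 5) / 64)   # both come out near 15.5
```

Note that the dependence between insertion positions is irrelevant here, exactly as in the text: the simulation averages a sum of indicator counts, and linearity of expectation needs no independence.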
If the rv’s X1, . . . , Xn are independent, then, as shown in Exercises 1.11 and 1.17, the
variance of Sn = X1 + · · · + Xn is given by

σ_{Sn}² = ∑_{i=1}^{n} σ_{Xi}².    (1.32)
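A quick Monte Carlo sketch of (1.32) in its IID special case (trial count chosen arbitrarily): for IID binary rv’s with Pr{1} = 1/4, the rv used in Figure 1.5, the sample standard deviation of Sn should track √n σX.

```python
import math
import random
import statistics

# IID special case of (1.32): sigma_{Sn} = sqrt(n) * sigma_X.
# The rv is binary with Pr{1} = 1/4; the trial count is arbitrary.
random.seed(2)
p = 0.25
sigma_x = math.sqrt(p * (1 - p))   # std dev of a single binary rv
trials = 20000

for n in (4, 20, 50):
    sums = [sum(random.random() < p for _ in range(n)) for _ in range(trials)]
    print(n, round(statistics.stdev(sums), 3), round(math.sqrt(n) * sigma_x, 3))
```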
If X1, . . . , Xn are also identically distributed (i.e., X1, . . . , Xn are IID) with variance σ_X²,
then σ_{Sn}² = nσ_X². Thus the standard deviation of Sn is σ_{Sn} = √n σ_X. Sums of IID rv’s
appear everywhere in probability theory and play an especially central role in the laws
of large numbers. It is important to remember that, while the mean of Sn is increasing
linearly with n, the standard deviation is increasing only with the square root of n. Figure
1.5 illustrates this behavior.

CHAPTER 1. INTRODUCTION AND REVIEW OF PROBABILITY

[Figure 1.5: The distribution function F_{Sn}(s) of the sum of n IID rv’s for n = 4, n = 20,
and n = 50. The rv is binary with Pr{1} = 1/4, Pr{0} = 3/4. Note that the mean is
increasing with n and the standard deviation with √n.]

1.3.8 Conditional expectations

Just as the conditional distribution of one rv conditioned on a sample value of another rv
is important, the conditional expectation of one rv based on the sample value of another is
equally important. Initially let X be a positive discrete rv and let y be a sample value of
another discrete rv Y such that p_Y(y) > 0. Then the conditional expectation of X given
Y = y is defined to be

E[X | Y = y] = ∑_x x p_{X|Y}(x|y).    (1.33)

This is simply the ordinary expected value of X using the conditional probabilities in the
reduced sample space corresponding to Y = y. It can be finite or infinite as before. More
generally, if X can take on positive or negative values, then there is the possibility that
the conditional expectation is undefined if the sum is +∞ over positive values of x and −∞
over negative values. In other words, for discrete rv’s, the conditional expectation is exactly
the same as the ordinary expectation, except that it is taken using conditional probabilities
over the reduced sample space.
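For concreteness, (1.33) can be computed directly from a joint pmf. The sketch below uses a small made-up pmf p_XY(x, y); the numbers are purely illustrative.

```python
# E[X | Y=y] per (1.33), computed from a made-up joint pmf p_XY(x, y).
joint = {
    (0, 0): 0.1, (1, 0): 0.3,
    (0, 1): 0.2, (2, 1): 0.4,
}

def p_Y(y):
    return sum(p for (_, yy), p in joint.items() if yy == y)

def cond_expectation(y):
    py = p_Y(y)
    assert py > 0, "conditioning requires p_Y(y) > 0"
    # reduced sample space: renormalize the slice where Y = y
    return sum(x * p / py for (x, yy), p in joint.items() if yy == y)

print(cond_expectation(0))   # mathematically (1 * 0.3) / 0.4 = 3/4
print(cond_expectation(1))   # mathematically (2 * 0.4) / 0.6 = 4/3
```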
More generally yet, let X be a completely arbitrary rv and let y be a sample value of a
discrete rv Y with p_Y(y) > 0. The conditional distribution function of X conditional on
Y = y is defined as

F_{X|Y}(x|y) = Pr{X ≤ x, Y = y} / Pr{Y = y}.

Since this is an ordinary distribution function in the reduced sample space where Y = y,
(1.24) expresses the expectation of X conditional on Y = y as

E[X | Y = y] = −∫_{−∞}^{0} F_{X|Y}(x|y) dx + ∫_{0}^{∞} [1 − F_{X|Y}(x|y)] dx.    (1.34)
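The conditional form (1.34) is this tail-integral identity applied in the reduced sample space. The sketch below checks the unconditional version numerically for an exponential rv (the rate is an arbitrary choice), approximating the integral with a Riemann sum:

```python
import math

# Check the tail-integral identity behind (1.34) in its unconditional
# form, for X ~ exponential with rate 2 (an arbitrary example):
# E[X] = -∫_{-∞}^{0} F(x) dx + ∫_{0}^{∞} [1 - F(x)] dx = 1/rate.
rate = 2.0

def F(x):
    return 1 - math.exp(-rate * x) if x >= 0 else 0.0

dx, upper = 1e-4, 20.0   # step size and truncation point (tail is negligible)
# X is positive, so the integral over (-∞, 0] vanishes here.
estimate = sum((1 - F(k * dx)) * dx for k in range(int(upper / dx)))
print(estimate, 1 / rate)   # both ≈ 0.5
```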
The forms of conditional expectation in (1.33) and (1.34) are given for individual sample
values of Y for which p_Y(y) > 0. Thus each ω ∈ Ω (except perhaps a set of zero probability)
maps into Y = y for some y with p_Y(y) > 0, and each such y corresponds to E[X | Y = y]
for that y. Thus we can consider E[X | Y] to be a rv that is a function of Y, mapping
ω into Y = y and hence into E[X | Y = y]. Regarding a conditional expectation as a rv
that is a function of the conditioning rv is a powerful tool both in problem solving and in
advanced work. For now, we use this to express the unconditional mean of X as

E[X] = E[E[X | Y]],

where the outer expectation is over the rv E[X | Y]. Again assuming X to be discrete, we
can write out this expectation by using (1.33) for E[X | Y = y].
E[X] = E[E[X | Y]] = ∑_y p_Y(y) E[X | Y = y] = ∑_y p_Y(y) ∑_x x p_{X|Y}(x|y).    (1.35)

Operationally, there is nothing very fancy here. Combining the sums, (1.35) simply says
that E[X] = ∑_{y,x} x p_{YX}(y, x). As a concept, however, viewing the conditional expectation
as a rv based on the conditioning rv is often very useful in research. This approach is equally
useful as a tool in problem solving, since there are many problems where it is easy to find the
conditional expectations, and then to find the total expectation by (1.35). For this reason,
this result is sometimes called the total expectation theorem.
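A minimal numerical check of the total expectation theorem, using a small made-up joint pmf (the values are illustrative only):

```python
# Total expectation theorem on a made-up joint pmf p_XY(x, y):
# E[X] = sum_y p_Y(y) E[X | Y=y] should equal sum_{x,y} x p_XY(x, y).
joint = {
    (1, 0): 0.2, (3, 0): 0.1,
    (2, 1): 0.3, (5, 1): 0.4,
}

def p_Y(y):
    return sum(p for (_, yy), p in joint.items() if yy == y)

def cond_exp(y):
    return sum(x * p for (x, yy), p in joint.items() if yy == y) / p_Y(y)

direct = sum(x * p for (x, _), p in joint.items())
via_conditioning = sum(p_Y(y) * cond_exp(y) for y in {y for (_, y) in joint})
print(direct, via_conditioning)   # both ≈ 3.1
```

As in the text, the value of conditioning here is not the arithmetic but the decomposition: each E[X | Y = y] may be far easier to find than E[X] itself.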
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R.srikant during the Spring '09 term at University of Illinois, Urbana Champaign.