corresponds to a weaker type
of convergence. The resolution of this paradox is that the sequence of rv's in the CLT is $\{(S_n - n\overline{X})/(\sqrt{n}\,\sigma);\ n \geq 1\}$. The presence of $\sqrt{n}$ in the denominator of this sequence provides much more detailed information about how $S_n/n$ approaches $\overline{X}$ with increasing $n$ than the limiting unit step of $F_{S_n/n}$ itself. For example, it is easy to see from the CLT that $\lim_{n\to\infty} F_{S_n/n}(\overline{X}) = 1/2$, which cannot be directly derived from the weak law.
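This limit can be illustrated numerically. The following Python sketch (the code and the choice of distribution are my own, not from the text) estimates $\Pr\{S_n/n \leq \overline{X}\}$ for IID Exponential(1) rv's, whose mean is 1; the estimate is well above $1/2$ for small $n$ because the distribution is skewed, but approaches $1/2$ as $n$ grows:

```python
import numpy as np

# Illustrative sketch (names are my own): estimate Pr{S_n/n <= mean}
# for IID Exponential(1) rv's (mean = 1).  The CLT predicts that this
# probability approaches 1/2 as n grows, even though the distribution
# of S_n/n is skewed for small n.
rng = np.random.default_rng(0)

def prob_below_mean(n, trials=20000):
    """Empirical Pr{S_n/n <= 1} over `trials` independent experiments."""
    samples = rng.exponential(scale=1.0, size=(trials, n))
    return np.mean(samples.mean(axis=1) <= 1.0)

for n in (1, 10, 100, 1000):
    # n = 1 gives Pr{X <= 1} = 1 - 1/e ~ 0.632; large n approaches 0.5
    print(n, prob_below_mean(n))
```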
Yet another kind of convergence is convergence in mean square (MS). An example of this, for the sample average $S_n/n$ of IID rv's with a variance, is given in (1.50), repeated below:
$$\lim_{n\to\infty} \mathsf{E}\!\left[\left(\frac{S_n}{n} - \overline{X}\right)^{2}\right] = 0.$$
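In fact, for IID rv's with variance $\sigma^2$ the expectation above equals $\sigma^2/n$ exactly, so the mean-square error shrinks like $1/n$. The following sketch (illustrative code, not from the text) checks this by simulation for Uniform(0,1) rv's, which have mean $1/2$ and variance $1/12$:

```python
import numpy as np

# Sketch (illustrative, my own names): for IID rv's with variance
# sigma^2, E[(S_n/n - mean)^2] equals sigma^2/n exactly.  Here we
# compare the empirical mean-square error of the sample average of
# Uniform(0,1) rv's with the exact value (1/12)/n.
rng = np.random.default_rng(1)

def mean_square_error(n, trials=50000):
    """Empirical E[(S_n/n - 1/2)^2] for Uniform(0,1) samples."""
    avgs = rng.uniform(size=(trials, n)).mean(axis=1)
    return np.mean((avgs - 0.5) ** 2)

for n in (10, 100, 1000):
    print(n, mean_square_error(n), (1 / 12) / n)  # empirical vs. exact
```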
The general definition is as follows:

Definition 1.9. A sequence of rv's $Y_1, Y_2, \ldots$, converges in mean square (MS) to a real number $z$ if $\lim_{n\to\infty} \mathsf{E}\big[(Y_n - z)^2\big] = 0$.

Our derivation of the weak law of large numbers (Theorem 1.2) was essentially based on the
MS convergence of (1.50). Using the same approach, Exercise 1.28 shows in general that
convergence in MS implies convergence in probability. Convergence in probability does not
imply MS convergence, since as shown in Theorem 1.3, the weak law of large numbers holds
without the need for a variance.
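The mechanism behind "MS implies in probability" is the Chebyshev/Markov bound $\Pr\{|Y_n - z| > \varepsilon\} \leq \mathsf{E}[(Y_n - z)^2]/\varepsilon^2$: if the right side goes to 0, so must the left. A small numerical sketch (illustrative code and names, not from the text), again using sample averages of Uniform(0,1) rv's:

```python
import numpy as np

# Sketch of why MS convergence implies convergence in probability
# (illustrative code).  The bound
#     Pr{|Y_n - z| > eps} <= E[(Y_n - z)^2] / eps^2
# forces the left side to 0 whenever the mean-square error goes to 0.
rng = np.random.default_rng(2)

def check_chebyshev(n, eps=0.05, trials=50000):
    """Compare Pr{|S_n/n - 1/2| > eps} with its mean-square bound."""
    avgs = rng.uniform(size=(trials, n)).mean(axis=1)
    prob = np.mean(np.abs(avgs - 0.5) > eps)
    bound = np.mean((avgs - 0.5) ** 2) / eps ** 2
    return prob, bound

for n in (10, 100, 1000):
    prob, bound = check_chebyshev(n)
    print(n, prob, bound)  # prob <= bound, and both shrink with n
```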
The final type of convergence to be discussed is convergence with probability 1. The strong law of large numbers, stating that $\Pr\{\lim_{n\to\infty} S_n/n = \overline{X}\} = 1$, is an example. The general definition is as follows:
Definition 1.10. A sequence of rv's $Y_1, Y_2, \ldots$, converges with probability 1 (W.P.1) to a real number $z$ if $\Pr\{\lim_{n\to\infty} Y_n = z\} = 1$.
An equivalent statement is that $Y_1, Y_2, \ldots$, converges W.P.1 to a real number $z$ if, for every $\varepsilon > 0$, $\lim_{n\to\infty} \Pr\big\{\bigcup_{m\geq n} \{|Y_m - z| > \varepsilon\}\big\} = 0$. The equivalence of these two statements follows by exactly the same argument used to show the equivalence of the two versions of the strong law. In seeing this, note that the proof of version 2 of the strong law did not use any properties of the sample averages $S_n/n$.
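Convergence W.P.1 is a statement about individual sample paths: almost every single sequence of fair-coin sample averages settles down to $1/2$ and stays there. The following sketch (illustrative code, my own construction) examines the tail of several simulated paths and finds every one of them close to $1/2$ over the whole tail:

```python
import numpy as np

# Numerical sketch of convergence W.P.1 (illustrative, not from the
# text): the SLLN says almost every individual sample path of S_n/n
# converges to the mean.  For fair-coin flips (mean 1/2), we record
# the worst deviation of the running average over a long tail.
rng = np.random.default_rng(3)

def tail_deviation(n_max=20000, tail_start=10000):
    """Max |S_n/n - 1/2| over n in [tail_start, n_max] for one path."""
    flips = rng.integers(0, 2, size=n_max)
    running_avg = np.cumsum(flips) / np.arange(1, n_max + 1)
    return np.max(np.abs(running_avg[tail_start - 1:] - 0.5))

deviations = [tail_deviation() for _ in range(20)]
print(max(deviations))  # small for every simulated path
```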
We now show that convergence with probability 1 implies convergence in probability. Using the second form above of convergence W.P.1, we see that
$$\Pr\Big\{\bigcup_{m\geq n} \{|Y_m - z| > \varepsilon\}\Big\} \geq \Pr\{|Y_n - z| > \varepsilon\}. \qquad (1.73)$$
Thus if the term on the left converges to 0 as $n \to \infty$, then the term on the right does also, and convergence W.P.1 implies convergence in probability.
It turns out that convergence in probability does not imply convergence W.P.1. We cannot
show this from the SLLN in Theorem 1.5 and the WLLN in Theorem 1.3, since both hold
under the same conditions. However, convergence in probability and convergence W.P.1 also
hold for many other sequences of rv’s . A simple example where convergence in probability
holds but convergence W.P.1 does not is given by Example 1.4.1 where we let Yn = IAn and
z = 0. Then for any positive ε < 1, the left side of (1.73) is 1 and the right side converges
to 0. This means that {Yn ; n ≥ 1} converges to 0 in probability but does not converge with
probability 1.
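A concrete way to see this is the standard "moving indicator" construction in the spirit of Example 1.4.1 (the indexing details below are my own). Index the rv's in sweeps: sweep $k$ has $k$ intervals of width $1/k$ partitioning $[0,1)$, and $Y_n = 1$ iff the sample point $\omega$ lands in the current interval. Then $\Pr\{Y_n = 1\} = 1/k \to 0$, so $Y_n \to 0$ in probability; yet every $\omega$ sees $Y_n = 1$ once per sweep, i.e., infinitely often, so $Y_n(\omega)$ never converges to 0:

```python
# Sketch of a sequence converging in probability but not W.P.1
# (illustrative construction in the spirit of Example 1.4.1).
# Sweep k consists of the k intervals [j/k, (j+1)/k) for j = 0..k-1.

def typewriter(omega, n_sweeps):
    """Values Y_1, Y_2, ... for one sample point omega in [0, 1)."""
    ys = []
    for k in range(1, n_sweeps + 1):
        for j in range(k):  # sweep k: is omega in the j-th interval?
            ys.append(1 if j / k <= omega < (j + 1) / k else 0)
    return ys

ys = typewriter(0.3, 200)
print(sum(ys))         # one hit per sweep: Y_n = 1 infinitely often
print(sum(ys[-200:]))  # but late Y_n are almost all 0
```

Each sweep hits $\omega$ exactly once, so the sum grows without bound even though the hits become ever sparser; that sparseness is exactly convergence in probability (and, since $\mathsf{E}[Y_n^2] = 1/k \to 0$, also convergence in MS) without convergence W.P.1.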
Finally, we want to show that convergence W.P.1 neither implies nor is implied by convergence in mean square. Although we have not shown it, convergence W.P.1 in the form of the SLLN is valid for rv's with a mean but not a variance, so this is a case where convergence W.P.1 holds but convergence in MS does not.

Going the other way, Example 1.4.1 again shows that convergence W.P.1 does not hold here, but convergence in MS to 0 does hold. Figure 1.12 illustrates the fact that neither convergence W.P.1 nor convergence in MS implies the other.

1.5 Relation of probability models to the real world

Whenever first-rate engineers or scientists construct a probability model to represent aspects
of some system that either exists or is being designed for some application, they first acquire
a deep knowledge of the system and its surrounding circumstances. This is then combined
with a deep and wide knowledge of probabilistic analysis of similar types of problems. For a
text such as this, however, it is necessary for motivation and insight to pose the models we
look at as highly simplified models of real-world applications, chosen more for their tutorial
than practical value.
There is a danger, then, that readers will come away with the impression that analysis is
more challenging and important than modeling. To the contrary, modeling is almost always
more difficult, more challenging, and more important than analysis. The objective here
is to provide the necessary knowledge and insight about probabilistic systems so that the
reader can later combine it with a deep understanding of some real application area which
will result in a useful interactive use of models, analysis, and experimentation.
In this section, our purpose is not to learn how to model real-world problems, since, as
said above, this requires deep and specialized knowledge of whatever application area is of
interest. Rather it is to understand the following con...
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R.srikant during the Spring '09 term at University of Illinois, Urbana Champaign.