1.3 Probability review

1.3.1 Conditional probabilities and statistical independence

Definition 1.1. For any two events A and B (with Pr{B} > 0), the conditional probability of A, conditional on B, is defined by

Pr{A|B} = Pr{AB}/Pr{B}.    (1.10)

One visualizes an experiment that has been partly carried out with B as the result. Pr{A|B}
can then be viewed as the probability of A normalized to a sample space restricted to event
B . Within this restricted sample space, we can view B as the sample space and AB as an
event within this sample space. For a ﬁxed event B , we can visualize mapping each event
A in the original space into AB in the restricted space. It is easy to see that the event
axioms are still satisfied in the restricted space. Assigning probability Pr{A|B} to each AB
in the restricted space, it is easy to see that the axioms of probability are also satisﬁed. In
other words, everything we know about probability can also be applied to such a restricted
probability space.
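For a finite sample space with equally likely points, (1.10) and the restricted-space view can be checked directly. Below is a minimal Python sketch; the single-die model and the particular events are our own illustrative choices, not from the text:

```python
from fractions import Fraction

def pr(event, sample_space):
    """Probability of an event under a uniform measure on a finite sample space."""
    return Fraction(len(event & sample_space), len(sample_space))

# One roll of a fair die.
omega = {1, 2, 3, 4, 5, 6}
A = {1, 2, 3}   # roll is at most 3
B = {2, 4, 6}   # roll is even

# Definition (1.10): Pr{A|B} = Pr{AB} / Pr{B}.
cond = pr(A & B, omega) / pr(B, omega)

# Equivalent view: treat B itself as the restricted sample space
# and AB as an event within it.
restricted = pr(A & B, B)

assert cond == restricted == Fraction(1, 3)
```

Both computations give 1/3, illustrating that conditioning on B is the same as renormalizing within the restricted space B.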
We sometimes express intersection as A ∩ B and sometimes directly as AB.

Definition 1.2. Two events, A and B, are statistically independent (or, more briefly, independent) if
Pr {AB } = Pr {A} Pr {B } .
For Pr{B} > 0, this is equivalent to Pr{A|B} = Pr{A}. This latter form corresponds to
our intuitive view of independence, since it says that the observation of B does not change
the probability of A. Such intuitive statements about “observation” and “occurrence” are
helpful in reasoning probabilistically, but also can lead to great confusion. For Pr {B } > 0,
Pr{A|B} is defined without any notion of B being observed “before” A. For example,
Pr{A|B} Pr{B} = Pr{B|A} Pr{A} is simply a consequence of the definition of conditional
probability and has nothing to do with causality or observations. This issue caused immense
confusion in probabilistic arguments before the axiomatic theory was developed.
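Definition 1.2 can be tested mechanically on a finite model. The sketch below (two independent rolls of a fair die; the particular events are illustrative assumptions) verifies one independent pair and one dependent pair:

```python
from fractions import Fraction
from itertools import product

# Two rolls of a fair die: the 36 equally likely ordered pairs.
omega = set(product(range(1, 7), repeat=2))

def pr(event):
    return Fraction(len(event), len(omega))

A = {w for w in omega if w[0] % 2 == 0}      # first roll even
B = {w for w in omega if w[1] <= 2}          # second roll is 1 or 2
D = {w for w in omega if w[0] + w[1] >= 10}  # sum is at least 10

# Definition 1.2: A and B are independent iff Pr{AB} = Pr{A} Pr{B}.
assert pr(A & B) == pr(A) * pr(B)   # A, B independent
assert pr(A & D) != pr(A) * pr(D)   # A, D dependent (large sums need a large first roll)
```

Here Pr{AD} = 1/9 while Pr{A} Pr{D} = 1/12, so observing D does change the probability of A.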
The notion of independence is of vital importance in deﬁning, and reasoning about, probability models. We will see many examples where very complex systems become very simple,
both in terms of intuition and analysis, when enough quantities are modeled as statistically
independent. An example will be given in the next subsection where repeated independent
experiments are used in understanding relative frequency arguments.
Often, when the assumption of independence turns out to be oversimpliﬁed, it is reasonable
to assume conditional independence, where A and B are said to be conditionally independent
given C if Pr{AB|C} = Pr{A|C} Pr{B|C}. Most of the stochastic processes to be studied
here are characterized by particular forms of independence or conditional independence.
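Conditional independence without ordinary independence can be seen in a small two-coin model: choose one of two coins at random, then flip it twice. The coin biases (3/4 and 1/4) are our own illustrative choices, not from the text:

```python
from fractions import Fraction

# Sample points: (coin, flip1, flip2), with product probabilities.
half = Fraction(1, 2)
heads_prob = {"g": Fraction(3, 4), "f": Fraction(1, 4)}  # per-coin chance of heads
prob = {}
for coin in ("g", "f"):
    for f1 in ("H", "T"):
        for f2 in ("H", "T"):
            p1 = heads_prob[coin] if f1 == "H" else 1 - heads_prob[coin]
            p2 = heads_prob[coin] if f2 == "H" else 1 - heads_prob[coin]
            prob[(coin, f1, f2)] = half * p1 * p2  # coin choice, then two flips

def pr(pred):
    return sum(q for w, q in prob.items() if pred(w))

def A(w): return w[1] == "H"            # first flip heads
def B(w): return w[2] == "H"            # second flip heads
def C(w): return w[0] == "g"            # the 3/4-heads coin was chosen
def AB(w): return A(w) and B(w)

# Conditionally independent given C: Pr{AB|C} = Pr{A|C} Pr{B|C}.
lhs = pr(lambda w: AB(w) and C(w)) / pr(C)
rhs = (pr(lambda w: A(w) and C(w)) / pr(C)) * (pr(lambda w: B(w) and C(w)) / pr(C))
assert lhs == rhs
# ... but not independent unconditionally: Pr{AB} != Pr{A} Pr{B}.
assert pr(AB) != pr(A) * pr(B)
```

Given the coin, the flips are independent; unconditionally, a first head is evidence for the biased coin and so raises the probability of a second head (Pr{AB} = 5/16 > 1/4).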
For more than two events, the deﬁnition of statistical independence is a little more complicated.
Definition 1.3. The events A1, . . . , An, n > 2, are statistically independent if, for each subset S of two or more of the integers 1 to n,

Pr{∩_{i∈S} Ai} = ∏_{i∈S} Pr{Ai}.    (1.11)

This includes the full subset S = {1, . . . , n}, so one necessary condition for independence
is that
Pr{∩_{i=1}^n Ai} = ∏_{i=1}^n Pr{Ai}.    (1.12)

Assuming that Pr{Ai} is strictly positive for each i, this says that
Pr{Ai | ∩_{j∈S} Aj} = Pr{Ai}    for each i and each S such that i ∉ S.
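Condition (1.12) by itself does not force the subset conditions. A small numerical sketch (a uniform eight-point sample space and events chosen for illustration; not necessarily the construction of Exercise 1.5) exhibits events satisfying (1.12) while a pair of them is dependent:

```python
from fractions import Fraction

omega = set(range(1, 9))  # eight equally likely points

def pr(event):
    return Fraction(len(event), len(omega))

A1 = {1, 2, 3, 4}
A2 = {1, 2, 3, 5}
A3 = {1, 5, 6, 7}

# The full product (1.12) holds: Pr{A1 A2 A3} = 1/8 = (1/2)^3.
assert pr(A1 & A2 & A3) == pr(A1) * pr(A2) * pr(A3)

# Yet (1.11) fails for the subset S = {1, 2}: Pr{A1 A2} = 3/8, not 1/4.
assert pr(A1 & A2) != pr(A1) * pr(A2)
```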
It might be surprising that (1.12) does not imply (1.11), but the example in Exercise 1.5
will help clarify this. This deﬁnition will become clearer (and simpler) when we see how to
view independence of events as a special case of independence of random variables.

1.3.2 Repeated idealized experiments

Much of our intuitive understanding of probability comes from the notion of repeated idealized experiments, but the axioms of probability contain no explicit recognition of such
repetitions. The appropriate way to handle n repetitions of an idealized experiment is
through an extended probability model whose sample points are n-tuples of sample points
from the original model. In other words, given an original sample space Ω, the sample space
of an n-repetition model is the Cartesian product

Ω^n = {(ω1, ω2, . . . , ωn) : ωi ∈ Ω for each i, 1 ≤ i ≤ n}.    (1.13)

Since each sample point in the n-repetition model is an n-tuple of points from the original
Ω, an event in the n-repetition model is a subset of Ω^n, i.e., a subset of the n-tuples
(ω1, . . . , ωn), each ωi ∈ Ω. This class of events should include each event of the form
(A1, A2, . . . , An) where A1 ⊆ Ω is an event within the first trial, A2 ⊆ Ω is an event within
the 2nd trial, etc. For example, with two rolls of a die, an even number {2, 4, 6} followed by
an odd number is an event. There are other events also, such as the union of odd followed
by even or even followed by odd, that cannot be represented as an n-tuple of elementary
events. Thus the set of events (for n repetitions) should include all n-tuples of elementary
events plus the extension of this class of events over c...
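The two-roll example can be made concrete in a short sketch; the die model and the particular events are illustrative choices:

```python
from fractions import Fraction
from itertools import product

# Original model: one roll of a fair die.
omega = set(range(1, 7))
# Two-repetition model, as in (1.13): sample points are ordered pairs.
omega2 = set(product(omega, repeat=2))

def pr(event, space):
    return Fraction(len(event), len(space))

even, odd = {2, 4, 6}, {1, 3, 5}

# "Even followed by odd" is a rectangle event (A1, A2) = even x odd.
even_then_odd = {w for w in omega2 if w[0] in even and w[1] in odd}
assert even_then_odd == set(product(even, odd))
assert pr(even_then_odd, omega2) == Fraction(1, 4)

# "One roll even and the other odd" is a union of two rectangles and is
# not itself a single pair (A1, A2).
odd_then_even = {w for w in omega2 if w[0] in odd and w[1] in even}
mixed = even_then_odd | odd_then_even
assert pr(mixed, omega2) == Fraction(1, 2)
```

The union event `mixed` cannot be written as B1 × B2 for any B1, B2 ⊆ Ω, which is why the event class for the n-repetition model must be closed under more than just products.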
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R.srikant during the Spring '09 term at University of Illinois, Urbana Champaign.