Discrete-time stochastic processes


1.3 Probability review

1.3.1 Conditional probabilities and statistical independence

Definition 1.1. For any two events A and B (with Pr{B} > 0), the conditional probability of A, conditional on B, is defined by

    Pr{A|B} = Pr{AB} / Pr{B}.        (1.10)

One visualizes an experiment that has been partly carried out, with B as the result. Pr{A|B} can then be viewed as the probability of A, normalized to a sample space restricted to the event B. Within this restricted sample space, we can view B as the sample space and AB as an event within it. For a fixed event B, we can visualize mapping each event A in the original space into AB in the restricted space. It is easy to see that the event axioms are still satisfied in the restricted space. Assigning probability Pr{A|B} to each AB in the restricted space, it is easy to see that the axioms of probability are also satisfied. In other words, everything we know about probability can also be applied to such a restricted probability space.

(Footnote: We sometimes express intersection as A ∩ B and sometimes directly as AB.)

Definition 1.2. Two events, A and B, are statistically independent (or, more briefly, independent) if Pr{AB} = Pr{A} Pr{B}. For Pr{B} > 0, this is equivalent to Pr{A|B} = Pr{A}.

This latter form corresponds to our intuitive view of independence, since it says that observing B does not change the probability of A. Such intuitive statements about "observation" and "occurrence" are helpful in reasoning probabilistically, but they can also lead to great confusion. For Pr{B} > 0, Pr{A|B} is defined without any notion of B being observed "before" A. For example, Pr{A|B} Pr{B} = Pr{B|A} Pr{A} is simply a consequence of the definition of conditional probability and has nothing to do with causality or observations.
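Definitions 1.1 and 1.2 can be checked mechanically on a small finite model. The sketch below uses a fair-die example with exact rational arithmetic; the event choices and helper names (`pr`, `pr_cond`) are illustrative assumptions, not taken from the text.

```python
from fractions import Fraction

# Uniform probability model for one roll of a fair die (an illustrative
# example; the events A and B below are assumptions, not from the text).
omega = {1, 2, 3, 4, 5, 6}

def pr(event):
    """Pr{event} under the uniform measure on omega."""
    return Fraction(len(event & omega), len(omega))

def pr_cond(a, b):
    """Pr{A|B} = Pr{AB}/Pr{B}, defined only when Pr{B} > 0 (eq. 1.10)."""
    assert pr(b) > 0
    return pr(a & b) / pr(b)

A = {2, 4, 6}      # outcome is even
B = {1, 2, 3, 4}   # outcome is at most 4

print(pr_cond(A, B))                # 1/2, which equals Pr{A}
print(pr(A & B) == pr(A) * pr(B))   # True: A and B are independent here
# The symmetry Pr{A|B}Pr{B} = Pr{B|A}Pr{A} holds by definition alone:
print(pr_cond(A, B) * pr(B) == pr_cond(B, A) * pr(A))   # True
```

Here Pr{A|B} = Pr{A} = 1/2, so this particular pair is independent; replacing B with, say, {1, 2, 3} breaks the equality, matching the intuition that observing B then does change the probability of A.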
This issue caused immense confusion in probabilistic arguments before the axiomatic theory was developed.

The notion of independence is of vital importance in defining, and reasoning about, probability models. We will see many examples where very complex systems become very simple, both in terms of intuition and analysis, when enough quantities are modeled as statistically independent. An example will be given in the next subsection, where repeated independent experiments are used to understand relative-frequency arguments. Often, when the assumption of independence turns out to be oversimplified, it is reasonable to assume conditional independence: A and B are said to be conditionally independent given C if Pr{AB|C} = Pr{A|C} Pr{B|C}. Most of the stochastic processes to be studied here are characterized by particular forms of independence or conditional independence.

For more than two events, the definition of statistical independence is a little more complicated.

Definition 1.3. The events A_1, ..., A_n, n > 2, are statistically independent if, for each subset S of two or more of the integers 1 to n,

    Pr{ ∩_{i∈S} A_i } = ∏_{i∈S} Pr{A_i}.        (1.11)

This includes the full subset S = {1, ..., n}, so one necessary condition for independence is that

    Pr{ ∩_{i=1}^{n} A_i } = ∏_{i=1}^{n} Pr{A_i}.        (1.12)

Assuming that Pr{A_i} is strictly positive for each i, this says that

    Pr{ A_i | ∩_{j∈S} A_j } = Pr{A_i}   for each i and each S such that i ∉ S.

It might be surprising that (1.12) does not imply (1.11), but the example in Exercise 1.5 will help clarify this. This definition will become clearer (and simpler) when we see how to view independence of events as a special case of independence of random variables.
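Definition 1.3 requires the product rule (1.11) for every subset of two or more events, and no single subset suffices. The sketch below enumerates all such subsets; the two-coin events used (this is a standard illustration, not the example of Exercise 1.5) are pairwise independent, yet the full triple fails (1.11), showing why every subset must be checked.

```python
from fractions import Fraction
from itertools import combinations, product

# Two fair coin flips; sample points are pairs like ('H', 'T').
# Standard illustrative events (an assumption, not the text's Exercise 1.5):
# every pair satisfies the product rule, but the full triple does not.
omega = set(product("HT", repeat=2))

def pr(event):
    return Fraction(len(event), len(omega))

A = {w for w in omega if w[0] == "H"}    # first flip is heads
B = {w for w in omega if w[1] == "H"}    # second flip is heads
C = {w for w in omega if w[0] == w[1]}   # the two flips match

def independent(events):
    """Check (1.11): Pr{∩_{i∈S} A_i} = ∏_{i∈S} Pr{A_i} for all |S| >= 2."""
    for k in range(2, len(events) + 1):
        for S in combinations(events, k):
            lhs = pr(set.intersection(*S))
            rhs = Fraction(1)
            for E in S:
                rhs *= pr(E)
            if lhs != rhs:
                return False
    return True

for S in combinations([A, B, C], 2):
    print(independent(list(S)))      # True for every pair
print(independent([A, B, C]))        # False: Pr{ABC} = 1/4, but the product is 1/8
```

Each pair intersects in the single point ('H', 'H') with probability 1/4 = (1/2)(1/2), but so does the triple, and 1/4 ≠ 1/8.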
1.3.2 Repeated idealized experiments

Much of our intuitive understanding of probability comes from the notion of repeated idealized experiments, but the axioms of probability contain no explicit recognition of such repetitions. The appropriate way to handle n repetitions of an idealized experiment is through an extended probability model whose sample points are n-tuples of sample points from the original model. In other words, given an original sample space Ω, the sample space of an n-repetition model is the Cartesian product

    Ω^n = {(ω_1, ω_2, ..., ω_n) : ω_i ∈ Ω for each i, 1 ≤ i ≤ n}.        (1.13)

Since each sample point in the n-repetition model is an n-tuple of points from the original Ω, an event in the n-repetition model is a subset of Ω^n, i.e., a subset of the n-tuples (ω_1, ..., ω_n) with each ω_i ∈ Ω. This class of events should include each event of the form (A_1, A_2, ..., A_n), where A_1 ⊆ Ω is an event within the first trial, A_2 ⊆ Ω is an event within the second trial, and so forth. For example, with two rolls of a die, an even number {2, 4, 6} followed by an odd number is an event. There are other events as well, such as the union of odd-followed-by-even and even-followed-by-odd, that cannot be represented as an n-tuple of elementary events. Thus the set of events (for n repetitions) should include all n-tuples of elementary events plus the extension of this class of events over c...
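The n-repetition construction is easy to make concrete for the two-die example above. A minimal sketch, assuming a uniform measure on each roll (the helper names are illustrative, not from the text):

```python
from fractions import Fraction
from itertools import product

# 2-repetition model of a fair die (a sketch of eq. 1.13): sample points
# are pairs (w1, w2) with each wi in omega.
omega = {1, 2, 3, 4, 5, 6}
omega_n = set(product(omega, repeat=2))    # the Cartesian product Ω^2

def pr(event, space):
    return Fraction(len(event), len(space))

even, odd = {2, 4, 6}, {1, 3, 5}

# A "tuple" event (A1, A2): even on roll 1, then odd on roll 2.
even_then_odd = {(w1, w2) for (w1, w2) in omega_n
                 if w1 in even and w2 in odd}

# An event that is NOT a tuple of per-trial events:
# odd-then-even OR even-then-odd (exactly one even roll).
mixed = {(w1, w2) for (w1, w2) in omega_n
         if (w1 in even) != (w2 in even)}

print(pr(even_then_odd, omega_n))   # 9/36 = 1/4
print(pr(mixed, omega_n))           # 18/36 = 1/2
```

The `mixed` event illustrates the point in the text: it is a union of two tuple events, and any reasonable event class for the repetition model must be closed under such unions.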

This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R.srikant during the Spring '09 term at University of Illinois, Urbana Champaign.
