Chapter 3  Conditional expectation

Given a probability space (Ω, F, P), let A, B be events, X, Y be r.v.s, and 𝒜 be a sub-σ-algebra. We are interested in introducing E(X | 𝒜), one of the most important (and perhaps one of the most difficult) concepts in probability theory, and certainly in this course. Conditional expectation will be used in the definition of martingales. The formal definition of E(X | 𝒜) is given in Definition 3.5 in Section 3.2. Take a look at it first. If you are happy with that definition, you can skip directly to Section 3.2. However, if you find that definition a little puzzling (if that is the case, don't worry, you are not alone), we will start from simple definitions and build up to it bit by bit. Specifically, we shall introduce the following concepts step by step:

• P(A | B),
• E(X | B),
• E(X | Y = y),
• E(X | Y),
• E(X | σ(Y)),
• E(X | 𝒜).

3.1 Review

3.1.1 Definition of P(A | B)

Definition 3.1  Let A, B ∈ F be two events, with P(B) > 0. Then the conditional probability of A given B is

    P(A | B) = P(A ∩ B) / P(B).

Remark 3.1  This definition can be motivated as follows. Since we know ω ∈ B, we can take B as our new sample space, Ω̃ := B. The new probability space becomes

    (Ω̃, F̃, P̃) := (B, F ∩ B, P/P(B)).

The reason for choosing P̃ = P/P(B) is to make sure that P̃(Ω̃) = P̃(B) = 1. (The old measure P is NOT a probability measure on F̃, since P(Ω̃) = P(B) < 1.) A graphical illustration is given below.

3.1.2 Definition of E(X | B)

Definition 3.2  The conditional expectation of a r.v. X, given an event B with P(B) > 0, is

    E(X | B) = E(X I_B) / P(B),

where I_B(ω) = 1 if ω ∈ B, and 0 otherwise.

Remark 3.2  This definition can be motivated "formally" as follows:

    E(X | B) = ∫_Ω̃ X(ω) dP̃(ω)
             = [ ∫_B X(ω) dP(ω) ] / P(B)
             = [ ∫_Ω I_B(ω) X(ω) dP(ω) ] / P(B)
             = E(X I_B) / P(B).

Case I: X is a discrete r.v.
If X is a discrete r.v. (taking values x₁, x₂, …), then X I_B is also discrete. Hence,

    E(X | B) = E(X I_B) / P(B)
             = (1/P(B)) Σ_{i=1}^∞ (x_i × 1) P(X = x_i, I_B = 1)
             = (1/P(B)) Σ_{i=1}^∞ x_i P(X = x_i | I_B = 1) P(I_B = 1)
             = (1/P(B)) Σ_{i=1}^∞ x_i P(X = x_i | B) P(B)
             = Σ_{i=1}^∞ x_i P(X = x_i | B).                         (1.1)

Case II: X is an absolutely continuous r.v.

For simplicity, we consider the canonical r.v. X(ω) = ω (i.e., we take (Ω, F, P) = (ℝ, B, P)). Further assume that X is absolutely continuous with p.d.f. f_X. Then

    E(X | B) = E(X I_B) / P(B)
             = [ ∫_Ω ω I_B(ω) f_X(ω) dω ] / P(B)
             = [ ∫_B x f_X(x) dx ] / P(B). ...
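As a quick numerical sanity check of Definition 3.1, Definition 3.2, and formula (1.1), here is a small Python sketch for a discrete example. The fair die, the particular events A and B, and all variable names are illustrative assumptions, not part of the notes:

```python
from fractions import Fraction

# Fair die: Ω = {1,...,6}, P({ω}) = 1/6 (illustrative example)
omega = set(range(1, 7))
P = {w: Fraction(1, 6) for w in omega}

X = lambda w: w          # canonical r.v. X(ω) = ω
B = {2, 4, 6}            # conditioning event: "the roll is even"
A = {4, 5, 6}            # event A = {X >= 4}

P_B = sum(P[w] for w in B)

# Definition 3.1: P(A | B) = P(A ∩ B) / P(B)
P_A_given_B = sum(P[w] for w in A & B) / P_B

# Definition 3.2: E(X | B) = E(X I_B) / P(B)
E_XIB = sum(X(w) * P[w] for w in omega if w in B)
E_X_given_B = E_XIB / P_B

# Formula (1.1): E(X | B) = Σ_i x_i P(X = x_i | B)
# (for x in B, {X = x} ∩ B = {x}, so P(X = x | B) = P({x})/P(B))
E_alt = sum(x * (P[x] / P_B) for x in B)

assert E_X_given_B == E_alt
```

Working with `Fraction` keeps the arithmetic exact, so the identity between Definition 3.2 and formula (1.1) can be checked with equality rather than a floating-point tolerance.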
 Spring '11