MATH 235A Probability Theory Lecture Notes, Fall 2009
Part II: Laws of large numbers
Dan Romik, Department of Mathematics, UC Davis
Draft version of 11/2/2009 (minor typos corrected 11/5/09)

Lecture 7: Expected values

7.1 Construction of the expectation operator

We wish to define the notion of the expected value, or expectation, of a random variable X, which will be denoted E X (or E(X)). In measure theory this is denoted ∫ X dP and is called the Lebesgue integral. It is one of the most important concepts in all of mathematical analysis! So time invested in understanding it is time well spent.

The idea is simple. For bounded random variables, we want the expectation to satisfy three properties. First, the expectation of an indicator variable 1_A, where A is an event, should be equal to P(A). Second, the expectation operator should be linear, i.e., it should satisfy E(aX + bY) = a E(X) + b E(Y) for real numbers a, b and random variables X, Y. Third, it should be monotone, i.e., if X ≤ Y (meaning X(ω) ≤ Y(ω) for all ω ∈ Ω) then E(X) ≤ E(Y). For unbounded random variables we will also require some kind of continuity, but let us treat the bounded case first. It turns out that these properties determine the expectation/Lebesgue integral operator uniquely. Different textbooks may vary in how they construct it, but the existence and uniqueness are the essential facts.

Theorem 1. Let (Ω, F, P) be a probability space, and let B denote the class of bounded random variables. There exists a unique operator E that takes an r.v. X ∈ B and returns a number in R, and satisfies:

1. If A ∈ F then E(1_A) = P(A).
2. If X, Y ∈ B and a, b ∈ R then E(aX + bY) = a E(X) + b E(Y).
3. If X, Y ∈ B and X ≤ Y then E(X) ≤ E(Y).

Sketch of proof. Call X a simple function if it is of the form X = Σ_{i=1}^n a_i 1_{B_i}, where a_1, ..., a_n ∈ R and B_1, ..., B_n are disjoint events. For such r.v.'s define E(X) = Σ_{i=1}^n a_i P(B_i). Show that the linearity and monotonicity properties hold; uniqueness clearly holds so far, since we had no choice in how to define E(X) for such functions if we wanted the properties above to hold.

Now, for a general bounded r.v. X with |X| ≤ M and any ε > 0, it is possible to approximate X from below and above by simple functions Y ≤ X ≤ Z such that E(Z − Y) < ε. This suggests defining

    E(X) = sup{ E(Y) : Y is a simple function such that Y ≤ X }.    (1)

By approximation, the construction is shown to still satisfy the properties in the Theorem and to be unique, since E(X) is squeezed between E(Y) and E(Z), and these can be made arbitrarily close to each other.
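To make the three defining properties concrete, here is a minimal sketch (not part of the original notes) on a toy finite probability space. On a finite Ω every random variable is automatically simple, so E(X) = Σ X(ω) P({ω}) is exactly the simple-function formula from the proof sketch, and all three properties can be checked by direct computation. The sample space, the point masses, and helper names like indicator and expectation are illustrative choices.

```python
# Minimal sketch: expectation on a toy finite probability space (Omega, P).
# The space, the probabilities, and the helper names are illustrative only.

from fractions import Fraction

# A toy probability space: Omega = {0, 1, 2, 3} with equal point masses.
omega = [0, 1, 2, 3]
prob = {w: Fraction(1, 4) for w in omega}

def indicator(A):
    """Return the indicator random variable 1_A as a function on Omega."""
    return lambda w: 1 if w in A else 0

def expectation(X):
    """E(X) = sum over w of X(w) * P({w}); on a finite Omega every r.v. is simple."""
    return sum(X(w) * prob[w] for w in omega)

A = {0, 1}
B = {1, 2, 3}

# Property 1: E(1_A) = P(A).
assert expectation(indicator(A)) == Fraction(1, 2)

# Property 2: linearity, E(aX + bY) = a E(X) + b E(Y).
a, b = 3, -2
X, Y = indicator(A), indicator(B)
lhs = expectation(lambda w: a * X(w) + b * Y(w))
rhs = a * expectation(X) + b * expectation(Y)
assert lhs == rhs

# Property 3: monotonicity, X <= Y pointwise implies E(X) <= E(Y).
assert expectation(indicator({1})) <= expectation(indicator(B))
```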
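The approximation step in the proof sketch can also be illustrated numerically. The following sketch assumes Ω = [0, 1] with the uniform (Lebesgue) measure and the specific bounded variable X(ω) = ω², neither of which appears in the notes: chopping the range of X into levels of width ε produces simple functions Y ≤ X ≤ Z with E(Z − Y) = ε, so E(Y) and E(Z) bracket the true value E(X) = 1/3 and pin it down as ε shrinks, in the spirit of definition (1).

```python
# Illustrative sketch of the approximation step: a bounded r.v. X on
# ([0,1], Lebesgue) is squeezed between simple functions Y <= X <= Z whose
# expectations differ by eps.  The choice X(w) = w**2 and the grid
# construction below are illustrative assumptions, not from the notes.

import math

def bracket_expectation(eps):
    """Lower/upper simple-function bounds for E(X) with X(w) = w**2 on [0,1].

    Y takes the constant value k*eps on the level set
        B_k = {w : k*eps <= X(w) < (k+1)*eps},
    and Z = Y + eps, so Y <= X <= Z and E(Z - Y) = eps.
    For X(w) = w**2, P(B_k) = sqrt(min((k+1)*eps, 1)) - sqrt(k*eps) exactly.
    """
    n_levels = math.ceil(1.0 / eps)           # X is bounded by M = 1
    lower = 0.0
    for k in range(n_levels):
        lo, hi = k * eps, min((k + 1) * eps, 1.0)
        p_k = math.sqrt(hi) - math.sqrt(lo)    # P(B_k) under the uniform measure
        lower += lo * p_k                      # contribution of Y = lo on B_k
    upper = lower + eps                        # E(Z) = E(Y) + eps
    return lower, upper

# The true value is E(X) = integral of w**2 dw over [0,1] = 1/3.
for eps in [0.1, 0.01, 0.001]:
    lo, hi = bracket_expectation(eps)
    print(f"eps={eps}: {lo:.6f} <= E(X) <= {hi:.6f}")
```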