Chapter 1

Motivation

SECTION 1 offers some reasons why anyone who uses probability should know about the measure theoretic approach.

SECTION 2 describes some of the added complications, and some of the compensating benefits, that come with the rigorous treatment of probabilities as measures.

SECTION 3 argues that there are advantages in approaching the study of probability theory via expectations, interpreted as linear functionals, as the basic concept.

SECTION 4 describes the de Finetti convention of identifying a set with its indicator function, and of using the same symbol for a probability measure and its corresponding expectation.

SECTION *5 presents a fair-price interpretation of probability, which emphasizes the linearity properties of expectations. The interpretation is sometimes a useful guide to intuition.

1. Why bother with measure theory?

Following the appearance of the little book by Kolmogorov (1933), which set forth a measure theoretic foundation for probability theory, it has been widely accepted that probabilities should be studied as special sorts of measures. (More or less true; see the Notes to the Chapter.) Anyone who wants to understand modern probability theory will have to learn something about measures and integrals, but it takes surprisingly little to get started.

For a rigorous treatment of probability, the measure theoretic approach is a vast improvement over the arguments usually presented in undergraduate courses. Let me remind you of some difficulties with the typical introduction to probability.

Independence

There are various elementary definitions of independence for random variables. For example, one can require factorization of distribution functions,
\[
P\{X \le x,\; Y \le y\} = P\{X \le x\}\, P\{Y \le y\} \qquad \text{for all real } x, y.
\]
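As a quick sanity check of the factorization condition, the following sketch simulates two independent random variables (standard normals are a hypothetical choice; the text fixes no distribution) and compares the empirical joint distribution function at one point with the product of the empirical marginals:

```python
import random

random.seed(0)
n = 100_000

# Two independent samples; any independent pair would serve.
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [random.gauss(0.0, 1.0) for _ in range(n)]

x0, y0 = 0.3, -0.5  # an arbitrary evaluation point (x, y)

# Empirical versions of P{X <= x0, Y <= y0}, P{X <= x0}, P{Y <= y0}.
joint = sum(1 for a, b in zip(xs, ys) if a <= x0 and b <= y0) / n
fx = sum(1 for a in xs if a <= x0) / n
fy = sum(1 for b in ys if b <= y0) / n

print(abs(joint - fx * fy))  # small, as the factorization predicts
```

Of course, a simulation at one point is no substitute for a proof over all real x, y; it merely illustrates what the definition asserts.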
The problem with this definition is that one needs to be able to calculate distribution functions, which can make it impossible to establish rigorously some desirable properties of independence. For example, suppose \(X_1, \ldots, X_4\) are independent random variables. How would you show that
\[
Y = X_1 X_2 \left[ \frac{\log\left(X_1^2 + X_2^2\right)}{|X_1| + |X_2|} + |X_1|^3 + \frac{X_2^3}{X_1^4 + X_2^4} \right]
\]
is independent of
\[
Z = \sin X_3 + X_3^2 + X_3 X_4 + X_4^2 + \sqrt{X_3^4 + X_4^4},
\]
by means of distribution functions? Somehow you would need to express events \(\{Y \le y, Z \le z\}\) in terms of the events \(\{X_i \le x_i\}\), which is not an easy task. (If you did figure out how to do it, I could easily make up more taxing examples.) You might also try to define independence via factorization of joint density functions, but I could invent further examples to make your life miserable, such as problems where the joint distribution of the random variables is not even given by densities. And if you could grind out the joint densities, probably by means of horrible calculations with Jacobians, you might end up with the mistaken impression that independence had something to do with the smoothness of the transformations.
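The measure theoretic resolution, previewed informally: \(Y\) is a function of \((X_1, X_2)\) alone and \(Z\) a function of \((X_3, X_4)\) alone, so independence of \(Y\) and \(Z\) follows at once from independence of the two blocks, with no distribution-function calculation. A minimal Python sketch of that structure (the bracket placement in \(Y\) is reconstructed from a garbled original, and the standard normal inputs are an assumption):

```python
import math
import random

def Y(x1, x2):
    # Reads only (x1, x2); the bracket placement is an assumption.
    return x1 * x2 * (
        math.log(x1**2 + x2**2) / (abs(x1) + abs(x2))
        + abs(x1)**3
        + x2**3 / (x1**4 + x2**4)
    )

def Z(x3, x4):
    # Reads only (x3, x4).
    return math.sin(x3) + x3**2 + x3 * x4 + x4**2 + math.sqrt(x3**4 + x4**4)

random.seed(1)
x = [random.gauss(0.0, 1.0) for _ in range(4)]  # independent X1, ..., X4
y, z = Y(x[0], x[1]), Z(x[2], x[3])  # Y never reads X3, X4; Z never reads X1, X2
```

Because Y and Z read disjoint blocks of the independent inputs, the measure theoretic theory delivers their independence immediately; the elementary definition offers no such leverage.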
This note was uploaded on 11/21/2009 for the course STAT 330, taught by Professor David Pollard during the Spring '09 term at Yale.