Probability Theory
Richard F. Bass

These notes are © 1998 by Richard F. Bass. They may be used for personal or classroom purposes, but not for commercial purposes. Revised 2001.

1. Basic notions.

A probability or probability measure is a measure whose total mass is one. Because the origins of probability are in statistics rather than analysis, some of the terminology is different. For example, instead of denoting a measure space by (X, A, μ), probabilists use (Ω, F, P). So here Ω is a set, F is called a σ-field (which is the same thing as a σ-algebra), and P is a measure with P(Ω) = 1. Elements of F are called events. Elements of Ω are denoted ω.

Instead of saying a property occurs almost everywhere, we talk about properties occurring almost surely, written a.s. Real-valued measurable functions from Ω to R are called random variables and are usually denoted by X or Y or other capital letters. We often abbreviate "random variable" by r.v.

We let A^c = {ω ∈ Ω : ω ∉ A} (called the complement of A) and B − A = B ∩ A^c. Integration (in the sense of Lebesgue) is called expectation or expected value, and we write E X for ∫ X dP. The notation E[X; A] is often used for ∫_A X dP. The random variable 1_A is the function that is one if ω ∈ A and zero otherwise. It is called the indicator of A (the name characteristic function in probability refers to the Fourier transform). Events such as {ω : X(ω) > a} are almost always abbreviated by (X > a).

Given a random variable X, we can define a probability on R by

    P_X(A) = P(X ∈ A),    A ⊂ R.                    (1.1)

The probability P_X is called the law of X or the distribution of X. We define F_X : R → [0, 1] by

    F_X(x) = P_X((−∞, x]) = P(X ≤ x).               (1.2)

The function F_X is called the distribution function of X.

As an example, let Ω = {H, T}, let F be all subsets of Ω (there are 4 of them), and let P({H}) = P({T}) = 1/2. Let X(H) = 1 and X(T) = 0.
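The coin-flip example can be made concrete with a short sketch. The code below is an illustration, not part of the notes: it builds the finite probability space (Ω, F, P) from the example and computes the law (1.1) and distribution function (1.2) directly from the definitions.

```python
from fractions import Fraction

# The example's probability space: Omega = {H, T}, P(H) = P(T) = 1/2,
# and the random variable X with X(H) = 1, X(T) = 0.
Omega = ["H", "T"]
P = {"H": Fraction(1, 2), "T": Fraction(1, 2)}
X = {"H": 1, "T": 0}

def law(A):
    """P_X(A) = P(X in A), as in (1.1); A is any set of reals."""
    return sum(P[w] for w in Omega if X[w] in A)

def F_X(x):
    """F_X(x) = P(X <= x), as in (1.2)."""
    return sum(P[w] for w in Omega if X[w] <= x)
```

Evaluating F_X on a few points reproduces the piecewise values worked out next: F_X(−1) = 0, F_X(1/2) = 1/2, F_X(1) = 1.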
Then P_X = (1/2)δ_0 + (1/2)δ_1, where δ_x is point mass at x, that is, δ_x(A) = 1 if x ∈ A and 0 otherwise. Here F_X(a) = 0 if a < 0, 1/2 if 0 ≤ a < 1, and 1 if a ≥ 1.

Proposition 1.1. The distribution function F_X of a random variable X satisfies:
(a) F_X is nondecreasing;
(b) F_X is right continuous with left limits;
(c) lim_{x→∞} F_X(x) = 1 and lim_{x→−∞} F_X(x) = 0.

Proof. We prove the first part of (b) and leave the others to the reader. If x_n ↓ x, then (X ≤ x_n) ↓ (X ≤ x), and so P(X ≤ x_n) ↓ P(X ≤ x) since P is a measure. Note that if x_n ↑ x, then (X ≤ x_n) ↑ (X < x), and so F_X(x_n) ↑ P(X < x); this gives the existence of left limits.

Any function F : R → [0, 1] satisfying (a)–(c) of Proposition 1.1 is called a distribution function, whether or not it comes from a random variable.
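Proposition 1.1(b) can be seen numerically on the coin-flip distribution function. The sketch below (an illustration added to these notes, not from them) evaluates F_X along sequences decreasing and increasing to x = 1: from the right the values converge to F_X(1) = 1, while from the left they converge to P(X < 1) = 1/2, so F_X has a left limit at 1 that differs from its value there, i.e. a jump.

```python
from fractions import Fraction

def F_X(a):
    """Distribution function of the coin-flip X:
    F_X(a) = 0 for a < 0, 1/2 for 0 <= a < 1, and 1 for a >= 1."""
    if a < 0:
        return Fraction(0)
    if a < 1:
        return Fraction(1, 2)
    return Fraction(1)

# Right continuity at x = 1: F_X(x_n) = F_X(1) once x_n is close enough from above.
right_vals = [F_X(1 + Fraction(1, n)) for n in (1, 10, 100)]

# Left limit at x = 1: F_X(x_n) -> P(X < 1) = 1/2 as x_n increases to 1,
# which is strictly less than F_X(1) = 1, exhibiting the jump at 1.
left_vals = [F_X(1 - Fraction(1, n)) for n in (2, 10, 100)]
```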
This note was uploaded on 01/02/2012 for the course FINANCE 347 taught by Professor Bayou during the Fall '11 term at NYU.