IEOR 4106: Introduction to Operations Research: Stochastic Models
Spring 2011, Professor Whitt
Class Lecture Notes: Tuesday, January 25.

Random Variables, Conditional Expectation and Transforms

1. Random Variables and Functions of Random Variables

(i) What is a random variable?

A (real-valued) random variable, often denoted by X (or some other capital letter), is a function mapping a probability space (S, P) into the real line R. This is shown in Figure 1. Associated with each point s in the domain S, the function X assigns one and only one value X(s) in the range R. (The set of possible values of X(s) is usually a proper subset of the real line; i.e., not all real numbers need occur. If S is a finite set with m elements, then X(s) can assume at most m different values as s varies in S.)

[Figure 1: A (real-valued) random variable is a function X mapping a probability space (S, P) (the domain) into the real line R (the range).]

As such, a random variable has a probability distribution. We usually do not care about the underlying probability space, and just talk about the random variable itself, but it is good to know the full formalism. The distribution of a random variable is defined formally in the obvious way:

    F(t) ≡ F_X(t) ≡ P(X ≤ t) ≡ P({s ∈ S : X(s) ≤ t}),

where ≡ means equality by definition, P is the probability measure on the underlying sample space S, and {s ∈ S : X(s) ≤ t} is a subset of S, and thus an event in the underlying sample space S. See Section 2.1 of Ross; he goes through this very quickly. (Key point: recall that P attaches probabilities to events, which are subsets of S.)
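The formalism above can be made concrete in a few lines of code: a random variable is just a function on a finite sample space, and the distribution function F_X(t) is the total probability of the event {s ∈ S : X(s) ≤ t}. The following is a minimal sketch (not from the notes); the two-coin-flip space and the names S, P, X, F are illustrative choices, not anything defined in the course.

```python
from fractions import Fraction

# A finite probability space: sample points s with probabilities P(s).
# Hypothetical toy example: a fair coin flipped twice.
S = ["HH", "HT", "TH", "TT"]
P = {s: Fraction(1, 4) for s in S}

# A random variable is just a function X: S -> R;
# here it records the number of heads in the outcome.
def X(s):
    return s.count("H")

# The distribution function F_X(t) = P(X <= t) = P({s in S : X(s) <= t}).
def F(t):
    return sum(P[s] for s in S if X(s) <= t)

print(F(0))  # P(no heads) = 1/4
print(F(1))  # P(at most one head) = 3/4
print(F(2))  # all outcomes, so 1
```

Note that F never needs to know anything about X except its values on S, which mirrors the point in the text: once the distribution is in hand, the underlying probability space can be forgotten.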
If the underlying probability space is discrete, so that for any event E in the sample space S we have

    P(E) = Σ_{s ∈ E} p(s),

where p is the probability mass function (pmf), then X also has a pmf p_X on a new sample space, say S_1, defined by

    p_X(r) ≡ P(X = r) ≡ P({s ∈ S : X(s) = r}) = Σ_{s ∈ S : X(s) = r} p(s)   for r ∈ S_1.   (1)

Example 0.1 (roll of two dice) Consider a random roll of two dice. The natural sample space is

    S ≡ {(i, j) : 1 ≤ i ≤ 6, 1 ≤ j ≤ 6},

where each of the 36 points in S is assigned equal probability p(s) = 1/36. (See Example 4 in Section 1.2.) The random variable X might record the sum of the values on the two dice, i.e., X(s) ≡ X((i, j)) = i + j. Then the new sample space is S_1 = {2, 3, 4, ..., 12}. In this case, using formula (1), we get the pmf of X being p_X(r) ≡ P(X = r) for r ∈ S_1, where

    p_X(2) = p_X(12) = 1/36,
    p_X(3) = p_X(11) = 2/36,
    p_X(4) = p_X(10) = 3/36,
    p_X(5) = p_X(9)  = 4/36,
    p_X(6) = p_X(8)  = 5/36,
    p_X(7) = 6/36.
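Formula (1) for Example 0.1 can be checked by direct enumeration: build the 36-point sample space, apply X((i, j)) = i + j, and accumulate p(s) over each level set {s : X(s) = r}. This is a sketch for illustration (the variable names are mine, not the notes'):

```python
from fractions import Fraction
from collections import defaultdict

# Sample space for a roll of two dice: 36 equally likely points (i, j).
S = [(i, j) for i in range(1, 7) for j in range(1, 7)]
p = {s: Fraction(1, 36) for s in S}

# The random variable X((i, j)) = i + j (the sum on the two dice).
def X(s):
    return s[0] + s[1]

# Formula (1): p_X(r) = sum of p(s) over {s in S : X(s) = r}.
p_X = defaultdict(Fraction)
for s in S:
    p_X[X(s)] += p[s]

for r in sorted(p_X):
    print(r, p_X[r])  # e.g., 2 1/36, 3 1/18, ..., 7 1/6, ..., 12 1/36
```

The printed values match the table above (1/18 = 2/36, 1/6 = 6/36), and the probabilities sum to 1, as any pmf must.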