ECE 6010 Lecture 2 – More on Random Variables

Readings from G&S: Sections 3.3, 3.4, 3.6, 3.7, 4.3, 4.6, 5.1, 5.2, 5.6, 5.7, and 5.8.

Expectation

When we say "expectation," we mean "average," the average being roughly what you would think of (i.e., the arithmetic average, as opposed to a median or mode). For a discrete r.v. $X$, we define the expectation as
\[
E[X] = \sum_i x_i \, p_X(x_i).
\]
For a continuous r.v., we define the expectation as
\[
E[X] = \int_{-\infty}^{\infty} x \, f_X(x) \, dx.
\]
Now a bit of technicality regarding integration, which introduces some commonly used notation. When you integrate, you are typically computing a Riemann integral:
\[
\int_a^b x f_X(x) \, dx
= \lim_{\max_{1 \le i \le n-1} (x_{i+1} - x_i) \to 0} \;
\sum_{i=1}^{n-1} z_i f_X(z_i)\,(x_{i+1} - x_i),
\]
where $a = x_1 < x_2 < \cdots < x_n = b$ and $z_i \in (x_i, x_{i+1})$. In other words, we break the interval into little slices and add up the vertical rectangular pieces. Another way of writing this is to recognize that
\[
z_i f_X(z_i)\,(x_{i+1} - x_i) \approx z_i \, P(x_i < X \le x_{i+1})
= z_i \left[ F_X(x_{i+1}) - F_X(x_i) \right],
\]
and that in the limit the approximation becomes exact. Note, however, that this form is expressed in terms of the c.d.f., not the p.d.f., and so exists for all random variables, not just continuous ones. This gives rise to what is known as the Riemann–Stieltjes integral:
\[
E[X] = \lim_{\max_{1 \le i \le n-1} (x_{i+1} - x_i) \to 0} \;
\sum_{i=1}^{n-1} z_i \left[ F_X(x_{i+1}) - F_X(x_i) \right].
\]
We write the limit as
\[
\int_a^b x \, dF_X(x).
\]
This one notation covers the continuous, discrete, and mixed cases. That is,
\[
E[X] = \int_{-\infty}^{\infty} x \, dF_X(x).
\]
We have defined the Riemann–Stieltjes integral in the context of expectation. However, it has a more general definition:
\[
\int_a^b f(x) \, dg(x)
= \lim_{\max (x_{i+1} - x_i) \to 0} \sum_i f(z_i) \left[ g(x_{i+1}) - g(x_i) \right].
\]
When $g(x) = x$, this reduces to the ordinary Riemann integral.
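The claim that $\int x\,dF_X(x)$ handles discrete and continuous random variables with the same formula can be checked numerically. The sketch below is illustrative only (the helper name and the particular distributions are choices made here, not part of the notes): it forms the Riemann–Stieltjes sum $\sum_i z_i [F(x_{i+1}) - F(x_i)]$ on a uniform grid, then applies it unchanged to a continuous c.d.f. (Exponential with rate 1, mean 1) and to a step c.d.f. (Bernoulli with $p = 0.3$, mean 0.3).

```python
import math

def stieltjes_mean(F, a, b, n):
    """Approximate E[X] = integral of x dF(x) over [a, b] by the
    Riemann-Stieltjes sum  sum_i z_i [F(x_{i+1}) - F(x_i)],
    taking z_i as the midpoint of each grid interval."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x0 = a + i * h
        x1 = a + (i + 1) * h
        z = 0.5 * (x0 + x1)
        total += z * (F(x1) - F(x0))
    return total

# Continuous case: Exponential(1), so E[X] = 1.  The tail beyond 50
# contributes only about 51*e^{-50}, which is negligible.
def F_exp(x):
    return 1.0 - math.exp(-x) if x >= 0 else 0.0

mean_exp = stieltjes_mean(F_exp, 0.0, 50.0, 200000)

# Discrete case: Bernoulli(0.3), so E[X] = 0.3.  The c.d.f. is a step
# function; the same sum picks up each jump automatically.
def F_bern(x):
    if x < 0:
        return 0.0
    if x < 1:
        return 0.7
    return 1.0

mean_bern = stieltjes_mean(F_bern, -1.0, 2.0, 30001)
```

The grid slice containing a jump of $F$ contributes $z_i$ times the jump height, i.e., the point mass times (approximately) its location, which is why no separate discrete formula is needed.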
Sufficient conditions for existence of the Riemann–Stieltjes integral:

• $g(x)$ of bounded variation and $f(x)$ continuous on $[a, b]$, or
• $f(x)$ of bounded variation and $g(x)$ continuous on $[a, b]$.

The first case covers expectation, since a c.d.f. is nondecreasing and bounded, hence of bounded variation. In a directly analogous way we define
\[
\int_{-\infty}^{\infty} g(x) \, dF_X(x)
= \lim \sum_{i=1}^{n-1} g(z_i) \left[ F_X(x_{i+1}) - F_X(x_i) \right].
\]
Now consider the r.v. $Y = g(X)$:
\[
E[Y] = \int_{-\infty}^{\infty} y \, dF_Y(y).
\]
Note that $dF_Y(y)$ is the representation of the limiting value of (taking $g$ increasing and invertible, with $x_i = g^{-1}(y_i)$)
\[
F_Y(y_{i+1}) - F_Y(y_i)
= P(y_i < Y \le y_{i+1})
= P(y_i < g(X) \le y_{i+1})
= P(g^{-1}(y_i) < X \le g^{-1}(y_{i+1}))
= P(x_i < X \le x_{i+1}),
\]
which in the limit is equal to $dF_X(x)$ when $y = g(x)$. Thus
\[
E[Y] = \int_{-\infty}^{\infty} y \, dF_Y(y)
= \int_{-\infty}^{\infty} g(x) \, dF_X(x).
\]
Let us put this in more familiar terms: if $Y = g(X)$ and $X$ is continuous, then
\[
E[Y] = \int_{-\infty}^{\infty} g(x) f_X(x) \, dx. \tag{1}
\]
One might think that finding $E[Y]$ would require first finding $f_Y(y)$. However, as (1) shows, all that is necessary is to substitute $g(x)$ for $x$ in the expectation integral against $f_X(x)$.
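Equation (1), often called the law of the unconscious statistician, can be sanity-checked numerically. A minimal sketch, with the distribution, transformation, and helper name chosen here for illustration: for $X \sim \mathrm{Uniform}(0,1)$ and $Y = X^2$, the density of $Y$ works out to $f_Y(y) = 1/(2\sqrt{y})$ on $(0,1)$, and both routes should give $E[Y] = 1/3$.

```python
import math

def riemann(h_func, a, b, n):
    """Midpoint-rule approximation of the integral of h_func over [a, b]."""
    w = (b - a) / n
    return sum(h_func(a + (i + 0.5) * w) for i in range(n)) * w

# Route 1 (LOTUS, eq. (1)): integrate g(x) f_X(x) directly.
# X ~ Uniform(0,1) has f_X(x) = 1, and g(x) = x^2.
lotus = riemann(lambda x: x**2 * 1.0, 0.0, 1.0, 100000)

# Route 2: first derive f_Y(y) = 1 / (2*sqrt(y)) on (0,1),
# then integrate y f_Y(y).  Midpoints avoid the singularity at y = 0.
direct = riemann(lambda y: y / (2.0 * math.sqrt(y)), 0.0, 1.0, 100000)

# Both approximate E[Y] = 1/3; LOTUS never needed f_Y at all.
```

The point of the comparison is that Route 2 required a change-of-variables calculation to get $f_Y$, while Route 1 used only $f_X$ and $g$, exactly as (1) promises.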
Spring '08, Stites, M.