Chapter 10

Expectations and Bounds

The concept of expectation, which was originally introduced in the context of discrete random variables, can be generalized to other types of random variables. For instance, the expectation of a continuous random variable is defined in terms of its probability density function (PDF). We know from our previous discussion that expectations provide an effective way to summarize the information contained in the distribution of a random variable. As we will see shortly, expectations are also very valuable in establishing bounds on probabilities.

10.1 Expectations Revisited

The definition of an expectation associated with a continuous random variable is very similar to its discrete counterpart; the weighted sum is simply replaced by a weighted integral. For a continuous random variable X with PDF f_X(\cdot), the expectation of g(X) is defined by

    E[g(X)] = \int_{-\infty}^{\infty} g(x) f_X(x) \, dx.

In particular, the mean of X is equal to

    E[X] = \int_{-\infty}^{\infty} x f_X(x) \, dx

and its variance becomes

    Var(X) = E[(X - E[X])^2] = \int_{-\infty}^{\infty} (x - E[X])^2 f_X(x) \, dx.

As before, the variance of a random variable X can also be computed using

    Var(X) = E[X^2] - (E[X])^2.

Example 79. We wish to calculate the mean and variance of a Gaussian random variable with parameters m and \sigma^2. By definition, the PDF of this random variable can be written as

    f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{(x-m)^2}{2\sigma^2}}, \quad x \in \mathbb{R}.

The mean of X can be obtained through direct integration, with the change of variables \xi = (x - m)/\sigma,

    E[X] = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} x \, e^{-\frac{(x-m)^2}{2\sigma^2}} \, dx
         = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} (\sigma\xi + m) \, e^{-\frac{\xi^2}{2}} \, d\xi
         = \frac{\sigma}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \xi \, e^{-\frac{\xi^2}{2}} \, d\xi
           + \frac{m}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{\xi^2}{2}} \, d\xi
         = m.

In finding a solution, we have leveraged the facts that \xi e^{-\xi^2/2} is absolutely integrable and an odd function, so its integral over the real line vanishes. We also took advantage of the normalization condition, which ensures that a Gaussian PDF integrates to one. To derive the variance, we again use the normalization condition; for a Gaussian PDF, this property implies that

    \int_{-\infty}^{\infty} e^{-\frac{(x-m)^2}{2\sigma^2}} \, dx = \sqrt{2\pi}\,\sigma.
Differentiating both sides of this equation with respect to \sigma, we get

    \int_{-\infty}^{\infty} \frac{(x-m)^2}{\sigma^3} \, e^{-\frac{(x-m)^2}{2\sigma^2}} \, dx = \sqrt{2\pi}.

Rearranging the terms yields

    \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} (x-m)^2 \, e^{-\frac{(x-m)^2}{2\sigma^2}} \, dx = \sigma^2.

Hence, Var(X) = E[(X - m)^2] = \sigma^2. Of course, the variance can also be obtained by more conventional methods, using integration by parts.

Example 80. Suppose that R is a Rayleigh random variable with parameter \sigma^2. We wish to compute its mean and variance. Recall that R is a nonnegative random variable with PDF

    f_R(r) = \frac{r}{\sigma^2} \, e^{-\frac{r^2}{2\sigma^2}}, \quad r \geq 0. ...
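The moment computations in Examples 79 and 80 can be checked numerically. The sketch below (not part of the original text) approximates E[g(X)] with a midpoint Riemann sum over an interval carrying essentially all of the probability mass; it recovers E[X] = m and Var(X) = \sigma^2 for the Gaussian, and the standard Rayleigh moments E[R] = \sigma\sqrt{\pi/2} and Var(R) = (2 - \pi/2)\sigma^2. The function names and the parameter values m = 1.5, \sigma = 2 are illustrative choices, not from the text.

```python
import math

def gaussian_pdf(x, m, sigma):
    # PDF of a Gaussian random variable with mean m and variance sigma**2
    return math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def rayleigh_pdf(r, sigma):
    # PDF of a Rayleigh random variable with parameter sigma**2 (zero for r < 0)
    return (r / sigma ** 2) * math.exp(-r ** 2 / (2 * sigma ** 2)) if r >= 0 else 0.0

def expectation(g, pdf, lo, hi, n=200_000):
    # Midpoint-rule approximation of E[g(X)] = integral of g(x) * f_X(x) dx over [lo, hi]
    dx = (hi - lo) / n
    return sum(g(lo + (i + 0.5) * dx) * pdf(lo + (i + 0.5) * dx) for i in range(n)) * dx

m, sigma = 1.5, 2.0

# Gaussian (Example 79): expect mean m and variance sigma**2.
lo, hi = m - 10 * sigma, m + 10 * sigma  # interval holding essentially all the mass
g_mean = expectation(lambda x: x, lambda x: gaussian_pdf(x, m, sigma), lo, hi)
g_var = expectation(lambda x: (x - g_mean) ** 2, lambda x: gaussian_pdf(x, m, sigma), lo, hi)

# Rayleigh (Example 80): expect mean sigma*sqrt(pi/2) and variance (2 - pi/2)*sigma**2.
r_mean = expectation(lambda r: r, lambda r: rayleigh_pdf(r, sigma), 0.0, 10 * sigma)
r_var = expectation(lambda r: (r - r_mean) ** 2, lambda r: rayleigh_pdf(r, sigma), 0.0, 10 * sigma)

print(g_mean, g_var)  # close to 1.5 and 4.0
print(r_mean, r_var)  # close to 2*sqrt(pi/2) and (2 - pi/2)*4
```

A wide integration range (here ten standard deviations around the mean) stands in for the infinite limits; beyond that point the Gaussian and Rayleigh tails contribute negligibly to the integrals.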