Lecture 05 - Expected Values of Random Variables


1 Expected Values of Random Variables

Often it is convenient to describe a random variable using some location measure. The most important location measure is the expected value (a.k.a. the mean or the weighted average). For a discrete random variable X, its expected value, denoted E[X], is given by

    E[X] = \sum_{k} x_k \, p(x_k).

For a continuous random variable,

    E[X] = \int_{-\infty}^{\infty} x f(x) \, dx.

Remark: Strictly speaking, E[X] exists only if E[X^+] and E[X^-] are not both infinite, where X^+ = max(X, 0) and X^- = max(-X, 0).

Example: Suppose X takes values {0, 1, 2, 3} with probabilities {1/8, 3/8, 3/8, 1/8}. Then

    E[X] = 0(1/8) + 1(3/8) + 2(3/8) + 3(1/8) = 12/8 = 1.5.

Remark: E[X] need not be a possible outcome of X.

Frequency, or long-run average, interpretation of the expected value:

Example: Suppose a random variable X represents the profit associated with the production of some item that can be defective or non-defective. Suppose that the profit is -2 when the item is defective and 10 when the item is non-defective. Finally, assume that p(-2) = 0.1 and p(10) = 0.9. Then

    E[X] = -2(0.1) + 10(0.9) = 8.8.

Suppose that a very large number n of items are produced, and let n(G) be the number of good items and n(D) the number of defective items. Then the average profit per item is

    -2 \frac{n(D)}{n} + 10 \frac{n(G)}{n}.

The frequency interpretation is that n(G)/n converges, in a sense to be defined later, to p(10) = 0.9, so the average profit per item converges to E[X]. This convergence is known as the Law of Large Numbers.

1.1 Markov's Inequality

Proposition: Suppose X is a nonnegative random variable and c > 0. Then

    P(X \ge c) \le \frac{E[X]}{c}.

Notice that the inequality is non-trivial only if c > E[X].

Proof (discrete case):

    E[X] = \sum_{k} k \, p(k) \ge \sum_{k \ge c} k \, p(k) \ge c \sum_{k \ge c} p(k) = c \, P(X \ge c).

Alternative form: taking c = k E[X] for k > 0, we get P(X \ge k E[X]) \le 1/k.
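The frequency interpretation can be checked numerically. Below is a minimal simulation sketch (an addition, not part of the original notes) for the profit example: it draws a large production run, counts good and defective items, and compares the long-run average profit with E[X] = 8.8. The sample size n = 100,000 and the random seed are arbitrary choices for illustration.

```python
# Minimal sketch (not from the notes): frequency interpretation of E[X]
# for the profit example, where profit is -2 with probability 0.1
# (defective) and 10 with probability 0.9 (non-defective).
import random

random.seed(0)

n = 100_000  # number of items produced (arbitrary, just needs to be large)
profits = [-2 if random.random() < 0.1 else 10 for _ in range(n)]

n_defective = sum(1 for p in profits if p == -2)  # n(D)
n_good = n - n_defective                          # n(G)

average_profit = sum(profits) / n  # equals -2*n(D)/n + 10*n(G)/n
print(f"fraction good:  {n_good / n:.4f}  (p(10) = 0.9)")
print(f"average profit: {average_profit:.4f}  (E[X] = 8.8)")
```

As n grows, the printed average settles near 8.8, which is the Law of Large Numbers statement made informally above.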
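To see Markov's inequality in action, here is a short Python check (again an addition, not from the notes) using the four-point example X in {0, 1, 2, 3} with probabilities {1/8, 3/8, 3/8, 1/8}. It computes the exact tail probability P(X >= c) and the Markov bound E[X]/c for a few values of c; the bound is only informative when c > E[X] = 1.5.

```python
# Minimal sketch (not from the notes): verify P(X >= c) <= E[X]/c
# for the discrete example with E[X] = 1.5.
values = [0, 1, 2, 3]
probs = [1/8, 3/8, 3/8, 1/8]

EX = sum(x * p for x, p in zip(values, probs))  # expected value, 1.5

for c in [1.0, 1.5, 2.0, 3.0]:
    tail = sum(p for x, p in zip(values, probs) if x >= c)  # exact P(X >= c)
    bound = EX / c                                          # Markov bound
    print(f"c = {c:.1f}: P(X >= c) = {tail:.3f} <= E[X]/c = {bound:.3f}")
```

For c = 1.0 the bound exceeds 1 and says nothing; for c = 2.0 and c = 3.0 it gives genuine (if loose) upper bounds on the tail, matching the remark that the inequality is non-trivial only when c > E[X].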