
# Markov's, Chebyshev's, and Jensen's Inequalities


STAT 410 Examples for 09/10/2008, Fall 2008

Markov's Inequality: Let u(X) be a non-negative function of the random variable X. If E[u(X)] exists, then, for every positive constant c,

P( u(X) ≥ c ) ≤ E[u(X)] / c.

Chebyshev's Inequality: Let X be any random variable with mean μ and variance σ². For any ε > 0,

P( |X − μ| ≥ ε ) ≤ σ²/ε², or, equivalently, P( |X − μ| < ε ) ≥ 1 − σ²/ε².

Setting ε = kσ, k > 1, we obtain

P( |X − μ| ≥ kσ ) ≤ 1/k², or, equivalently, P( |X − μ| < kσ ) ≥ 1 − 1/k².

That is, for any k > 1, the probability that the value of any random variable is within k standard deviations of its mean is at least 1 − 1/k².

Example 1: Suppose μ = E(X) = 17 and σ = SD(X) = 5. Consider the interval (9, 25) = (17 − 8, 17 + 8). Then k = 8/5 = 1.6, so

P( 9 < X < 25 ) = P( |X − μ| < 1.6σ ) ≥ 1 − 1/1.6² = 0.609375.
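The bound in Example 1 can be checked numerically. The sketch below computes Chebyshev's lower bound for k = 1.6 and then compares it against an empirical frequency; the normal distribution here is only an assumed illustration, since Chebyshev's inequality holds for *any* distribution with this mean and standard deviation.

```python
import random

def chebyshev_lower_bound(k):
    # Lower bound on P(|X - mu| < k*sigma) for any X, any k > 1.
    return 1 - 1 / k**2

mu, sigma = 17.0, 5.0
k = 8 / sigma  # interval (9, 25) = (mu - 8, mu + 8), so k = 1.6
bound = chebyshev_lower_bound(k)
print(bound)  # 0.609375, as in Example 1

# Empirical check with an assumed N(17, 5^2) distribution: the observed
# frequency of landing in (9, 25) must be at least the bound.
random.seed(0)
n = 100_000
hits = sum(1 for _ in range(n) if 9 < random.gauss(mu, sigma) < 25)
print(hits / n >= bound)  # True (for a normal the true probability is ~0.89)
```

For a normal distribution the bound is quite loose; Chebyshev trades sharpness for complete generality.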

Example 2: Consider a discrete random variable X with p.m.f. P(X = −1) = ½, P(X = 1) = ½. Then μ = E(X) = 0 and σ² = Var(X) = E(X²) = 1. Then

P( |X − μ| ≥ σ ) = P( |X| ≥ 1 ) = 1. P( |X − μ| < σ ) = P( |X| < 1 ) = 0.
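Since the two-point distribution in Example 2 is finite, these probabilities can be computed exactly from the p.m.f. rather than estimated; a minimal sketch:

```python
# Two-point distribution from Example 2: P(X = -1) = P(X = 1) = 1/2.
pmf = {-1: 0.5, 1: 0.5}

mu = sum(x * q for x, q in pmf.items())               # E(X) = 0
var = sum((x - mu) ** 2 * q for x, q in pmf.items())  # Var(X) = 1
sigma = var ** 0.5

p_ge = sum(q for x, q in pmf.items() if abs(x - mu) >= sigma)  # = 1
p_lt = sum(q for x, q in pmf.items() if abs(x - mu) < sigma)   # = 0
print(mu, var, p_ge, p_lt)
```

All of the probability mass sits exactly σ away from the mean, so P(|X − μ| ≥ σ) = 1 and P(|X − μ| < σ) = 0, matching the handout.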

Example 3: Let a > 0 and 0 < p < ½. Consider a discrete random variable X with p.m.f. P(X = −a) = p, P(X = 0) = 1 − 2p, P(X = a) = p. Then μ = E(X) = 0 and σ² = Var(X) = E(X²) = 2pa². Let k = 1/√(2p) > 1. Then kσ = a, and

P( |X − μ| ≥ kσ ) = P( |X| ≥ a ) = 2p = 1/k². P( |X − μ| < kσ ) = P( |X| < a ) = 1 − 2p = 1 − 1/k².

So Chebyshev's bound is attained with equality here: without further assumptions on the distribution, the inequality cannot be improved.

Jensen's Inequality: If g is convex on an open interval I, and X is a random variable whose support is contained in I and whose expectation is finite, then

E[ g(X) ] ≥ g( E(X) ).

If g is strictly convex, then the inequality is strict, unless X is a constant random variable.

Consequences:

E(X²) ≥ [E(X)]² ⇔ Var(X) ≥ 0.
E(e^(tX)) ≥ e^(t E(X)), i.e., M_X(t) ≥ e^(tμ).
E[ln X] ≤ ln E(X).
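Both facts can be verified numerically. The sketch below checks the equalities of Example 3 for the illustrative values a = 3, p = 0.125 (any a > 0 and 0 < p < ½ would do), and then illustrates Jensen's inequality with the strictly convex function g(x) = eˣ applied to an assumed Uniform(0, 2) sample:

```python
import math
import random

# --- Example 3 with assumed illustrative values a = 3, p = 0.125 ---
a, p = 3.0, 0.125
pmf = {-a: p, 0.0: 1 - 2 * p, a: p}

mu = sum(x * q for x, q in pmf.items())        # E(X) = 0
var = sum(x ** 2 * q for x, q in pmf.items())  # Var(X) = 2*p*a**2
sigma = math.sqrt(var)
k = 1 / math.sqrt(2 * p)                       # chosen so that k*sigma == a

tail = sum(q for x, q in pmf.items() if abs(x - mu) >= k * sigma)
print(math.isclose(k * sigma, a))      # True
print(math.isclose(tail, 1 / k ** 2))  # True: Chebyshev's bound is attained

# --- Jensen's inequality: E[g(X)] >= g(E[X]) for convex g(x) = exp(x) ---
random.seed(1)
xs = [random.uniform(0.0, 2.0) for _ in range(50_000)]
lhs = sum(math.exp(x) for x in xs) / len(xs)  # estimate of E[e^X]
rhs = math.exp(sum(xs) / len(xs))             # e^(E[X]), estimated
print(lhs >= rhs)  # True (strict, since X is not constant)
```

The first check confirms the tail probability equals 1/k² exactly, and the second shows the gap E[eˣ] − e^(E[X]) is positive for a non-degenerate X, as Jensen's inequality requires.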