P(|X − μ| ≥ σ) = P(|X| ≥ 1) = 1.
P(|X − μ| < σ) = P(|X| < 1) = 0.

Example 3: Let a > 0 and 0 < p < 1/2. Consider a discrete random variable X with p.m.f.

P(X = −a) = p,  P(X = 0) = 1 − 2p,  P(X = a) = p.

Then μ = E(X) = 0 and σ² = Var(X) = E(X²) = 2pa².

Let k = 1/√(2p) > 1. Then kσ = a, and

P(|X − μ| ≥ kσ) = P(|X| ≥ a) = 2p = 1/k².
P(|X − μ| < kσ) = P(|X| < a) = 1 − 2p = 1 − 1/k².

So Chebyshev's inequality holds with equality here: the bound 1/k² cannot be improved in general.

Jensen's Inequality: If g is convex on an open interval I and X is a random variable whose support is contained in I and whose expectation is finite, then

E[g(X)] ≥ g(E(X)).

If g is strictly convex, the inequality is strict unless X is a constant random variable.

Consequences:
• g(x) = x² gives E(X²) ≥ [E(X)]², equivalently Var(X) ≥ 0.
• g(x) = e^{tx} gives E(e^{tX}) ≥ e^{tE(X)}, i.e. M_X(t) ≥ e^{tμ}.
• g(x) = ln x is concave, so the inequality reverses: E[ln X] ≤ ln E(X) (for X > 0).
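As a quick numerical sanity check of Example 3 and the first Jensen consequence, the sketch below computes the relevant quantities exactly over the three-point support. The specific values a = 2.0 and p = 0.125 are illustrative choices (any a > 0 and 0 < p < 1/2 work), not values from the notes.

```python
import math

# Three-point p.m.f. from Example 3: P(X=-a)=p, P(X=0)=1-2p, P(X=a)=p.
a, p = 2.0, 0.125                                 # illustrative choices
support = [(-a, p), (0.0, 1 - 2 * p), (a, p)]

mu = sum(x * q for x, q in support)               # E(X) = 0
var = sum((x - mu) ** 2 * q for x, q in support)  # Var(X) = 2*p*a**2
sigma = math.sqrt(var)
k = 1 / math.sqrt(2 * p)                          # chosen so that k*sigma = a

# Chebyshev tail probability: mass at the points with |x - mu| >= k*sigma
tail = sum(q for x, q in support if abs(x - mu) >= k * sigma)
print(var, k * sigma)   # 2*p*a**2 = 1.0 and a = 2.0
print(tail, 1 / k**2)   # both equal 2p = 0.25, so Chebyshev is tight

# Jensen with g(x) = x**2 (convex): E(X**2) >= E(X)**2, i.e. Var(X) >= 0
ex2 = sum(x ** 2 * q for x, q in support)
assert ex2 >= mu ** 2
```

Varying p moves k: smaller p makes k larger and the matching tail probability 2p = 1/k² smaller, so equality in Chebyshev is achieved at every k > 1.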
Fall '08, Alexei Stepanov