ENGG2430C Lecture 10: Inequalities, Law of Large Numbers, and Central Limit Theorem

Minghua Chen ([email protected])
Information Engineering
The Chinese University of Hong Kong

Reading: Ch. 5 of the textbook.
Review: Derived distribution, monotonic case

Let X be an r.v. and Y = g(X), where g(·) is a strictly monotonic function. (From MIT OpenCourseWare 6.041 slides.)

For small δ,

{x ≤ X ≤ x + δ} = {g(x) ≤ Y ≤ g(x) + δ |dg/dx (x)|},

so

f_X(x) δ = f_Y(g(x)) δ |dg/dx (x)|.

Letting y = g(x), we have

f_Y(y) = f_X(g⁻¹(y)) / |dg/dx (g⁻¹(y))|.
Review: Generating a Random Variable

Given a strictly increasing CDF F(x) and a random variable U uniformly distributed in [0,1], the procedure to generate X is:

- Step 1: Generate a value of U using the computer; say the value is u.
- Step 2: Compute the value x of X that satisfies F(x) = u.
- Step 3: Repeat Steps 1 and 2 to obtain a series of values for X.

In this way we say that we generate a random variable X.

Let F_X(x) be the CDF of X. From the description, we have X = F⁻¹(U) and U = F(X). Then

F_X(x) = P(X ≤ x) = P(F⁻¹(U) ≤ x) = P(U ≤ F(x)) = F(x).

Thus the generated random variable X follows the desired CDF F(x).
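The three steps above can be sketched in code. As a concrete example (my choice, not from the slides), take the Exponential(λ) distribution: its CDF F(x) = 1 − e^{−λx} is strictly increasing on [0, ∞), and solving F(x) = u gives the closed-form inverse x = −ln(1 − u)/λ.

```python
import math
import random

def inverse_transform_exponential(lam, n, seed=0):
    """Generate n samples of an Exponential(lam) r.v. by the three steps:
    Step 1: draw u ~ Uniform[0,1);
    Step 2: solve F(x) = 1 - exp(-lam * x) = u, i.e. x = -ln(1 - u) / lam;
    Step 3: repeat n times."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

samples = inverse_transform_exponential(lam=2.0, n=100_000)
# Sanity check: the sample mean should be close to E[X] = 1/lam = 0.5.
print(sum(samples) / len(samples))
```

Any distribution with an invertible CDF works the same way; only Step 2 (the formula for F⁻¹) changes.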
Inequalities: Motivation

In engineering design problems, we often care about bounding the probability of a "bad" event.

- In wireless communication: bound the probability of a decoding error.
- In the ATV poll problem: bound the probability that our estimate is too far off.

But computing the exact probability of such events can be complicated, and sometimes the critical information needed for the computation is not available. What can we do?

- Essentially, computing a probability exactly requires the distribution function, which can be hard to obtain.
- One way out is to compute a bound on the probability using less information, e.g., the mean and variance.
Markov Inequality

If a random variable X takes only nonnegative values, then

P(X ≥ a) ≤ E[X]/a, for all a > 0.

- The chance of X being abnormally large is small.

Proof: Define a random variable Y as follows (draw a figure to compare the distributions of Y and X):

Y = 0 if X < a;  Y = a if X ≥ a.

- Clearly E[Y] ≤ E[X] and P(Y = a) = P(X ≥ a). Thus

E[X] ≥ E[Y] = 0 · P(Y = 0) + a · P(Y = a) = a · P(X ≥ a).

Dividing both sides by a gives the inequality.
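The inequality is easy to check by simulation. A minimal sketch (the Exponential(1) test distribution is my choice: it is nonnegative with E[X] = 1 and exact tail P(X ≥ a) = e^{−a}):

```python
import math
import random

rng = random.Random(0)
n = 200_000
a = 2.0

# X ~ Exponential(1): nonnegative, E[X] = 1, P(X >= a) = e^{-a}.
xs = [-math.log(1.0 - rng.random()) for _ in range(n)]

p_tail = sum(x >= a for x in xs) / n   # empirical P(X >= a), near e^{-2} ~ 0.135
markov_bound = (sum(xs) / n) / a       # empirical E[X]/a, near 1/2

print(p_tail, markov_bound)
```

The empirical tail probability stays below the bound, as the proof guarantees; note how much slack there is (about 0.135 vs. 0.5), which is the theme of the next slides.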
Markov Inequality

If a random variable X takes only nonnegative values, then

P(X ≥ a) ≤ E[X]/a, for all a > 0.

Example (umbrella problem):

- Each of n students chooses one out of n umbrellas at random. What is the probability that at least 3 students get their own umbrellas?
- Define a r.v. Y to be the number of students who choose their own umbrellas. We already know E[Y] = 1. Then

P(Y ≥ 3) ≤ E[Y]/3 = 1/3.
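The bound can be compared against simulation. In the sketch below (function name and parameters are my own), a random assignment of umbrellas is modeled as a uniformly random permutation, and Y is its number of fixed points:

```python
import random

def simulate_umbrellas(n_students, trials, seed=0):
    """Estimate P(Y >= 3), where Y is the number of students who get
    their own umbrella under a uniformly random assignment."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        perm = list(range(n_students))
        rng.shuffle(perm)                 # umbrella perm[i] goes to student i
        fixed = sum(1 for i, p in enumerate(perm) if i == p)
        hits += fixed >= 3
    return hits / trials

est = simulate_umbrellas(n_students=10, trials=100_000)
print(est)   # well below the Markov bound of 1/3
```

The true probability is much smaller than 1/3, which illustrates that the Markov bound is valid but can be far from tight.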
Markov Inequality

The Markov inequality may give a loose bound. [Example 7.1 in the textbook]

- Let X be a r.v. uniformly distributed in [0, 4], so E[X] = 2.
- Applying the Markov inequality, we have

P(X ≥ 2) ≤ E[X]/2 = 1 (very loose; the exact value is 0.5)
P(X ≥ 3) ≤ E[X]/3 ≈ 0.67 (loose; the exact value is 0.25)
P(X ≥ 4) ≤ E[X]/4 = 0.5 (very loose; the exact value is 0)

Intuitively, why? The Markov inequality utilizes only the mean.
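The looseness is easy to tabulate: for X uniform on [0, 4], the exact tail is P(X ≥ a) = (4 − a)/4 for 0 ≤ a ≤ 4, while the bound is E[X]/a = 2/a. A small sketch of the comparison:

```python
# X uniform on [0, 4]: E[X] = 2, exact tail P(X >= a) = (4 - a) / 4.
for a in (2.0, 3.0, 4.0):
    exact = (4.0 - a) / 4.0
    bound = 2.0 / a  # Markov bound E[X]/a
    print(f"a={a}: exact P(X>=a)={exact:.2f}, Markov bound={bound:.2f}")
```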