# hw4 - Information Theory and Coding, HW 3

Information Theory and Coding, HW 3
V Balakrishnan
Department of ECE, Johns Hopkins University
October 8, 2006

## 1 Markov inequality and Chebyshev's inequality

### 1.1 Markov inequality

Let $f_X(x)$ be the probability density function of the nonnegative random variable $X$. Then, for $\delta > 0$,

$$\Pr(X \ge \delta) = \int_{\delta}^{\infty} f_X(x)\,dx \le \int_{\delta}^{\infty} \frac{x}{\delta}\, f_X(x)\,dx \le \int_{-\infty}^{\delta} \frac{x}{\delta}\, f_X(x)\,dx + \int_{\delta}^{\infty} \frac{x}{\delta}\, f_X(x)\,dx = \frac{E(X)}{\delta}.$$

The first step follows from the definition of probability. The second step follows from the fact that $x/\delta \ge 1$ for $x \ge \delta$. The third step follows from the fact that the added integral is nonnegative: $f_X(x) \ge 0$ everywhere, as it is a probability density function, and $f_X(x) = 0$ for $x < 0$ since $X \ge 0$.

### 1.2 Chebyshev's inequality

Define $X = (Y - \mu)^2$. We see that $E(X) = E[(Y - \mu)^2] = \sigma^2$, and we are required to bound

$$\Pr(|Y - \mu| > \epsilon) = \Pr(X > \epsilon^2).$$

So we have

$$\Pr(|X| > \epsilon^2) \le \Pr(|X| \ge \epsilon^2) \le \frac{E(|X|)}{\epsilon^2} = \frac{\sigma^2}{\epsilon^2}.$$

The second step follows directly from the Markov inequality and from the fact that $|X| = X$, since $X$ is defined as $(Y - \mu)^2 \ge 0$.
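Both bounds can be checked numerically. The sketch below is an illustrative aside, not part of the assignment: it draws from an Exp(1) distribution (an arbitrary choice; any nonnegative distribution works for Markov) and verifies that the empirical tail probabilities respect the two bounds.

```python
import random

# Empirical check of the Markov and Chebyshev inequalities.
# The Exp(1) distribution is an illustrative choice, not from the homework.
random.seed(0)
n = 100_000
xs = [random.expovariate(1.0) for _ in range(n)]

mean = sum(xs) / n                           # E(X); close to 1 for Exp(1)
var = sum((x - mean) ** 2 for x in xs) / n   # sigma^2; close to 1 for Exp(1)

# Markov: Pr(X >= delta) <= E(X) / delta for nonnegative X
delta = 3.0
p_tail = sum(x >= delta for x in xs) / n
markov_bound = mean / delta
assert p_tail <= markov_bound

# Chebyshev: Pr(|X - mu| > eps) <= sigma^2 / eps^2
eps = 2.0
p_dev = sum(abs(x - mean) > eps for x in xs) / n
cheb_bound = var / eps ** 2
assert p_dev <= cheb_bound
```

The Markov bound is loose here (for Exp(1), the true tail $\Pr(X \ge 3) = e^{-3} \approx 0.05$ versus the bound $1/3$), which matches its role as a coarse, distribution-free tool.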

### 1.3 The weak law of large numbers

Let us compute the variance of $\bar{Z}_n$. Its mean is clearly $\mu$. We have

$$E[(\bar{Z}_n - \mu)^2] = \frac{1}{n^2} \sum_{i=1}^{n} E[(Z_i - \mu)^2] + \frac{2}{n^2} \sum_{i < j} E[(Z_i - \mu)(Z_j - \mu)].$$

Now, since $Z_i$ and $Z_j$ are independent for $i \ne j$,

$$E[(Z_i - \mu)(Z_j - \mu)] = E(Z_i - \mu)\, E(Z_j - \mu) = 0,$$

so the cross terms vanish and $E[(\bar{Z}_n - \mu)^2] = \sigma^2 / n$. Applying Chebyshev's inequality to $\bar{Z}_n$ then gives $\Pr(|\bar{Z}_n - \mu| \ge \epsilon) \le \sigma^2 / (n \epsilon^2) \to 0$ as $n \to \infty$.
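As an illustrative aside (not part of the original assignment), the key step of the derivation, that the variance of the sample mean of i.i.d. variables is $\sigma^2/n$, can be checked by simulation; Uniform(0, 1) samples are an arbitrary choice here.

```python
import random

# Empirical check that Var(Zbar_n) = sigma^2 / n for i.i.d. samples.
# Uniform(0, 1) is an illustrative choice: mu = 0.5, sigma^2 = 1/12.
random.seed(1)
mu = 0.5
sigma2 = 1.0 / 12.0
n = 50            # samples per sample mean
trials = 20_000   # number of independent sample means

means = []
for _ in range(trials):
    zbar = sum(random.random() for _ in range(n)) / n
    means.append(zbar)

emp_var = sum((m - mu) ** 2 for m in means) / trials
# Should be close to sigma^2 / n = (1/12)/50, about 0.00167
assert abs(emp_var - sigma2 / n) < 0.0005
```

Increasing `n` shrinks the empirical variance proportionally, which is exactly the concentration the weak law of large numbers asserts.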