2 EE670 Information Theory and Coding
- Class home page: http://koala.ece.stevens-tech.edu/~utureli/EE670/ (class information, assignments, and other links)
- Text: Elements of Information Theory, Thomas Cover and Joy A. Thomas, Wiley, 1991.
  - Entropy, Data Compression, Rate Distortion Theory
  - Ch. 2-3; HW due Feb 3: 2.5, 2.11, 2.19, 2.23, 3.3
3 Entropy
- Entropy (uncertainty, self-information) of a discrete R.V. X:

  H(X) = -\sum_x p(x) \log p(x) = \sum_x p(x) \log \frac{1}{p(x)} = E\left[\log \frac{1}{p(X)}\right]

- The base of the logarithm sets the unit: base 2 gives information in bits (the usual choice for discrete R.V.s); base e gives nats (the usual choice for continuous R.V.s).
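The definition above can be sanity-checked numerically. The sketch below (illustrative Python, not part of the slides) computes H(X) in bits from a list of probabilities, skipping zero-probability terms since p log p → 0 as p → 0:

```python
import math

def entropy(probs, base=2):
    """H(X) = -sum_x p(x) log p(x); terms with p == 0 contribute 0."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# A fair coin carries 1 bit of uncertainty; a deterministic outcome carries 0.
print(entropy([0.5, 0.5]))  # 1 bit
print(entropy([1.0]))       # 0 bits
```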
4 Entropy Properties
- Shift and scale invariant: depends on the probabilities of the outcomes, not on the values of the outcomes
- Nonnegative: H(X) >= 0, since log p(x) <= 0 whenever 0 < p(x) <= 1
- Other:
  - Entropy of a R.V. is maximized when all outcomes are equally likely
  - Source coding will provide a justification for the definition of entropy as the uncertainty of a R.V.
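The maximum-entropy property can be illustrated with a quick comparison (a minimal sketch with made-up distributions, not from the slides): a uniform distribution over 4 outcomes reaches log2(4) = 2 bits, while any skewed distribution over the same 4 outcomes falls strictly below it.

```python
import math

def entropy(probs, base=2):
    # H(X) = -sum p log p, with 0 log 0 taken as 0.
    return -sum(p * math.log(p, base) for p in probs if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]   # equally likely outcomes
skewed  = [0.7, 0.1, 0.1, 0.1]       # same support, unequal probabilities

print(entropy(uniform))  # 2 bits, the maximum for 4 outcomes
print(entropy(skewed))   # strictly less than 2 bits
```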
5 Joint and Conditional Entropy
- Joint entropy:

  H(X,Y) = \sum_{x,y} p(x,y) \log \frac{1}{p(x,y)} = E\left[\log \frac{1}{p(X,Y)}\right]

- Examples:
  1. Two fair coin tosses: P(X=x) = 1/2, P(Y=y) = 1/2; independent, so P(X=x, Y=y) = 1/2 x 1/2 = 1/4.
     H(X,Y) = 4 x (1/4) log 4 = 4 x (1/4) x 2 = 2 bits!
  2. If X, Y are independent, p(x,y) = p(x)p(y) and H(X,Y) = H(X) + H(Y).
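The two-coin example above can be checked directly (an illustrative Python sketch, not from the slides): enumerate the four equally likely (x, y) outcomes and apply the joint-entropy formula.

```python
import math
from itertools import product

def joint_entropy(pxy):
    """H(X,Y) = -sum_{x,y} p(x,y) log2 p(x,y) over a dict of joint probabilities."""
    return -sum(p * math.log2(p) for p in pxy.values() if p > 0)

# Two independent fair coin tosses: p(x, y) = 1/4 for each of the 4 outcomes.
pxy = {(x, y): 0.25 for x, y in product("HT", repeat=2)}
print(joint_entropy(pxy))  # 2 bits = H(X) + H(Y) = 1 + 1
```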
6 Independent R.V.s X, Y

  H(X,Y) = \sum_{x,y} p(x,y) \log \frac{1}{p(x,y)}
         = \sum_{x,y} p(x)p(y) \log \frac{1}{p(x)p(y)}
         = \sum_{x,y} p(x)p(y) \left( \log \frac{1}{p(x)} + \log \frac{1}{p(y)} \right)
         = \sum_y p(y) \sum_x p(x) \log \frac{1}{p(x)} + \sum_x p(x) \sum_y p(y) \log \frac{1}{p(y)}
         = \sum_y p(y) H(X) + \sum_x p(x) H(Y)
         = H(X) + H(Y)
7 Joint Entropy
The joint entropy H(X,Y) can also be expressed as (for independent X, Y):

  H(X,Y) = E[-\log p(X,Y)]
         = E[-\log p(X)p(Y)]
         = E[-\log p(X)] + E[-\log p(Y)]
         = H(X) + H(Y)
8 Conditional Probability and Bayes' Rule
The knowledge that a certain event occurred can change the probability that another event will occur. P(x|y) denotes the conditional probability that x will happen given that y has happened. Bayes' rule states that:

  P(x|y) = \frac{P(y|x) P(x)}{P(y)}

The total probability formula states that

  P(A) = P(A|B) P(B) + P(A|\neg B) P(\neg B)

or, in the more general case (for events B_i partitioning the sample space),

  P(A) = \sum_i P(A|B_i) P(B_i)

Note: writing P(A) = \alpha P(A|B) + (1-\alpha) P(A|\neg B) with \alpha = P(B) mirrors our intuition that the unconditional probability of an event lies somewhere between its conditional probabilities under two opposing assumptions.
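The two formulas work together: total probability gives P(A), which then serves as the normalizer in Bayes' rule. A minimal numeric sketch (the probabilities below are made up purely for illustration):

```python
# Hypothetical numbers chosen only to illustrate the formulas.
p_b = 0.3              # P(B)
p_a_given_b = 0.9      # P(A|B)
p_a_given_not_b = 0.2  # P(A|not B)

# Total probability: P(A) = P(A|B)P(B) + P(A|not B)P(not B)
p_a = p_a_given_b * p_b + p_a_given_not_b * (1 - p_b)
print(p_a)  # lies between 0.2 and 0.9, as the note on the slide predicts

# Bayes' rule: P(B|A) = P(A|B)P(B) / P(A)
p_b_given_a = p_a_given_b * p_b / p_a
print(p_b_given_a)
```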
9 Information Theory: Entropy We want a fundamental measure that will allow us to