Lec6 - Coding Source Messages

Coding
• A coding function f maps source messages M (alphabet α) to codewords C (alphabet β).
• Properties a code may have:
– Distinct
– Uniquely Decipherable
– Instantaneously Decodable (Prefix)
– Minimal Prefix
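To make the prefix (instantaneously decodable) property concrete, here is a minimal Python sketch (the function names and the example code table are illustrative, not from the lecture) that checks whether a code is prefix-free and then decodes a bit string one codeword at a time:

```python
def is_prefix_free(code):
    """Return True if no codeword is a prefix of a different codeword (the prefix property)."""
    words = list(code.values())
    return not any(
        i != j and words[j].startswith(words[i])
        for i in range(len(words)) for j in range(len(words))
    )

def decode(bits, code):
    """Instantaneous decoding: emit a symbol as soon as the buffer matches a codeword."""
    by_codeword = {w: s for s, w in code.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in by_codeword:      # unambiguous precisely because the code is prefix-free
            out.append(by_codeword[buf])
            buf = ""
    return "".join(out)

# Hypothetical prefix code over beta = {0, 1} for the alphabet {a, b, c}
code = {"a": "0", "b": "10", "c": "11"}
assert is_prefix_free(code)
print(decode("010110010", code))    # -> "abcaab"
```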

Modeling and Coding
• Transmission system: original source messages → Encoder → compressed bit stream → Decoder → source messages.
• Both the encoder and the decoder consult a model (a probability distribution, or probability estimates).
• The model predicts the next symbol.
• A fixed probability distribution goes with static codes; probability estimates updated as the data is seen go with dynamic codes.
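To illustrate the difference between a fixed distribution (static codes) and evolving estimates (dynamic codes), here is a small sketch, not from the lecture, of an adaptive model built from running symbol counts; the encoder and decoder apply the identical update after every symbol, so their estimates stay in sync without ever transmitting the model:

```python
class AdaptiveModel:
    """Estimate symbol probabilities from running counts over the symbols seen so far."""
    def __init__(self, alphabet):
        self.counts = {s: 1 for s in alphabet}   # start at 1 to avoid zero probabilities

    def probability(self, symbol):
        return self.counts[symbol] / sum(self.counts.values())

    def update(self, symbol):
        self.counts[symbol] += 1                 # same update on both sides of the channel

encoder_model = AdaptiveModel("abc")
decoder_model = AdaptiveModel("abc")
for s in "abcaab":
    p = encoder_model.probability(s)   # encoder codes s using its current estimate p
    encoder_model.update(s)
    decoder_model.update(s)            # decoder recovers s, then applies the same update
assert encoder_model.counts == decoder_model.counts
```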
Entropy as a Measure of Information
• Given a set of possible events with known probabilities p1, p2, …, pn that sum to 1, the entropy E(p1, p2, …, pn) (Shannon, 1940s) measures how much choice is involved in selecting an event. Shannon required:
– E should be a continuous function of the pi.
– If pi = pj for all 1 ≤ i, j ≤ n (all events equally likely), then E should be an increasing function of n.
– If a choice is made in stages, E should be the weighted sum of the entropies at each stage (the weights being the probabilities of each stage).
Entropy
• Shannon showed that only one function can satisfy these conditions.
• Self-information of event A with probability P(A): i(A) = −log2 P(A).
• Entropy of a source is the probability-weighted sum (expected value) of the self-information over all events:
E(p1, p2, …, pn) = −k Σi pi log2 pi (sum over i = 1, …, n), with k a positive constant (k = 1 gives bits).
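A quick sketch of the formula in code (the function name is mine; k is taken as 1):

```python
from math import log2

def entropy(probs, k=1.0):
    """E(p1, ..., pn) = -k * sum over i of pi * log2(pi); zero-probability events contribute nothing."""
    return -k * sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.25, 0.25, 0.5]))   # 1.5 bits (the distribution used on the next slide)
print(entropy([0.5, 0.5]))          # 1.0 bit
```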
Information and Compression
• Compression seeks a message representation that uses exactly as many bits as the information content requires (entropy is a lower bound on compression).
• However, computing the entropy of a real source is difficult.
• Example: 1 2 1 2 3 3 3 3 1 2 3 3 3 3 1 2 3 3 1 2
– One character at a time: P(1) = P(2) = ¼, P(3) = ½; entropy is 1.5 bits/symbol.
– Two characters at a time: P(12) = P(33) = ½; entropy is 1 bit per two-character symbol (0.5 bits per original character).
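Both figures can be reproduced by counting blocks of each size in the example string (a small sketch; the variable and function names are mine):

```python
from collections import Counter
from math import log2

seq = "12123333123333123312"        # the example string, without spaces

def empirical_entropy(blocks):
    """Entropy of the empirical distribution of the given blocks."""
    counts = Counter(blocks)
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

chars = list(seq)
pairs = [seq[i:i + 2] for i in range(0, len(seq), 2)]
print(empirical_entropy(chars))     # 1.5 bits per character
print(empirical_entropy(pairs))     # 1.0 bit per two-character block
```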
Models Improve Entropy Computations
• Finite Context Models
• Finite State Models (Markov models)
• Grammar Models
• Ergodic Models
Finite Context Models
• Order-k model: the k preceding characters are used as context in determining the probability of the next character.
• Examples:
– Order −1 model: all characters have equal probability.
– Order 0 model: probabilities do not depend on context.
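A minimal sketch, not from the lecture, of how an order-k model can build its probability estimates from previously seen text (the class and method names are mine):

```python
from collections import Counter, defaultdict

class OrderKModel:
    """Estimate P(next character | previous k characters) from observed text."""
    def __init__(self, k):
        self.k = k
        self.contexts = defaultdict(Counter)   # context string -> counts of following characters

    def train(self, text):
        for i in range(self.k, len(text)):
            self.contexts[text[i - self.k:i]][text[i]] += 1

    def probability(self, context, char):
        counts = self.contexts[context]
        total = sum(counts.values())
        return counts[char] / total if total else 0.0

m = OrderKModel(k=1)
m.train("abcaab")
print(m.probability("a", "b"))   # 2/3: 'b' followed 'a' in two of the three occurrences of 'a' with a successor
```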
Finite State Models (Markov Models)
• A probabilistic finite state machine.
• Fixed context models are a subclass.
• Example: an order-0 fixed context model as a finite state model is a single state with self-loop probabilities a 0.5, b 0.3, c 0.2.
Message: abcaab
Symbol probabilities: 0.5, 0.3, 0.2, 0.5, 0.5, 0.3
Message probability = 0.00225 (8.80 bits of self-information)
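The numbers on this slide can be checked in a few lines (a sketch; math.prod requires Python 3.8+):

```python
from math import log2, prod

order0 = {"a": 0.5, "b": 0.3, "c": 0.2}
message = "abcaab"

p = prod(order0[ch] for ch in message)   # product of the per-symbol probabilities
print(p, -log2(p))                       # 0.00225 and about 8.80 bits
```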
Order-1 Fixed Context Model as a Finite State Model
[Figure: a three-state machine. Outgoing transition probabilities: state 1: a 0.5, b 0.3, c 0.2; state 2: a 0.7, b 0.1, c 0.2; state 3: a 0.2, b 0.6, c 0.2. Emitting a, b, or c leads to state 1, 2, or 3 respectively.]
Message: abcaab
States visited: 1 1 2 3 1 1 2
Transition probabilities: 0.5, 0.3, 0.2, 0.2, 0.5, 0.3
Message probability = 0.0009; self-information ≈ 10.1 bits
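Assuming the per-state transition probabilities described above (the grouping of edges by state is reconstructed from the figure, so treat it as an assumption), the slide's numbers follow from a direct product; the variable names are mine:

```python
from math import log2

# Assumption from the figure: emitting 'a', 'b', or 'c' leads to state 1, 2, or 3 respectively.
transitions = {
    1: {"a": 0.5, "b": 0.3, "c": 0.2},
    2: {"a": 0.7, "b": 0.1, "c": 0.2},
    3: {"a": 0.2, "b": 0.6, "c": 0.2},
}
next_state = {"a": 1, "b": 2, "c": 3}

state, p = 1, 1.0                   # start in state 1, as on the slide
for ch in "abcaab":
    p *= transitions[state][ch]     # probability of emitting ch from the current state
    state = next_state[ch]
print(p, -log2(p))                  # 0.0009 and about 10.1 bits
```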
Grammar Models
• Use a grammar as the underlying structure.