Lecture_10a-09Ross

ELEC300U: A System View of Communications: from Signals to Packets

Lecture 10a: Source coding
- Information & Entropy
- Variable-length codes: Huffman's algorithm
- Adaptive variable-length codes: LZW

Some content taken with permission from material developed for the course EECS6.02 by C. Sodini, M. Perrot and H. Balakrishnan.
Where we've gotten to…

With channel coding, we have a way to reliably send bits across a channel:

[Block diagram: Message bitstream → Channel Coding (Digital Transmitter) → bitstream with redundant information used for dealing with errors → channel → redundant bitstream, possibly with errors → Error Correction (Digital Receiver) → Recovered message bitstream]

Next step: think about recoding the message bitstream to send the information it contains in as few bits as possible.
Source coding

[Block diagram: Original message bitstream → Source Encoding → Recoded message bitstream → Channel Coding (Digital Transmitter) → channel → Error Correction (Digital Receiver) → Recoded message bitstream → Source Decoding → Original message bitstream]

Many message streams use a "natural" fixed-length encoding: 7-bit ASCII characters, 8-bit audio samples, 24-bit color pixels. If we're willing to use variable-length encodings (message symbols of differing lengths), we could assign short encodings to common symbols and longer encodings to other symbols… this should shorten the average length of a message.
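To make the "shorter average length" claim concrete, here is a minimal Python sketch (not from the lecture; the four-symbol alphabet, its probabilities, and the codewords are assumed purely for illustration). It compares the average bits per symbol of a fixed-length code against a prefix-free variable-length code that gives the most common symbol the shortest codeword.

# Assumed toy distribution: 'A' is far more common than the other symbols.
probabilities = {"A": 0.70, "B": 0.15, "C": 0.10, "D": 0.05}

# Fixed-length code: four symbols need 2 bits each.
fixed_lengths = {sym: 2 for sym in probabilities}

# Hypothetical prefix-free variable-length code: the common symbol gets a short codeword.
variable_code = {"A": "0", "B": "10", "C": "110", "D": "111"}
variable_lengths = {sym: len(code) for sym, code in variable_code.items()}

def average_length(lengths, probs):
    """Average number of bits per symbol under the given codeword lengths."""
    return sum(probs[s] * lengths[s] for s in probs)

print(average_length(fixed_lengths, probabilities))     # 2.0 bits/symbol
print(average_length(variable_lengths, probabilities))  # 1.45 bits/symbol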
Measuring information content

Suppose you're faced with N equally probable choices, and I give you a fact that narrows it down to M choices. Claude Shannon offered the following formula for the information you've received:

    log2(N/M) bits of information

Examples:
- information in one coin flip: log2(2/1) = 1 bit
- roll of 2 dice: log2(36/1) ≈ 5.2 bits
- outcome of a football game: 1 bit

Information is measured in bits (binary digits), which you can interpret as the number of binary digits required to encode the choice(s).
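A quick sanity check of these numbers in Python (a sketch, not course code; the function name is my own):

import math

def information_bits(n_choices, narrowed_to=1):
    """Bits of information from narrowing N equally probable choices down to M."""
    return math.log2(n_choices / narrowed_to)

print(information_bits(2))    # one coin flip: 1.0 bit
print(information_bits(36))   # roll of 2 dice: ~5.17 bits
print(information_bits(2))    # football game outcome (win/lose assumed equally likely): 1.0 bit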
When choices aren't equally probable

When the choices have different probabilities p_i, you get more information when learning of an unlikely choice than when learning of a likely choice:

    Information from choice i = log2(1/p_i) bits

    Average information content in a choice = Σ p_i log2(1/p_i)

We can use this to compute the average information content (entropy) of a message.
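A minimal Python sketch of the average-information formula (the example distributions are assumed for illustration, not taken from the lecture):

import math

def average_information_bits(probabilities):
    """Average information content in bits: sum of p_i * log2(1/p_i)."""
    return sum(p * math.log2(1 / p) for p in probabilities if p > 0)

# A skewed four-choice source carries less information per choice than a uniform one.
print(average_information_bits([0.70, 0.15, 0.10, 0.05]))  # ~1.32 bits/choice
print(average_information_bits([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits/choice, matching log2(4)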
