L18_source_coding

Source Coding: Information & Entropy; Variable-length codes: Huffman's algorithm; Adaptive variable-length codes: LZW. 6.02 Spring 2008.

Slide 1: Source Coding
- Information & Entropy
- Variable-length codes: Huffman's algorithm
- Adaptive variable-length codes: LZW

Slide 2: Where we've gotten to...
With channel coding (along with block numbers and CRC), we have a way to reliably send bits across a channel. Next step: think about recoding the message bitstream to send the information it contains in as few bits as possible.

[Block diagram: Message bitstream (with CRC) -> Digital Transmitter (Channel Coding) -> bitstream with redundant information used for dealing with errors -> channel -> redundant bitstream, possibly with errors -> Digital Receiver (Error Correction) -> Recovered message bitstream]
Slide 3: Source coding
[Block diagram: Original message bitstream -> Source Encoding -> Recoded message bitstream (with CRC) -> Digital Transmitter (Channel Coding) -> Digital Receiver (Error Correction) -> Recoded message bitstream -> Source Decoding -> Original message bitstream]

Many message streams use a "natural" fixed-length encoding: 7-bit ASCII characters, 8-bit audio samples, 24-bit color pixels. If we're willing to use variable-length encodings (message symbols of differing lengths), we could assign short encodings to common symbols and longer encodings to other symbols; this should shorten the average length of a message.

Slide 4: Measuring information content
Suppose you're faced with N equally probable choices, and I give you a fact that narrows it down to M choices. Claude Shannon offered the following formula for the information you've received:

    log2(N/M) bits of information

Examples:
- information in one coin flip: log2(2/1) = 1 bit
- roll of 2 dice: log2(36/1) = 5.2 bits
- outcome of a Red Sox game: 1 bit (well, actually, are both outcomes equally probable?)

Information is measured in bits (binary digits), which you can interpret as the number of binary digits required to encode the choice(s).
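To make these numbers concrete, here is a minimal Python sketch (not part of the original slides) that evaluates the log2(N/M) formula for the examples above; the function name information_bits is an illustrative choice.

```python
from math import log2

def information_bits(n_choices, m_remaining=1):
    """Information received when N equally probable choices are
    narrowed down to M choices: log2(N/M) bits."""
    return log2(n_choices / m_remaining)

print(information_bits(2))    # one coin flip  -> 1.0 bit
print(information_bits(36))   # roll of 2 dice -> ~5.17 bits (the slide's 5.2)
print(information_bits(2))    # Red Sox game, if both outcomes are equally probable -> 1.0 bit
```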
Slide 5: When choices aren't equally probable
When the choices have different probabilities (p_i), you get more information when learning of an unlikely choice than when learning of a likely choice:

    Information from choice i = log2(1/p_i) bits

We can use this to compute the average information content, taking into account all possible choices:

    Average information content in a choice = Σ_i p_i * log2(1/p_i)
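A small Python sketch of this average (not from the slides); the probability distributions below are made up purely for illustration.

```python
from math import log2

def average_information(probs):
    """Average information content of a choice:
    sum over i of p_i * log2(1/p_i) bits."""
    return sum(p * log2(1 / p) for p in probs if p > 0)

# Four equally probable choices: the average is exactly 2 bits.
print(average_information([0.25, 0.25, 0.25, 0.25]))   # 2.0

# Skewed probabilities: likely choices carry less information, so the
# average drops below the 2 bits a fixed-length code would spend per symbol.
print(average_information([0.5, 0.25, 0.125, 0.125]))  # 1.75
```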
