15-853: Algorithms in the Real World
Data Compression: Lecture 2.5

Summary so far
- Model generates probabilities; coder uses them.
- Probabilities are related to information: the more you already know, the less information a message gives you.
- More "skew" in the probabilities gives lower entropy H, and therefore better compression.
- Context can help "skew" probabilities (lower H).
- The average length l_a of an optimal prefix code is bounded by H <= l_a < H + 1.
- Huffman codes are optimal prefix codes.
- Arithmetic codes allow "blending" among messages.

Encoding: Model and Coder
- The static part of the model is fixed; the dynamic part is based on previous messages.
- The model supplies probabilities {p(s) | s ∈ S} to the coder, which maps a message s ∈ S to a codeword w of length |w| ≈ i_M(s) = -log p(s).
- The "optimality" of the code is relative to the probabilities: if they are not accurate, the code will not be efficient.

Decoding: Model and Decoder
- The probabilities {p(s) | s ∈ S} generated by the model must be the same as those generated in the encoder.
- Note: consecutive "messages" can come from different message sets, and the probability distribution can change.
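The summary leans on two facts: the entropy H lower-bounds the average length of any prefix code, and Huffman's algorithm produces an optimal one. A minimal Python sketch (illustrative only, not from the lecture) that computes both and checks the bound H <= l_a < H + 1:

```python
import heapq
import math

def entropy(probs):
    """Shannon entropy H = -sum p * log2(p), in bits per message."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def huffman_code_lengths(probs):
    """Codeword length assigned to each symbol by Huffman's algorithm:
    repeatedly merge the two least-probable nodes."""
    # Heap entries are (probability, tie-breaker id, leaf indices under node).
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, leaves1 = heapq.heappop(heap)
        p2, i2, leaves2 = heapq.heappop(heap)
        for leaf in leaves1 + leaves2:
            lengths[leaf] += 1          # every leaf under the merge goes one level deeper
        heapq.heappush(heap, (p1 + p2, i2, leaves1 + leaves2))
    return lengths

probs = [0.5, 0.25, 0.125, 0.125]       # a highly "skewed" distribution
H = entropy(probs)
lengths = huffman_code_lengths(probs)
l_a = sum(p * l for p, l in zip(probs, lengths))
print(H, l_a)   # for dyadic probabilities l_a equals H exactly (both 1.75)
```

With dyadic (power-of-two) probabilities the bound is tight; for other distributions l_a sits strictly between H and H + 1, which is why arithmetic coding (which effectively spends -log p(s) bits per message) can do better than Huffman on skewed, non-dyadic inputs.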
Codes with Dynamic Probabilities
- Huffman codes: need to generate a new tree for new probabilities. Small changes in probability typically make only small changes to the Huffman tree. "Adaptive Huffman codes" update the tree without completely recalculating it. Used frequently in practice.
- Arithmetic codes: need to recalculate the cumulative-frequency values f(m) based on the current probabilities. This can be done with a balanced tree.

Compression Outline
- Introduction: Lossy vs. Lossless, Benchmarks, …
- Information Theory: Entropy, etc.
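The slide notes that an adaptive arithmetic coder must recompute the cumulative frequencies f(m) as probabilities change, and that a balanced tree makes this cheap. A Fenwick (binary indexed) tree is one common concrete choice for that tree; the sketch below is illustrative, not the lecture's implementation, and the 4-symbol alphabet is hypothetical:

```python
class Fenwick:
    """Binary indexed tree: O(log n) point update and prefix sum.
    Maintains the cumulative frequencies f(m) an adaptive
    arithmetic coder needs as symbol counts change."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)   # 1-indexed internal array

    def add(self, sym, delta):
        """Increase the count of symbol sym (0-based) by delta."""
        i = sym + 1
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i             # jump to the next covering node

    def prefix(self, i):
        """Total count of symbols 0..i-1, i.e. the cumulative frequency f(i)."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i             # strip the lowest set bit
        return s

# Hypothetical 4-symbol alphabet, initialized with count 1 per symbol.
f = Fenwick(4)
for sym in range(4):
    f.add(sym, 1)
f.add(2, 5)                  # symbol 2 observed 5 more times
total = f.prefix(4)          # 9
low, high = f.prefix(2), f.prefix(3)   # symbol 2 owns counts [2, 8) of 9
```

Each update and each f(m) query costs O(log n), so the coder can re-skew its interval boundaries after every message instead of recomputing all cumulative sums from scratch.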