CHAPTER 27
Data Compression

Data transmission and storage cost money. The more information being dealt with, the more it costs. In spite of this, most digital data are not stored in the most compact form. Rather, they are stored in whatever way makes them easiest to use, such as: ASCII text from word processors, binary code that can be executed on a computer, individual samples from a data acquisition system, etc. Typically, these easy-to-use encoding methods require data files about twice as large as actually needed to represent the information. Data compression is the general term for the various algorithms and programs developed to address this problem. A compression program is used to convert data from an easy-to-use format to one optimized for compactness. Likewise, an uncompression program returns the information to its original form. We examine five techniques for data compression in this chapter. The first three are simple encoding techniques, called: run-length, Huffman, and delta encoding. The last two are elaborate procedures that have established themselves as industry standards: LZW and JPEG.

Data Compression Strategies

Table 27-1 shows two different ways that data compression algorithms can be categorized. In (a), the methods have been classified as either lossless or lossy. A lossless technique means that the restored data file is identical to the original. This is absolutely necessary for many types of data, for example: executable code, word processing files, tabulated numbers, etc. You cannot afford to misplace even a single bit of this type of information. In comparison, data files that represent images and other acquired signals do not have to be kept in perfect condition for storage or transmission. All real world measurements inherently contain a certain amount of noise. If the changes made to these signals resemble a small amount of additional noise, no harm is done. Compression techniques that allow this type of degradation are called lossy. This distinction is important because lossy techniques are much more effective at compression than lossless methods. The higher the compression ratio, the more noise added to the data.
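To make the lossless idea concrete, the short fragment below round-trips a small signal through delta encoding, one of the simple techniques covered later in the chapter. It is only an illustrative sketch in C (the function names and the toy signal are our own, not the book's program), but it shows the defining property of a lossless method: the restored data are bit-for-bit identical to the original.

/* A minimal delta encode/decode sketch (an illustration only, not the
 * book's code).  Each sample is replaced by its difference from the
 * previous sample; decoding accumulates the differences, so the restored
 * signal is bit-for-bit identical to the original -- a lossless method. */
#include <stdio.h>
#include <string.h>

static void delta_encode(const int *in, int *out, size_t n)
{
    int prev = 0;
    for (size_t i = 0; i < n; i++) {
        out[i] = in[i] - prev;      /* store the change, not the value */
        prev = in[i];
    }
}

static void delta_decode(const int *in, int *out, size_t n)
{
    int prev = 0;
    for (size_t i = 0; i < n; i++) {
        prev += in[i];              /* accumulate the changes */
        out[i] = prev;
    }
}

int main(void)
{
    int signal[]  = { 100, 101, 103, 103, 102, 104 };
    size_t n      = sizeof signal / sizeof signal[0];
    int encoded[6], restored[6];

    delta_encode(signal, encoded, n);
    delta_decode(encoded, restored, n);

    printf("restored %s the original\n",
           memcmp(signal, restored, sizeof signal) == 0 ? "matches" : "differs from");
    return 0;
}

Running the round trip confirms that nothing is lost. A lossy method such as JPEG would, by design, return only an approximation of the original samples.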
TABLE 27-1
Compression classifications. Data compression methods can be divided in two ways. In (a), the techniques are classified as lossless or lossy. Lossless methods restore the compressed data to exactly the same form as the original, while lossy methods only generate an approximation. In (b), the methods are classified according to a fixed or variable size of group taken from the original file and written to the compressed file.

a. Lossless or Lossy

   Lossless:  run-length, Huffman, delta, LZW
   Lossy:     CS&Q, JPEG, MPEG

b. Fixed or variable group size

   Method             Group size: input    Group size: output
   CS&Q               fixed                fixed
   Huffman            fixed                variable
   Arithmetic         variable             variable
   run-length, LZW    variable             fixed
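The classification in (b) can also be made concrete with a fragment of code. In the hypothetical run-length encoder sketched below (our own illustration in C, not the scheme developed later in the chapter), each variable-size group read from the input, a run of identical bytes, is written to the output as a fixed-size two-byte group: a count followed by the byte value.

/* A minimal run-length encoding sketch (an illustration only).  A run of
 * up to 255 identical bytes (a variable-size input group) becomes a
 * two-byte (count, value) pair (a fixed-size output group). */
#include <stdio.h>

static size_t rle_encode(const unsigned char *src, size_t n, unsigned char *dst)
{
    size_t out = 0;
    for (size_t i = 0; i < n; ) {
        unsigned char value = src[i];
        size_t count = 1;
        while (i + count < n && src[i + count] == value && count < 255)
            count++;
        dst[out++] = (unsigned char)count;   /* run length */
        dst[out++] = value;                  /* byte value */
        i += count;
    }
    return out;                              /* bytes written (at most 2*n) */
}

int main(void)
{
    const unsigned char data[] = "aaaabbbcccccccd";
    unsigned char packed[2 * sizeof data];
    size_t packed_len = rle_encode(data, sizeof data - 1, packed);

    printf("original: %zu bytes, encoded: %zu bytes\n",
           sizeof data - 1, packed_len);
    return 0;
}

A scheme like this only pays off when runs are common; on data containing no repeated bytes it doubles the file size, which is why run-length encoding is reserved for files where long runs of the same value occur frequently.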