
Compression / Information Theory / Types of Redundancy

Remember: redundancy = data - information.

In general, there are three types of redundancy we can try to remove from signals/images (the interpixel kind is sketched in code below):

- Coding: inefficient allocation of bits for symbols
- Intersample (interpixel): predictability in the data
- Perceptual (visual): more data than we can see or hear
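As a small sketch of my own (not from the slides) of what "predictability in the data" buys us: predicting each pixel from its left neighbor turns a smooth row into mostly-small residuals, which a later entropy coder can store in fewer bits. The example row and the simple previous-pixel predictor are illustrative assumptions.

```python
import numpy as np

# A smooth row of pixel values: high intersample (interpixel) redundancy.
row = np.array([100, 101, 103, 104, 104, 105, 107, 108], dtype=int)

# Predict each pixel from its left neighbor; keep only the prediction residuals.
residuals = np.diff(row, prepend=row[0])

print(row)        # [100 101 103 104 104 105 107 108]
print(residuals)  # [0 1 2 1 0 1 2 1]  -- few distinct, small values: cheaper to code
```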

Compression / Information Theory / Types of Compression

Compression algorithms are characterized by how much information they preserve:

- Lossless (information-preserving): no loss of information (text, legal, or medical applications)
- Lossy: sacrifices some information for better compression
- Near-lossless: increasingly accepted for legal and medical applications
Compression / Information Theory / Quantifying Compression and Error

Compression is described using either:

- Compression ratio [popular but less technical]
- Rate: bits per symbol (bps) or bits per pixel (bpp)

Error is measured by comparing the compressed/decompressed result \hat{f} to the original f:

error_{rms} = \left[ \sum_{x=1}^{M} \sum_{y=1}^{N} \left( \hat{f}(x,y) - f(x,y) \right)^{2} \right]^{1/2}

SNR = \frac{\left[ \sum_{x=1}^{M} \sum_{y=1}^{N} f(x,y)^{2} \right]^{1/2}}{\left[ \sum_{x=1}^{M} \sum_{y=1}^{N} \left( \hat{f}(x,y) - f(x,y) \right)^{2} \right]^{1/2}}
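These two measures follow directly from the definitions above. Here is a minimal NumPy sketch; the names rms_error, snr, f, and f_hat are mine, not the slides', and the tiny ramp "image" is only a stand-in for a real compress/decompress cycle.

```python
import numpy as np

def rms_error(f, f_hat):
    """Root of the summed squared differences between f_hat and f, as defined above."""
    diff = f_hat.astype(float) - f.astype(float)
    return np.sqrt(np.sum(diff ** 2))

def snr(f, f_hat):
    """Root-sum-square of the original divided by root-sum-square of the error."""
    diff = f_hat.astype(float) - f.astype(float)
    return np.sqrt(np.sum(f.astype(float) ** 2)) / np.sqrt(np.sum(diff ** 2))

# Tiny example: an 8x8 ramp "image" and a coarsely quantized reconstruction.
f = np.arange(64, dtype=float).reshape(8, 8)
f_hat = np.round(f / 8) * 8   # stand-in for a lossy compress/decompress cycle
print(rms_error(f, f_hat), snr(f, f_hat))
```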

Compression / Information Theory / Quantifying Compression and Error

Most lossy algorithms let you trade off accuracy against compression. This trade-off is described using a rate-distortion curve: the fewer bits required, the more distorted the signal is.
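To see the shape of such a curve empirically, here is a small sketch of my own (not from the slides) that uniformly quantizes a test signal at 1 to 8 bits per sample and reports the resulting per-sample RMS error; the quantize helper and the noisy-ramp test signal are illustrative assumptions.

```python
import numpy as np

def quantize(f, bits):
    """Uniformly quantize values in [0, 255] to 2**bits levels, mapped to bin centers."""
    step = 256.0 / (2 ** bits)
    return np.floor(f / step) * step + step / 2.0

# Test signal: a smooth ramp with a little noise.
rng = np.random.default_rng(0)
f = np.clip(np.linspace(0, 255, 4096) + rng.normal(0, 2, 4096), 0, 255)

for bits in range(1, 9):                      # rate in bits per sample
    f_hat = quantize(f, bits)
    rms = np.sqrt(np.mean((f_hat - f) ** 2))  # per-sample distortion
    print(f"{bits} bpp -> RMS error {rms:.2f}")
```

Plotting RMS error against bits per sample traces the rate-distortion curve: distortion rises as the rate falls.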
Compression / Information Theory / Quantifying Information: Entropy

The information content of a symbol a with probability of occurrence p(a) is

info_a = -\log_2 p(a)  bits

Example: 8 possible symbols with equal probability. For each symbol a,

info_a = -\log_2 \tfrac{1}{8} = 3  bits

Example: a symbol a with probability 1/8, alongside 100 other symbols:

info_a = -\log_2 \tfrac{1}{8} = 3  bits

Only the symbol's own probability matters, not how many other symbols the language has.
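A one-line check of these numbers (a minimal sketch; the helper name self_information is mine, not the slides'):

```python
import math

def self_information(p):
    """Information content, in bits, of a symbol with probability p."""
    return -math.log2(p)

print(self_information(1 / 8))   # 3.0 bits, regardless of how many other symbols exist
```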

Compression / Information Theory / Entropy for a Language

The average bits of information (entropy) for a language with n symbols a_1, a_2, \ldots, a_n is

H = \sum_{i=1}^{n} p(a_i)\, info_{a_i} = -\sum_{i=1}^{n} p(a_i) \log_2 p(a_i)

where p(a_i) is the probability of symbol a_i occurring.
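As a quick sanity check on the formula, here is a minimal sketch (the function name entropy and the example distributions are my own) that computes H for a symbol distribution; for 8 equally likely symbols it returns 3 bits, matching the earlier example.

```python
import math

def entropy(probs):
    """Average information in bits: H = -sum(p * log2(p)) over nonzero probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([1 / 8] * 8))          # 3.0 bits for 8 equally likely symbols
print(entropy([0.5, 0.25, 0.25]))    # 1.5 bits for a skewed 3-symbol language
```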