EngE_1104_Summer_2006_Lab_11_Students_Copy_V1A_TW



Explorations Of Our Digital Future
Summer 2006 Lab 11 – Error Detection and Data Compression
Copyright by: Jeremy Garrett and Tom Walker, July 6, 2006

Lab Objective: To learn how computers detect errors in data and how computers "compress" data with both "lossy" and "loss-less" compression techniques.

Lab Description: In this lab we will begin by discussing some traditional methods of preventing and detecting errors in numerical and binary data (such as those used in credit card verification, supermarket bar codes, and data storage within high-end computer memory). Then we will explore some of the methods used in general-purpose (loss-less / non-destructive) data compression techniques (such as the ZIP and RAR formats). After that we will explore the JPG (or JPEG) "lossy" / destructive image compression technique (not the newer JPEG 2000 standard). We will finish by comparing and contrasting the results of the destructive and non-destructive compression techniques and by discussing when we might need one versus the other.

Part 1 -- Parity Bits:

Part 1 – Background: In the original PCs (such as the IBM XT), all of main memory used an error detection method that relied on attaching an "extra" 9th bit to the end of each 8-bit byte. Depending on the system, either "odd parity" or "even parity" was used. In both methods the number of "1"s was counted, and the extra bit was set to "0" or "1" in such a way that the sum of all the "1"s (including the parity bit) came out "even" (a multiple of 2) or "odd." Most lower-cost PCs no longer rely on this method, partly because of an increase in the quality of the memory chips being produced. Special, expensive computers, like Virginia Tech's G5-based supercomputer, actually use a system that is even better than simple parity: a system known as "ECC memory" ("normal" memory is non-ECC). ECC stands for Error Correcting Code.
Instead of using one extra bit, this kind of memory uses many extra bits, which means that more "chips" are required to store the same amount of data. The advantage is that if only one bit contains an error, the memory controller circuit can actually fix that bit. If there are two errors, however, it can only report that an error occurred. For additional information see: Engineering Our Digital Future, Chapter 6, Section 3, or this suggested website: http://oak.cats.ohiou.edu/~piccard/mis300/eccram.htm
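The even-parity rule described above can be sketched in a few lines of Python. This is only an illustration; the helper name `even_parity_bit` and the list-of-bits representation are assumptions for the sketch, not part of the lab handout.

```python
def even_parity_bit(data_bits):
    """Return the parity bit that makes the total count of 1s even.

    data_bits: a list of 0s and 1s representing one 8-bit byte.
    """
    # If the byte already holds an even number of 1s, the parity bit is 0;
    # otherwise it is 1, so that data + parity together hold an even count.
    return sum(data_bits) % 2

# Example: the byte 1 0 1 1 0 0 0 0 holds three 1s (an odd count),
# so the even-parity bit must be 1.
byte = [1, 0, 1, 1, 0, 0, 0, 0]
print(even_parity_bit(byte))  # prints 1
```

For odd parity the same count is taken, but the extra bit is chosen so that the total number of 1s comes out odd instead.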


Part 1 – Report:

1. For a quick practice, calculate the appropriate parity bit, using EVEN parity, for the following bytes. (A simple 1 or 0 is adequate as an answer.)

0 0 1 0 1 1 1 0 0 1 1 0 0 1 0 0 0 1

2. In this case, simply decide whether each of the bytes of data, including its parity bit (using EVEN parity), contains an error.

0 1
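Checking a received 9-bit word (a byte plus its parity bit), as in question 2, uses the same counting rule. A minimal sketch in Python, assuming the word arrives as a list of bits (the function name is illustrative, not from the lab):

```python
def has_even_parity_error(bits_with_parity):
    """Return True if a byte-plus-parity-bit word fails the EVEN parity check.

    Under even parity, the total number of 1s across all nine bits
    (eight data bits plus the parity bit) must be even.
    """
    return sum(bits_with_parity) % 2 != 0

# A word holding three 1s in total has an odd count, so an error is flagged.
word = [1, 1, 0, 0, 0, 0, 0, 0, 1]
print(has_even_parity_error(word))  # prints True
```

Note that a simple parity check can only report that an error exists somewhere in the word; unlike the ECC memory described above, it cannot tell which bit flipped, and two flipped bits cancel out and go undetected.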

This note was uploaded on 09/19/2011 for the course ENGE 1114 taught by Professor Twknott during the Fall '06 term at Virginia Tech.


