is:

    0001 1001 1001 0101

This is the binary-coded decimal scheme, which is supported by some computers and is commonly used in pocket calculators.

Binary notation

Most computers, most of the time, abandon the human-oriented decimal scheme altogether in favour of a pure binary notation, where the same number becomes:

    11111001011

Here the right-hand digit represents units, the next one 2, then 4, and so on. Each time we move left one place the value of the digit doubles. Since a value of 2 in one column can be represented by a value of 1 in the next column left, we never need a digit to have any value other than 0 or 1, and hence each binary digit (bit) can be represented by a single Boolean variable.

Hexadecimal notation

Although machines use binary numbers extensively internally, a typical 32-bit binary number is fairly unmemorable. Rather than convert it to the familiar decimal form (which is quite hard work and error-prone), computer users often describe the number in hexadecimal (base 16) notation. This is easy because the binary number can be split into groups of four binary digits and each group replaced by a hexadecimal digit. Because, in base 16, we need symbols for numbers from 0 to 15, the early letters of the alphabet have been pressed into service where the decimal symbols run out: we use 0 to 9 as themselves and A to F to represent 10 to 15. Our number becomes:

    7CB

(At one time it was common to use octal, base 8, notation in a similar role. This avoids the need to use alphabetic characters, but groups of three bits are less convenient to work with than groups of four, so the use of octal has largely been abandoned.)

Number ranges

When writing on paper, we use the number of decimal digits that are required to represent the number we want to write. A computer usually reserves a fixed number of bits for a number, so if the number gets too big it cannot be represented.
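The three notations above can be checked with a short sketch (Python here, purely illustrative; the value 1995 is the number used in the running example):

```python
n = 1995

# Binary-coded decimal: each decimal digit packed into its own 4-bit group.
bcd = " ".join(format(int(d), "04b") for d in str(n))
print(bcd)              # 0001 1001 1001 0101

# Pure binary notation: each bit's value doubles moving one place left.
print(format(n, "b"))   # 11111001011

# Hexadecimal: group the bits in fours; digits 0-9 and A-F.
print(format(n, "X"))   # 7CB
```

Note that the BCD form uses sixteen bits where pure binary needs only eleven: BCD trades storage efficiency for easy conversion to and from decimal digits.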
The ARM deals efficiently with 32-bit quantities, so the first data type that the architecture supports is the 32-bit (unsigned) integer, which has a value in the range:

    0 to 4 294 967 295 (decimal) = 0 to FFFFFFFF (hexadecimal)
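A quick check of that range (Python, illustrative only; the modulo operation mimics the wrap-around behaviour of a fixed 32-bit register):

```python
# Largest value representable in 32 unsigned bits.
max_u32 = 2**32 - 1
print(max_u32)                # 4294967295
print(format(max_u32, "X"))   # FFFFFFFF

# A number too big for 32 bits cannot be represented:
# in a fixed-width register the result wraps around.
print((max_u32 + 1) % 2**32)  # 0
```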
This document was uploaded on 10/30/2011 for the course CSE 378 380 at SUNY Buffalo.