The Art and Science of Java: An Introduction to Computer Science
ERIC S. ROBERTS

**Chapter 7: Objects and Memory**

*Yea, from the table of my memory I'll wipe away all trivial fond records.*
—William Shakespeare, *Hamlet*, c. 1600

7.1 The structure of memory
7.2 The allocation of memory to variables
7.3 Primitive types vs. objects
7.4 Linking objects together

**The Structure of Memory**

• The fundamental unit of memory inside a computer is called a **bit**, which is a contraction of the words *binary digit*. A bit can be in either of two states, usually denoted as 0 and 1.

• The hardware structure of a computer combines individual bits into larger units. In most modern architectures, the smallest unit on which the hardware operates is a sequence of eight consecutive bits called a **byte**. [The original slide shows a diagram of a byte containing a combination of 0s and 1s.]

• Numbers are stored in still larger units that consist of multiple bytes. The unit that represents the most common integer size on a particular hardware platform is called a **word**. Because machines have different architectures, the number of bytes in a word may vary from machine to machine.

**Binary Notation**

• Bytes and words can be used to represent integers of different sizes by interpreting the bits as a number in **binary notation**.

• Binary notation is similar to decimal notation but uses a different **base**. Decimal numbers use 10 as their base, which means that each digit counts for ten times as much as the digit to its right. Binary notation uses base 2, which means that each position counts for twice as much: the rightmost digit is the units place, the next digit gives the number of 2s, the next digit gives the number of 4s, and so on. For example, the byte 00101010 works out as follows, reading from the rightmost bit:

    0 × 1   =  0
    1 × 2   =  2
    0 × 4   =  0
    1 × 8   =  8
    0 × 16  =  0
    1 × 32  = 32
    0 × 64  =  0
    0 × 128 =  0
    ------------
               42

**Numbers and Bases**

• The calculation above makes it clear that the binary representation 00101010 is equivalent to the number 42.
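The place-value calculation above can be sketched in Java. The class and method names here are illustrative, not from the text:

```java
// Compute the decimal value of a binary string by accumulating
// place values: each step doubles the running total and adds the
// next bit, which is equivalent to summing bit * 2^position.
public class BinaryToDecimal {
    public static int binaryValue(String bits) {
        int value = 0;
        for (int i = 0; i < bits.length(); i++) {
            value = value * 2 + (bits.charAt(i) - '0');
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(binaryValue("00101010"));          // prints 42
        // The Java library performs the same conversion:
        System.out.println(Integer.parseInt("00101010", 2));  // prints 42
    }
}
```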
• When it is important to distinguish the base, the text uses a small subscript, like this: 00101010₂ = 42₁₀.

• Although it is useful to be able to convert a number from one base to another, it is important to remember that the number remains the same. What changes is how you write it down.

• The number 42 is what you get if you count how many stars are in the pattern at the right [a 42-star diagram in the original slide]. The number is the same whether you write it in English as forty-two, in decimal as 42, or in binary as 00101010.

• Numbers do not have bases; representations do.

**Octal and Hexadecimal Notation**

• Because binary notation tends to get rather long, computer scientists often prefer octal (base 8) or hexadecimal (base 16) notation instead. Octal notation uses eight digits: 0 to 7....
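The point that a number is independent of its representation can be illustrated with the standard Java library, which prints the same `int` in several bases:

```java
// Print the single number 42 in four different representations.
// The value stored in memory never changes; only the notation does.
public class NumberBases {
    public static void main(String[] args) {
        int n = 42;
        System.out.println(Integer.toBinaryString(n));  // prints 101010
        System.out.println(Integer.toOctalString(n));   // prints 52
        System.out.println(n);                          // prints 42
        System.out.println(Integer.toHexString(n));     // prints 2a
    }
}
```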