The Art and Science of Java: An Introduction to Computer Science
ERIC S. ROBERTS

CHAPTER 7
Objects and Memory

"Yea, from the table of my memory
I'll wipe away all trivial fond records."
—William Shakespeare, Hamlet, c. 1600

7.1 The structure of memory
7.2 The allocation of memory to variables
7.3 Primitive types vs. objects
7.4 Linking objects together
The Structure of Memory

The fundamental unit of memory inside a computer is called a bit, which is a contraction of the words binary digit. A bit can be in either of two states, usually denoted as 0 and 1.

The hardware structure of a computer combines individual bits into larger units. In most modern architectures, the smallest unit on which the hardware operates is a sequence of eight consecutive bits called a byte. The following diagram shows a byte containing a combination of 0s and 1s:

    0 0 1 0 1 0 1 0

Numbers are stored in still larger units that consist of multiple bytes. The unit that represents the most common integer size on a particular machine is called a word. Because machines have different architectures, the number of bytes in a word may vary from machine to machine.
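Although the word size varies by machine, Java itself fixes the width of each primitive integer type. The short sketch below is my own illustration (not from the text) and uses the standard SIZE constants on the wrapper classes to print those widths:

    // Illustrative sketch: Java guarantees these bit widths on every machine,
    // regardless of the hardware's native word size.
    public class MemoryUnits {
        public static void main(String[] args) {
            System.out.println("byte  = " + Byte.SIZE + " bits");     // 8 bits  (1 byte)
            System.out.println("short = " + Short.SIZE + " bits");    // 16 bits (2 bytes)
            System.out.println("int   = " + Integer.SIZE + " bits");  // 32 bits (4 bytes)
            System.out.println("long  = " + Long.SIZE + " bits");     // 64 bits (8 bytes)
        }
    }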
Binary Notation

Bytes and words can be used to represent integers of different sizes by interpreting the bits as a number in binary notation.

Binary notation is similar to decimal notation but uses a different base. Decimal numbers use 10 as their base, which means that each digit counts for ten times as much as the digit to its right. Binary notation uses base 2, which means that each position counts for twice as much, as follows:

    The rightmost digit is the units place.
    The next digit gives the number of 2s.
    The next digit gives the number of 4s.
    And so on . . .

For example, the byte 0 0 1 0 1 0 1 0 represents the value 42:

    0 x 128 =  0
    0 x  64 =  0
    1 x  32 = 32
    0 x  16 =  0
    1 x   8 =  8
    0 x   4 =  0
    1 x   2 =  2
    0 x   1 =  0
    ------------
              42
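As a quick check (my own sketch, not part of the text), the Java library can carry out the same conversion: Integer.parseInt with radix 2 interprets a binary string, and Integer.toBinaryString produces one.

    // Sketch: converting between a binary string and an int in Java.
    public class BinaryDemo {
        public static void main(String[] args) {
            int value = Integer.parseInt("00101010", 2);     // interpret the bits in base 2
            System.out.println(value);                       // prints 42
            System.out.println(Integer.toBinaryString(42));  // prints 101010 (leading zeros dropped)
        }
    }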
Numbers and Bases

The calculation at the end of the preceding slide makes it clear that the binary representation 00101010 is equivalent to the number 42. When it is important to distinguish the base, the text uses a small subscript, like this:

    00101010₂ = 42₁₀

Although it is useful to be able to convert a number from one base to another, it is important to remember that the number remains the same. What changes is how you write it down. The number 42 is what you get if you count how many stars appear in the pattern shown on the slide. The number is the same whether you write it in English as forty-two, in decimal as 42, or in binary as 00101010.

Numbers do not have bases; representations do.
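To make the same point concretely, the sketch below (my own example) prints one int three different ways; the stored value never changes, only the written representation that Integer.toString produces for each radix.

    // Sketch: the same value rendered in three bases.
    public class SameNumber {
        public static void main(String[] args) {
            int n = 42;
            System.out.println(Integer.toString(n, 2));   // "101010"
            System.out.println(Integer.toString(n, 10));  // "42"
            System.out.println(Integer.toString(n, 16));  // "2a"
        }
    }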
Octal and Hexadecimal Notation

Because binary notation tends to get rather long, computer scientists often prefer octal (base 8) or hexadecimal (base 16) notation instead. Octal notation uses eight digits: 0 to 7. Hexadecimal notation uses sixteen digits: 0 to 9, followed by the letters A through F to indicate the values 10 to 15.
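Java source code itself supports all of these notations for integer literals; the following sketch (my own illustration) writes the value 42 four ways using the 0b, leading-0, and 0x prefixes.

    // Sketch: the same value written as binary, octal, hexadecimal, and decimal literals.
    public class FortyTwo {
        public static void main(String[] args) {
            int binary  = 0b101010;  // base 2  (binary literals require Java 7 or later)
            int octal   = 052;       // base 8
            int hex     = 0x2A;      // base 16
            int decimal = 42;        // base 10
            // All four variables hold the same value, 42.
            System.out.println(binary == octal && octal == hex && hex == decimal);  // true
        }
    }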
