
• When does data get moved? In the most basic cache, data gets moved into cache when it is used for the first time. But many caches use some kind of prefetching, meaning that data is loaded before it is explicitly requested. We have already seen one form of prefetching: loading an entire block when only part of it is requested.

• Where in the cache does the data go? When the cache is full, we can't bring anything in without kicking something out. Ideally, we want to keep data that will be used again soon and replace data that won't.

The answers to these questions make up the cache policy. Near the top of the hierarchy, cache policies tend to be simple because they have to be fast and they are implemented in hardware. Near the bottom of the hierarchy, there is more time to make decisions, and well-designed policies can make a big difference.

Most cache policies are based on the principle that history repeats itself; if we have information about the recent past, we can use it to predict the immediate future. For example, if a block of data has been used recently, we expect it to be used again soon. This principle suggests a replacement policy called least recently used, or LRU, which removes from the cache a block of data that has not been used recently. For more on this topic, see the literature on cache replacement policies.
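To make LRU concrete, here is a minimal sketch in C of a small, fully associative cache with LRU replacement. The names (cache_t, cache_access) and the four-slot size are illustrative choices, not part of the text; each access is stamped with a logical clock, and on a miss the slot with the oldest stamp is evicted.

    #include <stdio.h>
    #include <stdbool.h>

    #define NUM_SLOTS 4

    typedef struct {
        long block;      /* which block is stored here (-1 if empty) */
        long last_used;  /* logical time of the most recent access */
    } slot_t;

    typedef struct {
        slot_t slots[NUM_SLOTS];
        long clock;      /* counts accesses; used as a timestamp */
    } cache_t;

    void cache_init(cache_t *c) {
        for (int i = 0; i < NUM_SLOTS; i++) {
            c->slots[i].block = -1;
            c->slots[i].last_used = 0;
        }
        c->clock = 0;
    }

    /* Returns true on a hit.  On a miss, evicts the least recently
       used slot and loads the requested block into it. */
    bool cache_access(cache_t *c, long block) {
        c->clock++;

        /* Look for the block (a hit). */
        for (int i = 0; i < NUM_SLOTS; i++) {
            if (c->slots[i].block == block) {
                c->slots[i].last_used = c->clock;
                return true;
            }
        }

        /* Miss: evict the slot that has gone longest without use. */
        int victim = 0;
        for (int i = 1; i < NUM_SLOTS; i++) {
            if (c->slots[i].last_used < c->slots[victim].last_used) {
                victim = i;
            }
        }
        c->slots[victim].block = block;
        c->slots[victim].last_used = c->clock;
        return false;
    }

    int main(void) {
        cache_t c;
        cache_init(&c);

        long refs[] = {1, 2, 3, 1, 4, 5, 1, 2};
        int n = sizeof(refs) / sizeof(refs[0]);

        for (int i = 0; i < n; i++) {
            bool hit = cache_access(&c, refs[i]);
            printf("block %ld: %s\n", refs[i], hit ? "hit" : "miss");
        }
        return 0;
    }

Running the example prints a hit or miss for each reference. Hardware caches typically use cheaper approximations of LRU, since keeping an exact timestamp for every block would be too slow.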
7.6 Paging

In systems with virtual memory, the operating system can move pages back and forth between memory and storage. As I mentioned in Section 6.2, this mechanism is called paging or sometimes swapping.

Here's how the process works:
1. Suppose Process A calls malloc to allocate a chunk. If there is no free space in the heap with the requested size, malloc calls sbrk to ask the operating system for more memory.

2. If there is a free page in physical memory, the operating system adds it to the page table for Process A, creating a new range of valid virtual addresses.

3. If there are no free pages, the paging system chooses a victim page belonging to Process B. It copies the contents of the victim page from memory to disk, then it modifies the page table for Process B to indicate that this page is swapped out.

4. Once the data from Process B is written, the page can be reallocated to Process A. To prevent Process A from reading Process B's data, the page should be cleared.

5. At this point the call to sbrk can return, giving malloc additional space in the heap. Then malloc allocates the requested chunk and returns. Process A can resume.

6. When Process A completes, or is interrupted, the scheduler might allow Process B to resume. When Process B accesses a page that has been swapped out, the memory management unit notices that the page is invalid and causes an interrupt.

7. When the operating system handles the interrupt, it sees that the page is swapped out, so it transfers the page back from disk to memory.

8. Once the page is swapped in, Process B can resume.
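These steps happen inside the operating system, but their effects are visible from user space. The following small experiment, assuming a Unix-like system (the 100 MB buffer and 4096-byte page size are illustrative choices, not values from the text), allocates a large buffer, touches every page, and prints the process's page-fault counts as reported by getrusage. Minor faults occur when a page is mapped for the first time; major faults occur when a page has to be brought back in from disk.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>

    void report(const char *label) {
        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        printf("%s: %ld minor faults, %ld major faults\n",
               label, ru.ru_minflt, ru.ru_majflt);
    }

    int main(void) {
        size_t size = 100 * 1024 * 1024;   /* 100 MB */
        report("before malloc");

        char *buf = malloc(size);
        if (buf == NULL) {
            perror("malloc");
            return 1;
        }
        report("after malloc");

        /* Touching each page forces the operating system to map
           real physical pages behind the virtual addresses. */
        for (size_t i = 0; i < size; i += 4096) {
            buf[i] = 1;
        }
        report("after touching every page");

        free(buf);
        return 0;
    }

On a machine with plenty of free memory you will usually see only minor faults; major faults appear when the system is under memory pressure and pages have to be swapped out and brought back in.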