Announcements (Monday, April 27)
- Project #2
- Project #3: Credit Distribution Agreement is due (end of class today); Project Review is due (before midnight tonight)
- Rob's seminar Friday; plan of attack due Sunday
- Do Homework #3, problems 2-5; discussion in class Wednesday

Today's Topics
- More virtual memory
- More page-replacement algorithms
- Memory allocation methods

Wrap-up: Virtual Memory

Page Replacement Algorithms
- Goal: lowest page-fault rate.
- Algorithm evaluation: for each algorithm, run it on a particular string of memory references (a "reference string") and compute the number of page faults on that string.

Classical Page-Replacement Algorithms
- FIFO (First In, First Out)
- LRU (Least Recently Used)
- MFU (Most Frequently Used)
- LFU (Least Frequently Used)
- Optimal

FIFO Page Replacement
- Count the page faults; don't forget the initial faults caused by empty frames.
- Remember Belady's anomaly: adding frames can increase the number of faults.

Optimal Page Replacement Algorithm
- Replace the page that will not be used for the longest period of time.
- Four-frame example; reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
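Counting faults the way the FIFO slide describes can be sketched as below. This is a minimal illustration (the function name `fifo_faults` is my own, not from the slides); it counts the initial faults that fill empty frames and, run with 3 versus 4 frames on the example reference string, it also exhibits Belady's anomaly:

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults for FIFO replacement on a reference string."""
    frames = deque()          # oldest (first-loaded) page sits at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1       # includes the initial faults from empty frames
            if len(frames) == num_frames:
                frames.popleft()   # evict the page that arrived first
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults with 3 frames
print(fifo_faults(refs, 4))   # 10 faults with 4 frames: Belady's anomaly
```

Note that giving FIFO an extra frame on this particular string produces more faults, not fewer, which is exactly the anomaly the slide warns about.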
- Result: 6 page faults.
- How do you know which page is the victim? Lookahead is required, so Optimal cannot be implemented in practice; it is used for benchmarking other algorithms.
- As you approach the end of the reference string, use FIFO.

Least Recently Used (LRU) Page Replacement Algorithm
- Replace the page that has not been referenced for the longest time.
- Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
- Result: 8 page faults.

Most Frequently Used (MFU) Page Replacement Algorithm
- Based on the conjecture that the most frequently used page must be nearly finished.
- Check history; resolve ties using FIFO.
- Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
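The LRU result above can be reproduced with a short sketch (the helper `lru_faults` is illustrative; real systems approximate LRU with hardware reference bits rather than an exact recency list):

```python
def lru_faults(refs, num_frames):
    """Count page faults for LRU replacement on a reference string."""
    frames = []                    # least recently used page at the front
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)    # hit: refresh this page's recency
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)      # evict the least recently used page
        frames.append(page)        # page is now the most recently used
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 4))         # 8 page faults, matching the example
```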
- Result: 7 page faults (ties resolved with FIFO).

Least Frequently Used (LFU) Page Replacement Algorithm
- Based on the conjecture that recent history will repeat itself.
- Check history; resolve ties using FIFO.
- Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
- Result: 8 page faults (ties resolved with FIFO).

Page Swap Overhead
- Page swapping is "wasted time" for the process that requires it.
- The system can attend to other tasks while page swapping is in progress.
- Goal: minimize the number of page faults!

Performance of Demand Paging
- Page-fault rate p, where 0 <= p <= 1.0:
  - if p = 0, there are no page faults
  - if p = 1, every reference is a fault
- Effective Access Time (EAT):
  EAT = (1 - p) x (memory access time) + p x (page-fault overhead)
  where page-fault overhead is the time required to [swap the page out], swap the page in, and restart/resume the process.

Demand Paging Example
- Given:
  - Memory access time = 250 ns
  - Page-fault rate p = 0.2 (20% of the time the referenced page is not in memory)
  - Page-fault overhead = 10 ms = 10,000,000 ns
- Find the Effective Access Time:
  EAT = [(1 - 0.2) x 250] + (0.2 x 10,000,000)
      = (0.8 x 250) + (0.2 x 10,000,000)
      = 200 + 2,000,000
      = 2,000,200 ns = 2.0002 ms
- The EAT for p = 0.2 is approximately 8,000 times as long as when p = 0.
- Now suppose we decide to accept a virtual-memory performance degradation of less than 10%:
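The MFU and LFU traces above can both be checked with one frequency-based sketch. It assumes, as the examples appear to, that a page's reference count persists in its history across evictions and that ties are broken FIFO by load time; the function name `frequency_faults` is my own:

```python
def frequency_faults(refs, num_frames, evict_most_frequent):
    """Count faults for MFU (evict_most_frequent=True) or LFU replacement.

    Assumption: reference counts persist across evictions ("check history"),
    and ties are broken FIFO (evict the page loaded into a frame earliest).
    """
    counts = {}               # page -> lifetime reference count
    loaded = {}               # page -> time it was last loaded into a frame
    frames = set()
    faults = 0
    for time, page in enumerate(refs):
        counts[page] = counts.get(page, 0) + 1
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                # MFU picks the largest count, LFU the smallest;
                # equal counts fall back to the earliest load time.
                sign = -1 if evict_most_frequent else 1
                victim = min(frames, key=lambda q: (sign * counts[q], loaded[q]))
                frames.remove(victim)
            frames.add(page)
            loaded[page] = time
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(frequency_faults(refs, 4, True))    # MFU: 7 faults
print(frequency_faults(refs, 4, False))   # LFU: 8 faults
```

With these two assumptions the sketch reproduces the 7 (MFU) and 8 (LFU) fault counts from the examples; other tie-breaking or count-reset policies would give different numbers.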
- A degradation of less than 10% means we want EAT < 275 ns (memory access time is about 250 ns).
- Find the acceptable page-fault rate p (page-fault overhead = 10 ms = 10,000,000 ns):
  [(1 - p) x 250] + (p x 10,000,000) < 275
  [250 - 250p] + 10,000,000p < 275
  250 + 9,999,750p < 275
  9,999,750p < 25
  p < 0.0000025
- This means that fewer than 1 page reference in 400,000 can be allowed to fault!

Allocation of Frames
- The long-term scheduler determines the number of processes to be loaded (the degree of multiprogramming).
- Each process requires a certain minimum number of pages.
- Two major allocation schemes: fixed allocation and priority allocation.
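Both demand-paging calculations above can be reproduced numerically (`eat_ns` is an illustrative helper, not from the slides):

```python
def eat_ns(p, mem_ns=250, fault_ns=10_000_000):
    """Effective access time: EAT = (1 - p) * memory access + p * fault overhead."""
    return (1 - p) * mem_ns + p * fault_ns

# Example 1: p = 0.2 gives the 2,000,200 ns (about 2 ms) from the slide.
print(eat_ns(0.2))                          # 2000200.0

# Example 2: solving (1 - p)*250 + p*10_000_000 < 275 for p.
p_max = (275 - 250) / (10_000_000 - 250)
print(p_max)                                # about 2.5e-06, i.e. < 1 fault per 400,000 refs
```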
Fixed Allocation: Equal
- Frames are distributed equally among processes.
- Example: with 100 frames and 5 processes, give each process 20 frames.

Fixed Allocation: Proportional
- Allocate frames in proportion to the process sizes.
- Example: 100 frames and 2 processes, S1 (size = 20) and S2 (size = 127).
- Let s_i = size of process p_i, S = sum of all s_i, and m = total number of frames.
- Allocation for process p_i: a_i = (s_i / S) x m
- Example: m = 100, s1 = 20, s2 = 127, so S = 147:
  a1 = (20 / 147) x 100, approximately 14
  a2 = (127 / 147) x 100, approximately 86

Priority Allocation
- Use a proportional allocation scheme based on priorities rather than sizes.
- If process P_i generates a page fault, select one of its own frames as the victim, or select the victim from a process with lower priority.

Thrashing
- If a process does not have "enough" pages, the page-fault rate is very high. This leads to low CPU utilization.
- The long-term scheduler then thinks it needs to increase the degree of multiprogramming, so another process is added to the system.
- Thrashing: a process is kept busy swapping pages in and out.

Considerations / Tradeoffs: Prepaging and Page Size
- Prepaging: attempt to bring in multiple pages at once (anticipatory paging); exploit locality of reference.
- Page size selection: avoid fragmentation, minimize page-table size, minimize I/O overhead.

Considerations / Tradeoffs: TLB Reach
- TLB reach: the amount of memory accessible from the TLB.
  TLB Reach = (TLB size) x (page size)
- Ideally, the active pages of each process are covered by the TLB; otherwise there may be a high degree of page faults.

Considerations / Tradeoffs: Options
- Increase the size of the TLB? Expensive.
- Increase the page size? Smaller page tables, but may cause increased internal fragmentation.
- Provide multiple page sizes? Not all applications require a large page size; this lets applications that need larger pages use them without an increase in fragmentation, but it increases overhead.

Questions?
- Do Homework #3; discussion in class Wednesday.
- Read Love, Chapter 14.
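As a quick check of the proportional-allocation example (m = 100, s1 = 20, s2 = 127), assuming simple rounding to the nearest whole frame (`proportional_allocation` is my own name):

```python
def proportional_allocation(sizes, total_frames):
    """a_i = (s_i / S) * m, rounded to the nearest frame."""
    S = sum(sizes)
    return [round(s / S * total_frames) for s in sizes]

# 100 frames, two processes of sizes 20 and 127 (S = 147):
print(proportional_allocation([20, 127], 100))   # [14, 86]
```

Here the rounded shares happen to sum to exactly 100; in general, naive per-process rounding can leave a frame or two over- or under-allocated, which a real allocator would have to reconcile.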
This note was uploaded on 06/28/2009 for the course CS 411 taught by Professor Staff during the Spring '08 term at Oregon State.