Lecture 22
COT 4600 Operating Systems, Spring 2011
Dan C. Marinescu
Office: HEC 304
Office hours: Tu-Th 5:00-6:00 PM
Lecture 22 - Tuesday, April 12, 2011
- Last time:
  - Scheduling
- Today:
  - Memory characterization
  - Multi-level memory management using virtual memory
  - Adding multi-level memory management to virtual memory
  - Page replacement algorithms
- Next time:
  - Performance
Virtual memory
- Several strategies:
  - Paging
  - Segmentation
  - Paging + segmentation
- When a process/thread is created, the system creates a page table for it; an entry in the page table contains:
  - The location of the page in the swap area
  - The address in main memory where the page resides, if the page has been brought in from the disk
  - Other information, e.g., the dirty bit
- Page fault → a process/thread references an address in a page that is not in main memory.
- On-demand paging → a page is brought into main memory from the swap area on disk only when the process/thread references an address in that page.
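The page-table entry described above can be pictured as a small C structure; the field names, types, and the helper below are illustrative assumptions, not the layout of any particular hardware or OS.

    #include <stdbool.h>
    #include <stdint.h>

    /* One page-table entry holding the information listed on the slide
       (hypothetical layout; real hardware packs these into bit fields). */
    struct pte {
        bool     present;     /* page currently resides in main memory     */
        bool     dirty;       /* page was modified since it was brought in */
        uint64_t frame_addr;  /* where the page resides in main memory     */
        uint64_t swap_offset; /* location of the page in the swap area     */
    };

    /* A reference to a page whose entry is not marked present is a page
       fault; on-demand paging brings the page in from swap_offset only
       when such a reference occurs. */
    bool causes_page_fault(const struct pte *e)
    {
        return !e->present;
    }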
Dynamic address translation (figure)
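Only the title of this slide's figure survives. As a rough stand-in, the sketch below shows the arithmetic at the heart of dynamic address translation: splitting a virtual address into a page number and a displacement, then mapping the page number through a frame table. The 4 KB page size, the flat frame table, and the assumption that the page is present are all illustrative.

    #include <stdint.h>

    #define PAGE_SIZE 4096u   /* assumed page size, for illustration only */

    /* Split a virtual address into (Page#, Displ) and map Page# to a frame.
       A real translator would first check for a page fault. */
    uint64_t dynamic_translate(const uint64_t *frame_of_page, uint64_t vaddr)
    {
        uint64_t page  = vaddr / PAGE_SIZE;   /* Page#            */
        uint64_t displ = vaddr % PAGE_SIZE;   /* Displ            */
        return frame_of_page[page] + displ;   /* physical address */
    }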
Multi-level memories (figure)
System components involved in memory management
- Virtual memory manager (VMM) → dynamic address translation
- Multi-level memory manager (MLMM) → moves pages between the levels of the memory hierarchy
The modular design
- The VMM attempts to translate the virtual memory address to a physical memory address.
- If the page is not in main memory, the VMM generates a page-fault exception.
- The exception handler uses a SEND to send the page number to an MLMM port.
- The SEND invokes ADVANCE, which wakes up a thread of the MLMM.
- The MLMM invokes AWAIT on behalf of the thread interrupted due to the page fault.
- The AWAIT releases the processor to the SCHEDULER thread.
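The exception-handler side of this interaction can be rendered in C-like form. SEND, AWAIT, the eventcount, and the port below are placeholders for the coordination primitives named on the slide; they are declared but not implemented, so this is a control-flow sketch rather than runnable kernel code.

    /* Placeholder declarations for the primitives named on the slide. */
    struct eventcount { long count; };

    extern void SEND(int port, unsigned long page_no);    /* hand work to the MLMM   */
    extern void AWAIT(struct eventcount *ec, long value); /* block the caller        */

    struct eventcount page_ready;   /* advanced when the missing page is in memory */

    /* Runs after the VMM raises a page-fault exception. */
    void page_fault_handler(int mlmm_port, unsigned long missing_page)
    {
        /* SEND passes the page number to the MLMM port; per the slide, SEND
           itself invokes ADVANCE, which wakes up a thread of the MLMM.     */
        SEND(mlmm_port, missing_page);

        /* AWAIT is issued on behalf of the interrupted thread; it releases
           the processor to the SCHEDULER until the missing page arrives.   */
        AWAIT(&page_ready, page_ready.count);
    }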
Page-fault handling: the flow among Application thread 1, the Virtual Memory Manager, the Exception Handler, the Scheduler, the Multi-Level Memory Manager, and Application thread 2
- Application thread 1 executes (registers IR, PC); the VMM translates (PC) into (Page#, Displ).
- Is (Page#) in primary storage?
  - YES: compute the physical address of the instruction and continue.
  - NO: page fault; save PC.
- The exception handler handles the page fault: it identifies the Page#, does SEND(Page#) to the MLMM, and issues AWAIT on behalf of thread 1.
- Thread 1 becomes WAITING; the scheduler loads the PC of thread 2; thread 2 becomes RUNNING.
- The MLMM finds a block in primary storage. Is the "dirty" bit of the block ON?
  - YES: write the block to secondary storage.
  - NO: fetch the block corresponding to the missing page.
- When the I/O operation completes, ADVANCE is invoked; thread 1 becomes RUNNING again and its PC is reloaded.
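The MLMM side of this sequence, again schematically: choose_victim, write_block, read_block, and ADVANCE are assumed helper routines invented for illustration, not a real API.

    /* A block (frame) of primary storage, as in the diagram above. */
    struct block {
        int           dirty;     /* "dirty" bit of the block           */
        unsigned long page_no;   /* page currently held in this block  */
    };

    extern struct block *choose_victim(void);                 /* pick a block in primary storage */
    extern void write_block(const struct block *b);           /* block -> secondary storage      */
    extern void read_block(struct block *b, unsigned long p); /* fetch the missing page          */
    extern void ADVANCE(void);                                /* wake the waiting thread         */

    void mlmm_service(unsigned long missing_page)
    {
        struct block *b = choose_victim();   /* find a block in primary storage      */

        if (b->dirty)                        /* "dirty" bit ON?                       */
            write_block(b);                  /* YES: write it to secondary storage    */

        read_block(b, missing_page);         /* fetch the block for the missing page  */
        b->page_no = missing_page;
        b->dirty   = 0;

        /* Once the I/O operation completes, ADVANCE moves thread 1 from
           WAITING back to RUNNING.                                      */
        ADVANCE();
    }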
Name resolution in multi-level memories
- We consider pairs of layers:
  - Upper level of the pair → primary
  - Lower level of the pair → secondary
- The top level is managed by the application, which generates LOAD and STORE instructions to/from CPU registers from/to named memory locations.
- The processor issues READs/WRITEs to named memory locations. The name goes to the primary memory device located on the same chip as the processor, which searches the name space of the on-chip cache (the L1 cache) as the primary device, with the L2 cache as the secondary device.
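A minimal sketch of the layered lookup described above: each level of a pair resolves a name in its own name space and, on a miss, hands the same name to its secondary device. The struct, field names, and resolve function are invented for illustration.

    #include <stddef.h>

    /* One level of the hierarchy (e.g., L1 as primary with L2 as its
       secondary device). "lookup" searches this level's own name space. */
    struct level {
        const char   *name;                                  /* e.g. "L1", "L2" */
        int         (*lookup)(struct level *self, unsigned long addr, long *value);
        struct level *secondary;                             /* next level down, or NULL */
    };

    /* Resolve a named memory location: try the primary device first; on a
       miss, pass the name to its secondary device, and so on down.       */
    int resolve(struct level *primary, unsigned long addr, long *value)
    {
        for (struct level *l = primary; l != NULL; l = l->secondary) {
            if (l->lookup(l, addr, value))
                return 1;   /* name resolved at this level (hit)    */
        }
        return 0;           /* miss at every level of the hierarchy */
    }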