Process Synchronization

So far, the granularity of concurrency has been at the machine-instruction level: preemption may occur at any time between machine-level instructions. But mix in shared data or resources and we run into potentially inconsistent state, caused by inappropriate interleaving of concurrent operations, since many activities need more than one machine instruction to complete.

The technical term is race condition: any situation whose outcome depends on the specific order in which concurrent code accesses shared data.

Process synchronization (or coordination) seeks to make sure that concurrent processes or threads don't interfere with each other when accessing or changing shared resources.

Some terms to help frame the problem better:
- Segments of code that touch shared resources are called critical sections; thus, no two processes should be in their critical sections at the same time.
- The critical-section problem is the problem of designing a protocol for ensuring that cooperating processes' critical sections don't interleave.

The overall subject of synchronization leads to a number of well-known "classic problems" and solutions.
Critical Section Structure

Critical-section code has the same general structure:
- An entry section requests and waits for permission to enter the critical section.
- The critical section itself follows; the process should be free to touch shared resources here without fear of interference.
- The exit section handles anything that "closes" a process's presence in its critical section.
- The remainder section is anything else after that.

Critical Section Solution Requirements

Solutions to the critical-section problem must satisfy:
- Mutual exclusion: only one process in its critical section at a time; i.e., no critical-section overlap.
- Progress (a.k.a. deadlock-free): if some process wants into its critical section, then some process gets in.
- Bounded waiting (a.k.a. lockout-free): every process that wants into its critical section eventually gets in.

Note how no lockout implies no deadlock; in practice, we also want a reasonable maximum bound on the wait.
Critical Sections in the Operating System Kernel

Note how an OS has a lot of possible race conditions, due to shared data structures and resources. A simple solution for this is to have a nonpreemptive kernel: never preempt kernel-mode processes, thus eliminating OS race conditions. A preemptive kernel, in which kernel-mode code can be preempted, is much harder to do, but is needed for real-time operations and better responsiveness, plus it eliminates the risk of excessively long kernel code activities.

- Windows XP/2000: nonpreemptive
- Traditional Unix: nonpreemptive
- Linux: nonpreemptive before 2.6, preemptive thereafter
- xnu (Mac OS X/Darwin): "preemptible," off by default until a real-time process is scheduled
- Solaris: preemptive
- IRIX: preemptive
This note was uploaded on 01/18/2012 for the course INFORMATIK 2011 taught by Professor Phanthuongcang during the Winter '11 term at Cornell University (Engineering School).
