Distributed Mutual Exclusion
Last time…
- Synchronizing real, distributed clocks
- Logical time and concurrency
- Lamport clocks and total-order Lamport clocks
Goals of distributed mutual exclusion
Much like regular mutual exclusion:
- Safety: mutual exclusion
- Liveness: progress
- Fairness: bounded wait and in-order entry (in order of logical time!)
Secondary goals:
- Reduce message traffic
- Minimize synchronization delay, i.e., switch quickly between waiting processes
Distributed mutex is different
- Regular mutual exclusion solved using shared state, e.g. atomic test-and-set of a shared variable, or a shared queue
- We solve distributed mutual exclusion with message passing
- Note: we assume the network is reliable but asynchronous… but processes might fail!
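For contrast, here is a minimal sketch of the shared-state approach the slide refers to (assumption: a non-blocking lock acquire stands in for an atomic test-and-set instruction; this is a shared-memory illustration, not a distributed solution):

```python
import threading

# The Lock object plays the role of the shared variable; acquire(blocking=False)
# atomically tests whether it is free and takes it if so, like test-and-set.
flag = threading.Lock()

def enter_critical_section():
    while not flag.acquire(blocking=False):  # spin until the test-and-set succeeds
        pass

def leave_critical_section():
    flag.release()
```

Once processes share only a network, no such shared variable exists, which is why the solutions below use only messages.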
Solution 1: A central mutex server
- To enter critical section: send REQUEST to central server, wait for permission
- To leave: send RELEASE to central server
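A minimal sketch of the server's bookkeeping (the class and method names are illustrative, not from the slides; message sending/receiving is left out): the first requester is granted immediately, later requesters are queued, and a RELEASE hands the grant to the next waiter.

```python
from collections import deque

class MutexServer:
    """Sketch of the central server's state. In a real system, request() and
    release() would run when REQUEST/RELEASE messages arrive, and the returned
    process id would be sent a GRANT message."""

    def __init__(self):
        self.holder = None       # process currently allowed in the critical section
        self.waiting = deque()   # FIFO queue of waiting process ids

    def request(self, pid):
        """Handle a REQUEST from pid; return True if permission is granted now."""
        if self.holder is None:
            self.holder = pid
            return True
        self.waiting.append(pid)
        return False

    def release(self, pid):
        """Handle a RELEASE from the holder; return the next holder, if any."""
        assert self.holder == pid
        self.holder = self.waiting.popleft() if self.waiting else None
        return self.holder

# Tiny usage example:
s = MutexServer()
s.request("A")   # True: A enters immediately (REQUEST + GRANT)
s.request("B")   # False: B is queued
s.release("A")   # returns "B": the server would now send GRANT to B
```

Counting REQUEST, GRANT, and RELEASE gives the three messages per entry/exit mentioned on the next slide.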
Solution 1: A central mutex server
Advantages:
- Simple (we like simple!)
- Only 3 messages required per entry/exit
Disadvantages:
- Central point of failure
- Central performance bottleneck
- With an asynchronous network, impossible to achieve in-order fairness
- Must elect/select central server
Solution 2: A ring-based algorithm
- Pass a token around a ring
- Can enter critical section only if you hold the token
Problems:
- Not in-order
- Long synchronization delay: need to wait for up to N-1 messages, for N processors
- Very unreliable: any process failure breaks the ring
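A tiny single-process sketch of the token's movement (illustrative names; in a real deployment each hop is a message to the next process in the ring), showing why up to N-1 hops may pass before a waiting process can enter:

```python
def pass_token(holder, n):
    """Hand the token from the current holder to its ring neighbor."""
    return (holder + 1) % n

# 4 processes in a ring; process 2 wants the critical section,
# but the token currently sits at process 0.
n, holder, wanter = 4, 0, 2
hops = 0
while holder != wanter:      # worst case: N-1 hops (i.e., messages)
    holder = pass_token(holder, n)
    hops += 1
print(hops)                  # 2: process 2 may now enter the critical section
```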
2’: A fair ring-based algorithm
- Token contains the time t of the earliest known outstanding request
- To enter critical section: stamp your request with the current time T_r, wait for token
- When you get the token with time t while waiting with a request from time T_r, compare T_r to t:
  - If T_r = t: hold token, run critical section
  - If T_r > t: pass token
  - If t not set or T_r < t: set token-time to T_r, pass token, wait for token
- To leave critical section: set token-time to null (i.e., unset it), pass token
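The comparison rule, written out as a sketch (function names are illustrative; t is the time carried by the token, T_r is this process's pending request time, or None if it has no pending request):

```python
def on_token(t, T_r):
    """Decide what to do when the token arrives carrying time t.
    Returns (action, new_token_time), where action is 'enter' or 'pass'."""
    if T_r is None:                  # not waiting: just forward the token
        return "pass", t
    if t is not None and T_r == t:   # our request is the earliest outstanding one
        return "enter", t            # hold the token, run the critical section
    if t is None or T_r < t:         # token hasn't seen a request as early as ours
        return "pass", T_r           # record ours as the earliest, keep waiting
    return "pass", t                 # T_r > t: an earlier request exists; forward

def on_exit():
    """Leaving the critical section: unset the token time and pass the token."""
    return "pass", None
```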
Solution 3: A shared priority queue
- By Lamport, using Lamport clocks
- Each process i locally maintains Q_i, part of a shared priority queue
- To run critical section, must have replies from all other processes AND be at the front of Q_i
- When you have all replies:
  - #1: All other processes are aware of your request
  - #2: You are aware of any earlier requests for the mutex
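A sketch of the local state and the entry test (class and method names are illustrative; the broadcast of REQUEST/REPLY/RELEASE messages and the Lamport-clock updates themselves are omitted). Requests are ordered by (Lamport time, process id) pairs so ties break deterministically:

```python
import heapq

class LamportMutex:
    """Per-process state for the shared-priority-queue scheme."""

    def __init__(self, me, others):
        self.me = me
        self.others = set(others)   # ids of all other processes
        self.queue = []             # Q_i: local copy of the shared priority queue
        self.replies = set()        # who has replied to our outstanding request

    def request(self, lamport_time):
        """Timestamp our own request and add it to Q_i (then broadcast it, not shown)."""
        heapq.heappush(self.queue, (lamport_time, self.me))
        self.replies.clear()

    def on_request(self, timestamp, pid):
        """Another process's REQUEST arrives: add it to Q_i (and send a REPLY, not shown)."""
        heapq.heappush(self.queue, (timestamp, pid))

    def on_reply(self, pid):
        self.replies.add(pid)

    def can_enter(self):
        """Enter only with replies from everyone AND our request at the front of Q_i."""
        if not self.queue or self.queue[0][1] != self.me:
            return False
        return self.replies == self.others
```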