CHAPTER 5: DISTRIBUTED PROCESS SCHEDULING
Chapter outline

• Three process models:
  – precedence
  – communication
  – disjoint
• System performance model that illustrates the relationship among algorithm, scheduling, and architecture
• Scheduling:
  – Static scheduling: precedence model, communication model
  – Dynamic scheduling: load sharing, load balancing
    – disjoint process model
    – interacting process model
• Implementation:
  – remote service and execution
  – process migration
• Real-time scheduling and synchronization:
  – basic RTS
  – priority inversion
  – blocking

Process models

[Figure: three process graphs — (a) Precedence, (b) Communication, (c) Disjoint. Dashed lines represent processor boundaries.]

• Precedence Process Model
  – Precedence relationships are best represented by a DAG
  – Suitable for Fork/Join or CoBegin/CoEnd code
  – Communication costs are incurred if an arc crosses a processor boundary
  – Goal: minimize makespan
• Interacting (Communication) Process Model
  – Persistent processes that exchange messages asynchronously
  – Represented by a graph showing processes and communication paths
  – Communication costs are incurred if a message crosses a processor boundary
  – Goal: minimize total computation and communication costs
• Disjoint Process Model
  – Processes run independently
  – Processes arrive independently
  – Each process has a queuing time and a service time
  – Goal: minimize turnaround time / processor idle time
  – With process migration, load sharing/balancing is possible at the cost of migration

A system performance model

Speed-up factor:

  S = F(Algorithm, System, Schedule)

  S = OSPT / CPT
    = (OSPT / OCPT_ideal) × (OCPT_ideal / CPT)
    = S_i × S_d

• S = actual speedup on an n-processor system
• OSPT = Optimal Sequential Processing Time
• CPT = actual Concurrent Processing Time
• OCPT_ideal = Optimal Concurrent Processing Time on an ideal system
• S_i = ideal speedup
• S_d = degradation due to the actual implementation

Ideal speed-up

  S_i = (RC / RP) × n

  RP = (sum of P_i for i = 1..m) / OSPT

  RC = (sum of P_i for i = 1..m) / (OCPT_ideal × n)

• S_i = ideal speedup on an n-processor system
• RC = Relative Concurrency (processor utilization), RC ≤ 1
• RP = Relative Processing requirement, RP ≥ 1
• n = number of processors
• m = number of tasks
• P_i = computation time of task i

System degradation

  S_d = 1 / (1 + ρ)

  ρ = (CPT − OCPT_ideal) / OCPT_ideal

• S_d = system degradation
• ρ = loss of parallelism on a real machine

[Figure: ρ split into a scheduling component (ρ_sched) and a system component (ρ_syst), contrasting four cases — ideal system with optimal scheduling, ideal system with nonoptimal scheduling, real system with optimal scheduling, and real system with nonoptimal scheduling.]

Finally,

  S = (RC / RP) × (1 / (1 + ρ)) × n

Amdahl's Law

Speed-up is intrinsically limited by the parallelizability of the program. ...
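The speedup decomposition S = S_i × S_d can be checked numerically. Below is a minimal Python sketch of the model's formulas; the task times and the values chosen for OCPT_ideal and CPT are made-up illustrative numbers (the notes do not give a worked example), and the variable names simply mirror the symbols above:

```python
# Speedup decomposition: S = S_i * S_d = (RC / RP) * n * 1 / (1 + rho)
# Illustrative numbers only: m = 4 tasks on n = 2 processors.

P = [4.0, 3.0, 2.0, 1.0]   # P_i: computation time of each task
n = 2                       # number of processors

OSPT = sum(P)               # assume the optimal sequential schedule has no idle time
OCPT_ideal = 6.0            # assumed optimal concurrent time on an ideal system
CPT = 7.5                   # assumed measured concurrent time on the real system

RP = sum(P) / OSPT                 # Relative Processing requirement (>= 1)
RC = sum(P) / (OCPT_ideal * n)     # Relative Concurrency, i.e. utilization (<= 1)

S_i = (RC / RP) * n                # ideal speedup; algebraically OSPT / OCPT_ideal
rho = (CPT - OCPT_ideal) / OCPT_ideal   # loss of parallelism on the real machine
S_d = 1 / (1 + rho)                # degradation; algebraically OCPT_ideal / CPT

S = S_i * S_d                      # actual speedup; algebraically OSPT / CPT
print(f"S_i = {S_i:.4f}, S_d = {S_d:.4f}, S = {S:.4f}")
```

With these numbers, S_i = 10/6 ≈ 1.67, S_d = 6/7.5 = 0.8, and S = 10/7.5 ≈ 1.33, matching S = OSPT / CPT directly, which confirms the decomposition is consistent.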
This note was uploaded on 01/18/2012 for the course COP 5615 taught by Professor Staff during the Fall '08 term at University of Florida.
