CS-350: Fundamentals of Computing Systems
Lecture Notes

Azer Bestavros. All rights reserved. Reproduction or copying (electronic or otherwise) is expressly forbidden except for students enrolled in CS-350.

Processes as Resource Consumers

Sources of Concurrency in a Computing System

As we alluded to before, the management of concurrency (coordination, resource management, synchronization, etc.) is a central theme of this course. Before we delve into the details of mechanisms and approaches for managing concurrency, we must first understand why concurrency is necessary in most modern computing systems.

Efficient use of resources dictates the use of concurrency

In a uni-programmed environment, a single program cannot make efficient use of a resource (e.g. the CPU) while it is waiting for the completion of some work on a different resource (e.g. the disk or the network). In Figure 1, while the program is waiting for service from some other resource (say the disk), CPU cycles are being wasted. (1)

Figure 1: Uni-programming

To make better use of the resource, we need to allow multiple programs to share in the use of the resource by interleaving their use of it, as depicted in Figure 2.

Figure 2: Interleaving multiple program execution

(1) Notice that the same could be said for the other resource as well! Namely, while the program is using the CPU, the disk is sitting idle.

The co-existence of multiple executing programs sharing the same set of resources gives rise to the notion of a multiprogramming (or multi-tasking) environment.

System scalability dictates the use of independent/concurrent services

Consider a complex software system, say an operating system, a database system, or a web server. One way of building such a system is to think of it as a monolithic piece of software.
This is obviously less than ideal because it does not allow for independent development (and separation) of functionality. A much better approach is to divide the system into basic components that are implemented independently, run autonomously, and communicate only when necessary. This is indeed the model most often used in the development of large software artifacts, whereby the system is designed as a collection of services that, together, constitute the system. One of the many advantages of this divide-and-conquer approach to software development is that it makes it possible to reuse services developed for one system in a different system. Another important advantage is that the issue of provisioning (deciding how much computing resource to dedicate to the system) becomes more manageable since, for example, one could run different services on different computers in a distributed environment. This is indeed why the client-server model and variants thereof (e.g. publisher/subscriber, producer/consumer, CORBA, DCOM) have been quite successful for building large software artifacts. All of these models imply the concurrent execution of multiple communicating components.
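The service decomposition described above can be sketched as a producer/consumer pair whose only coupling is a message queue between them. This is a minimal Python sketch, not an example from the notes; the service names (producer_service, consumer_service) and the doubling "work" are hypothetical stand-ins.

```python
import queue
import threading

# Hypothetical decomposition into two autonomous services: a producer
# generates requests and a consumer handles them. They share no state
# other than the queue, and communicate only when necessary.
requests: queue.Queue = queue.Queue()
results = []

def producer_service(n: int) -> None:
    for i in range(n):
        requests.put(i)
    requests.put(None)  # sentinel: no more work

def consumer_service() -> None:
    while True:
        item = requests.get()
        if item is None:
            break
        results.append(item * 2)  # stand-in for real request handling

p = threading.Thread(target=producer_service, args=(5,))
c = threading.Thread(target=consumer_service)
p.start(); c.start()
p.join(); c.join()
# results == [0, 2, 4, 6, 8]
```

Because the two services interact only through the queue, either side could later be moved to a different process or machine without changing the other, which is the provisioning advantage mentioned above.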
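Returning to the resource-sharing argument of Figure 2, the benefit of interleaving can also be sketched with a toy timing simulation. This is a minimal Python sketch under stated assumptions: time.sleep stands in for both the CPU burst and the I/O wait (a real CPU burst would not release Python's GIL), so the example models only the timeline of events, and the job durations are made up.

```python
import threading
import time

def program(cpu_burst_s: float, io_wait_s: float) -> None:
    # Model one program: a burst of CPU work, then a blocking I/O request.
    # Both are simulated with sleep; this captures timing, not real work.
    time.sleep(cpu_burst_s)
    time.sleep(io_wait_s)

def run_sequential(jobs) -> float:
    # Uni-programming (Figure 1): each program runs to completion, so the
    # CPU sits idle during every I/O wait.
    start = time.monotonic()
    for cpu, io in jobs:
        program(cpu, io)
    return time.monotonic() - start

def run_interleaved(jobs) -> float:
    # Multiprogramming (Figure 2): programs overlap, so one program's CPU
    # burst can proceed while another waits on I/O.
    start = time.monotonic()
    threads = [threading.Thread(target=program, args=j) for j in jobs]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.monotonic() - start

jobs = [(0.05, 0.2), (0.05, 0.2)]  # (cpu_burst_s, io_wait_s), hypothetical
seq = run_sequential(jobs)   # roughly 0.5 s: I/O waits add up
par = run_interleaved(jobs)  # roughly 0.25 s: the waits overlap
```

Running both versions shows the interleaved total time well below the sequential one, which is exactly the wasted-cycles argument illustrated by Figures 1 and 2.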
Spring 2009