12 Shared Memory Introduction

12.1 Introduction

Shared memory is the fastest form of IPC available. Once the memory is mapped into the address space of the processes that are sharing the memory region, no kernel involvement occurs in passing data between the processes. What is normally required, however, is some form of synchronization between the processes that are storing and fetching information to and from the shared memory region. In Part 3, we discussed various forms of synchronization: mutexes, condition variables, read–write locks, record locks, and semaphores.

What we mean by ‘‘no kernel involvement’’ is that the processes do not execute any system calls into the kernel to pass the data. Obviously, the kernel must establish the memory mappings that allow the processes to share the memory, and then manage this memory over time (handle page faults, and the like).

Consider the normal steps involved in the client–server file copying program that we used as an example for the various types of message passing (Figure 4.1).

• The server reads from the input file. The file data is read by the kernel into its memory and then copied from the kernel to the process.

• The server writes this data in a message, using a pipe, FIFO, or message queue. These forms of IPC normally require the data to be copied from the process to the kernel.

We use the qualifier normally because Posix message queues can be implemented using memory-mapped I/O (the mmap function that we describe in this chapter), as we showed in Section 5.8 and as we show in the solution to Exercise 12.2. In Figure 12.1, we assume that Posix message queues are implemented within the kernel, which is another possibility. But pipes, FIFOs, and System V message queues all involve copying the data from the process to the kernel for a write or msgsnd, or copying the data from the kernel to the process for a read or msgrcv.
• The client reads the data from the IPC channel, normally requiring the data to be copied from the kernel to the process.

• Finally, the data is copied from the client’s buffer, the second argument to the write function, to the output file.

A total of four copies of the data are normally required. Additionally, these four copies are done between the kernel and a process, often an expensive copy (more expensive than copying data within the kernel, or copying data within a single process). Figure 12.1 depicts this movement of the data between the client and server, through the kernel.

[Figure 12.1: Flow of file data from server to client. The server reads the input file (read()) and sends the data over the IPC channel (write(), mq_send(), or msgsnd()); the client receives it (read(), mq_receive(), or msgrcv()) and writes the output file (write()). Each of these steps crosses the boundary between a process and the kernel.]
This note was uploaded on 11/12/2010 for the course CSCI 271 taught by Professor Wilczynski during the Spring '08 term at USC.
