...shared memory, life is much slower.

The Adve paper gives two constraints on implementations of sequential consistency:
- Don't issue the next write until the previous write has completed, where "completed" means all copies have been updated or invalidated (if the item is cached).
- All updates to the same location are serialized.

With linearizability, you need to wait until the store completes before doing the add. With sequential consistency, you can buffer the first store and do it in parallel with the add. The second store must then stall to wait for the first store, but it too can be buffered. However, the load must wait for both earlier stores to finish. And this is bad! It means you can have two processors and a system that runs several times slower than if it had one processor. Ouch!

Third model (causal ordering): a read returns a causally consistent recent version of the data. That is, if I have received a message A from a node (directly, or indirectly through some other node), then I will see all updates that node made prior to A. This relaxes the ordering constraints even more.
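One common way to get the causal guarantee above is vector clocks with causal delivery: a replica buffers an update until it has seen everything that causally preceded it. The sketch below is an illustration under assumptions not in the notes (the `Replica` class, its fields, and the delivery rule are all hypothetical), not a description of any particular system:

```python
# Hypothetical sketch: causal delivery via vector clocks. A replica applies a
# remote update only once it has seen every update that causally preceded it.

class Replica:
    def __init__(self, rid, n):
        self.rid = rid            # this replica's index among n replicas
        self.clock = [0] * n      # updates seen so far from each replica
        self.data = {}            # key -> value store
        self.pending = []         # updates buffered as not yet deliverable

    def local_write(self, key, value):
        """Apply a local update and return the message to broadcast."""
        self.clock[self.rid] += 1
        self.data[key] = value
        return (self.rid, list(self.clock), key, value)

    def _deliverable(self, sender, stamp):
        # Deliverable iff this is the sender's next update AND we have already
        # seen everything the sender had seen from everyone else.
        if stamp[sender] != self.clock[sender] + 1:
            return False
        return all(stamp[i] <= self.clock[i]
                   for i in range(len(stamp)) if i != sender)

    def receive(self, msg):
        # Buffer the message, then deliver everything that has become ready.
        self.pending.append(msg)
        progress = True
        while progress:
            progress = False
            for m in list(self.pending):
                sender, stamp, key, value = m
                if self._deliverable(sender, stamp):
                    self.data[key] = value
                    self.clock[sender] = stamp[sender]
                    self.pending.remove(m)
                    progress = True
```

For example, if node A writes x=1 and then x=2, and node B receives the second update first, B buffers it rather than exposing x=2 without A's earlier write; once x=1 arrives, both apply in causal order.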