Lecture 15: Transactional Memory (CS 501, May 19, 2009)

Motivation
- Scalability is no longer just about large-scale distributed systems
- Modern processors have multiple cores
  - A natural result of power / performance challenges
- No longer just servers ... but now clients: desktops, laptops, and, increasingly, mobile devices
- Good performance requires scaling over multiple cores

Threads and Locks
- Traditional programming model for concurrency:
  - multiple threads of execution (e.g., one per core)
  - shared memory
  - locks to coordinate
- Difficult to program
  - Fine-grain locking required for good scalability
  - Avoiding deadlock, maintaining lock discipline, composing modules, ... (a small deadlock sketch follows the slide text below)

Transactional Memory
- Replace locked regions with transactions: sets of atomic operations that appear to execute sequentially
- Two key ideas:
  - Optimistic concurrency in the implementation: assume potentially conflicting operations can run concurrently
  - Declarative safety in the language: declare what properties you want rather than how to achieve them

Programming with Locks
(diagram: shared memory cells x: apple, y: (empty), z: cat)
  synchronized (data) { x = apple; }
  synchronized (data) { z = cat; }

Programming with Transactions
(diagram: the same shared memory cells x: apple, y: (empty), z: cat)
  atomic { x = apple; }
  atomic { z = cat; }

Resolving Conflicts
(diagram: x: apple, z: cat; two transactions race to set y to banana or bird)
  atomic { y = banana; }
  atomic { y = bird; }

Overview
- Motivation
- Implementing Optimistic Concurrency
  - Hardware Transactional Memory (HTM)
  - Software Transactional Memory (STM)
- Integrating Transactions into Programming Languages
- Summary

Optimistic Concurrency in Transactions
- Key idea: transactions should be performed in parallel
  - An old idea from databases ... but now applied to memory
- Allow independent transactions to overlap
- Detect and recover from conflicting accesses
  - Maintain original and new versions of memory for each transaction
  - If there is no conflict, commit and publish the new version all at once
  - If there is a conflict, abort and restore the original version
  (a software sketch of this versioning scheme follows the slide text below)

Hardware Transactional Memory
- Seminal work: Herlihy / Moss 1993
- Idea: leverage existing multi-processor hardware to provide small-scale transactions
  - Versioning: the cache contains new values, main memory retains the originals
  - Conflict detection: extend existing cache coherency mechanisms
  - Commit: cached values copied into memory
  - Abort: cached values dropped

State of Commercial Hardware TM
- Azul: large-scale Java servers (> 500 cores)
  - TM used under the hood: speculative lock elision (sketched below)
    - Execute locked sections in Java as transactions
    - On failure, fall back to locks
- Sun's Rock (expected in 2009)
  - Hardware TM exposed to programmers
  - Transactions limited in size and allowable operations

Software Transactional Memory
- Implement all necessary bookkeeping in software
  - Leverage the virtual machine, compiler, and GC
- Sacrifice raw performance, but gain flexibility...
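The deadlock hazard mentioned on the "Threads and Locks" slide can be made concrete with a short sketch. The Account class below is invented for illustration, not something from the lecture: if one thread transfers from a to b while another transfers from b to a, each can acquire its own lock and then wait forever for the other's.

    // Illustration of the lock-ordering deadlock hazard listed on the
    // "Threads and Locks" slide. Account is a made-up class for this sketch.
    class Account {
        private int balance;
        Account(int balance) { this.balance = balance; }

        // Holds both account locks while moving money. If one thread runs
        // a.transferTo(b, 10) while another runs b.transferTo(a, 20), each
        // can take its own lock first and then block forever on the other's.
        void transferTo(Account other, int amount) {
            synchronized (this) {
                synchronized (other) {
                    this.balance -= amount;
                    other.balance += amount;
                }
            }
        }
    }

With transactional memory, the body of transferTo would instead sit in a single atomic region, and the runtime rather than the programmer would be responsible for detecting and resolving conflicting transfers.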
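The versioning scheme on the "Optimistic Concurrency in Transactions" slide (keep original and new versions, publish on commit, discard on abort) can be sketched in software. Everything below (TVar, Transaction, ConflictException, the atomic helper) is an invented, minimal illustration under those assumptions, not an API from the lecture, and it ignores the memory-visibility and performance issues a real STM must handle.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Consumer;

    // A transactional memory cell: the committed value plus a version
    // counter that is bumped on every committed write.
    final class TVar {
        volatile Object value;
        volatile long version;
        TVar(Object initial) { value = initial; }
    }

    // Thrown when commit-time validation finds that a value read by the
    // transaction has since been overwritten by another committed transaction.
    final class ConflictException extends RuntimeException {}

    final class Transaction {
        // Versions observed at first read (the "original version").
        private final Map<TVar, Long> readVersions = new HashMap<>();
        // Buffered writes (the "new version"), invisible to others until commit.
        private final Map<TVar, Object> writes = new HashMap<>();

        Object read(TVar var) {
            if (writes.containsKey(var)) return writes.get(var);  // read-your-writes
            readVersions.putIfAbsent(var, var.version);
            return var.value;
        }

        void write(TVar var, Object newValue) {
            writes.put(var, newValue);
        }

        // Commit point: validate that nothing we read has since been changed
        // by a committed transaction, then publish all buffered writes at
        // once. A single global lock keeps the sketch simple.
        void commit() {
            synchronized (Transaction.class) {
                for (Map.Entry<TVar, Long> e : readVersions.entrySet()) {
                    if (e.getKey().version != e.getValue()) {
                        throw new ConflictException();   // conflict: abort
                    }
                }
                for (Map.Entry<TVar, Object> e : writes.entrySet()) {
                    e.getKey().value = e.getValue();
                    e.getKey().version++;
                }
            }
        }

        // Runs a block as a transaction, retrying on conflict: the
        // programming model the slides write as "atomic { ... }".
        static void atomic(Consumer<Transaction> block) {
            while (true) {
                Transaction tx = new Transaction();
                try {
                    block.accept(tx);
                    tx.commit();
                    return;
                } catch (ConflictException e) {
                    // abort: buffered writes are simply discarded, then retry
                }
            }
        }
    }

A caller would use it roughly like this (again just a sketch): two threads incrementing a shared counter each read the current value, buffer the write, and retry if the other thread committed in between.

    // Usage sketch: concurrent increments on a shared counter.
    TVar counter = new TVar(0);
    Runnable inc = () -> Transaction.atomic(tx ->
            tx.write(counter, (Integer) tx.read(counter) + 1));
    new Thread(inc).start();
    new Thread(inc).start();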
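The Azul bullet describes speculative lock elision: run a synchronized region as a transaction and take the real lock only if that keeps failing. The control flow is roughly the sketch below; htmTryExecute is a hypothetical stand-in for running a block as a hardware transaction (there is no such Java API, and Azul did this inside the JVM's lock implementation rather than in user code).

    // Sketch of the speculative lock elision policy from the "State of
    // Commercial Hardware TM" slide. A real implementation also checks,
    // inside the transaction, that the lock is not currently held.
    final class ElidedLocks {
        // Hypothetical primitive for this sketch only: runs the block as a
        // hardware transaction and reports whether it committed.
        static boolean htmTryExecute(Runnable block) {
            throw new UnsupportedOperationException("no HTM available");
        }

        static void elidedSynchronized(Object monitor, Runnable criticalSection) {
            final int MAX_ATTEMPTS = 3;                 // arbitrary retry budget
            for (int i = 0; i < MAX_ATTEMPTS; i++) {
                if (htmTryExecute(criticalSection)) {
                    return;                             // committed without the lock
                }
            }
            // Repeated aborts (conflicts, cache overflow, unsupported ops):
            // fall back to the ordinary lock so the section still completes.
            synchronized (monitor) {
                criticalSection.run();
            }
        }
    }

This mirrors the slide's two bullets: the common case runs without acquiring the lock at all, and the lock remains only as a correctness fallback.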