Transactional Memory, Second Edition

  • Book
  • © 2010
  • Latest edition

Overview

Part of the book series: Synthesis Lectures on Computer Architecture (SLCA)

About this book

The advent of multicore processors has renewed interest in the idea of incorporating transactions into the programming model used to write parallel programs. This approach, known as transactional memory, offers an alternative, and hopefully better, way to coordinate concurrent threads. The ACI (atomicity, consistency, isolation) properties of transactions provide a foundation to ensure that concurrent reads and writes of shared data do not produce inconsistent or incorrect results. At a higher level, a computation wrapped in a transaction executes atomically: either it completes successfully and commits its result in its entirety, or it aborts. In addition, isolation ensures that a transaction produces the same result as it would if no other transactions were executing concurrently. Although transactions are not a parallel programming panacea, they shift much of the burden of synchronizing and coordinating parallel computations from the programmer to a compiler, a language runtime system, or hardware. The challenge for system implementers is to build an efficient transactional memory infrastructure. This book presents an overview of the state of the art in the design and implementation of transactional memory systems, as of early spring 2010.

Table of Contents: Introduction / Basic Transactions / Building on Basic Transactions / Software Transactional Memory / Hardware-Supported Transactional Memory / Conclusions
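
To make the atomic commit/abort idea above concrete, the following is a minimal sketch in Haskell using GHC's Control.Concurrent.STM library, one software transactional memory implementation in the spirit of the systems the book surveys; it is illustrative only and is not taken from the book. The transfer function reads and writes two shared TVars inside a single transaction, and atomically guarantees that the whole update commits as a unit or not at all.

    -- Minimal STM sketch (GHC's Control.Concurrent.STM); illustrative, not from the book.
    import Control.Concurrent.STM

    -- Move 'amount' between two shared accounts inside one transaction.
    -- Either both writes commit together or neither does; the runtime detects
    -- conflicts with concurrent transactions and re-executes this one if needed.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      fromBal <- readTVar from
      toBal   <- readTVar to
      writeTVar from (fromBal - amount)
      writeTVar to   (toBal + amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 30)  -- the transaction boundary
      readTVarIO a >>= print        -- 70
      readTVarIO b >>= print        -- 30

No locks appear in the code: conflict detection, isolation, and re-execution are handled by the language runtime, which is exactly the shift of burden from the programmer to the system that the book examines.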

Authors and Affiliations

  • Microsoft Research, USA

    Tim Harris, James Larus

  • Intel Corporation, USA

    Ravi Rajwar

About the authors

Tim Harris is a Senior Researcher at Microsoft Research Cambridge, where he works on abstractions for using multi-core computers. He has worked on concurrent algorithms and transactional memory for over ten years, most recently focusing on the implementation of STM for multi-core computers and the design of programming language features based on it. Harris is currently working on the Barrelfish operating system and on architecture support for programming language runtime systems. He has a BA and PhD in computer science from the Cambridge University Computer Laboratory, where he was on the faculty from 2000 to 2004, led the department's research on concurrent data structures, and contributed to the Xen virtual machine monitor project. He joined Microsoft Research in 2004.

James Larus is a Research Area Manager for programming languages and tools at Microsoft Research. He manages the Advanced Compiler Technology, Human Interaction in Programming, Runtime Analysis and Design, and Software Reliability Research groups and co-leads the Singularity research project. He joined Microsoft Research as a Senior Researcher in 1998 to start and, for five years, lead the Software Productivity Tools (SPT) group, which was one of the most innovative and productive groups in the areas of program analysis and programming tools. Before joining Microsoft, Larus was an Assistant and then Associate Professor at the University of Wisconsin-Madison, where he co-led the Wisconsin Wind Tunnel research project with Professors Mark Hill and David Wood. This DARPA- and NSF-funded project investigated new approaches to simulating, building, and programming parallel shared-memory computers. Larus's research has also covered a number of other areas: new and efficient techniques for measuring and recording executing programs' behavior, tools for analyzing and manipulating compiled and linked programs, programming languages, tools for verifying program correctness, compiler analysis and optimization, and custom cache coherence protocols. Larus received a PhD from the University of California at Berkeley in 1989 and an AB from Harvard in 1980.
