DOI: 10.1145/1583991.1584005

Memory models: a case for rethinking parallel languages and hardware

Published: 11 August 2009

Abstract

The era of parallel computing for the masses is here, but writing correct parallel programs remains a challenge--popular parallel environments offer no analogs for the concepts of structured and safe sequential programming. The memory model forms the heart of the concurrency semantics of any (shared-memory) parallel language or hardware. Unfortunately, it has involved a tradeoff between programmability and performance, and has arguably been one of the most challenging and contentious areas in shared-memory specification. Recent broad community-scale efforts have finally led to a convergence in this debate, with popular languages such as Java and C++ and most hardware vendors publishing compatible memory model specifications. Although this convergence is a dramatic improvement, it has exposed fundamental shortcomings in current popular languages and systems that prevent achieving the vision of structured and safe parallel programming.
I will discuss the path to the above convergence, the hard lessons learned, and their implications. A cornerstone of this convergence has been the view that the memory model should be a contract between the programmer and the system--if the programmer writes well-structured (data-race-free) programs, the system will provide high programmability (sequential consistency) and performance. I will discuss why this view is the best we can do with current popular languages, and why it should be unacceptable moving forward. I will then discuss research directions that eliminate worrying about the memory model, but require rethinking popular parallel languages and hardware. In particular, I will argue for languages that eliminate data races by design and provide determinism by default, while retaining the advantages of modern object-oriented programming. I will also argue for hardware that takes advantage of such disciplined programming models to enable energy-efficient performance scalability. I will use the Deterministic Parallel Java language and DeNovo hardware projects at Illinois as examples of such directions.


Published In

SPAA '09: Proceedings of the twenty-first annual symposium on Parallelism in algorithms and architectures
August 2009
370 pages
ISBN: 9781605586069
DOI: 10.1145/1583991

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. memory consistency models
  2. memory models
  3. multicore architecture
  4. safe programming

Qualifiers

  • Keynote

Conference

SPAA '09

Acceptance Rates

Overall Acceptance Rate 447 of 1,461 submissions, 31%

Article Metrics

  • Total Citations: 0
  • Total Downloads: 296
  • Downloads (last 12 months): 0
  • Downloads (last 6 weeks): 0

Reflects downloads up to 09 Nov 2024
