
Towards Implicit Memory Management for Portable Parallel Programming in C++

Published: 03 July 2020

Abstract

We consider the challenge of programming modern high-performance parallel processors, including multi-core CPUs and many-core GPUs (Graphics Processing Units). Our approach is based on using the widespread programming language C++ in a portable way, i.e., the same program code runs on different target architectures. The contribution of this paper is that we extend our existing programming framework PACXX (Programming Accelerators in C++) with an additional compilation pass that simplifies program data management for the programmer and makes the programming process less error-prone. We describe our work in progress on implementing implicit data management by presenting the major design choices and illustrating the advantages of our approach with simple programming examples.


Published In

ASSE '20: Proceedings of the 2020 Asia Service Sciences and Software Engineering Conference
May 2020
163 pages
ISBN:9781450377102
DOI:10.1145/3399871

In-Cooperation

  • Nanyang Technological University

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. GPU programming
  2. High-performance computing
  3. Performance portability
  4. Unified parallel programming

Qualifiers

  • Research-article
  • Research
  • Refereed limited

