DOI: 10.1145/2442516.2442534
Research article
Ownership passing: efficient distributed memory programming on multi-core systems

Published: 23 February 2013

Abstract

The number of cores in multi- and many-core high-performance processors is steadily increasing. MPI, the de facto standard for programming high-performance computing systems, offers a distributed memory programming model. MPI's semantics force a copy from one process's send buffer to another process's receive buffer. This makes it difficult to achieve the same performance as shared memory programs, which are arguably harder to maintain and debug. We propose generalizing MPI's communication model to include ownership passing, which makes it possible to fully leverage the shared memory hardware of multi- and many-core CPUs to stream communicated data concurrently with the receiver's computations on it. The benefits and simplicity of message passing are retained by extending MPI with calls that send (pass) ownership of memory regions, rather than their contents, between processes. Ownership passing is achieved with a hybrid MPI implementation that runs MPI processes as threads and is mostly transparent to the user. We propose an API and a static analysis technique that transform legacy MPI codes automatically and transparently to the programmer, demonstrating that this scheme is easy to use in practice. Using the ownership passing technique, we observe communication speedups of up to 51% over a standard message passing implementation on state-of-the-art multicore systems. Our analysis and interface lay the groundwork for future development of MPI-aware optimizing compilers and multi-core-specific optimizations, which will be key to success on current and next-generation computing platforms.




Published In

PPoPP '13: Proceedings of the 18th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming
February 2013, 332 pages
ISBN: 9781450319225
DOI: 10.1145/2442516

Also published in:
ACM SIGPLAN Notices, Volume 48, Issue 8 (PPoPP '13)
August 2013, 309 pages
ISSN: 0362-1340, EISSN: 1558-1160
DOI: 10.1145/2517327
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. distributed memory
  2. message passing
  3. multi-core
  4. ownership passing
  5. shared memory

Qualifiers

  • Research-article

Conference

PPoPP '13

Acceptance Rates

Overall acceptance rate: 230 of 1,014 submissions (23%)

Article Metrics

  • Downloads (last 12 months): 15
  • Downloads (last 6 weeks): 3
Reflects downloads up to 15 Jan 2025

Cited By

  • (2021) Accelerating Messages by Avoiding Copies in an Asynchronous Task-based Programming Model. 2021 IEEE/ACM 6th International Workshop on Extreme Scale Programming Models and Middleware (ESPM2), pp. 10-19. DOI: 10.1109/ESPM254806.2021.00007
  • (2019) MPI Collectives for Multi-core Clusters. Workshop Proceedings of the 48th International Conference on Parallel Processing, pp. 1-10. DOI: 10.1145/3339186.3339199
  • (2019) RDMA Managed Buffers: A Case for Accelerating Communication Bound Processes via Fine-Grained Events for Zero-Copy Message Passing. 2019 18th International Symposium on Parallel and Distributed Computing (ISPDC), pp. 121-130. DOI: 10.1109/ISPDC.2019.00025
  • (2019) FALCON: Efficient Designs for Zero-Copy MPI Datatype Processing on Emerging Architectures. 2019 IEEE International Parallel and Distributed Processing Symposium (IPDPS), pp. 355-364. DOI: 10.1109/IPDPS.2019.00045
  • (2018) Framework for Scalable Intra-node Collective Operations Using Shared Memory. Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC '18), pp. 1-12. DOI: 10.1109/SC.2018.00032
  • (2016) GEMs: Shared-Memory Parallel Programming for Node.js. Proceedings of the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications, pp. 531-547. DOI: 10.1145/2983990.2984039
  • (2016) IMPACC. Proceedings of the 25th ACM International Symposium on High-Performance Parallel and Distributed Computing, pp. 189-201. DOI: 10.1145/2907294.2907302
  • (2016) Architectural Support for Efficient Message Passing on Shared Memory Multi-cores. Journal of Parallel and Distributed Computing, 95:92-106. DOI: 10.1016/j.jpdc.2016.02.005
