DOI: 10.1145/3416315.3416318

Why is MPI (perceived to be) so complex?: Part 1—Does strong progress simplify MPI?

Published: 07 October 2020

Abstract

Strong progress is optional in MPI. MPI allows implementations where progress (for example, updating the message-transport state machines or interacting with network devices) is made only during certain MPI procedure calls. Generally speaking, strong progress implies the ability to make progress (to transport data through the network from senders to receivers and to exchange protocol messages) without explicit calls from user processes to MPI procedures. For instance, data given to a send procedure that matches a pre-posted receive on the receiving process is moved from source to destination in due course, regardless of how often (including zero times) the sender or receiver processes call MPI in the meantime. Further, nonblocking operations and persistent collective operations work ‘in the background’ of user processes once all processes in the communicator’s group have performed the starting step for the operation. Overall, strong progress is meant to enhance the potential for overlap of communication and computation and to improve the predictability of procedure execution times by eliminating progress effort from user threads. This paper posits that strong progress is desirable as an MPI implementation property and examines whether it delivers on that promise. It explores such possibilities and sets forth principles that underpin MPI and its interactions with normal and fault modes of operation. The key contribution of this paper is the conclusion that, whether measured by absolute performance, by performance portability, or by interface simplicity, strong progress in MPI is no worse than weak progress and, in most scenarios, has more potential to fulfil these desirable attributes.
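The distinction the abstract draws can be illustrated with a toy model. This is a sketch in plain Python, not real MPI: the `Endpoint` class, `NUM_STEPS`, and the background thread are illustrative assumptions standing in for an implementation's progress engine. Under weak progress, the transfer's state machine advances only inside the user's own `test()` calls; under strong progress, a background thread advances it while the user computes.

```python
import threading
import time

class Endpoint:
    """Toy message endpoint: a transfer completes only after its state
    machine has been advanced NUM_STEPS times (a stand-in for the
    protocol work a real MPI library must perform)."""
    NUM_STEPS = 5

    def __init__(self, strong_progress=False):
        self._steps_done = 0
        self._lock = threading.Lock()
        self._strong = strong_progress

    def isend(self):
        # Start a nonblocking send. With strong progress, a background
        # thread drives the state machine; with weak progress, nothing
        # happens until the user calls test().
        if self._strong:
            threading.Thread(target=self._progress_loop, daemon=True).start()

    def _advance(self):
        with self._lock:
            if self._steps_done < self.NUM_STEPS:
                self._steps_done += 1

    def _progress_loop(self):
        # Strong progress: advance the state machine 'in the background'.
        while not self.done():
            self._advance()
            time.sleep(0.001)

    def test(self):
        # Weak progress: the user's own call drives the state machine.
        if not self._strong:
            self._advance()
        return self.done()

    def done(self):
        with self._lock:
            return self._steps_done >= self.NUM_STEPS

weak = Endpoint(strong_progress=False)
weak.isend()
time.sleep(0.05)           # user 'computes'; no library calls in the meantime
print(weak.done())         # False: no progress was made without user calls

strong = Endpoint(strong_progress=True)
strong.isend()
time.sleep(0.05)           # same computation window
print(strong.done())       # True: the transfer completed in the background
```

The weak endpoint completes only once the user polls it repeatedly, which is exactly the "progress effort from user threads" that the abstract argues strong progress eliminates.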


Cited By

View all
  • (2023) Optimizing Irregular Communication with Neighborhood Collectives and Locality-Aware Parallelism. Proceedings of the SC '23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis, 427–437. https://doi.org/10.1145/3624062.3624111. Online publication date: 12-Nov-2023.
  • (2021) Overlapping Communication and Computation with ExaMPI's Strong Progress and Modern C++ Design. 2021 Workshop on Exascale MPI (ExaMPI), 18–26. https://doi.org/10.1109/ExaMPI54564.2021.00008. Online publication date: Nov-2021.

Published In

EuroMPI/USA '20: Proceedings of the 27th European MPI Users' Group Meeting
September 2020
88 pages
ISBN:9781450388801
DOI:10.1145/3416315

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. MPI
  2. fault tolerance
  3. performance portability
  4. specification complexity
  5. strong progress

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • European Union's Horizon 2020 Research and Innovation programme
  • National Science Foundation

Conference

EuroMPI/USA '20
EuroMPI/USA '20: 27th European MPI Users' Group Meeting
September 21 - 24, 2020
Austin, TX, USA

Acceptance Rates

Overall Acceptance Rate 66 of 139 submissions, 47%


