DOI: 10.1145/2966884.2966915
Research article · Public Access

MPI Sessions: Leveraging Runtime Infrastructure to Increase Scalability of Applications at Exascale

Published: 25 September 2016

Abstract

MPI includes all processes in MPI_COMM_WORLD; this is untenable for reasons of scale, resiliency, and overhead. This paper offers a new approach that extends MPI with a concept called Sessions, which makes two key contributions: a tighter integration with the underlying runtime system, and a scalable route to communication groups. This is a fundamental change in how we organise and address MPI processes, one that removes well-known scalability barriers by no longer requiring the global communicator MPI_COMM_WORLD.
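To illustrate the idea, below is a minimal sketch of the Sessions-style workflow, written in C against the session interface as it was later standardized in MPI 4.0 (MPI_Session_init, MPI_Group_from_session_pset, MPI_Comm_create_from_group); the interface proposed in this paper may differ in detail, and the process set name "mpi://WORLD" and the string tag are illustrative. The key point is that the communicator is derived from a runtime-provided process set, so MPI_COMM_WORLD is never referenced.

/* Minimal sketch: build a communicator from a session and a named process
 * set instead of MPI_COMM_WORLD.  Assumes an MPI 4.0 library; compile with
 * e.g. "mpicc sessions_sketch.c -o sessions_sketch". */
#include <mpi.h>
#include <stdio.h>

int main(void)
{
    MPI_Session session;
    MPI_Group   group;
    MPI_Comm    comm;
    int         rank, size;

    /* Initialise an isolated session rather than all of MPI. */
    MPI_Session_init(MPI_INFO_NULL, MPI_ERRORS_ARE_FATAL, &session);

    /* Resolve a runtime-provided process set by name; "mpi://WORLD" is the
     * predefined set of all processes known to this session. */
    MPI_Group_from_session_pset(session, "mpi://WORLD", &group);

    /* Derive a communicator directly from the group; the string tag
     * distinguishes concurrent creations that use the same group. */
    MPI_Comm_create_from_group(group, "org.example.sessions-sketch",
                               MPI_INFO_NULL, MPI_ERRORS_ARE_FATAL, &comm);

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);
    printf("rank %d of %d, created without MPI_COMM_WORLD\n", rank, size);

    MPI_Group_free(&group);
    MPI_Comm_free(&comm);
    MPI_Session_finalize(&session);
    return 0;
}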


Published In

EuroMPI '16: Proceedings of the 23rd European MPI Users' Group Meeting
September 2016
225 pages
ISBN: 9781450342346
DOI: 10.1145/2966884
Publication rights licensed to ACM. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of the United States government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.


Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 25 September 2016


Author Tags

  1. Message Passing Interface
  2. Scalable Programming Model

Qualifiers

  • Research-article
  • Research
  • Refereed limited


Conference

EuroMPI 2016
EuroMPI 2016: The 23rd European MPI Users' Group Meeting
September 25 - 28, 2016
Edinburgh, United Kingdom

Acceptance Rates

Overall acceptance rate: 66 of 139 submissions, 47%


Article Metrics

  • Downloads (last 12 months): 92
  • Downloads (last 6 weeks): 7
Reflects downloads up to 01 Sep 2024.


Cited By

  • (2024) Taking the MPI standard and the open MPI library to exascale. The International Journal of High Performance Computing Applications. DOI: 10.1177/10943420241265936. Online publication date: 23 Jul 2024.
  • (2024) Faster and Scalable MPI Applications Launching. IEEE Transactions on Parallel and Distributed Systems, 35(2):264-279. DOI: 10.1109/TPDS.2022.3218077. Online publication date: Feb 2024.
  • (2024) OpenCUBE: Building an Open Source Cloud Blueprint with EPI Systems. Euro-Par 2023: Parallel Processing Workshops, pages 260-264. DOI: 10.1007/978-3-031-48803-0_29. Online publication date: 14 Apr 2024.
  • (2023) Fault Awareness in the MPI 4.0 Session Model. Proceedings of the 20th ACM International Conference on Computing Frontiers, pages 189-192. DOI: 10.1145/3587135.3592189. Online publication date: 9 May 2023.
  • (2023) Fault-Aware Group-Collective Communication Creation and Repair in MPI. Euro-Par 2023: Parallel Processing, pages 47-61. DOI: 10.1007/978-3-031-39698-4_4. Online publication date: 28 Aug 2023.
  • (2022) Enabling Global MPI Process Addressing in MPI Applications. Proceedings of the 29th European MPI Users' Group Meeting, pages 27-36. DOI: 10.1145/3555819.3555829. Online publication date: 14 Sep 2022.
  • (2022) Implementation and evaluation of MPI 4.0 partitioned communication libraries. Parallel Computing, 108(C). DOI: 10.1016/j.parco.2021.102827. Online publication date: 23 Apr 2022.
  • (2022) An Emulation Layer for Dynamic Resources with MPI Sessions. High Performance Computing. ISC High Performance 2022 International Workshops, pages 147-161. DOI: 10.1007/978-3-031-23220-6_10. Online publication date: 29 May 2022.
  • (2021) MiniMod: A Modular Miniapplication Benchmarking Framework for HPC. 2021 IEEE International Conference on Cluster Computing (CLUSTER), pages 12-22. DOI: 10.1109/Cluster48925.2021.00028. Online publication date: Sep 2021.
  • (2021) The MPI Tool Interfaces: Past, Present, and Future—Capabilities and Prospects. Tools for High Performance Computing 2018/2019, pages 55-83. DOI: 10.1007/978-3-030-66057-4_3. Online publication date: 23 May 2021.
