DOI: 10.1145/2966884.2966911

Space Performance Tradeoffs in Compressing MPI Group Data Structures

Published: 25 September 2016
Abstract

    MPI is a popular programming paradigm on parallel machines today. MPI libraries sometimes use O(N) data structures to implement MPI functionality. The IBM Blue Gene/Q machine has 16 GB of memory per node. If each node runs 32 MPI processes, only 512 MB is available per process, requiring the MPI library to be space efficient. This constraint will become more severe on a future Exascale machine with tens of millions of cores and MPI endpoints. We explore techniques to compress the dense O(N) mapping data structures that map a logical process ID to its global rank. Our techniques minimize the mapping state of topological communicators by replacing table lookups with a mapping function. We also explore caching schemes that store recent translations to reduce the overhead of these mapping functions, and present performance results for multiple MPI micro-benchmarks as well as the 3D FFT and Algebraic Multigrid (AMG) application benchmarks.
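    To illustrate the general idea, here is a minimal C sketch (not the paper's implementation; names such as strided_group_t, map_local_to_global, translate, and CACHE_SLOTS are illustrative). It assumes the simple case of a sub-group whose global ranks form an arithmetic sequence, so the O(N) rank-translation table can be replaced by an O(1) mapping function, and it adds a small direct-mapped cache of recent translations of the kind the abstract alludes to.

```c
#include <stdio.h>

/* Hypothetical compressed group descriptor for the special case where the
 * group's global ranks are {offset, offset+stride, offset+2*stride, ...}. */
typedef struct {
    int offset;   /* global rank of local rank 0          */
    int stride;   /* distance between consecutive members */
    int size;     /* number of ranks in the group         */
} strided_group_t;

#define CACHE_SLOTS 8

/* Small direct-mapped cache of recent local->global translations. */
typedef struct {
    int local[CACHE_SLOTS];
    int global[CACHE_SLOTS];
} rank_cache_t;

/* O(1) mapping function: replaces an O(N) lookup table. */
static int map_local_to_global(const strided_group_t *g, int local)
{
    return g->offset + local * g->stride;
}

/* Translate through the cache; caching pays off when the mapping
 * function is more expensive, e.g. for topological communicators. */
static int translate(const strided_group_t *g, rank_cache_t *c, int local)
{
    int slot = local % CACHE_SLOTS;
    if (c->local[slot] == local)
        return c->global[slot];          /* cache hit */
    int global = map_local_to_global(g, local);
    c->local[slot]  = local;             /* remember the new translation */
    c->global[slot] = global;
    return global;
}

int main(void)
{
    /* Example: local rank i maps to global rank 4 + 32*i. */
    strided_group_t g = { 4, 32, 1024 };
    rank_cache_t cache;
    for (int i = 0; i < CACHE_SLOTS; i++) cache.local[i] = -1;

    printf("local 0 -> global %d\n", translate(&g, &cache, 0));  /* 4   */
    printf("local 7 -> global %d\n", translate(&g, &cache, 7));  /* 228 */
    return 0;
}
```

    For irregular groups that cannot be expressed by such a closed form, a library would still need a lookup table or a more elaborate compressed representation, which is the space/performance tradeoff the paper studies.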


    Published In

    EuroMPI '16: Proceedings of the 23rd European MPI Users' Group Meeting
    September 2016
    225 pages
    ISBN: 9781450342346
    DOI: 10.1145/2966884

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. MPI
    2. MPI 3.0
    3. MPI collective communication
    4. MPI communicators and MPI_Comm_split

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Funding Sources

    • US Government Department of Energy

    Conference

    EuroMPI 2016: The 23rd European MPI Users' Group Meeting
    September 25-28, 2016
    Edinburgh, United Kingdom

    Acceptance Rates

    Overall Acceptance Rate 66 of 139 submissions, 47%
