
Exploring Hierarchical MPI Reduction Collective Algorithms Targeted to Multicore Node Clusters

  • Conference paper
  • First Online:
Numerical Computations: Theory and Algorithms (NUMTA 2023)

Abstract

High-performance computing applications rely heavily on message-passing mechanisms for data sharing in cluster environments, and the MPI library is the de facto standard communication library for parallel applications. Significant effort has been directed toward optimizing data distribution and buffering according to message size, both to improve communication performance and to avoid issues such as running out of memory on the target node.

Furthermore, the emergence of multicore clusters with larger node sizes has stimulated the investigation of hierarchical collective algorithms that consider the placement of processes within the cluster and the memory hierarchy.
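As a minimal sketch of the two-level idea (not the implementation evaluated in the paper), the routine below first reduces within each node over a shared-memory communicator obtained with MPI_Comm_split_type and then reduces across nodes among the node leaders. The choice of MPI_SUM on MPI_DOUBLE, the result landing at global rank 0, and the helper name hierarchical_reduce are illustrative assumptions.

/*
 * Sketch of a two-level hierarchical reduction: ranks reduce inside
 * each node over a shared-memory communicator, then node leaders
 * reduce across nodes. Assumes MPI_SUM on MPI_DOUBLE and that the
 * final result is only needed at global rank 0.
 */
#include <mpi.h>
#include <stdlib.h>

int hierarchical_reduce(const double *sendbuf, double *recvbuf, int count,
                        MPI_Comm comm)
{
    MPI_Comm node_comm, leader_comm;
    int node_rank, world_rank;

    MPI_Comm_rank(comm, &world_rank);

    /* Level 1: communicator of the ranks sharing this node's memory. */
    MPI_Comm_split_type(comm, MPI_COMM_TYPE_SHARED, world_rank,
                        MPI_INFO_NULL, &node_comm);
    MPI_Comm_rank(node_comm, &node_rank);

    /* Intra-node reduction to the node leader (local rank 0). */
    double *partial = malloc((size_t)count * sizeof(double));
    MPI_Reduce(sendbuf, partial, count, MPI_DOUBLE, MPI_SUM, 0, node_comm);

    /* Level 2: communicator containing one leader per node. */
    MPI_Comm_split(comm, node_rank == 0 ? 0 : MPI_UNDEFINED, world_rank,
                   &leader_comm);
    if (leader_comm != MPI_COMM_NULL) {
        /* Inter-node reduction among leaders; result lands on rank 0. */
        MPI_Reduce(partial, recvbuf, count, MPI_DOUBLE, MPI_SUM, 0,
                   leader_comm);
        MPI_Comm_free(&leader_comm);
    }

    free(partial);
    MPI_Comm_free(&node_comm);
    return MPI_SUCCESS;
}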

This paper studies and compares the performance of reduction collective algorithms from the literature that tackle this issue, specifically several implementations that are not part of the current MPI standard. We implement the algorithms on top of the Intel MPI and OpenMPI libraries using the MPI profiling interface.
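To illustrate how the MPI profiling interface lets an alternative algorithm be slotted in without modifying the application or the MPI library, the hedged sketch below intercepts MPI_Reduce and dispatches either to a custom routine or to the library's native implementation through PMPI_Reduce. The routine my_hierarchical_reduce and the 64 KiB size threshold are hypothetical and not values taken from the paper.

#include <mpi.h>

/* Placeholder for a custom reduction algorithm (e.g. a hierarchical
 * scheme); here it simply forwards to the library so the sketch links. */
static int my_hierarchical_reduce(const void *sendbuf, void *recvbuf,
                                  int count, MPI_Datatype datatype,
                                  MPI_Op op, int root, MPI_Comm comm)
{
    return PMPI_Reduce(sendbuf, recvbuf, count, datatype, op, root, comm);
}

/* Intercepted entry point: applications calling MPI_Reduce reach this
 * wrapper instead of the library's own symbol. */
int MPI_Reduce(const void *sendbuf, void *recvbuf, int count,
               MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)
{
    int type_size;
    PMPI_Type_size(datatype, &type_size);

    /* Hypothetical size-based dispatch: the 64 KiB threshold is
     * illustrative, not a tuned value. */
    if ((long)count * type_size >= 64 * 1024)
        return my_hierarchical_reduce(sendbuf, recvbuf, count,
                                      datatype, op, root, comm);

    /* Otherwise fall back to the MPI library's native algorithm. */
    return PMPI_Reduce(sendbuf, recvbuf, count, datatype, op, root, comm);
}

A wrapper of this kind is typically compiled into a separate library and linked (or preloaded) ahead of the MPI library so that it shadows the standard entry point, which is what allows the same wrapper to be used on top of both Intel MPI and OpenMPI.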

Experimental results with the Intel MPI Benchmarks on a multicore cluster based on Intel Platinum processors with an OmniPath interconnection network show considerable room for improvement in the performance of collectives, depending on the message size.



Acknowledgements

The authors acknowledge the support of the Spanish Ministry of Education (PID2019-107255GB-C22) and the Generalitat de Catalunya (2021-SGR-01007).

Author information


Corresponding author

Correspondence to Gladys Utrera.



Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Utrera, G., Gil, M., Martorell, X., Spataro, W., Giordano, A. (2025). Exploring Hierarchical MPI Reduction Collective Algorithms Targeted to Multicore Node Clusters. In: Sergeyev, Y.D., Kvasov, D.E., Astorino, A. (eds) Numerical Computations: Theory and Algorithms. NUMTA 2023. Lecture Notes in Computer Science, vol 14478. Springer, Cham. https://doi.org/10.1007/978-3-031-81247-7_35


  • DOI: https://doi.org/10.1007/978-3-031-81247-7_35

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-81246-0

  • Online ISBN: 978-3-031-81247-7

  • eBook Packages: Computer Science, Computer Science (R0)
