
Downsampling Algorithms for Large Sparse Matrices

Published in: International Journal of Parallel Programming

Abstract

The mapping of sparse matrices to the processors of a parallel system may have a significant impact on the design of sparse-matrix algorithms and, in effect, on their efficiency. We present and empirically compare two downsampling algorithms for sparse matrices. The first algorithm is independent of the particular matrix-to-processors mapping, while the second is adapted for cases where matrices are partitioned among processors into contiguous chunks of rows/columns. We show that the price for the versatility of the first algorithm is the collective communication performed by all processors. The second algorithm uses a more efficient communication strategy, which stems from its knowledge of the mapping of matrices to processors, and it outperforms the first algorithm in terms of running time.
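The abstract's notion of downsampling can be illustrated by its local, sequential step: each nonzero (i, j) of an m × n sparse matrix is mapped onto a coarse s × s grid and nonzeros are counted per grid cell. The sketch below is a hypothetical Python illustration under that assumption; the function name and counting scheme are not taken from the paper, and the paper's parallel variants would additionally merge the per-processor grids (e.g., via a collective reduction such as MPI_Reduce).

```python
def downsample(nnz_coords, m, n, s):
    """Count the nonzeros of an m x n sparse matrix (given as a list of
    (row, col) coordinates) that fall into each cell of an s x s grid.

    This is only the local step; in a distributed setting each processor
    would compute such a grid for its own nonzeros and the grids would
    then be summed across processors.
    """
    grid = [[0] * s for _ in range(s)]
    for i, j in nnz_coords:
        bi = i * s // m  # grid row covering matrix row i
        bj = j * s // n  # grid column covering matrix column j
        grid[bi][bj] += 1
    return grid
```

For example, downsampling a 4 × 4 matrix with nonzeros at (0, 0), (0, 3), and (3, 3) onto a 2 × 2 grid yields one nonzero in the top-left cell, one in the top-right, and one in the bottom-right.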


[Figs. 1–4: available in the full article]



Acknowledgments

This work was supported by the Czech Science Foundation under Grant No. P202/12/2011. This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (award number OCI 07-25070) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana–Champaign and its National Center for Supercomputing Applications. This research used resources of the National Energy Research Scientific Computing Center (NERSC), which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. This work was also supported by the IT4Innovations Centre of Excellence project (CZ.1.05/1.1.00/02.0070), funded by the European Regional Development Fund and the national budget of the Czech Republic via the Research and Development for Innovations Operational Programme, as well as by the Czech Ministry of Education, Youth and Sports via the project Large Research, Development and Innovations Infrastructures (LM2011033). The authors acknowledge helpful advice from J. Šístek.

Corresponding author

Correspondence to Daniel Langr.


About this article


Cite this article

Langr, D., Tvrdík, P., Šimeček, I. et al. Downsampling Algorithms for Large Sparse Matrices. Int J Parallel Prog 43, 679–702 (2015). https://doi.org/10.1007/s10766-014-0315-8


Keywords