
Partitioned Global Address Space Languages

Published: 26 May 2015

Abstract

The Partitioned Global Address Space (PGAS) model is a parallel programming model that aims to improve programmer productivity while still targeting high performance. The main premise of PGAS is that a globally shared address space improves productivity, but that a distinction between local and remote data accesses is required to allow performance optimizations and to support scalability on large-scale parallel architectures. To this end, PGAS preserves the global address space while embracing awareness of nonuniform communication costs.
Today, about a dozen languages exist that adhere to the PGAS model. This survey proposes a definition and a taxonomy along four axes: how parallelism is introduced, how the address space is partitioned, how data is distributed among the partitions, and finally, how data is accessed across partitions. Our taxonomy reveals that today’s PGAS languages focus on distributing regular data and distinguish only between local and remote data access cost, whereas the distribution of irregular data and the adoption of richer data access cost models remain open challenges.
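
To make the premise concrete, the following UPC-style sketch (illustrative only, not an example taken from the survey; the array name A and its size are assumptions) declares one globally shared array whose elements are distributed cyclically over the executing threads. Writes that have affinity to the local partition are cheap local accesses, while the final read reaches into a neighbouring thread's partition and is a one-sided remote access.

    /* Minimal, hypothetical UPC sketch of the PGAS premise. */
    #include <upc.h>      /* UPC constructs: shared, THREADS, MYTHREAD, upc_forall, upc_barrier */
    #include <stdio.h>

    #define N 1024

    /* One globally shared array. With UPC's default cyclic layout,
       element A[i] has affinity to thread i % THREADS, so the global
       address space is partitioned among the threads. THREADS appears
       in the dimension so the declaration is valid even when the
       thread count is fixed only at run time. */
    shared int A[N * THREADS];

    int main(void) {
        int i;

        /* The fourth clause of upc_forall is an affinity expression:
           iteration i runs on the thread that owns &A[i], so every
           write below is a local access. */
        upc_forall (i = 0; i < N * THREADS; i++; &A[i])
            A[i] = i;

        upc_barrier;  /* make all partitions' writes globally visible */

        /* The same array syntax also reaches remote partitions: this
           read touches an element owned by the neighbouring thread and
           is therefore a one-sided remote access, visible to both the
           programmer and the compiler because the distribution is part
           of the type. */
        int remote = A[(MYTHREAD + 1) % THREADS];
        printf("thread %d of %d read %d from a remote partition\n",
               MYTHREAD, THREADS, remote);
        return 0;
    }

Languages such as Chapel and X10 express the same local/remote distinction through domain maps and places, respectively; the common thread across PGAS languages is that the cost difference between local and remote access is visible in the source program.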


Published In

ACM Computing Surveys, Volume 47, Issue 4
July 2015, 573 pages
ISSN: 0360-0300
EISSN: 1557-7341
DOI: 10.1145/2775083
Editor: Sartaj Sahni

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 26 May 2015
Accepted: 01 January 2015
Revised: 01 November 2014
Received: 01 June 2013
Published in CSUR Volume 47, Issue 4


Author Tags

  1. HPC
  2. PGAS
  3. Parallel programming
  4. data access
  5. data distribution
  6. message passing
  7. one-sided communication
  8. survey

Qualifiers

  • Survey
  • Research
  • Refereed


Cited By

  • (2024) Adaptive Prefetching for Fine-grain Communication in PGAS Programs. 2024 IEEE International Parallel and Distributed Processing Symposium (IPDPS), 740-751. DOI: 10.1109/IPDPS57955.2024.00071. Online publication date: 27-May-2024
  • (2024) On the Performance of Malleable APGAS Programs and Batch Job Schedulers. SN Computer Science 5:4. DOI: 10.1007/s42979-024-02641-7. Online publication date: 27-Mar-2024
  • (2024) Bridging Between Active Objects: Multitier Programming for Distributed, Concurrent Systems. Active Object Languages: Current Research Trends, 92-122. DOI: 10.1007/978-3-031-51060-1_4. Online publication date: 29-Jan-2024
  • (2023) A Component Model for Multilevel Parallel Programming. Proceedings of the XXVII Brazilian Symposium on Programming Languages, 25-32. DOI: 10.1145/3624309.3624318. Online publication date: 25-Sep-2023
  • (2023) Optimizing Communication in 2D Grid-Based MPI Applications at Exascale. Proceedings of the 30th European MPI Users' Group Meeting, 1-11. DOI: 10.1145/3615318.3615327. Online publication date: 11-Sep-2023
  • (2023) Macroprogramming: Concepts, State of the Art, and Opportunities of Macroscopic Behaviour Modelling. ACM Computing Surveys 55:13s, 1-37. DOI: 10.1145/3579353. Online publication date: 13-Jul-2023
  • (2023) Revisiting Swapping in User-Space With Lightweight Threading. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 42:11, 4205-4218. DOI: 10.1109/TCAD.2023.3274953. Online publication date: Nov-2023
  • (2023) Toward Reproducible Benchmarking of PGAS and MPI Communication Schemes. 2023 IEEE 29th International Conference on Parallel and Distributed Systems (ICPADS), 1959-1967. DOI: 10.1109/ICPADS60453.2023.00268. Online publication date: 17-Dec-2023
  • (2023) Development of a knowledge-sharing parallel computing approach for calibrating distributed watershed hydrologic models. Environmental Modelling & Software 164, 105708. DOI: 10.1016/j.envsoft.2023.105708. Online publication date: Jun-2023
  • (2023) Malleable APGAS Programs and Their Support in Batch Job Schedulers. Euro-Par 2023: Parallel Processing Workshops, 89-101. DOI: 10.1007/978-3-031-48803-0_8. Online publication date: 28-Aug-2023
  • Show More Cited By
