With the coming of the parallel computing era, computer scientists have turned their attention to designing programming models suited to high-performance parallel computing and supercomputing systems. Programming parallel systems is complicated by the fact that multiple processing units are simultaneously computing and moving data. This book offers an overview of some of the most prominent parallel programming models used in high-performance computing and supercomputing systems today. The chapters describe the programming models in a unique tutorial style rather than using the formal approach taken in the research literature. The aim is to cover a wide range of parallel programming models, enabling the reader to understand what each has to offer. The book begins with a description of the Message Passing Interface (MPI), the most common parallel programming model for distributed-memory computing. It goes on to cover one-sided communication models, ranging from low-level runtime libraries (GASNet, OpenSHMEM) to high-level programming models (UPC, GA, Chapel); task-oriented programming models (Charm++, ADLB, Scioto, Swift, CnC) that allow users to describe their computation and data units as tasks so that the runtime system can manage computation and data movement as necessary; and parallel programming models intended for on-node parallelism in the context of multicore architectures or attached accelerators (OpenMP, Cilk Plus, TBB, CUDA, OpenCL). The book will be a valuable resource for graduate students, researchers, and any scientist who works with data sets and large computations.

Contributors

Timothy Armstrong, Michael G. Burke, Ralph Butler, Bradford L. Chamberlain, Sunita Chandrasekaran, Barbara Chapman, Jeff Daily, James Dinan, Deepak Eachempati, Ian T. Foster, William D. Gropp, Paul Hargrove, Wen-mei Hwu, Nikhil Jain, Laxmikant Kale, David Kirk, Kath Knobe, Sriram Krishnamoorthy, Jeffery A. Kuehn, Alexey Kukanov, Charles E. Leiserson, Jonathan Lifflander, Ewing Lusk, Tim Mattson, Bruce Palmer, Steven C. Pieper, Stephen W. Poole, Arch D. Robison, Frank Schlimbach, Rajeev Thakur, Abhinav Vishnu, Justin M. Wozniak, Michael Wilde, Kathy Yelick, Yili Zheng
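To make the message-passing style concrete, the following is a minimal sketch (not taken from the book) of a two-process exchange in MPI. It assumes a standard MPI implementation such as MPICH or Open MPI is installed; rank 0 sends an integer to rank 1 using the two-sided MPI_Send/MPI_Recv pair that the one-sided models covered later in the book are designed to relax.

```c
/* Minimal two-sided message-passing sketch (illustrative, not from the book). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    if (size < 2) {
        if (rank == 0)
            fprintf(stderr, "Run with at least 2 processes.\n");
        MPI_Finalize();
        return 1;
    }

    if (rank == 0) {
        int payload = 42;  /* data to move across the distributed memory */
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int payload;
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", payload);
    }

    MPI_Finalize();
    return 0;
}
```

Built and launched in the usual way, e.g. `mpicc send.c -o send && mpiexec -n 2 ./send`, each process runs the same program and the explicit send/receive pair moves the data between address spaces.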
Cited By
- De Sensi D, Di Girolamo S, McMahon K, Roweth D and Hoefler T. An in-depth analysis of the Slingshot interconnect. Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, (1-14).
- García J and Gonzalez-Escribano A (2019). Guest Editorial, International Journal of Parallel Programming, 47:2, (161-163), Online publication date: 1-Apr-2019.
- Bremer M, Kazhyken K, Kaiser H, Michoski C and Dawson C (2019). Performance Comparison of HPX Versus Traditional Parallelization Strategies for the Discontinuous Galerkin Method, Journal of Scientific Computing, 80:2, (878-902), Online publication date: 1-Aug-2019.
- Addazi L, Ciccozzi F and Lisper B. Executable modelling for highly parallel accelerators. Proceedings of the 22nd International Conference on Model Driven Engineering Languages and Systems, (318-321).
Index Terms
- Programming Models for Parallel Computing
Recommendations
Applied Parallel Computing
A review of Sourcebook of Parallel Computing, edited by Jack Dongarra, Ian Foster, Geoffrey Fox, William Gropp, Ken Kennedy, Linda Torczon, and Andy White.
Distributed parallel computing using navigational programming
Message Passing (MP) and Distributed Shared Memory (DSM) are the two most common approaches to distributed parallel computing. MP is difficult to use, whereas DSM is not scalable. Performance scalability and ease of programming can be achieved at the ...
Data-Parallel Programming on MIMD Computers
The implementation of two compilers for the data-parallel programming language Dataparallel C is described. One compiler generates code for Intel and nCUBE hypercube multicomputers; the other generates code for Sequent multiprocessors. A suite of ...