DOI: 10.1145/2627373.2627379

GPGPU Composition with OCaml

Published: 09 June 2014

Abstract

GPGPU programming promises high performance, but developers must overcome several challenges to achieve it. The main ones are: writing and using massively parallel kernels on the GPU, managing memory transfers between the CPU and the GPU, and composing kernels while preserving the performance of individual components and optimizing global performance. In this article, we study composition by distinguishing where it takes place: kernel composition on the GPU, kernel generation by the CPU, and overall composition. To do so, we use the SPOC library, developed in OCaml. SPOC offers abstractions over the Cuda and OpenCL frameworks. It provides a specific language, called Sarek, to express kernels, and several parallel skeletons to compose them. We show that raising the level of abstraction used to handle kernels makes programs easier to write, and that it enables optimizations (via kernel generation and transfer scheduling). We thus gain on both sides: expressiveness and efficiency.
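The composition idea at the heart of the abstract (fusing kernels so that composed code makes a single pass over the data instead of several) can be sketched with a small CPU-side OCaml analogue. The names `kernel`, `compose`, and `map_skeleton` below are hypothetical illustrations of the skeleton-composition concept, not SPOC's or Sarek's actual API; in SPOC, kernels are written in the Sarek DSL and composed through the library's own skeletons.

```ocaml
(* Illustrative CPU-side analogue of kernel composition. Each "kernel"
   is a per-element function; [compose] fuses two kernels into one,
   so a mapped traversal runs once instead of twice. On a GPU, the
   analogous fusion avoids a round trip through global memory between
   two kernel launches. *)

type 'a kernel = 'a -> 'a

(* Sequential composition of two kernels into a single fused kernel. *)
let compose (k1 : 'a kernel) (k2 : 'a kernel) : 'a kernel =
  fun x -> k2 (k1 x)

(* A "map" skeleton: apply a kernel to every element of a vector. *)
let map_skeleton (k : 'a kernel) (v : 'a array) : 'a array =
  Array.map k v

let () =
  let scale = fun x -> 2.0 *. x in   (* first kernel:  x -> 2x     *)
  let shift = fun x -> x +. 1.0 in   (* second kernel: x -> x + 1  *)
  (* One fused traversal computes 2x + 1 per element. *)
  let fused = compose scale shift in
  let r = map_skeleton fused [| 1.0; 2.0; 3.0 |] in
  Array.iter (fun x -> Printf.printf "%.1f " x) r
```

The design point mirrored here is the one the abstract makes: composing at a higher level of abstraction keeps each component's code unchanged while letting the runtime (or, in SPOC's case, the kernel generator) produce a single optimized kernel for the whole pipeline.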


Cited By

  • (2019) Compiler Optimization of Accelerator Data Transfers. International Journal of Parallel Programming 47(1):39-58. DOI: 10.1007/s10766-017-0549-3. Online publication date: 1-Feb-2019
  • (2017) High Level Data Structures for GPGPU Programming in a Statically Typed Language. International Journal of Parallel Programming 45(2):242-261. DOI: 10.1007/s10766-016-0424-7. Online publication date: 1-Apr-2017
  • (2015) High-level accelerated array programming in the web browser. In Proceedings of the 2nd ACM SIGPLAN International Workshop on Libraries, Languages, and Compilers for Array Programming, pp. 31-36. DOI: 10.1145/2774959.2774964. Online publication date: 13-Jun-2015

Published In

ARRAY'14: Proceedings of ACM SIGPLAN International Workshop on Libraries, Languages, and Compilers for Array Programming
June 2014
112 pages
ISBN:9781450329378
DOI:10.1145/2627373
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. DSL
  2. GPGPU
  3. OCaml
  4. composition
  5. parallel skeletons

Qualifiers

  • Tutorial
  • Research
  • Refereed limited

Conference

PLDI '14

Acceptance Rates

ARRAY'14 Paper Acceptance Rate 17 of 25 submissions, 68%;
Overall Acceptance Rate 17 of 25 submissions, 68%

Article Metrics

  • Downloads (Last 12 months)4
  • Downloads (Last 6 weeks)0
Reflects downloads up to 23 Feb 2025

