
Out-of-core implementation for accelerator kernels on heterogeneous clouds

Published: 01 February 2018

Abstract

Cloud environments today increasingly feature hybrid nodes containing multicore CPUs and a diverse mix of accelerators, such as Graphics Processing Units (GPUs), Intel Xeon Phi co-processors, and Field-Programmable Gate Arrays (FPGAs), to ease the migration of HPC workloads to them. While virtualization of accelerators in clouds is a leading research challenge, in this paper we address the programming challenges that arise when executing large instances of data-parallel applications on these accelerators. In a typical hybrid cloud node, the tight integration of accelerators with multicore CPUs via PCI-E links brings inherent limitations, namely the limited main memory of the accelerators and the limited bandwidth of the PCI-E links. These limitations pose formidable programming challenges to executing large problem sizes on these accelerators. In this paper, we describe a library of interfaces (HCLOOC) that addresses these challenges. It employs optimal software pipelines to overlap data transfers between the host CPU and the accelerator with computations on the accelerator. It is designed using fundamental building blocks: OpenCL command queues for FPGAs, Intel offload streams for Intel Xeon Phis, and CUDA streams and events, which allow concurrent use of the copy and execution engines provided in NVIDIA GPUs. We elucidate the key features of our library using an out-of-core implementation of matrix multiplication of large dense matrices on a hybrid node, an Intel Haswell multicore CPU server hosting three accelerators: an NVIDIA K40c GPU, an Intel Xeon Phi 3120P, and a Xilinx FPGA. Based on experiments with the GPU, we show that our out-of-core implementation achieves 82% of the peak double-precision floating-point performance of the GPU and a speedup of 2.7 times over NVIDIA's out-of-core matrix multiplication implementation (CUBLAS-XT). We also demonstrate that our implementation exhibits no drop in performance when the problem size exceeds the main memory of the GPU; we observe the same behaviour for our implementations for the Intel Xeon Phi and the Xilinx FPGA.
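For the GPU case, the overlap of PCI-E transfers with computation described in the abstract can be illustrated with a minimal, hypothetical sketch using CUDA streams and cuBLAS. This is not the HCLOOC code or API: the matrix order N, the panel width TB, the assumption that matrix A fits in device memory, and the double-buffering layout are illustrative choices of ours. The sketch streams column panels of B and C through the device on two streams, so the host-to-device copy of one panel proceeds on the copy engine while the previous panel is being multiplied on the execution engine (pinned host memory is required for cudaMemcpyAsync to overlap with kernels).

    // Hypothetical sketch (not HCLOOC): double-buffered out-of-core DGEMM,
    // C = A * B in column-major layout, with A resident on the GPU and
    // B, C streamed through the device in column panels on two CUDA streams.
    #include <cstdio>
    #include <cstdlib>
    #include <cublas_v2.h>
    #include <cuda_runtime.h>

    #define CHECK(call) do { cudaError_t e = (call); if (e != cudaSuccess) { \
        fprintf(stderr, "CUDA error %s at line %d\n", cudaGetErrorString(e), __LINE__); \
        exit(1); } } while (0)

    int main() {
        const int    N  = 8192;   // matrix order (illustrative)
        const int    TB = 1024;   // width of one column panel of B and C (illustrative)
        const double alpha = 1.0, beta = 0.0;

        // Pinned host buffers so cudaMemcpyAsync can overlap with kernels.
        double *hA, *hB, *hC;
        CHECK(cudaMallocHost(&hA, sizeof(double) * N * N));
        CHECK(cudaMallocHost(&hB, sizeof(double) * N * N));
        CHECK(cudaMallocHost(&hC, sizeof(double) * N * N));
        for (size_t i = 0; i < (size_t)N * N; ++i) { hA[i] = 1.0; hB[i] = 2.0; }

        // Device storage: A stays resident; B and C panels are double-buffered.
        double *dA, *dB[2], *dC[2];
        CHECK(cudaMalloc(&dA, sizeof(double) * N * N));
        for (int s = 0; s < 2; ++s) {
            CHECK(cudaMalloc(&dB[s], sizeof(double) * N * TB));
            CHECK(cudaMalloc(&dC[s], sizeof(double) * N * TB));
        }
        CHECK(cudaMemcpy(dA, hA, sizeof(double) * N * N, cudaMemcpyHostToDevice));

        cudaStream_t stream[2];
        cublasHandle_t handle;
        cublasCreate(&handle);
        for (int s = 0; s < 2; ++s) CHECK(cudaStreamCreate(&stream[s]));

        // Software pipeline: panel p uses stream p % 2, so the copy of panel p+1
        // can proceed on the other stream while panel p is being multiplied.
        for (int p = 0; p < N / TB; ++p) {
            int s = p % 2;
            double *hBp = hB + (size_t)p * TB * N;   // column panel p of B
            double *hCp = hC + (size_t)p * TB * N;   // column panel p of C

            CHECK(cudaStreamSynchronize(stream[s])); // buffer s is free again
            CHECK(cudaMemcpyAsync(dB[s], hBp, sizeof(double) * N * TB,
                                  cudaMemcpyHostToDevice, stream[s]));
            cublasSetStream(handle, stream[s]);
            cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                        N, TB, N, &alpha, dA, N, dB[s], N, &beta, dC[s], N);
            CHECK(cudaMemcpyAsync(hCp, dC[s], sizeof(double) * N * TB,
                                  cudaMemcpyDeviceToHost, stream[s]));
        }
        CHECK(cudaDeviceSynchronize());
        printf("C[0] = %f (expected %f)\n", hC[0], 2.0 * N);

        cublasDestroy(handle);
        return 0;
    }

The CUBLAS-XT baseline mentioned in the abstract performs a similar host-side tiling internally behind a single call (cublasXtDgemm); the paper's comparison is against that routine, whereas the sketch above only shows the generic stream-pipelining idea.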



Published In

The Journal of Supercomputing  Volume 74, Issue 2
February 2018
467 pages

Publisher

Kluwer Academic Publishers

United States

Author Tags

  1. CUBLAS
  2. CUDA
  3. GPU
  4. Heterogeneous clouds
  5. Intel MKL
  6. Intel Xeon Phi
  7. Matrix multiplication
  8. Out-of-core


Cited By

  • Kernel-as-a-Service. Proceedings of the 24th International Middleware Conference (2023), 192-206. https://doi.org/10.1145/3590140.3629115
  • An out-of-core method for GPU image mapping on large 3D scenarios of the real world. Future Generation Computer Systems 134:C (2022), 66-77. https://doi.org/10.1016/j.future.2022.03.022
  • An efficient parallelization method of Dempster–Shafer evidence theory based on CUDA. The Journal of Supercomputing 79:4 (2022), 4582-4601. https://doi.org/10.1007/s11227-022-04810-y
  • Special section. The Journal of Supercomputing 74:2 (2018), 527-529. https://doi.org/10.1007/s11227-018-2241-9
