DOI: 10.1145/2818950.2818969
S-L1: A Software-based GPU L1 Cache that Outperforms the Hardware L1 for Data Processing Applications

Published: 05 October 2015

Abstract
    Implementing a GPU L1 data cache entirely in software to usurp the hardware L1 cache sounds counter-intuitive. However, we show how a software L1 cache can perform significantly better than the hardware L1 cache for data-intensive streaming (i.e., "Big-Data") GPGPU applications. Hardware L1 data caches can perform poorly on current GPUs, because the size of the L1 is far too small and its cache line size is too large given the number of threads that typically need to run in parallel.
    Our paper makes two contributions. First, we experimentally characterize the performance behavior of modern GPU memory hierarchies and in doing so identify a number of bottlenecks. Second, we describe the design and implementation of a software L1 cache, S-L1. On ten streaming GPGPU applications, S-L1 performs 1.9 times faster, on average, when compared to using the default hardware L1, and 2.1 times faster, on average, when compared to using no L1 cache.
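    The abstract argues that hardware L1 lines are too large for the small per-thread working sets of streaming workloads. The S-L1 design itself is not reproduced on this page, but the general idea of a software-managed cache with small lines can be sketched in plain C. All names, sizes, and the direct-mapped organization below are illustrative assumptions for exposition, not details taken from the paper:

    ```c
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /* Illustrative sketch only: a tiny direct-mapped software cache with
     * 16-byte lines, far smaller than the 128-byte lines typical of GPU
     * hardware L1 caches. Names and parameters are hypothetical. */

    #define LINE_BYTES 16
    #define NUM_LINES  8              /* 128 bytes of cached data total */

    typedef struct {
        uint64_t tags[NUM_LINES];     /* tag = line-aligned base address */
        uint8_t  data[NUM_LINES][LINE_BYTES];
        int      valid[NUM_LINES];
        long     hits, misses;
    } SoftCache;

    /* Read one byte through the software cache, backed by `mem`. */
    uint8_t sc_read(SoftCache *c, const uint8_t *mem, uint64_t addr) {
        uint64_t base = addr & ~(uint64_t)(LINE_BYTES - 1);
        unsigned idx  = (unsigned)((base / LINE_BYTES) % NUM_LINES);
        if (!c->valid[idx] || c->tags[idx] != base) {
            /* Miss: fetch the whole line from backing memory. */
            memcpy(c->data[idx], mem + base, LINE_BYTES);
            c->tags[idx]  = base;
            c->valid[idx] = 1;
            c->misses++;
        } else {
            c->hits++;
        }
        return c->data[idx][addr - base];
    }

    int main(void) {
        uint8_t mem[1024];
        for (int i = 0; i < 1024; i++) mem[i] = (uint8_t)i;

        SoftCache c = {0};
        long sum = 0;
        /* Streaming pass: sequential reads hit within each 16-byte line. */
        for (int i = 0; i < 256; i++) sum += sc_read(&c, mem, (uint64_t)i);

        printf("sum=%ld hits=%ld misses=%ld\n", sum, c.hits, c.misses);
        /* prints: sum=32640 hits=240 misses=16 */
        return 0;
    }
    ```

    With 16-byte lines, a sequential pass takes one miss per line and then hits on the remaining 15 bytes; the small line size is what keeps per-thread cache capacity viable when thousands of threads share the cache, which is the intuition behind the paper's argument.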


    Cited By

    • (2019) A survey of architectural approaches for improving GPGPU performance, programmability and heterogeneity. Journal of Parallel and Distributed Computing. DOI: 10.1016/j.jpdc.2018.11.012 (January 2019).

    Published In

    MEMSYS '15: Proceedings of the 2015 International Symposium on Memory Systems
    October 2015
    278 pages
    ISBN:9781450336048
    DOI:10.1145/2818950

    Publisher

    Association for Computing Machinery, New York, NY, United States


    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    MEMSYS '15: International Symposium on Memory Systems
    October 5-8, 2015
    Washington, DC, USA

