Abstract
Application memory access patterns are crucial in deciding how much traffic is served by the cache and how much is forwarded to the dynamic random-access memory (DRAM). However, predicting such memory traffic is difficult because of the interplay of prefetchers, compilers, parallel execution, and innovations in manufacturer-specific micro-architectures. This research introduces MAPredict, a static analysis-driven framework that addresses these challenges to predict last-level cache (LLC)-DRAM traffic. By exploring and analyzing the behavior of modern Intel processors, MAPredict formulates cache-aware analytical models. MAPredict invokes these models to predict LLC-DRAM traffic by combining the application model, the machine model, and user-provided hints that capture dynamic information. MAPredict successfully predicts LLC-DRAM traffic for different regular access patterns and provides the means to combine static and empirical observations for irregular access patterns. Evaluated on 130 workloads from six applications on recent Intel micro-architectures, MAPredict yielded an average accuracy of 99% for streaming, 91% for strided, and 92% for stencil patterns. By coupling static and empirical methods, up to 97% average accuracy was obtained for random access patterns on different micro-architectures.
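To illustrate the kind of cache-aware analytical model the abstract describes, the sketch below estimates LLC-DRAM traffic for a STREAM-triad-style streaming loop at cache-line granularity. The 64-byte line size, the write-allocate assumption, and all function names here are illustrative assumptions for a minimal sketch, not MAPredict's actual formulation.

```python
# Hypothetical sketch of a cache-aware streaming-traffic model.
# Assumptions (not taken from the paper): 64-byte cache lines and
# write-allocate stores, i.e., each stored line is first read from
# DRAM before being modified and written back.

CACHE_LINE = 64  # bytes; typical line size on modern Intel CPUs


def lines_touched(n_elems, elem_size):
    """Number of cache lines covered by a contiguous array access."""
    total_bytes = n_elems * elem_size
    return (total_bytes + CACHE_LINE - 1) // CACHE_LINE


def stream_triad_traffic(n, elem_size=8, write_allocate=True):
    """Predict LLC-DRAM bytes for the loop a[i] = b[i] + s * c[i].

    Reads: arrays b and c.  Writes: array a (written back to DRAM).
    With write-allocate, each line of a is also read from DRAM
    before it is modified; non-temporal stores would avoid that.
    """
    read_lines = 2 * lines_touched(n, elem_size)   # b and c
    write_lines = lines_touched(n, elem_size)      # a, written back
    if write_allocate:
        read_lines += lines_touched(n, elem_size)  # allocate lines of a
    return (read_lines + write_lines) * CACHE_LINE


# For 10 million doubles: (2 reads + 1 allocate + 1 write-back)
# streams of 80 MB each, i.e., 320 MB of predicted LLC-DRAM traffic.
print(stream_triad_traffic(10_000_000))  # 320000000
```

In a real predictor such a closed-form model would be selected per loop from the statically detected access pattern, with prefetcher and micro-architecture effects folded in as per-machine correction factors.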
This manuscript has been authored by UT-Battelle LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
Acknowledgments
This research used resources of the Experimental Computing Laboratory at Oak Ridge National Laboratory, which are supported by the US Department of Energy’s Office of Science under contract no. DE-AC05-00OR22725.
This research was supported by (1) the US Department of Defense project Brisbane: Productive Programming Systems in the Era of Extremely Heterogeneous and Ephemeral Computer Architectures and (2) the DOE Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program.
Copyright information
© 2022 Springer Nature Switzerland AG
Cite this paper
Monil, M.A.H., Lee, S., Vetter, J.S., Malony, A.D. (2022). MAPredict: Static Analysis Driven Memory Access Prediction Framework for Modern CPUs. In: Varbanescu, AL., Bhatele, A., Luszczek, P., Marc, B. (eds) High Performance Computing. ISC High Performance 2022. Lecture Notes in Computer Science, vol 13289. Springer, Cham. https://doi.org/10.1007/978-3-031-07312-0_12
DOI: https://doi.org/10.1007/978-3-031-07312-0_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-07311-3
Online ISBN: 978-3-031-07312-0
eBook Packages: Computer Science, Computer Science (R0)