AStitch: enabling a new multi-dimensional optimization space for memory-intensive ML training and inference on modern SIMT architectures
Proceedings of the 27th ACM International Conference on Architectural …, 2022
This work reveals that memory-intensive computation is a rising performance-critical factor in recent machine learning models. Due to a unique set of new challenges, existing ML optimizing compilers cannot perform efficient fusion under complex two-level dependencies combined with just-in-time demand. They face the dilemma of either performing costly fusion due to heavy redundant computation, or skipping fusion, which results in a massive number of kernels. Furthermore, they often suffer from low parallelism due to the lack of support for real-world production workloads with irregular tensor shapes. To address these rising challenges, we propose AStitch, a machine learning optimizing compiler that opens a new multi-dimensional optimization space for memory-intensive ML computations. It systematically abstracts four operator-stitching schemes while considering multi-dimensional optimization objectives, tackles complex computation-graph dependencies with novel hierarchical data reuse, and efficiently processes various tensor shapes via adaptive thread mapping. Finally, AStitch provides just-in-time support incorporating our proposed optimizations for both ML training and inference. Although AStitch serves as a stand-alone compiler engine that is portable to any version of TensorFlow, its basic ideas can be generally applied to other ML frameworks and optimizing compilers. Experimental results show that AStitch achieves an average of 1.84x speedup (up to 2.73x) over Google's state-of-the-art XLA solution across five production workloads. We also deploy AStitch on a production cluster for ML workloads with thousands of GPUs. The system has been in operation for more than 10 months and saves about 20,000 GPU hours for 70,000 tasks per week.
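The fusion dilemma the abstract describes can be illustrated with a minimal sketch. AStitch's actual stitching schemes operate at the CUDA kernel level (e.g., passing a reduction result through shared memory to its consumers); the hypothetical Python functions below only model the memory-traffic accounting: an unfused pipeline materializes every intermediate, while a "stitched" version keeps the reduction result in fast scratch storage so only the final output round-trips main memory. The buffer counts are illustrative, not AStitch's real cost model.

```python
import numpy as np

def unfused_log_softmax(x):
    """Each op runs as a separate 'kernel'; every intermediate
    is materialized in (simulated) global memory."""
    t1 = np.exp(x)           # kernel 1: writes t1
    t2 = t1 / t1.sum()       # kernel 2: reduction + broadcast, writes t2
    t3 = np.log(t2)          # kernel 3: writes t3
    return t3, 3             # result, number of materialized buffers

def stitched_log_softmax(x):
    """One 'stitched kernel': the reduction value is held in scratch
    storage and reused by its consumer, so only the final output is
    written out. Uses log(exp(x)/sum(exp(x))) == x - log(sum(exp(x)))."""
    s = np.exp(x).sum()      # reduction kept in fast 'shared memory'
    out = x - np.log(s)      # consumer reuses s without a memory round-trip
    return out, 1
```

Both paths compute the same log-softmax, but the stitched version touches main memory once instead of three times, which is where the speedup on memory-intensive subgraphs comes from.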