DOI: 10.1145/3489517.3530409 · DAC Conference Proceedings
Research article · Open access

Automated accelerator optimization aided by graph neural networks

Published: 23 August 2022

Abstract

    With High-Level Synthesis (HLS), hardware designers need to describe only a high-level behavioral flow of the design. Even so, developing a high-performance architecture can still take weeks, mainly because there are many design choices to explore at this higher level, and evaluating a single design with the HLS tool takes minutes to hours. To address this problem, we model the HLS tool with a graph neural network (GNN) that is trained to generalize across a wide range of applications. Experimental results demonstrate that our model can estimate the quality of a design in milliseconds with high accuracy, yielding up to a 79X speedup (48X on average) for design optimization compared to the previous state-of-the-art approach, which relies on the HLS tool itself.
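    The surrogate-driven exploration loop the abstract describes can be sketched as follows. This is a hypothetical, dependency-free illustration, not the paper's actual model: a toy GCN-style mean-aggregation pass over a program graph produces a score used to rank candidate designs, so the slow HLS tool need only be invoked on the top-ranked ones. All function names and the one-dimensional "pragma feature" encoding here are invented for illustration.

    ```python
    # Hedged sketch of surrogate-guided design-space exploration (DSE).
    # The names and the 1-D feature encoding are hypothetical; a real model
    # would use trained weights (e.g. in PyTorch) rather than plain averaging.

    def message_pass(features, edges, rounds=2):
        """GCN-style update: average each node with its in-neighbors."""
        feats = [list(f) for f in features]
        for _ in range(rounds):
            nxt = []
            for v in range(len(feats)):
                neigh = [feats[u] for (u, w) in edges if w == v] + [feats[v]]
                nxt.append([sum(vals) / len(neigh) for vals in zip(*neigh)])
            feats = nxt
        return feats

    def surrogate_score(features, edges):
        """Graph-level readout: sum the first embedding component over nodes."""
        return sum(f[0] for f in message_pass(features, edges))

    def explore(candidates, edges, top_k=1):
        """Rank candidate designs by surrogate score (fast), returning the
        top_k indices to hand to the slow HLS tool for exact evaluation."""
        ranked = sorted(range(len(candidates)),
                        key=lambda i: surrogate_score(candidates[i], edges),
                        reverse=True)
        return ranked[:top_k]

    # Two candidate pragma settings for the same 2-node program graph.
    edges = [(0, 1)]          # directed edge: node 0 -> node 1
    cand_a = [[1.0], [3.0]]   # e.g. unroll factors encoded as node features
    cand_b = [[4.0], [2.0]]
    best = explore([cand_a, cand_b], edges)
    print(best)               # -> [1]
    ```

    The sketch mirrors the abstract's point: scoring a candidate costs only a little arithmetic over the graph, so thousands of design points can be ranked before a single minutes-to-hours HLS run is launched.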



    Published In

    DAC '22: Proceedings of the 59th ACM/IEEE Design Automation Conference
    July 2022
    1462 pages
    ISBN:9781450391429
    DOI:10.1145/3489517
    This work is licensed under a Creative Commons Attribution 4.0 International License.


    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Conference

    DAC '22: 59th ACM/IEEE Design Automation Conference
    July 10-14, 2022
    San Francisco, California

    Acceptance Rates

    Overall Acceptance Rate 1,770 of 5,499 submissions, 32%


    Article Metrics

    • Downloads (last 12 months): 944
    • Downloads (last 6 weeks): 65
    Reflects downloads up to 27 Jul 2024


    Cited By

    • (2024) FADO: Floorplan-Aware Directive Optimization Based on Synthesis and Analytical Models for High-Level Synthesis Designs on Multi-Die FPGAs. ACM Transactions on Reconfigurable Technology and Systems. DOI: 10.1145/3653458. Online publication date: 20-Mar-2024.
    • (2024) AutoAnnotate: Reinforcement Learning based Code Annotation for High Level Synthesis. 2024 25th International Symposium on Quality Electronic Design (ISQED), 1-9. DOI: 10.1109/ISQED60706.2024.10528738. Online publication date: 3-Apr-2024.
    • (2023) Towards a comprehensive benchmark for high-level synthesis targeted to FPGAs. Proceedings of the 37th International Conference on Neural Information Processing Systems, 45288-45299. DOI: 10.5555/3666122.3668084. Online publication date: 10-Dec-2023.
    • (2023) Special Session: Machine Learning for Embedded System Design. Proceedings of the 2023 International Conference on Hardware/Software Codesign and System Synthesis, 28-37. DOI: 10.1145/3607888.3608962. Online publication date: 17-Sep-2023.
    • (2023) GANDSE: Generative Adversarial Network-based Design Space Exploration for Neural Network Accelerator Design. ACM Transactions on Design Automation of Electronic Systems 28(3), 1-20. DOI: 10.1145/3570926. Online publication date: 19-Mar-2023.
    • (2023) HL-Pow: Learning-Assisted Pre-RTL Power Modeling and Optimization for FPGA HLS. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 42(11), 3925-3938. DOI: 10.1109/TCAD.2023.3246387. Online publication date: 17-Feb-2023.
    • (2023) AutoHLS: Learning to Accelerate Design Space Exploration for HLS Designs. 2023 IEEE 66th International Midwest Symposium on Circuits and Systems (MWSCAS), 491-495. DOI: 10.1109/MWSCAS57524.2023.10405914. Online publication date: 6-Aug-2023.
    • (2023) An Interview With Professor Jason Cong [Interview]. IEEE Circuits and Systems Magazine 23(4), 3-66. DOI: 10.1109/MCAS.2023.3331327. Online publication date: Dec-2024.
    • (2023) Micro/Nano Circuits and Systems Design and Design Automation: Challenges and Opportunities [Point of View]. Proceedings of the IEEE 111(6), 561-574. DOI: 10.1109/JPROC.2023.3276941. Online publication date: Jun-2023.
    • (2023) HGBO-DSE: Hierarchical GNN and Bayesian Optimization based HLS Design Space Exploration. 2023 International Conference on Field Programmable Technology (ICFPT), 106-114. DOI: 10.1109/ICFPT59805.2023.00017. Online publication date: 12-Dec-2023.
