Research article · DOI: 10.1145/3566097.3567894 · ASPDAC Conference Proceedings

Area-Driven FPGA Logic Synthesis Using Reinforcement Learning

Published: 31 January 2023

Abstract

Logic synthesis involves a rich set of optimization algorithms applied in a specific sequence to a circuit netlist prior to technology mapping. A conventional approach is to apply a fixed "recipe" of such algorithms deemed to work well for a wide range of different circuits. We apply reinforcement learning (RL) to determine a unique recipe of algorithms for each circuit. Feature-importance analysis is conducted using a random-forest classifier to prune the set of features visible to the RL agent. We demonstrate conclusive learning by the RL agent and show significant FPGA area reductions vs. the conventional approach (resyn2). In addition to circuit-by-circuit training and inference, we also train an RL agent on multiple circuits, and then apply the agent to optimize: 1) the same set of circuits on which it was trained, and 2) an alternative set of "unseen" circuits. In both scenarios, we observe that the RL agent produces higher-quality implementations than the conventional approach. This shows that the RL agent is able to generalize, and perform beneficial logic synthesis optimizations across a variety of circuits.
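The flow the abstract describes — an agent choosing a per-circuit sequence ("recipe") of logic-optimization commands to minimize post-mapping FPGA area — can be sketched with tabular Q-learning. Everything below is a hypothetical illustration, not the paper's implementation: the command names mirror ABC passes, but the multiplicative area model, reduction factors, and hyperparameters are invented stand-ins for actually invoking ABC and technology mapping (the paper uses a deep RL agent over pruned circuit features).

```python
import random

# Hypothetical action set mirroring ABC-style optimization passes.
ACTIONS = ["rewrite", "refactor", "resub", "balance"]
# Invented per-pass area-reduction factors; a toy stand-in for real synthesis.
FACTORS = {"rewrite": 0.97, "refactor": 0.98, "resub": 0.96, "balance": 0.99}
L = 8              # recipe length (number of synthesis passes)
START_AREA = 1000.0

def evaluate(recipe):
    """Toy stand-in for 'apply recipe, tech-map, report area'."""
    area = START_AREA
    for cmd in recipe:
        area *= FACTORS[cmd]
    return area

def q_learning(episodes=2000, alpha=0.3, gamma=0.95, eps=0.2, seed=0):
    """Tabular Q-learning: state = position in the recipe, reward = area saved."""
    rng = random.Random(seed)
    Q = [[0.0] * len(ACTIONS) for _ in range(L)]
    for _ in range(episodes):
        area = START_AREA
        for t in range(L):
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))                      # explore
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q[t][i])  # exploit
            new_area = area * FACTORS[ACTIONS[a]]
            reward = area - new_area                                 # area reduction
            future = max(Q[t + 1]) if t + 1 < L else 0.0
            Q[t][a] += alpha * (reward + gamma * future - Q[t][a])
            area = new_area
    return Q

def best_recipe(Q):
    """Greedy recipe read off the learned Q-table."""
    return [ACTIONS[max(range(len(ACTIONS)), key=lambda i: Q[t][i])]
            for t in range(L)]
```

Under this toy model the learned greedy recipe outperforms a fixed recipe, which is the same comparison the paper makes against the fixed resyn2 sequence, only with real circuits and real area measurements.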




Published In

ASPDAC '23: Proceedings of the 28th Asia and South Pacific Design Automation Conference
January 2023, 807 pages
ISBN: 9781450397834
DOI: 10.1145/3566097

In-Cooperation: IPSJ, IEEE CAS, IEEE CEDA, IEICE

Publisher: Association for Computing Machinery, New York, NY, United States


Author Tags: circuit optimization, logic synthesis, reinforcement learning

Conference: ASPDAC '23

Acceptance Rates: ASPDAC '23 paper acceptance rate 102 of 328 submissions (31%); overall acceptance rate 466 of 1,454 submissions (32%)


Cited By

  • CBTune: Contextual Bandit Tuning for Logic Synthesis. DATE 2024, pp. 1-6. DOI: 10.23919/DATE58400.2024.10546766
  • On Accelerating Domain-Specific MC-TS with Knowledge Retention and Efficient Parallelization for Logic Optimization. ISEDA 2024, pp. 235-240. DOI: 10.1109/ISEDA62518.2024.10617624
  • Machine Learning for FPGA Electronic Design Automation. IEEE Access 12 (2024), pp. 182640-182662. DOI: 10.1109/ACCESS.2024.3511345
  • Enhancing FPGA CAD Flow with AI-Powered Solutions. In AI-Enabled Electronic Circuit and System Design (2024), pp. 225-256. DOI: 10.1007/978-3-031-71436-8_7
  • AlphaSyn: Logic Synthesis Optimization with Efficient Monte Carlo Tree Search. ICCAD 2023, pp. 1-9. DOI: 10.1109/ICCAD57390.2023.10323856
  • Three Challenges in ReRAM-Based Process-In-Memory for Neural Network. AICAS 2023, pp. 1-5. DOI: 10.1109/AICAS57966.2023.10168640
  • Application of Machine Learning in FPGA EDA Tool Development. IEEE Access 11 (2023), pp. 109564-109580. DOI: 10.1109/ACCESS.2023.3322358
