DOI: 10.1145/3373087.3375380
Poster

HPIPE: Heterogeneous Layer-Pipelined and Sparse-Aware CNN Inference for FPGAs

Published: 24 February 2020

Abstract

This poster presents a novel cross-layer-pipelined Convolutional Neural Network accelerator architecture, and network compiler, that make use of precision minimization and parameter pruning to fit ResNet-50 entirely into on-chip memory on a Stratix 10 2800 FPGA. By statically partitioning the hardware across each of the layers in the network, our architecture enables full DSP utilization and reduces the soft logic per DSP ratio by roughly 4x over prior work on sparse CNN accelerators for FPGAs. This high DSP utilization, a frequency of 420MHz, and skipping zero weights enable our architecture to execute a sparse ResNet-50 model at a batch size of 1 at 3300 images/s, which is nearly 3x higher throughput than NVIDIA's fastest machine learning targeted GPU, the V100. We also present a network compiler and a flexible hardware interface that make it easy to add support for new types of neural networks, and to optimize these networks for FPGAs with different on-chip resources.
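The zero-weight skipping the abstract relies on can be illustrated with a small sketch. This is hypothetical Python for intuition only, not the HPIPE hardware or compiler: a pruned filter is pre-compressed into (index, value) pairs, so the multiply-accumulate loop performs work only for nonzero weights.

```python
# Illustrative sketch (not the HPIPE implementation): a pruned filter is
# compressed offline into (index, value) pairs, and the MAC loop then
# iterates over nonzero weights only, skipping zeros entirely.

def compress_weights(weights):
    """Keep only nonzero weights, paired with their original positions."""
    return [(i, w) for i, w in enumerate(weights) if w != 0]

def sparse_dot(compressed, activations):
    """Multiply-accumulate over the nonzero weights only."""
    return sum(w * activations[i] for i, w in compressed)

# Pruned filter: 3 of 8 weights are nonzero, so only 3 multiplies are needed.
weights = [0, 3, 0, 0, -2, 0, 1, 0]
acts    = [5, 1, 7, 2, 4, 9, 6, 8]

compressed = compress_weights(weights)
assert len(compressed) == 3                                 # 3 multiplies, not 8
assert sparse_dot(compressed, acts) == 3*1 + (-2)*4 + 1*6   # = 1
```

In hardware, the analogous effect is that DSP multipliers are never scheduled on zero weights, which is one source of the throughput gain the abstract reports.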



    Published In

    FPGA '20: Proceedings of the 2020 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays
    February 2020
    346 pages
    ISBN:9781450370998
    DOI:10.1145/3373087
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. accelerator
    2. fpga
    3. layer-pipeline
    4. neural networks
    5. sparsity

    Qualifiers

    • Poster

    Conference

    FPGA '20

    Acceptance Rates

    Overall Acceptance Rate 125 of 627 submissions, 20%

    Cited By

    • HASS: Hardware-Aware Sparsity Search for Dataflow DNN Accelerator. 2024 34th International Conference on Field-Programmable Logic and Applications (FPL), pp. 257-263. DOI: 10.1109/FPL64840.2024.00043. Online publication date: 2 Sep 2024.
    • FPGA Acceleration of Dynamic Neural Networks: Challenges and Advancements. 2024 IEEE International Conference on Omni-layer Intelligent Systems (COINS), pp. 1-5. DOI: 10.1109/COINS61597.2024.10622127. Online publication date: 29 Jul 2024.
    • Koios 2.0: Open-Source Deep Learning Benchmarks for FPGA Architecture and CAD Research. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 42(11), pp. 3895-3909. DOI: 10.1109/TCAD.2023.3272582. Online publication date: Nov 2023.
    • Exploiting the Common Case When Accelerating Input-Dependent Stream Processing by FPGA. IEEE Transactions on Computers, 72(5), pp. 1343-1355. DOI: 10.1109/TC.2022.3200576. Online publication date: 1 May 2023.
    • Respect the Difference: Reinforcement Learning for Heterogeneous FPGA Placement. 2023 International Conference on Field Programmable Technology (ICFPT), pp. 152-160. DOI: 10.1109/ICFPT59805.2023.00022. Online publication date: 12 Dec 2023.
    • Mixed-TD: Efficient Neural Network Accelerator with Layer-Specific Tensor Decomposition. 2023 33rd International Conference on Field-Programmable Logic and Applications (FPL), pp. 204-211. DOI: 10.1109/FPL60245.2023.00036. Online publication date: 4 Sep 2023.
    • A Pipelining-Based Heterogeneous Scheduling and Energy-Throughput Optimization Scheme for CNNs Leveraging Apache TVM. IEEE Access, 11, pp. 35007-35021. DOI: 10.1109/ACCESS.2023.3264828. Online publication date: 2023.
    • HPIPE NX: Boosting CNN Inference Acceleration Performance with AI-Optimized FPGAs. 2022 International Conference on Field-Programmable Technology (ICFPT), pp. 1-9. DOI: 10.1109/ICFPT56656.2022.9974441. Online publication date: 5 Dec 2022.
    • Elastic-DF: Scaling Performance of DNN Inference in FPGA Clouds through Automatic Partitioning. ACM Transactions on Reconfigurable Technology and Systems, 15(2), pp. 1-34. DOI: 10.1145/3470567. Online publication date: 6 Dec 2021.
    • The Evolution of Domain-Specific Computing for Deep Learning. IEEE Circuits and Systems Magazine, 21(2), pp. 75-96. DOI: 10.1109/MCAS.2021.3071629. Online publication date: Oct 2022.
