1 Introduction
High-level synthesis (HLS) with the task-parallel programming model is an important tool to help programmers scale up the performance of their accelerators on modern FPGAs with ever-increasing resource capacities. Task-level parallelism is a form of parallelization of computer programs across multiple processors. In contrast to data parallelism where the workload is partitioned on data and each processor executes the same program (e.g., OpenMP [
29]), different processors, or modules, in a task-parallel program often behave differently, while data are passed between processors. Examples of task-parallel programs include image processing pipelines [
17,
54,
67], graph processing [
30,
31,
84,
99], and network switching [
63]. Research shows that even for data-parallel applications such as neural networks [
73,
74,
83] and stencil computation [
17], task-parallel implementations show better scalability and higher frequency than their data-parallel counterparts due to the localized communication pattern [
25].
Even though task-parallel programs are suitable for spatial architectures, existing FPGA
computer-aided design (CAD) toolchains often fail in timing closure. One major cause that leads to the unsatisfactory frequency is that HLS cannot easily predict the physical layout of the design after placement and routing. Thus, HLS tools typically rely on pre-characterized operation delays and a crude interconnect delay model to insert clock boundaries (i.e., registers) into an untimed design to generate a timed RTL implementation [
36,
79,
98]. Hence, as HLS accelerator system designs grow larger to fully leverage the resources of a modern FPGA, these behavior-level estimates become even less accurate and the timing quality of the synthesized RTL degrades further.
This timing issue is worsened as modern FPGA architectures become increasingly heterogeneous [
90]. Modern FPGAs have thousands of heterogeneous
digital signal processing (DSP) and
random-access memory (RAM) blocks and millions of
lookup table (LUT) and
flip-flop (FF) instances. To pack more logic onto a single device, the latest FPGAs integrate multiple dies using silicon interposers, but interconnects that cross die boundaries carry a non-trivial delay penalty. In addition, specialized IP blocks such as PCIe and DDR controllers are embedded in the programmable logic. These IP blocks usually have fixed locations near dedicated I/O banks and consume a large number of programmable resources nearby. As a result, these dedicated IPs often force nearby signals to detour onto more expensive and/or longer routing paths. This complexity and heterogeneity significantly challenge the effectiveness and efficiency of the modern FPGA CAD workflow.
Moreover, the recent release of High Bandwidth Memory (HBM)-based FPGA boards brings even more challenges to the timing closure of HLS designs. The key feature of an HBM device is that a super-wide data interface is exposed in a small local region, which often causes severe local congestion. For example, the AMD/Xilinx U280 FPGA features 32 independent HBM channels at the bottom of the device, each with a 256-bit data width running at 450 MHz. To fully utilize the potential HBM bandwidth, the physical design tools need to squeeze a substantial amount of logic into the area near the HBM blocks.
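For reference, these figures imply an aggregate theoretical bandwidth of

```latex
32 \times 256\,\mathrm{bit} \times 450\,\mathrm{MHz} = 3.686\,\mathrm{Tb/s} \approx 460.8\,\mathrm{GB/s},
```

which is only attainable if the logic feeding all 32 channels can be placed and routed at speed.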
The current timing-closure struggle on multi-die FPGAs originates from a disconnect between the HLS step and the downstream physical design step. The existing FPGA compilation stack consists of a sequence of independent design and optimization steps that gradually lower the design abstraction, but these steps lack cross-stack coordination and optimization. Given a C++ task-parallel program describing an accelerator system, HLS can adjust the output RTL to change the pipeline depth of the data links between tasks (modules), which are usually latency-insensitive, to break critical paths; however, the tool does not know which paths will be timing-critical. Conversely, the physical design tools can determine the critical paths but no longer have the option to add extra registers, because they must honor the cycle-accurate RTL input.
Several prior attempts couple the physical design process with HLS compilation [
6,
9,
23,
48,
77,
78,
94,
98]. Zheng et al. [
98] propose to iteratively run placement and routing to obtain accurate delay statistics of each wire and operator. Based on the post-route information, HLS re-runs the scheduling step for a better pipelining; Cong et al. [
23] is another representative work that presents placement-driven scheduling and binding for multi-cycle communications in an island-style architecture similar to FPGAs. Kim et al. [
48] propose to combine architectural synthesis with placement under distributed-register architecture to minimize the system latency. Stammermann et al. [
77] propose methods to simultaneously perform floorplanning and functional unit binding to reduce interconnect power. Chen et al. [
9] propose implementing HLS as a sub-routine to adjust the delay/power/variability/area of the circuit modules during physical planning across different IC layers, which improves timing by only 8%. These prior approaches all focus on the fine-grained interaction between HLS and physical design, where individual operators and the associated wires and registers are all involved in the delay prediction and the iterative HLS-layout co-optimization. While such a fine-grained method can be effective on relatively small HLS designs and FPGA devices, it is too expensive (if not infeasible) for today’s large designs targeting multi-die FPGAs, where each implementation iteration from HLS to bitstream may take days to complete.
Therefore, we propose to re-structure the CAD stack and partially combine physical design with HLS in a coarse-grained fashion. Specifically, we couple the coarse-grained floorplanning step with behavior-level pipelining in HLS. Our coarse-grained floorplanning divides the FPGA device into a grid of regions and assigns each task to one region during HLS compilation. We further pipeline all inter-region connections to facilitate timing closure, while leaving intra-region optimization to the default HLS tool. As our experiments will show, floorplanning a 4-die FPGA into only 6–8 regions is already enough to properly guide HLS to accurately eliminate global critical paths; thus, our floorplan-guided HLS approach is lightweight and highly scalable.
Our methodology relieves local congestion and fixes global critical paths at the same time. First, the early floorplanning step guides the subsequent placement steps to distribute the user logic evenly across the entire device instead of attempting to pack the logic into a single die, which alleviates local congestion. Second, the floorplan provides HLS with a view of the global physical layout, helping it accurately identify and pipeline the long wires, especially those crossing die boundaries, so the global critical paths can be appropriately pipelined. Finally, we present analysis and latency-balancing algorithms to guarantee that the throughput of the resulting design is not negatively impacted. Our contributions are as follows:
–
To the best of our knowledge, we are the first to tackle the challenge of high-frequency HLS design on multi-die FPGAs by coupling floorplanning and pipelining to effectively insert registers on the long cross-die interconnects. We further ensure that the additional latency does not affect the throughput of the design.
–
We present a set of optimizations specifically tailored for HBM devices, including automatic HBM port binding, floorplan solution space exploration, and a customized programming API to minimize the area overhead of HBM IO modules.
–
Our framework, TAPA, interfaces with the commercial FPGA design tool flow. It improves the average frequency of 43 designs from 147 MHz to 297 MHz with a negligible area overhead.
This article extends the two prior publications [
18,
35] of the authors on this topic. Compared to the prior papers, this article includes additional contributions as follows:
–
We integrate the co-optimization methodology from Reference [
35] with the programming framework in Reference [
18] to provide a fully automated, programmer-friendly, and robust workflow that consistently achieves higher frequency compared to existing commercial toolchains.
–
We extend the framework of Reference [
18] with additional APIs for external memory access, which has significantly lowered BRAM consumption. This optimization enables the successful implementation of large-scale accelerators onto modern HBM-based FPGAs (Section
3.4).
–
We extend the co-optimization methods of Reference [
35] to consider the special challenges of programming HBM-based FPGAs, including automatic HBM channel binding and multi-floorplanning generation (Section
6).
–
We add four extra benchmarks that use a large number of HBM channels. We demonstrate how our new optimization enables them to be successfully implemented with high frequency (Section
7.4).
Figure
1 shows the overall flow of our proposed methodology. The rest of the article is organized as follows: Section
2 introduces background information on modern FPGA architectures and shows motivating examples; Section
3 presents our proposed task-parallel programming model; Section
4 details our coarse-grained floorplan scheme inside the HLS flow; Section
5 describes our floorplan-aware pipelining methods; Section
6 elaborates our techniques tailored for HBM-based FPGAs; Section
7 presents experimental results; Section
8 provides related work, followed by conclusion and acknowledgments.
4 Coupling HLS with Coarse-grained Floorplanning
In this section, we present our coarse-grained floorplanning scheme that assigns TAPA tasks to different regions of the programmable fabric. We call this TAPA floorplanning module AutoBridge, an extension of our prior work of the same name [
35].
Note that the focus of this work is not on improving floorplanning algorithms; instead, we intend to properly use coarse-grained floorplan information to guide HLS and placement.
4.1 Coarse-grained Floorplanning Scheme
Instead of finding a dedicated region with a detailed aspect ratio for each module, we choose to view the FPGA device as a grid that is formed by the die boundaries and the large IP blocks. These physical barriers split the programmable fabric apart into a series of disjoint slots in the grid where each slot represents a sub-region of the device isolated by die boundaries and IP blocks. Using our coarse-grained floorplanning, we will assign each function of the HLS design to one of these slots.
For example, for the Xilinx Alveo U250 FPGA, the array of DDR controllers forms a vertical split in the middle column; and there are three horizontal die boundaries. Thus, the device can be viewed as a grid of 8 slots in 2 columns and 4 rows. Similarly, the U280 FPGA can be viewed as a grid of 6 slots in 2 columns and 3 rows.
In this scheme, each slot contains about 700 BRAM_18Ks, 1500 DSPs, 400K Flip-Flops, and 200K LUTs. Meanwhile, to reduce the resource contention in each slot, we set a maximum utilization ratio for each slot to guarantee enough blank space. Experiments show that such slot sizes are suitable, and HLS has a good handle on the timing quality of the local logic within each slot, as shown in Section
7.
4.2 Problem Formulation
Given: (1) a graph
\(G(V, E)\) representing the task-parallel program where
V represents the set of tasks and
E represents the set of streaming channels between vertices; (2) the number of rows
R and the number of columns
C of the grid representation of the target device; (3) maximum resource utilization ratios for each slot; (4) location constraints such that certain IO modules must be placed near certain IP blocks. In addition, we may have constraints that certain vertices must be assigned to the same slot. This is for throughput concerns and will be explained in Section
5.
Goal: Assign each
\(v \in V\) to one of the slots such that (1) the resource utilization ratio
of each slot is below the given limit; (2) the cost function is minimized. We choose the total number of slot-crossings as the cost instead of the total estimated wire length. Specifically, the cost function is defined as
\[ \mathrm{cost} = \sum_{e_{ij} \in E} e_{ij}.width \times (|v_i.row - v_j.row| + |v_i.col - v_j.col|), \qquad (1) \]
where
\(e_{ij}.width\) is the bitwidth of the FIFO channel connecting
\(v_i\) and
\(v_j\), and module \(v\) is assigned to the \(v.col\)-th column and the \(v.row\)-th row. The physical meaning of the cost function is the sum, over all wires, of the number of slot boundaries each wire crosses.
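As an illustration, the cost function can be sketched in a few lines of Python. The field and variable names mirror the notation above and are illustrative, not TAPA's actual data structures:

```python
# Sketch of the slot-crossing cost (Equation (1)): each wire contributes its
# bitwidth multiplied by the number of slot boundaries it crosses.
def slot_crossing_cost(edges, placement):
    """edges: list of (src, dst, width); placement: vertex -> (row, col)."""
    cost = 0
    for src, dst, width in edges:
        r1, c1 = placement[src]
        r2, c2 = placement[dst]
        cost += width * (abs(r1 - r2) + abs(c1 - c2))
    return cost

edges = [("A", "B", 512), ("B", "C", 32)]
placement = {"A": (0, 0), "B": (1, 0), "C": (1, 1)}
print(slot_crossing_cost(edges, placement))  # 512*1 + 32*1 = 544
```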
4.3 Solution
Our problem is relatively small in size, as the number of tasks in behavior-level task parallel programs (typically less than thousands) is much smaller than the number of gates in a logic netlist. We adopt the main idea of top-down partitioning-based placement algorithms [
4,
32,
57] to solve our problem. Meanwhile, due to the relatively small problem size, we pursue an exact solution for each partitioning step.
Figure
8 demonstrates the floorplanning of an example design through three iterations of partitioning. The top-down partitioning-based approach starts from the initial state where all modules are assigned to one slot, then iteratively partitions each current slot in half into two child slots and assigns its modules to the child slots. Each partitioning iteration splits all of the current slots in half, either horizontally or vertically.
We formulate the partitioning process of each iteration using
integer linear programming (ILP). In every partitioning iteration, all current slots need to be divided in half. Since some of the modules in a slot may be tightly connected to modules outside of the slot, ignoring such connections can adversely affect the quality of the assignment. Therefore, our ILP formulation considers the partitioning of all slots together for an exact solution that is possible due to the small problem size. Experiments in Section
7 show that our ILP formulation is solvable within a few seconds or minutes for designs of hundreds of modules.
Performing an N-way partitioning is another potential method; however, experiments show that it is much slower than our iterative 2-way partitioning.
ILP Formulation of One Partitioning Iteration.
The formulation declares a binary decision variable
\(v_d\) for each
v to denote whether
v is assigned to the left or the right child slot during a vertical partitioning (or to the upper or the lower child slot for a horizontal one). Let
R denote the set of all current slots. For each slot
\(r \in R\) to be divided, we use
\(r_v\) to denote the set of all vertices that
r is currently accommodating. To ensure that the child slots have enough resources for all modules assigned to them, the ILP formulation imposes the resource constraint for each child slot
\(r_{child}\) and for each type of on-chip resource:
\[ \sum_{v \in r_v} v_d \cdot v_{area} \le (r_{child_1})_{area}, \qquad \sum_{v \in r_v} (1 - v_d) \cdot v_{area} \le (r_{child_0})_{area}, \]
where
\(v_{area}\) is the resource requirement of
v and
\((r_{child})_{area}\) represents the available resources in the corresponding child slot divided from
r.
To express the cost function that is based on the coordinates of each module, we first need to express the new coordinates
\((v.row, v.col)\) of
v based on the previous coordinates
\(((v.row)_{prev}, (v.col)_{prev})\) and the decision variable
\(v_d\). For a vertical partitioning, the new coordinates of \(v\) will be
\[ v.row = (v.row)_{prev}, \qquad v.col = 2 \times (v.col)_{prev} + v_d. \]
And for a horizontal partitioning, the new coordinates will be
\[ v.row = 2 \times (v.row)_{prev} + v_d, \qquad v.col = (v.col)_{prev}. \]
Finally, the objective is to minimize the total slot-crossing shown in Equation (
1) for each partitioning iteration.
For the example in Figure
8, Table
2 shows the
row and
col indices of selected vertices in each partitioning iteration.
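For illustration, one partitioning iteration can be prototyped as an exact search. The real flow solves each iteration as an ILP via the Python MIP package (Section 7.1); the brute-force enumeration below over the binary \(v_d\) variables is only viable for tiny instances, but it exercises the same resource constraint and slot-crossing objective (all names and numbers are illustrative):

```python
from itertools import product

# Exhaustive sketch of one 2-way partitioning iteration for a single slot.
# Side 0/1 of v_d corresponds to the two child slots.
def partition_once(vertices, edges, capacity):
    """vertices: {name: area}; edges: [(u, v, width)];
    capacity: max total area per child slot. Returns (assignment, cost)."""
    names = list(vertices)
    best = None
    for bits in product((0, 1), repeat=len(names)):
        v_d = dict(zip(names, bits))
        # Resource constraint: each child slot must fit its modules.
        for side in (0, 1):
            if sum(vertices[n] for n in names if v_d[n] == side) > capacity:
                break
        else:
            # Objective: width-weighted boundary crossings of this cut.
            cost = sum(w for u, v, w in edges if v_d[u] != v_d[v])
            if best is None or cost < best[1]:
                best = (v_d, cost)
    return best

verts = {"A": 3, "B": 3, "C": 2, "D": 2}
edges = [("A", "B", 512), ("B", "C", 64), ("C", "D", 512)]
assignment, cost = partition_once(verts, edges, capacity=6)
```

Here the only capacity-feasible minimum cut separates {A, B} from {C, D}, cutting the narrow 64-bit channel instead of either 512-bit one.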
5 Floorplan-aware Pipelining
Based on the generated floorplan, we aim to pipeline every cross-slot connection to facilitate timing closure. Although HLS has the flexibility to pipeline these connections to increase the final frequency, the additional latency could potentially cause a large increase in execution cycles, which we need to avoid. This section presents our methods for pipelining slot-crossing connections without hurting the overall throughput of the design.
We will first focus on pipelining the dataflow designs, then extend the method to other types of HLS design. In Section
5.1, we introduce the approach of pipelining with latency balancing; and Section
5.2 presents the detailed algorithm. In Section
5.3, we present how to utilize the internal computation pattern to construct loop-level dataflow graphs that allow more pipelining opportunities.
5.1 Pipelining Followed by Latency Balancing for Dataflow Designs
In our problem, an HLS dataflow design consists of a set of concurrently executed functions communicating through FIFO channels, where each function will be compiled into an RTL module controlled by an FSM [
65]. The rich expressiveness of FSM makes it difficult to statically determine how the additional latency will affect the total execution cycles. Note that our problem is different from other simplified dataflow models such as the
Synchronous Data Flow (SDF) [
51] and the
Latency Insensitive Theory (LIT) [
7], where the firing rate of each vertex is fixed. Unlike SDF and LIT, in our problem, each vertex is an FSM and the firing rate is not fixed and can have complex patterns.
Therefore, we adopt a conservative approach, where we first pipeline all edges that cross slot boundaries, then balance the latency of parallel paths based on the
cut-set pipelining [
64]. A cut-set is a set of edges whose removal creates two disconnected sub-graphs; if all edges in a cut-set are of the same direction, then we can add an equal amount of latency to each edge and the throughput of the design will be unaffected. Figure
9(a) illustrates the idea. If we need to add one unit of latency to
\(e_{13}\) (marked in red) due to the floorplan results, then we need to find a cut-set that includes
\(e_{13}\) and
balance the latency of all other edges in this cut-set (marked in blue).
Since we can choose different cut-sets to balance the same edge, we need to minimize the area overhead. For example, for
\(e_{13}\) , balancing the
cut-set 2 in Figure
9(b) incurs a smaller area overhead than
cut-set 1 in Figure
9(a), as the width of
\(e_{47}\) is smaller than that of
\(e_{14}\) . Meanwhile, it is possible that multiple edges can be included in the same cut-set. For example, the edges
\(e_{27}\) and
\(e_{37}\) are both included in the
cut-set 3, so we only need to balance the other edges in
cut-set 3 once.
Cut-set pipelining is equivalent to balancing the total added latency of every pair of
reconvergent paths [
64]. A path is defined as one or multiple concatenated edges of the same direction; two paths are reconvergent if they have the same source vertex and destination vertex. When there are multiple edges with additional latency from the floorplanning step, we need to find a globally optimal solution that ensures all reconvergent paths have a balanced latency, and the area overhead is minimized.
5.2 Latency Balancing Algorithm
Problem Formulation.
Given: A graph \(G(V, E)\) representing a dataflow design that has already been floorplanned and pipelined. Each vertex \(v \in V\) represents a function in the dataflow design and each edge \(e \in E\) represents the FIFO channel between functions. Each edge \(e \in E\) is associated with \(e.width\) representing the bitwidth of the edge. For each edge e, the constant \(e.lat\) represents the additional latency inserted to e in the previous pipelining step. We use the integer variable \(e.balance\) to denote the amount of latency added to e in the current latency-balancing step.
Goal: (1) For each edge
\(e \in E\) , compute
\(e.balance\) such that for any pair of reconvergent paths
\(\lbrace p_1, p_2\rbrace\), the total latency on each path is the same:
\[ \sum_{e \in p_1} (e.lat + e.balance) = \sum_{e \in p_2} (e.lat + e.balance); \]
and (2) minimize the total area overhead, which is defined as:
\[ \mathrm{overhead} = \sum_{e \in E} e.width \times e.balance. \]
Note that this problem is different from the classic min-cut problem [
59] for DAG. One naïve solution is to find a min-cut for every pipelined edge and increase the latency of the other edges in the cut accordingly. However, this simple method is suboptimal. For example, in Figure
9, since edge
\(e_{27}\) and
\(e_{37}\) can be in the same cut-set, we only need to add one unit of latency to the other edges in the cut-set (e.g.,
\(e_{47}\) ,
\(e_{57}\) , and
\(e_{67}\) ) so all paths are balanced.
Solution.
We formulate the problem in a restricted form of ILP that can be solved in polynomial time. For each vertex \(v_i\) , we associate it with an integer variable \(S_i\) that denotes the maximum latency from pipelining between \(v_i\) and the sink vertex of the graph. In other words, given two vertices \(v_x\) and \(v_y\) , \((S_x - S_y)\) represents the maximum latency among all paths between the two vertices. Note that we only consider the latency on edges due to pipelining.
For each edge
\(e_{ij}\), we have
\[ S_i - S_j \ge e_{ij}.lat. \]
According to our definition, the additional balancing latency added to edge
\(e_{ij}\) in this step can be expressed as
\[ e_{ij}.balance = S_i - S_j - e_{ij}.lat, \]
since we want every path from
\(v_i\) to
\(v_j\) to have the same latency.
The optimization goal is to minimize the total area overhead, i.e., the weighted sum of the additional depth on each edge:
\[ \min \sum_{e_{ij} \in E} e_{ij}.width \times (S_i - S_j - e_{ij}.lat). \]
For example, assume that there are two paths from \(v_1\) to \(v_2\) where path \(p_1\) has 3 units of latency from pipelining while \(p_2\) has 1 unit. Thus, from our formulation, we will select the edge(s) on \(p_2\) and add 2 additional units of latency to balance the total latency of \(p_1\) and \(p_2\) so the area overhead is minimized.
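A feasible (though not necessarily area-minimal) balancing can be computed directly from the \(S\) variables: setting each \(S_i\) to the longest pipelining latency from \(v_i\) to the sink satisfies all constraints and balances every pair of reconvergent paths. The sketch below, with illustrative names, demonstrates this on a DAG; the actual tool instead solves the LP above to minimize the width-weighted overhead:

```python
from functools import lru_cache

# Feasible latency balancing on a DAG via longest-path distances: with
# S_i = longest pipelining latency from v_i to any sink, the balance
# e.balance = S_i - S_j - e.lat makes every path between any two vertices
# carry the same total latency (feasible, but not guaranteed area-minimal).
def balance_latency(edges):
    """edges: {(u, v): lat}. Returns {(u, v): balance}."""
    succ = {}
    for (u, v), lat in edges.items():
        succ.setdefault(u, []).append((v, lat))

    @lru_cache(maxsize=None)
    def S(u):  # longest pipelining latency from u to any sink
        return max((lat + S(v) for v, lat in succ.get(u, [])), default=0)

    return {(u, v): S(u) - S(v) - lat for (u, v), lat in edges.items()}

# Reconvergent example: path 1->2->4 carries 3 units of pipelining latency,
# path 1->3->4 carries 1 unit, so 2 units are added on the shorter path.
edges = {(1, 2): 2, (2, 4): 1, (1, 3): 1, (3, 4): 0}
bal = balance_latency(edges)
```

Note that the recursion assumes an acyclic graph; a dependency cycle would correspond to the infeasible case discussed below, which is resolved by re-floorplanning.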
Our formulation is essentially a
system of differential constraints (SDC), in which all constraints are in the form of
\(x_i - x_j \le b_{ij}\) , where
\(b_{ij}\) is a constant and
\(x_i, x_j\) are variables. Because of this restrictive form of constraints, we can solve SDC as a linear programming problem while the solutions are guaranteed to be integers. As a result, it can be solved in polynomial time [
26,
52].
If the SDC formulation does not have a solution, then there must be a dependency cycle in the dataflow graph [
26]. This means that at least one of the edges in the dependency cycle is pipelined based on the floorplan. In this situation, we feed this information back to the floorplanner to constrain those vertices into the same slot and then re-generate the floorplan.
5.3 Efficient Pipelining Implementation
Figure
10 shows how we add pipelining to a FIFO-based connection. We adopt FIFOs that assert their
full pin before the storage actually runs out, so we can directly register the interface signals without affecting the functionality.
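The reason this is safe can be seen with a small simulation. If full is asserted once occupancy reaches depth minus a grace margin, then the handful of writes still in flight through the added register stages can never overflow the storage. The sketch below models the worst case (a consumer that never reads); the parameter values are illustrative:

```python
from collections import deque

DEPTH, GRACE, STAGES = 8, 2, 2   # STAGES register levels on the full signal

fifo = deque()
full_pipe = deque([False] * STAGES)  # delayed view of `full` at the producer
max_occupancy = 0
for cycle in range(50):
    if not full_pipe[0]:             # producer writes whenever it sees not-full
        fifo.append(cycle)
    max_occupancy = max(max_occupancy, len(fifo))
    # Consumer stalls forever: the worst case for overflow.
    full_now = len(fifo) >= DEPTH - GRACE  # "almost-full" asserted early
    full_pipe.popleft()
    full_pipe.append(full_now)

assert max_occupancy <= DEPTH        # the storage never overflows
```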
6 Optimization for HBM Devices
As will be shown in our evaluation section, the techniques from the previous sections are already effective in delivering significant timing improvements on DDR-based FPGAs. However, more optimization techniques are needed to squeeze the best performance out of state-of-the-art HBM-based FPGAs. In this section, we present three major techniques tailored for the unique architecture of HBM-based FPGAs, where a large set of independent data channels is clustered closely at the edge of the device.
6.1 Reduce BRAM Usage with async_mmap
First, we present a system-level optimization to reduce the resource consumption near the HBM blocks by using the
async_mmap API presented in Section
3.4. When interacting with the AXI interface, existing HLS tools will buffer entire burst transactions using on-chip memories. For a 512-bit AXI interface, the AXI buffers generated by Vitis HLS cost 15 BRAM_18Ks each for the read channel and the write channel. While this is trivial for conventional DDR-based FPGAs where only a few external DDRs are available, such BRAM overhead becomes a huge problem for HBM devices. To use all 32 HBM channels, the AXI buffers alone take away more than 900 BRAM_18Ks, which accounts for more than 70% of the BRAM resources in the bottom SLR.
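The figure quoted above follows directly from the per-interface cost:

```latex
32\ \text{channels} \times (15 + 15)\ \text{BRAM\_18Ks} = 960\ \text{BRAM\_18Ks}.
```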
However, with the
async_mmap interface, we no longer need to set aside a large buffer to accommodate the data in AXI burst transactions, because the flow control mechanism is explicitly included in the user code (Figure
4). Table
3 shows our resource reduction for just one HBM channel.
6.2 Automatic HBM Channel Binding
In the current FPGA HBM architecture, the HBM is divided into 32 channels that are physically bundled into eight groups, where each group contains four adjacent channels joined by a built-in 4 × 4 crossbar. The crossbar provides full connectivity within the group. Meanwhile, each AXI interface at the user side can still access any HBM channel outside its group. The data will sequentially traverse each of the lateral connections until it reaches the crossbar connecting to the target channel; thus, inter-group accesses come with longer latency and potentially less bandwidth due to data link sharing. Therefore, the binding of logical buffers to physical HBM channels affects the design in two ways:
–
Since intra-group access is more efficient compared to inter-group accesses, an inferior binding will negatively affect the available bandwidth.
–
As the HBM channels are hardened to fixed locations, the binding also affects the placement and routing of the logic that connects to HBM. Thus, an unoptimized binding may cause local congestion in the programmable logic near the HBM channels.
Existing CAD tools require that users explicitly specify the mapping of all HBM channels, which requires users to master low-level architecture details. Also, since the binding does not affect the correctness of the design, users are often unaware of suboptimal choices.
To alleviate the problem, we propose a semi-automated solution. We observe that, very often, a design only involves intra-group HBM accesses. In this case, the binding decision does not affect HBM bandwidth or latency and only impacts the placement and routing of nearby logic. Therefore, we implement an API through which users can specify a partial binding of channels (or none at all) and let TAPA automatically determine the binding for the rest.
Specifically, we incorporate the HBM binding process into our floorplanning step. We treat the number of available HBM channels as another type of resource for the slots. Therefore, slots that are directly adjacent to HBM blocks will be treated as having the corresponding number of HBM channels, while other slots will have zero available HBM channel resources. Meanwhile, each task that directly interacts with the HBM channel is treated as requiring one unit of HBM channel resources, and other tasks will be regarded as not requiring HBM resources.
6.3 Generating Multiple Floorplan Candidates
By default, TAPA generates only one floorplan solution, prioritizing a balanced distribution of logic and then pipelining the inter-slot connections accordingly. However, due to the severe local congestion around the bottom die in an HBM device, we need to explore different tradeoffs between logic resource usage and routing resource usage, especially die-crossing wires. One floorplan solution may use fewer logic resources in the bottom die but require more die-crossing wires as logic is pushed to the upper regions; another solution may have the opposite effect. We observe that it is often unpredictable which factor matters more for a given design until routing is done. Note that each floorplan solution comes with a corresponding pipelining scheme that best suits the floorplan results.
Instead of generating only one floorplan solution, we can generate a set of Pareto-optimal points and run physical design concurrently to explore the best results. In our formulation of the floorplan problem, we have a parameter to control the maximal logic resource utilization of each island. Reducing this parameter will reduce local logic resource usage and increase global routing resource usage and vice versa. Therefore, we sweep through a range of this parameter to generate a set of slightly different floorplans and implement them in parallel to achieve the highest frequency.
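The sweep itself is straightforward to script. In the sketch below, `run_implementation` is a hypothetical stand-in for the full floorplan-pipeline-and-implement flow that returns the achieved frequency in MHz; the cap values are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

# Sweep the per-slot utilization cap, implement each floorplan candidate in
# parallel, and keep the fastest result. Threads suffice because each run is
# assumed to be an external, I/O-bound tool invocation.
def explore(run_implementation, caps=(0.60, 0.65, 0.70, 0.75, 0.80)):
    with ThreadPoolExecutor(max_workers=len(caps)) as pool:
        freqs = list(pool.map(run_implementation, caps))
    return max(zip(caps, freqs), key=lambda p: p[1])

def run_impl_stub(cap):
    # Hypothetical placeholder for the real tool flow (MHz, illustrative).
    return 300 - abs(cap - 0.70) * 100

best_cap, best_freq = explore(run_impl_stub)
```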
7 Experiments
7.1 Implementation Details
TAPA is implemented in C++ and Python. We implement our prototype to interface with the CAD flow for AMD/Xilinx FPGAs, including Vitis HLS, Vivado, and Vitis (2021.2). We use the Python MIP package [
72] coupled with Gurobi [
39] to solve the various ILP problems introduced in previous sections. We generate
tcl constraint files to be used by Vivado to enforce our high-level floorplanning scheme.
Meanwhile, we turn off the hierarchy rebuild process during RTL synthesis [
89] to prevent the RTL synthesis tool from introducing additional wire connections between RTL modules. The hierarchy rebuild step first flattens the hierarchy of the RTL design and then tries to rebuild it, and in doing so it may create unpredictable new connections between modules. Consequently, if two modules are floorplanned far apart, these additional wires introduced during RTL synthesis will be under-pipelined, as they are unseen during HLS compilation. Note that disabling this feature may lead to slight differences in the final resource utilization.
We test our designs on the Xilinx Alveo U250 FPGA with 4 DRAMs and the Xilinx Alveo U280 FPGA with HBM. As the DDR controllers are distributed in the middle vertical column while the HBM controller lies at the bottom row, these two FPGA architectures present different challenges to the CAD tools. Thus, it is worthwhile to test them separately.
To run our framework, users first specify how they want to divide the device. By default, we divide the U250 FPGA into a 2-column × 4-row grid and the U280 FPGA into a 2-column × 3-row grid, matching the block diagram of these two architectures shown in Figure
2. To control the floorplanning, users can specify the maximum resource utilization ratio of each slot. The resource utilization is based on the estimation by HLS. Users can also specify how many levels of pipelining to add based on the number of boundary crossings. By default, for each boundary crossing, we add two levels of pipelining to the connection. The processed design is integrated with the Xilinx Vitis infrastructure to communicate with the host.
7.2 Benchmarks
We use two groups of benchmarks to demonstrate the proposed methodologies. We first include six benchmarks that are originally used in AutoBridge [
35] to showcase the frequency improvement from co-optimization of HLS and physical design. AutoBridge uses six representative benchmark designs with different topologies and changes the parameter of the benchmarks to generate a set of designs with varying sizes on both the U250 and the U280 board. The six designs are all large-scale designs implemented and optimized by HLS experts. Figure
11 shows the topology of the benchmarks. Note that even for those benchmarks that seem regular (e.g., CNN), the location constraints from peripheral IPs can highly distort their physical layouts.
–
The stencil designs created by the SODA [
17] compiler are a set of kernels in linear topologies.
–
The genome sequencing design [
37] performing the Minimap2 overlapping algorithm [
53] has
processing elements (PE) in broadcast topology. This benchmark is based on shared-memory communication and all other benchmarks are dataflow designs.
–
The CNN accelerators created by the PolySA [
24] compiler are in a grid topology.
–
The HBM graph processing design [
18] performs the page rank algorithm. It features eight sets of processing units and one central controller. This design also contains dependency cycles if viewed at the granularity of computing kernels.
–
The HBM bucket sort design adapted from Reference [
71] includes eight parallel processing lanes and two fully connected layers.
–
The Gaussian elimination designs created by AutoSA [
83] are in triangle topologies.
In addition, we include three benchmarks that use a large number of HBM channels to demonstrate the newly added HBM-specific optimizations. All three of these benchmarks fail to route with the original AutoBridge; our latest optimizations enable them to route successfully at high frequencies.
–
The
Scalable and Automatic Stencil Acceleration Framework (SASA) [
80] accelerators, where one version uses 24 channels and the other uses 27. Compared to the SODA stencil accelerator used in the original AutoBridge paper, the SASA accelerator also has a much more complicated topology.
–
The HBM
sparse matrix-matrix multiply (SpMM) accelerator [
76] that uses 29 HBM channels.
–
The
sparse matrix-vector multiply (SpMV) accelerators [
75], where one version uses 20 HBM channels and the other uses 28.
7.3 Original Evaluation of AutoBridge
By varying the size of the benchmarks, we have tested the implementation of 43 designs with different configurations in total. Among them, 16 designs failed in routing or placement with the baseline CAD flow; AutoBridge routes all of them successfully and achieves an average frequency of 274 MHz. For the other 27 designs, we improve the final frequency from 234 MHz to 311 MHz on average. In general, we find that AutoBridge is effective for designs that use up to about 75% of the available resources. We execute our framework on an Intel Xeon CPU running at 2.2 GHz. Both the baseline and optimized designs are implemented using Vivado at the highest optimization level, with a target operating frequency of 300 MHz. The final design checkpoint files of all experiments are available in our open-source repository.
In some experiments, the optimized versions have slightly lower resource consumption. Possible reasons are that we adopt a different FIFO template and disable the hierarchy rebuild step during RTL synthesis. Also, because the optimization leads to very different placement results from those of the original version, we expect the physical design tools to adopt different optimization strategies. The correctness of the code is verified by cycle-accurate simulation and on-board execution.
Next, we present the detailed results of each benchmark.
SODA Stencil Computation.
For the stencil computing design, the kernels are connected in a chain through FIFO channels. By adjusting the number of kernels, we can vary the total size of the design. We test configurations from one kernel up to eight kernels, and Figure
12 shows the final frequency of the eight design configurations on both U250 and U280 FPGAs. In the original flow, many design configurations fail in routing due to routing resource conflicts, and those that route successfully still achieve relatively low frequencies. In comparison, with the help of AutoBridge, all design configurations route successfully. On average, we improve the timing from 86 MHz to 266 MHz on the U280 FPGA and from 69 MHz to 273 MHz on the U250 FPGA.
Starting from the seven-kernel design, we observe a frequency decrease on the U280 FPGA. This is because each kernel of the design is very large and uses about half the resources of a slot; thus, starting from the seven-kernel design on the relatively small U280, two kernels must be squeezed into one slot, causing more severe local routing congestion. Based on this observation, we recommend that users avoid designing very large kernels and instead split the functionality into multiple functions to give the tool more flexibility in floorplanning the design.
CNN Accelerator.
The CNN accelerator consists of identical PEs in a regular grid topology. We adjust the size of the grid from a
\(2 \times 13\) array up to a
\(16 \times 13\) array to test the robustness of AutoBridge. Figure
13 shows the result on both U250 and U280 FPGAs.
Although the regular two-dimensional grid structure is presumed to be FPGA-friendly, the actual implementation results from the original tool flow are not satisfactory. With the original tool flow targeting U250, even small designs are capped at around 220 MHz, and larger designs fail in placement ( \(13 \times 12\) ) or routing ( \(13 \times 10\) and \(13 \times 14\) ). Targeting U280, the original tool flow achieves a high frequency when the design is small, but the timing quality drops steadily as the designs grow larger.
In contrast, AutoBridge improves the average frequency from 140 MHz to 316 MHz on U250 and from 214 MHz to 328 MHz on U280. Table
4 lists the resource consumption and cycle counts of the experiments on U250. Statistics on U280 are similar and are omitted here.
Gaussian Elimination.
The PEs in this design form a triangle topology. We adjust the size of the triangle and test on both U250 and U280. Table
5 shows the results. On average, we improve the frequency from 245 MHz to 334 MHz on U250 and from 223 MHz to 335 MHz on U280.
HBM Bucket Sort.
The bucket sort design has two complex fully connected layers. Each fully connected layer involves an
\(8 \times 8\) crossbar of FIFO channels, with each FIFO channel being 256-bit wide. AutoBridge pipelines the FIFO channels to alleviate the routing congestion. Table
6 shows the frequency gain, where we improve from 255 MHz to 320 MHz on U280. As the design requires 16 external memory ports and U250 only has 4 available, the test for this design is limited to U280 only.
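The latency/throughput effect of pipelining a FIFO channel can be illustrated with a minimal cycle-level simulation (a hypothetical sketch, not AutoBridge's actual RTL): relay registers add a fixed number of latency cycles to a connection without reducing its one-word-per-cycle throughput.

```python
def simulate_pipelined_fifo(inputs, stages):
    """Push one (unique) word per cycle through `stages` relay registers.

    Shows that pipelining a FIFO connection adds `stages` cycles of latency
    while the channel still delivers one word per cycle.
    Returns a dict mapping each word to the cycle it reaches the consumer.
    """
    regs = [None] * stages
    arrivals = {}
    cycle = 0
    # Append `stages` bubbles at the end to flush the pipeline.
    stream = list(inputs) + [None] * stages
    for word in stream:
        out = regs[-1] if stages else word   # word leaving the last register
        if stages:
            regs = [word] + regs[:-1]        # shift all registers downstream
        if out is not None:
            arrivals[out] = cycle
        cycle += 1
    return arrivals

print(simulate_pipelined_fifo(["a", "b", "c"], 2))  # prints {'a': 2, 'b': 3, 'c': 4}
```

With two relay stages, every word arrives exactly two cycles later than in the unpipelined channel, but consecutive words still arrive back-to-back.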
The original source code enforces a BRAM-based implementation for some small FIFOs, which wastes BRAM resources. In comparison, we use a different FIFO template that chooses the implementation style (BRAM-based or shift-register-based) based on the area of the FIFO; as a result, the AutoBridge implementation has slightly lower BRAM and flip-flop consumption than the original one. Cycle-accurate simulation has verified the correct functionality of our optimized implementation.
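A minimal sketch of such an area-based selection policy follows; the 1,024-bit threshold is an illustrative assumption, not the template's actual cutoff:

```python
def choose_fifo_style(width_bits, depth, srl_threshold_bits=1024):
    """Pick a FIFO implementation style from its storage area (a sketch).

    Small FIFOs map well to shift registers (SRLs) in LUTs, while large
    FIFOs amortize the fixed cost of a dedicated BRAM block.
    """
    if width_bits * depth <= srl_threshold_bits:
        return "shift-register"
    return "BRAM"

print(choose_fifo_style(32, 16))   # 512 bits  -> prints shift-register
print(choose_fifo_style(256, 32))  # 8192 bits -> prints BRAM
```

Forcing small FIFOs into BRAM, as the original bucket-sort code does, burns a whole block on a few hundred bits of storage, which is why the area-based rule recovers BRAM resources.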
HBM Page Rank.
This design incorporates eight sets of processing units, each interfacing with two HBM ports. There are also centralized control units that exchange control information with five HBM ports. Table
7 shows the experimental results; we improve the final frequency from 136 MHz to 210 MHz on U280.
7.4 HBM-Specific Optimizations
In this section, we use four real-world designs from premier academic conferences and journals to demonstrate the effects of our HBM-specific optimizations. We select these designs because they use a large number of HBM channels, which poses a serious timing closure challenge.
HBM SpMM and SpMV Accelerators.
The SpMM and the SpMV accelerators leverage the
async_mmap API, automatic HBM channel binding, die-crossing wire adjusting, and multi-floorplan generation to achieve the best performance. We implement two versions of the SpMV accelerator, SpMV_A24 and SpMV_A16, with different numbers of parallel processing elements. We report the user clock and HBM clock frequencies and the resource utilization in Table
8. We improve both the user clock and the HBM clock frequencies for all three designs. In particular, for SpMV_A24, we improve the user clock frequency from 193 MHz to 283 MHz and the HBM clock frequency from 430 MHz to 450 MHz. With the
async_mmap API, we significantly reduce BRAM utilization: for SpMM and SpMV_A24, the total BRAM utilization drops by 10%.
HBM Stencil Accelerators by SASA.
The SASA design incorporates the async_mmap API, automatic HBM channel binding, die-crossing wire adjusting, and floorplan candidate generation to push the user clock frequency above 225 MHz and the HBM clock frequency to 450 MHz, which enables the accelerator to fully utilize the HBM bandwidth. For stencil algorithms with a low number of iterations, SASA leverages efficient spatial parallelism: at the start, each kernel reads one tile of input data plus additional halo data from neighboring tiles; each kernel then performs the computation for all iterations (if any) without synchronization. Each kernel works in a streaming pattern and uses at least two HBM banks to store the input and output. The original design based on mmap fails to meet the frequency requirement; with the async_mmap API, we are able to significantly reduce the BRAM utilization. With all optimizations, the two selected designs achieve 241 MHz and 250 MHz, respectively.
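The tile-plus-halo access pattern can be sketched in one dimension as follows (a hypothetical illustration; SASA's actual tiling is more involved). Each kernel's working set includes `halo` extra elements from each neighboring tile so that every stencil iteration can proceed without inter-kernel synchronization:

```python
def tile_with_halo(data, tile_size, halo):
    """Split `data` into tiles, each extended by `halo` boundary elements
    borrowed from the neighboring tiles (1-D sketch of SASA-style tiling).
    """
    tiles = []
    for start in range(0, len(data), tile_size):
        lo = max(0, start - halo)                      # halo from left neighbor
        hi = min(len(data), start + tile_size + halo)  # halo from right neighbor
        tiles.append(data[lo:hi])
    return tiles

print(tile_with_halo(list(range(8)), 4, 1))
# prints [[0, 1, 2, 3, 4], [3, 4, 5, 6, 7]]
```

Because each tile already holds the boundary data it needs, the kernels never exchange data mid-computation, which matches the streaming, synchronization-free execution described above.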
Results of Multi-floorplan Generation.
For HBM designs that are sensitive to both logic resource utilization and routing resource utilization, we generate a set of Pareto-optimal floorplans and implement all of them in search of the best result. Table
10 shows the corresponding achievable frequencies. The number of generated floorplan candidates is related to the granularity of the design: designs with larger tasks have less flexibility in floorplanning, so there are fewer points on the Pareto-optimal curve. It remains future work to automatically split large tasks and fuse small tasks to better facilitate the floorplanning process.
As can be seen, even with the same set of optimization techniques, slightly different floorplans may lead to non-trivial variation in the final achievable frequency. At this stage, we treat the downstream tools as a black box, so we implement all generated floorplan schemes in parallel to search for the best result. Better predicting the final frequency and skipping unpromising floorplans at an early stage remains future work.
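The candidate filtering step can be sketched as a standard Pareto filter over two cost metrics, here assumed for illustration to be per-slot resource utilization and die-crossing wire count (lower is better for both; the tool's actual metrics may differ):

```python
def pareto_floorplans(candidates):
    """Keep candidates that are Pareto-optimal in two cost metrics.

    Each candidate is a (utilization, crossing_wires) tuple; a candidate is
    dropped if another candidate is no worse in both metrics and differs
    in at least one.
    """
    keep = []
    for c in candidates:
        dominated = any(
            o != c and o[0] <= c[0] and o[1] <= c[1] for o in candidates
        )
        if not dominated:
            keep.append(c)
    return keep

print(pareto_floorplans([(0.6, 100), (0.5, 120), (0.7, 90), (0.65, 110)]))
# prints [(0.6, 100), (0.5, 120), (0.7, 90)]
```

Only (0.65, 110) is dropped, since (0.6, 100) beats it on both metrics; the three survivors trade logic pressure against routing pressure and are all worth implementing.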
7.5 Control Experiments
First, we test whether the frequency gain comes from the combination of pipelining and HLS-floorplanning or from pipelining alone. To do this, we set up a control group where we perform floorplanning and pipelining as usual but do not pass the floorplan constraints to the physical design tools. The blue curve with triangle markers in Figure
15 shows the results. As can be seen, the control group has a lower frequency than the original design for small sizes and only limited improvements over the original designs for large sizes. In all experiments, the group with both pipelining and floorplan constraints (green curve with cross markers) achieves the highest frequency. This experiment shows that the frequency gain is not simply a result of more pipelining.
Meanwhile, if we only perform floorplanning without pipelining, then the frequency degrades significantly, as visualized in Figure
3.
Second, we test the effectiveness of setting slot boundaries based on the DDR controllers. We run a set of experiments where we divide the FPGA into only four slots based on the die boundaries, without the division along the middle column. The yellow curve with diamond markers in Figure
15 shows the results. As can be seen, this scheme achieves a lower frequency than our default eight-slot scheme.
7.6 Scalability
To show that the tool works well on designs with large numbers of small functions, we utilize the CNN experiments to test the scalability of our algorithms, as the CNN designs have the most vertices (HLS functions) and edges. Table
11 lists the compile-time overhead of the floorplanning and latency balancing steps when using Gurobi as the ILP solver.
For the largest CNN accelerator, which has 493 modules and 925 FIFO connections, the floorplanning step takes only around 20 seconds and the latency balancing step takes 0.03 seconds. FPGA designs are unlikely to have this many modules and connections [
43,
93], and our method is fast enough.
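To illustrate the floorplanning objective on a toy instance, the sketch below replaces the ILP solver with brute-force enumeration: assign modules to slots subject to a per-slot area cap while minimizing the total width of slot-crossing connections. The names and the cost model are simplified assumptions for illustration, not the tool's actual formulation:

```python
from itertools import product

def floorplan_bruteforce(areas, edges, n_slots, slot_cap):
    """Toy stand-in for the ILP floorplanner.

    `areas`: module -> resource area; `edges`: (src, dst, wire_width) list.
    Enumerate all module-to-slot assignments, keep those meeting the
    per-slot area cap, and minimize the total slot-crossing wire width.
    """
    modules = list(areas)
    best, best_cost = None, float("inf")
    for assign in product(range(n_slots), repeat=len(modules)):
        slot_of = dict(zip(modules, assign))
        # Respect the per-slot resource budget.
        loads = [0] * n_slots
        for m, s in slot_of.items():
            loads[s] += areas[m]
        if max(loads) > slot_cap:
            continue
        cost = sum(w for (u, v, w) in edges if slot_of[u] != slot_of[v])
        if cost < best_cost:
            best, best_cost = slot_of, cost
    return best, best_cost

# A 4-module chain split across 2 slots: the best cut severs only one edge.
areas = {"A": 2, "B": 2, "C": 2, "D": 2}
edges = [("A", "B", 1), ("B", "C", 1), ("C", "D", 1)]
print(floorplan_bruteforce(areas, edges, n_slots=2, slot_cap=4)[1])  # prints 1
```

Enumeration is exponential in the number of modules, which is exactly why the real tool uses an ILP solver; the sketch only makes the objective and constraints concrete.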
8 Related Work
Layout-aware HLS Optimization. Previous works have studied how to couple the physical design process with HLS in a
fine-grained manner. Zheng et al. [
98] propose to iteratively run placement and routing for fine-grained calibration of the delay estimation of wires. The long runtime of placement and routing prevents their method from benefiting large-scale designs, and their experiments are all based on small examples (thousands of registers and tens of DSPs). Cong et al. [
23] presented placement-driven scheduling and binding for multi-cycle communications in an island-style reconfigurable architecture. Xu et al. [
94] proposed to predict a register-level floorplan to facilitate the binding process. Some commercial HLS tools [
6,
78] have utilized the results of logic synthesis to calibrate HLS delay estimation, but they do not consider the interconnect delays. Chen et al. [
9] propose implementing HLS as a sub-routine to adjust the delay/power/variability/area of the circuit modules during the physical planning process across different IC layers. They report a timing improvement of 8% on synthetic designs, whereas we achieve almost a 2× frequency improvement. Kim et al. [
48] propose to combine architectural synthesis with placement under distributed-register architecture to minimize the system latency. Stammermann et al. [
77] proposed methods to simultaneously perform floorplanning and functional unit binding to reduce power on interconnects.
These approaches all focus on the fine-grained interaction between physical design and upstream synthesis, where individual operators and the associated wires and registers are all involved in delay prediction and iterative pipelining co-optimization. While such a fine-grained method can be effective on relatively small designs and FPGA devices, it is too expensive (if not infeasible) for today's large designs targeting multi-die FPGAs, where each implementation iteration may take days to complete.
In contrast, we focus on a coarse-grained approach that only pipelines the channels that span long distances and guides the detailed placement.
Other works have studied methods to predict delay estimation at the behavior level. Guo et al. [
36] proposed to calibrate the estimated delay for operators with large broadcast factors by pre-characterizing benchmarks with different broadcast factors. Tan et al. [
79] showed that the delay prediction of logic operations (e.g.,
AND,
OR,
NOT) by HLS tools is too conservative. Therefore, they consider the technology mapping for logic operations. These works mainly target local operators and have limited effects on global interconnects. Zhao et al. [
97] used machine learning to predict how manual pragmas affect routing congestion.
In addition, Cong et al. [
25] presented tools that allow users to insert additional buffers into designated datapaths. Chen et al. [
10] proposed to add additional registers to the pipelined datapath during HLS synthesis based on the profiling results on the CHStone benchmark. Reference [
96] proposes to generate floorplanning constraints only for systolic array designs, and their method does not consider the interaction with peripheral IPs such as DDR controllers. In comparison, our work is fully automated for general designs, and our register insertion is accurate due to HLS-floorplan co-design.
Optimization for Multi-die FPGAs. To adapt to multi-die FPGAs, previous works have studied how to partition the entire design or memories among different dies [
15,
40,
47,
58,
62,
70,
82]. These methods all operate on RTL inputs; thus, the partitioning must preserve the cycle-accurate specification. References [
40,
62] try to modify the cost function of placement to reduce die-crossing. This leads to designs confined to fewer dies with a higher level of local congestion. Zha et al. [
95] propose methods to virtualize the FPGA and let different applications execute at different partitions. Xiao et al. [
87,
88] propose methods to split the placement and routing of different parts of the design through dynamic reconfiguration.
Floorplanning Algorithms. Floorplanning has been extensively studied [
2,
3,
14,
61]. Conventionally, floorplanning consists of (1) feasible topology generation and (2) determining the aspect ratios for goals such as minimal total wire length. In the existing FPGA CAD flows, the floorplanning step works on RTL input. In contrast, we propose to perform coarse-grained floorplanning during the HLS step to help gain layout information for the HLS tool. Similar to References [
49,
50,
60], our algorithm adopts the idea of the partitioning-based approach. As our problem size is relatively small, we use ILP for each partitioning.
Throughput Analysis of Dataflow Designs. Various dataflow models have been proposed in the literature, such as the
Kahn Process Network (KPN) [
34] and synchronous dataflow (SDF) [
51], among many others. The simpler the model, the more accurately its throughput can be analyzed. The SDF model requires that the number of data items produced or consumed by a process on each firing be fixed and known. Therefore, it is possible to analytically compute the influence of additional latency on throughput [
33]. The latency-insensitive theory (LIT) [
1,
8,
22,
55,
56] also enforces similar restrictions as SDF. Reference [
81] proposes methods to insert delays when composing IP blocks of different latency. Reference [
45] studies the buffer placement problem in dataflow circuits [
11,
12,
13,
44]. Other works have studied how to map dataflow programs to domain-specific coarse-grained reconfigurable architectures [
27,
28,
85,
86].
In our scenario, each function is compiled into an FSM that can be arbitrarily complex, so it is difficult to quantitatively analyze the effect of the added latency on the total execution cycles. We instead adopt a conservative approach that balances the added latency on all reconvergent paths.
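A simplified sketch of such conservative balancing on a DAG: compute each node's worst-case arrival time in topological order, then pad every shorter incoming edge with extra registers so all reconvergent paths carry equal latency. (The actual latency-balancing step uses an ILP solver; this greedy version only illustrates the balancing condition, and the node and edge names are illustrative.)

```python
def balance_latencies(nodes, edges):
    """Balance added latency on all reconvergent paths of a DAG.

    `nodes` must be in topological order; `edges` maps (src, dst) to the
    pipeline latency added on that connection. Returns a map from each edge
    to the number of padding registers needed so that every path into a
    node has the same total latency.
    """
    arrival = {n: 0 for n in nodes}
    for n in nodes:
        incoming = [(s, d) for (s, d) in edges if d == n]
        if incoming:
            # Worst-case arrival time over all incoming paths.
            arrival[n] = max(arrival[s] + edges[(s, d)] for (s, d) in incoming)
    # Pad each edge up to the destination's worst-case arrival time.
    return {
        (s, d): arrival[d] - (arrival[s] + edges[(s, d)])
        for (s, d) in edges
    }

# Diamond a->b->d and a->c->d: the a->c->d path is 3 cycles shorter.
edges = {("a", "b"): 2, ("b", "d"): 2, ("a", "c"): 1, ("c", "d"): 0}
print(balance_latencies(["a", "b", "c", "d"], edges))
# prints {('a', 'b'): 0, ('b', 'd'): 0, ('a', 'c'): 0, ('c', 'd'): 3}
```

Here the shorter branch through c receives three padding registers on its final edge, so both paths from a to d carry four cycles of added latency and the relative arrival order of data at d is preserved.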