AXI Reference Guide

UG761 (v14.1) April 24, 2012
The information disclosed to you hereunder (the Materials) is provided solely for the selection and use of Xilinx products. To the maximum
extent permitted by applicable law: (1) Materials are made available AS IS and with all faults, Xilinx hereby DISCLAIMS ALL WARRANTIES
AND CONDITIONS, EXPRESS, IMPLIED, OR STATUTORY, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY,
NON-INFRINGEMENT, OR FITNESS FOR ANY PARTICULAR PURPOSE; and (2) Xilinx shall not be liable (whether in contract or tort, including
negligence, or under any other theory of liability) for any loss or damage of any kind or nature related to, arising under, or in connection with,
the Materials (including your use of the Materials), including for any direct, indirect, special, incidental, or consequential loss or damage
(including loss of data, profits, goodwill, or any type of loss or damage suffered as a result of any action brought by a third party) even if such
damage or loss was reasonably foreseeable or Xilinx had been advised of the possibility of the same. Xilinx assumes no obligation to correct
any errors contained in the Materials, or to advise you of any corrections or update. You may not reproduce, modify, distribute, or publicly
display the Materials without prior written consent. Certain products are subject to the terms and conditions of the Limited Warranties which
can be viewed at http://www.xilinx.com/warranty.htm; IP cores may be subject to warranty and support terms contained in a license issued
to you by Xilinx. Xilinx products are not designed or intended to be fail-safe or for use in any application requiring fail-safe performance; you
assume sole risk and liability for use of Xilinx products in Critical Applications: http://www.xilinx.com/warranty.htm#critapps.
Copyright 2012 Xilinx, Inc. Xilinx, the Xilinx logo, Artix, ISE, Kintex, Spartan, Virtex, Zynq, and other designated brands included herein are
trademarks of Xilinx in the United States and other countries. All other trademarks are the property of their respective owners.
Revision History
The following table shows the revision history for this document:
Date Version Description of Revisions
03/01/2011 13.1 Second Xilinx release. Added new AXI Interconnect features.
Corrected ARESETN description in Appendix A.
03/07/2011 13.1_web Corrected broken link.
07/06/2011 13.2 Release changes:
Updated AXI Interconnect IP features and use cases.
Added Optimization chapter.
10/19/2011 13.3 Release updates:
Added information about an AXI Interconnect option to delay assertion of
AWVALID/ARVALID signals until FIFO occupancy permits interrupted burst
transfers to AXI Interconnect Core Features, page 14.
Added limitation related to CORE Generator use in AXI Interconnect Core
Limitations, page 17.
Added the impact of the delay assertion BRAM FIFO option as a means of improving
throughput to Table 5-1, page 86.
Added the impact of the delay assertion BRAM FIFO option as a means of improving
throughput to Throughput / Bandwidth Optimization Guidelines, page 90.
Added reference to the AXI MPMC Application Note (XAPP739) to AXI4-based
Multi-Ported Memory Controller: AXI4 System Optimization Example, page 92.
Added information regarding the AXI Interconnect option to delay assertion of
AWVALID/ARVALID signals until FIFO occupancy permits interrupted burst
transfers to Refining the AXI Interconnect Configuration, page 96.
Added information about using the BSB for an AXI design in Using Base System
Builder Without Analyzing and Optimizing Output, page 104.
Added reference to the AXI MPMC Application Note (XAPP739) to Appendix C,
Additional Resources.
01/18/2012 13.4 Modified:
References to 7 series and Zynq Extensible Platform devices in Introduction in
Chapter 1, Introducing AXI for Xilinx System Development.
Figure 2-1, page 11 and Figure 2-4, page 13 to reflect new IP Catalog in tools.
Data Widths throughout document.
Reset statement in Table 3-1, page 41.
Signal names in Slave FSL to AXI4-Stream Signal Mapping, page 78.
Added:
References to new design templates, documented in
http://www.xilinx.com/support/answers/37425.htm, in Chapter 2, AXI
Support in Xilinx Tools and IP.
Information about the Data Mover in Chapter 2. Changed all bit widths to include
512, 1024.
Information to Centralized DMA in Chapter 2, and Video DMA in Chapter 2.
Note to TSTRB in Table 3-2, page 43.
Note to DSP and Wireless IP: AXI Feature Adoption, page 54.
Migrating Designs from XSVI to the AXI4-Stream Video Protocol.
References to new design templates, documented in
http://www.xilinx.com/support/answers/37425.htm, in
Migrating to AXI for IP Cores in Chapter 4.
Section for Video IP: AXI Feature Adoption, page 55.
References for an example of an AXI MPMC used in a high-performance system,
Designing High-Performance Video Systems with the AXI Interconnect (XAPP740), in
AXI4-based Multi-Ported Memory Controller: AXI4 System Optimization Example
in Chapter 5.
Information about DSP IP in Table A-1.
New links in Appendix C, Additional Resources.
04/24/2012 14.1 Reorganized How AXI Works in Chapter 1.
Added AXI4-Stream IP Interoperability in Chapter 1.
Reworded AXI4-Stream Adoption and Support in Chapter 3.
Added upsizer/downsizer content to Real Scalar Data Example in Chapter 3.
Modified:
All instances of Video over AXI4-Stream to AXI4-Stream Video Protocol.
Figure 3-15
Table 3-6, page 66
Removed redundant figure in Chapter 3.
Added:
Chapter 6, AXI4-Stream IP Interoperability: Tips and Hints.
Table of Contents
Revision History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Chapter 1: Introducing AXI for Xilinx System Development
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
What is AXI? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
How AXI Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
IP Interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
AXI4-Stream IP Interoperability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
What AXI Protocols Replace. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Targeted Reference Designs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Chapter 2: AXI Support in Xilinx Tools and IP
AXI Development Support in Xilinx Design Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Xilinx AXI Infrastructure IP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Chapter 3: AXI Feature Adoption in Xilinx FPGAs
Memory Mapped IP Feature Adoption and Support. . . . . . . . . . . . . . . . . . . . . . . . . . 41
AXI4-Stream Adoption and Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
DSP and Wireless IP: AXI Feature Adoption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Video IP: AXI Feature Adoption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Chapter 4: Migrating to Xilinx AXI Protocols
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
The AXI To PLBv.46 Bridge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Migrating Local-Link to AXI4-Stream. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Using System Generator for Migrating IP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Migrating a Fast Simplex Link to AXI4-Stream. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Migrating HDL Designs to use DSP IP with AXI4-Stream. . . . . . . . . . . . . . . . . . . . 77
Migrating Designs from XSVI to the AXI4-Stream Video Protocol. . . . . . . . . . . . 80
Tool Considerations for AXI Migration (Endian Swap) . . . . . . . . . . . . . . . . . . . . . . 80
General Guidelines for Migrating Big-to-Little Endian. . . . . . . . . . . . . . . . . . . . . . . 81
Data Types and Endianness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
High End Verification Solutions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Chapter 5: AXI System Optimization: Tips and Hints
AXI System Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
AXI4-based Multi-Ported Memory Controller:
AXI4 System Optimization Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Common Pitfalls Leading to AXI Systems of Poor Quality Results . . . . . . . . . . 101
Chapter 6: AXI4-Stream IP Interoperability: Tips and Hints
Key Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Domain Usage Guidelines and Conventions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Domain-Specific Data Interpretation and Interoperability Guidelines. . . . . . . 112
Appendix A: AXI Adoption Summary
AXI4 and AXI4-Lite Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
AXI4-Stream Signal Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Appendix B: AXI Terminology
Appendix C: Additional Resources
Xilinx Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Third Party Documents. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Chapter 1
Introducing AXI for Xilinx System Development
Introduction

Xilinx adopted the Advanced eXtensible Interface (AXI) protocol for Intellectual Property
(IP) cores beginning with the Spartan-6 and Virtex-6 devices. AXI is part of the ARM
Advanced Microcontroller Bus Architecture (AMBA) family of specifications.

The Base System Builder (BSB) is available in the ISE Design Suite. Xilinx recommends
using the BSB to start new designs. Refer to the XPS Help for more information.
Xilinx Platform Studio (XPS) provides a block-based system assembly tool for
connecting IP blocks using many bus interfaces (including AXI) to create
embedded systems, with or without processors. XPS provides a graphical interface for
connection of processors, peripherals, and bus interfaces.
Software Development Toolkit (SDK) is the development environment for
application projects. SDK is built with the Eclipse open source standard. For
AXI-based embedded systems, hardware platform specifications are exported in an
XML format to SDK (XPS-based software development and debugging is not
supported.) Refer to SDK Help for more information.
More information on EDK is available at:
http://www.xilinx.com/support/documentation/dt_edk.htm.
Creating and Importing AXI IP
XPS contains a Create and Import Peripheral (CIP) wizard that automates adding your IP
to the IP repository in Platform Studio.
Chapter 2: AXI Support in Xilinx Tools and IP
Debugging and Verifying Designs: Using ChipScope in XPS
The ChipScope Pro Analyzer AXI monitor core, chipscope_axi_monitor, aids in
monitoring and debugging Xilinx AXI4 or AXI4-Lite protocol interfaces. This core lets you
probe any memory-mapped AXI master or slave bus interface. It is available in XPS.
With this probe you can observe the AXI signals going from the peripheral to the AXI
Interconnect core. For example, you can set a monitor on a MicroBlaze processor
instruction or data interface to observe all memory transactions going in and out of the
processor.
Each monitor core works independently, and allows chaining of trigger outputs to enable
taking system level measurements. By using the auxiliary trigger input port and the trigger
output of a monitor core you can create multi-level triggering environments to simplify
complex system-level measurements.
For example, if you have a master operating at 100 MHz and a slave operating at 50 MHz,
this multi-tiered triggering lets you analyze the transfer of data going from one time
domain to the next. Also, with this system-level measurement, you can debug complex
multi-time domain system-level issues, and analyze latency bottlenecks in your system.
You can add the chipscope_axi_monitor core to your system using the IP Catalog in XPS
available under the /debug folder as follows:
1. Put the chipscope_axi_monitor into your bus interface System Assembly View (SAV).
2. Select the bus you want to probe from the Bus Name field.
After you select the bus, an M for monitor displays between your peripheral and the
AXI Interconnect core IP.
3. Add a ChipScope ICON core to your system, and connect the control bus to the AXI
monitor.
4. In the SAV Ports tab, on the monitor core, set up the MON_AXI_ACLK port of the core to
match the clock used by the AXI interface being probed.
Optionally, you can assign the MON_AXI_TRIG_OUT port and connect it to other
chipscope_axi_monitor cores in the system.
Using Processor-less Embedded IP in Project Navigator
You might want to use portions of EDK IP outside of a processor system. For example, you
can use an AXI Interconnect core block to create a multiported DDR3 controller. XPS can be
used to manage, connect, and deliver EDK IP, even without a processor. See Xilinx Answer
Record 37856 for more information.
Using System Generator: DSP Edition
System Generator for DSP supports both AXI4 and AXI4-Stream interfaces:
AXI4 interface is supported in conjunction with the EDK Processor Block.
AXI4-Stream interface is supported in IPs found in the System Generator AXI4 block
library.
AXI4 Support in System Generator
AXI4 (memory mapped) support in System Generator is available through the EDK
Processor block found in the System Generator block set.
The EDK Processor block lets you connect hardware circuits created in System Generator
to a Xilinx MicroBlaze processor; options to connect to the processor using either a
PLBv4.6 or an AXI4 interface are available.
You do not need to be familiar with the AXI4 nomenclature when using the System
Generator flow because the EDK Processor block provides an interface that is
memory-centric and works with multiple bus types.
You can create hardware that uses shared registers, shared FIFOs, and shared memories,
and the EDK Processor block manages the memory connection to the specified interface.
Figure 2-1 shows the EDK Processor Implementation tab with an AXI4 bus type selected.
Port Name Truncation
System Generator shortens the AXI4-Stream signal names to improve readability on the
block; this is cosmetic and the complete AXI4-Stream name is used in the netlist. The name
truncation is turned on by default; uncheck the Display shortened port names option in
the block parameter dialog box to see the full name.
Port Groupings
System Generator groups together and color-codes blocks of AXI4-Stream channel signals.
In the example illustrated in the following figure, the top-most input port, data_tready,
and the top two output ports, data_tvalid and data_tdata, belong to the same
AXI4-Stream channel, as do phase_tready, phase_tvalid, and phase_tdata.
System Generator gives signals that are not part of any AXI4-Stream channels the same
background color as the block; the rst signal, shown in Figure 2-2, page 12, is an example.
Figure 2-1: EDK Processor Interface Implementation Tab
Breaking Out Multi-Channel TDATA
The TDATA signal in an AXI4-Stream can contain multiple channels of data. In System
Generator, the individual channels for TDATA are broken out; for example, in the complex
multiplier shown in Figure 2-3 the TDATA for the dout port contains both the imaginary
and the real number components.
Note: Breaking out multi-channel TDATA does not add additional logic to the design. The data
also remains correctly byte-aligned.
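The byte-aligned packing described in the note can be illustrated with a small sketch (Python; the 16-bit channel width and the real/imaginary field layout are assumptions for illustration only — System Generator performs this alignment automatically):

```python
def pack_tdata(real: int, imag: int, width_bits: int = 16) -> int:
    """Pack a real/imaginary channel pair into one TDATA word.

    AXI4-Stream TDATA is organized in bytes, so each channel field is
    padded up to a byte boundary before the next channel starts.
    """
    field = ((width_bits + 7) // 8) * 8      # byte-aligned field width
    mask = (1 << width_bits) - 1
    return ((imag & mask) << field) | (real & mask)

def unpack_tdata(tdata: int, width_bits: int = 16):
    """Recover the individual channels that System Generator breaks out."""
    field = ((width_bits + 7) // 8) * 8
    mask = (1 << width_bits) - 1
    return (tdata & mask, (tdata >> field) & mask)

# Two 16-bit channels occupy one 32-bit TDATA word.
word = pack_tdata(0x1234, 0x5678)            # -> 0x56781234
```

A 12-bit channel, for example, would still occupy a 16-bit (two-byte) field, which is why no extra logic is needed to keep the broken-out channels aligned.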
For more information about System Generator and AXI IP creation, see the following
Xilinx website: http://www.xilinx.com/tools/sysgen.htm.
Figure 2-2: Block Signal Groupings
Figure 2-3: Multi-Channel TDATA
Using Xilinx AXI IP: Logic Edition
Xilinx IP with an AXI4 interface can be accessed directly from the IP catalog in CORE
Generator, Project Navigator, and PlanAhead. An AXI4 column in the IP catalog shows IP
with AXI4 support. The IP information panel displays the supported AXI4, AXI4-Stream,
and AXI4-Lite interfaces.
Generally, Xilinx IP standardizes on AXI interfaces for Virtex-6 and Spartan-6 devices,
7 series devices, and future device support. The Xilinx AXI infrastructure IP includes:
Xilinx AXI Interconnect Core IP
Connecting AXI Interconnect Core Slaves and Masters
External Masters and Slaves
Data Mover
Centralized DMA
Ethernet DMA
Video DMA
Memory Control IP and the Memory Interface Generator
Refer to Chapter 4, Migrating to Xilinx AXI Protocols, for more detailed usage
information. See the following for a list of all AXI IP:
http://www.xilinx.com/support/documentation/axi_ip_documentation.htm.
Appendix C, Additional Resources, also contains this link.
Xilinx AXI Interconnect Core IP
The AXI Interconnect core IP (axi_interconnect) connects one or more AXI
memory-mapped master devices to one or more memory-mapped slave devices. The AXI
interfaces conform to the AMBA AXI version 4 specification from ARM, including the
AXI4-Lite control register interface subset.
Note: The AXI Interconnect core IP is intended for memory-mapped transfers only; AXI4-Stream
transfers are not applicable. IP with AXI4-Stream interfaces are generally connected to one another,
and to DMA IP.
The AXI Interconnect core IP is provided as a non-encrypted, non-licensed (free) pcore in
the Xilinx Embedded Development Kit (EDK) and in Project Navigator for use in
non-embedded designs using the CORE Generator tool.
See the AXI_Interconnect IP (DS768) for more information. Appendix C, Additional
Resources, also contains this link.
AXI Interconnect Core Features
The AXI Interconnect IP contains the following features:
AXI protocol compliant (AXI3, AXI4, and AXI4-Lite), which includes:
Burst lengths up to 256 for incremental (INCR) bursts
Converts AXI4 bursts >16 beats when targeting AXI3 slave devices by splitting
transactions.
Generates REGION outputs for use by slave devices with multiple address decode
ranges
Propagates USER signals on each channel, if any; independent USER signal width
per channel (optional)
Propagates Quality of Service (QoS) signals, if any; not used by the AXI
Interconnect core (optional)
Interface data widths:
AXI4: 32, 64, 128, 256, 512, or 1024 bits.
AXI4-Lite: 32 bits
32-bit address width
The Slave Interface (SI) of the core can be configured to include 1-16 SI slots to accept
transactions from up to 16 connected master devices. The Master Interface (MI) can be
configured to comprise 1-16 MI slots to issue transactions to up to 16 connected slave
devices.
Connects 1-16 masters to 1-16 slaves:
When connecting one master to one slave, the AXI Interconnect core can
optionally perform address range checking. Also, it can perform any of the
normal data-width, clock-rate, or protocol conversions and pipelining.
When connecting one master to one slave and not performing any conversions or
address range checking, pathways through the AXI Interconnect core are
implemented as wires, with no resources, no delay and no latency.
Note: When used in a non-embedded system such as CORE Generator, the AXI Interconnect core
connects multiple masters to one slave, typically a memory controller.
Built-in data-width conversion:
Each master and slave connection can independently use data widths of 32, 64,
128, 256, 512, or 1024 bits wide:
- The internal crossbar can be configured to have a native data-width of 32, 64,
128, 256, 512, or 1024 bits.
- Data-width conversion is performed for each master and slave connection
that does not match the crossbar native data-width.
When converting to a wider interface (upsizing), data is packed (merged)
optionally, when permitted by address channel control signals (CACHE
modifiable bit is asserted).
When converting to a narrower interface (downsizing), burst transactions can be
split into multiple transactions if the maximum burst length would otherwise be
exceeded.
Built-in clock-rate conversion:
Each master and slave connection can use independent clock rates
Synchronous integer-ratio (N:1 and 1:N) conversion to the internal crossbar native
clock-rate.
Asynchronous clock conversion (uses more storage and incurs more latency than
synchronous conversion).
The AXI Interconnect core exports reset signals resynchronized to the clock input
associated with each SI and MI slot.
Built-in AXI4-Lite protocol conversion:
The AXI Interconnect core can connect to any mixture of AXI4 and AXI4-Lite
masters and slaves.
The AXI Interconnect core saves transaction IDs and restores them during
response transfers, when connected to an AXI4-Lite slave.
- AXI4-Lite slaves do not need to sample or store IDs.
The AXI Interconnect core detects illegal AXI4-Lite transactions from AXI4
masters, such as any transaction that accesses more than one word. It generates a
protocol-compliant error response to the master, and does not propagate the
illegal transaction to the AXI4-Lite slave.
Write and read transactions are single-threaded to AXI4-Lite slaves, propagating
only a single address at a time, which typically nullifies the resource overhead of
separate write and read address signals.
Built-in AXI3 protocol conversion:
The AXI Interconnect core splits burst transactions of more than 16 beats from
AXI4 masters into multiple transactions of no more than 16 beats when connected
to an AXI3 slave.
Optional register-slice pipelining:
Available on each AXI channel connecting to each master and each slave.
Facilitates timing closure by trading-off frequency vs. latency.
One latency cycle per register-slice, with no loss in data throughput under all AXI
handshaking conditions.
Optional data-path FIFO buffering:
Available on write and read data paths connecting to each master and each slave.
32-deep LUT-RAM based.
512-deep block RAM based.
Option to delay assertion of:
- AWVALID until the complete burst is stored in the W-channel FIFO
- ARVALID until the R-channel FIFO has enough vacancy to store the entire
burst length
Selectable Interconnect Architecture:
Shared-Address, Multiple-Data (SAMD) crossbar:
- Parallel crossbar pathways for write data and read data channels. When more
than one write or read data source has data to send to different destinations,
data transfers can occur independently and concurrently, provided AXI
ordering rules are met.
- Sparse crossbar data pathways according to configured connectivity map,
resulting in reduced resource utilization.
- One shared write address arbiter, plus one shared Read address arbiter.
Arbitration latencies typically do not impact data throughput when
transactions average at least three data beats.
Shared Access Shared Data (SASD) mode (Area optimized):
- Shared write data, shared read data, and single shared address pathways.
- Issues one outstanding transaction at a time.
- Minimizes resource utilization.
Supports multiple outstanding transactions:
Supports masters with multiple reordering depth (ID threads).
Supports up to 16-bit wide ID signals (system-wide).
Supports write response re-ordering, read data re-ordering, and read data
interleaving.
Configurable write and read transaction acceptance limits for each connected
master.
Configurable write and read transaction issuing limits for each connected slave.
Single-Slave per ID method of cyclic dependency (deadlock) avoidance:
For each ID thread issued by a connected master, the master can have outstanding
transactions to only one slave for writes and one slave for reads, at any time.
Fixed priority and round-robin arbitration:
16 configurable levels of static priority.
Round-robin arbitration is used among all connected masters configured with the
lowest priority setting (priority 0), when no higher priority master is requesting.
Any SI slot that has reached its acceptance limit, or is targeting an MI slot that has
reached its issuing limit, or is trying to access an MI slot in a manner that risks
deadlock, is temporarily disqualified from arbitration, so that other SI slots can be
granted arbitration.
Supports TrustZone security for each connected slave as a whole:
- If configured as a secure slave, only secure AXI accesses are permitted
- Any non-secure accesses are blocked and the AXI Interconnect core returns a
DECERR response to the master
Support for Read-only and write-only masters and slaves, resulting in reduced
resource utilization.
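The burst-splitting behavior in the feature list above (AXI3 conversion, and downsizing when a burst would exceed the maximum length) can be sketched as follows (Python; the addresses and beat sizes are illustrative, not taken from the core's implementation):

```python
def split_incr_burst(addr, beats, bytes_per_beat, max_beats=16):
    """Split one INCR burst into sub-bursts of at most max_beats beats.

    Models the conversion described above: an AXI4 burst of up to 256
    beats targeting an AXI3 slave becomes several bursts of no more
    than 16 beats at consecutive addresses. The same idea applies when
    downsizing pushes a burst past the maximum burst length.
    """
    sub_bursts = []
    while beats > 0:
        n = min(beats, max_beats)
        sub_bursts.append((addr, n))        # (start address, beat count)
        addr += n * bytes_per_beat
        beats -= n
    return sub_bursts

# A 40-beat burst of 4-byte beats at 0x1000 splits into 16 + 16 + 8 beats.
print(split_incr_burst(0x1000, 40, 4))
# -> [(4096, 16), (4160, 16), (4224, 8)]
```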
AXI Interconnect Core Limitations
The AXI Interconnect core does not support the following AXI3 features:
Atomic locked transactions; this feature was retracted by AXI4 protocol. A locked
transaction is changed to a non-locked transaction and propagated by the MI.
Write interleaving; this feature was retracted by AXI4 protocol. AXI3 masters must be
configured as if connected to a slave with write interleaving depth of one.
AXI4 QoS signals do not influence arbitration priority. QoS signals are propagated
from SI to MI.
The AXI Interconnect core does not convert multi-beat bursts into multiple single-beat
transactions when connected to an AXI4-Lite slave.
The AXI Interconnect core does not support low-power mode or propagate the AXI
C-channel signals.
The AXI Interconnect core does not time out if the destination of any AXI channel
transfer stalls indefinitely. All AXI slaves must respond to all received transactions, as
required by AXI protocol.
The AXI Interconnect core provides no address remapping.
The AXI Interconnect core provides no built-in conversion to non-AXI protocols, such
as APB.
The AXI Interconnect core does not have clock-enable (ACLKEN) inputs. Consequently,
the use of ACLKEN is not supported among memory mapped AXI interfaces in Xilinx
systems.
Note: The ACLKEN signal is supported for Xilinx AXI4-Stream interfaces.
When used in the CORE Generator tool flow, the AXI Interconnect core can only be
configured with one MI port (one connected slave device), and therefore performs no
address decoding.
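The AXI4-Lite behavior described in the features and limitations above can be sketched as a simple classification (Python; the function name is invented for illustration, and "ERROR" stands in for the protocol-compliant error response, whose exact code is not specified here):

```python
def classify_lite_access(burst_beats, size_bytes, lite_width_bytes=4):
    """Classify an AXI4 transaction arriving at an AXI4-Lite slave.

    Per the text above: the interconnect does not split multi-beat
    bursts for AXI4-Lite slaves; any transaction that would access
    more than one word is answered with an error response and never
    propagated to the slave.
    """
    if burst_beats == 1 and size_bytes <= lite_width_bytes:
        return "FORWARD"        # legal single-word access
    return "ERROR"              # blocked; error response to the master
```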
AXI Interconnect Core Diagrams
Figure 2-6 illustrates a top-level AXI Interconnect.
The AXI Interconnect core consists of the SI, the MI, and the functional units that comprise
the AXI channel pathways between them. The SI accepts Write and Read transaction
requests from connected master devices. The MI issues transactions to slave devices. At
the center is the crossbar that routes traffic on all the AXI channels between the various
devices connected to the SI and MI.
The AXI Interconnect core also comprises other functional units located between the
crossbar and each of the interfaces that perform various conversion and storage functions.
The crossbar effectively splits the AXI Interconnect core down the middle between the
SI-related functional units (SI hemisphere) and the MI-related units (MI hemisphere).
The following subsection describes the use models for the AXI Interconnect core.
AXI Interconnect Core Use Models
The AXI Interconnect IP core connects one or more AXI memory-mapped master devices
to one or more memory-mapped slave devices. The following subsections describe the
possible use cases:
Pass-Through
Conversion Only
N-to-1 Interconnect
1-to-N Interconnect
N-to-M Interconnect (Sparse Crossbar Mode)
N-to-M Interconnect (Shared Access Mode)
Figure 2-6: Top-Level AXI Interconnect
[Figure shows the Slave Interface (SI hemisphere) and Master Interface (MI hemisphere)
on either side of a central Crossbar. Masters 0 and 1 enter through SI-side Register
Slices, Up-sizers, Clock Converters, Down-sizers, and Data FIFOs; Slaves 0 and 1 are
reached through MI-side Clock Converters, Down-sizers, Protocol Converters, Data
FIFOs, and Register Slices.]
Pass-Through
When there is only one master device and only one slave device connected to the AXI
Interconnect core, and the AXI Interconnect core is not performing any optional
conversion functions or pipelining, all pathways between the slave and master interfaces
degenerate into direct wire connections with no latency and consuming no logic
resources.
The AXI Interconnect core does, however, continue to resynchronize the
INTERCONNECT_ARESETN input to each of the slave and master interface clock domains
for any master or slave devices that connect to the ARESET_OUT_N outputs, which
consumes a small number of flip-flops.
Figure 2-7 is a diagram of the Pass-Through use case.
Conversion Only
The AXI Interconnect core can perform various conversion and pipelining functions when
connecting one master device to one slave device. These are:
Data width conversion
Clock rate conversion
AXI4-Lite slave adaptation
AXI-3 slave adaptation
Pipelining, such as a register slice or data channel FIFO
In these cases, the AXI Interconnect core contains no arbitration, decoding, or routing logic.
There could be latency incurred, depending on the conversion being performed.
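Data width conversion, the first item in the list above, can be sketched as follows (Python; the ratio and data values are illustrative — upsizing packs narrow beats into wide beats only when the burst's CACHE modifiable bit permits it, as noted in the features section):

```python
def upsize(narrow_beats, ratio, modifiable=True):
    """Pack narrow data beats into wide beats (upsizing sketch).

    ratio = wide_width // narrow_width. Packing (merging) is only
    permitted when the burst is marked modifiable; otherwise each
    narrow beat occupies its own wide beat unpacked.
    """
    if not modifiable:
        return [[b] for b in narrow_beats]
    return [narrow_beats[i:i + ratio]
            for i in range(0, len(narrow_beats), ratio)]

upsize([1, 2, 3, 4, 5], 2)           # -> [[1, 2], [3, 4], [5]]
upsize([1, 2], 2, modifiable=False)  # -> [[1], [2]]
```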
Figure 2-7: Pass-through AXI Interconnect Use Case
[Figure shows Master 0 connected directly to Slave 0 through the Interconnect.]
Figure 2-8 shows the one-to-one or conversion use case.
N-to-1 Interconnect
A common degenerate configuration of AXI Interconnect core is when multiple master
devices arbitrate for access to a single slave device, typically a memory controller.
In these cases, address decoding logic might be unnecessary and omitted from the AXI
Interconnect core (unless address range validation is needed).
Conversion functions, such as data width and clock rate conversion, can also be performed
in this configuration. Figure 2-9 shows the N-to-1 AXI Interconnect use case.
Figure 2-8: 1-to-1 Conversion AXI Interconnect Use Case
[Figure shows Master 0 connected to Slave 0 through an Interconnect block that performs
conversion and/or pipelining.]

Figure 2-9: N-to-1 AXI Interconnect
[Figure shows Master 0 and Master 1 connected through an Arbiter in the Interconnect to
Slave 0.]
1-to-N Interconnect
Another degenerate configuration of the AXI Interconnect core is when a single master
device, typically a processor, accesses multiple memory-mapped slave peripherals. In
these cases, arbitration (in the address and write data paths) is not performed. Figure 2-10
shows the 1-to-N Interconnect use case.
N-to-M Interconnect (Sparse Crossbar Mode)
The N-to-M use case of the AXI Interconnect features a Shared-Address Multiple-Data
(SAMD) topology, consisting of sparse data crossbar connectivity, with single-threaded
write and read address arbitration, as shown in Figure 2-11.
Figure 2-10: 1-to-N AXI Interconnect Use Case
Figure 2-11: Shared Write and Read Address Arbitration
Figure 2-12 shows the sparse crossbar write and read data pathways.
Parallel write and read data pathways connect each SI slot (attached to AXI masters on the
left) to all the MI slots (attached to AXI slaves on the right) that it can access, according to
the configured sparse connectivity map. When more than one source has data to send to
different destinations, data transfers can occur independently and concurrently, provided
AXI ordering rules are met.
The write address channels among all SI slots (if > 1) feed into a central address arbiter,
which grants access to one SI slot at a time, as is also the case for the read address channels.
The winner of each arbitration cycle transfers its address information to the targeted MI
slot and pushes an entry into the appropriate command queue(s) that enable various data
pathways to route data to the proper destination while enforcing AXI ordering rules.
N-to-M Interconnect (Shared Access Mode)
When in Shared Access mode, the N-to-M use case of the AXI Interconnect core provides
for only one outstanding transaction at a time, as shown in Figure 2-13, page 23. For each
connected master, read transaction requests always take priority over writes. The arbiter
then selects from among the requesting masters. A write or read data transfer is enabled to
the targeted slave device. After the data transfer (including write response) completes, the
next request is arbitrated. Shared Access mode minimizes the resources used to implement
the crossbar module of the AXI Interconnect.
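The Shared Access selection rule described above — per-master priority of reads over writes, then arbitration among the requesting masters — can be sketched as follows. The round-robin choice among masters is an assumption made for illustration; the core's actual selection policy is internal:

```python
def shared_access_select(masters, rr_ptr):
    """Pick the next transaction in Shared Access mode.

    masters -- list of dicts with boolean 'read_req' and 'write_req' fields
    rr_ptr  -- round-robin pointer among masters (assumed policy)
    Per the text, a master's read request is preferred over its own write,
    and only one transaction is outstanding at a time.
    Returns (master_index, 'read' | 'write'), or None if idle.
    """
    n = len(masters)
    for offset in range(1, n + 1):
        i = (rr_ptr + offset) % n
        m = masters[i]
        if m['read_req']:
            return (i, 'read')     # read wins over this master's write
        if m['write_req']:
            return (i, 'write')
    return None
```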
Figure 2-12: Sparse Crossbar Write and Read Data Pathways
Width Conversion
The AXI Interconnect core has a parametrically-defined, internal, native data-width that
supports 32, 64, 128, 256, 512, and 1024 bits. The AXI data channels that span the crossbar
are sized to the native width of the AXI Interconnect, as specified by the
C_INTERCONNECT_DATA_WIDTH parameter.
When any SI slots or MI slots are sized differently, the AXI Interconnect core inserts width
conversion units to adapt the slot width to the AXI Interconnect core native width before
transiting the crossbar to the other hemisphere.
The width conversion functions differ depending on whether the data path width gets
wider (upsizing) or narrower (downsizing) when moving in the direction from the
SI toward the MI. The width conversion functions are the same in either the SI hemisphere
(translating from the SI to the AXI Interconnect core native width) or the MI hemisphere
(translating from the AXI Interconnect core native width to the MI).
MI and SI slots have an associated individual parametric data-width value. The AXI
Interconnect core adapts each MI and SI slot automatically to the internal native
data-width as follows:
When the data width of an SI slot is wider than the internal native data width of the
AXI Interconnect, a downsizing conversion is performed along the pathways of the SI
slot.
When the internal native data width of the AXI Interconnect core is wider than that of
an MI slot, a downsizing conversion is performed along the pathways of the MI slot.
When the data width of an SI slot is narrower than the internal native data width of
the AXI Interconnect, an upsizing conversion is performed along the pathways of the
SI slot.
When the internal native data width of the AXI Interconnect core is narrower than
that of an MI slot, an upsizing conversion is performed along the pathways of the MI
slot.
Typically, the data-width of the AXI Interconnect core is matched to that of the most
throughput-critical peripheral, such as a memory controller, in the system design.
The following subsections describe the downsizing and upsizing behavior.
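The four rules above reduce to a single width comparison taken in the SI-to-MI direction. A minimal sketch (the function name is illustrative):

```python
def width_conversion(upstream_width, downstream_width):
    """Direction of width conversion, moving from the SI toward the MI.

    For an SI slot, upstream is the slot width and downstream is the
    interconnect's native width; for an MI slot, upstream is the native
    width and downstream is the slot width. Widths are in bits.
    """
    if upstream_width > downstream_width:
        return 'downsize'   # wide side feeding a narrower side
    if upstream_width < downstream_width:
        return 'upsize'     # narrow side feeding a wider side
    return 'none'           # widths match; no conversion inserted
```

For example, a 128-bit SI slot on a 32-bit native interconnect is downsized along the SI slot's pathways, matching the first rule above.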
Figure 2-13: Shared Access Mode
Downsizing
Downsizers used in pathways connecting wide master devices are equipped to split
burst transactions that would otherwise exceed the maximum AXI burst length (even if
such bursts never actually occur).
When the data width on the SI side is wider than that on the MI side and the transfer size
of the transaction is also wider than the data width on the MI side, then downsizing is
performed and, in the transaction issued to the MI side, the number of data beats is
multiplied accordingly.
For writes, data serialization occurs
For reads, data merging occurs
The AXI Interconnect core sets the RRESP for each output data beat (on the SI) to the
worst-case error condition encountered among the input data beats being merged,
according to the following descending precedence order: DECERR, SLVERR, OKAY,
EXOKAY.
When the transfer size of the transaction is equal to or less than the MI-side data width,
the transaction (address channel values) remains unchanged, and data transfers pass
through unchanged except for byte-lane steering. This applies to both writes and reads.
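The two downsizing effects described above — beat-count multiplication and worst-case RRESP merging for reads — can be sketched behaviorally. The helper names and severity table are illustrative:

```python
# Descending precedence from the text: DECERR, SLVERR, OKAY, EXOKAY.
_SEVERITY = {'DECERR': 3, 'SLVERR': 2, 'OKAY': 1, 'EXOKAY': 0}

def merged_rresp(input_beats):
    """RRESP reported on one merged SI beat: the worst-case error
    condition among the MI-side input beats being merged."""
    return max(input_beats, key=lambda r: _SEVERITY[r])

def downsized_beats(si_beats, si_width, mi_width):
    """MI-side beat count after downsizing (widths in bits).

    Assumes an integer width ratio, e.g. a 128-bit SI over a
    32-bit MI multiplies the beat count by 4.
    """
    ratio = si_width // mi_width
    return si_beats * ratio
```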
Upsizing
For upsizers in the SI hemisphere, data packing is performed (for INCR and WRAP bursts),
provided the AW/ARCACHE[1] bit (Modifiable) is asserted.
In the resulting transaction issued to the MI side, the number of data beats is reduced
accordingly.
For writes, data merging occurs
For reads, data serialization occurs
The AXI Interconnect core replicates the RRESP from each input data beat onto the RRESP
of each output data beat (on the SI).
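The beat-count reduction from packing can be sketched as a ceiling division by the width ratio (function name illustrative; partial packing at unaligned burst boundaries is simplified here):

```python
def upsized_beats(si_beats, si_width, mi_width):
    """MI-side beat count after packing an INCR/WRAP burst through an
    upsizer (with the Modifiable cache bit asserted). Widths in bits;
    ceiling division accounts for a final partially filled wide beat."""
    ratio = mi_width // si_width
    return -(-si_beats // ratio)    # ceil(si_beats / ratio)
```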
Clock Conversions
Clock conversion comprises the following:
A clock-rate reduction module performs integer (N:1) division of the clock rate from
its input (SI) side to its output (MI) side.
A clock-rate acceleration module performs integer (1:N) multiplication of clock rate
from its input (SI) to output (MI) side.
An asynchronous clock conversion module performs either reduction or acceleration
of clock-rates by passing the channel signals through an asynchronous FIFO.
For both the reduction and the acceleration modules, the sample cycle for the faster clock
domain is determined automatically. Each module is applicable to all five AXI channels.
The MI and SI each have a vector of clock inputs in which each bit synchronizes all the
signals of the corresponding interface slot. The AXI Interconnect core has its own native
clock input. The AXI Interconnect core adapts the clock rate of each MI and SI slot
automatically to the native clock rate of the AXI Interconnect.
Typically, the native clock input of the AXI Interconnect core is tied to the same clock
source as used by the highest frequency SI or MI slot in the system design, such as the MI
slot connecting to the main memory controller.
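The choice among the three conversion modules can be modeled as a simple classification. The function and its frequency-based inputs are illustrative; the real core is configured by parameters rather than by measured clock rates:

```python
def clock_conversion_type(si_clk_hz, mi_clk_hz, synchronous=True):
    """Select which clock conversion module applies between the SI and
    MI sides of a conversion path.

    Synchronous integer ratios use the N:1 reduction or 1:N acceleration
    modules; any other relationship falls back to the asynchronous FIFO.
    """
    if si_clk_hz == mi_clk_hz:
        return 'none'
    if not synchronous:
        return 'asynchronous'
    if si_clk_hz > mi_clk_hz and si_clk_hz % mi_clk_hz == 0:
        return 'reduction'      # integer N:1 division, SI to MI
    if mi_clk_hz > si_clk_hz and mi_clk_hz % si_clk_hz == 0:
        return 'acceleration'   # integer 1:N multiplication, SI to MI
    return 'asynchronous'       # non-integer ratio needs the async FIFO
```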
Pipelining
Under some circumstances, AXI Interconnect core throughput is improved by buffering
data bursts. This is commonly the case when the data rate at a SI or MI slot differs from the
native data rate of the AXI Interconnect core due to data width or clock rate conversion.
To accommodate the various rate change combinations, data burst buffers can be inserted
optionally at the various locations.
Additionally, an optional, two-deep register slice (skid buffer) can be inserted on each of
the five AXI channels at each SI or MI slot to help improve system timing closure.
Peripheral Register Slices
At the outer-most periphery of both the SI and MI, each channel of each interface slot can
be optionally buffered by a register slice. These are provided mainly to improve system
timing at the expense of one latency cycle.
Peripheral register slices are always synchronized to the SI or MI slot clock.
Data Path FIFOs
Under some circumstances, AXI Interconnect throughput is improved by buffering data
bursts. This is commonly the case when the data rate at an SI or MI slot differs from the
native data rate of the AXI Interconnect core due to data width or clock rate conversion. To
accommodate the various rate change combinations, you can optionally insert data burst
buffers at the following locations:
The SI-side write data FIFO is located before the crossbar module, after any SI-side
width or clock conversion.
The MI-side write data FIFO is located after the crossbar module, before any MI-side
width, clock, or protocol conversion.
The MI-side read data FIFO is located before the crossbar module (on the MI side),
after any MI-side width or protocol conversion.
The SI-side read data FIFO is located after the crossbar module (on the SI side),
before any SI-side width or clock conversion.
Data FIFOs are synchronized to the AXI Interconnect core native clock. The width of each
data FIFO matches the AXI Interconnect core native data width.
For more detail and the required signals and parameters of the AXI Interconnect core IP,
refer to the AXI Interconnect IP (DS768). Appendix C, Additional Resources, also contains
this link.
Connecting AXI Interconnect Core Slaves and Masters
You can connect the slave interface of one AXI Interconnect core module to the master
interface of another AXI Interconnect core with no intervening logic using an AXI-to-AXI
Connector (axi2axi_connector) IP. The axi2axi_connector IP provides the port
connection points necessary to represent the connectivity in the system, plus a set of
parameters used to configure the respective interfaces of the AXI Interconnect core
modules being connected.
AXI-To-AXI Connector Features
The features of the axi2axi_connector are:
Connects the master interface of one AXI Interconnect core module to slave interface
of another AXI Interconnect core module.
Directly connects all master interface signals to all slave interface signals.
Contains no logic or storage, and functions as a bus bridge in EDK.
Description
The AXI slave interface of the axi2axi_connector (connector) module always connects to
one attachment point (slot) of the master interface of one AXI Interconnect core module
(the upstream interconnect). The AXI master interface of the connector always connects
to one slave interface slot of a different AXI Interconnect core module (the downstream
interconnect) as shown in Figure 2-14.
Using the AXI To AXI Connector
When using the AXI To AXI Connector (axi2axi_connector) you can cascade two AXI
Interconnect cores. The EDK tools set the data width and clock frequency parameters on
the axi2axi_connector IP so that the characteristics of the master and slave interfaces
match.
Also, the EDK tools auto-connect the clock port of the axi2axi_connector so that the
interfaces of the connected interconnect modules are synchronized by the same clock
source.
For more detail and the required signals and parameter of the AXI To AXI Connector, refer
to the AXI To AXI Connector IP Data Sheet (DS803). Appendix C, Additional Resources, also
contains this link.
Figure 2-14: Master and Slave Interface Modules Connecting Two AXI Interconnect cores
External Masters and Slaves
When an AXI master or slave IP module that is not available as an EDK pcore (such as
a pure HDL module) needs to be connected to an AXI Interconnect core inside the EDK
sub-system, these utility cores can be used for that purpose. The AXI master or slave
module remains in the top level of the design, and its AXI signals are connected to
the EDK sub-system using this utility pcore.
Features
Connects an AXI master or slave interface to the AXI Interconnect core IP.
A master or slave AXI bus interface on one side and AXI ports on the other side.
Other ports are modeled as an I/O interface, which can be made external, thereby
providing the necessary signals that can be connected to a top-level master or slave.
Figure 2-15 is a block diagram of the AXI external master connector.
Figure 2-16 shows a block diagram of the external slave connector.
Figure 2-15: EDK Sub-system using an External Master Connector
Figure 2-16: EDK Sub-system using an External Slave Connector
The Platform Studio IP Catalog contains the external master and external slave connectors.
For more information, refer to the Xilinx website:
http://www.xilinx.com/support/documentation/axi_ip_documentation.htm.
Appendix C, Additional Resources, also contains this link.
Data Mover
The AXI Data Mover is an important interconnect infrastructure IP that enables high
throughput transfer of data between the AXI4 memory-mapped domain and the
AXI4-Stream domain. It provides Memory Map to Stream and Stream to Memory Map
channels that operate independently in a full-duplex-like manner. The Data Mover IP has
the following features:
Provides 4 KB address boundary protection
Provides automatic burst partitioning
Provides the ability to queue multiple transfer requests.
It also provides byte-level data realignment allowing memory reads and writes to any
byte offset location.
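Automatic burst partitioning and the 4 KB address boundary rule can be sketched together. The maximum burst size used below is an assumption for illustration; the actual limit is a configuration parameter of the core:

```python
MAX_BURST_BYTES = 256 * 4   # assumed: 256-beat AXI4 burst of 4-byte beats

def partition_transfer(addr, nbytes, max_burst=MAX_BURST_BYTES):
    """Split one transfer command into bursts that respect the 4 KB rule.

    Each emitted (address, length) burst stays within a single 4 KB page
    and within the assumed maximum burst size.
    """
    bursts = []
    while nbytes:
        to_page_end = 0x1000 - (addr & 0xFFF)   # bytes left in this 4 KB page
        chunk = min(nbytes, to_page_end, max_burst)
        bursts.append((addr, chunk))
        addr += chunk
        nbytes -= chunk
    return bursts
```

A transfer starting 16 bytes before a page boundary is split there, so no single burst crosses the 4 KB line.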
The AXI DataMover is recommended as a bridge between AXI4-Stream and AXI4
memory-mapped interfaces for both write and read operations in which an AXI4-Stream
master controls data flow through a command and status bus. The AXI DataMover is
available in both CORE Generator and XPS. Figure 2-17 shows a block diagram of the
Data Mover functionality. See the product page for more information:
http://www.xilinx.com/products/intellectual-property/axi_datamover.htm.
Figure 2-17: Data Mover Block Diagram
Centralized DMA
Xilinx provides a Centralized DMA core for AXI. This core replaces legacy PLBv4.6
Centralized DMA with an AXI4 version that contains enhanced functionality and higher
performance. Figure 2-18 shows a typical embedded system architecture incorporating the
AXI (AXI4 and AXI4-Lite) Centralized DMA.
The AXI4 Centralized DMA performs data transfers from one memory-mapped space to
another using the high-speed AXI4 bursting protocol under the control of the system
microprocessor.
Figure 2-18: Typical Use Case for AXI Centralized DMA
AXI Centralized DMA Summary
The AXI Centralized DMA provides the same simple transfer mode operation as the legacy
PLBv4.6 Centralized DMA. A simple mode transfer is one in which the CPU programs the
Centralized DMA register set for a single transfer and then initiates the transfer. The
Centralized DMA:
Performs the transfer
Generates an interrupt when the transfer is complete
Waits for the microprocessor to program and start the next transfer
Also, the AXI Centralized DMA includes an optional data realignment function for 32- and
64-bit bus widths. This feature allows addressing independence between the transfer
source and destination addresses.
AXI Centralized DMA Scatter Gather Feature
In addition to supporting the legacy PLBv4.6 Centralized DMA operations, the AXI
Centralized DMA has an optional Scatter Gather (SG) feature.
SG enables the system CPU to off-load transfer control to high-speed hardware
automation that is part of the Scatter Gather engine of the Centralized DMA. The SG
function fetches and executes pre-formatted transfer commands (buffer descriptors) from
system memory as fast as the system allows with minimal required CPU interaction. The
architecture of the Centralized DMA separates the SG AXI4 bus interface from the AXI4
data transfer interface so that buffer descriptor fetching and updating can occur in parallel
with ongoing data transfers, which provides a significant performance enhancement.
DMA transfer progress is coordinated with the system CPU using a programmable and
flexible interrupt generation approach built into the Centralized DMA. Also, the AXI
Centralized DMA allows the system programmer to switch between using Simple Mode
transfers and SG-assisted transfers using the programmable register set.
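The SG engine's descriptor processing can be modeled behaviorally. The descriptor fields and chain layout below are deliberately simplified and do not reflect the core's actual register-level descriptor format, which is defined in the core's data sheet:

```python
class Descriptor:
    """Hypothetical, simplified buffer descriptor for illustration."""
    def __init__(self, src, dst, nbytes, next_desc=None):
        self.src, self.dst, self.nbytes = src, dst, nbytes
        self.next_desc = next_desc      # link to the next descriptor
        self.completed = False          # status the engine writes back

def run_sg_chain(head, do_transfer):
    """Walk a descriptor chain as the SG engine does, without per-transfer
    CPU involvement: execute each transfer, mark the descriptor complete
    (modeling the status write-back to system memory), and follow the link.
    Returns the number of descriptors processed."""
    done = 0
    d = head
    while d is not None:
        do_transfer(d.src, d.dst, d.nbytes)
        d.completed = True
        done += 1
        d = d.next_desc
    return done
```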
The AXI Centralized DMA is built around the new high performance AXI DataMover
helper core which is the fundamental bridging element between AXI4-Stream and AXI4
memory mapped buses. In the case of AXI Centralized DMA, the output stream of the
DataMover is internally looped back to the input stream. The SG feature is based on the
Xilinx SG helper core used for all Scatter Gather enhanced AXI DMA products.
Centralized DMA Configurable Features
The AXI4 Centralized DMA lets you trade off the implemented feature set against the
FPGA resource utilization budget. The following features are parameterizable at FPGA
implementation time:
Use DataMover Lite for the main data transport (Data Realignment Engine (DRE) and
SG mode are not supported with this data transport mechanism)
Include or omit the Scatter Gather function
Include or omit the DRE function (available for 32- and 64-bit data transfer bus widths
only)
Specify the main data transfer bus width (32, 64, 128, 256, 512, and 1024 bits)
Specify the maximum allowed AXI4 burst length the DataMover will use during data
transfers
Centralized DMA AXI4 Interfaces
Table 2-1 summarizes the four external AXI4 Centralized DMA interfaces, in addition
to the internally bridged DataMover stream interface within the AXI Centralized DMA
function.
Ethernet DMA
The AXI4 protocol adoption in Xilinx embedded processing systems contains an Ethernet
solution with Direct Memory Access (DMA). This approach blends the performance
advantages of AXI4 with the effective operation of previous Xilinx Ethernet IP solutions.
Figure 2-19, page 32, provides a high-level block diagram of the AXI DMA.
Table 2-1: AXI Centralized DMA AXI4 Interfaces

Control (AXI4-Lite slave, 32 bits): Used to access the AXI Centralized DMA internal
registers. This is generally used by the system processor to control and monitor the
AXI Centralized DMA operations.

Scatter Gather (AXI4 master, 32 bits): An AXI4 memory-mapped master used by the AXI
Centralized DMA to read DMA transfer descriptors from system memory and then to write
updated descriptor information back to system memory when the associated transfer
operation has completed.

Data MMap Read (AXI4 read master, 32/64/128/256/512/1024 bits): Reads the transfer
payload data from the memory-mapped source address. The data width is parameterizable
to 32, 64, 128, 256, 512, or 1024 bits.

Data MMap Write (AXI4 write master, 32/64/128/256/512/1024 bits): Writes the transfer
payload data to the memory-mapped destination address. The data width is
parameterizable to 32, 64, 128, 256, 512, or 1024 bits, and is the same width as the
Data Read interface.
Figure 2-20 shows a typical system architecture for the AXI Ethernet.
Figure 2-19: AXI DMA High Level Block Diagram
Figure 2-20: Typical Use Case for AXI DMA and AXI4 Ethernet
As shown in Figure 2-20, page 32, the AXI Ethernet is now paired with a new AXI DMA IP.
The AXI DMA replaces the legacy PLBv4.6 SDMA function that was part of the PLBv4.6
Multi-Port Memory Controller (MPMC).
The AXI DMA is used to bridge between the native AXI4-Stream protocol on the AXI
Ethernet and the AXI4 memory-mapped protocol needed by the embedded processing system.
The AXI DMA core can also be connected to a user system other than an Ethernet-based
AXI IP. In this case, the parameter C_SG_INCLUDE_STSCNTRL_STRM must be set to 0 to
exclude status and control information and use it for payload only.
AXI4 DMA Summary
The AXI DMA engine provides high performance direct memory access between system
memory and AXI4-Stream type target peripherals. The AXI DMA provides Scatter Gather
(SG) capabilities, allowing the CPU to offload transfer control and execution to hardware
automation.
The AXI DMA as well as the SG engines are built around the AXI DataMover helper core
(shared sub-block) that is the fundamental bridging element between AXI4-Stream and
AXI4 memory mapped buses.
AXI DMA provides independent operation between the Transmit channel Memory Map to
Slave (MM2S) and the Receive channel Slave to Memory Map (S2MM), and provides
optional independent AXI4-Stream interfaces for offloading packet metadata.
An AXI control stream for MM2S provides user application data from the SG descriptors to
be transmitted from AXI DMA.
Similarly, an AXI status stream for S2MM provides user application data from a source IP,
such as AXI4 Ethernet, to be received and stored in the SG descriptor associated with the
receive packet.
In an AXI Ethernet application, the AXI4 control stream and AXI4 status stream provide
the necessary functionality for performing checksum offloading.
Optional SG descriptor queuing is also provided, allowing fetching and queuing of up to
four descriptors internally in AXI DMA. This allows for very high bandwidth data transfer
on the primary data buses.
34 www.xilinx.com AXI Reference Guide
UG761 (v14.1) April 24, 2012
Chapter 2: AXI Support in Xilinx Tools and IP
DMA AXI4 Interfaces
The Xilinx implementation for DMA uses the AXI4 capabilities extensively. Table 2-2
summarizes the eight AXI4 interfaces used in the AXI DMA function.
Table 2-2: AXI DMA Interfaces

Control (AXI4-Lite slave, 32 bits): Used to access the AXI DMA internal registers. This
is generally used by the system processor to control and monitor the AXI DMA operations.

Scatter Gather (AXI4 master, 32 bits): An AXI4 memory-mapped master used by the AXI DMA
to read DMA transfer descriptors from system memory and write updated descriptor
information back to system memory when the associated transfer operation is complete.

Data MM Read (AXI4 read master, 32/64/128/256/512/1024 bits): Transfers payload data for
operations moving data from the memory-mapped side of the DMA to the Main Stream output
side.

Data MM Write (AXI4 write master, 32/64/128/256/512/1024 bits): Transfers payload data
for operations moving data from the Data Stream In interface of the DMA to the
memory-mapped side of the DMA.

Data Stream Out (AXI4-Stream master, 32/64/128/256/512/1024 bits): Transfers data read
by the Data MM Read interface to the target receiver IP using the AXI4-Stream protocol.

Data Stream In (AXI4-Stream slave, 32/64/128/256/512/1024 bits): Receives data from the
source IP using the AXI4-Stream protocol, then transfers the received data to the memory
map system using the Data MM Write interface.

Control Stream Out (AXI4-Stream master, 32 bits): Transfers control information embedded
in the Tx transfer descriptors to the target IP.

Status Stream In (AXI4-Stream slave, 32 bits): Receives Rx transfer information from the
source IP and updates the associated transfer descriptor, which is written back to
system memory using the Scatter Gather interface during a descriptor update.
Video DMA
The AXI4 protocol Video DMA (VDMA) provides a high bandwidth solution for Video
applications. It is a similar implementation to the Ethernet DMA solution.
Figure 2-21 shows a top-level AXI4 VDMA block diagram.
Figure 2-22, page 36 illustrates a typical system architecture for the AXI VDMA.
Figure 2-21: AXI VDMA High-Level Block Diagram
AXI VDMA Summary
The AXI VDMA engine provides high performance direct memory access between system
memory and AXI4-Stream type target peripherals. The AXI VDMA provides Scatter
Gather (SG) capabilities also, which allows the CPU to offload transfer control and
execution to hardware automation. The AXI VDMA and the SG engines are built around
the AXI DataMover helper core which is the fundamental bridging element between
AXI4-Stream and AXI4 memory mapped buses.
AXI VDMA provides circular frame buffer access for up to 32 frame buffers and provides
the tools to transfer portions of video frames or full video frames.
The VDMA provides the ability to park on a frame also, allowing the same video frame
data to be transferred repeatedly.
VDMA provides independent frame synchronization and an independent AXI clock,
allowing each channel to operate on a different frame rate and different pixel rate. To
maintain synchronization between two independently functioning AXI VDMA channels,
there is an optional Gen-Lock synchronization feature.
Gen-Lock provides a method of synchronizing AXI VDMA slaves automatically to one or
more AXI VDMA masters such that the slave does not operate in the same video frame
buffer space as the master. In this mode, the slave channel skips or repeats a frame
automatically. Either channel can be configured to be a Gen-Lock slave or a Gen-Lock
master.
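The skip-or-repeat behavior of a Gen-Lock slave can be sketched as follows. The specific policy shown — repeat the current frame when an advance would collide with the master's frame — illustrates the mechanism, not the core's exact decision logic:

```python
def genlock_slave_frame(master_frame, slave_frame, num_frames):
    """Pick the Gen-Lock slave's next frame so it never operates in the
    master's current frame of the circular buffer (up to 32 frames).

    Advance to the next frame unless that lands on the master's frame,
    in which case the slave repeats its current frame.
    """
    nxt = (slave_frame + 1) % num_frames
    if nxt == master_frame:
        return slave_frame      # repeat: hold the current frame
    return nxt                  # normal circular advance
```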
For video data transfer, the AXI4-Stream ports can be configured from 8 bits up to 1024 bits
wide in multiples of 8. For configurations where the AXI4-Stream port is narrower than
the associated AXI4 memory-map port, the AXI VDMA upsizes or downsizes the data,
providing full-bus-width bursts on the memory-map side. It also supports an asynchronous
mode of operation where all clocks are treated asynchronously.
Figure 2-22: Typical Use Case for AXI VDMA and Video IP
VDMA AXI4 Interfaces
Table 2-3 lists and describes the six AXI4 interfaces of the AXI VDMA function.
Memory Control IP and the Memory Interface Generator
There are two DDRx (SDRAM) AXI memory controllers available in the IP catalog.
Because the Virtex-6 and Spartan-6 devices have natively different memory control
mechanisms (Virtex-6 uses a fabric-based controller and Spartan-6 has an on-chip Memory
Control Block (MCB)), the description of memory control is necessarily device-specific.
The following subsections describe AXI memory control by Virtex-6 and Spartan-6
devices.
The Virtex-6 and Spartan-6 memory controllers are available in two different tool
packages:
In EDK, as the axi_v6_ddrx or the axi_s6_ddrx memory controller core.
In the CORE Generator interface, through the Memory Interface Generator (MIG)
tool.
The underlying HDL code between the two packages is the same with different wrappers.
The flexibility of the AXI4 interface allows easy adaptation to both controller types.
Table 2-3: AXI VDMA Interfaces

Control (AXI4-Lite slave, 32 bits): Accesses the AXI VDMA internal registers. This is
generally used by the system processor to control and monitor the AXI VDMA operations.

Scatter Gather (AXI4 master, 32 bits): An AXI4 memory-mapped master used by the AXI
VDMA to read DMA transfer descriptors from system memory. Fetched Scatter Gather
descriptors set up internal video transfer parameters for video transfers.

Data MM Read (AXI4 read master, 32/64/128/256/512/1024 bits): Transfers payload data
for operations moving data from the memory-mapped side of the DMA to the Main Stream
output side.

Data MM Write (AXI4 write master, 32/64/128/256/512/1024 bits): Transfers payload data
for operations moving data from the Data Stream In interface of the DMA to the
memory-mapped side of the DMA.

Data Stream Out (AXI4-Stream master, 8/16/32/64/128/256/512/1024 bits): Transfers data
read by the Data MM Read interface to the target receiver IP using the AXI4-Stream
protocol.

Data Stream In (AXI4-Stream slave, 8/16/32/64/128/256/512/1024 bits): Receives data
from the source IP using the AXI4-Stream protocol. The data received is then
transferred to the memory map system via the Data MM Write interface.
Virtex-6
The Virtex-6 memory controller solution is provided by the Memory Interface Generator
(MIG) tool and is updated with an optional AXI4 interface.
This solution is available through EDK also, with an AXI4-only interface as the
axi_v6_ddrx memory controller.
The axi_v6_ddrx memory controller uses the same Hardware Description Language (HDL)
logic and the same GUI, but is packaged for EDK processor support through XPS. The
Virtex-6 memory controller is adapted with an AXI4 Slave Interface (SI) through an AXI4
to User Interface (UI) bridge. The AXI4-to-UI bridge converts the AXI4 slave transactions
to the MIG Virtex-6 UI protocol. This supports the same options that were previously
available in the Virtex-6 memory solution.
The optimal AXI4 data width is the same as the UI data width, which is four times the
memory data width. The AXI4 memory interface data width can be smaller than the UI
data width, but this is not recommended because the required width conversion results in
a larger core with lower timing performance.
The AXI4 interface maps transactions over to the UI by breaking each AXI4 transaction
into smaller, memory-sized stride transactions. The Virtex-6 memory controller then
handles the bank/row management for higher memory utilization.
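The two relationships above (the 4:1 optimal width ratio and the splitting of AXI4 transactions into memory-sized strides) can be sketched in a few lines. This is an illustrative model only; the function names are not part of the MIG tool:

```python
def optimal_axi_width(memory_data_width: int) -> int:
    """The UI data width, and therefore the optimal AXI4 data width,
    is four times the native memory data width."""
    return 4 * memory_data_width

def split_to_ui(start_addr: int, num_beats: int, axi_width_bits: int):
    """Break one AXI4 burst into memory-sized UI transactions.
    Each UI transaction moves one UI-width word; the controller
    then schedules bank/row management for the resulting stream."""
    stride = axi_width_bits // 8  # bytes per UI word
    return [(start_addr + i * stride, stride) for i in range(num_beats)]

# A 64-bit DDR3 interface yields a 256-bit optimal AXI4 width.
assert optimal_axi_width(64) == 256
# A 4-beat, 256-bit AXI burst becomes four 32-byte UI transactions.
assert split_to_ui(0x1000, 4, 256) == [
    (0x1000, 32), (0x1020, 32), (0x1040, 32), (0x1060, 32)]
```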
Figure 2-23 shows a block diagram of the Virtex-6 memory solution with the AXI4
interface.
Spartan-6 Memory Control Block
The Spartan-6 device uses the hard Memory Control Block (MCB) primitive native to that
device. The Spartan-6 MCB solution was adapted with an AXI4 memory mapped Slave
Interface (SI).
Handling AXI4 transactions to external memory on Spartan-6 architectures requires a
bridge to convert the AXI4 transactions to the MCB user interface.
Because of the similarities between the two interfaces, the AXI4 SI can be configured to be
lightweight, by connecting a master that does not issue narrow bursts and has the same
native data width as the configured MCB interface.
Figure 2-23: Virtex-6 Memory Control Block Diagram

(Block diagram: within the axi_v6_ddrx (EDK) or memc_ui_top (COREGen) top level, an external AXI4 master connects through the AXI4 interface to the AXI4 Slave Interface Block, which drives the User Interface block over the UI interface. The User Interface block connects through the Native interface to the Virtex-6 Memory Controller, which drives the DDR2/DDR3 PHY over DDR2/DDR3 DFI to external DDR2 or DDR3 SDRAM.)
The AXI4 bridge:

Converts AXI4 incremental (INCR) commands to MCB commands in a 1:1 fashion for
transfers that are 16 beats or less.

Breaks down AXI4 transfers greater than 16 beats into 16-beat maximum transactions
sent over the MCB protocol.

This allows a balance between performance and latency in multi-ported systems. AXI4
WRAP commands can be broken into two MCB transactions to handle the wraps on the
MCB interface, which does not natively support WRAP commands.
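The bridge's burst-splitting rules stated above can be modeled behaviorally as follows. This is a sketch of the rules, not the bridge RTL; function names are illustrative:

```python
def split_incr(num_beats: int, max_beats: int = 16):
    """Map an AXI4 INCR burst to MCB commands: 1:1 for 16 beats or
    less, otherwise broken into chunks of at most 16 beats."""
    chunks = []
    while num_beats > 0:
        n = min(num_beats, max_beats)
        chunks.append(n)
        num_beats -= n
    return chunks

def split_wrap(num_beats: int, start_offset: int):
    """Model a WRAP burst as up to two INCR-style MCB commands: from
    the starting offset to the wrap boundary, then from the boundary
    back around (the MCB has no native WRAP support)."""
    first = num_beats - start_offset
    return [first] if start_offset == 0 else [first, start_offset]

assert split_incr(16) == [16]          # 16 beats or less map 1:1
assert split_incr(40) == [16, 16, 8]   # longer bursts are chunked
assert split_wrap(8, 3) == [5, 3]      # WRAP becomes two commands
```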
The axi_s6_ddrx core and the Spartan-6 AXI MIG core from CORE Generator support all
native port configurations of the MCB, including 32-, 64-, and 128-bit wide interfaces with
up to six ports (depending on MCB port configuration). Figure 2-24 shows a block diagram
of the AXI Spartan-6 memory solution.
For more detail on memory control, refer to the memory website documents at
http://www.xilinx.com/products/technology/memory-solutions/index.htm.
Figure 2-24: Spartan-6 Memory Solution Block Diagram

(Block diagram: up to six AXI4 masters connect to AXI4 Slave Interfaces 0 through 5 inside the axi_s6_ddrx or mcb_ui_top wrapper. The slave interfaces drive ports 0 through 5 of the MCB, which sits with its MCB Soft Calibration Logic inside the mcb_raw_wrapper at the FPGA boundary and controls external LPDDR/DDR/DDR2/DDR3 SDRAM.)
Chapter 3
AXI Feature Adoption in Xilinx FPGAs
This chapter describes how Xilinx IP adopt features of the AXI standard, including the
conversion of data ordering from Big-endian (which aligns with existing embedded
processors) to Little-endian (which aligns with ARM processors).

Xilinx IP, third party IP, and user IP present a wide range of configuration options and
design choices that let you tune a system for size, Fmax, throughput, latency, ease of use,
and ease of debug. IP design decisions and system architecture also impact the area and
performance of the system. Given that AXI-based systems must span a wide solution
space, from small Spartan and Artix class designs to large systems, verify AXI IP in
simulation using AXI Bus Functional Models (a BFM is available for XPS) and AXI
protocol checkers/assertions (available from Cadence or from the ARM website).
Simulation-based verification results in far shorter debug cycle time, easier identification
and isolation of functional problems, and greater variation of AXI traffic than
hardware-only verification.

Hardware-only AXI IP verification requires full synthesis and Place and Route (PAR) time
per debug cycle, and the visibility of signals from an AXI ChipScope monitor is more
limited than in a simulation domain. The potential complexity of AXI4 traffic, even in a
relatively typical system, makes hardware-only verification very expensive.
Skipping Simulation-Based AXI IP Verification is Highly Discouraged
However, if you must rely on hardware-only AXI IP verification, it is recommended that
the AXI Interconnect be configured as simply as possible.
For example, use SASD (which limits issuance/acceptance to 1), and minimize the use of
converter bank functions (size conversion, clock conversion, data path FIFOs, and so
forth). Register slices can also be enabled for hardware-only verification because a register
slice acts as a filter for traffic patterns and can insulate the system from some protocol
violations.
Enable AXI ChipScope monitors and hardware protocol checkers at strategic points in the
system when performing hardware-only verification.
Using Base System Builder Without Analyzing and Optimizing Output
Base System Builder (BSB) provides a starting point for a functional AXI system that can
run on an evaluation board or be used to begin a custom board based design. However, the
system produced by BSB is a point solution within a broad solution space that AXI IP can
offer.
BSB offers a basic choice between an area- or throughput-optimized design to establish a
baseline architecture for the system.
Optimize, Adapt, and Transform BSB Output
The AXI system output from BSB should still be further adapted, optimized, and
incrementally transformed to fit the desired end application using the techniques
described in AXI System Optimization, page 89. Failure to tune the output of BSB to meet
the specific requirements of an application could result in poor quality of results and low
performance.
The architecture and optimizations necessary for a good AXI IP-based solution can differ
greatly from those of an IBM CoreConnect or a Xilinx MPMC-based system. The output
from BSB for AXI systems might not use the same type of system architecture as BSB
output for CoreConnect or MPMC based systems, and must be significantly modified to
match the area, performance, and feature tradeoffs of a CoreConnect or an MPMC system
created by BSB.
Chapter 6
AXI4-Stream IP Interoperability: Tips and Hints
The AXI specification provides a framework that defines protocols for moving data
between IP using a defined signaling standard. This standard ensures that IP can exchange
data with each other and that data can move across a system. The AXI protocol does not
specify or enforce the interpretation of data; therefore, the data contents must be
understood, and the different IP must have a compatible interpretation of the data.
This chapter provides information and presents concepts that help you construct systems
by configuring your IP designs to be interoperable. Interoperability in the context of this
document is defined as the ability to design two or more components in a system to
exchange information and to use that information without extra design effort.
This chapter also describes areas where converters or additional effort are required to
achieve IP interoperability.
Generally, components can achieve interoperability with other components using either or
both of the following approaches:
By adhering to published interface standards
By making use of a broker of services (a bridge) that can convert the interface of one
component to the interface of another.
Key Considerations
The key considerations for achieving IP Interoperability are:
Understand the IP Domain: The interfaces used by the IP for exchanging information
and the data type representation that is used for the transferred information can be
classified by IP domain. Xilinx IP generally follows a common set of guidelines to
describe data contents and interface signaling within a given domain. This chapter
focuses on the following main domains:
DSP/Wireless
Video
Communications
AXI Infrastructure IP domains
Follow the Published Standard for the IP Domain: Chapter 3, AXI Feature Adoption in
Xilinx FPGAs provides an overview of the adoption of AXI4-Stream by Xilinx IP. This
chapter further describes the various AXI4-Stream interface conventions and
guidelines for IP configuration and use.
Validate the Data: Understanding the IP domain and following the published
standard lets you focus your effort on the key elements to achieve IP interoperability;
however, it is imperative that you confirm that the IP operates as expected in the
system using simulation or hardware testing.
Figure 6-1 illustrates how you need to approach the design and development process from
IP selection to IP configuration for implementing systems with a high degree of IP
interoperability.
Begin the design process by understanding the AXI4-Stream protocol because it is the basis
for data exchange. You can then move to higher levels of refinement by understanding the
domain-level AXI4-Stream usage conventions, domain-level data organization and
interpretation of data, and finally focus on the exact configuration settings and functions of
each IP in the system. In this process, you narrow the solution space for each IP in the
system.
Figure 6-1: Process Tier for Understanding IP Interoperability
AXI4-Stream Protocol
Review and use the AXI4-Stream signaling and data exchange interface protocol as
described in Chapter 3, AXI Feature Adoption in Xilinx FPGAs. AXI4-Stream is an
interface-level protocol for IP to exchange raw data. Building on top of the signaling
protocol, the various IP application domains can then establish common data types to
enable IP to use exchanged data. AXI4-Stream defines optional signals with default tie-off
rules and byte-aligned data widths with width conversion formulas. Some of the key IP
interoperability considerations to focus on at the AXI4-Stream signaling levels are:
Use of optional signals between two IP being connected to each other, as shown in
Figure 6-2.
Data width of the connected interfaces.
Burst length (such as the size of the data frame, block, or packet).
Data representation (Layered Protocols).
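The first two checks in the list above can be reasoned about mechanically. The helper below is hypothetical (it is not a Xilinx tool); the signal names come from the AXI4-Stream specification:

```python
# Optional AXI4-Stream signals that have default tie-off rules when
# one side of a connection does not implement them.
OPTIONAL = {"TREADY", "TLAST", "TKEEP", "TSTRB", "TUSER", "TID", "TDEST"}

def check_link(master_signals, slave_signals, m_bytes, s_bytes):
    """Flag signaling-level mismatches between a connected
    AXI4-Stream master and slave: data width differences and
    optional signals the slave does not consume."""
    issues = []
    if m_bytes != s_bytes:
        issues.append("width converter needed (%d vs %d bytes)"
                      % (m_bytes, s_bytes))
    for sig in sorted((set(master_signals) - set(slave_signals)) & OPTIONAL):
        issues.append("master drives %s; slave applies default tie-off" % sig)
    return issues

master = {"TVALID", "TDATA", "TLAST", "TUSER"}
slave = {"TVALID", "TREADY", "TDATA", "TLAST"}
assert check_link(master, slave, 4, 4) == [
    "master drives TUSER; slave applies default tie-off"]
```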
Figure 6-2: Establish AXI4-Stream Signaling-Level Data Exchange Compatibility
Domain Usage Guidelines and Conventions
Each IP application domain recommends common guidelines for usage of optional
AXI4-Stream signals to facilitate IP interoperability. The four major IP application domains
are shown in Figure 6-3.
Video IP
AXI4-Stream IP in this domain carries framed video and image pixel data, and exchanges
only pixel data and video line and frame markers. Other signals associated with physical
interfaces such as hsync, vsync, active_video, blanking, or other ancillary data signals
are not carried over AXI4-Stream.
AXI4-Stream video IP supports backpressure and elasticity in data flow around the
TVALID/TREADY handshake. Pixel data retains the relative location in the
datastream; however, video pixel timing relative to hsync or vsync at a physical
display is not preserved.
AXI4-Stream video IP uses layered protocols to encode a variety of video formats
and resolutions. For more details on video IP adoption of AXI4-Stream, see IP Using
AXI4-Stream Video Protocol in Chapter 3.
Table 6-1 summarizes video IP domain AXI4-Stream signaling usage and guidelines.
Figure 6-3: AXI4-Stream IP Domains
Table 6-1: Video IP Domain AXI4-Stream Signaling Usage
Signal Endpoint
ACLK Used and Supported
ACLKEN Limited Support
ARESETN Used and Supported
TVALID Used and Supported
TREADY Used and Supported
TDATA Used and Supported
TID Not Supported
TDEST Not Supported
TKEEP Not Supported
TSTRB Not Supported
TUSER Used and Supported
TLAST Used and Supported
DSP/Wireless IP
AXI4-Stream IP in the DSP and Wireless IP application domain operates on numerical
streaming data paths. IP in this domain exchange data in either blocking (with
backpressure) or non-blocking (continuous) modes.
You can organize data in either Time Division Multiplexing (TDM) or parallel paths, and
use optional AXI4-Stream interfaces to perform configuration, control, and status
reporting.
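The TDM organization mentioned above can be illustrated with a small model that interleaves parallel channel samples onto one stream and splits them back out. This is a conceptual sketch, not the behavior of any specific core:

```python
def tdm_interleave(channels):
    """Interleave N equal-length parallel channels onto one stream,
    one sample per channel per round, channel 0 first."""
    return [ch[i] for i in range(len(channels[0])) for ch in channels]

def tdm_deinterleave(stream, num_channels):
    """Recover the parallel channels from a TDM stream."""
    return [stream[c::num_channels] for c in range(num_channels)]

chans = [[10, 11], [20, 21], [30, 31]]
stream = tdm_interleave(chans)
assert stream == [10, 20, 30, 11, 21, 31]   # round-robin order
assert tdm_deinterleave(stream, 3) == chans  # lossless round trip
```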
IP with multiple AXI4-Stream interfaces must account for IP core-specific synchronization
rules between configuration and data channels (for example: a configuration packet must
precede each data packet for block-based processing in some wireless IPs). See DSP and
Wireless IP: AXI Feature Adoption in Chapter 3.
Table 6-2 summarizes DSP and Wireless IP AXI4-Stream signaling usage and guidelines.
Table 6-2: DSP/Wireless IP Domain AXI4-Stream Signaling Usage
Signal Endpoint
ACLK Used and Supported
ACLKEN Limited Support
ARESETN Used and Supported
TVALID Used and Supported
TREADY Used and Supported
TDATA Used and Supported
TID Not Supported
TDEST Not Supported
TKEEP Not Supported
TSTRB Not Supported
TUSER Used and Supported
TLAST Used and Supported
Communications IP
AXI4-Stream IP in the communications application domain refers to Endpoint IP that
implement high-speed communications protocols using transceivers or I/Os. Depending
on the relationship with transceivers or I/Os that cannot accept backpressure, these
AXI4-Stream interfaces can have limited handshaking options (for example no support for
the TREADY signal with some IP that are closely tied to the physical interface).
AXI4-Stream communications IP are tightly coupled to the underlying protocol (such as
PCIe, Ethernet, and SRIO) with explicit data formats and handshaking rules that limit IP
interoperability across protocols.
AXI4-Stream communications IP are usually connected to custom logic in a user design,
AXI infrastructure IP, or other protocol-specific IP (for example: Ethernet IP using AXI
Ethernet DMA or PCIe bridging IP to other protocols).
More details on Communications IP are available at:
http://www.xilinx.com/products/technology/connectivity/index.htm
Table 6-3 summarizes communications IP AXI4-Stream signaling usage and guidelines.
Table 6-3: Communications IP Domain AXI4-Stream Signaling Usage
Signal Endpoint
ACLK Used and Supported
ACLKEN Not Supported
ARESETN Used and Supported
TVALID Used and Supported
TREADY Optionally Supported
TDATA Used and Supported
TID Not Supported
TDEST Not Supported
TKEEP Optionally Supported
TSTRB Not Supported
TUSER Optionally Supported
TLAST Used and Supported
AXI Infrastructure IP
AXI4-Stream infrastructure IP refers to IP that generally exchanges or moves data within a
system without using the contents of data or being tied to a specific data interpretation.
Typically AXI4-Stream infrastructure IP are used as system building blocks or as test and
debug IP. Common use models for infrastructure IP include width conversion, data
switching and routing, buffering, pipelining, and DMA.
AXI infrastructure IP is required to support a wide range of optional and flexible signal
interface configurations to meet the signaling needs of IP from all domains. This also helps
to exchange data between mismatched AXI4-Stream master and slave signaling interfaces.
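As an illustration of one such mismatch fix, width conversion, the sketch below packs an 8-bit byte stream into 32-bit beats, with TKEEP marking the valid bytes of a partial final beat. It models the general idea only, not the Xilinx width converter implementation:

```python
def upsize_8_to_32(bytes_in):
    """Pack 8-bit transfers into 32-bit (TDATA, TKEEP) beats.
    TKEEP has one bit per byte lane; a partial final beat keeps
    only the lanes that carry valid data."""
    beats = []
    for i in range(0, len(bytes_in), 4):
        chunk = bytes_in[i:i + 4]
        tdata = 0
        for pos, b in enumerate(chunk):  # byte 0 in the lowest lane
            tdata |= b << (8 * pos)
        tkeep = (1 << len(chunk)) - 1    # one TKEEP bit per valid byte
        beats.append((tdata, tkeep))
    return beats

beats = upsize_8_to_32([0x11, 0x22, 0x33, 0x44, 0x55])
assert beats[0] == (0x44332211, 0b1111)  # full beat, all lanes valid
assert beats[1] == (0x00000055, 0b0001)  # partial beat, one lane valid
```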
More details on AXI4-Stream interconnect IP are available at:
http://www.xilinx.com/products/intellectual-property/axi4-stream_interconnect.htm
For information about DMA IP that implement AXI4-Stream to AXI4 (Memory Mapped)
data transfer, see Xilinx AXI Infrastructure IP in Chapter 2.
Table 6-4 describes the main sub-categories of AXI4-Stream infrastructure IP and their
useful characteristics. Table 6-5, page 112 lists the infrastructure IP domain AXI4-Stream
signaling usage.
Table 6-4: AXI4-Stream Infrastructure IP Sub-Categories

Pass-through
  Key Characteristics: Used for buffering, pipelining, or moving data. Does not change the contents of data.
  Examples: Register Slice; FIFO; Clock Converter; Crossbar Switch.
  Interoperability Considerations: Does not change contents or organization of data. Generally compatible with all AXI4-Stream IP.

Modifier
  Key Characteristics: Potential to change contents or organization of data.
  Examples: Width Converter; Bus Rip/Concatenation (Split/Combine); MUX/DeMUX; Subset Converter; Packer.
  Interoperability Considerations: Performs specific algorithmic operations on data. Compatible with most AXI4-Stream IP with proper usage.

Stream Endpoint
  Key Characteristics: Entry/exit point for a stream subsystem or processing pipeline. Usually the logical data source or terminus in a chain of IP.
  Examples: DMA (general purpose); MicroBlaze processor stream ports; AXI4-Lite to AXI4-Stream bridge; Virtual FIFO Controller.
  Interoperability Considerations: Usually the first or last IP in a processing pipeline. Compatible with most AXI4-Stream IP with proper usage. Might have limited support for TUSER, TID, and TDEST.

Monitor
  Key Characteristics: Attaches to an AXI interface for observation only. Does not alter the contents of data.
  Examples: AXI ChipScope Monitor; AXI HW Protocol Checker; Performance Monitor.
  Interoperability Considerations: Observes but does not alter data. Taps an AXI4-Stream connection for viewing. Generally compatible with all AXI4-Stream IP.
Domain-Specific Data Interpretation and Interoperability Guidelines
Domain-specific protocols can be layered on top of the AXI4-Stream signaling layer so that
IP can interpret and use the data that has been exchanged. This section summarizes key
domain-specific layered protocol usage information and presents guidelines to help users
focus on key concepts when constructing IP and systems to be interoperable.
Video IP Layered Protocols
Video IP use layered protocols to represent the video format and resolution. Video IP must
be configured to use the same video format and resolution to transfer information, such as
the industry-recognized YUV/YUVA and RGB/RGBA video formats and resolutions such
as 1920x1080P60. Where necessary, format conversion IP such as color space converters
can be used to convert the video between IP blocks in a system.
Video IP also has common conventions for packing the data bits for the color components
(such as red, green, and blue components) into TDATA. AXI4-Stream signals such as TLAST
and TUSER encode line and frame boundaries for a given video resolution.
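The component-packing convention can be sketched as follows. The helper is illustrative; the component order shown is an assumption for the example, not a statement of any specific core's format:

```python
def pack_pixel(components, comp_width):
    """Concatenate color components into one TDATA beat and report
    the beat width after zero-padding up to a byte multiple."""
    tdata = 0
    for i, c in enumerate(components):   # concatenate, first in low bits
        tdata |= c << (i * comp_width)
    raw_bits = len(components) * comp_width
    tdata_bits = ((raw_bits + 7) // 8) * 8  # 0-pad to a byte width
    return tdata, tdata_bits

# Three 8-bit components (for example RGB) fit exactly in 24 bits.
assert pack_pixel([0x10, 0x20, 0x30], 8) == (0x302010, 24)
# Three 10-bit components occupy 30 bits, zero-padded to 32.
assert pack_pixel([1, 1, 1], 10)[1] == 32
```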
Table 6-5: Infrastructure IP Domain AXI4-Stream Signaling Usage
Signal Pass-Through Modifier Endpoint Monitor
ACLK Used and Supported Used and Supported Used and Supported Used and Supported
ACLKEN Used and Supported Used and Supported Not Supported Not Supported
ARESETN Used and Supported Used and Supported Used and Supported Used and Supported
TVALID Used and Supported Used and Supported Used and Supported Used and Supported
TREADY Used and Supported Used and Supported Used and Supported Used and Supported
TDATA Used and Supported Used and Supported Used and Supported Used and Supported
TID Used and Supported Used and Supported Not Supported Used and Supported
TDEST Used and Supported Used and Supported Limited Support Used and Supported
TKEEP Used and Supported Used and Supported Limited Support Used and Supported
TSTRB Used and Supported Used and Supported Limited Support Used and Supported
TUSER Used and Supported Used and Supported Limited Support Used and Supported
TLAST Used and Supported Used and Supported Used and Supported Used and Supported
Video IP also contain an optional AXI4-Lite interface that can change the layered protocol
at runtime, typically under microprocessor control.
See Video IP: AXI Feature Adoption in Chapter 3 for details on the encoding of video
layered protocols. Table 6-6 and Table 6-7 summarize some of the key characteristics,
interoperability considerations, and guidelines for layered protocols used in the Video IP
domain.
Table 6-6: Video IP Layered Protocol Summary

Video Format
  Key Characteristics: Video IP support industry standard formats.
  Examples: RGB; YUV 4:2:2.
  Interoperability Considerations: Use conversion IP to change video formats.

Pixel Encoding (Components)
  Key Characteristics: Pixels (TDATA beats) consist of 1 to 4 components. Each component is 8, 10, 12, or 16 bits wide. Components are concatenated and 0-padded up to an overall byte width.
  Examples: 24-bit wide TDATA carrying RGB (3x8-bit components); 16-bit wide TDATA carrying YUV 4:2:2 (8-bit alternating V/U + 8-bit Y components).
  Interoperability Considerations: The relative placement order of the components in the TDATA beat is fixed. When the widths of components mismatch, rules apply on how to scale the data.

Video Resolution / Framing
  Key Characteristics: AXI4-Stream TLAST/TUSER signaling is used to mark end-of-line and frame boundaries. Only active video pixel data is transferred.
  Examples: TUSER[0] marks the start of a frame; TLAST marks the end of a line. For 1024x768, there are 1024 TDATA beats per TLAST and 768 TLAST beats per TUSER[0].
  Interoperability Considerations: TLAST and TUSER must be preserved and placed at correct intervals relative to pixels. Some video IP are capable of recovering from corrupted or incomplete frame data and relocking to the framing signals. Connected video IP must have the same frame resolution, or rescaling IP is required.
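The framing convention above (one TUSER[0] pulse per frame, one TLAST per line) can be modeled directly. This sketch assumes one pixel per TDATA beat:

```python
def frame_markers(width, height):
    """Yield (tuser, tlast) for each beat of one video frame:
    TUSER[0] on the first beat of the frame, TLAST on the last
    beat of each line."""
    for y in range(height):
        for x in range(width):
            tuser = (x == 0 and y == 0)  # start of frame
            tlast = (x == width - 1)     # end of line
            yield tuser, tlast

beats = list(frame_markers(1024, 768))
assert len(beats) == 1024 * 768          # one beat per active pixel
assert sum(t for _, t in beats) == 768   # one TLAST per line
assert sum(u for u, _ in beats) == 1     # one TUSER[0] per frame
```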
Table 6-7: Video IP Interoperability Considerations and Guidelines

Data Type
  Description: 1 to 4 components per pixel; 8, 10, 12, or 16 bits per component.
  Rule to Achieve Interoperability: Converter cores provided. All cores configurable to support 8-, 10-, 12-, and 16-bit data.
  User Effort/Notes: Seamless if video format and resolution match; standard adapters provided to change formats and resolution.

Data Burst
  Description: Video standards with up to 8k pixels per line (burst) supported.
  Rule to Achieve Interoperability: All cores configurable to support standard burst sizes.
  User Effort/Notes: Ensure that connected IP use the same settings.

AXI4-Stream Optional Signals
  Description: Optional: ACLKEN, ARESETN. Else fixed set (TUSER[0], TLAST, TREADY, TVALID, TDATA).
  Rule to Achieve Interoperability: AXI FIFO needed to bridge between different ACLK or ACLKEN domains.
  User Effort/Notes: Standard adapter might be needed (AXI4-Stream FIFO or AXI4-Stream Interconnect).

AXI4-Stream TUSER Signals
  Description: Only the TUSER[0] signal is used consistently across all video cores.
  Rule to Achieve Interoperability: TUSER[0] is required and is used to signal frame boundaries.
  User Effort/Notes: Special considerations might be needed for IP that can generate or recover from partial frames (for example, handling when a cable is removed and reconnected).

Number of Channels and AXI4-Lite Dependency
  Description: Generally a single AXI4-Stream through the IP, but some cores have multiple input/output streams; most IP have an optional AXI4-Lite control interface.
  Rule to Achieve Interoperability: For cores with multiple input/output streams or when AXI4-Lite is used, read the datasheet to understand data relationships.
  User Effort/Notes: Cores can permit the format/resolution to be changed using AXI4-Lite, requiring care to coordinate any runtime changes across the system.
DSP/Wireless IP Layered Protocols
DSP/Wireless IP use layered protocols to represent numerical information and structures,
and perform processing such as filtering and arithmetic operations. DSP/Wireless IP
usually have a flow-through architecture with input and output stream interfaces to take
in data, perform operations on the data, and send out the data.
Data flow is usually selectable between blocking (with backpressure using TREADY) and
non-blocking (continuous data flow without TREADY).
DSP/Wireless IP also support data organized into TDM or parallel paths to operate on
numeric data structures such as arrays. The streams can also carry optional sideband
status signals to supplement the numeric data with core-specific information.
DSP/Wireless IP also often contain control AXI4-Stream interfaces for optional runtime
control and status, such as the ability to change filter coefficients at runtime using a
secondary AXI4-Stream interface.
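As an example of structured control data on a secondary AXI4-Stream interface, the sketch below packs and unpacks a configuration word for an FFT-like core. The field layout (a 5-bit point-size code and a 1-bit direction flag) is entirely hypothetical; the real configuration word format is defined in each core's datasheet:

```python
def pack_config(point_size_log2, forward):
    """Pack a hypothetical config word: bits [4:0] = log2 of the
    transform point size, bit [5] = forward/inverse direction."""
    assert 0 <= point_size_log2 < 32
    return (int(forward) << 5) | point_size_log2

def unpack_config(word):
    """Recover the fields from the hypothetical config word."""
    return word & 0x1F, bool(word >> 5)

word = pack_config(10, True)  # request a 1024-point forward transform
assert unpack_config(word) == (10, True)
```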
See AXI Feature Adoption in Xilinx FPGAs in Chapter 3 for details of encoding
DSP/Wireless layered protocols. Table 6-8, page 114 and Table 6-9, page 115 summarize
some of the key characteristics, interoperability considerations, and guidelines for layered
protocols used in the DSP/Wireless IP domain.
Table 6-8: DSP/Wireless IP Layered Protocol Summary

Number of Data Transfers per Invocation
  Description: Sample-based processing: IP data processing is applied independently to every single AXI4-Stream transfer. Block-based processing: IP data processing is applied to a block (packet) of AXI4-Stream transfers.
  Examples: Sample: a complex multiplier operates on one data sample at a time. Block: an FFT of a given point size.
  Interoperability Considerations: Block-based IP must have the same notion of block size to interoperate.

Number of Channels
  Description: Single-channel: one logical stream of data. Multi-channel: data processed in parallel (can be a single AXI4-Stream interface with parallel data concatenated together on TDATA).
  Examples: FIR Compiler can be configured to operate on single or multiple parallel data lanes. Multi-channel mode allows DSP resources to be shared across multiple data paths.
  Interoperability Considerations: Data must be concatenated together or split out to change the number of parallel data streams.

Data Type Representation of TDATA
  Description: Unit: a single datum. Array: multiple data. Structure: tuples or special data structures for control/status interfaces.
  Examples: FIR Compiler can operate on unit data or an array of data when configured for single or multiple data paths, respectively. Control/status AXI4-Stream interfaces usually require structured data, such as FFT configuration.
  Interoperability Considerations: IP must have the same notion of data type representation to interoperate. Structured TDATA is often used in control/status interfaces and needs custom logic or programmable IP (like a microprocessor) to generate.
Use of TUSER Signal for Sideband Data
  Description: None: no TUSER signal used. Pass-through: the TUSER signal is passed from an input interface to an output interface. IP-specific: TUSER conveys IP-specific sideband data.
  Examples: Complex Multiplier can work without TUSER or can pass through TUSER. DDS Compiler can include TUSER in its output interface with IP-specific TDM channel markers.
  Interoperability Considerations: IP-specific TUSER often requires custom logic to decode. TUSER pass-through mode is useful for transmitting user information through a core to match latency with the data.

Use of TLAST Signal
  Description: None: no TLAST signal used. Pass-through: the TLAST signal is passed from an input interface to an output interface. Block end marker: TLAST indicates the last transfer in a block. IP-specific: TLAST is used to mark an IP-specific location in the data transfers.
  Examples: Divider Generator can work without TLAST or can pass through TLAST. FFT uses TLAST as a block end marker.
  Interoperability Considerations: IP must have the same notion of TLAST when it is used as a block end marker. TLAST pass-through mode is useful for transmitting user information through a core with latency matched to that of the data. IP-specific TLAST often requires custom logic to decode.

Communications IP Layered Protocols

IP in this domain use layered protocols to represent communications protocols, typically
networking packets. Packets can be fixed or variable sized depending on the protocol
(such as Ethernet and PCIe). IP are usually closely tied to the physical layer interface or
logical level interface, with transmit and receive AXI4-Stream interfaces and sideband
TUSER signals for control/status; some offer additional AXI4-Lite and AXI4 memory
mapped interfaces.
Table 6-9: DSP/Wireless IP Interoperability Guidelines

Data Type
  Description: Scalars, arrays, and structures with data type elements for fixed and floating point number representation.
  Rule to Achieve Interoperability: Adhere to defined data types and conventions; for example, real versus integer, common binary point, and data size.
  User Effort/Notes: Must understand data types/structures and ensure consistency; adapters might be needed.

Data Burst
  Description: Sample or block processing; IP processing applied to a single transfer or a block of transfers.
  Rule to Achieve Interoperability: Use of (optional) TLAST to delimit blocks, packets, or frames.
  User Effort/Notes: Must align block sizes to match data structure size; adapters might be needed.

AXI4-Stream Optional Signals
  Description: Optional: ACLKEN, TREADY, TLAST, TUSER. Fixed: TDATA, TVALID.
  Rule to Achieve Interoperability: Adhere to the DSP/Wireless IP-specific guidelines in DSP and Wireless IP: AXI Feature Adoption in Chapter 3.
  User Effort/Notes: Optional signals must be used consistently or adapters are needed.

AXI4-Stream TUSER Signals
  Description: No use, pass-through, or IP-specific use of TUSER. TUSER is generally optional.
  Rule to Achieve Interoperability: For higher interoperability, avoid use of IP-specific TUSER.
  User Effort/Notes: TUSER signals must be used consistently. Custom logic might be needed to handle IP-specific TUSER.
Because of the specific relationship to a communications protocol standard, IP in this
domain are often used with custom logic, infrastructure IP, or other IP of the same protocol
type.
Table 6-10 summarizes some of the key guidelines for layered protocols used in the
Communications IP domain.
AXI Infrastructure IP Layered Protocols
IP in this domain often do not use specific layered protocols, but are configurable to
pass data through or to generate/receive data using a processor or DMA engine.
The key element for interoperability is to use the AXI4-Stream protocol following the
recommendations in Signaling Protocol in Chapter 3. AXI Infrastructure IP is designed to
be broadly compatible with IP from different domains because it has highly configurable
interfaces and generally does not use the contents of the data.
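The signaling recommendation reduces to one rule: a beat transfers only in a cycle where TVALID and TREADY are both asserted. A minimal behavioral sketch of that handshake (illustrative Python, not RTL; `count_transfers` is a hypothetical helper):

```python
def count_transfers(tvalid_trace, tready_trace):
    """Count accepted beats over a cycle-by-cycle trace.

    AXI4-Stream handshake rule: a transfer completes only in a cycle
    where TVALID and TREADY are both high. Behavioral model only.
    """
    return sum(1 for v, r in zip(tvalid_trace, tready_trace) if v and r)

# The slave back-pressures in cycle 1; the master must hold TVALID
# (and the same TDATA) until the beat is accepted in cycle 2.
n = count_transfers([1, 1, 1, 0], [1, 0, 1, 1])  # -> 2
```

Because infrastructure IP relies on this handshake rather than on data contents, it composes with endpoints from any domain that follows the same rule.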
Table 6-11, page 117 summarizes some of the key characteristics and interoperability
considerations for the AXI Infrastructure IP domain.
Table 6-10: Communications IP Layered Protocol Interoperability Guidelines

Data Type
  Description: Packetized data, with and without headers/footers. Matched to protocols like Ethernet, PCIe, or SRIO.
  Rule to achieve: Remove header and footer to access raw packet data or transfer data to memory mapped
  User effort/notes: Must understand data types and ensure packet data is delivered in the correct order. Adapters might be needed.

Data Burst
  Description: Variable size: minimum can be a single cycle of data; maximum depends upon the parent protocol.
  Rule to achieve: All cores configurable to support standard burst sizes, up to a defined limit for a given protocol.
  User effort/notes: Care must be taken to ensure that legal-sized packets are transferred between cores. Adapters might be needed to break apart too-large packets.

AXI4-Stream Optional Signals
  Description: TREADY, TKEEP, TDATA, TUSER, TLAST. In some cases, ACLKEN, TDEST, TID.
  Rule to achieve: Use adapters for infrastructure IP. TDEST/TID can be used for data routing.
  User effort/notes: Adapters might be required.

AXI4-Stream TUSER Signals
  Description: Variety of uses and sizes. Common uses: packet discontinue, framing signals, packet details.
  Rule to achieve: Avoid using TUSER. Set control/status information in AXI4-Lite register space if possible. Migrate TUSER signals to a dedicated AXI4-Lite or AXI4-Stream sideband bus.
  User effort/notes: Adapter might be required.

Number of Channels and AXI4-Lite
  Description: Generally a single AXI4-Stream in each direction through the IP. SRIO can have up to 8 channels.
  Rule to achieve: For cores with multiple input/output streams, read the data sheet to understand data
  User effort/notes: Cores must have an appropriate port to which to connect. Refer to the individual data sheet for each core.
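The adapter behavior described in the Data Burst row, breaking a too-large packet into legal-sized packets, can be sketched behaviorally. The Python below is illustrative only; `split_packet` and `max_beats` are hypothetical names, and a real adapter would re-drive TLAST as an RTL signal:

```python
def split_packet(packet_beats, max_beats):
    """Split one packet (a list of TDATA beats) into legal-sized packets
    of at most max_beats beats each, asserting TLAST (the second tuple
    element) on the final beat of every fragment. Behavioral model only.
    """
    fragments = []
    for start in range(0, len(packet_beats), max_beats):
        chunk = packet_beats[start:start + max_beats]
        fragments.append([(d, i == len(chunk) - 1) for i, d in enumerate(chunk)])
    return fragments

# A 5-beat packet on a link whose parent protocol allows 2-beat packets.
frags = split_packet([1, 2, 3, 4, 5], max_beats=2)
```

Note that splitting changes packet boundaries, so it is only legal when the parent protocol (and the receiving core) tolerates the smaller packets; otherwise the system design must prevent oversized packets at the source.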
Domain-Specific Data Interpretation and Interoperability Guidelines
Table 6-11: AXI Infrastructure IP Interoperability Guidelines

AXI4-Stream Optional Signals Use
  Description: Endpoint-type IP generally limit support for TID, TDEST (unless multi-channel IP), TSTRB, and TUSER. Endpoint IP using TLAST should be aware of burst size. TKEEP is used only for packet remainders.
  Rules to achieve interoperability: Use Continuous or Continuous Aligned Streams.
  User effort/notes: Low for Pass-through and Monitor IP types. Generally low when using the core signals TDATA, TVALID, TREADY, TLAST, and TKEEP (remainders). Medium to high with more complex systems using TID, TDEST, TSTRB, or TUSER.

Layered Protocols
  Description: Pass-through and Monitor IP types do not alter data. Modifiers can transform data. Endpoints can synthesize or receive layered protocols with proper configuration.
  Rules to achieve interoperability: Minimize use of TID, TDEST, TSTRB, and TUSER. Consider how modifiers change the data structures used by layered protocols. Endpoints require the user to properly program them to generate data contents matching the layered protocol.
  User effort/notes: Low for Pass-through and Monitor IP types. Medium when using modifiers that can alter the data encodings algorithmically. Medium to high for endpoint IP that must be configured and programmed properly to match the requirements of layered protocols.

Other Interoperability Factors
  Description: There are interdependencies; for example, endpoint IP often have additional AXI4-Lite control interfaces.
  Rules to achieve interoperability: Pay attention to real-time system impact when using infrastructure IP; for example, FIFOs might increase latency.
  User effort/notes: Low for Monitor types. Low to medium for Pass-through and Modifier types that could affect real-time behavior. Medium to high for endpoint types, which could have control ports.

Interfacing to IP in Other Domains
  Description: Pass-through and Monitor types are designed to work with all domains. Modifier IP can work in video or DSP domains, but users need to validate data structure integrity. Endpoint IP have limited ability to interface to other domains and might need domain-specific endpoint IP (for example, AXI Video DMA, PCIe DMA, and so forth).
  Rules to achieve interoperability: Care must be taken to configure and program endpoints to match the layered protocol requirements of other IP domains.
  User effort/notes: Low for Pass-through and Monitor IP types. Medium when using modifiers that can alter data structures. Medium to high for endpoints that have to synthesize or receive layered protocols.
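TKEEP's role for packet remainders, noted in the table above, follows from its byte-lane definition: each TKEEP bit qualifies one byte of TDATA, so only the final beat of a packet whose length is not a multiple of the bus width carries a partial mask. A behavioral sketch (illustrative Python; the function name is hypothetical):

```python
def tkeep_trace(packet_len_bytes, bus_bytes):
    """Per-beat TKEEP values for a packet on a bus_bytes-wide stream.

    Every TKEEP bit qualifies one byte lane of TDATA: all bits are set
    on full beats, and only the low 'remainder' bits are set on a final
    partial beat. Behavioral model only -- not RTL.
    """
    full_beats, remainder = divmod(packet_len_bytes, bus_bytes)
    trace = [(1 << bus_bytes) - 1] * full_beats
    if remainder:
        trace.append((1 << remainder) - 1)
    return trace

# 10-byte packet on a 4-byte (32-bit) stream: two full beats, then a
# 2-byte remainder -> TKEEP = 0b1111, 0b1111, 0b0011.
trace = tkeep_trace(10, bus_bytes=4)
```

Keeping TKEEP to this remainder-only usage (Continuous Aligned Streams) is what lets infrastructure IP interoperate broadly; sparse TKEEP patterns mid-packet would require adapters.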
Appendix A
AXI Adoption Summary
This appendix provides a summary of protocol signals adopted by Xilinx from the AMBA
AXI specification, available from http://www.amba.com.
Additionally, this document references documents located at the following Xilinx website:
http://www.xilinx.com/support/documentation/axi_ip_documentation.htm
Multi-port Memory Controller (MPMC) Data Sheet (DS643)
AXI Interconnect IP (DS768)
AXI-To-AXI Connector IP Data Sheet (DS803)
AXI External Master Connector (DS804)
AXI External Slave Connector (DS805)
AXI Bus Functional Models (DS824)
AXI Data Mover Product Guide (PS022)
LogiCORE IP FIFO Generator (DS317)
Xilinx Documentation
Bridging Xilinx Streaming Video Interface with AXI4-Stream Protocol (XAPP521):
http://www.xilinx.com/support/documentation/application_notes/xapp521.pdf
AXI Multi-Ported Memory Controller Application Note (XAPP739):
http://www.xilinx.com/support/documentation/application_notes/xapp739_axi_mpmc.pdf
Designing High-Performance Video Systems with the AXI Interconnect (XAPP740):
http://www.xilinx.com/support/documentation/application_notes/xapp740_axi_video.pdf
AXI Bus Functional Models User Guide (UG783):
http://www.xilinx.com/support/documentation/sw_manuals/xilinx14_1/ug783_axi_bfm.pdf
MicroBlaze Processor Reference Guide (UG081):
http://www.xilinx.com/support/documentation/sw_manuals/xilinx14_1/mb_ref_guide.pdf
Xilinx Design Tools: Installation and Licensing Guide (UG798):
http://www.xilinx.com/support/documentation/sw_manuals/xilinx14_1/iil.pdf
Xilinx Design Tools: Release Notes Guide (UG631):
http://www.xilinx.com/support/documentation/sw_manuals/xilinx14_1/irn.pdf
Video Demonstrations: http://www.xilinx.com/design
Xilinx Answer Database: http://www.xilinx.com/support/mysupport.htm
Xilinx Glossary:
http://www.xilinx.com/support/documentation/sw_manuals/glossary
Appendix C: Additional Resources
EDK Website: http://www.xilinx.com/tools/embedded.htm
CORE Generator Tool: http://www.xilinx.com/tools/coregen.htm
Memory Control:
http://www.xilinx.com/products/technology/memory-solutions/index.htm
System Generator: http://www.xilinx.com/tools/sysgen.htm
Local-Link:
http://www.xilinx.com/products/design_resources/conn_central/locallink_member/sp06.pdf
Targeted Designs:
http://www.xilinx.com/products/targeted_design_platforms.htm
Answer Record: http://www.xilinx.com/support/answers/37425.htm
Third Party Documents
International Telecommunication Union (ITU): ITU-R BT.1614:
http://engineers.ihs.com/document/abstract/SUCFEBAAAAAAAAAA
HDTV Standards and Practices for Digital Broadcasting: SMPTE 352M-2009