ERNIC v4.0 Product Guide (PG332)
Chapter 1: Overview
Navigating Content by Design Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Core Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Feature Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Unsupported Features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Licensing and Ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
ERNIC v4.0 2
PG332 December 2, 2022 www.xilinx.com
Chapter 5: Example Design
Example Design Features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Example Design Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Simulating the Example Design. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Example Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Appendix B: Debugging
Debugging the IP/System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
IP Facts

Features:
• Support for Reliable Connection (RC) RDMA transport service types
• QP1 support for sending and receiving MAD packets
• Hardware handshake mode on the user interface to support hardware RDMA applications in the user logic
• Supports incoming and outgoing RDMA SEND, RDMA READ, RDMA WRITE, RDMA SEND WITH IMM, RDMA WRITE WITH IMM, and RDMA SEND WITH INVALIDATE message types
• Designed to scale up to 2047 RDMA Queue pairs(3)
• Support for IPv4 and IPv6 packets
• Support for Explicit Congestion Notification (ECN)
• Supports Priority Flow Control with different priorities for RoCE and non-RoCE traffic
• Supports memory registrations and protection domains

Tool support:
Simulation: For supported simulators, see the Xilinx Design Tools: Release Notes Guide.
Synthesis: Vivado Synthesis
Release Notes and Known Issues: N/A
All Vivado IP Change Logs: Master Vivado IP Change Logs: 72775
Support: Xilinx Support web page

Notes:
1. For a complete list of supported devices, see the Vivado IP catalog.
2. For the supported versions of third-party tools, see the Xilinx Design Tools: Release Notes Guide.
3. For -1 speed grade devices, the design might have timing violations for configurations with more than 64 QPs.
Chapter 1
Overview
• Hardware, IP, and Platform Development: Creating the PL IP blocks for the hardware
platform, creating PL kernels, subsystem functional simulation, and evaluating
Vivado® timing, resource, and power closure. Also involves developing the hardware
platform for system integration. Topics in this document that apply to this design
process include:
° Port Descriptions
° Register Space
° Clocking
° Resets
° Example Design
Core Overview
This chapter provides an overview of the ERNIC IP core and details its applications,
licensing requirements, and standards conformance. ERNIC is a soft IP implementing the
RDMA over Converged Ethernet (RoCE v2) protocol for embedded target or initiator devices.
This implementation is based on the specifications described in InfiniBand Architecture
Specification Volume 1, Annex A16 RoCE and Annex A17 RoCE v2 [Ref 1].
Figure 1-1 shows the ERNIC and its connections to other IPs in the subsystem.
Figure 1-1: ERNIC block diagram. The diagram shows the Request Validation, Response
Validation/Response Handler, Flow Control Manager (ACK PSN, MPSN, and OSQ buffers with
FSM logic), Header Generator, CRC and caching logic, and QP/buffer manager blocks, with
their DMA engines and configuration registers. They connect through AXI4-Stream
interfaces (roce_cmac_s_axis_*, cmac_m_axis_*, non_roce_dma_s_axis_*), AXI memory-mapped
interfaces (rx_pkt_hndlr_ddr_m_axi_*, rx_pkt_hndlr_rdrsp_m_axi_*, resp_hndler_m_axi_*,
qp_mgr_m_axi_*, wqe_proc_top_m_axi_*, wqe_proc_wr_ddr_m_axi_*), the AXI4-Lite interface
(s_axi_lite_*), and the doorbell sidebands (rx_pkt_hndlr_o_rq_db_*,
resp_hndler_o_send_cq_db_*).
Note: The user logic or target IP that connects to the ERNIC is referred to as the
application, and the direction of the arrows is from master to slave.
Apart from the ERNIC IP, the ERNIC subsystem includes the Xilinx Ethernet IP, AXI DMA, and
AXI Interconnect among other IPs. On the user application front, the ERNIC IP exposes side
band interfaces to allow efficient doorbell exchanges without going through the
interconnect.
Each queue is identified with a set of read and write pointers called the Producer Index
(write pointer) and Consumer Index (read pointer). The register address locations for these
pointers are termed doorbells in this document. A doorbell exchange, or doorbell ringing,
indicates that the corresponding register location has been updated.
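This producer/consumer doorbell scheme can be modeled in a few lines of C. The sketch below is illustrative only: the `queue_db` structure and helper names are hypothetical, and a real doorbell is a register write rather than a plain variable update; only the producer-index/consumer-index roles follow the text.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative model of one queue's doorbells: the producer index is
 * advanced by the side that adds entries, the consumer index by the
 * side that removes them. The structure and names are hypothetical. */
typedef struct {
    uint32_t pidb;  /* producer index doorbell (write pointer) */
    uint32_t cidb;  /* consumer index doorbell (read pointer)  */
    uint32_t depth; /* number of entries in the queue          */
} queue_db;

/* "Ringing" a doorbell is updating the corresponding register. */
void ring_producer(queue_db *q) { q->pidb = (q->pidb + 1) % q->depth; }
void ring_consumer(queue_db *q) { q->cidb = (q->cidb + 1) % q->depth; }

/* The queue holds unconsumed work while the two indices differ. */
bool queue_has_work(const queue_db *q) { return q->pidb != q->cidb; }
```

The same pattern applies to every ERNIC queue: the side that produces entries rings the producer doorbell, and the side that consumes them rings the consumer doorbell.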
Feature Summary
The ERNIC IP interfaces with any Ethernet MAC IP using an AXI4-Stream interface. Access to
DDR or any other memory region is necessary for reading and writing the various data
structures used in RDMA packet processing. This connection is achieved using multiple AXI4
interfaces. The IP implements a 512-bit internal datapath, and data transfer can be
completely hardware accelerated without any software intervention. All recoverable faults,
such as retransmission due to packet drops, are also handled entirely in hardware.
The ERNIC IP implements embedded RNIC functionality. As a result, only the following
subset of RoCE v2 functionality is implemented compared to a general purpose RNIC:
• Support for RDMA SEND, RDMA READ, RDMA WRITE, RDMA SEND INVALIDATE, RDMA
SEND IMMEDIATE, and RDMA WRITE IMMEDIATE for incoming and outgoing packets.
Atomic operations are not supported.
• Support for up to 2046 connections.
• Scalable design of up to 2047 RDMA Queue pairs.
Note: Default Vivado strategies allow timing to pass for up to 127 queue pairs. To meet
timing with 2047 queue pairs, use the Performance_RefinePlacement Vivado implementation strategy.
• Supports dynamic memory registration.
• Hardware handshake mechanism for efficient doorbell exchange with the user
application logic.
Note: When switching a QP into hardware handshake mode, the software layer should not have
any traffic outstanding on that QP.
ERNIC Modules
The ERNIC IP consists of the following main modules that are explained in this section.
• QP Manager
• WQE Processor Engine
• RX PKT Handler
• Response Handler
• Flow Control Manager
QP Manager
The QP Manager module houses the configurations for all the QPs and provides an
AXI4-Lite interface to the processor. It also arbitrates across various SEND Queues and
caches the SEND Work Queue Entries (WQEs). These WQEs are then provided to the WQE
processor module for further processing. This module also handles the QP pointer updates
in the event of retransmission.
WQE Processor Engine
The WQE Processor Engine processes the WQEs handed off by the QP Manager. It is also
responsible for sending outgoing acknowledgment packets for incoming RDMA SEND/WRITE
requests, and read responses for incoming RDMA READ requests.
RX PKT Handler
The RX PKT Handler module receives the incoming RDMA packets. Non-RDMA packets
should be filtered out before reaching the RX PKT Handler (roce_cmac_s_axis)
interface. The ERNIC IP handles the following types of incoming RoCE v2 packets:
• RDMA SEND, RDMA WRITE, RDMA READ and response packets for RDMA READ
(request sent from ERNIC)
• RDMA SEND with Invalidate, RDMA SEND with immediate, and RDMA WRITE with
immediate packets
• Acknowledgment packets for RDMA WRITE/RDMA SEND (request sent from ERNIC)
• Communication management (Management Datagram) packets to QP1
The RX PKT Handler module is responsible for validating the incoming packets. It also
triggers outgoing acknowledgment packets for incoming RDMA SEND and RDMA WRITE
requests and pushes the packets that pass the validation to the corresponding memory
location. The RDMA READ responses are channeled to the target application directly. The
module handles the incoming RDMA READ requests and forwards the request to the TX
path.
The RX PKT Handler module also decodes RDMA SEND with Invalidate/Immediate and RDMA
WRITE with Immediate packets. The 32-bit data present in either the IETH or IMMDT header is
provided on a separate AXI4-Stream interface, with 64 bits of data per entry. The following
table shows the encoding of these 64 bits.

Outgoing Pause requests for RDMA traffic are handled inside this module. When the
remaining buffer locations reach the XON condition, Pause is asserted; it is deasserted
when the buffer pointer reaches the XOFF condition.
Response Handler
The Response Handler module manages the outstanding queues. These queues hold
information about all packets that have been sent to the remote host but not yet
acknowledged or responded to. In addition, this module triggers a retransmission if the
remote host sends a Negative Acknowledgment (NAK). If this module does not receive a
response from the remote host within a specified time (the timeout value), it triggers a
timeout-related retransmission.
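The retransmission decision described above can be sketched as follows. This is a hedged software model, not the hardware implementation: the `outstanding_entry` fields and the `needs_retransmit` helper are hypothetical; only the NAK and timeout triggers follow the text.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical model of one outstanding-queue entry: every packet sent
 * to the remote host but not yet acknowledged has such an entry. */
typedef enum { RESP_ACK, RESP_NAK, RESP_NONE } resp_t;

typedef struct {
    uint32_t psn;     /* packet sequence number of the entry */
    uint64_t sent_at; /* timestamp when the packet went out  */
} outstanding_entry;

/* Returns true when the entry must be retransmitted. `timeout` stands
 * for the configured response-timeout value mentioned in the text. */
bool needs_retransmit(const outstanding_entry *e, resp_t resp,
                      uint64_t now, uint64_t timeout)
{
    if (resp == RESP_NAK)        /* remote host sent a NAK for this PSN */
        return true;
    if (resp == RESP_NONE && now - e->sent_at >= timeout)
        return true;             /* timeout-related retransmission      */
    return false;                /* ACK received, or still waiting      */
}
```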
Applications
The ERNIC IP can be used in a range of applications that require reliable transfer of
packets across the network fabric. A few such applications are listed here:
Unsupported Features
The ERNIC IP does not support:
License Checkers
If the IP requires a license key, the key must be verified. The Vivado design tools have
several license checkpoints for gating licensed IP through the flow. If the license check
succeeds, the IP can continue generation. Otherwise, generation halts with an error. License
checkpoints are enforced by the following tools:
• Vivado synthesis
• Vivado implementation
• write_bitstream (Tcl command)
IMPORTANT: The IP license level is ignored at checkpoints. The test confirms that a valid
license exists; it does not check the IP license level.
Chapter 2
Product Specification
The ERNIC IP provides an embedded implementation of a RoCE v2 enabled NIC. RDMA
technology allows faster movement of data over standard Ethernet while completely
offloading the CPU. The ERNIC IP core comes with software drivers that can be ported to
any Zynq® MPSoC or FPGA device, which allows the ERNIC IP to function independent of
any external processor.
Figure 2-1 shows a sample end-to-end system with multiple host CPUs and multiple native
NVMe devices communicating over a network fabric through the Xilinx ERNIC + NVMEOFABRIC IP
subsystem.
• RDMA Write
• RDMA WRITE Immediate
• RDMA SEND
• RDMA SEND Immediate
• RDMA Read
• RDMA SEND Invalidate
• ATOMIC
• Bind Memory Window
• Local Invalidate
• Fast Register Physical MR
The application does not need to post Receive Queue work requests, because the ERNIC
hardware automatically re-posts consumed receive buffers as per the configured receive
queue depth.
Table 2-1 shows the structure of Send work requests. Each Work Queue Entry (WQE) is 64
bytes in size.
Work Completions
Work completions are posted for every WQE posted on the Send Queue. Completions are
not posted for Receive Queue (RQ) entries; instead, a doorbell is rung per Queue Pair (QP)
at the address pointed to by RQWPTRDBADDi when a new receive buffer is consumed by an
incoming RDMA SEND request. The structure of a Completion Queue Entry (CQE) is given in
Table 2-2. Each CQE is 4 bytes in size.
RDMA Queues
The ERNIC IP implements RDMA queues: the Receive Queue (RQ), Send Queue (SQ), and
Completion Queue (CQ). Together, these queues are referred to as a Queue Pair (QP). The SQ
houses the send WQEs posted by the user application.
Figure 2-2: RDMA queue layout, showing the Send Queue (sq_depth, sq_pidb, sq_cmpl_db, and
pending WQEs), the Receive Queue (rq_depth, rq_ci_db), and the Completion Queue (cq_ba,
stat_cq_head).
These queues are implemented in memory regions outside the ERNIC IP; the ERNIC IP accesses
them through its various AXI master interfaces. See Table 3-2 for details of the ERNIC
memory requirements.
The next few sections provide a brief overview of the incoming (RX) and outgoing (TX) data
flow of the ERNIC IP.
ERNIC RX Path
The ERNIC RX path receives packet data from the MAC through the AXI4-Stream interface.
All incoming packets are validated, and the headers of packets that fail validation are
sent to the error buffer (base address specified by ERRBUFBA[31:0]) if XRNICCONF[5] is set
to 1. The ERNIC RX packet handler module prefixes the header with an error syndrome as per
Table 2-3. These buffers provide useful debug information for incoming packet errors. The
RX path implements logic to detect ECN-marked packets and logs each in an interrupt status
bit corresponding to that QP in the CNPSCHDSTS*REG register. On reception of an ECN-marked
packet, an interrupt is generated to notify the driver. The driver generates a CNP packet
for that QP and schedules it through QP1 instead of the QPN. The RX path also implements
logic to detect incoming CNP packets and notifies the QP manager, which reduces the number
of outstanding requests on the corresponding QP from 16 to 8. Further reception of CNPs
has no additional effect on the outgoing traffic.
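The driver-side CNP scheduling described above can be sketched as follows. The function and the 32-QP status word are illustrative assumptions; only the rule that each flagged QP is answered with a CNP sent through QP1 comes from the text.

```c
#include <stdint.h>

/* Hypothetical sketch of driver-side ECN handling: each set bit in the
 * CNP-schedule status word marks a QP that received an ECN-marked
 * packet. The driver answers each one with a CNP scheduled through QP1
 * (not through the flagged QP itself). The 32-QP word width and the
 * output-array interface are illustrative. */
unsigned service_ecn_status(uint32_t cnpschdsts, unsigned qps_out[32])
{
    unsigned count = 0;
    for (unsigned qp = 0; qp < 32; qp++) {
        if (cnpschdsts & (1u << qp))
            qps_out[count++] = qp;  /* each listed QP gets a CNP via QP1 */
    }
    return count;
}
```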
Most packet validation errors are handled entirely by hardware, and no software
intervention is required. However, if an incoming packet causes the QP to enter a FATAL
state, software intervention is required to process the error and to initiate a
disconnection. Such errors are made available to software in the incoming packet error
status buffers defined by the IPKTERRQBA, IPKTERRQSZ, and IPKTERRQWPTR registers. Each
error status buffer entry is 64 bits wide. The format for the error status is shown in
Figure 2-3.
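Software consumption of these 64-bit error status entries might look like the following sketch. The `err_queue` structure and the software read index are assumptions; only the IPKTERRQBA/IPKTERRQSZ/IPKTERRQWPTR roles and the 64-bit entry width follow the text.

```c
#include <stdint.h>

/* Illustrative consumer of the incoming-packet error status queue. The
 * hardware appends 64-bit entries to the buffer at IPKTERRQBA and
 * advances the write pointer (IPKTERRQWPTR); the software-side read
 * index below is hypothetical, as is treating pointers as entry
 * indices rather than byte addresses. */
typedef struct {
    const uint64_t *base;  /* error buffer base (IPKTERRQBA)     */
    uint32_t size;         /* entries in the buffer (IPKTERRQSZ) */
    uint32_t rptr;         /* software read index (illustrative) */
} err_queue;

/* Drain entries up to the hardware write index; returns the number of
 * entries consumed, leaving the most recent one in *last_entry. */
unsigned drain_error_queue(err_queue *q, uint32_t wptr, uint64_t *last_entry)
{
    unsigned n = 0;
    while (q->rptr != wptr) {
        *last_entry = q->base[q->rptr];     /* raw 64-bit error status */
        q->rptr = (q->rptr + 1) % q->size;  /* ring wrap-around        */
        n++;
    }
    return n;
}
```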
Incoming RDMA SEND/WRITE/READ requests are expected on the RX side. All other types
of packets are response packets for the outgoing requests. The data flow for incoming
RDMA SEND requests is shown in Figure 2-4; the direction of the arrows shows the flow of
data. On receiving a valid incoming RDMA SEND packet on a connected QP, the packet content
(without the headers) is pushed into the RX buffer for the relevant QP. The ERNIC rings the
RQ Producer Index Doorbell (RQPI DB) to indicate that a new packet is available, either
using the side band interface or through the AXI interface, depending on the configuration
of QPCONFi[4]. An acknowledgment is also posted to the remote host at this point. The user
application may inform the ERNIC of having consumed the new packet by ringing the RQ
Consumer Index Doorbell (RQCI DB). On receiving this doorbell, the corresponding RX buffer
is made available for new incoming packets.
Figure 2-4: Incoming RDMA SEND flow. An RDMA SEND packet is sent by the remote host; the
packet is posted to the RQ buffer, the RQPIi doorbell is rung on the sideband interface,
and STATRQPIDBi is updated; an explicit or coalesced ACK is then sent.
Figure 2-5: Incoming RDMA READ flow. A WQE is posted at the target through an RDMA SEND
carrying a capsule with address information; the remote host then issues an RDMA READ
request; data is fetched from the local buffer; and single or multiple RDMA READ response
packets are returned, depending on the DMA length.
Figure 2-6: Incoming RDMA WRITE flow. A WQE is posted at the target through an RDMA SEND
carrying a capsule with address information; the remote host then issues an RDMA WRITE
request; an explicit or coalesced ACK is sent.
ERNIC TX Path
The TX path carries outgoing RDMA READ and RDMA WRITE transactions, ACK packets
for incoming RDMA SEND/WRITE requests, and responses for incoming RDMA READ
requests. The Send Work Queue requests are processed based on the SQPIi doorbell. The
DMA module is configured for data transfers for all outgoing transactions. The TX data flow
for RDMA WRITE/SEND is shown in Figure 2-7.
The user application requests the ERNIC IP to transmit an RDMA WRITE/SEND/READ packet
by posting a WQE on the SEND Queue for a particular QP and ringing the corresponding SQ
Producer Index Doorbell (SQPI DB). The ERNIC processes the WQE and pulls data for RDMA
WRITE/SEND commands based on the information provided in the WQE. This data along
with relevant headers is pushed out on the link. Once an acknowledgment is received from
the remote host, the ERNIC informs the user application of the successful completion of the
WQE by posting a CQE, based on the configuration of QPCONFi[5], and by posting the
completion count through the side band interface. The CQHEAD for the corresponding QP
is also updated. For RDMA READ requests, the RX packet handler notifies the TX path. For
this communication, an outstanding read request queue is available for each QP. The depth
of the queue is determined by a parameter for incoming request resources. When the
outstanding read request queue is full, this is indicated to the RX path. Any further
requests to the QP result in a NAK-Invalid, and the QP is moved to the FATAL state.
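The application-side half of this TX flow, posting a 64-byte WQE and ringing the SQ Producer Index Doorbell, can be sketched as follows. The WQE is treated as opaque here because its field layout is defined in Table 2-1, and the doorbell is modeled as a plain variable update; the structure and function names are hypothetical.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative application-side TX posting: copy a 64-byte WQE into
 * the Send Queue ring, then advance the SQ producer index (the SQPI
 * doorbell). Names and the doorbell model are hypothetical. */
#define WQE_SIZE 64

typedef struct {
    uint8_t *sq_base;   /* Send Queue buffer in memory           */
    uint32_t sq_depth;  /* number of WQE slots                   */
    uint32_t sqpi;      /* SQ producer index (doorbell register) */
} send_queue;

/* Post one WQE and "ring" the SQPI doorbell; returns the slot used. */
uint32_t post_send_wqe(send_queue *sq, const uint8_t wqe[WQE_SIZE])
{
    uint32_t slot = sq->sqpi % sq->sq_depth;
    memcpy(sq->sq_base + (size_t)slot * WQE_SIZE, wqe, WQE_SIZE);
    sq->sqpi = (sq->sqpi + 1) % sq->sq_depth;  /* doorbell write */
    return slot;
}
```

Once the doorbell is rung, the ERNIC processes the WQE as described above, and completion is reported back through the CQE and the sideband completion count.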
Figure 2-7: Outgoing RDMA SEND/WRITE flow. A WQE is posted and the SQPIi doorbell is rung;
the RDMA SEND/WRITE packet(s) are sent, with RDMA WRITE data pulled by the ERNIC from the
local buffer; on receipt of an explicit or coalesced ACK, a CQ entry is posted, the
completion count is sent through the sideband channel, and CQHEADi is updated.
Figure 2-8 shows the flow of RDMA READ requests to the remote host from the ERNIC.
Figure 2-8: Outgoing RDMA READ flow. An RDMA READ packet is sent; the read response(s)
received are pushed to the local buffer; a CQ entry is posted, the completion count is sent
through the sideband channel, and CQHEADi is updated.
Standards
This implementation is based on the standard and specifications described in InfiniBand
Architecture Specification Volume 1, Annex A16 RoCE and Annex A17 RoCE v2 [Ref 1].
Performance
The ERNIC IP is designed with an internal datapath throughput of up to 100 Gb/s at a
frequency of 200 MHz (the 512-bit datapath at 200 MHz corresponds to 102.4 Gb/s of raw
internal bandwidth).
For more details about resource utilization, see the Performance and Resource Utilization.
Resource Utilization
This section summarizes the estimated maximum performance for various modules within
the ERNIC IP. The data is separated into a table per device family. Each row describes a test
case. The columns are divided into test parameters and results. The test parameters include
the part information and the core-specific configuration parameters. Any configuration
parameters that are not listed have their default values; any parameters with a blank value
are disabled or set automatically by the IP core. Consult the product guide for this IP core
for a list of GUI parameter and user parameter mappings.
• Resource use numbers are taken from the utilization report issued at the end of an
implementation using the Out-of-Context flow in Vivado Design Suite.
• The Out-of-Context IP constraints include HD.CLK_SRC properties as required to ensure
correct hold timing closure. These properties are enabled using the following Tcl
command:
set_param ips.includeClockLocationConstraints true
• The frequencies used for clock inputs are stated for each test case.
• LUT numbers do not include LUTs used as pack-thrus, but include LUTs used as
memory.
• Default Vivado® Design Suite 2018.1 settings are used. You can improve on these
numbers using different settings. However, because the surrounding circuitry will affect
placement and timing, these numbers might not repeat in a larger design.
For more details about resource utilization, see the Performance and Resource Utilization.
Port Descriptions
Table 2-5 describes the ports and their interface definitions.
Parameter Descriptions
As many features in the ERNIC controller design are controlled using parameters, the
controller implementation can be uniquely tailored using only the resources required for
the desired functionality. This approach also achieves the best possible performance with
the lowest resource usage.
Register Space
All ERNIC registers are synchronous to the AXI4-Lite domain. Any bits not specified in the
register tables below are reserved and return 0 on read. The power-on reset value of a
control register is 0 unless specified otherwise in its definition. Always write reserved
locations with 0 unless stated otherwise. Only address offsets are listed in the tables;
the base address is configured by the AXI interconnect at the system level. The contents
of the memory region table are used for header validation as shown in the following figure.
Figure 2-9: Header validation using the memory region table. In Step 1, the MR index
carried in the RETH R_Key selects a memory region table entry containing the PD number,
virtual address, physical address, R_Key, DMA length, and access fields. In Step 2, the PD
number is matched against the PD information fetched from the PD table, the R_Key is
matched, and the RETH virtual address and DMA length undergo a buffer range validation
check.
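The validation steps in Figure 2-9 can be sketched in software form. The field widths and the exact MR-index/R_Key split are illustrative assumptions; only the PD match, R_Key match, and buffer range check follow the figure.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hedged sketch of the header-validation flow: the memory region table
 * entry selected by the MR index must match the QP's PD number and the
 * RETH R_Key, and the requested [VA, VA+length) window must fall
 * inside the registered buffer. Field widths are illustrative. */
typedef struct {
    uint32_t pd_number;  /* protection domain of the region */
    uint32_t rkey;       /* registered R_Key                */
    uint64_t virt_addr;  /* start of the registered region  */
    uint64_t length;     /* size of the registered region   */
} mr_entry;

bool validate_reth(const mr_entry *mr, uint32_t qp_pd_number,
                   uint32_t reth_rkey, uint64_t reth_va,
                   uint32_t reth_dma_len)
{
    if (mr->pd_number != qp_pd_number)  /* Step 2: PD number match */
        return false;
    if (mr->rkey != reth_rkey)          /* R_Key match             */
        return false;
    if (reth_va < mr->virt_addr ||      /* buffer range validation */
        reth_va + reth_dma_len > mr->virt_addr + mr->length)
        return false;
    return true;
}
```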
Chapter 3
Designing with the Core
Figure 3-1: ERNIC IP interfaces. The diagram shows the ERNIC Target IP between the CMAC
and the customer logic, with AXI4-Stream data interfaces (roce_cmac_s_axis,
non_roce_cmac_s_axis, cmac_m_axis, and the non_roce_dma_s_axis/m_axis pair through the AXI
DMA), AXI4 memory-mapped master interfaces toward DDR (pkt_hndler, pkt_hndler rresp,
wqe_proc, wqe_proc_wr_retry, resp_hndler, and QP mgr), the s_axi_lite AXI4-Lite slave
interface, and the customer sideband doorbells (RQPI, RQCI, SQPI, SQCI).
Figure 3-1 shows the various ERNIC IP interfaces. The interfaces shown as interfacing with
DDR may interface with any memory mapped region. Refer to Table 2-5 for details on each
of these interfaces. The AXI4 and the AXI4-Stream interfaces are 512 bits wide and are
mainly used for data transfers. The ERNIC IP provides sideband interfaces to allow for
efficient exchange of queue pair related doorbells. These side band interfaces can be
enabled or disabled for each queue pair (QP) based on the configuration.
The ERNIC IP has one AXI4-Lite slave interface to access the register space. The details of
the memory map required for this slave interface are shown in the following table.
Apart from these, the IP also requires some memory regions to be allocated for some
specific data structures. These memory regions may be mapped to a local DRAM or an AXI
BRAM or any other memory mapped slave. Ensure that there is adequate bandwidth on
these memory interfaces based on the line rate that ERNIC should achieve.
Notes:
1. Use the Performance_RefinePlacement Vivado implementation strategy when using the ERNIC
with more than 256 QPs.
Interrupts
The ERNIC IP provides a single interrupt line, which is generated from the interrupt
status signals enabled through the INTEN register. Each of these interrupts can be enabled
by writing a 1 to the corresponding bit of the Interrupt Enable (INTEN) register. On
receiving the ORed interrupt, software can read the INTSTS register to determine the cause
of the interrupt.
Interrupt bits 4 and 6 inform the drivers about a WQE completion or an incoming RDMA
SEND, respectively. These interrupts can be enabled or disabled on a per-QP basis. Bit [2]
of the QPCONFi registers allows selective enabling of Receive Queue interrupts per QP.
Similarly, bit [3] of the QPCONFi registers allows selective enabling of Send Completion
Queue interrupts. In general, QPs that require software handling should have this option
enabled.
The QPs that are directly handled by the hardware are informed through the hardware
handshake ports, and the corresponding interrupt enable bits can be disabled. Such QPs
should have the hardware handshake disable bit, QPCONFi[4], reset to 0.
The RQINTSTSn and CQINTSTSn registers provide bitwise information about the QPs that
have a pending RQ or CQ entry to be serviced.
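A minimal interrupt-dispatch sketch following the description above: the INTSTS bit positions 4 and 6 come from the text, while the helper, the 32-QP status word width, and the register-read plumbing are assumptions.

```c
#include <stdint.h>

/* Sketch of the interrupt dispatch: software reads INTSTS once, then
 * uses a per-QP RQINTSTS bitmap to find which queue needs service.
 * Bit positions 4 and 6 follow the text; the rest is illustrative. */
#define INTSTS_CQ_COMPLETION (1u << 4)  /* a WQE completed           */
#define INTSTS_RQ_SEND_RCVD  (1u << 6)  /* incoming RDMA SEND posted */

/* Given INTSTS and one 32-QP RQ status word, report the first QP that
 * needs RQ service, or -1 when the interrupt is not an RQ event. */
int first_rq_qp_to_service(uint32_t intsts, uint32_t rqintsts)
{
    if (!(intsts & INTSTS_RQ_SEND_RCVD))
        return -1;                       /* not an incoming-SEND event */
    for (int qp = 0; qp < 32; qp++) {
        if (rqintsts & (1u << qp))
            return qp;                   /* this QP has a pending RQ entry */
    }
    return -1;
}
```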
Clocking
Two clocks are exposed at the top of the ERNIC IP: the AXI4 clock and the AXI4-Lite clock.
All register accesses work on the AXI4-Lite clock, while the rest of the logic works on the
AXI4 clock. Typically, the AXI4 clock is the higher-frequency clock (up to 200 MHz), while
the AXI4-Lite interface can be clocked at a lower frequency (a divided AXI4 clock whose
edges align with the AXI4 clock). However, these clocks are treated as synchronous inside
the ERNIC IP and are expected to be generated from the same clock source.
Resets
The ERNIC IP requires two active-Low resets that are synchronized to the two clock
domains, respectively.
Chapter 4
Design Flow Steps
• Vivado Design Suite User Guide: Designing IP Subsystems using IP Integrator (UG994)
[Ref 2]
• Vivado Design Suite User Guide: Designing with IP (UG896) [Ref 3]
• Vivado Design Suite User Guide: Getting Started (UG910) [Ref 4]
• Vivado Design Suite User Guide: Logic Simulation (UG900) [Ref 5]
If you are customizing and generating the core in the Vivado IP integrator, see the Vivado
Design Suite User Guide: Designing IP Subsystems using IP Integrator (UG994) [Ref 2] for
detailed information. IP integrator might auto-compute certain configuration values when
validating or generating the design. To check whether the values change, see the
description of the parameter in this chapter. To view the parameter value, run the
validate_bd_design command in the Tcl Console.
You can customize the IP for use in your design by specifying values for the various
parameters associated with the IP core using the following steps:
For details, see the Vivado Design Suite User Guide: Designing with IP (UG896) [Ref 3] and
the Vivado Design Suite User Guide: Getting Started (UG910) [Ref 4].
Note: Figures in this chapter are illustrations of the Vivado Integrated Design Environment (IDE).
The layout depicted here might vary from the current version.
Output Generation
For details, see the Vivado Design Suite User Guide: Designing with IP (UG896) [Ref 3].
Simulation
For comprehensive information about Vivado simulation components, as well as
information about using supported third-party tools, see the Vivado Design Suite User
Guide: Logic Simulation (UG900) [Ref 5].
IMPORTANT: For cores targeting Xilinx 7 series FPGAs or Zynq-7000 devices, UNIFAST libraries are not
supported. Xilinx IP is tested and qualified with UNISIM libraries only.
Chapter 5
Example Design
This chapter contains information about the ERNIC core example design. The example
design consists of the following modules:
Figure 5-1: ERNIC example design block diagram. The example design modules (the Register
Configuration Module with its initiation file, the Packet Generator with TX-WQE BRAM, the
TX Packet Checker with CRC Calculator, the RX Checker/Packet Generator, SQ-PI update
logic, and memories for read/write data and for the capsules towards NVMof) surround the
XRNIC IP modules (WQE Processor, QP Manager, Response Handler, RX Packet Handler, and
DMA), connected over AXI4-Stream, AXI4, AXI4-Lite, and side band interfaces.
Apart from the ERNIC IP, the example design integrates the following modules:
• Register Configuration Module: This module configures all the required registers of
the ERNIC IP.
• Packet Generator Module: This generates the following types of packets:
° SEND packets
° RDMA Read Response packets for all the read requests from the ERNIC IP
° ACK packets for all the RDMA Write requests from the ERNIC IP
• Data checker: This checks the data received for the RDMA WRITE operation, and performs
the RDMA READ RESPONSE packet check for the RDMA READ operation and the ACK packet check
for the RDMA WRITE operation.
• RDMA SEND
• RDMA Read Response
• ACK Packets
• RDMA READ
• RDMA WRITE
Simulation Results
The simulation script compiles the ERNIC example design and supporting simulation files.
It then runs the simulation and checks that it completed successfully. If the test
passes, a message indicating success is displayed.
Example Sequence
The demonstration test bench performs the following tasks:
• The Register Configuration Module configures all the required ERNIC registers through the AXI4-Lite interface.
• The example test bench handles the following operations:
a. RDMA SEND:
- The Packet Generator module generates eight SEND packets addressed to QP2 through QP7.
- The RX Path Checker of the example design checks the data integrity of the capsules transferred from the ERNIC, along with the doorbells rung.
b. RDMA RD REQUEST:
- The example design has predefined RDMA Read work queue entry packets. The WQE Generator module rings the SQPI doorbell and sends the work queue entry packets when requested by the ERNIC.
- The TX Checker module checks the necessary fields of the received RDMA read request.
- After successful validation of the request, the Packet Generator module sends an RDMA Read Response to the ERNIC IP.
- The RX Path Checker checks the data integrity of the payload transferred by the ERNIC, along with the doorbells rung.
c. RDMA WRITE REQUEST:
- The example design has predefined RDMA Write work queue entry packets. The WQE Generator module rings the SQPI doorbell and sends the work queue entry packets when requested by the ERNIC.
- The TX Checker module checks the necessary fields of the received RDMA write request.
- Upon successful validation of the request, the Packet Generator module sends an ACK packet to the ERNIC IP.
d. RDMA WRITE:
- For initiator functionality, RDMA write packets are sent to the ERNIC module, which writes the payload data on the RX path; the acknowledgment (ACK packet) is received on the TX path.
- The ACK packet and the data are checked by the checker modules.
e. RDMA READ:
- For initiator functionality, RDMA Read packets are sent to the ERNIC module, which reads the data from DDR and sends the read response on the TX path.
- The read response packets are checked by the checker module.
Chapter 6
ERNIC Software Flow
• Configure the error buffer queue. Allocate memory, write the base address to the ERRBUFBA register, and write the queue depth and size to the ERRBUFSZ register. The hardware uses the ERRBUFWPTR register to notify software of new entries in the error buffer queue.
• Configure the incoming packet error status queue. This queue gives the status of incoming packets with errors. To configure it, allocate memory, write the base address to the IPKTERRQBA register, and write the queue depth and size to the IPKTERRQSZ register. The hardware uses the IPKTERRQWPTR register to notify software of new entries in the queue.
• Configure the response error packet buffer. Write the memory base address to RESP_ERR_PKT_BUF_BA, the queue depth and size to RESP_ERR_BUF_SZ, and a DDR address to RESP_ERR_BUF_WRPTR, which the hardware uses to notify software of response error buffer writes.
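The three bullets above can be sketched as a driver-style C fragment. Only the ERRBUFBA (0x60) and ERRBUFWPTR (0x6C) offsets are stated in this guide (see Appendix B); every other offset below is a placeholder, the depth/size packing inside the *SZ registers is assumed, and a plain array stands in for the memory-mapped register block:

```c
#include <assert.h>
#include <stdint.h>

/* Simulated 4 KB register space standing in for the memory-mapped ERNIC block. */
static uint32_t regs[1024];

static void     reg_write(uint32_t off, uint32_t val) { regs[off / 4] = val; }
static uint32_t reg_read(uint32_t off)                { return regs[off / 4]; }

#define ERRBUFBA            0x60 /* offset per Appendix B */
#define ERRBUFSZ            0x64 /* assumed offset */
#define IPKTERRQBA          0x88 /* assumed offset */
#define IPKTERRQSZ          0x8C /* assumed offset */
#define RESP_ERR_PKT_BUF_BA 0xB0 /* assumed offset */
#define RESP_ERR_BUF_SZ     0xB4 /* assumed offset */
#define RESP_ERR_BUF_WRPTR  0xB8 /* assumed offset */

/* Program the three error-reporting structures: base address and depth/size for
 * each, plus the DDR address the hardware uses to report response-error
 * buffer writes. */
static void ernic_cfg_error_buffers(uint32_t errq_ba, uint32_t errq_sz,
                                    uint32_t ipkt_ba, uint32_t ipkt_sz,
                                    uint32_t resp_ba, uint32_t resp_sz,
                                    uint32_t resp_wrptr_ddr)
{
    reg_write(ERRBUFBA,            errq_ba);
    reg_write(ERRBUFSZ,            errq_sz);
    reg_write(IPKTERRQBA,          ipkt_ba);
    reg_write(IPKTERRQSZ,          ipkt_sz);
    reg_write(RESP_ERR_PKT_BUF_BA, resp_ba);
    reg_write(RESP_ERR_BUF_SZ,     resp_sz);
    reg_write(RESP_ERR_BUF_WRPTR,  resp_wrptr_ddr);
}
```

On real hardware the array would be a `volatile` pointer obtained from `mmap` or an in-kernel `ioremap`; the write ordering shown here mirrors the bullet order above.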
Chapter 6: ERNIC Software Flow
QP1 Creation
QP1 is a special QP in the ERNIC IP, designated for exchanging MAD packets with
remote hosts to establish connections for RC QPs. QP1 must be configured before any
other RC QP is created. See the InfiniBand Architecture Specification Volume 1,
Annex A16 RoCE and Annex A17 RoCE v2 [Ref 1] for more information on QP1. The following
steps are required to configure and enable QP1:
1. Allocate memory for the queues (RQ, SQ, CQ), and program the respective base address
registers.
Memory Registration
Memory must be registered with the ERNIC hardware before it is exchanged with remote
hosts for RDMA operations (incoming RDMA READ and incoming RDMA WRITE). The
following steps are required to register memory with the ERNIC:
RC QP Creation
Once QP1 is enabled, RC QPs can be created by exchanging CM MAD packets on
QP1. See the "Communication Management" chapter in the InfiniBand Architecture
Specification Volume 1, Annex A16 RoCE and Annex A17 RoCE v2 [Ref 1] for details on CM
MAD packets. To create any RC QP, perform the following steps:
1. Allocate memory for queues (RQ, SQ, CQ), and program the respective base address
registers.
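Step 1 can be sketched as follows. RQBAi is the only per-QP base-address register named in this guide, so the SQ/CQ register names, the software-side mirror arrays, and the 4 KB alignment are all assumptions:

```c
#include <assert.h>
#include <stdint.h>

#define MAX_QP 64

/* Software mirrors of the per-QP base-address registers. RQBAi appears in this
 * guide; the SQ/CQ equivalents are assumed names. */
static uint32_t rqba[MAX_QP], sqba[MAX_QP], cqba[MAX_QP];

/* Round a queue allocation up to an alignment boundary (power of two). */
static uint32_t align_up(uint32_t bytes, uint32_t align)
{
    return (bytes + align - 1) & ~(align - 1);
}

/* Program the RQ/SQ/CQ base addresses for one QP after allocating the queues. */
static void ernic_program_qp_queues(unsigned qpn, uint32_t rq_pa,
                                    uint32_t sq_pa, uint32_t cq_pa)
{
    rqba[qpn] = rq_pa; /* RQBAi */
    sqba[qpn] = sq_pa; /* assumed SQBAi */
    cqba[qpn] = cq_pa; /* assumed CQBAi */
}
```

A real driver would allocate DMA-coherent buffers, round each up with `align_up`, and write the resulting physical addresses to the registers.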
WQE SQ Posting
• The QP can now be used for posting work requests for RDMA operations. To do that,
prepare the WQE in the format described in Table 2-1 and copy it to the SQ memory.
• Ring the doorbell by incrementing the SQPIi value by 1 and writing it to the SQPIi
register.
To process received messages:
1. Read the RQWPTRDBADDi value. If it has changed from the previous value, process
the received data by reading it from RQBAi + offset. The offset is calculated from the
RQ buffer size and the value in RQWPTRDBADDi.
2. After processing the received messages, increment the RQCIi register value to indicate
to the hardware that the buffer is consumed and can be reused for further incoming
messages.
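The SQ-side steps above can be sketched as below. The WQE layout from Table 2-1 is not reproduced here, so an opaque 64-byte blob (an assumed size) stands in for it, and a plain variable stands in for the memory-mapped SQPIi register:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define WQE_SIZE 64 /* assumed WQE size; the real layout is in Table 2-1 */
#define SQ_DEPTH 16

static uint8_t  sq_mem[SQ_DEPTH * WQE_SIZE]; /* SQ memory */
static uint32_t sqpi_reg;                    /* stand-in for the SQPIi register */
static uint32_t sq_pi;                       /* software copy of the producer index */

/* Copy one WQE into the SQ, then ring the doorbell by writing the incremented
 * producer index to SQPIi. */
static void post_wqe(const void *wqe)
{
    memcpy(&sq_mem[(sq_pi % SQ_DEPTH) * WQE_SIZE], wqe, WQE_SIZE);
    sq_pi += 1;
    sqpi_reg = sq_pi; /* doorbell */
}
```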
Enabling QP HW Offload
RC QPs can be enabled for HW handshake mode, where an external hardware
application can send and receive RDMA messages on the QP. This offloads the software
posting of RDMA messages to the external hardware application. To enable HW handshake
mode on a QP, perform the following steps:
QP Deletion
An RC QP can be deleted when it is no longer in use (either because of a locally initiated
disconnect or a disconnect from the remote peer). Perform the following steps to
delete the QP and remove its configuration:
1. Wait for the SQ and outstanding queues to become empty. The status bits are in the
STATQPi register.
2. Wait for all completions to be received for the WQEs in the SQ. This is done by checking
the SQPIi and CQHEADi registers.
3. Enable the software override by writing to the XRNIC_ADV_CONF register, and disable
the QP by writing to the QPCONFi register.
4. Reset the QP pointers by writing 0 to RQWPTRDBADDi, SQPIi, CQHEADi, RQCIi,
STATRQPIDBi, STATCURSQPTRi, SQPSNi, LSTRQREQi, and STATMSN, and then disable
the QP and keep it in recovery mode by writing QPCONFi.
5. After resetting the pointers, disable the software override by writing to
XRNIC_ADV_CONF.
6. Free the memory allocated for the SQ, RQ, and CQs.
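The deletion sequence can be sketched as a driver-style teardown helper. The bit positions and the bounded-poll limit below are assumptions (the real field positions are in the STATQPi, QPCONFi, and XRNIC_ADV_CONF register descriptions), and plain variables stand in for the memory-mapped registers:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed bit positions -- confirm against the register reference. */
#define STATQP_SQ_EMPTY     (1u << 9)
#define STATQP_OSQ_EMPTY    (1u << 10)
#define ADVCONF_SW_OVERRIDE (1u << 0)
#define QPCONF_QP_EN        (1u << 0)
#define QPCONF_UNDER_RECOV  (1u << 1)

struct qp_regs {
    uint32_t statqp, sqpi, cqhead, rqci, rqwptrdbadd, qpconf;
};

static uint32_t xrnic_adv_conf;

static int ernic_delete_qp(struct qp_regs *qp)
{
    int spin;

    /* Steps 1-2: wait for the queues to drain and all completions to arrive. */
    for (spin = 0; spin < 1000; spin++) {
        uint32_t empty = STATQP_SQ_EMPTY | STATQP_OSQ_EMPTY;
        if ((qp->statqp & empty) == empty && qp->cqhead == qp->sqpi)
            break;
    }
    if (spin == 1000)
        return -1; /* queues never drained */

    /* Step 3: software override on, QP disabled. */
    xrnic_adv_conf |= ADVCONF_SW_OVERRIDE;
    qp->qpconf &= ~QPCONF_QP_EN;

    /* Step 4: reset the QP pointers and hold the QP in recovery mode. */
    qp->rqwptrdbadd = qp->sqpi = qp->cqhead = qp->rqci = 0;
    qp->qpconf |= QPCONF_UNDER_RECOV;

    /* Step 5: software override off again. Step 6, freeing the queue memory,
     * is left to the caller. */
    xrnic_adv_conf &= ~ADVCONF_SW_OVERRIDE;
    return 0;
}
```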
QP Fatal Recovery
The following steps describe the recovery process for a QP that has entered the FATAL
condition: how to clear the existing traffic on the QP and re-initialize it so that it can be
reused.
1. On detecting the QP FATAL interrupt, read the IPKTERRQBA register to get the
Incoming Pkt Error Status Queue base address written by the driver.
2. Read from this base address to determine which QP went into FATAL (Bits[31:16]) and
check the QP FATAL status code (Bits[15:0]); see Table 2-4.
3. Stop pushing any further SQ PI doorbells.
4. Set the "QP under recovery" bit in the QPCONFi register to 1.
5. Poll the STATQPi register until the "send Q empty" and "outstanding Q empty" bits
become 1.
6. Poll the CQHEADi register until its value is the same as the SQPIi register value.
7. Poll the RESPHNDSTS register until the "sq pici db check en" bit (bit 16) is set.
8. Set the QPCONFi register "QP enable" bit to 0 and "QP under recovery" bit to 1.
Then re-initialize the QP:
1. Poll the CQHEADi register until its value is the same as the SQPIi register value.
2. Poll the RESPHNDSTS register until the "sq pici db check en" bit (bit 16) is set.
3. Set the "SW override" bit in the XRNIC_ADV_CONF register to 1.
4. Initialize the following QP registers to 0:
° STATRQPIDBi
° STATRQBUFCAi
° STATRQBUFCAMSBi
° RQCIi
° STATCURSQPTRi
° SQPIi
° SQPSNi
° LSTRQREQi
° STATMSN
° CQHEADi
5. Poll the CQHEADi register until its value is 0.
6. Initialize the following registers with new values:
° SQPSNi
° LSTRQREQi
7. Initialize the following Ethernet-side registers:
° MACDESADDMSBi
° MACDESADDLSBi
° IPDESADDR1i
° IPDESADDR2i
° IPDESADDR3i
° IPDESADDR4i
8. Re-configure the IP version in the QPCONFi register.
9. Re-initialize the "RNR nack count" and "retry count" in the STATQPi register.
10. Set the ACCESSDESC register "access type" for that QP to 'b10.
11. Re-program the QPCONFi register by re-initializing fields such as "RQ interrupt enable",
"CQ interrupt enable", "PMTU", and "HW handshake disable". Selectively enable "CQE
write enable" to debug error completions, and re-initialize "RQ Buffer size".
12. Re-program the QPADVCONFi register by re-initializing "Traffic class", "Time to live",
and "Partition key".
13. Set "QP EN" to 1 and "QP under recovery" to 0 in the QPCONFi register.
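Step 2 of the status-check sequence reads 32-bit entries whose layout is given above (QP number in Bits[31:16], FATAL status code in Bits[15:0]). A minimal decoder, with a made-up entry value in the test:

```c
#include <assert.h>
#include <stdint.h>

/* Decode one Incoming Pkt Error Status Queue entry: Bits[31:16] name the QP
 * that went FATAL, Bits[15:0] carry the status code (see Table 2-4). */
static unsigned err_entry_qpn(uint32_t entry)  { return entry >> 16; }
static unsigned err_entry_code(uint32_t entry) { return entry & 0xFFFFu; }
```

For example, a hypothetical entry of 0x00050003 decodes as QP 5 with status code 3.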
Appendix A
Requesting ERNIC Support Questionnaire
Appendix B
Debugging
This appendix includes details about resources available on the Xilinx Support website and
debugging tools. Before reaching out to Xilinx Support, fill out the basic questionnaire
provided in Appendix A.
TIP: If the IP generation halts with an error, there might be a license issue. See License Checkers in
Chapter 1 for more details.
The steps to dump the different logs are explained in the following sections.
1. Change the connect command directive to the appropriate IP address of the smart cable
using the following command:
$ connect -host <smartlink IP address>
2. At the end of the test, dump all the counter information from the initiator RNIC using
the command:
$ ethtool -S <RNIC card name> > ethtool.log
$ cd /sys/class/infiniband/mlx5_0/ports/1/hw_counters/
$ for file in *; do echo -n "${file}:"; cat "$file"; done
Note: For more information on hardware debug counters for Mellanox, see DOC-2572 on the
Mellanox community.
Some quick debug checks to ensure that the system is clean are listed here.
• Check the ethtool.log file for any link failures or CRC failures. Any non-zero value in
these two counters points to an unstable link and can be the cause of failure. Two
snippets from the ethtool.log file are shown here.
Sample 1:
rx_wqe_err: 0
rx_mpwqe_filler: 0
rx_mpwqe_frag: 0
rx_buff_alloc_err: 0
rx_cqe_compress_blks: 0
rx_cqe_compress_pkts: 0
link_down_events_phy: 4
rx_out_of_buffer: 0
rx_vport_unicast_packets: 5
rx_vport_unicast_bytes: 422
tx_vport_unicast_packets: 10
tx_vport_unicast_bytes: 714
rx_vport_multicast_packets: 46
rx_vport_multicast_bytes: 7608
Sample 2:
rx_vport_rdma_multicast_bytes: 0
tx_vport_rdma_multicast_packets: 0
tx_vport_rdma_multicast_bytes: 0
tx_packets_phy: 320587336
rx_packets_phy: 324924215
rx_crc_errors_phy: 0
tx_bytes_phy: 27657272230
rx_bytes_phy: 467673988652
tx_multicast_phy: 59
tx_broadcast_phy: 32
rx_multicast_phy: 0
rx_broadcast_phy: 9
rx_in_range_len_errors_phy: 0
• Check the hw_counters on the initiator side. These counters give a picture of all fatal
and non-fatal errors seen by the initiator. A sample of the counters:
duplicate_request:0
implied_nak_seq_err:0
lifespan:10
local_ack_timeout_err:0
out_of_buffer:0
out_of_sequence:0
packet_seq_err:0
rnr_nak_retry_err:0
rx_atomic_requests:0
rx_read_requests:5
rx_write_requests:108307999
• Check the following register locations from the ERNIC register dump. The QP Status
(STATQPi) registers for all enabled QPs provide the status of each QP. Check whether
the QP FATAL status is set to 1 in any of the QP status registers. For example, the QP
Status register for QP 5 (the QP FATAL bit is set to 1):
0x84020688: 30620601
• If the QP is in the FATAL state, no transactions are performed from this QP and the QP
gets disconnected. Bits[22:16] in the same register provide the last AETH syndrome
received from the initiator. In many cases the QP goes into the FATAL state due to a NAK
syndrome received from the initiator. The NAK syndrome helps you understand the
failure seen by the initiator RNIC card. In the above example, the AETH syndrome
of 0x62 indicates a "Remote Access Error" from the initiator. The decoding of the AETH
syndrome is provided in the InfiniBand Architecture Specification Volume 1 (Release
1.2.1). For NAK code details in this specification, see Table 43: AETH Syndrome Fields and
Table 44: NAK Codes.
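The syndrome extraction described above (Bits[22:16] of STATQPi) can be sketched as:

```c
#include <assert.h>
#include <stdint.h>

/* Extract the last received AETH syndrome from a STATQPi value (Bits[22:16]).
 * For the dump above, 0x30620601 yields 0x62, a "Remote Access Error" NAK. */
static uint8_t statqp_aeth_syndrome(uint32_t statqp)
{
    return (uint8_t)((statqp >> 16) & 0x7Fu);
}
```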
• Check the incoming and outgoing NAK count registers (INNACKPKTCNT and
OUTNACKPKTCNT) at offsets 0x134 and 0x138 for the number of incoming NAK
syndromes seen and the number of NAK syndromes sent out. These numbers should
normally correlate with the hw_counters seen at the initiator. In general, not all
NAK codes are fatal; however, all NAK codes lead to retries and can lower the
overall performance of the system. A high number of NAK codes can be a cause for
concern.
• The total number of retries initiated by the target is available in the Retry Count
Status register (RETRYCNTSTS) at offset 0x140. Normally this number matches
the incoming NAK count. If this number is higher than the incoming NAK count
value, it might be due to timeouts. Timeouts happen when the responder (in this case,
the initiator RNIC) does not respond to a request within a given time. The timeout value
is configured in the Timeout Configuration register (TIMEOUTCONF). This timeout
interval is implemented as per the InfiniBand™ Architecture Specification Volume 1
(Release 1.2.1) clause C9-141. It might be worthwhile to try and increase the timeout
interval and check if the number of retries is reduced.
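For a rough feel of the interval, the InfiniBand local ACK timeout encoding is 4.096 µs × 2^n; whether TIMEOUTCONF uses this exact encoding is an assumption to confirm against the ERNIC register reference:

```c
#include <assert.h>

/* InfiniBand local ACK timeout for an encoded value n (n < 31):
 * 4.096 us * 2^n. For example, n = 14 gives about 67.1 ms. */
static double ack_timeout_us(unsigned n)
{
    return 4.096 * (double)(1u << n);
}
```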
• The ERNIC register at offset 0x6C (ERRBUFWPTR) is the error buffer write pointer. This
register gives the number of error packets received. Each error packet is stored at
the address provided in the Error Buffer Base Address (ERRBUFBA) register (offset
0x60). Each entry in this buffer includes an error syndrome. See ERNIC RX Path
for details; the rows highlighted in yellow there list the conditions that cause the QP to
go into the FATAL state.
Appendix C
Xilinx Resources
For support resources such as Answers, Documentation, Downloads, and Forums, see Xilinx
Support.
For a glossary of technical terms used in Xilinx documentation, see the Xilinx Glossary.
To open the Xilinx Documentation Navigator (DocNav):
• From the Vivado® IDE, select Help > Documentation and Tutorials.
• On Windows, select Start > All Programs > Xilinx Design Tools > DocNav.
• At the Linux command prompt, enter docnav.
Xilinx Design Hubs provide links to documentation organized by design tasks and other
topics, which you can use to learn key concepts and address frequently asked questions. To
access the Design Hubs:
• In the Xilinx Documentation Navigator, click the Design Hubs View tab.
• On the Xilinx website, see the Design Hubs page.
Note: For more information on Documentation Navigator, see the Documentation Navigator page
on the Xilinx website.
Appendix C: Additional Resources and Legal Notices
References
These documents provide supplemental material useful with this product guide:
1. InfiniBand Architecture Specification Volume 1, Annex A16 RoCE and Annex A17 RoCE v2
2. Vivado Design Suite User Guide: Designing IP Subsystems using IP Integrator (UG994)
3. Vivado Design Suite User Guide: Designing with IP (UG896)
4. Vivado Design Suite User Guide: Getting Started (UG910)
5. Vivado Design Suite User Guide: Logic Simulation (UG900)
6. Vivado Design Suite User Guide: Programming and Debugging (UG908)
7. Vivado Design Suite User Guide: Implementation (UG904)
8. Vivado Design Suite: AXI Reference Guide (UG1037)
Revision History
The following table shows the revision history for this document.