Multi Channel DMA Intel® FPGA IP for PCI Express* User Guide
Contents
4.3. Resets................................................................................................................52
4.4. Multi Channel DMA...............................................................................................53
4.4.1. Avalon-MM PIO Master..............................................................................53
4.4.2. Avalon-MM Write Master (H2D).................................................................. 54
4.4.3. Avalon-MM Read Master (D2H).................................................................. 55
4.4.4. Avalon-ST Source (H2D)........................................................................... 56
4.4.5. Avalon-ST Sink (D2H)...............................................................................56
4.4.6. User Event MSI-X Interface....................................................................... 57
4.4.7. User Functional Level Reset (FLR) Interface................................................. 57
4.5. Bursting Avalon-MM Master (BAM) Interface............................................................ 58
4.6. Bursting Avalon-MM Slave (BAS) Interface.............................................................. 59
4.7. Legacy Interrupt Interface.................................................................................... 61
4.8. MSI Interface...................................................................................................... 61
4.9. Config Slave Interface (RP only) ........................................................................... 62
4.10. Hard IP Reconfiguration Interface......................................................................... 62
4.11. Config TL Interface.............................................................................................63
4.12. Configuration Intercept Interface (EP Only)........................................................... 63
4.13. Data Mover Interface.......................................................................................... 64
4.13.1. H2D Data Mover Interface....................................................................... 64
4.13.2. D2H Data Mover Interface....................................................................... 65
4.14. Hard IP Status Interface..................................................................................... 65
4.15. Precision Time Management (PTM) Interface..........................................................66
5. Parameters (H-Tile)...................................................................................................... 68
5.1. IP Settings..........................................................................................................68
5.1.1. System Settings...................................................................................... 68
5.1.2. MCDMA Settings...................................................................................... 69
5.1.3. Device Identification Registers................................................................... 70
5.1.4. Multifunction and SR-IOV System Settings Parameters [Endpoint Mode].......... 71
5.1.5. Configuration, Debug and Extension Options................................................72
5.1.6. PHY Characteristics.................................................................................. 73
5.1.7. PCI Express / PCI Capabilities Parameters................................................... 73
5.2. Example Designs................................................................................................. 76
6. Parameters (P-Tile) (F-Tile) (R-Tile)............................................................................ 77
6.1. Top-Level Settings............................................................................................... 77
6.2. PCIe0 Settings.................................................................................................... 80
6.2.1. Base Address Register.............................................................................. 80
6.2.2. PCIe0 Configuration, Debug and Extension Options.......................................82
6.2.3. PCIe0 Device Identification Registers.......................................................... 86
6.2.4. PCIe0 PCI Express / PCI Capabilities..........................................................87
6.2.5. MCDMA Settings...................................................................................... 94
6.3. Example Designs................................................................................................. 97
6.4. Analog Parameters (F-Tile MCDMA IP Only)............................................................100
6.5. PCIe1 Settings...................................................................................................100
6.5.1. PCIe1 Configuration, Debug and Extension Options.....................................101
7. Designing with the IP Core......................................................................................... 102
7.1. Generating the IP Core....................................................................................... 102
7.2. Simulating the IP Core........................................................................................103
7.3. IP Core Generation Output - Quartus Prime Pro Edition........................................... 104
7.4. Systems Integration and Implementation.............................................................. 107
12. Revision History for the Multi Channel DMA Intel FPGA IP for PCI Express User
Guide..................................................................................................................... 180
1. Before You Begin

Term                Definition
D2H                 Device-to-Host
EP                  End Point
File (or Packet)    A group of descriptors defined by the SOF and EOF bits of the descriptor for streaming. At the Avalon-ST user interface, a file (or packet) is marked by means of sof/eof.
H2D                 Host-to-Device
HIP                 Hard IP
IP                  Intellectual Property
PD                  Packet Descriptor
RP                  Root Port
Note: R-Tile MCDMA does not support PIPE mode simulations in the 24.2 release.
2. Introduction
Figure 1. Multi Channel DMA IP for PCI Express Usage in Server Hardware Infrastructure
[Block diagram: VM clients on the host communicate through the Virtual Machine Manager and Root Complex over the PCIe link to the PCIe HIP and MCDMA inside the Intel FPGA. Each DMA channel (Ch. 0 through Ch. n) has an H2D queue with its QCSR and a D2H queue with its QCSR, and the MCDMA connects to the user logic through Avalon-MM/Avalon-ST interfaces.]
The Multi Channel DMA IP for PCI Express enables you to efficiently transfer data
between the host and device. The Multi Channel DMA IP for PCI Express supports
multiple DMA channels between the host and device over the underlying PCIe link. A
DMA channel consists of an H2D (host-to-device) and a D2H (device-to-host) queue pair.
As shown in the figure above, the Multi Channel DMA IP for PCI Express can be used
in a server’s hardware infrastructure to allow communication between various VM-
clients and their FPGA-device based counterparts. The Multi Channel DMA IP for PCI
Express operates on descriptor-based queues set up by driver software to transfer
data between local FPGA and host. The Multi Channel DMA IP for PCI Express’s control
logic reads the queue descriptors and executes them.
The Multi Channel DMA IP for PCI Express integrates the Intel® PCIe Hard IP and
interfaces with the host Root Complex via the PCIe serial lanes. On the user logic
interface, Avalon-MM/Avalon-ST interfaces allow easy integration of the Multi Channel
DMA IP for PCI Express with other Platform Designer components.
Besides DMA functionality, the Multi Channel DMA IP for PCI Express enables standalone
Endpoint or Root Port functionality with Avalon-MM interfaces to the user logic. This
functionality is described in more detail in the Functional Description chapter.
• Supports Precision Time Measurement (PTM). (Only available for R-Tile MCDMA IP
Endpoint Ports 0 and 1)
• MSI Interrupt in BAS (available for all H/R/F/P-Tile MCDMA IPs). It is not supported in R-Tile MCDMA IP x4 Endpoint Ports 2 and 3.
• H2D address and payload size alignment to byte granularity for AVST
• Ports subdivided into 2x8 can run independent and concurrent MCDMA and AVMM IP instances for P-Tile MCDMA (Stratix 10 DX) and R-Tile MCDMA (Agilex 7) devices.
— Example: Port0 -> MCDMA AVMM DMA DE & Port1 -> BAM+MCDMA Pkt Gen/Checker DE.
— Each instance runs independently with a separate PERST.
— SCTH support
• Ports subdivided into 4x4 can run independent and concurrent MCDMA and AVMM IP instances for R-Tile MCDMA Intel Agilex 7 devices.
— Examples: Port0 -> MCDMA, Port1 -> BAM+MCDMA, Port2 -> BAS & Port3 -> BAM+BAS.
— Each instance runs independently with a separate PERST.
— Only SCT support
• Example Design Simulation is only supported on Port0. Simulation is not supported on Port1, Port2, and Port3.
• Maximum payload size supported:
— Stratix 10 GX and Stratix 10 MX devices: 512 bytes
— Stratix 10 DX and Agilex 7 devices: 512 / 256 / 128 bytes
Device Family    Support Level
Stratix 10       Final
Agilex 7         Preliminary
Agilex 9         Preliminary
Related Information
Timing and Power Models
Reports the default device support levels in the current version of the Intel Quartus Prime Pro Edition software.
Related Information
Intel Quartus Standard for Timing Closure and Optimization
Use this link for the Quartus Prime Pro Edition Software.
User Mode    Link Configuration                                    DMA Channels   ALMs (H-Tile / P-Tile)   Logic Registers (H-Tile / P-Tile)   M20Ks (H-Tile / P-Tile)
MCDMA        PCIe Gen3 x16 for H-Tile / PCIe Gen4 x16 for P-Tile   256            44,034 / 37,502          109,399 / 99,491                    532 / 512
BAM_MCDMA    PCIe Gen3 x16 for H-Tile / PCIe Gen4 x16 for P-Tile   256            48,447 / 41,835          120,555 / 110,600                   616 / 596
BAM          PCIe Gen3 x16 for H-Tile / PCIe Gen4 x16 for P-Tile   n/a            25,162 / 17,567          53,976 / 42,111                     307 / 285
BAS          PCIe Gen3 x16 for H-Tile / PCIe Gen4 x16 for P-Tile   n/a            26,818 / 20,126          61,369 / 49,486                     257 / 236
BAM+BAS      PCIe Gen3 x16 for H-Tile / PCIe Gen4 x16 for P-Tile   n/a            33,655 / 25,104          78,809 / 65,025                     372 / 346
MCDMA        PCIe Gen3 x8 for H-Tile / PCIe Gen4 x8 for P-Tile     256            22,914 / 25,822          61,888 / 69,774                     397 / 372
BAM_MCDMA    PCIe Gen3 x8 for H-Tile / PCIe Gen4 x8 for P-Tile     256            25,329 / 28,320          68,691 / 76,285                     452 / 431
BAM          PCIe Gen3 x8 for H-Tile / PCIe Gen4 x8 for P-Tile     n/a            8,257 / 9,938            21,171 / 27,441                     199 / 177
BAS          PCIe Gen3 x8 for H-Tile / PCIe Gen4 x8 for P-Tile     n/a            9,227 / 11,374           24,973 / 31,260                     169 / 149
BAM+BAS      PCIe Gen3 x8 for H-Tile / PCIe Gen4 x8 for P-Tile     n/a            12,530 / 14,563          34,508 / 40,592                     248 / 226
Table 7. Intel Stratix 10 H-Tile and P-Tile PCIe x16 [1 port Avalon-ST]
MCDMA (PCIe Gen3 x16 for H-Tile / PCIe Gen4 x16 for P-Tile), DMA Channels 1 / 32 / 64:
  ALMs:            H-Tile 47,866 / 50,093 / 52,951      P-Tile 38,634 / 41,181 / 43,852
  Logic Registers: H-Tile 117,470 / 122,854 / 128,771   P-Tile 104,793 / 110,305 / 115,833
  M20Ks:           H-Tile 560 / 578 / 601               P-Tile 536 / 555 / 576
BAM_MCDMA (PCIe Gen3 x16 for H-Tile / PCIe Gen4 x16 for P-Tile), DMA Channels 2 / 32 / 64:
  ALMs:            H-Tile 51,976 / 54,300 / 57,132      P-Tile 42,155 / 43,745 / 45,118
  Logic Registers: H-Tile 128,208 / 133,935 / 139,874   P-Tile 113,660 / 117,292 / 120,406
  M20Ks:           H-Tile 643 / 662 / 684               P-Tile 615 / 625 / 638
MCDMA           Gen4 x16    256      33,805    37,445    97,557     103,143    512    521
BAM_MCDMA       Gen4 x16    256      38,546    42,198    108,328    113,886    595    605
BAM_BAS_MCDMA   Gen4 x16    2,048    43,907    44,000    139,591    139,552    855    855
BAM             Gen4 x16    n/a      17,246    20,780    42,097     47,680     285    295
BAS             Gen4 x16    n/a      19,164    22,677    49,327     54,854     236    246
BAM+BAS         Gen4 x16    n/a      24,955    28,562    64,885     70,342     346    356
Table 11. Agilex 7 P-Tile and F-Tile PCIe x16 [1 port Avalon-ST]
MCDMA (Gen4 x16), DMA Channels 1 / 32 / 64:
  ALMs:            P-Tile 33,913 / 36,373 / 39,480      F-Tile 37,567 / 40,071 / 43,078
  Logic Registers: P-Tile 102,712 / 108,215 / 114,039   F-Tile 108,303 / 113,764 / 119,553
  M20Ks:           P-Tile 537 / 554 / 576               F-Tile 546 / 564 / 587
BAM_MCDMA (Gen4 x16), DMA Channels 2 / 32 / 64:
  ALMs:            P-Tile 38,247 / 39,448 / 41,041      F-Tile 41,880 / 43,115 / 44,686
  Logic Registers: P-Tile 112,445 / 115,445 / 118,806   F-Tile 118,007 / 120,995 / 124,434
  M20Ks:           P-Tile 620 / 625 / 639               F-Tile 629 / 636 / 648
MCDMA (Gen4 x8), DMA Channels 1 / 32 / 64:
  ALMs:            P-Tile 22,978 / 25,343 / 28,399      F-Tile 24,705 / 27,066 / 30,219
  Logic Registers: P-Tile 72,007 / 77,499 / 83,182      F-Tile 73,565 / 79,005 / 84,731
  M20Ks:           P-Tile 397 / 413 / 436               F-Tile 407 / 424 / 446
BAM_MCDMA (Gen4 x8), DMA Channels 2 / 32 / 64:
  ALMs:            P-Tile 24,790 / 26,083 / 27,550      F-Tile 26,541 / 27,776 / 29,334
  Logic Registers: P-Tile 77,532 / 80,585 / 84,057      F-Tile 79,104 / 82,126 / 85,545
  M20Ks:           P-Tile 455 / 461 / 473               F-Tile 465 / 470 / 483
Table 13. Agilex 7 P-Tile and F-Tile Data Mover Only User Mode
User Mode         Link Conf   DMA Channels   ALMs (P-Tile / F-Tile)   Logic Registers (P-Tile / F-Tile)   M20Ks (P-Tile / F-Tile)
Data Mover Only   Gen4 x16    n/a            31,528 / 41,514          83,773 / 91,219                     522 / 532
Data Mover Only   Gen4 x8     n/a            18,163 / 25,718          56,890 / 60,391                     381 / 391
Table 14. Intel Stratix 10 P-Tile Data Mover Only User Mode
User Mode Link DMA Channels ALMs Logic Registers M20Ks
Configuration
A change in:
• X indicates a major revision of the IP. If you update your Quartus Prime software,
you must regenerate the IP.
• Y indicates the IP includes new features. Regenerate your IP to include these new
features.
• Z indicates the IP includes minor changes. Regenerate your IP to include these
changes.
Table 15. Release Information for the Multi Channel DMA IP for PCI Express Core
Item                     Description
Quartus Prime Version    Quartus Prime Pro Edition 24.3.1 Software Release
3. Functional Description
Figure 2. Multi Channel DMA IP for PCI Express Block Diagram
Not all the blocks co-exist in a design. Required functional blocks are enabled based
on the user mode that you select when you configure the IP. The following table shows
valid user modes that Multi Channel DMA IP for PCI Express supports. Each row
indicates a user mode with required block(s).
Port Type   User Mode         MCDMA   BAM   BAS   Config Slave   Data Mover
Endpoint    MCDMA             √       ×     ×     ×              ×
            BAM               ×       √     ×     ×              ×
            BAS               ×       ×     √     ×              ×
            BAM+BAS           ×       √     √     ×              ×
            BAM+MCDMA         √       √     ×     ×              ×
            BAM+BAS+MCDMA     √       √     √     ×              ×
            Data Mover Only   ×       √     ×     ×              √
Root Port   BAS               ×       ×     √     √              ×
            BAM+BAS           ×       √     √     √              ×
Note: Data mover only mode is not available for x4 topology in P/F/R-Tile MCDMA IPs.
The MCDMA engine operates on software DMA queues to transfer data between the local FPGA and the host. The elements of each queue are software descriptors that are written by the driver/software. Hardware reads the queue descriptors and executes them. Hardware can support up to 2K DMA channels. For each channel, separate queues are used for read and write DMA operations.
Note: MCDMA requires the source and destination addresses to be 64-byte aligned in the D2H direction. This requirement may be removed in a future release.
There are two modes of usage for the H2DDM: queue descriptors fetching and H2D
data payload transfer.
When used for descriptor fetching, the destination of the completion data is internal
descriptor FIFOs where descriptors are stored before being dispatched to the H2DDM
or D2HDM for actual data transfer.
When used for data payload transfer, the H2DDM generates MemRd TLPs based on descriptor information such as the PCIe address (source), the data size, and the MRRS value, as follows (see the sketch after this list):
• A first MemRd up to the next MRRS address boundary
• Followed by MemRds of full MRRS size
• A last MemRd for the remaining partial MRRS
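The splitting rule in the list above can be modeled with a short host-side sketch. This is illustrative only and is not the IP's RTL: the function name and example values are assumptions, and the sketch encodes just the rule stated in the bullets.

/* Illustrative only: splits one H2D transfer into MemRd requests following
 * the MRRS rule described above. Not the IP's RTL. */
#include <stdio.h>
#include <stdint.h>

static void split_into_memrd(uint64_t pcie_addr, uint32_t size, uint32_t mrrs)
{
    while (size > 0) {
        /* Distance from the current address to the next MRRS boundary */
        uint32_t to_boundary = mrrs - (uint32_t)(pcie_addr % mrrs);
        uint32_t req = (size < to_boundary) ? size : to_boundary;

        printf("MemRd addr=0x%llx len=%u bytes\n",
               (unsigned long long)pcie_addr, req);

        pcie_addr += req;
        size      -= req;
    }
}

int main(void)
{
    /* Example: a 1 KB read starting 64 bytes before a 512-byte MRRS boundary:
     * first a 64-byte read to the boundary, then one full 512-byte read,
     * then the remaining 448 bytes. */
    split_into_memrd(0x1000 - 64, 1024, 512);
    return 0;
}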
The received completions are re-ordered to ensure the read data is delivered to user
logic in order.
When a descriptor is completed, that is, all read data has been received and forwarded to the Avalon-MM Write Master / Avalon-ST Source interface, the H2DDM performs the housekeeping tasks that include:
MSI-X and Writeback are memory writes to the host issued via the D2HDM to avoid race conditions due to out-of-order writes. Based on the updated status, software can proceed with releasing the transmit buffer and reusing the descriptor ring entries.
In AVMM mode, the D2HDM sends a series of AVMM reads via the master port based
on PCIe address, MPS, and DMA transfer size. The AVMM read is generated as follows:
• A first AVMM read up to the next 64-byte address boundary. Multiple bursts are read on the first AVMM read if:
— the AVMM address is 64-byte aligned
— the total payload count of the descriptor is 64-byte aligned and less than the max supported MPS
• Followed by AVMM reads of the max supported MPS size
• A last AVMM read of the remaining size
In AVST mode, D2HDM AVST sink de-asserts ready when descriptors are not available.
• Host sets up software descriptors for a port. Max payload count can be up to 1MB.
SOF/EOF fields in the descriptor may not be set by the Host.
— D2HDM uses descriptor update sequence to update SOF, EOF, Rx payload
count fields in the software descriptor at Host location through a Memory
Write request
• AVST d2h_st_sof_i signal assertion triggers a descriptor update sequence by
D2HDM to mark start of AVST frame.
— D2HDM issues a MWr to set the SOF field in the descriptor
— WB/MSI-X, if set in the descriptor, is issued
• AVST d2h_st_eof_i signal assertion triggers a descriptor update sequence by
D2HDM to mark end of AVST frame. The descriptor update sequence is as follows:
— D2HDM terminates the descriptor at d2h_st_eof_i and initiates a descriptor
update sequence.
— During descriptor update sequence, a MWr is issued to set EOF field in the
descriptor and update Rx payload count field with total bytes transferred.
— WB/MSI-X if set in descriptor, is issued
• The descriptor immediately after EOF sequence, is considered as start of next
AVST data frame and initiates a descriptor update sequence to set SOF field.
When a descriptor is completed, that is, all DMA data corresponding to the descriptor
has been sent to the host, the D2HDM performs housekeeping tasks that include:
Based on the updated status, software can proceed with releasing the receive buffer
and reuse the descriptor ring entries.
When you enable multiple channels over a single port in AVST mode, the MCDMA IP limits the number of channels that can be active or can prefetch descriptors for data movement, to avoid implementing a large memory that holds descriptors for all channels simultaneously.
The descriptor FIFO is designed to hold descriptors only for a defined number of channels. When data is received on the user interface (AVST port), there is no handshake between the Host SW and the User Logic through the MCDMA IP to control the order of descriptor fetch or data movement across multiple channels. To enable easy access to descriptors of multiple channels, the MCDMA IP implements segmentation of the descriptor FIFO.
In AVST mode, when data is received for a channel that does not have Tail pointer
(TID) updates from the Host, the corresponding AVST packet from SOF to EOF is
dropped.
Note: D2H does not support Tail pointer updates on a disabled channel in the current IP
version. The host software must make sure a channel is enabled before doing Tail
pointer updates.
When all the segments are occupied and D2HDM receives data for a channel that does
not have descriptors prefetched, the least recently used segment is cleared to
accommodate descriptors fetched for this new channel. Descriptors in the least
recently used segment that were cleared, are refetched whenever D2HDM receives
data for the corresponding channel.
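The segment replacement described above behaves like a least-recently-used policy. The following is a minimal illustrative model, assuming a small fixed segment count and simple structures; it is not the IP's implementation.

/* Illustrative LRU model of the segmented D2H descriptor FIFO described
 * above. Segment count and data structures are assumptions, not IP design. */
#include <stdio.h>
#include <stdint.h>

#define NUM_SEGMENTS 8

struct segment {
    int      channel;    /* channel whose descriptors are held, -1 if empty */
    uint64_t last_used;  /* timestamp of last data received for the channel */
};

static struct segment segs[NUM_SEGMENTS];
static uint64_t now;

/* Return the segment holding descriptors for 'channel'; if none, evict the
 * least recently used segment so descriptors can be fetched for it. */
static int segment_for_channel(int channel)
{
    int lru = 0;
    for (int i = 0; i < NUM_SEGMENTS; i++) {
        if (segs[i].channel == channel) {      /* descriptors already prefetched */
            segs[i].last_used = ++now;
            return i;
        }
        if (segs[i].last_used < segs[lru].last_used)
            lru = i;
    }
    /* All segments hold other channels: clear the LRU one. The evicted
     * channel's descriptors are refetched later when its data arrives again. */
    segs[lru].channel   = channel;
    segs[lru].last_used = ++now;
    return lru;
}

int main(void)
{
    for (int i = 0; i < NUM_SEGMENTS; i++) segs[i].channel = -1;
    printf("ch 5 -> segment %d\n", segment_for_channel(5));
    printf("ch 9 -> segment %d\n", segment_for_channel(9));
    printf("ch 5 -> segment %d\n", segment_for_channel(5)); /* hit, no eviction */
    return 0;
}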
3.1.3. Descriptors
A DMA channel supporting Multi Channel DMA data movement consists of a pair of descriptor queues: one H2D descriptor queue and one D2H descriptor queue. Descriptors are arranged contiguously within a 4 KB page.
Each descriptor is 32 bytes in size. The descriptors are kept in host memory in a
linked-list of 4 KB pages. For a 32 byte descriptor and a 4 KB page, each page
contains up to 128 descriptors. The last descriptor in a 4 KB page must be a “link
descriptor” – a descriptor containing a link to the next 4 KB page with the link bit set
to 1. The last entry in the linked list must be a link pointing to the base address
programmed in the QCSR, in order to achieve a circular buffer containing a linked-list
of 4 KB pages. The figure below shows the descriptor linked list.
[Descriptor linked-list figure: each 4 KB page holds descriptors 1-128, 129-256, 257-384, and so on (the descriptor index always starts from 1). The first page base is Q_START_ADDR_L/H from the QCSR. Within a page, descriptors have Link=0 except the last descriptor (for example 128, 256, 384, ..., n), which has Link=1 and points to the next 4 KB page; the last page links back to the QCSR base address.]
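In software terms, the page layout above can be sketched as follows. This is an illustrative host-side model: the struct shows only the 32-byte size and the link-bit relationship described in the text (128 descriptors per 4 KB page, with the last descriptor of each page linking to the next page or back to the QCSR base), not the exact descriptor bit fields.

/* Illustrative host-side model of the descriptor ring described above. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>

#define PAGE_SIZE       4096u
#define DESC_SIZE       32u
#define DESCS_PER_PAGE  (PAGE_SIZE / DESC_SIZE)   /* 128 */

struct desc {
    uint64_t addr;      /* data address, or next-page address when link == 1 */
    uint32_t len;
    uint32_t flags;     /* bit 0: link descriptor */
    uint8_t  pad[16];   /* pad to 32 bytes; real fields differ */
};

/* Build a circular linked list of 'num_pages' descriptor pages starting at
 * the base address programmed in Q_START_ADDR_L/H. */
static struct desc *build_ring(unsigned num_pages, uint64_t q_start_addr)
{
    struct desc *ring = aligned_alloc(PAGE_SIZE, (size_t)num_pages * PAGE_SIZE);
    assert(sizeof(struct desc) == DESC_SIZE && ring != NULL);
    memset(ring, 0, (size_t)num_pages * PAGE_SIZE);

    for (unsigned p = 0; p < num_pages; p++) {
        struct desc *link = &ring[p * DESCS_PER_PAGE + (DESCS_PER_PAGE - 1)];
        link->flags = 1;                                 /* link bit set */
        link->addr  = (p == num_pages - 1)
                      ? q_start_addr                     /* close the circle */
                      : q_start_addr + (uint64_t)(p + 1) * PAGE_SIZE;
    }
    return ring;
}

int main(void)
{
    struct desc *ring = build_ring(4, 0x10000000ULL);
    printf("%u descriptors per 4 KB page, %zu-byte descriptor\n",
           DESCS_PER_PAGE, sizeof(struct desc));
    free(ring);
    return 0;
}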
Software and hardware communicate and manage the descriptors using tail index
pointer (Q_TAIL_POINTER) and head index pointer (Q_HEAD_POINTER) QCSR
registers as shown in the following figure. The DMA starts when software writes the
last valid descriptor index to the Q_TAIL_POINTER register.
[Figure: descriptor ring indices DESC_IDX 1 through n. Q_HEAD_POINTER points to the descriptor last fetched by hardware; Q_TAIL_POINTER points to the last valid descriptor added by software.]
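A minimal driver-side sketch of this handshake follows. The QCSR register names match the text, but the MMIO accessors, ring depth, and index handling are simplifications (the IP's descriptor index starts at 1).

/* Illustrative driver-side view of the Q_TAIL_POINTER / Q_HEAD_POINTER
 * handshake. mmio_write32()/mmio_read32() stand in for real register
 * accessors; the 0-based modulo arithmetic is a simplification. */
#include <stdio.h>
#include <stdint.h>

#define RING_DEPTH 512u                 /* total descriptors in the ring (example) */

#define Q_TAIL_POINTER 0                /* last valid descriptor added by SW */
#define Q_HEAD_POINTER 1                /* descriptor last fetched by HW     */

static uint32_t qcsr_model[2];          /* stands in for the QCSR registers  */
static void     mmio_write32(unsigned reg, uint32_t v) { qcsr_model[reg] = v; }
static uint32_t mmio_read32(unsigned reg)              { return qcsr_model[reg]; }

/* Software posts 'count' new descriptors, then starts the DMA by writing the
 * index of the last valid descriptor to Q_TAIL_POINTER. */
static void post_descriptors(uint32_t count)
{
    uint32_t tail = (mmio_read32(Q_TAIL_POINTER) + count) % RING_DEPTH;
    mmio_write32(Q_TAIL_POINTER, tail);
}

/* Descriptors posted by software but not yet fetched by hardware. */
static uint32_t descriptors_outstanding(void)
{
    uint32_t tail = mmio_read32(Q_TAIL_POINTER);
    uint32_t head = mmio_read32(Q_HEAD_POINTER);
    return (tail + RING_DEPTH - head) % RING_DEPTH;
}

int main(void)
{
    post_descriptors(8);
    printf("outstanding descriptors: %u\n", descriptors_outstanding());
    return 0;
}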
MCDMA IP supports the following alignment modes for the descriptor source/
destination address and payload count fields.
Default alignment (Enable address byte aligned = FALSE)
• Address: AVMM DWORD aligned; AVST 64-byte aligned (or full data width aligned)
• Payload count: AVMM DWORD aligned; AVST 64-byte aligned (exception: last descriptor of a packet/file)
• Descriptors are 32-byte aligned
• Low resource utilization
Unaligned (or Byte aligned) Access (Enable address byte aligned = TRUE)
• Address: byte aligned
• Payload count: byte aligned
• Supported for H2D AVST Interface only
• Descriptors are 32-byte aligned
• DMA read requests use the PCIe First DWORD Byte Enable and Last DWORD Byte Enable to support byte granularity
• High resource utilization
8 Byte Metadata
In Avalon Streaming mode, once you select 8-byte metadata support during IP generation, the source and destination address fields in the existing descriptor structure are repurposed for metadata support. The following fields of the existing descriptor defined above have revised properties.
Table 19.
Name Width Description
3.1.3.3. MSI-X/Writeback
The MSI-X and Writeback blocks update the host with the current processed queue's head pointer and an interrupt. Apart from the global MSI-X Enable and Writeback Enable, there is a provision to selectively enable or disable MSI-X and Writeback on a per-descriptor basis. Applications can use this feature to throttle MSI-X/Writeback generation.
The table below shows the relation between global and per-descriptor MSI-X/
Writeback Enable.
Table 20. Multi Channel DMA Per-descriptor Enable vs. Global MSI-X/Writeback Enable
Global Enable Per-descriptor Enable MSI-X/Writeback Generation
1 1 On
1 0 Off
0 1 Off
0 0 Off
If enabled, a Writeback is sent to the host to update the status (completed descriptor
ID) stored in Q_CONSUMED_HEAD_ADDR location. In addition, for D2H streaming DMA,
an additional MWr TLP is issued to the D2H descriptor itself when the IP’s Avalon-ST
sink interface has received an sof/eof from the user logic. It updates the D2H
descriptor packet information fields such as start of a file/packet(SOF), end of a file/
packet(EOF), and received payload count (RX_PYLD_CNT).
Note: Do not attempt to perform 32-bit transactions on PIO interface. Only 64-bit
transactions are supported.
The Avalon-MM PIO Master is present only if you select Multi Channel DMA User
Mode for MCDMA Settings in the IP Parameter Editor GUI. The Avalon-MM PIO
Master is always present irrespective of the Interface type (Avalon-ST/Avalon-MM)
that you select.
The Avalon-MM Write Master is used to write H2D DMA data to the Avalon-MM slave in
the user logic through the memory-mapped interface. The Write Master can issue
AVMM write commands for up to 8/16/32 burst count for 512/256/128 data-width
respectively. The waitrequestAllowance of this port is enabled, allowing the
master to transfer up to N additional write command cycles after the waitrequest
signal has been asserted. Value of <N> for H2D AVMM Master is as follows:
• 512-bit data-width is 16
• 256-bit data-width is 32
• 128-bit data-width is 64
The Avalon-MM Read Master is used to read D2H DMA data from the Avalon-MM slave in the user logic through the memory-mapped interface. The Read Master can issue AVMM read commands with a burst count of up to 8. The waitrequestAllowance of this port is enabled, allowing the master to transfer up to N additional read command cycles after the waitrequest signal has been asserted. The value of <N> for the D2H AVMM Read Master is as follows:
• 512-bit data-width is 16
• 256-bit data-width is 32
• 128-bit data-width is 64
When you select AVST 1 port mode, the IP provides 1 AVST Source and Sink port for
DMA. In this mode, you can enable up to 2K DMA channels.
Table 21. IP Parameters specific to D2H Descriptor Fetch in Avalon-ST 1 Port Mode
IP GUI Parameter        Description                   Value for MCDMA
D2H Prefetch Channels   Number of prefetch channels   8, 16, 32, 64, 128, 256
Note: In the current Quartus Prime release, the D2H Prefetch Channels value follows the total number of DMA channels that you select, up to 256 total channels. When the total number of channels selected is greater than 256, the D2H Prefetch Channels value is fixed at 64. Resource utilization increases with the number of D2H prefetch channels.
For details about these parameters, refer to the D2H Data Mover section.
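The note above amounts to the following selection rule. This is a sketch of the documented behavior for the supported GUI values, not IP code.

/* Sketch of the D2H prefetch-channel rule in the note above: the prefetch
 * channel count follows the selected total up to 256 channels; above 256 it
 * is fixed at 64. Purely illustrative. */
#include <stdio.h>

static unsigned d2h_prefetch_channels(unsigned total_dma_channels)
{
    return (total_dma_channels <= 256) ? total_dma_channels : 64;
}

int main(void)
{
    printf("%u %u %u\n",
           d2h_prefetch_channels(64),     /* 64  */
           d2h_prefetch_channels(256),    /* 256 */
           d2h_prefetch_channels(512));   /* 64  */
    return 0;
}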
When streaming the DMA data, the packet (file) boundary is indicated by the SOF and EOF bits of the descriptor and the corresponding sof and eof signals of the Avalon-ST interface. Channel interleaving is not supported. A channel switch on the AVST interface can only happen on a packet boundary.
Packet Boundary Descriptor Field AVST Source (H2D) Signal AVST Sink (D2H) Signal
In Avalon-ST 1 port mode, a channel switch can only happen at packet boundary.
3.1.6.3. Metadata
When streaming DMA data, you can optionally enable 8-byte metadata that carries metadata for the user application. When enabled, the H2D descriptor destination address field and the D2H descriptor source address field are replaced with metadata.
With metadata enabled, the Avalon-ST SOF qualifies only the metadata and does not carry any data. Since the metadata size is always 8 bytes with predefined properties, the user side does not expect an empty signal.
2’b00 and 2’b10 to address the Descriptor completion related interrupts (DMA
operation MSI-X) on both the paths.
Note: MCDMA R-Tile IP Port 2 and Port 3 in Endpoint Mode do not support User MSI-X
feature.
The following table shows the 4 MB space mapped for each function in the PCIe config space through BAR0.
MSI-X (Table and PBA)    22'h10_0000 - 22'h1F_FFFF    1 MB    MSI-X Table and PBA space
Note: For more information on Control registers, refer to Control Register (GCSR) on page
151
You can map any BAR register other than BAR0 of the physical function to the BAM side for the user application. The BAM interface address mapping is as follows:
BAM address = {vf_active, pf, vf, bar_num, bam_addr}
Example: If a transaction is received for BAR3 (max aperture of 4 GB) of PF2/VF1, where the user has enabled only 3 PFs and 25 VFs, the BAM address is {1'b1, 2'b10, 5'b00001, 3'b011, bam_addr[31:0]}.
Note: For the Root Port: In the Root Port mode, the AVMM address output from BAM is the
same as the one received on the Hard IP AVST.
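The concatenation above can be reproduced with a small bit-packing sketch. The field widths below are taken from the example (2 bits of PF for 3 enabled PFs, 5 bits of VF for 25 enabled VFs, and a 32-bit offset for the 4 GB BAR3 aperture); in a real design the widths depend on the enabled PF/VF counts and the BAR aperture, and the function name is illustrative.

/* Illustrative packing of the BAM address {vf_active, pf, vf, bar_num, bam_addr}
 * using the field widths from the example above. The widths are design
 * dependent; this is not a fixed IP definition. */
#include <stdio.h>
#include <stdint.h>

static uint64_t bam_address(unsigned vf_active, unsigned pf, unsigned vf,
                            unsigned bar_num, uint32_t bam_addr)
{
    const unsigned PF_W = 2, VF_W = 5, BAR_W = 3, ADDR_W = 32;

    uint64_t a = vf_active & 1u;
    a = (a << PF_W)   | (pf      & ((1u << PF_W)  - 1));
    a = (a << VF_W)   | (vf      & ((1u << VF_W)  - 1));
    a = (a << BAR_W)  | (bar_num & ((1u << BAR_W) - 1));
    a = (a << ADDR_W) | bam_addr;
    return a;
}

int main(void)
{
    /* Transaction to BAR3 of PF2/VF1 with bam_addr = 0: the expected prefix
     * above the 32-bit offset is {1'b1, 2'b10, 5'b00001, 3'b011} = 0x60B. */
    uint64_t a = bam_address(1, 2, 1, 3, 0);
    printf("bam address = 0x%llx (prefix = 0x%llx)\n",
           (unsigned long long)a, (unsigned long long)(a >> 32));
    return 0;
}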
The BAS supports both 256-bit and 512-bit data widths to achieve the bandwidths required for Gen4 x8 and Gen4 x16. It supports bursts of up to 512 bytes and multiple outstanding read requests. By default, only 64 outstanding non-posted (NP) requests are supported.
Completion Re-ordering
The Avalon-MM BAS interface is a slave interface to the user Avalon-MM master. The user AVMM can initiate AVMM reads to the host interface, and these translate to BAS non-posted packet interface signals. The BAS module keeps track of the initiated NP requests and matches them against the completions received from the PCIe on the scheduler completion packet interface.
Since the completions from the PCIe can arrive out of order, the completion re-ordering module ensures the returned completions are re-ordered against the pending requests and sent in order on the AVMM interface, because AVMM does not track out-of-order completions.
Note: The BAS AVMM Slave waitrequestAllowance is 0, which means the BAS cannot accept any additional command cycles after waitrequest is asserted.
When you enable MSI Capability in Endpoint BAS or BAM+BAS mode, the IP core exposes an MSI request interface to the user logic. When the user issues an MSI request through this interface, the internal interrupt controller receives inputs such as the function number and MSI number from the user logic and generates an AVMM write to the BAS module as shown in the figure below. The BAS receives the MSI signaling from the interrupt controller and generates the MSI.
[Block diagram: the PCIe Link connects to the PCIe HIP, whose Rx/Tx AVST interfaces connect to the HIP Interface/Scheduler/Arbiter. The MSI Interrupt Controller receives the tl_cfg* bus (function, MSI address/data) and holds per-function MSI Address/Data registers (Func 0 through Func 7). The user interrupt interface requests MSI generation; the interrupt controller signals the BAS (AVMM Master), which forms and sends the MSI. The BAM AVMM Slave is also shown.
* The tl_cfg bus provides the MSI Capability register and various config space register values.]
Note: Endpoint MSI Interrupt is also available for H-Tile MCDMA IP starting with Quartus
Prime Pro Edition 23.4 version.
Note: Endpoint MSI Interrupt is not supported for R-Tile MCDMA IP x4 Endpoint Ports 2 and
3.
The MSI Interrupt controller also gets the MSI Mask bits from the tl_cfg interface. If
the MSI is masked for a specific function, the MSI Interrupt controller does not send
the MSI for that function. In addition, it provides the Hard IP information on MSI
Pending bits.
When the user requests the generation of an MSI, the user provides the MSI vector (number) and the user function information, which the MSI Interrupt Controller indexes to get the MSI address/data, and sends this information to the BAS. The MSI capability Message Control register bits [6:4] select the number of MSI vectors per function. The MSI vector input (msi_num_i[4:0]) is used to manipulate the MSI data LSB bits as per the PCIe specification.
The BAS logic takes the MSI request from the MSI Interrupt controller and forms a
Memory Write TLP. The internal request to Interrupt Gen is MSI_Pending &
~MSI_Mask. The MSI pending is User_MSI & MSI_En (from Hard IP).
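A software-view sketch of the gating terms and of how the vector number modifies the MSI message data follows. It assumes the standard PCI Express MSI convention (the low log2(N) bits of the message data carry the vector number, where N is the number of enabled vectors); it is an illustration, not the IP's RTL.

/* Illustrative model of the MSI generation gating and data manipulation
 * described above. Not the IP's RTL. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Message data actually sent for vector 'msi_num' of a function whose MSI
 * capability has Multiple Message Enable = mme (Message Control [6:4]). */
static uint16_t msi_message_data(uint16_t cap_msg_data, unsigned mme,
                                 unsigned msi_num)
{
    uint16_t vec_mask = (uint16_t)((1u << mme) - 1u);
    return (uint16_t)((cap_msg_data & ~vec_mask) | (msi_num & vec_mask));
}

/* Gating described in the text: the request to interrupt generation is
 * MSI_Pending & ~MSI_Mask, where MSI_Pending = User_MSI & MSI_En. */
static bool msi_request(bool user_msi, bool msi_en, bool msi_masked)
{
    bool msi_pending = user_msi && msi_en;
    return msi_pending && !msi_masked;
}

int main(void)
{
    /* 8 vectors per function (mme = 3): vector 5 replaces data[2:0]. */
    printf("data  = 0x%04x\n", msi_message_data(0x4120, 3, 5)); /* 0x4125 */
    printf("fires = %d\n", msi_request(true, true, false));     /* 1 */
    return 0;
}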
[MSI Memory Write TLP header format: a memory write TLP with Length = 1 DW. Byte 0: Fmt/Type (memory write), TC, Attr, AT, Length = 1. Byte 4: Requester ID, Tag, Last DW Byte Enable = 0000, First DW Byte Enable = 1111. Byte 8: MSI Message Address [63:32]. Byte 12: MSI Message Address [31:0].]
The BAS logic arbitrates for the interrupt ping (on the sideband wires) from the interrupt controller and sends the MSI on the AVST interface to the scheduler.
The CS module converts the AVMM request into a configuration TLP with a fixed TAG value (decimal 255) assigned to it and sends it to the scheduler. One unique TAG is enough because the CS does not support more than one outstanding transaction. This unique TAG helps in rerouting the completions to the CS module.
Re-routing the completion is handled at the top level. Since only one outstanding NP request is needed, the TLP RX scheduler parses the completion fields, decodes the completion on the fixed TAG, and routes the transaction over to the CS.
[Block diagram: the Configuration Slave (CS) presents an AVMM slave interface to the user and connects to the CS Master interface through a non-posted packet interface (cs_np pkt) for outgoing configuration requests and a completion packet interface (cs_cpl pkt) for returned completions.]
Config Slave interface supports 29-bit address format in Quartus Prime Pro Edition
v21.1 and Quartus Prime Pro Edition v21.2.
Note: Support for 29-bit address format is not available in Quartus Prime Pro Edition v21.3
onwards.
[Address format: the 14-bit Config Slave address is DWORD aligned, with bits [13:12] above a 12-bit offset in bits [11:0].]
The two most significant bits [13:12] determine whether address bits [11:0] are used to form a Config TLP sent downstream or used to write to/read from local Config Slave registers.
The following is a list of local CS registers that are supported in 14-bit address mode.
Send Feedback Multi Channel DMA Intel® FPGA IP for PCI Express* User Guide
35
3. Functional Description
683821 | 2025.01.27
Operation              29-bit address format                                   14-bit address format
EP Config Space Read   • AVMM read to the EP register address (the AVMM        • One AVMM write of the BDF info to 0x0004 (with the 13th bit set to 1)
                         address includes BDF + register)                      • One AVMM read to the EP register address (with the 13th bit set to 0)
                       • Type1/Type0 based on the 28th bit                     • Type1/Type0 based on the 12th bit
                       • CplD data is available on the AVMM read data bus      • CplD data is available on the AVMM read data bus
[Timing diagram: the BDF value is driven on cs_writedata_i[31:0] for the write step; the completion data is returned on cs_readdata_o[31:0] with cs_readdatavalid_o asserted.]
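A driver-side sketch of the 14-bit flow in the right-hand column above follows, assuming that "13th bit" and "12th bit" refer to cs_address_i[13] and cs_address_i[12]. The cs_write32()/cs_read32() helpers are placeholders for whatever logic drives the CS Avalon-MM port in your system; they are not part of the IP.

/* Illustrative sequence for an Endpoint config-space read through the Config
 * Slave in 14-bit address mode. cs_write32()/cs_read32() are placeholders. */
#include <stdio.h>
#include <stdint.h>

static void     cs_write32(uint32_t addr, uint32_t data); /* drives cs_write_i */
static uint32_t cs_read32(uint32_t addr);                 /* drives cs_read_i  */

#define CS_LOCAL_REG   (1u << 13)   /* bit 13 = 1: local CS register access  */
#define CS_TYPE1       (1u << 12)   /* bit 12 selects Type1 vs Type0 cfg TLP */
#define CS_BDF_REG     0x0004u      /* local register holding the target BDF */

static uint32_t ep_config_read(uint16_t bdf, uint16_t reg_offset, int type1)
{
    /* Step 1: AVMM write of the BDF to local register 0x0004 (bit 13 set). */
    cs_write32(CS_LOCAL_REG | CS_BDF_REG, bdf);

    /* Step 2: AVMM read to the register offset (bit 13 clear). The CS turns
     * this into a Type0/Type1 configuration read; the CplD data is returned
     * on the AVMM read data bus. */
    uint32_t addr = (type1 ? CS_TYPE1 : 0u) | (reg_offset & 0xFFFu);
    return cs_read32(addr);
}

/* Stub implementations so the sketch compiles standalone. */
static uint32_t cs_regs_model[1 << 14];
static void     cs_write32(uint32_t a, uint32_t d) { cs_regs_model[a & 0x3FFF] = d; }
static uint32_t cs_read32(uint32_t a)              { return cs_regs_model[a & 0x3FFF]; }

int main(void)
{
    uint32_t id = ep_config_read(0x0100 /* bus 1, dev 0, fn 0 */, 0x000, 0);
    printf("Vendor/Device ID read returned 0x%08x\n", id);
    return 0;
}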
Figure 18. Example Use Cases with the Root Port Address Translation Table (ATT)
Note: For information about the CS register offset used to program ATT, refer to Local CS
Registers supported in 14-bit address mode.
The IP provides the following parameters in the IP Parameter Editor GUI (MCDMA Settings) that allow you to select the address mapping when you enable ATT.
• ATT Table Address Width (1-9): Sets the depth of the ATT. The depth is equal to 2 to the power of the number entered.
• ATT Window Address Width (10-63): Sets the number of BAS address bits to be used directly.
When address mapping is disabled, the Avalon-MM slave address is used as-is in the
resulting PCIe TLPs. When address mapping is enabled, burst of transactions on the
Avalon-MM slave interfaces must not cross address mapping page boundaries. This
requirement means (address + 32 * burst count) <= (page base address + page size). Host software is expected to guarantee this in Root Port mode, and the BAS/CS does not have any mechanism to report an error if this requirement is violated.
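The rule can be expressed directly as a check that the initiator could apply before issuing a burst. This sketch assumes the 32 bytes per beat implied by the formula above and that pages are aligned to the window size.

/* Sketch of the address-mapping page-boundary rule quoted above:
 * (address + 32 * burst count) must not exceed (page base + page size). */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

static bool burst_within_page(uint64_t address, uint32_t burst_count,
                              uint64_t page_size /* 2^(ATT Window Address Width) */)
{
    uint64_t page_base = address & ~(page_size - 1);
    return (address + 32ull * burst_count) <= (page_base + page_size);
}

int main(void)
{
    /* 64 KB pages (ATT Window Address Width = 16). */
    printf("%d\n", burst_within_page(0x0000FF00u, 8, 1u << 16)); /* 1: fits    */
    printf("%d\n", burst_within_page(0x0000FFE0u, 8, 1u << 16)); /* 0: crosses */
    return 0;
}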
When address mapping is enabled, Avalon-MM lower-order address bits are passed
through to the PCIe TLPs unchanged and are ignored in the address mapping table.
The number of lower-order address bits that are passed through defines the size of
the page and is set by ATT Window Address Width parameter in the IP GUI. If bits
[63:32] of the resulting PCIe address are zero, TLPs with 32-bit wide addresses are
created as required by the PCI Express standard.
In the figure below, the ATT depth is 64. This is set by ATT Table Address Width = 6 (2^6 = 64 entries). The address pass-through width is 16, set by ATT Window Address Width = 16. This means the BAS forwards the lower 16 address bits as-is. The upper 6 bits are used to select the ATT entry. In the example, the selected ATT entry is 0x03.
[Figure: the 64 ATT entries (0x0 through 0x3F) are programmed by the host through CS addresses 0x3000 through 0x31F8 (8 bytes per entry). In the BAS mapping view, entry 0x3 is programmed with the two words 12340h and 56780000h, which supply the upper PCIe address bits for transactions that fall into that window.]
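The example maps onto a short translation sketch: the upper ATT-table-address-width bits of the BAS address index the table, the lower window-width bits pass through, and a 32-bit (3-DW) address TLP is used when the upper address bits come out zero. The table entry below mirrors one reading of entry 0x3 in the figure; all names and structures are illustrative, not the IP's.

/* Illustrative model of the Root Port ATT translation from the example above:
 * ATT Table Address Width = 6 (64 entries), ATT Window Address Width = 16. */
#include <stdio.h>
#include <stdint.h>

#define ATT_TABLE_ADDR_WIDTH   6          /* 2^6 = 64 entries          */
#define ATT_WINDOW_ADDR_WIDTH  16         /* 64 KB pass-through window */

static uint64_t att[1u << ATT_TABLE_ADDR_WIDTH];   /* programmed by the host via CS */

static uint64_t att_translate(uint32_t bas_addr)
{
    uint32_t window_mask = (1u << ATT_WINDOW_ADDR_WIDTH) - 1u;
    uint32_t entry       = (bas_addr >> ATT_WINDOW_ADDR_WIDTH) &
                           ((1u << ATT_TABLE_ADDR_WIDTH) - 1u);

    /* Lower-order bits pass through; the table supplies the upper bits and
     * its low window bits are ignored, as stated in the text. */
    return (att[entry] & ~(uint64_t)window_mask) | (bas_addr & window_mask);
}

int main(void)
{
    att[0x03] = 0x5678000000012340ULL;     /* one reading of entry 0x3 in the figure */

    uint64_t pcie_addr = att_translate(0x0003ABCDu);   /* entry 0x3, offset 0xABCD */
    printf("PCIe address = 0x%016llx\n", (unsigned long long)pcie_addr);

    /* If bits [63:32] of the result are zero, the IP generates a TLP with a
     * 32-bit address, as required by the PCI Express standard. */
    return 0;
}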
interface. In Root Port mode, the application logic uses the Hard IP reconfiguration
interface to access its PCIe configuration space to perform link control functions such
as Hot Reset, Link Disable or Link Retrain.
Note: After a warm reset or cold reset, changes made to the configuration registers of the
Hard IP via the Hard IP reconfiguration interface are lost and these registers revert
back to their default values.
Note: The H-Tile MCDMA IP does not respond with a completion to a Hard IP reconfiguration read or write request when it targets the 0x0FFC - 0x0FFF address range. These addresses belong to the PCIe configuration space.
Note: MCDMA R-Tile IP does not support Config TL interface. The CII interface should be
used as a replacement for similar functionality.
Related Information
• P-Tile Avalon Streaming Intel FPGA IP for PCI Express User Guide
• F-Tile Avalon Streaming Intel FPGA IP for PCI Express User Guide
• R-Tile Avalon Streaming Intel FPGA IP for PCI Express User Guide
Note: Data Mover with the External Descriptor Controller only supports data movement over the AVMM interface to the user logic. AVST interface support to the user logic may be added in a future release.
Note: Data Mover only mode is not available for any of the x4 topologies in P/F/R-Tile
MCDMA IPs.
The figure below is the top-level block diagram of the MCDMA IP in Data Mover Only
mode with user descriptor controller.
Figure 20. PCIe Data Mover Subsystem connected to External DMA Controller
[Block diagram: the PCIe Link connects to the PCIe HIP, which connects to the HIP Interface/Scheduler/Arbiter. The H2D Data Mover drives an AVMM Write Master and the D2H Data Mover drives an AVMM Read Master toward the user memory; a BAM AVMM Slave is also provided. The external DMA controller exchanges descriptors and status with the data movers over the h2ddm_desc, h2ddm_desc_cmpl, h2ddm_desc_status, d2hdm_desc, and d2hdm_desc_status interfaces.]
• Bursting Avalon Master (BAM): Performs non-bursting PIO operations for register
programming.
• Support for PCIe semantics
— Scheduler enforces PCIe ordering rules in both Tx and Rx direction leading to
PCIe HIP
— TLP chunking at MPS size
— Checks for 4 KB boundary crossings and other error logging
• Completion re-ordering: Data Mover subsystem performs the re-ordering of the
received completions before sending data to AVMM Write Master or sending the
descriptor completion packets to external descriptor controller.
h2ddm_master AVMM Write Master This interface provides the read data
from the host memory to the user
application.
RSVD 1 Reserved
The H2D DM descriptor completion data, which is the completion returned from the host when the original descriptor request had MM_Mode=0, is as follows.
RSVD 1 Reserved
The table below is the H2D Data Mover descriptor status sent to the external DMA
controller when it completes the execution of a descriptor.
Note: This is intended to be used only when MM_mode=1 in original H2D Data Mover
descriptor.
d2hdm_master AVMM Read Master D2H data mover reads the data from
local memory and writes the data to
host memory location.
Signal                        Writeback                                                  MSI-X
d2hdm_desc_data_i[63:0]       64-bit Writeback Data: Write-back Data [63:0];             32-bit MSI-X Data: {32'h0, MSI-X Data [31:0]}
                              32-bit Writeback Data: {32'h0, Write-back Data [31:0]}
d2hdm_desc_data_i[127:64]     Write-back Host Address [63:0]                             MSI-X Address [63:0]
d2hdm_desc_data_i[155:148]    Data Mover Write: 'h60                                     Data Mover Write: 'h60
d2hdm_desc_data_i[175:160]    {VF_ACTIVE, VFNUM [10:0], PF[3:0]}                         {VF_ACTIVE, VFNUM [10:0], PF[3:0]}
The table below is the D2H Data Mover descriptor status sent to the external DMA
controller when it completes the execution of a descriptor.
0 1 1 Reserved
0 1 0 Reserved
0 0 0 Reserved
4. Interface Overview
Interfaces for the Multi Channel DMA IP for PCI Express are:
• Clocks
• Resets
• Multi Channel DMA mode interfaces (EP only):
— Avalon-MM PIO Master Interface
— Avalon-MM Write Master Interface
— Avalon-MM Read Master Interface
— Avalon-ST Source Interface
— Avalon-ST Sink Interface
— User MSI-X
— User FLR
• Bursting Avalon-MM Master Interface (BAM)
• Bursting Avalon-MM Slave Interface (BAS)
• MSI Interface (in BAS mode & BAM+BAS mode for EP H/P/F/R-Tile MCDMA IP)
• Config Slave Interface (RP only)
• Hard IP Reconfig Interface
• Config TL Interface
• Data Mover Mode (available in MCDMA P-Tile, R-Tile and F-Tile IPs)
[Figure: Multi Channel DMA IP for PCI Express top-level signals, grouped by interface: D2H Avalon-MM Read Master, H2D Avalon-ST Source (h2d_st_* signals, <k> = 0 for 1 AVST port), H2D Avalon-MM Write Master, H2D Data Mover (h2ddm_desc* signals, with <a> = {vf_active, pf, vf, bar_num, bam_addr} - 1 for the BAM address width), Bursting Avalon-MM Master (BAM), Bursting Avalon-MM Slave (BAS), User MSI-X, User FLR, Configuration Output (P-Tile and F-Tile only), Config Slave (CS), User MSI, Configuration Intercept (CII, R-Tile only), HIP Dynamic Reconfiguration, Hard IP Status (link_up_o, dl_up_o, surprise_down_err_o, ltssm_state_o), D2H Data Mover, and User PTM (R-Tile only) interfaces.]
4.2. Clocks
Table 36. Multi Channel DMA IP for PCI Express Clock Signals
Signal Name        I/O Type   Description                                                             Clock Frequency
H-Tile
refclk             Input      PCIe reference clock defined by the PCIe specification. This input      100 MHz ± 300 ppm
                              reference clock must be stable and free-running at device power-up
                              for a successful device configuration.
refclk0, refclk1   Input      PCIe reference clocks defined by the PCIe specification. These          100 MHz ± 300 ppm
                              clocks must be free-running and driven by a single clock source.
                              For F-Tile, connect outrefclk_fgt_i (i = 0 to 7) from the "F-Tile
                              Reference and SystemPLL Clocks" IP to this port. Drive the refclk1
                              input port with the same clock as the refclk0 input port if your
                              design does not need a separate refclk.
Note: Divide-by-2 or divide-by-4 clock derived from coreclkout_hip. Use the Slow Clock Divider option in the Parameter Editor to choose between the divide-by-2 or divide-by-4 versions of coreclkout_hip for this clock.
Reference clock to the System PLL can be driven to any one of the F-Tile reference
clock pins, refclk[0] to refclk[7]. The reference clock must adhere to the following
requirements:
• If compliance with the PCI Express link training timing specifications is required, the reference clock to the System PLL must be available and stable before device configuration begins. You must set the Refclk is available at power-on parameter
in the System PLL IP to On. Derive the reference clock from an independent and
free running clock source. Alternately, if the reference clock from the PCIe link is
guaranteed available before device configuration begins, you can use it to drive
the System PLL. Once the PCIe link refclk is alive, it is never allowed to go
down.
• If compliance with the PCI Express link training timing specifications is not required and the reference clock to the System PLL may not be available before device configuration begins, you must set the Refclk is available at power-on parameter
in the System PLL IP to Off. In this case, you may use the reference clock from
the PCI Express link to drive the System PLL. The System PLL does not lock to the
reference until you perform the Global Avalon memory-mapped interface write
operations signaling that the reference clock is available.
Once the reference clock for the System PLL is up, it must be stable and present
throughout the device operation and must not go down. If you are not able to adhere
to this requirement, you must reconfigure the device.
Note: Refer to Implementing the F-Tile Reference and System PLL Clocks Intel FPGA IP
section in F-Tile Architecture and PMA and FEC Direct PHY IP User Guide for
information about this IP.
Note: Refer to Example Flow to Indicate All System PLL Reference Clocks are Ready section
in F-Tile Architecture and PMA and FEC Direct PHY IP User Guide to trigger the System
PLL to lock to reference clock.
Related Information
• F-Tile Architecture and PMA and FEC Direct PHY IP User Guide
• Implementing the F-Tile Reference and System PLL Clocks Intel® FPGA IP
• Example Flow to Indicate All System PLL Reference Clocks are Ready
4.3. Resets
Table 37. Multi Channel DMA IP for PCI Express Reset Signals
Signal Name I/O Type Description
H-Tile
Note: In R-Tile, the Port x4 can be a 256-bit write master when Gen4 4x4 Interface - 256-bit is selected in PCI Express Hard IP Mode.
x8/x4*:
h2ddm_byteenable_o[31:0]
x4 (128-bit):
h2ddm_byteenable_o[15:0]
Note: In R-Tile, the Port x4 can be the 256-bit read master when Gen4 4x4 Interface - 256-
bit is selected in the PCI Express Hard IP Mode.
x4 (128-bit):
bam_readdata_i[127:0]
Note: In Root Port mode, legacy interrupts are supported for p0 and p1.
Related Information
• P-Tile Avalon Streaming Intel FPGA IP for PCI Express User Guide
• F-Tile Avalon Streaming Intel FPGA IP for PCI Express User Guide
• R-Tile Avalon Streaming Intel FPGA IP for PCI Express User Guide
Table 53. H2D Data Mover Descriptor Status Interface (h2ddm_desc_status) Signals
Signal Name I/O Description
Table 54. H2D Data Mover Descriptor Status Interface (h2ddm_desc_status) Signals
Signal Name I/O Description
Table 56. D2H Data Mover Descriptor Status Interface (d2hdm_desc_status) Signals
Signal Name I/O Description
• 6'h0A: S_CFG_LANENUM_ACCEPT
• 6'h0B: S_CFG_COMPLETE
• 6'h0C: S_CFG_IDLE
• 6'h0D: S_RCVRY_LOCK
• 6'h0E: S_RCVRY_SPEED
• 6'h0F: S_RCVRY_RCVRCFG
• 6'h10: S_RCVRY_IDLE
• 6'h11: S_L0
• 6'h12: S_L0S
• 6'h13: S_L123_SEND_EIDLE
• 6'h14: S_L1_IDLE
• 6'h15: S_L2_IDLE
• 6'h16: S_L2_WAKE
• 6'h17: S_DISABLED_ENTRY
• 6'h18: S_DISABLED_IDLE
• 6'h19: S_DISABLED
• 6'h1A: S_LPBK_ENTRY
• 6'h1B: S_LPBK_ACTIVE
• 6'h1C: S_LPBK_EXIT
• 6'h1D: S_LPBK_EXIT_TIMEOUT
• 6'h1E: S_HOT_RESET_ENTRY
• 6'h1F: S_HOT_RESET
• 6'h20: S_RCVRY_EQ0
• 6'h21: S_RCVRY_EQ1
• 6'h22: S_RCVRY_EQ2
• 6'h23: S_RCVRY_EQ3
ptm_context_valid_o       Output   When this signal is asserted, it indicates that the value present on the ptm_time bus is valid. Hardware deasserts this bit whenever a PTM dialogue is requested and an update is in progress.
ptm_clk_updated_o         Output   This one-clock pulse indicates that the PTM dialogue has completed and the results of that operation have been driven on the ptm_time bus.
ptm_local_clock_o[63:0]   Output   This bus contains the calculated master time at t1' as indicated in the PCIe specification, plus any latency to do the calculation and to drive the value to the requester.
ptm_manual_update_i       Input    Asserted high for one coreclkout_hip clock when the user application wants to request a PTM handshake to get a snapshot of the latest time.
For detailed information about this topic, refer to the R-Tile Avalon Streaming Intel FPGA IP for PCI Express User Guide.
5. Parameters (H-Tile)
This chapter provides a reference for all the H-Tile parameters of the Multi Channel
DMA IP for PCI Express.
5.1. IP Settings
Hard IP mode
Value: Gen3x16, Interface - 512-bit, 250 MHz; Gen3x8, Interface - 256-bit, 250 MHz
Description: Selects the following elements:
• The lane data rate. Gen3 is supported.
• The Application Layer interface frequency.
• The width of the data interface between the hard IP Transaction Layer and the Application Layer implemented in the FPGA fabric.
Root Port
PIO BAR2 Address Width
Value: NA, 128 Bytes - 7 bits ~ 8 EBytes - 63 bits
Description: Address width for the PIO AVMM port. The default address width is 22 bits.
User Mode
Value: Multi channel DMA, Bursting Master, Bursting Slave, BAM+BAS, BAM+MCDMA, BAM+BAS+MCDMA
Description: This option allows the user to configure the mode of operation for the MCDMA IP. MCDMA mode has the DMA functionality. BAM and BAS offer Bursting Master and Slave AVMM capabilities without DMA functionality.
Enable MSI Capability
Value: On / Off
Description: Enables or disables MSI capability for BAS.
Note: This parameter is only available when User Mode is set to BAS or BAM+BAS.

Enable MSI Extended Data Capability
Value: On / Off
Description: Enables or disables MSI extended data capability.

Enable address byte aligned transfer
Value: On / Off
Description: This option allows you to enable the byte-aligned address mode support needed for Kernel or DPDK drivers; the DMA makes no assumption about the alignment of data with respect to the address.
Note: This parameter is only available when the Interface type is set to AVST.
Total physical functions (PFs)
Value: 1-4
Description: Sets the number of physical functions.

Total virtual functions of physical function (PF VFs)
Value: 0 (default)
Description: Sets the number of VFs to be assigned to each physical function.

Number of DMA channels allocated to PF0
Value: 0 - 512
Description: Number of DMA channels between the host and the device PF Avalon-ST / Avalon-MM ports.

Number of DMA channels allocated to each VF in PF0
Value: 0 - 512
Description: When SR-IOV support is turned on for the PF, this parameter sets the number of DMA channels allocated to each VF in the PF.
Note: This parameter is active when 'Enable SR-IOV support' is set to ON and 'Enable SRIOV for PF' is also set to ON.

Enable Native PHY, LCPLL, and fPLL ADME for Toolkit
Value: On / Off
Description: When On, Native PHY, ATXPLL, and fPLL ADME are enabled for the Transceiver Toolkit. You must enable transceiver dynamic reconfiguration before enabling ADME.

Enable PCIe Link Inspector
Value: On / Off
Description: When On, the PCIe Link Inspector is enabled. You must enable HIP dynamic reconfiguration, transceiver dynamic reconfiguration, and ADME for Toolkit to use the PCIe Link Inspector.

Enable PCIe Link Inspector AVMM Interface
Value: On / Off
Description: When On, the PCIe Link Inspector AVMM interface is exported, and a JTAG to Avalon Bridge IP instantiation is included in the Example Design generation for debug.
VCCR/VCCT supply voltage for the transceiver
Value: 1_1V, 1_0V
Description: Allows you to report the voltage supplied by the board for the transceivers.
5.1.7.1. Device
Maximum payload size supported
Value: 512 bytes (Note: value is fixed at 512 bytes)
Address: 0x074
Description: Specifies the maximum payload size supported. This parameter sets the read-only value of the max payload size supported field of the Device Capabilities register.
5.1.7.2. Link
Link port number (Root Port only)
Value: 0x01
Description: Sets the read-only value of the port number field in the Link Capabilities register. This parameter is for Root Ports only. It should not be changed.

Slot clock configuration
Value: On/Off
Description: When you turn this option On, it indicates that the Endpoint uses the same physical reference clock that the system provides on the connector. When Off, the IP core uses an independent clock regardless of the presence of a reference clock on the connector.
5.1.7.3. MSI-X
Note: The MSI-X capability parameters cannot be set or modified if you select the MCDMA
mode.
Pending bit array (PBA) offset
Value: 0x0000000000030000
Description: Used as an offset from the address contained in one of the function's Base Address registers to point to the base of the MSI-X PBA. The lower 3 bits of the PBA BIR are set to zero by software to form a 32-bit qword-aligned offset. This field is read-only after being programmed.
Endpoint L0s acceptable latency
Value: Maximum of 64 ns, Maximum of 128 ns, Maximum of 256 ns, Maximum of 512 ns, Maximum of 1 us, Maximum of 2 us, Maximum of 4 us, No limit
Description: This design parameter specifies the maximum acceptable latency that the device can tolerate to exit the L0s state for any links between the device and the root complex. It sets the read-only value of the Endpoint L0s acceptable latency field of the Device Capabilities Register (0x084). This Endpoint does not support the L0s or L1 states. However, in a switched system there may be links connected to switches that have L0s and L1 enabled. This parameter is set to allow system configuration software to read the acceptable latencies for all devices in the system and the exit latencies for each link to determine which links can enable Active State Power Management (ASPM). This setting is disabled for Root Ports. The default value of this parameter is 64 ns. This is a safe setting for most designs.
Endpoint L1 acceptable latency
Value: Maximum of 1 us, Maximum of 2 us, Maximum of 4 us, Maximum of 8 us, Maximum of 16 us, Maximum of 32 us, Maximum of 64 us, No limit
Description: This value indicates the acceptable latency that an Endpoint can withstand in the transition from the L1 to L0 state. It is an indirect measure of the Endpoint's internal buffering. It sets the read-only value of the Endpoint L1 acceptable latency field of the Device Capabilities Register. This Endpoint does not support the L0s or L1 states. However, a switched system may include links connected to switches that have L0s and L1 enabled. This parameter is set to allow system configuration software to read the acceptable latencies for all devices in the system and the exit latencies for each link to determine which links can enable Active State Power Management (ASPM). This setting is disabled for Root Ports. The default value of this parameter is 1 µs. This is a safe setting for most designs.
The Stratix 10 Avalon-ST Hard IP for PCI Express and Stratix 10 Avalon-MM Hard IP
for PCI Express do not support the L1 or L2 low power states. If the link ever gets into
these states, performing a reset (by asserting pin_perst, for example) allows the IP
core to exit the low power state and the system to recover.
These IP cores also do not support the in-band beacon or sideband WAKE# signal,
which are mechanisms to signal a wake-up event to the upstream device.
User ID register from the Vendor Specific Extended Capability
Value: Custom value
Description: Sets the read-only value of the 16-bit User ID register from the Vendor Specific Extended Capability. This parameter is only valid for Endpoints.
Currently Selected Example Design
Value: PIO using MQDMA Bypass mode, AVMM DMA, Device-side Packet Loopback, Packet Generate/Check
Description: Select an example design available from the pulldown list. The User Mode and Avalon-ST/Avalon-MM Interface type settings determine the available example designs.
Simulation
Value: On/Off
Description: When On, the generated output includes a simulation model.

Select simulation Root Complex BFM
Value: Intel FPGA BFM, Third-party BFM
Description: Intel FPGA BFM (default): this bus functional model (BFM) supports x16 configurations by downtraining to x8. Third-party BFM: select this if you want to simulate all 16 lanes using a third-party BFM.

Synthesis
Value: On/Off
Description: When On, the generated output includes a synthesis model.

Generated HDL format
Value: Verilog/VHDL
Description: Only Verilog HDL is available in the current release.
Note: For more information about example designs, refer to the Multi Channel DMA Intel
FPGA IP for PCI Express Design Example User Guide.
Related Information
Multi Channel DMA Intel FPGA IP for PCI Express Design Example User Guide
6. Parameters (P-Tile) (F-Tile) (R-Tile)
Hard IP mode (P-Tile)
Value: Gen4 x16, Interface – 512 bit; Gen3 x16, Interface – 512 bit; Gen4 2x8, Interface – 256 bit; Gen3 2x8, Interface – 256 bit; Gen4 1x8, Interface – 256 bit; Gen3 1x8, Interface – 256 bit; Gen4 4x4, Interface – 128 bit; Gen3 4x4, Interface – 128 bit
Default: Gen4x16, Interface – 512 bit
Description: Selects the following elements:
• Lane data rate: Gen3 and Gen4 are supported.
• Lane width: x16 supports both Root Port and Endpoint modes. 1x8 and 2x8 support only Endpoint mode. 4x4 mode only supports Root Port mode.
Hard IP mode (F-Tile)
Value: Gen4 x16, Interface – 512 bit; Gen3 x16, Interface – 512 bit; Gen4 2x8, Interface – 256 bit; Gen3 2x8, Interface – 256 bit; Gen4 1x8, Interface – 256 bit; Gen3 1x8, Interface – 256 bit; Gen4 1x4, Interface – 128 bit; Gen3 1x4, Interface – 128 bit; Gen4 2x4, Interface – 128 bit; Gen3 2x4, Interface – 128 bit; Gen4 4x4, Interface – 128 bit; Gen3 4x4, Interface – 128 bit
Default: Gen4x16, Interface – 512 bit
Description: Selects the following elements:
• Lane data rate: Gen3 and Gen4 are supported.
• Lane width: x16 and 1x8 support both Root Port and Endpoint modes. 2x8 and 1x4 support only Endpoint mode. 2x4 and 4x4 support only Root Port mode.
Hard IP mode (R-Tile)
Value: Gen4 x16, Interface – 512 bit; Gen3 x16, Interface – 512 bit; Gen5 2x8, Interface – 512 bit; Gen4 2x8, Interface – 512 bit; Gen3 2x8, Interface – 512 bit; Gen4 2x8, Interface – 256 bit; Gen3 2x8, Interface – 256 bit; Gen5 4x4, Interface – 256 bit; Gen4 4x4, Interface – 256 bit; Gen3 4x4, Interface – 256 bit; Gen4 4x4, Interface – 128 bit; Gen3 4x4, Interface – 128 bit
Default: Gen5 2x8, Interface – 512 bit
Description: Selects the following elements:
• Link data rate: Gen5, Gen4, and Gen3 are supported.
• Link width: x16, x8, and x4 modes are supported for both Root Port and Endpoint.
DK-DEV-AGI0227RES R-Tile A0 revision only supports:
• Gen5 2x8, Interface – 512 bit
• Gen4 2x8, Interface – 512 bit
• Gen3 2x8, Interface – 512 bit
Enable Ptile Debug Toolkit (P-Tile) / Enable Debug Toolkit (F-Tile and R-Tile)
Value: On / Off
Default: Off
Description: Enable the Debug Toolkit for JTAG-based System Console debug access.
PLD Clock Frequency (P-Tile and F-Tile)
Value: 500 MHz, 450 MHz, 400 MHz, 350 MHz, 250 MHz, 225 MHz, 200 MHz, 175 MHz
Default: 350 MHz (for Gen4 modes), 250 MHz (for Gen3 modes)
Description: Select the frequency of the Application clock. The options available vary depending on the setting of the Hard IP Mode parameter. For Gen4 modes, the available clock frequencies are 500 MHz / 450 MHz / 400 MHz / 350 MHz / 250 MHz / 225 MHz / 200 MHz / 175 MHz (for Agilex 7) and 400 MHz / 350 MHz / 200 MHz / 175 MHz (for Intel Stratix 10 DX). For Gen3 modes, the available clock frequency is 250 MHz (for Agilex 7 and Intel Stratix 10 DX).
PLD Clock Frequency (R-Tile)
Value: 500 MHz, 475 MHz, 450 MHz, 425 MHz, 400 MHz, 350 MHz, 275 MHz, 250 MHz
Default: 500 MHz (for Gen5 mode), 500 MHz or 300 MHz (for Gen4 mode), 250 MHz (for Gen3 mode)
Description: Selects the frequency of the Application clock. The options available vary depending on the setting of the Hard IP Mode parameter. For Gen5 modes, the available clock frequencies are 500 MHz / 475 MHz / 450 MHz / 425 MHz / 400 MHz. For Gen4 modes, the available clock frequencies are 500 MHz / 475 MHz / 450 MHz / 425 MHz / 400 MHz / 300 MHz / 275 MHz / 250 MHz. For Gen3 modes, the available clock frequencies are 300 MHz / 275 MHz / 250 MHz.
Enable SRIS Mode
Value: On / Off
Default: Off
Description: Enable the Separate Reference Clock with Independent Spread Spectrum Clocking (SRIS) feature. When you enable this option, the Slot clock configuration option under the PCIe Settings → PCI Express/PCI Capabilities → PCIe Link tab is automatically disabled.
P-Tile Sim Mode
Value: On / Off
Default: Off
Description: Enabling this parameter reduces the simulation time of Hot Reset tests by 5 ms.
Note: Do not enable this option if you need to run synthesis.
Note: This parameter is not supported for R-Tile and F-Tile.
Enable PIPE Mode Simulation
Value: On / Off
Default: Off
Description: When you set this parameter, the PIPE interface is exposed, which can be used to improve the simulation time.
Note: This parameter is not supported by F-Tile MCDMA IP or by F-Tile MCDMA Design Examples.
Note: This parameter is not supported by R-Tile MCDMA IP or by R-Tile MCDMA Design Examples.
Enable Independent Perst (P-Tile and F-Tile) / Enable Independent GPIO Perst (R-Tile)
Value: On / Off
Default: Off
Description: Enable the reset of the PCS and Controller in User Mode for Endpoint 2x8 mode. When this parameter is On, new signals p<n>_cold_perst_n_i and p<n>_warm_perst_n_i are exported to the user application for P-Tile and R-Tile. In the case of F-Tile, i_gpio_perst#_n is exported to the user application. When this parameter is Off (default), the IP internally ties off these signals instead of exporting them.
Note: This parameter is required for the independent reset feature, which is only supported in the x8x8 Endpoint/Endpoint mode. In F-Tile, the Hard IP Reconfiguration Interface must be enabled and the p0_hip_reconfig_clk port must be connected to a clock source when this reset signal is used or the Enable Independent Perst option is turned on.
Note: For more information regarding the independent reset feature and its usage, refer to:
• P-Tile Avalon Streaming Intel FPGA IP for PCI Express User Guide, Appendix E
• R-Tile Avalon Streaming Intel FPGA IP for PCI Express User Guide
• F-Tile Avalon Streaming Intel FPGA IP for PCI Express User Guide
Enable CVP (Intel VSEC)
Value: On / Off
Default: Off
Description: Enable support for the CvP flow for a single tile only. Refer to the Agilex 7 Device Configuration via Protocol (CvP) Implementation User Guide for more information.
Note: This tab is only available for the Bursting Master, Bursting Slave, BAM+BAS, BAM+MCDMA, BAM+BAS+MCDMA, and Data Mover Only user modes. This tab is not available when only the MCDMA user mode is selected. The option to set the BAR0/BAR1 Type is not available if MCDMA is selected with BAM.
BARn Size
Value: 128 Bytes - 16 EBytes
Description: Specifies the size of the address space accessible to BARn when BARn is enabled. n = 0, 1, 2, 3, 4, or 5.
Figure 29. Endpoint PCIe0 Configuration, Debug and Extension Options [F-Tile]
Figure 30. Endpoint PCIe0 Configuration, Debug and Extension Options [R-Tile]
Figure 31. Rootport PCIe0 Configuration, Debug and Extension Options [P-Tile]
Figure 32. RootPort PCIe2 Configuration, Debug and Extension Options [R-Tile]
Maximum payload size supported
Value: 512 Bytes, 256 Bytes, 128 Bytes
Default: 512 Bytes
Description: Specifies the maximum payload size supported. This parameter sets the read-only value of the max payload size supported field of the Device Capabilities registers.
Link port number (Root Port only)
Value: 0-255
Default: 1
Description: Sets the read-only value of the port number field in the Link Capabilities register. This parameter is for Root Ports only. It should not be changed.
Slot clock configuration
Value: On/Off
Default: On
Description: When you turn this option On, it indicates that the Endpoint uses the same physical reference clock that the system provides on the connector. When Off, the IP core uses an independent clock regardless of the presence of a reference clock on the connector. This parameter sets the Slot Clock Configuration bit (bit 12) in the PCI Express Link Status register. You cannot enable this option when the Enable SRIS Mode option is enabled.
Enable Modified TS
Value: On/Off
Default: On
Description: Enables the controller to send the Modified TS OS if both sides of the link agree when the SUPPORT_MOD_TS register is set to 1. If the port negotiates alternate protocols or passes a Training Set Message, the bit must be set to 1.
Note: This feature is only supported for R-Tile.
Note: The PCIe0 MSI-X feature parameters cannot be set or modified if you select the MCDMA mode.
Note: The PCIe0 MSI-X feature parameters can be modified in the BAS/BAM/BAM+BAS modes.
Pending bit array (PBA) offset
Value: 0x0000000000030000
Description: Used as an offset from the address contained in one of the function's Base Address registers to point to the base of the MSI-X PBA. The lower 3 bits of the PBA BIR are set to zero by software to form a 32-bit qword-aligned offset. This field is read-only after being programmed.
Pending bit array (PBA) offset
Value: 0x0000000000030000
Description: Used as an offset from the address contained in one of the function's Base Address registers to point to the base of the VF MSI-X PBA. The lower 3 bits of the PBA BIR are set to zero by software to form a 32-bit qword-aligned offset. This field is read-only after being programmed.
Device Serial Number (DW1)
Value: 32 bits
Default: 0x0000000000000000
Description: Sets the lower 32 bits of the IEEE 64-bit Device Serial Number (DW1).

Device Serial Number (DW2)
Value: 32 bits
Default: 0x0000000000000000
Description: Sets the upper 32 bits of the IEEE 64-bit Device Serial Number (DW2).
Note: This capability is only available for R-Tile MCDMA IP Endpoint mode in Ports 0 and 1.
Note: The parameters in this feature are automatically set by the IP. You have no control
over the selection.
Enable Access Control Services (ACS)
Value: On / Off
Default: On
Description: ACS defines a set of control points within a PCI Express topology to determine whether a TLP is to be routed normally, blocked, or redirected.
Note: This feature is available if the IP is in Root Port mode, or if the IP is in Endpoint mode and Enable multiple physical functions and/or Enable SR-IOV support in the PCIeN Device tab is set to True.

Enable ACS P2P Traffic Support
Value: On / Off
Default: Off
Description: Indicates whether the component supports peer-to-peer traffic.
Note: This feature is available if the IP is in Root Port mode, or if the IP is in Endpoint mode and Enable multiple physical functions and/or Enable SR-IOV support in the PCIeN Device tab is set to True.

Enable Access Control Services (ACS)
Value: On / Off
Default: Off
Description: ACS defines a set of control points within a PCI Express topology to determine whether a TLP is to be routed normally, blocked, or redirected.
Note: This feature is available if the MCDMA IP is in Endpoint mode and Enable SRIOV support in the PCIeN Device tab is set to True.
Enable TLP Processing Hints (TPH)
Value: On / Off
Description: Enable or disable the TLP Processing Hints (TPH) capability. Using TPH may improve the latency performance and reduce traffic congestion.

Enable TLP Processing Hints (TPH)
Value: On / Off
Description: Enable or disable the TLP Processing Hints (TPH) capability. Using TPH may improve the latency performance and reduce traffic congestion.
BAR2 Address Width
Value: 128 Bytes - 8 EBytes
Default: 4 MBytes – 22 bits
Description: Address width for the PIO AVMM port. The default address width is 22 bits.
User Mode
Value: Multi channel DMA, Bursting Master, Bursting Slave, BAM+BAS, BAM+MCDMA
Default: Multi channel DMA
Description: This option allows the user to configure the mode of operation for the MCDMA IP. MCDMA mode has the DMA functionality. BAM and BAS offer Bursting Master and Slave AVMM capabilities without DMA functionality.
Currently Selected Example Design
Value: PIO using MQDMA Bypass mode, Device-side Packet Loopback, Packet Generate/Check, AVMM DMA, Traffic Generator/Checker, External Descriptor Controller
Description: Based on the MCDMA settings for "User Mode" and "Interface Type", different example designs are supported. For User Mode = MCDMA, BAM+MCDMA, and BAM+BAS+MCDMA* with Interface Type = AVST, the example design options are:
• PIO using MCDMA Bypass mode
• Device-side Packet Loopback
• Packet Generate/Check
Note: For more information about example designs, refer to the Multi Channel DMA Intel
FPGA IP for PCI Express Design Example User Guide.
Related Information
Multi Channel DMA Intel FPGA IP for PCI Express Design Example User Guide
Enable PCIe low loss
Value: DISABLE/ENABLE
Default: DISABLE
Description: When you select ENABLE, this parameter enables the transceiver analog settings for a low-loss PCIe design. This parameter should only be enabled for a chip-to-chip design where the insertion loss from the endpoint silicon pad to the root port silicon pad, including the package insertion loss, is below 8 dB at 8 GHz.
Table 96. PCIe1 Configuration, Debug and Extension Options Settings Table
7. Designing with the IP Core
Follow the steps shown in the figure below to generate a custom Multi Channel DMA IP
for PCI Express component.
You can select Multi Channel DMA IP for PCI Express in the Quartus Prime Pro Edition
IP Catalog or Platform Designer as shown below.
Figure 41. Quartus Prime Pro Edition IP Catalog (with filter applied)
Figure 42. Quartus Prime Pro Edition IP Catalog (with filter applied)
Figure 44. Multi Channel DMA IP for PCI Express Simulation in Quartus Prime Pro Edition
For information about supported simulators, refer to the Multi Channel DMA for PCI
Express Intel FPGA IP Design Example User Guide.
Note: The Intel testbench and Root Port BFM provide a simple method to do basic testing of
the Application Layer logic that interfaces to the PCIe IP variation. This BFM allows you
to create and run simple task stimuli with configurable parameters to exercise basic
functionality of the example design. The testbench and Root Port BFM are not intended
to be a substitute for a full verification environment. Corner cases and certain traffic
profile stimuli are not covered. To ensure the best verification coverage possible, Intel
strongly recommends that you obtain commercially available PCIe verification IP and
tools, or do your own extensive hardware testing, or both.
Related Information
• Introduction to Intel FPGA IP Cores
• Simulating Intel FPGA IP Cores
• Simulation Quick-Start
• Multi Channel DMA for PCI Express Design Example User Guide
Figure 45. Individual IP Core Generation Output (Quartus Prime Pro Edition)
<Project Directory>
<your_ip>.ip - Top-level IP variation file
<your_ip> - IP core variation files
<your_ip>.bsf - Block symbol schematic file
<your_ip>.cmp - VHDL component declaration
<your_ip>.ppf - XML I/O pin information file
<your_ip>.qip - Lists files for IP core synthesis
<your_ip>.spd - Simulation startup scripts
<your_ip>_bb.v - Verilog HDL black box EDA synthesis file *
<your_ip>_generation.rpt - IP generation report
<your_ip>_inst.v or .vhd - Lists file for IP core synthesis
<your_ip>.qgsimc - Simulation caching file (Platform Designer)
<your_ip>.qgsynthc - Synthesis caching file (Platform Designer)
sim - IP simulation files
<your_ip>.v or vhd - Top-level simulation file
<simulator vendor> - Simulator setup scripts
<simulator_setup_scripts>
synth - IP synthesis files
<your_ip>.v or .vhd - Top-level IP synthesis file
<IP Submodule>_<version> - IP Submodule Library
sim- IP submodule 1 simulation files
<HDL files>
synth - IP submodule 1 synthesis files
<HDL files>
<your_ip>_tb - IP testbench system *
<your_testbench>_tb.qsys - testbench system file
<your_ip>_tb - IP testbench files
<your_testbench>_tb.csv or .spd - testbench file
sim - IP testbench simulation files
* If supported and enabled for your IP core variation.
<your_ip>.cmp The VHDL Component Declaration (.cmp) file is a text file that contains local
generic and port definitions that you use in VHDL design files.
<your_ip>.qgsimc (Platform Designer systems only) Simulation caching file that compares the .qsys and .ip files with the current parameterization of the Platform Designer system and IP core. This comparison determines if Platform Designer can skip regeneration of the HDL.
<your_ip>.qgsynth (Platform Designer systems only) Synthesis caching file that compares the .qsys and .ip files with the current parameterization of the Platform Designer system and IP core. This comparison determines if Platform Designer can skip regeneration of the HDL.
<your_ip>.bsf A symbol representation of the IP variation for use in Block Diagram Files
(.bdf).
<your_ip>.ppf The Pin Planner File (.ppf) stores the port and node assignments for IP
components you create for use with the Pin Planner.
<your_ip>_bb.v Use the Verilog blackbox (_bb.v) file as an empty module declaration for use
as a blackbox.
<your_ip>_inst.v or _inst.vhd HDL example instantiation template. Copy and paste the contents of this file
into your HDL file to instantiate the IP variation.
<your_ip>.regmap If the IP contains register information, the Quartus Prime software generates
the .regmap file. The .regmap file describes the register map information of
master and slave interfaces. This file complements the .sopcinfo file by
providing more detailed register information about the system. This file enables
register display views and user customizable statistics in System Console.
<your_ip>.svd Allows HPS System Debug tools to view the register maps of peripherals that
connect to HPS within a Platform Designer system.
During synthesis, the Quartus Prime software stores the .svd files for slave
interface visible to the System Console masters in the .sof file in the debug
session. System Console reads this section, which Platform Designer queries
for register map information. For system slaves, Platform Designer accesses
the registers by name.
<your_ip>.v <your_ip>.vhd HDL files that instantiate each submodule or child IP core for synthesis or
simulation.
/synopsys/vcs/ Contains a shell script vcs_setup.sh to set up and run a VCS* simulation.
/synopsys/vcsmx/ Contains a shell script vcsmx_setup.sh and synopsys_sim.setup file to
set up and run a VCS MX* simulation.
/cadence/ Contains a shell script ncsim_setup.sh and other setup files to set up and
run an NCSIM simulation.
/<IP submodule>/ Platform Designer generates /synth and /sim sub-directories for each IP
submodule directory that Platform Designer generates.
In order to keep application logic held in the reset state until the entire FPGA fabric is
in user mode, Stratix 10 and Agilex 7 devices require you to include the Stratix 10
Reset Release IP.
Refer to the Multi Channel DMA for PCI Express IP design example to see how the
Reset Release IP is connected with the Multi Channel DMA for PCI Express IP
component.
Related Information
AN 891: Using the Reset Release Intel FPGA IP
8. Software Programming Model
The software files are created in the Multi Channel DMA IP for PCI Express design example project folder when you generate a Multi Channel DMA IP for PCI Express design example from the IP Parameter Editor as shown below. The software configuration is specific to the example design generated by Quartus Prime.
The Multi Channel DMA Intel FPGA IP for PCI Express design example project folder has multiple software directories depending on the Hard IP mode selected (1x16, 2x8, or 4x4) for Quartus Prime Pro Edition version 23.4 and onwards. Each software folder is specific to each port:
• p0_software folder is generated only for 1x16 Hard IP modes.
• p1_software folder is generated only for 2x8 Hard IP modes.
• p2_software and p3_software folders are generated only for 4x4 Hard IP modes.
Note: You must use the corresponding software folder with each IP port.
dpdk
kernel
user
readme
Custom
• Customized user space MCDMA library, which can be installed in the form of a static library file and corresponding test utility.
• Supports accessing the device by using the VFIO and UIO kernel frameworks.
• Sample performance-based application developed to demonstrate the performance and usage.
Use this driver if you have your own user space platform with custom APIs. API information is shared in this User Guide. Example: any user space application which needs DMA features.

DPDK
• A poll mode MCDMA driver using the DPDK infrastructure and an example test application are developed.
• DPDK patches are provided to support MSI-X and address some error cases.
• Supports both UIO and VFIO kernel frameworks, which can be enabled at DPDK build time.
If you use DPDK as your platform, you can integrate this PMD with your DPDK framework to perform DMA. Example: DPDK-based NFV applications.

Netdev
• The MCDMA network driver exposes the device as an Ethernet device (ifconfig displays the device as an Ethernet device).
• DMA operations can be initiated in kernel mode and all the TCP/IP applications can be used.
• Uses the kernel base framework for DMA memory management.
All TCP/IP applications can use this driver; iperf, netperf, and scapy use this driver.
8.1.1. Architecture
The figure below shows the software architecture block diagram of MCDMA custom
driver.
In the above block diagram, dotted lines represent memory mapped I/O interface. The
other two lines represent read and write operations triggered by the device.
The Multi Channel DMA IP for PCI Express supports the following kernel based
modules to expose the device to user space.
• vfio-pci
• UIO
These drivers do not perform any device management; they indicate to the Operating System (OS) that the devices are being used by user space so that the OS does not perform any action (for example, scanning the device) on these devices.
vfio-pci
This is the secure kernel module provided by the kernel distribution. This module allows you to program the I/O Memory Management Unit (IOMMU). The IOMMU is the hardware which helps to ensure memory safety in user space drivers. If you are using Single Root I/O Virtualization (SR-IOV), you can load vfio-pci and bind the device.
• This module enables IOMMU programming and Function Level Reset (FLR)
• To expose device Base Address Registers (BARs) to user space, vfio-pci enables ioctl
• Supports MSI-X (Message Signaled Interrupt extensions) interrupts
• Kernel versions >= 5.7 support the enablement of virtual functions by using the sysfs interface.
If you are using kernel versions below 5.7, you have the following alternatives:
ifc_uio
By using PCIe, sysfs, and interrupt framework utilities, this module allows user space to access the device.
Like vfio-pci, this module can also be used from a guest VM through the hypervisor. This driver allows the enablement/disablement of virtual functions. Once a virtual function is created, by default it binds to ifc_uio. Based on the requirement, you may unbind it and bind it to another driver.
libmqdma
This is a user-space library used by the application to access the PCIe device.
• This library has the APIs to access the MCDMA IP design, and you can develop your application using these APIs.
• It features calls for allocation, release, and reset of the channels.
• libmqdma supports accessing the devices bound by the two user space drivers UIO (uio) or Virtual Function I/O (VFIO) (vfio-pci).
In the case of UIO, the ifc_uio driver reads the BAR register info by using sysfs and registers MSI-X info by using eventfds.
In the case of VFIO, user space uses IOCTL commands to read the BAR registers and MSI-X information and to program the IOMMU table.
Sample application
This application uses the APIs from libmqdma and takes the following command line
arguments as the input.
• Total message sizes/ time duration
• Packet size per descriptor
• Write/Read
• Completion reporting method
• Number of channels
It runs multiple threads for accessing the DMA channel. It also has performance measuring capabilities. Based on the number of threads you are using and the number of channels you are processing, queues are scheduled on threads.
The libmqdma framework is installed on the host as a dynamic link library and exports the APIs to the application. Applications running in user space are responsible for using the MCDMA IP through those APIs.
When libmqdma hands over an available channel to the application, it performs the following functions: at the time of channel initialization, the descriptor and data memory are allocated.
Descriptor memory
The maximum length of data in a descriptor is 1 MB. The Link field specifies whether the next descriptor is in another page.
The application needs to pass these values to the hardware through libmqdma.
Data Memory
The user space data page can be much bigger than the normal TLB entry page size of 4 KB. The libqdma library implements the allocator to organize the memory.
The following are the hardware registers which the software updates as part of the channel enumeration:
• Q_START_ADDR_L, Q_START_ADDR_H: Contain the physical address of the start of the descriptor array.
• Q_SIZE: Logarithmic value of the number of descriptors.
• Q_CONS_HEAD_ADDR_L, Q_CONS_HEAD_ADDR_H: Physical address of the head index of the ring, where the FPGA syncs the value of the head.
There are two modes for selecting the descriptor completion status, MSI-X and
Writeback mode. The default mode is writeback. This can be changed in the following
C header file.
pX_software/user/common/include/ifc_libmqdma.h
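The channel enumeration described above can be visualized with a short, illustrative C sketch. This is a minimal sketch only, not the libmqdma implementation: the mapped qcsr pointer and the qcsr_wr32()/channel_enumerate() helper names are hypothetical, and the Q_CTRL bit positions are assumed; only the register names and offsets come from the QCSR description in this chapter.

#include <stdint.h>

/* QCSR offsets as listed in the channel setup steps of this chapter. */
#define Q_CTRL                  0x00
#define Q_START_ADDR_L          0x08
#define Q_START_ADDR_H          0x0C
#define Q_SIZE                  0x10
#define Q_TAIL_POINTER          0x14
#define Q_CONSUMED_HEAD_ADDR_L  0x20
#define Q_CONSUMED_HEAD_ADDR_H  0x24

/* Hypothetical helper: 32-bit MMIO write into a mapped QCSR window. */
static inline void qcsr_wr32(volatile uint8_t *qcsr, uint32_t off, uint32_t val)
{
    *(volatile uint32_t *)(qcsr + off) = val;
}

/* Program one channel's queue: descriptor ring base, ring size (log2 of the
 * number of descriptors), and the address where the FPGA writes back the
 * consumed head. */
static void channel_enumerate(volatile uint8_t *qcsr, uint64_t ring_phys,
                              uint32_t log2_num_desc, uint64_t head_phys)
{
    qcsr_wr32(qcsr, Q_START_ADDR_L, (uint32_t)ring_phys);
    qcsr_wr32(qcsr, Q_START_ADDR_H, (uint32_t)(ring_phys >> 32));
    qcsr_wr32(qcsr, Q_SIZE, log2_num_desc);
    qcsr_wr32(qcsr, Q_CONSUMED_HEAD_ADDR_L, (uint32_t)head_phys);
    qcsr_wr32(qcsr, Q_CONSUMED_HEAD_ADDR_H, (uint32_t)(head_phys >> 32));
    qcsr_wr32(qcsr, Q_TAIL_POINTER, 0);   /* start with an empty ring */
    qcsr_wr32(qcsr, Q_CTRL, 0x3);         /* q_en | q_wb/intr_en; bit layout assumed */
}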
8.1.3. Application
At the time of starting, the application uses the APIs provided by the driver to read the MCDMA capabilities, create the application context, open the BAR registers, and initialize the PCI Express functions. At the time of termination, it clears the application context and stops all the channels.
Based on the input parameters, the application starts multiple threads with POSIX thread APIs, associates a queue with each thread, and submits DMA transactions one at a time independently. As part of this, the driver updates the tail register of that channel. On the TID update, the hardware picks up the channel and starts the DMA operation.
As multiple threads can try to grab and release a channel at the same time, the userspace driver (libmqdma) handles the synchronization problems while performing channel management.
Scheduling Threads
As POSIX libraries are used for thread management, the Linux scheduler takes care of scheduling the threads; there is no custom scheduler for scheduling the threads.
Step 1
QCSR registers:
• Q_RESET (Offset 8'h48)
• Q_TAIL_POINTER (Offset 8'h14): set to 0
• Q_START_ADDR_L (Offset 8'h08)
• Q_START_ADDR_H (Offset 8'h0C)
• Q_SIZE (Offset 8'h10)
• Q_CONSUMED_HEAD_ADDR_L (Offset 8'h20)
• Q_CONSUMED_HEAD_ADDR_H (Offset 8'h24)
• Q_BATCH_DELAY (Offset 8'h28)
• Q_CTRL (Offset 8'h00): set the q_en and q_wb/intr_en bits
• Q_PYLD_COUNT (Offset 8'h44)
GCSR register:
Step 2
• Threads continuously try to send/receive the data, and the library keeps checking whether the channel is busy or the descriptor ring is full.
• If the channel is not busy and the descriptor ring is not full, the flow goes to Step 3. If the channel is busy or the descriptor ring is full, the thread retries to initiate the transfer again.
A full descriptor ring is identified by checking the Consumed Head and Tail pointer registers.
Step 3
The thread requests a new descriptor to submit the request and updates the required fields, i.e., descriptor index, SOF, EOF, payload, MSI-X enable, and writeback enable.
Step 4
After initializing the descriptor ring buffer, libqdma writes the number of descriptor updates into the tail register of the QCSR region. On every descriptor update, the tail pointer is increased by 1.
Step 5
• Once the tail pointer write happens, the Multi Channel DMA IP for PCI Express fetches descriptors from host memory starting from the programmed Q_START_ADDR_L/H address.
• The Multi Channel DMA IP for PCI Express parses the descriptor content to find the source and destination addresses and the length of the data from the descriptor and starts the DMA operation.
Step 6
The API flow below shows loading one descriptor into the descriptor ring buffer and then submitting the DMA transfer by updating the tail pointer register with an increment of 1.
ifc_app_start() → sysfs_enum_pcie, env_enum_hugepage_mem
ifc_qdma_device_get() → sysfs_mmap_pcie_bar, mmio-probe-chnl-resources (returns dev)
loop [rw_thread]:
    ifc_qdma_channel_get(dev) → alloc_chnl_from_pool, mmio-reset-chnl, mmio-enable-chnl (returns chnl)
    ifc_qdma_channel_put(chnl) → mmio-disable-chnl, release
loop [chnl, queue_depth]:
    ifc_qdma_request_free(rq[i])
ifc_qdma_device_put(dev)
ifc_app_stop()
The API flow below shows loading the descriptors in a batch into the descriptor ring buffer and then submitting them for DMA transfer by updating the tail pointer register with the total number of loaded descriptors.
ifc_app_start() → sysfs_enum_pcie, env_enum_hugepage_mem
ifc_qdma_device_get() → sysfs_mmap_pcie_bar, mmio-probe-chnl-resources (returns dev)
loop [rw_thread]:
    ifc_qdma_channel_get(dev) → alloc_chnl_from_pool, mmio-enable-chnl (returns chnl)
    ifc_qdma_channel_put(chnl) → mmio-disable-chnl, release
loop [chnl, queue_depth]:
    ifc_qdma_request_free(rq[i])
ifc_qdma_device_put(dev)
ifc_app_stop()
8.1.6.1. ifc_api_start
8.1.6.2. ifc_mcdma_port_by_name
8.1.6.3. ifc_qdma_device_get
API: int ifc_qdma_device_get(int port, struct ifc_qdma_device **qdev)
Description: Based on the port number, this API returns the corresponding device context to the application. The application must maintain the device context and use it for further operations. When the application is done with I/O, it releases the context by using the ifc_qdma_device_put API.
Input Parameters: port - port number of the device, which is returned by the ifc_mcdma_port_by_name API; qdev - address of the pointer to the device context.
Return Values: Updates the device context and returns 0 on success, negative otherwise.
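As a usage illustration only (not taken from the example source code), the following sketch resolves a BDF to a port and acquires the device context with the two APIs named above; the header name is the one given earlier in this chapter, and the error handling and BDF string are assumptions.

#include "ifc_libmqdma.h"   /* libmqdma header named earlier in this chapter */

int open_mcdma(const char *bdf, struct ifc_qdma_device **qdev)
{
    int port = ifc_mcdma_port_by_name(bdf);    /* e.g. bdf = "0000:01:00.0" */
    if (port < 0)
        return port;                           /* device not found */

    if (ifc_qdma_device_get(port, qdev) != 0)  /* populates the device context */
        return -1;

    /* The context is released later with ifc_qdma_device_put(). */
    return 0;
}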
8.1.6.4. ifc_num_channels_get
API: int ifc_num_channels_get(struct ifc_qdma_device *qdev)
Description: This API returns the total number of channels supported by the QDMA device.
Input Parameters: qdev - pointer to the device context.
Return Values: Number of channels supported.
8.1.6.5. ifc_qdma_channel_get
API: int ifc_qdma_channel_get(struct ifc_qdma_device *qdev, struct ifc_qdma_channel **chnl, int chno)
Description: Before submitting DMA transactions, the application is responsible for acquiring the channel and passing the context on further interactions with the framework.
Input Parameters: qdev - QDMA device; chnl - pointer to update the channel context; chno - channel number if the user wants a specific channel, -1 if no specific channel.
Return Values: 0 - success, populates the channel context. -1 - no channel is ready to be used; the channel context is returned as NULL. -2 - the requested channel is already allocated, but a valid channel context is returned; the application may use this channel context.
8.1.6.6. ifc_qdma_acquire_channels
Table 104. ifc_qdma_acquire_channels
API: int ifc_qdma_acquire_channels(struct ifc_qdma_device *qdev, int num)
Description: This API acquires n number of channels from hardware. Once the channels are acquired, the user must call ifc_qdma_channel_get() to initialize the channels and use them for DMA.
Input Parameters: qdev - QDMA device; num - number of channels requested.
Return Values: Number of channels acquired successfully, negative otherwise.
8.1.6.7. ifc_qdma_release_all_channels
API: int ifc_qdma_release_all_channels(struct ifc_qdma_device *qdev)
Description: This API releases all the channels acquired by the device. The user must make sure to stop the traffic on all the channels before calling this function. Perfq_app calls this API at the exit of the application.
Input Parameters: qdev - QDMA device.
Return Values: 0 on success, negative otherwise.
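The following hedged sketch shows how these channel APIs fit together: reserve several channels, initialize each one, and release them all at exit. The channel count and the array bookkeeping are example assumptions; the return-code handling follows the tables above.

#include "ifc_libmqdma.h"

#define NUM_CHANNELS 4   /* example value */

static struct ifc_qdma_channel *chnl[NUM_CHANNELS];

int setup_channels(struct ifc_qdma_device *qdev)
{
    int got = ifc_qdma_acquire_channels(qdev, NUM_CHANNELS);
    if (got < 0)
        return got;                              /* nothing acquired */

    for (int i = 0; i < got; i++) {
        /* chno = -1 lets the library pick any free channel */
        int rc = ifc_qdma_channel_get(qdev, &chnl[i], -1);
        if (rc < 0 && rc != -2)                  /* -2: already allocated, context still valid */
            return rc;
    }
    return got;
}

void teardown_channels(struct ifc_qdma_device *qdev)
{
    /* Stop traffic on all channels first, as the API requires. */
    ifc_qdma_release_all_channels(qdev);
}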
8.1.6.8. ifc_qdma_device_put
8.1.6.9. ifc_qdma_channel_put
8.1.6.10. ifc_qdma_completion_poll
8.1.6.11. ifc_qdma_request_start
8.1.6.12. ifc_qdma_request_prepare
API: int ifc_qdma_request_prepare(struct ifc_qdma_channel *qchnl, int dir, struct ifc_qdma_request *r)
Description: Depending on the direction, the application selects the queue and prepares the descriptor but does not submit the transaction. The application must use the ifc_qdma_request_submit API to submit the transactions to the DMA engine.
Input Parameters: qchnl - channel context received on ifc_qdma_channel_get(); dir - DMA direction, one of IFC_QDMA_DIRECTION_*; r - request struct that needs to be processed.
Return Values: Returns the number of transactions completed, negative otherwise.
8.1.6.13. ifc_qdma_descq_queue_batch_load
API: int ifc_qdma_descq_queue_batch_load(struct ifc_qdma_channel *qchnl, void *req_buf, int dir, int n)
Description: Depending on the direction, the application selects the queue and prepares n number of descriptors but does not submit the transactions. The application must use the ifc_qdma_request_submit API to submit the transactions to the DMA engine.
Input Parameters: qchnl - channel context received on ifc_qdma_channel_get(); req_buf - request(s) that need to be processed; dir - DMA direction, one of IFC_QDMA_DIRECTION_*; n - number of descriptors to load.
Return Values: Returns the number of transactions completed, negative otherwise.
8.1.6.14. ifc_qdma_request_submit
8.1.6.15. ifc_qdma_pio_read32
API: uint32_t ifc_qdma_pio_read32(struct ifc_qdma_device *qdev, uint64_t addr)
Description: Reads the value from a BAR2 address. This API is used for PIO testing, dumping statistics, and pattern generation.
Input Parameters: qdev - QDMA device; addr - address to read.
Return Values: 0 on success, negative otherwise.
8.1.6.16. ifc_qdma_pio_write32
API: void ifc_qdma_pio_write32(struct ifc_qdma_device *qdev, uint64_t addr, uint32_t val)
Description: Writes the value to a BAR2 address.
Input Parameters: qdev - QDMA device; addr - address to write; val - value to write.
Return Values: 0 on success and populates the channel context, negative otherwise.
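As an illustration of the 32-bit PIO helpers above, the sketch below writes a test pattern to a BAR2 offset and reads it back. The offset and pattern are example values only, and the helper name is hypothetical.

#include <stdint.h>
#include <stdio.h>
#include "ifc_libmqdma.h"

void pio_scratch_check(struct ifc_qdma_device *qdev, uint64_t offset)
{
    ifc_qdma_pio_write32(qdev, offset, 0xA5A5A5A5u);         /* write a test pattern */
    uint32_t readback = ifc_qdma_pio_read32(qdev, offset);   /* read it back */

    printf("BAR2[0x%llx] = 0x%08x\n", (unsigned long long)offset, readback);
}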
8.1.6.17. ifc_qdma_pio_read64
Table 115. ifc_qdma_pio_read64
API: uint64_t ifc_qdma_pio_read64(struct ifc_qdma_device *qdev, uint64_t addr)
Description: Reads the value from a BAR2 address. This API is used for PIO testing, dumping statistics, pattern generation, etc.
Input Parameters: qdev - QDMA device; addr - address to read.
Return Values: 0 on success, negative otherwise.
8.1.6.18. ifc_qdma_pio_write64
Table 116. ifc_qdma_pio_write64
API: void ifc_qdma_pio_write64(struct ifc_qdma_device *qdev, uint64_t addr, uint64_t val)
Description: Writes a 64-bit value to a BAR2 address.
Input Parameters: qdev - QDMA device; addr - address to write; val - value to write.
Return Values: 0 on success and populates the channel context, negative otherwise.
8.1.6.19. ifc_qdma_pio_read128
Table 117. ifc_qdma_pio_read128
API: uint128_t ifc_qdma_pio_read128(struct ifc_qdma_device *qdev, uint64_t addr)
Description: Reads the value from a BAR2 address. This API is used for PIO testing, dumping statistics, pattern generation, etc.
Input Parameters: qdev - QDMA device; addr - address to read.
Return Values: 0 on success, negative otherwise.
8.1.6.20. ifc_qdma_pio_write128
Table 118. ifc_qdma_pio_write128
API: void ifc_qdma_pio_write128(struct ifc_qdma_device *qdev, __uint128_t addr, uint64_t val)
Description: Writes a 64-bit value to a BAR2 address.
Input Parameters: qdev - QDMA device; addr - address to write; val - value to write.
Return Values: 0 on success and populates the channel context, negative otherwise.
8.1.6.21. ifc_qdma_pio_read256
Table 119. ifc_qdma_pio_read256
API: int ifc_qdma_pio_read128(struct ifc_qdma_device *qdev, uint64_t addr, void *val)
Description: Reads a 128-bit value from a BAR2 address. This API is used for PIO testing, dumping statistics, pattern generation, etc.
Input Parameters: qdev - QDMA device; addr - address to read; val - buffer that receives the value read.
Return Values: 0 on success, negative otherwise.
8.1.6.22. ifc_qdma_pio_write256
Table 120. ifc_qdma_pio_write256
API: void ifc_qdma_pio_write128(struct ifc_qdma_device *qdev, __uint128_t addr, uint64_t *val)
Description: Writes a 128-bit value to a BAR2 address.
Input Parameters: qdev - QDMA device; addr - address to write; val - value to write.
Return Values: 0 on success and populates the channel context, negative otherwise.
8.1.6.23. ifc_request_malloc
API: struct ifc_qdma_request *ifc_request_malloc(size_t len)
Description: libmqdma allocates the buffer for an I/O request. The returned buffer is DMA-able and allocated from huge pages.
Input Parameters: len - size of the data buffer for the I/O request.
Return Values: 0 on success, negative otherwise.
8.1.6.24. ifc_request_free
API: void ifc_request_free(void *req)
Description: Releases the passed buffer and adds it to the free pool.
Input Parameters: req - start address of the allocated buffer.
Return Values: 0 - success, populates the channel context; -1 - channel not available; -2 - requested a specific channel, but it is already occupied.
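A hedged usage sketch for the request buffer helpers follows. It assumes that the request returned by ifc_request_malloc() exposes the documented buf and len fields pointing at the DMA-able data buffer; the payload copy and function names are example-only.

#include <stddef.h>
#include <string.h>
#include "ifc_libmqdma.h"

struct ifc_qdma_request *make_h2d_request(const void *payload, size_t len)
{
    struct ifc_qdma_request *req = ifc_request_malloc(len);  /* DMA-able, huge-page backed */
    if (!req)
        return NULL;

    memcpy(req->buf, payload, len);   /* H2D: data in buf is moved to the FPGA (assumes buf is pre-populated) */
    req->len = len;                   /* length of the data in this descriptor */
    return req;
}

void drop_request(struct ifc_qdma_request *req)
{
    ifc_request_free(req);            /* return the buffer to the free pool */
}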
8.1.6.25. ifc_app_stop
8.1.6.26. ifc_qdma_poll_init
API: int ifc_qdma_poll_init(struct ifc_qdma_device *qdev)
Description: Resets the poll eventfds. The application needs to pass this fd_set to poll if MSI-X interrupts are enabled.
Input Parameters: qdev - QDMA device.
Return Values: 0 on success, negative otherwise.
8.1.6.27. ifc_qdma_poll_add
API: int ifc_qdma_poll_add(struct ifc_qdma_device *qdev, struct ifc_qdma_channel *chnl, int dir)
Description: Appends event fds to the poll list.
Input Parameters: qdev - QDMA device; chnl - channel context; dir - direction that needs to be polled.
Return Values: 0 on success, negative otherwise.
8.1.6.28. ifc_qdma_poll_wait
API: int ifc_qdma_poll_wait(struct ifc_qdma_device *qdev, struct ifc_qdma_channel **chnl, int *dir)
Description: Monitors interrupts for all added queues. If any interrupt arrives, it returns. Timeout: 1 msec.
Input Parameters: qdev - QDMA device; chnl - address of the channel context; dir - address of the direction parameter.
Return Values: 0 on success and updates the channel context and direction, negative otherwise.
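The poll APIs above can be combined as in the following sketch, which waits for an MSI-X completion on one channel and direction. The retry loop and the bookkeeping are example assumptions, not the library's reference flow.

#include "ifc_libmqdma.h"

int wait_for_completion(struct ifc_qdma_device *qdev,
                        struct ifc_qdma_channel *chnl, int dir)
{
    struct ifc_qdma_channel *done_chnl = NULL;
    int done_dir = 0;

    if (ifc_qdma_poll_init(qdev) != 0)            /* reset the poll eventfds */
        return -1;
    if (ifc_qdma_poll_add(qdev, chnl, dir) != 0)  /* watch this channel/direction */
        return -1;

    /* Blocks (1 msec timeout per the API table) until an interrupt arrives;
     * a real application would bound this retry loop. */
    while (ifc_qdma_poll_wait(qdev, &done_chnl, &done_dir) != 0)
        ;

    return (done_chnl == chnl && done_dir == dir) ? 0 : 1;
}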
8.1.6.29. ifc_mcdma_port_by_name
API: int ifc_mcdma_port_by_name(const char *bdf)
Description: This function returns the port number corresponding to the BDF.
Input Parameters: bdf - the BDF of the device.
Return Values: The port number on success, negative otherwise.
uint64_t metadata;
void *ctx; /* libqdma context, NOT for application */
};
1. buf: DMA buffer. In the case of H2D, this buffer's data is moved to the FPGA. In the case of D2H, the FPGA copies the content to this buffer.
2. len: Length of the data in this descriptor.
3. pyld_cnt: D2H: length of the valid data when the descriptor contains EOF. H2D: this field is not used.
4. flags: This is the mask which contains the flags that describe the content. Currently, these flags are used to signal the SOF and EOF of the data.
5. metadata: In the case of H2D, you need to update the metadata in this field. In the case of D2H, the driver updates the metadata back.
Note: For the Single Port AVST Design, the SOF and EOF should be on the same descriptor, or SOF can be at the start descriptor and EOF at the end descriptor of a single TID update.
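The field descriptions above translate into the following hedged sketch for a single-descriptor H2D request. The SOF/EOF flag macro names are not spelled out in this guide, so they appear only as a placeholder comment, and the field types and function name are assumptions.

#include <stddef.h>
#include <stdint.h>
#include "ifc_libmqdma.h"

void fill_h2d_request(struct ifc_qdma_request *req,
                      void *dma_buf, size_t len, uint64_t meta)
{
    req->buf      = dma_buf;   /* H2D: this buffer's data is moved to the FPGA */
    req->len      = len;       /* length of the data in this descriptor */
    req->pyld_cnt = 0;         /* used only for D2H when the descriptor carries EOF */
    req->metadata = meta;      /* H2D: application-supplied metadata */

    /* Single-descriptor packet: set both the SOF and EOF bits in req->flags using
     * the library's flag definitions (names omitted here, as the guide does not list them). */
    req->flags = 0;
}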
8.2.1. Architecture
Figure 51. MCDMA Driver Architecture
igb_uio
This is the PCIe endpoint kernel module provided by the DPDK framework; on top of it, some patches are added to support the MSI-X and SR-IOV features. By using PCIe, sysfs, and interrupt framework utilities, this module allows user space to access the device.
vfio-pci
vfio-pci is the base kernel module that allows you to access the device and program the IOMMU by using the ioctl interface. If you want to enable the VFs by using vfio-pci, you may need to use a kernel version > 5.7.
MCDMA PMD
This is a poll mode driver which implements the APIs to perform channel management, device management, and DMA in both H2D and D2H directions. This module exposes the device as an ethdev interface. It uses DPDK Environment Abstraction Layer (EAL) utilities to perform memory management and device management.
The test application continuously sends/receives data traffic to/from the device and uses the following command line arguments as the input:
• Total message sizes/ time duration
• Packet size per descriptor
• Write/Read
• Completion reporting method
• Number of channels
The test application runs multiple threads for accessing the DMA channels. It also has performance measurement capability. Based on the number of threads being used and the number of channels being processed, queues are scheduled on threads.
testpmd
The testpmd application can also be used to test the DPDK in a packet forwarding
mode.
The following command line arguments are used to initiate data transfer from Host to
device or device to Host:
• Forwarding mode
• Number of CPU cores
• TX and RX channels per port
• Number of packets per burst
• Number of descriptors in the RX and TX ring
• Maximum packet length
Note: testpmd support is provided only in CentOS and not provided in Ubuntu.
There are two modes for selecting the descriptor completion status: MSI-X mode and Writeback mode. The default mode is Writeback. This can be changed, if desired, in the following C header file:
drivers/net/mcdma/rte_pmd_mcdma.h
Writeback: In this approach, the MCDMA IP updates the completed descriptor index in host memory. The MCDMA PMD therefore performs a local memory read rather than a PCIe read.
MSI-X: In this approach, when the transaction is completed, the MCDMA IP sends an interrupt to the Host and updates the completed descriptor index in host memory. The MCDMA PMD reads the completion status upon receiving the interrupt.
Register polling: In this approach, the driver learns the completion status by polling the completion head register. Because a register read is costly from the host perspective, the performance for smaller payloads is lower in this approach.
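The cost difference between writeback and register polling can be seen in a small sketch; the names and the register offset are placeholders, not the actual MCDMA PMD symbols.

#include <stdint.h>

/* Writeback mode: the IP DMA-writes the consumed head index into host
 * memory, so checking completion is a cheap local memory read. */
static inline uint32_t head_from_writeback(const volatile uint32_t *wb_head)
{
    return *wb_head;
}

/* Register-poll mode: the consumed head is read from the QCSR over PCIe
 * (MMIO), which is costly and limits small-payload performance. */
static inline uint32_t head_from_register(const volatile uint8_t *qcsr,
                                          uint32_t head_reg_off /* placeholder */)
{
    return *(const volatile uint32_t *)(qcsr + head_reg_off);
}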
Metadata is 8-byte private data that the Host sends to the device in the H2D direction and the device sends to the Host in the D2H direction. In the case of the AVMM interface, both the srcaddr and dstaddr fields are used. In the case of the AVST interface, dstaddr in H2D and srcaddr in D2H are used to store the private metadata. Because both addresses are used in AVMM, metadata support is not available if AVMM is enabled.
The application is responsible for polling a set of registered interrupt addresses; if a User MSI-X is triggered, the corresponding registered interrupt callback is called. Currently, this callback address is passed in the private pointers of the queue configuration structures, as shown below.
int
rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
                       uint16_t nb_rx_desc, unsigned int socket_id,
                       const struct rte_eth_rxconf *rx_conf,
                       struct rte_mempool *mp)
struct rte_eth_rxconf {
    struct rte_eth_thresh rx_thresh;  /**< RX ring threshold registers. */
    uint16_t rx_free_thresh;          /**< Drives the freeing of RX descriptors. */
    uint64_t offloads;
    …
    uint64_t reserved_64s[2];         /**< Reserved for future fields */
    void *reserved_ptrs[2];           /**< reserved_ptrs[0] should be populated with the user MSI-X callback */
};
int
rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
                       uint16_t nb_tx_desc, unsigned int socket_id,
                       const struct rte_eth_txconf *tx_conf)
struct rte_eth_txconf {
    struct rte_eth_thresh tx_thresh;  /**< TX ring threshold registers. */
    uint16_t tx_rs_thresh;            /**< Drives the setting of RS bit on TXDs. */
    uint16_t tx_free_thresh;
    …
    uint64_t reserved_64s[2];         /**< Reserved for future fields */
    void *reserved_ptrs[2];           /**< reserved_ptrs[0] should be populated with the user MSI-X callback */
};
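A hedged usage sketch for registering the callback before queue setup follows; the callback signature is a placeholder, and the actual type expected by the MCDMA PMD is defined in rte_pmd_mcdma.h.

#include <string.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>

/* Placeholder callback; invoked when the registered user MSI-X fires. */
static void user_msix_cb(void *arg)
{
    (void)arg;
}

static int setup_rx_queue_with_user_msix(uint16_t port_id, uint16_t queue_id,
                                         uint16_t nb_rx_desc,
                                         struct rte_mempool *mbuf_pool)
{
    struct rte_eth_rxconf rx_conf;

    memset(&rx_conf, 0, sizeof(rx_conf));
    rx_conf.reserved_ptrs[0] = (void *)user_msix_cb;   /* user MSI-X callback */

    return rte_eth_rx_queue_setup(port_id, queue_id, nb_rx_desc,
                                  rte_socket_id(), &rx_conf, mbuf_pool);
}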
Based on the input parameters, the application starts multiple threads and submits DMA transactions one at a time in a run-to-completion model. Because multiple threads may try to grab and release a channel at the same time, the MCDMA PMD handles synchronization while performing channel management.
DPDK thread management libraries are used for thread creation and initialization. Because more queues than threads must be supported, the test application schedules multiple queues on a single thread for DMA operations.
The user space driver performs the DMA operation. Hence, kernel context switch handling is not needed in any scenario.
Note: For the Avalon-ST design, the application should pass the SOF, EOF, and metadata to the driver in the private structure added to the rte_mbuf structure. The SOF and EOF flags should be set based on the file size. For example, if file_size = 127, the SOF flag should be set in the 0th descriptor and the EOF flag should be set in the 126th descriptor.
struct private_data {
uint64_t flags; /* SOF, EOF */
uint64_t metadata; /* Private meta data */
};
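A minimal sketch of attaching this private data to an rte_mbuf (using the private_data structure shown above) is given below, assuming the mbuf pool was created with priv_size >= sizeof(struct private_data); the SOF/EOF bit values are placeholders.

#include <stdint.h>
#include <rte_mbuf.h>

#define PRIV_FLAG_SOF (1ULL << 0)   /* placeholder SOF bit */
#define PRIV_FLAG_EOF (1ULL << 1)   /* placeholder EOF bit */

static void set_avst_private(struct rte_mbuf *m, int sof, int eof, uint64_t meta)
{
    /* rte_mbuf_to_priv() returns the application private area that
     * immediately follows the rte_mbuf structure. */
    struct private_data *p = rte_mbuf_to_priv(m);

    p->flags    = (sof ? PRIV_FLAG_SOF : 0) | (eof ? PRIV_FLAG_EOF : 0);
    p->metadata = meta;
}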
Step 1
• QCSR registers (see the sketch after this list):
— Q_RESET (Offset 8’h48)
— Q_TAIL_POINTER (Offset 8’h14), set to 0
— Q_START_ADDR_L (Offset 8’h08)
— Q_START_ADDR_H (Offset 8’h0C)
— Q_SIZE (Offset 8’h10)
— Q_CONSUMED_HEAD_ADDR_L (Offset 8’h20)
— Q_CONSUMED_HEAD_ADDR_H (Offset 8’h24)
— Q_BATCH_DELAY (Offset 8’h28)
— Q_CTRL (Offset 8’h00): set the q_en and q_wb/intr_en bits
— Q_PYLD_COUNT (Offset 8’h44)
• Once all the queues are configured, the application then starts the device.
• The application creates threads based on the number of queues specified.
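A minimal sketch of this Step 1 programming sequence, assuming a per-queue QCSR base pointer mapped from BAR0 and 32-bit MMIO writes; the Q_RESET write value and the exact Q_CTRL bit positions are assumptions, not taken from the register map.

#include <stdint.h>

/* 32-bit MMIO write into the queue's QCSR block (CSR accesses are
 * limited to 32 bits at a time). */
static inline void qcsr_wr32(volatile uint8_t *qcsr, uint32_t off, uint32_t val)
{
    *(volatile uint32_t *)(qcsr + off) = val;
}

/* Step 1: program one queue. ring_pa is the descriptor ring address,
 * head_pa is the writeback location for the consumed head. q_ctrl_bits
 * carries q_en and q_wb/intr_en (bit positions not shown here). */
static void queue_configure(volatile uint8_t *qcsr, uint64_t ring_pa,
                            uint32_t q_size, uint64_t head_pa,
                            uint32_t q_ctrl_bits)
{
    qcsr_wr32(qcsr, 0x48, 1);                          /* Q_RESET (assumed write of 1) */
    qcsr_wr32(qcsr, 0x14, 0);                          /* Q_TAIL_POINTER = 0 */
    qcsr_wr32(qcsr, 0x08, (uint32_t)ring_pa);          /* Q_START_ADDR_L */
    qcsr_wr32(qcsr, 0x0C, (uint32_t)(ring_pa >> 32));  /* Q_START_ADDR_H */
    qcsr_wr32(qcsr, 0x10, q_size);                     /* Q_SIZE */
    qcsr_wr32(qcsr, 0x20, (uint32_t)head_pa);          /* Q_CONSUMED_HEAD_ADDR_L */
    qcsr_wr32(qcsr, 0x24, (uint32_t)(head_pa >> 32));  /* Q_CONSUMED_HEAD_ADDR_H */
    qcsr_wr32(qcsr, 0x28, 0);                          /* Q_BATCH_DELAY (default assumed) */
    qcsr_wr32(qcsr, 0x00, q_ctrl_bits);                /* Q_CTRL: q_en, q_wb/intr_en */
}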
Step 2
The thread requests a new descriptor to submit the request and updates the required fields, i.e., descriptor index, SOF, EOF, payload, MSI-X enable, and writeback enable.
Step 3
After initializing the descriptor ring buffer, the MCDMA PMD writes the number of descriptor updates into the tail register of the QCSR region. On every descriptor update, the tail pointer is incremented by 1. QCSR tail pointer register: Q_TAIL_POINTER (Offset 8’h14).
Step 4
• Once the tail pointer write happens, MCDMA IP fetches descriptors from host
memory starting from the programmed Q_START_ADDR_L/H address.
• MCDMA IP parses the descriptor content to find the source and destination addresses and the length of the data, and starts the DMA operation.
Step 5
Host to Device sequence: rte_eal_init() performs sysfs PCIe enumeration and maps the PCIe BARs (sysfs_enum_pcie, sysfs_mmap_pcie_bar). rte_eth_dev_configure() calls ifc_mcdma_dev_configure(), which probes the channel resources over MMIO. rte_eth_tx_queue_setup() calls ifc_mcdma_dev_tx_queue_setup(). rte_eth_dev_set_mtu() calls ifc_mcdma_dev_mtu_set(). rte_eth_dev_start() calls ifc_mcdma_dev_start(), which enables the queues over MMIO. In a loop, rte_eth_tx_burst() calls ifc_mcdma_xmit_pkts(), which bumps the ring tail over MMIO; the read head is updated asynchronously and completions are processed. Finally, cleanup() is called.
The flow between the Host software components and the hardware components for Device to Host data transfer is summarized below.
Device to Host sequence: rte_eal_init() performs sysfs PCIe enumeration and maps the PCIe BARs (sysfs_enum_pcie, sysfs_mmap_pcie_bar). rte_eth_dev_configure() calls ifc_mcdma_dev_configure(), which probes the channel resources over MMIO. rte_eth_rx_queue_setup() calls ifc_mcdma_dev_rx_queue_setup(). rte_eth_dev_set_mtu() calls ifc_mcdma_dev_mtu_set(). rte_eth_dev_start() calls ifc_mcdma_dev_start(), which enables the queues over MMIO. In a loop, rte_eth_rx_burst() calls ifc_mcdma_recv_pkts(), which bumps the ring tail over MMIO; the read head is updated asynchronously and completions are processed. Finally, cleanup() is called.
Prototype: int rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, const struct rte_eth_conf *dev_conf)
Description: This API configures an Ethernet device. This function must be invoked first, before any other function in the Ethernet API.
Input parameters: port_id: device ID; nb_rx_q: number of Rx queues; nb_tx_q: number of Tx queues; dev_conf: input configuration
Return value: 0: success, device configured; <0: error code returned by the driver configuration function
Prototype: int rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id, uint16_t nb_tx_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf)
Description: This API allocates and sets up a transmit queue for the DMA device.
Input parameters: port_id: Port ID of the device; tx_queue_id: Queue ID; nb_tx_desc: number of Tx descriptors to allocate for the transmit ring; socket_id: socket identifier; tx_conf: TX configuration context
Return value: 0 on success, negative otherwise
Prototype: int rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, uint16_t nb_rx_desc, unsigned int socket_id, const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp)
Description: This API allocates and sets up a receive queue for the DMA device.
Input parameters: port_id: Port ID of the device; rx_queue_id: Queue ID; nb_rx_desc: number of Rx descriptors to allocate for the receive ring; socket_id: socket identifier; rx_conf: RX configuration context; mp: pointer to the memory pool from which memory buffers are allocated for each descriptor of the receive ring
Return value: 0 on success, negative otherwise
Prototype: int rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
Description: This API sets the payload value for the processing.
Input parameters: port_id: Port ID of the device; mtu: MTU to be applied
Return value: 0 on success, negative otherwise
Prototype: int rte_eth_dev_start(uint16_t port_id)
Description: This API starts the Ethernet device by initializing descriptors, QCSR, and the Rx and Tx context.
Input parameters: port_id: Port ID of the device
Return value: 0 on success, negative otherwise
Prototype: void rte_eth_dev_stop(uint16_t port_id)
Description: This API stops the Ethernet device.
Input parameters: port_id: Port ID of the device
Return value: void

Prototype: void rte_eth_dev_close(uint16_t port_id)
Description: This API closes the Ethernet device.
Input parameters: port_id: Port ID of the device
Return value: void
Prototype: static inline uint16_t rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id, struct rte_mbuf **tx_pkts, const uint16_t nb_pkts)
Description: This API is used to transmit a burst of packets.
Input parameters: port_id: Port ID of the device; queue_id: Queue ID; tx_pkts: array of pointers to rte_mbuf structures; nb_pkts: maximum number of packets to transmit
Return value: The number of output packets actually stored in transmit descriptors
Prototype: static inline uint16_t rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id, struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
Description: This API is used to receive a burst of packets.
Input parameters: port_id: Port ID of the device; queue_id: Queue ID; rx_pkts: array of pointers to rte_mbuf structures; nb_pkts: maximum number of packets to retrieve
Return value: The number of packets actually retrieved
Prototype: uint64_t ifc_mcdma_pio_read64(struct ifc_mcdma_device *qdev, uint64_t addr) (New)
Description: Reads a 64-bit value from a BAR2 address. This API is used for PIO testing, dumping statistics, pattern generation, etc.
Input parameters: qdev: MCDMA device; addr: address to read
Return value: 0 on success, negative otherwise
Prototype: void ifc_mcdma_pio_write64(struct ifc_mcdma_device *qdev, uint64_t addr, uint64_t val) (New)
Description: Writes a 64-bit value to a BAR2 address.
Input parameters: qdev: MCDMA device; addr: address to write; val: value to write
Return value: 0 on success and populates channel context; negative otherwise
Prototype: int ifc_mcdma_pio_read128(uint16_t portid, uint64_t offset, uint64_t *buf, int bar_num) (New)
Description: Reads a 128-bit value from the specified address of the specified BAR. This API is used for PIO testing, etc.
Input parameters: portid: port ID of the device; offset: address to read within the BAR; buf: buffer for the read data; bar_num: BAR number
Return value: 0 on success, negative otherwise
Prototype: int ifc_mcdma_pio_write128(uint16_t portid, uint64_t offset, uint64_t *val, int bar_num) (New)
Description: Writes a 128-bit value to the specified address of the specified BAR. This API is used for PIO testing, etc.
Input parameters: portid: port ID of the device; offset: address to write within the BAR; val: value to write; bar_num: BAR number
Return value: 0 on success and populates channel context; negative otherwise
Prototype: int ifc_mcdma_pio_read256(uint16_t portid, uint64_t offset, uint64_t *buf, int bar_num) (New)
Description: Reads a 256-bit value from the specified address of the specified BAR. This API is used for PIO testing, etc.
Input parameters: portid: port ID of the device; offset: address to read within the BAR; buf: buffer for the read data; bar_num: BAR number
Return value: 0 on success, negative otherwise
Prototype: int ifc_mcdma_pio_write512(uint16_t portid, uint64_t offset, uint64_t *val, int bar_num) (New)
Description: Writes a 512-bit value to the specified address of the specified BAR. This API is used for PIO testing, etc.
Input parameters: portid: port ID of the device; offset: address to write within the BAR; val: value to write; bar_num: BAR number
Return value: 0 on success and populates channel context; negative otherwise
8.3.1. Architecture
Figure 55. MCDMA IP Kernel Mode Network Device Driver Architecture
(Diagram: On the SoC, user-space Linux performance tools (iperf, netperf, tcpdump) and device configuration and management utilities (ethtool, ip, ifconfig) run on top of the network devices; in kernel space, the McDMA network driver provides per-device RX/TX queues over the PFs; each PF exposes MC-DMA H2D/D2H channels to the McDMA hardware across a PCIe Gen3 x8 link.)
The network driver exposes the device as an Ethernet interface. The following are the different components involved in this architecture.
ethtool, ip, and ifconfig are utilities that are part of the kernel tree and are used to configure and manage the device.
iperf, netperf, and iperf3 are open-source applications that are typically used to verify the performance of network-based applications.
The ifc_mcdma_netdev driver supports the ethtool, ifconfig, and ip utilities to configure and manage the device.
IP Reset
ifconfig support
By using ifconfig, the driver supports bring-down and bring-up of the device. To support these operations, the driver overrides the ndo_open and ndo_stop operations of the device.
When you bring down the device by using the ifconfig command, the kernel changes the state of the device to the DOWN state and executes the registered callback. As a result of the callback, the driver stops the TX queue, disables the interrupts, and releases the acquired channels and all the internal resources allocated to the device.
When you bring up the device by using the ifconfig command, the kernel changes the state of the device to the UP state and executes the registered callback. As a result of the callback, the driver starts the TX queue, and acquires and enables the channels and the corresponding interrupts, as sketched below.
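A minimal sketch of these overrides follows; the function names are placeholders, and the channel/interrupt handling is summarized in comments.

#include <linux/netdevice.h>

/* ndo_open: called when the interface is brought UP with ifconfig/ip. */
static int mcdma_ndo_open(struct net_device *ndev)
{
    /* acquire and enable channels, enable the corresponding interrupts */
    netif_tx_start_all_queues(ndev);
    return 0;
}

/* ndo_stop: called when the interface is brought DOWN. */
static int mcdma_ndo_stop(struct net_device *ndev)
{
    netif_tx_stop_all_queues(ndev);
    /* disable interrupts, release channels and internal resources */
    return 0;
}

static const struct net_device_ops mcdma_netdev_ops = {
    .ndo_open = mcdma_ndo_open,
    .ndo_stop = mcdma_ndo_stop,
};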
Each network device is associated with one physical or virtual function and can support up to 512 channels. As part of network device (ifc_mcdma<i>) bring-up, all channels of that device are initialized and enabled. Whenever a packet arrives from the application, one of the Tx queues is selected to transfer it.
1. Linux default queue selection: In this case, the queue is selected based on the logic provided by the Linux multi-queue support feature.
2. XPS (Transmit Packet Steering): This technique is part of the Linux kernel and provides a mechanism to map multiple cores to a Tx queue of the device. For all packets coming from any of these cores, the mapped Tx queue is used for the transfer. For more information, refer to XPS: Transmit Packet Steering.
3. MCDMA custom queue selection: This technique provides a mechanism to map multiple queues to each core. For each core, a separate list is managed to keep track of every flow of transfer coming to the core. This is done using a 4-tuple hash over the IP and TCP addresses of each packet and the Tx queue allocated for that flow.
Upon receipt of a Tx packet from the upper layers, a lookup in this table is done using the hash of that packet. If a match is found, the corresponding queue is used for the transfer. Otherwise, a new queue is allocated to this new flow, as sketched below.
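A simplified sketch of the flow-hash idea follows; the real driver additionally keeps a per-core flow table so that each flow sticks to the queue allocated for it, which is omitted here.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static u16 mcdma_select_queue(struct net_device *dev, struct sk_buff *skb,
                              struct net_device *sb_dev)
{
    /* skb_get_hash() yields a 4-tuple based flow hash; mapping it onto
     * the TX queues keeps packets of one flow on one queue. */
    u32 hash = skb_get_hash(skb);

    return (u16)(hash % dev->real_num_tx_queues);
}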
As a part of channel initialization, the driver allocates the memory for the descriptors and associates it with the channel. The driver uses the dma_alloc_coherent API of the Linux DMA framework to allocate non-swappable and physically contiguous memory.
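A minimal sketch of the descriptor ring allocation (the ring context structure is a placeholder):

#include <linux/dma-mapping.h>
#include <linux/errno.h>

struct mcdma_ring {
    void       *desc;      /* CPU virtual address of the descriptor ring */
    dma_addr_t  desc_dma;  /* DMA address programmed into Q_START_ADDR_L/H */
    size_t      size;
};

static int mcdma_ring_alloc(struct device *dev, struct mcdma_ring *ring,
                            size_t bytes)
{
    /* dma_alloc_coherent() returns non-swappable, physically contiguous
     * memory suitable for the descriptor ring. */
    ring->desc = dma_alloc_coherent(dev, bytes, &ring->desc_dma, GFP_KERNEL);
    if (!ring->desc)
        return -ENOMEM;
    ring->size = bytes;
    return 0;
}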
The kernel module and the hardware support the MSI-X interrupt mechanism as the descriptor processing completion indication. At queue initialization, the driver enables the interrupts in the kernel by using the interrupt framework.
Debugfs directories are created, and the device- and channel-specific files are created, while initializing the channels.
VF device creation
The Netdev driver supports enabling the interrupts and performing DMA from virtual functions. The Netdev driver registers the SR-IOV callback and reuses the sysfs infrastructure created by the kernel.
IOMMU Support
If the host supports an IOMMU and it is enabled through the boot parameters, the Netdev driver maps the DMA buffers to the IOMMU by using the dma_map_single API and configures the I/O virtual address in the descriptor. This protects the host from attacks attempted by malicious or unsecured devices. If the IOMMU is disabled, netdev configures the physical address provided by the MMU in the host.
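A minimal sketch of the mapping step (the helper name is a placeholder):

#include <linux/dma-mapping.h>
#include <linux/errno.h>

/* With the IOMMU enabled, the returned handle is an I/O virtual address;
 * without it, it resolves to the physical address. Either way, it is the
 * value placed in the descriptor. */
static int mcdma_map_tx_buf(struct device *dev, void *buf, size_t len,
                            dma_addr_t *handle)
{
    *handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, *handle))
        return -ENOMEM;
    return 0;
}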
Figure 56. MCDMA IP Kernel Mode Network Device Driver : Software Flow
When a user space application attempts to send a packet to the network device:
1. The application generates the data, and the data can be copied to the kernel.
2. The TCP/IP stack creates the skb and calls the transmit handler registered through the ndo_start_xmit callback overridden by the MCDMA network driver.
3. The driver retrieves the physical address or I/O virtual address, loads the descriptor, and submits the DMA transaction (sketched below).
On completion:
1. The hardware completes the transaction and notifies the host via an interrupt.
2. The MCDMA driver receives the completion and frees the allocated skb.
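A skeleton of this transmit path follows; the function name is a placeholder, and the descriptor loading and tail-pointer update are summarized in a comment.

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/dma-mapping.h>

static netdev_tx_t mcdma_start_xmit(struct sk_buff *skb, struct net_device *ndev)
{
    struct device *dma_dev = ndev->dev.parent;
    dma_addr_t addr;

    /* Step 3: obtain the physical or I/O virtual address of the data. */
    addr = dma_map_single(dma_dev, skb->data, skb->len, DMA_TO_DEVICE);
    if (dma_mapping_error(dma_dev, addr)) {
        dev_kfree_skb_any(skb);
        return NETDEV_TX_OK;
    }

    /* Here the driver loads a descriptor with addr and skb->len, keeps the
     * skb for release on completion, and bumps the queue tail pointer.
     * The completion interrupt later unmaps the buffer and frees the skb. */
    return NETDEV_TX_OK;
}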
9. Registers
The Multi Channel DMA IP for PCI Express provides configuration, control and status
registers to support the DMA operations including:
• D2H and H2D Queue control and status (QCSR)
• MSI-X Table and PBA for interrupt generation
• General/global DMA control (GCSR)
These Multi Channel DMA registers are mapped to BAR0 of a function.
Note: Read/Write access to CSR address space is limited to 32-bits at a time through the
Mrd / Mwr commands from the host.
The following table shows the 4 MB aperture space mapped for PF0 in PCIe config space through BAR0.
MSI-X (Table and PBA): 22’h10_0000 - 22’h1F_FFFF, 1 MB, MSI-X Table and PBA space
The following table shows how the QCSR registers for each DMA channel are mapped within the 1 MB QCSR space.
QCSR (D2H), 512 KB: DMA Channel 0 - 256 B QCSR for DMA channel 0; and so on for each D2H channel
QCSR (H2D), 512 KB: DMA Channel 0 - 256 B QCSR for DMA channel 0; and so on for each H2D channel
The following registers are defined for the H2D/D2H queues. The base addresses for H2D and D2H are different, but the registers (H2D and D2H) have the same address offsets.
The following registers are defined for each implemented H2D and D2H queue. The total QCSR address space for each H2D/D2H queue is 256 B and requires 8 bits of address.
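A minimal addressing sketch follows, assuming the 1 MB QCSR block begins at offset 0 of BAR0 and that the D2H region precedes the H2D region (both are assumptions based on the ordering of the table above):

#include <stdint.h>

#define QCSR_D2H_BASE  0x00000u   /* assumed: start of the 1 MB QCSR block */
#define QCSR_H2D_BASE  0x80000u   /* assumed: 512 KB after the D2H region */
#define QCSR_CH_STRIDE 0x100u     /* 256 B of QCSR per DMA channel */

/* Byte offset (within BAR0) of a queue register, where reg_off is the
 * 8-bit register offset (for example Q_CTRL = 0x00, Q_SIZE = 0x10). */
static inline uint32_t qcsr_offset(int is_h2d, uint32_t chan, uint32_t reg_off)
{
    uint32_t base = is_h2d ? QCSR_H2D_BASE : QCSR_D2H_BASE;
    return base + chan * QCSR_CH_STRIDE + reg_off;
}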
MSI-X Table
Each entry (vector) is 16 bytes (4 DWORDs) and is divided into Message Address, Data, and Mask (Vector Control) fields as shown in the figure below. To support 2048 interrupts, the MSI-X Table requires 32 KB of space per function, but it is mapped to a 512 KB space.
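The 16-byte entry layout described above can be viewed as the following C structure (a descriptive sketch of the standard PCIe MSI-X entry format, not an IP-specific definition):

#include <stdint.h>

/* One MSI-X table entry: 4 DWORDs = 16 bytes. */
struct msix_table_entry {
    uint32_t msg_addr_lo;  /* Message Address [31:0] */
    uint32_t msg_addr_hi;  /* Message Upper Address [63:32] */
    uint32_t msg_data;     /* Message Data */
    uint32_t vector_ctrl;  /* Vector Control; bit 0 = Mask */
};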
MSI-X PBA
MSI-X PBA (Pending Bit Array) memory space is mapped to a 512 KB region. The actual amount of memory depends on the IP configuration. The Pending Bit Array contains the Pending bits, one per MSI-X Table entry, in an array of QWORDs (64 bits). The PBA format is shown below.
10. Troubleshooting/Debugging
10.1.1. Overview
The Debug Toolkit is a System Console-based tool that provides real-time control,
monitoring and debugging of the PCIe links at the Physical Layer.
Note: Unless otherwise noted, the features described in this chapter apply to the P-Tile, F-Tile, and R-Tile versions of the MCDMA IP.
Note: The current version of Quartus Prime supports enabling the Debug Toolkit for Endpoint mode only and with the Linux and Windows operating systems only.
Note: The Debug Toolkit is not verified in Root Port mode for the MCDMA IP.
The following figure provides an overview of the Debug Toolkit in the Multi Channel
DMA IP for PCI Express.
Drive the Debug Toolkit from a System Console. The System Console connects to the Debug Toolkit via a Native PHY Debug Master Endpoint (NPDME). Make this connection via an Intel FPGA Download Cable.
Provide a clock source (50 MHz - 125 MHz, 100 MHz recommended clock frequency) to
drive the xcvr_reconfig_clk clock. Use the output of the Reset Release Intel FPGA
IP to drive the ninit_done, which provides the reset signal to the NPDME module.
Note: When you enable the Debug Toolkit, the Hard IP Reconfiguration interface is enabled
by default.
When you run a dynamically-generated design example on the Intel Development Kit,
make sure that clock and reset signals are connected to their respective sources and
appropriate pin assignments are made. Here are sample .qsf assignments for the Debug Toolkit.
For F-Tile, the statements below are needed in the .qsf for the Debug Toolkit launch:
set_location_assignment PIN_CK18 -to xcvr_reconfig_clk_clk
set_instance_assignment -name IO_STANDARD "TRUE DIFFERENTIAL SIGNALING" -to
xcvr_reconfig_clk_clk -entity pcie_ed
For R-Tile:
set_location_assignment PIN_AN61 -to xcvr_reconfig_clk_clk
set_instance_assignment -name IO_STANDARD "TRUE DIFFERENTIAL SIGNALING" -to
xcvr_reconfig_clk_clk -entity pcie_ed
The user AVMM reconfiguration interface has default access (the default is when
toolkit_mode = 0). Upon launching the Debug Toolkit (DTK) from System Console,
toolkit_mode is automatically set to 1 for DTK access. While the Debug Toolkit is open
in System Console, user logic is not able to drive the signals on the user AVMM
interface as the multiplexer is set to toolkit_mode = 1. Upon exiting (closing) the
Debug Toolkit window in System Console, toolkit_mode is automatically set to 0 for
user access.
The Debug Toolkit can be launched successfully only if pending read/write transactions
on the reconfiguration interface are completed (indicated by the deassertion of the
reconfig_waitrequest signal).
Note: Upon being launched from System Console, the Debug Toolkit first checks if any of the
waitrequest signals from the Hard IP are asserted (i.e. if there is an ongoing
request from the user). The System Console message window shows an error
message to let the user know there is an ongoing request and the Debug Toolkit
cannot be launched.
To use the Debug Toolkit, download the .sof to the Intel Development Kit. Then, open
the System Console and load the design to the System Console as well. Loading
the .sof to the System Console allows the System Console to communicate with the
design using NPDME. NPDME is a JTAG-based Avalon-MM master. It drives Avalon-MM
slave interfaces in the PCIe design. When using NPDME, the Quartus Prime software
inserts the debug interconnect fabric to connect with JTAG.
1. Use the Quartus Prime Programmer to download the .sof to the Intel FPGA
Development Kit.
Note: To ensure correct operation, use a full installation of the Quartus Prime Pro Edition software and device support that is the same version as the Quartus Prime Programmer and Quartus Prime Pro Edition software you used to generate the .sof.
Note: A standalone install of the Quartus Prime Pro Edition Programmer and Tools
will not work.
2. To load the design into System Console:
a. Launch the Quartus Prime Pro Edition software.
b. Start System Console by choosing Tools, then System Debugging Tools,
then System Console.
c. On the System Console File menu, select Load design and browse to the .sof
file.
d. Select the .sof and click OK. The .sof loads to the System Console.
3. The System Console Toolkit Explorer window will list all the DUTs in the design
that have the Debug Toolkit enabled.
a. Select the DUT with the Debug Toolkit you want to view. This will open the
Debug Toolkit instance of that DUT in the Details window. An example of P-
Tile Debug Toolkit is shown in the following figure.
c. A new window Main view will open with a view of all the channels in that
instance.
The main view tab lists a summary of the transmitter and receiver settings per
channel for the given instance of the PCIe IP.
The following table shows the channel mapping when using subdivided ports.
Tile Information
This lists a summary of the PCIe IP parameter settings in the PCIe IP Parameter Editor
when the IP was generated, as read by the Debug Toolkit when initialized. If you have
port subdivision enabled in your design (for example, x8x8), then this tab will
populate the information for each core (P0 core, P1 core, etc.).
Port Type: Root Port, Endpoint (1) - Indicates the Hard IP Port type.
Link status: Link up, Link down - Indicates if the link (DL) is up or not.
Replay Timer Timeout: Green, Red - Green: no timeout; Red: timeout.
(1) The current version of Quartus Prime supports enabling the Debug Toolkit for Endpoint mode
only, and for the Linux and Windows operating systems only.
Event Counter
This tab allows you to read the error events like the number of receiver errors,
framing errors, etc. for each port. You can use the Clear P0 counter/Clear P1
counter to reset the error counter.
A separate tab allows you to read the configuration space registers for that port; you will see one such configuration space tab for each port.
The channel parameters window allows you to read the transmitter and receiver
settings for a given channel. It has the following 3 sub-windows. Use the Lane
Refresh button to read the status of the General PHY, TX Path, and RX Path sub-
windows for each channel.
Note: To refresh the channel parameters for more than one lane simultaneously, select the lanes under the Collection tab, right-click, and select Refresh.
General PHY
This tab shows the reset status of the PHY. In the F-Tile debug toolkit, the PHY Reset
is not available. The reset status is indicated by PIPE PhyStatus under Tile Information
tab.
TX Path
This tab allows you to monitor the transmitter settings for the channel selected.
TX Equalization Status: Not attempted, Completed, Unsuccessful - Indicates the transmitter equalization status. The TX local and remote parameters are valid only when the value of the Equalization status is returned as Completed, indicating equalization has completed successfully.
corresponds to the coefficient received during Phase 2 of Equalization.
Note: (†) Refer to the following sections of the PCI Express Base Specification Revision 4.0:
4.2.3 Link Equalization Procedure for 8.0 GT/s and Higher Data Rates and 8.3.3 Tx
Voltage Parameters.
RX Path
This tab allows you to monitor and control the receiver settings for the channel
selected.
RX Status
RX Polarity: No polarity inversion, Polarity inversion - Indicates RX polarity inversion for the selected lane. No polarity inversion: no polarity inversion on RX. Polarity inversion: polarity inversion on RX.
RX Electrical Idle: True, False - Indicates if RX is in electrical idle or not. True: RX is in electrical idle. False: RX is out of electrical idle.
The Debug Toolkit supports the Eye Viewer tool that allows you to measure on-die eye
margin.
• Provides a pictorial representation of the eye for each channel, both in the
subdivided (e.g., x8x8) and non-subdivided (e.g., x16) configurations. This feature
is available in P-Tile debug toolkit only.
• Provides information on the total eye height, total eye width and eye
measurement information from the center of the eye to the four corners (left,
right, top, bottom).
• Uses fixed step sizes in the horizontal and vertical directions.
• For the P-Tile debug toolkit, performs the eye measurement at the following bit error rates (BER):
— 8.0 GT/s (Gen3) @ e-8, 100% confidence level
— 16.0 GT/s (Gen4) @ e-9, 90% confidence level
— 8.0 GT/s (Gen3) and 16.0 GT/s (Gen4) @ e-12, 95% confidence level
• For the F-Tile debug toolkit, performs the eye measurement at BER = 1e-12, 95% confidence level.
Note: The Eye Viewer feature of the Debug Toolkit does not support an independent error sampler for performing eye margining. The eye margining is performed on the actual data path. As a result, the eye margining may produce uncorrectable errors in the data stream and cause the LTSSM to go to the Recovery state. You may mask out all errors (for example, in the AER registers) while performing the eye margining and reset all error counters, error registers, etc. after margining is completed.
2. This will open a new tab Eye View Tool next to the Main View tab. Choose the
instance and channel for which you want to run the eye view tests.
3. For P-Tile debug toolkit, set the Eye Max BER. Two options are available: 1e-9 or
1e-12.
4. Click Start to begin the eye measurement for the selected channel.
5. The messages window displays information messages to indicate the eye view
tool's progress.
6. Once the eye measurement is complete, the eye height, eye width and eye
diagram are displayed.
Figure 71. Sample Eye Plot [for BER = 1e-9 in P-Tile debug toolkit]
Figure 72. Sample Eye Plot [for BER = 1e-12 in P-Tile debug toolkit]
1. To run Eye Viewer for a lane, select the lane from the Collection table.
2. Select the Eye Viewer tab in the channel parameter window of the lane.
3. Select Eye Height, Eye Width or both options.
4. Click Start Eye Scan to begin the eye measurement for the selected lane.
5. The messages window displays information messages to indicate the eye view
tool's progress.
6. Once the eye measurement completes, the eye height and eye width results are
displayed.
To reduce the repetitive steps needed to run the eye viewer for more than one lane, select the lanes from the Collection table, right-click, select Actions, and select Start Eye Scan. The eye viewer runs for the selected lanes sequentially.
The Link Inspector is found under the PCIe Link Inspector tab after opening the
Debug Toolkit.
The Link Inspector is enabled by default when the Enable Debug Toolkit is enabled.
It tracks up to 1024 state transitions with the capability to dump them into a file.
When the Dump LTSSM Sequence to Text File button is initially clicked, a text file
(ltssm_sequence_dump_p*.txt) with the LTSSM information is created in the
location from where the System Console window is opened. Depending on the PCIe
topology, there can be up to four text files. Subsequent LTSSM sequence dumps will
append to the respective files.
Note: If you open System Console in a directory that is not writable, the text file will not be
generated. To avoid this issue, open System Console from the Command Prompt
window (on a Windows system) or change the directory's permission settings to
writable.
Each LTSSM monitor has a FIFO storing the time values and captured LTSSM states.
When you choose to dump out the LTSSM states, reads are dependent on the FIFO
elements and will empty out the FIFO.
The Link Inspector only writes to its FIFO if there is a state transition. In cases where
the link is stable in L0, there will be no write and hence no text file will be dumped.
When you want to dump the LTSSM sequence, a single read of the FIFO status of the
respective core is performed. Depending on the empty status and how many entries
are in the FIFO, successive reads are executed.
11. Multi Channel DMA Intel FPGA IP for PCI Express User
Guide Archives
For the latest and previous versions of this document, refer to the Multi Channel DMA
Intel FPGA IP for PCI Express User Guide. If an IP or software version is not listed, the
user guide for the previous IP or software version applies.
12. Revision History for the Multi Channel DMA Intel FPGA
IP for PCI Express User Guide
Date, Quartus Prime Version, IP Version, Changes
2025.01.27 (Quartus Prime 24.3.1; IP versions: 24.2.0 [H-Tile], 8.3.0 [P-Tile], 9.3.0 [F-Tile], 5.3.0 [R-Tile]):
• Updated the maximum number of DMA channels in Endpoint Mode.
• Updated the tables in Resource Utilization.
• Updated the IP and Quartus versions in Release Information.
• Added a Note in Legacy Interrupt Interface.
2024.07.30 (Quartus Prime 24.2; IP versions: 24.1.0 [H-Tile], 8.1.0 [P-Tile], 9.1.0 [F-Tile], 5.1.0 [R-Tile]):
• Terms and Acronyms: PCIe Gen1/2/3/4/5 terms added
• Known Issues: 24.2 known issues added
• Device Family Support: Agilex 9 support added
• Release Information: Release Information table updated
• Avalon-MM Write (H2D) and Read (D2H) Master: Information updated
• Avalon-MM Read Master (D2H): Information updated
• Legacy Interrupt Interface: New section added
• MCDMA Settings: Information updated
• MCDMA Settings:
— R-Tile support information updated for the parameters Enable User-FLR and Enable MSI Capability
— Data Mover Only information updated for the parameter User Mode
— Export pld_warm_rst_rdy and link_req_rst_n interface to top level: Parameter support removed
• Architecture: ifc_uio replaced with igb_uio in the MCDMA Architecture block diagram
• API Flow: Device to Host Sequence figure updated
2023.10.06 (Quartus Prime 23.3; IP versions: 23.0.0 [H-Tile], 7.0.0 [P-Tile], 7.0.0 [F-Tile], 4.0.0 [R-Tile]):
• Known Issues: List updated with new bullet points from (7) to (11) added for the Quartus Prime 23.3 release
• Release Information: Table updated for the Quartus Prime 23.3 release
• User MSI-X: Note added about R-Tile support
• Endpoint MSI Support through BAS: Note added about H-Tile support
• Hard IP Reconfiguration Interface: Note added about H-Tile support
• MCDMA Settings: New row Enable address byte aligned transfer added to the table
• MSI-X: Note updated and table added
• Top Level Settings: New row Enable PIPE Mode Simulation added to the table
• PCIe0 MSI-X: Note updated
• PCIe0 PRS: New row PF0 Page Request Services Outstanding Capacity added to the table
• PCIe0 TPM: New section added
• Analog Parameters (F-Tile MCDMA IP Only): New section added
• Software Programming Model: Software folder information added
• Overview: Debug Toolkit support information note updated
2023.04.11 (Quartus Prime 23.1; IP versions: 22.2.0 [H-Tile], 5.1.0 [P-Tile], 5.1.0 [F-Tile], 2.0.0 [R-Tile]):
• Updated product family name to "Intel Agilex® 7"
• Known Issues: Removed issues fixed in the Quartus Prime 23.1 release. Bullet point (7) newly added in the Quartus Prime 23.1 release
• Release Information: Release Information table updated
• Bursting Avalon-MM Slave (BAS) Interface: bas_address_i parameter information updated
• Top-Level Settings: Enable Independent Perst parameter updated
• Running Eye Viewer in the F-Tile Debug Toolkit: New section added
2023.02.11 (Quartus Prime 22.4; IP versions: 22.1.0 [H-Tile], 5.0.0 [P-Tile], 5.0.0 [F-Tile], 1.0.0 [R-Tile]):
• Initial release and selected feature support for the MCDMA R-Tile IP
• The Multi Channel DMA IP Kernel Mode Character Device Driver is no longer supported from the Intel Quartus Prime 22.4 release onwards. All related information has been removed.
• Known Issues: Bullet points (3) to (7) added for the Quartus Prime 22.4 release
• Endpoint Mode: R-Tile information added
• Root Port Mode: F/R/P/H-Tile specific information added
• Recommended Speed Grades: R-Tile information added. Gen5 information added.
• Resource Utilization: BAM_BAS_MCDMA user mode information added
• Resource Utilization: Intel Agilex R-Tile PCIe x8 [Avalon-MM Interface] table added
• Release Information: IP version information updated. R-Tile version added.
• Functional Description: BAM+BAS+MCDMA user mode information added
• Endpoint MSI Support through BAS: Note added at the end of the section
2022.11.01 (Quartus Prime 22.3; IP versions: 22.0.0 [H-Tile], 4.0.0 [P-Tile], 4.0.0 [F-Tile]):
• Known Issues: Past issues that are no longer present in the current release have been removed
• Endpoint Mode: Feature list updated
• Root Port Mode: Feature list updated
2022.01.14 (Quartus Prime 21.4; IP versions: 21.3.0 [H-Tile], 2.2.0 [P-Tile], 1.1.0 [F-Tile]):
• Data Mover Only user mode option added to Endpoint Mode
• Resource Utilization tables updated
• IP Version updated in Release Information
• Data Mover Only user mode option added in Chapter Functional Description
• Data Mover Interface and Hard IP Status Interface added to Chapter Interface Overview
• Port List (P-Tile and F-Tile) figure updated with data mover mode interfaces
2021.10.29 (Quartus Prime 21.3; IP versions: 21.2.0 [H-Tile], 2.1.0 [P-Tile], 1.0.0 [F-Tile]):
• Recommended Speed Grades table updated with F-Tile support information
• Resource Utilization tables updated
• Release Information updated
• Valid user modes and required functional blocks table updated
• Address format information added to Config Slave
• Multi Channel DMA IP for PCI Express Port List (P-Tile and F-Tile) figure updated with F-Tile information
• Config TL Interface signal table updated
• F-Tile support information added to Configuration Intercept Interface (EP Only)
• F-Tile support information added to Parameters (P-Tile and F-Tile) Chapter
• MCDMA IP Software Driver Differentiation table added
• Network Device Driver information added in Multi Channel DMA IP Kernel Mode Network Device Driver
• Debug Toolkit information added