DPDK Documentation, Release 17.05.0-rc0
CHAPTER 1
Introduction
This document contains instructions for installing and configuring the Data Plane Development
Kit (DPDK) software. It is designed to get customers up and running quickly. The document
describes how to compile and run a DPDK application in a Linux application (linuxapp) envi-
ronment, without going deeply into detail.
System Requirements
Note: If the DPDK is being used on an Intel® Communications Chipset 89xx Series platform,
please consult the Intel® Communications Chipset 89xx Series Software for Linux Getting
Started Guide.
For the majority of platforms, no special BIOS settings are needed to use basic DPDK functionality. However, for additional HPET timer and power management functionality, and for high performance with small packets on 40G NICs, BIOS setting changes may be needed. Consult the section on Enabling Additional Functionality for more information on the required changes.
Required Tools:
Note: Testing has been performed using Fedora 18. The setup commands and installed
packages needed on other systems may be different. For details on other Linux distributions
and the versions tested, please consult the DPDK Release Notes.
• GNU make.
• coreutils: cmp, sed, grep, arch, etc.
• gcc: version 4.9 or later is recommended for all platforms. On some distributions, some specific compiler flags and linker flags are enabled by default and affect performance (-fstack-protector, for example). Please refer to the documentation of your distribution and to gcc -dumpspecs.
• libc headers, often packaged as gcc-multilib (glibc-devel.i686 / libc6-dev-i386; glibc-devel.x86_64 / libc6-dev for 64-bit compilation on Intel architecture; glibc-devel.ppc64 for 64-bit IBM Power architecture)
• Linux kernel headers or sources required to build kernel modules (kernel-devel.x86_64; kernel-devel.ppc64)
• Additional packages required for 32-bit compilation on 64-bit systems are:
– glibc.i686, libgcc.i686, libstdc++.i686 and glibc-devel.i686 for Intel i686/x86_64;
– glibc.ppc64, libgcc.ppc64, libstdc++.ppc64 and glibc-devel.ppc64 for IBM ppc_64;
Note: The x86_x32 ABI is currently supported with distribution packages only on Ubuntu releases newer than 13.10 or on recent Debian distributions. The only supported compiler is gcc 4.9+.
• Python, version 2.7+ or 3.2+, to use various helper scripts included in the DPDK package.
Optional Tools:
• Intel® C++ Compiler (icc). For installation, additional libraries may be required. See the
icc Installation Guide found in the Documentation directory under the compiler installa-
tion.
• IBM® Advance ToolChain for Powerlinux. This is a set of open source development
tools and runtime libraries which allows users to take leading edge advantage of IBM’s
latest POWER hardware features on Linux. To install it, see the IBM official installation
document.
• libpcap headers and libraries (libpcap-devel) to compile and use the libpcap-based
poll-mode driver. This driver is disabled by default and can be enabled by setting
CONFIG_RTE_LIBRTE_PMD_PCAP=y in the build time config file.
• libarchive headers and library are needed for some unit tests using tar to get their re-
sources.
To run a DPDK application, some customization may be required on the target machine.
System Software
Required:
• Kernel version >= 2.6.34
The kernel version in use can be checked using the command:
uname -r
Hugepage support is required for the large memory pool allocation used for packet buffers (the HUGETLBFS option must be enabled in the running kernel as indicated in the previous section). By using hugepage allocations, performance is increased since fewer pages are needed, and therefore fewer Translation Lookaside Buffer (TLB, high-speed translation cache) entries are used, which reduces the time it takes to translate a virtual page address to a physical page address. Without hugepages, high TLB miss rates would occur with the standard 4k page size, slowing performance.
The allocation of hugepages should be done at boot time or as soon as possible after system
boot to prevent memory from being fragmented in physical memory. To reserve hugepages at
boot time, a parameter is passed to the Linux kernel on the kernel command line.
For 2 MB pages, just pass the hugepages option to the kernel. For example, to reserve 1024
pages of 2 MB, use:
hugepages=1024
For other hugepage sizes, for example 1G pages, the size must be specified explicitly and can
also be optionally set as the default hugepage size for the system. For example, to reserve 4G
of hugepage memory in the form of four 1G pages, the following options should be passed to
the kernel:
default_hugepagesz=1G hugepagesz=1G hugepages=4
Note: The hugepage sizes that a CPU supports can be determined from the CPU flags on Intel
architecture. If pse exists, 2M hugepages are supported; if pdpe1gb exists, 1G hugepages are
supported. On IBM Power architecture, the supported hugepage sizes are 16MB and 16GB.
In the case of a dual-socket NUMA system, the number of hugepages reserved at boot time is
generally divided equally between the two sockets (on the assumption that sufficient memory
is present on both sockets).
See the Documentation/kernel-parameters.txt file in your Linux source tree for further details
of these and other kernel options.
Alternative:
For 2 MB pages, there is also the option of allocating hugepages after the system has booted. This is done by echoing the number of hugepages required to a nr_hugepages file in the /sys/kernel/mm/hugepages/hugepages-2048kB/ directory. For a single-node system, the command to use is as follows (assuming that 1024 pages are required):
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
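On a NUMA machine, pages should instead be reserved explicitly on each node; a minimal sketch for a two-node system, assuming 1024 pages per node are wanted:
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages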
Note: For 1G pages, it is not possible to reserve the hugepage memory after the system has
booted.
Once the hugepage memory is reserved, to make the memory available for DPDK use, perform
the following steps:
mkdir /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
The mount point can be made permanent across reboots, by adding the following line to the
/etc/fstab file:
nodev /mnt/huge hugetlbfs defaults 0 0
For 1GB pages, the page size must be specified as a mount option:
nodev /mnt/huge_1GB hugetlbfs pagesize=1GB 0 0
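To confirm that the reservation and the mount have taken effect, the hugepage counters and the mount table can be inspected, for example:
grep Huge /proc/meminfo
mount | grep hugetlbfs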
The existing memory management implementation is based on the Linux kernel hugepage
mechanism. On the Xen hypervisor, hugepage support for DomainU (DomU) Guests means
that DPDK applications work as normal for guests.
However, Domain0 (Dom0) does not support hugepages. To work around this limitation, a new
kernel module rte_dom0_mm is added to facilitate the allocation and mapping of memory via
IOCTL (allocation) and MMAP (mapping).
By default, Xen Dom0 mode is disabled in the DPDK build configuration files. To support
Xen Dom0, the CONFIG_RTE_LIBRTE_XEN_DOM0 setting should be changed to “y”, which
enables the Xen Dom0 mode at compile time.
Furthermore, the CONFIG_RTE_EAL_ALLOW_INV_SOCKET_ID setting should also be
changed to “y” in the case of the wrong socket ID being received.
To run any DPDK application on Xen Dom0, the rte_dom0_mm module must be loaded into
the running kernel with rsv_memsize option. The module is found in the kmod sub-directory
of the DPDK target directory. This module should be loaded using the insmod command as
shown below (assuming that the current directory is the DPDK target directory):
sudo insmod kmod/rte_dom0_mm.ko rsv_memsize=X
After the rte_dom0_mm.ko kernel module has been loaded, the user must configure the memory size for DPDK usage. This is done by echoing the memory size to a memsize file in the /sys/kernel/mm/dom0-mm/memsize-mB/ directory. Use the following command (assuming that 2048 MB is required):
echo 2048 > /sys/kernel/mm/dom0-mm/memsize-mB/memsize
The user can also check how much memory has already been used:
cat /sys/kernel/mm/dom0-mm/memsize-mB/memsize_rsvd
Xen Domain0 does not support NUMA configuration; as a result, the --socket-mem command line option is invalid for Xen Domain0.
Note: The memsize value cannot be greater than the rsv_memsize value.
To run the DPDK application on Xen Domain0, an extra command line option --xen-dom0 is
required.
Note: Parts of this process can also be done using the setup script described in the Quick
Start Setup Script section of this document.
First, uncompress the archive and move to the uncompressed DPDK source directory:
tar xJf dpdk-<version>.tar.xz
cd dpdk-<version>
The format of a DPDK target is ARCH-MACHINE-EXECENV-TOOLCHAIN, where:
• ARCH can be: i686, x86_64, ppc_64
• MACHINE can be: native, power8
• EXECENV can be: linuxapp, bsdapp
• TOOLCHAIN can be: gcc, icc
The targets to be installed depend on the 32-bit and/or 64-bit packages and compilers installed
on the host. Available targets can be found in the DPDK/config directory. The defconfig_ prefix
should not be used.
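For example, the available targets can be listed from the top-level DPDK directory with a quick sketch such as:
ls config/defconfig_*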
Note: Configuration files are provided with the RTE_MACHINE optimization level set. Within
the configuration files, the RTE_MACHINE configuration value is set to native, which means that
the compiled software is tuned for the platform on which it is built. For more information on this
setting, and its possible values, see the DPDK Programmers Guide.
When using the Intel® C++ Compiler (icc), one of the following commands should be invoked
for 64-bit or 32-bit use respectively. Notice that the shell scripts update the $PATH variable and
therefore should not be performed in the same session. Also, verify the compiler’s installation
directory since the path may be different:
source /opt/intel/bin/iccvars.sh intel64
source /opt/intel/bin/iccvars.sh ia32
To install and make targets, use the make install T=<target> command in the top-level
DPDK directory.
For example, to compile a 64-bit target using icc, run:
make install T=x86_64-native-linuxapp-icc
To compile a 32-bit build using gcc, the make command should be:
make install T=i686-native-linuxapp-gcc
To prepare a target without building it, for example, if configuration changes need to be made before compilation, use the make config T=<target> command:
make config T=x86_64-native-linuxapp-gcc
Warning: Any kernel modules to be used, e.g. igb_uio, kni, must be compiled with
the same kernel as the one running on the target. If the DPDK is not being built on the
target machine, the RTE_KERNELDIR environment variable should be used to point the
compilation at a copy of the kernel version to be used on the target machine.
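A sketch of such a cross-target build, with an illustrative kernel source path:
export RTE_KERNELDIR=/path/to/target/kernel/sources
make install T=x86_64-native-linuxapp-gcc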
Once the target environment is created, the user may move to the target environment directory
and continue to make code changes and re-compile. The user may also make modifications to
the compile-time DPDK configuration by editing the .config file in the build directory. (This is a
build-local copy of the defconfig file from the top-level config directory).
cd x86_64-native-linuxapp-gcc
vi .config
make
In addition, the make clean command can be used to remove any existing compiled files for a
subsequent full, clean rebuild of the code.
Once a target is created it contains all libraries, including poll-mode drivers, and header files
for the DPDK environment that are required to build customer applications. In addition, the test
and testpmd applications are built under the build/app directory, which may be used for testing.
A kmod directory is also present that contains kernel modules which may be loaded if needed.
To run any DPDK application, a suitable uio module can be loaded into the running kernel.
In many cases, the standard uio_pci_generic module included in the Linux kernel can
provide the uio capability. This module can be loaded using the command
sudo modprobe uio_pci_generic
As an alternative to the uio_pci_generic, the DPDK also includes the igb_uio module which
can be found in the kmod subdirectory referred to above. It can be loaded as shown below:
sudo modprobe uio
sudo insmod kmod/igb_uio.ko
Note: For some devices which lack support for legacy interrupts, e.g. virtual function (VF)
devices, the igb_uio module may be needed in place of uio_pci_generic.
DPDK release 1.7 onward provides VFIO support, so use of UIO is optional for platforms that support VFIO.
To run a DPDK application and make use of VFIO, the vfio-pci module must be loaded:
sudo modprobe vfio-pci
Note that in order to use VFIO, your kernel must support it. VFIO kernel modules have been included in the Linux kernel since version 3.6.0 and are usually present by default; however, please consult your distribution's documentation to make sure that is the case.
Also, to use VFIO, both kernel and BIOS must support and be configured to use IO virtualiza-
tion (such as Intel® VT-d).
For proper operation of VFIO when running DPDK applications as a non-privileged user, cor-
rect permissions should also be set up. This can be done by using the DPDK setup script
(called dpdk-setup.sh and located in the usertools directory).
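A minimal manual sketch of the kind of permission change the script applies, assuming the VFIO device nodes live under /dev/vfio (the exact group device numbers vary per system):
sudo chmod a+x /dev/vfio
sudo chmod 0666 /dev/vfio/*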
As of release 1.4, DPDK applications no longer automatically unbind all supported network ports from the kernel driver in use. Instead, all ports that are to be used by a DPDK application must be bound to the uio_pci_generic, igb_uio or vfio-pci module before the application is run. Any network ports under Linux* control will be ignored by the DPDK poll-mode drivers and cannot be used by the application.
Warning: The DPDK will, by default, no longer automatically unbind network ports from the kernel driver at startup. Any ports to be used by a DPDK application must be unbound from Linux* control and bound to the uio_pci_generic, igb_uio or vfio-pci module before the application is run.
To bind ports to the uio_pci_generic, igb_uio or vfio-pci module for DPDK use, and then subsequently return ports to Linux* control, a utility script called dpdk-devbind.py is provided in the usertools subdirectory. This utility can be used to provide a view of the current state of the network ports on the system, and to bind and unbind those ports from the different kernel modules, including the uio and vfio modules. The following are some examples of how the script can be used. A full description of the script and its parameters can be obtained by calling the script with the --help or --usage options. Note that the uio or vfio kernel modules to be used should be loaded into the kernel before running the dpdk-devbind.py script.
Warning: Due to the way VFIO works, there are certain limitations to which devices can
be used with VFIO. Mainly it comes down to how IOMMU groups work. Any Virtual Function
device can be used with VFIO on its own, but physical devices will require either all ports
bound to VFIO, or some of them bound to VFIO while others not being bound to anything at
all.
If your device is behind a PCI-to-PCI bridge, the bridge will then be part of the IOMMU
group in which your device is in. Therefore, the bridge driver should also be unbound from
the bridge PCI device for VFIO to work with devices behind the bridge.
Warning: While any user can run the dpdk-devbind.py script to view the status of the
network ports, binding or unbinding network ports requires root privileges.
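For example, to view the current state of all ports and then bind a device, identified here by an illustrative PCI address, to the uio_pci_generic module:
./usertools/dpdk-devbind.py --status
./usertools/dpdk-devbind.py --bind=uio_pci_generic 04:00.1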
or, alternatively, specifying the device by its kernel interface name:
./usertools/dpdk-devbind.py --bind=uio_pci_generic eth1
This chapter describes how to compile and run applications in a DPDK environment. It also provides a pointer to where sample applications are stored.
Note: Parts of this process can also be done using the setup script described in the Quick Start Setup Script section of this document.
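Assuming that the RTE_SDK and RTE_TARGET environment variables are set (the paths below are illustrative) and the current directory is the helloworld example, the build looks like the following:
export RTE_SDK=/home/user/DPDK
export RTE_TARGET=x86_64-native-linuxapp-gcc
cd $RTE_SDK/examples/helloworld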
make
CC main.o
LD helloworld
INSTALL-APP helloworld
INSTALL-MAP helloworld.map
ls build/app
helloworld helloworld.map
Note: In the above example, helloworld was in the directory structure of the DPDK. How-
ever, it could have been located outside the directory structure to keep the DPDK structure
intact. In the following case, the helloworld application is copied to a new directory as a
new starting point.
export RTE_SDK=/home/user/DPDK
cp -r ${RTE_SDK}/examples/helloworld my_rte_app
cd my_rte_app/
export RTE_TARGET=x86_64-native-linuxapp-gcc
make
CC main.o
LD helloworld
INSTALL-APP helloworld
INSTALL-MAP helloworld.map
Warning: The UIO drivers and hugepages must be setup prior to running an application.
Warning: Any ports to be used by the application must be already bound to an appropriate
kernel module, as described in Binding and Unbinding Network Ports to/from the Kernel
Modules, prior to running the application.
The application is linked with the DPDK target environment’s Environmental Abstraction Layer
(EAL) library, which provides some options that are generic to every DPDK application.
The following is the list of options that can be given to the EAL:
./rte-app -c COREMASK [-n NUM] [-b <domain:bus:devid.func>] \
[--socket-mem=MB,...] [-m MB] [-r NUM] [-v] [--file-prefix] \
[--proc-type <primary|secondary|auto>] [--xen-dom0]
Note: The --proc-type and --file-prefix EAL options are used for running multiple
DPDK processes. See the “Multi-process Sample Application” chapter in the DPDK Sample
Applications User Guide and the DPDK Programmers Guide for more details.
The coremask parameter is always mandatory for DPDK applications. Each bit of the mask
corresponds to the equivalent logical core number as reported by Linux. Since these logical
core numbers, and their mapping to specific cores on specific NUMA sockets, can vary from
platform to platform, it is recommended that the core layout for each platform be considered
when choosing the coremask to use in each case.
On initialization of the EAL layer by a DPDK application, the logical cores to be used and their
socket location are displayed. This information can also be determined for all cores on the
system by examining the /proc/cpuinfo file, for example, by running cat /proc/cpuinfo.
The physical id attribute listed for each processor indicates the CPU socket to which it belongs.
This can be useful when using other processors to understand the mapping of the logical cores
to the sockets.
Note: A more graphical view of the logical core layout may be obtained using the lstopo
Linux utility. On Fedora Linux, this may be installed and run using the following command:
sudo yum install hwloc
lstopo
Warning: The logical core layout can change between different board layouts and should
be checked before selecting an application coremask.
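For example, a coremask of 0xf selects logical cores 0 to 3; a hypothetical invocation of the helloworld binary built above on those cores could look like the following (the number of memory channels, -n, is illustrative):
sudo ./build/helloworld -c 0xf -n 4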
When running an application, it is recommended to use the same amount of memory as that
allocated for hugepages. This is done automatically by the DPDK application at startup, if no
-m or --socket-mem parameter is passed to it when run.
If more memory than is available is requested by explicitly passing a -m or --socket-mem value, the application fails. However, the application itself can also fail if the user requests less memory than the reserved amount of hugepage memory, particularly if using the -m option. The reason is as follows. Suppose the system has 1024 reserved 2 MB pages on socket 0 and 1024 on socket 1. If the user requests 128 MB of memory, the 64 pages may not satisfy the following constraints:
• The hugepage memory may be given to the application by the kernel on socket 1 only. In this case, if the application attempts to create an object, such as a ring or memory pool, on socket 0, it fails. To avoid this issue, it is recommended that the --socket-mem option be used instead of the -m option.
• These pages can be located anywhere in physical memory, and, although the DPDK EAL
will attempt to allocate memory in contiguous blocks, it is possible that the pages will not
be contiguous. In this case, the application is not able to allocate big memory pools.
The socket-mem option can be used to request specific amounts of memory for specific sock-
ets. This is accomplished by supplying the --socket-mem flag followed by amounts of mem-
ory requested on each socket, for example, supply --socket-mem=0,512 to try and reserve
512 MB for socket 1 only. Similarly, on a four socket system, to allocate 1 GB memory on
each of sockets 0 and 2 only, the parameter --socket-mem=1024,0,1024 can be used.
No memory will be reserved on any CPU socket that is not explicitly referenced, for example,
socket 3 in this case. If the DPDK cannot allocate enough memory on each socket, the EAL
initialization fails.
In addition, there are two other applications that are built when the libraries are created. The
source files for these are in the DPDK/app directory and are called test and testpmd. Once the
libraries are created, they can be found in the build/app directory.
• The test application provides a variety of specific tests for the various functions in the
DPDK.
• The testpmd application provides a number of different packet throughput tests and ex-
amples of features such as how to use the Flow Director found in the Intel® 82599 10
Gigabit Ethernet Controller.
BIOS Support
The High Precision Timer (HPET) must be enabled in the platform BIOS if the HPET is to be
used. Otherwise, the Time Stamp Counter (TSC) is used by default. The BIOS is typically
accessed by pressing F2 while the platform is starting up. The user can then navigate to
the HPET option. On the Crystal Forest platform BIOS, the path is: Advanced -> PCH-IO
Configuration -> High Precision Timer -> (Change from Disabled to Enabled if necessary).
On a system that has already booted, the following command can be issued to check if HPET is enabled:
grep hpet /proc/timer_list
If no entries are returned, HPET must be enabled in the BIOS (as per the instructions above) and the system rebooted.
The DPDK makes use of the platform HPET timer by mapping the timer counter into the pro-
cess address space, and as such, requires that the HPET_MMAP kernel configuration option be
enabled.
Warning: On Fedora, and other common distributions such as Ubuntu, the HPET_MMAP kernel option is not enabled by default. To recompile the Linux kernel with this option enabled, please consult your distribution's documentation for the relevant instructions.
By default, HPET support is disabled in the DPDK build configuration files. To use HPET,
the CONFIG_RTE_LIBEAL_USE_HPET setting should be changed to y, which will enable the
HPET settings at compile time.
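For example, assuming the x86_64-native-linuxapp-gcc target created earlier, the build-local configuration could be changed with a sketch such as the following, after which the target needs to be rebuilt:
sed -i 's/CONFIG_RTE_LIBEAL_USE_HPET=n/CONFIG_RTE_LIBEAL_USE_HPET=y/' x86_64-native-linuxapp-gcc/.config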
For an application to use the rte_get_hpet_cycles() and rte_get_hpet_hz() API
calls, and optionally to make the HPET the default time source for the rte_timer library, the
new rte_eal_hpet_init() API call should be called at application initialization. This API
call will ensure that the HPET is accessible, returning an error to the application if it is not, for
example, if HPET_MMAP is not enabled in the kernel. The application can then determine what
action to take, if any, if the HPET is not available at run-time.
Note: For applications that require timing APIs, but not the HPET timer specifically, it is recom-
mended that the rte_get_timer_cycles() and rte_get_timer_hz() API calls be used
instead of the HPET-specific APIs. These generic APIs can work with either TSC or HPET time
sources, depending on what is requested by an application call to rte_eal_hpet_init(),
if any, and on what is available on the system at runtime.
Although applications using the DPDK use network ports and other hardware resources di-
rectly, with a number of small permission adjustments it is possible to run these applications
as a user other than “root”. To do so, the ownership, or permissions, on the following Linux file
system objects should be adjusted to ensure that the Linux user account being used to run the
DPDK application has access to them:
• All directories which serve as hugepage mount points, for example, /mnt/huge
• The userspace-io device files in /dev, for example, /dev/uio0, /dev/uio1, and so on
• The userspace-io sysfs config and resource files, for example for uio0:
/sys/class/uio/uio0/device/config
/sys/class/uio/uio0/device/resource*
Note: On some Linux installations, /dev/hugepages is also a hugepage mount point cre-
ated by default.
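A minimal sketch of adjusting ownership of the objects listed above, assuming a hypothetical non-root user named dpdk and a single uio0 device:
sudo chown -R dpdk:dpdk /mnt/huge
sudo chown dpdk:dpdk /dev/uio0
sudo chown dpdk:dpdk /sys/class/uio/uio0/device/config
sudo chown dpdk:dpdk /sys/class/uio/uio0/device/resource*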
Enhanced Intel SpeedStep® Technology must be enabled in the platform BIOS if the power management feature of DPDK is to be used. Otherwise, the sysfs directory /sys/devices/system/cpu/cpu0/cpufreq will not exist, and the CPU frequency-based power management cannot be used. Consult the relevant BIOS documentation to determine how these settings can be accessed.
For example, on some Intel reference platform BIOS variants, the path to Enhanced Intel
SpeedStep® Technology is:
Advanced
-> Processor Configuration
-> Enhanced Intel SpeedStep® Tech
In addition, C3 and C6 should be enabled as well for power management. The path of C3 and
C6 on the same platform BIOS is:
Advanced
-> Processor Configuration
-> Processor C3
Advanced
-> Processor Configuration
-> Processor C6
While the threads used by a DPDK application are pinned to logical cores on the system,
it is possible for the Linux scheduler to run other tasks on those cores also. To help prevent
additional workloads from running on those cores, it is possible to use the isolcpus Linux
kernel parameter to isolate them from the general Linux scheduler.
For example, if DPDK applications are to run on logical cores 2, 4 and 6, the following should
be added to the kernel parameter list:
isolcpus=2,4,6
To run the DPDK Kernel NIC Interface (KNI) sample application, an extra kernel module (the
kni module) must be loaded into the running kernel. The module is found in the kmod sub-
directory of the DPDK target directory. Similar to the loading of the igb_uio module, this
module should be loaded using the insmod command as shown below (assuming that the
current directory is the DPDK target directory):
insmod kmod/rte_kni.ko
Note: See the “Kernel NIC Interface Sample Application” chapter in the DPDK Sample Appli-
cations User Guide for more details.
To enable Intel® VT-d in a Linux kernel, a number of kernel configuration options must be set.
These include:
• IOMMU_SUPPORT
• IOMMU_API
• INTEL_IOMMU
In addition, to run the DPDK with Intel® VT-d, the iommu=pt kernel parameter must be
used when using igb_uio driver. This results in pass-through of the DMAR (DMA Remap-
ping) lookup in the host. Also, if INTEL_IOMMU_DEFAULT_ON is not set in the kernel, the
intel_iommu=on kernel parameter must be used too. This ensures that the Intel IOMMU is
being initialized as expected.
Please note that while using iommu=pt is compulsory for igb_uio driver, the vfio-pci
driver can actually work with both iommu=pt and iommu=on.
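For example, for an igb_uio based setup the kernel command line could include the following parameters, alongside any hugepage and isolcpus options already configured:
iommu=pt intel_iommu=on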
As there may be firmware fixes for performance enhancement in the latest firmware image, a firmware update may be needed to get high performance. Check with the local Intel Network Division application engineers for firmware updates. Users should consult the release notes specific to a DPDK release to identify the validated firmware version for a NIC using the i40e driver.
The i40e PMD supports both 16-byte and 32-byte RX descriptors, and the 16-byte size can help achieve high performance with small packets. The CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC setting in the config files can be changed to use 16-byte RX descriptors.
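A sketch of making that change, assuming the option is defined in config/common_base (the file may differ between DPDK versions):
sed -i 's/CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n/CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y/' config/common_base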
Due to the hardware design, an interrupt signal inside the NIC is needed for per-packet descriptor write-back. The minimum interrupt interval can be set at compile time by CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL in the configuration files. Although a default value is provided, the interval can be tuned by the user with this configuration item, depending on whether throughput or per-packet latency matters more.
The dpdk-setup.sh script, found in the usertools subdirectory, allows the user to perform the
following tasks:
• Build the DPDK libraries
• Insert and remove the DPDK IGB_UIO kernel module
• Insert and remove VFIO kernel modules
• Insert and remove the DPDK KNI kernel module
• Create and delete hugepages for NUMA and non-NUMA cases
• View network port status and reserve ports for DPDK application use
• Set up permissions for using VFIO as a non-privileged user
• Run the test and testpmd applications
• Look at hugepages in the meminfo
• List hugepages in /mnt/huge
• Remove built DPDK libraries
Once these steps have been completed for one of the EAL targets, the user may compile their
own application that links in the EAL libraries to create the DPDK image.
Script Organization
The dpdk-setup.sh script is logically organized into a series of steps that a user performs in
sequence. Each step provides a number of options that guide the user to completing the
desired task. The following is a brief synopsis of each step.
Step 1: Build DPDK Libraries
Initially, the user must select a DPDK target to choose the correct target type and compiler
options to use when building the libraries.
The user must have all libraries, modules, updates and compilers installed in the system prior
to this, as described in the earlier chapters in this Getting Started Guide.
Step 2: Setup Environment
The user configures the Linux* environment to support the running of DPDK applications.
Hugepages can be set up for NUMA or non-NUMA systems. Any existing hugepages will
be removed. The DPDK kernel module that is needed can also be inserted in this step, and
network ports may be bound to this module for DPDK application use.
Step 3: Run an Application
The user may run the test application once the other steps have been performed. The test
application allows the user to run a series of functional tests for the DPDK. The testpmd appli-
cation, which supports the receiving and sending of packets, can also be run.
Step 4: Examining the System
This step provides some tools for examining the status of hugepage mappings.
Use Cases
The following are some examples of how to use the dpdk-setup.sh script. The script should be run using the source command. Some options in the script prompt the user for further data before proceeding.
source usertools/dpdk-setup.sh
------------------------------------------------------------------------
[1] i686-native-linuxapp-gcc
[2] i686-native-linuxapp-icc
[3] ppc_64-power8-linuxapp-gcc
[4] x86_64-native-bsdapp-clang
[5] x86_64-native-bsdapp-gcc
[6] x86_64-native-linuxapp-clang
[7] x86_64-native-linuxapp-gcc
[8] x86_64-native-linuxapp-icc
------------------------------------------------------------------------
Option:
The following selection demonstrates the creation of the x86_64-native-linuxapp-gcc
DPDK library.
Option: 9
Configuration done
== Build lib
...
Build complete
RTE_TARGET exported as x86_64-native-linuxapp-gcc
The following selection demonstrates the starting of the DPDK UIO driver.
Option: 25
The following selection demonstrates the creation of hugepages in a NUMA system. 1024 2
MByte pages are assigned to each node. The result is that the application should use -m 4096
for starting the application to access both memory areas (this is done automatically if the -m
option is not provided).
Option: 15
The following selection demonstrates the launch of the test application to run on a single core.
Option: 20
Applications
Once the user has run the dpdk-setup.sh script, built one of the EAL targets and set up
hugepages (if using one of the Linux EAL targets), the user can then move on to building
and running their application or one of the examples provided.
The examples in the /examples directory provide a good starting point to gain an understanding
of the operation of the DPDK. The following command sequence shows how the helloworld
sample application is built and run. As recommended in Section 4.2.1 , “Logical Core Use by
Applications”, the logical core layout of the platform should be determined when selecting a
core mask to use for an application.
cd helloworld/
make
CC main.o
LD helloworld
INSTALL-APP helloworld
INSTALL-MAP helloworld.map
This document is a step-by-step guide for getting high performance from DPDK applications
on Intel platforms.
For best performance use an Intel Xeon class server system such as Ivy Bridge, Haswell or
newer.
Ensure that each memory channel has at least one memory DIMM inserted, and that the mem-
ory size for each is at least 4GB. Note: this has one of the most direct effects on performance.
You can check the memory configuration using dmidecode as follows:
dmidecode -t memory | grep Locator
Locator: DIMM_A1
Bank Locator: NODE 1
Locator: DIMM_A2
Bank Locator: NODE 1
Locator: DIMM_B1
Bank Locator: NODE 1
Locator: DIMM_B2
Bank Locator: NODE 1
...
Locator: DIMM_G1
Bank Locator: NODE 2
Locator: DIMM_G2
Bank Locator: NODE 2
Locator: DIMM_H1
Bank Locator: NODE 2
Locator: DIMM_H2
Bank Locator: NODE 2
The sample output above shows a total of 8 channels, from A to H, where each channel has 2
DIMMs.
You can also use dmidecode to determine the memory frequency:
dmidecode -t memory | grep Speed
The output shows a speed of 2133 MHz (DDR4) for populated slots and Unknown for empty slots. This aligns with the previous output, which showed one DIMM populated per channel.
Use a DPDK supported high end NIC such as the Intel XL710 40GbE.
Make sure each NIC has been flashed with the latest version of NVM/firmware.
Use PCIe Gen3 slots, such as Gen3 x8 or Gen3 x16 because PCIe Gen2 slots don’t provide
enough bandwidth for 2 x 10GbE and above. You can use lspci to check the speed of a PCI
slot using something like the following:
lspci -s 03:00.1 -vv | grep LnkSta
LnkSta: Speed 8GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- ...
LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ ...
When inserting NICs into PCI slots, always check the caption, such as CPU0 or CPU1, to see which socket the slot is connected to. Care should be taken with NUMA. If you are using 2 or more ports from different NICs, it is best to ensure that these NICs are on the same CPU socket. An example of how to determine this is shown further below.
BIOS Settings
The following are some recommendations on BIOS settings. Different platforms will have dif-
ferent BIOS naming so the following is mainly for reference:
1. Before starting consider resetting all BIOS settings to their default.
2. Disable all power saving options such as: Power performance tuning, CPU P-State, CPU
C3 Report and CPU C6 Report.
3. Select Performance as the CPU Power and Performance policy.
4. Disable Turbo Boost to ensure the performance scaling increases with the number of
cores.
5. Set memory frequency to the highest available number, NOT auto.
6. Disable all virtualization options when you test the physical function of the NIC, and turn on VT-d if you want to use VFIO.
On the Linux kernel command line, isolate the CPU cores that will be used for DPDK. For example:
isolcpus=2,3,4,5,6,7,8
1. Build the DPDK target and reserve huge pages. See the earlier section on Use of
Hugepages in the Linux Environment for more details.
The following shell commands may help with building and configuration:
# Build DPDK target.
cd dpdk_folder
make install T=x86_64-native-linuxapp-gcc -j
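Hugepages can then be reserved and mounted at runtime as described in the earlier hugepages section; the page count below is illustrative:
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge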
2. Check the CPU layout using the DPDK cpu_layout utility:
cd dpdk_folder
usertools/cpu_layout.py
Usually 0x:00.x is on socket 0 and 8x:00.x is on socket 1. Note: To get the best performance, ensure that the cores and NICs are on the same socket. In this example, 85:00.0 is on socket 1 and should be used by cores on socket 1 for the best performance.
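The NUMA node to which a NIC is attached can also be read directly from sysfs, for example (PCI address illustrative):
cat /sys/bus/pci/devices/0000:85:00.0/numa_node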
4. Bind the test ports to DPDK compatible drivers, such as igb_uio. For example bind two
ports to a DPDK compatible driver and check the status:
# Bind ports 82:00.0 and 85:00.0 to dpdk driver
./dpdk_folder/usertools/dpdk-devbind.py -b igb_uio 82:00.0 85:00.0
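The resulting binding can then be verified with the script's status view:
./dpdk_folder/usertools/dpdk-devbind.py --status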
The following is an example of running the DPDK l3fwd sample application to get high perfor-
mance with an Intel server platform and Intel XL710 NICs. For specific 40G NIC configuration
please refer to the i40e NIC guide.
The example scenario is to get best performance with two Intel XL710 40GbE ports. See Fig.
1.1 for the performance test setup.
1. Add two Intel XL710 NICs to the platform, and use one port per card to get the best performance. The reason for using two NICs is to overcome a PCIe Gen3 limitation: a single Gen3 x8 slot cannot provide 80 Gbps of bandwidth for two 40G ports, but two separate PCIe Gen3 x8 slots can. Referring to the sample NIC output below, we can select 82:00.0 and 85:00.0 as test ports:
82:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
85:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
2. Connect the ports to the traffic generator. For high speed testing, it’s best to use a
hardware traffic generator.
3. Check the NUMA node (socket ID) of the PCI devices and get the core numbers on that socket. In this case, 82:00.0 and 85:00.0 are both on socket 1, and the cores on socket 1 in the referenced platform are 18-35 and 54-71. Note: Don't use 2 logical cores on the same physical core (e.g. physical core 18 has 2 logical cores, core 18 and core 54); instead, use 2 logical cores from different physical cores (e.g. core 18 and core 19).
4. Bind these two ports to igb_uio.
5. For an XL710 40G port, at least two queue pairs are needed to achieve the best performance, so two queues per port are required, and each queue pair needs a dedicated CPU core for receiving/transmitting packets.
6. The DPDK sample application l3fwd will be used for performance testing, using two ports for bi-directional forwarding. Compile the l3fwd sample with the default lpm mode.
7. The command line for running l3fwd would be something like the following:
./l3fwd -c 0x3c0000 -n 4 -w 82:00.0 -w 85:00.0 \
-- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19
for port 0, queue pair 1 forwarding, core 20 for port 1, queue pair 0 forwarding, and core
21 for port 1, queue pair 1 forwarding.
8. Configure the traffic at a traffic generator.
• Start creating a stream on packet generator.
• Set the Ethernet II type to 0x0800.
Introduction
This document contains instructions for installing and configuring the Data Plane Development
Kit (DPDK) software. It is designed to get customers up and running quickly and describes
how to compile and run a DPDK application in a FreeBSD application (bsdapp) environment,
without going deeply into detail.
For a comprehensive guide to installing and using FreeBSD, the following handbook is available
from the FreeBSD Documentation Project: FreeBSD Handbook.
Note: The DPDK is now available as part of the FreeBSD ports collection. Installing via the
ports collection infrastructure is now the recommended way to install the DPDK on FreeBSD,
and is documented in the next chapter, Installing DPDK from the Ports Collection.
Documentation Roadmap
• API Reference: Provides detailed information about DPDK functions, data structures
and other programming constructs.
• Sample Applications User Guide: Describes a set of sample applications. Each chap-
ter describes a sample application that showcases specific functionality and provides
instructions on how to compile, run and use the sample application.
The easiest way to get up and running with the DPDK on FreeBSD is to install it from the ports
collection. Details of getting and using the ports collection are documented in the FreeBSD
Handbook.
Note: Testing has been performed using FreeBSD 10.0-RELEASE (x86_64) and requires the
installation of the kernel sources, which should be included during the installation of FreeBSD.
On a system with the ports collection installed in /usr/ports, the DPDK can be installed
using the commands:
cd /usr/ports/net/dpdk
make install
After the installation of the DPDK port, instructions will be printed on how to install the kernel
modules required to use the DPDK. A more complete version of these instructions can be
found in the sections Loading the DPDK contigmem Module and Loading the DPDK nic_uio
Module. Normally, lines like those below would be added to the file /boot/loader.conf.
# Reserve 2 x 1G blocks of contiguous memory using contigmem driver:
hw.contigmem.num_buffers=2
hw.contigmem.buffer_size=1073741824
contigmem_load="YES"
# Identify NIC devices for DPDK apps to use and load nic_uio driver:
hw.nic_uio.bdfs="2:0:0,2:0:1"
nic_uio_load="YES"
When the DPDK has been installed from the ports collection it installs its example
applications in /usr/local/share/dpdk/examples - also accessible via symlink as
/usr/local/share/examples/dpdk. These examples can be compiled and run as de-
scribed in Compiling and Running Sample Applications. In this case, the required environmen-
tal variables should be set as below:
• RTE_SDK=/usr/local/share/dpdk
• RTE_TARGET=x86_64-native-bsdapp-clang
Note: To install a copy of the DPDK compiled using gcc, please download the official DPDK
package from http://dpdk.org/ and install manually using the instructions given in the next chap-
ter, Compiling the DPDK Target from Source
An example application can therefore be copied to a user’s home directory and compiled and
run as below:
export RTE_SDK=/usr/local/share/dpdk
export RTE_TARGET=x86_64-native-bsdapp-clang
cp -r /usr/local/share/dpdk/examples/helloworld .
cd helloworld/
gmake
CC main.o
LD helloworld
INSTALL-APP helloworld
INSTALL-MAP helloworld.map
sudo ./build/helloworld -c F -n 2
Note: To run a DPDK process as a non-root user, adjust the permissions on the
/dev/contigmem and /dev/uio device nodes as described in section Running DPDK
Applications Without Root Privileges
Note: For an explanation of the command-line parameters that can be passed to a DPDK application, see section Running a Sample Application.
System Requirements
The DPDK and its applications require the GNU make system (gmake) to build on FreeBSD.
Optionally, gcc may also be used in place of clang to build the DPDK, in which case it too
must be installed prior to compiling the DPDK. The installation of these tools is covered in this
section.
Compiling the DPDK requires the FreeBSD kernel sources, which should be included during
the installation of FreeBSD on the development platform. The DPDK also requires the use of
FreeBSD ports to compile and function.
To use the FreeBSD ports system, it is required to update and extract the FreeBSD ports tree
by issuing the following commands:
portsnap fetch
portsnap extract
If the environment requires proxies for external communication, these can be set using:
setenv http_proxy <my_proxy_host>:<port>
setenv ftp_proxy <my_proxy_host>:<port>
The FreeBSD ports below need to be installed prior to building the DPDK. In general these can
be installed using the following set of commands:
cd /usr/ports/<port_location>
make config-recursive
make install
make clean
For compiling and using the DPDK with gcc, the compiler must be installed from the ports
collection:
• gcc: version 4.9 is recommended and can be installed from /usr/ports/lang/gcc49. Ensure that CPU_OPTS is selected (default is OFF).
When running the make config-recursive command, a dialog may be presented to the user.
For the installation of the DPDK, the default options were used.
Note: To avoid multiple dialogs being presented to the user during make install, it is advisable
before running the make install command to re-run the make config-recursive command until
no more dialogs are seen.
First, uncompress the archive and move to the DPDK source directory:
unzip DPDK-<version>.zip
cd DPDK-<version>
The format of a DPDK target is ARCH-MACHINE-EXECENV-TOOLCHAIN, where:
• ARCH is: x86_64
• MACHINE is: native
• EXECENV is: bsdapp
• TOOLCHAIN is: gcc | clang
The configuration files for the DPDK targets can be found in the DPDK/config directory in the
form of:
defconfig_ARCH-MACHINE-EXECENV-TOOLCHAIN
Note: Configuration files are provided with the RTE_MACHINE optimization level set. Within
the configuration files, the RTE_MACHINE configuration value is set to native, which means that
the compiled software is tuned for the platform on which it is built. For more information on this
setting, and its possible values, see the DPDK Programmers Guide.
Note: If the compiler binary to be used does not correspond to that given in the TOOLCHAIN
part of the target, the compiler command may need to be explicitly specified. For example,
if compiling for gcc, where the gcc binary is called gcc4.9, the command would need to be
gmake install T=<target> CC=gcc4.9.
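For example, to build the default clang-based target from the top-level DPDK directory:
gmake install T=x86_64-native-bsdapp-clang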
Once a target is created, it contains all the libraries and header files for the DPDK environment
that are required to build customer applications. In addition, the test and testpmd applications
are built under the build/app directory, which may be used for testing. A kmod directory is also
present that contains the kernel modules to install.
To run a DPDK application, physically contiguous memory is required. In the absence of non-transparent superpages, the included sources for the contigmem kernel module provide the ability to present contiguous blocks of memory for the DPDK to use. The contigmem module must be loaded into the running kernel before any DPDK application is run. The module is found in the kmod sub-directory of the DPDK target directory.
The amount of physically contiguous memory along with the number of physically contiguous
blocks to be reserved by the module can be set at runtime prior to module loading using:
kenv hw.contigmem.num_buffers=n
kenv hw.contigmem.buffer_size=m
The kernel environment variables can also be specified during boot by placing the following in
/boot/loader.conf:
hw.contigmem.num_buffers=n hw.contigmem.buffer_size=m
Where n is the number of blocks and m is the size in bytes of each area of contiguous memory.
A default of two buffers of size 1073741824 bytes (1 Gigabyte) each is set during module load
if they are not specified in the environment.
The module can then be loaded using kldload (assuming that the current directory is the DPDK
target directory):
kldload ./kmod/contigmem.ko
It is advisable to include the loading of the contigmem module during the boot process to
avoid issues with potential memory fragmentation during later system up time. This can be
achieved by copying the module to the /boot/kernel/ directory and placing the following
into /boot/loader.conf:
contigmem_load="YES"
An error such as the following when loading the contigmem module is generally attributed to not having enough contiguous memory available, and can be verified via dmesg or /var/log/messages:
kernel: contigmalloc failed for buffer <n>
To avoid this error, reduce the number of buffers or the buffer size.
After loading the contigmem module, the nic_uio module must also be loaded into the run-
ning kernel prior to running any DPDK application. This module must be loaded using the
kldload command as shown below (assuming that the current directory is the DPDK target
directory).
kldload ./kmod/nic_uio.ko
Note: If the ports to be used are currently bound to an existing kernel driver, then the hw.nic_uio.bdfs sysctl value will need to be set before loading the module. Setting this value is described in the next section below.
Currently loaded modules can be seen by using the kldstat command and a module can be
removed from the running kernel by using kldunload <module_name>.
To load the module during boot, copy the nic_uio module to /boot/kernel and place the
following into /boot/loader.conf:
nic_uio_load="YES"
By default, the nic_uio module will take ownership of network ports if they are recognized
DPDK devices and are not owned by another module. However, since the FreeBSD kernel
includes support, either built-in, or via a separate driver module, for most network card devices,
it is likely that the ports to be used are already bound to a driver other than nic_uio. The
following sub-section describes how to query and modify the device ownership of the ports to be used by DPDK applications.
Device ownership can be viewed using the pciconf -l command. The example below shows
four Intel® 82599 network ports under if_ixgbe module ownership.
pciconf -l
ix0@pci0:1:0:0: class=0x020000 card=0x00038086 chip=0x10fb8086 rev=0x01 hdr=0x00
ix1@pci0:1:0:1: class=0x020000 card=0x00038086 chip=0x10fb8086 rev=0x01 hdr=0x00
ix2@pci0:2:0:0: class=0x020000 card=0x00038086 chip=0x10fb8086 rev=0x01 hdr=0x00
ix3@pci0:2:0:1: class=0x020000 card=0x00038086 chip=0x10fb8086 rev=0x01 hdr=0x00
Where a comma separated list of selectors is set, the list must not contain any whitespace.
For example to re-bind ix2@pci0:2:0:0 and ix3@pci0:2:0:1 to the nic_uio module
upon loading, use the following command:
kenv hw.nic_uio.bdfs="2:0:0,2:0:1"
The variable can also be specified during boot by placing the following into
/boot/loader.conf, before the previously-described nic_uio_load line - as shown:
hw.nic_uio.bdfs="2:0:0,2:0:1"
nic_uio_load="YES"
If the original driver for a network port has been compiled into the kernel, it is necessary to
reboot FreeBSD to restore the original device binding. Before doing so, update or remove the
hw.nic_uio.bdfs in /boot/loader.conf.
If rebinding to a driver that is a loadable module, the network port binding can be reset without
rebooting. To do so, unload both the target kernel module and the nic_uio module, modify
or clear the hw.nic_uio.bdfs kernel environment (kenv) value, and reload the two drivers -
first the original kernel driver, and then the nic_uio driver. Note: the latter does not need
to be reloaded unless there are ports that are still to be bound to it.
Example commands to perform these steps are shown below:
kldunload nic_uio
kldunload <original_driver>
kldload <original_driver>
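If the hw.nic_uio.bdfs value needs to be cleared rather than modified, it can be unset before reloading the drivers, for example:
kenv -u hw.nic_uio.bdfs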
This chapter describes how to compile and run applications in a DPDK environment. It also provides a pointer to where sample applications are stored.
gmake CC=gcc49
CC main.o
LD helloworld
INSTALL-APP helloworld
INSTALL-MAP helloworld.map
ls build/app
helloworld helloworld.map
Note: In the above example, helloworld was in the directory structure of the DPDK. How-
ever, it could have been located outside the directory structure to keep the DPDK structure
intact. In the following case, the helloworld application is copied to a new directory as a
new starting point.
cd my_rte_app/
setenv RTE_TARGET x86_64-native-bsdapp-gcc
gmake CC=gcc49
CC main.o
LD helloworld
INSTALL-APP helloworld
INSTALL-MAP helloworld.map
1. The contigmem and nic_uio modules must be set up prior to running an application.
2. Any ports to be used by the application must be already bound to the nic_uio module,
as described in section Binding Network Ports to the nic_uio Module, prior to running the
application. The application is linked with the DPDK target environment’s Environment
Abstraction Layer (EAL) library, which provides some options that are generic to every
DPDK application.
The following is the list of options that can be given to the EAL:
./rte-app -c COREMASK [-n NUM] [-b <domain:bus:devid.func>] \
[-r NUM] [-v] [--proc-type <primary|secondary|auto>]
Note: EAL has a common interface between all operating systems and is based on the Linux
notation for PCI devices. For example, a FreeBSD device selector of pci0:2:0:1 is referred
to as 02:00.1 in EAL.
Note: The --proc-type and --file-prefix EAL options are used for running multiple
DPDK processes. See the “Multi-process Sample Application” chapter in the DPDK Sample
Applications User Guide and the DPDK Programmers Guide for more details.
Although applications using the DPDK use network ports and other hardware resources di-
rectly, with a number of small permission adjustments, it is possible to run these applications
as a user other than “root”. To do so, the ownership, or permissions, on the following file sys-
tem objects should be adjusted to ensure that the user account being used to run the DPDK
application has access to them:
• The userspace-io device files in /dev, for example, /dev/uio0, /dev/uio1, and so on
• The userspace contiguous memory device: /dev/contigmem
Note: Please refer to the DPDK Release Notes for supported applications.
Introduction
This document describes the sample applications that are included in the Data Plane Devel-
opment Kit (DPDK). Each chapter describes a sample application that showcases specific
functionality and provides instructions on how to compile, run and use the sample application.
This chapter describes the Command Line sample application that is part of the Data Plane
Development Kit (DPDK).
Overview
The Command Line sample application is a simple application that demonstrates the use of
the command line interface in the DPDK. This application is a readline-like interface that can
be used to debug a DPDK application, in a Linux* application environment.
Note: The rte_cmdline library should not be used in production code since it is not validated
to the same standard as other DPDK libraries. See also the “rte_cmdline library should not
be used in production code due to limited testing” item in the “Known Issues” section of the
Release Notes.
The Command Line sample application supports some of the features of the GNU readline library, such as completion, cut/paste and some other special bindings that make configuration and debugging faster and easier.
The application shows how the rte_cmdline application can be extended to handle a list of
objects. There are three simple commands:
• add obj_name IP: Add a new object with an IP/IPv6 address associated to it.
• del obj_name: Delete the specified object.
• show obj_name: Show the IP associated with the specified object.
1. Go to example directory:
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/cmdline
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
Refer to the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make
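The application can then be launched from the example directory; a sketch with an illustrative coremask and memory channel count:
./build/cmdline -c f -n 4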
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.
Explanation
The first task is the initialization of the Environment Abstraction Layer (EAL). This is achieved
as follows:
int main(int argc, char **argv)
{
    int ret;

    ret = rte_eal_init(argc, argv);
    if (ret < 0)
        rte_panic("Cannot init EAL\n");
Then, a new command line object is created and started to interact with the user through the
console:
cl = cmdline_stdin_new(main_ctx, "example> ");
cmdline_interact(cl);
cmdline_stdin_exit(cl);
The cmdline_interact() function returns when the user types Ctrl-d and in this case, the application exits.
A cmdline context is a list of commands that are listed in a NULL-terminated table, for example:
cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *) &cmd_obj_del_show,
(cmdline_parse_inst_t *) &cmd_obj_add,
(cmdline_parse_inst_t *) &cmd_help,
NULL,
};
struct cmd_obj_add_result {
cmdline_fixed_string_t action;
cmdline_fixed_string_t name;
struct object *obj;
};
cmdline_parse_inst_t cmd_obj_del_show = {
.f = cmd_obj_del_show_parsed, /* function to call */
.data = NULL, /* 2nd arg of func */
.help_str = "Show/del an object",
.tokens = { /* token list, NULL terminated */
(void *)&cmd_obj_action,
(void *)&cmd_obj_obj,
NULL,
},
};
The Ethtool sample application shows an implementation of an ethtool-like API and provides a console environment that allows it to be used to query and change Ethernet card parameters. The sample is based upon a simple L2 frame reflector.
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make
The application requires an available core for each port, plus one. The only available options
are the standard ones for the EAL:
./ethtool-app/ethtool-app/${RTE_TARGET}/ethtool [EAL options]
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.
From this interface the available commands and descriptions of what they do are as follows:
• drvinfo: Print driver info
• eeprom: Dump EEPROM to file
• link: Print port link states
• macaddr: Gets/sets MAC address
• mtu: Set NIC MTU
• open: Open port
• pause: Get/set port pause state
• portstats: Print port statistics
• regs: Dump port register(s) to file
• ringparam: Get/set ring parameters
• rxmode: Toggle port Rx mode
• stop: Stop port
• validate: Check that given MAC address is valid unicast address
• vlan: Add/remove VLAN id
• quit: Exit program
Explanation
The sample program has two parts: A background packet reflector that runs on a slave core,
and a foreground Ethtool Shell that runs on the master core. These are described below.
Packet Reflector
The background packet reflector is intended to demonstrate basic packet processing on NIC
ports controlled by the Ethtool shim. Each incoming MAC frame is rewritten so that it is returned
to the sender, using the port in question’s own MAC address as the source address, and is then
sent out on the same port.
Ethtool Shell
The foreground part of the Ethtool sample is a console-based interface that accepts commands
as described in Using the Application. Individual callback functions handle the detail associated
with each command and make use of the functions defined in the Ethtool interface to the DPDK.
Ethtool interface
The Ethtool interface is built as a separate library, and implements the following functions:
• rte_ethtool_get_drvinfo()
• rte_ethtool_get_regs_len()
• rte_ethtool_get_regs()
• rte_ethtool_get_link()
• rte_ethtool_get_eeprom_len()
• rte_ethtool_get_eeprom()
• rte_ethtool_set_eeprom()
• rte_ethtool_get_pauseparam()
• rte_ethtool_set_pauseparam()
• rte_ethtool_net_open()
• rte_ethtool_net_stop()
• rte_ethtool_net_get_mac_addr()
• rte_ethtool_net_set_mac_addr()
• rte_ethtool_net_validate_addr()
• rte_ethtool_net_change_mtu()
• rte_ethtool_net_get_stats64()
• rte_ethtool_net_vlan_rx_add_vid()
• rte_ethtool_net_vlan_rx_kill_vid()
• rte_ethtool_net_set_rx_mode()
• rte_ethtool_get_ringparam()
• rte_ethtool_set_ringparam()
The Exception Path sample application is a simple example that demonstrates the use of the
DPDK to set up an exception path for packets to go through the Linux* kernel. This is done
by using virtual TAP network interfaces. These can be read from and written to by the DPDK
application and appear to the kernel as a standard network interface.
Overview
The application creates two threads for each NIC port being used. One thread reads from
the port and writes the data unmodified to a thread-specific TAP interface. The second thread
reads from a TAP interface and writes the data unmodified to the NIC port.
The packet flow through the exception path application is as shown in the following figure.
To make throughput measurements, kernel bridges must be set up to forward data between the
TAP interfaces appropriately.
1. Go to example directory:
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/exception_path
2. Set the target (a default target will be used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
This application is intended as a linuxapp only. See the DPDK Getting Started Guide for
possible RTE_TARGET values.
1. Build the application:
make
The application is run as follows:
./build/exception_path [EAL options] -- -p PORTMASK -i IN_CORES -o OUT_CORES
where:
• -p PORTMASK: A hex bitmask of ports to use
• -i IN_CORES: A hex bitmask of cores which read from NIC
• -o OUT_CORES: A hex bitmask of cores which write to NIC
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.
The number of bits set in each bitmask must be the same. The coremask -c parameter of
the EAL options should include IN_CORES and OUT_CORES. The same bit must not be set
in IN_CORES and OUT_CORES. The affinities between ports and cores are set beginning
with the least significant bit of each mask, that is, the port represented by the lowest bit in
PORTMASK is read from by the core represented by the lowest bit in IN_CORES, and written
to by the core represented by the lowest bit in OUT_CORES.
For example to run the application with two ports and four cores:
./build/exception_path -c f -n 4 -- -p 3 -i 3 -o c
Getting Statistics
While the application is running, statistics on packets sent and received can be displayed by
sending the SIGUSR1 signal to the application from another terminal:
killall -USR1 exception_path
Explanation
Initialization
Setup of the mbuf pool, driver and queues is similar to the setup done in the L2 Forwarding
Sample Application (in Real and Virtualized Environments). In addition, the TAP interfaces
must also be created. A TAP interface is created for each lcore that is being used. The code
for creating the TAP interface is as follows:
/*
 * Create a tap network interface, or use an existing one with the same name.
 * If name[0] == '\0' then a name is automatically assigned and returned in name.
 */
static int tap_create(char *name)
{
	struct ifreq ifr;
	int fd, ret;

	fd = open("/dev/net/tun", O_RDWR);
	if (fd < 0)
		return fd;

	memset(&ifr, 0, sizeof(ifr));

	/* TAP device without packet information */
	ifr.ifr_flags = IFF_TAP | IFF_NO_PI;
	if (name && *name)
		snprintf(ifr.ifr_name, IFNAMSIZ, "%s", name);

	ret = ioctl(fd, TUNSETIFF, (void *) &ifr);
	if (ret < 0) {
		close(fd);
		return ret;
	}

	if (name)
		snprintf(name, IFNAMSIZ, "%s", ifr.ifr_name);

	return fd;
}
The other step in the initialization process that is unique to this sample application is the asso-
ciation of each port with two cores:
• One core to read from the port and write to a TAP interface
• A second core to read from a TAP interface and write to the port
This is done using an array called port_ids[], which is indexed by the lcore IDs. The population
of this array is shown below:
tx_port = 0;
rx_port = 0;
RTE_LCORE_FOREACH(i) {
if (input_cores_mask & (1ULL << i)) {
/* Skip ports that are not enabled */
while ((ports_mask & (1 << rx_port)) == 0) {
rx_port++;
if (rx_port > (sizeof(ports_mask) * 8))
goto fail; /* not enough ports */
}
port_ids[i] = rx_port++;
} else if (output_cores_mask & (1ULL << i)) {
/* Skip ports that are not enabled */
while ((ports_mask & (1 << tx_port)) == 0) {
tx_port++;
if (tx_port > (sizeof(ports_mask) * 8))
goto fail; /* not enough ports */
}
port_ids[i] = tx_port++;
}
}
Packet Forwarding
After the initialization steps are complete, the main_loop() function is run on each lcore.
This function first checks the lcore_id against the user provided input_cores_mask and out-
put_cores_mask to see if this core is reading from or writing to a TAP interface.
For the case that reads from a NIC port, the packet reception is the same as in the L2 Forward-
ing sample application (see Receive, Process and Transmit Packets). The packet transmission
is done by calling write() with the file descriptor of the appropriate TAP interface and then
explicitly freeing the mbuf back to the pool.
/* Loop forever reading from NIC and writing to tap */
for (;;) {
	struct rte_mbuf *pkts_burst[PKT_BURST_SZ];
	unsigned i;
	const unsigned nb_rx = rte_eth_rx_burst(port_ids[lcore_id], 0,
			pkts_burst, PKT_BURST_SZ);

	lcore_stats[lcore_id].rx += nb_rx;

	/* Write each received packet to the TAP interface, then free it */
	for (i = 0; i < nb_rx; i++) {
		struct rte_mbuf *m = pkts_burst[i];
		int ret = write(tap_fd, rte_pktmbuf_mtod(m, void *),
				rte_pktmbuf_data_len(m));
		rte_pktmbuf_free(m);
		if (unlikely(ret < 0))
			lcore_stats[lcore_id].dropped++;
		else
			lcore_stats[lcore_id].tx++;
	}
}
For the other case that reads from a TAP interface and writes to a NIC port, packets are
retrieved by doing a read() from the file descriptor of the appropriate TAP interface. This fills in
the data into the mbuf, then other fields are set manually. The packet can then be transmitted
as normal.
/* Loop forever reading from tap and writing to NIC */
for (;;) {
	int ret;
	struct rte_mbuf *m = rte_pktmbuf_alloc(pktmbuf_pool);
	if (m == NULL)
		continue;

	/* Read one packet from the TAP interface into the mbuf data area */
	ret = read(tap_fd, rte_pktmbuf_mtod(m, void *), MAX_PACKET_SZ);
	if (unlikely(ret <= 0)) {
		rte_pktmbuf_free(m);
		continue;
	}

	m->pkt.nb_segs = 1;
	m->pkt.next = NULL;
	m->pkt.data_len = (uint16_t)ret;
To set up loops for measuring throughput, TAP interfaces can be connected using bridging.
The steps to do this are described in the section that follows.
The Exception Path sample application creates TAP interfaces with names of the format
tap_dpdk_nn, where nn is the lcore ID. These TAP interfaces need to be configured for use:
ifconfig tap_dpdk_00 up
To set up a bridge between two interfaces so that packets sent to one interface can be read
from another, use the brctl tool:
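The exact commands depend on which TAP interfaces have been created (one per lcore in use); a typical sequence, assuming tap_dpdk_00 and tap_dpdk_03 exist, is:
brctl addbr "br0"
brctl addif br0 tap_dpdk_00
brctl addif br0 tap_dpdk_03
ifconfig br0 up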
The TAP interfaces created by this application exist only when the application is running, so
the steps above need to be repeated each time the application is run. To avoid this, persistent
TAP interfaces can be created using openvpn:
openvpn --mktun --dev tap_dpdk_00
If this method is used, then the steps above have to be done only once and the same TAP
interfaces can be reused each time the application is run. To remove bridges and persistent
TAP interfaces, the following commands are used:
ifconfig br0 down
brctl delbr br0
openvpn --rmtun --dev tap_dpdk_00
The Hello World sample application is an example of the simplest DPDK application that can
be written. The application simply prints a "helloworld" message on every enabled lcore.
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make
Refer to DPDK Getting Started Guide for general information on running applications and the
Environment Abstraction Layer (EAL) options.
Explanation
EAL Initialization
The first task is to initialize the Environment Abstraction Layer (EAL). This is done in the main()
function using the following code:
int
main(int argc, char **argv)
{
ret = rte_eal_init(argc, argv);
if (ret < 0)
rte_panic("Cannot init EAL\n");
This call finishes the initialization process that was started before main() is called (in case
of a Linuxapp environment). The argc and argv arguments are provided to the rte_eal_init()
function. The value returned is the number of parsed arguments.
Once the EAL is initialized, the application is ready to launch a function on an lcore. In this
example, lcore_hello() is called on every available lcore. The following is the definition of the
function:
static int
lcore_hello(__attribute__((unused)) void *arg)
{
unsigned lcore_id;
lcore_id = rte_lcore_id();
printf("hello from core %u\n", lcore_id);
return 0;
}
The code below launches lcore_hello() on each of the slave lcores and then calls it directly on
the master lcore:
RTE_LCORE_FOREACH_SLAVE(lcore_id) {
rte_eal_remote_launch(lcore_hello, NULL, lcore_id);
}
lcore_hello(NULL);
Refer to the DPDK API Reference for detailed information on the rte_eal_mp_remote_launch()
function.
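As a sketch, the per-lcore launch loop and the direct call on the master lcore shown above can also be expressed with a single rte_eal_mp_remote_launch() call; CALL_MASTER requests that the function also run on the master lcore:
/* Equivalent sketch: run lcore_hello() on every lcore, including the master,
 * then wait for all lcores to finish. */
rte_eal_mp_remote_launch(lcore_hello, NULL, CALL_MASTER);
rte_eal_mp_wait_lcore();
return 0;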
The Basic Forwarding sample application is a simple skeleton example of a forwarding appli-
cation.
To compile the application export the path to the DPDK source tree and go to the example
directory:
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/skeleton
See the DPDK Getting Started Guide for possible RTE_TARGET values.
Build the application as follows:
make
Refer to DPDK Getting Started Guide for general information on running applications and the
Environment Abstraction Layer (EAL) options.
Explanation
The following sections provide an explanation of the main components of the code.
All DPDK library functions used in the sample code are prefixed with rte_ and are explained
in detail in the DPDK API Documentation.
The main() function performs the initialization and calls the execution threads for each lcore.
The first task is to initialize the Environment Abstraction Layer (EAL). The argc and argv
arguments are provided to the rte_eal_init() function. The value returned is the number
of parsed arguments:
int ret = rte_eal_init(argc, argv);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Error with EAL initialization\n");
The main() function also allocates a mempool to hold the mbufs (Message Buffers) used by the
application:
mbuf_pool = rte_mempool_create("MBUF_POOL",
NUM_MBUFS * nb_ports,
MBUF_SIZE,
MBUF_CACHE_SIZE,
sizeof(struct rte_pktmbuf_pool_private),
rte_pktmbuf_pool_init, NULL,
rte_pktmbuf_init, NULL,
rte_socket_id(),
0);
Mbufs are the packet buffer structure used by DPDK. They are explained in detail in the “Mbuf
Library” section of the DPDK Programmer’s Guide.
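As an aside, the same kind of pool can also be created with the rte_pktmbuf_pool_create() helper, which fills in the pktmbuf-specific initializers; a minimal sketch (the constants are illustrative):
/* Sketch: mbuf pool creation using the pktmbuf convenience wrapper.
 * RTE_MBUF_DEFAULT_BUF_SIZE is the library's default data room size. */
mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports,
		MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
if (mbuf_pool == NULL)
	rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");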
The main() function also initializes all the ports using the user defined port_init() function
which is explained in the next section:
for (portid = 0; portid < nb_ports; portid++) {
if (port_init(portid, mbuf_pool) != 0) {
rte_exit(EXIT_FAILURE,
"Cannot init port %" PRIu8 "\n", portid);
}
}
Once the initialization is complete, the application is ready to launch a function on an lcore. In
this example lcore_main() is called on a single lcore.
lcore_main();
The main functional part of the port initialization used in the Basic Forwarding application is
shown below:
static inline int
port_init(uint8_t port, struct rte_mempool *mbuf_pool)
{
struct rte_eth_conf port_conf = port_conf_default;
const uint16_t rx_rings = 1, tx_rings = 1;
struct ether_addr addr;
int retval;
uint16_t q;
return 0;
}
The Ethernet ports are configured with default settings using the
rte_eth_dev_configure() function and the port_conf_default struct:
static const struct rte_eth_conf port_conf_default = {
.rxmode = { .max_rx_pkt_len = ETHER_MAX_LEN }
};
For this example the ports are set up with 1 RX and 1 TX queue using the
rte_eth_rx_queue_setup() and rte_eth_tx_queue_setup() functions.
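The port_init() body is abridged above; a minimal sketch of the configuration sequence it performs, assuming RX_RING_SIZE and TX_RING_SIZE descriptor counts, is:
/* Sketch: configure the device with one RX and one TX queue. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
if (retval != 0)
	return retval;

for (q = 0; q < rx_rings; q++) {
	retval = rte_eth_rx_queue_setup(port, q, RX_RING_SIZE,
			rte_eth_dev_socket_id(port), NULL, mbuf_pool);
	if (retval < 0)
		return retval;
}

for (q = 0; q < tx_rings; q++) {
	retval = rte_eth_tx_queue_setup(port, q, TX_RING_SIZE,
			rte_eth_dev_socket_id(port), NULL);
	if (retval < 0)
		return retval;
}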
The Ethernet port is then started:
retval = rte_eth_dev_start(port);
As we saw above the main() function calls an application function on the available lcores. For
the Basic Forwarding application the lcore function looks like the following:
static __attribute__((noreturn)) void
lcore_main(void)
{
const uint8_t nb_ports = rte_eth_dev_count();
uint8_t port;
/*
* Check that the port is on the same NUMA node as the polling thread
* for best performance.
*/
for (port = 0; port < nb_ports; port++)
if (rte_eth_dev_socket_id(port) > 0 &&
rte_eth_dev_socket_id(port) !=
(int)rte_socket_id())
printf("WARNING, port %u is on remote NUMA node to "
"polling thread.\n\tPerformance will "
"not be optimal.\n", port);
for (;;) {
	/* Receive packets on a port and forward them on the paired
	 * port (0 <-> 1, 2 <-> 3, etc.). */
	for (port = 0; port < nb_ports; port++) {
		struct rte_mbuf *bufs[BURST_SIZE];
		const uint16_t nb_rx = rte_eth_rx_burst(port, 0,
				bufs, BURST_SIZE);

		if (unlikely(nb_rx == 0))
			continue;

		const uint16_t nb_tx = rte_eth_tx_burst(port ^ 1, 0,
				bufs, nb_rx);

		/* Free any unsent packets. */
		if (unlikely(nb_tx < nb_rx)) {
			uint16_t buf;
			for (buf = nb_tx; buf < nb_rx; buf++)
				rte_pktmbuf_free(bufs[buf]);
		}
	}
}
}
Packets are received in bursts on the RX ports and transmitted in bursts on the TX ports.
The ports are grouped in pairs with a simple mapping scheme using an XOR on the port
number:
0 -> 1
1 -> 0
2 -> 3
3 -> 2
etc.
The rte_eth_tx_burst() function frees the memory buffers of packets that are transmit-
ted. If packets fail to transmit, (nb_tx < nb_rx), then they must be freed explicitly using
rte_pktmbuf_free().
The forwarding loop can be interrupted and the application closed using Ctrl-C.
The RX/TX Callbacks sample application is a packet forwarding application that demonstrates
the use of user defined callbacks on received and transmitted packets. The application per-
forms a simple latency check, using callbacks, to determine the time packets spend within the
application.
In the sample application a user defined callback is applied to all received packets to add a
timestamp. A separate callback is applied to all packets prior to transmission to calculate the
elapsed time, in CPU cycles.
To compile the application export the path to the DPDK source tree and go to the example
directory:
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/rxtx_callbacks
See the DPDK Getting Started Guide for possible RTE_TARGET values.
The callbacks feature requires that the CONFIG_RTE_ETHDEV_RXTX_CALLBACKS setting is
on in the config/common_base file that applies to the target. This is generally on by
default:
CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
Refer to DPDK Getting Started Guide for general information on running applications and the
Environment Abstraction Layer (EAL) options.
Explanation
The main() function performs the application initialization and calls the execution threads for
each lcore. This function is effectively identical to the main() function explained in Basic
Forwarding Sample Application.
The lcore_main() function is also identical.
The main difference is in the user defined port_init() function where the callbacks are
added. This is explained in the next section:
The main functional part of the port initialization is shown below with comments:
static inline int
port_init(uint8_t port, struct rte_mempool *mbuf_pool)
{
struct rte_eth_conf port_conf = port_conf_default;
const uint16_t rx_rings = 1, tx_rings = 1;
struct ether_addr addr;
int retval;
uint16_t q;
return 0;
}
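The port_init() body is again abridged; the part specific to this sample is the registration of the two callbacks on the RX and TX sides, sketched below for queue 0 (matching the single queue configured per port):
/* Register the callbacks: add_timestamps runs on every RX burst,
 * calc_latency runs on every TX burst. NULL is the user parameter. */
rte_eth_add_rx_callback(port, 0, add_timestamps, NULL);
rte_eth_add_tx_callback(port, 0, calc_latency, NULL);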
More than one callback can be added and additional information can be passed to callback
function pointers as a void*. In the examples above NULL is used.
The add_timestamps() and calc_latency() functions are explained below.
The add_timestamps() callback is added to the RX port and is applied to all packets re-
ceived:
static uint16_t
add_timestamps(uint8_t port __rte_unused, uint16_t qidx __rte_unused,
		struct rte_mbuf **pkts, uint16_t nb_pkts, void *_ __rte_unused)
{
	unsigned i;
	uint64_t now = rte_rdtsc();

	for (i = 0; i < nb_pkts; i++)
		pkts[i]->udata64 = now;

	return nb_pkts;
}
The DPDK function rte_rdtsc() is used to add a cycle count timestamp to each packet (see
the cycles section of the DPDK API Documentation for details).
The calc_latency() callback is added to the TX port and is applied to all packets prior to
transmission:
static uint16_t
calc_latency(uint8_t port __rte_unused, uint16_t qidx __rte_unused,
		struct rte_mbuf **pkts, uint16_t nb_pkts, void *_ __rte_unused)
{
	uint64_t cycles = 0;
	uint64_t now = rte_rdtsc();
	unsigned i;

	for (i = 0; i < nb_pkts; i++)
		cycles += now - pkts[i]->udata64;

	latency_numbers.total_cycles += cycles;
	latency_numbers.total_pkts += nb_pkts;

	if (latency_numbers.total_pkts > (100 * 1000 * 1000ULL)) {
		printf("Latency = %"PRIu64" cycles\n",
			latency_numbers.total_cycles / latency_numbers.total_pkts);
		latency_numbers.total_cycles = latency_numbers.total_pkts = 0;
	}

	return nb_pkts;
}
The calc_latency() function accumulates the total number of packets and the total number
of cycles used. Once more than 100 million packets have been transmitted the average cycle
count per packet is printed out and the counters are reset.
The IPv4 Fragmentation application is a simple example of packet processing using the Data
Plane Development Kit (DPDK). The application does L3 forwarding with IPv4 and IPv6 packet
fragmentation.
Overview
The application demonstrates the use of zero-copy buffers for packet fragmentation. The ini-
tialization and run-time paths are very similar to those of the L2 Forwarding Sample Application
(in Real and Virtualized Environments). This guide highlights the differences between the two
applications.
There are three key differences from the L2 Forwarding sample application:
• The first difference is that the IP Fragmentation sample application makes use of indirect
buffers.
• The second difference is that the forwarding decision is taken based on information read
from the input packet’s IP header.
• The third difference is that the application differentiates between IP and non-IP traffic by
means of offload flags.
The Longest Prefix Match (LPM for IPv4, LPM6 for IPv6) table is used to store/lookup an
outgoing port number, associated with that IP address. Any unmatched packets are forwarded
to the originating port.
By default, input frame sizes up to 9.5 KB are supported. Before forwarding, the input IP packet
is fragmented to fit into the “standard” Ethernet* v2 MTU (1500 bytes).
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
1. Build the application:
make
The LPM object is created and loaded with the pre-configured entries read from global
l3fwd_ipv4_route_array and l3fwd_ipv6_route_array tables. For each input packet, the packet
forwarding decision (that is, the identification of the output interface for the packet) is taken as
a result of LPM lookup. If the IP packet size is greater than default output MTU, then the input
packet is fragmented and several fragments are sent via the output interface.
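A minimal sketch of the fragmentation step itself, using the rte_ip_frag library (the mempool and array names are illustrative, and the input mbuf is assumed to hold the IP packet with the L2 header already stripped), is:
/* Sketch: split one oversized IPv4 packet into MTU-sized fragments.
 * pool_direct/pool_indirect and MAX_FRAGS are illustrative names. */
struct rte_mbuf *frags[MAX_FRAGS];
int32_t nb_frags = rte_ipv4_fragment_packet(m, frags, MAX_FRAGS,
		ETHER_MTU, pool_direct, pool_indirect);
if (nb_frags < 0)
	rte_pktmbuf_free(m); /* could not fragment, drop the packet */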
Application usage:
./build/ip_fragmentation [EAL options] -- -p PORTMASK [-q NQ]
where:
• -p PORTMASK is a hexadecimal bitmask of ports to configure
• -q NQ is the number of queues (=ports) per lcore (the default is 1)
To run the example in linuxapp environment with 2 lcores (2,4) over 2 ports(0,2) with 1 RX
queue per lcore:
./build/ip_fragmentation -c 0x14 -n 3 -- -p 5
EAL: coremask set to 14
EAL: Detected lcore 0 on socket 0
EAL: Detected lcore 1 on socket 1
EAL: Detected lcore 2 on socket 0
EAL: Detected lcore 3 on socket 1
EAL: Detected lcore 4 on socket 0
...
To run the example in linuxapp environment with 1 lcore (4) over 2 ports(0,2) with 2 RX queues
per lcore:
./build/ip_fragmentation -c 0x10 -n 3 -- -p 5 -q 2
To test the application, flows should be set up in the flow generator that match the values in
the l3fwd_ipv4_route_array and/or l3fwd_ipv6_route_array table.
The default l3fwd_ipv4_route_array table is:
struct l3fwd_ipv4_route l3fwd_ipv4_route_array[] = {
	{IPv4(100, 10, 0, 0), 16, 0},
	{IPv4(100, 20, 0, 0), 16, 1},
	{IPv4(100, 30, 0, 0), 16, 2},
	{IPv4(100, 40, 0, 0), 16, 3},
	{IPv4(100, 50, 0, 0), 16, 4},
	{IPv4(100, 60, 0, 0), 16, 5},
	{IPv4(100, 70, 0, 0), 16, 6},
	{IPv4(100, 80, 0, 0), 16, 7},
};
For example, for the input IPv4 packet with destination address: 100.10.1.1 and packet length
9198 bytes, seven IPv4 packets will be sent out from port #0 to the destination address
100.10.1.1: six of those packets will have length 1500 bytes and one packet will have length
318 bytes. The IP Fragmentation sample application provides basic NUMA support in that all the
memory structures are allocated on all sockets that have active lcores on them.
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.
The IPv4 Multicast application is a simple example of packet processing using the Data Plane
Development Kit (DPDK). The application performs L3 multicasting.
Overview
The application demonstrates the use of zero-copy buffers for packet forwarding. The initial-
ization and run-time paths are very similar to those of the L2 Forwarding Sample Application
(in Real and Virtualized Environments). This guide highlights the differences between the two
applications. There are two key differences from the L2 Forwarding sample application:
• The IPv4 Multicast sample application makes use of indirect buffers.
• The forwarding decision is taken based on information read from the input packet’s IPv4
header.
The lookup method is the Four-byte Key (FBK) hash-based method. The lookup table is com-
posed of pairs of destination IPv4 address (the FBK) and a port mask associated with that IPv4
address.
For convenience and simplicity, this sample application does not take IANA-assigned multicast
addresses into account, but instead equates the last four bytes of the multicast group (that is,
the last four bytes of the destination IP address) with the mask of ports to multicast packets
to. Also, the application does not consider the Ethernet addresses; it looks only at the IPv4
destination address for any given packet.
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
1. Build the application:
make
Note: The compiled application is written to the build subdirectory. To have the application
written to a different location, the O=/path/to/build/directory option may be specified in the make
command.
The application is run as follows:
./build/ipv4_multicast [EAL options] -- -p PORTMASK [-q NQ]
where,
• -p PORTMASK: Hexadecimal bitmask of ports to configure
• -q NQ: determines the number of queues per lcore
Note: Unlike the basic L2/L3 Forwarding sample applications, NUMA support is not provided
in the IPv4 Multicast sample application.
Typically, to run the IPv4 Multicast sample application, issue the following command (as root):
./build/ipv4_multicast -c 0x00f -n 3 -- -p 0x3 -q 1
In this command:
• The -c option enables cores 0, 1, 2 and 3
• The -n option specifies 3 memory channels
• The -p option enables ports 0 and 1
• The -q option assigns 1 queue to each lcore
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.
Explanation
The following sections provide some explanation of the code. As mentioned in the overview
section, the initialization and run-time paths are very similar to those of the L2 Forwarding
Sample Application (in Real and Virtualized Environments). The following sections describe
aspects that are specific to the IPv4 Multicast sample application.
The IPv4 Multicast sample application uses three memory pools. Two of the pools are for
indirect buffers used for packet duplication purposes. Memory pools for indirect buffers are
initialized differently from the memory pool for direct buffers:
packet_pool = rte_mempool_create("packet_pool", NB_PKT_MBUF, PKT_MBUF_SIZE,
		32, sizeof(struct rte_pktmbuf_pool_private),
		rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
		rte_socket_id(), 0);
The reason is that indirect buffers are not supposed to hold any packet data and can
therefore be initialized with a smaller amount of reserved memory for each buffer.
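For illustration, the two pools used for packet headers and for indirect (clone) buffers are created without the pktmbuf pool private data and with much smaller element sizes; a sketch, with constant names assumed to match the sample, is:
/* Sketch: pools for header mbufs and for indirect (clone) mbufs. They never
 * carry packet payload of their own, so their element sizes are far smaller
 * than PKT_MBUF_SIZE used for packet_pool above. */
header_pool = rte_mempool_create("header_pool", NB_HDR_MBUF, HDR_MBUF_SIZE,
		32, 0, NULL, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);

clone_pool = rte_mempool_create("clone_pool", NB_CLONE_MBUF, CLONE_MBUF_SIZE,
		32, 0, NULL, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);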
Hash Initialization
The hash object is created and loaded with the pre-configured entries read from a global array:
static int
init_mcast_hash(void)
{
	uint32_t i;

	mcast_hash_params.socket_id = rte_socket_id();
	mcast_hash = rte_fbk_hash_create(&mcast_hash_params);
	if (mcast_hash == NULL)
		return -1;

	for (i = 0; i < N_MCAST_GROUPS; i++) {
		if (rte_fbk_hash_add_key(mcast_hash, mcast_group_table[i].ip,
				mcast_group_table[i].port_mask) < 0)
			return -1;
	}
	return 0;
}
Forwarding
All forwarding is done inside the mcast_forward() function. Firstly, the Ethernet* header is
removed from the packet and the IPv4 address is extracted from the IPv4 header:
/* Remove the Ethernet header from the input packet */
Then, the packet is checked to see if it has a multicast destination address and if the routing
table has any ports assigned to the destination address:
if (!IS_IPV4_MCAST(dest_addr) ||
(hash = rte_fbk_hash_lookup(mcast_hash, dest_addr)) <= 0 ||
(port_mask = hash & enabled_port_mask) == 0) {
rte_pktmbuf_free(m);
return;
}
Then, the number of ports in the destination portmask is calculated with the help of the bitcnt()
function:
/* Get number of bits set. */
This is done to determine which forwarding algorithm to use. This is explained in more detail
in the next section.
Thereafter, a destination Ethernet address is constructed:
/* construct destination Ethernet address */
dst_eth_addr = ETHER_ADDR_FOR_IPV4_MCAST(dest_addr);
Since Ethernet addresses are also part of the multicast process, each outgoing packet carries
the same destination Ethernet address. The destination Ethernet address is constructed from
the lower 23 bits of the multicast group OR-ed with the Ethernet address 01:00:5e:00:00:00,
as per RFC 1112:
#define ETHER_ADDR_FOR_IPV4_MCAST(x) \
(rte_cpu_to_be_64(0x01005e000000ULL | ((x) & 0x7fffff)) >> 16)
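For example, the multicast group 224.0.0.1 (lower 23 bits equal to 0x000001) maps to the destination MAC address 01:00:5e:00:00:01.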
Then, packets are dispatched to the destination ports according to the portmask associated
with a multicast group:
for (port = 0; use_clone != port_mask; port_mask >>= 1, port++) {
/* Prepare output packet and send it out. */
if ((port_mask & 1) != 0) {
if (likely ((mc = mcast_out_pkt(m, use_clone)) != NULL))
mcast_send_pkt(mc, &dst_eth_addr.as_addr, qconf, port);
else if (use_clone == 0)
rte_pktmbuf_free(m);
}
}
The mcast_send_pkt() function then builds the Ethernet header for each outgoing packet and
queues it for transmission on the output port:
RTE_ASSERT(ethdr != NULL);
ether_addr_copy(dest_addr, ðdr->d_addr);
ether_addr_copy(&ports_eth_addr[port], ðdr->s_addr);
ethdr->ether_type = rte_be_to_cpu_16(ETHER_TYPE_IPv4);
len = qconf->tx_mbufs[port].len;
qconf->tx_mbufs[port].m_table[len] = pkt;
qconf->tx_mbufs[port].len = ++len;
/* Transmit packets */
if (unlikely(MAX_PKT_BURST == len))
send_burst(qconf, port);
}
Buffer Cloning
This is the most important part of the application since it demonstrates the use of zero- copy
buffer cloning. There are two approaches for creating the outgoing packet and although both
are based on the data zero-copy idea, there are some differences in the detail.
The first approach creates a clone of the input packet, that is, it walks through all segments of
the input packet and for each segment creates a new buffer and attaches that new buffer to the
segment (refer to rte_pktmbuf_clone() in the rte_mbuf library for more details). A new buffer is
then allocated for the packet header and is prepended to the cloned buffer.
The second approach does not make a clone; it just increments the reference counter for all
input packet segments, allocates a new buffer for the packet header and prepends it to the input
packet.
Basically, the first approach reuses only the input packet's data, but creates its own copy of
the packet's metadata. The second approach reuses both the input packet's data and metadata.
The advantage of the first approach is that each outgoing packet has its own copy of the metadata,
so we can safely modify the data pointer of the input packet. That allows us to skip creating a
separate output packet for the last destination port and instead modify the input packet's header
in place. For example, for N destination ports, we need to invoke mcast_out_pkt() only (N-1) times.
The advantage of the second approach is that there is less work to be done for each outgoing
packet, that is, the “clone” operation is skipped completely. However, there is a price to pay.
The input packet’s metadata must remain intact, so for N destination ports, we need to invoke
mcast_out_pkt() (N) times.
Therefore, for a small number of outgoing ports (and segments in the input packet), the first
approach is faster. As the number of outgoing ports (and/or input segments) grows, the second
approach becomes preferable.
Depending on the number of segments or the number of ports in the outgoing portmask, either
the first (with cloning) or the second (without cloning) approach is taken:
use_clone = (port_num <= MCAST_CLONE_PORTS && m->pkt.nb_segs <= MCAST_CLONE_SEGS);
It is the mcast_out_pkt() function that performs the packet duplication (either with or without
actually cloning the buffers):
static inline struct rte_mbuf *mcast_out_pkt(struct rte_mbuf *pkt, int use_clone)
{
	struct rte_mbuf *hdr;

	/* Allocate a new mbuf for the packet header. */
	if (unlikely((hdr = rte_pktmbuf_alloc(header_pool)) == NULL))
		return NULL;

	/* If requested, clone the input packet rather than reusing it directly. */
	if (use_clone != 0 &&
			unlikely((pkt = rte_pktmbuf_clone(pkt, clone_pool)) == NULL)) {
		rte_pktmbuf_free(hdr);
		return NULL;
	}

	/* Prepend the new header buffer to the packet (length bookkeeping omitted). */
	hdr->pkt.next = pkt;
	hdr->pkt.in_port = pkt->pkt.in_port;
	hdr->pkt.vlan_macip = pkt->pkt.vlan_macip;
	hdr->pkt.hash = pkt->pkt.hash;
	hdr->ol_flags = pkt->ol_flags;
	rte_mbuf_sanity_check(hdr, RTE_MBUF_PKT, 1);
	return hdr;
}
The L3 Forwarding application is a simple example of packet processing using the DPDK. The
application performs L3 forwarding with reassembly for fragmented IPv4 and IPv6 packets.
Overview
The application demonstrates the use of the DPDK libraries to implement packet forwarding
with reassembly for IPv4 and IPv6 fragmented packets. The initialization and run- time paths
are very similar to those of the L2 Forwarding Sample Application (in Real and Virtualized
Environments). The main difference from the L2 Forwarding sample application is that it re-
assembles fragmented IPv4 and IPv6 packets before forwarding. The maximum allowed size
of reassembled packet is 9.5 KB.
There are two key differences from the L2 Forwarding sample application:
• The first difference is that the forwarding decision is taken based on information read
from the input packet’s IP header.
• The second difference is that the application differentiates between IP and non-IP traffic
by means of offload flags.
The Longest Prefix Match (LPM for IPv4, LPM6 for IPv6) table is used to store/lookup an
outgoing port number, associated with that IP address. Any unmatched packets are forwarded
to the originating port.
Compiling the Application
1. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
1. Build the application:
make
The application is run as follows:
./build/ip_reassembly [EAL options] -- -p PORTMASK [-q NQ] [--maxflows=FLOWS] [--flowttl=TTL[(s|ms)]]
where:
• -p PORTMASK: Hexadecimal bitmask of ports to configure
• -q NQ: Number of RX queues per lcore
• --maxflows=FLOWS: determines the maximum number of active fragmented flows (1-65535).
Default value: 4096.
• --flowttl=TTL[(s|ms)]: determines the maximum Time To Live for a fragmented packet. If all
fragments of a packet do not arrive within the given time-out, they are considered invalid and
are dropped. Valid range is 1ms - 3600s. Default value: 1s.
To run the example in linuxapp environment with 2 lcores (2,4) over 2 ports(0,2) with 1 RX
queue per lcore:
./build/ip_reassembly -c 0x14 -n 3 -- -p 5
EAL: coremask set to 14
EAL: Detected lcore 0 on socket 0
EAL: Detected lcore 1 on socket 1
EAL: Detected lcore 2 on socket 0
EAL: Detected lcore 3 on socket 1
EAL: Detected lcore 4 on socket 0
...
To run the example in linuxapp environment with 1 lcore (4) over 2 ports(0,2) with 2 RX queues
per lcore:
./build/ip_reassembly -c 0x10 -n 3 -- -p 5 -q 2
To test the application, flows should be set up in the flow generator that match the values in
the l3fwd_ipv4_route_array and/or l3fwd_ipv6_route_array table.
Please note that in order to test this application, the traffic generator should be generating valid
fragmented IP packets. For IPv6, the only supported case is when no other extension headers
other than fragment extension header are present in the packet.
The default l3fwd_ipv4_route_array table is:
struct l3fwd_ipv4_route l3fwd_ipv4_route_array[] = {
{IPv4(100, 10, 0, 0), 16, 0},
{IPv4(100, 20, 0, 0), 16, 1},
{IPv4(100, 30, 0, 0), 16, 2},
{IPv4(100, 40, 0, 0), 16, 3},
{IPv4(100, 50, 0, 0), 16, 4},
{IPv4(100, 60, 0, 0), 16, 5},
{IPv4(100, 70, 0, 0), 16, 6},
{IPv4(100, 80, 0, 0), 16, 7},
};
For example, for a fragmented input IPv4 packet with destination address 100.10.1.1, a
reassembled IPv4 packet will be sent out from port #0 to the destination address 100.10.1.1 once
all the fragments are collected.
Explanation
The following sections provide some explanation of the sample application code. As mentioned
in the overview section, the initialization and run-time paths are very similar to those of the L2
Forwarding Sample Application (in Real and Virtualized Environments). The following sections
describe aspects that are specific to the IP reassembly sample application.
This application uses the rte_ip_frag library. Please refer to the Programmer's Guide for a more
detailed explanation of how to use this library. The fragment table maintains information about
already received fragments of a packet. Each IP packet is uniquely identified by the triple <Source
IP address>, <Destination IP address>, <ID>. To avoid lock contention, each RX queue has its
own fragment table, i.e. the application cannot handle the situation where different fragments
of the same packet arrive through different RX queues. Each table entry can hold information
about a packet consisting of up to RTE_LIBRTE_IP_FRAG_MAX_FRAGS fragments.
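For illustration, each RX queue creates its own fragment table during setup; a minimal sketch using the rte_ip_frag library, with illustrative sizing names (max_flow_num and max_flow_ttl correspond to the --maxflows and --flowttl options), is:
/* Sketch: per-RX-queue fragment table. frag_cycles converts the flow TTL
 * (in milliseconds) into TSC cycles; IP_FRAG_TBL_BUCKET_ENTRIES is an
 * illustrative bucket size. */
uint64_t frag_cycles = (rte_get_tsc_hz() + MS_PER_S - 1) / MS_PER_S *
		max_flow_ttl;

rxq->frag_tbl = rte_ip_frag_table_create(max_flow_num,
		IP_FRAG_TBL_BUCKET_ENTRIES, max_flow_num, frag_cycles,
		socket_id);
if (rxq->frag_tbl == NULL)
	rte_exit(EXIT_FAILURE, "Cannot create fragment table\n");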
Mempools Initialization
The reassembly application demands a lot of mbufs to be allocated. At any given time up to
(2 * max_flow_num * RTE_LIBRTE_IP_FRAG_MAX_FRAGS * <maximum number of mbufs
per packet>) can be stored inside the fragment table waiting for remaining fragments. To keep
the mempool size under reasonable limits and to avoid a situation where one RX queue can
starve other queues, each RX queue uses its own mempool.
nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) * RTE_LIBRTE_IP_FRAG_MAX_FRAGS;
nb_mbuf *= (port_conf.rxmode.max_rx_pkt_len + BUF_SIZE - 1) / BUF_SIZE;
nb_mbuf *= 2; /* ipv4 and ipv6 */
nb_mbuf += RTE_TEST_RX_DESC_DEFAULT + RTE_TEST_TX_DESC_DEFAULT;
nb_mbuf = RTE_MAX(nb_mbuf, (uint32_t)NB_MBUF);
For each input packet, the packet forwarding operation is done by the l3fwd_simple_forward()
function. If the packet is an IPv4 or IPv6 fragment, then it calls rte_ipv4_reassemble_packet()
for IPv4 packets, or rte_ipv6_reassemble_packet() for IPv6 packets. These functions either
return a pointer to valid mbuf that contains reassembled packet, or NULL (if the packet can’t
be reassembled for some reason). Then l3fwd_simple_forward() continues with the code for
the packet forwarding decision (that is, the identification of the output interface for the packet)
and actual transmit of the packet.
The rte_ipv4_reassemble_packet() or rte_ipv6_reassemble_packet() are responsible for:
1. Searching the Fragment Table for entry with packet’s <IP Source Address, IP Destination
Address, Packet ID>
2. If the entry is found, then check if that entry already timed-out. If yes, then free all
previously received fragments, and remove information about them from the entry.
3. If no entry with such a key is found, then try to create a new one in one of two ways:
(a) Use an empty entry.
(b) Delete a timed-out entry, free the mbufs associated with it and store a new entry
with the specified key in it.
4. Update the entry with new fragment information and check if a packet can be reassem-
bled (the packet’s entry contains all fragments).
(a) If yes, then, reassemble the packet, mark table’s entry as empty and return the
reassembled mbuf to the caller.
(b) If no, then just return a NULL to the caller.
If at any stage of packet processing a reassembly function encounters an error (can't insert a
new entry into the fragment table, or an invalid/timed-out fragment), then it will free all fragments
associated with the packet, mark the table entry as invalid and return NULL to the caller.
The Kernel NIC Interface (KNI) is a DPDK control plane solution that allows userspace ap-
plications to exchange packets with the kernel networking stack. To accomplish this, DPDK
userspace applications use an IOCTL call to request the creation of a KNI virtual device in the
Linux* kernel. The IOCTL call provides interface information and the DPDK’s physical address
space, which is re-mapped into the kernel address space by the KNI kernel loadable module
that saves the information to a virtual device context. The DPDK creates FIFO queues for
packet ingress and egress to the kernel module for each device allocated.
The KNI kernel loadable module is a standard net driver which, upon receiving the IOCTL
call, accesses the DPDK's FIFO queues to receive/transmit packets from/to the DPDK userspace
application. The FIFO queues contain pointers to data packets in the DPDK. This:
• Provides a faster mechanism to interface with the kernel net stack and eliminates system
calls
• Facilitates the DPDK using standard Linux* userspace net tools (tcpdump, ftp, and so on)
• Eliminates the copy_to_user and copy_from_user operations on packets.
The Kernel NIC Interface sample application is a simple example that demonstrates the use of
the DPDK to create a path for packets to go through the Linux* kernel. This is done by creating
one or more kernel net devices for each of the DPDK ports. The application allows the use of
standard Linux tools (ethtool, ifconfig, tcpdump) with the DPDK ports and also the exchange
of packets between the DPDK application and the Linux* kernel.
Overview
The Kernel NIC Interface sample application uses two threads in user space for each physical
NIC port being used, and allocates one or more KNI devices for each physical NIC port with the
kernel module's support. For a physical NIC port, one thread reads from the port and writes to
KNI devices, and another thread reads from KNI devices and writes the data unmodified to the
physical NIC port. It is recommended to configure one KNI device for each physical NIC port.
Configuring more than one KNI device per physical NIC port is intended only for performance
testing, or for working together with VMDq support in the future.
The packet flow through the Kernel NIC Interface application is as shown in the following figure.
Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
Loading the KNI kernel module without any parameters is the typical way a DPDK application
gets packets into and out of the kernel net stack. In this case, only one kernel thread is created
for all KNI devices for packet reception on the kernel side:
#insmod rte_kni.ko
Pinning the kernel thread to a specific core can be done using a taskset command such as
following:
#taskset -p 100000 `pgrep -fl kni_thread | awk '{print $1}'`
This command line tries to pin the specific kni_thread on the 20th lcore (lcore numbering starts
at 0), which means it needs to check if that lcore is available on the board. This command must
be sent after the application has been launched, as insmod does not start the kni thread.
For optimum performance, the lcore in the mask must be selected to be on the same socket
as the lcores used in the KNI application.
To provide flexibility of performance, the kernel module of the KNI, located in the kmod sub-
directory of the DPDK target directory, can be loaded with parameter of kthread_mode as
follows:
• #insmod rte_kni.ko kthread_mode=single
This mode creates only one kernel thread for all KNI devices for packet reception on the
kernel side. This is the default mode. Core affinity for this kernel thread can be set using
the Linux taskset command.
• #insmod rte_kni.ko kthread_mode=multiple
This mode creates a kernel thread for each KNI device for packet reception on the kernel
side. The core affinity of each kernel thread is set when creating the KNI device. The lcore
ID for each kernel thread is provided on the command line when launching the application.
Multiple kernel thread mode can provide higher, more scalable performance.
To measure the throughput in a loopback mode, the kernel module of the KNI, located in the
kmod sub-directory of the DPDK target directory, can be loaded with parameters as follows:
• #insmod rte_kni.ko lo_mode=lo_mode_fifo
This loopback mode will involve ring enqueue/dequeue operations in kernel space.
• #insmod rte_kni.ko lo_mode=lo_mode_fifo_skb
This loopback mode will involve ring enqueue/dequeue operations and sk buffer copies
in kernel space.
The application is run as follows:
./build/kni [EAL options] -- -P -p PORTMASK --config="(port,lcore_rx,lcore_tx[,lcore_kthread,...])[,(port,lcore_rx,lcore_tx[,lcore_kthread,...])]"
Where:
• -P: Set all ports to promiscuous mode so that packets are accepted regardless of the
packet’s Ethernet MAC destination address. Without this option, only packets with the
Ethernet MAC destination address set to the Ethernet address of the port are accepted.
• -p PORTMASK: Hexadecimal bitmask of ports to configure.
• --config="(port,lcore_rx,lcore_tx[,lcore_kthread,...])[,port,lcore_rx,
lcore_tx[,lcore_kthread,...]]": Determines which lcores of RX, TX, kernel thread
are mapped to which ports.
Refer to DPDK Getting Started Guide for general information on running applications and the
Environment Abstraction Layer (EAL) options.
The -c coremask parameter of the EAL options should include the lcores indicated by
lcore_rx and lcore_tx, but does not need to include the lcores indicated by lcore_kthread, as
those are only used to pin the kernel threads. The -p PORTMASK parameter should include
exactly the ports indicated by the port fields in --config, neither more nor less.
The lcore_kthread field in --config can be configured with none, one or more lcore IDs. In multiple
kernel thread mode, if none is configured, a KNI device is allocated for each port and no specific
lcore affinity is set for its kernel thread. If one or more lcore IDs are configured, one or more
KNI devices are allocated for each port and specific lcore affinity is set for each kernel
thread. In single kernel thread mode, if none is configured, a KNI device is allocated for each
port. If one or more lcore IDs are configured, one or more KNI devices are allocated for each
port, but no lcore affinity is set, as there is only one kernel thread for all KNI devices.
For example, to run the application with two ports served by six lcores, one lcore of RX, one
lcore of TX, and one lcore of kernel thread for each port:
./build/kni -c 0xf0 -n 4 -- -P -p 0x3 --config="(0,4,6,8),(1,5,7,9)"
KNI Operations
Once the KNI application is started, one can use different Linux* commands to manage the
net interfaces. If more than one KNI device is configured for a physical port, only the first KNI
device will be paired to the physical device. Operations on the other KNI devices will not affect
the physical port handled by the user space application.
Assigning an IP address:
#ifconfig vEth0_0 192.168.0.1
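Other standard tools can then be used on the same interface; for example, dumping traffic with tcpdump or bringing the interface down (which invokes the application's config_network_if callback described below):
#tcpdump -i vEth0_0
#ifconfig vEth0_0 down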
When the DPDK userspace application is closed, all the KNI devices are deleted from Linux*.
Explanation
Initialization
Setup of the mbuf pool, driver and queues is similar to the setup done in the L2 Forwarding
Sample Application (in Real and Virtualized Environments). In addition, one or more kernel NIC
interfaces are allocated for each of the configured ports according to the command line parameters.
The code for allocating the kernel NIC interfaces for a specific port is as follows:
static int
kni_alloc(uint8_t port_id)
{
	uint8_t i;
	struct rte_kni *kni;
	struct rte_kni_conf conf;
	struct kni_port_params **params = kni_port_params_array;

	if (port_id >= RTE_MAX_ETHPORTS || !params[port_id])
		return -1;

	params[port_id]->nb_kni = params[port_id]->nb_lcore_k ?
				params[port_id]->nb_lcore_k : 1;

	for (i = 0; i < params[port_id]->nb_kni; i++) {
		/* Clear conf at first */
		memset(&conf, 0, sizeof(conf));
		if (params[port_id]->nb_lcore_k) {
			snprintf(conf.name, RTE_KNI_NAMESIZE,
					"vEth%u_%u", port_id, i);
			conf.core_id = params[port_id]->lcore_k[i];
			conf.force_bind = 1;
		} else
			snprintf(conf.name, RTE_KNI_NAMESIZE,
					"vEth%u", port_id);
		conf.group_id = (uint16_t)port_id;
		conf.mbuf_size = MAX_PACKET_SZ;

		/*
		 * The first KNI device associated to a port
		 * is the master, for multiple kernel thread
		 * environment.
		 */
		if (i == 0) {
			struct rte_kni_ops ops;
			struct rte_eth_dev_info dev_info;

			memset(&dev_info, 0, sizeof(dev_info));
			rte_eth_dev_info_get(port_id, &dev_info);
			conf.addr = dev_info.pci_dev->addr;
			conf.id = dev_info.pci_dev->id;

			memset(&ops, 0, sizeof(ops));
			ops.port_id = port_id;
			ops.change_mtu = kni_change_mtu;
			ops.config_network_if = kni_config_network_interface;

			kni = rte_kni_alloc(pktmbuf_pool, &conf, &ops);
		} else
			kni = rte_kni_alloc(pktmbuf_pool, &conf, NULL);

		if (!kni)
			rte_exit(EXIT_FAILURE, "Fail to create kni for "
						"port: %d\n", port_id);
		params[port_id]->kni[i] = kni;
	}

	return 0;
}
The other step in the initialization process that is unique to this sample application is the asso-
ciation of each port with lcores for RX, TX and kernel threads.
• One lcore to read from the port and write to the associated one or more KNI devices
• Another lcore to read from one or more KNI devices and write to the port
• Other lcores for pinning the kernel threads on one by one
This is done by using the kni_port_params_array[] array, which is indexed by the port ID. The
code is as follows:
static int
parse_config(const char *arg)
{
const char *p, *p0 = arg;
char s[256], *end;
unsigned size;
enum fieldnames {
FLD_PORT = 0,
FLD_LCORE_RX,
FLD_LCORE_TX,
_NUM_FLD = KNI_MAX_KTHREAD + 3,
};
int i, j, nb_token;
char *str_fld[_NUM_FLD];
unsigned long int_fld[_NUM_FLD];
uint8_t port_id, nb_kni_port_params = 0;
memset(&kni_port_params_array, 0, sizeof(kni_port_params_array));
size = p0 - p;
i = 0;
port_id = (uint8_t)int_fld[i++];
if (kni_port_params_array[port_id]) {
print_config();
return 0;
fail:
return -1;
Packet Forwarding
After the initialization steps are completed, the main_loop() function is run on each lcore. This
function first checks the lcore_id against the user provided lcore_rx and lcore_tx to see if this
lcore is reading from or writing to kernel NIC interfaces.
For the case that reads from a NIC port and writes to the kernel NIC interfaces, the packet
reception is the same as in L2 Forwarding sample application (see Receive, Process and
Transmit Packets). The packet transmission is done by sending mbufs into the kernel NIC
interfaces by rte_kni_tx_burst(). The KNI library automatically frees the mbufs after the kernel
has successfully copied them.
/**
 * Interface to burst rx and enqueue mbufs into rx_q
 */
static void
kni_ingress(struct kni_port_params *p)
{
	uint8_t i, nb_kni, port_id;
	unsigned nb_rx, num;
	struct rte_mbuf *pkts_burst[PKT_BURST_SZ];

	if (p == NULL)
		return;

	nb_kni = p->nb_kni;
	port_id = p->port_id;
	for (i = 0; i < nb_kni; i++) {
		/* Burst rx from eth */
		nb_rx = rte_eth_rx_burst(port_id, 0, pkts_burst, PKT_BURST_SZ);

		/* Burst tx to kni */
		num = rte_kni_tx_burst(p->kni[i], pkts_burst, nb_rx);
		kni_stats[port_id].rx_packets += num;

		rte_kni_handle_request(p->kni[i]);
		if (unlikely(num < nb_rx)) {
			/* Free mbufs not tx to kni interface */
			kni_burst_free_mbufs(&pkts_burst[num], nb_rx - num);
			kni_stats[port_id].rx_dropped += nb_rx - num;
		}
	}
}
For the other case that reads from kernel NIC interfaces and writes to a physical NIC port,
packets are retrieved by reading mbufs from kernel NIC interfaces by rte_kni_rx_burst(). The
packet transmission is the same as in the L2 Forwarding sample application (see Receive,
Process and Transmit Packets).
/**
 * Interface to dequeue mbufs from tx_q and burst tx
 */
static void
kni_egress(struct kni_port_params *p)
{
	uint8_t i, nb_kni, port_id;
	unsigned nb_tx, num;
	struct rte_mbuf *pkts_burst[PKT_BURST_SZ];

	if (p == NULL)
		return;
	nb_kni = p->nb_kni;
	port_id = p->port_id;
	for (i = 0; i < nb_kni; i++) {
		/* Burst rx from kni */
		num = rte_kni_rx_burst(p->kni[i], pkts_burst, PKT_BURST_SZ);
		/* Burst tx to eth */
		nb_tx = rte_eth_tx_burst(port_id, 0, pkts_burst, (uint16_t)num);
		kni_stats[port_id].tx_packets += nb_tx;
		if (unlikely(nb_tx < num))
			kni_burst_free_mbufs(&pkts_burst[nb_tx], num - nb_tx);
	}
}
To execute specific PMD operations in user space requested by some Linux* commands, call-
backs must be implemented and filled in the struct rte_kni_ops structure. Currently, setting a
new MTU and configuring the network interface (up/ down) are supported.
static struct rte_kni_ops kni_ops = {
.change_mtu = kni_change_mtu,
.config_network_if = kni_config_network_interface,
};
static int
kni_change_mtu(uint8_t port_id, unsigned new_mtu)
{
	int ret;
	struct rte_eth_conf conf;

	/* Stop specific port */
	rte_eth_dev_stop(port_id);

	/* Reconfigure the port with a max packet length matching the new MTU */
	memcpy(&conf, &port_conf, sizeof(conf));
	conf.rxmode.max_rx_pkt_len = new_mtu + KNI_ENET_HEADER_SIZE +
			KNI_ENET_FCS_SIZE;
	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret < 0) {
		RTE_LOG(ERR, APP, "Fail to reconfigure port %d\n", port_id);
		return ret;
	}

	/* Restart specific port */
	ret = rte_eth_dev_start(port_id);
	if (ret < 0) {
		RTE_LOG(ERR, APP, "Fail to restart port %d\n", port_id);
		return ret;
	}
	return 0;
}
static int
kni_config_network_interface(uint8_t port_id, uint8_t if_up)
{
int ret = 0;
if (if_up != 0) {
/* Configure network interface up */
rte_eth_dev_stop(port_id);
ret = rte_eth_dev_start(port_id);
} else /* Configure network interface down */
rte_eth_dev_stop(port_id);
if (ret < 0)
RTE_LOG(ERR, APP, "Failed to start port %d\n", port_id);
return ret;
}
The Keep Alive application is a simple example of a heartbeat/watchdog for packet processing
cores. It demonstrates how to detect ‘failed’ DPDK cores and notify a fault management entity
of this failure. Its purpose is to ensure the failure of the core does not result in a fault that is not
detectable by a management entity.
Overview
The application demonstrates how to protect against ‘silent outages’ on packet processing
cores. A Keep Alive Monitor Agent Core (master) monitors the state of packet processing cores
(worker cores) by dispatching pings at a regular time interval (default is 5ms) and monitoring
the state of the cores. Core states are Alive, MIA, Dead or Buried. MIA indicates a missed
ping, and Dead indicates two missed pings within the specified time interval. When a core is
Dead, a callback function is invoked to restart the packet processing core; a real-life application
might use this callback function to notify a higher level fault management entity of the core
failure in order to take the appropriate corrective action.
Note: Only the worker cores are monitored. A local (on the host) mechanism or agent to
supervise the Keep Alive Monitor Agent Core DPDK core is required to detect its failure.
Note: This application is based on the L2 Forwarding Sample Application (in Real and Virtual-
ized Environments). As such, the initialization and run-time paths are very similar to those of
the L2 forwarding application.
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make
The application is run as follows:
./build/l2fwd-keepalive [EAL options] -- -p PORTMASK [-q NQ] [-K PERIOD] [-T PERIOD]
where,
• -p PORTMASK: A hexadecimal bitmask of the ports to configure
• -q NQ: A number of queues (=ports) per lcore (default is 1)
• -K PERIOD: Heartbeat check period in ms (5ms default; 86400 max)
• -T PERIOD: statistics will be refreshed each PERIOD seconds (0 to disable, 10 default,
86400 maximum).
To run the application in a linuxapp environment with 4 lcores, 16 ports, 8 RX queues per lcore
and a ping interval of 10ms, issue the command:
./build/l2fwd-keepalive -c f -n 4 -- -q 8 -p ffff -K 10
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.
Explanation
The following sections provide some explanation of the Keep-Alive/'Liveliness' conceptual
scheme. As mentioned in the overview section, the initialization and run-time paths are very
similar to those of the L2 Forwarding Sample Application (in Real and Virtualized Environ-
ments).
The Keep-Alive/’Liveliness’ conceptual scheme:
• A Keep-Alive Agent runs every N milliseconds.
• DPDK Cores respond to the keep-alive agent.
• If keep-alive agent detects time-outs, it notifies the fault management entity through a
callback function.
The following sections provide some explanation of the code aspects that are specific to the
Keep Alive sample application.
The keepalive functionality is initialized with a struct rte_keepalive and the callback function to
invoke in the case of a timeout.
rte_global_keepalive_info = rte_keepalive_create(&dead_core, NULL);
if (rte_global_keepalive_info == NULL)
rte_exit(EXIT_FAILURE, "keepalive_create() failed");
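The callback (dead_core in this snippet) receives the user data pointer and the id of the failed core; a minimal sketch of such a handler (the actual recovery or notification policy is application specific) is:
/* Sketch: called by the monitor core when a worker core is declared Dead.
 * A real application would notify a fault manager and/or relaunch work on
 * the core; here the event is only logged. */
static void
dead_core(__attribute__((unused)) void *ptr_data, const int id_core)
{
	printf("Dead core detected: %i\n", id_core);
}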
The function that issues the pings keepalive_dispatch_pings() is configured to run every
check_period milliseconds.
if (rte_timer_reset(&hb_timer,
(check_period * rte_get_timer_hz()) / 1000,
PERIODICAL,
rte_lcore_id(),
&rte_keepalive_dispatch_pings,
rte_global_keepalive_info
) != 0 )
rte_exit(EXIT_FAILURE, "Keepalive setup failure.\n");
The rest of the initialization and run-time path follows the same paths as the L2 forwarding
application. The only addition to the main processing loop is the mark alive functionality and
the example random failures.
rte_keepalive_mark_alive(&rte_global_keepalive_info);
cur_tsc = rte_rdtsc();
The L2 Forwarding with Crypto (l2fwd-crypto) sample application is a simple example of packet
processing using the Data Plane Development Kit (DPDK), in conjunction with the Cryptodev
library.
Overview
The L2 Forwarding with Crypto sample application performs a crypto operation (cipher/hash)
specified by the user from command line (or using the default values), with a crypto device
capable of doing that operation, for each packet that is received on a RX_PORT and performs
L2 forwarding. The destination port is the adjacent port from the enabled portmask, that is, if
the first four ports are enabled (portmask 0xf), ports 0 and 1 forward into each other, and ports
2 and 3 forward into each other. Also, the MAC addresses are affected as follows:
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make
where,
• -p PORTMASK: A hexadecimal bitmask of the ports to configure (default is all the ports)
• -q NQ: A number of queues (=ports) per lcore (default is 1)
• -s: manage all ports from a single core
• -T PERIOD: statistics will be refreshed each PERIOD seconds
(0 to disable, 10 default, 86400 maximum)
• --cdev_type: select preferred crypto device type: HW, SW or anything (ANY)
(default is ANY)
• --chain: select the operation chaining to perform: Cipher->Hash (CIPHER_HASH),
Hash->Cipher (HASH_CIPHER), Cipher (CIPHER_ONLY), Hash (HASH_ONLY)
(default is Cipher->Hash)
• --cipher_algo: select the ciphering algorithm (default is AES CBC)
• --cipher_op: select the ciphering operation to perform: ENCRYPT or DECRYPT
(default is ENCRYPT)
• --cipher_key: set the ciphering key to be used. Bytes have to be separated with ":"
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.
Explanation
The L2 forward with Crypto application demonstrates the performance of a crypto operation on
a packet received on a RX PORT before forwarding it to a TX PORT.
The following figure illustrates a sample flow of a packet in the application, from reception until
transmission.
The following sections provide some explanation of the application.
Fig. 3.3: Encryption flow Through the L2 Forwarding with Crypto Application
All the packets received on all the ports get transformed by the crypto device(s) (ciphering and/or
authentication). The crypto operation to be performed on the packet is parsed from the com-
mand line (see the "Running the Application" section for all the options).
If no parameter is passed, the default crypto operation is:
• Encryption with AES-CBC with 128 bit key.
• Authentication with SHA1-HMAC (generation).
• Keys, IV and AAD are generated randomly.
There are two methods to pass keys, IV and AAD from the command line:
• Passing the full key, with bytes separated by ":":
--cipher_key 00:11:22:33:44
• Passing the size, so that the key is generated randomly, e.g.:
--cipher_key_random_size 16
Note: If full key is passed (first method) and the size is passed as well (second method), the
latter will be ignored.
The sizes of these keys are checked (regardless of the method) before starting the application,
to make sure they are supported by the crypto devices.
Once the encryption operation is defined, crypto devices are initialized. The crypto devices
must be either bound to a DPDK driver (if they are physical devices) or created using the EAL
option --vdev (if they are virtual devices), when running the application.
The initialize_cryptodevs() function performs the device initialization. It iterates through the list
of the available crypto devices and checks which ones are capable of performing the operation.
Each device has a set of capabilities associated with it, which are stored in the device info
structure, so the function checks whether the operation is among the capabilities of each device.
The following code checks if the device supports the specified cipher algorithm (similar for the
authentication algorithm):
/* Check if device supports cipher algo */
i = 0;
opt_cipher_algo = options->cipher_xform.cipher.algo;
cap = &dev_info.capabilities[i];
while (cap->op != RTE_CRYPTO_OP_TYPE_UNDEFINED) {
cap_cipher_algo = cap->sym.cipher.algo;
if (cap->sym.xform_type ==
RTE_CRYPTO_SYM_XFORM_CIPHER) {
if (cap_cipher_algo == opt_cipher_algo) {
if (check_type(options, &dev_info) == 0)
break;
}
}
cap = &dev_info.capabilities[++i];
}
If a capable crypto device is found, key sizes are checked to see if they are supported (cipher
key and IV for the ciphering):
/*
 * Check if length of provided cipher key is supported
 * by the algorithm chosen.
 */
if (options->ckey_param) {
        if (check_supported_size(
                        options->cipher_xform.cipher.key.length,
                        cap->sym.cipher.key_size.min,
                        cap->sym.cipher.key_size.max,
                        cap->sym.cipher.key_size.increment)
                                != 0) {
                printf("Unsupported cipher key length\n");
                return -1;
        }
/*
 * Check if length of the cipher key to be randomly generated
 * is supported by the algorithm chosen.
 */
} else if (options->ckey_random_size != -1) {
        if (check_supported_size(options->ckey_random_size,
                        cap->sym.cipher.key_size.min,
                        cap->sym.cipher.key_size.max,
                        cap->sym.cipher.key_size.increment)
                                != 0) {
                printf("Unsupported cipher key length\n");
                return -1;
        }
        options->cipher_xform.cipher.key.length =
                        options->ckey_random_size;
/* No size provided, use minimum size. */
} else
        options->cipher_xform.cipher.key.length =
                        cap->sym.cipher.key_size.min;
After all the checks, the device is configured and added to the crypto device list.
Note: The number of crypto devices that support the specified crypto operation must be at least the number of ports to be used.
Session creation
The crypto operation has a crypto session associated with it, which contains information such as the transform chain to perform (e.g. ciphering then hashing), pointers to the keys, lengths, etc.
This session is created and is later attached to the crypto operation:
static struct rte_cryptodev_sym_session *
initialize_crypto_session(struct l2fwd_crypto_options *options,
                uint8_t cdev_id)
{
        struct rte_crypto_sym_xform *first_xform;

        if (options->xform_chain == L2FWD_CRYPTO_CIPHER_HASH) {
                first_xform = &options->cipher_xform;
                first_xform->next = &options->auth_xform;
        ...

port_cparams[i].session = initialize_crypto_session(options,
                port_cparams[i].dev_id);
Given N packets received from an RX PORT, N crypto operations are allocated and filled:
if (nb_rx) {
        /*
         * If we can't allocate a crypto_ops, then drop
         * the rest of the burst and dequeue and
         * process the packets to free offload structs
         */
        if (rte_crypto_op_bulk_alloc(
                        l2fwd_crypto_op_pool,
                        RTE_CRYPTO_OP_TYPE_SYMMETRIC,
                        ops_burst, nb_rx) !=
                                        nb_rx) {
                for (j = 0; j < nb_rx; j++)
                        rte_pktmbuf_free(pkts_burst[j]);

                nb_rx = 0;
        }
After filling the crypto operation (including session attachment), the mbuf which will be trans-
formed is attached to it:
op->sym->m_src = m;
Since no destination mbuf is set, the source mbuf will be overwritten after the operation is done
(in-place).
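For illustration, a minimal sketch of how these fields might be filled is shown below. It is not the sample's exact code; ipdata_offset and data_len are assumed variables describing the payload to be transformed:
/* Illustrative sketch only: attach the session and describe the data
 * to be transformed. ipdata_offset and data_len are assumed variables. */
rte_crypto_op_attach_sym_session(op, cparams->session);

op->sym->cipher.data.offset = ipdata_offset;
op->sym->cipher.data.length = data_len;

op->sym->m_src = m;
/* Setting op->sym->m_dst as well would make the operation out-of-place. */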
Once the operation has been created, it has to be enqueued in one of the crypto devices. Before doing so, for performance reasons, the operation stays in a buffer. When the buffer holds enough operations (MAX_PKT_BURST), they are enqueued to the device, which then performs them:
static int
l2fwd_crypto_enqueue(struct rte_crypto_op *op,
                struct l2fwd_crypto_params *cparams)
{
        unsigned lcore_id, len;
        struct lcore_queue_conf *qconf;

        lcore_id = rte_lcore_id();

        qconf = &lcore_queue_conf[lcore_id];
        len = qconf->op_buf[cparams->dev_id].len;
        qconf->op_buf[cparams->dev_id].buffer[len] = op;
        len++;

        /* Enough operations buffered, flush them to the device */
        if (len == MAX_PKT_BURST) {
                l2fwd_crypto_send_burst(qconf, MAX_PKT_BURST, cparams);
                len = 0;
        }

        qconf->op_buf[cparams->dev_id].len = len;
        return 0;
}

...

static int
l2fwd_crypto_send_burst(struct lcore_queue_conf *qconf, unsigned n,
                struct l2fwd_crypto_params *cparams)
{
        struct rte_crypto_op **op_buffer;
        unsigned ret;

        op_buffer = (struct rte_crypto_op **)
                        qconf->op_buf[cparams->dev_id].buffer;

        ret = rte_cryptodev_enqueue_burst(cparams->dev_id,
                        cparams->qp_id, op_buffer, (uint16_t) n);

        crypto_statistics[cparams->dev_id].enqueued += ret;
        if (unlikely(ret < n)) {
                crypto_statistics[cparams->dev_id].errors += (n - ret);
                do {
                        rte_pktmbuf_free(op_buffer[ret]->sym->m_src);
                        rte_crypto_op_free(op_buffer[ret]);
                } while (++ret < n);
        }

        return 0;
}
After this, the operations are dequeued from the device, and the transformed mbuf is extracted
from the operation. Then, the operation is freed and the mbuf is forwarded as it is done in the
L2 forwarding application.
/* Dequeue packets from Crypto device */
do {
        nb_rx = rte_cryptodev_dequeue_burst(
                        cparams->dev_id, cparams->qp_id,
                        ops_burst, MAX_PKT_BURST);

        crypto_statistics[cparams->dev_id].dequeued +=
                        nb_rx;

        /* Forward crypto'd packets */
        for (j = 0; j < nb_rx; j++) {
                m = ops_burst[j]->sym->m_src;

                rte_crypto_op_free(ops_burst[j]);
                l2fwd_simple_forward(m, portid);
        }
} while (nb_rx == MAX_PKT_BURST);
The L2 Forwarding sample application is a simple example of packet processing using the Data
Plane Development Kit (DPDK) which also takes advantage of Single Root I/O Virtualization
(SR-IOV) features in a virtualized environment.
Overview
The L2 Forwarding sample application, which can operate in real and virtualized environments,
performs L2 forwarding for each packet that is received. The destination port is the adjacent
port from the enabled portmask, that is, if the first four ports are enabled (portmask 0xf), ports
1 and 2 forward into each other, and ports 3 and 4 forward into each other. Also, the MAC
addresses are affected as follows:
• The source MAC address is replaced by the TX port MAC address
• The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID
This application can be used to benchmark performance using a traffic-generator, as shown in
the Fig. 3.4.
The application can also be used in a virtualized environment as shown in Fig. 3.5.
The L2 Forwarding application can also be used as a starting point for developing a new appli-
cation based on the DPDK.
This application can use the virtual function available in the system and therefore can be used
in a virtual machine without passing through the whole Network Device into a guest machine
in a virtualized scenario. The virtual functions can be enabled in the host machine or the
hypervisor with the respective physical function driver.
For example, in a Linux* host machine, it is possible to enable a virtual function using the
following command:
modprobe ixgbe max_vfs=2,2
This command enables two Virtual Functions on each Physical Function of the NIC, which has two physical ports in the PCI configuration space. It is important to note that Virtual Functions 0 and 2 would belong to Physical Function 0 and Virtual Functions 1 and 3 would belong to Physical Function 1, in this case enabling a total of four Virtual Functions.
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make
where,
• -p PORTMASK: A hexadecimal bitmask of the ports to configure
• -q NQ: A number of queues (=ports) per lcore (default is 1)
• -l: Use locale thousands separator when formatting big numbers.
To run the application in linuxapp environment with 4 lcores, 16 ports, 8 RX queues per lcore
and thousands separator printing, issue the command:
$ ./build/l2fwd-jobstats -c f -n 4 -- -q 8 -p ffff -l
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.
Explanation
EAL arguments are parsed first, then application-specific arguments. This is done at the be-
ginning of the main() function:
/* init EAL */
ret = rte_eal_init(argc, argv);
if (ret < 0)
        rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");
argc -= ret;
argv += ret;
Once the arguments are parsed, the mbuf pool is created. The mbuf pool contains a set of
mbuf objects that will be used by the driver and the application to store network packet data:
/* create the mbuf pool */
l2fwd_pktmbuf_pool =
        rte_mempool_create("mbuf_pool", NB_MBUF,
                           MBUF_SIZE, 32,
                           sizeof(struct rte_pktmbuf_pool_private),
                           rte_pktmbuf_pool_init, NULL,
                           rte_pktmbuf_init, NULL,
                           rte_socket_id(), 0);

if (l2fwd_pktmbuf_pool == NULL)
        rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
The rte_mempool is a generic structure used to handle pools of objects. In this case, it is
necessary to create a pool that will be used by the driver, which expects to have some reserved
space in the mempool structure, sizeof(struct rte_pktmbuf_pool_private) bytes. The number of
allocated pkt mbufs is NB_MBUF, with a size of MBUF_SIZE each. A per-lcore cache of 32
mbufs is kept. The memory is allocated on the socket returned by rte_socket_id(), but it is possible to extend this code to allocate one mbuf pool per socket (see the sketch below).
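As a hedged illustration of that extension (not part of the sample), one pool per NUMA socket could be created along the following lines; the pktmbuf_pool[] array and the loop are assumptions, while NB_MBUF and MBUF_SIZE are the sample's constants:
/* Hypothetical sketch: one mbuf pool per NUMA socket. */
static struct rte_mempool *pktmbuf_pool[RTE_MAX_NUMA_NODES];

unsigned lcore_id;
RTE_LCORE_FOREACH(lcore_id) {
        unsigned socket = rte_lcore_to_socket_id(lcore_id);
        char name[RTE_MEMPOOL_NAMESIZE];

        if (pktmbuf_pool[socket] != NULL)
                continue;

        snprintf(name, sizeof(name), "mbuf_pool_socket_%u", socket);
        pktmbuf_pool[socket] = rte_mempool_create(name, NB_MBUF, MBUF_SIZE,
                        32, sizeof(struct rte_pktmbuf_pool_private),
                        rte_pktmbuf_pool_init, NULL,
                        rte_pktmbuf_init, NULL,
                        socket, 0);
        if (pktmbuf_pool[socket] == NULL)
                rte_exit(EXIT_FAILURE,
                        "Cannot init mbuf pool on socket %u\n", socket);
}
Each lcore would then take mbufs from the pool matching its own socket, avoiding remote memory accesses.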
Two callback pointers are also given to the rte_mempool_create() function:
• The first callback pointer is to rte_pktmbuf_pool_init() and is used to initialize the private
data of the mempool, which is needed by the driver. This function is provided by the mbuf
API, but can be copied and extended by the developer.
• The second callback pointer given to rte_mempool_create() is the mbuf initializer. The
default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library. If a
more complex application wants to extend the rte_pktmbuf structure for its own needs, a
new function derived from rte_pktmbuf_init() can be created.
Driver Initialization
The main part of the code in the main() function relates to the initialization of the driver. To fully understand this code, it is recommended to study the chapters related to the Poll Mode Driver in the DPDK Programmer's Guide and the DPDK API Reference.
nb_ports = rte_eth_dev_count();
if (nb_ports == 0)
        rte_exit(EXIT_FAILURE, "No Ethernet ports - bye\n");

/* reset l2fwd_dst_ports */

last_port = 0;

/*
 * Each logical core is assigned a dedicated TX queue on each port.
 */
for (portid = 0; portid < nb_ports; portid++) {
        /* skip ports that are not enabled */
        if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
                continue;

        if (nb_ports_in_mask % 2) {
                l2fwd_dst_ports[portid] = last_port;
                l2fwd_dst_ports[last_port] = portid;
        }
        else
                last_port = portid;

        nb_ports_in_mask++;
The next step is to configure the RX and TX queues. For each port, there is only one RX queue
(only one lcore is able to poll a given port). The number of TX queues depends on the number
of available lcores. The rte_eth_dev_configure() function is used to configure the number of
queues for a port:
ret = rte_eth_dev_configure((uint8_t)portid, 1, 1, &port_conf);
if (ret < 0)
        rte_exit(EXIT_FAILURE, "Cannot configure device: "
                        "err=%d, port=%u\n",
                        ret, portid);

.txmode = {
        .mq_mode = ETH_DCB_NONE
},
};
RX Queue Initialization
The application uses one lcore to poll one or several ports, depending on the -q option, which
specifies the number of queues per lcore.
For example, if the user specifies -q 4, the application is able to poll four ports with one lcore.
If there are 16 ports on the target (and if the portmask argument is -p ffff ), the application will
need four lcores to poll all the ports.
ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
                rte_eth_dev_socket_id(portid),
                NULL,
                l2fwd_pktmbuf_pool);
if (ret < 0)
        rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup:err=%d, port=%u\n",
                  ret, (unsigned) portid);
The list of queues that must be polled for a given lcore is stored in a private structure called
struct lcore_queue_conf.
struct lcore_queue_conf {
        unsigned n_rx_port;
        unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
        struct mbuf_table tx_mbufs[RTE_MAX_ETHPORTS];

        rte_atomic16_t stats_read_pending;
        rte_spinlock_t lock;
} __rte_cache_aligned;
TX Queue Initialization
Each lcore should be able to transmit on any port. For every port, a single TX queue is
initialized.
/* init one TX queue on each port */
fflush(stdout);

ret = rte_eth_tx_queue_setup(portid, 0, nb_txd,
                rte_eth_dev_socket_id(portid),
                NULL);
if (ret < 0)
        rte_exit(EXIT_FAILURE, "rte_eth_tx_queue_setup:err=%d, port=%u\n",
                ret, (unsigned) portid);

rte_timer_init(&qconf->flush_timer);
ret = rte_timer_reset(&qconf->flush_timer, drain_tsc, PERIODICAL,
                lcore_id, &l2fwd_flush_job, NULL);

if (ret < 0) {
        rte_exit(1, "Failed to reset flush job timer for lcore %u: %s",
                        lcore_id, rte_strerror(-ret));
}

rte_timer_init(&qconf->rx_timers[i]);
ret = rte_timer_reset(&qconf->rx_timers[i], 0, PERIODICAL, lcore_id,
                l2fwd_fwd_job, (void *)(uintptr_t)i);

if (ret < 0) {
        rte_exit(1, "Failed to reset lcore %u port %u job timer: %s",
                        lcore_id, qconf->rx_port_list[i], rte_strerror(-ret));
}
Main loop
do {
        rte_jobstats_context_start(&qconf->jobs_context);
        // ...
        do {
                uint8_t i;
                uint64_t now = rte_get_timer_cycles();
                // ...
        } while (!need_manage);

        rte_jobstats_finish(&qconf->idle_job, qconf->idle_job.target);

        rte_timer_manage();
        rte_jobstats_context_finish(&qconf->jobs_context);
} while (likely(stats_read_pending == 0));

rte_spinlock_unlock(&qconf->lock);
rte_pause();
}
The first, infinite for loop minimizes the impact of statistics reading: the lock is only taken and released when requested.
The second, inner while loop performs the actual job management. When any job is ready, rte_timer_manage() is used to call the job handler. Here, the l2fwd_fwd_job() and l2fwd_flush_job() functions are called as needed. Then rte_jobstats_context_finish() is called to mark the end of the loop, meaning no other jobs are ready to execute. At this point the statistics are ready to be read, and if stats_read_pending is set, the loop breaks to allow them to be read.
The third, do-while loop is the idle job (idle statistics counter). Its only purpose is to monitor whether any job is ready or a statistics read is pending for this lcore. The statistics from this part of the code are considered the headroom available for additional processing.
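To make the interaction concrete, the following hedged sketch (not the sample's exact code) shows how a statistics reader could use the stats_read_pending flag and the per-lcore lock from the structure shown earlier; the function name is illustrative:
#include <rte_atomic.h>
#include <rte_spinlock.h>

static void
read_lcore_stats(struct lcore_queue_conf *qconf)
{
        /* Ask the forwarding lcore to hand over its statistics. */
        rte_atomic16_set(&qconf->stats_read_pending, 1);
        /* The worker releases its lock once it notices the flag. */
        rte_spinlock_lock(&qconf->lock);
        rte_atomic16_set(&qconf->stats_read_pending, 0);

        /* ... copy out the per-job statistics from qconf here ... */

        rte_spinlock_unlock(&qconf->lock);
}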
The main task of the l2fwd_fwd_job() function is to read ingress packets from the RX queue of a particular port and forward them. This is done using the following code:
total_nb_rx = rte_eth_rx_burst((uint8_t) portid, 0, pkts_burst,
MAX_PKT_BURST);
Packets are read in a burst of size MAX_PKT_BURST. Then, each mbuf in the table is processed by the l2fwd_simple_forward() function. The processing is very simple: determine the TX port from the RX port, then replace the source and destination MAC addresses.
The rte_eth_rx_burst() function writes the mbuf pointers in a local table and returns the number
of available mbufs in the table.
After the first read, a second read is attempted.
if (total_nb_rx == MAX_PKT_BURST) {
        const uint16_t nb_rx = rte_eth_rx_burst((uint8_t) portid, 0, pkts_burst,
                        MAX_PKT_BURST);

        total_nb_rx += nb_rx;

        for (j = 0; j < nb_rx; j++) {
                m = pkts_burst[j];
                rte_prefetch0(rte_pktmbuf_mtod(m, void *));
                l2fwd_simple_forward(m, portid);
        }
}
This second read is important to give the job stats library feedback on how many packets were processed.
/* Adjust period time in which we are running here. */
if (rte_jobstats_finish(job, total_nb_rx) != 0) {
        rte_timer_reset(&qconf->rx_timers[port_idx], job->period, PERIODICAL,
                        lcore_id, l2fwd_fwd_job, arg);
}
Note: In the following code, one line for getting the output port requires some explanation.
During the initialization process, a static array of destination ports (l2fwd_dst_ports[]) is filled
such that for each source port, a destination port is assigned that is either the next or previous
enabled port from the portmask. Naturally, the number of ports in the portmask must be even,
otherwise, the application exits.
static void
l2fwd_simple_forward(struct rte_mbuf *m, unsigned portid)
{
        struct ether_hdr *eth;
        void *tmp;
        unsigned dst_port;

        dst_port = l2fwd_dst_ports[portid];
        eth = rte_pktmbuf_mtod(m, struct ether_hdr *);

        /* 02:00:00:00:00:xx */
        tmp = &eth->d_addr.addr_bytes[0];
        *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);

        /* src addr */
        ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->s_addr);
Then, the packet is sent using the l2fwd_send_packet (m, dst_port) function. For this test
application, the processing is exactly the same for all packets arriving on the same RX port.
Therefore, it would have been possible to call the l2fwd_send_burst() function directly from the
main loop to send all the received packets on the same TX port, using the burst-oriented send
function, which is more efficient.
However, in real-life applications (such as, L3 routing), packet N is not necessarily forwarded
on the same port as packet N-1. The application is implemented to illustrate that, so the same
approach can be reused in a more complex application.
The l2fwd_send_packet() function stores the packet in a per-lcore and per-txport table. If the
table is full, the whole packets table is transmitted using the l2fwd_send_burst() function:
/* Send the packet on an output interface */
static int
l2fwd_send_packet(struct rte_mbuf *m, uint8_t port)
{
        unsigned lcore_id, len;
        struct lcore_queue_conf *qconf;

        lcore_id = rte_lcore_id();

        qconf = &lcore_queue_conf[lcore_id];
        len = qconf->tx_mbufs[port].len;
        qconf->tx_mbufs[port].m_table[len] = m;
        len++;

        /* enough pkts to be sent */
        if (unlikely(len == MAX_PKT_BURST)) {
                l2fwd_send_burst(qconf, MAX_PKT_BURST, port);
                len = 0;
        }

        qconf->tx_mbufs[port].len = len;
        return 0;
}
To ensure that no packets remain in the tables, a flush job exists. The l2fwd_flush_job() function is called periodically on each lcore to drain the TX queue of each port. This technique introduces some latency when there are not many packets to send, but it improves performance:
static void
l2fwd_flush_job(__rte_unused struct rte_timer *timer, __rte_unused void *arg)
{
        uint64_t now;
        unsigned lcore_id;
        struct lcore_queue_conf *qconf;
        struct mbuf_table *m_table;
        uint8_t portid;

        lcore_id = rte_lcore_id();
        qconf = &lcore_queue_conf[lcore_id];

        rte_jobstats_start(&qconf->jobs_context, &qconf->flush_job);

        now = rte_get_timer_cycles();
        lcore_id = rte_lcore_id();
        qconf = &lcore_queue_conf[lcore_id];

        for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
                m_table = &qconf->tx_mbufs[portid];

                if (m_table->len == 0 || m_table->next_flush_time <= now)
                        continue;

                l2fwd_send_burst(qconf, portid);
        }
        rte_jobstats_finish(&qconf->flush_job, qconf->flush_job.target);
}
The L2 Forwarding sample application is a simple example of packet processing using the Data
Plane Development Kit (DPDK) which also takes advantage of Single Root I/O Virtualization
(SR-IOV) features in a virtualized environment.
Overview
The L2 Forwarding sample application, which can operate in real and virtualized environments,
performs L2 forwarding for each packet that is received on an RX_PORT. The destination port is
the adjacent port from the enabled portmask, that is, if the first four ports are enabled (portmask
0xf), ports 1 and 2 forward into each other, and ports 3 and 4 forward into each other. Also, if
MAC addresses updating is enabled, the MAC addresses are affected as follows:
• The source MAC address is replaced by the TX_PORT MAC address
• The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID
This application can be used to benchmark performance using a traffic-generator, as shown in
the Fig. 3.6, or in a virtualized environment as shown in Fig. 3.7.
This application may be used for basic VM to VM communication as shown in Fig. 3.8, when
MAC addresses updating is disabled.
The L2 Forwarding application can also be used as a starting point for developing a new appli-
cation based on the DPDK.
This application can use the virtual function available in the system and therefore can be used
in a virtual machine without passing through the whole Network Device into a guest machine
in a virtualized scenario. The virtual functions can be enabled in the host machine or the
hypervisor with the respective physical function driver.
For example, in a Linux* host machine, it is possible to enable a virtual function using the
following command:
modprobe ixgbe max_vfs=2,2
This command enables two Virtual Functions on each Physical Function of the NIC, which has two physical ports in the PCI configuration space. It is important to note that Virtual Functions 0 and 2 would belong to Physical Function 0 and Virtual Functions 1 and 3 would belong to Physical Function 1, in this case enabling a total of four Virtual Functions.
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make
where,
• -p PORTMASK: A hexadecimal bitmask of the ports to configure
• -q NQ: A number of queues (=ports) per lcore (default is 1)
• --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default).
To run the application in linuxapp environment with 4 lcores, 16 ports and 8 RX queues per
lcore and MAC address updating enabled, issue the command:
$ ./build/l2fwd -c f -n 4 -- -q 8 -p ffff
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.
Explanation
EAL arguments are parsed first, then application-specific arguments. This is done at the be-
ginning of the main() function:
/* init EAL */
ret = rte_eal_init(argc, argv);
if (ret < 0)
        rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");
argc -= ret;
argv += ret;
Once the arguments are parsed, the mbuf pool is created. The mbuf pool contains a set of
mbuf objects that will be used by the driver and the application to store network packet data:
/* create the mbuf pool */
l2fwd_pktmbuf_pool = rte_mempool_create("mbuf_pool", NB_MBUF, MBUF_SIZE,
                32, sizeof(struct rte_pktmbuf_pool_private),
                rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL, 0, 0);
if (l2fwd_pktmbuf_pool == NULL)
        rte_panic("Cannot init mbuf pool\n");
The rte_mempool is a generic structure used to handle pools of objects. In this case, it is
necessary to create a pool that will be used by the driver, which expects to have some reserved
space in the mempool structure, sizeof(struct rte_pktmbuf_pool_private) bytes. The number of
allocated pkt mbufs is NB_MBUF, with a size of MBUF_SIZE each. A per-lcore cache of 32
mbufs is kept. The memory is allocated in NUMA socket 0, but it is possible to extend this code
to allocate one mbuf pool per socket.
Two callback pointers are also given to the rte_mempool_create() function:
• The first callback pointer is to rte_pktmbuf_pool_init() and is used to initialize the private
data of the mempool, which is needed by the driver. This function is provided by the mbuf
API, but can be copied and extended by the developer.
• The second callback pointer given to rte_mempool_create() is the mbuf initializer. The
default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library. If a
more complex application wants to extend the rte_pktmbuf structure for its own needs, a
new function derived from rte_pktmbuf_init() can be created.
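As a hedged illustration of such an extension (not part of the sample), a derived initializer might first call the default one and then set an application-specific field; the function name is hypothetical:
/* Hypothetical mbuf constructor: run the default initialization, then
 * pre-set an application-specific per-mbuf field. */
static void
my_mbuf_init(struct rte_mempool *mp, void *opaque_arg, void *m, unsigned i)
{
        struct rte_mbuf *mbuf = m;

        /* Reuse the default initializer provided by the mbuf API. */
        rte_pktmbuf_init(mp, opaque_arg, m, i);

        /* Application-specific extension. */
        mbuf->udata64 = 0;
}
This function would then be passed to rte_mempool_create() in place of rte_pktmbuf_init().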
Driver Initialization
The main part of the code in the main() function relates to the initialization of the driver. To fully understand this code, it is recommended to study the chapters related to the Poll Mode Driver in the DPDK Programmer's Guide and the DPDK API Reference.
if (rte_eal_pci_probe() < 0)
        rte_exit(EXIT_FAILURE, "Cannot probe PCI\n");

nb_ports = rte_eth_dev_count();
if (nb_ports == 0)
        rte_exit(EXIT_FAILURE, "No Ethernet ports - bye\n");

/* reset l2fwd_dst_ports */

last_port = 0;

/*
 * Each logical core is assigned a dedicated TX queue on each port.
 */
if (nb_ports_in_mask % 2) {
        l2fwd_dst_ports[portid] = last_port;
        l2fwd_dst_ports[last_port] = portid;
}
else
        last_port = portid;

nb_ports_in_mask++;
Observe that:
• rte_igb_pmd_init_all() simultaneously registers the driver as a PCI driver and as an Eth-
ernet* Poll Mode Driver.
• rte_eal_pci_probe() parses the devices on the PCI bus and initializes recognized devices.
The next step is to configure the RX and TX queues. For each port, there is only one RX queue
(only one lcore is able to poll a given port). The number of TX queues depends on the number
of available lcores. The rte_eth_dev_configure() function is used to configure the number of
queues for a port:
ret = rte_eth_dev_configure((uint8_t)portid, 1, 1, &port_conf);
if (ret < 0)
        rte_exit(EXIT_FAILURE, "Cannot configure device: "
                        "err=%d, port=%u\n",
                        ret, portid);

.txmode = {
        .mq_mode = ETH_DCB_NONE
},
};
RX Queue Initialization
The application uses one lcore to poll one or several ports, depending on the -q option, which
specifies the number of queues per lcore.
For example, if the user specifies -q 4, the application is able to poll four ports with one lcore.
If there are 16 ports on the target (and if the portmask argument is -p ffff ), the application will
need four lcores to poll all the ports.
ret = rte_eth_rx_queue_setup((uint8_t) portid, 0, nb_rxd, SOCKET0,
                &rx_conf, l2fwd_pktmbuf_pool);
if (ret < 0)
        rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup: err=%d, port=%u\n", ret, (unsigned) portid);
The list of queues that must be polled for a given lcore is stored in a private structure called
struct lcore_queue_conf.
struct lcore_queue_conf {
        unsigned n_rx_port;
        unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
        struct mbuf_table tx_mbufs[L2FWD_MAX_PORTS];
} __rte_cache_aligned;
The values n_rx_port and rx_port_list[] are used in the main packet processing loop (see Re-
ceive, Process and Transmit Packets).
The global configuration for the RX queues is stored in a static structure:
static const struct rte_eth_rxconf rx_conf = {
        .rx_thresh = {
                .pthresh = RX_PTHRESH,
                .hthresh = RX_HTHRESH,
                .wthresh = RX_WTHRESH,
        },
};
TX Queue Initialization
Each lcore should be able to transmit on any port. For every port, a single TX queue is
initialized.
/* init one TX queue on each port */
fflush(stdout);
In the l2fwd_main_loop() function, the main task is to read ingress packets from the RX queues.
This is done using the following code:
/*
* Read packet from RX queues
*/
Packets are read in a burst of size MAX_PKT_BURST. The rte_eth_rx_burst() function writes
the mbuf pointers in a local table and returns the number of available mbufs in the table.
Then, each mbuf in the table is processed by the l2fwd_simple_forward() function. The pro-
cessing is very simple: process the TX port from the RX port, then replace the source and
destination MAC addresses if MAC addresses updating is enabled.
Note: In the following code, one line for getting the output port requires some explanation.
During the initialization process, a static array of destination ports (l2fwd_dst_ports[]) is filled
such that for each source port, a destination port is assigned that is either the next or previous
enabled port from the portmask. Naturally, the number of ports in the portmask must be even,
otherwise, the application exits.
static void
l2fwd_simple_forward(struct rte_mbuf *m, unsigned portid)
{
        struct ether_hdr *eth;
        void *tmp;
        unsigned dst_port;

        dst_port = l2fwd_dst_ports[portid];
        eth = rte_pktmbuf_mtod(m, struct ether_hdr *);

        /* 02:00:00:00:00:xx */
        tmp = &eth->d_addr.addr_bytes[0];
        *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
        /* src addr */
        ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->s_addr);
Then, the packet is sent using the l2fwd_send_packet (m, dst_port) function. For this test
application, the processing is exactly the same for all packets arriving on the same RX port.
Therefore, it would have been possible to call the l2fwd_send_burst() function directly from the
main loop to send all the received packets on the same TX port, using the burst-oriented send
function, which is more efficient.
However, in real-life applications (such as, L3 routing), packet N is not necessarily forwarded
on the same port as packet N-1. The application is implemented to illustrate that, so the same
approach can be reused in a more complex application.
The l2fwd_send_packet() function stores the packet in a per-lcore and per-txport table. If the
table is full, the whole packets table is transmitted using the l2fwd_send_burst() function:
/* Send the packet on an output interface */
static int
l2fwd_send_packet(struct rte_mbuf *m, uint8_t port)
{
        unsigned lcore_id, len;
        struct lcore_queue_conf *qconf;

        lcore_id = rte_lcore_id();

        qconf = &lcore_queue_conf[lcore_id];
        len = qconf->tx_mbufs[port].len;
        qconf->tx_mbufs[port].m_table[len] = m;
        len++;

        /* enough pkts to be sent */
        if (unlikely(len == MAX_PKT_BURST)) {
                l2fwd_send_burst(qconf, MAX_PKT_BURST, port);
                len = 0;
        }

        qconf->tx_mbufs[port].len = len;
        return 0;
}
To ensure that no packets remain in the tables, each lcore drains its TX queues in its main loop. This technique introduces some latency when there are not many packets to send, but it improves performance:
cur_tsc = rte_rdtsc();
/*
* TX burst queue drain
*/
qconf->tx_mbufs[portid].len = 0;
}
/* if timer is enabled */
if (timer_period > 0) {
/* advance the timer */
timer_tsc += diff_tsc;
if (lcore_id == rte_get_master_lcore()) {
print_stats();
prev_tsc = cur_tsc;
}
L2 Forwarding Sample Application with Cache Allocation Technology (CAT)
Requires libpqos from Intel's intel-cmt-cat software package, hosted in a GitHub repository. For installation notes, please see the README file.
GIT:
• https://github.com/01org/intel-cmt-cat
To compile the application export the path to PQoS lib and the DPDK source tree and go to the
example directory:
export PQOS_INSTALL_PATH=/path/to/libpqos
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/l2fwd-cat
See the DPDK Getting Started Guide for possible RTE_TARGET values.
Build the application as follows:
make
To run the example in a linuxapp environment and enable CAT on cpus 0-2:
./build/l2fwd-cat -c 2 -n 4 -- --l3ca="0x3@(0-2)"
For more information about the pqos tool, please see its man page or the intel-cmt-cat wiki.
Explanation
The following sections provide an explanation of the main components of the code.
All DPDK library functions used in the sample code are prefixed with rte_ and are explained
in detail in the DPDK API Documentation.
The main() function performs the initialization and calls the execution threads for each lcore.
The first task is to initialize the Environment Abstraction Layer (EAL). The argc and argv
arguments are provided to the rte_eal_init() function. The value returned is the number
of parsed arguments:
int ret = rte_eal_init(argc, argv);
if (ret < 0)
        rte_exit(EXIT_FAILURE, "Error with EAL initialization\n");
The next task is to initialize the PQoS library and configure CAT. The argc and argv argu-
ments are provided to the cat_init() function. The value returned is the number of parsed
arguments:
int ret = cat_init(argc, argv);
if (ret < 0)
        rte_exit(EXIT_FAILURE, "PQOS: L3CA init failed!\n");
cat_init() is a wrapper function which parses the command, validates the requested pa-
rameters and configures CAT accordingly.
The L3 Forwarding application is a simple example of packet processing using the DPDK. The
application performs L3 forwarding.
Overview
The application demonstrates the use of the hash and LPM libraries in the DPDK to implement
packet forwarding. The initialization and run-time paths are very similar to those of the L2
Forwarding Sample Application (in Real and Virtualized Environments). The main difference
from the L2 Forwarding sample application is that the forwarding decision is made based on
information read from the input packet.
The lookup method is either hash-based or LPM-based and is selected at run time. When the
selected lookup method is hash-based, a hash object is used to emulate the flow classification
stage. The hash object is used in correlation with a flow table to map each input packet to its
flow at runtime.
The hash lookup key is represented by a DiffServ 5-tuple composed of the following fields read
from the input packet: Source IP Address, Destination IP Address, Protocol, Source Port and
Destination Port. The ID of the output interface for the input packet is read from the identified
flow table entry. The set of flows used by the application is statically configured and loaded
into the hash at initialization time. When the selected lookup method is LPM based, an LPM
object is used to emulate the forwarding stage for IPv4 packets. The LPM object is used as the
routing table to identify the next hop for each input packet at runtime.
The LPM lookup key is represented by the Destination IP Address field read from the input
packet. The ID of the output interface for the input packet is the next hop returned by the LPM
lookup. The set of LPM rules used by the application is statically configured and loaded into
the LPM object at initialization time.
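As a hedged, self-contained illustration of that flow (not the sample's code), the rte_lpm API can be used roughly as follows; the table name, sizes and the route shown are arbitrary:
#include <stdio.h>
#include <rte_lpm.h>
#include <rte_lcore.h>

static void
lpm_example(void)
{
        struct rte_lpm_config cfg = {
                .max_rules = 1024,
                .number_tbl8s = 256,
                .flags = 0,
        };
        struct rte_lpm *lpm;
        uint32_t next_hop;
        /* 192.168.1.0/24 and a host inside it, in host byte order */
        uint32_t route = (192u << 24) | (168u << 16) | (1u << 8);
        uint32_t probe = route | 42u;

        lpm = rte_lpm_create("example_lpm", rte_socket_id(), &cfg);
        if (lpm == NULL)
                return;

        /* Load one static rule: route 192.168.1.0/24 to next hop (port) 1 */
        rte_lpm_add(lpm, route, 24, 1);

        /* At run time: the next hop returned is the output interface */
        if (rte_lpm_lookup(lpm, probe, &next_hop) == 0)
                printf("next hop for 192.168.1.42 is %u\n", next_hop);

        rte_lpm_free(lpm);
}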
In the sample application, hash-based forwarding supports IPv4 and IPv6. LPM-based for-
warding supports IPv4 only.
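Similarly, a hedged, self-contained sketch of the hash-based path described above (not the sample's code) could use the rte_hash API as follows; the key layout and names are illustrative:
#include <stdio.h>
#include <string.h>
#include <rte_hash.h>
#include <rte_jhash.h>
#include <rte_lcore.h>

struct flow_key {
        uint32_t ip_src;
        uint32_t ip_dst;
        uint16_t port_src;
        uint16_t port_dst;
        uint8_t  proto;
} __attribute__((packed));

static void
hash_example(void)
{
        struct rte_hash_parameters params = {
                .name = "example_flow_table",
                .entries = 1024,
                .key_len = sizeof(struct flow_key),
                .hash_func = rte_jhash,
                .hash_func_init_val = 0,
                .socket_id = rte_socket_id(),
        };
        uint8_t out_if[1024];          /* output interface per flow */
        struct flow_key key;
        struct rte_hash *h;
        int32_t pos;

        h = rte_hash_create(&params);
        if (h == NULL)
                return;

        /* Load one statically configured flow: forward it on interface 1 */
        memset(&key, 0, sizeof(key));
        key.proto = 17;                /* a UDP flow, other fields elided */
        pos = rte_hash_add_key(h, &key);
        if (pos >= 0)
                out_if[pos] = 1;

        /* At run time: map an incoming packet's 5-tuple to its flow */
        pos = rte_hash_lookup(h, &key);
        if (pos >= 0)
                printf("output interface: %u\n", out_if[pos]);

        rte_hash_free(h);
}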
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/l3fwd
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make
Where,
• -p PORTMASK: Hexadecimal bitmask of ports to configure
• -P: Optional, sets all ports to promiscuous mode so that packets are accepted regard-
less of the packet’s Ethernet MAC destination address. Without this option, only packets
with the Ethernet MAC destination address set to the Ethernet address of the port are
accepted.
• -E: Optional, enable exact match.
• -L: Optional, enable longest prefix match.
• --config (port,queue,lcore)[,(port,queue,lcore)]: Determines which
queues from which ports are mapped to which cores.
• --eth-dest=X,MM:MM:MM:MM:MM:MM: Optional, ethernet destination for port X.
• --enable-jumbo: Optional, enables jumbo frames.
• --max-pkt-len: Optional, under the premise of enabling jumbo, maximum packet
length in decimal (64-9600).
• --no-numa: Optional, disables numa awareness.
• --hash-entry-num: Optional, specifies the hash entry number in hexadecimal to be
setup.
• --ipv6: Optional, set if running ipv6 packets.
• --parse-ptype: Optional, set to use software to analyze packet type. Without this
option, hardware will check the packet type.
For example, consider a dual processor socket platform with 8 physical cores, where cores 0-7
and 16-23 appear on socket 0, while cores 8-15 and 24-31 appear on socket 1.
To enable L3 forwarding between two ports, assuming that both ports are in the same socket,
using two cores, cores 1 and 2, (which are in the same socket too), use the following command:
./build/l3fwd -l 1,2 -n 4 -- -p 0x3 --config="(0,0,1),(1,0,2)"
In this command:
• The -l option enables cores 1, 2
• The -p option enables ports 0 and 1
• The --config option enables one queue on each port and maps each (port,queue) pair to
a specific core. The following table shows the mapping in this example:
Port Queue lcore Description
0 0 1 Map queue 0 from port 0 to lcore 1.
1 0 2 Map queue 0 from port 1 to lcore 2.
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.
Explanation
The following sections provide some explanation of the sample application code. As mentioned
in the overview section, the initialization and run-time paths are very similar to those of the L2
Forwarding Sample Application (in Real and Virtualized Environments). The following sections
describe aspects that are specific to the L3 Forwarding sample application.
Hash Initialization
The hash object is created and loaded with the pre-configured entries read from a global array,
and then generate the expected 5-tuple as key to keep consistence with those of real flow for
the convenience to execute hash performance test on 4M/8M/16M flows.
Note: The hash initialization sets up both the IPv4 and IPv6 hash tables, and populates either table depending on the value of the global variable ipv6. To support hash performance tests with up to 8M single-direction flows/16M bi-directional flows, the populate_ipv4_many_flow_into_table() function populates the hash table with the specified number of hash entries (default 4M).
Note: The value of the global variable ipv6 can be specified with --ipv6 on the command line. The value of the global variable hash_entry_number, which specifies the total number of hash entries for all used ports in the hash performance test, can be specified with --hash-entry-num VALUE on the command line; its default value is 4.
static void
setup_hash(int socketid)
{
        // ...

        if (hash_entry_number != HASH_ENTRY_NUMBER_DEFAULT) {
                if (ipv6 == 0) {
                        /* populate the ipv4 hash */
                        populate_ipv4_many_flow_into_table(
                                ipv4_l3fwd_lookup_struct[socketid], hash_entry_number);
                } else {
                        /* populate the ipv6 hash */
                        populate_ipv6_many_flow_into_table(
                                ipv6_l3fwd_lookup_struct[socketid], hash_entry_number);
                }
        } else {
                if (ipv6 == 0) {
                        /* populate the ipv4 hash */
                        populate_ipv4_few_flow_into_table(ipv4_l3fwd_lookup_struct[socketid]);
                } else {
                        /* populate the ipv6 hash */
                        populate_ipv6_few_flow_into_table(ipv6_l3fwd_lookup_struct[socketid]);
                }
        }
}
#endif
LPM Initialization
The LPM object is created and loaded with the pre-configured entries read from a global array.
#if (APP_LOOKUP_METHOD == APP_LOOKUP_LPM)

static void
setup_lpm(int socketid)
{
        unsigned i;
        int ret;
        char s[64];

        // ...

        if (ipv4_l3fwd_lookup_struct[socketid] == NULL)
                rte_exit(EXIT_FAILURE, "Unable to create the l3fwd LPM table"
                                " on socket %d\n", socketid);

        // ...

                if (ret < 0) {
                        rte_exit(EXIT_FAILURE, "Unable to add entry %u to the "
                                "l3fwd LPM table on socket %d\n", i, socketid);
                }
        }
}
#endif
For each input packet, the packet forwarding operation is done by the l3fwd_simple_forward()
or simple_ipv4_fwd_4pkts() function for IPv4 packets or the simple_ipv6_fwd_4pkts() func-
tion for IPv6 packets. The l3fwd_simple_forward() function provides the basic functionality
for both IPv4 and IPv6 packet forwarding for any number of burst packets received, and the
packet forwarding decision (that is, the identification of the output interface for the packet) for
hash-based lookups is done by the get_ipv4_dst_port() or get_ipv6_dst_port() function. The
get_ipv4_dst_port() function is shown below:
static inline uint8_t
get_ipv4_dst_port(void *ipv4_hdr, uint8_t portid, lookup_struct_t *ipv4_l3fwd_lookup_struct)
{
        int ret = 0;
        union ipv4_5tuple_host key;

        /* Get 5 tuple: dst port, src port, dst IP address, src IP address and protocol */
        // ...
}
For each input packet, the packet forwarding operation is done by the l3fwd_simple_forward()
function, but the packet forwarding decision (that is, the identification of the output interface for
the packet) for LPM-based lookups is done by the get_ipv4_dst_port() function below:
static inline uint8_t
get_ipv4_dst_port(struct ipv4_hdr *ipv4_hdr, uint8_t portid, lookup_struct_t *ipv4_l3fwd_lookup_struct)
{
        uint32_t next_hop;

        return (uint8_t) ((rte_lpm_lookup(ipv4_l3fwd_lookup_struct,
                rte_be_to_cpu_32(ipv4_hdr->dst_addr), &next_hop) == 0) ? next_hop : portid);
}
Introduction
Overview
The application demonstrates the use of the Power libraries in the DPDK to implement packet
forwarding. The initialization and run-time paths are very similar to those of the L3 Forwarding
Sample Application. The main difference from the L3 Forwarding sample application is that this
application introduces power-aware optimization algorithms by leveraging the Power library to
control P-state and C-state of processor based on packet load.
The DPDK includes poll-mode drivers to configure Intel NIC devices and their receive (Rx) and
transmit (Tx) queues. The design principle of this PMD is to access the Rx and Tx descriptors
directly without any interrupts to quickly receive, process and deliver packets in the user space.
In general, the DPDK executes an endless packet processing loop on dedicated IA cores that
include the following steps:
• Retrieve input packets through the PMD to poll Rx queue
• Process each received packet or provide received packets to other processing cores
through software queues
• Send pending output packets to Tx queue through the PMD
In this way, the PMD achieves better performance than a traditional interrupt-mode driver, at
the cost of keeping cores active and running at the highest frequency, hence consuming the
maximum power all the time. However, during the period of processing light network traffic,
which happens regularly in communication infrastructure systems due to well-known “tidal ef-
fect”, the PMD is still busy waiting for network packets, which wastes a lot of power.
Processor performance states (P-states) are the capability of an Intel processor to switch be-
tween different supported operating frequencies and voltages. If configured correctly, accord-
ing to system workload, this feature provides power savings. CPUFreq is the infrastructure
provided by the Linux* kernel to control the processor performance state capability. CPUFreq
supports a user space governor that enables setting frequency via manipulating the virtual file
device from a user space application. The Power library in the DPDK provides a set of APIs for
manipulating a virtual file device to allow user space application to set the CPUFreq governor
and set the frequency of specific cores.
This application includes a P-state power management algorithm to generate a frequency hint to be sent to CPUFreq. The algorithm uses the number of received and available Rx packets on recent polls to make a heuristic decision to scale frequency up or down. Specifically, some thresholds are checked to see whether a specific core running a DPDK polling thread needs to increase frequency by a step based on the near-to-full trend of the polled Rx queues. Also, it decreases frequency by a step if the number of packets processed per loop is far below the expected threshold or if the thread's sleeping time exceeds a threshold.
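A hedged sketch of those two decisions, using the Power library API, is shown below; the thresholds and the function name are illustrative, not the sample's exact code:
#include <rte_power.h>
#include <rte_lcore.h>

#define SCALE_UP_QUEUE_THRESH   32   /* Rx descriptors still waiting */
#define SCALE_DOWN_PKT_THRESH    4   /* average packets per poll */

static void
adjust_frequency(uint32_t rx_ring_length, uint32_t avg_pkts_per_poll)
{
        unsigned lcore_id = rte_lcore_id();

        if (rx_ring_length > SCALE_UP_QUEUE_THRESH)
                rte_power_freq_max(lcore_id);   /* queue close to full */
        else if (avg_pkts_per_poll < SCALE_DOWN_PKT_THRESH)
                rte_power_freq_down(lcore_id);  /* light load: one step down */
}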
C-States are also known as sleep states. They allow software to put an Intel core into a low
power idle state from which it is possible to exit via an event, such as an interrupt. However,
there is a tradeoff between the power consumed in the idle state and the time required to wake
up from the idle state (exit latency). Therefore, as you go into deeper C-states, the power
consumed is lower but the exit latency is increased. Each C-state has a target residency. It is
essential that when entering into a C-state, the core remains in this C-state for at least as long
as the target residency in order to fully realize the benefits of entering the C-state. CPUIdle is
the infrastructure provided by the Linux kernel to control the processor C-state capability. Unlike
CPUFreq, CPUIdle does not provide a mechanism that allows the application to change C-
state. It actually has its own heuristic algorithms in kernel space to select target C-state to
enter by executing privileged instructions like HLT and MWAIT, based on the speculative sleep
duration of the core. In this application, we introduce a heuristic algorithm that allows packet
processing cores to sleep for a short period if there is no Rx packet received on recent polls.
In this way, CPUIdle automatically forces the corresponding cores to enter deeper C-states
instead of always running in the C0 state waiting for packets.
Note: To fully demonstrate the power saving capability of using C-states, it is recommended
to enable deeper C3 and C6 states in the BIOS during system boot up.
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/l3fwd-power
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make
where,
• -p PORTMASK: Hexadecimal bitmask of ports to configure
• -P: Sets all ports to promiscuous mode so that packets are accepted regardless of the
packet’s Ethernet MAC destination address. Without this option, only packets with the
Ethernet MAC destination address set to the Ethernet address of the port are accepted.
• --config (port,queue,lcore)[,(port,queue,lcore)]: determines which queues from which ports are mapped to which cores.
• --enable-jumbo: optional, enables jumbo frames
• --max-pkt-len: optional, maximum packet length in decimal (64-9600)
• --no-numa: optional, disables numa awareness
See L3 Forwarding Sample Application for details. The L3fwd-power example reuses the L3fwd
command line options.
Explanation
The following sections provide some explanation of the sample application code. As mentioned
in the overview section, the initialization and run-time paths are identical to those of the L3
forwarding application. The following sections describe aspects that are specific to the L3
Forwarding with Power Management sample application.
The Power library is initialized in the main routine. It changes the P-state governor to userspace
for specific cores that are under control. The Timer library is also initialized and several timers
are created later on, responsible for checking if it needs to scale down frequency at run time
by checking CPU utilization statistics.
// ...
rte_timer_subsystem_init();
// ...

/* per-core initialization */
ret = rte_power_init(lcore_id);
if (ret)
        rte_exit(EXIT_FAILURE, "Power management library "
                "initialization failed on core%d\n", lcore_id);

rte_timer_init(&power_timers[lcore_id]);
hz = rte_get_hpet_hz();
// ...
}
// ...
}
In general, the polling nature of the DPDK prevents the OS power management subsystem
from knowing if the network load is actually heavy or light. In this sample, sampling network
load work is done by monitoring received and available descriptors on NIC Rx queues in recent
polls. Based on the number of returned and available Rx descriptors, this example implements
algorithms to generate frequency scaling hints and speculative sleep duration, and use them
to control P-state and C-state of processors via the power management library. Frequency (P-
state) control and sleep state (C-state) control work individually for each logical core, and the
combination of them contributes to a power efficient packet processing solution when serving
light network loads.
The rte_eth_rx_burst() function and the rte_eth_rx_queue_count() function are used in the endless packet processing loop to return the number of received and available Rx descriptors. These numbers for a specific queue are passed to the P-state and C-state heuristic algorithms to generate hints based on recent network load trends.
static
__attribute__((noreturn)) int main_loop(__attribute__((unused)) void *dummy)
{
        // ...
        while (1) {
        // ...
                /**
                 * Read packet from RX queues
                 */
                lcore_scaleup_hint = FREQ_CURRENT;
                lcore_rx_idle_count = 0;

                if (unlikely(nb_rx == 0)) {
                        /**
                         * no packet received from rx queue, try to
                         * sleep for a while forcing CPU enter deeper
                         * C states.
                         */
                        rx_queue->zero_rx_packet_count++;
                        rx_queue->idle_hint = power_idle_heuristic(rx_queue->zero_rx_packet_count);
                        lcore_rx_idle_count++;
                } else {
                        rx_ring_length = rte_eth_rx_queue_count(portid, queueid);
                        rx_queue->zero_rx_packet_count = 0;
                        /**
                         * do not scale up frequency immediately as
                         * user to kernel space communication is costly
                         * which might impact packet I/O for received
                         * packets.
                         */
                        // ...
                }

                if (likely(lcore_rx_idle_count != qconf->n_rx_queue)) {
                        for (i = 1, lcore_scaleup_hint = qconf->rx_queue_list[0].freq_up_hint;
                                        i < qconf->n_rx_queue; ++i) {
                                rx_queue = &(qconf->rx_queue_list[i]);
                                if (rx_queue->freq_up_hint > lcore_scaleup_hint)
                                        lcore_scaleup_hint = rx_queue->freq_up_hint;
                        }

                        if (lcore_scaleup_hint == FREQ_HIGHEST)
                                rte_power_freq_max(lcore_id);
                        // ...
                        stats[lcore_id].sleep_time += lcore_idle_hint;
                }
        }
}
frequency.
Note: The assumption is that the Rx queue size is 128; the thresholds specified above must be adjusted accordingly based on the actual hardware Rx queue size, which is configured via the rte_eth_rx_queue_setup() function.
In general, a thread needs to poll packets from multiple Rx queues. Most likely, different queues have different loads, so they would return different frequency hints. The algorithm evaluates all the hints and then scales up frequency in an aggressive manner, scaling up to the highest frequency as long as one Rx queue requires it. In this way, we can minimize any negative performance impact.
On the other hand, frequency scaling down is controlled in the timer callback function. Specif-
ically, if the sleep times of a logical core indicate that it is sleeping more than 25% of the
sampling period, or if the average packet per iteration is less than expectation, the frequency
is decreased by one step.
Whenever recent rte_eth_rx_burst() polls return 5 consecutive zero packets, an idle counter begins incrementing for each successive zero poll. At the same time, the function power_idle_heuristic() is called to generate a speculative sleep duration in order to force the logical core to enter a deeper sleeping C-state. There is no way to control the C-state directly, and the CPUIdle subsystem in the OS is intelligent enough to select the C-state to enter based on the actual sleep period of the given logical core. The algorithm has the following sleeping behavior depending on the idle counter:
• If the idle count is less than 100, the counter value is used as a microsecond sleep value through rte_delay_us(), which executes pause instructions to avoid a costly context switch while still saving power.
• If the idle count is between 100 and 999, a fixed sleep interval of 100 μs is used. A 100 μs sleep interval allows the core to enter the C1 state while keeping a fast response time in case new traffic arrives.
• If the idle count is greater than 1000, a fixed sleep value of 1 ms is used until the next timer expiration. This allows the core to enter the C3/C6 states.
Note: The thresholds specified above need to be adjusted for different Intel processors and
traffic profiles.
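A minimal sketch of that sleep decision is shown below; the constants and the helper name are illustrative (they must be tuned as the note above indicates) and are not the sample's exact code:
#include <unistd.h>
#include <rte_cycles.h>

#define SHORT_IDLE_LIMIT   100    /* zero polls: sleep idle-count us */
#define LONG_IDLE_LIMIT    1000   /* zero polls: sleep 100 us */

static void
idle_sleep(uint32_t idle_count)
{
        if (idle_count < SHORT_IDLE_LIMIT)
                rte_delay_us(idle_count);   /* pause-based micro sleep */
        else if (idle_count < LONG_IDLE_LIMIT)
                rte_delay_us(100);          /* allows entering C1 */
        else
                usleep(1000);               /* 1 ms, allows C3/C6 entry */
}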
If a thread polls multiple Rx queues and different queues return different sleep duration values, the algorithm controls the sleep time in a conservative manner by sleeping for the shortest possible time in order to avoid a potential performance impact.
The L3 Forwarding with Access Control application is a simple example of packet processing
using the DPDK. The application performs a security check on received packets. Packets that
are in the Access Control List (ACL), which is loaded during initialization, are dropped. Others
are forwarded to the correct port.
Overview
The application demonstrates the use of the ACL library in the DPDK to implement access
control and packet L3 forwarding. The application loads two types of rules at initialization:
• Route information rules, which are used for L3 forwarding
• Access Control List (ACL) rules that blacklist (or block) packets with a specific character-
istic
When packets are received from a port, the application extracts the necessary information
from the TCP/IP header of the received packet and performs a lookup in the rule database to
figure out whether the packets should be dropped (in the ACL range) or forwarded to desired
ports. The initialization and run-time paths are similar to those of the L3 Forwarding Sample
Application. However, there are significant differences in the two applications. For example,
the original L3 forwarding application uses either LPM or an exact match algorithm to perform
forwarding port lookup, while this application uses the ACL library to perform both ACL and
route entry lookup. The following sections provide more detail.
Classification for both IPv4 and IPv6 packets is supported in this application. The application
also assumes that all the packets it processes are TCP/UDP packets and always extracts
source/destination port information from the packets.
The application implements packet classification for the IPv4/IPv6 5-tuple syntax specifically.
The 5-tuple syntax consist of a source IP address, a destination IP address, a source port, a
destination port and a protocol identifier. The fields in the 5-tuple syntax have the following
formats:
• Source IP address and destination IP address : Each is either a 32-bit field (for IPv4),
or a set of 4 32-bit fields (for IPv6) represented by a value and a mask length. For
example, an IPv4 range of 192.168.1.0 to 192.168.1.255 could be represented by a value
= [192, 168, 1, 0] and a mask length = 24.
• Source port and destination port : Each is a 16-bit field, represented by a lower start
and a higher end. For example, a range of ports 0 to 8192 could be represented by lower
= 0 and higher = 8192.
• Protocol identifier : An 8-bit field, represented by a value and a mask, that covers a
range of values. To verify that a value is in the range, use the following expression: “(VAL
& mask) == value”
The trick in how to represent a range with a mask and value is as follows. A range can be enumerated in binary numbers with some bits that never change and some bits that change dynamically. Set the bits that change dynamically to 0 in both the mask and the value. Set the bits that never change to 1 in the mask, and to their expected value in the value. For example, a range of 6 to 7 is enumerated as 0b110 and 0b111. Bits 1-7 never change and bit 0 changes dynamically. Therefore, set bit 0 to 0 in both the mask and the value, set bits 1-7 to 1 in the mask, and bits 1-7 in the value to 0b11. So, the mask is 0xfe and the value is 0x6.
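The following small, self-contained program (for illustration only, not part of the sample) checks the 6-7 range example above with the "(VAL & mask) == value" expression:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        const uint8_t value = 0x6, mask = 0xfe;   /* encodes the range 6-7 */
        uint8_t proto;

        for (proto = 0; proto < 10; proto++)
                printf("protocol %u matches: %s\n", proto,
                        ((proto & mask) == value) ? "yes" : "no");
        return 0;
}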
Note: The library assumes that each field in the rule is in LSB or Little Endian order when cre-
ating the database. It internally converts them to MSB or Big Endian order. When performing
a lookup, the library assumes the input is in MSB or Big Endian order.
The application needs to acquire ACL and route rules before it runs. Route rules are manda-
tory, while ACL rules are optional. To simplify the complexity of the priority field for each rule,
all ACL and route entries are assumed to be in the same file. To read data from the specified
file successfully, the application assumes the following:
• Each rule occupies a single line.
• Only the following four rule line types are valid in this application:
• ACL rule line, which starts with a leading character ‘@’
• Route rule line, which starts with a leading character ‘R’
• Comment line, which starts with a leading character ‘#’
• Empty line, which consists of a space, form-feed ('\f'), newline ('\n'), carriage return ('\r'), horizontal tab ('\t'), or vertical tab ('\v').
Other line types are considered invalid.
• Rules are organized in descending order of priority, which means rules at the head of the
file always have a higher priority than those further down in the file.
• A typical IPv4 ACL rule line should have a format as shown below:
IPv4 addresses are specified in CIDR format as specified in RFC 4632. They consist of the dot
notation for the address and a prefix length separated by ‘/’. For example, 192.168.0.34/32,
where the address is 192.168.0.34 and the prefix length is 32.
Ports are specified as a range of 16-bit numbers in the format MIN:MAX, where MIN and MAX
are the inclusive minimum and maximum values of the range. The range 0:65535 represents all
possible ports in a range. When MIN and MAX are the same value, a single port is represented,
for example, 20:20.
The protocol identifier is an 8-bit value and a mask separated by ‘/’. For example: 6/0xfe
matches protocol values 6 and 7.
• Route rules start with a leading character ‘R’ and have the same format as ACL rules
except an extra field at the tail that indicates the forwarding port number.
Application Phases
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make
where,
• -p PORTMASK: Hexadecimal bitmask of ports to configure
• -P: Sets all ports to promiscuous mode so that packets are accepted regardless of the
packet’s Ethernet MAC destination address. Without this option, only packets with the
Ethernet MAC destination address set to the Ethernet address of the port are accepted.
• --config (port,queue,lcore)[,(port,queue,lcore)]: determines which queues from which ports are mapped to which cores
• --rule_ipv4 FILENAME: Specifies the IPv4 ACL and route rules file
• --rule_ipv6 FILENAME: Specifies the IPv6 ACL and route rules file
• --scalar: Use a scalar function to perform rule lookup
• --enable-jumbo: optional, enables jumbo frames
• --max-pkt-len: optional, maximum packet length in decimal (64-9600)
• --no-numa: optional, disables numa awareness
For example, consider a dual processor socket platform with 8 physical cores, where cores 0-7
and 16-23 appear on socket 0, while cores 8-15 and 24-31 appear on socket 1.
To enable L3 forwarding between two ports, assuming that both ports are in the same socket,
using two cores, cores 1 and 2, (which are in the same socket too), use the following command:
./build/l3fwd-acl -l 1,2 -n 4 -- -p 0x3 --config="(0,0,1),(1,0,2)" --rule_ipv4="./rule_ipv4.db"
In this command:
• The -l option enables cores 1, 2
• The -p option enables ports 0 and 1
• The --config option enables one queue on each port and maps each (port,queue) pair to
a specific core. The following table shows the mapping in this example:
Port Queue lcore Description
0 0 1 Map queue 0 from port 0 to lcore 1.
1 0 2 Map queue 0 from port 1 to lcore 2.
• The --rule_ipv4 option specifies the reading of IPv4 rules sets from the ./rule_ipv4.db file.
• The --rule_ipv6 option specifies the reading of IPv6 rules sets from the ./rule_ipv6.db file.
• The --scalar option specifies the performing of rule lookup with a scalar function.
Explanation
The following sections provide some explanation of the sample application code. The aspects
of port, device and CPU configuration are similar to those of the L3 Forwarding Sample Appli-
cation. The following sections describe aspects that are specific to L3 forwarding with access
control.
As described earlier, both ACL and route rules are assumed to be saved in the same file. The
application parses the rules from the file and adds them to the database by calling the ACL li-
brary function. It ignores empty and comment lines, and parses and validates the rules it reads.
If errors are detected, the application exits with messages to identify the errors encountered.
The application needs to consider the userdata and priority fields. ACL rules save the index of
the specific rule in the userdata field, while route rules save the forwarding port number there.
To differentiate the two types of rules, ACL rules additionally add a signature to the userdata field.
As for the priority field, the application assumes that rules are organized in descending order of
priority. Therefore, the code simply decreases the priority number with each rule it parses.
For each supported ACL rule format (IPv4 5-tuple, IPv6 6-tuple), the application creates a separate
context handle from the ACL library for each CPU socket on the board and adds the parsed rules
to that context.
Note that for each supported rule type, the application needs to calculate the expected offset of
the fields from the start of the packet. This is why only packets with a fixed IPv4/IPv6 header are
supported: it allows ACL classification to be performed directly on the incoming packet buffer,
with no extra protocol field retrieval.
Subsequently, the application checks whether NUMA is enabled. If it is, the application records
the socket IDs of the CPU cores involved in the task.
Finally, the application creates a context handle from the ACL library, adds the rules parsed from
the file to the database and builds an ACL trie. It is important to note that the application
creates an independent copy of each database for each CPU socket involved in the task to
reduce the time spent on remote memory accesses.
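As a rough sketch of that per-socket setup using the public ACL library API (the context name,
rule capacity and the caller-prepared rte_acl_config below are illustrative assumptions, not the
sample's actual values):

#include <rte_acl.h>

/* Create one ACL context per CPU socket, add the parsed rules and build the
 * runtime trie. Returns NULL on failure. */
static struct rte_acl_ctx *
setup_ipv4_acl(int socket_id, const struct rte_acl_rule *rules,
               uint32_t num_rules, const struct rte_acl_config *cfg)
{
    struct rte_acl_param param = {
        .name = "l3fwd-acl-ipv4",        /* hypothetical context name */
        .socket_id = socket_id,          /* one copy per CPU socket */
        .rule_size = RTE_ACL_RULE_SZ(5), /* IPv4 5-tuple rule format */
        .max_rule_num = 100000,          /* assumed capacity */
    };
    struct rte_acl_ctx *ctx = rte_acl_create(&param);

    if (ctx == NULL)
        return NULL;

    /* add the rules parsed from the rules file, then build the trie */
    if (rte_acl_add_rules(ctx, rules, num_rules) != 0 ||
            rte_acl_build(ctx, cfg) != 0) {
        rte_acl_free(ctx);
        return NULL;
    }
    return ctx;
}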
Overview
The application demonstrates the use of the hash and LPM libraries in the DPDK to implement
packet forwarding. The initialization and run-time paths are very similar to those of the L3
Forwarding Sample Application. The forwarding decision is taken based on information read
from the input packet.
The lookup method is either hash-based or LPM-based and is selected at compile time. When
the selected lookup method is hash-based, a hash object is used to emulate the flow classifica-
tion stage. The hash object is used in correlation with the flow table to map each input packet
to its flow at runtime.
The hash lookup key is represented by the DiffServ 5-tuple composed of the following fields
read from the input packet: Source IP Address, Destination IP Address, Protocol, Source Port
and Destination Port. The ID of the output interface for the input packet is read from the
identified flow table entry. The set of flows used by the application is statically configured and
loaded into the hash at initialization time. When the selected lookup method is LPM based, an
LPM object is used to emulate the forwarding stage for IPv4 packets. The LPM object is used
as the routing table to identify the next hop for each input packet at runtime.
The LPM lookup key is represented by the Destination IP Address field read from the input
packet. The ID of the output interface for the input packet is the next hop returned by the LPM
lookup. The set of LPM rules used by the application is statically configured and loaded into
the LPM object at the initialization time.
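As a rough sketch of the LPM-based forwarding decision described above (illustrative only; the
convention that the returned next hop is used directly as the output port ID is an assumption):

#include <rte_lpm.h>

/* Choose the output port for an IPv4 packet by looking up its destination
 * address in the LPM object used as the routing table. */
static inline uint16_t
route_ipv4(struct rte_lpm *lpm, uint32_t dst_ip, uint16_t default_port)
{
    uint32_t next_hop;

    if (rte_lpm_lookup(lpm, dst_ip, &next_hop) == 0)
        return (uint16_t)next_hop; /* next hop used directly as the port ID */

    return default_port; /* no matching route */
}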
Note: Please refer to Virtual Function Setup Instructions for virtualized test case setup.
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make
Note: The compiled application is written to the build subdirectory. To have the application
written to a different location, the O=/path/to/build/directory option may be specified in the make
command.
where,
In this command:
• The -l option enables cores 1 and 2
• The -p option enables ports 0 and 1
• The --config option enables one queue on each port and maps each (port,queue) pair to
a specific core. The following table shows the mapping in this example:
Port Queue lcore Description
0 0 1 Map queue 0 from port 0 to lcore 1
1 0 2 Map queue 0 from port 1 to lcore 2
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.
Explanation
The operation of this application is similar to that of the basic L3 Forwarding Sample Applica-
tion. See Explanation for more information.
The Link Status Interrupt sample application is a simple example of packet processing using
the Data Plane Development Kit (DPDK) that demonstrates how network link status changes
for a network port can be captured and used by a DPDK application.
Overview
The Link Status Interrupt sample application registers a user space callback for the link sta-
tus interrupt of each port and performs L2 forwarding for each packet that is received on an
RX_PORT. The following operations are performed:
• RX_PORT and TX_PORT are paired with available ports one-by-one according to the
core mask
• The source MAC address is replaced by the TX_PORT MAC address
• The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID
This application can be used to demonstrate the usage of link status interrupt and its user
space callbacks and the behavior of L2 forwarding each time the link status changes.
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make
Note: The compiled application is written to the build subdirectory. To have the application
written to a different location, the O=/path/to/build/directory option may be specified on the
make command line.
where,
• -p PORTMASK: A hexadecimal bitmask of the ports to configure
• -q NQ: A number of queues (=ports) per lcore (default is 1)
• -T PERIOD: statistics will be refreshed each PERIOD seconds (0 to disable, 10 default)
To run the application in a linuxapp environment with 4 lcores, 4 memory channels, 16 ports
and 8 RX queues per lcore, issue the command:
$ ./build/link_status_interrupt -c f -n 4 -- -q 8 -p ffff
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.
Explanation
The Link Status Interrupt sample application takes specific parameters, in addition to Environ-
ment Abstraction Layer (EAL) arguments (see Section Running the Application).
Command line parsing is done in the same way as it is done in the L2 Forwarding Sample
Application. See Command Line Arguments for more information.
Mbuf pool initialization is done in the same way as it is done in the L2 Forwarding Sample
Application. See Mbuf Pool Initialization for more information.
Driver Initialization
The main part of the code in the main() function relates to the initialization of the driver. To fully
understand this code, it is recommended to study the chapters related to the Poll Mode
Driver in the DPDK Programmer's Guide and the DPDK API Reference.
if (rte_eal_pci_probe() < 0)
    rte_exit(EXIT_FAILURE, "Cannot probe PCI\n");

nb_ports = rte_eth_dev_count();
if (nb_ports == 0)
    rte_exit(EXIT_FAILURE, "No Ethernet ports - bye\n");

/*
 * Each logical core is assigned a dedicated TX queue on each port.
 */
if (nb_ports_in_mask % 2) {
    lsi_dst_ports[portid] = portid_last;
    lsi_dst_ports[portid_last] = portid;
}
else
    portid_last = portid;

nb_ports_in_mask++;
Observe that:
• rte_eal_pci_probe() parses the devices on the PCI bus and initializes recognized devices.
The next step is to configure the RX and TX queues. For each port, there is only one RX queue
(only one lcore is able to poll a given port). The number of TX queues depends on the number
of available lcores. The rte_eth_dev_configure() function is used to configure the number of
queues for a port:
ret = rte_eth_dev_configure((uint8_t) portid, 1, 1, &port_conf);
if (ret < 0)
    rte_exit(EXIT_FAILURE, "Cannot configure device: err=%d, port=%u\n", ret, portid);
Configuring lsc to 0 (the default) disables the generation of any link status change inter-
rupts in kernel space and no user space interrupt event is received. The public interface
rte_eth_link_get() accesses the NIC registers directly to update the link status. Configuring
lsc to non-zero enables the generation of link status change interrupts in kernel space when a
link status change is present and calls the user space callbacks registered by the application.
The public interface rte_eth_link_get() just reads the link status in a global structure that would
be updated in the interrupt host thread only.
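For reference, the lsc flag lives in the interrupt configuration embedded in the port configuration
passed to rte_eth_dev_configure(). A minimal sketch, with the other fields left at illustrative
defaults:

#include <rte_ethdev.h>

static const struct rte_eth_conf port_conf = {
    .rxmode = {
        .split_hdr_size = 0,
    },
    .txmode = {
        .mq_mode = ETH_MQ_TX_NONE,
    },
    .intr_conf = {
        .lsc = 1, /* enable link status change interrupts */
    },
};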
The application can register one or more callbacks for a specific port and interrupt event. An
example callback function is shown below.
static void
lsi_event_callback(uint8_t port_id, enum rte_eth_event_type type, void *param)
{
    struct rte_eth_link link;

    RTE_SET_USED(param);

    rte_eth_link_get_nowait(port_id, &link);
    if (link.link_status) {
        printf("Port %d Link Up - speed %u Mbps - %s\n\n", port_id, (unsigned)link.link_speed,
            (link.link_duplex == ETH_LINK_FULL_DUPLEX) ? ("full-duplex") : ("half-duplex"));
    } else
        printf("Port %d Link Down\n\n", port_id);
}
This function is called when a link status interrupt is present for the right port. The port_id
indicates which port the interrupt applies to. The type parameter identifies the interrupt event
type, which currently can be RTE_ETH_EVENT_INTR_LSC only, but other types can be added
in the future. The param parameter is the address of the parameter for the callback. This
function should be implemented with care since it will be called in the interrupt host thread,
which is different from the main thread of its caller.
The application registers the lsi_event_callback and a NULL parameter to the link status inter-
rupt event on each port:
rte_eth_dev_callback_register((uint8_t)portid, RTE_ETH_EVENT_INTR_LSC, lsi_event_callback, NULL);
This registration can be done only after calling the rte_eth_dev_configure() function and before
calling any other function. If lsc is initialized with 0, the callback is never called since no interrupt
event would ever be present.
RX Queue Initialization
The application uses one lcore to poll one or several ports, depending on the -q option, which
specifies the number of queues per lcore.
For example, if the user specifies -q 4, the application is able to poll four ports with one lcore.
If there are 16 ports on the target (and if the portmask argument is -p ffff), the application will
need four lcores to poll all the ports.
ret = rte_eth_rx_queue_setup((uint8_t) portid, 0, nb_rxd, SOCKET0, &rx_conf, lsi_pktmbuf_pool);
if (ret < 0)
    rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup: err=%d, port=%u\n", ret, portid);
The list of queues that must be polled for a given lcore is stored in a private structure called
struct lcore_queue_conf.
struct lcore_queue_conf {
    unsigned n_rx_port;
    unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
    unsigned tx_queue_id;
    struct mbuf_table tx_mbufs[LSI_MAX_PORTS];
} __rte_cache_aligned;
The n_rx_port and rx_port_list[] fields are used in the main packet processing loop (see Re-
ceive, Process and Transmit Packets).
The global configuration for the RX queues is stored in a static structure:
static const struct rte_eth_rxconf rx_conf = {
    .rx_thresh = {
        .pthresh = RX_PTHRESH,
        .hthresh = RX_HTHRESH,
        .wthresh = RX_WTHRESH,
    },
};
TX Queue Initialization
Each lcore should be able to transmit on any port. For every port, a single TX queue is
initialized.
/* init one TX queue logical core on each port */
fflush(stdout);

static const struct rte_eth_txconf tx_conf = {
    .tx_thresh = {
        .pthresh = TX_PTHRESH,
        .hthresh = TX_HTHRESH,
        .wthresh = TX_WTHRESH,
    },
    .tx_free_thresh = RTE_TEST_TX_DESC_DEFAULT + 1, /* disable feature */
};
In the lsi_main_loop() function, the main task is to read ingress packets from the RX queues.
This is done using the following code:
/*
* Read packet from RX queues
*/
Packets are read in a burst of size MAX_PKT_BURST. The rte_eth_rx_burst() function writes
the mbuf pointers in a local table and returns the number of available mbufs in the table.
Then, each mbuf in the table is processed by the lsi_simple_forward() function. The processing
is very simple: it determines the TX port from the RX port and then replaces the source and
destination MAC addresses.
Note: In the following code, the two lines for calculating the output port require some expla-
nation. If portId is even, the first line does nothing (as portid & 1 will be 0), and the second line
adds 1. If portId is odd, the first line subtracts one and the second line does nothing. Therefore,
0 goes to 1, and 1 to 0, 2 goes to 3 and 3 to 2, and so on.
static void
lsi_simple_forward(struct rte_mbuf *m, unsigned portid)
{
    struct ether_hdr *eth;
    void *tmp;
    unsigned dst_port = lsi_dst_ports[portid];

    eth = rte_pktmbuf_mtod(m, struct ether_hdr *);

    /* 02:00:00:00:00:xx */
    tmp = &eth->d_addr.addr_bytes[0];
    *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);

    /* src addr */
    ether_addr_copy(&lsi_ports_eth_addr[dst_port], &eth->s_addr);

    lsi_send_packet(m, dst_port);
}
Then, the packet is sent using the lsi_send_packet(m, dst_port) function. For this test applica-
tion, the processing is exactly the same for all packets arriving on the same RX port. Therefore,
it would have been possible to call the lsi_send_burst() function directly from the main loop to
send all the received packets on the same TX port using the burst-oriented send function,
which is more efficient.
However, in real-life applications (such as L3 routing), packet N is not necessarily forwarded
on the same port as packet N-1. The application is implemented this way to illustrate the general
case, so that the same approach can be reused in a more complex application.
The lsi_send_packet() function stores the packet in a per-lcore and per-txport table. If the table
is full, the whole packets table is transmitted using the lsi_send_burst() function:
/* Send the packet on an output interface */
static int
lsi_send_packet(struct rte_mbuf *m, uint8_t port)
{
    unsigned lcore_id, len;
    struct lcore_queue_conf *qconf;

    lcore_id = rte_lcore_id();
    qconf = &lcore_queue_conf[lcore_id];
    len = qconf->tx_mbufs[port].len;
    qconf->tx_mbufs[port].m_table[len] = m;
    len++;

    if (unlikely(len == MAX_PKT_BURST)) {
        lsi_send_burst(qconf, MAX_PKT_BURST, port);
        len = 0;
    }

    qconf->tx_mbufs[port].len = len;
    return 0;
}
To ensure that no packets remain in the tables, each lcore does a draining of the TX queue
in its main loop. This technique introduces some latency when there are not many packets to
send. However, it improves performance:
cur_tsc = rte_rdtsc();
/*
* TX burst queue drain
*/
lsi_send_burst(&lcore_queue_conf[lcore_id],
qconf->tx_mbufs[portid].len, (uint8_t) portid);
qconf->tx_mbufs[portid].len = 0;
}
/* if timer is enabled */
if (timer_period > 0) {
/* advance the timer */
timer_tsc += diff_tsc;
if (lcore_id == rte_get_master_lcore()) {
print_stats();
The Load Balancer sample application demonstrates the concept of isolating the packet I/O
task from the application-specific workload. Depending on the performance target, a number
of logical cores (lcores) are dedicated to handle the interaction with the NIC ports (I/O lcores),
while the rest of the lcores are dedicated to performing the application processing (worker
lcores). The worker lcores are totally oblivious to the intricacies of the packet I/O activity and
use the NIC-agnostic interface provided by software rings to exchange packets with the I/O
cores.
Overview
The architecture of the Load Balancer application is presented in the following figure.
For the sake of simplicity, the diagram illustrates a specific case of two I/O RX and two I/O TX
lcores offloading the packet I/O overhead incurred by four NIC ports from four worker cores,
with each I/O lcore handling RX/TX for two NIC ports.
Each I/O RX lcore performs packet RX from its assigned NIC RX rings and then distributes
the received packets to the worker threads. The application allows each I/O RX lcore to com-
municate with any of the worker threads, therefore each (I/O RX lcore, worker lcore) pair is
connected through a dedicated single producer - single consumer software ring.
The worker lcore to handle the current packet is determined by reading a predefined 1-byte
field from the input packet:
worker_id = packet[load_balancing_field] % n_workers
Since all the packets that are part of the same traffic flow are expected to have the same value
for the load balancing field, this scheme also ensures that all the packets that are part of the
same traffic flow are directed to the same worker lcore (flow affinity) in the same order they
enter the system (packet ordering).
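A sketch of this dispatch decision as it might appear on an I/O RX lcore (parameter names are
illustrative, not the sample's exact code):

#include <rte_mbuf.h>

static inline uint32_t
pick_worker(const struct rte_mbuf *pkt, uint32_t pos_lb, uint32_t n_workers)
{
    const uint8_t *data = rte_pktmbuf_mtod(pkt, const uint8_t *);

    /* same flow -> same byte value -> same worker lcore */
    return data[pos_lb] % n_workers;
}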
Each I/O lcore owns the packet TX for a predefined set of NIC ports. To enable each worker
thread to send packets to any NIC TX port, the application creates a software ring for each
(worker lcore, NIC TX port) pair, with each I/O TX core handling those software rings that are
associated with NIC ports that it handles.
Each worker lcore reads packets from its set of input software rings and routes them to the NIC
ports for transmission by dispatching them to output software rings. The routing logic is LPM
based, with all the worker threads sharing the same LPM rules.
For more details on how to build the DPDK libraries and sample applications, please refer
to the DPDK Getting Started Guide.
To successfully run the application, the command line used to start the application has to be in
sync with the traffic flows configured on the traffic generator side.
For examples of application command lines and traffic generator flows, please refer to the
DPDK Test Report. For more details on how to set up and run the sample applications provided
with DPDK package, please refer to the DPDK Getting Started Guide.
Explanation
Application Configuration
The application run-time configuration is done through the application command line param-
eters. Any parameter that is not specified as mandatory is optional, with the default value
hard-coded in the main.h header file from the application folder.
The list of application command line parameters is listed below:
1. --rx "(PORT, QUEUE, LCORE), ...": The list of NIC RX ports and queues handled by the
I/O RX lcores. This parameter also implicitly defines the list of I/O RX lcores. This is a
mandatory parameter.
2. --tx "(PORT, LCORE), ...": The list of NIC TX ports handled by the I/O TX lcores. This
parameter also implicitly defines the list of I/O TX lcores. This is a mandatory parameter.
3. --w "LCORE, ...": The list of the worker lcores. This is a mandatory parameter.
4. --lpm "IP / PREFIX => PORT; ...": The list of LPM rules used by the worker lcores for
packet forwarding. This is a mandatory parameter.
5. --rsz "A, B, C, D": Ring sizes:
(a) A = The size (in number of buffer descriptors) of each of the NIC RX rings read by
the I/O RX lcores.
(b) B = The size (in number of elements) of each of the software rings used by the I/O
RX lcores to send packets to worker lcores.
(c) C = The size (in number of elements) of each of the software rings used by the
worker lcores to send packets to I/O TX lcores.
(d) D = The size (in number of buffer descriptors) of each of the NIC TX rings written by
I/O TX lcores.
6. --bsz "(A, B), (C, D), (E, F)": Burst sizes:
(a) A = The I/O RX lcore read burst size from NIC RX.
(b) B = The I/O RX lcore write burst size to the output software rings.
(c) C = The worker lcore read burst size from the input software rings.
(d) D = The worker lcore write burst size to the output software rings.
(e) E = The I/O TX lcore read burst size from the input software rings.
(f) F = The I/O TX lcore write burst size to the NIC TX.
7. --pos-lb POS: The position of the 1-byte field within the input packet used by the I/O RX
lcores to identify the worker lcore for the current packet. This field needs to be within the
first 64 bytes of the input packet.
The infrastructure of software rings connecting I/O lcores and worker lcores is built by the appli-
cation as a result of the application configuration provided by the user through the application
command line parameters.
A specific lcore performing the I/O RX role for a specific set of NIC ports can also perform the
I/O TX role for the same or a different set of NIC ports. A specific lcore cannot perform both
the I/O role (either RX or TX) and the worker role during the same session.
Example:
./load_balancer -c 0xf8 -n 4 -- --rx "(0,0,3),(1,0,3)" --tx "(0,3),(1,3)" --w "4,5,6,7" --lpm "
There is a single I/O lcore (lcore 3) that handles RX and TX for two NIC ports (ports 0 and 1)
and exchanges packets with four worker lcores (lcores 4, 5, 6 and 7) that are assigned worker
IDs 0 to 3 (worker ID for lcore 4 is 0, for lcore 5 is 1, for lcore 6 is 2 and for lcore 7 is 3).
Assuming that all the input packets are IPv4 packets with no VLAN label and the source IP
address of the current packet is A.B.C.D, the worker lcore for the current packet is determined
by byte D (which is byte 29). There are two LPM rules that are used by each worker lcore to
route packets to the output NIC ports.
The following table illustrates the packet flow through the system for several possible traffic
flows:
Flow #   Source IP Address   Destination IP Address   Worker ID (lcore)   Output NIC Port
1        0.0.0.0             1.0.0.1                  0 (4)               0
2        0.0.0.1             1.0.1.2                  1 (5)               1
3        0.0.0.14            1.0.0.3                  2 (6)               0
4        0.0.0.15            1.0.1.4                  3 (7)               1
NUMA Support
The application has built-in performance enhancements for the NUMA case:
1. One buffer pool per each CPU socket.
2. One LPM table per each CPU socket.
3. Memory for the NIC RX or TX rings is allocated on the same socket with the lcore han-
dling the respective ring.
In the case where multiple CPU sockets are used in the system, it is recommended to enable
at least one lcore to fulfill the I/O role for the NIC ports that are directly attached to that CPU
socket through the PCI Express* bus. It is always recommended to handle the packet I/O with
lcores from the same CPU socket as the NICs.
Depending on whether the I/O RX lcore (same CPU socket as NIC RX), the worker lcore and
the I/O TX lcore (same CPU socket as NIC TX) handling a specific input packet, are on the
same or different CPU sockets, the following run-time scenarios are possible:
1. AAA: The packet is received, processed and transmitted without going across CPU sock-
ets.
2. AAB: The packet is received and processed on socket A, but as it has to be transmitted
on a NIC port connected to socket B, the packet is sent to socket B through software
rings.
3. ABB: The packet is received on socket A, but as it has to be processed by a worker
lcore on socket B, the packet is sent to socket B through software rings. The packet is
transmitted by a NIC port connected to the same CPU socket as the worker lcore that
processed it.
4. ABC: The packet is received on socket A, it is processed by an lcore on socket B, then
it has to be transmitted out by a NIC connected to socket C. The performance price for
crossing the CPU socket boundary is paid twice for this packet.
This sample application demonstrates the use of the EFD library as a flow-level load balancer.
For more information about the EFD library, please refer to the DPDK Programmer's Guide.
This sample application is a variant of the client-server sample application, where a specific
target node is specified for each and every flow (rather than in a round-robin fashion as in the
original load balancing sample application).
Overview
The architecture of the EFD flow-based load balancer sample application is presented in the
following figure.
As shown in Fig. 3.12, the sample application consists of a front-end node (server) using the
EFD library to create a load-balancing table for flows, for each flow a target backend worker
node is specified. The EFD table does not store the flow key (unlike a regular hash table), and
hence, it can individually load-balance millions of flows (number of targets * maximum number
of flows fit in a flow table per target) while still fitting in CPU cache.
It should be noted that although they are referred to as nodes, the frontend server and worker
nodes are processes running on the same platform.
Front-end Server
Upon initializing, the frontend server node (process) creates a flow distributor table (based on
the EFD library) which is populated with flow information and its intended target node.
The sample application assigns a specific target node_id (process) for each of the IP destina-
tion addresses as follows:
node_id = i % num_nodes; /* Target node id is generated */
ip_dst = rte_cpu_to_be_32(i); /* Specific ip destination address is
assigned to this target node */
then the pair of <key,target> is inserted into the flow distribution table.
The main loop of the server process receives a burst of packets, then for each packet, a flow
key (IP destination address) is extracted. The flow distributor table is looked up and the target
node id is returned. Packets are then enqueued to the specified target node id.
It should be noted that the flow distributor table is not a membership test table. That is, if the key
has already been inserted, the returned target node id will be correct, but for new keys the flow
distributor table will still return a value (which can appear to be valid).
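A sketch of that per-packet lookup on the server side (illustrative only; the use of the big-endian
destination address as the key follows the description above):

#include <rte_efd.h>

static inline efd_value_t
lookup_target_node(const struct rte_efd_table *table, unsigned int socket_id,
                   uint32_t ip_dst_be)
{
    /* EFD always returns a value, even for keys that were never inserted */
    return rte_efd_lookup(table, socket_id, (const void *)&ip_dst_be);
}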
Upon initializing, the worker node (process) creates a flow table (a regular hash table that stores
the flow keys, with a default size of 1M flows) which is populated with only the flow information
that is serviced at this node. This flow table is essential for identifying new keys that have not
been inserted before.
The worker node’s main loop is simply receiving packets then doing a hash table lookup. If a
match occurs then statistics are updated for flows serviced by this node. If no match is found
in the local hash table then this indicates that this is a new flow, which is dropped.
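A sketch of the worker's membership check (names are illustrative): a negative return value from
the hash lookup means the key was never inserted locally, so the packet belongs to another node
and is dropped.

#include <rte_hash.h>

static inline int
flow_is_handled_here(const struct rte_hash *flow_table, uint32_t ip_dst_be)
{
    /* >= 0 is the entry position; a negative value means "not found" */
    return rte_hash_lookup(flow_table, &ip_dst_be) >= 0;
}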
For more details on how to build the DPDK libraries and sample applications, please refer
to the DPDK Getting Started Guide.
The application has two binaries to be run: the front-end server and the back-end node.
The frontend server (server) has the following command line options:
./server [EAL options] -- -p PORTMASK -n NUM_NODES -f NUM_FLOWS
Where,
• -p PORTMASK: Hexadecimal bitmask of ports to configure
• -n NUM_NODES: Number of back-end nodes that will be used
• -f NUM_FLOWS: Number of flows to be added in the EFD table (1 million, by default)
The back-end node (node) has the following command line options:
./node [EAL options] -- -n NODE_ID
Where,
• -n NODE_ID: Node ID, which cannot be equal to or higher than NUM_NODES
First, the server app must be launched, with the number of nodes that will be run. Once it has
been started, the node instances can be run, with different NODE_ID. These instances have
to be run as secondary processes, with --proc-type=secondary in the EAL options, which
will attach to the primary process memory, and therefore, they can access the queues created
by the primary process to distribute packets.
To successfully run the application, the command line used to start the application has to be in
sync with the traffic flows configured on the traffic generator side.
For examples of application command lines and traffic generator flows, please refer to the
DPDK Test Report. For more details on how to set up and run the sample applications provided
with DPDK package, please refer to the DPDK Getting Started Guide for Linux and DPDK
Getting Started Guide for FreeBSD.
Explanation
/* create table */
efd_table = rte_efd_create("flow table", num_flows * 2, sizeof(uint32_t),
        1 << socket_id, socket_id);
if (efd_table == NULL)
    rte_exit(EXIT_FAILURE, "Problem creating the flow table\n");
}
static void
populate_efd_table(void)
{
    unsigned int i;
    int32_t ret;
    uint32_t ip_dst;
    uint8_t socket_id = rte_socket_id();
    uint64_t node_id;

    /* Add flows in table */
    for (i = 0; i < num_flows; i++) {
        node_id = i % num_nodes;

        ip_dst = rte_cpu_to_be_32(i);

        ret = rte_efd_update(efd_table, socket_id,
                (void *)&ip_dst, (efd_value_t)node_id);
        if (ret < 0)
            rte_exit(EXIT_FAILURE, "Unable to add entry %u in "
                    "EFD table\n", i);
    }
}
After initialization, packets are received from the enabled ports, and the IPv4 address from the
packets is used as a key to look up in the EFD table, which tells the node where the packet
has to be distributed.
static void
process_packets(uint32_t port_num __rte_unused, struct rte_mbuf *pkts[],
        uint16_t rx_count, unsigned int socket_id)
{
    uint16_t i;
    uint8_t node;
    efd_value_t data[EFD_BURST_MAX];
    const void *key_ptrs[EFD_BURST_MAX];
The burst of packets received is enqueued in temporary buffers (per node), and enqueued in
the shared ring between the server and the node. After this, a new burst of packets is received
and this process is repeated infinitely.
static void
flush_rx_queue(uint16_t node)
{
    uint16_t j;
    struct node *cl;

    if (cl_rx_buf[node].count == 0)
        return;

    cl = &nodes[node];
    if (rte_ring_enqueue_bulk(cl->rx_q, (void **)cl_rx_buf[node].buffer,
            cl_rx_buf[node].count) != 0){
        for (j = 0; j < cl_rx_buf[node].count; j++)
            rte_pktmbuf_free(cl_rx_buf[node].buffer[j]);
        cl->stats.rx_drop += cl_rx_buf[node].count;
    } else
        cl->stats.rx += cl_rx_buf[node].count;

    cl_rx_buf[node].count = 0;
}
The second process, the back-end node, receives the packets from the ring shared with the
server and sends them out, if they belong to the node.
At initialization, it attaches to the server process memory, to have access to the shared ring,
parameters and statistics.
rx_ring = rte_ring_lookup(get_rx_queue_name(node_id));
if (rx_ring == NULL)
    rte_exit(EXIT_FAILURE, "Cannot get RX ring - "
            "is server process running?\n");

mp = rte_mempool_lookup(PKTMBUF_POOL_NAME);
if (mp == NULL)
    rte_exit(EXIT_FAILURE, "Cannot get mempool for mbufs\n");

mz = rte_memzone_lookup(MZ_SHARED_INFO);
if (mz == NULL)
    rte_exit(EXIT_FAILURE, "Cannot get port info structure\n");

info = mz->addr;
tx_stats = &(info->tx_stats[node_id]);
filter_stats = &(info->filter_stats[node_id]);
Then, the hash table that contains the flows that will be handled by the node is created and
populated.
static struct rte_hash *
create_hash_table(const struct shared_info *info)
{
    uint32_t num_flows_node = info->num_flows / info->num_nodes;
    char name[RTE_HASH_NAMESIZE];
    struct rte_hash *h;

    /* create table */
    struct rte_hash_parameters hash_params = {
        .entries = num_flows_node * 2, /* table load = 50% */
        .key_len = sizeof(uint32_t), /* Store IPv4 dest IP address */
        .socket_id = rte_socket_id(),
        .hash_func_init_val = 0,
    };

    /* name the table and create it; the name format here is illustrative */
    snprintf(name, sizeof(name), "hash_table_%u", node_id);
    hash_params.name = name;
    h = rte_hash_create(&hash_params);

    if (h == NULL)
        rte_exit(EXIT_FAILURE,
                "Problem creating the hash table for node %d\n",
                node_id);
    return h;
}
static void
populate_hash_table(const struct rte_hash *h, const struct shared_info *info)
{
    unsigned int i;
    int32_t ret;
    uint32_t ip_dst;
    uint32_t num_flows_node = 0;
    uint64_t target_node;

    ip_dst = rte_cpu_to_be_32(i);
After initialization, packets are dequeued from the shared ring (from the server) and, as in the
server process, the IPv4 address from the packets is used as a key to look up in the hash table.
If there is a hit, the packet is stored in a buffer, to be eventually transmitted on one of the enabled
ports. If the key is not there, the packet is dropped, since the flow is not handled by the node.
static inline void
handle_packets(struct rte_hash *h, struct rte_mbuf **bufs, uint16_t num_packets)
{
    struct ipv4_hdr *ipv4_hdr;
    uint32_t ipv4_dst_ip[PKT_READ_SIZE];
    const void *key_ptrs[PKT_READ_SIZE];
    unsigned int i;
    int32_t positions[PKT_READ_SIZE] = {0};
Finally, note that both processes update statistics, such as transmitted, received and dropped
packets, which are shown and refreshed by the server app.
static void
do_stats_display(void)
{
    unsigned int i, j;
    const char clr[] = {27, '[', '2', 'J', '\0'};
    const char topLeft[] = {27, '[', '1', ';', '1', 'H', '\0'};
    uint64_t port_tx[RTE_MAX_ETHPORTS], port_tx_drop[RTE_MAX_ETHPORTS];
    uint64_t node_tx[MAX_NODES], node_tx_drop[MAX_NODES];

            port_tx[j] += tx_val;
            port_tx_drop[j] += drop_val;
            node_tx[i] += tx_val;
            node_tx_drop[i] += drop_val;
        }
    }

    printf("PORTS\n");
    printf("-----\n");
    for (i = 0; i < info->num_ports; i++)
        printf("Port %u: '%s'\t", (unsigned int)info->id[i],
                get_printable_mac_addr(info->id[i]));
    printf("\n\n");
    for (i = 0; i < info->num_ports; i++) {
        printf("Port %u - rx: %9"PRIu64"\t"
                "tx: %9"PRIu64"\n",
                (unsigned int)info->id[i], info->rx_stats.rx[i],
                port_tx[i]);
    }

    printf("\nSERVER\n");
    printf("-----\n");
    printf("distributed: %9"PRIu64", drop: %9"PRIu64"\n",
            flow_dist_stats.distributed, flow_dist_stats.drop);

    printf("\nNODES\n");
    printf("-------\n");
    for (i = 0; i < num_nodes; i++) {
        const unsigned long long rx = nodes[i].stats.rx;
        const unsigned long long rx_drop = nodes[i].stats.rx_drop;
        const struct filter_stats *filter = &info->filter_stats[i];

        printf("\n");
    }
This chapter describes the example applications for multi-processing that are included in the
DPDK.
Example Applications
The multi-process example applications are built in the same way as other sample applications,
and as documented in the DPDK Getting Started Guide. To build all the example applications:
1. Set RTE_SDK and go to the example directory:
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/multi_process
2. Set the target (a default target will be used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the applications:
make
Note: If just a specific multi-process application needs to be built, the final make command
can be run just in that application’s directory, rather than at the top-level multi-process directory.
The examples/simple_mp folder in the DPDK release contains a basic example application to
demonstrate how two DPDK processes can work together using queues and memory pools to
share information.
To run the application, start one copy of the simple_mp binary in one terminal, passing at least
two cores in the coremask, as follows:
./build/simple_mp -c 3 -n 4 --proc-type=primary
For the first DPDK process run, the proc-type flag can be omitted or set to auto, since all
DPDK processes will default to being a primary instance, meaning they have control over
the hugepage shared memory regions. The process should start successfully and display a
command prompt as follows:
$ ./build/simple_mp -c 3 -n 4 --proc-type=primary
EAL: coremask set to 3
EAL: Detected lcore 0 on socket 0
EAL: Detected lcore 1 on socket 0
EAL: Detected lcore 2 on socket 0
EAL: Detected lcore 3 on socket 0
...
Starting core 1
simple_mp >
To run the secondary process to communicate with the primary process, again run the same
binary setting at least two cores in the coremask:
./build/simple_mp -c C -n 4 --proc-type=secondary
When running a secondary process such as that shown above, the proc-type parameter can
again be specified as auto. However, omitting the parameter altogether will cause the process
to try and start as a primary rather than secondary process.
Once the process type is specified correctly, the process starts up, displaying largely similar
status messages to the primary instance as it initializes. Once again, you will be presented
with a command prompt.
Once both processes are running, messages can be sent between them using the send com-
mand. At any stage, either process can be terminated using the quit command.
On the primary instance:
EAL: Master core 10 is ready (tid=b5f89820)
EAL: Core 11 is ready (tid=84ffe700)
Starting core 11
simple_mp > send hello_secondary
simple_mp > core 11: Received 'hello_primary'
simple_mp > quit

On the secondary instance:
EAL: Master core 8 is ready (tid=864a3820)
EAL: Core 9 is ready (tid=85995700)
Starting core 9
simple_mp > core 9: Received 'hello_secondary'
simple_mp > send hello_primary
simple_mp > quit
Note: If the primary instance is terminated, the secondary instance must also be shut down
and restarted after the primary. This is necessary because the primary instance will clear and
reset the shared memory regions on startup, invalidating the secondary process’s pointers.
The secondary process can be stopped and restarted without affecting the primary process.
The core of this example application is based on using two queues and a single memory pool
in shared memory. These three objects are created at startup by the primary process, since
the secondary process cannot create objects in memory as it cannot reserve memory zones,
and the secondary process then uses lookup functions to attach to these objects as it starts
up.
if (rte_eal_process_type() == RTE_PROC_PRIMARY){
    send_ring = rte_ring_create(_PRI_2_SEC, ring_size, SOCKET0, flags);
    recv_ring = rte_ring_create(_SEC_2_PRI, ring_size, SOCKET0, flags);
    message_pool = rte_mempool_create(_MSG_POOL, pool_size, string_size, pool_cache,
            priv_data_sz, NULL, NULL, NULL, NULL, SOCKET0, flags);
} else {
    recv_ring = rte_ring_lookup(_PRI_2_SEC);
    send_ring = rte_ring_lookup(_SEC_2_PRI);
    message_pool = rte_mempool_lookup(_MSG_POOL);
}
Note, however, that the named ring structure used as send_ring in the primary process is the
recv_ring in the secondary process.
Once the rings and memory pools are all available in both the primary and secondary pro-
cesses, the application simply dedicates two threads to sending and receiving messages re-
spectively. The receive thread simply dequeues any messages on the receive ring, prints them,
and frees the buffer space used by the messages back to the memory pool. The send thread
makes use of the command-prompt library to interactively request user input for messages to
send. Once a send command is issued by the user, a buffer is allocated from the memory pool,
filled in with the message contents, then enqueued on the appropriate rte_ring.
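A sketch of the send path just described (function and parameter names are illustrative, not the
sample's exact code):

#include <stdio.h>
#include <rte_mempool.h>
#include <rte_ring.h>

static int
send_message(struct rte_ring *send_ring, struct rte_mempool *message_pool,
             const char *text, size_t string_size)
{
    void *msg = NULL;

    if (rte_mempool_get(message_pool, &msg) < 0)
        return -1; /* no message buffer available */

    snprintf((char *)msg, string_size, "%s", text);

    if (rte_ring_enqueue(send_ring, msg) < 0) {
        rte_mempool_put(message_pool, msg); /* ring full: return the buffer */
        return -1;
    }
    return 0;
}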
The second example of DPDK multi-process support demonstrates how a set of processes can
run in parallel, with each process performing the same set of packet- processing operations.
(Since each process is identical in functionality to the others, we refer to this as symmetric
multi-processing, to differentiate it from asymmetric multi- processing - such as a client-server
mode of operation seen in the next example, where different processes perform different tasks,
yet co-operate to form a packet-processing system.) The following diagram shows the data-
flow through the application, using two processes.
As the diagram shows, each process reads packets from each of the network ports in use.
RSS is used to distribute incoming packets on each port to different hardware RX queues.
Each process reads a different RX queue on each port and so does not contend with any other
process for that queue access. Similarly, each process writes outgoing packets to a different
TX queue on each port.
As with the simple_mp example, the first instance of the symmetric_mp process must be run
as the primary instance, though with a number of other application- specific parameters also
provided after the EAL arguments. These additional parameters are:
• -p <portmask>, where portmask is a hexadecimal bitmask of what ports on the system
are to be used. For example: -p 3 to use ports 0 and 1 only.
• --num-procs <N>, where N is the total number of symmetric_mp instances that will be
run side-by-side to perform packet processing. This parameter is used to configure the
appropriate number of receive queues on each network port.
• --proc-id <n>, where n is a numeric value in the range 0 <= n < N (number of processes,
specified above). This identifies which symmetric_mp instance is being run, so that each
process can read a unique receive queue on each network port.
Note: In the above example, the process type can be explicitly specified as primary or sec-
ondary, rather than auto. When using auto, the first process run creates all the memory struc-
tures needed for all processes - irrespective of whether it has a proc-id of 0, 1, 2 or 3.
Note: For the symmetric multi-process example, since all processes work in the same manner,
once the hugepage shared memory and the network ports are initialized, it is not necessary
to restart all processes if the primary instance dies. Instead, that process can be restarted
as a secondary, by explicitly setting the proc-type to secondary on the command line. (All
subsequent instances launched will also need this explicitly specified, as auto-detection will
detect no primary processes running and therefore attempt to re-initialize shared memory.)
The initialization calls in both the primary and secondary instances are the same for the most
part, calling the rte_eal_init(), 1 G and 10 G driver initialization and then rte_eal_pci_probe()
functions. Thereafter, the initialization done depends on whether the process is configured as
a primary or secondary instance.
In the primary instance, a memory pool is created for the packet mbufs and the network ports
to be used are initialized - the number of RX and TX queues per port being determined by the
num-procs parameter passed on the command-line. The structures for the initialized network
ports are stored in shared memory and therefore will be accessible by the secondary process
as it initializes.
if (num_ports & 1)
    rte_exit(EXIT_FAILURE, "Application must use an even number of ports\n");
In the secondary instance, rather than initializing the network ports, the port information ex-
ported by the primary process is used, giving the secondary process access to the hardware
and software rings for each network port. Similarly, the memory pool of mbufs is accessed by
doing a lookup for it by name:
mp = (proc_type == RTE_PROC_SECONDARY) ? rte_mempool_lookup(_SMP_MBUF_POOL) : rte_mempool_creat
Once this initialization is complete, the main loop of each process, both primary and secondary,
is exactly the same - each process reads from each port using the queue corresponding to its
proc-id parameter, and writes to the corresponding transmit queue on the output port.
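A sketch of that per-iteration forwarding step (simplified and illustrative; the burst size and
function name are assumptions):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Each process uses its proc-id as the RX/TX queue index on every port. */
static void
forward_from_port(uint8_t rx_port, uint8_t tx_port, uint16_t proc_id)
{
    struct rte_mbuf *bufs[32];
    uint16_t nb_rx, nb_tx;

    nb_rx = rte_eth_rx_burst(rx_port, proc_id, bufs, 32);
    nb_tx = rte_eth_tx_burst(tx_port, proc_id, bufs, nb_rx);

    while (nb_tx < nb_rx) /* free anything the NIC did not accept */
        rte_pktmbuf_free(bufs[nb_tx++]);
}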
The third example multi-process application included with the DPDK shows how one can use
a client-server type multi-process design to do packet processing. In this example, a single
server process performs the packet reception from the ports being used and distributes these
packets using round-robin ordering among a set of client processes, which perform the ac-
tual packet processing. In this case, the client applications just perform level-2 forwarding of
packets by sending each packet out on a different network port.
The following diagram shows the data-flow through the application, using two client processes.
The server process must be run initially as the primary process to set up all memory structures
for use by the clients. In addition to the EAL parameters, the application- specific parameters
are:
• -p <portmask >, where portmask is a hexadecimal bitmask of what ports on the system
are to be used. For example: -p 3 to use ports 0 and 1 only.
• -n <num-clients>, where the num-clients parameter is the number of client processes that
will process the packets received by the server application.
Note: In the server process, a single thread, the master thread, that is, the lowest numbered
lcore in the coremask, performs all packet I/O. If a coremask is specified with more than a
single lcore bit set in it, an additional lcore will be used for a thread to periodically print packet
count statistics.
Since the server application stores configuration data in shared memory, including the network
ports to be used, the only application parameter needed by a client process is its client instance
ID. Therefore, to run a server application on lcore 1 (with lcore 2 printing statistics) along with
two client processes running on lcores 3 and 4, the following commands could be used:
# ./mp_server/build/mp_server -c 6 -n 4 -- -p 3 -n 2
# ./mp_client/build/mp_client -c 8 -n 4 --proc-type=auto -- -n 0
# ./mp_client/build/mp_client -c 10 -n 4 --proc-type=auto -- -n 1
Note: If the server application dies and needs to be restarted, all client applications also need
to be restarted, as there is no support in the server application for it to run as a secondary
process. Any client processes that need restarting can be restarted without affecting the server
process.
The server process performs the network port and data structure initialization much as the
symmetric multi-process application does when run as primary. One additional enhancement
in this sample application is that the server process stores its port configuration data in a
memory zone in hugepage shared memory. This eliminates the need for the client processes
to have the portmask parameter passed into them on the command line, as is done for the
symmetric multi-process application, and therefore eliminates mismatched parameters as a
potential source of errors.
In the same way that the server process is designed to be run as a primary process instance
only, the client processes are designed to be run as secondary instances only. They have
no code to attempt to create shared memory objects. Instead, handles to all needed rings
and memory pools are obtained via calls to rte_ring_lookup() and rte_mempool_lookup(). The
network ports for use by the processes are obtained by loading the network port drivers and
probing the PCI bus, which will, as in the symmetric multi-process example, automatically
get access to the network ports using the settings already configured by the primary/server
process.
Once all applications are initialized, the server operates by reading packets from each network
port in turn and distributing those packets to the client queues (software rings, one for each
client process) in round-robin order. On the client side, the packets are read from the rings in
bursts that are as large as possible, then routed out to a different network port. The routing used is
very simple. All packets received on the first NIC port are transmitted back out on the second
port and vice versa. Similarly, packets are routed between the 3rd and 4th network ports and
so on. The sending of packets is done by writing the packets directly to the network ports; they
are not transferred back via the server process.
In both the server and the client processes, outgoing packets are buffered before being sent, so
as to allow the sending of multiple packets in a single burst to improve efficiency. For example,
the client process will buffer packets to send, until either the buffer is full or until we receive no
further packets from the server.
The fourth example of DPDK multi-process support demonstrates a master-slave model that
provides the capability of application recovery if a slave process crashes or meets unexpected
conditions. In addition, it also demonstrates the floating process, which can run among different
cores in contrast to the traditional way of binding a process/thread to a specific CPU core, using
the local cache mechanism of mempool structures.
This application performs the same functionality as the L2 Forwarding sample application,
therefore this chapter does not cover that part but describes functionality that is introduced in
this multi-process example only. Please refer to L2 Forwarding Sample Application (in Real
and Virtualized Environments) for more information.
Unlike previous examples where all processes are started from the command line with input
arguments, in this example, only one process is spawned from the command line and that
process creates other processes. The following section describes this in more detail.
The process spawned from the command line is called the master process in this document. A
process created by the master is called a slave process. The application has only one master
process, but could have multiple slave processes.
Once the master process begins to run, it tries to initialize all the resources such as memory,
CPU cores, driver, ports, and so on, as the other examples do. Thereafter, it creates slave
processes, as shown in the following figure.
The master process calls the rte_eal_mp_remote_launch() EAL function to launch an appli-
cation function for each pinned thread through the pipe. Then, it waits to check if any slave
processes have exited. If so, the process tries to re-initialize the resources that belong to that
slave and launch them in the pinned thread entry again. The following section describes the
recovery procedures in more detail.
For each pinned thread in EAL, after reading any data from the pipe, it tries to call the function
that the application specified. In this master specified function, a fork() call creates a slave
process that performs the L2 forwarding task. Then, the function waits until the slave exits, is
killed or crashes. Thereafter, it notifies the master of this event and returns. Finally, the EAL
pinned thread waits until the new function is launched.
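A rough sketch of that per-thread launch function, assuming l2fwd_main_loop() is the forwarding
entry point and using a hypothetical notify_master_slave_exit() helper for the exit notification:

#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <rte_lcore.h>

void l2fwd_main_loop(void);                       /* sample's forwarding loop (assumed) */
void notify_master_slave_exit(unsigned lcore_id); /* hypothetical notification helper */

static int
slave_proc_func(void *arg)
{
    pid_t pid;

    (void)arg;
    pid = fork();
    if (pid == 0) {        /* child: the slave process doing L2 forwarding */
        l2fwd_main_loop();
        exit(0);
    }
    if (pid < 0)
        return -1;

    waitpid(pid, NULL, 0); /* parent: wait until the slave exits or crashes */
    notify_master_slave_exit(rte_lcore_id());
    return 0;
}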
After discussing the master-slave model, it is necessary to mention another issue, global and
static variables.
For multiple-thread cases, all global and static variables have only one copy and they can be
accessed by any thread if applicable. So, they can be used to sync or share data among
threads.
In the previous examples, each process has separate global and static variables in memory, and
they are independent of each other. If it is necessary to share the knowledge, some communication
mechanism should be deployed, such as, memzone, ring, shared memory, and so on. The
global or static variables are not a valid approach to share data among processes. For variables
in this example, on the one hand, the slave process inherits all the knowledge of these variables
after being created by the master. On the other hand, other processes cannot know if one or
more processes modifies them after slave creation since that is the nature of a multiple process
address space. But this does not mean that these variables cannot be used to share or sync
data; it depends on the use case. The following are the possible use cases:
1. The master process starts and initializes a variable and it will never be changed after the
slave processes are created. This case is OK.
2. After the slave processes are created, the master or slave cores need to change a vari-
able, but other processes do not need to know the change. This case is also OK.
3. After the slave processes are created, the master or a slave needs to change a variable.
In the meantime, one or more other processes need to be aware of the change. In this
case, global and static variables cannot be used to share knowledge. Another communi-
cation mechanism is needed. A simple approach without lock protection can be a heap
buffer allocated by rte_malloc or mem zone.
Before talking about the recovery mechanism, it is necessary to know what is needed before a
new slave instance can run if a previous one exited.
When a slave process exits, the system returns all the resources allocated for this process
automatically. However, this does not include the resources that were allocated by the DPDK.
All the hardware resources are shared among the processes, which include memzone, mem-
pool, ring, a heap buffer allocated by the rte_malloc library, and so on. If the new instance runs
and the allocated resource is not returned, either resource allocation failed or the hardware
resource is lost forever.
When a slave process runs, it may have dependencies on other processes. They could have
execution sequence orders; they could share the ring to communicate; they could share the
same port for reception and forwarding; they could use lock structures to do exclusive access
in some critical path. What happens to the dependent process(es) if the peer leaves? The
consequences vary since the dependency cases are complex. It depends on what the
processes had shared. However, it is necessary to notify the peer(s) if one slave exits. Then,
the peer(s) will be aware of that and wait until the new instance begins to run.
Therefore, to provide the capability to resume the new slave instance if the previous one exited,
it is necessary to provide several mechanisms:
1. Keep a resource list for each slave process. Before a slave process runs, the master
should prepare a resource list. After it exits, the master could either delete the allocated
resources and create new ones, or re-initialize those for use by the new instance.
2. Set up a notification mechanism for slave process exit cases. After the specific slave
leaves, the master should be notified and then help to create a new instance. This mech-
anism is provided in Section Master-slave Process Models.
3. Use a synchronization mechanism among dependent processes. The master should
have the capability to stop or kill slave processes that have a dependency on the one
that has exited. Then, after the new instance of exited slave process begins to run,
the dependency ones could resume or run from the start. The example sends a STOP
command to slave processes dependent on the exited one, then they will exit. Thereafter,
the master creates new instances for the exited slave processes.
The following diagram describes slave process recovery.
When the DPDK application runs, there is always a -c option passed in to indicate the cores
that are enabled. Then, the DPDK creates a thread for each enabled core. By doing so, it
creates a 1:1 mapping between the enabled core and each thread. The enabled core always
has an ID, therefore, each thread has a unique core ID in the DPDK execution environment.
With the ID, each thread can easily access the structures or resources exclusively belonging
to it without using function parameter passing. It can easily use the rte_lcore_id() function to
get the value in every function that is called.
For threads/processes not created in that way, either pinned to a core or not, they will not own a
unique ID and the rte_lcore_id() function will not work in the correct way. However, sometimes
these threads/processes still need the unique ID mechanism to do easy access on structures
or resources. For example, the DPDK mempool library provides a local cache mechanism
(refer to Local Cache) for fast element allocation and freeing. If using a non-unique ID or a
fake one, a race condition occurs if two or more threads/ processes with the same core ID try
to use the local cache.
Therefore, unused core IDs from the passing of parameters with the -c option are used to
organize the core ID allocation array. Once the floating process is spawned, it tries to allocate
a unique core ID from the array and release it on exit.
A natural way to spawn a floating process is to use the fork() function and allocate a unique
core ID from the unused core ID array. However, it is necessary to write new code to provide
a notification mechanism for slave exit and make sure the process recovery mechanism can
work with it.
To avoid producing redundant code, the Master-Slave process model is still used to spawn
floating processes and then cancel their affinity to specific cores. Besides that, the core ID that
the DPDK assigned when spawning the thread (the one with a 1:1 mapping to the core mask) is
cleared, and a new core ID is obtained from the unused core ID allocation array.
This example has a command line similar to the L2 Forwarding sample application with a few
differences.
To run the application, start one copy of the l2fwd_fork binary in one terminal. Unlike the L2
Forwarding example, this example requires at least three cores since the master process will
wait and be accountable for slave process recovery. The command is as follows:
#./build/l2fwd_fork -c 1c -n 4 -- -p 3 -f
This example provides another -f option to specify the use of floating process. If not specified,
the example will use a pinned process to perform the L2 forwarding task.
To verify the recovery mechanism, proceed as follows: First, check the PID of the slave pro-
cesses:
#ps -fe | grep l2fwd_fork
root 5136 4843 29 11:11 pts/1 00:00:05 ./build/l2fwd_fork
root 5145 5136 98 11:11 pts/1 00:00:11 ./build/l2fwd_fork
root 5146 5136 98 11:11 pts/1 00:00:11 ./build/l2fwd_fork
You can also monitor the traffic generator statistics to see whether the slave processes have resumed.
Explanation
As described in previous sections, not all global and static variables need to change to be
accessible in multiple processes; it depends on how they are used. In this example, the statistics
on dropped/forwarded/received packet counts need to be updated by the slave process,
and the master needs to see the updates and print them out. So, it needs to allocate a heap
buffer using rte_zmalloc. In addition, if the -f option is specified, an array is needed to store
the allocated core ID for the floating process so that the master can return it after a slave has
exited accidentally.
static int
l2fwd_malloc_shared_struct(void)
{
    port_statistics = rte_zmalloc("port_stat",
        sizeof(struct l2fwd_port_statistics) * RTE_MAX_ETHPORTS, 0);
    if (port_statistics == NULL)
        return -1;

    /* allocate the mapping_id array used to track the core IDs
     * handed out to floating processes */
    if (float_proc) {
        int i;

        mapping_id = rte_malloc("mapping_id",
            sizeof(unsigned) * RTE_MAX_LCORE, 0);
        if (mapping_id == NULL)
            return -1;

        for (i = 0; i < RTE_MAX_LCORE; i++)
            mapping_id[i] = INVALID_MAPPING_ID;
    }
    return 0;
}
For each slave process, packets are received from one port and forwarded to another port
that another slave is operating on. If the other slave exits accidentally, the port it is operating
on may not work normally, so the first slave cannot forward packets to that port. There is a
dependency on the port in this case. So, the master should recognize the dependency. The
following is the code to detect this dependency:
for (portid = 0; portid < nb_ports; portid++) {
    /* skip ports that are not enabled */
    if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
        continue;
    /* find the lcores serving this port and its paired port */
    find_lcore = find_pair_lcore = 0;
    pair_port = l2fwd_dst_ports[portid];
    for (i = 0; i < RTE_MAX_LCORE; i++) {
        if (!rte_lcore_is_enabled(i))
            continue;
        for (j = 0; j < lcore_queue_conf[i].n_rx_port; j++) {
            if (lcore_queue_conf[i].rx_port_list[j] == portid) {
                lcore = i;
                find_lcore = 1;
                break;
            }
            if (lcore_queue_conf[i].rx_port_list[j] == pair_port) {
                pair_lcore = i;
                find_pair_lcore = 1;
                break;
            }
        }
        if (find_lcore && find_pair_lcore)
            break;
    }
    if (!find_lcore || !find_pair_lcore)
        rte_exit(EXIT_FAILURE, "Not find port=%d pair\n", portid);
    lcore_resource[lcore].pair_id = pair_lcore;
    lcore_resource[pair_lcore].pair_id = lcore;
}
Before launching the slave processes, it is necessary to set up the communication channel
between the master and the slaves so that the master can notify a slave if its peer process with
the dependency has exited. In addition, the master needs to register a callback function to be
run when a specific slave exits.
for (i = 0; i < RTE_MAX_LCORE; i++) {
if (lcore_resource[i].enabled) {
/* Create ring for master and slave communication */
ret = create_ms_ring(i);
if (ret != 0)
rte_exit(EXIT_FAILURE, "Create ring for lcore=%u failed",i);
if (flib_register_slave_exit_notify(i,slave_exit_cb) != 0)
rte_exit(EXIT_FAILURE, "Register master_trace_slave_exit failed");
}
}
After launching the slave processes, the master waits and prints out the port statistics periodically.
If an event indicating that a slave process has exited is detected, the master sends the STOP
command to the exited slave's peer and waits until it has also stopped. Then, it tries to clean up
the execution environment and prepare new resources. Finally, the new slave instance is launched.
while (1) {
    sleep(1);
    cur_tsc = rte_rdtsc();
    diff_tsc = cur_tsc - prev_tsc;

    /* if timer is enabled, advance it and print the statistics
     * when it expires */
    if (timer_period > 0) {
        timer_tsc += diff_tsc;
        /* ... call print_stats() when timer_tsc reaches timer_period ... */
    }
    prev_tsc = cur_tsc;

    /* check whether any slave exited and, if so, reset it together
     * with its peer */
    rte_spinlock_lock(&res_lock);
    for (i = 0; i < RTE_MAX_LCORE; i++) {
        /* res points at lcore i's resource entry, pair at its peer's */
        res = &lcore_resource[i];
        pair = &lcore_resource[res->pair_id];

        if (res->enabled && res->flags && pair->enabled) {
            /* ... send the STOP command to the peer and wait for it ... */
            if (reset_pair(res->lcore_id, pair->lcore_id) != 0)
                rte_exit(EXIT_FAILURE, "failed to reset slave");
            res->flags = 0;
            pair->flags = 0;
        }
    }
    rte_spinlock_unlock(&res_lock);
}
When the slave process is spawned and starts to run, it checks whether the floating process
option is applied. If so, it clears the affinity to a specific core and also sets the unique core
ID to 0. Then, it tries to allocate a new core ID. Since the core ID has changed, the resource
allocated by the master cannot work, so it remaps the resource to the new core ID slot.
static int
l2fwd_launch_one_lcore(__attribute__((unused)) void *dummy)
{
unsigned lcore_id = rte_lcore_id();
if (float_proc) {
unsigned flcore_id;
clear_cpu_affinity();
RTE_PER_LCORE(_lcore_id) = 0;
/* Get a lcore_id */
if (flib_assign_lcore_id() < 0 ) {
printf("flib_assign_lcore_id failed\n");
return -1;
}
flcore_id = rte_lcore_id();
mapping_id[lcore_id] = flcore_id;
printf("Org lcore_id = %u, cur lcore_id = %u\n",lcore_id, flcore_id);
remapping_slave_resource(lcore_id, flcore_id);
}
l2fwd_main_loop();
return 0;
}
The QoS meter sample application is an example that demonstrates the use of DPDK to provide
QoS marking and metering, as defined by RFC 2697 for the Single Rate Three Color Marker
(srTCM) and RFC 2698 for the Two Rate Three Color Marker (trTCM) algorithms.
Overview
The application uses a single thread for reading the packets from the RX port, metering, mark-
ing them with the appropriate color (green, yellow or red) and writing them to the TX port.
A policing scheme can be applied before writing the packets to the TX port by dropping or
changing the color of the packet in a static manner depending on both the input and output
colors of the packets that are processed by the meter.
The operation mode can be selected at compile time from the following options:
• Simple forwarding
• srTCM color blind
• srTCM color aware
• trTCM color blind
• trTCM color aware
Please refer to RFC2697 and RFC2698 for details about the srTCM and trTCM configurable
parameters (CIR, CBS and EBS for srTCM; CIR, PIR, CBS and PBS for trTCM).
The color blind modes are functionally equivalent to the color-aware modes when all the
incoming packets are colored as green.
To compile the application, set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
The application is constrained to use a single core in the EAL core mask and 2 ports only in
the application port mask (the first port from the port mask is used for RX and the other port in
the port mask is used for TX).
Refer to DPDK Getting Started Guide for general information on running applications and the
Environment Abstraction Layer (EAL) options.
Explanation
To simplify debugging (for example, by using the traffic generator RX side MAC address based
packet filtering feature), the color is defined as the LSB byte of the destination MAC address.
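A sketch of how the color could be written into that byte (the helper below is illustrative, not the sample's actual marking code):

#include <rte_mbuf.h>
#include <rte_ether.h>

/* Illustrative marking: store the metered color in the least significant
 * byte of the destination MAC address so that it can be filtered on the
 * traffic generator RX side. */
static void
mark_packet_color(struct rte_mbuf *pkt, uint8_t color)
{
    struct ether_hdr *eth = rte_pktmbuf_mtod(pkt, struct ether_hdr *);

    eth->d_addr.addr_bytes[ETHER_ADDR_LEN - 1] = color;
}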
The traffic meter parameters are configured in the application source code with the following
default values:
struct rte_meter_srtcm_params app_srtcm_params[] = {
    {.cir = 1000000 * 46, .cbs = 2048, .ebs = 2048},
};

struct rte_meter_trtcm_params app_trtcm_params[] = {
    {.cir = 1000000 * 46, .pir = 1500000 * 46, .cbs = 2048, .pbs = 2048},
};
Assuming the input traffic is generated at line rate and all packets are 64-byte Ethernet frames
(IPv4 packet size of 46 bytes) colored green, the expected color of the output traffic is determined
by the metering parameters above together with the policing scheme defined in the policer_table
array shown below:
int policer_table[e_RTE_METER_COLORS][e_RTE_METER_COLORS] =
{
{ GREEN, RED, RED},
{ DROP, YELLOW, RED},
{ DROP, DROP, RED}
};
Where rows indicate the input color, columns indicate the output color, and the value that is
stored in the table indicates the action to be taken for that particular case.
There are four different actions:
• GREEN: The packet’s color is changed to green.
• YELLOW: The packet’s color is changed to yellow.
• RED: The packet’s color is changed to red.
• DROP: The packet is dropped.
In this particular case:
• Every packet whose input and output colors are the same keeps its color.
• Every packet whose color has improved is dropped (this particular case cannot happen, so
these values will never be used).
• For all other cases, the color is changed to red.
The QoS sample application demonstrates the use of the DPDK to provide QoS scheduling.
Overview
The architecture of the QoS scheduler application is shown in the following figure.
There are two flavors of the runtime execution for this application, with either two or three
threads per packet flow configuration. The RX thread reads packets from the RX port, classifies
them based on the double VLAN (outer and inner) tags and the lower two bytes of the IP
destination address, and puts them into a ring queue. The worker thread dequeues the packets
from the ring and calls the QoS scheduler enqueue/dequeue functions. If a separate TX core is
used, the packets are then sent to the TX ring; otherwise, they are sent directly to the TX port.
The TX thread, if present, reads from the TX ring and writes the packets to the TX port.
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
Note: To get statistics on the sample app using the command line interface as described in the
next section, DPDK must be compiled defining CONFIG_RTE_SCHED_COLLECT_STATS,
which can be done by changing the configuration file for the specific target to be compiled.
Note: In order to run the application, a total of at least 4 GB of hugepages must be set up for
each of the used sockets (depending on the cores in use).
frame overhead = 24
number of subports per port = 1
number of pipes per subport = 4096
queue sizes = 64 64 64 64
; Subport configuration
[subport 0]
tb rate = 1250000000; Bytes per second
tb size = 1000000; Bytes
tc 0 rate = 1250000000; Bytes per second
; Pipe configuration
[pipe profile 0]
tb rate = 305175; Bytes per second
tb size = 1000000; Bytes
tc 0 oversubscription weight = 1
tc 1 oversubscription weight = 1
tc 2 oversubscription weight = 1
tc 3 oversubscription weight = 1
tc 0 wrr weights = 1 1 1 1
tc 1 wrr weights = 1 1 1 1
tc 2 wrr weights = 1 1 1 1
tc 3 wrr weights = 1 1 1 1
; RED params per traffic class and color (Green / Yellow / Red)
[red]
tc 0 wred min = 48 40 32
tc 0 wred max = 64 64 64
tc 0 wred inv prob = 10 10 10
tc 0 wred weight = 9 9 9
tc 1 wred min = 48 40 32
tc 1 wred max = 64 64 64
tc 1 wred inv prob = 10 10 10
tc 1 wred weight = 9 9 9
tc 2 wred min = 48 40 32
tc 2 wred max = 64 64 64
tc 2 wred inv prob = 10 10 10
tc 2 wred weight = 9 9 9
tc 3 wred min = 48 40 32
tc 3 wred max = 64 64 64
tc 3 wred inv prob = 10 10 10
tc 3 wred weight = 9 9 9
Interactive mode
These are the commands that are currently working under the command line interface:
• Control Commands
• --quit: Quits the application.
• General Statistics
Example
This example uses a single packet flow configuration which creates one RX thread on lcore 5
reading from port 3 and a worker thread on lcore 7 writing to port 2.
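Assuming the default master core 1, the corresponding command line could look like the following (coremask 0xa2 enables cores 1, 5 and 7; the profile file name is illustrative):
./qos_sched -c a2 -n 4 -- --pfc "3,2,5,7" --cfg ./profile.cfg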
Another example with 2 packet flow configurations using different ports but sharing the same
core for QoS scheduler is given below:
./qos_sched -c c6 -n 4 -- --pfc "3,2,2,6,7" --pfc "1,0,2,6,7" --cfg ./profile.cfg
Note that independent cores for each of the RX, WT and TX threads in the packet flow
configurations are also supported, providing flexibility to balance the work.
The EAL coremask is constrained to contain the default master core 1 and the RX, WT and TX
cores only.
Explanation
The Port/Subport/Pipe/Traffic Class/Queue are the hierarchical entities in a typical QoS appli-
cation:
• A subport represents a predefined group of users.
• A pipe represents an individual user/subscriber.
• A traffic class is the representation of a different traffic type with specific loss rate, delay
and jitter requirements, such as voice, video or data transfers.
• A queue hosts packets from one or multiple connections of the same type belonging to
the same user.
The traffic flows that need to be configured are application dependent. This application classi-
fies based on the QinQ double VLAN tags and the IP destination address as indicated in the
following table.
This sample application demonstrates the use of the cryptographic operations provided by
the Intel® QuickAssist Technology from within the DPDK environment. Therefore, building
and running this application requires having both the DPDK and the QuickAssist Technology
Software Library installed, as well as at least one Intel® QuickAssist Technology hardware
device present in the system.
For this sample application, there is a dependency on either of:
• Intel® Communications Chipset 8900 to 8920 Series Software for Linux* package
• Intel® Communications Chipset 8925 to 8955 Series Software for Linux* package
Overview
An overview of the application is provided in Fig. 3.18. For simplicity, only two NIC ports and
one Intel® QuickAssist Technology device are shown in this diagram, although the number of
NIC ports and Intel® QuickAssist Technology devices can be different.
• D = 2: the cryptographic path for decryption is selected (the packet is first decrypted and
then forwarded out)
For the cryptographic path cases (D = 1 or D = 2), byte C specifies the cipher algorithm and byte
B the cryptographic hash algorithm to be used for the current packet. Byte A is not used and
can be any value. The cipher and cryptographic hash algorithms supported by this application
are listed in the crypto.h header file.
For each input packet, the destination NIC TX port is decided at the forwarding stage (executed
after the cryptographic stage, if enabled for the packet) by looking up the RX port index in the
dst_ports[] array, which was initialized at startup; the output port is the adjacent enabled port.
For example, if ports 1, 3, 5 and 6 are enabled, then for input port 1 the output port will be 3 and
vice versa, and for input port 5 the output port will be 6 and vice versa.
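A minimal sketch of how such an adjacent-port pairing table could be built from the enabled port mask (the function and variable names below are illustrative):

#include <stdint.h>
#include <rte_ethdev.h>

/* Pair enabled ports: the 1st with the 2nd, the 3rd with the 4th, and so on. */
static uint8_t dst_ports[RTE_MAX_ETHPORTS];

static void
build_dst_ports(uint32_t enabled_port_mask)
{
    uint8_t portid, last_port = 0;
    unsigned nb_ports_in_mask = 0;

    for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
        if ((enabled_port_mask & (1u << portid)) == 0)
            continue;
        if (nb_ports_in_mask % 2) {
            /* odd port in the mask: pair it with the previous one */
            dst_ports[portid] = last_port;
            dst_ports[last_port] = portid;
        } else
            last_port = portid;
        nb_ports_in_mask++;
    }
}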
For the cryptographic path, it is the payload of the IPv4 packet that is encrypted or decrypted.
Setup
Building and running this application requires having both the DPDK package and the Quick-
Assist Technology Software Library installed, as well as at least one Intel® QuickAssist Tech-
nology hardware device present in the system.
For more details on how to build and run DPDK and Intel® QuickAssist Technology applica-
tions, please refer to the following documents:
• DPDK Getting Started Guide
• Intel® Communications Chipset 8900 to 8920 Series Software for Linux* Getting Started
Guide (440005)
• Intel® Communications Chipset 8925 to 8955 Series Software for Linux* Getting Started
Guide (523128)
For more details on the actual platforms used to validate this application, as well as perfor-
mance numbers, please refer to the Test Report, which is accessible by contacting your Intel
representative.
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
Refer to the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
cd ${RTE_SDK}/examples/dpdk_qat
make
The Intel® QuickAssist Technology configuration files used by the application are located in
the config_files folder in the application folder. The following sets of configuration files are
included in the DPDK package:
• Stargo CRB (single CPU socket): located in the stargo folder
– dh89xxcc_qa_dev0.conf
• Shumway CRB (dual CPU socket): located in the shumway folder
– dh89xxcc_qa_dev0.conf
– dh89xxcc_qa_dev1.conf
• Coleto Creek: located in the coleto folder
– dh895xcc_qa_dev0.conf
The relevant configuration file(s) must be copied to the /etc/ directory.
Please note that any change to these configuration files requires restarting the Intel® Quick-
Assist Technology driver using the following command:
# service qat_service restart
Refer to the following documents for information on the Intel® QuickAssist Technology config-
uration files:
• Intel® Communications Chipset 8900 to 8920 Series Software Programmer’s Guide
• Intel® Communications Chipset 8925 to 8955 Series Software Programmer’s Guide
• Intel® Communications Chipset 8900 to 8920 Series Software for Linux* Getting Started
Guide.
• Intel® Communications Chipset 8925 to 8955 Series Software for Linux* Getting Started
Guide.
Refer to the L3 Forwarding Sample Application for a more detailed description of the --config
command line option.
As an example, to run the application with two ports and two cores, which are using different
Intel® QuickAssist Technology execution engines, performing AES-CBC-128 encryption with
AES-XCBC-MAC-96 hash, the following settings can be used:
• Traffic generator source IP address: 0.9.6.1
• Command line:
./build/dpdk_qat -c 0xff -n 2 -- -p 0x3 --config '(0,0,1),(1,0,2)'
Refer to the DPDK Test Report for more examples of traffic generator setup and the application
startup command lines. If no errors are generated in response to the startup commands, the
application is running correctly.
The Quota and Watermark sample application is a simple example of packet processing using the
Data Plane Development Kit (DPDK) that showcases the use of a quota as the maximum
number of packets enqueued/dequeued at a time, and low and high watermarks to signal low and
high ring usage respectively.
Additionally, it shows how ring watermarks can be used to feed back congestion notifications
to data producers by temporarily stopping processing of overloaded rings and sending Ethernet
flow control frames.
This sample application is split in two parts:
• qw - The core quota and watermark sample application
• qwctl - A command line tool to alter quota and watermarks while qw is running
Overview
The Quota and Watermark sample application performs forwarding for each packet that is
received on a given port. The destination port is the adjacent port from the enabled port mask,
that is, if the first four ports are enabled (port mask 0xf), ports 0 and 1 forward into each other,
and ports 2 and 3 forward into each other. The MAC addresses of the forwarded Ethernet
frames are not affected.
Internally, packets are pulled from the ports by the master logical core and put on a variable
length processing pipeline, with each stage connected by rings, as shown in Fig. 3.19.
An adjustable quota value controls how many packets are being moved through the pipeline
per enqueue and dequeue. Adjustable watermark values associated with the rings control a
back-off mechanism that tries to prevent the pipeline from being overloaded by:
• Stopping enqueuing on rings for which the usage has crossed the high watermark thresh-
old
• Sending Ethernet pause frames
• Only resuming enqueuing on a ring once its usage goes below a global low watermark
threshold
This mechanism allows congestion notifications to go up the ring pipeline and eventually lead
to an Ethernet flow control frame being sent to the source.
On top of serving as an example of quota and watermark usage, this application can be used to
benchmark the performance of ring based processing pipelines using a traffic generator, as shown
in Fig. 3.20.
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make
where,
-p PORTMASK: A hexadecimal bitmask of the ports to configure
To run the application in a linuxapp environment with four logical cores and ports 0 and 2, issue
the following command:
./qw/build/qw -c f -n 4 -- -p 5
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.
The --proc-type=secondary option is necessary for the EAL to properly initialize the control
application so that it uses the same huge pages as the core application and is thus able to access
its rings.
To run the application in a linuxapp environment on logical core 0, issue the following command:
./qwctl/build/qwctl -c 1 -n 4 --proc-type=secondary
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.
qwctl is an interactive command line tool that lets the user change variables in a running instance
of qw. The help command gives a list of available commands:
$ qwctl > help
Code Overview
The following sections provide a quick guide to the application’s source code.
Core Application - qw
The EAL arguments are parsed at the beginning of the main() function:
ret = rte_eal_init(argc, argv);
if (ret < 0)
    rte_exit(EXIT_FAILURE, "Cannot initialize EAL\n");

argc -= ret;
argv += ret;
Then, a call to init_dpdk(), defined in init.c, is made to initialize the poll mode drivers:
void
init_dpdk(void)
{
int ret;
ret = rte_eal_pci_probe();
if (ret < 0)
rte_exit(EXIT_FAILURE, "rte_eal_pci_probe(): error %d\n", ret);
if (rte_eth_dev_count() < 2)
rte_exit(EXIT_FAILURE, "Not enough Ethernet port available\n");
}
To fully understand this code, it is recommended to study the chapters that relate to the Poll
Mode Driver in the DPDK Getting Started Guide and the DPDK API Reference.
The quota and low_watermark shared variables are put into an rte_memzone using a call to
setup_shared_variables():
void
setup_shared_variables(void)
{
    const struct rte_memzone *qw_memzone;

    /* reserve a memzone to hold the shared quota and low_watermark
     * variables (the size and flags shown here are illustrative) */
    qw_memzone = rte_memzone_reserve(QUOTA_WATERMARK_MEMZONE_NAME,
            2 * sizeof(int), rte_socket_id(), 0);
    if (qw_memzone == NULL)
        rte_exit(EXIT_FAILURE, "%s\n", rte_strerror(rte_errno));

    quota = qw_memzone->addr;
    low_watermark = (unsigned int *) qw_memzone->addr + sizeof(int);
}
These two variables are initialized to a default value in main() and can be changed while qw is
running using the qwctl control program.
Application Arguments
The qw application only takes one argument: a port mask that specifies which ports should be
used by the application. At least two ports are needed to run the application and there should
be an even number of ports given in the port mask.
The port mask parsing is done in parse_qw_args(), defined in args.c.
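A sketch of the kind of validation parse_qw_args() has to perform on the port mask (illustrative only, not the sample's actual code):

/* Count the enabled ports and reject an odd number of them. */
static int
check_portmask(unsigned int portmask)
{
    unsigned int nb_ports = 0;
    unsigned int i;

    for (i = 0; i < 32; i++)
        if (portmask & (1u << i))
            nb_ports++;

    if (nb_ports < 2 || (nb_ports & 1))
        return -1;   /* need at least two ports, in pairs */

    return 0;
}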
Once the application’s arguments are parsed, an mbuf pool is created. It contains a set of mbuf
objects that are used by the driver and the application to store network packets:
/* Create a pool of mbuf to store packets */
if (mbuf_pool == NULL)
rte_panic("%s\n", rte_strerror(rte_errno));
The rte_mempool is a generic structure used to handle pools of objects. In this case, it is
necessary to create a pool that will be used by the driver, which expects to have some reserved
space in the mempool structure, sizeof(struct rte_pktmbuf_pool_private) bytes.
The number of allocated pkt mbufs is MBUF_PER_POOL, with a size of MBUF_SIZE each. A
per-lcore cache of 32 mbufs is kept. The memory is allocated on the master lcore's socket,
but it is possible to extend this code to allocate one mbuf pool per socket.
Two callback pointers are also given to the rte_mempool_create() function:
• The first callback pointer is to rte_pktmbuf_pool_init() and is used to initialize the private
data of the mempool, which is needed by the driver. This function is provided by the mbuf
API, but can be copied and extended by the developer.
• The second callback pointer given to rte_mempool_create() is the mbuf initializer.
The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library. If a
more complex application wants to extend the rte_pktmbuf structure for its own needs, a new
function derived from rte_pktmbuf_init() can be created.
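Putting the pieces described above together, the creation call elided from the snippet earlier looks roughly like this (a sketch; MBUF_PER_POOL and MBUF_SIZE are defined in the application sources):

mbuf_pool = rte_mempool_create("mbuf_pool", MBUF_PER_POOL, MBUF_SIZE,
        32,                                      /* per-lcore cache size */
        sizeof(struct rte_pktmbuf_pool_private), /* private data for the driver */
        rte_pktmbuf_pool_init, NULL,             /* pool init callback */
        rte_pktmbuf_init, NULL,                  /* per-mbuf init callback */
        rte_socket_id(), 0);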
Each port in the port mask is configured and a corresponding ring is created in the master
lcore’s array of rings. This ring is the first in the pipeline and will hold the packets directly
coming from the port.
for (port_id = 0; port_id < RTE_MAX_ETHPORTS; port_id++)
if (is_bit_set(port_id, portmask)) {
configure_eth_port(port_id);
init_ring(master_lcore_id, port_id);
}
pair_ports();
The configure_eth_port() and init_ring() functions are used to configure a port and a ring re-
spectively and are defined in init.c. They make use of the DPDK APIs defined in rte_eth.h and
rte_ring.h.
pair_ports() builds the port_pairs[] array so that its key-value pairs are a mapping between
reception and transmission ports. It is defined in init.c.
The application uses the master logical core to poll all the ports for new packets and enqueue
them on a ring associated with the port.
Each logical core except the last runs pipeline_stage() after a ring for each used port is
initialized on that core. pipeline_stage() on core X dequeues packets from core X-1's rings and
enqueues them on its own rings. See Fig. 3.21.
/* Start pipeline_stage() on all the available slave lcore but the last */
The last available logical core runs send_stage(), which is the last stage of the pipeline de-
queuing packets from the last ring in the pipeline and sending them out on the destination port
setup by pair_ports().
/* Start send_stage() on the last slave core */
In the receive_stage() function running on the master logical core, the main task is to read
ingress packets from the RX ports and enqueue them on the port’s corresponding first ring in
the pipeline. This is done using the following code:
lcore_id = rte_lcore_id();

/* process each port in the port mask round robin style */
for (port_id = 0; port_id < RTE_MAX_ETHPORTS; port_id++) {
    if (!is_bit_set(port_id, portmask))
        continue;

    ring = rings[lcore_id][port_id];

    if (ring_state[port_id] != RING_READY) {
        if (rte_ring_count(ring) > *low_watermark)
            continue;
        else
            ring_state[port_id] = RING_READY;
    }

    /* ... receive up to *quota packets from the port and
     * enqueue them on the ring ... */
}
For each port in the port mask, the corresponding ring’s pointer is fetched into ring and that
ring’s state is checked:
• If it is in the RING_READY state, *quota packets are grabbed from the port and put on
the ring. Should this operation make the ring’s usage cross its high watermark, the ring
is marked as overloaded and an Ethernet flow control frame is sent to the source.
• If it is not in the RING_READY state, this port is ignored until the ring’s usage crosses
the *low_watermark value.
The pipeline_stage() function's task is to process and move packets from the preceding pipeline
stage. This thread runs on most of the logical cores, to create an arbitrarily long pipeline.
lcore_id = rte_lcore_id();
previous_lcore_id = get_previous_lcore_id(lcore_id);
tx = rings[lcore_id][port_id];
rx = rings[previous_lcore_id][port_id];
if (ring_state[port_id] != RING_READY) {
if (rte_ring_count(tx) > *low_watermark)
continue;
else
ring_state[port_id] = RING_READY;
}
/* Enqueue them on tx */
The thread’s logic works mostly like receive_stage(), except that packets are moved from ring
to ring instead of port to ring.
In this example, no actual processing is done on the packets, but pipeline_stage() is an ideal
place to perform any processing required by the application.
Finally, the send_stage() function’s task is to read packets from the last ring in a pipeline and
send them on the destination port defined in the port_pairs[] array. It is running on the last
available logical core only.
lcore_id = rte_lcore_id();
previous_lcore_id = get_previous_lcore_id(lcore_id);
dest_port_id = port_pairs[port_id];
tx = rings[previous_lcore_id][port_id];
if (rte_ring_empty(tx)) continue;
For each port in the port mask, up to *quota packets are pulled from the last ring in its pipeline
and sent on the destination port paired with the current port.
The qwctl application uses the rte_cmdline library to provide the user with an interactive com-
mand line that can be used to modify and inspect parameters in a running qw application.
Those parameters are the global quota and low_watermark value as well as each ring’s built-in
high watermark.
Command Definitions
The commands are defined using the rte_cmdline library described above. To access the shared
variables set up by qw, qwctl looks up the memzone created by the core application:
qw_memzone = rte_memzone_lookup(QUOTA_WATERMARK_MEMZONE_NAME);
if (qw_memzone == NULL)
rte_exit(EXIT_FAILURE, "Couldn't find memzone\n");
quota = qw_memzone->addr;
The Timer sample application is a simple application that demonstrates the use of a timer in
a DPDK application. This application prints some messages from different lcores regularly,
demonstrating the use of timers.
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.
Explanation
In addition to EAL initialization, the timer subsystem must be initialized, by calling the
rte_timer_subsystem_init() function.
/* init EAL */
ret = rte_eal_init(argc, argv);
/* init RTE timer library */
rte_timer_subsystem_init();
After timer creation (see the next paragraph), the main loop is executed on each slave lcore
using the well-known rte_eal_remote_launch() and also on the master.
/* call lcore_mainloop() on every slave lcore */
RTE_LCORE_FOREACH_SLAVE(lcore_id) {
rte_eal_remote_launch(lcore_mainloop, NULL, lcore_id);
}
(void) lcore_mainloop(NULL);
while (1) {
    /*
     * Call the timer handler on each core: as we don't
     * need a very precise timer, so only call
     * rte_timer_manage() every ~10ms (at 2 GHz). In a real
     * application, this will enhance performances as
     * reading the HPET timer is not efficient.
     */
    cur_tsc = rte_rdtsc();
    diff_tsc = cur_tsc - prev_tsc;
    if (diff_tsc > TIMER_RESOLUTION_CYCLES) {
        rte_timer_manage();
        prev_tsc = cur_tsc;
    }
}
As explained in the comment, it is better to use the TSC register (as it is a per-lcore register) to
check if the rte_timer_manage() function must be called or not. In this example, the resolution
of the timer is 10 milliseconds.
Managing Timers
In the main() function, the two timers are initialized. This call to rte_timer_init() is necessary
before doing any other operation on the timer structure.
/* init timer structures */
rte_timer_init(&timer0);
rte_timer_init(&timer1);
hz = rte_get_hpet_hz();
lcore_id = rte_lcore_id();
The callback for the first timer (timer0) only displays a message until a global counter reaches
20 (after 20 seconds). In this case, the timer is stopped using the rte_timer_stop() function.
/* timer0 callback */
static void
timer0_cb(__attribute__((unused)) struct rte_timer *tim,
    __attribute__((unused)) void *arg)
{
    static unsigned counter = 0;

    /* this timer is automatically reloaded until we decide to
     * stop it, when counter reaches 20 */
    if ((counter++) == 20)
        rte_timer_stop(tim);
}
The callback for the second timer (timer1) displays a message and reloads the timer on the
next lcore, using the rte_timer_reset() function:
/* timer1 callback */
static void
timer1_cb(__attribute__((unused)) struct rte_timer *tim,
    __attribute__((unused)) void *arg)
{
    unsigned lcore_id = rte_lcore_id();
    uint64_t hz;

    hz = rte_get_hpet_hz();

    /* reload the timer on the next enabled lcore */
    lcore_id = rte_get_next_lcore(lcore_id, 0, 1);
    rte_timer_reset(tim, hz / 3, SINGLE, lcore_id, timer1_cb, NULL);
}
The Packet Ordering sample application shows the impact of reordering a stream of packets. It
is meant to stress the reorder library with different configurations for performance measurement.
Overview
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make
Refer to DPDK Getting Started Guide for general information on running applications and the
Environment Abstraction Layer (EAL) options.
The -c EAL CPU_COREMASK option has to contain at least 3 CPU cores. The first CPU core
in the core mask is the master core and is assigned as the RX core, the last is the TX core, and
the rest are Worker cores.
The PORTMASK parameter must contain either 1 or an even number of enabled ports. When
more than 1 port is enabled, traffic is forwarded in pairs. For example, if we enable 4 ports,
traffic goes from port 0 to 1 and from 1 to 0, and likewise from 2 to 3 and from 3 to 2, giving
the port pairs [0,1] and [2,3].
The --disable-reorder long option does, as its name implies, disable the reordering of traffic,
which should help evaluate the performance impact of reordering.
The VMDQ and DCB Forwarding sample application is a simple example of packet processing
using the DPDK. The application performs L2 forwarding using VMDQ and DCB to divide the
incoming traffic into queues. The traffic splitting is performed in hardware by the VMDQ and
DCB features of the Intel® 82599 and X710/XL710 Ethernet Controllers.
Overview
This sample application can be used as a starting point for developing a new application that
is based on the DPDK and uses VMDQ and DCB for traffic partitioning.
The VMDQ and DCB filters work on MAC and VLAN traffic to divide the traffic into input queues
on the basis of the Destination MAC address, VLAN ID and VLAN user priority fields. VMDQ
filters split the traffic into 16 or 32 groups based on the Destination MAC and VLAN ID. Then,
DCB places each packet into one of the queues within that group, based upon the VLAN user
priority field.
All traffic is read from a single incoming port (port 0) and output on port 1, without any process-
ing being performed. With Intel® 82599 NIC, for example, the traffic is split into 128 queues
on input, where each thread of the application reads from multiple queues. When run with
8 threads, that is, with the -c FF option, each thread receives and forwards packets from 16
queues.
As supplied, the sample application configures the VMDQ feature to have 32 pools with
4 queues each, as indicated in Fig. 3.22. The Intel® 82599 10 Gigabit Ethernet Controller NIC
also supports the splitting of traffic into 16 pools of 8 queues, while the Intel® X710 or XL710
Ethernet Controller NICs support many configurations of VMDQ pools of 4 or 8 queues each.
For simplicity, only 16 or 32 pools are supported in this sample, and the number of queues for
each VMDQ pool can be changed by setting CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM
in the config/common_* file. The nb-pools, nb-tcs and enable-rss parameters can be passed on
the command line, after the EAL parameters:
./build/vmdq_dcb [EAL options] -- -p PORTMASK --nb-pools NP --nb-tcs TC --enable-rss
Fig. 3.22: Packet Flow Through the VMDQ and DCB Sample Application
In Linux* user space, the application can display statistics with the number of packets received
on each queue. To have the application display the statistics, send a SIGHUP signal to the
running application process.
The VMDQ and DCB Forwarding sample application is in many ways simpler than the L2
Forwarding application (see L2 Forwarding Sample Application (in Real and Virtualized Envi-
ronments)) as it performs unidirectional L2 forwarding of packets from one port to a second
port. Other than the parameters described above and the standard EAL command-line options,
no command-line options are taken by this application.
Note: Since VMD queues are being used for VMM, this application works correctly when VTd
is disabled in the BIOS or Linux* kernel (intel_iommu=off).
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.
Explanation
Initialization
The EAL, driver and PCI configuration is performed largely as in the L2 Forwarding sample
application, as is the creation of the mbuf pool. See L2 Forwarding Sample Application (in Real
and Virtualized Environments). Where this example application differs is in the configuration of
the NIC port for RX.
The VMDQ and DCB hardware feature is configured at port initialization time by setting the
appropriate values in the rte_eth_conf structure passed to the rte_eth_dev_configure() API.
Initially in the application, a default structure is provided for VMDQ and DCB configuration to
be filled in later by the application.
/* empty vmdq+dcb configuration structure. Filled in programmatically */
static const struct rte_eth_conf vmdq_dcb_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_VMDQ_DCB,
.split_hdr_size = 0,
.header_split = 0, /**< Header Split disabled */
.hw_ip_checksum = 0, /**< IP checksum offload disabled */
.hw_vlan_filter = 0, /**< VLAN filtering disabled */
.jumbo_frame = 0, /**< Jumbo Frame Support disabled */
},
.txmode = {
.mq_mode = ETH_MQ_TX_VMDQ_DCB,
},
/*
* should be overridden separately in code with
* appropriate values
*/
.rx_adv_conf = {
.vmdq_dcb_conf = {
.nb_queue_pools = ETH_32_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
.pool_map = {{0, 0},},
.dcb_tc = {0},
},
.dcb_rx_conf = {
.nb_tcs = ETH_4_TCS,
/** Traffic class each UP mapped to. */
.dcb_tc = {0},
},
.vmdq_rx_conf = {
.nb_queue_pools = ETH_32_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
.pool_map = {{0, 0},},
},
},
.tx_adv_conf = {
.vmdq_dcb_tx_conf = {
.nb_queue_pools = ETH_32_POOLS,
.dcb_tc = {0},
},
},
};
The get_eth_conf() function fills in an rte_eth_conf structure with the appropriate values, based
on the global vlan_tags array, dividing up the possible user priority values equally among the
individual queues (also referred to as traffic classes) within each pool. With the Intel® 82599
NIC, if the number of pools is 32, then the user priority fields are allocated 2 to a queue. If 16
pools are used, then each of the 8 user priority fields is allocated to its own queue within the
pool. With the Intel® X710/XL710 NICs, if the number of TCs is 4 and the number of queues per
pool is 8, then the user priority fields are allocated 2 to one TC, each TC has 2 queues mapped
to it, and RSS determines the destination queue among those 2 queues. For the VLAN IDs, each
one can be allocated to possibly multiple pools of queues, so the pools parameter in the
rte_eth_vmdq_dcb_conf structure is specified as a bitmask value. For the destination MAC, each
VMDQ pool is assigned a MAC address. In this sample, each VMDQ pool is assigned a MAC of
the form 52:54:00:12:<port_id>:<pool_id>, that is, the MAC of VMDQ pool 2 on port 1 is
52:54:00:12:01:02.
const uint16_t vlan_tags[] = {
0, 1, 2, 3, 4, 5, 6, 7,
8, 9, 10, 11, 12, 13, 14, 15,
16, 17, 18, 19, 20, 21, 22, 23,
24, 25, 26, 27, 28, 29, 30, 31
};
/* pool mac addr template, pool mac addr is like: 52 54 00 12 port# pool# */
static struct ether_addr pool_addr_template = {
.addr_bytes = {0x52, 0x54, 0x00, 0x12, 0x00, 0x00}
};
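A sketch of how the per-pool MAC addresses could be derived from this template and programmed into the port (illustrative; the sample's actual code may differ):

/* Illustrative only: assign one MAC address per VMDQ pool on a port,
 * following the 52:54:00:12:<port_id>:<pool_id> template above. */
static void
assign_pool_macs(uint8_t port_id, uint32_t num_pools)
{
    uint32_t q;

    for (q = 0; q < num_pools; q++) {
        struct ether_addr mac = pool_addr_template;

        mac.addr_bytes[4] = port_id;
        mac.addr_bytes[5] = q;
        rte_eth_dev_mac_addr_add(port_id, &mac, q);
    }
}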
/* Builds up the correct configuration for vmdq+dcb based on the vlan tags array
* given above, and the number of traffic classes available for use. */
static inline int
get_eth_conf(struct rte_eth_conf *eth_conf)
{
struct rte_eth_vmdq_dcb_conf conf;
struct rte_eth_vmdq_rx_conf vmdq_conf;
struct rte_eth_dcb_rx_conf dcb_conf;
struct rte_eth_vmdq_dcb_tx_conf tx_conf;
uint8_t i;

/* map one VLAN tag to each pool */
for (i = 0; i < num_pools; i++) {
    conf.pool_map[i].vlan_id = vlan_tags[i];
    vmdq_conf.pool_map[i].vlan_id = vlan_tags[i];
    conf.pool_map[i].pools = 1UL << i;
    vmdq_conf.pool_map[i].pools = 1UL << i;
}
for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
conf.dcb_tc[i] = i % num_tcs;
dcb_conf.dcb_tc[i] = i % num_tcs;
tx_conf.dcb_tc[i] = i % num_tcs;
}
dcb_conf.nb_tcs = (enum rte_eth_nb_tcs)num_tcs;
(void)(rte_memcpy(eth_conf, &vmdq_dcb_conf_default, sizeof(*eth_conf)));
(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_dcb_conf, &conf,
    sizeof(conf)));
(void)(rte_memcpy(&eth_conf->rx_adv_conf.dcb_rx_conf, &dcb_conf,
    sizeof(dcb_conf)));
(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_rx_conf, &vmdq_conf,
    sizeof(vmdq_conf)));
(void)(rte_memcpy(&eth_conf->tx_adv_conf.vmdq_dcb_tx_conf, &tx_conf,
    sizeof(tx_conf)));
if (rss_enable) {
eth_conf->rxmode.mq_mode= ETH_MQ_RX_VMDQ_DCB_RSS;
eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
ETH_RSS_UDP |
ETH_RSS_TCP |
ETH_RSS_SCTP;
}
return 0;
}
......
Once the network port has been initialized using the correct VMDQ and DCB values, the
initialization of the port’s RX and TX hardware rings is performed similarly to that in the L2
Forwarding sample application. See L2 Forwarding Sample Application (in Real and Virtualized
Environments) for more information.
Statistics Display
When run in a linuxapp environment, the VMDQ and DCB Forwarding sample application can
display statistics showing the number of packets read from each RX queue. This is provided
by way of a signal handler for the SIGHUP signal, which simply prints to standard output the
packet counts in grid form. Each row of the output is a single pool with the columns being the
queue number within that pool.
To generate the statistics output, use the following command:
user@host$ sudo killall -HUP vmdq_dcb_app
Please note that the statistics output will appear on the terminal where the vmdq_dcb_app is
running, rather than the terminal from which the HUP signal was sent.
The vhost sample application demonstrates integration of the Data Plane Development Kit
(DPDK) with the Linux* KVM hypervisor by implementing the vhost-net offload API. The sam-
ple application performs simple packet switching between virtual machines based on Media
Access Control (MAC) address or Virtual Local Area Network (VLAN) tag. The splitting of
Ethernet traffic from an external switch is performed in hardware by the Virtual Machine De-
vice Queues (VMDQ) and Data Center Bridging (DCB) features of the Intel® 82599 10 Gigabit
Ethernet Controller.
Testing steps
This section shows the steps to test a typical PVP case with this vhost-switch sample, where
packets are first received from the physical NIC port and enqueued to the VM's Rx queue.
Through the guest testpmd's default forwarding mode (io forward), those packets are put into the
Tx queue. The vhost-switch example, in turn, gets the packets and puts them back to the same
physical NIC port.
Build
Follow the Getting Started Guide for Linux on generic info about environment setup and build-
ing DPDK from source.
In this example, you need to build DPDK both on the host and inside the guest. You also need to
build this example.
export RTE_SDK=/path/to/dpdk_source
export RTE_TARGET=x86_64-native-linuxapp-gcc
cd ${RTE_SDK}/examples/vhost
make
Check the Parameters section for explanations of what those parameters mean.
Start the VM
Note: For basic vhost-user support, QEMU 2.2 (or above) is required. Some specific features may
need a higher version, such as QEMU 2.7 (or above) for the reconnect feature.
Make sure you have DPDK built inside the guest. Also make sure the corresponding virtio-net
PCI device is bound to a uio driver, which could be done by:
modprobe uio_pci_generic
$RTE_SDK/usertools/dpdk-devbind.py -b=uio_pci_generic 0000:00:04.0
Inject packets
When a virtio-net device is connected to vhost-switch, a VLAN tag starting at 1000 is assigned to
it. If you configure your packet generator with the right MAC and VLAN tag, you should see the
following log from the vhost-switch console, which means it is working:
VHOST_DATA: (0) mac 52:54:00:00:00:14 and vlan 1000 registered
Parameters
--mergeable 0|1 Set 0/1 to disable/enable the mergeable Rx feature. It is disabled by default.
--stats interval The stats parameter controls the printing of virtio-net device statistics. The
parameter specifies an interval (in units of seconds) at which to print statistics, with an interval
of 0 seconds disabling statistics.
--rx-retry 0|1 The rx-retry option enables/disables enqueue retries when the guest's Rx queue
is full. This feature resolves packet loss that is observed at high data rates, by allowing the
receive path to delay and retry. This option is enabled by default.
--rx-retry-num num The rx-retry-num option specifies the number of retries on an Rx burst; it
takes effect only when rx retry is enabled. The default value is 4.
--rx-retry-delay msec The rx-retry-delay option specifies the timeout (in microseconds)
between retries on an RX burst; it takes effect only when rx retry is enabled. The default value is
15.
--dequeue-zero-copy Dequeue zero copy is enabled when this option is given.
--vlan-strip 0|1 The VLAN strip option has been removed, because different NICs have different
behaviors when disabling VLAN strip. Such a feature, which heavily depends on hardware, should
be removed from this example to reduce confusion. Now, VLAN strip is always enabled and cannot
be disabled.
Common Issues
• QEMU fails to allocate memory on hugetlbfs, with an error like the following:
file_ram_alloc: can't mmap RAM pages: Cannot allocate memory
When running QEMU the above error indicates that it has failed to allocate memory for
the Virtual Machine on the hugetlbfs. This is typically due to insufficient hugepages being
free to support the allocation request. The number of free hugepages can be checked as
follows:
cat /sys/kernel/mm/hugepages/hugepages-<pagesize>/nr_hugepages
The command above indicates how many hugepages are free to support QEMU’s allo-
cation request.
• vhost-user will not work with QEMU without the -mem-prealloc option
The current implementation works properly only when the guest memory is pre-allocated.
• vhost-user will not work with a QEMU version without shared memory mapping:
Make sure share=on QEMU option is given.
• Failed to build DPDK in VM
Make sure “-cpu host” QEMU option is given.
Introduction
The Netmap compatibility library provides a minimal set of APIs to give programs written
against the Netmap APIs the ability to be run, with minimal changes to their source code,
using the DPDK to perform the actual packet I/O.
Since Netmap applications use regular system calls, like open(), ioctl() and mmap() to
communicate with the Netmap kernel module performing the packet I/O, the compat_netmap
library provides a set of similar APIs to use in place of those system calls, effectively turning a
Netmap application into a DPDK application.
The provided library is currently minimal and doesn’t support all the features that Netmap
supports, but is enough to run simple applications, such as the bridge example detailed below.
Knowledge of Netmap is required to understand the rest of this section. Please refer to the
Netmap distribution for details about Netmap.
Available APIs
The library provides the following drop-in replacements for system calls usually used in Netmap
applications:
• rte_netmap_close()
• rte_netmap_ioctl()
• rte_netmap_open()
• rte_netmap_mmap()
• rte_netmap_poll()
They use the same signature as their libc counterparts, and can be used as drop-in replace-
ments in most cases.
Caveats
Given the difference between the way Netmap and the DPDK approach packet I/O, there are
caveats and limitations to be aware of when trying to use the compat_netmap library; the
most important of these are listed below. These may change as the library is updated:
• Any system call that can potentially affect file descriptors cannot be used with a descriptor
returned by the rte_netmap_open() function.
Note that:
• The rte_netmap_mmap() function merely returns the address of a DPDK memzone.
The address, length, flags, offset, and other arguments are ignored.
• The rte_netmap_poll() function only supports infinite (negative) or zero timeouts.
It effectively turns calls to the poll() system call made in a Netmap application into
polling of the DPDK ports, changing the semantics of the usual POSIX defined poll.
• Not all of Netmap’s features are supported: host rings, slot flags and so on are not
supported or are simply not relevant in the DPDK model.
• The Netmap manual page states that “a device obtained through /dev/netmap also sup-
ports the ioctl supported by network devices”. This is not the case with this compatibility
layer.
• The Netmap kernel module exposes a sysfs interface to change some internal parame-
ters, such as the size of the shared memory region. This interface is not available when
using this compatibility layer.
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for Linux for possible RTE_TARGET values.
3. Build the application:
make
where,
• -i INTERFACE: Interface (DPDK port number) to use.
If a single -i parameter is given, the interface will send back all the traffic it receives. If
two -i parameters are given, the two interfaces form a bridge, where traffic received on
one interface is replicated and sent to the other interface.
For example, to run the application in a linuxapp environment using port 0 and 2:
./build/bridge [EAL options] -- -i 0 -i 2
Refer to the DPDK Getting Started Guide for Linux for general information on running applica-
tions and the Environment Abstraction Layer (EAL) options.
Note that unlike a traditional bridge or the l2fwd sample application, no MAC address changes
are done on the frames. Do not forget to take this into account when configuring a traffic
generator and testing this sample application.
Application overview
The Internet Protocol (IP) Pipeline application is intended to be a vehicle for rapid development
of packet processing applications running on multi-core CPUs.
The application provides a library of reusable functional blocks called pipelines. These
pipelines can be seen as prefabricated blocks that can be instantiated and inter-connected
through packet queues to create complete applications (super-pipelines).
Pipelines are created and inter-connected through the application configuration file. By using
different configuration files, different applications are effectively created, therefore this appli-
cation can be seen as an application generator. The configuration of each pipeline can be
updated at run-time through the application Command Line Interface (CLI).
Main application components are:
A Library of reusable pipelines
• Each pipeline represents a functional block, e.g. flow classification, firewall, routing,
master, etc.
• Each pipeline type can be instantiated several times in the same application, with each
instance configured separately and mapped to a single CPU core. Each CPU core can
run one or several pipeline instances, which can be of the same or different types.
• Pipeline instances are inter-connected through packet queues (for packet processing)
and message queues (for run-time configuration).
• Pipelines are implemented using DPDK Packet Framework.
• More pipeline types can always be built and added to the existing pipeline types.
Design goals
Rapid development
This application enables rapid development through quick connectivity of standard components
called pipelines. These components are built using DPDK Packet Framework and encapsulate
packet processing features at different levels: ports, tables, actions, pipelines and complete
applications.
Pipeline instances are instantiated, configured and inter-connected through low complexity
configuration files loaded during application initialization. Each pipeline instance is mapped to
a single CPU core, with each CPU core able to run one or multiple pipeline instances of same
or different types. By loading a different configuration file, a different application is effectively
started.
Flexibility
The mapping of pipeline instances to CPU cores can be reshuffled through the configuration
file. One or several pipeline instances can be mapped to the same CPU core.
Fig. 3.23: Example of moving pipeline instances across different CPU cores
There is some degree of flexibility for moving tables from one pipeline instance to another.
Based on the configuration arguments passed to each pipeline instance in the configuration
file, specific tables can be enabled or disabled. This way, a specific table can be “moved” from
pipeline instance A to pipeline instance B by simply disabling its associated functionality for
pipeline instance A while enabling it for pipeline instance B.
Due to the requirement to have a simple syntax for the configuration file, moving tables across
different pipeline instances is not as flexible as the mapping of pipeline instances to CPU cores,
or the mapping of actions to pipeline tables. Complete flexibility in moving tables from one
pipeline to another could be achieved through a complex pipeline description language that would
detail the structural elements of the pipeline (ports, tables and actions) and their connectivity,
resulting in complex syntax for the configuration file, which is not acceptable. Good configuration
file readability through simple syntax is preferred.
Example: the IP routing pipeline can run the routing function only (with ARP function run by
a different pipeline instance), or it can run both the routing and ARP functions as part of the
same pipeline instance.
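As an illustration of this mechanism, a routing pipeline instance could be configured along the following lines (a sketch; n_arp_entries is shown here as the knob that enables the ARP table, check the pipeline documentation for the exact parameter name and semantics):

; Routing pipeline running both the routing and ARP functions (sketch)
[PIPELINE3]
type = ROUTING
core = 1
pktq_in = RXQ0.0
pktq_out = TXQ0.0
n_arp_entries = 1024   ; > 0 enables the ARP table in this instance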
When it makes sense, packet processing actions can be moved from one pipeline instance
to another. Based on the configuration arguments passed to each pipeline instance in the
configuration file, specific actions can be enabled or disabled. This way, a specific action can
be “moved” from pipeline instance A to pipeline instance B by simply disabling its associated
functionality for pipeline instance A while enabling it for pipeline instance B.
Example: The flow actions of accounting, traffic metering, application identification, NAT, etc
can be run as part of the flow classification pipeline instance or split across several flow ac-
tions pipeline instances, depending on the number of flow instances and their compute require-
ments.
Fig. 3.25: Example of moving actions across different tables and pipeline instances
Performance
Performance of the application is the highest priority requirement. Flexibility is not provided at
the expense of performance.
The purpose of flexibility is to provide an incremental development methodology that allows
monitoring the performance evolution:
• Apply incremental changes in the configuration (e.g. mapping on pipeline instances to
CPU cores) in order to identify the configuration providing the best performance for a
given application;
• Add more processing incrementally (e.g. by enabling more actions for specific pipeline in-
stances) until the application is feature complete while checking the performance impact
at each step.
Debug capabilities
-p PORT_MASK
• Argument: Hexadecimal mask of NIC port IDs to be used by the application. The first port
enabled in this mask will be referenced as LINK0 in the application configuration
file, the next port as LINK1, etc.
-l LOG_LEVEL
• Optional: Yes
• Default: 1 (High priority)
• Argument: Log level to determine which application messages are to be printed to stan-
dard output. Available log levels are: 0 (None), 1 (High priority), 2 (Low priority). Only
application messages whose priority is higher than or equal to the application log level
will be printed.
Application stages
Configuration
During this stage, the application configuration file is parsed and its content is loaded into the
application data structures. In case of any configuration file parse error, an error message is
displayed and the application is terminated. Please refer to the Configuration file syntax for a
description of the application configuration file format.
Configuration checking
In the absence of any parse errors, the loaded content of application data structures is checked
for overall consistency. In case of any configuration check error, an error message is displayed
and the application is terminated.
Initialization
During this stage, the application resources are initialized and the handles to access them are
saved into the application data structures. In case of any initialization error, an error message
is displayed and the application is terminated.
The typical resources to be initialized are: pipeline instances, buffer pools, links (i.e. network
interfaces), hardware device RX/TX queues, software queues, traffic management devices,
etc.
Run-time
Each CPU core runs the pipeline instances assigned to it in time sharing mode and in round
robin order:
1. Packet processing task: The pipeline run-time code is typically a packet processing task
built on top of the DPDK Packet Framework rte_pipeline library, which reads bursts of packets
from the pipeline input ports, performs table lookups and executes the identified actions
for all tables in the pipeline, with packets eventually written to the pipeline output ports or
dropped.
2. Message handling task : Each CPU core will also periodically execute the message han-
dling code of each of the pipelines mapped to it. The pipeline message handling code is
processing the messages that are pending in the pipeline input message queues, which
are typically sent by the master CPU core for the on-the-fly pipeline configuration: check
that pipeline is still alive (ping), add/delete entries in the pipeline tables, get statistics, etc.
The frequency of executing the message handling code is usually much smaller than the
frequency of executing the packet processing work (a rough sketch of this per-core loop is shown
after this list).
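A rough sketch of this per-core loop, under the assumption that each pipeline instance is driven by rte_pipeline_run() (the context structure and message handling helper are hypothetical):

#include <stdint.h>
#include <rte_pipeline.h>

/* Hypothetical per-core context (not the application's actual type). */
struct core_ctx {
    uint32_t n_pipelines;
    struct rte_pipeline *pipelines[8];
};

/* Hypothetical helper draining the pipeline's input message queues. */
static void handle_pipeline_msgs(struct rte_pipeline *p);

/* Time-share all pipeline instances mapped to this core, running their
 * message handling code much less often than their packet processing code. */
static int
core_main_loop(void *arg)
{
    struct core_ctx *ctx = arg;
    uint64_t iter;
    uint32_t i;

    for (iter = 0; ; iter++) {
        for (i = 0; i < ctx->n_pipelines; i++) {
            /* packet processing task */
            rte_pipeline_run(ctx->pipelines[i]);

            /* message handling task, run only periodically */
            if ((iter & 0x3FF) == 0)
                handle_pipeline_msgs(ctx->pipelines[i]);
        }
    }
    return 0;
}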
Please refer to the PIPELINE section for more details about the application pipeline module
encapsulation.
Syntax overview
The syntax of the configuration file is designed to be simple, which favors readability. The
configuration file is parsed using the DPDK library librte_cfgfile, which supports simple INI file
format for configuration files.
As a result, the configuration file is split into several sections, with each section containing one
or more entries. The scope of each entry is its section, and each entry specifies a variable that
is assigned a specific value. Any text after the ; character is considered a comment and is
therefore ignored.
The following are application specific: number of sections, name of each section, number of
entries of each section, name of the variables used for each section entry, the value format
(e.g. signed/unsigned integer, string, etc) and range of each section entry variable.
Generic example of configuration file section:
[<section_name>]
<variable_name_1> = <value_1>
; ...
<variable_name_N> = <value_N>
Application resources that are referenced in other sections are created implicitly, with all of
their parameters having their default values. Optionally, a section with the resource name can be
added to the configuration file to fine tune some or all of the resource parameters.
Example: in section PIPELINE3, variable pktq_in includes SWQ5 as part of its list,
which results in defining a software queue named SWQ5; when there is no SWQ5 section
present in the configuration file, SWQ5 gets registered with default parameters.
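A sketch of this mechanism with illustrative values:

[PIPELINE3]
type = PASS-THROUGH
core = 2
pktq_in = SWQ5          ; referencing SWQ5 implicitly creates it
pktq_out = TXQ1.0

; Optional: fine tune SWQ5 instead of relying on its default parameters
[SWQ5]
size = 4096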
PIPELINE section
For example, CPU socket 0, core 0, hyper-thread 0 can be specified in the core entry as 0, c0 or s0c0.
MEMPOOL section
LINK section
RXQ section
TXQ section
SWQ section
TM section
KNI section
SOURCE section
SINK section
Currently, there are no parameters to be passed to a sink device, so SINK section is not
allowed.
MSGQ section
EAL section
The application generates the EAL parameters rather than reading them from the command
line.
The CPU core mask parameter is generated based on the core entry of all PIPELINE sections.
All the other EAL parameters can be set from this section of the application configuration file.
Pipeline module
Pipeline type
Each pipeline type is made up of a back-end and a front-end. The back-end represents the
packet processing engine of the pipeline, typically implemented using the DPDK Packet Frame-
work libraries, which reads packets from the input packet queues, handles them and eventually
writes them to the output packet queues or drops them. The front-end represents the run-time
configuration interface of the pipeline, which is exposed as CLI commands. The front-end
communicates with the back-end through message queues.
Each pipeline instance is a standalone block that does not have visibility into the other pipeline
instances or the application-level pipeline inter-connectivity. In some cases, it is useful for a
pipeline instance to get application level information related to pipeline connectivity, such as to
identify the output link (e.g. physical NIC port) to which one of its output ports is connected,
either directly or indirectly by traversing other pipeline instances. This operation is referred to
as tracking.
Tracking can be successful or unsuccessful. Typically, tracking for a specific pipeline instance
is successful when each one of its input ports can be mapped to a single output port, meaning
that all packets read from the current input port can only go out on a single output port. De-
pending on the pipeline type, some exceptions may be allowed: a small portion of the packets,
considered exception packets, are sent out on an output port that is pre-configured for this
purpose.
For the pass-through pipeline type, the tracking is always successful. For pipeline types such as
flow classification, firewall or routing, the tracking is only successful when the number of output
ports for the current pipeline instance is 1.
This feature is used by the IP routing pipeline for adding/removing implicit routes every time a
link is brought up/down.
Table copies
Fast table copy: the table copy used by the pipeline for the packet processing task; it is updated
through messages and its data structures are optimized for the lookup operation.
Slow table copy: the table copy used by the configuration layer, typically updated through CLI
commands and kept in sync with the fast copy (its update triggers the fast copy update). It is
required for executing advanced table queries without impacting the packet processing task;
therefore the slow copy is typically organized using different criteria than the fast copy.
Examples:
• Flow classification: Search through current set of flows (e.g. list all flows with a specific
source IP address);
• Firewall: List rules in descending order of priority;
• Routing table: List routes sorted by prefix depth and their type (local, remote, default);
• ARP: List entries sorted per output interface.
Packet meta-data
Packet meta-data field offsets provided as argument to pipeline instances essentially define
the data structure for the packet meta-data used by the current application use-case. It is
very useful to put this data structure in the configuration file as a comment, in order to facilitate
the readability of the configuration file.
The reason for using field offsets to define the data structure for the packet meta-data is the
C language limitation of not being able to define data structures at run-time. Feature to
consider: have the configuration file parser automatically generate and print the data structure
defining the packet meta-data for the current application use-case.
Packet meta-data typically contains:
1. Pure meta-data: intermediate data per packet that is computed internally, passed be-
tween different tables of the same pipeline instance (e.g. lookup key for the ARP table
is obtained from the routing table), or between different pipeline instances (e.g. flow ID,
traffic metering color, etc);
2. Packet fields: typically, packet header fields that are read directly from the packet, or read
from the packet and saved (duplicated) as a working copy at a different location within
the packet meta-data (e.g. Diffserv 5-tuple, IP destination address, etc).
Several strategies are used to design the packet meta-data, as described in the next subsec-
tions.
Store packet meta-data in a different cache line than the packet headers This approach is
able to support protocols with variable header length, like MPLS, where the offset of the IP
header from the start of the packet (and, implicitly, the offset of the IP header in the packet
buffer) is not fixed. Since the pipelines typically require the specification of a fixed offset to the
packet fields (e.g. Diffserv 5-tuple, used by the flow classification pipeline, or the IP destination
address, used by the IP routing pipeline), the workaround is to have the packet RX pipeline
copy these fields at fixed offsets within the packet meta-data.
As this approach duplicates some of the packet fields, it requires accessing more cache lines
per packet for filling in selected packet meta-data fields (on RX), as well as flushing selected
packet meta-data fields into the packet (on TX).
Example:
; struct app_pkt_metadata {
; uint32_t ip_da;
; uint32_t hash;
; uint32_t flow_id;
; uint32_t color;
; } __attribute__((__packed__));
;
[PIPELINE1]
; Packet meta-data offsets
ip_da_offset = 0; Used by: routing
hash_offset = 4; Used by: RX, flow classification
flow_id_offset = 8; Used by: flow classification, flow actions
color_offset = 12; Used by: flow actions, routing
Overlay the packet meta-data in the same cache line with the packet headers This approach
minimizes the number of cache lines accessed per packet by storing the packet
meta-data in the same cache line as the packet headers. To enable this strategy, either some
headroom is reserved for meta-data at the beginning of the packet headers cache line (e.g. if
16 bytes are needed for meta-data, then the packet headroom can be set to 128+16 bytes, so
that NIC writes the first byte of the packet at offset 16 from the start of the first packet cache
line), or meta-data is reusing the space of some packet headers that are discarded from the
packet (e.g. input Ethernet header).
Example:
; struct app_pkt_metadata {
; uint8_t headroom[RTE_PKTMBUF_HEADROOM]; /* 128 bytes (default) */
; union {
; struct {
; struct ether_hdr ether; /* 14 bytes */
; struct qinq_hdr qinq; /* 8 bytes */
; };
; struct {
; uint32_t hash;
; uint32_t flow_id;
; uint32_t color;
; };
; };
; struct ipv4_hdr ip; /* 20 bytes */
; } __attribute__((__packed__));
;
[PIPELINE2]
; Packet meta-data offsets
qinq_offset = 142; Used by: RX, flow classification
ip_da_offset = 166; Used by: routing
Firewall pipeline:
• Table: ACL
  – Key = n-tuple (source: pkt headers)
  – Data = none
• Table actions: 1. Allow/Drop
• CLI commands: 1. Ping; 2. Stats; 3. Rule add/update/delete; 4. Default rule add/update/delete
The pipeline specific CLI commands are part of the pipeline type front-end.
The Test Pipeline application illustrates the use of the DPDK Packet Framework tool suite. Its
purpose is to demonstrate the performance of single-table DPDK pipelines.
Overview
The -c EAL CPU core mask option has to contain exactly 3 CPU cores. The first CPU core in
the core mask is assigned for core A, the second for core B and the third for core C.
The PORTMASK parameter must contain 2 or 4 ports.
Table 3.21 describes the table types used and how they are populated.
The hash tables are pre-populated with 16 million keys. For hash tables, the following param-
eters can be selected:
• Configurable key size implementation or fixed (specialized) key size implementa-
tion (e.g. hash-8-ext or hash-spec-8-ext). The key size specialized implementations
are expected to provide better performance for 8-byte and 16-byte key sizes, while the
key-size-non-specialized implementation is expected to provide better performance for
larger key sizes;
• Key size (e.g. hash-spec-8-ext or hash-spec-16-ext). The available options are 8, 16
and 32 bytes;
• Table type (e.g. hash-spec-16-ext or hash-spec-16-lru). The available options are ext
(extendable bucket) or lru (least recently used).
Input Traffic
Regardless of the table type used for the core B pipeline, the same input traffic can be used
to hit all table entries with uniform distribution, which results in uniform distribution of packets
sent out on the set of output NIC ports. The profile for input traffic is TCP/IPv4 packets with:
• destination IP address A.B.C.D with A fixed to 0 and B, C, D random
• source IP address fixed to 0.0.0.0
• destination TCP port fixed to 0
• source TCP port fixed to 0
The distributor sample application is a simple example of packet distribution to cores using the
Data Plane Development Kit (DPDK).
Overview
The distributor application performs the distribution of packets that are received on an
RX_PORT to different cores. When processed by the cores, the destination port of a packet
is the port from the enabled port mask adjacent to the one on which the packet was received,
that is, if the first four ports are enabled (port mask 0xf), ports 0 and 1 RX/TX into each other,
and ports 2 and 3 RX/TX into each other.
This application can be used to benchmark performance using the traffic generator as shown
in the figure below.
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make
where,
• -p PORTMASK: Hexadecimal bitmask of ports to configure
2. To run the application in linuxapp environment with 10 lcores, 4 ports, issue the com-
mand:
$ ./build/distributor_app -c 0x4003fe -n 4 -- -p f
3. Refer to the DPDK Getting Started Guide for general information on running applications
and the Environment Abstraction Layer (EAL) options.
Explanation
The distributor application consists of three types of threads: a receive thread (lcore_rx()), a set
of worker threads (lcore_worker()) and a transmit thread (lcore_tx()). How these threads work
together is shown in Fig. 3.28 below. The main() function launches threads of these three
types. Each thread has a while loop which does the processing and which is terminated
only upon SIGINT or Ctrl+C. The receive and transmit threads communicate using a software
ring (rte_ring structure).
The receive thread receives the packets using rte_eth_rx_burst() and gives them to the distributor
(using the rte_distributor_process() API), which is called in the context of the receive thread
itself. The distributor distributes the packets to worker threads based on the tagging of the
packet, indicated by the hash field in the mbuf. For IP traffic, this field is automatically filled by
the NIC with the “usr” hash value for the packet, which works as a per-flow tag.
More than one worker thread can exist as part of the application, and these worker threads
do simple packet processing by requesting packets from the distributor, doing a simple XOR
operation on the input port mbuf field (to indicate the output port which will be used later for
packet transmission) and then finally returning the packets back to the distributor in the RX
thread.
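As a small illustration of the per-packet work described above (a sketch, not the sample's actual
code), the adjacent-port mapping amounts to flipping the lowest bit of the input port recorded in
the mbuf:

#include <rte_mbuf.h>

/* With a port mask such as 0xf, ports 0/1 and 2/3 form RX/TX pairs. */
static inline void
worker_process_pkt(struct rte_mbuf *m)
{
        m->port ^= 1;   /* output port = input port XOR 1 */
}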
Meanwhile, the receive thread will call the distributor API rte_distributor_returned_pkts() to get
the packets processed, and will enqueue them to a ring for transfer to the TX thread for trans-
mission on the output port. The transmit thread will dequeue the packets from the ring and
transmit them on the output port specified in packet mbuf.
Users who wish to terminate the running of the application have to press ctrl+C (or send SIG-
INT to the app). Upon this signal, a signal handler provided in the application will terminate all
running threads gracefully and print final statistics to the user.
Debug logging is provided as part of the application; the user needs to uncomment the line
“#define DEBUG” at the start of main.c to enable debug logs.
Statistics
Upon SIGINT (or Ctrl+C), the print_stats() function displays the count of packets processed at
the different stages in the application.
Application Initialization
Command line parsing is done in the same way as it is done in the L2 Forwarding Sample
Application. See Command Line Arguments.
Mbuf pool initialization is done in the same way as it is done in the L2 Forwarding Sample
Application. See Mbuf Pool Initialization.
Driver Initialization is done in same way as it is done in the L2 Forwarding Sample Application.
See Driver Initialization.
RX queue initialization is done in the same way as it is done in the L2 Forwarding Sample
Application. See RX Queue Initialization.
TX queue initialization is done in the same way as it is done in the L2 Forwarding Sample
Application. See TX Queue Initialization.
Introduction
Applications running in Virtual Environments have an abstract view of the underlying hardware
on the Host; in particular, applications cannot see the binding of virtual to physical hardware.
When looking at CPU resourcing, the pinning of Virtual CPUs (vCPUs) to Host Physical
CPUs (pCPUs) is not apparent to an application and this pinning may change over time. Furthermore,
Operating Systems on virtual machines do not have the ability to govern their own
power policy; the Machine Specific Registers (MSRs) for enabling P-State transitions are not
exposed to Operating Systems running on Virtual Machines (VMs).
The Virtual Machine Power Management solution shows an example of how a DPDK application
can indicate its processing requirements, using VM-local information only (vCPU/lcore), to
a Host based Monitor which is responsible for accepting requests for frequency changes for a
vCPU, translating the vCPU to a pCPU via libvirt and effecting the change in frequency.
The solution is comprised of two high-level components:
1. Example Host Application
Using a Command Line Interface (CLI) for VM->Host communication channel management
allows adding channels to the Monitor, setting and querying the vCPU to pCPU
pinning, and inspecting and manually changing the frequency for each CPU. The CLI runs on
a single lcore while the thread responsible for managing VM requests runs on a second
lcore.
VM requests arriving on a channel for frequency changes are passed to the librte_power
ACPI cpufreq sysfs based library. The Host Application relies on both qemu-kvm and
libvirt to function.
2. librte_power for Virtual Machines
Using an alternate implementation of the librte_power API, requests for frequency
changes are forwarded to the host monitor rather than the ACPI cpufreq sysfs interface
used on the host.
The l3fwd-power application will use this implementation when deployed on a VM (see
L3 Forwarding with Power Management Sample Application).
Overview
Performance Considerations
While the Haswell Microarchitecture allows for independent power control for each core, earlier
Microarchitectures do not offer such fine grained control. When deployed on pre-Haswell
platforms, greater care must be taken in selecting which cores are assigned to a VM; for
instance, a core will not scale down until its sibling is similarly scaled.
Configuration
BIOS
Enhanced Intel SpeedStep® Technology must be enabled in the platform BIOS if the
power management feature of DPDK is to be used. Otherwise, the sys file folder
/sys/devices/system/cpu/cpu0/cpufreq will not exist, and the CPU frequency-based power
management cannot be used. Consult the relevant BIOS documentation to determine how
these settings can be accessed.
The Host OS must also have the acpi_cpufreq module installed; in some cases the intel_pstate
driver may be the default Power Management environment. To enable acpi_cpufreq and
disable intel_pstate, add the following to the grub Linux command line:
intel_pstate=disable
A single controller of type virtio-serial is created; up to 32 channels can be associated with a
single controller, and multiple controllers can be specified. The convention is to use the name
of the VM in the host path {vm_name} and to increment {channel_num} for each channel;
likewise the port value {N} must be incremented for each channel.
Each channel will appear on the host under the directory /tmp/powermonitor/, which must first
be created and given qemu permissions:
mkdir /tmp/powermonitor/
chown qemu:qemu /tmp/powermonitor
Note that files and directories within /tmp are generally removed upon rebooting the host and
the above steps may need to be carried out after each reboot.
The serial device as it appears on a VM is configured with the target element attribute
name and must be in the form of virtio.serial.port.poweragent.{vm_channel_num}, where
vm_channel_num is typically the lcore channel to be used in DPDK VM applications.
Each channel on a VM will be present at /dev/virtio-
ports/virtio.serial.port.poweragent.{vm_channel_num}
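The libvirt channel definition that these conventions describe typically looks similar to the sketch
below; the exact XML elements and attributes should be confirmed against the libvirt
documentation, and {vm_name}, {channel_num} and {N} follow the conventions stated above:

<channel type='unix'>
  <source mode='bind' path='/tmp/powermonitor/{vm_name}.{channel_num}'/>
  <target type='virtio' name='virtio.serial.port.poweragent.{vm_channel_num}'/>
  <address type='virtio-serial' controller='0' bus='0' port='{N}'/>
</channel>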
Compiling
1. export RTE_SDK=/path/to/rte_sdk
2. cd ${RTE_SDK}/examples/vm_power_manager
3. make
Running
The application does not have any specific command line options other than EAL:
./build/vm_power_mgr [EAL options]
The application requires exactly two cores to run, one core is dedicated to the CLI, while the
other is dedicated to the channel endpoint monitor, for example to run on cores 0 & 1 on a
system with 4 memory channels:
./build/vm_power_mgr -c 0x3 -n 4
After successful initialization the user is presented with VM Power Manager CLI:
vm_power>
When a {vm_name} is specified with the add_vm command, a lookup is performed with libvirt
to ensure that the VM exists. {vm_name} is used as a unique identifier to associate channels
with a particular VM and for executing operations on a VM within the CLI. VMs do not have to
be running in order to add them.
A number of commands can be issued via the CLI in relation to VMs:
Add a Virtual Machine identified by {vm_name} to the VM Power Manager:
add_vm {vm_name}
Remove a Virtual Machine identified by {vm_name} from the VM Power Manager:
rm_vm {vm_name}
Add communication channels for the specified VM, the virtio channels must be en-
abled in the VM configuration(qemu/libvirt) and the associated VM must be active.
{list} is a comma-separated list of channel numbers to add, using the keyword ‘all’
will attempt to add all channels for the VM:
add_channels {vm_name} {list}|all
Print to the CLI the information on the specified VM: the number of vCPUs, the pinning to
pCPU(s) as a bit mask, any communication channels associated with the VM, and the status
of each channel:
show_vm {vm_name}
Set the binding of Virtual CPU on VM with name {vm_name} to the Physical CPU
mask:
set_pcpu_mask {vm_name} {vcpu} {pcpu}
Manual control and inspection can also be carried out in relation to CPU frequency scaling:
Get the current frequency for each core specified in the mask:
show_cpu_freq_mask {mask}
Set the current frequency for the cores specified in {core_mask} by scaling each
up/down/min/max:
set_cpu_freq {core_mask} up|down|min|max
Set the current frequency for the specified core by scaling up/down/min/max:
set_cpu_freq {core_num} up|down|min|max
For compiling and running l3fwd-power, see L3 Forwarding with Power Management Sample
Application.
A guest CLI is also provided for validating the setup.
For both l3fwd-power and guest CLI, the channels for the VM must be monitored by the host
application using the add_channels command on the host.
Compiling
1. export RTE_SDK=/path/to/rte_sdk
2. cd ${RTE_SDK}/examples/vm_power_manager/guest_cli
3. make
Running
The application does not have any specific command line options other than EAL:
./build/guest_vm_power_mgr [EAL options]
The application for example purposes uses a channel for each lcore enabled, for example to
run on cores 0,1,2,3 on a system with 4 memory channels:
./build/guest_vm_power_mgr -c 0xf -n 4
After successful initialization the user is presented with VM Power Manager Guest CLI:
vm_power(guest)>
To change the frequency of a lcore, use the set_cpu_freq command. Where {core_num} is the
lcore and channel to change frequency by scaling up/down/min/max.
set_cpu_freq {core_num} up|down|min|max
The TEP (Tunnel End point) termination sample application simulates a VXLAN Tunnel Endpoint
(VTEP) termination in DPDK, which is used to demonstrate the offload and filtering
capabilities of the Intel® XL710 10/40 Gigabit Ethernet Controller for VXLAN packets. This sample
uses the basic virtio devices management mechanism from the vhost example, and also uses the
us-vHost interface and the tunnel filtering mechanism to direct specific traffic to a specific VM.
In addition, this sample is also designed to show how tunneling protocols can be handled.
Background
With virtualization, overlay networks allow a network structure to be built or imposed across
physical nodes which is abstracted away from the actual underlying physical network
connections. This allows network isolation, QoS, etc. to be provided on a per client basis.
In a typical setup, the network overlay tunnel is terminated at the Virtual/Tunnel End Point
(VEP/TEP). The TEP is normally located at the physical host level ideally in the software switch.
Due to processing constraints and the inevitable bottleneck that the switch becomes, the ability
to offload overlay support features becomes an important requirement. Intel® XL710 10/40
Gigabit Ethernet network card provides hardware filtering and offload capabilities to support
overlay networks implementations such as MAC in UDP and MAC in GRE.
The DPDK TEP termination sample code demonstrates the offload and filtering capabilities of
Intel® XL710 10/40 Gigabit Ethernet Controller for VXLAN packet.
The sample code is based on vhost library. The vhost library is developed for user space
Ethernet switch to easily integrate with vhost functionality.
The sample supports the following:
• Tunneling packet recognition.
• The UDP tunneling port is configurable.
• Directing incoming traffic to the correct queue based on the tunnel filter type. The
supported filter types are listed below.
– Inner MAC and VLAN and tenant ID
– Inner MAC and tenant ID, and Outer MAC
– Inner MAC and tenant ID
The tenant ID will be assigned from a static internal table based on the us-vhost device
ID. Each device will receive a unique device ID. The inner MAC will be learned by the first
packet transmitted from a device.
• Decapsulation of RX VXLAN traffic. This is a software only operation.
• Encapsulation of TX VXLAN traffic. This is a software only operation.
• Inner IP and inner L4 checksum offload.
• TSO offload support for tunneling packet.
The following figure shows the framework of the TEP termination sample application based on
DPDK vhost lib.
Supported Distributions
The example in this section has been validated with the following distributions:
• Fedora* 18
• Fedora* 19
• Fedora* 20
3. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
4. Build the application:
cd ${RTE_SDK}
make config ${RTE_TARGET}
make install ${RTE_TARGET}
cd ${RTE_SDK}/examples/tep_termination
make
Note: Please note the huge-dir parameter instructs the DPDK to allocate its memory from the
2 MB page hugetlbfs.
Parameters
Filter Type.
The filter-type option is used to specify which filter type is used to filter UDP tunneling packets to
a specified queue. The default value is 1, which means the filter type of inner MAC and tenant
ID is used.
user@target:~$ ./build/app/tep_termination -c f -n 4 --huge-dir /mnt/huge --
--nb-devices 2 --udp-port 4789 --filter-type 1
TX Checksum.
The tx-checksum option is used to enable or disable the inner header checksum offload. The
default value is 0, which means the checksum offload is disabled.
user@target:~$ ./build/app/tep_termination -c f -n 4 --huge-dir /mnt/huge --
--nb-devices 2 --tx-checksum
Decapsulation option.
The decap option is used to enable or disable decapsulation operation for received VXLAN
packet. The default value is 1.
user@target:~$ ./build/app/tep_termination -c f -n 4 --huge-dir /mnt/huge --
--nb-devices 4 --udp-port 4789 --decap 1
Encapsulation option.
The encap option is used to enable or disable encapsulation operation for transmitted packet.
The default value is 1.
user@target:~$ ./build/app/tep_termination -c f -n 4 --huge-dir /mnt/huge --
--nb-devices 4 --udp-port 4789 --encap 1
For a virtio-net device to receive traffic, the traffic’s Layer 2 header must include the virtio-net
device’s MAC address. The DPDK sample code behaves in a similar manner to a learning
switch in that it learns the MAC address of the virtio-net devices from the first transmitted
packet. On learning the MAC address, the DPDK vhost sample code prints a message with
the MAC address and tenant ID of the virtio-net device. For example:
DATA: (0) MAC_ADDRESS cc:bb:bb:bb:bb:bb and VNI 1000 registered
The above message indicates that device 0 has been registered with MAC address
cc:bb:bb:bb:bb:bb and VNI 1000. Any packets received on the NIC with these values are
placed on the device’s receive queue.
The PTP (Precision Time Protocol) client sample application is a simple example of using the
DPDK IEEE1588 API to communicate with a PTP master clock to synchronize the time on the
NIC and, optionally, on the Linux system.
Note, PTP is a time syncing protocol and cannot be used within DPDK as a time-stamping
mechanism. See the following for an explanation of the protocol: Precision Time Protocol.
Limitations
The PTP sample application is intended as a simple reference implementation of a PTP client
using the DPDK IEEE1588 API. In order to keep the application simple the following assump-
tions are made:
• The first discovered master is the master for the session.
• Only L2 PTP packets are supported.
• Only the PTP v2 protocol is supported.
• Only the slave clock is implemented.
If the command line parameter -T 1 is used the application also synchronizes the PTP PHC
clock with the Linux kernel clock.
To compile the application, export the path to the DPDK source tree and edit the
config/common_linuxapp configuration file to enable IEEE1588:
export RTE_SDK=/path/to/rte_sdk
See the DPDK Getting Started Guide for possible RTE_TARGET values.
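The IEEE1588 support is controlled by a build-time option; the line below shows the option
name as it is expected to appear in config/common_linuxapp (verify against your tree):

CONFIG_RTE_LIBRTE_IEEE1588=y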
Build the application as follows:
# Recompile DPDK.
make install T=$RTE_TARGET
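Then build the sample application itself (the path below assumes the standard examples
layout):

cd ${RTE_SDK}/examples/ptpclient
make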
Refer to DPDK Getting Started Guide for general information on running applications and the
Environment Abstraction Layer (EAL) options.
• -p portmask: Hexadecimal portmask.
• -T 0: Update only the PTP slave clock.
• -T 1: Update the PTP slave clock and synchronize the Linux Kernel to the PTP clock.
Code Explanation
The following sections provide an explanation of the main components of the code.
All DPDK library functions used in the sample code are prefixed with rte_ and are explained
in detail in the DPDK API Documentation.
The main() function performs the initialization and calls the execution threads for each lcore.
The first task is to initialize the Environment Abstraction Layer (EAL). The argc and argv
arguments are provided to the rte_eal_init() function. The value returned is the number
of parsed arguments:
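The call typically takes the following form (shown here as a generic sketch rather than the
sample's exact listing):

int ret = rte_eal_init(argc, argv);
if (ret < 0)
        rte_exit(EXIT_FAILURE, "Error with EAL initialization\n");

/* Skip the arguments consumed by the EAL. */
argc -= ret;
argv += ret;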
The main() function also allocates a mempool to hold the mbufs (Message Buffers) used by the
application:
mbuf_pool = rte_mempool_create("MBUF_POOL",
NUM_MBUFS * nb_ports,
MBUF_SIZE,
MBUF_CACHE_SIZE,
sizeof(struct rte_pktmbuf_pool_private),
rte_pktmbuf_pool_init, NULL,
rte_pktmbuf_init, NULL,
rte_socket_id(),
0);
Mbufs are the packet buffer structure used by DPDK. They are explained in detail in the “Mbuf
Library” section of the DPDK Programmer’s Guide.
The main() function also initializes all the ports using the user defined port_init() function
with the portmask provided by the user:
for (portid = 0; portid < nb_ports; portid++)
if ((ptp_enabled_port_mask & (1 << portid)) != 0) {
if (port_init(portid, mbuf_pool) == 0) {
ptp_enabled_ports[ptp_enabled_port_nb] = portid;
ptp_enabled_port_nb++;
} else {
rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n",
portid);
}
}
Once the initialization is complete, the application is ready to launch a function on an lcore. In
this example lcore_main() is called on a single lcore.
lcore_main();
As we saw above the main() function calls an application function on the available lcores.
The main work of the application is done within the loop:
for (portid = 0; portid < ptp_enabled_port_nb; portid++) {
portid = ptp_enabled_ports[portid];
nb_rx = rte_eth_rx_burst(portid, 0, &m, 1);
if (likely(nb_rx == 0))
continue;
rte_pktmbuf_free(m);
}
Packets are received one by one on the RX ports and, if required, PTP response packets are
transmitted on the TX ports.
If the offload flags in the mbuf indicate that the packet is a PTP packet then the packet is parsed
to determine which type:
if (m->ol_flags & PKT_RX_IEEE1588_PTP)
parse_ptp_frames(portid, m);
PTP parsing
if (eth_type == PTP_PROTOCOL) {
ptp_data.m = m;
ptp_data.portid = portid;
ptp_hdr = (struct ptp_header *)(rte_pktmbuf_mtod(m, char *)
+ sizeof(struct ether_hdr));
switch (ptp_hdr->msgtype) {
case SYNC:
parse_sync(&ptp_data);
break;
case FOLLOW_UP:
parse_fup(&ptp_data);
break;
case DELAY_RESP:
parse_drsp(&ptp_data);
print_clock_info(&ptp_data);
break;
default:
break;
}
}
There are 3 types of packets on the RX path which we must parse to create a minimal imple-
mentation of the PTP slave client:
• SYNC packet.
• FOLLOW UP packet
• DELAY RESPONSE packet.
When we parse the FOLLOW UP packet we also create and send a DELAY_REQUEST
packet. Also, when we parse the DELAY RESPONSE packet and all conditions are met, we
adjust the PTP slave clock.
The performance thread sample application is a derivative of the standard L3 forwarding appli-
cation that demonstrates different threading models.
Overview
For a general description of the L3 forwarding application’s capabilities please refer to the
documentation of the standard application in L3 Forwarding Sample Application.
The performance thread sample application differs from the standard L3 forwarding example
in that it divides the TX and RX processing between different threads, and makes it possible to
assign individual threads to different cores.
Three threading models are considered:
1. When there is one EAL thread per physical core.
2. When there are multiple EAL threads per physical core.
3. When there are multiple lightweight threads per EAL thread.
Since DPDK release 2.0 it is possible to launch applications using the --lcores EAL parameter,
specifying cpu-sets for a physical core. With the performance thread sample application
it is now also possible to assign individual RX and TX functions to different cores.
As an alternative to dividing the L3 forwarding work between different EAL threads the perfor-
mance thread sample introduces the possibility to run the application threads as lightweight
threads (L-threads) within one or more EAL threads.
In order to facilitate this threading model the example includes a primitive cooperative sched-
uler (L-thread) subsystem. More details of the L-thread subsystem can be found in The L-
thread subsystem.
Note: Whilst theoretically possible, it is not anticipated that multiple L-thread schedulers would
be run on the same physical core; this mode of operation should not be expected to yield useful
performance and is considered invalid.
The application is located in the performance-thread folder within the sample applications folder.
1. Go to the example applications folder
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/performance-thread/l3fwd-thread
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Linux Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make
Where:
• -p PORTMASK: Hexadecimal bitmask of ports to configure.
• -P: optional, sets all ports to promiscuous mode so that packets are accepted regardless
of the packet’s Ethernet MAC destination address. Without this option, only packets
with the Ethernet MAC destination address set to the Ethernet address of the port are
accepted.
• --rx (port,queue,lcore,thread)[,(port,queue,lcore,thread)]: the list
of NIC RX ports and queues handled by the RX lcores and threads. The parameters
are explained below.
• --tx (lcore,thread)[,(lcore,thread)]: the list of TX threads identifying the
lcore the thread runs on, and the id of RX thread with which it is associated. The param-
eters are explained below.
• --enable-jumbo: optional, enables jumbo frames.
• --max-pkt-len: optional, maximum packet length in decimal (64-9600).
• --no-numa: optional, disables numa awareness.
• --hash-entry-num: optional, specifies the hash entry number in hex to be setup.
• --ipv6: optional, set it if running ipv6 packets.
• --no-lthreads: optional, disables l-thread model and uses EAL threading model. See
below.
• --stat-lcore: optional, run CPU load stats collector on the specified lcore.
• --parse-ptype: optional, set to use software to analyze packet type. Without this
option, hardware will check the packet type.
The parameters of the --rx and --tx options are:
• --rx parameters
port RX port
queue RX queue that will be read on the specified RX port
lcore Core to use for the thread
thread Thread id (numbered consecutively from 0 to N)
• --tx parameters
lcore Core to use for L3 route match and transmit
thread Id of RX thread to be associated with this TX thread
The l3fwd-thread application allows you to start packet processing in two threading models:
L-Threads (default) and EAL Threads (when the --no-lthreads parameter is used). For
consistency all parameters are used in the same way for both models.
When the L-thread model is used (default option), lcore and thread parameters in --rx/--tx
are used to affinitize threads to the selected scheduler.
For example, the following places every l-thread on different lcores:
l3fwd-thread -c ff -n 2 -- -P -p 3 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)"
The following places RX l-threads on lcore 0 and TX l-threads on lcore 1 and 2 and so on:
l3fwd-thread -c ff -n 2 -- -P -p 3 \
--rx="(0,0,0,0)(1,0,0,1)" \
--tx="(1,0)(2,1)"
When the --no-lthreads parameter is used, the L-threading model is turned off and EAL
threads are used for all processing. EAL threads are enumerated in the same way as L-
threads, but the --lcores EAL parameter is used to affinitize threads to the selected cpu-set
(scheduler). Thus it is possible to place every RX and TX thread on different lcores.
For example, the following places every EAL thread on different lcores:
l3fwd-thread -c ff -n 2 -- -P -p 3 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)" \
--no-lthreads
To affinitize two or more EAL threads to one cpu-set, the EAL --lcores parameter is used.
The following places RX EAL threads on lcore 0 and TX EAL threads on lcore 1 and 2 and so
on:
l3fwd-thread -c ff -n 2 --lcores="(0,1)@0,(2,3)@1" -- -P -p 3 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)" \
--no-lthreads
Examples
For selected scenarios, the L-thread command line configuration of the application and the
corresponding EAL thread command line can be realized as follows:
1. Start every thread on different scheduler (1:1):
l3fwd-thread -c ff -n 2 -- -P -p 3 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)"
Start 2 EAL threads for RX on cpu-set 0, and 2 EAL threads for TX on cpu-set 1:
l3fwd-thread -c ff -n 2 --lcores="(0-1)@0,(2-3)@1" -- -P -p 3 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)" \
--no-lthreads
Explanation
To a great extent the sample application differs little from the standard L3 forwarding appli-
cation, and readers are advised to familiarize themselves with the material covered in the L3
Forwarding Sample Application documentation before proceeding.
The following explanation is focused on the way threading is handled in the performance thread
example.
The performance thread sample application has split the RX and TX functionality into two
different threads, and the RX and TX threads are interconnected via software rings. With
respect to these rings the RX threads are producers and the TX threads are consumers.
On initialization the TX and RX threads are started according to the command line parameters.
The RX threads poll the network interface queues and post received packets to a TX thread
via a corresponding software ring.
The TX threads poll software rings, perform the L3 forwarding hash/LPM match, and assemble
packet bursts before performing burst transmit on the network interface.
As with the standard L3 forward application, burst draining of residual packets is performed
periodically with the period calculated from elapsed time using the timestamps counter.
The diagram below illustrates a case with two RX threads and three TX threads.
Like the EAL thread configuration the application has split the RX and TX functionality into
different threads, and the pairs of RX and TX threads are interconnected via software rings.
On initialization an L-thread scheduler is started on every EAL thread. On all but the master
EAL thread only a dummy L-thread is initially started. The L-thread started on the master
EAL thread then spawns other L-threads on different L-thread schedulers according to the
command line parameters.
The RX threads poll the network interface queues and post received packets to a TX thread
via the corresponding software ring.
The ring interface is augmented by means of an L-thread condition variable that enables the
TX thread to be suspended when the TX ring is empty. The RX thread signals the condition
whenever it posts to the TX ring, causing the TX thread to be resumed.
Additionally the TX L-thread spawns a worker L-thread to take care of polling the software
rings, whilst it handles burst draining of the transmit buffer.
The worker threads poll the software rings, perform L3 route lookup and assemble packet
bursts. If the TX ring is empty the worker thread suspends itself by waiting on the condition
variable associated with the ring.
Burst draining of residual packets, less than the burst size, is performed by the TX thread which
sleeps (using an L-thread sleep function) and resumes periodically to flush the TX buffer.
This design means that L-threads that have no work can yield the CPU to other L-threads and
avoid having to constantly poll the software rings.
The diagram below illustrates a case with two RX threads and three TX functions (each com-
prising a thread that processes forwarding and a thread that periodically drains the output
buffer of residual packets).
It is possible to display statistics showing estimated CPU load on each core. The statistics
indicate the percentage of CPU time spent: processing received packets (forwarding), polling
queues/rings (waiting for work), and doing any other processing (context switch and other
overhead).
When enabled, statistics are gathered by having the application threads set and clear flags
when they enter and exit pertinent code sections. The flags are then sampled in real time by
a statistics collector thread running on another core. This thread displays the data in real time
on the console.
This feature is enabled by designating a statistics collector core, using the --stat-lcore
parameter.
The fundamental difference between the L-thread and pthread models is the way in which
threads are scheduled. The simplest way to think about this is to consider the case of a
processor with a single CPU. To run multiple threads on a single CPU, the scheduler must fre-
quently switch between the threads, in order that each thread is able to make timely progress.
This is the basis of any multitasking operating system.
This section explores the differences between the pthread model and the L-thread model as
implemented in the provided L-thread subsystem. If needed a theoretical discussion of pre-
emptive vs cooperative multi-threading can be found in any good text on operating system
design.
The POSIX pthread library provides an application programming interface to create and syn-
chronize threads. Scheduling policy is determined by the host OS, and may be configurable.
The OS may use sophisticated rules to determine which thread should be run next, threads
may suspend themselves or make other threads ready, and the scheduler may employ a time
slice giving each thread a maximum time quantum after which it will be preempted in favor of
another thread that is ready to run. To complicate matters further threads may be assigned
different scheduling priorities.
By contrast the L-thread subsystem is considerably simpler. Logically the L-thread scheduler
performs the same multiplexing function for L-threads within a single pthread as the OS sched-
uler does for pthreads within an application process. The L-thread scheduler is simply the main
loop of a pthread, and in so far as the host OS is concerned it is a regular pthread just like any
other. The host OS is oblivious to the existence of L-threads and is not at all involved in their
scheduling.
The other and most significant difference between the two models is that L-threads are
scheduled cooperatively. L-threads cannot preempt each other, nor can the L-thread scheduler
preempt a running L-thread (i.e. there is no time slicing). The consequence is that programs
implemented with L-threads must possess frequent rescheduling points, meaning that they
must explicitly, and of their own volition, return to the scheduler at frequent intervals, in order to
allow other L-threads an opportunity to proceed.
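A minimal sketch of such a well-behaved L-thread is shown below; app_ctx and
do_some_work() are hypothetical, the point being that the loop body never blocks and yields
on every iteration:

#include <lthread_api.h>

struct app_ctx {
        volatile int quit;
};

void do_some_work(struct app_ctx *ctx); /* hypothetical, must never block */

static void
lthread_worker(void *arg)
{
        struct app_ctx *ctx = arg;

        while (!ctx->quit) {
                do_some_work(ctx);   /* bounded amount of work */
                lthread_yield();     /* explicit rescheduling point */
        }
}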
In both models switching between threads requires that the current CPU context is saved and
a new context (belonging to the next thread ready to run) is restored. With pthreads this
context switching is handled transparently and the set of CPU registers that must be preserved
between context switches is as per an interrupt handler.
An L-thread context switch is achieved by the thread itself making a function call to the L-thread
scheduler. Thus it is only necessary to preserve the callee-save registers; the caller is
responsible for saving and restoring any other registers it is using around the function call, and
this is handled by the compiler. For X86_64 on both Linux and BSD the System V calling
convention is used; this defines registers RBX, RSP, RBP, and R12-R15 as callee-save
registers (for a more detailed discussion a good reference is X86 Calling Conventions).
Taking advantage of this, and due to the absence of preemption, an L-thread context switch is
achieved with less than 20 load/store instructions.
The scheduling policy for L-threads is fixed: there is no prioritization of L-threads, all L-threads
are equal, and scheduling is based on a FIFO ready queue.
An L-thread is a struct containing the CPU context of the thread (saved on context switch)
and other useful items. The ready queue contains pointers to threads that are ready to run.
The L-thread scheduler is a simple loop that polls the ready queue, reads from it the next
thread ready to run, which it resumes by saving the current context (the current position in the
scheduler loop) and restoring the context of the next thread from its thread struct. Thus an
L-thread is always resumed at the last place it yielded.
A well behaved L-thread will call the context switch regularly (at least once in its main loop),
thus returning to the scheduler’s own main loop. Yielding inserts the current thread at the back
of the ready queue, and the process of servicing the ready queue is repeated; thus the system
runs by flipping back and forth between the L-threads and the scheduler loop.
In the case of pthreads, the preemptive scheduling, time slicing, and support for thread prioriti-
zation means that progress is normally possible for any thread that is ready to run. This comes
at the price of a relatively heavier context switch and scheduling overhead.
With L-threads the progress of any particular thread is determined by the frequency of
rescheduling opportunities in the other L-threads. This means that an errant L-thread mo-
nopolizing the CPU might cause scheduling of other threads to be stalled. Due to the lower
cost of context switching, however, voluntary rescheduling to ensure progress of other threads,
if managed sensibly, is not a prohibitive overhead, and overall performance can exceed that of
an application using pthreads.
Mutual exclusion
With pthreads preemption means that threads that share data must observe some form of
mutual exclusion protocol.
The fact that L-threads cannot preempt each other means that in many cases mutual exclusion
devices can be completely avoided.
Locking to protect shared data can be a significant bottleneck in multi-threaded applications.
As with applications written for pthreads an application written for L-threads can take advantage
of thread local storage, in this case local to an L-thread. An application may save and retrieve
a single pointer to application data in the L-thread struct.
For legacy and backward compatibility reasons two alternative methods are also offered: the
first is modelled directly on the pthread get/set specific APIs, the second approach is modelled
on the RTE_PER_LCORE macros, whereby PER_LTHREAD macros are introduced. In both
cases the storage is local to the L-thread.
API compatibility
The L-thread subsystem provides a set of functions that are logically equivalent to the cor-
responding functions offered by the POSIX pthread library, however not all pthread functions
have a corresponding L-thread equivalent, and not all features available to pthreads are imple-
mented for L-threads.
The pthread library offers considerable flexibility via programmable attributes that can be as-
sociated with threads, mutexes, and condition variables.
By contrast the L-thread subsystem has fixed functionality: the scheduler policy cannot be
varied, and L-threads cannot be prioritized. There are no variable attributes associated with
any L-thread objects; L-threads, mutexes and condition variables all have fixed functionality.
(Note: reserved parameters are included in the APIs to facilitate possible future support for
attributes).
The table below lists the pthread and equivalent L-thread APIs with notes on differences and/or
constraints. Where there is no L-thread entry in the table, then the L-thread subsystem pro-
vides no equivalent function.
Note 1:
Neither lthread signal nor broadcast may be called concurrently by L-threads running on differ-
ent schedulers, although multiple L-threads running in the same scheduler may freely perform
signal or broadcast operations. L-threads running on the same or different schedulers may
always safely wait on a condition variable.
Note 2:
Pthread attributes may be used to affinitize a pthread with a cpu-set. The L-thread subsystem
does not support a cpu-set. An L-thread may be affinitized only with a single CPU at any time.
Note 3:
If an L-thread is intended to run on a different NUMA node than the node that creates the
thread then, when calling lthread_create() it is advantageous to specify the destination
core as a parameter of lthread_create(). See Memory allocation and NUMA awareness
for details.
Note 4:
An L-thread can only detach itself, and cannot detach other L-threads.
Note 5:
A wait operation on a pthread condition variable is always associated with and protected by
a mutex which must be owned by the thread at the time it invokes pthread_cond_wait(). By
contrast L-thread condition variables are thread safe (for waiters) and do not use an associated
mutex. Multiple L-threads (including L-threads running on other schedulers) can safely wait on
an L-thread condition variable. As a consequence the performance of an L-thread condition
variable is typically an order of magnitude faster than its pthread counterpart.
Note 6:
Recursive locking is not supported with L-threads; attempts to take a lock recursively will be
detected and rejected.
Note 7:
lthread_yield() will save the current context, insert the current thread to the back of the
ready queue, and resume the next ready thread. Yielding increases ready queue backlog, see
Ready queue backlog for more details about the implications of this.
N.B. The context switch time, as measured from immediately before the call to
lthread_yield() to the point at which the next ready thread is resumed, can be an order
of magnitude faster than the same measurement for pthread_yield().
Note 8:
lthread_set_affinity() is similar to a yield apart from the fact that the yielding thread
is inserted into a peer ready queue of another scheduler. The peer ready queue is actually a
separate thread safe queue, which means that threads appearing in the peer ready queue can
jump any backlog in the local ready queue on the destination scheduler.
The context switch time as measured from the time just before the call to
lthread_set_affinity() to just after the same thread is resumed on the new
scheduler can be orders of magnitude faster than the same measurement for
pthread_setaffinity_np().
Note 9:
Although there is no pthread_sleep() function, lthread_sleep() and
lthread_sleep_clks() can be used wherever sleep(), usleep() or nanosleep()
might ordinarily be used. The L-thread sleep functions suspend the current thread, start an
rte_timer and resume the thread when the timer matures. The rte_timer_manage()
entry point is called on every pass of the scheduler loop. This means that the worst case jitter
on timer expiry is determined by the longest period between context switches of any running
L-threads.
In a synthetic test with many threads sleeping and resuming, the measured jitter is typically
orders of magnitude lower than the same measurement made for nanosleep().
Note 10:
Spin locks are not provided because they are problematic in a cooperative environment; see
Locks and spinlocks for a more detailed discussion on how to avoid spin locks.
Of the three L-thread local storage options the simplest and most efficient is storing a single
application data pointer in the L-thread struct.
The PER_LTHREAD macros involve a run time computation to obtain the address of the variable
being saved/retrieved and also require that the accesses are de-referenced via a pointer. This
means that code that has used RTE_PER_LCORE macros being ported to L-threads might need
some slight adjustment (see Thread local storage for hints about porting code that makes use
of thread local storage).
The get/set specific APIs are consistent with their pthread counterparts both in use and in
performance.
All memory allocation is from DPDK huge pages, and is NUMA aware. Each scheduler main-
tains its own caches of objects: lthreads, their stacks, TLS, mutexes and condition variables.
These caches are implemented as unbounded lock free MPSC queues. When objects are
created they are always allocated from the caches on the local core (current EAL thread).
If an L-thread has been affinitized to a different scheduler, then it can always safely free re-
sources to the caches from which they originated (because the caches are MPSC queues).
If the L-thread has been affinitized to a different NUMA node then the memory resources
associated with it may incur longer access latency.
The commonly used pattern of setting affinity on entry to a thread after it has started means
that memory allocation for both the stack and TLS will have been made from caches on the
NUMA node on which the thread’s creator is running. This has the side effect that access
latency will be sub-optimal after affinitizing.
This side effect can be mitigated to some extent (although not completely) by specifying the
destination CPU as a parameter of lthread_create(); this causes the L-thread’s stack and
TLS to be allocated when it is first scheduled on the destination scheduler. If the destination is
on another NUMA node, this results in a more optimal memory allocation.
Note that the lthread struct itself remains allocated from memory on the creating node; this is
unavoidable because an L-thread is known everywhere by the address of this struct.
The per lcore object caches pre-allocate objects in bulk whenever a request to allocate an
object finds a cache empty. By default 100 objects are pre-allocated; this is defined by
LTHREAD_PREALLOC in the public API header file lthread_api.h. This means that the caches
constantly grow to meet system demand.
In the present implementation there is no mechanism to reduce the cache sizes if system
demand reduces. Thus the caches will remain at their maximum extent indefinitely.
A consequence of the bulk pre-allocation of objects is that every 100 (default value) additional
new object create operations results in a call to rte_malloc(). For creation of objects such
as L-threads, which trigger the allocation of even more objects (i.e. their stacks and TLS) then
this can cause outliers in scheduling performance.
If this is a problem the simplest mitigation strategy is to dimension the system, by setting the
bulk object pre-allocation size to some large number that you do not expect to be exceeded.
This means the caches will be populated once only, the very first time a thread is created.
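For example (illustrative only; the appropriate value depends entirely on the application), the
constant could be raised so that the caches are populated once, the very first time a thread is
created:

/* lthread_api.h */
#define LTHREAD_PREALLOC 1000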
One of the more subtle performance considerations is managing the ready queue backlog.
The fewer threads that are waiting in the ready queue, the faster any particular thread will
get serviced.
In a naive L-thread application with N L-threads simply looping and yielding, this backlog will
always be equal to the number of L-threads, thus the cost of a yield to a particular L-thread will
be N times the context switch time.
This side effect can be mitigated by arranging for threads to be suspended and wait to be
resumed, rather than polling for work by constantly yielding. Blocking on a mutex or condition
variable or even more obviously having a thread sleep if it has a low frequency workload are all
mechanisms by which a thread can be excluded from the ready queue until it really does need
to be run. This can have a significant positive impact on performance.
The L-thread subsystem depends on DPDK for huge page allocation and de-
pends on the rte_timer subsystem. The DPDK EAL initialization and
rte_timer_subsystem_init() MUST be completed before the L-thread sub system
can be used.
Thereafter initialization of the L-thread subsystem is largely transparent to the application.
Constructor functions ensure that global variables are properly initialized. Other than global
variables each scheduler is initialized independently the first time that an L-thread is created
by a particular EAL thread.
If the schedulers are to be run as isolated and independent schedulers, with no intention that L-
threads running on different schedulers will migrate between schedulers or synchronize with L-
threads running on other schedulers, then initialization consists simply of creating an L-thread,
and then running the L-thread scheduler.
If there will be interaction between L-threads running on different schedulers, then it is impor-
tant that the starting of schedulers on different EAL threads is synchronized.
To achieve this an additional initialization step is necessary, this is simply to set the number of
schedulers by calling the API function lthread_num_schedulers_set(n), where n is the
number of EAL threads that will run L-thread schedulers. Setting the number of schedulers to
a number greater than 0 will cause all schedulers to wait until the others have started before
beginning to schedule L-threads.
The L-thread scheduler is started by calling the function lthread_run() and should be called
from the EAL thread and thus become the main loop of the EAL thread.
The function lthread_run() will not return until all threads running on the
scheduler have exited, and the scheduler has been explicitly stopped by calling
lthread_scheduler_shutdown(lcore) or lthread_scheduler_shutdown_all().
All these functions do is tell the scheduler that it can exit when there are no longer any running
L-threads; neither function forces any running L-thread to terminate. Any desired application
shutdown behavior must be designed and built into the application to ensure that L-threads
complete in a timely manner.
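The start-up sequence described above can be sketched as follows, using only the API
functions named in this section together with the standard EAL launch calls; initial_lthread()
is a hypothetical application entry point and error handling is omitted:

#include <rte_eal.h>
#include <rte_launch.h>
#include <rte_lcore.h>
#include <lthread_api.h>

void initial_lthread(void *arg); /* hypothetical first application L-thread */

static int
sched_main(void *arg)
{
        (void)arg;

        /* On the master lcore, spawn the application's first L-thread;
         * creating it also initializes this EAL thread's scheduler. */
        if (rte_lcore_id() == rte_get_master_lcore()) {
                struct lthread *lt;

                lthread_create(&lt, rte_lcore_id(), initial_lthread, NULL);
        }

        /* lthread_run() becomes the main loop of this EAL thread and returns
         * only after the scheduler has been shut down. */
        lthread_run();
        return 0;
}

int
main(int argc, char **argv)
{
        if (rte_eal_init(argc, argv) < 0)
                return -1;

        /* Synchronize scheduler start-up across all participating EAL threads. */
        lthread_num_schedulers_set(rte_lcore_count());

        rte_eal_mp_remote_launch(sched_main, NULL, CALL_MASTER);
        rte_eal_mp_wait_lcore();
        return 0;
}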
Important Note: It is assumed when the scheduler exits that the application is terminating
for good. The scheduler does not free resources before exiting, and running the scheduler a
subsequent time will result in undefined behavior.
Legacy code originally written for a pthread environment may be ported to L-threads if the
considerations about differences in scheduling policy, and constraints discussed in the previous
sections can be accommodated.
This section looks in more detail at some of the issues that may have to be resolved when
porting code.
The first step is to establish exactly which pthread APIs the legacy application uses, and to
understand the requirements of those APIs. If there are corresponding L-thread APIs, and
the application uses only the default pthread functionality then, notwithstanding the
other issues discussed here, it should be feasible to run the application with L-threads. If the
legacy code modifies the default behavior using attributes then it may be necessary to make
some adjustments to eliminate those requirements.
It is important to understand what other system services the application may be using, bearing
in mind that in a cooperatively scheduled environment a thread cannot block without stalling
the scheduler and with it all other cooperative threads. Any kind of blocking system call, for
example file or socket IO, is a potential problem; a good tool to analyze the application for this
purpose is the strace utility.
There are many strategies to resolve these kinds of issues, each with its merits. Possible solutions include:
• Adopting a polled mode of the system API concerned (if available).
• Arranging for another core to perform the function and synchronizing with that core via
constructs that will not block the L-thread.
• Affinitizing the thread to another scheduler devoted (as a matter of policy) to handling
threads wishing to make blocking calls, and then back again when finished.
Locks and spinlocks are another source of blocking behavior that, for the same reasons as system calls, will need to be addressed.
If the application design ensures that the contending L-threads will always run on the same scheduler, then it is probably safe to remove locks and spin locks completely.
The only exception to the above rule is if for some reason the code performs any kind of context switch whilst holding the lock (e.g. yield, sleep, or block on a different lock, or on a condition variable). This will need to be determined before deciding to eliminate a lock.
If a lock cannot be eliminated then an L-thread mutex can be substituted for either kind of lock.
An L-thread blocking on an L-thread mutex will be suspended and will cause another ready
L-thread to be resumed, thus not blocking the scheduler. When default behavior is required, it
can be used as a direct replacement for a pthread mutex lock.
Spin locks are typically used when lock contention is likely to be rare and where the period
during which the lock may be held is relatively short. When the contending L-threads are
running on the same scheduler then an L-thread blocking on a spin lock will enter an infinite
loop stopping the scheduler completely (see Infinite loops below).
If the application design ensures that contending L-threads will always run on different sched-
ulers then it might be reasonable to leave a short spin lock that rarely experiences contention
in place.
If after all considerations it appears that a spin lock can neither be eliminated completely, nor replaced with an L-thread mutex, nor left in place as is, then an alternative is to loop on a flag, with a call to lthread_yield() inside the loop (n.b. if the contending L-threads might ever run on different schedulers the flag will need to be manipulated atomically).
Spinning and yielding is the least preferred solution since it introduces ready queue backlog
(see also Ready queue backlog).
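As an illustrative sketch (not part of the L-thread API), a short critical section protected this way might look as follows, using an atomic compare-and-set on the flag in case the contending L-threads run on different schedulers:

#include <rte_atomic.h>
#include "lthread_api.h"

static volatile uint32_t lock_flag;             /* 0 = free, 1 = held */

static void do_protected_work(void)
{
    /* Spin on the flag, but yield so other L-threads on this scheduler
     * keep running instead of stalling the whole scheduler. */
    while (rte_atomic32_cmpset(&lock_flag, 0, 1) == 0)
        lthread_yield();

    /* ... short critical section ... */

    rte_smp_wmb();                              /* order stores before release */
    lock_flag = 0;                              /* release the flag */
}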
Yet another kind of blocking behavior (albeit momentary) is caused by delay functions like sleep(), usleep(), nanosleep(), etc. All will have the consequence of stalling the L-thread scheduler, and unless the delay is very short (e.g. a very short nanosleep) calls to these functions will need to be eliminated.
The simplest mitigation strategy is to use the L-thread sleep API functions, of which two
variants exist, lthread_sleep() and lthread_sleep_clks(). These functions start an
rte_timer against the L-thread, suspend the L-thread and cause another ready L-thread to be
resumed. The suspended L-thread is resumed when the rte_timer matures.
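For example, a legacy usleep() call inside a polling loop could be replaced along the following lines (a sketch, assuming lthread_sleep() takes its argument in nanoseconds, as in the performance-thread example API):

#include <rte_common.h>
#include "lthread_api.h"

static void poll_loop(void *arg __rte_unused)
{
    for (;;) {
        /* ... poll a resource, process any work found ... */

        /* was: usleep(100); -- this would stall the whole scheduler */
        lthread_sleep(100 * 1000ULL);   /* ~100 us; only this L-thread is
                                         * suspended, others keep running */
    }
}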
Infinite loops
Some applications have threads with loops that contain no inherent rescheduling opportunity, and rely solely on OS time slicing to share the CPU. In a cooperative environment this will stop everything dead. These kinds of loops are not hard to identify; in a debug session you will find the debugger always stopping in the same loop.
The simplest solution to this kind of problem is to insert an explicit lthread_yield() or
lthread_sleep() into the loop. Another solution might be to include the function performed
by the loop into the execution path of some other loop that does in fact yield, if this is possible.
If the application uses thread local storage, the use case should be studied carefully.
In a legacy pthread application, either or both of the __thread prefix and the pthread set/get specific APIs may have been used to define storage local to a pthread.
In some applications it may be a reasonable assumption that the data could or in fact most
likely should be placed in L-thread local storage.
If the application (like many DPDK applications) has assumed a certain relationship between
a pthread and the CPU to which it is affinitized, there is a risk that thread local storage may
have been used to save some data items that are correctly logically associated with the CPU,
and others items which relate to application context for the thread. Only a good understanding
of the application will reveal such cases.
If the application requires that an L-thread be able to move between schedulers, then care should be taken to separate these kinds of data into per-lcore and per-L-thread storage. In this way a migrating thread will bring with it the local data it needs, and pick up the new logical core specific values from the per-lcore (pthread local) storage at its new home.
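A sketch of such a separation is shown below, assuming the per-lcore macros from rte_per_lcore.h and the L-thread local storage calls (lthread_key_create(), lthread_setspecific(), lthread_getspecific()) provided by the performance-thread example; the variable names are purely illustrative:

#include <stdint.h>
#include <rte_per_lcore.h>
#include "lthread_api.h"

/* Data logically tied to the CPU / scheduler: keep it per-lcore. */
RTE_DEFINE_PER_LCORE(uint64_t, pkts_handled_on_this_core);

/* Data tied to the (possibly migrating) L-thread: keep it in L-thread TLS. */
static unsigned int app_ctx_key;

static void app_tls_init(void)
{
    lthread_key_create(&app_ctx_key, NULL);     /* NULL: no destructor */
}

static void lthread_work(void *per_thread_ctx)
{
    lthread_setspecific(app_ctx_key, per_thread_ctx); /* travels with the thread */

    RTE_PER_LCORE(pkts_handled_on_this_core)++;       /* stays with the lcore */

    void *ctx = lthread_getspecific(app_ctx_key);     /* still valid after migration */
    (void)ctx;
}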
Pthread shim
A convenient way to get something working with legacy code can be to use a shim that adapts
pthread API calls to the corresponding L-thread ones. This approach will not mitigate any of
the porting considerations mentioned in the previous sections, but it will reduce the amount
of code churn that would otherwise be involved. It is a reasonable approach to evaluate
L-threads, before investing effort in porting to the native L-thread APIs.
Overview
The L-thread subsystem includes an example pthread shim. This is a partial implementation
but does contain the API stubs needed to get basic applications running. There is a simple
“hello world” application that demonstrates the use of the pthread shim.
A subtlety of working with a shim is that the application will still need to make use of the
genuine pthread library functions, at the very least in order to create the EAL threads in which
the L-thread schedulers will run. This is the case with DPDK initialization, and exit.
To deal with the initialization and shutdown scenarios, the shim is capable of switching its adaptor functionality on or off; an application can control this behavior by calling the function pt_override_set(). The default state is disabled.
The pthread shim uses the dynamic loader to look up and save the addresses of the genuine pthread API functions in an internal table; when the shim functionality is enabled it performs the adaptor function, and when disabled it invokes the genuine pthread function.
The function pthread_exit() has additional special handling. The standard system header file pthread.h declares pthread_exit() with __attribute__((noreturn)). This is an optimization that is possible because the pthread is terminating, and it enables the compiler to omit the normal handling of the stack and protection of registers since the function is not expected to return, and in fact the thread is being destroyed. These optimizations are applied in both the callee and the caller of the pthread_exit() function.
In our cooperative scheduling environment this behavior is inadmissible. The pthread is the
L-thread scheduler thread, and, although an L-thread is terminating, there must be a return to
the scheduler in order that the system can continue to run. Further, returning from a function
with attribute noreturn is invalid and may result in undefined behavior.
The solution is to redefine the pthread_exit function with a macro, causing it to be mapped to a stub function in the shim that does not have the noreturn attribute. This macro is defined in the file pthread_shim.h. The stub function is otherwise no different than any of the other stub functions in the shim, and will switch between the real pthread_exit() function and the lthread_exit() function as required. The only difference is that the mapping to the stub is done by macro substitution.
A consequence of this is that the file pthread_shim.h must be included in legacy code wishing to make use of the shim. It also means that dynamic linkage of a pre-compiled binary that did not include pthread_shim.h is not supported.
Given the requirements for porting legacy code outlined in Porting legacy code to run on L-threads, most applications will require at least some minimal adjustment and recompilation to run on L-threads, so pre-compiled binaries are unlikely to be encountered in practice.
In summary the shim approach adds some overhead but can be a useful tool to help establish
the feasibility of a code reuse project. It is also a fairly straightforward task to extend the shim
if necessary.
Note: Bearing in mind the preceding discussion about the impact of making blocking calls, switching the shim in and out on the fly in order to invoke a pthread API that might block is something that should typically be avoided.
The shim example application is located in the performance-thread folder of the sample applications.
To build and run the pthread shim example
1. Go to the example applications folder
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/performance-thread/pthread_shim
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make
L-thread Diagnostics
When debugging you must take account of the fact that the L-threads are run in a single
pthread. The current scheduler is defined by RTE_PER_LCORE(this_sched), and the cur-
rent lthread is stored at RTE_PER_LCORE(this_sched)->current_lthread. Thus on a
breakpoint in a GDB session the current lthread can be obtained by displaying the pthread
local variable per_lcore_this_sched->current_lthread.
Another useful diagnostic feature is the possibility to trace significant events in the life of an L-thread; this feature is enabled by changing the value of LTHREAD_DIAG from 0 to 1 in the file lthread_diag_api.h.
Tracing of events can be individually masked, and the mask may be programmed at run time.
An unmasked event results in a callback that provides information about the event. The default
callback simply prints trace information. The default mask is 0 (all events off); the mask can be modified by calling the function lthread_diagnostic_set_mask().
It is possible to register a user callback function to implement more sophisticated diagnostic functions. Object creation events (lthread, mutex, and condition variable) accept, and store in the created object, a user supplied reference value returned by the callback function.
The lthread reference value is passed back in all subsequent event callbacks, and APIs are provided to retrieve the reference value from mutexes and condition variables. This enables a user to monitor, count, or filter for specific events, on specific objects, for example to monitor for a specific thread signaling a specific condition variable, or to monitor all timer events; the possibilities and combinations are endless.
The IPsec Security Gateway application is an example of a “real world” application using the DPDK cryptodev framework.
Overview
The application demonstrates the implementation of a Security Gateway (not IPsec compliant,
see the Constraints section below) using DPDK based on RFC4301, RFC4303, RFC3602 and
RFC2404.
Internet Key Exchange (IKE) is not implemented, so only manual setting of Security Policies
and Security Associations is supported.
The Security Policies (SP) are implemented as ACL rules, the Security Associations (SA) are
stored in a table and the routing is implemented using LPM.
The application classifies the ports as Protected and Unprotected. Thus, traffic received on an Unprotected or Protected port is considered Inbound or Outbound respectively.
The Path for IPsec Inbound traffic is:
• Read packets from the port.
• Classify packets between IPv4 and ESP.
• Perform Inbound SA lookup for ESP packets based on their SPI.
• Perform Verification/Decryption.
• Remove ESP and outer IP header
• Inbound SP check using ACL of decrypted packets and any other IPv4 packets.
• Routing.
• Write packet to port.
The Path for the IPsec Outbound traffic is:
• Read packets from the port.
• Perform Outbound SP check using ACL of all IPv4 traffic.
• Perform Outbound SA lookup for packets that need IPsec protection.
• Add ESP and outer IP header.
• Perform Encryption/Digest.
• Routing.
• Write packet to port.
Constraints
2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make
4. [Optional] Build the application for debugging: This option adds some extra flags, disables
compiler optimizations and is verbose:
make DEBUG=1
Where:
• -p PORTMASK: Hexadecimal bitmask of ports to configure.
• -P: optional. Sets all ports to promiscuous mode so that packets are accepted regardless
of the packet’s Ethernet MAC destination address. Without this option, only packets
with the Ethernet MAC destination address set to the Ethernet address of the port are
accepted (default is enabled).
• -u PORTMASK: hexadecimal bitmask of unprotected ports
Configurations
The following sections provide the syntax of configurations to initialize your SP, SA and Routing
tables. Configurations shall be specified in the configuration file to be passed to the application.
The file is then parsed by the application. The successful parsing will result in the appropriate
rules being applied to the tables accordingly.
As mentioned in the overview, the Security Policies are ACL rules. The application parses the rules specified in the configuration file, passes them to the ACL table, and replicates them per socket in use.
The following describes the configuration file syntax.
The parser treats one line in the configuration file as one configuration item (unless the line concatenation symbol is present). Every configuration item shall follow the syntax of either the SP, SA, or Routing rules specified below.
The configuration parser supports the following special symbols:
• Comment symbol #. Any character from this symbol to the end of line is treated as
comment and will not be parsed.
• Line concatenation symbol \. This symbol shall be placed in the end of the line to be
concatenated to the line below. Multiple lines’ concatenation is supported.
SP rule syntax
<sport>
• The source port start and end range
• Optional: yes, default range of 0 to 0 will be used
• Syntax: sport X:Y
<dport>
• The destination port start and end range
• Optional: yes, default range of 0 to 0 will be used
• Syntax: dport X:Y
Example SP rules:
sp ipv4 out esp protect 105 pri 1 dst 192.168.115.0/24 sport 0:65535 \
dport 0:65535
SA rule syntax
Routing rule syntax
<dst_ip>
• The destination IP address and mask
• Optional: Yes, default address 0.0.0.0 and mask of 0 will be used
• Syntax:
– dst X.X.X.X/Y for IPv4
– dst XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX/Y for IPv6
<port>
• The traffic output port id
• Optional: yes, default output port 0 will be used
• Syntax: port X
Example Routing rules:
rt ipv4 dst 172.16.1.5/32 port 0
Figures
Fig. 3.1 Packet Flow
Fig. 3.2 Kernel NIC Application Packet Flow
Fig. 3.4 Performance Benchmark Setup (Basic Environment)
Fig. 3.5 Performance Benchmark Setup (Virtualized Environment)
Fig. 3.6 Performance Benchmark Setup (Basic Environment)
Fig. 3.7 Performance Benchmark Setup (Virtualized Environment)
Fig. 3.3 Encryption flow Through the L2 Forwarding with Crypto Application
Fig. 3.9 A typical IPv4 ACL rule
Fig. 3.10 Rules example
Fig. 3.11 Load Balancer Application Architecture
Fig. 3.13 Example Data Flow in a Symmetric Multi-process Application
Fig. 3.14 Example Data Flow in a Client-Server Symmetric Multi-process Application
Fig. 3.15 Master-slave Process Workflow
Fig. 3.16 Slave Process Recovery Process Flow
Fig. 3.17 QoS Scheduler Application Architecture
Fig. 3.18 Intel® QuickAssist Technology Application Block Diagram
Fig. 3.19 Pipeline Overview
Fig. 3.20 Ring-based Processing Pipeline Performance Setup
Fig. 3.21 Threads and Pipelines
Fig. 3.22 Packet Flow Through the VMDQ and DCB Sample Application
Programmer’s Guide
Introduction
Documentation Roadmap
• Sample Applications User Guide: Describes a set of sample applications. Each chap-
ter describes a sample application that showcases specific functionality and provides
instructions on how to compile, run and use the sample application.
Related Publications
The following documents provide information that is relevant to the development of applications
using the DPDK:
• Intel® 64 and IA-32 Architectures Software Developer’s Manual Volume 3A: System Pro-
gramming Guide
Part 1: Architecture Overview
Overview
This section gives a global overview of the architecture of Data Plane Development Kit (DPDK).
The main goal of the DPDK is to provide a simple, complete framework for fast packet process-
ing in data plane applications. Users may use the code to understand some of the techniques
employed, to build upon for prototyping or to add their own protocol stacks. Alternative ecosys-
tem options that use the DPDK are available.
The framework creates a set of libraries for specific environments through the creation of an
Environment Abstraction Layer (EAL), which may be specific to a mode of the Intel® architec-
ture (32-bit or 64-bit), Linux* user space compilers or a specific platform. These environments
are created through the use of make files and configuration files. Once the EAL library is cre-
ated, the user may link with the library to create their own applications. Other libraries, outside
of EAL, including the Hash, Longest Prefix Match (LPM) and rings libraries are also provided.
Sample applications are provided to help show the user how to use various features of the
DPDK.
The DPDK implements a run to completion model for packet processing, where all resources
must be allocated prior to calling Data Plane applications, running as execution units on logical
processing cores. The model does not support a scheduler and all devices are accessed by
polling. The primary reason for not using interrupts is the performance overhead imposed by
interrupt processing.
In addition to the run-to-completion model, a pipeline model may also be used by passing
packets or messages between cores via the rings. This allows work to be performed in stages
and may allow more efficient use of code on cores.
Development Environment
The DPDK project installation requires Linux and the associated toolchain, such as one or more
compilers, assembler, make utility, editor and various libraries to create the DPDK components
and libraries.
Once these libraries are created for the specific environment and architecture, they may then
be used to create the user’s data plane application.
When creating applications for the Linux user space, the glibc library is used. For DPDK
applications, two environmental variables (RTE_SDK and RTE_TARGET) must be configured
before compiling the applications. The following are examples of how the variables can be set:
export RTE_SDK=/home/user/DPDK
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for information on setting up the development environ-
ment.
The Environment Abstraction Layer (EAL) provides a generic interface that hides the environ-
ment specifics from the applications and libraries. The services provided by the EAL are:
• DPDK loading and launching
• Support for multi-process and multi-thread execution types
• Core affinity/assignment procedures
• System memory allocation/de-allocation
• Atomic/lock operations
• Time reference
• PCI bus access
• Trace and debug functions
• CPU feature identification
• Interrupt handling
• Alarm operations
• Memory management (malloc)
The EAL is fully described in Environment Abstraction Layer .
Core Components
The core components are a set of libraries that provide all the elements needed for high-
performance packet processing applications.
The ring structure provides a lockless multi-producer, multi-consumer FIFO API in a finite size table. It has some advantages over lockless queues: it is easier to implement, adapted to bulk operations, and faster. A ring is used by the Memory Pool Manager (librte_mempool) and
may be used as a general communication mechanism between cores and/or execution blocks
connected together on a logical core.
This ring buffer and its usage are fully described in Ring Library .
The Memory Pool Manager is responsible for allocating pools of objects in memory. A pool
is identified by name and uses a ring to store free objects. It provides some other optional
services, such as a per-core object cache and an alignment helper to ensure that objects are
padded to spread them equally on all RAM channels.
This memory pool allocator is described in Mempool Library .
The mbuf library provides the facility to create and destroy buffers that may be used by the
DPDK application to store message buffers. The message buffers are created at startup time
and stored in a mempool, using the DPDK mempool library.
This library provides an API to allocate/free mbufs, manipulate control message buffers (ctrlmbuf) which are generic message buffers, and packet buffers (pktmbuf) which are used to carry network packets.
Network Packet Buffer Management is described in Mbuf Library .
This library provides a timer service to DPDK execution units, providing the ability to execute
a function asynchronously. It can be periodic function calls, or just a one-shot call. It uses
the timer interface provided by the Environment Abstraction Layer (EAL) to get a precise time
reference and can be initiated on a per-core basis as required.
The library documentation is available in Timer Library .
The DPDK includes Poll Mode Drivers (PMDs) for 1 GbE, 10 GbE and 40 GbE, and paravirtualized virtio Ethernet controllers which are designed to work without asynchronous, interrupt-based signaling mechanisms.
See Poll Mode Driver .
The DPDK includes Hash (librte_hash) and Longest Prefix Match (LPM, librte_lpm) libraries to
support the corresponding packet forwarding algorithms.
See Hash Library and LPM Library for more information.
librte_net
The librte_net library is a collection of IP protocol definitions and convenience macros, including IPv4/IPv6, TCP, UDP and SCTP header structures.
The Environment Abstraction Layer (EAL) is responsible for gaining access to low-level re-
sources such as hardware and memory space. It provides a generic interface that hides the
environment specifics from the applications and libraries. It is the responsibility of the initial-
ization routine to decide how to allocate these resources (that is, memory space, PCI devices,
timers, consoles, and so on).
Typical services expected from the EAL are:
• DPDK Loading and Launching: The DPDK and its application are linked as a single
application and must be loaded by some means.
• Core Affinity/Assignment Procedures: The EAL provides mechanisms for assigning exe-
cution units to specific cores as well as creating execution instances.
• System Memory Reservation: The EAL facilitates the reservation of different memory
zones, for example, physical memory areas for device interactions.
• PCI Address Abstraction: The EAL provides an interface to access PCI address space.
• Trace and Debug Functions: Logs, dump_stack, panic and so on.
• Utility Functions: Spinlocks and atomic counters that are not provided in libc.
• CPU Feature Identification: Determine at runtime if a particular feature, for example,
Intel® AVX is supported. Determine if the current CPU supports the feature set that the
binary was compiled for.
• Interrupt Handling: Interfaces to register/unregister callbacks to specific interrupt
sources.
• Alarm Functions: Interfaces to set/remove callbacks to be run at a specific time.
In a Linux user space environment, the DPDK application runs as a user-space application
using the pthread library. PCI information about devices and address space is discovered
through the /sys kernel interface and through kernel modules such as uio_pci_generic, or
igb_uio. Refer to the UIO: User-space drivers documentation in the Linux kernel. This memory
is mmap’d in the application.
The EAL performs physical memory allocation using mmap() in hugetlbfs (using huge page
sizes to increase performance). This memory is exposed to DPDK service layers such as the
Mempool Library .
At this point, the DPDK services layer will be initialized, then through pthread setaffinity calls,
each execution unit will be assigned to a specific logical core to run as a user-level thread.
The time reference is provided by the CPU Time-Stamp Counter (TSC) or by the HPET kernel
API through a mmap() call.
Part of the initialization is done by the start function of glibc. A check is also performed at initialization time to ensure that the micro-architecture type chosen in the config file is supported by the CPU. Then, the main() function is called. The core initialization and launch is done in rte_eal_init() (see the API documentation). It consists of calls to the pthread library (more specifically, pthread_self(), pthread_create(), and pthread_setaffinity_np()).
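As an illustration, the canonical start-up sequence described above looks roughly like this in application code (a minimal sketch with error handling trimmed):

#include <stdio.h>
#include <rte_common.h>
#include <rte_eal.h>
#include <rte_launch.h>
#include <rte_lcore.h>
#include <rte_debug.h>

/* Work function run as a user-level thread pinned to each lcore. */
static int lcore_main(void *arg __rte_unused)
{
    printf("hello from lcore %u\n", rte_lcore_id());
    return 0;
}

int main(int argc, char **argv)
{
    /* Parses the EAL arguments, maps hugepage memory, scans the PCI bus
     * and creates one pinned pthread per enabled lcore. */
    if (rte_eal_init(argc, argv) < 0)
        rte_panic("Cannot init EAL\n");

    /* Launch lcore_main() on every lcore, including the master. */
    rte_eal_mp_remote_launch(lcore_main, NULL, CALL_MASTER);
    rte_eal_mp_wait_lcore();
    return 0;
}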
Note: Initialization of objects, such as memory zones, rings, memory pools, lpm tables and
hash tables, should be done as part of the overall application initialization on the master lcore.
The creation and initialization functions for these objects are not multi-thread safe. However,
once initialized, the objects themselves can safely be used in multiple threads simultaneously.
Multi-process Support
The allocation of large contiguous physical memory is done using the hugetlbfs kernel filesys-
tem. The EAL provides an API to reserve named memory zones in this contiguous memory.
The physical address of the reserved memory for that memory zone is also returned to the
user by the memory zone reservation API.
Note: Memory reservations done using the APIs provided by rte_malloc are also backed by
pages from the hugetlbfs filesystem.
The existing memory management implementation is based on the Linux kernel hugepage mechanism. However, Xen Dom0 does not support hugepages, so a new Linux kernel module rte_dom0_mm is added to work around this limitation.
The EAL uses the IOCTL interface to ask the Linux kernel module rte_dom0_mm to allocate memory of the specified size and to retrieve the memory segment information from the module, and it uses the MMAP interface to map the allocated memory. For each memory segment, the physical addresses are contiguous within it, but the actual hardware addresses are only contiguous within 2 MB.
PCI Access
The EAL uses the /sys/bus/pci utilities provided by the kernel to scan the content on the PCI
bus. To access PCI memory, a kernel module called uio_pci_generic provides a /dev/uioX
device file and resource files in /sys that can be mmap’d to obtain access to PCI address
space from the application. The DPDK-specific igb_uio module can also be used for this. Both
drivers use the uio kernel feature (userland driver).
Note: lcore refers to a logical execution unit of the processor, sometimes called a hardware
thread.
Shared variables are the default behavior. Per-lcore variables are implemented using Thread
Local Storage (TLS) to provide per-thread local storage.
Logs
A logging API is provided by EAL. By default, in a Linux application, logs are sent to syslog and
also to the console. However, the log function can be overridden by the user to use a different
logging mechanism.
There are some debug functions to dump the stack in glibc. The rte_panic() function can
voluntarily provoke a SIG_ABORT, which can trigger the generation of a core file, readable by
gdb.
The EAL can query the CPU at runtime (using the rte_cpu_get_feature() function) to determine
which CPU features are available.
Note: In DPDK PMD, the only interrupts handled by the dedicated host thread are those for
link status change, i.e. link up and link down notification.
• RX Interrupt Event
The receive and transmit routines provided by each PMD are not limited to executing in polling mode. To ease idle polling at tiny throughput, it is useful to pause the polling and wait until a wake-up event happens. The RX interrupt is the first choice for such a wake-up event, but it probably won't be the only one.
EAL provides the event APIs for this event-driven thread mode. Taking linuxapp as an example,
the implementation relies on epoll. Each thread can monitor an epoll instance in which all the
wake-up events’ file descriptors are added. The event file descriptors are created and mapped
to the interrupt vectors according to the UIO/VFIO spec. From bsdapp’s perspective, kqueue
is the alternative way, but not implemented yet.
EAL initializes the mapping between event file descriptors and interrupt vectors, while each
device initializes the mapping between interrupt vectors and queues. In this way, EAL actually
is unaware of the interrupt cause on the specific vector. The eth_dev driver takes responsibility
to program the latter mapping.
Note: Per-queue RX interrupt events are only allowed in VFIO, which supports multiple MSI-X vectors. In UIO, the RX interrupt shares one vector with the other interrupt causes. In this case, when the RX interrupt and the LSC (link status change) interrupt are both enabled (intr_conf.lsc == 1 && intr_conf.rxq == 1), only the RX interrupt is handled.
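A sketch of the resulting wait/wake pattern, modelled on the l3fwd-power sample application, is shown below (the epoll-based calls are Linux-only and their exact prototypes may differ slightly between releases):

#include <rte_ethdev.h>
#include <rte_interrupts.h>

/* Pause polling on (port, queue) until an RX interrupt arrives. */
static void wait_for_rx(uint8_t port_id, uint16_t queue_id)
{
    struct rte_epoll_event event;

    /* Map the queue's interrupt vector into this thread's epoll instance
     * (normally done once at setup, not on every wait). */
    rte_eth_dev_rx_intr_ctl_q(port_id, queue_id, RTE_EPOLL_PER_THREAD,
                              RTE_INTR_EVENT_ADD, NULL);

    rte_eth_dev_rx_intr_enable(port_id, queue_id);        /* arm the interrupt */
    rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, -1);   /* sleep until event */
    rte_eth_dev_rx_intr_disable(port_id, queue_id);        /* resume pure polling */
}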
Blacklisting
The EAL PCI device blacklist functionality can be used to mark certain NIC ports as blacklisted,
so they are ignored by the DPDK. The ports to be blacklisted are identified using the PCIe*
description (Domain:Bus:Device.Function).
Misc Functions
The mapping of physical memory is provided by this feature in the EAL. As physical memory
can have gaps, the memory is described in a table of descriptors, and each descriptor (called
rte_memseg ) describes a contiguous portion of memory.
On top of this, the memzone allocator’s role is to reserve contiguous portions of physical mem-
ory. These zones are identified by a unique name when the memory is reserved.
The rte_memzone descriptors are also located in the configuration structure. This structure is
accessed using rte_eal_get_configuration(). The lookup (by name) of a memory zone returns
a descriptor containing the physical address of the memory zone.
Memory zones can be reserved with specific start address alignment by supplying the align
parameter (by default, they are aligned to cache line size). The alignment value should be a
power of two and not less than the cache line size (64 bytes). Memory zones can also be
reserved from either 2 MB or 1 GB hugepages, provided that both are available on the system.
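For example, a zone might be reserved as follows (a sketch; the name and sizes are arbitrary):

#include <stdio.h>
#include <inttypes.h>
#include <rte_memzone.h>
#include <rte_lcore.h>

static const struct rte_memzone *reserve_example(void)
{
    /* 1 MB zone on the caller's NUMA socket, aligned to 2 MB and taken
     * from 2 MB hugepages if possible. */
    const struct rte_memzone *mz;

    mz = rte_memzone_reserve_aligned("example_zone", 1 << 20,
                                     rte_socket_id(), RTE_MEMZONE_2MB,
                                     1 << 21);
    if (mz != NULL)
        printf("virt %p phys 0x%" PRIx64 " len %zu\n",
               mz->addr, (uint64_t)mz->phys_addr, mz->len);
    return mz;
}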
Multiple pthread
DPDK usually pins one pthread per core to avoid the overhead of task switching. This allows
for significant performance gains, but lacks flexibility and is not always efficient.
Power management helps to improve the CPU efficiency by limiting the CPU runtime frequency.
However, alternately it is possible to utilize the idle cycles available to take advantage of the
full capability of the CPU.
By taking advantage of cgroup, the CPU utilization quota can be simply assigned. This gives
another way to improve the CPU efficiency, however, there is a prerequisite; DPDK must handle
the context switching between multiple pthreads per core.
For further flexibility, it is useful to set pthread affinity not only to a CPU but to a CPU set.
The term “lcore” refers to an EAL thread, which is really a Linux/FreeBSD pthread. “EAL
pthreads” are created and managed by EAL and execute the tasks issued by remote_launch.
In each EAL pthread, there is a TLS (Thread Local Storage) called _lcore_id for unique identi-
fication. As EAL pthreads usually bind 1:1 to the physical CPU, the _lcore_id is typically equal
to the CPU ID.
When using multiple pthreads, however, the binding is no longer always 1:1 between an EAL
pthread and a specified physical CPU. The EAL pthread may have affinity to a CPU set, and
as such the _lcore_id will not be the same as the CPU ID. For this reason, there is an EAL
long option '--lcores' defined to assign the CPU affinity of lcores. For a specified lcore ID or ID group, the option allows setting the CPU set for that EAL pthread.
The format pattern: --lcores='<lcore_set>[@cpu_set][,<lcore_set>[@cpu_set],...]'
‘lcore_set’ and ‘cpu_set’ can be a single number, range or a group.
A number is a “digit([0-9]+)”; a range is “<number>-<number>”; a group is “(<num-
ber|range>[,<number|range>,...])”.
If a ‘@cpu_set’ value is not supplied, the value of ‘cpu_set’ will default to the value of ‘lcore_set’.
For example, "--lcores='1,2@(5-7),(3-5)@(0,2),(0,6),7-8'" means starting 9 EAL threads:
lcore 0 runs on cpuset 0x41 (cpu 0,6);
lcore 1 runs on cpuset 0x2 (cpu 1);
lcore 2 runs on cpuset 0xe0 (cpu 5,6,7);
lcores 3, 4 and 5 run on cpuset 0x5 (cpu 0,2);
lcore 6 runs on cpuset 0x41 (cpu 0,6);
lcore 7 runs on cpuset 0x80 (cpu 7);
lcore 8 runs on cpuset 0x100 (cpu 8).
Using this option, for each given lcore ID, the associated CPUs can be assigned. It is also compatible with the pattern of the corelist ('-l') option.
It is possible to use the DPDK execution context with any user pthread (aka. Non-EAL
pthreads). In a non-EAL pthread, the _lcore_id is always LCORE_ID_ANY which identifies
that it is not an EAL thread with a valid, unique, _lcore_id. Some libraries will use an alter-
native unique ID (e.g. TID), some will not be impacted at all, and some will work but with
limitations (e.g. timer and mempool libraries).
All these impacts are mentioned in Known Issues section.
Known Issues
• rte_mempool
The rte_mempool uses a per-lcore cache inside the mempool. For non-EAL pthreads,
rte_lcore_id() will not return a valid number. So for now, when rte_mempool is used
with non-EAL pthreads, the put/get operations will bypass the default mempool cache and
there is a performance penalty because of this bypass. Only user-owned external caches
can be used in a non-EAL context in conjunction with rte_mempool_generic_put()
and rte_mempool_generic_get() that accept an explicit cache parameter.
• rte_ring
rte_ring supports multi-producer enqueue and multi-consumer dequeue. However, it is non-preemptive; this has a knock-on effect of making rte_mempool non-preemptable. This does not mean it cannot be used; simply, there is a need to narrow down the situations in which it is used by multiple pthreads on the same core.
1. It CAN be used for any single-producer or single-consumer situation.
2. It MAY be used by multi-producer/consumer pthreads whose scheduling policies are all SCHED_OTHER (cfs). Users SHOULD be aware of the performance penalty before using it.
3. It MUST not be used by multi-producer/consumer pthreads, whose scheduling poli-
cies are SCHED_FIFO or SCHED_RR.
RTE_RING_PAUSE_REP_COUNT is defined for rte_ring to reduce contention. It is mainly for case 2: a yield is issued after the pause has been repeated the configured number of times.
It adds a sched_yield() syscall if the thread spins for too long while waiting on the other
thread to finish its operations on the ring. This gives the preempted thread a chance to
proceed and finish with the ring enqueue/dequeue operation.
• rte_timer
Running rte_timer_manager() on a non-EAL pthread is not allowed. However, re-
setting/stopping the timer from a non-EAL pthread is allowed.
• rte_log
In non-EAL pthreads, there is no per thread loglevel and logtype, global loglevels are
used.
• misc
The debug statistics of rte_ring, rte_mempool and rte_timer are not supported in a non-
EAL pthread.
cgroup control
The following is a simple example of cgroup control usage; there are two pthreads (t0 and t1) doing packet I/O on the same core ($CPU). We expect only 50% of the CPU to be spent on packet I/O.
mkdir /sys/fs/cgroup/cpu/pkt_io
mkdir /sys/fs/cgroup/cpuset/pkt_io
cd /sys/fs/cgroup/cpu/pkt_io
echo 100000 > cpu.cfs_period_us
echo 50000 > cpu.cfs_quota_us
Malloc
Cookies
The rte_malloc() takes an align argument that can be used to request a memory area that is
aligned on a multiple of this value (which must be a power of two).
On systems with NUMA support, a call to the rte_malloc() function will return memory that has been allocated on the NUMA socket of the core which made the call. A set of APIs is also provided to allow memory to be explicitly allocated on a NUMA socket directly, or to be allocated on the NUMA socket where another core is located, in the case where the memory is to be used by a logical core other than the one doing the memory allocation.
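A brief sketch of these calls (the type string and sizes are arbitrary):

#include <rte_malloc.h>
#include <rte_memory.h>
#include <rte_lcore.h>

static void malloc_example(void)
{
    /* 4 KB, cache-line aligned, on the calling core's NUMA socket. */
    void *local = rte_malloc("example", 4096, RTE_CACHE_LINE_SIZE);

    /* Same size, but placed on the socket where lcore 2 lives, for data
     * that will mainly be used by a core on that socket. */
    void *remote = rte_malloc_socket("example", 4096, RTE_CACHE_LINE_SIZE,
                                     rte_lcore_to_socket_id(2));

    rte_free(remote);
    rte_free(local);
}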
Use Cases
This API is meant to be used by an application that requires malloc-like functions at initialization
time.
For allocating/freeing data at runtime, in the fast-path of an application, the memory pool library
should be used instead.
Internal Implementation
Data Structures
There are two data structure types used internally in the malloc library:
• struct malloc_heap - used to track free space on a per-socket basis
• struct malloc_elem - the basic element of allocation and free-space tracking inside the
library.
Note: The malloc_heap structure does not keep track of in-use blocks of memory, since these
are never touched except when they are to be freed again - at which point the pointer to the
block is an input to the free() function.
Fig. 4.3: Example of a malloc heap and malloc elements within the malloc library
Note: If the usage of a particular field in one of the above three usages is not described, the
field can be assumed to have an undefined value in that situation, for example, for padding
headers only the “state” and “pad” fields have valid values.
• heap - this pointer is a reference back to the heap structure from which this block was
allocated. It is used for normal memory blocks when they are being freed, to add the
newly-freed block to the heap’s free-list.
• prev - this pointer points to the header element/block in the memseg immediately behind
the current one. When freeing a block, this pointer is used to reference the previous block
to check if that block is also free. If so, then the two free blocks are merged to form a
single larger block.
• next_free - this pointer is used to chain the free-list of unallocated memory blocks to-
gether. It is only used in normal memory blocks; on malloc() to find a suitable free
block to allocate and on free() to add the newly freed element to the free-list.
• state - This field can have one of three values: FREE, BUSY or PAD. The former two are
to indicate the allocation state of a normal memory block and the latter is to indicate that
the element structure is a dummy structure at the end of the start-of-block padding, i.e.
where the start of the data within a block is not at the start of the block itself, due to
alignment constraints. In that case, the pad header is used to locate the actual malloc
element header for the block. For the end-of-memseg structure, this is always a BUSY
value, which ensures that no element, on being freed, searches beyond the end of the
memseg for other blocks to merge with into a larger free area.
• pad - this holds the length of the padding present at the start of the block. In the case
of a normal block header, it is added to the address of the end of the header to give the
address of the start of the data area, i.e. the value passed back to the application on
a malloc. Within a dummy header inside the padding, this same value is stored, and is
subtracted from the address of the dummy header to yield the address of the actual block
header.
• size - the size of the data block, including the header itself. For end-of-memseg structures, this size is given as zero, though it is never actually checked. For normal blocks which are being freed, this size value is used in place of a “next” pointer to identify the location of the next block of memory so that, if that block is FREE, the two free blocks can be merged into one.
Memory Allocation
On EAL initialization, all memsegs are setup as part of the malloc heap. This setup involves
placing a dummy structure at the end with BUSY state, which may contain a sentinel value if
CONFIG_RTE_MALLOC_DEBUG is enabled, and a proper element header with FREE at the start
for each memseg. The FREE element is then added to the free_list for the malloc heap.
When an application makes a call to a malloc-like function, the malloc function will first index the
lcore_config structure for the calling thread, and determine the NUMA node of that thread.
The NUMA node is used to index the array of malloc_heap structures which is passed as a
parameter to the heap_alloc() function, along with the requested size, type, alignment and
boundary parameters.
The heap_alloc() function will scan the free_list of the heap, and attempt to find a free block
suitable for storing data of the requested size, with the requested alignment and boundary
constraints.
When a suitable free element has been identified, the pointer to be returned to the user is
calculated. The cache-line of memory immediately preceding this pointer is filled with a struct
malloc_elem header. Because of alignment and boundary constraints, there could be free
space at the start and/or end of the element, resulting in the following behavior:
1. Check for trailing space. If the trailing space is big enough, i.e. > 128 bytes, then the free
element is split. If it is not, then we just ignore it (wasted space).
2. Check for space at the start of the element. If the space at the start is small, i.e. <=128
bytes, then a pad header is used, and the remaining space is wasted. If, however, the
remaining space is greater, then the free element is split.
The advantage of allocating the memory from the end of the existing element is that no ad-
justment of the free list needs to take place - the existing element on the free list just has its
size pointer adjusted, and the following element has its “prev” pointer redirected to the newly
created element.
Freeing Memory
To free an area of memory, the pointer to the start of the data area is passed to the free
function. The size of the malloc_elem structure is subtracted from this pointer to get the
element header for the block. If this header is of type PAD then the pad length is further
subtracted from the pointer to get the proper element header for the entire block.
From this element header, we get pointers to the heap from which the block was allocated and
to where it must be freed, as well as the pointer to the previous element, and via the size field,
we can calculate the pointer to the next element. These next and previous elements are then
checked to see if they are also FREE, and if so, they are merged with the current element. This
means that we can never have two FREE memory blocks adjacent to one another, as they are
always merged into a single block.
Ring Library
The ring allows the management of queues. Instead of having a linked list of infinite size, the
rte_ring has the following properties:
• FIFO
• Maximum size is fixed, the pointers are stored in a table
• Lockless implementation
• Multi-consumer or single-consumer dequeue
• Multi-producer or single-producer enqueue
• Bulk dequeue - Dequeues the specified count of objects if successful; otherwise fails
• Bulk enqueue - Enqueues the specified count of objects if successful; otherwise fails
• Burst dequeue - Dequeue the maximum available objects if the specified count cannot
be fulfilled
• Burst enqueue - Enqueue the maximum available objects if the specified count cannot
be fulfilled
The advantages of this data structure over a linked list queue are as follows:
• Faster; only requires a single Compare-And-Swap instruction of sizeof(void *) instead of
several double-Compare-And-Swap instructions.
• Simpler than a full lockless queue.
• Adapted to bulk enqueue/dequeue operations. As pointers are stored in a table, a de-
queue of several objects will not produce as many cache misses as in a linked queue.
Also, a bulk dequeue of many objects does not cost more than a dequeue of a simple
object.
The disadvantages:
• Size is fixed
• Having many rings costs more in terms of memory than a linked list queue. An empty
ring contains at least N pointers.
A simplified representation of a Ring is shown in with consumer and producer head and tail
pointers to objects stored in the data structure.
The following code was added in FreeBSD 8.0, and is used in some network device drivers (at
least in Intel drivers):
• bufring.h in FreeBSD
• bufring.c in FreeBSD
The following is a link describing the Linux Lockless Ring Buffer Design.
Additional Features
Name
A ring is identified by a unique name. It is not possible to create two rings with the same name
(rte_ring_create() returns NULL if this is attempted).
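For example (a sketch; the ring name, size and payload are arbitrary):

#include <stdint.h>
#include <rte_ring.h>
#include <rte_lcore.h>

static struct rte_ring *ring_example(void)
{
    /* 1024-slot ring, single-producer / single-consumer variant. */
    struct rte_ring *r = rte_ring_create("example_ring", 1024,
                                         rte_socket_id(),
                                         RING_F_SP_ENQ | RING_F_SC_DEQ);
    if (r == NULL)          /* also NULL if the name is already in use */
        return NULL;

    void *obj = (void *)(uintptr_t)0x1234;      /* any pointer-sized payload */
    if (rte_ring_enqueue(r, obj) == 0)          /* 0 on success */
        rte_ring_dequeue(r, &obj);
    return r;
}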
Water Marking
The ring can have a high water mark (threshold). Once an enqueue operation reaches the high
water mark, the producer is notified, if the water mark is configured.
This mechanism can be used, for example, to exert a back pressure on I/O to inform the LAN
to PAUSE.
Debug
Use Cases
This section explains how a ring buffer operates. The ring structure is composed of two head
and tail couples; one is used by producers and one is used by the consumers. The figures of
the following sections refer to them as prod_head, prod_tail, cons_head and cons_tail.
Each figure represents a simplified state of the ring, which is a circular buffer. The content
of the function local variables is represented on the top of the figure, and the content of ring
structure is represented on the bottom of the figure.
This section explains what occurs when a producer adds an object to the ring. In this example,
only the producer head and tail (prod_head and prod_tail) are modified, and there is only one
producer.
The initial state is to have a prod_head and prod_tail pointing at the same location.
First, ring->prod_head and ring->cons_tail are copied in local variables. The prod_next lo-
cal variable points to the next element of the table, or several elements after in case of bulk
enqueue.
If there is not enough room in the ring (this is detected by checking cons_tail), it returns an
error.
The second step is to modify ring->prod_head in ring structure to point to the same location
as prod_next.
A pointer to the added object is copied in the ring (obj4).
Once the object is added in the ring, ring->prod_tail in the ring structure is modified to point to
the same location as ring->prod_head. The enqueue operation is finished.
This section explains what occurs when a consumer dequeues an object from the ring. In this
example, only the consumer head and tail (cons_head and cons_tail) are modified and there
is only one consumer.
The initial state is to have a cons_head and cons_tail pointing at the same location.
First, ring->cons_head and ring->prod_tail are copied in local variables. The cons_next local
variable points to the next element of the table, or several elements after in the case of bulk
dequeue.
If there are not enough objects in the ring (this is detected by checking prod_tail), it returns an
error.
The second step is to modify ring->cons_head in the ring structure to point to the same location
as cons_next.
The pointer to the dequeued object (obj1) is copied in the pointer given by the user.
Finally, ring->cons_tail in the ring structure is modified to point to the same location as ring-
>cons_head. The dequeue operation is finished.
This section explains what occurs when two producers concurrently add an object to the ring.
In this example, only the producer head and tail (prod_head and prod_tail) are modified.
The initial state is to have a prod_head and prod_tail pointing at the same location.
On both cores, ring->prod_head and ring->cons_tail are copied in local variables. The
prod_next local variable points to the next element of the table, or several elements after in
the case of bulk enqueue.
If there is not enough room in the ring (this is detected by checking cons_tail), it returns an
error.
The second step is to modify ring->prod_head in the ring structure to point to the same location
as prod_next. This operation is done using a Compare And Swap (CAS) instruction, which
does the following operations atomically:
• If ring->prod_head is different to local variable prod_head, the CAS operation fails, and
the code restarts at first step.
• Otherwise, ring->prod_head is set to local prod_next, the CAS operation is successful,
and processing continues.
In the figure, the operation succeeded on core 1, and step one restarted on core 2.
Each core now wants to update ring->prod_tail. A core can only update it if ring->prod_tail is
equal to the prod_head local variable. This is only true on core 1. The operation is finished on
core 1.
Once ring->prod_tail is updated by core 1, core 2 is allowed to update it too. The operation is
also finished on core 2.
In the preceding figures, the prod_head, prod_tail, cons_head and cons_tail indexes are repre-
sented by arrows. In the actual implementation, these values are not between 0 and size(ring)-
1 as would be assumed. The indexes are between 0 and 2^32 -1, and we mask their value
when we access the pointer table (the ring itself). 32-bit modulo also implies that operations
on indexes (such as, add/subtract) will automatically do 2^32 modulo if the result overflows the
32-bit number range.
The following are two examples that help to explain how indexes are used in a ring.
Note: To simplify the explanation, operations with modulo 16-bit are used instead of modulo
32-bit. In addition, the four indexes are defined as unsigned 16-bit integers, as opposed to
unsigned 32-bit integers in the more realistic case.
Note: For ease of understanding, we use modulo 65536 operations in the above examples. In real execution, an explicit modulo operation would be redundant and costly; the wrap-around happens automatically when the result overflows.
The code always maintains a distance between producer and consumer between 0 and
size(ring)-1. Thanks to this property, we can do subtractions between 2 index values in a
modulo-32bit base: that’s why the overflow of the indexes is not a problem.
At any time, entries and free_entries are between 0 and size(ring)-1, even if only the first term
of subtraction has overflowed:
uint32_t entries = (prod_tail - cons_head);
uint32_t free_entries = (mask + cons_tail - prod_head);
References
Mempool Library
A memory pool is an allocator of a fixed-sized object. In the DPDK, it is identified by name and
uses a mempool handler to store free objects. The default mempool handler is ring based. It
provides some other optional services such as a per-core object cache and an alignment helper
to ensure that objects are padded to spread them equally on all DRAM or DDR3 channels.
This library is used by the Mbuf Library .
Cookies
Stats
Note: The command line must always have the number of memory channels specified for the
processor.
Examples of alignment for different DIMM architectures are shown in Fig. 4.18 and Fig. 4.19.
In this case, the assumption is that a packet is 16 blocks of 64 bytes, which is not true.
The Intel® 5520 chipset has three channels, so in most cases, no padding is required between
objects (except for objects whose size are n x 3 x 64 bytes blocks).
When creating a new pool, the user can specify to use this feature or not.
Local Cache
In terms of CPU usage, the cost of multiple cores accessing a memory pool’s ring of free
buffers may be high since each access requires a compare-and-set (CAS) operation. To avoid
having too many access requests to the memory pool’s ring, the memory pool allocator can
maintain a per-core cache and do bulk requests to the memory pool’s ring, via the cache with
many fewer locks on the actual memory pool structure. In this way, each core has full access to its own cache (without locks) of free objects, and only when the cache fills does the core need to shuffle some of the free objects back to the pool's ring, or obtain more objects when the cache is empty.
While this may mean a number of buffers may sit idle on some core’s cache, the speed at
which a core can access its own cache for a specific memory pool without locks provides
performance gains.
The cache is composed of a small, per-core table of pointers and its length (used as a stack).
This internal cache can be enabled or disabled at creation of the pool.
The maximum size of the cache is static and is defined at compilation time (CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE).
Fig. 4.20 shows a cache in operation.
Alternatively to the internal default per-lcore local cache, an application can cre-
ate and manage external caches through the rte_mempool_cache_create(),
rte_mempool_cache_free() and rte_mempool_cache_flush() calls. These
user-owned caches can be explicitly passed to rte_mempool_generic_put() and
rte_mempool_generic_get(). The rte_mempool_default_cache() call returns the
default internal cache if any. In contrast to the default caches, user-owned caches can be
used by non-EAL threads too.
Mempool Handlers
This allows external memory subsystems, such as external hardware memory management
systems and software based memory allocators, to be used with DPDK.
There are two aspects to a mempool handler.
• Adding the code for your new mempool operations (ops). This is achieved by adding a
new mempool ops code, and using the MEMPOOL_REGISTER_OPS macro.
• Using the new API to call rte_mempool_create_empty() and
rte_mempool_set_ops_byname() to create a new mempool and specifying
which ops to use.
Several different mempool handlers may be used in the same application. A new mem-
pool can be created by using the rte_mempool_create_empty() function, then using
rte_mempool_set_ops_byname() to point the mempool to the relevant mempool handler
callback (ops) structure.
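A sketch of this sequence (here simply re-selecting the standard multi-producer/multi-consumer ring handler by name; the object sizes and counts are arbitrary):

#include <rte_mempool.h>
#include <rte_lcore.h>

static struct rte_mempool *pool_with_ops(void)
{
    struct rte_mempool *mp;

    mp = rte_mempool_create_empty("example_pool", 8192, 2048,
                                  256,    /* per-lcore cache size */
                                  0,      /* private data size */
                                  rte_socket_id(), 0);
    if (mp == NULL)
        return NULL;

    /* Attach the handler before populating; an external handler registered
     * with MEMPOOL_REGISTER_OPS would be selected the same way. */
    if (rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL) < 0 ||
        rte_mempool_populate_default(mp) < 0) {
        rte_mempool_free(mp);
        return NULL;
    }
    return mp;
}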
Legacy applications may continue to use the old rte_mempool_create() API call, which
uses a ring based mempool handler by default. These applications will need to be modified to
use a new mempool handler.
For applications that use rte_pktmbuf_pool_create(), there is a config setting (RTE_MBUF_DEFAULT_MEMPOOL_OPS) that allows the application to make use of an alternative mempool handler.
Use Cases
All allocations that require a high level of performance should use a pool-based memory allo-
cator. Below are some examples:
• Mbuf Library
• Environment Abstraction Layer , for logging service
• Any application that needs to allocate fixed-sized objects in the data plane and that will
be continuously utilized by the system.
Mbuf Library
The mbuf library provides the ability to allocate and free buffers (mbufs) that may be used by
the DPDK application to store message buffers. The message buffers are stored in a mempool,
using the Mempool Library .
A rte_mbuf struct can carry network packet buffers or generic control buffers (indicated by the
CTRL_MBUF_FLAG). This can be extended to other types. The rte_mbuf header structure is
kept as small as possible and currently uses just two cache lines, with the most frequently used
fields being on the first of the two cache lines.
For the storage of the packet data (including protocol headers), two approaches were consid-
ered:
1. Embed metadata within a single memory buffer: the structure followed by a fixed size area for the packet data.
2. Use separate memory buffers for the metadata structure and for the packet data.
The advantage of the first method is that it only needs one operation to allocate/free the whole
memory representation of a packet. On the other hand, the second method is more flexible
and allows the complete separation of the allocation of metadata structures from the allocation
of packet data buffers.
The first method was chosen for the DPDK. The metadata contains control information such as
message type, length, offset to the start of the data and a pointer for additional mbuf structures
allowing buffer chaining.
Message buffers that are used to carry network packets can handle buffer chaining where
multiple buffers are required to hold the complete packet. This is the case for jumbo frames
that are composed of many mbufs linked together through their next field.
For a newly allocated mbuf, the area at which the data begins in the message buffer is
RTE_PKTMBUF_HEADROOM bytes after the beginning of the buffer, which is cache aligned.
Message buffers may be used to carry control information, packets, events, and so on between
different entities in the system. Message buffers may also use their buffer pointers to point to
other message buffer data sections or other structures.
Fig. 4.21 and Fig. 4.22 show some of these scenarios.
The Buffer Manager implements a fairly standard set of buffer access functions to manipulate
network packets.
The Buffer Manager uses the Mempool Library to allocate buffers. Therefore, it ensures
that the packet header is interleaved optimally across the channels and ranks for L3 pro-
cessing. An mbuf contains a field indicating the pool that it originated from. When calling
rte_ctrlmbuf_free(m) or rte_pktmbuf_free(m), the mbuf returns to its original pool.
Constructors
Packet and control mbuf constructors are provided by the API. The rte_pktmbuf_init() and
rte_ctrlmbuf_init() functions initialize some fields in the mbuf structure that are not modified by
the user once created (mbuf type, origin pool, buffer start address, and so on). These functions
are given as callback functions to the rte_mempool_create() function at pool creation time.
Allocating a new mbuf requires the user to specify the mempool from which the mbuf
should be taken. A newly-allocated mbuf contains one segment, with a length
of 0. The offset to data is initialized to have some bytes of headroom in the buffer
(RTE_PKTMBUF_HEADROOM).
Freeing a mbuf means returning it into its original mempool. The content of an mbuf is not
modified when it is stored in a pool (as a free mbuf). Fields initialized by the constructor do not
need to be re-initialized at mbuf allocation.
When freeing a packet mbuf that contains several segments, all of them are freed and returned
to their original mempool.
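As an illustration, a packet mbuf pool and a simple allocate/free cycle might look like the following sketch (pool size, cache size and data room are arbitrary example values):

#include <rte_mbuf.h>
#include <rte_lcore.h>

static struct rte_mbuf *
alloc_one_mbuf(void)
{
    struct rte_mempool *pool;
    struct rte_mbuf *m;

    /* Pool of 8191 mbufs, 256-entry per-lcore cache, default data room. */
    pool = rte_pktmbuf_pool_create("mbuf_pool", 8191, 256, 0,
            RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (pool == NULL)
        return NULL;

    /* Newly allocated mbuf: one segment, data length 0, default headroom. */
    m = rte_pktmbuf_alloc(pool);

    /* When no longer needed, rte_pktmbuf_free(m) returns it to its pool. */
    return m;
}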
Manipulating mbufs
This library provides some functions for manipulating the data in a packet mbuf. For instance:
• Get data length
• Get a pointer to the start of data
• Prepend data before data
• Append data after data
• Remove data at the beginning of the buffer (rte_pktmbuf_adj())
• Remove data at the end of the buffer (rte_pktmbuf_trim())
Refer to the DPDK API Reference for details.
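For example, the following sketch appends room for a payload, prepends room for a header, and then strips both again (it assumes the requested lengths fit within the buffer's tailroom and headroom):

#include <string.h>
#include <rte_mbuf.h>

static void
adjust_mbuf(struct rte_mbuf *m)
{
    /* Reserve 64 bytes at the end of the data area and zero them. */
    char *payload = rte_pktmbuf_append(m, 64);
    if (payload != NULL)
        memset(payload, 0, 64);

    /* Reserve 14 bytes of headroom, e.g. for an Ethernet header. */
    char *hdr = rte_pktmbuf_prepend(m, 14);
    if (hdr != NULL)
        memset(hdr, 0, 14);

    /* rte_pktmbuf_mtod(m, char *) now points at the prepended header. */
    rte_pktmbuf_adj(m, 14);   /* remove data at the beginning */
    rte_pktmbuf_trim(m, 64);  /* remove data at the end */
}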
Meta Information
Some information is retrieved by the network driver and stored in an mbuf to make process-
ing easier. For instance, the VLAN, the RSS hash result (see Poll Mode Driver) and a flag
indicating that the checksum was computed by hardware.
An mbuf also contains the input port (where it comes from), and the number of segment mbufs
in the chain.
For chained buffers, only the first mbuf of the chain stores this meta information.
For instance, this is the case on RX side for the IEEE1588 packet timestamp mechanism, the
VLAN tagging and the IP checksum computation.
On TX side, it is also possible for an application to delegate some processing to the hardware
if it supports it. For instance, the PKT_TX_IP_CKSUM flag allows the computation of the IPv4
checksum to be offloaded.
The following examples explain how to configure different TX offloads on a VXLAN-encapsulated
TCP packet: out_eth/out_ip/out_udp/vxlan/in_eth/in_ip/in_tcp/payload. A C-level sketch of the
first case follows the list.
• calculate checksum of out_ip:
mb->l2_len = len(out_eth)
mb->l3_len = len(out_ip)
mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM
set out_ip checksum to 0 in the packet
This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
• calculate checksum of in_ip and in_tcp:
mb->l2_len = len(out_eth + out_ip + out_udp + vxlan + in_eth)
mb->l3_len = len(in_ip)
mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM
set in_ip checksum to 0 in the packet
set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
It is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM and
DEV_TX_OFFLOAD_TCP_CKSUM. Note that it can only work if the outer L4 checksum is 0.
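A C-level sketch of the first case (outer IPv4 checksum only) is given below; it assumes the mbuf data starts at the outer Ethernet header, that the outer IP header carries no options, and that the port advertises DEV_TX_OFFLOAD_IPV4_CKSUM:

#include <rte_mbuf.h>
#include <rte_ether.h>
#include <rte_ip.h>

static void
prepare_outer_ip_cksum(struct rte_mbuf *m)
{
    struct ipv4_hdr *ip;

    m->l2_len = sizeof(struct ether_hdr);
    m->l3_len = sizeof(struct ipv4_hdr);          /* no IP options assumed */
    m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM;

    /* Set the checksum field to 0 in the packet; hardware fills it in. */
    ip = rte_pktmbuf_mtod_offset(m, struct ipv4_hdr *, m->l2_len);
    ip->hdr_checksum = 0;
}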
Direct and Indirect Buffers
A direct buffer is a buffer that is completely separate and self-contained. An indirect buffer
behaves like a direct buffer but for the fact that the buffer pointer and data offset in it refer to
data in another direct buffer. This is useful in situations where packets need to be duplicated
or fragmented, since indirect buffers provide the means to reuse the same packet data across
multiple buffers.
A buffer becomes indirect when it is “attached” to a direct buffer using the rte_pktmbuf_attach()
function. Each buffer has a reference counter field and whenever an indirect buffer is attached
to the direct buffer, the reference counter on the direct buffer is incremented. Similarly, when-
ever the indirect buffer is detached, the reference counter on the direct buffer is decremented.
If the resulting reference counter is equal to 0, the direct buffer is freed since it is no longer in
use.
There are a few things to remember when dealing with indirect buffers. First of all, an indirect
buffer is never attached to another indirect buffer. Attempting to attach buffer A to indirect buffer
B that is attached to C, makes rte_pktmbuf_attach() automatically attach A to C, effectively
cloning B. Secondly, for a buffer to become indirect, its reference counter must be equal to 1,
that is, it must not be already referenced by another indirect buffer. Finally, it is not possible to
reattach an indirect buffer to the direct buffer (unless it is detached first).
While the attach/detach operations can be invoked directly using the rte_pktmbuf_attach()
and rte_pktmbuf_detach() functions, it is suggested to use the higher-level
rte_pktmbuf_clone() function, which takes care of the correct initialization of an indirect
buffer and can clone buffers with multiple segments.
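For instance, duplicating a packet while keeping a single copy of its data could be sketched as follows (the dedicated pool used for the indirect mbufs is an assumption of this example):

#include <rte_mbuf.h>

static struct rte_mbuf *
duplicate_packet(struct rte_mbuf *pkt, struct rte_mempool *clone_pool)
{
    /* Allocates indirect mbufs from clone_pool, attaches them to pkt's
     * segments and increments pkt's reference counters accordingly. */
    struct rte_mbuf *clone = rte_pktmbuf_clone(pkt, clone_pool);

    /* Freeing the clone later only detaches it; the data in pkt is
     * released once its reference counter drops back to zero. */
    return clone;
}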
Poll Mode Driver
The DPDK includes 1 Gigabit, 10 Gigabit and 40 Gigabit and para virtualized virtio Poll Mode
Drivers.
A Poll Mode Driver (PMD) consists of APIs, provided through the BSD driver running in user
space, to configure the devices and their respective queues. In addition, a PMD accesses the
RX and TX descriptors directly without any interrupts (with the exception of Link Status Change
interrupts) to quickly receive, process and deliver packets in the user’s application. This section
describes the requirements of the PMDs, their global design principles and proposes a high-
level architecture and a generic external API for the Ethernet PMDs.
The DPDK environment for packet processing applications allows for two models, run-to-
completion and pipe-line:
• In the run-to-completion model, a specific port’s RX descriptor ring is polled for packets
through an API. Packets are then processed on the same core and placed on a port’s TX
descriptor ring through an API for transmission.
• In the pipe-line model, one core polls one or more port’s RX descriptor ring through
an API. Packets are received and passed to another core via a ring. The other core
continues to process the packet which then may be placed on a port’s TX descriptor ring
through an API for transmission.
In a synchronous run-to-completion model, each logical core assigned to the DPDK executes
a packet processing loop that includes the following steps (a minimal loop is sketched after the list):
• Retrieve input packets through the PMD receive API
• Process each received packet one at a time, up to its forwarding
• Send pending output packets through the PMD transmit API
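A minimal sketch of such a loop, polling RX queue 0 of a single port and transmitting on the same port (port and queue identifiers, as well as the burst size, are placeholders):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void
lcore_run_to_completion(uint8_t port_id)
{
    struct rte_mbuf *pkts[BURST_SIZE];
    uint16_t nb_rx, nb_tx;

    for (;;) {
        /* Retrieve input packets through the PMD receive API. */
        nb_rx = rte_eth_rx_burst(port_id, 0, pkts, BURST_SIZE);
        if (nb_rx == 0)
            continue;

        /* ... process each received packet, up to its forwarding ... */

        /* Send pending output packets through the PMD transmit API. */
        nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);

        /* Free any packets the driver could not queue for transmission. */
        while (nb_tx < nb_rx)
            rte_pktmbuf_free(pkts[nb_tx++]);
    }
}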
Conversely, in an asynchronous pipe-line model, some logical cores may be dedicated to the
retrieval of received packets and other logical cores to the processing of previously received
packets. Received packets are exchanged between logical cores through rings. The loop for
packet retrieval includes the following steps:
• Retrieve input packets through the PMD receive API
• Provide received packets to processing lcores through packet queues
The loop for packet processing includes the following steps (a sketch of the ring-based exchange
between the two types of lcores follows the list):
• Retrieve the received packet from the packet queue
• Process the received packet, up to its retransmission if forwarded
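A sketch of the ring-based exchange between an RX lcore and a worker lcore is shown below; the single-object ring API is used for brevity, and the ring itself is assumed to have been created beforehand with rte_ring_create():

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

/* RX lcore: retrieve packets and hand them to the workers through a ring. */
static void
lcore_rx(uint8_t port_id, struct rte_ring *ring)
{
    struct rte_mbuf *pkts[32];
    uint16_t i, nb_rx;

    for (;;) {
        nb_rx = rte_eth_rx_burst(port_id, 0, pkts, 32);
        for (i = 0; i < nb_rx; i++)
            if (rte_ring_enqueue(ring, pkts[i]) != 0)
                rte_pktmbuf_free(pkts[i]);    /* ring full, drop */
    }
}

/* Worker lcore: dequeue packets and process them, up to retransmission. */
static void
lcore_worker(struct rte_ring *ring)
{
    void *obj;

    for (;;) {
        if (rte_ring_dequeue(ring, &obj) != 0)
            continue;
        /* ... process the mbuf, possibly transmit it with rte_eth_tx_burst() ... */
        rte_pktmbuf_free((struct rte_mbuf *)obj);
    }
}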
To avoid any unnecessary interrupt processing overhead, the execution environment must not
use any asynchronous notification mechanisms. Whenever needed and appropriate, asyn-
chronous communication should be introduced as much as possible through the use of rings.
Avoiding lock contention is a key issue in a multi-core environment. To address this issue,
PMDs are designed to work with per-core private resources as much as possible. For example,
a PMD maintains a separate transmit queue per-core, per-port. In the same way, every receive
queue of a port is assigned to and polled by a single logical core (lcore).
To comply with Non-Uniform Memory Access (NUMA), memory management is designed to
assign to each logical core a private buffer pool in local memory to minimize remote memory
access. The configuration of packet buffer pools should take into account the underlying physi-
cal memory architecture in terms of DIMMS, channels and ranks. The application must ensure
that appropriate parameters are given at memory pool creation time. See Mempool Library.
Design Principles
The API and architecture of the Ethernet* PMDs are designed with the following guidelines in
mind.
PMDs must help global policy-oriented decisions to be enforced at the upper application level.
Conversely, NIC PMD functions should not impede the benefits expected by upper-level global
policies, or worse prevent such policies from being applied.
For instance, both the receive and transmit functions of a PMD have a maximum number of
packets/descriptors to poll. This allows a run-to-completion processing stack to statically fix or
to dynamically adapt its overall behavior through different global loop policies, such as:
• Receive, process immediately and transmit packets one at a time in a piecemeal fashion.
• Receive as many packets as possible, then process all received packets, transmitting
them immediately.
• Receive a given maximum number of packets, process the received packets, accumulate
them and finally send all accumulated packets to transmit.
To achieve optimal performance, overall software design choices and pure software optimiza-
tion techniques must be considered and balanced against available low-level hardware-based
optimization features (CPU cache properties, bus speed, NIC PCI bandwidth, and so on). The
case of packet transmission is an example of this software/hardware tradeoff issue when opti-
mizing burst-oriented network packet processing engines. In the initial case, the PMD could ex-
port only an rte_eth_tx_one function to transmit one packet at a time on a given queue. On top
of that, one can easily build an rte_eth_tx_burst function that loops invoking the rte_eth_tx_one
function to transmit several packets at a time.
The DPDK supports NUMA allowing for better performance when a processor’s logical cores
and interfaces utilize its local memory. Therefore, mbuf allocation associated with local PCIe*
interfaces should be allocated from memory pools created in the local memory. The buffers
should, if possible, remain on the local processor to obtain the best performance results and RX
and TX buffer descriptors should be populated with mbufs allocated from a mempool allocated
from local memory.
The run-to-completion model also performs better if packet or data manipulation is in local
memory instead of a remote processor's memory. This is also true for the pipe-line model
provided all logical cores used are located on the same processor.
Multiple logical cores should never share receive or transmit queues for interfaces since this
would require global locks and hinder performance.
Device Identification
Each NIC port is uniquely designated by its (bus/bridge, device, function) PCI identifiers as-
signed by the PCI probing/enumeration function executed at DPDK initialization. Based on
their PCI identifier, NIC ports are assigned two other identifiers:
• A port index used to designate the NIC port in all functions exported by the PMD API.
• A port name used to designate the port in console messages, for administration or de-
bugging purposes. For ease of use, the port name includes the port index.
Device Configuration
On-the-Fly Configuration
All device features that can be started or stopped “on the fly” (that is, without stopping the
device) do not require the PMD API to export dedicated functions for this purpose.
All that is required is the mapping address of the device PCI registers to implement the config-
uration of these features in specific functions outside of the drivers.
For this purpose, the PMD API exports a function that provides all the information associated
with a device that can be used to set up a given device feature outside of the driver. This
includes the PCI vendor identifier, the PCI device identifier, the mapping address of the PCI
device registers, and the name of the driver.
The main advantage of this approach is that it gives complete freedom on the choice of the
API used to configure, to start, and to stop such features.
As an example, refer to the configuration of the IEEE1588 feature for the Intel® 82576 Giga-
bit Ethernet Controller and the Intel® 82599 10 Gigabit Ethernet Controller in the
testpmd application.
Other features such as the L3/L4 5-Tuple packet filtering feature of a port can be configured in
the same way. Ethernet* flow control (pause frame) can be configured on the individual port.
Refer to the testpmd source code for details. Also, L4 (UDP/TCP/ SCTP) checksum offload by
the NIC can be enabled for an individual packet as long as the packet mbuf is set up correctly.
See Hardware Offload for details.
• The minimum transmit packets to free threshold (tx_free_thresh). When the number of
descriptors used to transmit packets exceeds this threshold, the network adaptor should
be checked to see if it has written back descriptors. A value of 0 can be passed during
the TX queue configuration to indicate the default value should be used. The default
value for tx_free_thresh is 32. This ensures that the PMD does not search for completed
descriptors until at least 32 have been processed by the NIC for this queue.
• The minimum RS bit threshold. The minimum number of transmit descriptors to use be-
fore setting the Report Status (RS) bit in the transmit descriptor. Note that this parameter
may only be valid for Intel 10 GbE network adapters. The RS bit is set on the last de-
scriptor used to transmit a packet if the number of descriptors used since the last RS bit
setting, up to the first descriptor used to transmit the packet, exceeds the transmit RS
bit threshold (tx_rs_thresh). In short, this parameter controls which transmit descriptors
are written back to host memory by the network adapter. A value of 0 can be passed
during the TX queue configuration to indicate that the default value should be used. The
default value for tx_rs_thresh is 32. This ensures that at least 32 descriptors are used
before the network adapter writes back the most recently used descriptor. This saves
upstream PCIe* bandwidth resulting from TX descriptor write-backs. It is important to
note that the TX Write-back threshold (TX wthresh) should be set to 0 when tx_rs_thresh
is greater than 1. Refer to the Intel® 82599 10 Gigabit Ethernet Controller Datasheet for
more details.
The following constraints must be satisfied for tx_free_thresh and tx_rs_thresh:
• tx_rs_thresh must be greater than 0.
• tx_rs_thresh must be less than the size of the ring minus 2.
• tx_rs_thresh must be less than or equal to tx_free_thresh.
• tx_free_thresh must be greater than 0.
• tx_free_thresh must be less than the size of the ring minus 3.
• For optimal performance, TX wthresh should be set to 0 when tx_rs_thresh is greater
than 1.
One descriptor in the TX ring is used as a sentinel to avoid a hardware race condition, hence
the maximum threshold constraints.
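For example, these thresholds can be supplied at TX queue setup time through struct rte_eth_txconf; the values shown are the defaults discussed above and are given for illustration only (in practice the defaults reported by rte_eth_dev_info_get() are a good starting point):

#include <rte_ethdev.h>

static int
setup_tx_queue(uint8_t port_id, uint16_t queue_id, uint16_t nb_desc,
        unsigned int socket_id)
{
    struct rte_eth_txconf txconf = {
        .tx_rs_thresh = 32,             /* set the RS bit every 32 descriptors */
        .tx_free_thresh = 32,           /* look for completions after 32 used */
        .tx_thresh = { .wthresh = 0 },  /* required when tx_rs_thresh > 1 */
    };

    return rte_eth_tx_queue_setup(port_id, queue_id, nb_desc, socket_id,
            &txconf);
}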
Note: When configuring for DCB operation, at port initialization, both the number of transmit
queues and the number of receive queues must be set to 128.
Hardware Offload
Generalities
By default, all functions exported by a PMD are lock-free functions that are assumed not to be
invoked in parallel on different logical cores to work on the same target object. For instance,
a PMD receive function cannot be invoked in parallel on two logical cores to poll the same RX
queue of the same port. Of course, this function can be invoked in parallel by different logical
cores on different RX queues. It is the responsibility of the upper-level application to enforce
this rule.
If needed, parallel accesses by multiple logical cores to shared queues can be explicitly pro-
tected by dedicated inline lock-aware functions built on top of their corresponding lock-free
functions of the PMD API.
The Ethernet device API exported by the Ethernet PMDs is described in the DPDK API Refer-
ence.
Extended Statistics API
The extended statistics API allows each individual PMD to expose a unique set of statistics.
Accessing these from application programs is done via two functions:
• rte_eth_xstats_get: Fills in an array of struct rte_eth_xstat with extended
statistics.
• rte_eth_xstats_get_names: Fills in an array of struct rte_eth_xstat_name
with extended statistic name lookup information.
Each struct rte_eth_xstat contains an identifier and value pair, and each
struct rte_eth_xstat_name contains a string. Each identifier within the struct
rte_eth_xstat lookup array must have a corresponding entry in the struct
rte_eth_xstat_name lookup array. Within the latter the index of the entry is the identi-
fier the string is associated with. These identifiers, as well as the number of extended statistics
exposed, must remain constant during runtime. Note that extended statistic identifiers are
driver-specific, and hence might not be the same for different ports.
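A typical retrieval sequence therefore looks like the following sketch (error handling kept minimal; the first call with a NULL array returns the number of statistics to allocate for):

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

static void
print_xstats(uint8_t port_id)
{
    int i, nb = rte_eth_xstats_get_names(port_id, NULL, 0);
    struct rte_eth_xstat_name *names;
    struct rte_eth_xstat *values;

    if (nb <= 0)
        return;
    names = calloc(nb, sizeof(*names));
    values = calloc(nb, sizeof(*values));
    if (names == NULL || values == NULL)
        goto out;

    if (rte_eth_xstats_get_names(port_id, names, nb) != nb ||
            rte_eth_xstats_get(port_id, values, nb) != nb)
        goto out;

    /* The id of each value indexes the corresponding name entry. */
    for (i = 0; i < nb; i++)
        printf("%s: %" PRIu64 "\n", names[values[i].id].name,
                values[i].value);
out:
    free(names);
    free(values);
}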
A naming scheme exists for the strings exposed to clients of the API. This is to allow scraping of
the API for statistics of interest. The naming scheme uses strings split by a single underscore
_. The scheme is as follows:
• direction
• detail 1
• detail 2
• detail n
• unit
Examples of common statistics xstats strings, formatted to comply with the scheme proposed
above:
• rx_bytes
• rx_crc_errors
• tx_multicast_packets
The scheme, although quite simple, allows flexibility in presenting and reading information
from the statistic strings. The following example illustrates the naming scheme: rx_packets.
In this example, the string is split into two components. The first component rx indicates that
the statistic is associated with the receive side of the NIC. The second component packets
indicates that the unit of measure is packets.
A more complicated example: tx_size_128_to_255_packets. In this example, tx indi-
cates transmission, size is the first detail, 128 etc are more details, and packets indicates
that this is a packet counter.
Some additions in the metadata scheme are as follows:
• If the first part does not match rx or tx, the statistic does not have an affinity with either
receive or transmit.
• If the first letter of the second part is q and this q is followed by a number, this statistic is
part of a specific queue.
An example where queue numbers are used is as follows: tx_q7_bytes which indicates this
statistic applies to queue number 7, and represents the number of transmitted bytes on that
queue.
Generic flow API (rte_flow)
Overview
This API provides a generic means to configure hardware to match specific ingress or egress
traffic, alter its fate and query related counters according to any number of user-defined rules.
It is named rte_flow after the prefix used for all its symbols, and is defined in rte_flow.h.
• Matching can be performed on packet data (protocol headers, payload) and properties
(e.g. associated physical port, virtual device function ID).
Flow rule
Description
A flow rule is the combination of attributes with a matching pattern and a list of actions. Flow
rules form the basis of this API.
Flow rules can have several distinct actions (such as counting, encapsulating, decapsulating
before redirecting packets to a particular queue, etc.), instead of relying on several rules to
achieve this and having applications deal with hardware implementation details regarding their
order.
Support for different priority levels on a rule basis is provided, for example in order to force a
more specific rule to come before a more generic one for packets matched by both. However
hardware support for more than a single priority level cannot be guaranteed. When supported,
the number of available priority levels is usually low, which is why they can also be implemented
in software by PMDs (e.g. missing priority levels may be emulated by reordering rules).
In order to remain as hardware-agnostic as possible, by default all rules are considered to
have the same priority, which means that the order between overlapping rules (when a packet
is matched by several filters) is undefined.
PMDs may refuse to create overlapping rules at a given priority level when they can be detected
(e.g. if a pattern matches an existing filter).
Thus predictable results for a given priority level can only be achieved with non-overlapping
rules, using perfect matching on all protocol layers.
Flow rules can also be grouped, the flow rule priority is specific to the group they belong to. All
flow rules in a given group are thus processed either before or after another group.
Support for multiple actions per rule may be implemented internally on top of non-default hard-
ware priorities, as a result both features may not be simultaneously available to applications.
Considering that allowed pattern/actions combinations cannot be known in advance and would
result in an impractically large number of capabilities to expose, a method is provided to vali-
date a given rule from the current device configuration state.
This enables applications to check if the rule types they need is supported at initialization time,
before starting their data path. This method can be used anytime, its only requirement being
that the resources needed by a rule should exist (e.g. a target RX queue should be configured
first).
Each defined rule is associated with an opaque handle managed by the PMD, applications are
responsible for keeping it. These can be used for queries and rules management, such as
retrieving counters or other data and destroying them.
To avoid resource leaks on the PMD side, handles must be explicitly destroyed by the applica-
tion before releasing associated resources such as queues and ports.
The following sections cover:
• Attributes (represented by struct rte_flow_attr): properties of a flow rule such
as its direction (ingress or egress) and priority.
• Pattern item (represented by struct rte_flow_item): part of a matching pattern
that either matches specific packet data or traffic properties. It can also describe proper-
ties of the pattern itself, such as inverted matching.
• Matching pattern: traffic properties to look for, a combination of any number of items.
• Actions (represented by struct rte_flow_action): operations to perform when-
ever a packet is matched by a pattern.
Attributes
Attribute: Group
Flow rules can be grouped by assigning them a common group number. Lower values have
higher priority. Group 0 has the highest priority.
Although optional, applications are encouraged to group similar rules as much as possible
to fully take advantage of hardware capabilities (e.g. optimized matching) and work around
limitations (e.g. a single pattern type possibly allowed in a given group).
Note that support for more than a single group is not guaranteed.
Attribute: Priority
A priority level can be assigned to a flow rule. Like groups, lower values denote higher priority,
with 0 as the maximum.
A rule with priority 0 in group 8 is always matched after a rule with priority 8 in group 0.
Group and priority levels are arbitrary and up to the application, they do not need to be con-
tiguous nor start from 0, however the maximum number varies between devices and may be
affected by existing flow rules.
If a packet is matched by several rules of a given group for a given priority level, the outcome
is undefined. It can take any path, may be duplicated or even cause unrecoverable errors.
Note that support for more than a single priority level is not guaranteed.
Pattern item
An item is specified through spec, last and mask fields; non-masked bits stand for any value.
For example, a partially masked Ethernet item may match headers such as:
• dst: ??:??:??:??:01
• type: 0x????
where ? stands for any value.
Matching pattern
A pattern is formed by stacking items starting from the lowest protocol layer to match. This
stacking restriction does not apply to meta items which can be placed anywhere in the stack
without affecting the meaning of the resulting pattern.
Patterns are terminated by END items.
Examples:
Table 4.5: UDPv6 anywhere
Index Item
0 IPv6
1 UDP
2 END
If supported by the PMD, omitting one or several protocol layers at the bottom of the stack
as in the above example (missing an Ethernet specification) enables looking up anywhere in
packets.
It is unspecified whether the payload of supported encapsulations (e.g. VXLAN payload) is
matched by such a pattern, which may apply to inner, outer or both packets.
Meta item types match meta-data or affect pattern processing instead of matching packet data
directly; most of them do not need a specification structure. This particularity allows them to be
specified anywhere in the stack without causing any side effect.
Item: END
End marker for item lists. Prevents further processing of items, thereby ending the pattern.
Item: VOID
Item: INVERT
Inverted matching, i.e. process packets that do not match the pattern.
• spec, last and mask are ignored.
Table 4.10: INVERT
Field Value
spec ignored
last ignored
mask ignored
Usage example, matching non-TCPv4 packets only:
Table 4.11: Anything but TCPv4
Index Item
0 INVERT
1 Ethernet
2 IPv4
3 TCP
4 END
Item: PF
Table 4.12: PF
Field Value
spec unset
last unset
mask unset
Item: VF
Table 4.13: VF
Field Subfield Value
spec id destination VF ID
last id upper range value
mask id zeroed to match any VF ID
Item: PORT
Matches packets coming from the specified physical port of the underlying device.
The first PORT item overrides the physical port normally associated with the specified DPDK
input port (port_id). This item can be provided several times to match additional physical ports.
Note that physical ports are not necessarily tied to DPDK input ports (port_id) when those are
not under DPDK control. Possible values are specific to each device, they are not necessarily
indexed from zero and may not be contiguous.
As a device property, the list of allowed values as well as the value associated with a port_id
should be retrieved by other means.
• Default mask matches any port index.
Most of these are basically protocol header definitions with associated bit-masks. They must
be specified (stacked) from lowest to highest protocol layer to form a matching pattern.
The following list is not exhaustive, new protocols will be added in the future.
Item: ANY
Matches any protocol in place of the current layer, a single ANY may also stand for several
protocol layers.
This is usually specified as the first pattern item when looking for a protocol anywhere in a
packet.
• Default mask stands for any number of layers.
Item: RAW
Example pattern looking for several strings at various offsets of a UDP payload, using com-
bined RAW items:
Note that matching subsequent pattern items would resume after “baz”, not “bar” since match-
ing is always performed after the previous item of the stack.
Item: ETH
Item: VLAN
Item: IPV4
Item: IPV6
Item: ICMP
Item: UDP
Item: TCP
Item: SCTP
Item: VXLAN
Actions
Each possible action is represented by a type. Some have associated configuration structures.
Several actions combined in a list can be assigned to a flow rule. That list is not ordered.
They fall in three categories:
• Terminating actions (such as QUEUE, DROP, RSS, PF, VF) that prevent processing
matched packets by subsequent flow rules, unless overridden with PASSTHRU.
• Non-terminating actions (PASSTHRU, DUP) that leave matched packets up for additional
processing by subsequent flow rules.
• Other non-terminating meta actions that do not affect the fate of packets (END, VOID,
MARK, FLAG, COUNT).
When several actions are combined in a flow rule, they should all have different types (e.g.
dropping a packet twice is not possible).
Only the last action of a given type is taken into account. PMDs still perform error checking on
the entire list.
Like matching patterns, action lists are terminated by END items.
Note that PASSTHRU is the only action able to override a terminating rule.
Example of action that redirects packets to queue index 10:
Table 4.19: Queue action
Field Value
index 10
Action lists examples, their order is not significant, applications must consider all actions to be
performed simultaneously:
Action types
Common action types are described in this section. Like pattern item types, this list is not
exhaustive as new actions will be added in the future.
Action: END
End marker for action lists. Prevents further processing of actions, thereby ending the list.
• Its numeric value is 0 for convenience.
• PMD support is mandatory.
• No configurable properties.
Table 4.24: END
Field
no properties
Action: VOID
Table 4.25: VOID
Field
no properties
Action: PASSTHRU
Leaves packets up for additional processing by subsequent flow rules. This is the default when
a rule does not contain a terminating action, but can be specified to force a rule to become
non-terminating.
• No configurable properties.
Table 4.26: PASSTHRU
Field
no properties
Example to copy a packet to a queue and continue processing by subsequent flow rules:
Action: MARK
Attaches an integer value to packets and sets PKT_RX_FDIR and PKT_RX_FDIR_ID mbuf
flags.
This value is arbitrary and application-defined. Maximum allowed value depends on the under-
lying implementation. It is returned in the hash.fdir.hi mbuf field.
Action: FLAG
Flags packets. Similar to Action: MARK without a specific value; only sets the PKT_RX_FDIR
mbuf flag.
• No configurable properties.
Table 4.29: FLAG
Field
no properties
Action: QUEUE
Action: DROP
Drop packets.
• No configurable properties.
• Terminating by default.
Table 4.31: DROP
Field
no properties
Action: COUNT
Table 4.32: COUNT
Field
no properties
Query structure to retrieve and reset flow rule counters:
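The structure is defined in rte_flow.h along the following lines (reproduced here for convenience; the header remains the authoritative reference):

struct rte_flow_query_count {
    uint32_t reset:1;     /* reset counters after query [in] */
    uint32_t hits_set:1;  /* hits field is set [out] */
    uint32_t bytes_set:1; /* bytes field is set [out] */
    uint32_t reserved:29; /* reserved, must be zero [in, out] */
    uint64_t hits;        /* number of hits for this rule [out] */
    uint64_t bytes;       /* number of bytes through this rule [out] */
};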
Action: DUP
Action: RSS
Similar to QUEUE, except RSS is additionally performed on packets to spread them among
several queues according to the provided parameters.
Note: RSS hash result is stored in the hash.rss mbuf field which overlaps hash.fdir.lo.
Since Action: MARK sets the hash.fdir.hi field only, both can be requested simultane-
ously.
• Terminating by default.
Action: PF
Table 4.36: PF
Field
no properties
Action: VF
Table 4.37: VF
Field Value
original use original VF ID if possible
vf VF ID to redirect packets to
Negative types
Planned types
Rules management
A rather simple API with few functions is provided to fully manage flow rules.
Each created flow rule is associated with an opaque, PMD-specific handle pointer. The appli-
cation is responsible for keeping it until the rule is destroyed.
Flows rules are represented by struct rte_flow objects.
Validation
Given that expressing a definite set of device capabilities is not practical, a dedicated function
is provided to check if a flow rule is supported and can be created.
int
rte_flow_validate(uint8_t port_id,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error);
While this function has no effect on the target device, the flow rule is validated against its
current configuration state and the returned value should be considered valid by the caller for
that state only.
The returned value is guaranteed to remain valid only as long as no successful calls to
rte_flow_create() or rte_flow_destroy() are made in the meantime and no device
parameters affecting flow rules are modified in any way, due to possible collisions or resource
limitations (although in such cases EINVAL should not be returned).
Arguments:
• port_id: port identifier of Ethernet device.
• attr: flow rule attributes.
• pattern: pattern specification (list terminated by the END pattern item).
• actions: associated actions (list terminated by the END action).
• error: perform verbose error reporting if not NULL. PMDs initialize this structure in case
of error only.
Return values:
• 0 if flow rule is valid and can be created. A negative errno value otherwise (rte_errno
is also set), the following errors are defined.
• -ENOSYS: underlying device does not support this functionality.
Creation
Creating a flow rule is similar to validating one, except the rule is actually created and a handle
returned.
struct rte_flow *
rte_flow_create(uint8_t port_id,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error);
Arguments:
• port_id: port identifier of Ethernet device.
• attr: flow rule attributes.
• pattern: pattern specification (list terminated by the END pattern item).
• actions: associated actions (list terminated by the END action).
• error: perform verbose error reporting if not NULL. PMDs initialize this structure in case
of error only.
Return values:
A valid handle in case of success, NULL otherwise and rte_errno is set to the positive
version of one of the error codes defined for rte_flow_validate().
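As an illustration, the following sketch validates and then creates a rule directing all IPv4 traffic received on a port to queue 10 (the attribute, pattern and action values are example choices, not requirements):

#include <string.h>
#include <rte_flow.h>

static struct rte_flow *
ipv4_to_queue_10(uint8_t port_id)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },   /* NULL spec: match any */
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = 10 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error error;

    memset(&error, 0, sizeof(error));
    if (rte_flow_validate(port_id, &attr, pattern, actions, &error) != 0)
        return NULL;   /* error.message, if set, describes the problem */

    return rte_flow_create(port_id, &attr, pattern, actions, &error);
}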
Destruction
Flow rule destruction is not automatic, and a queue or a port should not be released if any
are still attached to them. Applications must take care of performing this step before releasing
resources.
int
rte_flow_destroy(uint8_t port_id,
struct rte_flow *flow,
struct rte_flow_error *error);
Failure to destroy a flow rule handle may occur when other flow rules depend on it, and de-
stroying it would result in an inconsistent state.
This function is only guaranteed to succeed if handles are destroyed in reverse order of their
creation.
Arguments:
• port_id: port identifier of Ethernet device.
• flow: flow rule handle to destroy.
• error: perform verbose error reporting if not NULL. PMDs initialize this structure in case
of error only.
Return values:
• 0 on success, a negative errno value otherwise and rte_errno is set.
Flush
Convenience function to destroy all flow rule handles associated with a port. They are released
as with successive calls to rte_flow_destroy().
int
rte_flow_flush(uint8_t port_id,
struct rte_flow_error *error);
In the unlikely event of failure, handles are still considered destroyed and no longer valid but
the port must be assumed to be in an inconsistent state.
Arguments:
• port_id: port identifier of Ethernet device.
• error: perform verbose error reporting if not NULL. PMDs initialize this structure in case
of error only.
Return values:
• 0 on success, a negative errno value otherwise and rte_errno is set.
Query
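For reference, the query prototype matching the arguments listed below is (see rte_flow.h for the authoritative declaration):

int
rte_flow_query(uint8_t port_id,
               struct rte_flow *flow,
               enum rte_flow_action_type action,
               void *data,
               struct rte_flow_error *error);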
Arguments:
• port_id: port identifier of Ethernet device.
• flow: flow rule handle to query.
• action: action type to query.
• data: pointer to storage for the associated query data type.
• error: perform verbose error reporting if not NULL. PMDs initialize this structure in case
of error only.
Return values:
• 0 on success, a negative errno value otherwise and rte_errno is set.
Verbose error reporting
The defined errno values may not be accurate enough for users or application developers
who want to investigate issues related to flow rules management. A dedicated error object is
defined for this purpose:
enum rte_flow_error_type {
RTE_FLOW_ERROR_TYPE_NONE, /**< No error. */
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, /**< Cause unspecified. */
RTE_FLOW_ERROR_TYPE_HANDLE, /**< Flow rule (handle). */
RTE_FLOW_ERROR_TYPE_ATTR_GROUP, /**< Group field. */
RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, /**< Priority field. */
RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, /**< Ingress field. */
RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, /**< Egress field. */
RTE_FLOW_ERROR_TYPE_ATTR, /**< Attributes structure. */
RTE_FLOW_ERROR_TYPE_ITEM_NUM, /**< Pattern length. */
RTE_FLOW_ERROR_TYPE_ITEM, /**< Specific pattern item. */
RTE_FLOW_ERROR_TYPE_ACTION_NUM, /**< Number of actions. */
RTE_FLOW_ERROR_TYPE_ACTION, /**< Specific action. */
};
struct rte_flow_error {
enum rte_flow_error_type type; /**< Cause field and error types. */
const void *cause; /**< Object responsible for the error. */
const char *message; /**< Human-readable error message. */
};
Error type RTE_FLOW_ERROR_TYPE_NONE stands for no error, in which case remaining fields
can be ignored. Other error types describe the type of the object pointed by cause.
If non-NULL, cause points to the object responsible for the error. For a flow rule, this may be
a pattern item or an individual action.
If non-NULL, message provides a human-readable error message.
This object is normally allocated by applications and set by PMDs in case of error. The message
points to a constant string which does not need to be freed by the application; however, its
pointer can be considered valid only as long as its associated DPDK port remains configured.
Closing the underlying device or unloading the PMD invalidates it.
Caveats
• DPDK does not keep track of flow rules definitions or flow rule objects automatically.
Applications may keep track of the former and must keep track of the latter. PMDs may
also do it for internal needs, however this must not be relied on by applications.
• Flow rules are not maintained between successive port initializations. An application
exiting without releasing them and restarting must re-create them from scratch.
• API operations are synchronous and blocking (EAGAIN cannot be returned).
• There is no provision for reentrancy/multi-thread safety, although nothing should prevent
different devices from being configured at the same time. PMDs may protect their control
path functions accordingly.
• Stopping the data path (TX/RX) should not be necessary when managing flow rules. If
this cannot be achieved naturally or with workarounds (such as temporarily replacing the
burst function pointers), an appropriate error code must be returned (EBUSY).
• PMDs, not applications, are responsible for maintaining flow rules configuration when
stopping and restarting a port or performing other actions which may affect them. They
can only be destroyed explicitly by applications.
For devices exposing multiple ports sharing global settings affected by flow rules:
• All ports under DPDK control must behave consistently, PMDs are responsible for making
sure that existing flow rules on a port are not affected by other ports.
• Ports not under DPDK control (unaffected or handled by other applications) are user’s
responsibility. They may affect existing flow rules and cause undefined behavior. PMDs
aware of this may prevent flow rules creation altogether in such cases.
PMD interface
Device compatibility
The following sections provide a few examples of such cases and describe how PMDs should
handle them; they are based on limitations built into the previous APIs.
Global bit-masks
Each flow rule comes with its own, per-layer bit-masks, while hardware may support only a
single, device-wide bit-mask for a given layer type, so that two IPv4 rules cannot use different
bit-masks.
The expected behavior in this case is that PMDs automatically configure global bit-masks ac-
cording to the needs of the first flow rule created.
Subsequent rules are allowed only if their bit-masks match those, the EEXIST error code
should be returned otherwise.
Many protocols can be simulated by crafting patterns with the Item: RAW type.
PMDs can rely on this capability to simulate support for protocols with headers not directly
recognized by hardware.
The ANY pattern item stands for anything, which can be difficult to translate to something hardware
would understand, particularly if followed by more specific types.
Consider the following pattern:
Unsupported actions
• When combined with Action: QUEUE, packet counting (Action: COUNT ) and tagging
(Action: MARK or Action: FLAG) may be implemented in software as long as the target
queue is used by a single rule.
• A rule specifying both Action: DUP + Action: QUEUE may be translated to two hidden
rules combining Action: QUEUE and Action: PASSTHRU.
• When a single target queue is provided, Action: RSS can also be implemented through
Action: QUEUE.
While it would naturally make sense, flow rules cannot be assumed to be processed by hard-
ware in the same order as their creation for several reasons:
• They may be managed internally as a tree or a hash table instead of a list.
• Removing a flow rule before adding another one can either put the new rule at the end of
the list or reuse a freed entry.
• Duplication may occur when packets are matched by several rules.
For overlapping rules (particularly in order to use Action: PASSTHRU) predictable behavior is
only guaranteed by using different priority levels.
Priority levels are not necessarily implemented in hardware, or may be severely limited (e.g. a
single priority bit).
For these reasons, priority levels may be implemented purely in software by PMDs.
• For devices expecting flow rules to be added in the correct order, PMDs may destroy and
re-create existing rules after adding a new one with a higher priority.
• A configurable number of dummy or empty rules can be created at initialization time to
save high priority slots for later.
• In order to save priority levels, PMDs may evaluate whether rules are likely to collide and
adjust their priority accordingly.
Future evolutions
• A device profile selection function which could be used to force a permanent profile in-
stead of relying on its automatic configuration based on existing flow rules.
• A method to optimize rte_flow rules with specific pattern items and action types gener-
ated on the fly by PMDs. DPDK should assign negative numbers to these in order to not
collide with the existing types. See Negative types.
• Adding specific egress pattern items and actions as described in Attribute: Traffic direc-
tion.
• Optional software fallback when PMDs are unable to handle requested flow rules so
applications do not have to implement their own.
API migration
Exhaustive list of deprecated filter types (normally prefixed with RTE_ETH_FILTER_) found in
rte_eth_ctrl.h and methods to convert them to rte_flow rules.
MACVLAN can be translated to a basic Item: ETH flow rule with a terminating Action: VF or
Action: PF .
ETHERTYPE is basically an Item: ETH flow rule with a terminating Action: QUEUE or Action:
DROP.
FLEXIBLE can be translated to one Item: RAW pattern with a terminating Action: QUEUE and
a defined priority level.
SYN is an Item: TCP rule with only the syn bit enabled and masked, and a terminating Action:
QUEUE.
Priority level can be set to simulate the high priority bit.
NTUPLE is similar to specifying an empty L2, Item: IPV4 as L3 with Item: TCP or Item: UDP
as L4 and a terminating Action: QUEUE.
A priority level can be specified as well.
FDIR is more complex than any other type, there are several methods to emulate its function-
ality. It is summarized for the most part in the table below.
A few features are intentionally not supported:
• The ability to configure the matching input set and masks for the entire device, PMDs
should take care of it automatically according to the requested flow rules.
For example if a device supports only one bit-mask per protocol type, source/address
IPv4 bit-masks can be made immutable by the first created rule. Subsequent IPv4 or
TCPv4 rules can only be created if they are compatible.
Note that only protocol bit-masks affected by existing flow rules are immutable, others can
be changed later. They become mutable again after the related flow rules are destroyed.
• Returning four or eight bytes of matched data when using flex bytes filtering. Although a
specific action could implement it, it conflicts with the much more useful 32 bits tagging
on devices that support it.
• Side effects on RSS processing of the entire device. Flow rules that conflict with the
current device configuration should not be allowed. Similarly, device configuration should
not be allowed when it affects existing flow rules.
• Device modes of operation. “none” is unsupported since filtering cannot be disabled as
long as a flow rule is present.
• “MAC VLAN” or “tunnel” perfect matching modes should be automatically set according
to the created flow rules.
• Signature mode of operation is not defined but could be handled through a specific item
type if needed.
HASH
There is no counterpart to this filter type because it translates to a global device setting instead
of a pattern item. Device settings are automatically set according to the created flow rules.
All packets are matched. This type alters incoming packets to encapsulate them in a chosen
tunnel type, optionally redirect them to a VF as well.
The destination pool for tag based forwarding can be emulated with other flow rules using
Action: DUP.
Cryptography Device Library
The cryptodev library provides a Crypto device framework for management and provisioning
of hardware and software Crypto poll mode drivers, defining generic APIs which support a
number of different Crypto operations. The framework currently only supports cipher, authen-
tication, chained cipher/authentication and AEAD symmetric Crypto operations.
Design Principles
The cryptodev library follows the same basic principles as those used in DPDK's Ethernet
Device framework. The Crypto framework provides a generic Crypto device framework which
supports both physical (hardware) and virtual (software) Crypto devices as well as a generic
Crypto API which allows Crypto devices to be managed and configured and supports Crypto
operations to be provisioned on Crypto poll mode drivers.
Device Management
Device Creation
Physical Crypto devices are discovered during the PCI probe/enumeration performed by the EAL
at DPDK initialization, based on their PCI device identifier (each unique PCI BDF: bus/bridge,
device, function). Specific physical Crypto devices, like other physical devices in DPDK, can be
white-listed or black-listed using the EAL command line options.
Virtual devices can be created by two mechanisms, either using the EAL command line options
or from within the application using an EAL API directly.
From the command line using the --vdev EAL option
--vdev 'cryptodev_aesni_mb_pmd0,max_nb_queue_pairs=2,max_nb_sessions=1024,socket_id=0'
Device Identification
Device Configuration
Each Crypto device is configured with rte_cryptodev_configure(), which takes a struct
rte_cryptodev_config:
struct rte_cryptodev_config {
    int socket_id;           /**< Socket on which to allocate resources */
    uint16_t nb_queue_pairs; /**< Number of queue pairs to configure */
    struct {
        uint32_t nb_objs;
        uint32_t cache_size;
    } session_mp;            /**< Session mempool configuration */
};
struct rte_cryptodev_qp_conf {
uint32_t nb_descriptors; /**< Number of descriptors per queue pair */
};
The Crypto device Library, like the Poll Mode Driver library, supports NUMA for when a processor's
logical cores and interfaces utilize its local memory. Therefore Crypto operations, and in the
case of symmetric Crypto operations, the session and the mbuf being operated on, should
be allocated from memory pools created in the local memory. The buffers should, if possible,
remain on the local processor to obtain the best performance results and buffer descriptors
should be populated with mbufs allocated from a mempool allocated from local memory.
The run-to-completion model also performs better, especially in the case of virtual Crypto de-
vices, if the Crypto operation and session and data buffer is in local memory instead of a
remote processor’s memory. This is also true for the pipe-line model provided all logical cores
used are located on the same processor.
Multiple logical cores should never share the same queue pair for enqueuing operations or de-
queuing operations on the same Crypto device since this would require global locks and hinder
performance. It is however possible to use a different logical core to dequeue an operation on
a queue pair from the logical core on which it was enqueued. This means that the crypto burst
enqueue/dequeue APIs are a logical place to transition from one logical core to another in a
packet processing pipeline.
Crypto devices define their functionality through two mechanisms, global device features and
algorithm capabilities. Global device features identify device-wide features which are
applicable to the whole device, such as the device having hardware acceleration or supporting
symmetric Crypto operations.
The capabilities mechanism defines the individual algorithms/functions which the device sup-
ports, such as a specific symmetric Crypto cipher or authentication operation.
Device Features
Crypto capabilities which identify particular algorithm which the Crypto PMD supports are de-
fined by the operation type, the operation transform, the transform identifier and then the par-
ticulars of the transform. For the full scope of the Crypto capability see the definition of the
structure in the DPDK API Reference.
struct rte_cryptodev_capabilities;
Each Crypto poll mode driver defines its own private array of capabilities for the operations it
supports. Below is an example of the capabilities for a PMD which supports the authentication
algorithm SHA1_HMAC and the cipher algorithm AES_CBC.
static const struct rte_cryptodev_capabilities pmd_capabilities[] = {
{ /* SHA1 HMAC */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
.sym = {
.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
.auth = {
.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
.block_size = 64,
.key_size = {
.min = 64,
.max = 64,
.increment = 0
},
.digest_size = {
.min = 12,
.max = 12,
.increment = 0
},
.aad_size = { 0 }
}
}
},
{ /* AES CBC */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
.sym = {
.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
.cipher = {
.algo = RTE_CRYPTO_CIPHER_AES_CBC,
.block_size = 16,
.key_size = {
.min = 16,
.max = 32,
.increment = 8
},
.iv_size = {
.min = 16,
.max = 16,
.increment = 0
}
}
}
}
};
Capabilities Discovery
Discovering the features and capabilities of a Crypto device poll mode driver is achieved
through the rte_cryptodev_info_get function.
void rte_cryptodev_info_get(uint8_t dev_id,
struct rte_cryptodev_info *dev_info);
This allows the user to query a specific Crypto PMD and get all the device features and ca-
pabilities. The rte_cryptodev_info structure contains all the relevant information for the
device.
struct rte_cryptodev_info {
const char *driver_name;
enum rte_cryptodev_type dev_type;
struct rte_pci_device *pci_dev;
uint64_t feature_flags;
unsigned max_nb_queue_pairs;
struct {
unsigned max_nb_sessions;
} sym;
};
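For example, an application can walk over all Crypto devices and inspect their features with a loop along these lines:

#include <inttypes.h>
#include <stdio.h>
#include <rte_cryptodev.h>

static void
dump_crypto_devices(void)
{
    uint8_t dev_id, nb_devs = rte_cryptodev_count();
    struct rte_cryptodev_info info;

    for (dev_id = 0; dev_id < nb_devs; dev_id++) {
        rte_cryptodev_info_get(dev_id, &info);
        printf("dev %u: driver %s, %u queue pairs, flags 0x%" PRIx64 "\n",
                dev_id, info.driver_name, info.max_nb_queue_pairs,
                info.feature_flags);
    }
}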
Operation Processing
Scheduling of Crypto operations on DPDK’s application data path is performed using a burst
oriented asynchronous API set. A queue pair on a Crypto device accepts a burst of Crypto
operations using the enqueue burst API. On physical Crypto devices the enqueue burst API will
place the operations to be processed on the device's hardware input queue; for virtual devices
the processing of the Crypto operations is usually completed during the enqueue call to the
Crypto device. The dequeue burst API will retrieve any processed operations available from the
queue pair on the Crypto device; for physical devices this is usually directly from the device's
processed queue, and for virtual devices from an rte_ring where processed operations are
placed after being processed on the enqueue call.
The burst enqueue API uses a Crypto device identifier and a queue pair identifier to specify the
Crypto device queue pair to schedule the processing on. The nb_ops parameter is the number
of operations to process which are supplied in the ops array of rte_crypto_op structures.
The enqueue function returns the number of operations it actually enqueued for processing, a
return value equal to nb_ops means that all packets have been enqueued.
uint16_t rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
struct rte_crypto_op **ops, uint16_t nb_ops)
The dequeue API uses the same format as the enqueue API, but the nb_ops
and ops parameters are now used to specify the maximum number of processed operations the user wishes
to retrieve and the location in which to store them. The API call returns the actual number of
processed operations returned, this can never be larger than nb_ops.
uint16_t rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
struct rte_crypto_op **ops, uint16_t nb_ops)
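Putting the two calls together, a typical processing sequence retries the dequeue until all enqueued operations have been returned (a sketch; ops is assumed to hold nb_ops fully prepared operations and nb_ops to be no larger than OP_BURST):

#include <rte_crypto.h>
#include <rte_cryptodev.h>

#define OP_BURST 32

static void
process_ops(uint8_t dev_id, uint16_t qp_id,
        struct rte_crypto_op **ops, uint16_t nb_ops)
{
    struct rte_crypto_op *deq[OP_BURST];
    uint16_t enq = 0, dq = 0;

    /* The enqueue call may accept fewer operations than requested. */
    while (enq < nb_ops)
        enq += rte_cryptodev_enqueue_burst(dev_id, qp_id,
                ops + enq, nb_ops - enq);

    /* Poll until all processed operations have been retrieved. */
    while (dq < nb_ops)
        dq += rte_cryptodev_dequeue_burst(dev_id, qp_id,
                deq + dq, nb_ops - dq);
}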
Operation Representation
The operation structure includes the operation type and the operation status, a reference to
the operation specific data, which can vary in size and content depending on the operation
being provisioned. It also contains the source mempool for the operation, if it was allocated from a
mempool. Finally, an opaque pointer for user specific data is provided.
If Crypto operations are allocated from a Crypto operation mempool (see next section), there is
also the ability to allocate private memory with the operation for application purposes.
Application software is responsible for specifying all the operation specific fields in the
rte_crypto_op structure which are then used by the Crypto PMD to process the requested
operation.
The cryptodev library provides an API set for managing Crypto operations which utilize the
Mempool Library to allocate operation buffers. Therefore, it ensures that the crypto op-
eration is interleaved optimally across the channels and ranks for optimal processing. A
rte_crypto_op contains a field indicating the pool that it originated from. When calling
rte_crypto_op_free(op), the operation returns to its original pool.
The cryptodev library currently provides support for the following symmetric Crypto operations:
cipher, authentication and chaining of these operations, as well as AEAD operations.
Sessions are used in symmetric cryptographic processing to store the immutable data defined in
a cryptographic transform which is used in the operation processing of a packet flow. Sessions
are used to manage information such as expanded cipher keys and HMAC IPADs and OPADs,
which need to be calculated for a particular Crypto operation, but are immutable on a packet
to packet basis for a flow. Crypto sessions cache this immutable data in an optimal way for the
underlying PMD and this allows further acceleration of the offload of Crypto workloads.
The Crypto device framework provides a set of session pool management APIs for the creation
and freeing of the sessions, utilizing the Mempool Library.
The framework also provides hooks so the PMDs can pass the amount of memory required for
that PMD's private session parameters, as well as initialization functions for the configuration
of the session parameters and a freeing function so the PMD can manage the memory on
destruction of a session.
Note: Sessions created on a particular device can only be used on Crypto devices of the same
type, and if you try to use a session on a device different to that on which it was created then
the Crypto operation will fail.
Note: For AEAD operations the algorithm selected for authentication and ciphering must be
aligned, e.g. AES_GCM.
The API does not place a limit on the number of transforms that can be chained together but
this will be limited by the underlying Crypto device poll mode driver which is processing the
operation.
Symmetric Operations
The symmetric Crypto operation structure contains all the mutable data relating to performing
symmetric cryptographic processing on a referenced mbuf data buffer. It is used for
cipher, authentication, AEAD and chained operations.
As a minimum the symmetric operation must have a source data buffer (m_src), the session
type (session-based/less), a valid session (or transform chain if in session-less mode) and
the minimum authentication/ cipher parameters required depending on the type of operation
specified in the session or the transform chain.
struct rte_crypto_sym_op {
struct rte_mbuf *m_src;
struct rte_mbuf *m_dst;
union {
struct rte_cryptodev_sym_session *session;
/**< Handle for the initialised session context */
struct rte_crypto_sym_xform *xform;
/**< Session-less API Crypto operation parameters */
};
struct {
struct {
uint32_t offset;
uint32_t length;
} data; /**< Data offsets and length for ciphering */
struct {
uint8_t *data;
phys_addr_t phys_addr;
uint16_t length;
} iv; /**< Initialisation vector parameters */
} cipher;
struct {
struct {
uint32_t offset;
uint32_t length;
} data; /**< Data offsets and length for authentication */
struct {
uint8_t *data;
phys_addr_t phys_addr;
uint16_t length;
} digest; /**< Digest parameters */
struct {
uint8_t *data;
phys_addr_t phys_addr;
uint16_t length;
} aad; /**< Additional authentication parameters */
} auth;
};
Asymmetric Cryptography
The cryptodev Library API is described in the DPDK API Reference document.
Link Bonding Poll Mode Driver Library
In addition to Poll Mode Drivers (PMDs) for physical and virtual hardware, DPDK also includes
a pure-software library that allows physical PMDs to be bonded together to create a single
logical PMD.
Note: The Link Bonding PMD Library is enabled by default in the build configuration files, the
library can be disabled by setting CONFIG_RTE_LIBRTE_PMD_BOND=n and recompiling the
DPDK.
Currently the Link Bonding PMD library supports the following modes of operation:
• Round-Robin (Mode 0):
Implementation Details
The librte_pmd_bond bonded devices are compatible with the Ethernet device API exported by
the Ethernet PMDs described in the DPDK API Reference.
The Link Bonding Library supports the creation of bonded devices at application startup time
during EAL initialization using the --vdev option as well as programmatically via the C API
rte_eth_bond_create function.
Bonded devices support the dynamical addition and removal of slave devices using the
rte_eth_bond_slave_add / rte_eth_bond_slave_remove APIs.
After a slave device is added to a bonded device, the slave is stopped using rte_eth_dev_stop
and then reconfigured using rte_eth_dev_configure; the RX and TX queues are also re-
configured using rte_eth_tx_queue_setup / rte_eth_rx_queue_setup with the pa-
rameters used to configure the bonding device. If RSS is enabled for the bonding device, this mode
is also enabled and configured on the new slave.
Setting the multi-queue mode of the bonding device to RSS makes it fully RSS-capable, so all
slaves are synchronized with its configuration. This mode is intended to provide RSS configu-
ration on the slaves transparently to the client application.
The bonding device stores its own version of the RSS settings, i.e. RETA, RSS hash function and RSS
key, and uses them to set up its slaves. This defines the RSS configuration of the bonding
device as the desired configuration of the whole bond (as one unit), without referring to any particular slave
inside it, which ensures consistency and makes the configuration more error-proof.
The RSS hash function set for the bonding device is the maximal set of RSS hash functions supported
by all bonded slaves. The RETA size is the GCD of all the slaves' RETA sizes, so it can easily be used as
a pattern providing the expected behavior even if the slave RETA sizes differ. If the RSS key is
not set for the bonded device, it is not changed on the slaves and each device's default key is used.
All settings are managed through the bonding port API and are always propagated in one
direction (from bonding to slaves).
Link bonding devices support the registration of a link status change callback, using the
rte_eth_dev_callback_register API; this will be called when the status of the bond-
ing device changes. For example, in the case of a bonding device which has 3 slaves, the link
status will change to up when one slave becomes active or change to down when all slaves
become inactive. There is no callback notification when a single slave changes state and the
previous conditions are not met. If a user wishes to monitor individual slaves then they must
register callbacks with that slave directly.
The link bonding library also supports devices which do not implement link status change
interrupts; this is achieved by polling the device's link status at a defined period, which is
set using the rte_eth_bond_link_monitoring_set API (the default polling interval is
10ms). When a device is added as a slave to a bonding device, the
RTE_PCI_DRV_INTR_LSC flag is used to determine whether the device supports interrupts or whether the link sta-
tus should be monitored by polling it.
Requirements / Limitations
The current implementation only supports devices of the same speed and duplex being
added as slaves to the same bonded device. The bonded device inherits these attributes
from the first active slave added to the bonded device, and all further slaves added to the
bonded device must support these parameters.
A bonding device must have a minimum of one slave before the bonding device itself can be
started.
To use the bonding device's dynamic RSS configuration feature effectively, all slaves must be
RSS-capable and support at least one common hash function. Changing the RSS key is only
possible when all slave devices support the same key size.
To prevent inconsistency on how slaves process packets, once a device is added to a bonding
device, RSS configuration should be managed through the bonding device API, and not directly
on the slave.
Like all other PMDs, all functions exported by this PMD are lock-free functions that are assumed
not to be invoked in parallel on different logical cores to work on the same target object.
It should also be noted that the PMD receive function should not be invoked directly on a slave
device after it has been added to a bonded device, since packets read directly from the slave
device will no longer be available to the bonded device to read.
Configuration
Link bonding devices are created using the rte_eth_bond_create API which requires a
unique device name, the bonding mode, and the socket Id to allocate the bonding device’s
resources on. The other configurable parameters for a bonded device are its slave devices, its
primary slave, a user defined MAC address and transmission policy to use if the device is in
balance XOR mode.
Slave Devices
Primary Slave
The primary slave is used to define the default port to use when a bonded device is in active
backup mode. A different port will only be used if, and only if, the current primary port goes
down. If the user does not specify a primary port it will default to being the first port added to
the bonded device.
MAC Address
The bonded device can be configured with a user specified MAC address; this address will be
inherited by some/all slave devices depending on the operating mode. If the device is in
active backup mode then only the primary device will have the user specified MAC; all other
slaves will retain their original MAC address. In modes 0, 2, 3 and 4 all slave devices are configured
with the bonded device's MAC address.
If a user defined MAC address is not provided then the bonded device will default to using the
primary slave's MAC address.
There are 3 supported transmission policies for a bonded device running in Balance XOR mode:
Layer 2, Layer 2+3 and Layer 3+4.
• Layer 2: Ethernet MAC address based balancing is the default transmission policy for
Balance XOR bonding mode. It uses a simple XOR calculation on the source MAC
address and destination MAC address of the packet and then calculates the modulus of
this value to select the slave device to transmit the packet on.
• Layer 2 + 3: Ethernet MAC address & IP Address based balancing uses a combination of
source/destination MAC addresses and the source/destination IP addresses of the data
packet to decide which slave port the packet will be transmitted on.
• Layer 3 + 4: IP Address & UDP Port based balancing uses a combination of
source/destination IP addresses and the source/destination UDP ports of the data
packet to decide which slave port the packet will be transmitted on.
All these policies support 802.1Q VLAN Ethernet packets, as well as IPv4, IPv6 and UDP
protocols for load balancing.
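As a rough illustration of the Layer 2 policy described above (and not the internal code of librte_pmd_bond), the slave selection could be sketched as follows, where nb_slaves is the number of active slaves:

#include <stdint.h>
#include <rte_ether.h>

/* Illustrative sketch of a Layer 2 balancing computation: XOR the source
 * and destination MAC addresses and reduce the result modulo the number
 * of active slaves. It mirrors the description above, not the exact
 * internal implementation of librte_pmd_bond. */
static uint8_t
l2_xor_slave(const struct ether_hdr *eth, uint8_t nb_slaves)
{
    uint8_t hash = 0;
    int i;

    for (i = 0; i < ETHER_ADDR_LEN; i++)
        hash ^= eth->s_addr.addr_bytes[i] ^ eth->d_addr.addr_bytes[i];

    return hash % nb_slaves;
}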
The librte_pmd_bond library supports two modes of device creation: using the library's full C
API, or using the EAL command line to statically configure link bonding devices at application
startup. Using the EAL option it is possible to use link bonding functionality transparently
without specific knowledge of the library's API; this can be used, for example, to add bonding
functionality, such as active backup, to an existing application which has no knowledge of the
link bonding C API.
Using the librte_pmd_bond library's API it is possible to dynamically create and manage
link bonding devices from within any application. Link bonding devices are created using the
rte_eth_bond_create API, which requires a unique device name, the link bonding mode
to initialize the device in and finally the socket Id on which to allocate the device's resources.
After successful creation of a bonding device it must be configured using the generic Ethernet
device configure API rte_eth_dev_configure and then the RX and TX queues which will
be used must be setup using rte_eth_tx_queue_setup / rte_eth_rx_queue_setup.
Slave devices can be dynamically added and removed from a link bonding device us-
ing the rte_eth_bond_slave_add / rte_eth_bond_slave_remove APIs but at least
one slave device must be added to the link bonding device before it can be started using
rte_eth_dev_start.
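A minimal sketch of this sequence is shown below; the mbuf pool, slave port ids and queue sizes are illustrative assumptions and error handling is omitted.

#include <rte_ethdev.h>
#include <rte_eth_bond.h>

/* Sketch: create an active-backup bonded device, attach two slave ports,
 * set up one RX/TX queue pair and start it. "mbuf_pool" and the slave
 * port ids are assumed to exist already; error checking is omitted. */
static int
setup_bonded_port(struct rte_mempool *mbuf_pool,
                  uint8_t slave0, uint8_t slave1, uint8_t socket_id)
{
    static const struct rte_eth_conf port_conf; /* zeroed default config */
    int bond_port;

    bond_port = rte_eth_bond_create("net_bond0",
                                    BONDING_MODE_ACTIVE_BACKUP, socket_id);
    if (bond_port < 0)
        return bond_port;

    rte_eth_bond_slave_add(bond_port, slave0);
    rte_eth_bond_slave_add(bond_port, slave1);

    rte_eth_dev_configure(bond_port, 1, 1, &port_conf);
    rte_eth_rx_queue_setup(bond_port, 0, 128, socket_id, NULL, mbuf_pool);
    rte_eth_tx_queue_setup(bond_port, 0, 512, socket_id, NULL);

    return rte_eth_dev_start(bond_port);
}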
The link status of a bonded device is dictated by that of its slaves; if all slave devices' link statuses
are down, or if all slaves are removed from the link bonding device, then the link status of the
bonding device will go down.
It is also possible to configure / query the configuration of the control param-
eters of a bonded device using the provided APIs rte_eth_bond_mode_set/
get, rte_eth_bond_primary_set/get, rte_eth_bond_mac_set/reset and
rte_eth_bond_xmit_policy_set/get.
Link bonding devices can be created at application startup time using the --vdev EAL com-
mand line option. The device name must start with the net_bond prefix followed by numbers
or letters. The name must be unique for each device. Each device can have multiple options
arranged in a comma separated list. Multiple device definitions can be arranged by calling the
--vdev option multiple times.
Device names and bonding options must be separated by commas as shown below:
$RTE_TARGET/app/testpmd -c f -n 4 --vdev 'net_bond0,bond_opt0=..,bond_opt1=..' --vdev 'net_bond1,bond_opt0=..,bond_opt1=..'
There are multiple ways the definitions can be assessed and combined, as long as the
following rules are respected:
• A unique device name, in the format of net_bondX is provided, where X can be any
combination of numbers and/or letters, and the name is no greater than 32 characters
long.
• At least one slave device is provided for each bonded device definition.
• The operation mode of the bonded device being created is provided.
The different options are:
• mode: Integer value defining the bonding mode of the device. Currently supports modes
0,1,2,3,4,5 (round-robin, active backup, balance, broadcast, link aggregation, transmit
load balancing).
mode=2
• slave: Defines the PMD device which will be added as slave to the bonded de-
vice. This option can be selected multiple times, for each device to be added as a
slave. Physical devices should be specified using their PCI address, in the format do-
main:bus:devid.function
slave=0000:0a:00.0,slave=0000:0a:00.1
• primary: Optional parameter which defines the primary slave port; it is used in active
backup mode to select the primary slave for data TX/RX if it is available. The primary
port is also used to select the MAC address to use when it is not defined by the user.
This defaults to the first slave added to the device if it is not specified. The primary device
must be a slave of the bonded device.
primary=0000:0a:00.0
• socket_id: Optional parameter used to select which socket on a NUMA device the bonded
device's resources will be allocated on.
socket_id=0
• mac: Optional parameter to select a MAC address for the link bonding device; this overrides
the value of the primary slave device.
mac=00:1e:67:1d:fd:1d
• xmit_policy: Optional parameter which defines the transmission policy when the bonded
device is in balance mode. If not user specified this defaults to l2 (layer 2) forwarding, the
other transmission policies available are l23 (layer 2+3) and l34 (layer 3+4)
xmit_policy=l23
Examples of Usage
Create a bonded device in round robin mode with two slaves specified by their PCI address:
$RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'net_bond0,mode=0, slave=0000:00a:00.01,slave=0000:00a:00.02'
Create a bonded device in round robin mode with two slaves specified by their PCI address
and an overriding MAC address:
$RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'net_bond0,mode=0, slave=0000:00a:00.01,slave=0000:00a:00.02,mac=00:1e:67:1d:fd:1d'
Create a bonded device in active backup mode with two slaves specified, and a primary slave
specified by their PCI addresses:
$RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'net_bond0,mode=1, slave=0000:00a:00.01,slave=0000:00a:00.02,primary=0000:00a:00.01'
Create a bonded device in balance mode with two slaves specified by their PCI addresses,
and a transmission policy of layer 3 + 4 forwarding:
$RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'net_bond0,mode=2, slave=0000:00a:00.01,slave=0000:00a:00.02,xmit_policy=l34'
Timer Library
The Timer library provides a timer service to DPDK execution units to enable execution of
callback functions asynchronously. Features of the library are:
• Timers can be periodic (multi-shot) or single (one-shot).
• Timers can be loaded from one core and executed on another. This has to be specified in
the call to rte_timer_reset().
• Timers provide high precision (depends on the call frequency to rte_timer_manage() that
checks timer expiration for the local core).
• If not required by the application, timers can be disabled at compilation time (to increase
performance) simply by not calling rte_timer_manage().
The timer library uses the rte_get_timer_cycles() function that uses the High Precision Event
Timer (HPET) or the CPU's Time Stamp Counter (TSC) to provide a reliable time reference.
This library provides an interface to add, delete and restart a timer. The API is based on BSD
callout() with a few differences. Refer to the callout manual.
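As a rough usage sketch (assuming the EAL has already been initialized), a periodic timer could be set up and serviced as follows:

#include <rte_timer.h>
#include <rte_cycles.h>
#include <rte_lcore.h>

static struct rte_timer tim;

/* Callback executed by rte_timer_manage() on the lcore that owns the timer. */
static void
timer_cb(struct rte_timer *t, void *arg)
{
    (void)t;
    (void)arg;
    /* periodic work goes here */
}

static void
setup_and_run_timer(void)
{
    rte_timer_subsystem_init();
    rte_timer_init(&tim);

    /* Fire once per second on the current lcore, reloading automatically. */
    rte_timer_reset(&tim, rte_get_timer_hz(), PERIODICAL,
                    rte_lcore_id(), timer_cb, NULL);

    for (;;)
        rte_timer_manage();   /* runs expired timers for this lcore */
}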
Implementation Details
Timers are tracked on a per-lcore basis, with all pending timers for a core being maintained
in order of timer expiry in a skiplist data structure. The skiplist used has ten levels and each
entry in the table appears in each level with probability ¼^level. This means that all entries are
present in level 0, 1 in every 4 entries is present at level 1, one in every 16 at level 2 and so on
up to level 9. This means that adding and removing entries from the timer list for a core can be
done in log(n) time, up to 4^10 entries, that is, approximately 1,000,000 timers per lcore.
A timer structure contains a special field called status, which is a union of a timer state
(stopped, pending, running, config) and an owner (lcore id). Depending on the timer state,
we know if a timer is present in a list or not:
• STOPPED: no owner, not in a list
• CONFIG: owned by a core, must not be modified by another core, may or may not be in a list,
depending on the previous state
• PENDING: owned by a core, present in a list
• RUNNING: owned by a core, must not be modified by another core, present in a list
Resetting or stopping a timer while it is in a CONFIG or RUNNING state is not allowed. When
modifying the state of a timer, a Compare And Swap instruction should be used to guarantee
that the status (state+owner) is modified atomically.
Inside the rte_timer_manage() function, the skiplist is used as a regular list by iterating along
the level 0 list, which contains all timer entries, until an entry which has not yet expired has
been encountered. To improve performance in the case where there are entries in the timer
list but none of those timers have yet expired, the expiry time of the first list entry is maintained
within the per-core timer list structure itself. On 64-bit platforms, this value can be checked
without the need to take a lock on the overall structure. (Since expiry times are maintained
as 64-bit values, a check on the value cannot be done on 32-bit platforms without using either
a compare-and-swap (CAS) instruction or using a lock, so this additional check is skipped in
favor of checking as normal once the lock has been taken.) On both 64-bit and 32-bit platforms,
a call to rte_timer_manage() returns without taking a lock in the case where the timer list for
the calling core is empty.
Use Cases
The timer library is used for periodic calls, such as garbage collectors, or some state machines
(ARP, bridging, and so on).
References
• callout manual - The callout facility that provides timers with a mechanism to execute a
function at a given time.
Hash Library
The DPDK provides a Hash Library for creating hash tables for fast lookup. The hash table is
a data structure optimized for searching through a set of entries that are each identified by a
unique key. For increased performance the DPDK Hash requires that all the keys have the
same number of bytes which is set at the hash creation time.
The actual data associated with each key can be either managed by the user using a separate
table that mirrors the hash in terms of number of entries and position of each entry, as shown
in the Flow Classification use case described in the following sections, or stored in the hash
table itself.
The example hash tables in the L2/L3 Forwarding sample applications define which port to
forward a packet to based on a packet flow identified by the five-tuple lookup. However, this
table could also be used for more sophisticated features and provide many other functions and
actions that could be performed on the packets and flows.
Multi-process support
The hash library can be used in a multi-process environment, bearing in mind that only lookups
are thread-safe. The only function that can only be used in single-process mode is
rte_hash_set_cmp_func(), which sets up a custom compare function, which is assigned to
a function pointer (therefore, it is not supported in multi-process mode).
Implementation Details
Example of lookup:
The primary bucket is identified first. If the signature of the key is stored there, the key in that
slot is compared against the one provided and, if there is a match, the position where it was
stored and/or the data associated with that key is returned. If the signature is not in
the primary bucket, the secondary bucket is looked up, where the same procedure is carried out.
If there is no match there either, the key is considered not to be in the table.
Example of addition:
Like lookup, the primary and secondary buckets are identified. If there is an empty slot in the
primary bucket, the primary and secondary signatures are stored in that slot, the key and data (if any)
are added to the second table and an index to the position in the second table is stored in
the slot of the first table. If there is no space in the primary bucket, one of the entries in that
bucket is pushed to its alternative location, and the key to be added is inserted in its position.
To find the alternative bucket of the evicted entry, its secondary signature is looked
up and the alternative bucket index is calculated from the modulo operation, as seen above. If there is
room in the alternative bucket, the evicted entry is stored in it. If not, the same process is repeated
(one of the entries gets pushed) until a non-full bucket is found. Notice that despite all the
entry movement in the first table, the second table is not touched, which would otherwise greatly
impact performance.
In the very unlikely event that the table enters a loop where the same entries are evicted
indefinitely, the key is considered unable to be stored. With random keys, this method allows the
user to reach around 90% table utilization without having to drop any stored entry (LRU)
or allocate more memory (extended buckets).
As mentioned above, the Cuckoo hash implementation pushes elements out of their bucket to their
alternative location if a new entry to be added has a primary location that coincides with their
current bucket. Therefore, as the user adds more entries to the hash table,
the distribution of the hash values in the buckets will change, with most of them in their primary
location and a few in their secondary location; the latter proportion will increase as the table gets busier.
This information is quite useful, as performance may drop as more entries are evicted to
their secondary location.
See the tables below showing example entry distribution as table utilization increases.
Note: Last values on the tables above are the average maximum table utilization with random
keys and using Jenkins hash function.
Flow classification is used to map each input packet to the connection/flow it belongs to. This
operation is necessary as the processing of each input packet is usually done in the context
of their connection, so the same set of operations is applied to all the packets from the same
flow.
Applications using flow classification typically have a flow table to manage, with each separate
flow having an entry associated with it in this table. The size of the flow table entry is application
specific, with typical values of 4, 16, 32 or 64 bytes.
Each application using flow classification typically has a mechanism defined to uniquely iden-
tify a flow based on a number of fields read from the input packet that make up the flow key.
One example is to use the DiffServ 5-tuple made up of the following fields of the IP and trans-
port layer packet headers: Source IP Address, Destination IP Address, Protocol, Source Port,
Destination Port.
The DPDK hash provides a generic method to implement an application specific flow classifi-
cation mechanism. Given a flow table implemented as an array, the application should create
a hash object with the same number of entries as the flow table and with the hash key size set
to the number of bytes in the selected flow key.
The flow table operations on the application side are described below:
• Add flow: Add the flow key to hash. If the returned position is valid, use it to access the
flow entry in the flow table for adding a new flow or updating the information associated
with an existing flow. Otherwise, the flow addition failed, for example due to lack of free
entries for storing new flows.
• Delete flow: Delete the flow key from the hash. If the returned position is valid, use it to
access the flow entry in the flow table to invalidate the information associated with the
flow.
• Lookup flow: Lookup for the flow key in the hash. If the returned position is valid (flow
lookup hit), use the returned position to access the flow entry in the flow table. Otherwise
(flow lookup miss) there is no flow registered for the current packet.
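The sketch below illustrates such a flow table built on the rte_hash API; the 5-tuple key layout, table size and per-flow data are application-specific assumptions, not values taken from the sample applications.

#include <rte_hash.h>
#include <rte_jhash.h>

/* Illustrative 5-tuple flow key; field layout is application specific. */
struct flow_key {
    uint32_t src_ip;
    uint32_t dst_ip;
    uint16_t src_port;
    uint16_t dst_port;
    uint8_t  proto;
} __attribute__((packed));

#define MAX_FLOWS 1024

static struct flow_entry {
    uint64_t pkt_count;   /* application-specific per-flow data */
} flow_table[MAX_FLOWS];

static struct rte_hash *
create_flow_hash(int socket_id)
{
    struct rte_hash_parameters params = {
        .name = "flow_hash",
        .entries = MAX_FLOWS,
        .key_len = sizeof(struct flow_key),
        .hash_func = rte_jhash,
        .hash_func_init_val = 0,
        .socket_id = socket_id,
    };
    return rte_hash_create(&params);
}

/* Add flow: the returned position indexes the mirrored flow table. */
static struct flow_entry *
add_flow(struct rte_hash *h, const struct flow_key *key)
{
    int32_t pos = rte_hash_add_key(h, key);
    return (pos < 0) ? NULL : &flow_table[pos];
}

/* Lookup flow: a negative return value means a lookup miss. */
static struct flow_entry *
lookup_flow(struct rte_hash *h, const struct flow_key *key)
{
    int32_t pos = rte_hash_lookup(h, key);
    return (pos < 0) ? NULL : &flow_table[pos];
}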
References
• Donald E. Knuth, The Art of Computer Programming, Volume 3: Sorting and Searching
(2nd Edition), 1998, Addison-Wesley Professional
Introduction
In Data Centers today, clustering and scheduling of distributed workloads is a very common
task. Many workloads require a deterministic partitioning of a flat key space among a cluster
of machines. When a packet enters the cluster, the ingress node will direct the packet to its
handling node. For example, data-centers with disaggregated storage use storage metadata
tables to forward I/O requests to the correct back end storage cluster, stateful packet inspection
will match incoming flows against signatures in flow tables to send incoming packets to their
intended deep packet inspection (DPI) devices, and so on.
EFD is a distributor library that uses perfect hashing to determine a target/value for a given
incoming flow key. It has the following advantages: first, because it uses perfect hashing
it does not store the key itself and hence lookup performance is not dependent on the key
size. Second, the target/value can be any arbitrary value, hence the system designer and/or
operator can better optimize service rates and inter-cluster network traffic locality. Third,
since the storage requirement is much smaller than a hash-based flow table (i.e. better fit for
CPU cache), EFD can scale to millions of flow keys. Finally, with the current optimized library
implementation, performance is fully scalable with any number of CPU cores.
Flow distribution and/or load balancing can be simply done using a stateless computation, for
instance using round-robin or a simple computation based on the flow key as an input. For
example, a hash function can be used to direct a certain flow to a target based on the flow key
(e.g. h(key) mod n) where h(key) is the hash value of the flow key and n is the number of
possible targets.
In this scheme (Fig. 4.30), the front end server/distributor/load balancer extracts the flow key
from the input packet and applies a computation to determine where this flow should be di-
rected. Intuitively, this scheme is very simple and requires no state to be kept at the front end
node, and hence, storage requirements are minimum.
A widely used flow distributor that belongs to the same category of computation-based
schemes is consistent hashing, shown in Fig. 4.31. Target destinations (shown in red)
are hashed into the same space as the flow keys (shown in blue), and keys are mapped to the
nearest target in a clockwise fashion. Dynamically adding and removing targets with consistent
hashing requires only K/n keys to be remapped on average, where K is the number of keys,
and n is the number of targets. In contrast, in a traditional hash-based scheme, a change in
the number of targets causes nearly all keys to be remapped.
Although computation-based schemes are simple and require very little storage,
they suffer from the drawback that the system designer/operator cannot fully control which target
a specific key is assigned to, as this is dictated by the hash function. Deterministically co-locating
keys together (for example, to minimize inter-server traffic or to optimize for network traffic
conditions, target load, etc.) is simply not possible.
When using a Flow-Table based scheme to handle flow distribution/load balancing, in contrast
with computation-based schemes, the system designer has the flexibility of assigning a given
flow to any given target. The flow table (e.g. DPDK RTE Hash Library) will simply store both
the flow key and the target value.
As shown in Fig. 4.32, when doing a lookup, the flow-table is indexed with the hash of the flow
key and the keys (more than one is possible, because of hash collision) stored in this index
and corresponding values are retrieved. The retrieved key(s) is matched with the input flow key
and if there is a match the value (target id) is returned.
The drawback of using a hash table for flow distribution/load balancing is the storage require-
ment, since the flow table needs to store keys, signatures and target values. This doesn't allow
this scheme to scale to millions of flow keys. Large tables will usually not fit in the CPU cache,
and hence, the lookup performance is degraded because of the latency to access the main
memory.
EFD combines the advantages of both flow-table based and computation-based schemes.
It doesn’t require the large storage necessary for flow-table based schemes (because EFD
doesn’t store the key as explained below), and it supports any arbitrary value for any given key.
The basic idea of EFD is when a given key is to be inserted, a family of hash functions is
searched until the correct hash function that maps the input key to the correct value is found, as
shown in Fig. 4.33. However, rather than explicitly storing all keys and their associated values,
EFD stores only indices of hash functions that map keys to values, and thereby consumes
much less space than conventional flow-based tables. The lookup operation is very simple,
similar to a computational-based scheme: given an input key the lookup operation is reduced
to hashing that key with the correct hash function.
Intuitively, finding a hash function that maps each of a large number (millions) of input keys
to the correct output value is effectively impossible. As a result, as shown in Fig. 4.34, EFD
breaks the problem into smaller pieces (divide and conquer). EFD divides the entire input key
set into many small groups, each consisting of approximately 20-28 keys (a configurable
parameter for the library); then, for each small group, a brute force search is performed to find a hash
function that produces the correct output for each key in the group.
It should be mentioned that, since the online lookup table for EFD doesn't store the key itself,
the size of the EFD table is independent of the key size; hence EFD lookup performance
is almost constant irrespective of the length of the key, which is a highly desirable feature,
especially for longer keys.
In summary, EFD is a set separation data structure that supports millions of keys. It is used to
distribute a given key to an intended target. By itself EFD is not a FIB data structure with an
exact match of the input flow key.
EFD can be used along the data path of many network functions and middleboxes. As previ-
ously mentioned, it can be used as an index table for <key,value> pairs, meta-data for objects, a
flow-level load balancer, etc. Fig. 4.35 shows an example of using EFD as a flow-level load
balancer, where flows are received at a front end server before being forwarded to the target
back end server for processing. The system designer would deterministically co-locate flows
together in order to minimize cross-server interaction. (For example, flows requesting certain
webpage objects are co-located together, to minimize forwarding of common objects across
servers).
As shown in Fig. 4.35, the front end server will have an EFD table that stores for each group
what is the perfect hash index that satisfies the correct output. Because the table size is small
and fits in cache (since keys are not stored), it sustains a large number of flows (N*X, where N
is the maximum number of flows served by each back end server of the X possible targets).
With an input flow key, the group id is computed (for example, using last few bits of CRC hash)
and then the EFD table is indexed with the group id to retrieve the corresponding hash index to
use. Once the index is retrieved the key is hashed using this hash function and the result will
be the intended correct target where this flow is supposed to be processed.
It should be noted that as a result of EFD not matching the exact key but rather distributing
the flows to a target back end node based on the perfect hash index, a key that has not been
inserted before will be distributed to a valid target. Hence, a local table which stores the flows
served at each node is used and is exact-matched against the input key to rule out new,
never-before-seen flows.
The EFD library API is created with semantics very similar to those of a hash index or a flow table.
The application creates an EFD table for a given maximum number of flows, a function is called
to insert a flow key with a specific target value, and another function is used to retrieve target
values for a given individual flow key or a bulk of keys.
The function rte_efd_create() is used to create and return a pointer to an EFD table that
is sized to hold up to num_flows keys. The online version of the EFD table (the one that does
not store the keys and is used for lookups) will be allocated and created in the last level cache
(LLC) of the socket defined by the online_socket_bitmask, while the offline EFD table (the
one that stores the keys and is used for key inserts and for computing the perfect hashing) is
allocated and created in the LLC of the socket defined by offline_socket_bitmask. It should
be noted that for highest performance the socket id should match that where the thread is
running, i.e. the online EFD lookup table should be created on the same socket as where the
lookup thread is running.
The EFD function to insert a key or update a key to a new value is rte_efd_update().
This function will update an existing key to a new value (target) if the key has already been
inserted before, or will insert the <key,value> pair if this key has not been inserted before. It
will return 0 upon success. It will return EFD_UPDATE_WARN_GROUP_FULL (1) if the op-
eration is insert, and the last available space in the key’s group was just used. It will return
EFD_UPDATE_FAILED (2) when the insertion or update has failed (either it failed to find a
suitable perfect hash or the group was full). The function will return EFD_UPDATE_NO_CHANGE
(3) if there is no change to the EFD table (i.e, same value already exists).
Note: This function is not multi-thread safe and should only be called from one thread.
EFD Lookup
To lookup a certain key in an EFD table, the function rte_efd_lookup() is used to return the
value associated with single key. As previously mentioned, if the key has been inserted, the cor-
rect value inserted is returned, if the key has not been inserted before, a ‘random’ value (based
on hashing of the key) is returned. For better performance and to decrease the overhead of
function calls per key, it is always recommended to use a bulk lookup function (simultaneous
lookup of multiple keys) instead of a single key lookup function. rte_efd_lookup_bulk()
is the bulk lookup function, that looks up num_keys simultaneously stored in the key_list and
the corresponding return values will be returned in the value_list.
Note: This function is multi-thread safe, but there should not be other threads writing in the
EFD table, unless locks are used.
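A rough usage sketch of these functions is shown below. The key layout and sizes are illustrative, and the exact rte_efd_create() argument list (in particular the socket placement parameters) should be verified against the rte_efd.h header of the installed DPDK version.

#include <rte_efd.h>
#include <rte_lcore.h>

#define MAX_FLOWS  (1 << 20)
#define KEY_LEN    8            /* illustrative 8-byte flow key */

/* Sketch only: create an EFD table, bind a key to a target value and
 * look it up again. The create() arguments follow the description above
 * (num_flows, key length, online/offline socket placement) and should be
 * checked against the installed rte_efd.h. */
static void
efd_example(void)
{
    unsigned int socket = rte_socket_id();
    uint8_t key[KEY_LEN] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    struct rte_efd_table *table;
    uint32_t target;

    table = rte_efd_create("flow_efd", MAX_FLOWS, KEY_LEN,
                           1 << socket /* online socket bitmask */, socket);
    if (table == NULL)
        return;

    /* Map this flow key to back-end target 3. */
    rte_efd_update(table, socket, key, 3);

    /* Lookup returns the stored value for inserted keys, or an
     * arbitrary value for keys that were never inserted. */
    target = rte_efd_lookup(table, socket, key);
    (void)target;
}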
EFD Delete
To delete a certain key in an EFD table, the function rte_efd_delete() can be used. The
function returns zero upon success when the key has been found and deleted. Socket_id is
the parameter to use to lookup the existing value, which is ideally the caller’s socket id. The
previous value associated with this key will be returned in the prev_value argument.
Note: This function is not multi-thread safe and should only be called from one thread.
Library Internals
This section provides a brief high-level overview of the library internals. The intent of this section
is to explain to readers the high-level implementation
of insert, lookup and group rebalancing in the EFD library.
As previously mentioned the EFD divides the whole set of keys into groups of a manageable
size (e.g. 28 keys) and then searches for the perfect hash that satisfies the intended target
value for each key. EFD stores two versions of the <key,value> table:
• Offline Version (in memory): Only used for the insertion/update operation, which is less
frequent than the lookup operation. In the offline version the exact keys for each group are
stored. When a new key is added, the hash function is updated so that it satisfies the value
for the new key together with all the old keys already inserted in this group.
• Online Version (in cache): Used for the frequent lookup operation. In the online version,
as previously mentioned, the keys are not stored but rather only the hash index for each
group.
Fig. 4.36 depicts the group assignment for 7 flow keys as an example. Given a flow key, a hash
function (in our implementation CRC hash) is used to get the group id. As shown in the figure,
the groups can be unbalanced. (We highlight group rebalancing further below).
Fig. 4.37: Perfect Hash Search - Assigned Keys & Target Value
Focusing on one group that has four keys, Fig. 4.37 depicts the search algorithm to find the
perfect hash function. Assuming that the target value bit for the keys is as shown in the figure,
then the online EFD table will store a 16 bit hash index and 16 bit lookup table per group per
value bit.
For a given keyX, a hash function (h(keyX, seed1) + index * h(keyX, seed2)) is
used to point to a certain bit index in the 16-bit lookup_table value, as shown in Fig. 4.38. The
insert function will brute-force search all possible values of the hash index until a non-
conflicting lookup_table is found.
For example, since both key3 and key7 have a target bit value of 1, it is okay if the hash function
of both keys point to the same bit in the lookup table. A conflict will occur if a hash index is
used that maps both Key4 and Key7 to the same index in the lookup_table, as shown in Fig.
4.39, since their target value bits are not the same. Once a hash index is found that produces
a lookup_table with no contradictions, this index is stored for this group. This procedure is
repeated for each bit of target value.
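In rough C-like terms (illustrative only, not the library's internal code), the per-bit lookup described above amounts to the following, where h1 and h2 stand for the two seeded hash values of the key:

#include <stdint.h>

/* Illustrative sketch of the per-group, per-value-bit lookup described
 * above: the stored 16-bit hash index selects a bit position in the
 * group's 16-bit lookup_table, and that bit is the output value bit.
 * h1/h2 stand for the two seeded hash values of the key; they are
 * placeholders, not actual EFD library functions. */
static inline uint8_t
efd_value_bit(uint32_t h1, uint32_t h2, uint16_t hash_index,
              uint16_t lookup_table)
{
    uint32_t bit_index = (h1 + (uint32_t)hash_index * h2) & 0xF; /* 0..15 */

    return (lookup_table >> bit_index) & 1;
}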
The design principle of EFD is that lookups are much more frequent than inserts, and hence,
EFD’s design optimizes for the lookups which are faster and much simpler than the slower
insert procedure (inserts are slow, because of perfect hash search as previously discussed).
Fig. 4.40 depicts the lookup operation for EFD. Given an input key, the group id is computed
(using CRC hash) and then the hash index for this group is retrieved from the EFD table. Using
the retrieved hash index, the hash function h(key, seed1) + index *h(key, seed2) is
used which will result in an index in the lookup_table, the bit corresponding to this index will be
the target value bit. This procedure is repeated for each bit of the target value.
When discussing EFD inserts and lookups, the discussion is simplified by assuming that a
group id is simply the result of a hash function. However, since hashing in general is not perfect
and will not always produce a uniform output, this simplified assumption will lead to unbalanced
groups, i.e., some groups will have more keys than others. Typically, and to minimize in-
sert time with an increasing number of keys, it is preferable that all groups have a balanced
number of keys, so the brute force search for the perfect hash terminates with a valid hash
index. In order to achieve this target, groups are rebalanced during runtime inserts, and keys
are moved around from a busy group to a less crowded group as more keys are inserted.
Fig. 4.41 depicts the high level idea of group rebalancing: given an input key, the hash result is
split into two parts, a chunk id and an 8-bit bin id. A chunk contains 64 different groups and 256
bins (i.e. any given bin can map to 4 distinct groups). When a key is inserted, the bin id is
computed (for example, in Fig. 4.41 bin_id=2) and, since each bin can be mapped to one of four
different groups (2 bits of storage), the four possible mappings are evaluated and the one that
results in a balanced key distribution across these four groups is selected; the mapping result is stored
in these two bits.
References
1- EFD is based on collaborative research work between Intel and Carnegie Mel-
lon University (CMU), interested readers can refer to the paper “Scaling Up Clus-
tered Network Appliances with ScaleBricks;” Dong Zhou et al. at SIGCOMM 2015
(http://conferences.sigcomm.org/sigcomm/2015/pdf/papers/p241.pdf ) for more information.
LPM Library
The DPDK LPM library component implements the Longest Prefix Match (LPM) table search
method for 32-bit keys that is typically used to find the best route match in IP forwarding appli-
cations.
The main configuration parameter for LPM component instances is the maximum number of
rules to support. An LPM prefix is represented by a pair of parameters (32-bit key, depth), with
depth in the range of 1 to 32. An LPM rule is represented by an LPM prefix and some user
data associated with the prefix. The prefix serves as the unique identifier of the LPM rule. In
this implementation, the user data is 1-byte long and is called next hop, in correlation with its
main use of storing the ID of the next hop in a routing table entry.
The main methods exported by the LPM component are:
• Add LPM rule: The LPM rule is provided as input. If there is no rule with the same prefix
present in the table, then the new rule is added to the LPM table. If a rule with the same
prefix is already present in the table, the next hop of the rule is updated. An error is
returned when there is no available rule space left.
• Delete LPM rule: The prefix of the LPM rule is provided as input. If a rule with the
specified prefix is present in the LPM table, then it is removed.
• Lookup LPM key: The 32-bit key is provided as input. The algorithm selects the rule that
represents the best match for the given key and returns the next hop of that rule. In the
case that there are multiple rules present in the LPM table that have the same 32-bit key,
the algorithm picks the rule with the highest depth as the best match rule, which means
that the rule has the highest number of most significant bits matching between the input
key and the rule key.
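A minimal usage sketch of these methods with the rte_lpm API is shown below; the prefix, next hop and table sizes are illustrative, and the next-hop field width should be checked against the installed rte_lpm.h.

#include <rte_lpm.h>
#include <rte_ip.h>

/* Sketch: create an LPM table, add a /24 route and look up an address.
 * Error handling is omitted; IPv4 addresses are in host byte order as
 * expected by the rte_lpm API. */
static void
lpm_example(int socket_id)
{
    struct rte_lpm_config config = {
        .max_rules = 1024,
        .number_tbl8s = 256,
        .flags = 0,
    };
    struct rte_lpm *lpm;
    uint32_t next_hop;

    lpm = rte_lpm_create("example_lpm", socket_id, &config);
    if (lpm == NULL)
        return;

    /* 10.0.1.0/24 -> next hop 5 */
    rte_lpm_add(lpm, IPv4(10, 0, 1, 0), 24, 5);

    if (rte_lpm_lookup(lpm, IPv4(10, 0, 1, 7), &next_hop) == 0) {
        /* next_hop == 5: longest matching prefix found */
    }

    rte_lpm_free(lpm);
}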
Implementation Details
The current implementation uses a variation of the DIR-24-8 algorithm that trades memory
usage for improved LPM lookup speed. The algorithm allows the lookup operation to be per-
formed with typically a single memory read access. In the statistically rare case when the best
match rule has a depth bigger than 24, the lookup operation requires two memory read
accesses. Therefore, the performance of the LPM lookup operation is greatly influenced by
whether the specific memory location is present in the processor cache or not.
The main data structure is built using the following elements:
• A table with 2^24 entries.
• A number of tables (RTE_LPM_TBL8_NUM_GROUPS) with 2^8 entries.
The first table, called tbl24, is indexed using the first 24 bits of the IP address to be looked up,
while the second table(s), called tbl8, is indexed using the last 8 bits of the IP address. This
means that depending on the outcome of trying to match the IP address of an incoming packet
to the rule stored in the tbl24 we might need to continue the lookup process in the second level.
Since every entry of the tbl24 can potentially point to a tbl8, ideally, we would have 2^24 tbl8s,
which would be the same as having a single table with 2^32 entries. This is not feasible due
to resource restrictions. Instead, this approach takes advantage of the fact that rules longer
than 24 bits are very rare. By splitting the process in two different tables/levels and limiting the
number of tbl8s, we can greatly reduce memory consumption while maintaining a very good
lookup speed (one memory access, most of the times).
Next hop and depth contain the same information as in the tbl24. The two flags show whether
the entry and the table are valid respectively.
The other main data structure is a table containing the main information about the rules (IP
and next hop). This is a higher level table, used for different things:
• Check whether a rule already exists or not, prior to addition or deletion, without having to
actually perform a lookup.
• When deleting, to check whether there is a rule containing the one that is to be deleted.
This is important, since the main data structure will have to be updated accordingly.
Addition
When adding a rule, there are different possibilities. If the rule’s depth is exactly 24 bits, then:
• Use the rule (IP address) as an index to the tbl24.
• If the entry is invalid (i.e. it doesn’t already contain a rule) then set its next hop to its value,
the valid flag to 1 (meaning this entry is in use), and the external entry flag to 0 (meaning
the lookup process ends at this point, since this is the longest prefix that matches).
If the rule’s depth is exactly 32 bits, then:
• Use the first 24 bits of the rule as an index to the tbl24.
• If the entry is invalid (i.e. it doesn’t already contain a rule) then look for a free tbl8, set
the index to the tbl8 to this value, the valid flag to 1 (meaning this entry is in use), and the
external entry flag to 1 (meaning the lookup process must continue since the rule hasn’t
been explored completely).
If the rule’s depth is any other value, prefix expansion must be performed. This means the rule
is copied to all the entries (as long as they are not in use) which would also cause a match.
As a simple example, let’s assume the depth is 20 bits. This means that there are 2^(24 -
20) = 16 different combinations of the first 24 bits of an IP address that would cause a match.
Hence, in this case, we copy the exact same entry to every position indexed by one of these
combinations.
By doing this we ensure that during the lookup process, if a rule matching the IP address exists,
it is found in either one or two memory accesses, depending on whether we need to move to
the next table or not. Prefix expansion is one of the keys of this algorithm, since it improves the
speed dramatically by adding redundancy.
Lookup
There are different things that limit the number of rules that can be added. The first one is the
maximum number of rules, which is a parameter passed through the API. Once this number is
reached, it is not possible to add any more rules to the routing table unless one or more are
removed.
The second reason is an intrinsic limitation of the algorithm. As explained before, to avoid high
memory consumption, the number of tbl8s is limited at compilation time (this value is by default
256). If we exhaust tbl8s, we won’t be able to add any more rules. How many of them are
necessary for a specific routing table is hard to determine in advance.
A tbl8 is consumed whenever we have a new rule with depth bigger than 24, and the first 24
bits of this rule are not the same as the first 24 bits of a rule previously added. If they are, then
the new rule will share the same tbl8 as the previous one, since the only difference between
the two rules is within the last byte.
With the default value of 256, we can have up to 256 rules longer than 24 bits that differ on
their first three bytes. Since routes longer than 24 bits are unlikely, this shouldn’t be a problem
in most setups. Even if it is, however, the number of tbl8s can be modified.
The LPM algorithm is used to implement the Classless Inter-Domain Routing (CIDR) strategy used
by routers implementing IPv4 forwarding.
References
LPM6 Library
The LPM6 (LPM for IPv6) library component implements the Longest Prefix Match (LPM) ta-
ble search method for 128-bit keys that is typically used to find the best match route in IPv6
forwarding applications.
This parameter is related to the number of rules you can have, but there is no way to accurately
predict the number needed to hold a specific number of rules, since it strongly depends on the
depth and IP address of every rule. One tbl8 consumes 1 kb of memory. As a recommendation,
65536 tbl8s should be sufficient to store several thousand IPv6 rules, but the number can vary
depending on the case.
An LPM prefix is represented by a pair of parameters (128-bit key, depth), with depth in the
range of 1 to 128. An LPM rule is represented by an LPM prefix and some user data associated
with the prefix. The prefix serves as the unique identifier for the LPM rule. In this implementa-
tion, the user data is 1-byte long and is called “next hop”, which corresponds to its main use of
storing the ID of the next hop in a routing table entry.
The main methods exported for the LPM component are:
• Add LPM rule: The LPM rule is provided as input. If there is no rule with the same prefix
present in the table, then the new rule is added to the LPM table. If a rule with the same
prefix is already present in the table, the next hop of the rule is updated. An error is
returned when there is no available space left.
• Delete LPM rule: The prefix of the LPM rule is provided as input. If a rule with the
specified prefix is present in the LPM table, then it is removed.
• Lookup LPM key: The 128-bit key is provided as input. The algorithm selects the rule
that represents the best match for the given key and returns the next hop of that rule. In
the case that there are multiple rules present in the LPM table that have the same 128-bit
value, the algorithm picks the rule with the highest depth as the best match rule, which
means the rule has the highest number of most significant bits matching between the
input key and the rule key.
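A usage sketch analogous to the IPv4 case, using the rte_lpm6 API (the prefix and sizes are illustrative; the next-hop field width should be checked against the installed rte_lpm6.h):

#include <rte_lpm6.h>

/* Sketch: create an LPM6 table, add a /48 route and look up an address.
 * Error handling is omitted. */
static void
lpm6_example(int socket_id)
{
    struct rte_lpm6_config config = {
        .max_rules = 1024,
        .number_tbl8s = 65536,
        .flags = 0,
    };
    struct rte_lpm6 *lpm;
    uint8_t prefix[16] = { 0x20, 0x01, 0x0d, 0xb8, 0x00, 0x01 };
    uint8_t addr[16]   = { 0x20, 0x01, 0x0d, 0xb8, 0x00, 0x01, 0, 0,
                           0, 0, 0, 0, 0, 0, 0, 0x42 };
    uint8_t next_hop;

    lpm = rte_lpm6_create("example_lpm6", socket_id, &config);
    if (lpm == NULL)
        return;

    /* 2001:db8:1::/48 -> next hop 7 */
    rte_lpm6_add(lpm, prefix, 48, 7);

    if (rte_lpm6_lookup(lpm, addr, &next_hop) == 0) {
        /* next_hop == 7: longest matching prefix found */
    }

    rte_lpm6_free(lpm);
}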
Implementation Details
This is a modification of the algorithm used for IPv4 (see Implementation Details). In this case,
instead of using two levels, one with a tbl24 and a second with a tbl8, 14 levels are used.
The implementation can be seen as a multi-bit trie where the stride or number of bits inspected
on each level varies from level to level. Specifically, 24 bits are inspected on the root node, and
the remaining 104 bits are inspected in groups of 8 bits. This effectively means that the trie
has 14 levels at the most, depending on the rules that are added to the table.
The algorithm allows the lookup operation to be performed with a number of memory accesses
that directly depends on the length of the rule and whether there are other rules with bigger
depths and the same key in the data structure. It can vary from 1 to 14 memory accesses, with
5 being the average value for the lengths that are most commonly used in IPv6.
The main data structure is built using the following elements:
• A table with 2^24 entries
• A number of tables, configurable by the user through the API, with 2^8 entries
The first table, called tbl24, is indexed using the first 24 bits of the IP address to be looked up,
while the rest of the tables, called tbl8s, are indexed using the rest of the bytes of the IP
address, in chunks of 8 bits. This means that depending on the outcome of trying to match
the IP address of an incoming packet to the rule stored in the tbl24 or the subsequent tbl8s we
might need to continue the lookup process in deeper levels of the tree.
Similar to the limitation presented in the algorithm for IPv4, to store every possible IPv6 rule,
we would need a table with 2^128 entries. This is not feasible due to resource restrictions.
By splitting the process in different tables/levels and limiting the number of tbl8s, we can greatly
reduce memory consumption while maintaining a very good lookup speed (one memory ac-
cess per level).
When deleting, to check whether there is a rule containing the one that is to be deleted. This
is important, since the main data structure will have to be updated accordingly.
Addition
When adding a rule, there are different possibilities. If the rule’s depth is exactly 24 bits, then:
• Use the rule (IP address) as an index to the tbl24.
• If the entry is invalid (i.e. it doesn’t already contain a rule) then set its next hop to its value,
the valid flag to 1 (meaning this entry is in use), and the external entry flag to 0 (meaning
the lookup process ends at this point, since this is the longest prefix that matches).
If the rule’s depth is bigger than 24 bits but a multiple of 8, then:
• Use the first 24 bits of the rule as an index to the tbl24.
• If the entry is invalid (i.e. it doesn’t already contain a rule) then look for a free tbl8, set
the index to the tbl8 to this value, the valid flag to 1 (meaning this entry is in use), and the
external entry flag to 1 (meaning the lookup process must continue since the rule hasn’t
been explored completely).
• Use the following 8 bits of the rule as an index to the next tbl8.
• Repeat the process until the tbl8 at the right level (depending on the depth) has been
reached and fill it with the next hop, setting the next entry flag to 0.
If the rule’s depth is any other value, prefix expansion must be performed. This means the rule
is copied to all the entries (as long as they are not in use) which would also cause a match.
As a simple example, let’s assume the depth is 20 bits. This means that there are 2^(24-20)
= 16 different combinations of the first 24 bits of an IP address that would cause a match.
Hence, in this case, we copy the exact same entry to every position indexed by one of these
combinations.
By doing this we ensure that during the lookup process, if a rule matching the IP address exists,
it is found in, at the most, 14 memory accesses, depending on how many times we need to
move to the next table. Prefix expansion is one of the keys of this algorithm, since it improves
the speed dramatically by adding redundancy.
Prefix expansion can be performed at any level. So, for example, if the depth is 34 bits, it will
be performed in the third level (second tbl8-based level).
Lookup
• Repeat the process until either we find an invalid entry (lookup miss) or a valid entry with
the external entry flag set to 0. Return the next hop in the latter case.
There are different things that limit the number of rules that can be added. The first one is the
maximum number of rules, which is a parameter passed through the API. Once this number is
reached, it is not possible to add any more rules to the routing table unless one or more are
removed.
The second limitation is in the number of tbl8s available. If we exhaust tbl8s, we won’t be able
to add any more rules. How many of them are necessary for a specific routing
table is hard to determine in advance.
In this algorithm, the maximum number of tbl8s a single rule can consume is 13, which is the
number of levels minus one, since the first three bytes are resolved in the tbl24. However:
• Typically, on IPv6, routes are not longer than 48 bits, which means rules usually take up
to 3 tbl8s.
As explained in the LPM for IPv4 algorithm, it is possible and very likely that several rules will
share one or more tbl8s, depending on what their first bytes are. If they share the same first 24
bits, for instance, the tbl8 at the second level will be shared. This might happen again in deeper
levels, so, effectively, two 48 bit-long rules may use the same three tbl8s if the only difference
is in their last byte.
The number of tbl8s is a parameter exposed to the user through the API in this version of the
algorithm, due to its impact on memory consumption and the number of rules that can be added
to the LPM table. One tbl8 consumes 1 kilobyte of memory.
The LPM algorithm is used to implement the Classless Inter-Domain Routing (CIDR) strategy
used by routers implementing IP forwarding.
The DPDK Packet Distributor library is a library designed to be used for dynamic load balancing
of traffic while supporting single packet at a time operation. When using this library, the logical
cores in use are to be considered in two roles: firstly a distributor lcore, which is responsible
for load balancing or distributing packets, and a set of worker lcores which are responsible for
receiving the packets from the distributor and operating on them. The model of operation is
shown in the diagram below.
The distributor core does the majority of the processing for ensuring that packets are fairly
shared among workers. The operation of the distributor is as follows:
1. Packets are passed to the distributor component by having the distributor lcore thread
call the “rte_distributor_process()” API
2. The worker lcores all share a single cache line with the distributor core in order to pass
messages and packets to and from the worker. The process API call will poll all the
worker cache lines to see what workers are requesting packets.
3. As workers request packets, the distributor takes packets from the set of packets passed
in and distributes them to the workers. As it does so, it examines the “tag” – stored in the
RSS hash field in the mbuf – for each packet and records what tags are being processed
by each worker.
4. If the next packet in the input set has a tag which is already being processed by a worker,
then that packet will be queued up for processing by that worker and given to it in prefer-
ence to other packets when that worker next makes a request for work. This ensures that
no two packets with the same tag are processed in parallel, and that all packets with the
same tag are processed in input order.
5. Once all input packets passed to the process API have either been distributed to workers
or been queued up for a worker which is processing a given tag, then the process API
returns to the caller.
Other functions which are available to the distributor lcore are:
• rte_distributor_returned_pkts()
• rte_distributor_flush()
• rte_distributor_clear_returns()
Of these the most important API call is “rte_distributor_returned_pkts()” which should only be
called on the lcore which also calls the process API. It returns to the caller all packets which
have finished processing by all worker cores. Within this set of returned packets, all packets
sharing the same tag will be returned in their original order.
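A sketch of a distributor lcore loop using these calls is shown below; the distributor instance d is assumed to have been created elsewhere with rte_distributor_create(), and rx_burst()/tx_burst() are hypothetical application helpers, not DPDK APIs.

#include <rte_distributor.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Application-provided RX/TX helpers (hypothetical, not DPDK APIs). */
extern unsigned int rx_burst(struct rte_mbuf **bufs, unsigned int n);
extern void tx_burst(struct rte_mbuf **bufs, unsigned int n);

/* Sketch of a distributor lcore loop: hand a burst of packets to the
 * workers and collect the packets they have finished with. The
 * distributor "d" is assumed to have been created elsewhere with
 * rte_distributor_create(). */
static void
distributor_loop(struct rte_distributor *d)
{
    struct rte_mbuf *bufs[BURST_SIZE];
    struct rte_mbuf *done[BURST_SIZE];

    for (;;) {
        unsigned int nb_rx = rx_burst(bufs, BURST_SIZE);

        /* Distribute packets to requesting workers, keyed by the mbuf tag. */
        rte_distributor_process(d, bufs, nb_rx);

        /* Retrieve packets that workers have finished processing. */
        int nb_done = rte_distributor_returned_pkts(d, done, BURST_SIZE);
        if (nb_done > 0)
            tx_burst(done, nb_done);
    }
}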
NOTE: If worker lcores buffer up packets internally for transmission in bulk afterwards, the
packets sharing a tag will likely get out of order. Once a worker lcore requests a new packet,
the distributor assumes that it has completely finished with the previous packet and therefore
that additional packets with the same tag can safely be distributed to other workers – who may
then flush their buffered packets sooner and cause packets to get out of order.
NOTE: No packet ordering guarantees are made about packets which do not share a common
packet tag.
Using the process and returned_pkts API, the following application workflow can be used, while
allowing packet order within a packet flow – identified by a tag – to be maintained.
The flush and clear_returns API calls, mentioned previously, are likely of less use than the
process and returned_pkts APIs, and are principally provided to aid in unit testing of the li-
brary. Descriptions of these functions and their use can be found in the DPDK API Reference
document.
Worker Operation
Worker cores are the cores which do the actual manipulation of the packets distributed by the
packet distributor. Each worker calls “rte_distributor_get_pkt()” API to request a new packet
when it has finished processing the previous one. [The previous packet should be returned to
the distributor component by passing it as the final parameter to this API call.]
Since it may be desirable to vary the number of worker cores, depending on the traffic load i.e.
to save power at times of lighter load, it is possible to have a worker stop processing packets
by calling “rte_distributor_return_pkt()” to indicate that it has finished the current packet and
does not want a new one.
Reorder Library
The Reorder Library provides a mechanism for reordering mbufs based on their sequence
number.
Operation
The reorder library is essentially a buffer that reorders mbufs. The user inserts out of order
mbufs into the reorder buffer and pulls in-order mbufs from it.
At a given time, the reorder buffer contains mbufs whose sequence numbers are inside the
sequence window. The sequence window is determined by the minimum sequence number
and the number of entries that the buffer was configured to hold. For example, given a reorder
buffer with 200 entries and a minimum sequence number of 350, the sequence window has
low and high limits of 350 and 550 respectively.
When inserting mbufs, the reorder library differentiates between valid, early and late mbufs
depending on the sequence number of the inserted mbuf:
• valid: the sequence number is inside the window.
• late: the sequence number is outside the window and less than the low limit.
• early: the sequence number is outside the window and greater than the high limit.
The reorder buffer directly returns late mbufs and tries to accommodate early mbufs.
Implementation Details
The reorder library is implemented as a pair of buffers, referred to as the Order buffer
and the Ready buffer.
On an insert call, valid mbufs are inserted directly into the Order buffer and late mbufs are
returned to the user with an error.
In the case of early mbufs, the reorder buffer will try to move the window (incrementing the
minimum sequence number) so that the mbuf becomes a valid one. To that end, mbufs in the
Order buffer are moved into the Ready buffer. Any mbufs that have not arrived yet are ignored
and therefore will become late mbufs. This means that as long as there is room in the Ready
buffer, the window will be moved to accommodate early mbufs that would otherwise be outside
the reordering window.
For example, assume that we have a buffer of 200 entries with a minimum sequence number
of 350, and that we need to insert an early mbuf with sequence number 565. This means that we
would need to move the window at least 15 positions to accommodate the mbuf. The reorder
buffer would try to move mbufs from at least the next 15 slots in the Order buffer to the Ready
buffer, as long as there is room in the Ready buffer. Any gaps in the Order buffer at that point
are skipped, and those packets will be reported as late packets when they arrive. The process
of moving packets to the Ready buffer continues beyond the minimum required until a gap, i.e.
a missing mbuf, in the Order buffer is encountered.
When draining mbufs, the reorder buffer would return mbufs in the Ready buffer first and then
from the Order buffer until a gap is found (mbufs that have not arrived yet).
An application using the DPDK packet distributor could make use of the reorder library to
transmit packets in the same order they were received.
A basic packet distributor use case would consist of a distributor with multiple workers cores.
The processing of packets by the workers is not guaranteed to be in order, hence a reorder
buffer can be used to order as many packets as possible.
In such a scenario, the distributor assigns a sequence number to mbufs before delivering them
to the workers. As the workers finish processing the packets, the distributor inserts those mbufs
into the reorder buffer and finally transmits the drained mbufs.
NOTE: Currently the reorder buffer is not thread-safe, so the same thread must be responsible
for inserting and draining mbufs.
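A minimal sketch of the insert/drain cycle described above is shown below. The buffer name, size and burst size are illustrative only, and the sequence number is assumed to be carried in the mbuf seqn field as assigned by the application (or the packet distributor).

#include <rte_reorder.h>
#include <rte_mbuf.h>
#include <rte_lcore.h>

#define REORDER_BUFFER_SIZE 8192   /* example size, power of two */
#define BURST_SIZE 32

/* Illustrative use of the reorder buffer (a sketch, not the definitive code). */
static void
reorder_example(struct rte_mbuf *mbuf)
{
    static struct rte_reorder_buffer *b;
    struct rte_mbuf *out[BURST_SIZE];
    unsigned int nb_out;

    /* Create the reorder buffer once (normally done at initialization time). */
    if (b == NULL)
        b = rte_reorder_create("pkt_reorder", rte_socket_id(), REORDER_BUFFER_SIZE);

    /* Insert an out-of-order mbuf; the sequence number is read from mbuf->seqn.
     * A negative return indicates the mbuf could not be buffered (for example a
     * late mbuf), in which case the caller handles it directly. */
    if (rte_reorder_insert(b, mbuf) < 0) {
        /* Late or otherwise unbuffered mbuf: transmit or drop it here. */
    }

    /* Drain in-order mbufs, ready for transmission. */
    nb_out = rte_reorder_drain(b, out, BURST_SIZE);
    /* ... transmit the nb_out mbufs in order ... */
    (void)nb_out;
}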
The IP Fragmentation and Reassembly Library implements IPv4 and IPv6 packet fragmenta-
tion and reassembly.
Packet fragmentation
Packet fragmentation routines divide an input packet into a number of fragments. Both the
rte_ipv4_fragment_packet() and rte_ipv6_fragment_packet() functions assume that the input mbuf
data points to the start of the IP header of the packet (i.e. the L2 header is already stripped out).
To avoid copying the actual packet data, a zero-copy technique is used (rte_pktmbuf_attach()).
For each fragment two new mbufs are created:
• Direct mbuf – mbuf that will contain the L3 header of the new fragment.
• Indirect mbuf – mbuf that is attached to the mbuf with the original packet. Its data field
points to the start of the original packet’s data plus the fragment offset.
Then the L3 header is copied from the original mbuf into the ‘direct’ mbuf and updated to reflect
the new fragmented status. Note that for IPv4, the header checksum is not recalculated and is set to
zero.
Finally, the ‘direct’ and ‘indirect’ mbufs for each fragment are linked together via the mbuf’s next field to
compose a packet for the new fragment.
The caller can explicitly specify which mempools should be used to allocate the ‘direct’
and ‘indirect’ mbufs.
For more information about direct and indirect mbufs, refer to Direct and Indirect Buffers.
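For illustration, a fragmentation call using the API above could look as follows. The MTU value, the maximum fragment count and the mempool names are assumptions for the example, not values mandated by the library.

#include <rte_ip_frag.h>
#include <rte_mbuf.h>

#define FRAG_MTU      1500   /* example MTU */
#define MAX_FRAGS_OUT 4      /* example maximum number of fragments */

/* Fragment one IPv4 packet whose data starts at the IP header (L2 already
 * stripped). Returns the number of fragments produced, or a negative errno. */
static int
fragment_one(struct rte_mbuf *pkt,
             struct rte_mempool *direct_pool,
             struct rte_mempool *indirect_pool)
{
    struct rte_mbuf *frags[MAX_FRAGS_OUT];

    int nb_frags = rte_ipv4_fragment_packet(pkt, frags, MAX_FRAGS_OUT,
                                            FRAG_MTU, direct_pool,
                                            indirect_pool);
    if (nb_frags < 0)
        return nb_frags;            /* e.g. the packet could not be fragmented */

    /* The original packet is typically freed once the fragments exist. */
    rte_pktmbuf_free(pkt);
    return nb_frags;
}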
Packet reassembly
IP Fragment Table
The fragment table maintains information about already received fragments of a packet.
Each IP packet is uniquely identified by the triple <Source IP address>, <Destination IP address>,
<ID>.
Note that all update/lookup operations on the fragment table are not thread safe. So, if different
execution contexts (threads/processes) access the same table simultaneously, some external
synchronization mechanism has to be provided.
Each table entry can hold information about packets consisting of up to
RTE_LIBRTE_IP_FRAG_MAX (by default: 4) fragments.
Code example that demonstrates the creation of a new fragment table:
frag_cycles = (rte_get_tsc_hz() + MS_PER_S - 1) / MS_PER_S * max_flow_ttl;
bucket_num = max_flow_num + max_flow_num / 4;
frag_tbl = rte_ip_frag_table_create(max_flow_num, bucket_entries, max_flow_num, frag_cycles, socket_id);
Internally, the fragment table is a simple hash table. The basic idea is to use two hash functions
and <bucket_entries> * associativity. This provides 2 * <bucket_entries> possible locations
in the hash table for each key. When a collision occurs and all 2 * <bucket_entries> locations are
occupied, instead of reinserting existing keys into alternative locations, ip_frag_tbl_add() just
returns a failure.
Also, entries that reside in the table longer than <max_cycles> are considered invalid and
may be removed/replaced by new ones.
Note that reassembly requires a lot of mbufs to be allocated. At any given time, up to (2 *
bucket_entries * RTE_LIBRTE_IP_FRAG_MAX * <maximum number of mbufs per packet>) mbufs
can be stored inside the fragment table waiting for remaining fragments.
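As a hedged sketch of the reassembly side, a fragment table created as above can be fed with received fragments through rte_ipv4_frag_reassemble_packet(), as shown below. The death-row handling and the prefetch count follow the pattern used by the DPDK ip_reassembly sample application and are assumptions, not requirements of the library.

#include <rte_ip_frag.h>
#include <rte_ip.h>
#include <rte_cycles.h>

/* Feed one received fragment into the table; returns the reassembled packet
 * once all fragments have arrived, or NULL if more fragments are expected
 * (or an error occurred). The caller is expected to have set mb->l2_len and
 * mb->l3_len before calling. */
static struct rte_mbuf *
reassemble_one(struct rte_ip_frag_tbl *frag_tbl,
               struct rte_ip_frag_death_row *death_row,
               struct rte_mbuf *mb, struct ipv4_hdr *ip_hdr)
{
    uint64_t tms = rte_rdtsc();

    struct rte_mbuf *reassembled =
            rte_ipv4_frag_reassemble_packet(frag_tbl, death_row, mb, tms, ip_hdr);

    /* Free any stale fragments that the table decided to drop. */
    rte_ip_frag_free_death_row(death_row, 3);

    return reassembled;
}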
The librte_pdump library provides a framework for packet capturing in DPDK. The library
makes a complete copy of the Rx and Tx mbufs to a new mempool and hence slows down
the performance of the application, so it is recommended to use this library for debugging
purposes.
The library provides the following APIs to initialize the packet capture framework, to enable or
disable the packet capture, and to uninitialize it (a usage sketch follows the list):
• rte_pdump_init(): This API initializes the packet capture framework.
• rte_pdump_enable(): This API enables the packet capture on a given port and queue.
Note: The filter option in the API is a place holder for future enhancements.
• rte_pdump_enable_by_deviceid(): This API enables the packet capture on a
given device id (vdev name or pci address) and queue. Note: The filter option
in the API is a place holder for future enhancements.
• rte_pdump_disable(): This API disables the packet capture on a given port and
queue.
• rte_pdump_disable_by_deviceid(): This API disables the packet capture on a
given device id (vdev name or pci address) and queue.
• rte_pdump_uninit(): This API uninitializes the packet capture framework.
• rte_pdump_set_socket_dir(): This API sets the server and client socket paths.
Note: This API is not thread-safe.
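The snippet below is an illustrative sketch of how these calls might be wired together. The ring size, mempool parameters and flag values are assumptions, and some signatures (for example whether rte_pdump_init() takes a socket path) have changed between releases, so check the API reference for the release in use.

#include <rte_pdump.h>
#include <rte_ring.h>
#include <rte_mbuf.h>
#include <rte_lcore.h>

/* Illustrative packet capture setup (a sketch, not the definitive code). */
static void
pdump_example(void)
{
    /* During initialization (typically in the primary process); a NULL path is
     * assumed here to select the default server socket directory. */
    rte_pdump_init(NULL);

    /* From a client: enable capture of both directions on port 0, queue 0.
     * Captured copies are placed into "ring" using mbufs from "pool". */
    struct rte_ring *ring = rte_ring_create("pdump_ring", 4096,
                                            rte_socket_id(), 0);
    struct rte_mempool *pool = rte_pktmbuf_pool_create("pdump_pool", 8191, 256,
                                                       0, RTE_MBUF_DEFAULT_BUF_SIZE,
                                                       rte_socket_id());
    rte_pdump_enable(0, 0, RTE_PDUMP_FLAG_RXTX, ring, pool, NULL);

    /* ... capture runs; copied packets can be dequeued from "ring" ... */

    rte_pdump_disable(0, 0, RTE_PDUMP_FLAG_RXTX);
    rte_pdump_uninit();
}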
Operation
The librte_pdump library works on a client/server model. The server is responsible for
enabling or disabling the packet capture and the clients are responsible for requesting the
enabling or disabling of the packet capture.
The packet capture framework, as part of its initialization, creates a pthread and the server
socket in that pthread. The application that calls the framework initialization will have the server
socket created, either under the path that the application has passed or under the default path,
i.e. /var/run/.dpdk for the root user or ~/.dpdk for a non-root user.
Applications that request enabling or disabling of the packet capture will have the client socket
created, either under the path that the application has passed or under the default path, i.e.
/var/run/.dpdk for the root user or ~/.dpdk for a non-root user, in order to send the requests to
the server. The server socket will listen for client requests to enable or disable the packet
capture.
Implementation Details
The library API rte_pdump_init() initializes the packet capture framework by creating the
pthread and the server socket. The server socket in the pthread context will be listening to the
client requests to enable or disable the packet capture.
The library APIs rte_pdump_enable() and rte_pdump_enable_by_deviceid() enable
the packet capture. On each call to these APIs, the library creates a separate client
socket, creates the “pdump enable” request and sends the request to the server. The server
that is listening on the socket will take the request and enable the packet capture by registering
the Ethernet RX and TX callbacks for the given port or device_id and queue combinations.
Then the server will mirror the packets to the new mempool and enqueue them to the rte_ring
that clients have passed to these APIs. The server also sends the response back to the client
about the status of the request that was processed. After the response is received from the
server, the client socket is closed.
The DPDK app/pdump tool is developed based on this library to capture packets in DPDK.
Users can use this as an example to develop their own packet capturing tools.
Multi-process Support
In the DPDK, multi-process support is designed to allow a group of DPDK processes to work
together in a simple transparent manner to perform packet processing, or other workloads. To
support this functionality, a number of additions have been made to the core DPDK Environ-
ment Abstraction Layer (EAL).
The EAL has been modified to allow different types of DPDK processes to be spawned, each
with different permissions on the hugepage memory used by the applications. For now, there
are two types of process specified:
• primary processes, which can initialize and which have full permissions on shared mem-
ory
• secondary processes, which cannot initialize shared memory, but can attach to pre-initialized
shared memory and create objects in it.
Standalone DPDK processes are primary processes, while secondary processes can only run
alongside a primary process or after a primary process has already configured the hugepage
shared memory for them.
To support these two process types, and other multi-process setups described later, two addi-
tional command-line parameters are available to the EAL:
• --proc-type: for specifying a given process instance as the primary or secondary
DPDK instance
• --file-prefix: to allow processes that do not want to co-operate to have different
memory regions
A number of example applications are provided that demonstrate how multiple DPDK pro-
cesses can be used together. These are more fully documented in the “Multi-process Sample
Application” chapter in the DPDK Sample Application’s User Guide.
Memory Sharing
The key element in getting a multi-process application working using the DPDK is to ensure that
memory resources are properly shared among the processes making up the multi-process ap-
plication. Once there are blocks of shared memory available that can be accessed by multiple
processes, then issues such as inter-process communication (IPC) become much simpler.
On application start-up in a primary or standalone process, the DPDK records to memory-
mapped files the details of the memory configuration it is using - hugepages in use, the virtual
addresses they are mapped at, the number of memory channels present, etc. When a sec-
ondary process is started, these files are read and the EAL recreates the same memory con-
figuration in the secondary process so that all memory zones are shared between processes
and all pointers to that memory are valid, and point to the same objects, in both processes.
Note: Refer to Multi-process Limitations for details of how Linux kernel Address-Space Layout
Randomization (ASLR) can affect memory sharing.
The EAL also supports an auto-detection mode (set by the EAL --proc-type=auto flag),
whereby a DPDK process is started as a secondary instance if a primary instance is already
running.
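The fragment below illustrates the typical create-versus-lookup pattern that this memory sharing enables. The ring name and size are arbitrary examples, not values required by the EAL.

#include <rte_eal.h>
#include <rte_ring.h>
#include <rte_lcore.h>

/* After rte_eal_init(): the primary process creates shared objects, while
 * secondary processes look them up by name in the shared memory. */
static struct rte_ring *
get_shared_ring(void)
{
    if (rte_eal_process_type() == RTE_PROC_PRIMARY)
        return rte_ring_create("shared_ring", 1024, rte_socket_id(), 0);

    return rte_ring_lookup("shared_ring");
}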
Deployment Models
Symmetric/Peer Processes
DPDK multi-process support can be used to create a set of peer processes where each pro-
cess performs the same workload. This model is equivalent to having multiple threads each
running the same main-loop function, as is done in most of the supplied DPDK sample ap-
plications. In this model, the first of the processes spawned should be spawned using the
--proc-type=primary EAL flag, while all subsequent instances should be spawned using
the --proc-type=secondary flag.
The simple_mp and symmetric_mp sample applications demonstrate this usage model. They
are described in the “Multi-process Sample Application” chapter in the DPDK Sample Applica-
tion’s User Guide.
Asymmetric/Non-Peer Processes
An alternative deployment model that can be used for multi-process applications is to have
a single primary process instance that acts as a load-balancer or server distributing received
packets among worker or client threads, which are run as secondary processes. In this case,
extensive use of rte_ring objects is made, which are located in shared hugepage memory.
The client_server_mp sample application shows this usage model. It is described in the “Multi-
process Sample Application” chapter in the DPDK Sample Application’s User Guide.
In addition to the above scenarios involving multiple DPDK processes working together, it is
possible to run multiple DPDK processes side-by-side, where those processes are all work-
ing independently. Support for this usage scenario is provided using the --file-prefix
parameter to the EAL.
By default, the EAL creates hugepage files on each hugetlbfs filesystem using the rtemap_X
filename, where X is in the range 0 to the maximum number of hugepages -1. Similarly, it cre-
ates shared configuration files, memory mapped in each process, using the /var/run/.rte_config
filename, when run as root (or $HOME/.rte_config when run as a non-root user; if filesystem
and device permissions are set up to allow this). The rte part of the filenames of each of the
above is configurable using the file-prefix parameter.
In addition to specifying the file-prefix parameter, any DPDK applications that are to be run
side-by-side must explicitly limit their memory use. This is done by passing the -m flag to
each process to specify how much hugepage memory, in megabytes, each process can use
(or passing --socket-mem to specify how much hugepage memory on each socket each
process can use).
Note: Independent DPDK instances running side-by-side on a single machine cannot share
any network ports. Any network ports being used by one process should be blacklisted in every
other process.
In the same way that it is possible to run independent DPDK applications side-by-side on a
single system, this can be trivially extended to multi-process groups of DPDK applications run-
ning side-by-side. In this case, the secondary processes must use the same --file-prefix
parameter as the primary process whose shared memory they are connecting to.
Note: All restrictions and issues with multiple independent DPDK processes running side-by-
side apply in this usage scenario also.
Multi-process Limitations
There are a number of limitations to what can be done when running DPDK multi-process
applications. Some of these are documented below:
• The multi-process feature requires that the exact same hugepage memory mappings be
present in all applications. The Linux security feature - Address-Space Layout Random-
ization (ASLR) can interfere with this mapping, so it may be necessary to disable this
feature in order to reliably run multi-process applications.
Warning: Disabling Address-Space Layout Randomization (ASLR) may have security im-
plications, so it is recommended that it be disabled only when absolutely necessary, and
only when the implications of this change have been understood.
• All DPDK processes running as a single application and using shared memory must
have distinct coremask arguments. It is not possible to have a primary and secondary
instance, or two secondary instances, using any of the same logical cores. Attempting to
do so can cause corruption of memory pool caches, among other issues.
• The delivery of interrupts, such as Ethernet* device link status interrupts, does not work
in secondary processes. All interrupts are triggered inside the primary process only.
Any application needing interrupt notification in multiple processes should provide its
own mechanism to transfer the interrupt information from the primary process to any
secondary process that needs the information.
• The use of function pointers between multiple processes running based on different compiled
binaries is not supported, since the location of a given function in one process may
be different from its location in a second. This prevents the librte_hash library from behav-
ing properly as in a multi-threaded instance, since it uses a pointer to the hash function
internally.
To work around this issue, it is recommended that multi-process applications perform the
hash calculations by directly calling the hashing function from the code and then using the
rte_hash_add_with_hash()/rte_hash_lookup_with_hash() functions instead of the functions
which do the hashing internally, such as rte_hash_add()/rte_hash_lookup().
• Depending upon the hardware in use, and the number of DPDK processes used, it may
not be possible to have HPET timers available in each DPDK instance. The minimum
number of HPET comparators available to Linux* userspace can be just a single com-
parator, which means that only the first, primary DPDK process instance can open and
mmap /dev/hpet. If the number of required DPDK processes exceeds that of the number
of available HPET comparators, the TSC (which is the default timer in this release) must
be used as a time source across all processes instead of the HPET.
The DPDK Kernel NIC Interface (KNI) allows userspace applications access to the Linux*
control plane.
The benefits of using the DPDK KNI are:
• Faster than existing Linux TUN/TAP interfaces (by eliminating system calls and
copy_to_user()/copy_from_user() operations).
• Allows management of DPDK ports using standard Linux net tools such as ethtool, ifcon-
fig and tcpdump.
• Allows an interface with the kernel network stack.
The components of an application using the DPDK Kernel NIC Interface are shown in Fig.
4.47.
The KNI kernel loadable module provides support for two types of devices:
• A Miscellaneous device (/dev/kni) that:
– Creates net devices (via ioctl calls).
– Maintains a kernel thread context shared by all KNI instances (simulating the RX
side of the net driver).
– For single kernel thread mode, maintains a kernel thread context shared by all KNI
instances (simulating the RX side of the net driver).
– For multiple kernel thread mode, maintains a kernel thread context for each KNI
instance (simulating the RX side of the net driver).
• Net device:
– Net functionality provided by implementing several operations such as netdev_ops,
header_ops, ethtool_ops that are defined by struct net_device, including support for
DPDK mbufs and FIFOs.
– The interface name is provided from userspace.
– The MAC address can be the real NIC MAC address or random.
The KNI interfaces are created by a DPDK application dynamically. The interface name and
FIFO details are provided by the application through an ioctl call using the rte_kni_device_info
struct which contains:
• The interface name.
• Physical addresses of the corresponding memzones for the relevant FIFOs.
• Mbuf mempool details, both physical and virtual (to calculate the offset for mbuf pointers).
• PCI information.
• Core affinity.
Refer to rte_kni_common.h in the DPDK source code for more details.
The physical addresses will be re-mapped into the kernel address space and stored in separate
KNI contexts.
The affinity of kernel RX thread (both single and multi-threaded modes) is controlled by
force_bind and core_id config parameters.
The KNI interfaces can be deleted by a DPDK application dynamically after being created.
Furthermore, all those KNI interfaces not deleted will be deleted on the release operation of
the miscellaneous device (when the DPDK application is closed).
To minimize the amount of DPDK code running in kernel space, the mbuf mempool is managed
in userspace only. The kernel module will be aware of mbufs, but all mbuf allocation and free
operations are handled by the DPDK application only.
On the DPDK RX side, the mbuf is allocated by the PMD in the RX thread context. This thread
will enqueue the mbuf in the rx_q FIFO. The KNI thread will poll all KNI active devices for the
rx_q. If an mbuf is dequeued, it will be converted to a sk_buff and sent to the net stack via
netif_rx(). The dequeued mbuf must be freed, so the same pointer is sent back in the free_q
FIFO.
The RX thread, in the same main loop, polls this FIFO and frees the mbuf after dequeuing it.
For packet egress the DPDK application must first enqueue several mbufs to create an mbuf
cache on the kernel side.
The packet is received from the Linux net stack, by calling the kni_net_tx() callback. The mbuf
is dequeued (without waiting, due to the cache) and filled with data from the sk_buff. The sk_buff
is then freed and the mbuf is sent in the tx_q FIFO.
The DPDK TX thread dequeues the mbuf and sends it to the PMD (via rte_eth_tx_burst()). It
then puts the mbuf back in the cache.
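For illustration, the userspace side of this packet flow is usually a simple loop that shuttles mbufs between the NIC and the KNI device, as sketched below. The port/queue numbers and burst size are assumptions, and error handling (such as freeing unsent mbufs) is omitted for brevity.

#include <rte_ethdev.h>
#include <rte_kni.h>
#include <rte_mbuf.h>

#define PKT_BURST 32

/* Illustrative NIC <-> KNI forwarding step (a sketch, not the definitive code). */
static void
kni_forward(struct rte_kni *kni, uint8_t port)
{
    struct rte_mbuf *pkts[PKT_BURST];
    unsigned int nb;

    /* Ingress: NIC RX -> KNI (the kernel thread turns these into sk_buffs). */
    nb = rte_eth_rx_burst(port, 0, pkts, PKT_BURST);
    if (nb > 0)
        rte_kni_tx_burst(kni, pkts, nb);

    /* Egress: KNI (packets from the Linux stack) -> NIC TX. */
    nb = rte_kni_rx_burst(kni, pkts, PKT_BURST);
    if (nb > 0)
        rte_eth_tx_burst(port, 0, pkts, (uint16_t)nb);

    /* Service ethtool/ifconfig style requests arriving from the kernel side. */
    rte_kni_handle_request(kni);
}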
Ethtool
Ethtool is a Linux-specific tool with corresponding support in the kernel where each net device
must register its own callbacks for the supported operations. The current implementation uses
the igb/ixgbe modified Linux drivers for ethtool support. Ethtool is not supported in i40e and
VMs (VF or EM devices).
Link state and MTU change are network interface specific operations usually done via ifconfig.
The request is initiated from the kernel side (in the context of the ifconfig process) and handled
by the user space DPDK application. The application polls the request, calls the application
handler and returns the response back into the kernel space.
The application handlers can be registered upon interface creation or explicitly regis-
tered/unregistered in runtime. This provides flexibility in multiprocess scenarios (where the
KNI is created in the primary process but the callbacks are handled in the secondary one).
The constraint is that a single process can register and handle the requests.
vHost is a kernel module usually working as the backend of virtio (a para-virtualization driver
framework) to accelerate the traffic from the guest to the host. The DPDK Kernel NIC interface
provides the ability to hook up vHost traffic into a userspace DPDK application. Together with
the DPDK PMD virtio, it significantly improves the throughput between guest and host. In the
scenario where DPDK is running as fast path in the host, kni-vhost is an efficient path for the
traffic.
Overview
vHost-net has three kinds of real backend implementations. They are: 1) tap, 2) macvtap and
3) RAW socket. The main idea behind kni-vhost is making the KNI work as a RAW socket,
attaching it as the backend instance of vHost-net. It is using the existing interface with vHost-
net, so it does not require any kernel hacking, and is fully-compatible with the kernel vhost
module. As vHost is still taking responsibility for communicating with the front-end virtio, it
naturally supports both legacy virtio-net and the DPDK PMD virtio. There is a little penalty that
comes from the non-polling mode of vhost. However, it scales throughput well when using KNI
in multi-thread mode.
Packet Flow
There is only a minor difference from the original KNI traffic flows. On transmit side, vhost
kthread calls the RAW socket’s ops sendmsg and it puts the packets into the KNI transmit
FIFO. On the receive side, the kni kthread gets packets from the KNI receive FIFO, puts them
into the queue of the raw socket, and wakes up the task in vhost kthread to begin receiving. All
the packet copying, irrespective of whether it is on the transmit or receive side, happens in the
context of vhost kthread. Every vhost-net device is exposed to a front end virtio device in the
guest.
Sample Usage
Before starting to use KNI as the backend of vhost, the CONFIG_RTE_KNI_VHOST configu-
ration option must be turned on. Otherwise, by default, KNI will not enable its backend support
capability.
Of course, as a prerequisite, the vhost/vhost-net kernel CONFIG should be chosen before
compiling the kernel.
1. Compile the DPDK and insert uio_pci_generic/igb_uio kernel modules as normal.
2. Insert the KNI kernel module:
insmod ./rte_kni.ko
3. Run the kni sample application with two physical ports. Each port pins two forwarding
cores (ingress/egress) in user space.
4. Assign a raw socket to vhost-net during qemu-kvm startup. The DPDK does not provide
a script to do this since it is easy for the user to customize. The following shows the key
steps to launch qemu-kvm with kni-vhost:
#!/bin/bash
echo 1 > /sys/class/net/vEth0/sock_en
fd=`cat /sys/class/net/vEth0/sock_fd`
qemu-kvm \
-name vm1 -cpu host -m 2048 -smp 1 -hda /opt/vm-fc16.img \
-netdev tap,fd=$fd,id=hostnet1,vhost=on \
-device virtio-net-pci,netdev=hostnet1,id=net1,bus=pci.0,addr=0x4
It is simple to enable the raw socket using sysfs sock_en and to get the raw socket fd using sock_fd
under the KNI device node.
Then, the qemu-kvm command uses the -netdev option to assign this raw socket fd as
vhost’s backend.
Note: The key word tap must exist as qemu-kvm now only supports vhost with a tap backend,
so here we cheat qemu-kvm with an existing fd.
The DPDK is comprised of several libraries. Some of the functions in these libraries can be
safely called from multiple threads simultaneously, while others cannot. This section allows the
developer to take these issues into account when building their own application.
The run-time environment of the DPDK is typically a single thread per logical core. In some
cases, it is not only multi-threaded, but multi-process. Typically, it is best to avoid sharing data
structures between threads and/or processes where possible. Where this is not possible, then
the execution blocks must access the data in a thread-safe manner. Mechanisms such as
atomics or locking can be used that will allow execution blocks to operate serially. However,
this can have an effect on the performance of the application.
Fast-Path APIs
Applications operating in the data plane are performance sensitive, but certain functions within
the DPDK libraries may not be safe to call from multiple threads simultaneously. The hash, LPM
and mempool libraries and RX/TX in the PMD are examples of this.
The hash and LPM libraries are, by design, thread unsafe in order to maintain performance.
However, if required the developer can add layers on top of these libraries to provide thread
safety. Locking is not needed in all situations, and in both the hash and LPM libraries, lookups
of values can be performed in parallel in multiple threads. Adding, removing or modifying
values, however, cannot be done in multiple threads without using locking when a single hash
or LPM table is accessed. Another alternative to locking would be to create multiple instances
of these tables allowing each thread its own copy.
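As a sketch of the locking layer mentioned above (one possible approach, not part of the hash library itself), modifications can be serialized with a spinlock while lookups remain lock-free; whether lookups may safely run concurrently with writers depends on the table configuration, so this is an assumption to verify for the application in question.

#include <rte_hash.h>
#include <rte_spinlock.h>

static rte_spinlock_t hash_wr_lock = RTE_SPINLOCK_INITIALIZER;

/* Serialize all additions (and, similarly, deletions) with a spinlock. */
static int
safe_hash_add(const struct rte_hash *h, const void *key)
{
    int ret;

    rte_spinlock_lock(&hash_wr_lock);
    ret = rte_hash_add_key(h, key);
    rte_spinlock_unlock(&hash_wr_lock);
    return ret;
}

/* Lookups can be performed in parallel from multiple reader threads. */
static int
safe_hash_lookup(const struct rte_hash *h, const void *key)
{
    return rte_hash_lookup(h, key);
}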
The RX and TX of the PMD are the most critical aspects of a DPDK application and it is
recommended that no locking be used as it will impact performance. Note, however, that these
functions can safely be used from multiple threads when each thread is performing I/O on a
different NIC queue. If multiple threads are to use the same hardware queue on the same NIC
port, then locking, or some other form of mutual exclusion, is necessary.
The ring library is based on a lockless ring-buffer algorithm that maintains its original de-
sign for thread safety. Moreover, it provides high performance for either multi- or single-
consumer/producer enqueue/dequeue operations. The mempool library is based on the DPDK
lockless ring library and therefore is also multi-thread safe.
Outside of the performance sensitive areas described in Section 25.1, the DPDK provides a
thread-safe API for most other libraries. For example, malloc and memzone functions are safe
for use in multi-threaded and multi-process environments.
The setup and configuration of the PMD is not performance sensitive, but is not thread safe
either. It is possible that the multiple read/writes during PMD setup and configuration could be
corrupted in a multi-thread environment. Since this is not performance sensitive, the developer
can choose to add their own layer to provide thread-safe setup and configuration. It is expected
that, in most applications, the initial configuration of the network ports would be done by a
single thread at startup.
Library Initialization
It is recommended that DPDK libraries are initialized in the main thread at application startup
rather than subsequently in the forwarding threads. However, the DPDK performs checks to
ensure that libraries are only initialized once. If initialization is attempted more than once, an
error is returned.
In the multi-process case, the configuration information of the shared memory will only be initialized
by the master process. Thereafter, both master and secondary processes can allocate/release
any memory objects that ultimately rely on rte_malloc or memzones.
Interrupt Thread
The DPDK works almost entirely in Linux user space in polling mode. For certain infrequent
operations, such as receiving a PMD link status change notification, callbacks may be called
in an additional thread outside the main DPDK processing threads. These function callbacks
should avoid manipulating DPDK objects that are also managed by the normal DPDK threads,
and if they need to do so, it is up to the application to provide the appropriate locking or mutual
exclusion restrictions around those objects.
An example of a complex packet processing pipeline with QoS support is shown in the following
figure.
This pipeline can be built using reusable DPDK software libraries. The main blocks implement-
ing QoS in this pipeline are: the policer, the dropper and the scheduler. A functional description
of each block is provided in the following table.
Hierarchical Scheduler
The hierarchical scheduler block, when present, usually sits on the TX side just before the
transmission stage. Its purpose is to prioritize the transmission of packets from different users
and different traffic classes according to the policy specified by the Service Level Agreements
(SLAs) of each network node.
Overview
The hierarchical scheduler block is similar to the traffic manager block used by network proces-
sors that typically implement per flow (or per group of flows) packet queuing and scheduling. It
typically acts like a buffer that is able to temporarily store a large number of packets just before
their transmission (enqueue operation); as the NIC TX is requesting more packets for trans-
mission, these packets are later on removed and handed over to the NIC TX with the packet
selection logic observing the predefined SLAs (dequeue operation).
The hierarchical scheduler is optimized for a large number of packet queues. When only a
small number of queues are needed, message passing queues should be used instead of this
block. See Worst Case Scenarios for Performance for a more detailed discussion.
Scheduling Hierarchy
The scheduling hierarchy is shown in Fig. 4.53. The first level of the hierarchy is the Ethernet
TX port 1/10/40 GbE, with subsequent hierarchy levels defined as subport, pipe, traffic class
and queue.
Typically, each subport represents a predefined group of users, while each pipe represents an
individual user/subscriber. Each traffic class is the representation of a different traffic type with
specific loss rate, delay and jitter requirements, such as voice, video or data transfers. Each
queue hosts packets from one or multiple connections of the same type belonging to the same
user.
The rte_sched.h file contains configuration functions for port, subport and pipe.
The port scheduler enqueue API is very similar to the API of the DPDK PMD TX function.
int rte_sched_port_enqueue(struct rte_sched_port *port, struct rte_mbuf **pkts, uint32_t n_pkts);
The port scheduler dequeue API is very similar to the API of the DPDK PMD RX function.
int rte_sched_port_dequeue(struct rte_sched_port *port, struct rte_mbuf **pkts, uint32_t n_pkts);
Usage Example
/* File "application.c" */
#define N_PKTS_RX 64
#define N_PKTS_TX 48
#define NIC_RX_PORT 0
#define NIC_RX_QUEUE 0
#define NIC_TX_PORT 1
#define NIC_TX_QUEUE 0
/* Initialization */
<initialization code>
/* Runtime */
while (1) {
    /* Read packets from NIC RX queue */
    nb_pkts_rx = rte_eth_rx_burst(NIC_RX_PORT, NIC_RX_QUEUE, pkts_rx, N_PKTS_RX);

    /* Hierarchical scheduler enqueue */
    rte_sched_port_enqueue(port, pkts_rx, nb_pkts_rx);

    /* Hierarchical scheduler dequeue */
    nb_pkts_tx = rte_sched_port_dequeue(port, pkts_tx, N_PKTS_TX);

    /* Write packets to NIC TX queue */
    rte_eth_tx_burst(NIC_TX_PORT, NIC_TX_QUEUE, pkts_tx, nb_pkts_tx);
}
Implementation
Enqueue and Dequeue for the Same Output Port Running enqueue and dequeue oper-
ations for the same output port from different cores is likely to cause a significant impact on the
scheduler’s performance and it is therefore not recommended.
The port enqueue and dequeue operations share access to the following data structures:
1. Packet descriptors
2. Queue table
3. Queue storage area
4. Bitmap of active queues
The expected drop in performance is due to:
1. Need to make the queue and bitmap operations thread safe, which requires either using
locking primitives for access serialization (for example, spinlocks/semaphores) or using
atomic primitives for lockless access (for example, Test and Set, Compare And Swap, and
so on). The impact is much higher in the former case.
2. Ping-pong of cache lines storing the shared data structures between the cache hierar-
chies of the two cores (done transparently by the MESI protocol cache coherency CPU
hardware).
Therefore, the scheduler enqueue and dequeue operations have to be run from the same
thread, which allows the queues and the bitmap operations to be non-thread safe and keeps
the scheduler data structures internal to the same core.
Performance Scaling Scaling up the number of NIC ports simply requires a proportional
increase in the number of CPU cores to be used for traffic scheduling.
Enqueue Pipeline
3. Access the queue array location to store the packet (i.e. write the mbuf pointer).
Note the strong data dependency between these steps, as steps 2 and 3 cannot
start before the results from steps 1 and 2 become available, which prevents the processor’s
out-of-order execution engine from providing any significant performance optimizations.
Given the high rate of input packets and the large amount of queues, it is expected that the
data structures accessed to enqueue the current packet are not present in the L1 or L2 data
cache of the current core, thus the above 3 memory accesses would result (on average) in L1
and L2 data cache misses. Three L1/L2 cache misses per packet is not acceptable for
performance reasons.
The workaround is to prefetch the required data structures in advance. The prefetch operation
has an execution latency during which the processor should not attempt to access the data
structure currently under prefetch, so the processor should execute other work. The only other
work available is to execute different stages of the enqueue sequence of operations on other
input packets, thus resulting in a pipelined implementation for the enqueue operation.
Fig. 4.55 illustrates a pipelined implementation for the enqueue operation with 4 pipeline stages
and each stage executing 2 different input packets. No input packet can be part of more than
one pipeline stage at a given time.
Fig. 4.55: Prefetch Pipeline for the Hierarchical Scheduler Enqueue Operation
The congestion management scheme implemented by the enqueue pipeline described above
is very basic: packets are enqueued until a specific queue becomes full, then all the packets
destined to the same queue are dropped until packets are consumed (by the dequeue oper-
ation). This can be improved by enabling RED/WRED as part of the enqueue pipeline which
looks at the queue occupancy and packet priority in order to yield the enqueue/drop decision for
a specific packet (as opposed to enqueuing all packets / dropping all packets indiscriminately).
The sequence of steps to schedule the next packet from the current pipe is:
1. Identify the next active pipe using the bitmap scan operation, prefetch pipe.
2. Read pipe data structure. Update the credits for the current pipe and its subport. Identify
the first active traffic class within the current pipe, select the next queue using WRR,
prefetch queue pointers for all the 16 queues of the current pipe.
3. Read next element from the current WRR queue and prefetch its packet descriptor.
4. Read the packet length from the packet descriptor (mbuf structure). Based on the packet
length and the available credits (of current pipe, pipe traffic class, subport and subport
traffic class), take the go/no go scheduling decision for the current packet.
To avoid the cache misses, the above data structures (pipe, queue, queue array, mbufs) are
prefetched in advance of being accessed. The strategy of hiding the latency of the prefetch
operations is to switch from the current pipe (in grinder A) to another pipe (in grinder B) imme-
diately after a prefetch is issued for the current pipe. This gives enough time to the prefetch
operation to complete before the execution switches back to this pipe (in grinder A).
The dequeue pipe state machine exploits the presence of data in the processor cache; therefore,
it tries to send as many packets from the same pipe TC and pipe as possible (up to the available
packets and credits) before moving to the next active TC from the same pipe (if any) or to
another active pipe.
Fig. 4.56: Pipe Prefetch State Machine for the Hierarchical Scheduler Dequeue Operation
The output port is modeled as a conveyor belt of byte slots that need to be filled by the sched-
uler with data for transmission. For 10 GbE, there are 1.25 billion byte slots that need to be
filled by the port scheduler every second. If the scheduler is not fast enough to fill the slots, pro-
vided that enough packets and credits exist, then some slots will be left unused and bandwidth
will be wasted.
In principle, the hierarchical scheduler dequeue operation should be triggered by NIC TX.
Usually, once the occupancy of the NIC TX input queue drops below a predefined threshold,
the port scheduler is woken up (interrupt based or polling based, by continuously monitoring
the queue occupancy) to push more packets into the queue.
Internal Time Reference The scheduler needs to keep track of time advancement for the
credit logic, which requires credit updates based on time (for example, subport and pipe traffic
shaping, traffic class upper limit enforcement, and so on).
Every time the scheduler decides to send a packet out to the NIC TX for transmission, the
scheduler will increment its internal time reference accordingly. Therefore, it is convenient
to keep the internal time reference in units of bytes, where a byte signifies the time duration
required by the physical interface to send out a byte on the transmission medium. This way,
as a packet is scheduled for transmission, the time is incremented with (n + h), where n is the
packet length in bytes and h is the number of framing overhead bytes per packet.
Internal Time Reference Re-synchronization The scheduler needs to align its internal time
reference to the pace of the port conveyor belt. The reason is to make sure that the scheduler
does not feed the NIC TX with more bytes than the line rate of the physical medium in order
to prevent packet drop (by the scheduler, due to the NIC TX input queue being full, or later on,
internally by the NIC TX).
The scheduler reads the current time on every dequeue invocation. The CPU time stamp can
be obtained by reading either the Time Stamp Counter (TSC) register or the High Precision
Event Timer (HPET) register. The current CPU time stamp is converted from the number of CPU
clocks to the number of bytes: time_bytes = time_cycles / cycles_per_byte, where cycles_per_byte
is the number of CPU cycles equivalent to the transmission time of one byte on the wire
(e.g. for a CPU frequency of 2 GHz and a 10 GbE port, cycles_per_byte = 1.6).
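For illustration, this cycles-to-bytes conversion can be computed once at initialization from the TSC frequency and the port rate, as in the hedged sketch below; the 10 GbE rate of 1.25e9 bytes per second is the example value used above, and the variable names are assumptions.

#include <rte_cycles.h>
#include <stdint.h>

/* Bytes per second on the wire for a 10 GbE port (example from the text). */
#define PORT_BYTES_PER_SEC 1250000000ULL

static double cycles_per_byte;

static void
time_reference_init(void)
{
    /* e.g. 2e9 cycles/s divided by 1.25e9 bytes/s gives 1.6 cycles per byte. */
    cycles_per_byte = (double)rte_get_tsc_hz() / (double)PORT_BYTES_PER_SEC;
}

/* Current time expressed in byte units, as used by the scheduler credit logic. */
static uint64_t
time_in_bytes(void)
{
    return (uint64_t)((double)rte_rdtsc() / cycles_per_byte);
}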
The scheduler maintains an internal time reference of the NIC time. Whenever a packet is
scheduled, the NIC time is incremented with the packet length (including framing overhead).
On every dequeue invocation, the scheduler checks its internal reference of the NIC time
against the current time:
1. If NIC time is in the future (NIC time >= current time), no adjustment of NIC time is
needed. This means that scheduler is able to schedule NIC packets before the NIC
actually needs those packets, so the NIC TX is well supplied with packets;
2. If NIC time is in the past (NIC time < current time), then NIC time should be adjusted by
setting it to the current time. This means that the scheduler is not able to keep up with
the speed of the NIC byte conveyor belt, so NIC bandwidth is wasted due to poor packet
supply to the NIC TX.
Scheduler Accuracy and Granularity The scheduler round trip delay (SRTD) is the time
(number of CPU cycles) between two consecutive examinations of the same pipe by the sched-
uler.
To keep up with the output port (that is, avoid bandwidth loss), the scheduler should be able to
schedule n packets faster than the same n packets are transmitted by NIC TX.
The scheduler needs to keep up with the rate of each individual pipe, as configured for the pipe
token bucket, assuming that no port oversubscription is taking place. This means that the size
of the pipe token bucket should be set high enough to prevent it from overflowing due to big
SRTD, as this would result in credit loss (and therefore bandwidth loss) for the pipe.
Credit Logic
Scheduling Decision The scheduling decision to send next packet from (subport S, pipe P,
traffic class TC, queue Q) is favorable (packet is sent) when all the conditions below are met:
• Pipe P of subport S is currently selected by one of the port grinders;
• Traffic class TC is the highest priority active traffic class of pipe P;
• Queue Q is the next queue selected by WRR within traffic class TC of pipe P;
• Subport S has enough credits to send the packet;
• Subport S has enough credits for traffic class TC to send the packet;
• Pipe P has enough credits to send the packet;
• Pipe P has enough credits for traffic class TC to send the packet.
If all the above conditions are met, then the packet is selected for transmission and the nec-
essary credits are subtracted from subport S, subport S traffic class TC, pipe P, pipe P traffic
class TC.
Framing Overhead As the greatest common divisor for all packet lengths is one byte, the
unit of credit is selected as one byte. The number of credits required for the transmission of a
packet of n bytes is equal to (n+h), where h is equal to the number of framing overhead bytes
per packet.
Traffic Shaping The traffic shaping for subport and pipe is implemented using a token bucket
per subport/per pipe. Each token bucket is implemented using one saturated counter that
keeps track of the number of available credits.
The token bucket generic parameters and operations are presented in Table 4.56 and Table
4.57.
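As a generic illustration of such a saturated-counter token bucket (not the exact internal code of the scheduler), the credit update and credit consumption steps can be sketched as follows; the structure and field names are assumptions chosen to mirror the terminology used in this section.

#include <stdint.h>
#include <stdbool.h>

struct token_bucket {
    uint64_t tb_time;               /* time of the last credit update, in bytes */
    uint64_t tb_period;             /* bytes between two credit updates */
    uint64_t tb_credits_per_period; /* credits added per period */
    uint64_t tb_size;               /* bucket size: upper limit for credits */
    uint64_t tb_credits;            /* current number of credits */
};

/* Refill credits based on elapsed time, saturating at the bucket size. */
static void
tb_credit_update(struct token_bucket *tb, uint64_t now)
{
    uint64_t n_periods = (now - tb->tb_time) / tb->tb_period;

    tb->tb_credits += n_periods * tb->tb_credits_per_period;
    if (tb->tb_credits > tb->tb_size)
        tb->tb_credits = tb->tb_size;
    tb->tb_time += n_periods * tb->tb_period;
}

/* Consume (packet length + framing overhead) credits, if available. */
static bool
tb_credit_consume(struct token_bucket *tb, uint64_t pkt_len, uint64_t overhead)
{
    uint64_t needed = pkt_len + overhead;

    if (tb->tb_credits < needed)
        return false;
    tb->tb_credits -= needed;
    return true;
}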
Traffic Classes
Upper Limit Enforcement The traffic classes at the pipe and subport levels are not traffic
shaped, so there is no token bucket maintained in this context. The upper limit for the traffic
classes at the subport and pipe levels is enforced by periodically refilling the subport / pipe
traffic class credit counter, out of which credits are consumed every time a packet is scheduled
for that subport / pipe, as described in Table 4.60 and Table 4.61.
Table 4.60: Subport/Pipe Traffic Class Upper Limit Enforcement Persistent Data Structure
# Subport or pipe field | Unit | Description
1 tc_time | Bytes | Time of the next update (upper limit refill) for the 4 TCs of the current
subport / pipe. See Section Internal Time Reference for the explanation of why the time is
maintained in byte units.
2 tc_period | Bytes | Time between two consecutive updates for the 4 TCs of the current
subport / pipe. This is expected to be many times bigger than the typical value of the token
bucket tb_period.
3 tc_credits_per_period | Bytes | Upper limit for the number of credits allowed to be consumed
by the current TC during each enforcement period tc_period.
4 tc_credits | Bytes | Current upper limit for the number of credits that can be consumed by
the current traffic class for the remainder of the current enforcement period.
Weighted Round Robin (WRR) The evolution of the WRR design solution from simple to
complex is shown in Table 4.62.
Solution Space summarizes some of the possible approaches for handling this problem,
with the third approach selected for implementation.
Table 4.64: Watermark Propagation from Subport Level to Member Pipes at the Beginning of
Each Traffic Class Upper Limit Enforcement Period
1. Initialization
   Subport level: subport_period_id = 0
   Pipe level: pipe_period_id = 0
2. Credit update
   Subport level:
   if (time >= subport_tc_time) {
       subport_wm = water_mark_update();
       subport_tc_time = time + subport_tc_period;
       subport_period_id++;
   }
   Pipe level:
   if (pipe_period_id != subport_period_id) {
       pipe_ov_credits = subport_wm * pipe_weight;
       pipe_period_id = subport_period_id;
   }
3. Credit consumption (on packet scheduling)
   Pipe level:
   pkt_credits = pkt_len + frame_overhead;
   if (pipe_ov_credits >= pkt_credits) {
       pipe_ov_credits -= pkt_credits;
   }
The more queues the scheduler has to examine for packets and credits in order to select one
packet, the lower the performance of the scheduler is.
The scheduler maintains the bitmap of active queues, which skips the non-active queues, but
in order to detect whether a specific pipe has enough credits, the pipe has to be drilled down
using the pipe dequeue state machine, which consumes cycles regardless of the scheduling
result (no packets are produced or at least one packet is produced).
This scenario stresses the importance of the policer for the scheduler performance: if the pipe
does not have enough credits, its packets should be dropped as soon as possible (before they
reach the hierarchical scheduler), thus rendering the pipe queues as not active, which allows
the dequeue side to skip that pipe with no cycles being spent on investigating the pipe credits
that would result in a “not enough credits” status.
The port scheduler performance is optimized for a large number of queues. If the number of
queues is small, then the performance of the port scheduler for the same level of active traffic
is expected to be worse than the performance of a small set of message passing queues.
Dropper
The purpose of the DPDK dropper is to drop packets arriving at a packet scheduler to avoid
congestion. The dropper supports the Random Early Detection (RED), Weighted Random
Early Detection (WRED) and tail drop algorithms. Fig. 4.57 illustrates how the dropper inte-
grates with the scheduler. The DPDK currently does not support congestion management so
the dropper provides the only method for congestion avoidance.
The dropper uses the Random Early Detection (RED) congestion avoidance algorithm as doc-
umented in the reference publication. The purpose of the RED algorithm is to monitor a packet
queue, determine the current congestion level in the queue and decide whether an arriving
packet should be enqueued or dropped. The RED algorithm uses an Exponential Weighted
Moving Average (EWMA) filter to compute average queue size which gives an indication of the
current congestion level in the queue.
For each enqueue operation, the RED algorithm compares the average queue size to minimum
and maximum thresholds. Depending on whether the average queue size is below, above or in
between these thresholds, the RED algorithm calculates the probability that an arriving packet
should be dropped and makes a random decision based on this probability.
The dropper also supports Weighted Random Early Detection (WRED) by allowing the sched-
uler to select different RED configurations for the same packet queue at run-time. In the case
of severe congestion, the dropper resorts to tail drop. This occurs when a packet queue has
reached maximum capacity and cannot store any more packets. In this situation, all arriving
packets are dropped.
The flow through the dropper is illustrated in Fig. 4.58. The RED/WRED algorithm is exercised
first and tail drop second.
The use cases supported by the dropper are:
• Initialize configuration data
• Initialize run-time data
• Enqueue (make a decision to enqueue or drop an arriving packet)
• Mark empty (record the time at which a packet queue becomes empty)
The configuration use case is explained in Section 2.23.3.1, the enqueue operation is explained
in Section 2.23.3.2 and the mark empty operation is explained in Section 2.23.3.3.
Configuration
Enqueue Operation
In the example shown in Fig. 4.59, q (actual queue size) is the input value, avg (average
queue size) and count (number of packets since the last drop) are run-time values, decision is
the output value and the remaining values are configuration parameters.
The purpose of the EWMA Filter microblock is to filter queue size values to smooth out transient
changes that result from “bursty” traffic. The output value is the average queue size which gives
a more stable view of the current congestion level in the queue.
The EWMA filter has one configuration parameter, filter weight, which determines how quickly
or slowly the average queue size output responds to changes in the actual queue size input.
Higher values of filter weight mean that the average queue size responds more quickly to
changes in actual queue size.
Average Queue Size Calculation when the Queue is not Empty The definition of the
EWMA filter is given in the following equation:
avg = (1 - wq) * avg + wq * q
Where:
• avg = average queue size
• wq = filter weight
• q = actual queue size
Note:
The filter weight, wq = 1/2^n, where n is the filter weight parameter value passed to the dropper module
on configuration (see Section 2.23.3.1).
The EWMA filter does not read time stamps and instead assumes that enqueue operations
will happen quite regularly. Special handling is required when the queue becomes empty as
the queue could be empty for a short time or a long time. When the queue becomes empty,
average queue size should decay gradually to zero instead of dropping suddenly to zero or
remaining stagnant at the last computed value. When a packet is enqueued on an empty
queue, the average queue size is computed using the following formula (Equation 2):
avg = avg * (1 - wq)^m
Where:
• m = the number of enqueue operations that could have occurred on this queue while the
queue was empty
In the dropper module, m is defined as:
m = (time - qtime) / s
Where:
• time = current time
• qtime = time the queue became empty
• s = typical time between successive enqueue operations on this queue
The time reference is in units of bytes, where a byte signifies the time duration required by the
physical interface to send out a byte on the transmission medium (see Section Internal Time
Reference). The parameter s is defined in the dropper module as a constant with the value:
s=2^22. This corresponds to the time required by every leaf node in a hierarchy with 64K leaf
nodes to transmit one 64-byte packet onto the wire and represents the worst case scenario.
For much smaller scheduler hierarchies, it may be necessary to reduce the parameter s, which
is defined in the red header source file (rte_red.h) as:
#define RTE_RED_S
Since the time reference is in bytes, the port speed is implied in the expression: time-qtime.
The dropper does not have to be configured with the actual port speed. It adjusts automatically
to low speed and high speed links.
Implementation A numerical method is used to compute the factor (1-wq)^m that appears in
Equation 2.
This method is based on the following identity:
(1-wq)^m = 2^(m * log2(1-wq))
In the dropper module, a look-up table is used to compute log2(1-wq) for each value of wq
supported by the dropper module. The factor (1-wq)^m can then be obtained by multiplying
the table value by m and applying shift operations. To avoid overflow in the multiplication, the
value, m, and the look-up table values are limited to 16 bits. The total size of the look-up table
is 56 bytes. Once the factor (1-wq)^m is obtained using this method, the average queue size
can be calculated from Equation 2.
Alternative Approaches Other methods for calculating the factor (1-wq)^m in the expression
for computing average queue size when the queue is empty (Equation 2) were considered.
These approaches include:
• Floating-point evaluation
• Fixed-point evaluation using a small look-up table (512B) and up to 16 multiplications
(this is the approach used in the FreeBSD* ALTQ RED implementation)
• Fixed-point evaluation using a small look-up table (512B) and 16 SSE multiplications
(SSE optimized version of the approach used in the FreeBSD* ALTQ RED implementa-
tion)
• Large look-up table (76 KB)
The method that was finally selected (described above in Section 26.3.2.2.1) outperforms all
of these approaches in terms of run-time performance and memory requirements and also
achieves accuracy comparable to floating-point evaluation. Table 4.67 lists the performance of
each of these alternative approaches relative to the method that is used in the dropper. As can
be seen, the floating-point implementation achieved the worst performance.
Initial Packet Drop Probability The initial drop probability is calculated using the following
equation (Equation 3):
pb = maxp * (avg - minth) / (maxth - minth)
Where:
• maxp = mark probability
• avg = average queue size
• minth = minimum threshold
• maxth = maximum threshold
The calculation of the packet drop probability using Equation 3 is illustrated in Fig. 4.60. If the
average queue size is below the minimum threshold, an arriving packet is enqueued. If the
average queue size is at or above the maximum threshold, an arriving packet is dropped. If
the average queue size is between the minimum and maximum thresholds, a drop probability
is calculated to determine if the packet should be enqueued or dropped.
Actual Drop Probability If the average queue size is between the minimum and maximum
thresholds, then the actual drop probability is calculated from the following equation (Equation 4):
pa = pb / (2 - count * pb)
Where:
• pb = initial drop probability (from Equation 3)
• count = number of packets that have arrived since the last drop
The constant 2 in Equation 4 is the only deviation from the drop probability formula given in
the reference document, where a value of 1 is used instead. It should be noted that the value pa
computed from Equation 4 can be negative or greater than 1. If this is the case, then a value of 1
should be used instead.
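As an illustrative worked example (using values similar to the sample configuration later in this section): with minth = 16, maxth = 32 and a mark probability of 1/10, an average queue size of avg = 24 gives pb = 0.1 * (24 - 16) / (32 - 16) = 0.05. If count = 4 packets have arrived since the last drop, the dropper computes pa = 0.05 / (2 - 4 * 0.05) = 0.028, whereas the factor of 1 from the reference document would give pa = 0.05 / (1 - 4 * 0.05) = 0.0625, i.e. a noticeably higher drop probability.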
The initial and actual drop probabilities are shown in Fig. 4.61. The actual drop probabil-
ity is shown for the case where the formula given in the reference document is used (blue
curve) and also for the case where the formula implemented in the dropper module is used
(red curve). The formula in the reference document results in a significantly higher drop rate
compared to the mark probability configuration parameter specified by the user. The choice to
deviate from the reference document is simply a design decision and one that has been taken
by other RED implementations, for example, FreeBSD* ALTQ RED.
Fig. 4.61: Initial Drop Probability (pb), Actual Drop probability (pa) Computed Using a Factor
1 (Blue Curve) and a Factor 2 (Red Curve)
The time at which a packet queue becomes empty must be recorded and saved with the RED
run-time data so that the EWMA filter block can calculate the average queue size on the next
enqueue operation. It is the responsibility of the calling application to inform the dropper mod-
ule through the API that a queue has become empty.
The source files for the DPDK dropper are located at:
• DPDK/lib/librte_sched/rte_red.h
• DPDK/lib/librte_sched/rte_red.c
RED functionality in the DPDK QoS scheduler is disabled by default. To enable it, use the
DPDK configuration parameter:
CONFIG_RTE_SCHED_RED=y
This parameter must be set to y. The parameter is found in the build configuration files in
the DPDK/config directory, for example, DPDK/config/common_linuxapp. RED configuration
parameters are specified in the rte_red_params structure within the rte_sched_port_params
structure that is passed to the scheduler on initialization. RED parameters are specified sep-
arately for four traffic classes and three packet colors (green, yellow and red) allowing the
scheduler to implement Weighted Random Early Detection (WRED).
The DPDK QoS Scheduler Application reads a configuration file on start-up. The configura-
tion file includes a section containing RED parameters. The format of these parameters is
described in Section 2.23.3.1. A sample RED configuration is shown below. In this example,
the queue size is 64 packets.
Note: For correct operation, the same EWMA filter weight parameter (wred weight) should be
used for each packet color (green, yellow, red) in the same traffic class (tc).
; RED params per traffic class and color (Green / Yellow / Red)
[red]
tc 0 wred min = 28 22 16
tc 0 wred max = 32 32 32
tc 0 wred inv prob = 10 10 10
tc 0 wred weight = 9 9 9
tc 1 wred min = 28 22 16
tc 1 wred max = 32 32 32
tc 1 wred inv prob = 10 10 10
tc 1 wred weight = 9 9 9
tc 2 wred min = 28 22 16
tc 2 wred max = 32 32 32
tc 2 wred inv prob = 10 10 10
tc 2 wred weight = 9 9 9
tc 3 wred min = 28 22 16
tc 3 wred max = 32 32 32
tc 3 wred inv prob = 10 10 10
tc 3 wred weight = 9 9 9
With this configuration file, the RED configuration that applies to green, yellow and red packets
in traffic class 0 is shown in Table 4.68.
Enqueue API
The arguments passed to the enqueue API are configuration data, run-time data, the current
size of the packet queue (in packets) and a value representing the current time. The time
reference is in units of bytes, where a byte signifies the time duration required by the physical
interface to send out a byte on the transmission medium (see Section 26.2.4.5.1 “Internal Time
Reference” ). The dropper reuses the scheduler time stamps for performance reasons.
Empty API
The arguments passed to the empty API are run-time data and the current time in bytes.
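A minimal sketch of how a caller might drive the dropper is shown below, assuming the prototypes exported by rte_red.h in this release (rte_red_config_init(), rte_red_rt_data_init(), rte_red_enqueue() and rte_red_mark_queue_empty()); the threshold and weight values simply mirror the sample configuration above and are illustrative only.

#include <rte_red.h>

static struct rte_red_config red_cfg;  /* configuration data (per TC/color) */
static struct rte_red red_rt;          /* run-time data (per queue)         */

static int
red_setup(void)
{
    /* wq_log2 = 9, min_th = 16, max_th = 32, maxp_inv = 10 (mark prob = 1/10) */
    if (rte_red_config_init(&red_cfg, 9, 16, 32, 10) != 0)
        return -1;
    return rte_red_rt_data_init(&red_rt);
}

/* Enqueue decision: returns 0 to enqueue the packet, non-zero to drop it.
 * qlen is the current queue size in packets, time_bytes the scheduler time stamp. */
static int
red_should_drop(unsigned int qlen, uint64_t time_bytes)
{
    return rte_red_enqueue(&red_cfg, &red_rt, qlen, time_bytes);
}

/* The calling application must report the time at which the queue became empty. */
static void
red_queue_empty(uint64_t time_bytes)
{
    rte_red_mark_queue_empty(&red_rt, time_bytes);
}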
Traffic Metering
The traffic metering component implements the Single Rate Three Color Marker (srTCM) and
Two Rate Three Color Marker (trTCM) algorithms, as defined by IETF RFC 2697 and 2698
respectively. These algorithms meter the stream of incoming packets based on the allowance
defined in advance for each traffic flow. As result, each incoming packet is tagged as green,
yellow or red based on the monitored consumption of the flow the packet belongs to.
Functional Overview
The srTCM algorithm defines two token buckets for each traffic flow, with the two buckets
sharing the same token update rate:
• Committed (C) bucket: fed with tokens at the rate defined by the Committed Information
Rate (CIR) parameter (measured in IP packet bytes per second). The size of the C bucket
is defined by the Committed Burst Size (CBS) parameter (measured in bytes);
• Excess (E) bucket: fed with tokens at the same rate as the C bucket. The size of the E
bucket is defined by the Excess Burst Size (EBS) parameter (measured in bytes).
The trTCM algorithm defines two token buckets for each traffic flow, with the two buckets being
updated with tokens at independent rates:
• Committed (C) bucket: fed with tokens at the rate defined by the Committed Information
Rate (CIR) parameter (measured in bytes of IP packet per second). The size of the C
bucket is defined by the Committed Burst Size (CBS) parameter (measured in bytes);
• Peak (P) bucket: fed with tokens at the rate defined by the Peak Information Rate (PIR)
parameter (measured in IP packet bytes per second). The size of the P bucket is defined
by the Peak Burst Size (PBS) parameter (measured in bytes).
Please refer to RFC 2697 (for srTCM) and RFC 2698 (for trTCM) for details on how tokens are
consumed from the buckets and how the packet color is determined.
For both algorithms, the color blind mode is functionally equivalent to the color aware mode
with input color set as green. For color aware mode, a packet with red input color can only get
the red output color, while a packet with yellow input color can only get the yellow or red output
colors.
The reason why the color blind mode is still implemented distinctly from the color aware mode
is that the color blind mode can be implemented with fewer operations than the color aware mode.
Implementation Overview
For each input packet, the steps for the srTCM / trTCM algorithms are:
• Update the C and E / P token buckets. This is done by reading the current time (from
the CPU timestamp counter), identifying the amount of time since the last bucket update
and computing the associated number of tokens (according to the pre-configured bucket
rate). The number of tokens in the bucket is limited by the pre-configured bucket size;
• Identify the output color for the current packet based on the size of the IP packet and the
amount of tokens currently available in the C and E / P buckets; for color aware mode
only, the input color of the packet is also considered. When the output color is not red, a
number of tokens equal to the length of the IP packet are subtracted from the C or E /P
or both buckets, depending on the algorithm and the output color of the packet.
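A rough usage sketch of the srTCM flow described above is shown below, assuming the librte_meter prototypes of this release (rte_meter_srtcm_config() and rte_meter_srtcm_color_blind_check()); the CIR/CBS/EBS values are illustrative and the exact prototypes should be checked against rte_meter.h before use.

#include <rte_cycles.h>
#include <rte_meter.h>

static struct rte_meter_srtcm flow_meter;

static int
flow_meter_init(void)
{
    /* Illustrative allowance: CIR = 1,250,000 bytes/s, CBS = EBS = 64 KB. */
    struct rte_meter_srtcm_params params = {
        .cir = 1250000,
        .cbs = 65536,
        .ebs = 65536,
    };

    return rte_meter_srtcm_config(&flow_meter, &params);
}

/* Color blind mode: the C and E buckets are updated from the current TSC
 * time and the packet is tagged green, yellow or red. */
static enum rte_meter_color
flow_meter_packet(uint32_t pkt_len)
{
    return rte_meter_srtcm_color_blind_check(&flow_meter, rte_rdtsc(), pkt_len);
}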
Power Management
The DPDK Power Management feature allows user space applications to save power by dy-
namically adjusting the CPU frequency or entering different C-States:
• Adjusting the CPU frequency dynamically according to the utilization of the RX queue.
• Entering different deeper C-States, according to adaptive algorithms, to speculatively suspend
the application for brief periods of time when no packets are received.
The interfaces for adjusting the operating CPU frequency are in the power management library.
C-State control is implemented in applications according to the different use cases.
The Linux kernel provides a cpufreq module for CPU frequency scaling for each lcore. For
example, for cpuX, /sys/devices/system/cpu/cpuX/cpufreq/ has the following sys files for fre-
quency scaling:
• affected_cpus
• bios_limit
• cpuinfo_cur_freq
• cpuinfo_max_freq
• cpuinfo_min_freq
• cpuinfo_transition_latency
• related_cpus
• scaling_available_frequencies
• scaling_available_governors
• scaling_cur_freq
• scaling_driver
• scaling_governor
• scaling_max_freq
• scaling_min_freq
• scaling_setspeed
In the DPDK, scaling_governor is configured in user space. Then, a user space application
can prompt the kernel by writing to scaling_setspeed to adjust the CPU frequency according to
the strategies defined by the user space application.
Core state can be altered by speculative sleeps whenever the specified lcore has nothing to
do. In the DPDK, if no packet is received after polling, speculative sleeps can be triggered
according to the strategies defined by the user space application.
The main methods exported by power library are for CPU frequency scaling and include the
following:
• Freq up: Prompt the kernel to scale up the frequency of the specific lcore.
• Freq down: Prompt the kernel to scale down the frequency of the specific lcore.
• Freq max: Prompt the kernel to scale up the frequency of the specific lcore to the maxi-
mum.
• Freq min: Prompt the kernel to scale down the frequency of the specific lcore to the
minimum.
• Get available freqs: Read the available frequencies of the specific lcore from the sys
file.
• Freq get: Get the current frequency of the specific lcore.
• Freq set: Prompt the kernel to set the frequency for the specific lcore.
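A minimal sketch of driving these methods from an application is shown below, assuming the librte_power prototypes of this release (rte_power_init(), rte_power_freq_up(), rte_power_freq_down() and rte_power_exit()); the scaling policy shown is deliberately trivial.

#include <rte_power.h>

static int
power_setup(unsigned int lcore_id)
{
    /* Initializes the power library for the lcore: the userspace governor is
     * selected and the available frequencies are read from the sys files. */
    return rte_power_init(lcore_id);
}

static void
power_adjust(unsigned int lcore_id, int rx_busy)
{
    /* Scale up when the RX queue is busy, scale down when it is mostly idle. */
    if (rx_busy)
        rte_power_freq_up(lcore_id);
    else
        rte_power_freq_down(lcore_id);
}

static void
power_teardown(unsigned int lcore_id)
{
    rte_power_exit(lcore_id);
}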
Use Cases
The power management mechanism is used to save power when performing L3 forwarding.
References
• l3fwd-power: The sample application in DPDK that performs L3 forwarding with power
management.
• The “L3 Forwarding with Power Management Sample Application” chapter in the DPDK
Sample Application’s User Guide.
The DPDK provides an Access Control library that gives the ability to classify an input packet
based on a set of classification rules.
The ACL library is used to perform an N-tuple search over a set of rules with multiple categories
and find the best match (highest priority) for each category. The library API provides the
following basic operations:
• Create a new Access Control (AC) context.
• Add rules into the context.
• For all rules in the context, build the runtime structures necessary to perform packet
classification.
• Perform input packet classifications.
• Destroy an AC context and its runtime structures and free the associated memory.
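A compact, self-contained sketch of that life cycle is shown below. It uses a deliberately trivial one-byte "protocol only" rule purely to keep the example short; the names (proto_defs, acl_example) are hypothetical and the exact structure layouts should be verified against rte_acl.h.

#include <string.h>
#include <rte_acl.h>
#include <rte_memory.h>

enum { PROTO_FIELD = 0, NUM_FIELDS = 1 };

/* Single one-byte field at offset 0 of the input buffer. */
struct rte_acl_field_def proto_defs[NUM_FIELDS] = {
    {
        .type = RTE_ACL_FIELD_TYPE_BITMASK,
        .size = sizeof(uint8_t),
        .field_index = PROTO_FIELD,
        .input_index = PROTO_FIELD,
        .offset = 0,
    },
};

RTE_ACL_RULE_DEF(proto_rule, NUM_FIELDS);

static int
acl_example(const uint8_t **data, uint32_t *results, uint32_t num)
{
    struct rte_acl_param param = {
        .name = "example_acl",
        .socket_id = SOCKET_ID_ANY,
        .rule_size = RTE_ACL_RULE_SZ(NUM_FIELDS),
        .max_rule_num = 8,
    };
    struct rte_acl_config cfg = {
        .num_categories = 1,
        .num_fields = NUM_FIELDS,
    };
    struct proto_rule rule = {
        .data = {.userdata = 1, .category_mask = 1, .priority = 1},
        /* match protocol ID 17 (UDP) exactly */
        .field[PROTO_FIELD] = {.value.u8 = 17, .mask_range.u8 = 0xff},
    };
    struct rte_acl_ctx *acx;
    int ret;

    acx = rte_acl_create(&param);                        /* 1. create the AC context     */
    if (acx == NULL)
        return -1;

    ret = rte_acl_add_rules(acx, (struct rte_acl_rule *)&rule, 1);  /* 2. add rules      */
    if (ret != 0)
        goto out;

    memcpy(cfg.defs, proto_defs, sizeof(proto_defs));
    ret = rte_acl_build(acx, &cfg);                      /* 3. build run-time structures */
    if (ret != 0)
        goto out;

    ret = rte_acl_classify(acx, data, results, num, 1);  /* 4. classify input packets    */
out:
    rte_acl_free(acx);                                   /* 5. destroy the context       */
    return ret;
}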
Overview
Rule definition
The current implementation allows the user, for each AC context, to specify its own rule (set of
fields) over which packet classification will be performed. However, there are a few restrictions on
the rule field layout:
• The first field in the rule definition has to be one byte long.
• All subsequent fields have to be grouped into sets of 4 consecutive bytes.
This is done mainly for performance reasons - the search function processes the first input byte as
part of the flow setup and then the inner loop of the search function is unrolled to process four
input bytes at a time.
},
/*
 * Next 2 fields (src & dst ports) form 4 consecutive bytes.
 * They share the same input index.
 */
{
    .type = RTE_ACL_FIELD_TYPE_RANGE,
    .size = sizeof (uint16_t),
    .field_index = 3,
    .input_index = 3,
    .offset = offsetof (struct ipv4_5tuple, port_src),
},
{
    .type = RTE_ACL_FIELD_TYPE_RANGE,
    .size = sizeof (uint16_t),
    .field_index = 4,
    .input_index = 3,
    .offset = offsetof (struct ipv4_5tuple, port_dst),
},
};
Any IPv4 packet with protocol ID 17 (UDP), source address 192.168.1.[0-255], destination
address 192.168.2.31, source port [0-65535] and destination port 1234 matches the above
rule.
To define classification for the IPv6 2-tuple: <protocol, IPv6 source address> over the following
IPv6 header structure:
struct ipv6_hdr {
    uint32_t vtc_flow;     /* IP version, traffic class & flow label. */
    uint16_t payload_len;  /* IP packet length - includes sizeof(ip_header). */
    uint8_t  proto;        /* Protocol, next header. */
    uint8_t  hop_limits;   /* Hop limits. */
    uint8_t  src_addr[16]; /* IP address of source host. */
    uint8_t  dst_addr[16]; /* IP address of destination host(s). */
} __attribute__((__packed__));
{
    .type = RTE_ACL_FIELD_TYPE_MASK,
    .size = sizeof (uint32_t),
    .field_index = 1,
    .input_index = 1,
    .offset = offsetof (struct ipv6_hdr, src_addr[0]),
},
{
    .type = RTE_ACL_FIELD_TYPE_MASK,
    .size = sizeof (uint32_t),
    .field_index = 2,
    .input_index = 2,
    .offset = offsetof (struct ipv6_hdr, src_addr[4]),
},
{
    .type = RTE_ACL_FIELD_TYPE_MASK,
    .size = sizeof (uint32_t),
    .field_index = 3,
    .input_index = 3,
    .offset = offsetof (struct ipv6_hdr, src_addr[8]),
},
{
    .type = RTE_ACL_FIELD_TYPE_MASK,
    .size = sizeof (uint32_t),
    .field_index = 4,
    .input_index = 4,
    .offset = offsetof (struct ipv6_hdr, src_addr[12]),
},
};
Any IPv6 packet with protocol ID 6 (TCP), and source address inside the range
[2001:db8:1234:0000:0000:0000:0000:0000 - 2001:db8:1234:ffff:ffff:ffff:ffff:ffff], matches the
above rule.
In the following example the last element of the search key is 8 bits long. So it is a case
where the 4 consecutive bytes of an input field are not fully occupied. The structure for the
classification is:
struct acl_key {
    uint8_t ip_proto;
    uint32_t ip_src;
    uint32_t ip_dst;
    uint8_t tos;      /**< This is partially using a 32-bit input element */
};

/*
 * Next element of the search key (Type of Service) is indeed 1 byte long.
 * Anyway we need to allocate all the 4 consecutive bytes for it.
 */
{
    .type = RTE_ACL_FIELD_TYPE_BITMASK,
    .size = sizeof (uint32_t), /* All the 4 consecutive bytes are allocated */
    .field_index = 3,
    .input_index = 3,
    .offset = offsetof (struct acl_key, tos),
},
};
Any IPv4 packet with protocol ID 6 (TCP), source address 192.168.1.[0-255], destination
address 192.168.2.31 and ToS 1 matches the above rule.
When creating a set of rules, for each rule, additional information must also be supplied:
• priority: A weight to measure the priority of the rules (higher is better). If the input tuple
matches more than one rule, then the rule with the higher priority is returned. Note that
if the input tuple matches more than one rule and these rules have equal priority, it is
undefined which rule is returned as a match. It is recommended to assign a unique
priority for each rule.
• category_mask: Each rule uses a bit mask value to select the relevant category(s) for
the rule. When a lookup is performed, the result for each category is returned. This ef-
fectively provides a “parallel lookup” by enabling a single search to return multiple results
if, for example, there were four different sets of ACL rules, one for access control, one for
routing, and so on. Each set could be assigned its own category and by combining them
into a single database, one lookup returns a result for each of the four sets.
• userdata: A user-defined value. For each category, a successful match returns the
userdata field of the highest priority matched rule. When no rules match, returned value
is zero.
Note: When adding new rules into an ACL context, all fields must be in host byte order (LSB).
When the search is performed for an input tuple, all fields in that tuple must be in network byte
order (MSB).
The build phase (rte_acl_build()) creates, for a given set of rules, the internal structures used
for subsequent run-time traversal. With the current implementation it is a set of multi-bit tries
(with stride == 8). Depending on the rule set, this could consume a significant amount of memory.
In an attempt to conserve space, the ACL build process tries to split the given rule-set into several
non-intersecting subsets and construct a separate trie for each of them. Depending on the rule-set,
this might reduce the RT memory requirements but might increase classification time. It is possible
at build-time to specify a maximum memory limit for the internal RT structures of a given AC
context. This is done via the max_size field of the rte_acl_config structure. Setting it to a value
greater than zero instructs rte_acl_build() to:
• attempt to minimize the number of tries in the RT table, but
• make sure that the size of the RT table does not exceed the given value.
Setting it to zero makes rte_acl_build() use the default behavior: try to minimize the size of the
RT structures, without enforcing any hard limit.
That gives the user the ability to make decisions about the performance/space trade-off. For example:
struct rte_acl_ctx *acx;
struct rte_acl_config cfg;
int ret;

/*
 * assuming that acx points to an already created and
 * rule-populated AC context and that cfg is filled properly.
 */

/* try to build the AC context, with the RT structures no bigger than 8MB. */
cfg.max_size = 0x800000;
ret = rte_acl_build(acx, &cfg);

/*
 * RT structures can't fit into 8MB for the given context.
 * Try to build without exposing any hard limit.
 */
if (ret == -ERANGE) {
    cfg.max_size = 0;
    ret = rte_acl_build(acx, &cfg);
}
Classification methods
After rte_acl_build() over a given AC context has finished successfully, the context can be used
to perform classification - a search for the rule with the highest priority that matches the input
data. There are several implementations of the classify algorithm:
• RTE_ACL_CLASSIFY_SCALAR: generic implementation, doesn’t require any specific
HW support.
• RTE_ACL_CLASSIFY_SSE: vector implementation, can process up to 8 flows in paral-
lel. Requires SSE 4.1 support.
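If required, the classify method can also be selected explicitly per context. Continuing the acx context from the build example above, and assuming the rte_acl_set_ctx_classify() prototype, this might look like:

/* Force the scalar method, e.g. when SSE 4.1 is not available on the target. */
ret = rte_acl_set_ctx_classify(acx, RTE_ACL_CLASSIFY_SCALAR);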
Note: For more details about the Access Control API, please refer to the DPDK API Refer-
ence.
The following example demonstrates IPv4 5-tuple classification for the rules defined above, with
multiple categories, in more detail.
RTE_ACL_RULE_DEF(acl_ipv4_rule, RTE_DIM(ipv4_defs));

    /* destination IPv4 */
    .field[2] = {.value.u32 = IPv4(192,168,0,0), .mask_range.u32 = 16,},
    /* source port */
    .field[3] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},
    /* destination port */

    /* destination IPv4 */
    .field[2] = {.value.u32 = IPv4(192,168,1,0), .mask_range.u32 = 24,},
    /* source port */
    .field[3] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},
    /* destination port */
    .field[4] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},
},

    /* source IPv4 */
    .field[1] = {.value.u32 = IPv4(10,1,1,1), .mask_range.u32 = 32,},
    /* source port */
    .field[3] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},
    /* destination port */
    .field[4] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},
},
};

cfg.num_categories = 2;
cfg.num_fields = RTE_DIM(ipv4_defs);
For a tuple with source IP address: 10.1.1.1 and destination IP address: 192.168.1.15, once
the following lines are executed:
uint32_t results[4]; /* make classify for 4 categories. */

rte_acl_classify(acx, data, results, 1, 4);
• For category 0, both rules 1 and 2 match, but rule 2 has higher priority, therefore results[0]
contains the userdata for rule 2.
• For category 1, both rules 1 and 3 match, but rule 3 has higher priority, therefore results[1]
contains the userdata for rule 3.
• For categories 2 and 3, there are no matches, so results[2] and results[3] contain zero,
which indicates that no matches were found for those categories.
For a tuple with source IP address: 192.168.1.1 and destination IP address: 192.168.2.11,
once the following lines are executed:
uint32_t results[4]; /* make classify by 4 categories. */

rte_acl_classify(acx, data, results, 1, 4);
Packet Framework
Design Objectives
The main design objectives for the DPDK Packet Framework are:
• Provide standard methodology to build complex packet processing pipelines. Provide
reusable and extensible templates for the commonly used pipeline functional blocks;
• Provide capability to switch between pure software and hardware-accelerated implemen-
tations for the same pipeline functional block;
• Provide the best trade-off between flexibility and performance. Hardcoded pipelines usu-
ally provide the best performance, but are not flexible, while flexible frameworks are easier
to develop but usually come with lower performance;
• Provide a framework that is logically similar to OpenFlow.
Overview
Packet processing applications are frequently structured as pipelines of multiple stages, with
the logic of each stage glued around a lookup table. For each incoming packet, the table
defines the set of actions to be applied to the packet, as well as the next stage to send the
packet to.
The DPDK Packet Framework minimizes the development effort required to build packet pro-
cessing pipelines by defining a standard methodology for pipeline development, as well as
providing libraries of reusable templates for the commonly used pipeline blocks.
The pipeline is constructed by connecting the set of input ports with the set of output ports
through the set of tables in a tree-like topology. As result of lookup operation for the current
packet in the current table, one of the table entries (on lookup hit) or the default table entry (on
lookup miss) provides the set of actions to be applied on the current packet, as well as the next
hop for the packet, which can be either another table, an output port or packet drop.
An example of packet processing pipeline is presented in Fig. 4.62:
Fig. 4.62: Example of Packet Processing Pipeline where Input Ports 0 and 1 are Connected
with Output Ports 0, 1 and 2 through Tables 0 and 1
Port Types
Table 4.69 is a non-exhaustive list of ports that can be implemented with the Packet Framework.
Port Interface
Each port is unidirectional, i.e. either an input port or an output port. Each input/output port is
required to implement an abstract interface that defines the initialization and run-time operation
of the port. The port abstract interface is described in Table 4.70.
Table Types
Table 4.71 is a non-exhaustive list of types of tables that can be implemented with the Packet
Framework.
Table Interface
Each table is required to implement an abstract interface that defines the initialization and
run-time operation of the table. The table abstract interface is described in Table 4.72.
Hash tables are important because the key lookup operation is optimized for speed: instead of
having to linearly search the lookup key through all the keys in the table, the search is limited
to only the keys stored in a single table bucket.
Associative Arrays
An associative array is a function that can be specified as a set of (key, value) pairs, with each
key from the possible set of input keys present at most once. For a given associative array, the
lookup operation retrieves the value associated with a given input key.
In the packet processing context, the sequence of steps involved in hash table operations is
described in Fig. 4.63:
described in Fig. 4.63:
Fig. 4.63: Sequence of Steps for Hash Table Operations in a Packet Processing Context
Flow Classification
Description: The flow classification is executed at least once for each input packet. This oper-
ation maps each incoming packet against one of the known traffic flows in the flow database
that typically contains millions of flows.
Hash table name: Flow classification table
Number of keys: Millions
Key format: n-tuple of packet fields that uniquely identify a traffic flow/connection. Example:
DiffServ 5-tuple of (Source IP address, Destination IP address, L4 protocol, L4 protocol source
port, L4 protocol destination port). For IPv4 protocol and L4 protocols like TCP, UDP or SCTP,
the size of the DiffServ 5-tuple is 13 bytes, while for IPv6 it is 37 bytes.
Key value (key data): actions and action meta-data describing the processing to be applied
to the packets of the current flow. The size of the data associated with each traffic flow can
vary from 8 bytes to kilobytes.
Address Resolution Protocol (ARP)
Description: Once a route has been identified for an IP packet (so the output interface and
the IP address of the next hop station are known), the MAC address of the next hop station is
needed in order to send this packet onto the next leg of the journey towards its destination (as
identified by its destination IP address). The MAC address of the next hop station becomes
the destination MAC address of the outgoing Ethernet frame.
Hash table name: ARP table
Number of keys: Thousands
Key format: The pair of (Output interface, Next Hop IP address), which is typically 5 bytes for
IPv4 and 17 bytes for IPv6.
Key value (key data): MAC address of the next hop station (6 bytes).
Table 4.73 lists the hash table configuration parameters shared by all different hash table types.
Table 4.73: Configuration Parameters Common for All Hash Table Types

#   Parameter                   Details
1   Key size                    Measured as number of bytes. All keys have the same size.
2   Key value (key data) size   Measured as number of bytes.
3   Number of buckets           Needs to be a power of two.
4   Maximum number of keys      Needs to be a power of two.
5   Hash function               Examples: jhash, CRC hash, etc.
6   Hash function seed          Parameter to be passed to the hash function.
7   Key offset                  Offset of the lookup key byte array within the packet
                                meta-data stored in the packet buffer.
Bucket Full Problem On initialization, each hash table bucket is allocated space for exactly
4 keys. As keys are added to the table, it can happen that a given bucket already has 4 keys
when a new key has to be added to this bucket. The possible options are:
1. Least Recently Used (LRU) Hash Table. One of the existing keys in the bucket is
deleted and the new key is added in its place. The number of keys in each bucket never
grows bigger than 4. The logic to pick the key to be dropped from the bucket is LRU. The
hash table lookup operation maintains the order in which the keys in the same bucket are
hit, so every time a key is hit, it becomes the new Most Recently Used (MRU) key, i.e.
the last candidate for drop. When a key is added to the bucket, it also becomes the new
MRU key. When a key needs to be picked and dropped, the first candidate for drop, i.e.
the current LRU key, is always picked. The LRU logic requires maintaining specific data
structures per each bucket.
2. Extendable Bucket Hash Table. The bucket is extended with space for 4 more keys.
This is done by allocating additional memory at table initialization time, which is used to
create a pool of free keys (the size of this pool is configurable and always a multiple of 4).
On key add operation, the allocation of a group of 4 keys only happens successfully within
the limit of free keys, otherwise the key add operation fails. On key delete operation, a
group of 4 keys is freed back to the pool of free keys when the key to be deleted is the
only key that was used within its group of 4 keys at that time. On key lookup operation,
if the current bucket is in extended state and a match is not found in the first group of 4
keys, the search continues beyond the first group of 4 keys, potentially until all keys in
this bucket are examined. The extendable bucket logic requires maintaining specific data
structures per table and per each bucket.
Signature Computation The possible options for key signature computation are:
1. Pre-computed key signature. The key lookup operation is split between two CPU cores.
The first CPU core (typically the CPU core that performs packet RX) extracts the key
from the input packet, computes the key signature and saves both the key and the key
signature in the packet buffer as packet meta-data. The second CPU core reads both
the key and the key signature from the packet meta-data and performs the bucket search
step of the key lookup operation.
2. Key signature computed on lookup (“do-sig” version). The same CPU core reads the
key from the packet meta-data, uses it to compute the key signature and also performs
the bucket search step of the key lookup operation.
Table 4.75: Configuration Parameters Specific to Pre-computed Key Signature Hash Table

#   Parameter          Details
1   Signature offset   Offset of the pre-computed key signature within the packet meta-data.
Key Size Optimized Hash Tables For specific key sizes, the data structures and algorithm
of the key lookup operation can be specially handcrafted for further performance improvements,
so the following options are possible:
1. Implementation supporting configurable key size.
2. Implementation supporting a single key size. Typical key sizes are 8 bytes and 16
bytes.
The performance of the bucket search logic is one of the main factors influencing the perfor-
mance of the key lookup operation. The data structures and algorithm are designed to make
the best use of Intel CPU architecture resources like: cache memory space, cache memory
bandwidth, external memory bandwidth, multiple execution units working in parallel, out of
order instruction execution, special CPU instructions, etc.
The bucket search logic handles multiple input packets in parallel. It is built as a pipeline of
several stages (3 or 4), with each pipeline stage handling two different packets from the burst
of input packets. On each pipeline iteration, the packets are pushed to the next pipeline stage:
for the 4-stage pipeline, two packets (that just completed stage 3) exit the pipeline, two packets
(that just completed stage 2) are now executing stage 3, two packets (that just completed stage
1) are now executing stage 2, two packets (that just completed stage 0) are now executing
stage 1 and two packets (next two packets to read from the burst of input packets) are entering
the pipeline to execute stage 0. The pipeline iterations continue until all packets from the burst
of input packets execute the last stage of the pipeline.
The bucket search logic is broken into pipeline stages at the boundary of the next memory
access. Each pipeline stage uses data structures that are stored (with high probability) into
the L1 or L2 cache memory of the current CPU core and breaks just before the next memory
access required by the algorithm. The current pipeline stage finalizes by prefetching the data
structures required by the next pipeline stage, so given enough time for the prefetch to com-
plete, when the next pipeline stage eventually gets executed for the same packets, it will read
the data structures it needs from L1 or L2 cache memory and thus avoid the significant penalty
incurred by L2 or L3 cache memory miss.
By prefetching the data structures required by the next pipeline stage in advance (before they
are used) and switching to executing another pipeline stage for different packets, the number of
L2 or L3 cache memory misses is greatly reduced, hence one of the main reasons for improved
performance. This is because the cost of L2/L3 cache memory miss on memory read accesses
is high, as usually due to data dependency between instructions, the CPU execution units have
to stall until the read operation is completed from L3 cache memory or external DRAM memory.
By using prefetch instructions, the latency of memory read accesses is hidden, provided that the
prefetch is performed early enough before the respective data structure is actually used.
By splitting the processing into several stages that are executed on different packets (the pack-
ets from the input burst are interlaced), enough work is created to allow the prefetch instruc-
tions to complete successfully (before the prefetched data structures are actually accessed)
and also the data dependency between instructions is loosened. For example, for the 4-stage
pipeline, stage 0 is executed on packets 0 and 1 and then, before same packets 0 and 1 are
used (i.e. before stage 1 is executed on packets 0 and 1), different packets are used: packets
2 and 3 (executing stage 1), packets 4 and 5 (executing stage 2) and packets 6 and 7 (exe-
cuting stage 3). By executing useful work while the data structures are brought into the L1 or
L2 cache memory, the latency of the read memory accesses is hidden. By increasing the gap
between two consecutive accesses to the same data structure, the data dependency between
instructions is loosened; this allows making the best use of the super-scalar and out-of-order
execution CPU architecture, as the number of CPU core execution units that are active (rather
than idle or stalled due to data dependency constraints between instructions) is maximized.
The bucket search logic is also implemented without using any branch instructions. This avoids
the important cost associated with flushing the CPU core execution pipeline on every instance
of branch misprediction.
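The following simplified, generic sketch (not the actual table code) illustrates the interleaving idea for a two-stage version of such a pipeline: stage 0 locates and prefetches the buckets for the next two packets, while stage 1 searches the buckets that were prefetched on the previous iteration. All names and the toy bucket layout are assumptions made for the example.

#include <stdint.h>

#define N_BUCKETS 256

struct bucket {
    uint32_t sig[4];
    const void *key[4];
};

static struct bucket table[N_BUCKETS];

static struct bucket *
bucket_of(uint32_t sig)
{
    return &table[sig & (N_BUCKETS - 1)];
}

/* Toy search: pointer identity stands in for the real key comparison. */
static int
bucket_search(const struct bucket *b, const void *key)
{
    unsigned int j;

    for (j = 0; j < 4; j++)
        if (b->key[j] == key)
            return 1;
    return 0;
}

static void
lookup_burst(const void **keys, const uint32_t *sigs, int *hit, unsigned int n)
{
    struct bucket *b0 = NULL, *b1 = NULL;   /* buckets prefetched last iteration */
    unsigned int i;

    for (i = 0; i + 2 <= n; i += 2) {
        /* Stage 0: locate and prefetch the buckets for the next two packets. */
        struct bucket *n0 = bucket_of(sigs[i]);
        struct bucket *n1 = bucket_of(sigs[i + 1]);
        __builtin_prefetch(n0, 0, 3);
        __builtin_prefetch(n1, 0, 3);

        /* Stage 1: search the buckets prefetched on the previous iteration;
         * by now they should be resident in the L1/L2 cache. */
        if (i >= 2) {
            hit[i - 2] = bucket_search(b0, keys[i - 2]);
            hit[i - 1] = bucket_search(b1, keys[i - 1]);
        }
        b0 = n0;
        b1 = n1;
    }

    /* Drain the pipeline: finish the last prefetched pair, then any leftover packet. */
    if (i >= 2) {
        hit[i - 2] = bucket_search(b0, keys[i - 2]);
        hit[i - 1] = bucket_search(b1, keys[i - 1]);
    }
    if (i < n)
        hit[i] = bucket_search(bucket_of(sigs[i]), keys[i]);
}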
Configurable Key Size Hash Table Fig. 4.64, Table 4.76 and Table 4.77 detail the main
data structures used to implement configurable key size hash tables (either LRU or extendable
bucket, either with pre-computed signature or “do-sig”).
Fig. 4.64: Data Structures for Configurable Key Size Hash Tables
Table 4.76: Main Large Data Structures (Arrays) used for Configurable Key Size Hash Tables
#   Array name                Number of entries              Entry size (bytes)          Description
1   Bucket array              n_buckets (configurable)       32                          Buckets of the hash table.
2   Bucket extensions array   n_buckets_ext (configurable)   32                          This array is only created for
                                                                                         extendable bucket tables.
3   Key array                 n_keys                         key_size (configurable)     Keys added to the hash table.
4   Data array                n_keys                         entry_size (configurable)   Key values (key data) associated
                                                                                         with the hash table keys.
Table 4.77: Field Description for Bucket Array Entry (Configurable Key Size Hash Tables)
#   Field name         Field size (bytes)   Description
1   Next Ptr/LRU       8                    For LRU tables, this field represents the LRU list for the
                                            current bucket, stored as an array of 4 entries of 2 bytes
                                            each. Entry 0 stores the index (0 .. 3) of the MRU key, while
                                            entry 3 stores the index of the LRU key.
                                            For extendable bucket tables, this field represents the next
                                            pointer (i.e. the pointer to the next group of 4 keys linked
                                            to the current bucket). The next pointer is not NULL if the
                                            bucket is currently extended or NULL otherwise. To help the
                                            branchless implementation, bit 0 (least significant bit) of
                                            this field is set to 1 if the next pointer is not NULL and to
                                            0 otherwise.
2   Sig[0 .. 3]        4 x 2                If key X (X = 0 .. 3) is valid, then sig X bits 15 .. 1 store
                                            the most significant 15 bits of the key X signature and sig X
                                            bit 0 is set to 1. If key X is not valid, then sig X is set
                                            to zero.
3   Key Pos [0 .. 3]   4 x 4                If key X (X = 0 .. 3) is valid, then Key Pos X represents the
                                            index into the key array where key X is stored, as well as
                                            the index into the data array where the value associated with
                                            key X is stored. If key X is not valid, then the value of Key
                                            Pos X is undefined.
Fig. 4.65 and Table 4.78 detail the bucket search pipeline stages (either LRU or extendable
bucket, either with pre-computed signature or “do-sig”). For each pipeline stage, the described
operations are applied to each of the two packets handled by that stage.
Fig. 4.65: Bucket Search Pipeline for Key Lookup Operation (Configurable Key Size Hash
Tables)
Table 4.78: Description of the Bucket Search Pipeline Stages (Configurable Key Size Hash Tables)

#   Stage name                  Description
0   Prefetch packet meta-data   Select the next two packets from the burst of input packets.
                                Prefetch the packet meta-data containing the key and key signature.
1   Prefetch table bucket       Read the key signature from the packet meta-data (for extendable
                                bucket hash tables) or read the key from the packet meta-data and
                                compute the key signature (for LRU tables).
                                Identify the bucket ID using the key signature.
                                Set bit 0 of the signature to 1 (to match only signatures of valid
                                keys from the table).
                                Prefetch the bucket.
2   Prefetch table key          Read the key signatures from the bucket.
                                Compare the signature of the input key against the 4 key signatures
                                from the bucket. As a result, the following is obtained:
                                match = set to TRUE if there was at least one signature match and
                                to FALSE in the case of no signature match;
                                match_many = set to TRUE if there were more than one signature
                                match (can be up to 4 signature matches in the worst case scenario)
                                and to FALSE otherwise;
                                match_pos = the index of the first key that produced a signature
                                match (only valid if match is TRUE).
                                For extendable bucket hash tables only, set match_many to TRUE if
                                the next pointer is valid.
                                Prefetch the bucket key indicated by match_pos (even if match_pos
                                does not point to a valid key).
3   Prefetch table data         Read the bucket key indicated by match_pos.
                                Compare the bucket key against the input key.
Additional notes:
1. The pipelined version of the bucket search algorithm is executed only if there are at least
7 packets in the burst of input packets. If there are less than 7 packets in the burst of input
packets, a non-optimized implementation of the bucket search algorithm is executed.
2. Once the pipelined version of the bucket search algorithm has been executed for all the
packets in the burst of input packets, the non-optimized implementation of the bucket
search algorithm is also executed for any packets that did not produce a lookup hit, but
have the match_many flag set. As result of executing the non-optimized version, some
of these packets may produce a lookup hit or lookup miss. This does not impact the
performance of the key lookup operation, as the probability of matching more than one
signature in the same group of 4 keys or of having the bucket in extended state (for
extendable bucket hash tables only) is relatively small.
Key Signature Comparison Logic
The key signature comparison logic is described in Table 4.79.
Table 4.80: Collapsed Lookup Tables for Match, Match_Many and Match_Pos
Bit array Hexadecimal value
match 1111_1111_1111_1110 0xFFFELLU
match_many 1111_1110_1110_1000 0xFEE8LLU
match_pos 0001_0010_0001_0011__0001_0010_0001_0000 0x12131210LLU
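As a hedged illustration of how such collapsed lookup tables can be consumed without branches, the 4-bit mask of per-key signature matches (bit X set if signature X matched) can be used as a shift amount into the constants above; the helper below is an assumption for explanatory purposes, not the DPDK implementation itself.

#include <stdint.h>

#define LUT_MATCH       0xFFFEULL
#define LUT_MATCH_MANY  0xFEE8ULL
#define LUT_MATCH_POS   0x12131210ULL

static void
resolve_signature_mask(uint32_t sig_match_mask,   /* bits 0 .. 3: signature X matched */
                       uint32_t *match, uint32_t *match_many, uint32_t *match_pos)
{
    /* One bit per possible mask value for match and match_many,
     * two bits per possible mask value for match_pos. */
    *match      = (LUT_MATCH      >> sig_match_mask) & 1;
    *match_many = (LUT_MATCH_MANY >> sig_match_mask) & 1;
    *match_pos  = (LUT_MATCH_POS  >> (2 * sig_match_mask)) & 3;
}

For example, a mask of 0x6 (signatures 1 and 2 matched) yields match = 1, match_many = 1 and match_pos = 1, i.e. the first matching key is key 1.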
Single Key Size Hash Tables Fig. 4.66, Fig. 4.67, Table 4.81 and Table 4.82 detail the
main data structures used to implement 8-byte and 16-byte key hash tables (either LRU or
extendable bucket, either with pre-computed signature or “do-sig”).
Table 4.81: Main Large Data Structures (Arrays) used for 8-byte and 16-byte Key Size Hash Tables

#   Array name                Number of entries              Entry size (bytes)                       Description
1   Bucket array              n_buckets (configurable)       8-byte key size: 64 + 4 x entry_size     Buckets of the hash table.
                                                             16-byte key size: 128 + 4 x entry_size
2   Bucket extensions array   n_buckets_ext (configurable)   8-byte key size: 64 + 4 x entry_size     This array is only created for
                                                             16-byte key size: 128 + 4 x entry_size   extendable bucket tables.
Table 4.82: Field Description for Bucket Array Entry (8-byte and 16-byte Key Hash Tables)
#   Field name     Field size (bytes)   Description
1   Valid          8                    Bit X (X = 0 .. 3) is set to 1 if key X is valid or to 0
                                        otherwise.
                                        Bit 4 is only used for extendable bucket tables to help with
                                        the implementation of the branchless logic. In this case, bit
                                        4 is set to 1 if the next pointer is valid (not NULL) or to 0
                                        otherwise.
2   Next Ptr/LRU   8                    For LRU tables, this field represents the LRU list for the
                                        current bucket, stored as an array of 4 entries of 2 bytes
                                        each. Entry 0 stores the index (0 .. 3) of the MRU key, while
                                        entry 3 stores the index of the LRU key.
                                        For extendable bucket tables, this field represents the next
                                        pointer (i.e. the pointer to the next group of 4 keys linked
                                        to the current bucket). The next pointer is not NULL if the
                                        bucket is currently extended or NULL otherwise.
3   Key [0 .. 3]   4 x key_size         Full keys.
4   Data [0 .. 3]  4 x entry_size       Full key values (key data) associated with keys 0 .. 3.
Fig. 4.68 and Table 4.83 detail the bucket search pipeline used to implement 8-byte and 16-byte
key hash tables (either LRU or extendable bucket, either with pre-computed signature or
“do-sig”). For each pipeline stage, the described operations are applied to each of the two
packets handled by that stage.
Fig. 4.68: Bucket Search Pipeline for Key Lookup Operation (Single Key Size Hash Tables)
Table 4.83: Description of the Bucket Search Pipeline Stages (8-byte and 16-byte Key Hash Tables)

#   Stage name                  Description
0   Prefetch packet meta-data   1. Select next two packets from the burst of input packets.
                                2. Prefetch packet meta-data containing the key and key signature.
Additional notes:
1. The pipelined version of the bucket search algorithm is executed only if there are at least
5 packets in the burst of input packets. If there are less than 5 packets in the burst of input
packets, a non-optimized implementation of the bucket search algorithm is executed.
2. For extendable bucket hash tables only, once the pipelined version of the bucket search
algorithm has been executed for all the packets in the burst of input packets, the non-
optimized implementation of the bucket search algorithm is also executed for any packets
that did not produce a lookup hit, but have the bucket in extended state. As result of
executing the non-optimized version, some of these packets may produce a lookup hit or
lookup miss. This does not impact the performance of the key lookup operation, as the
probability of having the bucket in extended state is relatively small.
To avoid any dependencies on the order in which pipeline elements are created, the connec-
tivity of pipeline elements is defined after all the pipeline input ports, output ports and tables
have been created.
General connectivity rules:
1. Each input port is connected to a single table. No input port should be left unconnected;
2. The table connectivity to other tables or to output ports is regulated by the next hop
actions of each table entry and the default table entry. The table connectivity is fluid, as
the table entries and the default table entry can be updated during run-time.
• A table can have multiple entries (including the default entry) connected to the same
output port. A table can have different entries connected to different output ports.
Different tables can have entries (including default table entry) connected to the
same output port.
• A table can have multiple entries (including the default entry) connected to another
table, in which case all these entries have to point to the same table. This constraint
is enforced by the API and prevents tree-like topologies from being created (allow-
ing table chaining only), with the purpose of simplifying the implementation of the
pipeline run-time execution engine.
Port Actions
An action handler can be assigned to each input/output port to define actions to be executed
on each input packet that is received by the port. Defining the action handler for a specific
input/output port is optional (i.e. the action handler can be disabled).
For input ports, the action handler is executed after the RX function. For output ports, the action
handler is executed before the TX function.
The action handler can decide to drop packets.
Table Actions
An action handler to be executed on each input packet can be assigned to each table. Defining
the action handler for a specific table is optional (i.e. the action handler can be disabled).
The action handler is executed after the table lookup operation is performed and the table
entry associated with each input packet is identified. The action handler can only handle the
user-defined actions, while the reserved actions (e.g. the next hop actions) are handled by the
Packet Framework. The action handler can decide to drop the input packet.
Reserved Actions
The reserved actions are handled directly by the Packet Framework without the user being able
to change their meaning through the table action handler configuration. A special category of
the reserved actions is represented by the next hop actions, which regulate the packet flow
between input ports, tables and output ports through the pipeline. Table 4.84 lists the next hop
actions.
User Actions
For each table, the meaning of user actions is defined through the configuration of the table
action handler. Different tables can be configured with different action handlers, therefore the
meaning of the user actions and their associated meta-data is private to each table. Within
the same table, all the table entries (including the table default entry) share the same definition
for the user actions and their associated meta-data, with each table entry having its own set
of enabled user actions and its own copy of the action meta-data. Table 4.85 contains a non-
exhaustive list of user action examples.
Multicore Scaling
A complex application is typically split across multiple cores, with cores communicating through
SW queues. There is usually a performance limit on the number of table lookups and actions
that can be fitted on the same CPU core due to HW constraints like: available CPU cycles,
cache memory size, cache transfer BW, memory transfer BW, etc.
As the application is split across multiple CPU cores, the Packet Framework facilitates the
creation of several pipelines, the assignment of each such pipeline to a different CPU core
and the interconnection of all CPU core-level pipelines into a single application-level complex
pipeline. For example, if CPU core A is assigned to run pipeline P1 and CPU core B pipeline
P2, then the interconnection of P1 with P2 could be achieved by having the same set of SW
queues act like output ports for P1 and input ports for P2.
This approach enables the application development using the pipeline, run-to-completion (clus-
tered) or hybrid (mixed) models.
It is allowed for the same core to run several pipelines, but it is not allowed for several cores to
run the same pipeline.
The threads performing table lookup are actually table writers rather than just readers. Even if
the specific table lookup algorithm is thread-safe for multiple readers (e. g. read-only access
of the search algorithm data structures is enough to conduct the lookup operation), once the
table entry for the current packet is identified, the thread is typically expected to update the
action meta-data stored in the table entry (e.g. increment the counter tracking the number of
packets that hit this table entry), and thus modify the table entry. During the time this thread
is accessing this table entry (either writing or reading; duration is application specific), for data
consistency reasons, no other threads (threads performing table lookup or entry add/delete
operations) are allowed to modify this table entry.
Mechanisms to share the same table between multiple threads:
The presence of accelerators is usually detected during the initialization phase by inspecting
the HW devices that are part of the system (e.g. by PCI bus enumeration). Typical devices
with acceleration capabilities are:
• Inline accelerators: NICs, switches, FPGAs, etc;
• Look-aside accelerators: chipsets, FPGAs, etc.
Usually, to support a specific functional block, specific implementation of Packet Framework
tables and/or ports and/or actions has to be provided for each accelerator, with all the imple-
mentations sharing the same API: pure SW implementation (no acceleration), implementation
using accelerator A, implementation using accelerator B, etc. The selection between these
implementations could be done at build time or at run-time (recommended), based on which
accelerators are present in the system, with no application changes required.
Vhost Library
The vhost library implements a user space virtio net server allowing the user to manipulate the
virtio ring directly. In other words, it allows the user to fetch/put packets from/to the VM virtio
net device. To achieve this, a vhost library should be able to:
• Access the guest memory:
For QEMU, this is done by using the -object
memory-backend-file,share=on,... option, which means QEMU will create
a file to serve as the guest RAM. The share=on option allows another process to
map that file, which means it can access the guest RAM.
• Know all the necessary information about the vring:
Information such as where the available ring is stored. Vhost defines some messages
(passed through a Unix domain socket file) to tell the backend all the information it needs
to know how to manipulate the vring.
* zero copy is not good for small packets (typically for packet size below 512).
* zero copy is really good for the VM2VM case. For iperf between two VMs, the boost
could be above 70% (when TSO is enabled).
* for the VM2NIC case, the nb_tx_desc has to be small enough: <= 64 if the virtio
indirect feature is not enabled and <= 128 if it is enabled.
This is because when dequeue zero copy is enabled, the guest Tx used vring will
be updated only when the corresponding mbuf is freed. Thus, the nb_tx_desc has
to be small enough so that the PMD driver will run out of available Tx descriptors
and free mbufs timely. Otherwise, the guest Tx vring would be starved.
* Guest memory should be backed by huge pages to achieve better perfor-
mance. Using a 1G page size is best.
When dequeue zero copy is enabled, the mapping between guest physical
addresses and host physical addresses has to be established. Using non-huge pages
means far more page segments. To keep it simple, DPDK vhost does a linear search
of those segments, thus the fewer the segments, the quicker the mapping is found.
NOTE: we may speed this up by using tree searching in the future.
• rte_vhost_driver_session_start()
This function starts the vhost session loop to handle vhost messages. It starts an infinite
loop, therefore it should be called in a dedicated thread.
• rte_vhost_driver_callback_register(virtio_net_device_ops)
This function registers a set of callbacks, to let DPDK applications take the appropriate
action when some events happen. The following events are currently supported:
– new_device(int vid)
This callback is invoked when a virtio net device becomes ready. vid is the virtio
net device ID.
– destroy_device(int vid)
This callback is invoked when a virtio net device shuts down (or when the vhost
connection is broken).
– vring_state_changed(int vid, uint16_t queue_id, int enable)
This callback is invoked when a specific queue’s state is changed, for example when
it is enabled or disabled.
• rte_vhost_enqueue_burst(vid, queue_id, pkts, count)
Transmits (enqueues) count packets from host to guest.
• rte_vhost_dequeue_burst(vid, queue_id, mbuf_pool, pkts, count)
Receives (dequeues) count packets from the guest, and stores them in pkts.
• rte_vhost_feature_disable/rte_vhost_feature_enable(feature_mask)
This function disables/enables some features. For example, it can be used to disable
mergeable buffers and TSO features, which both are enabled by default.
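A minimal forwarding sketch built only from the enqueue/dequeue calls listed above is shown below; the queue indices (1 for guest Tx, 0 for guest Rx), the header name and the mbuf handling are assumptions for this release, and the vid would normally come from the new_device() callback.

#include <rte_mbuf.h>
#include <rte_virtio_net.h>   /* vhost API header in this release */

#define BURST 32

static void
vhost_loopback_once(int vid, struct rte_mempool *mbuf_pool)
{
    struct rte_mbuf *pkts[BURST];
    uint16_t nb_rx, nb_tx, i;

    /* Pull packets the guest has transmitted (guest Tx ring, queue index 1). */
    nb_rx = rte_vhost_dequeue_burst(vid, 1, mbuf_pool, pkts, BURST);

    /* Loop them straight back to the guest (guest Rx ring, queue index 0).
     * The enqueue call copies the data into the vring, so the host-side mbufs
     * are always freed here; nb_tx may be less than nb_rx if the ring is full. */
    nb_tx = rte_vhost_enqueue_burst(vid, 0, pkts, nb_rx);
    (void)nb_tx;

    for (i = 0; i < nb_rx; i++)
        rte_pktmbuf_free(pkts[i]);
}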
Vhost-user Implementations
Vhost-user uses Unix domain sockets for passing messages. This means the DPDK vhost-
user implementation has two options:
• DPDK vhost-user acts as the server.
DPDK will create a Unix domain socket server file and listen for connections from the
frontend.
Note, this is the default mode, and the only mode before DPDK v16.07.
• DPDK vhost-user acts as the client.
Unlike the server mode, this mode doesn’t create the socket file; it just tries to connect to
the server (which is then responsible for creating the file).
When the DPDK vhost-user application restarts, DPDK vhost-user will try to connect to
the server again. This is how the “reconnect” feature works.
Note:
– The “reconnect” feature requires QEMU v2.7 (or above).
– The vhost supported features must be exactly the same before and after the restart.
For example, if TSO is disabled and then enabled, nothing will work and undefined
issues might happen.
No matter which mode is used, once a connection is established, DPDK vhost-user will start
receiving and processing vhost messages from QEMU.
For messages with a file descriptor, the file descriptor can be used directly in the vhost process
as it is already installed by the Unix domain socket.
The supported vhost messages are:
• VHOST_SET_MEM_TABLE
• VHOST_SET_VRING_KICK
• VHOST_SET_VRING_CALL
• VHOST_SET_LOG_FD
• VHOST_SET_VRING_ERR
For VHOST_SET_MEM_TABLE message, QEMU will send information for each memory region
and its file descriptor in the ancillary data of the message. The file descriptor is used to map
that region.
VHOST_SET_VRING_KICK is used as the signal to put the vhost device into the data plane,
and VHOST_GET_VRING_BASE is used as the signal to remove the vhost device from the data
plane.
When the socket connection is closed, vhost will destroy the device.
For more vhost details and how to support vhost in vSwitch, please refer to the vhost example
in the DPDK Sample Applications Guide.
The Port Hotplug Framework provides DPDK applications with the ability to attach and detach
ports at runtime. Because the framework depends on PMD implementation, the ports that
PMDs cannot handle are out of scope of this framework. Furthermore, after detaching a port
from a DPDK application, the framework doesn’t provide a way for removing the devices from
the system. For ports backed by a physical NIC, the kernel needs to support the PCI Hotplug
feature.
Overview
To attach a new physical device port, the device must first be recognized by a userspace driver
I/O framework in the kernel. Then DPDK applications can call the Port Hotplug functions
to attach the port. For detaching, the steps are the reverse.
• Before detaching, ports must be stopped and closed.
DPDK applications must call “rte_eth_dev_stop()” and “rte_eth_dev_close()” APIs before
detaching ports. These functions will start finalization sequence of the PMDs.
• The framework doesn’t affect the behavior of legacy DPDK applications.
If the Port Hotplug functions aren’t called, all legacy DPDK apps can still work without
modification.
• Attaching a port
The “rte_eth_dev_attach()” API attaches a port to a DPDK application, and returns the attached
port number. Before calling the API, the device should be recognized by a userspace
driver I/O framework. The API receives a PCI address like “0000:01:00.0” or a virtual
device name like “net_pcap0,iface=eth0”. In the case of a virtual device name, the format
is the same as the general “--vdev” option of DPDK.
• Detaching a port
The “rte_eth_dev_detach()” API detaches a port from a DPDK application, and returns the PCI
address of the detached device or the virtual device name of the device.
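A hedged sketch of the attach/stop/close/detach sequence described above, assuming the rte_eth_dev_attach() and rte_eth_dev_detach() prototypes of this release (port IDs are 8-bit here); the buffer size and error handling are illustrative.

#include <rte_ethdev.h>

static int
hotplug_cycle(const char *devargs)
{
    uint8_t port_id;
    char devname[64];   /* assumed large enough for the returned device name */

    /* e.g. devargs = "0000:01:00.0" or "net_pcap0,iface=eth0" */
    if (rte_eth_dev_attach(devargs, &port_id) != 0)
        return -1;

    /* ... configure, start and use the port here ... */

    /* The port must be stopped and closed before it can be detached. */
    rte_eth_dev_stop(port_id);
    rte_eth_dev_close(port_id);

    return rte_eth_dev_detach(port_id, devname);
}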
Reference
Limitations
Source Organization
Note: In the following descriptions, RTE_SDK is the environment variable that points to the
base directory into which the tarball was extracted. See Useful Variables Provided by the Build
System for descriptions of other variables.
Makefiles that are provided by the DPDK libraries and applications are located in
$(RTE_SDK)/mk.
Config templates are located in $(RTE_SDK)/config. The templates describe the options
that are enabled for each target. The config file also contains items that can be enabled and
disabled for many of the DPDK libraries, including debug options. The user should look at
the config file and become familiar with these options. The config file is also used to create a
header file, which will be located in the new build directory.
Libraries
Drivers
Drivers are special libraries which provide poll-mode driver implementations for devices: either
hardware devices or pseudo/virtual devices. They are contained in the drivers subdirectory,
classified by type, and each compiles to a library with the format librte_pmd_X.a where X
is the driver name.
The drivers directory has a net subdirectory which contains:
drivers/net
+-- af_packet # Poll mode driver based on Linux af_packet
+-- bonding # Bonding poll mode driver
+-- cxgbe # Chelsio Terminator 10GbE/40GbE poll mode driver
Note: Several of the driver/net directories contain a base sub-directory. The base direc-
tory generally contains code that shouldn’t be modified directly by the user. Any enhancements
should be done via the X_osdep.c and/or X_osdep.h files in that directory. Refer to the local
README in the base directories for driver specific instructions.
Applications
Applications are source files that contain a main() function. They are located in the
$(RTE_SDK)/app and $(RTE_SDK)/examples directories.
The app directory contains sample applications that are used to test DPDK (such as autotests)
or the Poll Mode Drivers (test-pmd):
app
+-- chkincs # Test program to check include dependencies
+-- cmdline_test # Test the commandline library
+-- test # Autotests to validate DPDK features
+-- test-acl # Test the ACL library
+-- test-pipeline # Test the IP Pipeline framework
+-- test-pmd # Test and benchmark poll mode drivers
The examples directory contains sample applications that show how libraries can be used:
examples
+-- cmdline # Example of using the cmdline library
+-- dpdk_qat # Sample integration with Intel QuickAssist
+-- exception_path # Sending packets to and from Linux TAP device
+-- helloworld # Basic Hello World example
+-- ip_reassembly # Example showing IP reassembly
+-- ip_fragmentation # Example showing IPv4 fragmentation
+-- ipv4_multicast # Example showing IPv4 multicast
+-- kni # Kernel NIC Interface (KNI) example
+-- l2fwd # L2 forwarding with and without SR-IOV
+-- l3fwd # L3 forwarding example
+-- l3fwd-power # L3 forwarding example with power management
+-- l3fwd-vf # L3 forwarding example with SR-IOV
+-- link_status_interrupt # Link status change interrupt example
+-- load_balancer # Load balancing across multiple cores/sockets
+-- multi_process # Example apps using multiple DPDK processes
+-- qos_meter # QoS metering example
+-- qos_sched # QoS scheduler and dropper example
+-- timer # Example of using librte_timer library
+-- vmdq_dcb # Example of VMDQ and DCB receiving
+-- vmdq # Example of VMDQ receiving
+-- vhost # Example of userspace vhost and switch
Note: The actual examples directory may contain additional sample applications to those
shown above. Check the latest DPDK source files for details.
The DPDK requires a build system for compilation activities and so on. This section describes
the constraints and the mechanisms used in the DPDK framework.
There are two use-cases for the framework:
• Compilation of the DPDK libraries and sample applications; the framework generates
specific binary libraries, include files and sample applications
• Compilation of an external application or library, using an installed binary DPDK
After installation, a build directory structure is created. Each build directory contains include
files, libraries, and applications.
A build directory is specific to a configuration that includes architecture + execution environ-
ment + toolchain. It is possible to have several build directories sharing the same sources with
different configurations.
For instance, to create a new build directory called my_sdk_build_dir using the default config-
uration template config/defconfig_x86_64-linuxapp, we use:
cd ${RTE_SDK}
make config T=x86_64-native-linuxapp-gcc O=my_sdk_build_dir
This creates a new my_sdk_build_dir directory. After that, we can compile by doing:
cd my_sdk_build_dir
make
+-- lib
+-- librte_eal
| `-- ...
+-- librte_mempool
| +-- mempool-file1.o
| +-- .mempool-file1.o.cmd
| +-- .mempool-file1.o.d
| +-- mempool-file2.o
| +-- .mempool-file2.o.cmd
| +-- .mempool-file2.o.d
| `-- mempool.a
`-- ...
Refer to Development Kit Root Makefile Help for details about make commands that can be
used from the root of DPDK.
Since DPDK is in essence a development kit, the first objective of end users will be to create
an application using this SDK. To compile an application, the user must set the RTE_SDK and
RTE_TARGET environment variables.
export RTE_SDK=/opt/DPDK
export RTE_TARGET=x86_64-native-linuxapp-gcc
cd /path/to/my_app
For a new application, the user must create their own Makefile that includes some .mk files,
such as ${RTE_SDK}/mk/rte.vars.mk, and ${RTE_SDK}/mk/rte.app.mk. This is described in
Building Your Own Application.
Depending on the chosen target (architecture, machine, executive environment, toolchain) de-
fined in the Makefile or as an environment variable, the applications and libraries will com-
pile using the appropriate .h files and will link with the appropriate .a files. These files are
located in ${RTE_SDK}/arch-machine-execenv-toolchain, which is referenced internally by
${RTE_SDK_BIN}.
To compile their application, the user just has to call make. The compilation result will be
located in the /path/to/my_app/build directory.
Sample applications are provided in the examples directory.
Makefile Description
The following is a minimal external application Makefile, as used by the Hello World sample
application:
include $(RTE_SDK)/mk/rte.vars.mk
# binary name
APP = helloworld
# all source files are stored in SRCS-y
SRCS-y := main.c
CFLAGS += -O3
CFLAGS += $(WERROR_FLAGS)
include $(RTE_SDK)/mk/rte.extapp.mk
Makefile Types
Depending on the .mk file which is included at the end of the user Makefile, the Makefile will
have a different role. Note that it is not possible to build a library and an application in the
same Makefile. For that, the user must create two separate Makefiles, possibly in two different
directories.
In any case, the rte.vars.mk file must be included in the user Makefile as soon as possible.
Application
Generate a binary application.
• rte.app.mk: Application in the development kit framework
• rte.extapp.mk: External application
Library
Generate a .a library.
• rte.lib.mk: Library in the development kit framework
• rte.extlib.mk: external library
• rte.hostlib.mk: host library in the development kit framework
Install
• rte.install.mk: Does not build anything, it is only used to create links or copy files to the
installation directory. This is useful for including files in the development kit framework.
Kernel Module
• rte.module.mk: Build a kernel module in the development kit framework.
Objects
• rte.obj.mk: Object aggregation (merge several .o in one) in the development kit frame-
work.
• rte.extobj.mk: Object aggregation (merge several .o in one) outside the development kit
framework.
Misc
app/dpdk-pmdinfogen
dpdk-pmdinfogen scans an object (.o) file for various well known symbol names. These well
known symbol names are defined by various macros and used to export important information
about hardware support and usage for PMD files. For instance, the macro:
RTE_PMD_REGISTER_PCI(name, drv)
is scanned for by dpdk-pmdinfogen. Using this information, other relevant bits of data can
be extracted from the object file and used to produce a hardware support description, which
dpdk-pmdinfogen then encodes into a JSON-formatted string of the following form:
static char <name_pmd_string>="PMD_INFO_STRING=\"{'name' : '<name>', ...}\"";
These strings can then be searched for by external tools to determine the hardware support of
a given library or application.
Useful Variables Provided by the Build System
• RTE_SDK: The absolute path to the DPDK sources. When compiling the development
kit, this variable is automatically set by the framework. It has to be defined by the user as
an environment variable if compiling an external application.
• RTE_SRCDIR: The path to the root of the sources. When compiling the development kit,
RTE_SRCDIR = RTE_SDK. When compiling an external application, the variable points
to the root of external application sources.
• RTE_OUTPUT: The path to which output files are written. Typically, it is
$(RTE_SRCDIR)/build, but it can be overridden by the O= option in the make command
line.
• RTE_TARGET: A string identifying the target for which we are building. The format is
arch-machine-execenv-toolchain. When compiling the SDK, the target is deduced by the
build system from the configuration (.config). When building an external application, it
must be specified by the user in the Makefile or as an environment variable.
• RTE_SDK_BIN: References $(RTE_SDK)/$(RTE_TARGET).
• RTE_ARCH: Defines the architecture (i686, x86_64). It is the same value as CON-
FIG_RTE_ARCH but without the double-quotes around the string.
• RTE_MACHINE: Defines the machine. It is the same value as CONFIG_RTE_MACHINE
but without the double-quotes around the string.
• RTE_TOOLCHAIN: Defines the toolchain (gcc, icc). It is the same value as CON-
FIG_RTE_TOOLCHAIN but without the double-quotes around the string.
• RTE_EXEC_ENV: Defines the executive environment (linuxapp). It is the same value as
CONFIG_RTE_EXEC_ENV but without the double-quotes around the string.
• RTE_KERNELDIR: This variable contains the absolute path to the kernel sources that
will be used to compile the kernel modules. The kernel headers must be the same as the
ones that will be used on the target machine (the machine that will run the application).
By default, the variable is set to /lib/modules/$(shell uname -r)/build, which is correct
when the target machine is also the build machine.
• RTE_DEVEL_BUILD: Stricter options (stop on warning). It defaults to y in a git tree.
• VPATH: The path list that the build system will search for sources. By default,
RTE_SRCDIR will be included in VPATH.
• CFLAGS: Flags to use for C compilation. The user should use += to append data in this
variable.
• LDFLAGS: Flags to use for linking. The user should use += to append data in this vari-
able.
• ASFLAGS: Flags to use for assembly. The user should use += to append data in this
variable.
• CPPFLAGS: Flags to use to give flags to C preprocessor (only useful when assembling
.S files). The user should use += to append data in this variable.
• LDLIBS: In an application, the list of libraries to link with (for example, -L /path/to/libfoo
-lfoo). The user should use += to append data in this variable.
• SRC-y: A list of source files (.c, .S, or .o if the source is a binary) in case of application,
library or object Makefiles. The sources must be available from VPATH.
Variables that can be Set/Overridden by the User on the Command Line Only
Some variables can be used to configure the build system behavior. They are documented in
Development Kit Root Makefile Help and External Application/Library Makefile Help.
• WERROR_CFLAGS: By default, this is set to a specific value that depends on the com-
piler. Users are encouraged to use this variable as follows:
CFLAGS += $(WERROR_CFLAGS)
This avoids the use of different cases depending on the compiler (icc or gcc). Also, this variable
can be overridden from the command line, which allows bypassing of the flags for testing
purposes.
• EXTRA_ASFLAGS: The content of this variable is appended after ASFLAGS when as-
sembling.
• EXTRA_CPPFLAGS: The content of this variable is appended after CPPFLAGS when
using a C preprocessor on assembly files.
The DPDK provides a root level Makefile with targets for configuration, building, cleaning, test-
ing, installation and others. These targets are explained in the following sections.
Configuration Targets
The configuration target requires the name of the target, which is specified using T=mytarget
and it is mandatory. The list of available targets can be found in $(RTE_SDK)/config (remove
the defconfig_ prefix).
Configuration targets also support the specification of the name of the output directory, using
O=mybuilddir. This is an optional parameter, the default output directory is build.
• config
This will create a build directory and generate a configuration from a template. A Make-
file is also created in the new build directory.
Example:
make config O=mybuild T=x86_64-native-linuxapp-gcc
Build Targets
Build targets support the optional specification of the name of the output directory, using
O=mybuilddir. The default output directory is build.
• all, build or just make
Build the DPDK in the output directory previously created by a make config.
Example:
make O=mybuild
• clean
Clean all objects created using make build.
Example:
make clean O=mybuild
• %_sub
Build a subdirectory only, without managing dependencies on other directories.
Example:
make lib/librte_eal_sub O=mybuild
• %_clean
Clean a subdirectory only.
Example:
make lib/librte_eal_clean O=mybuild
Install Targets
• install
The list of available targets can be found in $(RTE_SDK)/config (remove the defconfig_ prefix).
The GNU standards variables may be used: http://gnu.org/prep/standards/html_node/Directory-Variables.html
and http://gnu.org/prep/standards/html_node/DESTDIR.html
Example:
make install DESTDIR=myinstall prefix=/usr
Test Targets
• test
Launch automatic tests for a build directory specified using O=mybuilddir. This is optional;
the default output directory is build.
Example:
make test O=mybuild
Documentation Targets
• doc
Generate the documentation (API and guides).
• doc-api-html
Generate the Doxygen API documentation in html.
• doc-guides-html
Generate the guides documentation in html.
• doc-guides-pdf
Generate the guides documentation in pdf.
Deps Targets
• depdirs
This target is implicitly called by make config. Typically, there is no need for a user to call
it, except if DEPDIRS-y variables have been updated in Makefiles. It will generate the file
$(RTE_OUTPUT)/.depdirs.
Example:
make depdirs O=mybuild
• depgraph
This command generates a dot graph of dependencies. It can be displayed to debug
circular dependency issues, or just to understand the dependencies.
Example:
make depgraph O=mybuild > /tmp/graph.dot && dotty /tmp/graph.dot
Misc Targets
• help
Show a quick help.
All targets described above are called from the SDK root $(RTE_SDK). It is possible to run the
same Makefile targets inside the build directory. For instance, the following command:
cd $(RTE_SDK)
make config O=mybuild T=x86_64-native-linuxapp-gcc
make O=mybuild
is equivalent to:
cd $(RTE_SDK)
make config O=mybuild T=x86_64-native-linuxapp-gcc
cd mybuild
make
To compile the DPDK and sample applications with debugging information included and the
optimization level set to 0, the EXTRA_CFLAGS environment variable should be set before
compiling as follows:
export EXTRA_CFLAGS='-O0 -g'
This chapter describes how a developer can extend the DPDK to provide a new library or
support a new target.
1. Create the new library sources, for example in ${RTE_SDK}/lib/libfoo/, providing the foo()
function. The declaration is in foo.h:
extern void foo(void);
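For reference, a matching definition in foo.c can be as minimal as the following sketch (the
empty body is only a placeholder for the real library code):
/* foo.c: minimal definition matching the declaration in foo.h */
#include "foo.h"

void foo(void)
{
}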
2. Update lib/Makefile:
vi ${RTE_SDK}/lib/Makefile
# add:
# DIRS-$(CONFIG_RTE_LIBFOO) += libfoo
3. Create a new Makefile for this library, for example, derived from mempool Makefile:
cp ${RTE_SDK}/lib/librte_mempool/Makefile ${RTE_SDK}/lib/libfoo/
vi ${RTE_SDK}/lib/libfoo/Makefile
# replace:
# librte_mempool -> libfoo
# rte_mempool -> foo
4. Update mk/rte.app.mk, and add -lfoo in the LDLIBS variable when the option is enabled.
This will automatically add this flag when linking a DPDK application.
5. Build the DPDK with the new library (we only show a specific target here):
cd ${RTE_SDK}
make config T=x86_64-native-linuxapp-gcc
make
The test application is used to validate all functionality of the DPDK. Once you have added a
library, a new test case should be added in the test application.
• A new test_foo.c file should be added that includes foo.h and calls the foo() function from
test_foo(); a minimal sketch is shown after this list. When the test passes, the test_foo()
function should return 0.
• Makefile, test.h and commands.c must be updated also, to handle the new test case.
• Test report generation: autotest.py is a script that is used to generate the test re-
port that is available in the ${RTE_SDK}/doc/rst/test_report/autotests directory. This
script must be updated also. If libfoo is in a new test family, the links in
${RTE_SDK}/doc/rst/test_report/test_report.rst must be updated.
• Build the DPDK with the updated test application (we only show a specific target here):
cd ${RTE_SDK}
make config T=x86_64-native-linuxapp-gcc
make
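As an illustration only, a minimal test_foo.c might look like the sketch below; the exact test
registration depends on the updates to Makefile, test.h and commands.c mentioned above:
/* test_foo.c: minimal sketch of an autotest for libfoo */
#include "foo.h"
#include "test.h"

static int
test_foo(void)
{
        foo();        /* exercise the new library */
        return 0;     /* returning 0 means the autotest passed */
}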
When compiling a sample application (for example, hello world), the following variables must
be exported: RTE_SDK and RTE_TARGET.
~/DPDK$ cd examples/helloworld/
~/DPDK/examples/helloworld$ export RTE_SDK=/home/user/DPDK
~/DPDK/examples/helloworld$ export RTE_TARGET=x86_64-native-linuxapp-gcc
~/DPDK/examples/helloworld$ make
CC main.o
LD helloworld
INSTALL-APP helloworld
INSTALL-MAP helloworld.map
The sample application (Hello World) can be duplicated in a new directory as a starting point
for your development:
~$ cp -r DPDK/examples/helloworld my_rte_app
~$ cd my_rte_app/
~/my_rte_app$ export RTE_SDK=/home/user/DPDK
~/my_rte_app$ export RTE_TARGET=x86_64-native-linuxapp-gcc
~/my_rte_app$ make
CC main.o
LD helloworld
INSTALL-APP helloworld
INSTALL-MAP helloworld.map
Customizing Makefiles
Application Makefile
The default makefile provided with the Hello World sample application is a good starting point.
It includes:
• $(RTE_SDK)/mk/rte.vars.mk at the beginning
• $(RTE_SDK)/mk/rte.extapp.mk at the end
The user must define several variables:
• APP: Contains the name of the application.
• SRCS-y: List of source files (*.c, *.S).
Library Makefile
It is also possible to build a library in the same way: include $(RTE_SDK)/mk/rte.vars.mk at the
beginning and $(RTE_SDK)/mk/rte.extlib.mk at the end. The only difference is that APP is
replaced by LIB, which contains the name of the library, for example, libfoo.a.
Customize Makefile Actions
Some variables can be defined to customize Makefile actions. The most common are listed
below. Refer to the Makefile Description section in the Development Kit Build System chapter
for details.
• VPATH: The path list where the build system will search for sources. By default,
RTE_SRCDIR will be included in VPATH.
• CFLAGS_my_file.o: The specific flags to add for C compilation of my_file.c.
• CFLAGS: The flags to use for C compilation.
• LDFLAGS: The flags to use for linking.
• CPPFLAGS: The flags to use to provide flags to the C preprocessor (only useful when
assembling .S files).
• LDLIBS: A list of libraries to link with (for example, -L /path/to/libfoo -lfoo).
External applications or libraries should include specific Makefiles from RTE_SDK, located in
mk directory. These Makefiles are:
• ${RTE_SDK}/mk/rte.extapp.mk: Build an application
• ${RTE_SDK}/mk/rte.extlib.mk: Build a static library
• ${RTE_SDK}/mk/rte.extobj.mk: Build objects (.o)
Prerequisites
The RTE_SDK (pointing to the root directory of the DPDK) and RTE_TARGET (the compilation
target) environment variables must be exported before building an external application or
library.
Build Targets
Build targets support the specification of the name of the output directory, using O=mybuilddir.
This is optional; the default output directory is build.
• all, “nothing” (meaning just make)
Build the application or the library in the specified output directory.
Example:
make O=mybuild
• clean
Clean all objects created using make build.
Example:
make clean O=mybuild
Help Targets
• help
Show this help.
Other Useful Command-line Variables
• M=
Specify the Makefile to call once the output directory is created. By default, it uses
$(S)/Makefile.
• V=
Enable verbose build (show full compilation command line and some intermediate com-
mands).
• D=
Enable dependency debugging. This provides some useful information about why a tar-
get must be rebuilt or not.
• EXTRA_CFLAGS=, EXTRA_LDFLAGS=, EXTRA_ASFLAGS=, EXTRA_CPPFLAGS=
Append specific compilation, link or asm flags.
• CROSS=
Specify a cross-toolchain header that will prefix all gcc/binutils applications. This only
works when using gcc.
It is possible to run the Makefile from another directory, by specifying the output and the source
directories. For example:
export RTE_SDK=/path/to/DPDK
export RTE_TARGET=x86_64-native-linuxapp-icc
make -f /path/to/my_app/Makefile S=/path/to/my_app O=/path/to/build_dir
Introduction
The following sections describe optimizations used in the DPDK and optimizations that should
be considered for new applications.
They also highlight the performance-impacting coding techniques that should, and should not,
be used when developing an application using the DPDK.
And finally, they give an introduction to application profiling using a Performance Analyzer from
Intel to optimize the software.
This chapter provides some tips for developing efficient code using the DPDK. For additional
and more general information, please refer to the Intel® 64 and IA-32 Architectures Optimiza-
tion Reference Manual which is a valuable reference to writing efficient code.
Memory
This section describes some key memory considerations when developing applications in the
DPDK environment.
Many libc functions are available in the DPDK, via the Linux* application environment. This
can ease the porting of applications and the development of the configuration plane. However,
many of these functions are not designed for performance. Functions such as memcpy() or
strcpy() should not be used in the data plane. To copy small structures, the preference is for
a simpler technique that can be optimized by the compiler. Refer to the VTune™ Performance
Analyzer Essentials publication from Intel Press for recommendations.
For specific functions that are called often, it is also a good idea to provide a self-made opti-
mized function, which should be declared as static inline.
The DPDK API provides an optimized rte_memcpy() function.
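As a rough illustration (the structure and helper names below are hypothetical), a small fixed-
size structure can be copied with a plain assignment inside a static inline helper, while
rte_memcpy() can be used for larger buffers:
#include <stdint.h>
#include <stddef.h>
#include <rte_memcpy.h>

/* Hypothetical small structure copied in the data plane. */
struct flow_key {
        uint32_t ip_src;
        uint32_t ip_dst;
        uint16_t port_src;
        uint16_t port_dst;
};

/* Small fixed-size copy: a plain assignment lets the compiler optimize it. */
static inline void
copy_flow_key(struct flow_key *dst, const struct flow_key *src)
{
        *dst = *src;
}

/* Larger buffers: use the optimized DPDK copy instead of libc memcpy(). */
static inline void
copy_payload(void *dst, const void *src, size_t len)
{
        rte_memcpy(dst, src, len);
}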
Memory Allocation
Other functions of libc, such as malloc(), provide a flexible way to allocate and free memory. In
some cases, using dynamic allocation is necessary, but it is really not advised to use malloc-
like functions in the data plane because managing a fragmented heap can be costly and the
allocator may not be optimized for parallel allocation.
If you really need dynamic allocation in the data plane, it is better to use a memory pool of
fixed-size objects. This API is provided by librte_mempool. This data structure provides several
services that increase performance, such as memory alignment of objects, lockless access to
objects, NUMA awareness, bulk get/put and per-lcore cache. The rte_malloc() function uses
a similar concept to mempools.
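A minimal sketch of this approach is shown below; the pool name, object type and sizes are
illustrative only and error handling is trimmed:
#include <stdint.h>
#include <rte_mempool.h>
#include <rte_lcore.h>

struct my_obj { uint64_t data[8]; };    /* hypothetical fixed-size object */

static struct rte_mempool *obj_pool;

static void
create_pool(void)
{
        obj_pool = rte_mempool_create("my_obj_pool",
                        8191,                   /* number of elements */
                        sizeof(struct my_obj),  /* element size */
                        256,                    /* per-lcore cache size */
                        0, NULL, NULL, NULL, NULL,
                        rte_socket_id(),        /* NUMA-local allocation */
                        0);
}

static struct my_obj *
obj_alloc(void)
{
        void *obj;

        if (rte_mempool_get(obj_pool, &obj) < 0)
                return NULL;                    /* pool exhausted */
        return obj;
}

static void
obj_free(struct my_obj *obj)
{
        rte_mempool_put(obj_pool, obj);
}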
Concurrent Access to the Same Memory Area
Read-Write (RW) access operations by several lcores to the same memory area can generate
a lot of data cache misses, which are very costly. It is often possible to use per-lcore variables,
for example, in the case of statistics. There are at least two solutions for this:
• Use RTE_PER_LCORE variables. Note that in this case, data on lcore X is not available
to lcore Y.
• Use a table of structures (one per lcore). In this case, each structure must be cache-
aligned.
Read-mostly variables can be shared among lcores without performance losses if there are no
RW variables in the same cache line.
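Both approaches are sketched below (the variable and structure names are illustrative); the first
keeps the counter private to each lcore, the second keeps one cache-aligned entry per lcore in
a shared table:
#include <stdint.h>
#include <rte_per_lcore.h>
#include <rte_lcore.h>
#include <rte_memory.h>

/* Option 1: RTE_PER_LCORE variable; data on lcore X is not visible to lcore Y. */
RTE_DEFINE_PER_LCORE(uint64_t, rx_pkts);

static inline void
count_rx(uint64_t n)
{
        RTE_PER_LCORE(rx_pkts) += n;
}

/* Option 2: a table of cache-aligned structures, one entry per lcore. */
struct lcore_stats {
        uint64_t rx_pkts;
        uint64_t tx_pkts;
} __rte_cache_aligned;

static struct lcore_stats stats[RTE_MAX_LCORE];

static inline void
count_tx(uint64_t n)
{
        stats[rte_lcore_id()].tx_pkts += n;
}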
NUMA
On a NUMA system, it is preferable to access local memory since remote memory access
is slower. In the DPDK, the memzone, ring, rte_malloc and mempool APIs provide a way to
create a pool on a specific socket.
Sometimes, it can be a good idea to duplicate data to optimize speed. For read-mostly vari-
ables that are often accessed, it should not be a problem to keep them in one socket only,
since data will be present in cache.
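For instance, a NUMA-local buffer can be allocated as in this brief sketch (the type string and
helper name are arbitrary):
#include <stddef.h>
#include <rte_malloc.h>
#include <rte_memory.h>
#include <rte_lcore.h>

/* Allocate cache-line aligned memory on the socket of the calling lcore. */
static void *
alloc_local(size_t size)
{
        return rte_malloc_socket("local_buf", size, RTE_CACHE_LINE_SIZE,
                        rte_socket_id());
}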
Modern memory controllers have several memory channels that can load or store data in par-
allel. Depending on the memory controller and its configuration, the number of channels and
the way the memory is distributed across the channels varies. Each channel has a bandwidth
limit, meaning that if all memory access operations are done on the first channel only, there is
a potential bottleneck.
By default, the Mempool Library spreads the addresses of objects among memory channels.
Communication Between lcores
To provide message-based communication between lcores, it is advised to use the DPDK ring
API, which provides a lockless ring implementation. The ring supports bulk and burst access,
meaning that several elements can be read from the ring with only one costly atomic operation.
Performance is greatly improved when using bulk access operations. The dequeue loop may
look similar to the following:
#define MAX_BULK 32

while (1) {
        /* Process as many elements as can be dequeued. */
        count = rte_ring_dequeue_burst(ring, obj_table, MAX_BULK);
        if (unlikely(count == 0))
                continue;

        my_process_bulk(obj_table, count);
}
PMD Driver
The DPDK Poll Mode Driver (PMD) is also able to work in bulk/burst mode, allowing the factor-
ization of some code for each call in the send or receive function.
Avoid partial writes. When PCI devices write to system memory through DMA, it costs less if
the write operation is on a full cache line as opposed to part of it. In the PMD code, actions
have been taken to avoid partial writes as much as possible.
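The following sketch shows the typical bulk receive/transmit pattern with the ethdev burst API;
the port and queue ids and the per-burst processing step are placeholders:
#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_branch_prediction.h>

#define BURST_SIZE 16

static void
fwd_loop(uint8_t rx_port, uint8_t tx_port)
{
        struct rte_mbuf *pkts[BURST_SIZE];
        uint16_t nb_rx, nb_tx, i;

        for (;;) {
                nb_rx = rte_eth_rx_burst(rx_port, 0, pkts, BURST_SIZE);
                if (unlikely(nb_rx == 0))
                        continue;

                /* ... per-burst packet processing would go here ... */

                nb_tx = rte_eth_tx_burst(tx_port, 0, pkts, nb_rx);

                /* Free any packets that could not be transmitted. */
                for (i = nb_tx; i < nb_rx; i++)
                        rte_pktmbuf_free(pkts[i]);
        }
}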
Traditionally, there is a trade-off between throughput and latency. An application can be tuned
to achieve a high throughput, but the end-to-end latency of an average packet will typically
increase as a result. Similarly, the application can be tuned to have, on average, a low end-to-
end latency, at the cost of lower throughput.
In order to achieve higher throughput, the DPDK attempts to aggregate the cost of processing
each packet individually by processing packets in bursts.
Using the testpmd application as an example, the burst size can be set on the command line
to a value of 16 (also the default value). This allows the application to request 16 packets at
a time from the PMD. The testpmd application then immediately attempts to transmit all the
packets that were received, in this case, all 16 packets.
The packets are not transmitted until the tail pointer is updated on the corresponding TX queue
of the network port. This behavior is desirable when tuning for high throughput because the
cost of tail pointer updates to both the RX and TX queues can be spread across 16 packets,
effectively hiding the relatively slow MMIO cost of writing to the PCIe* device. However, this
is not very desirable when tuning for low latency because the first packet that was received
must also wait for another 15 packets to be received. It cannot be transmitted until the other
15 packets have also been processed because the NIC will not know to transmit the packets
until the TX tail pointer has been updated, which is not done until all 16 packets have been
processed for transmission.
To consistently achieve low latency, even under heavy system load, the application developer
should avoid processing packets in bunches. The testpmd application can be configured from
the command line to use a burst value of 1. This will allow a single packet to be processed at
a time, providing lower latency, but with the added cost of lower throughput.
Locks and Atomic Operations
Atomic operations imply a lock prefix before the instruction, causing the processor's LOCK#
signal to be asserted during execution of the following instruction. This has a big impact on
performance in a multicore environment.
Performance can be improved by avoiding lock mechanisms in the data plane. It can often be
replaced by other solutions like per-lcore variables. Also, some locking techniques are more
efficient than others. For instance, the Read-Copy-Update (RCU) algorithm can frequently
replace simple rwlocks.
Coding Considerations
Inline Functions
Small functions can be declared as static inline in the header file. This avoids the cost of a call
instruction (and the associated context saving). However, this technique is not always efficient;
it depends on many factors including the compiler.
Branch Prediction
The Intel® C/C++ Compiler (icc)/gcc built-in helper functions likely() and unlikely() allow the
developer to indicate if a code branch is likely to be taken or not. For instance:
if (likely(x > 1))
do_stuff();
Setting the Target CPU Type
The DPDK supports CPU microarchitecture-specific optimizations by means of the
CONFIG_RTE_MACHINE option in the DPDK configuration file. The degree of optimization
depends on the compiler's ability to optimize for a specific microarchitecture, therefore it is
preferable to use the latest compiler versions whenever possible.
If the compiler version does not support the specific feature set (for example, the Intel® AVX in-
struction set), the build process gracefully degrades to whatever latest feature set is supported
by the compiler.
Since the build and runtime targets may not be the same, the resulting binary also contains
a platform check that runs before the main() function and checks if the current machine is
suitable for running the binary.
Along with compiler optimizations, a set of preprocessor defines are automatically added to the
build process (regardless of the compiler version). These defines correspond to the instruc-
tion sets that the target CPU should be able to support. For example, a binary compiled for
any SSE4.2-capable processor will have RTE_MACHINE_CPUFLAG_SSE4_2 defined, thus
enabling compile-time code path selection for different platforms.
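As a hedged illustration, such a define can be used to select a code path at compile time; the
non-SSE4.2 body below is only a placeholder for a portable software implementation:
#include <stdint.h>

#ifdef RTE_MACHINE_CPUFLAG_SSE4_2
#include <nmmintrin.h>

/* Use the SSE4.2 CRC32 instruction when the target supports it. */
static inline uint32_t
calc_crc32(uint32_t init, uint32_t data)
{
        return _mm_crc32_u32(init, data);
}
#else

/* Placeholder fallback; a real build would provide a software CRC32 here. */
static inline uint32_t
calc_crc32(uint32_t init, uint32_t data)
{
        return init ^ data;
}
#endif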
The following sections describe methods of profiling DPDK applications on different architec-
tures.
Profiling on x86
Intel processors provide performance counters to monitor events. Some tools provided by
Intel, such as VTune, can be used to profile and benchmark an application. See the VTune
Performance Analyzer Essentials publication from Intel Press for more information.
For a DPDK application, this can be done in a Linux* application environment only.
The main situations that should be monitored through event counters are:
• Cache misses
• Branch mis-predicts
• DTLB misses
• Long latency instructions and exceptions
Refer to the Intel Performance Analysis Guide for details about application profiling.
Profiling on ARM64
The ARM64 architecture provides performance counters to monitor events. The Linux perf
tool can be used to profile and benchmark an application. In addition to the standard events,
perf can be used to profile arm64 specific PMU (Performance Monitor Unit) events through
raw events (-e rXX).
For more details refer to the ARM64 specific PMU events enumeration.
The default cntvct_el0 based rte_rdtsc() provides a portable means to get a wall clock
counter in user space. Typically it runs at <= 100MHz.
The alternative method to enable rte_rdtsc() for a high resolution wall clock counter is
through the armv8 PMU subsystem. The PMU cycle counter runs at CPU frequency. However,
access to the PMU cycle counter from user space is not enabled by default in the arm64 linux
kernel. It is possible to enable cycle counter for user space access by configuring the PMU
from the privileged mode (kernel space).
By default the rte_rdtsc() implementation uses a portable cntvct_el0
scheme. The application can choose the PMU-based implementation with
CONFIG_RTE_ARM_EAL_RDTSC_USE_PMU.
The example below shows the steps to configure the PMU based cycle counter on an armv8
machine.
git clone https://github.com/jerinjacobk/armv8_pmu_cycle_counter_el0
cd armv8_pmu_cycle_counter_el0
make
sudo insmod pmu_el0_cycle_counter.ko
cd $DPDK_DIR
make config T=arm64-armv8a-linuxapp-gcc
echo "CONFIG_RTE_ARM_EAL_RDTSC_USE_PMU=y" >> build/.config
make
Warning: The PMU based scheme is useful for high accuracy performance profiling with
rte_rdtsc(). However, this method cannot be used in conjunction with Linux userspace
profiling tools like perf as this scheme alters the PMU registers state.
Glossary
Data Plane In contrast to the control plane, the data plane in a network architecture comprises
the layers involved in forwarding packets. These layers must be highly optimized to achieve
good performance.
DIMM Dual In-line Memory Module
Doxygen A documentation generator used in the DPDK to generate the API reference.
DPDK Data Plane Development Kit
DRAM Dynamic Random Access Memory
EAL The Environment Abstraction Layer (EAL) provides a generic interface that hides the
environment specifics from the applications and libraries. The services expected from the
EAL are: development kit loading and launching, core affinity/assignment procedures,
system memory allocation/description, PCI bus access, inter-partition communication.
FIFO First In First Out
FPGA Field Programmable Gate Array
GbE Gigabit Ethernet
HW Hardware
HPET High Precision Event Timer; a hardware timer that provides a precise time reference on
x86 platforms.
ID Identifier
IOCTL Input/Output Control
I/O Input/Output
IP Internet Protocol
IPv4 Internet Protocol version 4
IPv6 Internet Protocol version 6
lcore A logical execution unit of the processor, sometimes called a hardware thread.
KNI Kernel Network Interface
L1 Layer 1
L2 Layer 2
L3 Layer 3
L4 Layer 4
LAN Local Area Network
LPM Longest Prefix Match
master lcore The execution unit that executes the main() function and that launches other
lcores.
mbuf An mbuf is a data structure used internally to carry messages (mainly network packets).
The name is derived from BSD stacks. To understand the concepts of packet buffers or
mbuf, refer to TCP/IP Illustrated, Volume 2: The Implementation.
MESI Modified Exclusive Shared Invalid (CPU cache coherency protocol)
Figures
Fig. 4.55 Prefetch Pipeline for the Hierarchical Scheduler Enqueue Operation
Fig. 4.56 Pipe Prefetch State Machine for the Hierarchical Scheduler Dequeue Operation
Fig. 4.57 High-level Block Diagram of the DPDK Dropper
Fig. 4.58 Flow Through the Dropper
Fig. 4.59 Example Data Flow Through Dropper
Fig. 4.60 Packet Drop Probability for a Given RED Configuration
Fig. 4.61 Initial Drop Probability (pb), Actual Drop probability (pa) Computed Using a Factor 1
(Blue Curve) and a Factor 2 (Red Curve)
Fig. 4.62 Example of Packet Processing Pipeline where Input Ports 0 and 1 are Connected
with Output Ports 0, 1 and 2 through Tables 0 and 1
Fig. 4.63 Sequence of Steps for Hash Table Operations in a Packet Processing Context
Fig. 4.64 Data Structures for Configurable Key Size Hash Tables
Fig. 4.65 Bucket Search Pipeline for Key Lookup Operation (Configurable Key Size Hash
Tables)
Fig. 4.66 Data Structures for 8-byte Key Hash Tables
Fig. 4.67 Data Structures for 16-byte Key Hash Tables
Fig. 4.68 Bucket Search Pipeline for Key Lookup Operation (Single Key Size Hash Tables)
Fig. 4.30 Load Balancing Using Front End Node
Fig. 4.31 Consistent Hashing
Fig. 4.32 Table Based Flow Distribution
Fig. 4.33 Searching for Perfect Hash Function
Fig. 4.34 Divide and Conquer for Millions of Keys
Fig. 4.35 EFD as a Flow-Level Load Balancer
Fig. 4.36 Group Assignment
Fig. 4.37 Perfect Hash Search - Assigned Keys & Target Value
Fig. 4.38 Perfect Hash Search - Satisfy Target Values
Fig. 4.39 Finding Hash Index for Conflict Free lookup_table
Fig. 4.40 EFD Lookup Operation
Tables
Table 4.51 Packet Processing Pipeline Implementing QoS
Table 4.52 Infrastructure Blocks Used by the Packet Processing Pipeline
Table 4.53 Port Scheduling Hierarchy
Table 4.54 Scheduler Internal Data Structures per Port
Table 4.55 Ethernet Frame Overhead Fields
Table 4.56 Token Bucket Generic Operations
HowTo Guides
Overview
It is not possible to migrate a Virtual Machine which has an SR-IOV Virtual Function (VF).
To get around this problem, the bonding PMD is used.
The following sections show an example of how to do this.
Test Setup
A bonded device is created in the VM. The virtio and VF PMDs are added as slaves to the
bonded device. The VF is set as the primary slave of the bonded device.
A bridge must be set up on the Host connecting the tap device, which is the backend of the
Virtio device and the Physical Function (PF) device.
To test the Live Migration two servers with identical operating systems installed are used. KVM
and QEMU 2.3 are also required on the servers.
In this example, the servers have Niantic and/or Fortville NICs installed. The NICs on both
servers are connected to a switch which is also connected to the traffic generator.
The switch is configured to broadcast traffic on all the NIC ports. A sample switch configuration
can be found in this section.
The host is running the Kernel PF driver (ixgbe or i40e).
The IP address of host_server_1 is 10.237.212.46
The IP address of host_server_2 is 10.237.212.131
The sample scripts mentioned in the steps below can be found in the Sample host scripts and
Sample VM scripts sections.
On host_server_1: Terminal 1
cd /root/dpdk/host_scripts
./setup_vf_on_212_46.sh
On host_server_1: Terminal 2
cd /root/dpdk/host_scripts
./setup_bridge_on_212_46.sh
./connect_to_qemu_mon_on_host.sh
(qemu)
On host_server_1: Terminal 1
In VM on host_server_1:
cd /root/dpdk/vm_scripts
./setup_dpdk_in_vm.sh
./run_testpmd_bonding_in_vm.sh
Port close should remove the VF MAC address; it does not remove perm_addr.
The mac_addr command only works with the kernel PF for Niantic.
testpmd> mac_addr remove 1 AA:BB:CC:DD:EE:FF
testpmd> port detach 1
Port '0000:00:04.0' is detached. Now total ports is 2
testpmd> show port stats all
On host_server_1: Terminal 2
On host_server_1: Terminal 1
In VM on host_server_1:
testpmd> show bonding config 2
On host_server_2: Terminal 1
cd /root/dpdk/host_scripts
./setup_vf_on_212_131.sh
./vm_virtio_one_migrate.sh
On host_server_2: Terminal 2
./setup_bridge_on_212_131.sh
./connect_to_qemu_mon_on_host.sh
(qemu) info status
VM status: paused (inmigrate)
(qemu)
On host_server_1: Terminal 2
On host_server_2: Terminal 1
In VM on host_server_2:
Hit Enter key. This brings the user to the testpmd prompt.
testpmd>
On host_server_2: Terminal 2
On host_server_2: Terminal 1
In VM on host_server_2:
testpmd> show port info all
testpmd> show port stats all
testpmd> show bonding config 2
testpmd> port attach 0000:00:04.0
Port 1 is attached.
Now total ports is 3
Done
The mac_addr command only works with the Kernel PF for Niantic.
testpmd> mac_addr add port 1 vf 0 AA:BB:CC:DD:EE:FF
testpmd> show port stats all
testpmd> show config fwd
testpmd> show bonding config 2
testpmd> add bonding slave 1 2
testpmd> set bonding primary 1 2
testpmd> show bonding config 2
testpmd> show port stats all
setup_vf_on_212_46.sh
# set up Niantic VF
cat /sys/bus/pci/devices/0000\:09\:00.0/sriov_numvfs
echo 1 > /sys/bus/pci/devices/0000\:09\:00.0/sriov_numvfs
cat /sys/bus/pci/devices/0000\:09\:00.0/sriov_numvfs
rmmod ixgbevf
# set up Fortville VF
cat /sys/bus/pci/devices/0000\:02\:00.0/sriov_numvfs
echo 1 > /sys/bus/pci/devices/0000\:02\:00.0/sriov_numvfs
cat /sys/bus/pci/devices/0000\:02\:00.0/sriov_numvfs
rmmod i40evf
vm_virtio_vf_one_212_46.sh
# Memory
MEM=1536
setup_bridge_on_212_46.sh
ifconfig ens3f0 up
ifconfig tap1 up
ifconfig ens6f0 up
ifconfig virbr0 up
connect_to_qemu_mon_on_host.sh
#!/bin/sh
# This script is run on both hosts when the VM is up,
# to connect to the Qemu Monitor.
telnet 0 3333
setup_vf_on_212_131.sh
# set up Niantic VF
cat /sys/bus/pci/devices/0000\:06\:00.0/sriov_numvfs
echo 1 > /sys/bus/pci/devices/0000\:06\:00.0/sriov_numvfs
cat /sys/bus/pci/devices/0000\:06\:00.0/sriov_numvfs
rmmod ixgbevf
# set up Fortville VF
cat /sys/bus/pci/devices/0000\:03\:00.0/sriov_numvfs
echo 1 > /sys/bus/pci/devices/0000\:03\:00.0/sriov_numvfs
cat /sys/bus/pci/devices/0000\:03\:00.0/sriov_numvfs
rmmod i40evf
vm_virtio_one_migrate.sh
KVM_PATH="/usr/bin/qemu-system-x86_64"
# Memory
MEM=1536
setup_bridge_on_212_131.sh
ifconfig ens4f0 up
ifconfig tap1 up
ifconfig ens5f0 up
ifconfig virbr0 up
Sample VM scripts
setup_dpdk_in_vm.sh
#!/bin/sh
# this script matches the vm_virtio_vf_one script
# virtio port is 03
# vf port is 04
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
ifconfig -a
/root/dpdk/usertools/dpdk-devbind.py --status
modprobe uio
insmod /root/dpdk/x86_64-default-linuxapp-gcc/kmod/igb_uio.ko
/root/dpdk/usertools/dpdk-devbind.py --status
run_testpmd_bonding_in_vm.sh
# The test system has 8 cpus (0-7), use cpus 2-7 for VM
# Use taskset -pc <core number> <thread_id>
/root/dpdk/x86_64-default-linuxapp-gcc/app/testpmd \
-c f -n 4 --socket-mem 350 -- --i --port-topology=chained
The Intel switch is used to connect the traffic generator to the NICs on host_server_1 and
host_server_2.
In order to run the switch configuration two console windows are required.
Log in as root in both windows.
TestPointShared, run_switch.sh and load /root/switch_config must be executed in the sequence
below.
On Switch: Terminal 1
run TestPointShared
/usr/bin/TestPointShared
On Switch: Terminal 2
execute run_switch.sh
/root/run_switch.sh
On Switch: Terminal 1
Overview
Live Migration of a VM with DPDK Virtio PMD on a host which is running the Vhost sample
application (vhost-switch) and using the DPDK PMD (ixgbe or i40e).
The Vhost sample application uses VMDQ so SRIOV must be disabled on the NICs.
The following sections show an example of how to do this migration.
Test Setup
To test the Live Migration two servers with identical operating systems installed are used. KVM
and QEMU are also required on the servers.
QEMU 2.5 is required for Live Migration of a VM with vhost_user running on the hosts.
In this example, the servers have Niantic and/or Fortville NICs installed. The NICs on both
servers are connected to a switch which is also connected to the traffic generator.
The switch is configured to broadcast traffic on all the NIC ports.
The IP address of host_server_1 is 10.237.212.46
The IP address of host_server_2 is 10.237.212.131
The sample scripts mentioned in the steps below can be found in the Sample host scripts and
Sample VM scripts sections.
On host_server_1: Terminal 1
On host_server_1: Terminal 2
On host_server_1: Terminal 3
For Fortville and Niantic NICs, reset SRIOV and run the vhost_user sample application (vhost-
switch) on host_server_1.
cd /root/dpdk/host_scripts
./reset_vf_on_212_46.sh
./run_vhost_switch_on_host.sh
On host_server_1: Terminal 1
On host_server_1: Terminal 4
On host_server_1: Terminal 1
In VM on host_server_1:
Setup DPDK in the VM and run testpmd in the VM.
cd /root/dpdk/vm_scripts
./setup_dpdk_in_vm.sh
./run_testpmd_in_vm.sh
On host_server_2: Terminal 1
On host_server_2: Terminal 2
On host_server_2: Terminal 3
For Fortville and Niantic NICs, reset SRIOV and run the vhost_user sample application on
host_server_2.
cd /root/dpdk/host_scripts
./reset_vf_on_212_131.sh
./run_vhost_switch_on_host.sh
On host_server_2: Terminal 1
On host_server_2: Terminal 4
On host_server_1: Terminal 4
On host_server_2: Terminal 1
In VM on host_server_2:
Hit Enter key. This brings the user to the testpmd prompt.
testpmd>
On host_server_2: Terminal 4
On host_server_2: Terminal 1
In VM on host_server_2:
testpmd> show port info all
testpmd> show port stats all
reset_vf_on_212_46.sh
#!/bin/sh
# This script is run on the host 10.237.212.46 to reset SRIOV
vm_virtio_vhost_user.sh
#!/bin/sh
# Script for use with vhost_user sample application
# The host system has 8 cpu's (0-7)
# Memory
MEM=1024
VIRTIO_OPTIONS="csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off"
# Socket Path
SOCKET_PATH="/root/dpdk/host_scripts/usvhost"
-nographic \
-hda $DISK_IMG \
-chardev socket,id=chr0,path=$SOCKET_PATH \
-netdev type=vhost-user,id=net1,chardev=chr0,vhostforce \
-device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
-chardev socket,id=chr1,path=$SOCKET_PATH \
-netdev type=vhost-user,id=net2,chardev=chr1,vhostforce \
-device virtio-net-pci,netdev=net2,mac=DD:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
-monitor telnet::3333,server,nowait
connect_to_qemu_mon_on_host.sh
#!/bin/sh
# This script is run on both hosts when the VM is up,
# to connect to the Qemu Monitor.
telnet 0 3333
reset_vf_on_212_131.sh
#!/bin/sh
# This script is run on the host 10.237.212.131 to reset SRIOV
vm_virtio_vhost_user_migrate.sh
#!/bin/sh
# Script for use with vhost user sample application
# The host system has 8 cpu's (0-7)
# Memory
MEM=1024
VIRTIO_OPTIONS="csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off"
# Socket Path
SOCKET_PATH="/root/dpdk/host_scripts/usvhost"
-smp $VCPUS_NR \
-object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on \
-numa node,memdev=mem,nodeid=0 \
-cpu host \
-name VM1 \
-no-reboot \
-net none \
-vnc none \
-nographic \
-hda $DISK_IMG \
-chardev socket,id=chr0,path=$SOCKET_PATH \
-netdev type=vhost-user,id=net1,chardev=chr0,vhostforce \
-device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
-chardev socket,id=chr1,path=$SOCKET_PATH \
-netdev type=vhost-user,id=net2,chardev=chr1,vhostforce \
-device virtio-net-pci,netdev=net2,mac=DD:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
-incoming tcp:0:5555 \
-monitor telnet::3333,server,nowait
Sample VM scripts
setup_dpdk_virtio_in_vm.sh
#!/bin/sh
# this script matches the vm_virtio_vhost_user script
# virtio port is 03
# virtio port is 04
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
ifconfig -a
/root/dpdk/usertools/dpdk-devbind.py --status
rmmod virtio-pci
modprobe uio
insmod /root/dpdk/x86_64-default-linuxapp-gcc/kmod/igb_uio.ko
/root/dpdk/usertools/dpdk-devbind.py --status
run_testpmd_in_vm.sh
#!/bin/sh
# Run testpmd for use with vhost_user sample app.
# test system has 8 cpus (0-7), use cpus 2-7 for VM
/root/dpdk/x86_64-default-linuxapp-gcc/app/testpmd \
-c 3f -n 4 --socket-mem 350 -- --burst=64 --i --disable-hw-vlan-filter
Flow Bifurcation is a mechanism which uses hardware capable Ethernet devices to split traffic
between Linux user space and kernel space. Since it is a hardware-assisted feature, this
approach can provide line rate processing capability. Unlike KNI, the software only needs to
enable the device configuration; there is no need to take care of packet movement during the
traffic split. This can yield better performance with less CPU overhead.
The Flow Bifurcation splits the incoming data traffic to user space applications (such as DPDK
applications) and/or kernel space programs (such as the Linux kernel stack). It can direct some
traffic, for example data plane traffic, to DPDK, while directing some other traffic, for example
control plane traffic, to the traditional Linux networking stack.
There are a number of technical options to achieve this. A typical example is to combine the
technology of SR-IOV and packet classification filtering.
SR-IOV is a PCI standard that allows the same physical adapter to be split into multiple virtual
functions. Each virtual function (VF) has its own queues, separate from those of the physical
function (PF). The network adapter will direct traffic to a virtual function with a matching
destination MAC address. In a sense, SR-IOV has the capability for queue division.
Packet classification filtering is a hardware capability available on most network adapters. Fil-
ters can be configured to direct specific flows to a given receive queue by hardware. Different
NICs may have different filter types to direct flows to a Virtual Function or a queue that belongs
to it.
In this way the Linux networking stack can receive specific traffic through the kernel driver while
a DPDK application can receive specific traffic bypassing the Linux kernel by using drivers like
VFIO or the DPDK igb_uio module.
On Intel 82599 10 Gigabit Ethernet Controller series NICs Flow Bifurcation can be achieved
by SR-IOV and Intel Flow Director technologies. Traffic can be directed to queues by the Flow
Director capability, typically by matching 5-tuple of UDP/TCP packets.
The typical procedure to achieve this is as follows:
1. Boot the system without iommu, or with iommu=pt.
2. Create Virtual Functions:
echo 2 > /sys/bus/pci/devices/0000:01:00.0/sriov_numvfs
Where:
4. Compile the DPDK application and insert igb_uio or probe the vfio-pci kernel mod-
ules as normal.
5. Bind the virtual functions:
modprobe vfio-pci
dpdk-devbind.py -b vfio-pci 01:10.0
dpdk-devbind.py -b vfio-pci 01:10.1
In this example, traffic matching the rules will go through the VF by matching the filter rule. All
other traffic, not matching the rules, will go through the default queue or scaling on queues in
the PF. That is to say UDP packets with the specified IP source and destination addresses will
go through the DPDK application. All other traffic, with different hosts or different protocols, will
go through the Linux networking stack.
Note:
• The above steps work on the Linux kernel v4.2.
• The Flow Bifurcation is implemented in Linux kernel and ixgbe kernel driver using the
following patches:
– ethtool: Add helper routines to pass vf to rx_flow_spec
– ixgbe: Allow flow director to use entire queue space
• The Ethtool version used in this example is 3.18.
On Intel X710/XL710 series Ethernet Controllers Flow Bifurcation can be achieved by SR-IOV,
Cloud Filter and L3 VEB switch. The traffic can be directed to queues by the Cloud Filter and
L3 VEB switch’s matching rule.
• L3 VEB filters work for non-tunneled packets. They can direct a packet to a queue in a
VF just by its destination IP address.
• Cloud filters work for the following types of tunneled packets.
– Inner mac.
– Inner mac + VNI.
– Outer mac + Inner mac + VNI.
– Inner mac + Inner vlan + VNI.
– Inner mac + Inner vlan.
Note: Output such as add vxlan port 8472, index 0 success should be found
in the system log.
• Inner mac, for a route whose inner destination mac is 0:0:0:0:9:0 to PF’s queue 6.
ethtool -N <dev_name> flow-type ether dst 00:00:00:00:00:00 \
m ff:ff:ff:ff:ff:ff src 00:00:00:00:09:00 m 00:00:00:00:00:00 \
user-def 0xffffffff00000003 action 6 loc 1
• Inner mac + VNI, for a route whose inner destination mac is 0:0:0:0:9:0 and VNI is
8 to PF’s queue 4.
ethtool -N <dev_name> flow-type ether dst 00:00:00:00:00:00 \
m ff:ff:ff:ff:ff:ff src 00:00:00:00:09:00 m 00:00:00:00:00:00 \
user-def 0x800000003 action 4 loc 4
• Outer mac + Inner mac + VNI, for a route whose outer mac is 68:05:ca:24:03:8b,
inner destination mac is c2:1a:e1:53:bc:57, and VNI is 8 to PF’s queue 2.
ethtool -N <dev_name> flow-type ether dst 68:05:ca:24:03:8b \
m 00:00:00:00:00:00 src c2:1a:e1:53:bc:57 m 00:00:00:00:00:00 \
user-def 0x800000003 action 2 loc 2
• Inner mac + Inner vlan + VNI, for a route whose inner destination mac is
00:00:00:00:20:00, inner vlan is 10, and VNI is 8 to VF 0’s queue 1.
ethtool -N <dev_name> flow-type ether dst 00:00:00:00:01:00 \
m ff:ff:ff:ff:ff:ff src 00:00:00:00:20:00 m 00:00:00:00:00:00 \
vlan 10 user-def 0x800000000 action 1 loc 5
• Inner mac + Inner vlan, for a route whose inner destination mac is
00:00:00:00:20:00, and inner vlan is 10 to VF 0’s queue 1.
ethtool -N <dev_name> flow-type ether dst 00:00:00:00:01:00 \
m ff:ff:ff:ff:ff:ff src 00:00:00:00:20:00 m 00:00:00:00:00:00 \
vlan 10 user-def 0xffffffff00000000 action 1 loc 5
Note:
• If the upper 32 bits of ‘user-def’ are 0xffffffff, then the filter can be used for
programming an L3 VEB filter, otherwise the upper 32 bits of ‘user-def’ can carry
the tenant ID/VNI if specified/required.
• Cloud filters can be defined with inner mac, outer mac, inner ip, inner vlan and VNI
as part of the cloud tuple. It is always the destination (not source) mac/ip that these
filters use. For all these examples dst and src mac address fields are overloaded
dst == outer, src == inner.
• The filter will direct a packet matching the rule to a vf id specified in the lower 32 bit
of user-def to the queue specified by ‘action’.
• If the vf id specified by the lower 32 bit of user-def is greater than or equal to
max_vfs, then the filter is for the PF queues.
6. Compile the DPDK application and insert igb_uio or probe the vfio-pci kernel mod-
ules as normal.
7. Bind the virtual function:
modprobe vfio-pci
dpdk-devbind.py -b vfio-pci 01:10.0
dpdk-devbind.py -b vfio-pci 01:10.1
Note:
• The above steps work on the i40e Linux kernel driver v1.5.16.
• The Ethtool version used in this example is 3.18. The mask ff means ‘not involved’,
while 00 or no mask means ‘involved’.
• For more details of the configuration, refer to the cloud filter test plan
This guide lists the steps required to setup a PVP benchmark using testpmd as a simple
forwarder between NICs and Vhost interfaces. The goal of this setup is to have a reference
PVP benchmark without using external vSwitches (OVS, VPP, ...) to make it easier to obtain
reproducible results and to facilitate continuous integration testing.
The guide covers two ways of launching the VM, either by directly calling the QEMU command
line, or by relying on libvirt. It has been tested with DPDK v16.11 using RHEL7 for both host
and guest.
Setup overview
In this diagram, each red arrow represents one logical core. This use-case requires 6 dedicated
logical cores. A forwarding configuration with a single NIC is also possible, requiring 3 logical
cores.
Host setup
In this setup, we isolate 6 cores (from CPU2 to CPU7) on the same NUMA node. Two cores
are assigned to the VM vCPUs running testpmd and four are assigned to testpmd on the host.
Host tuning
4. Disable NMIs:
echo 0 > /proc/sys/kernel/nmi_watchdog
Qemu build
Build Qemu:
git clone git://git.qemu.org/qemu.git
cd qemu
mkdir bin
cd bin
../configure --target-list=x86_64-softmmu
make
DPDK build
Build DPDK:
git clone git://dpdk.org/dpdk
cd dpdk
export RTE_SDK=$PWD
make install T=x86_64-native-linuxapp-gcc DESTDIR=install
Testpmd launch
Note: The Sandy Bridge family seems to have some IOMMU limitations giving poor
performance results. To achieve good performance on these machines consider using
UIO instead.
With this command, isolated CPUs 2 to 5 will be used as lcores for PMD threads.
3. In testpmd interactive mode, set the portlist to obtain the correct port chaining:
set portlist 0,2,1,3
start
VM launch
Qemu way
Launch QEMU with two Virtio-net devices paired to the vhost-user sockets created by testpmd.
Below example uses default Virtio-net options, but options may be specified, for example to
disable mergeable buffers or indirect descriptors.
<QEMU path>/bin/x86_64-softmmu/qemu-system-x86_64 \
-enable-kvm -cpu host -m 3072 -smp 3 \
-chardev socket,id=char0,path=/tmp/vhost-user1 \
-netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
-device virtio-net-pci,netdev=mynet1,mac=52:54:00:02:d9:01,addr=0x10 \
-chardev socket,id=char1,path=/tmp/vhost-user2 \
-netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
-device virtio-net-pci,netdev=mynet2,mac=52:54:00:02:d9:02,addr=0x11 \
-object memory-backend-file,id=mem,size=3072M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem -mem-prealloc \
-net user,hostfwd=tcp::1002$1-:22 -net nic \
-qmp unix:/tmp/qmp.socket,server,nowait \
-monitor stdio <vm_image>.qcow2
Libvirt way
Some initial steps are required for libvirt to be able to connect to testpmd’s sockets.
First, the SELinux policy needs to be set to permissive, since testpmd is generally run as root
(note that a reboot is required):
cat /etc/selinux/config
Once the domain is created, the following snippet is an extract of the most important informa-
tion (hugepages, vCPU pinning, Virtio PCI devices):
<domain type='kvm'>
<memory unit='KiB'>3145728</memory>
<currentMemory unit='KiB'>3145728</currentMemory>
<memoryBacking>
<hugepages>
<page size='1048576' unit='KiB' nodeset='0'/>
</hugepages>
<locked/>
</memoryBacking>
<vcpu placement='static'>3</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='1'/>
<vcpupin vcpu='1' cpuset='6'/>
<vcpupin vcpu='2' cpuset='7'/>
<emulatorpin cpuset='0'/>
</cputune>
<numatune>
<memory mode='strict' nodeset='0'/>
</numatune>
<os>
<type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
<boot dev='hd'/>
</os>
<cpu mode='host-passthrough'>
<topology sockets='1' cores='3' threads='1'/>
<numa>
<cell id='0' cpus='0-2' memory='3145728' unit='KiB' memAccess='shared'/>
</numa>
</cpu>
<devices>
<interface type='vhostuser'>
<mac address='56:48:4f:53:54:01'/>
Guest setup
Guest tuning
2. Disable NMIs:
echo 0 > /proc/sys/kernel/nmi_watchdog
DPDK build
Build DPDK:
git clone git://dpdk.org/dpdk
cd dpdk
export RTE_SDK=$PWD
make install T=x86_64-native-linuxapp-gcc DESTDIR=install
Testpmd launch
Start testpmd:
$RTE_SDK/install/bin/testpmd -l 0,1,2 --socket-mem 1024 -n 4 \
--proc-type auto --file-prefix pg -- \
--portmask=3 --forward-mode=macswap --port-topology=chained \
--disable-hw-vlan --disable-rss -i --rxq=1 --txq=1 \
--rxd=256 --txd=256 --nb-cores=2 --auto-start
Results template
Containers are becoming more and more popular thanks to strengths such as low overhead,
fast boot-up time, and ease of deployment. How to use DPDK to accelerate container network-
ing has become a common question for users. There are two use models of running DPDK
inside containers, as shown in Fig. 5.3.
Overview
The virtual device, virtio-user, with unmodified vhost-user backend, is designed for high perfor-
mance user space container networking or inter-process communication (IPC).
The overview of accelerating container networking by virtio-user is shown in Fig. 5.4.
Unlike the virtio PCI devices usually used for para-virtualized I/O in the context of QEMU/VM,
the basic idea here is to present a kind of virtual device which can be attached and initialized
by DPDK. The device emulation layer provided by QEMU in the VM context is avoided by just
registering a new kind of virtual device in DPDK's ether layer. To minimize the changes, the
already-existing virtio PMD code (drivers/net/virtio/) is reused.
Virtio, in essence, is a shm-based solution to transmit/receive packets. How is memory
shared? In the VM case, QEMU always shares the whole physical memory layout of the VM
with the vhost backend. However, it is not feasible for a container, as a process, to share all of
its virtual memory regions with the backend. So only those virtual memory regions (that is, the
hugepages initialized in DPDK) are shared with the backend. This restricts packet transmission
and reception to addresses within these areas.
Sample Usage
Here we use Docker as the container engine. The same approach also applies to LXC or
Rocket with some minor changes.
1. Compile DPDK.
make install RTE_SDK=`pwd` T=x86_64-native-linuxapp-gcc
Note: If all of the above setup is run on the host, it is a shm-based IPC.
Limitations
The virtual device, virtio-user, was originally introduced with vhost-user backend, as a high
performance solution for IPC (Inter-Process Communication) and user space container net-
working.
Virtio-user with the vhost-kernel backend is a solution for an exceptional path, similar to KNI,
which exchanges packets with the kernel networking stack. This solution is very promising in:
• Maintenance
All kernel modules needed by this solution, vhost and vhost-net (kernel), are upstreamed
and extensively used kernel modules.
• Features
vhost-net was born to be a networking solution, and has lots of networking related fea-
tures, such as multi-queue, TSO, multi-segment mbufs, etc.
• Performance
Similar to KNI, this solution would use one or more kthreads to send/receive packets
to/from user space DPDK applications, which has little impact on the user space polling
thread (except that it might enter kernel space to wake up those kthreads if necessary).
The overview of an application using virtio-user as exceptional path is shown in Fig. 5.5.
Sample Usage
As a prerequisite, the vhost/vhost-net kernel CONFIG should be chosen before compiling the
kernel and those kernel modules should be inserted.
1. Compile DPDK and bind a physical NIC to igb_uio/uio_pci_generic/vfio-pci.
This physical NIC is for communicating with outside.
2. Run testpmd.
$(testpmd) -c 0xc -n 4 \
--vdev=virtio_user0,path=/dev/vhost-net,queue_size=1024 \
-- -i --txqflags=0x0 --disable-hw-vlan --enable-lro --crc-strip
--enable-rx-cksum --rxd=1024 --txd=1024
This command runs testpmd with two ports, one physical NIC to communicate with out-
side, and one virtio-user to communicate with kernel.
• --enable-lro
This is used to negotiate the VIRTIO_NET_F_GUEST_TSO4 and
VIRTIO_NET_F_GUEST_TSO6 features so that large packets from the kernel can be
transmitted to the DPDK application and further TSO'd by the physical NIC.
• --enable-rx-cksum
This is used to negotiate VIRTIO_NET_F_GUEST_CSUM so that packets from
the kernel can be treated as having a valid Rx checksum.
• queue_size
256 by default. To avoid shortage of descriptors, we can increase it to 1024.
• queues
Number of multi-queues. Each queue will be served by a kthread. For exam-
ple:
$(testpmd) -c 0xc -n 4 \
--vdev=virtio_user0,path=/dev/vhost-net,queues=2,queue_size=1024 \
-- -i --txqflags=0x0 --disable-hw-vlan --enable-lro \
--crc-strip --enable-rx-cksum --txq=2 --rxq=2 --rxd=1024 \
--txd=1024
1. Start testpmd:
(testpmd) start
Note: The tap device will be named tap0, tap1, etc., by the kernel.
Then, all traffic from physical NIC can be forwarded into kernel stack, and all traffic on the tap0
can be sent out from physical NIC.
Limitations
dpdk-procinfo Application
The dpdk-procinfo application is a Data Plane Development Kit (DPDK) application that runs as
a DPDK secondary process and is capable of retrieving port statistics, resetting port statistics
and printing DPDK memory information. This application extends the original functionality that
was supported by dump_cfg.
Parameters
dpdk-pdump Application
The dpdk-pdump tool is a Data Plane Development Kit (DPDK) tool that runs as a DPDK
secondary process and is capable of enabling packet capture on dpdk ports.
Note:
• The dpdk-pdump tool can only be used in conjunction with a primary applica-
tion which has the packet capture framework initialized already.
• The dpdk-pdump tool depends on libpcap based PMD which is disabled by
default in the build configuration files, owing to an external dependency on
the libpcap development files which must be installed on the board. Once the
libpcap development files are installed, the libpcap based PMD can be enabled
by setting CONFIG_RTE_LIBRTE_PMD_PCAP=y and recompiling the DPDK.
The --pdump command line option is mandatory and it takes various sub arguments which
are described in the section below.
Note:
• Parameters inside the parentheses represent mandatory parameters.
• Parameters inside the square brackets represent optional parameters.
• Multiple instances of --pdump can be passed to capture packets on different
port and queue combinations.
The --server-socket-path command line option is optional. This represents the server
socket directory. If no value is passed, the default values are used, i.e. /var/run/.dpdk/ for root
users and ~/.dpdk/ for non-root users.
The --client-socket-path command line option is optional. This represents the client
socket directory. If no value is passed, the default values are used, i.e. /var/run/.dpdk/ for root
users and ~/.dpdk/ for non-root users.
Note:
• As of now the dpdk-pdump tool cannot capture the packets of virtual devices
in the primary process due to a bug in the ethdev library. Due to this bug, in
a multi process context, when the primary and secondary have different ports
set, then the secondary process (here the dpdk-pdump tool) overwrites the
rte_eth_devices[] entries of the primary process.
queue: Queue id of the eth device on which packets should be captured. The user can pass
a queue value of * to enable packet capture on all queues of the eth device.
rx-dev: Can be either a pcap file name or any Linux iface.
tx-dev: Can be either a pcap file name or any Linux iface.
Note:
• To receive ingress packets only, rx-dev should be passed.
• To receive egress packets only, tx-dev should be passed.
• To receive ingress and egress packets separately rx-dev and tx-dev should
both be passed with the different file names or the Linux iface names.
• To receive ingress and egress packets together, rx-dev and tx-dev should
both be passed with the same file name or the same Linux iface name.
ring-size: Size of the ring. This value is used internally for ring creation. The ring will be
used to enqueue the packets from the primary application to the secondary. This is an optional
parameter with default size 16384.
mbuf-size: Size of the mbuf data. This is used internally for mempool creation. Ideally this
value must be same as the primary application’s mempool’s mbuf data size which is used for
packet RX. This is an optional parameter with default size 2176.
total-num-mbufs: Total number of mbufs in the mempool. This is used internally for mempool
creation. This is an optional parameter with default value 65535.
Example
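A minimal invocation, assuming a primary application with the packet capture framework
already initialized (the binary path and the port sub-argument are shown here for illustration
only):

# Capture packets received on port 0, all queues, into a pcap file.
sudo ./build/app/dpdk-pdump -- --pdump 'port=0,queue=*,rx-dev=/tmp/capture.pcap'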
dpdk-pmdinfo Application
The dpdk-pmdinfo tool is a Data Plane Development Kit (DPDK) utility that can dump a
PMD's hardware support information.
Note:
• Parameters inside the square brackets represent optional parameters.
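As a hedged usage sketch (the script location and target binary are illustrative), the tool is
pointed at a DPDK binary or PMD shared object and prints the hardware support information
embedded in it:

./usertools/dpdk-pmdinfo.py ./build/app/testpmd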
dpdk-devbind Application
The dpdk-devbind tool is a Data Plane Development Kit (DPDK) utility that helps bind
and unbind devices from specific drivers, as well as check their current binding status.
OPTIONS
• --help, --usage
Display usage information and quit
• -s, --status
Print the current status of all known network interfaces. For each device, it
displays the PCI domain, bus, slot and function, along with a text description
of the device. Depending upon whether the device is being used by a kernel
driver, the igb_uio driver, or no driver, other relevant information will be
displayed:
– the Linux interface name, e.g. if=eth0
– the driver being used, e.g. drv=igb_uio
– any suitable drivers not currently using that device, e.g. unused=igb_uio
NOTE: if this flag is passed along with a bind/unbind option, the status display
will always occur after the other operations have taken place.
• -b driver, --bind=driver
Select the driver to use or “none” to unbind the device
• -u, --unbind
Unbind a device (Equivalent to -b none)
• --force
By default, devices which are used by Linux - as indicated by having routes
in the routing table - cannot be modified. Using the --force flag overrides
this behavior, allowing active links to be forcibly unbound. WARNING: This can
lead to loss of network connection and should be used with caution.
Warning: Due to the way VFIO works, there are certain limitations to which devices can
be used with VFIO. Mainly it comes down to how IOMMU groups work. Any Virtual Function
device can be used with VFIO on its own, but physical devices will require either all ports
bound to VFIO, or some of them bound to VFIO while others not being bound to anything at
all.
If your device is behind a PCI-to-PCI bridge, the bridge will then be part of the IOMMU
group in which your device is in. Therefore, the bridge driver should also be unbound from
the bridge PCI device for VFIO to work with devices behind the bridge.
Warning: While any user can run the dpdk-devbind.py script to view the status of the
network ports, binding or unbinding network ports requires root privileges.
Examples
To unbind eth1 from its current driver and bind it to igb_uio:
dpdk-devbind --bind=igb_uio eth1
To check status of all network ports, assign one to the igb_uio driver and check status again:
# Check the status of the available devices.
dpdk-devbind --status
Network devices using DPDK-compatible driver
============================================
<none>
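To complete the sequence described above, bind one of the listed devices to igb_uio and
re-check the status (the PCI address 0000:01:00.0 is purely illustrative):

# Bind device 0000:01:00.0 to the igb_uio driver.
dpdk-devbind --bind=igb_uio 0000:01:00.0

# Check the status of the available devices again.
dpdk-devbind --status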
dpdk-test-crypto-perf Application
The dpdk-test-crypto-perf tool is a Data Plane Development Kit (DPDK) utility that allows
measuring performance parameters of PMDs available in the crypto tree. Two measurement
types are available: throughput and latency. The user can use multiple cores to run tests, but
only one type of crypto PMD can be measured during a single application execution.
Cipher parameters, type of device, type of operation and chain mode have to be specified in
the command line as application parameters. These parameters are checked using device
capabilities structure.
Limitations
On hardware devices the cycle-count does not always represent the actual offload cost. The
cycle-count only represents the offload cost when the hardware accelerator is not fully loaded;
when loaded, the CPU cycles freed up by the offload are still consumed by the test tool and
included in the cycle-count. These cycles are consumed by retries and inefficient API calls
enqueuing and dequeuing smaller bursts than specified by the cmdline parameter. This results
in a larger cycle-count measurement and should not be interpreted as an offload cost
measurement.
On hardware devices the throughput measurement is not necessarily the maximum possible
for the device, e.g. it may be necessary to use multiple cores to keep the hardware accelerator
fully loaded and so measure maximum throughput.
EAL Options
The following are the EAL command-line options that can be used in conjunction with the
dpdk-test-crypto-perf application. See the DPDK Getting Started Guides for more
information on these options.
• -c <COREMASK>
Set the hexadecimal bitmask of the cores to run on.
• -w <PCI>
Add a PCI device in white list.
• --vdev <driver><id>
Add a virtual device.
Application Options
• --silent
Disable options dump.
• --pool-sz <n>
Set the number of mbufs to be allocated in the mbuf pool.
• --total-ops <n>
Set the number of total operations performed.
• --burst-sz <n>
Set the number of packets per burst.
• --buffer-sz <n>
Set the size of single packet (plaintext or ciphertext in it).
• --segments-nb <n>
Set the number of segments per packet.
• --devtype <name>
Set device type, where name is one of the following:
crypto_null
crypto_aesni_mb
crypto_aesni_gcm
crypto_openssl
crypto_qat
crypto_snow3g
crypto_kasumi
crypto_zuc
• --optype <name>
Set operation type, where name is one of the following:
cipher-only
auth-only
cipher-then-auth
auth-then-cipher
aead
• --sessionless
Enable session-less crypto operations mode.
• --out-of-place
Enable out-of-place crypto operations mode.
• --verify
Enable verification that all crypto operations were successful. The verification is
done after the performance test.
• --test-file <name>
Set test vector file path. See the Test Vector File chapter.
• --test-name <name>
Set specific test name section in the test vector file.
• --cipher-algo <name>
Set cipher algorithm name, where name is one of the following:
3des-cbc
3des-ecb
3des-ctr
aes-cbc
aes-ccm
aes-ctr
aes-ecb
aes-gcm
aes-f8
aes-xts
arc4
null
kasumi-f8
snow3g-uea2
zuc-eea3
• --cipher-op <mode>
Set cipher operation mode, where mode is one of the following:
encrypt
decrypt
• --cipher-key-sz <n>
Set the size of cipher key.
• --cipher-iv-sz <n>
Set the size of cipher iv.
• --auth-algo <name>
Set authentication algorithm name, where name is one of the following:
3des-cbc
aes-cbc-mac
aes-ccm
aes-cmac
aes-gcm
aes-gmac
aes-xcbc-mac
md5
md5-hmac
sha1
sha1-hmac
sha2-224
sha2-224-hmac
sha2-256
sha2-256-hmac
sha2-384
sha2-384-hmac
sha2-512
sha2-512-hmac
kasumi-f9
snow3g-uia2
zuc-eia3
• --auth-op <mode>
Set authentication operation mode, where mode is one of the following:
verify
generate
• --auth-key-sz <n>
Set the size of authentication key.
• --auth-digest-sz <n>
Set the size of authentication digest.
• --auth-aad-sz <n>
Set the size of authentication aad.
• --csv-friendly
Enable test result output CSV friendly rather than human friendly.
The test vector file is a text file containing information about test vectors. The file is made of
sections. The first section doesn't have a header; it contains global information used in each
test variant vector - typically information about the plaintext, ciphertext, cipher key, auth key
and initial vector. All other sections begin with a header. These sections contain particular
information, typically the digest.
Format of the file:
Each line beginning with the '#' sign contains a comment and is ignored by the parser:
# <comment>
A data line contains an information token, then the '=' sign and a string of bytes in C byte array format:
<token> = <C byte array>
Token list:
• plaintext
Original plaintext to be encrypted.
• ciphertext
Encrypted plaintext string.
• cipher_key
Key used in cipher operation.
• auth_key
Key used in auth operation.
• iv
Initial vector.
• aad
Additional data.
• digest
Digest string.
Examples
Call application for performance throughput test of single Aesni MB PMD for cipher encryption
aes-cbc and auth generation sha1-hmac, one million operations, burst size 32, packet size 64:
dpdk-test-crypto-perf -c 0xc0 --vdev crypto_aesni_mb_pmd -w 0000:00:00.0 --
--ptest throughput --devtype crypto_aesni_mb --optype cipher-then-auth
--cipher-algo aes-cbc --cipher-op encrypt --cipher-key-sz 16 --auth-algo
sha1-hmac --auth-op generate --auth-key-sz 64 --auth-digest-sz 12
--total-ops 10000000 --burst-sz 32 --buffer-sz 64
Call application for performance latency test of two Aesni MB PMD executed on two cores for
cipher encryption aes-cbc, ten operations in silent mode:
dpdk-test-crypto-perf -c 0xf0 --vdev crypto_aesni_mb_pmd1
--vdev crypto_aesni_mb_pmd2 -w 0000:00:00.0 -- --devtype crypto_aesni_mb
--cipher-algo aes-cbc --cipher-key-sz 16 --cipher-iv-sz 16
--cipher-op encrypt --optype cipher-only --silent
--ptest latency --total-ops 10
Call application for performance latency test of single OpenSSL PMD for cipher encryption
aes-gcm and auth generation aes-gcm, ten operations in silent mode, test vector provided in
file “test_aes_gcm.data” with packet verification:
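The corresponding command is not reproduced in this extract; the following is a sketch
assembled only from the options documented above (core mask, PCI address and
key/digest sizes are illustrative):

dpdk-test-crypto-perf -c 0xf0 --vdev crypto_openssl -w 0000:00:00.0 --
--devtype crypto_openssl --optype aead --cipher-algo aes-gcm --cipher-op encrypt
--cipher-key-sz 16 --cipher-iv-sz 16 --auth-algo aes-gcm --auth-op generate
--auth-key-sz 16 --auth-digest-sz 16 --auth-aad-sz 16 --total-ops 10 --silent
--ptest latency --test-file test_aes_gcm.data --verify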
Test vector file for cipher algorithm aes cbc 256 with authorization sha:
# Global Section
plaintext =
0xff, 0xca, 0xfb, 0xf1, 0x38, 0x20, 0x2f, 0x7b, 0x24, 0x98, 0x26, 0x7d, 0x1d, 0x9f, 0xb3, 0x93,
0xd9, 0xef, 0xbd, 0xad, 0x4e, 0x40, 0xbd, 0x60, 0xe9, 0x48, 0x59, 0x90, 0x67, 0xd7, 0x2b, 0x7b,
0x8a, 0xe0, 0x4d, 0xb0, 0x70, 0x38, 0xcc, 0x48, 0x61, 0x7d, 0xee, 0xd6, 0x35, 0x49, 0xae, 0xb4,
0xaf, 0x6b, 0xdd, 0xe6, 0x21, 0xc0, 0x60, 0xce, 0x0a, 0xf4, 0x1c, 0x2e, 0x1c, 0x8d, 0xe8, 0x7b
ciphertext =
0x77, 0xF9, 0xF7, 0x7A, 0xA3, 0xCB, 0x68, 0x1A, 0x11, 0x70, 0xD8, 0x7A, 0xB6, 0xE2, 0x37, 0x7E,
0xD1, 0x57, 0x1C, 0x8E, 0x85, 0xD8, 0x08, 0xBF, 0x57, 0x1F, 0x21, 0x6C, 0xAD, 0xAD, 0x47, 0x1E,
0x0D, 0x6B, 0x79, 0x39, 0x15, 0x4E, 0x5B, 0x59, 0x2D, 0x76, 0x87, 0xA6, 0xD6, 0x47, 0x8F, 0x82,
0xB8, 0x51, 0x91, 0x32, 0x60, 0xCB, 0x97, 0xDE, 0xBE, 0xF0, 0xAD, 0xFC, 0x23, 0x2E, 0x22, 0x02
cipher_key =
0xE4, 0x23, 0x33, 0x8A, 0x35, 0x64, 0x61, 0xE2, 0x49, 0x03, 0xDD, 0xC6, 0xB8, 0xCA, 0x55, 0x7A,
0xd0, 0xe7, 0x4b, 0xfb, 0x5d, 0xe5, 0x0c, 0xe7, 0x6f, 0x21, 0xb5, 0x52, 0x2a, 0xbb, 0xc7, 0xf7
auth_key =
0xaf, 0x96, 0x42, 0xf1, 0x8c, 0x50, 0xdc, 0x67, 0x1a, 0x43, 0x47, 0x62, 0xc7, 0x04, 0xab, 0x05,
0xf5, 0x0c, 0xe7, 0xa2, 0xa6, 0x23, 0xd5, 0x3d, 0x95, 0xd8, 0xcd, 0x86, 0x79, 0xf5, 0x01, 0x47,
0x4f, 0xf9, 0x1d, 0x9d, 0x36, 0xf7, 0x68, 0x1a, 0x64, 0x44, 0x58, 0x5d, 0xe5, 0x81, 0x15, 0x2a,
0x41, 0xe4, 0x0e, 0xaa, 0x1f, 0x04, 0x21, 0xff, 0x2c, 0xf3, 0x73, 0x2b, 0x48, 0x1e, 0xd2, 0xf7
iv =
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F
# Section sha 1 hmac buff 32
[sha1_hmac_buff_32]
digest =
0x36, 0xCA, 0x49, 0x6A, 0xE3, 0x54, 0xD8, 0x4F, 0x0B, 0x76, 0xD8, 0xAA, 0x78, 0xEB, 0x9D, 0x65,
0x2C, 0xCA, 0x1F, 0x97
# Section sha 256 hmac buff 32
[sha256_hmac_buff_32]
digest =
0x1C, 0xB2, 0x3D, 0xD1, 0xF9, 0xC7, 0x6C, 0x49, 0x2E, 0xDA, 0x94, 0x8B, 0xF1, 0xCF, 0x96, 0x43,
0x67, 0x50, 0x39, 0x76, 0xB5, 0xA1, 0xCE, 0xA1, 0xD7, 0x77, 0x10, 0x07, 0x43, 0x37, 0x05, 0xB4
Introduction
This document is a user guide for the testpmd example application that is shipped as part of
the Data Plane Development Kit.
The testpmd application can be used to test the DPDK in a packet forwarding mode and also
to access NIC hardware features such as Flow Director. It also serves as an example of how to
build a more fully-featured application using the DPDK SDK.
The guide shows how to build and run the testpmd application and how to configure the appli-
cation from the command line and the run-time environment.
The testpmd application is compiled as part of the main compilation of the DPDK libraries
and tools. Refer to the DPDK Getting Started Guides for details. The basic compilation steps
are:
1. Set the required environment variables and go to the source directory:
export RTE_SDK=/path/to/rte_sdk
cd $RTE_SDK
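2. Build the DPDK libraries and the testpmd application. The exact step is not shown in this
extract; a sketch using the build convention found elsewhere in this guide (the target name is
an example and depends on your platform):

make config T=x86_64-native-linuxapp-gcc install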
The following are the EAL command-line options that can be used in conjunction with the
testpmd, or any other DPDK application. See the DPDK Getting Started Guides for more
information on these options.
• --lcores COREMAP
Map a set of lcores to a set of physical CPUs. Lcore and CPU lists are grouped by ( and ).
Within the group, the - character is used as a range separator and , is used as a single
number separator. The grouping () can be omitted for a single element group. The @ can
be omitted if cpus and lcores have the same value.
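For example (a hedged illustration; the core and CPU numbers are arbitrary):

--lcores '(0-3)@(0-3),(4,5)@(8-11),6'

This runs lcores 0-3 on CPUs 0-3, lcores 4 and 5 on the CPU set 8-11, and lcore 6 on CPU 6.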
• --master-lcore ID
Core ID that is used as master.
• -n NUM
Set the number of memory channels to use.
• -b, --pci-blacklist domain:bus:devid.func
Blacklist a PCI device to prevent EAL from using it. Multiple -b options are allowed.
• -d LIB.so
Load an external driver. Multiple -d options are allowed.
• -w, --pci-whitelist domain:bus:devid:func
Add a PCI device in white list.
• -m MB
Memory to allocate. See also --socket-mem.
• -r NUM
Set the number of memory ranks (auto-detected by default).
• -v
Display the version information on startup.
• --xen-dom0
Support application running on Xen Domain0 without hugetlbfs.
• --syslog
Set the syslog facility.
• --socket-mem
Set the memory to allocate on specific sockets (use comma separated values).
• --huge-dir
Specify the directory where the hugetlbfs is mounted.
• --proc-type
Set the type of the current process.
• --file-prefix
Prefix for hugepage filenames.
• --vmware-tsc-map
Use VMware TSC map instead of native RDTSC.
• --vdev
Add a virtual device using the format:
<driver><id>[,key=val, ...]
For example:
--vdev 'net_pcap0,rx_pcap=input.pcap,tx_pcap=output.pcap'
• --base-virtaddr
Specify base virtual address.
• --create-uio-dev
Create /dev/uioX (usually done by hotplug).
• --no-shconf
No shared config (mmap-ed files).
• --no-pci
Disable pci.
• --no-hpet
Disable hpet.
• --no-huge
Use malloc instead of hugetlbfs.
The following are the command-line options for the testpmd applications. They must be sepa-
rated from the EAL options, shown in the previous section, with a -- separator:
sudo ./testpmd -c 0xF -n 4 -- -i --portmask=0x1 --nb-cores=2
• -i, --interactive
Run testpmd in interactive mode. In this mode, the testpmd starts with a prompt that can
be used to start and stop forwarding, configure the application and display stats on the
current packet processing session. See Testpmd Runtime Functions for more details.
In non-interactive mode, the application starts with the configuration specified on the
command-line and immediately enters forwarding mode.
• -h, --help
Display a help message and quit.
• -a, --auto-start
Start forwarding on initialization.
• --nb-cores=N
Set the number of forwarding cores, where 1 <= N <= “number of cores” or
CONFIG_RTE_MAX_LCORE from the configuration file. The default value is 1.
• --nb-ports=N
Set the number of forwarding ports, where 1 <= N <= “number of ports” on the board
or CONFIG_RTE_MAX_ETHPORTS from the configuration file. The default value is the
number of ports on the board.
• --coremask=0xXX
Set the hexadecimal bitmask of the cores running the packet forwarding test. The master
lcore is reserved for command line parsing only and cannot be masked on for packet
forwarding.
• --portmask=0xXX
Set the hexadecimal bitmask of the ports used by the packet forwarding test.
• --numa
Enable NUMA-aware allocation of RX/TX rings and of RX memory buffers (mbufs).
• --port-numa-config=(port,socket)[,(port,socket)]
Specify the socket on which the memory pool to be used by the port will be allocated.
• --ring-numa-config=(port,flag,socket)[,(port,flag,socket)]
Specify the socket on which the TX/RX rings for the port will be allocated. Where flag is
1 for RX, 2 for TX, and 3 for RX and TX.
• --socket-num=N
Set the socket from which all memory is allocated in NUMA mode, where 0 <= N <
number of sockets on the board.
• --mbuf-size=N
Set the data size of the mbufs used to N bytes, where N < 65536. The default value is
2048.
• --total-num-mbufs=N
Set the number of mbufs to be allocated in the mbuf pools, where N > 1024.
• --max-pkt-len=N
Set the maximum packet size to N bytes, where N >= 64. The default value is 1518.
• --eth-peers-configfile=name
Use a configuration file containing the Ethernet addresses of the peer ports. The config-
uration file should contain the Ethernet addresses on separate lines:
XX:XX:XX:XX:XX:01
XX:XX:XX:XX:XX:02
...
• --eth-peer=N,XX:XX:XX:XX:XX:XX
Set the MAC address XX:XX:XX:XX:XX:XX of the peer port N, where 0 <= N <
CONFIG_RTE_MAX_ETHPORTS from the configuration file.
• --pkt-filter-mode=mode
Set Flow Director mode where mode is either none (the default), signature or
perfect. See flow_director_filter for more details.
• --pkt-filter-report-hash=mode
Set Flow Director hash match reporting mode where mode is none, match (the default)
or always.
• --pkt-filter-size=N
Set Flow Director allocated memory size, where N is 64K, 128K or 256K. Sizes are in
kilobytes. The default is 64.
• --pkt-filter-flexbytes-offset=N
Set the flexbytes offset. The offset is defined in words (not bytes) counted from the first
byte of the destination Ethernet MAC address, where N is 0 <= N <= 32. The default
value is 0x6.
• --pkt-filter-drop-queue=N
Set the drop-queue. In perfect filter mode, when a rule is added with queue = -1, the
packet will be enqueued into the RX drop-queue. If the drop-queue does not exist, the
packet is dropped. The default value is N=127.
• --crc-strip
Enable hardware CRC stripping.
• --enable-lro
Enable large receive offload.
• --enable-rx-cksum
Enable hardware RX checksum offload.
• --enable-scatter
Enable scatter (multi-segment) RX.
• --disable-hw-vlan
Disable hardware VLAN.
• --disable-hw-vlan-filter
Disable hardware VLAN filter.
• --disable-hw-vlan-strip
Disable hardware VLAN strip.
• --disable-hw-vlan-extend
Disable hardware VLAN extend.
• --enable-drop-en
Enable per-queue packet drop for packets with no descriptors.
• --disable-rss
Disable RSS (Receive Side Scaling).
• --port-topology=mode
Set port topology, where mode is paired (the default) or chained.
In paired mode, the forwarding is between pairs of ports, for example: (0,1), (2,3), (4,5).
In chained mode, the forwarding is to the next available port in the port mask, for exam-
ple: (0,1), (1,2), (2,0).
The ordering of the ports can be changed using the portlist testpmd runtime function.
• --forward-mode=mode
Set the forwarding mode where mode is one of the following:
io (the default)
mac
mac_swap
flowgen
rxonly
txonly
csum
icmpecho
ieee1588
• --rss-ip
Set RSS functions for IPv4/IPv6 only.
• --rss-udp
Set RSS functions for IPv4/IPv6 and UDP.
• --rxq=N
Set the number of RX queues per port to N, where 1 <= N <= 65535. The default value
is 1.
• --rxd=N
Set the number of descriptors in the RX rings to N, where N > 0. The default value is
128.
• --txq=N
Set the number of TX queues per port to N, where 1 <= N <= 65535. The default value
is 1.
• --txd=N
Set the number of descriptors in the TX rings to N, where N > 0. The default value is 512.
• --burst=N
Set the number of packets per burst to N, where 1 <= N <= 512. The default value is 16.
• --mbcache=N
Set the cache of mbuf memory pools to N, where 0 <= N <= 512. The default value is 16.
• --rxpt=N
Set the prefetch threshold register of RX rings to N, where N >= 0. The default value is
8.
• --rxht=N
Set the host threshold register of RX rings to N, where N >= 0. The default value is 8.
• --rxfreet=N
Set the free threshold of RX descriptors to N, where 0 <= N < value of --rxd. The default
value is 0.
• --rxwt=N
Set the write-back threshold register of RX rings to N, where N >= 0. The default value
is 4.
• --txpt=N
Set the prefetch threshold register of TX rings to N, where N >= 0. The default value is
36.
• --txht=N
Set the host threshold register of TX rings to N, where N >= 0. The default value is 0.
• --txwt=N
Set the write-back threshold register of TX rings to N, where N >= 0. The default value is
0.
• --txfreet=N
Set the transmit free threshold of TX rings to N, where 0 <= N <= value of --txd. The
default value is 0.
• --txrst=N
Set the transmit RS bit threshold of TX rings to N, where 0 <= N <= value of --txd. The
default value is 0.
• --txqflags=0xXXXXXXXX
Set the hexadecimal bitmask of TX queue flags, where 0 <= N <= 0x7FFFFFFF. The
default value is 0.
Note: When using hardware offload functions such as VLAN or checksum, add
txqflags=0 to force the full-featured TX code path. In some PMDs this may already be
the default.
• --rx-queue-stats-mapping=(port,queue,mapping)[,(port,queue,mapping)]
Set the RX queues statistics counters mapping 0 <= mapping <= 15.
• --tx-queue-stats-mapping=(port,queue,mapping)[,(port,queue,mapping)]
Set the TX queues statistics counters mapping 0 <= mapping <= 15.
• --no-flush-rx
Don’t flush the RX streams before starting forwarding. Used mainly with the PCAP PMD.
• --txpkts=X[,Y]
Set TX segment sizes or total packet length. Valid for tx-only and flowgen forwarding
modes.
• --disable-link-check
Disable check on link status when starting/stopping ports.
The testpmd prompt has some, limited, readline support. Common bash command-line func-
tions such as Ctrl+a and Ctrl+e to go to the start and end of the prompt line are supported
as well as access to the command history via the up-arrow.
There is also support for tab completion. If you type a partial command and hit <TAB> you get
a list of the available completions:
testpmd> show port <TAB>
Note: Some examples in this document are too long to fit on one line and are shown wrapped
at “\” for display purposes:
testpmd> set flow_ctrl rx (on|off) tx (on|off) (high_water) (low_water) \
(pause_time) (send_xon) (port_id)
Help Functions
The testpmd has on-line help for the functions that are available at runtime. These are divided
into sections and can be accessed using help, help section or help all:
testpmd> help
Control Functions
start
start tx_first
Start packet forwarding with current configuration after sending specified number of bursts of
packets:
testpmd> start tx_first (""|burst_num)
stop
quit
Quit to prompt:
testpmd> quit
Display Functions
The functions in the following sections are used to display information about the testpmd con-
figuration or the NIC status.
show port
Display the RSS hash functions and RSS hash key of a port:
testpmd> show port (port_id) rss-hash ipv4|ipv4-frag|ipv4-tcp|ipv4-udp|ipv4-sctp|ipv4-other|ipv
clear port
Clear the port statistics for a given port or for all ports:
testpmd> clear port (info|stats|xstats|fdir|stat_qmap) (port_id|all)
For example:
testpmd> clear port stats all
show (rxq|txq)
show config
Displays the configuration of the application. The configuration comes from the command-line,
the runtime or the application defaults:
testpmd> show config (rxtx|cores|fwd|txpkts)
set fwd
read rxd
For example:
testpmd> read rxd 0 0 4
0x0000000B - 0x001D0180 / 0x0000000B - 0x001D0180
read txd
For example:
testpmd> read txd 0 0 4
0x00000001 - 0x24C3C440 / 0x000F0000 - 0x2330003C
Configuration Functions
The testpmd application can be configured from the runtime as well as from the command-line.
This section details the available configuration functions that are available.
set default
set verbose
Currently the only available levels are 0 (silent except for error) and 1 (fully verbose).
set nbport
set nbcore
Note: The number of cores used must not be greater than number of ports used multiplied by
the number of queues per port.
set coremask
Note: The master lcore is reserved for command line parsing only and cannot be masked on
for packet forwarding.
set portmask
set burst
set txpkts
Set the length of each segment of the TX-ONLY packets or length of packet for FLOWGEN
mode:
testpmd> set txpkts (x[,y]*)
set txsplit
Set the split policy for the TX packets, applicable for TX-ONLY and CSUM forwarding modes:
testpmd> set txsplit (off|on|rand)
Where:
• off disables packet copy & split for CSUM mode.
• on splits outgoing packets into multiple segments. The size of each segment and the
number of segments per packet are determined by the set txpkts command (see above).
• rand is the same as on, but the number of segments per packet is a random value
between 1 and the total number of segments.
set corelist
Note: The cores are used in the same order as specified on the command line.
set portlist
set tx loopback
Enable/disable tx loopback:
testpmd> set tx loopback (port_id) (on|off)
set macsec sc
Note: The pi argument is ignored for tx. Check the NIC Datasheet for hardware limits.
set macsec sa
Note: The IDX value must be 0 or 1. Check the NIC Datasheet for hardware limits.
Set VLAN strip for all queues in a pool for a VF from the PF:
testpmd> set vf vlan stripq (port_id) (vf_id) (on|off)
Set the inner or outer VLAN TPID for packet filtering on a port:
testpmd> vlan set (inner|outer) tpid (value) (port_id)
rx_vlan add
Add a VLAN ID, or all identifiers, to the set of VLAN identifiers filtered by port ID:
testpmd> rx_vlan add (vlan_id|all) (port_id)
Note: VLAN filter must be set on that port. VLAN ID < 4096. Depending on the NIC used,
number of vlan_ids may be limited to the maximum entries in VFTA table. This is important if
enabling all vlan_ids.
rx_vlan rm
Remove a VLAN ID, or all identifiers, from the set of VLAN identifiers filtered by port ID:
testpmd> rx_vlan rm (vlan_id|all) (port_id)
Add a VLAN ID, to the set of VLAN identifiers filtered for VF(s) for port ID:
testpmd> rx_vlan add (vlan_id) port (port_id) vf (vf_mask)
Remove a VLAN ID, from the set of VLAN identifiers filtered for VF(s) for port ID:
testpmd> rx_vlan rm (vlan_id) port (port_id) vf (vf_mask)
tunnel_filter add
Set an IP-in-GRE tunnel on port 0, and the filter type is Outer IP.
tunnel_filter remove
rx_vxlan_port add
rx_vxlan_port remove
tx_vlan set
tx_vlan reset
csum set
Select hardware or software calculation of the checksum when transmitting a packet using the
csum forwarding engine:
testpmd> csum set (ip|udp|tcp|sctp|outer-ip) (hw|sw) (port_id)
Where:
• ip|udp|tcp|sctp always relate to the inner layer.
• outer-ip relates to the outer IP layer (only for IPv4) in the case where the packet is rec-
ognized as a tunnel packet by the forwarding engine (vxlan, gre and ipip are supported).
See also the csum parse-tunnel command.
csum parse-tunnel
Define how tunneled packets should be handled by the csum forward engine:
testpmd> csum parse-tunnel (on|off) (tx_port_id)
If enabled, the csum forward engine will try to recognize supported tunnel headers (vxlan, gre,
ipip).
If disabled, tunnel packets are treated as non-tunneled packets (an inner header is handled as
packet payload).
Note: The port argument is the TX port like in the csum set command.
Example:
Consider a packet-in-packet (tunneled packet) like the following:
eth_out/ipv4_out/udp_out/vxlan/eth_in/ipv4_in/tcp_in
csum show
tso set
tso show
mac_addr add
mac_addr remove
set port-uta
set promisc
Set the promiscuous mode on for a port or for all ports. In promiscuous mode packets are not
dropped if they aren’t for the specified MAC address:
testpmd> set promisc (port_id|all) (on|off)
set allmulti
Same as the ifconfig (8) option. Controls how multicast packets are handled.
Set the unicast promiscuous mode for a VF from PF. It’s supported by Intel i40e NICs now. In
promiscuous mode packets are not dropped if they aren’t for the specified MAC address:
testpmd> set vf promisc (port_id) (vf_id) (on|off)
Set the multicast promiscuous mode for a VF from PF. It’s supported by Intel i40e NICs now.
In promiscuous mode packets are not dropped if they aren’t for the specified MAC address:
testpmd> set vf allmulti (port_id) (vf_id) (on|off)
set flow_ctrl rx
Where:
• high_water (integer): High threshold value to trigger XOFF.
• low_water (integer): Low threshold value to trigger XON.
• pause_time (integer): Pause quota in the Pause frame.
• send_xon (0/1): Send XON frame.
• mac_ctrl_frame_fwd: Enable receiving MAC control frames.
• autoneg: Change the auto-negotiation parameter.
set pfc_ctrl rx
Where:
• high_water (integer): High threshold value.
• low_water (integer): Low threshold value.
• pause_time (integer): Pause quota in the Pause frame.
set stat_qmap
set flush_rx
Set the flush on RX streams before forwarding. The default is flush on. Mainly used with PCAP
drivers to turn off the default behavior of flushing the first 512 packets on RX streams:
testpmd> set flush_rx off
Set the bypass mode for the lowest port on bypass enabled NIC:
testpmd> set bypass mode (normal|bypass|isolate) (port_id)
Set the event required to initiate the specified bypass mode for the lowest port on a bypass
enabled NIC:
testpmd> set bypass event (timeout|os_on|os_off|power_on|power_off) \
mode (normal|bypass|isolate) (port_id)
Where:
• timeout: Enable bypass after watchdog timeout.
• os_on: Enable bypass when OS/board is powered on.
• os_off: Enable bypass when OS/board is powered off.
• power_on: Enable bypass when power supply is turned on.
• power_off: Enable bypass when power supply is turned off.
Show the bypass configuration for a bypass enabled NIC using the lowest port on the NIC:
testpmd> show bypass config (port_id)
set link up
E-tag set
Delete an E-tag forwarding filter on a port:
testpmd> E-tag set filter del e-tag-id (value) port (port_id)
Port Functions
Note: Port configuration changes only become active when forwarding is started/restarted.
port attach
To attach a new pci device, the device should be recognized by kernel first. Then it should be
moved under DPDK management. Finally the port can be attached to testpmd.
For example, to move a pci device using ixgbe under DPDK management:
# Check the status of the available devices.
./usertools/dpdk-devbind.py --status
To attach a port created by a virtual device, the above steps are not needed.
For example, to attach a port whose pci address is 0000:0a:00.0:
testpmd> port attach 0000:0a:00.0
Attaching a new port...
EAL: PCI device 0000:0a:00.0 on NUMA socket -1
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI memory mapped at 0x7f83bfa00000
EAL: PCI memory mapped at 0x7f83bfa80000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 5
PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
Port 0 is attached. Now total ports is 1
Done
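For example, to attach a port created by the pcap virtual device (a hedged sketch; the iface
argument is illustrative):

testpmd> port attach net_pcap0,iface=eth0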
In this case, the identifier is net_pcap0. This identifier format is the same as the --vdev format
of DPDK applications.
For example, to re-attach a bonded port which has been previously detached, the mode and
slave parameters must be given.
testpmd> port attach net_bond_0,mode=0,slave=1
Attaching a new port...
port detach
To remove a pci device completely from the system, first detach the port from testpmd. Then
the device should be moved under kernel management. Finally the device can be removed
using kernel pci hotplug functionality.
For example, to move a pci device under kernel management:
sudo ./usertools/dpdk-devbind.py -b ixgbe 0000:0a:00.0
./usertools/dpdk-devbind.py --status
To remove a port created by a virtual device, the above steps are not needed.
port start
port stop
port close
Set the speed and duplex mode for all ports or a specific port:
testpmd> port config (port_id|all) speed (10|100|1000|10000|25000|40000|50000|100000|auto) \
duplex (half|full|auto)
This is equivalent to the --rxq, --txq, --rxd and --txd command-line options.
Set packet drop for packets with no descriptors on or off for all ports:
testpmd> port config all drop-en (on|off)
RSS is on by default.
The none option is equivalent to the --disable-rss command-line option.
The Link Bonding functions make it possible to dynamically create and manage link bonding
devices from within testpmd interactive prompt.
For example, to add Ethernet device (port 6) to a Link Bonding device (port 10):
testpmd> add bonding slave 6 10
For example, to remove the Ethernet slave device (port 6) from a Link Bonding device (port 10):
testpmd> remove bonding slave 6 10
For example, to set the bonding mode of a Link Bonding device (port 10) to broadcast (mode
3):
testpmd> set bonding mode 3 10
Set an Ethernet slave device as the primary device on a Link Bonding device:
testpmd> set bonding primary (slave id) (port id)
For example, to set the Ethernet slave device (port 6) as the primary port of a Link Bonding
device (port 10):
testpmd> set bonding primary 6 10
For example, to set the MAC address of a Link Bonding device (port 10) to 00:00:00:00:00:01:
testpmd> set bonding mac 10 00:00:00:00:00:01
Set the transmission policy for a Link Bonding device when it is in Balance XOR mode:
testpmd> set bonding xmit_balance_policy (port_id) (l2|l23|l34)
For example, set a Link Bonding device (port 10) to use a balance policy of layer 3+4 (IP
addresses & UDP ports):
testpmd> set bonding xmit_balance_policy 10 l34
Set the link status monitoring polling period in milliseconds for a bonding device.
This adds support for PMD slave devices which do not support link status interrupts. When the
mon_period is set to a value greater than 0 then all PMD’s which do not support link status ISR
will be queried every polling interval to check if their link status has changed:
testpmd> set bonding mon_period (port_id) (value)
For example, to set the link status monitoring polling period of bonded device (port 5) to 150ms:
testpmd> set bonding mon_period 5 150
For example, to show the configuration a Link Bonding device (port 9) with 3 slave devices (1,
3, 4) in balance mode with a transmission policy of layer 2+3:
testpmd> show bonding config 9
Bonding mode: 2
Balance Xmit Policy: BALANCE_XMIT_POLICY_LAYER23
Slaves (3): [1 3 4]
Active Slaves (3): [1 3 4]
Primary: [3]
Register Functions
The Register Functions can be used to read from and write to registers on the network card
referenced by a port number. This is mainly useful for debugging purposes. Reference should
be made to the appropriate datasheet for the network card for details on the register addresses
and fields that can be accessed.
read reg
For example, to examine the Flow Director control register (FDIRCTL, 0x0000EE000) on an
Intel 82599 10 GbE Controller:
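The command itself is not reproduced in this extract; a sketch consistent with the register
commands shown below:

testpmd> read reg 0 0xEE00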
read regfield
For example, reading the lowest two bits from the register in the example above:
testpmd> read regfield 0 0xEE00 0 1
port 0 PCI register at offset 0xEE00: bits[0, 1]=0x1 (1)
read regbit
For example, reading the lowest bit from the register in the example above:
testpmd> read regbit 0 0xEE00 0
port 0 PCI register at offset 0xEE00: bit 0=1
write reg
write regfield
write regbit
For example, to set the high bit in the register from the example above:
testpmd> write regbit 0 0xEE00 31 1
port 0 PCI register at offset 0xEE00: 0x8000000A (2147483658)
Filter Functions
This section details the available filter functions that are available.
Note these functions interface with the deprecated legacy filtering framework, superseded by
rte_flow. See Flow rules management.
ethertype_filter
Add or delete an L2 Ethertype filter, which identifies packets by their L2 Ethertype and assigns
them to a receive queue:
ethertype_filter (port_id) (add|del) (mac_addr|mac_ignr) (mac_address) \
ethertype (ether_type) (drop|fwd) queue (queue_id)
2tuple_filter
Add or delete a 2-tuple filter, which identifies packets by specific protocol and destination
TCP/UDP port and forwards packets into one of the receive queues:
2tuple_filter (port_id) (add|del) dst_port (dst_port_value) \
protocol (protocol_value) mask (mask_value) \
tcp_flags (tcp_flags_value) priority (prio_value) \
queue (queue_id)
5tuple_filter
Add or delete a 5-tuple filter, which consists of a 5-tuple (protocol, source and destination IP
addresses, source and destination TCP/UDP/SCTP port) and routes packets into one of the
receive queues:
5tuple_filter (port_id) (add|del) dst_ip (dst_address) src_ip \
(src_address) dst_port (dst_port_value) \
src_port (src_port_value) protocol (protocol_value) \
mask (mask_value) tcp_flags (tcp_flags_value) \
priority (prio_value) queue (queue_id)
syn_filter
Using the SYN filter, TCP packets whose SYN flag is set can be forwarded to a separate
queue:
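The command syntax is not shown in this extract; a hedged reconstruction, with parameter
names following the conventions of the other filter commands in this section:

syn_filter (port_id) (add|del) priority (high|low) queue (queue_id)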
flex_filter
With flex filter, packets can be recognized by any arbitrary pattern within the first 128 bytes of
the packet and routed into one of the receive queues:
flex_filter (port_id) (add|del) len (len_value) bytes (bytes_value) \
mask (mask_value) priority (prio_value) queue (queue_id)
flow_director_filter
The Flow Director works in receive mode to identify specific flows or sets of flows and route
them to specific queues.
Four types of filtering are supported which are referred to as Perfect Match, Signature, Perfect-
mac-vlan and Perfect-tunnel filters, the match mode is set by the --pkt-filter-mode
command-line parameter:
• Perfect match filters. The hardware checks a match between the masked fields of the
received packets and the programmed filters. The masked fields are for IP flow.
• Signature filters. The hardware checks a match between a hash-based signature of the
masked fields of the received packet.
• Perfect-mac-vlan match filters. The hardware checks a match between the masked fields
of the received packets and the programmed filters. The masked fields are for MAC VLAN
flow.
• Perfect-tunnel match filters. The hardware checks a match between the masked fields of
the received packets and the programmed filters. The masked fields are for tunnel flow.
The Flow Director filters can match the different fields for different type of packet: flow type,
specific input set per flow type and the flexible payload.
The Flow Director can also mask out parts of all of these fields so that filters are only applied
to certain fields or parts of the fields.
Different NICs may have different capabilities; the command show port fdir (port_id) can be
used to acquire this information.
# Commands to add flow director filters of different flow types:
flow_director_filter (port_id) mode IP (add|del|update) \
flow (ipv4-other|ipv4-frag|ipv6-other|ipv6-frag) \
src (src_ip_address) dst (dst_ip_address) \
tos (tos_value) proto (proto_value) ttl (ttl_value) \
vlan (vlan_value) flexbytes (flexbytes_value) \
(drop|fwd) pf|vf(vf_id) queue (queue_id) \
fd_id (fd_id_value)
flush_flow_director
flow_director_mask
flow_director_flex_mask
Set masks of the flow director's flexible payload based on a certain flow type:
testpmd> flow_director_flex_mask (port_id) \
flow (none|ipv4-other|ipv4-frag|ipv4-tcp|ipv4-udp|ipv4-sctp| \
ipv6-other|ipv6-frag|ipv6-tcp|ipv6-udp|ipv6-sctp| \
l2_payload|all) (mask)
Example, to set flow director’s flex mask for all flow type on port 0:
testpmd> flow_director_flex_mask 0 flow all \
(0xff,0xff,0,0,0,0,0,0,0,0,0,0,0,0,0,0)
flow_director_flex_payload
For example, to select the first 16 bytes from the offset 4 (bytes) of packet’s payload as flexible
payload:
testpmd> flow_director_flex_payload 0 l4 \
(4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19)
get_sym_hash_ena_per_port
set_sym_hash_ena_per_port
get_hash_global_config
set_hash_global_config
set_hash_input_set
udp-key|gre-key|fld-1st|fld-2nd|fld-3rd|fld-4th|fld-5th|fld-6th|fld-7th| \
fld-8th|none) (select|add)
For example, to add source IP to hash input set for flow type of ipv4-udp on port 0:
testpmd> set_hash_input_set 0 ipv4-udp src-ipv4 add
set_fdir_input_set
The Flow Director filters can match the different fields for different type of packet, i.e. specific
input set on per flow type and the flexible payload. This command can be used to change input
set for each flow type.
Set the input set for flow director:
set_fdir_input_set (port_id) (ipv4-frag|ipv4-tcp|ipv4-udp|ipv4-sctp| \
ipv4-other|ipv6|ipv6-frag|ipv6-tcp|ipv6-udp|ipv6-sctp|ipv6-other| \
l2_payload) (ivlan|ethertype|src-ipv4|dst-ipv4|src-ipv6|dst-ipv6|ipv4-tos| \
ipv4-proto|ipv4-ttl|ipv6-tc|ipv6-next-header|ipv6-hop-limits| \
udp-src-port|udp-dst-port|tcp-src-port|tcp-dst-port|sctp-src-port| \
sctp-dst-port|sctp-veri-tag|none) (select|add)
For example to add source IP to FD input set for flow type of ipv4-udp on port 0:
testpmd> set_fdir_input_set 0 ipv4-udp src-ipv4 add
global_config
For example to set GRE key length for input set to 4 bytes on port 0:
testpmd> global_config 0 gre-key-len 4
Control of the generic flow API (rte_flow) is fully exposed through the flow command (valida-
tion, creation, destruction and queries).
Considering rte_flow overlaps with all Filter Functions, using both features simultaneously may
cause undefined side-effects and is therefore not recommended.
flow syntax
Because the flow command uses dynamic tokens to handle the large number of possible flow
rules combinations, its behavior differs slightly from other commands, in particular:
• Pressing ? or the <tab> key displays contextual help for the current token, not that of the
entire command.
• Optional and repeated parameters are supported (provided they are listed in the contex-
tual help).
The first parameter stands for the operation mode. Possible operations and their general syntax
are described below. They are covered in detail in the following sections.
flow validate reports whether a flow rule would be accepted by the underlying device in
its current state but stops short of creating it. It is bound to rte_flow_validate():
flow validate {port_id}
[group {group_id}] [priority {level}] [ingress] [egress]
pattern {item} [/ {item} [...]] / end
actions {action} [/ {action} [...]] / end
This command uses the same parameters as flow create, their format is described in Cre-
ating flow rules.
Check whether redirecting any Ethernet packet received on port 0 to RX queue index 6 is
supported:
testpmd> flow validate 0 ingress pattern eth / end
actions queue index 6 / end
Flow rule validated
testpmd>
flow create validates and creates the specified flow rule. It is bound to
rte_flow_create():
flow create {port_id}
[group {group_id}] [priority {level}] [ingress] [egress]
pattern {item} [/ {item} [...]] / end
actions {action} [/ {action} [...]] / end
Note that PMDs may refuse rules that essentially do nothing, such as an empty pattern with
no meaningful action.
All unspecified object values are automatically initialized to 0.
Attributes
These tokens affect flow rule attributes (struct rte_flow_attr) and are specified before
the pattern token.
• group {group id}: priority group.
• priority {level}: priority level within group.
• ingress: rule applies to ingress traffic.
• egress: rule applies to egress traffic.
Each instance of an attribute specified several times overrides the previous value as shown
below (group 4 is used):
testpmd> flow create 0 group 42 group 24 group 4 [...]
Matching pattern
A matching pattern starts after the pattern token. It is made of pattern items and is termi-
nated by a mandatory end item.
Items are named after their type (RTE_FLOW_ITEM_TYPE_ from enum
rte_flow_item_type).
The / token is used as a separator between pattern items as shown below:
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / end [...]
Note that protocol items like these must be stacked from lowest to highest layer to make sense.
For instance, the following rule is either invalid or unlikely to match any packet:
testpmd> flow create 0 ingress pattern eth / udp / ipv4 / end [...]
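By contrast, a correctly stacked rule that matches specific inner fields might look like this (a
hedged sketch; the addresses are illustrative):

testpmd> flow create 0 ingress pattern eth / ipv4 src is 10.1.1.1 dst is 10.2.0.0 / end [...]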
This rule matches all IPv4 traffic with the specified properties.
In this example, src and dst are field names of the underlying struct
rte_flow_item_ipv4 object. All item properties can be specified in a similar fashion.
The is token means that the subsequent value must be matched exactly, and assigns spec
and mask fields in struct rte_flow_item accordingly. Possible assignment tokens are:
• is: match value perfectly (with full bit-mask).
• spec: match value according to configured bit-mask.
• last: specify upper bound to establish a range.
• mask: specify bit-mask with relevant bits set to one.
• prefix: generate bit-mask from a prefix length.
These yield identical results:
ipv4 src is 10.1.1.1
ipv4 src spec 10.1.1.1 src mask 255.255.255.255
ipv4 src spec 10.1.1.1 src prefix 32
ipv4 src is 10.1.1.1 src last 10.1.1.1 # range with a single value
ipv4 src is 10.1.1.1 src last 0 # 0 disables range
Pattern items
This section lists supported pattern items and their attributes, if any.
• end: end list of pattern items.
• void: no-op pattern item.
• invert: perform actions when pattern does not match.
• any: match any protocol for the current layer.
– num {unsigned}: number of layers covered.
• pf: match packets addressed to the physical function.
• vf: match packets addressed to a virtual function ID.
– id {unsigned}: destination VF ID.
• port: device-specific physical port index to use.
– index {unsigned}: physical port index.
• raw: match an arbitrary byte string.
– relative {boolean}: look for pattern after the previous item.
– search {boolean}: search pattern from offset (see also limit).
– offset {integer}: absolute or relative offset for pattern.
– limit {unsigned}: search area limit for start of pattern.
– pattern {string}: byte string to look for.
• eth: match Ethernet header.
– dst {MAC-48}: destination MAC.
– src {MAC-48}: source MAC.
– type {unsigned}: EtherType.
• vlan: match 802.1Q/ad VLAN tag.
– tpid {unsigned}: tag protocol identifier.
– tci {unsigned}: tag control information.
– pcp {unsigned}: priority code point.
– dei {unsigned}: drop eligible indicator.
– vid {unsigned}: VLAN identifier.
• ipv4: match IPv4 header.
– tos {unsigned}: type of service.
– ttl {unsigned}: time to live.
– proto {unsigned}: next protocol ID.
Actions list
A list of actions starts after the actions token in the same fashion as Matching pattern; actions
are separated by / tokens and the list is terminated by a mandatory end action.
Actions are named after their type (RTE_FLOW_ACTION_TYPE_ from enum
rte_flow_action_type).
Dropping all incoming UDPv4 packets can be expressed as follows:
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / end
actions drop / end
Several actions have configurable properties which must be specified when there is no valid
default value. For example, queue requires a target queue index.
This rule redirects incoming UDPv4 traffic to queue index 6:
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / end
actions queue index 6 / end
As defined by rte_flow, the list is not ordered, all actions of a given rule are performed simulta-
neously. These are equivalent:
queue index 6 / void / mark id 42 / end
void / mark id 42 / queue index 6 / end
All actions in a list should have different types, otherwise only the last action of a given type is
taken into account:
queue index 4 / queue index 5 / queue index 6 / end # will use queue 6
drop / drop / drop / end # drop is performed only once
mark id 42 / queue index 3 / mark id 24 / end # mark will be 24
Considering they are performed simultaneously, opposite and overlapping actions can some-
times be combined when the end result is unambiguous:
drop / queue index 6 / end # drop has no effect
drop / dup index 6 / end # same as above
queue index 6 / rss queues 6 7 8 / end # queue has no effect
drop / passthru / end # drop has no effect
Actions
flow destroy destroys one or more rules from their rule ID (as returned by flow create),
this command calls rte_flow_destroy() as many times as necessary:
flow destroy {port_id} rule {rule_id} [...]
It does not report anything for rule IDs that do not exist. The usual error message is shown
when a rule cannot be destroyed:
Caught error type [...] ([...]): [...]
flow flush destroys all rules on a device and does not take extra arguments. It is bound to
rte_flow_flush():
flow flush {port_id}
flow query queries a specific action of a flow rule having that ability. Such actions collect
information that can be reported using this command. It is bound to rte_flow_query():
flow query {port_id} {rule_id} {action}
If successful, it will display either the retrieved data for known actions or the following message:
Cannot display result for action type [...] ([...])
Otherwise, it will complain either that the rule does not exist or that some error occurred:
Flow rule #[...] not found
Caught error type [...] ([...]): [...]
Currently only the count action is supported. This action reports the number of packets that
hit the flow rule and the total number of bytes. Its output has the following format:
count:
hits_set: [...] # whether "hits" contains a valid value
bytes_set: [...] # whether "bytes" contains a valid value
hits: [...] # number of packets
bytes: [...] # number of bytes
flow list lists existing flow rules sorted by priority and optionally filtered by group identifiers:
flow list {port_id} [group {group_id}] [...]
This command only fails with the following message if the device does not exist:
Invalid port [...]
Output consists of a header line followed by a short description of each flow rule, one per line.
There is no output at all when no flow rules are configured on the device:
Rules are sorted by priority (i.e. group ID first, then priority level):
testpmd> flow list 1
ID Group Prio Attr Rule
0 0 0 i- ETH => COUNT
6 0 500 i- ETH IPV6 TCP => DROP COUNT
5 0 1000 i- ETH IPV6 ICMP => QUEUE
1 24 0 i- ETH IPV4 UDP => QUEUE
4 24 10 i- ETH IPV4 TCP => DROP
3 24 20 i- ETH IPV4 => DROP
2 24 42 i- ETH IPV4 UDP => QUEUE
7 63 0 i- ETH IPV6 UDP VXLAN => MARK QUEUE
testpmd>
Note: Features marked with “P” are partially supported. Refer to the appropriate NIC guide
for details.
The BNX2X poll mode driver library (librte_pmd_bnx2x) implements support for QLogic
578xx 10/20 Gbps family of adapters as well as their virtual functions (VF) in SR-IOV con-
text. It is supported on several standard Linux distros like Red Hat 7.x and SLES12 OS. It is
compile-tested under FreeBSD OS.
More information can be found at QLogic Corporation’s Official Website.
Supported Features
Non-supported Features
Co-existence considerations
• BCM578xx being a CNA can have both NIC and Storage personalities. However, coex-
istence with storage protocol drivers (cnic, bnx2fc and bnx2fi) is not supported on the
same adapter. So storage personality has to be disabled on that adapter when used in
DPDK applications.
• For SR-IOV case, bnx2x PMD will be used to bind to SR-IOV VF device and Linux native
kernel driver (bnx2x) will be attached to SR-IOV PF.
Supported QLogic NICs
• 578xx
Prerequisites
• Requires firmware version 7.2.51.0. It is included in most of the standard Linux distros.
If it is not available visit QLogic Driver Download Center to get the required firmware.
Pre-Installation Configuration
The following options can be modified in the .config file. Please note that enabling debugging
options may affect system performance.
• CONFIG_RTE_LIBRTE_BNX2X_PMD (default y)
Toggle compilation of bnx2x driver.
• CONFIG_RTE_LIBRTE_BNX2X_DEBUG (default n)
Toggle display of generic debugging messages.
• CONFIG_RTE_LIBRTE_BNX2X_DEBUG_INIT (default n)
Toggle display of initialization related messages.
• CONFIG_RTE_LIBRTE_BNX2X_DEBUG_TX (default n)
Toggle display of transmit fast path run-time messages.
• CONFIG_RTE_LIBRTE_BNX2X_DEBUG_RX (default n)
Toggle display of receive fast path run-time messages.
• CONFIG_RTE_LIBRTE_BNX2X_DEBUG_PERIODIC (default n)
Toggle display of register reads and writes.
Driver Compilation
To compile BNX2X PMD for Linux x86_64 gcc target, run the following “make” command:
cd <DPDK-source-directory>
make config T=x86_64-native-linuxapp-gcc install
To compile BNX2X PMD for Linux x86_64 clang target, run the following “make” command:
cd <DPDK-source-directory>
make config T=x86_64-native-linuxapp-clang install
To compile BNX2X PMD for Linux i686 gcc target, run the following “make” command:
cd <DPDK-source-directory>
make config T=i686-native-linuxapp-gcc install
To compile BNX2X PMD for FreeBSD x86_64 clang target, run the following “gmake” com-
mand:
cd <DPDK-source-directory>
gmake config T=x86_64-native-bsdapp-clang install
To compile BNX2X PMD for FreeBSD x86_64 gcc target, run the following “gmake” command:
cd <DPDK-source-directory>
gmake config T=x86_64-native-bsdapp-gcc install -Wl,-rpath=/usr/local/lib/gcc49 CC=gcc49
Linux
Linux Installation
This section demonstrates how to launch testpmd with QLogic 578xx devices managed by
librte_pmd_bnx2x in Linux operating system.
1. Request huge pages:
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
2. Load the igb_uio or vfio-pci kernel module, for example:
modprobe vfio-pci
3. Bind the QLogic adapters to igb_uio or vfio-pci loaded in the previous step:
./usertools/dpdk-devbind.py --bind igb_uio 0000:84:00.0 0000:84:00.1
or
Setup VFIO permissions for regular users and then bind to vfio-pci:
sudo chmod a+x /dev/vfio
Example output:
[...]
EAL: PCI device 0000:84:00.0 on NUMA socket 1
EAL: probe driver: 14e4:168e rte_bnx2x_pmd
EAL: PCI memory mapped at 0x7f14f6fe5000
EAL: PCI memory mapped at 0x7f14f67e5000
Example output:
[...]
Capabilities: [1b8 v1] Alternative Routing-ID Interpretation (ARI)
[...]
Capabilities: [1c0 v1] Single Root I/O Virtualization (SR-IOV)
[...]
Kernel driver in use: igb_uio
Example output:
systemd-udevd[4848]: renamed network interface eth0 to ens5f0
systemd-udevd[4848]: renamed network interface eth1 to ens5f1
4. Create VF device(s):
Echo the number of VFs to be created into “sriov_numvfs” sysfs entry of the parent PF.
For example:
echo 2 > /sys/devices/pci0000:00/0000:00:03.0/0000:81:00.0/sriov_numvfs
5. Assign a MAC address to the VF, for example:
ip link set ens5f0 vf 0 mac 52:54:00:2f:9d:e8
6. PCI Passthrough:
The VF devices may be passed through to the guest VM using virt-manager or virsh etc.
bnx2x PMD should be used to bind the VF devices in the guest VM using the instructions
outlined in the Application notes below.
Limitations
With the current driver, allocated mbufs must be large enough to hold the entire received frame.
If the mbufs are not large enough, the packets will be dropped. This is most limiting when jumbo
frames are used.
The CXGBE PMD (librte_pmd_cxgbe) provides poll mode driver support for Chelsio T5 10/40
Gbps family of adapters. CXGBE PMD has support for the latest Linux and FreeBSD operating
systems.
More information can be found at Chelsio Communications Official Website.
Features
• VLAN filtering
• Checksum offload
• Promiscuous mode
• All multicast mode
• Port hardware statistics
• Jumbo frames
Limitations
The Chelsio T5 devices provide two/four ports but expose a single PCI bus address, thus, li-
brte_pmd_cxgbe registers itself as a PCI driver that allocates one Ethernet device per detected
port.
For this reason, one cannot whitelist/blacklist a single port without whitelisting/blacklisting the
other ports on the same device.
Supported Chelsio T5 NICs
• 1G NICs: T502-BT
• 10G NICs: T520-BT, T520-CR, T520-LL-CR, T520-SO-CR, T540-CR
• 40G NICs: T580-CR, T580-LP-CR, T580-SO-CR
• Other T5 NICs: T522-CR
Prerequisites
• Requires firmware version 1.13.32.0 and higher. Visit Chelsio Download Center to get
latest firmware bundled with the latest Chelsio Unified Wire package.
For Linux, installing and loading the latest cxgb4 kernel driver from the Chelsio Unified
Wire package should get you the latest firmware. More information can be obtained from
the User Guide that is bundled with the Chelsio Unified Wire package.
For FreeBSD, the latest firmware obtained from the Chelsio Unified Wire package must
be manually flashed via cxgbetool available in FreeBSD source repository.
Instructions on how to manually flash the firmware are given in section Linux Installation
for Linux and section FreeBSD Installation for FreeBSD.
Pre-Installation Configuration
The following options can be modified in the .config file. Please note that enabling debugging
options may affect system performance.
• CONFIG_RTE_LIBRTE_CXGBE_PMD (default y)
Toggle compilation of librte_pmd_cxgbe driver.
• CONFIG_RTE_LIBRTE_CXGBE_DEBUG (default n)
Toggle display of generic debugging messages.
• CONFIG_RTE_LIBRTE_CXGBE_DEBUG_REG (default n)
Toggle display of registers related run-time check messages.
• CONFIG_RTE_LIBRTE_CXGBE_DEBUG_MBOX (default n)
Toggle display of firmware mailbox related run-time check messages.
• CONFIG_RTE_LIBRTE_CXGBE_DEBUG_TX (default n)
Toggle display of transmission data path run-time check messages.
• CONFIG_RTE_LIBRTE_CXGBE_DEBUG_RX (default n)
Toggle display of receiving data path run-time check messages.
Driver Compilation
To compile CXGBE PMD for Linux x86_64 gcc target, run the following “make” command:
cd <DPDK-source-directory>
make config T=x86_64-native-linuxapp-gcc install
To compile CXGBE PMD for FreeBSD x86_64 clang target, run the following “gmake” com-
mand:
cd <DPDK-source-directory>
gmake config T=x86_64-native-bsdapp-clang install
Linux
Linux Installation
Steps to manually install the latest firmware from the downloaded Chelsio Unified Wire package
for Linux operating system are as follows:
1. Load the kernel module:
modprobe cxgb4
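2. Use ifconfig to get the interface names assigned to the Chelsio card (a sketch; 00:07:43 is the Chelsio OUI visible in the example output below):
ifconfig -a | grep "00:07:43"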
Example output:
p1p1 Link encap:Ethernet HWaddr 00:07:43:2D:EA:C0
p1p2 Link encap:Ethernet HWaddr 00:07:43:2D:EA:C8
3. Install cxgbtool:
cd <path_to_uwire>/tools/cxgbtool
make install
4. Use cxgbtool to load the firmware config file onto the card:
cxgbtool p1p1 loadcfg <path_to_uwire>/src/network/firmware/t5-config.txt
Example output:
firmware-version: 1.13.32.0, TP 0.1.4.8
Running testpmd
This section demonstrates how to launch testpmd with Chelsio T5 devices managed by li-
brte_pmd_cxgbe in Linux operating system.
1. Change to DPDK source directory where the target has been compiled in section Driver
Compilation:
cd <DPDK-source-directory>
3. Get the PCI bus addresses of the interfaces bound to cxgb4 driver:
dmesg | tail -2
Example output:
cxgb4 0000:02:00.4 p1p1: renamed from eth0
cxgb4 0000:02:00.4 p1p2: renamed from eth1
Note: Both the interfaces of a Chelsio T5 2-port adapter are bound to the same PCI bus
address.
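7. Load igb_uio or vfio-pci driver. The igb_uio branch is shown below as a sketch; the kmod path assumes the x86_64-native-linuxapp-gcc target:
modprobe uio
insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko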
or
modprobe vfio-pci
8. Bind the Chelsio T5 adapters to igb_uio or vfio-pci loaded in the previous step:
./usertools/dpdk-devbind.py --bind igb_uio 0000:02:00.4
or
Setup VFIO permissions for regular users and then bind to vfio-pci:
sudo chmod a+x /dev/vfio
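The vfio-pci binding referred to above would then be done with the same tool (a sketch; the additional chmod is an assumption):
sudo chmod 0666 /dev/vfio/*
./usertools/dpdk-devbind.py --bind vfio-pci 0000:02:00.4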
Note: Currently, CXGBE PMD only supports the binding of PF4 for Chelsio T5 NICs.
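9. Start testpmd with basic parameters (a sketch; core mask and channel count are illustrative):
./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 0000:02:00.4 -- -i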
Example output:
[...]
EAL: PCI device 0000:02:00.4 on NUMA socket -1
EAL: probe driver: 1425:5401 rte_cxgbe_pmd
EAL: PCI memory mapped at 0x7fd7c0200000
EAL: PCI memory mapped at 0x7fd77cdfd000
EAL: PCI memory mapped at 0x7fd7c10b7000
PMD: rte_cxgbe_pmd: fw: 1.13.32.0, TP: 0.1.4.8
PMD: rte_cxgbe_pmd: Coming up as MASTER: Initializing adapter
Interactive-mode selected
Configuring Port 0 (socket 0)
Port 0: 00:07:43:2D:EA:C0
Configuring Port 1 (socket 0)
Port 1: 00:07:43:2D:EA:C8
Checking link statuses...
PMD: rte_cxgbe_pmd: Port0: passive DA port module inserted
PMD: rte_cxgbe_pmd: Port1: passive DA port module inserted
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd>
Note: Flow control pause TX/RX is disabled by default and can be enabled via testpmd. Refer
section Enable/Disable Flow Control for more details.
FreeBSD
FreeBSD Installation
Steps to manually install the latest firmware from the downloaded Chelsio Unified Wire package
for FreeBSD operating system are as follows:
1. Load the kernel module:
kldload if_cxgbe
2. Use dmesg to get the t5nex instance assigned to the Chelsio card:
dmesg | grep "t5nex"
Example output:
Example output:
dev.t5nex.0.firmware_version: 1.13.32.0
Running testpmd
This section demonstrates how to launch testpmd with Chelsio T5 devices managed by li-
brte_pmd_cxgbe in FreeBSD operating system.
1. Change to DPDK source directory where the target has been compiled in section Driver
Compilation:
cd <DPDK-source-directory>
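2. Copy the contigmem kernel module to /boot/kernel/ (a sketch; the path assumes the x86_64-native-bsdapp-clang target):
cp x86_64-native-bsdapp-clang/kmod/contigmem.ko /boot/kernel/
3. Add lines such as the following to /boot/loader.conf:
hw.contigmem.num_buffers=2
hw.contigmem.buffersize=1073741824
contigmem_load="YES"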
The above lines load the contigmem kernel module during boot process and allocate 2 x
1G blocks of contiguous memory to be used for DPDK later on. This is to avoid issues
with potential memory fragmentation during later system up time, which may result in
failure of allocating the contiguous memory required for the contigmem kernel module.
4. Restart the system and ensure the contigmem module is loaded successfully:
reboot
kldstat | grep "contigmem"
Example output:
2 1 0xffffffff817f1000 3118 contigmem.ko
5. Repeat step 1 to ensure that you are in the DPDK source directory.
7. Get the PCI bus addresses of the interfaces bound to t5nex driver:
pciconf -l | grep "t5nex"
Example output:
t5nex0@pci0:2:0:4: class=0x020000 card=0x00001425 chip=0x54011425 rev=0x00
Note: Both the interfaces of a Chelsio T5 2-port adapter are bound to the same PCI bus
address.
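Adding the device to the nic_uio driver list is typically done with a line such as the following in /boot/loader.conf (a sketch):
hw.nic_uio.bdfs="2:0:4"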
This automatically binds 2:0:4 to nic_uio kernel driver when it is loaded in the next step.
Note: Currently, CXGBE PMD only supports the binding of PF4 for Chelsio T5 NICs.
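Loading nic_uio and starting testpmd would then look roughly as follows (a sketch; core mask and channel count are illustrative):
kldload ./x86_64-native-bsdapp-clang/kmod/nic_uio.ko
./x86_64-native-bsdapp-clang/app/testpmd -c 0xf -n 4 -w 0000:02:00.4 -- -i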
Example output:
[...]
EAL: PCI device 0000:02:00.4 on NUMA socket 0
EAL: probe driver: 1425:5401 rte_cxgbe_pmd
EAL: PCI memory mapped at 0x8007ec000
EAL: PCI memory mapped at 0x842800000
EAL: PCI memory mapped at 0x80086c000
PMD: rte_cxgbe_pmd: fw: 1.13.32.0, TP: 0.1.4.8
PMD: rte_cxgbe_pmd: Coming up as MASTER: Initializing adapter
Interactive-mode selected
Configuring Port 0 (socket 0)
Port 0: 00:07:43:2D:EA:C0
Configuring Port 1 (socket 0)
Port 1: 00:07:43:2D:EA:C8
Checking link statuses...
PMD: rte_cxgbe_pmd: Port0: passive DA port module inserted
PMD: rte_cxgbe_pmd: Port1: passive DA port module inserted
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd>
Note: Flow control pause TX/RX is disabled by default and can be enabled via testpmd. Refer
section Enable/Disable Flow Control for more details.
Enable/Disable Flow Control
Flow control pause TX/RX is disabled by default and can be enabled via testpmd as follows:
testpmd> set flow_ctrl rx on tx on 0 0 0 0 mac_ctrl_frame_fwd off autoneg on 0
testpmd> set flow_ctrl rx on tx on 0 0 0 0 mac_ctrl_frame_fwd off autoneg on 1
Jumbo Mode
There are two ways to enable sending and receiving of jumbo frames via testpmd. One method
involves using the mtu command, which changes the mtu of an individual port without having
to stop the selected port. Another method involves stopping all the ports first and then running
max-pkt-len command to configure the mtu of all the ports with a single command.
• To configure each port individually, run the mtu command as follows:
testpmd> port config mtu 0 9000
testpmd> port config mtu 1 9000
• To configure all the ports at once, stop all the ports first and run the max-pkt-len command
as follows:
testpmd> port stop all
testpmd> port config all max-pkt-len 9000
The DPDK EM poll mode driver supports the following emulated devices:
• qemu-kvm emulated Intel® 82540EM Gigabit Ethernet Controller (qemu e1000 device)
• VMware* emulated Intel® 82545EM Gigabit Ethernet Controller
• VMware emulated Intel® 82574L Gigabit Ethernet Controller.
Validated Hypervisors
When using an older kernel or a kernel from a distribution without the kvm modules, you
must download (from the same link), compile and install the modules yourself:
tar xjf kvm-kmod-release.tar.bz2
cd kvm-kmod-release
./configure
make
sudo make install
sudo /sbin/modprobe kvm-intel
Note: Qemu provides several choices for the emulated network device backend.
Most commonly used is a TAP networking backend that uses a TAP networking device in
the host. For more information about the networking backends supported by Qemu and the
different options for configuring networking with Qemu, please refer to:
— http://www.linux-kvm.org/page/Networking
— http://wiki.qemu.org/Documentation/Networking
— http://qemu.weilnetz.de/qemu-doc.html
For example, to start a VM with two emulated e1000 devices, issue the following com-
mand:
/usr/local/kvm/bin/qemu-system-x86_64 -cpu host -smp 4 -hda qemu1.raw -m 1024
-net nic,model=e1000,vlan=1,macaddr=DE:AD:1E:00:00:01
-net tap,vlan=1,ifname=tapvm01,script=no,downscript=no
-net nic,model=e1000,vlan=2,macaddr=DE:AD:1E:00:00:02
-net tap,vlan=2,ifname=tapvm02,script=no,downscript=no
where:
— -m = memory to assign
— -smp = number of smp cores
— -hda = virtual disk image
This command starts a new virtual machine with two emulated 82540EM devices, backed
by two TAP networking host interfaces, tapvm01 and tapvm02.
# ip tuntap show
tapvm01: tap
tapvm02: tap
The ENA PMD is a DPDK poll-mode driver for the Amazon Elastic Network Adapter (ENA)
family.
Overview
The ENA driver exposes a lightweight management interface with a minimal set of memory
mapped registers and an extendable command set through an Admin Queue.
The driver supports a wide range of ENA adapters, is link-speed independent (i.e., the same
driver is used for 10GbE, 25GbE, 40GbE, etc.), and it negotiates and supports an extendable
feature set.
ENA adapters allow high speed and low overhead Ethernet traffic processing by providing a
dedicated Tx/Rx queue pair per CPU core.
The ENA driver supports industry standard TCP/IP offload features such as checksum offload
and TCP transmit segmentation offload (TSO).
Receive-side scaling (RSS) is supported for multi-core scaling.
Some of the ENA devices support a working mode called Low-latency Queue (LLQ), which
saves several more microseconds.
Management Interface
I/O operations are based on Tx and Rx Submission Queues (Tx SQ and Rx SQ correspond-
ingly). Each SQ has a completion queue (CQ) associated with it.
The SQs and CQs are implemented as descriptor rings in contiguous physical memory.
Refer to ena_eth_io_defs.h for the detailed structure of the descriptor
The driver supports multi-queue for both Tx and Rx.
Configuration information
Building DPDK
See the DPDK Getting Started Guide for Linux for instructions on how to build DPDK.
By default the ENA PMD library will be built into the DPDK library.
For configuring and using UIO and VFIO frameworks, please also refer the documentation that
comes with DPDK suite.
Any Linux distribution fulfilling the conditions described in the System Requirements section of
the DPDK documentation; also refer to the DPDK Release Notes.
Supported features
• Jumbo frames up to 9K
• Port Hardware Statistics
• IPv4/TCP/UDP checksum offload
• TSO offload
• Multiple receive and transmit queues
• RSS
• Low Latency Queue for Tx
Unsupported features
The features supported by the device and not yet supported by this PMD include:
• Asynchronous Event Notification Queue (AENQ)
Prerequisites
1. Prepare the system as recommended by DPDK suite. This includes environment vari-
ables, hugepages configuration, tool-chains and configuration
2. Insert igb_uio kernel module using the command ‘modprobe igb_uio’
3. Bind the intended ENA device to igb_uio module
At this point the system should be ready to run DPDK applications. Once the application runs
to completion, the ENA can be detached from igb_uio if necessary.
Usage example
This section demonstrates how to launch testpmd with Amazon ENA devices managed by
librte_pmd_ena.
1. Load the kernel modules:
modprobe uio
insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
Note: Currently Amazon ENA PMD driver depends on igb_uio user space I/O kernel
module
3. Bind the UIO driver to the ENA device (using the binding tool provided by DPDK):
./usertools/dpdk-devbind.py --bind=igb_uio 0000:02:00.1
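4. Start testpmd with basic parameters (a sketch; core mask and channel count are illustrative):
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i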
Example output:
[...]
EAL: PCI device 0000:02:00.1 on NUMA socket -1
EAL: probe driver: 1d0f:ec20 rte_ena_pmd
EAL: PCI memory mapped at 0x7f9b6c400000
PMD: eth_ena_dev_init(): Initializing 0:2:0.1
Interactive-mode selected
Configuring Port 0 (socket 0)
Port 0: 00:00:00:11:00:01
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd>
ENIC PMD is the DPDK poll-mode driver for the Cisco Systems Inc. VIC Ethernet NICs. These
adapters are also referred to as vNICs below. If you are running or would like to run DPDK
software applications on Cisco UCS servers using Cisco VIC adapters, the following documentation
is relevant.
ENIC PMD support is integrated into the DPDK suite. dpdk-<version>.tar.gz should be down-
loaded from http://dpdk.org
Configuration information
* The number of WQs should be greater or equal to the value of the expected
nb_tx_q parameter in the call to the rte_eth_dev_configure()
* The number of CQs should be set so that there is one CQ for each WQ, and one
CQ for each pair of RQs.
For example: If the application requires 3 Rx queues, and 3 Tx queues, the vNIC
should be configured to have at least 3 WQs, 6 RQs (3 pairs), and 6 CQs (3 for use
by WQs + 3 for use by the 3 pairs of RQs).
– Size of Queues
Likewise, the number of receive and transmit descriptors is configurable on a
per-vNIC basis via the UCS Manager and should be greater than or equal to
the nb_rx_desc and nb_tx_desc parameters expected to be used in the calls to
rte_eth_rx_queue_setup() and rte_eth_tx_queue_setup() respectively. An applica-
tion requesting more than the set size will be limited to that size.
Unless there is a lack of resources due to creating many vNICs, it is recommended
that the WQ and RQ sizes be set to the maximum. This gives the application the
greatest amount of flexibility in its queue configuration.
* Note: Since the introduction of Rx scatter, for performance reasons, this PMD
uses two RQs on the vNIC per receive queue in DPDK. One RQ holds descrip-
tors for the start of a packet, and the second RQ holds the descriptors for the rest
of the fragments of a packet. This means that the nb_rx_desc parameter to
rte_eth_rx_queue_setup() can be greater than 4096. The exact amount will
depend on the size of the mbufs being used for receives, and the MTU size.
For example: If the mbuf size is 2048, and the MTU is 9000, then receiving a
full size packet will take 5 descriptors, 1 from the start-of-packet queue, and 4
from the second queue. Assuming that the RQ size was set to the maximum
of 4096, then the application can specify up to 1024 + 4096 as the nb_rx_desc
parameter to rte_eth_rx_queue_setup().
– Interrupts
Only one interrupt per vNIC interface should be configured in the UCS manager re-
gardless of the number receive/transmit queues. The ENIC PMD uses this interrupt
to get information about link status and errors in the fast path.
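As an illustration of the queue-count example above, a testpmd run that uses 3 Rx and 3 Tx queues (and therefore needs a vNIC provisioned with at least 3 WQs, 6 RQs and 6 CQs) could be started as follows (a sketch; core mask and channel count are illustrative):
./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -- --rxq=3 --txq=3 -i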
Advanced filtering support was added to 1300 series VIC firmware starting with version 2.0.13
for C-series UCS servers and version 3.1.2 for UCSM managed blade servers. In order to
enable advanced filtering the ‘Advanced filter’ radio button should be enabled via CIMC or
UCSM followed by a reboot of the server.
With advanced filters, perfect matching of all fields of IPv4, IPv6 headers as well as TCP, UDP
and SCTP L4 headers is available through flow director. Masking of these fields for partial
match is also supported.
Without advanced filter support, the flow director is limited to IPv4 perfect filtering of the 5-tuple
with no masking of fields supported.
Limitations
• Limited flow director support on 1200 series and 1300 series Cisco VIC adapters with old
firmware. Please see Flow director support.
• Flow director features are not supported on generation 1 Cisco VIC adapters (M81KR
and P81E)
The build instructions for the DPDK suite should be followed. By default the ENIC PMD library
will be built into the DPDK library.
For configuring and using UIO and VFIO frameworks, please refer the documentation that
comes with DPDK suite.
ENIC PMD supports all recent generations of Cisco VIC adapters including:
• VIC 1280
• VIC 1240
• VIC 1225
• VIC 1285
• VIC 1225T
• VIC 1227
• VIC 1227T
• VIC 1380
• VIC 1340
• VIC 1385
• VIC 1387
Any Linux distribution fulfilling the conditions described in Dependencies section of DPDK doc-
umentation.
Supported features
Prerequisites
• Prepare the system as recommended by DPDK suite. This includes environment vari-
ables, hugepages configuration, tool-chains and configuration
• Insert vfio-pci kernel module using the command ‘modprobe vfio-pci’ if the user wants to
use VFIO framework
• Insert uio kernel module using the command ‘modprobe uio’ if the user wants to use UIO
framework
• DPDK suite should be configured based on the user’s decision to use VFIO or UIO frame-
work
• If the vNIC device(s) to be used is bound to the kernel mode Ethernet driver (enic),
use ‘ifconfig’ to bring the interface down. The dpdk-devbind.py tool can then be used to
unbind the device’s bus id from the enic kernel mode driver.
• Bind the intended vNIC to vfio-pci in case the user wants ENIC PMD to use VFIO frame-
work using dpdk-devbind.py.
• Bind the intended vNIC to igb_uio in case the user wants ENIC PMD to use UIO frame-
work using dpdk-devbind.py.
At this point the system should be ready to run DPDK applications. Once the application runs
to completion, the vNIC can be detached from vfio-pci or igb_uio if necessary.
Root privilege is required to bind and unbind vNICs to/from VFIO/UIO. VFIO framework helps
an unprivileged user to run the applications. For an unprivileged user to run the applications
on DPDK and ENIC PMD, it may be necessary to increase the maximum locked memory of
the user. The following command could be used to do this.
sudo sh -c "ulimit -l <value in Kilo Bytes>"
The value depends on the memory configuration of the application, DPDK and PMD. Typically,
the limit has to be raised to higher than 2GB. e.g., 2621440
The compilation of any unused drivers can be disabled using the configuration file in config/
directory (e.g., config/common_linuxapp). This would help in bringing down the time taken for
building the libraries and the initialization time of the application.
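For example, to skip building a PMD that is not needed, its flag can be switched off in config/common_linuxapp (a sketch; flag names follow the CONFIG_RTE_LIBRTE_<NAME>_PMD pattern):
CONFIG_RTE_LIBRTE_IXGBE_PMD=n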
Additional Reference
• http://www.cisco.com/c/en/us/products/servers-unified-computing
Contact Information
Any questions or bugs should be reported to DPDK community and to the ENIC PMD main-
tainers:
• John Daley <johndale@cisco.com>
• Nelson Escobar <neescoba@cisco.com>
The FM10K poll mode driver library provides support for the Intel FM10000 (FM10K) family of
40GbE/100GbE adapters.
FTAG Based Forwarding is a unique feature of FM10K. The FM10K family of NICs support
the addition of a Fabric Tag (FTAG) to carry special information. The FTAG is placed at the
beginning of the frame, it contains information such as where the packet comes from and goes,
and the vlan tag. In FTAG based forwarding mode, the switch logic forwards packets according
to glort (global resource tag) information, rather than the mac and vlan table. Currently this
feature works only on PF.
To enable this feature, the user should pass a devargs parameter to the EAL, for example
-w 84:00.0,enable_ftag=1, and the application should make sure an appropriate FTAG is
inserted for every frame on the TX side.
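A testpmd invocation using this devargs parameter could look as follows (a sketch; core mask and channel count are illustrative):
./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w 84:00.0,enable_ftag=1 -- -i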
Vector PMD (vPMD) uses Intel® SIMD instructions to optimize packet I/O. It improves
load/store bandwidth efficiency of the L1 data cache by using a wider SSE/AVX register (1).
The wider register gives space to hold multiple packet buffers so as to save on the number of
instructions when bulk processing packets.
There is no change to the PMD API. The RX/TX handlers are the only two entries for vPMD
packet I/O. They are transparently registered at runtime RX/TX execution if all required condi-
tions are met.
1. To date, only an SSE version of FM10K vPMD is available. To ensure that vPMD is in the
binary code, set CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y in the configure file.
Some constraints apply as pre-conditions for specific optimizations on bulk packet transfers.
The following sections explain RX and TX constraints in the vPMD.
RX Constraints
For Vector RX it is assumed that the number of descriptor rings will be a power of 2. With this
pre-condition, the ring pointer can easily scroll back to the head after hitting the tail without
a conditional check. In addition Vector RX can use this assumption to do a bit mask using
ring_size - 1.
Some features are not supported when trying to increase the throughput in vPMD. They are:
• IEEE1588
• Flow director
• Header split
• RX checksum offload
Other features are supported using optional MACRO configuration. They include:
• HW VLAN strip
• L3/L4 packet type
To enable via RX_OLFLAGS use RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y.
To guarantee the constraint, the following configuration flags in dev_conf.rxmode will be
checked:
• hw_vlan_extend
• hw_ip_checksum
• header_split
• fdir_conf->mode
RX Burst Size
TX Constraint
TX vPMD only works when txq_flags is set to FM10K_SIMPLE_TX_FLAG. This means that it
does not support TX multi-segment, VLAN offload or TX csum offload. The following MACROs
are used for these three features:
• ETH_TXQ_FLAGS_NOMULTSEGS
• ETH_TXQ_FLAGS_NOVLANOFFL
• ETH_TXQ_FLAGS_NOXSUMSCTP
• ETH_TXQ_FLAGS_NOXSUMUDP
• ETH_TXQ_FLAGS_NOXSUMTCP
Limitations
Switch manager
The Intel FM10000 family of NICs integrate a hardware switch and multiple host interfaces.
The FM10000 PMD driver only manages host interfaces. For the switch component another
switch driver has to be loaded prior to the FM10000 PMD driver. The switch driver can be
acquired from Intel support. Only Testpoint is validated with DPDK, the latest version that has
been validated with DPDK is 4.1.6.
CRC striping
The FM10000 family of NICs strips the CRC for every packet coming into the host interface. So,
CRC will be stripped even when the rxmode.hw_strip_crc member is set to 0 in struct
rte_eth_conf.
The FM10000 family of NICs supports a maximum jumbo frame size of 15K. The value is fixed
and cannot be changed. So, even when the rxmode.max_rx_pkt_len member of struct
rte_eth_conf is set to a value lower than 15364, frames up to 15364 bytes can still reach
the host interface.
The FM10000 NICs expose a set of statistics via the PCI BARs. These statistics are read
from the hardware registers when rte_eth_stats_get() or rte_eth_xstats_get() is
called. The packet counting registers are 32 bits while the byte counting registers are 48 bits.
As a result, the statistics must be polled regularly in order to ensure the consistency of the
returned reads.
Given the PCIe Gen3 x8, about 50Gbps of traffic can occur. With 64 byte packets this gives
almost 100 million packets/second, causing 32 bit integer overflow after approx 40 seconds.
To ensure these overflows are detected and accounted for in the statistics, it is necessary to
read the statistics regularly. Reading the statistics every 20 seconds is suggested, as this will
ensure they remain accurate.
Interrupt mode
The FM10000 family of NICs needs a separate interrupt for the mailbox. Therefore, only drivers
that support multiple interrupt vectors, e.g. vfio-pci, can work in fm10k interrupt mode.
The I40E PMD (librte_pmd_i40e) provides poll mode driver support for the Intel
X710/XL710/X722 10/40 Gbps family of adapters.
Features
Prerequisites
• Identifying your adapter using Intel Support and get the latest NVM/FW images.
• Follow the DPDK Getting Started Guide for Linux to setup the basic DPDK environment.
• To get better performance on Intel platforms, please follow the “How to get best perfor-
mance with NICs on Intel platforms” section of the Getting Started Guide for Linux.
Pre-Installation Configuration
The following options can be modified in the config file. Please note that enabling debugging
options may affect system performance.
• CONFIG_RTE_LIBRTE_I40E_PMD (default y)
Toggle compilation of the librte_pmd_i40e driver.
• CONFIG_RTE_LIBRTE_I40E_DEBUG_* (default n)
Toggle display of generic debugging messages.
• CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC (default y)
Toggle bulk allocation for RX.
• CONFIG_RTE_LIBRTE_I40E_INC_VECTOR (default n)
Toggle the use of Vector PMD instead of normal RX/TX path. To enable vPMD for RX,
bulk allocation for Rx must be allowed.
• CONFIG_RTE_LIBRTE_I40E_RX_OLFLAGS_ENABLE (default y)
Toggle to enable RX olflags. This is only meaningful when Vector PMD is used.
• CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC (default n)
Toggle to use a 16-byte RX descriptor, by default the RX descriptor is 32 byte.
• CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF (default 64)
Number of queues reserved for PF.
• CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF (default 4)
Number of queues reserved for each SR-IOV VF.
• CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM (default 4)
Number of queues reserved for each VMDQ Pool.
• CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL (default -1)
Interrupt Throttling interval.
Driver Compilation
To compile the I40E PMD see Getting Started Guide for Linux or Getting Started Guide for
FreeBSD depending on your platform.
Linux
Running testpmd
This section demonstrates how to launch testpmd with Intel XL710/X710 devices managed
by librte_pmd_i40e in the Linux operating system.
1. Load igb_uio or vfio-pci driver:
modprobe uio
insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
or
modprobe vfio-pci
2. Bind the XL710/X710 adapters to igb_uio or vfio-pci loaded in the previous step:
./usertools/dpdk-devbind.py --bind igb_uio 0000:83:00.0
Or setup VFIO permissions for regular users and then bind to vfio-pci:
./usertools/dpdk-devbind.py --bind vfio-pci 0000:83:00.0
Example output:
...
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL: probe driver: 8086:1572 rte_i40e_pmd
EAL: PCI memory mapped at 0x7f7f80000000
EAL: PCI memory mapped at 0x7f7f80800000
PMD: eth_i40e_dev_init(): FW 5.0 API 1.5 NVM 05.00.02 eetrack 8000208a
Interactive-mode selected
Configuring Port 0 (socket 0)
...
...
Port 0: 68:05:CA:26:85:84
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd>
3. Create VF device(s):
Echo the number of VFs to be created into the sriov_numvfs sysfs entry of the parent
PF.
Example:
echo 2 > /sys/devices/pci0000:00/0000:00:03.0/0000:81:00.0/sriov_numvfs
4. Assign the VF MAC address:
Assign a MAC address to the VF using iproute2. Example:
ip link set ens802f0 vf 0 mac a0:b0:c0:d0:e0:f0
5. Assign VF to VM, and bring up the VM. Please see the documentation for the
I40E/IXGBE/IGB Virtual Function Driver.
Vlan filter
Flow Director
The Flow Director works in receive mode to identify specific flows or sets of flows and route
them to specific queues. The Flow Director filters can match different fields for different
types of packets: flow type, specific input set per flow type and the flexible payload.
The default input set of each flow type is:
ipv4-other : src_ip_address, dst_ip_address
ipv4-frag : src_ip_address, dst_ip_address
ipv4-tcp : src_ip_address, dst_ip_address, src_port, dst_port
ipv4-udp : src_ip_address, dst_ip_address, src_port, dst_port
ipv4-sctp : src_ip_address, dst_ip_address, src_port, dst_port,
verification_tag
ipv6-other : src_ip_address, dst_ip_address
ipv6-frag : src_ip_address, dst_ip_address
ipv6-tcp : src_ip_address, dst_ip_address, src_port, dst_port
ipv6-udp : src_ip_address, dst_ip_address, src_port, dst_port
ipv6-sctp : src_ip_address, dst_ip_address, src_port, dst_port,
verification_tag
l2_payload : ether_type
The flex payload is selected from offset 0 to 15 of packet’s payload by default, while it is masked
out from matching.
Floating VEB
The Intel® Ethernet Controller X710 and XL710 Family support a feature called “Floating VEB”.
A Virtual Ethernet Bridge (VEB) is an IEEE Edge Virtual Bridging (EVB) term for functionality
that allows local switching between virtual endpoints within a physical endpoint and also with
an external bridge/network.
A “Floating” VEB doesn’t have an uplink connection to the outside world so all switching is
done internally and remains within the host. As such, this feature provides security benefits.
In addition, a Floating VEB overcomes a limitation of normal VEBs where they cannot forward
packets when the physical link is down. Floating VEBs don’t need to connect to the NIC port
so they can still forward traffic from VF to VF even when the physical link is down.
Therefore, with this feature enabled VFs can be limited to communicating with each other but
not an outside network, and they can do so even when there is no physical uplink on the
associated NIC port.
To enable this feature, the user should pass a devargs parameter to the EAL, for example:
-w 84:00.0,enable_floating_veb=1
In this configuration the PMD will use the floating VEB feature for all the VFs created by this
PF device.
Alternatively, the user can specify which VFs need to connect to this floating VEB using the
floating_veb_list argument:
-w 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4
In this example VF1, VF3 and VF4 connect to the floating VEB, while other VFs connect to the
normal VEB.
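Note that the ';' in the VF list is interpreted by most shells, so in practice the devargs string is quoted on the command line, for example (a sketch; core mask and channel count are illustrative):
./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -w "84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4" -- -i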
The current implementation only supports one floating VEB and one regular VEB. VFs can
connect to a floating VEB or a regular VEB according to the configuration passed on the EAL
command line.
The floating VEB functionality requires a NIC firmware version of 5.0 or greater.
For firmware versions prior to 5.0, MPLS packets are not recognized by the NIC. The L2 Pay-
load flow type in flow director can be used to classify MPLS packet by using a command in
testpmd like:
testpmd> flow_director_filter 0 mode IP add flow l2_payload ether 0x8847
flexbytes () fwd pf queue <N> fd_id <M>
With the NIC firmware version 5.0 or greater, some limited MPLS support is added: Native
MPLS (MPLS in Ethernet) skip is implemented, while no new packet type, no classification or
offload are possible. With this change, L2 Payload flow type in flow director cannot be used to
classify MPLS packet as with previous firmware versions. Meanwhile, the Ethertype filter can
be used to classify MPLS packet by using a command in testpmd like:
testpmd> ethertype_filter 0 add mac_ignr 00:00:00:00:00:00 ethertype
0x8847 fwd queue <M>
If the Linux i40e kernel driver is used as host driver, while DPDK i40e PMD is used as the
VF driver, DPDK cannot choose 16 byte receive descriptor. That is to say, user should keep
CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n in config file.
Link down with i40e kernel driver after DPDK application exits
After the DPDK application quits, and the device is bound back to the Linux i40e kernel driver,
the link cannot be brought up after ifconfig <dev> up. To work around this issue, ethtool
-s <dev> autoneg on should be run first, and then the link can be brought up through
ifconfig <dev> up.
NOTE: requires Linux kernel i40e driver version >= 1.4.X
Due to the FW limitation, PF can receive packets with Ethertype 0x88A8 only when floating
VEB is disabled.
When a packet is over the maximum frame size, the packet is dropped. However, the Rx statistics
returned by rte_eth_stats_get incorrectly show it as received.
IXGBE Driver
Vector PMD uses Intel® SIMD instructions to optimize packet I/O. It improves load/store band-
width efficiency of the L1 data cache by using a wider SSE/AVX register (1). The wider register
gives space to hold multiple packet buffers so as to save on the number of instructions when
processing packets in bulk.
There is no change to the PMD API. The RX/TX handlers are the only two entries for vPMD packet
I/O. They are transparently registered at runtime RX/TX execution if all condition checks pass.
1. To date, only an SSE version of IXGBE vPMD is available. To ensure that vPMD is in
the binary code, ensure that the option CONFIG_RTE_IXGBE_INC_VECTOR=y is in the
configure file.
Some constraints apply as pre-conditions for specific optimizations on bulk packet transfers.
The following sections explain RX and TX constraints in the vPMD.
RX Constraints
• To enable vPMD to work for RX, bulk allocation for Rx must be allowed.
Ensure that the following pre-conditions are satisfied:
• rxq->rx_free_thresh >= RTE_PMD_IXGBE_RX_MAX_BURST
• rxq->rx_free_thresh < rxq->nb_rx_desc
• (rxq->nb_rx_desc % rxq->rx_free_thresh) == 0
• rxq->nb_rx_desc < (IXGBE_MAX_RING_DESC - RTE_PMD_IXGBE_RX_MAX_BURST)
These conditions are checked in the code.
Scattered packets are not supported in this mode. If an incoming packet is greater than the
maximum acceptable length of one “mbuf” data size (by default, the size is 2 KB), vPMD for
RX would be disabled.
By default, IXGBE_MAX_RING_DESC is set to 4096 and
RTE_PMD_IXGBE_RX_MAX_BURST is set to 32.
Some features are not supported when trying to increase the throughput in vPMD. They are:
• IEEE1588
• FDIR
• Header split
• RX checksum off load
Other features are supported using optional MACRO configuration. They include:
• HW VLAN strip
• HW extend dual VLAN
• Enabled by RX_OLFLAGS (RTE_IXGBE_RX_OLFLAGS_ENABLE=y)
To guarantee the constraint, configuration flags in dev_conf.rxmode will be checked:
• hw_vlan_strip
• hw_vlan_extend
• hw_ip_checksum
• header_split
• dev_conf
fdir_conf->mode will also be checked.
RX Burst Size
As vPMD is focused on high throughput, it assumes that the RX burst size is equal to or greater
than 32 per burst. It returns zero if using nb_pkt < 32 as the expected packet number in the
receive handler.
TX Constraint
Prerequisite
In DPDK release v16.11 an API for ixgbe specific functions has been added to the ixgbe PMD.
The declarations for the API functions are in the header rte_pmd_ixgbe.h.
testpmd
l3fwd
When running l3fwd with vPMD, there is one thing to note. In the configuration, ensure that
port_conf.rxmode.hw_ip_checksum=0. Otherwise, by default, RX vPMD is disabled.
load_balancer
The Intel x550 series NICs support a feature called MDD (Malicious Driver Detection) which
checks the behavior of the VF driver. If this feature is enabled, the VF must use the advanced
context descriptor correctly and set the CC (Check Context) bit. The DPDK PF does not support
MDD, but the kernel PF does. A problem can therefore arise in the kernel PF + DPDK VF scenario:
if the user enables MDD in the kernel PF, the DPDK VF will not work, because the kernel PF
considers the VF malicious, when in fact it simply does not behave as MDD requires. Supporting
MDD in DPDK would have a significant performance impact: DPDK would have to check whether
the advanced context descriptor should be set and set it, and it would have to obtain the header
length from the upper layer, because parsing the packet itself is not acceptable. It is therefore
too expensive to support MDD. When using kernel PF + DPDK VF on x550, please make sure to
use a kernel driver that disables MDD or can disable it (some kernel drivers can disable MDD
with 'insmod ixgbe.ko MDD=0,0'; some disable it by default).
Statistics
The statistics of ixgbe hardware must be polled regularly in order for it to remain consistent.
Running a DPDK application without polling the statistics will cause registers on hardware to
count to the maximum value, and “stick” at that value.
In order to avoid the statistic registers ever reaching the maximum value, read the statistics from
the hardware using rte_eth_stats_get() or rte_eth_xstats_get().
The maximum time between statistics polls that ensures consistent results can be calculated
as follows:
max_read_interval = UINT_MAX / max_packets_per_second
max_read_interval = 4294967295 / 14880952
max_read_interval = 288.6218096127183 (seconds)
max_read_interval = ~4 mins 48 sec.
MTU setting
Although the user can set the MTU separately on PF and VF ports, the ixgbe NIC only supports
one global MTU per physical port. So when the user sets different MTUs on PF and VF ports
in one physical port, the real MTU for all these PF and VF ports is the largest value set. This
behavior is based on the kernel driver behavior.
Supported Intel® Ethernet Controllers (see the DPDK Release Notes for details) support the
following modes of operation in a virtualized environment:
• SR-IOV mode: Involves direct assignment of part of the port resources to different guest
operating systems using the PCI-SIG Single Root I/O Virtualization (SR IOV) standard,
also known as “native mode” or “pass-through” mode. In this chapter, this mode is re-
ferred to as IOV mode.
• VMDq mode: Involves central management of the networking resources by an IO Virtual
Machine (IOVM) or a Virtual Machine Monitor (VMM), also known as software switch
acceleration mode. In this chapter, this mode is referred to as the Next Generation VMDq
mode.
The DPDK uses the SR-IOV feature for hardware-based I/O sharing in IOV mode. Therefore,
it is possible to partition SR-IOV capability on Ethernet controller NIC resources logically and
expose them to a virtual machine as a separate PCI function called a “Virtual Function”. Refer
to Fig. 8.1.
Therefore, a NIC is logically distributed among multiple virtual machines (as shown in Fig. 8.1),
while still having global data in common to share with the Physical Function and other Virtual
Functions. The DPDK fm10kvf, i40evf, igbvf or ixgbevf as a Poll Mode Driver (PMD) serves
for the Intel® 82576 Gigabit Ethernet Controller, Intel® Ethernet Controller I350 family, Intel®
82599 10 Gigabit Ethernet Controller NIC, Intel® Fortville 10/40 Gigabit Ethernet Controller
NIC’s virtual PCI function, or PCIe host-interface of the Intel Ethernet Switch FM10000 Series.
Meanwhile the DPDK Poll Mode Driver (PMD) also supports “Physical Function” of such NIC’s
on the host.
The DPDK PF/VF Poll Mode Driver (PMD) supports the Layer 2 switch on Intel® 82576 Gigabit
Ethernet Controller, Intel® Ethernet Controller I350 family, Intel® 82599 10 Gigabit Ethernet
Controller, and Intel® Fortville 10/40 Gigabit Ethernet Controller NICs so that guest can choose
it for inter virtual machine traffic in SR-IOV mode.
For more detail on SR-IOV, please refer to the following documents:
• SR-IOV provides hardware based I/O sharing
• PCI-SIG-Single Root I/O Virtualization Support on IA
• Scalable I/O Virtualized Servers
The following describes the Physical Function and Virtual Functions infrastructure for the sup-
ported Ethernet Controller NICs.
Virtual Functions operate under the respective Physical Function on the same NIC Port and
therefore have no access to the global NIC resources that are shared between other functions
for the same NIC port.
A Virtual Function has basic access to the queue resources and control structures of the
queues assigned to it. For global resource access, a Virtual Function has to send a request to
the Physical Function for that port, and the Physical Function operates on the global resources
on behalf of the Virtual Function. For this out-of-band communication, an SR-IOV enabled NIC
provides a memory buffer for each Virtual Function, which is called a “Mailbox”.
Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for
a dual-port NIC. When you enable the four Virtual Functions with the above command, the four
enabled functions have a Function# represented by (Bus#, Device#, Function#) in sequence
starting from 0 to 3. However:
• Virtual Functions 0 and 2 belong to Physical Function 0
• Virtual Functions 1 and 3 belong to Physical Function 1
Note: The above is an important consideration to take into account when targeting specific
packets to a selected port.
In a virtualized environment, the programmer can enable a maximum of 128 Virtual Functions
(VF) globally per Intel® Fortville 10/40 Gigabit Ethernet Controller NIC device. Each VF can
have a maximum of 16 queue pairs. The Physical Function in host could be either configured
by the Linux* i40e driver (in the case of the Linux Kernel-based Virtual Machine [KVM]) or by
DPDK PMD PF driver. When using both DPDK PMD PF/VF drivers, the whole NIC will be
taken over by DPDK based application.
For example,
• Using Linux* i40e driver:
rmmod i40e (To remove the i40e module)
insmod i40e.ko max_vfs=2,2 (To enable two Virtual Functions per port)
Launch the DPDK testpmd/example or your own host daemon application using the
DPDK PMD library.
• Using the DPDK PMD PF ixgbe driver to enable VF RSS:
Same steps as above to install the modules of uio, igb_uio, specify max_vfs for PCI
device, and launch the DPDK testpmd/example or your own host daemon application
using the DPDK PMD library.
The available number of queues per VF (at most 4) depends on the total number of pools,
which is determined by the maximum number of VFs at the PF initialization stage and the number
of queues specified in the config:
– If the max number of VFs is set in the range of 1 to 32:
If the number of rxq is specified as 4 (e.g. '--rxq 4' in testpmd), then there are in total
32 pools (ETH_32_POOLS), and each VF can have 4 or fewer (e.g. 2) queues;
If the number of rxq is specified as 2 (e.g. '--rxq 2' in testpmd), then there are in total
32 pools (ETH_32_POOLS), and each VF can have 2 queues;
– If the max number of VFs is in the range of 33 to 64:
If the number of rxq is 4 ('--rxq 4' in testpmd), then an error message is expected, as this
rxq value is not valid in this case;
If the number of rxq is 2 ('--rxq 2' in testpmd), then there are in total 64
pools (ETH_64_POOLS), and each VF has 2 queues;
On the host, to enable VF RSS functionality, the rx mq mode should be set to
ETH_MQ_RX_VMDQ_RSS or ETH_MQ_RX_RSS mode, and SR-IOV mode should be
activated (max_vfs >= 1). It is also necessary to configure VF RSS information such as the hash
function, RSS key and RSS key length.
testpmd -c 0xffff -n 4 -- --coremask=<core-mask> --rxq=4 --txq=4 -i
The limitation for VF RSS on the Intel® 82599 10 Gigabit Ethernet Controller is: the hash
and key are shared between the PF and all VFs, and the RETA table with 128 entries is also
shared between the PF and all VFs. It is therefore not possible to query the hash and RETA
content per VF in the guest; if needed, query them on the host (PF) for the shared RETA
information.
Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for
a dual-port NIC. When you enable the four Virtual Functions with the above command, the four
enabled functions have a Function# represented by (Bus#, Device#, Function#) in sequence
starting from 0 to 3. However:
• Virtual Functions 0 and 2 belong to Physical Function 0
• Virtual Functions 1 and 3 belong to Physical Function 1
Note: The above is an important consideration to take into account when targeting specific
packets to a selected port.
The programmer can enable a maximum of 63 Virtual Functions and there must be one Phys-
ical Function per Intel® 82599 10 Gigabit Ethernet Controller NIC port. The reason for this is
that the device allows for a maximum of 128 queues per port and a virtual/physical function
has to have at least one queue pair (RX/TX). The current implementation of the DPDK ixgbevf
driver supports a single queue pair (RX/TX) per Virtual Function. The Physical Function in host
could be either configured by the Linux* ixgbe driver (in the case of the Linux Kernel-based Vir-
tual Machine [KVM]) or by DPDK PMD PF driver. When using both DPDK PMD PF/VF drivers,
the whole NIC will be taken over by DPDK based application.
For example,
• Using Linux* ixgbe driver:
rmmod ixgbe (To remove the ixgbe module)
insmod ixgbe max_vfs=2,2 (To enable two Virtual Functions per port)
Launch the DPDK testpmd/example or your own host daemon application using the
DPDK PMD library.
Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for
a dual-port NIC. When you enable the four Virtual Functions with the above command, the four
enabled functions have a Function# represented by (Bus#, Device#, Function#) in sequence
starting from 0 to 3. However:
• Virtual Functions 0 and 2 belong to Physical Function 0
• Virtual Functions 1 and 3 belong to Physical Function 1
Note: The above is an important consideration to take into account when targeting specific
packets to a selected port.
Intel® 82576 Gigabit Ethernet Controller and Intel® Ethernet Controller I350 Family VF Infras-
tructure
Launch DPDK testpmd/example or your own host daemon application using the DPDK
PMD library.
Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for
a four-port NIC. When you enable the four Virtual Functions with the above command, the four
enabled functions have a Function# represented by (Bus#, Device#, Function#) in sequence,
starting from 0 to 7. However:
• Virtual Functions 0 and 4 belong to Physical Function 0
• Virtual Functions 1 and 5 belong to Physical Function 1
• Virtual Functions 2 and 6 belong to Physical Function 2
• Virtual Functions 3 and 7 belong to Physical Function 3
Note: The above is an important consideration to take into account when targeting specific
packets to a selected port.
Validated Hypervisors
When using an older kernel, or a kernel from a distribution without the kvm modules, you
must download (from the same link), compile and install the modules yourself:
tar xjf kvm-kmod-release.tar.bz2
cd kvm-kmod-release
./configure
make
sudo make install
sudo /sbin/modprobe kvm-intel
When using DPDK PMD PF driver, insert DPDK kernel module igb_uio and set the num-
ber of VF by sysfs max_vfs:
modprobe uio
insmod igb_uio
./dpdk-devbind.py -b igb_uio 02:00.0 02:00.1 0e:00.0 0e:00.1
echo 2 > /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
echo 2 > /sys/bus/pci/devices/0000\:02\:00.1/max_vfs
echo 2 > /sys/bus/pci/devices/0000\:0e\:00.0/max_vfs
echo 2 > /sys/bus/pci/devices/0000\:0e\:00.1/max_vfs
Note: You need to explicitly specify number of vfs for each port, for example, in the
command above, it creates two vfs for the first two ixgbe ports.
7. List the PCI devices connected and notice that the Host OS shows two Physical Functions
(traditional ports) and four Virtual Functions (two for each port). This is the result of the
previous step.
8. Insert the pci_stub module to hold the PCI devices that are freed from
the default driver using the following command (see http://www.linux-
kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM Section 4 for more in-
formation):
sudo /sbin/modprobe pci-stub
Unbind the default driver from the PCI devices representing the Virtual Functions. A
script to perform this action is as follows:
echo "8086 10ed" > /sys/bus/pci/drivers/pci-stub/new_id
echo 0000:08:10.0 > /sys/bus/pci/devices/0000:08:10.0/driver/unbind
echo 0000:08:10.0 > /sys/bus/pci/drivers/pci-stub/bind
where, 0000:08:10.0 belongs to the Virtual Function visible in the Host OS.
9. Now, start the Virtual Machine by running the following command:
/usr/local/kvm/bin/qemu-system-x86_64 -m 4096 -smp 4 -boot c -hda lucid.qcow2 -device pci-
where:
— -m = memory to assign
— -smp = number of smp cores
— -boot = boot option
— -hda = virtual disk image
— -device = device to attach
Note: — The pci-assign,host=08:10.0 value indicates that you want to attach a PCI
device to a Virtual Machine and the respective (Bus:Device.Function) numbers should
be passed for the Virtual Function to be attached.
— qemu-kvm-0.14.0 allows a maximum of four PCI devices assigned to a VM, but this
is qemu-kvm version dependent since qemu-kvm-0.14.1 allows a maximum of five PCI
devices.
— qemu-system-x86_64 also has a -cpu command line option that is used to select the
cpu_model to emulate in a Virtual Machine. Therefore, it can be used as:
/usr/local/kvm/bin/qemu-system-x86_64 -cpu ?
10. Install and run DPDK host app to take over the Physical Function. Eg.
make install T=x86_64-native-linuxapp-gcc
./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
11. Finally, access the Guest OS using vncviewer with the localhost:5900 port and check the
lspci command output in the Guest OS. The virtual functions will be listed as available for
use.
12. Configure and install the DPDK with an x86_64-native-linuxapp-gcc configuration on the
Guest OS as normal, that is, there is no change to the normal installation procedure.
make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
cd x86_64-native-linuxapp-gcc
make
Note: If you are unable to compile the DPDK and you are getting “error: CPU you selected
does not support x86-64 instruction set”, power off the Guest OS and start the virtual machine
with the correct -cpu option in the qemu- system-x86_64 command as shown in step 9. You
must select the best x86_64 cpu_model to emulate or you can select host option if available.
Note: Run the DPDK l2fwd sample application in the Guest OS with Hugepages enabled. For
the expected benchmark performance, you must pin the cores from the Guest OS to the Host
OS (taskset can be used to do this) and you must also look at the PCI Bus layout on the board
to ensure you are not running the traffic over the QPI Interface.
Note:
• The Virtual Machine Manager (the Fedora package name is virt-manager) is a utility
for virtual machine management that can also be used to create, start, stop and delete
virtual machines. If this option is used, step 2 and 6 in the instructions provided will be
different.
• virsh, a command line utility for virtual machine management, can also be used to bind
and unbind devices to a virtual machine in Ubuntu. If this option is used, step 6 in the
instructions provided will be different.
• The Virtual Machine Monitor (see Fig. 8.2) is equivalent to a Host OS with KVM installed
as described in the instructions.
Software Defined Network (SDN) trends are demanding fast host-based packet handling. In a
virtualization environment, the DPDK VF PMD driver delivers the same throughput as in a
non-VT native environment.
With such host instance fast packet processing, lots of services such as filtering, QoS, DPI can
be offloaded on the host fast path.
Fig. 8.3 shows the scenario where some VMs directly communicate externally via a VFs, while
others connect to a virtual switch and share the same uplink bandwidth.
Inter-VM data communication is one of the traffic bottlenecks in virtualization platforms. SR-
IOV device assignment helps a VM to attach to the real device, taking advantage of the bridge in
the NIC. So VF-to-VF traffic within the same physical port (VM0<->VM1) has hardware accel-
eration. However, when traffic crosses physical ports (VM0<->VM2), there is no such hardware
bridge. In this case, the DPDK PMD PF driver provides host forwarding between such VMs.
Fig. 8.4 shows an example. In this case an update of the MAC address lookup tables in both
the NIC and host DPDK application is required.
In the NIC, the MAC address belonging to a VM behind another device is written to the
PF-specific pool. So when a packet comes in, its destination MAC address will match and it
will be forwarded to the host DPDK PMD application.
In the host DPDK application, the behavior is similar to L2 forwarding, that is, the packet is
forwarded to the correct PF pool. The SR-IOV NIC switch forwards the packet to a specific VM
according to the MAC destination address which belongs to the destination VF on the VM.
The MLX4 poll mode driver library (librte_pmd_mlx4) implements support for Mellanox
ConnectX-3 and Mellanox ConnectX-3 Pro 10/40 Gbps adapters as well as their virtual func-
tions (VF) in SR-IOV context.
Information and documentation about this family of adapters can be found on the Mellanox
website. Help is also provided by the Mellanox community.
There is also a section dedicated to this poll mode driver.
Note: Due to external dependencies, this driver is disabled by default. It must be enabled
manually by setting CONFIG_RTE_LIBRTE_MLX4_PMD=y and recompiling DPDK.
Implementation details
Most Mellanox ConnectX-3 devices provide two ports but expose a single PCI bus address,
thus unlike most drivers, librte_pmd_mlx4 registers itself as a PCI driver that allocates one
Ethernet device per detected port.
For this reason, one cannot white/blacklist a single port without also white/blacklisting the oth-
ers on the same device.
Besides its dependency on libibverbs (that implies libmlx4 and associated kernel support), li-
brte_pmd_mlx4 relies heavily on system calls for control operations such as querying/updating
the MTU and flow control parameters.
For security reasons and robustness, this driver only deals with virtual memory addresses. The
way resources allocations are handled by the kernel combined with hardware specifications
that allow it to handle virtual memory addresses directly ensure that DPDK applications cannot
access random physical memory (or memory that does not belong to the current process).
This capability allows the PMD to coexist with kernel network interfaces which remain func-
tional, although they stop receiving unicast packets as long as they share the same MAC
address.
Compiling librte_pmd_mlx4 causes DPDK to be linked against libibverbs.
Features
• RSS, also known as RCA, is supported. In this mode the number of configured RX
queues must be a power of two.
Limitations
Configuration
Compilation options
This value is always 1 for RX queues since they use a single MP.
• CONFIG_RTE_LIBRTE_MLX4_SOFT_COUNTERS (default 1)
Toggle software counters. No counters are available if this option is disabled since hard-
ware counters are not supported.
Environment variables
• MLX4_INLINE_RECV_SIZE
A nonzero value enables inline receive for packets up to that size. May significantly
improve performance in some cases but lower it in others. Requires careful testing.
Run-time configuration
• The only constraint when RSS mode is requested is to make sure the number of RX
queues is a power of two. This is a hardware requirement.
• librte_pmd_mlx4 brings kernel network interfaces up during initialization because it is
affected by their state. Forcing them down prevents packets reception.
• ethtool operations on related kernel interfaces also affect the PMD.
The mlx4_core kernel module has several parameters that affect the behavior and/or the per-
formance of librte_pmd_mlx4. Some of them are described below.
• num_vfs (integer or triplet, optionally prefixed by device address strings)
Create the given number of VFs on the specified devices.
• log_num_mgm_entry_size (integer)
Device-managed flow steering (DMFS) is required by DPDK applications. It is enabled
by using a negative value, the last four bits of which have a special meaning.
– -1: force device-managed flow steering (DMFS).
– -7: configure optimized steering mode to improve performance with the following
limitation: VLAN filtering is not supported with this mode. This is the recommended
mode in case VLAN filter is not needed.
Prerequisites
This driver relies on external libraries and kernel drivers for resources allocations and initial-
ization. The following dependencies are not part of DPDK and must be installed separately:
• libibverbs
User space verbs framework used by librte_pmd_mlx4. This library provides a generic
interface between the kernel and low-level user space drivers such as libmlx4.
It allows slow and privileged operations (context initialization, hardware resources allo-
cations) to be managed by the kernel and fast operations to never leave user space.
• libmlx4
Low-level user space driver library for Mellanox ConnectX-3 devices, it is automatically
loaded by libibverbs.
This library basically implements send/receive calls to the hardware queues.
• Kernel modules (mlnx-ofed-kernel)
They provide the kernel-side verbs API and low level device drivers that manage actual
hardware initialization and resources sharing with user space processes.
Unlike most other PMDs, these modules must remain loaded and bound to their devices:
– mlx4_core: hardware driver managing Mellanox ConnectX-3 devices.
– mlx4_en: Ethernet device driver that provides kernel network interfaces.
– mlx4_ib: InfiniBand device driver.
– ib_uverbs: user space driver for verbs (entry point for libibverbs).
• Firmware update
Mellanox OFED releases include firmware updates for ConnectX-3 adapters.
Because each release provides new features, these updates must be applied to match
the kernel modules and libraries they come with.
Note: Both libraries are BSD and GPL licensed. Linux kernel modules are GPL licensed.
While these libraries and kernel modules are available on OpenFabrics Alliance’s website and
provided by package managers on most distributions, this PMD requires Ethernet extensions
that may not be supported at the moment (this is a work in progress).
Mellanox OFED includes the necessary support and should be used in the meantime. For
DPDK, only libibverbs, libmlx4, mlnx-ofed-kernel packages and firmware updates are required
from that distribution.
Note: Several versions of Mellanox OFED are available. Installing the version this DPDK
release was developed and tested against is strongly recommended. Please check the pre-
requisites.
Supported NICs
Usage example
This section demonstrates how to launch testpmd with Mellanox ConnectX-3 devices man-
aged by librte_pmd_mlx4.
1. Load the kernel modules:
modprobe -a ib_uverbs mlx4_en mlx4_core mlx4_ib
Note: User space I/O kernel modules (uio and igb_uio) are not used and do not have to
be loaded.
2. Make sure Ethernet interfaces are in working order and linked to kernel verbs. Related
sysfs entries should be present:
ls -d /sys/class/net/*/device/infiniband_verbs/uverbs* | cut -d / -f 5
Example output:
eth2
eth3
eth4
eth5
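The whitelist arguments shown in the next example output can be derived from these interfaces by resolving their sysfs device paths, for instance (a sketch):
{ for intf in eth2 eth3 eth4 eth5; do
    (cd "/sys/class/net/${intf}/device/" && pwd -P);
  done; } | sed -n 's,.*/\(.*\),-w \1,p'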
Example output:
-w 0000:83:00.0
-w 0000:83:00.0
-w 0000:84:00.0
-w 0000:84:00.0
Note: There are only two distinct PCI bus addresses because the Mellanox ConnectX-3
adapters installed on this system are dual port.
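The example output below would typically come from starting testpmd with these whitelist arguments and two queues per port, for instance (a sketch; core mask and channel count are illustrative):
./x86_64-native-linuxapp-gcc/app/testpmd -c 0xff00 -n 4 -w 0000:83:00.0 -w 0000:84:00.0 -- --rxq=2 --txq=2 -i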
Example output:
[...]
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL: probe driver: 15b3:1007 librte_pmd_mlx4
PMD: librte_pmd_mlx4: PCI information matches, using device "mlx4_0" (VF: false)
PMD: librte_pmd_mlx4: 2 port(s) detected
PMD: librte_pmd_mlx4: port 1 MAC address is 00:02:c9:b5:b7:50
PMD: librte_pmd_mlx4: port 2 MAC address is 00:02:c9:b5:b7:51
EAL: PCI device 0000:84:00.0 on NUMA socket 1
EAL: probe driver: 15b3:1007 librte_pmd_mlx4
PMD: librte_pmd_mlx4: PCI information matches, using device "mlx4_1" (VF: false)
PMD: librte_pmd_mlx4: 2 port(s) detected
PMD: librte_pmd_mlx4: port 1 MAC address is 00:02:c9:b5:ba:b0
PMD: librte_pmd_mlx4: port 2 MAC address is 00:02:c9:b5:ba:b1
Interactive-mode selected
Configuring Port 0 (socket 0)
PMD: librte_pmd_mlx4: 0x867d60: TX queues number update: 0 -> 2
PMD: librte_pmd_mlx4: 0x867d60: RX queues number update: 0 -> 2
Port 0: 00:02:C9:B5:B7:50
Configuring Port 1 (socket 0)
PMD: librte_pmd_mlx4: 0x867da0: TX queues number update: 0 -> 2
PMD: librte_pmd_mlx4: 0x867da0: RX queues number update: 0 -> 2
Port 1: 00:02:C9:B5:B7:51
Configuring Port 2 (socket 0)
PMD: librte_pmd_mlx4: 0x867de0: TX queues number update: 0 -> 2
PMD: librte_pmd_mlx4: 0x867de0: RX queues number update: 0 -> 2
Port 2: 00:02:C9:B5:BA:B0
Configuring Port 3 (socket 0)
PMD: librte_pmd_mlx4: 0x867e20: TX queues number update: 0 -> 2
PMD: librte_pmd_mlx4: 0x867e20: RX queues number update: 0 -> 2
Port 3: 00:02:C9:B5:BA:B1
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 40000 Mbps - full-duplex
Port 2 Link Up - speed 10000 Mbps - full-duplex
Port 3 Link Up - speed 40000 Mbps - full-duplex
Done
testpmd>
The MLX5 poll mode driver library (librte_pmd_mlx5) provides support for Mel-
lanox ConnectX-4, Mellanox ConnectX-4 Lx and Mellanox ConnectX-5 families of
10/25/40/50/100 Gb/s adapters as well as their virtual functions (VF) in SR-IOV context.
Information and documentation about these adapters can be found on the Mellanox website.
Help is also provided by the Mellanox community.
There is also a section dedicated to this poll mode driver.
Note: Due to external dependencies, this driver is disabled by default. It must be enabled
manually by setting CONFIG_RTE_LIBRTE_MLX5_PMD=y and recompiling DPDK.
Implementation details
Besides its dependency on libibverbs (that implies libmlx5 and associated kernel support), li-
brte_pmd_mlx5 relies heavily on system calls for control operations such as querying/updating
the MTU and flow control parameters.
For security reasons and robustness, this driver only deals with virtual memory addresses. The
way resource allocations are handled by the kernel, combined with hardware specifications
that allow it to handle virtual memory addresses directly, ensures that DPDK applications cannot
access random physical memory (or memory that does not belong to the current process).
This capability allows the PMD to coexist with kernel network interfaces which remain func-
tional, although they stop receiving unicast packets as long as they share the same MAC
address.
Enabling librte_pmd_mlx5 causes DPDK applications to be linked against libibverbs.
Features
Limitations
Configuration
Compilation options
Environment variables
• MLX5_PMD_ENABLE_PADDING
Enables HW packet padding in PCI bus transactions.
When packet size is cache aligned and CRC stripping is enabled, 4 fewer bytes are
written to the PCI bus. Enabling padding makes such packets aligned again.
In cases where PCI bandwidth is the bottleneck, padding can improve performance by
10%.
This is disabled by default since this can also decrease performance for unaligned packet
sizes.
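As an environment variable it is simply set in the environment of the application being launched; a sketch (the value and the testpmd arguments are illustrative):
MLX5_PMD_ENABLE_PADDING=1 testpmd -c 0xff00 -n 4 -i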
Run-time configuration
Prerequisites
This driver relies on external libraries and kernel drivers for resource allocation and
initialization. The following dependencies are not part of DPDK and must be installed separately:
• libibverbs
User space Verbs framework used by librte_pmd_mlx5. This library provides a generic
interface between the kernel and low-level user space drivers such as libmlx5.
It allows slow and privileged operations (context initialization, hardware resources allo-
cations) to be managed by the kernel and fast operations to never leave user space.
• libmlx5
Low-level user space driver library for Mellanox ConnectX-4/ConnectX-5 devices, it is
automatically loaded by libibverbs.
This library basically implements send/receive calls to the hardware queues.
• Kernel modules (mlnx-ofed-kernel)
They provide the kernel-side Verbs API and low level device drivers that manage actual
hardware initialization and resources sharing with user space processes.
Unlike most other PMDs, these modules must remain loaded and bound to their devices:
– mlx5_core: hardware driver managing Mellanox ConnectX-4/ConnectX-5 devices
and related Ethernet kernel network devices.
– mlx5_ib: InfiniBand device driver.
– ib_uverbs: user space driver for Verbs (entry point for libibverbs).
• Firmware update
Mellanox OFED releases include firmware updates for ConnectX-4/ConnectX-5
adapters.
Because each release provides new features, these updates must be applied to match
the kernel modules and libraries they come with.
Note: Both libraries are BSD and GPL licensed. Linux kernel modules are GPL licensed.
While these libraries and kernel modules are available on OpenFabrics Alliance’s website and
provided by package managers on most distributions, this PMD requires Ethernet extensions
that may not be supported at the moment (this is a work in progress).
Mellanox OFED includes the necessary support and should be used in the meantime. For
DPDK, only libibverbs, libmlx5, mlnx-ofed-kernel packages and firmware updates are required
from that distribution.
Note: Several versions of Mellanox OFED are available. Installing the version this DPDK
release was developed and tested against is strongly recommended. Please check the pre-
requisites.
Supported NICs
Compared to librte_pmd_mlx4 that implements a single RSS configuration per port, li-
brte_pmd_mlx5 supports per-protocol RSS configuration.
Since testpmd defaults to IP RSS mode and there is currently no command-line parameter
to enable additional protocols (UDP and TCP as well as IP), the following commands must be
entered from its CLI to get the same behavior as librte_pmd_mlx4:
> port stop all
> port config all rss all
> port start all
Usage example
This section demonstrates how to launch testpmd with Mellanox ConnectX-4/ConnectX-5 de-
vices managed by librte_pmd_mlx5.
1. Load the kernel modules:
modprobe -a ib_uverbs mlx5_core mlx5_ib
Note: User space I/O kernel modules (uio and igb_uio) are not used and do not have to
be loaded.
2. Make sure Ethernet interfaces are in working order and linked to kernel verbs. Related
sysfs entries should be present:
ls -d /sys/class/net/*/device/infiniband_verbs/uverbs* | cut -d / -f 5
Example output:
eth30
eth31
eth32
eth33
Example output:
-w 0000:05:00.1
-w 0000:06:00.0
-w 0000:06:00.1
-w 0000:05:00.0
Example output:
[...]
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL: probe driver: 15b3:1013 librte_pmd_mlx5
PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_0" (VF: false)
PMD: librte_pmd_mlx5: 1 port(s) detected
PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fe
EAL: PCI device 0000:05:00.1 on NUMA socket 0
EAL: probe driver: 15b3:1013 librte_pmd_mlx5
PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_1" (VF: false)
PMD: librte_pmd_mlx5: 1 port(s) detected
PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:ff
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL: probe driver: 15b3:1013 librte_pmd_mlx5
PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_2" (VF: false)
PMD: librte_pmd_mlx5: 1 port(s) detected
PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fa
EAL: PCI device 0000:06:00.1 on NUMA socket 0
EAL: probe driver: 15b3:1013 librte_pmd_mlx5
PMD: librte_pmd_mlx5: PCI information matches, using device "mlx5_3" (VF: false)
PMD: librte_pmd_mlx5: 1 port(s) detected
PMD: librte_pmd_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fb
Interactive-mode selected
Configuring Port 0 (socket 0)
PMD: librte_pmd_mlx5: 0x8cba80: TX queues number update: 0 -> 2
PMD: librte_pmd_mlx5: 0x8cba80: RX queues number update: 0 -> 2
Port 0: E4:1D:2D:E7:0C:FE
Configuring Port 1 (socket 0)
PMD: librte_pmd_mlx5: 0x8ccac8: TX queues number update: 0 -> 2
PMD: librte_pmd_mlx5: 0x8ccac8: RX queues number update: 0 -> 2
Port 1: E4:1D:2D:E7:0C:FF
Configuring Port 2 (socket 0)
PMD: librte_pmd_mlx5: 0x8cdb10: TX queues number update: 0 -> 2
PMD: librte_pmd_mlx5: 0x8cdb10: RX queues number update: 0 -> 2
Port 2: E4:1D:2D:E7:0C:FA
Netronome’s sixth generation of flow processors pack 216 programmable cores and over 100
hardware accelerators that uniquely combine packet, flow, security and content processing in
a single device that scales up to 400 Gbps.
This document explains how to use DPDK with the Netronome Poll Mode Driver (PMD) sup-
porting Netronome’s Network Flow Processor 6xxx (NFP-6xxx).
Currently the driver supports virtual functions (VFs) only.
Dependencies
Before using Netronome’s DPDK PMD some NFP-6xxx configuration, which is not related
to DPDK, is required. The system requires installation of Netronome’s BSP (Board Support
Package) which includes Linux drivers, programs and libraries.
If you have a NFP-6xxx device you should already have the code and documentation for doing
this configuration. Contact support@netronome.com to obtain the latest available firmware.
The NFP Linux kernel drivers (including the required PF driver for the NFP) are available on
Github at https://github.com/Netronome/nfp-drv-kmods along with build instructions.
DPDK runs in userspace and PMDs use the Linux kernel UIO interface to allow access to
physical devices from userspace. The NFP PMD requires the igb_uio UIO driver, available
with DPDK, to perform correct initialization.
Netronome’s PMD code is provided in the drivers/net/nfp directory. Although the NFP PMD has
Netronome’s BSP dependencies, it is possible to compile it along with other DPDK PMDs even
if no BSP was installed previously. Of course, a DPDK application will require such a BSP to be
installed in order to use the NFP PMD.
The default PMD configuration is in the common_linuxapp configuration file:
• CONFIG_RTE_LIBRTE_NFP_PMD=y
Once DPDK is built all the DPDK apps and examples include support for the NFP PMD.
System configuration
Using the NFP PMD is no different from using other PMDs. Usual steps are:
1. Configure hugepages: All major Linux distributions have the hugepages functionality
enabled by default. By default the system uses these for transparent hugepages. However, in
this case some hugepages need to be created/reserved for use with the DPDK through the
hugetlbfs file system. First the virtual file system needs to be mounted:
mount -t hugetlbfs none /mnt/hugetlbfs
The command uses the common mount point for this file system and it needs to be
created if necessary.
Configuring hugepages is performed via sysfs:
/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
This sysfs file is used to specify the number of hugepages to reserve. For example:
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
This will reserve 2GB of memory using 1024 2MB hugepages. The file may be read to
see if the operation was performed correctly:
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
For a NUMA system there will be a specific hugepage directory per node allowing control
of hugepage reservation. A common problem may occur when hugepages reservation
is performed after the system has been working for some time. Configuration using
the global sysfs hugepage interface will succeed but the per-node allocations may be
unsatisfactory.
The number of hugepages that need to be reserved depends on how the app uses TX
and RX descriptors, and packets mbufs.
2. Enable SR-IOV on the NFP-6xxx device: The current NFP PMD works with Virtual
Functions (VFs) on a NFP device. Make sure that one of the Physical Function (PF)
drivers from the above Github repository is installed and loaded.
Virtual Functions need to be enabled before they can be used with the PMD. Before
enabling the VFs it is useful to obtain information about the current NFP PCI device
detected by the system:
lspci -d19ee:
Now, for example, configure two virtual functions on a NFP-6xxx device whose PCI sys-
tem identity is “0000:03:00.0”:
echo 2 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs
Two new PCI devices should appear in the lspci output. The -k option
shows the device driver, if any, that devices are bound to. Depending on the modules
loaded at this point the new PCI devices may be bound to the nfp_netvf driver.
3. To install the uio kernel module (manually): All major Linux distributions have support
for this kernel module so it is straightforward to install it:
modprobe uio
lspci -d19ee: -k
The output of lspci should now show that 0000:03:08.0 is not bound to any driver.
The next step is to add the NFP PCI ID to the IGB UIO driver:
echo 19ee 6003 > /sys/bus/pci/drivers/igb_uio/new_id
lspci -d19ee: -k
• dpdk-devbind.py
Configuration may be performed by running dpdk-setup.sh which invokes dpdk-
devbind.py as needed. Executing dpdk-setup.sh will display a menu of configuration
options.
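For example, the VF found in the previous steps could be bound to igb_uio directly with dpdk-devbind.py (the PCI address is the illustrative one used above), and the result checked with the --status option:
./usertools/dpdk-devbind.py --bind igb_uio 0000:03:08.0
./usertools/dpdk-devbind.py --status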
The QEDE poll mode driver library (librte_pmd_qede) implements support for QLogic
FastLinQ QL4xxxx 10G/25G/40G/50G/100G CNA family of adapters as well as their vir-
tual functions (VF) in SR-IOV context. It is supported on several standard Linux distros like
RHEL7.x, SLES12.x and Ubuntu. It is compile-tested under FreeBSD OS.
More information can be found at QLogic Corporation’s Website.
Supported Features
• Unicast/Multicast filtering
• Promiscuous mode
• Allmulti mode
• Port hardware statistics
• Jumbo frames
• VLAN offload - Filtering and stripping
• Stateless checksum offloads (IPv4/TCP/UDP)
• Multiple Rx/Tx queues
• RSS (with RETA/hash table/key)
• TSS
• Multiple MAC address
• Default pause flow control
• SR-IOV VF
• MTU change
• Multiprocess aware
• Scatter-Gather
• VXLAN tunneling offload
Non-supported Features
• SR-IOV PF
• GENEVE and NVGRE Tunneling offloads
• LRO/TSO
• NPAR
Prerequisites
• Requires firmware version 8.14.x and management firmware version 8.14.x or higher.
Firmware may be available inbox in certain newer Linux distros under the standard
directory, e.g. /lib/firmware/qed/qed_init_values-8.14.6.0.bin
• If the required firmware files are not available then visit QLogic Driver Download Center.
Performance note
The following options can be modified in the .config file. Please note that enabling debugging
options may affect system performance.
• CONFIG_RTE_LIBRTE_QEDE_PMD (default y)
Toggle compilation of QEDE PMD driver.
• CONFIG_RTE_LIBRTE_QEDE_DEBUG_INFO (default n)
Toggle display of generic debugging messages.
• CONFIG_RTE_LIBRTE_QEDE_DEBUG_DRIVER (default n)
Toggle display of ecore related messages.
• CONFIG_RTE_LIBRTE_QEDE_DEBUG_TX (default n)
Toggle display of transmit fast path run-time messages.
• CONFIG_RTE_LIBRTE_QEDE_DEBUG_RX (default n)
Toggle display of receive fast path run-time messages.
• CONFIG_RTE_LIBRTE_QEDE_FW (default “”)
Gives the absolute path of the firmware file, e.g. "/lib/firmware/qed/qed_init_values_zipped-8.1
An empty string indicates that the driver will pick up the firmware file from the default location.
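One way to toggle these options is to edit the generated build configuration before compiling, mirroring the sed approach used elsewhere in this guide; a sketch enabling one of the debug options:
make config T=x86_64-native-linuxapp-gcc
sed -i 's,\(CONFIG_RTE_LIBRTE_QEDE_DEBUG_INFO\)=n,\1=y,' build/.config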
Driver Compilation
To compile QEDE PMD for Linux x86_64 gcc target, run the following make command:
cd <DPDK-source-directory>
make config T=x86_64-native-linuxapp-gcc install
To compile QEDE PMD for Linux x86_64 clang target, run the following make command:
cd <DPDK-source-directory>
make config T=x86_64-native-linuxapp-clang install
To compile QEDE PMD for FreeBSD x86_64 clang target, run the following gmake command:
cd <DPDK-source-directory>
gmake config T=x86_64-native-bsdapp-clang install
To compile QEDE PMD for FreeBSD x86_64 gcc target, run the following gmake command:
cd <DPDK-source-directory>
gmake config T=x86_64-native-bsdapp-gcc install -Wl,-rpath=\
/usr/local/lib/gcc49 CC=gcc49
This section demonstrates how to launch testpmd with QLogic 4xxxx devices managed by
librte_pmd_qede in the Linux operating system.
1. Request huge pages:
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
3. Bind the QLogic 4xxxx adapters to igb_uio loaded in the previous step:
./usertools/dpdk-devbind.py --bind igb_uio 0000:84:00.0 0000:84:00.1 \
0000:84:00.2 0000:84:00.3
[...]
Example output:
[...]
Capabilities: [1b8 v1] Alternative Routing-ID Interpretation (ARI)
[...]
Capabilities: [1c0 v1] Single Root I/O Virtualization (SR-IOV)
[...]
Kernel driver in use: igb_uio
Example output:
systemd-udevd[4848]: renamed network interface eth0 to ens5f0
systemd-udevd[4848]: renamed network interface eth1 to ens5f1
4. Create VF device(s):
Echo the number of VFs to be created into "sriov_numvfs" sysfs entry of the parent
PF.
Example output:
Example output:
ip link set ens5f0 vf 0 mac 52:54:00:2f:9d:e8
6. PCI Passthrough:
The VF devices may be passed through to the guest VM using virt-manager or virsh.
QEDE PMD should be used to bind the VF devices in the guest VM using the instructions
outlined in the Application notes above.
The SFC EFX PMD (librte_pmd_sfc_efx) provides poll mode driver support for Solarflare
SFN7xxx and SFN8xxx family of 10/40 Gbps adapters. SFC EFX PMD has support for the
latest Linux and FreeBSD operating systems.
More information can be found at Solarflare Communications website.
Features
• Scattered Rx DMA for packets that are larger than a single Rx descriptor
• Deferred receive and transmit queue start
• Transmit VLAN insertion (if running firmware variant supports it)
Non-supported Features
Limitations
Due to requirements on receive buffer alignment and usage of the receive buffer for the
auxiliary packet information provided by the NIC, up to 269 extra bytes (a 14 byte prefix plus up
to 255 bytes of end padding) may be required in the receive buffer. This should be taken into
account when the mbuf pool for receive is created.
Supported NICs
Prerequisites
Pre-Installation Configuration
The following options can be modified in the .config file. Please note that enabling debugging
options may affect system performance.
• CONFIG_RTE_LIBRTE_SFC_EFX_PMD (default y)
Enable compilation of Solarflare libefx-based poll-mode driver.
• CONFIG_RTE_LIBRTE_SFC_EFX_DEBUG (default n)
Enable compilation of the extra run-time consistency checks.
Per-Device Parameters
The following per-device parameters can be passed via the EAL PCI device whitelist option, for
example “-w 02:00.0,arg1=value1,...”.
Case-insensitive 1/y/yes/on or 0/n/no/off may be used to specify boolean parameter values.
• perf_profile [auto|throughput|low-latency] (default throughput)
Choose hardware tuning to be optimized for either throughput or low latency. auto
allows the NIC firmware to make a choice based on installed licences and the firmware variant
configured using sfboot.
• debug_init [bool] (default n)
Enable extra logging during device initialization and startup.
• mcdi_logging [bool] (default n)
Enable extra logging of the communication with the NIC’s management CPU. The logging
is done using RTE_LOG() with INFO level and PMD type. The format is consumed by
the Solarflare netlogdecode cross-platform tool.
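A sketch of passing these parameters through the whitelist option when launching testpmd (the PCI address and EAL arguments are illustrative):
./testpmd -c 0x3 -n 4 -w 02:00.0,perf_profile=low-latency,mcdi_logging=1 -- -i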
The SZEDATA2 poll mode driver library implements support for cards from the COMBO family
(COMBO-80G, COMBO-100G). The SZEDATA2 PMD uses the interface provided by the libsze2
library to communicate with COMBO cards over the sze2 layer.
More information about family of COMBO cards and used technology (NetCOPE platform) can
be found on the Liberouter website.
Note: This driver has external dependencies. Therefore it is disabled in default configuration
files. It can be enabled by setting CONFIG_RTE_LIBRTE_PMD_SZEDATA2=y and recompiling.
Note: Currently the driver is supported only on x86_64 architectures. Only x86_64 versions
of the external libraries are provided.
Prerequisites
This PMD requires kernel modules which are responsible for initialization and allocation of
resources needed for sze2 layer function. Communication between PMD and kernel modules
is mediated by libsze2 library. These kernel modules and library are not part of DPDK and
must be installed separately:
• libsze2 library
The library provides API for initialization of sze2 transfers, receiving and transmitting data
segments.
• Kernel modules
– combov3
– szedata2_cv3
Kernel modules manage initialization of hardware, allocation and sharing of resources
for user space applications.
Information about getting the dependencies can be found here.
Configuration
These configuration options can be modified before compilation in the .config file:
• CONFIG_RTE_LIBRTE_PMD_SZEDATA2 default value: n
Value y enables compilation of szedata2 PMD.
• CONFIG_RTE_LIBRTE_PMD_SZEDATA2_AS default value: 0
This option defines type of firmware address space. Currently supported value is:
– 0 for firmwares:
* NIC_100G1_LR4
* HANIC_100G1_LR4
* HANIC_100G1_SR10
From DPDK version 16.04 the type of SZEDATA2 PMD is changed to PMD_PDEV. SZEDATA2
device is automatically recognized during EAL initialization. No special command line options
are needed.
Kernel modules have to be loaded before running the DPDK application.
Example of usage
Read packets from receive channels 0 and 1 and write them to transmit channels 0 and 1:
$RTE_TARGET/app/testpmd -c 0xf -n 2 \
-- --port-topology=chained --rxq=2 --txq=2 --nb-cores=2 -i -a
Example output:
[...]
EAL: PCI device 0000:06:00.0 on NUMA socket -1
EAL: probe driver: 1b26:c1c1 rte_szedata2_pmd
PMD: Initializing szedata2 device (0000:06:00.0)
PMD: SZEDATA2 path: /dev/szedataII0
PMD: Available DMA channels RX: 8 TX: 8
PMD: resource0 phys_addr = 0xe8000000 len = 134217728 virt addr = 7f48f8000000
PMD: szedata2 device (0000:06:00.0) successfully initialized
Interactive-mode selected
Auto-start selected
Configuring Port 0 (socket 0)
Port 0: 00:11:17:00:00:00
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Done
Start automatic packet forwarding
io packet forwarding - CRC stripping disabled - packets/burst=32
nb forwarding cores=2 - nb forwarding ports=1
RX queues=2 - RX desc=128 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX queues=2 - TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX RS bit threshold=0 - TXQ flags=0x0
testpmd>
The rte_eth_tap.c PMD creates a device using TUN/TAP interfaces on the local host. The
PMD allows for DPDK and the host to communicate using a raw device interface on the host
and in the DPDK application.
The device created is a TAP device, which sends/receives packet in a raw format with a L2
header. The usage for a TAP PMD is for connectivity to the local host using a TAP interface.
When the TAP PMD is initialized it will create a number of tap devices on the host, visible via
ifconfig -a or the ip command. These commands can be used to assign addresses to and query
the virtual devices.
These TAP interfaces can be used with Wireshark, tcpdump or Pktgen-DPDK, as well as serving
as a network connection to the DPDK application. The method to enable one or more interfaces
is to use the --vdev=net_tap0 option on the DPDK application command line. Each additional
--vdev=net_tap1 option given will create an interface named dtap0, dtap1, and so on.
The interface name can be changed by adding the iface=foo0 option, for example:
--vdev=net_tap0,iface=foo0 --vdev=net_tap1,iface=foo1, ...
Also the reported speed of the interface can be changed from 10G to whatever value is needed,
but the interface does not enforce that speed, for example:
--vdev=net_tap0,iface=foo0,speed=25000
After the DPDK application is started you can send and receive packets on the interface using
the standard rx_burst/tx_burst APIs in DPDK. From the host point of view you can use any
host tool like tcpdump, Wireshark, ping, Pktgen and others to communicate with the DPDK
application. The DPDK application may not understand network protocols like IPv4/6, UDP or
TCP unless the application has been written to understand these protocols.
If you need the interface to behave as a real network interface, meaning it is up and has a valid
IP address, then you can do this with the following commands:
sudo ip link set dtap0 up; sudo ip addr add 192.168.0.250/24 dev dtap0
sudo ip link set dtap1 up; sudo ip addr add 192.168.1.250/24 dev dtap1
Example
The following is a simple example of using the TUN/TAP PMD with the Pktgen packet generator.
It requires that the socat utility is installed on the test system.
Build DPDK, then pull down Pktgen and build it using the DPDK SDK/target used to build
the DPDK you pulled down.
Run pktgen from the pktgen directory in a terminal with a commandline like the following:
sudo ./app/app/x86_64-native-linuxapp-gcc/app/pktgen -l 1-5 -n 4 \
--proc-type auto --log-level 8 --socket-mem 512,512 --file-prefix pg \
--vdev=net_tap0 --vdev=net_tap1 -b 05:00.0 -b 05:00.1 \
-b 04:00.0 -b 04:00.1 -b 04:00.2 -b 04:00.3 \
-b 81:00.0 -b 81:00.1 -b 81:00.2 -b 81:00.3 \
-b 82:00.0 -b 83:00.0 -- -T -P -m [2:3].0 -m [4:5].1 \
-f themes/black-yellow.theme
Verify with the ifconfig -a command in a different xterm window; dtap0 and
dtap1 interfaces should have been created.
Next set the links for the two interfaces to up via the commands below:
sudo ip link set dtap0 up; sudo ip addr add 192.168.0.250/24 dev dtap0
sudo ip link set dtap1 up; sudo ip addr add 192.168.1.250/24 dev dtap1
Then on the Pktgen command line interface you can start sending packets using the com-
mands start 0 and start 1 or you can start both at the same time with start all. The
command str is an alias for start all and stp is an alias for stop all.
While running you should see the 64 byte counters increasing to verify the traffic is being
looped back. You can use set all size XXX to change the size of the packets after you
stop the traffic. Use the pktgen help command to see a list of all commands. You can also use
the -f option to load a command file or Lua script in pktgen at startup.
The ThunderX NICVF PMD (librte_pmd_thunderx_nicvf) provides poll mode driver support
for the inbuilt NIC found in the Cavium ThunderX SoC family as well as their virtual functions
(VF) in SR-IOV context.
More information can be found at Cavium Networks Official Website.
Features
• CN88xx
• CN81xx
• CN83xx
Prerequisites
• Follow the DPDK Getting Started Guide for Linux to setup the basic DPDK environment.
Pre-Installation Configuration
The following options can be modified in the config file. Please note that enabling debugging
options may affect system performance.
• CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD (default n)
By default it is enabled only for defconfig_arm64-thunderx-* config. Toggle compilation
of the librte_pmd_thunderx_nicvf driver.
• CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_INIT (default n)
Toggle display of initialization related messages.
• CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX (default n)
Toggle display of receive fast path run-time message
• CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX (default n)
Toggle display of transmit fast path run-time message
• CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER (default n)
Toggle display of generic debugging messages
• CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX (default n)
Toggle display of PF mailbox related run-time check messages
Driver Compilation
To compile the ThunderX NICVF PMD for Linux arm64 gcc target, run the following “make”
command:
cd <DPDK-source-directory>
make config T=arm64-thunderx-linuxapp-gcc install
Linux
Running testpmd
This section demonstrates how to launch testpmd with ThunderX NIC VF device managed by
librte_pmd_thunderx_nicvf in the Linux operating system.
1. Load vfio-pci driver:
modprobe vfio-pci
Note: VFIO-NOIOMMU is required only when running in VM context and should not be
enabled otherwise. See also SR-IOV: Prerequisites and sample Application Notes.
3. Bind the ThunderX NIC VF device to vfio-pci loaded in the previous step:
Setup VFIO permissions for regular users and then bind to vfio-pci:
./usertools/dpdk-devbind.py --bind vfio-pci 0002:01:00.2
Example output:
...
...
EAL: probe driver: 177d:11 rte_nicvf_pmd
EAL: using IOMMU type 1 (Type 1)
EAL: PCI memory mapped at 0x3ffade50000
EAL: Trying to map BAR 4 that contains the MSI-X table.
Trying offsets: 0x40000000000:0x0000, 0x10000:0x1f0000
EAL: PCI memory mapped at 0x3ffadc60000
PMD: nicvf_eth_dev_init(): nicvf: device (177d:11) 2:1:0:2
PMD: nicvf_eth_dev_init(): node=0 vf=1 mode=tns-bypass sqs=false
loopback_supported=true
PMD: nicvf_eth_dev_init(): Port 0 (177d:11) mac=a6:c6:d9:17:78:01
Interactive-mode selected
Configuring Port 0 (socket 0)
...
The current ThunderX NIC PF/VF kernel modules map each physical Ethernet port automatically
to a virtual function (VF) and present it as a PCIe-like SR-IOV device. This section provides
instructions to configure SR-IOV with the Linux OS.
1. Verify PF devices capabilities using lspci:
lspci -vvv
Example output:
0002:01:00.0 Ethernet controller: Cavium Networks Device a01e (rev 01)
...
Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
...
Capabilities: [180 v1] Single Root I/O Virtualization (SR-IOV)
...
Note: Unless thunder-nic driver is in use make sure your kernel config includes
CONFIG_THUNDER_NIC_PF setting.
Example output:
0002:01:00.1 Ethernet controller: Cavium Networks Device 0011 (rev 01)
...
Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
...
Kernel driver in use: thunder-nicvf
...
Note: Unless thunder-nicvf driver is in use make sure your kernel config includes
CONFIG_THUNDER_NIC_VF setting.
Example output:
...
0002:01:00.0 'Device a01e' if= drv=thunder-nic unused=vfio-pci
0002:01:00.1 'Device 0011' if=eth0 drv=thunder-nicvf unused=vfio-pci
0002:01:00.2 'Device 0011' if=eth1 drv=thunder-nicvf unused=vfio-pci
...
Example output:
...
0002:01:00.1 'Device 0011' drv=vfio-pci unused=
0002:01:00.2 'Device 0011' drv=vfio-pci unused=
...
0002:01:00.0 'Device a01e' if= drv=thunder-nic unused=vfio-pci
...
8. Refer to section Running testpmd for instruction how to launch testpmd application.
Note: The VNIC driver in the multiqueue setup works differently than other drivers
like ixgbe. We need to bind each specific queue set device separately with the
usertools/dpdk-devbind.py utility.
Note: Depending on the hardware used, the kernel driver sets a threshold vf_id.
VFs that try to attach with an id below or equal to this boundary are considered
primary VFs. VFs that try to attach with an id above this boundary are considered
secondary VFs.
Note: NUMA systems have 12 VFs per port and non-NUMA 6 VFs per port.
# usertools/dpdk-devbind.py --status
We want to bind two physical interfaces with 24 queues each, so we attach two primary VFs
and four secondary queue sets. In this example we choose two 10G interfaces, eth1 (0002:01:00.2)
and eth2 (0002:01:00.3), and four secondary queue sets from the end of the list
(0002:01:01.7-0002:01:02.2).
1. Bind two primary VFs to the vfio-pci driver:
usertools/dpdk-devbind.py -b vfio-pci 0002:01:00.2
usertools/dpdk-devbind.py -b vfio-pci 0002:01:00.3
The nicvf thunderx driver will make use of attached secondary VFs automatically during the
interface configuration stage.
Limitations
CRC striping
The ThunderX SoC family NICs strip the CRC for every packet coming into the host
interface. So, the CRC will be stripped even when the rxmode.hw_strip_crc member is set to 0 in
struct rte_eth_conf.
The ThunderX SoC family NICs support a maximum jumbo frame size of 9K. The value is fixed
and cannot be changed. So, even when the rxmode.max_rx_pkt_len member of struct
rte_eth_conf is set to a value lower than 9200, frames up to 9200 bytes can still reach the
host interface.
The ThunderX SoC family NICs support up to 12 segments per packet when working in
scatter/gather mode. So, setting the MTU will result in EINVAL when the frame size does not fit in
the maximum number of segments.
For details about the virtio spec, refer to Virtio PCI Card Specification written by Rusty Russell.
As a PMD, virtio provides packet reception and transmission callbacks virtio_recv_pkts and
virtio_xmit_pkts.
In virtio_recv_pkts, index in range [vq->vq_used_cons_idx , vq->vq_ring.used->idx) in vring is
available for virtio to burst out.
In virtio_xmit_pkts, the same index range in the vring is available for virtio to clean. Virtio will
enqueue the packets to be transmitted into the vring, advance vq->vq_ring.avail->idx, and then
notify the host back end if necessary.
In this release, the virtio PMD driver provides the basic functionality of packet reception and
transmission.
• It supports merge-able buffers per packet when receiving packets and scattered buffer
per packet when transmitting packets. The packet size supported is from 64 to 1518.
• It supports multicast packets and promiscuous mode.
• The descriptor number for the Rx/Tx queue is hard-coded to be 256 by qemu. If given a
different descriptor number by the upper application, the virtio PMD generates a warning
and falls back to the hard-coded value.
• Features of MAC/VLAN filtering are supported; negotiation with the vhost/backend is needed
to support them. When the backend cannot support VLAN filtering, the virtio application in the
guest should disable VLAN filtering to make sure the virtio port is configured correctly, e.g. by
specifying --disable-hw-vlan on the testpmd command line.
• RTE_PKTMBUF_HEADROOM should be defined larger than sizeof(struct vir-
tio_net_hdr), which is 10 bytes.
• Virtio does not support runtime configuration.
• Virtio supports Link State interrupt.
• Virtio supports Rx interrupts (so far, only a 1:1 mapping of queue to interrupt is supported).
• Virtio supports software VLAN stripping and insertion.
• Virtio supports using port IO to get PCI resource when uio/igb_uio module is not avail-
able.
Prerequisites
This section demonstrates kni vhost back end example setup for Phy-VM Communication.
Fig. 8.5: Host2VM Communication Example Using kni vhost Back End
Other basic DPDK preparations like hugepage enabling, uio port binding are not listed
here. Please refer to the DPDK Getting Started Guide for detailed instructions.
2. Launch the kni user application:
examples/kni/build/app/kni -c 0xf -n 4 -- -p 0x1 -P --config="(0,1,3)"
This command generates one network device vEth0 for the physical port. If more
physical ports are specified, the generated network devices will be vEth1, vEth2, and so on.
For each physical port, kni creates two user threads. One thread loops to fetch packets
from the physical NIC port into the kni receive queue. The other user thread loops to
send packets in the kni transmit queue.
For each physical port, kni also creates a kernel thread that retrieves packets from the kni
receive queue, places them onto kni’s raw socket queue and wakes up the vhost kernel
thread to exchange packets with the virtio virtqueue.
For more details about kni, please refer to Kernel NIC Interface.
3. Enable the kni raw socket functionality for the specified physical NIC port, get the gener-
ated file descriptor and set it in the qemu command line parameter. Always remember to
set ioeventfd_on and vhost_on.
Example:
echo 1 > /sys/class/net/vEth0/sock_en
fd=`cat /sys/class/net/vEth0/sock_fd`
exec qemu-system-x86_64 -enable-kvm -cpu host \
-m 2048 -smp 4 -name dpdk-test1-vm1 \
-drive file=/data/DPDKVMS/dpdk-vm.img \
-netdev tap,fd=$fd,id=mynet_kni,script=no,vhost=on \
-device virtio-net-pci,netdev=mynet_kni,bus=pci.0,addr=0x3,ioeventfd=on \
-vnc :1 -daemonize
In the above example, virtio port 0 in the guest VM will be associated with vEth0, which
in turn corresponds to a physical port, which means received packets come from vEth0
and transmitted packets are sent to vEth0.
4. In the guest, bind the virtio device to the uio_pci_generic kernel module and start the
forwarding application. When the virtio port in the guest bursts Rx, it gets packets from
the raw socket’s receive queue. When the virtio port bursts Tx, it sends packets to the
tx_q.
modprobe uio
echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
modprobe uio_pci_generic
python usertools/dpdk-devbind.py -b uio_pci_generic 00:03.0
5. Use IXIA packet generator to inject a packet stream into the KNI physical port.
The packet reception and transmission flow path is:
IXIA packet generator->82599 PF->KNI Rx queue->KNI raw socket queue->Guest VM
virtio port 0 Rx burst->Guest VM virtio port 0 Tx burst-> KNI Tx queue ->82599 PF->
IXIA packet generator
Fig. 8.7: Host2VM Communication Example Using qemu vhost Back End
-netdev tap,id=vm1_p1,ifname=tap0,script=no,vhost=on
-device virtio-net-pci,netdev=vm1_p1,bus=pci.0,addr=0x3,ioeventfd=on
-device pci-assign,host=04:10.1 \
Interrupt mode
There are three kinds of interrupts from a virtio device over PCI bus: config interrupt, Rx
interrupts, and Tx interrupts. Config interrupt is used for notification of device configuration
changes, especially link status (lsc). Interrupt mode is translated into Rx interrupts in the
context of DPDK.
Example
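A minimal sketch of how an application might request and arm virtio Rx interrupts through the generic ethdev interrupt configuration (the port and queue identifiers are illustrative; error handling omitted):
#include <rte_ethdev.h>

/* Sketch: ask for link-state and Rx queue interrupts at configure time. */
static const struct rte_eth_conf port_conf = {
    .intr_conf = {
        .lsc = 1, /* link status change interrupt */
        .rxq = 1, /* receive queue interrupt */
    },
};

static void setup_rx_interrupt(uint8_t port_id)
{
    rte_eth_dev_configure(port_id, 1, 1, &port_conf);
    /* ... rte_eth_rx_queue_setup(), rte_eth_tx_queue_setup(), rte_eth_dev_start() ... */
    rte_eth_dev_rx_intr_enable(port_id, 0); /* arm the interrupt on Rx queue 0 */
}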
This PMD is a thin wrapper around the DPDK vhost library. The user can handle virtqueues as if
they were normal DPDK ports.
Please refer to the “Vhost Library” chapter of the DPDK Programmer’s Guide for details of vhost.
Currently, the vhost PMD provides the basic functionality of packet reception, transmission and
event handling.
• It supports multiple queues.
• It supports RTE_ETH_EVENT_INTR_LSC and RTE_ETH_EVENT_QUEUE_STATE events.
• It supports Port Hotplug functionality.
• There is no need to stop RX/TX when the user wants to stop a guest or a virtio-net driver on
the guest.
This section demonstrates vhost PMD with testpmd DPDK sample application.
1. Launch the testpmd with vhost PMD:
./testpmd -c f -n 4 --vdev 'net_vhost0,iface=/tmp/sock0,queues=1' -- -i
Other basic DPDK preparations like hugepage enabling are not listed here. Please refer to the DPDK
Getting Started Guide for detailed instructions.
2. Launch the QEMU:
qemu-system-x86_64 <snip>
-chardev socket,id=chr0,path=/tmp/sock0 \
-netdev vhost-user,id=net0,chardev=chr0,vhostforce,queues=1 \
-device virtio-net-pci,netdev=net0
This command attaches one virtio-net device to the QEMU guest. After the initialization
processes between QEMU and the DPDK vhost library are done, the status of the port will be
linked up.
The VMXNET3 adapter is the next generation of a paravirtualized NIC, introduced by VMware*
ESXi. It is designed for performance, offers all the features available in VMXNET2, and adds
several new features such as multi-queue support (also known as Receive Side Scaling, RSS),
IPv6 offloads, and MSI/MSI-X interrupt delivery. One can use the same device in a DPDK
application with VMXNET3 PMD introduced in DPDK API.
In this chapter, two setups with the use of the VMXNET3 PMD are demonstrated:
1. Vmxnet3 with a native NIC connected to a vSwitch
2. Vmxnet3 chaining VMs connected to a vSwitch
For details on the VMXNET3 device, refer to the VMXNET3 driver’s vmxnet3 directory and
support manual from VMware*.
For performance details, refer to the following link from VMware:
http://www.vmware.com/pdf/vsp_4_vmxnet3_perf.pdf
As a PMD, the VMXNET3 driver provides the packet reception and transmission callbacks,
vmxnet3_recv_pkts and vmxnet3_xmit_pkts.
The VMXNET3 PMD handles all the packet buffer memory allocation, resides in guest
address space, and is solely responsible for freeing that memory when it is no longer needed. The packet
buffers and features to be supported are made available to hypervisor via VMXNET3 PCI
configuration space BARs. During RX/TX, the packet buffers are exchanged by their GPAs,
and the hypervisor loads the buffers with packets in the RX case and sends packets to vSwitch
in the TX case.
The VMXNET3 PMD is compiled with vmxnet3 device headers. The interface is similar to that
of the other PMDs available in the DPDK API. The driver pre-allocates the packet buffers and
loads the command ring descriptors in advance. The hypervisor fills those packet buffers on
packet arrival and write completion ring descriptors, which are eventually pulled by the PMD.
After reception, the DPDK application frees the descriptors and loads new packet buffers for
the coming packets. The interrupts are disabled and there is no notification required. This
keeps performance up on the RX side, even though the device provides a notification feature.
In the transmit routine, the DPDK application fills packet buffer pointers in the descriptors of
the command ring and notifies the hypervisor. In response, the hypervisor takes the packets,
passes them to the vSwitch, and writes into the completion descriptor ring. The rings are read
by the PMD in the next transmit routine call and the buffers and descriptors are freed from
memory.
In release 1.6.0, the VMXNET3 PMD provides the basic functionality of packet reception and
transmission. There are several options available for filtering packets at VMXNET3 device level
including:
1. MAC Address based filtering:
• Unicast, Broadcast, All Multicast modes - SUPPORTED BY DEFAULT
• Multicast with Multicast Filter table - NOT SUPPORTED
• Promiscuous mode - SUPPORTED
• RSS based load balancing between queues - SUPPORTED
2. VLAN filtering:
• VLAN tag based filtering without load balancing - SUPPORTED
Note:
• Release 1.6.0 does not support separate headers and body receive cmd_ring and hence,
multiple segment buffers are not supported. Only cmd_ring_0 is used for packet buffers,
one for each descriptor.
• Receive and transmit of scattered packets is not supported.
• Multicast with Multicast Filter table is not supported.
Prerequisites
Note: Depending on the Virtual Machine type, the VMware vSphere Client shows Ethernet
adaptors while adding an Ethernet device. Ensure that the VM type used offers a VMXNET3
device. Refer to the VMware documentation for a list of VM types.
Note: Follow the DPDK Getting Started Guide to setup the basic DPDK environment.
Note: Follow the DPDK Sample Application’s User Guide, L2 Forwarding/L3 Forwarding and
TestPMD for instructions on how to run a DPDK application using an assigned VMXNET3
device.
Note: Other instructions on preparing to use DPDK such as, hugepage enabling, uio port
binding are not listed here. Please refer to DPDK Getting Started Guide and DPDK Sample
Application’s User Guide for detailed instructions.
address needs to be written in packets to hit the other VM’s VMXNET3 interface.
In addition to Poll Mode Drivers (PMDs) for physical and virtual hardware, the DPDK also
includes two pure-software PMDs. These two drivers are:
• A libpcap-based PMD (librte_pmd_pcap) that reads and writes packets using libpcap,
both from files on disk and from physical NIC devices using standard Linux kernel
drivers.
• A ring-based PMD (librte_pmd_ring) that allows a set of software FIFOs (that is, rte_ring)
to be accessed using the PMD APIs, as though they were physical NICs.
Note: The libpcap-based PMD is disabled by default in the build configuration files, owing
to an external dependency on the libpcap development files which must be installed on the
board. Once the libpcap development files are installed, the library can be enabled by setting
CONFIG_RTE_LIBRTE_PMD_PCAP=y and recompiling the DPDK.
For ease of use, the DPDK EAL also has been extended to allow pseudo-Ethernet devices,
using one or more of these drivers, to be created at application startup time during EAL initial-
ization.
To do so, the --vdev= parameter must be passed to the EAL. This takes options to allow
ring and pcap-based Ethernet to be allocated and used transparently by the application. This
can be used, for example, for testing on a virtual machine where there are no Ethernet ports.
Libpcap-based PMD
Pcap-based devices can be created using the virtual device --vdev option. The device name
must start with the net_pcap prefix followed by numbers or letters. The name is unique for
each device. Each device can have multiple stream options and multiple devices can be used.
Multiple device definitions can be arranged using multiple --vdev. Device name and stream
options must be separated by commas as shown below:
$RTE_TARGET/app/testpmd -c f -n 4 \
--vdev 'net_pcap0,stream_opt0=..,stream_opt1=..' \
--vdev='net_pcap1,stream_opt0=..'
Device Streams
Stream definitions can be specified and combined in multiple ways, as long as the following
two rules are respected:
• A device is provided with two different streams - reception and transmission.
• A device is provided with one network interface name used for reading and writing pack-
ets.
The different stream types are:
• rx_pcap: Defines a reception stream based on a pcap file. The driver reads each packet
within the given pcap file as if it was receiving it from the wire. The value is a path to a
valid pcap file.
rx_pcap=/path/to/file.pcap
• tx_pcap: Defines a transmission stream based on a pcap file. The driver writes each
received packet to the given pcap file. The value is a path to a pcap file. The file is
overwritten if it already exists and it is created if it does not.
tx_pcap=/path/to/file.pcap
• rx_iface: Defines a reception stream based on a network interface name. The driver
reads packets coming from the given interface using the Linux kernel driver for that inter-
face. The value is an interface name.
rx_iface=eth0
• tx_iface: Defines a transmission stream based on a network interface name. The driver
sends packets to the given interface using the Linux kernel driver for that interface. The
value is an interface name.
tx_iface=eth0
• iface: Defines a device mapping a network interface. The driver both reads and writes
packets from and to the given interface. The value is an interface name.
iface=eth0
Examples of Usage
Read packets from one pcap file and write them to another:
$RTE_TARGET/app/testpmd -c '0xf' -n 4 \
--vdev 'net_pcap0,rx_pcap=file_rx.pcap,tx_pcap=file_tx.pcap' \
-- --port-topology=chained
Read packets from a network interface and write them to a pcap file:
$RTE_TARGET/app/testpmd -c '0xf' -n 4 \
--vdev 'net_pcap0,rx_iface=eth0,tx_pcap=file_tx.pcap' \
-- --port-topology=chained
Read packets from a pcap file and write them to a network interface:
$RTE_TARGET/app/testpmd -c '0xf' -n 4 \
--vdev 'net_pcap0,rx_pcap=file_rx.pcap,tx_iface=eth1' \
-- --port-topology=chained
One of the first things that testpmd does before starting to forward packets is to flush the RX
streams by reading the first 512 packets on every RX stream and discarding them. When using
a libpcap-based PMD this behavior can be turned off using the following command line option:
--no-flush-rx
It is useful for the case where the rx_pcap is being used and no packets are meant to be
discarded. Otherwise, the first 512 packets from the input pcap file will be discarded by the RX
flushing operation.
$RTE_TARGET/app/testpmd -c '0xf' -n 4 \
--vdev 'net_pcap0,rx_pcap=file_rx.pcap,tx_pcap=file_tx.pcap' \
-- --port-topology=chained --no-flush-rx
Rings-based PMD
To run a DPDK application on a machine without any Ethernet devices, a pair of ring-based
rte_ethdevs can be used as below. The device names passed to the --vdev option must start
with net_ring and take no additional parameters. Multiple devices may be specified, separated
by commas.
./testpmd -c E -n 4 --vdev=net_ring0 --vdev=net_ring1 -- -i
EAL: Detected lcore 1 as core 1 on socket 0
...
Interactive-mode selected
Configuring Port 0 (socket 0)
Configuring Port 1 (socket 0)
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...
Done.
Both drivers can provide similar APIs to allow the user to create PMD instances, that is, rte_ethdev
structures, at run-time in the end application, for example, using the rte_eth_from_rings()
or rte_eth_from_pcaps() APIs. For the rings-based PMD, this functionality could be used, for
example, to allow data exchange between cores using rings to be done in exactly the same
way as sending or receiving packets from an Ethernet device. For the libpcap-based PMD, it
allows an application to open one or more pcap files and use these as a source of packet input
to the application.
Usage Examples
To create two pseudo-Ethernet ports where all traffic sent to a port is looped back for reception
on the same port (error handling omitted for clarity):
#define RING_SIZE 256
#define NUM_RINGS 2
#define SOCKET0 0
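/* A reconstructed sketch of the loop-back setup, assuming the rte_ring_create() API and the
 * rte_eth_from_rings() call shown further below (ring names "R0"/"R1" are illustrative):
 * each port uses its own ring as both RX and TX queue, so that transmitted packets are
 * looped back to the same port. */
struct rte_ring *ring[NUM_RINGS];
int port0, port1;

ring[0] = rte_ring_create("R0", RING_SIZE, SOCKET0, RING_F_SP_ENQ|RING_F_SC_DEQ);
ring[1] = rte_ring_create("R1", RING_SIZE, SOCKET0, RING_F_SP_ENQ|RING_F_SC_DEQ);

/* create two ethdevs, each looping its traffic back on itself */
port0 = rte_eth_from_rings("net_ring0", &ring[0], 1, &ring[0], 1, SOCKET0);
port1 = rte_eth_from_rings("net_ring1", &ring[1], 1, &ring[1], 1, SOCKET0);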
To create two pseudo-Ethernet ports where the traffic is switched between them, that is, traffic
sent to port 0 is read back from port 1 and vice-versa, the final two lines could be changed as
below:
port0 = rte_eth_from_rings("net_ring0", &ring[0], 1, &ring[1], 1, SOCKET0);
port1 = rte_eth_from_rings("net_ring1", &ring[1], 1, &ring[0], 1, SOCKET0);
This type of configuration could be useful in a pipeline model, for example, where one may
want to have inter-core communication using pseudo Ethernet devices rather than raw rings,
for reasons of API consistency.
Enqueuing and dequeuing items from an rte_ring using the rings-based PMD may be slower
than using the native rings API. This is because DPDK Ethernet drivers make use of func-
tion pointers to call the appropriate enqueue or dequeue functions, while the rte_ring specific
functions are direct function calls in the code and are often inlined by the compiler.
Once an ethdev has been created, for either a ring or a pcap-based PMD, it should
be configured and started in the same way as a regular Ethernet device, that is, by
calling rte_eth_dev_configure() to set the number of receive and transmit queues,
then calling rte_eth_rx_queue_setup() / tx_queue_setup() for each of those queues
and finally calling rte_eth_dev_start() to allow transmission and reception of pack-
ets to begin.
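A minimal sketch of that sequence for a single port with one queue in each direction (descriptor counts and the mempool are illustrative; error handling omitted):
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

/* Bring up an ethdev created by rte_eth_from_rings() or rte_eth_from_pcaps(). */
static int start_port(uint8_t port_id, struct rte_mempool *mb_pool)
{
    static const struct rte_eth_conf conf; /* all-default configuration */

    rte_eth_dev_configure(port_id, 1, 1, &conf);              /* 1 RX and 1 TX queue */
    rte_eth_rx_queue_setup(port_id, 0, 128, rte_socket_id(), NULL, mb_pool);
    rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(), NULL);
    return rte_eth_dev_start(port_id);                        /* start RX/TX */
}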
Figures
Fig. 8.1 Virtualization for a Single Port NIC in SR-IOV Mode
Fig. 8.2 Performance Benchmark Setup
Fig. 8.3 Fast Host-based Packet Processing
Fig. 8.4 Inter-VM Communication
Fig. 8.5 Host2VM Communication Example Using kni vhost Back End
Fig. 8.7 Host2VM Communication Example Using qemu vhost Back End
Fig. 8.8 Assigning a VMXNET3 interface to a VM using VMware vSphere Client
Fig. 8.9 VMXNET3 with a Native NIC Connected to a vSwitch
Fig. 8.10 VMXNET3 Chaining VMs Connected to a vSwitch
The AESNI MB PMD (librte_pmd_aesni_mb) provides poll mode crypto driver support for
utilizing Intel multi buffer library, see the white paper Fast Multi-buffer IPsec Implementations
on Intel® Architecture Processors.
The AES-NI MB PMD has currently only been tested on Fedora 21 64-bit with gcc.
Features
Hash algorithms:
• RTE_CRYPTO_HASH_SHA1_HMAC
• RTE_CRYPTO_HASH_SHA224_HMAC
• RTE_CRYPTO_HASH_SHA256_HMAC
• RTE_CRYPTO_HASH_SHA384_HMAC
• RTE_CRYPTO_HASH_SHA512_HMAC
Limitations
Installation
To build DPDK with the AESNI_MB_PMD the user is required to download the multi-buffer
library from here and compile it on their system before building DPDK. The latest
version of the library supported by this PMD is v0.44, which can be downloaded from
https://github.com/01org/intel-ipsec-mb/archive/v0.44.zip.
make
Initialization
• max_nb_sessions: Specify the maximum number of sessions that can be created (2048
by default).
Example:
./l2fwd-crypto -c 40 -n 4 --vdev="crypto_aesni_mb,socket_id=1,max_nb_sessions=128"
The AES-NI GCM PMD (librte_pmd_aesni_gcm) provides poll mode crypto driver support for
utilizing Intel ISA-L crypto library, which provides operation acceleration through the AES-NI
instruction sets for AES-GCM authenticated cipher algorithm.
Features
Installation
To build DPDK with the AESNI_GCM_PMD the user is required to install the
libisal_crypto library in the build environment. For download and more details please
visit https://github.com/01org/isa-l_crypto.
Initialization
• max_nb_sessions: Specify the maximum number of sessions that can be created (2048
by default).
Example:
./l2fwd-crypto -c 40 -n 4 --vdev="crypto_aesni_gcm,socket_id=1,max_nb_sessions=128"
Limitations
• Chained mbufs are supported but only out-of-place (destination mbuf must be contigu-
ous).
• Hash only is not supported.
• Cipher only is not supported.
This code provides the initial implementation of the ARMv8 crypto PMD. The driver uses
ARMv8 cryptographic extensions to process chained crypto operations in an optimized way.
The core functionality is provided by a low-level library written in assembly code.
Features
ARMv8 Crypto PMD has support for the following algorithm pairs:
Supported cipher algorithms:
• RTE_CRYPTO_CIPHER_AES_CBC
Supported authentication algorithms:
• RTE_CRYPTO_AUTH_SHA1_HMAC
• RTE_CRYPTO_AUTH_SHA256_HMAC
Installation
• RTE_CPUFLAG_SHA1
• RTE_CPUFLAG_SHA2
• RTE_CPUFLAG_NEON
Initialization
Users can use the app/test application to check how to use this PMD and to verify crypto processing.
The test name is cryptodev_sw_armv8_autotest. For performance testing,
cryptodev_sw_armv8_perftest can be used.
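For instance, the autotest can be launched from the test application's interactive prompt along these lines (the core mask and memory channel count are illustrative):
./build/app/test -c 0x3 -n 4
RTE>>cryptodev_sw_armv8_autotest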
Limitations
The KASUMI PMD (librte_pmd_kasumi) provides poll mode crypto driver support for utilizing
Intel Libsso library, which implements F8 and F9 functions for KASUMI UEA1 cipher and UIA1
hash algorithms.
Features
Limitations
Installation
To build DPDK with the KASUMI_PMD the user is required to download the export controlled
libsso_kasumi library, by requesting it from
https://networkbuilders.intel.com/network-technologies/dpdk. Once approval has been granted,
the user needs to log in at https://networkbuilders.intel.com/dpdklogin and click on the
“Kasumi Bit Stream crypto library” link to download the library. After downloading the library,
the user needs to unpack and compile it on their system before building DPDK:
make
Note: To build the PMD as a shared library, the libsso_kasumi library must be built as follows:
make KASUMI_CFLAGS=-DKASUMI_C
Initialization
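Being a virtual device, the KASUMI PMD can be expected to be initialized in the same way as the other libsso-based PMDs in this guide; a hedged sketch following that pattern (the device name and parameters are assumptions):
./l2fwd-crypto -c 40 -n 4 --vdev="crypto_kasumi,socket_id=1,max_nb_sessions=128"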
This code provides the initial implementation of the OpenSSL poll mode driver. All cryptography
operations use the OpenSSL library crypto API. Each algorithm uses the EVP interface from the
OpenSSL API, which is recommended by the OpenSSL maintainers.
For more details about the OpenSSL library please visit the OpenSSL webpage: https://www.openssl.org/
Features
Installation
To compile openssl PMD, it has to be enabled in the config/common_base file and appropriate
openssl packages have to be installed in the build environment.
The newest openssl library version is supported:
• 1.0.2h-fips 3 May 2016
Older versions that were also verified:
• 1.0.1f 6 Jan 2014
• 1.0.1 14 Mar 2012
For Ubuntu 14.04 LTS these packages have to be installed in the build system:
sudo apt-get install openssl
sudo apt-get install libc6-dev-i386 (for i686-native-linuxapp-gcc target)
This code was also verified on Fedora 24. This code was NOT yet verified on FreeBSD.
Initialization
Users can use the app/test application to check how to use this PMD and to verify crypto processing.
The test name is cryptodev_openssl_autotest. For performance testing, cryptodev_openssl_perftest
can be used.
To verify real traffic, the l2fwd-crypto example can be used with this command:
sudo ./build/l2fwd-crypto -c 0x3 -n 4 --vdev "crypto_openssl" \
--vdev "crypto_openssl" -- -p 0x3 --chain CIPHER_HASH \
--cipher_op ENCRYPT --cipher_algo AES_CBC \
--cipher_key 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f \
--iv 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:ff \
--auth_op GENERATE --auth_algo SHA1_HMAC \
--auth_key 11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11\
:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11\
:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11
Limitations
The Null Crypto PMD (librte_pmd_null_crypto) provides a crypto poll mode driver with a
minimal implementation of a software crypto device. As a null device it does not
modify the data in the mbuf on which the crypto operation is to operate, and it only has support
for a single cipher and authentication algorithm.
When a burst of mbufs is submitted to a Null Crypto PMD for processing then each mbuf in the
burst will be enqueued in an internal buffer for collection on a dequeue call as long as the mbuf
has a valid rte_mbuf_offload operation with a valid rte_cryptodev_session or rte_crypto_xform
chain of operations.
Features
Modes:
• RTE_CRYPTO_XFORM_CIPHER ONLY
• RTE_CRYPTO_XFORM_AUTH ONLY
• RTE_CRYPTO_XFORM_CIPHER THEN RTE_CRYPTO_XFORM_AUTH
• RTE_CRYPTO_XFORM_AUTH THEN RTE_CRYPTO_XFORM_CIPHER
Cipher algorithms:
• RTE_CRYPTO_CIPHER_NULL
Authentication algorithms:
• RTE_CRYPTO_AUTH_NULL
Limitations
• Only in-place is currently supported (destination address is the same as source address).
Installation
The Null Crypto PMD is enabled and built by default in both the Linux and FreeBSD builds.
Initialization
The Scheduler PMD is a software crypto PMD which is capable of attaching hardware and/or
software cryptodevs and distributing ingress crypto ops among them in a configurable
manner.
Limitations
Installation
Initialization
• socket_id: Specify the socket where the memory for the device is going to be allocated
(by default, socket_id will be the socket where the core that is creating the PMD is running
on).
• max_nb_sessions: Specify the maximum number of sessions that can be created. This
value may be overwritten internally if too many devices are attached.
• slave: If a cryptodev has been initialized with a specific name, it can be attached to the
scheduler using this parameter by simply passing its name here. Multiple cryptodevs can be
attached initially by specifying this parameter multiple times.
Example:
... --vdev "crypto_aesni_mb_pmd,name=aesni_mb_1" --vdev "crypto_aesni_mb_pmd,name=aesni_mb_2" -
Note:
• The scheduler cryptodev cannot be started unless the scheduling mode is set and at least
one slave is attached. Also, to reconfigure the scheduler at run time, for example to attach or
detach slave(s), change the scheduling mode, or enable/disable crypto op ordering, the
scheduler must be stopped first; otherwise an error will be returned.
• The crypto op reordering feature uses the userdata field of every mbuf to be processed to
store temporary data. By the end of processing, the field is set to NULL; any value previously
stored in this field will be lost.
Currently the Crypto Scheduler PMD library supports the following modes of operation:
• CDEV_SCHED_MODE_ROUNDROBIN:
Round-robin mode, which distributes the enqueued burst of crypto ops among its
slaves in a round-robin manner. This mode may help to fill the throughput gap
between the physical core and the existing cryptodevs to increase the overall per-
formance.
The SNOW 3G PMD (librte_pmd_snow3g) provides poll mode crypto driver support for utiliz-
ing Intel Libsso library, which implements F8 and F9 functions for SNOW 3G UEA2 cipher and
UIA2 hash algorithms.
Features
• RTE_CRYPTO_AUTH_SNOW3G_UIA2
Limitations
Installation
To build DPDK with the SNOW3G_PMD, the user is required to download the export-controlled
libsso_snow3g library by requesting it from https://networkbuilders.intel.com/network-technologies/dpdk.
Once approval has been granted, the user needs to log in to
https://networkbuilders.intel.com/dpdklogin and click on the “Snow3G Bit Stream crypto library”
link to download the library. After downloading the library, the user needs to unpack and
compile it on their system before building DPDK:
make snow3G
Initialization
The QAT PMD provides poll mode crypto driver support for Intel QuickAssist Technology
DH895xxC, Intel QuickAssist Technology C62x and Intel QuickAssist Technology C3xxx
hardware accelerator.
Features
Limitations
• Hash-only is not supported, except for SNOW 3G UIA2 and KASUMI F9.
• Only supports the session-oriented API implementation (session-less APIs are not sup-
ported).
• SNOW 3G (UEA2) and KASUMI (F8) supported only if cipher length, cipher offset fields
are byte-aligned.
• SNOW 3G (UIA2) and KASUMI (F9) supported only if hash length, hash offset fields are
byte-aligned.
• No BSD support, as the BSD QAT kernel driver is not available.
Installation
To use the DPDK QAT PMD an SRIOV-enabled QAT kernel driver is required. The VF devices
exposed by this driver will be used by QAT PMD.
To enable QAT in DPDK, follow the instructions mentioned in
http://dpdk.org/doc/guides/linux_gsg/build_dpdk.html
Quick instructions as follows:
make config T=x86_64-native-linuxapp-gcc
sed -i 's,\(CONFIG_RTE_LIBRTE_PMD_QAT\)=n,\1=y,' build/.config
make
If you are running on kernel 4.4 or greater, see instructions for Installation using kernel.org
driver below. If you are on a kernel earlier than 4.4, see Installation using 01.org QAT driver .
For Intel QuickAssist Technology C62x and Intel QuickAssist Technology C3xxx device,
kernel 4.5 or greater is needed. See instructions for Installation using kernel.org driver below.
NOTE: There is no driver available for Intel QuickAssist Technology C62x and Intel Quick-
Assist Technology C3xxx devices on 01.org.
Download the latest QuickAssist Technology Driver from 01.org. Consult the Getting Started
Guide at the same URL for further information.
The steps below assume you are:
• Building on a platform with one DH895xCC device.
• Using package qatmux.l.2.3.0-34.tgz.
• On Fedora21 kernel 3.17.4-301.fc21.x86_64.
In the BIOS ensure that SRIOV is enabled and VT-d is disabled.
Uninstall any existing QAT driver, for example by running:
• ./installer.sh uninstall in the directory where originally installed.
• or rmmod qat_dh895xcc; rmmod intel_qat.
Build and install the SRIOV-enabled QAT driver:
mkdir /QAT
cd /QAT
# copy qatmux.l.2.3.0-34.tgz to this location
tar zxof qatmux.l.2.3.0-34.tgz
export ICP_WITHOUT_IOMMU=1
./installer.sh install QAT1.6 host
If the build fails due to missing header files you may need to do the following:
• sudo yum install zlib-devel
• sudo yum install openssl-devel
If the build or install fails due to mismatching kernel sources you may need to do the following:
• sudo yum install kernel-headers-`uname -r`
• sudo yum install kernel-src-`uname -r`
• sudo yum install kernel-devel-`uname -r`
Next, you need to expose the Virtual Functions (VFs) using the sysfs file system.
First find the bdf of the physical function (PF) of the DH895xCC device:
lspci -d:435
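The VFs are then exposed by writing to the PF's sriov_numvfs file. For example, assuming the
PF is at 03:00.0 and that the kernel driver registers itself as dh895xcc (both are assumptions to
check against your system):
echo 32 > /sys/bus/pci/drivers/dh895xcc/0000\:03\:00.0/sriov_numvfs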
If you get an error, it’s likely you’re using a QAT kernel driver earlier than kernel 4.4.
To verify that the VFs are available for use, use lspci -d:443 to confirm that the bdfs of the 32
VF devices are available per DH895xCC device.
To complete the installation - follow instructions in Binding the available VFs to the DPDK UIO
driver .
Note: If the QAT kernel modules are not loaded and you see an error like Failed to
load MMP firmware qat_895xcc_mmp.bin, this may be a result of not using a
distribution kernel, but updating the kernel directly.
Download firmware from the kernel firmware repo at:
http://git.kernel.org/cgit/linux/kernel/git/firmware/linux-firmware.git/tree/
Copy the qat binaries to /lib/firmware:
cp qat_895xcc.bin /lib/firmware
cp qat_895xcc_mmp.bin /lib/firmware
cd to your linux source root directory and start the qat kernel modules:
insmod ./drivers/crypto/qat/qat_common/intel_qat.ko
insmod ./drivers/crypto/qat/qat_dh895xcc/qat_dh895xcc.ko
Note: The following warning in /var/log/messages can be ignored: IOMMU should be
enabled for SR-IOV to work correctly.
For Intel QuickAssist Technology C62x: Assuming you are running on at least a 4.5 kernel,
you can use the stock kernel.org QAT driver to start the QAT hardware.
The steps below assume you are:
• Running DPDK on a platform with one C62x device.
• On a kernel at least version 4.5.
In BIOS ensure that SRIOV is enabled and either a) disable VT-d or b) enable VT-d and set
"intel_iommu=on iommu=pt" in the grub file.
Ensure the QAT driver is loaded on your system, by executing:
lsmod | grep qat
Next, you need to expose the VFs using the sysfs file system.
First find the bdf of the C62x device:
lspci -d:37c8
For each c62x device there are 3 PFs. Using the sysfs, for each PF, enable the 16 VFs:
echo 16 > /sys/bus/pci/drivers/c6xx/0000\:1a\:00.0/sriov_numvfs
If you get an error, it’s likely you’re using a QAT kernel driver earlier than kernel 4.5.
To verify that the VFs are available for use, use lspci -d:37c9 to confirm that the bdfs of the 48
VF devices are available per C62x device.
To complete the installation - follow instructions in Binding the available VFs to the DPDK UIO
driver .
For Intel QuickAssist Technology C3xxx: Assuming you are running on at least a 4.5 kernel,
you can use the stock kernel.org QAT driver to start the QAT hardware.
The steps below assume you are:
• Running DPDK on a platform with one C3xxx device.
• On a kernel at least version 4.5.
In BIOS ensure that SRIOV is enabled and either a) disable VT-d or b) enable VT-d and set
"intel_iommu=on iommu=pt" in the grub file.
Ensure the QAT driver is loaded on your system, by executing:
lsmod | grep qat
Next, you need to expose the Virtual Functions (VFs) using the sysfs file system.
First find the bdf of the physical function (PF) of the C3xxx device
lspci -d:19e2
You should see output similar to:
01:00.0 Co-processor: Intel Corporation Device 19e2
For each c3xxx device there is 1 PF. Using the sysfs, enable the 16 VFs:
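For example, assuming the 01:00.0 bdf shown above and that the kernel driver registers itself
as c3xxx (both are assumptions to check against your system):
echo 16 > /sys/bus/pci/drivers/c3xxx/0000\:01\:00.0/sriov_numvfs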
If you get an error, it’s likely you’re using a QAT kernel driver earlier than kernel 4.5.
To verify that the VFs are available for use, use lspci -d:19e3 to confirm that the bdfs of the 16
VF devices are available per C3xxx device. To complete the installation, follow the instructions in
Binding the available VFs to the DPDK UIO driver .
For Intel(R) QuickAssist Technology DH895xcc device: The unbind command below as-
sumes bdfs of 03:01.00-03:04.07, if yours are different adjust the unbind command be-
low:
cd $RTE_SDK
modprobe uio
insmod ./build/kmod/igb_uio.ko
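The unbind/bind loop itself is not reproduced above; the following is a minimal sketch of it,
assuming the 03:01.0-03:04.7 bdf range mentioned above and the DH895xCC VF device ID
0443. The same pattern, with the appropriate bdfs and VF device IDs (37c9 for C62x, 19e3 for
C3xxx), applies to the other devices below:
# Unbind each VF from whichever kernel driver currently owns it.
for device in $(seq 1 4); do
    for fn in $(seq 0 7); do
        echo -n 0000:03:0${device}.${fn} > \
            /sys/bus/pci/devices/0000\:03\:0${device}.${fn}/driver/unbind
    done
done
# Registering the VF vendor/device ID with igb_uio binds the now-free VFs to it.
echo "8086 0443" > /sys/bus/pci/drivers/igb_uio/new_id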
You can use lspci -vvd:443 to confirm that all devices are now in use by igb_uio kernel
driver.
For Intel(R) QuickAssist Technology C62x device: The unbind command below assumes
bdfs of 1a:01.00-1a:02.07, 3d:01.00-3d:02.07 and 3f:01.00-3f:02.07, if yours
are different adjust the unbind command below:
cd $RTE_SDK
modprobe uio
insmod ./build/kmod/igb_uio.ko
You can use lspci -vvd:37c9 to confirm that all devices are now in use by igb_uio kernel
driver.
For Intel(R) QuickAssist Technology C3xxx device: The unbind command below assumes
bdfs of 01:01.00-01:02.07, if yours are different adjust the unbind command below:
cd $RTE_SDK
modprobe uio
insmod ./build/kmod/igb_uio.ko
Then unbind the VFs and register them with igb_uio using the same loop pattern shown for the
DH895xCC device above, substituting the C3xxx bdfs and the VF device ID 19e3.
You can use lspci -vvd:19e3 to confirm that all devices are now in use by igb_uio kernel
driver.
The other way to bind the VFs to the DPDK UIO driver is by using the dpdk-devbind.py
script:
cd $RTE_SDK
./usertools/dpdk-devbind.py -b igb_uio 0000:03:01.1
The ZUC PMD (librte_pmd_zuc) provides poll mode crypto driver support for utilizing Intel
Libsso library, which implements F8 and F9 functions for ZUC EEA3 cipher and EIA3 hash
algorithms.
Features
Limitations
Installation
To build DPDK with the ZUC_PMD the user is required to download the export con-
trolled libsso_zuc library, by requesting it from https://networkbuilders.intel.com/network-
technologies/dpdk. Once approval has been granted, the user needs to log in
https://networkbuilders.intel.com/dpdklogin and click on “ZUC Library” link, to download the
library. After downloading the library, the user needs to unpack and compile it on their system
before building DPDK:
make
Initialization
Xen Guide
Introduction
DPDK provides a para-virtualization packet switching solution, based on the Xen hypervisor's
Grant Table (see Note 1 below), which provides simple and fast packet switching capability
between guest domains and the host domain based on MAC address or VLAN tag.
This solution is comprised of two components: a Poll Mode Driver (PMD) as the front end in
the guest domain and a switching back end in the host domain. XenStore is used to exchange
configuration information between the PMD front end and the switching back end, including grant
reference IDs for shared Virtio RX/TX rings, MAC address, device state, and so on. XenStore
is an information storage space shared between domains; see further information on XenStore
below.
The front end PMD can be found in the DPDK directory lib/librte_pmd_xenvirt and the back end
example in examples/vhost_xen.
The PMD front end and switching back end use shared Virtio RX/TX rings as a para-virtualized
interface. The Virtio ring is created by the front end, and Grant Table references for the ring are
passed to the host. The switching back end maps those Grant Table references and creates
shared rings in a mapped address space.
The following diagram describes the functionality of the DPDK Xen Packet-Switching Solution.
Note 1: The Xen hypervisor uses a mechanism called a Grant Table to share memory between
domains (http://wiki.xen.org/wiki/Grant Table).
A diagram of the design is shown below, where “gva” is the Guest Virtual Address, which is the
data pointer of the mbuf, and “hva” is the Host Virtual Address:
In this design, a Virtio ring is used as a para-virtualized interface for better performance over
a Xen private ring when packet switching to and from a VM. The additional performance is
gained by avoiding a system call and memory map in each memory copy with a XEN private
ring.
Device Creation
Multiple mempools and multiple Virtio devices may exist in the guest domain; the first number
(idx#) is the index, starting from zero.
The idx#_mempool_va entry stores the guest virtual address for mempool idx#.
The idx#_ether_addr entry stores the MAC address of the guest Virtio device.
After all gref# IDs are retrieved, the host maps them to a continuous virtual address space.
With the guest mempool virtual address, the host establishes 1:1 address mapping. With
multiple guest mempools, the host establishes multiple address translation regions.
The switching back end monitors changes in XenStore. When the back end detects that a new
Virtio device has been created in a guest domain, it will:
1. Retrieve Grant and configuration information from XenStore.
2. Map and create a Virtio ring.
3. Map mempools in the host and establish address translation between the guest address
and host address.
4. Select a free VMDQ pool, set its affinity with the Virtio device, and set the MAC/ VLAN
filter.
Packet Reception
When packets arrive from an external network, the MAC/VLAN filter classifies packets into
queues in one VMDQ pool. As each pool is bonded to a Virtio device in some guest domain,
the switching back end will:
1. Fetch an available entry from the Virtio RX ring.
2. Get gva, and translate it to hva.
3. Copy the contents of the packet to the memory buffer pointed to by gva.
The DPDK application in the guest domain, based on the PMD front end, is polling the shared
Virtio RX ring for available packets and receives them on arrival.
Packet Transmission
When a Virtio device in one guest domain is to transmit a packet, it puts the virtual address of
the packet’s data area into the shared Virtio TX ring.
The packet switching back end is continuously polling the Virtio TX ring. When new packets
are available for transmission from a guest, it will:
1. Fetch an available entry from the Virtio TX ring.
2. Get gva, and translate it to hva.
3. Copy the packet from hva to the host mbuf’s data area.
4. Compare the destination MAC address with all the MAC addresses of the Virtio devices
it manages. If a match exists, it directly copies the packet to the matched Virtio RX ring.
Otherwise, it sends the packet out through hardware.
Note: The packet switching back end is for demonstration purposes only. The user could
implement their switching logic based on this example. In this example, only one physical port
on the host is supported. Multiple segments are not supported. The biggest mbuf supported
is 4KB. When the back end is restarted, all front ends must also be restarted.
Validated Environment
Host:
Xen-hypervisor: 4.2.2
Distribution: Fedora release 18
Kernel: 3.10.0
Xen development package (including Xen, Xen-libs, xen-devel): 4.2.3
Guest:
Distribution: Fedora 16 and 18
Kernel: 3.6.11
Note that the following commands might not be the same on different Linux* distributions.
• Install xen-devel package:
yum install xen-devel.x86_64
modprobe -r xen_gntdev
modprobe xen_gntdev limit=1000000
Note: The default limit for earlier versions of the xen_gntdev driver is 1024. That is insufficient
to support the mapping of multiple Virtio devices into multiple VMs, so it is necessary to enlarge
the limit by reloading this module. The default limit of recent versions of xen_gntdev is 1048576.
The rough calculation of this limit is:
limit = nb_mbuf# * VM#.
In DPDK examples, nb_mbuf# is normally 8192, so, for example, four VMs would require a limit
of at least 32768.
1. Edit config/common_linuxapp, and change the default configuration value for the follow-
ing two items:
CONFIG_RTE_LIBRTE_XEN_DOM0=y
CONFIG_RTE_LIBRTE_PMD_XENVIRT=n
3. Ensure that RTE_SDK and RTE_TARGET are correctly set. Build the switching example:
make -C examples/vhost_xen/
4. Load the Xen DPDK memory management module and preallocate memory:
insmod ./x86_64-native-linuxapp-gcc/build/lib/librte_eal/linuxapp/xen_dom0/rte_dom0_mm.ko
echo 2048 > /sys/kernel/mm/dom0-mm/memsize-mB/memsize
Note: On Xen Dom0, there is no hugepage support. Under Xen Dom0, the DPDK uses a
special memory management kernel module to allocate chunks of physically contiguous
memory. Refer to the DPDK Getting Started Guide for more information on memory
management in the DPDK. In the above command, 4 GB of memory is reserved (2048
pages of 2 MB each) for the DPDK.
In this case, 0000:09:00.0 is the PCI address for the NIC controller.
6. Run the switching back end example:
examples/vhost_xen/build/vhost-switch -c f -n 3 --xen-dom0 -- -p1
Note: The --xen-dom0 option instructs the DPDK to use the Xen kernel module to allocate
memory.
Other Parameters:
• --vm2vm
The vm2vm parameter enables/disables packet switching in software. Disabling vm2vm
implies that packets transmitted by a VM will always go to the Ethernet port and will not
be switched to another VM.
• --stats
The stats parameter controls the printing of Virtio-net device statistics. The parameter
specifies the interval (in seconds) at which to print statistics; an interval of 0 seconds
disables statistics printing.
Note: Before the Linux kernel version 3.8-rc5, Jan 15th 2013, a critical defect occurs when a
guest is heavily allocating Grant pages. The Grant driver allocates fewer pages than expected
which causes kernel memory corruption. This happens, for example, when a guest uses the v1
format of a Grant table entry and allocates more than 8192 Grant pages (this number might be
different on different hypervisor versions). To work around this issue, set the limit for gntalloc
driver to 6000. (The kernel normally allocates hundreds of Grant pages with one Xen front end
per virtualized device). If the kernel allocates a lot of Grant pages, for example, if the user uses
multiple net front devices, it is best to upgrade the Grant alloc driver. This defect has been
fixed in kernel version 3.8-rc5 and later.
3. Enable hugepages. Refer to the DPDK Getting Started Guide for instructions on how to
use hugepages in the DPDK.
4. Run TestPMD. Refer to DPDK TestPMD Application User Guide for detailed parameter
usage.
./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 --vdev="net_xenvirt0,mac=00:00:00:00:00:11"
testpmd>set fwd mac
testpmd>start
Loopback Mode
The above message indicates that device 0 has been registered with MAC address
00:00:00:00:00:11 and VLAN tag 1000. Any packets received on the NIC with these values
are placed on the device's receive queue.
Configure a packet stream in the packet generator, setting the destination MAC address to
00:00:00:00:00:11 and the VLAN to 1000; the guest Virtio receives these packets and sends
them out with destination MAC address 00:00:00:00:00:22.
Inter-VM Mode
Configure a packet stream in the packet generator, and set the destination MAC address to
00:00:00:00:00:11 and VLAN to 1000. The packets received in Virtio in guest VM1 will be
forwarded to Virtio in guest VM2 and then sent out through hardware with destination MAC
address 00:00:00:00:00:33.
The packet flow is:
packet generator->Virtio in guest VM1->switching backend->Virtio in guest VM2->switching
backend->wire
Contributor’s Guidelines
Description
This document specifies the preferred style for source files in the DPDK source tree. It is based
on the Linux Kernel coding guidelines and the FreeBSD 7.2 Kernel Developer’s Manual (see
man style(9)), but was heavily modified for the needs of the DPDK.
General Guidelines
The rules and guidelines given in this document cannot cover every situation, so the following
general guidelines should be used as a fallback:
• The code style should be consistent within each individual file.
• In the case of creating new files, the style should be consistent within each file in a given
directory or module.
• The primary reason for coding standards is to increase code readability and comprehen-
sibility, therefore always use whatever option will make the code easiest to read.
Line length is recommended to be not more than 80 characters, including comments. [Tab stop
size should be assumed to be 8 characters wide.]
Note: The above is a recommendation, and not a hard limit. However, it is expected that the
recommendations should be followed in all but the rarest situations.
C Comment Style
Usual Comments
These comments should be used in normal cases. To document a public API, a doxygen-like
format must be used: refer to Doxygen Guidelines.
/*
* VERY important single-line comments look like this.
*/
/*
* Multi-line comments look like this. Make them real sentences. Fill
* them so they look like real paragraphs.
*/
License Header
Each file should begin with a special comment containing the appropriate copyright and license
for the file. Generally this is the BSD License, except for code for Linux Kernel modules.
After any copyright header, a blank line should be left before any other contents, e.g. include
statements in a C file.
C Preprocessor Directives
Header Includes
#include <rte_eal.h>
#include <rte_ring.h>
#include <rte_mempool.h>
#include "application.h"
/* Code */
#endif /* _FILE_H_ */
Macros
Do not #define or declare names except with the standard DPDK prefix: RTE_. This is to
ensure there are no collisions with definitions in the application itself.
The names of “unsafe” macros (ones that have side effects), and the names of macros for
manifest constants, are all in uppercase.
The expansions of expression-like macros are either a single token or have outer parentheses.
If a macro is an inline expansion of a function, the function name is all in lowercase and the
macro has the same name all in uppercase. If the macro encapsulates a compound statement,
enclose it in a do-while loop, so that it can be used safely in if statements. Any final statement-
terminating semicolon should be supplied by the macro invocation rather than the macro, to
make parsing easier for pretty-printers and editors.
For example:
#define MACRO(x, y) do { \
variable = (x) + (y); \
(y) += 2; \
} while(0)
Note: Wherever possible, enums and inline functions should be preferred to macros, since
they provide additional degrees of type-safety and can allow compilers to emit extra warnings
about unsafe code.
Conditional Compilation
• When code is conditionally compiled using #ifdef or #if, a comment may be added
following the matching #endif or #else to permit the reader to easily discern where
conditionally compiled code regions end.
• This comment should be used only for (subjectively) long regions, regions greater than 20
lines, or where a series of nested #ifdef‘s may be confusing to the reader. Exceptions
may be made for cases where code is conditionally not compiled for the purposes of
lint(1), or other tools, even though the uncompiled region may be small.
• The comment should be separated from the #endif or #else by a single space.
• For short conditionally compiled regions, a closing comment should not be used.
• The comment for #endif should match the expression used in the corresponding #if
or #ifdef.
• The comment for #else and #elif should match the inverse of the expression(s) used
in the preceding #if and/or #elif statements.
• In the comments, the subexpression defined(FOO) is abbreviated as “FOO”. For the
purposes of comments, #ifndef FOO is treated as #if !defined(FOO).
#ifdef KTRACE
#include <sys/ktrace.h>
#endif
#ifdef COMPAT_43
/* A large region here, or other conditional code. */
#else /* !COMPAT_43 */
/* Or here. */
#endif /* COMPAT_43 */
#ifndef COMPAT_43
/* Yet another large region here, or other conditional code. */
#else /* COMPAT_43 */
/* Or here. */
#endif /* !COMPAT_43 */
Note: Conditional compilation should be used only when absolutely necessary, as it increases
the number of target binaries that need to be built and tested.
C Types
Integers
For fixed/minimum-size integer values, the project uses the form uintXX_t (from stdint.h) in-
stead of older BSD-style integer identifiers of the form u_intXX_t.
Enumerations
• Enum types should be used in preference to macros #defining a set of (sequential) val-
ues.
• Enum types should be prefixed with rte_ and the elements by a suitable prefix [gener-
ally starting RTE_<enum>_ - where <enum> is a shortname for the enum type] to avoid
namespace collisions.
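For example (the enum and its values below are purely illustrative):
enum rte_widget_state {
        RTE_WIDGET_STATE_IDLE,
        RTE_WIDGET_STATE_RUNNING,
        RTE_WIDGET_STATE_STOPPED,
};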
Bitfields
The developer should group bitfields that are included in the same integer, as follows:
struct grehdr {
uint16_t rec:3,
srr:1,
seq:1,
key:1,
routing:1,
csum:1,
version:3,
reserved:4,
ack:1;
/* ... */
}
Variable Declarations
In declarations, do not put any whitespace between asterisks and adjacent tokens, except for
tokens that are identifiers related to types. (These identifiers are the names of basic types, type
qualifiers, and typedef-names other than the one being declared.) Separate these identifiers
from asterisks using a single space.
For example:
int *x; /* no space after asterisk */
int * const x; /* space after asterisk when using a type qualifier */
• All externally-visible variables should have an rte_ prefix in the name to avoid names-
pace collisions.
• Do not use uppercase letters - either in the form of ALL_UPPERCASE, or CamelCase -
in variable names. Lower-case letters and underscores only.
Structure Declarations
• In general, when declaring variables in new structures, declare them sorted by use, then
by size (largest to smallest), and then in alphabetical order. Sorting by use means that
commonly used variables are used together and that the structure layout makes logical
sense. Ordering by size then ensures that as little padding is added to the structure as
possible.
• For existing structures, additions should be added to the end of the structure for backward
compatibility reasons.
• Each structure element gets its own line.
• Try to make the structure readable by aligning the member names using spaces as shown
below.
• Names following extremely long types, which therefore cannot be easily aligned with the
rest, should be separated by a single space.
struct foo {
struct foo *next; /* List of active foo. */
struct mumble amumble; /* Comment for mumble. */
int bar; /* Try to align the comments. */
struct verylongtypename *baz; /* Won't fit with other members */
};
• Major structures should be declared at the top of the file in which they are used, or in
separate header files if they are used in multiple source files.
• Use of the structures should be by separate variable declarations and those declarations
must be extern if they are declared in a header file.
• Externally visible structure definitions should have the structure name prefixed by rte_
to avoid namespace collisions.
Queues
Use queue(3) macros rather than rolling your own lists, whenever possible. Thus, the previous
example would be better written:
#include <sys/queue.h>
struct foo {
LIST_ENTRY(foo) link; /* Use queue macros for foo lists. */
struct mumble amumble; /* Comment for mumble. */
int bar; /* Try to align the comments. */
struct verylongtypename *baz; /* Won't fit with other members */
};
DPDK also provides an optimized way to store elements in lockless rings. This should be used
in all data-path code, when there are several consumer and/or producers to avoid locking for
concurrent access.
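As a sketch of that approach (the ring name, size, and wrapper functions below are illustrative
only), the rte_ring API can be used roughly as follows:
#include <rte_ring.h>
#include <rte_lcore.h>

static struct rte_ring *work_ring;

static int
work_ring_init(void)
{
        /* 1024 slots, allocated on the caller's NUMA socket. Flags == 0 gives a
         * multi-producer/multi-consumer ring, so no extra locking is needed. */
        work_ring = rte_ring_create("work_ring", 1024, rte_socket_id(), 0);
        return (work_ring == NULL) ? -1 : 0;
}

static int
work_ring_pass(void *obj)
{
        void *out;

        if (rte_ring_enqueue(work_ring, obj) != 0)
                return -1; /* ring full */
        if (rte_ring_dequeue(work_ring, &out) != 0)
                return -1; /* ring empty */
        return 0;
}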
Typedefs
Avoid using typedefs for structure types. For example, use:
struct my_struct_type {
/* ... */
};
struct my_struct_type my_var;
rather than:
typedef struct my_struct_type {
/* ... */
} my_struct_type;
my_struct_type my_var;
Typedefs are problematic because they do not properly hide their underlying type; for example,
you need to know if the typedef is the structure itself, as shown above, or a pointer to the
structure. In addition, they must be declared exactly once, whereas an incomplete structure
type can be mentioned as many times as necessary. Typedefs are difficult to use in stand-
alone header files. The header that defines the typedef must be included before the header
that uses it, or by the header that uses it (which causes namespace pollution), or there must
be a back-door mechanism for obtaining the typedef.
Note that #defines used instead of typedefs also are problematic (since they do not propa-
gate the pointer type correctly due to direct text replacement). For example, #define pint
int * does not work as expected, while typedef int *pint does work. As stated when
discussing macros, typedefs should be preferred to macros in cases like this.
When convention requires a typedef, make its name match the struct tag. Avoid typedefs
ending in _t, except as specified in Standard C or by POSIX.
Note: It is recommended to use typedefs to define function pointer types, for reasons of code
readability. This is especially true when the function type is used as a parameter to another
function.
For example:
/**
* Definition of a remote launch function.
*/
typedef int (lcore_function_t)(void *);
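For instance, rte_eal_remote_launch() declares its first parameter as an lcore_function_t *,
which makes the expected signature obvious at the call site (the worker function below is
illustrative only):
#include <rte_launch.h>

static int
lcore_worker(void *arg)
{
        (void)arg; /* per-lcore work would go here */
        return 0;
}

static void
launch_worker(void)
{
        rte_eal_remote_launch(lcore_worker, NULL, 1);
}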
C Indentation
General
• Indentation is a hard tab, that is, a tab character, not a sequence of spaces.
Note: Global whitespace rule in DPDK, use tabs for indentation, spaces for alignment.
Note: As with all style guidelines, code should match style already in use in an existing file.
}
for (;;) {
if (cond)
stmt;
}
if (val != NULL)
val = realloc(val, newsize);
• Closing and opening braces go on the same line as the else keyword.
• Braces that are not necessary should be left out.
if (test)
stmt;
else if (bar) {
stmt;
stmt;
} else
stmt;
Function Calls
Operators
Exit
Local Variables
• Variables should be declared at the start of a block of code rather than in the middle. The
exception to this is when the variable is const in which case the declaration must be at
the point of first use/assignment.
• When declaring variables in functions, multiple variables per line are OK. However, if
multiple declarations would cause the line to exceed a reasonable line length, begin a
new set of declarations on the next line rather than using a line continuation.
• Be careful to not obfuscate the code by initializing variables in the declarations, only the
last variable on a line should be initialized. If multiple variables are to be initialized when
defined, put one per line.
• Do not use function calls in initializers, except for const variables.
int i = 0, j = 0, k = 0; /* bad, too many initializers */
Prototypes
• It is recommended (and generally required by the compiler) that all non-static functions
are prototyped somewhere.
• Functions local to one source module should be declared static, and should not be pro-
totyped unless absolutely necessary.
• Functions used from other parts of code (external API) must be prototyped in the relevant
include file.
• Function prototypes should be listed in a logical order, preferably alphabetical unless
there is a compelling reason to use a different ordering.
• Functions that are used locally in more than one module go into a separate header file,
for example, “extern.h”.
• Do not use the __P macro.
• Functions that are part of an external API should be documented using Doxygen-like
comments above declarations. See Doxygen Guidelines for details.
• Functions that are part of the external API must have an rte_ prefix on the function
name.
• Short function prototypes should be contained on a single line. Longer prototypes, e.g.
those with many parameters, can be split across multiple lines. The second and subse-
quent lines should be further indented as for line statement continuations as described
in the previous section.
static char *function1(int _arg, const char *_arg2,
struct foo *_arg3,
struct bar *_arg4,
struct baz *_arg5);
static void usage(void);
Note: Unlike function definitions, the function prototypes do not need to place the function
return type on a separate line.
Definitions
• Do not declare functions inside other functions. ANSI C states that such declarations
have file scope regardless of the nesting of the declaration. Hiding file declarations in
what appears to be a local scope is undesirable and will elicit complaints from a good
compiler.
• Old-style (K&R) function declaration should not be used, use ANSI function declarations
instead as shown below.
• Long argument lists should be wrapped as described above in the function prototypes
section.
/*
* All major routines should have a comment briefly describing what
* they do. The comment before the "main" routine should describe
* what the program does.
*/
int
main(int argc, char *argv[])
{
char *ep;
long num;
int ch;
NULL Pointers
• NULL is the preferred null pointer constant. Use NULL instead of (type *)0 or (type
*)NULL, except where the compiler does not know the destination type e.g. for variadic
args to a function.
• Test pointers against NULL, for example, use:
if (p == NULL) /* Good, compare pointer to NULL */
Return Value
• Functions which create objects, or allocate memory, should return pointer types, and
NULL on error. The error type should be indicated by setting the variable rte_errno
appropriately.
• Functions which work on bursts of packets, such as RX-like or TX-like functions, should
return the number of packets handled.
• Other functions returning int should generally behave like system calls: returning 0 on
success and -1 on error, setting rte_errno to indicate the specific type of error.
• Where already standard in a given library, the alternative error approach may be used
where the negative value is not -1 but is instead -errno if relevant, for example,
-EINVAL. Note, however, to allow consistency across functions returning integer or
pointer types, the previous approach is preferred for any new libraries.
• For functions where no error is possible, the function type should be void not int.
• Routines returning void * should not have their return values cast to any pointer type.
(Typecasting can prevent the compiler from warning about missing prototypes as any
implicit definition of a function returns int, which, unlike void *, needs a typecast to
assign to a pointer variable.)
Note: The above rule about not typecasting void * applies to malloc, as well as to DPDK
functions.
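A minimal sketch of the system-call-like convention described above, using a hypothetical
rte_foo_enable() function:
#include <errno.h>
#include <rte_errno.h>

struct rte_foo {
        int enabled;
};

int
rte_foo_enable(struct rte_foo *foo)
{
        if (foo == NULL) {
                rte_errno = EINVAL; /* indicate the specific error */
                return -1;          /* error */
        }
        foo->enabled = 1;
        return 0;                   /* success */
}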
Branch Prediction
• When a test is done in a critical zone (called often or in a data path) the code can use
the likely() and unlikely() macros to indicate the expected, or preferred fast path.
They are expanded as a compiler builtin and allow the developer to indicate if the branch
is likely to be taken or not. Example:
#include <rte_branch_prediction.h>
if (likely(x > 1))
do_stuff();
Note: The use of likely() and unlikely() should only be done in performance critical
paths, and only when there is a clearly preferred path, or a measured performance increase
gained from doing so. These macros should be avoided in non-performance-critical code.
• All functions and variables that are local to a file must be declared as static because it
can often help the compiler to do some optimizations (such as, inlining the code).
• Functions that should be inlined should be declared as static inline and can be
defined in a .c or a .h file.
Note: Static functions defined in a header file must be declared as static inline in order
to prevent compiler warnings about the function being unused.
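For example (the helper below is illustrative only):
#include <stdint.h>

/* Defined in a header file; "static inline" avoids unused-function warnings
 * in files that include the header but never call the helper. */
static inline int
foo_is_aligned(uintptr_t addr, uintptr_t align)
{
        return (addr & (align - 1)) == 0;
}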
Const Attribute
The const attribute should be used as often as possible when a variable is read-only.
The asm and volatile keywords do not have underscores. The AT&T syntax should be used.
Input and output operands should be named to avoid confusion, as shown in the following
example:
asm volatile("outb %[val], %[port]"
: :
[port] "dN" (port),
[val] "a" (val));
Control Statements
• Forever loops are done with for statements, not while statements.
• Elements in a switch statement that cascade should have a FALLTHROUGH comment.
For example:
switch (ch) { /* Indent the switch. */
case 'a': /* Don't indent the case. */
aflag = 1; /* Indent case body one tab. */
/* FALLTHROUGH */
case 'b':
bflag = 1;
break;
case '?':
default:
usage();
/* NOTREACHED */
}
Python Code
All Python code should work with Python 2.7+ and 3.2+ and be compliant with PEP8 (Style
Guide for Python Code).
The pep8 tool can be used for testing compliance with the guidelines.
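For example, to check one of the in-tree scripts (the file chosen here is only an example):
pep8 --show-source usertools/dpdk-devbind.py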
Design
In DPDK and DPDK applications, some code is specific to an architecture (i686, x86_64) or
to an executive environment (bsdapp or linuxapp) and so on. As far as is possible, all such
instances of architecture or env-specific code should be provided via standard APIs in the EAL.
By convention, a file is common if it is not located in a directory indicating that it is specific. For
instance, a file located in a subdir of “x86_64” directory is specific to this architecture. A file
located in a subdir of “linuxapp” is specific to this execution environment.
Note: Code in DPDK libraries and applications should be generic. The correct location for
architecture or executive environment specific code is in the EAL.
When absolutely necessary, there are several ways to handle specific code:
• Use a #ifdef with the CONFIG option in the C code. This can be done when the
differences are small and they can be embedded in the same C file:
#ifdef RTE_ARCH_I686
toto();
#else
titi();
#endif
• Use the CONFIG option in the Makefile. This is done when the differences are more
significant. In this case, the code is split into two separate files that are architecture or
environment specific. This should only apply inside the EAL library.
Note: As in the linux kernel, the CONFIG_ prefix is not used in C code. This is only needed in
Makefiles or shell scripts.
Library Statistics
Description
This document describes the guidelines for DPDK library-level statistics counter support. This
includes guidelines for turning library statistics on and off and requirements for preventing ABI
changes when implementing statistics.
Each library that maintains statistics counters should provide a single build time flag that de-
cides whether the statistics counter collection is enabled or not. This flag should be exposed as
a variable within the DPDK configuration file. When this flag is set, all the counters supported
by the current library are collected for all the instances of every object type provided by the library.
When this flag is cleared, none of the counters supported by the current library are collected
for any instance of any object type provided by the library:
# DPDK file config/common_linuxapp, config/common_bsdapp, etc.
CONFIG_RTE_<LIBRARY_NAME>_STATS_COLLECT=y/n
The default value for this DPDK configuration file variable (either “yes” or “no”) is decided by
each library.
The layout of data structures and prototype of functions that are part of the library API should
not be affected by whether the collection of statistics counters is turned on or off for the current
library. In practical terms, this means that space should always be allocated in the API data
structures for statistics counters and the statistics related API functions are always built into the
code, regardless of whether the statistics counter collection is turned on or off for the current
library.
When the collection of statistics counters for the current library is turned off, the counters
retrieved through the statistics related API functions should have a default value of zero.
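A minimal sketch of how a library might meet these requirements, using a hypothetical library
name foo (the structure layout is identical whether or not the flag is set; only the counter
updates compile away):
#include <stdint.h>

/* Space for the counters is always reserved in the API data structure. */
struct rte_foo_stats {
        uint64_t pkts_in;
        uint64_t pkts_dropped;
};

/* When RTE_FOO_STATS_COLLECT is not defined, the updates become no-ops and
 * the counters keep their default value of zero. */
#ifdef RTE_FOO_STATS_COLLECT
#define FOO_STATS_ADD(stats, field, n) ((stats)->field += (n))
#else
#define FOO_STATS_ADD(stats, field, n) do { } while (0)
#endif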
It is highly recommended that each library provides statistics counters to allow an applica-
tion to monitor the library-level run-time events. Typical counters are: number of packets
received/dropped/transmitted, number of buffers allocated/freed, number of occurrences for
specific events, etc.
However, the resources consumed for library-level statistics counter collection have to be spent
out of the application budget and the counters collected by some libraries might not be rele-
vant to the current application. In order to avoid any unwanted waste of resources and/or
performance impacts, the application should decide at build time whether the collection of
library-level statistics counters should be turned on or off for each library individually.
Library-level statistics counters can be relevant or not for specific applications:
• For Application A, counters maintained by Library X are always relevant and the applica-
tion needs to use them to implement certain features, such as traffic accounting, logging,
application-level statistics, etc. In this case, the application requires that collection of
statistics counters for Library X is always turned on.
• For Application B, counters maintained by Library X are only useful during the application
debug stage and are not relevant once debug phase is over. In this case, the application
may decide to turn on the collection of Library X statistics counters during the debug
phase and at a later stage turn them off.
• For Application C, counters maintained by Library X are not relevant at all. It might be
that the application maintains its own set of statistics counters that monitor a different set
of run-time events (e.g. number of connection requests, number of active users, etc). It
might also be that the application uses multiple libraries (Library X, Library Y, etc) and it
is interested in the statistics counters of Library Y, but not in those of Library X. In this
case, the application may decide to turn the collection of statistics counters off for Library
X and on for Library Y.
The statistics collection consumes a certain amount of CPU resources (cycles, cache band-
width, memory bandwidth, etc) that depends on:
• Number of libraries used by the current application that have statistics counters collection
turned on.
• Number of statistics counters maintained by each library per object type instance (e.g.
per port, table, pipeline, thread, etc).
• Number of instances created for each object type supported by each library.
• Complexity of the statistics logic collection for each counter: when only some occur-
rences of a specific event are valid, additional logic is typically needed to decide whether
the current occurrence of the event should be counted or not. For example, in the event
of packet reception, when only TCP packets with destination port within a certain range
should be recorded, conditional branches are usually required. When processing a burst
of packets that have been validated for header integrity, counting the number of bits set
in a bitmask might be needed.
PF and VF Considerations
The primary goal of DPDK is to provide a userspace dataplane. Managing VFs from a PF
driver is a control plane feature and developers should generally rely on the Linux Kernel for
that.
Developers should work with the Linux Kernel community to get the required functionality up-
stream. PF functionality should only be added to DPDK for testing and prototyping purposes
while the kernel work is ongoing. It should also be marked with an “EXPERIMENTAL” tag. If
the functionality isn’t upstreamable then a case can be made to maintain the PF functionality
in DPDK without the EXPERIMENTAL tag.
Description
This document details some methods for handling ABI management in the DPDK. Note that this
document is not exhaustive: C library versioning is flexible, allowing multiple methods to achieve
various goals, but it will provide the user with some introductory methods.
General Guidelines
What is an ABI
An ABI (Application Binary Interface) is the set of runtime interfaces exposed by a library. It is
similar to an API (Application Programming Interface) but is the result of compilation. It is also
effectively cloned when applications link to dynamic libraries. That is to say when an applica-
tion is compiled to link against dynamic libraries, it is assumed that the ABI remains constant
between the time the application is compiled/linked, and the time that it runs. Therefore, in the
case of dynamic linking, it is critical that an ABI is preserved, or (when modified), done in such
a way that the application is unable to behave improperly or in an unexpected fashion.
ABI versions are set at the time of major release labeling, and the ABI may change multiple
times, without warning, between the last release label and the HEAD label of the git tree.
ABI versions, once released, are available until such time as their deprecation has been noted
in the Release Notes for at least one major release cycle. For example consider the case
where the ABI for DPDK 2.0 has been shipped and then a decision is made to modify it during
the development of DPDK 2.1. The decision will be recorded in the Release Notes for the
DPDK 2.1 release and the modification will be made available in the DPDK 2.2 release.
ABI versions may be deprecated in whole or in part as needed by a given update.
Some ABI changes may be too significant to reasonably maintain multiple versions. In those
cases ABIs may be updated without backward compatibility being provided. The requirements
for doing so are:
1. At least 3 acknowledgments of the need to do so must be made on the dpdk.org mailing
list.
2. The changes (including an alternative map file) must be gated with the RTE_NEXT_ABI
option, and provided with a deprecation notice at the same time. It will become the default
ABI in the next release.
3. A full deprecation cycle, as explained above, must be made to offer downstream con-
sumers sufficient warning of the change.
4. At the beginning of the next release cycle, every RTE_NEXT_ABI conditions will be re-
moved, the LIBABIVER variable in the makefile(s) where the ABI is changed will be
incremented, and the map files will be updated.
Note that the above process for ABI deprecation should not be undertaken lightly. ABI stability
is extremely important for downstream consumers of the DPDK, especially when distributed in
shared object form. Every effort should be made to preserve the ABI whenever possible. The
ABI should only be changed for significant reasons, such as performance enhancements. ABI
breakage due to changes such as reorganizing public structure fields for aesthetic or readability
purposes should be avoided.
The following are some examples of ABI deprecation notices which would be added to the
Release Notes:
• The Macro #RTE_FOO is deprecated and will be removed with version 2.0, to be replaced
with the inline function rte_foo().
• The function rte_mbuf_grok() has been updated to include a new parameter in ver-
sion 2.0. Backwards compatibility will be maintained for this function until the release of
version 2.1
• The members of struct rte_foo have been reorganized in release 2.0 for perfor-
mance reasons. Existing binary applications will have backwards compatibility in release
2.0, while newly built binaries will need to reference the new structure variant struct
rte_foo2. Compatibility will be removed in release 2.2, and all applications will require
updating and rebuilding to the new structure at that time, which will be renamed to the
original struct rte_foo.
• Significant ABI changes are planned for the librte_dostuff library. The upcoming
release 2.0 will not contain these changes, but release 2.1 will, and no backwards com-
patibility is planned due to the extensive nature of these changes. Binaries using this
library built prior to version 2.1 will require updating and recompilation.
Versioning Macros
When a symbol is exported from a library to provide an API, it also provides a calling convention
(ABI) that is embodied in its name, return type and arguments. Occasionally that function may
need to change to accommodate new functionality or behavior. When that occurs, it is desirable
to allow for backward compatibility for a time with older binaries that are dynamically linked to
the DPDK.
To support backward compatibility the lib/librte_compat/rte_compat.h header file
provides macros to use when updating exported functions. These macros are used in con-
junction with the rte_<library>_version.map file for a given library to allow multiple ver-
sions of a symbol to exist in a shared library so that older binaries need not be immediately
recompiled.
The macros exported are:
• VERSION_SYMBOL(b, e, n): Creates a symbol version table entry binding versioned
symbol b@DPDK_n to the internal function b_e.
• BIND_DEFAULT_SYMBOL(b, e, n): Creates a symbol version entry instructing the
linker to bind references to symbol b to the internal symbol b_e.
• MAP_STATIC_SYMBOL(f, p): Declare the prototype f, and map it to the fully qualified
function p, so that if a symbol becomes versioned, it can still be mapped back to the
public symbol name.
Assume that struct rte_acl_ctx is a private structure, and that a developer wishes to enhance
the acl api so that a debugging flag can be enabled on a per-context basis. This requires an
addition to the structure (which, being private, is safe), but it also requires modifying the code
as follows
/*
* Create an acl context object for apps to
* manipulate
*/
struct rte_acl_ctx *
rte_acl_create(const struct rte_acl_param *param, int debug)
{
...
}
Note also that, being a public function, the header file prototype must also be changed, as
must all the call sites, to reflect the new ABI footprint. We will maintain previous ABI versions
that are accessible only to previously compiled binaries
The addition of a parameter to the function is ABI breaking as the function is public, and existing
application may use it in its current form. However, the compatibility macros in DPDK allow a
developer to use symbol versioning so that multiple functions can be mapped to the same
public symbol based on when an application was linked to it. To see how this is done, we start
with the requisite libraries version map file. Initially the version map file for the acl library looks
like this
DPDK_2.0 {
global:
rte_acl_add_rules;
rte_acl_build;
rte_acl_classify;
rte_acl_classify_alg;
rte_acl_classify_scalar;
rte_acl_create;
rte_acl_dump;
rte_acl_find_existing;
rte_acl_free;
rte_acl_ipv4vlan_add_rules;
rte_acl_ipv4vlan_build;
rte_acl_list_dump;
rte_acl_reset;
rte_acl_reset_rules;
rte_acl_set_ctx_classify;
local: *;
};
This file needs to be modified as follows, adding a new version node:
DPDK_2.0 {
global:
rte_acl_add_rules;
rte_acl_build;
rte_acl_classify;
rte_acl_classify_alg;
rte_acl_classify_scalar;
rte_acl_create;
rte_acl_dump;
rte_acl_find_existing;
rte_acl_free;
rte_acl_ipv4vlan_add_rules;
rte_acl_ipv4vlan_build;
rte_acl_list_dump;
rte_acl_reset;
rte_acl_reset_rules;
rte_acl_set_ctx_classify;
local: *;
};
DPDK_2.1 {
global:
rte_acl_create;
} DPDK_2.0;
The addition of the new block tells the linker that a new version node is available (DPDK_2.1),
which contains the symbol rte_acl_create, and inherits the symbols from the DPDK_2.0 node.
This list is directly translated into a list of exported symbols when DPDK is compiled as a
shared library
Next, we need to specify in the code which function map to the rte_acl_create symbol at which
versions. First, at the site of the initial symbol definition, we need to update the function so that
it is uniquely named, and not in conflict with the public symbol name
struct rte_acl_ctx *
-rte_acl_create(const struct rte_acl_param *param)
+rte_acl_create_v20(const struct rte_acl_param *param)
{
size_t sz;
struct rte_acl_ctx *ctx;
...
Note that the base name of the symbol was kept intact, as this is conducive to the macros
used for versioning symbols. That is our next step, mapping this new symbol name to the initial
symbol name at version node 2.0. Immediately after the function, we add this line of code
VERSION_SYMBOL(rte_acl_create, _v20, 2.0);
Remembering to also add the rte_compat.h header to the requisite c file where these
changes are being made. The above macro instructs the linker to create a new symbol
rte_acl_create@DPDK_2.0, which matches the symbol created in older builds, but now
points to the above newly named function. We have now mapped the original rte_acl_create
symbol to the original function (but with a new name)
Next, we need to create the 2.1 version of the symbol. We create a new function name, with a
different suffix, and implement it appropriately
struct rte_acl_ctx *
rte_acl_create_v21(const struct rte_acl_param *param, int debug)
{
struct rte_acl_ctx *ctx = rte_acl_create_v20(param);
ctx->debug = debug;
return ctx;
}
This code serves as our new API call. It's the same as our old call, but adds the new parameter
in place. Next we need to map this function to the symbol rte_acl_create@DPDK_2.1. To
do this, we modify the public prototype of the call in the header file, adding the macro there to
inform all including applications that, on re-link, the default rte_acl_create symbol should point
to this function. Note that we could do this by simply naming the function above rte_acl_create,
and the linker would choose the most recent version tag to apply in the version script, but we
can also do this in the header file:
struct rte_acl_ctx *
-rte_acl_create(const struct rte_acl_param *param);
+rte_acl_create(const struct rte_acl_param *param, int debug);
+BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 2.1);
The BIND_DEFAULT_SYMBOL macro explicitly tells applications that include this header, to
link to the rte_acl_create_v21 function and apply the DPDK_2.1 version node to it. This method
is more explicit and flexible than just re-implementing the exact symbol name, and allows for
other features (such as linking to the old symbol version by default, when the new ABI is to be
opt-in for a period).
One last thing we need to do. Note that we’ve taken what was a public symbol, and duplicated
it into two uniquely and differently named symbols. We’ve then mapped each of those back to
the public symbol rte_acl_create with different version tags. This only applies to dynamic
linking, as static linking has no notion of versioning. That leaves this code in a position of no
longer having a symbol simply named rte_acl_create and a static build will fail on that
missing symbol.
To correct this, we can simply map a function of our choosing back to the public symbol in
the static build with the MAP_STATIC_SYMBOL macro. Generally the assumption is that the
most recent version of the symbol is the one you want to map. So, back in the C file where,
immediately after rte_acl_create_v21 is defined, we add this
struct rte_acl_ctx *
rte_acl_create_v21(const struct rte_acl_param *param, int debug)
{
...
}
MAP_STATIC_SYMBOL(struct rte_acl_ctx *rte_acl_create(const struct rte_acl_param *param, int debug), rte_acl_create_v21);
That tells the compiler that, when building a static library, any calls to the symbol
rte_acl_create should be linked to rte_acl_create_v21
That's it: on the next shared library rebuild, there will be two versions of rte_acl_create, an old
DPDK_2.0 version, used by previously built applications, and a new DPDK_2.1 version, used
by newly built applications.
Let's assume that you've done the above update, and after a few releases have passed you
decide you would like to retire the old version of the function. After having gone through the
ABI deprecation announcement process, removal is easy. Start by removing the symbol from
the requisite version map file:
DPDK_2.0 {
global:
rte_acl_add_rules;
rte_acl_build;
rte_acl_classify;
rte_acl_classify_alg;
rte_acl_classify_scalar;
rte_acl_dump;
- rte_acl_create
rte_acl_find_existing;
rte_acl_free;
rte_acl_ipv4vlan_add_rules;
rte_acl_ipv4vlan_build;
rte_acl_list_dump;
rte_acl_reset;
rte_acl_reset_rules;
rte_acl_set_ctx_classify;
local: *;
};
DPDK_2.1 {
global:
rte_acl_create;
} DPDK_2.0;
Note that the internal function definition could also be removed, but it is used in our example by
the newer version _v21, so we leave it in place. This is a coding style choice.
Lastly, we need to bump the LIBABIVER number for this library in the Makefile to indicate to
applications doing dynamic linking that this is a later, and possibly incompatible library version:
-LIBABIVER := 1
+LIBABIVER := 2
While removing a symbol from an ABI may be useful, it is often more practical to remove an
entire version node at once. If a version node completely specifies an API, then removing part
of it typically makes it incomplete. In those cases it is better to remove the entire node.
To do this, start by modifying the version map file, such that all symbols from the node to be
removed are merged into the next node in the map.
In the case of our map above, it would transform to look as follows:
DPDK_2.1 {
global:
rte_acl_add_rules;
rte_acl_build;
rte_acl_classify;
rte_acl_classify_alg;
rte_acl_classify_scalar;
rte_acl_dump;
rte_acl_create;
rte_acl_find_existing;
rte_acl_free;
rte_acl_ipv4vlan_add_rules;
rte_acl_ipv4vlan_build;
rte_acl_list_dump;
rte_acl_reset;
rte_acl_reset_rules;
rte_acl_set_ctx_classify;
local: *;
};
Then any uses of BIND_DEFAULT_SYMBOL that pointed to the old node should be updated
to point to the new version node in any header files for all affected symbols.
-BIND_DEFAULT_SYMBOL(rte_acl_create, _v20, 2.0);
+BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 2.1);
Lastly, any VERSION_SYMBOL macros that point to the old version node should be removed,
taking care to keep, where needed, the old code in place to support newer versions of the symbol.
The devtools directory in the DPDK source tree contains a utility program,
validate-abi.sh, for validating the DPDK ABI based on the Linux ABI Compliance
Checker.
This has a dependency on the abi-compliance-checker and abi-dumper utilities,
which can be installed via a package manager. For example:
sudo yum install abi-compliance-checker
sudo yum install abi-dumper
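The script is typically invoked with the two git revisions to compare and a build target; for
example (the revisions, target and argument order shown here are assumptions based on the
script's usage message):
./devtools/validate-abi.sh v16.07 v16.11 x86_64-native-linuxapp-gcc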
After the validation script completes (it can take a while since it needs to compile both tags) it
will create compatibility reports in the ./compat_report directory. Listed incompatibilities
can be found as follows:
grep -lr Incompatible compat_reports/
This document outlines the guidelines for writing the DPDK Guides and API documentation in
RST and Doxygen format.
It also explains the structure of the DPDK documentation and shows how to build the Html and
PDF versions of the documents.
The DPDK source code repository contains input files to build the API documentation and User
Guides.
The main directories that contain files related to documentation are shown below:
lib
|-- librte_acl
|-- librte_cfgfile
|-- librte_cmdline
|-- librte_compat
|-- librte_eal
| |-- ...
...
doc
|-- api
+-- guides
|-- freebsd_gsg
|-- linux_gsg
|-- prog_guide
|-- sample_app_ug
|-- guidelines
|-- testpmd_app_ug
|-- rel_notes
|-- nics
|-- xen
|-- ...
The API documentation is built from Doxygen comments in the header files. These files
are mainly in the lib/librte_* directories although some of the Poll Mode Drivers in
drivers/net are also documented with Doxygen.
The configuration files that are used to control the Doxygen output are in the doc/api direc-
tory.
The user guides such as The Programmers Guide and the FreeBSD and Linux Getting Started
Guides are generated from RST markup text files using the Sphinx Documentation Generator.
These files are included in the doc/guides/ directory. The output is controlled by the
doc/guides/conf.py file.
The following items outline the roles of the different parts of the documentation and when they
need to be updated or added to by the developer.
• Release Notes
The Release Notes document which features have been added in the current and previ-
ous releases of DPDK and highlight any known issues. The Release Notes also contain
notifications of features that will change ABI compatibility in the next major release.
Developers should include updates to the Release Notes with patch sets that relate to
any of the following sections:
– New Features
– Resolved Issues (see below)
– Known Issues
– API Changes
– ABI Changes
– Shared Library Versions
Resolved Issues should only include issues from previous releases that have been re-
solved in the current release. Issues that are introduced and then fixed within a release
cycle do not have to be included here.
Refer to the Release Notes from the previous DPDK release for the correct format of
each section.
• API documentation
The API documentation explains how to use the public DPDK functions. The API index
page shows the generated API documentation with related groups of functions.
The API documentation should be updated via Doxygen comments when new functions
are added.
• Getting Started Guides
The Getting Started Guides show how to install and configure DPDK and how to run
DPDK based applications on different OSes.
A Getting Started Guide should be added when DPDK is ported to a new OS.
• The Programmers Guide
The Programmers Guide explains how the API components of DPDK such as the EAL,
Memzone, Rings and the Hash Library work. It also explains how some higher level
functionality such as Packet Distributor, Packet Framework and KNI work. It also shows
the build system and explains how to add applications.
The Programmers Guide should be expanded when new functionality is added to DPDK.
• App Guides
The app guides document the DPDK applications in the app directory such as testpmd.
The app guides should be updated if functionality is changed or added.
• Sample App Guides
The sample app guides document the DPDK example applications in the examples di-
rectory. Generally they demonstrate a major feature such as L2 or L3 Forwarding, Multi
Process or Power Management. They explain the purpose of the sample application,
how to run it and step through some of the code to explain the major functionality.
A new sample application should be accompanied by a new sample app guide. The
guide for the Skeleton Forwarding app is a good starting reference.
• Network Interface Controller Drivers
The NIC Drivers document explains the features of the individual Poll Mode Drivers, such
as software requirements, configuration and initialization.
New documentation should be added for new Poll Mode Drivers.
• Guidelines
The guideline documents record community process, expectations and design directions.
They can be extended, amended or discussed by submitting a patch and getting com-
munity approval.
Dependencies
Doxygen generates documentation from commented source code. It can be installed as follows:
# Ubuntu/Debian.
sudo apt-get -y install doxygen
# Red Hat/Fedora.
sudo dnf -y install doxygen
Sphinx is a Python documentation tool for converting RST files to Html or to PDF (via LaTeX).
For full support with figure and table captioning the latest version of Sphinx can be installed as
follows:
# Ubuntu/Debian.
sudo apt-get -y install python-pip
sudo pip install --upgrade sphinx
sudo pip install --upgrade sphinx_rtd_theme
# Red Hat/Fedora.
sudo dnf -y install python-pip
sudo pip install --upgrade sphinx
sudo pip install --upgrade sphinx_rtd_theme
For further information on getting started with Sphinx see the Sphinx Tutorial.
Note: To get full support for Figure and Table numbering it is best to install Sphinx 1.3.1 or
later.
Inkscape is a vector based graphics program which is used to create SVG images and also to
convert SVG images to PDF images. It can be installed as follows:
# Ubuntu/Debian.
sudo apt-get -y install inkscape
# Red Hat/Fedora.
sudo dnf -y install inkscape
TexLive is an installation package for Tex/LaTeX. It is used to generate the PDF versions of the
documentation. The main required packages can be installed as follows:
# Ubuntu/Debian.
sudo apt-get -y install texlive-latex-extra
Build commands
The documentation is built using the standard DPDK build system. Some examples are shown
below:
• Generate all the documentation targets:
make doc
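Individual outputs can also be generated separately; the following assumes the standard
documentation targets of the DPDK build system:
# Generate the Doxygen API documentation in Html.
make doc-api-html

# Generate the guides documentation in Html.
make doc-guides-html

# Generate the guides documentation in Pdf.
make doc-guides-pdf

# Remove the generated documentation.
make doc-clean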
Note: Make sure to fix any Sphinx or Doxygen warnings when adding or updating documen-
tation.
Document Guidelines
RST Guidelines
The RST (reStructuredText) format is a plain text markup format that can be converted to Html,
PDF or other formats. It is most closely associated with Python but it can be used to document
any language. It is used in DPDK to document everything apart from the API.
The Sphinx documentation contains a very useful RST Primer which is a good place to learn
the minimal set of syntax required to format a document.
The official reStructuredText website contains the specification for the RST format and also
examples of how to use it. However, for most developers the RST Primer is a better resource.
The most common guidelines for writing RST text are detailed in the Documenting Python
guidelines. The additional guidelines below reiterate or expand upon those guidelines.
Line Length
• The recommended style for the DPDK documentation is to put sentences on separate
lines. This allows for easier reviewing of patches. Multiple sentences which are not
separated by a blank line are joined automatically into paragraphs, for example:
Here is an example sentence.
Long sentences over the limit shown below can be wrapped onto
a new line.
These three sentences will be joined into the same paragraph.
Whitespace
Section Headers
• Section headers should use the following underline formats:
Level 1 Heading
===============
Level 2 Heading
---------------
Level 3 Heading
~~~~~~~~~~~~~~~
Level 4 Heading
^^^^^^^^^^^^^^^
Lists
• Bullet lists should be formatted with a leading * as follows:
* Item one.
* Item two is a long line that is wrapped and then indented to match
  the start of the previous line.
* One space character between the bullet and the text is preferred.
• Numbered lists can be formatted with a leading number but the preference is to use #.
which will give automatic numbering. This is more convenient when adding or removing
items:
#. Item one.
#. Item two is a long line that is wrapped and then indented to match
   the start of the previous line.
#. Item three.
• All lists, and sub-lists, must be separated from the preceding text by a blank line. This is
a syntax requirement.
• All list items should be separated by a blank line for readability.
• Inline text that is required to be rendered with a fixed width font should be enclosed in
backquotes like this: ``text``, so that it appears like this: text.
• Fixed width, literal blocks of texts should be indented at least 3 spaces and prefixed with
:: like this:
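A minimal sketch of such a block (the content is purely illustrative):
Here is some fixed width text::

   0x0001 0x0001 0x00FF 0x00FF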
• It is also possible to specify an encoding for a literal block using the .. code-block::
directive so that syntax highlighting can be applied. Examples of supported highlighting
are:
.. code-block:: console
.. code-block:: c
.. code-block:: python
.. code-block:: diff
.. code-block:: none
That can be applied as follows:
.. code-block:: c

   #include<stdio.h>

   int main() {
      printf("Hello World\n");
      return 0;
   }
• The default encoding for a literal block using the simplified :: directive is none.
• Lines in literal blocks must be less than 80 characters since they can exceed the page
width when converted to PDF documentation. For long literal lines that exceed that limit
try to wrap the text at sensible locations. For example a long command line could be
documented like this and still work if copied directly from the docs:
build/app/testpmd -c7 -n3 --vdev=net_pcap0,iface=eth0 \
--vdev=net_pcap1,iface=eth1 \
-- -i --nb-cores=2 --nb-ports=2 \
--total-num-mbufs=2048
• Long lines that cannot be wrapped, such as application output, should be truncated to be
less than 80 characters.
Images
• All images should be in SVG (Scalable Vector Graphics) format. They should be true SVG XML files
and should not include binary formats embedded in an SVG wrapper.
• The DPDK documentation contains some legacy images in PNG format. These will be
converted to SVG in time.
• Inkscape is the recommended graphics editor for creating the images. Use some
of the older images in doc/guides/prog_guide/img/ as a template, for example
mbuf1.svg or ring-enqueue1.svg.
• The SVG images should include a copyright notice, as an XML comment.
• Images in the documentation should be formatted as follows:
– The image should be preceded by a label in the format .. _figure_XXXX: with
a leading underscore and where XXXX is a unique descriptive name.
– Images should be included using the .. figure:: directive and the file type
should be set to * (not .svg). This allows the format of the image to be changed if
required, without updating the documentation.
– Images must have a caption as part of the .. figure:: directive.
• Here is an example of the previous three guidelines:
.. _figure_mempool:

.. figure:: img/mempool.*

   A mempool in memory with its associated ring.

• The image can then be linked to using the :numref: directive:
The mempool layout is shown in :numref:`figure_mempool`.
This would be rendered as: The mempool layout is shown in Fig 6.3.
Note: The :numref: directive requires Sphinx 1.3.1 or later. With earlier versions it will
still be rendered as a link but won’t have an automatically generated number.
• The caption of the image can be generated, with a link, using the :ref: directive:
:ref:`figure_mempool`
This would be rendered as: A mempool in memory with its associated ring.
Tables
• RST tables should be used sparingly. They are hard to format and to edit, they are often
rendered incorrectly in PDF format, and the same information can usually be shown just
as clearly with a definition or bullet list.
• Tables in the documentation should be formatted as follows:
– The table should be preceded by a label in the format .. _table_XXXX: with a
leading underscore and where XXXX is a unique descriptive name.
– Tables should be included using the .. table:: directive and must have a cap-
tion.
• Here is an example of the previous two guidelines:
.. _table_qos_pipes:

.. table:: Sample configuration for QOS pipes.
+----------+----------+----------+
| Header 1 | Header 2 | Header 3 |
| | | |
+==========+==========+==========+
| Text | Text | Text |
+----------+----------+----------+
| ... | ... | ... |
+----------+----------+----------+
• Tables can be linked to using the :numref: and :ref: directives, as shown in the
previous section for images. For example:
The QOS configuration is shown in :numref:`table_qos_pipes`.
• Tables should not include merged cells since they are not supported by the PDF renderer.
Hyperlinks
• Links to external websites can contain alternative text. The following is rendered as Check out DPDK:
`Check out DPDK <http://dpdk.org>`_
• An internal link can be generated by placing labels in the document with the format ..
_label_name.
• The following links to the top of this section: Hyperlinks:
.. _links:
Hyperlinks
~~~~~~~~~~
Note: The label must have a leading underscore but the reference to it must omit it. This is a
frequent cause of errors and warnings.
• The use of a label is preferred since it works across files and will still work if the header
text changes.
Doxygen Guidelines
The DPDK API is documented using Doxygen comment annotations in the header files. Doxy-
gen is a very powerful tool, it is extremely configurable and with a little effort can be used to
create expressive documents. See the Doxygen website for full details on how to use it.
The following are some guidelines for use of Doxygen in the DPDK API documentation:
• New libraries that are documented with Doxygen should be added to the Doxygen con-
figuration file: doc/api/doxy-api.conf. It is only required to add the directory that
contains the files. It isn’t necessary to explicitly name each file since the configuration
matches all rte_*.h files in the directory.
• Use proper capitalization and punctuation in the Doxygen comments since they will be-
come sentences in the documentation. This in particular applies to single line comments,
which is the case that is most often forgotten.
• Document the purpose of a function, the parameters used and the return value:
/**
* Attach a new Ethernet device specified by arguments.
*
* @param devargs
* A pointer to a strings array describing the new device
* to be attached. The strings should be a pci address like
* `0000:01:00.0` or **virtual** device name like `net_pcap0`.
* @param port_id
* A pointer to a port identifier actually attached.
*
* @return
* 0 on success and port_id is filled, negative on error.
*/
int rte_eth_dev_attach(const char *devargs, uint8_t *port_id);
• Doxygen supports Markdown style syntax such as bold, italics, fixed width text and lists.
For example the second line in the devargs parameter in the previous example will be
rendered as:
The strings should be a pci address like 0000:01:00.0 or virtual device
name like net_pcap0.
• Use - instead of * for lists within the Doxygen comment since the latter can get confused
with the comment delimiter.
• Add an empty line between the function description, the @params and @return for
readability.
• Place the @params description on a separate line and indent it by 2 spaces. (It would be
better to use no indentation since this is more common and also because checkpatch
complains about leading whitespace in comments. However this is the convention used
in the existing DPDK code.)
• Documented functions can be linked to simply by adding () to the function name:
/**
* The functions exported by the application Ethernet API to setup
* a device designated by its port identifier must be invoked in
* the following order:
* - rte_eth_dev_configure()
* - rte_eth_tx_queue_setup()
* - rte_eth_rx_queue_setup()
* - rte_eth_dev_start()
*/
In the API documentation the functions will be rendered as links, see the online section
of the rte_ethdev.h docs that contains the above text.
• The @see keyword can be used to create a see also link to another file or library. This
directive should be placed on one line at the bottom of the documentation section.
/**
* ...
*
* Some text that references mempools.
*
* @see eal_memzone.c
*/
• Doxygen supports two types of comments for documenting variables, constants and
members: prefix and postfix:
/** This is a prefix comment. */
#define RTE_FOO_ERROR 0x023.
• Postfix comments are preferred for struct members and constants if they can be docu-
mented in the same way:
struct rte_eth_stats {
uint64_t ipackets; /**< Total number of received packets. */
uint64_t opackets; /**< Total number of transmitted packets.*/
uint64_t ibytes; /**< Total number of received bytes. */
uint64_t obytes; /**< Total number of transmitted bytes. */
uint64_t imissed; /**< Total of RX missed packets. */
uint64_t ibadcrc; /**< Total of RX packets with CRC error. */
uint64_t ibadlen; /**< Total of RX packets with bad length. */
}
Note: postfix comments should be aligned with spaces not tabs in accordance with the
DPDK Coding Style.
• If a single comment type can’t be used, due to line length limitations then prefix comments
should be preferred. For example this section of the code contains prefix comments,
postfix comments on the same line and postfix comments on a separate line:
/** Number of elements in the elt_pa array. */
uint32_t pg_num __rte_cache_aligned;
uint32_t pg_shift; /**< LOG2 of the physical pages. */
uintptr_t pg_mask; /**< Physical page mask value. */
uintptr_t elt_va_start;
/**< Virtual address of the first mempool object. */
uintptr_t elt_va_end;
/**< Virtual address of the <size + 1> mempool object. */
phys_addr_t elt_pa[MEMPOOL_PG_NUM_DEFAULT];
/**< Array of physical page addresses for the mempool buffer. */
This doesn't have an effect on the rendered documentation but it is confusing for the
developer reading the code. In this case it would be clearer to use prefix comments
throughout:
/** Number of elements in the elt_pa array. */
uint32_t pg_num __rte_cache_aligned;
/** LOG2 of the physical pages. */
uint32_t pg_shift;
/** Physical page mask value. */
uintptr_t pg_mask;
/** Virtual address of the first mempool object. */
uintptr_t elt_va_start;
/** Virtual address of the <size + 1> mempool object. */
uintptr_t elt_va_end;
• Check for Doxygen warnings in new code by checking the API documentation build:
make doc-api-html >/dev/null
• Read the rendered section of the documentation that you have added for correctness,
clarity and consistency with the surrounding text.
The DPDK maintenance hierarchy is divided into a main repository dpdk and sub-repositories
dpdk-next-*.
There are maintainers for the trees and for components within the tree.
Trees and maintainers are listed in the MAINTAINERS file. For example:
Crypto Drivers
--------------
M: Some Name <some.name@email.com>
B: Another Name <another.name@email.com>
T: git://dpdk.org/next/dpdk-next-crypto
Where:
• M is a tree or component maintainer.
• B is a tree backup maintainer.
• T is a repository tree.
• F is a maintained file or directory.
Additional details are given in the MAINTAINERS file.
The role of the component maintainers is to:
• Review patches for the component or delegate the review. The review should be done,
ideally, within 1 week of submission to the mailing list.
• Add an acked-by to patches, or patchsets, that are ready for committing to a tree.
Component maintainers can be added or removed by submitting a patch to the MAINTAINERS
file. Maintainers should have demonstrated a reasonable level of contributions or reviews to
the component area. The maintainer should be confirmed by an ack from an established
contributor. There can be more than one component maintainer if desired.
The role of the tree maintainers is to:
• Maintain the overall quality of their tree. This can entail additional review, compilation
checks or other tests deemed necessary by the maintainer.
• Commit patches that have been reviewed by component maintainers and/or other contrib-
utors. The tree maintainer should determine if patches have been reviewed sufficiently.
• Ensure that patches are reviewed in a timely manner.
• Prepare the tree for integration.
• Ensure that there is a designated back-up maintainer and coordinate a handover for
periods where the tree maintainer can’t perform their role.
Tree maintainers can be added or removed by submitting a patch to the MAINTAINERS file. The
proposer should justify the need for a new sub-tree and should have demonstrated a sufficient
level of contributions in the area or to a similar area. The maintainer should be confirmed by an
ack from an existing tree maintainer. Disagreements on trees or maintainers can be brought
to the Technical Board.
The backup maintainer for the master tree should be selected from the existing sub-tree main-
tainers from the project. The backup maintainer for a sub-tree should be selected from among
the component maintainers within that sub-tree.
The sub-repositories (dpdk-next-*) can be cloned with either of the following:
git clone git://dpdk.org/next/dpdk-next-*
git clone http://dpdk.org/git/next/dpdk-next-*
Make your planned changes in the cloned dpdk repo. Here are some guidelines and require-
ments:
• Follow the DPDK Coding Style guidelines.
• If you add new files or directories you should add your name to the MAINTAINERS file.
• New external functions should be added to the local version.map file. See the Guide-
lines for ABI policy and versioning. New external functions should also be added in
alphabetical order; a minimal sketch is shown after this list.
• Important changes will require an addition to the release notes in
doc/guides/rel_notes/. See the Release Notes section of the Documenta-
tion Guidelines for details.
• Test the compilation works with different targets, compilers and options, see Checking
Compilation.
• Don’t break compilation between commits with forward dependencies in a patchset. Each
commit should compile on its own to allow for git bisect and continuous integration
testing.
• Add tests to the app/test unit test framework where possible.
• Add documentation, if relevant, in the form of Doxygen comments or a User Guide in
RST format. See the Documentation Guidelines.
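As a minimal sketch of the version.map guideline above, a new public function is added in
alphabetical order to the version node of the release that introduces it (the version nodes and
function names here are hypothetical):
 DPDK_17.05 {
 	global:

 	rte_foo_add;
+	rte_foo_create;
 	rte_foo_free;

 } DPDK_17.02;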
Once the changes have been made you should commit them to your local repo.
For small changes, that do not require specific explanations, it is better to keep things together
in the same patch. Larger changes that require different explanations should be separated into
logical patches in a patchset. A good way of thinking about whether a patch should be split is
to consider whether the change could be applied without dependencies as a backport.
As a guide to how patches should be structured run git log on similar files.
The first, summary, line of the git commit message becomes the subject line of the patch email.
Here are some guidelines for the summary line:
• The summary line must capture the area and the impact of the change.
• Use the imperative of the verb (like instructions to the code base).
• Don't add a period/full stop to the subject line or you will end up with two in the patch name:
dpdk_description..patch.
The actual email subject line should be prefixed by [PATCH] and the version, if greater
than v1, for example: [PATCH v2]. This is generally added by git send-email or git
format-patch, see below.
If you are submitting an RFC draft of a feature you can use [RFC] instead of [PATCH]. An
RFC patch doesn’t have to be complete. It is intended as a way of getting early feedback.
The purpose of the signoff is explained in the Developer’s Certificate of Origin section of
the Linux kernel guidelines.
Note: All developers must ensure that they have read and understood the Developer’s
Certificate of Origin section of the documentation prior to applying the signoff and sub-
mitting a patch.
• The signoff must be a real name and not an alias or nickname. More than one signoff is
allowed.
• The text of the commit message should be wrapped at 72 characters.
• When fixing a regression, it is a good idea to reference the id of the commit which intro-
duced the bug. You can generate the required text using the following git alias and add
its output to the commit message (an example is shown after this list):
git config alias.fixline "log -1 --abbrev=12 --format='Fixes: %h (\"%s\")'"
• When fixing an error or warning it is useful to add the error message and instructions on
how to reproduce it.
• Use correct capitalization, punctuation and spelling.
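For illustration, a complete commit message for such a fix might look as follows (the commit
hash, summary and names are hypothetical):
doc: fix typo in contributors guide

Correct a small typo in the patch submission section.

Fixes: 0123456789ab ("doc: add contributors guide")

Signed-off-by: Alex Smith <alex.smith@example.com>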
In addition to the Signed-off-by: name the commit messages can also have tags for who
reported, suggested, tested and reviewed the patch being posted. Please refer to the Tested,
Acked and Reviewed by section.
Creating Patches
It is possible to send patches directly from git but for new contributors it is recommended to
generate the patches with git format-patch and then when everything looks okay, and the
patches have been checked, to send them with git send-email.
Here are some examples of using git format-patch to generate patches:
# Generate a patch from the last commit.
git format-patch -1
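Other commonly useful invocations, using standard git options:
# Generate the patches for the last 3 commits.
git format-patch -3

# Generate the patches for the last 3 commits with a cover letter.
git format-patch -3 --cover-letter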
Cover letters are useful for explaining a patchset and help to generate a logical threading to
the patches. Smaller notes can be put inline in the patch after the --- separator, for example:
Subject: [PATCH] fm10k/base: add FM10420 device ids
Add the device ID for Boulder Rapids and Atwood Channel to enable
drivers to support those devices.
drivers/net/fm10k/base/fm10k_api.c | 6 ++++++
drivers/net/fm10k/base/fm10k_type.h | 6 ++++++
2 files changed, 12 insertions(+)
...
Version 2 and later of a patchset should also include a short log of the changes so the reviewer
knows what has changed. This can be added to the cover letter or the annotations. For
example:
---
v3:
* Fixed issue with version.map.
v2:
* Added i40e support.
* Renamed ethdev functions from rte_eth_ieee15888_*() to rte_eth_timesync_*()
since 802.1AS can be supported through the same interfaces.
Patches should be checked for formatting and syntax issues using the checkpatches.sh
script in the devtools directory of the DPDK repo. This uses the Linux kernel development
tool checkpatch.pl which can be obtained by cloning, and periodically updating, the Linux
kernel sources.
The path to the original Linux script must be set in the environment variable
DPDK_CHECKPATCH_PATH. This, and any other configuration variables required by the de-
velopment tools, are loaded from the following files, in order of preference:
.develconfig
~/.config/dpdk/devel.config
/etc/dpdk/devel.config.
The script usage is:
checkpatches.sh [-h] [-q] [-v] [patch1 [patch2] ...]
Where:
• -h: help, usage.
• -q: quiet. Don’t output anything for files without issues.
• -v: verbose.
• patchX: path to one or more patches.
Then the git logs should be checked using the check-git-log.sh script.
The script usage is:
check-git-log.sh [range]
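For example, to check only the most recent commit (assuming the range argument is passed
through to git log):
devtools/check-git-log.sh -1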
Checking Compilation
Compilation of patches and changes should be tested using the test-build.sh script in
the devtools directory of the DPDK repo:
devtools/test-build.sh x86_64-native-linuxapp-gcc+next+shared
Where:
• -h: help, usage.
• -jX: use X parallel jobs in “make”.
• -s: short test with only first config and without examples/doc.
• config: default config name plus config switches delimited with a + sign.
Examples of configs are:
x86_64-native-linuxapp-gcc
x86_64-native-linuxapp-gcc+next+shared
x86_64-native-linuxapp-clang+shared
Optional dependencies for the test builds can be enabled via environment variables, for example:
export DPDK_DEP_ZLIB=y
export DPDK_DEP_PCAP=y
export DPDK_DEP_SSL=y
Sending Patches
Patches should be sent to the mailing list using git send-email. You can configure an
external SMTP with something like the following:
[sendemail]
smtpuser = name@domain.com
smtpserver = smtp.domain.com
smtpserverport = 465
smtpencryption = ssl
You can test the emails by sending them to yourself or with the --dry-run option.
If the patch is in relation to a previous email thread you can add it to the same thread using the
Message ID:
git send-email --to dev@dpdk.org --in-reply-to <1234-foo@bar.com> 000*.patch
The Message ID can be found in the raw text of emails or at the top of each Patchwork patch,
for example. Shallow threading (--thread --no-chain-reply-to) is preferred for a patch
series.
Once submitted your patches will appear on the mailing list and in Patchwork.
Experienced committers may send patches directly with git send-email without the git
format-patch step. The options --annotate and confirm = always are recommended
for checking patches before sending.
Patches are reviewed by the community, relying on the experience and collaboration of the
members to double-check each other’s work. There are a number of ways to indicate that you
have checked a patch on the mailing list.
To indicate that you have interacted with a patch on the mailing list you should respond to the
patch in an email with one of the following tags:
• Reviewed-by:
• Acked-by:
• Tested-by:
• Reported-by:
• Suggested-by:
The tag should be on a separate line as follows:
tag-here: Name Surname <email@address.com>
Each of these tags has a specific meaning. In general, the DPDK community follows the kernel
usage of the tags. A short summary of the meanings of each tag is given here for reference:
Reviewed-by: is a strong statement that the patch is in an appropriate state for merging without
any remaining serious technical issues. Reviews from community members who are known to
understand the subject area and to perform thorough reviews will increase the likelihood of the
patch getting merged.
Acked-by: is a record that the person named was not directly involved in the preparation of
the patch but wishes to signify and record their acceptance and approval of it.
Tested-by: indicates that the patch has been successfully tested (in some environment) by
the person named.
Reported-by: is used to acknowledge the person who found or reported the bug.
Suggested-by: indicates that the patch idea was suggested by the named person.
The more work you put into the previous steps the easier it will be to get a patch accepted.
The general cycle for patch review and acceptance is:
1. Submit the patch.
2. Check the automatic test reports in the coming hours.
3. Wait for review comments. While you are waiting review some other patches.
4. Fix the review comments and submit a v n+1 patchset:
git format-patch -3 -v 2
Note: When acking patches please remove as much of the text of the patch email as
possible. It is generally best to delete everything after the Signed-off-by: line.
7. Having the patch Reviewed-by: and/or Tested-by: will also help the patch to be
accepted.
8. If the patch isn’t deemed suitable based on being out of scope or conflicting with existing
functionality it may receive a nack. In this case you will need to make a more convincing
technical argument in favor of your patches.
9. In addition a patch will not be accepted if it doesn’t address comments from a previous
version with fixes or valid arguments.
10. Acked patches will be merged in the current or next merge window.
This section sets out the guidelines for the DPDK Stable Releases and the DPDK Long Term
Support releases (LTS).
Introduction
The purpose of the DPDK Stable Releases is to maintain releases of DPDK with backported
fixes over an extended period of time. This provides downstream consumers of DPDK with a
stable target on which to base applications or packages.
The Long Term Support release (LTS) is a designation applied to a Stable Release to indicate
longer term support.
Stable Releases
Any major release of DPDK can be designated as a Stable Release if a maintainer volunteers
to maintain it.
A Stable Release is used to backport fixes from an N release back to an N-1 release, for
example, from 16.11 to 16.07.
The duration of a stable release is one complete release cycle (3 months). It can be longer, up to 1
year, if a maintainer continues to support the stable branch, or if users supply backported fixes,
however the explicit commitment should be for one release cycle.
The release cadence is determined by the maintainer based on the number of bugfixes and the
criticality of the bugs. Releases should be coordinated with the validation engineers to ensure
that a tagged release has been tested.
LTS Release
A stable release can be designated as an LTS release based on community agreement and a
commitment from a maintainer. An LTS release will have a maintenance duration of 2 years.
The current DPDK LTS release is 16.11.
It is anticipated that there will be at least 4 releases per year of the LTS or approximately 1
every 3 months. However, the cadence can be shorter or longer depending on the number and
criticality of the backported fixes. Releases should be coordinated with the validation engineers
to ensure that a tagged release has been tested.
The Stable and LTS release are coordinated on the stable@dpdk.org mailing list.
All fix patches to the master branch that are candidates for backporting should also be CCed
to the stable@dpdk.org mailing list.
Releasing
ABI
The Stable Release should not be seen as a way of breaking or circumventing the DPDK ABI
policy.
Patch Cheatsheet
Release Notes
Description of Release
This document contains the release notes for Data Plane Development Kit (DPDK) release
version 17.05.0-rc0 and previous releases.
It lists new features, fixed bugs, API and ABI changes and known issues.
For instructions on compiling and running the release, see the DPDK Getting Started Guide.
New Features
Resolved Issues
EAL
Drivers
Libraries
Examples
Other
Known Issues
API Changes
ABI Changes
The libraries prepended with a plus sign were incremented in this version.
librte_acl.so.2
librte_cfgfile.so.2
librte_cmdline.so.2
librte_cryptodev.so.2
librte_distributor.so.1
librte_eal.so.3
librte_ethdev.so.6
librte_hash.so.2
librte_ip_frag.so.1
librte_jobstats.so.1
librte_kni.so.2
librte_kvargs.so.1
librte_lpm.so.2
librte_mbuf.so.2
librte_mempool.so.2
librte_meter.so.1
librte_net.so.1
librte_pdump.so.1
librte_pipeline.so.3
librte_pmd_bond.so.1
librte_pmd_ring.so.2
librte_port.so.3
librte_power.so.1
librte_reorder.so.1
librte_ring.so.1
librte_sched.so.1
librte_table.so.2
librte_timer.so.1
librte_vhost.so.3
Tested Platforms
New Features
Virtio-user with vhost-kernel as the backend is a solution for the exception path, such as
KNI, which exchanges packets with the kernel networking stack. This solution is very
promising in:
– Maintenance: vhost and vhost-net (kernel) are upstreamed and extensively used
kernel modules.
– Features: vhost-net is designed to be a networking solution, which has lots of net-
working related features, like multi-queue, TSO, multi-seg mbuf, etc.
– Performance: similar to KNI, this solution would use one or more kthreads to
send/receive packets from user space DPDK applications, which has little impact
on user space polling thread (except that it might enter into kernel space to wake up
those kthreads if necessary).
• Added virtio Rx interrupt support.
Added a feature to enable Rx interrupt mode for virtio pci net devices as bound to VFIO
(noiommu mode) and driven by virtio PMD.
With this feature, the virtio PMD can switch between polling mode and interrupt mode, to
achieve best performance, and at the same time save power. It can work on both legacy
and modern virtio devices. In this mode, each rxq is mapped with an exclusive MSIx
interrupt.
See the Virtio Interrupt Mode documentation for more information.
• Added ARMv8 crypto PMD.
A new crypto PMD has been added, which provides combined mode cryptographic oper-
ations optimized for ARMv8 processors. The driver can be used to enhance performance
in processing chained operations such as cipher + HMAC.
• Updated the QAT PMD.
The QAT PMD has been updated with additional support for:
– DES algorithm.
– Scatter-gather list (SGL) support.
• Updated the AESNI MB PMD.
– The Intel(R) Multi Buffer Crypto for IPsec library used in AESNI MB PMD has been
moved to a new repository, in GitHub.
– Support has been added for single operations (cipher only and authentication only).
• Updated the AES-NI GCM PMD.
The AES-NI GCM PMD was migrated from the Multi Buffer library to the ISA-L library.
The migration entailed adding additional support for:
– GMAC algorithm.
– 256-bit cipher key.
– Session-less mode.
– Out-of-place processing.
– Scatter-gather support for chained mbufs (only out-of-place, and the destination mbuf
must be contiguous).
Resolved Issues
Drivers
Examples
API Changes
• Moved five APIs for VF management from the ethdev to the ixgbe PMD.
The following five APIs for VF management from the PF have been removed from the
ethdev, renamed, and added to the ixgbe PMD:
rte_eth_dev_set_vf_rate_limit()
rte_eth_dev_set_vf_rx()
rte_eth_dev_set_vf_rxmode()
rte_eth_dev_set_vf_tx()
rte_eth_dev_set_vf_vlan_filter()
ABI Changes
The libraries prepended with a plus sign were incremented in this version.
librte_acl.so.2
librte_cfgfile.so.2
librte_cmdline.so.2
librte_cryptodev.so.2
librte_distributor.so.1
librte_eal.so.3
+ librte_ethdev.so.6
librte_hash.so.2
librte_ip_frag.so.1
librte_jobstats.so.1
librte_kni.so.2
librte_kvargs.so.1
librte_lpm.so.2
librte_mbuf.so.2
librte_mempool.so.2
librte_meter.so.1
librte_net.so.1
librte_pdump.so.1
librte_pipeline.so.3
librte_pmd_bond.so.1
librte_pmd_ring.so.2
librte_port.so.3
librte_power.so.1
librte_reorder.so.1
librte_ring.so.1
librte_sched.so.1
librte_table.so.2
librte_timer.so.1
librte_vhost.so.3
Tested Platforms
This release has been tested with the below list of CPU/device/firmware/OS. Each section
describes a different set of combinations.
• Intel(R) platforms with Mellanox(R) NICs combinations
– Platform details
* CentOS 7.0
* Fedora 23
* Fedora 24
* FreeBSD 10.3
* Red Hat Enterprise Linux 7.2
– Platform details
* CentOS 7.2
* Fedora 25
* FreeBSD 11
* Red Hat Enterprise Linux Server release 7.3
* SUSE Enterprise Linux 12
* Wind River Linux 8
* Ubuntu 16.04
* Ubuntu 16.10
– NICs:
New Features
For that reason, this feature is disabled by default. It can be enabled when the
RTE_VHOST_USER_DEQUEUE_ZERO_COPY flag is set. Check the VHost section of the
Programming Guide for more information.
• Added vhost-user indirect descriptors support.
If the indirect descriptor feature is enabled, each packet sent by the guest will take exactly
one slot in the enqueue virtqueue. Without this feature, as in the current version, even
64-byte packets take two slots with the Virtio PMD on the guest side.
The main impact is better performance for 0% packet loss use-cases, as it behaves as if
the virtqueue size was enlarged, so more packets can be buffered in the case of system
perturbations. On the downside, small performance degradations were measured when
running micro-benchmarks.
• Added vhost PMD xstats.
Added extended statistics to vhost PMD from a per port perspective.
• Supported offloads with virtio.
Added support for the following offloads in virtio:
– Rx/Tx checksums.
– LRO.
– TSO.
• Added virtio NEON support for ARM.
Added NEON support for ARM based virtio.
• Updated the ixgbe base driver.
Updated the ixgbe base driver, including the following changes:
– Added X550em_a 10G PHY support.
– Added support for flow control auto negotiation for X550em_a 1G PHY.
– Added X550em_a FW ALEF support.
– Increased mailbox version to ixgbe_mbox_api_13.
– Added two MAC operations for Hyper-V support.
• Added APIs for VF management to the ixgbe PMD.
Eight new APIs have been added to the ixgbe PMD for VF management from the PF. The
declarations for the API’s can be found in rte_pmd_ixgbe.h.
• Updated the enic driver.
– Added update to use interrupt for link status checking instead of polling.
– Added more flow director modes on UCS Blade with firmware version >= 2.0(13e).
– Added full support for MTU update.
– Added support for the rte_eth_rx_queue_count function.
• Updated the mlx5 driver.
– Added support for RSS hash results.
Resolved Issues
Drivers
Known Issues
frequency scaling. In addition, the L3fwd-power application requires the mbuf to have the correct
packet type set, but in some drivers the vector mode must be disabled for this.
Therefore, in order to use L3fwd-power, vector mode should be disabled via the config
file.
• Digest address must be supplied for crypto auth operation on QAT PMD.
The cryptodev API specifies that if the rte_crypto_sym_op.digest.data field, and by infer-
ence the digest.phys_addr field which points to the same location, is not set for an auth
operation the driver is to understand that the digest result is located immediately follow-
ing the region over which the digest is computed. The QAT PMD doesn’t correctly handle
this case and reads and writes to an incorrect location.
Callers can workaround this by always supplying the digest virtual and physical address
fields in the rte_crypto_sym_op for an auth operation.
API Changes
• The driver naming convention has been changed to make them more consistent. It es-
pecially impacts --vdev arguments. For example eth_pcap becomes net_pcap and
cryptodev_aesni_mb_pmd becomes crypto_aesni_mb.
For backward compatibility an alias feature has been enabled to support the original
names.
• The log history has been removed.
• The rte_ivshmem feature (including library and EAL code) has been removed in 16.11
because it had some design issues which were not planned to be fixed.
• The file_name data type of struct rte_port_source_params and struct
rte_port_sink_params is changed from char * to const char *.
• Improved device/driver hierarchy and generalized hotplugging.
The device and driver relationship has been restructured by introducing generic classes.
This paves the way for having PCI, VDEV and other device types as instantiated objects
rather than classes in themselves. Hotplugging has also been generalized into EAL so
that Ethernet or crypto devices can use the common infrastructure.
– Removed pmd_type as a way of segregation of devices.
– Moved numa_node and devargs into rte_driver from rte_pci_driver.
These can now be used by any instantiated object of rte_driver.
– Added rte_device class and all PCI and VDEV devices inherit from it
– Renamed devinit/devuninit handlers to probe/remove to make it more semantically
correct with respect to the device <=> driver relationship.
– Moved hotplugging support to EAL. Hereafter, PCI and vdev can use the APIs
rte_eal_dev_attach and rte_eal_dev_detach.
– Renamed helpers and support macros to make them more synonymous with their
device types (e.g. PMD_REGISTER_DRIVER => RTE_PMD_REGISTER_PCI).
– Device naming functions have been generalized from ethdev and cryptodev to EAL.
rte_eal_pci_device_name has been introduced for obtaining unique device
name from PCI Domain-BDF description.
– Virtual device registration APIs have been added: rte_eal_vdrv_register and
rte_eal_vdrv_unregister.
ABI Changes
The libraries prepended with a plus sign were incremented in this version.
librte_acl.so.2
librte_cfgfile.so.2
librte_cmdline.so.2
+ librte_cryptodev.so.2
librte_distributor.so.1
+ librte_eal.so.3
+ librte_ethdev.so.5
librte_hash.so.2
librte_ip_frag.so.1
librte_jobstats.so.1
librte_kni.so.2
librte_kvargs.so.1
librte_lpm.so.2
librte_mbuf.so.2
librte_mempool.so.2
librte_meter.so.1
librte_net.so.1
librte_pdump.so.1
librte_pipeline.so.3
librte_pmd_bond.so.1
librte_pmd_ring.so.2
librte_port.so.3
librte_power.so.1
librte_reorder.so.1
librte_ring.so.1
librte_sched.so.1
librte_table.so.2
librte_timer.so.1
librte_vhost.so.3
Tested Platforms
1. SuperMicro 1U
• BIOS: 1.0c
• Processor: Intel(R) Atom(TM) CPU C2758 @ 2.40GHz
2. SuperMicro 1U
• BIOS: 1.0a
• Processor: Intel(R) Xeon(R) CPU D-1540 @ 2.00GHz
• Onboard NIC: Intel(R) X552/X557-AT (2x10G)
– Firmware-version: 0x800001cf
Tested NICs
Tested OSes
• CentOS 7.2
• Fedora 23
• Fedora 24
• FreeBSD 10.3
• FreeBSD 11
• Red Hat Enterprise Linux Server release 6.7 (Santiago)
• Red Hat Enterprise Linux Server release 7.0 (Maipo)
• Red Hat Enterprise Linux Server release 7.2 (Maipo)
• SUSE Enterprise Linux 12
• Wind River Linux 6.0.0.26
• Wind River Linux 8
• Ubuntu 14.04
• Ubuntu 15.04
• Ubuntu 16.04
New Features
mailbox interrupt to receive the message and allow the application to register a callback
for it.
• Updated the ixgbe base driver.
The ixgbe base driver was updated with changes including the following:
– Added sgmii link for X550.
– Added MAC link setup for X550a SFP and SFP+.
– Added KR support for X550em_a.
– Added new PHY definitions for M88E1500.
– Added support for the VLVF to be bypassed when adding/removing a VFTA entry.
– Added X550a flow control auto negotiation support.
• Updated the i40e base driver.
Updated the i40e base driver, including support for new device IDs.
• Updated the enic driver.
The enic driver was updated with changes including the following:
– Optimized the Tx function.
– Added Scattered Rx capability.
– Improved packet type identification.
– Added MTU update in non Scattered Rx mode and enabled MTU of up to 9208 with
UCS Software release 2.2 on 1300 series VICs.
• Updated the mlx5 driver.
The mlx5 driver was updated with changes including the following:
– Data path was refactored to bypass Verbs to improve RX and TX performance.
– Removed compilation parameters for inline send, MLX5_MAX_INLINE, and added
command line parameter instead, txq_inline.
– Improved TX scatter-gather support: Removed the compilation parameter
MLX5_PMD_SGE_WR_N. The number of scatter-gather elements is now set to the maximum
value the NIC supports. Removed the linearization logic; this decreases the memory
consumption of the PMD.
– Improved jumbo frames support by dynamically setting RX scatter-gather ele-
ments according to the MTU and mbuf size; there is no need for the compilation parameter
MLX5_PMD_SGE_WR_N.
• Added support for virtio on IBM POWER8.
The ioports are mapped in memory when using Linux UIO.
• Added support for Virtio in containers.
Add a new virtual device, named virtio_user, to support virtio for containers.
Known limitations:
– Control queue and multi-queue are not supported yet.
– Configure the MAC address in the routing pipeline and automatic route updates with
change in link state.
– Enable RSS per network interface through the configuration file.
– Streamline the CLI code.
• Added keepalive enhancements.
Added support for reporting of core states other than “dead” to monitoring applications,
enabling the support of broader liveness reporting to external processes.
• Added packet capture framework.
– A new library librte_pdump is added to provide a packet capture API.
– A new app/pdump tool is added to demonstrate capture packets in DPDK.
• Added floating VEB support for i40e PF driver.
A “floating VEB” is a special Virtual Ethernet Bridge (VEB) which does not have an uplink
port, but instead is used for switching traffic between virtual functions (VFs) on a port.
For information on this feature, please see the “I40E Poll Mode Driver” section of the
“Network Interface Controller Drivers” document.
• Added support for live migration of a VM with SRIOV VF.
Live migration of a VM with Virtio and VF PMD’s using the bonding PMD.
Resolved Issues
EAL
Drivers
Libraries
Examples
Other
Known Issues
API Changes
ABI Changes
• The rte_port_source_params structure has new fields to support PCAP files. It was
already in release 16.04 with RTE_NEXT_ABI flag.
• The rte_eth_dev_info structure has new fields nb_rx_queues and nb_tx_queues
to support the number of queues configured by software.
• A Vhost ABI refactoring has been made: the virtio_net structure is no longer ex-
ported directly to the application. Instead, a handle, vid, has been used to represent
this structure internally.
The libraries prepended with a plus sign were incremented in this version.
+ libethdev.so.4
librte_acl.so.2
librte_cfgfile.so.2
librte_cmdline.so.2
librte_cryptodev.so.1
librte_distributor.so.1
librte_eal.so.2
librte_hash.so.2
librte_ip_frag.so.1
librte_ivshmem.so.1
librte_jobstats.so.1
librte_kni.so.2
librte_kvargs.so.1
librte_lpm.so.2
librte_mbuf.so.2
+ librte_mempool.so.2
librte_meter.so.1
librte_pdump.so.1
librte_pipeline.so.3
librte_pmd_bond.so.1
librte_pmd_ring.so.2
+ librte_port.so.3
librte_power.so.1
librte_reorder.so.1
librte_ring.so.1
librte_sched.so.1
librte_table.so.2
librte_timer.so.1
+ librte_vhost.so.3
Tested Platforms
1. SuperMicro 1U
• BIOS: 1.0c
• Processor: Intel(R) Atom(TM) CPU C2758 @ 2.40GHz
2. SuperMicro 1U
• BIOS: 1.0a
• Processor: Intel(R) Xeon(R) CPU D-1540 @ 2.00GHz
• Onboard NIC: Intel(R) X552/X557-AT (2x10G)
– Firmware-version: 0x800001cf
– Device ID (PF/VF): 8086:15ad /8086:15a8
• kernel driver version: 4.2.5 (ixgbe)
3. SuperMicro 2U
• BIOS: 1.0a
• Processor: Intel(R) Xeon(R) CPU E5-4667 v3 @ 2.00GHz
4. Intel(R) Server board S2600GZ
• BIOS: SE5C600.86B.02.02.0002.122320131210
• Processor: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
5. Intel(R) Server board W2600CR
• BIOS: SE5C600.86B.02.01.0002.082220131453
• Processor: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
6. Intel(R) Server board S2600CWT
• BIOS: SE5C610.86B.01.01.0009.060120151350
• Processor: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
7. Intel(R) Server board S2600WTT
• BIOS: SE5C610.86B.01.01.0005.101720141054
• Processor: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
8. Intel(R) Server board S2600WTT
• BIOS: SE5C610.86B.11.01.0044.090120151156
• Processor: Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz
Tested NICs
Tested OSes
• CentOS 7.0
• Fedora 23
• Fedora 24
• FreeBSD 10.3
• Red Hat Enterprise Linux 7.2
• SUSE Enterprise Linux 12
• Ubuntu 15.10
• Ubuntu 16.04 LTS
• Wind River Linux 8
New Features
When we want to use this E-tag support feature, the forwarding should be enabled to
forward the packets received by the PF to the indicated VFs. And insertion and stripping
should be enabled for VFs to offload the effort to hardware.
Features added:
– Support E-tag offloading of insertion and stripping.
– Support Forwarding E-tag packets to pools based on GRP and E-CID_base.
• Added support for VxLAN and NVGRE checksum off-load on X550.
– Added support for VxLAN and NVGRE RX/TX checksum off-load on X550. RX/TX
checksum off-load is provided on both inner and outer IP header and TCP header.
– Added functions to support VxLAN port configuration. The default VxLAN port num-
ber is 4789 but this can be updated programmatically.
• Added support for new X550EM_a devices.
Added support for new X550EM_a devices and their MAC types, X550EM_a and
X550EM_a_vf. Updated the relevant PMD to use the new devices and MAC types.
• Added x550em_x V2 device support.
Added support for x550em_x V2 device. Only x550em_x V1 was supported before. A
mask for V1 and V2 is defined and used to support both.
• Supported link speed auto-negotiation on X550EM_X
Normally the auto-negotiation is supported by firmware and software doesn’t care about
it. But on x550em_x, firmware doesn’t support auto-negotiation. As the ports of
x550em_x are 10GbE, if we connect the port with a peer which is 1GbE, the link will
always be down. We added the support for auto-negotiation by software to avoid this link
down issue.
• Added software-firmware sync on X550EM_a.
Added support for software-firmware sync for resource sharing. Use the PHY token,
shared between software-firmware for PHY access on X550EM_a.
• Updated the i40e base driver.
The i40e base driver was updated with changes including the following:
– Use RX control AQ commands to read/write RX control registers.
– Added new X722 device IDs and removed an X710 device ID that was never used.
– Expose registers for HASH/FD input set configuring.
• Enabled PCI extended tag for i40e.
Enabled extended tag for i40e by checking and writing corresponding PCI config space
bytes, to boost the performance. The legacy method of reading/writing sysfile supported
by kernel module igb_uio is now deprecated.
• Added i40e support for setting mac addresses.
• Added dump of i40e registers and EEPROM.
• Supported ether type setting of single and double VLAN for i40e
Resolved Issues
Drivers
• cxgbe: Fixed crash due to incorrect size allocated for RSS table.
Fixed a segfault that occurs when accessing part of port 0’s RSS table that gets overwrit-
ten by subsequent port 1’s part of the RSS table due to incorrect size allocated for each
entry in the table.
• cxgbe: Fixed setting wrong device MTU.
Fixed an incorrect device MTU being set due to the Ethernet header and CRC lengths
being added twice.
• ixgbe: Fixed zeroed VF mac address.
Resolved an issue where the VF MAC address is zeroed out in cases where the VF driver
is loaded while the PF interface is down. The solution is to only set it when we get an
ACK from the PF.
• ixgbe: Fixed setting flow director flag twice.
Resolved an issue where packets were being dropped when switching to perfect filters
mode.
• ixgbe: Set MDIO speed after MAC reset.
The MDIO clock speed must be reconfigured after the MAC reset. The MDIO clock speed
becomes invalid, therefore the driver reads invalid PHY register values. The driver now
sets the MDIO clock speed prior to initializing PHY ops and again after the MAC reset.
• ixgbe: Fixed maximum number of available TX queues.
In IXGBE, the maximum number of TX queues varies depending on the NIC operating
mode. This was not being updated in the device information, providing an incorrect
number in some cases.
• i40e: Generated MAC address for each VF.
A MAC address is generated for each VF during PF host initialization, and the VF
MAC address is kept the same across different VF launches.
• i40e: Fixed failure of reading/writing RX control registers.
Fixed i40e issue of failing to read/write rx control registers when under stress with traffic,
which might result in application launch failure.
• i40e: Enabled vector driver by default.
Previously, vector driver was disabled by default as it couldn’t fill packet type info for l3fwd
to work well. Now there is an option for l3fwd to analyze the packet type so the vector
driver is enabled by default.
• i40e: Fixed link info of VF.
Previously, the VF’s link speed stayed at 10GbE and status always was up. It did not
change even when the physical link’s status changed. Now this issue is fixed to make
VF’s link info consistent with physical link.
• mlx5: Fixed possible crash during initialization.
A crash could occur when failing to allocate private device context.
• mlx5: Added port type check.
Added port type check to prevent port initialization on non-Ethernet link layers and to
report an error.
• mlx5: Applied VLAN filtering to broadcast and IPv6 multicast flows.
Prevented reception of multicast frames outside of configured VLANs.
• mlx5: Fixed RX checksum offload in non L3/L4 packets.
Fixed report of bad checksum for packets of unknown type.
• aesni_mb: Fixed wrong return value when creating a device.
The cryptodev_aesni_mb_init() function was returning the device id of the device
created, instead of 0 (on success) that rte_eal_vdev_init() expects. This made it
impossible to create more than one aesni_mb device from the command line.
• qat: Fixed AES GCM decryption.
Allowed AES GCM on the cryptodev API, but in some cases gave invalid results due to
incorrect IV setting.
Libraries
• hash: Fixed CRC32c hash computation for non multiple of 4 bytes sizes.
Fix crc32c hash functions to return a valid crc32c value for data lengths not a multiple of
4 bytes.
• hash: Fixed hash library to support multi-process mode.
Fix hash library to support multi-process mode, using a jump table, instead of storing a
function pointer to the key compare function. Multi-process mode only works with the
built-in compare functions, however a custom compare function (not in the jump table)
can only be used in single-process mode.
• hash: Fixed return value when allocating an existing hash table.
Changed the rte_hash*_create() functions to return NULL and set rte_errno to
EEXIST when the object name already exists. This is the behavior described in the API
documentation in the header file. The previous behavior was to return a pointer to the
existing object in that case, preventing the caller from knowing if the object had to be
freed or not.
• lpm: Fixed return value when allocating an existing object.
Changed the rte_lpm*_create() functions to return NULL and set rte_errno to
EEXIST when the object name already exists. This is the behavior described in the API
documentation in the header file. The previous behavior was to return a pointer to the
existing object in that case, preventing the caller from knowing if the object had to be
freed or not.
• librte_port: Fixed segmentation fault for ring and ethdev writer nodrop.
Fixed core dump issue on txq and swq when dropless is set to yes.
Examples
API Changes
ABI Changes
• The RETA entry size in rte_eth_rss_reta_entry64 has been increased from 8-bit
to 16-bit.
• The ethdev flow director structure rte_eth_fdir_flow structure was changed. New
fields were added to extend flow director’s input set.
• The cmdline buffer size has been increase from 256 to 512.
The libraries prepended with a plus sign were incremented in this version.
+ libethdev.so.3
librte_acl.so.2
librte_cfgfile.so.2
+ librte_cmdline.so.2
librte_distributor.so.1
librte_eal.so.2
librte_hash.so.2
librte_ip_frag.so.1
librte_ivshmem.so.1
librte_jobstats.so.1
librte_kni.so.2
librte_kvargs.so.1
librte_lpm.so.2
librte_mbuf.so.2
librte_mempool.so.1
librte_meter.so.1
+ librte_pipeline.so.3
librte_pmd_bond.so.1
librte_pmd_ring.so.2
librte_port.so.2
librte_power.so.1
librte_reorder.so.1
librte_ring.so.1
librte_sched.so.1
librte_table.so.2
librte_timer.so.1
librte_vhost.so.2
Tested Platforms
1. SuperMicro 1U
• BIOS: 1.0c
• Processor: Intel(R) Atom(TM) CPU C2758 @ 2.40GHz
2. SuperMicro 1U
• BIOS: 1.0a
• Processor: Intel(R) Xeon(R) CPU D-1540 @ 2.00GHz
• Onboard NIC: Intel(R) X552/X557-AT (2x10G)
– Firmware-version: 0x800001cf
– Device ID (PF/VF): 8086:15ad /8086:15a8
Tested NICs
New Features
Resolved Issues
EAL
Drivers
Bit 11 of PHYREG 0 is used to power down PHY. The use of PHYREG 16 is no longer
necessary.
• e1000/base: Fixed jumbo frame CRC failures.
Change the value of register 776.20[11:2] for jumbo mode from 0x1A to 0x1F. This is to
enlarge the gap between read and write pointers in the TX FIFO.
• e1000/base: Fix link flap on 82579.
Several customers have reported a link flap issue on 82579. The symptoms are random
and intermittent link losses when 82579 is connected to specific switches. The issue was
root-caused as an interoperability problem between the NIC and at least some Broadcom
PHYs in the Energy Efficient Ethernet wake mechanism.
To fix the issue, the Phase Locked Loop shutdown in 100M Low Power Idle is disabled.
This solution causes a power increase on 100M EEE links, which may cost an additional
28 mW in this specific mode.
• igb: Fixed IEEE1588 frame identification in I210.
Fixed issue where the flag PKT_RX_IEEE1588_PTP was not being set in the Intel I210
NIC, as the EtherType in RX descriptor is in bits 8:10 of Packet Type and not in the default
bits 0:2.
• igb: Fixed VF start with PF stopped.
VF needs the PF interrupt support initialized even if not started.
• igb: Fixed VF MAC address when using with DPDK PF.
Assign a random MAC address in VF when not assigned by PF.
• igb: Removed CRC bytes from byte counter statistics.
• ixgbe: Fixed issue with X550 DCB.
Fixed a DCB issue with x550 where for 8 TCs (Traffic Classes), if a packet with user
priority 6 or 7 was injected to the NIC, then the NIC would only put 3 packets into the
queue. There was also a similar issue for 4 TCs.
• ixgbe: Removed burst size restriction of vector RX.
Fixed issue where a burst size less than 32 didn’t receive anything.
• ixgbe: Fixed VF start with PF stopped.
VF needs the PF interrupt support initialized even if not started.
• ixgbe: Fixed TX hang when RS distance exceeds HW limit.
Fixed an issue where the TX queue can hang when a lot of highly fragmented packets
have to be sent. As part of that fix, tx_rs_thresh for the ixgbe PMD is not allowed to be
greater than 32, to comply with HW restrictions (a TX queue setup sketch follows at the
end of this list).
• ixgbe: Fixed RX error statistic counter.
Fixed an issue where the RX error counter of ixgbe was not accurate. The MAC short
packet discard count (mspdc) was added to the counter. MAC local faults and MAC remote
faults were removed as they count errors rather than packets, and jabber errors were
removed as they are already accounted for by the CRC error counter. Finally, the XEC
(L3/L4 checksum error) counter was removed due to an errata; see commit 256ff05a9cae
for details.
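As referenced in the TX hang entry above, the following is a hedged sketch of keeping
tx_rs_thresh within the ixgbe limit when setting up a TX queue; the descriptor count and
queue id are example values only.

#include <rte_ethdev.h>

/* Configure a TX queue while keeping tx_rs_thresh <= 32, as required by the
 * ixgbe hardware restriction described above. */
static int
setup_tx_queue(uint8_t port_id, uint16_t queue_id, unsigned int socket_id)
{
    struct rte_eth_dev_info dev_info;
    struct rte_eth_txconf txconf;

    rte_eth_dev_info_get(port_id, &dev_info);
    txconf = dev_info.default_txconf;   /* start from the PMD defaults */
    if (txconf.tx_rs_thresh > 32)
        txconf.tx_rs_thresh = 32;       /* stay within the HW limit */

    return rte_eth_tx_queue_setup(port_id, queue_id, 512 /* nb_desc */,
                                  socket_id, &txconf);
}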
Libraries
Examples
Other
• This release drops compatibility with Linux kernel 2.6.33. The minimum kernel require-
ment is now 2.6.34.
Known Issues
• Some drivers do not fill in the packet type when receiving. As the l3fwd example appli-
cation requires this info, the i40e vector driver must be disabled to get packet type
information with i40e.
• Some (possibly all) VF drivers (e.g. i40evf) do not handle any PF reset events/requests
in the VF driver. This means that the VF driver may not work after a PF reset in the host
side. The workaround is to avoid triggering any PF reset events/requests on the host
side.
• 100G link report support is missing.
• Mellanox PMDs (mlx4 & mlx5):
– PMDs do not support CONFIG_RTE_BUILD_COMBINE_LIBS and
CONFIG_RTE_BUILD_SHARED_LIB simultaneously.
– There is performance degradation for small packets when the PMD is compiled with
SGE_WR_N = 4 compared to the performance when SGE_WR_N = 1. If scattered
packets are not used it is recommended to compile the PMD with SGE_WR_N = 1.
– When a Multicast or Broadcast packet is sent to the SR-IOV mlx4 VF, it is returned
back to the port.
– PMDs report “bad” L4 checksum when IP packet is received.
– mlx5 PMD reports “bad” checksum although the packet has “good” checksum. Will
be fixed in upcoming MLNX_OFED release.
API Changes
ABI Changes
• librte_table hash: The key mask parameter is added to the hash table parameter struc-
ture for 8-byte key and 16-byte key extendable bucket and LRU tables.
• librte_port: Macros to access the packet meta-data stored within the packet buffer have
been adjusted to cover the packet mbuf structure.
• librte_cfgfile: Allow longer names and values by increasing the constants
CFG_NAME_LEN and CFG_VALUE_LEN to 64 and 256 respectively.
• vhost: a new field enabled is added to the vhost_virtqueue structure.
• vhost: a new field virt_qp_nb is added to virtio_net structure, and the virtqueue
field is moved to the end of virtio_net structure.
• vhost: a new operation vring_state_changed is added to
virtio_net_device_ops structure.
• vhost: a few spaces are reserved in both the vhost_virtqueue and virtio_net
structures for future extension.
The libraries prepended with a plus sign were incremented in this version.
+ libethdev.so.2
+ librte_acl.so.2
+ librte_cfgfile.so.2
librte_cmdline.so.1
librte_distributor.so.1
+ librte_eal.so.2
+ librte_hash.so.2
librte_ip_frag.so.1
librte_ivshmem.so.1
librte_jobstats.so.1
+ librte_kni.so.2
librte_kvargs.so.1
+ librte_lpm.so.2
+ librte_mbuf.so.2
librte_mempool.so.1
librte_meter.so.1
+ librte_pipeline.so.2
librte_pmd_bond.so.1
+ librte_pmd_ring.so.2
+ librte_port.so.2
librte_power.so.1
librte_reorder.so.1
librte_ring.so.1
librte_sched.so.1
+ librte_table.so.2
librte_timer.so.1
+ librte_vhost.so.2
New Features
– A new configuration file syntax has been introduced for IP pipeline applications.
Parsing of the configuration file is changed.
– Implementation of the IP pipeline application is modified to make it more structured
and user friendly.
– Implementation of the command line interface (CLI) for each pipeline type has been
moved to a separate compilation unit. The syntax of pipeline CLI commands has been
changed.
– Initialization of IP pipeline is modified to match the new parameters structure.
– New implementation of pass-through pipeline, firewall pipeline, routing pipeline, and
flow classification has been added.
– Master pipeline with CLI interface has been added.
– Added extended documentation of the IP Pipeline.
• Added API for IEEE1588 timestamping.
This feature adds an ethdev API to enable, disable and read IEEE1588/802.1AS PTP
timestamps from devices that support it. The following functions were added:
– rte_eth_timesync_enable()
– rte_eth_timesync_disable()
– rte_eth_timesync_read_rx_timestamp()
– rte_eth_timesync_read_tx_timestamp()
The “ieee1588” forwarding mode in testpmd was also refactored to demonstrate the new
API.
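A minimal sketch of the new calls follows; it is illustrative only, and the port id, flags value
and error handling are assumptions rather than text from the release notes.

#include <stdio.h>
#include <time.h>
#include <rte_ethdev.h>

static void
read_ptp_timestamps(uint8_t port_id)
{
    struct timespec rx_ts, tx_ts;

    rte_eth_timesync_enable(port_id);

    /* After receiving a PTP frame, read the RX timestamp latched by the NIC.
     * The flags argument is device specific; 0 is used here for illustration. */
    if (rte_eth_timesync_read_rx_timestamp(port_id, &rx_ts, 0) == 0)
        printf("RX timestamp: %ld.%09ld\n",
               (long)rx_ts.tv_sec, rx_ts.tv_nsec);

    /* After transmitting a PTP frame, read the TX timestamp. */
    if (rte_eth_timesync_read_tx_timestamp(port_id, &tx_ts) == 0)
        printf("TX timestamp: %ld.%09ld\n",
               (long)tx_ts.tv_sec, tx_ts.tv_nsec);

    rte_eth_timesync_disable(port_id);
}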
• Added multicast address filtering.
Added multicast address filtering via a new ethdev function set_mc_addr_list().
This overcomes a limitation in previous releases where the receipt of mul-
ticast packets on a given port could only be enabled by invoking the
rte_eth_allmulticast_enable() function. This method did not work for VFs
in SR-IOV architectures when the host PF driver does not allow this operation on VFs.
In such cases, joined multicast addresses had to be added individually to the set of
multicast addresses that are filtered by the [VF] port.
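A hedged sketch of the ethdev-level call, rte_eth_dev_set_mc_addr_list(), is shown
below; the two multicast MAC addresses are example values only.

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_ether.h>

/* Join two example multicast groups on a port (e.g. a VF) instead of
 * enabling allmulticast mode. */
static int
join_multicast_groups(uint8_t port_id)
{
    struct ether_addr mc_addrs[] = {
        { .addr_bytes = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 } },
        { .addr_bytes = { 0x33, 0x33, 0x00, 0x00, 0x00, 0x01 } },
    };

    return rte_eth_dev_set_mc_addr_list(port_id, mc_addrs,
                                        RTE_DIM(mc_addrs));
}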
• Added Flow Director extensions.
Several Flow Director extensions were added such as:
– Support for RSS and Flow Director hashes in vector RX.
– Added Flow Director for L2 payload.
• Added RSS hash key size query per port.
This feature supports querying the RSS hash key size of each port. A new field
hash_key_size has been added in the rte_eth_dev_info struct for storing hash
key size in bytes.
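A short sketch of querying the new field is given below; it is illustrative only and assumes
the port has already been probed.

#include <stdio.h>
#include <rte_ethdev.h>

/* Return the RSS hash key size reported by the PMD, or 0 if the driver does
 * not report one. */
static uint8_t
get_rss_key_size(uint8_t port_id)
{
    struct rte_eth_dev_info dev_info;

    rte_eth_dev_info_get(port_id, &dev_info);
    if (dev_info.hash_key_size == 0)
        printf("port %u does not report its RSS hash key size\n",
               (unsigned int)port_id);
    return dev_info.hash_key_size;
}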
• Added userspace ethtool support.
Added userspace ethtool support to provide a familiar interface for applications that man-
age devices via kernel-space ethtool_op and net_device_op.
The initial implementation focuses on operations that can be implemented through exist-
ing netdev APIs. More operations will be supported in later releases.
• Updated the ixgbe base driver.
The ixgbe base driver was updated with several changes including the following:
– Added a new 82599 device id.
– Added new X550 PHY ids.
– Added SFP+ dual-speed support.
– Added wait helper for X550 IOSF accesses.
– Added X550em features.
– Added X557 PHY LEDs support.
– Commands for flow director.
– Issue firmware command when resetting X550em.
See the git log for full details of the ixgbe/base changes.
• Added additional hotplug support.
Port hotplug support was added to the following PMDs:
– e1000/igb.
– ixgbe.
– i40e.
– fm10k.
– ring.
– bonding.
– virtio.
Port hotplug support was added to BSD.
• Added ixgbe LRO support.
Added LRO support for x540 and 82599 devices.
• Added extended statistics for ixgbe.
Implemented xstats_get() and xstats_reset() in dev_ops for ixgbe to expose
detailed error statistics to DPDK applications.
These will be implemented for other PMDs in later releases.
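The sketch below shows how an application might dump these statistics through the xstats
API as it exists in this release (names and values are retrieved separately); it is illustrative
only.

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_malloc.h>

static void
dump_xstats(uint8_t port_id)
{
    int i, len = rte_eth_xstats_get_names(port_id, NULL, 0);
    struct rte_eth_xstat_name *names;
    struct rte_eth_xstat *values;

    if (len <= 0)
        return;
    names = rte_malloc(NULL, sizeof(*names) * len, 0);
    values = rte_malloc(NULL, sizeof(*values) * len, 0);
    if (names != NULL && values != NULL &&
        rte_eth_xstats_get_names(port_id, names, len) == len &&
        rte_eth_xstats_get(port_id, values, len) == len) {
        for (i = 0; i < len; i++)
            printf("%s: %" PRIu64 "\n", names[i].name, values[i].value);
    }
    rte_free(names);
    rte_free(values);
}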
• Added proc_info application.
Created a new proc_info application, by refactoring the existing dump_cfg applica-
tion, to demonstrate the usage of retrieving statistics, and the new extended statistics
(see above), for DPDK interfaces.
when a new entry must be added to it, storing the evicted entry in an alternative location,
using a secondary hash function.
This gives the user the ability to store more entries when a bucket is full, in comparison
with the previous implementation.
The API has not been changed, although new fields have been added in the rte_hash
structure, which has been changed to internal use only.
The main change when creating a new table is that the number of entries per bucket is
now fixed, so its parameter is ignored now (it is still there to maintain the same parameters
structure).
Also, the maximum burst size in the lookup_burst function has been increased to 64 to
improve performance.
• Optimized KNI RX burst size computation.
Optimized KNI RX burst size computation by avoiding checking how many entries are in
kni->rx_q prior to actually pulling them from the fifo.
• Added KNI multicast.
Enabled adding multicast addresses to KNI interfaces by adding an empty callback for
set_rx_mode (typically used for setting up hardware) so that the ioctl succeeds. This is
the same approach as the Linux tap interface uses.
• Added cmdline polling mode.
Added the ability to process console input in the same thread as packet processing by
using the poll() function.
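Assuming the polling entry point is cmdline_poll() from librte_cmdline, a usage sketch
could look as follows; main_ctx and process_packets() are application-side assumptions.

#include <cmdline.h>
#include <cmdline_rdline.h>
#include <cmdline_socket.h>

/* Interleave console input with packet processing in the same thread. */
static void
cli_and_datapath_loop(cmdline_parse_ctx_t *main_ctx,
                      void (*process_packets)(void))
{
    struct cmdline *cl = cmdline_stdin_new(main_ctx, "app> ");

    while (cl != NULL) {
        if (cmdline_poll(cl) == RDLINE_EXITED)  /* user quit the CLI */
            break;
        process_packets();                      /* regular datapath work */
    }
    if (cl != NULL)
        cmdline_stdin_exit(cl);
}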
• Added VXLAN Tunnel End point sample application.
Added a Tunnel End point (TEP) sample application that simulates a VXLAN Tunnel
Endpoint (VTEP) termination in DPDK. It is used to demonstrate the offload and filtering
capabilities of Intel XL710 10/40 GbE NICs for VXLAN packets.
• Enabled combining of the -m and --no-huge EAL options.
Added an option to allow combining of the -m and --no-huge EAL command line options.
This allows user applications to run as non-root but with higher memory allocations, and
removes a constraint on --no-huge mode being limited to 64M.
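A hedged sketch of passing the two options together when initializing the EAL from within
an application follows; the memory size and core mask are example values.

#include <rte_eal.h>

int
main(int argc, char **argv)
{
    char *eal_args[] = {
        argv[0], "--no-huge", "-m", "128",  /* 128 MB without hugepages */
        "-c", "0x3",
    };
    int eal_argc = sizeof(eal_args) / sizeof(eal_args[0]);

    (void)argc;
    if (rte_eal_init(eal_argc, eal_args) < 0)
        return -1;
    /* ... application setup continues here ... */
    return 0;
}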
Resolved Issues
Known Issues
• When running the vmdq sample or vhost sample applications with the Intel(R) XL710
(i40e) NIC, the configuration option CONFIG_RTE_MAX_QUEUES_PER_PORT should be
increased from 256 to 1024.
• VM power manager may not work on systems with more than 64 cores.
API Changes
• The order in which user-supplied RX and TX callbacks are called has been changed to
the order in which they were added (FIFO), in line with end-user expectations. The
previous calling order was the reverse of this (LIFO) and was counter-intuitive for users.
The actual API is unchanged.
ABI Changes
New Features
• Poll-mode driver support for an early release of the PCIE host interface of the Intel(R)
Ethernet Switch FM10000.
– Basic Rx/Tx functions for PF/VF
– Interrupt handling support for PF/VF
– Per queue start/stop functions for PF/VF
– Support Mailbox handling between PF/VF and PF/Switch Manager
– Receive Side Scaling (RSS) for PF/VF
– Scatter receive function for PF/VF
– Reta update/query for PF/VF
– VLAN filter set for PF
– Link status query for PF/VF
Note: The software is intended to run on pre-release hardware and may contain unknown
or unresolved defects or issues related to functionality and performance. The poll mode driver
is also pre-release and will be updated to a released version after the hardware and base
driver release. Should the official hardware release be made between DPDK releases, an
updated poll-mode driver will be made available.
• Link Bonding
– Support for adaptive load balancing (mode 6) to the link bonding library.
– Support for registration of link status change callbacks with link bonding devices.
– Support for slave devices which do not support link status change interrupts in the
link bonding library, via a link status polling mechanism.
• PCI Hotplug with NULL PMD sample application
• ABI versioning
• x32 ABI
• Non-EAL Thread Support
• Multi-pthread Support
• Re-order Library
• ACL for AVX2
• Architecture Independent CRC Hash
• uio_pci_generic Support
• KNI Optimizations
• Vhost-user support
• Virtio (link, vlan, mac, port IO, perf)
• IXGBE-VF RSS
• RX/TX Callbacks
• Unified Flow Types
• Indirect Attached MBUF Flag
• Use default port configuration in TestPMD
• Tunnel offloading in TestPMD
• Poll Mode Driver - 40 GbE Controllers (librte_pmd_i40e)
– Support for Flow Director
– Support for ethertype filter
– Support RSS in VF
– Support configuring redirection table with different size from 1GbE and 10 GbE
– 128/512 entries of 40GbE PF
– 64 entries of 40GbE VF
– Support configuring hash functions
– Support for VXLAN packet on Intel® 40GbE Controllers
• Poll Mode Driver for Mellanox ConnectX-3 EN adapters (mlx4)
Note: This PMD is only available for Linux and is disabled by default due to external depen-
dencies (libibverbs and libmlx4). Please refer to the NIC drivers guide for more information.
Note: The hash values returned by the new jhash library are different from the ones returned
by the previous library.
New Features
• Link Bonding
– Support for 802.3ad link aggregation (mode 4) and transmit load balancing (mode
5) to the link bonding library.
– Support for registration of link status change callbacks with link bonding devices.
– Support for slave devices which do not support link status change interrupts in the
link bonding library, via a link status polling mechanism.
• Poll Mode Driver - 40 GbE Controllers (librte_pmd_i40e)
– Support for Flow Director
– Support for ethertype filter
– Support RSS in VF
– Support configuring redirection table with different size from 1GbE and 10 GbE
– 128/512 entries of 40GbE PF
– 64 entries of 40GbE VF
– Support configuring hash functions
– Support for VXLAN packet on Intel 40GbE Controllers
• Packet Distributor Sample Application
The following Linux distributions were successfully used to compile or run DPDK.
• FreeBSD 10
• Fedora release 20
• Ubuntu 14.04 LTS
• Wind River Linux 6
• Red Hat Enterprise Linux 6.5
• SUSE Enterprise Linux 11 SP3
These distributions may need additional packages that are not installed by default, or a specific
kernel. Refer to the Linux guide and FreeBSD guide for details.
This section describes known issues with the DPDK software that aren’t covered in the version
specific release notes sections.
Description: Unit tests will fail in the test_tlb_tx_burst() function with an error about
uneven distribution of packets.
Implication: The link_bonding_autotest unit test will fail.
Description: For igb devices rte_eth_flow_ctrl_set does not work as expected. Pause frames
are always forwarded on igb, regardless of the RFCE, MPMCF and DPF registers.
Implication: Pause frames will never be rejected by the host on 1G NICs and they will always
be forwarded.
Resolution/Workaround: There is no workaround available.
Affected Environment/Platform: All.
Driver/Module: Poll Mode Driver (PMD).
Description: In packets provided by the PMD, some flags are missing. The application does
not have access to information provided by the hardware (packet is broadcast, packet is
multicast, packet is IPv4 and so on).
Implication: The ol_flags field in the rte_mbuf structure is not correct and should not be
used.
Resolution/Workaround: The application has to parse the Ethernet header itself to get the
information, which is slower.
Affected Environment/Platform: All.
Driver/Module: Poll Mode Driver (PMD).
Resolution/Workaround: The application should not call these functions too often in the main
loop. An alternative is to use the TSC register through rte_rdtsc() which is faster, but
specific to an lcore and is a cycle reference, not a time reference.
Affected Environment/Platform: All.
Driver/Module: Environment Abstraction Layer (EAL).
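As a hedged illustration of the rte_rdtsc() alternative mentioned above, the helper below
measures one call of an application-supplied function; do_work is a placeholder name.

#include <stdint.h>
#include <rte_cycles.h>

/* Return the seconds spent in one call of do_work(), measured with the TSC.
 * The result is specific to the lcore the code runs on. */
static double
time_one_call(void (*do_work)(void))
{
    uint64_t start = rte_rdtsc();

    do_work();
    return (double)(rte_rdtsc() - start) / rte_get_tsc_hz();
}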
Description: HPET timers do not work on the Osage customer reference platform (which
includes an Intel® Xeon® processor 5500 series processor) using the released BIOS from
Intel.
Implication: On Osage boards, the implementation of the rte_delay_us() function must
be changed to not use the HPET timer.
Resolution/Workaround: This can be addressed by building the system with the
CONFIG_RTE_LIBEAL_USE_HPET=n configuration option or by using the --no-hpet
EAL option.
Affected Environment/Platform: The Osage customer reference platform. Other vendor plat-
forms with Intel® Xeon® processor 5500 series processors should work correctly, pro-
vided the BIOS supports HPET.
Driver/Module: lib/librte_eal/common/include/rte_cycles.h
Not all variants of supported NIC types have been used in testing
Description: The supported network interface cards can come in a number of variants with
different device ID’s. Not all of these variants have been tested with the DPDK.
The NIC device identifiers used during testing:
• Intel® Ethernet Controller XL710 for 40GbE QSFP+ [8086:1584]
• Intel® Ethernet Controller XL710 for 40GbE QSFP+ [8086:1583]
• Intel® Ethernet Controller X710 for 10GbE SFP+ [8086:1572]
• Intel® 82576 Gigabit Ethernet Controller [8086:10c9]
• Intel® 82576 Quad Copper Gigabit Ethernet Controller [8086:10e8]
• Intel® 82580 Dual Copper Gigabit Ethernet Controller [8086:150e]
• Intel® I350 Quad Copper Gigabit Ethernet Controller [8086:1521]
• Intel® 82599 Dual Fibre 10 Gigabit Ethernet Controller [8086:10fb]
• Intel® Ethernet Server Adapter X520-T2 [8086:151c]
• Intel® Ethernet Controller X540-T2 [8086:1528]
• Intel® 82574L Gigabit Network Connection [8086:10d3]
• Emulated Intel® 82540EM Gigabit Ethernet Controller [8086:100e]
• Emulated Intel® 82545EM Gigabit Ethernet Controller [8086:100f]
Description: The multi-process example application assumes that it is possible to map the
hugepage memory to the same virtual addresses in client and server applications. Oc-
casionally, very rarely with 64-bit, this does not occur and a client application will fail on
startup. The Linux “address-space layout randomization” security feature can sometimes
cause this to occur.
Implication: A multi-process client application fails to initialize.
Resolution/Workaround: See the “Multi-process Limitations” section in the DPDK Program-
mer’s Guide for more information.
Affected Environment/Platform: All.
Driver/Module: Multi-process example application
Packets are not sent by the 1 GbE/10 GbE SR-IOV driver when the source MAC
is not the MAC assigned to the VF NIC
Description: The 1 GbE/10 GbE SR-IOV driver can only send packets when the Ethernet
header’s source MAC address is the same as that of the VF NIC. The reason for this is
that the Linux ixgbe driver module in the host OS has its anti-spoofing feature enabled.
Implication: Packets sent using the 1 GbE/10 GbE SR-IOV driver must have the source MAC
address correctly set to that of the VF NIC. Packets with other source address values are
dropped by the NIC if the application attempts to transmit them.
Resolution/Workaround: Configure the Ethernet source address in each packet to match
that of the VF NIC.
Affected Environment/Platform: All.
Driver/Module: 1 GbE/10 GbE VF Poll Mode Driver (PMD).
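A minimal sketch of the workaround follows; it simply stamps the VF’s own MAC address
into the Ethernet source field before transmission, and is illustrative only.

#include <rte_ethdev.h>
#include <rte_ether.h>
#include <rte_mbuf.h>

/* Overwrite the Ethernet source MAC with the MAC assigned to the VF port so
 * the PF anti-spoofing check does not drop the packet. */
static void
set_src_mac_to_vf(uint8_t port_id, struct rte_mbuf *m)
{
    struct ether_hdr *eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
    struct ether_addr vf_mac;

    rte_eth_macaddr_get(port_id, &vf_mac);
    ether_addr_copy(&vf_mac, &eth->s_addr);
}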
Description: The SR-IOV drivers only supports the following rte_ethdev API functions:
• rte_eth_dev_configure()
• rte_eth_tx_queue_setup()
• rte_eth_rx_queue_setup()
• rte_eth_dev_info_get()
• rte_eth_dev_start()
• rte_eth_tx_burst()
• rte_eth_rx_burst()
• rte_eth_dev_stop()
• rte_eth_stats_get()
• rte_eth_stats_reset()
• rte_eth_link_get()
• rte_eth_link_get_no_wait()
Implication: Calling an unsupported function will result in an application error.
Resolution/Workaround: Do not use other rte_ethdev API functions in applications that use
the SR-IOV drivers.
Affected Environment/Platform: All.
Driver/Module: VF Poll Mode Driver (PMD).
PMD does not work with --no-huge EAL command line parameter
Description: Currently, the DPDK does not store any information about memory allocated
by malloc() (for example, NUMA node, physical address), hence PMD drivers do not
work when the --no-huge command line parameter is supplied to EAL.
Implication: Sending and receiving data with PMD will not work.
Resolution/Workaround: Use huge page memory or use VFIO to map devices.
Affected Environment/Platform: Systems running the DPDK on Linux
Driver/Module: Poll Mode Driver (PMD).
Description: Currently, configuration of the following items is not supported by the VF driver:
• IP/UDP/TCP checksum offload
• Jumbo Frame Receipt
• HW Strip CRC
Implication: Any configuration for these items in the VF register will be ignored. The behavior
is dependent on the current PF setting.
Resolution/Workaround: For the PF (Physical Function) status on which the VF driver de-
pends, there is an option item under PMD in the config file. For others, the VF will keep
the same behavior as PF setting.
Affected Environment/Platform: All.
Description: Kernel crash may occur when unbinding 1G ports from the igb_uio driver, on
2.6.3x kernels such as shipped with Fedora 14.
Implication: Kernel crash occurs.
Resolution/Workaround: Use newer kernels or do not unbind ports.
Affected Environment/Platform: 2.6.3x kernels such as shipped with Fedora 14
Driver/Module: IGB Poll Mode Driver (PMD).
Description: Twin Pond/Iron Pond NICs do not bring the physical link down when shutting
down the port.
Implication: The link is reported as up even after issuing shutdown command unless the
cable is physically disconnected.
Resolution/Workaround: None.
Affected Environment/Platform: Twin Pond and Iron Pond NICs
Driver/Module: Poll Mode Driver (PMD).
Description: Gigabit Ethernet devices from Intel include CRC bytes when calculating packet
reception statistics regardless of hardware CRC stripping state, while 10-Gigabit Ethernet
devices from Intel do so only when hardware CRC stripping is disabled.
Implication: There may be a discrepancy in how different NICs display packet reception statis-
tics.
Resolution/Workaround: None
Affected Environment/Platform: All.
Driver/Module: Poll Mode Driver (PMD).
Description: On DPDK application startup, errors may be reported when opening files as part
of the initialization process. This occurs if a large number of hugepages are used, for
example, 500 or more, due to the per-process limit on the number of open files.
Implication: The DPDK application may fail to run.
Resolution/Workaround: If using 2 MB hugepages, consider switching to a smaller number
of 1 GB pages. Alternatively, use the ulimit command to increase the number of files
which can be opened by a process.
Description: The Intel® Communications Chipset 89xx Series device does not fully support
NUMA on a 32-bit OS. Consequently, the sample application cannot work properly on
Shumway, since it requires NUMA on both nodes.
Implication: The sample application cannot work in 32-bit mode with emulated NUMA, on
multi-socket boards.
Resolution/Workaround: There is no workaround available.
Affected Environment/Platform: Shumway
Driver/Module: All.
Differences in how different Intel NICs handle maximum packet length for jumbo
frame
Description: 10 Gigabit Ethernet devices from Intel do not take VLAN tags into account when
calculating packet size while Gigabit Ethernet devices do so for jumbo frames.
Implication: When receiving packets with VLAN tags, the actual maximum size of useful pay-
load that Intel Gigabit Ethernet devices are able to receive is 4 bytes (or 8 bytes in the
case of packets with extended VLAN tags) less than that of Intel 10 Gigabit Ethernet
devices.
Resolution/Workaround: Increase the configured maximum packet size when using Intel Gi-
gabit Ethernet devices.
Affected Environment/Platform: All.
Driver/Module: Poll Mode Driver (PMD).
Binding PCI devices to igb_uio fails on Linux kernel 3.9 when more than one
device is used
Description: A known bug in the uio driver included in Linux kernel version 3.9 prevents more
than one PCI device from being bound to the igb_uio driver.
Implication: The Poll Mode Driver (PMD) will crash on initialization.
Resolution/Workaround: Use earlier or later kernel versions, or apply the following patch.
Affected Environment/Platform: Linux systems with kernel version 3.9
Driver/Module: igb_uio module
GCC might generate Intel® AVX instructions for processors without Intel® AVX
support
Description: When compiling DPDK (and any DPDK app), gcc may generate Intel® AVX
instructions, even when the processor does not support Intel® AVX.
Implication: Any DPDK app might crash while starting up.
Resolution/Workaround: Either compile using icc or set EXTRA_CFLAGS=’-O3’ prior to
compilation.
Affected Environment/Platform: Platforms whose processors do not support Intel® AVX.
Driver/Module: Environment Abstraction Layer (EAL).
Description: On the Intel® Ethernet Controller 82599EB, when an Ethertype filter (with priority
enable) was set, unmatched packets could also be received on the assigned queue, such as
ARP packets without 802.1q tags or with a user priority not equal to the set value. Launch
testpmd with RSS disabled and multiple queues, then add the ethertype filter as follows
and start forwarding:
add_ethertype_filter 0 ethertype 0x0806 priority enable 3 queue 2 index 1
When the tester sends ARP packets without an 802.1q tag and with a user priority other
than 3, all the ARP packets are still received on the assigned queue.
Implication: The user priority comparison in the Ethertype filter does not work properly. It is
a NIC issue due to the following: “In fact, ETQF.UP is not functional, and the information
will be added in errata of 82599 and X540.”
Resolution/Workaround: None
Affected Environment/Platform: All.
Driver/Module: Poll Mode Driver (PMD).
Description: On the Intel® 40G Ethernet Controller you cannot set the link to a specific speed.
Implication: The link speed cannot be changed forcibly, though it can be configured by the
application.
Resolution/Workaround: None
Affected Environment/Platform: All.
Driver/Module: Poll Mode Driver (PMD).
Devices bound to igb_uio with VT-d enabled do not work on Linux kernel 3.15-
3.17
result in IOMMU errors. Introduced in Linux kernel 3.15 commit, solved in Linux kernel
3.18 commit.
Implication: Devices will not be allowed to access memory, resulting in following kernel errors:
dmar: DRHD: handling fault status reg 2
dmar: DMAR:[DMA Read] Request device [02:00.0] fault addr a0c58000
DMAR:[fault reason 02] Present bit in context entry is clear
Resolution/Workaround: Use earlier or later kernel versions, or avoid driver binding on boot
by blacklisting the driver modules. For example, in the case of ixgbe, the kernel command
line option modprobe.blacklist=ixgbe can be passed, so the device does not need
to be unbound before binding it to igb_uio.
Affected Environment/Platform: Linux systems with kernel versions 3.15 to 3.17.
Driver/Module: igb_uio module.
VM power manager may not work on systems with more than 64 cores
Description: When using VM power manager on a system with more than 64 cores, VM(s)
should not use cores 64 or higher.
Implication: VM power manager should not be used with VM(s) that are using cores 64 or
above.
Resolution/Workaround: Do not use cores 64 or above.
Affected Environment/Platform: Platforms with more than 64 cores.
Driver/Module: VM power manager application.
DPDK may not build on some Intel CPUs using clang < 3.7.0
Description: When compiling DPDK with a clang version earlier than 3.7.0, CPU flags
are not detected on some Intel platforms such as Intel Broadwell/Skylake (and possibly
future CPUs), and therefore compilation fails due to missing intrinsics.
Implication: DPDK will not build when using a clang version < 3.7.0.
Resolution/Workaround: Use clang 3.7.0 or higher, or gcc.
Affected Environment/Platform: Platforms with Intel Broadwell/Skylake using an old clang
version.
Driver/Module: Environment Abstraction Layer (EAL).
Description: The last EAL argument is replaced by program name in argv[] after
eal_parse_args is called. This is the intended behavior but it causes the pointer to
the last EAL argument to be lost.
Implication: If the last EAL argument in argv[] is generated by a malloc function, changing
it will cause memory issues when freeing the argument.
Description: Promiscuous mode is not supported by the DPDK i40e VF driver when using the
i40e Linux kernel driver as host driver.
Implication: The i40e VF does not receive packets when the destination MAC address is
unknown.
Resolution/Workaround: Use an explicit destination MAC address that matches the VF.
Affected Environment/Platform: All.
Driver/Module: Poll Mode Driver (PMD).
Description: The uio_pci_generic module is not supported by the XL710, since the XL710
errata state that the Interrupt Status bit is not implemented (item #71 of the XL710
controller spec). The same hardware limitation applies to the other X710/XXV710 NICs.
Implication: When using --bind=uio_pci_generic, the uio_pci_generic module
probes the device and checks the Interrupt Status bit. Since this bit is not supported
by the X710/XL710/XXV710, the probe returns a failure. The fact that these products do
not support INTx masking is noted in the related Linux kernel commit.
Resolution/Workaround: Do not bind the uio_pci_generic module in
X710/XL710/XXV710 NICs.
Affected Environment/Platform: All.
Driver/Module: Poll Mode Driver (PMD).
Description: The standard TX function of virtio driver does not manage shared packets prop-
erly when doing TSO. These packets should be read-only but the driver modifies them.
When doing TSO, the virtio standard expects that the L4 checksum is set to the pseudo
header checksum in the packet data, which is different than the DPDK API. The driver
patches the L4 checksum to conform to the virtio standard, but this solution is invalid
when dealing with shared packets (clones), because the packet data should not be mod-
ified.
Implication: In this situation, the shared data will be modified by the driver, potentially causing
race conditions with the other users of the mbuf data.
Resolution/Workaround: The workaround in the application is to ensure that the network
headers in the packet data are not shared.
See the guidelines document for details of the ABI policy. API and ABI deprecation notices are
to be posted here.
Deprecation Notices
• ring: Changes are planned to rte_ring APIs in release 17.05. Proposed changes include:
– Removing build time options for the ring: CONFIG_RTE_RING_SPLIT_PROD_CONS
and CONFIG_RTE_RING_PAUSE_REP_COUNT
– Adding an additional parameter to enqueue functions to return the amount of free
space in the ring
– Adding an additional parameter to dequeue functions to return the number of re-
maining elements in the ring
– Removing direct support for watermarks in the rings, since the additional return
value from the enqueue function makes it unneeded
– Adjusting the return values of the bulk() enq/deq functions to make them consistent
with the burst() equivalents. [Note: the parameters to these functions are changing
too, per the points above, so the compiler will flag them as needing updates in legacy
code.]
– Updates to some library functions e.g. rte_ring_get_memsize() to allow for variably-
sized ring elements.
• igb_uio: iomem mapping and sysfs files created for iomem and ioport in igb_uio will be
removed, because we are able to detect these from what Linux has exposed, like the way
we have done with uio-pci-generic. This change targets release 17.05.
• vfio: Some functions are planned to be exported outside librte_eal in 17.05. VFIO APIs
like vfio_setup_device and vfio_get_group_fd can be used by subsystems other
than EAL/PCI. For that, they need to be exported symbols. Such APIs are planned to be
renamed according to the rte_* naming convention and exported from librte_eal.
• The PCI and VDEV subsystems will be converted as drivers of the new bus model. It will
imply some EAL API changes in 17.05.
• eth_driver is planned to be removed in 17.05. This currently serves as a placeholder
for PMDs to register themselves. Changes for rte_bus will provide a way to handle
device initialization currently being done in eth_driver. Similarly, rte_pci_driver
is planned to be removed from rte_cryptodev_driver in 17.05.
• ethdev: An API change is planned for 17.05 for the function
_rte_eth_dev_callback_process. In 17.05 the function will return an int
instead of void and a fourth parameter void *ret_param will be added.
• ethdev: for 17.05 it is planned to deprecate the following nine rte_eth_dev_* functions
and move them into the ixgbe PMD:
rte_eth_dev_bypass_init, rte_eth_dev_bypass_state_set,
rte_eth_dev_bypass_state_show, rte_eth_dev_bypass_event_store,
rte_eth_dev_bypass_event_show, rte_eth_dev_wd_timeout_store,
rte_eth_dev_bypass_wd_timeout_show, rte_eth_dev_bypass_ver_show,
rte_eth_dev_bypass_wd_reset.
The following fields will be removed from struct eth_dev_ops:
bypass_init_t, bypass_state_set_t, bypass_state_show_t,
bypass_event_set_t, bypass_event_show_t, bypass_wd_timeout_set_t,
bypass_wd_timeout_show_t, bypass_ver_show_t, bypass_wd_reset_t.
The functions will be renamed to the following, and moved to the ixgbe PMD:
rte_pmd_ixgbe_bypass_init, rte_pmd_ixgbe_bypass_state_set,
rte_pmd_ixgbe_bypass_state_show, rte_pmd_ixgbe_bypass_event_set,
rte_pmd_ixgbe_bypass_event_show, rte_pmd_ixgbe_bypass_wd_timeout_set,
rte_pmd_ixgbe_bypass_wd_timeout_show, rte_pmd_ixgbe_bypass_ver_show,
rte_pmd_ixgbe_bypass_wd_reset.
• ABI changes are planned for 17.05 in the rte_mbuf structure: some fields may be
reordered to facilitate the writing of data_off, refcnt, and nb_segs in one operation,
because some platforms have an overhead if the store address is not naturally aligned.
Other mbuf fields, such as the port field, may be moved or removed as part of this mbuf
work. A timestamp will also be added.
• The mbuf flags PKT_RX_VLAN_PKT and PKT_RX_QINQ_PKT are deprecated and are
respectively replaced by PKT_RX_VLAN_STRIPPED and PKT_RX_QINQ_STRIPPED,
that are better described. The old flags and their behavior will be kept until 17.02 and will
be removed in 17.05.
• mempool: The functions rte_mempool_count and rte_mempool_free_count will
be removed in 17.05. They are replaced by rte_mempool_avail_count and
rte_mempool_in_use_count respectively.
• mempool: The functions for single/multi producer/consumer are deprecated and will be
removed in 17.05. They are replaced by the rte_mempool_generic_get/put functions.
• ethdev: the legacy filter API, including rte_eth_dev_filter_supported(),
rte_eth_dev_filter_ctrl() as well as filter types MACVLAN, ETHERTYPE,
FLEXIBLE, SYN, NTUPLE, TUNNEL, FDIR, HASH and L2_TUNNEL, is superseded by
the generic flow API (rte_flow) in PMDs that implement the latter. Target release for
removal of the legacy API will be defined once most PMDs have switched to rte_flow.
• vhost: API/ABI changes are planned for 17.05, for making DPDK vhost library generic
enough so that applications can build different vhost-user drivers (instead of vhost-
user net only) on top of that. Specifically, virtio_net_device_ops will be re-
named to vhost_device_ops. Correspondingly, some APIs’ parameters need to be
changed. A few more functions also need to be reworked to make them device aware.
For example, different virtio devices have different feature sets, meaning functions like
rte_vhost_feature_disable need to be changed. Finally, the file rte_virtio_net.h will be
renamed to rte_vhost.h.
• kni: The “KNI working as a kernel vhost backend” feature (KNI_VHOST) will be removed
in the 17.05 release. The vhost library is currently the preferred method for guest-host
communication. To clarify, this does not remove the KNI or vhost features, only KNI_VHOST,
which is a KNI feature enabled via a compile-time option and disabled by default.
FAQ
This document contains some Frequently Asked Questions that arise when working with
DPDK.
This is most likely due to the test application not being run with sudo to promote the user to a
superuser. Alternatively, applications can also be run as a regular user. For more information,
please refer to the DPDK Getting Started Guide.
The number of pages allocated can be seen by executing the following command:
grep Huge /proc/meminfo
Once all the pages are mmapped by an application, they stay that way. If you start a test
application with less than the maximum, then you have free pages. When you stop and restart
the test application, it looks to see if the pages are available in the /dev/huge directory and
mmaps them. If you look in the directory, you will see n 2M page files. If you
specified 1024, you will see 1024 page files. These are then placed in memory segments to
get contiguous memory.
If you need to change the number of pages, it is easier to first remove the pages. The
usertools/dpdk-setup.sh script provides an option to do this. See the “Quick Start Setup Script”
section in the DPDK Getting Started Guide for more information.
I have set up a total of 1024 Hugepages (that is, allocated 512 2M pages to each NUMA node).
The -m command line parameter does not guarantee that huge pages will be reserved on
specific sockets. Therefore, allocated huge pages may not be on socket 0. To request memory
to be reserved on a specific socket, please use the --socket-mem command-line parameter
instead of -m.
32-bit applications have limitations in terms of how much virtual memory is available, hence
the number of hugepages they are able to allocate is also limited (1 GB per page size). If your
system has a lot (>1 GB per page size) of hugepage memory, not all of it will be allocated.
Due to hugepages typically being allocated on a local NUMA node, the hugepages allocation
the application gets during the initialization depends on which NUMA node it is running on (the
EAL does not affinitize cores until much later in the initialization process). Sometimes, the
Linux OS runs the DPDK application on a core that is located on a different NUMA node from
DPDK master core and therefore all the hugepages are allocated on the wrong socket.
To avoid this scenario, either lower the amount of hugepage memory available to 1 GB per
page size (or less), or run the application with taskset affinitizing the application to a would-be
master core.
For example, if your EAL coremask is 0xff0, the master core will usually be the first core in the
coremask (0x10); this is what you have to supply to taskset:
taskset 0x10 ./l2fwd -c 0xff0 -n 2
In this way, the hugepages have a greater chance of being allocated to the correct socket.
Additionally, a --socket-mem option could be used to ensure the availability of memory for
each socket, so that if hugepages were allocated on the wrong socket, the application simply
will not start.
It is also possible to change the default level at compile time with CONFIG_RTE_LOG_LEVEL.
Traditionally, there is a trade-off between throughput and latency. An application can be tuned
to achieve a high throughput, but the end-to-end latency of an average packet typically in-
creases as a result. Similarly, the application can be tuned to have, on average, a low end-to-
end latency at the cost of lower throughput.
To achieve higher throughput, the DPDK attempts to aggregate the cost of processing each
packet individually by processing packets in bursts. Using the testpmd application as an exam-
ple, the “burst” size can be set on the command line to a value of 16 (also the default value).
This allows the application to request 16 packets at a time from the PMD. The testpmd appli-
cation then immediately attempts to transmit all the packets that were received, in this case, all
16 packets. The packets are not transmitted until the tail pointer is updated on the correspond-
ing TX queue of the network port. This behavior is desirable when tuning for high throughput
because the cost of tail pointer updates to both the RX and TX queues can be spread across
16 packets, effectively hiding the relatively slow MMIO cost of writing to the PCIe* device.
However, this is not very desirable when tuning for low latency, because the first packet that
was received must also wait for the other 15 packets to be received. It cannot be transmitted
until the other 15 packets have also been processed because the NIC will not know to transmit
the packets until the TX tail pointer has been updated, which is not done until all 16 packets
have been processed for transmission.
To consistently achieve low latency even under heavy system load, the application developer
should avoid processing packets in bunches. The testpmd application can be configured from
the command line to use a burst value of 1. This allows a single packet to be processed at a
time, providing lower latency, but with the added cost of lower throughput.
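As a hedged sketch of the trade-off described above, the forwarding loop below uses a
burst size of 1 for low latency; a throughput-oriented application would typically use 16 or
32 instead. Port and queue numbers are example values.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST 1   /* use e.g. 16 or 32 when tuning for throughput instead */

static void
forward_loop(uint8_t rx_port, uint8_t tx_port)
{
    struct rte_mbuf *pkts[BURST];
    uint16_t nb_rx, nb_tx;

    for (;;) {
        nb_rx = rte_eth_rx_burst(rx_port, 0, pkts, BURST);
        if (nb_rx == 0)
            continue;
        /* Transmit immediately: the TX tail pointer is updated per burst,
         * so smaller bursts put each packet on the wire sooner. */
        nb_tx = rte_eth_tx_burst(tx_port, 0, pkts, nb_rx);
        while (nb_tx < nb_rx)
            rte_pktmbuf_free(pkts[nb_tx++]);  /* drop anything not sent */
    }
}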
I have dual Intel® Xeon® E5645 processors at 2.40 GHz with four Intel® 82599 10 Gigabit
Ethernet NICs, using eight logical cores on each processor, with RSS set to distribute network
load from two 10 GbE interfaces to the cores on each processor.
Without NUMA enabled, memory is allocated from both sockets, since memory is interleaved.
Therefore, each 64B chunk is interleaved across both memory domains.
The first 64B chunk is mapped to node 0, the second 64B chunk is mapped to node 1, the third
to node 0, the fourth to node 1. If you allocated 256B, you would get memory that looks like
this:
256B buffer
Offset 0x00 - Node 0
Offset 0x40 - Node 1
Offset 0x80 - Node 0
Offset 0xc0 - Node 1
Therefore, packet buffers and descriptor rings are allocated from both memory domains, thus
incurring QPI bandwidth accessing the other memory and much higher latency. For best per-
formance with NUMA disabled, only one socket should be populated.
As the DPDK operates, it opens a lot of files, which can result in reaching the open file limit,
which is set using the ulimit command or in the limits.conf file. This is especially true when
using a large number (>512) of 2 MB huge pages. Please increase the open file limit if your
application is not able to open files. This can be done either by issuing a ulimit command or
editing the limits.conf file. Please consult Linux manpages for usage information.
Some versions of Linux IXGBE driver do not assign a random MAC address to VF devices at
initialization. In this case, this has to be done manually on the VM host, using the following
command:
ip link set <interface> vf <VF function> mac <MAC address>
where <interface> is the interface providing the virtual functions (for example, eth0), <VF
function> is the virtual function number (for example, 0), and <MAC address> is the desired
MAC address.
Currently the table implementation is not thread safe, and it assumes that locking between
threads and processes is handled by the user’s application. This is likely to be supported in
future releases.
DPDK uses a 1:1 mapping and does not support IOMMU. IOMMU allows for simpler VM phys-
ical address translation. The second role of IOMMU is to allow protection from unwanted
memory access by an unsafe device that has DMA privileges. Unfortunately, the protection
comes with an extremely high performance cost for high speed NICs.
Setting iommu=pt disables IOMMU support for the hypervisor.
Check on register LLE(PFVMTXSSW[n]), which allows an individual pool to send traffic and
have it looped back to itself.
The DPDK’s lcore threads are Linux pthreads bound onto specific cores. Configure the DPDK
to do work on the same cores and run the application’s other work on other cores using the
DPDK’s “coremask” setting to specify which cores it should launch itself on.
Yes - DPDK processes are regular Linux/BSD processes, and can use all OS provided IPC
mechanisms.
I350 has RSS support and 8 queue pairs can be used in RSS mode. It should work with
multi-queue DPDK applications using RSS.
See the Primary and Secondary examples in the multi-process sample application.