User Guide¶
This section describes the details of controlling and configuring the QDMA IP.
System Level Configurations¶
The QDMA driver provides a sysfs interface that enables the user to perform system-level configurations. The QDMA PF and VF drivers expose several sysfs nodes under the pci device root node. sysfs provides an interface to configure the module.
[xilinx@]# lspci | grep -i Xilinx
01:00.0 Memory controller: Xilinx Corporation Device 903f
01:00.1 Memory controller: Xilinx Corporation Device 913f
01:00.2 Memory controller: Xilinx Corporation Device 923f
01:00.3 Memory controller: Xilinx Corporation Device 933f
Based on the above lspci output, traverse to /sys/bus/pci/devices/<device node>/qdma to find the list of configurable parameters specific to the PF or VF driver.
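As a convenience, the BDF printed by lspci can be turned into the corresponding sysfs path with a small helper. This is a minimal sketch; the `bdf_to_qdma_path` function is hypothetical, and the `0000` PCI domain prefix is an assumption that matches the examples in this guide.

```shell
# bdf_to_qdma_path: map a PCI BDF as printed by lspci (e.g. 01:00.0)
# to the qdma sysfs directory for that function.
# Assumes PCI domain 0000, as in the examples above.
bdf_to_qdma_path() {
    printf '/sys/bus/pci/devices/0000:%s/qdma\n' "$1"
}

bdf_to_qdma_path 01:00.0
```

The resulting directory contains the configurable entries (such as qmax) described in the following subsections.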
1. Instantiate the Virtual Functions¶
/sys/bus/pci/devices/<device node> provides two configurable entries:

sriov_totalvfs: Indicates the maximum number of VFs supported for the PF. This is a read-only entry whose value was configured during bitstream generation.

sriov_numvfs: Enables the user to specify the number of VFs required for a PF.
Display the currently supported max VFs:
[xilinx@]# cat /sys/bus/pci/devices/0000:01:00.0/sriov_totalvfs
Instantiate the required number of VFs for a PF:
[xilinx@]# echo 3 > /sys/bus/pci/devices/0000:01:00.0/sriov_numvfs
Once the VFs are instantiated, the required number of queues can be allocated to a VF using the qmax sysfs entry available in the VF at
/sys/bus/pci/devices/<VF function number>/qdma/qmax
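Since sriov_numvfs must not exceed sriov_totalvfs, a request can be sanity-checked before it is written. The sketch below keeps the check as a pure function so it is easy to test; the `check_numvfs` helper is hypothetical, and in practice its first argument would come from `cat /sys/bus/pci/devices/<device node>/sriov_totalvfs`.

```shell
# check_numvfs TOTAL REQUESTED: succeed only if the requested VF
# count does not exceed the maximum reported by sriov_totalvfs.
check_numvfs() {
    local total="$1" want="$2"
    if [ "$want" -le "$total" ]; then
        echo "ok"
    else
        echo "too many VFs requested ($want > $total)"
        return 1
    fi
}

# Example: 4 VFs supported, 3 requested.
check_numvfs 4 3
```

If the check passes, the requested count can be echoed into sriov_numvfs as shown in the transcript above.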
2. Allocate the Queues to a function¶
By default, all functions have 0 queues assigned. The qmax configuration parameter enables the user to update the number of queues for a PF. This configuration parameter indicates the maximum number of queues associated with the current PF.
If the queue allocation needs to be different for any PF, access the qmax sysfs entry and set the required number.
Display the current value:
[xilinx@]# cat /sys/bus/pci/devices/0000:01:00.0/qdma/qmax
0
To set 1024 as qmax for PF0:
[xilinx@]# echo 1024 > /sys/bus/pci/devices/0000:01:00.0/qdma/qmax
[xilinx@]# dma-ctl dev list
qdma01000 0000:01:00.0 max QP: 1024, 0~1023
qdma01001 0000:01:00.1 max QP: 0, -~-
qdma01002 0000:01:00.2 max QP: 0, -~-
qdma01003 0000:01:00.3 max QP: 0, -~-
To set 1770 as qmax for PF0 and 8 as qmax for PF1, PF2, and PF3:
[xilinx@]# echo 1770 > /sys/bus/pci/devices/0000\:01\:00.0/qdma/qmax
[xilinx@]# echo 8 > /sys/bus/pci/devices/0000\:01\:00.1/qdma/qmax
[xilinx@]# echo 8 > /sys/bus/pci/devices/0000\:01\:00.2/qdma/qmax
[xilinx@]# echo 8 > /sys/bus/pci/devices/0000\:01\:00.3/qdma/qmax
[xilinx@]# dma-ctl dev list
qdma01000 0000:01:00.0 max QP: 1770, 0~1769
qdma01001 0000:01:00.1 max QP: 8, 1770~1777
qdma01002 0000:01:00.2 max QP: 8, 1778~1785
qdma01003 0000:01:00.3 max QP: 8, 1786~1793
The qmax configuration parameter is available for virtual functions as well.
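When splitting queues across functions as in the four-PF example above, it can help to total a proposed qmax distribution before writing the individual entries, since the per-function allocations share one device-wide queue pool. This is a minimal sketch; the `total_qmax` helper is hypothetical, and the overall queue budget depends on your design's configuration.

```shell
# total_qmax Q0 Q1 ...: sum a proposed per-function qmax split,
# e.g. the 1770 + 8 + 8 + 8 example shown above.
total_qmax() {
    local sum=0 q
    for q in "$@"; do
        sum=$(( sum + q ))
    done
    echo "$sum"
}

total_qmax 1770 8 8 8
```

The dma-ctl dev list output above confirms the same split: queue IDs 0~1769 for PF0, then three consecutive ranges of 8 for PF1 through PF3.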
3. Reserve the Queues for VFs¶
Use the qmax
sysfs entry to allocate queues to VFs similar to PFs.
Display the current value:
[xilinx@]# cat /sys/bus/pci/devices/0000:81:00.4/qdma/qmax
0
To set 1024 as qmax for VF0 of PF0:
[xilinx@]# echo 1024 > /sys/bus/pci/devices/0000:81:00.4/qdma/qmax
4. Set Interrupt Ring Size¶
Interrupt ring size is associated with indirect interrupt mode.
When the mode is set to indirect interrupt mode, the interrupt aggregation ring size defaults to index value 0. Each interrupt aggregation ring entry is 64 bytes. Index 0 refers to a 4 KB ring, i.e. 4KB/64 = 512 entries.
The user can configure the interrupt ring entries in multiples of 512; hence, set intr_rngsz to the desired multiplication factor.
Display the current value:
[xilinx@]# cat /sys/bus/pci/devices/0000:81:00.0/qdma/intr_rngsz
0
To set value 2 to intr_rngsz:
[xilinx@]# echo 2 > /sys/bus/pci/devices/0000:81:00.0/qdma/intr_rngsz
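The arithmetic above can be checked directly: with 64-byte entries, a 4 KB ring holds 4096 / 64 = 512 entries. The `ring_entries` helper below is hypothetical and assumes each index step adds another 512 entries (index 0 -> 512), consistent with "multiples of 512" as described; consult the IP documentation for the exact index-to-size mapping in your configuration.

```shell
# Each aggregation ring entry is 64 bytes, so a 4 KB ring (index 0)
# holds 4096 / 64 = 512 entries.
echo $(( 4096 / 64 ))

# Hypothetical mapping, assuming each index step adds 512 entries:
# entries = (index + 1) * 512.
ring_entries() {
    echo $(( ($1 + 1) * 512 ))
}

ring_entries 0
ring_entries 2
```

Under this assumption, writing 2 to intr_rngsz as in the transcript above would select a ring of 1536 entries.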
Queue Management¶
The QDMA driver comes with a command-line configuration utility called dma-ctl to manage the queues in the system.