Using Multiple DDR Banks
By default, in the Vitis™ core development kit, the data transfer between the kernel and the DDR is achieved using a single DDR bank. In some applications, data movement is a performance bottleneck. In cases where the kernels need to move large amounts of data between the global memory (DDR) and the FPGA, you can use multiple DDR banks. This enables the kernels to access multiple memory banks simultaneously. As a result, the application performance increases.
The system port mapping option, specified with the --sp switch, allows the designer to map kernel ports to specific global memory banks, such as DDR or PLRAM. This tutorial shows you how to map kernel ports to multiple DDR banks.
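For reference, the general shape of an --sp option on a v++ link command line is shown below; the instance name and bank shown in the example line are illustrative, not part of this tutorial's build.

```
--sp <kernel_instance_name>.<argument>:<bank_name>
# for example: --sp vadd_1.in1:DDR[0]
```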
This tutorial uses a simple example of vector addition. It shows the vadd kernel reading data from in1 and in2, and producing the result, out.
In this tutorial, you implement the vector addition application using three DDR banks.
Because the default behavior of the Vitis core development kit is to use a single DDR bank for data exchange between kernels and global memory, all data access through the in1, in2, and out ports will be done through the default DDR bank for the platform.
Assume that, in the application, you want to access:
in1 through DDR bank 0
in2 through DDR bank 1
out through DDR bank 2
To achieve the desired mapping, instruct the Vitis core development kit to connect each kernel argument to the desired bank.
The example in this tutorial uses a C++ kernel; however, the steps described are the same for RTL and OpenCL™ API kernels.
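As a point of reference, a minimal sketch of what such a kernel could look like is shown below. This is a hypothetical reconstruction, not the tutorial's actual src/vadd.cpp: the bundle names (gmem0, gmem1, gmem2) are assumptions chosen to match the AXI interface names seen later in the emulation report. The key idea is that each pointer argument gets its own AXI master bundle, so each can later be mapped to a different DDR bank at link time.

```cpp
// Hypothetical sketch of a vadd kernel with one AXI master bundle per
// pointer argument. The HLS pragmas are honored by the Vitis HLS compiler
// and ignored by a plain host compiler.
extern "C" void vadd(const int* in1, const int* in2, int* out, int size) {
#pragma HLS INTERFACE m_axi port = in1 bundle = gmem0
#pragma HLS INTERFACE m_axi port = in2 bundle = gmem1
#pragma HLS INTERFACE m_axi port = out bundle = gmem2
    // Element-wise vector addition.
    for (int i = 0; i < size; ++i) {
        out[i] = in1[i] + in2[i];
    }
}
```

Because each argument is in a separate bundle, the linker is free to attach each bundle to its own memory bank; arguments sharing one bundle would have to share a bank.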
Before You Begin
The labs in this tutorial use:
BASH Linux shell commands.
The 2020.2 Vitis core development kit release and the xilinx_u200_xdma_201830_2 platform. If necessary, the tutorial can be easily extended to other versions and platforms.
Before running any of the examples, make sure you have installed the Vitis core development kit as described in Installation in the Application Acceleration Development flow of the Vitis Unified Software Platform Documentation (UG1416).
If you run applications on Xilinx® Alveo™ Data Center accelerator cards, ensure the card and software drivers have been correctly installed by following the instructions on the Alveo Portfolio page.
Accessing the Tutorial Reference Files
To access the reference files, type the following into a terminal:
git clone https://github.com/Xilinx/Vitis-Tutorials
Navigate to the Runtime_and_System_Optimization/Feature_Tutorials/01-mult-ddr-banks directory, and then access the reference-files directory.
To set up the Vitis core development kit, run the following commands.
# Set up the Xilinx Vitis tools; XILINX_VITIS and XILINX_VIVADO will be set in this step.
source <VITIS install path>/settings64.sh
# For example: source /opt/Xilinx/Vitis/2020.2/settings64.sh

# Set up the runtime; XILINX_XRT will be set in this step.
source /opt/xilinx/xrt/setup.sh
Execute the makefile to build the design for HW-Emulation.
cd reference-files
make all
Makefile Options Descriptions
MODE := hw_emu: Set the build configuration mode to HW Emulation
PLATFORM := xilinx_u200_xdma_201830_2: Select the target platform
KERNEL_SRC := src/vadd.cpp: List the kernel source files
HOST_SRC := src/host.cpp: List the host source files
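To illustrate how these variables typically feed the build, here is a hedged sketch of a Makefile driving the v++ compile and link steps; the tutorial's actual Makefile may differ in targets, flags, and output names.

```makefile
# Hypothetical sketch of a Vitis build Makefile (not the tutorial's actual file).
MODE       := hw_emu
PLATFORM   := xilinx_u200_xdma_201830_2
KERNEL_SRC := src/vadd.cpp
HOST_SRC   := src/host.cpp

all: vadd.xclbin host

# Compile the kernel source into a Xilinx object (.xo) file.
vadd.xo: $(KERNEL_SRC)
	v++ -c -t $(MODE) --platform $(PLATFORM) -k vadd -o $@ $<

# Link the .xo into a device binary (.xclbin).
vadd.xclbin: vadd.xo
	v++ -l -t $(MODE) --platform $(PLATFORM) -o $@ $<

# Build the host executable against OpenCL.
host: $(HOST_SRC)
	g++ -o $@ $< -lOpenCL -lpthread
```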
As previously mentioned, the default implementation of the design uses a single DDR bank. Observe the messages in the Console view during the link step; you should see messages similar to the following.
ip_name: vadd
Creating apsys_0.xml
INFO: [CFGEN 83-2226] Inferring mapping for argument vadd_1.in1 to DDR
INFO: [CFGEN 83-2226] Inferring mapping for argument vadd_1.in2 to DDR
INFO: [CFGEN 83-2226] Inferring mapping for argument vadd_1.out to DDR
This confirms that, in the absence of explicit --sp options, the mapping is automatically inferred by the Vitis core development kit for each of the kernel arguments.
Run HW-Emulation by executing the makefile with the appropriate run target.
After the simulation is complete, the following memory connections for the kernel data transfer are reported.
TEST PASSED
INFO: [Vitis-EM 22] [Wall clock time: 22:51, Emulation time: 0.0569014 ms] Data transfer between kernel(s) and global memory(s)
vadd_1:m_axi_gmem0-DDR RD = 0.391 KB WR = 0.000 KB
vadd_1:m_axi_gmem1-DDR RD = 0.391 KB WR = 0.000 KB
vadd_1:m_axi_gmem2-DDR RD = 0.000 KB WR = 0.391 KB
Now, you will explore how the data transfers can be split across the following:
DDR Bank 0
DDR Bank 1
DDR Bank 2
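One hedged sketch of how this three-bank split could be expressed at link time is given below; it assumes the kernel instance is named vadd_1, as in the emulation log above, and the exact command line in the tutorial's Makefile may differ.

```
v++ -l -t hw_emu --platform xilinx_u200_xdma_201830_2 \
    --sp vadd_1.in1:DDR[0] \
    --sp vadd_1.in2:DDR[1] \
    --sp vadd_1.out:DDR[2] \
    -o vadd.xclbin vadd.xo
```

With these options, each kernel argument is connected to its own DDR bank, so all three data transfers can proceed in parallel instead of contending for a single bank.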