/makefile/Run_RandomAddressRAMA.perf` file with the following data.
```
Addr Pattern   Total Size(MB)     Transaction Size(B)   Throughput Achieved(GB/s)
Random         256 (M0->PC0)      64                    4.75415
Random         256 (M0->PC0)      128                   9.59875
Random         256 (M0->PC0)      256                   12.6208
Random         256 (M0->PC0)      512                   13.1328
Random         256 (M0->PC0)      1024                  13.1261
Random         512 (M0->PC0_1)    64                    6.39976
Random         512 (M0->PC0_1)    128                   9.59946
Random         512 (M0->PC0_1)    256                   12.799
Random         512 (M0->PC0_1)    512                   13.9621
Random         512 (M0->PC0_1)    1024                  14.1694
Random         1024 (M0->PC0_3)   64                    6.39984
Random         1024 (M0->PC0_3)   128                   9.5997
Random         1024 (M0->PC0_3)   256                   12.7994
Random         1024 (M0->PC0_3)   512                   13.7546
Random         1024 (M0->PC0_3)   1024                  14.0694
```
The first five rows show the point-to-point accesses, i.e. accesses confined to a 256 MB range (a single pseudo-channel), with varying transaction sizes. The achieved bandwidth is very similar to the previous step without the RAMA IP.
The next ten rows, with accesses spanning 512 MB and 1024 MB respectively, show a significant increase in achieved bandwidth compared to the previous step, in which the configuration did not use the RAMA IP.
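If you want to quantify the improvement rather than compare the two result files by eye, a small script can parse them. The following is a minimal sketch, assuming the whitespace-separated `.perf` layout shown above; the non-RAMA file name (`Run_RandomAddress.perf`) and the relative paths are assumptions for illustration only, not part of the tutorial sources.
```
# Minimal sketch: compare the RAMA run against the earlier non-RAMA run.
# File names/paths below are assumptions for illustration.

def load_perf(path):
    """Return {(total_size_mb, txn_size_b): throughput_gbps} from a .perf file."""
    results = {}
    with open(path) as f:
        next(f)                          # skip the header line
        for line in f:
            tok = line.split()
            if len(tok) != 5:            # ignore blank or malformed lines
                continue
            # e.g. "Random 512 (M0->PC0_1) 256 12.799"
            total_mb, txn_b, gbps = int(tok[1]), int(tok[3]), float(tok[4])
            results[(total_mb, txn_b)] = gbps
    return results

rama = load_perf("makefile/Run_RandomAddressRAMA.perf")   # this step
base = load_perf("makefile/Run_RandomAddress.perf")       # previous step (name assumed)

for key in sorted(rama):
    if key in base:
        total_mb, txn_b = key
        gain = rama[key] / base[key]
        print(f"{total_mb:5d} MB, {txn_b:5d} B: "
              f"{rama[key]:8.4f} GB/s ({gain:.2f}x vs. no RAMA)")
```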
#### Conclusion: The RAMA IP significantly improves memory access efficiency in cases where the required memory access exceeds 256 MB (one HBM pseudo-channel)
### Summary
Congratulations! You have completed the tutorial.
In this tutorial, you learned that it is relatively easy to migrate a DDR-based application to an HBM-based application using the v++ flow (illustrated in the configuration sketch below). You also experimented with how the throughput of the HBM-based application varies with the address pattern and the total amount of memory accessed by the kernel.
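For reference, the DDR-to-HBM migration mentioned above largely comes down to the memory mapping in the v++ linker configuration. The snippet below is a hedged sketch, not the tutorial's actual configuration: the kernel instance and argument names are placeholders.
```
# Hypothetical v++ linker configuration (passed via "v++ --config <file>.cfg").
# Kernel instance and argument names are placeholders, not this tutorial's design.
[connectivity]
# DDR-based mapping:
#   sp=krnl_vadd_1.in1:DDR[0]
# Equivalent HBM-based mapping, spreading the buffer across pseudo-channels 0-3:
sp=krnl_vadd_1.in1:HBM[0:3]
```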
Return to Start of Tutorial
Copyright © 2020-2021 Xilinx