Dataflow compiler for QNN inference on FPGAs

This project is maintained by Xilinx

FINN tutorial at FPGA'21

27 Jan 2021 - Yaman Umuroglu

This event has now concluded. You can find the materials at the bottom of this page.

We’re delighted to announce a two-hour FINN tutorial as part of the FPGA’21 conference. Details are as follows:

Mixing machine learning into high-throughput, low-latency edge applications requires co-designed solutions to meet the performance requirements. Quantized Neural Networks (QNNs) combined with custom FPGA dataflow implementations offer a good balance of performance and flexibility, but building such implementations by hand is difficult and time-consuming.

In this tutorial, we will introduce FINN, an open-source experimental framework by Xilinx Research Labs to help the broader community explore QNN inference on FPGAs. Providing a full-stack solution from quantization-aware training to bitfile, FINN generates high-performance dataflow-style FPGA architectures customized for each network. Participants will be introduced to efficient inference with QNNs and streaming dataflow architectures, will learn about the components of the project’s open-source ecosystem, and will gain hands-on experience training a quantized neural network with Brevitas and deploying it with FINN.
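As background for why low-bit QNNs map so well to FPGA dataflow implementations, here is a minimal sketch of symmetric uniform weight quantization, the basic idea behind quantization-aware training. This is plain NumPy for illustration only, not Brevitas's or FINN's actual API:

```python
import numpy as np

def quantize_uniform(w, bits=4):
    """Symmetric uniform quantization (illustrative sketch).

    Maps float weights to signed integers in [-(2^(bits-1)-1), 2^(bits-1)-1]
    plus a single float scale factor; w is approximated by q * scale.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

w = np.array([0.7, -0.35, 0.1, -0.7])
q, scale = quantize_uniform(w, bits=4)
print(q, scale)          # low-bit integer codes and their shared scale
print(q * scale)         # dequantized approximation of w
```

At 4 bits, each weight needs only a handful of LUTs to multiply, which is what lets FINN unroll entire layers into parallel streaming hardware.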

Practical Information

Some prior knowledge of FPGAs, Vivado HLS, PyTorch, and neural network training is recommended, but not required.

This will be a virtual event, with a Zoom video call and a hands-on Jupyter notebook lab. Registered participants will get access to a FINN setup running in the cloud. There are no special technical requirements besides a browser and Zoom client.

Connect with us and the other participants on the tutorial Gitter channel, or join the FINN Gitter channel.