Dataflow compiler for QNN inference on FPGAs
This project is maintained by Xilinx
05 Nov 2021 - Yaman Umuroglu
Important: The default branch has been renamed from master to main. You may want to make a fresh clone of the FINN compiler repository if you have a local copy.
We are delighted to announce the release of FINN v0.7! Below is a summary of the highlights from this release.
FINN is taking another step towards increasing the flexibility of the framework by supporting a new input format for neural networks, called QONNX. QONNX makes FINN much more flexible in representing weight and activation quantization, especially for higher precisions and fixed-point datatypes. For instance, it opens the door to future support for higher-precision quantized weights and activations while avoiding streamlining difficulties and expensive MultiThreshold-based activations. QONNX is being developed in close collaboration with hls4ml and will bring the two projects closer together. Our long-time collaborator and current intern Hendrik Borras wrote a blog post about QONNX with many more details, including an interactive Netron visualization of an example network in the new format.
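To make the idea concrete, the (de)quantization that a QONNX-style quantization node expresses can be sketched in a few lines of plain Python. The helper below is illustrative only and is not part of the QONNX or FINN APIs; the function name, argument names, and defaults are assumptions for the sake of the example:

```python
# Illustrative sketch of the fake-quantization a QONNX-style Quant node
# represents: round onto an integer grid of a given bitwidth, clip to the
# representable range, then scale back to real values.
# NOT the QONNX API; names and defaults here are assumptions.

def fake_quant(x, scale, zero_point, bitwidth, signed=True):
    """Round x onto a `bitwidth`-bit integer grid and dequantize again."""
    if signed:
        q_min, q_max = -(2 ** (bitwidth - 1)), 2 ** (bitwidth - 1) - 1
    else:
        q_min, q_max = 0, 2 ** bitwidth - 1
    # quantize: scale, shift, round, clip to the representable range
    q = round(x / scale) + zero_point
    q = max(q_min, min(q_max, q))
    # dequantize back to real values
    return (q - zero_point) * scale

print(fake_quant(0.3, 0.25, 0, 4))   # 0.25: nearest point on the INT4 grid
print(fake_quant(10.0, 0.25, 0, 4))  # 1.75: clipped to the INT4 max (7 * 0.25)
```

Carrying the scale and bitwidth explicitly like this is what lets a front end represent arbitrary precisions instead of a fixed menu of datatypes.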
Our examples repository finn-examples has been growing with new contributions, showcasing what our community is doing with FINN and dataflow NN accelerators on FPGAs. In particular, we have three new demos:
As always, you can get these accelerators alongside their Jupyter notebooks on your PYNQ board or Alveo U250 with pip3 install finn-examples.
As the FINN community grows, it becomes increasingly important to future-proof the various pieces of infrastructure that make the FINN compiler work. Here are a few of the infrastructure improvements that went into this release:
DataType system: Previously, FINN's DataType support was a not-quite-exhaustive enumeration of possible values, which limited what the compiler could express. We now have a new system in place that supports arbitrary-precision integers as well as fixed-point types, allowing the expression of things like DataType["UINT71"] and DataType["FIXED<9,3>"]. Compiler flows that actually take advantage of these types end-to-end will be coming in the near future.

Nowadays we're getting lots of support requests, and though our Gitter channel is alive and well, we wanted to make it easier to organize discussions, find answers, and react to posts. To that end, the primary support channel for FINN is now GitHub Discussions. The Frequently Asked Questions and Getting Started sections in the documentation have also seen major updates.
The release (tag v0.7) is now available on GitHub. We're continuously working to improve FINN in terms of layer, network, and infrastructure support. If you'd like to help out, please check out the contribution guidelines and share your ideas on the FINN GitHub Discussions!