Fast, Scalable Quantized Neural Network Inference on FPGAs

This project is maintained by Xilinx

FINN


FINN is an experimental framework from Xilinx Research Labs to explore deep neural network inference on FPGAs. It specifically targets quantized neural networks, with emphasis on generating dataflow-style architectures customized for each network. It is not intended to be a generic DNN accelerator like xDNN, but rather a tool for exploring the design space of DNN inference accelerators on FPGAs.
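To make "quantized neural network" concrete, here is a minimal, illustrative sketch (not FINN code) of uniform low-bit weight quantization, the kind of reduced-precision arithmetic that lets FINN build cheap per-layer dataflow hardware; the function name and parameters are hypothetical:

```python
import numpy as np

def quantize_uniform(w, n_bits=2):
    """Uniformly quantize weights to n_bits signed levels.

    Illustrative only: FINN targets networks trained with such
    low-precision weights, which maps multiplications onto very
    small (or trivial) hardware operators.
    """
    levels = 2 ** (n_bits - 1)          # grid points on each side of zero
    scale = np.max(np.abs(w)) / levels  # map the weight range onto the grid
    q = np.clip(np.round(w / scale), -levels, levels - 1)
    return q * scale                    # dequantized values on the grid

w = np.array([0.31, -0.72, 0.05, 0.9])
print(quantize_uniform(w, n_bits=2))
```

With 2-bit weights, each value snaps to one of four grid points, so a multiply-accumulate in hardware needs only a few bits per operand instead of a full floating-point unit.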

A new, more modular version of FINN is currently under development on GitHub, and we welcome contributions from the community!

Quickstart

Depending on what you would like to do, we have different suggestions for where to get started: