What is FINN?
FINN is an experimental framework from Xilinx Research Labs to explore deep neural network inference on FPGAs. It specifically targets quantized neural networks, with an emphasis on generating dataflow-style architectures customized for each network. It is not intended to be a generic DNN accelerator like xDNN, but rather a tool for exploring the design space of DNN inference accelerators on FPGAs.
- Templated Vitis HLS library of streaming components: FINN comes with an HLS hardware library that implements convolutional, fully-connected, pooling and LSTM layer types as streaming components. The library uses C++ templates to support a wide range of precisions.
- Ultra low-latency and high performance with dataflow: By composing streaming components for each layer, FINN can generate accelerators that classify images at sub-microsecond latency.
- Many end-to-end example designs: We provide examples that start from training a quantized neural network and go all the way down to an accelerated design running on hardware. The examples span a range of datasets and network topologies.
- Toolflow for rapid design generation: The FINN toolflow supports allocating separate compute resources per layer, either automatically or manually, and generating the full design for synthesis. This enables rapid exploration of the design space.
Who are we?
The FINN team consists of members of AMD Research under Ivo Bolsens (CTO) and members of CommsDC Solutions Engineering under Allen Chen (AECG-CommsDCSolnEng). We work closely with the Pynq team, as well as with Kristof Denolf and Jack Lo, on integration with video processing.
From top left to bottom right: Yaman Umuroglu, Michaela Blott, Alessandro Pappalardo, Lucian Petrica, Nicholas Fraser,
Thomas Preusser, Jakoba Petri-Koenig, Ken O’Brien
From top left to bottom right: Eamonn Dunbar, Kasper Feurer, Aziz Bahri, Fionn O’Donohoe, Mirza Mrahorovic