Frequently Asked Questions

Is VVAS open source?

Yes. VVAS is released as open source; see the next question for the specific licenses.
What type of licensing options are available for VVAS source code (package)?

The VVAS release is covered by the following licenses:

  • Apache License, Version 2.0

  • 3-Clause BSD License

  • The MIT License

What platforms and OS are compatible with VVAS?

VVAS is tested on PetaLinux for embedded platforms and Ubuntu 20.04 for PCIe-based platforms. For more information about supported platforms, refer to Platforms And Applications.

Which AI models are supported with VVAS?

The following models are supported for this release:

  • resnet50

  • resnet18

  • mobilenet_v2

  • inception_v1

  • ssd_adas_pruned_0_95

  • ssd_traffic_pruned_0_9

  • ssd_mobilenet_v2

  • ssd_pedestrian_pruned_0_97

  • plate detection

  • yolov3_voc_tf

  • yolov3_adas_pruned_0_9

  • refinedet_pruned_0_96

  • yolov2_voc

  • yolov2_voc_pruned_0_77

  • densebox_320_320

  • densebox_640_360

  • Semantic Segmentation

  • TBD

How do I enable models that are not officially supported?

If the model is not in a DPU-deployable format, it must first be converted to one. For details, refer to the Vitis AI 2.5 documentation.

What is the version of Vitis AI tool used for VVAS?

This VVAS release supports Vitis AI 2.5.

Is VVAS compatible with lower versions of Vitis AI tools, such as VAI 1.3?

No. This release depends on Vitis AI 2.5.

How can I change the model in the pipeline?

The name of the model to be used for inference must be provided in the JSON file for the vvas_xdpuinfer acceleration library. For more details, see DPU Infer.
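For illustration, a minimal vvas_xdpuinfer JSON configuration might look like the sketch below. The exact keys, valid values, and paths depend on your platform and model and are described in DPU Infer; the specific field names and paths shown here are illustrative assumptions:

```json
{
  "kernels": [
    {
      "library-name": "libvvas_xdpuinfer.so",
      "config": {
        "model-name": "resnet50",
        "model-class": "CLASSIFICATION",
        "model-path": "/usr/share/vitis_ai_library/models/",
        "need_preprocess": true
      }
    }
  ]
}
```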

Can the model be changed dynamically?

Model parameters cannot be modified while a pipeline is running. To change them, stop the pipeline, update the JSON file, and then restart the pipeline.
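The stop/update/restart flow can be scripted. The sketch below is a generic illustration of rewriting the model name in a vvas_xdpuinfer-style JSON file between runs; the key names ("model-name", "model-class") and file name are assumptions, not a documented API:

```python
import json
import os
import tempfile

# Hypothetical vvas_xdpuinfer config; key names are illustrative assumptions.
config = {"model-name": "resnet50", "model-class": "CLASSIFICATION"}


def switch_model(config_path, new_model):
    """Rewrite the model name in the JSON config.

    The pipeline must be stopped before calling this, and
    restarted afterwards for the change to take effect.
    """
    with open(config_path) as f:
        cfg = json.load(f)
    cfg["model-name"] = new_model
    with open(config_path, "w") as f:
        json.dump(cfg, f, indent=2)
    return cfg


# Write an example config to a temporary location, then switch models.
path = os.path.join(tempfile.mkdtemp(), "dpuinfer.json")
with open(path, "w") as f:
    json.dump(config, f)

updated = switch_model(path, "resnet18")
print(updated["model-name"])
```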

What types of input streams are supported?
  • H.264, H.265 encoded video streams

  • Raw video frames in NV12, BGR/RGB formats
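As an illustration, an encoded H.264 file could be decoded with a gst-launch pipeline like the sketch below. The decoder element name varies by platform (omxh264dec is typical for embedded platforms); treat the exact elements and file name as assumptions and consult your platform documentation:

```shell
# Hypothetical sketch: parse and decode an H.264 elementary stream.
gst-launch-1.0 filesrc location=input.h264 ! \
    h264parse ! omxh264dec ! fakesink
```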

Is receiving RTSP stream supported?

Receiving RTSP streams is supported through an open-source GStreamer plugin.
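For example, GStreamer's rtspsrc element can feed a VVAS pipeline. The sketch below assumes an H.264 RTSP source; the decoder element is platform dependent and the URL is a placeholder:

```shell
# Hypothetical sketch: receive an H.264 RTSP stream and decode it.
gst-launch-1.0 rtspsrc location=rtsp://<ip>:<port>/stream ! \
    rtph264depay ! h264parse ! omxh264dec ! fakesink
```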

Is multi-stream processing supported (such as multiple decodes and detections)?

Yes. VVAS supports simultaneous execution of multiple plugin instances to realize multi-stream decode and ML operations.

How do I develop kernel libraries?

Refer to the Acceleration Software Development Guide.

Do I need FPGA design experience to develop video analytics applications with VVAS?

No. The VVAS SDK ships with most of the building blocks needed for video analytics applications. These building blocks are highly optimized and ready to use. Several example designs for video analytics applications are available with this release; you can use them directly or modify them to suit your needs. Refer to Platforms And Applications.

Is ROI-based encoding supported?

Yes. The ROI plug-in generates the ROI data required by encoders.

Can I generate multiple output resolutions for a single input frame?

Yes. The vvas_xabrscaler plug-in controls the multi-scaler kernel to generate up to eight different resolutions from one input frame. In addition to resizing, this plug-in can also perform color space conversion.
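A multi-output pipeline might be sketched as follows. The request-pad naming (sc.src_0, sc.src_1) and omitted kernel properties are assumptions; see the vvas_xabrscaler documentation for the exact usage on your platform:

```shell
# Hypothetical sketch: one 1080p input scaled to two resolutions,
# with a color space conversion on the second branch.
gst-launch-1.0 videotestsrc num-buffers=100 ! \
    "video/x-raw, width=1920, height=1080" ! \
    vvas_xabrscaler name=sc \
    sc.src_0 ! "video/x-raw, width=1280, height=720" ! fakesink \
    sc.src_1 ! "video/x-raw, width=640, height=360, format=BGR" ! fakesink
```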

Is audio analytics supported?

No. VVAS currently focuses on video analytics.
Are there sample accelerated applications developed using VVAS?

Yes. There are sample accelerated platforms and applications provided that you can execute by following a few steps. Start at Platforms And Applications.

Is there support for multi-stage (cascading) network?

You can connect multiple instances of vvas_xinfer one after another to implement a multi-stage cascading network. Inference data generated by the current ML stage is appended to the inference data generated by the previous ML stages.
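A two-stage cascade (for example, detection followed by classification) might be sketched as below. The infer-config property name, JSON file names, and decoder element are assumptions; refer to the vvas_xinfer documentation for the exact properties:

```shell
# Hypothetical sketch: detection results from the first vvas_xinfer
# stage are carried downstream and appended to by the second stage.
gst-launch-1.0 filesrc location=input.h264 ! h264parse ! omxh264dec ! \
    vvas_xinfer infer-config=detection.json ! \
    vvas_xinfer infer-config=classification.json ! \
    fakesink
```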

How do I debug a VVAS application if there are issues?

VVAS is based on the GStreamer framework and relies on the debugging tools that GStreamer provides. For more details, refer to GStreamer Debugging Tools.
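For example, GStreamer's GST_DEBUG environment variable controls per-category log levels. The VVAS-specific category name used below is an assumption; list the available categories with `gst-launch-1.0 --gst-debug-help`:

```shell
# Raise the global log level to WARNING (3) and one plugin's
# category to DEBUG (5). "vvas_xinfer" as a category name is an
# assumption; substitute a real category from --gst-debug-help.
GST_DEBUG=3,vvas_xinfer:5 gst-launch-1.0 videotestsrc num-buffers=10 ! fakesink
```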

How do I check the throughput of VVAS application/pipeline?

Use GStreamer’s native FPS display mechanism, for example the fpsdisplaysink element.
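As a minimal example using stock GStreamer elements (no VVAS-specific elements involved), fpsdisplaysink prints the measured frame rate while fakesink avoids needing a display:

```shell
# Measure throughput of a test source; -v prints the FPS messages.
gst-launch-1.0 -v videotestsrc num-buffers=300 ! \
    fpsdisplaysink video-sink=fakesink text-overlay=false
```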

How do I compile and prune the model to be used?

Refer to Vitis AI 2.5 documentation.

How do I build plugins?

For Embedded platforms, refer to Building VVAS Plugins and Libraries.

What if I cannot find the information that I am looking for?

Contact support.