Roadmap
The AMD Inference Server is in active development. This is a tentative, non-exhaustive roadmap of features we would like to add. It is subject to change based on our own assessment and on feedback from the community, both of which may affect which features take priority over others. More detailed information about ongoing and completed work can be found in the change log and the GitHub projects.
2022 Q1
gRPC support
2022 Q2
ZenDNN CPU support
Official integration with KServe
Future
GPU support
Compatibility with AMD APIs
Expanded testing with models from the Vitis AI Model Zoo
Benchmarking with MLPerf