AMD Inference Server

Welcome to the official documentation for the AMD Inference Server: an open-source tool to deploy your machine learning models and make them accessible to clients for inference.

If you are new to the project, start with the Introduction to get an overview of what it’s about and how this documentation is organized.

Use the sidebar to navigate through the different pages in the documentation. Note that the documentation is versioned: by default, it shows the latest documentation corresponding to the current state of the code. To see other versions, use the "Read the Docs" panel on the bottom-left, visit this landing page, or edit the version in the URL directly.