AMD Inference Server

Welcome to the official documentation for the AMD Inference Server: an open-source tool to deploy your machine learning models and make them accessible to clients for inference.

If you are new to the project, start with the Introduction and the rest of the Getting Started section for a high-level look at the features, how the documentation is organized, and important concepts.

Use the sidebar to navigate through the different pages in the documentation. Note that the documentation is versioned: by default, it shows the latest documentation, corresponding to the current state of the code. To see other versions, use the “Read the Docs” panel in the bottom-left corner or edit the version in the URL directly.