Introduction

The AMD Inference Server is an open-source tool for deploying machine learning models and making them accessible to clients for inference. Out of the box, the server supports selected models that run on AMD CPUs, GPUs or FPGAs by leveraging existing libraries. For all of these models and hardware accelerators, the server presents a common user interface based on community standards, so clients can make requests to any of them using the same API. The server provides HTTP/REST and gRPC interfaces for clients to submit requests; for both, C++ and Python bindings are available to simplify writing client programs. You can also use the server backend directly through the native C++ API to write local applications.

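As a concrete illustration of that common interface, the sketch below sends a KServe-v2-style inference request over HTTP/REST using the Python requests library. It is a minimal sketch, not a verified deployment: the server address, model name, input tensor name, shape and data are placeholders that depend on how you deploy the server and which model you load.

    # Minimal KServe-v2-style REST request using the Python "requests" library.
    # The server address, model name, input name, shape, and data below are
    # placeholders; substitute the values for your own deployment and model.
    import requests

    SERVER = "http://127.0.0.1:8998"  # placeholder address of a running server
    MODEL = "mymodel"                 # placeholder model/endpoint name

    # Readiness endpoints defined by the KServe v2 specification
    assert requests.get(f"{SERVER}/v2/health/ready").ok
    assert requests.get(f"{SERVER}/v2/models/{MODEL}/ready").ok

    # Build a v2 inference request with one flattened FP32 input tensor
    payload = {
        "inputs": [
            {
                "name": "input0",
                "shape": [1, 4],
                "datatype": "FP32",
                "data": [0.1, 0.2, 0.3, 0.4],
            }
        ]
    }

    response = requests.post(f"{SERVER}/v2/models/{MODEL}/infer", json=payload)
    response.raise_for_status()
    print(response.json()["outputs"])  # output tensors, as defined by the v2 spec

The same request can be made over gRPC or, as shown later, through the client bindings.
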
Features

  • Supports client requests over HTTP/REST, gRPC and WebSocket protocols with an API based on KServe’s v2 specification

  • Custom applications can bypass these protocols and call the backend directly using the native C++ API

  • C++ library with Python bindings to simplify making requests to the server (a Python sketch follows this list)

  • Incoming requests are transparently batched according to user-defined specifications

  • Users can define how many models, and how many instances of each, to run in parallel

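To avoid constructing payloads by hand, you can use the Python bindings mentioned above. The following is a minimal sketch only: the module, class, and method names (amdinfer, HttpClient, InferenceRequest, modelInfer) are assumptions made for illustration and may not match the installed bindings, so see the Libraries and API section for the actual interface.

    # Hypothetical client sketch using the Python bindings; the module, class,
    # and method names below are assumptions for illustration, not a verified API.
    import amdinfer  # assumed package name for the Python bindings

    # Connect to a running server over HTTP/REST (address is a placeholder)
    client = amdinfer.HttpClient("http://127.0.0.1:8998")

    # Build an inference request with a single FP32 input tensor (names assumed)
    request = amdinfer.InferenceRequest()
    input_tensor = amdinfer.InferenceRequestInput()
    input_tensor.name = "input0"
    input_tensor.datatype = amdinfer.DataType.FP32
    input_tensor.shape = [1, 4]
    input_tensor.setFp32Data([0.1, 0.2, 0.3, 0.4])
    request.addInputTensor(input_tensor)

    # Send the request to a loaded model endpoint and inspect the outputs
    response = client.modelInfer("mymodel", request)
    for output in response.getOutputs():
        print(output.name, output.shape)
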
The AMD Inference Server integrates with the following libraries out of the box:

  • TensorFlow and PyTorch models with ZenDNN on AMD CPUs

  • ONNX models with MIGraphX on AMD GPUs

  • XModel models with Vitis AI on AMD (Xilinx) FPGAs

  • A computation graph, including pre- and post-processing, for end-to-end inference with AKS on AMD (Xilinx) FPGAs

Documentation overview

The remainder of this documentation is organized as follows:

  • The About section covers the project’s dependencies, licenses, roadmap and changelog

  • The Quickstart section presents quick-start guides for different types of users

  • The Libraries and API section goes over the different libraries, APIs and tools available to users

  • The Examples section provides commentary on the examples in the repository

  • The Using the Server section discusses how to use the server in more depth

  • The Developers section has useful information for contributors and more detail about the internal operations of the server

Support

This documentation is your best source of support. You can also raise issues on GitHub if you run into a bug or have a question.

The AMD Inference Server is open-source and welcomes contributions. If you are interested in adding a feature, raise an issue first so your proposal can be discussed. Follow the Contributing guidelines when opening a pull request.