GPUs - MIGraphX
Using the AMD Inference Server with MIGraphX and GPUs requires some additional setup prior to use.
Set up the host and GPUs
Before installing the Inference Server, ensure your system recognizes your GPU(s). Follow the ROCm installation instructions for version 5.4.1 or newer, then confirm that your GPU(s) are visible before proceeding to the next step.
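As a quick sanity check (a suggestion beyond the official instructions), ROCm's rocminfo utility should list your GPU agents once the installation succeeds. The path below assumes a default ROCm install location:

# list GPU agents to confirm ROCm sees the hardware
$ /opt/rocm/bin/rocminfo | grep -i gfx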
Build an image
To build an image with MIGraphX enabled, add the --migraphx flag to the amdinfer dockerize command:
# create the Dockerfile
python3 docker/generate.py
# build the dev image $(whoami)/amdinfer-dev-migraphx:latest
./amdinfer dockerize --migraphx --suffix="-migraphx"
# build the deployment image $(whoami)/amdinfer-migraphx:latest
./amdinfer dockerize --migraphx --suffix="-migraphx" --production
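After the builds finish, the new images should show up in your local Docker image list. This check is just a convenience, assuming the default image names produced by the commands above:

# confirm the dev and deployment images exist locally
$ docker image ls "$(whoami)/amdinfer*"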
Start an image
The development container can be started with:
$ ./amdinfer run --dev
This automatically adds the detected devices, publishes ports, and mounts some convenient directories, such as your SSH directory, before dropping you into a terminal in the container.
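For reference, the following is a rough, hypothetical approximation of what the wrapper does; the exact flags, mounts, and published ports are managed by the script, and the image name assumes the development build above:

# illustrative approximation only: pass the GPUs, mount the SSH directory,
# and open an interactive shell; the script adds further mounts and ports
$ docker run -it --device /dev/kfd --device /dev/dri \
    --volume "$HOME/.ssh:$HOME/.ssh" \
    "$(whoami)/amdinfer-dev-migraphx:latest"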
You can start the deployment container on Docker with something like:
$ docker run --device /dev/kfd --device /dev/dri [--volume ...]
These --device flags pass the GPU to the container, and you can mount other directories as needed to make models available.
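Putting these pieces together, a complete invocation might look like the sketch below. The image name matches the deployment build above; the model path and the port mapping (assuming the server's default HTTP port of 8998) are placeholders to adapt:

# hypothetical full command: pass the GPUs, mount a local model directory,
# and expose the server's HTTP port
$ docker run --device /dev/kfd --device /dev/dri \
    --volume /path/to/models:/mnt/models \
    --publish 8998:8998 \
    "$(whoami)/amdinfer-migraphx:latest"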
Get assets and models
You can download the assets and models used for tests and examples with:
$ ./amdinfer get --migraphx
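With a server running (for example, the deployment container started above), you can verify that it responds. This sketch assumes the default HTTP port of 8998 and the KServe-style v2 REST endpoints the server exposes:

# hypothetical readiness check against a running server
$ curl http://localhost:8998/v2/health/ready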