CPlusPlus¶
The CPlusPlus backend runs arbitrary C++ code that adheres to the expected interface, targeting any hardware accelerator.
Model support¶
Since this backend runs C++ code, anything you can correctly compile to a shared library should work with it. The backend supports models with multiple input and output tensors.
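The interface that a model library must implement is defined by the backend's headers. As a purely hypothetical sketch of the general shape of such a library (these entry-point names are illustrative assumptions, not the backend's actual API):
// Hypothetical sketch only: these entry points are illustrative assumptions,
// not the backend's actual interface; consult its headers for the real one.
#include <cstddef>

extern "C" {

// an assumed inference hook: read one input tensor, write one output tensor
void model_run(const float* input, float* output, std::size_t size) {
  for (std::size_t i = 0; i < size; ++i) {
    output[i] = input[i] * 2.0F;  // toy computation: double each element
  }
}

}  // extern "C"
Compiling a file like this with g++ -shared -fPIC produces a shared library that the backend could load at run time.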
Hardware support¶
Depending on the contents of the shared library, the backend can target CPUs, GPUs, FPGAs, or other hardware accelerators. If the code you are running requires additional software, you will need to install its compile-time and run-time dependencies.
Host setup¶
Any setup needed for the host will depend on which models you want to deploy and what hardware/software they use. Follow the directions for the original tools you are using.
Build an image¶
To build an image with the CPlusPlus backend enabled, you don’t need to add any special flags. It is always enabled.
# create the Dockerfile
python3 docker/generate.py
# build the development image $(whoami)/amdinfer-dev:latest
./amdinfer dockerize
# build the development image $(whoami)/amdinfer-dev-basic:latest
./amdinfer dockerize --suffix="-basic"
# build the deployment image $(whoami)/amdinfer-basic:latest
./amdinfer dockerize --suffix="-basic" --production
Start a container¶
Depending on your use case and how you are using the server, you can start a container for this backend in multiple ways.
Deployment¶
You can start a deployment container with something like:
$ docker run [--device ...] [--volume ...]
Depending on the requirements of the models you want to deploy, you may need to pass hardware devices to the container with --device flags or mount volumes with --volume flags.
Development¶
A development container can be started with:
$ ./amdinfer run --dev
This automatically publishes ports and mounts some convenient directories, such as your SSH directory, and drops you into a terminal in the container.
Get test assets¶
Assets and models used with this backend are always downloaded, so running the following command with any combination of flags will get them:
$ ./amdinfer get
Loading the backend¶
There are multiple ways to load this backend to make it available for inference requests from clients.
If you are using a client's workerLoad() method:
// amdinfer::Client* client;
// amdinfer::ParameterMap parameters;
std::string endpoint = client->workerLoad("cplusplus", parameters);
# client = amdinfer.Client()
# parameters = amdinfer.ParameterMap()
endpoint = client.workerLoad("cplusplus", parameters)
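The returned endpoint can then be used to make inference requests. As a rough sketch using the C++ client API (the exact request-building signatures may vary between versions):
// Sketch only: the exact request-building API may differ between versions.
// amdinfer::Client* client;  (as above)
// std::string endpoint;      (returned by workerLoad above)
std::vector<uint32_t> data{1, 2, 3, 4};

amdinfer::InferenceRequest request;
// one input tensor of shape {4}; multiple inputs can be added the same way
request.addInputTensor(data.data(), {data.size()}, amdinfer::DataType::Uint32);

auto response = client->modelInfer(endpoint, request);
assert(!response.isError());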
With a client's modelLoad() method or the repository approach, you need to create a model repository and put a model in it. To use this backend with your model, use amdinfer_cpp as the platform for your model. Then, after setting up the path to the model repository, you can load the model from the server.
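For example, a repository containing one model for this backend might be laid out as follows, where the directory and file names are placeholders (a sketch; consult the model repository documentation for the exact layout and configuration format):
model_repository/
└── mymodel/
    ├── config.toml
    └── 1/
        └── model.so
Here, the configuration file would set amdinfer_cpp as the model's platform.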
The server may be configured to automatically load all models from the model repository, or you can load a model manually using modelLoad(). In this case, the endpoint is defined in the model's configuration file in the repository and is used as the argument to modelLoad().
// amdinfer::Client* client;
// amdinfer::ParameterMap parameters;
client->modelLoad(<model>, parameters);
# client = amdinfer.Client()
# parameters = amdinfer.ParameterMap()
client.modelLoad(<model>, parameters)
Parameters¶
You can provide the following backend-specific parameters at load-time:
Parameter | Type | Usage
---|---|---
batch_size | integer | Requested batch size for incoming batches
model | string | Full path to the shared library to load. Alternatively, you can pass a string not ending with .so and the backend will attempt to find a matching library by name
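For example, assuming the parameter names in the table above, you could set both parameters before loading the worker; a C++ sketch (the Python ParameterMap is used the same way):
// Sketch: pass backend-specific parameters at load time.
// amdinfer::Client* client;
amdinfer::ParameterMap parameters;
parameters.put("model", "/path/to/libmodel.so");  // shared library to load
parameters.put("batch_size", 4);                  // requested batch size
std::string endpoint = client->workerLoad("cplusplus", parameters);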
Troubleshooting¶
If you run into problems, first check the general troubleshooting guide. Then continue on to this CPlusPlus-specific troubleshooting guide. You will need access to the machine where the inference server is running to debug.
TODO